commit 17a4d48033813f2ba893d6918fa2931afcd9af02 Author: Greg Kroah-Hartman Date: Sat May 20 14:38:27 2017 +0200 Linux 4.10.17 commit 291e716bb382710d0685d6d1c8c1ac02551e70f8 Author: Kees Cook Date: Mon Mar 6 12:42:12 2017 -0800 pstore: Shut down worker when unregistering commit 6330d5534786d5315d56d558aa6d20740f97d80a upstream. When built as a module and running with update_ms >= 0, pstore will Oops during module unload since the work timer is still running. This makes sure the worker is stopped before unloading. Signed-off-by: Kees Cook Signed-off-by: Greg Kroah-Hartman commit e5590e3d92754b85629d2a06d70bb5b09c2165c9 Author: Ankit Kumar Date: Thu Apr 27 17:03:13 2017 +0530 pstore: Fix flags to enable dumps on powerpc commit 041939c1ec54208b42f5cd819209173d52a29d34 upstream. After commit c950fd6f201a kernel registers pstore write based on flag set. Pstore write for powerpc is broken as flags(PSTORE_FLAGS_DMESG) is not set for powerpc architecture. On panic, kernel doesn't write message to /fs/pstore/dmesg*(Entry doesn't gets created at all). This patch enables pstore write for powerpc architecture by setting PSTORE_FLAGS_DMESG flag. Fixes: c950fd6f201a ("pstore: Split pstore fragile flags") Signed-off-by: Ankit Kumar Signed-off-by: Kees Cook Signed-off-by: Greg Kroah-Hartman commit e3d4daa7f6d47aeb7fd3ea95abc2937092d301eb Author: Dan Williams Date: Thu May 4 19:54:42 2017 -0700 libnvdimm, pfn: fix 'npfns' vs section alignment commit d5483feda85a8f39ee2e940e279547c686aac30c upstream. Fix failures to create namespaces due to the vmem_altmap not advertising enough free space to store the memmap. WARNING: CPU: 15 PID: 8022 at arch/x86/mm/init_64.c:656 arch_add_memory+0xde/0xf0 [..] Call Trace: dump_stack+0x63/0x83 __warn+0xcb/0xf0 warn_slowpath_null+0x1d/0x20 arch_add_memory+0xde/0xf0 devm_memremap_pages+0x244/0x440 pmem_attach_disk+0x37e/0x490 [nd_pmem] nd_pmem_probe+0x7e/0xa0 [nd_pmem] nvdimm_bus_probe+0x71/0x120 [libnvdimm] driver_probe_device+0x2bb/0x460 bind_store+0x114/0x160 drv_attr_store+0x25/0x30 In commit 658922e57b84 "libnvdimm, pfn: fix memmap reservation sizing" we arranged for the capacity to be allocated, but failed to also update the 'npfns' parameter. This leads to cases where there is enough capacity reserved to hold all the allocated sections, but vmemmap_populate_hugepages() still encounters -ENOMEM from altmap_alloc_block_buf(). This fix is a stop-gap until we can teach the core memory hotplug implementation to permit sub-section hotplug. Fixes: 658922e57b84 ("libnvdimm, pfn: fix memmap reservation sizing") Reported-by: Anisha Allada Signed-off-by: Dan Williams Signed-off-by: Greg Kroah-Hartman commit 116ada1a98ab84bf5d1405cc69f83357fa99b36a Author: Dan Williams Date: Fri Apr 28 22:05:14 2017 -0700 libnvdimm: fix nvdimm_bus_lock() vs device_lock() ordering commit 452bae0aede774f87bf56c28b6dd50b72c78986c upstream. A debug patch to turn the standard device_lock() into something that lockdep can analyze yielded the following: ====================================================== [ INFO: possible circular locking dependency detected ] 4.11.0-rc4+ #106 Tainted: G O ------------------------------------------------------- lt-libndctl/1898 is trying to acquire lock: (&dev->nvdimm_mutex/3){+.+.+.}, at: [] nd_attach_ndns+0x178/0x1b0 [libnvdimm] but task is already holding lock: (&nvdimm_bus->reconfig_mutex){+.+.+.}, at: [] nvdimm_bus_lock+0x21/0x30 [libnvdimm] which lock already depends on the new lock. 
the existing dependency chain (in reverse order) is: -> #1 (&nvdimm_bus->reconfig_mutex){+.+.+.}: lock_acquire+0xf6/0x1f0 __mutex_lock+0x88/0x980 mutex_lock_nested+0x1b/0x20 nvdimm_bus_lock+0x21/0x30 [libnvdimm] nvdimm_namespace_capacity+0x1b/0x40 [libnvdimm] nvdimm_namespace_common_probe+0x230/0x510 [libnvdimm] nd_pmem_probe+0x14/0x180 [nd_pmem] nvdimm_bus_probe+0xa9/0x260 [libnvdimm] -> #0 (&dev->nvdimm_mutex/3){+.+.+.}: __lock_acquire+0x1107/0x1280 lock_acquire+0xf6/0x1f0 __mutex_lock+0x88/0x980 mutex_lock_nested+0x1b/0x20 nd_attach_ndns+0x178/0x1b0 [libnvdimm] nd_namespace_store+0x308/0x3c0 [libnvdimm] namespace_store+0x87/0x220 [libnvdimm] In this case '&dev->nvdimm_mutex/3' mirrors '&dev->mutex'. Fix this by replacing the use of device_lock() with nvdimm_bus_lock() to protect nd_{attach,detach}_ndns() operations. Fixes: 8c2f7e8658df ("libnvdimm: infrastructure for btt devices") Reported-by: Yi Zhang Signed-off-by: Dan Williams Signed-off-by: Greg Kroah-Hartman commit f92a2fe7cdb163e482a7d69826c03e621ac6b11e Author: Toshi Kani Date: Tue Apr 25 17:04:13 2017 -0600 libnvdimm, pmem: fix a NULL pointer BUG in nd_pmem_notify commit b2518c78ce76896f0f8f7940bf02104b227e1709 upstream. The following BUG was observed when nd_pmem_notify() was called for a BTT device. The use of a pmem_device pointer is not valid with BTT. BUG: unable to handle kernel NULL pointer dereference at 0000000000000030 IP: nd_pmem_notify+0x30/0xf0 [nd_pmem] Call Trace: nd_device_notify+0x40/0x50 child_notify+0x10/0x20 device_for_each_child+0x50/0x90 nd_region_notify+0x20/0x30 nd_device_notify+0x40/0x50 nvdimm_region_notify+0x27/0x30 acpi_nfit_scrub+0x341/0x590 [nfit] process_one_work+0x197/0x450 worker_thread+0x4e/0x4a0 kthread+0x109/0x140 Fix nd_pmem_notify() by setting nd_region and badblocks pointers properly for BTT. Cc: Vishal Verma Fixes: 719994660c24 ("libnvdimm: async notification support") Signed-off-by: Toshi Kani Signed-off-by: Dan Williams Signed-off-by: Greg Kroah-Hartman commit 72393c00f00ada1afeca1895af5cf177a04f4033 Author: Dan Williams Date: Mon Apr 24 15:43:05 2017 -0700 libnvdimm, region: fix flush hint detection crash commit bc042fdfbb92b5b13421316b4548e2d6e98eed37 upstream. In the case where a dimm does not have any associated flush hints the ndrd->flush_wpq array may be uninitialized leading to crashes with the following signature: BUG: unable to handle kernel NULL pointer dereference at 0000000000000010 IP: region_visible+0x10f/0x160 [libnvdimm] Call Trace: internal_create_group+0xbe/0x2f0 sysfs_create_groups+0x40/0x80 device_add+0x2d8/0x650 nd_async_device_register+0x12/0x40 [libnvdimm] async_run_entry_fn+0x39/0x170 process_one_work+0x212/0x6c0 ? process_one_work+0x197/0x6c0 worker_thread+0x4e/0x4a0 kthread+0x10c/0x140 ? process_one_work+0x6c0/0x6c0 ? kthread_create_on_node+0x60/0x60 ret_from_fork+0x31/0x40 Reviewed-by: Jeff Moyer Fixes: f284a4f23752 ("libnvdimm: introduce nvdimm_flush() and nvdimm_has_flush()") Signed-off-by: Dan Williams Signed-off-by: Greg Kroah-Hartman commit b821a605977e0d79b6c45a99549c3f40bd8e5082 Author: Joeseph Chang Date: Mon Mar 27 20:22:09 2017 -0600 ipmi: Fix kernel panic at ipmi_ssif_thread() commit 6de65fcfdb51835789b245203d1bfc8d14cb1e06 upstream. msg_written_handler() may set ssif_info->multi_data to NULL when using ipmitool to write fru. Before setting ssif_info->multi_data to NULL, add new local pointer "data_to_send" and store correct i2c data pointer to it to fix NULL pointer kernel panic and incorrect ssif_info->multi_pos. 
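As an illustration of the pattern just described, here is a minimal standalone C sketch (the struct and function names are hypothetical, not the actual ipmi_ssif code; only the multi_data/multi_pos/data_to_send names come from the commit text): the buffer pointer is copied into a local variable before the shared field is cleared, so the send path never dereferences a NULL multi_data.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-in for the driver's per-connection state. */
struct multi_part_state {
    unsigned char *multi_data; /* shared buffer; cleared when the last chunk goes out */
    size_t multi_pos;          /* offset of the next chunk */
};

/* Take a local copy of the data pointer before the shared field may be
 * reset, so the "send" below never dereferences a NULL multi_data. */
static void send_next_chunk(struct multi_part_state *st, int is_last)
{
    unsigned char *data_to_send = st->multi_data + st->multi_pos;

    if (is_last) {
        st->multi_data = NULL; /* finished with the multi-part buffer */
        st->multi_pos = 0;
    } else {
        st->multi_pos += 32;
    }

    printf("sending chunk at %p\n", (void *)data_to_send);
}

int main(void)
{
    unsigned char buf[64] = {0};
    struct multi_part_state st = { .multi_data = buf, .multi_pos = 0 };

    send_next_chunk(&st, 0); /* intermediate chunk */
    send_next_chunk(&st, 1); /* final chunk: multi_data is cleared, send is still safe */
    return 0;
}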
Signed-off-by: Joeseph Chang Signed-off-by: Corey Minyard Signed-off-by: Greg Kroah-Hartman commit c8e4805dd6341ac32ab9d2dcbd09499eb6530c33 Author: Johan Hovold Date: Wed Mar 29 18:15:28 2017 +0200 Bluetooth: hci_intel: add missing tty-device sanity check commit dcb9cfaa5ea9aa0ec08aeb92582ccfe3e4c719a9 upstream. Make sure to check the tty-device pointer before looking up the sibling platform device to avoid dereferencing a NULL-pointer when the tty is one end of a Unix98 pty. Fixes: 74cdad37cd24 ("Bluetooth: hci_intel: Add runtime PM support") Fixes: 1ab1f239bf17 ("Bluetooth: hci_intel: Add support for platform driver") Cc: Loic Poulain Signed-off-by: Johan Hovold Signed-off-by: Marcel Holtmann Signed-off-by: Greg Kroah-Hartman commit a8620f066675e0e2bc9feb9698a83bd67f288b52 Author: Johan Hovold Date: Wed Mar 29 18:15:27 2017 +0200 Bluetooth: hci_bcm: add missing tty-device sanity check commit 95065a61e9bf25fb85295127fba893200c2bbbd8 upstream. Make sure to check the tty-device pointer before looking up the sibling platform device to avoid dereferencing a NULL-pointer when the tty is one end of a Unix98 pty. Fixes: 0395ffc1ee05 ("Bluetooth: hci_bcm: Add PM for BCM devices") Cc: Frederic Danis Signed-off-by: Johan Hovold Signed-off-by: Marcel Holtmann Signed-off-by: Greg Kroah-Hartman commit 9a3054df3cf62e9988f81291f9e2fc1b493aedac Author: Szymon Janc Date: Mon Apr 24 18:25:04 2017 -0700 Bluetooth: Fix user channel for 32bit userspace on 64bit kernel commit ab89f0bdd63a3721f7cd3f064f39fc4ac7ca14d4 upstream. Running 32bit userspace on a 64bit kernel results in MSG_CMSG_COMPAT being defined as 0x80000000. This results in sendmsg failure if used from 32bit userspace running on a 64bit kernel. Fix this by accounting for MSG_CMSG_COMPAT in the flags check in hci_sock_sendmsg. Signed-off-by: Szymon Janc Signed-off-by: Marko Kiiskila Signed-off-by: Marcel Holtmann Signed-off-by: Greg Kroah-Hartman commit 58d4794410293a11878c4bd0d9d1b169efcbc060 Author: Wang YanQing Date: Wed Feb 22 19:37:08 2017 +0800 tty: pty: Fix ldisc flush after userspace become aware of the data already commit 77dae6134440420bac334581a3ccee94cee1c054 upstream. While using emacs, cat or other commands in konsole with recent kernels, I have seen many times that CTRL-C freezes konsole. After konsole freezes I can't type anything, so I have to open a new one, which is very annoying. See bug report: https://bugs.kde.org/show_bug.cgi?id=175283 The platform in that bug report is Solaris, but now the pty in Linux has the same problem or the same behavior as Solaris :) The problem is highly likely to be triggered by following the steps below. Note: in my test, BigFile is a text file whose size is bigger than 1G. 1: open konsole 2: cat BigFile 3: CTRL-C After some digging, I find out the reason is that commit 1d1d14da12e7 ("pty: Fix buffer flush deadlock") changes the behavior of pty_flush_buffer. Thread A Thread B -------- -------- 1: n_tty_poll returns POLLIN 2: CTRL-C triggers pty_flush_buffer tty_buffer_flush n_tty_flush_buffer 3: attempt to check the count of chars: ioctl(fd, TIOCINQ, &available); available is equal to 0 4: read(fd, buffer, available) returns 0 5: konsole closes fd Yes, I know we could use the same patch included in the BUG report as a workaround for the Linux platform too. But I think the data in the ldisc belongs to the application on the other side, and we shouldn't clear it when we want to flush the write buffer of this side in pty_flush_buffer.
So I think it is better to disable the ldisc flush in pty_flush_buffer, because its new behavior brings no benefit and only messes up the interaction between POLLIN and TIOCINQ or FIONREAD. Also, I find that no flush_buffer function in other tty drivers has the same behavior as the current pty_flush_buffer. Fixes: 1d1d14da12e7 ("pty: Fix buffer flush deadlock") Signed-off-by: Wang YanQing Signed-off-by: Greg Kroah-Hartman commit 9e3b9909bce302b128a076f60f32448b3cfaf0a4 Author: Johan Hovold Date: Mon Apr 10 11:21:39 2017 +0200 serial: omap: suspend device on probe errors commit 77e6fe7fd2b7cba0bf2f2dc8cde51d7b9a35bf74 upstream. Make sure to actually suspend the device before returning after a failed (or deferred) probe. Note that autosuspend must be disabled before runtime pm is disabled in order to balance the usage count due to a negative autosuspend delay as well as to make the final put suspend the device synchronously. Fixes: 388bc2622680 ("omap-serial: Fix the error handling in the omap_serial probe") Cc: Shubhrajyoti D Signed-off-by: Johan Hovold Acked-by: Tony Lindgren Signed-off-by: Greg Kroah-Hartman commit c1ce1f427e0a62b96b2e5a73ddb5766eb5c90bcb Author: Johan Hovold Date: Mon Apr 10 11:21:38 2017 +0200 serial: omap: fix runtime-pm handling on unbind commit 099bd73dc17ed77aa8c98323e043613b6e8f54fc upstream. An unbalanced and misplaced synchronous put was used to suspend the device on driver unbind, something which, together with a likewise misplaced pm_runtime_disable, leads to external aborts when an open port is being removed. Unhandled fault: external abort on non-linefetch (0x1028) at 0xfa024010 ... [] (serial_omap_set_mctrl) from [] (uart_update_mctrl+0x50/0x60) [] (uart_update_mctrl) from [] (uart_shutdown+0xbc/0x138) [] (uart_shutdown) from [] (uart_hangup+0x94/0x190) [] (uart_hangup) from [] (__tty_hangup+0x404/0x41c) [] (__tty_hangup) from [] (tty_vhangup+0x1c/0x20) [] (tty_vhangup) from [] (uart_remove_one_port+0xec/0x260) [] (uart_remove_one_port) from [] (serial_omap_remove+0x40/0x60) [] (serial_omap_remove) from [] (platform_drv_remove+0x34/0x4c) Fix this up by resuming the device before deregistering the port and by suspending and disabling runtime pm only after the port has been removed. Also make sure to disable autosuspend before disabling runtime pm so that the usage count is balanced and the device is actually suspended before returning. Note that due to a negative autosuspend delay being set in probe, the unbalanced put would actually suspend the device on the first driver unbind, while rebinding and again unbinding would result in a negative power.usage_count. Fixes: 7e9c8e7dbf3b ("serial: omap: make sure to suspend device before remove") Cc: Felipe Balbi Cc: Santosh Shilimkar Signed-off-by: Johan Hovold Acked-by: Tony Lindgren Signed-off-by: Greg Kroah-Hartman commit 2578dd75ad123860d92e2bfc8ab539451587822c Author: Marek Szyprowski Date: Mon Apr 3 08:20:59 2017 +0200 serial: samsung: Use right device for DMA-mapping calls commit 768d64f491a530062ddad50e016fb27125f8bd7c upstream. The driver should provide its own struct device for all DMA-mapping calls instead of extracting the device pointer from the DMA engine channel. Although this is harmless from the driver operation perspective on the ARM architecture, it is always good to use the DMA mapping API in a proper way.
This patch fixes following DMA API debug warning: WARNING: CPU: 0 PID: 0 at lib/dma-debug.c:1241 check_sync+0x520/0x9f4 samsung-uart 12c20000.serial: DMA-API: device driver tries to sync DMA memory it has not allocated [device address=0x000000006df0f580] [size=64 bytes] Modules linked in: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.11.0-rc1-00137-g07ca963 #51 Hardware name: SAMSUNG EXYNOS (Flattened Device Tree) [] (unwind_backtrace) from [] (show_stack+0x20/0x24) [] (show_stack) from [] (dump_stack+0x84/0xa0) [] (dump_stack) from [] (__warn+0x14c/0x180) [] (__warn) from [] (warn_slowpath_fmt+0x48/0x50) [] (warn_slowpath_fmt) from [] (check_sync+0x520/0x9f4) [] (check_sync) from [] (debug_dma_sync_single_for_device+0x88/0xc8) [] (debug_dma_sync_single_for_device) from [] (s3c24xx_serial_start_tx_dma+0x100/0x2f8) [] (s3c24xx_serial_start_tx_dma) from [] (s3c24xx_serial_tx_chars+0x198/0x33c) Reported-by: Seung-Woo Kim Fixes: 62c37eedb74c8 ("serial: samsung: add dma reqest/release functions") Signed-off-by: Marek Szyprowski Reviewed-by: Bartlomiej Zolnierkiewicz Reviewed-by: Krzysztof Kozlowski Reviewed-by: Shuah Khan Signed-off-by: Greg Kroah-Hartman commit a78ddcd2a858ea3968afa284d964329a4d7f74c2 Author: Eric Biggers Date: Fri Apr 7 10:58:37 2017 -0700 fscrypt: fix context consistency check when key(s) unavailable commit 272f98f6846277378e1758a49a49d7bf39343c02 upstream. To mitigate some types of offline attacks, filesystem encryption is designed to enforce that all files in an encrypted directory tree use the same encryption policy (i.e. the same encryption context excluding the nonce). However, the fscrypt_has_permitted_context() function which enforces this relies on comparing struct fscrypt_info's, which are only available when we have the encryption keys. This can cause two incorrect behaviors: 1. If we have the parent directory's key but not the child's key, or vice versa, then fscrypt_has_permitted_context() returned false, causing applications to see EPERM or ENOKEY. This is incorrect if the encryption contexts are in fact consistent. Although we'd normally have either both keys or neither key in that case since the master_key_descriptors would be the same, this is not guaranteed because keys can be added or removed from keyrings at any time. 2. If we have neither the parent's key nor the child's key, then fscrypt_has_permitted_context() returned true, causing applications to see no error (or else an error for some other reason). This is incorrect if the encryption contexts are in fact inconsistent, since in that case we should deny access. To fix this, retrieve and compare the fscrypt_contexts if we are unable to set up both fscrypt_infos. While this slightly hurts performance when accessing an encrypted directory tree without the key, this isn't a case we really need to be optimizing for; access *with* the key is much more important. Furthermore, the performance hit is barely noticeable given that we are already retrieving the fscrypt_context and doing two keyring searches in fscrypt_get_encryption_info(). If we ever actually wanted to optimize this case we might start by caching the fscrypt_contexts. Signed-off-by: Eric Biggers Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman commit 659ccd97668ab6c8b6bdf8665bc7ae8635559d62 Author: Jaegeuk Kim Date: Tue Apr 11 19:01:26 2017 -0700 f2fs: fix fs corruption due to zero inode page commit 9bb02c3627f46e50246bf7ab957b56ffbef623cb upstream. This patch fixes the following scenario. 
- f2fs_create/f2fs_mkdir - write_checkpoint - f2fs_mark_inode_dirty_sync - block_operations - f2fs_lock_all - f2fs_sync_inode_meta - f2fs_unlock_all - sync_inode_metadata - f2fs_lock_op - f2fs_write_inode - update_inode_page - get_node_page return -ENOENT - new_inode_page - fill_node_footer - f2fs_mark_inode_dirty_sync - ... - f2fs_unlock_op - f2fs_inode_synced - f2fs_lock_all - do_checkpoint In this checkpoint, we can get an inode page which contains zeros and has only a valid node footer. Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman commit 717946b469cfafc9f31ebfcefa34a18ff369fc2e Author: Jan Kara Date: Fri May 12 15:46:50 2017 -0700 mm: fix data corruption due to stale mmap reads commit cd656375f94632d7b5af57bf67b7b5c0270c591c upstream. Currently we don't invalidate page tables during invalidate_inode_pages2() for DAX. That can result in e.g. a 2MiB zero page being mapped into the page tables while underlying blocks have already been allocated, so that data seen through mmap differs from data seen by read(2). The following sequence reproduces the problem: - open an mmap over a 2MiB hole - read from a 2MiB hole, faulting in a 2MiB zero page - write to the hole with write(3p). The write succeeds but we incorrectly leave the 2MiB zero page mapping intact. - via the mmap, read the data that was just written. Since the zero page mapping is still intact we read back zeroes instead of the new data. Fix the problem by unconditionally calling invalidate_inode_pages2_range() in dax_iomap_actor() for new block allocations and by properly invalidating page tables in invalidate_inode_pages2_range() for DAX mappings. Fixes: c6dcf52c23d2d3fb5235cec42d7dd3f786b87d55 Link: http://lkml.kernel.org/r/20170510085419.27601-3-jack@suse.cz Signed-off-by: Jan Kara Signed-off-by: Ross Zwisler Cc: Dan Williams Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman commit 35223d76e2cf6366be1d281548f099c345ace3f1 Author: Ross Zwisler Date: Fri May 12 15:46:47 2017 -0700 dax: prevent invalidation of mapped DAX entries commit 4636e70bb0a8b871998b6841a2e4b205cf2bc863 upstream. Patch series "mm,dax: Fix data corruption due to mmap inconsistency", v4. This series fixes data corruption that can happen for DAX mounts when page faults race with write(2), and as a result page tables get out of sync with block mappings in the filesystem and thus data seen through mmap is different from data seen through read(2). The series passes testing with the t_mmap_stale test program from Ross and also other mmap related tests on a DAX filesystem. This patch (of 4): dax_invalidate_mapping_entry() currently removes DAX exceptional entries only if they are clean and unlocked. This is done via: invalidate_mapping_pages() invalidate_exceptional_entry() dax_invalidate_mapping_entry() However, for page cache pages removed in invalidate_mapping_pages() there is an additional criterion, which is that the page must not be mapped. This is noted in the comments above invalidate_mapping_pages() and is checked in invalidate_inode_page(). For DAX entries this means that we can end up in a situation where a DAX exceptional entry, either a huge zero page or a regular DAX entry, could end up mapped but without an associated radix tree entry. This is inconsistent with the rest of the DAX code and with what happens in the page cache case.
We aren't able to unmap the DAX exceptional entry because according to its comments invalidate_mapping_pages() isn't allowed to block, and unmap_mapping_range() takes a write lock on the mapping->i_mmap_rwsem. Since we essentially never have unmapped DAX entries to evict from the radix tree, just remove dax_invalidate_mapping_entry(). Fixes: c6dcf52c23d2 ("mm: Invalidate DAX radix tree entries only if appropriate") Link: http://lkml.kernel.org/r/20170510085419.27601-2-jack@suse.cz Signed-off-by: Ross Zwisler Signed-off-by: Jan Kara Reported-by: Jan Kara Cc: Dan Williams Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman commit fa7043b3a2e0ab086b28639b3ac260477798bc9e Author: Dan Williams Date: Sun Apr 30 06:57:01 2017 -0700 device-dax: fix sysfs attribute deadlock commit 565851c972b50612f3a4542e26879ffb3e906fc2 upstream. Usage of device_lock() for dax_region attributes is unnecessary and deadlock prone. It's unnecessary because the order of registration / un-registration guarantees that drvdata is always valid. It's deadlock prone because it sets up this situation: ndctl D 0 2170 2082 0x00000000 Call Trace: __schedule+0x31f/0x980 schedule+0x3d/0x90 schedule_preempt_disabled+0x15/0x20 __mutex_lock+0x402/0x980 ? __mutex_lock+0x158/0x980 ? align_show+0x2b/0x80 [dax] ? kernfs_seq_start+0x2f/0x90 mutex_lock_nested+0x1b/0x20 align_show+0x2b/0x80 [dax] dev_attr_show+0x20/0x50 ndctl D 0 2186 2079 0x00000000 Call Trace: __schedule+0x31f/0x980 schedule+0x3d/0x90 __kernfs_remove+0x1f6/0x340 ? kernfs_remove_by_name_ns+0x45/0xa0 ? remove_wait_queue+0x70/0x70 kernfs_remove_by_name_ns+0x45/0xa0 remove_files.isra.1+0x35/0x70 sysfs_remove_group+0x44/0x90 sysfs_remove_groups+0x2e/0x50 dax_region_unregister+0x25/0x40 [dax] devm_action_release+0xf/0x20 release_nodes+0x16d/0x2b0 devres_release_all+0x3c/0x60 device_release_driver_internal+0x17d/0x220 device_release_driver+0x12/0x20 unbind_store+0x112/0x160 ndctl/2170 is trying to acquire the device_lock() to read an attribute, and ndctl/2186 is holding the device_lock() while trying to drain all active attribute readers. Thanks to Yi Zhang for the reproduction script. Fixes: d7fe1a67f658 ("dax: add region 'id', 'size', and 'align' attributes") Reported-by: Yi Zhang Signed-off-by: Dan Williams Signed-off-by: Greg Kroah-Hartman commit e1a19ef5291902ec413abdd4037ee27bf6a35594 Author: Dan Williams Date: Fri Mar 17 12:48:09 2017 -0600 device-dax: fix cdev leak commit ed01e50acdd3e4a640cf9ebd28a7e810c3ceca97 upstream. If device_add() fails, cleanup the cdev. Otherwise, we leak a kobj_map() with a stale device number. As Jason points out, there is a small possibility that userspace has opened and mapped the device in the time between cdev_add() and the device_add() failure. We need a new kill_dax_dev() helper to invalidate any established mappings. Fixes: ba09c01d2fa8 ("dax: convert to the cdev api") Reported-by: Jason Gunthorpe Signed-off-by: Dan Williams Signed-off-by: Logan Gunthorpe Reviewed-by: Johannes Thumshirn Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit 81845f52017951b6623618d68f79a7530073f2d0 Author: NeilBrown Date: Thu Apr 6 12:06:37 2017 +1000 md/raid1: avoid reusing a resync bio after error handling. commit 0c9d5b127f695818c2c5a3868c1f28ca2969e905 upstream. fix_sync_read_error() modifies a bio on a newly faulty device by setting bi_end_io to end_sync_write. 
This ensures that put_buf() will still call rdev_dec_pending() as required, but makes sure that subsequent code in fix_sync_read_error() doesn't try to read from the device. Unfortunately this interacts badly with sync_request_write() which assumes that any bio with bi_end_io set to non-NULL other than end_sync_read is safe to write to. As the device is now faulty it doesn't make sense to write. As the bio was recently used for a read, it is "dirty" and not suitable for immediate submission. In particular, ->bi_next might be non-NULL, which will cause generic_make_request() to complain. Break this interaction by refusing to write to devices which are marked as Faulty. Reported-and-tested-by: Michael Wang Fixes: 2e52d449bcec ("md/raid1: add failfast handling for reads.") Signed-off-by: NeilBrown Signed-off-by: Shaohua Li Signed-off-by: Greg Kroah-Hartman commit 23ebf6aa650dab9c3e08f9c65b8d4a29beae7d43 Author: Jason A. Donenfeld Date: Fri Apr 7 02:33:30 2017 +0200 padata: free correct variable commit 07a77929ba672d93642a56dc2255dd21e6e2290b upstream. The author meant to free the variable that was just allocated, instead of the one that failed to be allocated, but made a simple typo. This patch rectifies that. Signed-off-by: Jason A. Donenfeld Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit 586aa5a6537f0f5f67487ec3f535488e77bbba82 Author: Amir Goldstein Date: Mon Apr 24 22:26:40 2017 +0300 ovl: do not set overlay.opaque on non-dir create commit 4a99f3c83dc493c8ea84693d78cd792839c8aa64 upstream. The optimization for opaque dir create was wrongly being applied also to non-dir create. Fixes: 97c684cc9110 ("ovl: create directories inside merged parent opaque") Signed-off-by: Amir Goldstein Signed-off-by: Miklos Szeredi Signed-off-by: Greg Kroah-Hartman commit cf95696518f5ffdf98b64df850fe6bcf208f4a4f Author: Björn Jacke Date: Fri May 5 04:36:16 2017 +0200 CIFS: add misssing SFM mapping for doublequote commit 85435d7a15294f9f7ef23469e6aaf7c5dfcc54f0 upstream. SFM maps doublequote to 0xF020. Without this patch, creating files with a doublequote fails against Windows/Mac. Signed-off-by: Bjoern Jacke Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman commit 582fb96084c3d7ccc8e294c9586ddd346cf5c864 Author: David Disseldorp Date: Thu May 4 00:41:13 2017 +0200 cifs: fix CIFS_IOC_GET_MNT_INFO oops commit d8a6e505d6bba2250852fbc1c1c86fe68aaf9af3 upstream. An open directory may have a NULL private_data pointer prior to readdir. Fixes: 0de1f4c6f6c0 ("Add way to query server fs info for smb3") Signed-off-by: David Disseldorp Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman commit 4452b80eaef8c4ed9a9d52f98cad40287a2f62eb Author: Rabin Vincent Date: Wed May 3 17:54:01 2017 +0200 CIFS: fix oplock break deadlocks commit 3998e6b87d4258a70df358296d6f1c7234012bfe upstream.
When the final cifsFileInfo_put() is called from cifsiod and an oplock break work is queued, lockdep complains loudly: ============================================= [ INFO: possible recursive locking detected ] 4.11.0+ #21 Not tainted --------------------------------------------- kworker/0:2/78 is trying to acquire lock: ("cifsiod"){++++.+}, at: flush_work+0x215/0x350 but task is already holding lock: ("cifsiod"){++++.+}, at: process_one_work+0x255/0x8e0 other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock("cifsiod"); lock("cifsiod"); *** DEADLOCK *** May be due to missing lock nesting notation 2 locks held by kworker/0:2/78: #0: ("cifsiod"){++++.+}, at: process_one_work+0x255/0x8e0 #1: ((&wdata->work)){+.+...}, at: process_one_work+0x255/0x8e0 stack backtrace: CPU: 0 PID: 78 Comm: kworker/0:2 Not tainted 4.11.0+ #21 Workqueue: cifsiod cifs_writev_complete Call Trace: dump_stack+0x85/0xc2 __lock_acquire+0x17dd/0x2260 ? match_held_lock+0x20/0x2b0 ? trace_hardirqs_off_caller+0x86/0x130 ? mark_lock+0xa6/0x920 lock_acquire+0xcc/0x260 ? lock_acquire+0xcc/0x260 ? flush_work+0x215/0x350 flush_work+0x236/0x350 ? flush_work+0x215/0x350 ? destroy_worker+0x170/0x170 __cancel_work_timer+0x17d/0x210 ? ___preempt_schedule+0x16/0x18 cancel_work_sync+0x10/0x20 cifsFileInfo_put+0x338/0x7f0 cifs_writedata_release+0x2a/0x40 ? cifs_writedata_release+0x2a/0x40 cifs_writev_complete+0x29d/0x850 ? preempt_count_sub+0x18/0xd0 process_one_work+0x304/0x8e0 worker_thread+0x9b/0x6a0 kthread+0x1b2/0x200 ? process_one_work+0x8e0/0x8e0 ? kthread_create_on_node+0x40/0x40 ret_from_fork+0x31/0x40 This is a real warning. Since the oplock is queued on the same workqueue this can deadlock if there is only one worker thread active for the workqueue (which will be the case during memory pressure when the rescuer thread is handling it). Furthermore, there is at least one other kind of hang possible due to the oplock break handling if there is only worker. (This can be reproduced without introducing memory pressure by having passing 1 for the max_active parameter of cifsiod.) cifs_oplock_break() can wait indefintely in the filemap_fdatawait() while the cifs_writev_complete() work is blocked: sysrq: SysRq : Show Blocked State task PC stack pid father kworker/0:1 D 0 16 2 0x00000000 Workqueue: cifsiod cifs_oplock_break Call Trace: __schedule+0x562/0xf40 ? mark_held_locks+0x4a/0xb0 schedule+0x57/0xe0 io_schedule+0x21/0x50 wait_on_page_bit+0x143/0x190 ? add_to_page_cache_lru+0x150/0x150 __filemap_fdatawait_range+0x134/0x190 ? do_writepages+0x51/0x70 filemap_fdatawait_range+0x14/0x30 filemap_fdatawait+0x3b/0x40 cifs_oplock_break+0x651/0x710 ? preempt_count_sub+0x18/0xd0 process_one_work+0x304/0x8e0 worker_thread+0x9b/0x6a0 kthread+0x1b2/0x200 ? process_one_work+0x8e0/0x8e0 ? kthread_create_on_node+0x40/0x40 ret_from_fork+0x31/0x40 dd D 0 683 171 0x00000000 Call Trace: __schedule+0x562/0xf40 ? mark_held_locks+0x29/0xb0 schedule+0x57/0xe0 io_schedule+0x21/0x50 wait_on_page_bit+0x143/0x190 ? add_to_page_cache_lru+0x150/0x150 __filemap_fdatawait_range+0x134/0x190 ? 
do_writepages+0x51/0x70 filemap_fdatawait_range+0x14/0x30 filemap_fdatawait+0x3b/0x40 filemap_write_and_wait+0x4e/0x70 cifs_flush+0x6a/0xb0 filp_close+0x52/0xa0 __close_fd+0xdc/0x150 SyS_close+0x33/0x60 entry_SYSCALL_64_fastpath+0x1f/0xbe Showing all locks held in the system: 2 locks held by kworker/0:1/16: #0: ("cifsiod"){.+.+.+}, at: process_one_work+0x255/0x8e0 #1: ((&cfile->oplock_break)){+.+.+.}, at: process_one_work+0x255/0x8e0 Showing busy workqueues and worker pools: workqueue cifsiod: flags=0xc pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/1 in-flight: 16:cifs_oplock_break delayed: cifs_writev_complete, cifs_echo_request pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 750 3 Fix these problems by creating a new workqueue (with a rescuer) for the oplock break work. Signed-off-by: Rabin Vincent Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman commit cd01b999953bc9f7239429eb6dcabe1ae1f50e45 Author: David Disseldorp Date: Wed May 3 17:39:08 2017 +0200 cifs: fix CIFS_ENUMERATE_SNAPSHOTS oops commit 6026685de33b0db5b2b6b0e9b41b3a1a3261033c upstream. As with 618763958b22, an open directory may have a NULL private_data pointer prior to readdir. CIFS_ENUMERATE_SNAPSHOTS must check for this before dereferencing it. Fixes: 834170c85978 ("Enable previous version support") Signed-off-by: David Disseldorp Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman commit 6ec05086dca927ba9574840e0e92ea9c92c439e6 Author: David Disseldorp Date: Wed May 3 17:39:09 2017 +0200 cifs: fix leak in FSCTL_ENUM_SNAPS response handling commit 0e5c795592930d51fd30d53a2e7b73cba022a29b upstream. The server may respond with success, and an output buffer less than sizeof(struct smb_snapshot_array) in length. Do not leak the output buffer in this case. Fixes: 834170c85978 ("Enable previous version support") Signed-off-by: David Disseldorp Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman commit b1b295efad9f32d7eae9eed06752d0872112e550 Author: Björn Jacke Date: Wed May 3 23:47:44 2017 +0200 CIFS: fix mapping of SFM_SPACE and SFM_PERIOD commit b704e70b7cf48f9b67c07d585168e102dfa30bb4 upstream. - trailing space maps to 0xF028 - trailing period maps to 0xF029 This fix corrects the mapping of file names which have a trailing character that would otherwise be illegal (period or space) but is allowed by POSIX. Signed-off-by: Bjoern Jacke Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman commit ae6c2182b8536fafe21c2dcf747eacd20d867ade Author: Steve French Date: Wed May 3 21:12:20 2017 -0500 SMB3: Work around mount failure when using SMB3 dialect to Macs commit 7db0a6efdc3e990cdfd4b24820d010e9eb7890ad upstream. Macs send the maximum buffer size in the response to the ioctl to validate negotiate security information, which causes us to fail the mount as the response buffer is larger than the expected response. Changed the ioctl response processing to allow for padding of the validate negotiate ioctl response and to limit the maximum response size to the maximum buffer size. Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman commit 6716949b0029f5b92c185dedad321e7853f841dd Author: Steve French Date: Tue May 2 13:35:20 2017 -0500 Set unicode flag on cifs echo request to avoid Mac error commit 26c9cb668c7fbf9830516b75d8bee70b699ed449 upstream. Mac requires the unicode flag to be set for cifs, even for the smb echo request (which doesn't have strings).
Without this, Mac rejects the periodic echo requests (when mounting with cifs) that we use to check if the server is down. Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman commit b7174f40382813750fd2ef225cc0a5b19e9c89d4 Author: Sachin Prabhu Date: Wed Apr 26 14:05:46 2017 +0100 Fix match_prepath() commit cd8c42968ee651b69e00f8661caff32b0086e82d upstream. An incorrect return value for shares not using the prefix path means that we will never match superblocks for these shares. Fixes: commit c1d8b24d1819 ("Compare prepaths when comparing superblocks") Signed-off-by: Sachin Prabhu Reviewed-by: Pavel Shilovsky Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman commit 93697e1e509948e16d588c93b39d38d7b144efe7 Author: Vlastimil Babka Date: Mon May 8 15:59:46 2017 -0700 mm: prevent potential recursive reclaim due to clearing PF_MEMALLOC commit 62be1511b1db8066220b18b7d4da2e6b9fdc69fb upstream. Patch series "more robust PF_MEMALLOC handling" This series aims to unify the setting and clearing of PF_MEMALLOC, which prevents recursive reclaim. There are some places that clear the flag unconditionally from current->flags, which may result in clearing a pre-existing flag. This already resulted in a bug report that Patch 1 fixes (without the new helpers, to make backporting easier). Patch 2 introduces the new helpers, modelled after the existing memalloc_noio_* and memalloc_nofs_* helpers, and converts the mm core to use them. Patches 3 and 4 convert non-mm code. This patch (of 4): __alloc_pages_direct_compact() sets PF_MEMALLOC to prevent deadlock during page migration by lock_page() (see the comment in __unmap_and_move()). Then it unconditionally clears the flag, which can clear a pre-existing PF_MEMALLOC flag and result in recursive reclaim. This was not a problem until commit a8161d1ed609 ("mm, page_alloc: restructure direct compaction handling in slowpath"), because direct compaction was called only after direct reclaim, which was skipped when the PF_MEMALLOC flag was set. Even now it's only a theoretical issue, as the new callsite of __alloc_pages_direct_compact() is reached only for costly orders and when gfp_pfmemalloc_allowed() is true, which means either __GFP_NOMEMALLOC is in gfp_flags or in_interrupt() is true. There is no such known context, but let's play it safe and make __alloc_pages_direct_compact() robust for cases where PF_MEMALLOC is already set. Fixes: a8161d1ed609 ("mm, page_alloc: restructure direct compaction handling in slowpath") Link: http://lkml.kernel.org/r/20170405074700.29871-2-vbabka@suse.cz Signed-off-by: Vlastimil Babka Reported-by: Andrey Ryabinin Acked-by: Michal Hocko Acked-by: Hillf Danton Cc: Mel Gorman Cc: Johannes Weiner Cc: Boris Brezillon Cc: Chris Leech Cc: "David S. Miller" Cc: Eric Dumazet Cc: Josef Bacik Cc: Lee Duncan Cc: Michal Hocko Cc: Richard Weinberger Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman commit 3302d94ab6f9e93061dc338d3b34a3ccc16fb11e Author: Andrey Ryabinin Date: Wed May 3 14:56:02 2017 -0700 fs/block_dev: always invalidate cleancache in invalidate_bdev() commit a5f6a6a9c72eac38a7fadd1a038532bc8516337c upstream. invalidate_bdev() calls cleancache_invalidate_inode() iff ->nrpages != 0, which doesn't make any sense. Make sure that invalidate_bdev() always calls cleancache_invalidate_inode() regardless of the mapping->nrpages value.
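A small standalone C sketch of that control-flow change (hypothetical helper names, not the actual fs/block_dev.c code): the cleancache invalidation becomes unconditional instead of being gated on the page count.

#include <stdio.h>

/* Hypothetical stand-ins for the mapping and the two operations involved. */
struct mapping_stub { unsigned long nrpages; };

static void truncate_cached_pages(struct mapping_stub *m)
{
    printf("truncating %lu cached pages\n", m->nrpages);
    m->nrpages = 0;
}

static void cleancache_drop(struct mapping_stub *m)
{
    printf("dropping cleancache copies (nrpages=%lu)\n", m->nrpages);
}

/* After the fix: cleancache is invalidated even when no pages are cached. */
static void invalidate_bdev_like(struct mapping_stub *mapping)
{
    if (mapping->nrpages)
        truncate_cached_pages(mapping);

    cleancache_drop(mapping);
}

int main(void)
{
    struct mapping_stub empty = { .nrpages = 0 };
    struct mapping_stub busy  = { .nrpages = 8 };

    invalidate_bdev_like(&empty); /* previously the cleancache drop was skipped entirely here */
    invalidate_bdev_like(&busy);
    return 0;
}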
Fixes: c515e1fd361c ("mm/fs: add hooks to support cleancache") Link: http://lkml.kernel.org/r/20170424164135.22350-3-aryabinin@virtuozzo.com Signed-off-by: Andrey Ryabinin Reviewed-by: Jan Kara Acked-by: Konrad Rzeszutek Wilk Cc: Alexander Viro Cc: Ross Zwisler Cc: Jens Axboe Cc: Johannes Weiner Cc: Alexey Kuznetsov Cc: Christoph Hellwig Cc: Nikolay Borisov Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman commit f174092ec373720bceb4090de9ba357e04072ce5 Author: Luis Henriques Date: Fri Apr 28 11:14:04 2017 +0100 ceph: fix memory leak in __ceph_setxattr() commit eeca958dce0a9231d1969f86196653eb50fcc9b3 upstream. The ceph_inode_xattr needs to be released when removing an xattr. Easily reproducible running the 'generic/020' test from xfstests or simply by doing: attr -s attr0 -V 0 /mnt/test && attr -r attr0 /mnt/test While there, also fix the error path. Here's the kmemleak splat: unreferenced object 0xffff88001f86fbc0 (size 64): comm "attr", pid 244, jiffies 4294904246 (age 98.464s) hex dump (first 32 bytes): 40 fa 86 1f 00 88 ff ff 80 32 38 1f 00 88 ff ff @........28..... 00 01 00 00 00 00 ad de 00 02 00 00 00 00 ad de ................ backtrace: [] kmemleak_alloc+0x49/0xa0 [] kmem_cache_alloc+0x9b/0xf0 [] __ceph_setxattr+0x17e/0x820 [] ceph_set_xattr_handler+0x37/0x40 [] __vfs_removexattr+0x4b/0x60 [] vfs_removexattr+0x77/0xd0 [] removexattr+0x41/0x60 [] path_removexattr+0x75/0xa0 [] SyS_lremovexattr+0xb/0x10 [] entry_SYSCALL_64_fastpath+0x13/0x94 [] 0xffffffffffffffff Signed-off-by: Luis Henriques Reviewed-by: "Yan, Zheng" Signed-off-by: Ilya Dryomov Signed-off-by: Greg Kroah-Hartman commit 594d4eca1c49a953ba41a21ae14bff4a5c13ec1a Author: Michal Hocko Date: Mon May 8 15:57:24 2017 -0700 fs/xattr.c: zero out memory copied to userspace in getxattr commit 81be3dee96346fbe08c31be5ef74f03f6b63cf68 upstream. getxattr uses vmalloc to allocate memory if kzalloc fails. This is filled by vfs_getxattr and then copied to userspace. vmalloc, however, doesn't zero out the memory, so if the specific implementation of the xattr handler is sloppy we can theoretically expose kernel memory. There is no real sign this is really the case, but let's make sure this will not happen and use vzalloc instead. Fixes: 779302e67835 ("fs/xattr.c:getxattr(): improve handling of allocation failures") Link: http://lkml.kernel.org/r/20170306103327.2766-1-mhocko@kernel.org Acked-by: Kees Cook Reported-by: Vlastimil Babka Signed-off-by: Michal Hocko Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman commit 49302d5313252a62622b942b35b2bf7538c5bbb0 Author: Martin Brandenburg Date: Tue Apr 25 15:38:04 2017 -0400 orangefs: do not check possibly stale size on truncate commit 53950ef541675df48c219a8d665111a0e68dfc2f upstream. Let the server figure this out because our size might be out of date or not present. The bug was that xfs_io -f -t -c "pread -v 0 100" /mnt/foo echo "Test" > /mnt/foo xfs_io -f -t -c "pread -v 0 100" /mnt/foo fails because the second truncate did not happen if nothing had requested the size after the write in echo. Thus i_size was zero (not present) and orangefs_setattr thought i_size was zero and that there was nothing to do.
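A standalone C sketch of the point above, using invented names rather than the orangefs code: a truncate that short-circuits on a locally cached, possibly stale or absent, size can silently skip work, so the request is always forwarded and the server decides whether anything changes.

#include <stdio.h>

/* Hypothetical client-side inode state; the cached size may be stale or unset. */
struct inode_stub {
    long cached_size;  /* -1 would mean "not known locally" */
};

static void send_truncate_to_server(long new_size)
{
    printf("server truncates to %ld\n", new_size);
}

/* Buggy pattern: trusts the possibly stale local size and skips the call. */
static void setattr_size_buggy(struct inode_stub *i, long new_size)
{
    if (i->cached_size == new_size)
        return;                    /* may wrongly do nothing */
    send_truncate_to_server(new_size);
    i->cached_size = new_size;
}

/* Fixed pattern: always let the server decide whether anything changes. */
static void setattr_size_fixed(struct inode_stub *i, long new_size)
{
    send_truncate_to_server(new_size);
    i->cached_size = new_size;
}

int main(void)
{
    struct inode_stub ino = { .cached_size = 0 }; /* stale: the server already holds data */

    setattr_size_buggy(&ino, 0); /* no output: the truncate is silently skipped */
    setattr_size_fixed(&ino, 0); /* the truncate always reaches the server */
    return 0;
}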
Signed-off-by: Martin Brandenburg Signed-off-by: Mike Marshall Signed-off-by: Greg Kroah-Hartman commit 42d86d92af64e041adb999da74acad54f14d29cb Author: Martin Brandenburg Date: Tue Apr 25 15:37:58 2017 -0400 orangefs: do not set getattr_time on orangefs_lookup commit 17930b252cd6f31163c259eaa99dd8aa630fb9ba upstream. Since orangefs_lookup calls orangefs_iget which calls orangefs_inode_getattr, getattr_time will get set. Signed-off-by: Martin Brandenburg Signed-off-by: Mike Marshall Signed-off-by: Greg Kroah-Hartman commit d2c326c7ff6da3c752d2eaea35c4932283ab106d Author: Martin Brandenburg Date: Tue Apr 25 15:37:57 2017 -0400 orangefs: clean up oversize xattr validation commit e675c5ec51fe2554719a7b6bcdbef0a770f2c19b upstream. Also don't check flags as this has been validated by the VFS already. Fix an off-by-one error in the max size checking. Stop logging just because userspace wants to write attributes which do not fit. This and the previous commit fix xfstests generic/020. Signed-off-by: Martin Brandenburg Signed-off-by: Mike Marshall Signed-off-by: Greg Kroah-Hartman commit 4af222e1d681b17cd82d32008a602813ea1ffea2 Author: Martin Brandenburg Date: Tue Apr 25 15:37:56 2017 -0400 orangefs: fix bounds check for listxattr commit a956af337b9ff25822d9ce1a59c6ed0c09fc14b9 upstream. Signed-off-by: Martin Brandenburg Signed-off-by: Mike Marshall Signed-off-by: Greg Kroah-Hartman commit e3e77f8ba5f657447d2f21ec7d755f16fb026b6c Author: Eric Biggers Date: Sun Apr 30 00:10:50 2017 -0400 ext4: evict inline data when writing to memory map commit 7b4cc9787fe35b3ee2dfb1c35e22eafc32e00c33 upstream. Currently the case of writing via mmap to a file with inline data is not handled. This is maybe a rare case since it requires a writable memory map of a very small file, but it is trivial to trigger with on inline_data filesystem, and it causes the 'BUG_ON(ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA));' in ext4_writepages() to be hit: mkfs.ext4 -O inline_data /dev/vdb mount /dev/vdb /mnt xfs_io -f /mnt/file \ -c 'pwrite 0 1' \ -c 'mmap -w 0 1m' \ -c 'mwrite 0 1' \ -c 'fsync' kernel BUG at fs/ext4/inode.c:2723! invalid opcode: 0000 [#1] SMP CPU: 1 PID: 2532 Comm: xfs_io Not tainted 4.11.0-rc1-xfstests-00301-g071d9acf3d1f #633 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-20170228_101828-anatol 04/01/2014 task: ffff88003d3a8040 task.stack: ffffc90000300000 RIP: 0010:ext4_writepages+0xc89/0xf8a RSP: 0018:ffffc90000303ca0 EFLAGS: 00010283 RAX: 0000028410000000 RBX: ffff8800383fa3b0 RCX: ffffffff812afcdc RDX: 00000a9d00000246 RSI: ffffffff81e660e0 RDI: 0000000000000246 RBP: ffffc90000303dc0 R08: 0000000000000002 R09: 869618e8f99b4fa5 R10: 00000000852287a2 R11: 00000000a03b49f4 R12: ffff88003808e698 R13: 0000000000000000 R14: 7fffffffffffffff R15: 7fffffffffffffff FS: 00007fd3e53094c0(0000) GS:ffff88003e400000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fd3e4c51000 CR3: 000000003d554000 CR4: 00000000003406e0 Call Trace: ? _raw_spin_unlock+0x27/0x2a ? kvm_clock_read+0x1e/0x20 do_writepages+0x23/0x2c ? do_writepages+0x23/0x2c __filemap_fdatawrite_range+0x80/0x87 filemap_write_and_wait_range+0x67/0x8c ext4_sync_file+0x20e/0x472 vfs_fsync_range+0x8e/0x9f ? 
syscall_trace_enter+0x25b/0x2d0 vfs_fsync+0x1c/0x1e do_fsync+0x31/0x4a SyS_fsync+0x10/0x14 do_syscall_64+0x69/0x131 entry_SYSCALL64_slow_path+0x25/0x25 We could try to be smart and keep the inline data in this case, or at least support delayed allocation when allocating the block, but these solutions would be more complicated and don't seem worthwhile given how rare this case seems to be. So just fix the bug by calling ext4_convert_inline_data() when we're asked to make a page writable, so that any inline data gets evicted, with the block allocated immediately. Reported-by: Nick Alcock Reviewed-by: Andreas Dilger Signed-off-by: Eric Biggers Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman commit fd469456ad6df0d8189da30a973c04d93a15c8cb Author: Jan Kara Date: Sat Apr 29 21:07:30 2017 -0400 jbd2: fix dbench4 performance regression for 'nobarrier' mounts commit 5052b069acf73866d00077d8bc49983c3ee903e5 upstream. Commit b685d3d65ac7 "block: treat REQ_FUA and REQ_PREFLUSH as synchronous" removed REQ_SYNC flag from WRITE_FUA implementation. Since JBD2 strips REQ_FUA and REQ_FLUSH flags from submitted IO when the filesystem is mounted with nobarrier mount option, journal superblock writes ended up being async writes after this patch and that caused heavy performance regression for dbench4 benchmark with high number of processes. In my test setup with HP RAID array with non-volatile write cache and 32 GB ram, dbench4 runs with 8 processes regressed by ~25%. Fix the problem by making sure journal superblock writes are always treated as synchronous since they generally block progress of the journalling machinery and thus the whole filesystem. Fixes: b685d3d65ac791406e0dfd8779cc9b3707fea5a3 Signed-off-by: Jan Kara Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman commit e2e596f2888c713f150b22c0e37508b8f5133c96 Author: Christian Borntraeger Date: Thu Apr 6 09:51:52 2017 +0200 perf annotate s390: Implement jump types for perf annotate commit d9f8dfa9baf9b6ae1f2f84f887176558ecde5268 upstream. Implement simple detection for all kind of jumps and branches. Signed-off-by: Christian Borntraeger Cc: Andreas Krebbel Cc: Hendrik Brueckner Cc: Martin Schwidefsky Cc: Peter Zijlstra Cc: linux-s390 Link: http://lkml.kernel.org/r/1491465112-45819-3-git-send-email-borntraeger@de.ibm.com Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Greg Kroah-Hartman commit d122da54d33ebbec7fba087613edfa7a874314a2 Author: Christian Borntraeger Date: Thu Apr 6 09:51:51 2017 +0200 perf annotate s390: Fix perf annotate error -95 (4.10 regression) commit e77852b32d6d4430c68c38aaf73efe5650fa25af upstream. since 4.10 perf annotate exits on s390 with an "unknown error -95". Turns out that commit 786c1b51844d ("perf annotate: Start supporting cross arch annotation") added a hard requirement for architecture support when objdump is used but only provided x86 and arm support. Meanwhile power was added so lets add s390 as well. While at it make sure to implement the branch and jump types. 
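A rough standalone C sketch of the kind of mnemonic-based classification described, with deliberately simplified rules and invented helper names rather than the actual perf code: on s390, jump mnemonics typically begin with 'j' or 'brc', 'bras'/'brasl' are calls, and 'br' is a return.

#include <stdio.h>
#include <string.h>

enum ins_kind { INS_OTHER, INS_JUMP, INS_CALL, INS_RET };

/* Simplified classification by mnemonic prefix; a real disassembler-based
 * annotator is more involved, this only illustrates the idea. */
static enum ins_kind classify_s390_mnemonic(const char *name)
{
    if (strcmp(name, "br") == 0)
        return INS_RET;                      /* branch on register, e.g. br %r14 */
    if (strncmp(name, "bras", 4) == 0)
        return INS_CALL;                     /* bras/brasl: branch and save */
    if (name[0] == 'j' || strncmp(name, "brc", 3) == 0)
        return INS_JUMP;                     /* j, jg, jne, ..., brc, brcl */
    return INS_OTHER;
}

int main(void)
{
    const char *samples[] = { "jne", "jg", "brcl", "brasl", "br", "lgr" };
    const char *kinds[] = { "other", "jump", "call", "ret" };

    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("%-6s -> %s\n", samples[i], kinds[classify_s390_mnemonic(samples[i])]);
    return 0;
}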
Signed-off-by: Christian Borntraeger Cc: Andreas Krebbel Cc: Hendrik Brueckner Cc: Martin Schwidefsky Cc: Peter Zijlstra Cc: linux-s390 Fixes: 786c1b51844 "perf annotate: Start supporting cross arch annotation" Link: http://lkml.kernel.org/r/1491465112-45819-2-git-send-email-borntraeger@de.ibm.com Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Greg Kroah-Hartman commit ba60060043532a601a9ad21da297ad1fcfeab39c Author: Adrian Hunter Date: Fri Mar 24 14:15:52 2017 +0200 perf auxtrace: Fix no_size logic in addr_filter__resolve_kernel_syms() commit c3a0bbc7ad7598dec5a204868bdf8a2b1b51df14 upstream. Address filtering with kernel symbols incorrectly resulted in the error "Cannot determine size of symbol" because the no_size logic was the wrong way around. Signed-off-by: Adrian Hunter Tested-by: Andi Kleen Link: http://lkml.kernel.org/r/1490357752-27942-1-git-send-email-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Greg Kroah-Hartman commit d187c9e135d307ffba95c8fc0c550c3faa9be017 Author: Mike Marciniszyn Date: Sun Apr 9 10:16:35 2017 -0700 IB/hfi1: Prevent kernel QP post send hard lockups commit b6eac931b9bb2bce4db7032c35b41e5e34ec22a5 upstream. The driver progress routines can call cond_resched() when a timeslice is exhausted and irqs are enabled. If the ULP had been holding a spin lock without disabling irqs and the post send directly called the progress routine, the cond_resched() could yield allowing another thread from the same ULP to deadlock on that same lock. Correct by replacing the current hfi1_do_send() calldown with a unique one for post send and adding an argument to hfi1_do_send() to indicate that the send engine is running in a thread. If the routine is not running in a thread, avoid calling cond_resched(). Fixes: Commit 831464ce4b74 ("IB/hfi1: Don't call cond_resched in atomic mode when sending packets") Reviewed-by: Dennis Dalessandro Signed-off-by: Mike Marciniszyn Signed-off-by: Dennis Dalessandro Signed-off-by: Doug Ledford Signed-off-by: Greg Kroah-Hartman commit 04692adb3aacb0b64f84a07215367a6d8b0f79cc Author: Jack Morgenstein Date: Tue Mar 21 12:57:06 2017 +0200 IB/mlx4: Reduce SRIOV multicast cleanup warning message to debug level commit fb7a91746af18b2ebf596778b38a709cdbc488d3 upstream. A warning message during SRIOV multicast cleanup should have actually been a debug level message. The condition generating the warning does no harm and can fill the message log. In some cases, during testing, some tests were so intense as to swamp the message log with these warning messages, causing a stall in the console message log output task. This stall caused an NMI to be sent to all CPUs (so that they all dumped their stacks into the message log). Aside from the message flood causing an NMI, the tests all passed. Once the message flood which caused the NMI is removed (by reducing the warning message to debug level), the NMI no longer occurs. Sample message log (console log) output illustrating the flood and resultant NMI (snippets with comments and modified with ... instead of hex digits, to satisfy checkpatch.pl): _mlx4_ib_mcg_port_cleanup: ... WARNING: group refcount 1!!!... *** About 4000 almost identical lines in less than one second *** _mlx4_ib_mcg_port_cleanup: ... WARNING: group refcount 1!!!... INFO: rcu_sched detected stalls on CPUs/tasks: { 17} (...) *** { 17} above indicates that CPU 17 was the one that stalled *** sending NMI to all CPUs: ... 
NMI backtrace for cpu 17 CPU: 17 PID: 45909 Comm: kworker/17:2 Hardware name: HP ProLiant DL360p Gen8, BIOS P71 09/08/2013 Workqueue: events fb_flashcursor task: ffff880478...... ti: ffff88064e...... task.ti: ffff88064e...... RIP: 0010:[ffffffff81......] [ffffffff81......] io_serial_in+0x15/0x20 RSP: 0018:ffff88064e257cb0 EFLAGS: 00000002 RAX: 0000000000...... RBX: ffffffff81...... RCX: 0000000000...... RDX: 0000000000...... RSI: 0000000000...... RDI: ffffffff81...... RBP: ffff88064e...... R08: ffffffff81...... R09: 0000000000...... R10: 0000000000...... R11: ffff88064e...... R12: 0000000000...... R13: 0000000000...... R14: ffffffff81...... R15: 0000000000...... FS: 0000000000......(0000) GS:ffff8804af......(0000) knlGS:000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080...... CR2: 00007f2a2f...... CR3: 0000000001...... CR4: 0000000000...... DR0: 0000000000...... DR1: 0000000000...... DR2: 0000000000...... DR3: 0000000000...... DR6: 00000000ff...... DR7: 0000000000...... Stack: ffff88064e...... ffffffff81...... ffffffff81...... 0000000000...... ffffffff81...... ffff88064e...... ffffffff81...... ffffffff81...... ffffffff81...... ffff88064e...... ffffffff81...... 0000000000...... Call Trace: [] wait_for_xmitr+0x3b/0xa0 [] serial8250_console_putchar+0x1c/0x30 [] ? serial8250_console_write+0x140/0x140 [] uart_console_write+0x3a/0x80 [] serial8250_console_write+0xae/0x140 [] call_console_drivers.constprop.15+0x91/0xf0 [] console_unlock+0x3bf/0x400 [] fb_flashcursor+0x5d/0x140 [] ? bit_clear+0x120/0x120 [] process_one_work+0x17b/0x470 [] worker_thread+0x11b/0x400 [] ? rescuer_thread+0x400/0x400 [] kthread+0xcf/0xe0 [] ? kthread_create_on_node+0x140/0x140 [] ret_from_fork+0x58/0x90 [] ? kthread_create_on_node+0x140/0x140 Code: 48 89 e5 d3 e6 48 63 f6 48 03 77 10 8b 06 5d c3 66 0f 1f 44 00 00 66 66 66 6 As indicated in the stack trace above, the console output task got swamped. Fixes: b9c5d6a64358 ("IB/mlx4: Add multicast group (MCG) paravirtualization for SR-IOV") Signed-off-by: Jack Morgenstein Signed-off-by: Leon Romanovsky Signed-off-by: Doug Ledford Signed-off-by: Greg Kroah-Hartman commit e4e17bce167258994b1e7d2a6aeeb06e3c48d341 Author: Jack Morgenstein Date: Tue Mar 21 12:57:05 2017 +0200 IB/mlx4: Fix ib device initialization error flow commit 99e68909d5aba1861897fe7afc3306c3c81b6de0 upstream. In mlx4_ib_add, procedure mlx4_ib_alloc_eqs is called to allocate EQs. However, in the mlx4_ib_add error flow, procedure mlx4_ib_free_eqs is not called to free the allocated EQs. Fixes: e605b743f33d ("IB/mlx4: Increase the number of vectors (EQs) available for ULPs") Signed-off-by: Jack Morgenstein Signed-off-by: Leon Romanovsky Signed-off-by: Doug Ledford Signed-off-by: Greg Kroah-Hartman commit 5d691b80ca4d45bd7fe875f9a4931ed5333ef487 Author: Shamir Rabinovitch Date: Wed Mar 29 06:21:59 2017 -0400 IB/IPoIB: ibX: failed to create mcg debug file commit 771a52584096c45e4565e8aabb596eece9d73d61 upstream. When udev renames the netdev devices, ipoib debugfs entries does not get renamed. As a result, if subsequent probe of ipoib device reuse the name then creating a debugfs entry for the new device would fail. Also, moved ipoib_create_debug_files and ipoib_delete_debug_files as part of ipoib event handling in order to avoid any race condition between these. 
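A standalone C sketch of the idea, with invented names rather than the ipoib code: per-device debug entries are keyed by the current interface name, and a rename event drops the old entry and creates a new one, so a later device reusing the name no longer collides.

#include <stdio.h>

/* Hypothetical per-device debug state tracked by interface name. */
struct dbg_entry {
    char name[32];
    int present;
};

static void dbg_create(struct dbg_entry *e, const char *ifname)
{
    snprintf(e->name, sizeof(e->name), "%s_mcg", ifname);
    e->present = 1;
    printf("created debug entry %s\n", e->name);
}

static void dbg_delete(struct dbg_entry *e)
{
    if (e->present)
        printf("deleted debug entry %s\n", e->name);
    e->present = 0;
}

/* Handle a rename event by recreating the entry under the new name,
 * instead of leaving a stale entry behind. */
static void on_rename(struct dbg_entry *e, const char *new_ifname)
{
    dbg_delete(e);
    dbg_create(e, new_ifname);
}

int main(void)
{
    struct dbg_entry e = { .present = 0 };

    dbg_create(&e, "ib0");
    on_rename(&e, "ibp3s0");   /* udev-style rename */
    dbg_delete(&e);
    return 0;
}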
Fixes: 1732b0ef3b3a ("[IPoIB] add path record information in debugfs") Signed-off-by: Vijay Kumar Signed-off-by: Shamir Rabinovitch Reviewed-by: Mark Bloch Signed-off-by: Doug Ledford Signed-off-by: Greg Kroah-Hartman commit 53bd2ccebd51bb8b3325b8771c7e0ab6686d9450 Author: Michael J. Ruhl Date: Sun Apr 9 10:15:51 2017 -0700 IB/core: For multicast functions, verify that LIDs are multicast LIDs commit 8561eae60ff9417a50fa1fb2b83ae950dc5c1e21 upstream. The Infiniband spec defines "A multicast address is defined by a MGID and a MLID" (section 10.5). Currently the MLID value is not validated. Add a check to verify that the MLID value is in the correct address range. Fixes: 0c33aeedb2cf ("[IB] Add checks to multicast attach and detach") Reviewed-by: Ira Weiny Reviewed-by: Dasaratharaman Chandramouli Signed-off-by: Michael J. Ruhl Signed-off-by: Dennis Dalessandro Reviewed-by: Leon Romanovsky Signed-off-by: Doug Ledford Signed-off-by: Greg Kroah-Hartman commit b40c7a502b1ede667caddcaaa6109f3ce345aa54 Author: Jack Morgenstein Date: Sun Mar 19 10:55:57 2017 +0200 IB/core: Fix sysfs registration error flow commit b312be3d87e4c80872cbea869e569175c5eb0f9a upstream. The kernel commit cited below restructured ib device management so that the device kobject is initialized in ib_alloc_device. As part of the restructuring, the kobject is now initialized in procedure ib_alloc_device, and is later added to the device hierarchy in the ib_register_device call stack, in procedure ib_device_register_sysfs (which calls device_add). However, in the ib_device_register_sysfs error flow, if an error occurs following the call to device_add, the cleanup procedure device_unregister is called. This call results in the device object being deleted -- which results in various use-after-free crashes. The correct cleanup call is device_del -- which undoes device_add without deleting the device object. The device object will then (correctly) be deleted in the ib_register_device caller's error cleanup flow, when the caller invokes ib_dealloc_device. Fixes: 55aeed06544f6 ("IB/core: Make ib_alloc_device init the kobject") Signed-off-by: Jack Morgenstein Signed-off-by: Leon Romanovsky Signed-off-by: Doug Ledford Signed-off-by: Greg Kroah-Hartman commit f269df7bad86799c23ea8637689b96f26cd7b9f3 Author: Ding Tianhong Date: Sat Apr 29 10:38:48 2017 +0800 iov_iter: don't revert iov buffer if csum error commit a6a5993243550b09f620941dea741b7421fdf79c upstream. The patch 327868212381 ("make skb_copy_datagram_msg() et.al. preserve ->msg_iter on error") will revert the iov buffer if the copy to the iter failed, but it didn't copy any datagram if skb_checksum_complete() reported an error, so there is no need to revert any data at this place. v2: Sabrina noticed that returning -EFAULT on a checksum error is not correct here, as it would confuse the caller about the return value, so fix it. Fixes: 327868212381 ("make skb_copy_datagram_msg() et.al. preserve->msg_iter on error") Signed-off-by: Ding Tianhong Acked-by: Al Viro Signed-off-by: Wei Yongjun Signed-off-by: Al Viro Signed-off-by: Greg Kroah-Hartman commit fc483680829ae9666ffc0ce2195c3e5bdecf119f Author: Alex Williamson Date: Thu Apr 13 14:10:15 2017 -0600 vfio/type1: Remove locked page accounting workqueue commit 0cfef2b7410b64d7a430947e0b533314c4f97153 upstream. If the mmap_sem is contended then the vfio type1 IOMMU backend will defer locked page accounting updates to a workqueue task.
This has a few problems and depending on which side the user tries to play, they might be over-penalized for unmaps that haven't yet been accounted or race the workqueue to enter more mappings than they're allowed. The original intent of this workqueue mechanism seems to be focused on reducing latency through the ioctl, but we cannot do so at the cost of correctness. Remove this workqueue mechanism and update the callers to allow for failure. We can also now recheck the limit under write lock to make sure we don't exceed it. vfio_pin_pages_remote() also now necessarily includes an unwind path which we can jump to directly if the consecutive page pinning finds that we're exceeding the user's memory limits. This avoids the current lazy approach which does accounting and mapping up to the fault, only to return an error on the next iteration to unwind the entire vfio_dma. Reviewed-by: Peter Xu Reviewed-by: Kirti Wankhede Signed-off-by: Alex Williamson Signed-off-by: Greg Kroah-Hartman commit c85990cf511d6ebe20d9055fc6529de3088d11fb Author: Dennis Yang Date: Tue Apr 18 15:27:06 2017 +0800 dm thin: fix a memory leak when passing discard bio down commit 948f581a53b704b984aa20df009f0a2b4cf7f907 upstream. dm-thin does not free the discard_parent bio after all chained sub bios finished. The following kmemleak report could be observed after pool with discard_passdown option processes discard bios in linux v4.11-rc7. To fix this, we drop the discard_parent bio reference when its endio (passdown_endio) called. unreferenced object 0xffff8803d6b29700 (size 256): comm "kworker/u8:0", pid 30349, jiffies 4379504020 (age 143002.776s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 01 00 00 00 00 00 00 f0 00 00 00 00 00 00 00 00 ................ backtrace: [] kmemleak_alloc+0x49/0xa0 [] kmem_cache_alloc+0xb4/0x100 [] mempool_alloc_slab+0x10/0x20 [] mempool_alloc+0x55/0x150 [] bio_alloc_bioset+0xb9/0x260 [] process_prepared_discard_passdown_pt1+0x40/0x1c0 [dm_thin_pool] [] break_up_discard_bio+0x1a9/0x200 [dm_thin_pool] [] process_discard_cell_passdown+0x24/0x40 [dm_thin_pool] [] process_discard_bio+0xdd/0xf0 [dm_thin_pool] [] do_worker+0xa76/0xd50 [dm_thin_pool] [] process_one_work+0x139/0x370 [] worker_thread+0x61/0x450 [] kthread+0xd6/0xf0 [] ret_from_fork+0x3f/0x70 [] 0xffffffffffffffff Signed-off-by: Dennis Yang Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman commit bd0db3b70b590c2e2f5344bff478a062ac67ee7c Author: Bart Van Assche Date: Thu Apr 27 10:11:19 2017 -0700 dm rq: check blk_mq_register_dev() return value in dm_mq_init_request_queue() commit 23a601248958fa4142d49294352fe8d1fdf3e509 upstream. Otherwise the request-based DM blk-mq request_queue will be put into service without being properly exported via sysfs. Signed-off-by: Bart Van Assche Reviewed-by: Hannes Reinecke Cc: Christoph Hellwig Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman commit 5d953aa1cd2e8d16b2d2bfab5b0dcdf296446753 Author: Somasundaram Krishnasamy Date: Fri Apr 7 12:14:55 2017 -0700 dm era: save spacemap metadata root after the pre-commit commit 117aceb030307dcd431fdcff87ce988d3016c34a upstream. When committing era metadata to disk, it doesn't always save the latest spacemap metadata root in superblock. Due to this, metadata is getting corrupted sometimes when reopening the device. The correct order of update should be, pre-commit (shadows spacemap root), save the spacemap root (newly shadowed block) to in-core superblock and then the final commit. 
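A standalone C sketch of that ordering, with invented helpers standing in for the metadata operations: pre-commit first (which shadows the space-map root), then copy the freshly shadowed root into the in-core superblock, then the final commit.

#include <stdio.h>

/* Hypothetical in-core metadata state. */
struct md_stub {
    unsigned long sm_root;        /* space-map root block */
    unsigned long sb_sm_root;     /* copy held in the in-core superblock */
};

static void pre_commit(struct md_stub *md)
{
    md->sm_root += 1;             /* stands in for shadowing the space-map root */
    printf("pre-commit: space-map root now %lu\n", md->sm_root);
}

static void save_sm_root(struct md_stub *md)
{
    md->sb_sm_root = md->sm_root; /* superblock now references the shadowed block */
}

static void final_commit(const struct md_stub *md)
{
    printf("commit: superblock written with space-map root %lu\n", md->sb_sm_root);
}

int main(void)
{
    struct md_stub md = { .sm_root = 100, .sb_sm_root = 100 };

    /* Correct order: the root saved in the superblock is the one produced
     * by pre-commit, so a reopen sees consistent metadata. */
    pre_commit(&md);
    save_sm_root(&md);
    final_commit(&md);
    return 0;
}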
Signed-off-by: Somasundaram Krishnasamy Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman commit 4c1dad842bfc7e4140774bd58c70b2612930eebd Author: Ondrej Kozina Date: Mon Apr 24 14:21:53 2017 +0200 dm crypt: rewrite (wipe) key in crypto layer using random data commit c82feeec9a014b72c4ffea36648cfb6f81cc1b73 upstream. The message "key wipe" used to wipe real key stored in crypto layer by rewriting it with zeroes. Since commit 28856a9 ("crypto: xts - consolidate sanity check for keys") this no longer works in FIPS mode for XTS. While running in FIPS mode the crypto key part has to differ from the tweak key. Fixes: 28856a9 ("crypto: xts - consolidate sanity check for keys") Signed-off-by: Ondrej Kozina Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman commit bce0767157c3b540a3fad82c3f1ea4be480bb03e Author: Gary R Hook Date: Fri Apr 21 10:50:14 2017 -0500 crypto: ccp - Change ISR handler method for a v5 CCP commit 6263b51eb3190d30351360fd168959af7e3a49a9 upstream. The CCP has the ability to perform several operations simultaneously, but only one interrupt. When implemented as a PCI device and using MSI-X/MSI interrupts, use a tasklet model to service interrupts. By disabling and enabling interrupts from the CCP, coupled with the queuing that tasklets provide, we can ensure that all events (occurring on the device) are recognized and serviced. This change fixes a problem wherein 2 or more busy queues can cause notification bits to change state while a (CCP) interrupt is being serviced, but after the queue state has been evaluated. This results in the event being 'lost' and the queue hanging, waiting to be serviced. Since the status bits are never fully de-asserted, the CCP never generates another interrupt (all bits zero -> one or more bits one), and no further CCP operations will be executed. Signed-off-by: Gary R Hook Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit f106cd8575c41a45d3a329bd9fdfdce28ab98601 Author: Gary R Hook Date: Fri Apr 21 10:50:05 2017 -0500 crypto: ccp - Change ISR handler method for a v3 CCP commit 7b537b24e76a1e8e6d7ea91483a45d5b1426809b upstream. The CCP has the ability to perform several operations simultaneously, but only one interrupt. When implemented as a PCI device and using MSI-X/MSI interrupts, use a tasklet model to service interrupts. By disabling and enabling interrupts from the CCP, coupled with the queuing that tasklets provide, we can ensure that all events (occurring on the device) are recognized and serviced. This change fixes a problem wherein 2 or more busy queues can cause notification bits to change state while a (CCP) interrupt is being serviced, but after the queue state has been evaluated. This results in the event being 'lost' and the queue hanging, waiting to be serviced. Since the status bits are never fully de-asserted, the CCP never generates another interrupt (all bits zero -> one or more bits one), and no further CCP operations will be executed. Signed-off-by: Gary R Hook Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit 595c7ad3c64bf99b0aed83ddeb2183735fbf28d7 Author: Gary R Hook Date: Thu Apr 20 15:24:22 2017 -0500 crypto: ccp - Disable interrupts early on unload commit 116591fe3eef11c6f06b662c9176385f13891183 upstream. Ensure that we disable interrupts first when shutting down the driver. 
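The tasklet model described in the two "Change ISR handler method" entries above follows a common kernel pattern: the hard interrupt handler masks the device's interrupt sources and defers the real work to a tasklet, which re-enables them only once all pending events have been serviced. A sketch of that pattern with hypothetical ccp helper and field names, not the actual driver code:

        static irqreturn_t ccp_irq_handler(int irq, void *data)
        {
                struct ccp_device *ccp = data;

                ccp_disable_queue_interrupts(ccp);      /* mask at the device */
                tasklet_schedule(&ccp->irq_tasklet);    /* defer to tasklet context */
                return IRQ_HANDLED;
        }

        static void ccp_irq_bh(unsigned long data)
        {
                struct ccp_device *ccp = (struct ccp_device *)data;

                ccp_service_pending_events(ccp);        /* drain every queue's status bits */
                ccp_enable_queue_interrupts(ccp);       /* unmask only when fully drained */
        }

        /* registered at init time with something like:
         * tasklet_init(&ccp->irq_tasklet, ccp_irq_bh, (unsigned long)ccp);
         */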
Signed-off-by: Gary R Hook Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit e1adc5e04af5add90c25f285df7516a86cfc770d Author: Gary R Hook Date: Thu Apr 20 15:24:09 2017 -0500 crypto: ccp - Use only the relevant interrupt bits commit 56467cb11cf8ae4db9003f54b3d3425b5f07a10a upstream. Each CCP queue can produce interrupts for 4 conditions: operation complete, queue empty, error, and queue stopped. This driver only works with completion and error events. Signed-off-by: Gary R Hook Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit 7ae1df9048db86b460dff8574b8d82db1c941cc1 Author: Stephan Mueller Date: Mon Apr 24 11:15:23 2017 +0200 crypto: algif_aead - Require setkey before accept(2) commit 2a2a251f110576b1d89efbd0662677d7e7db21a8 upstream. Some cipher implementations will crash if you try to use them without calling setkey first. This patch adds a check so that the accept(2) call will fail with -ENOKEY if setkey hasn't been done on the socket yet. Fixes: 400c40cf78da ("crypto: algif - add AEAD support") Signed-off-by: Stephan Mueller Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit fe51605c951264712f52afb3226a640df330e784 Author: Krzysztof Kozlowski Date: Fri Mar 17 16:49:19 2017 +0200 crypto: s5p-sss - Close possible race for completed requests commit 42d5c176b76e190a4a3e0dfeffdae661755955b6 upstream. The driver is capable of handling only one request at a time, and it stores that request in its state container struct s5p_aes_dev. The stored request must be protected between concurrent invocations (e.g. completing the current request and scheduling a new one). A combination of a lock and a "busy" field is used for that purpose. When the "busy" field is true, the driver will not accept a new request and thus will not overwrite the data currently being handled. However, commit 28b62b145868 ("crypto: s5p-sss - Fix spinlock recursion on LRW(AES)") moved some of the writes to the "busy" field out of a lock-protected critical section. This might lead to a race between completing the current request and scheduling a new one. Effectively, the request completion might try to operate on the new crypto request. Fixes: 28b62b145868 ("crypto: s5p-sss - Fix spinlock recursion on LRW(AES)") Signed-off-by: Krzysztof Kozlowski Reviewed-by: Bartlomiej Zolnierkiewicz Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit 635aff41e59a42b7a13884ab65de78df32800a9b Author: Mike Snitzer Date: Sat Apr 22 17:22:09 2017 -0400 block: fix blk_integrity_register to use template's interval_exp if not 0 commit 2859323e35ab5fc42f351fbda23ab544eaa85945 upstream. When registering an integrity profile: if the template's interval_exp is not 0 use it, otherwise use the ilog2() of the logical block size of the provided gendisk. This fixes a long-standing DM linear target bug where it cannot pass integrity data to the underlying device if its logical block size conflicts with the underlying device's logical block size. Reported-by: Mikulas Patocka Signed-off-by: Mike Snitzer Acked-by: Martin K. Petersen Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman commit 5c5d86be4f3fdb95cf9356687ca79af6cc325177 Author: Marc Zyngier Date: Thu Apr 27 19:06:48 2017 +0100 arm64: KVM: Fix decoding of Rt/Rt2 when trapping AArch32 CP accesses commit c667186f1c01ca8970c785888868b7ffd74e51ee upstream. Our 32bit CP14/15 handling inherited some of the ARMv7 code for handling the trapped system registers, completely missing the fact that the fields for Rt and Rt2 are now 5 bit wide, and not 4...
Let's fix it, and provide an accessor for the most common Rt case. Reviewed-by: Christoffer Dall Signed-off-by: Marc Zyngier Signed-off-by: Christoffer Dall Signed-off-by: Greg Kroah-Hartman commit 8348ffba88e50c1ca2f222fced1996d0f026f89e Author: Andrew Jones Date: Tue Apr 18 17:59:58 2017 +0200 KVM: arm/arm64: fix races in kvm_psci_vcpu_on commit 6c7a5dce22b3f3cc44be098e2837fa6797edb8b8 upstream. Fix potential races in kvm_psci_vcpu_on() by taking the kvm->lock mutex. In general, it's a bad idea to allow more than one PSCI_CPU_ON to process the same target VCPU at the same time. One such problem that may arise is that one PSCI_CPU_ON could be resetting the target vcpu, which fills the entire sys_regs array with a temporary value including the MPIDR register, while another looks up the VCPU based on the MPIDR value, resulting in no target VCPU found. Resolves both races found with the kvm-unit-tests/arm/psci unit test. Reviewed-by: Marc Zyngier Reviewed-by: Christoffer Dall Reported-by: Levente Kurusa Suggested-by: Christoffer Dall Signed-off-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Greg Kroah-Hartman commit 74cbcb5afa75cab817b4c9fc42bdfadb15500e7b Author: David Hildenbrand Date: Thu Mar 23 11:46:03 2017 +0100 KVM: x86: fix user triggerable warning in kvm_apic_accept_events() commit 28bf28887976d8881a3a59491896c718fade7355 upstream. If we already entered/are about to enter SMM, don't allow switching to INIT/SIPI_RECEIVED, otherwise the next call to kvm_apic_accept_events() will report a warning. Same applies if we are already in MP state INIT_RECEIVED and SMM is requested to be turned on. Refuse to set the VCPU events in this case. Fixes: cd7764fe9f73 ("KVM: x86: latch INITs while in system management mode") Reported-by: Dmitry Vyukov Signed-off-by: David Hildenbrand Signed-off-by: Radim Krčmář Signed-off-by: Greg Kroah-Hartman commit f22d13c45f2d607b58a867a7e8c3d6fefd9385bd Author: Vince Weaver Date: Tue May 2 14:08:50 2017 -0400 perf/x86: Fix Broadwell-EP DRAM RAPL events commit 33b88e708e7dfa58dc896da2a98f5719d2eb315c upstream. It appears as though the Broadwell-EP DRAM units share the special units quirk with Haswell-EP/KNL. Without this patch, you get really high results (a single DRAM using 20W of power). The powercap driver in drivers/powercap/intel_rapl.c already has this change. Signed-off-by: Vince Weaver Cc: Alexander Shishkin Cc: Arnaldo Carvalho de Melo Cc: Jiri Olsa Cc: Kan Liang Cc: Linus Torvalds Cc: Peter Zijlstra Cc: Stephane Eranian Cc: Stephane Eranian Cc: Thomas Gleixner Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar Signed-off-by: Greg Kroah-Hartman commit 29d07bb20ee5954d87e78e808fac9503652b3ec6 Author: Richard Weinberger Date: Sat Apr 1 00:41:57 2017 +0200 um: Fix PTRACE_POKEUSER on x86_64 commit 9abc74a22d85ab29cef9896a2582a530da7e79bf upstream. This is broken since ever but sadly nobody noticed. Recent versions of GDB set DR_CONTROL unconditionally and UML dies due to a heap corruption. It turns out that the PTRACE_POKEUSER was copy&pasted from i386 and assumes that addresses are 4 bytes long. Fix that by using 8 as address size in the calculation. Reported-by: jie cao Signed-off-by: Richard Weinberger Signed-off-by: Greg Kroah-Hartman commit efbd8cc8f6f8ddb81079c0e416358d101f46f5e7 Author: Ben Hutchings Date: Tue May 9 18:00:43 2017 +0100 x86, pmem: Fix cache flushing for iovec write < 8 bytes commit 8376efd31d3d7c44bd05be337adde023cc531fa1 upstream. 
Commit 11e63f6d920d added cache flushing for unaligned writes from an iovec, covering the first and last cache line of a >= 8 byte write and the first cache line of a < 8 byte write. But an unaligned write of 2-7 bytes can still cover two cache lines, so make sure we flush both in that case. Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...") Signed-off-by: Ben Hutchings Signed-off-by: Dan Williams Signed-off-by: Greg Kroah-Hartman commit f0896a0d1e6dada4ef24160cde8f303dfaa874f5 Author: Andy Lutomirski Date: Wed Mar 22 14:32:29 2017 -0700 selftests/x86/ldt_gdt_32: Work around a glibc sigaction() bug commit 65973dd3fd31151823f4b8c289eebbb3fb7e6bc0 upstream. i386 glibc is buggy and calls the sigaction syscall incorrectly. This is asymptomatic for normal programs, but it blows up on programs that do evil things with segmentation. The ldt_gdt self-test is an example of such an evil program. This doesn't appear to be a regression -- I think I just got lucky with the uninitialized memory that glibc threw at the kernel when I wrote the test. This hackish fix manually issues sigaction(2) syscalls to undo the damage. Without the fix, ldt_gdt_32 segfaults; with the fix, it passes for me. See: https://sourceware.org/bugzilla/show_bug.cgi?id=21269 Signed-off-by: Andy Lutomirski Cc: Boris Ostrovsky Cc: Borislav Petkov Cc: Brian Gerst Cc: Denys Vlasenko Cc: H. Peter Anvin Cc: Josh Poimboeuf Cc: Juergen Gross Cc: Linus Torvalds Cc: Peter Zijlstra Cc: Thomas Garnier Cc: Thomas Gleixner Link: http://lkml.kernel.org/r/aaab0f9f93c9af25396f01232608c163a760a668.1490218061.git.luto@kernel.org Signed-off-by: Ingo Molnar Signed-off-by: Greg Kroah-Hartman commit c4b0426385eaf6e205488a348acb32ab1ee162ec Author: Ashish Kalra Date: Wed Apr 19 20:50:15 2017 +0530 x86/boot: Fix BSS corruption/overwrite bug in early x86 kernel startup commit d594aa0277e541bb997aef0bc0a55172d8138340 upstream. The minimum size for a new stack (512 bytes) setup for arch/x86/boot components when the bootloader does not setup/provide a stack for the early boot components is not "enough". The setup code executing as part of early kernel startup code, uses the stack beyond 512 bytes and accidentally overwrites and corrupts part of the BSS section. This is exposed mostly in the early video setup code, where it was corrupting BSS variables like force_x, force_y, which in-turn affected kernel parameters such as screen_info (screen_info.orig_video_cols) and later caused an exception/panic in console_init(). Most recent boot loaders setup the stack for early boot components, so this stack overwriting into BSS section issue has not been exposed. Signed-off-by: Ashish Kalra Cc: Andy Lutomirski Cc: Borislav Petkov Cc: Brian Gerst Cc: Denys Vlasenko Cc: H. Peter Anvin Cc: Josh Poimboeuf Cc: Linus Torvalds Cc: Peter Zijlstra Cc: Thomas Gleixner Link: http://lkml.kernel.org/r/20170419152015.10011-1-ashishkalra@Ashishs-MacBook-Pro.local Signed-off-by: Ingo Molnar Signed-off-by: Greg Kroah-Hartman commit b1a8c141c88ce62d18350ceac7eca02d08ad1960 Author: Guenter Roeck Date: Mon Mar 20 14:30:50 2017 -0700 usb: hub: Do not attempt to autosuspend disconnected devices commit f5cccf49428447dfbc9edb7a04bb8fc316269781 upstream. While running a bind/unbind stress test with the dwc3 usb driver on rk3399, the following crash was observed. 
Unable to handle kernel NULL pointer dereference at virtual address 00000218 pgd = ffffffc00165f000 [00000218] *pgd=000000000174f003, *pud=000000000174f003, *pmd=0000000001750003, *pte=00e8000001751713 Internal error: Oops: 96000005 [#1] PREEMPT SMP Modules linked in: uinput uvcvideo videobuf2_vmalloc cmac ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat rfcomm xt_mark fuse bridge stp llc zram btusb btrtl btbcm btintel bluetooth ip6table_filter mwifiex_pcie mwifiex cfg80211 cdc_ether usbnet r8152 mii joydev snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq snd_seq_device ppp_async ppp_generic slhc tun CPU: 1 PID: 29814 Comm: kworker/1:1 Not tainted 4.4.52 #507 Hardware name: Google Kevin (DT) Workqueue: pm pm_runtime_work task: ffffffc0ac540000 ti: ffffffc0af4d4000 task.ti: ffffffc0af4d4000 PC is at autosuspend_check+0x74/0x174 LR is at autosuspend_check+0x70/0x174 ... Call trace: [] autosuspend_check+0x74/0x174 [] usb_runtime_idle+0x20/0x40 [] __rpm_callback+0x48/0x7c [] rpm_idle+0x1e8/0x498 [] pm_runtime_work+0x88/0xcc [] process_one_work+0x390/0x6b8 [] worker_thread+0x480/0x610 [] kthread+0x164/0x178 [] ret_from_fork+0x10/0x40 Source: (gdb) l *0xffffffc00080dcc0 0xffffffc00080dcc0 is in autosuspend_check (drivers/usb/core/driver.c:1778). 1773 /* We don't need to check interfaces that are 1774 * disabled for runtime PM. Either they are unbound 1775 * or else their drivers don't support autosuspend 1776 * and so they are permanently active. 1777 */ 1778 if (intf->dev.power.disable_depth) 1779 continue; 1780 if (atomic_read(&intf->dev.power.usage_count) > 0) 1781 return -EBUSY; 1782 w |= intf->needs_remote_wakeup; Code analysis shows that intf is set to NULL in usb_disable_device() prior to setting actconfig to NULL. At the same time, usb_runtime_idle() does not lock the usb device, and neither does any of the functions in the traceback. This means that there is no protection against a race condition where usb_disable_device() is removing dev->actconfig->interface[] pointers while those are being accessed from autosuspend_check(). To solve the problem, synchronize and validate device state between autosuspend_check() and usb_disconnect(). Acked-by: Alan Stern Signed-off-by: Guenter Roeck Signed-off-by: Greg Kroah-Hartman commit 5830c376e3afad4272e31e2eab87f04333f0ca80 Author: Guenter Roeck Date: Mon Mar 20 11:16:11 2017 -0700 usb: hub: Fix error loop seen after hub communication errors commit 245b2eecee2aac6fdc77dcafaa73c33f9644c3c7 upstream. While stress testing a usb controller using a bind/unbind looop, the following error loop was observed. 
usb 7-1.2: new low-speed USB device number 3 using xhci-hcd usb 7-1.2: hub failed to enable device, error -108 usb 7-1-port2: cannot disable (err = -22) usb 7-1-port2: couldn't allocate usb_device usb 7-1-port2: cannot disable (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: activate --> -22 hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: activate --> -22 hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: activate --> -22 hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: activate --> -22 hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: activate --> -22 hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: activate --> -22 hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: activate --> -22 hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: activate --> -22 hub 7-1:1.0: hub_ext_port_status failed (err = -22) hub 7-1:1.0: hub_ext_port_status failed (err = -22) ** 57 printk messages dropped ** hub 7-1:1.0: activate --> -22 ** 82 printk messages dropped ** hub 7-1:1.0: hub_ext_port_status failed (err = -22) This continues forever. After adding tracebacks into the code, the call sequence leading to this is found to be as follows. [] hub_activate+0x368/0x7b8 [] hub_resume+0x2c/0x3c [] usb_resume_interface.isra.6+0x128/0x158 [] usb_suspend_both+0x1e8/0x288 [] usb_runtime_suspend+0x3c/0x98 [] __rpm_callback+0x48/0x7c [] rpm_callback+0xa8/0xd4 [] rpm_suspend+0x84/0x758 [] rpm_idle+0x2c8/0x498 [] __pm_runtime_idle+0x60/0xac [] usb_autopm_put_interface+0x6c/0x7c [] hub_event+0x10ac/0x12ac [] process_one_work+0x390/0x6b8 [] worker_thread+0x480/0x610 [] kthread+0x164/0x178 [] ret_from_fork+0x10/0x40 kick_hub_wq() is called from hub_activate() even after failures to communicate with the hub. This results in an endless sequence of hub event -> hub activate -> wq trigger -> hub event -> ... Provide two solutions for the problem. - Only trigger the hub event queue if communication with the hub is successful. - After a suspend failure, only resume already suspended interfaces if the communication with the device is still possible. Each of the changes fixes the observed problem. Use both to improve robustness. Acked-by: Alan Stern Signed-off-by: Guenter Roeck Signed-off-by: Greg Kroah-Hartman commit 19c9dacddf7d165daf5c039c0549a42f664680fe Author: Alexey Brodkin Date: Thu Apr 13 15:33:34 2017 +0300 usb: Make sure usb/phy/of gets built-in commit 3d6159640da9c9175d1ca42f151fc1a14caded59 upstream. DWC3 driver uses of_usb_get_phy_mode() which is implemented in drivers/usb/phy/of.c and in bare minimal configuration it might not be pulled in kernel binary. In case of ARC or ARM this could be easily reproduced with "allnodefconfig" +CONFIG_USB=m +CONFIG_USB_DWC3=m. On building all ends-up with: ---------------------->8------------------ Kernel: arch/arm/boot/Image is ready Kernel: arch/arm/boot/zImage is ready Building modules, stage 2. MODPOST 5 modules ERROR: "of_usb_get_phy_mode" [drivers/usb/dwc3/dwc3.ko] undefined! 
make[1]: *** [__modpost] Error 1 make: *** [modules] Error 2 ---------------------->8------------------ Signed-off-by: Alexey Brodkin Cc: Greg Kroah-Hartman Cc: Masahiro Yamada Cc: Geert Uytterhoeven Cc: Nicolas Pitre Cc: Thomas Gleixner Cc: Felipe Balbi Cc: Felix Fietkau Cc: Jeremy Kerr Cc: linux-snps-arc@lists.infradead.org Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit 934c4e338e7e47ea5dd0d642a1615f7d0e42c70a Author: Romain Izard Date: Fri Mar 10 14:11:41 2017 +0100 usb: gadget: legacy gadgets are optional commit 6e253d0fbc665b36192b8ed3cecdbb65b413a1eb upstream. With commit bc49d1d17dcf ("usb: gadget: don't couple configfs to legacy gadgets"), it is possible to build a modular kernel with both built-in configfs support and modular legacy gadget drivers. But when building a kernel without modules, it is also necessary to be able to build with configfs but without any legacy gadget driver. This was a possible configuration when USB_CONFIGFS was part of the choice options, but not anymore. Marking the choice for legacy gadget drivers as optional restores this. Fixes: bc49d1d17dcf ("usb: gadget: don't couple configfs to legacy gadgets") Signed-off-by: Romain Izard Signed-off-by: Felipe Balbi Signed-off-by: Greg Kroah-Hartman commit 7f7a4b58e257123573c9dd488ac7fea0f7a01df1 Author: Gustavo A. R. Silva Date: Mon Apr 3 22:48:40 2017 -0500 usb: misc: add missing continue in switch commit 2c930e3d0aed1505e86e0928d323df5027817740 upstream. Add missing continue in switch. Addresses-Coverity-ID: 1248733 Signed-off-by: Gustavo A. R. Silva Acked-by: Alan Stern Signed-off-by: Greg Kroah-Hartman commit 34006e9621c7824a0210dafb6163af020aec2aef Author: Ian Abbott Date: Fri Feb 17 11:09:09 2017 +0000 staging: comedi: jr3_pci: cope with jiffies wraparound commit 8ec04a491825e08068e92bed0bba7821893b6433 upstream. The timer expiry routine `jr3_pci_poll_dev()` checks for expiry by checking whether the absolute value of `jiffies` (stored in local variable `now`) is greater than the expected expiry time in jiffy units. This will fail when `jiffies` wraps around. Also, it seems to make sense to handle the expiry one jiffy earlier than the current test. Use `time_after_eq()` to check for expiry. Signed-off-by: Ian Abbott Signed-off-by: Greg Kroah-Hartman commit acb79180c55e39ed8121b292e5dcf125b54c7d49 Author: Ian Abbott Date: Fri Feb 17 11:09:08 2017 +0000 staging: comedi: jr3_pci: fix possible null pointer dereference commit 45292be0b3db0b7f8286683b376e2d9f949d11f9 upstream. For some reason, the driver does not consider allocation of the subdevice private data to be a fatal error when attaching the COMEDI device. It tests the subdevice private data pointer for validity at certain points, but omits some crucial tests. In particular, `jr3_pci_auto_attach()` calls `jr3_pci_alloc_spriv()` to allocate and initialize the subdevice private data, but the same function subsequently dereferences the pointer to access the `next_time_min` and `next_time_max` members without checking it first. The other missing test is in the timer expiry routine `jr3_pci_poll_dev()`, but it will crash before it gets that far. Fix the bug by returning `-ENOMEM` from `jr3_pci_auto_attach()` as soon as one of the calls to `jr3_pci_alloc_spriv()` returns `NULL`. The COMEDI core will subsequently call `jr3_pci_detach()` to clean up.
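A minimal sketch of the two comedi jr3_pci changes described above -- the wraparound-safe expiry test and treating a failed private-data allocation as fatal. The surrounding driver code is omitted and the signature of jr3_pci_alloc_spriv() is assumed here for illustration only:

        /* In jr3_pci_poll_dev(): time_after_eq() handles jiffies wraparound,
         * unlike a plain ">" comparison, and also fires one jiffy earlier. */
        if (time_after_eq(now, spriv->next_time_min)) {
                /* ... poll the channel ... */
        }

        /* In jr3_pci_auto_attach(): treat allocation failure as fatal. */
        spriv = jr3_pci_alloc_spriv(dev, s);
        if (!spriv)
                return -ENOMEM;  /* comedi core then calls jr3_pci_detach() to clean up */
        s->private = spriv;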
Signed-off-by: Ian Abbott Signed-off-by: Greg Kroah-Hartman commit 7a6b4c3721189bfc1a047c631e5101e9e3dfad36 Author: Aditya Shankar Date: Fri Apr 7 17:24:58 2017 +0530 staging: wilc1000: Fix problem with wrong vif index commit 0e490657c7214cce33fbca3d88227298c5c968ae upstream. The vif->idx value is always 0 for two interfaces. wl->vif_num = 0; loop { ... vif->idx = wl->vif_num; ... wl->vif_num = i; .... i++; ... } At present, vif->idx is assigned the value of wl->vif_num at the beginning of this block and the device is initialized based on this index value. In the next iteration, wl->vif_num is still 0 as it is only updated later but gets assigned to vif->idx in the beginning. This causes problems later when we try to reference a particular interface and also while configuring the firmware. This patch moves the assignment to vif->idx from the beginning of the block to after wl->vif_num is updated with the latest value of i. Fixes: 735bb39ca3be ("staging: wilc1000: simplify vif[i]->ndev accesses") Signed-off-by: Aditya Shankar Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit 4097eda73b4cbb189ea7f0e6193b380339e39ce5 Author: Johan Hovold Date: Wed Apr 26 12:23:04 2017 +0200 staging: gdm724x: gdm_mux: fix use-after-free on module unload commit b58f45c8fc301fe83ee28cad3e64686c19e78f1c upstream. Make sure to deregister the USB driver before releasing the tty driver to avoid use-after-free in the USB disconnect callback where the tty devices are deregistered. Fixes: 61e121047645 ("staging: gdm7240: adding LTE USB driver") Cc: Won Kang Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman commit 808dc8810896bb8ea1298b4f1ab14100ab787ca3 Author: Malcolm Priestley Date: Sat Apr 22 11:14:57 2017 +0100 staging: vt6656: use off stack for out buffer USB transfers. commit 12ecd24ef93277e4e5feaf27b0b18f2d3828bc5e upstream. Since 4.9 mandated that USB buffers be heap-allocated, this causes the driver to fail. Since there is a wide range of buffer sizes, use kmemdup to create the allocated buffer. Signed-off-by: Malcolm Priestley Signed-off-by: Greg Kroah-Hartman commit 4f19197ce58d2673ada1f968129c9647d8eb138d Author: Malcolm Priestley Date: Sat Apr 22 11:14:58 2017 +0100 staging: vt6656: use off stack for in buffer USB transfers. commit 05c0cf88bec588a7cb34de569acd871ceef26760 upstream. Since 4.9 mandated that USB buffers be heap-allocated, this causes the driver to fail. Create a heap buffer for USB transfers. Signed-off-by: Malcolm Priestley Signed-off-by: Greg Kroah-Hartman commit 5b92090a53eb584276454588e81db105dcf07954 Author: Bjørn Mork Date: Fri Apr 21 10:01:29 2017 +0200 USB: Revert "cdc-wdm: fix "out-of-sync" due to missing notifications" commit 19445816996d1a89682c37685fe95959631d9f32 upstream. This reverts commit 833415a3e781 ("cdc-wdm: fix "out-of-sync" due to missing notifications"). There have been several reports of wdm_read returning unexpected EIO errors with QMI devices using the qmi_wwan driver. The reporters confirm that reverting prevents these errors. I have been unable to reproduce the bug myself, and have no explanation to offer either. But reverting is the safe choice here, given that the commit was an attempt to work around a firmware problem. Living with a firmware problem is still better than adding driver bugs.
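For the two vt6656 entries above (moving USB transfer buffers off the stack), the usual pattern is to duplicate the caller's stack data into a heap buffer with kmemdup() before handing it to the USB core, since buffers submitted for USB transfers may be used for DMA and must not live on the stack. A generic sketch of that pattern, not the actual driver code:

        u8 *buf;
        int ret;

        buf = kmemdup(data, len, GFP_KERNEL);   /* heap copy of the on-stack data */
        if (!buf)
                return -ENOMEM;

        ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
                              request, request_type, value, index,
                              buf, len, 1000 /* timeout in ms */);
        kfree(buf);
        return ret;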
Reported-by: Kasper Holtze Reported-by: Aleksander Morgado Reported-by: Daniele Palmas Fixes: 833415a3e781 ("cdc-wdm: fix "out-of-sync" due to missing notifications") Signed-off-by: Bjørn Mork Acked-by: Oliver Neukum Signed-off-by: Greg Kroah-Hartman commit 32dd9987fbd9a0f1a80484146aeafdb7f3b88e49 Author: Ajay Kaher Date: Tue Mar 28 08:09:32 2017 -0400 USB: Proper handling of Race Condition when two USB class drivers try to call init_usb_class simultaneously commit 2f86a96be0ccb1302b7eee7855dbee5ce4dc5dfb upstream. There is a race condition when two USB class drivers try to call init_usb_class() at the same time, which leads to a crash. code path: probe->usb_register_dev->init_usb_class To solve this, mutex locking has been added in init_usb_class() and destroy_usb_class(). As pointed out by Alan, the "if (usb_class)" test was removed from destroy_usb_class() because usb_class can never be NULL there. Signed-off-by: Ajay Kaher Acked-by: Alan Stern Signed-off-by: Greg Kroah-Hartman commit e349a57233228a8f479fa833a3606312ac830610 Author: Marek Vasut Date: Tue Apr 18 20:07:56 2017 +0200 USB: serial: ftdi_sio: add device ID for Microsemi/Arrow SF2PLUS Dev Kit commit 31c5d1922b90ddc1da6a6ddecef7cd31f17aa32b upstream. This development kit has an FT4232 on it with a custom USB VID/PID. The FT4232 provides four UARTs, but only two are used. UART 0 is used by the FlashPro5 programmer and UART 2 is connected to the SmartFusion2 CortexM3 SoC UART port. Note that the USB VID is registered to Actel according to the Linux USB VID database, but Actel has since been acquired by Microsemi. Signed-off-by: Marek Vasut Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman commit dffe5d4b0511cada0e8a833b1e3297df65e4eadb Author: Peter Chen Date: Wed Apr 19 16:55:52 2017 +0300 usb: host: xhci: print correct command ring address commit 6fc091fb0459ade939a795bfdcaf645385b951d4 upstream. Print the correct command ring address using 'val_64'. Signed-off-by: Peter Chen Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman commit a561f35aeaa92b6ecf083453fbf2fb30f58e1520 Author: Roger Quadros Date: Fri Apr 7 17:57:12 2017 +0300 usb: xhci: bInterval quirk for TI TUSB73x0 commit 69307ccb9ad7ccb653e332de68effdeaaab6907d upstream. As per [1] issue #4, "The periodic EP scheduler always tries to schedule the EPs that have large intervals (interval equal to or greater than 128 microframes) into different microframes. So it maintains an internal counter and increments for each large interval EP added. When the counter is greater than 128, the scheduler rejects the new EP. So when the hub re-enumerated 128 times, it triggers this condition." This results in a bandwidth error when devices with periodic endpoints (ISO/INT) having bInterval > 7 are plugged and unplugged several times on a TUSB73x0 XHCI host. Work around this issue by limiting the bInterval to 7 (i.e. interval to 6) for High-speed or faster periodic endpoints. [1] - http://www.ti.com/lit/er/sllz076/sllz076.pdf Signed-off-by: Roger Quadros Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman commit b3e01cd15d1740ac6edba847fa0797ba0418a617 Author: Nicholas Bellinger Date: Tue Apr 25 10:55:12 2017 -0700 iscsi-target: Set session_fall_back_to_erl0 when forcing reinstatement commit 197b806ae5db60c6f609d74da04ddb62ea5e1b00 upstream.
While testing modification of per se_node_acl queue_depth forcing session reinstatement via lio_target_nacl_cmdsn_depth_store() -> core_tpg_set_initiator_node_queue_depth(), a hung task bug was triggered when changing cmdsn_depth invoked session reinstatement while an iscsi login was already waiting for session reinstatement to complete. This can happen when an outstanding se_cmd descriptor is taking a long time to complete, and session reinstatement from iscsi login or cmdsn_depth change occurs concurrently. To address this bug, explicitly set session_fall_back_to_erl0 = 1 when forcing session reinstatement, so session reinstatement is not attempted if an active session is already being shut down. This patch has been tested with two scenarios: first, when an iscsi login is blocked waiting for iscsi session reinstatement to complete followed by a queue_depth change via configfs, and second, when a queue_depth change via configfs is blocked followed by an iscsi-login-driven session reinstatement. Note this patch depends on commit d36ad77f702 to handle multiple sessions per se_node_acl when changing cmdsn_depth, and for pre-v4.5 kernels it will need to be included in stable as well. Reported-by: Gary Guo Tested-by: Gary Guo Cc: Gary Guo Signed-off-by: Nicholas Bellinger Signed-off-by: Greg Kroah-Hartman commit d39ebfe9a1b7f5c04fb4ecd7d0e9950277ec60ea Author: Bart Van Assche Date: Thu May 4 15:50:47 2017 -0700 target/fileio: Fix zero-length READ and WRITE handling commit 59ac9c078141b8fd0186c0b18660a1b2c24e724e upstream. This patch fixes zero-length READ and WRITE handling in target/FILEIO, which was broken a long time back by: commit d81cb44726f050d7cf1be4afd9cb45d153b52066 Author: Paolo Bonzini Date: Mon Sep 17 16:36:11 2012 -0700 target: go through normal processing for all zero-length commands which moved zero-length READ and WRITE completion out of target-core to submission into the backend driver code. To address this, go ahead and invoke target_complete_cmd() for any non-negative return value in fd_do_rw(). Signed-off-by: Bart Van Assche Reviewed-by: Hannes Reinecke Reviewed-by: Christoph Hellwig Cc: Andy Grover Cc: David Disseldorp Signed-off-by: Nicholas Bellinger Signed-off-by: Greg Kroah-Hartman commit f78392c0160c418dd88d45591686a8e527dfd868 Author: Nicholas Bellinger Date: Tue Apr 11 16:24:16 2017 -0700 target: Fix compare_and_write_callback handling for non GOOD status commit a71a5dc7f833943998e97ca8fa6a4c708a0ed1a9 upstream. Following the bugfix for handling non SAM_STAT_GOOD COMPARE_AND_WRITE status during the COMMIT phase in commit 9b2792c3da1, the same bug exists for the READ phase as well. This would manifest first as a lost SCSI response, and eventually a hung task during fabric driver logout or re-login, as existing shutdown logic waited for the COMPARE_AND_WRITE se_cmd->cmd_kref to reach zero. To address this bug, compare_and_write_callback() has been changed to set post_ret = 1 and return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE as necessary to signal failure status. Reported-by: Bill Borsari Cc: Bill Borsari Tested-by: Gary Guo Cc: Gary Guo Signed-off-by: Nicholas Bellinger Signed-off-by: Greg Kroah-Hartman commit 8fe6ee0b6e695b8e5de802c9d0a1360493653ee8 Author: Juergen Gross Date: Wed May 10 06:08:44 2017 +0200 xen: adjust early dom0 p2m handling to xen hypervisor behavior commit 69861e0a52f8733355ce246f0db15e1b240ad667 upstream. When booted as a pv-guest, the p2m list presented by Xen is already mapped to virtual addresses.
In dom0 case the hypervisor might make use of 2M- or 1G-pages for this mapping. Unfortunately while being properly aligned in virtual and machine address space, those pages might not be aligned properly in guest physical address space. So when trying to obtain the guest physical address of such a page pud_pfn() and pmd_pfn() must be avoided as those will mask away guest physical address bits not being zero in this special case. Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich Signed-off-by: Juergen Gross Signed-off-by: Greg Kroah-Hartman