commit 95afdc9fef289a42958570d5a4bcc9c939a5b5b8 Author: Ben Hutchings Date: Fri Nov 27 12:48:26 2015 +0000 Linux 3.2.74

commit fcb2781782b61631db4ed31e98757795eacd31cb Author: Christophe Leroy Date: Wed May 6 17:26:47 2015 +0200 splice: sendfile() at once fails for big files

commit 0ff28d9f4674d781e492bcff6f32f0fe48cf0fed upstream.

Using sendfile() with the small program below to get MD5 sums of some files, it appears that big files (over 64 kbytes on a system with 4k pages) get a wrong MD5 sum while small files get the correct sum. This program uses sendfile() to send a file to an AF_ALG socket for hashing.

/* md5sum2.c */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/socket.h>
#include <sys/sendfile.h>
#include <linux/if_alg.h>

int main(int argc, char **argv)
{
	int sk = socket(AF_ALG, SOCK_SEQPACKET, 0);
	struct stat st;
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type = "hash",
		.salg_name = "md5",
	};
	int n;

	bind(sk, (struct sockaddr *)&sa, sizeof(sa));

	for (n = 1; n < argc; n++) {
		int size;
		int offset = 0;
		char buf[4096];
		int fd;
		int sko;
		int i;

		fd = open(argv[n], O_RDONLY);
		sko = accept(sk, NULL, 0);
		fstat(fd, &st);
		size = st.st_size;
		sendfile(sko, fd, &offset, size);
		size = read(sko, buf, sizeof(buf));
		for (i = 0; i < size; i++)
			printf("%2.2x", buf[i]);
		printf(" %s\n", argv[n]);
		close(fd);
		close(sko);
	}
	exit(0);
}

Test below is done using official Linux patch files. The first result is with a software-based md5sum, the second with the program above.

root@vgoip:~# ls -l patch-3.6.*
-rw-r--r-- 1 root root 64011 Aug 24 12:01 patch-3.6.2.gz
-rw-r--r-- 1 root root 94131 Aug 24 12:01 patch-3.6.3.gz
root@vgoip:~# md5sum patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
c5e8f687878457db77cb7158c38a7e43 patch-3.6.3.gz
root@vgoip:~# ./md5sum2 patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
5fd77b24e68bb24dcc72d6e57c64790e patch-3.6.3.gz

After investigation, it appears that sendfile() sends the files in blocks of 64 kbytes (16 times PAGE_SIZE). The problem is that at the end of each block the SPLICE_F_MORE flag is missing, so the hashing operation is reset as if it were the end of the file. This patch adds SPLICE_F_MORE to the flags when more data is pending. With the patch applied, we get the correct sums:

root@vgoip:~# md5sum patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
c5e8f687878457db77cb7158c38a7e43 patch-3.6.3.gz
root@vgoip:~# ./md5sum2 patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
c5e8f687878457db77cb7158c38a7e43 patch-3.6.3.gz

Signed-off-by: Christophe Leroy Signed-off-by: Jens Axboe Signed-off-by: Ben Hutchings

commit f79c83d6c41930362bc66fc71489e92975a2facf Author: Eric Dumazet Date: Mon Nov 2 07:50:07 2015 -0800 net: avoid NULL deref in inet_ctl_sock_destroy() [ Upstream commit 8fa677d2706d325d71dab91bf6e6512c05214e37 ] Under low memory conditions, tcp_sk_init() and icmp_sk_init() can both iterate over all possible cpus and call inet_ctl_sock_destroy() with an eventual NULL pointer dereference. Signed-off-by: Eric Dumazet Reported-by: Dmitry Vyukov Signed-off-by: David S. Miller Signed-off-by: Ben Hutchings

commit 33cf84ba8c25b40c7de52029efc8d4372725c95f Author: Ani Sinha Date: Fri Oct 30 16:54:31 2015 -0700 ipmr: fix possible race resulting from improper usage of IP_INC_STATS_BH() in preemptible context.
[ Upstream commit 44f49dd8b5a606870a1f21101522a0f9c4414784 ] Fixes the following kernel BUG : BUG: using __this_cpu_add() in preemptible [00000000] code: bash/2758 caller is __this_cpu_preempt_check+0x13/0x15 CPU: 0 PID: 2758 Comm: bash Tainted: P O 3.18.19 #2 ffffffff8170eaca ffff880110d1b788 ffffffff81482b2a 0000000000000000 0000000000000000 ffff880110d1b7b8 ffffffff812010ae ffff880007cab800 ffff88001a060800 ffff88013a899108 ffff880108b84240 ffff880110d1b7c8 Call Trace: [] dump_stack+0x52/0x80 [] check_preemption_disabled+0xce/0xe1 [] __this_cpu_preempt_check+0x13/0x15 [] ipmr_queue_xmit+0x647/0x70c [] ip_mr_forward+0x32f/0x34e [] ip_mroute_setsockopt+0xe03/0x108c [] ? get_parent_ip+0x11/0x42 [] ? pollwake+0x4d/0x51 [] ? default_wake_function+0x0/0xf [] ? get_parent_ip+0x11/0x42 [] ? __wake_up_common+0x45/0x77 [] ? _raw_spin_unlock_irqrestore+0x1d/0x32 [] ? __wake_up_sync_key+0x4a/0x53 [] ? sock_def_readable+0x71/0x75 [] do_ip_setsockopt+0x9d/0xb55 [] ? unix_seqpacket_sendmsg+0x3f/0x41 [] ? sock_sendmsg+0x6d/0x86 [] ? sockfd_lookup_light+0x12/0x5d [] ? SyS_sendto+0xf3/0x11b [] ? new_sync_read+0x82/0xaa [] compat_ip_setsockopt+0x3b/0x99 [] compat_raw_setsockopt+0x11/0x32 [] compat_sock_common_setsockopt+0x18/0x1f [] compat_SyS_setsockopt+0x1a9/0x1cf [] compat_SyS_socketcall+0x180/0x1e3 [] cstar_dispatch+0x7/0x1e Signed-off-by: Ani Sinha Acked-by: Eric Dumazet Signed-off-by: David S. Miller [bwh: Backported to 3.2: ipmr doesn't implement IPSTATS_MIB_OUTOCTETS] Signed-off-by: Ben Hutchings commit f114d9374ba3e42c86b112c8b4dbcba50a7330e7 Author: Sowmini Varadhan Date: Mon Oct 26 12:46:37 2015 -0400 RDS-TCP: Recover correctly from pskb_pull()/pksb_trim() failure in rds_tcp_data_recv [ Upstream commit 8ce675ff39b9958d1c10f86cf58e357efaafc856 ] Either of pskb_pull() or pskb_trim() may fail under low memory conditions. If rds_tcp_data_recv() ignores such failures, the application will receive corrupted data because the skb has not been correctly carved to the RDS datagram size. Avoid this by handling pskb_pull/pskb_trim failure in the same manner as the skb_clone failure: bail out of rds_tcp_data_recv(), and retry via the deferred call to rds_send_worker() that gets set up on ENOMEM from rds_tcp_read_sock() Signed-off-by: Sowmini Varadhan Acked-by: Santosh Shilimkar Signed-off-by: David S. Miller Signed-off-by: Ben Hutchings commit a8ab3e639d481658fc9da76de80485a5247c3a68 Author: Dan Carpenter Date: Mon Oct 19 13:16:49 2015 +0300 irda: precedence bug in irlmp_seq_hb_idx() [ Upstream commit 50010c20597d14667eff0fdb628309986f195230 ] This is decrementing the pointer, instead of the value stored in the pointer. KASan detects it as an out of bounds reference. Reported-by: "Berry Cheng 程君(成淼)" Signed-off-by: Dan Carpenter Signed-off-by: David S. Miller Signed-off-by: Ben Hutchings commit b208d27d457678cab6584d29c1db827a696c7f2b Author: Jann Horn Date: Wed Sep 9 15:38:28 2015 -0700 fs: if a coredump already exists, unlink and recreate with O_EXCL commit fbb1816942c04429e85dbf4c1a080accc534299e upstream. It was possible for an attacking user to trick root (or another user) into writing his coredumps into an attacker-readable, pre-existing file using rename() or link(), causing the disclosure of secret data from the victim process' virtual memory. Depending on the configuration, it was also possible to trick root into overwriting system files with coredumps. Fix that issue by never writing coredumps into existing files. 
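The requirements for the attack are listed next. For context, here is a minimal user-space sketch of the same pattern the fix applies in the kernel's coredump path (remove any pre-existing file, then recreate it with O_EXCL); the file name, message and error handling are illustrative only, not the kernel code:

/* core_excl.c - illustrative sketch: never write through a pre-existing
 * file; recreate it with O_EXCL so we only ever write a file we created. */
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	const char *path = "core";	/* placeholder name */
	static const char msg[] = "pretend this is the core dump\n";
	int fd;

	/* Remove any existing file so an attacker-planted link or rename
	 * target is never reused. */
	if (unlink(path) && errno != ENOENT) {
		perror("unlink");
		return 1;
	}

	/* O_CREAT | O_EXCL fails if someone recreates the name between the
	 * unlink and the open, instead of dumping into their file. */
	fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	write(fd, msg, sizeof(msg) - 1);
	close(fd);
	return 0;
}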
Requirements for the attack:
- The attack only applies if the victim's process has a nonzero RLIMIT_CORE and is dumpable.
- The attacker can trick the victim into coredumping into an attacker-writable directory D, either because the core_pattern is relative and the victim's cwd is attacker-writable or because an absolute core_pattern pointing to a world-writable directory is used.
- The attacker has one of these:
  A: on a system with protected_hardlinks=0: execute access to a folder containing a victim-owned, attacker-readable file on the same partition as D, and the victim-owned file will be deleted before the main part of the attack takes place. (In practice, there are lots of files that fulfill this condition, e.g. entries in Debian's /var/lib/dpkg/info/.) This does not apply to most Linux systems because most distros set protected_hardlinks=1.
  B: on a system with protected_hardlinks=1: execute access to a folder containing a victim-owned, attacker-readable and attacker-writable file on the same partition as D, and the victim-owned file will be deleted before the main part of the attack takes place. (This seems to be uncommon.)
  C: on any system, independent of protected_hardlinks: write access to a non-sticky folder containing a victim-owned, attacker-readable file on the same partition as D. (This seems to be uncommon.)
The basic idea is that the attacker moves the victim-owned file to where he expects the victim process to dump its core. The victim process dumps its core into the existing file, and the attacker reads the coredump from it. If the attacker can't move the file because he does not have write access to the containing directory, he can instead link the file to a directory he controls, then wait for the original link to the file to be deleted (because the kernel checks that the link count of the corefile is 1). A less reliable variant that requires D to be non-sticky works with link() and does not require deletion of the original link: link() the file into D, but then unlink() it directly before the kernel performs the link count check. On systems with protected_hardlinks=0, this variant allows an attacker to not only gain information from coredumps, but also clobber existing, victim-writable files with coredumps. (This could theoretically lead to a privilege escalation.) Signed-off-by: Jann Horn Cc: Kees Cook Cc: Al Viro Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds [bwh: Backported to 3.2: adjust filename, context] Signed-off-by: Ben Hutchings commit 0677d4e0ba08de16967336dfecd45ade5b010057 Author: Kees Cook Date: Mon Jul 30 14:39:15 2012 -0700 fs: make dumpable=2 require fully qualified path commit 9520628e8ceb69fa9a4aee6b57f22675d9e1b709 upstream. When the suid_dumpable sysctl is set to "2", and there is no core dump pipe defined in the core_pattern sysctl, a local user can cause core files to be written to root-writable directories, potentially with user-controlled content. This means an admin can unknowingly reintroduce a variation of CVE-2006-2451, allowing local users to gain root privileges. $ cat /proc/sys/fs/suid_dumpable 2 $ cat /proc/sys/kernel/core_pattern core $ ulimit -c unlimited $ cd / $ ls -l core ls: cannot access core: No such file or directory $ touch core touch: cannot touch `core': Permission denied $ OHAI="evil-string-here" ping localhost >/dev/null 2>&1 & $ pid=$!
$ sleep 1 $ kill -SEGV $pid $ ls -l core -rw------- 1 root kees 458752 Jun 21 11:35 core $ sudo strings core | grep evil OHAI=evil-string-here While cron has been fixed to abort reading a file when there is any parse error, there are still other sensitive directories that will read any file present and skip unparsable lines. Instead of introducing a suid_dumpable=3 mode and breaking all users of mode 2, this only disables the unsafe portion of mode 2 (writing to disk via relative path). Most users of mode 2 (e.g. Chrome OS) already use a core dump pipe handler, so this change will not break them. For the situations where a pipe handler is not defined but mode 2 is still active, crash dumps will only be written to fully qualified paths. If a relative path is defined (e.g. the default "core" pattern), dump attempts will trigger a printk yelling about the lack of a fully qualified path. Signed-off-by: Kees Cook Cc: Alexander Viro Cc: Alan Cox Cc: "Eric W. Biederman" Cc: Doug Ledford Cc: Serge Hallyn Cc: James Morris Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings Reviewed-by: James Morris commit beebd9fa9d8aeb8f1a3028acc1987c808b601e7d Author: Maciej W. Rozycki Date: Mon Oct 26 15:48:19 2015 +0000 binfmt_elf: Don't clobber passed executable's file header commit b582ef5c53040c5feef4c96a8f9585b6831e2441 upstream. Do not clobber the buffer space passed from `search_binary_handler' and originally preloaded by `prepare_binprm' with the executable's file header by overwriting it with its interpreter's file header. Instead keep the buffer space intact and directly use the data structure locally allocated for the interpreter's file header, fixing a bug introduced in 2.1.14 with loadable module support (linux-mips.org commit beb11695 [Import of Linux/MIPS 2.1.14], predating kernel.org repo's history). Adjust the amount of data read from the interpreter's file accordingly. This was not an issue before loadable module support, because back then `load_elf_binary' was executed only once for a given ELF executable, whether the function succeeded or failed. With loadable module support supported and enabled, upon a failure of `load_elf_binary' -- which may for example be caused by architecture code rejecting an executable due to a missing hardware feature requested in the file header -- a module load is attempted and then the function reexecuted by `search_binary_handler'. With the executable's file header replaced with its interpreter's file header the executable can then be erroneously accepted in this subsequent attempt. Signed-off-by: Maciej W. Rozycki Signed-off-by: Al Viro Signed-off-by: Ben Hutchings commit cf1857234d83095c437e2cc8346e5a2783667cf1 Author: David Howells Date: Wed Nov 4 15:20:42 2015 +0000 FS-Cache: Handle a write to the page immediately beyond the EOF marker commit 102f4d900c9c8f5ed89ae4746d493fe3ebd7ba64 upstream. Handle a write being requested to the page immediately beyond the EOF marker on a cache object. Currently this gets an assertion failure in CacheFiles because the EOF marker is used there to encode information about a partial page at the EOF - which could lead to an unknown blank spot in the file if we extend the file over it. The problem is actually in fscache where we check the index of the page being written against store_limit. 
store_limit is set to the number of pages that we're allowed to store by fscache_set_store_limit() - which means it's one more than the index of the last page we're allowed to store. The problem is that we permit writing to a page with an index _equal_ to the store limit - when we should reject that case. Whilst we're at it, change the triggered assertion in CacheFiles to just return -ENOBUFS instead. The assertion failure looks something like this: CacheFiles: Assertion failed 1000 < 7b1 is false ------------[ cut here ]------------ kernel BUG at fs/cachefiles/rdwr.c:962! ... RIP: 0010:[] [] cachefiles_write_page+0x273/0x2d0 [cachefiles] Signed-off-by: David Howells Signed-off-by: Al Viro [bwh: Backported to 3.2: we don't have __kernel_write() so keep using the open-coded equivalent] Signed-off-by: Ben Hutchings commit 8f2746bba547a5dc23edf74b6006a0a593dfc08f Author: Kinglong Mee Date: Wed Nov 4 15:20:24 2015 +0000 FS-Cache: Don't override netfs's primary_index if registering failed commit b130ed5998e62879a66bad08931a2b5e832da95c upstream. Only override netfs->primary_index when registering success. Signed-off-by: Kinglong Mee Signed-off-by: David Howells Signed-off-by: Al Viro [bwh: Backported to 3.2: no n_active or flags fields in fscache_cookie] Signed-off-by: Ben Hutchings commit bdb28a40d4d09250d442c03ffe1dfe6e88126c0a Author: Kinglong Mee Date: Wed Nov 4 15:20:15 2015 +0000 FS-Cache: Increase reference of parent after registering, netfs success commit 86108c2e34a26e4bec3c6ddb23390bf8cedcf391 upstream. If netfs exist, fscache should not increase the reference of parent's usage and n_children, otherwise, never be decreased. v2: thanks David's suggest, move increasing reference of parent if success use kmem_cache_free() freeing primary_index directly v3: don't move "netfs->primary_index->parent = &fscache_fsdef_index;" Signed-off-by: Kinglong Mee Signed-off-by: David Howells Signed-off-by: Al Viro Signed-off-by: Ben Hutchings commit b42506c6c820764f26e3036dfd733e0401525c88 Author: Paolo Bonzini Date: Tue Nov 10 09:14:39 2015 +0100 KVM: svm: unconditionally intercept #DB commit cbdb967af3d54993f5814f1cee0ed311a055377d upstream. This is needed to avoid the possibility that the guest triggers an infinite stream of #DB exceptions (CVE-2015-8104). VMX is not affected: because it does not save DR6 in the VMCS, it already intercepts #DB unconditionally. Reported-by: Jan Beulich Signed-off-by: Paolo Bonzini [bwh: Backported to 3.2, with thanks to Paolo: - update_db_bp_intercept() was called update_db_intercept() - The remaining call is in svm_guest_debug() rather than through svm_x86_ops] Signed-off-by: Ben Hutchings commit 1a513170dd8f7a56dcabc879a2ce235131d99225 Author: Eric Dumazet Date: Mon Nov 9 17:51:23 2015 -0800 net: fix a race in dst_release() commit d69bbf88c8d0b367cf3e3a052f6daadf630ee566 upstream. Only cpu seeing dst refcount going to 0 can safely dereference dst->flags. Otherwise an other cpu might already have freed the dst. Fixes: 27b75c95f10d ("net: avoid RCU for NOCACHE dst") Reported-by: Greg Thelen Signed-off-by: Eric Dumazet Signed-off-by: David S. Miller [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings commit 7abbc81bd03cd019f4d59bfba291d965acc5b5f0 Author: Filipe Manana Date: Mon Nov 9 18:06:38 2015 +0000 Btrfs: fix race when listing an inode's xattrs commit f1cd1f0b7d1b5d4aaa5711e8f4e4898b0045cb6d upstream. 
When listing a inode's xattrs we have a time window where we race against a concurrent operation for adding a new hard link for our inode that makes us not return any xattr to user space. In order for this to happen, the first xattr of our inode needs to be at slot 0 of a leaf and the previous leaf must still have room for an inode ref (or extref) item, and this can happen because an inode's listxattrs callback does not lock the inode's i_mutex (nor does the VFS does it for us), but adding a hard link to an inode makes the VFS lock the inode's i_mutex before calling the inode's link callback. If we have the following leafs: Leaf X (has N items) Leaf Y [ ... (257 INODE_ITEM 0) (257 INODE_REF 256) ] [ (257 XATTR_ITEM 12345), ... ] slot N - 2 slot N - 1 slot 0 The race illustrated by the following sequence diagram is possible: CPU 1 CPU 2 btrfs_listxattr() searches for key (257 XATTR_ITEM 0) gets path with path->nodes[0] == leaf X and path->slots[0] == N because path->slots[0] is >= btrfs_header_nritems(leaf X), it calls btrfs_next_leaf() btrfs_next_leaf() releases the path adds key (257 INODE_REF 666) to the end of leaf X (slot N), and leaf X now has N + 1 items searches for the key (257 INODE_REF 256), with path->keep_locks == 1, because that is the last key it saw in leaf X before releasing the path ends up at leaf X again and it verifies that the key (257 INODE_REF 256) is no longer the last key in leaf X, so it returns with path->nodes[0] == leaf X and path->slots[0] == N, pointing to the new item with key (257 INODE_REF 666) btrfs_listxattr's loop iteration sees that the type of the key pointed by the path is different from the type BTRFS_XATTR_ITEM_KEY and so it breaks the loop and stops looking for more xattr items --> the application doesn't get any xattr listed for our inode So fix this by breaking the loop only if the key's type is greater than BTRFS_XATTR_ITEM_KEY and skip the current key if its type is smaller. Signed-off-by: Filipe Manana [bwh: Backported to 3.2: old code used the trivial accessor btrfs_key_type()] Signed-off-by: Ben Hutchings commit 74d26be3e560b7d8b2abca3dcb5430e9893d1767 Author: Peter Oberparleiter Date: Tue Oct 27 10:49:54 2015 +0100 scsi_sysfs: Fix queue_ramp_up_period return code commit 863e02d0e173bb9d8cea6861be22820b25c076cc upstream. Writing a number to /sys/bus/scsi/devices//queue_ramp_up_period returns the value of that number instead of the number of bytes written. This behavior can confuse programs expecting POSIX write() semantics. Fix this by returning the number of bytes written instead. Signed-off-by: Peter Oberparleiter Reviewed-by: Hannes Reinecke Reviewed-by: Matthew R. Ochs Reviewed-by: Ewan D. Milne Signed-off-by: Martin K. Petersen Signed-off-by: Ben Hutchings commit 4edb9551ca35b2598ff1a605bf7ae75fd365deba Author: Peter Zijlstra Date: Mon Nov 2 10:50:51 2015 +0100 perf: Fix inherited events vs. tracepoint filters commit b71b437eedaed985062492565d9d421d975ae845 upstream. Arnaldo reported that tracepoint filters seem to misbehave (ie. not apply) on inherited events. The fix is obvious; filters are only set on the actual (parent) event, use the normal pattern of using this parent event for filters. This is safe because each child event has a reference to it. 
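For illustration, here is a minimal user-space sketch of the affected scenario: a tracepoint filter set on an inheritable perf event, which with this fix is also honoured by the events inherited into child tasks. The sched_switch tracepoint, the tracefs path and the filter string are assumptions made for the example, not part of the patch:

/* perf_filter.c - set a tracepoint filter on an inheritable event. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	long long id = -1;
	FILE *f;
	int fd;

	/* Tracepoint id; the path may differ depending on where tracefs
	 * (or debugfs) is mounted on the system. */
	f = fopen("/sys/kernel/debug/tracing/events/sched/sched_switch/id", "r");
	if (!f || fscanf(f, "%lld", &id) != 1) {
		perror("tracepoint id");
		return 1;
	}
	fclose(f);

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_TRACEPOINT;
	attr.size = sizeof(attr);
	attr.config = id;
	attr.disabled = 1;
	attr.inherit = 1;	/* children inherit this event */

	fd = perf_event_open(&attr, 0 /* this task */, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	/* The filter is set only on this (parent) event; with the fix above
	 * it also applies to the events inherited by forked children. */
	if (ioctl(fd, PERF_EVENT_IOC_SET_FILTER, "prev_pid != 0") < 0)
		perror("PERF_EVENT_IOC_SET_FILTER");

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... fork()/exec() the workload to be measured here ... */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	close(fd);
	return 0;
}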
Reported-by: Arnaldo Carvalho de Melo Tested-by: Arnaldo Carvalho de Melo Signed-off-by: Peter Zijlstra (Intel) Cc: Adrian Hunter Cc: Arnaldo Carvalho de Melo Cc: David Ahern Cc: Frédéric Weisbecker Cc: Jiri Olsa Cc: Jiri Olsa Cc: Linus Torvalds Cc: Peter Zijlstra Cc: Steven Rostedt Cc: Thomas Gleixner Cc: Wang Nan Link: http://lkml.kernel.org/r/20151102095051.GN17308@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar Signed-off-by: Ben Hutchings commit f5daa8cd066e89132bcf023a6d035fe59031c4e4 Author: Filipe Manana Date: Mon Nov 9 00:33:58 2015 +0000 Btrfs: fix race leading to BUG_ON when running delalloc for nodatacow commit 1d512cb77bdbda80f0dd0620a3b260d697fd581d upstream. If we are using the NO_HOLES feature, we have a tiny time window when running delalloc for a nodatacow inode where we can race with a concurrent link or xattr add operation leading to a BUG_ON. This happens because at run_delalloc_nocow() we end up casting a leaf item of type BTRFS_INODE_[REF|EXTREF]_KEY or of type BTRFS_XATTR_ITEM_KEY to a file extent item (struct btrfs_file_extent_item) and then analyse its extent type field, which won't match any of the expected extent types (values BTRFS_FILE_EXTENT_[REG|PREALLOC|INLINE]) and therefore trigger an explicit BUG_ON(1). The following sequence diagram shows how the race happens when running a no-cow dellaloc range [4K, 8K[ for inode 257 and we have the following neighbour leafs: Leaf X (has N items) Leaf Y [ ... (257 INODE_ITEM 0) (257 INODE_REF 256) ] [ (257 EXTENT_DATA 8192), ... ] slot N - 2 slot N - 1 slot 0 (Note the implicit hole for inode 257 regarding the [0, 8K[ range) CPU 1 CPU 2 run_dealloc_nocow() btrfs_lookup_file_extent() --> searches for a key with value (257 EXTENT_DATA 4096) in the fs/subvol tree --> returns us a path with path->nodes[0] == leaf X and path->slots[0] == N because path->slots[0] is >= btrfs_header_nritems(leaf X), it calls btrfs_next_leaf() btrfs_next_leaf() --> releases the path hard link added to our inode, with key (257 INODE_REF 500) added to the end of leaf X, so leaf X now has N + 1 keys --> searches for the key (257 INODE_REF 256), because it was the last key in leaf X before it released the path, with path->keep_locks set to 1 --> ends up at leaf X again and it verifies that the key (257 INODE_REF 256) is no longer the last key in the leaf, so it returns with path->nodes[0] == leaf X and path->slots[0] == N, pointing to the new item with key (257 INODE_REF 500) the loop iteration of run_dealloc_nocow() does not break out the loop and continues because the key referenced in the path at path->nodes[0] and path->slots[0] is for inode 257, its type is < BTRFS_EXTENT_DATA_KEY and its offset (500) is less then our delalloc range's end (8192) the item pointed by the path, an inode reference item, is (incorrectly) interpreted as a file extent item and we get an invalid extent type, leading to the BUG_ON(1): if (extent_type == BTRFS_FILE_EXTENT_REG || extent_type == BTRFS_FILE_EXTENT_PREALLOC) { (...) } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) { (...) } else { BUG_ON(1) } The same can happen if a xattr is added concurrently and ends up having a key with an offset smaller then the delalloc's range end. So fix this by skipping keys with a type smaller than BTRFS_EXTENT_DATA_KEY. 
Signed-off-by: Filipe Manana Signed-off-by: Ben Hutchings commit b4f5eab61bdf4a39c38268d237549c4efede2531 Author: Filipe Manana Date: Fri Nov 6 13:33:33 2015 +0000 Btrfs: fix race leading to incorrect item deletion when dropping extents commit aeafbf8486c9e2bd53f5cc3c10c0b7fd7149d69c upstream. While running a stress test I got the following warning triggered: [191627.672810] ------------[ cut here ]------------ [191627.673949] WARNING: CPU: 8 PID: 8447 at fs/btrfs/file.c:779 __btrfs_drop_extents+0x391/0xa50 [btrfs]() (...) [191627.701485] Call Trace: [191627.702037] [] dump_stack+0x4f/0x7b [191627.702992] [] ? console_unlock+0x356/0x3a2 [191627.704091] [] warn_slowpath_common+0xa1/0xbb [191627.705380] [] ? __btrfs_drop_extents+0x391/0xa50 [btrfs] [191627.706637] [] warn_slowpath_null+0x1a/0x1c [191627.707789] [] __btrfs_drop_extents+0x391/0xa50 [btrfs] [191627.709155] [] ? cache_alloc_debugcheck_after.isra.32+0x171/0x1d0 [191627.712444] [] ? kmemleak_alloc_recursive.constprop.40+0x16/0x18 [191627.714162] [] insert_reserved_file_extent.constprop.40+0x83/0x24e [btrfs] [191627.715887] [] ? start_transaction+0x3bb/0x610 [btrfs] [191627.717287] [] btrfs_finish_ordered_io+0x273/0x4e2 [btrfs] [191627.728865] [] finish_ordered_fn+0x15/0x17 [btrfs] [191627.730045] [] normal_work_helper+0x14c/0x32c [btrfs] [191627.731256] [] btrfs_endio_write_helper+0x12/0x14 [btrfs] [191627.732661] [] process_one_work+0x24c/0x4ae [191627.733822] [] worker_thread+0x206/0x2c2 [191627.734857] [] ? process_scheduled_works+0x2f/0x2f [191627.736052] [] ? process_scheduled_works+0x2f/0x2f [191627.737349] [] kthread+0xef/0xf7 [191627.738267] [] ? time_hardirqs_on+0x15/0x28 [191627.739330] [] ? __kthread_parkme+0xad/0xad [191627.741976] [] ret_from_fork+0x42/0x70 [191627.743080] [] ? __kthread_parkme+0xad/0xad [191627.744206] ---[ end trace bbfddacb7aaada8d ]--- $ cat -n fs/btrfs/file.c 691 int __btrfs_drop_extents(struct btrfs_trans_handle *trans, (...) 758 btrfs_item_key_to_cpu(leaf, &key, path->slots[0]); 759 if (key.objectid > ino || 760 key.type > BTRFS_EXTENT_DATA_KEY || key.offset >= end) 761 break; 762 763 fi = btrfs_item_ptr(leaf, path->slots[0], 764 struct btrfs_file_extent_item); 765 extent_type = btrfs_file_extent_type(leaf, fi); 766 767 if (extent_type == BTRFS_FILE_EXTENT_REG || 768 extent_type == BTRFS_FILE_EXTENT_PREALLOC) { (...) 774 } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) { (...) 778 } else { 779 WARN_ON(1); 780 extent_end = search_start; 781 } (...) This happened because the item we were processing did not match a file extent item (its key type != BTRFS_EXTENT_DATA_KEY), and even on this case we cast the item to a struct btrfs_file_extent_item pointer and then find a type field value that does not match any of the expected values (BTRFS_FILE_EXTENT_[REG|PREALLOC|INLINE]). This scenario happens due to a tiny time window where a race can happen as exemplified below. For example, consider the following scenario where we're using the NO_HOLES feature and we have the following two neighbour leafs: Leaf X (has N items) Leaf Y [ ... (257 INODE_ITEM 0) (257 INODE_REF 256) ] [ (257 EXTENT_DATA 8192), ... ] slot N - 2 slot N - 1 slot 0 Our inode 257 has an implicit hole in the range [0, 8K[ (implicit rather than explicit because NO_HOLES is enabled). 
Now if our inode has an ordered extent for the range [4K, 8K[ that is finishing, the following can happen: CPU 1 CPU 2 btrfs_finish_ordered_io() insert_reserved_file_extent() __btrfs_drop_extents() Searches for the key (257 EXTENT_DATA 4096) through btrfs_lookup_file_extent() Key not found and we get a path where path->nodes[0] == leaf X and path->slots[0] == N Because path->slots[0] is >= btrfs_header_nritems(leaf X), we call btrfs_next_leaf() btrfs_next_leaf() releases the path inserts key (257 INODE_REF 4096) at the end of leaf X, leaf X now has N + 1 keys, and the new key is at slot N btrfs_next_leaf() searches for key (257 INODE_REF 256), with path->keep_locks set to 1, because it was the last key it saw in leaf X finds it in leaf X again and notices it's no longer the last key of the leaf, so it returns 0 with path->nodes[0] == leaf X and path->slots[0] == N (which is now < btrfs_header_nritems(leaf X)), pointing to the new key (257 INODE_REF 4096) __btrfs_drop_extents() casts the item at path->nodes[0], slot path->slots[0], to a struct btrfs_file_extent_item - it does not skip keys for the target inode with a type less than BTRFS_EXTENT_DATA_KEY (BTRFS_INODE_REF_KEY < BTRFS_EXTENT_DATA_KEY) sees a bogus value for the type field triggering the WARN_ON in the trace shown above, and sets extent_end = search_start (4096) does the if-then-else logic to fixup 0 length extent items created by a past bug from hole punching: if (extent_end == key.offset && extent_end >= search_start) goto delete_extent_item; that evaluates to true and it ends up deleting the key pointed to by path->slots[0], (257 INODE_REF 4096), from leaf X The same could happen for example for a xattr that ends up having a key with an offset value that matches search_start (very unlikely but not impossible). So fix this by ensuring that keys smaller than BTRFS_EXTENT_DATA_KEY are skipped, never casted to struct btrfs_file_extent_item and never deleted by accident. Also protect against the unexpected case of getting a key for a lower inode number by skipping that key and issuing a warning. Signed-off-by: Filipe Manana [bwh: Backported to 3.2: drop use of ASSERT()] Signed-off-by: Ben Hutchings commit 7f28be3247d4e8f37fcc5a5ecec2368073ae4e6b Author: Borislav Petkov Date: Thu Nov 5 16:57:56 2015 +0100 x86/cpu: Call verify_cpu() after having entered long mode too commit 04633df0c43d710e5f696b06539c100898678235 upstream. When we get loaded by a 64-bit bootloader, kernel entry point is startup_64 in head_64.S. We don't trust any and all bootloaders because some will fiddle with CPU configuration so we go ahead and massage each CPU into sanity again. For example, some dell BIOSes have this XD disable feature which set IA32_MISC_ENABLE[34] and disable NX. This might be some dumb workaround for other OSes but Linux sure doesn't need it. A similar thing is present in the Surface 3 firmware - see https://bugzilla.kernel.org/show_bug.cgi?id=106051 - which sets this bit only on the BSP: # rdmsr -a 0x1a0 400850089 850089 850089 850089 I know, right?! There's not even an off switch in there. So fix all those cases by sanitizing the 64-bit entry point too. For that, make verify_cpu() callable in 64-bit mode also. Requested-and-debugged-by: "H. 
Peter Anvin" Reported-and-tested-by: Bastien Nocera Signed-off-by: Borislav Petkov Cc: Matt Fleming Cc: Peter Zijlstra Link: http://lkml.kernel.org/r/1446739076-21303-1-git-send-email-bp@alien8.de Signed-off-by: Thomas Gleixner [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings commit 67022ef102acafab56301d3e87ba7c4a3ee1b406 Author: Christoph Hellwig Date: Mon Oct 19 16:35:46 2015 +0200 scsi: restart list search after unlock in scsi_remove_target commit 40998193560dab6c3ce8d25f4fa58a23e252ef38 upstream. When dropping a lock while iterating a list we must restart the search as other threads could have manipulated the list under us. Without this we can get stuck in an endless loop. This bug was introduced by commit bc3f02a795d3b4faa99d37390174be2a75d091bd Author: Dan Williams Date: Tue Aug 28 22:12:10 2012 -0700 [SCSI] scsi_remove_target: fix softlockup regression on hot remove which was itself trying to fix a reported soft lockup issue http://thread.gmane.org/gmane.linux.kernel/1348679 However, we believe even with this revert of the original patch, the soft lockup problem has been fixed by commit f2495e228fce9f9cec84367547813cbb0d6db15a Author: James Bottomley Date: Tue Jan 21 07:01:41 2014 -0800 [SCSI] dual scan thread bug fix Thanks go to Dan Williams for tracking all this prior history down. Reported-by: Johannes Thumshirn Signed-off-by: Christoph Hellwig Tested-by: Johannes Thumshirn Reviewed-by: Johannes Thumshirn Fixes: bc3f02a795d3b4faa99d37390174be2a75d091bd Signed-off-by: James Bottomley Signed-off-by: Ben Hutchings commit 785381e98b73f87174d78b0ae4ba799a32eb882f Author: Stefan Richter Date: Tue Nov 3 01:46:21 2015 +0100 firewire: ohci: fix JMicron JMB38x IT context discovery commit 100ceb66d5c40cc0c7018e06a9474302470be73c upstream. Reported by Clifford and Craig for JMicron OHCI-1394 + SDHCI combo controllers: Often or even most of the time, the controller is initialized with the message "added OHCI v1.10 device as card 0, 4 IR + 0 IT contexts, quirks 0x10". With 0 isochronous transmit DMA contexts (IT contexts), applications like audio output are impossible. However, OHCI-1394 demands that at least 4 IT contexts are implemented by the link layer controller, and indeed JMicron JMB38x do implement four of them. Only their IsoXmitIntMask register is unreliable at early access. With my own JMB381 single function controller I found:
- I can reproduce the problem with a lower probability than Craig's.
- If I put a loop around the section which clears and reads IsoXmitIntMask, then either the first or the second attempt will return the correct initial mask of 0x0000000f. I never encountered a case of needing more than a second attempt.
- Consequently, if I put a dummy reg_read(...IsoXmitIntMaskSet) before the first write, the subsequent read will return the correct result.
- If I merely ignore a wrong read result and force the known real result, later isochronous transmit DMA usage works just fine.
So let's just fix this chip bug up by the latter method. Tested with JMB381 on kernel 3.13 and 4.3. Since OHCI-1394 generally requires 4 IT contexts at a minimum, this workaround is simply applied whenever the initial read of IsoXmitIntMask returns 0, regardless of whether it's a JMicron chip or not. I never heard of this issue together with any other chip though.
I am not 100% sure that this fix works on the OHCI-1394 part of JMB380 and JMB388 combo controllers exactly the same as on the JMB381 single- function controller, but so far I haven't had a chance to let an owner of a combo chip run a patched kernel. Strangely enough, IsoRecvIntMask is always reported correctly, even though it is probed right before IsoXmitIntMask. Reported-by: Clifford Dunn Reported-by: Craig Moore Signed-off-by: Stefan Richter [bwh: Backported to 3.2: log with fw_notify() instead of ohci_notice()] Signed-off-by: Ben Hutchings commit 4405adbd29e90b9c568029a83f2042909fe51b98 Author: Takashi Iwai Date: Wed Nov 4 22:39:16 2015 +0100 ALSA: hda - Apply pin fixup for HP ProBook 6550b commit c932b98c1e47312822d911c1bb76e81ef50e389c upstream. HP ProBook 6550b needs the same pin fixup applied to other HP B-series laptops with docks for making its headphone and dock headphone jacks working properly. We just need to add the codec SSID to the list. Bugzilla: https://bugzilla.kernel.org/attachment.cgi?id=191971 Signed-off-by: Takashi Iwai Signed-off-by: Ben Hutchings commit 29fbb7880914dcdefe243d1bb26990c64134404b Author: Michal Kubeček Date: Tue Nov 3 08:51:07 2015 +0100 ipv6: fix tunnel error handling commit ebac62fe3d24c0ce22dd83afa7b07d1a2aaef44d upstream. Both tunnel6_protocol and tunnel46_protocol share the same error handler, tunnel6_err(), which traverses through tunnel6_handlers list. For ipip6 tunnels, we need to traverse tunnel46_handlers as we do e.g. in tunnel46_rcv(). Current code can generate an ICMPv6 error message with an IPv4 packet embedded in it. Fixes: 73d605d1abbd ("[IPSEC]: changing API of xfrm6_tunnel_register") Signed-off-by: Michal Kubecek Signed-off-by: David S. Miller Signed-off-by: Ben Hutchings commit 6b8120dbc56a7ecc9e4b664d860011877bdce985 Author: libin Date: Tue Nov 3 08:58:47 2015 +0800 recordmcount: Fix endianness handling bug for nop_mcount commit c84da8b9ad3761eef43811181c7e896e9834b26b upstream. In nop_mcount, shdr->sh_offset and welp->r_offset should handle endianness properly, otherwise it will trigger Segmentation fault if the recordmcount main and file.o have different endianness. Link: http://lkml.kernel.org/r/563806C7.7070606@huawei.com Signed-off-by: Li Bin Signed-off-by: Steven Rostedt Signed-off-by: Ben Hutchings commit f7e70badb0ee4fb9737bf9793e17eedd907b68f8 Author: sumit.saxena@avagotech.com Date: Thu Oct 15 13:40:54 2015 +0530 megaraid_sas : SMAP restriction--do not access user memory from IOCTL code commit 323c4a02c631d00851d8edc4213c4d184ef83647 upstream. This is an issue on SMAP enabled CPUs and 32 bit apps running on 64 bit OS. Do not access user memory from kernel code. The SMAP bit restricts accessing user memory from kernel code. Signed-off-by: Sumit Saxena Signed-off-by: Kashyap Desai Reviewed-by: Tomas Henzl Signed-off-by: Martin K. Petersen Signed-off-by: Ben Hutchings commit bd65107fc1d80498ea8d8185edb48d05a1a85255 Author: Herbert Xu Date: Sun Nov 1 17:11:19 2015 +0800 crypto: algif_hash - Only export and import on sockets with data commit 4afa5f9617927453ac04b24b584f6c718dfb4f45 upstream. The hash_accept call fails to work on sockets that have not received any data. For some algorithm implementations it may cause crashes. This patch fixes this by ensuring that we only export and import on sockets that have received data. 
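As a user-space illustration of the case being fixed (in the spirit of the md5sum2.c example earlier in this log), accept() on an AF_ALG "hash" operation socket clones the hash state via export/import. The snippet below is a sketch of that sequence, assuming AF_ALG support in the system headers; error checking is omitted for brevity:

/* alg_accept.c - clone the partial md5 state of an AF_ALG op socket. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type = "hash",
		.salg_name = "md5",
	};
	unsigned char digest[16];
	int tfm, op, op2, i;

	tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfm, (struct sockaddr *)&sa, sizeof(sa));
	op = accept(tfm, NULL, 0);

	/* Send some data first; with the fix, state is only exported and
	 * imported for sockets that have actually received data. */
	send(op, "hello", 5, MSG_MORE);

	/* accept() on the op socket forks the partial hash state. */
	op2 = accept(op, NULL, 0);

	send(op2, " world", 6, 0);
	read(op2, digest, sizeof(digest));	/* md5("hello world") */

	for (i = 0; i < (int)sizeof(digest); i++)
		printf("%02x", digest[i]);
	printf("\n");

	close(op2);
	close(op);
	close(tfm);
	return 0;
}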
Reported-by: Harsh Jain Signed-off-by: Herbert Xu Tested-by: Stephan Mueller Signed-off-by: Ben Hutchings commit 419608b9adde80d8638daa01c8614e1b92416fc0 Author: Brian Norris Date: Mon Oct 26 10:20:23 2015 -0700 mtd: blkdevs: fix potential deadlock + lockdep warnings commit f3c63795e90f0c6238306883b6c72f14d5355721 upstream. Commit 073db4a51ee4 ("mtd: fix: avoid race condition when accessing mtd->usecount") fixed a race condition but due to poor ordering of the mutex acquisition, introduced a potential deadlock. The deadlock can occur, for example, when rmmod'ing the m25p80 module, which will delete one or more MTDs, along with any corresponding mtdblock devices. This could potentially race with an acquisition of the block device as follows. -> blktrans_open() -> mutex_lock(&dev->lock); -> mutex_lock(&mtd_table_mutex); -> del_mtd_device() -> mutex_lock(&mtd_table_mutex); -> blktrans_notify_remove() -> del_mtd_blktrans_dev() -> mutex_lock(&dev->lock); This is a classic (potential) ABBA deadlock, which can be fixed by making the A->B ordering consistent everywhere. There was no real purpose to the ordering in the original patch, AFAIR, so this shouldn't be a problem. This ordering was actually already present in del_mtd_blktrans_dev(), for one, where the function tried to ensure that its caller already held mtd_table_mutex before it acquired &dev->lock: if (mutex_trylock(&mtd_table_mutex)) { mutex_unlock(&mtd_table_mutex); BUG(); } So, reverse the ordering of acquisition of &dev->lock and &mtd_table_mutex so we always acquire mtd_table_mutex first. Snippets of the lockdep output follow: # modprobe -r m25p80 [ 53.419251] [ 53.420838] ====================================================== [ 53.427300] [ INFO: possible circular locking dependency detected ] [ 53.433865] 4.3.0-rc6 #96 Not tainted [ 53.437686] ------------------------------------------------------- [ 53.444220] modprobe/372 is trying to acquire lock: [ 53.449320] (&new->lock){+.+...}, at: [] del_mtd_blktrans_dev+0x80/0xdc [ 53.457271] [ 53.457271] but task is already holding lock: [ 53.463372] (mtd_table_mutex){+.+.+.}, at: [] del_mtd_device+0x18/0x100 [ 53.471321] [ 53.471321] which lock already depends on the new lock. 
[ 53.471321] [ 53.479856] [ 53.479856] the existing dependency chain (in reverse order) is: [ 53.487660] -> #1 (mtd_table_mutex){+.+.+.}: [ 53.492331] [] blktrans_open+0x34/0x1a4 [ 53.497879] [] __blkdev_get+0xc4/0x3b0 [ 53.503364] [] blkdev_get+0x108/0x320 [ 53.508743] [] do_dentry_open+0x218/0x314 [ 53.514496] [] path_openat+0x4c0/0xf9c [ 53.519959] [] do_filp_open+0x5c/0xc0 [ 53.525336] [] do_sys_open+0xfc/0x1cc [ 53.530716] [] ret_fast_syscall+0x0/0x1c [ 53.536375] -> #0 (&new->lock){+.+...}: [ 53.540587] [] mutex_lock_nested+0x38/0x3cc [ 53.546504] [] del_mtd_blktrans_dev+0x80/0xdc [ 53.552606] [] blktrans_notify_remove+0x7c/0x84 [ 53.558891] [] del_mtd_device+0x74/0x100 [ 53.564544] [] del_mtd_partitions+0x80/0xc8 [ 53.570451] [] mtd_device_unregister+0x24/0x48 [ 53.576637] [] spi_drv_remove+0x1c/0x34 [ 53.582207] [] __device_release_driver+0x88/0x114 [ 53.588663] [] device_release_driver+0x20/0x2c [ 53.594843] [] bus_remove_device+0xd8/0x108 [ 53.600748] [] device_del+0x10c/0x210 [ 53.606127] [] device_unregister+0xc/0x20 [ 53.611849] [] __unregister+0x10/0x20 [ 53.617211] [] device_for_each_child+0x50/0x7c [ 53.623387] [] spi_unregister_master+0x58/0x8c [ 53.629578] [] release_nodes+0x15c/0x1c8 [ 53.635223] [] __device_release_driver+0x90/0x114 [ 53.641689] [] driver_detach+0xb4/0xb8 [ 53.647147] [] bus_remove_driver+0x4c/0xa0 [ 53.652970] [] SyS_delete_module+0x11c/0x1e4 [ 53.658976] [] ret_fast_syscall+0x0/0x1c [ 53.664621] [ 53.664621] other info that might help us debug this: [ 53.664621] [ 53.672979] Possible unsafe locking scenario: [ 53.672979] [ 53.679169] CPU0 CPU1 [ 53.683900] ---- ---- [ 53.688633] lock(mtd_table_mutex); [ 53.692383] lock(&new->lock); [ 53.698306] lock(mtd_table_mutex); [ 53.704658] lock(&new->lock); [ 53.707946] [ 53.707946] *** DEADLOCK *** Fixes: 073db4a51ee4 ("mtd: fix: avoid race condition when accessing mtd->usecount") Reported-by: Felipe Balbi Tested-by: Felipe Balbi Signed-off-by: Brian Norris Signed-off-by: Ben Hutchings commit 78f78e5a5d77e95575059443ce3b9b3446e7398a Author: Marek Vasut Date: Fri Oct 30 13:48:19 2015 +0100 can: Use correct type in sizeof() in nla_put() commit 562b103a21974c2f9cd67514d110f918bb3e1796 upstream. The sizeof() is invoked on an incorrect variable, likely due to some copy-paste error, and this might result in memory corruption. Fix this. Signed-off-by: Marek Vasut Cc: Wolfgang Grandegger Cc: netdev@vger.kernel.org Signed-off-by: Marc Kleine-Budde [bwh: Backported to 3.2: - Keep using the old NLA_PUT macro - Adjust context] Signed-off-by: Ben Hutchings commit 5e9c1dca8eb475a3f3e216cccf18f56dcdc7e6d8 Author: sumit.saxena@avagotech.com Date: Thu Oct 15 13:40:04 2015 +0530 megaraid_sas: Do not use PAGE_SIZE for max_sectors commit 357ae967ad66e357f78b5cfb5ab6ca07fb4a7758 upstream. Do not use PAGE_SIZE marco to calculate max_sectors per I/O request. Driver code assumes PAGE_SIZE will be always 4096 which can lead to wrongly calculated value if PAGE_SIZE is not 4096. This issue was reported in Ubuntu Bugzilla Bug #1475166. Signed-off-by: Sumit Saxena Signed-off-by: Kashyap Desai Reviewed-by: Tomas Henzl Reviewed-by: Martin K. Petersen Signed-off-by: Martin K. Petersen Signed-off-by: Ben Hutchings commit f3b2d51d915e6c3539595208955145f90fbc2931 Author: Takashi Iwai Date: Tue Oct 27 14:21:51 2015 +0100 ALSA: hda - Disable 64bit address for Creative HDA controllers commit cadd16ea33a938d49aee99edd4758cc76048b399 upstream. We've had many reports that some Creative sound cards with CA0132 don't work well. 
Some reported that it starts working after reloading the module, while some reported it starts working when a 32bit kernel is used. All these facts seem implying that the chip fails to communicate when the buffer is located in 64bit address. This patch addresses these issues by just adding AZX_DCAPS_NO_64BIT flag to the corresponding PCI entries. I casually had a chance to test an SB Recon3D board, and indeed this seems helping. Although this hasn't been tested on all Creative devices, it's safer to assume that this restriction applies to the rest of them, too. So the flag is applied to all Creative entries. Signed-off-by: Takashi Iwai [bwh: Backported to 3.2: drop the change to AZX_DCAPS_PRESET_CTHDA] Signed-off-by: Ben Hutchings commit 3ce1b9beb44fb0f7cc06ca86bf316071d43b7384 Author: Ralf Baechle Date: Fri Oct 16 23:09:57 2015 +0200 MIPS: atomic: Fix comment describing atomic64_add_unless's return value. commit f25319d2cb439249a6859f53ad42ffa332b0acba upstream. Signed-off-by: Ralf Baechle Fixes: f24219b4e90cf70ec4a211b17fbabc725a0ddf3c (cherry picked from commit f0a232cde7be18a207fd057dd79bbac8a0a45dec) Signed-off-by: Ben Hutchings commit bf177657504c6091395f7dd7e5916770113df60b Author: Chen Yu Date: Sun Oct 25 01:02:19 2015 +0800 ACPI: Use correct IRQ when uninstalling ACPI interrupt handler commit 49e4b84333f338d4f183f28f1f3c1131b9fb2b5a upstream. Currently when the system is trying to uninstall the ACPI interrupt handler, it uses acpi_gbl_FADT.sci_interrupt as the IRQ number. However, the IRQ number that the ACPI interrupt handled is installed for comes from acpi_gsi_to_irq() and that is the number that should be used for the handler removal. Fix this problem by using the mapped IRQ returned from acpi_gsi_to_irq() as appropriate. Acked-by: Lv Zheng Signed-off-by: Chen Yu Signed-off-by: Rafael J. Wysocki [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings commit e936d80d4d24e79fe2fd038d33f56830391838c2 Author: Larry Finger Date: Sun Oct 18 22:14:48 2015 -0500 staging: rtl8712: Add device ID for Sitecom WLA2100 commit 1e6e63283691a2a9048a35d9c6c59cf0abd342e4 upstream. This adds the USB ID for the Sitecom WLA2100. The Windows 10 inf file was checked to verify that the addition is correct. Reported-by: Frans van de Wiel Signed-off-by: Larry Finger Cc: Frans van de Wiel Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings commit 2b42ac30d4a874fb3360562f67ebf6a90afc1794 Author: Dmitry Tunin Date: Fri Oct 16 11:45:26 2015 +0300 Bluetooth: ath3k: Add support of AR3012 0cf3:817b device commit 18e0afab8ce3f1230ce3fef52b2e73374fd9c0e7 upstream. T: Bus=04 Lev=02 Prnt=02 Port=04 Cnt=01 Dev#= 3 Spd=12 MxCh= 0 D: Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=0cf3 ProdID=817b Rev=00.02 C: #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb I: If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb BugLink: https://bugs.launchpad.net/bugs/1506615 Signed-off-by: Dmitry Tunin Signed-off-by: Marcel Holtmann Signed-off-by: Ben Hutchings commit 467b9ca3a1e73865099b73aa1568bf179c32b380 Author: Dmitry Tunin Date: Mon Oct 5 19:29:33 2015 +0300 Bluetooth: ath3k: Add new AR3012 0930:021c id commit cd355ff071cd37e7197eccf9216770b2b29369f7 upstream. This adapter works with the existing linux-firmware. 
T: Bus=01 Lev=01 Prnt=01 Port=03 Cnt=02 Dev#= 3 Spd=12 MxCh= 0 D: Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=0930 ProdID=021c Rev=00.01 C: #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb I: If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb BugLink: https://bugs.launchpad.net/bugs/1502781 Signed-off-by: Dmitry Tunin Signed-off-by: Marcel Holtmann Signed-off-by: Ben Hutchings commit 08c89a611041ac892e150dbed3e745c1bb255baa Author: Daeho Jeong Date: Sun Oct 18 17:02:56 2015 -0400 ext4, jbd2: ensure entering into panic after recording an error in superblock commit 4327ba52afd03fc4b5afa0ee1d774c9c5b0e85c5 upstream. If a EXT4 filesystem utilizes JBD2 journaling and an error occurs, the journaling will be aborted first and the error number will be recorded into JBD2 superblock and, finally, the system will enter into the panic state in "errors=panic" option. But, in the rare case, this sequence is little twisted like the below figure and it will happen that the system enters into panic state, which means the system reset in mobile environment, before completion of recording an error in the journal superblock. In this case, e2fsck cannot recognize that the filesystem failure occurred in the previous run and the corruption wouldn't be fixed. Task A Task B ext4_handle_error() -> jbd2_journal_abort() -> __journal_abort_soft() -> __jbd2_journal_abort_hard() | -> journal->j_flags |= JBD2_ABORT; | | __ext4_abort() | -> jbd2_journal_abort() | | -> __journal_abort_soft() | | -> if (journal->j_flags & JBD2_ABORT) | | return; | -> panic() | -> jbd2_journal_update_sb_errno() Tested-by: Hobin Woo Signed-off-by: Daeho Jeong Signed-off-by: Theodore Ts'o Signed-off-by: Ben Hutchings commit 2a97932f99303b32c6683f136628298da7f85323 Author: Filipe Manana Date: Fri Oct 16 12:34:25 2015 +0100 Btrfs: fix truncation of compressed and inlined extents commit 0305cd5f7fca85dae392b9ba85b116896eb7c1c7 upstream. When truncating a file to a smaller size which consists of an inline extent that is compressed, we did not discard (or made unusable) the data between the new file size and the old file size, wasting metadata space and allowing for the truncated data to be leaked and the data corruption/loss mentioned below. We were also not correctly decrementing the number of bytes used by the inode, we were setting it to zero, giving a wrong report for callers of the stat(2) syscall. The fsck tool also reported an error about a mismatch between the nbytes of the file versus the real space used by the file. Now because we weren't discarding the truncated region of the file, it was possible for a caller of the clone ioctl to actually read the data that was truncated, allowing for a security breach without requiring root access to the system, using only standard filesystem operations. The scenario is the following: 1) User A creates a file which consists of an inline and compressed extent with a size of 2000 bytes - the file is not accessible to any other users (no read, write or execution permission for anyone else); 2) The user truncates the file to a size of 1000 bytes; 3) User A makes the file world readable; 4) User B creates a file consisting of an inline extent of 2000 bytes; 5) User B issues a clone operation from user A's file into its own file (using a length argument of 0, clone the whole range); 6) User B now gets to see the 1000 bytes that user A truncated from its file before it made its file world readbale. 
User B also lost the bytes in the range [1000, 2000[ bytes from its own file, but that might be ok if his/her intention was reading stale data from user A that was never supposed to be public. Note that this contrasts with the case where we truncate a file from 2000 bytes to 1000 bytes and then truncate it back from 1000 to 2000 bytes. In this case reading any byte from the range [1000, 2000[ will return a value of 0x00, instead of the original data. This problem exists since the clone ioctl was added and happens both with and without my recent data loss and file corruption fixes for the clone ioctl (patch "Btrfs: fix file corruption and data loss after cloning inline extents"). So fix this by truncating the compressed inline extents as we do for the non-compressed case, which involves decompressing, if the data isn't already in the page cache, compressing the truncated version of the extent, writing the compressed content into the inline extent and then truncate it. The following test case for fstests reproduces the problem. In order for the test to pass both this fix and my previous fix for the clone ioctl that forbids cloning a smaller inline extent into a larger one, which is titled "Btrfs: fix file corruption and data loss after cloning inline extents", are needed. Without that other fix the test fails in a different way that does not leak the truncated data, instead part of destination file gets replaced with zeroes (because the destination file has a larger inline extent than the source). seq=`basename $0` seqres=$RESULT_DIR/$seq echo "QA output created by $seq" tmp=/tmp/$$ status=1 # failure is the default! trap "_cleanup; exit \$status" 0 1 2 3 15 _cleanup() { rm -f $tmp.* } # get standard environment, filters and checks . ./common/rc . ./common/filter # real QA test starts here _need_to_be_root _supported_fs btrfs _supported_os Linux _require_scratch _require_cloner rm -f $seqres.full _scratch_mkfs >>$seqres.full 2>&1 _scratch_mount "-o compress" # Create our test files. File foo is going to be the source of a clone operation # and consists of a single inline extent with an uncompressed size of 512 bytes, # while file bar consists of a single inline extent with an uncompressed size of # 256 bytes. For our test's purpose, it's important that file bar has an inline # extent with a size smaller than foo's inline extent. $XFS_IO_PROG -f -c "pwrite -S 0xa1 0 128" \ -c "pwrite -S 0x2a 128 384" \ $SCRATCH_MNT/foo | _filter_xfs_io $XFS_IO_PROG -f -c "pwrite -S 0xbb 0 256" $SCRATCH_MNT/bar | _filter_xfs_io # Now durably persist all metadata and data. We do this to make sure that we get # on disk an inline extent with a size of 512 bytes for file foo. sync # Now truncate our file foo to a smaller size. Because it consists of a # compressed and inline extent, btrfs did not shrink the inline extent to the # new size (if the extent was not compressed, btrfs would shrink it to 128 # bytes), it only updates the inode's i_size to 128 bytes. $XFS_IO_PROG -c "truncate 128" $SCRATCH_MNT/foo # Now clone foo's inline extent into bar. # This clone operation should fail with errno EOPNOTSUPP because the source # file consists only of an inline extent and the file's size is smaller than # the inline extent of the destination (128 bytes < 256 bytes). 
However the # clone ioctl was not prepared to deal with a file that has a size smaller # than the size of its inline extent (something that happens only for compressed # inline extents), resulting in copying the full inline extent from the source # file into the destination file. # # Note that btrfs' clone operation for inline extents consists of removing the # inline extent from the destination inode and copy the inline extent from the # source inode into the destination inode, meaning that if the destination # inode's inline extent is larger (N bytes) than the source inode's inline # extent (M bytes), some bytes (N - M bytes) will be lost from the destination # file. Btrfs could copy the source inline extent's data into the destination's # inline extent so that we would not lose any data, but that's currently not # done due to the complexity that would be needed to deal with such cases # (specially when one or both extents are compressed), returning EOPNOTSUPP, as # it's normally not a very common case to clone very small files (only case # where we get inline extents) and copying inline extents does not save any # space (unlike for normal, non-inlined extents). $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/foo $SCRATCH_MNT/bar # Now because the above clone operation used to succeed, and due to foo's inline # extent not being shinked by the truncate operation, our file bar got the whole # inline extent copied from foo, making us lose the last 128 bytes from bar # which got replaced by the bytes in range [128, 256[ from foo before foo was # truncated - in other words, data loss from bar and being able to read old and # stale data from foo that should not be possible to read anymore through normal # filesystem operations. Contrast with the case where we truncate a file from a # size N to a smaller size M, truncate it back to size N and then read the range # [M, N[, we should always get the value 0x00 for all the bytes in that range. # We expected the clone operation to fail with errno EOPNOTSUPP and therefore # not modify our file's bar data/metadata. So its content should be 256 bytes # long with all bytes having the value 0xbb. # # Without the btrfs bug fix, the clone operation succeeded and resulted in # leaking truncated data from foo, the bytes that belonged to its range # [128, 256[, and losing data from bar in that same range. So reading the # file gave us the following content: # # 0000000 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 # * # 0000200 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a # * # 0000400 echo "File bar's content after the clone operation:" od -t x1 $SCRATCH_MNT/bar # Also because the foo's inline extent was not shrunk by the truncate # operation, btrfs' fsck, which is run by the fstests framework everytime a # test completes, failed reporting the following error: # # root 5 inode 257 errors 400, nbytes wrong status=0 exit Signed-off-by: Filipe Manana [bwh: Backported to 3.2: - Adjust parameters to btrfs_truncate_page() and btrfs_truncate_item() - Pass transaction pointer into truncate_inline_extent() - Add prototype of btrfs_truncate_page() - s/test_bit(BTRFS_ROOT_REF_COWS, &root->state)/root->ref_cows/ - Keep using BUG_ON() for other error cases, as there is no btrfs_abort_transaction() - Adjust context] Signed-off-by: Ben Hutchings commit 85c36cd4ff9c60b6d3b425b44ccf72c0d2ff0157 Author: Chris Mason Date: Fri Jan 3 21:07:00 2014 -0800 Btrfs: don't use ram_bytes for uncompressed inline items commit 514ac8ad8793a097c0c9d89202c642479d6dfa34 upstream. 
If we truncate an uncompressed inline item, ram_bytes isn't updated to reflect the new size. The fix uses the size directly from the item header when reading uncompressed inlines, and also fixes truncate to update the size as it goes. Reported-by: Jens Axboe Signed-off-by: Chris Mason [bwh: Backported to 3.2: - Don't use btrfs_map_token API - There are fewer callers of btrfs_file_extent_inline_len() to change - Adjust context] Signed-off-by: Ben Hutchings commit df2ef64a3a36b91e374d7339cf53112f403d6164 Author: Arnd Bergmann Date: Mon Oct 12 15:46:08 2015 +0200 ARM: pxa: remove incorrect __init annotation on pxa27x_set_pwrmode commit 54c09889bff6d99c8733eed4a26c9391b177c88b upstream. The z2 machine calls pxa27x_set_pwrmode() in order to power off the machine, but this function gets discarded early at boot because it is marked __init, as pointed out by kbuild: WARNING: vmlinux.o(.text+0x145c4): Section mismatch in reference from the function z2_power_off() to the function .init.text:pxa27x_set_pwrmode() The function z2_power_off() references the function __init pxa27x_set_pwrmode(). This is often because z2_power_off lacks a __init annotation or the annotation of pxa27x_set_pwrmode is wrong. This removes the __init section modifier to fix rebooting and the build error. Signed-off-by: Arnd Bergmann Fixes: ba4a90a6d86a ("ARM: pxa/z2: fix building error of pxa27x_cpu_suspend() no longer available") Signed-off-by: Robert Jarzmik [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings commit a9f83a62bf7a47edc592ddd58564d7f0c74f120d Author: David Woodhouse Date: Thu Oct 15 09:28:06 2015 +0100 iommu/vt-d: Fix ATSR handling for Root-Complex integrated endpoints commit d14053b3c714178525f22660e6aaf41263d00056 upstream. The VT-d specification says that "Software must enable ATS on endpoint devices behind a Root Port only if the Root Port is reported as supporting ATS transactions." We walk up the tree to find a Root Port, but for integrated devices we don't find one — we get to the host bridge. In that case we *should* allow ATS. Currently we don't, which means that we are incorrectly failing to use ATS for the integrated graphics. Fix that. We should never break out of this loop "naturally" with bus==NULL, since we'll always find bridge==NULL in that case (and now return 1). So remove the check for (!bridge) after the loop, since it can never happen. If it did, it would be worthy of a BUG_ON(!bridge). But since it'll oops anyway in that case, that'll do just as well. Signed-off-by: David Woodhouse [bwh: Backported to 3.2: - Adjust context - There's no (!bridge) check to remove] Signed-off-by: Ben Hutchings commit 16f61ceb051055108ce71b6c3420ae674ae32065 Author: Filipe Manana Date: Tue Oct 13 15:15:00 2015 +0100 Btrfs: fix file corruption and data loss after cloning inline extents commit 8039d87d9e473aeb740d4fdbd59b9d2f89b2ced9 upstream. Currently the clone ioctl allows cloning an inline extent from one file into another file that already has other (non-inlined) extents. This is a problem because btrfs is not designed to deal with files having inline and regular extents: if a file has an inline extent, then it must be the only extent in the file and must start at file offset 0. Having a file with an inline extent followed by regular extents results in EIO errors when doing reads or writes against the first 4K of the file.
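For reference, the clone operation discussed here (and performed by $CLONER_PROG in the tests below) boils down to the BTRFS_IOC_CLONE_RANGE ioctl. The following is a minimal sketch, assuming <linux/btrfs.h> provides the ioctl number and argument struct; older kernels or headers may need them defined locally:

/* clone_range.c - clone a whole source file into a destination file. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
	struct btrfs_ioctl_clone_range_args args = { 0 };
	int src, dst;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
		return 1;
	}
	src = open(argv[1], O_RDONLY);
	dst = open(argv[2], O_RDWR);
	if (src < 0 || dst < 0) {
		perror("open");
		return 1;
	}

	args.src_fd = src;
	args.src_offset = 0;
	args.src_length = 0;	/* 0 means clone to the end of the source */
	args.dest_offset = 0;

	/* With the fix, cloning an inline extent into a larger destination
	 * fails with EOPNOTSUPP instead of corrupting the destination. */
	if (ioctl(dst, BTRFS_IOC_CLONE_RANGE, &args) < 0)
		perror("BTRFS_IOC_CLONE_RANGE");

	close(src);
	close(dst);
	return 0;
}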
Also, the clone ioctl allows one to lose data if the source file consists of
a single inline extent, with a size of N bytes, and the destination file
consists of a single inline extent with a size of M bytes, where we have
M > N. In this case the clone operation removes the inline extent from the
destination file and then copies the inline extent from the source file into
the destination file - we lose the M - N bytes from the destination file,
and a read operation will get the value 0x00 for any bytes in the range
[N, M] (the destination inode's i_size remained as M, that's why we can read
past N bytes).

So fix this by not allowing such destructive operations to happen and
returning errno EOPNOTSUPP to user space.

Currently the fstest btrfs/035 tests the data loss case but it totally
ignores this - i.e. it expects the operation to succeed and does not check
that we got data loss.

The following test case for fstests exercises all these cases that result in
file corruption and data loss:

seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1	# failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15

_cleanup()
{
	rm -f $tmp.*
}

# get standard environment, filters and checks
. ./common/rc
. ./common/filter

# real QA test starts here
_need_to_be_root
_supported_fs btrfs
_supported_os Linux
_require_scratch
_require_cloner
_require_btrfs_fs_feature "no_holes"
_require_btrfs_mkfs_feature "no-holes"

rm -f $seqres.full

test_cloning_inline_extents()
{
	local mkfs_opts=$1
	local mount_opts=$2

	_scratch_mkfs $mkfs_opts >>$seqres.full 2>&1
	_scratch_mount $mount_opts

	# File bar, the source for all the following clone operations, consists
	# of a single inline extent (50 bytes).
	$XFS_IO_PROG -f -c "pwrite -S 0xbb 0 50" $SCRATCH_MNT/bar \
		| _filter_xfs_io

	# Test cloning into a file with an extent (non-inlined) where the
	# destination offset overlaps that extent. It should not be possible to
	# clone the inline extent from file bar into this file.
	$XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 16K" $SCRATCH_MNT/foo \
		| _filter_xfs_io
	$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo

	# Doing IO against any range in the first 4K of the file should work.
	# Due to a past clone ioctl bug which allowed cloning the inline extent,
	# these operations resulted in EIO errors.
	echo "File foo data after clone operation:"
	# All bytes should have the value 0xaa (clone operation failed and did
	# not modify our file).
	od -t x1 $SCRATCH_MNT/foo
	$XFS_IO_PROG -c "pwrite -S 0xcc 0 100" $SCRATCH_MNT/foo | _filter_xfs_io

	# Test cloning the inline extent against a file which has a hole in its
	# first 4K followed by a non-inlined extent. It should likewise not be
	# possible to clone the inline extent from file bar into this file.
	$XFS_IO_PROG -f -c "pwrite -S 0xdd 4K 12K" $SCRATCH_MNT/foo2 \
		| _filter_xfs_io
	$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo2

	# Doing IO against any range in the first 4K of the file should work.
	# Due to a past clone ioctl bug which allowed cloning the inline extent,
	# these operations resulted in EIO errors.
	echo "File foo2 data after clone operation:"
	# All bytes should have the value 0x00 (clone operation failed and did
	# not modify our file).
	od -t x1 $SCRATCH_MNT/foo2
	$XFS_IO_PROG -c "pwrite -S 0xee 0 90" $SCRATCH_MNT/foo2 | _filter_xfs_io

	# Test cloning the inline extent against a file which has a size of zero
	# but has a prealloc extent. It should likewise not be possible to clone
	# the inline extent from file bar into this file.
	$XFS_IO_PROG -f -c "falloc -k 0 1M" $SCRATCH_MNT/foo3 | _filter_xfs_io
	$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo3

	# Doing IO against any range in the first 4K of the file should work.
	# Due to a past clone ioctl bug which allowed cloning the inline extent,
	# these operations resulted in EIO errors.
	echo "First 50 bytes of foo3 after clone operation:"
	# Should not be able to read any bytes, file has 0 bytes i_size (the
	# clone operation failed and did not modify our file).
	od -t x1 $SCRATCH_MNT/foo3
	$XFS_IO_PROG -c "pwrite -S 0xff 0 90" $SCRATCH_MNT/foo3 | _filter_xfs_io

	# Test cloning the inline extent against a file which consists of a
	# single inline extent that has a size not greater than the size of
	# bar's inline extent (40 < 50).
	# It should be possible to do the extent cloning from bar to this file.
	$XFS_IO_PROG -f -c "pwrite -S 0x01 0 40" $SCRATCH_MNT/foo4 \
		| _filter_xfs_io
	$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo4

	# Doing IO against any range in the first 4K of the file should work.
	echo "File foo4 data after clone operation:"
	# Must match file bar's content.
	od -t x1 $SCRATCH_MNT/foo4
	$XFS_IO_PROG -c "pwrite -S 0x02 0 90" $SCRATCH_MNT/foo4 | _filter_xfs_io

	# Test cloning the inline extent against a file which consists of a
	# single inline extent that has a size greater than the size of bar's
	# inline extent (60 > 50).
	# It should not be possible to clone the inline extent from file bar
	# into this file.
	$XFS_IO_PROG -f -c "pwrite -S 0x03 0 60" $SCRATCH_MNT/foo5 \
		| _filter_xfs_io
	$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo5

	# Reading the file should not fail.
	echo "File foo5 data after clone operation:"
	# Must have a size of 60 bytes, with all bytes having a value of 0x03
	# (the clone operation failed and did not modify our file).
	od -t x1 $SCRATCH_MNT/foo5

	# Test cloning the inline extent against a file which has no extents but
	# has a size greater than bar's inline extent (16K > 50).
	# It should not be possible to clone the inline extent from file bar
	# into this file.
	$XFS_IO_PROG -f -c "truncate 16K" $SCRATCH_MNT/foo6 | _filter_xfs_io
	$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo6

	# Reading the file should not fail.
	echo "File foo6 data after clone operation:"
	# Must have a size of 16K, with all bytes having a value of 0x00 (the
	# clone operation failed and did not modify our file).
	od -t x1 $SCRATCH_MNT/foo6

	# Test cloning the inline extent against a file which has no extents but
	# has a size not greater than bar's inline extent (30 < 50).
	# It should be possible to clone the inline extent from file bar into
	# this file.
	$XFS_IO_PROG -f -c "truncate 30" $SCRATCH_MNT/foo7 | _filter_xfs_io
	$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo7

	# Reading the file should not fail.
	echo "File foo7 data after clone operation:"
	# Must have a size of 50 bytes, with all bytes having a value of 0xbb.
	od -t x1 $SCRATCH_MNT/foo7

	# Test cloning the inline extent against a file which has a size not
	# greater than the size of bar's inline extent (20 < 50) but has
	# a prealloc extent that goes beyond the file's size. It should not be
	# possible to clone the inline extent from bar into this file.
	$XFS_IO_PROG -f -c "falloc -k 0 1M" \
		-c "pwrite -S 0x88 0 20" \
		$SCRATCH_MNT/foo8 | _filter_xfs_io
	$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo8

	echo "File foo8 data after clone operation:"
	# Must have a size of 20 bytes, with all bytes having a value of 0x88
	# (the clone operation did not modify our file).
	od -t x1 $SCRATCH_MNT/foo8

	_scratch_unmount
}

echo -e "\nTesting without compression and without the no-holes feature...\n"
test_cloning_inline_extents

echo -e "\nTesting with compression and without the no-holes feature...\n"
test_cloning_inline_extents "" "-o compress"

echo -e "\nTesting without compression and with the no-holes feature...\n"
test_cloning_inline_extents "-O no-holes" ""

echo -e "\nTesting with compression and with the no-holes feature...\n"
test_cloning_inline_extents "-O no-holes" "-o compress"

status=0
exit

Signed-off-by: Filipe Manana
[bwh: Backported to 3.2:
 - Adjust parameters to btrfs_drop_extents()
 - Drop use of ASSERT()
 - Keep using BUG_ON() for other error cases, as there is no
   btrfs_abort_transaction()
 - Adjust context]
Signed-off-by: Ben Hutchings

commit e83c48cb331eec759cfb9cf93ed2c5face453885
Author: Jan Schmidt
Date: Tue Nov 22 15:14:33 2011 +0100

Btrfs: added helper btrfs_next_item()

commit c7d22a3c3cdb73d8a0151e2ccc8cf4a48c48310b upstream.

btrfs_next_item() makes the btrfs path point to the next item, crossing leaf
boundaries if needed.

Signed-off-by: Arne Jansen
Signed-off-by: Jan Schmidt
[bwh: Dependency of the following fix]
Signed-off-by: Ben Hutchings

commit 9914546b256457c85e45c99f9d95759b07452f1a
Author: Eric Dumazet
Date: Fri Oct 9 11:29:32 2015 -0700

packet: fix match_fanout_group()

commit 161642e24fee40fba2c5bc2ceacc00d118a22d65 upstream.

Recent TCP listener patches exposed a prior af_packet bug:
match_fanout_group() blindly assumes it is always safe to cast sk to a
packet socket to compare fanout with af_packet_priv.

But SYNACK packets can be sent while attached to a request_sock, which is
smaller than a "struct sock". We can read non-existent memory and crash.

Fixes: c0de08d04215 ("af_packet: don't emit packet on orig fanout group")
Fixes: ca6fb0651883 ("tcp: attach SYNACK messages to request sockets instead of listener")
Signed-off-by: Eric Dumazet
Cc: Willem de Bruijn
Cc: Eric Leblond
Signed-off-by: David S. Miller
Signed-off-by: Ben Hutchings

commit e7102453150c7081a27744989374c474d2ebea8e
Author: Dan Carpenter
Date: Mon Sep 21 19:21:51 2015 +0300

devres: fix a for loop bounds check

commit 1f35d04a02a652f14566f875aef3a6f2af4cb77b upstream.

The iomap[] array has PCIM_IOMAP_MAX (6) elements and not
DEVICE_COUNT_RESOURCE (16). This bug was found using a static checker. It
may be that the "if (!(mask & (1 << i)))" check means we never actually go
past the end of the array in real life.

Fixes: ec04b075843d ('iomap: implement pcim_iounmap_regions()')
Signed-off-by: Dan Carpenter
Acked-by: Tejun Heo
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Ben Hutchings

commit f9ac3882c96ad4d554be337d28ea202988b68b3d
Author: Boris BREZILLON
Date: Thu Jul 30 12:18:03 2015 +0200

mtd: mtdpart: fix add_mtd_partitions error path

commit e5bae86797141e4a95e42d825f737cb36d7b8c37 upstream.

If we fail to allocate a partition structure in the middle of the partition
creation process, the already allocated partitions are never removed, which
means they are still present in the partition list and their resources are
never freed.
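The shape of the fix is the usual undo-on-failure pattern. The standalone
sketch below is only a generic illustration of that pattern, not the mtdpart
code, and every name in it is hypothetical: when allocating the Nth entry
fails, everything added so far is torn down before returning an error.

/* cleanup_on_failure.c - generic sketch of the error-path pattern */
#include <stdlib.h>

struct part {
        struct part *next;
        /* per-entry resources would hang off here */
};

static void delete_parts(struct part *head)
{
        while (head) {
                struct part *next = head->next;
                free(head);             /* release whatever the entry owns */
                head = next;
        }
}

static int add_parts(int nparts, struct part **out)
{
        struct part *head = NULL;
        int i;

        for (i = 0; i < nparts; i++) {
                struct part *p = calloc(1, sizeof(*p));
                if (!p) {
                        delete_parts(head);     /* the step the buggy path skipped */
                        return -1;
                }
                p->next = head;
                head = p;
        }
        *out = head;
        return 0;
}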
Signed-off-by: Boris Brezillon
Signed-off-by: Brian Norris
Signed-off-by: Ben Hutchings

commit a6a8977eadc2a96aef3af051d020ed7cb9331d1a
Author: Dan Carpenter
Date: Mon Sep 21 19:19:53 2015 +0300

mwifiex: fix mwifiex_rdeeprom_read()

commit 1f9c6e1bc1ba5f8a10fcd6e99d170954d7c6d382 upstream.

There were several bugs here.

1) The done label was in the wrong place so we didn't copy any information
out when there was no command given.

2) We were using PAGE_SIZE as the size of the buffer instead of
"PAGE_SIZE - pos".

3) snprintf() returns the number of characters that would have been printed
if there were enough space. If there was not enough space (and we had fixed
the memory corruption bug #2) then it would result in an information leak
when we do simple_read_from_buffer(). I've changed it to use scnprintf()
instead.

I also removed the initialization at the start of the function, because I
thought it made the code a little more clear.

Fixes: 5e6e3a92b9a4 ('wireless: mwifiex: initial commit for Marvell mwifiex driver')
Signed-off-by: Dan Carpenter
Acked-by: Amitkumar Karwar
Signed-off-by: Kalle Valo
Signed-off-by: Ben Hutchings

commit 7293a7e1ca7b1bb62977cd2fb97dc57ab1e50acf
Author: Valentin Rothberg
Date: Tue Sep 22 19:00:40 2015 +0200

wm831x_power: Use IRQF_ONESHOT to request threaded IRQs

commit 90adf98d9530054b8e665ba5a928de4307231d84 upstream.

Since commit 1c6c69525b40 ("genirq: Reject bogus threaded irq requests")
threaded IRQs without a primary handler need to be requested with
IRQF_ONESHOT, otherwise the request will fail.

scripts/coccinelle/misc/irqf_oneshot.cocci detected this issue.

Fixes: b5874f33bbaf ("wm831x_power: Use genirq")
Signed-off-by: Valentin Rothberg
Signed-off-by: Sebastian Reichel
Signed-off-by: Ben Hutchings

commit 604bfd00358e3d7fce8dc789fe52d2f2be0fa4c7
Author: Richard Purdie
Date: Fri Sep 18 16:31:33 2015 -0700

HID: core: Avoid uninitialized buffer access

commit 79b568b9d0c7c5d81932f4486d50b38efdd6da6d upstream.

hid_connect adds various strings to the buffer but they're all conditional.
You can find circumstances where nothing would be written to it but the
kernel will still print the supposedly empty buffer with printk. This leads
to corruption on the console/in the logs.

Ensure buf is initialized to an empty string.

Signed-off-by: Richard Purdie
[dvhart: Initialize string to "" rather than assign buf[0] = NULL;]
Cc: Jiri Kosina
Cc: linux-input@vger.kernel.org
Signed-off-by: Darren Hart
Signed-off-by: Jiri Kosina
Signed-off-by: Ben Hutchings

commit adc82592cde76e5d6182c66bf0c67560f0ee0abf
Author: Johannes Berg
Date: Fri Aug 28 10:52:53 2015 +0200

mac80211: fix driver RSSI event calculations

commit 8ec6d97871f37e4743678ea4a455bd59580aa0f4 upstream.

The ifmgd->ave_beacon_signal value cannot be taken as is for comparisons; it
must be divided by 16, since it's represented like that for better accuracy
of the EWMA calculations. This would lead to invalid driver RSSI events. Fix
the used value.

Fixes: 615f7b9bb1f8 ("mac80211: add driver RSSI threshold events")
Signed-off-by: Johannes Berg
Signed-off-by: Ben Hutchings

commit b2af40d1de4d8d3e22c8a6aeb785ab1d1c41801a
Author: Alex Williamson
Date: Tue Sep 15 22:24:46 2015 -0600

PCI: Use function 0 VPD for identical functions, regular VPD for others

commit da2d03ea27f6ed9d2005a67b20dd021ddacf1e4d upstream.

932c435caba8 ("PCI: Add dev_flags bit to access VPD through function 0")
added PCI_DEV_FLAGS_VPD_REF_F0. Previously, we set the flag on every
non-zero function of quirked devices.
If a function turned out to be different from function 0, i.e., it had a
different class, vendor ID, or device ID, the flag remained set but we
didn't make VPD accessible at all.

Flip this around so we only set PCI_DEV_FLAGS_VPD_REF_F0 for functions that
are identical to function 0, and allow regular VPD access for any other
functions.

[bhelgaas: changelog, stable tag]
Fixes: 932c435caba8 ("PCI: Add dev_flags bit to access VPD through function 0")
Signed-off-by: Alex Williamson
Signed-off-by: Bjorn Helgaas
Acked-by: Myron Stowe
Acked-by: Mark Rustad
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings

commit 991e923fb23b1fff62283078fd4734f42e1bf398
Author: Alex Williamson
Date: Tue Sep 15 11:17:21 2015 -0600

PCI: Fix devfn for VPD access through function 0

commit 9d9240756e63dd87d6cbf5da8b98ceb8f8192b55 upstream.

Commit 932c435caba8 ("PCI: Add dev_flags bit to access VPD through function
0") passes PCI_SLOT(devfn) for the devfn parameter of pci_get_slot().
Generally this works because we're fairly well guaranteed that a PCIe device
is at slot address 0, but for the general case, including conventional PCI,
it's incorrect. We need to get the slot and then convert it back into a
devfn.

Fixes: 932c435caba8 ("PCI: Add dev_flags bit to access VPD through function 0")
Signed-off-by: Alex Williamson
Signed-off-by: Bjorn Helgaas
Acked-by: Myron Stowe
Acked-by: Mark Rustad
Signed-off-by: Ben Hutchings
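The conversion the fix calls for is small; the sketch below shows the idea
(an illustration only, not the exact patch, and the helper name is
invented). PCI_SLOT() extracts the slot number from a devfn, and PCI_DEVFN()
rebuilds a proper devfn for function 0 of that slot, which is what
pci_get_slot() actually expects:

/* vpd_func0_sketch.c - illustrative only */
#include <linux/pci.h>

static struct pci_dev *get_func0_of(struct pci_dev *dev)
{
        /* Rebuild a real devfn (same slot, function 0) instead of passing
         * the bare slot number as if it were a devfn. */
        struct pci_dev *f0 = pci_get_slot(dev->bus,
                                          PCI_DEVFN(PCI_SLOT(dev->devfn), 0));

        /* pci_get_slot() returns a referenced pci_dev (or NULL); the caller
         * is expected to drop the reference with pci_dev_put() when done. */
        return f0;
}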