Brian Foster [Wed, 31 May 2017 15:22:52 +0000 (08:22 -0700)]
xfs: use ->b_state to fix buffer I/O accounting release race
commit
63db7c815bc0997c29e484d2409684fdd9fcd93b upstream.
We've had user reports of unmount hangs in xfs_wait_buftarg() that
analysis shows are due to btp->bt_io_count == -1. bt_io_count
represents the count of in-flight asynchronous buffers and thus
should always be >= 0. xfs_wait_buftarg() waits for this value to
stabilize to zero in order to ensure that all untracked (with
respect to the lru) buffers have completed I/O processing before
unmount proceeds to tear down in-core data structures.
The value of -1 implies an I/O accounting decrement race. Indeed,
the fact that xfs_buf_ioacct_dec() is called from xfs_buf_rele()
(where the buffer lock is no longer held) means that bp->b_flags can
be updated from an unsafe context. While a user-level reproducer is
currently not available, some intrusive hacks to run racing buffer
lookups/ioacct/releases from multiple threads were used to
successfully manufacture this problem.
Existing callers do not expect to acquire the buffer lock from
xfs_buf_rele(). Therefore, we can not safely update ->b_flags from
this context. It turns out that we already have separate buffer
state bits and associated serialization for dealing with buffer LRU
state in the form of ->b_state and ->b_lock. Therefore, replace the
_XBF_IN_FLIGHT flag with a ->b_state variant, update the I/O
accounting wrappers appropriately and make sure they are used with
the correct locking. This ensures that buffer in-flight state can be
modified at buffer release time without racing with modifications
from a buffer lock holder.
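A minimal sketch of the shape of the change, assuming the bit name and reusing the existing ->b_lock spinlock (illustrative, not the exact patch):

  /* in-flight accounting now lives in ->b_state under ->b_lock, so it can
   * be cleared safely from xfs_buf_rele() without holding the buffer lock
   */
  #define XFS_BSTATE_IN_FLIGHT	(1 << 1)	/* assumed bit value */

  static void xfs_buf_ioacct_dec(struct xfs_buf *bp)
  {
  	spin_lock(&bp->b_lock);
  	if (bp->b_state & XFS_BSTATE_IN_FLIGHT) {
  		bp->b_state &= ~XFS_BSTATE_IN_FLIGHT;
  		percpu_counter_dec(&bp->b_target->bt_io_count);
  	}
  	spin_unlock(&bp->b_lock);
  }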
Fixes: 9c7504aa72b6 ("xfs: track and serialize in-flight async buffers against unmount")
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Tested-by: Libor Pechacek <lpechacek@suse.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Thu, 18 May 2017 23:36:22 +0000 (16:36 -0700)]
xfs: Fix missed holes in SEEK_HOLE implementation
commit
5375023ae1266553a7baa0845e82917d8803f48c upstream.
XFS SEEK_HOLE implementation could miss a hole in an unwritten extent as
can be seen by the following command:
xfs_io -c "falloc 0 256k" -c "pwrite 0 56k" -c "pwrite 128k 8k" \
       -c "seek -h 0" file
wrote 57344/57344 bytes at offset 0
56 KiB, 14 ops; 0.0000 sec (49.312 MiB/sec and 12623.9856 ops/sec)
wrote 8192/8192 bytes at offset 131072
8 KiB, 2 ops; 0.0000 sec (70.383 MiB/sec and 18018.0180 ops/sec)
Whence Result
HOLE 139264
Here we can see that the hole at offset 56k was simply ignored by the
SEEK_HOLE implementation. The bug is in xfs_find_get_desired_pgoff(),
which does not properly detect the case when pages are not contiguous.
Fix the problem by properly detecting when the found page has a larger
offset than expected.
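Roughly, the added check compares the offset of each page returned by the pagevec lookup with the last offset examined; a sketch (variable names such as lastoff, found and offset are assumptions, not the exact patch):

  /* if the page found by the lookup starts beyond the last offset we
   * examined, the gap in between is a hole and must be reported
   */
  if (whence == SEEK_HOLE && lastoff < page_offset(page)) {
  	found = true;
  	*offset = lastoff;
  	break;
  }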
Fixes: d126d43f631f996daeee5006714fed914be32368
Signed-off-by: Jan Kara <jack@suse.cz>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Patrik Jakobsson [Tue, 18 Apr 2017 11:43:32 +0000 (13:43 +0200)]
drm/gma500/psb: Actually use VBT mode when it is found
commit
82bc9a42cf854fdf63155759c0aa790bd1f361b0 upstream.
With LVDS we were incorrectly picking the pre-programmed mode instead of
the preferred mode provided by VBT. Make sure we pick the VBT mode if
one is provided. It is likely that the mode read-out code is still wrong
but this patch fixes the immediate problem on most machines.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78562
Signed-off-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170418114332.12183-1-patrik.r.jakobsson@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Thompson [Tue, 24 Jan 2017 23:18:02 +0000 (15:18 -0800)]
mm/slub.c: trace free objects at KERN_INFO
commit
aa2efd5ea4041754da4046c3d2e7edaac9526258 upstream.
Currently when trace is enabled (e.g. slub_debug=T,kmalloc-128 ) the
trace messages are mostly output at KERN_INFO. However the trace code
also calls print_section() to hexdump the head of a free object. This
is hard coded to use KERN_ERR, meaning the console is deluged with trace
messages even if we've asked for quiet.
Fix this the obvious way by adding a level parameter to
print_section(), allowing calls from the trace code to use the same
trace level as other trace messages.
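The interface change this describes is small; a sketch, assuming the mm/slub.c helper keeps its print_hex_dump() core and that object/s are the usual local names (illustrative only):

  static void print_section(char *level, char *text, u8 *addr,
  			  unsigned int length)
  {
  	print_hex_dump(level, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
  		       length, 1);
  }

  /* the trace path can now match the level of the other trace output */
  print_section(KERN_INFO, "Object ", object, s->object_size);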
Link: http://lkml.kernel.org/r/20170113154850.518-1-daniel.thompson@linaro.org
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Fri, 2 Jun 2017 21:46:25 +0000 (14:46 -0700)]
slub/memcg: cure the brainless abuse of sysfs attributes
commit
478fe3037b2278d276d4cd9cd0ab06c4cb2e9b32 upstream.
memcg_propagate_slab_attrs() abuses the sysfs attribute file functions
to propagate settings from the root kmem_cache to a newly created
kmem_cache. It does that with:
   attr->show(root, buf);
   attr->store(new, buf, strlen(buf));
Aside of being a lazy and absurd hackery this is broken because it does
not check the return value of the show() function.
Some of the show() functions return 0 w/o touching the buffer. That
means in such a case the store function is called with the stale content
of the previous show(). That causes nonsense like invoking
kmem_cache_shrink() on a newly created kmem_cache. In the worst case it
would cause handing in an uninitialized buffer.
This should be rewritten proper by adding a propagate() callback to
those slub_attributes which must be propagated and avoid that insane
conversion to and from ASCII, but that's too large for a hot fix.
Check at least the return value of the show() function, so calling
store() with stale content is prevented.
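The minimal check boils down to something like the following sketch (variable names for the root and child cache are assumed):

  ssize_t len = attr->show(root_cache, buf);

  /* only feed the buffer to store() if show() actually produced output */
  if (len > 0)
  	attr->store(s, buf, len);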
Steven said:
"It can cause a deadlock with get_online_cpus() that has been uncovered
by recent cpu hotplug and lockdep changes that Thomas and Peter have
been doing.
  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(cpu_hotplug.lock);
                                lock(slab_mutex);
                                lock(cpu_hotplug.lock);
   lock(slab_mutex);

  *** DEADLOCK ***"
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1705201244540.2255@nanos
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrea Arcangeli [Fri, 2 Jun 2017 21:46:11 +0000 (14:46 -0700)]
ksm: prevent crash after write_protect_page fails
commit
a7306c3436e9c8e584a4b9fad5f3dc91be2a6076 upstream.
"err" needs to be left set to -EFAULT if split_huge_page succeeds.
Otherwise if "err" gets clobbered with zero and write_protect_page
fails, try_to_merge_one_page() will succeed instead of returning -EFAULT
and then try_to_merge_with_ksm_page() will continue thinking kpage is a
PageKsm when in fact it's still an anonymous page. Eventually it'll
crash in page_add_anon_rmap.
This has been reproduced on a Fedora 25 kernel but I can reproduce it with
upstream too.
The bug was introduced by commit
f765f540598a ("ksm: prepare to new THP
semantics"), which went into v4.5.
page:fffff67546ce1cc0 count:4 mapcount:2 mapping:ffffa094551e36e1 index:0x7f0f46673
flags: 0x2ffffc0004007c(referenced|uptodate|dirty|lru|active|swapbacked)
page dumped because: VM_BUG_ON_PAGE(!PageLocked(page))
page->mem_cgroup:ffffa09674bf0000
------------[ cut here ]------------
kernel BUG at mm/rmap.c:1222!
CPU: 1 PID: 76 Comm: ksmd Not tainted 4.9.3-200.fc25.x86_64 #1
RIP: do_page_add_anon_rmap+0x1c4/0x240
Call Trace:
page_add_anon_rmap+0x18/0x20
try_to_merge_with_ksm_page+0x50b/0x780
ksm_scan_thread+0x1211/0x1410
? prepare_to_wait_event+0x100/0x100
? try_to_merge_with_ksm_page+0x780/0x780
kthread+0xd9/0xf0
? kthread_park+0x60/0x60
ret_from_fork+0x25/0x30
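The shape of the fix in try_to_merge_one_page() is roughly the following sketch (not the exact hunk): a successful split_huge_page() must not overwrite the -EFAULT already stored in err.

  if (PageTransCompound(page)) {
  	/* previously: err = split_huge_page(page); clobbered err with 0 */
  	if (split_huge_page(page))
  		goto out_unlock;
  }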
Fixes: f765f54059 ("ksm: prepare to new THP semantics")
Link: http://lkml.kernel.org/r/20170513131040.21732-1-aarcange@redhat.com
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Federico Simoncelli <fsimonce@redhat.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rob Landley [Sat, 20 May 2017 20:03:29 +0000 (15:03 -0500)]
x86/boot: Use CROSS_COMPILE prefix for readelf
commit
3780578761921f094179c6289072a74b2228c602 upstream.
The boot code Makefile contains a straight 'readelf' invocation. This
causes build warnings in cross compile environments, when there is no
unprefixed readelf accessible via $PATH.
Add the missing $(CROSS_COMPILE) prefix.
[ tglx: Rewrote changelog ]
Fixes: 98f78525371b ("x86/boot: Refuse to build with data relocations")
Signed-off-by: Rob Landley <rob@landley.net>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Paul Bolle <pebolle@tiscali.nl>
Cc: "H.J. Lu" <hjl.tools@gmail.com>
Link: http://lkml.kernel.org/r/ced18878-693a-9576-a024-113ef39a22c0@landley.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mike Marciniszyn [Fri, 12 May 2017 16:02:00 +0000 (09:02 -0700)]
RDMA/qib,hfi1: Fix MR reference count leak on write with immediate
commit
1feb40067cf04ae48d65f728d62ca255c9449178 upstream.
The handling of IB_RDMA_WRITE_ONLY_WITH_IMMEDIATE will leak a memory
reference when a buffer cannot be allocated for returning the immediate
data.
The issue is that the rkey validation has already occurred and the RNR
nak fails to release the reference that was fruitlessly gotten. The
peer will then send the identical single packet request when its RNR
timer pops.
The fix is to release the held reference prior to the rnr nak exit.
This is the only sequence that requires both rkey validation and the
buffer allocation on the same packet.
Tested-by: Tadeusz Struk <tadeusz.struk@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michal Hocko [Fri, 2 Jun 2017 21:46:49 +0000 (14:46 -0700)]
mm: consider memblock reservations for deferred memory initialization sizing
commit
864b9a393dcb5aed09b8fd31b9bbda0fdda99374 upstream.
We have seen an early OOM killer invocation on ppc64 systems with
crashkernel=4096M:
kthreadd invoked oom-killer: gfp_mask=0x16040c0(GFP_KERNEL|__GFP_COMP|__GFP_NOTRACK), nodemask=7, order=0, oom_score_adj=0
kthreadd cpuset=/ mems_allowed=7
CPU: 0 PID: 2 Comm: kthreadd Not tainted 4.4.68-1.gd7fe927-default #1
Call Trace:
dump_stack+0xb0/0xf0 (unreliable)
dump_header+0xb0/0x258
out_of_memory+0x5f0/0x640
__alloc_pages_nodemask+0xa8c/0xc80
kmem_getpages+0x84/0x1a0
fallback_alloc+0x2a4/0x320
kmem_cache_alloc_node+0xc0/0x2e0
copy_process.isra.25+0x260/0x1b30
_do_fork+0x94/0x470
kernel_thread+0x48/0x60
kthreadd+0x264/0x330
ret_from_kernel_thread+0x5c/0xa4
Mem-Info:
active_anon:0 inactive_anon:0 isolated_anon:0
active_file:0 inactive_file:0 isolated_file:0
unevictable:0 dirty:0 writeback:0 unstable:0
slab_reclaimable:5 slab_unreclaimable:73
mapped:0 shmem:0 pagetables:0 bounce:0
free:0 free_pcp:0 free_cma:0
Node 7 DMA free:0kB min:0kB low:0kB high:0kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:52428800kB managed:110016kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:320kB slab_unreclaimable:4672kB kernel_stack:1152kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 0
Node 7 DMA: 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB = 0kB
0 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap = 0kB
Total swap = 0kB
819200 pages RAM
0 pages HighMem/MovableOnly
817481 pages reserved
0 pages cma reserved
0 pages hwpoisoned
The reason is that the managed memory is too low (only 110MB) while the
rest of the 50GB is still waiting for the deferred initialization to
be done. update_defer_init estimates the initial memory to initialize
to be at least 2GB, but it doesn't consider any memory allocated in that
range. In this particular case we've had
Reserving 4096MB of memory at 128MB for crashkernel (System RAM: 51200MB)
so the low 2GB is mostly depleted.
Fix this by considering memblock allocations in the initial static
initialization estimation. Move the max_initialise to
reset_deferred_meminit and implement a simple memblock_reserved_memory
helper which iterates all reserved blocks and sums the size of all that
start below the given address. The cumulative size is then added on top
of the initial estimation. This is still not ideal because
reset_deferred_meminit doesn't consider holes and so the reservation might
be above the initial estimation, which we ignore, but let's keep the
logic simple until we really need to handle more complicated cases.
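A hedged sketch of such a helper (the name, the iteration macro and its placement are assumptions for illustration):

  /* sum all memblock reservations that start below @limit so
   * reset_deferred_meminit() can grow its initial estimate by the amount
   * already consumed by early reservations such as crashkernel
   */
  unsigned long __init memblock_reserved_memory_below(phys_addr_t limit)
  {
  	struct memblock_region *r;
  	unsigned long size = 0;

  	for_each_memblock(reserved, r) {
  		if (r->base >= limit)
  			continue;
  		size += r->size;
  	}
  	return size;
  }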
Fixes: 3a80a7fa7989 ("mm: meminit: initialise a subset of struct pages if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set")
Link: http://lkml.kernel.org/r/20170531104010.GI27783@dhcp22.suse.cz
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Tested-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yisheng Xie [Fri, 2 Jun 2017 21:46:43 +0000 (14:46 -0700)]
mlock: fix mlock count can not decrease in race condition
commit
70feee0e1ef331b22cc51f383d532a0d043fbdcc upstream.
Kefeng reported that when running the following test, the mlock count in
meminfo will increase permanently:
[1] testcase
linux:~ # cat test_mlockal
grep Mlocked /proc/meminfo
for j in `seq 0 10`
do
	for i in `seq 4 15`
	do
		./p_mlockall >> log &
	done
	sleep 0.2
done
# wait some time to let mlock counter decrease and 5s may not enough
sleep 5
grep Mlocked /proc/meminfo

linux:~ # cat p_mlockall.c
#include <sys/mman.h>
#include <stdlib.h>
#include <stdio.h>

#define SPACE_LEN 4096

int main(int argc, char ** argv)
{
	int ret;
	void *adr = malloc(SPACE_LEN);

	if (!adr)
		return -1;

	ret = mlockall(MCL_CURRENT | MCL_FUTURE);
	printf("mlcokall ret = %d\n", ret);

	ret = munlockall();
	printf("munlcokall ret = %d\n", ret);

	free(adr);
	return 0;
}
In __munlock_pagevec() we should decrement NR_MLOCK for each page where
we clear the PageMlocked flag. Commit
1ebb7cc6a583 ("mm: munlock: batch
NR_MLOCK zone state updates") has introduced a bug where we don't
decrement NR_MLOCK for pages where we clear the flag, but fail to
isolate them from the lru list (e.g. when the pages are on some other
cpu's percpu pagevec). Since PageMlocked stays cleared, the NR_MLOCK
accounting gets permanently disrupted by this.
Fix it by counting the number of pages whose PageMlocked flag is cleared.
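A sketch of the counting change in __munlock_pagevec() (illustrative; the surrounding code is abbreviated):

  int delta_munlocked = -nr;
  ...
  if (TestClearPageMlocked(page)) {
  	if (__munlock_isolate_lru_page(page, false))
  		continue;
  	else
  		__munlock_isolation_failed(page);
  } else {
  	/* flag already clear: do not subtract this page from NR_MLOCK */
  	delta_munlocked++;
  }
  ...
  __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);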
Fixes: 1ebb7cc6a583 ("mm: munlock: batch NR_MLOCK zone state updates")
Link: http://lkml.kernel.org/r/1495678405-54569-1-git-send-email-xieyisheng1@huawei.com
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Reported-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Joern Engel <joern@logfs.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michel Lespinasse <walken@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Cc: zhongjiang <zhongjiang@huawei.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Punit Agrawal [Fri, 2 Jun 2017 21:46:40 +0000 (14:46 -0700)]
mm/migrate: fix refcount handling when !hugepage_migration_supported()
commit
30809f559a0d348c2dfd7ab05e9a451e2384962e upstream.
On failing to migrate a page, soft_offline_huge_page() performs the
necessary update to the hugepage ref-count.
But when !hugepage_migration_supported() , unmap_and_move_hugepage()
also decrements the page ref-count for the hugepage. The combined
behaviour leaves the ref-count in an inconsistent state.
This leads to soft lockups when running the overcommitted hugepage test
from mce-tests suite.
Soft offlining pfn 0x83ed600 at process virtual address 0x400000000000
soft offline: 0x83ed600: migration failed 1, type
1fffc00000008008 (uptodate|head)
INFO: rcu_preempt detected stalls on CPUs/tasks:
Tasks blocked on level-0 rcu_node (CPUs 0-7): P2715
(detected by 7, t=5254 jiffies, g=963, c=962, q=321)
thugetlb_overco R running task 0 2715 2685 0x00000008
Call trace:
dump_backtrace+0x0/0x268
show_stack+0x24/0x30
sched_show_task+0x134/0x180
rcu_print_detail_task_stall_rnp+0x54/0x7c
rcu_check_callbacks+0xa74/0xb08
update_process_times+0x34/0x60
tick_sched_handle.isra.7+0x38/0x70
tick_sched_timer+0x4c/0x98
__hrtimer_run_queues+0xc0/0x300
hrtimer_interrupt+0xac/0x228
arch_timer_handler_phys+0x3c/0x50
handle_percpu_devid_irq+0x8c/0x290
generic_handle_irq+0x34/0x50
__handle_domain_irq+0x68/0xc0
gic_handle_irq+0x5c/0xb0
Address this by changing the putback_active_hugepage() in
soft_offline_huge_page() to putback_movable_pages().
This only triggers on systems that enable memory failure handling
(ARCH_SUPPORTS_MEMORY_FAILURE) but not hugepage migration
(!ARCH_ENABLE_HUGEPAGE_MIGRATION).
I imagine this wasn't triggered as there aren't many systems running
this configuration.
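A sketch of the resulting error path in soft_offline_huge_page() (illustrative; surrounding code abbreviated):

  if (ret) {
  	...
  	/* let the migration core put the pages back; calling
  	 * putback_active_hugepage() here dropped a reference that
  	 * unmap_and_move_huge_page() had already dropped when hugepage
  	 * migration is unsupported
  	 */
  	if (!list_empty(&pagelist))
  		putback_movable_pages(&pagelist);
  }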
[akpm@linux-foundation.org: remove dead comment, per Naoya]
Link: http://lkml.kernel.org/r/20170525135146.32011-1-punit.agrawal@arm.com
Reported-by: Manoj Iyer <manoj.iyer@canonical.com>
Tested-by: Manoj Iyer <manoj.iyer@canonical.com>
Suggested-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexander Tsoy [Mon, 22 May 2017 17:58:11 +0000 (20:58 +0300)]
ALSA: hda - apply STAC_9200_DELL_M22 quirk for Dell Latitude D430
commit
1fc2e41f7af4572b07190f9dec28396b418e9a36 upstream.
This model is actually called 92XXM2-8 in the Windows driver. But since pin
configs for M22 and M28 are identical, just reuse M22 quirk.
Fixes external microphone (tested) and probably docking station ports
(not tested).
Signed-off-by: Alexander Tsoy <alexander@tsoy.me>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicolas Iooss [Fri, 2 Jun 2017 21:46:28 +0000 (14:46 -0700)]
pcmcia: remove left-over %Z format
commit
ff5a20169b98d84ad8d7f99f27c5ebbb008204d6 upstream.
Commit
5b5e0928f742 ("lib/vsprintf.c: remove %Z support") removed some
usages of format %Z but forgot "%.2Zx". This makes clang 4.0 report a
-Wformat-extra-args warning because it does not know about %Z.
Replace %Z with %z.
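For reference, %z is the standard printk length modifier for size_t; an illustrative before/after (the driver's actual format string is abbreviated here):

  /* before: clang 4.0 warns because %Z is no longer recognised */
  printk(KERN_DEBUG "%.2Zx ", (size_t)val);
  /* after */
  printk(KERN_DEBUG "%.2zx ", (size_t)val);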
Link: http://lkml.kernel.org/r/20170520090946.22562-1-nicolas.iooss_linux@m4x.org
Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Cc: Harald Welte <laforge@gnumonks.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michel Dänzer [Mon, 30 Jan 2017 03:06:35 +0000 (12:06 +0900)]
drm/radeon: Fix vram_size/visible values in DRM_RADEON_GEM_INFO ioctl
commit
51964e9e12d0a054002a1a0d1dec4f661c7aaf28 upstream.
vram_size is supposed to be the total amount of VRAM that can be used by
userspace, which corresponds to the TTM VRAM manager size (which is
normally the full amount of VRAM, but can be just the visible VRAM when
DMA can't be used for BO migration for some reason).
The above was incorrectly used for vram_visible before, resulting in
generally too large values being reported.
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lyude [Thu, 11 May 2017 23:31:12 +0000 (19:31 -0400)]
drm/radeon: Unbreak HPD handling for r600+
commit
3d18e33735a02b1a90aecf14410bf3edbfd4d3dc upstream.
We end up reading the interrupt register for HPD5, and then writing it
to HPD6 which on systems without anything using HPD5 results in
permanently disabling hotplug on one of the display outputs after the
first time we acknowledge a hotplug interrupt from the GPU.
This code is really bad. But for now, let's just fix this. I will
hopefully have a large patch series to refactor all of this soon.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Lyude <lyude@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Thu, 11 May 2017 17:14:14 +0000 (13:14 -0400)]
drm/radeon/ci: disable mclk switching for high refresh rates (v2)
commit
58d7e3e427db1bd68f33025519a9468140280a75 upstream.
Even if the vblank period would allow it, it still seems to
be problematic on some cards.
v2: fix logic inversion (Nils)
bug: https://bugs.freedesktop.org/show_bug.cgi?id=96868
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ram Pai [Thu, 26 Jan 2017 18:37:01 +0000 (16:37 -0200)]
scsi: mpt3sas: Force request partial completion alignment
commit
f2e767bb5d6ee0d988cb7d4e54b0b21175802b6b upstream.
The firmware or device, possibly under a heavy I/O load, can return on a
partial unaligned boundary. Scsi-ml expects these requests to be
completed on an alignment boundary. Scsi-ml blindly requeues the I/O
without checking the alignment boundary of the I/O request for the
remaining bytes. This leads to errors, since devices cannot perform
non-aligned read/write operations.
This patch fixes the issue in the driver. It aligns unaligned
completions of FS requests, by truncating them to the nearest alignment
boundary.
[mkp: simplified if statement]
Reported-by: Mauricio Faria De Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Acked-by: Sreekanth Reddy <Sreekanth.Reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ming Lei [Mon, 22 May 2017 15:05:04 +0000 (23:05 +0800)]
nvme: avoid to use blk_mq_abort_requeue_list()
commit
986f75c876dbafed98eba7cb516c5118f155db23 upstream.
NVMe may simply add a request to the requeue list without kicking off
the requeue if hw queues are stopped. Then blk_mq_abort_requeue_list()
is called in both nvme_kill_queues() and nvme_ns_remove() for
dealing with this issue.
Unfortunately blk_mq_abort_requeue_list() is absolutely a
race maker, for example, one request may be requeued during
the aborting. So this patch just calls blk_mq_kick_requeue_list() in
nvme_kill_queues() to handle this issue like what nvme_start_queues()
does. Now all requests in requeue list when queues are stopped will be
handled by blk_mq_kick_requeue_list() when queues are restarted, either
in nvme_start_queues() or in nvme_kill_queues().
Reported-by: Zhang Yi <yizhan@redhat.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ming Lei [Mon, 22 May 2017 15:05:03 +0000 (23:05 +0800)]
nvme: use blk_mq_start_hw_queues() in nvme_kill_queues()
commit
806f026f9b901eaf1a6baeb48b5da18d6a4f818e upstream.
Inside nvme_kill_queues(), we have to start hw queues for
draining requests in sw queues, the .dispatch list and the requeue list,
so use blk_mq_start_hw_queues() instead of blk_mq_start_stopped_hw_queues(),
which only runs queues that are stopped; the queues may have
been started already, for example when nvme_start_queues() is called from
the reset work function.
blk_mq_start_hw_queues() runs hw queues in the current context, instead
of running them asynchronously like before. Given nvme_kill_queues() is
run from either remove context or reset worker context, both are fine
to run hw queues directly. And holding namespaces_mutex isn't a
problem either, because nvme_start_freeze() runs hw queues this way
already.
Reported-by: Zhang Yi <yizhan@redhat.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marta Rybczynska [Mon, 10 Apr 2017 15:12:34 +0000 (17:12 +0200)]
nvme-rdma: support devices with queue size < 32
commit
0544f5494a03b8846db74e02be5685d1f32b06c9 upstream.
In the case of a small NVMe-oF queue size (<32) we may enter a deadlock:
IB send completions are only signalled once every 32 requests, so they
are never signalled and the send queue fills up.
The error is seen as (using mlx5):
[ 2048.693355] mlx5_0:mlx5_ib_post_send:3765:(pid 7273):
[ 2048.693360] nvme nvme1: nvme_rdma_post_send failed with error code -12
This patch changes the way the signaling is done so that it depends on
the queue depth now. The magic define has been removed completely.
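A hedged sketch of depth-based signaling (function and field names assumed):

  /* signal a completion every queue_size/2 sends instead of every fixed
   * 32, so queues shallower than 32 entries still see completions
   */
  static inline bool nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue)
  {
  	int limit = max(queue->queue_size / 2, 1);

  	return (++queue->sig_count % limit) == 0;
  }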
Signed-off-by: Marta Rybczynska <marta.rybczynska@kalray.eu>
Signed-off-by: Samuel Jones <sjones@kalray.eu>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason Gerecke [Tue, 25 Apr 2017 18:29:56 +0000 (11:29 -0700)]
HID: wacom: Have wacom_tpc_irq guard against possible NULL dereference
commit
2ac97f0f6654da14312d125005c77a6010e0ea38 upstream.
The following Smatch complaint was generated in response to commit
2a6cdbd ("HID: wacom: Introduce new 'touch_input' device"):
drivers/hid/wacom_wac.c:1586 wacom_tpc_irq()
error: we previously assumed 'wacom->touch_input' could be null (see line 1577)
The 'touch_input' and 'pen_input' variables point to the 'struct input_dev'
used for relaying touch and pen events to userspace, respectively. If a
device does not have a touch interface or pen interface, the associated
input variable is NULL. The 'wacom_tpc_irq()' function is responsible for
forwarding input reports to a more-specific IRQ handler function. An
unknown report could theoretically be mistaken as e.g. a touch report
on a device which does not have a touch interface. This can be prevented
by only calling the pen/touch functions when the pen/touch
pointers are valid.
Fixes: 2a6cdbd ("HID: wacom: Introduce new 'touch_input' device")
Signed-off-by: Jason Gerecke <jason.gerecke@wacom.com>
Reviewed-by: Ping Cheng <ping.cheng@wacom.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bryant G. Ly [Wed, 10 May 2017 19:35:47 +0000 (14:35 -0500)]
ibmvscsis: Fix the incorrect req_lim_delta
commit
75dbf2d36f6b122ad3c1070fe4bf95f71bbff321 upstream.
The current code is not correctly calculating the req_lim_delta.
We want to make sure vscsi->credit is always incremented when
we do not send a response for the scsi op. Thus for the case where
there is a successfully aborted task we need to make sure the
vscsi->credit is incremented.
v2 - Moves the original location of the vscsi->credit increment
to a better spot. Since if we increment credit, the next command
we send back will have increased req_lim_delta. But we probably
shouldn't be doing that until the aborted cmd is actually released.
Otherwise the client will think that it can send a new command, and
we could find ourselves short of command elements. Not likely, but could
happen.
This patch depends on both:
commit
25e78531268e ("ibmvscsis: Do not send aborted task response")
commit
98883f1b5415 ("ibmvscsis: Clear left-over abort_cmd pointers")
Signed-off-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
Reviewed-by: Michael Cyr <mikecyr@linux.vnet.ibm.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bryant G. Ly [Tue, 9 May 2017 16:50:26 +0000 (11:50 -0500)]
ibmvscsis: Clear left-over abort_cmd pointers
commit
98883f1b5415ea9dce60d5178877d15f4faa10b8 upstream.
With the addition of ibmvscsis->abort_cmd pointer within
commit
25e78531268e ("ibmvscsis: Do not send aborted task response"),
make sure to explicitly NULL these pointers when clearing
DELAY_SEND flag.
Do this for two cases, when getting the new ibmvscsis
descriptor in ibmvscsis_get_free_cmd() and before posting
the response completion in ibmvscsis_send_messages().
Signed-off-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
Reviewed-by: Michael Cyr <mikecyr@linux.vnet.ibm.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiang Yi [Tue, 16 May 2017 09:57:55 +0000 (17:57 +0800)]
iscsi-target: Always wait for kthread_should_stop() before kthread exit
commit
5e0cf5e6c43b9e19fc0284f69e5cd2b4a47523b0 upstream.
There are three timing problems in the kthread usages of iscsi_target_mod:
- np_thread of struct iscsi_np
- rx_thread and tx_thread of struct iscsi_conn
In iscsit_close_connection(), it calls
send_sig(SIGINT, conn->tx_thread, 1);
kthread_stop(conn->tx_thread);
In conn->tx_thread, which is iscsi_target_tx_thread(), when it receives
SIGINT the kthread will exit without checking the return value of
kthread_should_stop().
So if iscsi_target_tx_thread() exit right between send_sig(SIGINT...)
and kthread_stop(...), the kthread_stop() will try to stop an already
stopped kthread.
This is invalid according to the documentation of kthread_stop().
(Fix -ECONNRESET logout handling in iscsi_target_tx_thread and
early iscsi_target_rx_thread failure case - nab)
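The general pattern the fix enforces looks roughly like this (illustrative, not the exact patch): a kthread that may bail out on a signal must still park until kthread_should_stop() returns true, so a racing kthread_stop() always targets a live thread.

  allow_signal(SIGINT);
  while (!kthread_should_stop()) {
  	...
  	if (signal_pending(current))
  		break;	/* asked to bail out early */
  }
  /* do not exit before kthread_stop() has actually been called */
  while (!kthread_should_stop())
  	schedule_timeout_uninterruptible(HZ / 10);
  return 0;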
Signed-off-by: Jiang Yi <jiangyilism@gmail.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Srinath Mannam [Thu, 18 May 2017 16:57:40 +0000 (22:27 +0530)]
mmc: sdhci-iproc: suppress spurious interrupt with Multiblock read
commit
f5f968f2371ccdebb8a365487649673c9af68d09 upstream.
The stingray SDHCI hardware supports ACMD12 and automatically
issues it after a multi block transfer completes.
If ACMD12 in SDHCI is disabled, spurious tx done interrupts are seen
on multi block read commands with the below error message:
Got data interrupt 0x00000002 even though no data
operation was in progress.
This patch uses SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12 to enable
ACMD12 support in the SDHCI hardware and suppress the spurious interrupt.
Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Fixes: b580c52d58d9 ("mmc: sdhci-iproc: add IPROC SDHCI driver")
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Benjamin Tissoires [Wed, 10 May 2017 16:12:40 +0000 (18:12 +0200)]
Revert "ACPI / button: Change default behavior to lid_init_state=open"
commit
878d8db039daac0938238e9a40a5bd6e50ee3c9b upstream.
Revert commit
77e9a4aa9de1 (ACPI / button: Change default behavior to
lid_init_state=open) which changed the kernel's behavior on laptops
that boot with closed lids and expect the lid switch state to be
reported accurately by the kernel.
If you boot or resume your laptop with the lid closed on a docking
station while using an external monitor connected to it, both internal
and external displays will light on, while only the external should.
There is a design choice in gdm to only provide the greeter on the
internal display when lit on, so users only see a gray area on the
external monitor. Also, the cursor will not show up as it's by
default on the internal display too.
To "fix" that, users have to open the laptop once and close it once
again to sync the state of the switch with the hardware state.
Even if the "method" operation mode implementation can be buggy on
some platforms, the "open" choice is worse. It breaks docking
stations basically and there is no way to have a user-space hwdb to
fix that.
On the contrary, it's rather easy in user-space to have a hwdb
with the problematic platforms. Then, libinput (1.7.0+) can fix
the state of the lid switch for us: you need to set the udev
property LIBINPUT_ATTR_LID_SWITCH_RELIABILITY to 'write_open'.
When libinput detects internal keyboard events, it will overwrite the
state of the switch to open, making it reliable again. Given that
logind only checks the lid switch value after a timeout, we can
assume the user will use the internal keyboard before this timeout
expires.
For example, such a hwdb entry is:
libinput:name:*Lid Switch*:dmi:*svnMicrosoftCorporation:pnSurface3:*
LIBINPUT_ATTR_LID_SWITCH_RELIABILITY=write_open
Link: https://bugzilla.gnome.org/show_bug.cgi?id=782380
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vishal Verma [Fri, 19 May 2017 09:39:10 +0000 (11:39 +0200)]
acpi, nfit: Fix the memory error check in nfit_handle_mce()
commit
fc08a4703a418a398bbb575ac311d36d110ac786 upstream.
The check for an MCE being a memory error in the NFIT mce handler was
bogus. Use the new mce_is_memory_error() helper to detect the error
properly.
Reported-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/20170519093915.15413-3-bp@alien8.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Borislav Petkov [Fri, 19 May 2017 09:39:09 +0000 (11:39 +0200)]
x86/MCE: Export memory_error()
commit
2d1f406139ec20320bf38bcd2461aa8e358084b5 upstream.
Export the function which checks whether an MCE is a memory error to
other users so that we can reuse the logic. Drop the boot_cpu_data use,
while at it, as mce.cpuvendor already has the CPU vendor in there.
Integrate a piece from a patch from Vishal Verma
<vishal.l.verma@intel.com> to export it for modules (nfit).
The main reason we're exporting it is that the nfit handler
nfit_handle_mce() needs to detect a memory error properly before doing
its recovery actions.
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Link: http://lkml.kernel.org/r/20170519093915.15413-2-bp@alien8.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sebastian Reichel [Fri, 5 May 2017 09:06:50 +0000 (11:06 +0200)]
i2c: i2c-tiny-usb: fix buffer not being DMA capable
commit
5165da5923d6c7df6f2927b0113b2e4d9288661e upstream.
Since v4.9 i2c-tiny-usb generates the below call trace
and no longer works, since it can't communicate with the
USB device. The reason is that since v4.9 the USB
stack checks that the buffer it should transfer is DMA
capable. This has been a requirement since the v2.2 days, but it
usually worked nevertheless.
[ 17.504959] ------------[ cut here ]------------
[ 17.505488] WARNING: CPU: 0 PID: 93 at drivers/usb/core/hcd.c:1587 usb_hcd_map_urb_for_dma+0x37c/0x570
[ 17.506545] transfer buffer not dma capable
[ 17.507022] Modules linked in:
[ 17.507370] CPU: 0 PID: 93 Comm: i2cdetect Not tainted 4.11.0-rc8+ #10
[ 17.508103] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 17.509039] Call Trace:
[ 17.509320] ? dump_stack+0x5c/0x78
[ 17.509714] ? __warn+0xbe/0xe0
[ 17.510073] ? warn_slowpath_fmt+0x5a/0x80
[ 17.510532] ? nommu_map_sg+0xb0/0xb0
[ 17.510949] ? usb_hcd_map_urb_for_dma+0x37c/0x570
[ 17.511482] ? usb_hcd_submit_urb+0x336/0xab0
[ 17.511976] ? wait_for_completion_timeout+0x12f/0x1a0
[ 17.512549] ? wait_for_completion_timeout+0x65/0x1a0
[ 17.513125] ? usb_start_wait_urb+0x65/0x160
[ 17.513604] ? usb_control_msg+0xdc/0x130
[ 17.514061] ? usb_xfer+0xa4/0x2a0
[ 17.514445] ? __i2c_transfer+0x108/0x3c0
[ 17.514899] ? i2c_transfer+0x57/0xb0
[ 17.515310] ? i2c_smbus_xfer_emulated+0x12f/0x590
[ 17.515851] ? _raw_spin_unlock_irqrestore+0x11/0x20
[ 17.516408] ? i2c_smbus_xfer+0x125/0x330
[ 17.516876] ? i2c_smbus_xfer+0x125/0x330
[ 17.517329] ? i2cdev_ioctl_smbus+0x1c1/0x2b0
[ 17.517824] ? i2cdev_ioctl+0x75/0x1c0
[ 17.518248] ? do_vfs_ioctl+0x9f/0x600
[ 17.518671] ? vfs_write+0x144/0x190
[ 17.519078] ? SyS_ioctl+0x74/0x80
[ 17.519463] ? entry_SYSCALL_64_fastpath+0x1e/0xad
[ 17.519959] ---[ end trace d047c04982f5ac50 ]---
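The fix follows the usual pattern for USB drivers: do the control transfer through a kmalloc()'d bounce buffer, which is DMA capable, instead of the caller-supplied one. A hedged sketch (dev->usb_dev, cmd, value and index are assumed names):

  void *dmabuf = kmemdup(buf, len, GFP_KERNEL);	/* DMA-capable copy */

  if (!dmabuf)
  	return -ENOMEM;
  ret = usb_control_msg(dev->usb_dev, usb_rcvctrlpipe(dev->usb_dev, 0),
  		      cmd, USB_TYPE_VENDOR | USB_RECIP_INTERFACE | USB_DIR_IN,
  		      value, index, dmabuf, len, 2000);
  memcpy(buf, dmabuf, len);	/* copy the result back for reads */
  kfree(dmabuf);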
Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.co.uk>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Till Harbaum <till@harbaum.org>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ard Biesheuvel [Thu, 18 May 2017 11:29:55 +0000 (12:29 +0100)]
drivers/tty: 8250: only call fintek_8250_probe when doing port I/O
commit
4c4fc90964b1cf205a67df566cc82ea1731bcb00 upstream.
Commit
fa01e2ca9f53 ("serial: 8250: Integrate Fintek into 8250_base")
modified the probing logic for PNP0501 devices, to remove a collision
between the generic 16550A driver and the Fintek driver, which reused
the same ACPI _HID.
The Fintek device probe is now incorporated into the common 8250 probe
path, and gets called for all discovered 16550A compatible devices,
including ones that are MMIO mapped rather than IO mapped. However,
the Fintek driver assumes the port base is an I/O address, and proceeds
to probe some arbitrary offsets above it.
This is generally a wrong thing to do, but on ARM systems (having no
native port I/O), this may result in faulting accesses of completely
unrelated MMIO regions in the PCI I/O space. Given that this is at
serial probe time, this results in hard to diagnose crashes at boot.
So let's restrict the Fintek probe to devices that we know are using
port I/O in the first place.
Fixes: fa01e2ca9f53 ("serial: 8250: Integrate Fintek into 8250_base")
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Ricardo Ribalda <ricardo.ribalda@gmail.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeremy Kerr [Wed, 24 May 2017 06:49:59 +0000 (16:49 +1000)]
powerpc/spufs: Fix hash faults for kernel regions
commit
d75e4919cc0b6fbcbc8d6654ef66d87a9dbf1526 upstream.
Commit
ac29c64089b7 ("powerpc/mm: Replace _PAGE_USER with
_PAGE_PRIVILEGED") swapped _PAGE_USER for _PAGE_PRIVILEGED, and
introduced check_pte_access() which denied kernel access to
non-_PAGE_PRIVILEGED pages.
However, it didn't add _PAGE_PRIVILEGED to the hash fault handler
for spufs' kernel accesses, so the DMAs required to establish SPE
memory no longer work.
This change adds _PAGE_PRIVILEGED to the hash fault handler for
kernel accesses.
Fixes: ac29c64089b7 ("powerpc/mm: Replace _PAGE_USER with _PAGE_PRIVILEGED")
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Reported-by: Sombat Tragolgosol <sombat3960@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Richard Narron [Sun, 4 Jun 2017 23:23:18 +0000 (16:23 -0700)]
fs/ufs: Set UFS default maximum bytes per file
commit
239e250e4acbc0104d514307029c0839e834a51a upstream.
This fixes a problem with reading files larger than 2GB from a UFS-2
file system:
https://bugzilla.kernel.org/show_bug.cgi?id=195721
The incorrect UFS s_maxsize limit became a problem as of commit
c2a9737f45e2 ("vfs,mm: fix a dead loop in truncate_inode_pages_range()")
which started using s_maxbytes to avoid a page index overflow in
do_generic_file_read().
That caused files to be truncated on UFS-2 file systems because the
default maximum file size is 2GB (MAX_NON_LFS) and UFS didn't update it.
Here I simply increase the default to a common value used by other file
systems.
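The change amounts to setting the generic large-file limit at mount time; a one-line sketch (placement in ufs_fill_super() assumed):

  sb->s_maxbytes = MAX_LFS_FILESIZE;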
Signed-off-by: Richard Narron <comet.berkeley@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Will B <will.brokenbourgh2877@gmail.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Liam R. Howlett [Wed, 17 May 2017 15:47:00 +0000 (11:47 -0400)]
sparc/ftrace: Fix ftrace graph time measurement
[ Upstream commit
48078d2dac0a26f84f5f3ec704f24f7c832cce14 ]
The ftrace function_graph time measurements of a given function are not
accurate according to those recorded by ftrace using the function
filters. This change pulls the x86_64 fix from 'commit
722b3c746953
("ftrace/graph: Trace function entry before updating index")' into the
sparc specific prepare_ftrace_return which stops ftrace from
counting interrupted tasks in the time measurement.
Example measurements for select_task_rq_fair running "hackbench 100
process 1000":
| tracing/trace_stat/function0 | function_graph
Before patch | 2.802 us | 4.255 us
After patch | 2.749 us | 3.094 us
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Orlando Arias [Tue, 16 May 2017 19:34:00 +0000 (15:34 -0400)]
sparc: Fix -Wstringop-overflow warning
[ Upstream commit
deba804c90642c8ed0f15ac1083663976d578f54 ]
Greetings,
GCC 7 introduced the -Wstringop-overflow flag to detect buffer overflows
in calls to string handling functions [1][2]. Due to the way
``empty_zero_page'' is declared in arch/sparc/include/setup.h, this
causes a warning to trigger at compile time in the function mem_init(),
which is subsequently converted to an error. The ensuing patch fixes
this issue and aligns the declaration of empty_zero_page to that of
other architectures. Thank you.
Cheers,
Orlando.
[1] https://gcc.gnu.org/ml/gcc-patches/2016-10/msg02308.html
[2] https://gcc.gnu.org/gcc-7/changes.html
Signed-off-by: Orlando Arias <oarias@knights.ucf.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Borkmann [Wed, 24 May 2017 23:05:07 +0000 (01:05 +0200)]
bpf: add bpf_clone_redirect to bpf_helper_changes_pkt_data
[ Upstream commit
41703a731066fde79c3e5ccf3391cf77a98aeda5 ]
The bpf_clone_redirect() still needs to be listed in
bpf_helper_changes_pkt_data() since we call into
bpf_try_make_head_writable() from there, thus we need
to invalidate prior pkt regs as well.
Fixes: 36bbef52c7eb ("bpf: direct packet write and access for helpers for clsact progs")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 25 May 2017 21:27:35 +0000 (14:27 -0700)]
ipv4: add reference counting to metrics
[ Upstream commit
3fb07daff8e99243366a081e5129560734de4ada ]
Andrey Konovalov reported crashes in ipv4_mtu()
I could reproduce the issue with KASAN kernels, between
10.246.7.151 and 10.246.7.152 :
1) 20 concurrent netperf -t TCP_RR -H 10.246.7.152 -l 1000 &
2) At the same time run following loop :
while :
do
ip ro add 10.246.7.152 dev eth0 src 10.246.7.151 mtu 1500
ip ro del 10.246.7.152 dev eth0 src 10.246.7.151 mtu 1500
done
Cong Wang attempted to add back rt->fi in commit
82486aa6f1b9 ("ipv4: restore rt->fi for reference counting")
but this proved to add some issues that were complex to solve.
Instead, I suggested adding a refcount to the metrics themselves,
making them a standalone object (in particular, with no reference to other
objects). I tried to make this patch as small as possible to ease its
backport, instead of being super clean.
need to take care of the metric refcount. But if this is wrong,
this patch adds the basic infrastructure to extend this to other
families.
Many thanks to Julian Anastasov for reviewing this patch, and Cong Wang
for his efforts on this problem.
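A hedged sketch of the idea: bundle the metrics array with its own refcount so routes can share it as a standalone object (struct and field names assumed):

  struct dst_metrics {
  	u32		metrics[RTAX_MAX];
  	atomic_t	refcnt;
  };

  /* release path */
  if (atomic_dec_and_test(&m->refcnt))
  	kfree(m);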
Fixes: 2860583fe840 ("ipv4: Kill rt->fi")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Julian Anastasov <ja@ssi.bg>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Davide Caratti [Thu, 25 May 2017 17:14:56 +0000 (19:14 +0200)]
sctp: fix ICMP processing if skb is non-linear
[ Upstream commit
804ec7ebe8ea003999ca8d1bfc499edc6a9e07df ]
sometimes ICMP replies to INIT chunks are ignored by the client, even if
the encapsulated SCTP headers match an open socket. This happens when the
ICMP packet is carried by a paged skb: use skb_header_pointer() to read
packet contents beyond the SCTP header, so that chunk header and initiate
tag are validated correctly.
v2:
- don't use skb_header_pointer() to read the transport header, since
icmp_socket_deliver() already puts these 8 bytes in the linear area.
- change commit message to make specific reference to INIT chunks.
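skb_header_pointer() copies the requested bytes into a caller-provided buffer when they are not in the linear area, which is what makes the lookup work for paged skbs; an illustrative use (the offset variable is assumed):

  struct sctp_chunkhdr _ch, *ch;

  /* read the chunk header that follows the SCTP header, wherever it sits */
  ch = skb_header_pointer(skb, offset + sizeof(struct sctphdr),
  			sizeof(_ch), &_ch);
  if (!ch)
  	return;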
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wei Wang [Wed, 24 May 2017 16:59:31 +0000 (09:59 -0700)]
tcp: avoid fastopen API to be used on AF_UNSPEC
[ Upstream commit
ba615f675281d76fd19aa03558777f81fb6b6084 ]
Fastopen API should be used to perform fastopen operations on the TCP
socket. It does not make sense to use the fastopen API to perform a
disconnect by calling it with AF_UNSPEC. The fastopen data path is also
prone to race conditions and bugs when used with AF_UNSPEC.
One issue reported and analyzed by Vegard Nossum is as follows:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Thread A:                            Thread B:
------------------------------------------------------------------------
sendto()
 - tcp_sendmsg()
    - sk_stream_memory_free() = 0
       - goto wait_for_sndbuf
          - sk_stream_wait_memory()
             - sk_wait_event() // sleep
             |                       sendto(flags=MSG_FASTOPEN, dest_addr=AF_UNSPEC)
             |                        - tcp_sendmsg()
             |                           - tcp_sendmsg_fastopen()
             |                              - __inet_stream_connect()
             |                                 - tcp_disconnect() //because of AF_UNSPEC
             |                                    - tcp_transmit_skb() // send RST
             |                                 - return 0; // no reconnect!
             |                        - sk_stream_wait_connect()
             |                           - sock_error()
             |                              - xchg(&sk->sk_err, 0)
             |                              - return -ECONNRESET
          - ... // wake up, see sk->sk_err == 0
 - skb_entail() on TCP_CLOSE socket
If the connection is reopened then we will send a brand new SYN packet
after thread A has already queued a buffer. At this point I think the
socket internal state (sequence numbers etc.) becomes messed up.
When the new connection is closed, the FIN-ACK is rejected because the
sequence number is outside the window. The other side tries to
retransmit,
but __tcp_retransmit_skb() calls tcp_trim_head() on an empty skb which
corrupts the skb data length and hits a BUG() in copy_and_csum_bits().
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hence, this patch adds a check for AF_UNSPEC in the fastopen data path
and return EOPNOTSUPP to user if such case happens.
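A sketch of the added guard in the fastopen sendmsg path (exact placement assumed):

  struct sockaddr *uaddr = msg->msg_name;

  if (uaddr && uaddr->sa_family == AF_UNSPEC)
  	return -EOPNOTSUPP;	/* fastopen must not be used to disconnect */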
Fixes: cf60af03ca4e7 ("tcp: Fast Open client - sendmsg(MSG_FASTOPEN)")
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Wei Wang <weiwan@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vlad Yasevich [Tue, 23 May 2017 17:38:43 +0000 (13:38 -0400)]
virtio-net: enable TSO/checksum offloads for Q-in-Q vlans
[ Upstream commit
2836b4f224d4fd7d1a2b23c3eecaf0f0ae199a74 ]
Since virtio does not provide its own ndo_features_check handler,
TSO, and now checksum offload, are disabled for stacked vlans.
Re-enable the support and let the host take care of it. This
restores/improves Guest-to-Guest performance over Q-in-Q vlans.
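The wiring amounts to pointing the net_device_ops at the stock pass-through helper; a sketch (assumed to match the description above):

  static const struct net_device_ops virtnet_netdev = {
  	...
  	.ndo_features_check	= passthru_features_check,
  };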
Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vlad Yasevich [Tue, 23 May 2017 17:38:42 +0000 (13:38 -0400)]
be2net: Fix offload features for Q-in-Q packets
[ Upstream commit
cc6e9de62a7f84c9293a2ea41bc412b55bb46e85 ]
At least some of the be2net cards do not seem to be capable
of performing checksum offload computations on Q-in-Q packets.
In these cases, the received checksum on the remote is invalid
and TCP syn packets are dropped.
This patch adds a call to check disabled acceleration features
on Q-in-Q tagged traffic.
CC: Sathya Perla <sathya.perla@broadcom.com>
CC: Ajit Khaparde <ajit.khaparde@broadcom.com>
CC: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
CC: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vlad Yasevich [Tue, 23 May 2017 17:38:41 +0000 (13:38 -0400)]
vlan: Fix tcp checksum offloads in Q-in-Q vlans
[ Upstream commit
35d2f80b07bbe03fb358afb0bdeff7437a7d67ff ]
It appears that TCP checksum offloading has been broken for
Q-in-Q vlans. The behavior was exacerbated by the
series
commit
afb0bc972b52 ("Merge branch 'stacked_vlan_tso'")
that enabled acceleration features on stacked vlans.
However, even without that series, it is possible to trigger
this issue. It just requires a lot more specialized configuration.
The root cause is the interaction between how
netdev_intersect_features() works, the features actually set on
the vlan devices and HW having the ability to run checksum with
longer headers.
The issue starts when netdev_intersect_features() replaces
NETIF_F_HW_CSUM with a combination of NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM,
if the HW advertises IP|IPV6 specific checksums. This happens
for tagged and multi-tagged packets. However, HW that enables
IP|IPV6 checksum offloading doesn't guarantee that packets with
arbitrarily long headers can be checksummed.
This patch disables IP|IPV6 checksums on the packet for multi-tagged
packets.
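A hedged sketch of the helper's behaviour for multi-tagged skbs (the exact feature mask shown is illustrative):

  static inline netdev_features_t
  vlan_features_check(const struct sk_buff *skb, netdev_features_t features)
  {
  	if (skb_vlan_tagged_multi(skb))
  		/* drop the protocol-specific checksum/TSO bits and keep only
  		 * features that are safe with arbitrarily long headers
  		 */
  		features &= NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST |
  			    NETIF_F_HW_CSUM | NETIF_F_HW_VLAN_CTAG_TX |
  			    NETIF_F_HW_VLAN_STAG_TX;
  	return features;
  }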
CC: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
CC: Michal Kubecek <mkubecek@suse.cz>
Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
Acked-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrew Lunn [Tue, 23 May 2017 15:49:13 +0000 (17:49 +0200)]
net: phy: marvell: Limit errata to 88m1101
[ Upstream commit
f2899788353c13891412b273fdff5f02d49aa40f ]
The 88m1101 has an errata when configuring autoneg. However, it was
being applied to many other Marvell PHYs as well. Limit its scope to
just the 88m1101.
Fixes: 76884679c644 ("phylib: Add support for Marvell 88e1111S and 88e1145")
Reported-by: Daniel Walker <danielwa@cisco.com>
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Harini Katakam <harinik@xilinx.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mohamad Haj Yahia [Thu, 23 Feb 2017 09:19:36 +0000 (11:19 +0200)]
net/mlx5: Avoid using pending command interface slots
[ Upstream commit
73dd3a4839c1d27c36d4dcc92e1ff44225ecbeb7 ]
Currently when a firmware command gets stuck or takes a long time to
complete, the driver command will time out and the command slot is
freed and can be used for new commands; if the firmware then receives a new
command on the old busy slot its behavior is unexpected and this could
be harmful.
To fix this, when the driver command times out we return failure,
but we don't free the command slot and we wait for the firmware to
explicitly respond to that command.
Once all the entries are busy we will stop processing new firmware
commands.
Fixes: 9cba4ebcf374 ('net/mlx5: Fix potential deadlock in command mode change')
Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Cc: kernel-team@fb.com
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jarod Wilson [Fri, 19 May 2017 23:43:45 +0000 (19:43 -0400)]
bonding: fix accounting of active ports in 3ad
[ Upstream commit
751da2a69b7cc82d83dc310ed7606225f2d6e014 ]
As of
7bb11dc9f59d and
0622cab0341c, bond slaves in a 3ad bond are not
removed from the aggregator when they are down, and the active slave count
is NOT equal to number of ports in the aggregator, but rather the number
of ports in the aggregator that are still enabled. The sysfs spew for
bonding_show_ad_num_ports() has a comment that says "Show number of active
802.3ad ports.", but it's currently showing total number of ports, both
active and inactive. Remedy it by using the same logic introduced in
0622cab0341c in __bond_3ad_get_active_agg_info(), so sysfs, procfs and
netlink all report the number of active ports. Note that this means that
IFLA_BOND_AD_INFO_NUM_PORTS really means NUM_ACTIVE_PORTS instead of
NUM_PORTS, and thus perhaps should be renamed for clarity.
Lightly tested on a dual i40e lacp bond, simulating link downs with an ip
link set dev <slave2> down, was able to produce the state where I could
see both in the same aggregator, but a number of ports count of 1.
MII Status: up
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2 <---
Slave Interface: ens10
MII Status: up <---
Aggregator ID: 1
Slave Interface: ens11
MII Status: up
Aggregator ID: 1
MII Status: up
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1 <---
Slave Interface: ens10
MII Status: down <---
Aggregator ID: 1
Slave Interface: ens11
MII Status: up
Aggregator ID: 1
CC: Jay Vosburgh <j.vosburgh@gmail.com>
CC: Veaceslav Falico <vfalico@gmail.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: netdev@vger.kernel.org
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Xin Long [Fri, 19 May 2017 14:20:29 +0000 (22:20 +0800)]
bridge: start hello_timer when enabling KERNEL_STP in br_stp_start
[ Upstream commit
6d18c732b95c0a9d35e9f978b4438bba15412284 ]
Since commit
76b91c32dd86 ("bridge: stp: when using userspace stp stop
kernel hello and hold timers"), bridge would not start hello_timer if
stp_enabled is not KERNEL_STP when br_dev_open.
The problem is that even if users set stp_enabled to KERNEL_STP later,
the timer will still not be started, so KERNEL_STP can
not really work. Users have to re-ifup the bridge to avoid this.
This patch is to fix it by starting br->hello_timer when enabling
KERNEL_STP in br_stp_start.
As an improvement, it's also to start hello_timer again only when
br->stp_enabled is KERNEL_STP in br_hello_timer_expired, there is
no reason to start the timer again when it's NO_STP.
Fixes: 76b91c32dd86 ("bridge: stp: when using userspace stp stop kernel hello and hold timers")
Reported-by: Haidong Li <haili@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Ivan Vecera <cera@cera.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
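Hedged sketch of the two changes described in the entry above
(illustrative, not the exact diff):
    /* br_stp_start(), kernel-STP branch: kick the hello timer */
    mod_timer(&br->hello_timer, jiffies + br->hello_time);

    /* br_hello_timer_expired(): only rearm when kernel STP is in use */
    if (br->stp_enabled == BR_KERNEL_STP)
            mod_timer(&br->hello_timer,
                      round_jiffies(jiffies + br->hello_time));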
Bjørn Mork [Wed, 17 May 2017 14:31:41 +0000 (16:31 +0200)]
qmi_wwan: add another Lenovo EM74xx device ID
[ Upstream commit
486181bcb3248e2f1977f4e69387a898234a4e1e ]
In their infinite wisdom, and never ending quest for end user frustration,
Lenovo has decided to use a new USB device ID for the wwan modules in
their 2017 laptops. The actual hardware is still the Sierra Wireless
EM7455 or EM7430, depending on region.
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tobias Jungel [Wed, 17 May 2017 07:29:12 +0000 (09:29 +0200)]
bridge: netlink: check vlan_default_pvid range
[ Upstream commit
a285860211bf257b0e6d522dac6006794be348af ]
Currently it is allowed to set the default pvid of a bridge to a value
above VLAN_VID_MASK (0xfff). This patch adds a check to br_validate and
returns -EINVAL in case the pvid is out of bounds.
Reproduce by calling:
[root@test ~]# ip l a type bridge
[root@test ~]# ip l a type dummy
[root@test ~]# ip l s bridge0 type bridge vlan_filtering 1
[root@test ~]# ip l s bridge0 type bridge vlan_default_pvid 9999
[root@test ~]# ip l s dummy0 master bridge0
[root@test ~]# bridge vlan
port vlan ids
bridge0 9999 PVID Egress Untagged
dummy0 9999 PVID Egress Untagged
Fixes: 0f963b7592ef ("bridge: netlink: add support for default_pvid")
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: Tobias Jungel <tobias.jungel@bisdn.de>
Acked-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
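A minimal sketch of the kind of bounds check added to br_validate
(illustrative; the kernel's exact check may also reject the reserved
VID 0xfff itself):
    if (data[IFLA_BR_VLAN_DEFAULT_PVID]) {
            u16 defpvid = nla_get_u16(data[IFLA_BR_VLAN_DEFAULT_PVID]);

            if (defpvid > VLAN_VID_MASK)    /* VLAN_VID_MASK == 0xfff */
                    return -EINVAL;
    }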
David Ahern [Tue, 16 May 2017 06:19:17 +0000 (23:19 -0700)]
net: Improve handling of failures on link and route dumps
[ Upstream commit
f6c5775ff0bfa62b072face6bf1d40f659f194b2 ]
In general, rtnetlink dumps do not anticipate failure to dump a single
object (e.g., link or route) on a single pass. As both route and link
objects have grown via more attributes, that is no longer a given.
netlink dumps can handle a failure if the dump function returns an
error; specifically, netlink_dump adds the return code to the response
if it is <= 0 so userspace is notified of the failure. The missing
piece is the rtnetlink dump functions returning the error.
Fix the route and link dump functions to return the error if no object
was added to the skb (detected by skb->len == 0). IPv6 route dumps
(rt6_dump_route) already return the error; this patch updates IPv4 and
link dumps. Other dump functions may need to be adjusted as well (see
the sketch below).
Reported-by: Jan Moskyto Matejka <mq@ucw.cz>
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
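Illustrative fragment of the return-value pattern described above
(dump_one() is a hypothetical per-object helper, not a real function):
    err = dump_one(skb, obj);
    if (err < 0 && skb->len == 0)
            return err;     /* nothing queued yet: surface the error to userspace */
    return skb->len;        /* partial dump: let userspace consume what we have */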
Soheil Hassas Yeganeh [Mon, 15 May 2017 21:05:47 +0000 (17:05 -0400)]
tcp: eliminate negative reordering in tcp_clean_rtx_queue
[ Upstream commit
bafbb9c73241760023d8981191ddd30bb1c6dbac ]
tcp_ack() can call tcp_fragment(), which may deduct from the value of
tp->fackets_out when the MSS changes. When prior_fackets
is larger than tp->fackets_out, tcp_clean_rtx_queue() can
invoke tcp_update_reordering() with negative values. This
results in absurd tp->reordering values higher than
sysctl_tcp_max_reordering.
Note that tcp_update_reordering() indeed sets tp->reordering
to min(sysctl_tcp_max_reordering, metric), but because
the comparison is signed, a negative metric always wins (see the
demo below).
Fixes: c7caf8d3ed7a ("[TCP]: Fix reord detection due to snd_una covered holes")
Reported-by: Rebecca Isaacs <risaacs@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
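A stand-alone demonstration of why a negative metric "wins" the min()
comparison (hypothetical userspace demo, not kernel code):
    #include <stdio.h>
    #define min(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
            int sysctl_tcp_max_reordering = 300;
            int metric = -5;    /* e.g. prior_fackets - tp->fackets_out gone negative */

            /* prints -5: the signed comparison lets the bogus value through */
            printf("%d\n", min(sysctl_tcp_max_reordering, metric));
            return 0;
    }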
Gal Pressman [Wed, 19 Apr 2017 11:35:15 +0000 (14:35 +0300)]
net/mlx5e: Fix ethtool pause support and advertise reporting
[ Upstream commit
e3c19503712d6360239b19c14cded56dd63c40d7 ]
The Pause bit should be set when RX pause is on, not TX pause.
Also, setting Asym_Pause is incorrect; it should be turned off.
Fixes: 665bc53969d7 ("net/mlx5e: Use new ethtool get/set link ksettings API")
Signed-off-by: Gal Pressman <galp@mellanox.com>
Cc: kernel-team@fb.com
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gal Pressman [Mon, 3 Apr 2017 12:11:22 +0000 (15:11 +0300)]
net/mlx5e: Use the correct pause values for ethtool advertising
[ Upstream commit
b383b544f2666d67446b951a9a97af239dafed5d ]
Query the operational pause from firmware (PFCC register) instead of
always passing zeros.
Fixes: 665bc53969d7 ("net/mlx5e: Use new ethtool get/set link ksettings API")
Signed-off-by: Gal Pressman <galp@mellanox.com>
Cc: kernel-team@fb.com
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Douglas Caetano dos Santos [Fri, 12 May 2017 18:19:15 +0000 (15:19 -0300)]
net/packet: fix missing net_device reference release
[ Upstream commit
d19b183cdc1fa3d70d6abe2a4c369e748cd7ebb8 ]
When using a TX ring buffer, if an error occurs processing a control
message (e.g. invalid message), the net_device reference is not
released.
Fixes: c14ac9451c348 ("sock: enable timestamping using control messages")
Signed-off-by: Douglas Caetano dos Santos <douglascs@taghos.com.br>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Xin Long [Fri, 12 May 2017 06:39:52 +0000 (14:39 +0800)]
sctp: fix src address selection if using secondary addresses for ipv6
[ Upstream commit
dbc2b5e9a09e9a6664679a667ff81cff6e5f2641 ]
Commit
0ca50d12fe46 ("sctp: fix src address selection if using secondary
addresses") has fixed a src address selection issue when using secondary
addresses for ipv4.
Now sctp ipv6 has a similar issue. When using a secondary address,
sctp_v6_get_dst tries to choose the saddr that shares the most leading
bits with the daddr, as computed by sctp_v6_addr_match_len. This makes
some cases not work as expected.
hostA:
[1] fd21:356b:459a:cf10::11 (eth1)
[2] fd21:356b:459a:cf20::11 (eth2)
hostB:
[a] fd21:356b:459a:cf30::2 (eth1)
[b] fd21:356b:459a:cf40::2 (eth2)
route from hostA to hostB:
fd21:356b:459a:cf30::/64 dev eth1 metric 1024 mtu 1500
The expected path should be:
fd21:356b:459a:cf10::11 <-> fd21:356b:459a:cf30::2
But addr[2] matches addr[a] in more bits than addr[1] does according to
sctp_v6_addr_match_len, so the path becomes:
fd21:356b:459a:cf20::11 <-> fd21:356b:459a:cf30::2
Fix it in the same way as Marcelo's fix for sctp ipv4. As there is no
ip_dev_find for ipv6, use ipv6_chk_addr to check whether the saddr is
configured on a device instead.
Note that for backwards compatibility, it will still do the addr_match_len
check when no optimal saddr is found.
Reported-by: Patrick Talbert <ptalbert@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yuchung Cheng [Thu, 11 May 2017 00:01:27 +0000 (17:01 -0700)]
tcp: avoid fragmenting peculiar skbs in SACK
[ Upstream commit
b451e5d24ba6687c6f0e7319c727a709a1846c06 ]
This patch fixes a bug in splitting an SKB during SACK
processing. Specifically, if an skb contains multiple
packets and is only partially sacked in the higher sequences,
tcp_match_skb_to_sack() splits the skb and marks the second fragment
as SACKed.
The current code further attempts to round the first fragment up
to MSS boundaries. But it misses a boundary condition when the
rounded-up fragment size (pkt_len) is exactly the skb size. Splitting
such an skb is pointless, causes a kernel warning and aborts
the SACK processing. This patch checks for such an over-split
before calling tcp_fragment() to prevent these unnecessary warnings.
Fixes: adb92db857ee ("tcp: Make SACK code to split only at mss boundaries")
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Tue, 16 May 2017 20:27:53 +0000 (13:27 -0700)]
net: fix compile error in skb_orphan_partial()
[ Upstream commit
9142e9007f2d7ab58a587a1e1d921b0064a339aa ]
If CONFIG_INET is not set, net/core/sock.c cannot compile:
net/core/sock.c: In function ‘skb_orphan_partial’:
net/core/sock.c:1810:2: error: implicit declaration of function
‘skb_is_tcp_pure_ack’ [-Werror=implicit-function-declaration]
if (skb_is_tcp_pure_ack(skb))
^
Fix this by always including <net/tcp.h>.
Fixes: f6ba8d33cfbb ("netem: fix skb_orphan_partial()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 11 May 2017 22:24:41 +0000 (15:24 -0700)]
netem: fix skb_orphan_partial()
[ Upstream commit
f6ba8d33cfbb46df569972e64dbb5bb7e929bfd9 ]
I should have known that lowering skb->truesize was dangerous :/
In case packets are not leaving the host via a standard Ethernet device,
but looped back to local sockets, bad things can happen, as reported
by Michael Madsen ( https://bugzilla.kernel.org/show_bug.cgi?id=195713 )
So instead of tweaking skb->truesize, let's change skb->destructor
and keep a reference on the owner socket via its sk_refcnt.
Fixes: f2f872f9272a ("netem: Introduce skb_orphan_partial() helper")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Michael Madsen <mkm@nabto.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
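Rough sketch of the idea described above, i.e. pinning the owner socket
and swapping the destructor rather than lowering skb->truesize
(illustrative; the exact refcount and write-memory accounting details
differ by kernel version):
    struct sock *sk = skb->sk;

    if (atomic_inc_not_zero(&sk->sk_refcnt)) {
            /* the skb now holds a reference on sk; sock_efree() drops it
             * when the skb is eventually freed */
            skb->destructor = sock_efree;
    }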
Daniel Borkmann [Wed, 10 May 2017 23:53:15 +0000 (01:53 +0200)]
bpf, arm64: fix faulty emission of map access in tail calls
[ Upstream commit
d8b54110ee944de522ccd3531191f39986ec20f9 ]
Shubham was recently asking on netdev why in arm64 JIT we don't multiply
the index for accessing the tail call map by 8. That led me into testing
out arm64 JIT wrt tail calls and it turned out I got a NULL pointer
dereference on the tail call.
The buggy access is at:
prog = array->ptrs[index];
if (prog == NULL)
goto out;
[...]
00000060:  d2800e0a  mov x10, #0x70 // #112
00000064:  f86a682a  ldr x10, [x1,x10]
00000068:  f862694b  ldr x11, [x10,x2]
0000006c:  b40000ab  cbz x11, 0x00000080
[...]
The code triggering the crash is
f862694b. x1 at the time contains the
address of the bpf array, x10 offsetof(struct bpf_array, ptrs). Meaning,
above we load the pointer to the program at map slot 0 into x10. x10
can then be NULL if the slot is not occupied, which we later on try to
access with a user given offset in x2 that is the map index.
Fix this by emitting the following instead:
[...]
00000060:  d2800e0a  mov x10, #0x70 // #112
00000064:  8b0a002a  add x10, x1, x10
00000068:  d37df04b  lsl x11, x2, #3
0000006c:  f86b694b  ldr x11, [x10,x11]
00000070:  b40000ab  cbz x11, 0x00000084
[...]
This basically adds the offset to ptrs to the base address of the bpf
array we got and we later on access the map with an index * 8 offset
relative to that. The tail call map itself is basically one large area
with meta data at the head followed by the array of prog pointers.
This makes tail calls work again; tested on Cavium ThunderX ARMv8.
Fixes: ddb55992b04d ("arm64: bpf: implement bpf_tail_call() helper")
Reported-by: Shubham Bansal <illusionist.neo@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ursula Braun [Wed, 10 May 2017 17:07:54 +0000 (19:07 +0200)]
s390/qeth: add missing hash table initializations
[ Upstream commit
ebccc7397e4a49ff64c8f44a54895de9d32fe742 ]
commit
5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback")
added new hash tables, but missed initializing them.
Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback")
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
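The fix boils down to initializing the new tables before first use,
roughly (the field names are assumptions based on the description above):
    hash_init(card->ip_htable);
    hash_init(card->ip_mc_htable);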
Julian Wiedmann [Wed, 10 May 2017 17:07:53 +0000 (19:07 +0200)]
s390/qeth: avoid null pointer dereference on OSN
[ Upstream commit
25e2c341e7818a394da9abc403716278ee646014 ]
Access card->dev only after checking whether it's valid.
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Reviewed-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Julian Wiedmann [Wed, 10 May 2017 17:07:52 +0000 (19:07 +0200)]
s390/qeth: unbreak OSM and OSN support
[ Upstream commit
2d2ebb3ed0c6acfb014f98e427298673a5d07b82 ]
commit
b4d72c08b358 ("qeth: bridgeport support - basic control")
broke the support for OSM and OSN devices as follows:
As OSM and OSN are L2 only, qeth_core_probe_device() does an early
setup by loading the l2 discipline and calling qeth_l2_probe_device().
In this context, adding the l2-specific bridgeport sysfs attributes
via qeth_l2_create_device_attributes() hits a BUG_ON in fs/sysfs/group.c,
since the basic sysfs infrastructure for the device hasn't been
established yet.
Note that OSN actually has its own unique sysfs attributes
(qeth_osn_devtype), so the additional attributes shouldn't be created
at all.
For OSM, add a new qeth_l2_devtype that contains all the common
and l2-specific sysfs attributes.
When qeth_core_probe_device() does early setup for OSM or OSN, assign
the corresponding devtype so that the ccwgroup probe code creates the
full set of sysfs attributes.
This allows us to skip qeth_l2_create_device_attributes() in case
of an early setup.
Any device that can't do early setup will initially have only the
generic sysfs attributes, and when it's probed later
qeth_l2_probe_device() adds the l2-specific attributes.
If an early-setup device is removed (by calling ccwgroup_ungroup()),
device_unregister() will - using the devtype - delete the
l2-specific attributes before qeth_l2_remove_device() is called.
So make sure to not remove them twice.
What complicates the issue is that qeth_l2_probe_device() and
qeth_l2_remove_device() are also called on a device when its
layer2 attribute changes (i.e. its layer mode is switched).
For early-setup devices this wouldn't work properly - we wouldn't
remove the l2-specific attributes when switching to L3.
But switching the layer mode doesn't actually make any sense;
we already decided that the device can only operate in L2!
So just refuse to switch the layer mode on such devices. Note that
OSN doesn't have a layer2 attribute, so we only need to special-case
OSM.
Based on an initial patch by Ursula Braun.
Fixes: b4d72c08b358 ("qeth: bridgeport support - basic control")
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ursula Braun [Wed, 10 May 2017 17:07:51 +0000 (19:07 +0200)]
s390/qeth: handle sysfs error during initialization
[ Upstream commit
9111e7880ccf419548c7b0887df020b08eadb075 ]
When setting up the device from within the layer discipline's
probe routine, creating the layer-specific sysfs attributes can fail.
Report this error back to the caller, and handle it by
releasing the layer discipline.
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
[jwi: updated commit msg, moved an OSN change to a subsequent patch]
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gao Feng [Tue, 9 May 2017 10:27:33 +0000 (18:27 +0800)]
driver: vrf: Fix one possible use-after-free issue
[ Upstream commit
1a4a5bf52a4adb477adb075e5afce925824ad132 ]
The current code only deals with the case where the skb is dropped; it
can hit a use-after-free when NF_HOOK returns 0, which means the skb was
stolen by a netfilter rule or hook.
When a netfilter rule or hook steals the skb and returns NF_STOLEN, the
skb is taken over by that rule and no other module may touch it again;
the rule may queue it or free it directly.
Now use nf_hook instead of NF_HOOK to get the netfilter result and check
its return value: only when it equals 1 may the skb go ahead; otherwise
reset the skb pointer to NULL (see the sketch below).
Incidentally, because vrf_rcv_finish is an empty function, there is no
need to invoke it even when nf_hook returns 1. But vrf_rcv_finish does
need to be modified to deal with the NF_STOLEN case.
There are two cases when the skb is stolen:
1. The skb is stolen and freed directly.
There is nothing we need to do, and vrf_rcv_finish isn't invoked.
2. The skb is queued and reinjected again.
vrf_rcv_finish is then invoked as the okfn, so the skb must be freed
there.
Signed-off-by: Gao Feng <gfree.wind@vip.163.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
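Hedged sketch of the return-value handling described above, for the IPv4
receive path (argument details are illustrative):
    if (nf_hook(NFPROTO_IPV4, NF_INET_PRE_ROUTING, net, NULL, skb,
                skb->dev, NULL, vrf_rcv_finish) != 1)
            skb = NULL;     /* dropped or stolen: netfilter owns (or already freed) it */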
Phil Elwell [Wed, 7 Jun 2017 07:53:36 +0000 (08:53 +0100)]
overlays: Fix i2c-rtc order and fragment numbering
See: https://github.com/raspberrypi/linux/issues/2059
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Andrei Gherzan [Mon, 5 Jun 2017 15:40:38 +0000 (16:40 +0100)]
dma-bcm2708: Fix module compilation of CONFIG_DMA_BCM2708
bcm2708-dmaengine.c defines functions such as bcm_dma_start which
dma-bcm2708.h also provides as inline stubs when CONFIG_DMA_BCM2708 is
not defined. This works fine when CONFIG_DMA_BCM2708 is built in, but
when the driver is selected as a module the build fails with
redefinition errors, because in that case the macro the build system
defines is CONFIG_DMA_BCM2708_MODULE.
This patch makes the header also check CONFIG_DMA_BCM2708_MODULE when
it is available (see the sketch below).
Fixes https://github.com/raspberrypi/linux/issues/2056
Signed-off-by: Andrei Gherzan <andrei@gherzan.com>
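A sketch of the header guard the patch describes (the bcm_dma_start
prototype shown is an assumption for illustration):
    #if defined(CONFIG_DMA_BCM2708) || defined(CONFIG_DMA_BCM2708_MODULE)
    extern void bcm_dma_start(void __iomem *dma_chan_base, dma_addr_t control_block);
    #else
    static inline void bcm_dma_start(void __iomem *dma_chan_base,
                                     dma_addr_t control_block) { }
    #endif
IS_ENABLED(CONFIG_DMA_BCM2708) would cover both the built-in and module
cases in a single test, which is the usual idiom for this situation.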
Phil Elwell [Fri, 26 May 2017 12:03:41 +0000 (13:03 +0100)]
BCM270X_DT: Add midi-uart1 overlay
Add a scaler to the ttyS0 clock so that requesting 38400 baud results
in an approximately 31250 baud signal. This is analogous to
midi-uart0, except for ttyS0, which may be useful on Pi3 and also
may avoid an issue with ttyAMA0 failing to synchronise to an active
data stream.
See: https://www.raspberrypi.org/forums/viewtopic.php?f=107&t=183860
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Tejun Heo [Fri, 28 Apr 2017 19:14:55 +0000 (15:14 -0400)]
cgroup_get() is expected to be called only on live cgroups and triggers
a warning on a dead cgroup; however, cgroup_sk_alloc() may be called
while cloning a socket which is left in an empty and removed cgroup
and thus may legitimately duplicate its reference on a dead cgroup.
This currently triggers the following warning spuriously.
WARNING: CPU: 14 PID: 0 at kernel/cgroup.c:490 cgroup_get+0x55/0x60
...
[<ffffffff8107e123>] __warn+0xd3/0xf0
[<ffffffff8107e20e>] warn_slowpath_null+0x1e/0x20
[<ffffffff810ff465>] cgroup_get+0x55/0x60
[<ffffffff81106061>] cgroup_sk_alloc+0x51/0xe0
[<ffffffff81761beb>] sk_clone_lock+0x2db/0x390
[<ffffffff817cce06>] inet_csk_clone_lock+0x16/0xc0
[<ffffffff817e8173>] tcp_create_openreq_child+0x23/0x4b0
[<ffffffff818601a1>] tcp_v6_syn_recv_sock+0x91/0x670
[<ffffffff817e8b16>] tcp_check_req+0x3a6/0x4e0
[<ffffffff81861ba3>] tcp_v6_rcv+0x693/0xa00
[<ffffffff81837429>] ip6_input_finish+0x59/0x3e0
[<ffffffff81837cb2>] ip6_input+0x32/0xb0
[<ffffffff81837387>] ip6_rcv_finish+0x57/0xa0
[<ffffffff81837ac8>] ipv6_rcv+0x318/0x4d0
[<ffffffff817778c7>] __netif_receive_skb_core+0x2d7/0x9a0
[<ffffffff81777fa6>] __netif_receive_skb+0x16/0x70
[<ffffffff81778023>] netif_receive_skb_internal+0x23/0x80
[<ffffffff817787d8>] napi_gro_frags+0x208/0x270
[<ffffffff8168a9ec>] mlx4_en_process_rx_cq+0x74c/0xf40
[<ffffffff8168b270>] mlx4_en_poll_rx_cq+0x30/0x90
[<ffffffff81778b30>] net_rx_action+0x210/0x350
[<ffffffff8188c426>] __do_softirq+0x106/0x2c7
[<ffffffff81082bad>] irq_exit+0x9d/0xa0
[<ffffffff8188c0e4>] do_IRQ+0x54/0xd0
[<ffffffff8188a63f>] common_interrupt+0x7f/0x7f <EOI>
[<ffffffff8173d7e7>] cpuidle_enter+0x17/0x20
[<ffffffff810bdfd9>] cpu_startup_entry+0x2a9/0x2f0
[<ffffffff8103edd1>] start_secondary+0xf1/0x100
This patch renames the existing cgroup_get() with the dead cgroup
warning to cgroup_get_live() after cgroup_kn_lock_live() and
introduces the new cgroup_get() which doesn't check whether the cgroup
is live or dead.
All existing cgroup_get() users except for cgroup_sk_alloc() are
converted to use cgroup_get_live().
Fixes: d979a39d7242 ("cgroup: duplicate cgroup reference when cloning sockets")
Cc: stable@vger.kernel.org # v4.5+
Cc: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Chris Mason <clm@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
sandeepal [Fri, 2 Jun 2017 13:29:46 +0000 (18:59 +0530)]
Allo Digione Driver (#2048)
Driver for the Allo Digione soundcard
P33M [Fri, 26 May 2017 11:50:31 +0000 (12:50 +0100)]
dwc_otg: fiq_fsm: Make isochronous compatibility checks work properly
Get rid of the spammy printk and local pointer mangling.
Also, there is a nominal benefit for using fiq_fsm for isochronous
transfers in FS mode (~1.1k IRQs per second vs 2.1k IRQs per second)
so remove the root port speed check.
Stefan Tatschner [Mon, 29 May 2017 19:46:16 +0000 (21:46 +0200)]
Add device tree config for htu21
See: https://github.com/raspberrypi/linux/pull/2041
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Phil Elwell [Wed, 31 May 2017 14:27:39 +0000 (15:27 +0100)]
BCM270X_DT: Improve i2c-sensor and i2c-rtc overlay
Use the "__dormant__" feature to permit multiple instances of each
overlay, which is more useful now that changing the "reg" property
also changes the node address. Although the overlay grows slightly,
when applied only the requested node is included.
Usage does not change, except that the "lm75addr" parameter of the
i2c-sensor overlay has been deprecated in favour of the generic
"addr" parameter.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Phil Elwell [Wed, 31 May 2017 08:33:55 +0000 (09:33 +0100)]
config: Adding SENSOR_JC42
The jc42 module supports a number of I2C-based temperature
sensor modules.
[ DM_RAID0 config lost because now selected by DM_RAID ]
See: https://github.com/raspberrypi/linux/issues/2046
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
P33M [Thu, 25 May 2017 15:04:53 +0000 (16:04 +0100)]
dwc_otg: make periodic scheduling behave properly for FS buses
If the root port is in full-speed mode, transfer times at 12mbit/s
would be calculated but matched against high-speed quotas.
Reinitialise hcd->frame_usecs[i] on each port enable event so that
full-speed bandwidth can be tracked sensibly.
Also, don't bother using the FIQ for transfers when in full-speed
mode - at the slower bus speed, interrupt frequency is reduced by
an order of magnitude.
Related issue: https://github.com/raspberrypi/linux/issues/2020
popcornmix [Mon, 14 Jul 2014 21:02:09 +0000 (22:02 +0100)]
hid: Reduce default mouse polling interval to 60Hz
Reduces overhead when using X
Tobias Jakobi [Sat, 25 Feb 2017 19:27:27 +0000 (20:27 +0100)]
HID: usbhid: extend polling interval configuration to joysticks
For mouse devices we can currently change the polling interval
via usbhid.mousepoll. Implement the same thing for joysticks, so
users can reduce input latency this way.
This has been tested with a Logitech RumblePad 2 with jspoll=2,
resulting in a polling rate of 500Hz (verified with evhz).
Signed-off-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de>
Acked-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
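Usage, by analogy with mousepoll (interval in milliseconds; the exact
accepted range depends on the driver):
    usbhid.jspoll=2            # kernel command line
    modprobe usbhid jspoll=2   # when usbhid is built as a module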
popcornmix [Wed, 24 May 2017 17:06:02 +0000 (18:06 +0100)]
Revert "hid: Reduce default mouse polling interval to 60Hz"
This reverts commit
b45c0448b60d691508251cdccf242ea43bbabb14.
Phil Elwell [Mon, 22 May 2017 12:56:41 +0000 (13:56 +0100)]
clk: bcm2835: Minimise clock jitter for PCM clock
Fractional clock dividers generate accurate average frequencies but
with jitter, particularly when the integer divisor is small.
Introduce a new metric of clock accuracy to penalise clocks with a good
average but worse jitter compared to clocks with an average which is no
better but with lower jitter. The metric is the ideal rate minus the
worst deviation from that ideal using the nearest integer divisors.
Use this metric for parent selection for clocks requiring low jitter
(currently just PCM).
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Anton Onishchenko [Tue, 23 May 2017 15:55:46 +0000 (18:55 +0300)]
mpu6050 device tree overlay (#2031)
Add overlay and config options for InvenSense MPU6050 6-axis motion
detector.
popcornmix [Mon, 22 May 2017 12:35:28 +0000 (13:35 +0100)]
config: Add CONFIG_IPV6_SIT_6RD
popcornmix [Mon, 22 May 2017 14:28:27 +0000 (15:28 +0100)]
config: Add CONFIG_IPV6_ROUTE_INFO
Liviu Dudau [Wed, 1 Mar 2017 12:26:28 +0000 (12:26 +0000)]
ASoC: TLV320AIC23: Unquote NULL from control name
commit
a03faba972cb0f9b3a46d8054e674d5492e06c38 upstream.
Without this I am getting the following messages at boot on my Trimslice:
tlv320aic23-codec 2-001a: Control not supported for path LLINEIN -> [NULL] -> Line Input
tlv320aic23-codec 2-001a: ASoC: no dapm match for LLINEIN --> NULL --> Line Input
tlv320aic23-codec 2-001a: ASoC: Failed to add route LLINEIN -> NULL -> Line Input
tlv320aic23-codec 2-001a: Control not supported for path RLINEIN -> [NULL] -> Line Input
tlv320aic23-codec 2-001a: ASoC: no dapm match for RLINEIN --> NULL --> Line Input
tlv320aic23-codec 2-001a: ASoC: Failed to add route RLINEIN -> NULL -> Line Input
tlv320aic23-codec 2-001a: Control not supported for path MICIN -> [NULL] -> Mic Input
tlv320aic23-codec 2-001a: ASoC: no dapm match for MICIN --> NULL --> Mic Input
tlv320aic23-codec 2-001a: ASoC: Failed to add route MICIN -> NULL -> Mic Input
tegra-snd-trimslice sound: tlv320aic23-hifi <->
70002800.i2s mapping ok
Signed-off-by: Liviu Dudau <liviu@dudau.co.uk>
Signed-off-by: Mark Brown <broonie@kernel.org>
Phil Elwell [Sat, 20 May 2017 21:10:14 +0000 (22:10 +0100)]
overlays: README: remove vestigial SDIO parameters
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Karl Palsson [Tue, 18 Mar 2014 23:33:27 +0000 (23:33 +0000)]
usb/serial/ch341: Add parity support
Based on wireshark packet traces from a windows machine.
ch340 and ch341 both seem to support all parity modes, but only the ch341
appears to support variable data bits and variable stop bits, so those are left
unimplemented, as before.
Tested on a generic usb-rs485 dongle with the chip label scratched off, and
some Modbus/RTU devices that required various parity settings.
Signed-off-by: Karl Palsson <karlp@tweak.net.au>
popcornmix [Thu, 18 May 2017 10:40:43 +0000 (11:40 +0100)]
config: Add FB_TFT_ST7789V module
Phil Elwell [Fri, 19 May 2017 14:07:27 +0000 (15:07 +0100)]
serial: 8250: Add CAP_MINI, set for bcm2835aux
The AUX/mini-UART in the BCM2835 family of processors is a cut-down
8250 clone. In particular it is lacking support for the following
features: CSTOPB PARENB PARODD CMSPAR CS5 CS6
Add a new capability (UART_CAP_MINI) that exposes the restrictions to
the user of the termios API by turning off the unsupported features in
the request.
N.B. It is almost possible to automatically discover the missing
features by reading back the LCR register, but the CSIZE bits don't
cooperate (contrary to the documentation, both bits are significant,
but CS5 and CS6 are mapped to CS7) and the code is much longer.
See: https://github.com/raspberrypi/linux/issues/1561
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
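Hedged sketch of how such a capability can be applied in the 8250
set_termios path (illustrative; only the feature list comes from the
description above):
    if (up->capabilities & UART_CAP_MINI) {
            termios->c_cflag &= ~(CSTOPB | PARENB | PARODD | CMSPAR);
            if ((termios->c_cflag & CSIZE) == CS5 ||
                (termios->c_cflag & CSIZE) == CS6)
                    termios->c_cflag = (termios->c_cflag & ~CSIZE) | CS7;
    }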
Ed Blake [Thu, 10 Nov 2016 18:07:54 +0000 (18:07 +0000)]
serial: 8250: Add IrDA to UART capabilities
commit
98838d95075a5295f3478ceba18bcccf472e30f4 upstream.
Add an IrDA UART capability flag and change the type of
uart_8250_port.capabilities to be u32 rather than unsigned short to
accommodate the additional flag.
Signed-off-by: Ed Blake <ed.blake@imgtec.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
chenzhiwo [Wed, 17 May 2017 08:34:57 +0000 (16:34 +0800)]
Add device tree overlay for GPIO connected rotary encoder.
See Documentation/input/rotary-encoder.txt for more information.
popcornmix [Tue, 16 May 2017 18:34:52 +0000 (19:34 +0100)]
config: Add CONFIG_I2C_ROBOTFUZZ_OSIF
popcornmix [Tue, 16 May 2017 14:58:00 +0000 (15:58 +0100)]
config: Add CONFIG_TOUCHSCREEN_EDT_FT5X06
popcornmix [Tue, 16 May 2017 15:55:36 +0000 (16:55 +0100)]
config: Drop CONFIG_TOUCHSCREEN_EKTF2127
Ahmet Inan [Mon, 15 May 2017 15:10:53 +0000 (17:10 +0200)]
overlays: Add Goodix overlay
Add support for I2C connected Goodix gt9271 multiple touch controller using
GPIOs 4 and 17 (pins 7 and 11 on GPIO header) for interrupt and reset.
Signed-off-by: Ahmet Inan <inan@distec.de>
Ahmet Inan [Mon, 15 May 2017 14:55:56 +0000 (16:55 +0200)]
config: Add Goodix touch controller module
Signed-off-by: Ahmet Inan <inan@distec.de>
Eric Anholt [Mon, 15 May 2017 18:35:13 +0000 (11:35 -0700)]
BCM270X: Drop position requirement for CMA in VC4 overlay.
No longer necessary since
2aefcd576195a739a7a256099571c9c4a401005f,
and will probably help people that want to choose a larger CMA
allocation (particularly on pi0/1).
Signed-off-by: Eric Anholt <eric@anholt.net>
Eric Anholt [Mon, 15 May 2017 16:28:36 +0000 (09:28 -0700)]
drm/vc4: Mark the device as active when enabling runtime PM.
Failing to do so meant that we got a resume() callback on first use of
the device, so we would leak the bin BO that we allocated during
probe.
Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes: 553c942f8b2c ("drm/vc4: Allow using more than 256MB of CMA memory.")
P33M [Mon, 15 May 2017 13:51:42 +0000 (14:51 +0100)]
dwc_otg: remove unnecessary dma-mode channel halts on disconnect interrupt
Host channels are already halted in kill_urbs_in_qh_list() with the
subsequent interrupt processing behaving as if the URB was dequeued
via HCD callback.
There's no need to clobber the host channel registers a second time
as this exposes races between the driver and host channel resulting
in hcd->free_hc_list becoming corrupted.
P33M [Mon, 15 May 2017 13:27:48 +0000 (14:27 +0100)]
dwc_otg: delete hcd->channel_lock
The lock serves no purpose as it is only held while the HCD spinlock
is already being held.
P33M [Fri, 12 May 2017 11:24:00 +0000 (12:24 +0100)]
dwc_otg: fix several potential crash sources
On root port disconnect events, the host driver state is cleared and
in-progress host channels are forcibly stopped. This doesn't play
well with the FIQ running in the background, so:
- Guard the disconnect callback with both the host spinlock and FIQ
spinlock
- Move qtd dereference in dwc_otg_handle_hc_fsm() after the early-out
so we don't dereference a qtd that has gone away
- Turn catch-all BUG()s in dwc_otg_handle_hc_fsm() into warnings.
Phil Elwell [Thu, 11 May 2017 21:00:20 +0000 (22:00 +0100)]
SQUASH: BCM270X_DT: Fix typo in mmc overlay
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Phil Elwell [Thu, 11 May 2017 15:58:16 +0000 (16:58 +0100)]
BCM270X_DT: Tidy up mmc, sdhost, sdio overlays
The mmc, sdhost, sdio and sdio-1bit overlays had a few
anachronisms and oddities which were overdue for fixing.
The new versions should be functionally equivalent.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
popcornmix [Wed, 10 May 2017 11:47:46 +0000 (12:47 +0100)]
dwcotg: Allow to build without FIQ on ARM64
Signed-off-by: popcornmix <popcornmix@gmail.com>
Nisar Sayed [Tue, 9 May 2017 17:51:42 +0000 (18:51 +0100)]
According to RFC 2460, an IPv6 UDP checksum calculation that yields a
result of zero must be changed to 0xffff; however, this is not
supported by the smsc95xx family, hence enable checksum offload only
for IPv4 TCP/UDP packets.
Signed-off-by: Nisar Sayed <Nisar.Sayed@microchip.com>
Reported-by: popcorn mix <popcornmix@gmail.com>