CHROMIUM: virtio/wl: Fix locking order in do_new()

Fix the following lockdep splat. It is a false positive, since
vfd->lock belongs to a vfd that has just been created and that nobody
else can reference yet, but it confuses lock debugging tools and the
fix is trivial, so just fix it. A sketch of the reordering follows the
splat below.

[ 4.634549] ======================================================
[ 4.634549] WARNING: possible circular locking dependency detected
[ 4.634551] 4.14.58-06408-ge3d9dad0bb21 #10 Not tainted
[ 4.634551] ------------------------------------------------------
[ 4.634552] kworker/0:2/144 is trying to acquire lock:
[ 4.634552] (&vfd->lock){+.+.}, at: [<ffffffff814c889a>] vq_in_work_handler+0x1ac/0x366
[ 4.634567]
[ 4.634567] but task is already holding lock:
[ 4.634567] (&vi->vfds_lock){+.+.}, at: [<ffffffff814c8877>] vq_in_work_handler+0x189/0x366
[ 4.634570]
[ 4.634570] which lock already depends on the new lock.
[ 4.634570]
[ 4.634570]
[ 4.634570] the existing dependency chain (in reverse order) is:
[ 4.634571]
[ 4.634571] -> #1 (&vi->vfds_lock){+.+.}:
[ 4.634579] __mutex_lock+0x81/0x39d
[ 4.634580] do_new+0x95/0x276
[ 4.634581] virtwl_ioctl_ptr+0x14c/0x20f
[ 4.634588] compat_SyS_ioctl+0x50d/0x1d0d
[ 4.634591] do_fast_syscall_32+0xbb/0xff
[ 4.634593] entry_SYSENTER_compat+0x84/0x96
[ 4.634593]
[ 4.634593] -> #0 (&vfd->lock){+.+.}:
[ 4.634599] lock_acquire+0x185/0x1bf
[ 4.634600] __mutex_lock+0x81/0x39d
[ 4.634602] vq_in_work_handler+0x1ac/0x366
[ 4.634604] process_one_work+0x2aa/0x4d8
[ 4.634605] worker_thread+0x193/0x24b
[ 4.634606] kthread+0xf8/0x100
[ 4.634608] ret_from_fork+0x3a/0x50
[ 4.634608]
[ 4.634608] other info that might help us debug this:
[ 4.634608]
[ 4.634609] Possible unsafe locking scenario:
[ 4.634609]
[ 4.634609] CPU0 CPU1
[ 4.634609] ---- ----
[ 4.634610] lock(&vi->vfds_lock);
[ 4.634610] lock(&vfd->lock);
[ 4.634611] lock(&vi->vfds_lock);
[ 4.634612] lock(&vfd->lock);
[ 4.634613]
[ 4.634613] *** DEADLOCK ***
[ 4.634613]
[ 4.634614] 3 locks held by kworker/0:2/144:
[ 4.634614] #0: ("events"){+.+.}, at: [<ffffffff810d1486>] process_one_work+0x146/0x4d8
[ 4.634617] #1: ((&vi->in_vq_work)){+.+.}, at: [<ffffffff810d1486>] process_one_work+0x146/0x4d8
[ 4.634619] #2: (&vi->vfds_lock){+.+.}, at: [<ffffffff814c8877>] vq_in_work_handler+0x189/0x366
[ 4.634621]
[ 4.634621] stack backtrace:
[ 4.634623] CPU: 0 PID: 144 Comm: kworker/0:2 Not tainted 4.14.58-06408-ge3d9dad0bb21 #10
[ 4.634625] Workqueue: events vq_in_work_handler
[ 4.634626] Call Trace:
[ 4.634630] dump_stack+0x9f/0xd5
[ 4.634632] print_circular_bug.isra.42+0x1c7/0x1d4
[ 4.634634] __lock_acquire+0xbbd/0xe7d
[ 4.634637] ? vq_in_work_handler+0x1ac/0x366
[ 4.634638] ? lock_acquire+0x185/0x1bf
[ 4.634639] lock_acquire+0x185/0x1bf
[ 4.634641] ? vq_in_work_handler+0x1ac/0x366
[ 4.634642] __mutex_lock+0x81/0x39d
[ 4.634643] ? vq_in_work_handler+0x1ac/0x366
[ 4.634645] ? vq_in_work_handler+0x1ac/0x366
[ 4.634647] ? vq_in_work_handler+0x1ac/0x366
[ 4.634648] vq_in_work_handler+0x1ac/0x366
[ 4.634650] process_one_work+0x2aa/0x4d8
[ 4.634651] ? worker_thread+0x1d0/0x24b
[ 4.634653] ? rescuer_thread+0x2ab/0x2ab
[ 4.634654] worker_thread+0x193/0x24b
[ 4.634655] ? rescuer_thread+0x2ab/0x2ab
[ 4.634656] kthread+0xf8/0x100
[ 4.634658] ? kthread_create_on_node+0x5d/0x5d
[ 4.634659] ? do_fast_syscall_32+0xbb/0xff
[ 4.634661] ? SyS_exit_group+0xb/0xb
[ 4.634662] ret_from_fork+0x3a/0x50
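
For illustration, a rough sketch of the reordering. The code shape is
hypothetical, inferred from the splat rather than copied from the
driver source:

    /* Before: do_new() nested vi->vfds_lock inside vfd->lock, which
     * recorded a vfd->lock -> vfds_lock dependency and inverted the
     * vfds_lock -> vfd->lock order used by vq_in_work_handler(). */
    mutex_lock(&vfd->lock);
    mutex_lock(&vi->vfds_lock);
    /* ... reserve an id and publish the vfd ... */
    mutex_unlock(&vi->vfds_lock);
    /* ... finish initializing the vfd ... */
    mutex_unlock(&vfd->lock);

    /* After: take the locks in the same order as vq_in_work_handler(),
     * vi->vfds_lock before vfd->lock, so lockdep sees one consistent
     * ordering. This is safe (and the old order was only a false
     * positive) because the vfd is freshly allocated and unreachable
     * by anyone else until it is published. */
    mutex_lock(&vi->vfds_lock);
    mutex_lock(&vfd->lock);
    /* ... reserve an id, publish and initialize the vfd ... */
    mutex_unlock(&vfd->lock);
    mutex_unlock(&vi->vfds_lock);
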
BUG=none
TEST=Enable CONFIG_PROVE_LOCKING and verify that the splat no longer appears
Change-Id: I24d2f01bf8266fd6ae4bc558cf52295fe3b07773
Signed-off-by: Tomasz Figa <tfiga@chromium.org>
Reviewed-on: https://chromium-review.googlesource.com/1154233
Reviewed-by: Zach Reizner <zachr@chromium.org>