author:    Alex Dai <yu.dai@intel.com>              2015-12-02 16:56:29 -0800
committer: Daniel Vetter <daniel.vetter@ffwll.ch>   2015-12-03 15:11:54 +0100
commit:    5a843307cdf5ffa65a9f2382b3827e86576bbfe8
tree:      860571cb59772a289bfd0ff2bd97d9bb042ddfb1 /drivers/gpu/drm/i915/i915_debugfs.c
parent:    ee7d6cfa4b15aafa1d87f913572f30dd64cdd85a
drm/i915/guc: Clean up locks in GuC
For now, remove the spinlocks that protected the GuC's
statistics block and work queue; the data they guard is
only ever accessed by code that already holds the global
struct_mutex, so the locks are redundant (until the big
struct_mutex rewrite!).
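The redundancy argument is the standard one. As a minimal sketch (the
names below are hypothetical, not the driver's code): when every path
that touches a piece of state already holds the same outer mutex, an
inner spinlock around that state excludes nothing and can be dropped.

#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/types.h>

struct fake_dev {                       /* hypothetical device state */
        struct mutex struct_mutex;      /* global lock, held by all callers */
        u64 action_count;               /* a GuC-style statistic */
};

/* Every reader and writer already serializes on struct_mutex, so a
 * dedicated spinlock around action_count would never be contended;
 * the lockdep assertion documents (and checks) the real protection. */
static void bump_action_count(struct fake_dev *dev)
{
        lockdep_assert_held(&dev->struct_mutex);
        dev->action_count++;
}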
The specific problem that the spinlocks caused was that
if the work queue was full, the driver would try to
spinwait for one jiffy, but with interrupts disabled the
jiffy count would not advance, leading to a system hang.
The issue was found using test case igt/gem_close_race.
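A hedged sketch of that failure mode (struct fake_wq and the loop below
are illustrative, not the driver's actual code): a jiffy-based timeout
checked inside an interrupts-off critical section can never expire once
the jiffy count stops ticking, so the wait loop never terminates.

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct fake_wq {                        /* hypothetical work-queue state */
        spinlock_t lock;
        u32 size, used;
};

/* Broken pattern: spin-wait "one jiffy" for free space while holding
 * the lock with interrupts disabled. As described above, the jiffy
 * count does not advance with interrupts off, so time_after() never
 * becomes true and the loop spins forever. */
static int wq_wait_for_space_broken(struct fake_wq *wq, u32 needed)
{
        unsigned long timeout = jiffies + 1;
        int ret = 0;

        spin_lock_irq(&wq->lock);       /* interrupts disabled from here */
        while (wq->size - wq->used < needed) {
                if (time_after(jiffies, timeout)) {     /* never fires */
                        ret = -EAGAIN;
                        break;
                }
                cpu_relax();
        }
        spin_unlock_irq(&wq->lock);
        return ret;
}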
The new version will usleep() instead, still holding
the struct_mutex but without any spinlocks.
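A sketch of the replacement shape (again using the hypothetical
fake_wq from the previous sketch): with the spinlock gone, the caller's
struct_mutex provides the exclusion, and the wait can sleep between
polls; usleep_range() is the usual kernel primitive for a short sleep.

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>

/* Fixed pattern: the work queue is protected by the caller's
 * struct_mutex, so the wait may sleep. The queue is drained by the
 * GuC firmware, so space can free up even while the mutex is held. */
static int wq_wait_for_space_fixed(struct fake_wq *wq, u32 needed,
                                   struct mutex *struct_mutex)
{
        int retries = 1000;             /* arbitrary bound for the sketch */

        lockdep_assert_held(struct_mutex);
        while (wq->size - wq->used < needed) {
                if (--retries == 0)
                        return -EAGAIN;
                usleep_range(1000, 2000);       /* sleep, do not spin */
        }
        return 0;
}

Unlike a spinlock, a mutex may legitimately be held across a sleep,
which is exactly why dropping the spinlocks makes a sleep-based wait
safe here.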
v4: Reorganize commit message (Dave Gordon)
v3: Remove unnecessary whitespace churn
v2: Clean up wq_lock too
v1: Clean up host2guc lock as well
Signed-off-by: Alex Dai <yu.dai@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449104189-27591-1-git-send-email-yu.dai@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Diffstat (limited to 'drivers/gpu/drm/i915/i915_debugfs.c')
-rw-r--r--   drivers/gpu/drm/i915/i915_debugfs.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 9cc57296cbb1..a8721fccd8a0 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2469,15 +2469,15 @@ static int i915_guc_info(struct seq_file *m, void *data)
 	if (!HAS_GUC_SCHED(dev_priv->dev))
 		return 0;
 
+	if (mutex_lock_interruptible(&dev->struct_mutex))
+		return 0;
+
 	/* Take a local copy of the GuC data, so we can dump it at leisure */
-	spin_lock(&dev_priv->guc.host2guc_lock);
 	guc = dev_priv->guc;
-	if (guc.execbuf_client) {
-		spin_lock(&guc.execbuf_client->wq_lock);
+	if (guc.execbuf_client)
 		client = *guc.execbuf_client;
-		spin_unlock(&guc.execbuf_client->wq_lock);
-	}
-	spin_unlock(&dev_priv->guc.host2guc_lock);
+
+	mutex_unlock(&dev->struct_mutex);
 
 	seq_printf(m, "GuC total action count: %llu\n", guc.action_count);
 	seq_printf(m, "GuC action failure count: %u\n", guc.action_fail);