path: root/drivers/gpu/drm/i915/gvt/scheduler.c
...
| * drm/i915/gvt: add a NULL pointer check to avoid kernel panic
    Chuanxiao Dong, 2017-02-23 (1 file, -0/+3)

    Due to request replay, a context switch interrupt may arrive after gvt
    has already freed the workload, which can cause a kernel NULL pointer
    dereference and panic. This patch adds a simple check to avoid that in
    the short term. In the long term, the gvt workload lifecycle does not
    match the i915 request lifecycle, and a proper way to manage it still
    needs to be found.

    v4: simplify the NULL pointer check.
    v5: add unlikely to optimize.

    Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
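    A minimal sketch of the kind of guard described above, with placeholder
    type and field names rather than the real ones from scheduler.c:

        #include <linux/compiler.h>

        /* Placeholder types standing in for the real gvt structures. */
        struct sketch_workload { int status; };
        struct sketch_scheduler { struct sketch_workload *current_workload[4]; };

        void sketch_context_switch_notify(struct sketch_scheduler *sched,
                                          int ring_id)
        {
                struct sketch_workload *workload = sched->current_workload[ring_id];

                /* Request replay may have freed the workload already; bail out. */
                if (unlikely(!workload))
                        return;

                /* ... normal per-workload context-switch handling ... */
        }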
* | drm/i915: Remove superfluous i915_add_request_no_flush() helper
    Chris Wilson, 2017-03-17 (1 file, -1/+1)

    The only time we need to emit a flush inside request emission is after
    an execbuffer, for which we can use the full __i915_add_request(). All
    other instances want the simpler i915_add_request() without flushing,
    so remove the useless helper.

    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
    Link: http://patchwork.freedesktop.org/patch/msgid/20170317114709.8388-1-chris@chris-wilson.co.uk
* | drm/i915: make context status notifier head be per engine
    Changbin Du, 2017-03-16 (1 file, -26/+19)

    GVT-g introduced the context status notifier to schedule GVT-g
    workloads. At that time the notifier was bound to the GVT-g context
    only, so GVT-g was not aware of host workloads.

    Now we are going to improve GVT-g's guest workload scheduler policy and
    add GuC emulation support for new Gen graphics. Both features require
    being notified about all contexts running on hardware (they will not
    alter host workloads), so make the following change:

    1. Move the context status notifier head from i915_gem_context to
       intel_engine_cs, i.e. one notifier head per engine instead of per
       context. The execlist driver still calls the notifier for each
       context sched-in/out event on the current engine.

    2. On the GVT-g side, bind a notifier_block to each physical engine at
       GVT-g initialization time, so GVT-g can see all context status
       events. In this patch GVT-g does nothing for host context events; a
       handler will be added later. In any case the notifier callback is a
       no-op if there is no active vGPU.

    Since intel_gvt_init() is called at an early initialization stage and
    requires the status notifier head to already be initialized, initialize
    it in intel_engine_setup().

    v2: remove a redundant newline. (chris)

    Fixes: 3c7ba6359d70 ("drm/i915: Introduce execlist context status change notification")
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100232
    Signed-off-by: Changbin Du <changbin.du@intel.com>
    Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
    Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
    Cc: Zhi Wang <zhi.a.wang@intel.com>
    Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
    Link: http://patchwork.freedesktop.org/patch/msgid/20170313024711.28591-1-changbin.du@intel.com
    Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
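    A sketch of the per-engine notifier arrangement using the generic
    kernel notifier API; the structure and function names are illustrative,
    not the actual i915/gvt ones:

        #include <linux/notifier.h>

        struct sketch_engine {
                /* One notifier head per engine instead of per context. */
                struct atomic_notifier_head context_status_notifier;
        };

        static int sketch_gvt_notifier_call(struct notifier_block *nb,
                                            unsigned long action, void *data)
        {
                /* No-op unless a vGPU is active; a real handler would
                 * dispatch on context sched-in/sched-out here. */
                return NOTIFY_OK;
        }

        static struct notifier_block sketch_gvt_nb = {
                .notifier_call = sketch_gvt_notifier_call,
        };

        void sketch_engine_setup(struct sketch_engine *engine)
        {
                ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
                /* GVT-g registers one block per physical engine at init. */
                atomic_notifier_chain_register(&engine->context_status_notifier,
                                               &sketch_gvt_nb);
        }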
* drm/i915/gvt: Fix shadow context descriptor
    Zhenyu Wang, 2017-02-14 (1 file, -1/+2)

    We need to be careful to update only the addressing-mode bits of the
    gvt shadow context descriptor while keeping the rest of the valid
    configuration. This fixes a GPU hang caused by an invalid descriptor
    being submitted for a gvt workload.

    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
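    An illustrative mask update: clear only the addressing-mode field and
    OR in the wanted value, leaving every other descriptor bit untouched.
    The mask below is a placeholder, not the real hardware definition:

        #include <linux/types.h>

        #define SKETCH_ADDR_MODE_MASK   (0x3ULL << 3)   /* placeholder field */

        static inline u64 sketch_update_addr_mode(u64 shadow_desc, u64 addr_mode)
        {
                shadow_desc &= ~SKETCH_ADDR_MODE_MASK;            /* clear old mode  */
                shadow_desc |= addr_mode & SKETCH_ADDR_MODE_MASK; /* set new mode    */
                return shadow_desc;                               /* other bits kept */
        }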
* drm/i915/gvt: remove a redundant end of line in debug log
    Changbin Du, 2017-02-09 (1 file, -1/+1)

    Remove a redundant end-of-line in the log message below:
    'will complete workload %p\n, status: %d\n'

    Signed-off-by: Changbin Du <changbin.du@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux into drm-next
    Dave Airlie, 2017-01-27 (1 file, -7/+7)

    Backmerge Linus master to get the connector locking revert.

    * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux: (645 commits)
      sysctl: fix proc_doulongvec_ms_jiffies_minmax()
      Revert "drm/probe-helpers: Drop locking from poll_enable"
      MAINTAINERS: add Dan Streetman to zbud maintainers
      MAINTAINERS: add Dan Streetman to zswap maintainers
      mm: do not export ioremap_page_range symbol for external module
      mn10300: fix build error of missing fpu_save()
      romfs: use different way to generate fsid for BLOCK or MTD
      frv: add missing atomic64 operations
      mm, page_alloc: fix premature OOM when racing with cpuset mems update
      mm, page_alloc: move cpuset seqcount checking to slowpath
      mm, page_alloc: fix fast-path race with cpuset update or removal
      mm, page_alloc: fix check for NULL preferred_zone
      kernel/panic.c: add missing \n
      fbdev: color map copying bounds checking
      frv: add atomic64_add_unless()
      mm/mempolicy.c: do not put mempolicy before using its nodemask
      radix-tree: fix private list warnings
      Documentation/filesystems/proc.txt: add VmPin
      mm, memcg: do not retry precharge charges
      proc: add a schedule point in proc_pid_readdir()
      ...
| * drm/i915/gvt: dec vgpu->running_workload_num after the workload is really done
    Changbin Du, 2017-01-09 (1 file, -5/+5)

    vgpu->running_workload_num is used to determine whether a vgpu has any
    workload running, so we should make sure the workload is really done
    before decrementing running_workload_num. complete_current_workload()
    is not the right place to do it, since that function is still
    processing the workload. This patch moves the decrement to after the
    workload is fully processed.

    v2: move the decrement before wake_up(&scheduler->workload_complete_wq) (Min He)

    Signed-off-by: Changbin Du <changbin.du@intel.com>
    Reviewed-by: Min He <min.he@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
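    A sketch of the intended ordering, with placeholder names: decrement
    the counter only once all completion work is finished, just before
    waking any waiter on the completion queue:

        #include <linux/atomic.h>
        #include <linux/wait.h>

        struct sketch_vgpu { atomic_t running_workload_num; };
        struct sketch_scheduler { wait_queue_head_t workload_complete_wq; };

        void sketch_workload_done(struct sketch_vgpu *vgpu,
                                  struct sketch_scheduler *scheduler)
        {
                /* ... all per-workload completion processing happens first ... */

                atomic_dec(&vgpu->running_workload_num);   /* workload truly done */
                wake_up(&scheduler->workload_complete_wq); /* then notify waiters */
        }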
| * drm/i915/gvt: fix use after free for workload
    Changbin Du, 2017-01-09 (1 file, -2/+2)

    In workload_thread() we invoke complete_current_workload() to clean up
    the just-processed workload, and the workload is freed there, so we
    cannot access workload->req after that call. This patch moves
    complete_current_workload() to after the last use of workload->req.

    Signed-off-by: Changbin Du <changbin.du@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
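    The ordering rule, sketched with placeholder names; the point is simply
    that anything touching workload->req happens before the call that frees
    the workload:

        struct sketch_request;
        struct sketch_workload { struct sketch_request *req; };

        void sketch_put_request(struct sketch_request *rq);
        void sketch_complete_current_workload(struct sketch_workload *w);

        void sketch_finish_workload(struct sketch_workload *workload)
        {
                /* Everything that still needs workload->req goes here:
                 * waiting on it, recording status, dropping a reference. */
                sketch_put_request(workload->req);
                workload->req = NULL;

                /* Only now free the workload; after this call neither
                 * 'workload' nor workload->req may be touched again. */
                sketch_complete_current_workload(workload);
        }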
* | Merge tag 'v4.10-rc2' into drm-intel-next-queued
    Daniel Vetter, 2017-01-04 (1 file, -4/+6)

    Backmerge Linux 4.10-rc2 to resync with our -fixes cherry-picks. I've
    done the backmerge directly because Dave is on vacation.

    Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
| * drm/i915/gvt: fix lock not released bug for dispatch_workload() err path
    Zhenyu Wang, 2016-11-25 (1 file, -4/+6)

    Be careful to release struct_mutex when request allocation fails, and
    handle the return status consistently with the normal exit path. Also
    ensure the correct workload request is checked in the completion path.

    v2: Add Fixes note

    Fixes: 90d27a1b180e ("drm/i915/gvt: fix deadlock in workload_thread")
    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Cc: Dan Carpenter <dan.carpenter@oracle.com>
    Cc: Pei Zhang <pei.zhang@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
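    A sketch of the error-path rule with placeholder names: however
    dispatch exits, the mutex is dropped and the status is propagated the
    same way as on the normal path:

        #include <linux/mutex.h>

        struct sketch_workload { int status; /* ... */ };

        int sketch_alloc_request(struct sketch_workload *workload);

        int sketch_dispatch_workload(struct sketch_workload *workload,
                                     struct mutex *struct_mutex)
        {
                int ret;

                mutex_lock(struct_mutex);

                ret = sketch_alloc_request(workload);
                if (ret)
                        goto out;       /* never return with the mutex held */

                /* ... normal dispatch work ... */
        out:
                workload->status = ret; /* same status handling on both paths */
                mutex_unlock(struct_mutex);
                return ret;
        }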
* | drm/i915: Mark the shadow gvt context as closed
    Chris Wilson, 2016-12-18 (1 file, -9/+1)

    As the shadow gvt context is not user accessible and does not have an
    associated vm, we can mark it as closed during its construction. This
    saves leaking internal knowledge of i915_gem_context into gvt/.

    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
    Link: http://patchwork.freedesktop.org/patch/msgid/20161218153724.8439-5-chris@chris-wilson.co.uk
* drm/i915/gvt: fix deadlock in workload_thread
    Pei Zhang, 2016-11-14 (1 file, -20/+13)

    This is a classic ABBA deadlock between two mutexes, gvt.lock (a) and
    drm.struct_mutex (b). The deadlock happens between two threads:

    1. intel_gvt_create/destroy_vgpu: P(a) -> P(b)
    2. workload_thread:               P(b) -> P(a)

    The fix is to align the lock acquisition order in both threads; this
    patch adjusts the order in workload_thread(). This fixes the lockup
    seen in the guest-reboot stress test.

    v2: adjust sequence in workload_thread based on zhenyu's suggestion.
        adjust sequence in create/destroy_vgpu function.
    v3: fix to still require struct_mutex for dispatch_workload()

    Signed-off-by: Pei Zhang <pei.zhang@intel.com>
    [zhenyuw: fix unused variables warnings.]
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
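    A sketch of the consistent ordering, with placeholder lock names: both
    paths take lock A before lock B, which removes the ABBA cycle (the
    actual patch additionally reworks which sections hold struct_mutex):

        #include <linux/mutex.h>

        void sketch_worker_iteration(struct mutex *gvt_lock,
                                     struct mutex *struct_mutex)
        {
                mutex_lock(gvt_lock);           /* A: gvt-level lock first  */
                mutex_lock(struct_mutex);       /* B: then drm struct_mutex */

                /* ... dispatch the workload; since create/destroy_vgpu also
                 * takes A before B, no cross ordering can deadlock ... */

                mutex_unlock(struct_mutex);
                mutex_unlock(gvt_lock);
        }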
* drm/i915/gvt: use kmap instead of kmap_atomic around guest memory access
    Xiaoguang Chen, 2016-11-10 (1 file, -8/+8)

    kmap_atomic doesn't allow sleep until unmapped. However, it's necessary
    to allow sleep during reading/writing guest memory, so use kmap
    instead.

    Signed-off-by: Bing Niu <bing.niu@intel.com>
    Signed-off-by: Xiaoguang Chen <xiaoguang.chen@intel.com>
    Signed-off-by: Jike Song <jike.song@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
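    A sketch of the pattern (placeholder function name): guest-memory
    access may sleep, so map the page with kmap(), which permits sleeping,
    rather than kmap_atomic(), which forbids it until kunmap_atomic():

        #include <linux/highmem.h>
        #include <linux/string.h>

        void sketch_copy_from_guest_page(struct page *page, void *dst,
                                         unsigned int offset, unsigned int len)
        {
                void *vaddr = kmap(page);          /* sleeping allowed while mapped */

                memcpy(dst, vaddr + offset, len);  /* may sleep/fault safely */
                kunmap(page);
        }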
* drm/i915/gvt: Fix workload status after wait
    Zhenyu Wang, 2016-11-07 (1 file, -0/+2)

    Since commit e95433c73a11759203af1cae5958f998c9673370, the workload
    status was only captured on the error path, but we need to set it
    properly on the normal path too; otherwise we fail to complete the
    workload, which can lead to a vGPU reset of the guest VM.

    v2: use braces and add Fixes tag.

    Fixes: e95433c73a11 ("drm/i915: Rearrange i915_wait_request() accounting with callers")
    Cc: Chris Wilson <chris@chris-wilson.co.uk>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
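    The intent, sketched with placeholder names: record the wait outcome on
    both paths so the completion code always sees a valid status:

        struct sketch_workload { int status; /* ... */ };

        long sketch_wait_request(struct sketch_workload *workload);

        void sketch_wait_for_workload(struct sketch_workload *workload)
        {
                long ret = sketch_wait_request(workload);

                if (ret < 0) {
                        workload->status = ret;   /* error path       */
                } else {
                        workload->status = 0;     /* normal path, too */
                }
        }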
* drm/i915: Rearrange i915_wait_request() accounting with callers
    Chris Wilson, 2016-10-28 (1 file, -3/+6)

    Our low-level wait routine has evolved from our generic wait interface
    that handled unlocked, RPS boosting, waits with time tracking. If we
    push our GEM fence tracking to use reservation_objects (required for
    handling multiple timelines), we lose the ability to pass the required
    information down to i915_wait_request(). However, if we push the extra
    functionality from i915_wait_request() to the individual callsites
    (i915_gem_object_wait_rendering and i915_gem_wait_ioctl) that make use
    of those extras, we can both simplify our low level wait and prepare
    for extending the GEM interface for use of reservation_objects.

    v2: Rewrite i915_wait_request() kerneldocs

    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Matthew Auld <matthew.william.auld@gmail.com>
    Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
    Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-4-chris@chris-wilson.co.uk
* drm/i915/gvt: fix nested sleeping issue
    Du, Changbin, 2016-10-27 (1 file, -7/+12)

    We cannot use the blocking mutex_lock inside a wait loop. Here we
    invoke pick_next_workload(), which needs to acquire a mutex, in our
    "condition" expression; we then enter another going-to-sleep sequence
    and change the task state, which is dangerous. Rewrite the wait
    sequence to avoid nested sleeping.

    v2: fix do...while loop exit condition (zhenyu)
    v3: rebase to gvt-staging branch

    Signed-off-by: Du, Changbin <changbin.du@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
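    One standard way to structure such a loop without nested sleeping is
    the wait_woken() pattern sketched below; the names are placeholders and
    this is not the literal patch. The condition check, which may itself
    sleep on a mutex, runs while the task is still TASK_RUNNING, and
    wait_woken() handles the sleep and the wakeup race:

        #include <linux/wait.h>
        #include <linux/sched.h>
        #include <linux/kthread.h>

        void *sketch_pick_next_workload(void);  /* may take a mutex internally */

        void *sketch_wait_for_workload(wait_queue_head_t *wq)
        {
                DEFINE_WAIT_FUNC(wait, woken_wake_function);
                void *workload = NULL;

                add_wait_queue(wq, &wait);
                do {
                        workload = sketch_pick_next_workload();
                        if (workload)
                                break;
                        wait_woken(&wait, TASK_INTERRUPTIBLE,
                                   MAX_SCHEDULE_TIMEOUT);
                } while (!kthread_should_stop());
                remove_wait_queue(wq, &wait);

                return workload;
        }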
* drm/i915/gvt: mark symbols static where possible
    Du, Changbin, 2016-10-20 (1 file, -1/+2)

    Mark all local functions & variables as static.

    Signed-off-by: Du, Changbin <changbin.du@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
* drm/i915/gvt: properly access enabled intel_engine_cs
    Zhenyu Wang, 2016-10-20 (1 file, -0/+4)

    Switch to the new for_each_engine() helper to properly access only the
    enabled intel_engine_cs, as the i915 core now manages them dynamically.
    GVT-g init time still depends on the ring mask to determine the engine
    list, since it runs earlier.

    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
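    An illustrative loop over only the engines that actually exist, instead
    of indexing a fixed static array. The real code uses the i915
    for_each_engine() helper, whose exact signature has varied across
    kernel versions, so a generic equivalent is shown:

        #include <linux/types.h>

        struct sketch_engine { bool enabled; /* ... */ };

        void sketch_for_each_enabled_engine(struct sketch_engine *engines, int count,
                                            void (*fn)(struct sketch_engine *))
        {
                int i;

                for (i = 0; i < count; i++) {
                        if (!engines[i].enabled)  /* skip engines not present */
                                continue;
                        fn(&engines[i]);
                }
        }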
* drm/i915/gvt: Stop waiting whilst holding struct_mutex
    Chris Wilson, 2016-10-20 (1 file, -9/+13)

    For whatever reason, the gvt scheduler runs synchronously. At the very
    least, let's run synchronously without holding the struct_mutex.

    v2: cut'n'paste mutex_lock instead of unlock. Replace the long hold of
        struct_mutex with a mutex to serialise the worker threads.

    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
* drm/i915/gvt: Stop checking for impossible interrupts from a kthread
    Chris Wilson, 2016-10-20 (1 file, -7/+2)

    The kthread will not be interrupted, don't even bother checking.

    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
* drm/i915/gvt: Hold a reference on the request
    Chris Wilson, 2016-10-20 (1 file, -11/+12)

    The workload took a pointer to the request, and even waited upon it,
    without holding a reference on the request. Take that reference
    explicitly, and fix up the error path following request allocation
    that missed flushing the request.

    v2: [zhenyuw] drop the request put in the dispatch error path, as the
        main thread caller will handle it identically to a real request.

    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
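    The reference discipline, sketched with placeholder names: take a
    reference when the workload stores the request pointer, and drop it
    only after the last use (for example, after waiting on it):

        struct sketch_request;
        struct sketch_workload { struct sketch_request *req; };

        struct sketch_request *sketch_request_get(struct sketch_request *rq);
        void sketch_request_put(struct sketch_request *rq);

        void sketch_attach_request(struct sketch_workload *workload,
                                   struct sketch_request *rq)
        {
                workload->req = sketch_request_get(rq); /* pin for workload lifetime */
        }

        void sketch_detach_request(struct sketch_workload *workload)
        {
                sketch_request_put(workload->req);      /* last use: drop reference */
                workload->req = NULL;
        }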
* drm/i915/gvt: clean up intel_gvt.h as interface for i915 core
    Zhenyu Wang, 2016-10-20 (1 file, -2/+3)

    The i915 core should only call functions and structures exposed through
    intel_gvt.h. Remove internal gvt.h and i915_pvinfo.h. Change the
    internal intel_gvt structure to a private handler, which avoids
    exposing gvt internal structures to the i915 core.

    v2: Fix per Chris's comment
        - carefully handle dev_priv->gvt assignment
        - add necessary bracket for macro helper
        - forward declare struct intel_gvt
        - keep free operation within same file handling alloc
    v3: fix use after free and remove intel_gvt.initialized
    v4: change to_gvt() to an inline

    Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
* drm/i915/gvt: Fix build failure after intel_engine_cs change
    Zhenyu Wang, 2016-10-18 (1 file, -3/+3)

    Change GVT-g code references to intel_engine_cs from a static array to
    an allocated pointer, after commit 3b3f1650b1ca ("drm/i915: Allocate
    intel_engine_cs structure only for the enabled engines").

    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
    Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    Link: http://patchwork.freedesktop.org/patch/msgid/20161018014007.29369-1-zhenyuw@linux.intel.com
* drm/i915/gvt: vGPU command scanner
    Zhi Wang, 2016-10-14 (1 file, -0/+14)

    This patch introduces a command scanner to scan guest command buffers.

    Signed-off-by: Yulei Zhang <yulei.zhang@intel.com>
    Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
* drm/i915/gvt: vGPU context switch
    Zhi Wang, 2016-10-14 (1 file, -0/+4)

    As different VMs may configure different render MMIOs when executing
    workloads, the render MMIOs have to be switched when scheduling
    workloads between VMs.

    Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
* drm/i915/gvt: vGPU workload scheduler
    Zhi Wang, 2016-10-14 (1 file, -0/+554)

    This patch introduces the vGPU workload scheduler routines. The GVT
    workload scheduler is responsible for picking and executing GVT
    workloads from the currently scheduled vGPU. Before a workload is
    submitted to host i915, the guest execlist context is shadowed in the
    host GVT shadow context and the instructions in the guest ring buffer
    are copied into the GVT shadow ring buffer. The GVT-g workload
    scheduler then scans the instructions and submits the workload to
    host i915.

    Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
    Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
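    A high-level sketch of that flow; every name below is a placeholder and
    the shape is deliberately simplified compared to the real thread loop:

        #include <linux/kthread.h>

        struct sketch_workload;
        struct sketch_scheduler;

        struct sketch_workload *sketch_pick_next_workload(struct sketch_scheduler *s);
        void sketch_shadow_guest_context(struct sketch_workload *w);
        void sketch_copy_ring_buffer(struct sketch_workload *w);
        void sketch_scan_and_dispatch(struct sketch_workload *w);
        void sketch_wait_and_complete(struct sketch_workload *w);

        int sketch_workload_thread(void *data)
        {
                struct sketch_scheduler *sched = data;

                while (!kthread_should_stop()) {
                        struct sketch_workload *w = sketch_pick_next_workload(sched);

                        if (!w)
                                continue;  /* the real thread sleeps until woken */

                        sketch_shadow_guest_context(w); /* guest execlist ctx -> shadow ctx   */
                        sketch_copy_ring_buffer(w);     /* guest ring buffer -> shadow buffer */
                        sketch_scan_and_dispatch(w);    /* command scan, submit to host i915  */
                        sketch_wait_and_complete(w);    /* wait, then complete back to guest  */
                }
                return 0;
        }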