author    | Linus Torvalds <torvalds@linux-foundation.org> | 2019-09-19 16:24:24 -0700
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2019-09-19 16:24:24 -0700
commit    | 574cc4539762561d96b456dbc0544d8898bd4c6e
tree      | 07d84db8cf9fd30cbde6f539ce3a3f6116593e41 /drivers/gpu/drm/i915/i915_pmu.c
parent    | 3c2edc36a77420d8be05d656019dbc8c31535992
parent    | 945b584c94f8c665b2df3834a8a6a8faf256cd5f
Merge tag 'drm-next-2019-09-18' of git://anongit.freedesktop.org/drm/drm
Pull drm updates from Dave Airlie:
"This is the main pull request for 5.4-rc1 merge window. I don't think
there is anything outstanding so next week should just be fixes, but
we'll see if I missed anything. I landed some fixes earlier in the
week but got delayed writing the summary and sending it out, due to a
mix of a sick kid and jetlag!
There are some fixes pending, but I'd rather get the main merge out of
the way instead of delaying it longer.
It's also pretty large in commit count and new amd header file size.
The largest thing is four new amdgpu products (navi12/14, arcturus and
renoir APU support).
Otherwise it's pretty much lots of work across the board: i915 has
started landing tigerlake support, along with lots of icelake fixes,
locking rework for future gpu support, and lots of header file rework
(drmP.h is nearly gone); some old legacy hacks (DRM_WAIT_ON) have been
moved into the places that need them.
uapi:
- content protection type property for HDCP
core:
- rework include dependencies
- lots of drmP.h removals
- link rate calculation robustness fix
- make fb helper map only when required
- add connector->DDC adapter link
- DRM_WAIT_ON removed
- drop DRM_AUTH usage from drivers
dma-buf:
- reservation object fence helper
dma-fence:
- shrink dma_fence struct
- merge signal functions
- store timestamps in dma_fence
- selftests
ttm:
- embed drm_gem_object struct into ttm_buffer_object
- release_notify callback
bridges:
- sii902x - audio graph card support
- tc358767 - aux data handling rework
- ti-sn65dsi86 - debugfs support, DSI mode flags support
panels:
- Support for GiantPlus GPM940B0, Sharp LQ070Y3DG3B, Ortustech
COM37H3M, Novatek NT39016, Sharp LS020B1DD01D, Raydium RM67191, Boe
Himax8279d, Sharp LD-D5116Z01B
- TI nspire, NEC NL8048HL11, LG Philips LB035Q02, Sharp LS037V7DW01,
Sony ACX565AKM, Toppoly TD028TTEC1, Toppoly TD043MTEA1
i915:
- Initial tigerlake platform support
- Locking simplification work and general refactoring all over
- Selftests
- HDCP debug info improvements
- DSI properties
- Icelake display PLL fixes, colorspace fixes, bandwidth fixes, DSI
suspend/resume
- GuC fixes
- Perf fixes
- ElkhartLake enablement
- DP MST fixes
- GVT - command parser enhancements
amdgpu:
- add wipe memory on release flag for buffer creation
- Navi12/14 support (may be marked experimental)
- Arcturus support
- Renoir APU support
- mclk DPM for Navi
- DC display fixes
- Raven scatter/gather support
- RAS support for GFX
- Navi12 + Arcturus power features
- GPU reset for Picasso
- smu11 i2c controller support
amdkfd:
- navi12/14 support
- Arcturus support
radeon:
- kexec fix
nouveau:
- improved display color management
- detect lack of GPU power cables
vmwgfx:
- eviction priority support
- remove unused security feature
msm:
- msm8998 display support
- better async commit support for cursor updates
etnaviv:
- per-process address space support
- performance counter fixes
- softpin support
mcde:
- DCS transfers fix
exynos:
- drmP.h cleanup
lima:
- reduce logging
kirin:
- misc cleanups
komeda:
- dual-link support
- DT memory regions
hisilicon:
- misc fixes
imx:
- IPUv3 image converter fixes
- 32-bit RGB V4L2 pixel format support
ingenic:
- more support for panel related cases
mgag200:
- cursor support fix
panfrost:
- export GPU features register to userspace
- gpu heap allocations
- per-fd address space support
pl111:
- CLD pads wiring support removed from DT
rockchip:
- rework to use DRM PSR helpers
- fix bug in VOP_WIN_GET macro
- DSI DT binding rework
sun4i:
- improve support for color encoding and range
- DDC enabled GPIO
tinydrm:
- rework SPI support
- improve MIPI-DBI support
- moved to drm/tiny
vkms:
- rework CRC tracking
dw-hdmi:
- get_eld and i2s improvements
gm12u320:
- misc fixes
meson:
- global code cleanup
- vpu feature detect
omap:
- alpha/pixel blend mode properties
rcar-du:
- misc fixes"
* tag 'drm-next-2019-09-18' of git://anongit.freedesktop.org/drm/drm: (2112 commits)
drm/nouveau/bar/gm20b: Avoid BAR1 teardown during init
drm/nouveau: Fix ordering between TTM and GEM release
drm/nouveau/prime: Extend DMA reservation object lock
drm/nouveau: Fix fallout from reservation object rework
drm/nouveau/kms/nv50-: Don't create MSTMs for eDP connectors
drm/i915: Use NOEVICT for first pass on attemping to pin a GGTT mmap
drm/i915: to make vgpu ppgtt notificaiton as atomic operation
drm/i915: Flush the existing fence before GGTT read/write
drm/i915: Hold irq-off for the entire fake lock period
drm/i915/gvt: update RING_START reg of vGPU when the context is submitted to i915
drm/i915/gvt: update vgpu workload head pointer correctly
drm/mcde: Fix DSI transfers
drm/msm: Use the correct dma_sync calls harder
drm/msm: remove unlikely() from WARN_ON() conditions
drm/msm/dsi: Fix return value check for clk_get_parent
drm/msm: add atomic traces
drm/msm/dpu: async commit support
drm/msm: async commit support
drm/msm: split power control from prepare/complete_commit
drm/msm: add kms->flush_commit()
...
Diffstat (limited to 'drivers/gpu/drm/i915/i915_pmu.c')
-rw-r--r-- | drivers/gpu/drm/i915/i915_pmu.c | 298 |
1 file changed, 155 insertions, 143 deletions
diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index 8fe46ee920a0..8e251e719390 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -8,6 +8,9 @@
 #include <linux/pm_runtime.h>
 
 #include "gt/intel_engine.h"
+#include "gt/intel_engine_pm.h"
+#include "gt/intel_engine_user.h"
+#include "gt/intel_gt_pm.h"
 
 #include "i915_drv.h"
 #include "i915_pmu.h"
@@ -74,8 +77,9 @@ static unsigned int event_enabled_bit(struct perf_event *event)
         return config_enabled_bit(event->attr.config);
 }
 
-static bool pmu_needs_timer(struct drm_i915_private *i915, bool gpu_active)
+static bool pmu_needs_timer(struct i915_pmu *pmu, bool gpu_active)
 {
+        struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);
         u64 enable;
 
         /*
@@ -83,7 +87,7 @@ static bool pmu_needs_timer(struct drm_i915_private *i915, bool gpu_active)
          *
          * We start with a bitmask of all currently enabled events.
          */
-        enable = i915->pmu.enable;
+        enable = pmu->enable;
 
         /*
          * Mask out all the ones which do not need the timer, or in
@@ -102,10 +106,8 @@ static bool pmu_needs_timer(struct drm_i915_private *i915, bool gpu_active)
         /*
          * Also there is software busyness tracking available we do not
          * need the timer for I915_SAMPLE_BUSY counter.
-         *
-         * Use RCS as proxy for all engines.
          */
-        else if (intel_engine_supports_stats(i915->engine[RCS0]))
+        else if (i915->caps.scheduler & I915_SCHEDULER_CAP_ENGINE_BUSY_STATS)
                 enable &= ~BIT(I915_SAMPLE_BUSY);
 
         /*
@@ -116,24 +118,26 @@ static bool pmu_needs_timer(struct drm_i915_private *i915, bool gpu_active)
 
 void i915_pmu_gt_parked(struct drm_i915_private *i915)
 {
-        if (!i915->pmu.base.event_init)
+        struct i915_pmu *pmu = &i915->pmu;
+
+        if (!pmu->base.event_init)
                 return;
 
-        spin_lock_irq(&i915->pmu.lock);
+        spin_lock_irq(&pmu->lock);
         /*
          * Signal sampling timer to stop if only engine events are enabled and
          * GPU went idle.
          */
-        i915->pmu.timer_enabled = pmu_needs_timer(i915, false);
-        spin_unlock_irq(&i915->pmu.lock);
+        pmu->timer_enabled = pmu_needs_timer(pmu, false);
+        spin_unlock_irq(&pmu->lock);
 }
 
-static void __i915_pmu_maybe_start_timer(struct drm_i915_private *i915)
+static void __i915_pmu_maybe_start_timer(struct i915_pmu *pmu)
 {
-        if (!i915->pmu.timer_enabled && pmu_needs_timer(i915, true)) {
-                i915->pmu.timer_enabled = true;
-                i915->pmu.timer_last = ktime_get();
-                hrtimer_start_range_ns(&i915->pmu.timer,
+        if (!pmu->timer_enabled && pmu_needs_timer(pmu, true)) {
+                pmu->timer_enabled = true;
+                pmu->timer_last = ktime_get();
+                hrtimer_start_range_ns(&pmu->timer,
                                        ns_to_ktime(PERIOD), 0,
                                        HRTIMER_MODE_REL_PINNED);
         }
@@ -141,15 +145,17 @@ static void __i915_pmu_maybe_start_timer(struct drm_i915_private *i915)
 
 void i915_pmu_gt_unparked(struct drm_i915_private *i915)
 {
-        if (!i915->pmu.base.event_init)
+        struct i915_pmu *pmu = &i915->pmu;
+
+        if (!pmu->base.event_init)
                 return;
 
-        spin_lock_irq(&i915->pmu.lock);
+        spin_lock_irq(&pmu->lock);
         /*
          * Re-enable sampling timer when GPU goes active.
          */
-        __i915_pmu_maybe_start_timer(i915);
-        spin_unlock_irq(&i915->pmu.lock);
+        __i915_pmu_maybe_start_timer(pmu);
+        spin_unlock_irq(&pmu->lock);
 }
 
 static void
@@ -159,32 +165,30 @@ add_sample(struct i915_pmu_sample *sample, u32 val)
 }
 
 static void
-engines_sample(struct drm_i915_private *dev_priv, unsigned int period_ns)
+engines_sample(struct intel_gt *gt, unsigned int period_ns)
 {
+        struct drm_i915_private *i915 = gt->i915;
         struct intel_engine_cs *engine;
         enum intel_engine_id id;
-        intel_wakeref_t wakeref;
-        unsigned long flags;
 
-        if ((dev_priv->pmu.enable & ENGINE_SAMPLE_MASK) == 0)
+        if ((i915->pmu.enable & ENGINE_SAMPLE_MASK) == 0)
                 return;
 
-        wakeref = 0;
-        if (READ_ONCE(dev_priv->gt.awake))
-                wakeref = intel_runtime_pm_get_if_in_use(&dev_priv->runtime_pm);
-        if (!wakeref)
-                return;
-
-        spin_lock_irqsave(&dev_priv->uncore.lock, flags);
-        for_each_engine(engine, dev_priv, id) {
+        for_each_engine(engine, i915, id) {
                 struct intel_engine_pmu *pmu = &engine->pmu;
+                unsigned long flags;
                 bool busy;
                 u32 val;
 
-                val = I915_READ_FW(RING_CTL(engine->mmio_base));
-                if (val == 0) /* powerwell off => engine idle */
+                if (!intel_engine_pm_get_if_awake(engine))
                         continue;
 
+                spin_lock_irqsave(&engine->uncore->lock, flags);
+
+                val = ENGINE_READ_FW(engine, RING_CTL);
+                if (val == 0) /* powerwell off => engine idle */
+                        goto skip;
+
                 if (val & RING_WAIT)
                         add_sample(&pmu->sample[I915_SAMPLE_WAIT], period_ns);
                 if (val & RING_WAIT_SEMAPHORE)
@@ -199,15 +203,16 @@ engines_sample(struct drm_i915_private *dev_priv, unsigned int period_ns)
                  */
                 busy = val & (RING_WAIT_SEMAPHORE | RING_WAIT);
                 if (!busy) {
-                        val = I915_READ_FW(RING_MI_MODE(engine->mmio_base));
+                        val = ENGINE_READ_FW(engine, RING_MI_MODE);
                         busy = !(val & MODE_IDLE);
                 }
 
                 if (busy)
                         add_sample(&pmu->sample[I915_SAMPLE_BUSY], period_ns);
-        }
-        spin_unlock_irqrestore(&dev_priv->uncore.lock, flags);
-        intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
+skip:
+                spin_unlock_irqrestore(&engine->uncore->lock, flags);
+                intel_engine_pm_put(engine);
+        }
 }
 
 static void
@@ -217,34 +222,30 @@ add_sample_mult(struct i915_pmu_sample *sample, u32 val, u32 mul)
 }
 
 static void
-frequency_sample(struct drm_i915_private *dev_priv, unsigned int period_ns)
+frequency_sample(struct intel_gt *gt, unsigned int period_ns)
 {
-        if (dev_priv->pmu.enable &
-            config_enabled_mask(I915_PMU_ACTUAL_FREQUENCY)) {
-                u32 val;
+        struct drm_i915_private *i915 = gt->i915;
+        struct intel_uncore *uncore = gt->uncore;
+        struct i915_pmu *pmu = &i915->pmu;
 
-                val = dev_priv->gt_pm.rps.cur_freq;
-                if (dev_priv->gt.awake) {
-                        intel_wakeref_t wakeref;
+        if (pmu->enable & config_enabled_mask(I915_PMU_ACTUAL_FREQUENCY)) {
+                u32 val;
 
-                        with_intel_runtime_pm_if_in_use(&dev_priv->runtime_pm,
-                                                        wakeref) {
-                                val = intel_uncore_read_notrace(&dev_priv->uncore,
-                                                                GEN6_RPSTAT1);
-                                val = intel_get_cagf(dev_priv, val);
-                        }
+                val = i915->gt_pm.rps.cur_freq;
+                if (intel_gt_pm_get_if_awake(gt)) {
+                        val = intel_uncore_read_notrace(uncore, GEN6_RPSTAT1);
+                        val = intel_get_cagf(i915, val);
+                        intel_gt_pm_put(gt);
                 }
 
-                add_sample_mult(&dev_priv->pmu.sample[__I915_SAMPLE_FREQ_ACT],
-                                intel_gpu_freq(dev_priv, val),
+                add_sample_mult(&pmu->sample[__I915_SAMPLE_FREQ_ACT],
+                                intel_gpu_freq(i915, val),
                                 period_ns / 1000);
         }
 
-        if (dev_priv->pmu.enable &
-            config_enabled_mask(I915_PMU_REQUESTED_FREQUENCY)) {
-                add_sample_mult(&dev_priv->pmu.sample[__I915_SAMPLE_FREQ_REQ],
-                                intel_gpu_freq(dev_priv,
-                                               dev_priv->gt_pm.rps.cur_freq),
+        if (pmu->enable & config_enabled_mask(I915_PMU_REQUESTED_FREQUENCY)) {
+                add_sample_mult(&pmu->sample[__I915_SAMPLE_FREQ_REQ],
+                                intel_gpu_freq(i915, i915->gt_pm.rps.cur_freq),
                                 period_ns / 1000);
         }
 }
@@ -253,15 +254,17 @@ static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
 {
         struct drm_i915_private *i915 =
                 container_of(hrtimer, struct drm_i915_private, pmu.timer);
+        struct i915_pmu *pmu = &i915->pmu;
+        struct intel_gt *gt = &i915->gt;
         unsigned int period_ns;
         ktime_t now;
 
-        if (!READ_ONCE(i915->pmu.timer_enabled))
+        if (!READ_ONCE(pmu->timer_enabled))
                 return HRTIMER_NORESTART;
 
         now = ktime_get();
-        period_ns = ktime_to_ns(ktime_sub(now, i915->pmu.timer_last));
-        i915->pmu.timer_last = now;
+        period_ns = ktime_to_ns(ktime_sub(now, pmu->timer_last));
+        pmu->timer_last = now;
 
         /*
          * Strictly speaking the passed in period may not be 100% accurate for
@@ -269,8 +272,8 @@ static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
          * grabbing the forcewake. However the potential error from timer call-
          * back delay greatly dominates this so we keep it simple.
          */
-        engines_sample(i915, period_ns);
-        frequency_sample(i915, period_ns);
+        engines_sample(gt, period_ns);
+        frequency_sample(gt, period_ns);
 
         hrtimer_forward(hrtimer, now, ns_to_ktime(PERIOD));
 
@@ -423,8 +426,9 @@ static int i915_pmu_event_init(struct perf_event *event)
         return 0;
 }
 
-static u64 __get_rc6(struct drm_i915_private *i915)
+static u64 __get_rc6(struct intel_gt *gt)
 {
+        struct drm_i915_private *i915 = gt->i915;
         u64 val;
 
         val = intel_rc6_residency_ns(i915,
@@ -441,17 +445,19 @@ static u64 __get_rc6(struct drm_i915_private *i915)
         return val;
 }
 
-static u64 get_rc6(struct drm_i915_private *i915)
+static u64 get_rc6(struct intel_gt *gt)
 {
 #if IS_ENABLED(CONFIG_PM)
+        struct drm_i915_private *i915 = gt->i915;
         struct intel_runtime_pm *rpm = &i915->runtime_pm;
+        struct i915_pmu *pmu = &i915->pmu;
         intel_wakeref_t wakeref;
         unsigned long flags;
         u64 val;
 
         wakeref = intel_runtime_pm_get_if_in_use(rpm);
         if (wakeref) {
-                val = __get_rc6(i915);
+                val = __get_rc6(gt);
                 intel_runtime_pm_put(rpm, wakeref);
 
                 /*
@@ -460,16 +466,16 @@ static u64 get_rc6(struct drm_i915_private *i915)
                  * previously.
                  */
 
-                spin_lock_irqsave(&i915->pmu.lock, flags);
+                spin_lock_irqsave(&pmu->lock, flags);
 
-                if (val >= i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur) {
-                        i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur = 0;
-                        i915->pmu.sample[__I915_SAMPLE_RC6].cur = val;
+                if (val >= pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur) {
+                        pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur = 0;
+                        pmu->sample[__I915_SAMPLE_RC6].cur = val;
                 } else {
-                        val = i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur;
+                        val = pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur;
                 }
 
-                spin_unlock_irqrestore(&i915->pmu.lock, flags);
+                spin_unlock_irqrestore(&pmu->lock, flags);
         } else {
                 struct device *kdev = rpm->kdev;
 
@@ -480,7 +486,7 @@ static u64 get_rc6(struct drm_i915_private *i915)
                  * on top of the last known real value, as the approximated RC6
                  * counter value.
                  */
-                spin_lock_irqsave(&i915->pmu.lock, flags);
+                spin_lock_irqsave(&pmu->lock, flags);
 
                 /*
                  * After the above branch intel_runtime_pm_get_if_in_use failed
@@ -496,25 +502,25 @@ static u64 get_rc6(struct drm_i915_private *i915)
                 if (pm_runtime_status_suspended(kdev)) {
                         val = pm_runtime_suspended_time(kdev);
 
-                        if (!i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur)
-                                i915->pmu.suspended_time_last = val;
+                        if (!pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur)
+                                pmu->suspended_time_last = val;
 
-                        val -= i915->pmu.suspended_time_last;
-                        val += i915->pmu.sample[__I915_SAMPLE_RC6].cur;
+                        val -= pmu->suspended_time_last;
+                        val += pmu->sample[__I915_SAMPLE_RC6].cur;
 
-                        i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur = val;
-                } else if (i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur) {
-                        val = i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur;
+                        pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur = val;
+                } else if (pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur) {
+                        val = pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur;
                 } else {
-                        val = i915->pmu.sample[__I915_SAMPLE_RC6].cur;
+                        val = pmu->sample[__I915_SAMPLE_RC6].cur;
                 }
 
-                spin_unlock_irqrestore(&i915->pmu.lock, flags);
+                spin_unlock_irqrestore(&pmu->lock, flags);
         }
 
         return val;
 #else
-        return __get_rc6(i915);
+        return __get_rc6(gt);
 #endif
 }
 
@@ -522,6 +528,7 @@ static u64 __i915_pmu_event_read(struct perf_event *event)
 {
         struct drm_i915_private *i915 =
                 container_of(event->pmu, typeof(*i915), pmu.base);
+        struct i915_pmu *pmu = &i915->pmu;
         u64 val = 0;
 
         if (is_engine_event(event)) {
@@ -544,19 +551,19 @@ static u64 __i915_pmu_event_read(struct perf_event *event)
                 switch (event->attr.config) {
                 case I915_PMU_ACTUAL_FREQUENCY:
                         val =
-                           div_u64(i915->pmu.sample[__I915_SAMPLE_FREQ_ACT].cur,
+                           div_u64(pmu->sample[__I915_SAMPLE_FREQ_ACT].cur,
                                    USEC_PER_SEC /* to MHz */);
                         break;
                 case I915_PMU_REQUESTED_FREQUENCY:
                         val =
-                           div_u64(i915->pmu.sample[__I915_SAMPLE_FREQ_REQ].cur,
+                           div_u64(pmu->sample[__I915_SAMPLE_FREQ_REQ].cur,
                                    USEC_PER_SEC /* to MHz */);
                         break;
                 case I915_PMU_INTERRUPTS:
                         val = count_interrupts(i915);
                         break;
                 case I915_PMU_RC6_RESIDENCY:
-                        val = get_rc6(i915);
+                        val = get_rc6(&i915->gt);
                         break;
                 }
         }
@@ -584,24 +591,25 @@ static void i915_pmu_enable(struct perf_event *event)
         struct drm_i915_private *i915 =
                 container_of(event->pmu, typeof(*i915), pmu.base);
         unsigned int bit = event_enabled_bit(event);
+        struct i915_pmu *pmu = &i915->pmu;
         unsigned long flags;
 
-        spin_lock_irqsave(&i915->pmu.lock, flags);
+        spin_lock_irqsave(&pmu->lock, flags);
 
         /*
          * Update the bitmask of enabled events and increment
         * the event reference counter.
          */
-        BUILD_BUG_ON(ARRAY_SIZE(i915->pmu.enable_count) != I915_PMU_MASK_BITS);
-        GEM_BUG_ON(bit >= ARRAY_SIZE(i915->pmu.enable_count));
-        GEM_BUG_ON(i915->pmu.enable_count[bit] == ~0);
-        i915->pmu.enable |= BIT_ULL(bit);
-        i915->pmu.enable_count[bit]++;
+        BUILD_BUG_ON(ARRAY_SIZE(pmu->enable_count) != I915_PMU_MASK_BITS);
+        GEM_BUG_ON(bit >= ARRAY_SIZE(pmu->enable_count));
+        GEM_BUG_ON(pmu->enable_count[bit] == ~0);
+        pmu->enable |= BIT_ULL(bit);
+        pmu->enable_count[bit]++;
 
         /*
          * Start the sampling timer if needed and not already enabled.
          */
-        __i915_pmu_maybe_start_timer(i915);
+        __i915_pmu_maybe_start_timer(pmu);
 
         /*
          * For per-engine events the bitmask and reference counting
@@ -627,7 +635,7 @@ static void i915_pmu_enable(struct perf_event *event)
                 engine->pmu.enable_count[sample]++;
         }
 
-        spin_unlock_irqrestore(&i915->pmu.lock, flags);
+        spin_unlock_irqrestore(&pmu->lock, flags);
 
         /*
          * Store the current counter value so we can report the correct delta
@@ -642,9 +650,10 @@ static void i915_pmu_disable(struct perf_event *event)
         struct drm_i915_private *i915 =
                 container_of(event->pmu, typeof(*i915), pmu.base);
         unsigned int bit = event_enabled_bit(event);
+        struct i915_pmu *pmu = &i915->pmu;
         unsigned long flags;
 
-        spin_lock_irqsave(&i915->pmu.lock, flags);
+        spin_lock_irqsave(&pmu->lock, flags);
 
         if (is_engine_event(event)) {
                 u8 sample = engine_event_sample(event);
@@ -666,18 +675,18 @@ static void i915_pmu_disable(struct perf_event *event)
                         engine->pmu.enable &= ~BIT(sample);
         }
 
-        GEM_BUG_ON(bit >= ARRAY_SIZE(i915->pmu.enable_count));
-        GEM_BUG_ON(i915->pmu.enable_count[bit] == 0);
+        GEM_BUG_ON(bit >= ARRAY_SIZE(pmu->enable_count));
+        GEM_BUG_ON(pmu->enable_count[bit] == 0);
         /*
          * Decrement the reference count and clear the enabled
          * bitmask when the last listener on an event goes away.
          */
-        if (--i915->pmu.enable_count[bit] == 0) {
-                i915->pmu.enable &= ~BIT_ULL(bit);
-                i915->pmu.timer_enabled &= pmu_needs_timer(i915, true);
+        if (--pmu->enable_count[bit] == 0) {
+                pmu->enable &= ~BIT_ULL(bit);
+                pmu->timer_enabled &= pmu_needs_timer(pmu, true);
         }
 
-        spin_unlock_irqrestore(&i915->pmu.lock, flags);
+        spin_unlock_irqrestore(&pmu->lock, flags);
 }
 
 static void i915_pmu_event_start(struct perf_event *event, int flags)
@@ -826,8 +835,9 @@ add_pmu_attr(struct perf_pmu_events_attr *attr, const char *name,
 }
 
 static struct attribute **
-create_event_attributes(struct drm_i915_private *i915)
+create_event_attributes(struct i915_pmu *pmu)
 {
+        struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);
         static const struct {
                 u64 config;
                 const char *name;
@@ -851,7 +861,6 @@ create_event_attributes(struct drm_i915_private *i915)
         struct i915_ext_attribute *i915_attr = NULL, *i915_iter;
         struct attribute **attr = NULL, **attr_iter;
         struct intel_engine_cs *engine;
-        enum intel_engine_id id;
         unsigned int i;
 
         /* Count how many counters we will be exposing. */
@@ -860,7 +869,7 @@ create_event_attributes(struct drm_i915_private *i915)
                         count++;
         }
 
-        for_each_engine(engine, i915, id) {
+        for_each_uabi_engine(engine, i915) {
                 for (i = 0; i < ARRAY_SIZE(engine_events); i++) {
                         if (!engine_event_status(engine,
                                                  engine_events[i].sample))
@@ -911,7 +920,7 @@ create_event_attributes(struct drm_i915_private *i915)
         }
 
         /* Initialize supported engine counters. */
-        for_each_engine(engine, i915, id) {
+        for_each_uabi_engine(engine, i915) {
                 for (i = 0; i < ARRAY_SIZE(engine_events); i++) {
                         char *str;
 
@@ -928,7 +937,7 @@ create_event_attributes(struct drm_i915_private *i915)
                         i915_iter =
                                 add_i915_attr(i915_iter, str,
                                               __I915_PMU_ENGINE(engine->uabi_class,
-                                                                engine->instance,
+                                                                engine->uabi_instance,
                                                                 engine_events[i].sample));
 
                         str = kasprintf(GFP_KERNEL, "%s-%s.unit",
@@ -941,8 +950,8 @@ create_event_attributes(struct drm_i915_private *i915)
                 }
         }
 
-        i915->pmu.i915_attr = i915_attr;
-        i915->pmu.pmu_attr = pmu_attr;
+        pmu->i915_attr = i915_attr;
+        pmu->pmu_attr = pmu_attr;
 
         return attr;
 
@@ -958,7 +967,7 @@ err_alloc:
         return NULL;
 }
 
-static void free_event_attributes(struct drm_i915_private *i915)
+static void free_event_attributes(struct i915_pmu *pmu)
 {
         struct attribute **attr_iter = i915_pmu_events_attr_group.attrs;
 
@@ -966,12 +975,12 @@ static void free_event_attributes(struct drm_i915_private *i915)
                 kfree((*attr_iter)->name);
 
         kfree(i915_pmu_events_attr_group.attrs);
-        kfree(i915->pmu.i915_attr);
-        kfree(i915->pmu.pmu_attr);
+        kfree(pmu->i915_attr);
+        kfree(pmu->pmu_attr);
 
         i915_pmu_events_attr_group.attrs = NULL;
-        i915->pmu.i915_attr = NULL;
-        i915->pmu.pmu_attr = NULL;
+        pmu->i915_attr = NULL;
+        pmu->pmu_attr = NULL;
 }
 
 static int i915_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
@@ -1008,7 +1017,7 @@ static int i915_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
 
 static enum cpuhp_state cpuhp_slot = CPUHP_INVALID;
 
-static int i915_pmu_register_cpuhp_state(struct drm_i915_private *i915)
+static int i915_pmu_register_cpuhp_state(struct i915_pmu *pmu)
 {
         enum cpuhp_state slot;
         int ret;
@@ -1021,7 +1030,7 @@ static int i915_pmu_register_cpuhp_state(struct drm_i915_private *i915)
                 return ret;
 
         slot = ret;
-        ret = cpuhp_state_add_instance(slot, &i915->pmu.node);
+        ret = cpuhp_state_add_instance(slot, &pmu->node);
         if (ret) {
                 cpuhp_remove_multi_state(slot);
                 return ret;
@@ -1031,72 +1040,75 @@ static int i915_pmu_register_cpuhp_state(struct drm_i915_private *i915)
         return 0;
 }
 
-static void i915_pmu_unregister_cpuhp_state(struct drm_i915_private *i915)
+static void i915_pmu_unregister_cpuhp_state(struct i915_pmu *pmu)
 {
         WARN_ON(cpuhp_slot == CPUHP_INVALID);
-        WARN_ON(cpuhp_state_remove_instance(cpuhp_slot, &i915->pmu.node));
+        WARN_ON(cpuhp_state_remove_instance(cpuhp_slot, &pmu->node));
         cpuhp_remove_multi_state(cpuhp_slot);
 }
 
 void i915_pmu_register(struct drm_i915_private *i915)
 {
+        struct i915_pmu *pmu = &i915->pmu;
         int ret;
 
         if (INTEL_GEN(i915) <= 2) {
-                DRM_INFO("PMU not supported for this GPU.");
+                dev_info(i915->drm.dev, "PMU not supported for this GPU.");
                 return;
         }
 
-        i915_pmu_events_attr_group.attrs = create_event_attributes(i915);
+        i915_pmu_events_attr_group.attrs = create_event_attributes(pmu);
         if (!i915_pmu_events_attr_group.attrs) {
                 ret = -ENOMEM;
                 goto err;
         }
 
-        i915->pmu.base.attr_groups = i915_pmu_attr_groups;
-        i915->pmu.base.task_ctx_nr = perf_invalid_context;
-        i915->pmu.base.event_init = i915_pmu_event_init;
-        i915->pmu.base.add = i915_pmu_event_add;
-        i915->pmu.base.del = i915_pmu_event_del;
-        i915->pmu.base.start = i915_pmu_event_start;
-        i915->pmu.base.stop = i915_pmu_event_stop;
-        i915->pmu.base.read = i915_pmu_event_read;
-        i915->pmu.base.event_idx = i915_pmu_event_event_idx;
-
-        spin_lock_init(&i915->pmu.lock);
-        hrtimer_init(&i915->pmu.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-        i915->pmu.timer.function = i915_sample;
-
-        ret = perf_pmu_register(&i915->pmu.base, "i915", -1);
+        pmu->base.attr_groups = i915_pmu_attr_groups;
+        pmu->base.task_ctx_nr = perf_invalid_context;
+        pmu->base.event_init = i915_pmu_event_init;
+        pmu->base.add = i915_pmu_event_add;
+        pmu->base.del = i915_pmu_event_del;
+        pmu->base.start = i915_pmu_event_start;
+        pmu->base.stop = i915_pmu_event_stop;
+        pmu->base.read = i915_pmu_event_read;
+        pmu->base.event_idx = i915_pmu_event_event_idx;
+
+        spin_lock_init(&pmu->lock);
+        hrtimer_init(&pmu->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+        pmu->timer.function = i915_sample;
+
+        ret = perf_pmu_register(&pmu->base, "i915", -1);
         if (ret)
                 goto err;
 
-        ret = i915_pmu_register_cpuhp_state(i915);
+        ret = i915_pmu_register_cpuhp_state(pmu);
         if (ret)
                 goto err_unreg;
 
         return;
 
err_unreg:
-        perf_pmu_unregister(&i915->pmu.base);
+        perf_pmu_unregister(&pmu->base);
err:
-        i915->pmu.base.event_init = NULL;
-        free_event_attributes(i915);
+        pmu->base.event_init = NULL;
+        free_event_attributes(pmu);
         DRM_NOTE("Failed to register PMU! (err=%d)\n", ret);
 }
 
 void i915_pmu_unregister(struct drm_i915_private *i915)
 {
-        if (!i915->pmu.base.event_init)
+        struct i915_pmu *pmu = &i915->pmu;
+
+        if (!pmu->base.event_init)
                 return;
 
-        WARN_ON(i915->pmu.enable);
+        WARN_ON(pmu->enable);
 
-        hrtimer_cancel(&i915->pmu.timer);
+        hrtimer_cancel(&pmu->timer);
 
-        i915_pmu_unregister_cpuhp_state(i915);
+        i915_pmu_unregister_cpuhp_state(pmu);
 
-        perf_pmu_unregister(&i915->pmu.base);
-        i915->pmu.base.event_init = NULL;
-        free_event_attributes(i915);
+        perf_pmu_unregister(&pmu->base);
+        pmu->base.event_init = NULL;
+        free_event_attributes(pmu);
 }
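The recurring pattern in the diff above is that the PMU helpers now take the embedded struct i915_pmu (or a struct intel_gt) directly, and recover the owning drm_i915_private with container_of() only where it is still needed, instead of being handed the whole device struct. Below is a minimal, standalone C sketch of that back-pointer pattern, assuming nothing beyond standard C: the field layout, the gen field, and the timer heuristic are invented for illustration and are not the kernel's actual definitions.

```c
/* Standalone illustration of the container_of back-pointer pattern. */
#include <stddef.h>
#include <stdio.h>

/* Same idea as the kernel macro: subtract the member offset to get the parent. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct i915_pmu {
	unsigned long long enable;	/* bitmask of enabled events */
};

struct drm_i915_private {
	int gen;			/* hypothetical device field */
	struct i915_pmu pmu;		/* PMU state embedded in the device */
};

/* PMU-centric helper: takes only the PMU, derives the device when required. */
static int pmu_needs_timer(struct i915_pmu *pmu)
{
	struct drm_i915_private *i915 =
		container_of(pmu, struct drm_i915_private, pmu);

	/* Invented rule: sample with a timer only when events are enabled on old gens. */
	return pmu->enable != 0 && i915->gen < 8;
}

int main(void)
{
	struct drm_i915_private dev = { .gen = 7, .pmu = { .enable = 0x1 } };

	printf("needs timer: %d\n", pmu_needs_timer(&dev.pmu));
	return 0;
}
```

Passing the narrower struct makes each helper explicit about which state it actually touches, while container_of() keeps an escape hatch back to the containing device for the few places that still need it.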