path: root/llvm/test/tools/llvm-mca/X86/BdVer2/load-throughput.s
* [MCA] Show aggregate over Average Wait times for the whole snippet (PR43219) (Roman Lebedev, 2019-10-10, 1 file, -0/+7)
Summary:
As discussed in https://bugs.llvm.org/show_bug.cgi?id=43219, I believe it may be
somewhat useful to show *some* aggregates over the whole sea of statistics provided.

Example:
```
Average Wait times (based on the timeline view):
[0]: Executions
[1]: Average time spent waiting in a scheduler's queue
[2]: Average time spent waiting in a scheduler's queue while ready
[3]: Average time elapsed from WB until retire stage

      [0]    [1]    [2]    [3]
0.     3     1.0    1.0    4.7   vmulps  %xmm0, %xmm1, %xmm2
1.     3     2.7    0.0    2.3   vhaddps %xmm2, %xmm2, %xmm3
2.     3     6.0    0.0    0.0   vhaddps %xmm3, %xmm3, %xmm4
       3     3.2    0.3    2.3   <total>
```
I.e. we average the averages.

Reviewers: andreadb, mattd, RKSimon
Reviewed By: andreadb
Subscribers: gbedwell, arphaman, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68714
llvm-svn: 374361

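As a quick check on the aggregation (a worked example added here for clarity; it is not part of the original commit message), each entry in the <total> row is simply the arithmetic mean of the corresponding per-instruction column:

```
[1]: \frac{1.0 + 2.7 + 6.0}{3} \approx 3.2 \qquad
[2]: \frac{1.0 + 0.0 + 0.0}{3} \approx 0.3 \qquad
[3]: \frac{4.7 + 2.3 + 0.0}{3} \approx 2.3
```
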
* [MCA][LSUnit] Track loads and stores until retirement. (Andrea Di Biagio, 2019-10-08, 1 file, -22/+22)
Before this patch, loads and stores were only tracked by their corresponding queues
in the LSUnit from dispatch until execute stage. In practice we should be more
conservative and assume that memory opcodes leave their queues at retirement stage.

Basically, loads should leave the load queue only when they have completed and
delivered their data. We conservatively assume that a load is completed when it is
retired. Stores should be tracked by the store queue from dispatch until retirement.
In practice, stores can only leave the store queue if their data can be written to
the data cache.

This is mostly a mechanical change. With this patch, the retire stage notifies the
LSUnit when a memory instruction is retired. That triggers the release of LDQ/STQ
entries. The only visible change is in memory tests for the bdver2 model, because
bdver2 is the only model that defines the load/store queue size.

This patch partially addresses PR39830.

Differential Revision: https://reviews.llvm.org/D68266
llvm-svn: 374034

* [X86] AMD Piledriver (BdVer2): major cleanup (mainly inverse throughput) (Roman Lebedev, 2019-05-09, 1 file, -198/+201)
I've started this cleanup several times now, but got sidetracked elsewhere, e.g. by
llvm-exegesis problems. Not this time, finally!

This is mainly cleaning up the inverse throughput values, and a few latencies/uops,
based on the llvm-exegesis measured values. It is not complete by any means; there's
certainly more cleanup to be done.

The performance numbers (I've only checked with the RawSpeed benchmark) aren't really
surprising: overall this *slightly* (< -1%) improves perf.

llvm-svn: 360341

* [llvm-mca][scheduler-stats] Print issued micro opcodes per cycle. NFCI (Andrea Di Biagio, 2019-04-08, 1 file, -8/+8)
It makes more sense to print out the number of micro opcodes that are issued every
cycle rather than the number of instructions issued per cycle. This behavior is also
consistent with the dispatch-stats: numbers from the two views can now be easily
compared.

llvm-svn: 357919

* [llvm-mca][MC] Add the ability to declare which processor resources model load/store queues (PR36666) (Andrea Di Biagio, 2018-11-29, 1 file, -45/+45)
This patch adds the ability to specify via tablegen which processor resources are
load/store queue resources.

A new tablegen class named MemoryQueue can be optionally used to mark resources that
model load/store queues. Information about the load/store queue is collected at
'CodeGenSchedule' stage, and analyzed by the 'SubtargetEmitter' to initialize two new
fields in struct MCExtraProcessorInfo named `LoadQueueID` and `StoreQueueID`. Those
two fields are identifiers for buffered resources used to describe the load queue and
the store queue. Field `BufferSize` is interpreted as the number of entries in the
queue, while the number of units is a throughput indicator (i.e. the number of
available pickers for loads/stores).

At construction time, LSUnit in llvm-mca checks for the presence of extra processor
information (i.e. MCExtraProcessorInfo) in the scheduling model. If that information
is available, and fields LoadQueueID and StoreQueueID are set to a value different
from zero (zero being the invalid processor resource index), then LSUnit initializes
its LoadQueue/StoreQueue based on the BufferSize value declared by the two processor
resources.

With this patch, we more accurately track dynamic dispatch stalls caused by the lack
of LS tokens (i.e. load/store queue full). This is also shown by the differences in
two BdVer2 tests: stalls that were previously classified as generic SCHEDULER FULL
stalls are now correctly classified as either "load queue full" or "store queue full".

About the differences in the -scheduler-stats view: those differences are expected,
because entries in the load/store queue are not released at instruction issue stage;
instead, they are released at instruction executed stage. This is the main reason
why, for the modified tests, the load/store queues get full before PdEx is full.

Differential Revision: https://reviews.llvm.org/D54957
llvm-svn: 347857

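To make the MemoryQueue mechanism above concrete, here is a minimal, hypothetical TableGen sketch. The resource names, unit counts and queue sizes below are illustrative assumptions, not the actual definitions of the in-tree BdVer2 model (X86ScheduleBdVer2.td); LoadQueue and StoreQueue are, as I understand it, thin helper classes built on top of MemoryQueue.

```
// Buffered resources describing the two queues (illustrative values):
//   BufferSize      = number of entries in the queue,
//   number of units = throughput indicator (available load/store pickers).
def MyLDQ : ProcResource<2> { let BufferSize = 40; }  // hypothetical load queue
def MySTQ : ProcResource<1> { let BufferSize = 24; }  // hypothetical store queue

// Mark the resources as load/store queues; SubtargetEmitter uses this to set
// MCExtraProcessorInfo::LoadQueueID and MCExtraProcessorInfo::StoreQueueID.
def MyLoadQueue  : LoadQueue<MyLDQ>;
def MyStoreQueue : StoreQueue<MySTQ>;
```
With definitions like these in place, llvm-mca's LSUnit would size its load/store queues from the two BufferSize values, as described in the commit message above.
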
* [llvm-mca] pass -dispatch-stats flag to a couple of tests. NFC (Andrea Di Biagio, 2018-11-27, 1 file, -1/+98)
This change is in preparation for a patch that fixes PR36666.

llvm-mca currently doesn't know if a buffered processor resource describes a load or
store queue. So, any dynamic dispatch stall caused by the lack of load/store queue
entries is normally reported as a generic SCHEDULER stall. See for example the
-dispatch-stats output from the two tests modified by this patch.

In future, processor models will be able to tag processor resources that are used to
describe load/store queues. That information would then be used by llvm-mca to
correctly classify dynamic dispatch stalls caused by the lack of tokens in the LS.

llvm-svn: 347662

* [llvm-mca] Correctly update the resource strategy for processor resources with multiple units (Andrea Di Biagio, 2018-11-12, 1 file, -14/+14)
When looking at the tests committed by Roman at r346587, I noticed that the numbers
reported by the resource pressure view for PdAGU01 were wrong. In particular,
according to the auto-generated CHECK lines in tests memcpy-like-test.s and
store-throughput.s, resource pressure for PdAGU01 was not uniformly distributed
among the two AGEN pipes.

It turns out that pressure was not correctly distributed because the "resource
selection strategy" object associated with PdAGU01 was not correctly updated when an
AGEN pipe was used. As a result, llvm-mca was not simulating a round-robin pipeline
allocation for PdAGU01; instead, PdAGU1 was always prioritized over PdAGU0.

This patch fixes the issue: now processor resource strategy objects for resources
declaring multiple units are correctly notified in the event of "resource used".

llvm-svn: 346650

* [X86][BdVer2] Fix loads/stores throughput for Piledriver (PR39465) (Roman Lebedev, 2018-11-10, 1 file, -77/+98)
There are two AGU units; per cycle there can be either two loads, or a load and a
store, but not two stores, nor two loads and a store. Additionally, loads shouldn't
affect the store scheduler and vice versa (but they *should* affect the PdEX
scheduler).

Required rL346545.

Fixes https://bugs.llvm.org/show_bug.cgi?id=39465

llvm-svn: 346587

* [NFC][BdVer2] Load and store throughput tests: also check sched stats (PR39465) (Roman Lebedev, 2018-11-08, 1 file, -1/+120)
As noted by Andrea Di Biagio in https://bugs.llvm.org/show_bug.cgi?id=39465, both the
loads and the stores currently occupy both the store and the load queues. This is
clearly wrong.

llvm-svn: 346425

* [NFC][BdVer2] Tests for load and store throughput (PR39465) (Roman Lebedev, 2018-11-08, 1 file, -0/+604)
During review it was noted that while it appears that the Piledriver can do two
[consecutive] loads per cycle, it can only do one store per cycle. It was suggested
that the sched model models that incorrectly, but it was decided to fix this
afterwards.

These tests show that the two consecutive loads are modelled correctly, while the
one-store-per-cycle limit does not appear to be modelled correctly. Unless I'm
missing the point.

https://bugs.llvm.org/show_bug.cgi?id=39465

llvm-svn: 346404