path: root/llvm/lib/MCA
Commit message (Author, Date, Files changed, Lines -removed/+added)
* [NFC] Fixes -Wrange-loop-analysis warnings (Mark de Wever, 2020-01-01, 2 files, -2/+3)
  This avoids new warnings now that D68912 adds -Wrange-loop-analysis to -Wall.
  Differential Revision: https://reviews.llvm.org/D71857
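  The usual fix for this warning is to switch a by-value range-for variable to a
  (const) reference. A minimal, hypothetical illustration of the pattern, not
  the actual MCA code:

  ```
  #include <string>
  #include <vector>

  // -Wrange-loop-analysis warns when the loop variable makes an unnecessary copy.
  void printAll(const std::vector<std::string> &Names) {
    // Warns: each iteration copies a std::string.
    //   for (const std::string Name : Names) { ... }

    // Fixed: bind by const reference, so no copy is made per iteration.
    for (const std::string &Name : Names)
      (void)Name; // use Name here
  }
  ```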
* [cmake] Explicitly mark libraries defined in lib/ as "Component Libraries" (Tom Stellard, 2019-11-21, 1 file, -1/+1)
  Summary:
  Most libraries are defined in the lib/ directory, but there are also a few
  libraries defined in tools/, e.g. libLLVM, libLTO. I'm defining "Component
  Libraries" as libraries defined in lib/ that may be included in libLLVM.so.
  Explicitly marking the libraries in lib/ as component libraries allows us to
  remove some fragile checks that attempt to differentiate between lib/
  libraries and tools/ libraries:

  1. In tools/llvm-shlib, because llvm_map_components_to_libnames(LIB_NAMES "all")
     returned a list of all libraries defined in the whole project, there was
     custom code needed to filter out libraries defined in tools/, none of which
     should be included in libLLVM.so. This code assumed that any library defined
     as static was from lib/ and everything else should be excluded. With this
     change, llvm_map_components_to_libnames(LIB_NAMES, "all") only returns
     libraries that have been added to the LLVM_COMPONENT_LIBS global cmake
     property, so this custom filtering logic can be removed. Doing this also
     fixes the build with BUILD_SHARED_LIBS=ON and LLVM_BUILD_LLVM_DYLIB=ON.

  2. There was some code in llvm_add_library that assumed that libraries defined
     in lib/ would not have LLVM_LINK_COMPONENTS or ARG_LINK_COMPONENTS set. This
     is only true because libraries defined in lib/ use LLVMBuild.txt and don't
     set these values. This code has been fixed now to check if the library has
     been explicitly marked as a component library, which should now make it
     easier to remove LLVMBuild at some point in the future.

  I have tested this patch on Windows, macOS and Linux with release builds and
  the following combinations of CMake options:
  - "" (No options)
  - -DLLVM_BUILD_LLVM_DYLIB=ON
  - -DLLVM_LINK_LLVM_DYLIB=ON
  - -DBUILD_SHARED_LIBS=ON
  - -DBUILD_SHARED_LIBS=ON -DLLVM_BUILD_LLVM_DYLIB=ON
  - -DBUILD_SHARED_LIBS=ON -DLLVM_LINK_LLVM_DYLIB=ON

  Reviewers: beanz, smeenai, compnerd, phosek
  Reviewed By: beanz
  Subscribers: wuzish, jholewinski, arsenm, dschuff, jyknight, dylanmckay,
  sdardis, nemanjai, jvesely, nhaehnle, mgorny, mehdi_amini, sbc100,
  jgravelle-google, hiraditya, aheejin, fedor.sergeev, asb, rbar, johnrusso,
  simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones,
  atanasyan, steven_wu, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith,
  PkmX, jocewei, jsji, dang, Jim, lenary, s.egerton, pzheng, sameer.abuasal,
  llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70179
* [MCA] Fix a spelling mistake in a comment. NFC (Greg Bedwell, 2019-10-27, 1 file, -1/+1)
* [MCA][LSUnit] Track loads and stores until retirement. (Andrea Di Biagio, 2019-10-08, 3 files, -8/+14)
  Before this patch, loads and stores were only tracked by their corresponding
  queues in the LSUnit from dispatch until the execute stage. In practice we
  should be more conservative and assume that memory opcodes leave their queues
  at the retirement stage.

  Basically, loads should leave the load queue only when they have completed and
  delivered their data. We conservatively assume that a load is completed when
  it is retired. Stores should be tracked by the store queue from dispatch until
  retirement. In practice, stores can only leave the store queue if their data
  can be written to the data cache.

  This is mostly a mechanical change. With this patch, the retire stage notifies
  the LSUnit when a memory instruction is retired. That triggers the release of
  LDQ/STQ entries. The only visible change is in memory tests for the bdver2
  model. That is because bdver2 is the only model that defines the load/store
  queue size.

  This patch partially addresses PR39830.
  Differential Revision: https://reviews.llvm.org/D68266
  llvm-svn: 374034
* [MCA] Use references to LSUnitBase in class Scheduler and add helper methods to acquire/release LS queue entries. NFCI (Andrea Di Biagio, 2019-09-30, 1 file, -4/+4)
  llvm-svn: 373236
* [Tblgen][MCA] Add the ability to mark groups as LoadQueue and StoreQueue. NFCI (Andrea Di Biagio, 2019-08-27, 1 file, -2/+2)
  Before this patch, users were not allowed to optionally mark processor
  resource groups as load/store queues. That is because tablegen class
  MemoryQueue was originally declared as expecting a ProcResource template
  argument (instead of a more generic ProcResourceKind).

  That was an oversight, since the original intention from D54957 was to let
  users mark any processor resource as either load or store queue. This patch
  adds the ability to use processor resource groups in MemoryQueue definitions.
  This is not a user visible change.

  Differential Revision: https://reviews.llvm.org/D66810
  llvm-svn: 370091
* [MCA] consistently use MCPhysReg instead of unsigned as register type. NFCI (Andrea Di Biagio, 2019-08-22, 4 files, -17/+15)
  llvm-svn: 369648
* [llvm] Migrate llvm::make_unique to std::make_unique (Jonas Devlieghere, 2019-08-15, 5 files, -16/+16)
  Now that we've moved to C++14, we no longer need the llvm::make_unique
  implementation from STLExtras.h. This patch is a mechanical replacement of
  (hopefully) all the llvm::make_unique instances across the monorepo.
  llvm-svn: 369013
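  The replacement is purely mechanical, since C++14's std::make_unique has the
  same shape as the old llvm::make_unique helper. A hypothetical before/after
  sketch (the Stage type here is illustrative, not taken from the MCA sources):

  ```
  #include <memory>

  struct Stage {
    explicit Stage(unsigned Id) : Id(Id) {}
    unsigned Id;
  };

  std::unique_ptr<Stage> createStage(unsigned Id) {
    // Before: return llvm::make_unique<Stage>(Id);  // helper from STLExtras.h
    return std::make_unique<Stage>(Id);              // C++14 standard library
  }
  ```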
* [MCA] Slightly refactor class RetireControlUnit, and add the ability to override the mask of used buffered resources in class mca::Instruction. NFCI (Andrea Di Biagio, 2019-08-15, 5 files, -44/+60)
  This patch teaches the RCU how to peek 'next' RCUTokens. A new method has been
  added to the RetireControlUnit class with the goal of minimizing the
  complexity of follow-up patches that will enable macro-fusion support in mca.

  This patch also adds method Instruction::getNumMicroOpcodes() to simplify
  common interactions with the instruction descriptor (a pattern quite common in
  some pipeline stages).

  Added the ability to override the default set of consumed scheduler resources
  (this -again- is to simplify future patches that add support for macro-op
  fusion).

  No functional change intended.
  llvm-svn: 369010
* [MCA] Slightly refactor the logic in ResourceManager. NFCI (Andrea Di Biagio, 2019-08-15, 4 files, -48/+57)
  This patch slightly changes the API in an attempt to simplify resource buffer
  queries. It is done in preparation for a patch that will enable support for
  macro fusion.
  llvm-svn: 368994
* [MCA] Add flag -show-encoding to llvm-mca. (Andrea Di Biagio, 2019-08-09, 2 files, -0/+38)
  Flag -show-encoding enables the printing of instruction encodings as part of
  the instruction info view.

  Example (with flags -mtriple=x86_64-- -mcpu=btver2):

  Instruction Info:
  [1]: #uOps
  [2]: Latency
  [3]: RThroughput
  [4]: MayLoad
  [5]: MayStore
  [6]: HasSideEffects (U)
  [7]: Encoding Size

  [1]    [2]    [3]    [4]    [5]    [6]    [7]    Encodings:     Instructions:
   1      2     1.00                         4     c5 f0 59 d0    vmulps  %xmm0, %xmm1, %xmm2
   1      4     1.00                         4     c5 eb 7c da    vhaddps %xmm2, %xmm2, %xmm3
   1      4     1.00                         4     c5 e3 7c e3    vhaddps %xmm3, %xmm3, %xmm4

  In this example, column Encoding Size is the size in bytes of the instruction
  encoding. Column Encodings reports the actual instruction encodings as byte
  sequences in hex (objdump style).

  The computation of encodings is done by a utility class named
  mca::CodeEmitter. In future, I plan to expose the CodeEmitter to the
  instruction builder, so that information about instruction encoding sizes can
  be used by the simulator. That would be a first step towards simulating the
  throughput from the decoders in the hardware frontend.

  Differential Revision: https://reviews.llvm.org/D65948
  llvm-svn: 368432
* [MCA] Remove dependency from InstrBuilder in mca::Context. NFC (Andrea Di Biagio, 2019-08-08, 1 file, -2/+1)
  InstrBuilder is not required to construct the default pipeline.
  llvm-svn: 368275
* [MCA] Ignore invalid processor resource writes of zero cycles. NFCI (Andrea Di Biagio, 2019-06-14, 1 file, -2/+14)
  In debug mode, the tool also raises a warning and prints out a message which
  helps identify the problematic MCWriteProcResEntry from the scheduling class.
  This message would have been useful to have when triaging PR42282.
  llvm-svn: 363387
* [MCA] Further refactor the bottleneck analysis view. NFCI. (Andrea Di Biagio, 2019-06-10, 1 file, -1/+2)
  llvm-svn: 362933
* [MCA][Scheduler] Change how memory instructions are dispatched to the pending set. NFCI (Andrea Di Biagio, 2019-06-01, 1 file, -28/+25)
  llvm-svn: 362302
* [MCA] Refactor class LSUnit. NFCI (Andrea Di Biagio, 2019-05-29, 2 files, -182/+146)
  This should be the last bit of refactoring in preparation for a patch that
  would finally fix PR37494.

  This patch introduces the concept of memory dependency groups (class
  MemoryGroup) and "Load/Store Unit token" (LSUToken) to track the status of a
  memory operation.

  A MemoryGroup is a node of a memory dependency graph. It is used internally to
  classify memory operations based on the memory operations they depend on. Let
  I and J be two memory operations; we say that I and J are equivalent (for the
  purpose of mapping instructions to memory dependency groups) if the set of
  memory operations they depend on is identical.

  MemoryGroups are identified by so-called LSUTokens (a unique group identifier
  assigned by the LSUnit to every group). When an instruction I is dispatched to
  the LSUnit, the LSUnit maps I to a group, and then returns a LSUToken.
  LSUTokens are used by class Scheduler to track memory dependencies.

  This patch simplifies the LSUnit interface and moves most of the
  implementation details to its base class (LSUnitBase). There is no user
  visible change to the output.

  llvm-svn: 361950
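  The grouping idea can be illustrated with a small, self-contained sketch
  (hypothetical names and simplified bookkeeping, not the actual LSUnit code):
  operations whose dependency sets are identical share a group, and the group
  id acts as the token handed back to the caller.

  ```
  #include <map>
  #include <set>

  using LSUToken = unsigned;

  // Maps each distinct dependency set to a group id (the "token").
  class DependencyGrouper {
    std::map<std::set<unsigned>, LSUToken> GroupOf; // dependency set -> group id
    LSUToken NextToken = 0;

  public:
    // Returns the token of the group that 'Deps' belongs to, creating a new
    // group if no operation with an identical dependency set was seen before.
    LSUToken dispatch(const std::set<unsigned> &Deps) {
      auto It = GroupOf.find(Deps);
      if (It != GroupOf.end())
        return It->second;
      return GroupOf[Deps] = NextToken++;
    }
  };
  ```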
* [MCA][Scheduler] Improved critical memory dependency computation. (Andrea Di Biagio, 2019-05-26, 1 file, -6/+16)
  This fixes a problem where back-pressure increases caused by register
  dependencies were not correctly notified if execution was also delayed by
  memory dependencies.
  llvm-svn: 361740
* [MCA] Refactor the logic that computes the critical memory dependency info. NFCI (Andrea Di Biagio, 2019-05-26, 3 files, -25/+74)
  CriticalRegDep has been renamed CriticalDependency, and it is now used by
  class Instruction to store information about the critical register dependency
  and the critical memory dependency.
  No functional change intended.
  llvm-svn: 361737
* [MCA] Add the ability to compute critical register dependency of an instruction. (Andrea Di Biagio, 2019-05-23, 1 file, -1/+33)
  This patch adds the methods `getCriticalRegDep()` and
  `computeCriticalRegDep()` to class InstructionBase. The goal is to allow users
  to obtain information about the critical register dependency that most affects
  the latency of an instruction.

  These methods are currently unused. However, the long term plan is to use them
  in order to allow the computation of a critical-path as part of the bottleneck
  analysis. So, this is yet another step towards fixing PR37494.
  llvm-svn: 361509
* [MCA] Introduce class LSUnitBase and let LSUnit derive from it. (Andrea Di Biagio, 2019-05-23, 2 files, -72/+76)
  Class LSUnitBase provides an abstract interface for all the concrete LS units
  in llvm-mca.

  Methods exposed by the public abstract LSUnitBase interface are:
  - Status isAvailable(const InstRef&);
  - void dispatch(const InstRef &);
  - const InstRef &isReady(const InstRef &);

  LSUnitBase standardises the API, but not the data structures internally used
  by LS units. This allows for more flexibility. Previously, only method
  `isReady()` was declared virtual by class LSUnit. Also, derived classes had to
  inherit all the internal data members of LSUnit.

  No functional change intended.
  llvm-svn: 361496
* [MCA] Make the bool conversion operator in class InstRef explicit. NFCI (Andrea Di Biagio, 2019-05-23, 1 file, -1/+3)
  This patch makes the bool conversion operator in InstRef explicit. It also
  adds an operator< to help compare InstRef objects in sets.
  llvm-svn: 361482
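  The pattern is standard C++. A minimal illustration of what an explicit bool
  conversion plus an ordering operator look like on a small handle type
  (hypothetical fields, not the real InstRef):

  ```
  #include <set>

  // A small handle type: an index paired with a pointer to the referenced object.
  struct Ref {
    unsigned Index = 0;
    const void *Obj = nullptr;

    // 'explicit' prevents accidental implicit conversions to bool/int, while
    // still allowing use in boolean contexts: if (R), R && ..., !R.
    explicit operator bool() const { return Obj != nullptr; }

    // Strict weak ordering so Ref can be stored in ordered containers like std::set.
    bool operator<(const Ref &Other) const { return Index < Other.Index; }
  };

  bool isTracked(const std::set<Ref> &Live, const Ref &R) {
    return R && Live.count(R); // 'R &&' uses the explicit bool conversion
  }
  ```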
* [MCA] Remove dead assignment. NFC (Andrea Di Biagio, 2019-05-08, 1 file, -1/+0)
  llvm-svn: 360237
* [MCA] Notify event listeners when instructions transition to the Pending state. NFCI (Andrea Di Biagio, 2019-05-05, 2 files, -8/+35)
  llvm-svn: 359983
* [MCA] Add field `IsEliminated` to class Instruction. NFCI (Andrea Di Biagio, 2019-04-27, 1 file, -3/+3)
  llvm-svn: 359377
* [MCA] Add an experimental MicroOpQueue stage. (Andrea Di Biagio, 2019-03-29, 3 files, -0/+75)
  This patch adds an experimental stage named MicroOpQueueStage.
  MicroOpQueueStage can be used to simulate a hardware micro-op queue
  (basically, a decoupling queue between 'decode' and 'dispatch'). Users can
  specify a queue size, as well as an optional MaxIPC (which - in the absence of
  a "Decoders" stage - can be used to simulate a different throughput from the
  decoders).

  This stage is added to the default pipeline between the EntryStage and the
  DispatchStage only if PipelineOption::MicroOpQueue is different from zero. By
  default, llvm-mca sets PipelineOption::MicroOpQueue to the value of hidden
  flag -micro-op-queue-size.

  Throughput from the decoder can be simulated via another hidden flag named
  -decoder-throughput. That flag allows us to quickly experiment with different
  frontend throughputs. For targets that declare a loop buffer, flag
  -decoder-throughput allows users to do multiple runs, each time simulating a
  different throughput from the decoders.

  This stage can/will be extended in future. For example, we could add a
  "buffer full" event to notify bottlenecks caused by backpressure. Flag
  -decoder-throughput would probably go away if in future we delegate the
  simulation of a (potentially variable) decoder throughput to another stage
  (DecoderStage?). For now, flag -decoder-throughput is "good enough" to run
  some simple experiments.

  Differential Revision: https://reviews.llvm.org/D59928
  llvm-svn: 357248
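  As a rough mental model (not the actual MicroOpQueueStage implementation),
  the stage behaves like a bounded queue that releases at most MaxIPC entries
  per simulated cycle; the names below are illustrative only:

  ```
  #include <cstddef>
  #include <deque>

  // A decoupling queue between a producer ("decode") and a consumer
  // ("dispatch"): bounded capacity, at most MaxIPC items forwarded per cycle.
  class MicroOpQueue {
    std::deque<unsigned> Queue; // queued micro-op ids
    size_t Capacity;
    unsigned MaxIPC;

  public:
    MicroOpQueue(size_t Capacity, unsigned MaxIPC)
        : Capacity(Capacity), MaxIPC(MaxIPC) {}

    bool tryPush(unsigned UOp) {
      if (Queue.size() >= Capacity)
        return false; // backpressure: queue is full
      Queue.push_back(UOp);
      return true;
    }

    // Called once per simulated cycle: forward up to MaxIPC micro-ops downstream.
    template <typename ConsumerFn> void cycle(ConsumerFn Consume) {
      for (unsigned I = 0; I < MaxIPC && !Queue.empty(); ++I) {
        Consume(Queue.front());
        Queue.pop_front();
      }
    }
  };
  ```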
* [MCA] Fix -Wparentheses warning breaking the -Werror build. (Andrea Di Biagio, 2019-03-27, 1 file, -1/+2)
  The warning was introduced at r357074.
  llvm-svn: 357085
* [MCA][Pipeline] Don't visit stages in reverse order when calling method cycleEnd(). NFCI (Andrea Di Biagio, 2019-03-27, 1 file, -3/+3)
  There is no reason why stages should be visited in reverse order. This patch
  allows the definition of stages that push instructions forward from their
  cycleEnd() routine.
  llvm-svn: 357074
* [MCA] Correctly update the UsedResourceGroups mask in the InstrBuilder. (Andrea Di Biagio, 2019-03-26, 2 files, -0/+11)
  Found by inspection when looking at the debug output of MCA. This problem was
  latent, and none of the upstream models were affected by it.
  No functional change intended.
  llvm-svn: 357000
* [MCA] Highlight kernel bottlenecks in the summary view. (Andrea Di Biagio, 2019-03-04, 3 files, -4/+70)
  This patch adds a new flag named -bottleneck-analysis to print out information
  about throughput bottlenecks.

  MCA knows how to identify and classify dynamic dispatch stalls. However, it
  doesn't know how to analyze and highlight kernel bottlenecks. The goal of this
  patch is to teach MCA how to correlate increases in backend pressure to
  backend stalls (and therefore, the loss of throughput).

  From a Scheduler point of view, backend pressure is a function of the
  scheduler buffer usage (i.e. how the number of uOps in the scheduler buffers
  changes over time). Backend pressure increases (or decreases) when there is a
  mismatch between the number of opcodes dispatched, and the number of opcodes
  issued in the same cycle. Since buffer resources are limited, continuous
  increases in backend pressure would eventually lead to dispatch stalls. So,
  there is a strong correlation between dispatch stalls, and how backpressure
  changed over time.

  This patch teaches MCA how to identify situations where backend pressure
  increases due to:
  - unavailable pipeline resources.
  - data dependencies.

  Data dependencies may delay execution of instructions and therefore increase
  the time that uOps have to spend in the scheduler buffers. That often
  translates to an increase in backend pressure which may eventually lead to a
  bottleneck. Contention on pipeline resources may also delay execution of
  instructions, and lead to a temporary increase in backend pressure.

  Internally, the Scheduler classifies instructions based on whether register /
  memory operands are available or not. An instruction is marked as "ready to
  execute" only if data dependencies are fully resolved. Every cycle, the
  Scheduler attempts to execute all instructions that are ready to execute. If
  an instruction cannot execute because of unavailable pipeline resources, then
  the Scheduler internally updates a BusyResourceUnits mask with the ID of each
  unavailable resource.

  ExecuteStage is responsible for tracking changes in backend pressure. If
  backend pressure increases during a cycle because of contention on pipeline
  resources, then ExecuteStage sends a "backend pressure" event to the
  listeners. That event contains information about instructions delayed by
  resource pressure, as well as the BusyResourceUnits mask. Note that
  ExecuteStage also knows how to identify situations where backpressure
  increased because of delays introduced by data dependencies.

  The SummaryView observes "backend pressure" events and prints out a
  "bottleneck report". Example of bottleneck report:

  ```
  Cycles with backend pressure increase [ 99.89% ]
  Throughput Bottlenecks:
    Resource Pressure       [ 0.00% ]
    Data Dependencies:      [ 99.89% ]
    - Register Dependencies [ 0.00% ]
    - Memory Dependencies   [ 99.89% ]
  ```

  A bottleneck report is printed out only if increases in backend pressure
  eventually caused backend stalls.

  About the time complexity: time complexity is linear in the number of
  instructions in the Scheduler::PendingSet. The average slowdown tends to be in
  the range of ~5-6%. For memory intensive kernels, the slowdown can be
  significant if flag -noalias=false is specified. In the worst case scenario I
  have observed a slowdown of ~30% when flag -noalias=false was specified. We
  can definitely recover part of that slowdown if we optimize class LSUnit (by
  doing extra bookkeeping to speedup queries).

  For now, this new analysis is disabled by default, and it can be enabled via
  flag -bottleneck-analysis. Users of MCA as a library can enable the generation
  of pressure events through the constructor of ExecuteStage.

  This patch partially addresses https://bugs.llvm.org/show_bug.cgi?id=37494
  Differential Revision: https://reviews.llvm.org/D58728
  llvm-svn: 355308
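  The core correlation the commit describes (pressure rises when dispatch
  outpaces issue) can be sketched in a few lines. This is hypothetical,
  simplified bookkeeping, not the actual ExecuteStage logic:

  ```
  #include <cstdint>

  // Per-cycle bookkeeping: backend pressure grows whenever more uOps were
  // dispatched into the scheduler buffers than were issued out of them.
  struct PressureTracker {
    int64_t BufferedUOps = 0;            // uOps currently in scheduler buffers
    uint64_t PressureIncreaseCycles = 0; // cycles where pressure went up

    void onCycleEnd(unsigned Dispatched, unsigned Issued) {
      int64_t Delta = int64_t(Dispatched) - int64_t(Issued);
      BufferedUOps += Delta;
      if (Delta > 0) // mismatch: pressure increased this cycle
        ++PressureIncreaseCycles;
    }
  };
  ```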
* [MCA] Always check if scheduler resources are unavailable when reporting dispatch stalls. (Andrea Di Biagio, 2019-02-26, 2 files, -4/+13)
  Dispatch stall cycles may be associated with multiple dispatch stall events.
  Before this patch, each stall cycle was associated with a single stall event.
  This patch also improves a couple of code comments, and adds a helper method
  to query the Scheduler for dispatch stalls.
  llvm-svn: 354877
* [MCA][Scheduler] Collect resource pressure and memory dependency bottlenecks. (Andrea Di Biagio, 2019-02-20, 3 files, -26/+31)
  Every cycle, the Scheduler checks if instructions in the ReadySet can be
  issued to the underlying pipelines. If an instruction cannot be issued because
  one or more pipeline resources are unavailable, then field
  Instruction::CriticalResourceMask is updated with the resource identifier of
  the unavailable resources.

  If an instruction cannot be promoted from the PendingSet to the ReadySet
  because of a memory dependency, then field Instruction::CriticalMemDep is
  updated with the identifier of the dependent memory instruction.

  Bottleneck information is collected after every cycle for instructions that
  are waiting to execute. The idea is to help identify causes of bottlenecks;
  this information can be used in future to implement a bottleneck analysis.
  llvm-svn: 354490
* [MCA][ResourceManager] Add a table that maps processor resource indices to processor resource identifiers. (Andrea Di Biagio, 2019-02-20, 1 file, -21/+24)
  This patch adds a lookup table to speed up resource queries in the
  ResourceManager. This patch also moves helper function
  'getResourceStateIndex()' from ResourceManager.cpp to Support.h, so that we
  can reuse that logic in the SummaryView (and potentially other views in
  llvm-mca).
  No functional change intended.
  llvm-svn: 354470
* [MCA] Correctly update register definitions in the PRF after move elimination. (Andrea Di Biagio, 2019-02-18, 1 file, -14/+9)
  This patch fixes a bug where register writes performed by optimizable register
  moves were sometimes wrongly treated like partial register updates. Before
  this patch, llvm-mca wrongly predicted a 1.50 IPC for test
  reg-move-elimination-6.s (added by this patch). With this patch, llvm-mca
  correctly updates the register definitions in the PRF, and the IPC for that
  test is now correctly reported as 2.
  llvm-svn: 354271
* [MCA] Slightly refactor method writeStartEvent in WriteState and ReadState. NFCI (Andrea Di Biagio, 2019-02-18, 3 files, -13/+13)
  This is another change in preparation for PR37494.
  No functional change intended.
  llvm-svn: 354261
* [MCA] Improved code comment. NFC (Andrea Di Biagio, 2019-02-15, 1 file, -1/+2)
  llvm-svn: 354154
* [MCA][LSUnit] Return the ID of the dependent memory operation from method isReady(). NFCI (Andrea Di Biagio, 2019-02-15, 2 files, -12/+16)
  This is yet another change in preparation for a fix for PR37494.
  llvm-svn: 354150
* [MCA] Store a bitmask of used groups in the instruction descriptor. (Andrea Di Biagio, 2019-02-13, 2 files, -8/+22)
  This is to speedup 'checkAvailability' queries in class ResourceManager.
  No functional change intended.
  llvm-svn: 353949
* [MCA][Scheduler] Use latency information to further classify busy instructions. (Andrea Di Biagio, 2019-02-13, 2 files, -18/+82)
  This patch introduces a new instruction stage named 'IS_PENDING'. An
  instruction transitions from the IS_DISPATCHED to the IS_PENDING stage if
  input registers are not available, but their latency is known.

  This patch also adds a new set of instructions named 'PendingSet' to class
  Scheduler. The idea is that the PendingSet will only contain instructions that
  have reached the IS_PENDING stage. By construction, an instruction in the
  PendingSet is only dependent on instructions that have already reached the
  execution stage. The plan is to use this knowledge to identify bottlenecks
  caused by data dependencies (see PR37494).

  Differential Revision: https://reviews.llvm.org/D58066
  llvm-svn: 353937
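  A rough sketch of the classification rule the commit describes (hypothetical
  names; the real Scheduler tracks considerably more state):

  ```
  enum class InstrStage { Dispatched, Pending, Ready };

  struct OperandInfo {
    bool Available;    // the value has already been produced
    bool LatencyKnown; // the producer has issued, so the delivery cycle is known
  };

  // Dispatched -> Pending when every input at least has a known latency;
  // Dispatched/Pending -> Ready when every input is actually available.
  template <typename Range> InstrStage classify(const Range &Inputs) {
    bool AllAvailable = true, AllLatencyKnown = true;
    for (const OperandInfo &Op : Inputs) {
      AllAvailable = AllAvailable && Op.Available;
      AllLatencyKnown = AllLatencyKnown && (Op.Available || Op.LatencyKnown);
    }
    if (AllAvailable)
      return InstrStage::Ready;
    return AllLatencyKnown ? InstrStage::Pending : InstrStage::Dispatched;
  }
  ```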
* [MCA] Improved debug prints. NFC (Andrea Di Biagio, 2019-02-12, 2 files, -3/+21)
  llvm-svn: 353852
* [MCA][Scheduler] Track resources that were found busy when issuing an instruction. (Andrea Di Biagio, 2019-02-11, 1 file, -0/+3)
  This is a follow up of r353706. When the scheduler fails to issue a ready
  instruction to the underlying pipelines, it now updates a mask of 'busy
  resource units'. That information will be used in future to obtain the set of
  "problematic" resources in the case of bottlenecks caused by resource
  pressure.
  No functional change intended.
  llvm-svn: 353728
* [MCA] Return a mask of busy resources from method ResourceManager::checkAvailability(). NFCI (Andrea Di Biagio, 2019-02-11, 2 files, -10/+23)
  In case of bottlenecks caused by pipeline pressure, we want to be able to
  correctly report the set of problematic pipelines. This is a first step
  towards adding support for bottleneck hints in llvm-mca (see PR37494).
  No functional change intended.
  llvm-svn: 353706
* [MCA] Speedup ResourceManager queries. NFCI (Andrea Di Biagio, 2019-02-06, 1 file, -8/+9)
  When a resource unit R is released, the ResourceManager notifies groups that
  contain R. Before this patch, the logic in method ResourceManager::release()
  implemented a potentially slow iterative search of dependent groups on the
  entire set of processor resources.
  This patch replaces that logic with a simpler (and often faster) lookup on
  array `Resource2Groups`.
  This patch gives an average speedup of ~3-4% (observed on a release build when
  testing for target btver2).
  No functional change intended.
  llvm-svn: 353301
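  The underlying idea is a precomputed reverse map from resource units to the
  groups containing them, so a release becomes a single array lookup instead of
  a scan. A generic illustration with hypothetical names, not the actual
  ResourceManager code:

  ```
  #include <cstdint>
  #include <vector>

  // Reverse map: for each resource unit index, a bitmask of the resource
  // groups that contain it. Built once up front.
  class ResourceIndex {
    std::vector<uint64_t> Resource2Groups; // unit index -> mask of groups

  public:
    explicit ResourceIndex(unsigned NumUnits) : Resource2Groups(NumUnits, 0) {}

    void addUnitToGroup(unsigned UnitIdx, unsigned GroupIdx) {
      Resource2Groups[UnitIdx] |= (uint64_t(1) << GroupIdx);
    }

    // Called when unit 'UnitIdx' is released: returns the groups to notify.
    uint64_t groupsToNotify(unsigned UnitIdx) const {
      return Resource2Groups[UnitIdx];
    }
  };
  ```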
* [MCA] Moved the logic that updates register dependencies from DispatchStage to RegisterFile. NFC (Andrea Di Biagio, 2019-02-05, 3 files, -26/+20)
  DispatchStage should always delegate to an object of class RegisterFile the
  task of updating data dependencies. ReadState and WriteState objects should
  not be modified directly by DispatchStage.
  This patch also renames stage IS_AVAILABLE to IS_DISPATCHED.
  llvm-svn: 353170
* [MCA] Simplify the logic in method WriteState::addUser. NFCI (Andrea Di Biagio, 2019-02-05, 1 file, -5/+1)
  In some cases, it is faster to just grow the set of 'Users' rather than
  performing an llvm::find_if every time a new user is added to the set.
  No functional change intended.
  llvm-svn: 353162
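  The trade-off is the classic one between deduplicating on insert and simply
  appending. A generic illustration (not the actual WriteState code; the User
  type here is hypothetical):

  ```
  #include <algorithm>
  #include <utility>
  #include <vector>

  using User = std::pair<unsigned, int>; // (read identifier, cycles until ready)

  // Deduplicating insert: a linear scan on every insertion.
  void addUserDedup(std::vector<User> &Users, User U) {
    auto It = std::find_if(Users.begin(), Users.end(),
                           [&](const User &X) { return X.first == U.first; });
    if (It == Users.end())
      Users.push_back(U);
  }

  // Append-only insert: O(1) per insertion; cheaper when duplicates are rare
  // or harmless, at the cost of a slightly larger vector.
  void addUserFast(std::vector<User> &Users, User U) {
    Users.push_back(std::move(U));
  }
  ```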
* [MC][X86] Correctly model additional operand latency caused by transfer delays from the integer to the floating point unit. (Andrea Di Biagio, 2019-01-23, 1 file, -0/+1)
  This patch adds a new ReadAdvance definition named ReadInt2Fpu. ReadInt2Fpu
  allows x86 scheduling models to accurately describe delays caused by data
  transfers from the integer unit to the floating point unit. ReadInt2Fpu
  currently defaults to a delay of zero cycles (i.e. no delay) for all x86
  models excluding BtVer2. That means, this patch is a functional change for the
  Jaguar cpu model only.

  Tablegen definitions for instructions (V)PINSR* have been updated to account
  for the new ReadInt2Fpu. That read is mapped to the GPR input operand. On
  Jaguar, int-to-fpu transfers are modeled as a +6cy delay. Before this patch,
  that extra delay was added to the opcode latency. In practice, the insert
  opcode only executes for 1cy. Most of the latency is actually contributed by
  the so-called operand-latency. According to the AMD SOG for family 16h,
  (V)PINSR* latency is defined by expression f+1, where f is defined as a
  forwarding delay from the integer unit to the fpu.

  When printing instruction latency from MCA (see InstructionInfoView.cpp) and
  LLC (only when flag -print-schedule is specified), we now need to account for
  any extra forwarding delays. We do this by checking if scheduling classes
  declare any negative ReadAdvance entries. Quoting a code comment in
  TargetSchedule.td: "A negative advance effectively increases latency, which
  may be used for cross-domain stalls". When computing the instruction latency
  for the purpose of our scheduling tests, we now add any extra delay to the
  formula. This avoids regressing existing codegen and mca schedule tests. It
  comes with the cost of an extra (but very simple) hook in MCSchedModel.

  Differential Revision: https://reviews.llvm.org/D57056
  llvm-svn: 351965
* Update the file headers across all of the LLVM projects in the monorepo to reflect the new license (Chandler Carruth, 2019-01-19, 19 files, -76/+57)
  We understand that people may be surprised that we're moving the header
  entirely to discuss the new license. We checked this carefully with the
  Foundation's lawyer and we believe this is the correct approach.

  Essentially, all code in the project is now made available by the LLVM project
  under our new license, so you will see that the license headers include that
  license only. Some of our contributors have contributed code under our old
  license, and accordingly, we have retained a copy of our old license notice in
  the top-level files in each project and repository.
  llvm-svn: 351636
* [MCA] Fix wrong definition of ResourceUnitMask in DefaultResourceStrategy. (Andrea Di Biagio, 2019-01-10, 5 files, -15/+36)
  Field ResourceUnitMask was incorrectly defined as a 'const unsigned' mask. It
  should have been a 64 bit quantity instead. That means, ResourceUnitMask was
  always implicitly truncated to a 32 bit quantity.
  This issue has been found by inspection. Surprisingly, that bug was latent,
  and it never negatively affected any existing upstream targets.
  This patch fixes the wrong definition of ResourceUnitMask, and adds a bunch of
  extra debug prints to help debugging potential issues related to invalid
  processor resource masks.
  llvm-svn: 350820
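  This class of bug is easy to reproduce in isolation. A minimal illustration of
  why storing a 64-bit resource mask in an 'unsigned' silently drops the high
  bits (the values here are hypothetical, not the original field):

  ```
  #include <cstdint>
  #include <iostream>

  int main() {
    // A mask whose only set bit is above bit 31 (e.g. a resource unit with a
    // high index). Storing it in a 32-bit 'unsigned' silently truncates it to 0.
    const uint64_t FullMask = uint64_t(1) << 40;
    const unsigned Truncated = FullMask; // implicit narrowing: high bits lost
    const uint64_t Correct = FullMask;   // 64-bit storage keeps every bit

    std::cout << std::hex << "truncated=0x" << Truncated
              << " correct=0x" << Correct << '\n';
    // prints: truncated=0x0 correct=0x10000000000
  }
  ```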
* [llvm-mca] Display masks in hex (Evandro Menezes, 2019-01-09, 2 files, -5/+6)
  Display the resource masks as hexadecimal. Otherwise, NFC.
  llvm-svn: 350777
* [llvm-mca] Improve debugging (NFC) (Evandro Menezes, 2019-01-08, 2 files, -0/+4)
  llvm-svn: 350661
* [MCA] Improved handling of in-order issue/dispatch resources. (Andrea Di Biagio, 2019-01-04, 3 files, -21/+15)
  Added field 'MustIssueImmediately' to the instruction descriptor of
  instructions that only consume in-order issue/dispatch processor resources.
  This speeds up queries from the hardware Scheduler, and gives an average ~5%
  speedup on a release build.
  No functional change intended.
  llvm-svn: 350397