path: root/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp
Commits touching this file, newest first (subject, author, date, files changed, lines -/+):
...
* MachineScheduler: Export function to construct "default" scheduler. (Matthias Braun, 2016-11-28; 1 file, -2/+2)

  This makes the createGenericSchedLive() function that constructs the
  default scheduler available for the public API. This should help when you
  want to get a scheduler and the default list of DAG mutations.

  This also shrinks the list of default DAG mutations:
  {Load|Store}ClusterDAGMutation and MacroFusionDAGMutation are no longer
  added by default. Targets can easily add them if they need them. It also
  makes it easier for targets to add alternative/custom macrofusion or
  clustering mutations while staying with the default
  createGenericSchedLive(). It also saves the callback back and forth in
  TargetInstrInfo::enableClusterLoads()/enableClusterStores().

  Differential Revision: https://reviews.llvm.org/D26986
  llvm-svn: 288057
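  For illustration, a minimal sketch (not code from the patch) of a
  scheduler factory that uses the now-public createGenericSchedLive() and
  re-adds the clustering mutations that are no longer default; the mutation
  factory names and signatures are assumed from the MachineScheduler.h API
  of this era.

    #include "llvm/CodeGen/MachineScheduler.h"

    using namespace llvm;

    // Sketch: build the default live scheduler, then opt back in to the
    // load/store clustering mutations that this commit removed from the
    // default mutation list. Normally this would be an override of
    // TargetPassConfig::createMachineScheduler() in a target's TargetMachine.
    static ScheduleDAGInstrs *
    createMyTargetSchedulerSketch(MachineSchedContext *C) {
      ScheduleDAGMILive *DAG = createGenericSchedLive(C);
      DAG->addMutation(createLoadClusterDAGMutation(DAG->TII, DAG->TRI));
      DAG->addMutation(createStoreClusterDAGMutation(DAG->TII, DAG->TRI));
      return DAG;
    }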
* Test commit (Abderrazek Zaafrani, 2016-10-21; 1 file, -1/+0)

  llvm-svn: 284832
* AArch64: Move remaining target specific BranchRelaxation bits to TII (Matt Arsenault, 2016-10-06; 1 file, -22/+27)

  llvm-svn: 283458
* AArch64: Macrofusion: Split features, add missing combinations. (Matthias Braun, 2016-10-04; 1 file, -4/+45)

  AArch64InstrInfo::shouldScheduleAdjacent() determines whether two
  instructions can benefit from macroop fusion on Apple CPUs. The list
  turned out to be incomplete:
  - the "rr" variants of the instructions were missing
  - even the "rs" variants can have shift value == 0 and behave like the
    "rr" variants

  This also splits the MacropFusion target feature into ArithmeticBccFusion
  and ArithmeticCbzFusion.

  Differential Revision: https://reviews.llvm.org/D25142
  llvm-svn: 283243
* [AArch64] Support for FP FMA when -ffp-contract=fast (Evandro Menezes, 2016-09-15; 1 file, -3/+5)

  Currently, the machine combiner can proceed matching when -ffast-math is
  on. It should also match when only -ffp-contract=fast is specified as was
  the case before when DAGCombiner was doing the job.

  Patch by: Abderrazek Zaafrani <a.zaafrani@samsung.com>.

  Differential Revision: https://reviews.llvm.org/D24366
  llvm-svn: 281649
* Finish renaming remaining analyzeBranch functions (Matt Arsenault, 2016-09-14; 1 file, -2/+2)

  llvm-svn: 281535
* Make analyzeBranch family of instruction names consistent (Matt Arsenault, 2016-09-14; 1 file, -2/+2)

  analyzeBranch was renamed to use lowercase first, rename the related set
  to match.

  llvm-svn: 281506
* AArch64: Use TTI branch functions in branch relaxation (Matt Arsenault, 2016-09-14; 1 file, -4/+23)

  The main change is to return the code size from InsertBranch/RemoveBranch.

  Patch mostly by Tim Northover

  llvm-svn: 281505
* [AArch64] Support stackmap/patchpoint in getInstSizeInBytes (Diana Picus, 2016-09-13; 1 file, -4/+21)

  We currently return 4 for stackmaps and patchpoints, which is very
  optimistic and can in rare cases cause the branch relaxation pass to fail
  to relax certain branches.

  This patch causes getInstSizeInBytes to return a pessimistic estimate of
  the size as the number of bytes requested in the stackmap/patchpoint. In
  the future, we could provide a more accurate estimate by sharing some of
  the logic in AArch64::LowerSTACKMAP/PATCHPOINT.

  Fixes part of https://llvm.org/bugs/show_bug.cgi?id=28750

  Differential Revision: https://reviews.llvm.org/D24073
  llvm-svn: 281301
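  A rough sketch of the pessimistic estimate described above (not the patch
  itself; in particular, treating operand 1 as the requested byte count is
  an assumption, and the TargetOpcodes header location varies across LLVM
  versions):

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/TargetOpcodes.h"

    // Sketch of the idea: for STACKMAP/PATCHPOINT report the number of
    // bytes the intrinsic reserved instead of a flat 4.
    static unsigned instSizeSketch(const llvm::MachineInstr &MI) {
      switch (MI.getOpcode()) {
      case llvm::TargetOpcode::STACKMAP:
      case llvm::TargetOpcode::PATCHPOINT:
        return MI.getOperand(1).getImm(); // pessimistic: the requested size
      default:
        return 4; // every real AArch64 instruction is 4 bytes wide
      }
    }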
* CodeGen: Give MachineBasicBlock::reverse_iterator a handle to the current MI (Duncan P. N. Exon Smith, 2016-09-11; 1 file, -3/+3)

  Now that MachineBasicBlock::reverse_instr_iterator knows when it's at the
  end (since r281168 and r281170), implement
  MachineBasicBlock::reverse_iterator directly on top of an
  ilist::reverse_iterator by adding an IsReverse template parameter to
  MachineInstrBundleIterator. This replaces another hard-to-reason-about
  use of std::reverse_iterator on list iterators, matching the changes for
  ilist::reverse_iterator from r280032 (see the "out of scope" section at
  the end of that commit message). MachineBasicBlock::reverse_iterator now
  has a handle to the current node and has obvious invalidation semantics.

  r280032 has a more detailed explanation of how list-style reverse
  iterators (invalidated when the pointed-at node is deleted) are different
  from vector-style reverse iterators like std::reverse_iterator
  (invalidated on every operation). A great motivating example is this
  commit's changes to lib/CodeGen/DeadMachineInstructionElim.cpp.

  Note: If your out-of-tree backend deletes instructions while iterating on
  a MachineBasicBlock::reverse_iterator or converts between
  MachineBasicBlock::iterator and MachineBasicBlock::reverse_iterator,
  you'll need to update your code in similar ways to r280032. The following
  table might help:

    [Old]                              ==>  [New]
    delete &*RI, RE = end()                 delete &*RI++
    RI->erase(), RE = end()                 RI++->erase()
    reverse_iterator(I)                     std::prev(I).getReverse()
    reverse_iterator(I)                     ++I.getReverse()
    --reverse_iterator(I)                   I.getReverse()
    reverse_iterator(std::next(I))          I.getReverse()
    RI.base()                               std::prev(RI).getReverse()
    RI.base()                               ++RI.getReverse()
    --RI.base()                             RI.getReverse()
    std::next(RI).base()                    RI.getReverse()

  (For more details, have a look at r280032.)

  llvm-svn: 281172
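  As a concrete (made-up) illustration of the first table row, erasing
  instructions while walking a block backwards with the new iterator:

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstr.h"

    using namespace llvm;

    // Sketch only; "IsDead" stands in for whatever predicate the caller
    // uses. Advance the reverse_iterator before erasing, per the
    // "delete &*RI, RE = end()  ==>  delete &*RI++" row above.
    template <typename PredT>
    static void eraseDeadInstrsSketch(MachineBasicBlock &MBB, PredT IsDead) {
      for (auto RI = MBB.rbegin(), RE = MBB.rend(); RI != RE;) {
        MachineInstr &MI = *RI++; // step first; erasing MI then leaves RI valid
        if (IsDead(MI))
          MI.eraseFromParent();
      }
    }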
* Typo fixes. NFC (Diana Picus, 2016-08-31; 1 file, -1/+1)

  llvm-svn: 280229
* Make some LLVM_CONSTEXPR variables const. NFC. (George Burgess IV, 2016-08-25; 1 file, -1/+1)

  This patch changes LLVM_CONSTEXPR variable declarations to const variable
  declarations, since LLVM_CONSTEXPR expands to nothing if the current
  compiler doesn't support constexpr. In all of the changed cases, it looks
  like the code intended the variable to be const instead of
  sometimes-constexpr sometimes-not.

  llvm-svn: 279696
* Replace "fallthrough" comments with LLVM_FALLTHROUGHJustin Bogner2016-08-171-2/+4
| | | | | | | This is a mechanical change of comments in switches like fallthrough, fall-through, or fall-thru to use the LLVM_FALLTHROUGH macro instead. llvm-svn: 278902
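  The mechanical pattern in question, shown on an invented switch (the
  opcode values are placeholders, not real AArch64 opcodes; in today's tree
  the macro has since been replaced by the standard [[fallthrough]]
  attribute):

    #include "llvm/Support/Compiler.h" // provides LLVM_FALLTHROUGH

    // Before, the intent was recorded only as a "// fall-through" comment.
    // The macro lets compilers that understand fallthrough attributes
    // verify the intent.
    static int widthForOpcodeSketch(unsigned Opc, bool Is64Bit) {
      switch (Opc) {
      case 1:
        if (!Is64Bit)
          return 32;
        LLVM_FALLTHROUGH; // was: // fall-thru
      case 2:
        return 64;
      default:
        return 0;
      }
    }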
* [AArch64] Re-factor code shared by AArch64LoadStoreOpt and AArch64InstrInfo. (Geoff Berry, 2016-08-12; 1 file, -30/+5)

  This re-factoring could cause the following slight changes in generated
  code, though none were observed during testing:
  - MachineScheduler could decide not to cluster some loads/stores if there
    are other load/stores with non-pairable opcodes that have the same base
    register and offset as a pairable set of load/stores. One case of
    different MachineScheduler pairing did show up in my testing, but it
    wasn't due to this issue, but due to
    BaseMemOpClusterMutation::clusterNeighboringMemOps() being unstable
    w.r.t. the order it considers memory operations. See PR28942.
  - The ImplicitNullChecks optimization could be done for more load/store
    opcodes. This optimization isn't done for C/C++ code, so it didn't show
    up in my testing.

  Reviewers: mcrosier, t.p.northover

  Subscribers: aemerson, rengolin, mcrosier, llvm-commits

  Differential Revision: https://reviews.llvm.org/D23365
  llvm-svn: 278515
* Move helpers into anonymous namespaces. NFC. (Benjamin Kramer, 2016-08-06; 1 file, -0/+2)

  llvm-svn: 277916
* AArch64: Assert on branch displacement bits (Matt Arsenault, 2016-08-02; 1 file, -0/+7)

  llvm-svn: 277434
* AArch64: BranchRelaxation cleanups (Matt Arsenault, 2016-08-02; 1 file, -0/+50)

  Move some logic into TII.

  llvm-svn: 277430
* [AArch64] Return the correct size for TLSDESC_CALLSEQ (Diana Picus, 2016-08-01; 1 file, -0/+3)

  The branch relaxation pass is computing the wrong offsets because it
  assumes TLSDESC_CALLSEQ eats up 4 bytes, when in fact it is lowered to an
  instruction sequence taking up 16 bytes. This can become a problem in
  huge files with lots of TLS accesses, as it may slowly move branch
  targets out of the range computed by the branch relaxation pass.

  Fixes PR24234 https://llvm.org/bugs/show_bug.cgi?id=24234

  Differential Revision: https://reviews.llvm.org/D22870
  llvm-svn: 277331
* MachineFunction: Return reference for getFrameInfo(); NFC (Matthias Braun, 2016-07-28; 1 file, -2/+2)

  getFrameInfo() never returns nullptr so we should use a reference instead
  of a pointer.

  llvm-svn: 277017
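  The resulting call-site change, on an invented snippet:

    #include "llvm/CodeGen/MachineFrameInfo.h"
    #include "llvm/CodeGen/MachineFunction.h"
    #include <cstdint>

    // Before this change call sites went through a pointer:
    //   const MachineFrameInfo *MFI = MF.getFrameInfo();
    // Afterwards the accessor returns a reference:
    static uint64_t stackSizeOfSketch(const llvm::MachineFunction &MF) {
      const llvm::MachineFrameInfo &MFI = MF.getFrameInfo();
      return MFI.getStackSize();
    }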
* TargetInstrInfo: rename GetInstSizeInBytes to getInstSizeInBytes. NFC (Sjoerd Meijer, 2016-07-28; 1 file, -2/+2)

  Differential Revision: https://reviews.llvm.org/D22925
  llvm-svn: 276997
* Typo fix. NFC (Diana Picus, 2016-07-27; 1 file, -1/+1)

  llvm-svn: 276879
* [AArch64] Cleanup sign extend in genAlternativeCodeSequence (David Majnemer, 2016-07-21; 1 file, -3/+3)

  Use the machinery in MathExtras instead of rolling it by hand.

  This fixes PR28624.

  llvm-svn: 276366
* Rename AnalyzeBranch* to analyzeBranch*. (Jacques Pienaar, 2016-07-15; 1 file, -5/+5)

  Summary: NFC. Rename AnalyzeBranch/AnalyzeBranchPredicate to
  analyzeBranch/analyzeBranchPredicate to follow LLVM coding style and be
  consistent with TargetInstrInfo's analyzeCompare and analyzeSelect.

  Reviewers: tstellarAMD, mcrosier

  Subscribers: mcrosier, jholewinski, jfb, arsenm, dschuff, jyknight,
  dsanders, nemanjai

  Differential Revision: https://reviews.llvm.org/D22409
  llvm-svn: 275564
* [AArch64] Set COPY ZR isAsCheapAsAMove when needed. (Haicheng Wu, 2016-07-15; 1 file, -2/+6)

  If a subtarget has both ZCZeroing and CustomCheapAsMoveHandling features
  (now only Kryo has both), set COPY (W|X)ZR isAsCheapAsAMove.

  Differential Revision: http://reviews.llvm.org/D22360
  llvm-svn: 275503
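  A rough sketch of the shape of such a check; the real logic lives in
  AArch64InstrInfo::isAsCheapAsAMove(), and here the feature flags and
  zero-register numbers are passed in so the snippet does not depend on
  AArch64 target headers (the details are assumptions, not the patch):

    #include "llvm/CodeGen/MachineInstr.h"

    // Sketch: a COPY whose source is WZR/XZR is as cheap as a move only
    // when the core both zero-cycle-zeroes (ZCZeroing) and opted in to
    // custom cheap-as-move handling (CustomCheapAsMoveHandling).
    static bool isCopyFromZeroRegCheapSketch(const llvm::MachineInstr &MI,
                                             bool HasZCZeroing,
                                             bool HasCustomCheapAsMove,
                                             unsigned WZR, unsigned XZR) {
      if (!HasZCZeroing || !HasCustomCheapAsMove || !MI.isCopy())
        return false;
      auto Src = MI.getOperand(1).getReg(); // COPY: operand 0 = dst, 1 = src
      return Src == WZR || Src == XZR;
    }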
* s/constexpr/LLVM_CONSTEXPR in AArch64InstrInfo.cpp. (Justin Lebar, 2016-07-14; 1 file, -1/+1)

  Yet again.

  llvm-svn: 275463
* [CodeGen] Refactor MachineMemOperand::Flags's target-specific flags. (Justin Lebar, 2016-07-14; 1 file, -15/+7)

  Summary: Make the target-specific flags in MachineMemOperand::Flags real,
  bona fide enum values. This simplifies users, prevents various constants
  from going out of sync, and avoids the false sense of security provided
  by declaring static members in classes and then forgetting to define them
  inside of cpp files.

  Reviewers: MatzeB

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D22372
  llvm-svn: 275451
* [CodeGen] Refactor MachineMemOperand's Flags enum. (Justin Lebar, 2016-07-14; 1 file, -1/+2)

  Summary:
  - Give it a shorter name (because we're going to refer to it often from
    SelectionDAG and friends).
  - Split the flags and alignment into separate variables.
  - Specialize FlagsEnumTraits for it, so we can do bitwise ops on it
    without losing type information.
  - Make some enum values constants in MachineMemOperand instead.
    MOMaxBits should not be a valid Flag.
  - Simplify some of the bitwise ops for dealing with Flags.

  Reviewers: chandlerc

  Subscribers: llvm-commits

  Differential Revision: http://reviews.llvm.org/D22281
  llvm-svn: 275438
* [AArch64] Set FMOVS0 and FMOVD0 as isAsCheapAsAMove when needed. (Haicheng Wu, 2016-07-12; 1 file, -0/+6)

  If a subtarget has both ZCZeroing and CustomCheapAsMoveHandling features
  (now only Kryo has both), set FMOVS0 and FMOVD0 isAsCheapAsAMove.

  Differential Revision: http://reviews.llvm.org/D22256
  llvm-svn: 275178
* AArch64: Avoid implicit iterator conversions, NFC (Duncan P. N. Exon Smith, 2016-07-08; 1 file, -7/+6)

  Avoid implicit conversions from MachineInstrBundleIterator to
  MachineInstr* in the AArch64 backend, mainly by preferring MachineInstr&
  over MachineInstr* when a pointer isn't nullable.

  llvm-svn: 274924
* CodeGen: Use MachineInstr& in TargetInstrInfo, NFC (Duncan P. N. Exon Smith, 2016-06-30; 1 file, -180/+177)

  This is mostly a mechanical change to make TargetInstrInfo API take
  MachineInstr& (instead of MachineInstr* or MachineBasicBlock::iterator)
  when the argument is expected to be a valid MachineInstr. This is a
  general API improvement.

  Although it would be possible to do this one function at a time, that
  would demand a quadratic amount of churn since many of these functions
  call each other. Instead I've done everything as a block and just updated
  what was necessary.

  This is mostly mechanical fixes: adding and removing `*` and `&`
  operators. The only non-mechanical change is to split
  ARMBaseInstrInfo::getOperandLatencyImpl out from
  ARMBaseInstrInfo::getOperandLatency. Previously, the latter took a
  `MachineInstr*` which it updated to the instruction bundle leader; now,
  the latter calls the former either with the same `MachineInstr&` or the
  bundle leader.

  As a side effect, this removes a bunch of MachineInstr* to
  MachineBasicBlock::iterator implicit conversions, a necessary step toward
  fixing PR26753.

  Note: I updated WebAssembly, Lanai, and AVR (despite being off-by-default)
  since it turned out to be easy. I couldn't run tests for AVR since llc
  doesn't link with it turned on.

  llvm-svn: 274189
* Pass DebugLoc and SDLoc by const ref. (Benjamin Kramer, 2016-06-12; 1 file, -12/+14)

  This used to be free, copying and moving DebugLocs became expensive after
  the metadata rewrite. Passing by reference eliminates a ton of
  track/untrack operations. No functionality change intended.

  llvm-svn: 272512
* AArch64: Do not test for CPUs, use SubtargetFeatures (Matthias Braun, 2016-06-02; 1 file, -10/+9)

  Testing for specific CPUs has a number of problems, better use subtarget
  features:
  - When some tweak is added for a specific CPU it is often desirable for
    the next version of that CPU as well, yet we often forget to add it.
  - It is hard to keep track of checks scattered around the target code;
    declaring all target specifics together with the CPU in the tablegen
    file is a clear representation.
  - Subtarget features can be tweaked from the command line.

  To discourage people from using CPU checks in the future I removed the
  isCortexXX(), isCyclone(), ... functions. I added a getProcFamily()
  function for exceptional circumstances but made it clear in the comment
  that usage is discouraged.

  Reformat feature list in AArch64.td to have 1 feature per line in
  alphabetical order to simplify merging and sorting for out of tree
  tweaks.

  No functional change intended.

  Differential Revision: http://reviews.llvm.org/D20762
  llvm-svn: 271555
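  The direction of the change, on an invented example (isCyclone() is one
  of the removed helpers named above; hasFancyTweak() is a made-up stand-in
  for a generated SubtargetFeature accessor):

    // Before (discouraged, and the helpers were removed by this commit):
    //   if (Subtarget.isCyclone())
    //     ApplyTweak = true;
    //
    // After: declare a SubtargetFeature on the relevant CPUs in the .td
    // file and query the generated accessor instead.
    template <typename SubtargetT>
    static bool shouldApplyTweakSketch(const SubtargetT &Subtarget) {
      return Subtarget.hasFancyTweak(); // hypothetical feature accessor
    }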
* Delete AArch64II::MO_CONSTPOOL. (Rafael Espindola, 2016-05-31; 1 file, -2/+1)

  A constant pool holding the address of a variable is equivalent to a got
  entry. It produces exactly the same instruction sequence as a got use
  and, unlike a got use, this is not uniqued by the linker.

  llvm-svn: 271311
* AArch64: Fix indentation (Matthias Braun, 2016-05-28; 1 file, -9/+9)

  llvm-svn: 271084
* Apply clang-tidy's misc-static-assert where it makes sense. (Benjamin Kramer, 2016-05-27; 1 file, -4/+4)

  Also fold conditions into assert(0) where it makes sense. No functional
  change intended.

  llvm-svn: 270982
* [foldMemoryOperand()] Pass LiveIntervals to enable liveness check. (Jonas Paulsson, 2016-05-10; 1 file, -1/+2)

  SystemZ (and probably other targets as well) can fold a memory operand by
  changing the opcode into a new instruction that as a side-effect also
  clobbers the CC-reg. In order to do this, liveness of that reg must first
  be checked. When LIS is passed, getRegUnit() can be called on it and the
  right LiveRange is computed on demand.

  Reviewed by Matthias Braun.
  http://reviews.llvm.org/D19861

  llvm-svn: 269026
* [AArch64] Combine callee-save and local stack SP adjustment instructions. (Geoff Berry, 2016-05-06; 1 file, -0/+3)

  Summary: If a function needs to allocate both callee-save stack memory
  and local stack memory, we currently decrement/increment the SP in two
  steps: first for the callee-save area, and then for the local stack area.
  This changes the code to allocate them both at once at the very
  beginning/end of the function. This has two benefits:
  1) there is one fewer sub/add micro-op in the prologue/epilogue
  2) the stack adjustment instructions act as a scheduling barrier, so
     moving them to the very beginning/end of the function increases
     post-RA scheduler's ability to move instructions (that only depend on
     argument registers) before any of the callee-save stores

  This change can cause an increase in instructions if the original local
  stack SP decrement could be folded into the first store to the stack.
  This occurs when the first local stack store is to stack offset 0. In
  this case we are trading off one more sub instruction for one fewer sub
  micro-op (along with benefits (2) and (3) above).

  Reviewers: t.p.northover

  Subscribers: aemerson, rengolin, mcrosier, llvm-commits

  Differential Revision: http://reviews.llvm.org/D18619
  llvm-svn: 268746
* [AArch64] Add cheap as move instructions for Exynos M1 (Evandro Menezes, 2016-05-04; 1 file, -2/+33)

  llvm-svn: 268549
* AArch64/optimizeCondBranch: Remove earlier kill flag when forming TBZ (Matthias Braun, 2016-05-03; 1 file, -0/+2)

  This fixes -verify-machineinstrs complaints when compiling
  test-suite/SingleSource/Benchmarks/Shootout-C++/wordfreq.cpp

  llvm-svn: 268360
* Cleanup comments. NFC. (Chad Rosier, 2016-05-02; 1 file, -2/+3)

  llvm-svn: 268236
* Re-apply r267206 with a fix for the encoding problem: when the immediate of log2(Mask) is smaller than 32, we must use the 32-bit variant because the 64-bit variant cannot encode it. Therefore, set the subreg part accordingly. (Quentin Colombet, 2016-04-25; 1 file, -3/+14)

  [AArch64] Fix optimizeCondBranch logic.

  The opcode for the optimized branch does not depend on the size of the
  activate bits in the AND masks, but the AND opcode itself. Indeed, we
  need to use a X or W variant based on the AND variant not based on
  whether the mask fits into the related variant. Otherwise, we may end up
  using the W variant of the optimized branch for 64-bit register inputs!

  This fixes the last make check verifier issues for AArch64: PR27479.

  llvm-svn: 267465
* [MachineCombiner] Support for floating-point FMA on ARM64 (re-commit r267098) (Gerolf Hoflehner, 2016-04-24; 1 file, -35/+545)

  The original patch caused crashes because it could dereference a null
  pointer for SelectionDAGTargetInfo for targets that do not define it.

  Evaluates fmul+fadd -> fmadd combines and similar code sequences in the
  machine combiner. It adds support for float and double similar to the
  existing integer implementation. The key features are:
  - DAGCombiner checks whether it should combine greedily or let the
    machine combiner do the evaluation. This is only supported on ARM64.
  - It gives preference to throughput over latency: the heuristic used is
    to combine always in loops. The target decides whether the machine
    combiner should optimize for throughput or latency.
  - Support for fmadd, f(n)msub, fmla, fmls patterns
  - On by default at O3 ffast-math

  llvm-svn: 267328
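  A source-level example of the kind of sequence this targets (whether the
  fusion fires depends on the fast-math/contraction flags and on the
  combiner's throughput-versus-latency heuristic described above):

    // Built for AArch64 at -O3 with fast-math (or, per the later
    // -ffp-contract=fast change, with contraction enabled), the separate
    // fmul + fadd below can be combined into a single fmadd.
    double muladd(double a, double b, double c) {
      return a * b + c;
    }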
* Revert "[AArch64] Fix optimizeCondBranch logic."Renato Golin2016-04-231-5/+5
| | | | | | This reverts commit r267206, as it broke self-hosting on AArch64. llvm-svn: 267294
* [AArch64] Fix optimizeCondBranch logic. (Quentin Colombet, 2016-04-22; 1 file, -5/+5)

  The opcode for the optimized branch does not depend on the size of the
  activate bits in the AND masks, but the AND opcode itself. Indeed, we
  need to use a X or W variant based on the AND variant not based on
  whether the mask fits into the related variant. Otherwise, we may end up
  using the W variant of the optimized branch for 64-bit register inputs!

  This fixes the last make check verifier issues for AArch64: PR27479.

  llvm-svn: 267206
* [AArch64] When creating MRS instruction, make sure the destination register is declared as a definition. (Quentin Colombet, 2016-04-22; 1 file, -2/+1)

  This fixes the machine verifier error for CodeGen/AArch64/nzcv-save.ll.

  llvm-svn: 267185
* Revert r267098 - [MachineCombiner] Support for floating-point FMA on ARM64 (Daniel Sanders, 2016-04-22; 1 file, -545/+35)

  It introduced buildbot failures on clang-cmake-mips, clang-ppc64le-linux,
  among others.

  llvm-svn: 267127
* [MachineCombiner] Support for floating-point FMA on ARM64 (Gerolf Hoflehner, 2016-04-22; 1 file, -35/+545)

  Evaluates fmul+fadd -> fmadd combines and similar code sequences in the
  machine combiner. It adds support for float and double similar to the
  existing integer implementation. The key features are:
  - DAGCombiner checks whether it should combine greedily or let the
    machine combiner do the evaluation. This is only supported on ARM64.
  - It gives preference to throughput over latency: the heuristic used is
    to combine always in loops. The target decides whether the machine
    combiner should optimize for throughput or latency.
  - Support for fmadd, f(n)msub, fmla, fmls patterns
  - On by default at O3 ffast-math

  llvm-svn: 267098
* [AArch64][CodeGen] Fix of PR27158: incorrect peephole optimization in AArch64InstrInfo::optimizeCompareInstr (Evgeny Astigeevich, 2016-04-21; 1 file, -73/+155)

  AArch64InstrInfo::optimizeCompareInstr has bug PR27158 which causes
  generation of incorrect code. A compare instruction is substituted with
  another instruction which does not produce the same flags as the original
  compare instruction.

  This patch contains:
  1. Fix of the bug.
  2. A regression test in MIR.
  3. A new test to check that SUBS is replaced by SUB.

  Differential Revision: http://reviews.llvm.org/D18838
  llvm-svn: 266969
* [AArch64] Add load/store pair instructions to getMemOpBaseRegImmOfsWidth(). (Chad Rosier, 2016-04-15; 1 file, -5/+46)

  This improves AA in the MI scheduler when reasoning about paired
  instructions.

  Phabricator Revision: http://reviews.llvm.org/D17098
  PR26358

  llvm-svn: 266462
* [MachineScheduler] Add support for store clustering (Jun Bum Lim, 2016-04-15; 1 file, -3/+13)

  Perform store clustering just like load clustering. This change adds
  StoreClusterMutation in machine-scheduler. To control
  StoreClusterMutation, added enableClusterStores() in TargetInstrInfo.h.
  This is enabled only on AArch64 for now.

  This change also adds support for unscaled stores which were not handled
  in getMemOpBaseRegImmOfs().

  llvm-svn: 266437
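  A minimal sketch of the hook relationship the message describes (not the
  real class hierarchy; the base-class default of "off" is an assumption):

    // Stand-in types only; the real hook was added to TargetInstrInfo and
    // overridden by AArch64InstrInfo.
    struct TargetInstrInfoLikeSketch {
      virtual ~TargetInstrInfoLikeSketch() = default;
      // New hook controlling StoreClusterMutation in the scheduler.
      virtual bool enableClusterStores() const { return false; }
    };

    struct AArch64InstrInfoLikeSketch : TargetInstrInfoLikeSketch {
      // AArch64 opts in, so store clustering is performed for it.
      bool enableClusterStores() const override { return true; }
    };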