path: root/llvm/lib/Target
Commit message (Author, Date, Files, Lines -/+)
...
* GlobalISel: support selection of G_ICMP on AArch64. (Tim Northover, 2016-10-12, 1 file, -0/+71)
  Patch from Ahmed Bougacha again.
  llvm-svn: 284072
* GlobalISel: select G_BRCOND instructions on AArch64. (Tim Northover, 2016-10-12, 1 file, -0/+22)
  llvm-svn: 284071
* GlobalISel: mark G_BRCOND on s1 as legal. (Tim Northover, 2016-10-12, 1 file, -3/+2)
  It's going to be a TBNZ (at -O0) anyway, so the high bits don't matter.
  llvm-svn: 284070
* Create llvm.addressofreturnaddress intrinsic (Albert Gutowski, 2016-10-12, 2 files, -0/+8)
  Summary: We need a new LLVM intrinsic to implement the MS _AddressOfReturnAddress builtin on 64-bit Windows.
  Reviewers: majnemer, rnk
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D25293
  llvm-svn: 284061
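  A hypothetical C++ usage sketch (not from the commit) of the MSVC builtin the new intrinsic lowers: it yields the address of the stack slot holding the return address, i.e. a pointer to what _ReturnAddress() would read.

    #include <intrin.h>

    // Returns the address of the caller's return-address slot on the stack.
    void *current_return_slot() {
      return _AddressOfReturnAddress();
    }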
* AMDGPU: Initial implementation of VGPR indexing mode (Matt Arsenault, 2016-10-12, 3 files, -43/+194)
  This is the most basic handling of the indirect access pseudos using GPR indexing mode. This currently only enables the mode for a single v_mov_b32 and then disables it. This is much more complicated to use than the movrel instructions, so a new optimization pass is probably needed to fold the access into the uses and keep the mode enabled for them.
  llvm-svn: 284031
* AMDGPU: Add instruction definitions for VGPR indexing (Matt Arsenault, 2016-10-12, 10 files, -8/+126)
  VI added a second method of indexing into VGPRs besides using v_movrel*.
  llvm-svn: 284027
* AMDGPU/SI: Change mimg intrinsic signatures (Tom Stellard, 2016-10-12, 1 file, -18/+23)
  This makes more fields overridable and removes redundant bits.
  Patch by: Changpeng Fang
  llvm-svn: 284024
* NFC: The Cost Model specialization, by Andrey Tischenko (Alexey Bataev, 2016-10-12, 1 file, -0/+25)
  The current Cost Model implementation is very inaccurate and has to be updated, improved, and re-implemented to take into account the concrete CPU models and the concrete targets where this Cost Model is being used. For example, the Latency Cost Model should differ from the Code Size Cost Model, etc. This patch is the first step to launch the development and implementation of a new Cost Model generation.
  Differential Revision: https://reviews.llvm.org/D25186
  llvm-svn: 284012
* [AArch64][InstructionSelector] Teach the selector about G_BITCAST. (Quentin Colombet, 2016-10-12, 1 file, -59/+2)
  llvm-svn: 283973
* [AArch64][InstructionSelector] Refactor the handling of copies. (Quentin Colombet, 2016-10-12, 1 file, -26/+83)
  Although copies are not specific to preISel, we still have to assign them a proper register class. However, given they are not constrained to anything, we do not have to handle the source register at the copy; it will be properly mapped when reaching the related definition. In the process, the handling of G_ANYEXT is slightly modified, as those end up being selected as copies. The difference is that when the register sizes do not match on both sides, we need to insert a SUBREG_TO_REG operation, otherwise the post-RA copy expansion will not be happy!
  llvm-svn: 283972
* [AArch64][MachineLegalizer] Mark more bitcasts as legal. (Quentin Colombet, 2016-10-12, 1 file, -0/+3)
  Those are copies; we do not have to do any legalization action for them.
  llvm-svn: 283970
* [PPCMIPeephole] Fix splat elimination (Tim Shen, 2016-10-12, 1 file, -3/+5)
  Summary: In PPCMIPeephole, when we see two splat instructions, we can't simply do the following transformation:
    B = Splat A
    C = Splat B
    =>
    C = Splat A
  because B may still be used between these two instructions. Instead, we should make the second Splat a PPC::COPY and let later passes decide whether to remove it or not:
    B = Splat A
    C = Splat B
    =>
    B = Splat A
    C = COPY B
  Fixes PR30663.
  Reviewers: echristo, iteratee, kbarton, nemanjai
  Subscribers: mehdi_amini, llvm-commits
  Differential Revision: https://reviews.llvm.org/D25493
  llvm-svn: 283961
* GlobalISel: support same-size casts on AArch64. (Tim Northover, 2016-10-11, 2 files, -0/+75)
  Mostly Ahmed's work again; I'm just sprucing things up slightly before committing.
  llvm-svn: 283952
* Re-land "[Thumb] Save/restore high registers in Thumb1 pro/epilogues" (Reid Kleckner, 2016-10-11, 3 files, -24/+375)
  Reverts r283938 to reinstate r283867 with a fix. The original change had an ArrayRef referring to a destroyed temporary initializer list. Use plain C arrays instead.
  llvm-svn: 283942
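  A minimal C++ illustration (not the commit's code) of the bug class this re-land fixes: an ArrayRef bound to a temporary initializer list dangles as soon as the full expression ends, whereas a plain C array with static storage outlives it.

    #include "llvm/ADT/ArrayRef.h"

    llvm::ArrayRef<unsigned> Bad = {1, 2, 3};  // backing temporary dies here: dangling
    static const unsigned Regs[] = {1, 2, 3};
    llvm::ArrayRef<unsigned> Good(Regs);       // array outlives the ArrayRef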
* Revert "[Thumb] Save/restore high registers in Thumb1 pro/epilogues"Reid Kleckner2016-10-113-369/+24
| | | | | | | | | | | | | | | | | | This reverts r283867. This appears to be an infinite loop: while (HiRegToSave != AllHighRegs.end() && CopyReg != AllCopyRegs.end()) { if (HiRegsToSave.count(*HiRegToSave)) { ... CopyReg = findNextOrderedReg(++CopyReg, CopyRegs, AllCopyRegs.end()); HiRegToSave = findNextOrderedReg(++HiRegToSave, HiRegsToSave, AllHighRegs.end()); } } llvm-svn: 283938
* GlobalISel: support selection of extend operations. (Tim Northover, 2016-10-11, 1 file, -0/+99)
  Patch mostly by Ahmed Bougacha.
  llvm-svn: 283937
* [AMDGPU] Refactor waitcnt encoding (Konstantin Zhuravlyov, 2016-10-11, 5 files, -66/+171)
  - Refactor bit packing/unpacking
  - Calculate bit mask given bit shift and bit width
  - Introduce function for decoding bits of waitcnt
  - Introduce function for encoding bits of waitcnt
  - Introduce function for getting waitcnt mask (instead of using bare numbers)
  - Introduce function for getting max waitcnt(s) (instead of using bare numbers)
  Differential Revision: https://reviews.llvm.org/D25298
  llvm-svn: 283919
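  A hedged C++ sketch of the shift/width helpers the list above describes; the names are illustrative, not the actual AMDGPU utility functions.

    // Mask covering a field of Width bits starting at Shift.
    unsigned getBitMask(unsigned Shift, unsigned Width) {
      return ((1u << Width) - 1) << Shift;
    }
    // Clear the field in Word, then insert Value masked to its width.
    unsigned packBits(unsigned Word, unsigned Value, unsigned Shift, unsigned Width) {
      return (Word & ~getBitMask(Shift, Width)) |
             ((Value << Shift) & getBitMask(Shift, Width));
    }
    // Extract the field back out of Word.
    unsigned unpackBits(unsigned Word, unsigned Shift, unsigned Width) {
      return (Word & getBitMask(Shift, Width)) >> Shift;
    }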
* Fix "static initialization order fiasco" for the XCore Target.Mehdi Amini2016-10-116-15/+19
| | | | | | | | I fixed all the other Targets in r283702, and interestingly the sanitizers are only now "sometimes" catching this bug on the only one I missed. llvm-svn: 283914
* ARMMachineFunctionInfo.cpp: Add an initializer of ARMFunctionInfo::ReturnRegsCount in the explicit ctor. (NAKAMURA Takumi, 2016-10-11, 1 file, -2/+2)
  It caused a crash since r283867.
  llvm-svn: 283909
* Reformat. (NAKAMURA Takumi, 2016-10-11, 1 file, -4/+4)
  llvm-svn: 283908
* Silence unused warning in non-assert builds. (Daniel Jasper, 2016-10-11, 1 file, -0/+1)
  llvm-svn: 283899
* AMDGPU/SI: Update ISA version numbers for Tonga and Polaris10/11. (Changpeng Fang, 2016-10-11, 4 files, -3/+8)
  Differential Revision: http://reviews.llvm.org/D25454
  Reviewers: tstellarAMD
  llvm-svn: 283893
* [Thumb] Save/restore high registers in Thumb1 pro/epilogues (Oliver Stannard, 2016-10-11, 3 files, -24/+368)
  The high registers are not allocatable in Thumb1 functions, but they could still be used by inline assembly, so we need to save and restore the callee-saved high registers (r8-r11) in the prologue and epilogue. This is complicated by the fact that the Thumb1 push and pop instructions cannot access these registers. Therefore, we have to move them down into low registers before pushing, and move them back after popping into low registers.
  In most functions, we will have low registers that are also being pushed/popped, which we can use as the temporary registers for saving/restoring the high registers. However, this is not guaranteed, so we may need to push some extra low registers to ensure that the high registers can be saved/restored. For correctness, it would be sufficient to use just one low register, but if we have enough low registers available then we only need one push/pop instruction, rather than one per high register. We can also use the argument/return registers when they are not live, and the link register when saving (but not restoring), reducing the number of extra registers we need to push. There are still a few extreme edge cases where we need two push/pop instructions, because not enough low registers can be made live in the prologue or epilogue.
  In addition to the regression tests included here, I've also tested this using a script to generate functions which clobber different combinations of registers, have different numbers of argument and return registers (including variadic arguments), allocate different fixed sized objects on the stack, and do or don't use variable sized allocas and the __builtin_return_address intrinsic (all of which affect the available registers in the prologue and epilogue). I ran these functions in a test harness which verifies that all of the callee-saved registers are correctly preserved.
  Differential Revision: https://reviews.llvm.org/D24228
  llvm-svn: 283867
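  A hypothetical C++/GNU-asm example (not from the patch) of the situation being handled: a Thumb1 function whose inline assembly clobbers the callee-saved high registers, forcing the prologue/epilogue to shuffle them through low registers around push/pop.

    // The empty asm statement marks r8-r11 as clobbered, so the compiler
    // must preserve them across the call in a Thumb1 function.
    void clobber_high_regs() {
      __asm__ volatile("" ::: "r8", "r9", "r10", "r11");
    }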
* [ARM] Fix registers clobbered by SjLj EH on soft-float targets (Oliver Stannard, 2016-10-11, 4 files, -2/+15)
  Currently, the Int_eh_sjlj_dispatchsetup intrinsic is marked as clobbering all registers, including floating-point registers that may not be present on the target. This is technically true, as we could get linked against code that does use the FP registers, but that will not actually work, as the soft-float code cannot save and restore the FP registers. SjLj exception handling can only work correctly if either all or none of the code is built for a target with FP registers.
  Therefore, we can assume that, when Int_eh_sjlj_dispatchsetup is compiled for a soft-float target, it is only going to be linked against other soft-float code, and so only clobbers the general-purpose registers. This allows us to check that no non-savable registers are clobbered when generating the prologue/epilogue.
  Differential Revision: https://reviews.llvm.org/D25180
  llvm-svn: 283866
* [AArch64] Allow label arithmetic with add/sub/cmp (Diana Picus, 2016-10-11, 3 files, -26/+44)
  Allow instructions such as 'cmp w0, #(end - start)' by folding the expression into a constant. For ELF, we fold only if the symbols are in the same section. For MachO, we fold if the expression contains only symbols that are not linker visible.
  Fixes https://llvm.org/bugs/show_bug.cgi?id=18920
  Differential Revision: https://reviews.llvm.org/D23834
  llvm-svn: 283862
* [AArch64][InstructionSelector] Teach how to select FP load/store. (Quentin Colombet, 2016-10-11, 1 file, -0/+7)
  This patch allows selecting 32- and 64-bit FP loads and stores.
  llvm-svn: 283832
* [AArch64][InstructionSelector] Teach the selector how to handle vector OR. (Quentin Colombet, 2016-10-11, 1 file, -0/+2)
  This only adds support for 64-bit vector OR. Adding more sizes is not difficult, but it requires a bigger refactoring, because ORs work on any size, not necessarily the ones that match the width of the register. Right now, this is not expressed in the legalization, so don't bother pushing the refactoring yet.
  llvm-svn: 283831
* [AArch64][MachineLegalizer] Mark v2s32 G_LOAD as legal. (Quentin Colombet, 2016-10-11, 1 file, -1/+1)
  Actually, every 64-bit load is legal, but right now the API does not offer a simple way to express that.
  llvm-svn: 283829
* Revert r283690, "MC: Remove unused entities." (Peter Collingbourne, 2016-10-10, 12 files, -14/+22)
  llvm-svn: 283814
* GlobalISel: select G_GLOBAL_VALUE uses on AArch64. (Tim Northover, 2016-10-10, 3 files, -4/+30)
  llvm-svn: 283809
* GlobalISel: allow G_GLOBAL_VALUEs in AArch64 legalization. (Tim Northover, 2016-10-10, 1 file, -0/+1)
  llvm-svn: 283808
* GlobalISel: support selecting G_GEP instructions. (Tim Northover, 2016-10-10, 1 file, -1/+3)
  They're basically just an alias for G_ADD on AArch64.
  llvm-svn: 283807
* GlobalISel: support selecting constants on AArch64. (Tim Northover, 2016-10-10, 1 file, -0/+10)
  llvm-svn: 283806
* [ARM] Fix invalid VLDM/VSTM access when targeting Big Endian with NEON (Alexandros Lamprineas, 2016-10-10, 1 file, -2/+12)
  The instructions VLDM/VSTM can only access word-aligned memory locations and produce an alignment fault if the condition is not met. The compiler currently generates VLDM/VSTM for a v2f64 load/store regardless of the alignment of the memory access. Instead, if a v2f64 load/store is not word-aligned, the compiler should generate VLD1/VST1. For each non-double-word-aligned VLD1/VST1, a VREV instruction should be generated when targeting Big Endian.
  Differential Revision: https://reviews.llvm.org/D25281
  llvm-svn: 283763
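  A hypothetical C++ sketch of the problem case: a 16-byte double vector loaded from a location guaranteed only word (4-byte) alignment. VLDM would fault here; after this fix the backend should emit VLD1 (plus VREV on big-endian) instead.

    #include <cstdint>
    #include <cstring>

    typedef double v2f64 __attribute__((vector_size(16)));

    v2f64 load_word_aligned(const std::uint32_t *p) {
      v2f64 v;
      std::memcpy(&v, p, sizeof v);  // alignment-safe copy of 16 bytes
      return v;
    }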
* [X86] Prefer rotate by 1 over rotate by imm (Zvi Rackover, 2016-10-10, 1 file, -4/+4)
  Summary: Rotate by 1 is translated to 1 micro-op, while rotate with imm8 is translated to 2 micro-ops.
  Fixes pr30644.
  Reviewers: delena, igorb, craig.topper, spatel, RKSimon
  Differential Revision: https://reviews.llvm.org/D25399
  llvm-svn: 283758
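  A minimal C++ example of the affected pattern: a rotate-left by one, which the backend can now emit as the implicit-by-one "rol %reg" form (1 micro-op) rather than "rol $1, %reg" (2 micro-ops on the CPUs in question).

    #include <cstdint>

    // Rotate x left by 1 bit; recognized by the backend as a rotate.
    std::uint32_t rotl1(std::uint32_t x) {
      return (x << 1) | (x >> 31);
    }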
* This pass, fixing an erratum in some LEON 2 processors, ensures that the SDIV instruction is not issued but replaced by SDIVcc instead, which does not exhibit the error. Unit test included. (Chris Dewhurst, 2016-10-10, 5 files, -1/+18)
  Differential Revision: https://reviews.llvm.org/D24660
  llvm-svn: 283727
* Fix WebAssembly build after r283702. (Daniel Jasper, 2016-10-10, 1 file, -2/+8)
  llvm-svn: 283723
* [AVX-512] Add missing pattern for sext or zext from bytes to quad words with a 128-bit load as input. (Craig Topper, 2016-10-10, 1 file, -0/+2)
  llvm-svn: 283720
* [x86][inline-asm][llvm] accept 'v' constraint (Michael Zuckerman, 2016-10-10, 1 file, -0/+15)
  Commit in the name of: Coby Tayree
  1. The 'v' constraint for (x86) non-avx512 arches imitates the already implemented 'x' constraint, i.e. it allows XMM{0-15} & YMM{0-15}, depending on the apparent arch & mode (32/64).
  2. For the avx512 arch it allows [X,Y,Z]MM{0-31} (mode dependent).
  This patch applies the needed changes to LLVM.
  Clang patch: https://reviews.llvm.org/D25004
  Differential Revision: D25005
  llvm-svn: 283717
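  A hypothetical C++ use of the 'v' constraint: on AVX-512 targets the operand may live in any of [X,Y,Z]MM0-31; without AVX-512 it behaves like 'x' and is limited to XMM/YMM0-15.

    #include <immintrin.h>

    __m256 touch(__m256 a) {
      // Empty asm forces 'a' into a 'v'-class vector register.
      __asm__("" : "+v"(a));
      return a;
    }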
* [AVR] Enable generation of the TableGen assembly writer tables (Dylan McKay, 2016-10-10, 1 file, -2/+3)
  This also changes the order of the statements in CMakeLists.txt to be alphabetical.
  llvm-svn: 283711
* [AVX-512] Port 128 and 256-bit memory->register sign/zero extend patterns from SSE file. Also add a minimal set for 512-bit. (Craig Topper, 2016-10-09, 1 file, -0/+142)
  llvm-svn: 283704
* [X86] Remove redundant patterns. The same pattern appears a few lines up. (Craig Topper, 2016-10-09, 1 file, -6/+0)
  llvm-svn: 283703
* Move the global variables representing each Target behind accessor function (Mehdi Amini, 2016-10-09, 100 files, -331/+470)
  This avoids the "static initialization order fiasco".
  Differential Revision: https://reviews.llvm.org/D25412
  llvm-svn: 283702
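  A minimal C++ sketch of the construct-on-first-use idiom this change applies; the Foo names are illustrative, not the actual accessors added.

    #include "llvm/Support/TargetRegistry.h"

    llvm::Target &getTheFooTarget() {
      // Constructed on first call, so its initialization no longer races
      // with static initializers in other translation units.
      static llvm::Target TheFooTarget;
      return TheFooTarget;
    }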
* DAG: Setting Masked-Expand-Load as a variant of Masked-Load node (Elena Demikhovsky, 2016-10-09, 4 files, -30/+52)
  A masked-expand-load node represents a load operation that loads a variable number of elements from memory, according to the number of "true" bits in the mask, and expands the loaded elements according to their position in the mask vector.
  Right now, the node is used in intrinsics for VEXPAND* instructions. The work is being done towards the implementation of the masked.expandload and masked.compressstore intrinsics.
  Differential Revision: https://reviews.llvm.org/D25322
  llvm-svn: 283694
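  A C++ sketch of the expand-load semantics via the AVX-512 intrinsic that the VEXPAND* instructions back: consecutive doubles are read from memory into the lanes of src whose mask bits are set; the other lanes keep their src values.

    #include <immintrin.h>

    __m512d expand_load(__m512d src, __mmask8 m, const double *p) {
      return _mm512_mask_expandloadu_pd(src, m, p);
    }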
* [AVX-512] Fix execution domain for EVEX encoded VINSERTPS. (Craig Topper, 2016-10-09, 1 file, -0/+2)
  llvm-svn: 283692
* MC: Remove unused entities. (Peter Collingbourne, 2016-10-09, 12 files, -22/+14)
  llvm-svn: 283691
* Target: Remove unused entities. (Peter Collingbourne, 2016-10-09, 9 files, -51/+4)
  llvm-svn: 283690
* [AVX-512] Add subvector insert and extract to load/store folding tables. (Craig Topper, 2016-10-09, 1 file, -0/+25)
  llvm-svn: 283689
* [AVX-512] Add the vector down convert instructions to the store folding tables. (Craig Topper, 2016-10-09, 1 file, -0/+24)
  llvm-svn: 283687
* Turn cl::values() (for enum) from a vararg function to using a C++ variadic template (Mehdi Amini, 2016-10-08, 6 files, -12/+6)
  The core of the change is supposed to be NFC; however, it also fixes what I believe was undefined behavior when calling:
    va_start(ValueArgs, Desc);
  with Desc being a StringRef.
  Differential Revision: https://reviews.llvm.org/D25342
  llvm-svn: 283671
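  A hedged C++ sketch of a cl::values() use affected by this change; the option and enum are illustrative. The variadic-template form no longer funnels arguments through C varargs, which was undefined behavior for a non-POD type such as StringRef (later cleanups also dropped the clEnumValEnd sentinel, so it is omitted here).

    #include "llvm/Support/CommandLine.h"

    enum OptLevel { Fast, Small };

    static llvm::cl::opt<OptLevel> Level(
        "opt-for", llvm::cl::desc("Optimization preference"),
        llvm::cl::values(clEnumValN(Fast, "speed", "Optimize for speed"),
                         clEnumValN(Small, "size", "Optimize for size")));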