path: root/llvm/lib/Target
...
* Rename PPC MTCTRse to MTCTRloop (Hal Finkel, 2013-05-20, 3 files, -7/+7)
  As the pairing of this instruction form with the bdnz/bdz branches is now enforced by the verification pass, make it clear from the name that these are used only for counter-based loops. No functionality change intended.
  llvm-svn: 182296
* Add a PPCCTRLoops verification pass (Hal Finkel, 2013-05-20, 3 files, -0/+164)
  When asserts are enabled, this adds a verification pass for PPC counter-loop formation. Unfortunately, without sacrificing code quality, there is no better way of forming counter-based loops except at the (late) IR level. This means that we need to recognize, at the IR level, anything which might turn into a function call (or indirect branch). Because this is currently a finite set of things, and because SelectionDAG lowering is basic-block local, this can be done. Nevertheless, it is fragile, and failure results in a miscompile.
  This verification pass checks that all (reachable) counter-based branches are dominated by a loop mtctr instruction, and that no instructions in between clobber the counter register. If these conditions are not satisfied, then an ICE will be triggered. In short, this is to help us sleep better at night.
  llvm-svn: 182295
* R600: Fix bug detected by GCC warning. (Benjamin Kramer, 2013-05-20, 1 file, -2/+2)
  R600TextureIntrinsicsReplacer.cpp:232: warning: the address of ‘ArgsType’ will always evaluate as ‘true’
  This doesn't have any effect on the output as a vararg intrinsic behaves the same way as a non-vararg one.
  llvm-svn: 182293
* R600/SI: Use a multiclass for MUBUF_Load_Helper (Tom Stellard, 2013-05-20, 2 files, -20/+30)
  This will simplify the instructions and also the pattern definitions.
  Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
  llvm-svn: 182288
* R600/SI: Add a pattern for S_LOAD_DWORDX2_* instructions (Tom Stellard, 2013-05-20, 1 file, -0/+1)
  Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
  llvm-svn: 182287
* R600/SI: Add pattern for rotr (Tom Stellard, 2013-05-20, 1 file, -0/+2)
  Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
  llvm-svn: 182286
* R600: Swap the legality of rotl and rotr (Tom Stellard, 2013-05-20, 7 files, -28/+11)
  The hardware supports rotr and not rotl.
  llvm-svn: 182285
* R600/SI: Add patterns for 64-bit shift operations (Tom Stellard, 2013-05-20, 2 files, -3/+22)
  Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
  llvm-svn: 182284
* R600/SI: Use the same names for VOP3 operands and encoding fields (Tom Stellard, 2013-05-20, 2 files, -37/+37)
  This makes it possible to reorder the operands without breaking the encoding.
  Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
  llvm-svn: 182283
* R600/SI: Make fitsRegClass() operands const (Tom Stellard, 2013-05-20, 2 files, -2/+3)
  Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
  llvm-svn: 182282
* VSTn instructions have a number of encoding constraints which are not implemented (Mihai Popa, 2013-05-20, 2 files, -21/+72)
  I have added these using wrapper methods around the original custom decoder. (Incidentally, this is a huge, poorly written method that should be cleaned up; I have left it as is since the changes would be much too hard to review.)
  llvm-svn: 182281
* Q registers are encoded in fields of the same length as D registers (Mihai Popa, 2013-05-20, 1 file, -1/+1)
  As there are half as many Q registers, the ARM reference manual mandates that the least significant bit of the field be zeroed out; failure to do so should result in an undefined instruction. With this change, test/MC/Disassembler/ARM/invalid-VQADD-arm.txt passes (the XFAIL has been removed).
  llvm-svn: 182279
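  To make the constraint concrete, here is a minimal sketch (not the actual LLVM decoder; decodeQRegister is a made-up name) of how a Q-register field with the mandated even encoding could be checked and converted to a register index:

    #include <cstdint>
    #include <optional>

    // A Q register occupies two consecutive D registers, so the register field
    // (same width as a D-register field) must have its least significant bit
    // clear; an odd value does not name a valid Q register and should decode
    // as an undefined instruction.
    std::optional<unsigned> decodeQRegister(uint8_t regField) {
      if (regField & 1)
        return std::nullopt;   // reject: undefined encoding
      return regField >> 1;    // Q register index
    }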
* [SystemZ] Add long branch pass (Richard Sandiford, 2013-05-20, 11 files, -34/+399)
  Before this change, the SystemZ backend would use BRCL for all branches and only consider shortening them to BRC when generating an object file. E.g. a branch on equal would use the JGE alias of BRCL in assembly output, but might be shortened to the JE alias of BRC in ELF output. This was a useful first step, but it had two problems:
  (1) The z assembler isn't traditionally supposed to perform branch shortening or branch relaxation. We followed this rule by not relaxing branches in assembler input, but that meant that generating assembly code and then assembling it would not produce the same result as going directly to object code; the former would give long branches everywhere, whereas the latter would use short branches where possible.
  (2) Other useful branches, like COMPARE AND BRANCH, do not have long forms. We would need to do something else before supporting them. (Although COMPARE AND BRANCH does not change the condition codes, the plan is to model COMPARE AND BRANCH as a CC-clobbering instruction during codegen, so that we can safely lower it to a separate compare and long branch where necessary. This is not a valid transformation for the assembler proper to make.)
  This patch therefore moves branch relaxation to a pre-emit pass. For now, calls are still shortened from BRASL to BRAS by the assembler, although this too is not really the traditional behaviour.
  The first test takes about 1.5s to run, and there are likely to be more tests in this vein once further branch types are added. The feeling on IRC was that 1.5s is a bit much for a single test, so I've restricted it to SystemZ hosts for now.
  The patch exposes (and fixes) some typos in the main CodeGen/SystemZ tests. A later patch will remove the {{g}}s from that directory.
  llvm-svn: 182274
* [NVPTX] Add GenericToNVVM IR converter to better handle idiomatic LLVM IR inputs (Justin Holewinski, 2013-05-20, 7 files, -80/+525)
  This converter currently only handles global variables in address space 0. These variables are promoted to address space 1 (global memory), and all uses are updated to point to the result of a cvta.global instruction on the new variable.
  The motivation for this is that address space 0 global variables are illegal, since we cannot declare variables in the generic address space. Instead, we place the variables in address space 1 and explicitly convert the pointer to address space 0. This is primarily intended to help new users who expect to be able to place global variables in the default address space.
  llvm-svn: 182254
* [NVPTX] Fix i1 kernel parameters and global variables (Justin Holewinski, 2013-05-20, 1 file, -2/+12)
  ABI rules say we need to use .u8 for i1 parameters for kernels.
  llvm-svn: 182253
* PR15868 fix. (Stepan Dyatkovskiy, 2013-05-20, 5 files, -11/+69)
  Introduction: when stack alignment is 8 and the size of the GPRs part of a parameter is not a multiple of 8, we add padding to the GPRs part, so the part's last byte must be recovered at address K*8-1. We need to do this since the remaining (stack) part of the parameter starts at address K*8, and we need to "attach" the "GPRs head" to it without gaps:
  Stack:
  |---- 8 bytes block ----| |---- 8 bytes block ----| |---- 8 bytes...
  [ [padding] [GPRs head] ] [ ------ Tail passed via stack ------ ...
  FIX: note that once we have added padding we need to correct *all* Arg offsets that come after the padded one. That's why we need this fix: Arg offsets were never corrected before this patch. See the new test cases included in the patch.
  We also don't need to insert padding for byval parameters that are stored in GPRs only. We need to pad only the last byval parameter, and only when it extends beyond the GPRs and stack alignment is 8. The stack area allocated for recovered byval params must still satisfy the "Size mod 8 = 0" restriction.
  This patch reduces stack usage in some cases: we can reduce ArgRegsSaveArea since inner N*4-byte-sized byval params may be "packed" with alignment 4 in some cases.
  llvm-svn: 182237
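  As a rough illustration of the layout arithmetic described above (a standalone sketch with made-up names, not the code from this patch): the padding inserted in front of the GPR-resident head of a split byval argument is whatever brings that head up to the next 8-byte boundary, and every argument offset computed after the padded one has to be shifted by the same amount.

    #include <cstdio>

    // With 8-byte stack alignment, the GPR head of a split byval argument must
    // end at an address of the form K*8 - 1 so that the tail, passed on the
    // stack at K*8, follows it without a gap. If the head is not a multiple of
    // 8 bytes, padding goes in front of it.
    unsigned paddingBeforeGPRHead(unsigned gprHeadBytes) {
      const unsigned Align = 8;
      unsigned rem = gprHeadBytes % Align;
      return rem == 0 ? 0 : Align - rem;
    }

    int main() {
      // E.g. 12 bytes of a byval passed in three GPRs need 4 bytes of padding;
      // all later argument offsets must then be corrected by those 4 bytes,
      // which is the correction this patch adds.
      std::printf("%u\n", paddingBeforeGPRHead(12)); // prints 4
    }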
* Also expand 64-bit bitcasts. (Jakob Stoklund Olesen, 2013-05-20, 1 file, -0/+2)
  llvm-svn: 182229
* Implement spill and fill of I64Regs. (Jakob Stoklund Olesen, 2013-05-20, 1 file, -2/+9)
  llvm-svn: 182228
* Mark i64 SETCC as expand so it is turned into a SELECT_CC. (Jakob Stoklund Olesen, 2013-05-20, 1 file, -0/+2)
  llvm-svn: 182227
* Replace some bit operations with simpler ones. No functionality change. (Benjamin Kramer, 2013-05-19, 3 files, -12/+9)
  llvm-svn: 182226
* Don't use %g0 to materialize 0 directly. (Jakob Stoklund Olesen, 2013-05-19, 2 files, -4/+2)
  The wired physreg doesn't work on tied operands like on MOVXCC. Add a README note to fix this later.
  llvm-svn: 182225
* Select i64 values with %icc conditions. (Jakob Stoklund Olesen, 2013-05-19, 1 file, -0/+5)
  llvm-svn: 182224
* Add floating point selects on %xcc predicates. (Jakob Stoklund Olesen, 2013-05-19, 1 file, -0/+10)
  llvm-svn: 182222
* Implement SPselectfcc for i64 operands. (Jakob Stoklund Olesen, 2013-05-19, 2 files, -27/+31)
  Also clean up the arguments to all the MOVCC instructions so the operands always are (true-val, false-val, cond-code).
  llvm-svn: 182221
* [Sparc] Rearrange integer registers' allocation order so that the register allocator will use I and G registers before using L and O registers (Venkatraman Govindaraju, 2013-05-19, 2 files, -10/+23)
  Also, enable registers %g2-%g4 to be used in applications, and %g5 in 64-bit mode.
  llvm-svn: 182219
* Handle i64 FrameIndex nodes in SPARC v9 mode. (Jakob Stoklund Olesen, 2013-05-19, 1 file, -1/+1)
  llvm-svn: 182216
* Check InlineAsm clobbers in PPCCTRLoops (Hal Finkel, 2013-05-18, 1 file, -0/+15)
  We don't need to reject all inline asm as using the counter register (most does not). Only those that explicitly clobber the counter register need to prevent the transformation.
  llvm-svn: 182191
* AArch64: add CMake dependency to fix very parallel builds (Tim Northover, 2013-05-18, 1 file, -0/+2)
  llvm-svn: 182190
* X86: Bad peephole interaction between adc, MOV32r0 (David Majnemer, 2013-05-18, 1 file, -3/+18)
  The peephole tries to reorder MOV32r0 instructions such that they are before the instruction that modifies EFLAGS. The problem is that the peephole does not consider the case where the instruction that modifies EFLAGS also depends on the previous state of EFLAGS.
  Instead, walk backwards until we find an instruction that has a def for EFLAGS but does not have a use. If we find such an instruction, insert the MOV32r0 before it. If it cannot find such an instruction, skip the optimization.
  llvm-svn: 182184
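  A minimal sketch of the backwards walk described above, using simplified stand-in types rather than LLVM's MachineInstr API (all names here are illustrative only):

    #include <list>

    struct Instr {
      bool definesEFLAGS;
      bool usesEFLAGS;
    };

    // Walk backwards from the point that needs the zeroed register. Only an
    // instruction that defines EFLAGS without also reading them is a safe
    // spot: placing the EFLAGS-clobbering MOV32r0 just before it is harmless,
    // because that def rewrites EFLAGS anyway. An instruction like ADC defines
    // *and* uses EFLAGS, so it is not a safe insertion point and the walk
    // continues past it.
    bool findInsertionPoint(std::list<Instr> &block,
                            std::list<Instr>::iterator start,
                            std::list<Instr>::iterator &insertBefore) {
      for (auto it = start; it != block.begin();) {
        --it;
        if (it->definesEFLAGS && !it->usesEFLAGS) {
          insertBefore = it;
          return true;
        }
      }
      return false; // no safe spot found: skip the optimization
    }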
* Add LLVMContext argument to getSetCCResultType (Matt Arsenault, 2013-05-18, 19 files, -38/+40)
  llvm-svn: 182180
* Support unaligned load/store on more ARM targets (JF Bastien, 2013-05-17, 2 files, -10/+46)
  This patch matches GCC behavior: the code used to only allow unaligned load/store on ARM for v6+ Darwin; it will now allow unaligned load/store for v6+ Darwin as well as for v7+ on Linux and NaCl. The distinction is made because v6 doesn't guarantee support (but LLVM assumes that Apple controls hardware+kernel and therefore has conformant v6 CPUs), whereas v7 does provide this guarantee (and Linux/NaCl behave sanely).
  The patch keeps the -arm-strict-align command line option, and adds -arm-no-strict-align. They behave similarly to GCC's -mstrict-align and -mnostrict-align.
  I originally encountered this discrepancy in FastIsel tests which expect unaligned load/store generation. Overall this should slightly improve performance in most cases because of reduced I$ pressure.
  llvm-svn: 182175
* Fix the build in c++11 mode. (Rafael Espindola, 2013-05-17, 1 file, -2/+2)
  The errors were:
  non-constant-expression cannot be narrowed from type 'int64_t' (aka 'long') to 'uint32_t' (aka 'unsigned int') in initializer list
  and
  non-constant-expression cannot be narrowed from type 'long' to 'uint32_t' (aka 'unsigned int') in initializer list
  llvm-svn: 182168
* R600: Lower int_load_input to copyFromReg instead of Register node (Vincent Lejeune, 2013-05-17, 1 file, -1/+5)
  It solves a bug uncovered by the dot4 patch, where the register class of an int_load_input use was ignored.
  llvm-svn: 182130
* R600: Use bottom up scheduling algorithm (Vincent Lejeune, 2013-05-17, 4 files, -25/+37)
  llvm-svn: 182129
* R600: Use depth first scheduling algorithm (Vincent Lejeune, 2013-05-17, 2 files, -79/+31)
  It should increase PV substitution opportunities and lower GPR usage (pending computation paths are "flushed" sooner).
  llvm-svn: 182128
* R600: Replace big texture opcode switch in scheduler by usesTC/usesVC (Vincent Lejeune, 2013-05-17, 1 file, -23/+3)
  llvm-svn: 182127
* R600: Relax some vector constraints on Dot4. (Vincent Lejeune, 2013-05-17, 10 files, -26/+280)
  Dot4 now uses 8 scalar operands instead of 2 vector ones, which allows the register coalescer to remove some unneeded COPYs. This patch also defines some structures/functions that can be used to handle every vector instruction (CUBE, Cayman special instructions, ...) in a similar fashion.
  llvm-svn: 182126
* R600: Improve texture handling (Vincent Lejeune, 2013-05-17, 11 files, -201/+725)
  llvm-svn: 182125
* R600: Rename 128 bit registers. (Vincent Lejeune, 2013-05-17, 2 files, -10/+9)
  Almost all instructions that take a 128-bit register as input (fetch, export, ...) have the ability to swizzle their argument and output. Instead of printing a default swizzle for every 128-bit register, rename T*.XYZW to T* and let instructions print potentially optimized swizzles themselves.
  llvm-svn: 182124
* R600: Some factorization (Vincent Lejeune, 2013-05-17, 5 files, -203/+221)
  llvm-svn: 182123
* R600: Factorize Fetch size limit inside AMDGPUSubTarget (Vincent Lejeune, 2013-05-17, 4 files, -13/+13)
  llvm-svn: 182122
* R600: prettier dump of clamp (Vincent Lejeune, 2013-05-17, 2 files, -4/+4)
  llvm-svn: 182121
* R600: Fix encoding for R600 family GPUs (Tom Stellard, 2013-05-17, 1 file, -0/+7)
  Reviewed-by: Vincent Lejeune <vljn@ovi.com>
  https://bugs.freedesktop.org/show_bug.cgi?id=64193
  https://bugs.freedesktop.org/show_bug.cgi?id=64257
  https://bugs.freedesktop.org/show_bug.cgi?id=64320
  NOTE: This is a candidate for the 3.3 branch.
  llvm-svn: 182113
* R600: Pass MCSubtargetInfo reference to R600CodeEmitter (Tom Stellard, 2013-05-17, 3 files, -6/+10)
  llvm-svn: 182112
* [Sparc] Implements hasReservedCallFrame and hasFP. (Venkatraman Govindaraju, 2013-05-17, 2 files, -1/+17)
  This is to generate correct frame setup code when the function has variable-sized allocas.
  llvm-svn: 182108
* X86: Make shuffle -> shift conversion more aggressive about undefs. (Benjamin Kramer, 2013-05-17, 1 file, -18/+28)
  Shuffles that only move an element into position 0 of the vector are common in the output of the loop vectorizer and often generate suboptimal code when SSSE3 is not available. Lower them to vector shifts if possible.
  We still prefer palignr over psrldq because it has higher throughput on sandybridge.
  llvm-svn: 182102
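  The equivalence being exploited, shown with SSE2 intrinsics (an illustration, not the lowering code itself): a shuffle whose only job is to move element i into lane 0, with the remaining lanes undef, is just a whole-register right shift by i * element-size bytes.

    #include <emmintrin.h> // SSE2

    // Bring element 2 of a v4i32 into lane 0 with PSRLDQ (byte shift by 8);
    // the upper lanes become zero, which is fine when they were undef anyway.
    static inline int extract_lane2(__m128i v) {
      return _mm_cvtsi128_si32(_mm_srli_si128(v, 8));
    }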
* [PowerPC] Fix hi/lo encoding in old-style code emitter (Ulrich Weigand, 2013-05-17, 4 files, -33/+17)
  This patch implements the equivalent change to r182091/r182092 in the old-style code emitter. Instead of having two separate 16-bit immediate encoding routines depending on the instruction, this patch introduces a single encoder that checks the machine operand flags to decide whether the low or high half of a symbol address is required.
  Since now both encoders make no further distinction between "symbolLo" and "symbolHi", the .td operand can now use a single getS16ImmEncoding method.
  Tested by running the old-style JIT tests on 32-bit Linux.
  llvm-svn: 182097
* [PowerPC] Merge/rename PPC fixup types (Ulrich Weigand, 2013-05-17, 4 files, -28/+17)
  Now that fixup_ppc_ha16 and fixup_ppc_lo16 are being treated exactly the same everywhere, it no longer makes sense to have two fixup types. This patch merges them both into a single type fixup_ppc_half16, and renames fixup_ppc_lo16_ds to fixup_ppc_half16ds for consistency. (The half16 and half16ds names are taken from the description of relocation types in the PowerPC ABI.)
  No change in code generation expected.
  llvm-svn: 182092
* [PowerPC] Fix processing of ha16/lo16 fixups (Ulrich Weigand, 2013-05-17, 2 files, -7/+1)
  The current PowerPC MC back end distinguishes between fixup_ppc_ha16 and fixup_ppc_lo16, which are determined by the instruction the fixup applies to, and uses this distinction to decide whether a fixup ought to resolve to the high or the low part of a symbol address. This isn't quite correct, however. It is valid (if unusual) assembler to use, e.g.
    li 1, symbol@ha
  or
    lis 1, symbol@l
  Whether the high or the low part of the address is used depends solely on the @ suffix, not on the instruction. In addition, both
    li 1, symbol
  and
    lis 1, symbol
  are valid, assuming the symbol address fits into 16 bits; again, both will then refer to the actual symbol value (so li will load the value itself, while lis will load the value shifted by 16).
  To fix this, two places need to be adapted. If the fixup cannot be resolved at assembler time, a relocation needs to be emitted via PPCELFObjectWriter::getRelocType. This routine already looks at the VK_ type to determine the relocation. The only problem is that it will reject any _LO modifier in a ha16 fixup and vice versa. This is simply incorrect; any of those modifiers ought to be accepted for either fixup type.
  If the fixup *can* be resolved at assembler time, adjustFixupValue currently selects the high bits of the symbol value if the fixup type is ha16. Again, this is incorrect; see the above example
    lis 1, symbol
  Now, in theory we'd have to respect a VK_ modifier here. However, in fact common code never even attempts to resolve symbol references using any nontrivial VK_ modifier at assembler time; it will always fall back to emitting a reloc and letting the linker handle it. If this ever changes, presumably there'd have to be a target callback to resolve VK_ modifiers. We'd then have to handle @ha etc. there.
  llvm-svn: 182091
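  For reference, the @ha/@l arithmetic the fixups have to honour: @l is the low 16 bits (sign-extended when an instruction like li/addi consumes it), and @ha is the high 16 bits adjusted for that sign extension, so a plain "take the high bits" is only right when bit 15 of the address happens to be clear. A small self-contained check (illustrative code, not the MC implementation):

    #include <cassert>
    #include <cstdint>

    static uint16_t lo(uint32_t addr) { return addr & 0xFFFF; }
    static uint16_t ha(uint32_t addr) { return (addr + 0x8000) >> 16; }

    int main() {
      uint32_t addr = 0x1234ABCD;        // bit 15 set, so @ha != addr >> 16
      int32_t lo_se = (int16_t)lo(addr); // sign-extended low half
      // (sym@ha << 16) + sign_extend(sym@l) reconstructs the address.
      assert((uint32_t)(ha(addr) << 16) + lo_se == addr);
      return 0;
    }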
* Don't cast away constness. (Benjamin Kramer, 2013-05-17, 1 file, -2/+2)
  llvm-svn: 182086