path: root/llvm/lib/Target/X86/X86FastISel.cpp
Commit log (most recent first). Each entry: commit message (Author, Date, Files changed, Lines -/+)
* Revert "Change memcpy/memset/memmove to have dest and source alignments."Pete Cooper2015-11-191-2/+2
| | | | | | | | | | This reverts commit r253511. This likely broke the bots in http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202 http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787 llvm-svn: 253543
* Change memcpy/memset/memmove to have dest and source alignments. (Pete Cooper, 2015-11-18, 1 file, -2/+2)
  Note, this was reviewed (and more details are in)
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  These intrinsics currently have an explicit alignment argument which is
  required to be a constant integer. It represents the alignment of the
  source and dest, and so must be the minimum of those. This change allows
  source and dest to each have their own alignments by using the alignment
  attribute on their arguments. The alignment argument itself is removed.

  There are a few places in the code for which the code needs to be checked
  by an expert as to whether using only src/dest alignment is safe. For
  those places, they currently take the minimum of src/dest alignments,
  which matches the current behaviour.

  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)
  will now read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)

  For out-of-tree owners, I was able to strip alignment from calls using sed
  by replacing:
    (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)
  with:
    $1i1 false)
  and similarly for memmove and memcpy. I then added back in alignment to
  test cases which needed it.

  A similar commit will be made to clang, which actually has many
  differences in alignment, as now IRBuilder can generate different
  source/dest alignments on calls. In IRBuilder itself, a new argument was
  added. Instead of calling:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)
  you now call:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)

  There is a temporary class (IntegerAlignment) which takes the source
  alignment and rejects implicit conversion from bool. This is to prevent
  isVolatile here from passing its default parameter to the source
  alignment.

  Note, changes in future can now be made to codegen. I didn't change
  anything here, but this change should enable better memcpy code sequences.

  Reviewed by Hal Finkel.
  llvm-svn: 253511
* [IR] Add bounds checking to paramHasAttr (Sanjoy Das, 2015-11-04, 1 file, -4/+6)
  Summary: This is intended to make a later change simpler.

  Note: adding this bounds checking required fixing `X86FastISel`. As far
  as I can tell I've preserved the original behavior, but a careful review
  will be appreciated.

  Reviewers: reames
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D14304
  llvm-svn: 252073
* X86: Remove implicit ilist iterator conversions, NFC (Duncan P. N. Exon Smith, 2015-10-19, 1 file, -2/+2)
  llvm-svn: 250741
* [X86][FastISel] Teach how to select SSE4A nontemporal stores. (Simon Pilgrim, 2015-10-17, 1 file, -4/+15)
  Add FastISel support for SSE4A scalar float / double non-temporal stores.

  Follow up to D13698.

  Differential Revision: http://reviews.llvm.org/D13773
  llvm-svn: 250610
* [x86][FastISel] Teach how to select nontemporal stores. (Andrea Di Biagio, 2015-10-14, 1 file, -16/+34)
  This patch teaches x86 fast-isel how to select nontemporal stores.

  On x86, we can use MOVNTI for nontemporal stores of doublewords/quadwords.
  Instructions (V)MOVNTPS/PD/DQ can be used for SSE2/AVX aligned nontemporal
  vector stores.

  Before this patch, fast-isel always selected 'movd/movq' instead of
  'movnti' for doubleword/quadword nontemporal stores. In the case of
  nontemporal stores of aligned vectors, fast-isel always selected
  movaps/movapd/movdqa instead of movntps/movntpd/movntdq.

  With this patch, if we use SSE2/AVX intrinsics for nontemporal stores we
  now always get the expected (V)MOVNT instructions. The lack of fast-isel
  support for nontemporal stores was spotted when analyzing the -O0 codegen
  for nontemporal stores.

  Differential Revision: http://reviews.llvm.org/D13698
  llvm-svn: 250285
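  [A minimal IR sketch of the stores this entry is about; illustrative only,
  not from the commit, and the function names are hypothetical. In IR, a
  nontemporal store is an ordinary store tagged with !nontemporal metadata.]

    define void @nt_scalar(i32 %v, i32* %p) {
      store i32 %v, i32* %p, align 4, !nontemporal !0           ; now selects movnti
      ret void
    }
    define void @nt_vector(<4 x float> %v, <4 x float>* %p) {
      store <4 x float> %v, <4 x float>* %p, align 16, !nontemporal !0  ; now selects (v)movntps
      ret void
    }
    !0 = !{i32 1}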
* Reapply r249121: "[FastISel][x86] Teach how to select SSE2/AVX bitcasts between 128/256-bit vector types." (Andrea Di Biagio, 2015-10-02, 1 file, -0/+24)
  This patch teaches FastIsel the following two things:
  1) On SSE2, no instructions are needed for bitcasts between 128-bit vector types;
  2) On AVX, no instructions are needed for bitcasts between 256-bit vector types.

  Example:
    %1 = bitcast <4 x i32> %V to <2 x i64>

  Before (-fast-isel -fast-isel-abort=1):
    FastIsel miss: %1 = bitcast <4 x i32> %V to <2 x i64>

  Now we don't fall back to SelectionDAG and we correctly fold that
  computation, propagating the register associated to %V.

  Originally reviewed here: http://reviews.llvm.org/D13347
  llvm-svn: 249147
* Revert: [FastISel][x86] Teach how to select SSE2/AVX bitcasts between 128/256-bit vector types. (Andrea Di Biagio, 2015-10-02, 1 file, -24/+0)
  r249121 caused a Clang test failure (avx2-builtins.c).
  Revert r249121 while I keep investigating the reason why that test failed.
  llvm-svn: 249124
* [FastISel][x86] Teach how to select SSE2/AVX bitcasts between 128/256-bit vector types. (Andrea Di Biagio, 2015-10-02, 1 file, -0/+24)
  This patch teaches FastIsel the following two things:
  1) On SSE2, no instructions are needed for bitcasts between 128-bit vector types;
  2) On AVX, no instructions are needed for bitcasts between 256-bit vector types.

  Example:
    %1 = bitcast <4 x i32> %V to <2 x i64>

  Before (-fast-isel -fast-isel-abort=1):
    FastIsel miss: %1 = bitcast <4 x i32> %V to <2 x i64>

  Now we don't fall back to SelectionDAG and we correctly fold that
  computation, propagating the register associated to %V.

  Differential Revision: http://reviews.llvm.org/D13347
  llvm-svn: 249121
* Arguments spilled on the stack before a function call may have alignment requirements, for example in the case of vectors. (Jeroen Ketema, 2015-09-29, 1 file, -1/+1)
  These requirements are exploited by the code generator by using move
  instructions that have similar alignment requirements, e.g., movaps on
  x86.

  Although the code generator properly aligns the arguments with respect to
  the displacement of the stack pointer it computes, the displacement itself
  may cause misalignment. For example, if we have

    %3 = load <16 x float>, <16 x float>* %1, align 64
    call void @bar(<16 x float> %3, i32 0)

  the x86 back-end emits:

    movaps  32(%ecx), %xmm2
    movaps  (%ecx), %xmm0
    movaps  16(%ecx), %xmm1
    movaps  48(%ecx), %xmm3
    subl    $20, %esp       <-- if %esp was 16-byte aligned before this
                                instruction, it no longer will be afterwards
    movaps  %xmm3, (%esp)   <-- movaps requires 16-byte alignment, while
                                %esp is not aligned as such
    movl    $0, 16(%esp)
    calll   __bar

  To solve this, we need to make sure that the computed value with which the
  stack pointer is changed is a multiple of the maximal alignment seen
  during its computation. With this change we get proper alignment:

    subl    $32, %esp
    movaps  %xmm3, (%esp)

  Differential Revision: http://reviews.llvm.org/D12337
  llvm-svn: 248786
* X86: Fix FastISel SSESelect register class (Matthias Braun, 2015-08-31, 1 file, -3/+9)
  X86FastISel has been using the wrong register class for VBLENDVPS, which
  produces a VR128 and needs an extra copy to the target register. The
  problem was already hit by the existing test cases when using
    llvm-lit -Dllc="llc -verify-machineinstrs"
  llvm-svn: 246461
* FastISel: Factor out common code; NFC intended (Matthias Braun, 2015-08-26, 1 file, -29/+5)
  This should be no functional change, but for the record: for three cases
  in X86FastISel this will change the order in which the FalseMBB and
  TrueMBB of a conditional branch are added to the successor/predecessor
  lists.
  llvm-svn: 245997
* PseudoSourceValue: Replace global manager with a manager in a machine function. (Alex Lorenz, 2015-08-11, 1 file, -4/+4)
  This commit removes the global manager variable which is responsible for
  storing and allocating pseudo source values, and instead introduces a new
  manager class named 'PseudoSourceValueManager'. Machine functions now own
  an instance of the pseudo source value manager class.

  This commit also modifies the 'get...' methods in the 'MachinePointerInfo'
  class to construct pseudo source values using the instance of the pseudo
  source value manager object from the machine function.

  This commit updates calls to the 'get...' methods from the
  'MachinePointerInfo' class in a lot of different files because those calls
  now need to pass in a reference to a machine function to those methods.

  This change will make it easier to serialize pseudo source values, as it
  will enable me to transform the mips-specific MipsCallEntry
  PseudoSourceValue subclass into two target-independent subclasses.

  Reviewers: Akira Hatanaka
  llvm-svn: 244693
* Make TargetLowering::getPointerTy() take DataLayout as an argument (Mehdi Amini, 2015-07-09, 1 file, -34/+34)
  Summary: This change is part of a series of commits dedicated to having a
  single DataLayout during compilation, by always using the one owned by the
  module.

  Reviewers: echristo
  Subscribers: jholewinski, ted, yaron.keren, rafael, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11028
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241775
* Redirect DataLayout from TargetMachine to Module in ComputeValueVTs() (Mehdi Amini, 2015-07-09, 1 file, -1/+2)
  Summary: Avoid using the TargetMachine-owned DataLayout and use the
  Module-owned one instead. This requires passing the DataLayout up the
  stack to ComputeValueVTs().

  This change is part of a series of commits dedicated to having a single
  DataLayout during compilation, by always using the one owned by the
  module.

  Reviewers: echristo
  Subscribers: jholewinski, yaron.keren, rafael, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11019
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241773
* IR: Do not consider available_externally linkage to be linker-weak. (Peter Collingbourne, 2015-07-05, 1 file, -1/+1)
  From the linker's perspective, an available_externally global is
  equivalent to an external declaration (per isDeclarationForLinker()), so
  it is incorrect to consider it to be a weak definition.

  Also clean up some logic in the dead argument elimination pass and clarify
  its comments to better explain how its behavior depends on linkage,
  introduce GlobalValue::isStrongDefinitionForLinker(), and start using it
  throughout the optimizers and backend.

  Differential Revision: http://reviews.llvm.org/D10941
  llvm-svn: 241413
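  [A minimal IR sketch of the linkage in question; illustrative only, not
  from the commit, and the global name is hypothetical.]

    @g = available_externally constant i32 42
    ; The optimizer may use the known value 42, but for the linker this is
    ; equivalent to an external declaration, not a weak definition.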
* Use MCSymbols for FastISel. (Rafael Espindola, 2015-06-23, 1 file, -3/+4)
  The summary is that it moves the mangling earlier and replaces a few calls
  to .addExternalSymbol with addSym.

  I originally wanted to replace all the uses of addExternalSymbol with
  addSym, but noticed it was a lot of work and doesn't need to be done all
  at once.
  llvm-svn: 240395
* [InstrInfo] Refactor foldOperandImpl to thread through InsertPt. NFC (Keno Fischer, 2015-06-08, 1 file, -4/+3)
  Summary: This was a longstanding FIXME and is a necessary precursor to
  cases where foldOperandImpl may have to create more than one instruction
  (e.g. to constrain a register class). These are the split-out NFC changes
  from D6262.

  Reviewers: pete, ributzka, uweigand, mcrosier
  Reviewed By: mcrosier
  Subscribers: mcrosier, ted, llvm-commits
  Differential Revision: http://reviews.llvm.org/D10174
  llvm-svn: 239336
* CodeGen: Use mop_iterator instead of MIOperands/ConstMIOperands (Matthias Braun, 2015-05-29, 1 file, -6/+8)
  MIOperands/ConstMIOperands are classes iterating over the MachineOperands
  of a MachineInstr; however, MachineInstr::mop_iterator does the same
  thing.

  I assume these two iterators exist to have a uniform interface to iterate
  over the operands of a machine instruction bundle and a single machine
  instruction. However, in practice I find it more confusing to have two
  different iterator classes, so this patch transforms (nearly all) the code
  to use mop_iterators.

  The only exceptions are MIOperands::analyzePhysReg() and
  MIOperands::analyzeVirtReg(), which still need an equivalent; I leave that
  as an exercise for the next patch.

  Differential Revision: http://reviews.llvm.org/D9932

  This version is slightly modified from the proposed revision in that it
  introduces MachineInstr::getOperandNo to avoid the extra counting variable
  in the few loops that previously used MIOperands::getOperandNo.
  llvm-svn: 238539
* Migrate existing backends that care about software floating point to use the information in the module rather than TargetOptions. (Eric Christopher, 2015-05-12, 1 file, -1/+1)
  We've had the use-soft-float attribute for some time now, and clang has
  used it, so have the backends set a subtarget feature based on a
  particular function, now that subtargets are created based on functions
  and function attributes.

  For the one middle-end soft-float check, go ahead and create an
  overridable TargetLowering::useSoftFloat function that just checks the
  TargetSubtargetInfo in all cases.

  Also remove the command line option that hard-codes whether or not
  soft-float is set, by using the attribute for all of the target-specific
  test cases; for the generic one, just go ahead and add the attribute in
  the one case that showed up.
  llvm-svn: 237079
* [X86] Fast-ISel was incorrectly always killing the source of a truncate. (Pete Cooper, 2015-05-08, 1 file, -1/+3)
  A trunc from i32 to i1 on x86_64 generates an instruction such as

    %vreg19<def> = COPY %vreg9:sub_8bit<kill>; GR8:%vreg19 GR32:%vreg9

  However, the copy here should only have the kill flag on the 32-bit path,
  not the 64-bit one. Otherwise, we are killing the source of the truncate,
  which could be used later in the program.
  llvm-svn: 236890
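  [A minimal IR sketch of the kind of truncate affected; illustrative only,
  not from the commit, and the function name is hypothetical.]

    define i1 @trunc_to_bool(i32 %x) {
      %b = trunc i32 %x to i1   ; on x86_64 this lowers to a sub-register copy,
                                ; which must not unconditionally carry a kill flag
      ret i1 %b
    }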
* [x86] Fix register class of folded load index reg. (Pete Cooper, 2015-05-06, 1 file, -0/+17)
  When folding a load into another instruction, we need to fix the class of
  the index register. Otherwise, it could be something like GR64, not
  GR64_NOSP, and would fail the machine verifier.
  llvm-svn: 236644
* [X86 fast-isel] Constrain the index reg class to not include SP. (Pete Cooper, 2015-05-05, 1 file, -6/+23)
  The index reg on instructions with complex address modes is a GPR64_NOSP.
  Constrain it to appease the machine verifier.
  llvm-svn: 236557
* [X86][FastIsel] Fix assertion failure when selecting int-to-double conversion (PR23273). (Andrea Di Biagio, 2015-04-20, 1 file, -5/+6)
  This fixes a regression introduced at revision 231243.

  The target-independent selection algorithm in FastISel knows how to select
  a SINT_TO_FP if the target is SSE but not AVX. That is because on X86, the
  tablegen'd 'fastEmit' functions know how to select CVTSI2SSrr and
  CVTSI2SDrr.

  Method X86FastISel::X86SelectSIToFP was therefore working under the wrong
  assumption that the target was AVX. That assumption was incorrect since we
  can have a target that is neither AVX nor SSE.

  So, rather than asserting for the presence of AVX, we should have had an
  early exit from 'X86SelectSIToFP' if the target was not AVX. This patch
  fixes the issue, replacing the invalid assertion with an early exit.

  Thanks to Dimitry Andric for reporting this problem and for providing a
  small reproducible testcase. Added test pr23273.ll.
  llvm-svn: 235295
* CodeGen: Assert that inlined-at locations agree (Duncan P. N. Exon Smith, 2015-04-03, 1 file, -0/+2)
  As a follow-up to r234021, assert that a debug info intrinsic variable's
  `MDLocalVariable::getInlinedAt()` always matches the
  `MDLocation::getInlinedAt()` of its `!dbg` attachment.

  The goal here is to get rid of `MDLocalVariable::getInlinedAt()` entirely
  (PR22778), but I'll let these assertions bake for a while first.

  If you have an out-of-tree backend that just broke, you're probably
  attaching the wrong `DebugLoc` to a `DBG_VALUE` instruction. The one you
  want is the location that was attached to the corresponding
  `@llvm.dbg.declare` or `@llvm.dbg.value` call that you started with.
  llvm-svn: 234038
* [X86][FastIsel] Teach how to select vector load instructions. (Andrea Di Biagio, 2015-03-26, 1 file, -3/+34)
  This patch teaches fast-isel how to select 128-bit vector load
  instructions. Added test CodeGen/X86/fast-isel-vecload.ll.

  Differential Revision: http://reviews.llvm.org/D8605
  llvm-svn: 233270
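  [A minimal IR sketch of the loads this change covers; illustrative only,
  not from the commit, and the function name is hypothetical.]

    define <4 x float> @vec_load(<4 x float>* %p) {
      %v = load <4 x float>, <4 x float>* %p, align 16   ; fast-isel can now select movaps/vmovaps
      ret <4 x float> %v
    }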
* Use the i8 immediate cmp instructions when possible. (Rafael Espindola, 2015-03-16, 1 file, -1/+8)
  llvm-svn: 232378
* Don't repeat names in comments and clang-format this function. (Rafael Espindola, 2015-03-16, 1 file, -7/+10)
  llvm-svn: 232375
* Have getCallPreservedMask and getThisCallPreservedMask take a MachineFunction argument so that we can grab subtarget-specific features off of it. (Eric Christopher, 2015-03-11, 1 file, -1/+1)
  llvm-svn: 231979
* [AVX] Lower / fast-isel scalar FP selects into VBLENDV instructions (PR22483) (Sanjay Patel, 2015-03-05, 1 file, -18/+35)
  This patch reduces code size for all AVX targets and increases speed for
  some chips.

  SSE 4.1 introduced the useless (see code comments) 2-register form of
  BLENDV, and only in the packed float/double flavors. AVX subsequently made
  the instruction useful by adding a 4-register operand form.

  So we just need to paper over the lack of scalar forms of this
  instruction, complicate the code to choose float or double forms, and use
  blendv on scalars, since all FP is in xmm registers anyway.

  This gives us an approximately 50% speed up for a blendv microbenchmark
  sequence on SandyBridge and Haswell:
    blendv : 29.73 cycles/iter
    logic  : 43.15 cycles/iter

  No new test cases with this patch because:
  1. fast-isel-select-sse.ll tests the positive side for regular X86
     lowering and fast-isel
  2. sse-minmax.ll and fp-select-cmp-and.ll confirm that we're not firing
     for scalar selects without AVX
  3. fp-select-cmp-and.ll and logical-load-fold.ll confirm that we're not
     firing for scalar selects with constants

  http://llvm.org/bugs/show_bug.cgi?id=22483
  Differential Revision: http://reviews.llvm.org/D8063
  llvm-svn: 231408
* [X86][FastISel] Simplify the logic in method X86SelectSIToFP. (Andrea Di Biagio, 2015-03-04, 1 file, -21/+13)
  The target-independent selection algorithm in FastISel already knows how
  to select a SINT_TO_FP if the target is SSE but not AVX.

  On targets that have SSE but not AVX, the tablegen'd 'fastEmit' functions
  for ISD::SINT_TO_FP know how to select instruction X86::CVTSI2SSrr (for an
  i32 to f32 conversion) and X86::CVTSI2SDrr (for an i32 to f64 conversion).

  This patch simplifies the logic in method X86SelectSIToFP knowing that the
  code would not be reachable if the subtarget doesn't have AVX.
  No functional change intended.
  llvm-svn: 231243
* CodeGen: convert CCState interface to using ArrayRefs (Tim Northover, 2015-02-21, 1 file, -1/+1)
  Everyone except R600 was manually passing the length of a static array at
  each callsite, calculated in a variety of interesting ways. Far easier to
  let ArrayRef handle that.

  There should be no functional change, but out-of-tree targets may have to
  tweak their calls as with these examples.
  llvm-svn: 230118
* [X86][FastIsel] Teach how to select float-half conversion intrinsics. (Andrea Di Biagio, 2015-02-20, 1 file, -0/+62)
  This patch teaches X86FastISel how to select intrinsic 'convert_from_fp16'
  and intrinsic 'convert_to_fp16'.

  If the target has F16C, we can select VCVTPS2PHrr for a float-half
  conversion, and VCVTPH2PSrr for a half-float conversion.

  Differential Revision: http://reviews.llvm.org/D7673
  llvm-svn: 230043
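  [A minimal IR sketch of these intrinsics; illustrative only, not from the
  commit. The overloaded .f32 suffixes are an assumption based on the IR of
  that era, and the function name is hypothetical.]

    declare i16 @llvm.convert.to.fp16.f32(float)
    declare float @llvm.convert.from.fp16.f32(i16)

    define i16 @f32_to_f16(float %a) {
      %h = call i16 @llvm.convert.to.fp16.f32(float %a)   ; -> vcvtps2ph with F16C
      ret i16 %h
    }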
* [X86][FastIsel] Teach how to select scalar integer to float/double conversions. (Andrea Di Biagio, 2015-02-17, 1 file, -0/+48)
  This patch teaches fast-isel how to select a (V)CVTSI2SSrr for an integer
  to float conversion, and how to select a (V)CVTSI2SDrr for an integer to
  double conversion.

  Added test 'fast-isel-int-float-conversion.ll'.

  Differential Revision: http://reviews.llvm.org/D7698
  llvm-svn: 229589
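  [A minimal IR sketch of the conversions this covers; illustrative only,
  not from the commit, and the function name is hypothetical.]

    define float @si_to_fp(i32 %a) {
      %c = sitofp i32 %a to float   ; fast-isel can now select (v)cvtsi2ss
      ret float %c
    }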
* X86: @llvm.frameaddress should defer to SelectionDAG for Win CFI (David Majnemer, 2015-02-10, 1 file, -2/+7)
  llvm-svn: 228754
* [X86][FastIsel] Avoid introducing legacy SSE instructions if the target has AVX. (Andrea Di Biagio, 2015-02-10, 1 file, -28/+36)
  This patch teaches X86FastISel how to select AVX instructions for scalar
  float/double convert operations.

  Before this patch, X86FastISel always selected legacy SSE instructions for
  FPExt (from float to double) and FPTrunc (from double to float).

  For example:

    define double @foo(float %f) {
      %conv = fpext float %f to double
      ret double %conv
    }

  Before (with -mattr=+avx -fast-isel) X86FastISel selected a CVTSS2SDrr,
  which is legacy SSE:
    cvtss2sd %xmm0, %xmm0

  With this patch, X86FastISel selects a VCVTSS2SDrr instead:
    vcvtss2sd %xmm0, %xmm0, %xmm0

  Added test fast-isel-fptrunc-fpext.ll to check both the register-register
  and the register-memory float/double conversion variants.

  Differential Revision: http://reviews.llvm.org/D7438
  llvm-svn: 228682
* Migrate to using the subtarget on the machine function and update all uses. (Eric Christopher, 2015-02-02, 1 file, -7/+5)
  llvm-svn: 227891
* [X86] Convert esp-relative movs of function arguments to pushes, step 2 (Michael Kuperstein, 2015-02-01, 1 file, -1/+1)
  This moves the transformation introduced in r223757 into a separate MI
  pass. This allows it to cover many more cases (not only cases where there
  must be a reserved call frame), and perform rudimentary call folding. It
  still doesn't have a heuristic, so it is enabled only for optsize/minsize,
  with stack alignment <= 8, where it ought to be a fairly clear win.

  (Re-commit of r227728)

  Differential Revision: http://reviews.llvm.org/D6789
  llvm-svn: 227752
* Revert r227728 due to bad line endings. (Michael Kuperstein, 2015-02-01, 1 file, -3358/+3358)
  llvm-svn: 227746
* [X86] Convert esp-relative movs of function arguments to pushes, step 2 (Michael Kuperstein, 2015-02-01, 1 file, -3358/+3358)
  This moves the transformation introduced in r223757 into a separate MI
  pass. This allows it to cover many more cases (not only cases where there
  must be a reserved call frame), and perform rudimentary call folding. It
  still doesn't have a heuristic, so it is enabled only for optsize/minsize,
  with stack alignment <= 8, where it ought to be a fairly clear win.

  Differential Revision: http://reviews.llvm.org/D6789
  llvm-svn: 227728
* DebugInfo: Teach Fast ISel to respect the debug location of comparisons in jumps (David Blaikie, 2015-01-29, 1 file, -9/+9)
  The use of the DbgLoc in FastISel is probably something we should fix.
  It's prone to leaking the wrong location into instructions - we should
  have a clear chain of custody from the debug location of an IR Instruction
  to that of a MachineInstr to avoid such leakage.
  llvm-svn: 227481
* Move DataLayout back to the TargetMachine from TargetSubtargetInfo-derived classes. (Eric Christopher, 2015-01-26, 1 file, -2/+2)
  Since global data alignment, layout, and mangling is often based on the
  DataLayout, move it to the TargetMachine. This ensures that global data is
  going to be laid out and mangled consistently if the subtarget changes on
  a per-function basis.

  Prior to this, all targets(*) have had subtarget-dependent code moved out
  and onto the TargetMachine.

  *One target hasn't been migrated as part of this change: R600. The R600
  port has, as a subtarget feature, the size of pointers, and this affects
  global data layout. I've currently hacked in a FIXME to enable progress,
  but the port needs to be updated to either pass the 64-bitness to the
  TargetMachine, or fix the DataLayout to avoid subtarget-dependent
  features.
  llvm-svn: 227113
* [x32] Fast ISel should use LEA64_32r instead of LEA32r to adjust addresses in x32 mode. (Michael Kuperstein, 2015-01-21, 1 file, -2/+8)
  llvm-svn: 226661
* [X86] Make isel select the 2-byte register form of INC/DEC even in non-64-bit mode. (Craig Topper, 2015-01-06, 1 file, -7/+4)
  Convert to the 1-byte form in non-64-bit mode as part of MCInst lowering.

  Overall this seems simpler. It reduces duplication of patterns between
  both modes and it simplifies the memory folding/unfolding tables, as they
  don't need to create fake instructions just to keep track of 64-bitness.
  llvm-svn: 225252
* [X86] Make isel select the shorter form of jump instructions instead of the long form. (Craig Topper, 2015-01-06, 1 file, -4/+4)
  The assembler backend will relax to the long form if necessary. This
  removes a swap from long form to short form in the MCInstLowering code.
  Selecting the long form used to be required by the old JIT.
  llvm-svn: 225242
* [X86][ISel] Fix a regression I introduced in r224884 (Keno Fischer, 2014-12-28, 1 file, -3/+3)
  In the else case, ResultReg was not checked for validity. To my surprise,
  this case was not hit in any of the existing test cases. This includes a
  new test case that tests this path.

  Also drop the `target triple` declaration from the original test, as
  suggested by H.J. Lu, because apparently with it the test won't be run on
  Linux.
  llvm-svn: 224901
* [FastIsel][X86] Fix invalid register replacement for bool args (Keno Fischer, 2014-12-27, 1 file, -28/+28)
  Summary: Consider the following IR:

    %3 = load i8* undef
    %4 = trunc i8 %3 to i1
    %5 = call %jl_value_t.0* @foo(..., i1 %4, ...)
    ret %jl_value_t.0* %5

  Bools (that are the result of direct truncs) are lowered as whatever the
  argument to the trunc was, plus an "and 1", causing the part of the MBB
  responsible for this argument to look something like this:

    %vreg8<def,tied1> = AND8ri %vreg7<kill,tied0>, 1, %EFLAGS<imp-def>; GR8:%vreg8,%vreg7

  Later, when the load is lowered, it will insert

    %vreg15<def> = MOV8rm %vreg14, 1, %noreg, 0, %noreg; mem:LD1[undef] GR8:%vreg15 GR64:%vreg14

  but remember to (at the end of isel) replace vreg7 by vreg15. Now for the
  bug. In fast-isel lowering, we mistakenly mark vreg8 as the result of the
  load instead of the trunc. This adds a fixup to have vreg8 replaced by
  whatever the result of the load is as well, so we end up with

    %vreg15<def,tied1> = AND8ri %vreg15<kill,tied0>, 1, %EFLAGS<imp-def>; GR8:%vreg15

  which is an SSA violation and causes problems later down the road.

  This fixes PR21557.

  Test Plan: Test case from PR21557 is added to the test suite.

  Reviewers: ributzka
  Reviewed By: ributzka
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D6245
  llvm-svn: 224884
* Use 32-bit ebp for NaCl64 in a limited case: llvm.frameaddress. (Jan Wen Voung, 2014-12-05, 1 file, -3/+3)
  Summary: Follow-up to [x32] "Use ebp/esp as frame and stack pointer":
  http://reviews.llvm.org/D4617

  In that earlier patch, NaCl64 was made to always use rbp. That's needed
  for most cases because rbp should hold a full 64-bit address within the
  NaCl sandbox, so that loads/stores off of rbp don't require sandbox
  adjustment (zeroing the top 32 bits, then filling those by adding r15).

  However, llvm.frameaddress returns a pointer, and pointers are 32-bit for
  NaCl64. In this case, use ebp instead, which will make the register copy
  type-check. A similar mechanism may be needed for llvm.eh.return, but is
  not added in this change.

  Test Plan: test/CodeGen/X86/frameaddr.ll
  Reviewers: dschuff, nadav
  Subscribers: jfb, llvm-commits
  Differential Revision: http://reviews.llvm.org/D6514
  llvm-svn: 223510
* [X86] Clean up whitespace as well as minor coding style (Michael Liao, 2014-12-04, 1 file, -21/+20)
  llvm-svn: 223339
* Remove a bunch of unnecessary typecasts to 'const TargetRegisterClass *' (Craig Topper, 2014-11-21, 1 file, -3/+2)
  llvm-svn: 222509