path: root/llvm/test/CodeGen
Commit message | Author | Age | Files | Lines
* Make this test disable fast isel as it's not needed. | Eric Christopher | 2011-04-25 | 1 | -1/+1
    llvm-svn: 130165
* Lower BlockAddress node when relocation-model is static. | Akira Hatanaka | 2011-04-25 | 1 | -5/+10
    llvm-svn: 130131
* A dbg.declare may not be in entry block, even if it is referring to an incoming argument. | Devang Patel | 2011-04-25 | 1 | -0/+123
    However, it is appropriate to emit DBG_VALUE referring to this incoming argument in
    the entry block of the MachineFunction.
    llvm-svn: 130129
* Make tests more useful. | Benjamin Kramer | 2011-04-25 | 7 | -10/+10
    lit needs a linter ...
    llvm-svn: 130126
* Accidental function name mangling. | Andrew Trick | 2011-04-23 | 1 | -1/+1
    llvm-svn: 130050
* Thumb2 and ARM add/subtract with carry fixes. | Andrew Trick | 2011-04-23 | 4 | -8/+41
    Fixes Thumb2 ADCS and SBCS lowering: <rdar://problem/9275821>.
    t2ADCS/t2SBCS are now pseudo instructions, consistent with ARM, so the assembly
    printer correctly prints the 's' suffix.
    Fixes Thumb2 adde -> SBC matching to check for live/dead carry flags.
    Fixes the internal ARM machine opcode mnemonic for ADCS/SBCS.
    Fixes ARM SBC lowering to check for live carry (potential bug).
    llvm-svn: 130048
* whitespace | Andrew Trick | 2011-04-23 | 1 | -2/+2
    llvm-svn: 130046
* test/CodeGen/X86/shrink-compare.ll: Relax expressions for Win64. | NAKAMURA Takumi | 2011-04-23 | 1 | -2/+2
    llvm-svn: 130039
* Recommit the fix for rdar://9289512 with a couple tweaks to fix bugs exposed by the gcc dejagnu testsuite: | Chris Lattner | 2011-04-22 | 2 | -0/+57
    1. The load may actually be used by a dead instruction, which would cause an assert.
    2. The load may not be used by the current chain of instructions, and we could move
       it past a side-effecting instruction.
    Change how we process uses to define the problem away.
    llvm-svn: 130018
* DAGCombine: fold "(zext x) == C" into "x == (trunc C)" if the trunc is lossless. | Benjamin Kramer | 2011-04-22 | 1 | -0/+36
    On x86 this allows us to fold a load into the cmp, greatly reducing register pressure.
        movzbl  (%rdi), %eax
        cmpl    $47, %eax
    ->
        cmpb    $47, (%rdi)
    This shaves 8k off gcc.o on i386. I'll leave applying the patch in README.txt to Chris :)
    llvm-svn: 130005
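    A minimal LLVM IR sketch of the kind of pattern this fold targets (a hypothetical
    test function in current IR syntax, not taken from the commit): the zero-extended
    load feeds only an equality compare against a small constant, so the compare can be
    narrowed to i8 and, on x86, the load can then fold into a byte compare.
        ; Hypothetical example: "(zext x) == 47" becomes "x == 47" at i8 width,
        ; letting isel emit cmpb $47, (%rdi) instead of a movzbl plus cmpl.
        define i1 @narrow_cmp(ptr %p) {
          %b = load i8, ptr %p
          %x = zext i8 %b to i32
          %c = icmp eq i32 %x, 47
          ret i1 %c
        }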
* X86: Try to use a smaller encoding by transforming (X << C1) & C2 into (X & (C2 >> C1)) << C1. (Part of PR5039) | Benjamin Kramer | 2011-04-22 | 1 | -0/+101
    This tends to happen a lot with bitfield code generated by clang. A simple example
    for x86_64 is:
        uint64_t foo(uint64_t x) { return (x&1) << 42; }
    which used to compile into bloated code:
        shlq    $42, %rdi               ## encoding: [0x48,0xc1,0xe7,0x2a]
        movabsq $4398046511104, %rax    ## encoding: [0x48,0xb8,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0x00]
        andq    %rdi, %rax              ## encoding: [0x48,0x21,0xf8]
        ret                             ## encoding: [0xc3]
    With this patch we can fold the immediate into the and:
        andq    $1, %rdi                ## encoding: [0x48,0x83,0xe7,0x01]
        movq    %rdi, %rax              ## encoding: [0x48,0x89,0xf8]
        shlq    $42, %rax               ## encoding: [0x48,0xc1,0xe0,0x2a]
        ret                             ## encoding: [0xc3]
    It's possible to save another byte by using 'andl' instead of 'andq', but I currently
    see no way of doing that without making this code even more complicated. See the
    TODOs in the code.
    llvm-svn: 129990
* In Thumb2 mode, lower frame index references to: | Evan Cheng | 2011-04-22 | 1 | -0/+23
        add <rd>, sp, #<imm8>
        ldr <rd>, [sp, #<imm8>]
    when the offset from sp is a multiple of 4 and in the range 0-1020. This saves code
    size by utilizing 16-bit instructions.
    rdar://9321541
    llvm-svn: 129971
* Fix DWARF description of Q registers. | Devang Patel | 2011-04-21 | 1 | -0/+94
    llvm-svn: 129952
* Fix DWARF description of S registers. | Devang Patel | 2011-04-21 | 1 | -0/+116
    llvm-svn: 129947
* Test case for r129922 | Devang Patel | 2011-04-21 | 1 | -0/+105
    llvm-svn: 129934
* Revert r129656, "Fix rdar://9289512 - not folding load into compare at -O0...", | Daniel Dunbar | 2011-04-21 | 1 | -22/+0
    which broke a couple GCC test suite tests at -O0.
    llvm-svn: 129914
* ptx: fix parameter ordering | Che-Liang Chiou | 2011-04-21 | 1 | -4/+4
    This patch depends on the prior fix r129908, which changed to use std::find rather
    than std::binary_search on an unordered array.
    Patch by Dan Bailey
    llvm-svn: 129909
* Remove -use-divmod-libcall. Let targets opt in when they are available. | Evan Cheng | 2011-04-20 | 1 | -1/+1
    llvm-svn: 129884
* Un-XFAIL this test for ARM. <rdar://problem/7662569> | Stuart Hastings | 2011-04-20 | 1 | -1/+0
    llvm-svn: 129875
* PTX: Add intrinsics to list of built-in intrinsics, which allows them to be used by Clang. | Justin Holewinski | 2011-04-20 | 19 | -24/+24
    To help Clang integration, the PTX target has been split into two targets: ptx32 and
    ptx64, depending on the desired pointer size.
    - Add GCCBuiltin class to all intrinsics
    - Split PTX target into ptx32 and ptx64
    llvm-svn: 129851
* Rewrite the expander for umulo/smulo to remember to sign extend the input manually and pass all (now) 4 arguments to the mul libcall. | Eric Christopher | 2011-04-20 | 1 | -0/+27
    Add a new ExpandLibCall for just this (copied gratuitously from type legalization).
    Fixes rdar://9292577
    llvm-svn: 129842
* llc: Eliminate a use of getDarwinMajorNumber(). | Daniel Dunbar | 2011-04-19 | 2 | -3/+3
    - As before, there is a minor semantic change here (evidenced by the test change) for
      Darwin triples that have no version component. I debated changing the default
      behavior of isOSVersionLT, but decided it made more sense for triples to be explicit.
    llvm-svn: 129805
* CodeGen: Eliminate a use of getDarwinMajorNumber(). | Daniel Dunbar | 2011-04-19 | 1 | -1/+1
    - There is a minor semantic change here (evidenced by the test change) for Darwin
      triples that have no version component. I debated changing the default behavior of
      isOSVersionLT, but decided it made more sense for triples to be explicit.
    llvm-svn: 129802
* This patch combines several changes from Evan Cheng for rdar://8659675. | Bob Wilson | 2011-04-19 | 1 | -0/+53
    Making use of VFP / NEON floating point multiply-accumulate / subtraction is
    difficult on current ARM implementations for a few reasons.
    1. Even though a single vmla has latency that is one cycle shorter than a pair of
       vmul + vadd, a RAW hazard during the first (4? on Cortex-A8) can cause additional
       pipeline stalls. So it's frequently better to simply codegen vmul + vadd.
    2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
       stall for 4 cycles. We need to schedule them apart.
    3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back RAW
       vmla + vmla is very bad. But this isn't ideal either:
           vmul
           vadd
           vmla
       Instead, we want to expand the second vmla:
           vmla
           vmul
           vadd
       Even with the 4 cycle vmul stall, the second sequence is still 2 cycles faster.
    Up to now, isel simply avoided codegen'ing fp vmla / vmls. This works well enough
    but it isn't the optimal solution. This patch attempts to make it possible to use
    vmla / vmls in cases where it is profitable.
    A. Add missing isel predicates which cause vmla to be codegen'ed.
    B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to compute
       a fmul and a fmla.
    C. Add additional isel checks for vmla, avoid cases where vmla is feeding into fp
       instructions (except for the #3 exceptional case).
    D. Add an ARM hazard recognizer to model the vmla / vmls hazards.
    E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
       vmla / vmls will trigger one of the special hazards.
    Enable these fp vmlx codegen changes for Cortex-A9.
    llvm-svn: 129775
* Add -mcpu=cortex-a9-mp. It's cortex-a9 with MP extension. rdar://8648637. | Bob Wilson | 2011-04-19 | 1 | -8/+13
    llvm-svn: 129774
* Avoid some 16-bit 's' instructions which partially update CPSR | Bob Wilson | 2011-04-19 | 1 | -0/+16
    (and add a false dependency) when they aren't dependent on the last CPSR-defining
    instruction.
    rdar://8928208
    llvm-svn: 129773
* Avoid write-after-write issue hazards for Cortex-A9. | Bob Wilson | 2011-04-19 | 5 | -8/+8
    Add an avoidWriteAfterWrite() target hook to identify register classes that suffer
    from write-after-write hazards. For those register classes, try to avoid writing the
    same register in two consecutive instructions.
    This is currently disabled by default. We should not spill to avoid hazards! The
    command line flag -avoid-waw-hazard can be used to enable waw avoidance.
    llvm-svn: 129772
* Add support for FastISel'ing varargs calls. | Eli Friedman | 2011-04-19 | 1 | -0/+19
    llvm-svn: 129765
* Tighten test case a bit. | Jakob Stoklund Olesen | 2011-04-19 | 1 | -1/+2
    Ideally, we would match an S-register to its containing D-register, but that requires
    arithmetic (divide by 2).
    llvm-svn: 129756
* Implement support for x86 fastisel of small fixed-sized memcpys, which are generated en masse for C++ PODs. | Chris Lattner | 2011-04-19 | 1 | -0/+11
    On my C++ test file, this cuts the fast isel rejects by 10x and shrinks the generated
    .s file by 5%.
    llvm-svn: 129755
* Implement support for fast isel of calls of i1 arguments, even though they are illegal, when they are a truncate from something else. | Chris Lattner | 2011-04-19 | 1 | -0/+13
    This eliminates fully half of all the fastisel rejections on a test C++ file I'm
    working with, which should make a substantial improvement for -O0 compiles of C++
    code.
    This fixes rdar://9297003 - fast isel bails out on all functions taking bools
    llvm-svn: 129752
* Handle i1/i8/i16 constant integer arguments to calls by prepromoting them. | Chris Lattner | 2011-04-19 | 1 | -1/+12
    Before we would bail out on i1 arguments altogether; now we just bail on non-constant
    ones. Also, we used to emit extraneous code, e.g. test12 was:
        movb $0, %al
        movzbl %al, %edi
        callq _test12
    and test13 was:
        movb $0, %al
        xorl %edi, %edi
        movb %al, 7(%rsp)
        callq _test13f
    Now we get:
        movl $0, %edi
        callq _test12
    and:
        movl $0, %edi
        callq _test13f
    llvm-svn: 129751
* be layout aware, to produce: | Chris Lattner | 2011-04-19 | 1 | -0/+2
        testb $1, %al
        je LBB0_2
    ## BB#1:                                ## %if.then
        movb $0, %al
    instead of:
        testb $1, %al
        jne LBB0_1
        jmp LBB0_2
    LBB0_1:                                 ## %if.then
        movb $0, %al
    how 'bout that.
    llvm-svn: 129749
* fix rdar://9297006 - fast isel bails out on trunc to i1 -> bools cry, a common cause of fast isel rejects on c++ code. | Chris Lattner | 2011-04-19 | 1 | -0/+17
    llvm-svn: 129748
* Make tests register allocation independent again. | Jakob Stoklund Olesen | 2011-04-19 | 4 | -14/+10
    llvm-svn: 129739
* Do not lose mem_operands while lowering VLD / VST intrinsics. | Evan Cheng | 2011-04-19 | 2 | -5/+7
    llvm-svn: 129738
* Fix a bug where we were counting the alias sets as completely used registers for fast allocation a different way. | Eric Christopher | 2011-04-18 | 1 | -0/+15
    This has us updating used registers only when we're using that exact register.
    Fixes rdar://9207598
    llvm-svn: 129711
* while we're at it, handle 'sdiv exact' of a power of 2 also, | Chris Lattner | 2011-04-18 | 1 | -0/+8
    this fixes a few rejects on c++ iterator loops.
    llvm-svn: 129694
* fix rdar://9297011 - udiv by power of two causing fast-isel rejects | Chris Lattner | 2011-04-18 | 1 | -1/+9
    llvm-svn: 129693
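    A minimal LLVM IR sketch (hypothetical functions, not the actual test cases) of the
    power-of-two divisions that this change and the 'sdiv exact' change above let fast
    isel select as shifts instead of rejecting:
        ; Hypothetical examples: udiv by a power of two can be selected as a logical
        ; shift right, and sdiv exact by a power of two as an arithmetic shift right,
        ; so fast isel no longer has to fall back to the DAG selector for them.
        define i32 @udiv_pow2(i32 %x) {
          %r = udiv i32 %x, 8
          ret i32 %r
        }
        define i32 @sdiv_exact_pow2(i32 %x) {
          %r = sdiv exact i32 %x, 8
          ret i32 %r
        }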
* Implement major new fastisel functionality: the matcher can now handle immediates with value constraints on them (when defined as ImmLeaf's). | Chris Lattner | 2011-04-18 | 1 | -0/+18
    This is particularly important for X86-64, where almost all reg/imm instructions take
    an i64immSExt32 immediate operand, which has a value constraint. Before this patch we
    ended up iseling the examples into such amazing code as:
        movabsq $7, %rax
        imulq   %rax, %rdi
        movq    %rdi, %rax
        ret
    now we produce:
        imulq   $7, %rdi, %rax
        ret
    This dramatically shrinks the generated code at -O0 on x86-64.
    llvm-svn: 129691
* relax this test to just check that the lock prefix is encoded properly, | Chris Lattner | 2011-04-18 | 1 | -2/+1
    and to not rely on the register allocator's arbitrary operand choices.
    llvm-svn: 129690
* 1. merge fast-isel-shift-imm.ll into fast-isel-x86-64.ll | Chris Lattner | 2011-04-17 | 3 | -12/+36
    2. implement rdar://9289501 - fast isel should fold trivial multiplies to shifts
    3. teach tblgen to handle shift immediates that are different sizes than the shifted
       operands, eliminating some code from the X86 fast isel backend.
    4. Have FastISel::SelectBinaryOp use (the poorly named) FastEmit_ri_ function instead
       of FastEmit_ri to simplify code.
    llvm-svn: 129666
* fix an x86 fast isel issue where we'd completely give up on folding an address when we have a global variable base and an index. | Chris Lattner | 2011-04-17 | 1 | -4/+20
    Instead, just give up on folding the global variable. Before we'd generate:
    _test:                                  ## @test
    ## BB#0:
        movq    _rtx_length@GOTPCREL(%rip), %rax
        leaq    (%rax), %rax
        addq    %rdi, %rax
        movzbl  (%rax), %eax
        ret
    now we generate:
    _test:                                  ## @test
    ## BB#0:
        movq    _rtx_length@GOTPCREL(%rip), %rax
        movzbl  (%rax,%rdi), %eax
        ret
    The difference is even more significant when there is a scale involved.
    This fixes rdar://9289558 - total fail with addr mode formation at -O0/x86-64
    llvm-svn: 129664
* fix an oversight which caused us to compile the testcase (and other less trivial things) into a dummy lea. | Chris Lattner | 2011-04-17 | 1 | -0/+12
    Before we generated:
    _test:                                  ## @test
        movq    _G@GOTPCREL(%rip), %rax
        leaq    (%rax), %rax
        ret
    now we produce:
    _test:                                  ## @test
        movq    _G@GOTPCREL(%rip), %rax
        ret
    This is part of rdar://9289558
    llvm-svn: 129662
* Fix rdar://9289512 - not folding load into compare at -O0 | Chris Lattner | 2011-04-17 | 1 | -1/+22
    The basic issue here is that bottom-up isel is matching the branch and compare, and
    was failing to fold the load into the branch/compare combo. Fixing this (by allowing
    folding into any instruction of a sequence that is selected) allows us to produce
    things like:
        cmpb $0, 52(%rax)
        je LBB4_2
    instead of:
        movb 52(%rax), %cl
        cmpb $0, %cl
        je LBB4_2
    This makes the generated -O0 code run a bit faster, but also speeds up compile time
    by putting less pressure on the register allocator and generating less code. This was
    one of the biggest classes of missing load folding. Implementing this shrinks
    176.gcc's c-decl.s (as a random example) by about 4% in (verbose-asm) line count.
    llvm-svn: 129656
* Remove working entry from README. | Eli Friedman | 2011-04-17 | 1 | -1/+1
    llvm-svn: 129654
* fix rdar://9289583 - fast isel should handle non-canonical commutative binops | Chris Lattner | 2011-04-17 | 1 | -0/+14
    allowing us to fold the immediate into the 'and' in this case:
        int test1(int i) { return 8&i; }
    llvm-svn: 129653
* PR9055: extend the fix to PR4050 (r70179) to apply to zext and anyext. | Eli Friedman | 2011-04-16 | 1 | -0/+23
    Returning a new node makes the code try to replace the old node, which in the
    included testcase is killed by CSE.
    llvm-svn: 129650
* Fix divmod libcall lowering. Convert to {S|U}DIVREM first and then expand the node to a libcall. | Evan Cheng | 2011-04-16 | 1 | -0/+31
    rdar://9280991
    llvm-svn: 129633
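    A minimal LLVM IR sketch (a hypothetical function, not taken from the commit) of the
    kind of input this lowering targets: a quotient and remainder of the same operands,
    which the legalizer can merge into one {S|U}DIVREM node and then expand into a single
    divmod-style libcall on targets without a hardware divider.
        ; Hypothetical example: %q and %r share operands, so after legalization both
        ; can come from one SDIVREM node and thus one combined libcall, instead of
        ; separate division and remainder calls.
        define i32 @quotrem(i32 %a, i32 %b) {
          %q = sdiv i32 %a, %b
          %r = srem i32 %a, %b
          %s = add i32 %q, %r
          ret i32 %s
        }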
* Re-enable test o32_cc_vararg.ll. | Akira Hatanaka | 2011-04-15 | 1 | -3/+0
    llvm-svn: 129616