path: root/llvm/lib/Target/X86
...
* Rename a function. (Nadav Rotem, 2012-12-23; 1 file, -4/+4)
  llvm-svn: 170996
* Loop Vectorizer: Update the cost model of scatter/gather operations and make them more expensive. (Nadav Rotem, 2012-12-23; 1 file, -1/+0)
  llvm-svn: 170995
* X86: Turn mul of <4 x i32> into pmuludq when no SSE4.1 is available. (Benjamin Kramer, 2012-12-22; 1 file, -5/+29)
  pmuludq is slow, but it turns out that all the unpacking and packing of the scalarized mul is even slower. 10% speedup on loop-vectorized paq8p.
  llvm-svn: 170985
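  As an illustration (not the commit's actual code; the function name is invented), the SSE2-only sequence looks roughly like this in C intrinsics: pmuludq (_mm_mul_epu32) multiplies the even 32-bit lanes into 64-bit products, so two of them plus shuffles recover a full four-lane multiply:

      #include <emmintrin.h>  /* SSE2 */

      /* Multiply four 32-bit lanes without SSE4.1's pmulld: one pmuludq
         handles lanes 0 and 2, another on byte-shifted inputs handles
         lanes 1 and 3, and shuffles re-interleave the low halves of the
         64-bit products (the low 32 bits are signedness-agnostic). */
      static __m128i mul_epi32_sse2(__m128i a, __m128i b) {
          __m128i even = _mm_mul_epu32(a, b);
          __m128i odd  = _mm_mul_epu32(_mm_srli_si128(a, 4),
                                       _mm_srli_si128(b, 4));
          even = _mm_shuffle_epi32(even, _MM_SHUFFLE(0, 0, 2, 0));
          odd  = _mm_shuffle_epi32(odd,  _MM_SHUFFLE(0, 0, 2, 0));
          return _mm_unpacklo_epi32(even, odd);
      }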
* X86: Emit vector sext as shuffle + sra if vpmovsx is not available. (Benjamin Kramer, 2012-12-22; 1 file, -8/+39)
  Also loosen the SSSE3 dependency a bit; expanded pshufb + psra is still better than scalarized loads. Fixes PR14590.
  llvm-svn: 170984
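  A rough sketch of the SSSE3 path (illustrative only; the function name is invented): pshufb parks each source byte in the top byte of its destination lane, and an arithmetic shift pulls it back down while replicating the sign bit:

      #include <tmmintrin.h>  /* SSSE3: pshufb */

      /* Sign-extend the low four i8 lanes of v to i32 without pmovsxbd. */
      static __m128i sext_epi8_epi32(__m128i v) {
          /* Byte k of v goes to the top byte of result lane k; -1 zeroes
             the remaining bytes. */
          const __m128i m = _mm_set_epi8(3, -1, -1, -1, 2, -1, -1, -1,
                                         1, -1, -1, -1, 0, -1, -1, -1);
          /* psrad by 24 shifts the byte back down, filling with its sign. */
          return _mm_srai_epi32(_mm_shuffle_epi8(v, m), 24);
      }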
* In some cases, due to scheduling constraints, we copy the EFLAGS. (Nadav Rotem, 2012-12-21; 2 files, -1/+21)
  The only way to read EFLAGS is using push and pop. If we don't adjust the stack, then we run over the first frame index. This is not something that we want to do, so we have to make sure that our machine function does not copy the flags. If it does, then we have to emit the prologue that adjusts the stack.
  rdar://12896831
  llvm-svn: 170961
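  The push/pop dance referred to above looks like this at the source level (a minimal x86-64 sketch using GCC-style inline asm, not the backend's code):

      /* pushfq writes EFLAGS to the new stack top; if the prologue never
         adjusted rsp, that store could land on the first frame object. */
      static unsigned long read_eflags(void) {
          unsigned long flags;
          __asm__ volatile("pushfq\n\tpopq %0" : "=r"(flags));
          return flags;
      }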
* X86: Match pmin/pmax as a target-specific DAG combine. This occurs during vectorization. (Benjamin Kramer, 2012-12-21; 1 file, -0/+77)
  Part of PR14667.
  llvm-svn: 170908
* X86: Match the SSE/AVX min/max vector ops using a custom node instead of intrinsics. (Benjamin Kramer, 2012-12-21; 5 files, -97/+171)
  This is very mechanical; no functionality change. Preparation for PR14667.
  llvm-svn: 170898
* Add a missing "virtual" keyword. (Nadav Rotem, 2012-12-21; 1 file, -2/+2)
  llvm-svn: 170842
* Improve the X86 cost model for loads and stores. (Nadav Rotem, 2012-12-21; 2 files, -0/+28)
  llvm-svn: 170830
* Add an MF argument to MI::copyImplicitOps(). (Jakob Stoklund Olesen, 2012-12-20; 1 file, -1/+1)
  This function is often used to decorate dangling instructions, so a context reference is required to allocate memory for the operands. Also add a corresponding MachineInstrBuilder method.
  llvm-svn: 170797
* Remove MCTargetAsmLexer and its derived classes now that edis, its only user, is gone. (Roman Divacky, 2012-12-20; 3 files, -164/+0)
  llvm-svn: 170699
* Fix use-before-construction of X86TargetLowering. (Richard Smith, 2012-12-20; 2 files, -4/+4)
  llvm-svn: 170654
* MC: Add MCInstrDesc::mayAffectControlFlow() method. (Jim Grosbach, 2012-12-19; 2 files, -4/+7)
  MC disassembler clients (LLDB) are interested in querying whether an instruction may affect control flow other than by virtue of being an explicit branch instruction: for example, instructions which write directly to the PC on some architectures.
  llvm-svn: 170610
* Remove the explicit MachineInstrBuilder(MI) constructor. (Jakob Stoklund Olesen, 2012-12-19; 1 file, -18/+21)
  Use the version that also takes an MF reference instead. It would technically be possible to extract an MF reference from the MI as MI->getParent()->getParent(), but that would not work for MIs that are not inserted into any basic block. Given the reasonably small number of places this constructor was used at all, I preferred the compile-time check to a run-time assertion.
  llvm-svn: 170588
* Remove edis - the enhanced disassembler. Fixes PR14654. (Roman Divacky, 2012-12-19; 4 files, -13/+1)
  llvm-svn: 170578
* Transform (x&C)>V into (x&C)!=0 where possible. (Paul Redmond, 2012-12-19; 1 file, -37/+0)
  When the lowest set bit of C is greater than V, (x&C) must be greater than V if it is not zero, so the comparison can be simplified. Although this was suggested in Target/X86/README.txt, it benefits any architecture with a directly testable form of AND. Patch by Kevin Schoedel.
  llvm-svn: 170576
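  A concrete instance of the argument (my own example, not from the patch): with C = 12 the lowest set bit is 4, so any nonzero (x & 12) is at least 4, and for V = 3 the two forms agree for every x:

      #include <assert.h>

      /* Lowest set bit of C (12) is 4 > V (3), so a nonzero (x & C) always
         exceeds V and the ordered compare collapses to a zero test. */
      static int gt_form(unsigned x) { return (x & 12u) > 3u; }
      static int ne_form(unsigned x) { return (x & 12u) != 0u; }

      int main(void) {
          for (unsigned x = 0; x < 1024; ++x)
              assert(gt_form(x) == ne_form(x));
          return 0;
      }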
* Change TargetLowering::getTypeForExtArgOrReturn to take and return MVTs, instead of EVTs. (Patrik Hagglund, 2012-12-19; 2 files, -6/+5)
  llvm-svn: 170537
* Change TargetLowering::RegisterTypeForVT to contain MVTs, instead of EVTs. (Patrik Hagglund, 2012-12-19; 1 file, -2/+2)
  llvm-svn: 170535
* Change TargetLowering::findRepresentativeClass to take an MVT, instead of EVT. (Patrik Hagglund, 2012-12-19; 2 files, -3/+3)
  llvm-svn: 170532
* X86ISelLowering.cpp: Fix warnings. [-Wlogical-op-parentheses] (NAKAMURA Takumi, 2012-12-19; 1 file, -2/+2)
  llvm-svn: 170523
* Optimized load + SIGN_EXTEND patterns in the X86 backend. (Elena Demikhovsky, 2012-12-19; 2 files, -4/+103)
  llvm-svn: 170506
* Rename the 'Attributes' class to 'Attribute'. It's going to represent a single attribute in the future. (Bill Wendling, 2012-12-19; 6 files, -22/+22)
  llvm-svn: 170502
* Reverse order of checking SSE level when calculating compare cost, so we check AVX2 before AVX. (Jakub Staszak, 2012-12-18; 1 file, -6/+6)
  llvm-svn: 170464
* Remove EFLAGS from the BLSI/BLSMSK/BLSR patterns. The nodes created by DAG combine don't contain an EFLAGS def. (Craig Topper, 2012-12-17; 1 file, -11/+11)
  llvm-svn: 170308
* Simplify BMI ANDN matching to use patterns instead of a DAG combine. Also add ANDN to isDefConvertible. (Craig Topper, 2012-12-17; 4 files, -13/+16)
  llvm-svn: 170305
* Add the rest of the BMI/BMI2 instructions to the folding tables, as well as popcnt and lzcnt. (Craig Topper, 2012-12-17; 1 file, -1/+26)
  llvm-svn: 170304
* Remove store forms of DEC/INC from isDefConvertible. Since they are stores, they don't have a register def. (Craig Topper, 2012-12-17; 1 file, -6/+2)
  llvm-svn: 170303
* X86: Add a couple of target-specific DAG combines that turn VSELECTs into psubus if possible. (Benjamin Kramer, 2012-12-15; 4 files, -18/+88)
  We match the pattern "x >= y ? x-y : 0" into "subus x, y", plus two special cases when y is a constant. DAGCombiner canonicalizes those, so we first have to undo the canonicalization for those cases. The pattern occurs in gzip when the loop vectorizer is enabled. Part of PR14613.
  llvm-svn: 170273
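  The scalar shape of the pattern and the instruction it maps to, as an illustrative sketch (the actual combine rewrites whole VSELECT nodes; function names are invented):

      #include <emmintrin.h>  /* SSE2 */

      /* Per element: unsigned saturating subtract, x >= y ? x - y : 0. */
      static unsigned short subus_scalar(unsigned short x, unsigned short y) {
          return x >= y ? (unsigned short)(x - y) : 0;
      }

      /* One psubusw covers eight 16-bit lanes at once. */
      static __m128i subus_vector(__m128i x, __m128i y) {
          return _mm_subs_epu16(x, y);
      }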
* Make '-mtune=x86_64' assume fast unaligned memory accesses. (Chandler Carruth, 2012-12-15; 1 file, -1/+2)
  Not all chips targeted by x86_64 have this feature, but a dramatically increasing number do. Specifying a chip-specific tuning parameter will continue to turn the feature on or off as appropriate for that particular chip, but the generic flag should try to achieve the best performance on the most widely available hardware. Today, the number of chips with fast UA access dwarfs those without in the x86-64 space.
  Note that this also brings LLVM's code generation for this '-march' flag more in line with that of modern GCCs.
  Reviewed by Dan Gohman.
  llvm-svn: 170269
* TypeLegalizer: Do not generate target-specific nodes with illegal types, because we can't type-legalize them. (Nadav Rotem, 2012-12-14; 1 file, -0/+3)
  llvm-svn: 170245
* Fix a bogus comment. (Eli Bendersky, 2012-12-13; 1 file, -3/+3)
  llvm-svn: 170052
* Sorry about the churn. One more change to the getOptimalMemOpType() hook. (Evan Cheng, 2012-12-12; 2 files, -14/+12)
  Did I mention the inline memcpy / memset expansion code is a mess? This patch splits the ZeroOrLdSrc argument into two: IsMemset and ZeroMemset. The first indicates whether it is expanding a memset or a memcpy / memmove. The latter is whether the memset is a memset of zero. It's totally possible (likely even) that targets may want to do different things for memcpy and memset of zero.
  llvm-svn: 169959
* Rename isLegalMemOpType to isSafeMemOpType. "Legal" is a very overloaded term. (Evan Cheng, 2012-12-12; 2 files, -11/+12)
  Also added more comments to explain why it is generally OK to return true.
  - Rename getOptimalMemOpType argument IsZeroVal to ZeroOrLdSrc. It's meant to be true for a loaded source (memcpy) or zero constants (memset). The poor name choice is probably some kind of legacy issue.
  llvm-svn: 169954
* Avoid using lossy loads / stores for memcpy / memset expansion, e.g. f64 load / store on non-SSE2 x86 targets. (Evan Cheng, 2012-12-12; 2 files, -0/+15)
  llvm-svn: 169944
* Revert EVT->MVT changes, r169836-169851, due to buildbot failures. (Patrik Hagglund, 2012-12-11; 3 files, -10/+10)
  llvm-svn: 169854
* Change TargetLowering::getTypeForExtArgOrReturn to take and return MVTs, instead of EVTs. Accordingly, add bitsLT (and similar) to MVT. (Patrik Hagglund, 2012-12-11; 2 files, -5/+5)
  llvm-svn: 169850
* Change TargetLowering::RegisterTypeForVT to contain MVTs, instead of EVTs. (Patrik Hagglund, 2012-12-11; 1 file, -2/+2)
  llvm-svn: 169848
* Change TargetLowering::findRepresentativeClass to take an MVT, instead of EVT. (Patrik Hagglund, 2012-12-11; 2 files, -3/+3)
  llvm-svn: 169845
* Fall back to the selection DAG isel to select tail calls. (Chad Rosier, 2012-12-11; 1 file, -0/+4)
  This shouldn't affect codegen for -O0 compiles, as tail call markers are not emitted in unoptimized compiles. Testing with the external/internal nightly test suite reveals no change in compile-time performance. Testing with -O1, -O2 and -O3 with fast-isel enabled did not cause any compile-time or execution-time failures. All tests were performed on my x86 machine. I'll monitor our ARM testers to ensure no regressions occur there.
  In an upcoming clang patch I will be marking the objc_autoreleaseReturnValue and objc_retainAutoreleaseReturnValue as tail calls unconditionally. While it's theoretically true that this is just an optimization, it's an optimization that we very much want to happen even at -O0, or else ARC applications become substantially harder to debug.
  Part of rdar://12553082
  llvm-svn: 169796
* Some enhancements for memcpy / memset inline expansion. (Evan Cheng, 2012-12-10; 2 files, -4/+10)
  1. Teach it to use overlapping unaligned load / store to copy / set the trailing bytes, e.g. on x86, use two pairs of movups / movaps for 17 - 31 byte copies (sketched below).
  2. Use f64 for memcpy / memset on targets where i64 is not legal but f64 is, e.g. x86 and ARM.
  3. When memcpy from a constant string, do *not* replace the load with a constant if it's not possible to materialize an integer immediate with a single instruction (required a new target hook: TLI.isIntImmLegal()).
  4. Use unaligned load / stores more aggressively if target hooks indicate they are "fast".
  5. Update ARM target hooks to use unaligned load / stores, e.g. vld1.8 / vst1.8. Also increase the threshold to something reasonable (8 for memset, 4 pairs for memcpy).
  This significantly improves Dhrystone, up to 50% on ARM iOS devices.
  rdar://12760078
  llvm-svn: 169791
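  A sketch of point 1 in C intrinsics (illustrative; the name is invented): the second pair of moves is anchored at the end of the buffer, so it overlaps the first instead of scalarizing the tail:

      #include <emmintrin.h>  /* SSE2 */

      /* Copy n bytes, 16 <= n <= 32: the tail load/store overlaps the head
         pair rather than copying the last n - 16 bytes one at a time. */
      static void copy_with_overlap(char *dst, const char *src, unsigned n) {
          __m128i head = _mm_loadu_si128((const __m128i *)src);
          __m128i tail = _mm_loadu_si128((const __m128i *)(src + n - 16));
          _mm_storeu_si128((__m128i *)dst, head);
          _mm_storeu_si128((__m128i *)(dst + n - 16), tail);
      }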
* Revert "Make '-mtune=x86_64' assume fast unaligned memory accesses."Chandler Carruth2012-12-101-2/+1
| | | | | | Accidental commit... git svn betrayed me. Sorry for the noise. llvm-svn: 169741
* Make '-mtune=x86_64' assume fast unaligned memory accesses. (Chandler Carruth, 2012-12-10; 1 file, -1/+2)
  Summary: Not all chips targeted by x86_64 have this feature, but a dramatically increasing number do. Specifying a chip-specific tuning parameter will continue to turn the feature on or off as appropriate for that particular chip, but the generic flag should try to achieve the best performance on the most widely available hardware. Today, the number of chips with fast UA access dwarfs those without in the x86-64 space. Note that this also brings LLVM's code generation for this '-march' flag more in line with that of modern GCCs.
  CC: llvm-commits
  Differential Revision: http://llvm-reviews.chandlerc.com/D195
  llvm-svn: 169740
* Fix a typo in my previous commit -- bloomfield is 0x1A, not 0x2A. (Chandler Carruth, 2012-12-10; 1 file, -1/+1)
  Thanks to the PaX folks for noticing in review! We need some tests here; any suggestions welcome.
  llvm-svn: 169739
* Address a FIXME and update the fast unaligned memory feature for newer Intel chips. (Chandler Carruth, 2012-12-10; 2 files, -13/+21)
  The model number rules were determined by inspecting Intel's documentation for their newer chip model numbers. My understanding is that all of the newer Intel chips have fast unaligned memory access, but if anyone is concerned about a particular chip, just shout.
  No tests updated; it's not clear we have dedicated tests for the chips' various features, but if anyone would like tests (or can point me at some existing ones), I'm happy to oblige.
  llvm-svn: 169730
* Re-enable population count loop idiom recognition. (Shuxin Yang, 2012-12-09; 3 files, -1/+20)
  - Fix a bug which caused a segfault.
  - Add two test cases which were causing crashes.
  llvm-svn: 169687
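  The recognized idiom is presumably the classic bit-clearing loop (my sketch, not the pass's test case): the trip count equals the number of set bits, so the whole loop can be replaced by a single popcnt:

      /* Each iteration clears the lowest set bit, so the loop runs exactly
         popcount(x) times. */
      static int popcount_loop(unsigned x) {
          int n = 0;
          while (x) {
              x &= x - 1;  /* clear lowest set bit */
              ++n;
          }
          return n;
      }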
* Revert the patches adding a popcount loop idiom recognition pass. (Chandler Carruth, 2012-12-08; 3 files, -20/+1)
  There are still bugs in this pass, as well as other issues that are being worked on, but the bugs are crashers that occur pretty easily in the wild. Test cases have been sent to the original commit's review thread. This reverts the commits:
  - r169671: Fix a logic error.
  - r169604: Move the popcnt tests to an X86 subdirectory.
  - r168931: Initial commit adding the pass.
  llvm-svn: 169683
* s/AttrListPtr/AttributeSet/g to better label what this class is going to be in the near future. (Bill Wendling, 2012-12-07; 1 file, -1/+1)
  llvm-svn: 169651
* When we use the BLEND instruction that uses the MSB as a mask, we can remove the VSRI instruction before it, since it does not affect the MSB. (Nadav Rotem, 2012-12-07; 1 file, -1/+6)
  Thanks to Craig Topper for suggesting this.
  llvm-svn: 169638
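  To see why only the MSB matters, here is an illustrative sketch with SSE4.1's blendv (my example; the node the commit touches may differ): the instruction selects per lane on the mask's sign bit alone, so anything that only rearranges a mask's lower bits cannot change the result:

      #include <smmintrin.h>  /* SSE4.1: blendvps */

      /* Per lane: result = (mask sign bit set) ? b : a. Bits 0-30 of each
         mask lane are ignored, which is what makes the preceding shift dead. */
      static __m128 blend_on_msb(__m128 a, __m128 b, __m128 mask) {
          return _mm_blendv_ps(a, b, mask);
      }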
* X86: Prefer using VPSHUFD over VPERMIL because it has better throughput. (Nadav Rotem, 2012-12-07; 1 file, -3/+4)
  llvm-svn: 169624
* Replace r169459 with something safer. (Evan Cheng, 2012-12-06; 2 files, -38/+25)
  Rather than having computeMaskedBits understand target implementations of any_extend / extload, just generate zero_extend in place of any_extend for live-outs when the target knows the zero_extend will be implicit (e.g. ARM ldrb / ldrh) or folded (e.g. x86 movz).
  rdar://12771555
  llvm-svn: 169536