path: root/llvm/lib
Commit message [Author, Age, Files, Lines]
...
* Hoist the logic to transform shift+mask combinations into sub-register extracts and scaled addressing modes into its own helper function.
  [Chandler Carruth, 2012-01-11, 1 file, -56/+68]
  No functionality changed here, just hoisting and layout fixes falling out of that hoisting.
  llvm-svn: 147937
* Teach the X86 instruction selection to do some heroic transforms to detect a pattern which can be implemented with a small 'shl' embedded in the addressing mode scale.
  [Chandler Carruth, 2012-01-11, 2 files, -0/+169]
  This happens in real code as follows:

      unsigned x = my_accelerator_table[input >> 11];

  Here we have some lookup table that we look into using the high bits of 'input'. Each entity in the table is 4-bytes, which means this implicitly gets turned into (once lowered out of a GEP):

      *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));

  The shift right followed by a shift left is canonicalized to a smaller shift right and masking off the low bits. That hides the shift right which x86 has an addressing mode designed to support. We now detect masks of this form, and produce the longer shift right followed by the proper addressing mode. In addition to saving a (rather large) instruction, this also reduces stalls in Intel chips on benchmarks I've measured.

  In order for all of this to work, one part of the DAG needs to be canonicalized *still further* than it currently is. This involves removing pointless 'trunc' nodes between a zextload and a zext. Without that, we end up generating spurious masks and hiding the pattern.

  llvm-svn: 147936
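  A minimal sketch of the canonicalization described above (the function name is made up and the constants simply mirror the 4-byte-element example in the message; this is not code from the patch):

      // (input >> 11) << 2 is the byte offset into a table of 4-byte
      // elements. Generic DAG combining rewrites it as a smaller right
      // shift plus a low-bit mask, which hides the scaled-index
      // addressing mode; both forms compute the same value.
      unsigned table_byte_offset(unsigned input) {
        return (input >> 9) & ~3u;   // same as (input >> 11) << 2
      }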
* Improved compile time:
  [Stepan Dyatkovskiy, 2012-01-11, 1 file, -38/+98]
  1. Size heuristics changed. Now we calculate the number of unswitching branches only once per loop.
  2. Some checks were moved from UnswitchIfProfitable to processCurrentLoop, since they do not change during the processCurrentLoop iteration. This allows us to decide to skip some loops at an early stage.
  Extended statistics:
  - Added total number of instructions analyzed.
  llvm-svn: 147935
* Clarified the SCEV getSmallConstantTripCount interface with in-your-face comments.
  [Andrew Trick, 2012-01-11, 1 file, -9/+18]
  This interface is misleading and dangerous, but it is actually what we need for unrolling.
  llvm-svn: 147926
* Add big endian mips support. Based on a patch by Jack Carter.
  [Rafael Espindola, 2012-01-11, 3 files, -16/+20]
  llvm-svn: 147924
* Add the skeleton of an asm parser for mips.
  [Rafael Espindola, 2012-01-11, 7 files, -2/+114]
  llvm-svn: 147923
* ARM Ld/St Optimizer fix.
  [Andrew Trick, 2012-01-11, 1 file, -3/+4]
  Allow LDRD to be formed from pairs with different LDR encodings. This was the original intention of the pass. Somewhere along the way, the LDR opcodes were refined which broke the optimization. We really don't care what the original opcodes are as long as they both map to the same LDRD and the immediate still fits.
  Fixes rdar://10435045: ARMLoadStoreOptimization cannot handle mixed LDRi8/LDRi12.
  llvm-svn: 147922
* Detect when a value is undefined on an edge to a landing pad.
  [Jakob Stoklund Olesen, 2012-01-11, 1 file, -4/+19]
  Consider this code:

      int h() {
        int x;
        try {
          x = f();
          g();
        } catch (...) {
          return x+1;
        }
        return x;
      }

  The variable x is undefined on the first edge to the landing pad, but it has the f() return value on the second edge to the landing pad. SplitAnalysis::getLastSplitPoint() would assume that the return value from f() was live into the landing pad when f() throws, which is of course impossible.

  Detect these cases, and treat them as if the landing pad wasn't there. This allows spill code to be inserted after the function call to f().

  <rdar://problem/10664933>
  llvm-svn: 147912
* Exclusively use SplitAnalysis::getLastSplitPoint().
  [Jakob Stoklund Olesen, 2012-01-11, 3 files, -25/+14]
  Delete the alternative implementation in LiveIntervalAnalysis. These functions computed the same thing, but SplitAnalysis caches the result.
  llvm-svn: 147911
* Avoid CSE of instructions which define physical registers across MBBs unless the physical registers are not allocatable.
  [Evan Cheng, 2012-01-11, 1 file, -4/+12]
  llvm-svn: 147902
* If the global variable is removed by the linker, then don't constant merge it with other symbols.
  [Bill Wendling, 2012-01-11, 1 file, -6/+10]
  An object in the __cfstring section is supposed to be filled with CFString objects, which have a pointer to ___CFConstantStringClassReference followed by a pointer to a __cstring. If we allow the object in the __cstring section to be merged with another global, then it could end up in any section. Because the linker is going to remove these symbols in the final executable, we shouldn't bother to merge them.
  <rdar://problem/10564621>
  llvm-svn: 147899
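  A rough sketch of the object layout the message describes (field names are illustrative; the real definition lives in CoreFoundation, not in this patch):

      // An object in the __cfstring section: an isa pointer to
      // ___CFConstantStringClassReference, flags, a pointer into the
      // __cstring section, and the string length. If the __cstring data
      // it points at were merged with another global, that data could
      // end up in a different section.
      struct ConstantCFString {
        const void *isa;     // &___CFConstantStringClassReference
        int flags;
        const char *bytes;   // points into the __cstring section
        long length;
      };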
* Don't avoid recursing for pointer types, just reference types. Expand on the comment.
  [Eric Christopher, 2012-01-11, 1 file, -3/+4]
  Fixes constvars.exp on the gdb test builder.
  llvm-svn: 147897
* Fixed order of operands in comment to match code.
  [Lang Hames, 2012-01-10, 1 file, -1/+1]
  llvm-svn: 147890
* Default stack alignment for 32bit x86 should be 4 Bytes, not 8 Bytes.
  [Joerg Sonnenberger, 2012-01-10, 1 file, -1/+1]
  Add a test that checks the stack alignment of a simple function for Darwin, Linux and NetBSD for 32bit and 64bit mode.
  llvm-svn: 147888
* Consider unknown alignment caused by OptimizeThumb2Instructions().
  [Jakob Stoklund Olesen, 2012-01-10, 1 file, -4/+25]
  This function runs after all constant islands have been placed, and may shrink some instructions to their 2-byte forms. This can actually cause some constant pool entries to move out of range because of growing alignment padding. Treat instructions that may be shrunk the same as inline asm - they erode the known alignment bits.

  Also reinstate an old assertion in verify(). It is correct now that basic block offsets include alignments.

  Add a single large test case that will hopefully exercise many parts of the constant island pass.

  <rdar://problem/10670199>
  llvm-svn: 147885
* 80 col violation.
  [Evan Cheng, 2012-01-10, 1 file, -2/+2]
  llvm-svn: 147884
* Add missing VEX predicates to VMOVSDto64rr/VMOVSDto64mr. This fixes a few failing test cases on our internal AVX nightly tester.
  [Chad Rosier, 2012-01-10, 1 file, -2/+3]
  rdar://10663637
  llvm-svn: 147881
* Let asm parser query asm syntax dialect.
  [Devang Patel, 2012-01-10, 1 file, -0/+1]
  llvm-svn: 147880
* This is the matching change for the data structure name changes for the functional change in r147860 to use DW_TAG_label's instead of TAG_subprogram's.
  [Kevin Enderby, 2012-01-10, 2 files, -21/+21]
  This only changes names and updates comments. No functional change.
  llvm-svn: 147877
* ARM updating VST2 pseudo-lowering fixed vs. register update.
  [Jim Grosbach, 2012-01-10, 3 files, -8/+8]
  rdar://10663487
  llvm-svn: 147876
* Fix some leftover "control reaches end of non-void function" warnings.
  [Benjamin Kramer, 2012-01-10, 4 files, -8/+9]
  llvm-svn: 147874
* Teach the triple library about the androideabi environment.
  [Chandler Carruth, 2012-01-10, 1 file, -0/+3]
  Patch by Evgeniy Stepanov.
  llvm-svn: 147871
* Move default case for covered enum outside of switch.
  [Richard Smith, 2012-01-10, 1 file, -1/+1]
  llvm-svn: 147870
* For i386, don't use the generic code.
  [Bill Wendling, 2012-01-10, 1 file, -2/+3]
  As the comment around 7746 says, it's better to use the x87 extended precision here than SSE. And the generic code doesn't know how to do that. It also regains the speed lost for the uint64_to_float.c testcase.
  <rdar://problem/10669858>
  llvm-svn: 147869
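  A small example of the conversion under discussion (the function name is made up; uint64_to_float.c refers to an existing testcase, not this snippet). The point of the comment is that the x87 extended-precision format can hold the full 64-bit integer exactly, while an SSE lowering that goes through double would introduce an extra rounding step for large values:

      #include <stdint.h>

      float u64_to_float(uint64_t x) {
        return (float)x;
      }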
* Fix a -Wreturn-type warning in g++.
  [Richard Smith, 2012-01-10, 1 file, -0/+1]
  llvm-svn: 147867
* Clean up these asserts to follow common LLVM style and coding conventions.
  [Chandler Carruth, 2012-01-10, 1 file, -5/+5]
  Also, clarify the grouping of one of the asserts to silence -Wparentheses.
  llvm-svn: 147863
* Add 'llvm_unreachable' to pacify GCC's understanding of the constraints of several newly un-defaulted switches.
  [Chandler Carruth, 2012-01-10, 11 files, -0/+15]
  This also helps optimizers (including LLVM's) recognize that every case is covered, and we should assume as much.
  llvm-svn: 147861
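  A sketch of the pattern being added (the enum and function here are invented; llvm_unreachable itself comes from llvm/Support/ErrorHandling.h):

      #include "llvm/Support/ErrorHandling.h"

      enum Opcode { OpAdd, OpSub };

      static int lower(Opcode Op) {
        switch (Op) {
        case OpAdd: return 0;
        case OpSub: return 1;
        }
        // Every enumerator is handled above; this tells GCC and the
        // optimizer that control cannot fall out of the switch, so no
        // "control reaches end of non-void function" warning is emitted.
        llvm_unreachable("covered switch over Opcode");
      }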
* Various crash reporting tools have a problem with the dwarf generated for assembly source when it generates the TAG_subprogram dwarf debug info for the labels that have nothing between them, as in this bit of assembly source:
  [Kevin Enderby, 2012-01-10, 1 file, -17/+5]

      % cat ZeroLength.s
      _func1:
      _func2:
          nop

  One solution would be to not emit the subsequent labels with the same address and use the next label with a different address, or the end of the section, for the AT_high_pc value of the TAG_subprogram. It turns out that in llvm-mc it is not possible in all cases to determine if two symbols have the same value at the point we put out the TAG_subprogram dwarf debug info. So we will have llvm-mc put out DW_TAG_label's instead of TAG_subprogram's. And the DW_TAG_label does not have an AT_high_pc value, which avoids the problem.

  This commit is only the functional change, to make the diffs clear as to what is really being changed. The next commit will clean up the names of things like MCGenDwarfSubprogramEntry to something like MCGenDwarfLabelEntry.

  rdar://10666925
  llvm-svn: 147860
* Add definition for intel asm variant.
  [Devang Patel, 2012-01-10, 1 file, -1/+11]
  Right now, this just adds additional entries in the match table. The parser does not use them yet.
  llvm-svn: 147859
* Remove unnecessary default cases in switches that cover all enum values.
  [David Blaikie, 2012-01-10, 34 files, -71/+0]
  llvm-svn: 147855
* Fix a bug in the legalization of shuffle vectors. When we emulate shuffles using BUILD_VECTORS we may be using a BV of a different type. Make sure to cast it back.
  [Nadav Rotem, 2012-01-10, 1 file, -1/+3]
  llvm-svn: 147851
* Add definitions for AMD's bobcat (aka btver1).
  [Benjamin Kramer, 2012-01-10, 2 files, -0/+7]
  llvm-svn: 147846
* Fix a crash in AVX2 when trying to broadcast a double into a 128-bit vector.
  [Craig Topper, 2012-01-10, 1 file, -18/+20]
  There is no vbroadcastsd xmm, but we do need to support 64-bit integers broadcasted into xmm. Also factor the AVX check into the isVectorBroadcast function. This makes more sense since the AVX2 check was already inside.
  llvm-svn: 147844
* Remove hasXMM/hasXMMInt functions. Move callers to hasSSE1/hasSSE2. This is the final piece to remove the AVX hack that disabled SSE.
  [Craig Topper, 2012-01-10, 5 files, -79/+77]
  llvm-svn: 147843
* Remove hasSSE*orAVX functions and change all callers to use just hasSSE*. AVX is now an SSE level and no longer disables SSE checks.
  [Craig Topper, 2012-01-10, 2 files, -31/+27]
  llvm-svn: 147842
* Instruction selection priority fixes to remove the XMM/XMMInt/orAVX predicates. Another commit will remove orAVX functions from X86SubTarget.
  [Craig Topper, 2012-01-10, 6 files, -116/+89]
  llvm-svn: 147841
* Allow machine-cse to look across MBB boundary when cse'ing instructions that define physical registers.
  [Evan Cheng, 2012-01-10, 1 file, -15/+54]
  It's currently very restrictive, only catching cases where the CE is in an immediate (and only) predecessor. But it catches a surprisingly large number of cases.
  rdar://10660865
  llvm-svn: 147827
* Enable LSR IV Chains with sufficient heuristics.
  [Andrew Trick, 2012-01-10, 2 files, -7/+215]
  These heuristics are sufficient for enabling IV chains by default. Performance analysis has been done for i386, x86_64, and thumbv7. The optimization is rarely important, but can significantly speed up certain cases by eliminating spill code within the loop. Unrolled loops are prime candidates for IV chains. In many cases, the final code could still be improved with more target specific optimization following LSR.

  The goal of this feature is for LSR to make the best choice of induction variables. Instruction selection may not completely take advantage of this feature yet. As a result, there could be cases of slight code size increase.

  Code size can be worse on x86 because it doesn't support postincrement addressing. In fact, when chains are formed, you may see redundant address plus stride addition in the addressing mode. GenerateIVChains tries to compensate for the common cases.

  On ARM, code size increase can be mitigated by using postincrement addressing, but downstream codegen currently misses some opportunities.

  llvm-svn: 147826
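  A hypothetical loop of the kind an IV chain targets (purely illustrative; not taken from the commit or its tests). The two loads use the same induction variable at adjacent offsets, so LSR can compute the second address relative to the first one (previous value plus stride) instead of keeping an independent induction expression live for it:

      float sum_pairs(const float *a, int n) {
        float s = 0.0f;
        for (int i = 0; i + 1 < n; i += 2)
          s += a[i] + a[i + 1];   // chained uses of the same IV
        return s;
      }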
* Accurately model hardware alignment rounding.
  [Jakob Stoklund Olesen, 2012-01-10, 1 file, -21/+56]
  On Thumb, the displacement computation hardware uses the address of the current instruction rounded down to a multiple of 4. Include this rounding in the UserOffset we compute for each instruction.

  When inline asm is present, the instruction alignment may not be known. Constrain the maximum displacement instead in that case.

  This makes it possible for CreateNewWater() and OffsetIsInRange() to agree about the valid displacements. When they disagree, infinite looping happens.

  As always, test cases for this stuff are insane.

  <rdar://problem/10660175>
  llvm-svn: 147825
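  A one-line model of the rounding being accounted for (the helper name is invented; the actual change adjusts the UserOffset computation inside the constant island pass):

      // Thumb PC-relative displacements are measured from the address of
      // the current instruction rounded down to a multiple of 4.
      static unsigned thumbDispBase(unsigned InstAddr) {
        return InstAddr & ~3u;
      }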
* Remove the logging streamer.
  [Rafael Espindola, 2012-01-10, 4 files, -266/+0]
  llvm-svn: 147820
* Catch runaway ARMConstantIslandPass even in -Asserts builds.
  [Jakob Stoklund Olesen, 2012-01-09, 1 file, -2/+2]
  The pass is prone to looping, and it is better to crash than loop forever, even in a -Asserts build.
  <rdar://problem/10660175>
  llvm-svn: 147806
* Fix asm string wrt variants.
  [Devang Patel, 2012-01-09, 2 files, -7/+7]
  llvm-svn: 147805
* Adding IV chain generation to LSR.
  [Andrew Trick, 2012-01-09, 1 file, -5/+228]
  After collecting chains, check if any should be materialized. If so, hide the chained IV users from the LSR solver. LSR will only solve for the head of the chain. GenerateIVChains will then materialize the chained IV users by computing the IV relative to its previous value in the chain.

  In theory, chained IV users could be exposed to LSR's solver. This would be considerably complicated to implement and I'm not aware of a case where we need it. In practice it's more important to intelligently prune the search space of nontrivial loops before running the solver, otherwise the solver is often forced to prune the most optimal solutions. Hiding the chained users does this well, so that LSR is more likely to find the best IV for the chain as a whole.

  llvm-svn: 147801
* Adding collection of IV chains to LSR.
  [Andrew Trick, 2012-01-09, 1 file, -0/+242]
  This collects a set of IV uses within the loop whose values can be computed relative to each other in a sequence. Following checkins will make use of this information.
  llvm-svn: 147797
* Split AsmParser into two components - AsmParser and AsmParserVariant.
  [Devang Patel, 2012-01-09, 1 file, -2/+4]
  AsmParser holds info specific to the target parser. AsmParserVariant holds info specific to asm variants supported by the target.
  llvm-svn: 147787
* "Minor LSR debugging stuff"Andrew Trick2012-01-091-1/+4
| | | | llvm-svn: 147785
* Update language check. Do not ignore DW_LANG_Python.
  [Devang Patel, 2012-01-09, 1 file, -1/+2]
  Patch by Joe Groff!
  llvm-svn: 147781
* Move assert to the right place.
  [Benjamin Kramer, 2012-01-09, 1 file, -1/+1]
  llvm-svn: 147779
* InstCombine: Teach foldLogOpOfMaskedICmpsHelper that sign bit tests are bit tests.
  [Benjamin Kramer, 2012-01-09, 1 file, -81/+82]
  This subsumes several other transforms while enabling us to catch more cases.
  llvm-svn: 147777
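  A small illustration of the equivalence the fold relies on (the function names are invented and a 32-bit int is assumed):

      // 'x < 0' tests exactly the sign bit, so it can take part in the
      // masked-compare folding as the bit test (x & 0x80000000) != 0.
      bool sign_by_compare(int x) { return x < 0; }
      bool sign_by_mask(int x)    { return ((unsigned)x & 0x80000000u) != 0; }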
* Don't rely on the fact that shift values are never very large, and thus this subtraction will result in small negative numbers at worst, which become very large positive numbers on assignment and are thus caught by the <=4 check on the next line.
  [Chandler Carruth, 2012-01-09, 1 file, -1/+1]
  The >0 check clearly intended to catch these as negative numbers. Spotted by inspection, and impossible to trigger given the shift widths that can be used.
  llvm-svn: 147773
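  A tiny demonstration of the wraparound behavior the reasoning above relies on (values are arbitrary):

      // A "small negative" difference between unsigned operands wraps to a
      // very large positive value, so a subsequent '<= 4' range check still
      // rejects it even though a '> 0' check no longer sees it as negative.
      unsigned d = 3u - 5u;   // 0xFFFFFFFEu on a 32-bit unsigned, not -2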