path: root/llvm/lib
Commit message  (author, date, files changed, lines -removed/+added)
...
* Minor cleanup to all the switches after MatchInstructionImpl in all the AsmParsers.  (Craig Topper, 2015-01-03, 8 files, -29/+20)
  Make sure they all have llvm_unreachable on the default path out of the switch. Remove unnecessary "default: break". Remove a 'return' after unreachable. Fix some indentation.
  llvm-svn: 225114
* ValueTracking: Make computeKnownBits for Arguments a little more clear  (David Majnemer, 2015-01-03, 1 file, -0/+3)
  We would sometimes leave the out-param APInts untouched while going through computeKnownBits. While I don't know of a way to trigger a bug involving this in practice, it goes against the overall design of computeKnownBits. Found via code inspection.
  llvm-svn: 225109
* [PowerPC] Add support for the CMPB instruction  (Hal Finkel, 2015-01-03, 8 files, -8/+278)
  Newer POWER cores, and the A2, support the cmpb instruction. This instruction compares its operands, treating each of the 8 bytes in the GPRs separately, returning a 'mask' result of 0 (for false) or -1 (for true) in each byte.
  Code generation support is added, in the form of a PPCISelDAGToDAG DAG-preprocessing routine, that recognizes patterns close to what the instruction computes (either exactly, or related by a constant masking operation), and generates the cmpb instruction (along with any necessary constant masking operation). This can be expanded if use cases arise.
  llvm-svn: 225106
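  For readers unfamiliar with the instruction, here is a minimal C++ model of what cmpb computes (an illustrative sketch only, not code from this patch; the helper name is invented):

      #include <cstdint>

      // Byte-wise compare: result byte i is 0xFF where the operands' byte i
      // values are equal, and 0x00 where they differ.
      uint64_t cmpb_model(uint64_t rs, uint64_t rb) {
        uint64_t result = 0;
        for (int i = 0; i < 8; ++i) {
          uint64_t mask = UINT64_C(0xFF) << (8 * i);
          if ((rs & mask) == (rb & mask))
            result |= mask;
        }
        return result;
      }

  The DAG-preprocessing routine described above effectively recognizes IR that computes something equivalent to this loop (possibly combined with a constant mask) and replaces it with a single cmpb.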
* [asan] simplify the tracing code, make it use the same guard variables as coverage  (Kostya Serebryany, 2015-01-03, 1 file, -25/+12)
  llvm-svn: 225103
* [X86] Disassembler support for move to/from %rax with a 32-bit memory offset when REX.W and the AdSize prefix are both present.  (Craig Topper, 2015-01-03, 3 files, -0/+22)
  llvm-svn: 225099
* [X86] Use 32-bit sign extended immediate for 64-bit LOCK_ArithBinOp with sign extended immediate.  (Craig Topper, 2015-01-03, 1 file, -6/+6)
  llvm-svn: 225098
* [PM] Fix some formatting where clang-format has improved recently.  (Chandler Carruth, 2015-01-02, 1 file, -2/+2)
  llvm-svn: 225092
* Improved comments. No functional change intended.  (Andrea Di Biagio, 2015-01-02, 1 file, -2/+2)
  llvm-svn: 225080
* [X86] Bring some better consistency to the naming of the move to/from %al/ax/eax/rax with memory offset.  (Craig Topper, 2015-01-02, 2 files, -44/+41)
  llvm-svn: 225078
* InstCombine: Detect when llvm.umul.with.overflow always overflows  (David Majnemer, 2015-01-02, 2 files, -7/+18)
  We know overflow always occurs if both ~LHSKnownZero * ~RHSKnownZero and LHSKnownOne * RHSKnownOne overflow.
  llvm-svn: 225077
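  A sketch of that condition using APInt (an assumed shape, not the actual patch): the known-one bits give a lower bound for each operand and the complement of the known-zero bits an upper bound, so if the products of both bounds overflow, every possible product overflows.

      #include "llvm/ADT/APInt.h"
      using namespace llvm;

      // True if an unsigned multiply of any values consistent with the known
      // bits must overflow the bit width.
      static bool mulAlwaysOverflows(const APInt &LHSKnownZero, const APInt &LHSKnownOne,
                                     const APInt &RHSKnownZero, const APInt &RHSKnownOne) {
        bool MaxOverflow, MinOverflow;
        (~LHSKnownZero).umul_ov(~RHSKnownZero, MaxOverflow); // largest possible values
        LHSKnownOne.umul_ov(RHSKnownOne, MinOverflow);       // smallest possible values
        return MaxOverflow && MinOverflow;
      }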
* Analysis: Reformulate WillNotOverflowUnsignedMul for reusability  (David Majnemer, 2015-01-02, 4 files, -53/+48)
  WillNotOverflowUnsignedMul's smarts will live in ValueTracking as computeOverflowForUnsignedMul. It now returns a tri-state result: never overflows, always overflows and sometimes overflows.
  llvm-svn: 225076
* [X86] Make the instructions that use AdSize16/32/64 co-exist together without using mode predicates.  (Craig Topper, 2015-01-02, 5 files, -144/+241)
  This is necessary to allow the disassembler to be able to handle AdSize32 instructions in 64-bit mode when the address size prefix is used. Eventually we should probably also support 'addr32' and 'addr16' in the assembler to override the address size on some of these instructions. But for now we'll just use special operand types that will look up the current mode size to select the right instruction.
  llvm-svn: 225075
* [SROA] Teach SROA to be more aggressive in splitting now that we have a pre-splitting pass over loads and stores.  (Chandler Carruth, 2015-01-02, 1 file, -27/+54)
  Historically, splitting could cause enough problems that I hamstrung the entire process with a requirement that splittable integer loads and stores must cover the entire alloca. All smaller loads and stores were unsplittable to prevent chaos from ensuing. With the new pre-splitting logic that does load/store pair splitting I introduced in r225061, we can now very nicely handle arbitrarily splittable loads and stores. In order to fully benefit from these smarts, we need to mark all of the integer loads and stores as splittable.

  However, we don't actually want to rewrite partitions with all integer loads and stores marked as splittable. This will fail to extract scalar integers from aggregates, which is kind of the point of SROA. =] In order to resolve this, what we really want to do is only do pre-splitting on the alloca slices with integer loads and stores fully splittable. This allows us to uncover all non-integer uses of the alloca that would benefit from a split in an integer load or store (and where introducing the split is safe because it is just memory transfer from a load to a store). Once done, we make all the non-whole-alloca integer loads and stores unsplittable just as they have historically been, repartition and rewrite.

  The result is that when there are integer loads and stores anywhere within an alloca (such as from a memcpy of a sub-object of a larger object), we can split them up if there are non-integer components to the aggregate hiding beneath. I've added the challenging test cases to demonstrate how this is able to promote to scalars even a case where we have *partially* overlapping loads and stores.

  This restores the single-store behavior for small arrays of i8s which is really nice. I've restored both the little endian testing and big endian testing for these exactly as they were prior to r225061. It also forced me to be more aggressive in an alignment test to actually defeat SROA. =] Without the added volatiles there, we actually split up the weird i16 loads and produce nice double allocas with better alignment.

  This also uncovered a number of bugs where we failed to handle splittable load and store slices which didn't have a beginning offset of zero. Those fixes are included, and without them the existing test cases explode in glorious fireworks. =]

  I've kept support for leaving whole-alloca integer loads and stores as splittable even for the purpose of rewriting, but I think that's likely no longer needed. With the new pre-splitting, we might be able to remove all the splitting support for loads and stores from the rewriter. Not doing that in this patch to try to isolate any performance regressions that it causes in an easy to find and revert chunk.
  llvm-svn: 225074
* [SROA] Make the computation of adjusted pointers not leak GEP instructions.  (Chandler Carruth, 2015-01-02, 1 file, -10/+14)
  I noticed this when working on dialing up how aggressively we can pre-split loads and stores. My test case wasn't passing because dead GEPs into the allocas persisted when they were built by this routine. This isn't terribly harmful, we still rewrote and promoted the alloca and I can't conceive of how to cause this to happen in a case where we will keep the exact same alloca but rewrite and promote the uses of it. If that ever happened, we'd get an assert out of mem2reg. So I don't have a direct test case yet, but the subsequent commit's test case wouldn't pass without this.

  There are other problems fixed by this patch that I spotted purely by inspection, such as the fact that getAdjustedPtr could have actually deleted dead base pointers. I don't know how to get a base pointer to go into getAdjustedPtr today, so I think this bug could never have manifested (and I certainly can't write a test case for it), but it wasn't the intent of the code. The code really just wanted to GC the new instructions built. That can be done more directly by comparing with the base pointer, which is the only non-new instruction that this code can return.
  llvm-svn: 225073
* [SROA] Fix the loop exit placement to be prior to indexing the splits array.  (Chandler Carruth, 2015-01-02, 1 file, -4/+8)
  This prevents it from walking out of bounds on the splits array. Bug found with the existing tests by ASan and by the MSVC debug build.
  llvm-svn: 225069
* [SROA] Fix two total think-os in r225061 that should have been caught on a +asserts bootstrap, but my bootstrap had asserts off. Oops.  (Chandler Carruth, 2015-01-01, 1 file, -2/+2)
  Anyways, in some places it is reasonable to cast (as a sanity check) the pointer operand to a load or store to an instruction within SROA -- namely when the pointer operand is expected to be derived from an alloca, and thus always an instruction. However, the pre-splitting code also deals with loads and stores to non-alloca pointers and there we need to just use the Value*. Nothing about the code relied on the instruction cast, it was only there essentially as an invariant assertion. Remove the two that don't actually hold.

  This should fix the proximate issue in PR22080, but I'm also doing an asserts bootstrap myself to see if there are other issues lurking. I'll craft a reduced test case in a moment, but I wanted to get the tree healthy as quickly as possible.
  llvm-svn: 225068
* [PowerPC] use UINT64_C instead of ul  (Hal Finkel, 2015-01-01, 1 file, -4/+4)
  Attempting to fix PR22078 (building on 32-bit systems) by replacing my careless use of 1ul, intended to be a uint64_t constant, with UINT64_C(1).
  llvm-svn: 225066
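  The failure mode being fixed here is the classic one: on targets where unsigned long is 32 bits, a 1ul constant cannot be shifted past bit 31, whereas UINT64_C(1) is a uint64_t everywhere. A small standalone illustration (the exact expression from the patch is not shown here):

      #include <cstdint>
      #include <cstdio>

      int main() {
        // 1ul << 40 is undefined where 'unsigned long' is 32 bits wide;
        // UINT64_C(1) << 40 is well defined on every target.
        uint64_t bit40 = UINT64_C(1) << 40;
        printf("%llu\n", (unsigned long long)bit40); // 1099511627776
        return 0;
      }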
* [SROA] Switch to using a more direct debug logging technique in one part of my new load and store splitting, and fix a bug where it logged a totally irrelevant slice rather than the actual slice in question.  (Chandler Carruth, 2015-01-01, 1 file, -4/+6)
  The logging here previously worked because we used to place new slices onto the back of the core sequence, but that caused other problems. I updated the actual code to store new slices in their own vector but didn't update the logging. There isn't a good way to reuse the logging any more, and frankly it wasn't needed. We can directly log this bit more easily.
  llvm-svn: 225063
* [SROA] Fix formatting with clang-format which I managed to fail to do prior to committing r225061. Sorry for that.  (Chandler Carruth, 2015-01-01, 1 file, -48/+48)
  llvm-svn: 225062
* [SROA] Teach SROA how to much more intelligently handle split loads and stores.  (Chandler Carruth, 2015-01-01, 1 file, -2/+484)
  When there are accesses to an entire alloca with an integer load or store as well as accesses to small pieces of the alloca, SROA splits up the large integer accesses. In order to do that, it uses bit math to merge the small accesses into large integers. While this is effective, it produces insane IR that can cause significant problems in the rest of the optimizer:

  - It can cause load and store mismatches with GVN on the non-alloca side where we end up loading an i64 (or some such) rather than loading specific elements that are stored.
  - We can't always get rid of the integer bit math, which is why we can't always fix the loads and stores to work well with GVN.
  - This is especially bad when we have operations that mix poorly with integer bit math such as floating point operations.
  - It will block things like the vectorizer which might be able to handle the scalar stores that underlie the aggregate.

  At the same time, we can't just directly split up these loads and stores in all cases. If there is actual integer arithmetic involved on the values, then using integer bit math is actually the perfect lowering because we can often combine it heavily with the surrounding math.

  The solution this patch provides is to find places where SROA is partitioning aggregates into small elements, and look for splittable loads and stores that it can split all the way to some other adjacent load and store. These are uniformly the cases where failing to split the loads and stores hurts the optimizer that I have seen, and I've looked extensively at the code produced both from more and less aggressive approaches to this problem.

  However, it is quite tricky to actually do this in SROA. We may have loads and stores to the same alloca, or other complex patterns that are hard to handle. This complexity leads to the somewhat subtle algorithm implemented here. We have to do this entire process as a separate pass over the partitioning of the alloca, and split up all of the loads prior to splitting the stores so that we can handle safely the cases of overlapping, including partially overlapping, loads and stores to the same alloca. We also have to reconstitute the post-split slice configuration so we can avoid iterating again over all the alloca uses (the slow part of SROA). But we also have to ensure that when we split up loads and stores to *other* allocas, we *do* re-iterate over them in SROA to adapt to the more refined partitioning now required.

  With this, I actually think we can fix a long-standing TODO in SROA where I avoided splitting as many loads and stores as probably should be splittable. This limitation historically mitigated the fallout of all the bad things mentioned above. Now that we have more intelligent handling, I plan to remove the FIXME and more aggressively mark integer loads and stores as splittable. I'll do that in a follow-up patch to help with bisecting any fallout.

  The net result of this change should be more fine-grained and accurate scalars being formed out of aggregates. At the very least, Clang now generates perfect code for this high-level test case using std::complex<float>:

      #include <complex>

      void g1(std::complex<float> &x, float a, float b) {
        x += std::complex<float>(a, b);
      }
      void g2(std::complex<float> &x, float a, float b) {
        x -= std::complex<float>(a, b);
      }

      void foo(const std::complex<float> &x, float a, float b,
               std::complex<float> &x1, std::complex<float> &x2) {
        std::complex<float> l1 = x;
        g1(l1, a, b);
        std::complex<float> l2 = x;
        g2(l2, a, b);
        x1 = l1;
        x2 = l2;
      }

  This code isn't just hypothetical either. It was reduced out of the hot inner loops of essentially every part of the Eigen math library when using std::complex<float>. Those loops would consistently and pervasively hop between the floating point unit and the integer unit due to bit math extraction and insertion of floating point values that were "stored" in a 64-bit integer register around the loop backedge.

  So far, this change has passed a bootstrap and I have done some other testing, and so far, no issues. That doesn't mean there won't be though, so I'll be prepared to help with any fallout. If you see performance swings in particular, please let me know. I'm very curious what all the impact of this change will be. Stay tuned for the follow-up to also split more integer loads and stores.
  llvm-svn: 225061
* [PowerPC] Improve instruction selection bit-permuting operations (64-bit)  (Hal Finkel, 2015-01-01, 2 files, -97/+868)
  This is the second installment of improvements to instruction selection for "bit permutation" instruction sequences. r224318 added logic for instruction selection for 32-bit bit permutation sequences, and this adds lowering for 64-bit sequences. The 64-bit sequences are more complicated than the 32-bit ones because:

  a) the 64-bit versions of the 32-bit rotate-and-mask instructions work by replicating the lower 32-bits of the value-to-be-rotated into the upper 32 bits -- and integrating this into the cost modeling for the various bit group operations is non-trivial

  b) unlike the 32-bit instructions in 32-bit mode, the rotate-and-mask instructions cannot, in one instruction, specify the mask starting index, the mask ending index, and the rotation factor. Also, forming arbitrary 64-bit constants is more complicated than in 32-bit mode because the number of instructions necessary is value dependent.

  Plus, support for 'late masking' was added: it is sometimes more efficient to treat the overall value as if it had no mandatory zero bits when planning the bit-group insertions, and then mask them in at the very end. Unfortunately, as the structure of the bit groups is different in the two cases, the more feasible implementation technique was to generate both instruction sequences, and then pick the shorter one.

  And finally, we now generate reasonable code for i64 bswap:

      rldicl 5, 3, 16, 0
      rldicl 4, 3, 8, 0
      rldicl 6, 3, 24, 0
      rldimi 4, 5, 8, 48
      rldicl 5, 3, 32, 0
      rldimi 4, 6, 16, 40
      rldicl 6, 3, 48, 0
      rldimi 4, 5, 24, 32
      rldicl 5, 3, 56, 0
      rldimi 4, 6, 40, 16
      rldimi 4, 5, 48, 8
      rldimi 4, 3, 56, 0

  vs. what we used to produce:

      li 4, 255
      rldicl 5, 3, 24, 40
      rldicl 6, 3, 40, 24
      rldicl 7, 3, 56, 8
      sldi 8, 3, 8
      sldi 10, 3, 24
      sldi 12, 3, 40
      rldicl 0, 3, 8, 56
      sldi 9, 4, 32
      sldi 11, 4, 40
      sldi 4, 4, 48
      andi. 5, 5, 65280
      andis. 6, 6, 255
      andis. 7, 7, 65280
      sldi 3, 3, 56
      and 8, 8, 9
      and 4, 12, 4
      and 9, 10, 11
      or 6, 7, 6
      or 5, 5, 0
      or 3, 3, 4
      or 7, 9, 8
      or 4, 6, 5
      or 3, 3, 7
      or 3, 3, 4

  which is 12 instructions, instead of 25, and seems optimal (at least in terms of code size).
  llvm-svn: 225056
* InstCombine: fsub nsz 0, X ==> fsub nsz -0.0, X  (Sanjay Patel, 2014-12-31, 1 file, -0/+8)
  Some day the backend may handle instruction-level fast math flags and make this transform unnecessary, but it's still better practice to use the canonical representation of fneg when possible (use a -0.0). This is a partial fix for PR20870 ( http://llvm.org/bugs/show_bug.cgi?id=20870 ). See also http://reviews.llvm.org/D6723.
  Differential Revision: http://reviews.llvm.org/D6731
  llvm-svn: 225050
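  A small standalone illustration (not from the patch) of why -0.0 is the safe canonical form for fneg: subtracting from +0.0 fails to flip the sign of a positive zero, which is exactly the corner case the nsz flag waves away.

      #include <cmath>
      #include <cstdio>

      int main() {
        float x = 0.0f;
        float a = 0.0f - x;   // +0.0: "0 - x" does not negate a positive zero
        float b = -0.0f - x;  // -0.0: "-0.0 - x" negates every x, zeros included
        printf("%d %d\n", std::signbit(a), std::signbit(b)); // prints: 0 1
        return 0;
      }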
* Add r224985 back with a fix.  (Rafael Espindola, 2014-12-31, 12 files, -228/+152)
  The issue was that AArch64 has additional restrictions on when local relocations can be used. We have to take those into consideration when deciding to put an L symbol in the symbol table or not.

  Original message:

  Remove doesSectionRequireSymbols.

  In an assembly expression like

      bar:
      .long L0 + 1

  the intended semantics is that bar will contain a pointer one byte past L0. In sections that are merged by content (strings, 4 byte constants, etc), a single position in the section doesn't give the linker enough information. For example, it would not be able to tell a relocation must point to the end of a string, since that would look just like the start of the next.

  The solution used in ELF is to use relocations with symbols if there is a non-zero addend.

  In MachO before this patch we would just keep all symbols in some sections. This would miss some cases (only cstrings on x86_64 were implemented) and was inefficient since most relocations have an addend of 0 and can be represented without the symbol.

  This patch implements the non-zero addend logic for MachO too.
  llvm-svn: 225048
* Reverting 225045 and 225043 and XFAIL multiline.ll on hexagon  (Colin LeMahieu, 2014-12-31, 1 file, -1/+1)
  llvm-svn: 225047
* [Hexagon] Removing assertion to appease buildbot until I can reproduce the problem  (Colin LeMahieu, 2014-12-31, 1 file, -1/+0)
  llvm-svn: 225045
* Revert "Remove doesSectionRequireSymbols."  (Rafael Espindola, 2014-12-31, 12 files, -143/+223)
  This reverts commit r224985. I am investigating why it made an Apple bot unhappy.
  llvm-svn: 225044
* [Hexagon] Changing an llvm_unreachable to an assertion and returning 0. Relocations aren't implemented yet but we don't need to abort for this in release builds.  (Colin LeMahieu, 2014-12-31, 1 file, -1/+2)
  llvm-svn: 225043
* [X86] Fix disassembly of absolute moves to work correctly in 16 and 32-bit modes with all 4 combinations of OpSize and AdSize prefixes being present or not.  (Craig Topper, 2014-12-31, 2 files, -6/+35)
  llvm-svn: 225036
* [x86] Simplify detection of jcxz/jecxz/jrcxz in disassembler.  (Craig Topper, 2014-12-31, 1 file, -16/+5)
  llvm-svn: 225035
* InstCombine: try to transform A-B < 0 into A < B  (David Majnemer, 2014-12-31, 1 file, -0/+20)
  We are allowed to move the 'B' to the right hand side if we can prove there is no signed overflow and if the comparison itself is signed.
  llvm-svn: 225034
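  A small standalone illustration (not from the patch) of why the no-signed-overflow proof is required before rewriting A - B < 0 as A < B:

      #include <climits>
      #include <cstdio>

      int main() {
        // With A = INT_MIN and B = 1 the subtraction wraps (performed in
        // unsigned arithmetic here to avoid undefined behaviour), so the two
        // forms of the comparison disagree.
        int A = INT_MIN, B = 1;
        int Diff = (int)((unsigned)A - (unsigned)B); // wraps around to INT_MAX
        printf("A - B < 0: %d, A < B: %d\n", Diff < 0, A < B); // 0 vs 1
        return 0;
      }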
* Revert "merge consecutive stores of extracted vector elements"Alexey Samsonov2014-12-311-75/+4
| | | | | | | This reverts commit r224611. This change causes crashes in X86 DAG->DAG Instruction Selection. llvm-svn: 225031
* [Hexagon] Adding accumulating add/sub, doubleword logic-not variants, ↵Colin LeMahieu2014-12-311-0/+111
| | | | | | doubleword bitfield extract, word parity, accumulating multiplies with saturation. llvm-svn: 225024
* [Hexagon] Adding double-logic on predicate instructions.Colin LeMahieu2014-12-301-0/+60
| | | | llvm-svn: 225018
* [Hexagon] Adding newvalue compare and jumps.Colin LeMahieu2014-12-301-17/+35
| | | | llvm-svn: 225015
* RTDyldMemoryManager.cpp: Make the reference to __morestack weak.Peter Collingbourne2014-12-301-2/+4
| | | | | | | This fixes the DSO build for now. Eventually we should develop some other mechanism to make this work correctly with DSOs. llvm-svn: 225014
* DebugInfo: Omit is_stmt from line table entries on the same line.  (David Blaikie, 2014-12-30, 2 files, -3/+5)
  GCC does this for non-zero discriminators and since GCC doesn't produce column info, that was the only place it comes up there. For LLVM, since we can emit discriminators and/or column info, it makes more sense to invert the condition and just test for changes in line number.

  This should resolve at least some of the GDB 7.5 test suite failures created by recent Clang changes that increase the location fidelity (which, since Clang includes column info by default on Linux, created a bunch of cases that confused GDB).

  In theory we could do this better/differently by grouping actual source statements together in a similar manner to the way lexical scopes are handled, but given that GDB isn't really in a position to consume that (& users are probably somewhat used to different lines being different 'statements') this seems the safest and cheapest change. (I'm concerned that doing this 'right' would bloat the debugloc data even further - something Duncan's working hard to address)
  llvm-svn: 225011
* [Hexagon] Adding postincrement register newvalue stores.  (Colin LeMahieu, 2014-12-30, 1 file, -0/+30)
  llvm-svn: 225010
* [Hexagon] Removing old newvalue store variants. Adding postincrement immediate newvalue stores.  (Colin LeMahieu, 2014-12-30, 2 files, -96/+90)
  llvm-svn: 225009
* [mips][microMIPS] Relocate with symbol for micromips symbols  (Zoran Jovanovic, 2014-12-30, 1 file, -1/+5)
  Differential Revision: http://reviews.llvm.org/D6796
  llvm-svn: 225008
* [Hexagon] Adding indexed store new-value variants.  (Colin LeMahieu, 2014-12-30, 2 files, -45/+100)
  llvm-svn: 225007
* [Hexagon] Adding indexed store of immediates.  (Colin LeMahieu, 2014-12-30, 2 files, -48/+97)
  llvm-svn: 225006
* [Hexagon] Adding indexed stores.  (Colin LeMahieu, 2014-12-30, 2 files, -81/+167)
  llvm-svn: 225005
* x86_64: Fix calls to __morestack under the large code model.  (Peter Collingbourne, 2014-12-30, 3 files, -7/+48)
  Under the large code model, we cannot assume that __morestack lives within 2^31 bytes of the call site, so we cannot use pc-relative addressing. We cannot perform the call via a temporary register, as the rax register may be used to store the static chain, and all other suitable registers may be either callee-save or used for parameter passing. We cannot use the stack at this point either because __morestack manipulates the stack directly.

  To avoid these issues, perform an indirect call via a read-only memory location containing the address.

  This solution is not perfect, as it assumes that the .rodata section is laid out within 2^31 bytes of each function body, but this seems to be sufficient for JIT.
  Differential Revision: http://reviews.llvm.org/D6787
  llvm-svn: 225003
* [asan] change _sanitizer_cov_module_init to accept int* instead of int**  (Kostya Serebryany, 2014-12-30, 1 file, -18/+34)
  llvm-svn: 224999
* [COFF] Don't try to add quotes to already quoted linker directives  (Michael Kuperstein, 2014-12-30, 1 file, -1/+1)
  If a linker directive is already quoted, don't try to quote it again, otherwise it creates a mess. This pops up in places like:

      #pragma comment(linker,"\"/foo bar'\"")

  Differential Revision: http://reviews.llvm.org/D6792
  llvm-svn: 224998
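  A hypothetical helper (not the actual LLVM code) showing the rule the change enforces: a directive that already begins and ends with a double quote is passed through untouched, anything else gets wrapped exactly once.

      #include <string>

      std::string quoteDirectiveIfNeeded(const std::string &Directive) {
        if (Directive.size() >= 2 && Directive.front() == '"' &&
            Directive.back() == '"')
          return Directive; // already quoted; quoting again would mangle it
        return "\"" + Directive + "\"";
      }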
* [Hexagon] Adding reg-reg indexed load forms.  (Colin LeMahieu, 2014-12-30, 3 files, -85/+135)
  llvm-svn: 224997
* The __morestack function is only available on i386 and x86_64 architectures.  (Peter Collingbourne, 2014-12-30, 1 file, -1/+4)
  llvm-svn: 224994
* Make the __morestack function available to the JIT memory manager under Linux.  (Peter Collingbourne, 2014-12-30, 1 file, -0/+7)
  This function's implementation lives in libgcc, a static library, so we need to expose it explicitly, like the other such functions.
  Differential Revision: http://reviews.llvm.org/D6788
  llvm-svn: 224993
* [Hexagon] Dropping old combine instructions without encodings.  (Colin LeMahieu, 2014-12-30, 3 files, -79/+68)
  llvm-svn: 224992
* [Hexagon] Adding compare byte/halfword reg-reg/reg-imm forms. Adding compare to general register reg-imm form.  (Colin LeMahieu, 2014-12-30, 1 file, -55/+121)
  llvm-svn: 224991