path: root/llvm
Commit log (newest first); each entry ends with [Author, Date · files changed, -deleted/+added].
* [SROA] Switch to using a more direct debug logging technique in one part
  of my new load and store splitting, and fix a bug where it logged a
  totally irrelevant slice rather than the actual slice in question.

  The logging here previously worked because we used to place new slices
  onto the back of the core sequence, but that caused other problems. I
  updated the actual code to store new slices in their own vector but
  didn't update the logging. There isn't a good way to reuse the logging
  any more, and frankly it wasn't needed. We can directly log this bit
  more easily.

  llvm-svn: 225063
  [Chandler Carruth, 2015-01-01 · 1 file changed, -4/+6]

* [SROA] Fix formatting with clang-format which I managed to fail to do
  prior to committing r225061. Sorry for that.

  llvm-svn: 225062
  [Chandler Carruth, 2015-01-01 · 1 file changed, -48/+48]

* [SROA] Teach SROA how to much more intelligently handle split loads and
  stores.

  When there are accesses to an entire alloca with an integer load or
  store as well as accesses to small pieces of the alloca, SROA splits up
  the large integer accesses. In order to do that, it uses bit math to
  merge the small accesses into large integers. While this is effective,
  it produces insane IR that can cause significant problems in the rest
  of the optimizer:

  - It can cause load and store mismatches with GVN on the non-alloca
    side where we end up loading an i64 (or some such) rather than
    loading specific elements that are stored.
  - We can't always get rid of the integer bit math, which is why we
    can't always fix the loads and stores to work well with GVN.
  - This is especially bad when we have operations that mix poorly with
    integer bit math such as floating point operations.
  - It will block things like the vectorizer which might be able to
    handle the scalar stores that underlie the aggregate.

  At the same time, we can't just directly split up these loads and
  stores in all cases. If there is actual integer arithmetic involved on
  the values, then using integer bit math is actually the perfect
  lowering because we can often combine it heavily with the surrounding
  math.

  The solution this patch provides is to find places where SROA is
  partitioning aggregates into small elements, and look for splittable
  loads and stores that it can split all the way to some other adjacent
  load and store. These are uniformly the cases where failing to split
  the loads and stores hurts the optimizer that I have seen, and I've
  looked extensively at the code produced both from more and less
  aggressive approaches to this problem.

  However, it is quite tricky to actually do this in SROA. We may have
  loads and stores to the same alloca, or other complex patterns that are
  hard to handle. This complexity leads to the somewhat subtle algorithm
  implemented here. We have to do this entire process as a separate pass
  over the partitioning of the alloca, and split up all of the loads
  prior to splitting the stores so that we can handle safely the cases of
  overlapping, including partially overlapping, loads and stores to the
  same alloca. We also have to reconstitute the post-split slice
  configuration so we can avoid iterating again over all the alloca uses
  (the slow part of SROA). But we also have to ensure that when we split
  up loads and stores to *other* allocas, we *do* re-iterate over them in
  SROA to adapt to the more refined partitioning now required.

  With this, I actually think we can fix a long-standing TODO in SROA
  where I avoided splitting as many loads and stores as probably should
  be splittable. This limitation historically mitigated the fallout of
  all the bad things mentioned above. Now that we have more intelligent
  handling, I plan to remove the FIXME and more aggressively mark integer
  loads and stores as splittable. I'll do that in a follow-up patch to
  help with bisecting any fallout.

  The net result of this change should be more fine-grained and accurate
  scalars being formed out of aggregates. At the very least, Clang now
  generates perfect code for this high-level test case using
  std::complex<float>:

    #include <complex>

    void g1(std::complex<float> &x, float a, float b) {
      x += std::complex<float>(a, b);
    }
    void g2(std::complex<float> &x, float a, float b) {
      x -= std::complex<float>(a, b);
    }

    void foo(const std::complex<float> &x, float a, float b,
             std::complex<float> &x1, std::complex<float> &x2) {
      std::complex<float> l1 = x;
      g1(l1, a, b);
      std::complex<float> l2 = x;
      g2(l2, a, b);
      x1 = l1;
      x2 = l2;
    }

  This code isn't just hypothetical either. It was reduced out of the hot
  inner loops of essentially every part of the Eigen math library when
  using std::complex<float>. Those loops would consistently and
  pervasively hop between the floating point unit and the integer unit
  due to bit math extraction and insertion of floating point values that
  were "stored" in a 64-bit integer register around the loop backedge.

  So far, this change has passed a bootstrap and I have done some other
  testing and so far, no issues. That doesn't mean there won't be though,
  so I'll be prepared to help with any fallout. If you see performance
  swings in particular, please let me know. I'm very curious what all the
  impact of this change will be. Stay tuned for the follow-up to also
  split more integer loads and stores.

  llvm-svn: 225061
  [Chandler Carruth, 2015-01-01 · 3 files changed, -81/+520]

* Just use a using directive in SmallMapVector instead of inheriting from
  MapVector itself.

  llvm-svn: 225059
  [Michael Gottesman, 2015-01-01 · 1 file changed, -6/+3]

* [PowerPC] Improve instruction selection bit-permuting operations (64-bit)

  This is the second installment of improvements to instruction selection
  for "bit permutation" instruction sequences. r224318 added logic for
  instruction selection for 32-bit bit permutation sequences, and this
  adds lowering for 64-bit sequences. The 64-bit sequences are more
  complicated than the 32-bit ones because:

  a) the 64-bit versions of the 32-bit rotate-and-mask instructions work
     by replicating the lower 32-bits of the value-to-be-rotated into the
     upper 32 bits -- and integrating this into the cost modeling for the
     various bit group operations is non-trivial
  b) unlike the 32-bit instructions in 32-bit mode, the rotate-and-mask
     instructions cannot, in one instruction, specify the mask starting
     index, the mask ending index, and the rotation factor. Also, forming
     arbitrary 64-bit constants is more complicated than in 32-bit mode
     because the number of instructions necessary is value dependent.

  Plus, support for 'late masking' was added: it is sometimes more
  efficient to treat the overall value as if it had no mandatory zero
  bits when planning the bit-group insertions, and then mask them in at
  the very end. Unfortunately, as the structure of the bit groups is
  different in the two cases, the more feasible implementation technique
  was to generate both instruction sequences, and then pick the shorter
  one.

  And finally, we now generate reasonable code for i64 bswap:

    rldicl 5, 3, 16, 0
    rldicl 4, 3, 8, 0
    rldicl 6, 3, 24, 0
    rldimi 4, 5, 8, 48
    rldicl 5, 3, 32, 0
    rldimi 4, 6, 16, 40
    rldicl 6, 3, 48, 0
    rldimi 4, 5, 24, 32
    rldicl 5, 3, 56, 0
    rldimi 4, 6, 40, 16
    rldimi 4, 5, 48, 8
    rldimi 4, 3, 56, 0

  vs. what we used to produce:

    li 4, 255
    rldicl 5, 3, 24, 40
    rldicl 6, 3, 40, 24
    rldicl 7, 3, 56, 8
    sldi 8, 3, 8
    sldi 10, 3, 24
    sldi 12, 3, 40
    rldicl 0, 3, 8, 56
    sldi 9, 4, 32
    sldi 11, 4, 40
    sldi 4, 4, 48
    andi. 5, 5, 65280
    andis. 6, 6, 255
    andis. 7, 7, 65280
    sldi 3, 3, 56
    and 8, 8, 9
    and 4, 12, 4
    and 9, 10, 11
    or 6, 7, 6
    or 5, 5, 0
    or 3, 3, 4
    or 7, 9, 8
    or 4, 6, 5
    or 3, 3, 7
    or 3, 3, 4

  which is 12 instructions, instead of 25, and seems optimal (at least in
  terms of code size).

  llvm-svn: 225056
  [Hal Finkel, 2015-01-01 · 3 files changed, -97/+1107]

* Add 2x constructors for TinyPtrVector, one that takes in one element
  and the other that takes in an ArrayRef<EltTy>.

  Currently one can only construct an empty TinyPtrVector. These are just
  missing elements of the API.

  llvm-svn: 225055
  [Michael Gottesman, 2014-12-31 · 2 files changed, -0/+33]

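  For illustration, a minimal usage sketch of the two new constructors
  (not from the commit; the exact parameter forms are inferred from the
  subject line):

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/TinyPtrVector.h"

    void demo(int *A, int *B) {
      // Construct directly from a single element...
      llvm::TinyPtrVector<int *> One(A);
      // ...or copy all elements out of an ArrayRef<EltTy>.
      int *Elts[] = {A, B};
      llvm::TinyPtrVector<int *> Many(llvm::makeArrayRef(Elts));
    }
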
* Add a SmallMapVector class that is a MapVector with a Map of
  SmallDenseMap and a Vector of SmallVector.

  llvm-svn: 225054
  [Michael Gottesman, 2014-12-31 · 2 files changed, -0/+229]

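  For illustration, a hedged sketch of what this composition buys you
  (assuming the usual MapVector interface; not taken from the commit):

    #include "llvm/ADT/MapVector.h"

    void demo() {
      // Lookups go through a SmallDenseMap; iteration walks a
      // SmallVector in insertion order. '4' is the inline capacity.
      llvm::SmallMapVector<int, int, 4> M;
      M.insert(std::make_pair(1, 10));
      M.insert(std::make_pair(2, 20));
      for (auto &KV : M)   // visits (1,10) then (2,20), insertion order
        (void)KV;
    }
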
* Add an ArrayRef upcasting constructor from ArrayRef<U*> -> ArrayRef<T*>
  where T is a base of U.

  llvm-svn: 225053
  [Michael Gottesman, 2014-12-31 · 2 files changed, -0/+45]

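  For illustration, a small sketch of the conversion this enables
  (hypothetical types; not from the commit):

    #include "llvm/ADT/ArrayRef.h"

    struct Base {};
    struct Derived : Base {};

    void useAll(llvm::ArrayRef<Base *> Xs) { (void)Xs; }

    void demo(Derived *D1, Derived *D2) {
      Derived *Arr[] = {D1, D2};
      llvm::ArrayRef<Derived *> DRef(Arr);
      useAll(DRef); // implicit ArrayRef<Derived *> -> ArrayRef<Base *>
    }
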
* InstCombine: fsub nsz 0, X ==> fsub nsz -0.0, X

  Some day the backend may handle instruction-level fast math flags and
  make this transform unnecessary, but it's still better practice to use
  the canonical representation of fneg when possible (use a -0.0).

  This is a partial fix for PR20870
  ( http://llvm.org/bugs/show_bug.cgi?id=20870 ). See also
  http://reviews.llvm.org/D6723.

  Differential Revision: http://reviews.llvm.org/D6731

  llvm-svn: 225050
  [Sanjay Patel, 2014-12-31 · 2 files changed, -0/+16]

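  As a quick standalone C++ illustration of why -0.0 (not 0.0) is the
  canonical fneg identity (this example is not from the commit):

    #include <cstdio>

    int main() {
      float x = 0.0f;
      // 0.0 - x yields +0.0, but the true negation of +0.0 is -0.0;
      // only 'fsub -0.0, X' matches -x for every input, which is why
      // it is the canonical form and why the nsz flag is needed before
      // the two forms can be treated as equivalent.
      std::printf("%g %g %g\n", 0.0f - x, -0.0f - x, -x); // 0 -0 -0
    }
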
* Add r224985 back with a fix.

  The issue was that AArch64 has additional restrictions on when local
  relocations can be used. We have to take those into consideration when
  deciding to put a L symbol in the symbol table or not.

  Original message:

  Remove doesSectionRequireSymbols.

  In an assembly expression like

    bar:
      .long L0 + 1

  the intended semantics is that bar will contain a pointer one byte past
  L0.

  In sections that are merged by content (strings, 4 byte constants,
  etc), a single position in the section doesn't give the linker enough
  information. For example, it would not be able to tell a relocation
  must point to the end of a string, since that would look just like the
  start of the next.

  The solution used in ELF is to use relocations with symbols if there is
  a non-zero addend.

  In MachO before this patch we would just keep all symbols in some
  sections. This would miss some cases (only cstrings on x86_64 were
  implemented) and was inefficient since most relocations have an addend
  of 0 and can be represented without the symbol.

  This patch implements the non-zero addend logic for MachO too.

  llvm-svn: 225048
  [Rafael Espindola, 2014-12-31 · 19 files changed, -247/+303]

* Reverting 225045 and 225043 and XFAIL multiline.ll on hexagon

  llvm-svn: 225047
  [Colin LeMahieu, 2014-12-31 · 2 files changed, -1/+2]

* Add a test for the recent compiler-rt build failure.

  llvm-svn: 225046
  [Rafael Espindola, 2014-12-31 · 1 file changed, -0/+14]

* [Hexagon] Removing assertion to appease buildbot until I can reproduce
  the problem

  llvm-svn: 225045
  [Colin LeMahieu, 2014-12-31 · 1 file changed, -1/+0]

* Revert "Remove doesSectionRequireSymbols."Rafael Espindola2014-12-3119-294/+242
| | | | | | | | This reverts commit r224985. I am investigating why it made an Apple bot unhappy. llvm-svn: 225044
* [Hexagon] Changing an llvm_unreachable to an assertion and returning 0.
  Relocations aren't implemented yet but we don't need to abort for this
  in release builds.

  llvm-svn: 225043
  [Colin LeMahieu, 2014-12-31 · 1 file changed, -1/+2]

* [X86] Update disassembler tests for absolute move instructions to check
  the encodings. This provides testing for r225036. 64-bit mode is still
  broken.

  llvm-svn: 225037
  [Craig Topper, 2014-12-31 · 1 file changed, -37/+37]

* [X86] Fix disassembly of absolute moves to work correctly in 16 and
  32-bit modes with all 4 combinations of OpSize and AdSize prefixes
  being present or not.

  llvm-svn: 225036
  [Craig Topper, 2014-12-31 · 4 files changed, -20/+53]

* [x86] Simplify detection of jcxz/jecxz/jrcxz in disassembler.

  llvm-svn: 225035
  [Craig Topper, 2014-12-31 · 1 file changed, -16/+5]

* InstCombine: try to transform A-B < 0 into A < B

  We are allowed to move the 'B' to the right hand side if we can prove
  there is no signed overflow and if the comparison itself is signed.

  llvm-svn: 225034
  [David Majnemer, 2014-12-31 · 2 files changed, -0/+56]

* Revert "merge consecutive stores of extracted vector elements"Alexey Samsonov2014-12-312-108/+4
| | | | | | | This reverts commit r224611. This change causes crashes in X86 DAG->DAG Instruction Selection. llvm-svn: 225031
* [Hexagon] Adding accumulating add/sub, doubleword logic-not variants,
  doubleword bitfield extract, word parity, accumulating multiplies with
  saturation.

  llvm-svn: 225024
  [Colin LeMahieu, 2014-12-31 · 4 files changed, -0/+135]

* Fix a test case to not depend on asm comment syntax, so as to be
  portable

  Too many different comment characters - instead of trying to account
  for them all, disable the comments and just check for end-of-line
  instead.

  llvm-svn: 225020
  [David Blaikie, 2014-12-30 · 1 file changed, -9/+9]

* Generalize even further, for ARM comment syntax (@)

  llvm-svn: 225019
  [David Blaikie, 2014-12-30 · 1 file changed, -8/+8]

* [Hexagon] Adding double-logic on predicate instructions.

  llvm-svn: 225018
  [Colin LeMahieu, 2014-12-30 · 2 files changed, -2/+82]

* Generalize test case to handle different asm syntax (# or // comments)

  llvm-svn: 225017
  [David Blaikie, 2014-12-30 · 1 file changed, -8/+8]

* [Hexagon] Adding newvalue compare and jumps.

  llvm-svn: 225015
  [Colin LeMahieu, 2014-12-30 · 2 files changed, -17/+169]

* RTDyldMemoryManager.cpp: Make the reference to __morestack weak.

  This fixes the DSO build for now. Eventually we should develop some
  other mechanism to make this work correctly with DSOs.

  llvm-svn: 225014
  [Peter Collingbourne, 2014-12-30 · 1 file changed, -2/+4]

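  For illustration, the usual shape of a weak reference in C++ (a sketch
  of the general pattern, not the actual RTDyldMemoryManager code):

    // Declare the symbol weak: the program still links (and the DSO
    // still loads) even when libgcc's __morestack is absent.
    extern "C" void __morestack() __attribute__((weak));

    static void *lookupMorestack() {
      // Only hand out the address if the symbol actually resolved;
      // an unresolved weak symbol has a null address.
      return &__morestack ? reinterpret_cast<void *>(&__morestack)
                          : nullptr;
    }
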
* DebugInfo: Omit is_stmt from line table entries on the same line.

  GCC does this for non-zero discriminators and since GCC doesn't produce
  column info, that was the only place it comes up there. For LLVM, since
  we can emit discriminators and/or column info, it makes more sense to
  invert the condition and just test for changes in line number.

  This should resolve at least some of the GDB 7.5 test suite failures
  created by recent Clang changes that increase the location fidelity
  (which, since Clang defaults to including column info on Linux by
  default created a bunch of cases that confused GDB).

  In theory we could do this better/differently by grouping actual source
  statements together in a similar manner to the way lexical scopes are
  handled but given that GDB isn't really in a position to consume that
  (& users are probably somewhat used to different lines being different
  'statements') this seems the safest and cheapest change.

  (I'm concerned that doing this 'right' would bloat the debugloc data
  even further - something Duncan's working hard to address)

  llvm-svn: 225011
  [David Blaikie, 2014-12-30 · 4 files changed, -4/+87]

* [Hexagon] Adding postincrement register newvalue stores.

  llvm-svn: 225010
  [Colin LeMahieu, 2014-12-30 · 2 files changed, -0/+39]

* [Hexagon] Removing old newvalue store variants. Adding postincrement
  immediate newvalue stores.

  llvm-svn: 225009
  [Colin LeMahieu, 2014-12-30 · 3 files changed, -96/+141]

* [mips][microMIPS] Relocate with symbol for micromips symbols

  Differential Revision: http://reviews.llvm.org/D6796

  llvm-svn: 225008
  [Zoran Jovanovic, 2014-12-30 · 2 files changed, -1/+21]

* [Hexagon] Adding indexed store new-value variants.

  llvm-svn: 225007
  [Colin LeMahieu, 2014-12-30 · 3 files changed, -45/+151]

* [Hexagon] Adding indexed store of immediates.

  llvm-svn: 225006
  [Colin LeMahieu, 2014-12-30 · 3 files changed, -48/+133]

* [Hexagon] Adding indexed stores.

  llvm-svn: 225005
  [Colin LeMahieu, 2014-12-30 · 4 files changed, -81/+282]

* x86_64: Fix calls to __morestack under the large code model.

  Under the large code model, we cannot assume that __morestack lives
  within 2^31 bytes of the call site, so we cannot use pc-relative
  addressing. We cannot perform the call via a temporary register, as the
  rax register may be used to store the static chain, and all other
  suitable registers may be either callee-save or used for parameter
  passing. We cannot use the stack at this point either because
  __morestack manipulates the stack directly.

  To avoid these issues, perform an indirect call via a read-only memory
  location containing the address.

  This solution is not perfect, as it assumes that the .rodata section is
  laid out within 2^31 bytes of each function body, but this seems to be
  sufficient for JIT.

  Differential Revision: http://reviews.llvm.org/D6787

  llvm-svn: 225003
  [Peter Collingbourne, 2014-12-30 · 5 files changed, -7/+78]

* [asan] change _sanitizer_cov_module_init to accept int* instead of
  int**

  llvm-svn: 224999
  [Kostya Serebryany, 2014-12-30 · 2 files changed, -19/+35]

* [COFF] Don't try to add quotes to already quoted linker directives

  If a linker directive is already quoted, don't try to quote it again,
  otherwise it creates a mess. This pops up in places like:

    #pragma comment(linker,"\"/foo bar'\"")

  Differential Revision: http://reviews.llvm.org/D6792

  llvm-svn: 224998
  [Michael Kuperstein, 2014-12-30 · 2 files changed, -2/+3]

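  For illustration, a minimal sketch of the check (not the actual
  lowering code; the helper name is hypothetical):

    #include "llvm/ADT/StringRef.h"
    #include <string>

    static std::string quoteIfNeeded(llvm::StringRef Directive) {
      // Already wrapped in quotes: pass it through untouched so we
      // don't produce nested, mangled quoting.
      if (Directive.size() >= 2 && Directive.front() == '"' &&
          Directive.back() == '"')
        return Directive.str();
      return "\"" + Directive.str() + "\"";
    }
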
* [Hexagon] Adding reg-reg indexed load forms.

  llvm-svn: 224997
  [Colin LeMahieu, 2014-12-30 · 6 files changed, -94/+216]

* The __morestack function is only available on i386 and x86_64
  architectures.

  llvm-svn: 224994
  [Peter Collingbourne, 2014-12-30 · 1 file changed, -1/+4]

* Make the __morestack function available to the JIT memory manager under
  Linux.

  This function's implementation lives in libgcc, a static library, so we
  need to expose it explicitly, like the other such functions.

  Differential Revision: http://reviews.llvm.org/D6788

  llvm-svn: 224993
  [Peter Collingbourne, 2014-12-30 · 1 file changed, -0/+7]

* [Hexagon] Dropping old combine instructions without encodings.

  llvm-svn: 224992
  [Colin LeMahieu, 2014-12-30 · 3 files changed, -79/+68]

* [Hexagon] Adding compare byte/halfword reg-reg/reg-imm forms. Adding
  compare to general register reg-imm form.

  llvm-svn: 224991
  [Colin LeMahieu, 2014-12-30 · 3 files changed, -55/+149]

* [Hexagon] Updating constant extender def, adding alu-not instructions,
  compare to general register, and inverted compares.

  llvm-svn: 224989
  [Colin LeMahieu, 2014-12-30 · 4 files changed, -14/+60]

* Some code improvements in Masked Load/Store.

  No functional changes.

  llvm-svn: 224986
  [Elena Demikhovsky, 2014-12-30 · 3 files changed, -36/+46]

* Remove doesSectionRequireSymbols.

  In an assembly expression like

    bar:
      .long L0 + 1

  the intended semantics is that bar will contain a pointer one byte past
  L0.

  In sections that are merged by content (strings, 4 byte constants,
  etc), a single position in the section doesn't give the linker enough
  information. For example, it would not be able to tell a relocation
  must point to the end of a string, since that would look just like the
  start of the next.

  The solution used in ELF is to use relocations with symbols if there is
  a non-zero addend.

  In MachO before this patch we would just keep all symbols in some
  sections. This would miss some cases (only cstrings on x86_64 were
  implemented) and was inefficient since most relocations have an addend
  of 0 and can be represented without the symbol.

  This patch implements the non-zero addend logic for MachO too.

  llvm-svn: 224985
  [Rafael Espindola, 2014-12-30 · 19 files changed, -242/+294]

* reverted prev commit (it was a mistake)

  llvm-svn: 224984
  [Elena Demikhovsky, 2014-12-30 · 3 files changed, -14/+0]

* v (accidental commit; reverted by r224984 above)

  llvm-svn: 224983
  [Elena Demikhovsky, 2014-12-30 · 3 files changed, -0/+14]

* Add IRBuilder routines for gc.statepoints, gc.results, and gc.relocates

  Nothing particularly interesting, just adding infrastructure for use by
  in-tree and out-of-tree users.

  Note: These were extracted out of a working frontend, but they have not
  been well tested in isolation.

  Differential Revision: http://reviews.llvm.org/D6807

  llvm-svn: 224981
  [Philip Reames, 2014-12-30 · 2 files changed, -2/+92]

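  A rough sketch of how such helpers might be used; the names and
  parameter lists below (CreateGCStatepoint, CreateGCResult) are
  assumptions reconstructed from this commit's description, not verified
  signatures:

    #include "llvm/IR/IRBuilder.h"

    llvm::Value *emitStatepoint(llvm::IRBuilder<> &B, llvm::Value *Callee,
                                llvm::ArrayRef<llvm::Value *> Args,
                                llvm::Type *RetTy) {
      llvm::ArrayRef<llvm::Value *> None; // no deopt/gc args in sketch
      // Wrap the call in a gc.statepoint so the GC can locate it...
      llvm::CallInst *Token =
          B.CreateGCStatepoint(Callee, Args, None, None);
      // ...and read the callee's return value back out of the token.
      return B.CreateGCResult(Token, RetTy, "result");
    }
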
* Simplify test a bit.

  It looks like the original intent was to check which symbols were
  created. With macho-dump the sections were being checked just to match
  which symbol was in which section. llvm-objdump prints the section a
  symbol is in.

  llvm-svn: 224980
  [Rafael Espindola, 2014-12-30 · 1 file changed, -516/+3]

* [OCaml] Fix bitrot in tests.

  llvm-svn: 224979
  [Peter Zotov, 2014-12-30 · 1 file changed, -2/+2]