path: root/llvm/lib/Target
...
* Create a MCSymbolELF. (Rafael Espindola, 2015-06-02; 2 files, -5/+7)
  This creates an MCSymbolELF class and moves SymbolSize into it, since only ELF needs a size expression. This reduces the size of MCSymbol from 56 to 48 bytes.
  llvm-svn: 238801
* ARM: Thumb2 LDRD/STRD supports independent input/output regs (Matthias Braun, 2015-06-01; 1 file, -20/+22)
  The existing code would unnecessarily break LDRD/STRD apart with non-adjacent registers; on Thumb2 this is not necessary. Ideally on Thumb2 we shouldn't match for ldrd/strd pre-regalloc anymore, as there is no reason to set register hints anymore; changing that is left for a future patch, however.
  Differential Revision: http://reviews.llvm.org/D9694
  llvm-svn: 238795
* AArch64: Use CMP;CCMP sequences for and/or/setcc trees. (Matthias Braun, 2015-06-01; 4 files, -72/+255)
  Previously CCMP/FCCMP instructions were only used by the AArch64ConditionalCompares pass for control flow. This patch uses them for SELECT-like instructions as well, by matching patterns in ISelLowering.
  PR20927, rdar://18326194
  Differential Revision: http://reviews.llvm.org/D8232
  llvm-svn: 238793
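  For illustration, a hypothetical source pattern this improves (my sketch, not from the patch); the lowering in the comments is a plausible result, not verbatim compiler output:

      // Both compares feed a single select.
      int pick(int a, int b, int c, int d, int x, int y) {
        return (a < b && c < d) ? x : y;
        // Plausible AArch64 lowering after this patch:
        //   cmp  w0, w1
        //   ccmp w2, w3, #0, lt   // compare c,d only if a<b held;
        //                         // otherwise set NZCV so 'lt' fails
        //   csel w0, w4, w5, lt
        // instead of materializing each i1 and AND-ing them together.
      }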
* [bpf] fix build (Alexei Starovoitov, 2015-06-01; 1 file, -2/+2)
  Fix breakage due to r238634. Patch by Vijay Subramanian.
  llvm-svn: 238792
* R600/SI: Don't hardcode pointer type (Matt Arsenault, 2015-06-01; 1 file, -4/+5)
  llvm-svn: 238789
* ARMLoadStoreOptimizer: Fix doxygen comments; NFC (Matthias Braun, 2015-06-01; 1 file, -34/+28)
  llvm-svn: 238784
* Revert "[Hexagon] Adding basic ELF relocation generation and testing advanced relaxation codepath." (Rafael Espindola, 2015-06-01; 3 files, -427/+28)
  This reverts commit r238748. It broke the msan bot: http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/4372/steps/check-llvm%20msan/logs/stdio
  llvm-svn: 238772
* [mips][FastISel] Implement bswap. (Vasileios Kalintiris, 2015-06-01; 1 file, -0/+64)
  Summary: Implement the bswap intrinsic for MIPS FastISel. It's very different for mips32 r1/r2. Based on a patch by Reed Kotler.
  Test Plan: bswap1.ll, test-suite
  Reviewers: dsanders, rkotler
  Subscribers: llvm-commits, rfuhler
  Differential Revision: http://reviews.llvm.org/D7219
  llvm-svn: 238760
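  To see why the two ISA revisions differ (my own sketch, not the patch's code): mips32r2 can byte-swap a word with WSBH plus ROTR, while mips32r1 has to synthesize the swap from shifts, masks, and ORs, roughly the scalar equivalent of:

      #include <cstdint>
      // Shift/mask byte swap that mips32r1 falls back to.
      inline uint32_t bswap32_r1(uint32_t v) {
        return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
               ((v << 8) & 0x00FF0000u) | (v << 24);
      }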
* [mips][FastISel] Implement intrinsics memset, memcpy & memmove. (Vasileios Kalintiris, 2015-06-01; 1 file, -7/+89)
  Summary: Implement the intrinsics memset, memcpy, and memmove in MIPS FastISel, and make some needed infrastructure fixes so that this can work. Based on a patch by Reed Kotler.
  Test Plan: memtest1.ll. The patch passes test-suite for mips32 r1/r2 and at O0/O2.
  Reviewers: rkotler, dsanders
  Subscribers: llvm-commits, rfuhler
  Differential Revision: http://reviews.llvm.org/D7158
  llvm-svn: 238759
* [mips][FastISel] Implement srem/urem and sdiv/udiv instructions. (Vasileios Kalintiris, 2015-06-01; 1 file, -0/+61)
  Summary: Implement the LLVM assembly urem/srem and sdiv/udiv instructions in MIPS FastISel. Based on a patch by Reed Kotler.
  Test Plan: srem1.ll, div1.ll, test-suite at O0/O2 for mips32 r1/r2
  Reviewers: dsanders, rkotler
  Subscribers: llvm-commits, rfuhler
  Differential Revision: http://reviews.llvm.org/D7028
  llvm-svn: 238757
* [mips][FastISel] Implement the select statement for MIPS FastISel. (Vasileios Kalintiris, 2015-06-01; 1 file, -0/+47)
  Summary: Implement the LLVM IR select statement for MIPS FastISel. Based on a patch by Reed Kotler.
  Test Plan: "make check" test included now. Passes test-suite at O2/O0 for mips32 r1/r2.
  Reviewers: dsanders, rkotler
  Subscribers: llvm-commits, rfuhler
  Differential Revision: http://reviews.llvm.org/D6774
  llvm-svn: 238756
* [mips][FastISel] Clobber HI0/LO0 registers in MUL instructions. (Vasileios Kalintiris, 2015-06-01; 1 file, -0/+33)
  Summary: The contents of the HI/LO registers are unpredictable after the execution of the MUL instruction. In addition to implicitly defining these registers in the MUL instruction definition, we have to mark those registers as dead too. Without this, the fast register allocator runs out of registers when the MUL instruction is followed by another one that tries to allocate the AC0 register. Based on a patch by Reed Kotler.
  Reviewers: dsanders, rkotler
  Subscribers: llvm-commits, rfuhler
  Differential Revision: http://reviews.llvm.org/D9825
  llvm-svn: 238755
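  A hedged sketch of the fix in LLVM's MI-layer terms (placeholder names like InsertPt and DstReg are mine; the exact call site in MipsFastISel may differ):

      // Mark the implicit HI0/LO0 defs of the emitted MUL as dead so the
      // fast register allocator doesn't treat them as live values that
      // block a later allocation of AC0.
      BuildMI(*MBB, InsertPt, DL, TII.get(Mips::MUL), DstReg)
          .addReg(LHSReg)
          .addReg(RHSReg)
          .addReg(Mips::HI0, RegState::ImplicitDefine | RegState::Dead)
          .addReg(Mips::LO0, RegState::ImplicitDefine | RegState::Dead);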
* Fix relocation selection for foo-. on mips. (Rafael Espindola, 2015-06-01; 1 file, -1/+1)
  This handles only the 32-bit case.
  llvm-svn: 238751
* Simplify code, NFC. (Rafael Espindola, 2015-06-01; 1 file, -107/+60)
  llvm-svn: 238750
* [Hexagon] Adding basic ELF relocation generation and testing advanced relaxation codepath. (Colin LeMahieu, 2015-06-01; 3 files, -28/+427)
  llvm-svn: 238748
* AVX-512: Optimized vector shuffle for v16f32 and v16i32 types. (Elena Demikhovsky, 2015-06-01; 1 file, -36/+48)
  llvm-svn: 238743
* Re-commit of r238201 with fix for building with shared libraries. (Luke Cheeseman, 2015-06-01; 7 files, -18/+629)
  llvm-svn: 238739
* AVX-512: Implemented VRANGEPD and VRANGEPS instructions for SKX. (Elena Demikhovsky, 2015-06-01; 4 files, -6/+34)
  Implemented DAG lowering for all these forms. Added tests for encoding.
  By Igor Breger (igor.breger@intel.com)
  llvm-svn: 238738
* AVX-512: Implemented vector shuffle lowering for v8i64 and v8f64 types. (Elena Demikhovsky, 2015-06-01; 1 file, -33/+102)
  I removed vector-shuffle-512-v8.ll; it is an auto-generated test and no longer valid.
  llvm-svn: 238735
* AVX-512: added all forms of VPSHUFD, VPSHUFHW, and VPSHUFLW, including encodings. (Elena Demikhovsky, 2015-06-01; 2 files, -36/+28)
  llvm-svn: 238729
* AVX-512: Implemented VFIXUPIMMPD and VFIXUPIMMPS instructions for KNL and SKX (Elena Demikhovsky, 2015-06-01; 4 files, -0/+74)
  Implemented DAG lowering for all these forms. Added tests for encoding.
  By Igor Breger (igor.breger@intel.com)
  llvm-svn: 238728
* AVX-512: Fixed a bug in compress and expand intrinsics. (Elena Demikhovsky, 2015-06-01; 1 file, -5/+8)
  By Igor Breger (igor.breger@intel.com)
  llvm-svn: 238724
* Add address space argument to isLegalAddressingMode (Matt Arsenault, 2015-06-01; 20 files, -27/+49)
  This is important because of different addressing modes depending on the address space for GPU targets. This only adds the argument, and does not update any of the uses to provide the correct address space.
  llvm-svn: 238723
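  The changed hook looks roughly like this (a signature sketch for this era of the tree; treat the exact spelling as an assumption):

      // TargetLoweringBase: targets override this to say which
      // base + scale*reg + offset combinations are legal. AddrSpace is
      // the newly added parameter; existing overrides can ignore it.
      virtual bool isLegalAddressingMode(const AddrMode &AM, Type *Ty,
                                         unsigned AddrSpace) const;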
* Simplify another function that doesn't fail. (Rafael Espindola, 2015-06-01; 1 file, -1/+1)
  llvm-svn: 238703
* ARMConstantIslandPass.cpp: Prune an empty \brief. [-Wdocumentation] (NAKAMURA Takumi, 2015-05-31; 1 file, -1/+0)
  llvm-svn: 238697
* [Hexagon] Including raw_ostream for debug builds. (Colin LeMahieu, 2015-05-31; 1 file, -0/+1)
  llvm-svn: 238695
* [Hexagon] classes are actually structs. (Colin LeMahieu, 2015-05-31; 1 file, -2/+2)
  llvm-svn: 238694
* [Hexagon] Adding MC packet shuffler. (Colin LeMahieu, 2015-05-31; 9 files, -6/+882)
  llvm-svn: 238692
* ARM: recommit r237590: allow jump tables to be placed as constant islands. (Tim Northover, 2015-05-31; 7 files, -238/+418)
  The original version didn't properly account for the base register being modified before the final jump, so it caused miscompilations in Chromium and LLVM. I've fixed this and tested with an LLVM self-host (I don't have the means to build & test Chromium). The general idea remains the same: in pathological cases jump tables can be too far away from the instructions referencing them (like other constants), so they need to be movable. Should fix PR23627.
  llvm-svn: 238680
* [Hexagon] Adding override specifier and removing erroneous assertion (Colin LeMahieu, 2015-05-30; 1 file, -4/+2)
  llvm-svn: 238664
* [Hexagon] Adding basic relaxation functionality. (Colin LeMahieu, 2015-05-30; 3 files, -6/+157)
  llvm-svn: 238660
* Stripped trailing whitespace. NFC. (Simon Pilgrim, 2015-05-30; 1 file, -12/+12)
  llvm-svn: 238654
* Comment change. NFC (Renato Golin, 2015-05-30; 1 file, -3/+2)
  That comment misleads the ongoing discussion in the mentioned bug; leave the discussion to the bug itself. Also adds a FIXME for a future change.
  llvm-svn: 238653
* [x86] Unify the horizontal adding used for popcount lowering, taking the best approach of each. (Chandler Carruth, 2015-05-30; 1 file, -76/+16)
  For vNi16, we use the SHL + ADD + SRL pattern, which seems easily the best. For vNi32, we use the PUNPCK + PSADBW + PACKUSWB pattern. In some cases there is a huge improvement with this in IACA's estimated throughput -- over 2x higher throughput!!!! -- but the measurements are too good to be true. In one narrow case, the SHL + ADD + SHL + ADD + SRL pattern looks slightly faster, but I'm not sure I believe any of the measurements at this point. Both are the exact same uops though. Hard to be confident of anything past that.
  If anyone wants to collect very detailed (Agner-level) timings with the result of this patch, or with the i32 case replaced with SHL + ADD + SHL + ADD + SRL, I'd be very interested. Note that you'll need to test it on both Ivybridge and Haswell, with SSE3, SSSE3, and AVX each selected, as I saw unique behavior in each of these buckets with IACA, all of which should be checked against measured performance.
  But this patch is still a useful improvement by dropping duplicate work and getting the much nicer PSADBW lowering for v2i64. I'd still like to rephrase this in terms of a generic horizontal sum; it's a bit lame to have a special case of that just for popcount.
  llvm-svn: 238652
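  For reference, the vNi16 SHL + ADD + SRL step looks roughly like this in SSE2 intrinsics (my sketch, assuming per-byte popcounts already sit in 'bytes'):

      #include <emmintrin.h>
      // Fold per-byte counts into per-i16 counts: add each low byte into
      // its high neighbor, then shift the sum back down.
      inline __m128i byte_sums_to_i16(__m128i bytes) {
        __m128i paired = _mm_add_epi8(_mm_slli_epi16(bytes, 8), bytes);
        return _mm_srli_epi16(paired, 8);
      }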
* [ARMTargetParser] Move IAS arch ext parser. NFC (Renato Golin, 2015-05-30; 1 file, -20/+22)
  The plan was to move the whole table into the already existing ArchExtNames, but some fields depend on a table-generated file, and we don't yet have this feature in the generic lib/Support side. Until the minimum target-specific table-generated files are available in a generic fashion to these libraries, we'll have to keep it in the ASM parser.
  llvm-svn: 238651
* [x86] Split out the horizontal byte sum lowering component of the LUT lowering into a helper function. NFC. (Chandler Carruth, 2015-05-30; 1 file, -78/+100)
  llvm-svn: 238650
* [x86] Replace the long spelling of getting a bitcast with the *much* shorter one. NFC. (Chandler Carruth, 2015-05-30; 1 file, -332/+300)
  In addition to being much shorter to type and requiring fewer arguments, this change saves over 30 lines from this one file, all wasted on total boilerplate...
  llvm-svn: 238640
* [x86] Replace the long spelling of getting a bitcast with the new short spelling. NFC. (Chandler Carruth, 2015-05-30; 1 file, -14/+13)
  llvm-svn: 238639
* [sdag] Add the helper I most want to the DAG -- building a bitcast around a value using its existing SDLoc. (Chandler Carruth, 2015-05-30; 1 file, -14/+8)
  Start using this in just one function to save omg lines of code.
  llvm-svn: 238638
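  The helper is presumably along these lines (my sketch of SelectionDAG::getBitcast; I have not reproduced the committed body exactly):

      // Build a bitcast that reuses the operand's own debug location, so
      // call sites no longer spell out ISD::BITCAST plus an SDLoc.
      SDValue SelectionDAG::getBitcast(EVT VT, SDValue V) {
        return getNode(ISD::BITCAST, SDLoc(V), VT, V);
      }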
* [x86] Restore the bitcasts I removed when refactoring this to avoid shifting vectors of bytes, as x86 doesn't have direct support for that. (Chandler Carruth, 2015-05-30; 1 file, -43/+41)
  This removes a bunch of redundant masking in the generated code for SSE2 and SSE3. In order to avoid the really significant code size growth this would have triggered, I also factored the completely repetitive logic for shifting and masking into two lambdas, which in turn makes all of this much easier to read IMO.
  llvm-svn: 238637
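  The underlying constraint, sketched with intrinsics (my illustration): x86 has no per-byte shift such as a hypothetical PSRLB, so an 8-bit-lane shift runs in 16-bit lanes and then masks off the bits that leaked in from the neighboring byte.

      #include <emmintrin.h>
      template <int N>  // shift count must be a compile-time constant
      inline __m128i srli_epi8(__m128i v) {
        __m128i shifted = _mm_srli_epi16(v, N);  // shift 16-bit lanes
        return _mm_and_si128(                    // drop the leaked bits
            shifted, _mm_set1_epi8(static_cast<char>(0xFFu >> N)));
      }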
* [x86] Implement a faster vector population count based on the PSHUFB in-register LUT technique. (Chandler Carruth, 2015-05-30; 4 files, -28/+196)
  Summary: A description of this technique can be found here: http://wm.ite.pl/articles/sse-popcount.html
  The core of the idea is to use an in-register lookup table and the PSHUFB instruction to compute the population count for the low and high nibbles of each byte, and then to use horizontal sums to aggregate these into vector population counts with wider element types. On x86 there is an instruction that will directly compute the horizontal sum for the low 8 and high 8 bytes, giving vNi64 popcount very easily. Various tricks are used to get vNi32 and vNi16 from the vNi8 that the LUT computes.
  The base implementation of this, and most of the work, was done by Bruno in a follow-up to D6531. See Bruno's detailed post there for lots of timing information about these changes. I have extended Bruno's patch in the following ways:
  0) I committed the new tests with baseline sequences so this shows a diff, and regenerated the tests using the update scripts.
  1) Bruno had noticed and mentioned in IRC a redundant mask that I removed.
  2) I introduced a particular optimization for the i32 vector cases where we use PSHL + PSADBW to compute the low i32 popcounts, and PSHUFD + PSADBW to compute doubled high i32 popcounts. This takes advantage of the fact that to line up the high i32 popcounts we have to shift them anyway, and we can shift them by one fewer bit to effectively divide the count by two. While the PSHUFD-based horizontal add is no faster, it doesn't require registers or load traffic the way a mask would, and provides more ILP as it happens on different ports with high throughput.
  3) I did some code cleanups throughout to simplify the implementation logic.
  4) I refactored it to continue to use the parallel bitmath lowering when SSSE3 is not available, to preserve the performance of that version on SSE2 targets, where it is still much better than scalarizing; we'll still do a bitmath implementation of popcount even in scalar code there.
  With #1 and #2 above, I analyzed the result in IACA for Sandybridge, Ivybridge, and Haswell. In every case I measured, the throughput is the same or better using the LUT lowering, even for v2i64 and v4i64, and even compared with using the native popcnt instruction! The latency of the LUT lowering is often higher than the latency of the scalarized popcnt instruction sequence, but I think those latency measurements are deeply misleading. Keeping the operation fully in the vector unit and having many chances for increased throughput seems much more likely to win.
  With this, we can lower every integer vector popcount implementation using the LUT strategy if we have SSSE3 or better (and thus have PSHUFB). I've updated the operation lowering to reflect this. This also fixes an issue where we were scalarizing horribly some AVX lowerings.
  Finally, there are some remaining cleanups. There is duplication between the two techniques in how they perform the horizontal sum once the byte population count is computed. I'm going to factor and merge those two in a separate follow-up commit.
  Differential Revision: http://reviews.llvm.org/D10084
  llvm-svn: 238636
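  The core of the technique, per the linked article (the well-known SSSE3 idiom, written here as my own sketch rather than the patch's literal code):

      #include <tmmintrin.h>
      // Per-byte popcount: look up each nibble's count in a 16-entry
      // in-register table via PSHUFB, then add the two halves.
      inline __m128i popcnt_epi8(__m128i v) {
        const __m128i lut =
            _mm_setr_epi8(0,1,1,2, 1,2,2,3, 1,2,2,3, 2,3,3,4);
        const __m128i mask = _mm_set1_epi8(0x0F);
        __m128i lo = _mm_and_si128(v, mask);
        __m128i hi = _mm_and_si128(_mm_srli_epi16(v, 4), mask);
        return _mm_add_epi8(_mm_shuffle_epi8(lut, lo),
                            _mm_shuffle_epi8(lut, hi));
      }

  The "instruction that will directly compute the horizontal sum" is PSADBW: _mm_sad_epu8(popcnt_epi8(v), _mm_setzero_si128()) sums each 8-byte half, yielding the v2i64 popcounts directly.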
* [x86] Restructure the parallel bitmath lowering of popcount into a separate routine, generalize it to work for all the integer vector sizes, and do general code cleanups. (Chandler Carruth, 2015-05-30; 1 file, -103/+95)
  This dramatically improves lowerings of byte and short element vector popcount, but more importantly it will make the introduction of the LUT approach much cleaner.
  The biggest cleanup I've done is to just force the legalizer to do the bitcasting we need. We run these iteratively now and it makes the code much simpler IMO. Other changes were minor, and mostly naming and splitting things up in a way that makes it more clear what is going on.
  The other significant change is to use a different final horizontal sum approach. This is the same number of instructions as the old method, but shifts left instead of right so that we can clear everything but the final sum with a single shift right. This seems likely better than a mask, which will usually have to be read from memory. It is certainly fewer u-ops. Also, this will be temporary: this and the LUT approach share the need for horizontal adds to finish the computation, and we have more clever approaches than this one that I'll switch over to.
  llvm-svn: 238635
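  For context, the parallel bitmath being restructured is the standard divide-and-conquer popcount; per lane it computes the scalar equivalent of (my sketch):

      #include <cstdint>
      inline uint32_t popcount32(uint32_t v) {
        v = v - ((v >> 1) & 0x55555555u);                  // 2-bit sums
        v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);  // 4-bit sums
        v = (v + (v >> 4)) & 0x0F0F0F0Fu;                  // byte sums
        return (v * 0x01010101u) >> 24;                    // horizontal add
      }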
* MC: Clean up MCExpr naming. NFC. (Jim Grosbach, 2015-05-30; 72 files, -488/+488)
  llvm-svn: 238634
* [WinEH] Adjust the 32-bit SEH prologue to better match reality (Reid Kleckner, 2015-05-29; 1 file, -72/+52)
  It turns out that _except_handler3 and _except_handler4 really use the same stack allocation layout, at least today. They just make different choices about encoding the LSDA. This is in preparation for lowering llvm.eh.exceptioninfo().
  llvm-svn: 238627
* Disable FP elimination in funcs using 32-bit MSVC EH personalities (Reid Kleckner, 2015-05-29; 1 file, -0/+5)
  The value in 'ebp' acts as an implicit argument to the outlined handlers, and is recovered with frameaddress(1).
  llvm-svn: 238619
* Remove getData. (Rafael Espindola, 2015-05-29; 7 files, -52/+34)
  This completes the mechanical part of merging MCSymbol and MCSymbolData.
  llvm-svn: 238617
* Only add the EH state insertion pass on 32-bit Windows (Reid Kleckner, 2015-05-29; 1 file, -2/+2)
  llvm-svn: 238612
* Remove the MCSymbolData typedef. (Rafael Espindola, 2015-05-29; 7 files, -16/+16)
  The getData member function is next.
  llvm-svn: 238611
* Rename getOrCreateSymbolData to registerSymbol and return void. (Rafael Espindola, 2015-05-29; 3 files, -3/+5)
  Another step in merging MCSymbol and MCSymbolData.
  llvm-svn: 238607
* Pass MCSymbols to the helper functions in MCELF.h. (Rafael Espindola, 2015-05-29; 10 files, -34/+24)
  llvm-svn: 238596