path: root/llvm/lib
Commit log for llvm/lib (most recent first). Each entry lists the subject, the author and date, and the diffstat (files changed, -deleted/+added lines).
* [dwarfdump] Print the name for referenced specification of abstract_origin DIEs.
  Frederic Riss, 2014-10-06 (1 file, -0/+14)
  Reviewers: dblaikie, samsonov, echristo, aprantl
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D5466
  llvm-svn: 219099
* Factor the Unit section parsing into the DWARFUnitSection class.
  Frederic Riss, 2014-10-06 (3 files, -62/+50)
  Summary: No functional change.
  Reviewers: dblaikie, samsonov
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D5522
  llvm-svn: 219098
* [PM] Remove an unused and rather expensive mapping from an analysis group's interface to all of the implementations of that analysis group.
  Chandler Carruth, 2014-10-06 (1 file, -4/+0)
  The groups themselves can and do manage this anyway; the pass registry needn't involve itself.
  llvm-svn: 219097
* [PM] Remove the (deeply misguided) 'unregister' functionality from the pass registry.
  Chandler Carruth, 2014-10-06 (1 file, -10/+0)
  This style of registry is somewhat questionable, but it being non-monotonic is crazy. No one is (or should be) unloading DSOs with passes and unregistering them here. I've checked with a few folks and I don't know of anyone using this functionality or any important use case where it is necessary.
  llvm-svn: 219096
* [cleanup] Switch to using range-based for loops in two very obvious places.
  Chandler Carruth, 2014-10-06 (1 file, -6/+4)
  llvm-svn: 219095
* [cleanup] Fix up trailing whitespace and formatting in the pass registry code prior to hacking on it more significantly.
  Chandler Carruth, 2014-10-05 (1 file, -21/+21)
  llvm-svn: 219094
* Give the Reassociate pass a bit more flexibility and autonomy when optimizing expressions.
  Owen Anderson, 2014-10-05 (1 file, -2/+12)
  In particular, it addresses cases where Reassociate breaks subtracts but then fails to optimize combinations like I1 + -I2 where I1 and I2 have the same rank and are identical. Patch by Dmitri Shtilman.
  llvm-svn: 219092
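  A hypothetical C++-source illustration (not pass code) of the case described: once the subtracts are broken into additions of negations, identical equal-rank operands pair off and the whole expression folds to zero.

      #include <cstdio>

      // After Reassociate breaks the subtracts, (a - b) + (b - a) becomes
      // a + -b + b + -a; pairing each I with its -I folds the sum to 0.
      int cancels(int a, int b) { return (a - b) + (b - a); }

      int main() { std::printf("%d\n", cancels(7, 3)); } // prints 0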
* [x86] Remove the 2-addr-to-3-addr "optimization" from shufps to pshufd.
  Chandler Carruth, 2014-10-05 (1 file, -28/+0)
  This trades a (register-renamer-friendly) movaps for a floating point / integer domain cross. That is a very bad trade, even on architectures where domain crossing is relatively fast. On any chip where there is even a cycle stall, this is a Very Bad Idea. It doesn't even seem likely to cause a spill to be introduced because the reason for the copy is to destructively shuffle in place.
  Thanks to Ben Kramer for fixing a bug in this code that my new shuffle lowering exposed and highlighting that perhaps it should just go away. =]
  llvm-svn: 219090
* [x86, dag] Teach the DAG combiner to prune inputs to a vector_shuffle that are unused.
  Chandler Carruth, 2014-10-05 (1 file, -0/+93)
  This allows the combiner to delete math feeding shuffles where the math isn't actually necessary. This improves some of the vperm2x128 tests that regressed when the vector shuffle lowering started actually generating vperm instructions rather than forcibly decomposing them.
  Sadly, this isn't enough to get this *really* right because we still form a completely unnecessary permutation. To fix that, we also need to fold shuffles which just rearrange concatenated or inserted subvectors.
  llvm-svn: 219086
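  A hypothetical illustration using Clang's vector extensions: the multiply feeds the shuffle but none of its lanes are selected by the mask, so with this change the combiner can drop that input and delete the math.

      typedef float v4f32 __attribute__((vector_size(16)));

      // The mask 0,1,2,3 selects only lanes of 'a'; the multiply producing
      // the second input contributes nothing and can be pruned.
      v4f32 prunes_unused_input(v4f32 a, v4f32 b) {
        v4f32 unused = b * b; // feeds the shuffle, but no lane is used
        return __builtin_shufflevector(a, unused, 0, 1, 2, 3);
      }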
* Remove unused map.
  David Blaikie, 2014-10-05 (2 files, -6/+0)
  This became unnecessary/unused in r208636.
  llvm-svn: 219085
* X86: Don't drop half of the mask when converting 2-address shufps into 3-address pshufd.
  Benjamin Kramer, 2014-10-05 (1 file, -1/+1)
  It's debatable whether this transform is useful at all, but for now make sure we don't generate invalid asm.
  llvm-svn: 219084
* AVX-512-SKX: Added instructions VPMOVM2B/W/D/Q.
  Elena Demikhovsky, 2014-10-05 (2 files, -2/+50)
  These instructions broadcast a mask vector to a data vector.
  llvm-svn: 219083
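  A hypothetical use via the usual Intel intrinsic naming (assumed; requires AVX-512BW): each bit of the mask register becomes an all-ones or all-zeros element.

      #include <immintrin.h>

      // VPMOVM2B: broadcast each of the 64 mask bits to a full byte lane.
      __m512i mask_to_byte_vector(__mmask64 m) {
        return _mm512_movm_epi8(m);
      }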
* Simplify code. No functionality change.
  Benjamin Kramer, 2014-10-05 (1 file, -4/+2)
  llvm-svn: 219082
* [x86] Fix PR21139, one of the last remaining regressions found in the new vector shuffle lowering.
  Chandler Carruth, 2014-10-05 (1 file, -9/+17)
  This is loosely based on a patch by Marius Wachtler to the PR (thanks!). I refactored it a bit to use std::count_if and a mutable array ref but the core idea was exactly right. I also added some direct testing of this case.
  I believe PR21137 is now the only remaining regression.
  llvm-svn: 219081
* [x86] Teach the new vector shuffle lowering how to lower 128-bit shuffles using AVX and AVX2 instructions.
  Chandler Carruth, 2014-10-05 (1 file, -55/+102)
  This fixes PR21138, one of the few remaining regressions impacting benchmarks from the new vector shuffle lowering. You may note that it "regresses" many of the vperm2x128 test cases -- these were actually "improved" by the naive lowering that the new shuffle lowering previously did.
  This regression gave me fits. I had this patch ready-to-go about an hour after flipping the switch but wasn't sure how to have the best of both worlds here and thought the correct solution might be a completely different approach to lowering these vector shuffles.
  I'm now convinced this is the correct lowering and the missed optimizations shown in vperm2x128 are actually due to missing target-independent DAG combines. I've even written most of the needed DAG combine and will submit it shortly, but this part is ready and should help some real-world benchmarks out.
  llvm-svn: 219079
* HexagonMCCodeEmitter.cpp: Prune 2nd redundant \brief. [-Wdocumentation]
  NAKAMURA Takumi, 2014-10-05 (1 file, -1/+1)
  llvm-svn: 219073
* HexagonDesc: Update LLVMBuild.txt.
  NAKAMURA Takumi, 2014-10-05 (1 file, -1/+1)
  llvm-svn: 219071
* [InstCombine] Simplify the logic from r219067 using ValueTracking.
  Hal Finkel, 2014-10-05 (1 file, -12/+4)
  Joerg suggested on IRC that I look at generalizing the logic from r219067 to handle more general redundancies (like removing an assume(x > 3) dominated by an assume(x > 5)). The way to do this would be to ask ValueTracking to determine the value of the i1 argument. It turns out that ValueTracking is not very good at this right now (although it does get the trivial redundancy case) because it does not understand ICmps. Nevertheless, the resulting code in InstCombine is simpler than r219067, so we might as well do it now.
  llvm-svn: 219070
* [SystemZ] Make operator bool explicit. NFC.
  Benjamin Kramer, 2014-10-04 (2 files, -2/+2)
  llvm-svn: 219069
* Make AAMDNodes ctor and operator bool (!!!) explicit, mop up bugs and weirdness exposed by it.
  Benjamin Kramer, 2014-10-04 (5 files, -9/+12)
  llvm-svn: 219068
* [InstCombine] Remove redundant @llvm.assume intrinsics.
  Hal Finkel, 2014-10-04 (1 file, -0/+17)
  For any @llvm.assume intrinsic, if there is another which dominates it and uses the same condition, then it is redundant and can be removed. While this does not alter the semantics of the @llvm.assume intrinsics, it makes subsequent handling more efficient (and the resulting IR easier to read).
  llvm-svn: 219067
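  A minimal sketch of the dominance test described above (an assumed helper, not the actual InstCombine code):

      #include "llvm/ADT/ArrayRef.h"
      #include "llvm/IR/Dominators.h"
      #include "llvm/IR/IntrinsicInst.h"
      using namespace llvm;

      // An @llvm.assume is redundant if another assume of the same condition
      // value dominates it.
      static bool isRedundantAssume(IntrinsicInst *Assume,
                                    ArrayRef<IntrinsicInst *> AllAssumes,
                                    const DominatorTree &DT) {
        Value *Cond = Assume->getArgOperand(0);
        for (IntrinsicInst *Other : AllAssumes)
          if (Other != Assume && Other->getArgOperand(0) == Cond &&
              DT.dominates(Other, Assume))
            return true;
        return false;
      }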
* Remove unnecessary copying or replace it with moves in a bunch of places. NFC.
  Benjamin Kramer, 2014-10-04 (14 files, -69/+73)
  llvm-svn: 219061
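  A hypothetical before/after for the kind of change involved: take the argument by value and move it into place rather than copying.

      #include <string>
      #include <utility>

      struct Symbol {
        std::string Name;
        // Pass by value + std::move: callers handing in a temporary pay for
        // a move instead of a deep copy.
        explicit Symbol(std::string N) : Name(std::move(N)) {}
      };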
* Sink DwarfDebug::updateSubprogramScopeDIE into DwarfCompileUnit.
  David Blaikie, 2014-10-04 (4 files, -33/+41)
  This requires exposing some of the current function state from DwarfDebug. I hope there's not too much of that to expose as I go through all the functions, but it still seems nicer to expose singular data down to multiple consumers than to have consumers expose raw mapping data structures up to DwarfDebug for building subprograms.
  Part of a series of refactorings to allow subprograms in both the skeleton and dwo CUs under Fission.
  llvm-svn: 219060
* Reformatting accidentally left out of r219057.
  David Blaikie, 2014-10-04 (1 file, -1/+2)
  llvm-svn: 219059
* Sink DwarfDebug::attachLowHighPC into DwarfCompileUnit.
  David Blaikie, 2014-10-04 (4 files, -20/+20)
  One of many things to sink down into DwarfCompileUnit to allow handling of subprograms in both the skeleton and dwo CU under Fission.
  llvm-svn: 219058
* Move DwarfCompileUnit from DwarfUnit.h to its own header (DwarfCompileUnit.h).
  David Blaikie, 2014-10-04 (7 files, -300/+352)
  In preparation for sinking all the subprogram emission code down from DwarfDebug into DwarfCompileUnit, this will avoid bloating DwarfUnit.h/cpp greatly and make concerns a bit more clear/isolated. (Sinking this handling down is part of the work to handle emitting minimal subprograms for -gmlt-like data into the skeleton CU under Fission.)
  llvm-svn: 219057
* [x86] Enable the new vector shuffle lowering by default.
  Chandler Carruth, 2014-10-04 (1 file, -1/+1)
  Update the entire regression test suite for the new shuffles. Remove most of the old testing, which was devoted to the old shuffle lowering path and is no longer really relevant. Also remove a few other random tests that only exercised shuffles incidentally or without any interesting aspects to them.
  Benchmarking that I have done shows a few small regressions with this on LNT, zero measurable regressions on real, large applications, and for several benchmarks where the loop vectorizer fires in the hot path it shows 5% to 40% improvements for SSE2 and SSE3 code running on Sandy Bridge machines. Running on AMD machines shows even more dramatic improvements. When using newer ISA vector extensions the gains are much more modest, but the code is still better on the whole. There are a few regressions being tracked (PR21137, PR21138, PR21139) but by and large this is expected to be a win for x86 generated code performance.
  It is also more correct than the code it replaces. I have fuzz tested this extensively with ISA extensions up through AVX2 and found no crashes or miscompiles (yet...). The old lowering had a few miscompiles and crashers after a somewhat smaller amount of fuzz testing.
  There is one significant area where the new code path lags behind and that is in AVX-512 support. However, there was *extremely little* support for that already and so this isn't a significant step backwards and the new framework will probably make it easier to implement lowering that uses the full power of AVX-512's table-based shuffle+blend (IMO).
  Many thanks to Quentin, Andrea, Robert, and others for benchmarking assistance. Thanks to Adam and others for help with AVX-512. Thanks to Hal, Eric, and *many* others for answering my incessant questions about how the backend actually works. =]
  I will leave the old code path in the tree until the 3 PRs above are at least resolved to folks' satisfaction. Then I will rip it (and 1000s of lines of code) out. =] I don't expect this flag to stay around for very long. It may not survive next week.
  llvm-svn: 219046
* Add fake use to suppress defined-but-unused warnings.
  Jingyue Wu, 2014-10-04 (1 file, -0/+1)
  llvm-svn: 219045
* [x86] Fix a bug in the VZEXT DAG combine that I just made more powerful.
  Chandler Carruth, 2014-10-04 (1 file, -3/+23)
  It turns out this combine was always somewhat flawed -- there are cases where nested VZEXT nodes *can't* be combined: if their types have a mismatch that can be observed in the result. While none of these show up currently, once I switch to the new vector shuffle lowering a few test cases actually form such nested VZEXT nodes. I've not come up with any IR pattern that I can sensibly write to exercise this, but it will be covered by tests once I flip the switch.
  llvm-svn: 219044
* [x86] Sink a generic combine of VZEXT nodes from the lowering to VZEXT nodes to the DAG combining of them.
  Chandler Carruth, 2014-10-04 (1 file, -40/+39)
  This will allow the combine to fire on both the old vector shuffle lowering and the new vector shuffle lowering, and generally seems like a cleaner design. I've trimmed down the code a bit and tried to make it and the surrounding combine fairly clean while moving it around.
  llvm-svn: 219042
* R600/SI: Custom lower f64 -> i64 conversions.
  Matt Arsenault, 2014-10-03 (3 files, -3/+57)
  llvm-svn: 219038
* R600: Custom lower [s|u]int_to_fp for i64 -> f64.
  Matt Arsenault, 2014-10-03 (3 files, -2/+46)
  llvm-svn: 219037
* R600/SI: Fix ftrunc f64 conformance failures.
  Matt Arsenault, 2014-10-03 (1 file, -1/+1)
  Re-add the tests since they were deleted at some point.
  llvm-svn: 219036
* [x86] Add a really preposterous number of patterns for matching all of the various ways in which blends can be used to do vector element insertion for lowering with the scalar math instruction forms that effectively re-blend with the high elements after performing the operation.
  Chandler Carruth, 2014-10-03 (2 files, -5/+194)
  This then allows me to bail on the element insertion lowering path when we have SSE4.1 and are going to be doing a normal blend, which in turn restores the last of the blends lost from the new vector shuffle lowering when I got it to prioritize insertion in other cases (for example when we don't *have* a blend instruction).
  Without the patterns, using blends here would have regressed sse-scalar-fp-arith.ll *completely* with the new vector shuffle lowering. For completeness, I've added RUN lines with the new lowering here. This is somewhat superfluous as I'm about to flip the default, but hey, it shows that this actually significantly changed behavior.
  The patterns I've added are just ridiculously repetitive. Suggestions on making them better are very much welcome. In particular, handling the commuted form of the v2f64 patterns is somewhat obnoxious.
  llvm-svn: 219033
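  A hypothetical intrinsics-level picture of the pattern shape (assuming SSE4.1): full-width math followed by a blend that keeps the original high elements, which is exactly what the scalar form (addss) computes in one instruction.

      #include <immintrin.h>

      // Lane 0 gets a+b; lanes 1-3 keep 'a'. The new patterns match this
      // math+blend shape back to the scalar instruction forms.
      __m128 scalar_add_lane0(__m128 a, __m128 b) {
        __m128 sum = _mm_add_ps(a, b);   // math on all lanes
        return _mm_blend_ps(a, sum, 1);  // take only lane 0 from 'sum'
      }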
* Converting the ErrorHandlerMutex to a ManagedStatic to avoid the static constructor and destructor.
  Chris Bieneman, 2014-10-03 (1 file, -4/+5)
  llvm-svn: 219028
* [x86] Adjust the patterns for lowering X86vzmovl nodes which don't perform a load to use blendps rather than movss when it is available.
  Chandler Carruth, 2014-10-03 (2 files, -47/+54)
  For non-loads, blendps is *much* faster. It can execute on two ports in Sandy Bridge and Ivy Bridge, and *three* ports on Haswell. This fixes one of the "regressions" from aggressively taking the "insertion" path in the new vector shuffle lowering.
  This does highlight one problem with blendps -- it isn't commuted as heavily as it should be. That's future work though.
  llvm-svn: 219022
* PR21145: Teach LLVM about C++14 sized deallocation functions.
  Richard Smith, 2014-10-03 (2 files, -1/+9)
  C++14 adds new builtin signatures for 'operator delete'. This change allows new/delete pairs to be removed in C++14 onwards, as they were in C++11 and before.
  llvm-svn: 219014
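  For reference, the C++14 sized deallocation signatures in question (shown here as declarations per the standard):

      #include <cstddef>

      // Recognizing these lets matching new/delete pairs be optimized away
      // in C++14 just as the unsized forms were before.
      void operator delete(void *p, std::size_t size) noexcept;
      void operator delete[](void *p, std::size_t size) noexcept;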
* Revert "Revert "DI: Fold constant arguments into a single MDString""Duncan P. N. Exon Smith2014-10-037-616/+542
| | | | | | | | | | | | | | | | | | | | | | This reverts commit r218918, effectively reapplying r218914 after fixing an Ocaml bindings test and an Asan crash. The root cause of the latter was a tightened-up check in `DILexicalBlock::Verify()`, so I'll file a PR to investigate who requires the loose check (and why). Original commit message follows. -- This patch addresses the first stage of PR17891 by folding constant arguments together into a single MDString. Integers are stringified and a `\0` character is used as a separator. Part of PR17891. Note: I've attached my testcases upgrade scripts to the PR. If I've just broken your out-of-tree testcases, they might help. llvm-svn: 219010
* [ISel] Keep matching state consistent when folding during X86 address match.
  Adam Nemet, 2014-10-03 (2 files, -0/+51)
  In the X86 backend, matching an address is initiated by the 'addr' complex pattern and its friends. During this process we may reassociate and-of-shift into shift-of-and (FoldMaskedShiftToScaledMask) to allow folding of the shift into the scale of the address.
  However, as demonstrated by the testcase, this can trigger CSE of not only the shift and the AND, which the code is prepared for, but also the underlying load node. In the testcase this node is sitting in the RecordedNode and MatchScope data structures of the matcher and becomes a deleted node upon CSE. Returning from the complex pattern function, we try to access it again, hitting an assert because the node is no longer a load even though this was checked before.
  Now obviously changing the DAG this late is bending the rules, but I think it makes sense somewhat. Outside of addresses we prefer and-of-shift because it may lead to smaller immediates (FoldMaskAndShiftToScale is an even better example because it creates a non-canonical node). We currently don't recognize addresses during DAGCombiner, where arguably this canonicalization should be performed. On the other hand, having this in the matcher allows us to cover all the cases where an address can be used in an instruction.
  I've also talked a little bit to Dan Gohman on llvm-dev, who added the RAUW for the new shift node in FoldMaskedShiftToScaledMask. This RAUW is responsible for initiating the recursive CSE on users (http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-September/076903.html) but it is not strictly necessary since the shift is hooked into the visited user. Of course it's safer to keep the DAG consistent at all times (e.g. for an accurate number of uses, etc.).
  So rather than changing the fundamentals, I've decided to continue along the previous patches and detect the CSE. This patch installs a very targeted DAGUpdateListener for the duration of a complex-pattern match and updates the matching state accordingly. (Previous patches used HandleSDNode to detect the CSE but that's not practical here.) The listener is only installed on X86. I tested that there is no measurable overhead due to this while running through the spec2k BC files with llc. The only thing we pay for is the creation of the listener. The callback never ever triggers in spec2k since this is a corner case.
  Fixes rdar://problem/18206171
  llvm-svn: 219009
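  A hypothetical arithmetic-level view of the reassociation (FoldMaskedShiftToScaledMask) that sets off the CSE described above:

      #include <cassert>
      #include <cstdint>

      // (and (shl x, 2), 0xFC) is rewritten to (shl (and x, 0x3F), 2) so
      // the shift can fold into the address scale: base + 4*(x & 0x3F).
      int main() {
        for (uint64_t x = 0; x < 1024; ++x)
          assert(((x << 2) & 0xFC) == ((x & 0x3F) << 2));
      }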
* R600: Align functions to 256 bytes.
  Tom Stellard, 2014-10-03 (2 files, -3/+12)
  llvm-svn: 219002
* Eliminate some deep std::vector copies. NFC.
  Benjamin Kramer, 2014-10-03 (13 files, -45/+21)
  llvm-svn: 218999
* MCParser: Modernize memory handling. NFC.
  Benjamin Kramer, 2014-10-03 (1 file, -37/+22)
  llvm-svn: 218998
* llvm-readobj: print out the fields of the COFF delay-import table.
  Rui Ueyama, 2014-10-03 (1 file, -0/+6)
  llvm-svn: 218996
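  For context, a sketch of one delay-load directory entry whose fields get printed (layout per the PE/COFF delay-load descriptor; the field names here are illustrative, not necessarily llvm-readobj's):

      #include <cstdint>

      struct DelayImportDescriptor {
        uint32_t Attributes;              // flags (e.g. whether fields are RVAs)
        uint32_t Name;                    // RVA of the DLL name string
        uint32_t ModuleHandle;            // RVA of the module handle slot
        uint32_t DelayImportAddressTable; // RVA of the delay-load IAT
        uint32_t DelayImportNameTable;    // RVA of the delay-load INT
        uint32_t BoundDelayImportTable;   // RVA of the bound IAT (optional)
        uint32_t UnloadDelayImportTable;  // RVA of the unload IAT (optional)
        uint32_t TimeStamp;               // timestamp of the bound DLL
      };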
* [Power] Use lwsync for non-seq_cst fences.
  Robin Morisset, 2014-10-03 (1 file, -1/+8)
  Summary: hwsync is only required for seq_cst fences; acquire and release ones can use the cheaper lwsync.
  Test Plan: Added some cases to atomics.ll + make check-all
  Reviewers: jfb, wschmidt
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D5317
  llvm-svn: 218995
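  A hypothetical C++ view of the distinction; the comments note the expected Power lowering after this change:

      #include <atomic>

      void fences() {
        std::atomic_thread_fence(std::memory_order_acquire); // lwsync (was hwsync)
        std::atomic_thread_fence(std::memory_order_release); // lwsync (was hwsync)
        std::atomic_thread_fence(std::memory_order_seq_cst); // still hwsync
      }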
* MipsAsmParser.cpp: fix VS2012 build.
  Hans Wennborg, 2014-10-03 (1 file, -1/+1)
  llvm-svn: 218991
* HexagonMCCodeEmitter.h: deleted member functions are not supported in VS2012.
  Hans Wennborg, 2014-10-03 (1 file, -2/+2)
  llvm-svn: 218990
* [mips] Print warning when using register names not available in N32/64.
  Daniel Sanders, 2014-10-03 (2 files, -0/+34)
  Summary: The register names t4-t7 are not available in the N32 and N64 ABIs. This patch prints a warning when those names are used in N32/64, along with a fix-it suggesting the correct register names.
  Patch by Vasileios Kalintiris
  Reviewers: dsanders
  Reviewed By: dsanders
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D5272
  llvm-svn: 218989
* Fix build break on Hexagon.
  Sid Manning, 2014-10-03 (1 file, -1/+2)
  Differential Revision: http://reviews.llvm.org/D5600
  llvm-svn: 218987
* Adding skeleton for unit testing Hexagon Code Emission.
  Sid Manning, 2014-10-03 (6 files, -8/+173)
  Adding and modifying CMakeLists.txt files to run unit tests under unittests/Target/* if the directory exists. Adding a basic unit test to check that the code emitter object can be retrieved.
  Differential Revision: http://reviews.llvm.org/D5523
  Change by: Colin LeMahieu
  llvm-svn: 218986
* [x86] Teach the new vector shuffle lowering to aggressively form MOVSS and MOVSD nodes for single-element vector inserts.
  Chandler Carruth, 2014-10-03 (2 files, -5/+37)
  This is particularly important because a number of patterns in the backend detect these patterns and leverage them to simplify things. It also fixes quite a few of the bad insertion code examples.
  However, it regresses a specific area: when available, blendps and blendpd are *dramatically* faster than movss and movsd respectively. But it doesn't really work to form the blend logic first because the blends *aren't* as crazy efficient when the data is coming from memory anyway, and thus will have a movss or movsd regardless. Also, doing that would block a bunch of the patterns that this is designed to hit.
  So my plan is to go into the patterns for lowering MOVSS and MOVSD and lower them via blends when available. However, that's a pretty invasive restructuring so it will need to be a follow-up patch.
  I have already gone into the patterns to lower MOVSS and MOVSD from memory using MOVLPD, etc. Without that, several of the test cases I already have would regress.
  llvm-svn: 218985