path: root/llvm/lib
* MCTargetOptions reside on the TargetMachine that we always have via TargetOptions. (Eric Christopher, 2015-02-19; 1 file, -5/+2)
  llvm-svn: 229917
* Remove a call to TargetMachine::getSubtarget from the inline asm support in the asm printer. (Eric Christopher, 2015-02-19; 1 file, -1/+11)
  If we can get a subtarget from the machine function then we should do so; otherwise we can go ahead and create a default one since we're at the module level. llvm-svn: 229916
* [Hexagon] Moving remaining methods off of HexagonMCInst into HexagonMCInstrInfo and eliminating the HexagonMCInst class. (Colin LeMahieu, 2015-02-19; 13 files, -173/+113)
  llvm-svn: 229914
* MC: Allow multiple comma-separated expressions on the .uleb128 directive. (Benjamin Kramer, 2015-02-19; 1 file, -9/+15)
  For compatibility with GNU as. Binutils documents this as ".uleb128 expressions". Subtle, isn't it? llvm-svn: 229911
* SSAUpdater: Use range-based for. NFC. (Benjamin Kramer, 2015-02-19; 1 file, -24/+17)
  llvm-svn: 229908
* Remove unused argument from emitInlineAsmStart. (Eric Christopher, 2015-02-19; 3 files, -6/+5)
  llvm-svn: 229907
* [objc-arc] Convert the bodies of ARCInstKind predicates into covered switches. (Michael Gottesman, 2015-02-19; 2 files, -58/+323)
  This is much better than the previous manner of just using short-circuiting booleans, from:
  1. A "naive" efficiency perspective: we do not have to rely on the compiler to change the short-circuiting boolean operations into a switch.
  2. An understanding perspective, by making the implicit behavior of negative predicates explicit.
  3. A maintainability perspective, through the covered-switch flag making it easy to know where to update code when adding new ARCInstKinds.
  llvm-svn: 229906
* [objc-arc] Change the InstructionClass to be an enum class called ARCInstKind. (Michael Gottesman, 2015-02-19; 12 files, -588/+648)
  I also renamed ObjCARCUtil.cpp -> ARCInstKind.cpp. That file only contained items related to ARCInstKind anyway. llvm-svn: 229905
* Checking if TARGET_OS_IPHONE is defined isn't good enough for 10.7 and earlier. (Chris Bieneman, 2015-02-19; 1 file, -2/+10)
  Older versions of the TargetConditionals header always defined TARGET_OS_IPHONE to something (0 or 1), so we need to test not only that it is defined but also that it is 1. This resolves PR22631. llvm-svn: 229904
* [Hexagon] Moving more functions off of HexagonMCInst and into HexagonMCInstrInfo. (Colin LeMahieu, 2015-02-19; 4 files, -174/+191)
  llvm-svn: 229903
* [LoopAccesses] Change LAA::getInfo to return a constant reference. (Adam Nemet, 2015-02-19; 2 files, -6/+7)
  As expected, this required a few more const-correctness fixes. Based on Hal's feedback on D7684. llvm-svn: 229899
* [LoopAccesses] Add -analyze support. (Adam Nemet, 2015-02-19; 1 file, -0/+51)
  The LoopInfo in combination with depth_first is used to enumerate the loops. Right now -analyze is not yet complete: it only prints the result of the analysis, the report, and the run-time checks. Printing the unsafe dependences will require a bit more reshuffling, which I'd like to do in a follow-on to this patchset. Unsafe dependences are currently checked via -debug-only=loop-accesses in the new test.
  This is part of the patchset that converts LoopAccessAnalysis into an actual analysis pass. llvm-svn: 229898
* [LoopAccesses] Split out LoopAccessReport from VectorizerReport. (Adam Nemet, 2015-02-19; 2 files, -25/+44)
  The only difference between these two is that VectorizerReport adds a vectorizer-specific prefix to its messages. When LAA is used in the vectorizer context, the prefix is added when we promote the LoopAccessReport into a VectorizerReport via one of the constructors.
  This is part of the patchset that converts LoopAccessAnalysis into an actual analysis pass. llvm-svn: 229897
* [LoopAccesses] Add missing const to APIs in VectorizationReport. (Adam Nemet, 2015-02-19; 2 files, -4/+4)
  When I split out LoopAccessReport from this, I needed to create some temporaries, so constness becomes necessary.
  This is part of the patchset that converts LoopAccessAnalysis into an actual analysis pass. llvm-svn: 229896
* [LoopAccesses] Add canAnalyzeLoop. (Adam Nemet, 2015-02-19; 1 file, -1/+51)
  This allows the analysis to be attempted with any loop. This feature will be used with -analyze. (LV only requests the analysis on loops that have already satisfied these tests.)
  This is part of the patchset that converts LoopAccessAnalysis into an actual analysis pass. llvm-svn: 229895
* [LoopAccesses] Change debug messages from LV to LAA. (Adam Nemet, 2015-02-19; 2 files, -40/+41)
  Also add the pass name as an argument to VectorizationReport::emitAnalysis.
  This is part of the patchset that converts LoopAccessAnalysis into an actual analysis pass. llvm-svn: 229894
* [LoopAccesses] Create the analysis pass. (Adam Nemet, 2015-02-19; 3 files, -17/+87)
  This is a function pass that runs the analysis on demand. The analysis can be initiated by querying the loop access info via LAA::getInfo. It either returns the cached info or runs the analysis.
  Symbolic stride information continues to reside outside of this analysis pass. We may move it inside later, but it's not a priority for me right now. The idea is that Loop Distribution won't support run-time stride checking, at least initially. This means that when querying the analysis, symbolic stride information can be provided optionally. Whether stride information is used can invalidate the cache entry and rerun the analysis. Note that if the loop does not have any symbolic stride, the entry should be preserved across Loop Distribution and LV. Since currently the only user of the pass is LV, I just check that the symbolic stride information didn't change when using a cached result.
  On the LV side, LoopVectorizationLegality requests the info object corresponding to the loop from the analysis pass. A large chunk of the diff is due to LAI becoming a pointer from a reference.
  A test will be added as part of the -analyze patch. Also tested that with AVX, we generate identical assembly output for the testsuite (including the external testsuite) before and after.
  This is part of the patchset that converts LoopAccessAnalysis into an actual analysis pass. llvm-svn: 229893
* [LoopAccesses] Cache the result of canVectorizeMemory. (Adam Nemet, 2015-02-19; 2 files, -15/+22)
  LAA will be an on-demand analysis pass, so we need to cache the result of the analysis. canVectorizeMemory is renamed to analyzeLoop, which computes the result; canVectorizeMemory becomes the query function for the cached result.
  This is part of the patchset that converts LoopAccessAnalysis into an actual analysis pass. llvm-svn: 229892
* [LoopAccesses] Stash the report from the analysis rather than emitting it. (Adam Nemet, 2015-02-19; 2 files, -3/+8)
  The transformation passes will query this and then emit it as part of their own report. The currently only user, LV, is modified to do just that.
  This is part of the patchset that converts LoopAccessAnalysis into an actual analysis pass. llvm-svn: 229891
* [LoopAccesses] Make VectorizerParams global + fix for cyclic dep. (Adam Nemet, 2015-02-19; 2 files, -40/+47)
  As LAA is becoming a pass, we can no longer pass the params to its constructor. This changes the command line flags to have external storage. These can now be accessed both from LV and LAA. VectorizerParams is moved out of LoopAccessInfo in order to shorten the code to access it.
  This commit also has the fix (D7731) to break the dependence cycle between the analysis and vector libraries.
  This is part of the patchset that converts LoopAccessAnalysis into an actual analysis pass. llvm-svn: 229890
* Revert "Reformat." (Adam Nemet, 2015-02-19; 2 files, -63/+73)
  This reverts commit r229651. I'd like to ultimately revert r229650, but this reformat stands in the way. I'll reformat the affected files once the loop-access pass is fully committed. llvm-svn: 229889
* [Hexagon] Creating HexagonMCInstrInfo namespace as landing zone for static functions detached from HexagonMCInst. (Colin LeMahieu, 2015-02-19; 7 files, -40/+87)
  llvm-svn: 229885
* Update and remove a few calls to TargetMachine::getSubtargetImpl out of the asm printer. (Eric Christopher, 2015-02-19; 1 file, -9/+12)
  llvm-svn: 229883
* [fuzzer] Split main() into FuzzerDriver(), which takes a callback as a parameter, and a tiny main() in a separate file. (Kostya Serebryany, 2015-02-19; 6 files, -183/+235)
  llvm-svn: 229882
* Assume the original file is created before release in LockFileManager. (Ben Langmuir, 2015-02-19; 1 file, -39/+9)
  This is true in clang, and lets us remove the problematic code that waits around for the original file and then times out if it doesn't get created in short order. This caused any 'dead' lock file or legitimate time out to cause a cascade of timeouts in any processes waiting on the same lock (even if they only just showed up). llvm-svn: 229881
* [fuzzer] Properly annotate fallthrough; add one more entry to the FAQ. (Kostya Serebryany, 2015-02-19; 2 files, -1/+7)
  llvm-svn: 229880
* [Hexagon] Removing static variable holding MCInstrInfo. (Colin LeMahieu, 2015-02-19; 4 files, -8/+7)
  llvm-svn: 229872
* LSR: Move set instead of copying. NFC. (Benjamin Kramer, 2015-02-19; 1 file, -4/+2)
  llvm-svn: 229871
* Avoid conversion to float when creating ConstantDataArray/ConstantDataVector. (Rafael Espindola, 2015-02-19; 1 file, -19/+72)
  Patch by Raoux, Thomas F! llvm-svn: 229864
* Demote vectors to arrays. No functionality change. (Benjamin Kramer, 2015-02-19; 9 files, -151/+77)
  llvm-svn: 229861
* [x86] Delete still more piles of complex code now that we have a good systematic lowering of v8i16. (Chandler Carruth, 2015-02-19; 1 file, -74/+9)
  This required a slight strategy shift to prefer unpack lowerings in more places. While this isn't a cut-and-dried win in every case, it is in the overwhelming majority. There are only a few places where the old lowering would probably be a touch faster, and then only by a small margin. In some cases, this is yet another significant improvement. llvm-svn: 229859
* [x86] Teach the unpack lowering how to lower with an initial unpack in addition to lowering to trees rooted in an unpack. (Chandler Carruth, 2015-02-19; 1 file, -1/+36)
  This saves shuffles and/or registers in many various ways, lets us handle another class of v4i32 shuffles pre-SSE4.1 without domain crosses, etc. llvm-svn: 229856
* [x86] Dramatically improve v8i16 shuffle lowering by not using its terribly complex partial blend logic. (Chandler Carruth, 2015-02-19; 1 file, -119/+0)
  This code path was one of the more complex and bug-prone when it first went in, and it hasn't fared much better since. Ultimately, with the simpler basis for unpack lowering and support for bit-math blending, it is completely obsolete. In the worst case without it we generate different but equivalent instructions. However, in many cases we generate much better code. This is especially true when blends or pshufb are available.
  This does expose one (minor) weakness of the unpack lowering that I'll try to address. In case you were wondering, this is actually a big part of what I've been trying to pull off in the recent string of commits. llvm-svn: 229853
* [x86] Remove the final fallback in the v8i16 lowering that isn't really needed, and significantly improve the SSSE3 path. (Chandler Carruth, 2015-02-19; 1 file, -57/+72)
  This makes the new strategy much more clear. If we can blend, we just go with that. If we can't blend, we try to permute into an unpack so that we handle cases where the unpack doing the blend also simplifies the shuffle. If that fails and we've got SSSE3, we now call into factored-out pshufb lowering code so that we leverage the fact that pshufb can set up a blend for us while shuffling. This generates great code, especially because we *know* we don't have a fast blend at this point. Finally, we fall back on decomposing into permutes and blends, because we do at least have a bit-math-based blend if we need to use that.
  This pretty significantly improves some of the v8i16 code paths. We never need to form pshufb for the single-input shuffles because we have effective target-specific combines to form it there, but we were missing its effectiveness in the blends. llvm-svn: 229851
* [x86] Simplify the pre-SSSE3 v16i8 lowering significantly by decomposing these shuffles into permutes and a blend with the generic decomposition logic. (Chandler Carruth, 2015-02-19; 1 file, -75/+71)
  This works really well in almost every case and lets the code only manage the expansion of a single input into two v8i16 vectors to perform the actual shuffle. The blend-based merging is often much nicer than the pack-based merging that this replaces. The only place where it isn't is when we end up blending between two packs where we could do a single pack. To handle that case, just teach the v2i64 lowering to handle these blends by digging out the operands.
  With this we're down to only really random permutations that cause an explosion of instructions. llvm-svn: 229849
* [x86] Remove the insanely over-aggressive unpack lowering strategy for v16i8 shuffles, and replace it with new facilities. (Chandler Carruth, 2015-02-19; 1 file, -38/+30)
  This uses precise patterns to match exact unpacks, and the new generalized unpack lowering only when we detect a case where we will have to shuffle both inputs anyway and they terminate in exactly a blend. This fixes all of the blend horrors that I uncovered by always lowering blends through the vector shuffle lowering. It also removes *sooooo* much of the crazy instruction sequences required for v16i8 lowering previously. Much cleaner now.
  The only "meh" aspect is that we sometimes use pshufb+pshufb+unpck when it would be marginally nicer to use pshufb+pshufb+por. However, the difference there is *tiny*. In many cases it's a win because we re-use the pshufb mask; in others, we get to avoid the pshufb entirely. I've left a FIXME, but I'm dubious we can really do better than this. I'm actually pretty happy with this lowering now.
  For SSE2 this exposes some horrors that were really already there. Those will have to be fixed by changing a different path through the v16i8 lowering. llvm-svn: 229846
* [mips][microMIPS] Enable usage of AND16, OR16 and XOR16 by the code generator. (Jozef Kolek, 2015-02-19; 1 file, -0/+2)
  Differential Revision: http://reviews.llvm.org/D7611 llvm-svn: 229845
* [x86] The SELECT x86 DAG combine also does legalization. It used to rely on things not being marked as either custom or legal, but we now do custom lowering of more VSELECT nodes. (Chandler Carruth, 2015-02-19; 1 file, -6/+6)
  To cope with this, manually replicate the legality tests here. These have to stay in sync with the set of tests used in the custom lowering of VSELECT. Ideally, we wouldn't do any of this combine-based legalization when we have an actual custom legalization step for VSELECT, but I'm not going to be able to rewrite all of that today.
  I don't have a test case for this currently, but it was found when compiling a number of the test-suite benchmarks. I'll try to reduce a test case and add it. This should at least fix the test-suite fallout on build bots. llvm-svn: 229844
* Reverting r229831 due to multiple ARM/PPC/MIPS build-bot failures. (Michael Kuperstein, 2015-02-19; 25 files, -267/+244)
  llvm-svn: 229841
* Implement invoke statepoint verification. (Igor Laevsky, 2015-02-19; 1 file, -9/+49)
  Differential Revision: http://reviews.llvm.org/D7366 llvm-svn: 229840
* Add invoke-related functionality into the StatepointSite classes. (Igor Laevsky, 2015-02-19; 1 file, -4/+19)
  Differential Revision: http://reviews.llvm.org/D7364 llvm-svn: 229838
* AVX-512: Full implementation for VRNDSCALESS/SD instructions and intrinsics. (Elena Demikhovsky, 2015-02-19; 5 files, -48/+85)
  llvm-svn: 229837
* [x86] Add support for bit-wise blending and use it in the v8 and v16 lowering paths. (Chandler Carruth, 2015-02-19; 1 file, -1/+39)
  I'm going to be leveraging this to simplify a lot of the overly complex lowering of v8 and v16 shuffles in pre-SSSE3 modes. Sadly, this isn't profitable on v4i32 and v2i64. There, the float and double blending instructions for pre-SSE4.1 are actually pretty good, and we can't beat them with bit math. And once SSE4.1 comes around we have direct blending support, and this ceases to be relevant.
  Also, some of the test cases look odd because the domain fixer canonicalizes these to the floating point domain. That's OK; it'll use the integer domain when it matters, and some day I may be able to update enough of LLVM to canonicalize the other way.
  This restores almost all of the regressions from teaching x86's vselect lowering to always use vector shuffle lowering for blends. The remaining problems are because the v16 lowering path is still doing crazy things. I'll be re-arranging that strategy in more detail in subsequent commits to finish recovering the performance here. llvm-svn: 229836
* [x86,sdag] Two interrelated changes to the x86 and sdag code. (Chandler Carruth, 2015-02-19; 2 files, -50/+35)
  First, don't combine bit masking into vector shuffles (even ones the target can handle) once operation legalization has taken place. Custom legalization of vector shuffles may exist for these patterns (making the predicate return true), but that custom legalization may in some cases produce the exact bit math this matches. We only really want to handle this prior to operation legalization.
  However, the x86 backend, in a fit of awesome, relied on this. What it would do is mark VSELECTs as expand, which would turn them into arithmetic, which this would then match back into vector shuffles, which we would then lower properly. Amazing.
  Instead, the second change is to teach the x86 backend to directly form vector shuffles from VSELECT nodes with constant conditions, and to mark all of the vector types we support lowering blends as shuffles as custom VSELECT lowering. We still mark the forms which actually support variable blends as *legal* so that the custom lowering is bypassed, and the legal lowering can even be used by the vector shuffle legalization (yes, I know, this is confusing, but that's how the patterns are written). This makes the VSELECT lowering much more sensible, and in fact should fix a bunch of bugs with it.
  However, as you'll see in the test cases, right now what it does is point out the *hilarious* deficiency of the new vector shuffle lowering when it comes to blends. Fortunately, my very next patch fixes that. I can't submit it yet, because that patch, somewhat obviously, forms the exact and/or pattern that the DAG combine is matching here! Without this patch, teaching the vector shuffle lowering to produce the right code infloops in the DAG combiner. With this patch alone, we produce terrible code but at least lower through the right paths. With both patches, all the regressions here should be fixed, and a bunch of the improvements (like using 2 shufps with no memory loads instead of 2 andps with memory loads and an orps) will stay. Win!
  There is one other change worth noting here. We had hilariously wrong vectorization cost estimates for vselect because we fell through to the code path that assumed all "expand" vector operations are scalarized. However, the "expand" lowering of VSELECT is vector bit math, most definitely not scalarized. So now we go back to the correct, if horribly naive, cost of "1" for "not scalarized". If anyone wants to add actual modeling of shuffle costs, that would be cool, but this seems an improvement on its own. Note the removal of 16 and 32 "costs" for doing a blend. Even in SSE2 we can blend in fewer than 16 instructions. ;] Of course, we don't right now because of OMG bad code, but I'm going to fix that. Next patch. I promise. llvm-svn: 229835
* Use std::bitset for SubtargetFeatures. (Michael Kuperstein, 2015-02-19; 25 files, -244/+267)
  Previously, subtarget features were a bitfield with the underlying type being uint64_t. Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, switch the features to use a bitset. No functional change.
  Differential Revision: http://reviews.llvm.org/D7065 llvm-svn: 229831
* [Support/Timer] Make GetMallocUsage() aware of jemalloc. (Davide Italiano, 2015-02-19; 1 file, -0/+10)
  Differential Revision: D7657. Reviewed by: shankarke, majnemer. llvm-svn: 229824
* Provide the same ABI regardless of NDEBUG. (Dmitri Gribenko, 2015-02-19; 2 files, -30/+38)
  For projects depending on LLVM, I find it very useful to combine a release-no-asserts build of LLVM with a debug+asserts build of the dependent project. The motivation is that when developing a dependent project, you are debugging that project itself, not LLVM. In my use case, a significant part of the runtime is spent in LLVM optimization passes, so I would like to build LLVM without assertions to get the best performance from this combination.
  Currently, `lib/Support/Debug.cpp` changes the set of symbols it provides depending on NDEBUG, while `include/llvm/Support/Debug.h` requires extra symbols when NDEBUG is not defined. Thus, it is not possible to enable assertions in an external project that uses the facilities of `Debug.h`. This patch changes `Debug.cpp` and `Valgrind.cpp` to always define the symbols that other code may depend on when #including LLVM headers without NDEBUG.
  http://reviews.llvm.org/D7662 llvm-svn: 229819
* Remove the local subtarget variable from the SystemZ asm printer and update the two calls accordingly. (Eric Christopher, 2015-02-19; 2 files, -8/+3)
  llvm-svn: 229805
* Remove a few more calls to TargetMachine::getSubtarget from the R600 port. (Eric Christopher, 2015-02-19; 2 files, -4/+4)
  llvm-svn: 229804
* Grab the subtarget off of the machine function for the R600 asm printer and clean up a bunch of uses. (Eric Christopher, 2015-02-19; 2 files, -15/+14)
  llvm-svn: 229803