path: root/llvm/lib
...
* [ADT] Rewrite the StringRef::find implementation to be simpler, clearer, and tremendously less reliant on the optimizer to fix things (Chandler Carruth, 2015-09-10; 1 file, -16/+23)

  The code is always necessarily looking for the entire length of the string when doing the equality tests in this find implementation, but it previously was needlessly re-checking the size each time, among other annoyances. By writing this so simply and directly in terms of memcmp, it is also about 8x faster in a debug build, which in turn makes FileCheck about 2x faster in 'ninja check-llvm'. This saves about 8% of the time for FileCheck-heavy parts of the test suite like the x86 backend tests.

  llvm-svn: 247269

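  As a rough sketch of the shape this describes (hypothetical code, not the actual StringRef implementation), the win comes from hoisting the bound computation and doing one memcmp per candidate position:

      #include <cstddef>
      #include <cstring>

      // Hypothetical stand-in for StringRef::find: offset of the first
      // occurrence of [Needle, Needle+N) in [Data, Data+Size), or npos.
      size_t find(const char *Data, size_t Size, const char *Needle, size_t N) {
        if (N == 0) return 0;
        if (N > Size) return (size_t)-1;
        size_t Last = Size - N;      // bound computed once, not per iteration
        for (size_t i = 0; i <= Last; ++i)
          if (::memcmp(Data + i, Needle, N) == 0)
            return i;
        return (size_t)-1;
      }
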
* [DAGCombine] Truncate BUILD_VECTOR operands if necessary when constant folding vectors (Silviu Baranga, 2015-09-10; 1 file, -11/+25)

  Summary: The BUILD_VECTOR node will truncate its operands to match the type, so we need to take this into account when constant folding: we must perform the truncation before constant folding the elements. This is because the upper bits can change the result, depending on the operation type (for example, this is the case for min/max). This change also adds a regression test.

  Reviewers: jmolloy

  Subscribers: jmolloy, llvm-commits

  Differential Revision: http://reviews.llvm.org/D12697

  llvm-svn: 247265

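  A hedged illustration of the failure mode in plain C++ (not the DAG combiner code): upper bits that disappear on truncation can flip the outcome of a signed max.

      #include <algorithm>
      #include <cstdint>
      #include <cstdio>

      int main() {
        int32_t A = 0x00000001; // 1 as an i8 element
        int32_t B = 0x00000180; // truncates to 0x80 == -128 as an i8 element
        int32_t WideMax = std::max(A, B);                  // folds to B
        int8_t NarrowMax = std::max((int8_t)A, (int8_t)B); // correctly A
        // Folding before truncating gives -128 (assuming two's complement
        // wraparound on the cast); truncating first gives 1.
        std::printf("%d vs %d\n", (int)(int8_t)WideMax, (int)NarrowMax);
        return 0;
      }
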
* Enable GlobalsAA by default (James Molloy, 2015-09-10; 1 file, -1/+1)

  This can give significant improvements to alias analysis in some situations, and improves its testing coverage in all situations.

  llvm-svn: 247264

* Add GlobalsAA as preserved to a bunch of transforms (James Molloy, 2015-09-10; 14 files, -0/+28)

  GlobalsAA must by definition be preserved in function passes, but the pass manager doesn't know that. Make each pass explicitly preserve GlobalsAA.

  llvm-svn: 247263

* [ARM] Do not use vtrn for vectorshuffle if the order is reversed (James Molloy, 2015-09-10; 1 file, -4/+13)

  The tests in isVTRNMask and isVTRN_v_undef_Mask should also check that the elements of the upper and lower half of the vectorshuffle occur in the correct order when both halves are used. Without this test, the code assumes that it is correct to use vector transpose (vtrn) for the masks <1, 1, 0, 0> and <1, 3, 0, 2>, among others, but the transpose actually incorrectly generates shuffles for <0, 0, 1, 1> and <0, 2, 1, 3> in this case.

  Patch by Jeroen Ketema!

  llvm-svn: 247254

* [ADT] Micro-optimize the Triple constructor by doing a single split and re-using the resulting components rather than repeatedly splitting and re-splitting to compute each component as part of the initializer list (Chandler Carruth, 2015-09-10; 1 file, -8/+21)

  This is more work on PR23676. Sadly, it doesn't help much. It removes the constructor from my profile, but doesn't make a sufficient dent in the total time. But it should play together nicely with subsequent changes.

  llvm-svn: 247250

* [ADT] Fix a confusing interface spec and some annoying peculiarities with the StringRef::split method (Chandler Carruth, 2015-09-10; 1 file, -31/+43)

  The problems show up when split is used with a MaxSplit argument other than '-1' (which nobody really does today, but which should actually work). The spec claimed both to split up to MaxSplit times, but also to append <= MaxSplit strings to the vector. One of these doesn't make sense. Given the name "MaxSplit", let's go with it being a max over how many *splits* occur, which means the max on how many strings get appended is MaxSplit+1.

  I'm not actually sure the implementation correctly provided this logic either, as it used a really opaque loop structure. The implementation was also playing weird games with nullptr in the data field to try to rely on a totally opaque hidden property of the split method that returns a pair. Nasty, IMO.

  Replace all of this with what is (IMO) simpler code that doesn't use the pair-returning split method, and instead just finds each separator and appends directly. I think this is a lot easier to read, and it most definitely matches the spec. Added some tests that exercise the corner cases around StringRef() and StringRef("") that all now pass. I'll start using this in code in the next commit.

  llvm-svn: 247249

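  A minimal sketch of those semantics (hypothetical code, not the LLVM implementation): at most MaxSplit splits occur, so at most MaxSplit+1 strings are appended, and the remainder is always appended.

      #include <string>
      #include <vector>

      void split(const std::string &S, char Sep, int MaxSplit,
                 std::vector<std::string> &Out) {
        size_t Start = 0;
        while (MaxSplit-- != 0) {
          size_t Idx = S.find(Sep, Start);
          if (Idx == std::string::npos)
            break;                          // no separator left
          Out.push_back(S.substr(Start, Idx - Start));
          Start = Idx + 1;                  // resume after the separator
        }
        Out.push_back(S.substr(Start));     // remainder is always appended
      }

  For example, splitting "a,b,c" on ',' with MaxSplit == 1 yields {"a", "b,c"}: one split, two strings. MaxSplit == -1 keeps splitting until no separator remains.
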
* GlobalsAAResult(&&): Move every member (NAKAMURA Takumi, 2015-09-10; 1 file, -1/+6)

  Otherwise, one of the MSVC builders failed with unexpected behavior.

  llvm-svn: 247247

* [ADT] Switch a bunch of places in LLVM that were doing single-character splits to actually use the single-character split routine, which does less work and in a debug build is *substantially* faster (Chandler Carruth, 2015-09-10; 8 files, -12/+12)

  llvm-svn: 247245

* [ADT] Add a single-character version of the small vector split routine on StringRef (Chandler Carruth, 2015-09-10; 2 files, -1/+21)

  Finding and splitting on a single character is substantially faster than doing it on even a single-character StringRef -- we immediately get to a *very* tuned memchr call this way. Even nicer, we get to this even in a debug build, shaving 18% off the runtime of TripleTest.Normalization, helping PR23676 some more.

  llvm-svn: 247244

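  As a hedged sketch of why the single-character case is cheap (hypothetical code, not the StringRef API): one memchr locates the separator, and the two halves fall out directly.

      #include <cstring>
      #include <string_view>
      #include <utility>

      // Split S at the first occurrence of Sep; if Sep is absent, the
      // second half is empty and the first half is all of S.
      std::pair<std::string_view, std::string_view>
      splitOnce(std::string_view S, char Sep) {
        const void *P = ::memchr(S.data(), Sep, S.size());
        if (!P)
          return {S, {}};
        size_t Idx = (const char *)P - S.data();
        return {S.substr(0, Idx), S.substr(Idx + 1)};
      }
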
* [ScalarEvolution] Fix PR24757 (Sanjoy Das, 2015-09-10; 1 file, -2/+42)

  Summary: PR24757 was caused by some incorrect math in `ScalarEvolution::HowFarToZero` -- the smallest unsigned solution for X in 2^N * A = 2^N * X is not necessarily A.

  Reviewers: atrick, majnemer, meheff

  Subscribers: llvm-commits, sanjoy

  Differential Revision: http://reviews.llvm.org/D12721

  llvm-svn: 247242

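  A concrete instance in 8-bit modular arithmetic (an illustration, not taken from the patch): with N = 6 and A = 9, 2^6 * 9 = 576 = 64 (mod 256) and 2^6 * 1 = 64, so X = 1 also solves the equation; the smallest solution is A mod 2^(bits-N) = 9 mod 4 = 1, not 9.

      #include <cassert>
      #include <cstdint>

      int main() {
        unsigned N = 6, A = 9;
        uint8_t LHS = (uint8_t)((1u << N) * A);       // 576 mod 256 == 64
        uint8_t X = (uint8_t)(A % (1u << (8 - N)));   // 9 mod 4 == 1
        // X solves 2^N * X == 2^N * A in 8 bits, and is smaller than A.
        assert((uint8_t)((1u << N) * X) == LHS && X < A);
        return 0;
      }
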
* [LPM] Simplify this code and fix a compile error for compilers that don't correctly implement the scoping rules of C++11 range-based for loops (Chandler Carruth, 2015-09-10; 1 file, -3/+1)

  This kind of aliasing isn't a good idea anyway (and wasn't really intended).

  llvm-svn: 247241

* [LPM] Use a map from analysis ID to immutable passes in the legacy pass manager (Chandler Carruth, 2015-09-10; 1 file, -18/+24)

  This avoids a slow linear scan of every immutable pass on every attempt to find an analysis pass. It speeds up 'check-llvm' on an unoptimized build for me by 15%, YMMV. It should also help (a tiny bit) other folks who are really bottlenecked on repeated runs of tiny pass pipelines across small IR files.

  llvm-svn: 247240

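  A hedged sketch of the before/after shape (hypothetical types; the real code would use LLVM's DenseMap, std::map here just keeps the sketch self-contained):

      #include <map>
      #include <utility>
      #include <vector>

      struct ImmutablePass {};
      using AnalysisID = const void *;

      // Before: linear scan over every immutable pass on each lookup.
      ImmutablePass *findLinear(
          const std::vector<std::pair<AnalysisID, ImmutablePass *>> &Passes,
          AnalysisID ID) {
        for (const auto &P : Passes)
          if (P.first == ID)
            return P.second;
        return nullptr;
      }

      // After: a single map lookup per query.
      ImmutablePass *findMapped(const std::map<AnalysisID, ImmutablePass *> &M,
                                AnalysisID ID) {
        auto It = M.find(ID);
        return It == M.end() ? nullptr : It->second;
      }
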
* Enable the shrink wrapping optimization for PPC64 (Kit Barton, 2015-09-10; 3 files, -77/+89)

  The changes in this patch are as follows:

  1. Modify the emitPrologue and emitEpilogue methods to work properly when the prologue and epilogue blocks are not the first/last blocks in the function.
  2. Fix a bug in the PPCEarlyReturn optimization caused by an empty entry block in the function.
  3. Override the runShrinkWrap PredicateFtor (defined in TargetMachine) to check whether shrink wrapping should run: shrink wrapping will run on PPC64 (Little Endian and Big Endian) unless -enable-shrink-wrap=false is specified on the command line.

  A new test case, ppc-shrink-wrapping.ll, was created based on the existing shrink wrapping tests for x86, arm, and arm64.

  Phabricator review: http://reviews.llvm.org/D11817

  llvm-svn: 247237

* [AArch64] Match FI+offset in STNP addressing mode (Ahmed Bougacha, 2015-09-10; 2 files, -0/+26)

  First, we need to teach isFrameOffsetLegal about STNP. It already knew about the STP/LDP variants, but those were probably never exercised, because it's only the load/store optimizer that generates STP/LDP, and the only user of the method is frame lowering, which runs earlier.

  The STP/LDP cases were wrong: they didn't take into account the fact that they return two results, not one, so the immediate offset will be the 4th operand, not the 3rd.

  Follow-up to r247234.

  llvm-svn: 247236

* [AArch64] Match base+offset in STNP addressing mode (Ahmed Bougacha, 2015-09-10; 1 file, -0/+16)

  Follow-up to r247231.

  llvm-svn: 247234

* [AArch64] Support selecting STNP (Ahmed Bougacha, 2015-09-10; 3 files, -0/+78)

  We could go through the load/store optimizer and match STNP where we would have matched a nontemporal-annotated STP, but that's not reliable enough, as an opportunistic optimization. Instead, we can guarantee emitting STNP by matching them at ISel. Since there are no single-input nontemporal stores, we have to resort to some high-bits-extracting trickery to generate an STNP from a plain store.

  Also, we need to support another, LDP/STP-specific addressing mode, base + signed scaled 7-bit immediate offset. For now, only match the base. Let's make it smart separately.

  Part of PR24086.

  llvm-svn: 247231

* AMDGPU/SI: Fix more cases of losing exec operands (Matt Arsenault, 2015-09-10; 3 files, -16/+12)

  llvm-svn: 247230

* AMDGPU/SI: Fix creating v_mov_b32s without exec uses (Matt Arsenault, 2015-09-10; 1 file, -2/+14)

  This will be caught by existing tests with a verifier check to be added in a future commit.

  llvm-svn: 247229

* Revert r247216: "Fix Clang-tidy misc-use-override warnings, other minor fixes" (Hans Wennborg, 2015-09-10; 4 files, -58/+58)

  This caused build breakages, e.g. http://lab.llvm.org:8011/builders/clang-x86_64-ubuntu-gdb-75/builds/24926

  llvm-svn: 247226

* [CodeGen] Make x86 nontemporal store patfrags generic. NFC. (Ahmed Bougacha, 2015-09-10; 1 file, -19/+0)

  To be used by other targets.

  llvm-svn: 247225

* [RewriteStatepointsForGC] Minor refactor to use shared implementation [NFC] (Philip Reames, 2015-09-10; 1 file, -8/+1)

  llvm-svn: 247223

* [RewriteStatepointsForGC] Strengthen a confusingly weak assertion [NFC] (Philip Reames, 2015-09-10; 1 file, -3/+3)

  The assertion was weaker than it should have been, and gave the impression we're growing the number of base defining values being considered during the fixed point iteration. That's not true. The tighter form of the assert is useful documentation.

  llvm-svn: 247221

* [RewriteStatepointsForGC] One last bit of naming [NFCI] (Philip Reames, 2015-09-10; 1 file, -7/+7)

  llvm-svn: 247220

* [WinEH] Add codegen support for cleanuppad and cleanupret (Reid Kleckner, 2015-09-10; 9 files, -62/+116)

  All of the complexity is in cleanupret, and it mostly follows the same codepaths as catchret, except it doesn't take a return value in RAX.

  This small example now compiles and executes successfully on win32:

      extern "C" int printf(const char *, ...) noexcept;

      struct Dtor {
        ~Dtor() { printf("~Dtor\n"); }
      };

      void has_cleanup() {
        Dtor o;
        throw 42;
      }

      int main() {
        try {
          has_cleanup();
        } catch (int) {
          printf("caught it\n");
        }
      }

  Don't try to put the cleanup in the same function as the catch, or Bad Things will happen.

  llvm-svn: 247219

* [RewriteStatepointsForGC] Further style/naming fixup [NFCI] (Philip Reames, 2015-09-10; 1 file, -26/+26)

  llvm-svn: 247217

* Fix Clang-tidy misc-use-override warnings, other minor fixes (Hans Wennborg, 2015-09-10; 4 files, -58/+58)

  Patch by Eugene Zelenko!

  Differential Revision: http://reviews.llvm.org/D12740

  llvm-svn: 247216

* [RewriteStatepointsForGC] More naming cleanup [NFCI] (Philip Reames, 2015-09-10; 1 file, -6/+6)

  llvm-svn: 247213

* [RewriteStatepointsForGC] Code cleanup [NFC] (Philip Reames, 2015-09-09; 1 file, -25/+26)

  Factor out common code related to naming values, and fix a small style issue. More to follow in separate changes.

  llvm-svn: 247211

* [RewriteStatepointsForGC] Extend base pointer inference to handle insertelement (Philip Reames, 2015-09-09; 1 file, -58/+61)

  This change simply enhances the existing inference algorithm to handle insertelement instructions by conservatively inserting a new instruction to propagate the vector of associated base pointers. In the process, I'm ripping out the peephole optimizations which mostly helped cover the fact that this hadn't been done. Note that most of the newly inserted nodes will be nearly immediately removed by the post-insertion optimization pass introduced in 246718. Arguably, we should be trying harder to avoid the malloc traffic here, but I'd rather get the code correct first, then worry about compile time.

  Unlike previous extensions of the algorithm to handle more cases, I discovered the existing code was causing miscompiles in some cases. In particular, we had an implicit assumption that the peephole covered *all* insertelement instructions, so if we had a value directly based on an insertelement the peephole didn't cover, we proceeded as if it were a base anyway. Not good. I believe we had the same issue with shufflevector, which is why I adjusted the predicate for them as well.

  Differential Revision: http://reviews.llvm.org/D12583

  llvm-svn: 247210

* [RewriteStatepointsForGC] Make base pointer inference deterministic (Philip Reames, 2015-09-09; 1 file, -44/+35)

  Previously, the base pointer algorithm wasn't deterministic. The core fixed point was (of course), but we were inserting new nodes and optimizing them in an order which was unspecified and variable. We'd somewhat hacked around this for testing by sorting by value name, but that doesn't solve the general determinism problem. Instead, we can use the order of traversal over the def/use graph to give us a single consistent ordering. Today, this is a DFS order, but the exact order doesn't matter provided it's deterministic for a given input. (Q: It is safe to rely on a deterministic order of operands, right?)

  Note that this only fixes the determinism within a single inference step. The inference step is currently invoked many times in a non-deterministic order. That's a future change in the sequence. :)

  Differential Revision: http://reviews.llvm.org/D12640

  llvm-svn: 247208

* LowerBitSets: Fix non-determinism bug (Peter Collingbourne, 2015-09-09; 1 file, -4/+22)

  Visit disjoint sets in a deterministic order based on the maximum BitSetNM index; otherwise the order in which we visit them will depend on pointer comparisons. This was being exposed by MSan.

  llvm-svn: 247201

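  A minimal sketch of the pattern (hypothetical types, not the LowerBitSets code): derive a stable numeric key and sort on it, rather than iterating in an order that depends on pointer values.

      #include <algorithm>
      #include <vector>

      struct DisjointSet {
        unsigned MaxIndex; // maximum index of any member; stable across runs
      };

      void visitDeterministically(std::vector<DisjointSet *> &Sets) {
        std::sort(Sets.begin(), Sets.end(),
                  [](const DisjointSet *A, const DisjointSet *B) {
                    return A->MaxIndex < B->MaxIndex;
                  });
        for (DisjointSet *S : Sets) {
          (void)S; // visit each set in index order...
        }
      }
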
* [SEH] Emit 32-bit SEH tables for the new EH IR (Reid Kleckner, 2015-09-09; 7 files, -98/+279)

  The 32-bit tables don't actually contain PC range data, so emitting them is incredibly simple. The 64-bit tables, on the other hand, use the same table for state numbering as well as label ranges. This makes things more difficult, so it will be implemented later.

  llvm-svn: 247192

* ScalarEvolution assume hanging bugfix (Piotr Padlewski, 2015-09-09; 1 file, -13/+13)

  http://reviews.llvm.org/D12719

  llvm-svn: 247184

* Revert trunc(lshr (sext A), Cst) to ashr A, Cst (David Majnemer, 2015-09-09; 1 file, -20/+0)

  This reverts commit r246997, which introduced a regression (PR24763).

  llvm-svn: 247180

* Revert "AVX512: Implemented encoding and intrinsics for vextracti64x4 ↵Renato Golin2015-09-091-109/+52
| | | | | | | | ,vextracti64x2, vextracti32x8, vextracti32x4, vextractf64x4, vextractf64x2, vextractf32x8, vextractf32x4 Added tests for intrinsics and encoding." This reverts commit r247149, as it was breaking numerous buildbots of varied architectures. llvm-svn: 247177
* Save LaneMask with livein registers (Matthias Braun, 2015-09-09; 22 files, -79/+108)

  With subregister liveness enabled, we can detect the case where only parts of a register are live in; this is expressed as a 32-bit lanemask. The current code only keeps registers in the live-in list and therefore enumerates all subregisters affected by the lanemask. This turned out to be too conservative, as the subregister may also cover additional parts of the lanemask which are not live. Expressing a given lanemask by enumerating a minimum set of subregisters is computationally expensive, so the best solution is to simply change the live-in list to store the lanemasks as well. This will reduce memory usage for targets using subregister liveness and slightly increase it for other targets.

  Differential Revision: http://reviews.llvm.org/D12442

  llvm-svn: 247171

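  A hedged sketch of the data-structure change (hypothetical names, not the MachineBasicBlock API): the live-in entry carries the lane mask, instead of one entry per covered subregister.

      #include <cstdint>
      #include <vector>

      using LaneBitmask = uint32_t; // 32-bit lane mask, as described above

      struct LiveInReg {
        unsigned PhysReg;  // the live-in register
        LaneBitmask Mask;  // which lanes of it are actually live in
      };

      struct BlockLiveIns {
        std::vector<LiveInReg> LiveIns;
        void addLiveIn(unsigned Reg, LaneBitmask Mask = ~LaneBitmask(0)) {
          LiveIns.push_back({Reg, Mask}); // default: all lanes live
        }
      };
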
* VirtRegMap: Improve addMBBLiveIns() using SlotIndex::MBBIndexIterator; NFC (Matthias Braun, 2015-09-09; 1 file, -25/+62)

  Now that we have an explicit iterator over the idx2MBBMap in SlotIndexes, we can use the fact that the segments and the idx2MBBMap are both sorted by SlotIndex position, so we can advance both simultaneously instead of starting from the beginning for each segment. This complicates the code for the subregister case somewhat, but should be more efficient and has the advantage that we get the final lanemask for each block immediately, which will be important for a subsequent change.

  Removes the now unused SlotIndexes::findMBBLiveIns function.

  Differential Revision: http://reviews.llvm.org/D12443

  llvm-svn: 247170

* [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible with the new pass manager, and no longer relying on analysis groups (Chandler Carruth, 2015-09-09; 73 files, -1258/+1230)

  This builds essentially a ground-up new AA infrastructure stack for LLVM. The core ideas are the same that are used throughout the new pass manager: type-erased polymorphism and direct composition. The design is as follows:

  - FunctionAAResults is a type-erasing alias analysis results aggregation interface to walk a single query across a range of results from different alias analyses. Currently this is function-specific as we always assume that aliasing queries are *within* a function.

  - AAResultBase is a CRTP utility providing stub implementations of various parts of the alias analysis result concept, notably in several cases in terms of other more general parts of the interface. This can be used to implement only a narrow part of the interface rather than the entire interface. This isn't really ideal; this logic should be hoisted into FunctionAAResults, as currently it will cause a significant amount of redundant work, but it faithfully models the behavior of the prior infrastructure.

  - All the alias analysis passes are ported to be wrapper passes for the legacy PM and new-style analysis passes for the new PM with a shared result object. In some cases (most notably CFL), this is an extremely naive approach that we should revisit when we can specialize for the new pass manager.

  - BasicAA has been restructured to reflect that it is much more fundamentally a function analysis, because it uses dominator trees and loop info that need to be constructed for each function.

  All of the references to getting alias analysis results have been updated to use the new aggregation interface. All the preservation and other pass management code has been updated accordingly.

  The way the FunctionAAResultsWrapperPass works is to detect the available alias analyses when run, and add them to the results object. This means that we should be able to continue to respect when various passes are added to the pipeline; for example, adding CFL or adding TBAA passes should just cause their results to be available and to get folded into this. The exception to this rule is BasicAA, which really needs to be a function pass due to using dominator trees and loop info. As a consequence, the FunctionAAResultsWrapperPass directly depends on BasicAA and always includes it in the aggregation.

  This has significant implications for preserving analyses. Generally, most passes shouldn't bother preserving FunctionAAResultsWrapperPass, because rebuilding the results just updates the set of known AA passes. The exception to this rule are LoopPass instances, which need to preserve all the function analyses that the loop pass manager will end up needing. This means preserving both BasicAAWrapperPass and the aggregating FunctionAAResultsWrapperPass.

  Now, when preserving an alias analysis, you do so by directly preserving that analysis. This is only necessary for non-immutable-pass-provided alias analyses though, and there are only three of interest: BasicAA, GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is preserved when needed because it (like DominatorTree and LoopInfo) is marked as a CFG-only pass. I've expanded GlobalsAA into the preserved set everywhere we previously were preserving all of AliasAnalysis, and I've added SCEVAA in the intersection of that with where we preserve SCEV itself.

  One significant challenge to all of this is that the CGSCC passes were actually using the alias analysis implementations by taking advantage of a pretty amazing set of loopholes in the old pass manager's analysis management code, which allowed analysis groups to slide through in many cases. Moving away from analysis groups makes this problem much more obvious. To fix it, I've leveraged the flexibility the design of the new PM components provides to just directly construct the relevant alias analyses for the relevant functions in the IPO passes that need them. This is a bit hacky, but should go away with the new pass manager, and is already in many ways cleaner than the prior state.

  Another significant challenge is that various facilities of the old alias analysis infrastructure just don't fit any more. The most significant of these is the alias analysis 'counter' pass. That pass relied on the ability to snoop on AA queries at different points in the analysis group chain. Instead, I'm planning to build printing functionality directly into the aggregation layer. I've not included that in this patch merely to keep it smaller.

  Note that all of this needs a nearly complete rewrite of the AA documentation. I'm planning to do that, but I'd like to make sure the new design settles, and to flesh out a bit more of what it looks like in the new pass manager first.

  Differential Revision: http://reviews.llvm.org/D12080

  llvm-svn: 247167

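  A hedged sketch of the two patterns described (illustrative only, not the actual LLVM interfaces): a CRTP base supplying conservative default stubs, and a type-erasing aggregator that walks one query across every registered result.

      #include <memory>
      #include <utility>
      #include <vector>

      enum class AliasResult { NoAlias, MayAlias, MustAlias };
      struct MemoryLocation { /* pointer + size, elided */ };

      // CRTP base: derived results override only what they can answer.
      template <typename DerivedT> struct AAResultBaseSketch {
        AliasResult alias(const MemoryLocation &, const MemoryLocation &) {
          return AliasResult::MayAlias; // conservative default
        }
      };

      // A trivial result that just inherits the conservative defaults.
      struct NoopAA : AAResultBaseSketch<NoopAA> {};

      // Type-erasing aggregator: no analysis groups, just direct composition.
      class FunctionAAResultsSketch {
        struct Concept {
          virtual ~Concept() = default;
          virtual AliasResult alias(const MemoryLocation &,
                                    const MemoryLocation &) = 0;
        };
        template <typename AAT> struct Model final : Concept {
          AAT Result;
          explicit Model(AAT R) : Result(std::move(R)) {}
          AliasResult alias(const MemoryLocation &A,
                            const MemoryLocation &B) override {
            return Result.alias(A, B);
          }
        };
        std::vector<std::unique_ptr<Concept>> AAs;

      public:
        template <typename AAT> void addAAResult(AAT R) {
          AAs.push_back(std::make_unique<Model<AAT>>(std::move(R)));
        }
        // First definitive answer wins; otherwise stay conservative.
        AliasResult alias(const MemoryLocation &A, const MemoryLocation &B) {
          for (auto &AA : AAs) {
            AliasResult R = AA->alias(A, B);
            if (R != AliasResult::MayAlias)
              return R;
          }
          return AliasResult::MayAlias;
        }
      };
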
* MachineVerifier: Check that SlotIndex MBBIndexList is sorted (Matthias Braun, 2015-09-09; 1 file, -0/+17)

  This introduces a check that the MBBIndexList is sorted, as proposed in http://reviews.llvm.org/D12443 but split up into a separate commit.

  llvm-svn: 247166

* AMDGPU: Extract full 64-bit subregister and use subregs (Matt Arsenault, 2015-09-09; 1 file, -35/+29)

  Instead of extracting both 32-bit components from the 128-bit register. This produces fewer copies and is easier for the copy peephole optimizer to understand and see the actual uses as extracts from a reg_sequence. This avoids needing to handle subregister composing in the PeepholeOptimizer's ValueTracker for this case.

  llvm-svn: 247162

* AMDGPU: Remove unused multiclass argument (Matt Arsenault, 2015-09-09; 1 file, -5/+4)

  llvm-svn: 247161

* [WebAssembly] Implement calls with void return types (Dan Gohman, 2015-09-09; 4 files, -8/+17)

  llvm-svn: 247158

* AMDGPU/SI: Fold operands through REG_SEQUENCE instructions (Tom Stellard, 2015-09-09; 1 file, -0/+21)

  Summary: This helps mostly when we use add instructions for address calculations that contain immediates.

  Reviewers: arsenm

  Subscribers: arsenm, llvm-commits

  Differential Revision: http://reviews.llvm.org/D12256

  llvm-svn: 247157

* [CostModel][AArch64] Remove amortization factor for some of the vector select instructions (Silviu Baranga, 2015-09-09; 1 file, -4/+5)

  Summary: We are not scalarizing the wide selects in codegen for i16 and i32, and therefore we can remove the amortization factor. We still have issues with i64 vectors in codegen though.

  Reviewers: mcrosier

  Subscribers: mcrosier, aemerson, llvm-commits, rengolin

  Differential Revision: http://reviews.llvm.org/D12724

  llvm-svn: 247156

* don't repeat function names in comments; NFC (Sanjay Patel, 2015-09-09; 4 files, -33/+27)

  llvm-svn: 247154

* [WebAssembly] Tidy up some unneeded newline characters (Dan Gohman, 2015-09-09; 1 file, -10/+9)

  llvm-svn: 247152

* function names start with a lower case letter; NFC (Sanjay Patel, 2015-09-09; 1 file, -54/+54)

  llvm-svn: 247150

* AVX512: Implemented encoding and intrinsics for vextracti64x4, vextracti64x2, vextracti32x8, vextracti32x4, vextractf64x4, vextractf64x2, vextractf32x8, vextractf32x4 (Igor Breger, 2015-09-09; 1 file, -52/+109)

  Added tests for intrinsics and encoding.

  Differential Revision: http://reviews.llvm.org/D11802

  llvm-svn: 247149

* don't repeat function names in comments; NFC (Sanjay Patel, 2015-09-09; 1 file, -35/+32)

  llvm-svn: 247148