path: root/llvm/lib
Commit message | Author | Age | Files | Lines
...
* [x86] Remove some unused instruction format classes. | Craig Topper | 2014-02-26 | 1 | -12/+0
    llvm-svn: 202234
* [x86] Simplify disassembler code slightly. | Craig Topper | 2014-02-26 | 1 | -4/+8
    llvm-svn: 202233
* [SROA] The original refactoring inspired by the addrspace patch in | Chandler Carruth | 2014-02-26 | 1 | -21/+21
    D1764, which in turn set off the other refactorings to make 'getSliceAlign()' a sensible thing.

    There are two possible inputs to the required alignment of a memory transfer intrinsic: the alignment constraints of the source and the destination. If we are *only* introducing a (potentially new) offset onto one side of the transfer, we don't need to consider the alignment constraints of the other side. Use this to simplify the logic feeding into alignment computation for unsplit transfers.

    Also, hoist the clamp of the magical zero alignment for these intrinsics to the more customary one alignment early. This lets several other conditions melt away.

    No functionality changed. There is a further improvement this exposes which *will* change functionality, but that's arriving in a separate patch.

    llvm-svn: 202232
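    The rule being exploited can be illustrated with a small, self-contained sketch (plain C++, not the actual SROA helper, which works on LLVM types): the alignment that provably holds at a fixed byte offset from an aligned base is capped by the largest power of two dividing that offset, so only the side of the transfer whose pointer is being re-based needs a new bound.

        #include <algorithm>
        #include <cstdint>

        // Sketch only: alignment guaranteed for a pointer `Offset` bytes past a
        // `BaseAlign`-aligned base (BaseAlign assumed to be a power of two).
        uint64_t alignAtOffset(uint64_t BaseAlign, uint64_t Offset) {
          if (Offset == 0)
            return BaseAlign;
          uint64_t OffsetAlign = Offset & (~Offset + 1); // largest power of two dividing Offset
          return std::min(BaseAlign, OffsetAlign);
        }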
* [SROA] Yet another slight refactoring that simplifies an API in the | Chandler Carruth | 2014-02-26 | 1 | -20/+19
    rewriting logic: don't pass custom offsets for the adjusted pointer to the new alloca.

    We always passed NewBeginOffset here. Sometimes we spelled it BeginOffset, but only when they were in fact equal. What's worse, the API is set up so that you can't reasonably call it with anything else -- it assumes that you're passing it an offset relative to the *original* alloca that happens to fall within the new one. That's the whole point of NewBeginOffset: it's the clamped beginning offset.

    No functionality changed.

    llvm-svn: 202231
* [SROA] Simplify the computing of alignment: we only ever need the | Chandler Carruth | 2014-02-26 | 1 | -30/+16
    alignment of the slice being rewritten, not any arbitrary offset.

    Every caller is really just trying to compute the alignment for the whole slice, never for some arbitrary alignment. They are also just passing a type when they have one to see if we can skip an explicit alignment in the IR by using the type's alignment. This makes for a much simpler interface.

    Another refactoring inspired by the addrspace patch for SROA, although only loosely related.

    llvm-svn: 202230
* [SROA] Use NewOffsetBegin in the unsplit case for memset merely for | Chandler Carruth | 2014-02-26 | 1 | -3/+4
    consistency with memcpy rewriting, and fix a latent bug in the alignment management for memset.

    The alignment issue is that getAdjustedAllocaPtr is computing the *relative* offset into the new alloca, but the alignment wasn't being set to the relative offset; it was using the absolute offset, which is into the old alloca.

    I don't think it's possible to write a test case that actually reaches this code where the resulting alignment would be observably different, but the intent was clearly to use the relative offset within the new alloca.

    llvm-svn: 202229
* [SROA] Use the members for New{Begin,End}Offset in the rewrite helpers | Chandler Carruth | 2014-02-26 | 1 | -14/+8
    rather than passing them as arguments.

    While I generally prefer actual arguments, in this case the readability loss is substantial. By using members we avoid repeatedly calculating the offsets, and once we're using members it is useful to ensure that those names *always* refer to the original-alloca-relative new offset for a rewritten slice.

    No functionality changed. Follow-up refactoring, all toward getting the address space patch merged.

    llvm-svn: 202228
* [SROA] Compute the New{Begin,End}Offset values once for each alloca | Chandler Carruth | 2014-02-26 | 1 | -40/+24
    slice being rewritten.

    We had the same code scattered across most of the visits. Instead, compute the new offsets and the slice size once when we start to visit a particular slice, and use the member variables from then on. This reduces quite a bit of code duplication.

    No functionality changed. Refactoring inspired to make it easier to apply the address space patch to SROA.

    llvm-svn: 202227
* Use StringRef in raw_fd_ostream constructor | Ben Langmuir | 2014-02-26 | 1 | -3/+2
    llvm-svn: 202225
* [SROA] Fix PR18615 with some long overdue simplifications to the bounds | Chandler Carruth | 2014-02-26 | 1 | -9/+6
    checking in SROA.

    The primary change is to just rely on uge for checking that the offset is within the allocation size. This removes the explicit checks against isNegative, which were terribly error prone (including the reversed logic that led to PR18615) and prevented us from supporting stack allocations larger than half the address space. OK, so maybe the latter isn't *common*, but it's a silly restriction to have.

    Also, we used to try to support a PHI node which loaded from before the start of the allocation if any of the loaded bytes were within the allocation. This doesn't make any sense; we have never really supported loading or storing *before* the allocation starts. The simplified logic just doesn't care.

    We continue to allow loading past the end of the allocation in part to support cases where there is a PHI and some loads are larger than others and the larger ones reach past the end of the allocation. We could solve this in a different and more conservative way, but I'm still somewhat paranoid about this.

    llvm-svn: 202224
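    Why a single unsigned ("uge"-style) comparison subsumes the old negativity check can be seen in a plain C++ sketch (fixed-width integers standing in for the actual APInt-based code): an offset that would be negative as a signed value wraps to a huge unsigned one and is rejected by the same bounds test.

        #include <cstdint>

        // Sketch: offsets are treated as unsigned, so a "negative" offset becomes a
        // very large value and fails the same check as an offset past the end.
        bool offsetInBounds(uint64_t Offset, uint64_t AllocSize) {
          return Offset < AllocSize; // equivalent to !(Offset uge AllocSize)
        }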
* Remove spurious emacs major mode marker; these should only go on .h files. | Nick Lewycky | 2014-02-26 | 1 | -1/+1
    llvm-svn: 202222
* 80-col. | Eric Christopher | 2014-02-26 | 1 | -1/+2
    llvm-svn: 202221
* Formatting fixups. | Eric Christopher | 2014-02-26 | 1 | -2/+2
    llvm-svn: 202220
* Constify the Optnone checks in IR passes. | Paul Robinson | 2014-02-26 | 2 | -5/+5
    llvm-svn: 202213
* Simplify base64 routine a bit. | Rui Ueyama | 2014-02-25 | 1 | -2/+2
    llvm-svn: 202210
* Add DIUnspecifiedParameter, so we can pretty-print it. | Adrian Prantl | 2014-02-25 | 1 | -0/+6
    This will be used for testcases in CFE.

    llvm-svn: 202207
* Use DataLayout from the module when easily available. | Rafael Espindola | 2014-02-25 | 10 | -15/+32
    Eventually DataLayoutPass should go away, but for now that is the only easy way to get a DataLayout in some APIs. This patch only changes the ones that have easy access to a Module.

    One interesting issue with sometimes using DataLayoutPass and sometimes fetching it from the Module is that we have to make sure they are equivalent. We can get most of the way there by always constructing the pass with a Module. In fact, the pass could be changed to point to an external DataLayout instead of owning one to make this stricter.

    Unfortunately, the C API passes a DataLayout, so it has to be up to the caller to make sure the pass and the module are in sync.

    llvm-svn: 202204
* DwarfDebug: Avoid emitting an empty debug_aranges section when aranges are disabled | David Blaikie | 2014-02-25 | 1 | -1/+2
    llvm-svn: 202201
* Address review comments for r202188. | Adrian Prantl | 2014-02-25 | 3 | -32/+16
    This is refactoring / simplifying code, updating comments and enabling the testcase on non-x86 platforms. No functionality change.

    llvm-svn: 202199
* Fix resetting the DataLayout in a Module. | Rafael Espindola | 2014-02-25 | 2 | -5/+19
    No tool does this currently, but, as with everything else in a module, we should be able to change its DataLayout.

    Most of the fix is in DataLayout to make sure it can be reset properly. The test uses Module::setDataLayout since the fact that we mutate a DataLayout is an implementation detail. The module could hold an OwningPtr<DataLayout> and the DataLayout itself could be immutable.

    Thanks to Philip Reames for pushing me in the right direction.

    llvm-svn: 202198
* [reassociate] Switch two std::sort calls into std::stable_sort calls as | Chandler Carruth | 2014-02-25 | 1 | -2/+2
    their inputs come from std::stable_sort and they are not total orders.

    I'm not a huge fan of this, but the really bad std::stable_sort is right at the beginning of Reassociate. After we commit to stable-sort based consistent respect of source order, the downstream sorts shouldn't undo that unless they have a total order or they are used in an order-insensitive way. Neither appears to be true for these cases. I don't have particularly good test cases, but this jumped out by inspection when looking for output instability in this pass due to changes in the ordering of std::sort.

    llvm-svn: 202196
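    The distinction the commit relies on can be shown with a minimal standalone example (illustrative only, not taken from Reassociate): std::stable_sort keeps equal-keyed elements in their incoming order, while std::sort makes no such promise, so re-sorting by a non-total order can silently discard order established upstream.

        #include <algorithm>
        #include <cstdio>
        #include <utility>
        #include <vector>

        int main() {
          // Sort by .first only; .second records the incoming position.
          std::vector<std::pair<int, int>> V = {{1, 0}, {2, 1}, {1, 2}, {2, 3}};
          auto ByKey = [](const std::pair<int, int> &A, const std::pair<int, int> &B) {
            return A.first < B.first;
          };

          // stable_sort guarantees (1,0)(1,2)(2,1)(2,3); std::sort with the same
          // comparator may produce any permutation of the equal-keyed elements.
          std::stable_sort(V.begin(), V.end(), ByKey);
          for (const auto &P : V)
            std::printf("(%d,%d) ", P.first, P.second);
          std::printf("\n");
          return 0;
        }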
* R600: Don't unconditionally unroll loops with private memory accesses | Tom Stellard | 2014-02-25 | 1 | -3/+7
    This causes the size of the scrypt kernel to explode and eats all the memory on some systems.

    llvm-svn: 202195
* R600/SI: Custom select 64-bit ADD | Tom Stellard | 2014-02-25 | 3 | -30/+48
    llvm-svn: 202194
* [SROA] Add an off-by-default *strict* inbounds check to SROA. I had SROA | Chandler Carruth | 2014-02-25 | 1 | -0/+42
    implemented this way a long time ago and, due to the overwhelming bugs that surfaced, moved to a much more relaxed variant. Richard Smith would like to understand the magnitude of this problem and it seems fairly harmless to keep some flag-controlled logic to get the extremely strict behavior here. I'll remove it if it doesn't prove useful.

    llvm-svn: 202193
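    The customary way to wire up such an off-by-default escape hatch in LLVM is a hidden cl::opt flag; a sketch of the pattern follows (the flag and variable names here are illustrative, not necessarily the spelling used in SROA.cpp).

        #include "llvm/Support/CommandLine.h"
        using namespace llvm;

        // Hidden, off-by-default strictness knob (illustrative name).
        static cl::opt<bool> StrictInboundsCheck(
            "example-sroa-strict-inbounds",
            cl::desc("Reject slices formed from inbounds GEPs that leave the allocation"),
            cl::init(false), cl::Hidden);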
* Account for 128-bit integer operations in PPCCTRLoops | Hal Finkel | 2014-02-25 | 1 | -6/+11
    We need to abort the formation of counter-register-based loops where there are 128-bit integer operations that might become function calls.

    llvm-svn: 202192
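    For context, this is the kind of source construct being guarded against (C++ with the clang/gcc __int128 extension): on most targets a 128-bit divide is lowered to a runtime-library helper (a __divti3-style call) rather than inline code, and a call inside the loop body is incompatible with a counter-register loop.

        // The division below typically becomes a call into the compiler runtime,
        // so a CTR-based loop must not be formed around it.
        __int128 sumOfQuotients(const __int128 *Vals, unsigned N, __int128 D) {
          __int128 Sum = 0;
          for (unsigned I = 0; I != N; ++I)
            Sum += Vals[I] / D;
          return Sum;
        }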
* Store a DataLayout in Module. | Rafael Espindola | 2014-02-25 | 6 | -11/+42
    Now that DataLayout is not a pass, store one in Module.

    Since the C API expects to be able to get a char* to the datalayout description, we have to keep a std::string somewhere. This patch keeps it in Module and also uses it to represent modules without a DataLayout.

    Once DataLayout is mandatory, we should probably move the string to DataLayout itself since it won't be necessary anymore to represent the special case of a module without a DataLayout.

    llvm-svn: 202190
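    A schematic of the arrangement described above (illustrative class, not the real Module API): the module owns the textual description so the C API can hand out a char*, and the empty string stands in for "no DataLayout".

        #include <string>

        // Sketch of the ownership model; names are illustrative.
        class ModuleLike {
          std::string DataLayoutStr; // empty string == module has no DataLayout
        public:
          void setDataLayoutStr(const std::string &Desc) { DataLayoutStr = Desc; }
          const char *getDataLayoutCStr() const { return DataLayoutStr.c_str(); }
          bool hasDataLayout() const { return !DataLayoutStr.empty(); }
        };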
* Debug info: Support variadic functions. | Adrian Prantl | 2014-02-25 | 3 | -28/+50
    Variadic functions have an unspecified parameter tag after the last argument. In IR this is represented as an unspecified parameter in the subroutine type.

    Paired commit with CFE r202185.

    rdar://problem/13690847

    This re-applies r202184 + a bugfix in DwarfDebug's argument handling.

    llvm-svn: 202188
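    For context, the source construct in question: the trailing "..." after the last named parameter is what ends up represented by an extra unspecified-parameter entry at the end of the subroutine type's parameter list (DW_TAG_unspecified_parameters in DWARF terms).

        #include <cstdarg>

        // A variadic function: the "..." is what the debug info marks with an
        // unspecified-parameter entry after the last named argument.
        int sumInts(int Count, ...) {
          va_list Args;
          va_start(Args, Count);
          int Total = 0;
          for (int I = 0; I != Count; ++I)
            Total += va_arg(Args, int);
          va_end(Args);
          return Total;
        }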
* Revert "Debug info: Support variadic functions."Adrian Prantl2014-02-253-37/+21
| | | | | | This reverts commit r202184 because of buildbot breakage. llvm-svn: 202187
* Remove outdated comments.Manman Ren2014-02-251-1/+1
| | | | llvm-svn: 202186
* Debug info: Support variadic functions. | Adrian Prantl | 2014-02-25 | 3 | -21/+37
    Variadic functions have an unspecified parameter tag after the last argument. In IR this is represented as an unspecified parameter in the subroutine type.

    Paired commit with CFE.

    rdar://problem/13690847

    llvm-svn: 202184
* [XCore] Add intrinsic for CLRPT (clear port time) instruction. | Richard Osborne | 2014-02-25 | 1 | -1/+2
    llvm-svn: 202172
* [XCore] Add intrinsic for EDU (event disable unconditional) instruction. | Richard Osborne | 2014-02-25 | 1 | -1/+2
    llvm-svn: 202171
* Make DataLayout a plain object, not a pass. | Rafael Espindola | 2014-02-25 | 49 | -89/+138
    Instead, have a DataLayoutPass that holds one. This will allow parts of LLVM that don't handle passes to also use DataLayout.

    llvm-svn: 202168
* Keep the link register for uwtable. | Logan Chien | 2014-02-25 | 1 | -3/+12
    A function with the uwtable attribute might be visited by the stack unwinder, so the link register should be considered clobbered after the execution of the branch-and-link instruction (i.e. the definition on the machine instruction can't be ignored) even when the callee function is marked noreturn.

    llvm-svn: 202165
* [XCore] Prefer to word align functions. | Richard Osborne | 2014-02-25 | 1 | -0/+1
    The behaviour of the XCore's instruction buffer means that the performance of the same code sequence can differ depending on whether it starts at a 4-byte-aligned address or not. Since we don't model the instruction buffer in the backend, we have no way of knowing for sure if it is beneficial to word align a specific function. However, in the absence of precise modelling, it is better on balance to word align functions because:

    * It makes a fetch-nop while executing the prologue slightly less likely.
    * If we don't word align functions then a small perturbation in one function can have a dramatic knock-on effect. If the size of the function changes it might change the alignment, and therefore the performance, of all the functions that happen to follow it in the binary. This butterfly effect makes it harder to reason about and measure the performance of code.

    llvm-svn: 202163
* Factor out calls to AA.getDataLayout(). | Rafael Espindola | 2014-02-25 | 1 | -8/+6
    llvm-svn: 202157
* Make a few more DataLayout variables const. | Rafael Espindola | 2014-02-25 | 2 | -5/+5
    llvm-svn: 202155
* [SROA] Use the original load name with the SROA-prefixed IRB rather than | Chandler Carruth | 2014-02-25 | 1 | -2/+2
    just "load". This helps avoid pointless de-duping with order-sensitive numbers, as we already have unique names from the original load. It also makes the resulting IR quite a bit easier to read.

    llvm-svn: 202140
* [SROA] Thread the ability to add a pointer-specific name prefix through | Chandler Carruth | 2014-02-25 | 1 | -21/+53
    the pointer adjustment code. This is the primary code path that creates totally new instructions in SROA, and being able to lump them based on the name of the pointer value for which they were created causes *significantly* fewer name collisions and less general noise in the debug output.

    This is particularly significant because the noise has been making it much harder to track down instability in the output of SROA, as name de-duplication is a totally harmless form of instability that gets in the way of seeing real problems.

    The new fancy naming scheme tries to dig out the root "pre-SROA" name for pointer values and associate that all the way through the pointer formation instructions. Digging out the root is important to prevent the multiple iterative rounds of SROA from just layering too much cruft on top of cruft here. We already track the layers of SROA's iteration in the alloca name prefix. We don't need to duplicate it here.

    Should have no functionality change, and shouldn't have any really measurable impact on NDEBUG builds, as most of the complex logic is debug-only.

    llvm-svn: 202139
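    A much-simplified sketch of the "dig out the root name" idea (plain C++, not the code in SROA.cpp): cut the name at the first SROA-introduced suffix so that repeated iterations re-derive the same root instead of stacking suffixes.

        #include <string>

        // Illustrative helper: "x.sroa.3.sroa_idx" -> "x". The suffix convention
        // here is assumed for the example, not quoted from the pass.
        std::string rootName(const std::string &Name) {
          std::string::size_type Pos = Name.find(".sroa");
          return Pos == std::string::npos ? Name : Name.substr(0, Pos);
        }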
* [SROA] Rather than copying the logic for building a name prefix into the | Chandler Carruth | 2014-02-25 | 1 | -3/+3
    PHI-pointer builder, just copy the builder and clobber the obvious fields.

    llvm-svn: 202136
* [SROA] Simplify some of the logic to dig out the old pointer value by | Chandler Carruth | 2014-02-25 | 1 | -14/+10
    using OldPtr more heavily. Lots of this code was written before the rewriter had an OldPtr member set up ahead of time. There are already asserts in place that should ensure this doesn't change any functionality.

    llvm-svn: 202135
* [SROA] Adjust to new clang-format style. | Chandler Carruth | 2014-02-25 | 1 | -2/+2
    llvm-svn: 202134
* Reuse constants for COFF string table entry offsets | Nico Rieck | 2014-02-25 | 1 | -7/+9
    llvm-svn: 202130
* [SROA] Fix a *glaring* bug in r202091: you have to actually *write* | Chandler Carruth | 2014-02-25 | 1 | -0/+2
    the break statement, not just think it to yourself...

    No idea how this worked at all, much less survived most bots, my bootstrap, and some bot bootstraps! The Polly one didn't survive, and this was filed as PR18959. I don't have a reduced test case and honestly I'm not seeing the need. What we probably need here are better asserts / debug-build behavior in SmallPtrSet so that this madness doesn't make it so far.

    llvm-svn: 202129
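    For readers unfamiliar with the failure mode, a minimal, unrelated example of the bug class: omitting a break in a switch lets control fall straight through into the next case.

        #include <cstdio>

        void classify(int X) {
          switch (X) {
          case 1:
            std::printf("one\n");
            break; // forgetting this line makes case 1 fall through into case 2
          case 2:
            std::printf("two\n");
            break;
          default:
            std::printf("other\n");
            break;
          }
        }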
* Silence GCC warning | Alexey Samsonov | 2014-02-25 | 1 | -1/+1
    llvm-svn: 202119
* Fix typos | Alp Toker | 2014-02-25 | 3 | -5/+5
    llvm-svn: 202107
* [SROA] Add a debugging tool which shuffles the slices sequence prior to | Chandler Carruth | 2014-02-25 | 1 | -0/+19
    sorting it.

    This helps uncover latent reliance on the original ordering, which isn't guaranteed to be preserved by std::sort (but often is), and which is based on the use-def chain orderings, which also aren't (technically) guaranteed.

    Only available in C++11 debug builds, and behind a flag to prevent noise at the moment, but this is generally useful so I figured I'd put it in the tree rather than keeping it out-of-tree.

    llvm-svn: 202106
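    The debugging pattern in a self-contained form (illustrative only, not the SROA code): randomly permute the worklist before the sort so that anything silently depending on the incoming order shows up as a failure instead of being accidentally preserved.

        #include <algorithm>
        #include <random>
        #include <vector>

        // Debug aid: shuffle before sorting to expose latent dependence on the
        // incoming (use-def) order.
        void sortSlices(std::vector<int> &Slices, bool ShuffleFirst) {
          if (ShuffleFirst) {
            std::mt19937 Gen(std::random_device{}());
            std::shuffle(Slices.begin(), Slices.end(), Gen);
          }
          std::sort(Slices.begin(), Slices.end());
        }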
* [SROA] Use a more direct way of determining whether we are processing | Chandler Carruth | 2014-02-25 | 1 | -2/+3
    the destination operand or source operand of a memmove.

    It so happens that it was impossible for SROA to try to rewrite a self-memmove where the operands are *identical*, because either such a thing is volatile (and we don't rewrite) or it is non-volatile and we don't even register it as a use of the alloca. However, making the 'IsDest' test *rely* on this subtle fact is... very confusing for the reader. We should use the direct and readily available test of the Use*, which gives us concrete information about which operand is being rewritten.

    No functionality changed, I hope! ;]

    llvm-svn: 202103
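    A hedged sketch of the "ask the Use directly" idea (memory-transfer intrinsics take the destination as operand 0 and the source as operand 1; the helper below is illustrative rather than the exact SROA code).

        #include "llvm/IR/Use.h"

        // Decide which side of the transfer this use is from its operand number,
        // rather than inferring it from properties of the pointer values.
        static bool isDestOperand(const llvm::Use &U) {
          return U.getOperandNo() == 0; // memcpy/memmove: 0 = dest, 1 = source
        }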
* Indent this continued line. | Nick Lewycky | 2014-02-25 | 1 | -4/+4
    llvm-svn: 202096
* [SROA] Fix another instability in SROA with respect to the slice | Chandler Carruth | 2014-02-25 | 1 | -66/+63
    ordering.

    The fundamental problem that we're hitting here is that the use-def chain ordering is *itself* not a stable thing to be relying on in the rewriting for SROA. Further, we use a non-stable sort over the slices to arrange them based on the section of the alloca they're operating on. With a debugging STL implementation (or different implementations in stage2 and stage3) this can cause stage2 != stage3.

    The specific aspect of this problem fixed in this commit deals with the rewriting and load-speculation around PHIs and Selects. This, like many other aspects of the use-rewriting in SROA, is really part of the "strong SSA-formation" that is done by SROA, where it works very hard to canonicalize loads and stores in *just* the right way to satisfy the needs of mem2reg[1]. When we have a select (or a PHI) with 2 uses of the same alloca, we test that loads downstream of the select are speculatable around it twice. If only one of the operands to the select needs to be rewritten, then if we get lucky we rewrite that one first and the select is immediately speculatable. This can cause the order of operand visitation, and thus the order of slices to be rewritten, to change an alloca from promotable to non-promotable and vice versa.

    The fix is to defer all of the speculation until *after* the rewrite phase is done. Once we've rewritten everything, we can accurately test for whether speculation will work (once, instead of twice!) and the order ceases to matter. This also happens to simplify the other subtlety of speculation -- we need to *not* speculate anything unless the result of speculating will make the alloca fully promotable by mem2reg. I had a previous attempt at simplifying this, but it was still pretty horrible.

    There is actually already a *really* nice test case for this in basictest.ll, but on multiple STL implementations and inputs, we just got "lucky". Fortunately, the test case is very small and we can essentially build it in exactly the opposite way to get reasonable coverage in both directions even from normal STL implementations.

    llvm-svn: 202092
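    The shape of the fix, reduced to a generic pattern (illustrative, not the SROA data structures): record speculation candidates during the rewrite loop and only test and perform the speculation once everything has been rewritten, so the decision no longer depends on visitation order.

        #include <vector>

        // Generic defer-until-done pattern: phase one only records work; phase two
        // runs after the whole rewrite, so its decisions see the final state.
        template <typename T, typename RewriteFn, typename SpeculateFn>
        void rewriteThenSpeculate(std::vector<T> &Items, RewriteFn Rewrite,
                                  SpeculateFn Speculate) {
          std::vector<T *> Deferred;
          for (T &Item : Items)
            if (Rewrite(Item)) // Rewrite returns true if Item may need speculation
              Deferred.push_back(&Item);
          for (T *Item : Deferred) // the rewrite phase is now complete
            Speculate(*Item);
        }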