Commit log: root/llvm/lib

...
* [PowerPC] Convert a README.txt entry into a better test (Hal Finkel, 2015-01-05, 1 file, -13/+0)
  We now produce the desired code as noted in the README.txt file (no
  spurious or). Remove the README entry and improve the regression test.
  llvm-svn: 225214

* Use the integrated assembler by default on 32-bit PowerPC and SPARC (Brad Smith, 2015-01-05, 2 files, -4/+2)
  llvm-svn: 225213

* [PowerPC] Remove README.txt entry (Hal Finkel, 2015-01-05, 1 file, -34/+0)
  This entry has been rendered irrelevant now that we have proper CR bit
  tracking.
  llvm-svn: 225211

* [Hexagon] Adding add/sub with carry, logical shift left by immediate, and memop instructions (Colin LeMahieu, 2015-01-05, 2 files, -226/+124)
  Removing old defs without bits and updating references.
  llvm-svn: 225210

* [PowerPC] Add a test for truncating a shifted load (Hal Finkel, 2015-01-05, 1 file, -18/+0)
  We now produce the desired code as noted in the README.txt file. Remove
  the README entry and add a regression test.
  llvm-svn: 225209

* Make DIE.h a public CodeGen header (Frederic Riss, 2015-01-05, 9 files, -595/+8)
  dsymutil would like to use all the AsmPrinter/MCStreamer infrastructure
  to stream out the DWARF. In order to do so, it will reuse the DIE
  object, and so this header needs to be public.

  The interface exposed here has some corners that cannot be used without
  a DwarfDebug object, but clients that want to stream Dwarf can just
  avoid these.

  Differential Revision: http://reviews.llvm.org/D6695
  llvm-svn: 225208

* [PowerPC] Add another test for load/store with update (Hal Finkel, 2015-01-05, 1 file, -34/+0)
  We now produce the desired code as noted in the README.txt file. Remove
  the README entry and add a regression test.
  llvm-svn: 225205

* [PowerPC] Fold i1 extensions with other ops (Hal Finkel, 2015-01-05, 2 files, -17/+71)
  Consider this function from our README.txt file:

    int foo(int a, int b) { return (a < b) << 4; }

  We now explicitly track CR bits by default, so the comment in the
  README.txt about not really having a SETCC is no longer accurate, but
  we did generate this somewhat silly code:

    cmpw 0, 3, 4
    li 3, 0
    li 12, 1
    isel 3, 12, 3, 0
    sldi 3, 3, 4
    blr

  which generates the zext as a select between 0 and 1, and then shifts
  the result by a constant amount. Here we preprocess the DAG in order
  to fold the results of operations on an extension of an i1 value into
  the SELECT_I[48] pseudo instruction when the resulting constant can be
  materialized using one instruction (just like the 0 and 1). This was
  not implemented as a DAGCombine because the resulting code would have
  been anti-canonical and depends on replacing chained user nodes, which
  does not fit well into the lowering paradigm. Now we generate:

    cmpw 0, 3, 4
    li 3, 0
    li 12, 16
    isel 3, 12, 3, 0
    blr

  which is less silly.
  llvm-svn: 225203

* [X86][SSE] Fixed description for isSequentialOrUndefInRange. NFC (Simon Pilgrim, 2015-01-05, 1 file, -1/+1)
  llvm-svn: 225202

* [Hexagon] Adding rounding reg/reg variants, accumulating multiplies, and accumulating shifts (Colin LeMahieu, 2015-01-05, 1 file, -57/+170)
  llvm-svn: 225201

* IR: Prune arguments to ValueAsMetadata::ValueAsMetadata() (Duncan P. N. Exon Smith, 2015-01-05, 1 file, -2/+2)
  `LLVMContext` isn't actually used.
  llvm-svn: 225200

* [Hexagon] Adding V4 bit manipulating instructions, removing ALU defs without encoding bits (Colin LeMahieu, 2015-01-05, 1 file, -251/+104)
  llvm-svn: 225199

* [Hexagon] Adding V4 logic-logic instructions and tests (Colin LeMahieu, 2015-01-05, 1 file, -0/+55)
  llvm-svn: 225198

* [Hexagon] Adding orand, bitsplit reg/reg, and modwrap instructions (Colin LeMahieu, 2015-01-05, 1 file, -0/+57)
  llvm-svn: 225197

* [PowerPC] Remove zexts after i32 ctlz (Hal Finkel, 2015-01-05, 2 files, -1/+11)
  The 64-bit semantics of cntlzw are not special: the 32-bit leading-zero
  count is stored as a 64-bit value in the range [0,32]. As a result, it
  is always zero extended, and it can be added to the PPCISelDAGToDAG
  peephole optimization as a frontier instruction for the removal of
  unnecessary zero extensions.
  llvm-svn: 225192

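  For illustration only (not from the commit), a minimal C++ sketch of
  the kind of source this peephole helps; the function name is invented:

    #include <cstdint>
    // The 32-bit leading-zero count is always in [0,32], so widening it
    // to 64 bits needs no explicit zero-extension once cntlzw's defined
    // semantics are taken into account.
    uint64_t clz_as_u64(uint32_t x) {
      return x ? __builtin_clz(x) : 32; // guard: __builtin_clz(0) is undefined
    }
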
* [PowerPC] Remove zexts after byte-swapping loads (Hal Finkel, 2015-01-05, 2 files, -0/+16)
  lhbrx and lwbrx not only load their data with byte swapping, but also
  clear the upper 32 bits (at least). As a result, they can be added to
  the PPCISelDAGToDAG peephole optimization as frontier instructions for
  the removal of unnecessary zero extensions.
  llvm-svn: 225189

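  An illustrative sketch (mine, not the commit's), assuming the compiler
  forms a byte-swapping load from a bswap-of-load:

    #include <cstdint>
    // lwbrx byte-swaps while loading and clears the upper bits, so the
    // widening below should need no separate zero-extension.
    uint64_t load_swapped_widen(const uint32_t *p) {
      return __builtin_bswap32(*p);
    }
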
* [Hexagon] Adding round reg/imm and bitsplit instructions (Colin LeMahieu, 2015-01-05, 2 files, -0/+21)
  llvm-svn: 225188

* SymbolRewriter: use iplist::splice (Saleem Abdulrasool, 2015-01-05, 1 file, -1/+1)
  The swap implementation for iplist is currently unsupported. Simply
  splice the old list into place, which achieves the same purpose. This
  is needed in order to thread the -frewrite-map-file frontend option
  correctly. NFC.
  llvm-svn: 225186

* SymbolRewriter: 80-column (Saleem Abdulrasool, 2015-01-05, 1 file, -2/+4)
  Wrap a couple of lines. NFC.
  llvm-svn: 225185

* [AArch64] Improve codegen of store lane instructions by avoiding GPR usage (Ahmed Bougacha, 2015-01-05, 1 file, -2/+2)
  We used to generate code similar to:

    umov.b w8, v0[2]
    strb w8, [x0, x1]

  because the STR*ro* patterns were preferred to ST1*. Instead, we can
  avoid going through GPRs, and generate:

    add x8, x0, x1
    st1.b { v0 }[2], [x8]

  This patch increases the ST1* AddedComplexity to achieve that.

  rdar://16372710
  Differential Revision: http://reviews.llvm.org/D6202
  llvm-svn: 225183

* [AArch64] Improve codegen of store lane 0 instructions by directly storing the subregister (Ahmed Bougacha, 2015-01-05, 1 file, -0/+27)
  For 0-lane stores, we used to generate code similar to:

    fmov w8, s0
    str w8, [x0, x1, lsl #2]

  instead of:

    str s0, [x0, x1, lsl #2]

  To correct that: for store lane 0 patterns, directly match to STR
  <subreg>0. Byte-sized instructions don't have the special case for a
  0 index, because FPR8s are defined to have untyped content.

  rdar://16372710
  Differential Revision: http://reviews.llvm.org/D6772
  llvm-svn: 225181

* Select lower fsub,fabs pattern to fabd on AArch64 (Karthik Bhat, 2015-01-05, 1 file, -0/+12)
  This patch lowers patterns such as

    fsub v0.4s, v0.4s, v1.4s
    fabs v0.4s, v0.4s

  to

    fabd v0.4s, v0.4s, v1.4s

  on AArch64.
  Review: http://reviews.llvm.org/D6791
  llvm-svn: 225169

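  As a hedged illustration (not part of the patch), scalar C++ source
  that yields the same fsub+fabs pattern (per lane, once vectorized):

    #include <cmath>
    // |a - b| compiles to an fsub followed by an fabs, which this patch
    // folds into a single fabd.
    float fdiff(float a, float b) {
      return std::fabs(a - b);
    }
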
* Parse Tag_compatibility correctly (Charlie Turner, 2015-01-05, 1 file, -2/+7)
  Tag_compatibility takes two arguments, but before this patch it would
  erroneously accept just one; it now produces an error in that case.
  Change-Id: I530f918587620d0d5dfebf639944d6083871ef7d
  llvm-svn: 225167

* Emit the build attribute Tag_conformance (Charlie Turner, 2015-01-05, 2 files, -1/+15)
  Claim conformance to version 2.09 of the ARM ABI. This build attribute
  must be emitted first amongst the build attributes when written to an
  object file. This is to simplify conformance detection by consumers.
  Change-Id: If9eddcfc416bc9ad6e5cc8cdcb05d0031af7657e
  llvm-svn: 225166

* Select lower sub,abs pattern to sabd on AArch64 (Karthik Bhat, 2015-01-05, 1 file, -0/+27)
  This patch lowers patterns such as

    sub v0.4s, v0.4s, v1.4s
    abs v0.4s, v0.4s

  to

    sabd v0.4s, v0.4s, v1.4s

  on AArch64.
  Review: http://reviews.llvm.org/D6781
  llvm-svn: 225165

* [PM] Don't run the machinery of invalidating all the analysis passes when all are being preserved (Chandler Carruth, 2015-01-05, 2 files, -0/+12)
  We want to short-circuit this for a couple of reasons. One, I don't
  really want passes to grow a dependency on actually receiving their
  invalidate call when they've been preserved. I'm thinking about
  removing this entirely. But more importantly, preserving everything is
  likely to be the common case in a lot of scenarios, and it would be
  really good to bypass all of the invalidation and preservation
  machinery there. Avoiding calling N opaque functions to try to
  invalidate things that are by definition still valid seems important. =]

  This wasn't really inspired by much other than seeing the spam in the
  logging for analyses, but it seems better to get it checked in rather
  than forgetting about it.
  llvm-svn: 225163

* [PM] Add names and debug logging for analysis passes to the new pass manager (Chandler Carruth, 2015-01-05, 2 files, -4/+45)
  This starts to allow us to test analyses more easily, but it's really
  only the beginning. Some of the code here is still untestable without
  manual changes to create analysis passes, but I wanted to factor it
  into as small a set of chunks as possible.

  Next up, in order to be able to test things, are (in no particular
  order):
  - No-op analysis passes so we don't have to use real ones to exercise
    the pass manager itself.
  - An automatic way of generating dummy passes that require an analysis
    be run, including a variant that calls a 'print' method on a pass to
    make it even easier to print out the results of an analysis.
  - Dummy passes that invalidate all analyses for their IR unit so we
    can test invalidation and re-runs.
  - An automatic way to print each analysis pass as it is re-run.
  - Automatic but optional verification of analysis passes everywhere
    possible.

  I'm not claiming I'll get to all of these immediately, but that's what
  is in the pipeline at some stage. I'm fleshing out exactly what I need
  and what to prioritize by working on converting analyses and then
  trying to test the conversion. =]
  llvm-svn: 225162

* Replace several 'assert(false' with 'llvm_unreachable' or fold a condition into the assert (Craig Topper, 2015-01-05, 16 files, -56/+38)
  llvm-svn: 225160

* Fixed a bug in memory dependence checking module of loop vectorization (Jiangning Liu, 2015-01-05, 1 file, -48/+57)
  The following loop should not be vectorized with the current algorithm:

    // loop body
    ... = a[i]     (1)
    ... = a[i+1]   (2)
    .......
    a[i+1] = ....  (3)
    a[i]   = ...   (4)

  The algorithm tries to collect memory access candidates from
  AliasSetTracker, and then checks memory dependences against one
  another. The memory accesses are unique in AliasSetTracker, and a
  single memory access in AliasSetTracker may map to multiple entries in
  AccessAnalysis, which could cover both 'read' and 'write'. Originally
  the algorithm only checked the 'write' entry in Accesses if only a
  'write' exists. This is incorrect; the consequence is that all read
  accesses were ignored, and some RAW and WAR dependences were missed.
  For the case given above, if we ignore the two reads, the dependence
  between (1) and (3) would not be captured, and this loop would be
  incorrectly vectorized.

  The fix simply inserts a new loop to find all entries in Accesses.
  Since it skips most other memory accesses by checking the Value
  pointer at the very beginning of the loop, it should not increase
  compile time visibly.
  llvm-svn: 225159

* [X86] Remove the predicates from the register forms of the 2-byte inc and dec instructions (Craig Topper, 2015-01-05, 2 files, -44/+24)
  Remove the 32-bit mode only versions that existed for the
  disassembler. Move the patterns out of the instructions so they can
  still be qualified with predicates.
  llvm-svn: 225157

* [X86] Simplify code a little by just summing flags instead of conditionally incrementing. NFC (Craig Topper, 2015-01-05, 1 file, -18/+7)
  llvm-svn: 225156

* [X86] Remove unnecessary redeclaration of a variable with the same assignment as at the beginning of the function. NFC (Craig Topper, 2015-01-05, 1 file, -1/+0)
  llvm-svn: 225155

* [X86] Remove a strange FIXME referring to a hack that doesn't seem to exist, since the code is in a comment (Craig Topper, 2015-01-05, 1 file, -3/+0)
  Can't figure out what the body of the 'if' was supposed to be anyway.
  llvm-svn: 225154

* [x86] Reduce text duplication for similar operand class declarations in tablegen instruction info (Craig Topper, 2015-01-05, 1 file, -268/+178)
  No functional change intended.
  llvm-svn: 225153

* [X86] Fix the immediate size to match the address size in the operand types for the move to/from absolute memory instructions (Craig Topper, 2015-01-05, 1 file, -7/+7)
  llvm-svn: 225152

* [PowerPC] Enable speculation of cttz/ctlz (Hal Finkel, 2015-01-05, 1 file, -0/+8)
  PPC has an instruction for ctlz with defined zero behavior, and our
  lowering of cttz (provided by DAGCombine) is also efficient and
  branchless, so speculating these makes sense.
  llvm-svn: 225150

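  An illustrative example (not from the commit) of what speculation buys:

    #include <cstdint>
    // Without speculation this guard becomes a branch; because cntlzw
    // already returns 32 for a zero input, the whole expression can be
    // computed branchlessly.
    uint32_t safe_clz(uint32_t x) {
      return x == 0 ? 32 : __builtin_clz(x);
    }
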
* [SROA] Apply a somewhat heavy and unpleasant hammer to fix PR22093, an assert out of the new pre-splitting in SROA (Chandler Carruth, 2015-01-05, 1 file, -11/+47)
  This fix makes the code do what was originally intended: when we have
  a store of a load both dealing in the same alloca, we force them to
  both be pre-split with identical offsets. This is really quite hard to
  do because we can keep discovering problems as we go along. We have to
  track every load over the current alloca which for any reason becomes
  invalid for pre-splitting, and go back to remove all stores of those
  loads. I've included a couple of test cases derived from PR22093 that
  cover the different ways this can happen. While that PR only really
  triggered the first of these two, it's the same fundamental issue.

  The other challenge here is documented in a FIXME now. We end up being
  quite a bit more aggressive for pre-splitting when loads and stores
  don't refer to the same alloca. This aggressiveness comes at the cost
  of introducing potentially redundant loads. It isn't clear that this
  is the right balance. It might be considerably better to require that
  we only do pre-splitting when we can pre-split every load and store
  involved in the entire operation. That would give more consistent, if
  conservative, results. Unfortunately, it requires a non-trivial change
  to the actual pre-splitting operation in order to correctly handle
  cases where we end up pre-splitting stores out of order. And it isn't
  100% clear that this is the right direction, although I'm starting to
  suspect that it is.
  llvm-svn: 225149

* [PowerPC] Materialize i64 constants using rotation with masking (Hal Finkel, 2015-01-05, 2 files, -26/+51)
  r225135 added the ability to materialize i64 constants using rotations
  in order to reduce the instruction count. Sometimes we can use a
  rotation only with some extra masking, so that we take advantage of
  the fact that generating a bunch of extra higher-order 1 bits is easy
  using li/lis.
  llvm-svn: 225147

* [PM] Switch the new pass manager to use a reference-based API for IR units (Chandler Carruth, 2015-01-05, 6 files, -55/+54)
  This was debated back and forth a bunch, but using references is now
  clearly cleaner. Of all the code written using pointers thus far, in
  only one place did it really make more sense to have a pointer. In
  most cases, this just removes immediate dereferencing from the code. I
  think it is much better to get errors on null IR units earlier,
  potentially at compile time, than to delay it.

  Most notably, the legacy pass manager uses references for its
  routines, and so as more and more code works with both, the use of
  pointers was likely to become really annoying. I noticed this when I
  ported the domtree analysis over and wrote the entire thing with
  references only to have it fail to compile. =/ It seemed better to
  switch now than to delay. We can, of course, revisit this if we learn
  that references are really problematic in the API.
  llvm-svn: 225145

* [PM] Clean up a const_cast and other machinery left over in this code from before I removed the non-const use of the function (Chandler Carruth, 2015-01-04, 1 file, -2/+1)
  The unused variable that held the const_cast was already kindly
  removed by Michael.
  llvm-svn: 225143

* [PowerPC] Materialize i64 constants using rotation (Hal Finkel, 2015-01-04, 2 files, -29/+29)
  Materializing full 64-bit constants on PPC64 can be expensive,
  requiring up to 5 instructions depending on the locations of the
  non-zero bits. Sometimes materializing a rotated constant, and then
  applying the inverse rotation, requires fewer instructions than the
  direct method. If so, do that instead.

  In r225132, I added support for forming constants using bit inversion.
  In effect, this reverts that commit and replaces it with rotation
  support. The bit inversion is useful for turning constants that are
  mostly ones into ones that are mostly zeros (thus enabling a
  more-efficient shift-based materialization), but the same effect can
  be obtained by using negative constants and a rotate, and that is at
  least as efficient, if not more.
  llvm-svn: 225135

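  A worked example (mine, not taken from the commit) of a constant where
  rotation wins:

    #include <cstdint>
    // Materializing 0x8000000000000001 directly costs up to 5
    // instructions, but its rotate-left-by-1 image is just 3, which a
    // single li loads; one rotate then restores the original value.
    constexpr uint64_t rot_right_1(uint64_t v) {
      return (v >> 1) | (v << 63);
    }
    static_assert(rot_right_1(3) == 0x8000000000000001ULL,
                  "li of 3 plus one rotate recovers the constant");
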
* Fix unused variable warning for non-asserts builds. NFC (Michael Kuperstein, 2015-01-04, 1 file, -2/+2)
  llvm-svn: 225133

* [PowerPC] Materialize i64 constants using bit inversion (Hal Finkel, 2015-01-04, 1 file, -2/+30)
  Materializing full 64-bit constants on PPC64 can be expensive,
  requiring up to 5 instructions depending on the locations of the
  non-zero bits. Sometimes materializing the bit-inverted constant, and
  then flipping the bits back, requires fewer instructions than the
  direct method. If so, do that instead.
  llvm-svn: 225132

* [PM] Split the AssumptionTracker immutable pass into two separate APIs (Chandler Carruth, 2015-01-04, 49 files, -724/+764)
  The two pieces are a cache of assumptions for a single function, and
  an immutable pass that manages those caches.

  The motivation for this change is twofold. Immutable analyses are
  really hacks around the current pass manager design and don't exist in
  the new design. This is usually OK, but it requires that the core
  logic of an immutable pass be reasonably partitioned off from the pass
  logic. This change does precisely that. As a consequence it also paves
  the way for the *many* utility functions that deal in the assumptions
  to live in both pass manager worlds by creating a separate non-pass
  object with its own independent API that they all rely on. Now, the
  only bits of the system that deal with the actual pass mechanics are
  those that actually need to deal with the pass mechanics.

  Once this separation is made, several simplifications become pretty
  obvious in the assumption cache itself. Rather than using a set and
  callback value handles, it can just be a vector of weak value handles.
  The callers can easily skip the handles that are null, and eventually
  we can wrap all of this up behind a filter iterator.

  For now, this adds boilerplate to the various passes, but this kind of
  boilerplate will end up making it possible to port these passes to the
  new pass manager, and so it will end up factored away pretty
  reasonably.
  llvm-svn: 225131

* InstCombine: match can find ConstantExprs, don't assume we have a Value (David Majnemer, 2015-01-04, 1 file, -2/+2)
  We assumed the output of a match was a Value; this would cause us to
  assert because we would fail a cast<>. Instead, use a helper in the
  Operator family to hide the distinction between Value and Constant.

  This fixes PR22087.
  llvm-svn: 225127

* ValueTracking: ComputeNumSignBits should tolerate misshapen phi nodes (David Majnemer, 2015-01-04, 1 file, -2/+5)
  PHI nodes can have zero operands in the middle of a transform. It is
  expected that utilities in Analysis don't freak out when this happens.

  Note that it is considered invalid to allow these misshapen phi nodes
  to make it to another pass.

  This fixes PR22086.
  llvm-svn: 225126

* [APFloat][ADT] Fix sign handling logic for FMA results that truncate to zero (Lang Hames, 2015-01-04, 1 file, -1/+1)
  This patch adds a check for underflow when truncating results back to
  lower precision at the end of an FMA. The additional sign handling
  logic in APFloat::fusedMultiplyAdd should only be performed when the
  result of the addition step of the FMA (in full precision) is exactly
  zero, not when the result underflows to zero.

  Unit tests for this case and related signed zero FMA results are
  included.

  Fixes <rdar://problem/18925551>.
  llvm-svn: 225123

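  A small illustration of the distinction (my example, assuming an
  IEEE-754 platform with a genuine fused fma):

    #include <cmath>
    #include <cstdio>
    int main() {
      // Addition result exactly zero: round-to-nearest gives +0.0.
      std::printf("%g\n", std::fma(1.0, 1.0, -1.0));    // prints 0
      // Product underflows to zero: the tiny negative value keeps its
      // sign, so the result is -0.0 (the case the patch distinguishes).
      std::printf("%g\n", std::fma(-5e-324, 0.5, 0.0)); // prints -0
    }
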
* ARM: permit tail calls to weak externals on COFF (Saleem Abdulrasool, 2015-01-03, 2 files, -2/+6)
  Weak externals are resolved statically, so we can actually generate
  the tail call on PE/COFF targets without breaking the requirements. It
  is questionable whether we want to propagate the current behaviour for
  MachO, as the requirements are part of the ARM ELF specifications, and
  it seems that prior to SVN r215890 we would have tail'ed the call. For
  now, be conservative and only permit it on PE/COFF, where the call
  will always be fully resolved.
  llvm-svn: 225119

* [PowerPC/BlockPlacement] Allow target to provide a per-loop alignment preference (Hal Finkel, 2015-01-03, 3 files, -3/+41)
  The existing code provided for specifying a global loop alignment
  preference. However, the preferred loop alignment might depend on the
  loop itself. For recent POWER cores, loops between 5 and 8
  instructions should have 32-byte alignment (while the others are
  better with 16-byte alignment) so that the entire loop will fit in one
  i-cache line.

  To support this, getPrefLoopAlignment has been made virtual, and can
  be provided with an optional MachineLoop* so the target can inspect
  the loop before answering the query. The default behavior, as before,
  is to return the value set with setPrefLoopAlignment.
  MachineBlockPlacement now queries the target for each loop instead of
  only once per function. There should be no functional change for
  other targets.
  llvm-svn: 225117

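  A hedged sketch (not the committed code; the class name is invented,
  and the log2 alignment convention is assumed from LLVM of this era) of
  what a target override of the new hook might look like:

    #include "llvm/CodeGen/MachineLoopInfo.h"
    using namespace llvm;

    unsigned MyTargetLowering::getPrefLoopAlignment(MachineLoop *ML) const {
      if (ML) {
        unsigned NumInstrs = 0;
        for (MachineBasicBlock *MBB : ML->getBlocks())
          NumInstrs += MBB->size(); // count instructions in the loop
        if (NumInstrs >= 5 && NumInstrs <= 8)
          return 5; // log2: 32-byte alignment, one i-cache line
      }
      // Otherwise fall back to the global setPrefLoopAlignment() value.
      return TargetLowering::getPrefLoopAlignment(ML);
    }
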
* [PowerPC] Use 16-byte alignment for modern cores for functions/loops (Hal Finkel, 2015-01-03, 2 files, -4/+45)
  Most modern PowerPC cores prefer that functions and loops start on
  16-byte-aligned boundaries (*), so instruct block placement, etc., to
  make this happen. The branch selector has also been adjusted to
  account for the extra nops that might now be inserted before loop
  headers.

  (*) Some cores actually prefer other alignments for small loops, but
  that will be addressed in a follow-up commit.
  llvm-svn: 225115