path: root/llvm/lib
Commit message | Author | Age | Files | Lines
* [SelectionDAG] Add support for EXTRACT_SUBVECTOR to ComputeNumSignBits
  Simon Pilgrim | 2016-12-12 | 1 file | -0/+2
  Pre-commit as discussed on D27657. llvm-svn: 289425
* [X86] Teach selectScalarSSELoad to accept full 128-bit vector loads and the X86ISD::VZEXT_LOAD opcode
  Craig Topper | 2016-12-12 | 1 file | -0/+22
  Disable peephole on some of the tests that no longer require it to properly fold scalar intrinsics. llvm-svn: 289424
* [X86] Change CMPSS/CMPSD intrinsic instructions to use sse_load_f32/f64 as their memory pattern instead of full vector load
  Craig Topper | 2016-12-12 | 1 file | -12/+12
  These intrinsics only load a single element. We should use sse_load_f32/f64 to give more options of what loads it can match. Currently these instructions often only get their load folded thanks to the load folding in the peephole pass. I plan to add more types of loads to sse_load_f32/f64 so we can match without the peephole. llvm-svn: 289423
* [X86] Remove some intrinsic instructions from hasPartialRegUpdate
  Craig Topper | 2016-12-12 | 1 file | -8/+0
  Summary: These intrinsic instructions are all selected from intrinsics that have well defined behavior for where the upper bits come from. It's not the same place as the lower bits. As you can see we were suppressing load folding for these instructions in some cases. In none of the cases was the separate load helping avoid a partial dependency on the destination register. So we should just go ahead and allow the load to be folded.
  Only foldMemoryOperand was suppressing folding for these. They all have patterns for folding sse_load_f32/f64 that aren't gated with OptForSize, but sse_load_f32/f64 doesn't allow 128-bit vector loads. It only allows scalar_to_vector and vzmovl of scalar loads to match. There's no reason we can't allow a 128-bit vector load to be narrowed, so I would like to fix sse_load_f32/f64 to allow that. And if I do that it changes some of these same test cases to fold the load too.
  Reviewers: spatel, zvi, RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D27611
  llvm-svn: 289419
* [SCEVExpand] do not hoist divisions by zero (PR30935)
  Sebastian Pop | 2016-12-12 | 2 files | -31/+59
  SCEVExpand computes the insertion point for the components of a SCEV to be code generated. When it comes to generating code for a division, SCEVExpand would not be able to check (at compilation time) all the conditions necessary to avoid a division by zero. The patch disables hoisting of expressions containing divisions by anything other than non-zero constants in order to avoid hoisting these expressions past conditions that should hold before doing the division.
  The patch passes check-all on x86_64-linux.
  Differential Revision: https://reviews.llvm.org/D27216
  llvm-svn: 289412
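A hand-written C++ analogue may help picture the hazard (illustrative only; the pass itself reasons about SCEV expressions, not source code):

```cpp
// Illustrative only: hoisting the division out of the guarded region
// would evaluate a[i] / d even when d == 0, which is the undefined
// behavior the guard was written to prevent.
long sumOfQuotients(long d, const long *a, long len) {
  long sum = 0;
  for (long i = 0; i < len; ++i) {
    if (d != 0)         // condition that must hold before dividing
      sum += a[i] / d;  // unsafe to hoist above the d != 0 check
  }
  return sum;
}
```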
* [InstCombine][XOP] The instructions for the scalar frcz intrinsics are defined to put 0 in the upper bits, not pass bits through like other intrinsics. So we should return a zero vector instead
  Craig Topper | 2016-12-11 | 1 file | -2/+14
  llvm-svn: 289411
* [X86][SSE] Add support for combining target shuffles to SHUFPD
  Simon Pilgrim | 2016-12-11 | 1 file | -17/+45
  llvm-svn: 289407
* [SCCP] Use the appropriate helper function. NFCI.
  Davide Italiano | 2016-12-11 | 1 file | -2/+2
  llvm-svn: 289406
* [X86][AVX512] Add missing patterns for broadcast fallback in case load node has multiple uses (for v4i64 and v4f64)
  Ayman Musa | 2016-12-11 | 1 file | -0/+6
  When the load node which the broadcast instruction broadcasts has multiple uses, it cannot be folded. A fallback pattern is added to catch these cases and provide another solution.
  Differential Revision: https://reviews.llvm.org/D27661
  llvm-svn: 289404
* [TBAA] Don't generate invalid TBAA when merging nodes
  Sanjoy Das | 2016-12-11 | 1 file | -1/+5
  Summary: Fix a corner case in `MDNode::getMostGenericTBAA` where we can sometimes generate invalid TBAA metadata.
  Reviewers: chandlerc, hfinkel, mehdi_amini, manmanren
  Subscribers: mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26635
  llvm-svn: 289403
* [Verifier] Add verification for TBAA metadata
  Sanjoy Das | 2016-12-11 | 1 file | -2/+265
  Summary: This change adds some verification in the IR verifier around struct path TBAA metadata. Other than some basic sanity checks (e.g. we get constant integers where we expect constant integers), this checks:
   - That by the time a struct access tuple `(base-type, offset)` is "reduced" to a scalar base type, the offset is `0`. For instance, in C++ you can't start from, say, `("struct-a", 16)` and end up with `("int", 4)` -- by the time the base type is `"int"`, the offset better be zero. In particular, a variant of this invariant is needed for `llvm::getMostGenericTBAA` to be correct.
   - That there are no cycles in a struct path.
   - That struct type nodes have their offsets listed in an ascending order.
   - That when generating the struct access path, you eventually reach the access type listed in the tbaa tag node.
  Reviewers: dexonsmith, chandlerc, reames, mehdi_amini, manmanren
  Subscribers: mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26438
  llvm-svn: 289402
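The first invariant can be pictured with a toy model (hypothetical types; the real verifier inspects MDNode operands and also enforces the cycle and ordering checks above):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Toy stand-in for a TBAA type node; not an LLVM type.
struct TBAANode {
  bool IsScalar = false;
  // Struct nodes list (field offset, field type), offsets ascending.
  std::vector<std::pair<uint64_t, const TBAANode *>> Fields;
};

// Walk (BaseTy, Offset) down the struct path: step into the field that
// contains Offset until a scalar node is reached. The invariant: the
// residual offset must then be exactly zero.
bool verifyAccessPath(const TBAANode *Ty, uint64_t Offset) {
  while (!Ty->IsScalar) {
    const TBAANode *Next = nullptr;
    uint64_t FieldOff = 0;
    for (const auto &F : Ty->Fields) // ascending, so the last match wins
      if (F.first <= Offset) {
        FieldOff = F.first;
        Next = F.second;
      }
    if (!Next)
      return false; // no field contains the offset: malformed node
    Offset -= FieldOff;
    Ty = Next;
  }
  return Offset == 0; // e.g. ("struct-a", 16) must not end as ("int", 4)
}
```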
* [Constants] don't die processing non-ConstantInt GEP indices in isGEPWithNoNotionalOverIndexing() (PR31262)
  Sanjay Patel | 2016-12-11 | 1 file | -6/+8
  This should fix: https://llvm.org/bugs/show_bug.cgi?id=31262
  llvm-svn: 289401
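The defensive pattern behind this kind of fix, sketched against the LLVM C++ API (an assumed shape, not the exact change): a GEP index is a Constant but not necessarily a ConstantInt, so it must be tested with dyn_cast rather than cast.

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/Operator.h"
using namespace llvm;

// Sketch: accept only GEPs whose indices are all non-negative
// ConstantInts. dyn_cast returns nullptr for other Constants (such as
// a ConstantExpr index), where cast<ConstantInt> would assert and die.
static bool hasOnlyNonNegativeConstantIntIndices(const GEPOperator *GEP) {
  for (unsigned I = 1, E = GEP->getNumOperands(); I != E; ++I) {
    auto *CI = dyn_cast<ConstantInt>(GEP->getOperand(I));
    if (!CI || CI->getValue().isNegative())
      return false;
  }
  return true;
}
```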
* instr-combiner: sum up all latencies of the transformed instructions
  Sebastian Pop | 2016-12-11 | 1 file | -2/+9
  We have found that -- when the selected subarchitecture has a scheduling model and we are not optimizing for size -- the machine-instruction combiner uses a too-simple algorithm to compute the cost of one of the two alternatives [before and after running a combining pass on a section of code], and therefore it throws away the combination results too often.
  This fix has the potential to help any ISA with the potential to combine instructions and for which at least one subarchitecture has a scheduling model. As of now, this is only known to definitely affect AArch64 subarchitectures with a scheduling model.
  Regression tested on AMD64/GNU-Linux; the new test case fails on an unpatched compiler and passes on a patched compiler.
  Patch by Abe Skolnik and Sebastian Pop.
  llvm-svn: 289399
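A simplified sketch of the summed-latency comparison (assumed shape only; the real pass queries the subtarget scheduling model per MachineInstr):

```cpp
#include <numeric>
#include <vector>

struct Instr {
  unsigned Latency; // per-instruction latency from the scheduling model
};

// Sum the modeled latency of every instruction in a sequence, instead
// of sampling a single instruction from it.
static unsigned sequenceLatency(const std::vector<Instr> &Seq) {
  return std::accumulate(
      Seq.begin(), Seq.end(), 0u,
      [](unsigned Acc, const Instr &I) { return Acc + I.Latency; });
}

// Keep the combined sequence only if it is no slower than the original.
static bool combineIsProfitable(const std::vector<Instr> &Old,
                                const std::vector<Instr> &New) {
  return sequenceLatency(New) <= sequenceLatency(Old);
}
```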
* [SCEVExpander] Explicitly expand AddRec starts into loop preheader
  Sanjoy Das | 2016-12-11 | 1 file | -5/+8
  This is NFC today, but won't be once D27216 (or an equivalent patch) is in.
  This change fixes a design problem in SCEVExpander -- it relied on a hoisting optimization to generate correct code for add recurrences. This meant that changing the hoisting optimization to not kick in under certain circumstances (to avoid speculating faulting instructions, say) would break correctness.
  The fix is to make the correctness requirements explicit, and have it not rely on the hoisting optimization for correctness.
  llvm-svn: 289397
* [X86] Regcall - Adding support for mask types
  Oren Ben Simhon | 2016-12-11 | 3 files | -38/+62
  The regcall calling convention passes mask-type arguments in x86 GPR registers. The review includes the changes required in order to support v32i1, v16i1 and v8i1.
  Differential Revision: https://reviews.llvm.org/D27148
  llvm-svn: 289383
* [X86][InstCombine] Add support for scalar FMA intrinsics to SimplifyDemandedVectorElts
  Craig Topper | 2016-12-11 | 1 file | -0/+29
  This teaches SimplifyDemandedElts that the FMA can be removed if the lower element isn't used. It also teaches it that if the upper elements of the first operand aren't used then we can simplify them.
  llvm-svn: 289377
* Tweak the core loop in StringRef::find to avoid calling memcmp on every iteration
  Chandler Carruth | 2016-12-11 | 1 file | -6/+12
  Instead, load the byte at the needle length, compare it directly, and save it to use in the lookup table of lengths we can skip forward. I also added an annotation to expect that the comparison fails so that the loop gets laid out contiguously without the call to memcmp (and the substantial register shuffling that the ABI requires of that call). Finally, because this behaves especially badly with a needle length of one (by calling memcmp with a zero length), special case that to directly call memchr, which is what we should have been doing anyways.
  This was motivated by the fact that there are a large number of test cases in 'check-llvm' where FileCheck's performance is dominated by calls to StringRef::find (in a release, no-asserts build). I'm working on patches to generally improve matters there, but this alone was worth a 12.5% improvement in one test case where FileCheck spent 92% of its time in this routine.
  I experimented a bunch with different minor variations on this theme, for example setting the pointer *at* the last byte and indexing backwards for the call to memcmp. That didn't improve anything on this version and seemed more complex. I also tried other things to make the loop flow more nicely and none worked. =/
  It is a bit unfortunate, the generated code here remains pretty gross, but I don't see any obvious ways to improve it. At this point, most of my ideas would be really elaborate:
  1) While the remainder of the string is long enough, we could load a 16-byte or 32-byte vector at the address of the last byte and use palignr to rotate that and check the first 15 or 31 bytes at the front of the next segment, essentially pre-loading the first several bytes of the next iteration so we could quickly detect a mismatch in those bytes without an additional memory access. Downsides would be the code complexity, having a fallback loop, and a likely misaligned vector load. Plus it would make the common case of the last byte not matching somewhat slower (needing some extraction from a vector).
  2) While we have space, we could do an aligned load of a 16- or 32-byte vector that *contains* the end byte, and use any preceding bytes to have a more precise "no" test, and any subsequent bytes could be saved for the next iteration. This removes any unaligned load penalty, but still requires us to pay the overhead of vector extraction for the cases where we didn't need to do anything other than load and compare the last byte.
  3) Try to walk from the last byte in a way that is more friendly to cache and/or memory pre-fetcher considering we have to poke the last byte anyways.
  No idea if any of these are really worth pursuing though. They all seem somewhat unlikely to yield big wins in practice and to be a lot of work and complexity. So I settled here, which at least seems like a strict improvement over the previous version.
  llvm-svn: 289373
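For readers who want the loop shape, here is a minimal self-contained sketch of the approach described above (simplified: the real routine also takes a start offset and must guard the uint8_t table against needles longer than 255 bytes):

```cpp
#include <cstdint>
#include <cstring>

// Horspool-style search: probe the window's last byte first, and reuse
// that byte as the index into the bad-character skip table. Assumes
// 1 <= NeedleSize <= 255 so the skip distances fit in uint8_t.
static size_t find(const char *Hay, size_t HaySize,
                   const char *Needle, size_t NeedleSize) {
  const size_t npos = (size_t)-1;
  if (NeedleSize == 1) { // memcmp of length 0 is wasteful; use memchr.
    const void *P = std::memchr(Hay, Needle[0], HaySize);
    return P ? (size_t)((const char *)P - Hay) : npos;
  }
  if (NeedleSize == 0 || NeedleSize > HaySize)
    return NeedleSize == 0 ? 0 : npos;

  // Skip table: how far the window may advance for each last-byte value.
  uint8_t BadCharSkip[256];
  std::memset(BadCharSkip, (int)NeedleSize, sizeof(BadCharSkip));
  for (size_t i = 0; i != NeedleSize - 1; ++i)
    BadCharSkip[(uint8_t)Needle[i]] = (uint8_t)(NeedleSize - 1 - i);

  for (size_t Pos = 0; Pos + NeedleSize <= HaySize;) {
    // Compare the last byte of the window first: it usually differs,
    // letting us skip both the memcmp call and several positions.
    uint8_t Last = (uint8_t)Hay[Pos + NeedleSize - 1];
    if (Last == (uint8_t)Needle[NeedleSize - 1] &&
        std::memcmp(Hay + Pos, Needle, NeedleSize - 1) == 0)
      return Pos;
    Pos += BadCharSkip[Last];
  }
  return npos;
}
```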
* [X86][InstCombine] Teach InstCombineCalls to simplify demanded elements for scalar FMA intrinsics
  Craig Topper | 2016-12-11 | 1 file | -0/+8
  These intrinsics don't read the upper bits of their second and third inputs, so we can try to simplify them.
  llvm-svn: 289372
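The lane semantics that make this legal can be modeled in a few lines (a toy model of the scalar-FMA shape, not the InstCombine code):

```cpp
#include <array>

using V4 = std::array<float, 4>;

// Toy model of the scalar-FMA lane semantics described above: lane 0
// computes a*b+c, the upper lanes pass through from the first operand.
static V4 scalarFMA(const V4 &A, const V4 &B, const V4 &C) {
  return {A[0] * B[0] + C[0], A[1], A[2], A[3]};
}
```

Because lanes 1-3 of B and C never reach the result, a simplifier may rewrite them arbitrarily (e.g. to undef) or delete instructions that only computed them.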
* [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar cmp intrinsics with masking and rounding
  Craig Topper | 2016-12-11 | 1 file | -1/+3
  These intrinsics don't read the upper elements of their first and second inputs. They are slightly different from the SSE version, which does use the upper bits of its first input as passthru bits since the result goes to an XMM register. For AVX-512 the result goes to a mask register instead.
  llvm-svn: 289371
* [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar add,div,mul,sub,max,min intrinsics with masking and rounding
  Craig Topper | 2016-12-11 | 1 file | -0/+31
  These intrinsics don't read the upper bits of their second input. And the third input is the passthru for masking, and that only uses the lower element as well.
  llvm-svn: 289370
* [libFuzzer] don't depend on time in a test
  Kostya Serebryany | 2016-12-11 | 1 file | -1/+1
  llvm-svn: 289368
* [AVX-512][InstCombine] Add 512-bit vpermilvar intrinsics to InstCombineCalls to match 128 and 256-bit
  Craig Topper | 2016-12-11 | 1 file | -10/+10
  llvm-svn: 289354
* [X86] Fix a comment to say 'an FMA' instead of 'a FMA'. NFC
  Craig Topper | 2016-12-11 | 1 file | -1/+1
  llvm-svn: 289352
* [X86] Remove masking from 512-bit VPERMIL intrinsics in preparation for being able to constant fold them in InstCombineCalls like we do for 128/256-bit
  Craig Topper | 2016-12-11 | 2 files | -8/+7
  llvm-svn: 289350
* [AVR] Fix a signed vs unsigned compiler warning
  Dylan McKay | 2016-12-11 | 1 file | -1/+1
  llvm-svn: 289349
* [X86][InstCombine] Teach InstCombineCalls to turn pshufb intrinsic into a shufflevector if the indices are constant
  Craig Topper | 2016-12-11 | 1 file | -2/+3
  llvm-svn: 289348
* [AVR] Remove incorrect comment
  Dylan McKay | 2016-12-10 | 1 file | -2/+0
  This should've been removed in r289323.
  llvm-svn: 289346
* [X86] Remove masking from 512-bit PSHUFB intrinsics in preparation for being able to constant fold it in InstCombineCalls like we do for 128/256-bit
  Craig Topper | 2016-12-10 | 2 files | -4/+4
  llvm-svn: 289344
* [InstCombine] add helper for shift-by-shift folds; NFCI
  Sanjay Patel | 2016-12-10 | 1 file | -150/+162
  These are currently limited to integer types, but we should be able to extend to splat vectors and possibly general vectors.
  llvm-svn: 289343
* [X86][SSE] Ensure UNPCK inputs are a consistent value type in LowerHorizontalByteSum
  Simon Pilgrim | 2016-12-10 | 1 file | -2/+3
  llvm-svn: 289341
* [AVX-512] Remove 128/256 masked vpermil intrinsics and autoupgrade to a select around the unmasked avx1 intrinsics
  Craig Topper | 2016-12-10 | 2 files | -8/+22
  llvm-svn: 289340
* [X86][IR] Move the autoupgrading of store intrinsics out of the main nested if/else chain
  Craig Topper | 2016-12-10 | 1 file | -90/+102
  This should buy a little more time against the MSVC limit mentioned in PR31034. The handlers for stores all return at the end of their block, so they can be picked off early.
  llvm-svn: 289339
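The restructuring pattern reads roughly like this (generic shape with hypothetical helper names, not the actual upgrade code):

```cpp
#include <string>

// Hypothetical helpers standing in for the real upgrade routines.
static bool upgradeStore(const std::string &) { return true; }
static bool upgradeOther(const std::string &) { return true; }

static bool upgradeIntrinsic(const std::string &Name) {
  // Store upgrades all end in a return, so they can be checked first
  // as a flat sequence instead of deepening the if/else chain below.
  if (Name.compare(0, 6, "storeu") == 0)
    return upgradeStore(Name);
  if (Name.compare(0, 12, "masked.store") == 0)
    return upgradeStore(Name);

  // The remaining cases keep their original nested structure.
  if (Name.compare(0, 4, "sse.") == 0) {
    // ... deeply nested handling elided ...
    return upgradeOther(Name);
  }
  return false;
}
```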
* AMDGPU: Fix asan errors when folding operands
  Matt Arsenault | 2016-12-10 | 1 file | -2/+2
  This was failing when trying to fold immediates into operand 1 of a phi, which only has one statically known operand.
  llvm-svn: 289337
* [X86][SSE] Move ZeroVector creation into the shuffle pattern case where it's actually used
  Simon Pilgrim | 2016-12-10 | 1 file | -2/+2
  Also fix the ZeroVector's type - I've no idea how this hasn't caused problems.
  llvm-svn: 289336
* [AVX-512] Add support for lowering (v2i64 (fp_to_sint (v2f32))) to vcvttps2uqq when AVX512DQ and AVX512VL are available
  Craig Topper | 2016-12-10 | 2 files | -6/+25
  llvm-svn: 289335
* [X86] Clarify indentation. NFC
  Craig Topper | 2016-12-10 | 1 file | -1/+1
  llvm-svn: 289334
* [X86] Combine LowerFP_TO_SINT and LowerFP_TO_UINT. They only differ by a single boolean flag passed to a helper function. Just check the opcode and create the flag
  Craig Topper | 2016-12-10 | 2 files | -24/+7
  llvm-svn: 289333
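The shape of that merge, as a sketch with hypothetical helper names (the actual code is in the X86 lowering):

```cpp
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Placeholder standing in for the existing shared lowering code.
static SDValue LowerFPToIntHelper(SDValue Op, SelectionDAG &DAG,
                                  bool IsSigned) {
  (void)DAG; (void)IsSigned;
  return Op;
}

// One entry point replaces the LowerFP_TO_SINT/LowerFP_TO_UINT pair:
// the flag the helper needs is recovered from the node's opcode.
static SDValue LowerFP_TO_INT(SDValue Op, SelectionDAG &DAG) {
  bool IsSigned = Op.getOpcode() == ISD::FP_TO_SINT;
  return LowerFPToIntHelper(Op, DAG, IsSigned);
}
```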
* [InstSimplify] improve function name; NFC
  Sanjay Patel | 2016-12-10 | 1 file | -4/+6
  llvm-svn: 289332
* [mips] Eliminate else-after-return. NFC
  Simon Atanasyan | 2016-12-10 | 1 file | -4/+3
  llvm-svn: 289331
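The transformation is mechanical; a minimal before/after for reference:

```cpp
// Before: the 'else' only adds indentation, since the taken branch returns.
int pickBefore(bool Cond, int A, int B) {
  if (Cond)
    return A;
  else
    return B;
}

// After: same behavior, one less level of nesting.
int pickAfter(bool Cond, int A, int B) {
  if (Cond)
    return A;
  return B;
}
```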
* [SelectionDAG] Add ability for computeKnownBits to peek through bitcasts from 'large element' scalar/vector to 'small element' vector
  Simon Pilgrim | 2016-12-10 | 1 file | -1/+23
  Extension to D27129, which already supported bitcasts from 'small element' vector to 'large element' scalar/vector types.
  llvm-svn: 289329
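One way to picture the new direction (a toy sketch, not the SelectionDAG implementation): the known bits of one wide element slice into the known bits of the narrow elements it bitcasts to.

```cpp
#include <cstdint>

// Toy known-bits pairs: Zero marks bits proven 0, One marks bits proven 1.
struct Known64 { uint64_t Zero, One; };
struct Known32 { uint32_t Zero, One; };

// Split the known bits of a 64-bit element into its two 32-bit halves
// (little-endian lane order). Bits proven zero or one in the wide
// element remain proven in the corresponding narrow element.
static void splitKnownBits(const Known64 &Wide, Known32 &Lo, Known32 &Hi) {
  Lo = { (uint32_t)Wide.Zero, (uint32_t)Wide.One };
  Hi = { (uint32_t)(Wide.Zero >> 32), (uint32_t)(Wide.One >> 32) };
}
```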
* [AVR] Add a stub README file
  Dylan McKay | 2016-12-10 | 1 file | -0/+8
  llvm-svn: 289326
* [AVR] Fix and clean up the inline assembly tests
  Dylan McKay | 2016-12-10 | 1 file | -1/+3
  There was a bug where we would hit an assertion if 'Q' was used as a constraint. I also removed hardcoded register names to prefer regexes so the tests don't break when the register allocator changes.
  llvm-svn: 289325
* [AVR] Fix an inline asm assertion which would always trigger
  Dylan McKay | 2016-12-10 | 1 file | -1/+1
  It looks like at some time in the past, constraint codes were changed from chars being passed around to enums.
  llvm-svn: 289323
* [AVR] Use the register scavenger when expanding 'LDDW' instructions
  Dylan McKay | 2016-12-10 | 1 file | -25/+45
  Summary: This gets rid of the hardcoded 'r0' that was used previously.
  Reviewers: asl
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D27567
  llvm-svn: 289322
* [AVR] Support stores to undefined pointers
  Dylan McKay | 2016-12-10 | 1 file | -1/+2
  This would previously trigger an assertion error in AVRISelDAGToDAG.
  llvm-svn: 289321
* [PM] Support invalidation of inner analysis managers from a pass over the outer IR unit
  Chandler Carruth | 2016-12-10 | 5 files | -5/+132
  Summary: This never really got implemented, and was very hard to test before a lot of the refactoring changes to make things more robust. But now we can test it thoroughly and cleanly, especially at the CGSCC level.
  The core idea is that when an inner analysis manager proxy receives the invalidation event for the outer IR unit, it needs to walk the inner IR units and propagate it to the inner analysis manager for each of those units. For example, each function in the SCC needs to get an invalidation event when the SCC gets one.
  The function / module interaction is somewhat boring here. This really becomes interesting in the face of analysis-backed IR units. This patch effectively handles all of the CGSCC layer's needs -- both invalidating SCC analysis and invalidating function analysis when an SCC gets invalidated.
  However, this second aspect doesn't really handle the LoopAnalysisManager well at this point. That one will need some change of design in order to fully integrate, because unlike the call graph, the entire function behind a LoopAnalysis's results can vanish out from under us, and we won't even have a cached API to access. I'd like to try to separate solving the loop problems into a subsequent patch though in order to keep this more focused, so I've adapted them to the API and updated the tests that immediately fail, but I've not added the level of testing and validation at that layer that I have at the CGSCC layer.
  An important aspect of this change is that the proxy for the FunctionAnalysisManager at the SCC pass layer doesn't work like the other proxies for an inner IR unit, as it doesn't directly manage the FunctionAnalysisManager and invalidation or clearing of it. This would create an ever-worsening problem of dual ownership of this responsibility, split between the module-level FAM proxy and this SCC-level FAM proxy. Instead, this patch changes the SCC-level FAM proxy to work in terms of the module-level proxy and defer to it to handle much of the updates. It only does SCC-specific invalidation. This will become more important in subsequent patches that support more complex invalidation scenarios.
  Reviewers: jlebar
  Subscribers: mehdi_amini, mcrosier, mzolotukhin, llvm-commits
  Differential Revision: https://reviews.llvm.org/D27197
  llvm-svn: 289317
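A toy model of the propagation scheme described in the summary (hypothetical types; the real logic lives in the proxies' invalidate methods):

```cpp
#include <vector>

// Toy stand-ins for IR units and analysis managers.
struct Function { bool AnalysesValid = true; };
struct SCC { std::vector<Function *> Functions; };

struct FunctionAnalysisManager {
  void invalidate(Function &F) { F.AnalysesValid = false; }
};

// Inner-manager proxy: when the outer unit (the SCC) receives an
// invalidation event, walk the inner units and forward it, so no stale
// function analyses survive an SCC-level invalidation.
struct InnerAnalysisManagerProxy {
  FunctionAnalysisManager &FAM;
  void onOuterInvalidated(SCC &C) {
    for (Function *F : C.Functions)
      FAM.invalidate(*F);
  }
};
```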
* [X86] Use X86ISD::CVTTP2SI and X86ISD::CVTTP2UI for lowering 128-bit cvttps2qq and cvttps2uqq intrinsics since there is a mismatch between number of input and output elements
  Craig Topper | 2016-12-10 | 2 files | -7/+7
  Ideally ISD::FP_TO_SINT and ISD::FP_TO_UINT would only be used for cases with the same number of input and output elements. Similar things have already been done for other convert intrinsics.
  llvm-svn: 289316
* [AVR] Fix a bunch of incorrect assertion messages
  Dylan McKay | 2016-12-10 | 2 files | -5/+5
  These should've been checking whether the immediate is a 6-bit unsigned integer. If the immediate was '63', this would cause an assertion error which shouldn't have occurred.
  llvm-svn: 289315
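The corrected check presumably reduces to LLVM's isUInt template from MathExtras.h (a sketch; the exact assertions live in the AVR backend): a 6-bit unsigned immediate spans [0, 63], so 63 must pass.

```cpp
#include "llvm/Support/MathExtras.h"
#include <cassert>

// Sketch: isUInt<6>(63) is true (63 < 2^6), so an immediate of 63 must
// not trip the assertion; 64 and negative values must.
static void checkLddDisplacement(int64_t Imm) {
  assert(llvm::isUInt<6>(Imm) &&
         "expected an unsigned 6-bit displacement");
  (void)Imm; // silence unused-variable warnings in release builds
}
```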
* [libFuzzer] test cleanup (3)
  Kostya Serebryany | 2016-12-10 | 1 file | -1/+0
  llvm-svn: 289314
* [libFuzzer] test cleanup (2)
  Kostya Serebryany | 2016-12-10 | 1 file | -15/+0
  llvm-svn: 289313