path: root/llvm/lib
Commit log, most recent first. Each entry: subject; author, date (files changed, lines removed/added); commit message.
* [AVR] Add a 'relax memory operation' pass
  Dylan McKay, 2016-12-13 (5 files changed, -2/+157)
  Summary: This pass will be used to relax instructions which use out-of-bounds memory accesses to equivalent operations that can work with the addresses. The pass currently implements relaxation for the STDWPtrQRr instruction. Without this pass, an assertion error would be hit in the pseudo expansion pass. In the future, we will need to add more instructions to this pass. We can do that on a case-by-case basis.
  Reviewers: arsenm, kparzysz
  Subscribers: wdng, llvm-commits, mgorny
  Differential Revision: https://reviews.llvm.org/D27650
  llvm-svn: 289517
* [peephole] Enhance folding logic to work for STATEPOINTs
  Philip Reames, 2016-12-13 (2 files changed, -27/+26)
  The general idea here is to get enough of the existing restrictions out of the way that the already existing folding logic in foldMemoryOperand can kick in for STATEPOINTs and fold references to immutable stack slots. The key changes are:
  - Support for folding multiple operands at once which reference the same load
  - Support for folding multiple loads into a single instruction
  - Walk all the operands of the instruction for variadic instructions (this is a bug fix!)
  Once this lands, I'll post another patch which refactors the TII interface here. There's nothing actually x86-specific about the x86 code used here.
  Differential Revision: https://reviews.llvm.org/D24103
  llvm-svn: 289510
* [Statepoints] Reuse stack slots more than once within a basic block
  Philip Reames, 2016-12-13 (1 file changed, -4/+9)
  The stack slot reuse code had a really amusing bug. We ended up only reusing a stack slot exactly once (initial use + reuse) within a basic block. If we had a third statepoint to process, we ended up allocating a new set of stack slots. If we crossed a basic block boundary, the set got cleared. As a result, code which is invoke-heavy doesn't see the problem, but multiple calls within a basic block do. Net result: as we optimize invokes into calls, lowering gets worse.
  The root error here is that the bitmap used by the custom allocator wasn't kept in sync. The result was that we ended up resizing the bitmap on the next statepoint (to handle the cross-block case), reset the bit once, but then never reset it again.
  Differential Revision: https://reviews.llvm.org/D25243
  llvm-svn: 289509
* [libFuzzer] don't require extra flags with -minimize_crash=1 (default to -max_total_time=600). Also respect exact_artifact_path when outputting the end result
  Kostya Serebryany, 2016-12-13 (2 files changed, -10/+16)
  llvm-svn: 289506
* [libFuzzer] Implement Timers for Windows.
  Marcos Pividori, 2016-12-12 (1 file changed, -1/+32)
  Implemented timeouts for Windows using TimerQueueTimers. Timers are used to supervise the time of execution of the callback function that is being fuzzed.
  Differential Revision: https://reviews.llvm.org/D27237
  llvm-svn: 289495
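  As a rough illustration of the mechanism only (a hedged sketch, not libFuzzer's actual code; the timeout value, callback name, and exit behavior are assumptions), a TimerQueueTimer can arm a one-shot alarm before the target callback runs:

```cpp
#include <windows.h>
#include <cstdio>
#include <cstdlib>

// Fires on a timer-queue thread if the fuzzed callback runs too long.
static VOID CALLBACK AlarmHandler(PVOID /*Param*/, BOOLEAN /*TimerOrWaitFired*/) {
  std::fprintf(stderr, "ERROR: timeout\n");
  std::exit(1);
}

int main() {
  HANDLE Timer = nullptr;
  // One-shot timer: DueTime is in milliseconds, Period == 0 means fire once.
  if (!CreateTimerQueueTimer(&Timer, /*TimerQueue=*/nullptr, AlarmHandler,
                             /*Parameter=*/nullptr, /*DueTime=*/1200 * 1000,
                             /*Period=*/0, WT_EXECUTEINTIMERTHREAD))
    return 1;
  // ... run the callback under test here ...
  DeleteTimerQueueTimer(/*TimerQueue=*/nullptr, Timer, /*CompletionEvent=*/nullptr);
  return 0;
}
```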
* Avoid infinite loops in branch folding
  Andrew Kaylor, 2016-12-12 (1 file changed, -1/+13)
  Differential Revision: https://reviews.llvm.org/D27582
  llvm-svn: 289486
* [libFuzzer] split one slow test into several, for more parallel testing
  Kostya Serebryany, 2016-12-12 (4 files changed, -6/+7)
  llvm-svn: 289481
* Fix MSVC build after 289461; MSVC isn't sure if this is std:: or llvm::
  Nico Weber, 2016-12-12 (1 file changed, -2/+2)
  llvm-svn: 289480
* [libFuzzer] make SimpleCmpTest a bit simpler to crack and more verbose
  Kostya Serebryany, 2016-12-12 (1 file changed, -15/+26)
  llvm-svn: 289477
* [x86] fix formatting; NFC
  Sanjay Patel, 2016-12-12 (1 file changed, -1/+1)
  llvm-svn: 289476
* [AMDGPU, PowerPC, TableGen] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
  Eugene Zelenko, 2016-12-12 (10 files changed, -135/+192)
  llvm-svn: 289475
* [PPC] Prefer direct move on power8 if load 1 or 2 bytes to VSR
  Guozhi Wei, 2016-12-12 (2 files changed, -1/+9)
  Power8 has MTVSRWZ but no LXSIBZX/LXSIHZX, so moving 1 or 2 bytes to a VSR through MTVSRWZ is much faster than storing the extended value on the stack and loading it with LXSIWZX. This patch fixes pr31144.
  Differential Revision: https://reviews.llvm.org/D27287
  llvm-svn: 289473
* [APFloat] Implement PPCDoubleDouble add and subtract.
  Tim Shen, 2016-12-12 (1 file changed, -3/+207)
  Summary: I looked at libgcc's implementation (which is based on the paper "Software for Doubled-Precision Floating-Point Computations" by Seppo Linnainmaa, ACM TOMS vol. 7 no. 3, September 1981, pages 272-283) and made it generic to arbitrary IEEE floats.
  Differential Revision: https://reviews.llvm.org/D26817
  llvm-svn: 289472
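  For readers unfamiliar with the double-double representation, below is a minimal sketch of the classic error-free two-sum building block that this style of addition is built on. It illustrates the general technique only, under the usual (hi, lo) pair convention; it is not the APFloat code and omits the careful renormalization a production implementation needs.

```cpp
#include <cstdio>
#include <utility>

// Knuth's two-sum: returns the rounded sum and its exact rounding error.
static std::pair<double, double> TwoSum(double a, double b) {
  double s = a + b;
  double bb = s - a;
  double err = (a - (s - bb)) + (b - bb); // exact error of a + b
  return {s, err};
}

// (hi, lo) pairs represent hi + lo with |lo| much smaller than |hi|.
static std::pair<double, double> AddDD(double aHi, double aLo,
                                       double bHi, double bLo) {
  auto [sHi, sLo] = TwoSum(aHi, bHi);
  sLo += aLo + bLo;            // fold in the low-order parts
  double hi = sHi + sLo;       // renormalize into a fresh (hi, lo) pair
  double lo = sLo - (hi - sHi);
  return {hi, lo};
}

int main() {
  auto [hi, lo] = AddDD(1.0, 1e-20, 2.0, -3e-21);
  std::printf("%.17g + %.17g\n", hi, lo);
}
```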
* [SLP] Fix sign-extends for type-shrinking
  Matthew Simpson, 2016-12-12 (1 file changed, -15/+62)
  This patch ensures the correct minimum bit width during type-shrinking. Previously when type-shrinking, we always sign-extended values back to their original width. However, if we are going to sign-extend, and the sign bit is unknown, we have to increase the minimum bit width by one bit so the sign-extend will fill the upper bits correctly. If the sign bit is known to be zero, we can perform a zero-extend instead. This should fix PR31243.
  Reference: https://llvm.org/bugs/show_bug.cgi?id=31243
  Differential Revision: https://reviews.llvm.org/D27466
  llvm-svn: 289470
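  A small, hypothetical C++ analogue of the problem being fixed (illustrative only, not taken from the PR): if a value is shrunk to 8 bits and its sign bit may be set, sign-extending it back to 32 bits changes the value, so either one extra bit of width or a zero-extend is required.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  int8_t shrunk = (int8_t)200;   // 0xC8; the sign bit is set
  int32_t widened = shrunk;      // sign-extend: 0xFFFFFFC8 == -56, not 200
  uint8_t shrunkZ = 200;
  int32_t widenedZ = shrunkZ;    // zero-extend: 200, as intended
  std::printf("%d %d\n", (int)widened, (int)widenedZ);
  // If the sign bit of the shrunk value is unknown, one more bit (9 bits
  // here) keeps the top bit zero, so a sign-extend reproduces the value.
}
```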
* [libFuzzer] build libFuzzer itself with asan
  Kostya Serebryany, 2016-12-12 (3 files changed, -3/+4)
  llvm-svn: 289469
* Recommit r288212: Emit 'no line' information for interesting 'orphan' instructions.
  Paul Robinson, 2016-12-12 (3 files changed, -16/+64)
  DWARF specifies that "line 0" really means "no appropriate source location" in the line table. By default, use this for branch targets and some other cases that have no specified source location, to prevent inheriting unfortunate line numbers from physically preceding instructions (which might be from completely unrelated source). Updated patch allows enabling or suppressing this behavior for all unspecified source locations.
  Differential Revision: http://reviews.llvm.org/D24180
  llvm-svn: 289468
* [libFuzzer] respect -max_len during merge
  Kostya Serebryany, 2016-12-12 (3 files changed, -1/+8)
  llvm-svn: 289467
* [ThinLTO] Remove useless code (NFC)
  Teresa Johnson, 2016-12-12 (1 file changed, -4/+0)
  Should have been removed in r288446.
  llvm-svn: 289466
* Refactor BitcodeReader: move Metadata and ValueId handling into their own class/file
  Mehdi Amini, 2016-12-12 (6 files changed, -1395/+1694)
  Summary: I'm planning on changing the way we load metadata to enable laziness. I'm getting lost in this gigantic file and the gigantic class that is the bitcode reader. This is a first step toward splitting it into a few coarse components that are more easily understandable.
  Reviewers: pcc, tejohnson
  Subscribers: mgorny, llvm-commits, dexonsmith
  Differential Revision: https://reviews.llvm.org/D27646
  llvm-svn: 289461
* Remove IsMetadataMaterialized from BitcodeReader (NFC)
  Mehdi Amini, 2016-12-12 (1 file changed, -5/+1)
  Summary: It does not seem useful.
  Reviewers: pcc, dexonsmith
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D27668
  llvm-svn: 289457
* [LiveRangeEdit] Add assert string and descriptive comment.
  Geoff Berry, 2016-12-12 (1 file changed, -1/+3)
  llvm-svn: 289456
* Fix compile with GCC 5 or later
  Dimitry Andric, 2016-12-12 (1 file changed, -1/+1)
  Summary: Compiling with GCC 5 or later can fail with a bogus error "constructor required before non-static data member for llvm::ValueEnumerator::MDRange::First has been parsed". This was originally fixed upstream in GCC PR 70528, but later this fix was reverted, and released versions of GCC still show the bogus error. To work around this, replace MDRange's declaration of a default constructor with a definition.
  Reviewers: dexonsmith, rsmith, rivanvx
  Subscribers: llvm-commits, dim, dexonsmith
  Differential Revision: https://reviews.llvm.org/D18730
  llvm-svn: 289454
* Revert "[SCEVExpand] do not hoist divisions by zero (PR30935)"
  Reid Kleckner, 2016-12-12 (2 files changed, -59/+31)
  Reverts r289412. It caused an OOB PHI operand access in instcombine when ASan is enabled. Reduction in progress.
  Also reverts "[SCEVExpander] Add a test case related to r289412"
  llvm-svn: 289453
* [mips] For PIC code convert unconditional jump to unconditional branch
  Simon Atanasyan, 2016-12-12 (1 file changed, -0/+11)
  An unconditional branch uses relative addressing, which is the right choice for position-independent code. This is a fix for the bug: https://dmz-portal.mips.com/bugz/show_bug.cgi?id=2445
  Differential revision: https://reviews.llvm.org/D27483
  llvm-svn: 289448
* AMDGPU: llvm.amdgcn.interp.mov is a source of divergence
  Nicolai Haehnle, 2016-12-12 (1 file changed, -0/+1)
  Summary: While the result is constant across a single primitive, each pixel shader wave can have pixels from multiple primitives.
  Reviewers: tstellarAMD, arsenm
  Subscribers: kzhuravl, wdng, yaxunl, llvm-commits, tony-tye
  Differential Revision: https://reviews.llvm.org/D27572
  llvm-svn: 289447
* [InstCombine] fix bug when offsetting case values of a switch (PR31260)
  Sanjay Patel, 2016-12-12 (1 file changed, -25/+15)
  We could truncate the condition and then try to fold the add into the original condition value, causing wrong case constants to be used. Move the offset transform ahead of the truncate transform and return after each transform, so there's no chance of getting confused values.
  Fix for: https://llvm.org/bugs/show_bug.cgi?id=31260
  llvm-svn: 289442
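  A hypothetical source-level view of the offset transform involved (illustrative only; the actual bug and fix are at the IR level): a switch over "x + 2" can be rewritten as a switch over "x" with each case constant adjusted by -2, and applying the truncate transform first while reusing the old case constants is the kind of mix-up described above.

```cpp
// Original form: switch on an offset condition.
int classify(int x) {
  switch (x + 2) {
  case 3:  return 1;   // i.e. x == 1
  case 12: return 2;   // i.e. x == 10
  default: return 0;
  }
}

// After the offset transform: switch directly on x with adjusted constants.
int classify_offset(int x) {
  switch (x) {
  case 1:  return 1;
  case 10: return 2;
  default: return 0;
  }
}
```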
* [ThinLTO] Import only necessary DICompileUnit fields
  Teresa Johnson, 2016-12-12 (3 files changed, -5/+77)
  Summary: As discussed on the mailing list, for ThinLTO importing we don't need to import all the fields of the DICompileUnit. Don't import enums, macros, or retained types lists. Also, only import locally scoped imported entities. Since we don't currently import any global variables, we also don't need to import the list of global variables (added an assert to verify none are being imported).
  This is being done by pre-populating the value map entries to map the unneeded metadata to nullptr. For the imported entities, we can simply replace the source module's list with a new list containing only those needed imported entities. This is done in the IRLinker constructor so that value mapping automatically does the desired mapping.
  Reviewers: mehdi_amini, dexonsmith, dblaikie, aprantl
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D27635
  llvm-svn: 289441
* [InstCombine] clean up range-for-loops in visitSwitchInst(); NFCI
  Sanjay Patel, 2016-12-12 (1 file changed, -7/+7)
  llvm-svn: 289439
* Update inline argument comment. NFCI.
  Simon Pilgrim, 2016-12-12 (1 file changed, -3/+3)
  The combineX86ShufflesRecursively 'HasPSHUFB' flag has been the more generic 'HasVariableMask' flag for some time.
  llvm-svn: 289430
* [X86][SSE] Add support for combining SSE VSHLI/VSRLI uniform constant shifts.
  Simon Pilgrim, 2016-12-12 (1 file changed, -0/+33)
  Fixes some missed constant folding opportunities and allows us to combine shuffles that end with a logical bit shift.
  llvm-svn: 289429
* [X86][SSE] Lower suitably sign-extended mul vXi64 using PMULDQ
  Simon Pilgrim, 2016-12-12 (1 file changed, -4/+21)
  PMULDQ returns the 64-bit result of the signed multiplication of the lower 32 bits of vXi64 vector inputs; we can lower with this if the sign bits stretch that far.
  Differential Revision: https://reviews.llvm.org/D27657
  llvm-svn: 289426
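  A per-lane scalar model may help (a sketch of the arithmetic only, not compiler code): PMULDQ multiplies the sign-extended low 32 bits of each 64-bit lane, which matches the full 64-bit multiply exactly when both operands are already sign-extended from 32 bits.

```cpp
#include <cstdint>
#include <cstdio>

// What PMULDQ computes per 64-bit lane: the 64-bit product of the
// sign-extended low 32 bits of each input.
int64_t pmuldq_lane(int64_t a, int64_t b) {
  return (int64_t)(int32_t)a * (int64_t)(int32_t)b;
}

int main() {
  int64_t a = -7, b = 100000;   // both values fit in signed 32 bits
  // Identical results, so the cheaper instruction is a valid lowering here.
  std::printf("%lld %lld\n", (long long)pmuldq_lane(a, b), (long long)(a * b));
}
```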
* [SelectionDAG] Add support for EXTRACT_SUBVECTOR to ComputeNumSignBits
  Simon Pilgrim, 2016-12-12 (1 file changed, -0/+2)
  Pre-commit as discussed on D27657
  llvm-svn: 289425
* [X86] Teach selectScalarSSELoad to accept full 128-bit vector loads and the X86ISD::VZEXT_LOAD opcode.
  Craig Topper, 2016-12-12 (1 file changed, -0/+22)
  Disable peephole on some of the tests that no longer require it to properly fold scalar intrinsics.
  llvm-svn: 289424
* [X86] Change CMPSS/CMPSD intrinsic instructions to use sse_load_f32/f64 as their memory pattern instead of full vector load.
  Craig Topper, 2016-12-12 (1 file changed, -12/+12)
  These intrinsics only load a single element. We should use sse_load_f32/f64 to give more options of what loads they can match. Currently these instructions are often only getting their load folded thanks to the load folding in the peephole pass. I plan to add more types of loads to sse_load_f32/64 so we can match without the peephole.
  llvm-svn: 289423
* [X86] Remove some intrinsic instructions from hasPartialRegUpdate
  Craig Topper, 2016-12-12 (1 file changed, -8/+0)
  Summary: These intrinsic instructions are all selected from intrinsics that have well-defined behavior for where the upper bits come from. It's not the same place as the lower bits. As you can see, we were suppressing load folding for these instructions in some cases. In none of the cases was the separate load helping avoid a partial dependency on the destination register. So we should just go ahead and allow the load to be folded.
  Only foldMemoryOperand was suppressing folding for these. They all have patterns for folding sse_load_f32/f64 that aren't gated with OptForSize, but sse_load_f32/f64 doesn't allow 128-bit vector loads. It only allows scalar_to_vector and vzmovl of scalar loads to match. There's no reason we can't allow a 128-bit vector load to be narrowed, so I would like to fix sse_load_f32/f64 to allow that. And if I do that, it changes some of these same test cases to fold the load too.
  Reviewers: spatel, zvi, RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D27611
  llvm-svn: 289419
* [SCEVExpand] do not hoist divisions by zero (PR30935)
  Sebastian Pop, 2016-12-12 (2 files changed, -31/+59)
  SCEVExpand computes the insertion point for the components of a SCEV to be code generated. When it comes to generating code for a division, SCEVExpand would not be able to check (at compilation time) all the conditions necessary to avoid a division by zero. The patch disables hoisting of expressions containing divisions by anything other than non-zero constants in order to avoid hoisting these expressions past conditions that should hold before doing the division.
  The patch passes check-all on x86_64-linux.
  Differential Revision: https://reviews.llvm.org/D27216
  llvm-svn: 289412
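  A hypothetical source pattern showing why the hoist is restricted (not taken from the actual PR30935 reproducer): the division is only reachable once the guard has established a non-zero divisor, so expanded code must not place the division above the guard.

```cpp
// If SCEV-expanded code for "n / d" were hoisted out of the loop, above the
// "d != 0" check, it could divide by zero even though the original program
// never does.
long sum(long n, long d, long trip) {
  long acc = 0;
  for (long i = 0; i < trip; ++i) {
    if (d != 0)        // guard that must stay ahead of the division
      acc += n / d;
  }
  return acc;
}
```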
* [InstCombine][XOP] The instructions for the scalar frcz intrinsics are defined to put 0 in the upper bits, not pass bits through like other intrinsics. So we should return a zero vector instead.
  Craig Topper, 2016-12-11 (1 file changed, -2/+14)
  llvm-svn: 289411
* [X86][SSE] Add support for combining target shuffles to SHUFPD.
  Simon Pilgrim, 2016-12-11 (1 file changed, -17/+45)
  llvm-svn: 289407
* [SCCP] Use the appropriate helper function. NFCI.
  Davide Italiano, 2016-12-11 (1 file changed, -2/+2)
  llvm-svn: 289406
* [X86][AVX512] Add missing patterns for broadcast fallback in case load node has multiple uses (for v4i64 and v4f64).
  Ayman Musa, 2016-12-11 (1 file changed, -0/+6)
  When the load node which the broadcast instruction broadcasts has multiple uses, it cannot be folded. A fallback pattern is added to catch these cases and provide another solution.
  Differential Revision: https://reviews.llvm.org/D27661
  llvm-svn: 289404
* [TBAA] Don't generate invalid TBAA when merging nodes
  Sanjoy Das, 2016-12-11 (1 file changed, -1/+5)
  Summary: Fix a corner case in `MDNode::getMostGenericTBAA` where we can sometimes generate invalid TBAA metadata.
  Reviewers: chandlerc, hfinkel, mehdi_amini, manmanren
  Subscribers: mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26635
  llvm-svn: 289403
* [Verifier] Add verification for TBAA metadata
  Sanjoy Das, 2016-12-11 (1 file changed, -2/+265)
  Summary: This change adds some verification in the IR verifier around struct path TBAA metadata. Other than some basic sanity checks (e.g. we get constant integers where we expect constant integers), this checks:
  - That by the time a struct access tuple `(base-type, offset)` is "reduced" to a scalar base type, the offset is `0`. For instance, in C++ you can't start from, say `("struct-a", 16)`, and end up with `("int", 4)` -- by the time the base type is `"int"`, the offset better be zero. In particular, a variant of this invariant is needed for `llvm::getMostGenericTBAA` to be correct.
  - That there are no cycles in a struct path.
  - That struct type nodes have their offsets listed in an ascending order.
  - That when generating the struct access path, you eventually reach the access type listed in the tbaa tag node.
  Reviewers: dexonsmith, chandlerc, reames, mehdi_amini, manmanren
  Subscribers: mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26438
  llvm-svn: 289402
* [Constants] don't die processing non-ConstantInt GEP indices in isGEPWithNoNotionalOverIndexing() (PR31262)
  Sanjay Patel, 2016-12-11 (1 file changed, -6/+8)
  This should fix: https://llvm.org/bugs/show_bug.cgi?id=31262
  llvm-svn: 289401
* instr-combiner: sum up all latencies of the transformed instructions
  Sebastian Pop, 2016-12-11 (1 file changed, -2/+9)
  We have found that, when the selected subarchitecture has a scheduling model and we are not optimizing for size, the machine-instruction combiner uses a too-simple algorithm to compute the cost of one of the two alternatives [before and after running a combining pass on a section of code], and therefore it throws away the combination results too often.
  This fix can help any ISA that is able to combine instructions and for which at least one subarchitecture has a scheduling model. As of now, this is only known to definitely affect AArch64 subarchitectures with a scheduling model.
  Regression tested on AMD64/GNU-Linux; the new test case was verified to fail on an unpatched compiler and pass on a patched compiler.
  Patch by Abe Skolnik and Sebastian Pop.
  llvm-svn: 289399
* [SCEVExpander] Explicitly expand AddRec starts into loop preheader
  Sanjoy Das, 2016-12-11 (1 file changed, -5/+8)
  This is NFC today, but won't be once D27216 (or an equivalent patch) is in.
  This change fixes a design problem in SCEVExpander -- it relied on a hoisting optimization to generate correct code for add recurrences. This meant changing the hoisting optimization to not kick in under certain circumstances (to avoid speculating faulting instructions, say) would break correctness. The fix is to make the correctness requirements explicit, and have it not rely on the hoisting optimization for correctness.
  llvm-svn: 289397
* [X86] Regcall - Adding support for mask types
  Oren Ben Simhon, 2016-12-11 (3 files changed, -38/+62)
  The regcall calling convention passes mask-type arguments in x86 GPR registers. The review includes the changes required in order to support v32i1, v16i1 and v8i1.
  Differential Revision: https://reviews.llvm.org/D27148
  llvm-svn: 289383
* [X86][InstCombine] Add support for scalar FMA intrinsics to SimplifyDemandedVectorElts.
  Craig Topper, 2016-12-11 (1 file changed, -0/+29)
  This teaches SimplifyDemandedElts that the FMA can be removed if the lower element isn't used. It also teaches it that if upper elements of the first operand aren't used then we can simplify them.
  llvm-svn: 289377
* Tweak the core loop in StringRef::find to avoid calling memcmp on every iteration.
  Chandler Carruth, 2016-12-11 (1 file changed, -6/+12)
  Instead, load the byte at the needle length, compare it directly, and save it to use in the lookup table of lengths we can skip forward. I also added an annotation to expect that the comparison fails so that the loop gets laid out contiguously without the call to memcmp (and the substantial register shuffling that the ABI requires of that call). Finally, because this behaves especially badly with a needle length of one (by calling memcmp with a zero length), special-case that to call memchr directly, which is what we should have been doing anyways.
  This was motivated by the fact that there are a large number of test cases in 'check-llvm' where FileCheck's performance is dominated by calls to StringRef::find (in a release, no-asserts build). I'm working on patches to generally improve matters there, but this alone was worth a 12.5% improvement in one test case where FileCheck spent 92% of its time in this routine.
  I experimented a bunch with different minor variations on this theme, for example setting the pointer *at* the last byte and indexing backwards for the call to memcmp. That didn't improve anything on this version and seemed more complex. I also tried other things to make the loop flow more nicely and none worked. =/
  It is a bit unfortunate that the generated code here remains pretty gross, but I don't see any obvious ways to improve it. At this point, most of my ideas would be really elaborate:
  1) While the remainder of the string is long enough, we could load a 16-byte or 32-byte vector at the address of the last byte and use palignr to rotate that and check the first 15 or 31 bytes at the front of the next segment, essentially pre-loading the first several bytes of the next iteration so we could quickly detect a mismatch in those bytes without an additional memory access. The downside would be the code complexity, having a fallback loop, and a likely misaligned vector load. Plus it would make the common case of the last byte not matching somewhat slower (need some extraction from a vector).
  2) While we have space, we could do an aligned load of a 16- or 32-byte vector that *contains* the end byte, and use any preceding bytes to have a more precise "no" test, and any subsequent bytes could be saved for the next iteration. This removes any unaligned load penalty, but still requires us to pay the overhead of vector extraction for the cases where we didn't need to do anything other than load and compare the last byte.
  3) Try to walk from the last byte in a way that is more friendly to the cache and/or memory pre-fetcher, considering we have to poke the last byte anyways.
  No idea if any of these are really worth pursuing, though. They all seem somewhat unlikely to yield big wins in practice and to be a lot of work and complexity. So I settled here, which at least seems like a strict improvement over the previous version.
  llvm-svn: 289373
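  For context, below is a self-contained Horspool-style sketch of the "check a trailing byte first, then skip by a precomputed table" idea. It is an approximation of the approach described above with assumed conventions (for example, returning (size_t)-1 on failure); it is not the actual StringRef::find code.

```cpp
#include <cstddef>
#include <cstring>

size_t find_sketch(const char *Hay, size_t HaySize,
                   const char *Needle, size_t NeedleSize) {
  if (NeedleSize == 0)
    return 0;
  if (NeedleSize == 1) {
    // Needle of length one: memchr is both simpler and faster.
    const void *P = std::memchr(Hay, Needle[0], HaySize);
    return P ? (size_t)((const char *)P - Hay) : (size_t)-1;
  }
  if (NeedleSize > HaySize)
    return (size_t)-1;

  // Bad-character table: how far the window may advance when the last byte
  // of the current window does not start a match at this position.
  size_t Skip[256];
  for (size_t i = 0; i != 256; ++i)
    Skip[i] = NeedleSize;
  for (size_t i = 0; i + 1 < NeedleSize; ++i)
    Skip[(unsigned char)Needle[i]] = NeedleSize - 1 - i;

  for (size_t Pos = 0; Pos <= HaySize - NeedleSize;) {
    unsigned char Last = (unsigned char)Hay[Pos + NeedleSize - 1];
    // Compare the trailing byte directly; only call memcmp on a hit.
    if (Last == (unsigned char)Needle[NeedleSize - 1] &&
        std::memcmp(Hay + Pos, Needle, NeedleSize - 1) == 0)
      return Pos;
    Pos += Skip[Last];
  }
  return (size_t)-1;
}
```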
* [X86][InstCombine] Teach InstCombineCalls to simplify demanded elements for scalar FMA intrinsics.
  Craig Topper, 2016-12-11 (1 file changed, -0/+8)
  These intrinsics don't read the upper bits of their second and third inputs so we can try to simplify them.
  llvm-svn: 289372
* [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar cmp intrinsics with masking and rounding.
  Craig Topper, 2016-12-11 (1 file changed, -1/+3)
  These intrinsics don't read the upper elements of their first and second input. These are slightly different from the SSE version, which does use the upper bits of its first element as passthru bits since the result goes to an XMM register. For AVX-512 the result goes to a mask register instead.
  llvm-svn: 289371