Commit message | Author | Age | Files | Lines
...
* [ScheduleOptimizer] Fix memory leak. NFC. (Michael Kruse, 2016-12-12; 1 file, -1/+3)
  llvm-svn: 289434
* Use function_ref to avoid allocation in std::function. NFC. (Benjamin Kramer, 2016-12-12; 2 files, -2/+4)
  llvm-svn: 289433
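  A minimal sketch of the idiom (an assumed example, not the call sites this commit touched): llvm::function_ref is a non-owning view of a callable, so passing a capturing lambda never heap-allocates, while std::function owns its callable and may.

  ```cpp
  #include "llvm/ADT/STLExtras.h" // provides llvm::function_ref
  #include <functional>

  // std::function owns its callable and may heap-allocate to store captures.
  static int applyOwning(const std::function<int(int)> &F) { return F(21); }

  // function_ref merely points at the callable; no allocation ever happens.
  static int applyView(llvm::function_ref<int(int)> F) { return F(21); }

  int main() {
    int Factor = 2;
    int A = applyOwning([&](int X) { return X * Factor; }); // may allocate
    int B = applyView([&](int X) { return X * Factor; });   // never allocates
    return A == B ? 0 : 1;
  }
  ```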
* [ELF][MIPS] Fix .MIPS.options ri_gp_value on MIPS64 (Simon Atanasyan, 2016-12-12; 3 files, -7/+11)
  The VA of _gp was being truncated to 32 bits when calling getVA(), but for
  64-bit MIPS we need to write a 64-bit value to .MIPS.options.
  Patch by Alexander Richardson.
  Differential revision: https://reviews.llvm.org/D27672
  llvm-svn: 289432
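  A self-contained illustration of the bug class (not lld's code; the address value is made up): storing a 64-bit VA through a 32-bit write silently drops the high bits.

  ```cpp
  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t GpVA = 0x120037ff0ULL;                // a 64-bit _gp VA (made up)
    uint32_t Narrow = static_cast<uint32_t>(GpVA); // what a 32-bit write keeps
    std::printf("64-bit VA: 0x%llx, truncated: 0x%x\n",
                (unsigned long long)GpVA, Narrow); // high bits are gone
    return 0;
  }
  ```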
* [StaticAnalysis] Remove unnecessary parameter in CallGraphNode::addCallee. (Haojian Wu, 2016-12-12; 2 files, -3/+3)
  Summary: Remove the CallGraph parameter from addCallee, as it is not used
  there. This decouples addCallee from CallGraph so that CallGraphNode can be
  used within a customized CallGraph.
  Reviewers: bkramer
  Subscribers: cfe-commits, ioeric
  Differential Revision: https://reviews.llvm.org/D27674
  llvm-svn: 289431
* Update inline argument comment. NFCI. (Simon Pilgrim, 2016-12-12; 1 file, -3/+3)
  combineX86ShufflesRecursively's 'HasPSHUFB' flag has been the more generic
  'HasVariableMask' flag for some time.
  llvm-svn: 289430
* [X86][SSE] Add support for combining SSE VSHLI/VSRLI uniform constant shifts. (Simon Pilgrim, 2016-12-12; 6 files, -46/+58)
  Fixes some missed constant folding opportunities and allows us to combine
  shuffles that end with a logical bit shift.
  llvm-svn: 289429
* clang-format: Separate out a language kind for ObjC. (Daniel Jasper, 2016-12-12; 10 files, -744/+858)
  While C(++) and ObjC are generally formatted the same way and can be mixed,
  people might want to choose different styles based on the language. This
  patch recognizes .m and .mm files as ObjC and also implements a very crude
  detection of whether or not a .h file contains ObjC code. This can be
  improved over time.
  Also move most of the ObjC tests into their own test file to keep file size
  maintainable.
  llvm-svn: 289428
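  A rough sketch of what such a "very crude detection" might look like (the marker list and function name are assumptions, not the patch's actual heuristic): scan the header for tokens that essentially never appear in plain C/C++.

  ```cpp
  #include "llvm/ADT/StringRef.h"

  static bool looksLikeObjC(llvm::StringRef Code) {
    // Tokens that only Objective-C sources plausibly contain.
    static const char *const Markers[] = {"@interface", "@implementation",
                                          "@protocol", "@end", "#import"};
    for (const char *M : Markers)
      if (Code.find(M) != llvm::StringRef::npos)
        return true;
    return false;
  }
  ```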
* Remove some annotations from TestMultipleTargets (Pavel Labath, 2016-12-12; 1 file, -8/+2)
  The test passes on linux. The i386 case is already handled by
  skipIfHostIncompatibleWithRemote.
  llvm-svn: 289427
* [X86][SSE] Lower suitably sign-extended mul vXi64 using PMULDQ (Simon Pilgrim, 2016-12-12; 4 files, -149/+59)
  PMULDQ returns the 64-bit result of the signed multiplication of the lower
  32 bits of vXi64 vector inputs; we can lower with this if the sign bits
  stretch that far.
  Differential Revision: https://reviews.llvm.org/D27657
  llvm-svn: 289426
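  The legality condition in miniature (a sketch; the real lowering queries SelectionDAG::ComputeNumSignBits on each operand): a 64-bit lane whose sign bits reach past bit 31 is fully determined by its low 32 bits, which is exactly what PMULDQ sign-extends.

  ```cpp
  // PMULDQ multiplies the low 32 bits of each 64-bit lane, sign-extended to
  // 64 bits. If both operands have at least 33 sign bits, the low 32 bits
  // carry the whole value, so the full i64 multiply can be lowered this way.
  static bool canLowerMulToPMULDQ(unsigned SignBitsLHS, unsigned SignBitsRHS) {
    return SignBitsLHS >= 33 && SignBitsRHS >= 33;
  }
  ```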
* [SelectionDAG] Add support for EXTRACT_SUBVECTOR to ComputeNumSignBits (Simon Pilgrim, 2016-12-12; 2 files, -40/+18)
  Pre-commit as discussed on D27657.
  llvm-svn: 289425
* [X86] Teach selectScalarSSELoad to accept full 128-bit vector loads and the X86ISD::VZEXT_LOAD opcode. (Craig Topper, 2016-12-12; 4 files, -12/+34)
  Disable peephole on some of the tests that no longer require it to properly
  fold scalar intrinsics.
  llvm-svn: 289424
* [X86] Change CMPSS/CMPSD intrinsic instructions to use sse_load_f32/f64 as their memory pattern instead of full vector load. (Craig Topper, 2016-12-12; 1 file, -12/+12)
  These intrinsics only load a single element. We should use sse_load_f32/f64
  to give more options for what loads can match. Currently these instructions
  often only get their load folded thanks to the load folding in the peephole
  pass. I plan to add more types of loads to sse_load_f32/f64 so we can match
  without the peephole.
  llvm-svn: 289423
* [Driver] Simplify ToolChain::GetCXXStdlibType (NFC) (Jonas Hahnfeld, 2016-12-12; 1 file, -34/+13)
  I made the wrong assumption that execution would continue after an error
  Diag, which led to unneeded complex code. This patch aligns with the better
  implementation of ToolChain::GetRuntimeLibType.
  Differential Revision: https://reviews.llvm.org/D25669
  llvm-svn: 289422
* build: add support for standalone lld build (Saleem Abdulrasool, 2016-12-12; 3 files, -2/+61)
  Enable building lld as a standalone project. This is motivated by the
  desire to package lld for inclusion in a Linux distribution, and allows
  building lld against an existing paired llvm installation. Now that lld is
  usable on x86_64, it makes sense to revive this configuration so that
  distributions can package it.
  llvm-svn: 289421
* [XRay][CMake] Check target for XRay Flight Data Recorder (Petr Hosek, 2016-12-12; 1 file, -0/+8)
  This target doesn't currently do anything, but it is required by the
  runtimes build.
  Differential Revision: https://reviews.llvm.org/D27640
  llvm-svn: 289420
* [X86] Remove some intrinsic instructions from hasPartialRegUpdate (Craig Topper, 2016-12-12; 6 files, -22/+25)
  Summary: These intrinsic instructions are all selected from intrinsics that
  have well-defined behavior for where the upper bits come from; it's not the
  same place as the lower bits. As you can see, we were suppressing load
  folding for these instructions in some cases. In none of the cases was the
  separate load helping avoid a partial dependency on the destination
  register, so we should just go ahead and allow the load to be folded.
  Only foldMemoryOperand was suppressing folding for these. They all have
  patterns for folding sse_load_f32/f64 that aren't gated with OptForSize,
  but sse_load_f32/f64 doesn't allow 128-bit vector loads; it only allows
  scalar_to_vector and vzmovl of scalar loads to match. There's no reason we
  can't allow a 128-bit vector load to be narrowed, so I would like to fix
  sse_load_f32/f64 to allow that. And if I do that, it changes some of these
  same test cases to fold the load too.
  Reviewers: spatel, zvi, RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D27611
  llvm-svn: 289419
* [libcxx][CMake] Move the warning to HandleOutOfTreeLLVM (Petr Hosek, 2016-12-12; 2 files, -7/+4)
  This currently gives a warning when building libcxx under runtimes.
  Differential Revision: https://reviews.llvm.org/D27643
  llvm-svn: 289418
* COFF: Remove an unused mutex declaration. (Peter Collingbourne, 2016-12-12; 1 file, -2/+0)
  llvm-svn: 289415
* COFF: Use a DenseSet instead of a map to atomic_flag to track which archive members have been read. (Peter Collingbourne, 2016-12-12; 2 files, -12/+3)
  Differential Revision: https://reviews.llvm.org/D27667
  llvm-svn: 289414
* Add two new AST nodes to represent initialization of an array in terms of initialization of each array element: (Richard Smith, 2016-12-12; 22 files, -29/+545)
  * ArrayInitLoopExpr is a prvalue of array type with two subexpressions: a
    common expression (an OpaqueValueExpr) that represents the up-front
    computation of the source of the initialization, and a subexpression
    representing a per-element initializer
  * ArrayInitIndexExpr is a prvalue of type size_t representing the current
    position in the loop
  These will be used to replace the creation of explicit index variables in
  lambda capture of arrays and copy/move construction of classes with array
  elements, and also C++17 structured bindings of arrays by value (which
  inexplicably allow copying an array by value, unlike all of C++'s other
  array declarations). No uses of these nodes are introduced by this change,
  however.
  llvm-svn: 289413
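  For reference, the C++ constructs these nodes are meant to model (ordinary source code, a sketch; the structured-bindings case is analogous):

  ```cpp
  struct HasArray {
    int Data[4];
    // The implicit copy constructor copies Data element by element; the new
    // nodes let the AST express that per-element loop directly.
  };

  void demo() {
    int A[3] = {1, 2, 3};
    auto L = [A] { return A[0]; }; // lambda capture copies the array by value
    HasArray X{};
    HasArray Y = X;                // copy construction with an array member
  }
  ```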
* [SCEVExpand] do not hoist divisions by zero (PR30935) (Sebastian Pop, 2016-12-12; 4 files, -31/+229)
  SCEVExpand computes the insertion point for the components of a SCEV to be
  code generated. When it comes to generating code for a division,
  SCEVExpand is not able to check (at compilation time) all the conditions
  necessary to avoid a division by zero. The patch disables hoisting of
  expressions containing divisions by anything other than non-zero constants,
  in order to avoid hoisting these expressions past conditions that should
  hold before doing the division.
  The patch passes check-all on x86_64-linux.
  Differential Revision: https://reviews.llvm.org/D27216
  llvm-svn: 289412
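  The hazard in plain C++ (an illustrative example, not the PR30935 reproducer): the trip-count expression Len / N divides by a non-constant, so expanding it above the N != 0 guard would introduce a division by zero.

  ```cpp
  int sumEveryNth(const int *A, int Len, int N) {
    int Sum = 0;
    if (N != 0)                         // guard that must dominate the division
      for (int I = 0; I < Len / N; ++I) // trip-count SCEV contains a division
        Sum += A[I * N];
    return Sum;
  }
  ```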
* [InstCombine][XOP] The instructions for the scalar frcz intrinsics are defined to put 0 in the upper bits, not pass bits through like other intrinsics. (Craig Topper, 2016-12-11; 2 files, -4/+16)
  So we should return a zero vector instead.
  llvm-svn: 289411
* COFF: Use CachedHashStringRef in the symbol table. (Peter Collingbourne, 2016-12-11; 2 files, -4/+5)
  This resulted in about a 1% perf improvement linking chrome_child.dll.
  Differential Revision: https://reviews.llvm.org/D27666
  llvm-svn: 289410
* COFF: Load inputs immediately instead of adding them to a queue. (Peter Collingbourne, 2016-12-11; 4 files, -133/+49)
  This patch replaces the symbol table's object and archive queues, as well
  as the convergent loop in the linker driver, with a design more similar to
  the ELF linker, where symbol resolution directly causes input files to be
  added to the link, including input files arising from linker directives.
  Effectively this removes the last vestiges of the old parallel input file
  loader.
  Differential Revision: https://reviews.llvm.org/D27660
  llvm-svn: 289409
* COFF: Use a bit in SymbolBody to track which symbols are written to the symbol table. (Peter Collingbourne, 2016-12-11; 2 files, -3/+9)
  Using a set here caused us to take about 1 second longer to write the
  symbol table when linking chrome_child.dll. With this I consistently get
  better performance on Windows with the new symbol table.
  Before r289280 and with r289183 reverted (median of 5 runs): 17.65s
  After this change: 17.33s
  On Linux things look even better:
  Before: 10.700480444s
  After: 5.735681610s
  Differential Revision: https://reviews.llvm.org/D27648
  llvm-svn: 289408
* [X86][SSE] Add support for combining target shuffles to SHUFPD. (Simon Pilgrim, 2016-12-11; 4 files, -27/+52)
  llvm-svn: 289407
* [SCCP] Use the appropriate helper function. NFCI. (Davide Italiano, 2016-12-11; 1 file, -2/+2)
  llvm-svn: 289406
* Fix TBAA metadata (Sanjoy Das, 2016-12-11; 1 file, -1/+3)
  The existing TBAA metadata in the test is ill-formed and fails the verifier
  after r289402.
  llvm-svn: 289405
* [X86][AVX512] Add missing patterns for broadcast fallback in case the load node has multiple uses (for v4i64 and v4f64). (Ayman Musa, 2016-12-11; 2 files, -0/+79)
  When the load node which the broadcast instruction broadcasts has multiple
  uses, it cannot be folded. A fallback pattern is added to catch these cases
  and provide another solution.
  Differential Revision: https://reviews.llvm.org/D27661
  llvm-svn: 289404
* [TBAA] Don't generate invalid TBAA when merging nodes (Sanjoy Das, 2016-12-11; 2 files, -7/+39)
  Summary: Fix a corner case in `MDNode::getMostGenericTBAA` where we can
  sometimes generate invalid TBAA metadata.
  Reviewers: chandlerc, hfinkel, mehdi_amini, manmanren
  Subscribers: mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26635
  llvm-svn: 289403
* [Verifier] Add verification for TBAA metadata (Sanjoy Das, 2016-12-11; 36 files, -63/+454)
  Summary: This change adds some verification in the IR verifier around
  struct path TBAA metadata. Other than some basic sanity checks (e.g. we get
  constant integers where we expect constant integers), this checks:
  - That by the time a struct access tuple `(base-type, offset)` is "reduced"
    to a scalar base type, the offset is `0`. For instance, in C++ you can't
    start from, say, `("struct-a", 16)` and end up with `("int", 4)` -- by
    the time the base type is `"int"`, the offset had better be zero. In
    particular, a variant of this invariant is needed for
    `llvm::getMostGenericTBAA` to be correct.
  - That there are no cycles in a struct path.
  - That struct type nodes have their offsets listed in ascending order.
  - That when generating the struct access path, you eventually reach the
    access type listed in the tbaa tag node.
  Reviewers: dexonsmith, chandlerc, reames, mehdi_amini, manmanren
  Subscribers: mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26438
  llvm-svn: 289402
* [Constants] don't die processing non-ConstantInt GEP indices in isGEPWithNoNotionalOverIndexing() (PR31262) (Sanjay Patel, 2016-12-11; 2 files, -6/+19)
  This should fix: https://llvm.org/bugs/show_bug.cgi?id=31262
  llvm-svn: 289401
* [X86][AVX512] Add target shuffle test showing missing PSHUFPD combine. (Simon Pilgrim, 2016-12-11; 1 file, -0/+16)
  llvm-svn: 289400
* instr-combiner: sum up all latencies of the transformed instructions (Sebastian Pop, 2016-12-11; 4 files, -15/+64)
  We have found that -- when the selected subarchitecture has a scheduling
  model and we are not optimizing for size -- the machine-instruction
  combiner uses a too-simple algorithm to compute the cost of one of the two
  alternatives [before and after running a combining pass on a section of
  code], and therefore it throws away the combination results too often.
  This fix has the potential to help any ISA with the potential to combine
  instructions and for which at least one subarchitecture has a scheduling
  model. As of now, this is only known to definitely affect AArch64
  subarchitectures with a scheduling model.
  Regression tested on AMD64/GNU-Linux; the new test case fails on an
  unpatched compiler and passes on a patched compiler.
  Patch by Abe Skolnik and Sebastian Pop.
  llvm-svn: 289399
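  The costing idea in miniature (a sketch only; the actual change lives inside MachineCombiner's cost computation): compare the summed scheduling-model latencies of the whole old and new sequences rather than a cruder proxy.

  ```cpp
  #include <numeric>
  #include <vector>

  static bool combinationIsProfitable(const std::vector<unsigned> &OldLat,
                                      const std::vector<unsigned> &NewLat) {
    // Sum up all latencies of the transformed instructions, per the subject.
    unsigned OldCost = std::accumulate(OldLat.begin(), OldLat.end(), 0u);
    unsigned NewCost = std::accumulate(NewLat.begin(), NewLat.end(), 0u);
    return NewCost <= OldCost;
  }
  ```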
* [X86][XOP] Add target shuffle tests showing missing PSHUFPD combine. (Simon Pilgrim, 2016-12-11; 1 file, -0/+28)
  llvm-svn: 289398
* [SCEVExpander] Explicitly expand AddRec starts into loop preheader (Sanjoy Das, 2016-12-11; 1 file, -5/+8)
  This is NFC today, but won't be once D27216 (or an equivalent patch) is in.
  This change fixes a design problem in SCEVExpander -- it relied on a
  hoisting optimization to generate correct code for add recurrences. This
  meant that changing the hoisting optimization to not kick in under certain
  circumstances (to avoid speculating faulting instructions, say) would break
  correctness. The fix is to make the correctness requirements explicit, and
  have them not rely on the hoisting optimization.
  llvm-svn: 289397
* [X86] Regcall - Adding support for mask types (Oren Ben Simhon, 2016-12-11; 4 files, -46/+225)
  The regcall calling convention passes mask-type arguments in x86 GPR
  registers. This review includes the changes required to support v32i1,
  v16i1, and v8i1.
  Differential Revision: https://reviews.llvm.org/D27148
  llvm-svn: 289383
* [FileCheck] Re-implement the logic to find each check prefix in the check file to not be unreasonably slow in the face of multiple check prefixes. (Chandler Carruth, 2016-12-11; 1 file, -93/+94)
  The previous logic would repeatedly scan potentially large portions of the
  check file looking for alternative prefixes. In the worst case this would
  scan most of the file looking for a rare prefix between every single
  occurrence of a common prefix. Even if we bounded the scan, this would do
  bad things if the order of the prefixes was "unlucky" and the distant
  prefix was scanned for first.
  None of this is necessary. It is straightforward to build a state machine
  that recognizes the first, longest of the set of alternative prefixes. That
  is in fact exactly what a regular expression does. This patch builds a
  regular expression once for the set of prefixes and then uses it to search
  incrementally for the next prefix. This requires some threading of state
  but actually makes the code dramatically simpler. I've also added a big
  comment describing the algorithm, as it was not at all obvious to me when I
  started.
  With this patch, several previously pathological test cases in
  test/CodeGen/X86 are 5x and more faster. Overall, running all tests under
  test/CodeGen/X86 uses 10% less CPU after this, and because all the slowest
  tests were hitting this, finishes in 40% less wall time on my system (going
  from just over 5.38s to just over 3.23s) on a release build! This patch
  substantially improves the time of all 7 X86 tests that were in the top 20
  reported by --time-tests; 5 of them are completely off the list and the
  remaining 2 are much lower. (Sadly, the new list includes 2 new X86 tests
  that are slow for unrelated reasons, so the count stays at 4 of the top
  20.)
  It isn't clear how much this helps debug builds in aggregate, in part
  because of the noise, but it again makes many of the slowest X86 tests
  significantly faster (10% or more improvement).
  llvm-svn: 289382
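  A sketch of the shape of this approach (assumed code, not FileCheck's): since llvm::Regex uses POSIX leftmost-longest matching, one alternation over all prefixes finds the first, longest prefix in a single pass, and the buffer is advanced past each match incrementally.

  ```cpp
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/Regex.h"

  // Find the next occurrence of any prefix; shrink Buffer past it as we go.
  static bool findNextPrefix(llvm::Regex &PrefixRE, llvm::StringRef &Buffer,
                             llvm::StringRef &Prefix) {
    llvm::SmallVector<llvm::StringRef, 2> Matches;
    if (!PrefixRE.match(Buffer, &Matches))
      return false;
    Prefix = Matches[0]; // leftmost-longest alternative, e.g. "CHECK-NEXT"
    size_t Pos = Matches[0].data() - Buffer.data();
    Buffer = Buffer.substr(Pos + Matches[0].size());
    return true;
  }

  // usage: llvm::Regex RE("CHECK-NEXT|CHECK-NOT|CHECK"); // built only once
  ```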
* [FileCheck] Remove a parameter that was simply always set to a command-line flag, and test the flag directly. NFC. (Chandler Carruth, 2016-12-11; 1 file, -9/+4)
  If we ever need this generality it can be added back.
  llvm-svn: 289381
* [FileCheck] Clean up doxygen comments throughout. NFC. (Chandler Carruth, 2016-12-11; 1 file, -70/+62)
  llvm-svn: 289380
* [FileCheck] Run clang-format over this code. NFC. (Chandler Carruth, 2016-12-11; 1 file, -118/+108)
  This fixes one formatting goof I left in my previous commit and *many*
  other inconsistencies. I'm planning to make substantial changes here and so
  wanted to get to a clean baseline.
  llvm-svn: 289379
* Refactor FileCheck to reduce memory allocation and copying; also make some readability improvements. (Chandler Carruth, 2016-12-11; 1 file, -87/+90)
  Both the check file and input file have to be fully buffered to normalize
  their whitespace. But previously this would be done in a stack SmallString
  and then copied into a heap-allocated MemoryBuffer. That seems pretty
  wasteful, especially for something like FileCheck where there are only ever
  two such entities. This just rearranges the code so that we can keep the
  canonicalized buffers on the stack of the main function, using reasonably
  large stack buffers to reduce allocation. A rough estimate suggests that
  about 80% of LLVM's .ll and .s files will fit into a 4K buffer, so this
  should completely avoid heap allocation for the buffer in those cases.
  My system's malloc is fast enough that the allocations don't directly show
  up in timings. However, on some very slow test cases, this saves 1% - 2% by
  avoiding the copy into the heap-allocated buffer.
  This also splits out the code which checks the input into a helper, much
  like the code to build the checks, as that made the code much more readable
  to me. Nitpicks and suggestions welcome here. It has really exposed a
  *bunch* of stuff that could be cleaned up though, so I'm probably going to
  go and spring-clean all of this code as I have more changes coming to speed
  things up.
  llvm-svn: 289378
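  The buffering pattern in miniature (names assumed and the canonicalization rule is a placeholder; the 4K figure follows the message): keep a SmallString on main's stack so typical files never touch the heap.

  ```cpp
  #include "llvm/ADT/SmallString.h"
  #include "llvm/ADT/StringRef.h"

  // Placeholder canonicalization: here it just strips '\r'; the real routine
  // normalizes whitespace.
  static llvm::StringRef canonicalize(llvm::StringRef Input,
                                      llvm::SmallVectorImpl<char> &Storage) {
    for (char C : Input)
      if (C != '\r')
        Storage.push_back(C);
    return llvm::StringRef(Storage.data(), Storage.size());
  }

  int main() {
    llvm::SmallString<4096> CheckBuf, InputBuf; // stack-backed up to 4K each
    llvm::StringRef Check = canonicalize("; CHECK: foo\r\n", CheckBuf);
    llvm::StringRef Input = canonicalize("foo\r\n", InputBuf);
    return Check.empty() || Input.empty();
  }
  ```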
* [X86][InstCombine] Add support for scalar FMA intrinsics to SimplifyDemandedVectorElts. (Craig Topper, 2016-12-11; 2 files, -0/+221)
  This teaches SimplifyDemandedElts that the FMA can be removed if the lower
  element isn't used. It also teaches it that if upper elements of the first
  operand aren't used then we can simplify them.
  llvm-svn: 289377
* [sanitizer] Make sure libmalloc doesn't remove the sanitizer zone from malloc_zones[0] (Kuba Mracek, 2016-12-11; 1 file, -0/+23)
  In certain OS versions, it was possible for libmalloc to stop the sanitizer
  zone from being the default zone (i.e. being in malloc_zones[0]). This
  patch introduces a failsafe that makes sure we always stay the default
  zone. No testcase for this, because it doesn't reproduce under normal
  circumstances.
  Differential Revision: https://reviews.llvm.org/D27083
  llvm-svn: 289376
* [sanitizer] Handle malloc_destroy_zone() on Darwin (Kuba Mracek, 2016-12-11; 2 files, -0/+34)
  We currently have an interceptor for malloc_create_zone, which returns a
  new zone that redirects all zone requests to our sanitizer zone. However,
  calling malloc_destroy_zone on that zone causes libmalloc to print out some
  warning messages, because the zone is not registered in the list of zones.
  This patch handles this and adds a testcase for it.
  Differential Revision: https://reviews.llvm.org/D27083
  llvm-svn: 289375
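  For context, the Darwin API pair involved, as ordinary client code (macOS-only; not the interceptor itself):

  ```cpp
  #include <malloc/malloc.h>

  int main() {
    // Under the sanitizers this zone is intercepted and redirects to the
    // sanitizer zone; it must still be destroyable without libmalloc warnings.
    malloc_zone_t *Zone = malloc_create_zone(/*start_size=*/0, /*flags=*/0);
    void *P = malloc_zone_malloc(Zone, 16);
    malloc_zone_free(Zone, P);
    malloc_destroy_zone(Zone); // the call this commit teaches asan to handle
    return 0;
  }
  ```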
* [X86][InstCombine] Add the test cases for r289370, r289371, and r289372. (Craig Topper, 2016-12-11; 2 files, -0/+444)
  I forgot to add the new files before committing.
  llvm-svn: 289374
* Tweak the core loop in StringRef::find to avoid calling memcmp on every iteration. (Chandler Carruth, 2016-12-11; 1 file, -6/+12)
  Instead, load the byte at the needle length, compare it directly, and save
  it to use in the lookup table of lengths we can skip forward. I also added
  an annotation to expect that the comparison fails so that the loop gets
  laid out contiguously without the call to memcmp (and the substantial
  register shuffling that the ABI requires of that call).
  Finally, because this behaves especially badly with a needle length of one
  (by calling memcmp with a zero length), special-case that to call memchr
  directly, which is what we should have been doing anyway.
  This was motivated by the fact that there are a large number of test cases
  in 'check-llvm' where FileCheck's performance is dominated by calls to
  StringRef::find (in a release, no-asserts build). I'm working on patches to
  generally improve matters there, but this alone was worth a 12.5%
  improvement in one test case where FileCheck spent 92% of its time in this
  routine.
  I experimented a bunch with different minor variations on this theme, for
  example setting the pointer *at* the last byte and indexing backwards for
  the call to memcmp. That didn't improve anything on this version and seemed
  more complex. I also tried other things to make the loop flow more nicely
  and none worked. =/
  It is a bit unfortunate; the generated code here remains pretty gross, but
  I don't see any obvious ways to improve it. At this point, most of my ideas
  would be really elaborate:
  1) While the remainder of the string is long enough, we could load a
     16-byte or 32-byte vector at the address of the last byte and use
     palignr to rotate that and check the first 15 or 31 bytes at the front
     of the next segment, essentially pre-loading the first several bytes of
     the next iteration so we could quickly detect a mismatch in those bytes
     without an additional memory access. The downsides would be the code
     complexity, needing a fallback loop, and a likely misaligned vector
     load. Plus it would make the common case of the last byte not matching
     somewhat slower (we'd need some extraction from a vector).
  2) While we have space, we could do an aligned load of a 16- or 32-byte
     vector that *contains* the end byte, use any preceding bytes for a more
     precise "no" test, and save any subsequent bytes for the next iteration.
     This removes any unaligned-load penalty, but still requires us to pay
     the overhead of vector extraction in the cases where we didn't need to
     do anything other than load and compare the last byte.
  3) Try to walk from the last byte in a way that is more friendly to the
     cache and/or memory pre-fetcher, considering we have to poke the last
     byte anyway.
  No idea if any of these are really worth pursuing though. They all seem
  somewhat unlikely to yield big wins in practice and to be a lot of work and
  complexity. So I settled here, which at least seems like a strict
  improvement over the previous version.
  llvm-svn: 289373
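  A simplified sketch of the loop strategy (not LLVM's exact code, which also builds a Boyer-Moore-style skip table from the saved byte): test one byte of the window directly before paying for memcmp, and special-case length-1 needles with memchr.

  ```cpp
  #include <cstddef>
  #include <cstring>

  static const char *findSub(const char *Hay, size_t HayLen,
                             const char *Needle, size_t NeedleLen) {
    if (NeedleLen == 0)
      return Hay;
    if (NeedleLen == 1) // avoid memcmp with length zero below
      return static_cast<const char *>(memchr(Hay, Needle[0], HayLen));
    if (NeedleLen > HayLen)
      return nullptr;
    const char Last = Needle[NeedleLen - 1];
    for (size_t I = 0; I + NeedleLen <= HayLen; ++I) {
      if (Hay[I + NeedleLen - 1] != Last) // cheap direct test, usually fails
        continue;
      if (memcmp(Hay + I, Needle, NeedleLen - 1) == 0) // last byte already ok
        return Hay + I;
    }
    return nullptr;
  }
  ```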
* [X86][InstCombine] Teach InstCombineCalls to simplify demanded elements for scalar FMA intrinsics. (Craig Topper, 2016-12-11; 1 file, -0/+8)
  These intrinsics don't read the upper bits of their second and third
  inputs, so we can try to simplify them.
  llvm-svn: 289372
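  An illustration with the corresponding user-level intrinsic (requires FMA support, e.g. -mfma; not InstCombine code): only lane 0 of B and C feeds the result, and lanes 1-3 of the result come from A, which is why the upper elements of the second and third inputs are dead.

  ```cpp
  #include <immintrin.h>

  // Result lane 0 is A0*B0+C0; result lanes 1..3 are copied from A, so the
  // upper lanes of B and C never matter.
  __m128 scalarFma(__m128 A, __m128 B, __m128 C) {
    return _mm_fmadd_ss(A, B, C);
  }
  ```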
* [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar cmp intrinsics with masking and rounding. (Craig Topper, 2016-12-11; 1 file, -1/+3)
  These intrinsics don't read the upper elements of their first and second
  inputs. This is slightly different from the SSE version, which does use the
  upper bits of its first element as passthru bits since the result goes to
  an XMM register. For AVX-512 the result goes to a mask register instead.
  llvm-svn: 289371
* [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar add,div,mul,sub,max,min intrinsics with masking and rounding. (Craig Topper, 2016-12-11; 1 file, -0/+31)
  These intrinsics don't read the upper bits of their second input. And the
  third input is the passthru for masking, and that only uses the lower
  element as well.
  llvm-svn: 289370