path: root/llvm
Commit log (newest first). Each entry shows the commit message, author, date, and diffstat (files changed, -deleted/+added lines).
...
* [libFuzzer] add -Werror for libFuzzer build rule (Kostya Serebryany, 2016-03-02; 1 file changed, -1/+1)
  llvm-svn: 262517

* Revert "Fix ASAN detected errors in code and test" (it was not meant to be committed yet) (Daniel Berlin, 2016-03-02; 2 files changed, -13/+13)
  This reverts commit 890bbccd600ba1eb050353d06a29650ad0f2eb95.
  llvm-svn: 262512

* Fix ASAN detected errors in code and test (Daniel Berlin, 2016-03-02; 2 files changed, -13/+13)
  llvm-svn: 262511

* Add another test for the GlobalOpt change in r212079. (Bob Wilson, 2016-03-02; 1 file changed, -0/+19)
  This is a test that Akira Hatanaka wrote to test GlobalOpt's handling of aliases with GEP operands.
  David Majnemer independently made the same change to GlobalOpt in r212079. Akira's test is a useful
  addition, so I'm pulling it over from the llvm repo for Swift on GitHub.
  llvm-svn: 262510

* [libFuzzer] more trophies (Kostya Serebryany, 2016-03-02; 1 file changed, -1/+1)
  llvm-svn: 262509

* [ARM] Merging 64-bit divmod lib calls into one (Renato Golin, 2016-03-02; 3 files changed, -2/+14)
  When div+rem calls on the same arguments are found, the ARM back-end merges the two calls into one
  __aeabi_divmod call for up to 32-bit values. However, for 64-bit values, which also have a lib call
  (__aeabi_ldivmod), it wasn't merging the calls, and thus calling ldivmod twice and spilling the
  temporary results, which generated pretty bad code.
  This patch legalises 64-bit lib calls for divmod, so that now all the spilling and the second call
  are gone. It also relaxes the DivRem combiner a bit on the legal type check: since it was already
  checking for isLegalOrCustom on every value, the extra check for isTypeLegal was redundant.
  This patch fixes PR17193 (and a long-standing FIXME in the tests).
  llvm-svn: 262507

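A minimal C++ sketch (hypothetical, not taken from the patch or its tests) of the source pattern this change targets: a 64-bit division and remainder on the same operands, which on ARM can now be lowered to a single __aeabi_ldivmod library call instead of two calls plus spills.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical example: quotient and remainder are computed from the same
// operands, so the ARM backend can combine the two operations into one
// __aeabi_ldivmod call (which returns both results) instead of calling the
// library routine twice and spilling the temporaries.
void printDivMod(int64_t num, int64_t den) {
  int64_t quot = num / den; // den is assumed non-zero
  int64_t rem  = num % den;
  std::printf("%lld rem %lld\n", static_cast<long long>(quot),
              static_cast<long long>(rem));
}
```
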
* Revert "[X86] Elide references to _chkstk for dynamic allocas"Reid Kleckner2016-03-028-67/+40
| | | | | | | | | | | | | | | This reverts commit r262370. It turns out there is code out there that does sequences of allocas greater than 4K: http://crbug.com/591404 The goal of this change was to improve the code size of inalloca call sequences, but we got tangled up in the mess of dynamic allocas. Instead, we should come back later with a separate MI pass that uses dominance to optimize the full sequence. This should also be able to remove the often unneeded stacksave/stackrestore pairs around the call. llvm-svn: 262505
* ARM: Introduce conservative load/store optimization modeMatthias Braun2016-03-024-31/+111
| | | | | | | | | | | | | | | | | | | | | | | | Most of the time ARM has the CCR.UNALIGN_TRP bit set to false which means that unaligned loads/stores do not trap and even extensive testing will not catch these bugs. However the multi/double variants are not affected by this bit and will still trap. In effect a more aggressive load/store optimization will break existing (bad) code. These bugs do not necessarily manifest in the broken code where the misaligned pointer is formed but often later in perfectly legal code where it is accessed. This means recompiling system libraries (which have no alignment bugs) with a newer compiler will break existing applications (with alignment bugs) that worked before. So (under protest) I implemented this safe mode which limits the formation of multi/double operations to cases that are not affected by user code (stack operations like spills/reloads) or cases where the normal operations trap anyway (floating point load/stores). It is disabled by default. Differential Revision: http://reviews.llvm.org/D17015 llvm-svn: 262504
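A hedged illustration (not from the patch) of the kind of latent alignment bug described above: the misaligned pointer is formed in one place, and the access only faults if the optimizer happens to merge it into a multi/double load such as LDRD or LDM, which traps regardless of CCR.UNALIGN_TRP.

```cpp
#include <cstdint>
#include <cstring>

// Buggy pattern: forming a misaligned uint32_t* is undefined behavior, but on
// ARM a plain unaligned LDR usually "works" when unaligned traps are disabled.
// If the two adjacent loads are merged into an LDRD/LDM, the access traps.
uint64_t sumTwoWordsBuggy(const unsigned char *buf) {
  const uint32_t *p = reinterpret_cast<const uint32_t *>(buf + 1); // misaligned
  return static_cast<uint64_t>(p[0]) + p[1];                       // merge candidates
}

// Well-defined version: copy through memcpy; unaffected by how the load/store
// optimizer chooses to combine memory operations.
uint64_t sumTwoWordsFixed(const unsigned char *buf) {
  uint32_t a, b;
  std::memcpy(&a, buf + 1, sizeof(a));
  std::memcpy(&b, buf + 5, sizeof(b));
  return static_cast<uint64_t>(a) + b;
}
```
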
* SelectionDAG: Use correctly sized allocation functions for SDNodes (Justin Bogner, 2016-03-02; 2 files changed, -116/+92)
  The placement new calls here were all calling the allocation function in RecyclingAllocator/Recycler
  for SDNode, instead of the function for the specific subclass we were constructing. Since this
  particular allocator always overallocates it more or less worked, but it would hide what we're
  actually doing from any memory tools. Also, if you tried to change this allocator to something like
  a BumpPtrAllocator or MallocAllocator, the compiler would crash horribly all the time.
  Part of llvm.org/PR26808.
  llvm-svn: 262500

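A simplified sketch (assumed names, not the actual SelectionDAG code) of the bug class being fixed: requesting space for the base class while constructing a larger subclass in it only appears to work when the allocator over-allocates.

```cpp
#include <cstddef>
#include <new>

struct Node           { int Opcode = 0; };
struct BigNode : Node { long Extra[4] = {}; };

// Hypothetical pool standing in for the SDNode Recycler/RecyclingAllocator:
// it simply hands out whatever size the caller asks for.
void *allocateFromPool(std::size_t Size) { return ::operator new(Size); }

BigNode *createBigNode() {
  // Buggy pattern (what the placement new calls effectively did): request
  // sizeof(Node) bytes but construct a BigNode, relying on over-allocation.
  //   void *Mem = allocateFromPool(sizeof(Node));
  // Fixed pattern: request the size of the concrete subclass being built.
  void *Mem = allocateFromPool(sizeof(BigNode));
  return new (Mem) BigNode();
}
```
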
* [AArch64] Enable non-leaf frame pointer elimination. (Geoff Berry, 2016-03-02; 17 files changed, -56/+50)
  Summary: This change enables frame pointer elimination in non-leaf functions. The
  -fomit-frame-pointer option still needs to be used when compiling via clang (or an equivalent method
  of not setting the 'no-frame-pointer-elim*' function attributes if generating llvm IR via some other
  method) to take advantage of this optimization. This change should be NFC when compiling via clang
  without -fomit-frame-pointer.
  Reviewers: t.p.northover
  Subscribers: aemerson, rengolin, tberghammer, qcolombet, llvm-commits, danalbert, mcrosier, srhines
  Differential Revision: http://reviews.llvm.org/D17730
  llvm-svn: 262495

* [CMake] Add test-depends target to build dependencies of check-all (Chris Bieneman, 2016-03-02; 1 file changed, -0/+1)
  This is just another convenience target for bots to use. It enables isolation of building and
  testing.
  llvm-svn: 262494

* [cmake] Check the compiler version first (Reid Kleckner, 2016-03-02; 3 files changed, -43/+52)
  Otherwise users get messages from CheckAtomic about missing libatomic instead of a sensible message
  that says "use GCC 4.7 or newer". I structured the change along the lines of HandleLLVMStdlib.cmake,
  so that the standalone build of Clang still gets the compiler version check.
  Reviewers: beanz
  Differential Revision: http://reviews.llvm.org/D17789
  llvm-svn: 262491

* [AA] Hoist the logic to reformulate various AA queries in terms of other parts of the AA interface out of the base class of every single AA result object. (Chandler Carruth, 2016-03-02; 22 files changed, -250/+201)
  Because this logic reformulates the query in terms of some other aspect of the API, it would easily
  cause O(n^2) query patterns in alias analysis. These could in turn be magnified further based on the
  number of call arguments, and then further based on the number of AA queries made for a particular
  call. This ended up causing problems for Rust that were actually noticeable enough to get a bug
  (PR26564) and probably other places as well.
  When originally re-working the AA infrastructure, the desire was to regularize the pattern of
  refinement without losing any generality. While I think it was successful, that is clearly proving
  to be too costly. And the cost is needless: we gain no actual improvement for this generality of
  making a direct query to TBAA actually be able to re-use some other alias analysis's refinement
  logic for one of the other APIs, or some such.
  In short, this is entirely wasted work. To the extent possible, delegation to other API surfaces
  should be done at the aggregation layer so that we can avoid re-walking the aggregation. In fact,
  this significantly simplifies the logic as we no longer need to smuggle the aggregation layer into
  each alias analysis (or the TargetLibraryInfo into each alias analysis just so we can form argument
  memory locations!).
  However, we also have some delegation logic inside of BasicAA and some of it even makes sense. When
  the delegation logic is baking in specific knowledge of aliasing properties of the LLVM IR, as
  opposed to simply reformulating the query to utilize a different alias analysis interface entry
  point, it makes a lot of sense to restrict that logic to a different layer such as BasicAA. So one
  aspect of the delegation that was in every AA base class is that when we don't have operand bundles,
  we re-use function AA results as a fallback for callsite alias results. This relies on the IR
  properties of calls and functions w.r.t. aliasing, and so seems a better fit to BasicAA. I've lifted
  the logic up to that point where it seems to be a natural fit. This still does a bit of redundant
  work (we query function attributes twice, once via the callsite and once via the function AA query)
  but it is *exactly* twice here, no more.
  The end result is that all of the delegation logic is hoisted out of the base class and into either
  the aggregation layer when it is a pure retargeting to a different API surface, or into BasicAA when
  it relies on the IR's aliasing properties. This should fix the quadratic query pattern reported in
  PR26564, although I don't have a stand-alone test case to reproduce it. It also seems general
  goodness. Now the numerous AAs that don't need target library info don't carry it around and depend
  on it. I think I can even rip out the general access to the aggregation layer and only expose that
  in BasicAA as it is the only place where we re-query in that manner.
  However, this is a non-trivial change to the AA infrastructure so I want to get some additional eyes
  on this before it lands. Sadly, it can't wait long because we should really cherry-pick this into
  3.8 if we're going to go this route.
  Differential Revision: http://reviews.llvm.org/D17329
  llvm-svn: 262490

* [X86][SSSE3] Added combine test for unary shuffle (pshufb) only referencing elements from one of the inputs of a binary shuffle (punpcklbw) (Simon Pilgrim, 2016-03-02; 1 file changed, -0/+17)
  llvm-svn: 262486

* [LLVM][AVX512] PSRAWI: Change imm8 to int. (Michael Zuckerman, 2016-03-02; 6 files changed, -63/+63)
  Differential Revision: http://reviews.llvm.org/D17705
  llvm-svn: 262480

* [X86][SSE] Lower 128-bit MOVDDUP with existing VBROADCAST mechanisms (Simon Pilgrim, 2016-03-02; 6 files changed, -55/+61)
  We have a number of useful lowering strategies for VBROADCAST instructions (both from memory and
  register element 0) which the 128-bit form of the MOVDDUP instruction can make use of. This patch
  tweaks lowerVectorShuffleAsBroadcast to enable it to broadcast 2f64 args using MOVDDUP as well.
  It does require a slight tweak to the lowerVectorShuffleAsBroadcast mechanism as the existing
  MOVDDUP lowering uses isShuffleEquivalent which can match binary shuffles that can lower to (unary)
  broadcasts.
  Differential Revision: http://reviews.llvm.org/D17680
  llvm-svn: 262478

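A hedged illustration (clang vector extensions, not code from the patch) of the 2 x f64 splat shape that the broadcast lowering path can now emit as a single MOVDDUP.

```cpp
// Requires clang's vector extensions; shown only to illustrate the shuffle shape.
typedef double v2f64 __attribute__((vector_size(16)));

// Splat element 0 of a <2 x double> vector. With SSE3 this unary shuffle can
// be lowered to one movddup; the patch lets lowerVectorShuffleAsBroadcast
// choose that form alongside the other VBROADCAST-style lowerings.
v2f64 splatLane0(v2f64 v) {
  return __builtin_shufflevector(v, v, 0, 0);
}
```
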
* Revert "[AMDGPU] table-driven parser/printer for amd_kernel_code_t structure ↵Nikolay Haustov2016-03-024-370/+0
| | | | | | | | fields" Build failure with clang. llvm-svn: 262477
* Revert "[AMDGPU] Using table-driven amd_kernel_code_t field parser in ↵Nikolay Haustov2016-03-022-8/+157
| | | | | | | | assembler." Build failure with clang. llvm-svn: 262475
* [AMDGPU] Using table-driven amd_kernel_code_t field parser in assembler.Nikolay Haustov2016-03-022-157/+8
| | | | | | | | | | complementary patch to table-driven amd_kernel_code_t field parser/printer utility. lit tests passed. Patch by: Valery Pykhtin Differential Revision: http://reviews.llvm.org/D17151 llvm-svn: 262474
* [AMDGPU] table-driven parser/printer for amd_kernel_code_t structure fieldsNikolay Haustov2016-03-024-0/+370
| | | | | | | | | | | | | | | | | | This is going to be used in .hsatext disassembler and can be used in current assembler parser (lit tests passed on parsing). Code using this helpers isn't included in this patch. Benefits: unified approach fast field name lookup on parsing Later I would like to enhance some of the field naming/syntax using this code. Patch by: Valery Pykhtin Differential Revision: http://reviews.llvm.org/D17150 llvm-svn: 262473
* libfuzzer: fix compiler warnings (Dmitry Vyukov, 2016-03-02; 2 files changed, -6/+12)
  - unused sigaction/setitimer result (used in assert)
  - unchecked fscanf return value
  - signed/unsigned comparison
  llvm-svn: 262472

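A small hedged example (hypothetical helper, not libFuzzer's code) of the unchecked-fscanf warning class listed above and its straightforward fix.

```cpp
#include <cstdio>

// Read one integer from a file, checking the fscanf result instead of
// ignoring it (the pattern that triggers unused-result style warnings).
bool readInt(const char *Path, int *Out) {
  FILE *F = std::fopen(Path, "r");
  if (!F) return false;
  bool Ok = std::fscanf(F, "%d", Out) == 1; // check the return value
  std::fclose(F);
  return Ok;
}
```
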
* [X86] Remove unnecessary call to isReg from emitter's DestMem handling for VEX prefix. The operand is always a register. NFC (Craig Topper, 2016-03-02; 1 file changed, -7/+5)
  llvm-svn: 262468

* [X86] Make X86MCCodeEmitter::DetermineREXPrefix locate operands more like how VEX prefix handling does. (Craig Topper, 2016-03-02; 2 files changed, -54/+54)
  llvm-svn: 262467

* [X86] Permit reading of the FLAGS register without it being previously defined (David Majnemer, 2016-03-02; 4 files changed, -5/+10)
  We modeled the RDFLAGS{32,64} operations as "using" {E,R}FLAGS. While technically correct, this is
  not desirable for folks who want to examine aspects of the FLAGS register which are not related to
  computation, like whether or not CPUID is a valid instruction.
  Differential Revision: http://reviews.llvm.org/D17782
  llvm-svn: 262465

* [X86] Remove assertion I accidentally left in. (Craig Topper, 2016-03-02; 1 file changed, -1/+0)
  llvm-svn: 262464

* [X86] Be more structured about how we capture the register number when it is encoded in bits 7:4 of the immediate. (Craig Topper, 2016-03-02; 1 file changed, -41/+39)
  For some instructions the register is not the last operand and the immediate handling had to detect
  this and hardcode the index to find it. It also required CurOp to be pointing at the last operand
  handled in the Form switch whereas for any instruction it would be pointing at the next operand.
  Now we just capture the value in the Form switch when we know exactly where it is and the CurOp
  pointer can behave normally.
  llvm-svn: 262462

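For context, a hedged sketch (assumed helper name, not the emitter's real code) of what "the register number encoded in bits 7:4 of the immediate" means: some instructions, e.g. the SSE4.1/AVX variable-blend forms, carry an extra source register in the high nibble of the 8-bit immediate.

```cpp
#include <cstdint>

// Pack a register encoding into imm8[7:4], keeping imm8[3:0] for other payload.
uint8_t encodeRegInImm8(unsigned RegEncoding, uint8_t LowNibble) {
  return static_cast<uint8_t>(((RegEncoding & 0xF) << 4) | (LowNibble & 0xF));
}
```
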
* [SCEV] Minor naming, braces cleanup; NFC (Sanjoy Das, 2016-03-02; 1 file changed, -5/+4)
  llvm-svn: 262459

* [X86] Use MCPhysReg and uint16_t for static arrays of registers and opcodes respectively; should reduce size a tiny bit. NFC (Craig Topper, 2016-03-02; 5 files changed, -16/+16)
  llvm-svn: 262458

* AMDGPU: Fix bug 26659. (Matt Arsenault, 2016-03-02; 1 file changed, -1/+1)
  Fix checking the same instruction twice instead of the second branch that uses vccz. I don't think
  this matters currently because s_branch_vccnz is always used currently.
  llvm-svn: 262457

* AMDGPU: Cleanup suggested in bug 23960 (Matt Arsenault, 2016-03-02; 1 file changed, -6/+3)
  llvm-svn: 262456

* Bug 20810: Use report_fatal_error instead of unreachable (Matt Arsenault, 2016-03-02; 1 file changed, -6/+6)
  llvm-svn: 262455

* Add a comment with a rationale for the unusual code structure (Sanjoy Das, 2016-03-02; 1 file changed, -0/+3)
  llvm-svn: 262454

* Qualify getRangeForAffineAR with this-> for MSVC (Sanjoy Das, 2016-03-02; 1 file changed, -2/+2)
  llvm-svn: 262453

* Attempt to fix ASAN failure in a MemorySSA test. (George Burgess IV, 2016-03-02; 1 file changed, -4/+4)
  llvm-svn: 262452

* Perturb code in an attempt to appease MSVC (Sanjoy Das, 2016-03-02; 1 file changed, -9/+9)
  For some reason MSVC seems to think I'm calling getConstant() from a static context. Try to avoid
  this issue by explicitly specifying 'this->' (though I'm not confident that this will actually
  work).
  llvm-svn: 262451

* More code permutation to appease MSVC (Sanjoy Das, 2016-03-02; 1 file changed, -4/+7)
  llvm-svn: 262449

* Remove "auto" to appease the MSVC bots (Sanjoy Das, 2016-03-02; 1 file changed, -2/+2)
  llvm-svn: 262448

* DAGCombiner: Make sure an integer is being truncated (Matt Arsenault, 2016-03-02; 2 files changed, -2/+29)
  llvm-svn: 262446

* revert r262424 because there's a *clang test* for AArch64 that checks -O3 asm output that is broken by this change (Sanjay Patel, 2016-03-02; 2 files changed, -45/+5)
  llvm-svn: 262440

* Fix SHARED_LIBS build (Daniel Berlin, 2016-03-02; 1 file changed, -0/+1)
  llvm-svn: 262439

* [SCEV] Make getRange smarter around selects (Sanjoy Das, 2016-03-02; 3 files changed, -0/+178)
  Have ScalarEvolution::getRange re-consider cases like "{C?A:B,+,C?P:Q}" by factoring out "C" and
  computing RangeOf({A,+,P}) union RangeOf({B,+,Q}) instead.
  The latter can be easier to compute precisely in cases like "{C?0:N,+,C?1:-1}", where N is the
  backedge taken count of the loop, since in such cases the latter form simplifies to
  [0,N+1) union [0,N+1).
  llvm-svn: 262438

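A hedged C++ example (not one of the new tests) of the "{C?0:N,+,C?1:-1}" shape described above: depending on a loop-invariant flag, the induction variable counts up from 0 or down from N, so factoring out the condition gives range({0,+,1}) union range({N,+,-1}), i.e. [0, N] either way.

```cpp
#include <cstdint>

// Hypothetical loop whose IV is the recurrence {forward ? 0 : n, +, forward ? 1 : -1}.
// Assumes n >= 0 and that a[] has at least n + 1 elements.
int64_t sumEitherDirection(const int64_t *a, int64_t n, bool forward) {
  int64_t sum = 0;
  int64_t i = forward ? 0 : n;      // select on the start value
  int64_t step = forward ? 1 : -1;  // select on the step
  for (int64_t k = 0; k <= n; ++k, i += step)
    sum += a[i];                    // i stays within [0, n] in both arms
  return sum;
}
```
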
* [SCEV] Extract out a getRangeForAffineAR; NFC (Sanjoy Das, 2016-03-02; 2 files changed, -57/+77)
  Pure code-motion change. Will be used later in making getRange more clever.
  llvm-svn: 262437

* [CMake] Add convenience target llvm-test-depends to build test dependencies. (Chris Bieneman, 2016-03-02; 1 file changed, -0/+2)
  This is useful when paired with the distribution targets to build prerequisites for running tests.
  llvm-svn: 262428

* [CMake] Add distribution target that is the "just-build" side of install-distribution (Chris Bieneman, 2016-03-02; 1 file changed, -0/+7)
  This is just a convenience target to allow limiting what you build.
  llvm-svn: 262427

* [InstCombine] convert 'isPositive' and 'isNegative' vector comparisons to shifts (PR26701) (Sanjay Patel, 2016-03-01; 2 files changed, -5/+45)
  As noted in the code comment, I don't think we can do the same transform for *vector* integer
  comparisons that we do for *scalar* integer comparisons because it might pessimize the general case.
  Exhibit A for an incomplete integer comparison ISA remains x86 SSE/AVX: it only has EQ and GT for
  integer vectors.
  But we should now recognize all the variants of this construct and produce the optimal code for the
  cases shown in: https://llvm.org/bugs/show_bug.cgi?id=26701
  llvm-svn: 262424

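For reference, a hedged scalar illustration of the 'isNegative'/'isPositive' idioms and their shift equivalents; the patch teaches InstCombine to form the analogous shifts for the vector versions of these comparisons.

```cpp
#include <cstdint>

// 'isNegative': for a 32-bit integer, (x < 0) is exactly the sign bit,
// i.e. a logical shift right by 31.
uint32_t isNegative(int32_t x) {
  return static_cast<uint32_t>(x) >> 31; // 1 if x < 0, else 0
}

// 'isPositive' in the PR's sense (x > -1, i.e. x >= 0) is the complement
// of the sign bit.
uint32_t isPositive(int32_t x) {
  return (static_cast<uint32_t>(x) >> 31) ^ 1u; // 1 if x >= 0, else 0
}
```
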
* Perform InstructionCombiningPass before SampleProfile pass. (Dehao Chen, 2016-03-01; 8 files changed, -24/+70)
  Summary: SampleProfile pass needs to be performed after InstructionCombiningPass, which helps
  eliminate un-inlinable function calls.
  Reviewers: davidxl, dnovillo
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D17742
  llvm-svn: 262419

* [libFuzzer] deprecate exit_on_first flag (Kostya Serebryany, 2016-03-01; 4 files changed, -12/+10)
  llvm-svn: 262417

* llvm-dwp: Add missing copyright notice to llvm-dwp.cpp (David Blaikie, 2016-03-01; 1 file changed, -0/+13)
  Addressing feedback on IRC by Sean Silva.
  llvm-svn: 262416

* [libFuzzer] add generic signal handlers so that libFuzzer can report at least something if ASan is not handling the signals for us. Remove abort_on_timeout flag. (Kostya Serebryany, 2016-03-01; 7 files changed, -21/+94)
  llvm-svn: 262415

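A minimal hedged sketch (not libFuzzer's implementation) of installing generic crash handlers with sigaction so that something is reported even when ASan's signal handlers are not present.

```cpp
#include <signal.h>
#include <cstdio>
#include <cstdlib>

// Fallback handler: print a minimal report and exit. (fprintf is not strictly
// async-signal-safe; real code would use lower-level writes.)
static void crashHandler(int Sig) {
  std::fprintf(stderr, "==crash== caught deadly signal %d\n", Sig);
  std::_Exit(1);
}

static void installCrashHandlers() {
  struct sigaction SA = {};
  SA.sa_handler = crashHandler;
  sigaction(SIGSEGV, &SA, nullptr);
  sigaction(SIGBUS, &SA, nullptr);
  sigaction(SIGABRT, &SA, nullptr);
  sigaction(SIGILL, &SA, nullptr);
  sigaction(SIGFPE, &SA, nullptr);
}
```
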
* [X86][SSE41] Added missing fast-isel intrinsics tests (Simon Pilgrim, 2016-03-01; 1 file changed, -27/+443)
  Match IR generated in clang/test/CodeGen/sse41-builtins.c
  llvm-svn: 262412