path: root/llvm/test
Commit message (Author, Date; files changed, lines -/+)
* Use inbounds GEPs for memcpy and memset lowering (Eli Bendersky, 2015-07-17; 1 file changed, -5/+5)
  Follow-up on discussion in http://reviews.llvm.org/D11220

  llvm-svn: 242542
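  A minimal sketch of the kind of IR this affects (invented names, present-day IR syntax; not the patch's own test): when a memcpy is lowered to an explicit copy loop, the per-iteration address computations are emitted as inbounds GEPs.

    define void @copy4(ptr %dst, ptr %src) {
    entry:
      br label %loop
    loop:
      %index = phi i64 [ 0, %entry ], [ %index.next, %loop ]
      ; the inbounds markers on these GEPs are the point of the change
      %src.gep = getelementptr inbounds i8, ptr %src, i64 %index
      %dst.gep = getelementptr inbounds i8, ptr %dst, i64 %index
      %byte = load i8, ptr %src.gep, align 1
      store i8 %byte, ptr %dst.gep, align 1
      %index.next = add i64 %index, 1
      %done = icmp eq i64 %index.next, 4
      br i1 %done, label %exit, label %loop
    exit:
      ret void
    }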
* Add support for producing thin archives in llvm-lib. (Rafael Espindola, 2015-07-17; 1 file changed, -0/+9)
  I will send an entry in docs/CommandGuide for review today.

  llvm-svn: 242533
* Make global aliases have symbol size equal to their type (John Brawn, 2015-07-17; 3 files changed, -0/+24)
  This is mainly for the benefit of GlobalMerge, so that an alias into a MergedGlobals variable has the same size as the original non-merged variable.

  Differential Revision: http://reviews.llvm.org/D10837

  llvm-svn: 242520
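  A made-up illustration of the GlobalMerge pattern in question (names invented, present-day IR syntax; not the patch's test): each alias now gets a symbol size equal to its type, here i32, instead of no size.

    @_MergedGlobals = private global <{ i32, i32 }> <{ i32 1, i32 2 }>
    @x = alias i32, getelementptr inbounds (<{ i32, i32 }>, ptr @_MergedGlobals, i32 0, i32 0)
    @y = alias i32, getelementptr inbounds (<{ i32, i32 }>, ptr @_MergedGlobals, i32 0, i32 1)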
* [PM/AA] Disable the core unsafe aspect of GlobalsModRef in the face of basic changes to the IR such as folding pointers through PHIs, Selects, integer casts, store/load pairs, or outlining (Chandler Carruth, 2015-07-17; 2 files changed, -2/+8)
  This leaves the feature available behind a flag. This flag's default could be flipped if necessary, but the real-world performance impact of this particular feature of GMR may not be significant enough for many folks to want to run the risk.

  Currently, the risk here is somewhat mitigated by half-hearted attempts to update GlobalsModRef when the rest of the optimizer changes something. However, I am currently trying to remove that update mechanism, as it makes migrating the AA infrastructure to a form that can be readily shared between the new and old pass managers very challenging. Without this update mechanism, it is possible that this still-unlikely failure mode will start to trip people, and so I wanted to try to proactively avoid that.

  There is a lengthy discussion on the mailing list about why the core approach here is flawed, and it would likely need to look totally different to be both reasonably effective and resilient to basic IR changes occurring. This patch is essentially the first of two which will enact the result of that discussion. The next patch will remove the current update mechanism.

  Thanks to lots of folks that helped look at this from different angles. Special thanks to Michael Zolotukhin for doing some very preliminary benchmarking of LTO without GlobalsModRef to get a rough idea of the impact we could be facing here. So far it looks very small, but there are some concerns lingering from other benchmarking. The default here may get flipped if performance results end up pointing at this as a more significant issue.

  Also thanks to Pete and Gerolf for reviewing!

  Differential Revision: http://reviews.llvm.org/D11213

  llvm-svn: 242512
* [asan] Fix invalid debug info for promotable allocas (Kuba Brecka, 2015-07-17; 1 file changed, -0/+26)
  Since r230724 ("Skip promotable allocas to improve performance at -O0"), there is a regression in the generated debug info for those non-instrumented variables. When inspecting such a variable's value in LLDB, you often get garbage instead of the actual value. ASan instrumentation is inserted before the creation of the non-instrumented alloca. The only allocas that are considered standard stack variables are the ones declared in the first basic block, but the initial instrumentation setup in the function breaks that invariant. This patch makes sure uninstrumented allocas stay in the first BB.

  Differential Revision: http://reviews.llvm.org/D11179

  llvm-svn: 242510
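  A hedged sketch of the invariant being preserved (invented function, present-day IR syntax; not the patch's test): the promotable alloca must remain in the entry block even after ASan inserts its setup code at the top of the function.

    define void @use_local() sanitize_address {
    entry:
      ; promotable alloca: staying in the first basic block keeps it a
      ; standard, uninstrumented stack variable with correct debug info
      %local = alloca i32, align 4
      store i32 0, ptr %local, align 4
      ret void
    }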
* ARM: Enable MachineScheduler and disable PostRAScheduler for swift. (Matthias Braun, 2015-07-17; 6 files changed, -31/+31)
  This is mostly done to disable the PostRAScheduler, which optimizes for instruction latencies and isn't a good fit for out-of-order architectures. This also allows us to leave out the itinerary table in swift in favor of the SchedModel ones.

  This change leads to performance improvements/regressions by as much as 10% in some benchmarks; in fact we lose 0.4% performance over the llvm-testsuite for reasons that appear to be unknown or out of the compiler's control. rdar://20803802 documents the investigation of these effects.

  While it is probably a good idea to perform the same switch for the other ARM out-of-order CPUs, I limited this change to swift as I cannot perform the benchmark verification on the other CPUs.

  Differential Revision: http://reviews.llvm.org/D10513

  llvm-svn: 242500
* Only do fmul (fadd x, x), c combine if the fadd only has one use (Matt Arsenault, 2015-07-17; 1 file changed, -0/+102)
  This was increasing the instruction count if the fadd has multiple uses.

  llvm-svn: 242498
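  Roughly, the combine rewrites (x + x) * c into x * (2 * c); a made-up single-use example (not the patch's test):

    define float @one_use(float %x) {
      ; single use of %add: (x + x) * 4.0 can fold to x * 8.0;
      ; with a second use of %add the fold would only add instructions,
      ; so it is now skipped in that case
      %add = fadd fast float %x, %x
      %mul = fmul fast float %add, 4.000000e+00
      ret float %mul
    }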
* Use small encodings for constants when possible. (Rafael Espindola, 2015-07-17; 2 files changed, -15/+41)
  llvm-svn: 242493
* MIR Serialization: Serialize the frame setup machine instruction flag. (Alex Lorenz, 2015-07-17; 1 file changed, -0/+39)
  llvm-svn: 242491
* MIR Serialization: Serialize the frame index machine operands. (Alex Lorenz, 2015-07-16; 4 files changed, -0/+155)
  Reviewers: Duncan P. N. Exon Smith

  llvm-svn: 242487
* Arm: Don't define a label twice with two setjmps in a function. (Matthias Braun, 2015-07-16; 1 file changed, -0/+89)
  Constructing a name based on the function name didn't give us a unique symbol if we had more than one setjmp in a function. Using MCContext::createTempSymbol() always gives us a unique name.

  Differential Revision: http://reviews.llvm.org/D9314

  llvm-svn: 242482
* Fix __builtin_setjmp in combination with sjlj exception handling. (Matthias Braun, 2015-07-16; 1 file changed, -0/+113)
  llvm.eh.sjlj.setjmp was used as part of the SjLj exception handling style but is also used in clang to implement __builtin_setjmp. The ARM backend needs to output additional dispatch tables for the SjLj exception handling style; these tables, however, can't be emitted if llvm.eh.sjlj.setjmp is simply used for __builtin_setjmp and no actual landing pad blocks exist.

  To solve this issue, a new llvm.eh.sjlj.setup_dispatch intrinsic is introduced, which is used instead of llvm.eh.sjlj.setjmp in the SjLj exception handling lowering. This lets us differentiate between the case where we actually need to set up a dispatch table and the case where we just need the __builtin_setjmp semantics.

  Differential Revision: http://reviews.llvm.org/D9313

  llvm-svn: 242481
* AArch64: make inexact signalling on round Darwin-specific (Tim Northover, 2015-07-16; 2 files changed, -75/+146)
  C11 leaves the choice of whether round-to-integer operations set the inexact flag implementation-defined. Darwin does expect it to be set, but this seems to be against the intent of the IEEE document and slower to implement anyway. So it should be opt-in.

  llvm-svn: 242446
* [X86][SSE] Added nounwind attribute to vector shift tests. (Simon Pilgrim, 2015-07-16; 6 files changed, -128/+96)
  Stop i686 codegen from generating cfi directives.

  llvm-svn: 242443
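  For illustration, an invented test function in the style these files use (not one of the actual updated tests); the nounwind attribute is what suppresses the .cfi_* directives in the i686 output:

    define <4 x i32> @shl_v4i32(<4 x i32> %a, <4 x i32> %b) nounwind {
      %r = shl <4 x i32> %a, %b
      ret <4 x i32> %r
    }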
* [PowerPC] v4i32 is a VSRCRegClass (Bill Schmidt, 2015-07-16; 2 files changed, -39/+38)
  I was looking at some vector code generation and kept seeing unnecessary vector copies into the Altivec half of the VSX registers. I discovered that we overlooked v4i32 when adding the register classes for VSX; we only added v4f32 and v2f64. This means that anything that canonicalizes into v4i32 (which is a LOT of stuff) ends up being forced into VRRC on its way to VSRC.

  The fix is one line. The rest of the patch is fixing up some test cases whose code generation has changed as a result.

  This seems like it would be a good candidate for backport to 3.7.

  llvm-svn: 242442
* [X86][SSE] Updated vector conversion test names. (Simon Pilgrim, 2015-07-16; 2 files changed, -201/+201)
  I'll be adding further tests shortly, so I need a more thorough naming convention.

  llvm-svn: 242440
* [NVPTX] enable SpeculativeExecution in NVPTX (Jingyue Wu, 2015-07-16; 1 file changed, -0/+71)
  Summary:
  SpeculativeExecution enables a series of straight-line optimizations (such as SLSR and NaryReassociate) on conditional code. For example, given

    if (...)
      ... b * s ...
    if (...)
      ... (b + 1) * s ...

  speculative execution can hoist b * s and (b + 1) * s from the then-blocks, so that we have

    ... b * s ...
    if (...)
      ...
    ... (b + 1) * s ...
    if (...)
      ...

  Then, SLSR can rewrite (b + 1) * s to (b * s + s) because after speculative execution b * s dominates (b + 1) * s.

  The performance impact of this change is significant. It speeds up the benchmarks running EigenFloatContractionKernelInternal16x16 (https://bitbucket.org/eigen/eigen/src/ba68f42fa69e4f43417fe1e52669d4dd5d2b3bee/unsupported/Eigen/CXX11/src/Tensor/TensorContractionCuda.h?at=default#cl-526) by roughly 2%. Some internal benchmarks that have the above code pattern are improved by up to 40%. No significant slowdowns are observed on Eigen CUDA microbenchmarks.

  Reviewers: jholewinski, broune, eliben

  Subscribers: llvm-commits, jholewinski

  Differential Revision: http://reviews.llvm.org/D11201

  llvm-svn: 242437
* AArch64: Implement conditional compare sequence matching. (Matthias Braun, 2015-07-16; 1 file changed, -0/+96)
  This is a new iteration of the reverted r238793 / http://reviews.llvm.org/D8232, which wrongly assumed that any and/or tree can be represented by conditional compare sequences; in fact there are some restrictions. This version fixes that and adds comments that explain exactly what types of and/or trees can actually be implemented as conditional compare sequences.

  Related to http://llvm.org/PR20927, rdar://18326194

  Differential Revision: http://reviews.llvm.org/D10579

  llvm-svn: 242436
* AMDGPU/SI: Negative offsets aren't allowed in MUBUF's vaddr operand (Tom Stellard, 2015-07-16; 2 files changed, -12/+41)
  Reviewers: arsenm

  Subscribers: llvm-commits

  Differential Revision: http://reviews.llvm.org/D11226

  llvm-svn: 242434
* Revert "Add missing load/store flags to thumb2 instructions."Pete Cooper2015-07-161-1/+1
| | | | | | | | | | This reverts commit r242300. This is causing buildbot failures which we are investigating. I'll reapply once we know whats going on, but for now want to get the bots green. llvm-svn: 242428
* Internalize: internalize comdat members as a group, and drop comdat on such members. (Peter Collingbourne, 2015-07-16; 1 file changed, -0/+52)
  Internalizing an individual comdat group member without also internalizing the other members of the comdat can break comdat semantics. For example, if a module contains a reference to an internalized comdat member, and the linker chooses a comdat group from a different object file, this will break the reference to the internalized member.

  This change causes the internalizer to only internalize comdat members if all other members of the comdat are not externally visible. Once a comdat group has been fully internalized, there is no need to apply comdat rules to its members; later optimization passes (e.g. globaldce) can legally drop individual members of the comdat. So we drop the comdat attribute from all comdat members.

  Differential Revision: http://reviews.llvm.org/D10679

  llvm-svn: 242423
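  A minimal made-up module showing the shape of the problem (not the patch's test): @a and @b share a comdat, so internalizing @a alone could break references to it if the linker picks the group from another object; with this change both are internalized together and the comdat is then dropped.

    $group = comdat any
    @a = global i32 1, comdat($group)
    @b = global i32 2, comdat($group)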
* Correct lowering of memmove in NVPTX (Eli Bendersky, 2015-07-16; 1 file changed, -22/+96)
  This fixes https://llvm.org/bugs/show_bug.cgi?id=24056

  Also a bit of refactoring along the way.

  Differential Revision: http://reviews.llvm.org/D11220

  llvm-svn: 242413
* [Codegen] Add intrinsics 'absdiff' and corresponding SDNodes for absolute difference operation (James Molloy, 2015-07-16; 1 file changed, -0/+242)
  This adds new intrinsics "*absdiff" for absolute difference ops to facilitate efficient code generation for the "sum of absolute differences" operation. The patch also contains the introduction of corresponding SDNodes and basic legalization support. Sanity of the generated code is tested on X86.

  This is the first of three patches.

  Patch by Shahid Asghar-ahmad!

  llvm-svn: 242409
* Fix memcheck interval ends for pointers with negative strides (Silviu Baranga, 2015-07-16; 1 file changed, -0/+89)
  Summary:
  The checking pointer grouping algorithm assumes that the starts/ends of the pointers are well formed (start <= end). The runtime memory checking algorithm also assumes this by doing

    start0 < end1 && start1 < end0

  to detect conflicts. This check only works if start0 <= end0 and start1 <= end1.

  This change correctly orders the interval ends by either checking the stride (if it is constant) or by using min/max SCEV expressions.

  Reviewers: anemet, rengolin

  Subscribers: rengolin, llvm-commits

  Differential Revision: http://reviews.llvm.org/D11149

  llvm-svn: 242400
* [X86] Test for r242395 (Fix emitPrologue() to make fewer assumptions about pushes) (Michael Kuperstein, 2015-07-16; 1 file changed, -0/+19)
  llvm-svn: 242399
* [X86] Reapply r240257: "Allow more call sequences to use push instructions for argument passing" (Michael Kuperstein, 2015-07-16; 1 file changed, -40/+53)
  This allows more call sequences to use pushes instead of movs when optimizing for size. In particular, calling conventions that pass some parameters in registers (e.g. thiscall) are now supported.

  This should no longer cause miscompiles, now that a bug in emitPrologue was fixed in r242395.

  llvm-svn: 242398
* Revert "[X86] Allow more call sequences to use push instructions for ↵Reid Kleckner2015-07-161-53/+40
| | | | | | | | | | | argument passing" It miscompiles some code and a reduced test case has been sent to the author. This reverts commit r240257. llvm-svn: 242373
* Fix broken testcase from r242358. (Alex Lorenz, 2015-07-16; 1 file changed, -1/+1)
  The testcase failed on non-X86 targets because I forgot to pass the '-march=x86-64' option into llc for one of the X86-specific tests.

  llvm-svn: 242370
* [ARM] Define a subtarget feature that is used to avoid using movt/movw pairs for 32-bit immediates. (Akira Hatanaka, 2015-07-16; 2 files changed, -5/+50)
  This change is needed to avoid emitting movt/movw pairs when doing LTO, and to do so on a per-function basis.

  Out-of-tree projects currently using the cl::opt option -arm-use-movt=0 or false to avoid emitting movt/movw pairs should make changes to add the subtarget feature "+no-movt" (see the changes made to clang in r242368).

  rdar://problem/21529937

  Differential Revision: http://reviews.llvm.org/D11026

  llvm-svn: 242369
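  A hedged sketch of how the feature can be requested per function in IR (the function and constant are invented; "+no-movt" is the feature string named above):

    define i32 @get_magic() #0 {
      ; 0x12345678 would normally be materialized with a movw/movt pair
      ret i32 305419896
    }

    attributes #0 = { "target-features"="+no-movt" }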
* Trying to fix the windows bots. (Rafael Espindola, 2015-07-16; 1 file changed, -4/+4)
  llvm-svn: 242367
* Fix handling of relative paths in thin archives. (Rafael Espindola, 2015-07-16; 1 file changed, -3/+18)
  The member has to end up with a path relative to the archive.

  llvm-svn: 242362
* MIR Serialization: Serialize the jump table index operands. (Alex Lorenz, 2015-07-15; 2 files changed, -1/+183)
  Reviewers: Duncan P. N. Exon Smith

  llvm-svn: 242358
* MIR Serialization: Serialize the jump table info. (Alex Lorenz, 2015-07-15; 2 files changed, -0/+116)
  The jump table info is serialized using a YAML mapping that contains its kind and a YAML sequence of jump table entries. A jump table entry is a YAML mapping that has an ID and an inline YAML sequence of machine basic block references.

  The testcase 'CodeGen/MIR/X86/jump-table-info.mir' doesn't have any instructions, because one of them contains a jump table index operand. The jump table index operands will be serialized in a follow-up patch, and the appropriate instructions will be added to this testcase.

  Reviewers: Duncan P. N. Exon Smith

  llvm-svn: 242357
* Add a test for r242281 from an old patch of mine. (Sean Silva, 2015-07-15; 1 file changed, -0/+37)
  This isn't thorough, but should serve as a sanity check.

  llvm-svn: 242356
* llvm-ar: Don't write the directory in the string table. (Rafael Espindola, 2015-07-15; 1 file changed, -3/+12)
  We were already doing the right thing for short file names, but not long ones.

  llvm-svn: 242354
* MIR Serialization: Serialize references from the stack objects to named allocas. (Alex Lorenz, 2015-07-15; 3 files changed, -6/+36)
  This commit serializes the references to the named LLVM alloca instructions from the stack objects in the machine frame info. It adds a field 'Name' to the struct 'yaml::MachineStackObject'. This new field is used to store the name of the alloca instruction when the alloca is present and when it has a name.

  llvm-svn: 242339
* Add a "debugger tuning" concept that allows us to fine-tune how wePaul Robinson2015-07-152-1/+45
| | | | | | | | | | | emit debug info, according to the preferences of the different debuggers used on various targets. Darwin and FreeBSD default to tuning for LLDB; PS4 defaults to tuning for the SCE (Sony Computer Entertainment) debugger. All others default to GDB. Differential Revision: http://reviews.llvm.org/D8506 llvm-svn: 242338
* Fix mergefunc infinite loop (JF Bastien, 2015-07-15; 1 file changed, -0/+40)
  Self-referential constants containing references to a merged function no longer cause the MergeFunctions pass to infinite loop. Also adds a reproduction IR file which would otherwise fail, isolated from a similar issue in Chromium.

  Author: jrkoenig

  Reviewers: nlewycky, jfb

  Subscribers: llvm-commits, nlewycky, jfb

  Differential Revision: http://reviews.llvm.org/D11208

  llvm-svn: 242337
* Handle the error of trying to convert a regular archive to a thin one. (Rafael Espindola, 2015-07-15; 1 file changed, -0/+14)
  While at it, test that we can add to a thin archive.

  llvm-svn: 242330
* Analyze recursive PHI nodes in BasicAA (Tobias Edler von Koch, 2015-07-15; 1 file changed, -0/+75)
  Summary:
  This patch allows phi nodes like

    %x = phi [ %incptr, ... ] [ %var, ... ]
    %incptr = getelementptr %x, 1

  to be analyzed by BasicAliasAnalysis.

  In aliasPHI, we can detect incoming values that are recursive GEPs with a constant offset. Instead of trying to analyze a recursive GEP (and failing), we now ignore it and instead set the size of the memory referenced by the PHINode to UnknownSize. This represents all the possible memory locations the pointer represented by the PHINode could be advanced to by the GEP.

  For now, this new behavior is turned off by default to allow debugging of performance degradations seen with SPEC/x86 and Hexagon benchmarks. The flag -basicaa-recphi turns it on.

  Reviewers: hfinkel, sanjoy

  Subscribers: tobiasvk_caf, sanjoy, llvm-commits

  Differential Revision: http://reviews.llvm.org/D10368

  llvm-svn: 242320
* Revert "Look through PHIs to find additional register sources"Bruno Cardoso Lopes2015-07-151-84/+0
| | | | | | | | | | Likely broke compilation on ARM: http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/13054 This reverts commit 131ce4a838c081516cbfed039fc986b33e3979d6. llvm-svn: 242310
* Debug Info: Add basic support for external types references. (Adrian Prantl, 2015-07-15; 1 file changed, -0/+51)
  This is a necessary prerequisite for bootstrapping the emission of debug info inside modules.

  - Adds a FlagExternalTypeRef to DICompositeType. External types must have a unique identifier.
  - External type references are emitted using a forward declaration with a DW_AT_signature([DW_FORM_ref_sig8]) based on the UID.

  http://reviews.llvm.org/D9612

  llvm-svn: 242302
* Add missing load/store flags to thumb2 instructions. (Pete Cooper, 2015-07-15; 1 file changed, -1/+1)
  These were the cause of a verifier error when building 7zip with -verify-machineinstrs.

  Running 'make check' with the verifier triggered the same error on the test here, so I've updated the test to run the verifier on one of its runs instead of adding a new one.

  While looking at this code, there was a stale comment that these instructions were only used for disassembly. This probably used to be the case, but they are now used in the 'ARM load / store optimization pass' too.

  llvm-svn: 242300
* Look through PHIs to find additional register sources (Bruno Cardoso Lopes, 2015-07-15; 1 file changed, -0/+84)
  - Teaches the ValueTracker in the PeepholeOptimizer to look through PHI instructions.
  - Adds a findNextSourceAndRewritePHI method to look up the multiple sources returned by the ValueTracker and rewrite PHIs with new sources.

  With these changes we can find more register sources and rewrite more copies to allow coalescing of bitcast instructions. Hence, we eliminate unnecessary VR64 <-> GR64 copies in x86, but it could be extended to other archs by marking "isBitcast" on target specific instructions. The x86 example follows:

    A:
      psllq %mm1, %mm0
      movd  %mm0, %r9
      jmp   C
    B:
      por  %mm1, %mm0
      movd %mm0, %r9
      jmp  C
    C:
      movd   %r9, %mm0
      pshufw $238, %mm0, %mm0

  Becomes:

    A:
      psllq %mm1, %mm0
      jmp   C
    B:
      por %mm1, %mm0
      jmp C
    C:
      pshufw $238, %mm0, %mm0

  Differential Revision: http://reviews.llvm.org/D11197

  rdar://problem/20404526

  llvm-svn: 242295
* [PPC] Disassemble little endian ppc instructions in the right byte order (Benjamin Kramer, 2015-07-15; 1 file changed, -0/+664)
  PR24122. The test is simply a byte swapped version of ppc64-encoding.txt.

  llvm-svn: 242288
* [SDAG] Optimize unordered comparison in soft-float mode (patch by Anton Nadolskiy) (Alexey Bataev, 2015-07-15; 5 files changed, -51/+158)
  The current implementation handles unordered comparisons poorly in soft-float mode. Consider (a ULE b), which is a <= b. It is lowered (in general) to (__ledf2(a, b) <= 0 || __unorddf2(a, b) != 0). We can do a better job by lowering it to (__gtdf2(a, b) <= 0). Analogous replacements work for the other unordered comparisons (ult, ugt, uge): in general, we just call the same function as for the ordered case but negate the comparison against zero.

  Differential Revision: http://reviews.llvm.org/D10804

  llvm-svn: 242280
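  A hedged IR-level illustration (invented function; not one of the changed tests), with the intended soft-float lowering sketched in comments:

    define i1 @cmp_ule(double %a, double %b) {
      ; old soft-float expansion (roughly):  (__ledf2(a, b) <= 0) || (__unorddf2(a, b) != 0)
      ; new soft-float expansion (roughly):  __gtdf2(a, b) <= 0
      %cmp = fcmp ule double %a, %b
      ret i1 %cmp
    }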
* [PowerPC] Use the MachineCombiner to reassociate fadd/fmul (Hal Finkel, 2015-07-15; 1 file changed, -0/+188)
  This is a direct port of the code from the X86 backend (r239486/r240361), which uses the MachineCombiner to reassociate (floating-point) adds/muls to increase ILP, to the PowerPC backend.

  The rationale is the same. There is a lot of copy-and-paste here between the X86 code and the PowerPC code, and we should extract at least some of this into CodeGen somewhere. However, I don't want to do that until this code is enhanced to handle FMAs as well. After that, we'll be in a better position to extract the common parts.

  llvm-svn: 242279
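  A rough, made-up example of the reassociation being exercised (not the added test): a serial chain of fast-math adds is rebalanced so two of the additions can execute in parallel.

    define double @chain(double %a, double %b, double %c, double %d) {
      ; serial form ((a + b) + c) + d: each fadd waits on the previous one;
      ; the combiner can rebalance it to (a + b) + (c + d)
      %t0 = fadd fast double %a, %b
      %t1 = fadd fast double %t0, %c
      %t2 = fadd fast double %t1, %d
      ret double %t2
    }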
* [AArch64] Fix problems in decoding generic MSR instructions (Petr Pavlu, 2015-07-15; 1 file changed, -0/+4)
  Bitpatterns rejected by the decoder method of `MSR (immediate)` should be decoded as the `extended MSR (register)` instruction.

  Differential Revision: http://reviews.llvm.org/D7174

  llvm-svn: 242276
* [TableGen] Improve decoding options for non-orthogonal instructions (Petr Pavlu, 2015-07-15; 3 files changed, -0/+131)
  When FixedLenDecoder matches an input bitpattern of form [01]+ with an instruction bitpattern of form [01?]+ (where 0/1 are static bits and ? are mixed/variable bits), it passes the input bitpattern to a specific instruction decoder method which then makes a final decision whether the bitpattern is a valid instruction or not. This means the decoder must handle all possible values of the variable bits, which sometimes leads to opcode rewrites in the decoder method when the instructions are not fully orthogonal.

  The patch provides a way for the decoder method to say that when it returns Fail it does not necessarily mean the bitpattern is invalid, but rather that the bitpattern is definitely not an instruction that is recognized by the decoder method. The decoder can then try to match the input bitpattern with other possible instruction bitpatterns.

  For example, this allows solving a situation on AArch64 where the `MSR (immediate)` instruction has the form

    1101 0101 0000 0??? 0100 ???? ???1 1111

  but not all values of the ? bits are allowed. The rejected values should be handled by the `extended MSR (register)` instruction:

    1101 0101 000? ???? ???? ???? ???? ????

  The decoder will first try to decode an input bitpattern that matches both bitpatterns as `MSR (immediate)`, but currently this puts the decoder method of `MSR (immediate)` into a situation where it must be able to decode all possible values of the ? bits, i.e. it would need to rewrite the instruction to `MSR (register)` when it is not `MSR (immediate)`.

  The patch allows specifying that the decoder method cannot determine if the instruction is valid for all variable values. The decoder method can simply return Fail when it knows it is definitely not `MSR (immediate)`. The decoder will then backtrack the decoding and find that it can match the input bitpattern with the more generic `MSR (register)` bitpattern too.

  Differential Revision: http://reviews.llvm.org/D7174

  llvm-svn: 242274
* [X86][SSE] Added i686/SSE2 vector shift tests. (Simon Pilgrim, 2015-07-15; 3 files changed, -34/+987)
  We were only testing on x86-64, but we should be ensuring decent code gen of i64 shifts on 32-bit targets.

  llvm-svn: 242273