path: root/llvm/lib
Commit message | Author | Age | Files | Lines
...
* AMDGPU: VOP3b definition cleanups | Matt Arsenault | 2015-09-26 | 2 | -18/+25
    llvm-svn: 248647
* AMDGPU: Fix sched model for VOP2b instructions | Matt Arsenault | 2015-09-26 | 1 | -6/+7
    Trying to use the version with the explicit output operand would complain
    because of the missing WriteSALU. I'm not sure why it doesn't complain
    about this with the implicit VCC def.

    llvm-svn: 248646
* [WebAssembly] Rename several functions and types according to the new spec. | Dan Gohman | 2015-09-26 | 10 | -158/+158
    llvm-svn: 248644
* [ARM] Don't generate clrex for pre-v7 targets. | Ahmed Bougacha | 2015-09-26 | 1 | -0/+2
    Since r248294, we emit clrex, but it doesn't exist on v6.

    llvm-svn: 248640
* [SCEV] Reapply 'Teach isLoopBackedgeGuardedByCond to exploit trip counts' | Sanjoy Das | 2015-09-25 | 1 | -0/+16
    Summary: If the trip count of a specific backedge is `N`, then we know
    that backedge is effectively guarded by the condition `{0,+,1} u< N`.
    This change teaches SCEV to use this condition to prove things in
    `isLoopBackedgeGuardedByCond`.

    Depends on D12948
    Depends on D12949

    The original checkin, r248608, had to be backed out due to an issue with
    an ObjCXX unit test. That issue is now fixed, so re-landing.

    Reviewers: atrick, reames, majnemer, hfinkel
    Subscribers: llvm-commits
    Differential Revision: http://reviews.llvm.org/D12950
    llvm-svn: 248638
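    For the record, a short justification (an added illustration, not text
    from the patch): in SCEV notation, {0,+,1} is the canonical induction
    variable, starting at 0 and stepping by 1, so on iteration i it
    evaluates to i. If the backedge-taken count is N, the backedge is taken
    exactly on iterations i = 0, 1, ..., N-1, and on each of those

        \{0,+,1\}(i) = i \le N - 1 < N,

    so the unsigned guard {0,+,1} u< N holds every time the backedge runs.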
* [SCEV] Reapply 'Exploit A < B => (A+K) < (B+K) when possible' | Sanjoy Das | 2015-09-25 | 1 | -0/+143
    Summary: This change teaches SCEV's `isImpliedCond` two new identities:

        A u< B u< -C  =>  (A + C) u< (B + C)
        A s< B s< INT_MIN - C  =>  (A + C) s< (B + C)

    While these are useful on their own, they're really intended to support
    D12950.

    The original checkin, r248606, had to be backed out due to an issue with
    an ObjCXX unit test. That issue is now fixed, so re-landing.

    Reviewers: atrick, reames, majnemer, nlewycky, hfinkel
    Subscribers: aadg, sanjoy, llvm-commits
    Differential Revision: http://reviews.llvm.org/D12948
    llvm-svn: 248637
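    A short soundness argument for the unsigned identity (an added
    illustration, not text from the patch): in n-bit unsigned arithmetic,
    -C denotes 2^n - C, so the premise B u< -C says exactly that
    B + C < 2^n, i.e. the sum B + C does not wrap; and since A u< B, the
    sum A + C does not wrap either. Unwrapped modular sums agree with the
    true integer sums, so

        A < B \implies A + C < B + C < 2^n \implies (A + C) u< (B + C).

    The signed identity is the analogous statement, with B s< INT_MIN - C
    ruling out signed overflow of B + C.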
* LivePhysRegs: Fix live-outs of return blocks | Matthias Braun | 2015-09-25 | 1 | -2/+10
    I realized that the live-out set computed for the return block is missing
    the callee saved registers (the non-pristine ones to be exact). This only
    affects the liveness computed for instructions inside the function
    epilogue, which currently none of the LivePhysRegs users in llvm cares
    about, so this is just a drive-by fix without a testcase.

    Differential Revision: http://reviews.llvm.org/D13180
    llvm-svn: 248636
* [InstCombine] match De Morgan's Law hidden by zext ops (PR22723) | Sanjay Patel | 2015-09-25 | 1 | -5/+25
    This is a fix for PR22723: https://llvm.org/bugs/show_bug.cgi?id=22723

    My first attempt at this was to change what I thought was the root
    problem:

        xor (zext i1 X to i32), 1 --> zext (xor i1 X, true) to i32

    ...but we create the opposite pattern in InstCombiner::visitZExt(), so
    infinite loop! My next idea was to fix the matchIfNot() implementation
    in PatternMatch, but that would mean potentially returning a different
    size for the match than what was input. I think this would require all
    users of m_Not to check the size of the returned match, so I abandoned
    that idea.

    I settled on just fixing the exact case presented in the PR. This patch
    does allow the 2 functions in PR22723 to compile identically (x86):

        bool test(bool x, bool y) { return !x | !y; }
        bool test(bool x, bool y) { return !x || !y; }
        ...
        andb %sil, %dil
        xorb $1, %dil
        movb %dil, %al
        retq

    Differential Revision: http://reviews.llvm.org/D12705
    llvm-svn: 248634
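    The underlying identity, written out (an added illustration): for i1
    values A and B, xor-by-1 after a zext is the zext of the logical not,
    and zext distributes over bitwise ops on i1 inputs, so

        ((zext A) \oplus 1) | ((zext B) \oplus 1)
            = zext(\lnot A) | zext(\lnot B)
            = zext(\lnot A \lor \lnot B)
            = zext(\lnot (A \land B)),

    which is exactly the single andb + xorb $1 sequence in the x86 output
    above.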
* Use fixed-point representation for BranchProbability. | Cong Hou | 2015-09-25 | 1 | -5/+51
    BranchProbability is currently represented by its numerator and
    denominator, both of uint32_t type. This patch changes this
    representation into a fixed point that is represented by the numerator
    in uint32_t type and a constant denominator 1<<31. This is quite similar
    to the representation of BlockMass in BlockFrequencyInfoImpl.h. There
    are several pros and cons of this change:

    Pros:
    1. It uses only half the space of the current representation.
    2. Some operations are much faster, like addition, subtraction,
       comparison, and scaling by an integer.

    Cons:
    1. Constructing a probability from an arbitrary numerator and
       denominator needs additional calculations.
    2. It is a little less precise than before, as we use a fixed
       denominator. For example, 1 - 1/3 may not be exactly identical to 2/3
       (this will lead to many BranchProbability unit test failures). This
       should not matter when we only use it for branch probability. If we
       use it like a rational value for some precise calculations we may
       need another construct like ValueRatio.

    One important reason for this change is that we propose to store branch
    probabilities instead of edge weights in MachineBasicBlock. We also want
    clients to use probability instead of weight when adding successors to a
    MBB. The current BranchProbability has more space which may be a
    concern.

    Differential revision: http://reviews.llvm.org/D12603
    llvm-svn: 248633
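    A minimal sketch of the scheme (hypothetical class and member names,
    not the actual llvm::BranchProbability interface):

        #include <cstdint>

        class FixedProb {
          static const uint32_t D = 1u << 31; // implicit, constant denominator
          uint32_t N;                         // numerator; value is N / D
          explicit FixedProb(uint32_t Raw) : N(Raw) {}
        public:
          // Constructing from an arbitrary Num/Den pair needs a scaling
          // divide and may round (the precision "con" noted above).
          FixedProb(uint32_t Num, uint32_t Den)
              : N(static_cast<uint32_t>(static_cast<uint64_t>(Num) * D / Den)) {}
          // With a shared denominator, these reduce to plain integer ops,
          // which is the main speed win.
          bool operator<(FixedProb O) const { return N < O.N; }
          FixedProb operator-(FixedProb O) const { return FixedProb(N - O.N); }
          // Scale an integer by the probability: Num * N / D.
          uint64_t scale(uint64_t Num) const {
            return static_cast<uint64_t>(Num) * N >> 31; // sketch: assumes Num < 2^33
          }
        };

    For example, FixedProb(1, 3) stores floor(2^31 / 3), which is why
    1 - 1/3 compared against 2/3 can be off by a unit in the last place.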
* SelectionDAGDumper: Print simple operands inline. | Matthias Braun | 2015-09-25 | 1 | -22/+37
    Print simple operands inline instead of their pointer/value number.
    Simple operands are SDNodes without predecessors like Constant(FP),
    Register, UNDEF. This unifies the behaviour with dumpr() which was
    already doing this.

    Previously:
        t0: ch = EntryToken
        t1: i64 = Register %vreg0
        t2: i64,ch = CopyFromReg t0, t1
        t3: i64 = Constant<1>
        t4: i64 = add t2, t3
        t5: i64 = Constant<2>
        t6: i64 = add t2, t5
        t10: i64 = undef
        t11: i8,ch = load t0, t2, t10<LD1[%tmp81]>
        t12: i8,ch = load t0, t4, t10<LD1[%tmp10]>
        t13: i8,ch = load t0, t6, t10<LD1[%tmp12]>

    Now:
        t0: ch = EntryToken
        t2: i64,ch = CopyFromReg t0, Register:i64 %vreg0
        t4: i64 = add t2, Constant:i64<1>
        t6: i64 = add t2, Constant:i64<2>
        t11: i8,ch = load<LD1[%tmp81]> t0, t2, undef:i64
        t12: i8,ch = load<LD1[%tmp10]> t0, t4, undef:i64
        t13: i8,ch = load<LD1[%tmp12]> t0, t6, undef:i64

    Differential Revision: http://reviews.llvm.org/D12567
    llvm-svn: 248628
* AMDGPU: Construct new buffer instruction when moving SMRD | Matt Arsenault | 2015-09-25 | 2 | -31/+39
    It's easier to understand creating a full instruction than the current
    situation where sometimes a new instruction is created and sometimes it
    is awkwardly mutated in place.

    llvm-svn: 248627
* DAGCombiner: Check if store is volatile first | Matt Arsenault | 2015-09-25 | 1 | -3/+3
    This is the simpler check. NFC.

    llvm-svn: 248625
* TargetRegisterInfo: Introduce PrintLaneMask. | Matthias Braun | 2015-09-25 | 9 | -26/+25
    This makes it more convenient to print lane masks and leads to more
    uniform printing.

    llvm-svn: 248624
* TargetRegisterInfo: Add typedef unsigned LaneBitmask and use it where appropriate; NFC | Matthias Braun | 2015-09-25 | 12 | -88/+90
    llvm-svn: 248623
* merge vector stores into wider vector stores and fix AArch64 misaligned access TLI hook (PR21711) | Sanjay Patel | 2015-09-25 | 2 | -14/+47
    This is a redo of D7208 (r227242 -
    http://llvm.org/viewvc/llvm-project?view=revision&revision=227242).

    The patch was reverted because an AArch64 target could infinite loop
    after the change in DAGCombiner to merge vector stores. That happened
    because AArch64's allowsMisalignedMemoryAccesses() wasn't telling the
    truth. It reported all unaligned memory accesses as fast, but then split
    some 128-bit unaligned accesses up in performSTORECombine() because they
    are slow.

    This patch attempts to fix the problem in AArch64's
    allowsMisalignedMemoryAccesses() while preserving existing (perhaps
    questionable) lowering behavior.

    The x86 test shows that store merging is working as intended for a
    target with fast 32-byte unaligned stores.

    Differential Revision: http://reviews.llvm.org/D12635
    llvm-svn: 248622
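    A sketch of the shape such a fix takes in the hook (illustrative, not
    the exact code from the patch; requiresStrictAlign() is the assumed
    subtarget query):

        bool AArch64TargetLowering::allowsMisalignedMemoryAccesses(
            EVT VT, unsigned AddrSpace, unsigned Align, bool *Fast) const {
          if (Subtarget->requiresStrictAlign())
            return false; // unaligned accesses fault; never allowed
          if (Fast)
            // Key point: stop reporting every unaligned access as fast.
            // 128-bit accesses that performSTORECombine() would later
            // split must be reported slow, so DAGCombiner doesn't merge
            // stores into them and then loop.
            *Fast = VT.getSizeInBits() != 128;
          return true;
        }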
* PrologueEpilogInserter: Fix missing live-ins when savepoint equals restorepoint | Matthias Braun | 2015-09-25 | 1 | -3/+6
    The algorithm would not modify the live-in list of blocks below the save
    point, which is correct unless the save point happens to be a restore
    point at the same time. Also fixes the benign issue of live-in registers
    being added twice in some cases.

    The testcase is based on a test submitted by Kit Barton.

    Differential Revision: http://reviews.llvm.org/D13176
    llvm-svn: 248620
* AMDGPU/SI: Use .hsatext section instead of .text for HSA | Tom Stellard | 2015-09-25 | 17 | -9/+197
    Reviewers: arsenm, grosbach, rafael
    Subscribers: arsenm, llvm-commits
    Differential Revision: http://reviews.llvm.org/D12424
    llvm-svn: 248619
* MCAsmInfo: Allow targets to specify when the .section directive should be omitted | Tom Stellard | 2015-09-25 | 2 | -6/+7
    Summary: The default behavior is to omit the .section directive for
    .text, .data, and sometimes .bss, but some targets may want to omit this
    directive for other sections too.

    The AMDGPU backend will use this to emit a simplified syntax for section
    switches. For example, if the section directive is not omitted (current
    behavior), section switches to .hsatext will be printed like this:

        .section .hsatext,#alloc,#execinstr,#write

    This is actually wrong, because .hsatext has some custom STT_* flags,
    which MC doesn't know how to print or parse. If the section directive is
    omitted (made possible by this commit), section switches will be printed
    like this:

        .hsatext

    The motivation for this patch is to make it possible to emit sections
    with custom STT_* flags without having to teach MC about all the target
    specific STT_* flags.

    Reviewers: rafael, grosbach
    Subscribers: llvm-commits
    Differential Revision: http://reviews.llvm.org/D12423
    llvm-svn: 248618
* MachineBasicBlock: Factor out common code into isReturnBlock() | Matthias Braun | 2015-09-25 | 7 | -17/+10
    llvm-svn: 248617
* Revert two SCEV changes that caused test failures in clang. | Sanjoy Das | 2015-09-25 | 1 | -159/+0
    r248606: "[SCEV] Exploit A < B => (A+K) < (B+K) when possible"
    r248608: "[SCEV] Teach isLoopBackedgeGuardedByCond to exploit trip
    counts."

    llvm-svn: 248614
* ADCE: Fix typo in file comment. NFC | Justin Bogner | 2015-09-25 | 1 | -1/+1
    llvm-svn: 248613
* PeepholeOptimizer: Remove redundant copies | Matt Arsenault | 2015-09-25 | 1 | -0/+79
    If a virtual register is copied and another copy was already seen,
    replace with the previous copy. This only handles the simplest cases
    for now.

    This pattern shows up from various operand restrictions AMDGPU has which
    require inserting copies depending on the register class of the
    operands.

    llvm-svn: 248611
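    A sketch of the simplest case described above (illustrative; the real
    pass needs more checks, e.g. virtual-register and subregister-index
    tests, and only works within a block):

        DenseMap<unsigned, MachineInstr *> FirstCopy; // source vreg -> first COPY
        for (auto I = MBB.begin(), E = MBB.end(); I != E;) {
          MachineInstr *MI = &*I++;            // advance before possible erase
          if (!MI->isCopy())
            continue;
          unsigned Dst = MI->getOperand(0).getReg();
          unsigned Src = MI->getOperand(1).getReg();
          auto Found = FirstCopy.find(Src);
          if (Found == FirstCopy.end()) {
            FirstCopy[Src] = MI;               // remember the first copy we see
            continue;
          }
          unsigned PrevDst = Found->second->getOperand(0).getReg();
          if (MRI->getRegClass(Dst) == MRI->getRegClass(PrevDst)) {
            MRI->replaceRegWith(Dst, PrevDst); // reuse the earlier copy's result
            MI->eraseFromParent();             // the second copy is now dead
          }
        }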
* Simplify code. NFC. | Chad Rosier | 2015-09-25 | 1 | -6/+1
    llvm-svn: 248610
* more space; NFC | Sanjay Patel | 2015-09-25 | 1 | -0/+1
    llvm-svn: 248609
* [SCEV] Teach isLoopBackedgeGuardedByCond to exploit trip counts. | Sanjoy Das | 2015-09-25 | 1 | -0/+16
    Summary: If the trip count of a specific backedge is `N`, then we know
    that backedge is effectively guarded by the condition `{0,+,1} u< N`.
    This change teaches SCEV to use this condition to prove things in
    `isLoopBackedgeGuardedByCond`.

    Depends on D12948
    Depends on D12949

    Reviewers: atrick, reames, majnemer, hfinkel
    Subscribers: llvm-commits
    Differential Revision: http://reviews.llvm.org/D12950
    llvm-svn: 248608
* [SCEV] Extract helper function from isImpliedCond; NFC | Sanjoy Das | 2015-09-25 | 1 | -0/+8
    Summary: This new helper routine will be used in a subsequent change.

    Reviewers: hfinkel
    Subscribers: hfinkel, sanjoy, llvm-commits
    Differential Revision: http://reviews.llvm.org/D12949
    llvm-svn: 248607
* [SCEV] Exploit A < B => (A+K) < (B+K) when possible | Sanjoy Das | 2015-09-25 | 1 | -0/+143
    Summary: This change teaches SCEV's `isImpliedCond` two new identities:

        A u< B u< -C  =>  (A + C) u< (B + C)
        A s< B s< INT_MIN - C  =>  (A + C) s< (B + C)

    While these are useful on their own, they're really intended to support
    D12950.

    Reviewers: atrick, reames, majnemer, nlewycky, hfinkel
    Subscribers: aadg, sanjoy, llvm-commits
    Differential Revision: http://reviews.llvm.org/D12948
    llvm-svn: 248606
* AMDGPU: Make getNamedOperandIdx declaration readonly | Matt Arsenault | 2015-09-25 | 2 | -0/+3
    This matches how it is defined in the generated implementation.

    llvm-svn: 248598
* [AArch64] Add support for generating pre- and post-index load/store pairs. | Chad Rosier | 2015-09-25 | 1 | -43/+173
    llvm-svn: 248593
* AMDGPU: Disable some passes that are not meaningful | Matt Arsenault | 2015-09-25 | 1 | -3/+15
    Don't run passes related to stack maps, garbage collection, or
    exceptions, since these aren't useful for GPUs.

    There might be a few more to turn off that I'm less sure about
    (e.g. ShrinkWrapping), or that I'm not sure how to disable (SafeStack
    and StackProtector).

    llvm-svn: 248591
* AMDGPU: Handle i64->v2i32 loads/stores in PreprocessISelDAG | Matt Arsenault | 2015-09-25 | 1 | -52/+61
    This fixes a select error when the i64 source was also bitcasted to
    v2i32 in the original source.

    Instead of awkwardly trying to select the modified source value and the
    store, replace it before isel begins. Uses a worklist to avoid possible
    problems from mutating the DAG, although it seems to work OK without it.

    llvm-svn: 248589
* AMDGPU: Fix recomputing dominator tree unnecessarily | Matt Arsenault | 2015-09-25 | 6 | -1/+19
    SIFixSGPRCopies does not modify the CFG, but the dominator tree was
    being recomputed before running SIFoldOperands.

    llvm-svn: 248587
* AMDGPU: Re-justify workaround and fix worked around problem | Matt Arsenault | 2015-09-25 | 2 | -56/+65
    When buffer resource descriptors were built, the upper two components of
    the descriptor were first composed into a 64-bit register because
    legalizeOperands assumed all operands had the same register class. Fix
    that problem, but keep the workaround. I'm not sure anything is actually
    emitting such a REG_SEQUENCE now.

    If multiple resource descriptors are set up with different base
    pointers, this is copied with a single s_mov_b64. We probably should fix
    this better by recognizing a pair of s_mov_b32 later, but for now delete
    the dead code.

    llvm-svn: 248585
* AMDGPU: Don't create REG_SEQUENCE with SGPR dest and VGPR sources | Matt Arsenault | 2015-09-25 | 1 | -6/+15
    This avoids needing to re-legalize the new REG_SEQUENCE.

    llvm-svn: 248584
* AMDGPU: Fix not adding exec to defs of cmpx instruction pseudos | Matt Arsenault | 2015-09-25 | 1 | -0/+2
    This was only set on the final _si/_vi version, but not on the pseudos
    most of codegen sees. No test since these instructions aren't used yet.

    llvm-svn: 248583
* AMDGPU: Improve accuracy of instruction rates for VOPC | Matt Arsenault | 2015-09-25 | 2 | -48/+73
    These were all using the default 32-bit VALU write class, but the
    i64/f64 compares are half rate.

    I'm not sure this is really correct, because they are still using the
    VALU write class, even though they really write to the SALU.

    llvm-svn: 248582
* [GlobalsAA] Teach GlobalsAA about nocapture | James Molloy | 2015-09-25 | 1 | -1/+61
    Arguments to function calls marked "nocapture" can be marked as
    non-escaping. However, nocapture is defined in terms of the lifetime of
    the callee, and if the callee can directly or indirectly recurse to the
    caller, the semantics of nocapture are invalid.

    Therefore, we eagerly discover which SCC each function belongs to, and
    later can check if the callee and caller of a callsite belong to the
    same SCC, in which case there could be recursion.

    This means that we can't be so optimistic in
    getModRefInfo(ImmutableCallsite) - previously we assumed all call
    arguments never aliased with an escaping global. Now we need to check,
    because a global could now be passed as an argument but still not
    escape.

    This also solves a related conformance problem: MemCpyOptimizer can turn
    non-escaping stores of globals into calls to intrinsics like
    llvm.memcpy/llvm.memset. This confuses GlobalsAA, which knows the global
    can't escape and so returns NoModRef when queried, when obviously a
    memcpy/memset call does indeed reference and modify its arguments.

    This fixes PR24800, PR24801, and PR24802.

    llvm-svn: 248576
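    A sketch of the eager SCC numbering described above (illustrative
    names; the real pass differs in detail):

        #include "llvm/ADT/DenseMap.h"
        #include "llvm/ADT/SCCIterator.h"
        #include "llvm/Analysis/CallGraph.h"
        using namespace llvm;

        void numberSCCs(CallGraph &CG,
                        DenseMap<const Function *, unsigned> &SCCNo) {
          unsigned N = 0;
          for (auto I = scc_begin(&CG); !I.isAtEnd(); ++I, ++N)
            for (CallGraphNode *Node : *I)
              if (Function *F = Node->getFunction())
                SCCNo[F] = N;
        }

        // If the callee can (transitively) call back into the caller, the
        // two sit on a call-graph cycle and therefore share an SCC, so a
        // nocapture argument cannot be treated as non-escaping.
        bool mayRecurseToCaller(const Function *Caller, const Function *Callee,
                                const DenseMap<const Function *, unsigned> &SCCNo) {
          return SCCNo.lookup(Caller) == SCCNo.lookup(Callee);
        }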
* ARM: make -Asserts,-Werror=unused-variable build happy | Saleem Abdulrasool | 2015-09-25 | 1 | -4/+4
    The value was only used in an assertion. Sink the variable usage into
    the assertion.

    llvm-svn: 248562
* ARM: address WoA division limitation | Saleem Abdulrasool | 2015-09-25 | 3 | -8/+147
    We now emit the compiler-generated divide-by-zero check that was needed
    for the MSVC routines. We construct a pseudo-instruction for the DBZ
    check, as the operation requires splitting up the BB. For the 64-bit
    operations, we need to custom expand the node, as we need to insert the
    DBZ check and then emit the libcall to the appropriate name. Because
    this is target specific, it seemed better to reproduce the expansion
    operation from the target-agnostic type legalization rather than sink
    this there to avoid the duplication. The division library calls now
    match MSVC semantically.

    llvm-svn: 248561
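    Semantically, each emitted division now behaves like this C-level
    sketch (the handler name __rt_div0 is an assumption for illustration;
    the real lowering expands a pseudo-instruction into a compare, a
    conditional branch, and a libcall):

        int32_t checked_sdiv(int32_t n, int32_t d) {
          if (d == 0)
            __rt_div0();   // hypothetical divide-by-zero handler name
          return n / d;    // lowered to a call to the MSVC divide routine
        }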
* AMDGPU: Remove unused includes | Matt Arsenault | 2015-09-25 | 1 | -6/+0
    llvm-svn: 248553
* [Bitcode][Asm] Teach LLVM to read and write operand bundles. | Sanjoy Das | 2015-09-24 | 5 | -12/+247
    Summary: This also adds the first set of tests for operand bundles.

    The optimizer has not been audited to ensure that it does the right
    thing with operand bundles.

    Depends on D12456.

    Reviewers: reames, chandlerc, majnemer, dexonsmith, kmod,
    JosephTremoulet, rnk, bogner
    Subscribers: maksfb, llvm-commits
    Differential Revision: http://reviews.llvm.org/D12457
    llvm-svn: 248551
* Fix typo | Matt Arsenault | 2015-09-24 | 1 | -1/+1
    llvm-svn: 248549
* [AArch64] Improve the readability of the ld/st optimization pass. NFC. | Chad Rosier | 2015-09-24 | 1 | -4/+4
    In this context, MI is an add/sub instruction, not a load/store.

    llvm-svn: 248540
* [X86][SSE2] Fix zero/any extension shuffles that don't start from the first element | Simon Pilgrim | 2015-09-24 | 1 | -5/+7
    Fix for D12561 - we weren't correctly ensuring that the base element for
    extension was moved to start on a boundary suitable for UNPCKL/H.

    llvm-svn: 248536
* AMDGPU: Add s_dcache_* instructions | Matt Arsenault | 2015-09-24 | 5 | -14/+69
    llvm-svn: 248533
* AMDGPU: Add cache invalidation instructions. | Matt Arsenault | 2015-09-24 | 3 | -4/+34
    These are necessary for implementing mem_fence for OpenCL 2.0. The VI
    assembler tests are disabled since it seems to be using the wrong
    encoding or opcode.

    llvm-svn: 248532
* [AArch64] The paired post-increment store instruction has an output register. | Chad Rosier | 2015-09-24 | 1 | -2/+2
    The pre- and post-increment versions update the base register, but the
    post-increment version was defined incorrectly. There is no test case,
    as we don't currently generate these instructions, but I plan on
    changing that in the near future.

    llvm-svn: 248528
* [IR] Add operand bundles to CallInst and InvokeInst. | Sanjoy Das | 2015-09-24 | 5 | -4/+59
    Summary: This change teaches `CallInst`s and `InvokeInst`s to maintain a
    set of operand bundles as part of their operands. `CallInst`s and
    `InvokeInst`s with operand bundles co-allocate some space before their
    `Use` array to hold meta information about which of their operands are
    part of an operand bundle.

    The strings corresponding to the bundle tags are interned into
    `LLVMContextImpl::BundleTagCache`.

    This change does not include any parsing / bitcode support. That's the
    next change.

    Depends on D12455.

    Reviewers: reames, chandlerc, majnemer, dexonsmith, kmod,
    JosephTremoulet, rnk, bogner
    Subscribers: MatzeB, sanjoy, llvm-commits
    Differential Revision: http://reviews.llvm.org/D12456
    llvm-svn: 248527
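    A sketch of building a call that carries an operand bundle through the
    new C++ API (usage is illustrative; the "deopt" tag and the helper name
    are made up for the example):

        #include "llvm/IR/Instructions.h"
        using namespace llvm;

        CallInst *createCallWithDeopt(Function *F, ArrayRef<Value *> Args,
                                      Value *X, Value *Y) {
          SmallVector<Value *, 2> BundleInputs = {X, Y};
          OperandBundleDef Deopt("deopt", BundleInputs);
          // The bundle inputs become extra operands of the call, tracked
          // by the co-allocated bundle metadata described above.
          return CallInst::Create(F, Args, {Deopt}, "call");
        }

    With the textual support added by the follow-up (D12457), such a call
    prints roughly as: %call = call i32 @f(i32 %a) [ "deopt"(i32 %x, i32 %y) ]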
* [ARM] Handle +t2dsp feature as an ArchExtKind in ARMTargetParser.def | Artyom Skrobov | 2015-09-24 | 13 | -98/+94
    Currently, the availability of DSP instructions (ACLE 6.4.7) is handled
    in a hand-rolled tricky condition block in
    tools/clang/lib/Basic/Targets.cpp, with a FIXME: attached.

    This patch changes the handling of +t2dsp to be in line with other
    architecture extensions.

    Following a revert of r248152 and new review comments, this patch also
    includes renaming FeatureDSPThumb2 -> FeatureDSP, hasThumb2DSP() ->
    hasDSP(), etc. The spelling of "t2dsp" is preserved, pending a further
    investigation of its possible external usage.

    Differential Revision: http://reviews.llvm.org/D12937
    llvm-svn: 248519
* [ValueTracking] Teach isKnownNonZero a new trick | James Molloy | 2015-09-24 | 1 | -0/+17
    If the shifter operand is a constant, and all of the bits shifted out
    are known to be zero, then if X is known non-zero at least one non-zero
    bit must remain.

    llvm-svn: 248508
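    A simplified sketch of the rule for shl (2015-era ValueTracking style;
    signatures are approximate and Q is the pass-through query struct, so
    treat the details as assumptions):

        Value *X;
        ConstantInt *Shift;
        if (match(I, m_Shl(m_Value(X), m_ConstantInt(Shift)))) {
          APInt KnownZero(BitWidth, 0), KnownOne(BitWidth, 0);
          computeKnownBits(X, KnownZero, KnownOne, DL, Depth, Q);
          // The shift discards the top Shift bits. If those are all known
          // zero, every set bit of X survives the shift, so X != 0
          // implies (X << Shift) != 0.
          if (KnownZero.countLeadingOnes() >= Shift->getZExtValue())
            return isKnownNonZero(X, DL, Depth, Q);
        }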