path: root/llvm/lib/CodeGen/SelectionDAG
Commit message (Author, Date; Files, Lines -removed/+added)
...
* Convert a vselect into a concat_vector if possible (Filipe Cabecinhas, 2014-05-30; 1 file, -0/+61)
  Summary: If both vector args to vselect are concat_vectors and the condition is
  constant and picks half a vector from each argument, convert the vselect into a
  concat_vectors. Added a test. ConvertSelectToConcatVector assumes it doesn't get
  vselects with arguments of, for example, <undef, undef, true, true>; those are
  taken care of by the checks above its call.
  Reviewers: nadav, delena, grosbach, hfinkel
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D3916
  llvm-svn: 209929
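  As a value-level illustration of this combine (a standalone sketch with made-up
  values, not the DAGCombiner code itself): when the constant condition takes one
  whole half from each concatenated operand, the per-element select collapses into
  a single concatenation.

      #include <array>
      #include <cassert>

      int main() {
        // Operands of the vselect: each is a concatenation of two halves.
        std::array<int, 2> A{1, 2}, B{3, 4};            // lhs = concat(A, B)
        std::array<int, 2> C{5, 6}, D{7, 8};            // rhs = concat(C, D)
        std::array<int, 4> lhs{A[0], A[1], B[0], B[1]};
        std::array<int, 4> rhs{C[0], C[1], D[0], D[1]};

        // Constant condition <true, true, false, false>: the first half comes
        // entirely from lhs, the second half entirely from rhs.
        std::array<bool, 4> cond{true, true, false, false};
        std::array<int, 4> sel;
        for (int i = 0; i < 4; ++i)
          sel[i] = cond[i] ? lhs[i] : rhs[i];

        // Equivalent result: concat(A, D), with no per-element select left.
        std::array<int, 4> cat{A[0], A[1], D[0], D[1]};
        assert(sel == cat);
      }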
* [pr19636] Fix known bit computation in urem instruction with power of two. (Rafael Espindola, 2014-05-30; 1 file, -2/+5)
  Patch by Andrey Kuharev.
  llvm-svn: 209902
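  The known-bits rule for this case rests on a simple identity (the sketch below
  is an independent illustration, not the patch): when the divisor is a power of
  two, x urem 2^k equals x & (2^k - 1), so every bit at position k and above is
  known zero and the low k bits are the corresponding known bits of x.

      #include <cassert>
      #include <cstdint>

      int main() {
        const uint32_t k = 3, d = 1u << k;        // d = 8, a power of two
        const uint32_t tests[] = {0, 5, 8, 123, 0xFFFFFFFFu};
        for (uint32_t x : tests) {
          assert(x % d == (x & (d - 1)));         // x urem 8 == x & 7
          assert((x % d) >> k == 0);              // bits k and above are zero
        }
      }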
* SelectionDAG: skip barriers for unordered atomic operations (Tim Northover, 2014-05-30; 1 file, -2/+2)
  Unordered is strictly weaker than monotonic, so if the latter doesn't have any
  barriers then the former certainly shouldn't.
  rdar://problem/16548260
  llvm-svn: 209901
* Fix an assertion failure caused by v1i64 in DAGCombiner Shrink. (Hao Liu, 2014-05-29; 1 file, -0/+4)
  llvm-svn: 209798
* [x86] Fold extract_vector_elt of a load into the load's address computation. (Michael J. Spencer, 2014-05-29; 1 file, -90/+124)
  An address-only use of an extract element of a load can be simplified to a load.
  Without this, the result of the extract element is spilled to the stack so that
  an address is available.
  llvm-svn: 209788
* Fix wrong setcc result type when legalizing uaddo/usubo (Matt Arsenault, 2014-05-28; 1 file, -5/+11)
  No test because no in-tree targets change the bitwidth of the setcc type
  depending on the bitwidth of the compared type.
  Patch by Ke Bai
  llvm-svn: 209771
* [pr19844] Add thread local mode to aliases. (Rafael Espindola, 2014-05-28; 1 file, -8/+1)
  This matches gcc's behavior. It also seems natural given that aliases contain
  other properties that govern how they are accessed (linkage, visibility, dll
  storage). Clang still has to be updated to expose this feature to C.
  llvm-svn: 209759
* Revert "[DAGCombiner] Split up an indexed load if only the base pointer value is live" (Hal Finkel, 2014-05-28; 1 file, -30/+7)
  This reverts r208640 (I've just XFAILed the test) because it broke ppc64/Linux
  self-hosting. Because nearly every regression test triggers a segfault, I hope
  this will be easy to fix.
  llvm-svn: 209747
* ARM: teach AAPCS-VFP to deal with Cortex-M4. (Tim Northover, 2014-05-27; 1 file, -8/+11)
  Cortex-M4 only has single-precision floating point support, so any LLVM "double"
  type will have been split into 2 i32s by now. Fortunately, the
  consecutive-register framework turns out to be precisely what's needed to
  reconstruct the double and follow AAPCS-VFP correctly!
  rdar://problem/17012966
  llvm-svn: 209650
* Legalizer: Make bswap promotion safe for vectors. (Benjamin Kramer, 2014-05-20; 1 file, -2/+2)
  llvm-svn: 209202
* SDAG: Legalize vector BSWAP into a shuffle if the shuffle is legal but the bswap not. (Benjamin Kramer, 2014-05-19; 1 file, -0/+27)
  - On ARM/ARM64 we get a vrev because the shuffle matching code is really smart.
    We still unroll anything that's not v4i32 though.
  - On X86 we get a pshufb with SSSE3. Required more cleverness in
    isShuffleMaskLegal.
  - On PPC we get a vperm for v8i16 and v4i32. v2i64 is unrolled.
  llvm-svn: 209123
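  For reference, a byte swap of each 32-bit lane is a fixed byte permutation,
  which is why a target-legal shuffle (vrev, pshufb, vperm) can implement it. A
  scalar sketch of the per-lane mask a v4i32 bswap corresponds to, assuming a
  little-endian byte layout (illustration only, not the legalizer code):

      #include <cassert>
      #include <cstdint>
      #include <cstring>

      int main() {
        // Byte permutation equivalent to BSWAP on v4i32: within each 32-bit
        // lane, the four bytes are reversed.
        const int mask[16] = {3, 2, 1, 0, 7, 6, 5, 4,
                              11, 10, 9, 8, 15, 14, 13, 12};

        uint32_t in[4] = {0x11223344u, 0xAABBCCDDu, 0x01020304u, 0xDEADBEEFu};
        uint8_t src[16], dst[16];
        std::memcpy(src, in, sizeof(in));
        for (int i = 0; i < 16; ++i)
          dst[i] = src[mask[i]];                  // the "shuffle"

        uint32_t out[4];
        std::memcpy(out, dst, sizeof(out));
        assert(out[0] == 0x44332211u && out[3] == 0xEFBEADDEu);
      }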
* Target: remove old constructors for CallLoweringInfo (Saleem Abdulrasool, 2014-05-17; 6 files, -105/+93)
  This is mostly a mechanical change, converting all the call sites to the newer
  chained-function construction pattern. This removes the horrible 15-parameter
  constructor for CallLoweringInfo in favour of setting properties of the call via
  chained functions. No functional change beyond the removal of the old
  constructors is intended.
  llvm-svn: 209082
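  The "chained-function construction pattern" is the usual builder idiom: each
  setter returns *this, so call properties read as named settings instead of one
  long positional argument list. A distilled standalone sketch of the idiom (a toy
  class with made-up fields, not the real CallLoweringInfo in TargetLowering.h):

      #include <iostream>
      #include <string>

      // Toy stand-in for an info object built via chained setters.
      struct CallInfo {
        std::string Callee;
        unsigned NumArgs = 0;
        bool IsTailCall = false;

        // Each setter returns *this so calls can be chained fluently.
        CallInfo &setCallee(std::string C)   { Callee = std::move(C); return *this; }
        CallInfo &setNumArgs(unsigned N)     { NumArgs = N; return *this; }
        CallInfo &setTailCall(bool T = true) { IsTailCall = T; return *this; }
      };

      int main() {
        // Readable at the call site, unlike a 15-parameter constructor.
        CallInfo CI;
        CI.setCallee("memcpy").setNumArgs(3).setTailCall(false);
        std::cout << CI.Callee << " args=" << CI.NumArgs << "\n";
      }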
* Target: change member from reference to pointer (Saleem Abdulrasool, 2014-05-17; 1 file, -1/+1)
  This is a preliminary step to help ease the construction of CallLoweringInfo.
  Changing the construction to a chained-function pattern requires that the
  parameter be nullable. However, rather than copying the vector, save a pointer
  instead of the reference to permit late binding of the arguments.
  llvm-svn: 209080
* Delete getAliasedGlobal. (Rafael Espindola, 2014-05-16; 1 file, -1/+1)
  llvm-svn: 209040
* Instead of littering asserts throughout the code after every call to computeKnownBits, consolidate them into one assert at the end of computeKnownBits itself. (Jay Foad, 2014-05-15; 1 file, -53/+32)
  llvm-svn: 208876
* Rename ComputeMaskedBits to computeKnownBits. (Jay Foad, 2014-05-14; 5 files, -68/+67)
  "Masked" has been inappropriate since it lost its Mask parameter in r154011.
  llvm-svn: 208811
* Update the comments for ComputeMaskedBits, which lost its Mask parameter in r154011. (Jay Foad, 2014-05-14; 1 file, -3/+2)
  llvm-svn: 208757
* Use a logical not when inverting SetCC. (Pete Cooper, 2014-05-12; 2 files, -3/+19)
  This unfortunately doesn't fire on any targets, so I couldn't find a test case
  to trigger it. The problem occurs when a non-i1 setcc is inverted. For example,
  'i8 = setcc' will get 'xor 0xff' to invert this. This is clearly wrong when the
  boolean contents are ZeroOrOne. This patch introduces getLogicalNOT and updates
  SetCC legalisation to use it.
  Reviewed by Hal Finkel.
  llvm-svn: 208641
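  The numbers make the problem easy to see (an independent illustration, not the
  patch itself): for an i8 boolean with ZeroOrOne contents, xor with 0xff does not
  invert the truth value, while a logical not (xor with 1) does.

      #include <cassert>
      #include <cstdint>

      int main() {
        // A ZeroOrOne "boolean" stored in i8, e.g. the result of a widened setcc.
        for (uint8_t b : {uint8_t(0), uint8_t(1)}) {
          uint8_t bitwiseNot = b ^ 0xFF;  // 0 -> 0xFF, 1 -> 0xFE: both nonzero
          uint8_t logicalNot = b ^ 0x01;  // 0 -> 1, 1 -> 0: correct inversion
          assert((logicalNot != 0) == (b == 0));
          assert(bitwiseNot != 0);        // never becomes "false": the bug
        }
      }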
* [DAGCombiner] Split up an indexed load if only the base pointer value is live (Adam Nemet, 2014-05-12; 1 file, -7/+30)
  Right now the load may not get DCE'd because of the side effect of updating the
  base pointer. This can happen if we lower a read-modify-write of an illegal
  larger type (e.g. i48) such that the modification only affects one of the
  subparts (the lower i32 part but not the higher i16 part). See the testcase.
  In order to spot the dead load we need to revisit it when SimplifyDemandedBits
  decides that the value of the load is masked off. This is the
  CommitTargetLoweringOpt piece.
  I checked compile time with ARM64 by sending SPEC bitcode files through llc;
  no measurable change.
  Fixes <rdar://problem/16031651>
  llvm-svn: 208640
* Make SimplifyDemandedBits understand BUILD_PAIR (Matt Arsenault, 2014-05-12; 1 file, -0/+25)
  llvm-svn: 208598
* Pass the value type to TLI::getRegisterByName (Hal Finkel, 2014-05-11; 1 file, -2/+2)
  We must validate the value type in TLI::getRegisterByName, because if we don't
  and the wrong type was used with the IR intrinsic, then we'll assert (because we
  won't be able to find a valid register class with which to construct the
  requested copy operation). For PPC64, additionally, the type information is
  necessary to decide between the 64-bit register and the 32-bit subregister.
  No functionality change.
  llvm-svn: 208508
* ARM: HFAs must be passed in consecutive registers (Oliver Stannard, 2014-05-09; 1 file, -2/+22)
  When using the ARM AAPCS, HFAs (Homogeneous Floating-point Aggregates) must be
  passed in a block of consecutive floating-point registers, or on the stack. This
  means that unused floating-point registers cannot be back-filled with part of an
  HFA; however, this can currently happen. This patch, along with the
  corresponding clang patch (http://reviews.llvm.org/D3083), prevents this.
  llvm-svn: 208413
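  For context, an HFA is simply a small aggregate whose members all have the same
  floating-point type. A hypothetical example of a type that qualifies (the names
  Vec4 and length_scaled are made up for illustration; the authoritative rules are
  in the AAPCS and the commit above):

      // A Homogeneous Floating-point Aggregate: up to four members, all of the
      // same floating-point type.
      struct Vec4 {
        float x, y, z, w;
      };

      // Hypothetical signature: under AAPCS-VFP, 'v' must be passed either
      // entirely in a block of consecutive s-registers or entirely on the
      // stack. It must not be split, and leftover floating-point registers
      // must not be back-filled with just part of it.
      float length_scaled(Vec4 v, float scale, int count);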
* Fix using wrong result type for setcc. (Matt Arsenault, 2014-05-07; 2 files, -4/+16)
  When reducing the bitwidth of a comparison against a constant, the original
  setcc's result type was used, which was incorrect. No test since I don't think
  any other in-tree targets change the bitwidth of the setcc type depending on the
  bitwidth of the compared type.
  llvm-svn: 208236
* Implementing named register intrinsics (Renato Golin, 2014-05-06; 4 files, -0/+55)
  This patch implements the infrastructure to use named register constructs in
  programs that need access to specific registers (bare metal, kernels, etc.). So
  far, only the stack pointer is supported as a technology preview, but as it is,
  the intrinsic can already support all non-allocatable registers from any
  architecture.
  llvm-svn: 208104
* Satisfy GCC's urgent need for parentheses around ‘&&’ within ‘||’. (Benjamin Kramer, 2014-05-02; 1 file, -2/+2)
  llvm-svn: 207871
* DAGCombine: prevent formation of illegal ConstantFP nodes. (Tim Northover, 2014-05-02; 1 file, -5/+10)
  llvm-svn: 207850
* Allow SelectionDAG::FoldConstantArithmetic to work when it's called with a vector VT but scalar values. (Benjamin Kramer, 2014-05-02; 1 file, -2/+8)
  llvm-svn: 207835
* Convert more loops to range-based equivalents (Alexey Samsonov, 2014-04-30; 1 file, -9/+5)
  llvm-svn: 207714
* [ARM64] Prevent bit extraction from being adjusted by a following shift (Weiming Zhao, 2014-04-30; 1 file, -0/+3)
  For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it
  into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx more
  difficult. For example, given:
      %shr = lshr i64 %x, 4
      %and = and i64 %shr, 15
      %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
      %0 = load i64* %arrayidx
  With the current shift folding, it takes 3 instructions to compute the base
  address:
      lsr x8, x0, #1
      and x8, x8, #0x78
      add x8, x9, x8
  Using ubfx, it only needs 2 instructions:
      ubfx x8, x0, #4, #4
      add x8, x9, x8, lsl #3
  This fixes bug 19589.
  llvm-svn: 207702
* Use makeArrayRef instead of calling the ArrayRef<T> constructor directly. (Craig Topper, 2014-04-30; 5 files, -14/+12)
  I introduced most of these recently.
  llvm-svn: 207616
* Tidy up whitespace. (Jim Grosbach, 2014-04-29; 1 file, -7/+7)
  llvm-svn: 207583
* [C++11] Add 'override' keywords and remove 'virtual'. (Craig Topper, 2014-04-29; 1 file, -1/+1)
  Additionally add 'final' and leave 'virtual' on some methods that are marked
  virtual without overriding anything and have no obvious overrides themselves.
  llvm-svn: 207511
* We already calculate WideVT above, just reuse it. (Eric Christopher, 2014-04-28; 1 file, -2/+1)
  Patch by Jan Vesely <jan.vesely@rutgers.edu>.
  llvm-svn: 207455
* Convert more SelectionDAG functions to use ArrayRef. (Craig Topper, 2014-04-28; 4 files, -32/+32)
  llvm-svn: 207397
* Convert AddNodeIDNode and SelectionDAG::getNodeIfExists to use ArrayRef<SDValue>. (Craig Topper, 2014-04-27; 2 files, -42/+42)
  llvm-svn: 207383
* Convert SelectionDAGISel::MorphNode to use ArrayRef. (Craig Topper, 2014-04-27; 1 file, -5/+3)
  llvm-svn: 207379
* Convert SelectionDAG::MorphNodeTo to use ArrayRef. (Craig Topper, 2014-04-27; 3 files, -11/+12)
  llvm-svn: 207378
* Convert SelectionDAG::SelectNodeTo to use ArrayRef. (Craig Topper, 2014-04-27; 1 file, -22/+19)
  llvm-svn: 207377
* Convert one last signature of getNode to take an ArrayRef of SDUse. (Craig Topper, 2014-04-27; 2 files, -8/+8)
  llvm-svn: 207376
* Convert SDNode constructor to use ArrayRef. (Craig Topper, 2014-04-27; 1 file, -9/+8)
  llvm-svn: 207375
* Convert SelectionDAG::getMergeValues to use ArrayRef. (Craig Topper, 2014-04-27; 2 files, -13/+10)
  llvm-svn: 207374
* Const-correct SelectionDAG::getAtomic. (Craig Topper, 2014-04-27; 1 file, -2/+2)
  llvm-svn: 207373
* SelectionDAG: Aggressively fold shuffles of constant splats. (Benjamin Kramer, 2014-04-27; 1 file, -0/+5)
  llvm-svn: 207352
* DAGCombiner: Simplify code a bit, make more transforms work with vectors. (Benjamin Kramer, 2014-04-26; 1 file, -58/+37)
  llvm-svn: 207338
* Convert getMemIntrinsicNode to take ArrayRef of SDValue instead of pointer and size. (Craig Topper, 2014-04-26; 2 files, -11/+9)
  llvm-svn: 207329
* Convert SelectionDAG::getNode methods to use ArrayRef<SDValue>. (Craig Topper, 2014-04-26; 9 files, -249/+183)
  llvm-svn: 207327
* Remove an unused version of getMemIntrinsicNode and getNode. (Craig Topper, 2014-04-26; 1 file, -20/+0)
  Additionally, these were calling makeVTList with the pointers passed in, which
  were unlikely to belong to SelectionDAG and likely would have just been stack
  pointers.
  llvm-svn: 207326
* Rip out X86-specific vector SDIV lowering, make the corresponding DAGCombiner transform work on vectors. (Benjamin Kramer, 2014-04-26; 1 file, -13/+24)
  llvm-svn: 207316
* DAGCombiner: Turn divs of vector splats into vectorized multiplications. (Benjamin Kramer, 2014-04-26; 2 files, -24/+54)
  Otherwise the legalizer would just scalarize everything. Support for mulhi in
  the targets isn't that great yet, so on most targets we get exactly the same
  scalarized output. Add a test for x86 vector udiv.
  I had to disable the mulhi nodes on ARM because there aren't any patterns for
  it. As far as I know ARM has instructions for getting the high part of a
  multiply, so this should be fixed.
  llvm-svn: 207315
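  The scalar identity behind this lowering, shown here for unsigned division by 3
  (an illustration only; the combiner derives a magic constant and shift per
  divisor and, for splat vectors, applies them lane-wise through a vector MULHU):

      #include <cassert>
      #include <cstdint>

      // Unsigned division by the constant 3 via a "high multiply":
      //   x / 3 == mulhu(x, 0xAAAAAAAB) >> 1   for all 32-bit x,
      // where 0xAAAAAAAB == (2^33 + 1) / 3.
      static uint32_t div3(uint32_t x) {
        uint32_t hi = (uint64_t(x) * 0xAAAAAAABu) >> 32;  // mulhu(x, magic)
        return hi >> 1;
      }

      int main() {
        const uint32_t tests[] = {0, 1, 2, 3, 7, 100, 0x12345678u, 0xFFFFFFFFu};
        for (uint32_t x : tests)
          assert(div3(x) == x / 3);
      }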
* [DAG] During DAG legalization keep opaque constants even after expanding. (Juergen Ributzka, 2014-04-26; 1 file, -3/+8)
  The included test case would return incorrect results, because the expansion of
  a shift with a constant shift amount of 0 would generate undefined behavior.
  This is because ExpandShiftByConstant assumes that all shifts by constants with
  a value of 0 have already been optimized away. This doesn't happen for opaque
  constants, and usually this isn't a problem, because opaque constants won't take
  this code path; they are not supposed to. In the case that the opaque constant
  has to be expanded by the legalizer, the legalizer would drop the opaque flag.
  In this case we hit the limitations of ExpandShiftByConstant and create
  incorrect code.
  This commit fixes the legalizer by not dropping the opaque flag when expanding
  opaque constants and by adding an assertion to ExpandShiftByConstant to catch
  this unsupported case in the future.
  This fixes <rdar://problem/16718472>
  llvm-svn: 207304