path: root/llvm/lib/Target
Commit message | Author | Age | Files | Lines
...
* [X86] Regcall - Adding support for mask types | Oren Ben Simhon | 2016-12-11 | 3 | -38/+62
  The regcall calling convention passes mask-type arguments in x86 GPR registers. The review includes the changes required in order to support v32i1, v16i1 and v8i1. Differential Revision: https://reviews.llvm.org/D27148 llvm-svn: 289383
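  Illustration (not part of the commit, and assuming a toolchain with __regcall and the AVX-512 intrinsics, e.g. a recent Clang): a __mmask16 parameter lowers to v16i1, and under regcall it is expected to travel in a general-purpose register rather than on the stack.

      #include <immintrin.h>

      // Hypothetical example: the mask argument 'm' (v16i1) arrives in a GPR
      // under the regcall convention; the vector arguments use ZMM registers.
      __m512i __regcall select_dwords(__mmask16 m, __m512i a, __m512i b) {
        return _mm512_mask_mov_epi32(a, m, b);  // blend b into a under mask m
      }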
* [X86] Fix a comment to say 'an FMA' instead of 'a FMA'. NFC | Craig Topper | 2016-12-11 | 1 | -1/+1
  llvm-svn: 289352
* [X86] Remove masking from 512-bit VPERMIL intrinsics in preparation for being able to constant fold them in InstCombineCalls like we do for 128/256-bit. | Craig Topper | 2016-12-11 | 1 | -4/+2
  llvm-svn: 289350
* [AVR] Fix a signed vs unsigned compiler warning | Dylan McKay | 2016-12-11 | 1 | -1/+1
  llvm-svn: 289349
* [AVR] Remove incorrect comment | Dylan McKay | 2016-12-10 | 1 | -2/+0
  This should've been removed in r289323. llvm-svn: 289346
* [X86] Remove masking from 512-bit PSHUFB intrinsics in preparation for being able to constant fold it in InstCombineCalls like we do for 128/256-bit. | Craig Topper | 2016-12-10 | 1 | -2/+1
  llvm-svn: 289344
* [X86][SSE] Ensure UNPCK inputs are a consistent value type in LowerHorizontalByteSum | Simon Pilgrim | 2016-12-10 | 1 | -2/+3
  llvm-svn: 289341
* [AVX-512] Remove 128/256 masked vpermil intrinsics and autoupgrade to a select around the unmasked avx1 intrinsics. | Craig Topper | 2016-12-10 | 1 | -8/+0
  llvm-svn: 289340
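  The "select around the unmasked intrinsic" shape used by this and the PSHUFB/VPERMIL auto-upgrades above looks roughly like the sketch below (illustrative only; the helper name and argument handling are assumptions, not the actual AutoUpgrade code). Once the masked intrinsic is rewritten this way, InstCombine can constant fold the unmasked operation and simplify the select independently.

      #include "llvm/IR/IRBuilder.h"

      // Blend the result of the unmasked operation with the pass-through value,
      // lane by lane, using the mask the removed masked intrinsic carried.
      static llvm::Value *emitSelectAroundUnmasked(llvm::IRBuilder<> &B,
                                                   llvm::Value *Unmasked,  // unmasked intrinsic result
                                                   llvm::Value *PassThru,
                                                   llvm::Value *MaskVec) { // mask widened to <N x i1>
        // Emits: select <N x i1> %mask, <N x T> %unmasked, <N x T> %passthru
        return B.CreateSelect(MaskVec, Unmasked, PassThru);
      }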
* AMDGPU: Fix asan errors when folding operands | Matt Arsenault | 2016-12-10 | 1 | -2/+2
  This was failing when trying to fold immediates into operand 1 of a phi, which only has one statically known operand. llvm-svn: 289337
* [X86][SSE] Move ZeroVector creation into the shuffle pattern case where it's actually used. | Simon Pilgrim | 2016-12-10 | 1 | -2/+2
  Also fix the ZeroVector's type - I've no idea how this hasn't caused problems. llvm-svn: 289336
* [AVX-512] Add support for lowering (v2i64 (fp_to_sint (v2f32))) to vcvttps2uqq when AVX512DQ and AVX512VL are available. | Craig Topper | 2016-12-10 | 2 | -6/+25
  llvm-svn: 289335
* [X86] Clarify indentation. NFC | Craig Topper | 2016-12-10 | 1 | -1/+1
  llvm-svn: 289334
* [X86] Combine LowerFP_TO_SINT and LowerFP_TO_UINT. They only differ by a single boolean flag passed to a helper function. Just check the opcode and create the flag. | Craig Topper | 2016-12-10 | 2 | -24/+7
  llvm-svn: 289333
* [mips] Eliminate else-after-return. NFC | Simon Atanasyan | 2016-12-10 | 1 | -4/+3
  llvm-svn: 289331
* [AVR] Add a stub README file | Dylan McKay | 2016-12-10 | 1 | -0/+8
  llvm-svn: 289326
* [AVR] Fix and clean up the inline assembly tests | Dylan McKay | 2016-12-10 | 1 | -1/+3
  There was a bug where we would hit an assertion if 'Q' was used as a constraint. I also removed hardcoded register names to prefer regexes so the tests don't break when the register allocator changes. llvm-svn: 289325
* [AVR] Fix an inline asm assertion which would always trigger | Dylan McKay | 2016-12-10 | 1 | -1/+1
  It looks like some time in the past, constraint codes were changed from chars being passed around to enums. llvm-svn: 289323
* [AVR] Use the register scavenger when expanding 'LDDW' instructions | Dylan McKay | 2016-12-10 | 1 | -25/+45
  Summary: This gets rid of the hardcoded 'r0' that was used previously.
  Reviewers: asl. Subscribers: llvm-commits. Differential Revision: https://reviews.llvm.org/D27567 llvm-svn: 289322
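  For context, scavenging a scratch register during pseudo expansion typically looks like the sketch below (an illustration under assumed names, not the patch itself; AVR::GPR8RegClass and the iterator MI are placeholders).

      #include "llvm/CodeGen/MachineBasicBlock.h"
      #include "llvm/CodeGen/RegisterScavenging.h"

      // Find a free 8-bit GPR at the expansion point instead of hardcoding r0.
      static unsigned scavengeTempGPR8(llvm::MachineBasicBlock &MBB,
                                       llvm::MachineBasicBlock::iterator MI) {
        llvm::RegScavenger RS;
        RS.enterBasicBlock(MBB);   // start liveness tracking at the block entry
        RS.forward(MI);            // advance the tracked liveness to the pseudo
        return RS.scavengeRegister(&AVR::GPR8RegClass, MI, /*SPAdj=*/0);
      }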
* [AVR] Support stores to undefined pointers | Dylan McKay | 2016-12-10 | 1 | -1/+2
  This would previously trigger an assertion error in AVRISelDAGToDAG. llvm-svn: 289321
* [X86] Use X86ISD::CVTTP2SI and X86ISD::CVTTP2UI for lowering 128-bit cvttps2qq and cvttps2uqq intrinsics since there is a mismatch between number of input and output elements. | Craig Topper | 2016-12-10 | 2 | -7/+7
  Ideally ISD::FP_TO_SINT and ISD::FP_TO_UINT would only be used for cases with the same number of input and output elements. Similar things have already been done for other convert intrinsics. llvm-svn: 289316
* [AVR] Fix a bunch of incorrect assertion messages | Dylan McKay | 2016-12-10 | 2 | -5/+5
  These should've been checking whether the immediate is a 6-bit unsigned integer. If the immediate was '63', this would cause an assertion error which shouldn't have occurred. llvm-svn: 289315
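  The intended range check is the usual MathExtras helper, along these lines (illustrative; the assertion text is made up):

      #include "llvm/Support/MathExtras.h"
      #include <cassert>
      #include <cstdint>

      // 63 is the largest valid 6-bit unsigned immediate and must not assert.
      inline void checkImm6(uint64_t Imm) {
        assert(llvm::isUInt<6>(Imm) && "expected a 6-bit unsigned immediate");
      }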
* AMDGPU: Fix AMDGPUPromoteAlloca breaking addrspacecasts | Matt Arsenault | 2016-12-10 | 1 | -1/+8
  The users of the addrspacecast were having their types incorrectly changed, producing invalid bitcasts between address spaces. llvm-svn: 289307
* AMDGPU: Fix handling of 16-bit immediates | Matt Arsenault | 2016-12-10 | 19 | -222/+741
  Since 32-bit instructions with 32-bit input immediate behavior are used to materialize 16-bit constants in 32-bit registers for 16-bit instructions, determining the legality based on the size is incorrect. Change operands to have the size specified in the type.
  Also adds a workaround for a disassembler bug that produces an immediate MCOperand for an operand that is supposed to be OPERAND_REGISTER. The assembler appears to accept out of bounds immediates and truncates them, but this seems to be an issue for 32-bit already. llvm-svn: 289306
* AMDGPU: Fix vintrp disassembly | Matt Arsenault | 2016-12-10 | 2 | -12/+12
  llvm-svn: 289292
* AMDGPU: Change vintrp printing to better match sc | Matt Arsenault | 2016-12-10 | 2 | -12/+15
  Some of the immediates need to be printed differently eventually. llvm-svn: 289291
* [AMDGPU, PowerPC, TableGen] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC). | Eugene Zelenko | 2016-12-09 | 15 | -150/+243
  llvm-svn: 289282
* AMDGPU/SI: Remove XNACK feature from CI | Marek Olsak | 2016-12-09 | 1 | -2/+1
  Summary: CI doesn't have XNACK.
  Reviewers: tstellarAMD. Subscribers: arsenm, kzhuravl, wdng, nhaehnle, yaxunl, tony-tye. Differential Revision: https://reviews.llvm.org/D27175 llvm-svn: 289263
* AMDGPU/SI: Don't reserve XNACK when it's disabled | Marek Olsak | 2016-12-09 | 2 | -1/+8
  Summary: This frees 2 additional scalar registers. These are results from all of my 3 patches combined:
    Polaris: Spilled SGPRs: 2231 -> 1517 (-32.00 %)
    Tonga:   Spilled SGPRs: 3829 -> 2608 (-31.89 %)
             Spilled VGPRs:  100 ->   84 (-16.00 %)
  Tonga even spills SGPRs via VGPRs to scratch. That's a compute shader limited to 64 VGPRs.
  Reviewers: tstellarAMD. Subscribers: arsenm, kzhuravl, wdng, nhaehnle, yaxunl, tony-tye. Differential Revision: https://reviews.llvm.org/D27151 llvm-svn: 289262
* AMDGPU/SI: Don't reserve FLAT_SCR on non-HSA targets & without stack objects | Marek Olsak | 2016-12-09 | 3 | -8/+22
  Summary: This frees 2 scalar registers.
  Reviewers: tstellarAMD. Subscribers: qcolombet, arsenm, kzhuravl, wdng, nhaehnle, yaxunl, tony-tye. Differential Revision: https://reviews.llvm.org/D27150 llvm-svn: 289261
* AMDGPU/SI: Allow using SGPRs 96-101 on VI | Marek Olsak | 2016-12-09 | 4 | -20/+42
  Summary: There is no point in setting SGPRS=104, because VI allocates SGPRs in multiples of 16, so 104 -> 112. That enables us to use all 102 SGPRs for general purposes.
  Reviewers: tstellarAMD. Subscribers: qcolombet, arsenm, kzhuravl, wdng, nhaehnle, yaxunl, tony-tye. Differential Revision: https://reviews.llvm.org/D27149 llvm-svn: 289260
* AMDGPU: Fix isTypeDesirableForOp for i16 | Matt Arsenault | 2016-12-09 | 1 | -4/+16
  This should do nothing for targets without i16. llvm-svn: 289235
* [SelectionDAG] Add knownbits support for EXTRACT_VECTOR_ELT opcodes (REAPPLIED) | Simon Pilgrim | 2016-12-09 | 1 | -1/+6
  Reapplied with fix for PR31323 - X86 SSE2 vXi16 multiplies for illegal types were creating CONCAT_VECTORS nodes with vector inputs that might not total the number of elements in the result type. llvm-svn: 289232
* AMDGPU: Fix i128 mul | Matt Arsenault | 2016-12-09 | 1 | -0/+4
  llvm-svn: 289231
* AMDGPU: Allow TBA, TMA, TTMP* registers with SMEM instructions | Matt Arsenault | 2016-12-09 | 1 | -2/+2
  Fixes assembler regressions. llvm-svn: 289230
* AMDGPU: Clean up instruction bits | Matt Arsenault | 2016-12-09 | 2 | -98/+117
  Sort the instruction bits by type and make sure there is one for each format. Also cleanup namespaces. llvm-svn: 289229
* [PPC] Add intrinsics for vector extract word and vector insert word. | Sean Fertile | 2016-12-09 | 1 | -0/+9
  Revision: https://reviews.llvm.org/D26547 llvm-svn: 289227
* Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled." | Nirav Dave | 2016-12-09 | 1 | -0/+10
  This reverts commit r289221 which appears to be triggering an assertion. llvm-svn: 289226
* In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled. | Nirav Dave | 2016-12-09 | 1 | -10/+0
  Retrying after fixing an overly aggressive load-store forwarding optimization.
  Simplify Consecutive Merge Store Candidate Search. Now that address aliasing is much less conservative, push through a simplified store merging search which only checks for parallel stores through the chain subgraph. This is cleaner as the separation of non-interfering loads/stores from the store-merging logic. When merging stores, search up the chain through a single load, and find all possible stores by looking down through a load and a TokenFactor to all stores visited. This improves the quality of the output SelectionDAG and generally the output CodeGen (with some exceptions).
  Additional minor changes:
    1. Finishes removing unused AliasLoad code.
    2. Unifies the chain aggregation in the merged stores across code paths.
    3. Re-adds the Store node to the worklist after calling SimplifyDemandedBits.
    4. Increases GatherAllAliasesMaxDepth from 6 to 18. That number is arbitrary, but seemed sufficient to not cause regressions in tests.
  This finishes the change Matt Arsenault started in r246307 and jyknight's original patch. Many tests required some changes as memory operations are now reorderable; some tests relying on the order were changed to use volatile memory operations.
  Noteworthy tests:
    CodeGen/AArch64/argument-blocks.ll - It's not entirely clear what the test_varargs_stackalign test is supposed to be asserting, but the new code looks right.
    CodeGen/AArch64/arm64-memset-inline.lli, CodeGen/AArch64/arm64-stur.ll, CodeGen/ARM/memset-inline.ll - The backend now generates *worse* code due to store merging succeeding, as we do not do a 16-byte constant-zero store efficiently.
    CodeGen/AArch64/merge-store.ll - Improved, but there still seems to be an extraneous vector insert from an element to itself?
    CodeGen/PowerPC/ppc64-align-long-double.ll - Worse code emitted in this case, due to the improved store->load forwarding.
    CodeGen/X86/dag-merge-fast-accesses.ll, CodeGen/X86/MergeConsecutiveStores.ll, CodeGen/X86/stores-merging.ll, CodeGen/Mips/load-store-left-right.ll - Restored correct merging of non-aligned stores.
    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll - Improved. Correctly merges buffer_store_dword calls.
    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll - Improved. Sidesteps loading a stored value and merges two stores.
    CodeGen/X86/pr18023.ll - This test has been removed, as it was asserting incorrect behavior. Non-volatile stores *CAN* be moved past volatile loads, and now are.
    CodeGen/X86/vector-idiv.ll, CodeGen/X86/vector-lzcnt-128.ll - It's basically impossible to tell what these tests are actually testing. But, looks like the code got better due to the memory operations being recognized as non-aliasing.
    CodeGen/X86/win32-eh.ll - Both loads of the securitycookie are now merged.
  Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle. Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel. Differential Revision: https://reviews.llvm.org/D14834 llvm-svn: 289221
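  A minimal C example of the kind of code this affects (not taken from the patch): with the less conservative chain analysis, the two adjacent stores below can be proven independent and merged into one wider store, subject to target legality and alignment.

      // Two adjacent 4-byte stores; after merging, typically a single 8-byte store.
      void store_pair(int *p) {
        p[0] = 1;
        p[1] = 2;
      }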
* AMDGPU/SI: Don't mark VINTRP instructions as mayLoad | Tom Stellard | 2016-12-09 | 1 | -1/+13
  Summary: These instructions technically do read from memory, but the memory is considered to be out of bounds for normal load/store instructions.
  shader-db stats:
    SGPRS: 1416075 -> 1413323 (-0.19 %)
    VGPRS: 867413 -> 863935 (-0.40 %)
    Spilled SGPRs: 1409 -> 1354 (-3.90 %)
    Spilled VGPRs: 63 -> 63 (0.00 %)
    Private memory VGPRs: 880 -> 880 (0.00 %)
    Scratch size: 2648 -> 2632 (-0.60 %) dwords per thread
    Code Size: 37889052 -> 37897340 (0.02 %) bytes
    LDS: 2147 -> 2147 (0.00 %) blocks
    Max Waves: 279243 -> 280369 (0.40 %)
    Wait states: 0 -> 0 (0.00 %)
  Reviewers: nhaehnle, mareko, arsenm. Subscribers: kzhuravl, wdng, yaxunl, tony-tye. Differential Revision: https://reviews.llvm.org/D27593 llvm-svn: 289219
* [X86] Modify patterns for the memory form of RCP/RSQRT/SQRT intrinsics to only allow (scalar_to_vector (loadf32/loadf64)) instead of anything that sse_load_f32/f64 can match. | Craig Topper | 2016-12-09 | 1 | -14/+11
  sse_load_f32/f64 can also match loads that are zero extended to vectors. We shouldn't match that because we wouldn't be able to get the instruction to zero the upper bits like the intrinsic semantics would require for such a case. There is a test case that does depend on this behavior. llvm-svn: 289193
* [AVR] Use a more appropriate integer type for wide IN/OUT instructions | Dylan McKay | 2016-12-09 | 1 | -2/+2
  We could previously select an integer which would hit an assertion error in pseudo expansion. The new type will also generate the appropriate fixups if needed, which wasn't done beforehand. llvm-svn: 289192
* [AVR] Add tests for a large number of pseudo instructions | Dylan McKay | 2016-12-09 | 1 | -0/+12
  This adds MIR tests for 24 pseudo instructions. llvm-svn: 289191
* [AVX-512] Correctly preserve the passthru semantics of the FMA scalar intrinsics | Craig Topper | 2016-12-09 | 5 | -57/+107
  Summary: Scalar intrinsics have specific semantics about which input's upper bits are passed through to the output. The same input is also supposed to be the input we use for the lower element when the mask bit is 0 in a masked operation. We aren't currently keeping these semantics with instruction selection.
  This patch corrects this by introducing new scalar FMA ISD nodes that indicate whether operand 1 (one of the multiply inputs) or operand 3 (the addition/subtraction input) should pass through its upper bits. We use this information to select the 213/132 form for the operand 1 version and the 231 form for the operand 3 version. We also use this information to suppress combining FNEG operations on the passthru input, since semantically the passthru bits aren't negated. This is stronger than the earlier check added for a user being SELECTS, so we can remove that.
  This fixes PR30913.
  Reviewers: delena, zvi, v_klochkov. Subscribers: llvm-commits. Differential Revision: https://reviews.llvm.org/D27144 llvm-svn: 289190
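  In intrinsic terms, the semantics being preserved are those of the masked scalar FMA family; for example (a sketch using the standard AVX-512 C intrinsic, not code from the patch):

      #include <immintrin.h>

      // _mm_mask_fmadd_ss(a, k, b, c):
      //   lane 0     = (k & 1) ? a[0]*b[0] + c[0] : a[0]   // pass-through is 'a'
      //   lanes 1..3 = a[1..3]                             // upper bits always come from 'a'
      __m128 fma_lane0(__m128 a, __mmask8 k, __m128 b, __m128 c) {
        return _mm_mask_fmadd_ss(a, k, b, c);
      }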
* AMDGPU: Select i16 instructions to VOP3 forms | Matt Arsenault | 2016-12-09 | 1 | -10/+10
  These were selecting directly to the VOP2 form instead of VOP3 like the i32 instructions. Fixes regressions in future commits where an immediate isn't folded because it was initially used for the second operand. Because uniform 16-bit operations are promoted to i32, it's difficult to get a simple testcase where this matters. Fold failures in SIFoldOperands here tend to be hidden by commute and fold in SIShrinkInstructions. llvm-svn: 289189
* [X86] Add masked versions of VPERMT2* and VPERMI2* to load folding tables. | Craig Topper | 2016-12-09 | 1 | -6/+84
  llvm-svn: 289186
* [AVX-512] Add vpermilps/pd to load folding tables. | Craig Topper | 2016-12-09 | 1 | -0/+36
  llvm-svn: 289173
* [RDF] Fix incorrect lane mask calculation | Krzysztof Parzyszek | 2016-12-08 | 1 | -7/+31
  This was exposed by some code that used more than one level of sub-registers. There is no testcase, because there is no such code in the Hexagon backend. llvm-svn: 289099
* AMDGPU: Make f16 ConstantFP legal | Matt Arsenault | 2016-12-08 | 3 | -16/+14
  Not having this legal led to combine failures, resulting in dumb things like bitcasts of constants not being folded away.
  The only reason I'm leaving the v_mov_b32 hack that f32 already uses is to avoid madak formation test regressions. PeepholeOptimizer has an ordering issue where the immediate fold attempt is into the sgpr->vgpr copy instead of the actual use. Running it twice avoids that problem. llvm-svn: 289096
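  Mechanically, making a constant type legal is a one-line statement in the target's lowering constructor, roughly as below (a sketch of the mechanism; the exact predicate used by the SI change is an assumption).

      // In the TargetLowering constructor: f16 floating-point constants need no
      // expansion on subtargets that have 16-bit instructions.
      if (Subtarget->has16BitInsts())
        setOperationAction(ISD::ConstantFP, MVT::f16, Legal);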
* [AMDGPU] Fix number of reserved SGPRs on CI to reflect flat scratch use | Stanislav Mekhanoshin | 2016-12-08 | 1 | -0/+2
  Differential Revision: https://reviews.llvm.org/D27225 llvm-svn: 289095
* AMDGPU: Fix commuting v_sub_u16 | Matt Arsenault | 2016-12-08 | 1 | -1/+1
  The correct commutable opcode was set to itself, so this was simply swapping the operands to commute instead of also changing the opcode to v_subrev_u16. llvm-svn: 289093