path: root/llvm/test
Commit log (each entry shows the commit message, author, date, files changed, and lines removed/added):
* Speculative REQUIRES to fix Windows bot. (Paul Robinson, 2016-12-09; 1 file, -0/+1)
    llvm-svn: 289281

* [X86] Regenerate test (Simon Pilgrim, 2016-12-09; 1 file, -6/+26)
    llvm-svn: 289279

* AMDGPU: Cleanup checks in sext_inreg test (Matt Arsenault, 2016-12-09; 1 file, -168/+203)
    llvm-svn: 289272
* Fix LLVM's use of DW_OP_bit_piece in DWARF expressions. (Adrian Prantl, 2016-12-09; 7 files, -4/+407)
    LLVM's use of DW_OP_bit_piece is incorrect and based on a misunderstanding of the
    wording in the DWARF specification. The offset argument of DW_OP_bit_piece refers
    to the offset into the location that is on the top of the DWARF expression stack,
    and not an offset into the source variable. This has since also been clarified in
    the DWARF specification.

    This patch fixes all uses of DW_OP_bit_piece to emit the correct offset and
    simplifies the DwarfExpression class to semi-automatically emit empty DW_OP_pieces
    to adjust the offset of the source variable, thus simplifying the code using
    DwarfExpression.

    While this is an incompatible bugfix, in practice I don't expect this to be much
    of a problem since LLVM's old interpretation and the correct interpretation of
    DW_OP_bit_piece differ only when there are gaps in the fragmented locations of the
    described variables or if individual fragments are smaller than a byte. LLDB at
    least won't interpret locations with gaps in them because it has no way to present
    undefined bits in a variable, and there is a high probability that an old-form
    expression will be malformed when interpreted correctly, because the
    DW_OP_bit_piece offset will be outside of the location at the top of the stack.

    As a nice side-effect, this patch enables us to use a more efficient encoding for
    subregisters: in order to express a sub-register at a non-zero offset we now use a
    DW_OP_bit_piece instead of shifting the value into place manually.

    This patch also adds missing test coverage for code paths that weren't exercised
    before.

    <rdar://problem/29335809>
    Differential Revision: https://reviews.llvm.org/D27550
    llvm-svn: 289266
* Add README describing the intention of test/CodeGen/MIR (Matthias Braun, 2016-12-09; 1 file, -0/+7)
    llvm-svn: 289265

* AMDGPU/SI: Don't reserve XNACK when it's disabled (Marek Olsak, 2016-12-09; 2 files, -6/+10)
    Summary: This frees 2 additional scalar registers.

    These are results from all of my 3 patches combined:
      Polaris: Spilled SGPRs: 2231 -> 1517 (-32.00 %)
      Tonga:   Spilled SGPRs: 3829 -> 2608 (-31.89 %)
               Spilled VGPRs:  100 ->   84 (-16.00 %)
    Tonga even spills SGPRs via VGPRs to scratch. That's a compute shader limited to
    64 VGPRs.

    Reviewers: tstellarAMD
    Subscribers: arsenm, kzhuravl, wdng, nhaehnle, yaxunl, tony-tye
    Differential Revision: https://reviews.llvm.org/D27151
    llvm-svn: 289262

* AMDGPU/SI: Don't reserve FLAT_SCR on non-HSA targets & without stack objects (Marek Olsak, 2016-12-09; 6 files, -42/+56)
    Summary: This frees 2 scalar registers.

    Reviewers: tstellarAMD
    Subscribers: qcolombet, arsenm, kzhuravl, wdng, nhaehnle, yaxunl, tony-tye
    Differential Revision: https://reviews.llvm.org/D27150
    llvm-svn: 289261

* AMDGPU/SI: Allow using SGPRs 96-101 on VI (Marek Olsak, 2016-12-09; 2 files, -7/+8)
    Summary: There is no point in setting SGPRS=104, because VI allocates SGPRs in
    multiples of 16, so 104 -> 112. That enables us to use all 102 SGPRs for general
    purposes.

    Reviewers: tstellarAMD
    Subscribers: qcolombet, arsenm, kzhuravl, wdng, nhaehnle, yaxunl, tony-tye
    Differential Revision: https://reviews.llvm.org/D27149
    llvm-svn: 289260

* [DWARF] Suppress .loc directives from CFI instructions (Paul Robinson, 2016-12-09; 1 file, -0/+76)
    Like DBG_VALUE, these emit nothing to the .text section, and sometimes have no
    source location specified. Just ignore them.

    Differential Revision: http://reviews.llvm.org/D27492
    llvm-svn: 289256
* Move .mir tests to appropriate directories (Matthias Braun, 2016-12-09; 23 files, -2/+0)
    test/CodeGen/MIR should contain tests that intend to test the MIR printing or
    parsing. Tests that test something else should be in test/CodeGen/TargetName even
    when they are written in .mir. As a rule of thumb, only tests using
    "llc -run-pass none" should be in test/CodeGen/MIR.

    llvm-svn: 289254
* [SelectionDAG] Add knownbits support for EXTRACT_VECTOR_ELT opcodes (REAPPLIED) (Simon Pilgrim, 2016-12-09; 2 files, -10/+29)
    Reapplied with fix for PR31323 - X86 SSE2 vXi16 multiplies for illegal types were
    creating CONCAT_VECTORS nodes with vector inputs that might not total the number
    of elements in the result type.

    llvm-svn: 289232

* AMDGPU: Fix i128 mul (Matt Arsenault, 2016-12-09; 1 file, -0/+71)
    llvm-svn: 289231

* AMDGPU: Allow TBA, TMA, TTMP* registers with SMEM instructions (Matt Arsenault, 2016-12-09; 2 files, -5/+153)
    Fixes assembler regressions.

    llvm-svn: 289230

* [PPC] Add intrinsics for vector extract word and vector insert word. (Sean Fertile, 2016-12-09; 1 file, -0/+18)
    Revision: https://reviews.llvm.org/D26547
    llvm-svn: 289227
* Revert "In visitSTORE, always use FindBetterChain, rather than only when ↵Nirav Dave2016-12-0964-1540/+1709
| | | | | | | | UseAA is enabled." This reverts commit r289221 which appears to be triggering an assertion llvm-svn: 289226
* In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled. (Nirav Dave, 2016-12-09; 64 files, -1709/+1540)
    Retrying after fixing an overly aggressive load-store forwarding optimization.

    Simplify Consecutive Merge Store Candidate Search

    Now that address aliasing is much less conservative, push through simplified store
    merging search which only checks for parallel stores through the chain subgraph.
    This is cleaner as the separation of non-interfering loads/stores from the
    store-merging logic.

    When merging stores, search up the chain through a single load, and find all
    possible stores by looking down through a load and a TokenFactor to all stores
    visited. This improves the quality of the output SelectionDAG and generally the
    output CodeGen (with some exceptions).

    Additional Minor Changes:
      1. Finishes removing unused AliasLoad code.
      2. Unifies the chain aggregation in the merged stores across code paths.
      3. Re-adds the Store node to the worklist after calling SimplifyDemandedBits.
      4. Increases GatherAllAliasesMaxDepth from 6 to 18. That number is arbitrary,
         but seemed sufficient to not cause regressions in tests.

    This finishes the change Matt Arsenault started in r246307 and jyknight's
    original patch.

    Many tests required some changes as memory operations are now reorderable. Some
    tests relying on the order were changed to use volatile memory operations.

    Noteworthy tests:
      CodeGen/AArch64/argument-blocks.ll - It's not entirely clear what the
        test_varargs_stackalign test is supposed to be asserting, but the new code
        looks right.
      CodeGen/AArch64/arm64-memset-inline.lli, CodeGen/AArch64/arm64-stur.ll,
        CodeGen/ARM/memset-inline.ll - The backend now generates *worse* code due to
        store merging succeeding, as we do do a 16-byte constant-zero store
        efficiently.
      CodeGen/AArch64/merge-store.ll - Improved, but there still seems to be an
        extraneous vector insert from an element to itself?
      CodeGen/PowerPC/ppc64-align-long-double.ll - Worse code emitted in this case,
        due to the improved store->load forwarding.
      CodeGen/X86/dag-merge-fast-accesses.ll, CodeGen/X86/MergeConsecutiveStores.ll,
        CodeGen/X86/stores-merging.ll, CodeGen/Mips/load-store-left-right.ll -
        Restored correct merging of non-aligned stores.
      CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll - Improved. Correctly
        merges buffer_store_dword calls.
      CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll - Improved. Sidesteps loading a
        stored value and merges two stores.
      CodeGen/X86/pr18023.ll - This test has been removed, as it was asserting
        incorrect behavior. Non-volatile stores *CAN* be moved past volatile loads,
        and now are.
      CodeGen/X86/vector-idiv.ll, CodeGen/X86/vector-lzcnt-128.ll - It's basically
        impossible to tell what these tests are actually testing. But, looks like the
        code got better due to the memory operations being recognized as non-aliasing.
      CodeGen/X86/win32-eh.ll - Both loads of the securitycookie are now merged.

    Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
    Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon,
      aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel
    Differential Revision: https://reviews.llvm.org/D14834
    llvm-svn: 289221
* AMDGPU/SI: Don't mark VINTRP instructions as mayLoad (Tom Stellard, 2016-12-09; 2 files, -5/+19)
    Summary: These instructions technically do read from memory, but the memory is
    considered to be out of bounds for normal load/store instructions.

    shader-db stats:
      SGPRS: 1416075 -> 1413323 (-0.19 %)
      VGPRS: 867413 -> 863935 (-0.40 %)
      Spilled SGPRs: 1409 -> 1354 (-3.90 %)
      Spilled VGPRs: 63 -> 63 (0.00 %)
      Private memory VGPRs: 880 -> 880 (0.00 %)
      Scratch size: 2648 -> 2632 (-0.60 %) dwords per thread
      Code Size: 37889052 -> 37897340 (0.02 %) bytes
      LDS: 2147 -> 2147 (0.00 %) blocks
      Max Waves: 279243 -> 280369 (0.40 %)
      Wait states: 0 -> 0 (0.00 %)

    Reviewers: nhaehnle, mareko, arsenm
    Subscribers: kzhuravl, wdng, yaxunl, tony-tye
    Differential Revision: https://reviews.llvm.org/D27593
    llvm-svn: 289219

* llvm/test/Object/archive-thin-create.test: Make sure that %t is empty to stabilize the test. (NAKAMURA Takumi, 2016-12-09; 1 file, -0/+1)
    llvm-svn: 289202

* [AVR] Remove a set of redundant tests (Dylan McKay, 2016-12-09; 4 files, -88/+0)
    This fixes the build.

    llvm-svn: 289201
* [SelectionDAG] Add partial BITCAST support to computeKnownBits (Simon Pilgrim, 2016-12-09; 1 file, -315/+123)
    Adds support for bitcasting a little endian 'small element' vector to 'large
    element' scalar/vector (e.g. v16i8 to v4i32 or v2i32 to i64), which is required
    for PR30845. We extract the knownbits for each 'small element' part and
    concatenate the results together.

    We can add support for big endian and 'large element' scalar/vector to 'small
    element' vector bitcasting once we have test cases for them.

    Differential Revision: https://reviews.llvm.org/D27129
    llvm-svn: 289200
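    Illustrative sketch (not from the patch; the value and known-bits assumptions are
    made up): for a little-endian bitcast such as

        %wide = bitcast <2 x i32> %v to i64

    element 0 of %v supplies bits 0-31 of %wide and element 1 supplies bits 32-63, so
    if bits 8-31 of every element are known zero, then bits 8-31 and 40-63 of %wide
    are known zero after the bitcast.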
* Revert "[SelectionDAG] Add knownbits support for EXTRACT_VECTOR_ELT opcodes"Daniel Jasper2016-12-091-2/+10
| | | | | | | | This reverts commit r288916 as it is currently causing a crasher in Halide. Reproducer on llvm.org/PR31323. While it might be that halide is generating invalid IR, llc shouldn't crash. llvm-svn: 289194
* [AVR] Add tests for a large number of pseudo instructions (Dylan McKay, 2016-12-09; 27 files, -4/+560)
    This adds MIR tests for 24 pseudo instructions.

    llvm-svn: 289191
* [AVX-512] Correctly preserve the passthru semantics of the FMA scalar intrinsics (Craig Topper, 2016-12-09; 2 files, -51/+49)
    Summary: Scalar intrinsics have specific semantics about which input's upper bits
    are passed through to the output. The same input is also supposed to be the input
    we use for the lower element when the mask bit is 0 in a masked operation. We
    aren't currently keeping these semantics with instruction selection.

    This patch corrects this by introducing new scalar FMA ISD nodes that indicate
    whether operand 1 (one of the multiply inputs) or operand 3 (the
    addition/subtraction input) should pass through its upper bits. We use this
    information to select the 213/132 form for the operand 1 version and the 231 form
    for the operand 3 version.

    We also use this information to suppress combining FNEG operations on the
    passthru input since semantically the passthru bits aren't negated. This is
    stronger than the earlier check added for a user being SELECTS so we can remove
    that.

    This fixes PR30913.

    Reviewers: delena, zvi, v_klochkov
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D27144
    llvm-svn: 289190
* AMDGPU: Select i16 instructions to VOP3 forms (Matt Arsenault, 2016-12-09; 2 files, -10/+10)
    These were selecting directly to the VOP2 form instead of VOP3 like the i32
    instructions. Fixes regressions in future commits where an immediate isn't folded
    because it was initially used for the second operand.

    Because uniform 16-bit operations are promoted to i32, it's difficult to get a
    simple testcase where this matters. Fold failures in SIFoldOperands here tend to
    be hidden by commute and fold in SIShrinkInstructions.

    llvm-svn: 289189

* [X86] Add masked versions of VPERMT2* and VPERMI2* to load folding tables. (Craig Topper, 2016-12-09; 1 file, -0/+28)
    llvm-svn: 289186

* [SCCP] Make the test added in r289175 more meaningful. (Davide Italiano, 2016-12-09; 1 file, -1/+2)
    Add a comment while here.

    llvm-svn: 289182
* [SCCP] Teach the pass about `mul %x 0` even if %x is overdefined. (Davide Italiano, 2016-12-09; 1 file, -1/+8)
    The motivating example is:

        extern int patatino;
        int goo() {
          int x = 0;
          for (int i = 0; i < 1000000; ++i) {
            x *= patatino;
          }
          return x;
        }

    Currently SCCP will not realize that this function always returns zero, and
    therefore will try to unroll and vectorize the loop at -O3, producing an awful
    lot of (useless) code. With this change, it will just produce:

        0000000000000000 <g>:
          xor %eax,%eax
          retq

    llvm-svn: 289175
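    Illustrative sketch (not from the commit; the IR is made up): the IR-level pattern
    the lattice now handles is a multiply by the constant zero whose other operand is
    overdefined:

        ; SCCP can now conclude %y is 0 even though %x is overdefined
        %y = mul i32 %x, 0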
* [AVX-512] Add vpermilps/pd to load folding tables. (Craig Topper, 2016-12-09; 2 files, -0/+200)
    llvm-svn: 289173

* [AVX-512] Move some floating point stack folding test cases out of the integer test. (Craig Topper, 2016-12-09; 4 files, -192/+192)
    llvm-svn: 289172
* WholeProgramDevirt: Teach the pass to handle structs of arrays. (Peter Collingbourne, 2016-12-09; 3 files, -4/+74)
    This will become necessary in some cases once D22296 lands.

    llvm-svn: 289165

* Make WholeProgramDevirt understand ConstStruct vtables. (Peter Collingbourne, 2016-12-09; 2 files, -0/+63)
    Based on a patch by LemonBoy!

    Differential Revision: https://reviews.llvm.org/D26581
    llvm-svn: 289162

* [ObjectYAML] Support for DWARF debug_aranges (Chris Bieneman, 2016-12-09; 1 file, -0/+335)
    This patch adds support for round tripping DWARF debug_aranges in and out of
    YAML.

    llvm-svn: 289161
* [InstCombine] add tests for umin+icmp; NFC (Sanjay Patel, 2016-12-08; 1 file, -0/+258)
    llvm-svn: 289157

* [InstCombine] add tests for umax+icmp; NFC (Sanjay Patel, 2016-12-08; 1 file, -0/+258)
    llvm-svn: 289156
* [InstSimplify] Add "X / 1.0" to SimplifyFDivInst. (Zia Ansari, 2016-12-08; 1 file, -3/+1)
    Differential Revision: https://reviews.llvm.org/D27587
    llvm-svn: 289153
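    Illustrative sketch (not from the commit; the IR is made up): the new case is a
    floating-point division by the constant 1.0:

        ; SimplifyFDivInst can now fold this to %x
        %r = fdiv double %x, 1.0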
* [InstCombine] add tests for smax+icmp; NFC (Sanjay Patel, 2016-12-08; 1 file, -0/+258)
    llvm-svn: 289151

* GlobalISel: fall back gracefully for debug intrinsics. (Tim Northover, 2016-12-08; 1 file, -0/+33)
    Supporting them properly is a reasonably complex chunk of work, so to allow bot
    testing before then we should at least be able to fall back to DAG ISel.

    llvm-svn: 289150
* [SCCP] Make sure SCCP and ConstantFolding agree on undef >> a. (Davide Italiano, 2016-12-08; 1 file, -2/+2)
    Currently SCCP folds the value to -1, while ConstantProp folds to 0. This changes
    SCCP to do what ConstantFolding does.

    llvm-svn: 289147
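    Illustrative sketch (not from the commit; the exact shift opcode used by the test
    is an assumption): a shift of undef by an overdefined amount, e.g.

        ; SCCP previously resolved this to -1, while ConstantFolding produced 0;
        ; the two now agree
        %r = ashr i32 undef, %a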
* [mips] Make the test case more specific and provide OS component of a triple. NFC (Simon Atanasyan, 2016-12-08; 1 file, -3/+3)
    llvm-svn: 289117

* [mips] Change instruction s/daddiu/addiu/ since O32 prohibits the use of 64-bit GPRs. NFC (Simon Atanasyan, 2016-12-08; 1 file, -34/+34)
    llvm-svn: 289115

* [mips] Change gnueabi to gnu in the triple because EABI has been removed recently. NFC (Simon Atanasyan, 2016-12-08; 1 file, -1/+1)
    llvm-svn: 289114

* [mips] Remove N32 Android test because Android does not support N32 ABI. NFC (Simon Atanasyan, 2016-12-08; 1 file, -2/+0)
    llvm-svn: 289113
* Don't emit .seh_handler directives for any cleanup funclets (Reid Kleckner, 2016-12-08; 1 file, -1/+1)
    We were falsely claiming that we had an LSDA for the relevant EH personality
    before this change, which could lead to the EH machinery interpreting random
    adjacent data as an LSDA.

    Fixes PR31317.

    This change is safe because cleanups can't contain exception handlers today. We
    do these things to maintain that invariant:
      - C++ destructors are naturally out-of-line
      - __finally blocks are outlined in clang
      - LLVM's inliner will not inline EH constructs into cleanups

    llvm-svn: 289101
* [InstSimplify] add fdiv x/1.0 test and update checks; NFC (Sanjay Patel, 2016-12-08; 1 file, -8/+25)
    llvm-svn: 289098

* AMDGPU: Make f16 ConstantFP legal (Matt Arsenault, 2016-12-08; 1 file, -2/+3)
    Not having this legal led to combine failures, resulting in dumb things like
    bitcasts of constants not being folded away.

    The only reason I'm leaving the v_mov_b32 hack that f32 already uses is to avoid
    madak formation test regressions. PeepholeOptimizer has an ordering issue where
    the immediate fold attempt is into the sgpr->vgpr copy instead of the actual use.
    Running it twice avoids that problem.

    llvm-svn: 289096

* AMDGPU: Fix commuting v_sub_u16 (Matt Arsenault, 2016-12-08; 1 file, -0/+169)
    The correct commutable opcode was set to itself, so this was simply swapping the
    operands to commute instead of also changing the opcode to v_subrev_u16.

    llvm-svn: 289093
* [AMDGPU] Add amdgpu-unify-metadata pass (Stanislav Mekhanoshin, 2016-12-08; 1 file, -0/+26)
    Multiple metadata values for records such as opencl.ocl.version, llvm.ident and
    similar are created after linking several modules. For some of them, notably
    opencl.ocl.version, this creates a semantic problem because we cannot tell which
    version of OpenCL the composite module conforms to. Moreover, such repetitions of
    identical values often create a huge list of unneeded metadata, which grows
    bitcode size both in memory and stored on disk. It can go up to several Mb when
    linked against our OpenCL library. Lastly, such long lists obscure reading of
    dumped IR.

    The pass unifies metadata after linking.

    Differential Revision: https://reviews.llvm.org/D25381
    llvm-svn: 289092
* IR, X86: Understand !absolute_symbol metadata on global variables. (Peter Collingbourne, 2016-12-08; 5 files, -0/+177)
    Summary: Attaching !absolute_symbol to a global variable does two things:
      1) Marks it as an absolute symbol reference.
      2) Specifies the value range of that symbol's address.

    Teach the X86 backend to allow absolute symbols to appear in place of immediates
    by extending the relocImm and mov64imm32 matchers. Start using relocImm in more
    places where it is legal.

    As previously proposed on llvm-dev:
    http://lists.llvm.org/pipermail/llvm-dev/2016-October/105800.html

    Differential Revision: https://reviews.llvm.org/D25878
    llvm-svn: 289087
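    Illustrative sketch (not from the commit; the symbol name and range are made up):

        ; declare an absolute symbol whose address is known to fit in 32 bits
        @abs_sym = external global i8, !absolute_symbol !0
        !0 = !{i64 0, i64 4294967296}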
* [AMDGPU] Scalarization of global uniform loads. (Alexander Timofeev, 2016-12-08; 2 files, -0/+206)
    Summary: LC can currently select scalar load for uniform memory access based on
    the readonly memory address space only. This restriction originated from the fact
    that in HW prior to VI vector and scalar caches are not coherent. With
    MemoryDependenceAnalysis we can check that the memory location corresponding to
    the memory operand of the LOAD is not clobbered along all paths from the function
    entry.

    Reviewers: rampitec, tstellarAMD, arsenm
    Subscribers: wdng, arsenm, nhaehnle
    Differential Revision: https://reviews.llvm.org/D26917
    llvm-svn: 289076
* ConstantFolding: Don't crash when encountering vector GEP (Keno Fischer, 2016-12-08; 1 file, -0/+19)
    ConstantFolding tried to cast one of the scalar indices to a vector type.
    Instead, use the vector type only for the first index (which is the only one
    allowed to be a vector) and use its scalar type otherwise.

    Fixes PR31250.

    Reviewers: majnemer
    Differential Revision: https://reviews.llvm.org/D27389
    llvm-svn: 289073
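    Illustrative sketch (not the reproducer from PR31250; the IR is made up): constant
    folding a GEP whose first index is a vector, which yields a vector of pointers:

        @g = external global i32
        ; all operands are constants, so ConstantFolding may try to fold this GEP
        %p = getelementptr i32, i32* @g, <2 x i64> <i64 0, i64 1>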