path: root/llvm/lib/Target
* Add a WriteMicrocoded for ancient microcoded instructions. (Jakob Stoklund Olesen, 2013-03-21; 2 files, -0/+7)
  llvm-svn: 177611
* Model prefetches and barriers as loads. (Jakob Stoklund Olesen, 2013-03-20; 1 file, -1/+4)
  It's not yet clear if these instructions need a more careful model.
  llvm-svn: 177599
* Add a catch-all WriteSystem SchedWrite type. (Jakob Stoklund Olesen, 2013-03-20; 3 files, -1/+28)
  This is used for all the expensive system instructions.
  llvm-svn: 177598
* Annotate the remaining SSE MOV instructions. (Jakob Stoklund Olesen, 2013-03-20; 1 file, -25/+45)
  llvm-svn: 177592
* Annotate SSE horizontal and integer instructions. (Jakob Stoklund Olesen, 2013-03-20; 1 file, -16/+26)
  llvm-svn: 177591
* Correct cost model for vector shift on AVX2 (Michael Liao, 2013-03-20; 1 file, -0/+23)
  After moving the logic that recognizes vector shifts by a scalar amount from DAG combining into DAG lowering, all vector shifts are declared as custom-lowered, even when the vector shift is legal on AVX. As a result, the cost model needs special tuning to identify these legal cases.
  llvm-svn: 177586
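To illustrate the kind of special case the tuned cost model has to recognize, here is a minimal Python sketch (not LLVM code; the function name is hypothetical, and the table reflects the variable vector shifts AVX2 implements natively as vpsllv*/vpsrlv*/vpsravd):

```python
# Variable vector shifts AVX2 handles with a single instruction; they are
# declared Custom after this change, so the cost model must still treat
# them as cheap.  Table contents are illustrative.
AVX2_LEGAL_VARIABLE_SHIFTS = {
    ("shl", "v4i32"), ("shl", "v8i32"), ("shl", "v2i64"), ("shl", "v4i64"),
    ("srl", "v4i32"), ("srl", "v8i32"), ("srl", "v2i64"), ("srl", "v4i64"),
    ("sra", "v4i32"), ("sra", "v8i32"),  # no vpsravq on AVX2
}

def variable_shift_cost(opcode: str, vt: str, fallback: int = 10) -> int:
    """Return a cheap cost for shifts AVX2 executes natively, else a
    pessimistic fallback cost for the custom-lowered expansion."""
    if (opcode, vt) in AVX2_LEGAL_VARIABLE_SHIFTS:
        return 1
    return fallback
```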
* Add some missing SSE annotations. (Jakob Stoklund Olesen, 2013-03-20; 1 file, -8/+18)
  llvm-svn: 177540
* Annotate remaining IIC_BIN_* instructions. (Jakob Stoklund Olesen, 2013-03-20; 1 file, -5/+10)
  llvm-svn: 177539
* Fix PR15296 (Michael Liao, 2013-03-20; 1 file, -119/+199)
  Move SRA/SRL/SHL lowering support from DAG combining to DAG lowering to support extended 256-bit integers on AVX but not AVX2.
  llvm-svn: 177478
* Mark all variable shifts as needing custom lowering (Michael Liao, 2013-03-20; 1 file, -28/+29)
  Prepare to move logic from DAG combining into DAG lowering. No functionality change.
  llvm-svn: 177477
* Move scalar immediate shift lowering into a dedicated func (Michael Liao, 2013-03-20; 1 file, -5/+20)
  No functionality change.
  llvm-svn: 177476
* Fix pr13145 - Naming a function like a register name confuses the asm parser. (Chad Rosier, 2013-03-19; 1 file, -14/+20)
  Patch by Stepan Dyatkovskiy <stpworld@narod.ru>
  rdar://13457826
  llvm-svn: 177463
* Annotate various null idioms with SchedRW lists. (Jakob Stoklund Olesen, 2013-03-19; 1 file, -4/+4)
  llvm-svn: 177461
* Annotate SSE float conversions with SchedRW lists. (Jakob Stoklund Olesen, 2013-03-19; 1 file, -60/+81)
  llvm-svn: 177460
* Annotate X86InstrCMovSetCC.td with SchedRW lists. (Jakob Stoklund Olesen, 2013-03-19; 1 file, -4/+5)
  llvm-svn: 177459
* [ms-inline asm] Move the immediate asm rewrite into the target-specific logic as a QOI cleanup. (Chad Rosier, 2013-03-19; 1 file, -10/+4)
  No functional change. Tests already in place.
  rdar://13456414
  llvm-svn: 177446
* Annotate X86InstrCompiler.td with SchedRW lists. (Jakob Stoklund Olesen, 2013-03-19; 2 files, -8/+13)
  Add a new WriteZero SchedWrite type for the common dependency-breaking instructions that clear a register.
  llvm-svn: 177442
* [ms-inline asm] Create a helper function, CreateMemForInlineAsm, that creates an X86Operand but also performs a Sema lookup and adds the sizing directive when appropriate. (Chad Rosier, 2013-03-19; 1 file, -36/+49)
  Use this when parsing a bracketed statement. This is necessary to get the instruction matching correct as well. Test case coming on the clang side.
  rdar://13455408
  llvm-svn: 177439
* Add missing mayLoad flag to LHAUX8 and LWAUX. (Ulrich Weigand, 2013-03-19; 1 file, -1/+2)
  All pre-increment load patterns need to set the mayLoad flag (since they don't provide a DAG pattern). This was missing for LHAUX8 and LWAUX, which is added by this patch.
  llvm-svn: 177431
* Rewrite LHAU8 pattern to use standard memory operand. (Ulrich Weigand, 2013-03-19; 1 file, -4/+4)
  As opposed to the pre-increment store patterns, the pre-increment load patterns were already using standard memory operands, with the sole exception of LHAU8. As there's no real reason why LHAU8 should be different here, this patch simply rewrites the pattern to also use a memri operand, just like all the other patterns.
  llvm-svn: 177430
* Rewrite pre-increment store patterns to use standard memory operands. (Ulrich Weigand, 2013-03-19; 2 files, -151/+121)
  Currently, pre-increment store patterns are written to use two separate operands to represent address base and displacement:
    stwu $rS, $ptroff($ptrreg)
  This causes problems when implementing the assembler parser, so this commit changes the patterns to use standard (complex) memory operands like in all other memory access instruction patterns:
    stwu $rS, $dst
  To still match those instructions against the appropriate pre_store SelectionDAG nodes, the patch uses the new feature that allows a Pat to match multiple DAG operands against a single (complex) instruction operand.
  Approved by Hal Finkel.
  llvm-svn: 177429
* Fix sub-operand size mismatch in tocentry operands. (Ulrich Weigand, 2013-03-19; 1 file, -1/+1)
  The tocentry operand class refers to 64-bit values (it is only used in 64-bit mode, where iPTR is a 64-bit type), but its sole sub-operand is designated as a 32-bit type. This causes a mismatch to be detected at compile time with the TableGen patch I'll check in shortly. To fix this, this commit changes the sub-operand to a 64-bit type as well.
  llvm-svn: 177427
* Remove an invalid and unnecessary Pat pattern from the X86 backend: (Ulrich Weigand, 2013-03-19; 1 file, -3/+0)
    def : Pat<(load (i64 (X86Wrapper tglobaltlsaddr :$dst))),
              (MOV64rm tglobaltlsaddr :$dst)>;
  This pattern is invalid because the MOV64rm instruction expects a source operand of type "i64mem", which is a subclass of X86MemOperand and thus actually consists of five MI operands, but the Pat provides only a single MI operand ("tglobaltlsaddr" matches an SDnode of type ISD::TargetGlobalTLSAddress and provides a single output). Thus, if the pattern were ever matched, subsequent uses of the MOV64rm instruction pattern would access uninitialized memory. In addition, with the TableGen patch I'm about to check in, this would actually be reported as a build-time error.
  Fortunately, the pattern does in fact never match, for at least two independent reasons.
  First, the code generator actually never generates a pattern of the form (load (X86Wrapper (tglobaltlsaddr))). For most combinations of TLS and code models, (tglobaltlsaddr) represents just an offset that needs to be added to some base register, so it is never directly dereferenced. The only exception is the initial-exec model, where (tglobaltlsaddr) refers to the (pc-relative) address of a GOT slot, which *is* in fact directly dereferenced: but in that case, the X86WrapperRIP node is used, not X86Wrapper, so the Pat doesn't match.
  Second, even if some patterns along those lines *were* ever generated, we should not need an extra Pat pattern to match them. Instead, the original MOV64rm instruction pattern ought to match directly, since it uses an "addr" operand, which is implemented via the SelectAddr C++ routine; this routine is supposed to accept the full range of input DAGs that may be implemented by a single mov instruction, including those cases involving ISD::TargetGlobalTLSAddress (and actually does so, e.g., in the initial-exec case as above).
  To avoid build breaks (due to the above-mentioned error) after the TableGen patch is checked in, I'm removing this Pat here.
  llvm-svn: 177426
* Prepare to make r0 an allocatable register on PPC (Hal Finkel, 2013-03-19; 6 files, -112/+156)
  Currently the PPC r0 register is unconditionally reserved. There are two reasons for this:
  1. r0 is treated specially (as the constant 0) by certain instructions, and so cannot be used with those instructions as a regular register.
  2. r0 is used as a temporary register in the CR-register spilling process (where, under some circumstances, we require two GPRs).
  This change addresses the first reason by introducing a restricted register class (without r0) for use by those instructions that treat r0 specially. These register classes have a new pseudo-register, ZERO, which represents the r0-as-0 use. This has the side benefit of making the existing target code simpler (and easier to understand), and will make it clear to the register allocator that uses of r0 as 0 don't conflict with real uses of the r0 register.
  Once the CR spilling code is improved, we'll be able to allocate r0.
  Adding these extra register classes, for some reason unclear to me, causes requests to the target to copy 32-bit registers to 64-bit registers. The resulting code seems correct (and causes no test-suite failures), and the new test case covers this new kind of asymmetric copy.
  As r0 is still reserved, no functionality change intended.
  llvm-svn: 177423
* Optimize sext <4 x i8> and <4 x i16> to <4 x i64>. (Nadav Rotem, 2013-03-19; 2 files, -4/+19)
  Patch by Ahmad, Muhammad T <muhammad.t.ahmad@intel.com>
  llvm-svn: 177421
* Annotate X86InstrExtension.td with SchedRW lists. (Jakob Stoklund Olesen, 2013-03-19; 1 file, -26/+39)
  llvm-svn: 177418
* Annotate a lot of X86InstrInfo.td with SchedRW lists. (Jakob Stoklund Olesen, 2013-03-19; 1 file, -26/+60)
  llvm-svn: 177417
* [ms-inline asm] Move the size directive asm rewrite into the target-specific logic as a QOI cleanup. (Chad Rosier, 2013-03-19; 1 file, -44/+33)
  rdar://13445327
  llvm-svn: 177413
* Cleanup PPC64 unaligned i64 load/store (Hal Finkel, 2013-03-19; 1 file, -4/+0)
  Remove an accidentally-added instruction definition and add a comment in the test case. This is in response to a post-commit review by Bill Schmidt. No functionality change intended.
  llvm-svn: 177404
* Improve long vector sext/zext lowering on ARM (Renato Golin, 2013-03-19; 2 files, -4/+67)
  The ARM backend currently has poor codegen for long sext/zext operations, such as v8i8 -> v8i32. This patch addresses this by performing a custom expansion in ARMISelLowering. It also adds/changes the cost of such lowering in ARMTTI. This partially addresses PR14867.
  Patch by Pete Couperus
  llvm-svn: 177380
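The staged widening such a custom expansion performs can be modeled in a few lines of Python (a semantic sketch only, with hypothetical names; the real expansion emits NEON vmovl instructions from ARMISelLowering):

```python
def sext8(x: int) -> int:
    """Sign-extend an 8-bit lane value to a Python int."""
    x &= 0xFF
    return x - 0x100 if x & 0x80 else x

def sext_v8i8_to_v8i32(vec):
    """Model of a v8i8 -> v8i32 sign-extension done in widening stages,
    as vmovl.s8 (to v8i16) followed by vmovl.s16 on each v4i16 half
    would do.  Sign extension composes, so one step per lane models
    the multi-instruction sequence."""
    assert len(vec) == 8
    return [sext8(x) for x in vec]
```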
* Don't reserve R31 on PPC64 unless the frame pointer is needed (Hal Finkel, 2013-03-19; 1 file, -4/+3)
  llvm-svn: 177379
* Fix a sign-extension bug in PPCCTRLoops (Hal Finkel, 2013-03-18; 1 file, -1/+1)
  Don't sign extend the immediate value from the OR instruction in an LIS/OR pair.
  llvm-svn: 177361
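The fix can be illustrated with a small Python model of how a 32-bit constant is reconstructed from an LIS/ORI pair (hypothetical helper names, not the PPCCTRLoops code; the ORI immediate is a zero-extended 16-bit field, so sign-extending it corrupts any constant whose low halfword has bit 15 set):

```python
def materialize_lis_ori(hi16: int, lo16: int) -> int:
    """Reconstruct the constant built by lis (upper halfword) + ori
    (lower halfword).  The ori immediate is zero-extended by hardware."""
    return ((hi16 & 0xFFFF) << 16) | (lo16 & 0xFFFF)

def buggy_materialize(hi16: int, lo16: int) -> int:
    """The pre-fix behavior: treating the ori immediate as signed."""
    if lo16 & 0x8000:
        lo16 -= 0x10000  # wrong: sign-extends a zero-extended field
    return (((hi16 & 0xFFFF) << 16) | (lo16 & 0xFFFFFFFF)) & 0xFFFFFFFF
```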
* [ms-inline asm] Avoid emitting a redundant sizing directive, if we've already parsed one. (Chad Rosier, 2013-03-18; 1 file, -2/+3)
  Test case coming shortly.
  rdar://13446980
  llvm-svn: 177347
* Fix PPC unaligned 64-bit loads and stores (Hal Finkel, 2013-03-18; 3 files, -6/+60)
  PPC64 supports unaligned loads and stores of 64-bit values, but in order to use the r+i forms, the offset must be a multiple of 4. Unfortunately, this cannot always be determined by examining the immediate itself because it might be available only via a TOC entry.
  In order to get around this issue, we additionally predicate the selection of the r+i form on the alignment of the load or store (forcing it to be at least 4 in order to select the r+i form).
  llvm-svn: 177338
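The selection predicate described here can be sketched as follows (a hedged illustration; the function name and interface are hypothetical, not the actual PPC backend code):

```python
def can_select_reg_imm_form(offset_known: bool, offset: int, alignment: int) -> bool:
    """PPC64 DS-form (r+i) 64-bit loads/stores encode a displacement that
    must be a multiple of 4.  When the offset may come from a TOC entry it
    is unknown at selection time, so additionally require the access itself
    to be at least 4-byte aligned before choosing the r+i form."""
    if offset_known:
        return offset % 4 == 0
    return alignment >= 4
```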
* ARM cost model: Make some vector integer to float casts cheaper (Arnold Schwaighofer, 2013-03-18; 1 file, -0/+30)
  The default logic marks them as too expensive. For example, before this patch we estimated:
    cost of 16 for instruction: %r = uitofp <4 x i16> %v0 to <4 x float>
  While this translates to:
    vmovl.u16 q8, d16
    vcvt.f32.u32 q8, q8
  All other costs are left to the values assigned by the fallback logic. These costs are mostly reasonable in the sense that they get progressively more expensive as the instruction sequences emitted get longer.
  radar://13445992
  llvm-svn: 177334
* ARM cost model: Correct cost for some cheap float to integer conversions (Arnold Schwaighofer, 2013-03-18; 1 file, -1/+9)
  Fix the cost of some "cheap" cast instructions. Before this patch we used to estimate, for example:
    cost of 16 for instruction: %r = fptoui <4 x float> %v0 to <4 x i16>
  While we would emit:
    vcvt.s32.f32 q8, q8
    vmovn.i32 d16, q8
    vuzp.8 d16, d17
  All other costs are left to the values assigned by the fallback logic. These costs are mostly reasonable in the sense that they get progressively more expensive as the instruction sequences emitted get longer.
  radar://13434072
  llvm-svn: 177333
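The mechanism behind these two cost-model commits is a table lookup with a fallback; a minimal Python sketch (the table entries, names, and values here are hypothetical — the real tables live in ARMTTI and their costs come from the emitted instruction sequences):

```python
# Hypothetical (opcode, src_type, dst_type) -> cost entries.
CAST_COST_TABLE = {
    ("fptoui", "v4f32", "v4i16"): 3,  # e.g. vcvt + vmovn + vuzp
    ("uitofp", "v4i16", "v4f32"): 2,  # e.g. vmovl + vcvt
}

def cast_cost(opcode: str, src: str, dst: str, fallback: int = 16) -> int:
    """Return the table cost if present, else the pessimistic fallback
    the default logic would have produced."""
    return CAST_COST_TABLE.get((opcode, src, dst), fallback)
```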
* Add SchedRW annotations to most of X86InstrSSE.td. (Jakob Stoklund Olesen, 2013-03-18; 1 file, -186/+280)
  We hitch a ride with the existing OpndItins class that was used to add instruction itinerary classes in the many multiclasses in this file. Use the link provided by X86FoldableSchedWrite.Folded to find the right SchedWrite for folded loads.
  llvm-svn: 177326
* Annotate X86 arithmetic instructions with SchedRW lists. (Jakob Stoklund Olesen, 2013-03-18; 1 file, -60/+112)
  This new-style scheduling information is going to replace the instruction itineraries. This also serves as a test case for Andy's fix in r177317.
  llvm-svn: 177323
* Fix 80-col. violations in PPCCTRLoops (Hal Finkel, 2013-03-18; 1 file, -6/+8)
  llvm-svn: 177296
* Fix large count and negative constant count handling in PPCCTRLoops (Hal Finkel, 2013-03-18; 1 file, -11/+41)
  This commit fixes an assert that would occur on loops with large constant counts (like looping for ((uint32_t) -1) iterations on PPC64). The existing code did not handle counts that it computed to be negative (asserting instead), but these can be created with valid inputs. This bug was discovered by bugpoint while I was attempting to isolate a completely different problem.
  Also, in writing test cases for the negative-count problem, I discovered that the ori/lis handling was broken (there was a typo which caused the logic that was supposed to detect these pairs and extract the iteration count to always fail). This has now also been corrected (and is covered by one of the new test cases).
  llvm-svn: 177295
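The negative-count fix amounts to reinterpreting the constant as unsigned at the counter's width; a minimal sketch (hypothetical name, not the PPCCTRLoops code):

```python
def ctr_trip_count(count: int, bit_width: int = 32) -> int:
    """A constant like (uint32_t)-1 surfaces as a negative signed value;
    masking to the counter width recovers the real iteration count
    instead of asserting on count < 0."""
    return count & ((1 << bit_width) - 1)
```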
* Cleanup initial-value constants in PPCCTRLoops (Hal Finkel, 2013-03-18; 1 file, -2/+9)
  Because the initial-value constants had not been added to the list of instructions considered for DCE, the resulting code had redundant constant-materialization instructions.
  llvm-svn: 177294
* R600/SI: implement indirect addressing for SI (Christian Konig, 2013-03-18; 3 files, -1/+190)
  Signed-off-by: Christian König <christian.koenig@amd.com>
  Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
  llvm-svn: 177277
* R600/SI: add float vector types (Christian Konig, 2013-03-18; 4 files, -21/+82)
  Signed-off-by: Christian König <christian.koenig@amd.com>
  Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
  llvm-svn: 177276
* R600/SI: add shl pattern (Christian Konig, 2013-03-18; 3 files, -1/+8)
  Signed-off-by: Christian König <christian.koenig@amd.com>
  Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
  llvm-svn: 177275
* R600/SI: add BUFFER_LOAD_DWORD pattern (Christian Konig, 2013-03-18; 1 file, -3/+9)
  Signed-off-by: Christian König <christian.koenig@amd.com>
  Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
  llvm-svn: 177274
* R600/SI: implement SI.load.const intrinsic (Christian Konig, 2013-03-18; 2 files, -2/+13)
  Signed-off-by: Christian König <christian.koenig@amd.com>
  Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
  llvm-svn: 177273
* R600/SI: enable all S_LOAD and S_BUFFER_LOAD opcodes (Christian Konig, 2013-03-18; 2 files, -14/+29)
  Signed-off-by: Christian König <christian.koenig@amd.com>
  Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
  llvm-svn: 177272
* R600/SI: fix inserting waits for all defines (Christian Konig, 2013-03-18; 1 file, -15/+1)
  Unfortunately the previous fix for inserting waits for unordered defines wasn't sufficient, because it's possible that even ordered defines are only partially used (or not used at all).
  Signed-off-by: Christian König <christian.koenig@amd.com>
  Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
  llvm-svn: 177271
* TLS support for MinGW targets. (Anton Korobeynikov, 2013-03-18; 1 file, -7/+8)
  MinGW is almost completely compatible with MSVC, with the exception of the _tls_array global not being available.
  Patch by David Nadlinger!
  llvm-svn: 177257
* Post process ADC/SBB and use a shorter encoding if they use a sign extended immediate. (Craig Topper, 2013-03-18; 1 file, -0/+6)
  llvm-svn: 177243