path: root/llvm/lib/CodeGen
* Make DIE.h a public CodeGen header. (Frederic Riss, 2015-01-05; 9 files, -595/+8)
  dsymutil would like to use all the AsmPrinter/MCStreamer infrastructure to stream out the DWARF. In order to do so, it will reuse the DIE object and so this header needs to be public.
  The interface exposed here has some corners that cannot be used without a DwarfDebug object, but clients that want to stream Dwarf can just avoid these.
  Differential Revision: http://reviews.llvm.org/D6695
  llvm-svn: 225208
* Replace several 'assert(false' with 'llvm_unreachable' or fold a condition into the assert. (Craig Topper, 2015-01-05; 3 files, -4/+5)
  llvm-svn: 225160
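  As an aside, a minimal sketch of the idiom being applied here, on a hypothetical helper (llvm_unreachable is the real macro from llvm/Support/ErrorHandling.h):

    #include "llvm/Support/ErrorHandling.h"

    enum class Kind { Byte, Word };

    static int widthOf(Kind K) {
      switch (K) {
      case Kind::Byte: return 8;
      case Kind::Word: return 32;
      }
      // Before: assert(false && "unknown kind"); return 0;
      // llvm_unreachable documents the invariant and, in builds without
      // assertions, lowers to an unreachable hint instead of requiring a
      // dead return after the switch.
      llvm_unreachable("unknown kind");
    }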
* [PowerPC/BlockPlacement] Allow target to provide a per-loop alignment preference (Hal Finkel, 2015-01-03; 1 file, -3/+4)
  The existing code provided for specifying a global loop alignment preference. However, the preferred loop alignment might depend on the loop itself. For recent POWER cores, loops between 5 and 8 instructions should have 32-byte alignment (while the others are better with 16-byte alignment) so that the entire loop will fit in one i-cache line.
  To support this, getPrefLoopAlignment has been made virtual, and can be provided with an optional MachineLoop* so the target can inspect the loop before answering the query. The default behavior, as before, is to return the value set with setPrefLoopAlignment. MachineBlockPlacement now queries the target for each loop instead of only once per function. There should be no functional change for other targets.
  llvm-svn: 225117
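  To illustrate the new hook, a sketch of a per-loop override in a hypothetical target's TargetLowering subclass; the 5-8 instruction heuristic echoes the POWER description above, and the raw instruction count is a simplification:

    unsigned MyTargetLowering::getPrefLoopAlignment(MachineLoop *ML) const {
      if (ML) {
        unsigned NumInstrs = 0;
        for (MachineBasicBlock *MBB : ML->getBlocks())
          NumInstrs += MBB->size();
        if (NumInstrs >= 5 && NumInstrs <= 8)
          return 5; // log2 alignment: 2^5 = 32 bytes, one i-cache line
      }
      // Default behavior: the value set with setPrefLoopAlignment().
      return TargetLowering::getPrefLoopAlignment(ML);
    }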
* Revert "merge consecutive stores of extracted vector elements"Alexey Samsonov2014-12-311-75/+4
| | | | | | | This reverts commit r224611. This change causes crashes in X86 DAG->DAG Instruction Selection. llvm-svn: 225031
* DebugInfo: Omit is_stmt from line table entries on the same line. (David Blaikie, 2014-12-30; 1 file, -1/+3)
  GCC does this for non-zero discriminators and, since GCC doesn't produce column info, that was the only place it came up there. For LLVM, since we can emit discriminators and/or column info, it makes more sense to invert the condition and just test for changes in line number.
  This should resolve at least some of the GDB 7.5 test suite failures created by recent Clang changes that increase the location fidelity (which, since Clang defaults to including column info on Linux, created a bunch of cases that confused GDB).
  In theory we could do this better/differently by grouping actual source statements together in a similar manner to the way lexical scopes are handled, but given that GDB isn't really in a position to consume that (& users are probably somewhat used to different lines being different 'statements') this seems the safest and cheapest change. (I'm concerned that doing this 'right' would bloat the debugloc data even further - something Duncan's working hard to address.)
  llvm-svn: 225011
* x86_64: Fix calls to __morestack under the large code model. (Peter Collingbourne, 2014-12-30; 2 files, -1/+18)
  Under the large code model, we cannot assume that __morestack lives within 2^31 bytes of the call site, so we cannot use pc-relative addressing. We cannot perform the call via a temporary register, as the rax register may be used to store the static chain, and all other suitable registers may be either callee-save or used for parameter passing. We cannot use the stack at this point either because __morestack manipulates the stack directly.
  To avoid these issues, perform an indirect call via a read-only memory location containing the address.
  This solution is not perfect, as it assumes that the .rodata section is laid out within 2^31 bytes of each function body, but this seems to be sufficient for JIT.
  Differential Revision: http://reviews.llvm.org/D6787
  llvm-svn: 225003
* [COFF] Don't try to add quotes to already quoted linker directives (Michael Kuperstein, 2014-12-30; 1 file, -1/+1)
  If a linker directive is already quoted, don't try to quote it again, otherwise it creates a mess. This pops up in places like:
    #pragma comment(linker,"\"/foo bar'\"")
  Differential Revision: http://reviews.llvm.org/D6792
  llvm-svn: 224998
* Refactor duplicated code. (Rafael Espindola, 2014-12-29; 1 file, -54/+0)
  No intended functionality change.
  llvm-svn: 224935
* [CodeGenPrepare] Teach when it is profitable to speculate calls to @llvm.cttz/ctlz. (Andrea Di Biagio, 2014-12-28; 1 file, -0/+141)
  If the control flow is modelling an if-statement where the only instruction in the 'then' basic block (excluding the terminator) is a call to cttz/ctlz, CodeGenPrepare can try to speculate the cttz/ctlz call and simplify the control flow graph.
  Example:
  \code
  entry:
    %cmp = icmp eq i64 %val, 0
    br i1 %cmp, label %end.bb, label %then.bb

  then.bb:
    %c = tail call i64 @llvm.cttz.i64(i64 %val, i1 true)
    br label %end.bb

  end.bb:
    %cond = phi i64 [ %c, %then.bb ], [ 64, %entry]
  \endcode
  In this example, basic block %then.bb is taken if value %val is not zero. Also, the phi node in %end.bb would propagate the size in bits of %val only if %val is equal to zero.
  With this patch, CodeGenPrepare will try to hoist the call to cttz from %then.bb into basic block %entry only if cttz is cheap to speculate for the target.
  Added two new hooks in TargetLowering.h to let targets customize the behavior (i.e. decide whether it is cheap or not to speculate calls to cttz/ctlz). The two new methods are 'isCheapToSpeculateCtlz' and 'isCheapToSpeculateCttz'. By default, both methods return 'false'. On X86, method 'isCheapToSpeculateCtlz' returns true only if the target has LZCNT. Method 'isCheapToSpeculateCttz' only returns true if the target has BMI.
  Differential Revision: http://reviews.llvm.org/D6728
  llvm-svn: 224899
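  In outline, the X86 overrides described in the last paragraph look like the following sketch (Subtarget is X86TargetLowering's existing subtarget pointer):

    bool X86TargetLowering::isCheapToSpeculateCttz() const {
      // TZCNT (BMI) is a single instruction with defined zero-input
      // behavior, so speculating the cttz call is cheap.
      return Subtarget->hasBMI();
    }

    bool X86TargetLowering::isCheapToSpeculateCtlz() const {
      // Likewise LZCNT for ctlz.
      return Subtarget->hasLZCNT();
    }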
* Scalarizer for masked load and store intrinsics. (Elena Demikhovsky, 2014-12-28; 1 file, -40/+274)
  Masked vector intrinsics are a part of common LLVM IR, but they are only really supported on AVX2 and AVX-512 targets. I added code that translates the masked intrinsics for all other targets. The masked vector intrinsic is converted to a chain of scalar operations inside conditional basic blocks.
  http://reviews.llvm.org/D6436
  llvm-svn: 224897
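  For illustration, the semantics that the emitted chain of conditional basic blocks implements for a masked load, written as plain C++ (the names are illustrative, not the pass's API):

    // Scalarized form of llvm.masked.load: each lane either loads the
    // element or keeps the pass-through value, guarded by its mask bit.
    void maskedLoadScalarized(const int *Ptr, const bool *Mask,
                              const int *PassThru, int *Out, unsigned N) {
      for (unsigned I = 0; I != N; ++I)
        Out[I] = Mask[I] ? Ptr[I] : PassThru[I];
    }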
* Band-aid fix for PR22032: don't emit DWARF debug info if AddressSanitizer is enabled on Windows (Timur Iskhodzhanov, 2014-12-26; 1 file, -3/+16)
  llvm-svn: 224860
* Silence GCC's -Wparentheses warning (David Majnemer, 2014-12-25; 1 file, -1/+1)
  No functionality change intended.
  llvm-svn: 224833
* Masked Load/Store - Changed the order of parameters in intrinsics. (Elena Demikhovsky, 2014-12-25; 1 file, -5/+7)
  No functional changes. The documentation is coming.
  llvm-svn: 224829
* CodeGen: Error on redefinitions instead of asserting (David Majnemer, 2014-12-24; 1 file, -5/+11)
  It's possible to have a prior definition of a symbol in module asm. Raise an error instead of crashing.
  llvm-svn: 224828
* CodeGen: Allow aliases to be overridden by variables (David Majnemer, 2014-12-24; 1 file, -0/+2)
  llvm-svn: 224827
* MC: Label definitions are permitted after .set directives (David Majnemer, 2014-12-24; 1 file, -0/+2)
  .set directives may be overridden by other .set directives as well as label definitions.
  This fixes PR22019.
  llvm-svn: 224811
* LiveInterval: Remove accidentally committed debug code. (Matthias Braun, 2014-12-24; 1 file, -10/+0)
  llvm-svn: 224807
* LiveInterval: Introduce createMainRangeFromSubranges(). (Matthias Braun, 2014-12-24; 2 files, -7/+226)
  This function constructs the main liverange by merging all subranges if subregister liveness tracking is available. This should be slightly faster to compute instead of performing the liveness calculation again for the main range. More importantly it avoids cases where the main liverange would cover positions where no subrange was live. These cases happened for partial definitions where the actual defined part was dead and only the undefined parts were used later. The register coalescing requires that every part covered by the main live range has at least one subrange live.
  I also expect this function to become useful later for places where the subranges are modified in a way that makes it hard to correctly fix the main liverange, such as in the machine scheduler; we can then simply reconstruct it from the subranges.
  llvm-svn: 224806
* RegisterCoalescer: With subrange liveness there may be no RedefVNI for unused lanes. (Matthias Braun, 2014-12-24; 1 file, -3/+6)
  llvm-svn: 224805
* LiveRangeEdit: Check for completely empty subranges after removing ValNos. (Matthias Braun, 2014-12-24; 1 file, -0/+1)
  Completely empty subranges are not allowed and must be removed when subreg liveness is enabled.
  llvm-svn: 224804
* LiveIntervalAnalysis: Fix performance bug that I introduced in r224663. (Matthias Braun, 2014-12-24; 1 file, -2/+2)
  Without a reference, the code did not remember its position when moving the iterators of the subranges/registerunit ranges forward, and would instead scan from the beginning again at the next position.
  llvm-svn: 224803
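  The underlying C++ pitfall, reduced to a toy range walker (the types are illustrative): taking the iterator by value restarts every query from scratch, while taking it by reference lets each query resume where the previous one stopped.

    #include <vector>

    struct Seg { int Begin, End; };

    // Buggy variant: 'I' is a copy, so the caller's iterator never moves:
    //   void advanceTo(std::vector<Seg>::iterator I, ...);
    void advanceTo(std::vector<Seg>::iterator &I,
                   std::vector<Seg>::iterator End, int Pos) {
      while (I != End && I->End <= Pos)
        ++I; // progress persists across calls through the reference
    }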
* Debug Info: In symmetry to DW_TAG_pointer_type, do not emit the byte size of a DW_TAG_ptr_to_member_type. (Adrian Prantl, 2014-12-24; 1 file, -1/+2)
  This restores the behavior from before r224780-r224781.
  llvm-svn: 224799
* Always assert in DAGCombine and not only when -debug is enabled (Mehdi Amini, 2014-12-23; 1 file, -5/+6)
  Right now, DAGCombine checks the validity of the returned type only when -debug is given on the command line. However, the test cases used in validation usually do not use -debug. An assert build should always check this.
  llvm-svn: 224779
* [DagCombine] Improve DAGCombiner BUILD_VECTOR when it has two sources of elements (Michael Kuperstein, 2014-12-23; 1 file, -12/+22)
  This partially fixes PR21943.
  For AVX, we go from:
    vmovq (%rsi), %xmm0
    vmovq (%rdi), %xmm1
    vpermilps $-27, %xmm1, %xmm2 ## xmm2 = xmm1[1,1,2,3]
    vinsertps $16, %xmm2, %xmm1, %xmm1 ## xmm1 = xmm1[0],xmm2[0],xmm1[2,3]
    vinsertps $32, %xmm0, %xmm1, %xmm1 ## xmm1 = xmm1[0,1],xmm0[0],xmm1[3]
    vpermilps $-27, %xmm0, %xmm0 ## xmm0 = xmm0[1,1,2,3]
    vinsertps $48, %xmm0, %xmm1, %xmm0 ## xmm0 = xmm1[0,1,2],xmm0[0]
  To the expected:
    vmovq (%rdi), %xmm0
    vmovhpd (%rsi), %xmm0, %xmm0
    retq
  Fixing this for AVX2 is still open.
  Differential Revision: http://reviews.llvm.org/D6749
  llvm-svn: 224759
* Make musttail more robust for vector types on x86 (Reid Kleckner, 2014-12-22; 1 file, -0/+56)
  Previously I tried to plug musttail into the existing vararg lowering code. That turned out to be a mistake, because non-vararg calls use significantly different register lowering, even on x86. For example, AVX vectors are usually passed in registers to normal functions and memory to vararg functions. Now musttail uses a completely separate lowering.
  Hopefully this can be used as the basis for non-x86 perfect forwarding.
  Reviewers: majnemer
  Differential Revision: http://reviews.llvm.org/D6156
  llvm-svn: 224745
* [CodeGenPrepare] Handle properly the promotion of operands when this does not generate instructions. (Quentin Colombet, 2014-12-22; 1 file, -3/+7)
  Fixes PR21978.
  Related to <rdar://problem/18310086>
  llvm-svn: 224717
* The leak detector is dead, long live asan and valgrind. (Rafael Espindola, 2014-12-22; 1 file, -5/+0)
  In recent times asan and valgrind have found way more memory management bugs in llvm than the special purpose leak detector.
  llvm-svn: 224703
* CodeGen: minor style tweaks to SSP (Saleem Abdulrasool, 2014-12-21; 1 file, -13/+15)
  Clean up some style related things in the StackProtector CodeGen. NFC.
  llvm-svn: 224693
* Enable (sext x) == C --> x == (trunc C) combine (Matt Arsenault, 2014-12-21; 1 file, -9/+26)
  Extend the existing code which handles this for zext. This makes this more useful for targets with ZeroOrNegativeOne BooleanContent, and obsoletes a custom combine SI uses for i1 setcc (sext(i1), 0, setne) since the constant will now be shrunk to i1.
  llvm-svn: 224691
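  The scalar equivalence behind the fold, checkable in plain C++ (i8 stands in for the narrow type; the wrap-around conversion mirrors a trunc):

    #include <cstdint>

    bool viaExt(int8_t X, int32_t C) { return int32_t(X) == C; }

    bool viaTrunc(int8_t X, int32_t C) {
      int8_t TC = int8_t(C); // trunc C
      // If C does not survive sext(trunc(C)), the original compare is
      // known false; otherwise compare in the narrow type.
      return int32_t(TC) == C && X == TC;
    }
    // viaExt and viaTrunc agree for all inputs, which is why the constant
    // can be shrunk (e.g. to i1 in the SI case mentioned above).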
* CodeGen: constify and use range loop for SSP (Saleem Abdulrasool, 2014-12-20; 1 file, -8/+4)
  Use range-based for loop and constify the iterators. NFC.
  llvm-svn: 224683
* LiveIntervalAnalysis: No kill flags for partially undefined uses. (Matthias Braun, 2014-12-20; 1 file, -24/+68)
  We must not add kill flags when reading a vreg with some undefined subregisters, if subreg liveness tracking is enabled. This is because the register allocator may reuse these undefined subregisters for other values which are not killed.
  llvm-svn: 224664
* LiveIntervalAnalysis: cleanup addKills(), NFC (Matthias Braun, 2014-12-20; 1 file, -19/+18)
  - Use more const modifiers
  - Use references for things that can't be nullptr
  - Improve some variable names
  llvm-svn: 224663
* EH: Sink computation of local PadMap variable into function that uses it (Reid Kleckner, 2014-12-19; 2 files, -17/+15)
  No functionality change.
  llvm-svn: 224635
* Add the ExceptionHandling::MSVC enumeration (Reid Kleckner, 2014-12-19; 3 files, -8/+10)
  It is intended to be used for a family of personality functions that have similar IR preparation requirements. Typically when interoperating with MSVC personality functions, bits of functionality need to be outlined from the main function into helper functions. There is also usually more than one landing pad per invoke, which does not match the LLVM IR landingpad representation.
  None of this is implemented yet. This change just adds a new enum that is active for *-windows-msvc and delegates to the EH removal preparation pass. No functionality change for other targets.
  llvm-svn: 224625
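  A sketch of how a consumer can key off the new enumerator; getExceptionHandlingType() is the existing MCAsmInfo query, while the helper name is made up for illustration:

    #include "llvm/MC/MCAsmInfo.h"
    using namespace llvm;

    static bool needsMSVCEHPreparation(const MCAsmInfo &MAI) {
      // Active for *-windows-msvc triples per this change.
      return MAI.getExceptionHandlingType() == ExceptionHandling::MSVC;
    }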
* merge consecutive stores of extracted vector elements (Sanjay Patel, 2014-12-19; 1 file, -4/+75)
  Add a path to DAGCombiner::MergeConsecutiveStores() to combine multiple scalar stores when the store operands are extracted vector elements. This is a partial fix for PR21711 ( http://llvm.org/bugs/show_bug.cgi?id=21711 ).
  For the new test case, codegen improves from:
    vmovss %xmm0, (%rdi)
    vextractps $1, %xmm0, 4(%rdi)
    vextractps $2, %xmm0, 8(%rdi)
    vextractps $3, %xmm0, 12(%rdi)
    vextractf128 $1, %ymm0, %xmm0
    vmovss %xmm0, 16(%rdi)
    vextractps $1, %xmm0, 20(%rdi)
    vextractps $2, %xmm0, 24(%rdi)
    vextractps $3, %xmm0, 28(%rdi)
    vzeroupper
    retq
  To:
    vmovups %ymm0, (%rdi)
    vzeroupper
    retq
  Patch reviewed by Nadav Rotem.
  Differential Revision: http://reviews.llvm.org/D6698
  llvm-svn: 224611
* RegisterCoalescer: rewrite eliminateUndefCopy(). (Matthias Braun, 2014-12-19; 1 file, -29/+64)
  This also fixes problems with undef copies of subregisters. I can't attach a testcase for that as none of the targets in trunk has subregister liveness tracking enabled.
  llvm-svn: 224560
* Explain why LLVM is emitting a DW_AT_containing_type inside of a class. (Adrian Prantl, 2014-12-19; 1 file, -0/+2)
  llvm-svn: 224555
* LiveIntervalAnalysis: Cleanup computeDeadValues (Matthias Braun, 2014-12-18; 1 file, -24/+33)
  - This also fixes a bug introduced in r223880 where values were not correctly marked as Dead anymore.
  - Cleanup computeDeadValues(): split up SubRange code variant, simplify arguments.
  llvm-svn: 224538
* Add a new string member to the TargetOptions struct for the name of the ABI we should be using (Eric Christopher, 2014-12-18; 1 file, -0/+7)
  For targets that don't use the option there's no change; otherwise this allows external users to set the ABI via string and avoid some of the -backend-option pain in clang.
  Use this option to move the ABI for the ARM port from the Subtarget to the TargetMachine and update the testcases accordingly, since it's no longer valid to set it via -mattr.
  llvm-svn: 224492
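  A sketch of the intended use from an external client; the member spelling ABIName is an assumption based on the description above, not a verified API, and the surrounding setup (TheTarget, TripleStr, etc.) is assumed to be in scope:

    TargetOptions Options;
    Options.ABIName = "aapcs"; // hypothetical: the new ABI string member
    std::unique_ptr<TargetMachine> TM(TheTarget->createTargetMachine(
        TripleStr, CPU, Features, Options, RelocModel, CMModel, OptLevel));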
* RegisterCoalescer: Fix stripCopies() picking up main range instead of subregister range (Matthias Braun, 2014-12-17; 1 file, -50/+78)
  This fixes a problem where stripCopies() would switch to values in the main liverange when it crossed a copy instruction. However when joining subranges we need to stay in the respective subregister ranges.
  llvm-svn: 224461
* ExecutionDepsFix: Correctly handle wide registers. (Matthias Braun, 2014-12-17; 1 file, -70/+71)
  The ExecutionDepsFix pass previously mapped each register to one or zero registers of the register class it was called with, and therefore simulating liveness for. This was problematic for cases involving wider registers like Q0 on ARM, where ExecutionDepsFix gets invoked for the Dxx registers. In these cases the wide register would get mapped to the last matching D register, while it should have been mapped to all matching D registers.
  This commit changes the AliasMap to use a SmallVector to map registers to potentially multiple destination regclass registers. This is required to avoid regressions with subregister liveness tracking enabled.
  llvm-svn: 224447
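  The shape of the data-structure change, simplified (the pass's real container and index types differ):

    #include "llvm/ADT/DenseMap.h"
    #include "llvm/ADT/SmallVector.h"

    // Before: one slot per register, so a wide register kept only the
    // last matching regclass register:
    //   llvm::DenseMap<unsigned, int> AliasMap;
    // After: a register can alias several registers of the tracked class
    // (e.g. ARM's Q0 covering D0 and D1):
    llvm::DenseMap<unsigned, llvm::SmallVector<int, 1>> AliasMap;

    void addAlias(unsigned Reg, int RCIndex) {
      AliasMap[Reg].push_back(RCIndex); // record every overlap, not just one
    }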
* [DAGCombine] Slightly improve lowering of BUILD_VECTOR into a shuffle. (Michael Kuperstein, 2014-12-17; 1 file, -11/+22)
  This handles the case of a BUILD_VECTOR being constructed out of elements extracted from a vector twice the size of the result vector. Previously this was always scalarized. Now, we try to construct a shuffle node that feeds on extract_subvectors.
  This fixes PR15872 and provides a partial fix for PR21711.
  Differential Revision: http://reviews.llvm.org/D6678
  llvm-svn: 224429
* [mips] Set GCC-compatible MIPS assembler options before inline asm blocks. (Toma Tabacu, 2014-12-17; 1 file, -0/+4)
  Summary: When generating MIPS assembly, LLVM always overrides the default assembler options by emitting the '.set noreorder', '.set nomacro' and '.set noat' directives, while GCC uses the default options if an assembly-level function contains inline assembly code.
  This becomes a problem when the code generated by LLVM is interleaved with inline assembly which assumes GCC-like assembler options (from Linux, for example).
  This patch fixes these conflicts by setting the appropriate assembler options at the beginning of an inline asm block and popping them at the end.
  Reviewers: dsanders
  Reviewed By: dsanders
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D6637
  llvm-svn: 224425
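  A small C-level illustration of the conflict being fixed; 'buf' and the asm body are made up, but 'la' is a real assembler macro that expands to multiple instructions and so assumes the default '.set macro' state that GCC leaves in effect:

    extern char buf[];

    unsigned long addr_of_buf(void) {
      unsigned long R;
      // Under LLVM's previous '.set nomacro' output, the assembler warns
      // when this macro expands to more than one instruction (lui+addiu).
      asm("la %0, buf" : "=r"(R));
      return R;
    }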
* RegisterCoalescer: Sprinkle some const modifiers. (Matthias Braun, 2014-12-17; 1 file, -11/+12)
  llvm-svn: 224409
* [CodeGenPrepare] Reapply r224351 with a fix for the assertion failure (Quentin Colombet, 2014-12-17; 2 files, -44/+261)
  The type promotion helper does not support vector types, so it does not kick in in such cases.
  Original commit message:
  [CodeGenPrepare] Move sign/zero extensions near loads using type promotion.
  This patch extends the optimization in CodeGenPrepare that moves a sign/zero extension near a load when the target can combine them. The optimization may promote any operations between the extension and the load to make that possible.
  Although this optimization may be beneficial for all targets, in particular AArch64, this is enabled for X86 only as I have not benchmarked it for other targets yet.

  ** Context **
  Most targets feature extended loads, i.e., loads that perform a zero or sign extension for free. In that context it is interesting to expose such pattern in CodeGenPrepare so that the instruction selection pass can form such loads. Sometimes, this pattern is blocked because of instructions between the load and the extension. When those instructions are promotable to the extended type, we can expose this pattern.

  ** Motivating Example **
  Let us consider an example:
  define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
    %ld = load i8* %addr1
    %zextld = zext i8 %ld to i32
    %ld2 = load i32* %addr2
    %add = add nsw i32 %ld2, %zextld
    %sextadd = sext i32 %add to i64
    %zexta = zext i8 %a to i32
    %addza = add nsw i32 %zexta, %zextld
    %sextaddza = sext i32 %addza to i64
    %addb = add nsw i32 %b, %zextld
    %sextaddb = sext i32 %addb to i64
    call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
    ret void
  }
  As it is, this IR generates the following assembly on x86_64:
    [...]
    movzbl (%rdi), %eax  # zero-extended load
    movl (%rsi), %esi    # plain load
    addl %eax, %esi      # 32-bit add
    movslq %esi, %rdi    # sign extend the result of add
    movzbl %dl, %edx     # zero extend the first argument
    addl %eax, %edx      # 32-bit add
    movslq %edx, %rsi    # sign extend the result of add
    addl %eax, %ecx      # 32-bit add
    movslq %ecx, %rdx    # sign extend the result of add
    [...]
  The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.
  Now, by promoting the additions to form more extended loads we would generate:
    [...]
    movzbl (%rdi), %eax  # zero-extended load
    movslq (%rsi), %rdi  # sign-extended load
    addq %rax, %rdi      # 64-bit add
    movzbl %dl, %esi     # zero extend the first argument
    addq %rax, %rsi      # 64-bit add
    movslq %ecx, %rdx    # sign extend the second argument
    addq %rax, %rdx      # 64-bit add
    [...]
  The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.
  This kind of sequences happen a lot on code using 32-bit indexes on 64-bit architectures.
  Note: The throughput numbers are similar on Sandy Bridge and Haswell.

  ** Proposed Solution **
  To avoid the penalty of all these sign/zero extensions, we merge them in the loads at the beginning of the chain of computation by promoting all the chain of computation on the extended type. The promotion is done if and only if we do not introduce new extensions, i.e., if we do not degrade the code quality. To achieve this, we extend the existing “move ext to load” optimization with the promotion mechanism introduced to match larger patterns for addressing mode (r200947).
  The idea of this extension is to perform the following transformation:
    ext(promotableInst1(...(promotableInstN(load)))) => promotedInst1(...(promotedInstN(ext(load))))
  The promotion mechanism in that optimization is enabled by a new TargetLowering switch, which is off by default. In other words, by default, the optimization performs the “move ext to load” optimization as it was before this patch.

  ** Performance **
  Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
  Tested Optimization Levels: O3/Os
  Tests: llvm-testsuite + externals.
  Results:
  - No regression beside noise.
  - Improvements: CINT2006/473.astar: ~2%, Benchmarks/PAQ8p: ~2%, Misc/perlin: ~3%
  The results are consistent for both O3 and Os.
  <rdar://problem/18310086>
  llvm-svn: 224402
* PR21875: codegen for non-type template parameters of nullptr_t type (David Blaikie, 2014-12-17; 1 file, -2/+5)
  llvm-svn: 224399
* Revert "[CodeGenPrepare] Move sign/zero extensions near loads using type ↵Reid Kleckner2014-12-172-255/+44
| | | | | | | | | promotion." This reverts commit r224351. It causes assertion failures when building ICU. llvm-svn: 224397
* SelectionDAG switch lowering: use 'unsigned' to count destination popularity (Hans Wennborg, 2014-12-16; 1 file, -2/+2)
  SwitchInst::getNumCases() returns unsigned, so using uint64_t to count cases seems unnecessary.
  Also fix a missing CHECK in the test case.
  llvm-svn: 224393
* merge consecutive loads that are offset from a base address (Sanjay Patel, 2014-12-16; 1 file, -5/+19)
  SelectionDAG::isConsecutiveLoad() was not detecting consecutive loads when the first load was offset from a base address. This patch recognizes that pattern and subtracts the offset before comparing the second load to see if it is consecutive.
  The codegen change in the new test case improves from:
    vmovsd 32(%rdi), %xmm0
    vmovsd 48(%rdi), %xmm1
    vmovhpd 56(%rdi), %xmm1, %xmm1
    vmovhpd 40(%rdi), %xmm0, %xmm0
    vinsertf128 $1, %xmm1, %ymm0, %ymm0
  To:
    vmovups 32(%rdi), %ymm0
  An existing test case is also improved from:
    vmovsd (%rdi), %xmm0
    vmovsd 16(%rdi), %xmm1
    vmovsd 24(%rdi), %xmm2
    vunpcklpd %xmm2, %xmm0, %xmm0 ## xmm0 = xmm0[0],xmm2[0]
    vmovhpd 8(%rdi), %xmm1, %xmm3
  To:
    vmovsd (%rdi), %xmm0
    vmovsd 16(%rdi), %xmm1
    vmovhpd 24(%rdi), %xmm0, %xmm0
    vmovhpd 8(%rdi), %xmm1, %xmm1
  This patch fixes PR21771 ( http://llvm.org/bugs/show_bug.cgi?id=21771 ).
  Differential Revision: http://reviews.llvm.org/D6642
  llvm-svn: 224379
* Move lowerConstant to AsmPrinter (Matt Arsenault, 2014-12-16; 1 file, -25/+20)
  This was a static function before, and NVPTX duplicated it because it wasn't exposed.
  llvm-svn: 224354