path: root/llvm/lib/CodeGen
* Move simple case earlier and use a continue. (Rafael Espindola, 2015-02-02; 1 file, -24/+25)
  llvm-svn: 227841
* Debug Info: Relax assertion in isUnsignedDIType() to allow floats to be described by integer constants. (Adrian Prantl, 2015-02-02; 1 file, -0/+1)
  This is a bit ugly, but if the source language allows arbitrary type casting, the debug info must follow suit. For example:
    void foo() { float a; *(int *)&a = 0; }
  For the curious: SROA replaces the float alloca with an i32 alloca, which is then optimized away and described via dbg.value(i32 0, ...).
  llvm-svn: 227827
* [X86] Convert esp-relative movs of function arguments to pushes, step 2 (Michael Kuperstein, 2015-02-01; 2 files, -8/+17)
  This moves the transformation introduced in r223757 into a separate MI pass. This allows it to cover many more cases (not only cases where there must be a reserved call frame), and perform rudimentary call folding. It still doesn't have a heuristic, so it is enabled only for optsize/minsize, with stack alignment <= 8, where it ought to be a fairly clear win.
  (Re-commit of r227728)
  Differential Revision: http://reviews.llvm.org/D6789
  llvm-svn: 227752
* Revert r227728 due to bad line endings. (Michael Kuperstein, 2015-02-01; 2 files, -17/+8)
  llvm-svn: 227746
* [multiversion] Switch the TTI queries from TargetMachine to Subtarget now that we have a correct and cached subtarget specific to the function. (Chandler Carruth, 2015-02-01; 2 files, -4/+5)
  Also, finish providing a cached per-function subtarget in the core LLVMTargetMachine -- that layer hadn't switched over yet. The only use of the TargetMachine was to re-lookup a subtarget for a particular function to work around the fact that TTI was immutable. Now that it is per-function and we have a cached subtarget, use it.
  This still leaves a few interfaces with real warts on them where we were passing Function objects through the TTI interface. I'll remove these and clean their usage up in subsequent commits now that this isn't necessary.
  llvm-svn: 227738
* [multiversion] Remove the cached TargetMachine pointer from the intermediate TTI implementation template and instead query up to the derived class for both the TargetMachine and the TargetLowering. (Chandler Carruth, 2015-02-01; 1 file, -1/+2)
  Most of the derived types had a TLI cached already and there is no need to store a less precisely typed target machine pointer.
  This will in turn make it much cleaner to look up the TLI via a per-function subtarget instead of the generic subtarget, and it will pave the way toward pulling the subtarget used for unroll preferences into the same form once we are *always* using the function to look up the correct subtarget.
  llvm-svn: 227737
* [multiversion] Switch all of the targets over to use the TargetIRAnalysis access path directly rather than implementing getTTI. (Chandler Carruth, 2015-02-01; 1 file, -2/+3)
  This even removes getTTI from the interface. It's more efficient for each target to just register a precise callback that creates their specific TTI.
  As part of this, all of the targets which are building their subtargets individually per-function now build their TTI instance with the function and thus look up the correct subtarget and cache it. NVPTX, R600, and XCore currently don't leverage this functionality, but it's trivial for them to add it now.
  llvm-svn: 227735
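  A minimal sketch of the per-function subtarget caching pattern these commits describe: the subtarget is keyed on the function's target-cpu/target-features attributes and built lazily. MyTargetMachine and MySubtarget are placeholder names for illustration only, not actual LLVM classes, and the details differ per target.

    #include "llvm/ADT/StringMap.h"
    #include "llvm/IR/Attributes.h"
    #include "llvm/IR/Function.h"
    #include <memory>
    #include <string>

    using namespace llvm;

    // Placeholder standing in for a real TargetSubtargetInfo subclass.
    class MySubtarget {
    public:
      MySubtarget(StringRef CPU, StringRef FS) { /* configure features here */ }
    };

    class MyTargetMachine {
      // One subtarget per distinct (cpu, features) pair seen on a function.
      mutable StringMap<std::unique_ptr<MySubtarget>> SubtargetMap;

    public:
      const MySubtarget &getSubtargetImpl(const Function &F) const {
        Attribute CPUAttr = F.getFnAttribute("target-cpu");
        Attribute FSAttr = F.getFnAttribute("target-features");
        std::string CPU =
            CPUAttr.isValid() ? CPUAttr.getValueAsString().str() : "generic";
        std::string FS = FSAttr.isValid() ? FSAttr.getValueAsString().str() : "";

        // Build the subtarget lazily the first time this attribute set is seen.
        std::unique_ptr<MySubtarget> &I = SubtargetMap[CPU + FS];
        if (!I)
          I = std::make_unique<MySubtarget>(CPU, FS);
        return *I;
      }
    };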
* [multiversion] Implement the old pass manager's TTI wrapper pass in terms of the new pass manager's TargetIRAnalysis. (Chandler Carruth, 2015-02-01; 1 file, -1/+1)
  Yep, this is one of the nicer bits of the new pass manager's design. Passes can in many cases operate in a vacuum and so we can just nest things when convenient. This is particularly convenient here as I can now consolidate all of the TargetMachine logic on this analysis.
  The most important change here is that this pushes the function we need TTI for all the way into the TargetMachine, and re-creates the TTI object for each function rather than re-using it for each function. We're now prepared to teach the targets to produce function-specific TTI objects with specific subtargets cached, etc.
  One piece of feedback I'd love here is whether it's worth renaming any of this stuff. None of the names really seem that awesome to me at this point, but TargetTransformInfoWrapperPass is particularly ... odd. TargetIRAnalysisWrapper might make more sense. I would want to do that rename separately anyways, but let me know what you think.
  llvm-svn: 227731
* [multiversion] Thread a function argument through all the callers of the getTTI method used to get an actual TTI object. (Chandler Carruth, 2015-02-01; 1 file, -1/+1)
  No functionality changed. This just threads the argument and ensures code like the inliner can correctly look up the callee's TTI rather than using a fixed one.
  The next change will use this to implement per-function subtarget usage by TTI. The changes after that should eliminate the need for FTTI as that will have become the default.
  llvm-svn: 227730
* [X86] Convert esp-relative movs of function arguments to pushes, step 2 (Michael Kuperstein, 2015-02-01; 2 files, -8/+17)
  This moves the transformation introduced in r223757 into a separate MI pass. This allows it to cover many more cases (not only cases where there must be a reserved call frame), and perform rudimentary call folding. It still doesn't have a heuristic, so it is enabled only for optsize/minsize, with stack alignment <= 8, where it ought to be a fairly clear win.
  Differential Revision: http://reviews.llvm.org/D6789
  llvm-svn: 227728
* [PM] Switch the TargetMachine interface from accepting a pass manager base which it adds a single analysis pass to, to instead return the type erased TargetTransformInfo object constructed for that TargetMachine. (Chandler Carruth, 2015-01-31; 2 files, -40/+9)
  This removes all of the pass variants for TTI. There is now a single TTI *pass* in the Analysis layer. All of the Analysis <-> Target communication is through the TTI's type erased interface itself.
  While the diff is large here, it is nothing more than code motion to make types available in a header file for use in a different source file within each target. I've tried to keep all the doxygen comments and file boilerplate in line with this move, but let me know if I missed anything.
  With this in place, the next step to making TTI work with the new pass manager is to introduce a really simple new-style analysis that produces a TTI object via a callback into this routine on the target machine. Once we have that, we'll have the building blocks necessary to accept a function argument as well.
  llvm-svn: 227685
* [PM] Change the core design of the TTI analysis to use a polymorphic type erased interface and a single analysis pass rather than an extremely complex analysis group. (Chandler Carruth, 2015-01-31; 3 files, -607/+23)
  The end result is that the TTI analysis can contain a type erased implementation that supports the polymorphic TTI interface. We can build one from a target-specific implementation or from a dummy one in the IR.
  I've also factored all of the code into "mix-in"-able base classes, including CRTP base classes to facilitate calling back up to the most specialized form when delegating horizontally across the surface. These aren't as clean as I would like and I'm planning to work on cleaning some of this up, but I wanted to start by putting it into the right form.
  There are a number of reasons for this change, and this particular design. The first and foremost reason is that an analysis group is complete overkill, and the chaining delegation strategy was so opaque, confusing, and high overhead that TTI was suffering greatly for it. Several of the TTI functions had failed to be implemented in all places because of the chaining-based delegation making there be no checking of this. A few other functions were implemented with incorrect delegation. The message to me was very clear working on this -- the delegation and analysis group structure was too confusing to be useful here.
  The other reason of course is that this is a *much* more natural fit for the new pass manager. This will lay the ground work for a type-erased per-function info object that can look up the correct subtarget and even cache it. Yet another benefit is that this will significantly simplify the interaction of the pass managers and the TargetMachine. See the future work below.
  The downside of this change is that it is very, very verbose. I'm going to work to improve that, but it is somewhat an implementation necessity in C++ to do type erasure. =/ I discussed this design really extensively with Eric and Hal prior to going down this path, and afterward showed them the result. No one was really thrilled with it, but there doesn't seem to be a substantially better alternative. Using a base class and virtual method dispatch would make the code much shorter, but as discussed in the update to the programmer's manual and elsewhere, a polymorphic interface feels like the more principled approach even if this is perhaps the least compelling example of it. ;]
  Ultimately, there is still a lot more to be done here, but this was the huge chunk that I couldn't really split things out of because this was the interface change to TTI. I've tried to minimize all the other parts of this. The follow up work should include at least:
  1) Improving the TargetMachine interface by having it directly return a TTI object. Because we have a non-pass object with value semantics and an internal type erasure mechanism, we can narrow the interface of the TargetMachine to *just* do what we need: build and return a TTI object that we can then insert into the pass pipeline.
  2) Make the TTI object be fully specialized for a particular function. This will include splitting off a minimal form of it which is sufficient for the inliner and the old pass manager.
  3) Add a new pass manager analysis which produces TTI objects from the target machine for each function. This may actually be done as part of #2 in order to use the new analysis to implement #2.
  4) Work on narrowing the API between TTI and the targets so that it is easier to understand and less verbose to type erase.
  5) Work on narrowing the API between TTI and its clients so that it is easier to understand and less verbose to forward.
  6) Try to improve the CRTP-based delegation. I feel like this code is just a bit messy and exacerbating the complexity of implementing the TTI in each target.
  Many thanks to Eric and Hal for their help here. I ended up blocked on this somewhat more abruptly than I expected, and so I appreciate getting it sorted out very quickly.
  Differential Revision: http://reviews.llvm.org/D7293
  llvm-svn: 227669
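  For readers unfamiliar with the pattern, a generic sketch of the "concept/model" type erasure described above follows. This illustrates the technique only; the names TTIExample, Concept, Model, and estimateCost are invented for the example and are not the actual TargetTransformInfo interfaces.

    #include <memory>
    #include <utility>

    // Value-semantic wrapper hiding the concrete implementation behind an
    // internal abstract interface, so unrelated implementation classes can
    // be used interchangeably without sharing a base class.
    class TTIExample {
      struct Concept {
        virtual ~Concept() = default;
        virtual unsigned estimateCost(unsigned Opcode) const = 0;
      };

      template <typename T> struct Model final : Concept {
        T Impl;
        explicit Model(T I) : Impl(std::move(I)) {}
        unsigned estimateCost(unsigned Opcode) const override {
          return Impl.estimateCost(Opcode);
        }
      };

      std::unique_ptr<Concept> Impl;

    public:
      // Wrap any target-specific (or dummy IR-based) implementation that
      // provides an estimateCost member function.
      template <typename T>
      TTIExample(T ConcreteImpl)
          : Impl(std::make_unique<Model<T>>(std::move(ConcreteImpl))) {}

      unsigned estimateCost(unsigned Opcode) const {
        return Impl->estimateCost(Opcode);
      }
    };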
* Fix memory leak in WinEHPrepare introduced in r227405. (Alexey Samsonov, 2015-01-30; 1 file, -1/+3)
  This leak was detected by ASan bootstrap of LLVM.
  llvm-svn: 227625
* Update comments to use unreachable instead of llvm.trap, as implemented now (Reid Kleckner, 2015-01-29; 1 file, -1/+1)
  llvm-svn: 227502
* Compute the ELF SectionKind from the flags. (Rafael Espindola, 2015-01-29; 2 files, -28/+17)
  Any code creating an MCSectionELF knows ELF and already provides the flags.
  SectionKind is an abstraction used by common code that uses a plain MCSection. Use the flags to compute the SectionKind.
  This removes a lot of guessing and boilerplate from the MCSectionELF construction.
  llvm-svn: 227476
* Use isMergeableConst now that it is sane. (Rafael Espindola, 2015-01-29; 1 file, -3/+1)
  llvm-svn: 227441
* Remove MergeableConst. (Rafael Espindola, 2015-01-29; 1 file, -1/+1)
  Only the specific ones (MergeableConst4, MergeableConst8, MergeableConst16) are handled specially.
  llvm-svn: 227440
* EHPrepare: Remove leftover initialization code for DomTrees. (Benjamin Kramer, 2015-01-29; 2 files, -27/+10)
  While there modernize some loops. NFC.
  llvm-svn: 227436
* Use enum values. NFC. (Rafael Espindola, 2015-01-29; 1 file, -3/+3)
  llvm-svn: 227435
* Don't create multiple mergeable sections with -fdata-sections. (Rafael Espindola, 2015-01-29; 1 file, -9/+10)
  ELF has support for sections that can be split into fixed size or null terminated entities. Since these sections can be split by the linker, it is not necessary to split them in codegen.
  This reduces the combined .o size in a llvm+clang build from 202,394,570 to 173,819,098 bytes. The time for linking clang with gold (on a VM, on a laptop) goes from 2.250089985 to 1.383001792 seconds.
  The flip side is the size of rodata in clang goes from 10,926,785 to 10,929,345 bytes. The increase seems to be because of http://sourceware.org/bugzilla/show_bug.cgi?id=17902.
  llvm-svn: 227431
* Remove an unused private field added in r227405 to fix a Clang warning. (Chandler Carruth, 2015-01-29; 1 file, -2/+1)
  llvm-svn: 227415
* Remove unused variable (Reid Kleckner, 2015-01-29; 1 file, -2/+0)
  llvm-svn: 227408
* Add a Windows EH preparation pass that zaps resumes (Reid Kleckner, 2015-01-29; 3 files, -1/+110)
  If the personality is not a recognized MSVC personality function, this pass delegates to the dwarf EH preparation pass. This chaining supports people on *-windows-itanium or *-windows-gnu targets.
  Currently this recognizes some personalities used by MSVC and turns resume instructions into traps to avoid link errors. Even if cleanups are not used in the source program, LLVM requires the frontend to emit a code path that resumes unwinding after an exception. Clang does this, and we get unreachable resume instructions. PR20300 covers cleaning up these unreachable calls to resume.
  Reviewers: majnemer
  Differential Revision: http://reviews.llvm.org/D7216
  llvm-svn: 227405
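  A simplified sketch of the resume-to-trap rewrite described above, written against the LLVM C++ API; the helper name zapResumes is invented here, and this is not the actual WinEHPrepare implementation.

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/Intrinsics.h"
    #include "llvm/IR/Module.h"

    using namespace llvm;

    // Replace every 'resume' terminator in F with a call to @llvm.trap
    // followed by 'unreachable', as the commit message describes for
    // unrecognized cleanup paths.
    static bool zapResumes(Function &F) {
      SmallVector<ResumeInst *, 4> Resumes;
      for (BasicBlock &BB : F)
        if (auto *RI = dyn_cast_or_null<ResumeInst>(BB.getTerminator()))
          Resumes.push_back(RI);

      for (ResumeInst *RI : Resumes) {
        IRBuilder<> Builder(RI);
        Builder.CreateCall(
            Intrinsic::getDeclaration(F.getParent(), Intrinsic::trap));
        Builder.CreateUnreachable();
        RI->eraseFromParent();
      }
      return !Resumes.empty();
    }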
* Add nullptr checks for TargetSelectionDAGInfo in SelectionDAG. (Manuel Jacob, 2015-01-28; 1 file, -13/+19)
  TSI is not guaranteed to be non-null in SelectionDAG.
  llvm-svn: 227397
* Remove gc.root's performCustomLowering (Philip Reames, 2015-01-28; 5 files, -422/+463)
  This is a refactoring to restructure the single user of performCustomLowering as a specific lowering pass and remove the custom lowering hook entirely.
  Before this change, the LowerIntrinsics pass (note to self: rename!) was essentially acting as a pass manager, but without being structured in terms of passes. Instead, it proxied calls to a set of GCStrategies internally. This adds a lot of conceptual complexity (i.e. GCStrategies are stateful!) for very little benefit.
  Since there's been interest in keeping the ShadowStackGC working, I extracted its custom lowering pass into a dedicated pass and just added that to the pass order. It will only run for functions which opt-in to that gc.
  I wasn't able to find an easy way to preserve the runtime registration of custom lowering functionality. Given that no user of this exists that I'm aware of, I made the choice to just remove that. If someone really cares, we can look at restoring it via dynamic pass registration in the future.
  Note that despite the large diff, none of the lowering code actually changes. I added the framing needed to make it a pass and renamed the class, but that's it.
  Differential Revision: http://reviews.llvm.org/D7218
  llvm-svn: 227351
* Simplify code. NFC. (Rafael Espindola, 2015-01-28; 1 file, -3/+1)
  llvm-svn: 227333
* Correct the AggressiveAntiDepBreaker's handling of subregisters defining super registers (Hal Finkel, 2015-01-28; 1 file, -0/+10)
  As the AggressiveAntiDepBreaker iterates backward through a scheduling region, we must leave super registers live through subregister definitions so that all relevant subregister definitions are renamed together. The problem was that we were also discarding sub-register use locations as the sub-registers are redefined. The result is that we'd rename the super register along with some, but not all, subregister definitions.

    R0_D = {R0_L, R1_L}
    R0_L = {R0_S, R1_S}

    %R0_L<def> = TRLi9 16, pred:8, pred:%noreg
    %R1_L<def> = LSRLrr %R1_L<kill>, %R0_S, pred:8, pred:%noreg
    %R0_L<def> = LSRLrr %R2_L, %R0_S, pred:8, pred:%noreg, %R0_L<imp-use,kill>
    %R1_L<def> = ANDLri %R1_L<kill>, 2047, pred:8, pred:%noreg
    %R0_L<def> = ANDLri %R0_L<kill>, 2047, pred:8, pred:%noreg
    %R4_D<def> = ASRDrr %R0_D<kill>, %R6_S

    Anti: %R4_D<def> = ASRDrr %R0_D<kill>, %R6_S
      Def Groups: R4_D=g213->g215(via R4_S)->g214(via R4_L)->g216(via R5_S)->g216(via R4_L)->g217(via R5_L)
      Use Groups: R0_D=g0->g218(last-use) R1_L->g219(last-use) R6_S=g204->g220(last-use)
    Anti: %R0_L<def> = ANDLri %R0_L<kill>, 2047, pred:8, pred:%noreg
      Def Groups: R0_L=g208->g209(via R0_S)->g218(via R0_D)->g210(via R1_S)->g210(via R0_D)
      Antidep reg: R0_L (real dependency)
      Use Groups: R0_L=g210->g224(last-use) R0_S->g225(last-use) R1_S->g226(last-use)
    Anti: %R1_L<def> = ANDLri %R1_L<kill>, 2047, pred:8, pred:%noreg
      Def Groups: R1_L=g219->g210(via R0_D)
      Antidep reg: R1_L (real dependency)
      Use Groups: R1_L=g210->g229(last-use)
    Anti: %R0_L<def> = LSRLrr %R2_L, %R0_S, pred:8, pred:%noreg, %R0_L<imp-use,kill>
      Def Groups: R0_L=g224->g225(via R0_S)->g210(via R0_D)->g226(via R1_S)->g226(via R0_D)
      Antidep reg: R0_L
      Use Groups: R2_L=g192 R0_S=g226->g230(last-use) R0_L=g226->g231(last-use) R1_S->g232(last-use)
    Anti: %R1_L<def> = LSRLrr %R1_L<kill>, %R0_S, pred:8, pred:%noreg
      Def Groups: R1_L=g229->g226(via R0_D)
      Antidep reg: R1_L
      Use Groups: R1_L=g226->g233(last-use) R0_S=g230
    Anti: %R0_L<def> = TRLi9 16, pred:8, pred:%noreg
      Def Groups: R0_L=g231->g230(via R0_S)->g226(via R0_D)->g232(via R1_S)->g232(via R0_D)
      Antidep reg: R0_L
      Rename Candidates for Group g232:
        R0_D: elcInt64Regs :: R0_D R1_D R2_D R3_D R4_D R5_D R8_D R9_D R10_D R11_D R12_D R13_D R14_D R15_D R16_D R17_D R18_D R19_D R20_D R21_D R22_D R23_D R24_D R25_D
        R0_L: elcIntRegs :: R0_L R1_L R2_L R3_L R4_L R5_L R8_L R9_L R10_L R11_L R12_L R13_L R14_L R15_L R16_L R17_L R18_L R19_L R20_L R21_L R22_L R23_L R24_L R25_L
        R0_S: elcShrtRegs elcShrtRegs :: R0_S R1_S R2_S R3_S R4_S R5_S R8_S R9_S R10_S R11_S R12_S R13_S R14_S R15_S R16_S R17_S R18_S R19_S R20_S R21_S R22_S R23_S R24_S R25_S
      Find Registers: [R12_D: R12_D R12_L R12_S]
      Breaking anti-dependence edge on R0_L: R0_D->R12_D(1 refs) R0_L->R12_L(2 refs) R0_S->R12_S(2 refs)
      Use Groups: ...

    %R12_L<def> = TRLi9 16, pred:8, pred:%noreg
    %R1_L<def> = LSRLrr %R1_L<kill>, %R12_S, pred:8, pred:%noreg
    %R0_L<def> = LSRLrr %R2_L<kill>, %R12_S, pred:8, pred:%noreg, %R12_L<imp-use>
    %R1_L<def> = ANDLri %R1_L<kill>, 2047, pred:8, pred:%noreg
    %R0_L<def> = ANDLri %R0_L<kill>, 2047, pred:8, pred:%noreg
    %R4_D<def> = ASRDrr %R12_D<kill>, %R6_S

  With this change, we now produce:

    Anti: %R4_D<def> = ASRDrr %R0_D<kill>, %R6_S
      Def Groups: R4_D=g213->g215(via R4_S)->g214(via R4_L)->g216(via R5_S)->g216(via R4_L)->g217(via R5_L)
      Use Groups: R0_D=g0->g218(last-use) R1_L->g219(last-use) R6_S=g204->g220(last-use)
    Anti: %R0_L<def> = ANDLri %R0_L<kill>, 2047, pred:8, pred:%noreg
      Def Groups: R0_L=g208->g209(via R0_S)->g218(via R0_D)->g210(via R1_S)->g210(via R0_D)
      Antidep reg: R0_L (real dependency)
      Use Groups: R0_L=g210
    Anti: %R1_L<def> = ANDLri %R1_L<kill>, 2047, pred:8, pred:%noreg
      Def Groups: R1_L=g219->g210(via R0_D)
      Antidep reg: R1_L (real dependency)
      Use Groups: R1_L=g210
    Anti: %R0_L<def> = LSRLrr %R2_L, %R0_S, pred:8, pred:%noreg, %R0_L<imp-use,kill>
      Def Groups: R0_L=g210->g210(via R0_D)->g210(via R0_D)
      Antidep reg: R0_L
      Use Groups: R2_L=g192 R0_S=g210 R0_L=g210
    Anti: %R1_L<def> = LSRLrr %R1_L<kill>, %R0_S, pred:8, pred:%noreg
      Def Groups: R1_L=g210->g210(via R0_D)
      Antidep reg: R1_L
      Use Groups: R1_L=g210 R0_S=g210
    Anti: %R0_L<def> = TRLi9 16, pred:8, pred:%noreg
      Def Groups: R0_L=g210->g210(via R0_D)->g210(via R0_D)
      Antidep reg: R0_L
      Rename Candidates for Group g210:
        R0_D: elcInt64Regs :: R0_D R1_D R2_D R3_D R4_D R5_D R8_D R9_D R10_D R11_D R12_D R13_D R14_D R15_D R16_D R17_D R18_D R19_D R20_D R21_D R22_D R23_D R24_D R25_D
        R0_L: elcIntRegs elcIntAIRegs elcIntRegs elcIntRegs elcIntRegs elcIntRegs :: R0_L R1_L R2_L R3_L R4_L R5_L R8_L R9_L R10_L R11_L R12_L R13_L R14_L R15_L R16_L R17_L R18_L R19_L R20_L R21_L R22_L R23_L R24_L R25_L
        R1_L: elcIntRegs elcIntRegs elcIntRegs elcIntRegs elcIntRegs :: R0_L R1_L R2_L R3_L R4_L R5_L R8_L R9_L R10_L R11_L R12_L R13_L R14_L R15_L R16_L R17_L R18_L R19_L R20_L R21_L R22_L R23_L R24_L R25_L
        R0_S: elcShrtRegs elcShrtRegs :: R0_S R1_S R2_S R3_S R4_S R5_S R8_S R9_S R10_S R11_S R12_S R13_S R14_S R15_S R16_S R17_S R18_S R19_S R20_S R21_S R22_S R23_S R24_S R25_S
      Find Registers: [R12_D: R12_D R12_L R13_L R12_S]
      Breaking anti-dependence edge on R0_L: R0_D->R12_D(1 refs) R0_L->R12_L(7 refs) R1_L->R13_L(5 refs) R0_S->R12_S(2 refs)
      Use Groups: ...

    %R12_L<def> = TRLi9 16, pred:8, pred:%noreg
    %R13_L<def> = LSRLrr %R13_L<kill>, %R12_S, pred:8, pred:%noreg
    %R12_L<def> = LSRLrr %R2_L<kill>, %R12_S<kill>, pred:8, pred:%noreg, %R12_L<imp-use,kill>
    %R13_L<def> = ANDLri %R13_L<kill>, 2047, pred:8, pred:%noreg
    %R12_L<def> = ANDLri %R12_L<kill>, 2047, pred:8, pred:%noreg
    %R4_D<def> = ASRDrr %R12_D, %R6_S, %R12_L<imp-def>, %R12_S<imp-def>, %R13_S<imp-def>

  As demonstrated by this example, this is also somewhat unfortunate, because there is actually no need to rename the super register in this case (it is fully covered by later subregister definitions), but we don't seem to track enough information here to exploit that either.
  Thanks to Daniil Troshkov for reporting the issue. The debug outputs in this commit message are from Daniil.
  llvm-svn: 227311
* [LPM] Stop using the string based preservation API. It is an abomination. (Chandler Carruth, 2015-01-28; 1 file, -11/+17)
  For starters, this API is incredibly slow. In order to look up the name of a pass it must take a memory fence to acquire a pointer to the managed static pass registry, and then potentially acquire locks while it consults this registry for information about what passes exist by that name. This stops the world of LLVMs in your process no matter how little they cared about the result.
  To make this more joyful, you'll note that we are preserving many passes which *do not exist* any more, or are not even analyses which one might wish to have be preserved. This means we do all the work only to say "nope" with no error to the user. String-based APIs are a *bad idea*. String-based APIs that cannot produce any meaningful error are an even worse idea. =/
  I have a patch that simply removes this API completely, but I'm hesitant to commit it as I don't really want to perniciously break out-of-tree users of the old pass manager. I'd rather they just have to migrate to the new one at some point. If others disagree and would like me to kill it with fire, just say the word. =]
  llvm-svn: 227294
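  For context, the string-based API being criticized is the AnalysisUsage::addPreserved(StringRef) overload; the typed template overload avoids the registry lookup entirely. A small sketch of the difference follows (the pass MyPass and the choice of DominatorTreeWrapperPass are illustrative, not taken from the commit):

    #include "llvm/IR/Dominators.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Pass.h"

    using namespace llvm;

    namespace {
    struct MyPass : public FunctionPass {
      static char ID;
      MyPass() : FunctionPass(ID) {}

      void getAnalysisUsage(AnalysisUsage &AU) const override {
        // String-based form: resolves "domtree" through the global pass
        // registry at runtime and silently does nothing if no pass is
        // registered under that name.
        AU.addPreserved("domtree");
        // Typed form: no registry lookup; a typo is a compile-time error.
        AU.addPreserved<DominatorTreeWrapperPass>();
        AU.setPreservesCFG();
      }

      bool runOnFunction(Function &F) override { return false; }
    };
    } // end anonymous namespace

    char MyPass::ID = 0;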
* Add description to assert (David Blaikie, 2015-01-28; 1 file, -1/+1)
  llvm-svn: 227291
* PR22356: DebugInfo: Handle the size of a member where the type of that member is a typedef (or other sugar) of a declaration. (David Blaikie, 2015-01-28; 1 file, -5/+2)
  llvm-svn: 227290
* Revert r227242 - Merge vector stores into wider vector stores (PR21711). (Quentin Colombet, 2015-01-27; 1 file, -54/+30)
  This commit creates an infinite loop in DAG combine in the LLVM test-suite for aarch64 with mcpu=cyclone (just having neon may be enough to expose this).
  llvm-svn: 227272
* remove function names from comments; NFC (Sanjay Patel, 2015-01-27; 1 file, -8/+6)
  llvm-svn: 227256
* fix typos; NFC (Sanjay Patel, 2015-01-27; 1 file, -4/+4)
  llvm-svn: 227253
* Merge vector stores into wider vector stores (PR21711) (Sanjay Patel, 2015-01-27; 1 file, -30/+54)
  This patch resolves part of PR21711 ( http://llvm.org/bugs/show_bug.cgi?id=21711 ). The 'f3' test case in that report presents a situation where we have two 128-bit stores extracted from a 256-bit source vector. Instead of producing this:
    vmovaps %xmm0, (%rdi)
    vextractf128 $1, %ymm0, 16(%rdi)
  This patch merges the 128-bit stores into a single 256-bit store:
    vmovups %ymm0, (%rdi)
  Differential Revision: http://reviews.llvm.org/D7208
  llvm-svn: 227242
* Add a FIXME in SelectionDAGBuilder before an assert that is valid only on X86. (Manuel Jacob, 2015-01-27; 1 file, -0/+3)
  When lowering memcpy, memset or memmove, this assert checks whether the pointer operands are in an address space < 256 which means "user defined address space" on X86. However, this notion of "user defined address space" does not exist for other targets.
  llvm-svn: 227191
* Replace some uses of getSubtargetImpl with the cached version off of the MachineFunction or with the version that takes a Function reference as an argument. (Eric Christopher, 2015-01-27; 5 files, -9/+8)
  llvm-svn: 227185
* Have the PBQP register allocator use the subtarget on the MachineFunction. (Eric Christopher, 2015-01-27; 1 file, -8/+5)
  (and remove an extraneous private)
  llvm-svn: 227181
* Fix build failure with pointer vs reference. (Eric Christopher, 2015-01-27; 1 file, -1/+1)
  NB: Saving files after editing helps.
  llvm-svn: 227178
* Update a few calls to getSubtarget<> to either be getSubtargetImpl when we didn't need the cast to the base class or the cached version off of the subtarget. (Eric Christopher, 2015-01-27; 5 files, -18/+16)
  llvm-svn: 227176
* The subtarget is cached on the MachineFunction. Access it directly. (Eric Christopher, 2015-01-27; 6 files, -26/+16)
  llvm-svn: 227173
* MachineRegisterInfo can access TII off of the MachineFunction's subtarget and so doesn't need the TargetMachine or to access via getSubtargetImpl. (Eric Christopher, 2015-01-27; 3 files, -4/+4)
  Update all callers.
  llvm-svn: 227160
* Migrate AtomicExpandPass and DwarfEHPrepare to using a Function-ized getSubtargetImpl. (Eric Christopher, 2015-01-27; 2 files, -3/+3)
  llvm-svn: 227159
* Migrate CodeGenPrepare to use the Function based getSubtarget code. (Eric Christopher, 2015-01-27; 1 file, -3/+5)
  llvm-svn: 227157
* Grab the TargetLowering info from the DAG rather than querying for a subtarget. (Eric Christopher, 2015-01-27; 1 file, -3/+2)
  llvm-svn: 227156
* Cache the lookup of TargetLowering in the atomic expand pass. (Eric Christopher, 2015-01-26; 1 file, -31/+17)
  llvm-svn: 227121
* [SelectionDAG] Fix assert message copypasta. NFC. (Ahmed Bougacha, 2015-01-26; 1 file, -2/+2)
  llvm-svn: 227119
* Move DataLayout back to the TargetMachine from TargetSubtargetInfo derived classes. (Eric Christopher, 2015-01-26; 13 files, -60/+54)
  Since global data alignment, layout, and mangling is often based on the DataLayout, move it to the TargetMachine. This ensures that global data is going to be laid out and mangled consistently if the subtarget changes on a per function basis.
  Prior to this all targets(*) have had subtarget dependent code moved out and onto the TargetMachine.
  *One target hasn't been migrated as part of this change: R600. The R600 port has, as a subtarget feature, the size of pointers and this affects global data layout. I've currently hacked in a FIXME to enable progress, but the port needs to be updated to either pass the 64-bitness to the TargetMachine, or fix the DataLayout to avoid subtarget dependent features.
  llvm-svn: 227113
* Revert GCStrategy ownership changes (Philip Reames, 2015-01-26; 11 files, -19/+54)
  This change reverts the interesting parts of 226311 (and 227046). This change introduced two problems, and I've been convinced that an alternate approach is preferable anyways.
  The bugs were:
  - Registry appears to require all users be within the same linkage unit. After this change, asking for "statepoint-example" in Transform/ would sometimes get you nullptr, whereas asking the same question in CodeGen would return the right GCStrategy. The correct long term fix is to get rid of the utter hack which is Registry, but I don't have time for that right now. 227046 appears to have been an attempt to fix this, but I don't believe it does so completely.
  - GCMetadataPrinter::finishAssembly was being called more than once per GCStrategy. Each Strategy was being added to the GCModuleInfo multiple times.
  Once I get time again, I'm going to split GCModuleInfo into the gc.root specific part and a GCStrategy owning Analysis pass. I'm probably also going to kill off the Registry. Once that's done, I'll move the new GCStrategyAnalysis and all built in GCStrategies into Analysis. (As originally suggested by Chandler.) This will accomplish my original goal of being able to access GCStrategy from Transform/ without adding all of the builtin GCs to IR/.
  llvm-svn: 227109
* Debug info: Fix PR22296 by omitting the DW_AT_location if we lost the physical register that is described in a DBG_VALUE. (Adrian Prantl, 2015-01-25; 2 files, -4/+10)
  In the testcase the DBG_VALUE describing "p5" becomes unavailable because the register its address is in is clobbered and we (currently) aren't smart enough to realize that the value is rematerialized immediately after the DBG_VALUE and/or is actually a stack slot.
  llvm-svn: 227056
* Implemented cost model for masked load/store operations. (Elena Demikhovsky, 2015-01-25; 1 file, -0/+4)
  llvm-svn: 227035