path: root/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
...
* Fix load alignment when unpacking aggregate structs
  Amaury Sechet, 2016-02-17 (1 file, -12/+26)

  Summary: Stores and loads unpacked by instcombine do not always have the
  right alignment. This explicitly computes the alignment and sets it.

  Reviewers: dblaikie, majnemer, reames, hfinkel, joker.eph
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D17326
  llvm-svn: 261139
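  [Illustration, not from the commit itself; types and names are invented.
  The hazard is a packed struct whose field is less aligned than its type
  suggests:]

    %T = type <{ i8, i32 }>   ; packed: the i32 field sits at offset 1

    define i32 @f(%T* %p) {
      %agg = load %T, %T* %p, align 1
      %v = extractvalue %T %agg, 1
      ret i32 %v
    }
    ; when instcombine unpacks the aggregate load, the scalar field load
    ; must be emitted as "load i32, i32* %gep, align 1", not align 4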
* Re-apply r238452; the bug was in clang and was fixed in r260567.
  Quentin Colombet, 2016-02-11 (1 file, -5/+10)

  Original commit message:
  [InstCombine] Fold IntToPtr and PtrToInt into preceding loads.

  Currently we only fold a BitCast into a Load when the BitCast is its
  only user. Do the same for any no-op cast.

  Patch by Philip Pfaffe!

  Differential Revision: http://reviews.llvm.org/D9152
  llvm-svn: 260612
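  [Illustration of the fold, assuming 64-bit pointers; names are invented:]

    define i8* @f(i64* %p) {
      %i = load i64, i64* %p
      %q = inttoptr i64 %i to i8*
      ret i8* %q
    }
    ; the no-op inttoptr can now fold into the load:
    ;   %c = bitcast i64* %p to i8**
    ;   %q = load i8*, i8** %c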
* Set load alignment on aggregate loads.
  Pete Cooper, 2016-02-11 (1 file, -1/+2)

  When optimizing an extractvalue(load), we generate a load from the
  aggregate type. This load didn't have its alignment set and so would get
  the alignment of the type. This breaks when the type is packed, in which
  case the alignment should be lower. For example, loading { int, int }
  would give us alignment of 4, but the original load from this type may
  have an alignment of 1 if packed.

  Reviewed by David Majnemer
  Differential Revision: http://reviews.llvm.org/D17158
  llvm-svn: 260587
* [InstCombine] Revert r238452: Fold IntToPtr and PtrToInt into preceding loads.
  Quentin Colombet, 2016-02-03 (1 file, -10/+5)

  According to git bisect, this is the root cause of a miscompile for
  Regex in libLLVMSupport. I am still working on reducing a test case.
  The actual bug may be elsewhere, and this commit just exposed it.

  Anyway, at the moment, to reproduce, follow these steps:
  1. Build clang and libLTO in release mode.
  2. Create a new build directory <stage2> and cd into it.
  3. Use clang and libLTO from #1 to build llvm-extract in Release mode +
     asserts using -O2 -flto.
  4. Run llvm-extract -ralias '.*bar' -S test/Other/extract-alias.ll

  Result:
    program doesn't contain global named '.*bar'!

  Expected result:
    @a0a0bar = alias void ()* @bar
    @a0bar = alias void ()* @bar
    declare void @bar()

  Note: in step #3, if you don't use lto or asserts, the miscompile
  disappears.

  llvm-svn: 259674
* function names start with a lowercase letter; NFC
  Sanjay Patel, 2016-02-01 (1 file, -23/+23)

  llvm-svn: 259425
* [opaque pointer types] [NFC] FindAvailableLoadedValue: take LoadInst instead of just the pointer.
  Eduard Burtescu, 2016-01-22 (1 file, -1/+1)

  Reviewers: mjacob, dblaikie
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D16422
  llvm-svn: 258477
* [opaque pointer types] [NFC] GEP: replace get(Pointer)ElementType uses with get{Source,Result}ElementType.
  Eduard Burtescu, 2016-01-19 (1 file, -4/+2)

  Summary:
  GEPOperator: provide getResultElementType alongside getSourceElementType.
  This is made possible by adding a result element type field to
  GetElementPtrConstantExpr, which GetElementPtrInst already has.

  GEP: replace get(Pointer)ElementType uses with
  get{Source,Result}ElementType.

  Reviewers: mjacob, dblaikie
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D16275
  llvm-svn: 258145
* GlobalValue: use getValueType() instead of getType()->getPointerElementType().
  Manuel Jacob, 2016-01-16 (1 file, -1/+1)

  Reviewers: mjacob
  Subscribers: jholewinski, arsenm, dsanders, dblaikie
  Patch by Eduard Burtescu.
  Differential Revision: http://reviews.llvm.org/D16260
  llvm-svn: 257999
* Change isSafeToLoadUnconditionally arguments order. Separated from http://reviews.llvm.org/D10920.
  Artur Pilipenko, 2016-01-15 (1 file, -2/+2)

  llvm-svn: 257894
* [OperandBundles] Have InstCombine play nice with operand bundles
  David Majnemer, 2015-12-23 (1 file, -4/+6)

  Don't assume a call's use corresponds to an argument operand; it might
  correspond to a bundle operand.

  llvm-svn: 256327
* [InstCombine] Extend peephole DSE to handle unordered atomics
  Philip Reames, 2015-12-17 (1 file, -6/+11)

  This extends the same line of reasoning used in EarlyCSE with
  http://reviews.llvm.org/D15352 to the DSE implementation in InstCombine.

  Key points:
  * We only remove unordered or simple stores.
  * The loads producing values consumed by dead stores don't influence
    whether the store is dead.

  Differential Revision: http://reviews.llvm.org/D15354
  llvm-svn: 255932
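  [Illustration only; a minimal dead store this extension can now remove:]

    define void @f(i32* %p) {
      store atomic i32 0, i32* %p unordered, align 4  ; dead, overwritten below
      store atomic i32 1, i32* %p unordered, align 4
      ret void
    }
    ; only unordered (or simple) stores are candidates; a release store
    ; would be left alone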
* InstCombineLoadStoreAlloca.cpp: Avoid instantiating Twine.
  NAKAMURA Takumi, 2015-12-15 (1 file, -4/+9)

  llvm-svn: 255637
* Instcombine: destructure loads of structs that do not contain padding
  Mehdi Amini, 2015-12-15 (1 file, -2/+55)

  For non-padded structs, we can just proceed and deaggregate them. We
  don't want to do this when there is padding in the struct, so as not to
  lose information about the padding (subsequent passes would then try
  hard to preserve it, which is undesirable).

  Also update extractvalue.ll and cast.ll so that they use structs with
  padding.

  Remove the FIXME in the extractvalue-of-load case, as the non-padded
  case is handled when processing the load, and we don't want to do it in
  the padded case.

  Patch by: Amaury SECHET <deadalnix@gmail.com>
  Differential Revision: http://reviews.llvm.org/D14483
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 255600
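  [Invented example of the distinction drawn above:]

    %nopad = type { i32, i32 }   ; no padding: safe to deaggregate
    %pad   = type { i8, i32 }    ; 3 padding bytes: left alone

    define i32 @f(%nopad* %p) {
      %agg = load %nopad, %nopad* %p
      %a = extractvalue %nopad %agg, 0
      %b = extractvalue %nopad %agg, 1
      %r = add i32 %a, %b
      ret i32 %r
    }
    ; the %nopad load becomes two scalar i32 loads; a load of %pad keeps
    ; its aggregate form so the padding bytes are not lost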
* Preserve load alignment and dereferenceable metadata during some transformations
  Artur Pilipenko, 2015-11-02 (1 file, -5/+16)

  Reviewed By: hfinkel
  Differential Revision: http://reviews.llvm.org/D13953
  llvm-svn: 251809
* InstCombine: Remove ilist iterator implicit conversions, NFC
  Duncan P. N. Exon Smith, 2015-10-13 (1 file, -5/+5)

  Stop relying on implicit conversions of ilist iterators in
  LLVMInstCombine. No functionality change intended.

  llvm-svn: 250183
* invariant.group handling in GVN
  Piotr Padlewski, 2015-10-02 (1 file, -7/+4)

  The most important part required to make clang devirtualization work
  ( ͡°͜ʖ ͡°). The code is able to find non-local dependencies, but
  unfortunately, because the caller can only handle local dependencies, I
  had to add some restrictions so it looks for dependencies only in the
  same BB.

  http://reviews.llvm.org/D12992
  llvm-svn: 249196
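  [Rough illustration; the metadata shape follows the current LangRef and
  may differ from the form used at the time:]

    define i8 @g(i8* %p) {
      store i8 42, i8* %p, !invariant.group !0
      call void @may_write(i8* %p)
      %v = load i8, i8* %p, !invariant.group !0
      ret i8 %v   ; %v can be forwarded from the store across the call
    }

    declare void @may_write(i8*)

    !0 = !{}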
* Clean up: Refactoring the hardcoded value of 6 for FindAvailableLoadedValue()'s parameter MaxInstsToScan. (Complete version of r247497. See D12886.)
  Larisse Voufo, 2015-09-18 (1 file, -3/+4)

  llvm-svn: 248022
* Revert "Clean up: Refactoring the hardcoded value of 6 for ↵Larisse Voufo2015-09-151-4/+3
| | | | | | FindAvailableLoadedValue()'s parameter MaxInstsToScan." for preliminary community discussion (See. D12886) llvm-svn: 247716
* Clean up: Refactoring the hardcoded value of 6 for FindAvailableLoadedValue()'s parameter MaxInstsToScan.
  Larisse Voufo, 2015-09-12 (1 file, -3/+4)

  llvm-svn: 247497
* Fix typos.
  Bruce Mitchener, 2015-09-12 (1 file, -3/+3)

  Summary: This fixes a variety of typos in docs, code and headers.

  Subscribers: jholewinski, sanjoy, arsenm, llvm-commits
  Differential Revision: http://reviews.llvm.org/D12626
  llvm-svn: 247495
* Rename Instruction::dropUnknownMetadata() to dropUnknownNonDebugMetadata() and make it always preserve debug locations, since all callers wanted this behavior anyway.
  Adrian Prantl, 2015-08-20 (1 file, -1/+0)

  This is addressing post-commit review feedback for r245589.
  NFC (inside the LLVM tree).

  llvm-svn: 245622
* Fix a bug that caused SimplifyCFG to drop DebugLocs.
  Adrian Prantl, 2015-08-20 (1 file, -0/+1)

  Instruction::dropUnknownMetadata(KnownSet) is supposed to preserve all
  metadata in KnownSet, but the condition for DebugLocs was inverted.

  Most users of dropUnknownMetadata() actually worked around this by not
  adding LLVMContext::MD_dbg to their list of KnownIDs. This is now made
  explicit.

  llvm-svn: 245589
* [InstCombine] Actually combine AA metadata when replacing one load with another
  Bjorn Steinbrink, 2015-07-10 (1 file, -2/+0)

  Fixes PR24083
  llvm-svn: 241955
* [InstCombine] Employ AliasAnalysis in FindAvailableLoadedValue
  Bjorn Steinbrink, 2015-07-10 (1 file, -1/+1)

  llvm-svn: 241887
* [InstCombine] Properly combine metadata when replacing a load with another
  Bjorn Steinbrink, 2015-07-10 (1 file, -1/+18)

  Not doing this can lead to misoptimizations down the line, e.g. because
  of range metadata on the replacing load excluding values that are valid
  for the load that is being replaced.

  llvm-svn: 241886
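  [Invented example of the hazard being fixed:]

    define i32 @f(i32* %p) {
      %a = load i32, i32* %p, !range !0
      %b = load i32, i32* %p
      %r = add i32 %a, %b
      ret i32 %r
    }
    !0 = !{i32 0, i32 10}
    ; if %b is replaced by %a, %a's !range must be dropped (or widened):
    ; keeping it would exclude values that were perfectly valid for %b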
* [InstCombine] Fold IntToPtr and PtrToInt into preceding loads.
  David Majnemer, 2015-05-28 (1 file, -5/+10)

  Currently we only fold a BitCast into a Load when the BitCast is its
  only user. Do the same for any no-op cast.

  Differential Revision: http://reviews.llvm.org/D9152
  llvm-svn: 238452
* Convert PHI getIncomingValue() to foreach over incoming_values(). NFC.
  Pete Cooper, 2015-05-12 (1 file, -2/+2)

  We already had a method to iterate over all the incoming values of a
  PHI. This just changes all eligible code to use it. Ineligible code
  included anything which cared about the index, or was also trying to get
  the i'th incoming BB.

  llvm-svn: 237169
* [InstCombine] Canonicalize single element array store
  David Majnemer, 2015-05-11 (1 file, -0/+9)

  Use the element type instead of the aggregate type.

  Differential Revision: http://reviews.llvm.org/D9591
  llvm-svn: 236969
* [InstCombine] Canonicalize single element array load
  David Majnemer, 2015-05-11 (1 file, -0/+10)

  Use the element type instead of the aggregate type.

  Differential Revision: http://reviews.llvm.org/D9596
  llvm-svn: 236968
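  [Sketch of the canonicalization; names are invented:]

    define [1 x i32] @f([1 x i32]* %p) {
      %agg = load [1 x i32], [1 x i32]* %p
      ret [1 x i32] %agg
    }
    ; canonicalized to load the element and rebuild the aggregate:
    ;   %elt = load i32, i32* %gep
    ;   %agg = insertvalue [1 x i32] undef, i32 %elt, 0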
* Update InstCombine to transform aggregate loads into scalar loads.
  Mehdi Amini, 2015-05-07 (1 file, -3/+32)

  Summary: One step further toward getting aggregate loads and stores
  optimized properly. This will only handle structs with one element at
  this point.

  Test Plan: Added unit tests for the new supported cases.

  Reviewers: chandlerc, joker-eph, joker.eph, majnemer
  Reviewed By: majnemer
  Subscribers: pete, llvm-commits
  Differential Revision: http://reviews.llvm.org/D8339
  Patch by Amaury Sechet.
  From: Amaury Sechet <amaury@fb.com>
  llvm-svn: 236695
* [CallSite] Make construction from Value* (or Instruction*) explicit.
  Benjamin Kramer, 2015-04-10 (1 file, -1/+1)

  CallSite roughly behaves as a common base of CallInst and InvokeInst.
  Bring the behavior closer to that model by making upcasts explicit.
  Downcasts remain implicit and work as before.

  Following dyn_cast as a mental model, checking whether a Value *V is a
  CallSite now looks like this:

    if (auto CS = CallSite(V)) // think dyn_cast

  instead of:

    if (CallSite CS = V)

  This is an extra token, but I think it is slightly clearer. Making the
  ctor explicit has the advantage of not accidentally creating nullptr
  CallSites, e.g. when you pass a Value * to a function taking a CallSite
  argument.

  llvm-svn: 234601
* [opaque pointer type] Change GetElementPtrInst::getIndexedType to take the pointee type
  David Blaikie, 2015-03-30 (1 file, -2/+4)

  This pushes the use of PointerType::getElementType up into several
  callers - I'll essentially just have to keep pushing that up the stack
  until I can eliminate every call to it...

  llvm-svn: 233604
* Update InstCombine to transform aggregate stores into scalar stores.
  Mehdi Amini, 2015-03-14 (1 file, -0/+28)

  Summary: This is a first step toward getting proper support for
  aggregate loads and stores.

  Test Plan: Added unit tests

  Reviewers: reames, chandlerc
  Reviewed By: chandlerc
  Subscribers: majnemer, joker.eph, chandlerc, llvm-commits
  Differential Revision: http://reviews.llvm.org/D7780
  Patch by Amaury Sechet
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 232284
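  [Illustrative sketch of the one-element-struct case handled here:]

    define void @f({ i32 }* %p, i32 %v) {
      %agg = insertvalue { i32 } undef, i32 %v, 0
      store { i32 } %agg, { i32 }* %p
      ret void
    }
    ; the aggregate store becomes a plain scalar store:
    ;   store i32 %v, i32* %gep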
* instcombine: alloca: Canonicalize scalar allocation array size
  Duncan P. N. Exon Smith, 2015-03-13 (1 file, -2/+10)

  As a follow-up to r232200, add an `-instcombine` to canonicalize scalar
  allocations to `i32 1`. Since r232200, `iX 1` (for X != 32) are only
  created by RAUWs, so this shouldn't fire too often. Nevertheless, it's a
  cheap check and a nice cleanup.

  llvm-svn: 232202
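  [For example, illustrative:]

    define void @f() {
      %a = alloca i32, i64 1   ; scalar allocation spelled with an array size
      ret void
    }
    ; canonicalized so the array size is the literal i32 1:
    ;   %a = alloca i32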
* instcombine: alloca: Limit array size type promotion
  Duncan P. N. Exon Smith, 2015-03-13 (1 file, -9/+9)

  Move type promotion of the size of the array allocation to the end of
  `simplifyAllocaArraySize()`. This avoids promoting the type of the array
  size if it's a `ConstantInt`, since the next -instcombine iteration will
  drop it to a scalar allocation anyway. Similarly, this avoids promoting
  the type if it's an `UndefValue`, in which case the alloca gets
  RAUW'ed.

  This is NFC when considered over the lifetime of -instcombine, since
  it's just reducing the number of iterations needed to reach fixed point.

  llvm-svn: 232201
* AsmWriter: Write alloca array size explicitly (and -instcombine fixup)
  Duncan P. N. Exon Smith, 2015-03-13 (1 file, -4/+4)

  Write the `alloca` array size explicitly when it's non-canonical.
  Previously, if the array size was `iX 1` (where X is not 32), the type
  would mutate to `i32` when round-tripping through assembly.

  The testcase I added fails in `verify-uselistorder` (as well as
  `FileCheck`), since the use-lists for `i32 1` and `i64 1` change.
  (Manman Ren came across this when running `verify-uselistorder` on some
  non-trivial, optimized code as part of PR5680.)

  The type mutation started with r104911, which allowed array sizes to be
  something other than an `i32`. Starting with r204945, we "canonicalized"
  to `i64` on 64-bit platforms -- and then on every round-trip through
  assembly, mutated back to `i32`.

  I bundled a fixup for `-instcombine` to avoid r204945 on scalar
  allocations. (There wasn't a clean way to sequence this into two
  commits, since the assembly change on its own caused testcase churn, and
  the `-instcombine` change can't be tested without the assembly changes.)

  An obvious alternative fix -- change `AllocaInst::AllocaInst()`,
  `AsmWriter` and `LLParser` to treat `intptr_t` as the canonical type for
  scalar allocations -- was rejected out of hand, since this required
  teaching them each about the data layout.

  A follow-up commit will add an `-instcombine` to canonicalize the scalar
  allocation array size to `i32 1` rather than leaving `iX 1` alone.

  rdar://problem/20075773
  llvm-svn: 232200
* instcombine: alloca: Remove nesting in simplifyAllocaArraySize(), NFC
  Duncan P. N. Exon Smith, 2015-03-13 (1 file, -27/+30)

  llvm-svn: 232199
* instcombine: alloca: Split out simplifyAllocaArraySize(), NFC
  Duncan P. N. Exon Smith, 2015-03-13 (1 file, -8/+15)

  Follow-up commits will change some of the logic here. Splitting into a
  separate function simplifies the logic by allowing early returns instead
  of deeper nesting.

  llvm-svn: 232197
* DataLayout is mandatory, update the API to reflect it with references.
  Mehdi Amini, 2015-03-10 (1 file, -54/+45)

  Summary: Now that the DataLayout is a mandatory part of the module,
  let's start cleaning the codebase. This patch is a first attempt at
  doing that.

  This patch is not exactly NFC: for instance, some places were passing a
  nullptr instead of the DataLayout, possibly just because there was a
  default value on the DataLayout argument of many functions in the API.
  Even though it is not purely NFC, there is no change in the validation.

  I turned as many pointers to DataLayout into references as possible;
  this helped in figuring out all the places where a nullptr could come
  up.

  I initially had a local version of this patch broken into over 30
  independent commits, but some later commits were cleaning the API and
  touching parts of the code modified in the previous commits, so it
  seemed cleaner without the intermediate state.

  Test Plan:
  Reviewers: echristo
  Subscribers: llvm-commits
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 231740
* [IC] Turn non-null MD on pointer loads to range MD on integer loads.
  Charles Davis, 2015-02-25 (1 file, -4/+18)

  Summary: This change fixes the FIXME that you recently added when you
  committed (a modified version of) my patch. When `InstCombine` combines
  a load and store of a pointer to those of an equivalently-sized integer,
  it currently drops any `!nonnull` metadata that might be present. This
  change replaces `!nonnull` metadata with `!range !{ 1, -1 }` metadata
  instead.

  Reviewers: chandlerc
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D7621
  llvm-svn: 230462
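  [Sketch of the replacement, using the range spelled out in the summary;
  the operand types are my assumption:]

    ; before: a pointer load carrying !nonnull
    ;   %p = load i8*, i8** %pp, !nonnull !0
    ; after combining to an integer load, non-null-ness becomes a range:
    define i64 @f(i64* %pp) {
      %p.int = load i64, i64* %pp, !range !0
      ret i64 %p.int
    }
    !0 = !{i64 1, i64 -1}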
* [InstCombine] Remove unnecessary variable indexing into single-element arrays
  Hal Finkel, 2015-02-20 (1 file, -0/+187)

  This change addresses a deficiency pointed out in PR22629. To copy from
  the bug report:

  [from the bug report]
  Consider this code:

    int f(int x) {
      int a[] = {12};
      return a[x];
    }

  GCC knows to optimize this to

    movl $12, %eax
    ret

  The code generated by recent Clang at -O3 is:

    movslq %edi, %rax
    movl .L_ZZ1fiE1a(,%rax,4), %eax
    retq

    .L_ZZ1fiE1a:
    .long 12 # 0xc
  [end from the bug report]

  This definitely seems worth fixing. I've also seen this kind of code
  before (as the base case of generic vector wrapper templates with one
  element).

  The general idea is to look at the GEP feeding a load or a store, which
  has some variable as its first non-zero index, and determine if that
  index must be zero (or else an out-of-bounds access would occur). We can
  do this for allocas and globals with constant initializers where we know
  the maximum size of the underlying object. When we find such a GEP, we
  create a new one for the memory access with that first variable index
  replaced with a constant zero.

  Even if we can't eliminate the memory access (and sometimes we can't),
  it is still useful because it removes unnecessary indexing calculations.

  llvm-svn: 229959
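  [My own IR illustration of the pattern described above:]

    @a = internal constant [1 x i32] [i32 12]

    define i32 @f(i64 %x) {
      %gep = getelementptr inbounds [1 x i32], [1 x i32]* @a, i64 0, i64 %x
      %v = load i32, i32* %gep
      ret i32 %v
    }
    ; any in-bounds %x must be 0, so the variable index is replaced by a
    ; constant zero and the load then folds to i32 12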
* [IC] Fix a bug with the instcombine canonicalizing of loads and propagating of metadata.
  Chandler Carruth, 2015-02-13 (1 file, -2/+9)

  We were propagating !nonnull metadata even when the newly formed load is
  no longer of a pointer type. This is clearly broken and results in LLVM
  failing the verifier and aborting. This patch just restricts the
  propagation of !nonnull metadata to when we actually have a pointer
  type.

  This bug report and the initial version of this patch were provided by
  Charles Davis! Many thanks for finding this!

  We still need to add logic to round-trip the metadata correctly if we
  combine from pointer types to integer types and then back by using range
  metadata for the integer type loads. But this is the minimal and safe
  version of the patch, which is important so we can backport it into 3.6.

  llvm-svn: 229029
* [PM] Rename InstCombine.h to InstCombineInternal.h in preparation for creating a non-internal header file for the InstCombine pass.
  Chandler Carruth, 2015-01-22 (1 file, -1/+1)

  I thought about calling this InstCombiner.h or in some way more clearly
  associating it with the InstCombiner class that it is primarily
  defining, but there are several other utility interfaces defined within
  this for InstCombine. If, in the course of refactoring, those end up
  moving elsewhere or going away, it might make more sense to make this
  the combiner's header alone.

  Naturally, this is a bikeshed to a certain degree, so feel free to lobby
  for a different shade of paint if this name just doesn't suit you.

  llvm-svn: 226783
* [canonicalize] Teach InstCombine to canonicalize loads which are only ever stored to always use a legal integer type if one is available.
  Chandler Carruth, 2015-01-22 (1 file, -0/+29)

  Regardless of whether this particular type is good or bad, it ensures we
  don't get weird differences in generated code (and resulting
  performance) from "equivalent" patterns that happen to end up using a
  slightly different type.

  After some discussion on llvmdev it seems everyone generally likes this
  canonicalization. However, there may be some parts of LLVM that handle
  it poorly and need to be fixed. I have at least verified that this
  doesn't impede GVN and instcombine's store-to-load forwarding powers in
  any obvious cases. Subtle cases are exactly what we need to flush out if
  they remain.

  Also note that this IR pattern should already be hitting LLVM from Clang
  at least, because it is exactly the IR which would be produced if you
  used memcpy to copy a pointer or floating-point value between memory
  locations instead of through a variable.

  llvm-svn: 226781
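  [A minimal sketch, assuming i32 is a legal integer type on the target:]

    define void @copy(float* %src, float* %dst) {
      %v = load float, float* %src   ; the loaded value is only ever stored
      store float %v, float* %dst
      ret void
    }
    ; canonicalized to move the bits as a legal integer:
    ;   %s = bitcast float* %src to i32*
    ;   %d = bitcast float* %dst to i32*
    ;   %v = load i32, i32* %s
    ;   store i32 %v, i32* %d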
* [canonicalize] Move a helper function further up the file so it can be used earlier. NFC.
  Chandler Carruth, 2015-01-22 (1 file, -47/+47)

  llvm-svn: 226777
* [canonicalization] Refactor how we create new stores into a helper function.
  Chandler Carruth, 2015-01-21 (1 file, -38/+48)

  This is a bit tidier anyways and will make a subsequent patch simpler,
  as I want to add another case to this combine.

  llvm-svn: 226746
* [PM] Split the AssumptionTracker immutable pass into two separate APIs: a cache of assumptions for a single function, and an immutable pass that manages those caches.
  Chandler Carruth, 2015-01-04 (1 file, -9/+6)

  The motivation for this change is twofold. Immutable analyses are really
  hacks around the current pass manager design and don't exist in the new
  design. This is usually OK, but it requires that the core logic of an
  immutable pass be reasonably partitioned off from the pass logic. This
  change does precisely that. As a consequence it also paves the way for
  the *many* utility functions that deal in the assumptions to live in
  both pass manager worlds by creating a separate non-pass object with its
  own independent API that they all rely on. Now, the only bits of the
  system that deal with the actual pass mechanics are those that actually
  need to deal with the pass mechanics.

  Once this separation is made, several simplifications become pretty
  obvious in the assumption cache itself. Rather than using a set and
  callback value handles, it can just be a vector of weak value handles.
  The callers can easily skip the handles that are null, and eventually we
  can wrap all of this up behind a filter iterator.

  For now, this adds boilerplate to the various passes, but this kind of
  boilerplate will end up making it possible to port these passes to the
  new pass manager, and so it will end up factored away pretty reasonably.

  llvm-svn: 225131
* Loading from null is valid outside of addrspace 0
  Philip Reames, 2014-12-29 (1 file, -10/+10)

  This patch fixes a miscompile where we were assuming that loading from
  null is undefined and thus we could assume it doesn't happen. This
  transform is perfectly legal in address space 0, but is not necessarily
  legal in other address spaces.

  We really should introduce a hook to control this property on a
  per-target, per-address-space basis. We may be losing valuable
  optimizations in some address spaces by being too conservative.

  Original patch by Thomas P Raoux (submitted to llvm-commits), tests and
  formatting fixes by me.

  llvm-svn: 224961
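  [Illustrative example of code the old assumption could miscompile:]

    define i32 @f() {
      ; potentially well-defined: address space 1 may legally map address 0
      %v = load i32, i32 addrspace(1)* null
      ret i32 %v
    }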
* Revert r223764 which taught instcombine about integer-based element extraction patterns.
  Chandler Carruth, 2014-12-09 (1 file, -349/+41)

  This is causing Clang to miscompile itself for 32-bit x86 somehow, and
  likely also on ARM and PPC. I really don't know how, but reverting now
  that I've confirmed this is actually the culprit. I have a reproduction
  as well and so should be able to restore this shortly.

  This reverts commit r223764.

  Original commit log follows:

  Teach instcombine to canonicalize "element extraction" from a load of an
  integer and "element insertion" into a store of an integer into actual
  element extraction, element insertion, and vector loads and stores.

  Previously, various parts of LLVM (including instcombine itself) would
  introduce integer loads and stores into the code as a way of opaquely
  loading and storing "bits". In some cases (such as a memcpy of a
  std::complex<float> object) we will eventually end up using those bits
  in non-integer types. In order for SROA to effectively promote the
  allocas involved, it splits these "store a bag of bits" integer loads
  and stores up into the constituent parts. However, for non-alloca loads
  and stores which remain, it uses integer math to recombine the values
  into a large integer to load or store.

  All of this would be "fine", except that it forces LLVM to go through
  integer math to combine and split up values. While this makes perfect
  sense for integers (and in fact is critical for bitfields to end up
  lowering efficiently), it is *terrible* for non-integer types,
  especially floating-point types. We have a much more canonical way of
  representing the act of concatenating the bits of two SSA values in
  LLVM: a vector and insertelement. This patch teaches InstCombine to use
  this representation.

  With this patch applied, LLVM will no longer introduce integer math into
  the critical path of every loop over std::complex<float> operations such
  as those that make up the hot path of ... oh, most HPC code, Eigen, and
  any other heavy linear algebra library.

  For the record, I looked *extensively* at fixing this in other parts of
  the compiler, but it just doesn't work:
  - We really do want to canonicalize memcpy and other bit-motion to
    integer loads and stores. SSA values are tremendously more powerful
    than "copy" intrinsics. Not doing this regresses massive amounts of
    LLVM's scalar optimizer.
  - We really do need to split up integer loads and stores of this form in
    SROA or every memcpy of a trivially copyable struct will prevent SSA
    formation of the members of that struct. It essentially turns off
    SROA.
  - The closest alternative is to actually split the loads and stores when
    partitioning with SROA, but this has all of the downsides historically
    discussed of splitting up loads and stores -- the wide-store
    information is fundamentally lost. We would also see performance
    regressions for bitfield-heavy code and other places where the
    integers aren't really intended to be split without seemingly
    arbitrary logic to treat integers totally differently.
  - We *can* effectively fix this in instcombine, so it isn't that hard of
    a choice to make IMO.

  llvm-svn: 223813
* Teach instcombine to canonicalize "element extraction" from a load of an integer and "element insertion" into a store of an integer into actual element extraction, element insertion, and vector loads and stores.
  Chandler Carruth, 2014-12-09 (1 file, -41/+349)

  Previously, various parts of LLVM (including instcombine itself) would
  introduce integer loads and stores into the code as a way of opaquely
  loading and storing "bits". In some cases (such as a memcpy of a
  std::complex<float> object) we will eventually end up using those bits
  in non-integer types. In order for SROA to effectively promote the
  allocas involved, it splits these "store a bag of bits" integer loads
  and stores up into the constituent parts. However, for non-alloca loads
  and stores which remain, it uses integer math to recombine the values
  into a large integer to load or store.

  All of this would be "fine", except that it forces LLVM to go through
  integer math to combine and split up values. While this makes perfect
  sense for integers (and in fact is critical for bitfields to end up
  lowering efficiently), it is *terrible* for non-integer types,
  especially floating-point types. We have a much more canonical way of
  representing the act of concatenating the bits of two SSA values in
  LLVM: a vector and insertelement. This patch teaches InstCombine to use
  this representation.

  With this patch applied, LLVM will no longer introduce integer math into
  the critical path of every loop over std::complex<float> operations such
  as those that make up the hot path of ... oh, most HPC code, Eigen, and
  any other heavy linear algebra library.

  For the record, I looked *extensively* at fixing this in other parts of
  the compiler, but it just doesn't work:
  - We really do want to canonicalize memcpy and other bit-motion to
    integer loads and stores. SSA values are tremendously more powerful
    than "copy" intrinsics. Not doing this regresses massive amounts of
    LLVM's scalar optimizer.
  - We really do need to split up integer loads and stores of this form in
    SROA or every memcpy of a trivially copyable struct will prevent SSA
    formation of the members of that struct. It essentially turns off
    SROA.
  - The closest alternative is to actually split the loads and stores when
    partitioning with SROA, but this has all of the downsides historically
    discussed of splitting up loads and stores -- the wide-store
    information is fundamentally lost. We would also see performance
    regressions for bitfield-heavy code and other places where the
    integers aren't really intended to be split without seemingly
    arbitrary logic to treat integers totally differently.
  - We *can* effectively fix this in instcombine, so it isn't that hard of
    a choice to make IMO.

  Differential Revision: http://reviews.llvm.org/D6548
  llvm-svn: 223764
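  [Invented example of the canonicalization, assuming a little-endian
  target and a 64-bit load covering two floats:]

    define float @f(i64* %p) {
      %i  = load i64, i64* %p
      %lo = trunc i64 %i to i32
      %f  = bitcast i32 %lo to float
      ret float %f
    }
    ; rewritten as a genuine element extraction:
    ;   %c = bitcast i64* %p to <2 x float>*
    ;   %v = load <2 x float>, <2 x float>* %c
    ;   %f = extractelement <2 x float> %v, i32 0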