authorAlexandros Lamprineas <alexandros.lamprineas@arm.com>2018-12-06 16:11:58 +0000
committerAlexandros Lamprineas <alexandros.lamprineas@arm.com>2018-12-06 16:11:58 +0000
commite4c91f5c4c52cd36b8238d29b933f2866ea0d735 (patch)
treef7260782da76417bfab8b49bba4cacd5d3533d4e /llvm/lib
parent64ad0ad5ed92fc9881075cffebce418d05f61288 (diff)
[GVN] Don't perform scalar PRE on GEPs
Partial Redundancy Elimination of GEPs prevents CodeGenPrepare from sinking the addressing mode computation of memory instructions back to its uses. The problem comes from the insertion of PHIs, which confuse CGP and make it bail out. I've autogenerated the check lines of an existing test and added a store instruction to demonstrate the motivation behind this change. The store now uses the GEP instead of a PHI.

Differential Revision: https://reviews.llvm.org/D55009

llvm-svn: 348496
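To illustrate the pattern the patch is avoiding, here is a hypothetical IR sketch (not taken from the actual test referenced above): a GEP is fully available along one path into a merge block and recomputed there.

```llvm
; Hypothetical example. Before this patch, scalar PRE would insert a
; copy of the GEP into the predecessor edge where it is missing and
; replace %g2 with a PHI of the two GEPs.
define void @sketch(i32* %p, i64 %i, i1 %c) {
entry:
  br i1 %c, label %then, label %merge

then:
  %g = getelementptr inbounds i32, i32* %p, i64 %i
  %v = load i32, i32* %g
  br label %merge

merge:
  ; With a PHI feeding the store's address, CodeGenPrepare can no
  ; longer sink the address computation into the memory instruction's
  ; addressing mode, so it bails out.
  %g2 = getelementptr inbounds i32, i32* %p, i64 %i
  store i32 0, i32* %g2
  ret void
}
```

With the patch, the GEP in %merge is left alone, so the store keeps a direct GEP operand that CGP can fold into the addressing mode.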
Diffstat (limited to 'llvm/lib')
-rw-r--r--llvm/lib/Transforms/Scalar/GVN.cpp10
1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/llvm/lib/Transforms/Scalar/GVN.cpp b/llvm/lib/Transforms/Scalar/GVN.cpp
index c080c2a1813..440ea4a5bc7 100644
--- a/llvm/lib/Transforms/Scalar/GVN.cpp
+++ b/llvm/lib/Transforms/Scalar/GVN.cpp
@@ -2156,6 +2156,16 @@ bool GVN::performScalarPRE(Instruction *CurInst) {
if (isa<CmpInst>(CurInst))
return false;
+ // Don't do PRE on GEPs. The inserted PHI would prevent CodeGenPrepare from
+ // sinking the addressing mode computation back to its uses. Extending the
+ // GEP's live range increases the register pressure, and therefore it can
+ // introduce unnecessary spills.
+ //
+ // This doesn't prevent Load PRE. PHI translation will make the GEP available
+ // to the load by moving it to the predecessor block if necessary.
+ if (isa<GetElementPtrInst>(CurInst))
+ return false;
+
// We don't currently value number ANY inline asm calls.
if (CallInst *CallI = dyn_cast<CallInst>(CurInst))
if (CallI->isInlineAsm())