path: root/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
author      Hal Finkel <hfinkel@anl.gov>  2015-02-05 18:43:00 +0000
committer   Hal Finkel <hfinkel@anl.gov>  2015-02-05 18:43:00 +0000
commit      c9dd02066cc0528da28bd887e8dbdb3bd9c68165 (patch)
tree        879cc31dfeb4b4117d4c16a0289f4ea9cf43c5ec /llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
parent      65d1cbf9df60f8ee44ceb242c03dc59377910425 (diff)
download    bcm5719-llvm-c9dd02066cc0528da28bd887e8dbdb3bd9c68165.tar.gz
            bcm5719-llvm-c9dd02066cc0528da28bd887e8dbdb3bd9c68165.zip
[PowerPC] Prepare loops for pre-increment loads/stores
PowerPC supports pre-increment load/store instructions (except for Altivec/VSX vector loads/stores). Using these on embedded cores can be very important, but most loops are not naturally set up to use them. We can often change that, however, by placing loops into a non-canonical form. Generically, this means transforming loops like this:

  for (int i = 0; i < n; ++i)
    array[i] = c;

to look like this:

  T *p = &array[-1];
  for (int i = 0; i < n; ++i)
    *++p = c;

The key point is that the addresses accessed are pulled into dedicated PHIs and "pre-decremented" in the loop preheader. This allows the use of pre-increment load/store instructions without loop peeling. A target-specific, late, IR-level pass (running post-LSR), PPCLoopPreIncPrep, is introduced to perform this transformation.

I've used this code out-of-tree for generating code for the PPC A2 for over a year. Somewhat to my surprise, running the test suite + externals on a P7 with this transformation enabled showed no performance regressions, and one speedup:

  External/SPEC/CINT2006/483.xalancbmk/483.xalancbmk  -2.32514% +/- 1.03736%

So I'm going to enable it on everything for now. I was surprised by this because, on the POWER cores, these pre-increment load/store instructions are cracked (and, thus, harder to schedule effectively). But seeing no regressions, and feeling that it is generally easier to split instructions apart late than it is to combine them late, this might be the better approach regardless.

In the future, we might want to integrate this functionality into LSR (but currently LSR does not create new PHI nodes, so, for that and other reasons, significant work would need to be done).

llvm-svn: 228328
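For readers who want a self-contained version of the two loop shapes above, here is a minimal sketch; the element type, function names, and the use of a byte store are illustrative and not part of the patch:

  // Canonical form: the store address is recomputed from the induction
  // variable on every iteration.
  void fill_canonical(char *array, char c, int n) {
    for (int i = 0; i < n; ++i)
      array[i] = c;
  }

  // Prepared (non-canonical) form: the address lives in its own variable
  // (a dedicated PHI at the IR level), is "pre-decremented" before the
  // loop, and is bumped with a pre-increment on each iteration, which maps
  // naturally onto PPC update-form stores such as stbu.
  void fill_preinc(char *array, char c, int n) {
    char *p = array - 1;
    for (int i = 0; i < n; ++i)
      *++p = c;
  }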
Diffstat (limited to 'llvm/lib/Target/PowerPC/PPCTargetMachine.cpp')
-rw-r--r--   llvm/lib/Target/PowerPC/PPCTargetMachine.cpp   7
1 file changed, 7 insertions, 0 deletions
diff --git a/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp b/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
index 03425c9ca5f..f9686686ee8 100644
--- a/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
+++ b/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
@@ -30,6 +30,10 @@ static cl::
 opt<bool> DisableCTRLoops("disable-ppc-ctrloops", cl::Hidden,
                         cl::desc("Disable CTR loops for PPC"));
+static cl::
+opt<bool> DisablePreIncPrep("disable-ppc-preinc-prep", cl::Hidden,
+                        cl::desc("Disable PPC loop preinc prep"));
+
 static cl::opt<bool>
 VSXFMAMutateEarly("schedule-ppc-vsx-fma-mutation-early",
   cl::Hidden, cl::desc("Schedule VSX FMA instruction mutation early"));
@@ -231,6 +235,9 @@ void PPCPassConfig::addIRPasses() {
 }
 bool PPCPassConfig::addPreISel() {
+  if (!DisablePreIncPrep && getOptLevel() != CodeGenOpt::None)
+    addPass(createPPCLoopPreIncPrepPass(getPPCTargetMachine()));
+
   if (!DisableCTRLoops && getOptLevel() != CodeGenOpt::None)
     addPass(createPPCCTRLoops(getPPCTargetMachine()));
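As the hunk above shows, the new pass is added in addPreISel ahead of the CTR-loops pass and only runs above CodeGenOpt::None. It can be switched off with the option introduced in the first hunk; a usage sketch (the input file name here is illustrative):

  llc -march=ppc64 -disable-ppc-preinc-prep loop.ll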