author     Bob Wilson <bob.wilson@apple.com>  2011-04-19 18:11:57 +0000
committer  Bob Wilson <bob.wilson@apple.com>  2011-04-19 18:11:57 +0000
commit     0858c3aaed585bc38cc9f8f0b63bc9edd9a47233
tree       5c555249f461d538bdba04cb0a298b980cc0ec0c /llvm/lib/Target/ARM/ARMHazardRecognizer.cpp
parent     d04a83f8f28b5cb489ed9bf9ebffdf2762f0eabc
This patch combines several changes from Evan Cheng for rdar://8659675.
Making use of the VFP / NEON floating-point multiply-accumulate / subtract instructions is difficult on current ARM implementations for a few reasons:

1. Even though a single vmla has a latency that is one cycle shorter than a pair of vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-A8) can cause an additional pipeline stall, so it is frequently better to simply codegen vmul + vadd.

2. A vmla followed by a vmul, vadd, or vsub causes the second fp instruction to stall for 4 cycles. We need to schedule them apart.

3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back RAW vmla + vmla is very bad, but this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4-cycle vmul stall, the second sequence is still 2 cycles faster.

Up to now, isel has simply avoided codegen'ing fp vmla / vmls. This works well enough, but it isn't the optimal solution. This patch attempts to make it possible to use vmla / vmls in the cases where it is profitable:

A. Add the missing isel predicates which cause vmla to be codegen'ed.

B. Make sure the fmul in (fadd (fmul)) has a single use; we don't want to compute both an fmul and an fmla.

C. Add additional isel checks for vmla, avoiding cases where a vmla feeds into other fp instructions (except for the exceptional case in #3).

D. Add an ARM hazard recognizer to model the vmla / vmls hazards (a sketch of the idea follows below).

E. Add a special pre-regalloc case to expand a vmla / vmls when it is likely to trigger one of the special hazards.

Enable these fp vmlx codegen changes for Cortex-A9.

llvm-svn: 129775
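As a concrete illustration of the hazard-recognizer logic in D (and of the two lines added in the hunk below), here is a minimal, self-contained C++ sketch. The Op enum, the two-instruction lookback window, and all helper names are simplifications invented for this example; they are not LLVM's API, which works on MachineInstr opcodes and TSFlags instead.

#include <cassert>

// Hypothetical opcode set for illustration only (not LLVM's API).
enum class Op { VMLA, VMLS, VMUL, VADD, VSUB, LDR, STR, IntOp };

enum class HazardType { NoHazard, Hazard };

// The multiply-accumulate ops that create the special stall.
static bool isFpMLx(Op op) { return op == Op::VMLA || op == Op::VMLS; }

// The fp ops that stall for ~4 cycles when issued right after a vmla/vmls.
static bool stallsAfterMLx(Op op) {
  return op == Op::VMUL || op == Op::VADD || op == Op::VSUB || isFpMLx(op);
}

// General-domain (integer / memory) instructions.
static bool isGeneralDomain(Op op) {
  return op == Op::LDR || op == Op::STR || op == Op::IntOp;
}

// Decide whether `candidate` may issue given the last two scheduled ops.
// Mirrors the shape of the hunk below: look through one general-domain
// instruction, except for loads/stores on Cortex-A9, where the AGU is
// muxed with the NEON/FPU pipeline and a memory op therefore behaves
// like a real fp-pipe instruction in between.
HazardType getHazardType(Op beforeLast, Op last, Op candidate,
                         bool isCortexA9) {
  if (!stallsAfterMLx(candidate))
    return HazardType::NoHazard;

  Op def = last;
  bool lastIsMem = (last == Op::LDR || last == Op::STR);
  if (isGeneralDomain(last) && !(isCortexA9 && lastIsMem))
    def = beforeLast;  // skip over one non-VFP/NEON instruction

  return isFpMLx(def) ? HazardType::Hazard : HazardType::NoHazard;
}

int main() {
  // vmla ; vmul -> hazard (the 4-cycle stall from point 2).
  assert(getHazardType(Op::IntOp, Op::VMLA, Op::VMUL, false) ==
         HazardType::Hazard);
  // vmla ; add ; vmul -> still a hazard: one integer op doesn't hide it.
  assert(getHazardType(Op::VMLA, Op::IntOp, Op::VMUL, false) ==
         HazardType::Hazard);
  // vmla ; ldr ; vmul on Cortex-A8 -> the load is looked through: hazard.
  assert(getHazardType(Op::VMLA, Op::LDR, Op::VMUL, false) ==
         HazardType::Hazard);
  // vmla ; ldr ; vmul on Cortex-A9 -> the load occupies the muxed
  // AGU/NEON path, so it is not skipped and no hazard is reported.
  assert(getHazardType(Op::VMLA, Op::LDR, Op::VMUL, true) ==
         HazardType::NoHazard);
  return 0;
}

In the real recognizer, returning a hazard asks the scheduler to find other work for the next few cycles, which is the "schedule them apart" behavior point 2 calls for.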
Diffstat (limited to 'llvm/lib/Target/ARM/ARMHazardRecognizer.cpp')
-rw-r--r--  llvm/lib/Target/ARM/ARMHazardRecognizer.cpp | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/llvm/lib/Target/ARM/ARMHazardRecognizer.cpp b/llvm/lib/Target/ARM/ARMHazardRecognizer.cpp
index e97ce50bc42..517bba8cee8 100644
--- a/llvm/lib/Target/ARM/ARMHazardRecognizer.cpp
+++ b/llvm/lib/Target/ARM/ARMHazardRecognizer.cpp
@@ -49,6 +49,8 @@ ARMHazardRecognizer::getHazardType(SUnit *SU, int Stalls) {
       const TargetInstrDesc &LastTID = LastMI->getDesc();
       // Skip over one non-VFP / NEON instruction.
       if (!LastTID.isBarrier() &&
+          // On A9, AGU and NEON/FPU are muxed.
+          !(STI.isCortexA9() && (LastTID.mayLoad() || LastTID.mayStore())) &&
           (LastTID.TSFlags & ARMII::DomainMask) == ARMII::DomainGeneral) {
         MachineBasicBlock::iterator I = LastMI;
         if (I != LastMI->getParent()->begin()) {