author | Hiroshi Inoue <inouehrs@jp.ibm.com> | 2018-06-07 13:21:14 +0000
committer | Hiroshi Inoue <inouehrs@jp.ibm.com> | 2018-06-07 13:21:14 +0000
commit | 01ef4c2c64e17f339d1ef9da992493c652be7f8e (patch)
tree | baebfc4d2d39f1caf4684caeb48a79522779ac9a /llvm
parent | 241f286bd731e40c89883cf70ecf0961aae32cd3 (diff)
download | bcm5719-llvm-01ef4c2c64e17f339d1ef9da992493c652be7f8e.tar.gz, bcm5719-llvm-01ef4c2c64e17f339d1ef9da992493c652be7f8e.zip
[PowerPC] avoid unprofitable Repl32 flag in BitPermutationSelector
BitPermutationSelector sets the Repl32 flag for bit groups which can potentially benefit from 32-bit rotate-and-mask instructions with bit replication, i.e. rlwinm/rlwimi, which copy the lower 32 bits into the upper 32 bits before rotation on 64-bit PowerPC.
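As a small illustration of that replication, the following sketch models the behaviour described above at the value level; it is not code from the patch, and the final mask/insert step of rlwinm/rlwimi is left out.
#include <cstdint>

// Value-level model of the replication described above: the low 32 bits of
// the source are duplicated into the high 32 bits, and the 64-bit result is
// then rotated left. The subsequent mask/insert step is omitted.
static uint64_t repl32ThenRotate(uint64_t V, unsigned RLAmt) {
  uint64_t Lo = V & 0xFFFFFFFFu;
  uint64_t Replicated = (Lo << 32) | Lo; // low word copied into the high word
  RLAmt &= 63;
  return RLAmt ? (Replicated << RLAmt) | (Replicated >> (64 - RLAmt))
               : Replicated;             // 64-bit rotate left
}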
However, enforcing a 32-bit instruction sometimes results in redundant generated code.
For example, the following simple code is compiled into rotldi + rlwimi, while it can be compiled into a single rldimi instruction if the Repl32 flag is not set on the bit group for (a & 0xFFFFFFFF).
uint64_t func(uint64_t a, uint64_t b) {
  return (a & 0xFFFFFFFF) | (b << 32);
}
To avoid such problems, this patch checks the potential benefit of the Repl32 flag before setting it. If a bit group does not require rotation (i.e. RLAmt == 0) and will not be merged into another group, the group does not benefit from the Repl32 flag, so the flag is not set for it.
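For reference, the new check can be read standalone. The sketch below is only an illustration under simplifying assumptions: BitGroup is reduced to the fields the check touches, potentiallyMerged and assignRepl32 are made-up names for what is a lambda and surrounding loop inside BitPermutationSelector, and the original low-32-bit conditions that actually set Repl32 are omitted. The real logic is in the PPCISelDAGToDAG.cpp hunk below.
// Standalone sketch of the check added by this patch (simplified; see the
// lead-in above for what is assumed here).
#include <cassert>
#include <vector>

struct BitGroup {
  int V = 0;          // stand-in for the SDValue the group is rotated from
  unsigned RLAmt = 0; // left-rotation amount required by this group
  bool Repl32 = false;
};

// A zero-rotate group can still profit from Repl32 only if some other group
// with the same source value has RLAmt of 0 or 32, i.e. the two groups could
// be merged into one 32-bit rotate-and-insert.
static bool potentiallyMerged(const BitGroup &BG,
                              const std::vector<BitGroup> &BitGroups) {
  for (const BitGroup &BG2 : BitGroups)
    if (&BG != &BG2 && BG.V == BG2.V &&
        (BG2.RLAmt == 0 || BG2.RLAmt == 32))
      return true;
  return false;
}

static void assignRepl32(std::vector<BitGroup> &BitGroups) {
  for (BitGroup &BG : BitGroups) {
    // Early-out from this patch: RLAmt == 0 with no merge partner means
    // Repl32 would only constrain later instruction selection, so leave it
    // unset for this group.
    if (BG.RLAmt == 0 && !potentiallyMerged(BG, BitGroups))
      continue;
    // The real checks deciding whether Repl32 applies are omitted here; the
    // sketch simply marks every remaining group.
    BG.Repl32 = true;
  }
}

int main() {
  // The two groups of the example above: (a & 0xFFFFFFFF) needs no rotation,
  // and (b << 32) rotates a different value by 32, so the first group is
  // skipped and a single rldimi stays available to the selector.
  std::vector<BitGroup> Groups = {{/*V=*/0, /*RLAmt=*/0},
                                  {/*V=*/1, /*RLAmt=*/32}};
  assignRepl32(Groups);
  assert(!Groups[0].Repl32 && Groups[1].Repl32);
  return 0;
}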
Differential Revision: https://reviews.llvm.org/D47867
llvm-svn: 334195
Diffstat (limited to 'llvm')
-rw-r--r-- | llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp | 14
-rw-r--r-- | llvm/test/CodeGen/PowerPC/bperm.ll | 12
2 files changed, 26 insertions(+), 0 deletions(-)
diff --git a/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp b/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp
index de2c2289251..4b0633142c1 100644
--- a/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp
+++ b/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp
@@ -1450,6 +1450,20 @@ class BitPermutationSelector {
     };
 
     for (auto &BG : BitGroups) {
+      // If this bit group has RLAmt of 0 and will not be merged with
+      // another bit group, we don't benefit from Repl32. We don't mark
+      // such group to give more freedom for later instruction selection.
+      if (BG.RLAmt == 0) {
+        auto PotentiallyMerged = [this](BitGroup & BG) {
+          for (auto &BG2 : BitGroups)
+            if (&BG != &BG2 && BG.V == BG2.V &&
+                (BG2.RLAmt == 0 || BG2.RLAmt == 32))
+              return true;
+          return false;
+        };
+        if (!PotentiallyMerged(BG))
+          continue;
+      }
       if (BG.StartIdx < 32 && BG.EndIdx < 32) {
         if (IsAllLow32(BG)) {
           if (BG.RLAmt >= 32) {
diff --git a/llvm/test/CodeGen/PowerPC/bperm.ll b/llvm/test/CodeGen/PowerPC/bperm.ll
index 9c807763e70..2f3118a7f39 100644
--- a/llvm/test/CodeGen/PowerPC/bperm.ll
+++ b/llvm/test/CodeGen/PowerPC/bperm.ll
@@ -271,6 +271,18 @@ entry:
 ; CHECK: blr
 }
 
+define i64 @test16(i64 %a, i64 %b) #0 {
+entry:
+  %and = and i64 %a, 4294967295
+  %shl = shl i64 %b, 32
+  %or = or i64 %and, %shl
+  ret i64 %or
+
+; CHECK-LABEL: @test16
+; CHECK: rldimi 3, 4, 32, 0
+; CHECK: blr
+}
+
 ; Function Attrs: nounwind readnone
 declare i32 @llvm.bswap.i32(i32) #0
 declare i64 @llvm.bswap.i64(i64) #0