path: root/clang/lib/Frontend/CompilerInvocation.cpp
author    Sanjay Patel <spatel@rotateright.com>    2017-06-11 21:18:58 +0000
committer Sanjay Patel <spatel@rotateright.com>    2017-06-11 21:18:58 +0000
commit    dcbfbb11d985caa30fcd782ac13267d41058f256 (patch)
tree      98bfcf2872d8f0ac5753153207fb38e85a6b31de /clang/lib/Frontend/CompilerInvocation.cpp
parent    7ed6cd32ea702971e38a28b865727a016b727c3f (diff)
[x86] use vperm2f128 rather than vinsertf128 when there's a chance to fold a 32-byte load
I was looking closer at the x86 test diffs in D33866, and the first change seems like it shouldn't happen in the first place, so this patch resolves that.

According to Agner's tables and AMD docs, vperm2f128 and vinsertf128 have identical timing for any given CPU model, so we should be able to interchange them without affecting perf. But as some of the diffs here show, using vperm2f128 allows load folding, so we should take that opportunity to reduce code size and register pressure.

A secondary advantage is making AVX1 and AVX2 codegen more similar. Given that vperm2f128 was introduced with AVX1, we should be selecting it in all of the same situations that we would with AVX2. If there's some reason that an AVX1 CPU would not want to use this instruction, that should be fixed up in a later pass.

Differential Revision: https://reviews.llvm.org/D33938

llvm-svn: 305171
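To illustrate the load-folding point, here is a minimal C-intrinsics sketch (not from the patch; function and variable names are hypothetical). Both functions build a YMM value from the low 128-bit lane of `a` and the high lane of a 32-byte value loaded from memory. In the vinsertf128 form, the 32-byte load typically stays a separate vmovups because vinsertf128 can only fold a 16-byte memory operand; in the vperm2f128 form, the whole 32-byte load can fold into the shuffle's memory operand. Compile with `-mavx` to target AVX1.

```c
#include <immintrin.h>

/* vinsertf128 form: extract the high lane of the loaded value, then
 * insert it into the high lane of `a`. The 32-byte load of *b usually
 * remains a standalone instruction (e.g. vmovups (%rdi), %ymm1). */
__m256 blend_halves_insert(__m256 a, const float *b) {
    __m256 vb = _mm256_loadu_ps(b);
    __m128 hi = _mm256_extractf128_ps(vb, 1);
    return _mm256_insertf128_ps(a, hi, 1);
}

/* vperm2f128 form: imm8 = 0x30 selects a's low lane (bits [1:0] = 0)
 * and vb's high lane (bits [5:4] = 3). The 32-byte load can fold into
 * the instruction, e.g. vperm2f128 $0x30, (%rdi), %ymm0, %ymm0. */
__m256 blend_halves_perm(__m256 a, const float *b) {
    __m256 vb = _mm256_loadu_ps(b);
    return _mm256_permute2f128_ps(a, vb, 0x30);
}
```

The assembly mentioned in the comments is the expected pattern under this patch's reasoning, not guaranteed output for any particular compiler version.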
Diffstat (limited to 'clang/lib/Frontend/CompilerInvocation.cpp')
0 files changed, 0 insertions, 0 deletions