| author | Sanjay Patel <spatel@rotateright.com> | 2015-05-07 15:48:53 +0000 |
|---|---|---|
| committer | Sanjay Patel <spatel@rotateright.com> | 2015-05-07 15:48:53 +0000 |
| commit | a9f6d3505d04aec6849a8309ce7e1971c0c142cb (patch) | |
| tree | 215926c8a3b773c23fdfd49668e13a24b992138a /llvm/lib/Support/Unix | |
| parent | 44faaa7aa472599bc4b7793817e006347aff6b14 (diff) | |
[x86] eliminate unnecessary shuffling/moves with unary scalar math ops (PR21507)
Finish the job that was abandoned in D6958 following the refactoring in
http://reviews.llvm.org/rL230221:
1. Uncomment the intrinsic def for the AVX r_Int instruction.
2. Add missing r_Int entries to the load folding tables; there are already
tests that check these in "test/CodeGen/X86/fold-load-unops.ll", so I
haven't added any more in this patch.
3. Add patterns to solve PR21507 ( https://llvm.org/bugs/show_bug.cgi?id=21507 ).
So instead of this:

    movaps %xmm0, %xmm1
    rcpss  %xmm1, %xmm1
    movss  %xmm1, %xmm0

We should now get:

    rcpss  %xmm0, %xmm0

And instead of this:

    vsqrtss  %xmm0, %xmm0, %xmm1
    vblendps $1, %xmm1, %xmm0, %xmm0 ## xmm0 = xmm1[0],xmm0[1,2,3]

We should now get:

    vsqrtss  %xmm0, %xmm0, %xmm0
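
For context, the kind of IR that exposes this issue looks roughly like the sketch below. It is a hypothetical example (the function name @sqrt_lane0 is illustrative, not taken from the patch or its tests): a scalar sqrt applied to element 0 of a vector, with the other elements of the source passed through. With the new patterns, the expectation is that this selects to a single (v)sqrtss on the source register instead of the shuffle/move sequence shown above.

    declare float @llvm.sqrt.f32(float)

    ; Scalar sqrt of lane 0; lanes 1-3 of %v are passed through unchanged.
    define <4 x float> @sqrt_lane0(<4 x float> %v) {
      %s = extractelement <4 x float> %v, i32 0
      %r = call float @llvm.sqrt.f32(float %s)
      %res = insertelement <4 x float> %v, float %r, i32 0
      ret <4 x float> %res
    }
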
Differential Revision: http://reviews.llvm.org/D9504
llvm-svn: 236740
Diffstat (limited to 'llvm/lib/Support/Unix')
0 files changed, 0 insertions, 0 deletions

