path: root/llvm/test/Bitcode/binaryIntInstructions.3.2.ll.bc
author	Simon Pilgrim <llvm-dev@redking.me.uk>	2019-04-30 10:18:25 +0000
committer	Simon Pilgrim <llvm-dev@redking.me.uk>	2019-04-30 10:18:25 +0000
commit	22641cc19417a938f1511d0fe3686d2508ebd009 (patch)
tree	4eb9535a18b40e2d1048712259adde1e1151190a /llvm/test/Bitcode/binaryIntInstructions.3.2.ll.bc
parent	59b6889238a61b864bb0a28cf02608be1ae3c324 (diff)
download	bcm5719-llvm-22641cc19417a938f1511d0fe3686d2508ebd009.tar.gz
	bcm5719-llvm-22641cc19417a938f1511d0fe3686d2508ebd009.zip
Fix for bug 41512: lower INSERT_VECTOR_ELT(ZeroVec, 0, Elt) to SCALAR_TO_VECTOR(Elt) for all SSE flavors
Current LLVM uses pxor+pinsrb on SSE4+ for INSERT_VECTOR_ELT(ZeroVec, 0, Elt) instead of the much simpler movd. INSERT_VECTOR_ELT(ZeroVec, 0, Elt) is an idiomatic construct which is used e.g. for _mm_cvtsi32_si128(Elt) and for lowest-element initialization in _mm_set_epi32, so this inefficient lowering leads to significant performance degradations in certain cases when switching from SSSE3 to SSE4. https://bugs.llvm.org/show_bug.cgi?id=41512

Here INSERT_VECTOR_ELT(ZeroVec, 0, Elt) is simply converted to SCALAR_TO_VECTOR(Elt) when applicable, since the latter is a closer match to the desired behavior and is always lowered efficiently to movd and the like.

Committed on behalf of @Serge_Preis (Serge Preis)

Differential Revision: https://reviews.llvm.org/D60852

llvm-svn: 359545
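For illustration, a minimal C sketch (not part of the commit; function names are hypothetical) of the source-level idiom that produces INSERT_VECTOR_ELT(ZeroVec, 0, Elt) in the SelectionDAG. Per the description above, with this patch both functions should lower to a single movd on all SSE flavors rather than pxor+pinsr* on SSE4+:

	#include <emmintrin.h>  /* SSE2 intrinsics */

	/* _mm_cvtsi32_si128 inserts the scalar into lane 0 of a zero vector. */
	__m128i load_scalar(int x) {
	    return _mm_cvtsi32_si128(x);   /* <x, 0, 0, 0> */
	}

	/* Same zero-vector-insert pattern via _mm_set_epi32 with only the
	   lowest element non-zero. */
	__m128i set_lowest(int x) {
	    return _mm_set_epi32(0, 0, 0, x);
	}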
Diffstat (limited to 'llvm/test/Bitcode/binaryIntInstructions.3.2.ll.bc')
0 files changed, 0 insertions, 0 deletions