author: Sanjay Patel <spatel@rotateright.com> 2016-03-14 16:54:43 +0000
committer: Sanjay Patel <spatel@rotateright.com> 2016-03-14 16:54:43 +0000
commit: 62d707c8d91a5cc304ec52ecc990e6234cb04525
tree: 0dc3ea20c9afff0ebcce352730ca8cd7468378ae /lldb/packages/Python/lldbsuite/test/python_api
parent: e8efff373a5118edf298b63a4c24cdbcc3ebd1da
[x86, AVX] replace masked load with full vector load when possible
Converting masked vector loads to regular vector loads for x86 AVX should always be a win.
I raised the legality issue of reading the extra memory bytes on llvm-dev and saw no objections.
1. x86 already does this kind of optimization for multiple scalar loads -> vector load.
2. If other targets have the same flexibility, we could move this transform up to CGP or DAGCombiner.
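The lane-level semantics behind the transform can be sketched as follows. This is an illustrative model, not the patch's actual code: a masked load yields the passthrough value in disabled lanes, and the replacement performs a full-width load (touching the extra bytes, which the llvm-dev discussion concluded is acceptable on x86 AVX) followed by a per-lane select. All function names here are hypothetical.

```python
def masked_load(memory, base, mask, passthrough=0.0):
    # Masked load: only enabled lanes read memory; disabled lanes
    # produce the passthrough value.
    return [memory[base + i] if m else passthrough
            for i, m in enumerate(mask)]

def full_load_then_select(memory, base, mask, passthrough=0.0):
    # Replacement: load the full vector unconditionally (reading the
    # "extra" bytes in disabled lanes), then blend per-lane with the
    # mask. Observable results match the masked load as long as the
    # whole vector's worth of memory is readable.
    full = [memory[base + i] for i in range(len(mask))]
    return [v if m else passthrough for v, m in zip(full, mask)]

memory = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
mask = [1, 1, 0, 1, 0, 0, 1, 1]
assert masked_load(memory, 0, mask) == full_load_then_select(memory, 0, mask)
```

The equivalence holds only when reading the disabled lanes cannot fault, which is why the legality of touching those extra bytes was the question raised on llvm-dev.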
Differential Revision: http://reviews.llvm.org/D18094
llvm-svn: 263446
Diffstat (limited to 'lldb/packages/Python/lldbsuite/test/python_api')
0 files changed, 0 insertions, 0 deletions