path: root/llvm/lib/Target/AMDGPU/AMDGPUAlwaysInlinePass.cpp
author    Craig Topper <craig.topper@intel.com>  2018-07-10 00:37:25 +0000
committer Craig Topper <craig.topper@intel.com>  2018-07-10 00:37:25 +0000
commit    638426fc36e08ccee78605a4d8136757ca0faf12 (patch)
tree      10d3c4a1a9bbc0ed4ecb0847c75e6f833b67b698 /llvm/lib/Target/AMDGPU/AMDGPUAlwaysInlinePass.cpp
parent    e194f73e9f6a101dcb7dba5224c2d4b1fa1b7459 (diff)
download  bcm5719-llvm-638426fc36e08ccee78605a4d8136757ca0faf12.tar.gz
          bcm5719-llvm-638426fc36e08ccee78605a4d8136757ca0faf12.zip
[X86] Add __builtin_ia32_selectss_128 and __builtin_ia32_selectsd_128 that are suitable for use in scalar mask intrinsics.
These builtins convert the i8 mask argument to <8 x i1>, extract an i1, and then emit a select instruction. This replaces the '(__U & 1)' and ternary-operator pattern used in some of the intrinsics. The old sequence was lowered to a scalar 'and' plus a compare; the new sequence uses an i1 vector that will interoperate better with other mask intrinsics.

This removes the need to handle div_ss/sd specially in CGBuiltin.cpp. A follow-up patch will add the GCCBuiltin name back in llvm and remove the custom handling.

I made some adjustments to the legacy move_ss/sd intrinsics, which are reused here, to do a simpler extract and insert instead of two extracts and two inserts or a shuffle.

llvm-svn: 336622
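For illustration, here is a minimal sketch of how a masked scalar intrinsic can be written against the new builtin. This is not the actual avx512fintrin.h source from this change: the function name my_mask_add_ss is hypothetical, and the exact casts in clang's real headers may differ. It assumes a clang compiler where __builtin_ia32_selectss_128 exists (i.e. after this commit) and the vector types from <immintrin.h>.

    /* Sketch only: a masked scalar single-precision add using the new
     * builtin. Compile with clang; __m128, __mmask8, and __v4sf come
     * from <immintrin.h>. */
    #include <immintrin.h>

    /* Old pattern this commit replaces (lowered to a scalar 'and' + compare):
     *   __A[0] = (__U & 1) ? __A[0] + __B[0] : __W[0];
     */
    static __inline__ __m128 __attribute__((__always_inline__, __nodebug__))
    my_mask_add_ss(__m128 __W, __mmask8 __U, __m128 __A, __m128 __B)
    {
      __A[0] = __A[0] + __B[0];  /* scalar add in element 0 */
      /* The builtin converts __U to an <8 x i1>, extracts bit 0 as an i1,
       * and emits a select between the result (__A) and passthru (__W). */
      return (__m128)__builtin_ia32_selectss_128((__mmask8)__U,
                                                 (__v4sf)__A, (__v4sf)__W);
    }

Because the mask now travels through IR as an i1 vector rather than a scalar '(__U & 1)' test, this form composes better with the other AVX-512 mask intrinsics, as the commit message notes.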
Diffstat (limited to 'llvm/lib/Target/AMDGPU/AMDGPUAlwaysInlinePass.cpp')
0 files changed, 0 insertions, 0 deletions