path: root/llvm/lib/Target/ARM/ARMISelLowering.cpp
author    Elena Demikhovsky <elena.demikhovsky@intel.com>  2014-12-16 09:10:08 +0000
committer Elena Demikhovsky <elena.demikhovsky@intel.com>  2014-12-16 09:10:08 +0000
commit    a79fc16bb05a37eb9b42fb3ab349b37b9c58e034 (patch)
tree      7d0cb10879d8bbabac871989249cd4e80cb57af7 /llvm/lib/Target/ARM/ARMISelLowering.cpp
parent    07649fb7c5d7dec2e35b5c1c1c7907bafabb6a90 (diff)
download  bcm5719-llvm-a79fc16bb05a37eb9b42fb3ab349b37b9c58e034.tar.gz
          bcm5719-llvm-a79fc16bb05a37eb9b42fb3ab349b37b9c58e034.zip
X86: Added FeatureVectorUAMem for all AVX architectures.
According to the AVX specification:

"Most arithmetic and data processing instructions encoded using the VEX prefix and performing memory accesses have more flexible memory alignment requirements than instructions that are encoded without the VEX prefix. Specifically, with the exception of explicitly aligned 16- or 32-byte SIMD load/store instructions, most VEX-encoded, arithmetic and data processing instructions operate in a flexible environment regarding memory address alignment, i.e. VEX-encoded instructions with 32-byte or 16-byte load semantics will support unaligned load operation by default. Memory arguments for most instructions with VEX prefix operate normally without causing #GP(0) on any byte-granularity alignment (unlike Legacy SSE instructions)."

The same applies to AVX-512.

This change does not affect code generation right now, because only the "memop" pattern fragment depends on FeatureVectorUAMem, and that fragment is not used in AVX patterns; all AVX patterns are based on the "unaligned load" fragment anyway.

llvm-svn: 224330
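As a rough illustration of why this is a no-op for now: in the X86 backend, FeatureVectorUAMem is consulted only by the "memop" pattern fragment, which in the TableGen of this era looked roughly like the sketch below. The file name (X86InstrFragmentsSIMD.td) and the exact predicate spelling are recalled from the LLVM tree of that time and may differ slightly; treat this as an illustrative sketch, not the verbatim source.

```
// Sketch of the 'memop' PatFrag (approximately X86InstrFragmentsSIMD.td).
// It matches a vector load either when the subtarget advertises flexible
// vector memory alignment (FeatureVectorUAMem) or when the load is
// sufficiently aligned. AVX patterns instead use an unconditional
// unaligned-load fragment, so enabling the feature changes nothing yet.
def memop : PatFrag<(ops node:$ptr), (load node:$ptr), [{
  return Subtarget->hasVectorUAMem()
      || cast<LoadSDNode>(N)->getAlignment() >= 16;
}]>;
```

Because AVX patterns select loads through the always-unaligned fragment, flipping FeatureVectorUAMem on for AVX targets only widens what "memop" could match, without changing any instruction selected today.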
Diffstat (limited to 'llvm/lib/Target/ARM/ARMISelLowering.cpp')
0 files changed, 0 insertions, 0 deletions