| Commit message | Author | Age | Files | Lines |
| |
Differential Revision: http://reviews.llvm.org/D11420
llvm-svn: 242986
| |
add/div/mul/sub include rounding versions
Differential Revision: http://reviews.llvm.org/D11354
llvm-svn: 242790
| |
also made minor fix in "test_mm512_maskz_permutex2var_epi16"
Differential Revision: http://reviews.llvm.org/D11336
llvm-svn: 242635
| |
_ReadBarrier, _WriteBarrier, and _ReadWriteBarrier are essentially
memory barriers of one form or another. Model these as
atomic_signal_fence(ATOMIC_SEQ_CST).
__faststorefence is a curious intrinsic. Its single purpose seems to be
to provide an alternative to mfence when that instruction is slow. However,
mfence is not always slow and is, in general, preferable to a 'lock or'
sequence on certain CPUs. Give the compiler the freedom to select the best
sequence to get a fence.
llvm-svn: 242378
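A minimal sketch of the modeling described above, using illustrative wrapper
names rather than the actual Intrin.h definitions:

#include <atomic>

static inline void my_ReadWriteBarrier() {
  // Compiler-only barrier: prevents the compiler from reordering memory
  // accesses across this point; no fence instruction is emitted.
  std::atomic_signal_fence(std::memory_order_seq_cst);
}

static inline void my_faststorefence() {
  // Full hardware fence: the backend is free to pick mfence or a
  // 'lock or'-style sequence, whichever is faster on the target CPU.
  std::atomic_thread_fence(std::memory_order_seq_cst);
}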
| |
The vec_sld interface provides access to the vsldoi instruction.
Unlike most of the vec_* interfaces, we do not attempt to change the
generated code for vec_sld based on the endian mode. It is too
difficult to correctly infer the desired semantics because of
different element types, and the corrected instruction sequence is
expensive, involving loading a permute control vector and performing a
generalized permute.
For GCC, the approach was simply not to touch the vec_sld
implementation. When it came time for the LLVM implementation, I did
the same thing. However, this was hasty and incorrect. In LLVM's
version of altivec.h, vec_sld was previously defined in terms of the
vec_perm interface. Because vec_perm semantics are adjusted for
little endian, this means that leaving vec_sld untouched causes it to
generate something different for LE than for BE. Not good.
This patch adjusts the form of vec_perm that is used for vec_sld and
vec_vsldoi, effectively undoing the modifications so that the same
vsldoi instruction will be generated for both BE and LE.
There is an accompanying back-end patch to take care of some small
ripple effects caused by these changes.
llvm-svn: 242297
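A hedged illustration of the vec_sld/vec_perm relationship described above,
using big-endian element numbering; the helper name and constant are
illustrative, not the altivec.h source:

#include <altivec.h>

// vec_sld(a, b, 4) selects bytes 4..15 of a followed by bytes 0..3 of b,
// which is a vec_perm over the concatenation of a and b.
static vector signed char shift_left_by_4(vector signed char a,
                                           vector signed char b) {
  const vector unsigned char ctrl = {4,  5,  6,  7,  8,  9,  10, 11,
                                     12, 13, 14, 15, 16, 17, 18, 19};
  return vec_perm(a, b, ctrl);  // matches vec_sld(a, b, 4) under BE semantics
}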
| |
This patch corresponds to review:
http://reviews.llvm.org/D11184
A number of new interfaces for altivec.h (as mandated by the ABI):
vector float vec_cpsgn(vector float, vector float)
vector double vec_cpsgn(vector double, vector double)
vector double vec_or(vector bool long long, vector double)
vector double vec_or(vector double, vector bool long long)
vector double vec_re(vector double)
vector signed char vec_cntlz(vector signed char)
vector unsigned char vec_cntlz(vector unsigned char)
vector short vec_cntlz(vector short)
vector unsigned short vec_cntlz(vector unsigned short)
vector int vec_cntlz(vector int)
vector unsigned int vec_cntlz(vector unsigned int)
vector signed long long vec_cntlz(vector signed long long)
vector unsigned long long vec_cntlz(vector unsigned long long)
vector signed char vec_nand(vector bool signed char, vector signed char)
vector signed char vec_nand(vector signed char, vector bool signed char)
vector signed char vec_nand(vector signed char, vector signed char)
vector unsigned char vec_nand(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector unsigned char)
vector short vec_nand(vector bool short, vector short)
vector short vec_nand(vector short, vector bool short)
vector short vec_nand(vector short, vector short)
vector unsigned short vec_nand(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector unsigned short)
vector int vec_nand(vector bool int, vector int)
vector int vec_nand(vector int, vector bool int)
vector int vec_nand(vector int, vector int)
vector unsigned int vec_nand(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector unsigned int)
vector signed long long vec_nand(vector bool long long, vector signed long long)
vector signed long long vec_nand(vector signed long long, vector bool long long)
vector signed long long vec_nand(vector signed long long, vector signed long long)
vector unsigned long long vec_nand(vector bool long long, vector unsigned long long)
vector unsigned long long vec_nand(vector unsigned long long, vector bool long long)
vector unsigned long long vec_nand(vector unsigned long long, vector unsigned long long)
vector signed char vec_orc(vector bool signed char, vector signed char)
vector signed char vec_orc(vector signed char, vector bool signed char)
vector signed char vec_orc(vector signed char, vector signed char)
vector unsigned char vec_orc(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector unsigned char)
vector short vec_orc(vector bool short, vector short)
vector short vec_orc(vector short, vector bool short)
vector short vec_orc(vector short, vector short)
vector unsigned short vec_orc(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector unsigned short)
vector int vec_orc(vector bool int, vector int)
vector int vec_orc(vector int, vector bool int)
vector int vec_orc(vector int, vector int)
vector unsigned int vec_orc(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector unsigned int)
vector signed long long vec_orc(vector bool long long, vector signed long long)
vector signed long long vec_orc(vector signed long long, vector bool long long)
vector signed long long vec_orc(vector signed long long, vector signed long long)
vector unsigned long long vec_orc(vector bool long long, vector unsigned long long)
vector unsigned long long vec_orc(vector unsigned long long, vector bool long long)
vector unsigned long long vec_orc(vector unsigned long long, vector unsigned long long)
vector signed char vec_div(vector signed char, vector signed char)
vector unsigned char vec_div(vector unsigned char, vector unsigned char)
vector signed short vec_div(vector signed short, vector signed short)
vector unsigned short vec_div(vector unsigned short, vector unsigned short)
vector signed int vec_div(vector signed int, vector signed int)
vector unsigned int vec_div(vector unsigned int, vector unsigned int)
vector signed long long vec_div(vector signed long long, vector signed long long)
vector unsigned long long vec_div(vector unsigned long long, vector unsigned long long)
vector unsigned char vec_mul(vector unsigned char, vector unsigned char)
vector unsigned int vec_mul(vector unsigned int, vector unsigned int)
vector unsigned long long vec_mul(vector unsigned long long, vector unsigned long long)
vector unsigned short vec_mul(vector unsigned short, vector unsigned short)
vector signed char vec_mul(vector signed char, vector signed char)
vector signed int vec_mul(vector signed int, vector signed int)
vector signed long long vec_mul(vector signed long long, vector signed long long)
vector signed short vec_mul(vector signed short, vector signed short)
vector signed long long vec_mergeh(vector signed long long, vector signed long long)
vector signed long long vec_mergeh(vector signed long long, vector bool long long)
vector signed long long vec_mergeh(vector bool long long, vector signed long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergeh(vector bool long long, vector unsigned long long)
vector double vec_mergeh(vector double, vector double)
vector double vec_mergeh(vector double, vector bool long long)
vector double vec_mergeh(vector bool long long, vector double)
vector signed long long vec_mergel(vector signed long long, vector signed long long)
vector signed long long vec_mergel(vector signed long long, vector bool long long)
vector signed long long vec_mergel(vector bool long long, vector signed long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergel(vector bool long long, vector unsigned long long)
vector double vec_mergel(vector double, vector double)
vector double vec_mergel(vector double, vector bool long long)
vector double vec_mergel(vector bool long long, vector double)
vector signed int vec_pack(vector signed long long, vector signed long long)
vector unsigned int vec_pack(vector unsigned long long, vector unsigned long long)
vector bool int vec_pack(vector bool long long, vector bool long long)
llvm-svn: 242171
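A hedged usage sketch of a few of these interfaces (assumes a POWER8 target,
e.g. -mcpu=pwr8; the function name is illustrative):

#include <altivec.h>

vector unsigned int demo_bitops(vector unsigned int a, vector unsigned int b) {
  vector unsigned int lz = vec_cntlz(a);      // per-element count leading zeros
  vector unsigned int nand = vec_nand(a, b);  // bitwise ~(a & b)
  vector unsigned int quot = vec_div(a, b);   // per-element unsigned divide
  return vec_add(lz, vec_add(nand, quot));
}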
| |
Add 2 bits to ObjCOrBuiltinID (changed from 11 bits to 13 bits); see discussion in
Add support for new intrinsics that are already covered by the BE.
All the intrinsics are covered by tests.
Differential Revision: http://reviews.llvm.org/D10893
llvm-svn: 242144
| |
No functionality change is intended.
llvm-svn: 242087
| |
No functionality change intended.
llvm-svn: 242086
| |
The program is permitted to contain things like '#define x', so avoid
using identifiers that are not reserved for the implementation.
llvm-svn: 242010
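A small illustration of the point (hypothetical names, not the actual header
code):

#define x 1  /* perfectly legal user code */

/* A header parameter named 'x' would be macro-expanded into '1' and break.
   Double-underscore names are reserved for the implementation, so user
   macros cannot collide with them. */
static inline int __good_helper(int __x) { return __x + 1; }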
| |
Three things:
- The atomic intrinsics mandate memory barriers; let's start emitting
some.
- We don't need to manually create RMW operations; we can just do
__atomic_fetch_foo instead of performing __atomic_foo_fetch and
undoing foo.
- Don't use inline assembly; we don't need it for these intrinsics.
This fixes PR24101.
llvm-svn: 242009
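A hedged sketch of the second point, using an illustrative wrapper rather
than the actual Intrin.h code:

// An MSVC-style interlocked add can map straight onto the __atomic builtins.
static inline long my_InterlockedExchangeAdd(long volatile *p, long v) {
  // __atomic_fetch_add already returns the old value, so there is no need
  // to compute __atomic_add_fetch and then subtract v to undo it.
  return __atomic_fetch_add(p, v, __ATOMIC_SEQ_CST);
}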
| |
This patch corresponds to review:
http://reviews.llvm.org/D10972
Fix for the handling of dependent features that are enabled by default
on some CPUs (such as -mvsx, -mpower8-vector).
Also provides a number of new interfaces or fixes existing ones in
altivec.h.
Changed signatures to conform to ABI:
vector short vec_perm(vector signed short, vector signed short, vector unsigned char)
vector int vec_perm(vector signed int, vector signed int, vector unsigned char)
vector long long vec_perm(vector signed long long, vector signed long long, vector unsigned char)
vector signed char vec_sld(vector signed char, vector signed char, const int)
vector unsigned char vec_sld(vector unsigned char, vector unsigned char, const int)
vector bool char vec_sld(vector bool char, vector bool char, const int)
vector unsigned short vec_sld(vector unsigned short, vector unsigned short, const int)
vector signed short vec_sld(vector signed short, vector signed short, const int)
vector signed int vec_sld(vector signed int, vector signed int, const int)
vector unsigned int vec_sld(vector unsigned int, vector unsigned int, const int)
vector float vec_sld(vector float, vector float, const int)
vector signed char vec_splat(vector signed char, const int)
vector unsigned char vec_splat(vector unsigned char, const int)
vector bool char vec_splat(vector bool char, const int)
vector signed short vec_splat(vector signed short, const int)
vector unsigned short vec_splat(vector unsigned short, const int)
vector bool short vec_splat(vector bool short, const int)
vector pixel vec_splat(vector pixel, const int)
vector signed int vec_splat(vector signed int, const int)
vector unsigned int vec_splat(vector unsigned int, const int)
vector bool int vec_splat(vector bool int, const int)
vector float vec_splat(vector float, const int)
Added a VSX path to:
vector float vec_round(vector float)
Added interfaces:
vector signed char vec_eqv(vector signed char, vector signed char)
vector signed char vec_eqv(vector bool char, vector signed char)
vector signed char vec_eqv(vector signed char, vector bool char)
vector unsigned char vec_eqv(vector unsigned char, vector unsigned char)
vector unsigned char vec_eqv(vector bool char, vector unsigned char)
vector unsigned char vec_eqv(vector unsigned char, vector bool char)
vector signed short vec_eqv(vector signed short, vector signed short)
vector signed short vec_eqv(vector bool short, vector signed short)
vector signed short vec_eqv(vector signed short, vector bool short)
vector unsigned short vec_eqv(vector unsigned short, vector unsigned short)
vector unsigned short vec_eqv(vector bool short, vector unsigned short)
vector unsigned short vec_eqv(vector unsigned short, vector bool short)
vector signed int vec_eqv(vector signed int, vector signed int)
vector signed int vec_eqv(vector bool int, vector signed int)
vector signed int vec_eqv(vector signed int, vector bool int)
vector unsigned int vec_eqv(vector unsigned int, vector unsigned int)
vector unsigned int vec_eqv(vector bool int, vector unsigned int)
vector unsigned int vec_eqv(vector unsigned int, vector bool int)
vector signed long long vec_eqv(vector signed long long, vector signed long long)
vector signed long long vec_eqv(vector bool long long, vector signed long long)
vector signed long long vec_eqv(vector signed long long, vector bool long long)
vector unsigned long long vec_eqv(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_eqv(vector bool long long, vector unsigned long long)
vector unsigned long long vec_eqv(vector unsigned long long, vector bool long long)
vector float vec_eqv(vector float, vector float)
vector float vec_eqv(vector bool int, vector float)
vector float vec_eqv(vector float, vector bool int)
vector double vec_eqv(vector double, vector double)
vector double vec_eqv(vector bool long long, vector double)
vector double vec_eqv(vector double, vector bool long long)
vector bool long long vec_perm(vector bool long long, vector bool long long, vector unsigned char)
vector double vec_round(vector double)
vector double vec_splat(vector double, const int)
vector bool long long vec_splat(vector bool long long, const int)
vector signed long long vec_splat(vector signed long long, const int)
vector unsigned long long vec_splat(vector unsigned long long, const int)
vector bool int vec_sld(vector bool int, vector bool int, const int)
vector bool short vec_sld(vector bool short, vector bool short, const int)
llvm-svn: 241904
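A hedged usage sketch of a couple of the added interfaces (assumes
VSX/POWER8, e.g. -mcpu=pwr8; names are illustrative):

#include <altivec.h>

vector double splat_first(vector double v) {
  return vec_splat(v, 0);  // broadcast element 0 into both lanes
}

vector unsigned int equivalence(vector unsigned int a, vector unsigned int b) {
  return vec_eqv(a, b);    // bitwise ~(a ^ b)
}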
| |
llvm-svn: 241405
| |
This patch corresponds to review:
http://reviews.llvm.org/D10875
The bulk of the second round of additions to altivec.h.
The following interfaces were added:
vector double vec_floor(vector double)
vector double vec_madd(vector double, vector double, vector double)
vector float vec_msub(vector float, vector float, vector float)
vector double vec_msub(vector double, vector double, vector double)
vector float vec_mul(vector float, vector float)
vector double vec_mul(vector double, vector double)
vector float vec_nmadd(vector float, vector float, vector float)
vector double vec_nmadd(vector double, vector double, vector double)
vector double vec_nmsub(vector double, vector double, vector double)
vector double vec_nor(vector double, vector double)
vector double vec_or(vector double, vector double)
vector float vec_rint(vector float)
vector double vec_rint(vector double)
vector float vec_nearbyint(vector float)
vector double vec_nearbyint(vector double)
vector float vec_sqrt(vector float)
vector double vec_sqrt(vector double)
vector double vec_rsqrte(vector double)
vector double vec_sel(vector double, vector double, vector unsigned long long)
vector double vec_sub(vector double, vector double)
vector double vec_trunc(vector double)
vector double vec_xor(vector double, vector double)
vector double vec_xor(vector double, vector bool long long)
vector double vec_xor(vector bool long long, vector double)
New VSX paths for the following interfaces:
vector float vec_madd(vector float, vector float, vector float)
vector float vec_nmsub(vector float, vector float, vector float)
vector float vec_rsqrte(vector float)
vector float vec_trunc(vector float)
vector float vec_floor(vector float)
llvm-svn: 241399
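A hedged usage sketch of some of the new double-precision interfaces
(assumes VSX, e.g. -mvsx; the function name is illustrative):

#include <altivec.h>

vector double fma_demo(vector double a, vector double b, vector double c) {
  vector double p = vec_mul(a, b);      // element-wise multiply
  vector double m = vec_madd(a, b, c);  // fused multiply-add: a * b + c
  return vec_sub(m, p);                 // roughly c, modulo rounding
}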
| |
instructions introduced in POWER8.
These are the Clang-related changes for http://reviews.llvm.org/D10704
All builtins are added in altivec.h and guarded with the POWER8_VECTOR macro.
Phabricator review: http://reviews.llvm.org/D10736
llvm-svn: 241293
| |
llvm-svn: 241065
| |
llvm-svn: 241055
| |
Add intrinsics for the FXSR instructions (FXSAVE/FXSAVE64/FXRSTOR/FXRSTOR64)
These were previously declared in Intrin.h for MSVC compatibility, but now
that we have them implemented, these declarations can be removed.
llvm-svn: 241053
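A hedged usage sketch (x86-64; the save area must be 512 bytes and
16-byte aligned):

#include <immintrin.h>

void save_and_restore_fp_state() {
  alignas(16) char area[512];
  _fxsave(area);   // dump the x87/MMX/SSE state to memory
  _fxrstor(area);  // restore it from the same buffer
}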
| |
Includes tests.
Review: http://reviews.llvm.org/D10795
llvm-svn: 240941
| |
Blend, abs, packs, adds, subs, avg, max, min, permute.
All the intrinsics are covered by tests.
Review: http://reviews.llvm.org/D10799
llvm-svn: 240937
| |
by Igor Breger
http://reviews.llvm.org/D10797
llvm-svn: 240928
| |
This patch corresponds to review:
http://reviews.llvm.org/D10637
This is the first round of additions of missing builtins listed in the ABI
document. More to come (this builds on what seurer already added). This
patch adds:
vector signed long long vec_abs(vector signed long long)
vector double vec_abs(vector double)
vector signed long long vec_add(vector signed long long, vector signed long long)
vector unsigned long long vec_add(vector unsigned long long, vector unsigned long long)
vector double vec_add(vector double, vector double)
vector double vec_and(vector bool long long, vector double)
vector double vec_and(vector double, vector bool long long)
vector double vec_and(vector double, vector double)
vector signed long long vec_and(vector signed long long, vector signed long long)
vector double vec_andc(vector bool long long, vector double)
vector double vec_andc(vector double, vector bool long long)
vector double vec_andc(vector double, vector double)
vector signed long long vec_andc(vector signed long long, vector signed long long)
vector double vec_ceil(vector double)
vector bool long long vec_cmpeq(vector double, vector double)
vector bool long long vec_cmpge(vector double, vector double)
vector bool long long vec_cmpge(vector signed long long, vector signed long long)
vector bool long long vec_cmpge(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmpgt(vector double, vector double)
vector bool long long vec_cmple(vector double, vector double)
vector bool long long vec_cmple(vector signed long long, vector signed long long)
vector bool long long vec_cmple(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmplt(vector double, vector double)
vector bool long long vec_cmplt(vector signed long long, vector signed long long)
vector bool long long vec_cmplt(vector unsigned long long, vector unsigned long long)
llvm-svn: 240821
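A hedged usage sketch built only from the interfaces listed above (assumes
VSX, e.g. -mvsx; the function name is illustrative):

#include <altivec.h>

vector double demo_compare(vector double a, vector double b) {
  vector bool long long gt = vec_cmpgt(a, b);  // element-wise a > b
  vector double kept = vec_and(a, gt);         // keep a where a > b, else 0.0
  return vec_add(vec_abs(kept), vec_ceil(b));
}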
| |
llvm-svn: 240743
| |
Before MSVS 2015, MSVC's headers disagreed about int32_t, PRIx32, and so on.
Provide a wrapper header to fix this, so that -Wformat can still be used.
Fixes PR23412.
llvm-svn: 240741
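A hedged illustration of the kind of code the wrapper keeps warning-clean
under -Wformat:

#include <inttypes.h>
#include <stdio.h>

void print_id(uint32_t id) {
  printf("id = %" PRIx32 "\n", id);  // PRIx32 must match uint32_t exactly
}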
| |
Ever since the target attributes change, we don't need to guard these
headers with `requires`. Actually it's a bit worse, because if we do
then they are included textually under the covers, causing declarations
to appear in submodules they aren't supposed to be in.
llvm-svn: 240720
| |
llvm-svn: 239926
| |
llvm-svn: 239925
| |
This involved removing the conditional inclusion and replacing it
with target attributes matching the original conditional inclusion
and checks. The testcase update removes the macro checks for each
file and replaces them with usage of the __target__ attribute, e.g.:
int __attribute__((__target__("sse3"))) foo(int a) {
  _mm_mwait(0, 0);
  return 4;
}
This usage does require the enclosing function have the requisite
__target__ attribute for inlining and code generation - also for
any macro intrinsic uses in the enclosing function. There's no change
for existing uses of the intrinsic headers.
llvm-svn: 239883
| |
This is a precursor to changing them to use the new target attribute
code.
llvm-svn: 239882
| |
Saves some typing, and if someone wants to change them it makes doing so
much easier.
llvm-svn: 239782
| |
in section 10.1, __arm_{w,r}sr{,p,64}.
This includes arm_acle.h definitions with builtins and codegen to support
these; the intrinsics are implemented by generating read/write_register calls
which get appropriately lowered in the backend based on the register string
provided. SemaChecking is also implemented to reject invalid parameters.
Differential Revision: http://reviews.llvm.org/D9697
llvm-svn: 239737
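A hedged usage sketch of the system-register intrinsics (AArch64; the
register names are illustrative):

#include <arm_acle.h>
#include <stdint.h>

uint64_t read_virtual_counter() {
  return __arm_rsr64("cntvct_el0");  // read a 64-bit system register by name
}

void write_thread_pointer(uint64_t v) {
  __arm_wsr64("tpidr_el0", v);       // write a 64-bit system register by name
}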
| |
builtins
This patch corresponds to review:
http://reviews.llvm.org/D10095
This is for just two instructions and related builtins:
vbpermq
vgbbd
llvm-svn: 239506
| |
This revision just fixes the formatting of altivec.h.
llvm-svn: 239408
| |
This change was unrelated to r239170.
llvm-svn: 239176
| |
We would crash in the DeclPrinter when trying to pretty-print the
static_assert message, because C++1z-style assertions don't have a
message.
This fixes PR23756.
llvm-svn: 239170
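For reference, the construct involved (C++1z allows omitting the message):

static_assert(sizeof(int) >= 2, "int is too small");  // classic form
static_assert(sizeof(int) >= 2);                      // C++1z form, no message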
| |
Vector Programming" from appendix A of the OpenPOWER ABI for Linux Supplement document.
I also added tests for the new functions and updated another test that was looking for specific line numbers in error messages from altivec.h.
https://llvm.org/bugs/show_bug.cgi?id=23679
http://reviews.llvm.org/D10131
llvm-svn: 239066
| |
llvm-svn: 238386
| |
llvm-svn: 238345
| |
in POWER8.
These are the Clang-related changes for http://reviews.llvm.org/D9081
vadduqm
vaddeuqm
vaddcuq
vaddecuq
vsubuqm
vsubeuqm
vsubcuq
vsubecuq
All builtins are added in altivec.h, and guarded with the POWER8_VECTOR and
powerpc64 macros.
http://reviews.llvm.org/D9903
llvm-svn: 238145
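A hedged usage sketch, assuming the quadword instructions are reachable
through the usual vec_add/vec_sub overloads on 128-bit element vectors
(requires -mcpu=pwr8 on a 64-bit target; the function name is illustrative):

#include <altivec.h>

vector unsigned __int128 demo_quadword(vector unsigned __int128 a,
                                       vector unsigned __int128 b) {
  return vec_sub(vec_add(a, b), b);  // (a + b) - b as full 128-bit quantities
}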
| |
Differential Revision: http://reviews.llvm.org/D9855
llvm-svn: 237778
| |
_mm_broadcastsd_pd is basically an alias for _mm_movedup_pd; however, the
alias is only available from AVX2 onward.
llvm-svn: 237698
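A hedged illustration; both functions duplicate the low double (compile with
-mavx2 so both spellings are usable):

#include <immintrin.h>

__m128d dup_low_sse3(__m128d v) {
  return _mm_movedup_pd(v);      // SSE3 spelling
}

__m128d dup_low_avx2(__m128d v) {
  return _mm_broadcastsd_pd(v);  // AVX2 spelling of the same operation
}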
| |
These two intrinsics are alternative names for _mm256_slli_si256 and _mm256_srli_si256, respectively.
llvm-svn: 237693
| |
This patch adds support for the following new instructions in the
Power ISA 2.07:
vpksdss
vpksdus
vpkudus
vpkudum
vupkhsw
vupklsw
These instructions are available through the vec_packs, vec_packsu,
vec_unpackh, and vec_unpackl built-in interfaces. These are
lane-sensitive instructions, so the built-ins have different
implementations for big- and little-endian, and the instructions must
be marked as killing the vector swap optimization for now.
The first three instructions perform saturating pack operations. The
fourth performs a modulo pack operation, which means it can be
represented with a vector shuffle, and conversely the appropriate
vector shuffles may cause this instruction to be generated. The other
instructions are only generated via built-in support for now.
I noticed during patch preparation that the macro __VSX__ was not
previously predefined when the power8-vector or direct-move features
are requested. This is an error, and I've corrected that here as
well.
Appropriate tests have been added.
There is a companion patch to llvm for the rest of this support.
llvm-svn: 237500
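A hedged usage sketch of the affected built-ins on the new 64-bit element
types (requires -mcpu=pwr8; names are illustrative):

#include <altivec.h>

vector signed int pack_demo(vector signed long long a,
                            vector signed long long b) {
  return vec_packs(a, b);  // saturating pack: 2 x i64 per input -> 4 x i32
}

vector signed long long unpack_demo(vector signed int v) {
  return vec_unpackh(v);   // sign-extend the high half: i32 -> i64
}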
| |
xmmintrin.h includes emmintrin.h and vice versa if SSE2 is enabled. We break
this cycle for a modules build, and instead make the xmmintrin.h module
re-export the immintrin.h module. Also included is a fix for an assert in the
serialization code if a module exports another module that was declared later
in the same module map.
llvm-svn: 237321
| |
according to the spec.
Added FP compare intrinsics for SKX.
llvm-svn: 236715
| |
by Asaf Badouh (asaf.badouh@intel.com)
llvm-svn: 236218
| |
by Asaf Badouh (asaf.badouh@intel.com)
llvm-svn: 235986
| |
Added cuda_builtin_vars.h, which implements built-in CUDA variables
using __declspec(property).
Fields of built-in variables (except for warpSize) are implemented
using __declspec(property), which replaces reads/writes of a member field
with a call to a getter/setter member function, in this case with an
appropriate NVPTX builtin.
Added a test case to check diagnostics on attempts to construct or
improperly access a built-in variable.
Differential Revision: http://reviews.llvm.org/D9064
llvm-svn: 235448
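A hedged sketch of the __declspec(property) pattern described above
(illustrative names, not the cuda_builtin_vars.h contents; needs clang's
-fms-extensions or CUDA mode to parse __declspec):

struct __my_coord_t {
  // Reads of '.x' are rewritten into calls to the getter below.
  __declspec(property(get = __fetch_x)) unsigned int x;
  unsigned int __fetch_x() const {
    return 0;  // the real header returns the matching NVPTX builtin here
  }
};

__my_coord_t __my_threadIdx;  // usage: __my_threadIdx.x calls __fetch_x()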
| |
r235398 was causing a buildbot break due to missing Makefile changes.
llvm-svn: 235401
| |
Added cuda_builtin_vars.h, which implements built-in CUDA variables
using __declspec(property).
Fields of built-in variables (except for warpSize) are implemented
using __declspec(property), which replaces reads/writes of a member field
with a call to a getter/setter member function, in this case with an
appropriate NVPTX builtin.
Added a test case to check diagnostics on attempts to construct or
improperly access a built-in variable.
Differential Revision: http://reviews.llvm.org/D9064
llvm-svn: 235398