Commit message | Author | Age | Files | Lines
Add intrinsics for the
XSAVE instructions (XSAVE/XSAVE64/XRSTOR/XRSTOR64)
XSAVEOPT instructions (XSAVEOPT/XSAVEOPT64)
XSAVEC instructions (XSAVEC/XSAVEC64)
XSAVES instructions (XSAVES/XSAVES64/XRSTORS/XRSTORS64)
Differential Revision: http://reviews.llvm.org/D13014
llvm-svn: 250158
llvm-svn: 245929
_rotl, _rotwl and _lrotl (and their right-shift counterparts) are official x86
intrinsics, and should be supported regardless of environment. This is in contrast
to _rotl8, _rotl16, and _rotl64 which are MS-specific.
Note that the MS documentation for _lrotl is different from the Intel
documentation. Intel explicitly documents it as a 64-bit rotate, while for MS,
since sizeof(unsigned long) for MSVC is always 4, a 32-bit rotate is implied.
Differential Revision: http://reviews.llvm.org/D12271
llvm-svn: 245923
Add intrinsics for the FXSR instructions (FXSAVE/FXSAVE64/FXRSTOR/FXRSTOR64)
These were previously declared in Intrin.h for MSVC compatibility, but now
that we have them implemented, these declarations can be removed.
llvm-svn: 241053
Includes tests.
Review: http://reviews.llvm.org/D10795
llvm-svn: 240941
llvm-svn: 239926
llvm-svn: 239925
This involved removing the conditional inclusion and replacing them
with target attributes matching the original conditional inclusion
and checks. The testcase update removes the macro checks for each
file and replaces them with usage of the __target__ attribute, e.g.:
int __attribute__((__target__(("sse3")))) foo(int a) {
  _mm_mwait(0, 0);
  return 4;
}
This usage does require the enclosing function have the requisite
__target__ attribute for inlining and code generation - also for
any macro intrinsic uses in the enclosing function. There's no change
for existing uses of the intrinsic headers.
llvm-svn: 239883
by Asaf Badouh (asaf.badouh@intel.com)
llvm-svn: 236218
llvm-svn: 221130
Added tests.
Patch by Maxim Blumenthal <maxim.blumenthal@intel.com>
llvm-svn: 219319
llvm-svn: 218117
The set is small; that is what I have right now.
Everybody is welcome to add more.
llvm-svn: 213641
and one for PCLMUL support. The current immintrin.h header only includes
wmmintrin.h if AES support is enabled. It should include it if either AES or
PCLMUL is enabled (GCC's version of immintrin.h does this).
Patch by John Baldwin!
llvm-svn: 202871
This is consistent with ICC and Intel's SHA-enabled GCC version.
llvm-svn: 191002
llvm-svn: 178330
- New options '-mrtm'/'-mno-rtm' are added to enable/disable RTM feature
- Builtin macro '__RTM__' is defined if RTM feature is enabled
- RTM intrinsic header is added and introduces 3 new intrinsics, namely
'_xbegin', '_xend', and '_xabort'.
- 3 new builtins are added for compatibility with gcc, namely
'__builtin_ia32_xbegin', '__builtin_ia32_xend', and '__builtin_ia32_xabort'.
- Test cases for pre-defined macro and new intrinsic codegen are added.
llvm-svn: 167665
llvm-svn: 160118
llvm-svn: 157913
llvm-svn: 147275
llvm-svn: 147263
llvm-svn: 147262
used to store the builtinID when serializing the identifier table.
llvm-svn: 146855
- This is the official way to get AVX intrinsics, we might want to disallow
direct inclusion of avxintrin.h, just like GCC does.
llvm-svn: 111660