Commit history (each entry: commit message — author, date, files changed / lines changed):
* [X86] Add guards to some of the x86 intrinsic tests to skip 64-bit mode only intrinsics when compiled for 32-bit mode. — Craig Topper, 2019-07-10 (1 file, -0/+2)

  All the command lines are for 64-bit mode, but sometimes I compile the tests in 32-bit mode to see what assembly we get, and we need to skip these to do that.

  llvm-svn: 365668
* [X86] NFC Include immintrin.h in CodeGen tests — Gabor Buella, 2018-05-24 (1 file, -1/+1)

  Following r333110: "Move all Intel defined intrinsic includes into immintrin.h".

  llvm-svn: 333160
* [X86] Remove the mm_malloc.h include guard hack from the X86 builtins tests — Elad Cohen, 2016-09-28 (1 file, -4/+2)

  The X86 clang/test/CodeGen/*builtins.c tests define the mm_malloc.h include guard as a hack to avoid its inclusion (mm_malloc.h requires a hosted environment, since it expects stdlib.h to be available — which is not the case in these internal clang codegen tests). This patch removes the hack and instead passes -ffreestanding to clang cc1.

  Differential Revision: https://reviews.llvm.org/D24825

  llvm-svn: 282581
* After PR28761 use -Wall with -Werror in builtins tests to identify possible problems in headers — Eric Christopher, 2016-08-04 (1 file, -2/+2)

  llvm-svn: 277696
* [X86][SSE42] Sync with llvm/test/CodeGen/X86/sse42-intrinsics-fast-isel.ll — Simon Pilgrim, 2016-05-18 (1 file, -42/+26)

  llvm-svn: 269931
* [X86] Stripped backend codegen tests — Simon Pilgrim, 2015-12-03 (1 file, -25/+0)

  As discussed on the mailing list, backend tests need to be put in llvm/test/CodeGen/X86 as fast-isel tests using IR that is as close as possible to what is generated here. The llvm tests will be (re)added in a future commit. I will update PR24580 on this new plan.

  llvm-svn: 254594
* Canonicalize some of the x86 builtin tests and either remove or comment about optimization options — Eric Christopher, 2015-10-14 (1 file, -4/+4)

  llvm-svn: 250271
* Fix the SSE4 byte sign extension in a cleaner way, and more thoroughly test that our intrinsics behave the same under -fsigned-char and -funsigned-char — Chandler Carruth, 2015-10-01 (1 file, -0/+23)

  This further testing uncovered that AVX-2 has a broken cmpgt for 8-bit elements, and has for a long time. This is fixed in the same way as SSE4 handles the case.

  The other ISA extensions currently work correctly because they use specific instruction intrinsics. As soon as they are rewritten in terms of generic IR, they will need to add these special casts. I've added the necessary testing to catch this however, so we shouldn't have to chase it down again.

  I considered changing the core typedef to be signed, but that seems like a bad idea. Notably, it would be an ABI break if anyone is reaching into the innards of the intrinsic headers and passing __v16qi on an API boundary. I can't be completely confident that this wouldn't happen due to a macro expanding in a lambda, etc., so it seems much better to leave it alone. It also matches GCC's behavior exactly.

  A fun side note is that for both GCC and Clang, -funsigned-char really does change the semantics of __v16qi. To observe this, consider:

  ```
  % cat x.cc
  #include <smmintrin.h>
  #include <iostream>

  int main() {
    __v16qi a = { 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
    __v16qi b = _mm_set1_epi8(-1);
    std::cout << (int)(a / b)[0] << ", " << (int)(a / b)[1] << '\n';
  }
  % clang++ -o x x.cc && ./x
  -1, 1
  % clang++ -funsigned-char -o x x.cc && ./x
  0, 1
  ```

  However, while this may be surprising, both Clang and GCC agree.

  Differential Revision: http://reviews.llvm.org/D13324

  llvm-svn: 249097
* [X86][SSE42] Added SSE42 IR + assembly codegen builtin tests — Simon Pilgrim, 2015-09-06 (1 file, -0/+141)

  llvm-svn: 246944