path: root/clang/test/CodeGen
* [X86][AVX512F] Add FP scalar intrinsics (Asaf Badouh, 2015-07-23, 1 file, -0/+300)
  intrinsics for: add/sub/mul/div/min/max in their FP scalar versions
  Differential Revision: http://reviews.llvm.org/D11418
  llvm-svn: 243009
* [X86][AVX512BW] add madd and maddubs intrinsics (Asaf Badouh, 2015-07-23, 1 file, -0/+30)
  Differential Revision: http://reviews.llvm.org/D11420
  llvm-svn: 242986
* [X86][AVX512F] add FP arithmetic intrinsics (Asaf Badouh, 2015-07-21, 1 file, -0/+201)
  add/div/mul/sub include rounding versions
  Differential Revision: http://reviews.llvm.org/D11354
  llvm-svn: 242790
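  A minimal sketch of what a rounding-mode variant looks like from user code; whether this exact intrinsic is among the ones added by this commit is an assumption (requires AVX-512F, e.g. -mavx512f):

      #include <immintrin.h>

      // Packed add with an explicit static rounding mode, representative of the
      // "rounding versions" mentioned above.
      __m512d add_rne(__m512d a, __m512d b) {
        // Round to nearest even, suppress FP exceptions.
        return _mm512_add_round_pd(a, b, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
      }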
* [MS Compat] Add support for __declspec(noalias) (David Majnemer, 2015-07-20, 1 file, -0/+5)
  The attribute '__declspec(noalias)' communicates that the function only accesses memory pointed to by its pointer-typed arguments.
  llvm-svn: 242728
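  A minimal sketch of how the attribute is written (hypothetical function; compile as C with clang-cl or -fms-extensions):

      // __declspec(noalias): the function promises to access memory only through
      // its pointer-typed arguments, letting the optimizer relax aliasing
      // assumptions at call sites.
      __declspec(noalias) void scale(float *dst, const float *src, int n) {
        for (int i = 0; i < n; ++i)
          dst[i] = 2.0f * src[i];
      }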
* Fix quoting of #pragma comment for PS4. (Yunzhong Gao, 2015-07-20, 1 file, -0/+1)
  This is the PS4 counterpart to r229376, which quotes the library name if the name contains a space. It was discovered that if a library name contains both double-quote and space characters, quoting the name might produce unexpected results, but we are mostly concerned with a Windows host environment, which does not allow double quotes or slashes in file/folder names.
  Differential Revision: http://reviews.llvm.org/D11275
  llvm-svn: 242689
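  For reference, a small example of the pragma involved (hypothetical library names); with this change the emitted linker option is quoted when the name contains a space:

      // Embeds linker options asking for these libraries to be pulled in.
      #pragma comment(lib, "m")
      #pragma comment(lib, "my engine")   // needs quoting: the name contains a space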
* [CodeGen] Flip lanes when lowering __builtin_palignr with one lane (Benjamin Kramer, 2015-07-20, 1 file, -2/+2)
  Otherwise we'd pick the wrong lane for the resulting shuffle and miscompile code. PR24187.
  llvm-svn: 242678
* [X86, inlineasm] Improve analysis of x,Y0,Yi,Ym,Yt,L,e,Z,s asm constraints (patch by Alexey Frolov) (Alexey Bataev, 2015-07-20, 1 file, -4/+8)
  Improve Sema checking of 9 existing inline asm constraints ('x', 'Y*', 'L', 'e', 'Z', 's').
  Differential Revision: http://reviews.llvm.org/D10536
  llvm-svn: 242665
* [X86][AVX512BW] add clang intrinsics for pmulhrsw / pmulhuw / pmulhw (Asaf Badouh, 2015-07-19, 1 file, -1/+48)
  Also made a minor fix in "test_mm512_maskz_permutex2var_epi16".
  Differential Revision: http://reviews.llvm.org/D11336
  llvm-svn: 242635
* Fix test case in r242565 (Steven Wu, 2015-07-17, 1 file, -1/+2)
  llvm-svn: 242571
* Fix -save-temp when using objc-arc, sanitizer and profiling (Steven Wu, 2015-07-17, 1 file, -0/+1)
  Currently, -save-temp will cause ObjCARC optimization to be dropped, the sanitizer pass to run early in the pipeline, and profiling instrumentation to run twice. Fix the issue by properly disabling all passes in the optimization pipeline when generating bitcode output, and by parsing some of the Language Options even when the input is bitcode so the passes can be set up correctly.
  llvm-svn: 242565
* Disable #pragma redefine_extname for C++ code as it does not make sense in such a context. (Aaron Ballman, 2015-07-16, 1 file, -0/+6)
  Patch by Andrey Bokhanko!
  llvm-svn: 242420
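  A minimal sketch of the pragma in question (hypothetical symbol names); after this change it only takes effect in C, since C++ name mangling makes the plain rename meaningless:

      /* In C, calls to old_api are emitted against the external symbol old_api_v2. */
      #pragma redefine_extname old_api old_api_v2
      int old_api(int x);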
* [ARM] Pass subtarget feature "+no-movt" instead of passing backend option "-arm-use-movt=0" (Akira Hatanaka, 2015-07-16, 1 file, -0/+7)
  This change is needed since backend options do not make it to the backend when doing LTO and are not capable of changing the behavior of code-gen passes on a per-function basis.
  rdar://problem/21529937
  Differential Revision: http://reviews.llvm.org/D11025
  llvm-svn: 242368
* [PPC64] Update tests for vec_sld (Bill Schmidt, 2015-07-15, 1 file, -1/+145)
  Revision 224297 modified the behavior of vec_sld for little endian so that LLVM will generate the correct corresponding vsldoi instruction. I neglected to update the existing tests, which continued to pass because they were not specific enough. This patch adds enough specificity to the tests to make them useful for BE and LE testing of vec_sld.
  llvm-svn: 242313
* Add missing builtins to altivec.h for ABI compliance (vol. 4) (Nemanja Ivanovic, 2015-07-14, 3 files, -5/+492)
  This patch corresponds to review: http://reviews.llvm.org/D11184
  A number of new interfaces for altivec.h (as mandated by the ABI):
  vector float vec_cpsgn(vector float, vector float); vector double vec_cpsgn(vector double, vector double)
  vector double vec_or(vector bool long long, vector double); vector double vec_or(vector double, vector bool long long)
  vector double vec_re(vector double)
  vector signed char vec_cntlz(vector signed char); vector unsigned char vec_cntlz(vector unsigned char); vector short vec_cntlz(vector short); vector unsigned short vec_cntlz(vector unsigned short); vector int vec_cntlz(vector int); vector unsigned int vec_cntlz(vector unsigned int); vector signed long long vec_cntlz(vector signed long long); vector unsigned long long vec_cntlz(vector unsigned long long)
  vector signed char vec_nand(vector bool signed char, vector signed char); vector signed char vec_nand(vector signed char, vector bool signed char); vector signed char vec_nand(vector signed char, vector signed char)
  vector unsigned char vec_nand(vector bool unsigned char, vector unsigned char); vector unsigned char vec_nand(vector unsigned char, vector bool unsigned char); vector unsigned char vec_nand(vector unsigned char, vector unsigned char)
  vector short vec_nand(vector bool short, vector short); vector short vec_nand(vector short, vector bool short); vector short vec_nand(vector short, vector short)
  vector unsigned short vec_nand(vector bool unsigned short, vector unsigned short); vector unsigned short vec_nand(vector unsigned short, vector bool unsigned short); vector unsigned short vec_nand(vector unsigned short, vector unsigned short)
  vector int vec_nand(vector bool int, vector int); vector int vec_nand(vector int, vector bool int); vector int vec_nand(vector int, vector int)
  vector unsigned int vec_nand(vector bool unsigned int, vector unsigned int); vector unsigned int vec_nand(vector unsigned int, vector bool unsigned int); vector unsigned int vec_nand(vector unsigned int, vector unsigned int)
  vector signed long long vec_nand(vector bool long long, vector signed long long); vector signed long long vec_nand(vector signed long long, vector bool long long); vector signed long long vec_nand(vector signed long long, vector signed long long)
  vector unsigned long long vec_nand(vector bool long long, vector unsigned long long); vector unsigned long long vec_nand(vector unsigned long long, vector bool long long); vector unsigned long long vec_nand(vector unsigned long long, vector unsigned long long)
  vector signed char vec_orc(vector bool signed char, vector signed char); vector signed char vec_orc(vector signed char, vector bool signed char); vector signed char vec_orc(vector signed char, vector signed char)
  vector unsigned char vec_orc(vector bool unsigned char, vector unsigned char); vector unsigned char vec_orc(vector unsigned char, vector bool unsigned char); vector unsigned char vec_orc(vector unsigned char, vector unsigned char)
  vector short vec_orc(vector bool short, vector short); vector short vec_orc(vector short, vector bool short); vector short vec_orc(vector short, vector short)
  vector unsigned short vec_orc(vector bool unsigned short, vector unsigned short); vector unsigned short vec_orc(vector unsigned short, vector bool unsigned short); vector unsigned short vec_orc(vector unsigned short, vector unsigned short)
  vector int vec_orc(vector bool int, vector int); vector int vec_orc(vector int, vector bool int); vector int vec_orc(vector int, vector int)
  vector unsigned int vec_orc(vector bool unsigned int, vector unsigned int); vector unsigned int vec_orc(vector unsigned int, vector bool unsigned int); vector unsigned int vec_orc(vector unsigned int, vector unsigned int)
  vector signed long long vec_orc(vector bool long long, vector signed long long); vector signed long long vec_orc(vector signed long long, vector bool long long); vector signed long long vec_orc(vector signed long long, vector signed long long)
  vector unsigned long long vec_orc(vector bool long long, vector unsigned long long); vector unsigned long long vec_orc(vector unsigned long long, vector bool long long); vector unsigned long long vec_orc(vector unsigned long long, vector unsigned long long)
  vector signed char vec_div(vector signed char, vector signed char); vector unsigned char vec_div(vector unsigned char, vector unsigned char); vector signed short vec_div(vector signed short, vector signed short); vector unsigned short vec_div(vector unsigned short, vector unsigned short); vector signed int vec_div(vector signed int, vector signed int); vector unsigned int vec_div(vector unsigned int, vector unsigned int); vector signed long long vec_div(vector signed long long, vector signed long long); vector unsigned long long vec_div(vector unsigned long long, vector unsigned long long)
  vector unsigned char vec_mul(vector unsigned char, vector unsigned char); vector unsigned int vec_mul(vector unsigned int, vector unsigned int); vector unsigned long long vec_mul(vector unsigned long long, vector unsigned long long); vector unsigned short vec_mul(vector unsigned short, vector unsigned short); vector signed char vec_mul(vector signed char, vector signed char); vector signed int vec_mul(vector signed int, vector signed int); vector signed long long vec_mul(vector signed long long, vector signed long long); vector signed short vec_mul(vector signed short, vector signed short)
  vector signed long long vec_mergeh(vector signed long long, vector signed long long); vector signed long long vec_mergeh(vector signed long long, vector bool long long); vector signed long long vec_mergeh(vector bool long long, vector signed long long); vector unsigned long long vec_mergeh(vector unsigned long long, vector unsigned long long); vector unsigned long long vec_mergeh(vector unsigned long long, vector bool long long); vector unsigned long long vec_mergeh(vector bool long long, vector unsigned long long); vector double vec_mergeh(vector double, vector double); vector double vec_mergeh(vector double, vector bool long long); vector double vec_mergeh(vector bool long long, vector double)
  vector signed long long vec_mergel(vector signed long long, vector signed long long); vector signed long long vec_mergel(vector signed long long, vector bool long long); vector signed long long vec_mergel(vector bool long long, vector signed long long); vector unsigned long long vec_mergel(vector unsigned long long, vector unsigned long long); vector unsigned long long vec_mergel(vector unsigned long long, vector bool long long); vector unsigned long long vec_mergel(vector bool long long, vector unsigned long long); vector double vec_mergel(vector double, vector double); vector double vec_mergel(vector double, vector bool long long); vector double vec_mergel(vector bool long long, vector double)
  vector signed int vec_pack(vector signed long long, vector signed long long); vector unsigned int vec_pack(vector unsigned long long, vector unsigned long long); vector bool int vec_pack(vector bool long long, vector bool long long)
  llvm-svn: 242171
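  A small usage sketch for two of the interfaces above (illustrative only; assumes a POWER8 target, e.g. -mcpu=pwr8 with -maltivec):

      #include <altivec.h>

      // Count leading zeros per element with the new vec_cntlz overload, then
      // combine with a bitwise NAND using the new vec_nand overload.
      vector unsigned int clz_then_nand(vector unsigned int a, vector unsigned int b) {
        vector unsigned int lz = vec_cntlz(a);
        return vec_nand(lz, b);
      }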
* [x86] add 2 bit to ObjCOrBuiltinID and new intrinsics (Asaf Badouh, 2015-07-14, 1 file, -0/+585)
  Add 2 bits to ObjCOrBuiltinID (changed from 11 bits to 13 bits); see the discussion in the review. Add new intrinsics support that is already covered by the BE. All the intrinsics are covered by tests.
  Differential Revision: http://reviews.llvm.org/D10893
  llvm-svn: 242144
* [MS ABI] Don't generate code for unreferenced inline definitions of library builtins (David Majnemer, 2015-07-10, 1 file, -0/+4)
  We should only consider declarations which were written; implicit declarations shouldn't be considered.
  This fixes PR24084.
  llvm-svn: 241941
* [inlineasm] Attach readonly and readnone to inline-asm instructions. (Akira Hatanaka, 2015-07-10, 1 file, -0/+33)
  Previously, clang/llvm treated inline-asm instructions conservatively, choosing not to eliminate the instructions or hoist them out of a loop even when it was safe to do so. This commit makes changes to attach a readonly or readnone attribute to an inline-asm instruction, which enables passes such as LICM and EarlyCSE to move or optimize away the instruction.
  rdar://problem/11358192
  Differential Revision: http://reviews.llvm.org/D10546
  llvm-svn: 241930
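  A sketch of the kind of inline asm that benefits (illustrative x86 snippet): no memory operands or clobbers and not volatile, so the result depends only on the register inputs and the instruction can now be hoisted or CSE'd:

      static inline unsigned byteswap(unsigned x) {
        // Pure register-to-register computation; no "memory" clobber, not volatile.
        __asm__("bswapl %0" : "+r"(x));
        return x;
      }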
* Respect alignment of nested bitfields (Ulrich Weigand, 2015-07-10, 2 files, -7/+36)
  tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:

      struct XBitfield {
        unsigned b1 : 10;
        unsigned b2 : 12;
        unsigned b3 : 10;
      };
      struct YBitfield {
        char x;
        struct XBitfield y;
      } __attribute((packed));
      struct YBitfield gbitfield;

      unsigned test7() {
        // CHECK: @test7
        // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
        return gbitfield.y.b2;
      }

  The "align 4" is actually wrong. Accessing all of "gbitfield.y" as a single i32 is of course possible, but that still doesn't make it 4-byte aligned as it remains packed at offset 1 in the surrounding gbitfield object.
  This alignment was changed by commit r169489, which also introduced changes to bitfield access code in CGExpr.cpp. Code before that change used to take into account *both* the alignment of the field to be accessed within the current struct, *and* the alignment of that outer struct itself; this logic was removed by the above commit. Neglecting to consider both values can cause incorrect code to be generated (I've seen an unaligned access crash on SystemZ due to this bug).
  In order to always use the best known alignment value, this patch removes the CGBitFieldInfo::StorageAlignment member and replaces it with a StorageOffset member specifying the offset from the start of the surrounding struct to the bitfield's underlying storage. This offset can then be combined with the best-known alignment for a bitfield access lvalue to determine the alignment to use when accessing the bitfield's storage.
  Differential Revision: http://reviews.llvm.org/D11034
  llvm-svn: 241916
* Add missing builtins to altivec.h for ABI compliance (vol. 3) (Nemanja Ivanovic, 2015-07-10, 3 files, -30/+579)
  This patch corresponds to review: http://reviews.llvm.org/D10972
  Fix for the handling of dependent features that are enabled by default on some CPU's (such as -mvsx, -mpower8-vector). Also provides a number of new interfaces or fixes existing ones in altivec.h.
  Changed signatures to conform to ABI:
  vector short vec_perm(vector signed short, vector signed short, vector unsigned char); vector int vec_perm(vector signed int, vector signed int, vector unsigned char); vector long long vec_perm(vector signed long long, vector signed long long, vector unsigned char)
  vector signed char vec_sld(vector signed char, vector signed char, const int); vector unsigned char vec_sld(vector unsigned char, vector unsigned char, const int); vector bool char vec_sld(vector bool char, vector bool char, const int); vector unsigned short vec_sld(vector unsigned short, vector unsigned short, const int); vector signed short vec_sld(vector signed short, vector signed short, const int); vector signed int vec_sld(vector signed int, vector signed int, const int); vector unsigned int vec_sld(vector unsigned int, vector unsigned int, const int); vector float vec_sld(vector float, vector float, const int)
  vector signed char vec_splat(vector signed char, const int); vector unsigned char vec_splat(vector unsigned char, const int); vector bool char vec_splat(vector bool char, const int); vector signed short vec_splat(vector signed short, const int); vector unsigned short vec_splat(vector unsigned short, const int); vector bool short vec_splat(vector bool short, const int); vector pixel vec_splat(vector pixel, const int); vector signed int vec_splat(vector signed int, const int); vector unsigned int vec_splat(vector unsigned int, const int); vector bool int vec_splat(vector bool int, const int); vector float vec_splat(vector float, const int)
  Added a VSX path to: vector float vec_round(vector float)
  Added interfaces:
  vector signed char vec_eqv(vector signed char, vector signed char); vector signed char vec_eqv(vector bool char, vector signed char); vector signed char vec_eqv(vector signed char, vector bool char)
  vector unsigned char vec_eqv(vector unsigned char, vector unsigned char); vector unsigned char vec_eqv(vector bool char, vector unsigned char); vector unsigned char vec_eqv(vector unsigned char, vector bool char)
  vector signed short vec_eqv(vector signed short, vector signed short); vector signed short vec_eqv(vector bool short, vector signed short); vector signed short vec_eqv(vector signed short, vector bool short)
  vector unsigned short vec_eqv(vector unsigned short, vector unsigned short); vector unsigned short vec_eqv(vector bool short, vector unsigned short); vector unsigned short vec_eqv(vector unsigned short, vector bool short)
  vector signed int vec_eqv(vector signed int, vector signed int); vector signed int vec_eqv(vector bool int, vector signed int); vector signed int vec_eqv(vector signed int, vector bool int)
  vector unsigned int vec_eqv(vector unsigned int, vector unsigned int); vector unsigned int vec_eqv(vector bool int, vector unsigned int); vector unsigned int vec_eqv(vector unsigned int, vector bool int)
  vector signed long long vec_eqv(vector signed long long, vector signed long long); vector signed long long vec_eqv(vector bool long long, vector signed long long); vector signed long long vec_eqv(vector signed long long, vector bool long long)
  vector unsigned long long vec_eqv(vector unsigned long long, vector unsigned long long); vector unsigned long long vec_eqv(vector bool long long, vector unsigned long long); vector unsigned long long vec_eqv(vector unsigned long long, vector bool long long)
  vector float vec_eqv(vector float, vector float); vector float vec_eqv(vector bool int, vector float); vector float vec_eqv(vector float, vector bool int)
  vector double vec_eqv(vector double, vector double); vector double vec_eqv(vector bool long long, vector double); vector double vec_eqv(vector double, vector bool long long)
  vector bool long long vec_perm(vector bool long long, vector bool long long, vector unsigned char)
  vector double vec_round(vector double)
  vector double vec_splat(vector double, const int); vector bool long long vec_splat(vector bool long long, const int); vector signed long long vec_splat(vector signed long long, const int); vector unsigned long long vec_splat(vector unsigned long long, const int)
  vector bool int vec_sld(vector bool int, vector bool int, const int); vector bool short vec_sld(vector bool short, vector bool short, const int)
  llvm-svn: 241904
* Respect alignment when loading up a coerced function argument (Ulrich Weigand, 2015-07-10, 6 files, -57/+73)
  Code in CGCall.cpp that loads up function arguments that need to be coerced to a different type may in some cases ignore the fact that the source of the argument is not naturally aligned. This may cause incorrect code to be generated.
  In some places in CreateCoercedLoad, we already have setAlignment calls to address this, but I ran into one where it was missing, causing wrong code generation on SystemZ. However, in that location, we do not actually know what alignment of the source location we can rely on; the callers do not pass anything to this routine. This is already an issue in other places in CreateCoercedLoad, and the same problem exists for CreateCoercedStore.
  To avoid pessimising code, and to fix the FIXMEs already in place, this patch also adds an alignment argument to the CreateCoerced* routines and uses it instead of forcing an alignment of 1. The callers are changed to pass in the best information they have. This actually requires changes in a number of existing test cases since we now get better alignment in many places.
  Differential Revision: http://reviews.llvm.org/D11033
  llvm-svn: 241898
* Re-enable 32-bit SEH after the alignment fix (Reid Kleckner, 2015-07-10, 2 files, -5/+3)
  llvm-svn: 241878
* Disable 32-bit SEH, again (Reid Kleckner, 2015-07-08, 2 files, -3/+5)
  Move the diagnostic back to codegen so that we can compile ATL on the self-host bot. We don't actually end up emitting code for the __try, so the diagnostic won't be hit.
  llvm-svn: 241761
* [SEH] Re-enable SEH on x86 Windows after r241699 (Reid Kleckner, 2015-07-08, 2 files, -5/+3)
  llvm-svn: 241704
* Revert "Revert r241620 and follow-up commits" and move the initializationAdrian Prantl2015-07-082-1/+2
| | | | | | of the llvm targets from clang/CodeGen into ClangCheck.cpp and CIndex.cpp. llvm-svn: 241653
* [SEH] Switch from frameaddress(0) to localaddressReid Kleckner2015-07-073-18/+18
| | | | | | This should do the right thing for stack realignment prologues. llvm-svn: 241644
* Revert r241620 and follow-up commits while investigating linux buildbot ↵Adrian Prantl2015-07-072-2/+1
| | | | | | failures. llvm-svn: 241642
* Update clang for intrinsic rename of framerecover to localrecover (Reid Kleckner, 2015-07-07, 1 file, -7/+7)
  llvm-svn: 241634
* Wrap clang modules and pch files in an object file container. (Adrian Prantl, 2015-07-07, 2 files, -1/+2)
  This patch adds ObjectFilePCHContainerOperations, which uses the LLVM backend to put the contents of a PCH into a __clangast section inside a COFF, ELF, or Mach-O object file container. This is done to facilitate module debugging by making it possible to store the debug info for the types defined by a module alongside the AST.
  rdar://problem/20091852
  llvm-svn: 241620
* [ARM] Pass subtarget feature "+long-calls" instead of passing backend option "-arm-long-calls" (Akira Hatanaka, 2015-07-07, 1 file, -0/+7)
  This change allows using -mlong-calls/-mno-long-calls for LTO and enabling or disabling long calls on a per-function basis.
  rdar://problem/21529937
  Differential Revision: http://reviews.llvm.org/D9414
  llvm-svn: 241565
* Debug info: Emit distinct __block_literal_generic types for blocks with different function signatures. (Adrian Prantl, 2015-07-07, 1 file, -5/+22)
  (Previously clang would emit all block pointer types with the type of the first block pointer in the compile unit.)
  rdar://problem/21602473
  llvm-svn: 241534
* Revert "Revert 241171, 241187, 241199 (32-bit SEH)." (Reid Kleckner, 2015-07-07, 3 files, -82/+129)
  This reverts commit r241244, but restricts SEH support to Win64. This way, Chromium builds will still fall back on TUs with SEH, and Clang developers can work on this incrementally upstream while patching this small predicate locally. It'll also make it easier to review small fixes.
  llvm-svn: 241533
* Handle arbitrary whitespace in the target attribute support. (Eric Christopher, 2015-07-06, 1 file, -0/+3)
  This allows us to deal a bit more gracefully with inclusions done by macros, token pasting, or just code layout/formatting.
  llvm-svn: 241525
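  For example, a feature string carrying stray whitespace (say, produced through a macro, as sketched below; the exact spelling is illustrative) is now parsed gracefully:

      #define HOT_FEATURES " sse4.2 , popcnt "

      __attribute__((target(HOT_FEATURES)))
      int count_bits(unsigned x) {
        return __builtin_popcount(x);
      }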
* Debug info: Don't emit a bogus location for the global block pointer type (__block_literal_generic). (Adrian Prantl, 2015-07-06, 1 file, -1/+3)
  The arbitrary nature of the location confuses lldb and prevents type uniquing.
  rdar://problem/21602473
  llvm-svn: 241511
* Resubmit "Pass down the -flto option to the -cc1 job" (r239481) (Teresa Johnson, 2015-07-06, 2 files, -0/+42)
  The patch is the same except for the addition of a new test for the issue that required reverting the dependent llvm commit.
  --Original Commit Message--
  Pass down the -flto option to the -cc1 job, and from there into the CodeGenOptions and onto the PassManagerBuilder. This enables gating the new EliminateAvailableExternally module pass on whether we are preparing for LTO. If we are preparing for LTO (e.g. a -flto -c compile), the new pass is not included as we want to preserve available externally functions for possible link time inlining.
  llvm-svn: 241467
* clang/test/CodeGen/builtins-ppc-vsx.c: Fix for -Asserts. (NAKAMURA Takumi, 2015-07-05, 1 file, -8/+8)
  llvm-svn: 241401
* Add missing builtins to altivec.h for ABI compliance (vol. 2) (Nemanja Ivanovic, 2015-07-05, 1 file, -0/+127)
  This patch corresponds to review: http://reviews.llvm.org/D10875
  The bulk of the second round of additions to altivec.h. The following interfaces were added:
  vector double vec_floor(vector double)
  vector double vec_madd(vector double, vector double, vector double)
  vector float vec_msub(vector float, vector float, vector float); vector double vec_msub(vector double, vector double, vector double)
  vector float vec_mul(vector float, vector float); vector double vec_mul(vector double, vector double)
  vector float vec_nmadd(vector float, vector float, vector float); vector double vec_nmadd(vector double, vector double, vector double)
  vector double vec_nmsub(vector double, vector double, vector double)
  vector double vec_nor(vector double, vector double)
  vector double vec_or(vector double, vector double)
  vector float vec_rint(vector float); vector double vec_rint(vector double)
  vector float vec_nearbyint(vector float); vector double vec_nearbyint(vector double)
  vector float vec_sqrt(vector float); vector double vec_sqrt(vector double)
  vector double vec_rsqrte(vector double)
  vector double vec_sel(vector double, vector double, vector unsigned long long); vector double vec_sel(vector double, vector double, vector unsigned long long)
  vector double vec_sub(vector double, vector double)
  vector double vec_trunc(vector double)
  vector double vec_xor(vector double, vector double); vector double vec_xor(vector double, vector bool long long); vector double vec_xor(vector bool long long, vector double)
  New VSX paths for the following interfaces:
  vector float vec_madd(vector float, vector float, vector float); vector float vec_nmsub(vector float, vector float, vector float); vector float vec_rsqrte(vector float); vector float vec_trunc(vector float); vector float vec_floor(vector float)
  llvm-svn: 241399
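  A usage sketch for a couple of the new double-precision overloads (illustrative only; assumes VSX is available, e.g. -mvsx on a POWER7 or later target):

      #include <altivec.h>

      // vec_madd and vec_mul now have vector double forms on VSX targets.
      vector double fma_then_scale(vector double a, vector double b,
                                   vector double c, vector double s) {
        vector double t = vec_madd(a, b, c);   // a * b + c, element-wise
        return vec_mul(t, s);
      }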
* This patch adds support for the vector merge even word and vector merge odd word instructions introduced in POWER8. (Kit Barton, 2015-07-02, 1 file, -0/+28)
  These are the Clang-related changes for http://reviews.llvm.org/D10704. All builtins are added in altivec.h and guarded with the POWER8_VECTOR macro.
  Phabricator review: http://reviews.llvm.org/D10736
  llvm-svn: 241293
* Revert 241171, 241187, 241199 (32-bit SEH). (Nico Weber, 2015-07-02, 3 files, -127/+82)
  It still doesn't produce quite the right code; test binaries built with this enabled fail some tests.
  llvm-svn: 241244
* [OPENMP] Introduced type trait "__builtin_omp_required_simd_align" for default simd alignment. (Alexey Bataev, 2015-07-02, 1 file, -0/+11)
  Adds type trait "__builtin_omp_required_simd_align" after discussions here: http://reviews.llvm.org/D9894
  Differential Revision: http://reviews.llvm.org/D10597
  llvm-svn: 241237
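  A minimal sketch of the trait, assuming it is applied to a type and yields a byte alignment usable in constant expressions:

      // Request the target's default simd alignment for float data, e.g. for a
      // buffer used with '#pragma omp simd aligned(...)'.
      static float buf[1024]
          __attribute__((aligned(__builtin_omp_required_simd_align(float))));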
* [SEH] Update EmitCapturedLocals to match r241187 (Reid Kleckner, 2015-07-01, 1 file, -13/+14)
  It was still using frameaddress(1) to get the parent FP, even though it had the value it wanted as a parameter.
  llvm-svn: 241199
* [SEH] Delete the 32-bit IR lowering for __finally blocks and use x64 (Reid Kleckner, 2015-07-01, 2 files, -36/+18)
  32-bit finally funclets are intended to be called both directly from the parent function and indirectly from the EH runtime. Because we aren't contorting LLVM's X86 prologue to match MSVC's, calling the finally block directly passes in a different value of EBP than the one that the runtime provides. We need an adapter thunk to adjust EBP to the expected value. However, WinEHPrepare already has to solve this problem when cleanups are not pre-outlined, so we can go ahead and rely on it rather than duplicating work.
  Now we only do the llvm.x86.seh.recoverfp dance for 32-bit SEH filter functions.
  llvm-svn: 241187
* [SEH] Add 32-bit lowering for SEH __try (Reid Kleckner, 2015-07-01, 3 files, -79/+141)
  This re-lands r236052 and adds support for __exception_code().
  In 32-bit SEH, the exception code is not available in eax. It is only available in the filter function, and now we arrange to load it and store it into an escaped variable in the parent frame.
  As a consequence, we have to disable the "catch i8* null" optimization on 32-bit and always generate a filter function. We can re-enable the optimization if we detect an __except block that doesn't use the exception code, but this probably isn't worth optimizing.
  Reviewers: majnemer
  Differential Revision: http://reviews.llvm.org/D10852
  llvm-svn: 241171
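  For context, a sketch of the source pattern this enables on 32-bit x86 (compile with clang-cl on Windows; illustrative):

      #include <windows.h>

      int read_guarded(int *p) {
        __try {
          return *p;                       // may fault
        } __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                        ? EXCEPTION_EXECUTE_HANDLER
                        : EXCEPTION_CONTINUE_SEARCH) {
          return -1;                       // the filter above read the exception code
        }
      }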
* Use a stable sort to guarantee target feature ordering in the IR in order to make testing somewhat more feasible. (Eric Christopher, 2015-07-01, 3 files, -7/+7)
  Has the advantage of making it easier to find target features as well.
  llvm-svn: 241134
* Fix sse4 for target attribute feature additions. (Eric Christopher, 2015-07-01, 1 file, -0/+3)
  This reinstates part of the hack removed in r233223, by special casing sse4 as part of the feature additions. The notable change here is that we consider it only as part of setting the SSE level and not as part of the actual target features set which handles setting the rest of the masks.
  llvm-svn: 241130
* Fix a TODO dealing with canonicalizing attributes on functions by using a string map to canonicalize. (Eric Christopher, 2015-07-01, 2 files, -3/+3)
  Fix up a couple of testcases that needed changing since we are no longer simply appending features to the list, but all of their mask dependencies as well.
  llvm-svn: 241129
* [X86] Add FXSR intrinsics (Michael Kuperstein, 2015-06-30, 1 file, -0/+4)
  Add intrinsics for the FXSR instructions (FXSAVE/FXSAVE64/FXRSTOR/FXRSTOR64). These were previously declared in Intrin.h for MSVC compatibility, but now that we have them implemented, these declarations can be removed.
  llvm-svn: 241053
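  A usage sketch of the FXSAVE/FXRSTOR pair (assumes -mfxsr; per the ISA the 512-byte save area must be 16-byte aligned):

      #include <immintrin.h>

      void checkpoint_fpu_state(void) {
        _Alignas(16) static char area[512];
        _fxsave(area);    // save x87/MMX/SSE state
        /* ... work that clobbers FP state ... */
        _fxrstor(area);   // restore it
      }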
* [CodeGen] Tweak isTriviallyRecursive further (David Majnemer, 2015-06-30, 1 file, -0/+7)
  isTriviallyRecursive is a hack used to bridge a gap between the expectations that source code assumes and the semantics that LLVM IR can provide. Specifically, asm labels on functions are treated as an explicit name for a GlobalObject in Clang but treated like an output-processing step in GCC.
  Tweak this hack a little further to emit calls to library functions instead of emitting an incorrect definition. The definition in question would have available_externally linkage (this is OK) but result in a call to itself which will either result in an infinite loop or stack overflow.
  This fixes PR23964.
  llvm-svn: 241043
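  A hedged sketch of the kind of declaration involved (hypothetical symbol name):

      /* The asm label gives the function an explicit symbol name. When such a
       * function is a recognized library builtin whose inline body would end up
       * calling that same symbol, Clang now emits a call to the library function
       * rather than an available_externally definition that calls itself. */
      extern double my_exp(double) __asm__("exp_finite");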
* Add support for the x86 builtin __builtin_cpu_supports. (Eric Christopher, 2015-06-29, 1 file, -0/+16)
  This matches the implementation of the gcc support for the same feature, including checking the values set up by libgcc at runtime. The structure looks like this:

      unsigned int __cpu_vendor;
      unsigned int __cpu_type;
      unsigned int __cpu_subtype;
      unsigned int __cpu_features[1];

  with a set of enums to match various fields that are filled out after parsing the output of the cpuid instruction. This also adds a set of error checks for valid input (and cpu). compiler-rt support for this and the other builtins in this family (__builtin_cpu_init and __builtin_cpu_is) is forthcoming.
  llvm-svn: 240994
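  A typical use of the builtin (mirrors the GCC feature; hypothetical helper functions):

      void widen_sse42(float *d, int n);
      void widen_generic(float *d, int n);

      // Run-time dispatch on the feature bits the runtime fills in from cpuid.
      void widen(float *d, int n) {
        if (__builtin_cpu_supports("sse4.2"))
          widen_sse42(d, n);
        else
          widen_generic(d, n);
      }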
* [CodeGen] Remove atomic sugar from record types in isSafeToConvert (David Majnemer, 2015-06-29, 1 file, -2/+18)
  We failed to see that we should have deferred the creation of a type which references a type currently under construction because of atomic sugar.
  This fixes PR23985.
  llvm-svn: 240989
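  A hedged sketch of the shape of code affected (hypothetical type; the actual reproducer is in PR23985): the _Atomic sugar on the member refers back to a record that is still being laid out, so conversion of the member type must be deferred:

      struct Node {
        _Atomic(struct Node *) next;   // atomic sugar around a pointer to the
                                       // record currently under construction
        int value;
      };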
* Account for calling convention specifiers in function definitions in IR test cases (David Blaikie, 2015-06-29, 18 files, -49/+49)
  Several tests wouldn't pass when executed on an armv7a_pc_linux triple due to the non-default arm_aapcs calling convention produced on the function definitions in the IR output. Account for this with the application of a little regex.
  Patch by Ying Yi.
  llvm-svn: 240971