Commit message | Author | Age | Files | Lines
---|---|---|---|---
Sink some IntrinsicInst.h and Intrinsics.h out of llvm/include. Many of these uses can get by with forward declarations. Hopefully this speeds up compilation after adding a single intrinsic. (llvm-svn: 312759) | Reid Kleckner | 2017-09-07 | 1 | -0/+1
CodeGen: Rename DEBUG_TYPE to match pass names. Rename the DEBUG_TYPE to match the names of the corresponding passes where it makes sense, and establish the pattern of simply referencing DEBUG_TYPE instead of repeating the pass name where possible. (llvm-svn: 303921) | Matthias Braun | 2017-05-25 | 1 | -6/+2
[X86] Relocate code of replacement of subtarget-unsupported masked memory intrinsics to run also at -O0. Currently, when masked load, store, gather, or scatter intrinsics are used, the CodeGenPrepare pass checks whether the subtarget supports these intrinsics; if not, it replaces them with scalar code. This is a functional transformation, not an optimization (it is not optional), yet CodeGenPrepare does not run when the optimization level is CodeGenOpt::None (-O0). A functional transformation should run at all optimization levels, so this change introduces a new pass that runs at every optimization level and performs only this transformation. Differential Revision: https://reviews.llvm.org/D32487 (llvm-svn: 303050) | Ayman Musa | 2017-05-15 | 1 | -0/+660