path: root/llvm/lib
Commit message (Author, Age, Files, Lines)
...
* Revert "[ThinLTO] Ensure we always select the same function copy to import"Teresa Johnson2018-07-142-90/+71
| | | | | | | This reverts commits r337050 and r337059. Caused failure in reverse-iteration bot that needs more investigation. llvm-svn: 337081
* Revert "AMDGPU: Fix handling of alignment padding in DAG argument lowering"Evgeniy Stepanov2018-07-1413-214/+193
| | | | | | | | | | | | | | | | | | | | | | This reverts commit r337021. WARNING: MemorySanitizer: use-of-uninitialized-value #0 0x1415cd65 in void write_signed<long>(llvm::raw_ostream&, long, unsigned long, llvm::IntegerStyle) /code/llvm-project/llvm/lib/Support/NativeFormatting.cpp:95:7 #1 0x1415c900 in llvm::write_integer(llvm::raw_ostream&, long, unsigned long, llvm::IntegerStyle) /code/llvm-project/llvm/lib/Support/NativeFormatting.cpp:121:3 #2 0x1472357f in llvm::raw_ostream::operator<<(long) /code/llvm-project/llvm/lib/Support/raw_ostream.cpp:117:3 #3 0x13bb9d4 in llvm::raw_ostream::operator<<(int) /code/llvm-project/llvm/include/llvm/Support/raw_ostream.h:210:18 #4 0x3c2bc18 in void printField<unsigned int, &(amd_kernel_code_s::amd_kernel_code_version_major)>(llvm::StringRef, amd_kernel_code_s const&, llvm::raw_ostream&) /code/llvm-project/llvm/lib/Target/AMDGPU/Utils/AMDKernelCodeTUtils.cpp:78:23 #5 0x3c250ba in llvm::printAmdKernelCodeField(amd_kernel_code_s const&, int, llvm::raw_ostream&) /code/llvm-project/llvm/lib/Target/AMDGPU/Utils/AMDKernelCodeTUtils.cpp:104:5 #6 0x3c27ca3 in llvm::dumpAmdKernelCode(amd_kernel_code_s const*, llvm::raw_ostream&, char const*) /code/llvm-project/llvm/lib/Target/AMDGPU/Utils/AMDKernelCodeTUtils.cpp:113:5 #7 0x3a46e6c in llvm::AMDGPUTargetAsmStreamer::EmitAMDKernelCodeT(amd_kernel_code_s const&) /code/llvm-project/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUTargetStreamer.cpp:161:3 #8 0xd371e4 in llvm::AMDGPUAsmPrinter::EmitFunctionBodyStart() /code/llvm-project/llvm/lib/Target/AMDGPU/AMDGPUAsmPrinter.cpp:204:26 [...] Uninitialized value was created by an allocation of 'KernelCode' in the stack frame of function '_ZN4llvm16AMDGPUAsmPrinter21EmitFunctionBodyStartEv' #0 0xd36650 in llvm::AMDGPUAsmPrinter::EmitFunctionBodyStart() /code/llvm-project/llvm/lib/Target/AMDGPU/AMDGPUAsmPrinter.cpp:192 llvm-svn: 337079
* [x86/SLH] Add an assert to catch if we ever end up trying to harden post-load a register that isn't valid for use with OR or SHRX (Chandler Carruth, 2018-07-14; 1 file, -0/+8)
  llvm-svn: 337078
* Re-apply "[SCEV] Strengthen StrengthenNoWrapFlags (reapply r334428)." (Tim Shen, 2018-07-13; 1 file, -7/+20)
  llvm-svn: 337075
* [Hexagon] Avoid introducing calls into coalesced range of HVX vector pairs (Krzysztof Parzyszek, 2018-07-13; 2 files, -0/+54)
  If an HVX vector register is to be coalesced into a vector pair, make sure that the vector pair will not have a function call in its live range, unless it already had one. All HVX vector registers are volatile, so any vector register live across a function call will have to be spilled. If a vector needs to be spilled, and it's coalesced into a vector pair, then the whole pair will need to be spilled (even if only a part of it is live), taking extra stack space.
  llvm-svn: 337073
* [LSR] If no Use is interesting, early return. (Tim Shen, 2018-07-13; 1 file, -1/+3)
  Summary: By looking at the callers of getUse(), we can see that even though IVUsers may offer uses, they may not be interesting to LSR. It's possible that none of them is interesting.

  Reviewers: sanjoy
  Subscribers: jlebar, hiraditya, bixia, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49049
  llvm-svn: 337072
* [X86][SLH] Remove PDEP and PEXT from isDataInvariantLoad (Craig Topper, 2018-07-13; 1 file, -4/+0)
  Ryzen has something like an 18 cycle latency on these based on Agner's data. AMD's own xls is blank. So it seems like there might be something tricky here. Agner's data for Intel CPUs indicates these are a single uop there. Probably safest to remove them. We never generate them without an intrinsic so this should be ok.
  Differential Revision: https://reviews.llvm.org/D49315
  llvm-svn: 337067
* [X86][SLH] Add VEX and EVEX conversion instructions to isDataInvariantLoad (Craig Topper, 2018-07-13; 1 file, -13/+19)
  - Drop the intrinsic versions of conversion instructions. These should be handled when we do vectors. They shouldn't show up in scalar code.
  - Add the float<->double conversions which were missing.
  - Add the AVX512 and AVX versions of the conversion instructions, including the unsigned integer conversions unique to AVX512.
  Differential Revision: https://reviews.llvm.org/D49313
  llvm-svn: 337066
* [X86][SLH] Regroup the instructions in isDataInvariantLoad a little. NFC (Craig Topper, 2018-07-13; 1 file, -36/+43)
  - Move BSF/BSR to the same group as TZCNT/LZCNT/POPCNT.
  - Split some of the bit manipulation instructions away from TZCNT/LZCNT/POPCNT. These are things like 'x & (x - 1)' which are composed of a few simple arithmetic operations. These aren't nearly as complicated/surprising as counting bits.
  - Move BEXTR/BZHI into their own group. They aren't like a simple arithmetic op or the bit manipulation instructions. They're more like a shift+and (a sketch follows this entry).
  Differential Revision: https://reviews.llvm.org/D49312
  llvm-svn: 337065
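  As context for the grouping above, a minimal C++ sketch of the bit tricks in question; the helper names are illustrative, not LLVM's:

    #include <cstdint>

    // x & (x - 1) clears the lowest set bit (the BLSR idiom): just a
    // subtract and an AND.
    uint32_t clearLowestSetBit(uint32_t X) { return X & (X - 1); }

    // x & -x isolates the lowest set bit (the BLSI idiom).
    uint32_t isolateLowestSetBit(uint32_t X) { return X & (0u - X); }

    // BEXTR-style bit-field extract: a shift plus an AND, as the entry
    // says. Assumes Len < 32 so the mask computation stays defined.
    uint32_t bitFieldExtract(uint32_t X, unsigned Start, unsigned Len) {
      return (X >> Start) & ((1u << Len) - 1u);
    }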
* Fix comments which mixed up 'before' and 'after', NFC (Vedant Kumar, 2018-07-13; 1 file, -2/+2)
  llvm-svn: 337061
* [X86] Use the correct types in some recently added isel patterns. (Craig Topper, 2018-07-13; 1 file, -2/+2)
  These were supposed to be integer types since we are selecting integer instructions. Found while preparing to remove these patterns for another patch.
  llvm-svn: 337057
* AMDGPU/GlobalISel: Implement select() for 32-bit @llvm.minnum and @llvm.maxnum (Tom Stellard, 2018-07-13; 2 files, -0/+19)
  Reviewers: arsenm, nhaehnle
  Subscribers: kzhuravl, wdng, yaxunl, rovka, kristof.beyls, dstuttard, tpr, llvm-commits, t-tye
  Differential Revision: https://reviews.llvm.org/D46172
  llvm-svn: 337056
* [X86][FastISel] Support uitofp with avx512. (Craig Topper, 2018-07-13; 1 file, -8/+26)
  llvm-svn: 337055
* [LTO] Fix linking with an alias defined using another alias. (Eli Friedman, 2018-07-13; 1 file, -1/+1)
  When we're linking an alias which will be defined later, we need to build a GlobalAlias, or else we'll crash later in IRLinker::linkGlobalValueBody. clang sometimes constructs aliases like this for C++ destructors; a sketch follows this entry.
  Differential Revision: https://reviews.llvm.org/D49316
  llvm-svn: 337053
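  A hedged illustration of the destructor aliasing mentioned above, assuming the Itanium C++ ABI; the class name is hypothetical:

    // When the complete (D1) and base (D2) destructors are identical,
    // clang may emit _ZN1AD1Ev as a GlobalAlias of _ZN1AD2Ev. During
    // LTO linking, such an alias can refer to an aliasee that is only
    // defined later in the link, the situation the fix above handles.
    struct A {
      ~A(); // out-of-line, so both destructor variants are emitted
    };
    A::~A() {}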
* [X86] Correct comment of TEST elimination in BSF/TZCNT (Fangrui Song, 2018-07-13; 1 file, -2/+2)
  llvm-svn: 337052
* [ThinLTO] Ensure we always select the same function copy to import (Teresa Johnson, 2018-07-13; 2 files, -71/+90)
  In order to always import the same copy of a linkonce function, even when encountering it with different thresholds (a higher one then a lower one), keep track of the summary we decided to import. This ensures that the backend only gets a single definition to import for each GUID, so that it doesn't need to choose one.

  Move the largest threshold the GUID was considered for import into the current module out of the ImportMap (which is part of a larger map maintained across the whole index), and into a new map just maintained for the current module we are computing imports for. This saves some memory since we no longer have the thresholds maintained across the whole index (and throughout the in-process backends when doing a normal non-distributed ThinLTO build), at the cost of some additional information being maintained for each invocation of ComputeImportForModule (the selected summary pointer for each import).

  There is an additional map lookup for each callee being considered for importing; however, this was able to subsume a map lookup in the Worklist iteration that invokes computeImportForFunction. We also are able to avoid calling selectCallee if we already failed to import at the same or higher threshold.

  I compared the run time and peak memory for the SPEC2006 471.omnetpp benchmark (running in-process ThinLTO backends), as well as for a large internal benchmark with a distributed ThinLTO build (so just looking at the thin link time/memory). Across a number of runs with and without this change there was no significant change in the time and memory. (I tried a few other variations of the change but they also didn't improve time or peak memory.)

  Reviewers: davidxl
  Subscribers: mehdi_amini, inglorion, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48670
  llvm-svn: 337050
* AMDGPU/GlobalISel: Implement select() for @llvm.amdgcn.exp (Tom Stellard, 2018-07-13; 2 files, -0/+71)
  Reviewers: arsenm, nhaehnle
  Subscribers: kzhuravl, wdng, yaxunl, rovka, kristof.beyls, dstuttard, tpr, t-tye, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45882
  llvm-svn: 337046
* [X86][FastISel] Add EVEX support to sitofp handling. (Craig Topper, 2018-07-13; 1 file, -7/+16)
  llvm-svn: 337045
* [X86] Try fixing r336768 (Fangrui Song, 2018-07-13; 1 file, -1/+1)
  llvm-svn: 337043
* [LowerTypeTests] Limit when icall jumptable entries are emitted (Vlad Tsyrklevich, 2018-07-13; 1 file, -6/+43)
  Summary: Currently LowerTypeTests emits jumptable entries for all live external and address-taken functions; however, we could limit the number of functions that we emit entries for significantly.

  For Cross-DSO CFI, we continue to emit jumptable entries for all exported definitions. In the non-Cross-DSO CFI case, we only need to emit jumptable entries for live functions that are address-taken in live functions. This ignores exported functions and functions that are only address-taken in dead functions. This change uses ThinLTO summary data (now emitted for all modules during ThinLTO builds) to determine address-taken and liveness info.

  The logic for emitting jumptable entries is more conservative in the regular LTO case because we don't have summary data in the case of monolithic LTO builds; however, once summaries are emitted for all LTO builds we can unify the Thin/monolithic LTO logic to only use summaries to determine the liveness of address-taken functions.

  This change is a partial fix for PR37474. It reduces the build size for nacl_helper by ~2-3%; the reduction is due to nacl_helper compiling in lots of unused code, and unused functions that are address-taken in dead functions no longer being considered live due to emitted jumptable references. The reduction for chromium is ~0.1-0.2%.

  Reviewers: pcc, eugenis, javed.absar
  Reviewed By: pcc
  Subscribers: aheejin, dexonsmith, dschuff, mehdi_amini, eraman, steven_wu, llvm-commits, kcc
  Differential Revision: https://reviews.llvm.org/D47652
  llvm-svn: 337038
* [dwarfdump] Add pretty printer for accelerator table based on Atom. (Jonas Devlieghere, 2018-07-13; 2 files, -3/+20)
  For instance, when dumping .apple_types, the second atom represents the DW_TAG. In addition to printing the raw value, we now also pretty print the value if the ATOM tells us how.
  llvm-svn: 337026
* AMDGPU: Properly handle shader inputs with split arguments (Matt Arsenault, 2018-07-13; 1 file, -12/+27)
  This needs to refer to arguments by their original argument index, not the argument split index, which depends on what the type splitting decides to do. Also avoid incrementing PSInputNum for each split piece.
  llvm-svn: 337022
* AMDGPU: Fix handling of alignment padding in DAG argument lowering (Matt Arsenault, 2018-07-13; 13 files, -193/+214)
  This was completely broken if there was ever a struct argument, as this information is thrown away during the argument analysis. The offsets as passed in to LowerFormalArguments are not useful, as they partially depend on the legalized result register type, and they don't consider the alignment in the first place. Ignore the Ins array, and instead figure out from the raw IR type what we need to do. This seems to fix the padding computation if the DAG lowering is forced (and stops breaking arguments following padded arguments if the arguments were only partially lowered in the IR).
  llvm-svn: 337021
* Revert "CallGraphSCCPass: iterate over all functions."Evgeniy Stepanov2018-07-131-71/+39
| | | | | | | | | | | | This reverts commit r336419: use-after-free on CallGraph::FunctionMap elements due to the use of a stale iterator in CGPassManager::runOnModule. The iterator may be invalidated if a pass removes a function, ex.: llvm::LegacyInlinerBase::inlineCalls inlineCallsImpl llvm::CallGraph::removeFunctionFromModule llvm-svn: 337018
* [dwarfdump] Pretty print DW_AT_APPLE_runtime_class (Jonas Devlieghere, 2018-07-13; 1 file, -0/+2)
  Instead of printing DW_AT_APPLE_runtime_class (0x10) we now print DW_AT_APPLE_runtime_class (DW_LANG_ObjC).
  llvm-svn: 337011
* [AArch64] Armv8.4-A: LDAPR & STLR with immediate offset instructions (cont'd) (Sjoerd Meijer, 2018-07-13; 3 files, -22/+35)
  Follow up of rL336913: fix base class description. Thanks to Ahmed Bougacha for pointing this out.
  Differential Revision: https://reviews.llvm.org/D49284
  llvm-svn: 337009
* [PowerPC] Materialize more constants with CR-field set in late peephole (Nemanja Ivanovic, 2018-07-13; 1 file, -5/+28)
  Revision r322373 fixed a bug in how we materialize constants when the CR-field needs to be set. However the fix is overly conservative: it will only do the transform if AND-ing the input with the new constant produces the same new constant. This is of course correct, but not necessarily required. If there are no further uses of the constant, the constant can be changed. If there are no uses of the GPR result, the final result of the materialization isn't important other than it needs to compare to zero correctly (lt, gt, eq).
  Differential Revision: https://reviews.llvm.org/D42109
  llvm-svn: 337008
* [cfi-verify] Support AArch64. (Joel Galenson, 2018-07-13; 3 files, -9/+15)
  This patch adds support for AArch64 to cfi-verify. This required three changes to cfi-verify:
  1. It generalizes checking if an instruction is a trap by adding a new isTrap flag to TableGen (and defining it for x86 and AArch64).
  2. The code that ensures that the operand register is not clobbered between the CFI check and the indirect call needs to allow a single dereference (in x86 this happens as part of the jump instruction).
  3. We needed to ensure that return instructions are not counted as indirect branches. Technically, returns are indirect branches and can be covered by CFI, but LLVM's forward-edge CFI does not protect them, and x86 does not consider them, so we keep that behavior.
  In addition, we had to improve AArch64's code to evaluate the branch target of an MCInst to handle calls where the destination is not the first operand (which it often is not).
  Differential Revision: https://reviews.llvm.org/D48836
  llvm-svn: 337007
* Add parens to silence Wparentheses warning, introduced by 336990 (Erich Keane, 2018-07-13; 1 file, -5/+3)
  llvm-svn: 337002
* [NFC] Silence Wparentheses warning in DomTreeUpdater, introduced by 336968 (Erich Keane, 2018-07-13; 1 file, -2/+2)
  llvm-svn: 337001
* [TableGen] Support multi-alternative pattern fragments (Ulrich Weigand, 2018-07-13; 4 files, -133/+74)
  A TableGen instruction record usually contains a DAG pattern that will describe the SelectionDAG operation that can be implemented by this instruction. However, there will be cases where several different DAG patterns can all be implemented by the same instruction. The way to represent this today is to write additional patterns in the Pattern (or usually Pat) class that map those extra DAG patterns to the instruction. This usually also works fine.

  However, I've noticed cases where the current setup seems to require quite a bit of extra (and duplicated) text in the target .td files. For example, in the SystemZ back-end, there are quite a number of instructions that can implement an "add-with-overflow" operation. The same instructions also need to be used to implement just plain addition (simply ignoring the extra overflow output). The current solution requires creating an extra Pat pattern for every instruction, duplicating the information about which particular add operands map best to which particular instruction.

  This patch enhances TableGen to support a new PatFrags class, which can be used to encapsulate multiple alternative patterns that may all match to the same instruction. It operates the same way as the existing PatFrag class, except that it accepts a list of DAG patterns to match instead of just a single one. As an example, we can now define a PatFrags to match either an "add-with-overflow" or a regular add operation:

    def z_sadd : PatFrags<(ops node:$src1, node:$src2),
                          [(z_saddo node:$src1, node:$src2),
                           (add node:$src1, node:$src2)]>;

  and then use this in the add instruction pattern:

    defm AR : BinaryRRAndK<"ar", 0x1A, 0xB9F8, z_sadd, GR32, GR32>;

  These SystemZ target changes are implemented here as well.

  Note that PatFrag is now defined as a subclass of PatFrags, which means that some users of internals of PatFrag need to be updated. (E.g. instead of using PatFrag.Fragment you now need to use !head(PatFrag.Fragments).)

  The implementation is based on the following main ideas:
  - InlinePatternFragments may now replace each original pattern with several result patterns, not just one.
  - parseInstructionPattern delays calling InlinePatternFragments and InferAllTypes. Instead, it extracts a single DAG match pattern from the main instruction pattern.
  - Processing of the DAG match pattern part of the main instruction pattern now shares most code with processing match patterns from the Pattern class.
  - Direct use of main instruction patterns in InferFromPattern and EmitResultInstructionAsOperand is removed; everything now operates solely on DAG match patterns.

  Reviewed by: hfinkel
  Differential Revision: https://reviews.llvm.org/D48545
  llvm-svn: 336999
* DivergenceAnalysis: added debug output (Tim Renouf, 2018-07-13; 1 file, -5/+16)
  Summary: This commit does two things:
  1. modified the existing DivergenceAnalysis::dump() so it dumps the whole function with added DIVERGENT: annotations;
  2. added code to do that dump if the appropriate -debug-only option is on.
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D47700
  Change-Id: Id97b605aab1fc6f5a11a20c58a99bbe8c565bf83
  llvm-svn: 336998
* [SLH] Introduce a new pass to do Speculative Load Hardening to mitigate Spectre variant #1 for x86 (Chandler Carruth, 2018-07-13; 5 files, -0/+1696)
  There is a lengthy, detailed RFC thread on llvm-dev which discusses the high level issues. High level discussion is probably best there.

  I've split the design document out of this patch and will land it separately once I update it to reflect the latest edits and updates to the Google doc used in the RFC thread.

  This patch is really just an initial step. It isn't quite ready for prime time and is only exposed via debugging flags. It has three major limitations currently:
  1) It only supports x86-64, and only certain ABIs. Many assumptions are currently hard-coded and need to be factored out of the code here.
  2) It doesn't include any options for more fine-grained control, either of which control flow edges are significant or which loads are important to be hardened.
  3) The code is still quite rough and the testing lighter than I'd like.

  However, this is enough for people to begin using. I have had numerous requests from people to be able to experiment with this patch to understand the trade-offs it presents and how to use it. We would also like to encourage work to similar effect in other toolchains.

  The ARM folks are actively developing a system based on this for AArch64. We hope to merge this with their efforts when both are far enough along. But we also don't want to block making this available on that effort.

  Many thanks to the *numerous* people who helped along the way here. For this patch in particular, both Eric and Craig did a ton of review to even have confidence in it as an early, rough cut at this functionality.

  Differential Revision: https://reviews.llvm.org/D44824
  llvm-svn: 336990
* [SLPVectorizer] Add initial alternate opcode support for cast instructions. (REAPPLIED-2) (Simon Pilgrim, 2018-07-13; 1 file, -26/+72)
  We currently only support binary instructions in the alternate opcode shuffles. This patch is an initial attempt at adding cast instructions as well; this raises several issues that we probably want to address as we continue to generalize the alternate mechanism:
  1 - Duplication of cost determination - we should probably add scalar/vector costs helper functions and get BoUpSLP::getEntryCost to use them instead of determining costs directly.
  2 - Support alternate instructions with the same opcode (e.g. casts with different src types) - alternate vectorization of calls with different IntrinsicIDs will require this.
  3 - Allow alternates to be a different instruction type - mixing binary/cast/call etc.
  4 - Allow passthrough of unsupported alternate instructions - related to PR30787/D28907 'copyable' elements.
  Reapplied with fix to only accept 2 different casts if they come from the same source type (PR38154).
  Differential Revision: https://reviews.llvm.org/D49135
  llvm-svn: 336989
* [x86] Teach the EFLAGS copy lowering to handle much more complex control flow patterns including forks, merges, and even cycles (Chandler Carruth, 2018-07-13; 1 file, -44/+161)
  This tries to cover a reasonably comprehensive set of patterns that still don't require PHIs or PHI placement. The coverage was inspired by the amazing variety of patterns produced when copying EFLAGS and restoring it to implement Speculative Load Hardening. Without this patch, we simply cannot make such complex and invasive changes to x86 instruction sequences due to EFLAGS.

  I've added "just" one test, but this test covers many different complexities and corner cases of this approach. It is actually more comprehensive, as far as I can tell, than anything that I have encountered in the wild on SLH. Because the test is so complex, I've tried to give somewhat thorough comments and an ASCII-art diagram of the control flows to make it a bit easier to read and maintain long-term.

  Differential Revision: https://reviews.llvm.org/D49220
  llvm-svn: 336985
* [AArch64][SVE] Asm: Vector Unpack Low/High instructions. (Sander de Smalen, 2018-07-13; 2 files, -0/+45)
  This patch adds support for the following unpack instructions:
  - PUNPKLO, PUNPKHI: unpack elements from the low/high half and place them into elements of twice their size, e.g. punpklo p0.h, p0.b
  - UUNPKLO, UUNPKHI and SUNPKLO, SUNPKHI: unpack elements from the low/high half and place them into elements of twice their size after zero- or sign-extending the values, e.g. uunpklo z0.h, z0.b
  llvm-svn: 336982
* [AArch64][SVE] Asm: Support for insert element (INSR) instructions. (Sander de Smalen, 2018-07-13; 2 files, -0/+51)
  Insert general purpose register into shifted vector, e.g.
    insr z0.s, w0
    insr z0.d, x0
  Insert SIMD&FP scalar register into shifted vector, e.g.
    insr z0.b, b0
    insr z0.h, h0
    insr z0.s, s0
    insr z0.d, d0
  llvm-svn: 336979
* [LiveDebugValues] Track copying of values between registers (Petar Jovanovic, 2018-07-13; 1 file, -52/+127)
  During the execution of long functions or functions that have a lot of inlined code, it can happen that a tracked value is transferred from one register to another. The transfer is recognized only if the destination register is a callee-saved register and the source register is killed. We do not salvage caller-saved registers since there is a great chance that the killed register would outlive it. A hedged sketch of this condition follows this entry.
  Patch by Nikola Prica.
  Differential Revision: https://reviews.llvm.org/D44016
  llvm-svn: 336978
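  The recognition rule can be summarized in a short C++ sketch; isTrackedValueTransfer is a hypothetical helper, not the pass's actual internals:

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/TargetRegisterInfo.h"

    using namespace llvm;

    // Hypothetical helper mirroring the rule above: a COPY transfers a
    // tracked value only when the destination is callee-saved and the
    // source register dies at the copy.
    static bool isTrackedValueTransfer(const MachineInstr &MI,
                                       const TargetRegisterInfo &TRI) {
      if (!MI.isCopy())
        return false;
      const MachineOperand &Dst = MI.getOperand(0);
      const MachineOperand &Src = MI.getOperand(1);
      if (!Src.isKill()) // source must be killed by the copy
        return false;
      // Destination must be callee-saved (sub-register aliasing is
      // ignored here for brevity).
      for (const MCPhysReg *CSR = TRI.getCalleeSavedRegs(MI.getMF());
           *CSR; ++CSR)
        if (*CSR == Dst.getReg())
          return true;
      return false;
    }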
* [X86] Prefer MOVSS/SD over BLEND under optsize in isel. (Craig Topper, 2018-07-13; 2 files, -14/+46)
  Previously we iseled to blend, commuted to another blend, and then commuted back to movss/movsd or blend depending on optsize. Now we do it directly.
  llvm-svn: 336976
* [XRay][compiler-rt] Add PID field to llvm-xray tool and add PID metadata record entry in FDR mode (Dean Michael Berris, 2018-07-13; 1 file, -11/+60)
  Summary:
  llvm-xray changes:
  - account-mode: process-id {...} shows after thread-id
  - convert-mode: process {...} shows after thread
  - parses FDR and basic mode pid entries
  - checks version number for FDR log parsing
  Basic logging changes:
  - update header version from 2 -> 3
  FDR logging changes:
  - update header version from 2 -> 3
  - in writeBufferPreamble, there is an additional PID metadata record (after thread id record and tsc record)
  Test case changes:
  - fdr-mode.cc, fdr-single-thread.cc, fdr-thread-order.cc modified to catch process id output in the log
  Reviewers: dberris
  Reviewed By: dberris
  Subscribers: hiraditya, llvm-commits, #sanitizers
  Differential Revision: https://reviews.llvm.org/D49153
  llvm-svn: 336974
* [X86] Remove isel patterns that turn packed add/sub/mul/div+movss/sd into scalar intrinsic instructions (Craig Topper, 2018-07-13; 2 files, -41/+33)
  This is not an optimization we should be doing in isel. This is more suitable for a DAG combine. My main concern is a future time when we support more FPENV. Changing a packed op to a scalar op could cause us to miss some exceptions that should have occurred if we had done a packed op. A DAG combine would be better able to manage this.
  llvm-svn: 336971
* [DomTreeUpdater] Ignore updates when both DT and PDT are nullptrs (Chijun Sima, 2018-07-13; 1 file, -14/+29)
  Summary: Previously, when both DT and PDT are nullptrs and the UpdateStrategy is Lazy, DomTreeUpdater still pends updates inside. After this patch, DomTreeUpdater will ignore all updates from applyUpdates()/insertEdge*()/deleteEdge*() in this case. (Calling delBB() still pends BasicBlock deletion until a flush event, according to the doc.) The behavior of DomTreeUpdater previously documented won't change after the patch. A hedged usage sketch follows this entry.
  Reviewers: dmgreen, davide, kuhar, brzycki, grosser
  Reviewed By: kuhar
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D48974
  llvm-svn: 336968
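  A minimal C++ usage sketch of the behavior described above, assuming the header location of the time:

    #include "llvm/IR/DomTreeUpdater.h"

    using namespace llvm;

    void example(BasicBlock *A, BasicBlock *B) {
      // Neither a DT nor a PDT is registered, so with this patch the
      // updater simply discards edge updates instead of queuing them.
      DomTreeUpdater DTU(nullptr, nullptr,
                         DomTreeUpdater::UpdateStrategy::Lazy);
      DTU.applyUpdates({{DominatorTree::Insert, A, B}});
      // delBB() is the exception: deletion is still pended to a flush.
    }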
* [InstCombine] return when SimplifyAssociativeOrCommutative makes a change (Sanjay Patel, 2018-07-13; 3 files, -12/+28)
  This bug was created by rL335258 because we used to always call instsimplify after trying the associative folds. After that change it became possible for subsequent folds to encounter unsimplified code (and potentially assert because of it). Instead of carrying changed state through instcombine, we can just return immediately. This allows instsimplify to run, so we can continue assuming that easy folds have already occurred. A sketch of the resulting pattern follows this entry.
  llvm-svn: 336965
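  A schematic C++ sketch of the resulting visitor pattern, with stub types standing in for LLVM's (not the verbatim diff):

    struct Instruction {};
    struct BinaryOperator : Instruction {};

    struct InstCombinerSketch {
      bool SimplifyAssociativeOrCommutative(BinaryOperator &) { return false; } // stub
      Instruction *visitBinOp(BinaryOperator &I) {
        // Return immediately on a reassociation instead of carrying a
        // Changed flag past later folds: the worklist revisits I, so
        // InstSimplify runs again before any fold sees the
        // reassociated (unsimplified) code.
        if (SimplifyAssociativeOrCommutative(I))
          return &I;
        // ... subsequent folds may assume easy simplifications are done.
        return nullptr;
      }
    };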
* CodeGen: Remove pipeline dependencies on StackProtector; NFC (Matthias Braun, 2018-07-13; 16 files, -69/+95)
  This re-applies r336929 with a fix to accommodate the Mips target scheduling multiple SelectionDAG instances into the pass pipeline.

  PrologEpilogInserter and StackColoring depend on the StackProtector analysis being alive from the point it is run until PEI, which requires that they are all scheduled in the same FunctionPassManager. Inserting a (machine) ModulePass between StackProtector and PEI results in these passes being in separate FunctionPassManagers and the StackProtector is not available for PEI. PEI and StackColoring don't use much information from the StackProtector pass, so transferring the required information to MachineFrameInfo is cleaner than keeping the StackProtector pass around. This commit moves the SSP layout information to MFI instead of keeping it in the pass.

  This patch set (D37580, D37581, D37582, D37583, D37584, D37585, D37586, D37587) is a first draft of the pagerando implementation described in http://lists.llvm.org/pipermail/llvm-dev/2017-June/113794.html.

  Patch by Stephen Crane <sjc@immunant.com>
  Differential Revision: https://reviews.llvm.org/D49256
  llvm-svn: 336964
* Simplify recursive launder.invariant.group and strip (Piotr Padlewski, 2018-07-12; 1 file, -1/+39)
  Summary: This patch is crucial for proving equality of laundered/stripped pointers, e.g.:

    bool foo(A *a) {
      return a == std::launder(a);
    }

  Clang with -fstrict-vtable-pointers will emit something like:

    define dso_local zeroext i1 @_Z3fooP1A(%struct.A* %a) {
    entry:
      %c = bitcast %struct.A* %a to i8*
      %call = tail call i8* @llvm.launder.invariant.group.p0i8(i8* %c)
      %0 = bitcast %struct.A* %a to i8*
      %1 = tail call i8* @llvm.strip.invariant.group.p0i8(i8* %0)
      %2 = tail call i8* @llvm.strip.invariant.group.p0i8(i8* %call)
      %cmp = icmp eq i8* %1, %2
      ret i1 %cmp
    }

  Because %2 can be replaced with @llvm.strip.invariant.group(%0), and %2 and %1 will produce the same value (because strip is readnone), we can replace the compare with true.

  Reviewers: rsmith, hfinkel, majnemer, amharc, kuhar
  Subscribers: llvm-commits, hiraditya
  Differential Revision: https://reviews.llvm.org/D47423
  llvm-svn: 336963
* [InstCombine] Simplify isKnownNegation (Fangrui Song, 2018-07-12; 1 file, -5/+2)
  llvm-svn: 336957
* [X86] Add AVX512 equivalents of some isel patterns so we get EVEX instructions. (Craig Topper, 2018-07-12; 2 files, -17/+48)
  These are the patterns for matching fceil, ffloor, and sqrt to intrinsic instructions if they have a MOVSS/SD.
  llvm-svn: 336954
* Revert r336950 and r336951 "[X86] Add AVX512 equivalents of some isel patterns so we get EVEX instructions." and "foo" (Craig Topper, 2018-07-12; 2 files, -48/+17)
  One of them had a bad title and they should have been squashed.
  llvm-svn: 336953
* Remove redundant *_or_null checks; NFC (George Burgess IV, 2018-07-12; 1 file, -2/+2)
  For the first one, we dereference `NewDef` right before the `if` anyway. For the second, we shouldn't have NULL users().
  llvm-svn: 336952
* [X86] Add AVX512 equivalents of some isel patterns so we get EVEX instructions. (Craig Topper, 2018-07-12; 1 file, -0/+31)
  These are the patterns for matching fceil, ffloor, and sqrt to intrinsic instructions if they have a MOVSS/SD.
  llvm-svn: 336951