path: root/llvm/lib/CodeGen
Commit message | Author | Age | Files | Lines
* [DEBUGINFO] Reposting r352642: Handle restore instructions in LiveDebugValues | Wolfgang Pieb | 2019-02-04 | 3 | -92/+216
| | | | | | | | | | | | | | | | | The LiveDebugValues pass recognizes spills but not restores, which can cause large gaps in location information for some variables, depending on control flow. This patch makes LiveDebugValues recognize restores and generate appropriate DBG_VALUE instructions. This patch was posted previously as r352642 and reverted in r352666 due to buildbot errors. A missing return statement was the cause of the failures. Reviewers: aprantl, NicolaPrica Differential Revision: https://reviews.llvm.org/D57271 llvm-svn: 353089
* GlobalISel: Fix CSE handling of buildConstant | Matt Arsenault | 2019-02-04 | 2 | -40/+51
| | | | | | | | | | | | | | | | This fixes two problems with CSE done in buildConstant. First, this would hit an assert when used with a vector result type. Solve this by allowing CSE on the vector elements, but not on the result vector for now. Second, this was also performing the CSE based on the input ConstantInt pointer. The underlying buildConstant could potentially convert the constant depending on the result type, giving a different ConstantInt*. Stop allowing the APInt and ConstantInt forms from automatically casting to the result type to avoid any similar problems in the future. llvm-svn: 353077
* GlobalISel: Fix moreElementsToNextPow2 | Matt Arsenault | 2019-02-04 | 2 | -8/+7
| | | | | | | | | This was completely broken. The condition was inverted, and changed the element type for vectors of pointers. Fixes bug 40592. llvm-svn: 353069
* Revert "[GlobalISel] Add IRTranslator support for G_FFLOOR" | Jessica Paquette | 2019-02-04 | 1 | -5/+0
| | | | | | | | | This reverts commit 8bbd570fd5205a04d88d2e5513a6e4adbd028039. Apparently adding ffloor breaks AMDGPU somehow, so I need to back this out while I look into it. llvm-svn: 353064
* [Intrinsic] Unsigned Fixed Point Multiplication Intrinsic | Leonard Chan | 2019-02-04 | 9 | -37/+90
| | | | | | | | | | | | | Add an intrinsic that takes two unsigned integers, with their scale provided as the third argument, and performs fixed point multiplication on them. This is part of implementing fixed point arithmetic in clang, where some of the more complex operations will be implemented as intrinsics. Differential Revision: https://reviews.llvm.org/D55625 llvm-svn: 353059
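A rough illustration of the description above (this snippet is not from the patch; the intrinsic name and signature are assumed to mirror the existing signed llvm.smul.fix and should be checked against LangRef):

    ; Multiply two unsigned fixed point values that carry 4 fractional bits.
    ; The third operand is the (constant) scale.
    declare i32 @llvm.umul.fix.i32(i32 %a, i32 %b, i32 %scale)

    define i32 @umul_fix4(i32 %a, i32 %b) {
      %r = call i32 @llvm.umul.fix.i32(i32 %a, i32 %b, i32 4)
      ret i32 %r
    }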
* [GlobalISel] Add IRTranslator support for G_FFLOOR | Jessica Paquette | 2019-02-04 | 1 | -0/+5
| | | | | | | | | | Follow-up to https://reviews.llvm.org/D57484. Adds G_FFLOOR to translateKnownIntrinsic and updates arm64-irtranslator.ll. Differential Revision: https://reviews.llvm.org/D57485 llvm-svn: 353058
* [CGP] use IRBuilder to simplify code | Sanjay Patel | 2019-02-04 | 1 | -26/+25
| | | | | | | | | | | | | | | | This is no-functional-change-intended although there could be intermediate variations caused by a difference in the debug info produced by setting that from the builder's insertion point. I'm updating the IR test file associated with this code just to show that the naming differences from using the builder are visible. The motivation for adding a helper function is that we are likely to extend this code to deal with other overflow ops. llvm-svn: 353056
* GlobalISel: Fix formatting of debug output | Matt Arsenault | 2019-02-04 | 1 | -3/+3
| | | | | | | There was a missing space before the instruction name, and the newline is redundant since MI::print by default adds one. llvm-svn: 353046
* [DAGCombine] Add ADD(SUB,SUB) combines | Simon Pilgrim | 2019-02-04 | 1 | -0/+12
| | | | | | | | Noticed while investigating PR40483, and fixes the basic test case from the bug - but not a more general case. We're pretty weak at dealing with ADD/SUB combines compared to the SimplifyAssociativeOrCommutative/SimplifyUsingDistributiveLaws abilities that InstCombine can manage. llvm-svn: 353044
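For context only: the kind of reassociation an ADD(SUB,SUB) combine targets can be written out as plain algebra. The exact folds added by the patch may differ, so treat these identities as an assumed example rather than the implemented patterns:

    ; (a - b) + (b - c) == a - c   (likewise, (a - b) + (c - a) == c - b)
    define i32 @add_of_subs(i32 %a, i32 %b, i32 %c) {
      %s1 = sub i32 %a, %b
      %s2 = sub i32 %b, %c
      %r  = add i32 %s1, %s2   ; foldable to: sub i32 %a, %c
      ret i32 %r
    }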
* [AsmPrinter] Remove hidden flag -print-schedule. | Andrea Di Biagio | 2019-02-04 | 4 | -124/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch removes hidden codegen flag -print-schedule, effectively reverting the logic originally committed as r300311 (https://llvm.org/viewvc/llvm-project?view=revision&revision=300311). Flag -print-schedule was originally introduced by r300311 to address PR32216 (https://bugs.llvm.org/show_bug.cgi?id=32216). That bug was about adding "Better testing of schedule model instruction latencies/throughputs". These days, we can use llvm-mca to test scheduling models. So there is no longer a need for flag -print-schedule in LLVM. The main use case for PR32216 is now addressed by llvm-mca. Flag -print-schedule is mainly used for debugging purposes, and it is only actually used by x86 specific tests. We already have extensive (latency and throughput) tests under "test/tools/llvm-mca" for X86 processor models. That means most (if not all) existing -print-schedule tests for X86 are redundant. When flag -print-schedule was first added to LLVM, several files had to be modified; a few APIs gained new arguments (see for example method MCAsmStreamer::EmitInstruction), and MCSubtargetInfo/TargetSubtargetInfo gained a couple of getSchedInfoStr() methods. Method getSchedInfoStr() had to originally work for both MCInst and MachineInstr. The original implementation of getSchedInfoStr() introduced a subtle layering violation (reported as PR37160 and then fixed/worked-around by r330615). In retrospect, that new API could have been designed more optimally. We can always query MCSchedModel to get the latency and throughput. More importantly, the "sched-info" string should not have been generated by the subtarget. Note, r317782 fixed an issue where "print-schedule" didn't work very well in the presence of inline assembly. That commit is also reverted by this change. Differential Revision: https://reviews.llvm.org/D57244 llvm-svn: 353043
* [SelectionDAG] Add a BaseIndexOffset::print() method for debugging. | Clement Courbet | 2019-02-04 | 1 | -0/+19
| | | | llvm-svn: 353028
* [CGP] adjust target constraints for forming uaddo | Sanjay Patel | 2019-02-03 | 1 | -8/+11
| | | | | | | | | | | | | | | | | | | There are 2 changes visible here: 1. There's no reason to limit this transform based on number of condition registers. That diff allows PPC to produce slightly better (dot-instructions should be generally good) code. Note: someone that cares about PPC codegen might want to look closer at that output because it seems like we could still improve this. 2. We (probably?) should not bother trying to form uaddo (or other overflow ops) when there's no target support for such an op. This goes beyond checking whether the op is expanded because both PPC and AArch64 show better codegen for standard types regardless of whether the op is legal/custom. llvm-svn: 353001
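A sketch of the IR shape involved (hand-written here, not a test from the patch): CGP recognizes an add whose result is compared against one of its operands and, when the target supports the overflow op, rewrites the pair to the overflow intrinsic.

    ; An unsigned add overflows exactly when the result is smaller than an operand.
    define i1 @overflows(i32 %x, i32 %y) {
      %sum = add i32 %x, %y
      %ov  = icmp ult i32 %sum, %x
      ret i1 %ov
    }
    ; ...can become roughly:
    ;   %pair = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %x, i32 %y)
    ;   %ov   = extractvalue { i32, i1 } %pair, 1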
* [CGP] refactor optimizeCmpExpression (NFCI) | Sanjay Patel | 2019-02-03 | 1 | -36/+38
| | | | | | | | | | | | | | | This is not truly NFC because we are bailing out without a TLI now. That should not be a real concern though because there should be a TLI in any real-world scenario. That seems better than passing around a pointer and then checking it for null-ness all over the place. The motivation is to fix what appears to be an unintended restriction on the uaddo transform - hasMultipleConditionRegisters() shouldn't be a reason to limit the transform. llvm-svn: 352988
* GlobalISel: Implement widenScalar for G_UNMERGE_VALUES | Matt Arsenault | 2019-02-03 | 1 | -39/+83
| | | | | | | | | For the scalar case only. Also move the similar G_MERGE_VALUES handling to a separate function and cleanup to make them look more similar. llvm-svn: 352979
* GlobalISel: Implement widenScalar for G_EXTRACT vector sources | Matt Arsenault | 2019-02-02 | 1 | -0/+26
| | | | | | Handle the basic element extract case. llvm-svn: 352978
* GlobalISel: Legalization for inttoptr/ptrtoint | Matt Arsenault | 2019-02-02 | 2 | -4/+46
| | | | llvm-svn: 352973
* [SDAG] Add SDNode/SDValue getConstantOperandAPInt helper. NFCI. | Simon Pilgrim | 2019-02-02 | 1 | -1/+1
| | | | | | | | We already have the getConstantOperandVal helper which returns a uint64_t, but along comes the fuzzer and inserts an i128 -1 constant or something and the whole thing asserts. I've updated a few obvious cases, and tried to make use of the const reference where possible, but there's more to do. A number of existing oss-fuzz tickets should be fixed if we start using APInt and perform value clamping where necessary. llvm-svn: 352961
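A minimal sketch of the difference (illustrative only, not code from the patch; the helper name is taken from the commit title):

    #include "llvm/CodeGen/SelectionDAGNodes.h"
    using namespace llvm;

    // getConstantOperandVal(Idx) returns uint64_t and asserts when the constant
    // does not fit in 64 bits (e.g. an i128 -1 from a fuzzer input). The APInt
    // accessor works at any bit width.
    static bool isAllOnesImmOperand(SDValue N, unsigned Idx) {
      const APInt &Imm = N->getConstantOperandAPInt(Idx);
      return Imm.isAllOnesValue();
    }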
* [CodeGen] Be as conservative about atomic accesses as for volatile | Philip Reames | 2019-02-01 | 3 | -2/+7
| | | | | | | | | | | | Background: At the moment, we record the AtomicOrdering of an access in the MMO, but also mark any atomic access as volatile in SelectionDAG. I'm working towards separating that. See https://reviews.llvm.org/D57601 for context. Update all usages of isVolatile in lib/CodeGen to preserve behaviour once atomic MMOs stop being also volatile. This is NFC in its current form, but is essential for correctness once we make that final change. It is useful to keep in mind that AtomicSDNode is not a parent of LoadSDNode, StoreSDNode, or LSBaseSDNode. As a result, any call to isVolatile on one of those static types doesn't need a companion isAtomic check. We should probably adjust that class hierarchy long term, but for now, that separation is useful. I'm deliberately being conservative about handling. I want the change to stop adding volatile to be NFC itself, and then will work through places where we can be less conservative for atomics one by one in separate changes w/tests. Differential Revision: https://reviews.llvm.org/D57596 llvm-svn: 352937
* [COFF, ARM64] Fix localaddress to handle stack realignment and variable size objects | Mandeep Singh Grang | 2019-02-01 | 2 | -7/+1
| | | | | | | | | | | | | | | | Summary: This fixes using the correct stack registers for SEH when stack realignment is needed or when variable size objects are present. Reviewers: rnk, efriedma, ssijaric, TomTan Reviewed By: rnk, efriedma Subscribers: javed.absar, kristof.beyls, llvm-commits Differential Revision: https://reviews.llvm.org/D57183 llvm-svn: 352923
* [opaque pointer types] Pass value type to GetElementPtr creation. | James Y Knight | 2019-02-01 | 1 | -4/+4
| | | | | | | | | This cleans up all GetElementPtr creation in LLVM to explicitly pass a value type rather than deriving it from the pointer's element-type. Differential Revision: https://reviews.llvm.org/D57173 llvm-svn: 352913
* [opaque pointer types] Pass value type to LoadInst creation. | James Y Knight | 2019-02-01 | 11 | -29/+41
| | | | | | | | | This cleans up all LoadInst creation in LLVM to explicitly pass the value type rather than deriving it from the pointer's element-type. Differential Revision: https://reviews.llvm.org/D57172 llvm-svn: 352911
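A small sketch of the style this change moves toward (a hypothetical helper, not code from the patch): the loaded value type is spelled out instead of being derived from the pointer operand's pointee type.

    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    static Value *loadCounter(IRBuilder<> &B, Value *Ptr) {
      // Before: B.CreateLoad(Ptr) inferred the result type from Ptr's pointee type.
      // After: the type is explicit, which keeps working once pointee types go away.
      return B.CreateLoad(B.getInt32Ty(), Ptr, "counter");
    }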
* [opaque pointer types] Pass function types to CallInst creation. | James Y Knight | 2019-02-01 | 5 | -14/+12
| | | | | | | | | This cleans up all CallInst creation in LLVM to explicitly pass a function type rather than deriving it from the pointer's element-type. Differential Revision: https://reviews.llvm.org/D57170 llvm-svn: 352909
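In the same spirit, a hypothetical call-emission helper (not from the patch) now receives and forwards the callee's FunctionType explicitly:

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    static CallInst *emitCall(IRBuilder<> &B, FunctionType *FTy, Value *Callee,
                              ArrayRef<Value *> Args) {
      // Before: B.CreateCall(Callee, Args) recovered FTy from Callee's pointer type.
      return B.CreateCall(FTy, Callee, Args);
    }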
* [DWARF v5] Fix DWARF emitter and consumer to produce/expect a uleb for a location description's length. | Wolfgang Pieb | 2019-02-01 | 1 | -2/+4
| | | | | | | | | Reviewer: davide, JDevliegere Differential Revision: https://reviews.llvm.org/D57550 llvm-svn: 352889
* [SDAG] improve variable names; NFC | Sanjay Patel | 2019-02-01 | 1 | -23/+22
| | | | | | | | The version of FoldConstantArithmetic() that takes arbitrary nodes was confusingly naming those nodes as constants when they might not be; also "Cst" reads like "Cast". llvm-svn: 352884
* [TargetLowering] try harder to determine undef elements of vector binops | Sanjay Patel | 2019-02-01 | 1 | -7/+61
| | | | | | | | | | | | | | | | | | | | | | | | | | | | This might be the start of tracking all vector element constants generally if we take it to its logical conclusion, but let's stop here and make sure this is correct/beneficial so far. The affected tests require a convoluted path before they get simplified currently because we don't call SimplifyDemandedVectorElts() from binops directly and don't modify the binop operands directly in SimplifyDemandedVectorElts(). That's why the tests all have a trailing shuffle to induce a chain reaction of transforms. So something like this is happening: 1. Improve the knowledge of undefs in the binop via a SimplifyDemandedVectorElts() call that originates from a shuffle. 2. Transfer that undef knowledge back to the shuffle mask user as more undef lanes. 3. Combine the modified shuffle by calling SimplifyDemandedVectorElts() again. 4. Translate the improved shuffle mask as undemanded lanes of build vector constants causing those to become full undef constants. 5. Simplify the binop now that it has a full undef operand. As we can see from the unchanged 'and' and 'or' tests, tracking undefs alone isn't a full solution. We would need to track zero and all-ones constants to improve those opcodes. We'd probably need to track NaN for FP ops too (assuming we don't have fast-math-flags set). Differential Revision: https://reviews.llvm.org/D57066 llvm-svn: 352880
* [CodeGen] Don't scavenge non-saved regs in exception throwing functions | Oliver Stannard | 2019-02-01 | 1 | -7/+9
| | | | | | | | | | | | | | | | Previously, LiveRegUnits was assuming that if a block has no successors and does not return, then no registers are live at the end of it (because the end of the block is unreachable). This was causing the register scavenger to use callee-saved registers to materialise stack frame addresses without saving them in the prologue. This would normally be fine, because the end of the block is unreachable, but this is not legal if the block ends by throwing a C++ exception. If this happens, the scratch register will be modified, but its previous value won't be preserved, so it doesn't get restored by the exception unwinder. Differential revision: https://reviews.llvm.org/D57381 llvm-svn: 352844
* [SelectionDAG] Support promotion of the FPOWI integer operand | Alex Bradbury | 2019-02-01 | 2 | -0/+8
| | | | | | | | | | | | | For targets where i32 is not a legal type (e.g. 64-bit RISC-V), LegalizeIntegerTypes must promote the integer operand of ISD::FPOWI. As this is a signed value, it should be sign-extended. This patch enables all tests in test/CodeGen/RISCV/float-intrinsics.ll for RV64, as prior to this patch that file couldn't be compiled for RV64 due to an assertion when performing codegen for fpowi. Differential Revision: https://reviews.llvm.org/D54574 llvm-svn: 352832
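A sketch of the situation (the IR below is illustrative, not one of the enabled tests): llvm.powi takes a signed i32 exponent, and on RV64, where i32 is not a legal type, that operand must be widened to i64 by sign-extension so that negative exponents keep their value.

    declare double @llvm.powi.f64(double, i32)

    define double @powi(double %x, i32 %n) {
      ; With %n = -2, a zero-extending promotion would turn the exponent into a
      ; huge positive number; sign-extension preserves it.
      %r = call double @llvm.powi.f64(double %x, i32 %n)
      ret double %r
    }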
* [opaque pointer types] Add a FunctionCallee wrapper type, and use it. | James Y Knight | 2019-02-01 | 11 | -136/+36
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Recommit r352791 after tweaking DerivedTypes.h slightly, so that gcc doesn't choke on it, hopefully. Original Message: The FunctionCallee type is effectively a {FunctionType*,Value*} pair, and is a useful convenience to enable code to continue passing the result of getOrInsertFunction() through to EmitCall, even once pointer types lose their pointee-type. Then: - update the CallInst/InvokeInst instruction creation functions to take a Callee, - modify getOrInsertFunction to return FunctionCallee, and - update all callers appropriately. One area of particular note is the change to the sanitizer code. Previously, they had been casting the result of `getOrInsertFunction` to a `Function*` via `checkSanitizerInterfaceFunction`, and storing that. That would report an error if someone had already inserted a function declaration with a mismatching signature. However, in general, LLVM allows for such mismatches, as `getOrInsertFunction` will automatically insert a bitcast if needed. As part of this cleanup, cause the sanitizer code to do the same. (It will call its functions using the expected signature, however they may have been declared.) Finally, in a small number of locations, callers of `getOrInsertFunction` actually were expecting/requiring that a brand new function was being created. In such cases, I've switched them to Function::Create instead. Differential Revision: https://reviews.llvm.org/D57315 llvm-svn: 352827
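A minimal usage sketch of the API described above (the runtime function name here is made up for illustration):

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    static void emitTraceCall(Module &M, IRBuilder<> &B, Value *Arg) {
      FunctionType *FTy =
          FunctionType::get(B.getVoidTy(), {B.getInt64Ty()}, /*isVarArg=*/false);
      // getOrInsertFunction now hands back a {FunctionType*, Value*} pair that
      // can be passed straight to CreateCall, with no cast to Function*.
      FunctionCallee Trace = M.getOrInsertFunction("__my_trace_hook", FTy);
      B.CreateCall(Trace, {Arg});
    }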
* GlobalISel: Fix MMO creation with non-power-of-2 mem size | Matt Arsenault | 2019-01-31 | 1 | -4/+5
| | | | | | | It should probably just be mandatory for getTgtMemIntrinsic to return the alignment. llvm-svn: 352817
* Revert "[opaque pointer types] Add a FunctionCallee wrapper type, and use it." | James Y Knight | 2019-01-31 | 11 | -36/+136
| | | | | | | | | This reverts commit f47d6b38c7a61d50db4566b02719de05492dcef1 (r352791). Seems to run into compilation failures with GCC (but not clang, where I tested it). Reverting while I investigate. llvm-svn: 352800
* [DAGCombine] Avoid CombineZExtLogicopShiftLoad if there is free ZEXT | Guozhi Wei | 2019-01-31 | 1 | -0/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch fixes pr39098. For the attached test case, CombineZExtLogicopShiftLoad can optimize it to
      t25: i64 = Constant<1099511627775>
      t35: i64 = Constant<0>
      t0: ch = EntryToken
      t57: i64,ch = load<(load 4 from `i40* undef`, align 8), zext from i32> t0, undef:i64, undef:i64
      t58: i64 = srl t57, Constant:i8<1>
      t60: i64 = and t58, Constant:i64<524287>
      t29: ch = store<(store 5 into `i40* undef`, align 8), trunc to i40> t57:1, t60, undef:i64, undef:i64
  But later visitANDLike transforms it to
      t25: i64 = Constant<1099511627775>
      t35: i64 = Constant<0>
      t0: ch = EntryToken
      t57: i64,ch = load<(load 4 from `i40* undef`, align 8), zext from i32> t0, undef:i64, undef:i64
      t61: i32 = truncate t57
      t63: i32 = srl t61, Constant:i8<1>
      t64: i32 = and t63, Constant:i32<524287>
      t65: i64 = zero_extend t64
      t58: i64 = srl t57, Constant:i8<1>
      t60: i64 = and t58, Constant:i64<524287>
      t29: ch = store<(store 5 into `i40* undef`, align 8), trunc to i40> t57:1, t60, undef:i64, undef:i64
  This triggers CombineZExtLogicopShiftLoad again, causing a dead loop. Both forms should generate the same instructions, and the IR generated by CombineZExtLogicopShiftLoad looks cleaner. Since it is more difficult to prevent visitANDLike from doing its transform, this patch instead prevents CombineZExtLogicopShiftLoad from doing the transform if the ZExt is free. Differential Revision: https://reviews.llvm.org/D57491 llvm-svn: 352792
* [opaque pointer types] Add a FunctionCallee wrapper type, and use it. | James Y Knight | 2019-01-31 | 11 | -136/+36
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The FunctionCallee type is effectively a {FunctionType*,Value*} pair, and is a useful convenience to enable code to continue passing the result of getOrInsertFunction() through to EmitCall, even once pointer types lose their pointee-type. Then: - update the CallInst/InvokeInst instruction creation functions to take a Callee, - modify getOrInsertFunction to return FunctionCallee, and - update all callers appropriately. One area of particular note is the change to the sanitizer code. Previously, they had been casting the result of `getOrInsertFunction` to a `Function*` via `checkSanitizerInterfaceFunction`, and storing that. That would report an error if someone had already inserted a function declaration with a mismatching signature. However, in general, LLVM allows for such mismatches, as `getOrInsertFunction` will automatically insert a bitcast if needed. As part of this cleanup, cause the sanitizer code to do the same. (It will call its functions using the expected signature, however they may have been declared.) Finally, in a small number of locations, callers of `getOrInsertFunction` actually were expecting/requiring that a brand new function was being created. In such cases, I've switched them to Function::Create instead. Differential Revision: https://reviews.llvm.org/D57315 llvm-svn: 352791
* [DAG] Aggressively cleanup dangling node in CombineZExtLogicopShiftLoad. | Nirav Dave | 2019-01-31 | 1 | -0/+4
| | | | | | | | | | | | | While dangling nodes will eventually be pruned when they are considered, leaving them disables combines requiring single-use. Reviewers: Carrot, spatel, craig.topper, RKSimon, efriedma Subscribers: hiraditya, llvm-commits Differential Revision: https://reviews.llvm.org/D57520 llvm-svn: 352784
* [Intrinsic] Expand SMULFIX to MUL, MULH[US], or [US]MUL_LOHI on vector arguments | Leonard Chan | 2019-01-31 | 2 | -14/+21
| | | | | | | | | For zero scale SMULFIX, expand into MUL, which produces better code for X86. For vector arguments, expand into MUL if SMULFIX is provided with a zero scale. Otherwise, expand into MULH[US] or [US]MUL_LOHI. Differential Revision: https://reviews.llvm.org/D56987 llvm-svn: 352783
* Lower widenable_conditions in CGP | Philip Reames | 2019-01-31 | 1 | -0/+14
| | | | | This ensures that if we make it to the backend without lowering widenable_conditions first, we generate correct code. Doing it in CGP - instead of isel - lets us fold control flow before hitting block-local instruction selection. Differential Revision: https://reviews.llvm.org/D57473 llvm-svn: 352779
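For reference, a hand-written example of the pattern being lowered (not a test from the patch): @llvm.experimental.widenable.condition returns an i1 that is permitted to be lowered to true when nothing has widened it.

    declare i1 @llvm.experimental.widenable.condition()

    define void @guarded(i1 %cond) {
    entry:
      %wc = call i1 @llvm.experimental.widenable.condition()
      %guard = and i1 %cond, %wc
      br i1 %guard, label %ok, label %deopt
    ok:
      ret void
    deopt:
      ; deoptimization path
      ret void
    }
    ; If the call reaches CGP unlowered, %wc can simply become 'true' and the
    ; branch folds down to a test of %cond alone.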
* [SelectionDAG] Codesize: don't expand SHIFT to SHIFT_PARTS | Sjoerd Meijer | 2019-01-31 | 1 | -3/+7
| | | | | | | | | | | | | | | And instead just generate a libcall. My motivating example on ARM was a simple 'shl i64 %A, %B', for which the code bloat is quite significant. For other targets that also accept __int128/i128, such as AArch64 and X86, it is also beneficial in these cases to generate a libcall when optimising for minsize. On these 64-bit targets, the 64-bit shifts are of course unaffected because the SHIFT/SHIFT_PARTS lowering operation action is not set to custom/expand. Differential Revision: https://reviews.llvm.org/D57386 llvm-svn: 352736
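The motivating case from the commit message, written out. Assumed behaviour for illustration: on a 32-bit AEABI target at minsize this would now be emitted as a call to the runtime shift helper (e.g. __aeabi_llsl) instead of an inline SHL_PARTS expansion.

    define i64 @shl64(i64 %A, i64 %B) minsize optsize {
      %r = shl i64 %A, %B
      ret i64 %r
    }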
* Revert "Reapply "[CGP] Check for existing inttotpr before creating new one"" | David L. Jones | 2019-01-31 | 1 | -18/+4
| | | | | | | | This change reverts r351626. The changes in r351626 cause quadratic work in several cases. (See r351626 thread on llvm-commits for details.) llvm-svn: 352722
* GlobalISel: Handle odd splits in fewerElementsVector for load/store | Matt Arsenault | 2019-01-31 | 1 | -30/+173
| | | | llvm-svn: 352720
* GlobalISel: Implement narrowScalar for bswap | Matt Arsenault | 2019-01-31 | 1 | -0/+25
| | | | llvm-svn: 352719
* GlobalISel: Don't call changingInstruction before giving up | Matt Arsenault | 2019-01-31 | 1 | -1/+1
| | | | llvm-svn: 352718
* GlobalISel: Allow bitcount ops to have different result type | Matt Arsenault | 2019-01-31 | 1 | -4/+30
| | | | | | For AMDGPU the result is always 32-bit for 64-bit inputs. llvm-svn: 352717
* GlobalISel: Use helper function for MMO splitting | Matt Arsenault | 2019-01-31 | 2 | -26/+21
| | | | | Also fix an alignment bug in getMachineMemOperand. If the tracked value is null, the offset isn't tracked, so the base alignment needs to be reduced. llvm-svn: 352716
* GlobalISel: Fix creating MMOs with align 0 | Matt Arsenault | 2019-01-31 | 2 | -2/+6
| | | | llvm-svn: 352712
* [LegalizeVectorTypes] Allow illegal indices when splitting extract_vector_elt | Thomas Lively | 2019-01-31 | 1 | -1/+0
| | | | | | | | | | | | | | | | | Summary: Fixes PR40267, in which the removed assertion was triggering on perfectly valid IR. As far as I can tell, constant out of bounds indices should be allowed when splitting extract_vector_elt, since they will simply be propagated as out of bounds indices in the resulting split vector and handled appropriately elsewhere. Reviewers: aheejin Subscribers: dschuff, sbc100, jgravelle-google, hiraditya Differential Revision: https://reviews.llvm.org/D57471 llvm-svn: 352702
* [LegalizeTypes] Use report_fatal_error instead of llvm_unreachable in the default case of some type legalization handlers that can be reached with intrinsics with result or operands that aren't legal types. | Craig Topper | 2019-01-31 | 1 | -2/+3
| | | | | | | | These can be triggered by mistakenly using a 64-bit mode only intrinsic with -mtriple=i686. Using report_fatal_error gives a better experience for this mistake in release builds instead of probably crashing. We already do this for some of the vector type legalization handlers. llvm-svn: 352699
* [GlobalISel][AArch64] Select G_FEXP | Jessica Paquette | 2019-01-30 | 1 | -1/+6
| | | | | | | | | | | | | This teaches the legalizer to handle G_FEXP in AArch64. As a result, it also allows us to select G_FEXP. It updates the legalizer-info tests, adds a test for legalizing exp, and updates the existing fp tests to show that we can now select G_FEXP. https://reviews.llvm.org/D57483 llvm-svn: 352692
* [GlobalISel][LegalizerHelper] Add some missing MI change observer calls. | Amara Emerson | 2019-01-30 | 1 | -0/+2
| | | | | | No test as it's a preventative fix. llvm-svn: 352691
* MIR: Reject non-power-of-4 alignments in MMO parsing | Matt Arsenault | 2019-01-30 | 1 | -0/+4
| | | | llvm-svn: 352686
* [DAGCombiner] sub X, 0/1 --> add X, 0/-1 | Sanjay Patel | 2019-01-30 | 1 | -10/+22
| | | | | | | | | | This extends the existing transform for: add X, 0/1 --> sub X, 0/-1 ...to allow the sibling subtraction fold. This pattern could regress with the proposed change in D57401. llvm-svn: 352680
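One concrete instance of the sibling fold, assuming the subtrahend is known to be 0 or 1 (e.g. a zero-extended i1). This is an illustration, not a test from the patch:

    ; Subtracting a value that is 0 or 1 equals adding its sign-extension (0 or -1).
    define i32 @sub_bool(i32 %x, i1 %b) {
      %z = zext i1 %b to i32
      %r = sub i32 %x, %z      ; == add i32 %x, (sext i1 %b to i32)
      ret i32 %r
    }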
* [GlobalISel][AArch64] Add instruction selection support for @llvm.log2 | Jessica Paquette | 2019-01-30 | 1 | -1/+7
| | | | | | | | | | | This teaches GlobalISel to emit an RTLib call for @llvm.log2 when it encounters it. It updates the existing floating point tests to show that we don't fall back on the intrinsic and select the correct instructions. It also adds a legalizer test for G_FLOG2. https://reviews.llvm.org/D57357 llvm-svn: 352673