path: root/llvm/lib/CodeGen/SelectionDAG
Commit message [Author, Date, Files, Lines -/+]
...
* fix return values to match bool return type; NFC [Sanjay Patel, 2015-12-07, 1 file, -2/+2]
  llvm-svn: 254968
* AVX-512: Fixed masked load / store instruction selection for KNL. [Elena Demikhovsky, 2015-12-07, 1 file, -1/+4]
  Patterns were missing for the KNL target for <8 x i32> and <8 x float> masked
  load/store. The intrinsic comes with all legal types:
    <8 x float> @llvm.masked.load.v8f32(<8 x float>* %addr, i32 align, <8 x i1> %mask, <8 x float> %passThru)
  but still requires lowering, because VMASKMOVPS and VMASKMOVDQU32 work with
  512-bit vectors only. All data operands should be widened to 512-bit vectors,
  and the mask operand should be widened to v16i1 with zeroes.
  Differential Revision: http://reviews.llvm.org/D15265
  llvm-svn: 254909
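  For reference, a minimal IR sketch (hypothetical function and value names) of
  the kind of <8 x float> masked load this commit adds KNL patterns for,
  following the intrinsic signature quoted above:

    ; hypothetical example; the intrinsic signature follows the one in the message above
    declare <8 x float> @llvm.masked.load.v8f32(<8 x float>*, i32, <8 x i1>, <8 x float>)

    define <8 x float> @knl_masked_load(<8 x float>* %addr, <8 x i1> %mask, <8 x float> %passthru) {
      ; on KNL the data operands are widened to 512 bits (mask to v16i1) before selection
      %res = call <8 x float> @llvm.masked.load.v8f32(<8 x float>* %addr, i32 4, <8 x i1> %mask, <8 x float> %passthru)
      ret <8 x float> %res
    }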
* Replace uint16_t with the MCPhysReg typedef in many places. A lot of physical register arrays already use this typedef. [Craig Topper, 2015-12-05, 2 files, -8/+8]
  llvm-svn: 254843
* Normalize successors' probabilities when building MBBs for jump table. [Cong Hou, 2015-12-05, 1 file, -0/+2]
  llvm-svn: 254837
* raw_ostream: << operator for callables with raw_ostream argument [Matthias Braun, 2015-12-04, 1 file, -14/+4]
  This is a revised version of r254655 which uses a Printable wrapper class to
  avoid ambiguous overload problems.
  Differential Revision: http://reviews.llvm.org/D14348
  llvm-svn: 254681
* Revert "raw_ostream: << operator for callables with raw_stream argument" [Matthias Braun, 2015-12-03, 1 file, -3/+14]
  This commit provoked "error C2593: 'operator <<' is ambiguous" on MSVC.
  This reverts commit r254655.
  llvm-svn: 254661
* raw_ostream: << operator for callables with raw_stream argument [Matthias Braun, 2015-12-03, 1 file, -14/+3]
  This allows easier construction of print helpers. Example:

    Printable PrintLaneMask(unsigned LaneMask) {
      return Printable([LaneMask](raw_ostream &OS) {
        OS << format("%08X", LaneMask);
      });
    }
    // Usage:
    OS << PrintLaneMask(Mask);

  Differential Revision: http://reviews.llvm.org/D14348
  llvm-svn: 254655
* [X86] Part 1 to fix x86-64 fp128 calling convention. [Chih-Hung Hsieh, 2015-12-03, 10 files, -69/+260]
  Almost all these changes are conditional and only apply to the new x86-64 f128
  type configuration, which will be enabled in a follow-up patch. They are
  required together to make the new f128 work. If there is any error, we should
  fix or revert them as a whole. These changes should have no impact on current
  configurations.
  * Relax type legalization checks to accept the new f128 type configuration,
    whose TypeAction is TypeSoftenFloat, not TypeLegal, but which also has
    TLI.isTypeLegal true.
  * Relax GetSoftenedFloat to return in some cases an f128-typed SDValue, which
    is TLI.isTypeLegal but not "softened" to an i128 node.
  * Allow customized FABS, FNEG, FCOPYSIGN on the new f128 type configuration,
    to generate optimized bitwise operators for libm functions.
  * Enhance related Lower* functions to handle the f128 type.
  * Enhance DAGTypeLegalizer::run, SoftenFloatResult, and related functions to
    keep the new f128 type in register, and convert f128 operators to library
    calls.
  * Fix Combiner, Emitter, and Legalizer routines that did not handle the f128
    type.
  * Add ExpandConstant to handle i128 constants, and ExpandNode to handle the
    ISD::Constant node.
  * Add one more parameter to getCommonSubClass and firstCommonClass, to
    guarantee that the returned common subclass will contain the specified
    simple value type. This extra parameter is used by EmitCopyFromReg in
    InstrEmitter.cpp.
  * Fix an infinite loop in getTypeLegalizationCost when f128 is the value type.
  * Fix printOperand to handle a null operand.
  * Enhance the ISD::BITCAST node to handle an f128 constant.
  * Expand the new f128 type for BR_CC, SELECT_CC, SELECT, SETCC nodes.
  * Enhance X86AsmPrinter to emit f128 values in comments.
  Differential Revision: http://reviews.llvm.org/D15134
  llvm-svn: 254653
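  To make the scope concrete, a minimal sketch (hypothetical function name) of
  an f128 operation affected by this series; under the new configuration the
  value is kept in a register until the operation is converted into a soft-float
  runtime library call (e.g. __addtf3):

    define fp128 @add_f128(fp128 %a, fp128 %b) {
      ; kept as an f128 register value by the legalizer, then lowered to a
      ; library call rather than being softened to an i128 node
      %sum = fadd fp128 %a, %b
      ret fp128 %sum
    }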
* Move EH-specific helper functions to a more appropriate place [David Majnemer, 2015-12-02, 1 file, -1/+1]
  No functionality change is intended.
  llvm-svn: 254562
* Fix accidental off by one change [Fiona Glaser, 2015-12-02, 1 file, -1/+1]
  Didn't break any tests, but did unnecessary extra work.
  llvm-svn: 254529
* Scheduler / Regalloc: use unique_ptr[] instead of std::vector [Fiona Glaser, 2015-12-02, 1 file, -11/+13]
  vector.resize() is significantly slower than memset in many STLs and the cost
  of initializing these vectors is significant on targets with many registers.
  Since we don't need the overhead of a vector, use a simple unique_ptr instead.
  llvm-svn: 254526
* Fixed a failure in cost calculation for vector GEP [Elena Demikhovsky, 2015-12-01, 1 file, -8/+8]
  Cost calculation for a vector GEP failed due to an invalid cast of the GEP
  index operand. The bug is fixed and a test added.
  http://reviews.llvm.org/D14976
  llvm-svn: 254408
* Introduce new @llvm.get.dynamic.area.offset.i{32, 64} intrinsics. [Yury Gribov, 2015-12-01, 3 files, -0/+24]
  The @llvm.get.dynamic.area.offset.* intrinsic family is used to get the offset
  from the native stack pointer to the address of the most recent dynamic alloca
  on the caller's stack. These intrinsics are intended for use in combination
  with @llvm.stacksave and @llvm.stackrestore to get a pointer to the most
  recent dynamic alloca. This is useful, for example, for AddressSanitizer's
  stack unpoisoning routines.
  Patch by Max Ostapenko.
  Differential Revision: http://reviews.llvm.org/D14983
  llvm-svn: 254404
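  A rough usage sketch (hypothetical function name; the exact semantics are as
  described in the message above): the offset returned by the intrinsic is
  applied to the stack pointer obtained from @llvm.stacksave to locate the most
  recent dynamic alloca.

    declare i8* @llvm.stacksave()
    declare i64 @llvm.get.dynamic.area.offset.i64()

    define i8* @recent_dynamic_alloca(i64 %n) {
      %buf = alloca i8, i64 %n                    ; dynamic alloca
      %sp  = call i8* @llvm.stacksave()           ; native stack pointer
      %off = call i64 @llvm.get.dynamic.area.offset.i64()
      %ptr = getelementptr i8, i8* %sp, i64 %off  ; address of %buf
      ret i8* %ptr
    }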
* Extend debug info for function parameters in SDAG. [Evgeniy Stepanov, 2015-12-01, 1 file, -16/+11]
  SDAG currently can emit debug location for function parameters when an
  llvm.dbg.declare points to either a function argument SSA temp, or to an
  AllocaInst. This change extends this logic by adding a fallback case when
  neither of the above is true. This is required for SafeStack, which may copy
  the contents of a byval function argument into something that is not an
  alloca, and then describe the target as the new location of the said argument.
  llvm-svn: 254352
* Have 'optnone' respect the -fast-isel=false option. [Paul Robinson, 2015-11-30, 1 file, -3/+7]
  This is primarily useful for debugging optnone v. ISel issues.
  Differential Revision: http://reviews.llvm.org/D14792
  llvm-svn: 254335
* Use a lambda instead of std::bind and std::mem_fn I introduced in r254242. NFC [Craig Topper, 2015-11-29, 1 file, -2/+3]
  llvm-svn: 254260
* [SelectionDAG] Use std::any_of instead of a manually coded loop. NFC [Craig Topper, 2015-11-29, 1 file, -8/+4]
  llvm-svn: 254242
* [Stack realignment] Handling of aligned allocas. [Jonas Paulsson, 2015-11-28, 1 file, -13/+15]
  This patch implements dynamic realignment of stack objects for targets with a
  non-realigned stack pointer. Behaviour in FunctionLoweringInfo is changed so
  that for a target that has StackRealignable set to false, over-aligned static
  allocas are considered to be variable-sized objects and are handled with
  DYNAMIC_STACKALLOC nodes. It would be good to group aligned allocas into a
  single big alloca as an optimization, but this has yet to be done.
  SystemZ benefits from this, due to its stack frame layout.
  New tests SystemZ/alloca-03.ll for aligned allocas, and SystemZ/alloca-04.ll
  for the "no-realign-stack" attribute on functions.
  Review and help from Ulrich Weigand and Hal Finkel.
  llvm-svn: 254227
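  A minimal sketch (hypothetical names and alignment values) of the kind of
  over-aligned static alloca that, on a target with StackRealignable set to
  false, is now treated like a variable-sized object and lowered through
  DYNAMIC_STACKALLOC:

    declare void @use(i8*)

    define void @over_aligned() {
      ; the requested alignment exceeds what the (hypothetical) target's stack
      ; guarantees, so the object is dynamically realigned instead of being
      ; placed as a fixed-offset static alloca
      %buf = alloca [64 x i8], align 128
      %p = getelementptr inbounds [64 x i8], [64 x i8]* %buf, i64 0, i64 0
      call void @use(i8* %p)
      ret void
    }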
* Expose isXxxConstant() functions from SelectionDAGNodes.h (NFC) [Artyom Skrobov, 2015-11-25, 2 files, -20/+20]
  Summary: Many target lowerings copy-paste the code to test SDValues for known
  constants. This code can instead be shared in SelectionDAG.cpp, and reused in
  the targets.
  Reviewers: MatzeB, andreadb, tstellarAMD
  Subscribers: arsenm, jyknight, llvm-commits
  Differential Revision: http://reviews.llvm.org/D14945
  llvm-svn: 254085
* Fix some places where we were assuming that memory type had been legalized to a simple type when lowering a truncating store of a vector type. [Eric Christopher, 2015-11-25, 2 files, -3/+2]
  In this case, for an EVT we'll return Expand, as we should in all of the cases
  anyhow. The test case triggered at the one in VectorLegalizer::LegalizeOp;
  inspection found the rest.
  llvm-svn: 254061
* Let SelectionDAG start to use probability-based interface to add successors. [Cong Hou, 2015-11-24, 4 files, -214/+226]
  The patch in http://reviews.llvm.org/D13745 is broken into four parts:
  1. New interfaces without functional changes.
  2. Use new interfaces in SelectionDAG, while in other passes treat
     probabilities as weights.
  3. Use new interfaces in all other passes.
  4. Remove old interfaces.
  This is the second patch above. In this patch SelectionDAG starts to use
  probability-based interfaces in MBB to add successors, but other MC passes are
  still using weight-based interfaces. Therefore, we need to maintain a correct
  weight list in MBB even when probability-based interfaces are used. This is
  done by updating the weight list in the probability-based interfaces, treating
  the numerator of each probability as a weight. This change affects many test
  cases that check successor weight values. I will update those test cases once
  this patch looks good to you.
  Differential revision: http://reviews.llvm.org/D14361
  llvm-svn: 253965
* Remove duplicate getValueType() calls. NFCI. [Simon Pilgrim, 2015-11-22, 1 file, -2/+2]
  llvm-svn: 253823
* [DAGCombiner] Bugfix for lost chain dependency. [Jonas Paulsson, 2015-11-21, 1 file, -13/+7]
  When MergeConsecutiveStores() combines two loads and two stores into wider
  loads and stores, the chain users of both of the original loads must be
  transferred to the new load, because it may be that a chain user only depends
  on one of the loads.
  New test case: test/CodeGen/SystemZ/dag-combine-01.ll
  Reviewed by James Y Knight.
  Bugzilla: https://llvm.org/bugs/show_bug.cgi?id=25310#c6
  llvm-svn: 253779
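  For orientation, a minimal sketch (hypothetical function name) of the adjacent
  load/store pattern that MergeConsecutiveStores may combine into one wider load
  and store; after merging, anything chained on either original load must be
  re-chained to the merged load.

    define void @copy_pair(i32* %src, i32* %dst) {
      %src1 = getelementptr i32, i32* %src, i64 1
      %dst1 = getelementptr i32, i32* %dst, i64 1
      %a = load i32, i32* %src       ; the two consecutive loads ...
      %b = load i32, i32* %src1
      store i32 %a, i32* %dst        ; ... and stores may be merged into a
      store i32 %b, i32* %dst1       ; single wider (e.g. i64) load/store
      ret void
    }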
* Partially revert r253662: some unrelated work was accidentally committed with it. Sorry. [Daniel Sanders, 2015-11-20, 1 file, -1/+0]
  llvm-svn: 253663
* Revert the revert 253497 and 253539 - These commits aren't the cause of the clang-cmake-mips failures. Sorry for the noise. [Daniel Sanders, 2015-11-20, 1 file, -0/+1]
  llvm-svn: 253662
* X86: More efficient legalization of wide integer compares [Hans Wennborg, 2015-11-19, 5 files, -1/+79]
  In particular, this makes the code for 64-bit compares on 32-bit targets much
  more efficient.

  Example:

    define i32 @test_slt(i64 %a, i64 %b) {
    entry:
      %cmp = icmp slt i64 %a, %b
      br i1 %cmp, label %bb1, label %bb2
    bb1:
      ret i32 1
    bb2:
      ret i32 2
    }

  Before this patch:

    test_slt:
      movl 4(%esp), %eax
      movl 8(%esp), %ecx
      cmpl 12(%esp), %eax
      setae %al
      cmpl 16(%esp), %ecx
      setge %cl
      je .LBB2_2
      movb %cl, %al
    .LBB2_2:
      testb %al, %al
      jne .LBB2_4
      movl $1, %eax
      retl
    .LBB2_4:
      movl $2, %eax
      retl

  After this patch:

    test_slt:
      movl 4(%esp), %eax
      movl 8(%esp), %ecx
      cmpl 12(%esp), %eax
      sbbl 16(%esp), %ecx
      jge .LBB1_2
      movl $1, %eax
      retl
    .LBB1_2:
      movl $2, %eax
      retl

  Differential Revision: http://reviews.llvm.org/D14496
  llvm-svn: 253572
* Revert "Change memcpy/memset/memmove to have dest and source alignments." [Pete Cooper, 2015-11-19, 1 file, -34/+30]
  This reverts commit r253511.
  This likely broke the bots in
  http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202
  http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787
  llvm-svn: 253543
* Change memcpy/memset/memmove to have dest and source alignments. [Pete Cooper, 2015-11-18, 1 file, -30/+34]
  Note, this was reviewed (and more details are in)
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  These intrinsics currently have an explicit alignment argument which is
  required to be a constant integer. It represents the alignment of the source
  and dest, and so must be the minimum of those.

  This change allows source and dest to each have their own alignments by using
  the alignment attribute on their arguments. The alignment argument itself is
  removed.

  There are a few places in the code for which the code needs to be checked by
  an expert as to whether using only src/dest alignment is safe. For those
  places, they currently take the minimum of src/dest alignments, which matches
  the current behaviour.

  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)
  will now read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)

  For out-of-tree owners, I was able to strip alignment from calls using sed by
  replacing:
    (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)
  with:
    $1i1 false)
  and similarly for memmove and memcpy. I then added back in alignment to test
  cases which needed it.

  A similar commit will be made to clang which actually has many differences in
  alignment as now IRBuilder can generate different source/dest alignments on
  calls.

  In IRBuilder itself, a new argument was added. Instead of calling:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)
  you now call:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)

  There is a temporary class (IntegerAlignment) which takes the source alignment
  and rejects implicit conversion from bool. This is to prevent isVolatile here
  from passing its default parameter to the source alignment.

  Note, changes in future can now be made to codegen. I didn't change anything
  here, but this change should enable better memcpy code sequences.

  Reviewed by Hal Finkel.
  llvm-svn: 253511
* [DAGCombiner] Vector constant folding for comparisons [Simon Pilgrim, 2015-11-18, 1 file, -6/+14]
  This patch adds support for vector constant folding of integer/float
  comparisons. This requires FoldConstantVectorArithmetic to support scalar
  constant operands (in this case ISD::CONDCODE). In future we should be able to
  support other scalar constant types as necessary (and possibly start calling
  FoldConstantVectorArithmetic for all node creations).
  Differential Revision: http://reviews.llvm.org/D14683
  llvm-svn: 253504
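  To illustrate the semantics of the new fold, a hypothetical IR-level analogue
  (in the patch itself the folding happens on constant SETCC nodes in the DAG):

    define <4 x i1> @cmp_constant_vectors() {
      ; both operands are constant, so the comparison can fold to the constant
      ; vector <i1 true, i1 true, i1 false, i1 false>
      %c = icmp slt <4 x i32> <i32 0, i32 1, i32 2, i32 3>, <i32 2, i32 2, i32 2, i32 2>
      ret <4 x i1> %c
    }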
* [PGO] Value profiling support [Betul Buyukkurt, 2015-11-18, 1 file, -1/+2]
  This change introduces an instrumentation intrinsic instruction for value
  profiling purposes, the lowering of the instrumentation intrinsic, and raw
  reader updates. The raw profile data files for llvm-profdata testing are
  updated.
  llvm-svn: 253484
* [SelectionDAGBuilder] Make sure DemoteReg ends up in right reg-class. [Jonas Paulsson, 2015-11-18, 1 file, -1/+2]
  The virtual register containing the address for a returned value on the stack
  should in the DAG be represented with a CopyFromReg node and not a Register
  node. Otherwise, InstrEmitter will not make sure that it ends up in the right
  register class for the target instruction. SystemZ needs this, because the reg
  class for address registers is a subset of the general 64-bit register class.
  test/CodeGen/SystemZ/args-07.ll and args-04.ll updated to run with
  -verify-machineinstrs.
  Reviewed by Hal Finkel.
  llvm-svn: 253461
* [WinEH] Move WinEHFuncInfo from MachineModuleInfo to MachineFunction [Reid Kleckner, 2015-11-17, 2 files, -26/+5]
  Summary: Now that there is a one-to-one mapping from MachineFunction to
  WinEHFuncInfo, we don't need to use a DenseMap to select the right
  WinEHFuncInfo for the current funclet.
  The main challenge here is that X86WinEHStatePass is an IR pass that doesn't
  have access to the MachineFunction. I gave it its own WinEHFuncInfo object
  that it uses to calculate state numbers, which it then throws away. As long as
  nobody creates or removes EH pads between this pass and SDAG construction, we
  will get the same state numbers.
  The other thing X86WinEHStatePass does is to mark the EH registration node.
  Instead of communicating which alloca was the registration through
  WinEHFuncInfo, I added the llvm.x86.seh.ehregnode intrinsic. This intrinsic
  generates no code and simply marks the alloca in use.
  Reviewers: JCTremoulet
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D14668
  llvm-svn: 253378
* Lower statepoints with multi-def targets. [Pat Gavlin, 2015-11-17, 1 file, -7/+11]
  Statepoint lowering currently expects that the target method of a statepoint
  only defines a single value. This precludes using statepoints with ABIs that
  return values in multiple registers (e.g. the SysV AMD64 ABI). This change
  adds support for lowering statepoints with multi-def targets.
  llvm-svn: 253339
* [SDAG] Fix expansion of BITREVERSE [James Molloy, 2015-11-13, 1 file, -3/+5]
  Richard Trieu noted that UBSan detected an overflowing shift, and the obvious
  fix caused a crash.
  What was happening was that the shiftee (1U) was indeed too small for the
  possible range of shifts it had to handle, but also we were using
  "VT.getSizeInBits()" to get the maximum type bitwidth, but we wanted
  "VT.getScalarSizeInBits()" to get the vector lane size instead of the entire
  vector size.
  Use an APInt for the shift and VT.getScalarSizeInBits().
  llvm-svn: 253023
* Revert "Remove unnecessary call to getAllocatableRegClass" [Tom Stellard, 2015-11-12, 1 file, -8/+4]
  This reverts commit r252565.
  This also includes the revert of the commit mentioned below in order to avoid
  breaking tests in AMDGPU:
  Revert "AMDGPU: Set isAllocatable = 0 on VS_32/VS_64"
  This reverts commit r252674.
  llvm-svn: 252956
* [SDAG] Introduce a new BITREVERSE node along with a corresponding LLVM intrinsic [James Molloy, 2015-11-12, 6 files, -1/+62]
  Several backends have instructions to reverse the order of bits in an integer.
  Conceptually matching such patterns is similar to @llvm.bswap, and it was
  mentioned in http://reviews.llvm.org/D14234 that it would be best if these
  patterns were matched in InstCombine instead of reimplemented in every
  different target.
  This patch introduces an intrinsic @llvm.bitreverse.i* that operates similarly
  to @llvm.bswap. For plumbing purposes there is also a new ISD node
  ISD::BITREVERSE, with simple expansion and promotion support.
  The intention is that InstCombine's BSWAP detection logic will be extended to
  support BITREVERSE too, and @llvm.bitreverse intrinsics emitted (if the
  backend supports lowering it efficiently).
  llvm-svn: 252878
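  A small sketch (hypothetical function name) of how the new intrinsic is
  expected to appear in IR, analogous to @llvm.bswap:

    declare i32 @llvm.bitreverse.i32(i32)

    define i32 @reverse_bits(i32 %x) {
      ; reverses bit order: bit 0 <-> bit 31, bit 1 <-> bit 30, ...
      %r = call i32 @llvm.bitreverse.i32(i32 %x)
      ret i32 %r
    }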
* LegalizeDAG: Fix and improve FCOPYSIGN/FABS legalization [Matthias Braun, 2015-11-12, 1 file, -73/+145]
  - Factor out code to query and modify the sign bit of a floating-point value
    as an integer. This also works if none of the target's integer types is big
    enough to hold all bits of the floating-point value.
  - Legalize FABS(x) as FCOPYSIGN(x, 0.0) if FCOPYSIGN is available, otherwise
    perform bit manipulation on the sign bit. The previous code used
    "x >u 0 ? x : -x", which is incorrect for x being -0.0! It also takes 34
    instructions on ARM Cortex-M4. With this patch we only require 5:
      vldr d0, LCPI0_0
      vmov r2, r3, d0
      lsrs r2, r3, #31
      bfi r1, r2, #31, #1
      bx lr
    (This could be further improved if the compiler would recognize that r2, r3
    is zero.)
  - Only lower FCOPYSIGN(x, y) = sign(y) ? -FABS(x) : FABS(x) if FABS is
    available, otherwise perform bit manipulation on the sign bit.
  - Perform the sign test by masking out the sign bit and comparing with 0
    rather than shifting the sign bit to the highest position and testing for
    "<s 0". For x86 copysignl (on 80-bit values) this gets us:
      testl $32768, %eax
    rather than:
      shlq $48, %rax
      sets %al
      testb %al, %al
  Differential Revision: http://reviews.llvm.org/D11172
  llvm-svn: 252839
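  The FABS-as-FCOPYSIGN identity used above has a direct IR analogue; a minimal
  sketch (hypothetical function name): copying the sign of +0.0 clears only the
  sign bit and, unlike "x > 0 ? x : -x", handles -0.0 correctly.

    declare double @llvm.copysign.f64(double, double)

    define double @my_fabs(double %x) {
      ; |x| == copysign(x, +0.0): magnitude of %x, sign of +0.0
      %r = call double @llvm.copysign.f64(double %x, double 0.000000e+00)
      ret double %r
    }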
* [DAGCombiner] Improve zextload optimization. [Geoff Berry, 2015-11-11, 1 file, -22/+72]
  Summary: Don't fold (zext (and (load x), cst)) -> (and (zextload x), (zext cst))
  if (and (load x) cst) will match as a zextload already and has additional
  users.
  For example, the following IR:

    %load = load i32, i32* %ptr, align 8
    %load16 = and i32 %load, 65535
    %load64 = zext i32 %load16 to i64
    store i32 %load16, i32* %dst1, align 4
    store i64 %load64, i64* %dst2, align 8

  used to produce the following aarch64 code:

    ldr w8, [x0]
    and w9, w8, #0xffff
    and x8, x8, #0xffff
    str w9, [x1]
    str x8, [x2]

  but with this change produces the following aarch64 code:

    ldrh w8, [x0]
    str w8, [x1]
    str x8, [x2]

  Reviewers: resistor, mcrosier
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D14340
  llvm-svn: 252789
* Add target preference for GatherAllAliases max depth [Matt Arsenault, 2015-11-11, 1 file, -1/+1]
  llvm-svn: 252775
* LegalizeDAG: Implement promote for scalar_to_vector [Matt Arsenault, 2015-11-10, 1 file, -0/+28]
  This allows avoiding the default Expand behavior which introduces stack usage.
  Bitcast the scalar and replace the missing elements with undef.
  This is covered by existing tests and used by a future commit which makes
  64-bit vectors legal types on AMDGPU.
  llvm-svn: 252632
* LegalizeDAG: Implement promote for insert_vector_elt [Matt Arsenault, 2015-11-10, 1 file, -1/+52]
  This is covered by existing tests and used by a future commit which makes
  64-bit vectors legal types on AMDGPU.
  llvm-svn: 252631
* LegalizeDAG: Implement promote for extract_vector_elt [Matt Arsenault, 2015-11-10, 1 file, -4/+58]
  This is for AMDGPU to implement v2i64 extract as extract of half of a v4i32.
  This is covered by existing tests and used by a future commit which makes
  64-bit vectors legal types on AMDGPU.
  llvm-svn: 252630
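  For intuition, a sketch (hypothetical function name, little-endian element
  order assumed) of what a v2i64 element extract looks like when expressed as
  two extracts of the same data viewed as v4i32, which is the kind of promotion
  this change enables:

    define i64 @extract_lo(<2 x i64> %v) {
      ; element 0 of the v2i64 is rebuilt from elements 0 and 1 of the
      ; bitcast v4i32 (low half first on a little-endian target)
      %w = bitcast <2 x i64> %v to <4 x i32>
      %lo = extractelement <4 x i32> %w, i32 0
      %hi = extractelement <4 x i32> %w, i32 1
      %lo64 = zext i32 %lo to i64
      %hi64 = zext i32 %hi to i64
      %hiShifted = shl i64 %hi64, 32
      %res = or i64 %hiShifted, %lo64
      ret i64 %res
    }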
* Remove unnecessary call to getAllocatableRegClass [Matt Arsenault, 2015-11-10, 1 file, -4/+8]
  I'm not sure what the point of this was. I'm not sure why you would ever
  define an instruction that produces an unallocatable register class. No tests
  fail with this removed, and it seems like it should be a verifier error to
  define such an instruction.
  This was problematic for AMDGPU because it would make bad decisions by
  arbitrarily changing the register class when unsetting isAllocatable for
  VS_32/VS_64, which is currently set as a workaround to this problem.
  AMDGPU uses the VS_32/VS_64 register classes to represent operands which can
  use either VGPRs or SGPRs. When isAllocatable is unset for these, this would
  need to pick either the SGPR or VGPR class and insert either a copy we don't
  want, or an illegal copy we would need to deal with later. A semi-arbitrary
  register class ordering decision is made in tablegen, which resulted in always
  picking a VGPR class because it happens to have more registers than the SGPR
  register class. We really just want to use whatever register class the
  original register had.
  llvm-svn: 252565
* add a SelectionDAG method to check if no common bits are set in two nodes; NFCI [Sanjay Patel, 2015-11-09, 2 files, -16/+13]
  This was suggested in:
  http://reviews.llvm.org/D13956
  and is a follow-on to:
  http://reviews.llvm.org/rL252515
  http://reviews.llvm.org/rL252519
  This lets us remove logically equivalent/duplicated code from DAGCombiner and
  X86ISelDAGToDAG. A corresponding function for IR instructions already exists
  in ValueTracking.
  llvm-svn: 252539
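  For context, a small sketch (hypothetical function and mask values) of the
  kind of pattern such a "no common bits set" check enables: when two values
  have disjoint known bits, adding them is the same as or'ing them, so a
  combiner may treat the add as an or.

    define i32 @pack_fields(i32 %x, i32 %y) {
      %lo = and i32 %x, 255           ; only bits 0..7 can be set
      %hi = shl i32 %y, 16
      %hi2 = and i32 %hi, 16711680    ; only bits 16..23 can be set
      ; %lo and %hi2 share no set bits, so this add is equivalent to:
      ;   or i32 %lo, %hi2
      %packed = add i32 %lo, %hi2
      ret i32 %packed
    }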
* [WinEH] Don't emit CATCHRET from visitCatchPad [David Majnemer, 2015-11-09, 1 file, -12/+5]
  Instead, emit a CATCHPAD node which will get selected to a target specific
  sequence.
  llvm-svn: 252528
* [CodeGen] Always promote f16 if not legal [Oliver Stannard, 2015-11-09, 2 files, -0/+23]
  We don't currently have any runtime library functions for operations on f16
  values (other than conversions to and from f32 and f64), so we should always
  promote it to f32, even if that is not a legal type. In that case, the f32
  values would be softened to f32 library calls.
  SoftenFloatRes_FP_EXTEND now needs to check the promoted operand's type, as it
  may be a no-op or require a different library call.
  getCopyFromParts and getCopyToParts now need to cope with a floating-point
  value stored in a larger integer part, as is the case for any target that
  needs to store an f16 value in a 32-bit integer register.
  Differential Revision: http://reviews.llvm.org/D12856
  llvm-svn: 252459
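  As a concrete picture, a sketch (hypothetical function name) of an f16
  operation on a target without native f16 support: the operands are promoted to
  f32, the arithmetic is done there (or softened to an f32 library call if f32
  is not legal either), and the result is truncated back to f16.

    define half @add_halves(half %a, half %b) {
      ; conceptually legalized as:
      ;   %ax = fpext half %a to float
      ;   %bx = fpext half %b to float
      ;   %rx = fadd float %ax, %bx      ; or an f32 library call
      ;   %r  = fptrunc float %rx to half
      %r = fadd half %a, %b
      ret half %r
    }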
* [WinEH] Update exception pointer registers [Joseph Tremoulet, 2015-11-07, 2 files, -5/+7]
  Summary: The CLR's personality routine passes these in rdx/edx, not rax/eax.
  Make getExceptionPointerRegister a virtual method parameterized by personality
  function to allow making this distinction.
  Similarly make getExceptionSelectorRegister a virtual method parameterized by
  personality function, for symmetry.
  Reviewers: pgavlin, majnemer, rnk
  Subscribers: jyknight, dsanders, llvm-commits
  Differential Revision: http://reviews.llvm.org/D14344
  llvm-svn: 252383
* DAGCombiner: Check shouldReduceLoadWidth before combining (and (load), x) -> extload [Tom Stellard, 2015-11-06, 1 file, -1/+2]
  Reviewers: resistor, arsenm
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D13805
  llvm-svn: 252349
* [WinEH] Fix funclet prologues with stack realignment [Reid Kleckner, 2015-11-05, 1 file, -2/+6]
  We already had a test for this for 32-bit SEH catchpads, but those don't
  actually create funclets. We had a bug that only appeared in funclet
  prologues, where we would establish EBP and ESI as our FP and BP, and then
  downstream prologue code would overwrite them.
  While I was at it, I fixed Win64+funclets+stackrealign. This issue doesn't
  come up as often there due to the ABI requiring 16-byte stack alignment, but
  now we can rest easy that AVX and WinEH will work well together =P.
  llvm-svn: 252210
* [StatepointLowering] Remove distinction between call and invoke safepoints [Igor Laevsky, 2015-11-04, 2 files, -24/+31]
  There is no point in having invoke safepoints handled differently than call
  safepoints. All relevant decisions could be made by looking at whether or not
  gc.result and gc.relocate lie in the same basic block. This change will allow
  lowering call safepoints whose relocates and results are in different basic
  blocks. See the test case for an example.
  Differential Revision: http://reviews.llvm.org/D14158
  llvm-svn: 252028