path: root/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
* Indent consistently. (Joerg Sonnenberger, 2016-06-19; 1 file, -1/+1)
  llvm-svn: 273109
* AMDGPU: Implement canonicalize (Matt Arsenault, 2016-04-14; 1 file, -0/+1)
  Also add generic DAG node for it.
  llvm-svn: 266272
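  A minimal IR sketch, not part of the commit: the generic node backs the
  @llvm.canonicalize.* intrinsic family; the function name @canon here is
  made up.

    declare float @llvm.canonicalize.f32(float)

    define float @canon(float %x) {
      ; Returns the target's canonical encoding of %x,
      ; e.g. quieting signaling NaNs.
      %c = call float @llvm.canonicalize.f32(float %x)
      ret float %c
    }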
* Avoid overly large SmallPtrSet/SmallSet (Matthias Braun, 2016-01-30; 1 file, -1/+1)
  These sets perform linear searching in small mode so it is never a
  good idea to use SmallSize/N bigger than 32.
  llvm-svn: 259283
* Annotate dump() methods with LLVM_DUMP_METHOD, addressing Richard Smith's r259192 post-commit comment (Yaron Keren, 2016-01-29; 1 file, -2/+2)
  The clang part is in r259232; this is the LLVM part of the patch.
  llvm-svn: 259240
* Revert r248483, r242546, r242545, and r242409 - absdiff intrinsics (Hal Finkel, 2015-12-11; 1 file, -2/+0)
  After much discussion, ending here:

    http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151123/315620.html

  it has been decided that, instead of having the vectorizer directly
  generate special absdiff and horizontal-add intrinsics, we'll
  recognize the relevant reduction patterns during CodeGen. Accordingly,
  these intrinsics are not needed (the operations they represent can be
  pattern matched, as is already done in some backends). Thus, we're
  backing these out in favor of the current development work.

    r248483 - Codegen: Fix llvm.*absdiff semantic.
    r242546 - [ARM] Use [SU]ABSDIFF nodes instead of intrinsics for VABD/VABA
    r242545 - [AArch64] Use [SU]ABSDIFF nodes instead of intrinsics for ABD/ABA
    r242409 - [Codegen] Add intrinsics 'absdiff' and corresponding SDNodes for absolute difference operation

  llvm-svn: 255387
* raw_ostream: << operator for callables with raw_ostream argument (Matthias Braun, 2015-12-04; 1 file, -14/+4)
  This is a revised version of r254655 which uses a Printable wrapper
  class to avoid ambiguous overload problems.
  Differential Revision: http://reviews.llvm.org/D14348
  llvm-svn: 254681
* Revert "raw_ostream: << operator for callables with raw_stream argument"Matthias Braun2015-12-031-3/+14
| | | | | | | | This commit provoked "error C2593: 'operator <<' is ambiguous" on MSVC. This reverts commit r254655. llvm-svn: 254661
* raw_ostream: << operator for callables with raw_stream argument (Matthias Braun, 2015-12-03; 1 file, -14/+3)
  This allows easier construction of print helpers. Example:

    Printable PrintLaneMask(unsigned LaneMask) {
      return Printable([LaneMask](raw_ostream &OS) {
        OS << format("%08X", LaneMask);
      });
    }

    // Usage:
    OS << PrintLaneMask(Mask);

  Differential Revision: http://reviews.llvm.org/D14348
  llvm-svn: 254655
* [X86] Part 1 to fix x86-64 fp128 calling convention. (Chih-Hung Hsieh, 2015-12-03; 1 file, -1/+4)
  Almost all these changes are conditioned and only apply to the new
  x86-64 f128 type configuration, which will be enabled in a follow up
  patch. They are required together to make new f128 work. If there is
  any error, we should fix or revert them as a whole. These changes
  should have no impact to current configurations.

  * Relax type legalization checks to accept new f128 type
    configuration, whose TypeAction is TypeSoftenFloat, not TypeLegal,
    but also has TLI.isTypeLegal true.
  * Relax GetSoftenedFloat to return in some cases f128 type SDValue,
    which is TLI.isTypeLegal but not "softened" to i128 node.
  * Allow customized FABS, FNEG, FCOPYSIGN on new f128 type
    configuration, to generate optimized bitwise operators for libm
    functions.
  * Enhance related Lower* functions to handle f128 type.
  * Enhance DAGTypeLegalizer::run, SoftenFloatResult, and related
    functions to keep new f128 type in register, and convert f128
    operators to library calls.
  * Fix Combiner, Emitter, Legalizer routines that did not handle f128
    type.
  * Add ExpandConstant to handle i128 constants, ExpandNode to handle
    ISD::Constant node.
  * Add one more parameter to getCommonSubClass and firstCommonClass,
    to guarantee that returned common sub class will contain the
    specified simple value type. This extra parameter is used by
    EmitCopyFromReg in InstrEmitter.cpp.
  * Fix infinite loop in getTypeLegalizationCost when f128 is the value
    type.
  * Fix printOperand to handle null operand.
  * Enhance ISD::BITCAST node to handle f128 constant.
  * Expand new f128 type for BR_CC, SELECT_CC, SELECT, SETCC nodes.
  * Enhance X86AsmPrinter to emit f128 values in comments.

  Differential Revision: http://reviews.llvm.org/D15134
  llvm-svn: 254653
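  A minimal IR sketch of the type in question, not from the patch; under
  this configuration the fadd below should be softened to a runtime
  library call such as __addtf3 (function name made up):

    define fp128 @add_f128(fp128 %a, fp128 %b) {
      ; f128 values are kept in registers, but the arithmetic itself
      ; is converted to a library call during type legalization.
      %sum = fadd fp128 %a, %b
      ret fp128 %sum
    }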
* Introduce new @llvm.get.dynamic.area.offset.i{32, 64} intrinsics. (Yury Gribov, 2015-12-01; 1 file, -0/+1)
  The @llvm.get.dynamic.area.offset.* intrinsic family is used to get
  the offset from the native stack pointer to the address of the most
  recent dynamic alloca on the caller's stack. These intrinsics are
  intended for use in combination with @llvm.stacksave and
  @llvm.stackrestore to get a pointer to the most recent dynamic
  alloca. This is useful, for example, for AddressSanitizer's stack
  unpoisoning routines.

  Patch by Max Ostapenko.
  Differential Revision: http://reviews.llvm.org/D14983
  llvm-svn: 254404
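  A minimal IR sketch of the intended pairing with @llvm.stacksave
  (illustrative; the function name and the pointer arithmetic are
  assumptions based on the description above):

    declare i8* @llvm.stacksave()
    declare i64 @llvm.get.dynamic.area.offset.i64()

    define i8* @most_recent_dynamic_alloca(i64 %n) {
      %buf = alloca i8, i64 %n                   ; dynamic alloca
      %sp = call i8* @llvm.stacksave()           ; native stack pointer
      %off = call i64 @llvm.get.dynamic.area.offset.i64()
      %p = getelementptr i8, i8* %sp, i64 %off   ; most recent dynamic area
      ret i8* %p
    }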
* X86: More efficient legalization of wide integer compares (Hans Wennborg, 2015-11-19; 1 file, -0/+1)
  In particular, this makes the code for 64-bit compares on 32-bit
  targets much more efficient.

  Example:

    define i32 @test_slt(i64 %a, i64 %b) {
    entry:
      %cmp = icmp slt i64 %a, %b
      br i1 %cmp, label %bb1, label %bb2
    bb1:
      ret i32 1
    bb2:
      ret i32 2
    }

  Before this patch:

    test_slt:
      movl 4(%esp), %eax
      movl 8(%esp), %ecx
      cmpl 12(%esp), %eax
      setae %al
      cmpl 16(%esp), %ecx
      setge %cl
      je .LBB2_2
      movb %cl, %al
    .LBB2_2:
      testb %al, %al
      jne .LBB2_4
      movl $1, %eax
      retl
    .LBB2_4:
      movl $2, %eax
      retl

  After this patch:

    test_slt:
      movl 4(%esp), %eax
      movl 8(%esp), %ecx
      cmpl 12(%esp), %eax
      sbbl 16(%esp), %ecx
      jge .LBB1_2
      movl $1, %eax
      retl
    .LBB1_2:
      movl $2, %eax
      retl

  Differential Revision: http://reviews.llvm.org/D14496
  llvm-svn: 253572
* [SDAG] Introduce a new BITREVERSE node along with a corresponding LLVM intrinsic (James Molloy, 2015-11-12; 1 file, -1/+2)
  Several backends have instructions to reverse the order of bits in an
  integer. Conceptually matching such patterns is similar to
  @llvm.bswap, and it was mentioned in http://reviews.llvm.org/D14234
  that it would be best if these patterns were matched in InstCombine
  instead of reimplemented in every different target.

  This patch introduces an intrinsic @llvm.bitreverse.i* that operates
  similarly to @llvm.bswap. For plumbing purposes there is also a new
  ISD node ISD::BITREVERSE, with simple expansion and promotion support.

  The intention is that InstCombine's BSWAP detection logic will be
  extended to support BITREVERSE too, and @llvm.bitreverse intrinsics
  emitted (if the backend supports lowering it efficiently).
  llvm-svn: 252878
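  An illustrative use of the new intrinsic, by analogy with @llvm.bswap
  (a sketch; the function name is made up):

    declare i32 @llvm.bitreverse.i32(i32)

    define i32 @rev(i32 %x) {
      ; Reverses the order of all 32 bits: bit 0 becomes bit 31,
      ; bit 1 becomes bit 30, and so on.
      %r = call i32 @llvm.bitreverse.i32(i32 %x)
      ret i32 %r
    }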
* SelectionDAG: Remove implicit ilist iterator conversions, NFC (Duncan P. N. Exon Smith, 2015-10-13; 1 file, -1/+1)
  llvm-svn: 250214
* SelectionDAGDumper: Print simple operands inline. (Matthias Braun, 2015-09-25; 1 file, -22/+37)
  Print simple operands inline instead of their pointer/value number.
  Simple operands are SDNodes without predecessors like Constant(FP),
  Register, UNDEF. This unifies the behaviour with dumpr() which was
  already doing this.

  Previously:

    t0: ch = EntryToken
    t1: i64 = Register %vreg0
    t2: i64,ch = CopyFromReg t0, t1
    t3: i64 = Constant<1>
    t4: i64 = add t2, t3
    t5: i64 = Constant<2>
    t6: i64 = add t2, t5
    t10: i64 = undef
    t11: i8,ch = load t0, t2, t10<LD1[%tmp81]>
    t12: i8,ch = load t0, t4, t10<LD1[%tmp10]>
    t13: i8,ch = load t0, t6, t10<LD1[%tmp12]>

  Now:

    t0: ch = EntryToken
    t2: i64,ch = CopyFromReg t0, Register:i64 %vreg0
    t4: i64 = add t2, Constant:i64<1>
    t6: i64 = add t2, Constant:i64<2>
    t11: i8,ch = load<LD1[%tmp81]> t0, t2, undef:i64
    t12: i8,ch = load<LD1[%tmp10]> t0, t4, undef:i64
    t13: i8,ch = load<LD1[%tmp12]> t0, t6, undef:i64

  Differential Revision: http://reviews.llvm.org/D12567
  llvm-svn: 248628
* SelectionDAGDumper: Leave out the <multiple use> markers (Matthias Braun, 2015-09-18; 1 file, -3/+0)
  They mostly clutter the output while it is still possible to see
  which node has multiple users without them.
  Differential Revision: http://reviews.llvm.org/D12569
  llvm-svn: 248013
* SelectionDAGDumper: Avoid unnecessary newlines (Matthias Braun, 2015-09-18; 1 file, -4/+3)
  Before:

    t0 = EntryToken:ch
    t0: <multiple use>
    t0: <multiple use>
    t1 = CopyFromReg:v4f32,ch t0, Register:v4f32 %vreg0
    t25 = IMPLICIT_DEF:v4f32
    t26 = HADDPSrr:v4f32 t1, t25
    t23 = CopyToReg:ch,glue t0, Register:v4f32 %XMM0, t26
    t23: <multiple use>
    t23: <multiple use>
    t24 = RETQ:ch Register:v4f32 %XMM0, t23, t23:1

  After:

    t0: <multiple use>
    t0: <multiple use>
    t1 = CopyFromReg:v4f32,ch t0, Register:v4f32 %vreg0
    t26 = X86ISD::FHADD:v4f32 t1, undef:v4f32
    t23 = CopyToReg:ch,glue t0, Register:v4f32 %XMM0, t26
    t23: <multiple use>
    t21 = TargetConstant:i16<0>
    t23: <multiple use>
    t24 = X86ISD::RET_FLAG:ch t23, t21, Register:v4f32 %XMM0, t23:1

  Differential Revision: http://reviews.llvm.org/D12568
  llvm-svn: 248012
* SelectionDAGDumper: Hide [ID=X], [ORD=X] and source locations by default. (Matthias Braun, 2015-09-18; 1 file, -16/+23)
  You can show them with the new -dag-dump-verbose switch.
  Differential Revision: http://reviews.llvm.org/D12566
  llvm-svn: 248011
* SelectionDAG: Introduce PersistentID to SDNode for assert builds. (Matthias Braun, 2015-09-18; 1 file, -4/+26)
  This gives us more human readable numbers to identify nodes in debug
  dumps.

  Before:

    0x7fcbd9700160: ch = EntryToken
    0x7fcbd985c7c8: i64 = Register %RAX
    ...
    0x7fcbd9700160: <multiple use>
    0x7fcbd985c578: i64,ch = MOV64rm 0x7fcbd985c6a0, 0x7fcbd985cc68, 0x7fcbd985c200, 0x7fcbd985cd90, 0x7fcbd985ceb8, 0x7fcbd9700160<Mem:LD8[@foo]> [ORD=2]
    0x7fcbd985c8f0: ch,glue = CopyToReg 0x7fcbd9700160, 0x7fcbd985c7c8, 0x7fcbd985c578 [ORD=3]
    0x7fcbd985c7c8: <multiple use>
    0x7fcbd985c8f0: <multiple use>
    0x7fcbd985c8f0: <multiple use>
    0x7fcbd985ca18: ch = RETQ 0x7fcbd985c7c8, 0x7fcbd985c8f0, 0x7fcbd985c8f0:1 [ORD=3]

  Now:

    t0: ch = EntryToken
    t5: i64 = Register %RAX
    ...
    t0: <multiple use>
    t3: i64,ch = MOV64rm t10, t12, t11, t13, t14, t0<Mem:LD8[@foo]> [ORD=2]
    t6: ch,glue = CopyToReg t0, t5, t3 [ORD=3]
    t5: <multiple use>
    t6: <multiple use>
    t6: <multiple use>
    t7: ch = RETQ t5, t6, t6:1 [ORD=3]

  Differential Revision: http://reviews.llvm.org/D12564
  llvm-svn: 248010
* [WinEH] Add some support for code generating catchpad (Reid Kleckner, 2015-08-27; 1 file, -0/+4)
  We can now run 32-bit programs with empty catch bodies. The next step
  is to change PEI so that we get funclet prologues and epilogues.
  llvm-svn: 246235
* NFC SelectionDAGDumper: fix typo (JF Bastien, 2015-08-11; 1 file, -1/+1)
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11959
  llvm-svn: 244667
* Add new ISD nodes: ISD::FMINNAN and ISD::FMAXNAN (James Molloy, 2015-08-11; 1 file, -0/+2)
  The intention of these is to be a corollary to ISD::FMINNUM/FMAXNUM,
  differing only on how NaNs are treated. FMINNUM returns the non-NaN
  input (when given one NaN and one non-NaN), FMINNAN returns the NaN
  input instead.

  This patch includes support for scalarizing, widening and splitting
  vectors, but not expansion or softening. The reason is that these
  should never be needed - FMINNAN nodes are only going to be created
  in one place (SDAGBuilder::visitSelect) and there we'll check if the
  node is legal or custom. I could preemptively add expand and soften
  code, but I'm fairly opposed to adding code I can't test. It's bad
  enough I can't create tests with this patch, but at least this code
  will be exercised by the ARM and AArch64 backends fairly shortly.
  llvm-svn: 244581
* Fix __builtin_setjmp in combination with sjlj exception handling. (Matthias Braun, 2015-07-16; 1 file, -0/+1)
  llvm.eh.sjlj.setjmp was used as part of the SjLj exception handling
  style but is also used in clang to implement __builtin_setjmp. The
  ARM backend needs to output additional dispatch tables for the SjLj
  exception handling style; these tables however can't be emitted if
  llvm.eh.sjlj.setjmp is simply used for __builtin_setjmp and no actual
  landing pad blocks exist.

  To solve this issue a new llvm.eh.sjlj.setup_dispatch intrinsic is
  introduced which is used instead of llvm.eh.sjlj.setjmp in the SjLj
  exception handling lowering, so we can differentiate between the case
  where we actually need to set up a dispatch table and the case where
  we just need the __builtin_setjmp semantic.

  Differential Revision: http://reviews.llvm.org/D9313
  llvm-svn: 242481
* [Codegen] Add intrinsics 'absdiff' and corresponding SDNodes for absolute difference operation (James Molloy, 2015-07-16; 1 file, -0/+2)
  This adds new intrinsics "*absdiff" for absolute difference ops to
  facilitate efficient code generation for the "sum of absolute
  differences" operation. The patch also contains the introduction of
  corresponding SDNodes and basic legalization support. Sanity of the
  generated code is tested on X86.

  This is the first of three patches.

  Patch by Shahid Asghar-ahmad!
  llvm-svn: 242409
* Rename llvm.frameescape and llvm.framerecover to localescape and localrecover (Reid Kleckner, 2015-07-07; 1 file, -1/+1)
  Summary: Initially, these intrinsics seemed like part of a family of
  "frame" related intrinsics, but now I think that's more confusing
  than helpful. Initially, the LangRef specified that this would create
  a new kind of allocation that would be allocated at a fixed offset
  from the frame pointer (EBP/RBP). We ended up dropping that design,
  and leaving the stack frame layout alone.

  These intrinsics are really about sharing local stack allocations,
  not frame pointers. I intend to go further and add an
  `llvm.localaddress()` intrinsic that returns whatever register (EBP,
  ESI, ESP, RBX) is being used to address locals, which should not be
  confused with the frame pointer.

  Naming suggestions at this point are welcome, I'm happy to re-run
  sed.

  Reviewers: majnemer, nicholas
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11011
  llvm-svn: 241633
* Convert a bunch of loops to foreach. NFC. (Pete Cooper, 2015-06-26; 1 file, -11/+9)
  This uses the new SDNode::op_values() iterator range committed in
  r240805.
  llvm-svn: 240822
* Avoid a Symbol -> Name -> Symbol conversion. (Rafael Espindola, 2015-06-22; 1 file, -0/+1)
  Before this we were producing a TargetExternalSymbol from an
  MCSymbol. That meant extracting the symbol name and fetching the
  symbol again down the pipeline. This patch adds a DAG.getMCSymbol
  that lets the MCSymbol pass unchanged on the DAG. Doing so removes
  the need for MO_NOPREFIX and fixes the root cause of pr23900,
  allowing r240130 to be committed again.
  llvm-svn: 240300
* Add SDNodes for umin, umax, smin and smax. (James Molloy, 2015-05-15; 1 file, -0/+4)
  This adds new SDNodes for signed/unsigned min/max. These nodes are
  built from select/icmp pairs matched at SDAGBuilder stage.

  This patch adds the nodes, as well as legalization support, and sets
  them to be "expand" for all targets. NFC for now; this will be
  tested when I switch AArch64 to using these new nodes.
  llvm-svn: 237423
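  An illustrative select/icmp pair of the kind matched into the new
  nodes (a sketch; the function name is made up):

    define i32 @smin(i32 %a, i32 %b) {
      %cmp = icmp slt i32 %a, %b
      ; Matched at SDAGBuilder stage into an ISD::SMIN node.
      %min = select i1 %cmp, i32 %a, i32 %b
      ret i32 %min
    }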
* Extend the statepoint intrinsic to allow statepoints to be marked as transitions from GC-aware code to code that is not GC-aware (Pat Gavlin, 2015-05-08; 1 file, -0/+2)
  This changes the shape of the statepoint intrinsic from:

    @llvm.experimental.gc.statepoint(anyptr target, i32 # call args,
                                     i32 unused, ...call args,
                                     i32 # deopt args, ...deopt args,
                                     ...gc args)

  to:

    @llvm.experimental.gc.statepoint(anyptr target, i32 # call args,
                                     i32 flags, ...call args,
                                     i32 # transition args, ...transition args,
                                     i32 # deopt args, ...deopt args,
                                     ...gc args)

  This extension offers the backend the opportunity to insert
  (somewhat) arbitrary code to manage the transition from GC-aware
  code to code that is not GC-aware and back.

  In order to support the injection of transition code, this extension
  wraps the STATEPOINT ISD node generated by the usual lowering with
  two additional nodes: GC_TRANSITION_START and GC_TRANSITION_END. The
  transition arguments that were passed to the intrinsic (if any) are
  lowered and provided as operands to these nodes and may be used by
  the backend during code generation.

  Eventually, the lowering of the GC_TRANSITION_{START,END} nodes
  should be informed by the GC strategy in use for the function
  containing the intrinsic call; for now, these nodes are instead
  replaced with no-ops.

  Differential Revision: http://reviews.llvm.org/D9501
  llvm-svn: 236888
* Masked gather and scatter - added DAGCombine visitors (Elena Demikhovsky, 2015-04-30; 1 file, -0/+2)
  and AVX-512 instruction selection patterns. All other patches,
  including tests, will follow.
  http://reviews.llvm.org/D7665
  llvm-svn: 236211
* IR: Give 'DI' prefix to debug info metadata (Duncan P. N. Exon Smith, 2015-04-29; 1 file, -1/+1)
  Finish off PR23080 by renaming the debug info IR constructs from
  `MD*` to `DI*`. The last of the `DIDescriptor` classes were deleted
  in r235356, and the last of the related typedefs removed in r235413,
  so this has all baked for about a week.

  Note: If you have out-of-tree code (like a frontend), I recommend
  that you get everything compiling and tests passing with the
  *previous* commit before updating to this one. It'll be easier to
  keep track of what code is using the `DIDescriptor` hierarchy and
  what you've already updated, and I think you're extremely unlikely
  to insert bugs. YMMV of course.

  Back to *this* commit: I did this using the rename-md-di-nodes.sh
  upgrade script I've attached to PR23080 (both code and testcases)
  and filtered through clang-format-diff.py. I edited the tests for
  test/Assembler/invalid-generic-debug-node-*.ll by hand since the
  columns were off-by-three. It should work on your out-of-tree
  testcases (and code, if you've followed the advice in the previous
  paragraph).

  Some of the tests are in badly named files now (e.g.,
  test/Assembler/invalid-mdcompositetype-missing-tag.ll should be
  'dicompositetype'); I'll come back and move the files in a
  follow-up commit.
  llvm-svn: 236120
* DebugInfo: Gut DIScope, DIEnumerator and DISubrange (Duncan P. N. Exon Smith, 2015-04-16; 1 file, -2/+2)
  The only class that still has API left is `DIDescriptor` itself.
  llvm-svn: 235067
* CodeGen: Use the new DebugLoc API, NFC (Duncan P. N. Exon Smith, 2015-03-30; 1 file, -11/+5)
  Update lib/CodeGen (and lib/Target) to use the new `DebugLoc` API.
  llvm-svn: 233582
* SelectionDAG: Reflow code to use early returns, NFC (Duncan P. N. Exon Smith, 2015-03-30; 1 file, -15/+19)
  llvm-svn: 233577
* X86: Optimize address mode matching for FRAME_ALLOC_RECOVER nodes (David Majnemer, 2015-03-05; 1 file, -0/+1)
  We know that the absolute symbol will be less than 2GB and thus will
  always fit.
  llvm-svn: 231389
* Add generic fmad DAG node. (Matt Arsenault, 2015-02-20; 1 file, -0/+1)
  This allows sharing of FMA forming combines to work with instructions
  that have the same semantics as a separate multiply and add.

  This is expanded by default, and only formed post-legalization, so it
  shouldn't have much impact on targets that do not want it.
  llvm-svn: 230070
* Masked Load / Store Intrinsics - the CodeGen part. (Elena Demikhovsky, 2014-12-04; 1 file, -0/+2)
  I'm recommitting the codegen part of the patch. The vectorizer part
  will be sent to review again.

  Masked Vector Load and Store Intrinsics. Introduced new
  target-independent intrinsics in order to support masked vector
  loads and stores. The loop vectorizer optimizes loops containing
  conditional memory accesses by generating these intrinsics for
  existing targets AVX2 and AVX-512. The vectorizer asks the target
  about availability of masked vector loads and stores. Added SDNodes
  for masked operations and lowering patterns for the X86 code
  generator.

  Examples:

    <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
    declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)

  A scalarizer for other targets (not AVX2/AVX-512) will be done in a
  separate patch.
  http://reviews.llvm.org/D6191
  llvm-svn: 223348
* Revert "Masked Vector Load and Store Intrinsics."Duncan P. N. Exon Smith2014-11-281-2/+0
| | | | | | | | | | | This reverts commit r222632 (and follow-up r222636), which caused a host of LNT failures on an internal bot. I'll respond to the commit on the list with a reproduction of one of the failures. Conflicts: lib/Target/X86/X86TargetTransformInfo.cpp llvm-svn: 222936
* Masked Vector Load and Store Intrinsics. (Elena Demikhovsky, 2014-11-23; 1 file, -0/+2)
  Introduced new target-independent intrinsics in order to support
  masked vector loads and stores. The loop vectorizer optimizes loops
  containing conditional memory accesses by generating these
  intrinsics for existing targets AVX2 and AVX-512. The vectorizer
  asks the target about availability of masked vector loads and
  stores. Added SDNodes for masked operations and lowering patterns
  for X86 code generator.

  Examples:

    <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
    declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)

  Scalarizer for other targets (not AVX2/AVX-512) will be done in a
  separate patch.
  http://reviews.llvm.org/D6191
  llvm-svn: 222632
* Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool> (David Blaikie, 2014-11-19; 1 file, -1/+1)
  This is to be consistent with StringSet and ultimately with the
  standard library's associative container insert function.

  This lead to updating SmallSet::insert to return
  pair<iterator, bool>, and then to update SmallPtrSet::insert to
  return pair<iterator, bool>, and then to update all the existing
  users of those functions...
  llvm-svn: 222334
* Add minnum / maxnum codegen (Matt Arsenault, 2014-10-21; 1 file, -0/+2)
  llvm-svn: 220342
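  For context, the corresponding IR intrinsics (a sketch; the clamp
  helper is made up):

    declare float @llvm.minnum.f32(float, float)
    declare float @llvm.maxnum.f32(float, float)

    define float @clamp(float %x, float %lo, float %hi) {
      ; IEEE-754 minNum/maxNum semantics: if exactly one operand is
      ; NaN, the other (non-NaN) operand is returned.
      %t = call float @llvm.maxnum.f32(float %x, float %lo)
      %r = call float @llvm.minnum.f32(float %t, float %hi)
      ret float %r
    }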
* Have MachineFunction cache a pointer to the subtarget to make lookups shorter/easier (Eric Christopher, 2014-08-05; 1 file, -6/+3)
  Also have the DAG use that to do the same lookup. This can be used
  in the future for TargetMachine based caching lookups from the
  MachineFunction easily.

  Update the MIPS subtarget switching machinery to update this pointer
  at the same time it runs.
  llvm-svn: 214838
* Remove the TargetMachine forwards for TargetSubtargetInfo based information and update all callers. (Eric Christopher, 2014-08-04; 1 file, -2/+7)
  No functional change.
  llvm-svn: 214781
* CodeGen: extend f16 conversions to permit types > float. (Tim Northover, 2014-07-17; 1 file, -2/+2)
  This makes the two intrinsics @llvm.convert.from.f16 and
  @llvm.convert.to.f16 accept types other than simple "float". This
  is only strictly needed for the truncate operation, since otherwise
  double rounding occurs and there's no way to represent the strict
  IEEE conversion. However, for symmetry we allow larger types in the
  extend too.

  During legalization, we can expand an "fp16_to_double" operation
  into two extends for convenience, but abort when the truncate isn't
  legal. A new libcall is probably needed here.

  Even after this commit, various target tweaks are needed to actually
  use the extended intrinsics. I've put these into separate commits
  for clarity, so there are no actual tests of f64 conversion here.
  llvm-svn: 213248
* [x86,SDAG] Introduce any- and sign-extend-vector-inreg nodes (Chandler Carruth, 2014-07-10; 1 file, -0/+2)
  These are analogous to the zero-extend-vector-inreg node introduced
  previously for the same purpose: manage the type legalization of
  widened extend operations, especially to support the experimental
  widening mode for x86. I'm adding both because sign-extend is
  expanded in terms of any-extend with shifts to propagate the sign
  bit. This removes the last fundamental scalarization from
  vec_cast2.ll (a test case that hit many really bad edge cases for
  widening legalization), although the trunc tests in that file still
  appear scalarized because the shuffle legalization is scalarizing.
  Funny thing, I've been working on that.

  Some initial experiments with this and SSE2 scenarios show
  moderately good behavior already for sign extension. Still some work
  to do on the shuffle combining on X86 before we're generating
  optimal sequences, but avoiding scalarization is a huge step
  forward.
  llvm-svn: 212714
* [x86] Add a ZERO_EXTEND_VECTOR_INREG DAG node and use it when widening vector types to be legal and a ZERO_EXTEND node is encountered (Chandler Carruth, 2014-07-09; 1 file, -0/+1)
  When we use widening to legalize vector types, extend nodes are a
  real challenge. Either the input or output is likely to be legal,
  but in many cases not both. As a consequence, we don't really have
  any way to represent this situation and the prior code in the
  widening legalization framework would just scalarize the extend
  operation completely.

  This patch introduces a new DAG node to represent doing a zero
  extend of a vector "in register". The core of the idea is to allow
  legal but different vector types in the input and output. The output
  vector must have fewer lanes but wider elements. The operation is
  defined to zero extend the low elements of the input to the size of
  the output elements, and drop all of the high elements which don't
  have a corresponding lane in the output vector.

  It also includes generic expansion of this node in terms of blending
  a zero vector into the high elements of the vector and bitcasting
  across. This in turn yields extremely nice code for x86 SSE2 when we
  use the new widening legalization logic in conjunction with the new
  shuffle lowering logic.

  There is still more to do here. We need to support sign extension,
  any extension, and potentially int-to-float conversions. My current
  plan is to continue using similar synthetic nodes to model each of
  these transitions with generic lowering code for each one.

  However, with this patch LLVM already reaches performance parity
  with GCC for the core C loops of the x264 code (assuming you disable
  the hand-written assembly versions) when compiling for SSE2 and SSE3
  architectures and enabling the new widening and lowering logic for
  vectors.

  Differential Revision: http://reviews.llvm.org/D4405
  llvm-svn: 212610
* IR: add "cmpxchg weak" variant to support permitted failure.Tim Northover2014-06-131-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This commit adds a weak variant of the cmpxchg operation, as described in C++11. A cmpxchg instruction with this modifier is permitted to fail to store, even if the comparison indicated it should. As a result, cmpxchg instructions must return a flag indicating success in addition to their original iN value loaded. Thus, for uniformity *all* cmpxchg instructions now return "{ iN, i1 }". The second flag is 1 when the store succeeded. At the DAG level, a new ATOMIC_CMP_SWAP_WITH_SUCCESS node has been added as the natural representation for the new cmpxchg instructions. It is a strong cmpxchg. By default this gets Expanded to the existing ATOMIC_CMP_SWAP during Legalization, so existing backends should see no change in behaviour. If they wish to deal with the enhanced node instead, they can call setOperationAction on it. Beware: as a node with 2 results, it cannot be selected from TableGen. Currently, no use is made of the extra information provided in this patch. Test updates are almost entirely adapting the input IR to the new scheme. Summary for out of tree users: ------------------------------ + Legacy Bitcode files are upgraded during read. + Legacy assembly IR files will be invalid. + Front-ends must adapt to different type for "cmpxchg". + Backends should be unaffected by default. llvm-svn: 210903
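  An illustrative fragment of the new form (a sketch; typed-pointer
  syntax of the period, function name made up):

    define i32 @cas(i32* %p, i32 %old, i32 %new) {
      %r = cmpxchg weak i32* %p, i32 %old, i32 %new seq_cst seq_cst
      %val = extractvalue { i32, i1 } %r, 0  ; loaded value
      %ok = extractvalue { i32, i1 } %r, 1   ; 1 iff the store happened
      ; A weak cmpxchg may fail spuriously, so callers normally retry
      ; in a loop keyed on %ok.
      ret i32 %val
    }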
* Implementing named register intrinsics (Renato Golin, 2014-05-06; 1 file, -0/+2)
  This patch implements the infrastructure to use named register
  constructs in programs that need access to specific registers (bare
  metal, kernels, etc). So far, only the stack pointer is supported as
  a technology preview, but as it is, the intrinsic can already
  support all non-allocatable registers from any architecture.
  llvm-svn: 208104
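  A hedged sketch of reading a named register (the metadata form
  follows current LLVM syntax, which differs slightly from the syntax
  of this era; the register name string "sp" is an assumption and is
  target-dependent):

    declare i64 @llvm.read_register.i64(metadata)

    define i64 @current_sp() {
      ; Reads the register named by the metadata string in !0.
      %sp = call i64 @llvm.read_register.i64(metadata !0)
      ret i64 %sp
    }

    !0 = !{!"sp"}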
* [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr. (Craig Topper, 2014-04-14; 1 file, -3/+3)
  llvm-svn: 206142
* [Layering] Move DebugInfo.h into the IR library where its implementation already lives. (Chandler Carruth, 2014-03-06; 1 file, -1/+1)
  llvm-svn: 203046
* [C++11] Replace llvm::next and llvm::prior with std::next and std::prev. (Benjamin Kramer, 2014-03-02; 1 file, -1/+1)
  Remove the old functions.
  llvm-svn: 202636