path: root/llvm/lib/CodeGen/SelectionDAG
Commit message (Author, Date, Files, Lines)
...
* Fixed a bug in narrowing store operation. (Elena Demikhovsky, 2015-01-22, 1 file, -2/+5)
  Type MVT::i1 became legal in KNL, but a store operation can't be narrowed to this type, since the size of VT (1 bit) is not equal to its actual store size (8 bits). Added a test provided by David (dag@cray.com).
  llvm-svn: 226805
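  For illustration, a minimal C++ sketch of the guard this fix implies (the helper name is hypothetical, not the committed code): narrowing is only sound when the value's bit size equals the number of bits the store actually writes.

    #include "llvm/CodeGen/ValueTypes.h"
    using namespace llvm;

    // MVT::i1 has a 1-bit value size but an 8-bit store size, so a
    // store must not be narrowed to it.
    static bool isNarrowableStoreType(EVT VT) {
      return VT.getSizeInBits() == VT.getStoreSizeInBits();
    }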
* DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N)) (Tim Northover, 2015-01-21, 1 file, -0/+11)
  It can help with argument juggling on some targets, and is generally a good idea.
  llvm-svn: 226740
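  A hedged sketch of how such a fold is typically written against the SelectionDAG API (a standalone helper for illustration; the committed hunk lives inside DAGCombiner):

    #include "llvm/CodeGen/SelectionDAG.h"
    using namespace llvm;

    // (or (and X, M), (and X, N)) -> (and X, (or M, N)): both ANDs must
    // share the same X; the two masks are OR'd into a single mask.
    static SDValue foldOrOfAnds(SDNode *N, SelectionDAG &DAG) {
      SDValue LHS = N->getOperand(0), RHS = N->getOperand(1);
      if (LHS.getOpcode() != ISD::AND || RHS.getOpcode() != ISD::AND ||
          LHS.getOperand(0) != RHS.getOperand(0))
        return SDValue();
      SDLoc DL(N);
      EVT VT = N->getValueType(0);
      SDValue Mask = DAG.getNode(ISD::OR, DL, VT, LHS.getOperand(1),
                                 RHS.getOperand(1));
      return DAG.getNode(ISD::AND, DL, VT, LHS.getOperand(0), Mask);
    }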
* Revert "DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N))"Tim Northover2015-01-211-11/+0
| | | | | | | | It hadn't gone through review yet, but was still on my local copy. This reverts commit r226663 llvm-svn: 226665
* DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N)) (Tim Northover, 2015-01-21, 1 file, -0/+11)
  llvm-svn: 226663
* Prevent binary-tree deterioration in sparse switch statements. (Daniel Jasper, 2015-01-20, 1 file, -8/+10)
  This addresses part of llvm.org/PR22262. Specifically, it prevents considering the densities of sub-ranges that have fewer than TLI.getMinimumJumpTableEntries() elements. Those densities won't help jump tables. This is not a complete solution but works around the most pressing issue.
  Review: http://reviews.llvm.org/D7070
  llvm-svn: 226600
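  The guard reduces to a single comparison; a minimal hedged sketch (the helper name is hypothetical, not the committed code):

    #include "llvm/Target/TargetLowering.h"
    using namespace llvm;

    // A sub-range smaller than the jump-table minimum can never become
    // a jump table, so its density should not influence where we split.
    static bool densityIsRelevant(unsigned NumCases,
                                  const TargetLowering &TLI) {
      return NumCases >= TLI.getMinimumJumpTableEntries();
    }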
* Factor out a splitSwitchCase() function so that it can be reused. (Daniel Jasper, 2015-01-20, 2 files, -21/+26)
  This is in preparation for a fix to llvm.org/PR22262. One of the ideas here is to find a good jump table range first and then split before and after it. Thereby, we don't need to use the split-based-on-density heuristic at all, which can make the "binary tree" deteriorate in various cases. Also some minor cleanups. No functional changes.
  llvm-svn: 226551
* [PM] Remove the Pass argument from all of the critical edge splitting APIs and replace it and numerous booleans with an option struct. (Chandler Carruth, 2015-01-19, 1 file, -4/+5)
  The critical edge splitting API has a really large surface of flags and so it seems worth burning a small option struct / builder. This struct can be constructed with the various preserved analyses and then flags can be flipped in a builder style. The various users are now responsible for directly passing along their analysis information. This should be enough for the critical edge splitting to work cleanly with the new pass manager as well. This API is still pretty crufty and could be cleaned up a lot, but I've focused on this change just threading an option struct rather than a pass through the API.
  llvm-svn: 226456
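  The pattern described is an option struct with chainable setters; a simplified sketch with illustrative field names (the real CriticalEdgeSplittingOptions lives in llvm/Transforms/Utils/BasicBlockUtils.h):

    // Each setter returns *this so callers can flip only the flags they
    // care about at the call site, instead of passing a string of bools.
    struct EdgeSplitOptions {
      bool MergeIdenticalEdges = false;
      bool DontDeleteUselessPHIs = false;
      EdgeSplitOptions &setMergeIdenticalEdges() {
        MergeIdenticalEdges = true;
        return *this;
      }
      EdgeSplitOptions &setDontDeleteUselessPHIs() {
        DontDeleteUselessPHIs = true;
        return *this;
      }
    };
    // Usage sketch: SplitCriticalEdge(TI, SuccNum,
    //                   EdgeSplitOptions().setMergeIdenticalEdges());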
* Improve DAG combine pass on certain IR vector patterns (Mehdi Amini, 2015-01-17, 1 file, -1/+14)
  Loading 2 2x32-bit float vectors into the bottom half of a 256-bit vector produced suboptimal code in AVX2 mode with certain IR combinations. In particular, the IR optimizer folded 2f32 + 2f32 -> 4f32, 4f32 + 4f32 (undef) -> 8f32 into a 2f32 + 2f32 -> 8f32, which seems more canonical, but then mysteriously generated rather bad code; the movq/movhpd combination didn't match. The problem lay in the BUILD_VECTOR optimization path. The 2f32 inputs would get promoted to 4f32 by the type legalizer, eventually resulting in a BUILD_VECTOR on two 4f32 into an 8f32. The BUILD_VECTOR then, recognizing these were both half the output size, concatenated them and then produced a shuffle. However, the resulting concat + shuffle was more complex than it should be; in the case where the upper half of the output is undef, we probably want to generate shuffle + concat instead. This enhancement causes the vector_shuffle combine step to recognize this suboptimal pattern and correct it. I included it there instead of in BUILD_VECTOR in case the same suboptimal pattern occurs for other reasons. This results in the optimizer correctly producing the optimal movq + movhpd sequence for all three variations on this IR, even with AVX2. I've included a test case.
  Radar link: rdar://problem/19287012
  Fix for PR 21943.
  From: Fiona Glaser <fglaser@apple.com>
  llvm-svn: 226360
* Move ownership of GCStrategy objects to LLVMContext (Philip Reames, 2015-01-16, 3 files, -3/+4)
  Note: This change ended up being slightly more controversial than expected. Chandler has tentatively okayed this for the moment, but I may be revisiting this in the near future after we settle some high level questions.
  Rather than have the GCStrategy object owned by the GCModuleInfo - which is an immutable analysis pass used mainly by gc.root - have it be owned by the LLVMContext. This simplifies the ownership logic (i.e. can you have two instances of the same strategy at once?), but more importantly, allows us to access the GCStrategy in the middle end optimizer. To this end, I add an accessor through Function which becomes the canonical way to get at a GCStrategy instance. In the near future, this will allow me to move some of the checks from http://reviews.llvm.org/D6808 into the Verifier itself, and to introduce optimization legality predicates for some of the recent additions to InstCombine. (These will follow as separate changes.)
  Differential Revision: http://reviews.llvm.org/D6811
  llvm-svn: 226311
* Fix SelectionDAG -view-*-dags filtering (Mehdi Amini, 2015-01-15, 1 file, -1/+1)
  llvm-svn: 226163
* Replace size method call of containers to empty method where appropriate (Alexander Kornienko, 2015-01-15, 1 file, -1/+1)
  This patch was generated by a clang-tidy checker that is being open sourced. The documentation of that checker is the following:
    /// The emptiness of a container should be checked using the empty method
    /// instead of the size method. It is not guaranteed that size is a
    /// constant-time function, and it is generally more efficient and also shows
    /// clearer intent to use empty. Furthermore some containers may implement the
    /// empty method but not implement the size method. Using empty whenever
    /// possible makes it easier to switch to another container in the future.
  Patch by Gábor Horváth!
  llvm-svn: 226161
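  The transformation the checker performs, in miniature (container and function names are illustrative):

    #include <vector>

    // Before: size() is not guaranteed O(1) for every container, and is
    // less direct about intent.
    bool isDoneBefore(const std::vector<int> &Worklist) {
      return Worklist.size() == 0;
    }

    // After: empty() is constant-time and states the intent explicitly.
    bool isDoneAfter(const std::vector<int> &Worklist) {
      return Worklist.empty();
    }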
* [PM] Separate the TargetLibraryInfo object from the immutable pass. (Chandler Carruth, 2015-01-15, 1 file, -3/+4)
  The pass is really just a means of accessing a cached instance of the TargetLibraryInfo object, and this way we can re-use that object for the new pass manager as its result. Lots of delta, but nothing interesting happening here. This is the common pattern that is developing to allow analyses to live in both the old and new pass manager -- a wrapper pass in the old pass manager emulates the separation intrinsic to the new pass manager between the result and pass for analyses.
  llvm-svn: 226157
* [PM] Move TargetLibraryInfo into the Analysis library. (Chandler Carruth, 2015-01-15, 3 files, -3/+3)
  While the term "Target" is in the name, it doesn't really have to do with the LLVM Target library -- this isn't an abstraction which LLVM targets generally need to implement or extend. It has much more to do with modeling the various runtime libraries on different OSes and with different runtime environments. The "target" in this sense is the more general sense of a target of cross compilation. This is in preparation for porting this analysis to the new pass manager. No functionality changed, and updates inbound for Clang and Polly.
  llvm-svn: 226078
* Emit the Itanium LSDA for unknown EH personalities on Win64 (Reid Kleckner, 2015-01-14, 1 file, -0/+4)
  This fixes lots of generic CodeGen tests that use __gcc_personality_v0. This suggests that using ExceptionHandling::MSVC was a mistake, and we should instead classify each function by personality function. This would, for example, allow us to LTO a binary containing uses of SEH and Itanium EH.
  llvm-svn: 226019
* Remove dead code for llvm.eh.selector in the old EH model (Reid Kleckner, 2015-01-14, 1 file, -54/+0)
  llvm-svn: 226018
* [cleanup] Re-sort all the #include lines in LLVM using utils/sort_includes.py. (Chandler Carruth, 2015-01-14, 3 files, -3/+3)
  I clearly haven't done this in a while, so more changed than usual. This even uncovered a missing include from the InstrProf library that I've added. No functionality changed here, just mechanical cleanup of the include order.
  llvm-svn: 225974
* SelectionDAG: add a -filter-view-dags option to llc (Mehdi Amini, 2015-01-14, 1 file, -10/+25)
  This option takes the name of the basic block you want to visualize with -view-*-dags.
  Differential Revision: http://reviews.llvm.org/D6948
  llvm-svn: 225953
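  A typical invocation sketched from the description above, assuming the function under inspection has a block named entry (the exact block name is whatever appears in your IR):

    llc -view-isel-dags -filter-view-dags=entry test.ll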
* DAG Combiner: Fold SelectCC When Cond is UNDEF (Mehdi Amini, 2015-01-14, 1 file, -4/+7)
  In case folding a node ends up with a NaN as an operand of the select, the folding of the condition of the selectcc node returns "UNDEF".
  Differential Revision: http://reviews.llvm.org/D6889
  llvm-svn: 225952
* Implement new way of expanding extloads. (Matt Arsenault, 2015-01-14, 2 files, -16/+19)
  Now that the source and destination types can be specified, allow doing an expansion that doesn't use an EXTLOAD of the result type. Try to do a legal extload to an intermediate type and extend that if possible. This generalizes the special case custom lowering of extloads R600 has been using to work around this problem. This also happens to fix a bug that would incorrectly use more aligned loads than should be used.
  llvm-svn: 225925
* Adjust ScheduleDAGSDNodes::RegDefIter for patchpoints (Hal Finkel, 2015-01-14, 1 file, -0/+8)
  PATCHPOINT is a strange pseudo-instruction. Depending on how it is used, and whether or not the AnyReg calling convention is being used, it might or might not define a value. However, its TableGen definition says that it defines one value, and so when it doesn't, the code in ScheduleDAGSDNodes::RegDefIter becomes confused and the code that uses the RegDefIter will try to get the register class of the MVT::Other type associated with the PATCHPOINT's chain result (under certain circumstances). This will be covered by the PPC64 PatchPoint test cases once that support is re-committed.
  llvm-svn: 225907
* CodeGen support for x86_64 SEH catch handlers in LLVM (Reid Kleckner, 2015-01-14, 3 files, -5/+94)
  This adds handling for ExceptionHandling::MSVC, used by the x86_64-pc-windows-msvc triple. It assumes that filter functions have already been outlined in either the frontend or the backend. Filter functions are used in place of the landingpad catch clause type info operands. In catch clause order, the first filter to return true will catch the exception.
  The C specific handler table expects the landing pad to be split into one block per handler, but LLVM IR uses a single landing pad for all possible unwind actions. This patch papers over the mismatch by synthesizing single instruction BBs for every catch clause to fill in the EH selector that the landing pad block expects.
  Missing functionality:
  - Accessing data in the parent frame from outlined filters
  - Cleanups (from __finally) are unsupported, as they will require outlining and parent frame access
  - Filter clauses are unsupported, as there's no clear analogue in SEH
  In other words, this is the minimal set of changes needed to write IR to catch arbitrary exceptions and resume normal execution.
  Reviewers: majnemer
  Differential Revision: http://reviews.llvm.org/D6300
  llvm-svn: 225904
* DAGCombiner: simplify by using condition variables; NFC (Matthias Braun, 2015-01-13, 2 files, -18/+15)
  llvm-svn: 225836
* R600: Implement getRecipEstimate (Matt Arsenault, 2015-01-13, 1 file, -1/+2)
  This requires a new hook to prevent expanding sqrt in terms of rsqrt and reciprocal. v_rcp_f32, v_rsq_f32, and v_sqrt_f32 are all the same rate, so this expansion would just double the number of instructions and cycles.
  llvm-svn: 225828
* [StackMaps] Mark in CallLoweringInfo when lowering a patchpoint (Hal Finkel, 2015-01-13, 3 files, -4/+7)
  While, generally speaking, the process of lowering arguments for a patchpoint is the same as lowering a regular indirect call, on some targets it may not be exactly the same. Targets may not, for example, want to add additional register dependencies that apply only to making cross-DSO calls through linker stubs, may not want to load additional registers out of function descriptors, and may not want to add additional side-effect-causing instructions that cannot be removed later with the call itself being generated. The PowerPC target will use this in a future commit (for all of the reasons stated above).
  llvm-svn: 225806
* Added TLI hook for isFPExtFree. Some of the FMA combine heuristics are now guarded with that hook. (Olivier Sallenave, 2015-01-13, 1 file, -63/+70)
  llvm-svn: 225795
* Rename llvm.recoverframeallocation to llvm.framerecover (Reid Kleckner, 2015-01-13, 1 file, -3/+3)
  This name is less descriptive, but it sort of puts things in the 'llvm.frame...' namespace, relating it to frameallocate and frameaddress. It also avoids using "allocate" and "allocation" together.
  llvm-svn: 225752
* Add the llvm.frameallocate and llvm.recoverframeallocation intrinsics (Reid Kleckner, 2015-01-13, 1 file, -0/+53)
  These intrinsics allow multiple functions to share a single stack allocation from one function's call frame. The function with the allocation may only perform one allocation, and it must be in the entry block. Functions accessing the allocation call llvm.recoverframeallocation with the function whose frame they are accessing and a frame pointer from an active call frame of that function. These intrinsics are very difficult to inline correctly, so the intention is that they be introduced rarely, or at least very late during EH preparation.
  Reviewers: echristo, andrew.w.kaylor
  Differential Revision: http://reviews.llvm.org/D6493
  llvm-svn: 225746
* Combine fcmp + select to fminnum / fmaxnum if no nans and legal (Matt Arsenault, 2015-01-13, 1 file, -0/+59)
  Also require unsafe FP math for now, since there isn't a way to test for signed zeros.
  llvm-svn: 225744
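  A hedged sketch of the pattern match (a standalone helper for illustration; the committed code lives in DAGCombiner and handles more condition codes):

    #include "llvm/CodeGen/SelectionDAG.h"
    #include "llvm/Target/TargetLowering.h"
    using namespace llvm;

    // (select (setcc lt LHS, RHS), LHS, RHS) -> (fminnum LHS, RHS), and
    // the gt form -> fmaxnum; valid only when NaNs and signed zeros can
    // be ignored and FMINNUM/FMAXNUM is legal for the type.
    static SDValue selectToFMinMax(SDValue Cond, SDValue TV, SDValue FV,
                                   SelectionDAG &DAG,
                                   const TargetLowering &TLI) {
      if (Cond.getOpcode() != ISD::SETCC)
        return SDValue();
      SDValue LHS = Cond.getOperand(0), RHS = Cond.getOperand(1);
      if (LHS != TV || RHS != FV)
        return SDValue();
      ISD::CondCode CC = cast<CondCodeSDNode>(Cond.getOperand(2))->get();
      unsigned Opc = 0;
      if (CC == ISD::SETOLT || CC == ISD::SETLT)
        Opc = ISD::FMINNUM;
      else if (CC == ISD::SETOGT || CC == ISD::SETGT)
        Opc = ISD::FMAXNUM;
      EVT VT = TV.getValueType();
      if (!Opc || !TLI.isOperationLegal(Opc, VT))
        return SDValue();
      return DAG.getNode(Opc, SDLoc(TV), VT, LHS, RHS);
    }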
* [DAGCombine] Remainder of fix to r225380 (More FMA folding opportunities) (Hal Finkel, 2015-01-09, 1 file, -10/+24)
  As pointed out by Aditya (and Owen), when we elide an FP extend to form an FMA, we need to extend the incoming operands so that the resulting node will really be legal. This is currently enabled only for PowerPC, and it happens to work there regardless, but this should fix the functionality for everyone else should anyone else wish to use it.
  llvm-svn: 225492
* Partial fix to r225380 (More FMA folding opportunities) (Hal Finkel, 2015-01-09, 1 file, -96/+95)
  As pointed out by Aditya (and Owen), there are two things wrong with this code. First, it adds patterns which elide FP extends when forming FMAs, and that might not be profitable on all targets (it belongs behind the pre-existing aggressive-FMA-formation flag). This is fixed by this change. Second, the resulting nodes might have operands of different types (the extensions need to be re-added). That will be fixed in the follow-up commit.
  llvm-svn: 225485
* Masked Load/Store - fixed a bug in type legalization. (Elena Demikhovsky, 2015-01-08, 3 files, -3/+107)
  llvm-svn: 225441
* [SelectionDAG] Allow targets to specify legality of extloads' result type (in addition to the memory type). (Ahmed Bougacha, 2015-01-08, 3 files, -31/+38)
  The *LoadExt* legalization handling used to only have one type, the memory type. This forced users to assume that as long as the extload for the memory type was declared legal, and the result type was legal, the whole extload was legal. However, this isn't always the case. For instance, on X86, with AVX, this is legal:
    v4i32 load, zext from v4i8
  but this isn't:
    v4i64 load, zext from v4i8
  whereas v4i64 is (arguably) legal, even without AVX2.
  Note that the same thing was done a while ago for truncstores (r46140), but I assume no one needed it yet for extloads, so here we go. Calls to getLoadExtAction were changed to add the value type, found manually in the surrounding code. Calls to setLoadExtAction were mechanically changed, by wrapping the call in a loop, to match previous behavior. The loop iterates over the MVT subrange corresponding to the memory type (FP vectors, etc...). I also pulled neighboring setTruncStoreActions into some of the loops; those shouldn't make a difference, as the additional types are illegal. (e.g., i128->i1 truncstores on PPC.) No functional change intended.
  Differential Revision: http://reviews.llvm.org/D6532
  llvm-svn: 225421
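  Mechanically, a target's setup changes roughly like this; the fragment below sits in a target's TargetLowering constructor, and the opcode and types are illustrative, not taken from any particular target:

    // Before: the action was keyed only on the memory type:
    //   setLoadExtAction(ISD::ZEXTLOAD, MVT::v4i8, Expand);
    // After: the action is keyed on (result type, memory type), and the
    // old behavior is recovered by looping over candidate result types.
    for (MVT VT : MVT::integer_vector_valuetypes())
      setLoadExtAction(ISD::ZEXTLOAD, VT, MVT::v4i8, Expand);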
* More FMA folding opportunities. (Olivier Sallenave, 2015-01-07, 1 file, -1/+133)
  llvm-svn: 225380
* Test commit (Olivier Sallenave, 2015-01-07, 1 file, -0/+1)
  llvm-svn: 225368
* Introduce an example statepoint GC strategy (Philip Reames, 2015-01-07, 1 file, -0/+43)
  This change includes the most basic possible GCStrategy for a GC which is using the statepoint lowering code. At the moment, this GCStrategy doesn't really do much - aside from actually generating correct stackmaps, that is - but I went ahead and added a few extra correctness checks as proof of concept. It's mostly here to provide documentation on how to do one, and to provide a point for various optimization legality hooks I'd like to add going forward. (For context, see the TODOs in InstCombine around gc.relocate.)
  Most of the validation logic added here as proof of concept will soon move in to the Verifier. That move is dependent on http://reviews.llvm.org/D6811
  There was discussion in the review thread about addrspace(1) being reserved for something. I'm going to follow up on a separate llvmdev thread. If needed, I'll update all the code at once.
  Note that I am deliberately not making a GCStrategy required to use gc.statepoints with this change. I want to give folks out of tree - including myself - a chance to migrate. In a week or two, I'll make having a GCStrategy be required for gc.statepoints. To this end, I added the gc tag to one of the test cases but not others.
  Differential Revision: http://reviews.llvm.org/D6808
  llvm-svn: 225365
* SelectionDAGBuilder: move constant initialization out of loop (Mehdi Amini, 2015-01-06, 1 file, -15/+19)
  No semantic change intended.
  Reviewers: resistor
  Differential Revision: http://reviews.llvm.org/D6834
  llvm-svn: 225278
* Replace several 'assert(false' with 'llvm_unreachable' or fold a condition into the assert. (Craig Topper, 2015-01-05, 1 file, -1/+1)
  llvm-svn: 225160
* Revert "merge consecutive stores of extracted vector elements"Alexey Samsonov2014-12-311-75/+4
| | | | | | | This reverts commit r224611. This change causes crashes in X86 DAG->DAG Instruction Selection. llvm-svn: 225031
* Masked Load/Store - Changed the order of parameters in intrinsics. (Elena Demikhovsky, 2014-12-25, 1 file, -5/+7)
  No functional changes. The documentation is coming.
  llvm-svn: 224829
* Always assert in DAGCombine and not only when -debug is enabled (Mehdi Amini, 2014-12-23, 1 file, -5/+6)
  Right now DAG Combine checks the validity of the returned type only when -debug is given on the command line. However, the test cases used in validation usually do not use -debug. An assert-enabled build should always check this.
  llvm-svn: 224779
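  In miniature, the change moves the check out of debug-only code into an unconditional assert; the condition shown below is representative, not the committed line:

    // Before: the type check ran only inside DEBUG(...) blocks, so
    // release+asserts builds and tests without -debug never saw it.
    // After: every assert-enabled build verifies the combined node.
    assert(RV.getValueType() == N->getValueType(0) &&
           "DAG combine created a value with the wrong type");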
* [DagCombine] Improve DAGCombiner BUILD_VECTOR when it has two sources of elements (Michael Kuperstein, 2014-12-23, 1 file, -12/+22)
  This partially fixes PR21943.
  For AVX, we go from:
    vmovq (%rsi), %xmm0
    vmovq (%rdi), %xmm1
    vpermilps $-27, %xmm1, %xmm2 ## xmm2 = xmm1[1,1,2,3]
    vinsertps $16, %xmm2, %xmm1, %xmm1 ## xmm1 = xmm1[0],xmm2[0],xmm1[2,3]
    vinsertps $32, %xmm0, %xmm1, %xmm1 ## xmm1 = xmm1[0,1],xmm0[0],xmm1[3]
    vpermilps $-27, %xmm0, %xmm0 ## xmm0 = xmm0[1,1,2,3]
    vinsertps $48, %xmm0, %xmm1, %xmm0 ## xmm0 = xmm1[0,1,2],xmm0[0]
  To the expected:
    vmovq (%rdi), %xmm0
    vmovhpd (%rsi), %xmm0, %xmm0
    retq
  Fixing this for AVX2 is still open.
  Differential Revision: http://reviews.llvm.org/D6749
  llvm-svn: 224759
* Enable (sext x) == C --> x == (trunc C) combine (Matt Arsenault, 2014-12-21, 1 file, -9/+26)
  Extend the existing code which handles this for zext. This makes this more useful for targets with ZeroOrNegativeOne BooleanContent and obsoletes a custom combine SI uses for i1 setcc (sext(i1), 0, setne) since the constant will now be shrunk to i1.
  llvm-svn: 224691
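  A hedged sketch of the shrink (a standalone helper written for illustration; names like N0/N1 echo DAGCombiner conventions but this is not the committed hunk):

    #include "llvm/CodeGen/SelectionDAG.h"
    using namespace llvm;

    // (setcc (sext X), C) -> (setcc X, (trunc C)) is sound when
    // truncating C and sign-extending it back reproduces C, i.e. the
    // comparison loses no information in the narrow type.
    static SDValue shrinkSExtSetCC(SDValue N0, SDValue N1, ISD::CondCode CC,
                                   EVT VT, SelectionDAG &DAG, SDLoc DL) {
      if (N0.getOpcode() != ISD::SIGN_EXTEND || !isa<ConstantSDNode>(N1))
        return SDValue();
      SDValue X = N0.getOperand(0);
      EVT SrcVT = X.getValueType();
      const APInt &C = cast<ConstantSDNode>(N1)->getAPIntValue();
      APInt TruncC = C.trunc(SrcVT.getScalarSizeInBits());
      if (TruncC.sext(C.getBitWidth()) != C)
        return SDValue(); // C is not representable in the narrow type.
      return DAG.getSetCC(DL, VT, X, DAG.getConstant(TruncC, DL, SrcVT), CC);
    }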
* merge consecutive stores of extracted vector elements (Sanjay Patel, 2014-12-19, 1 file, -4/+75)
  Add a path to DAGCombiner::MergeConsecutiveStores() to combine multiple scalar stores when the store operands are extracted vector elements. This is a partial fix for PR21711 ( http://llvm.org/bugs/show_bug.cgi?id=21711 ).
  For the new test case, codegen improves from:
    vmovss %xmm0, (%rdi)
    vextractps $1, %xmm0, 4(%rdi)
    vextractps $2, %xmm0, 8(%rdi)
    vextractps $3, %xmm0, 12(%rdi)
    vextractf128 $1, %ymm0, %xmm0
    vmovss %xmm0, 16(%rdi)
    vextractps $1, %xmm0, 20(%rdi)
    vextractps $2, %xmm0, 24(%rdi)
    vextractps $3, %xmm0, 28(%rdi)
    vzeroupper
    retq
  To:
    vmovups %ymm0, (%rdi)
    vzeroupper
    retq
  Patch reviewed by Nadav Rotem.
  Differential Revision: http://reviews.llvm.org/D6698
  llvm-svn: 224611
* [DAGCombine] Slightly improve lowering of BUILD_VECTOR into a shuffle. (Michael Kuperstein, 2014-12-17, 1 file, -11/+22)
  This handles the case of a BUILD_VECTOR being constructed out of elements extracted from a vector twice the size of the result vector. Previously this was always scalarized. Now, we try to construct a shuffle node that feeds on extract_subvectors. This fixes PR15872 and provides a partial fix for PR21711.
  Differential Revision: http://reviews.llvm.org/D6678
  llvm-svn: 224429
* SelectionDAG switch lowering: use 'unsigned' to count destination popularity (Hans Wennborg, 2014-12-16, 1 file, -2/+2)
  SwitchInst::getNumCases() returns unsigned, so using uint64_t to count cases seems unnecessary. Also fix a missing CHECK in the test case.
  llvm-svn: 224393
* merge consecutive loads that are offset from a base address (Sanjay Patel, 2014-12-16, 1 file, -5/+19)
  SelectionDAG::isConsecutiveLoad() was not detecting consecutive loads when the first load was offset from a base address. This patch recognizes that pattern and subtracts the offset before comparing the second load to see if it is consecutive.
  The codegen change in the new test case improves from:
    vmovsd 32(%rdi), %xmm0
    vmovsd 48(%rdi), %xmm1
    vmovhpd 56(%rdi), %xmm1, %xmm1
    vmovhpd 40(%rdi), %xmm0, %xmm0
    vinsertf128 $1, %xmm1, %ymm0, %ymm0
  To:
    vmovups 32(%rdi), %ymm0
  An existing test case is also improved from:
    vmovsd (%rdi), %xmm0
    vmovsd 16(%rdi), %xmm1
    vmovsd 24(%rdi), %xmm2
    vunpcklpd %xmm2, %xmm0, %xmm0 ## xmm0 = xmm0[0],xmm2[0]
    vmovhpd 8(%rdi), %xmm1, %xmm3
  To:
    vmovsd (%rdi), %xmm0
    vmovsd 16(%rdi), %xmm1
    vmovhpd 24(%rdi), %xmm0, %xmm0
    vmovhpd 8(%rdi), %xmm1, %xmm1
  This patch fixes PR21771 ( http://llvm.org/bugs/show_bug.cgi?id=21771 ).
  Differential Revision: http://reviews.llvm.org/D6642
  llvm-svn: 224379
* Silence more static analyzer warnings. (Michael Ilseman, 2014-12-15, 1 file, -1/+2)
  Add in definedness checks for shift operators, null checks when pointers are assumed by the code to be non-null, and explicit unreachables.
  llvm-svn: 224255
* Add target hook for whether it is profitable to reduce load widths (Matt Arsenault, 2014-12-12, 1 file, -0/+3)
  Add an option to disable optimization to shrink truncated larger type loads to smaller type loads. On SI this prevents using scalar load instructions in some cases, since there are no scalar extloads.
  llvm-svn: 224084
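  A target opts out by overriding the hook; sketched below for a hypothetical target class (MyTargetLowering is illustrative; the hook signature matches TargetLowering of this era, but treat the details as an assumption):

    // Returning false keeps the original load width. On SI there are no
    // scalar extloads, so shrinking a wide load can force selection of
    // vector instructions where a scalar load would have sufficed.
    bool MyTargetLowering::shouldReduceLoadWidth(SDNode *Load,
                                                 ISD::LoadExtType ExtTy,
                                                 EVT NewVT) const {
      return false;
    }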
* IR: Split Metadata from Value (Duncan P. N. Exon Smith, 2014-12-09, 1 file, -2/+4)
  Split `Metadata` away from the `Value` class hierarchy, as part of PR21532. Assembly and bitcode changes are in the wings, but this is the bulk of the change for the IR C++ API.
  I have a follow-up patch prepared for `clang`. If this breaks other sub-projects, I apologize in advance :(. Help me compile it on Darwin and I'll try to fix it. FWIW, the errors should be easy to fix, so it may be simpler to just fix it yourself.
  This breaks the build for all metadata-related code that's out-of-tree. Rest assured the transition is mechanical and the compiler should catch almost all of the problems.
  Here's a quick guide for updating your code:
  - `Metadata` is the root of a class hierarchy with three main classes: `MDNode`, `MDString`, and `ValueAsMetadata`. It is distinct from the `Value` class hierarchy. It is typeless -- i.e., instances do *not* have a `Type`.
  - `MDNode`'s operands are all `Metadata *` (instead of `Value *`).
  - `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively. If you're referring solely to resolved `MDNode`s -- post graph construction -- just use `MDNode*`.
  - `MDNode` (and the rest of `Metadata`) have only limited support for `replaceAllUsesWith()`. As long as an `MDNode` is pointing at a forward declaration -- the result of `MDNode::getTemporary()` -- it maintains a side map of its uses and can RAUW itself. Once the forward declarations are fully resolved, RAUW support is dropped on the ground. This means that uniquing collisions on changing operands cause nodes to become "distinct". (This already happened fairly commonly, whenever an operand went to null.) If you're constructing complex (non self-reference) `MDNode` cycles, you need to call `MDNode::resolveCycles()` on each node (or on a top-level node that somehow references all of the nodes). Also, don't do that. Metadata cycles (and the RAUW machinery needed to construct them) are expensive.
  - An `MDNode` can only refer to a `Constant` through a bridge called `ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`). As a side effect, accessing an operand of an `MDNode` that is known to be, e.g., `ConstantInt`, takes three steps: first, cast from `Metadata` to `ConstantAsMetadata`; second, extract the `Constant`; third, cast down to `ConstantInt`. The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have metadata schema owners transition away from using `Constant`s when the type isn't important (and they don't care about referring to `GlobalValue`s). In the meantime, I've added transitional API to the `mdconst` namespace that matches semantics with the old code, in order to avoid adding the error-prone three-step equivalent to every call site. If your old code was:
      MDNode *N = foo();
      bar(isa <ConstantInt>(N->getOperand(0)));
      baz(cast <ConstantInt>(N->getOperand(1)));
      bak(cast_or_null <ConstantInt>(N->getOperand(2)));
      bat(dyn_cast <ConstantInt>(N->getOperand(3)));
      bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));
    you can trivially match its semantics with:
      MDNode *N = foo();
      bar(mdconst::hasa <ConstantInt>(N->getOperand(0)));
      baz(mdconst::extract <ConstantInt>(N->getOperand(1)));
      bak(mdconst::extract_or_null <ConstantInt>(N->getOperand(2)));
      bat(mdconst::dyn_extract <ConstantInt>(N->getOperand(3)));
      bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));
    and when you transition your metadata schema to `MDInt`:
      MDNode *N = foo();
      bar(isa <MDInt>(N->getOperand(0)));
      baz(cast <MDInt>(N->getOperand(1)));
      bak(cast_or_null <MDInt>(N->getOperand(2)));
      bat(dyn_cast <MDInt>(N->getOperand(3)));
      bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));
  - A `CallInst` -- specifically, intrinsic instructions -- can refer to metadata through a bridge called `MetadataAsValue`. This is a subclass of `Value` where `getType()->isMetadataTy()`. `MetadataAsValue` is the *only* class that can legally refer to a `LocalAsMetadata`, which is a bridged form of non-`Constant` values like `Argument` and `Instruction`. It can also refer to any other `Metadata` subclass.
  (I'll break all your testcases in a follow-up commit, when I propagate this change to assembly.)
  llvm-svn: 223802
* Fix a few instances found in SelectionDAG where we were not handling F16 at parity with F32 and F64. (Owen Anderson, 2014-12-09, 2 files, -3/+5)
  llvm-svn: 223760