path: root/llvm/lib/CodeGen
Commits are listed below as: message (author, date; files changed, lines removed/added).
...
* Add alignment value to allowsUnalignedMemoryAccess (Matt Arsenault, 2014-07-27; 3 files, -12/+17)
  Rename to allowsMisalignedMemoryAccess. On R600, 8 and 16 byte accesses are mostly OK with 4-byte alignment, and don't need to be split into multiple accesses. Vector loads with an alignment of the element type are not uncommon in OpenCL code.
  llvm-svn: 214055
* [SDAG] Add an assert that we don't mess up the number of values when replacing nodes in the legalizer (Chandler Carruth, 2014-07-26; 1 file, -0/+3)
  This caught a number of bugs for me during development.
  llvm-svn: 214022
* [SDAG] Simplify the code for handling single-value nodes and add a missing transfer of debug information, without which tests fail (Chandler Carruth, 2014-07-26; 1 file, -8/+12)
  llvm-svn: 214021
* [SDAG] When performing post-legalize DAG combining, run the legalizer over each node in the worklist prior to combining (Chandler Carruth, 2014-07-26; 2 files, -61/+107)
  This allows the combiner to produce new nodes which need to go back through legalization. This is particularly useful when generating operands to target specific nodes in a post-legalize DAG combine where the operands are significantly easier to express as pre-legalized operations. My immediate use case will be PSHUFB formation where we need to build a constant shuffle mask with a build_vector node.

  This also refactors the relevant functionality in the legalizer to support this, and updates relevant tests. I've spoken to the R600 folks and these changes look like improvements to them. The avx512 change needs to be investigated; I suspect there is a disagreement between the legalizer and the DAG combiner there, but it seems a minor issue, so leaving it to be re-evaluated after this patch.

  Differential Revision: http://reviews.llvm.org/D4564
  llvm-svn: 214020
* Add @llvm.assume, lowering, and some basic properties (Hal Finkel, 2014-07-25; 3 files, -3/+6)
  This is the first commit in a series that adds an @llvm.assume intrinsic which can be used to provide the optimizer with a condition it may assume to be true (when the control flow would hit the intrinsic call). Some basic properties are added here:
  - @llvm.assume(true) is dead.
  - @llvm.assume(false) is unreachable (this directly corresponds to the documented behavior of MSVC's __assume(0)), as is @llvm.assume(undef).

  The intrinsic is tagged as writing arbitrarily, in order to maintain control dependencies. BasicAA has been updated, however, to return NoModRef for any particular location-based query so that we don't unnecessarily block code motion.
  llvm-svn: 213973
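  Illustration (not part of the patch): a minimal IR sketch of the intrinsic in use. The function and values are invented for this example; given the recorded assumption, the clamp below becomes redundant.

    declare void @llvm.assume(i1)

    define i32 @clamp_positive(i32 %x) {
    entry:
      ; Record for the optimizer that %x > 0 holds whenever control
      ; reaches this point.
      %assumption = icmp sgt i32 %x, 0
      call void @llvm.assume(i1 %assumption)
      ; With that assumption, this select may be folded to just %x.
      %is.neg = icmp slt i32 %x, 0
      %clamped = select i1 %is.neg, i32 0, i32 %x
      ret i32 %clamped
    }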
* [stack protector] Fix a potential security bug in stack protector where the address of the stack guard was being spilled to the stack (Akira Hatanaka, 2014-07-25; 2 files, -6/+50)
  Previously the address of the stack guard would get spilled to the stack if it was impossible to keep it in a register. This patch introduces a new target-independent node and pseudo instruction which gets expanded post-RA to a sequence of instructions that load the stack guard value. The register allocator can now just remat the value when it can't keep it in a register.
  <rdar://problem/12475629>
  llvm-svn: 213967
* Reapply "DebugInfo: Don't put fission type units in comdat sections."David Blaikie2014-07-253-12/+19
| | | | | | | | | | | | | | | | This recommits r208930, r208933, and r208975 (by reverting r209338) and reverts r209529 (the FIXME to readd this functionality once the tools were fixed) now that DWP has been fixed to cope with a single section for all fission type units. Original commit message: "Since type units in the dwo file are handled by a debug aware tool, they don't need to leverage the ELF comdat grouping to implement deduplication. Avoid creating all the .group sections for these as a space optimization." llvm-svn: 213956
* Recommit r212203: Don't try to construct debug LexicalScopes hierarchy for functions that do not have top level debug information (David Blaikie, 2014-07-25; 4 files, -4/+47)
  Reverted by Eric Christopher (Thanks!) in r212203 after Bob Wilson reported LTO issues. Duncan Exon Smith and Aditya Nandakumar helped provide a reduced reproduction, though the failure wasn't too hard to guess, and even easier with the example to confirm.

  The assertion that the subprogram metadata associated with an llvm::Function matches the scope data referenced by the DbgLocs on the instructions in that function is not valid under LTO. In LTO, a C++ inline function might exist in multiple CUs and the subprogram metadata nodes will refer to the same llvm::Function. In this case, depending on the order of the CUs, the first instance of the subprogram metadata may not be the one referenced by the instructions in that function and the assertion will fail. A test case (test/DebugInfo/cross-cu-linkonce-distinct.ll) is added, the assertion removed and a comment added to explain this situation.

  This was then reverted again in r213581 as it caused PR20367. The root cause was that the early exit in LiveDebugVariables meant that spurious DBG_VALUE intrinsics that referenced dead variables were not removed, causing an assertion/crash later on. The fix is to have LiveDebugVariables strip all DBG_VALUE intrinsics in functions without debug info as they're not needed anyway. A test case covering this situation (which occurs when a debug-having function is inlined into a nodebug function) is added in test/DebugInfo/X86/nodebug_with_debug_loc.ll.

  Original commit message: If a function isn't actually in a CU's subprogram list in the debug info metadata, ignore all the DebugLocs and don't try to build scopes, track variables, etc. While this is possibly a minor optimization, it's also a correctness fix for an incoming patch that will add assertions to LexicalScopes and the debug info verifier to ensure that all scope chains lead to debug info for the current function. Fix up a few test cases that had broken/incomplete debug info that could violate this constraint. Add a test case where this occurs by design (inlining a debug-info-having function in an attribute-nodebug function - we want this to work because /if/ the nodebug function is then inlined into a debug-info-having function, it should be fine (and will work fine - we just stitch the scopes up as usual), but should the inlining not happen we need to not assert fail either).
  llvm-svn: 213952
* [SDAG] Don't insert the VRBase into a mapping from SDValues when the def doesn't actually correspond to an SDValue at all (Chandler Carruth, 2014-07-25; 1 file, -6/+10)
  Fixes most of the remaining asserts on out-of-range SDValue result numbers.
  llvm-svn: 213930
* Store nodes only have 1 result (Matt Arsenault, 2014-07-25; 1 file, -1/+1)
  llvm-svn: 213928
* [SDAG] Start plumbing an assert into SDValues that we don't form one with a result number outside the range of results for the node (Chandler Carruth, 2014-07-25; 1 file, -1/+1)
  I don't know how we managed to not really check this very basic invariant for so long, but the code is *very* broken at this point. I have over 270 test failures with the assert enabled. I'm committing it disabled so that others can join in the cleanup effort and reproduce the issues. I've also included one of the obvious fixes that I already found. More fixes to come.
  llvm-svn: 213926
* [SDAG] Introduce a combined set to the DAG combiner which tracks nodes which have successfully round-tripped through the combine phase (Chandler Carruth, 2014-07-24; 1 file, -5/+21)
  Use this to ensure all operands to DAG nodes are visited by the combiner, even if they are only added during the combine phase. This is critical to have the combiner reach nodes that are *introduced* during combining. Previously these would sometimes be visited and sometimes not be visited based on whether they happened to end up on the worklist or not. Now we always run them through the combiner.

  This fixes quite a few bad codegen test cases lurking in the suite while also being more principled. Among these, the TLS code generation is particularly exciting for programs that have this in the critical path, like TSan-instrumented binaries (although I think they engineer to use a different TLS that is faster anyways).

  I've tried to check for compile-time regressions here by running llc over a merged (but not LTO-ed) clang bitcode file and observed at most a 3% slowdown in llc. Given that this is essentially a worst case (none of opt or clang are running at this phase) I think this is tolerable. The actual LTO case should be even less costly, and the cost in normal compilation should be negligible.

  With this combining logic, it is possible to re-legalize as we combine, which is necessary to implement PSHUFB formation on x86 as a post-legalize DAG combine (my ultimate goal).

  Differential Revision: http://reviews.llvm.org/D4638
  llvm-svn: 213898
* [x86] Make vector legalization of extloads work more like the "normal" vector operation legalization (Chandler Carruth, 2014-07-24; 1 file, -5/+22)
  Add support for custom target lowering, with fallback to expand when it fails, and use this to implement sext and anyext load lowering for x86 in a more principled way.

  Previously, the x86 backend relied on a target DAG combine to "combine away" sextload and extload nodes prior to legalization, or would expand them during legalization with terrible code. This is particularly problematic because the DAG combine relies on running over non-canonical DAG nodes at just the right time to match several common and important patterns. It used a combine rather than lowering because we didn't have good lowering support, and to expose some tricks being employed to more combine phases.

  With this change it becomes a proper lowering operation, the backend marks that it can lower these nodes, and I've added support for handling the canonical forms that don't have direct legal representations, such as sextload of a v4i8 -> v4i64 on AVX1. With this change, our test cases for this behavior continue to pass even after the DAG combiner begins running more systematically over every node.

  There is some noise caused by this in the test suite where we actually use vector extends instead of subregister extraction. This doesn't really seem like the right thing to do, but is unlikely to be a critical regression. We do regress in one case where by lowering to the target-specific patterns early we were able to combine away extraneous legal math nodes. However, this regression is completely addressed by switching to a widening based legalization, which is what I'm working toward anyways, so I've just switched the test to that mode.

  Differential Revision: http://reviews.llvm.org/D4654
  llvm-svn: 213897
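  Illustration (not part of the patch): IR of the canonical form called out above; the function name is invented. The sext of a small vector load becomes a sextload node in the DAG, and v4i8 -> v4i64 has no direct legal representation on AVX1, so it now goes through the custom lowering.

    define <4 x i64> @sext_load_v4i8(<4 x i8>* %p) {
      ; Built as a sextload node during DAG construction/combining.
      %v = load <4 x i8>* %p
      %e = sext <4 x i8> %v to <4 x i64>
      ret <4 x i64> %e
    }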
* [X86] Optimize stackmap shadows on X86 (Lang Hames, 2014-07-24; 1 file, -0/+2)
  This patch minimizes the number of nops that must be emitted on X86 to satisfy stackmap shadow constraints. To minimize the number of nops inserted, the X86AsmPrinter now records the size of the most recent stackmap's shadow in the StackMapShadowTracker class, and tracks the number of instruction bytes emitted since that stackmap instruction was encountered. Padding is emitted (if it is required at all) immediately before the next stackmap/patchpoint instruction, or at the end of the basic block.

  This optimization should reduce code size and improve performance for people using the llvm stackmap intrinsic on X86.
  <rdar://problem/14959522>
  llvm-svn: 213892
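  Illustration (not part of the patch): a minimal sketch of the intrinsic whose shadow is tracked; the ID and shadow size are arbitrary.

    declare void @llvm.experimental.stackmap(i64, i32, ...)

    define void @f(i32 %x) {
    entry:
      ; Stackmap ID 1 with an 8-byte shadow: nops are now emitted only if
      ; fewer than 8 bytes of real instructions follow before the next
      ; stackmap/patchpoint or the end of the basic block.
      call void (i64, i32, ...)* @llvm.experimental.stackmap(i64 1, i32 8, i32 %x)
      ret void
    }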
* Add scoped-noalias metadata (Hal Finkel, 2014-07-24; 2 files, -0/+29)
  This commit adds scoped noalias metadata. The primary motivations for this feature are:
  1. To preserve noalias function attribute information when inlining
  2. To provide the ability to model block-scope C99 restrict pointers

  Neither of these two abilities are added here, only the necessary infrastructure. In fact, there should be no change to existing functionality, only the addition of new features. The logic that converts noalias function parameters into this metadata during inlining will come in a follow-up commit.

  What is added here is the ability to generally specify noalias memory-access sets. Regarding the metadata, alias-analysis scopes are defined similar to TBAA nodes:

    !scope0 = metadata !{ metadata !"scope of foo()" }
    !scope1 = metadata !{ metadata !"scope 1", metadata !scope0 }
    !scope2 = metadata !{ metadata !"scope 2", metadata !scope0 }
    !scope3 = metadata !{ metadata !"scope 2.1", metadata !scope2 }
    !scope4 = metadata !{ metadata !"scope 2.2", metadata !scope2 }

  Loads and stores can be tagged with an alias-analysis scope, and also with a noalias tag for a specific scope:

    ... = load %ptr1, !alias.scope !{ !scope1 }
    ... = load %ptr2, !alias.scope !{ !scope1, !scope2 }, !noalias !{ !scope1 }

  When evaluating an aliasing query, if one of the instructions is associated with an alias.scope id that is identical to the noalias scope associated with the other instruction, or is a descendant (in the scope hierarchy) of the noalias scope associated with the other instruction, then the two memory accesses are assumed not to alias.

  Note that if the first element of the scope metadata is a string, then it can be combined across functions and translation units. The string can be replaced by a self-reference to create globally unique scope identifiers.

  [Note: This overview is slightly stylized, since the metadata nodes really need to just be numbers (!0 instead of !scope0), and the scope lists are also global unnamed metadata.]

  Existing noalias metadata in a callee is "cloned" for use by the inlined code. This is necessary because the aliasing scopes are unique to each call site (because of possible control dependencies on the aliasing properties). For example, consider a function:

    foo(noalias a, noalias b) { *a = *b; }

  that gets inlined into

    bar() { ... if (...) foo(a1, b1); ... if (...) foo(a2, b2); }

  -- now just because we know that a1 does not alias with b1 at the first call site, and a2 does not alias with b2 at the second call site, we cannot let the metadata from inlining these functions imply that a1 does not alias with b2.
  llvm-svn: 213864
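  Illustration (not part of the patch): a self-contained sketch of the non-stylized form, using numbered unnamed metadata as the note above requires. The function and scope names are invented.

    define void @copy(float* %a, float* %b) {
    entry:
      ; The load carries b's scope and promises not to alias a's scope;
      ; the store carries a's scope and promises not to alias b's scope,
      ; so the two accesses are assumed not to alias.
      %v = load float* %b, !alias.scope !3, !noalias !4
      store float %v, float* %a, !alias.scope !4, !noalias !3
      ret void
    }

    !0 = metadata !{ metadata !"root scope of @copy" }
    !1 = metadata !{ metadata !"b's scope", metadata !0 }
    !2 = metadata !{ metadata !"a's scope", metadata !0 }
    !3 = metadata !{ metadata !1 }
    !4 = metadata !{ metadata !2 }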
* AA metadata refactoring (introduce AAMDNodes) (Hal Finkel, 2014-07-24; 13 files, -120/+126)
  In order to enable the preservation of noalias function parameter information after inlining, and the representation of block-level __restrict__ pointer information (etc.), additional kinds of aliasing metadata will be introduced. This metadata needs to be carried around in AliasAnalysis::Location objects (and MMOs at the SDAG level), and so we need to generalize the current scheme (which is hard-coded to just one TBAA MDNode*).

  This commit introduces only the necessary refactoring to allow for the introduction of other aliasing metadata types, but does not actually introduce any (that will come in a follow-up commit). What it does introduce is a new AAMDNodes structure to hold all of the aliasing metadata nodes associated with a particular memory-accessing instruction, and uses that structure instead of the raw MDNode* in AliasAnalysis::Location, etc.

  No functionality change intended.
  llvm-svn: 213859
* Fix indenting (Eric Christopher, 2014-07-23; 1 file, -13/+14)
  llvm-svn: 213811
* Reorganize and simplify local variables (Eric Christopher, 2014-07-23; 1 file, -13/+11)
  llvm-svn: 213809
* Remove the query for TargetMachine and TargetInstrInfo since we're already inside TargetInstrInfo (Eric Christopher, 2014-07-23; 1 file, -3/+1)
  llvm-svn: 213806
* DAG: fp->int conversion for non-splat constants (Jim Grosbach, 2014-07-23; 1 file, -12/+11)
  Constant fold the lanes of the input constant build_vector individually, so we correctly handle the case where the vector elements are not all the same constant value.
  PR20394
  llvm-svn: 213798
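  Illustration (not part of the patch): an IR-level equivalent of the fold, with arbitrary lane values.

    ; Each lane of the non-splat constant is now folded individually, so
    ; this function folds to ret <4 x i32> <i32 1, i32 2, i32 -3, i32 4>.
    define <4 x i32> @fold_nonsplat() {
      %r = fptosi <4 x float> <float 1.0, float 2.5, float -3.0, float 4.0> to <4 x i32>
      ret <4 x i32> %r
    }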
* [AArch64] Lower sdiv x, pow2 using add + select + shift (Chad Rosier, 2014-07-23; 1 file, -3/+29)
  The target-independent DAGCombiner will generate:

    asr w1, X, #31          ; w1 = splat sign bit
    add X, X, w1, lsr #28   ; X = X + (0 or pow2-1)
    asr w0, X, #4           ; w0 = X/pow2

  However, the add + shifts is expensive, so generate:

    add w0, X, 15           ; w0 = X + (pow2-1)
    cmp X, wzr              ; X - 0
    csel X, w0, X, lt       ; X = (X < 0) ? X + pow2-1 : X
    asr w0, X, #4           ; w0 = X/pow2

  llvm-svn: 213758
* Enable partial libcall inlining for all targets by default (James Molloy, 2014-07-23; 1 file, -0/+5)
  This pass attempts to speculatively use a sqrt instruction if one exists on the target, falling back to a libcall if the target instruction returned NaN. This was enabled for MIPS and SystemZ, but is well guarded and is good for most targets - GCC does this for (that I've checked) X86, ARM and AArch64.
  llvm-svn: 213752
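  Illustration (not part of the patch): a sketch of the shape of the speculative expansion the pass performs; names and exact structure are approximate.

    define float @sqrt_speculative(float %x) {
    entry:
      ; Try the (fast) hardware square root first.
      %fast = call float @llvm.sqrt.f32(float %x)
      ; An unordered self-compare is true exactly when the result is NaN.
      %isnan = fcmp uno float %fast, %fast
      br i1 %isnan, label %slowpath, label %done

    slowpath:
      ; Fall back to the libcall for inputs the instruction rejects.
      %lib = call float @sqrtf(float %x)
      br label %done

    done:
      %r = phi float [ %fast, %entry ], [ %lib, %slowpath ]
      ret float %r
    }

    declare float @llvm.sqrt.f32(float)
    declare float @sqrtf(float)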
* [SDAG] Make the DAGCombine worklist not grow endlessly due to duplicate insertions (Chandler Carruth, 2014-07-23; 1 file, -52/+71)
  The old behavior could cause arbitrarily bad memory usage in the DAG combiner if there was heavy traffic of adding nodes already on the worklist to it. This commit switches the DAG combine worklist to work the same way as the instcombine worklist, where we null-out removed entries and only add new entries to the worklist. My measurements of codegen time show a slight improvement. The memory utilization is unsurprisingly dominated by other factors (the IR and DAG itself, I suspect).

  This change results in subtle, frustrating churn in the particular order in which DAG combines are applied, which causes a number of minor regressions where we fail to match a pattern previously matched by accident. AFAICT, all of these should be using AddToWorklist directly or should be written in a less brittle way. None of the changes seem drastically bad, and a few of the changes seem distinctly better.

  A major change required to make this work is to significantly harden the way in which the DAG combiner handles nodes which become dead (zero uses). Previously, we relied on the ability to "priority-bump" them on the combine worklist to achieve recursive deletion of these nodes and ensure that the frontier of remaining live nodes all were added to the worklist. Instead, I've introduced a routine to just implement that precise logic with no indirection. It is a significantly simpler operation than that of the combiner worklist proper. I suspect this will also fix some other problems with the combiner.

  I think the x86 changes are really minor and uninteresting, but the avx512 change at least is hiding a "regression" (despite the test case being just noise, not testing some performance invariant) that might be looked into. Not sure if any of the others impact specific "important" code paths, but they didn't look terribly interesting to me, or the changes were really minor. The consensus in review is to fix any regressions that show up after the fact here.

  Thanks to the other reviewers for checking the output on other architectures. There is a specific regression on ARM that Tim already has a fix prepped to commit.

  Differential Revision: http://reviews.llvm.org/D4616
  llvm-svn: 213727
* [SDAG] Refactor the code for inserting a newly allocated SDNode into the DAG into a helper function (Chandler Carruth, 2014-07-22; 1 file, -96/+86)
  This adds a trip through the (very minimal) verification logic in a bunch of places that were missing it, but shouldn't have any other impact outside of refactoring. I'm hoping to use this to do more clever things when DAG nodes are inserted into the graph.
  llvm-svn: 213612
* [SDAG] Remove a giant pile of asserts that may have helped track down a bug in 2010 when they were added, but are adding no value today (Chandler Carruth, 2014-07-22; 1 file, -40/+3)
  In fact, they are utter lies. NodeAllocator is used to allocate almost all of these node types. I don't know what we were trying to assert here, and the docs don't give any answer. Until we once again stumble upon a bug needing help, let's clear the path for improvements.
  llvm-svn: 213610
* Revert "Recommit r212203: Don't try to construct debug LexicalScopes ↵David Blaikie2014-07-214-33/+4
| | | | | | | | hierarchy for functions that do not have top level debug information." This reverts commit r212649 while I investigate/reduce/etc PR20367. llvm-svn: 213581
* Replace the result usages while legalizing cmpxchg (Logan Chien, 2014-07-21; 1 file, -4/+3)
  We should update the usages of all of the results; otherwise, we might get an assertion failure or SEGV during the type legalization of ATOMIC_CMP_SWAP_WITH_SUCCESS with two or more illegal types.

  For example, in the following sequence, both i8 and i1 might be illegal on some targets, e.g. armv5, mipsel, mips64el:

    %0 = cmpxchg i8* %ptr, i8 %desire, i8 %new monotonic monotonic
    %1 = extractvalue { i8, i1 } %0, 1

  Since both i8 and i1 should be legalized, the corresponding ATOMIC_CMP_SWAP_WITH_SUCCESS dag will be checked/replaced/updated twice. If we don't update the usage of *ALL* of the results in the first round, the DAG for extractvalue might be processed earlier. The GetPromotedInteger() will result in an assertion failure, because its operand (i.e. the success bit of cmpxchg) is not promoted beforehand.
  llvm-svn: 213569
* Revert "[C++11] Add predecessors(BasicBlock *) / successors(BasicBlock *) ↵Duncan P. N. Exon Smith2014-07-214-11/+12
| | | | | | | | | iterator ranges." This reverts commit r213474 (and r213475), which causes a miscompile on a stage2 LTO build. I'll reply on the list in a moment. llvm-svn: 213562
* CodeGen: emit IR-level f16 conversion intrinsics as fptrunc/fpext (Tim Northover, 2014-07-21; 3 files, -5/+21)
  This makes the first stage DAG for @llvm.convert.to.fp16 an fptrunc, and correspondingly @llvm.convert.from.fp16 an fpext. The legalisation path is now uniform, regardless of the input IR:

    fptrunc -> FP_TO_FP16 (if f16 illegal) -> libcall
    fpext   -> FP16_TO_FP (if f16 illegal) -> libcall

  Each target should be able to select the version that best matches its operations and not be required to duplicate patterns for both fptrunc and FP_TO_FP16 (for example). As a result we can remove some redundant AArch64 patterns.
  llvm-svn: 213507
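  Illustration (not part of the patch): the two IR forms that now share one legalisation path; function names are invented.

    declare i16 @llvm.convert.to.fp16(float)

    ; Intrinsic form: now built in the DAG as if it were the fptrunc below.
    define i16 @via_intrinsic(float %f) {
      %h = call i16 @llvm.convert.to.fp16(float %f)
      ret i16 %h
    }

    ; Native form: a truncation straight to the half type.
    define half @via_fptrunc(float %f) {
      %h = fptrunc float %f to half
      ret half %h
    }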
* [SDAG,cleanup] Switch the DAG combiner over to use the spelling 'Worklist' consistently rather than a deeply confusing mixture of 'WorkList' and 'Worklist' (Chandler Carruth, 2014-07-21; 1 file, -179/+179)
  Notably, the very 'WorkList' of the DAG combiner was exposed to target-specific DAG combines under an interface 'AddToWorklist' which was implemented by in turn calling 'AddToWorkList' in the combiner. This has sent me circling with the wrong case in grep one too many times.

  I chose to normalize on 'Worklist' because that one won the grep-vote for llvm/lib/... by a hundred hits or so, and it is used in places relatively "canonical" such as InstCombine's Worklist. Let's all just pick this casing, whether "correct", "good", or "bad", and be consistent...
  llvm-svn: 213506
* [SDAG] Rather than using a narrow test against the one dummy node on the stack, filter all handle nodes from the DAG combiner worklist (Chandler Carruth, 2014-07-21; 1 file, -1/+6)
  This will also handle cases where other handle nodes might be (erroneously) added to the worklist and then cause bugs and explosions when deleted. For example, when running the legalizer within the DAG combiner, there are times when other handle nodes are used and can end up here.
  llvm-svn: 213505
* [DAGCombiner] Improve the shuffle-vector folding logic (Andrea Di Biagio, 2014-07-21; 1 file, -0/+22)
  Canonicalize shuffles according to the rules:
  * shuffle(A, shuffle(A, B)) -> shuffle(shuffle(A,B), A)
  * shuffle(B, shuffle(A, B)) -> shuffle(shuffle(A,B), B)
  * shuffle(B, shuffle(A, Undef)) -> shuffle(shuffle(A, Undef), B)

  This patch helps identify more shuffle pairs that can be combined by reusing the already existing rules in the DAGCombiner. Added new test 'combine-vec-shuffle-5.ll' to verify that the canonicalized shuffles are now folded into a single shuffle node by the DAGCombiner. Added more test cases to 'combine-vec-shuffle-4.ll'.
  llvm-svn: 213504
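  Illustration (not part of the patch): the first rule in IR form, with arbitrary masks chosen so both operands are referenced.

    ; shuffle(A, shuffle(A, B)): canonicalization swaps the outer operands
    ; so the existing DAGCombiner folds can fire on the pair.
    define <4 x i32> @canon(<4 x i32> %a, <4 x i32> %b) {
      %inner = shufflevector <4 x i32> %a, <4 x i32> %b,
                             <4 x i32> <i32 0, i32 5, i32 2, i32 7>
      %outer = shufflevector <4 x i32> %a, <4 x i32> %inner,
                             <4 x i32> <i32 0, i32 4, i32 1, i32 5>
      ret <4 x i32> %outer
    }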
* [DAG] Refactor some logic. No functional change (Andrea Di Biagio, 2014-07-21; 1 file, -0/+21)
  This patch removes the function 'CommuteVectorShuffle' from X86ISelLowering.cpp and moves its logic into SelectionDAG.cpp as method 'getCommutedVectorShuffles'. This refactoring is in preparation for an upcoming change to the DAGCombiner.
  llvm-svn: 213503
* MachineRegionInfo.cpp: Another fix on MachineRegionInfo::MachineRegionInfo::recalculate() to appease msc17 (NAKAMURA Takumi, 2014-07-20; 1 file, -5/+4)
  llvm-svn: 213476
* [C++11] Add predecessors(BasicBlock *) / successors(BasicBlock *) iterator ranges (Manuel Jacob, 2014-07-20; 4 files, -12/+11)
  Summary: This patch introduces two new iterator ranges and updates existing code to use them. No functional change intended.
  Test Plan: All tests (make check-all) still pass.
  Reviewers: dblaikie
  Reviewed By: dblaikie
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D4481
  llvm-svn: 213474
* Fix the -Asserts build, broken since r213456 (NAKAMURA Takumi, 2014-07-20; 1 file, -0/+2)
  llvm-svn: 213465
* Shore up ownership passing of the PBQPBuilder by passing unique_ptrs by value rather than lvalue reference (David Blaikie, 2014-07-19; 1 file, -7/+7)
  Also removes an unnecessary '.release()' that should've been a std::move anyway. (I'm on a hunt for '.release()' calls.)
  llvm-svn: 213464
* Templatify RegionInfo so it works on MachineBasicBlocks (Matt Arsenault, 2014-07-19; 2 files, -0/+138)
  llvm-svn: 213456
* Remove uses of the redundant ".reset(nullptr)" of unique_ptr, in favor of ".reset()" (David Blaikie, 2014-07-19; 2 files, -2/+2)
  It's also possible to just write "= nullptr", but there's some question of whether that's as readable, so I leave it up to authors to pick which they prefer for now. If we want to discuss standardizing on one or the other, we can do that at some point in the future.
  llvm-svn: 213438
* Revert "Reapply "DebugInfo: Ensure that all debug location scope chains from ↵Eric Christopher2014-07-182-8/+4
| | | | | | | | | | instructions within a function, lead to the function itself."""" After a successful build it seems to have come back on a later build. This reverts commit r213391. llvm-svn: 213432
* DebugInfo: Assert that all abstract scopes are subprograms, rather than conditionalizing (David Blaikie, 2014-07-18; 1 file, -2/+1)
  There's nothing else these should ever be...
  llvm-svn: 213417
* Reapply "DebugInfo: Ensure that all debug location scope chains from ↵David Blaikie2014-07-182-4/+8
| | | | | | | | | | | | | | | | | instructions within a function, lead to the function itself.""" Recommits 212776 which was reverted in r212793. This has been committed and recommitted a few times as I try to test it harder and find/fix more issues. The most recent revert was due to an asan bot failure which I can't seem to reproduce locally, though I believe I'm following all the steps the buildbot does. So I'm going to recommit this in the hopes of investigating the failure on the buildbot itself... apologies in advance for the bot noise. If anyone sees failures with this /please/ provide me with any reproductions, etc. llvm-svn: 213391
* ARM: support legalisation of "fptrunc ... to half" operations (Tim Northover, 2014-07-18; 2 files, -0/+24)
  llvm-svn: 213373
* CodeGen: soften f16 type by default instead of marking legal (Tim Northover, 2014-07-18; 1 file, -0/+7)
  Actual support for softening f16 operations is still limited, and can be added when it's needed. But Soften is much closer to being a useful thing to try than keeping it Legal when no registers can actually hold such values.

  Longer term, we probably want something between Soften and Promote semantics for most targets; it'll be more efficient to promote the 4 basic operations to f32 than to libcall them.
  llvm-svn: 213372
* AArch64: Constant fold converting vector setcc results to float (Jim Grosbach, 2014-07-18; 1 file, -0/+26)
  Since the result of a SETCC for AArch64 is 0 or -1 in each lane, we can move unary operations, in this case [su]int_to_fp, through the mask operation and constant fold the operation away. Generally speaking:

    UNARYOP(AND(VECTOR_CMP(x,y), constant)) --> AND(VECTOR_CMP(x,y), constant2)

  where constant2 is UNARYOP(constant). This implements the transform where UNARYOP is [su]int_to_fp.

  For example, consider the simple function:

    define <4 x float> @foo(<4 x float> %val, <4 x float> %test) nounwind {
      %cmp = fcmp oeq <4 x float> %val, %test
      %ext = zext <4 x i1> %cmp to <4 x i32>
      %result = sitofp <4 x i32> %ext to <4 x float>
      ret <4 x float> %result
    }

  Before this change, the code is generated as:

    fcmeq.4s v0, v0, v1
    movi.4s  v1, #0x1        // Integer splat value.
    and.16b  v0, v0, v1      // Mask lanes based on the comparison.
    scvtf.4s v0, v0          // Convert each lane to f32.
    ret

  After, the code is improved to:

    fcmeq.4s v0, v0, v1
    fmov.4s  v1, #1.00000000 // f32 splat value.
    and.16b  v0, v0, v1      // Mask lanes based on the comparison.
    ret

  The scvtf.4s has been constant folded away and the floating point 1.0f vector lanes are materialized directly via fmov.4s.

  Rather than do the folding manually in the target code, teach getNode() in the generic SelectionDAG to handle folding constant operands of vector [su]int_to_fp nodes. It is reasonable (as noted in a FIXME) to do additional constant folding there as well, but I don't have test cases for those operations, so I'm leaving them for another time when it becomes appropriate.
  rdar://17693791
  llvm-svn: 213341
* Revert "[x86] Fold extract_vector_elt of a load into the Load's address ↵Michael J. Spencer2014-07-181-124/+90
| | | | | | | | | computation." There's a bug where this can create cycles in the DAG. It will take a bit to fix, so I'm backing it out for now. llvm-svn: 213339
* CodeGen: generate single libcall for fptrunc -> f16 operations (Tim Northover, 2014-07-17; 4 files, -19/+29)
  Previously we asserted on this code. Currently compiler-rt doesn't actually implement any of these new libcalls, but external help is pretty much the only viable option for LLVM.

  I've followed the much more generic "__truncST2" naming, as opposed to the odd name for f32 -> f16 truncation. This can obviously be changed later, or overridden by any targets that need to.
  llvm-svn: 213252
* CodeGen: extend f16 conversions to permit types > float (Tim Northover, 2014-07-17; 6 files, -21/+48)
  This makes the two intrinsics @llvm.convert.from.fp16 and @llvm.convert.to.fp16 accept types other than simple "float". This is only strictly needed for the truncate operation, since otherwise double rounding occurs and there's no way to represent the strict IEEE conversion. However, for symmetry we allow larger types in the extend too.

  During legalization, we can expand an "fp16_to_double" operation into two extends for convenience, but abort when the truncate isn't legal. A new libcall is probably needed here.

  Even after this commit, various target tweaks are needed to actually use the extended intrinsics. I've put these into separate commits for clarity, so there are no actual tests of f64 conversion here.
  llvm-svn: 213248
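  Illustration (not part of the patch): what the extended intrinsics permit. The ".f64" overload suffix follows LLVM's usual intrinsic-mangling convention; treat the exact spelling as an assumption.

    ; f64 -> f16 expressed as one strict conversion, avoiding the double
    ; rounding of going f64 -> f32 -> f16.
    declare i16 @llvm.convert.to.fp16.f64(double)

    define i16 @trunc_f64(double %d) {
      %h = call i16 @llvm.convert.to.fp16.f64(double %d)
      ret i16 %h
    }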
* Fixed formatting, removed bug reference, renamed testcase (Sanjay Patel, 2014-07-16; 1 file, -3/+4)
  Thanks to Duncan Exon Smith for reviewing and cleanup suggestions.
  llvm-svn: 213205
* [FastISel] Local values shouldn't be alive across an inline asm call with side effects (Juergen Ributzka, 2014-07-16; 1 file, -0/+5)
  This fixes an issue where a local value is defined before and used after an inline asm call with side effects. This fix simply flushes the local value map, which updates the insertion point for the inline asm call to be above any previously defined local values.
  This fixes <rdar://problem/17694203>
  llvm-svn: 213203
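  Illustration (not part of the patch): a sketch of the problematic shape; the asm body and constant are invented.

    ; FastISel materializes constants such as the 42 below as "local
    ; values" at a shared insertion point near the top of the block.
    ; Flushing the local value map at a side-effecting asm call keeps
    ; later materializations from being hoisted above the asm.
    define void @g(i32* %p) {
    entry:
      call void asm sideeffect "", "~{memory}"()
      store i32 42, i32* %p
      ret void
    }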