path: root/llvm/lib
Commit message (Author, Date; Files changed, Lines -/+)
* Fix incorrect kill flags in fastisel. (Pete Cooper, 2015-05-06; 1 file, -2/+6)
  If called twice in the same BB on the same constant, FastISel::fastEmit_ri_ was marking the materialized vreg as killed on each use, instead of only the last use. Change this to only mark the last use as killed by making earlier uses check if the vreg is already used elsewhere.
  llvm-svn: 236650
* [x86] Fix register class of folded load index reg. (Pete Cooper, 2015-05-06; 1 file, -0/+17)
  When folding a load into another instruction, we need to fix the class of the index register. Otherwise, it could be something like GR64 instead of GR64_NOSP and would fail the machine verifier.
  llvm-svn: 236644
* [SanitizerCoverage] Fix a couple of typos. NFC. (Alexey Samsonov, 2015-05-06; 1 file, -7/+7)
  llvm-svn: 236643
* MC: Skip names of temporary symbols in object streamer (Duncan P. N. Exon Smith, 2015-05-06; 2 files, -0/+6)
  Don't create names for temporary symbols when using an object streamer. The names never make it to the output anyway.
  From the starting point of r236629, my heap profile says this drops peak memory usage from 1100 MB to 1058 MB for CodeGen of `verify-uselistorder`, a savings of almost 4% on peak memory, and removes `StringMap<bool, BumpPtrAllocator...>` from the profile entirely.
  (I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`; see r236629 for details.)
  llvm-svn: 236642
* CodeGen: move over-zealous assert into actual if statement. (Tim Northover, 2015-05-06; 1 file, -3/+2)
  It's quite possible to encounter an insertvalue instruction that's more deeply nested than the value we're looking for, but when that happens we really mustn't compare beyond the end of the index array. Since I couldn't see any guarantees about what comparisons std::equal makes, we probably need to directly check the size beforehand. In practice, I suspect most std::equal implementations would probably bail early, which would be OK. But just in case...
  rdar://20834485
  llvm-svn: 236635
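  A minimal, standalone illustration of the hazard (illustrative names, not the LLVM code itself): std::equal(first1, last1, first2) reads as many elements from the second sequence as the first one contains, so the shorter sequence has to be length-checked up front.

      #include <algorithm>
      #include <vector>

      // Standalone sketch: compare index prefixes only after verifying that
      // the second sequence is long enough for std::equal to read safely.
      bool indicesMatch(const std::vector<unsigned> &Wanted,
                        const std::vector<unsigned> &Found) {
        if (Found.size() < Wanted.size())
          return false; // std::equal would otherwise read past the end of Found
        return std::equal(Wanted.begin(), Wanted.end(), Found.begin());
      }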
* DwarfDebug: Emit number of bytes in .debug_loc entry directly (Duncan P. N. Exon Smith, 2015-05-06; 1 file, -6/+3)
  Emit the number of bytes in a `.debug_loc` entry directly. The old code created temp labels (expensive), emitted the difference between them, and then emitted one on each side of the relevant bytes.
  (I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc` (the optimized version of ld64's `-save-temps` when linking the `verify-uselistorder` executable in an LTO bootstrap). I've hacked `MCContext::Allocate()` to just call `malloc()` instead of using the `BumpPtrAllocator` so that the heap profile is easier to read. As far as peak memory is concerned, `MCContext::Allocate()` is equivalent to a leak, since it only gets freed at process teardown. In my heap profile, this patch drops memory usage of `DwarfDebug::emitDebugLoc()` from 132.56 MB (11.4%) down to 29.86 MB (2.7%) at peak memory. Some of that must be noise from `SmallVector` (or other) allocations -- peak memory only dropped from 1160 MB down to 1100 MB -- but this nevertheless shaves 5% off the top.)
  llvm-svn: 236629
* Implement `createSanitizerCtor`, common helper function for all sanitizers (Ismail Pazarbasi, 2015-05-06; 1 file, -0/+21)
  Summary: This helper function creates a ctor function, which calls the sanitizer's init function with the given arguments. This constructor is then expected to be added to the module's ctors. The patch helps unify how sanitizer constructor functions are created, and how init functions are called across all sanitizers.
  Reviewers: kcc, samsonov
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D8777
  llvm-svn: 236627
* [WinEH] Improve fatal error message about failed demotion (Reid Kleckner, 2015-05-06; 1 file, -1/+6)
  llvm-svn: 236626
* [SelectionDAG] Delete SelectionDAGBuilder::removeValue. NFC. (Sanjoy Das, 2015-05-06; 1 file, -6/+0)
  SelectionDAGBuilder::removeValue is dead now, after rL236563.
  llvm-svn: 236618
* Allow 0-weight branches in BranchProbabilityInfo. (Diego Novillo, 2015-05-06; 2 files, -6/+12)
  Summary: When computing branch weights in BPI, we used to disallow branches with weight 0. This is a minor nuisance, because a branch with weight 0 is different from "don't have information". In the context of instrumentation, it may mean "never executed"; in the context of sampling, it means "never or seldom executed".
  In allowing 0-weight branches, I ran into issues with the switch expansion code in selection DAG. It is currently hardwired to not handle branches with weight 0. To maintain the current behaviour, I changed it to use 1 when it finds 0, but perhaps the algorithm needs changes to tolerate branches with weight zero.
  Reviewers: hansw
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D9533
  llvm-svn: 236617
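  A hedged, standalone sketch of the stop-gap mentioned above (the real change lives in the switch expansion code; the helper name here is illustrative): a weight of 0 is bumped to 1 until the algorithm can tolerate zero-weight branches.

      #include <cstdint>

      // Standalone sketch: the switch expansion still assumes non-zero
      // weights, so 0 ("never or seldom executed") is treated as 1.
      uint32_t weightForSwitchExpansion(uint32_t BranchWeight) {
        return BranchWeight == 0 ? 1 : BranchWeight;
      }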
* Add missing dereferenceable_or_null getters (Sanjoy Das, 2015-05-06; 3 files, -0/+19)
  Summary: Add missing dereferenceable_or_null getters required for the http://reviews.llvm.org/D9253 change. Separated from the D9253 review.
  Patch by Artur Pilipenko!
  Reviewers: sanjoy
  Reviewed By: sanjoy
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D9499
  llvm-svn: 236615
* [X86] Disable loop unrolling in loop vectorization pass when VF is 1. (Wei Mi, 2015-05-06; 11 files, -12/+18)
  The patch disables unrolling in the loop vectorization pass when VF==1 on the x86 architecture, by setting MaxInterleaveFactor to 1. Unrolling in the loop vectorization pass may introduce the cost of overflow checks, memory boundary checks and extra prologue/epilogue code when the regular unroller unrolls the loop another time. Disabling it when VF==1 removes this unnecessary cost on x86. The same can be done for other platforms after verifying that interleaving/memory bound checking is not perf critical on those platforms.
  Differential Revision: http://reviews.llvm.org/D9515
  llvm-svn: 236613
* Add ChangeTo* to MachineOperand for symbols (Matt Arsenault, 2015-05-06; 1 file, -0/+22)
  llvm-svn: 236612
* [ARM] Fast-Isel was incorrectly selecting <2 x double> adds. (Pete Cooper, 2015-05-06; 1 file, -0/+4)
  With neon enabled, we reach SelectBinaryFPOp and are able to get registers for a <2 x double> add. However, we shouldn't actually attempt arithmetic on it, as ARMISelLowering says "v2f64 is legal so that QR subregs can be extracted as f64 elements, but neither Neon nor VFP support any arithmetic operations on it."
  This commit disables SelectBinaryFPOp for any vector types. There's already a FIXME to try to handle neon. Doing so would require fixing the conditional 'VT == MVT::f64 || VT == MVT::i64', which isn't safe for vectors.
  llvm-svn: 236609
* [PPC64LE] Adjust vector splats during VSX swap optimization (Bill Schmidt, 2015-05-06; 1 file, -7/+40)
  The initial code drop for VSX swap optimization permitted the optimization only when all operations in a web of related computation are lane-insensitive. For some lane-sensitive operations, we can still permit the optimization provided that we make adjustments to those operations. This patch adds special handling for vector splats so that their presence doesn't kill the optimization.
  Vector splats are lane-sensitive since they identify by number a vector element to be used as the source of a splat. When swap optimizations take place, the desired vector element will move to the opposite doubleword of the quadword vector. We thus replace the index I by (I + N/2) % N, where N is the number of elements in the vector.
  A new test case is added to test that swap optimization succeeds when vector splats are present, and that the proper input element is used as the source of the splat.
  An ancillary change removes SH_BUILDVEC as one of the kinds of special handling that may be required by VSX swap optimization. From experience with GCC, I had expected to need some modifications for vector build operations, but I did not find that to be the case.
  llvm-svn: 236606
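  The index adjustment is easy to state in isolation; a standalone sketch with illustrative names (not the actual PPC code):

      // Standalone sketch: after a swap, element I of an N-element vector
      // sits in the opposite doubleword, so a splat of element I becomes a
      // splat of (I + N/2) % N. For v4i32 (N == 4): 0->2, 1->3, 2->0, 3->1.
      unsigned adjustSplatIndexForSwap(unsigned I, unsigned N) {
        return (I + N / 2) % N;
      }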
* Reformat. (NAKAMURA Takumi, 2015-05-06; 2 files, -6/+5)
  llvm-svn: 236601
* Revert r236546, "propagate IR-level fast-math-flags to DAG nodes (NFC)" (NAKAMURA Takumi, 2015-05-06; 5 files, -67/+63)
  It caused undefined behavior.
  llvm-svn: 236600
* [ARM] generate VMAXNM/VMINNM for a compare followed by a select, in safe math mode too (Artyom Skrobov, 2015-05-06; 1 file, -25/+100)
  llvm-svn: 236590
* SelectionDAG: Handle out-of-bounds index in extract vector element (Pawel Bylica, 2015-05-06; 1 file, -0/+4)
  Summary: This patch correctly handles the undef case of an EXTRACT_VECTOR_ELT node, where the element index is constant and not less than the vector size.
  Test Plan: CodeGen test for X86 included. Also one incorrect regression test fixed.
  Reviewers: qcolombet, chandlerc, hfinkel
  Reviewed By: hfinkel
  Subscribers: hfinkel, llvm-commits
  Differential Revision: http://reviews.llvm.org/D9250
  llvm-svn: 236584
* [DomTree] verifyDomTree to unconditionally perform DT verification (Adam Nemet, 2015-05-06; 2 files, -7/+6)
  I folded the check for the flag -verify-dom-info into the only caller where I think it is supposed to be checked: verifyAnalysis. (The idea of the flag is to enable this expensive verification in verifyPreservedAnalysis.)
  I'm assuming that when manually scheduling the verification pass with -passes=verify<domtree>, we do want to perform the verification.
  llvm-svn: 236575
* [ARM][FastISel] Use TST #1 instead of CMP #0 for select. (Ahmed Bougacha, 2015-05-06; 1 file, -4/+4)
  Since r234249, i1 values are sext instead of zext; because of that, doing "CMP rN, #0; IT EQ/NE" isn't correct anymore. "TST #1" is the conservatively correct alternative (the tradeoff being that it doesn't have a 16-bit encoding), so use that instead.
  llvm-svn: 236569
* [IRBuilder] Fix indentation. NFC. (Sanjoy Das, 2015-05-06; 1 file, -20/+19)
  Whitespace-only change.
  llvm-svn: 236567
* [Statepoint] Clean up StatepointLowering: symbolic constants. (Sanjoy Das, 2015-05-06; 1 file, -2/+3)
  For accessors in the `Statepoint` class, use symbolic constants for offsets into the argument vector instead of literals. This makes the code intent clearer and simpler to change.
  llvm-svn: 236566
* [Statepoint] Clean up Statepoint.h: accessor names. (Sanjoy Das, 2015-05-06; 7 files, -23/+25)
  Use getFoo()-style accessors consistently, along with some other naming changes.
  llvm-svn: 236564
* [StatepointLowering] Don't create temporary instructions. NFCI. (Sanjoy Das, 2015-05-06; 1 file, -73/+69)
  Summary: Instead of creating a temporary call instruction and lowering that, use SelectionDAGBuilder::lowerCallOperands.
  Reviewers: reames
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D9480
  llvm-svn: 236563
* [WinEH] Reset WinEHPrepare::SEHExceptionCodeSlot when we're done. (Ahmed Bougacha, 2015-05-06; 1 file, -0/+1)
  This caused a use-after-free on test/CodeGen/X86/win32-eh.ll.
  No functional change intended.
  llvm-svn: 236561
* InstrProf: Strip filename prefixes from the names we display for coverage (Justin Bogner, 2015-05-05; 1 file, -1/+15)
  For consumers of coverage data, any filename prefixes we store in the profile data are just noise. Strip this prefix if it exists.
  llvm-svn: 236558
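  A hedged, standalone sketch of the stripping, assuming coverage names for local symbols use a "<filename>:<symbol>" form (the separator and format are an assumption here, not taken from the patch):

      #include <string>

      // Standalone sketch (assumed "<filename>:<symbol>" naming): keep only
      // the part after the first ':' when a filename prefix is present.
      std::string stripCoveragePrefix(const std::string &Name) {
        std::string::size_type Pos = Name.find(':');
        return Pos == std::string::npos ? Name : Name.substr(Pos + 1);
      }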
* [X86 fast-isel] Constrain the index reg class to not include SP. (Pete Cooper, 2015-05-05; 1 file, -6/+23)
  The index reg on instructions with complex address modes is a GPR64_NOSP. Constrain it to appease the machine verifier.
  llvm-svn: 236557
* [SelectionDAG] Make an argument optional in RFV::getCopyToRegs. NFC. (Sanjoy Das, 2015-05-05; 1 file, -5/+6)
  Summary: We default the value argument to nullptr. The only use of the value is in diagnosePossiblyInvalidConstraint and that seems to be resilient to it being nullptr.
  Reviewers: atrick, reames
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D9479
  llvm-svn: 236555
* [SelectionDAG] Move RegsForValue into SelectionDAGBuilder.h. NFC. (Sanjoy Das, 2015-05-05; 2 files, -85/+90)
  Summary: The exported class will be used in a later change, in StatepointLowering.cpp. It is still internal to SelectionDAG (not exported via include/).
  Reviewers: reames, atrick
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D9478
  llvm-svn: 236554
* [SelectionDAG] Pass explicit type to lowerCallOperands. NFC. (Sanjoy Das, 2015-05-05; 2 files, -5/+6)
  Summary: Currently this does not change anything, but it will be used in a later change to StatepointLowering.cpp.
  Reviewers: reames, atrick
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D9477
  llvm-svn: 236553
* [StatepointLowering] Rename variable, NFC. (Sanjoy Das, 2015-05-05; 1 file, -3/+3)
  Rename LoweredArgs to LoweredMetaArgs to clarify intent.
  llvm-svn: 236552
* Fix IfConverter to handle regmask machine operands. (Pete Cooper, 2015-05-05; 1 file, -1/+14)
  Note, this is a recommit of r236515 after fixing an error in r236514. The buildbot ran fast enough that it picked up r236514 prior to r236515 and threw an error. r236515 itself ran 'make check' without errors.
  Original commit message follows:
  A regmask (typically seen on a call) clobbers the set of registers it lists. The IfConverter, in UpdatePredRedefs, was handling register defs, but not regmasks. These are slightly different to a def in that we need to add both an implicit use and def to appease the machine verifier. Otherwise, uses after the if converted call could think they are reading an undefined register.
  Reviewed by Matthias Braun and Quentin Colombet.
  llvm-svn: 236550
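  A hedged sketch of the idea, not the actual patch: it uses the real MachineOperand regmask accessors, but the helper itself is illustrative.

      #include "llvm/CodeGen/MachineInstr.h"
      #include "llvm/CodeGen/MachineOperand.h"
      using namespace llvm;

      // Illustrative helper: a regmask operand (typically on a call) clobbers
      // every register not preserved by the mask, so such registers must be
      // treated like defs when updating predicate liveness.
      static bool regmaskClobbersReg(const MachineInstr &MI, unsigned Reg) {
        for (const MachineOperand &MO : MI.operands())
          if (MO.isRegMask() && MO.clobbersPhysReg(Reg))
            return true;
        return false;
      }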
* [lib/Fuzzer] on crash print the contents of the crashy input as base64 (Kostya Serebryany, 2015-05-05; 3 files, -0/+8)
  llvm-svn: 236548
* propagate IR-level fast-math-flags to DAG nodes (NFC) (Sanjay Patel, 2015-05-05; 5 files, -63/+67)
  This patch adds the minimum plumbing necessary to use IR-level fast-math-flags (FMF) in the backend without actually using them for anything yet. This is a follow-on to http://reviews.llvm.org/rL235997, which split the existing nsw / nuw / exact flags and FMF into their own struct.
  There are 2 structural changes here:
  1. The main diff is that we're preparing to extend the optimization flags to affect more than just binary SDNodes. Eg, IR intrinsics ( https://llvm.org/bugs/show_bug.cgi?id=21290 ) or non-binop nodes that don't even exist in IR such as FMA, FNEG, etc.
  2. The other change is that we're actually copying the FP fast-math-flags from the IR instructions to SDNodes.
  Differential Revision: http://reviews.llvm.org/D8900
  llvm-svn: 236546
* use range-based for-loop; NFC (Sanjay Patel, 2015-05-05; 1 file, -2/+2)
  llvm-svn: 236544
* [Inliner] Discard empty COMDAT groups (David Majnemer, 2015-05-05; 1 file, -11/+51)
  COMDAT groups which have been rendered unused because of inlining are discardable if we can prove that we've made the group empty.
  This fixes PR22285.
  llvm-svn: 236539
* Refactor UpdatePredRedefs and StepForward to avoid duplication. NFC (Pete Cooper, 2015-05-05; 2 files, -31/+29)
  Note, this is a reapplication of r236515 with a fix to not assert on non-register operands, but instead only handle them until the subsequent commit. Original commit message follows.
  The code was basically the same here already. Just added an out parameter for a vector of seen defs so that UpdatePredRedefs can call StepForward first, then do its own post processing on the seen defs. Will be used in the next commit to also handle regmasks.
  llvm-svn: 236538
* Thumb2SizeReduction: Check the correct set of registers for LDMIA. (Peter Collingbourne, 2015-05-05; 1 file, -9/+4)
  The register set for LDMIA begins at offset 3, not 4. We were previously missing the short encoding of this instruction in the case where the base register was the first register in the register set.
  Also clean up some dead code:
  - The isARMLowRegister check is redundant with what VerifyLowRegs does; replace with an assert.
  - Remove handling of LDMDB instruction, which has no short encoding (and does not appear in ReduceTable).
  Differential Revision: http://reviews.llvm.org/D9485
  llvm-svn: 236535
* [DAGCombiner] Account for getVectorIdxTy() when narrowing vector load (Ulrich Weigand, 2015-05-05; 1 file, -2/+3)
  This patch makes ReplaceExtractVectorEltOfLoadWithNarrowedLoad convert the element number from getVectorIdxTy() to PtrTy before doing pointer arithmetic on it. This is needed on z, where element numbers are i32 but pointers are i64.
  Original patch by Richard Sandiford.
  llvm-svn: 236530
* [DAGCombiner] Fix ReplaceExtractVectorEltOfLoadWithNarrowedLoad for BE (Ulrich Weigand, 2015-05-05; 1 file, -7/+0)
  For little-endian, the function would convert (extract_vector_elt (load X), Y) to X + Y*sizeof(elt). For big-endian it would instead use X + sizeof(vec) - Y*sizeof(elt). The big-endian case wasn't right since vector index order always follows memory/array order, even for big-endian. (Note that the current handling has to be wrong for Y==0 since it would access beyond the end of the vector.)
  Original patch by Richard Sandiford.
  llvm-svn: 236529
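  The rule can be stated standalone (illustrative names, not the DAGCombiner code): the byte offset of an extracted element follows memory/array order regardless of endianness.

      #include <cstddef>

      // Standalone sketch: element Y of an in-memory vector starts at
      // Y * sizeof(element) on both little- and big-endian targets; using
      // VecSize - Y * EltSize instead is wrong (and out of bounds for Y == 0).
      size_t extractedElementOffset(size_t Y, size_t EltSize) {
        return Y * EltSize;
      }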
* [LegalizeVectorTypes] Allow single loads and stores for more short vectors (Ulrich Weigand, 2015-05-05; 1 file, -1/+4)
  When lowering a load or store for TypeWidenVector, the type legalizer would use a single load or store if the associated integer type was legal. E.g. it would load a v4i8 as an i32 if i32 was legal.
  This patch extends that behavior to promoted integers as well as legal ones. If the integer type for the full vector width is TypePromoteInteger, the element type is going to be TypePromoteInteger too, and it's still better to use a single promoting load or truncating store rather than N individual promoting loads or truncating stores. E.g. if you have a v2i8 on a target where i16 is promoted to i32, it's better to load the v2i8 as an i16 rather than load both i8s individually.
  Original patch by Richard Sandiford.
  llvm-svn: 236528
* [SystemZ] Add vector intrinsics (Ulrich Weigand, 2015-05-05; 5 files, -153/+497)
  This adds intrinsics to allow access to all of the z13 vector instructions. Note that instructions whose semantics can be described by standard LLVM IR do not get any intrinsics.
  For each instruction whose semantics *cannot* (fully) be described, we define an LLVM IR target-specific intrinsic that directly maps to this instruction.
  For instructions that also set the condition code, the LLVM IR intrinsic returns the post-instruction CC value as a second result. Instruction selection will attempt to detect code that compares that CC value against constants and use the condition code directly instead.
  Based on a patch by Richard Sandiford.
  llvm-svn: 236527
* [SystemZ] Mark v1i128 and v1f128 as unsupported (Ulrich Weigand, 2015-05-05; 1 file, -0/+32)
  The ABI specifies that <1 x i128> and <1 x fp128> are supposed to be passed in vector registers. We do not yet support those types, and some infrastructure is missing before we can do so.
  In order to prevent accidentally generating code violating the ABI, this patch adds checks to detect those types and error out if user code attempts to use them.
  llvm-svn: 236526
* [SystemZ] Handle sub-128 vectors (Ulrich Weigand, 2015-05-05; 6 files, -27/+155)
  The ABI allows sub-128 vectors to be passed and returned in registers, with the vector occupying the upper part of a register. We therefore want to legalize those types by widening the vector rather than promoting the elements.
  The patch includes some simple tests for sub-128 vectors and also tests that we can recognize various pack sequences, some of which use sub-128 vectors as temporary results. One of these forms is based on the pack sequences generated by llvmpipe when no intrinsics are used.
  Signed unpacks are recognized as BUILD_VECTORs whose elements are individually sign-extended. Unsigned unpacks can have the equivalent form with zero extension, but they also occur as shuffles in which some elements are zero.
  Based on a patch by Richard Sandiford.
  llvm-svn: 236525
* [SystemZ] Add CodeGen support for scalar f64 ops in vector registers (Ulrich Weigand, 2015-05-05; 8 files, -23/+261)
  The z13 vector facility includes some instructions that operate only on the high f64 in a v2f64, effectively extending the FP register set from 16 to 32 registers. It's still better to use the old instructions if the operands happen to fit though, since the older instructions have a shorter encoding.
  Based on a patch by Richard Sandiford.
  llvm-svn: 236524
* [SystemZ] Add CodeGen support for v4f32 (Ulrich Weigand, 2015-05-05; 8 files, -20/+231)
  The architecture doesn't really have any native v4f32 operations except v4f32->v2f64 and v2f64->v4f32 conversions, with only half of the v4f32 elements being used. Even so, using vector registers for <4 x float> and scalarising individual operations is much better than generating completely scalar code, since there's much less register pressure. It's also more efficient to do v4f32 comparisons by extending to 2 v2f64s, comparing those, then packing the result.
  This particularly helps with llvmpipe.
  Based on a patch by Richard Sandiford.
  llvm-svn: 236523
* [SystemZ] Add CodeGen support for v2f64 (Ulrich Weigand, 2015-05-05; 6 files, -37/+342)
  This adds ABI and CodeGen support for the v2f64 type, which is natively supported by z13 instructions.
  Based on a patch by Richard Sandiford.
  llvm-svn: 236522
* [SystemZ] Add CodeGen support for integer vector types (Ulrich Weigand, 2015-05-05; 14 files, -146/+2138)
  This is the first of a series of patches to add CodeGen support exploiting the instructions of the z13 vector facility. This patch adds support for the native integer vector types (v16i8, v8i16, v4i32, v2i64).
  When the vector facility is present, we default to the new vector ABI. This is characterized by two major differences:
  - Vector types are passed/returned in vector registers (except for unnamed arguments of a variable-argument list function).
  - Vector types are at most 8-byte aligned.
  The reason for the choice of 8-byte vector alignment is that the hardware is able to efficiently load vectors at 8-byte alignment, and the ABI only guarantees 8-byte alignment of the stack pointer, so requiring any higher alignment for vectors would require dynamic stack re-alignment code.
  However, for compatibility with old code that may use vector types, when *not* using the vector facility, the old alignment rules (vector types are naturally aligned) remain in use.
  These alignment rules are not only implemented at the C language level (implemented in clang), but also at the LLVM IR level. This is done by selecting a different DataLayout string depending on whether the vector ABI is in effect or not.
  Based on a patch by Richard Sandiford.
  llvm-svn: 236521
* [SystemZ] Add z13 vector facility and MC support (Ulrich Weigand, 2015-05-05; 16 files, -116/+2075)
  This patch adds support for the z13 processor type and its vector facility, and adds MC support for all new instructions provided by that facility.
  Apart from defining the new instructions, the main changes are:
  - Adding VR128, VR64 and VR32 register classes.
  - Making FP64 a subclass of VR64 and FP32 a subclass of VR32.
  - Adding a D(V,B) addressing mode for scatter/gather operations.
  - Adding 1-, 2-, and 3-bit immediate operands for some 4-bit fields. Until now all immediate operands have been the same width as the underlying field (hence the assert->return change in decode[SU]ImmOperand).
  In addition, sys::getHostCPUName is extended to detect running natively on a z13 machine.
  Based on a patch by Richard Sandiford.
  llvm-svn: 236520