path: root/llvm/lib
* Add saving and restoring of r30 to the prologue and epilogue, respectively (Justin Hibbits, 2015-01-08, 2 files, -0/+17)
    Summary: The PIC additions didn't update the prologue and epilogue code to
    save and restore r30 (PIC base register). This does that.

    Test Plan: Tests updated.

    Reviewers: hfinkel
    Reviewed By: hfinkel
    Subscribers: llvm-commits
    Differential Revision: http://reviews.llvm.org/D6876

    llvm-svn: 225450

* Explicitly handle LinkOnceODRAutoHideLinkage. NFC. We already have a test. (Rafael Espindola, 2015-01-08, 1 file, -0/+2)
    llvm-svn: 225449

* Update naming style and clang-format. NFC. (Rafael Espindola, 2015-01-08, 1 file, -17/+30)
    llvm-svn: 225448

* Fix large stack alignment codegen for ARM and Thumb2 targets (Kristof Beyls, 2015-01-08, 2 files, -22/+84)
    This partially fixes PR13007 (ARM CodeGen fails with large stack
    alignment): it fixes ARM and Thumb2 targets, but not Thumb1, as it seems
    stack alignment for Thumb1 targets has never been supported at all.

    Producing an aligned stack pointer is done by zeroing out the lower bits
    of the stack pointer. The BIC instruction was used for this. However, the
    immediate field of the BIC instruction can only encode an immediate that
    zeroes out at most the 8 lower bits. When a larger alignment is
    requested, a BIC instruction cannot be used; llvm was silently producing
    incorrect code in this case.

    This commit fixes code generation for large stack alignments by using the
    BFC instruction instead, when it is available. When it is not, it uses
    two instructions: a right shift followed by a left shift to zero out the
    lower bits.

    The lowering of ARM::Int_eh_sjlj_dispatchsetup still has code that
    unconditionally uses BIC to realign the stack pointer, so it very likely
    has the same problem. However, I wasn't able to produce a test case for
    that. This commit adds an assert so that the compiler will fail the
    assert instead of silently generating wrong code if this is ever reached.

    llvm-svn: 225446

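    For illustration, a minimal C++ sketch of the shift-based fallback (not
    the committed code, which emits ARM machine instructions):

        #include <cstdint>
        // Clear the low Log2Align bits of SP, i.e. SP & ~(Align - 1), without
        // requiring the mask to be encodable as a BIC immediate.
        uint32_t alignDown(uint32_t SP, unsigned Log2Align) {
          return (SP >> Log2Align) << Log2Align;
        }
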
* R600/SI: Remove SIISelLowering::legalizeOperands() (Tom Stellard, 2015-01-08, 2 files, -176/+1)
    Its functionality has been replaced by calling
    SIInstrInfo::legalizeOperands() from
    SIISelLowering::AdjustInstrPostInstrSelection() and running the
    SIFoldOperands and SIShrinkInstructions passes.

    llvm-svn: 225445

* Masked Load/Store - fixed a bug in type legalization. (Elena Demikhovsky, 2015-01-08, 3 files, -3/+107)
    llvm-svn: 225441

* Fix include ordering, NFC. (Michael Kuperstein, 2015-01-08, 1 file, -1/+1)
    llvm-svn: 225439

* [X86] Don't try to generate direct calls to TLS globals (Michael Kuperstein, 2015-01-08, 1 file, -1/+2)
    The call lowering assumes that if the callee is a global, we want to emit
    a direct call. This is correct for regular globals, but not for TLS ones.

    Differential Revision: http://reviews.llvm.org/D6862

    llvm-svn: 225438

* Move SPAdj logic from PEI into the targets (NFC) (Michael Kuperstein, 2015-01-08, 2 files, -11/+34)
    PEI tries to keep track of how much starting or ending a call sequence
    adjusts the stack pointer, so that it can resolve frame-index references.
    Currently it takes a very simplistic view of how SP adjustments are done:
    both FrameStartOpcode and FrameDestroyOpcode adjust it exactly by the
    amount written in their first argument.

    This view is in fact incorrect for some targets (e.g. due to stack
    re-alignment, or because a target may want to adjust the stack pointer in
    multiple steps). However, that doesn't cause breakage, because most
    targets (the only in-tree exception appears to be 32-bit ARM) rely on
    being able to simplify the call frame pseudo-instructions earlier, so
    this code is never hit.

    Moving the computation into TargetInstrInfo allows targets to override
    the way the adjustment is computed if they need to have a non-zero SPAdj.

    Differential Revision: http://reviews.llvm.org/D6863

    llvm-svn: 225437

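    A sketch of the hook this suggests (name, parameters, and sign convention
    assumed for illustration, not verbatim from the patch):

        // Default: read the adjustment off the call-frame pseudo's first
        // immediate. A target that realigns the stack or adjusts SP in
        // several steps would override this computation.
        int getSPAdjust(const MachineInstr &MI, unsigned SetupOpc,
                        unsigned DestroyOpc) {
          if (MI.getOpcode() == SetupOpc)
            return MI.getOperand(0).getImm();
          if (MI.getOpcode() == DestroyOpc)
            return -MI.getOperand(0).getImm();
          return 0;
        }
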
* [X86] Don't print 'dword ptr' or 'qword ptr' on the operand to some of the LEA variants in Intel syntax. (Craig Topper, 2015-01-08, 4 files, -4/+14)
    The memory operand is inherently unsized.

    llvm-svn: 225432

* Revert "Reapply: Teach SROA how to update debug info for fragmented variables."Adrian Prantl2015-01-081-60/+8
| | | | | | | This reverts commit r225379 while investigating an assertion failure reported by Alexey. llvm-svn: 225424
* [RegAllocGreedy] Introduce a late pass to repair broken hints. (Quentin Colombet, 2015-01-08, 3 files, -2/+212)
    A broken hint is a copy where both ends are assigned different colors.
    When a variable gets evicted in the neighborhood of such copies, it is
    likely we can reconcile some of them.

    ** Context **

    Copies are inserted during register allocation via splitting. These split
    points are required to relax the constraints on the allocation problem.
    When such a point is inserted, both ends of the copy do not share the
    same color with respect to the current allocation problem. When variables
    get evicted, the allocation problem becomes different and some split
    points may not be required anymore. However, the related variables may
    already have been colored. This usually shows up in the assembly with
    patterns like this:

        def A
        ...
        save A to B
        def A
        use A
        restore A from B
        ...
        use B

    Whereas we could simply have done:

        def B
        ...
        def A
        use A
        ...
        use B

    ** Proposed Solution **

    A variable having a broken hint is marked for late recoloring if and only
    if selecting a register for it evicts another variable. Indeed, if no
    eviction happens, it is pointless to look for recoloring opportunities,
    as the situation is the same as in the initial allocation problem, where
    we had to break the hint.

    Finally, when everything has been allocated, we look for recoloring
    opportunities for all the identified candidates. The recoloring is
    performed very late so that it can rely on accurate copy costs (all
    involved variables are allocated). The recoloring is simple, unlike
    last-chance recoloring: it propagates the color of the broken hint to all
    its copy-related variables. If the color is available for them, the
    recoloring uses it; otherwise it gives up on that hint, even if a more
    complex coloring would have worked.

    The recoloring happens only if it is profitable. The profitability is
    evaluated using the expected frequency of the copies of the currently
    recolored variable with a) its current color and b) the target color. If
    a) is greater than or equal to b), then it is profitable and the
    recoloring happens.

    ** Example **

    Consider the following example:

        BB1:
          a =
          b =
        BB2:
          ...
          = b
          = a

    Let us assume b gets split:

        BB1:
          a =
          b =
        BB2:
          c = b
          ...
          d = c
          = d
          = a

    Because of how the allocation works, b, c, and d may be assigned
    different colors. Now, if a gets evicted to make room for c, assuming b
    and d were assigned to something different than a, we end up with:

        BB1:
          a =
          st a, SpillSlot
          b =
        BB2:
          c = b
          ...
          d = c
          = d
          e = ld SpillSlot
          = e

    It is likely that we can assign the same register to b, c, and d, getting
    rid of 2 copies.

    ** Performance **

    Both ARM64 and x86_64 show performance improvements of up to 3% for the
    llvm-testsuite + externals with Os and O3. There are a few regressions
    too, which come from the (in)accuracy of the block frequency estimate.

    <rdar://problem/18312047>

    llvm-svn: 225422

* [SelectionDAG] Allow targets to specify legality of extloads' result type (in addition to the memory type) (Ahmed Bougacha, 2015-01-08, 20 files, -169/+230)
    The *LoadExt* legalization handling used to only have one type, the
    memory type. This forced users to assume that as long as the extload for
    the memory type was declared legal, and the result type was legal, the
    whole extload was legal.

    However, this isn't always the case. For instance, on X86, with AVX,
    this is legal:

        v4i32 load, zext from v4i8

    but this isn't:

        v4i64 load, zext from v4i8

    Whereas v4i64 is (arguably) legal, even without AVX2.

    Note that the same thing was done a while ago for truncstores (r46140),
    but I assume no one needed it yet for extloads, so here we go.

    Calls to getLoadExtAction were changed to add the value type, found
    manually in the surrounding code. Calls to setLoadExtAction were
    mechanically changed, by wrapping the call in a loop, to match previous
    behavior. The loop iterates over the MVT subrange corresponding to the
    memory type (FP vectors, etc.). I also pulled neighboring
    setTruncStoreActions into some of the loops; those shouldn't make a
    difference, as the additional types are illegal. (e.g., i128->i1
    truncstores on PPC.)

    No functional change intended.

    Differential Revision: http://reviews.llvm.org/D6532

    llvm-svn: 225421

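    An illustration of the resulting call shape, using the types from the
    example above (illustrative lines, not a verbatim hunk from the patch):

        // Legality is now keyed on (result type, memory type) pairs:
        setLoadExtAction(ISD::ZEXTLOAD, MVT::v4i32, MVT::v4i8, Legal);
        setLoadExtAction(ISD::ZEXTLOAD, MVT::v4i64, MVT::v4i8, Expand);
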
* Remove empty statement. No functionality change. (Nick Lewycky, 2015-01-08, 1 file, -1/+0)
    llvm-svn: 225420

* X86: VZeroUpperInserter: shortcut should not trigger if we have any function live-ins. (Matthias Braun, 2015-01-08, 1 file, -8/+12)
    llvm-svn: 225419

* RegisterCoalescer: Do not remove IMPLICIT_DEFs if they are required for subranges. (Matthias Braun, 2015-01-08, 1 file, -1/+7)
    The register coalescer used to remove IMPLICIT_DEFs when they are covered
    by the main range anyway. With subregister liveness tracking we can't do
    that anymore in places where the IMPLICIT_DEF is required as the
    beginning of a subregister live range.

    llvm-svn: 225416

* RegisterCoalescer: Fix valuesIdentical() in some subrange merge cases. (Matthias Braun, 2015-01-07, 1 file, -90/+81)
    I got confused and assumed SrcIdx/DstIdx of the CoalescerPair is a
    subregister index in SrcReg/DstReg, but they are actually subregister
    indices of the coalesced register that get you back to SrcReg/DstReg when
    applied. Fixed the bug, improved comments and simplified code
    accordingly.

    Testcase by Tom Stellard!

    llvm-svn: 225415

* LiveInterval: Implement feedback by Quentin Colombet. (Matthias Braun, 2015-01-07, 1 file, -25/+32)
    llvm-svn: 225413

* [GC] improve testing around gc.relocate and fix a test (Philip Reames, 2015-01-07, 1 file, -9/+32)
    Patch by: Ramkumar Ramachandra <artagnon@gmail.com>

    "This patch started out as an exploration of gc.relocate, and an attempt
    to write a simple test in call-lowering. I then noticed that the
    arguments of gc.relocate were not checked fully, so I went in and fixed a
    few things. Finally, the most important outcome of this patch is that my
    new error handling code caught a bug in a callsite in stackmap-format."

    Differential Revision: http://reviews.llvm.org/D6824

    llvm-svn: 225412

* R600/SI: Commute instructions to enable more folding opportunities (Tom Stellard, 2015-01-07, 2 files, -19/+51)
    llvm-svn: 225410

* IR: Add MDNode::getDistinct() (Duncan P. N. Exon Smith, 2015-01-07, 1 file, -2/+12)
    Allow distinct `MDNode`s to be explicitly created. There's no way (yet)
    of representing their distinctness in assembly/bitcode, however, so this
    still isn't first-class.

    Part of PR22111.

    llvm-svn: 225406

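    Illustrative use of the new API (a sketch, not taken from the commit):
    distinct nodes bypass uniquing, so identical operands still yield
    different nodes.

        LLVMContext Ctx;
        Metadata *Ops[] = {MDString::get(Ctx, "flag")};
        MDNode *A = MDNode::getDistinct(Ctx, Ops);
        MDNode *B = MDNode::getDistinct(Ctx, Ops);
        assert(A != B && "distinct nodes are never merged by uniquing");
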
* R600/SI: Only fold immediates that have one use (Tom Stellard, 2015-01-07, 1 file, -1/+8)
    Folding the same immediate into multiple instructions will increase
    program size, which can hurt performance.

    llvm-svn: 225405

* Update a comment. (Adrian Prantl, 2015-01-07, 1 file, -0/+2)
    llvm-svn: 225399

* Linker: Don't use MDNode::replaceOperandWith() (Duncan P. N. Exon Smith, 2015-01-07, 2 files, -11/+26)
    `MDNode::replaceOperandWith()` changes all instances of metadata. Stop
    using it when linking module flags, since (due to uniquing) the flag
    values could be used by other metadata. Instead, use the new API
    `NamedMDNode::setOperand()` to update the reference directly.

    llvm-svn: 225397

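    A sketch of the fix's shape (the variables DstM, Ctx, I, and NewOps are
    assumed names for illustration):

        // Update the named metadata's slot directly rather than mutating a
        // uniqued MDNode that other metadata may share.
        NamedMDNode *Flags = DstM.getNamedMetadata("llvm.module.flags");
        Flags->setOperand(I, MDNode::get(Ctx, NewOps));
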
* [CodeGen] Use MVT iterator_ranges in legality loops. NFC intended. (Ahmed Bougacha, 2015-01-07, 7 files, -103/+59)
    A few loops do trickier things than just iterating on an MVT subset, so
    I'll leave them be for now. Follow-up of r225387.

    llvm-svn: 225392

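    The kind of rewrite this enables (range name per r225387; an illustrative
    line, not a specific hunk from the patch):

        for (MVT VT : MVT::integer_valuetypes())
          setLoadExtAction(ISD::SEXTLOAD, VT, MVT::i1, Promote);
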
* R600/SI: Remove VReg_32 register class (Tom Stellard, 2015-01-07, 13 files, -154/+152)
    Use the VGPR_32 register class instead. These two register classes were
    identical, and having separate classes was causing
    SIInstrInfo::isLegalOperands() to be overly conservative in some cases.

    This change is necessary to prevent future patches from missing a
    folding opportunity in fneg-fabs.ll.

    llvm-svn: 225382

* More FMA folding opportunities. (Olivier Sallenave, 2015-01-07, 1 file, -1/+133)
    llvm-svn: 225380

* Reapply: Teach SROA how to update debug info for fragmented variables. (Adrian Prantl, 2015-01-07, 1 file, -8/+60)
    The two buildbot failures were addressed in LLVM r225378 and CFE r225359.

    This reapplies commit r225272 without modifications.

    llvm-svn: 225379

* Debug info: Allow aggregate types to be described by constants. (Adrian Prantl, 2015-01-07, 1 file, -2/+5)
    llvm-svn: 225378

* [Hexagon] Fix for r225372: the USR register is not fully complete. (Colin LeMahieu, 2015-01-07, 1 file, -12/+12)
    Removing Uses = [USR] maintains existing functionality for old
    instructions without encodings.

    llvm-svn: 225377

* [Hexagon] Adding floating point classification and creation. (Colin LeMahieu, 2015-01-07, 1 file, -0/+45)
    llvm-svn: 225374

* R600/SI: Add a V_MOV_B64 pseudo instruction (Tom Stellard, 2015-01-07, 3 files, -0/+38)
    This is used to simplify the SIFoldOperands pass and make it easier to
    fold immediates.

    llvm-svn: 225373

* [Hexagon] Adding encodings for v5 floating point instructions. (Colin LeMahieu, 2015-01-07, 1 file, -0/+326)
    llvm-svn: 225372

* [Hexagon] Adding encodings for popcount, fastcorner, and dword asr with rounding. (Colin LeMahieu, 2015-01-07, 2 files, -1/+62)
    llvm-svn: 225371

* R600/SI: Teach SIFoldOperands to split 64-bit constants when folding (Tom Stellard, 2015-01-07, 3 files, -25/+51)
    This allows folding of sequences like:

        s[0:1] = s_mov_b64 4
        v_add_i32 v0, s0, v0
        v_addc_u32 v1, s1, v1

    into:

        v_add_i32 v0, 4, v0
        v_add_i32 v1, 0, v1

    llvm-svn: 225369

* Test commit (Olivier Sallenave, 2015-01-07, 1 file, -0/+1)
    llvm-svn: 225368

* [X86] Fix 512->256 typo in comments. NFC. (Ahmed Bougacha, 2015-01-07, 1 file, -2/+2)
    llvm-svn: 225367

* Add a missing file from r225365 (Philip Reames, 2015-01-07, 1 file, -0/+54)
    llvm-svn: 225366

* Introduce an example statepoint GC strategy (Philip Reames, 2015-01-07, 4 files, -0/+50)
    This change includes the most basic possible GCStrategy for a GC which is
    using the statepoint lowering code. At the moment, this GCStrategy
    doesn't really do much - aside from actually generating correct
    stackmaps, that is - but I went ahead and added a few extra correctness
    checks as proof of concept. It's mostly here to provide documentation on
    how to write one, and to provide a point for various optimization
    legality hooks I'd like to add going forward. (For context, see the
    TODOs in InstCombine around gc.relocate.)

    Most of the validation logic added here as proof of concept will soon
    move into the Verifier. That move is dependent on
    http://reviews.llvm.org/D6811

    There was discussion in the review thread about addrspace(1) being
    reserved for something. I'm going to follow up on a separate llvmdev
    thread. If needed, I'll update all the code at once.

    Note that I am deliberately not making a GCStrategy required to use
    gc.statepoints with this change. I want to give folks out of tree -
    including myself - a chance to migrate. In a week or two, I'll make
    having a GCStrategy be required for gc.statepoints. To this end, I added
    the gc tag to one of the test cases but not others.

    Differential Revision: http://reviews.llvm.org/D6808

    llvm-svn: 225365

* X86: Allow the stack probe size to be configurable per function (David Majnemer, 2015-01-07, 1 file, -3/+9)
    LLVM emits stack probes on Windows targets to ensure that the stack is
    correctly accessed. However, the amount of stack allocated before
    emitting such a probe is hardcoded to 4096. It is desirable to have this
    be configurable so that a function might opt out of stack probes. Our
    level of granularity is at the function level instead of, say, the
    module level to permit proper generation of code after LTO.

    Patch by Andrew H!

    N.B. The inliner needs to be updated to properly consider what happens
    after inlining a function with a specific stack-probe-size into another
    function with a different stack-probe-size.

    llvm-svn: 225360

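    A hedged sketch of the per-function opt-out this implies; the attribute
    spelling "stack-probe-size" is inferred from the N.B. above, not quoted
    from the patch:

        // Request a larger probe interval for this function via a string
        // function attribute (spelling assumed).
        F.addFnAttr("stack-probe-size", "8192");
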
* R600/SI: Refactor SIFoldOperands to simplify immediate folding (Tom Stellard, 2015-01-07, 1 file, -25/+54)
    This will make a future patch much less intrusive.

    llvm-svn: 225358

* [X86] Teach FCOPYSIGN lowering to recognize constant magnitudes. (Ahmed Bougacha, 2015-01-07, 1 file, -6/+19)
    For code like:

        float foo(float x) { return copysign(1.0, x); }

    We used to generate:

        andps <-0.000000e+00,0,0,0>, %xmm0
        movss <1.000000e+00>, %xmm1
        andps <nan>, %xmm1
        orps %xmm0, %xmm1

    Basically doing an abs(1.0f) in the two middle instructions.

    We now generate:

        andps <-0.000000e+00,0,0,0>, %xmm0
        orps <1.000000e+00,0,0,0>, %xmm0

    Builds on cleanups r223415, r223542.

    rdar://19049548
    Differential Revision: http://reviews.llvm.org/D6555

    llvm-svn: 225357

* New method SDep::isNormalMemoryOrBarrier() in ScheduleDAGInstrs.cpp. (Jonas Paulsson, 2015-01-07, 1 file, -5/+6)
    Used to iterate over previously added memory dependencies in
    adjustChainDeps() and iterateChainSucc().

    SDep::isCtrl() was previously used in these places, but it also accepts
    anti and output edges. The code may be worse if those are followed,
    because MIsNeedChainEdge() will conservatively return true, since a
    non-memory instruction has no memory operands, and a false chain dep
    will be added. It is also unnecessary, since all memory accesses of
    interest will be reached by memory dependencies, and there is a budget
    limit for the number of edges traversed.

    This problem was found on an out-of-tree target with alias analysis
    enabled. No test case for an in-tree target has been found.

    Reviewed by Hal Finkel.

    llvm-svn: 225351

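    A plausible shape for the new predicate (a sketch; the actual definition
    lives in the SDep class):

        // True for real memory dependencies and barriers, but not for the
        // anti/output edges that isCtrl() also accepts.
        bool isNormalMemoryOrBarrier() const {
          return isNormalMemory() || isBarrier();
        }
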
* Fix typos in comment and option help texts. (Jonas Paulsson, 2015-01-07, 1 file, -3/+3)
    For -enable-aa-sched-mi and -use-tbaa-in-sched-mi.

    llvm-svn: 225350

* Fix regression in r225266. (Asiri Rathnayake, 2015-01-07, 1 file, -1/+1)
    The change in r225266 was reviewed under D6722, but the commit had a
    typo, causing some MCHammer failures. This patch fixes it.

    Change-Id: I573efcff25003af7478ac02548ebbe929fc7f5fd
    llvm-svn: 225347

* [X86] Merge a switch statement inside a default case of another switch statement on the same variable. (Craig Topper, 2015-01-07, 1 file, -160/+155)
    There was no additional code in the default case, so this should be no
    functional change.

    llvm-svn: 225345

* [X86] Don't mark the shift by 1 instructions as isConvertibleToThreeAddress. (Craig Topper, 2015-01-07, 1 file, -1/+1)
    There is no handling for them.

    llvm-svn: 225344

* [X86] Remove some unused TYPE enums from the disassembler. (Craig Topper, 2015-01-07, 3 files, -18/+1)
    llvm-svn: 225343

* Revert r225165 and r225169 (Karthik Bhat, 2015-01-07, 1 file, -39/+0)
    Even though gcc produces similar instructions, as Owen pointed out, the
    two patterns aren't equivalent in the case where the original
    subtraction could have caused an overflow. Reverting the same.

    llvm-svn: 225341

* [PM] Fix a pretty nasty bug where the new pass manager would invalidate passes too many times. (Chandler Carruth, 2015-01-07, 2 files, -18/+84)
    I think this is actually the issue that someone raised with me at the
    developer's meeting and in an email, but we never really got to the
    bottom of it. Having all the testing utilities made it much easier to
    dig down and uncover the core issue.

    When a pass manager is running many passes over a single function, we
    need it to invalidate the analyses between each run so that they can be
    re-computed as needed. We also need to track the intersection of
    preserved higher-level analyses across all the passes that we run (for
    example, if there is one module analysis which all the function analyses
    preserve, we want to track that and propagate it).

    Unfortunately, this interacted poorly with any enclosing pass adaptor
    between two IR units. It would see the intersection of preserved
    analyses, and need to invalidate any other analyses, but some of the
    un-preserved analyses might have already been invalidated *and
    recomputed*! We would fail to propagate the fact that the analysis had
    already been invalidated.

    The solution to this struck me as really strange at first, but the more
    I thought about it, the more natural it seemed. After a nice discussion
    with Duncan about it on IRC, it seemed even nicer. The idea is that
    invalidating an analysis *causes* it to be preserved! Preserving the
    lack of result is trivial. If it is recomputed, great. Until something
    *else* invalidates it again, we're good.

    The consequence of this is that the invalidate methods on the analysis
    manager which operate over many passes now consume their
    PreservedAnalyses object, update it to "preserve" every analysis pass to
    which it delivers an invalidation (regardless of whether the pass
    chooses to be removed, or handles the invalidation itself by updating
    itself). Then we return this augmented set from the invalidate routine,
    letting the pass manager take the result and use the intersection of
    *that* across each pass run to compute the final preserved set. This
    accounts for all the places where the early invalidation of an analysis
    has already "preserved" it for a future run.

    I've beefed up the testing and adjusted the assertions to show that we
    no longer repeatedly invalidate or compute the analyses across nested
    pass managers.

    llvm-svn: 225333

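    A sketch of the flow this describes (names assumed; not the exact API):

        // The invalidate routine consumes the preserved set, marks every
        // analysis it actually invalidated as "preserved" going forward
        // (the stale result is gone, so there is nothing left to drop),
        // and returns the augmented set for the manager to intersect.
        PreservedAnalyses PA = Pass.run(F, AM);
        PA = AM.invalidate(F, std::move(PA));
        TotalPA.intersect(std::move(PA));
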