path: root/llvm/lib
* Address Pete's review comment and define OrigArg on its own line.
  Akira Hatanaka, 2016-09-13 (1 file, -1/+2)

  This is a follow-up to r281419.

  llvm-svn: 281421

* [ObjCARC] Traverse chain downwards to replace uses of argument passed to
  ObjC library call with call return.
  Akira Hatanaka, 2016-09-13 (1 file, -4/+30)

  ARC contraction tries to replace uses of an argument passed to an
  Objective-C library call with the call return value. For example, in the
  following IR, it replaces uses of argument %9, and uses of the values
  discovered traversing the chain upwards (%7 and %8), with the call
  return %10, if they are dominated by the call to
  @objc_autoreleaseReturnValue. This transformation enables code-gen to
  tail-call the call to @objc_autoreleaseReturnValue, which is necessary
  to enable the autorelease return value optimization.

    %7 = tail call i8* @objc_loadWeakRetained(i8** %6)
    %8 = bitcast i8* %7 to %0*
    %9 = bitcast %0* %8 to i8*
    %10 = tail call i8* @objc_autoreleaseReturnValue(i8* %9)
    ret %0* %8

  Since r276727, llvm started removing redundant bitcasts and as a result
  started feeding the following IR to ARC contraction:

    %7 = tail call i8* @objc_loadWeakRetained(i8** %6)
    %8 = bitcast i8* %7 to %0*
    %9 = tail call i8* @objc_autoreleaseReturnValue(i8* %7)
    ret %0* %8

  ARC contraction no longer does the optimization described above, since
  it only traverses the chain upwards and fails to recognize that the
  function return can be replaced by the call return. This commit changes
  ARC contraction to traverse the chain downwards too and replace uses of
  bitcasts with the call return.

  rdar://problem/28011339
  Differential Revision: https://reviews.llvm.org/D24523
  llvm-svn: 281419

* [CodeGen] Fix invalid shift in mul expansion
  Pawel Bylica, 2016-09-13 (1 file, -6/+11)

  Summary: When expanding mul in type legalization, make sure the type
  for the shift amount can actually fit the value. This fixes PR30354
  (https://llvm.org/bugs/show_bug.cgi?id=30354).

  Reviewers: hfinkel, majnemer, RKSimon
  Subscribers: RKSimon, llvm-commits
  Differential Revision: https://reviews.llvm.org/D24478
  llvm-svn: 281403

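  For illustration, a minimal IR sketch (an assumed reduction, not the
  exact case from PR30354) of the kind of wide multiply this expansion
  handles: legalization splits it into word-sized partial products that
  are recombined with shifts, and those shift amounts must be carried in
  a type wide enough to hold them.

    define i256 @wide_mul(i256 %a, i256 %b) {
      %p = mul i256 %a, %b   ; expanded piecewise during type legalization
      ret i256 %p
    }
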
* [DAG] Allow build-to-shuffle combine to combine builds from two wide vectors.
  Michael Kuperstein, 2016-09-13 (1 file, -27/+53)

  This allows us to, in some cases, create a vector_shuffle out of a
  build_vector, when the inputs to the build are extract_elements from
  two different vectors, at least one of which is wider than the output
  (e.g. an <8 x i16> being constructed out of elements from a <16 x i16>
  and an <8 x i16>).

  Differential Revision: https://reviews.llvm.org/D24491
  llvm-svn: 281402

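  A hypothetical IR-level analog of the newly handled case (the combine
  itself matches build_vector/extract_element nodes in the SelectionDAG,
  not IR): a <4 x i16> assembled from lanes of a wider <8 x i16> and a
  <4 x i16>, which can now become a shuffle instead of scalar extracts
  and inserts.

    define <4 x i16> @build(<8 x i16> %wide, <4 x i16> %narrow) {
      %e0 = extractelement <8 x i16> %wide, i32 0
      %e1 = extractelement <8 x i16> %wide, i32 1
      %e2 = extractelement <4 x i16> %narrow, i32 0
      %e3 = extractelement <4 x i16> %narrow, i32 1
      %v0 = insertelement <4 x i16> undef, i16 %e0, i32 0
      %v1 = insertelement <4 x i16> %v0, i16 %e1, i32 1
      %v2 = insertelement <4 x i16> %v1, i16 %e2, i32 2
      %v3 = insertelement <4 x i16> %v2, i16 %e3, i32 3
      ret <4 x i16> %v3
    }
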
* Next set of additional error checks for invalid Mach-O files for bad load
  commands that use the Mach::dyld_info_command type
  Kevin Enderby, 2016-09-13 (1 file, -7/+82)

  This covers the load commands that are currently used in the
  MachOObjectFile constructor, and contains the missing checks for the
  LC_DYLD_INFO and LC_DYLD_INFO_ONLY load commands and the fields of the
  Mach::dyld_info_command type.

  llvm-svn: 281400

* [Hexagon] Better handling of HVX vector lowering
  Krzysztof Parzyszek, 2016-09-13 (2 files, -4/+17)

  - Expand SELECT_CC and BR_CC for vector types.
  - Implement TLI::isShuffleMaskLegal.

  llvm-svn: 281397

* Reapply "InstCombine: Reduce trunc (shl x, K) width."
  Matt Arsenault, 2016-09-13 (1 file, -7/+25)

  This reapplies r272987 with a fix for infinitely looping when the
  truncated value is another shift of a constant.

  llvm-svn: 281379

* AArch64: Cleanup tailcall CC check, enable swiftcc.
  Matthias Braun, 2016-09-13 (2 files, -14/+20)

  Cleanup/change the code that checks for possible tailcall conventions
  to look the same as the one in the X86 target. This makes the
  distinction between calling conventions that can guarantee tailcalls
  and the ones that may tailcall more obvious.

  - Add Swift to the mayTailCall list.
  - PreserveMost seemed to be incorrectly part of the guaranteed tail
    call list; move it to the mayTailCall list.

  llvm-svn: 281376

* AMDGPU: Remove code I think is dead
  Matt Arsenault, 2016-09-13 (1 file, -27/+3)

  As far as I can tell, resolveFrameIndex is supposed to be called with a
  legal offset, so inserting an add shouldn't be necessary.

  llvm-svn: 281372

* AMDGPU: Support commuting a FrameIndex operand
  Matt Arsenault, 2016-09-13 (2 files, -9/+26)

  llvm-svn: 281369

* [LV] Clean up uniform induction variable analysis (NFC)
  Matthew Simpson, 2016-09-13 (1 file, -23/+31)

  llvm-svn: 281368

* [LTO] Don't pass SF_Undefined symbols to the IRmover.
  Davide Italiano, 2016-09-13 (1 file, -0/+2)

  This should fix PR30363.

  llvm-svn: 281366

* [DAGCombiner] Use APInt directly in (shl (zext (srl x, C)), C) combine
  range test
  Simon Pilgrim, 2016-09-13 (1 file, -2/+2)

  To avoid an assertion, we must ensure that the inner shift constant is
  within range before calling ConstantSDNode::getZExtValue(). We already
  know that the outer shift constant is in range.

  Followup to D23007.

  llvm-svn: 281362

* Revert r281336 (and r281337), it caused PR30372.
  Nico Weber, 2016-09-13 (10 files, -253/+352)

  llvm-svn: 281361

* [Myriad]: set LeonCASA processor feature
  Douglas Katzman, 2016-09-13 (1 file, -6/+6)

  llvm-svn: 281359

* [DAGCombiner] Use APInt directly in (shl (ext (shl x, c1)), c2) combine
  Simon Pilgrim, 2016-09-13 (1 file, -11/+15)

  Fix failure to detect out-of-range shift constants leading to an assert
  in ConstantSDNode::getZExtValue().

  Followup to D23007.

  llvm-svn: 281354

* Fix misleading comment for getOrEnforceKnownAlignment
  Matt Arsenault, 2016-09-13 (1 file, -4/+0)

  It does not return 0 to indicate failure, and returns the known
  alignment.

  llvm-svn: 281350

* [ConstantFold] Improve the bitcast folding logic for constant vectors.
  Andrea Di Biagio, 2016-09-13 (1 file, -2/+13)

  The constant folder didn't know how to always fold bitcasts of constant
  integer vectors. In particular, it was unable to handle the case where
  a constant vector had some undef elements, and the resulting (i.e.
  bitcasted) vector type had more elements than the original vector type.

  Example:
    %cast = bitcast <2 x i64> <i64 undef, i64 2> to <4 x i32>

  On a little endian target, %cast could have been folded to:
    <4 x i32> <i32 undef, i32 undef, i32 2, i32 0>

  This patch improves the folding logic by teaching it how to correctly
  propagate undef elements in the folded vector.

  Differential Revision: https://reviews.llvm.org/D24301
  llvm-svn: 281343

* [Hexagon] Clear the flow queue after visiting a single instruction
  Krzysztof Parzyszek, 2016-09-13 (1 file, -0/+5)

  llvm-svn: 281339

* Apply Clang-format to MCAsmParser.cpp. NFC.
  Nirav Dave, 2016-09-13 (1 file, -1/+2)

  llvm-svn: 281337

* Defer asm errors to post-statement failure
  Nirav Dave, 2016-09-13 (10 files, -352/+252)

  Recommitting after fixing AsmParser initialization.

  Allow errors to be deferred and emitted as part of cleanup to simplify
  and shorten Assembly parser code. This will allow error messages to be
  emitted in helper functions and be modified by the caller, which has
  better context.

  As part of this, many minor cleanups to the parser:
  - Unify parser cleanup on error.
  - Add a workaround for incorrect return values in ParseDirective
    instances.
  - Tighten checks on error-signifying return values for parser functions
    and fix in-tree TargetParsers to be more consistent with the changes.
  - Fix AArch64 test cases checking for spurious error messages that are
    now fixed.

  These changes should be backwards compatible with current Target
  Parsers so long as the error status is correctly returned in
  appropriate functions.

  Reviewers: rnk, majnemer
  Subscribers: aemerson, jyknight, llvm-commits
  Differential Revision: https://reviews.llvm.org/D24047
  llvm-svn: 281336

* [LoopInterchange] Minor refactor. NFC.
  Chad Rosier, 2016-09-13 (1 file, -12/+11)

  llvm-svn: 281334

* Don't use else if after return. Tidy comments. NFC.
  Chad Rosier, 2016-09-13 (1 file, -5/+3)

  llvm-svn: 281331

* Typo. NFC.
  Chad Rosier, 2016-09-13 (1 file, -3/+3)

  llvm-svn: 281330

* [LoopInterchange] Tidy up and remove unnecessary dyn_casts. NFC.
  Chad Rosier, 2016-09-13 (1 file, -13/+12)

  llvm-svn: 281328

* Revert "[ARM] Promote small global constants to constant pools"James Molloy2016-09-134-144/+1
| | | | | | This reverts commit r281314. Speculatively revert as it's possible this caused linker errors: http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/19656 llvm-svn: 281327
* [ARM] Add ".code 32" to functions in the ARM instruction setPablo Barrio2016-09-131-1/+2
| | | | | | | | | | | | | | | | | | | Before, only Thumb functions were marked as ".code 16". These ".code x" directives are effective until the next directive of its kind is encountered. Therefore, in code with interleaved ARM and Thumb functions, it was possible to declare a function as ARM and end up with a Thumb function after assembly. A test has been added. An existing test has also been fixed to take this change into account. Reviewers: aschwaighofer, t.p.northover, jmolloy, rengolin Subscribers: aemerson, rengolin, llvm-commits Differential Revision: https://reviews.llvm.org/D24337 llvm-svn: 281324
* [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
  James Molloy, 2016-09-13 (2 files, -5/+138)

  For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some
  more efficient instruction selection if the bitmask is one consecutive
  sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

  1) If the bitmask touches the LSB, then we can remove all the upper
     bits and set the flags by doing one LSLS.
  2) If the bitmask touches the MSB, then we can remove all the lower
     bits and set the flags with one LSRS.
  3) If the bitmask has popcount == 1 (only one set bit), we can shift
     that bit into the sign bit with one LSLS and change the condition
     query from NE/EQ to MI/PL (we could also implement this by shifting
     into the carry bit and branching on BCC/BCS).
  4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper
     and lower zero bits of the mask.

  Cases 1-3 require only one 16-bit instruction and can elide the CMP.
  Case 4 requires two 16-bit instructions but can elide the CMP and
  doesn't require materializing a complex immediate, so is also a win.

  llvm-svn: 281323

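  In IR terms, the pattern reaching ISel looks roughly like the
  hand-written sketch below (not taken from the commit's tests). With the
  contiguous mask 0x3F touching the LSB, case 1 applies and a single LSLS
  can set the flags.

    define i32 @masked_cmp(i32 %x, i32 %a, i32 %b) {
      %and = and i32 %x, 63        ; 0x3F: contiguous set bits from the LSB
      %cmp = icmp eq i32 %and, 0   ; (CMPZ (AND x, 63), 0) in the DAG
      %sel = select i1 %cmp, i32 %a, i32 %b
      ret i32 %sel
    }
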
* Enable simplify libcalls for ARM PCS
  Sam Parker, 2016-09-13 (1 file, -3/+35)

  Teach SimplifyLibcalls that it can treat functions annotated with apcs,
  aapcs or aapcs_vfp like normal C functions if they only take and return
  integer or pointer values, and the target is not iOS.

  Differential Revision: https://reviews.llvm.org/D24453
  llvm-svn: 281322

* [ARM] Support ldr.w in pseudo instruction ldr rd,=immediate
  Peter Smith, 2016-09-13 (2 files, -0/+7)

  The changes made in r269352, r269353 and r269354 to support the
  transformation of ldr rd,=immediate to mov introduced a regression from
  3.8: ldr.w rd, =immediate was no longer supported. This change puts
  support back in for ldr.w by means of a t2InstAlias for the .w form.
  The .w is ignored in ARM state and propagated to the ldr in Thumb2.

  llvm-svn: 281319

* [ARM] Promote small global constants to constant pools
  James Molloy, 2016-09-13 (4 files, -1/+144)

  If a constant is unnamed_addr and is only used within one function, we
  can save on the code size and runtime cost of an indirection by
  changing the global's storage to inside the constant pool. For example,
  instead of:

    ldr r0, .CPI0
    bl printf
    bx lr
  .CPI0: &format_string
  format_string: .asciz "hello, world!\n"

  We can emit:

    adr r0, .CPI0
    bl printf
    bx lr
  .CPI0: .asciz "hello, world!\n"

  This can cause significant code size savings when many small strings
  are used in one function (4 bytes per string).

  llvm-svn: 281314

* Remove MVT::i1 xor instruction before SELECT. (Performance improvement.)
  Ayman Musa, 2016-09-13 (1 file, -0/+16)

  Differential Revision: https://reviews.llvm.org/D23764
  llvm-svn: 281308

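  Presumably the shape involved is an i1 xor-with-true feeding a select,
  which can be dropped by swapping the select operands; an assumed IR
  analog (the commit itself operates on MVT::i1 DAG nodes):

    define i32 @inv_sel(i1 %c, i32 %a, i32 %b) {
      %n = xor i1 %c, true               ; inverted condition
      %r = select i1 %n, i32 %a, i32 %b  ; same as: select i1 %c, i32 %b, i32 %a
      ret i32 %r
    }
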
* Revert of r281304 as it is causing build bot failures in Hexagon hwloop
  regression tests
  Sjoerd Meijer, 2016-09-13 (5 files, -52/+49)

  These tests pass locally; will be investigating where these differences
  come from.

  llvm-svn: 281306

* This adds a new field isAdd to MCInstrDesc.
  Sjoerd Meijer, 2016-09-13 (5 files, -49/+52)

  The ARM and Hexagon instruction descriptions now tag add instructions,
  and the Hexagon backend is using this to identify loop induction
  statements.

  Patch by Sam Parker and Sjoerd Meijer.

  Differential Revision: https://reviews.llvm.org/D23601
  llvm-svn: 281304

* AVX-512: Fix for PR28175 - Scalar code optimization.
  Elena Demikhovsky, 2016-09-13 (2 files, -7/+17)

  Optimized (truncate (assertzext x) to i1) and anyext i1 to i8/16/32.
  Optimization of these patterns is one more step towards i1 optimization
  on AVX-512.

  Differential Revision: https://reviews.llvm.org/D24456
  llvm-svn: 281302

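  An IR-level approximation of the first pattern (the actual nodes are
  SelectionDAG's AssertZext/truncate, which have no direct IR syntax): a
  boolean round-tripped through i8, where the truncate/extend pair around
  a value already known to be zero-extended can be simplified away.

    define i8 @bool_roundtrip(i8 %x) {
      %t = trunc i8 %x to i1
      %e = zext i1 %t to i8   ; redundant if %x is known zero-extended from i1
      ret i8 %e
    }
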
* [AArch64] Support stackmap/patchpoint in getInstSizeInBytes
  Diana Picus, 2016-09-13 (1 file, -4/+21)

  We currently return 4 for stackmaps and patchpoints, which is very
  optimistic and can in rare cases cause the branch relaxation pass to
  fail to relax certain branches. This patch causes getInstSizeInBytes to
  return a pessimistic estimate of the size as the number of bytes
  requested in the stackmap/patchpoint. In the future, we could provide a
  more accurate estimate by sharing some of the logic in
  AArch64::LowerSTACKMAP/PATCHPOINT.

  Fixes part of https://llvm.org/bugs/show_bug.cgi?id=28750

  Differential Revision: https://reviews.llvm.org/D24073
  llvm-svn: 281301

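  For reference, the byte count comes from the intrinsic's explicit size
  operand; in the assumed example below, getInstSizeInBytes would now
  report the 20 bytes requested for the resulting PATCHPOINT rather than
  an optimistic 4.

    declare void @llvm.experimental.patchpoint.void(i64, i32, i8*, i32, ...)

    define void @f() {
      ; operands: id = 1, num-bytes = 20, target = null, num-call-args = 0
      call void (i64, i32, i8*, i32, ...)
          @llvm.experimental.patchpoint.void(i64 1, i32 20, i8* null, i32 0)
      ret void
    }
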
* [X86] Remove masked shufpd/shufps intrinsics and autoupgrade to native
  vector shuffles.
  Craig Topper, 2016-09-13 (2 files, -12/+26)

  They were removed from clang previously but accidentally left in the
  backend.

  llvm-svn: 281300

* Revert "[Support][CommandLine] Add cl::getRegisteredSubcommands()"Zachary Turner2016-09-131-11/+0
| | | | | | | This reverts r281290, as it breaks unit tests. http://lab.llvm.org:8011/builders/clang-x86-windows-msvc2015/builds/303 llvm-svn: 281292
* [Support][CommandLine] Add cl::getRegisteredSubcommands()
  Dean Michael Berris, 2016-09-13 (1 file, -0/+11)

  This should allow users of the library to get a range to iterate
  through all the subcommands that are registered to the global parser.
  This allows users to define subcommands in libraries that self-register
  to have dispatch done at a different stage (like main). It allows for
  writing code like the following:

    for (auto *S : cl::getRegisteredSubcommands()) {
      if (*S) {
        // Dispatch on S->getName().
      }
    }

  This change also contains tests that show this usage pattern.

  Reviewers: zturner, dblaikie, echristo
  Subscribers: llvm-commits, mehdi_amini
  Differential Revision: https://reviews.llvm.org/D24489
  llvm-svn: 281290

* DebugInfo: New metadata representation for global variables.
  Peter Collingbourne, 2016-09-13 (18 files, -105/+153)

  This patch reverses the edge from DIGlobalVariable to GlobalVariable.
  This will allow us to more easily preserve debug info metadata when
  manipulating global variables.

  Fixes PR30362. A program for upgrading test cases is attached to that
  bug.

  Differential Revision: http://reviews.llvm.org/D20147
  llvm-svn: 281284

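  A sketch of the assumed new shape (abbreviated; exact fields at this
  revision may differ): the global now carries a !dbg attachment naming
  its DIGlobalVariable, instead of the DIGlobalVariable pointing back at
  the global through a variable: field.

    @g = global i32 0, !dbg !0

    !llvm.dbg.cu = !{!1}
    !llvm.module.flags = !{!5}
    !0 = distinct !DIGlobalVariable(name: "g", scope: !1, file: !2, line: 1,
                                    type: !4, isLocal: false, isDefinition: true)
    !1 = distinct !DICompileUnit(language: DW_LANG_C99, file: !2,
                                 emissionKind: FullDebug, globals: !3)
    !2 = !DIFile(filename: "t.c", directory: "/")
    !3 = !{!0}
    !4 = !DIBasicType(name: "int", size: 32, encoding: DW_ATE_signed)
    !5 = !{i32 2, !"Debug Info Version", i32 3}
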
* [DAG] Refactor BUILD_VECTOR combine to make it easier to extend. NFCI.
  Michael Kuperstein, 2016-09-13 (1 file, -123/+156)

  This should make it easier to add cases that we currently don't cover,
  like supporting more kinds of type mismatches and more than 2 input
  vectors.

  llvm-svn: 281283

* X86: Conditional tail calls should not have isBarrier = 1
  Hans Wennborg, 2016-09-13 (1 file, -18/+31)

  That confuses e.g. machine basic block placement, which then doesn't
  realize that control can fall through a block that ends with a
  conditional tail call. Instead, isBranch=1 should be set.

  Also, mark EFLAGS as used by these instructions.

  llvm-svn: 281281

* Temporarily Revert "[MC] Defer asm errors to post-statement failure" as
  it's causing errors on the sanitizer bots.
  Eric Christopher, 2016-09-13 (10 files, -251/+352)

  This reverts commit r281249.

  llvm-svn: 281280

* [LVI] Complete the abstraction of the cache layer [NFCI]
  Philip Reames, 2016-09-12 (1 file, -72/+94)

  Convert the previously introduced is-a relationship between the
  LVICache and LVIImpl classes into a has-a relationship and hide all the
  implementation details of the cache from the lazy query layer.

  The only slightly concerning change here is removing the addition of a
  queried block into the SeenBlock set in LVIImpl::getBlockValue. As far
  as I can tell, this was effectively dead code. I think it *used* to be
  the case that getCachedValueInfo wasn't const and might end up
  inserting elements in the cache during lookup. That's no longer true
  and hasn't been for a while. I did fix up the const usage to make that
  more obvious.

  llvm-svn: 281272

* [LVI] Sink a couple more cache manipulation routines into the cache
  itself [NFCI]
  Philip Reames, 2016-09-12 (1 file, -36/+45)

  The only interesting bit here is the refactor of the handle callback,
  and even that's pretty straightforward.

  llvm-svn: 281267

* [LVI] Abstract out the actual cache logic [NFCI]
  Philip Reames, 2016-09-12 (1 file, -89/+97)

  Separate the caching logic from the implementation of the lazy
  analysis. For the moment, the lazy analysis impl has an is-a
  relationship with the cache; this will change to a has-a relationship
  shortly. This was done as two steps merely to keep the changes simple
  and the diff understandable.

  llvm-svn: 281266

* Revert r281215, it caused PR30358.
  Nico Weber, 2016-09-12 (2 files, -139/+5)

  llvm-svn: 281263

* Fix the bug introduced in r281252.
  Dehao Chen, 2016-09-12 (1 file, -1/+1)

  llvm-svn: 281253

* Lower consecutive select instructions correctly.
  Dehao Chen, 2016-09-12 (1 file, -23/+75)

  Summary: If consecutive select instructions are lowered separately in
  CGP, it will introduce redundant condition checks and branches that
  cannot be removed by later optimization phases. This patch lowers all
  consecutive select instructions at the same time to avoid inefficient
  code as demonstrated in https://llvm.org/bugs/show_bug.cgi?id=29095.

  Reviewers: davidxl
  Subscribers: vsk, llvm-commits
  Differential Revision: https://reviews.llvm.org/D24147
  llvm-svn: 281252

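  For instance (a hand-written example in the spirit of PR29095, not the
  exact reproducer): two selects sharing one condition. Lowered
  separately, each would get its own compare-and-branch diamond; lowered
  together, they need only one.

    define i32 @two_sels(i1 %c, i32 %a, i32 %b, i32 %d, i32 %e) {
      %s1 = select i1 %c, i32 %a, i32 %b
      %s2 = select i1 %c, i32 %d, i32 %e   ; same condition, lowered together
      %r = add i32 %s1, %s2
      ret i32 %r
    }
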
* [MC] Defer asm errors to post-statement failure
  Nirav Dave, 2016-09-12 (10 files, -352/+251)

  Allow errors to be deferred and emitted as part of cleanup to simplify
  and shorten Assembly parser code. This will allow error messages to be
  emitted in helper functions and be modified by the caller, which has
  better context.

  As part of this, many minor cleanups to the parser:
  - Unify parser cleanup on error.
  - Add a workaround for incorrect return values in ParseDirective
    instances.
  - Tighten checks on error-signifying return values for parser functions
    and fix in-tree TargetParsers to be more consistent with the changes.
  - Fix AArch64 test cases checking for spurious error messages that are
    now fixed.

  These changes should be backwards compatible with current Target
  Parsers so long as the error status is correctly returned in
  appropriate functions.

  Reviewers: rnk, majnemer
  Subscribers: aemerson, jyknight, llvm-commits
  Differential Revision: https://reviews.llvm.org/D24047
  llvm-svn: 281249