path: root/llvm/lib
...
* Revert 224119 "This patch recognizes (+ (+ v0, v1) (+ v2, v3)), reorders them for bundling into vector of loads, and vectorizes it." (Suyog Sarda, 2014-12-17, 1 file, -24/+2)
  This was re-ordering floating point data types resulting in mismatch in output.
  llvm-svn: 224424
* Strength reduce intrinsics with overflow into regular arithmetic operations if possible. (Erik Eckstein, 2014-12-17, 3 files, -0/+61)
  Some intrinsics, like s/uadd.with.overflow and umul.with.overflow, are already strength reduced. This change adds other arithmetic intrinsics: s/usub.with.overflow, smul.with.overflow. It completes the work on PR20194.
  llvm-svn: 224417
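  (Illustration, not part of the commit message: a minimal sketch of this kind of strength reduction, using a case where the overflow bit is trivially known; the exact set of folds the patch implements may differ.)
    declare { i32, i1 } @llvm.usub.with.overflow.i32(i32, i32)

    ; Before: subtracting zero can never overflow unsigned.
    %res = call { i32, i1 } @llvm.usub.with.overflow.i32(i32 %a, i32 0)
    %val = extractvalue { i32, i1 } %res, 0
    %ovf = extractvalue { i32, i1 } %res, 1

    ; After strength reduction: the intrinsic becomes regular arithmetic,
    ; and the overflow flag folds to the constant false.
    %val = sub i32 %a, 0
    ; %ovf is replaced everywhere by false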
* Revert "Linker: Drop superseded subprograms"Duncan P. N. Exon Smith2014-12-171-54/+0
| | | | | | | | | | | | | | This reverts commit r224389. Based on feedback from the bots, the assertion seems to be going off *more* often, not less (previously I was just seeing it in an internal bootstrap, now it's happening in public builds too). http://lab.llvm.org:8080/green/job/clang-stage2-configure-Rlto_build/936/ http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap/builds/5325 Reverting in order to investigate. llvm-svn: 224416
* Add parsing of 'foo@local'. (Justin Hibbits, 2014-12-17, 1 file, -0/+1)
  Summary: Currently, it supports generating, but not parsing, this expression. Test added as well.
  Test Plan: New test added, no regressions due to this.
  Reviewers: hfinkel
  Reviewed By: hfinkel
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D6672
  llvm-svn: 224415
* Remove a debugging assert. (Rafael Espindola, 2014-12-17, 1 file, -1/+0)
  Sorry for the noise, I have no idea how it survived to the final version.
  llvm-svn: 224414
* Fix the windows build. (Rafael Espindola, 2014-12-17, 1 file, -0/+2)
  llvm-svn: 224412
* Refactor and simplify the code reading /proc/cpuinfo. NFC. (Rafael Espindola, 2014-12-17, 1 file, -47/+32)
  llvm-svn: 224410
* RegisterCoalescer: Sprinkle some const modifiers. (Matthias Braun, 2014-12-17, 1 file, -11/+12)
  llvm-svn: 224409
* Delete debugging cruft that crept in with r223802. (Nick Lewycky, 2014-12-17, 1 file, -3/+0)
  llvm-svn: 224407
* InstSimplify: shl nsw/nuw undef, %V -> undef (David Majnemer, 2014-12-17, 1 file, -13/+7)
  We can always choose a value for undef which might cause %V to shift out an important bit, except for one case: when %V is zero. However, shl behaves like an identity function when the right hand side is zero.
  llvm-svn: 224405
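  (Illustration, not part of the commit message: a minimal example of the new fold. With the nuw or nsw flag present, the whole shift simplifies away.)
    ; Before
    %r = shl nuw i32 undef, %V
    ; After InstSimplify
    ; %r -> undef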
* Make ValueEnumerator::print use OS for metadata too. Noticed by inspection. (Nick Lewycky, 2014-12-17, 1 file, -2/+1)
  llvm-svn: 224404
* [CodeGenPrepare] Reapply r224351 with a fix for the assertion failure: (Quentin Colombet, 2014-12-17, 3 files, -45/+262)
  The type promotion helper does not support vector types, so it does not kick in in such cases.

  Original commit message:
  [CodeGenPrepare] Move sign/zero extensions near loads using type promotion.

  This patch extends the optimization in CodeGenPrepare that moves a sign/zero extension near a load when the target can combine them. The optimization may promote any operations between the extension and the load to make that possible. Although this optimization may be beneficial for all targets, in particular AArch64, this is enabled for X86 only as I have not benchmarked it for other targets yet.

  ** Context **
  Most targets feature extended loads, i.e., loads that perform a zero or sign extension for free. In that context it is interesting to expose such pattern in CodeGenPrepare so that the instruction selection pass can form such loads. Sometimes, this pattern is blocked because of instructions between the load and the extension. When those instructions are promotable to the extended type, we can expose this pattern.

  ** Motivating Example **
  Let us consider an example:
    define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
      %ld = load i8* %addr1
      %zextld = zext i8 %ld to i32
      %ld2 = load i32* %addr2
      %add = add nsw i32 %ld2, %zextld
      %sextadd = sext i32 %add to i64
      %zexta = zext i8 %a to i32
      %addza = add nsw i32 %zexta, %zextld
      %sextaddza = sext i32 %addza to i64
      %addb = add nsw i32 %b, %zextld
      %sextaddb = sext i32 %addb to i64
      call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
      ret void
    }
  As it is, this IR generates the following assembly on x86_64:
    [...]
    movzbl (%rdi), %eax   # zero-extended load
    movl (%rsi), %esi     # plain load
    addl %eax, %esi       # 32-bit add
    movslq %esi, %rdi     # sign extend the result of add
    movzbl %dl, %edx      # zero extend the first argument
    addl %eax, %edx       # 32-bit add
    movslq %edx, %rsi     # sign extend the result of add
    addl %eax, %ecx       # 32-bit add
    movslq %ecx, %rdx     # sign extend the result of add
    [...]
  The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.
  Now, by promoting the additions to form more extended loads we would generate:
    [...]
    movzbl (%rdi), %eax   # zero-extended load
    movslq (%rsi), %rdi   # sign-extended load
    addq %rax, %rdi       # 64-bit add
    movzbl %dl, %esi      # zero extend the first argument
    addq %rax, %rsi       # 64-bit add
    movslq %ecx, %rdx     # sign extend the second argument
    addq %rax, %rdx       # 64-bit add
    [...]
  The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA. This kind of sequence happens a lot on code using 32-bit indexes on 64-bit architectures.
  Note: The throughput numbers are similar on Sandy Bridge and Haswell.

  ** Proposed Solution **
  To avoid the penalty of all these sign/zero extensions, we merge them in the loads at the beginning of the chain of computation by promoting all the chain of computation on the extended type. The promotion is done if and only if we do not introduce new extensions, i.e., if we do not degrade the code quality. To achieve this, we extend the existing “move ext to load” optimization with the promotion mechanism introduced to match larger patterns for addressing mode (r200947).
  The idea of this extension is to perform the following transformation:
    ext(promotableInst1(...(promotableInstN(load)))) => promotedInst1(...(promotedInstN(ext(load))))
  The promotion mechanism in that optimization is enabled by a new TargetLowering switch, which is off by default. In other words, by default, the optimization performs the “move ext to load” optimization as it was before this patch.

  ** Performance **
  Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
  Tested Optimization Levels: O3/Os
  Tests: llvm-testsuite + externals.
  Results:
  - No regression beside noise.
  - Improvements:
    CINT2006/473.astar: ~2%
    Benchmarks/PAQ8p: ~2%
    Misc/perlin: ~3%
  The results are consistent for both O3 and Os.
  <rdar://problem/18310086>
  llvm-svn: 224402
* Add printing the LC_ENCRYPTION_INFO_64 load command with llvm-objdump’s -private-headers and add tests for the two AArch64 binaries. (Kevin Enderby, 2014-12-17, 1 file, -0/+5)
  llvm-svn: 224400
* PR21875: codegen for non-type template parameters of nullptr_t type (David Blaikie, 2014-12-17, 1 file, -2/+5)
  llvm-svn: 224399
* Revert "[CodeGenPrepare] Move sign/zero extensions near loads using type ↵Reid Kleckner2014-12-173-256/+45
| | | | | | | | | promotion." This reverts commit r224351. It causes assertion failures when building ICU. llvm-svn: 224397
* SelectionDAG switch lowering: use 'unsigned' to count destination popularity (Hans Wennborg, 2014-12-16, 1 file, -2/+2)
  SwitchInst::getNumCases() returns unsigned, so using uint64_t to count cases seems unnecessary. Also fix a missing CHECK in the test case.
  llvm-svn: 224393
* [Hexagon] Updating doubleword shift usages to new versions. (Colin LeMahieu, 2014-12-16, 3 files, -31/+54)
  llvm-svn: 224391
* Add printing the LC_ENCRYPTION_INFO load command with llvm-objdump’s -private-headers. (Kevin Enderby, 2014-12-16, 1 file, -0/+5)
  llvm-svn: 224390
* Linker: Drop superseded subprograms (Duncan P. N. Exon Smith, 2014-12-16, 1 file, -0/+54)
  When a function gets replaced by `ModuleLinker`, drop superseded subprograms. This ensures that the "first" subprogram pointing at a function is the same one that `!dbg` references point at. This is a stop-gap fix for PR21910.
  Notably, this fixes Release+Asserts bootstraps that are currently asserting out in `LexicalScopes::initialize()` due to the explicit instantiations in `lib/IR/Dominators.cpp` eventually getting replaced by -argpromotion.
  llvm-svn: 224389
* [X86][SSE] Vector double -> float conversion memory folding (cvtpd2ps) (Simon Pilgrim, 2014-12-16, 1 file, -0/+3)
  Added a missing memory folding relationship for the (V)CVTPD2PS instruction - we can safely fold these for stack reloads.
  Differential Revision: http://reviews.llvm.org/D6663
  llvm-svn: 224383
* Make the assert a bit stronger. (Rafael Espindola, 2014-12-16, 1 file, -2/+1)
  We should get no declarations in here.
  llvm-svn: 224382
* [Hexagon] Removing old XTYPE/BIT instructions and replacing usages. (Colin LeMahieu, 2014-12-16, 2 files, -45/+17)
  llvm-svn: 224381
* merge consecutive loads that are offset from a base address (Sanjay Patel, 2014-12-16, 1 file, -5/+19)
  SelectionDAG::isConsecutiveLoad() was not detecting consecutive loads when the first load was offset from a base address. This patch recognizes that pattern and subtracts the offset before comparing the second load to see if it is consecutive.
  The codegen change in the new test case improves from:
    vmovsd 32(%rdi), %xmm0
    vmovsd 48(%rdi), %xmm1
    vmovhpd 56(%rdi), %xmm1, %xmm1
    vmovhpd 40(%rdi), %xmm0, %xmm0
    vinsertf128 $1, %xmm1, %ymm0, %ymm0
  To:
    vmovups 32(%rdi), %ymm0
  An existing test case is also improved from:
    vmovsd (%rdi), %xmm0
    vmovsd 16(%rdi), %xmm1
    vmovsd 24(%rdi), %xmm2
    vunpcklpd %xmm2, %xmm0, %xmm0 ## xmm0 = xmm0[0],xmm2[0]
    vmovhpd 8(%rdi), %xmm1, %xmm3
  To:
    vmovsd (%rdi), %xmm0
    vmovsd 16(%rdi), %xmm1
    vmovhpd 24(%rdi), %xmm0, %xmm0
    vmovhpd 8(%rdi), %xmm1, %xmm1
  This patch fixes PR21771 ( http://llvm.org/bugs/show_bug.cgi?id=21771 ).
  Differential Revision: http://reviews.llvm.org/D6642
  llvm-svn: 224379
* [Hexagon] Adding tstbit/bitclr/bitset instructions. (Colin LeMahieu, 2014-12-16, 1 file, -24/+101)
  llvm-svn: 224374
* [sanitizer] prevent function call merging for sanitizer-coverage callbacks (Kostya Serebryany, 2014-12-16, 1 file, -0/+7)
  llvm-svn: 224372
* [Hexagon] Adding bit count and twiddling instructions. (Colin LeMahieu, 2014-12-16, 1 file, -0/+99)
  llvm-svn: 224367
* [Hexagon] Adding asr/lsr/asl reg/imm, asl with saturation, asr with rounding. Doubleword abs/neg/not. Interleave and deinterleave instructions. (Colin LeMahieu, 2014-12-16, 1 file, -0/+78)
  llvm-svn: 224365
* x86-32: PUSHF/POPF use/def EFLAGS (JF Bastien, 2014-12-16, 1 file, -7/+12)
  Summary: As a side-quest for D6629 jvoung pointed out that I should use -verify-machineinstrs and this found a bug in x86-32's handling of EFLAGS for PUSHF/POPF. This patch fixes the use/def, and adds -verify-machineinstrs to all x86 tests which contain 'EFLAGS'. One exception: this patch leaves inline-asm-fpstack.ll as-is because it fails -verify-machineinstrs in a way unrelated to EFLAGS. This patch also modifies cmpxchg-clobber-flags.ll along the lines of what D6629 already does by also testing i386.
  Test Plan: ninja check
  Reviewers: t.p.northover, jvoung
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D6687
  llvm-svn: 224359
* Use CastInst::castIsValid to simplify the verifier. (Rafael Espindola, 2014-12-16, 1 file, -47/+9)
  Also delete a dead member variable.
  llvm-svn: 224356
* NVPTX: Remove duplicate of AsmPrinter::lowerConstant (Matt Arsenault, 2014-12-16, 2 files, -163/+2)
  llvm-svn: 224355
* Move lowerConstant to AsmPrinter (Matt Arsenault, 2014-12-16, 1 file, -25/+20)
  This was a static function before, and NVPTX duplicated it because it wasn't exposed.
  llvm-svn: 224354
* [CodeGenPrepare] Move sign/zero extensions near loads using type promotion. (Quentin Colombet, 2014-12-16, 3 files, -45/+256)
  This patch extends the optimization in CodeGenPrepare that moves a sign/zero extension near a load when the target can combine them. The optimization may promote any operations between the extension and the load to make that possible. Although this optimization may be beneficial for all targets, in particular AArch64, this is enabled for X86 only as I have not benchmarked it for other targets yet.

  ** Context **
  Most targets feature extended loads, i.e., loads that perform a zero or sign extension for free. In that context it is interesting to expose such pattern in CodeGenPrepare so that the instruction selection pass can form such loads. Sometimes, this pattern is blocked because of instructions between the load and the extension. When those instructions are promotable to the extended type, we can expose this pattern.

  ** Motivating Example **
  Let us consider an example:
    define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
      %ld = load i8* %addr1
      %zextld = zext i8 %ld to i32
      %ld2 = load i32* %addr2
      %add = add nsw i32 %ld2, %zextld
      %sextadd = sext i32 %add to i64
      %zexta = zext i8 %a to i32
      %addza = add nsw i32 %zexta, %zextld
      %sextaddza = sext i32 %addza to i64
      %addb = add nsw i32 %b, %zextld
      %sextaddb = sext i32 %addb to i64
      call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
      ret void
    }
  As it is, this IR generates the following assembly on x86_64:
    [...]
    movzbl (%rdi), %eax   # zero-extended load
    movl (%rsi), %esi     # plain load
    addl %eax, %esi       # 32-bit add
    movslq %esi, %rdi     # sign extend the result of add
    movzbl %dl, %edx      # zero extend the first argument
    addl %eax, %edx       # 32-bit add
    movslq %edx, %rsi     # sign extend the result of add
    addl %eax, %ecx       # 32-bit add
    movslq %ecx, %rdx     # sign extend the result of add
    [...]
  The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.
  Now, by promoting the additions to form more extended loads we would generate:
    [...]
    movzbl (%rdi), %eax   # zero-extended load
    movslq (%rsi), %rdi   # sign-extended load
    addq %rax, %rdi       # 64-bit add
    movzbl %dl, %esi      # zero extend the first argument
    addq %rax, %rsi       # 64-bit add
    movslq %ecx, %rdx     # sign extend the second argument
    addq %rax, %rdx       # 64-bit add
    [...]
  The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA. This kind of sequence happens a lot on code using 32-bit indexes on 64-bit architectures.
  Note: The throughput numbers are similar on Sandy Bridge and Haswell.

  ** Proposed Solution **
  To avoid the penalty of all these sign/zero extensions, we merge them in the loads at the beginning of the chain of computation by promoting all the chain of computation on the extended type. The promotion is done if and only if we do not introduce new extensions, i.e., if we do not degrade the code quality. To achieve this, we extend the existing “move ext to load” optimization with the promotion mechanism introduced to match larger patterns for addressing mode (r200947).
  The idea of this extension is to perform the following transformation:
    ext(promotableInst1(...(promotableInstN(load)))) => promotedInst1(...(promotedInstN(ext(load))))
  The promotion mechanism in that optimization is enabled by a new TargetLowering switch, which is off by default. In other words, by default, the optimization performs the “move ext to load” optimization as it was before this patch.

  ** Performance **
  Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
  Tested Optimization Levels: O3/Os
  Tests: llvm-testsuite + externals.
  Results:
  - No regression beside noise.
  - Improvements:
    CINT2006/473.astar: ~2%
    Benchmarks/PAQ8p: ~2%
    Misc/perlin: ~3%
  The results are consistent for both O3 and Os.
  <rdar://problem/18310086>
  llvm-svn: 224351
* [AVX512] Enable integer arithmetic lowering for AVX512BW/VL subsets. (Robert Khasanov, 2014-12-16, 2 files, -1/+6)
  Added lowering tests.
  llvm-svn: 224349
* [Hexagon] Adding absolute value, and negate with saturation (Colin LeMahieu, 2014-12-16, 1 file, -5/+29)
  llvm-svn: 224346
* combine consecutive subvector 16-byte loads into one 32-byte load (Sanjay Patel, 2014-12-16, 2 files, -0/+44)
  This is a fix for PR21709 ( http://llvm.org/bugs/show_bug.cgi?id=21709 ). When we have 2 consecutive 16-byte loads that are merged into one 32-byte vector, we can use a single 32-byte load instead. But we don't do this for SandyBridge / IvyBridge because they have slower 32-byte memops. We also don't bother using 32-byte *integer* loads on a machine that only has AVX1 (btver2) because those operands would have to be split in half anyway since there is no support for 32-byte integer math ops.
  Differential Revision: http://reviews.llvm.org/D6492
  llvm-svn: 224344
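  (Illustration, not part of the commit message: the combine itself runs on SelectionDAG nodes, but the IR-level shape of the pattern looks roughly like this sketch; the names and types here are made up for illustration.)
    ; Two consecutive 16-byte loads glued into one 32-byte vector...
    %lo = load <4 x float>* %p0, align 16
    %hi = load <4 x float>* %p1, align 16   ; %p1 is %p0 + 16 bytes
    %v  = shufflevector <4 x float> %lo, <4 x float> %hi,
                        <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
    ; ...can instead be selected as a single 32-byte load (e.g. vmovups)
    ; on targets where 32-byte memops are not slower.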
* [Hexagon] Adding saturate and swizzle instructions. (Colin LeMahieu, 2014-12-16, 1 file, -0/+22)
  llvm-svn: 224343
* [AVX512] Add a comment for avx512_broadcast_pat multiclass (Robert Khasanov, 2014-12-16, 1 file, -0/+3)
  llvm-svn: 224341
* [Hexagon] Removing old multiply defs and updating references to new versions. (Colin LeMahieu, 2014-12-16, 3 files, -161/+18)
  llvm-svn: 224340
* The single check for N64 inside MipsDisassemblerBase's subclasses is actually wrong. (Vladimir Medic, 2014-12-16, 1 file, -4/+4)
  It should be testing for FeatureGP64bit. There are no functional changes.
  llvm-svn: 224339
* [mips][microMIPS] Implement SWP and LWP instructions (Zoran Jovanovic, 2014-12-16, 7 files, -1/+101)
  Differential Revision: http://reviews.llvm.org/D5667
  llvm-svn: 224338
* Fixing -Wsign-compare warnings; NFC. (Aaron Ballman, 2014-12-16, 3 files, -3/+6)
  llvm-svn: 224337
* Masked Load and Store Intrinsics in loop vectorizer. (Elena Demikhovsky, 2014-12-16, 1 file, -21/+100)
  The loop vectorizer optimizes loops containing conditional memory accesses by generating masked load and store intrinsics. This decision is target dependent.
  http://reviews.llvm.org/D6527
  llvm-svn: 224334
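  (Illustration, not part of the commit message: a rough sketch of what the vectorizer emits for a guarded store; the intrinsic signature shown here is an assumption and has changed across LLVM versions.)
    ; Scalar loop body:  if (trigger[i] > 0) out[i] = val[i];
    ; Vectorized (VF = 8): the branch condition becomes the mask.
    declare void @llvm.masked.store.v8i32(<8 x i32>, <8 x i32>*, i32, <8 x i1>)

    %mask = icmp sgt <8 x i32> %trigger.vec, zeroinitializer
    %vals = load <8 x i32>* %val.ptr, align 4
    call void @llvm.masked.store.v8i32(<8 x i32> %vals, <8 x i32>* %out.ptr, i32 4, <8 x i1> %mask)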
* [ARM] Prevent PerformVCVTCombine from combining a vmul/vcvt with 8 lanes (Bradley Smith, 2014-12-16, 1 file, -3/+5)
  This would result in a crash since the vcvt used does not support v8i32 types.
  llvm-svn: 224332
* X86: Added FeatureVectorUAMem for all AVX architectures. (Elena Demikhovsky, 2014-12-16, 2 files, -16/+10)
  According to AVX specification:
  "Most arithmetic and data processing instructions encoded using the VEX prefix and performing memory accesses have more flexible memory alignment requirements than instructions that are encoded without the VEX prefix. Specifically, with the exception of explicitly aligned 16 or 32 byte SIMD load/store instructions, most VEX-encoded, arithmetic and data processing instructions operate in a flexible environment regarding memory address alignment, i.e. VEX-encoded instruction with 32-byte or 16-byte load semantics will support unaligned load operation by default. Memory arguments for most instructions with VEX prefix operate normally without causing #GP(0) on any byte-granularity alignment (unlike Legacy SSE instructions)."
  The same for AVX-512.
  This change does not affect anything right now, because only the "memop pattern fragment" depends on FeatureVectorUAMem and it is not used in AVX patterns. All AVX patterns are based on the "unaligned load" anyway.
  llvm-svn: 224330
* IR: Stop printing 'metadata' in Metadata::print() (Duncan P. N. Exon Smith, 2014-12-16, 1 file, -3/+0)
  Stop printing `metadata` in `Metadata::print()` and `Metadata::printAsOperand()`.
  llvm-svn: 224327
* IR: Make MDNode::dump() useful by adding addresses (Duncan P. N. Exon Smith, 2014-12-16, 1 file, -1/+3)
  It's horrible to inspect `MDNode`s in a debugger. All of their operands that are `MDNode`s get dumped as `<badref>`, since we can't assign metadata slots in the context of a `Metadata::dump()`. (Why not? Why not assign numbers lazily? Because then each time you called `dump()`, a given `MDNode` could have a different lazily assigned number.)
  Fortunately, the C memory model gives us perfectly good identifiers for `MDNode`. Add pointer addresses to the dumps, transforming this:
    (lldb) e N->dump()
    !{i32 662302, i32 26, <badref>, null}
    (lldb) e ((MDNode*)N->getOperand(2))->dump()
    !{i32 4, !"foo"}
  into:
    (lldb) e N->dump()
    !{i32 662302, i32 26, <0x100706ee0>, null}
    (lldb) e ((MDNode*)0x100706ee0)->dump()
    !{i32 4, !"foo"}
  and this:
    (lldb) e N->dump()
    0x101200248 = !{<badref>, <badref>, <badref>, <badref>, <badref>}
    (lldb) e N->getOperand(0)
    (const llvm::MDOperand) $0 = { MD = 0x00000001012004e0 }
    (lldb) e N->getOperand(1)
    (const llvm::MDOperand) $1 = { MD = 0x00000001012004e0 }
    (lldb) e N->getOperand(2)
    (const llvm::MDOperand) $2 = { MD = 0x0000000101200058 }
    (lldb) e N->getOperand(3)
    (const llvm::MDOperand) $3 = { MD = 0x00000001012004e0 }
    (lldb) e N->getOperand(4)
    (const llvm::MDOperand) $4 = { MD = 0x0000000101200058 }
    (lldb) e ((MDNode*)0x00000001012004e0)->dump()
    !{}
    (lldb) e ((MDNode*)0x0000000101200058)->dump()
    !{null}
  into:
    (lldb) e N->dump()
    !{<0x1012004e0>, <0x1012004e0>, <0x101200058>, <0x1012004e0>, <0x101200058>}
    (lldb) e ((MDNode*)0x1012004e0)->dump()
    !{}
    (lldb) e ((MDNode*)0x101200058)->dump()
    !{null}
  llvm-svn: 224325
* ARM: diagnose deprecated syntax (Saleem Abdulrasool, 2014-12-16, 2 files, -1/+16)
  The use of SP and PC in the register list for stores is deprecated on ARM (ARM ARM A.8.8.199):
    ARM deprecates the use of ARM instructions that include the SP or the PC in the list.
  Provide a deprecation warning from the assembler in the case that the syntax is ever seen.
  llvm-svn: 224319
* [PowerPC] Improve instruction selection bit-permuting operations (32-bit) (Hal Finkel, 2014-12-16, 2 files, -55/+480)
  The PowerPC backend, somewhat embarrassingly, did not generate an optimal-length sequence of instructions for a 32-bit bswap. While adding a pattern for the bswap intrinsic to fix this would not have been terribly difficult, doing so would not have addressed the real problem: we had been generating poor code for many bit-permuting operations (by which I mean things like byte-swap that permute the bits of one or more inputs around in various ways). Here are some initial steps toward solving this deficiency.
  Bit-permuting operations are represented, at the SDAG level, using ISD::ROTL, SHL, SRL, AND and OR (mostly with constant second operands). Looking back through these operations, we can build up a description of the bits in the resulting value in terms of bits of one or more input values (and constant zeros). For each bit, we compute the rotation amount from the original value, and then group consecutive (value, rotation factor) bits into groups. Groups sharing these attributes are then collected and sorted, and we can then instruction select the entire permutation using a combination of masked rotations (rlwinm), imm ands (andi/andis), and masked rotation inserts (rlwimi).
  The result is that instead of lowering an i32 bswap as:
    rlwinm 5, 3, 24, 16, 23
    rlwinm 4, 3, 24, 0, 7
    rlwimi 4, 3, 8, 8, 15
    rlwimi 5, 3, 8, 24, 31
    rlwimi 4, 5, 0, 16, 31
  we now produce:
    rlwinm 4, 3, 8, 0, 31
    rlwimi 4, 3, 24, 16, 23
    rlwimi 4, 3, 24, 0, 7
  and for the 'test6' example in the PowerPC/README.txt file:
    unsigned test6(unsigned x) {
      return ((x & 0x00FF0000) >> 16) | ((x & 0x000000FF) << 16);
    }
  we used to produce:
    lis 4, 255
    rlwinm 3, 3, 16, 0, 31
    ori 4, 4, 255
    and 3, 3, 4
  and now we produce:
    rlwinm 4, 3, 16, 24, 31
    rlwimi 4, 3, 16, 8, 15
  and, as a nice bonus, this fixes the FIXME in test/CodeGen/PowerPC/rlwimi-and.ll.
  This commit does not include instruction-selection for i64 operations, those will come later.
  llvm-svn: 224318
* ARM: 80-column (Saleem Abdulrasool, 2014-12-16, 1 file, -4/+5)
  clang-format a function with an overly long string constant. NFC.
  llvm-svn: 224314
* LiveRangeCalc: Rewrite subrange calculation (Matthias Braun, 2014-12-16, 3 files, -224/+157)
  This changes subrange calculation to calculate subranges sequentially instead of in parallel. The code is easier to understand that way and addresses the code review issues raised about LiveOutData being hard to understand/needing more comments by removing them :)
  llvm-svn: 224313