| Commit message | Author | Age | Files | Lines |
| |
ScalarTargetTransformInfo::getIntImmCost() instead. "Legal" is a poorly defined
term for something like integer immediate materialization. It is always possible
to materialize an integer immediate. Whether to use it for memcpy expansion is
more a "cost" conceern.
llvm-svn: 169929
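
The distinction being drawn here is between an immediate that is encodable in a
single instruction and one that must be materialized by other means (movw/movt
pair, literal-pool load), which is a question of cost rather than legality. As a
standalone illustration (not LLVM's hook), the sketch below checks whether a
32-bit value fits ARM's modified-immediate encoding, i.e. whether one MOV
suffices:

  #include <cstdint>
  #include <cstdio>

  // Illustrative only: true if Imm fits ARM's "modified immediate" encoding
  // (an 8-bit value rotated right by an even amount), i.e. a single MOV can
  // materialize it. Values that fail are still materializable -- via movw/movt
  // or a literal-pool load -- just at a higher cost, which is what a hook like
  // getIntImmCost() is meant to express.
  static bool fitsARMModifiedImm(uint32_t Imm) {
    for (unsigned Rot = 0; Rot < 32; Rot += 2) {
      // Rotating left by Rot undoes a rotate-right-by-Rot encoding.
      uint32_t Undone = Rot ? ((Imm << Rot) | (Imm >> (32 - Rot))) : Imm;
      if (Undone <= 0xFF)
        return true;
    }
    return false;
  }

  int main() {
    std::printf("%d\n", fitsARMModifiedImm(0x00FF0000)); // 1: 0xFF rotated
    std::printf("%d\n", fitsARMModifiedImm(0x12345678)); // 0: needs movw/movt or a load
  }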
|
| |
A new backend supporting AMD GPUs: Radeon HD2XXX - HD7XXX
llvm-svn: 169915
|
| |
Given a thread-local symbol x with global-dynamic access, the generated
code to obtain x's address is:
  Instruction                    Relocation               Symbol
  addis ra,r2,x@got@tlsgd@ha     R_PPC64_GOT_TLSGD16_HA   x
  addi r3,ra,x@got@tlsgd@l       R_PPC64_GOT_TLSGD16_L    x
  bl __tls_get_addr(x@tlsgd)     R_PPC64_TLSGD            x
                                 R_PPC64_REL24            __tls_get_addr
  nop
  <use address in r3>
The implementation borrows from the medium code model work for introducing
special forms of ADDIS and ADDI into the DAG representation. This is made
slightly more complicated by having to introduce a call to the external
function __tls_get_addr. Using the full call machinery is overkill and,
more importantly, makes it difficult to add a special relocation. So I've
introduced another opcode GET_TLS_ADDR to represent the function call, and
surrounded it with register copies to set up the parameter and return value.
Most of the code is pretty straightforward. I ran into one peculiarity
when I introduced a new PPC opcode BL8_NOP_ELF_TLSGD, which is just like
BL8_NOP_ELF except that it takes another parameter to represent the symbol
("x" above) that requires a relocation on the call. Something in the
TblGen machinery causes BL8_NOP_ELF and BL8_NOP_ELF_TLSGD to be treated
identically during the emit phase, so this second operand was never
visited to generate relocations. This is the reason for the slightly
messy workaround in PPCMCCodeEmitter.cpp:getDirectBrEncoding().
Two new tests are included to demonstrate correct external assembly and
correct generation of relocations using the integrated assembler.
Comments welcome!
Thanks,
Bill
llvm-svn: 169910
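
For context, a minimal source-level example of a thread-local symbol whose
address is taken; compiled for 64-bit PowerPC ELF under the general-dynamic TLS
model (e.g. into a shared library), the address computation lowers to the
addis/addi/bl __tls_get_addr sequence shown above. Names here are illustrative.

  #include <cstdio>

  // A thread-local variable with global-dynamic access: its address is not
  // known until __tls_get_addr resolves it at run time.
  thread_local int x = 42;   // pre-C++11 spelling: __thread int x;

  int *addr_of_x() {
    return &x;               // lowers to the GOT_TLSGD/__tls_get_addr sequence
  }

  int main() {
    std::printf("%d\n", *addr_of_x());  // 42
  }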
|
| |
llvm-svn: 169854
|
| |
MVTs, instead of EVTs.
Accordingly, add bitsLT (and similar) to MVT.
llvm-svn: 169850
|
| |
EVTs.
llvm-svn: 169848
|
| |
of EVT.
llvm-svn: 169845
|
| |
Accordingly, add helper functions getSimpleValueType (in parallel to
getValueType) in SDValue, SDNode, and TargetLowering.
This is the first in a series of patches.
llvm-svn: 169837
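
A toy illustration (not LLVM's classes) of the split being introduced here: a
closed enum of simple machine types alongside an extended wrapper, with an
asserting accessor analogous to getSimpleValueType() and a bit-width comparison
analogous to bitsLT().

  #include <cassert>
  #include <cstdio>

  // Simple (register-sized) types only -- the analogue of MVT.
  enum class SimpleVT { i8 = 8, i16 = 16, i32 = 32, i64 = 64 };

  static bool bitsLT(SimpleVT A, SimpleVT B) {
    return static_cast<int>(A) < static_cast<int>(B);  // compare bit widths
  }

  // The analogue of EVT: may or may not wrap a simple type (extended types,
  // e.g. oddly sized integers, would have no SimpleVT).
  struct ExtVT {
    bool IsSimple;
    SimpleVT Simple;                    // valid only when IsSimple
    SimpleVT getSimpleVT() const {      // analogue of getSimpleValueType()
      assert(IsSimple && "expected a simple value type");
      return Simple;
    }
  };

  int main() {
    ExtVT VT{true, SimpleVT::i16};
    std::printf("%d\n", bitsLT(VT.getSimpleVT(), SimpleVT::i32));  // 1
  }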
|
| |
llvm-svn: 169819
|
| |
llvm-svn: 169814
|
| |
llvm-svn: 169811
|
| |
This shouldn't affect codegen for -O0 compiles as tail call markers are not
emitted in unoptimized compiles. Testing with the external/internal nightly
test suite reveals no change in compile time performance. Testing with -O1,
-O2 and -O3 with fast-isel enabled did not cause any compile-time or
execution-time failures. All tests were performed on my x86 machine.
I'll monitor our arm testers to ensure no regressions occur there.
In an upcoming clang patch I will be marking the objc_autoreleaseReturnValue
and objc_retainAutoreleaseReturnValue as tail calls unconditionally. While
it's theoretically true that this is just an optimization, it's an
optimization that we very much want to happen even at -O0, or else ARC
applications become substantially harder to debug.
Part of rdar://12553082
llvm-svn: 169796
|
| |
1. Teach it to use overlapping unaligned load / store to copy / set the trailing
bytes, e.g. on x86, use two pairs of movups / movaps for 17 - 31 byte copies
(see the sketch below).
2. Use f64 for memcpy / memset on targets where i64 is not legal but f64 is, e.g.
x86 and ARM.
3. When memcpy'ing from a constant string, do *not* replace the load with a constant
if it's not possible to materialize an integer immediate with a single
instruction (this required a new target hook: TLI.isIntImmLegal()).
4. Use unaligned loads / stores more aggressively if the target hooks indicate they
are "fast".
5. Update ARM target hooks to use unaligned loads / stores, e.g. vld1.8 / vst1.8.
Also increase the threshold to something reasonable (8 for memset, 4 pairs
for memcpy).
This significantly improves Dhrystone, up to 50% on ARM iOS devices.
rdar://12760078
llvm-svn: 169791
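
A standalone sketch of the overlapping trick in point 1: a copy of 17-31 bytes
can be finished with two 16-byte block moves whose destination ranges overlap,
instead of a chain of 8/4/2/1-byte tail copies. A backend emits this with
unaligned vector loads/stores (e.g. movups on x86); fixed-size memcpy calls
stand in for those here.

  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  // Assumes 17 <= n <= 31: copy the first 16 bytes, then the last 16 bytes.
  // The second block overlaps the first, so every byte is covered exactly
  // with two wide moves.
  static void copy17to31(uint8_t *dst, const uint8_t *src, size_t n) {
    std::memcpy(dst, src, 16);                    // first 16 bytes
    std::memcpy(dst + n - 16, src + n - 16, 16);  // last 16 bytes (overlapping)
  }

  int main() {
    uint8_t src[31], dst[31] = {};
    for (int i = 0; i < 31; ++i) src[i] = static_cast<uint8_t>(i);
    copy17to31(dst, src, 23);
    std::printf("%d %d\n", dst[0], dst[22]);      // 0 22
  }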
|
| |
getMipsRegisterNumbering and use MCRegisterInfo::getEncodingValue instead.
llvm-svn: 169760
|
| |
Accidental commit... git svn betrayed me. Sorry for the noise.
llvm-svn: 169741
|
| |
Summary:
Not all chips targeted by x86_64 have this feature, but a dramatically
increasing number do. Specifying a chip-specific tuning parameter will
continue to turn the feature on or off as appropriate for that
particular chip, but the generic flag should try to achieve the best
performance on the most widely available hardware. Today, the number of
chips with fast UA access dwarfs those without in the x86-64 space.
Note that this also brings LLVM's code generation for this '-march' flag
more in line with that of modern GCCs.
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D195
llvm-svn: 169740
|
| |
Thanks to the PaX folks for noticing in review! We need some tests here,
any suggestions welcome...
llvm-svn: 169739
|
| |
Intel chips.
The model number rules were determined by inspecting Intel's
documentation for their newer chip model numbers. My understanding is
that all of the newer Intel chips have fast unaligned memory access, but
if anyone is concerned about a particular chip, just shout.
No tests updated; it's not clear we have dedicated tests for the chips'
various features, but if anyone would like tests (or can point me at
some existing ones), I'm happy to oblige.
llvm-svn: 169730
|
| |
llvm-svn: 169724
|
| |
- added a function to VectorTargetTransformInfo to query the cost of intrinsics
- vectorize trivially vectorizable intrinsic calls such as sin, cos, log, etc.
Reviewed by: Nadav
llvm-svn: 169711
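
The kind of loop this affects, as a standalone example: each iteration calls a
math routine with a vector-legal intrinsic form. Once the cost model can price
the intrinsic, the loop vectorizer may replace the scalar calls with calls on
whole vectors (optimization, and on some targets fast-math, must be enabled for
the transformation to fire).

  #include <cmath>
  #include <cstddef>
  #include <cstdio>

  // Each iteration calls sinf, which can map to the llvm.sin intrinsic; with
  // an intrinsic cost entry the loop becomes trivially vectorizable.
  void apply_sin(float *out, const float *in, size_t n) {
    for (size_t i = 0; i < n; ++i)
      out[i] = std::sin(in[i]);
  }

  int main() {
    float in[8] = {0, 1, 2, 3, 4, 5, 6, 7}, out[8];
    apply_sin(out, in, 8);
    std::printf("%f\n", out[1]);  // ~0.841471
  }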
|
| |
- fix a bug which causes a segfault.
- add two test cases which were causing crashes.
llvm-svn: 169687
|
| |
There are still bugs in this pass, as well as other issues that are
being worked on, but the bugs are crashers that occur pretty easily in
the wild. Test cases have been sent to the original commit's review
thread.
This reverts the commits:
r169671: Fix a logic error.
r169604: Move the popcnt tests to an X86 subdirectory.
r168931: Initial commit adding the pass.
llvm-svn: 169683
|
| |
llvm-svn: 169676
|
| |
std::string to a StringRef. Moreover, the method being called accepts
a Twine to simplify these patterns.
Fixes this ASan failure:
==6312== ERROR: AddressSanitizer: heap-use-after-free on address 0x7fd558b1af58 at pc 0xcb7529 bp 0x7fffff572080 sp 0x7fffff572078
READ of size 1 at 0x7fd558b1af58 thread T0
#0 0xcb7528 .../llvm/include/llvm/ADT/StringRef.h:192 llvm::StringRef::operator[]()
#1 0x1d53c0a .../llvm/include/llvm/ADT/StringExtras.h:128 llvm::HashString()
#2 0x1d53878 .../llvm/lib/Support/StringMap.cpp:64 llvm::StringMapImpl::LookupBucketFor()
#3 0x1b6872f .../llvm/include/llvm/ADT/StringMap.h:352 llvm::StringMap<>::GetOrCreateValue<>()
#4 0x1b61836 .../llvm/lib/MC/MCContext.cpp:109 llvm::MCContext::GetOrCreateSymbol()
#5 0xe9fd47 .../llvm/lib/Target/ARM/MCTargetDesc/ARMELFStreamer.cpp:154 (anonymous namespace)::ARMELFStreamer::EmitMappingSymbol()
#6 0xea01dd .../llvm/lib/Target/ARM/MCTargetDesc/ARMELFStreamer.cpp:133 (anonymous namespace)::ARMELFStreamer::EmitDataMappingSymbol()
#7 0xe9f78b .../llvm/lib/Target/ARM/MCTargetDesc/ARMELFStreamer.cpp:91 (anonymous namespace)::ARMELFStreamer::EmitBytes()
#8 0x1b15d82 .../llvm/lib/MC/MCStreamer.cpp:89 llvm::MCStreamer::EmitIntValue()
#9 0xcc0f9b .../llvm/lib/Target/ARM/ARMAsmPrinter.cpp:713 llvm::ARMAsmPrinter::emitAttributes()
#10 0xcc0d44 .../llvm/lib/Target/ARM/ARMAsmPrinter.cpp:632 llvm::ARMAsmPrinter::EmitStartOfAsmFile()
#11 0x14692ad .../llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp:162 llvm::AsmPrinter::doInitialization()
#12 0x1bc4677 .../llvm/lib/VMCore/PassManager.cpp:1561 llvm::FPPassManager::doInitialization()
#13 0x1bc4990 .../llvm/lib/VMCore/PassManager.cpp:1595 llvm::MPPassManager::runOnModule()
#14 0x1bc55e5 .../llvm/lib/VMCore/PassManager.cpp:1705 llvm::PassManagerImpl::run()
#15 0x1bc5878 .../llvm/lib/VMCore/PassManager.cpp:1740 llvm::PassManager::run()
#16 0xc3954d .../llvm/tools/llc/llc.cpp:378 compileModule()
#17 0xc38001 .../llvm/tools/llc/llc.cpp:194 main
#18 0x7fd557d6a11c __libc_start_main
0x7fd558b1af58 is located 24 bytes inside of 29-byte region [0x7fd558b1af40,0x7fd558b1af5d)
freed by thread T0 here:
#0 0xc337da .../llvm/projects/compiler-rt/lib/asan/asan_new_delete.cc:56 operator delete()
#1 0x1ee9cef .../libstdc++-v3/include/bits/basic_string.h:535 std::string::~string()
#2 0xea01dd .../llvm/lib/Target/ARM/MCTargetDesc/ARMELFStreamer.cpp:133 (anonymous namespace)::ARMELFStreamer::EmitDataMappingSymbol()
#3 0xe9f78b .../llvm/lib/Target/ARM/MCTargetDesc/ARMELFStreamer.cpp:91 (anonymous namespace)::ARMELFStreamer::EmitBytes()
#4 0x1b15d82 .../llvm/lib/MC/MCStreamer.cpp:89 llvm::MCStreamer::EmitIntValue()
#5 0xcc0f9b .../llvm/lib/Target/ARM/ARMAsmPrinter.cpp:713 llvm::ARMAsmPrinter::emitAttributes()
#6 0xcc0d44 .../llvm/lib/Target/ARM/ARMAsmPrinter.cpp:632 llvm::ARMAsmPrinter::EmitStartOfAsmFile()
#7 0x14692ad .../llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp:162 llvm::AsmPrinter::doInitialization()
#8 0x1bc4677 .../llvm/lib/VMCore/PassManager.cpp:1561 llvm::FPPassManager::doInitialization()
#9 0x1bc4990 .../llvm/lib/VMCore/PassManager.cpp:1595 llvm::MPPassManager::runOnModule()
#10 0x1bc55e5 .../llvm/lib/VMCore/PassManager.cpp:1705 llvm::PassManagerImpl::run()
#11 0x1bc5878 .../llvm/lib/VMCore/PassManager.cpp:1740 llvm::PassManager::run()
#12 0xc3954d .../llvm/tools/llc/llc.cpp:378 compileModule()
#13 0xc38001 .../llvm/tools/llc/llc.cpp:194 main
#14 0x7fd557d6a11c __libc_start_main
llvm-svn: 169668
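
The root cause is the classic non-owning-reference-to-a-temporary bug. Below is
a distilled standalone instance of the same bug class, written with
std::string_view (which has the same non-owning semantics as llvm::StringRef);
the names and the "$d." prefix are illustrative, not the original code.

  #include <iostream>
  #include <string>
  #include <string_view>

  // Bad: the temporary std::string produced by operator+ dies at the end of
  // the full expression, so the view dangles and any later read through it is
  // a use-after-free -- exactly what ASan reported above.
  std::string_view makeSymbolNameBad(int idx) {
    std::string_view Name = "$d." + std::to_string(idx);  // dangling view
    return Name;
  }

  // Fix: keep an owning string alive (or, in LLVM, pass a Twine that is
  // consumed before its temporaries are destroyed).
  std::string makeSymbolNameGood(int idx) {
    return "$d." + std::to_string(idx);
  }

  int main() {
    std::cout << makeSymbolNameGood(0) << "\n";  // "$d.0"
  }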
|
| |
in the near future.
llvm-svn: 169651
|
| |
the VSRI instruction before it since it does not affect the MSB.
Thanks Craig Topper for suggesting this.
llvm-svn: 169638
|
| |
In particular, check whether the MachineBasicBlock::iterator is end() before
using it to call getDebugLoc().
See also this thread on llvm-commits:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20121112/155914.html
llvm-svn: 169634
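
The shape of the guard, as a standalone sketch with a plain std::list standing
in for the machine basic block; names are illustrative.

  #include <cstdio>
  #include <list>

  struct Instr { int DebugLine = 0; };           // stand-in for MachineInstr

  // Never dereference the iterator without comparing it to end() first --
  // the analogue of checking MachineBasicBlock::iterator before getDebugLoc().
  int debugLineOrZero(const std::list<Instr> &Block,
                      std::list<Instr>::const_iterator I) {
    if (I == Block.end())
      return 0;                                  // no instruction, no location
    return I->DebugLine;
  }

  int main() {
    std::list<Instr> Block;
    std::printf("%d\n", debugLineOrZero(Block, Block.begin()));  // 0 (begin == end)
  }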
|
| |
llvm-svn: 169624
|
| |
Before this patch, when you objdump an LLVM-compiled file, objdump tried to
decode data-in-code sections as if they were code. This patch adds the missing
Mapping Symbols, as defined by "ELF for the ARM Architecture" (ARM IHI 0044D).
Patch based on work by Greg Fitzgerald.
llvm-svn: 169609
|
| |
This is the preferred way of creating bundled machine instructions.
llvm-svn: 169585
|
| |
use.
llvm-svn: 169580
|
| |
llvm-svn: 169579
|
| |
llvm-svn: 169578
|
| |
llvm-svn: 169577
|
| |
decide what pattern we want to follow in the future.
llvm-svn: 169561
|
| |
understand target implementation of any_extend / extload, just generate
zero_extend in place of any_extend for liveouts when the target knows the
zero_extend will be implicit (e.g. ARM ldrb / ldrh) or folded (e.g. x86 movz).
rdar://12771555
llvm-svn: 169536
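
A standalone illustration of why the zero-extension is free in these cases: a
narrow load already clears the high bits on both targets mentioned.

  #include <cstdint>
  #include <cstdio>

  // On ARM this is a single ldrb and on x86 a single movzbl: the load
  // instruction itself zeroes the upper bits, so extending the result to
  // 32 bits costs no extra instruction.
  uint32_t load_byte(const uint8_t *p) {
    return *p;
  }

  int main() {
    uint8_t b = 0xAB;
    std::printf("%u\n", load_byte(&b));  // 171
  }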
|
| |
llvm-svn: 169534
|
| |
llvm-svn: 169521
|
| |
to the normal instructions.
llvm-svn: 169482
|
| |
neverHasSideEffects.
llvm-svn: 169477
|
| |
rdar://12821569
llvm-svn: 169460
|
| |
and extload's. If they are implemented as zero-extend, or implicitly
zero-extend, then this can enable more demanded bits optimizations. e.g.
define void @foo(i16* %ptr, i32 %a) nounwind {
entry:
%tmp1 = icmp ult i32 %a, 100
br i1 %tmp1, label %bb1, label %bb2
bb1:
%tmp2 = load i16* %ptr, align 2
br label %bb2
bb2:
%tmp3 = phi i16 [ 0, %entry ], [ %tmp2, %bb1 ]
%cmp = icmp ult i16 %tmp3, 24
br i1 %cmp, label %bb3, label %exit
bb3:
call void @bar() nounwind
br label %exit
exit:
ret void
}
This compiles to the following before:
push {lr}
mov r2, #0
cmp r1, #99
bhi LBB0_2
@ BB#1: @ %bb1
ldrh r2, [r0]
LBB0_2: @ %bb2
uxth r0, r2
cmp r0, #23
bhi LBB0_4
@ BB#3: @ %bb3
bl _bar
LBB0_4: @ %exit
pop {lr}
bx lr
The uxth is not needed since ldrh implicitly zero-extends the high bits. With
this change it's eliminated.
rdar://12771555
llvm-svn: 169459
|
| |
using multiclass.
llvm-svn: 169432
|
| |
...) to zero.
llvm-svn: 169423
|
| |
The encoding of NOP in ARMAsmBackend.cpp is missing a trailing zero, which
causes the emission of a coprocessor instruction rather than "mov r0, r0"
as indicated in the comment. The test also checks for the wrong encoding.
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20121203/157919.html
llvm-svn: 169420
|
| |
Patch by Eric Holk
llvm-svn: 169418
|
| |
addressing mode and immediate stored value.
llvm-svn: 169408
|
| |
llvm-svn: 169404
|
| |
This is for the lldb team, so most, but not all, of the values are
printed as hex with this option. Some small values, like the
scale in an X86 address, were requested to be printed in decimal
without the leading 0x.
There may be some tweaks needed for places that are still printed in
decimal but that they want in hex, especially for ARM. I made my best
guess. Any tweaks from here should be simple.
I also did the best I could, with help from the C++ gurus, to
create the cleanest formatImm() utility function and to contain
the changes. But if someone has a better idea to make something
cleaner, I'm all ears and game for changing the implementation.
rdar://8109283
llvm-svn: 169393
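
A minimal sketch of the behaviour described, not the actual LLVM utility: one
formatting helper that prints in hex when the option is on, with the caller
opting out for values (such as an x86 address scale) that should stay decimal.

  #include <cstdint>
  #include <cstdio>

  // Sketch only: print an immediate in hex when PrintHex is set, otherwise in
  // decimal. Call sites that must stay decimal (e.g. the scale in an x86
  // address) simply pass PrintHex = false.
  void printImm(int64_t Imm, bool PrintHex) {
    if (PrintHex)
      std::printf("0x%llx", static_cast<unsigned long long>(Imm));
    else
      std::printf("%lld", static_cast<long long>(Imm));
  }

  int main() {
    printImm(255, true);  std::printf("\n");   // 0xff
    printImm(4, false);   std::printf("\n");   // 4
  }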
|
| |
Generate VPBLENDD for AVX2 and VPBLENDW for v16i16 type on AVX2.
llvm-svn: 169366
|