truncate the stack pointer to 32 bits on a 64-bit machine.
llvm-svn: 116169
llvm-svn: 115792
allow the target to correctly compute latency for cases where static scheduling
itineraries aren't sufficient, e.g. variable_ops instructions such as
ARM::ldm.
This also allows targets without scheduling itineraries to compute operand
latencies, e.g. X86 can return (approximated) latencies for high-latency
instructions such as division.
- Compute operand latencies for operands defined by load-multiple instructions,
e.g. ldm, and those used by store-multiple instructions, e.g. stm.
llvm-svn: 115755
of hardware signed integer conversion without having to do a double cast
(uint64_t --> double --> float). This is based on the algorithm from
compiler_rt's __floatundisf for X86-64.
llvm-svn: 115634
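The trick can be sketched in Python; this is a minimal model of the idea, not compiler_rt's actual code, and the names `uint64_to_float` and `_to_f32` are illustrative. When the value is too large for the signed conversion, halve it while folding the shifted-out bit back in as a "sticky" bit so the final rounding is preserved, convert via the signed path, then double the result:

```python
import struct

def _to_f32(x: float) -> float:
    # Round a Python float (an IEEE double) to single precision.
    return struct.unpack("f", struct.pack("f", x))[0]

def uint64_to_float(u: int) -> float:
    # Sketch of the __floatundisf idea: only a *signed* 64-bit -> float
    # hardware conversion is assumed to be available.
    if u < 1 << 63:
        return _to_f32(float(u))  # value fits the signed range
    # Halve the value; OR the shifted-out bit back in as a "sticky" bit
    # so the final rounding still sees whether low bits were lost.
    halved = (u >> 1) | (u & 1)
    # Convert the now-in-range value, then scale back up by 2 (exact).
    return 2.0 * _to_f32(float(halved))
```

For example, `uint64_to_float(2**63)` yields exactly `2.0**63`, even though that value is outside the signed 64-bit range.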
// %a = ...
// %b = and i32 %a, 2
// %c = srl i32 %b, 1
// brcond i32 %c ...
//
// into
//
// %a = ...
// %b = and i32 %a, 2
// %c = setcc eq %b, 0
// brcond %c ...
Make sure it restores the local variable N1, which corresponds to the condition operand, if it fails to match.
This apparently breaks TCE, but since that backend isn't in the tree, I don't have a test for it.
llvm-svn: 115571
unused argument here. This is a known limitation, recorded in the debuginfo-tests/trunk/dbg-declare2.ll function 'f6' test case.
llvm-svn: 115323
llvm-svn: 115310
llvm-svn: 115300
llvm-svn: 115294
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.
Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics.
MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces. Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into an
existing x86_mmx operation.
The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
llvm-svn: 115243
edited during emission.
If the basic block ends in a switch that gets lowered to a jump table, any
phis at the default edge were getting updated wrong. The jump table data
structure keeps a pointer to the header blocks that wasn't getting updated
after the MBB is split.
This bug was exposed on 32-bit Linux when disabling critical edge splitting in
codegen prepare.
The fix is to update stale MBB pointers whenever a block is split during
emission.
llvm-svn: 115191
pipeline forwarding path.
llvm-svn: 115098
llvm-svn: 114999
and asserts.
llvm-svn: 114843
llvm-svn: 114767
llvm-svn: 114750
doubt it, but it's possible it's exposing another bug somewhere.
llvm-svn: 114681
Patch by Nathan Jeffords!
llvm-svn: 114661
conditional one.
llvm-svn: 114634
when the unconditional branch destination is the fallthrough block. The
canonicalization makes it easier to allow optimizations on DAGs to invert
conditional branches. The branch folding pass (and AnalyzeBranch) will clean up
the unnecessary unconditional branches later.
This is one of the patches leading up to disabling codegen prepare critical edge
splitting.
llvm-svn: 114630
lowered using a series of shifts.
Fixes <rdar://problem/8285015>.
llvm-svn: 114599
llvm-svn: 114490
that complex patterns are matched after the entire pattern has
a structural match; therefore, the NodeStack isn't in a useful
state when the actual call to the matcher happens.
llvm-svn: 114489
current basic block, then insert a DBG_VALUE so that the debug value of the variable is also transferred to the new vreg.
Testcase is in r114476.
This fixes radar 8412415.
llvm-svn: 114478
llvm-svn: 114474
target-dependent, by using the predicate to discover the number of sign bits.
Enhance X86's target lowering to provide a useful response to this query.
llvm-svn: 114473
matched, allow ComplexPatterns to opt into getting the parent node
of the operand being matched.
llvm-svn: 114472
I think I've audited all uses, so it should be dependable for address spaces,
and the pointer+offset info should also be accurate when there.
llvm-svn: 114464
llvm-svn: 114461
and store intrinsics are represented with MemIntrinsicSDNodes.
llvm-svn: 114454
MachinePointerInfo around more.
llvm-svn: 114452
llvm-svn: 114450
with an indexed load/store that has an offset in the index.
llvm-svn: 114449
SelectionDAG::getExtLoad overload, and eliminate it.
llvm-svn: 114446
getLoad overloads.
llvm-svn: 114443
with SVOffset computation.
llvm-svn: 114442
llvm-svn: 114437
no functionality change (step #1)
llvm-svn: 114436
pass a completely incorrect SrcValue, which would result in a miscompile with
combiner-aa.
llvm-svn: 114411
instead of srcvalue/offset pairs. This corrects SV info for mem
operations whose size is > 32 bits.
llvm-svn: 114401
MachinePointerInfo. Among other virtues, this doesn't silently truncate the
svoffset to 32-bits.
llvm-svn: 114399
MachinePointerInfo
llvm-svn: 114397
eliminating some weird "infer a frame address" logic which was dead.
llvm-svn: 114396
llvm-svn: 114395
MachinePointerInfo, propagating the type out a level of API. Remove
the old MachineFunction::getMachineMemOperand impl.
llvm-svn: 114393
Therefore, CombinerAA cannot assume that different FrameIndexes never alias,
but can instead use MachineFrameInfo to get the actual offsets of these slots
and check for actual aliasing.
This fixes CodeGen/X86/2010-02-19-TailCallRetAddrBug.ll and
CodeGen/X86/tailcallstack64.ll when CombinerAA is enabled, modulo a different
register allocation sequence.
llvm-svn: 114348
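The offset-based disambiguation described above amounts to an interval-overlap test. A minimal sketch, assuming a `frame_info` map standing in for MachineFrameInfo (the names here are hypothetical, not LLVM's API):

```python
def frame_slots_alias(frame_info, fi_a, fi_b):
    """Check whether two frame-index slots can overlap.

    frame_info maps a frame index to its (offset, size) on the stack.
    Two slots alias exactly when their byte intervals
    [offset, offset + size) intersect -- distinct indices alone do not
    guarantee disjointness (e.g. fixed stack objects created for tail
    calls can overlap other slots).
    """
    off_a, size_a = frame_info[fi_a]
    off_b, size_b = frame_info[fi_b]
    # Standard half-open interval intersection test.
    return off_a < off_b + size_b and off_b < off_a + size_a
```

With `frame_info = {0: (0, 8), 1: (8, 4), 2: (4, 8)}`, slots 0 and 1 are disjoint while slots 0 and 2 overlap, even though all three indices differ.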
llvm-svn: 114313
r114268 fixed the last of the blockers to enabling it. I will be monitoring
for failures.
llvm-svn: 114312
is that there is NO path to the destination containing side effects, not that
SOME path contains no side effects.
In practice, this only manifests with CombinerAA enabled, because otherwise the
chain has little to no branching, so "any" is effectively equivalent to "all".
llvm-svn: 114268
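The any-versus-all distinction can be illustrated with a small sketch; the `Node` class and function names are hypothetical stand-ins, not LLVM's data structures. The correct query requires every path reaching the destination to be side-effect free, whereas the buggy version accepted the node if merely some path was clean:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    has_side_effects: bool = False
    preds: list = field(default_factory=list)  # predecessor nodes

def all_paths_clean(node):
    """Correct: NO path reaching `node` contains side effects."""
    if node.has_side_effects:
        return False
    # all([]) is True, so a clean node with no predecessors passes.
    return all(all_paths_clean(p) for p in node.preds)

def some_path_clean(node):
    """Buggy: accepts `node` if SOME path is free of side effects."""
    if node.has_side_effects:
        return False
    return not node.preds or any(some_path_clean(p) for p in node.preds)
```

On a diamond where one incoming path passes through a side-effecting node, `some_path_clean` still returns True while `all_paths_clean` correctly returns False; the two only agree when the chain has essentially no branching, matching the commit's observation.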
This fixes funcargs.exp regression reported by gdb testsuite.
llvm-svn: 113992