| Commit message | Author | Age | Files | Lines |
llvm-svn: 60189
llvm-svn: 60188
llvm-svn: 60187
llvm-svn: 60186
by 1, as well as multiply by -1.
llvm-svn: 60182
it ends up being the entry block.
llvm-svn: 60180
move the other block back up into the entry position!
llvm-svn: 60179
Despite changing the order of evaluation, this doesn't actually change the
meaning of the statement.
llvm-svn: 60177
llvm-svn: 60175
llvm-svn: 60174
FindAvailableLoadedValue
llvm-svn: 60169
llvm-svn: 60168
if it has it.
llvm-svn: 60167
llvm-svn: 60166
inlined" message.
llvm-svn: 60165
llvm-svn: 60164
llvm-svn: 60163
llvm-svn: 60162
just simple threading.
llvm-svn: 60157
llvm-svn: 60156
llvm-svn: 60149
1. Make it fold blocks separated by an unconditional branch.  This enables
   jump threading to see a broader scope.
2. Make jump threading able to eliminate locally redundant loads when they
   feed the branch condition of a block.  This frequently occurs due to
   reg2mem running.
3. Make jump threading able to eliminate *partially redundant* loads when
   they feed the branch condition of a block.  This is common in code with
   lots of loads and stores like C++ code and 255.vortex.
This implements thread-loads.ll and rdar://6402033.
Per the fixme's, several pieces of this should be moved into Transforms/Utils.
llvm-svn: 60148
llvm-svn: 60145
llvm-svn: 60141
add, and, xor, etc.) because materializing an immediate in a register is expensive in terms of code size.
e.g.
movl 4(%esp), %eax
addl $4, %eax
is 2 bytes shorter than
movl $4, %eax
addl 4(%esp), %eax
llvm-svn: 60139
llvm-svn: 60137
the conditional for the BRCOND statement. For instance, it will generate:
    addl %eax, %ecx
    jo LOF
instead of
    addl %eax, %ecx
    ; About 10 instructions to compare the signs of LHS, RHS, and sum.
    jl LOF
llvm-svn: 60123
performance in most cases on the Grawp tester, but does speed some 
things up (like shootout/hash by 15%).  This also doesn't impact 
compile time in a noticeable way on the Grawp tester.
It also, of course, gets the testcase it was designed for right :)
llvm-svn: 60120
llvm-svn: 60110
llvm-svn: 60102
Custom lower AND, OR, XOR bitwise operations.
llvm-svn: 60098
llvm-svn: 60095
and the LiveInterval.h top-level comment accordingly. This fixes blocks
having spurious live-in registers in boundary cases.
llvm-svn: 60092
llvm-svn: 60088
current location in the file the stream is writing to.
llvm-svn: 60085
heuristic: the value is already live at the new memory operation if
it is used by some other instruction in the memop's block.  This is
cheap and simple to compute (more so than full liveness).
This improves the new heuristic even more.  For example, it cuts two
out of three new instructions out of 255.vortex:DbmFileInGrpHdr, 
which is one of the functions that the heuristic regressed.  This
overall eliminates another 40 instructions from 403.gcc and visibly
reduces register pressure in 255.vortex (though this only actually
ends up saving the 2 instructions from the whole program).
llvm-svn: 60084
__attribute__ notation which is supported on more platforms.
llvm-svn: 60083
phrased in terms of liveness instead of as a horrible hack.  :)
In practice, this doesn't change the generated code for either 
255.vortex or 403.gcc, but it could cause minor code changes in 
theory.  This is framework for coming changes.
llvm-svn: 60082
llvm-svn: 60080
llvm-svn: 60076
-enable-smarter-addr-folding to llc) that gives CGP a better
cost model for when to sink computations into addressing modes.
The basic observation is that sinking increases register 
pressure when part of the addr computation has to be available
for other reasons, such as having a use that is a non-memory
operation.  In cases where it works, it can substantially reduce
register pressure.
This code is currently an overall win on 403.gcc and 255.vortex
(the two things I've been looking at), but there are several 
things I want to do before enabling it by default:
1. This isn't doing any caching of results, so it is much slower 
   than it could be.  It currently slows down release-asserts llc 
   by 1.7% on 176.gcc: 27.12s -> 27.60s.
2. This doesn't think about inline asm memory operands yet.
3. The cost model botches the case when the needed value is live
   across the computation for other reasons.
I'll continue poking at this, and eventually turn it on as llcbeta.
llvm-svn: 60074
first before trying to convert it to an integer.
llvm-svn: 60072
optimize addressing modes.  This allows us to optimize things like isel-sink2.ll
into:
	movl	4(%esp), %eax
	cmpb	$0, 4(%eax)
	jne	LBB1_2	## F
LBB1_1:	## TB
	movl	$4, %eax
	ret
LBB1_2:	## F
	movzbl	7(%eax), %eax
	ret
instead of:
_test:
	movl	4(%esp), %eax
	cmpb	$0, 4(%eax)
	leal	4(%eax), %eax
	jne	LBB1_2	## F
LBB1_1:	## TB
	movl	$4, %eax
	ret
LBB1_2:	## F
	movzbl	3(%eax), %eax
	ret
This shrinks (e.g.) 403.gcc from 1133510 to 1128345 lines of .s.
Note that the 2008-10-16-SpillerBug.ll testcase is dubious at best, I doubt
it is really testing what it thinks it is.
llvm-svn: 60068
fixes the leakage of those strings and avoids the creation of such strings in static constructors (should result in a small improvement in startup time)
llvm-svn: 60064
differ. Thanks, Duncan.
llvm-svn: 60043
if the semantics of the float type do not allow arithmetic.
llvm-svn: 60042
llvm-svn: 60041
(a) Remove conditionally removed code in SelectXAddr. Basically, hope for the
    best that the A-form and D-form address predicates catch everything before
    the code decides to emit an X-form address.
(b) Expand vector store test cases to include the usual suspects.
llvm-svn: 60034
llvm-svn: 60016
llvm-svn: 60015