llvm-svn: 148217
llvm-svn: 148205
non-determinism in the 32 bit dragonegg buildbot.  Original commit
message:
Only emit the Leh_func_endN symbol when needed.
llvm-svn: 148191
llvm-svn: 148175
llvm-svn: 148174
llvm-svn: 148173
llvm-svn: 148172
llvm-svn: 148171
be split up later.
llvm-svn: 148170
live across BBs before register allocation. This miscompiled 197.parser
when a cmp + b pair was optimized to a cbnz instruction even though the CPSR
def is live into a successor:
        cbnz    r6, LBB89_12
...
LBB89_12:
        ble     LBB89_1
The fix consists of two parts. 1) Teach LiveVariables that some unallocatable
registers might be live-outs, so don't mark their last use as a kill if they
are. 2) The ARM constant-pool island pass shouldn't form cbz / cbnz if the
conditional branch does not kill CPSR.
rdar://10676853
llvm-svn: 148168
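A minimal standalone sketch of the part-2 check, with illustrative types
rather than the actual MachineInstr API: a cmp + conditional branch may only
be folded into cbz/cbnz when the branch's CPSR use is a kill, i.e. the flags
are not live into a successor.

  #include <cassert>

  // Toy model of a conditional branch's relationship to the CPSR flags.
  struct CondBranch {
    bool UsesCPSR;   // the branch is predicated on the flags
    bool KillsCPSR;  // its CPSR use is the last one; flags are not live-out
  };

  // Forming cbz/cbnz folds the flag-setting cmp away, so it is only safe when
  // no successor (such as the LBB89_12 block above) still reads CPSR.
  bool canFormCBZ(const CondBranch &Br) {
    return Br.UsesCPSR && Br.KillsCPSR;
  }

  int main() {
    assert(!canFormCBZ({true, false})); // CPSR live into a successor: reject
    assert(canFormCBZ({true, true}));   // flags die at the branch: fold is ok
    return 0;
  }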
llvm-svn: 148156
llvm-svn: 148150
llvm-svn: 148143
llvm-svn: 148105
llvm-svn: 148103
llvm-svn: 148102
overly conservative. It was concerned about cases where it would prohibit
folding simple [r, c] addressing modes, e.g.
  ldr r0, [r2]
  ldr r1, [r2, #4]
=>
  ldr r0, [r2], #4
  ldr r1, [r2]
Change the logic to look for such cases, which allows it to form indexed
memory ops more aggressively.
rdar://10674430
llvm-svn: 148086
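As an illustration only (not from the commit), the usual source pattern that
benefits is a sequential walk, where folding the pointer increment into the
load gives the post-indexed form shown above:

  // Each iteration's load can become "ldr rX, [rP], #4" on ARM, folding the
  // pointer bump into the load instead of keeping a separate add.
  int sum(const int *p, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
      s += p[i];
    return s;
  }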
llvm-svn: 148065
The registers are placed into the saved registers list in the reverse order,
which is why the original loop was written to loop backwards.
llvm-svn: 148064
Promote for those operations.
Sorry, no test case yet.
llvm-svn: 148050
killed registers are needed below the insertion point, then unset the kill
marker.
Sorry I'm not able to find a reduced test case.
rdar://10660944
llvm-svn: 148043
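A rough sketch of the rule this fragment describes, using made-up containers
rather than MachineBasicBlock: if a register marked killed is still read at or
below the insertion point before being redefined, the old kill marker is stale
and must be cleared.

  #include <vector>

  struct Instr {
    std::vector<unsigned> Uses; // registers read
    std::vector<unsigned> Defs; // registers written
  };

  // Returns true when Reg is still needed below InsertPos, in which case the
  // kill marker above the insertion point has to be unset.
  bool isUsedBelow(const std::vector<Instr> &Block, size_t InsertPos,
                   unsigned Reg) {
    for (size_t i = InsertPos; i < Block.size(); ++i) {
      for (unsigned U : Block[i].Uses)
        if (U == Reg)
          return true;            // still read: the kill flag must go away
      for (unsigned D : Block[i].Defs)
        if (D == Reg)
          return false;           // redefined first: the old kill was fine
    }
    return false;
  }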
llvm-svn: 148033
llvm-svn: 148031
llvm-svn: 147979
This helper method is too simplistic for RAGreedy.
llvm-svn: 147976
llvm-svn: 147975
No functional change.
llvm-svn: 147972
are used.
When we load the v12i32 type, the GenWidenVectorLoads method generates two
loads, v8i32 and v4i32, and attempts to use CONCAT_VECTORS to join them. In
this fix I concatenate undef values to widen the smaller value. The test
"widen_load-2.ll" also exposes this bug on AVX.
llvm-svn: 147964
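A standalone model of the fix (names are mine, not the legalizer's): the
narrower piece is padded with undef lanes so it matches the wider piece before
the two are joined.

  #include <cassert>
  #include <vector>

  constexpr int Undef = -1; // stand-in for an UNDEF lane in the DAG

  // Pad a narrower vector with undef lanes so it can be concatenated with a
  // wider one, mirroring what the fix does before CONCAT_VECTORS.
  std::vector<int> widenWithUndef(std::vector<int> V, size_t Width) {
    while (V.size() < Width)
      V.push_back(Undef);
    return V;
  }

  int main() {
    // v12i32 is loaded as a v8i32 piece and a v4i32 piece; the v4i32 piece is
    // widened before the concatenation.
    std::vector<int> Hi(4, 2);
    std::vector<int> Widened = widenWithUndef(Hi, 8);
    assert(Widened.size() == 8 && Widened[4] == Undef);
    return 0;
  }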
detect a pattern which can be implemented with a small 'shl' embedded in
the addressing mode scale. This happens in real code as follows:
  unsigned x = my_accelerator_table[input >> 11];
Here we have some lookup table that we index using the high bits of
'input'. Each entry in the table is 4 bytes, which means this
implicitly gets turned into (once lowered out of a GEP):
  *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));
The shift right followed by a shift left is canonicalized to a smaller
shift right and masking off the low bits. That hides the shift right
which x86 has an addressing mode designed to support. We now detect
masks of this form, and produce the longer shift right followed by the
proper addressing mode. In addition to saving a (rather large)
instruction, this also reduces stalls in Intel chips on benchmarks I've
measured.
In order for all of this to work, one part of the DAG needs to be
canonicalized *still further* than it currently is. This involves
removing pointless 'trunc' nodes between a zextload and a zext. Without
that, we end up generating spurious masks and hiding the pattern.
llvm-svn: 147936
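The canonicalization being matched can be checked with plain integer
arithmetic; this small test (mine, not part of the commit) confirms that the
mask form the DAG produces computes the same offset as the shift pair the
addressing mode wants, for the 4-byte-entry table above.

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint32_t input : {0u, 0x7FFu, 0x800u, 0xDEADBEEFu, 0xFFFFFFFFu}) {
      // What the source expresses: index by the high bits, scale by 4.
      uint32_t shifts = (input >> 11) << 2;
      // What the DAG canonicalizes it to: a smaller shift right plus a mask,
      // which hides the (shift, scale) pair from the x86 addressing mode.
      uint32_t masked = (input >> 9) & ~uint32_t(3);
      assert(shifts == masked);
    }
    return 0;
  }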
Consider this code:
int h() {
  int x;
  try {
    x = f();
    g();
  } catch (...) {
    return x+1;
  }
  return x;
}
The variable x is undefined on the first edge to the landing pad, but it
has the f() return value on the second edge to the landing pad.
SplitAnalysis::getLastSplitPoint() would assume that the return value
from f() was live into the landing pad when f() throws, which is of
course impossible.
Detect these cases, and treat them as if the landing pad wasn't there.
This allows spill code to be inserted after the function call to f().
<rdar://problem/10664933>
llvm-svn: 147912
Delete the alternative implementation in LiveIntervalAnalysis.
These functions computed the same thing, but SplitAnalysis caches the
result.
llvm-svn: 147911
the physical registers are not allocatable.
llvm-svn: 147902
llvm-svn: 147884
of several newly un-defaulted switches. This also helps optimizers
(including LLVM's) recognize that every case is covered, and we should
assume as much.
llvm-svn: 147861
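A small example of the style this enables (illustrative, not code from the
tree): with no default on a fully covered switch, the compiler can warn when a
new enumerator appears and can assume one of the listed cases is always taken.

  enum class Kind { Scalar, Vector, Pointer };

  const char *name(Kind K) {
    // No default: adding a Kind enumerator produces an "enumeration value not
    // handled in switch" warning here instead of silently hitting a default.
    switch (K) {
    case Kind::Scalar:  return "scalar";
    case Kind::Vector:  return "vector";
    case Kind::Pointer: return "pointer";
    }
    return "unreachable"; // quiets "control reaches end of function"
  }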
llvm-svn: 147855
using BUILD_VECTORS we may be using a BV of a different type. Make sure to cast it back.
llvm-svn: 147851
define physical registers. It's currently very restrictive, only catching
cases where the CE is in an immediate (and only) predecessor. But it catches
a surprisingly large number of cases.
rdar://10660865
llvm-svn: 147827
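A schematic of the restriction as stated, with hypothetical types: the earlier
instruction is only reused when its block is the candidate block's one and
only immediate predecessor, which leaves a single path to scan for clobbers of
the physical register.

  #include <vector>

  struct Block {
    std::vector<const Block *> Preds; // immediate predecessors in the CFG
  };

  // CSE of an instruction that defines a physical register is only attempted
  // when the defining block is the sole predecessor of the using block.
  bool isSafePredecessorForPhysRegCSE(const Block &UseBB, const Block &DefBB) {
    return UseBB.Preds.size() == 1 && UseBB.Preds[0] == &DefBB;
  }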
llvm-svn: 147820
safely proven not to have been clobbered. No small test case possible.
llvm-svn: 147751
llvm-svn: 147733
subc, turn it into a sub. Turn (subc x, x) into 0 with no borrow. Turn
(subc x, 0) into x with no borrow. Turn (subc -1, x) into (xor x, -1) with no
borrow. Turn a sube with no borrow-in into a subc.
llvm-svn: 147728
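These folds can be sanity-checked with a tiny model of subc/sube where the
borrow-out is set when the subtraction wraps; this is a sketch of mine, not
the DAGCombiner code.

  #include <cassert>
  #include <cstdint>
  #include <utility>

  // Subtract producing a borrow-out: borrow is set when b > a.
  std::pair<uint32_t, bool> subc(uint32_t a, uint32_t b) {
    return {a - b, b > a};
  }

  // Subtract with a borrow-in as well as a borrow-out.
  std::pair<uint32_t, bool> sube(uint32_t a, uint32_t b, bool borrowIn) {
    return {uint32_t(a - b - borrowIn), uint64_t(b) + borrowIn > a};
  }

  int main() {
    for (uint32_t x : {0u, 1u, 7u, 0x80000000u, 0xFFFFFFFFu}) {
      assert(subc(x, x) == std::pair<uint32_t, bool>(0, false));    // x - x
      assert(subc(x, 0) == std::pair<uint32_t, bool>(x, false));    // x - 0
      assert(subc(~0u, x) == std::pair<uint32_t, bool>(~x, false)); // -1 - x
      assert(sube(x, 5, false) == subc(x, 5)); // no borrow-in: sube == subc
    }
    return 0;
  }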
Reserved registers don't have proper live ranges, their LiveInterval
simply has a snippet of liveness for each def.  Virtual registers with a
single value that is a copy of a reserved register (typically %esp) can
be coalesced with the reserved register if the live range doesn't
overlap any reserved register defs.
When coalescing with a reserved register, don't modify the reserved
register live range.  Just leave it as a bunch of dead defs.  This
eliminates quadratic coalescer behavior in i386 functions with many
function calls.
PR11699
llvm-svn: 147726
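A rough standalone sketch of the overlap test described above (toy slot
indexes, not LiveIntervals): the virtual register may be merged into the
reserved register only if none of the reserved register's defs fall inside the
virtual register's live segments, and the reserved register's own dead defs
are left untouched.

  #include <vector>

  struct Segment { unsigned Start, End; }; // half-open [Start, End)

  // Boundary handling is elided; the point is only the overlap test.
  bool canCoalesceWithReserved(const std::vector<Segment> &VRegLive,
                               const std::vector<unsigned> &ReservedDefs) {
    for (const Segment &S : VRegLive)
      for (unsigned Def : ReservedDefs)
        if (Def > S.Start && Def < S.End)
          return false; // a reserved-register def clobbers the copied value
    return true;
  }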
llvm-svn: 147725
up so the branch folding pass can't use the scavenger. :-(  This doesn't break
anything currently. It just means targets which do not carefully update kill
markers cannot run the post-RA scheduler (not new, it has always been the
case). We should fix this at some point since it's really hacky.
llvm-svn: 147719
opportunities that only present themselves after late optimizations
such as tail duplication, e.g.:
## BB#1:
        movl    %eax, %ecx
        movl    %ecx, %eax
        ret
The register allocator also leaves some of them around (due to false
dependencies between copies from phi-elimination, etc.)
This required some changes in codegen passes. The post-RA scheduler and the
pseudo-instruction expansion passes have been moved after branch folding
and tail merging. They used to run before branch folding because it did
not always update block live-ins. That's fixed now. The pass change makes
sense independently, since we want to properly schedule instructions after
branch folding / tail duplication.
rdar://10428165
rdar://10640363
llvm-svn: 147716
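A toy model of the simplest case the new pass cleans up (mine, not the pass
itself): a copy whose destination already holds its source's value, because of
an earlier copy in the other direction with no intervening redefinition, is
redundant.

  #include <string>
  #include <vector>

  struct Copy { std::string Dst, Src; };

  // Straight-line block of nothing but copies: "copy b <- a" followed by
  // "copy a <- b" with neither register redefined in between is a no-op,
  // like the movl %eax, %ecx / movl %ecx, %eax pair above.
  std::vector<size_t> findRedundantCopies(const std::vector<Copy> &Block) {
    std::vector<size_t> Dead;
    for (size_t i = 0; i < Block.size(); ++i) {
      for (size_t j = 0; j < i; ++j) {
        if (Block[j].Dst != Block[i].Src || Block[j].Src != Block[i].Dst)
          continue;
        bool Clobbered = false;
        for (size_t k = j + 1; k < i; ++k)
          if (Block[k].Dst == Block[i].Src || Block[k].Dst == Block[i].Dst)
            Clobbered = true;
        if (!Clobbered) {
          Dead.push_back(i);
          break;
        }
      }
    }
    return Dead;
  }

  int main() {
    std::vector<Copy> BB = {{"ecx", "eax"}, {"eax", "ecx"}};
    return findRedundantCopies(BB).size() == 1 ? 0 : 1;
  }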
llvm-svn: 147703
llvm-svn: 147696
to bleed from the eyes.
llvm-svn: 147695
llvm-svn: 147694
llvm-svn: 147693