… some checks to allow better early out.
llvm-svn: 154309

llvm-svn: 154308

llvm-svn: 154307
… happen.
llvm-svn: 154305

llvm-svn: 154299

llvm-svn: 154297
… when -ffast-math, i.e. don't just always do it if the reciprocal can
be formed exactly. There is already an IR-level transform that does
that, and it does it more carefully.
llvm-svn: 154296
… width and the input vector widths don't match. There is no need to check
that the min and max are in range before calculating the start index; the
range check after the start index is computed is sufficient. There is also no
need to handle an extract from the beginning differently.
llvm-svn: 154295
… in TargetLowering. There was already a FIXME about this location being
odd. The interface is simplified as a consequence. This will also make
it easier to change TLS models when compiling with PIE.
llvm-svn: 154292
… where a chain outside of the loop block-set ended up in the worklist for
scheduling as part of the contiguous loop. However, asserting that the first
block in the chain is in the loop-set isn't a valid check -- we may be
forced to drag a chain into the worklist due to one block in the chain
being part of the loop even though the first block is *not* in the loop.
This occurs when we have been forced to form a chain early due to
un-analyzable branches.

No test case here, as I have no idea how to even begin reducing one, and
it would be hopelessly fragile. We would have to somehow end up with a loop
header of an inner loop which is a successor of a basic block with an
unanalyzable pair of branch instructions. Ow. Self-host triggers it, so
it is unlikely to regress.

This at least gets block placement back to passing self-host and the test
suite. There are still a lot of slowdowns that I don't like coming out of
block placement, although there are now also a lot of speedups. =[ I'm
seeing swings in both directions of up to 10%. I'm going to try to find
time to dig into this and see if we can turn this on for 3.1, as it does
a really good job of cleaning up after some loops that degraded with the
inliner changes.
llvm-svn: 154287
… debugging.
llvm-svn: 154286
… remove patterns for selecting the intrinsic. The same was already done for
AVX1.
llvm-svn: 154272
llvm-svn: 154267
… shuffle node because it could introduce new shuffle nodes that were not
   supported efficiently by the target.
2. Add a more restrictive shuffle-of-shuffle optimization for cases where the
   second shuffle reverses the transformation of the first shuffle.
llvm-svn: 154266
… reciprocal if converting to the reciprocal is exact. Do it even if inexact
under -ffast-math. This substantially speeds up ac.f90 from the Polyhedron
benchmarks.
llvm-svn: 154265
This enables debuggers to see which lines are interesting for a
breakpoint, rather than any line that starts a function.
rdar://9852092
llvm-svn: 154120
LSR always tries to make the ICmp in the loop latch use the incremented
induction variable. This allows the induction variable to be kept in a
single register.

When the induction variable limit is equal to the stride,
SimplifySetCC() would break LSR's hard work by transforming:

   (icmp (add iv, stride), stride) --> (cmp iv, 0)

This forced us to use lea for the IV update, preventing the simpler
incl+cmp.

<rdar://problem/7643606>
<rdar://problem/11184260>
llvm-svn: 154119
… during instruction selection.
llvm-svn: 154113
… register indices on the source registers. No simple test case.
llvm-svn: 154051
llvm-svn: 154039

llvm-svn: 154032
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.
llvm-svn: 154011
llvm-svn: 153996

… enum values.
llvm-svn: 153984
… would crash if it encountered a 1-element VSELECT. The solution is slightly
more complicated than just creating a SELECT, as we have to mask or
sign-extend the vector condition if it had different boolean contents from
the scalar condition. Fixes <rdar://problem/11178095>.
llvm-svn: 153976
llvm-svn: 153975
When folding X == X we need to check getBooleanContents() to determine if the
result is a vector of ones or a vector of negative ones.

I tried creating a test case, but the problem seems to only be exposed with a
much older version of clang (around r144500).

rdar://10923049
llvm-svn: 153966
… might have more than 19 operands. Add a testcase to make sure I
never screw that up again.
Part of rdar://11026482
llvm-svn: 153961
… brace) so that we get more accurate line-number information about the
declaration of a given function and the line where the function
first starts.
Part of rdar://11026482
llvm-svn: 153916
… VirtRegMap is NULL. Also changed it in this case to just avoid updating the
map; live ranges and intervals will still get updated and created.
llvm-svn: 153914
… backend, not just libCodeGen.
llvm-svn: 153906
This is just the fallback tie-breaker ordering; the main allocation
order is still descending size.

Patch by Shamil Kurmangaleev!
llvm-svn: 153904
TargetInstrInfo, MachineRegisterInfo, LiveIntervals, and VirtRegMap are all
passed into the constructor and stored as members, instead of being passed in
to each method.
llvm-svn: 153903
… operations, and prevent the DAGCombiner from turning them into bitwise
operations if they do.
llvm-svn: 153901
… operands. Make TryInstructionTransform return false to reflect this.
Fixes PR11861.
llvm-svn: 153892
llvm-svn: 153880
… shuffles.
Do not try to optimize swizzles of shuffles if the source shuffle has more
than a single user, except when the source shuffle is also a swizzle.
llvm-svn: 153864
… getInstructionName and the static data it contains, since the same tables
are already in MCInstrInfo.
llvm-svn: 153860
1. Simplify xor/and/or (bitcast(A), bitcast(B)) -> bitcast(op (A,B))
   (and also scalar_to_vector).
2. Xor/and/or are indifferent to the swizzle operation (shuffle of one src).
   Simplify xor/and/or (shuff(A), shuff(B)) -> shuff(op (A, B))
3. Optimize swizzles of shuffles: shuff(shuff(x, y), undef) -> shuff(x, y).
4. Fix an X86ISelLowering optimization which was very bitcast-sensitive.

Code which was previously compiled to this:

  movd    (%rsi), %xmm0
  movdqa  .LCPI0_0(%rip), %xmm2
  pshufb  %xmm2, %xmm0
  movd    (%rdi), %xmm1
  pshufb  %xmm2, %xmm1
  pxor    %xmm0, %xmm1
  pshufb  .LCPI0_1(%rip), %xmm1
  movd    %xmm1, (%rdi)
  ret

Now compiles to this:

  movl    (%rsi), %eax
  xorl    %eax, (%rdi)
  ret

llvm-svn: 153848
llvm-svn: 153846

llvm-svn: 153827
This is the CodeGen equivalent of r153747. I tested that there is no
noticeable performance difference with any combination of -O0/-O2/-g when
compiling gcc as a single compilation unit.
llvm-svn: 153817
… here but it has no other uses, then we have a problem. E.g.,

  int foo (const int *x) {
    char a[*x];
    return 0;
  }

If we assign 'a' a vreg and fast-isel later on has to use the selection
DAG isel, it will want to copy the value to the vreg. However, there are
no uses, which goes counter to what selection DAG isel expects.

<rdar://problem/11134152>
llvm-svn: 153705
… http://llvm.org/docs/SourceLevelDebugging.html#objcproperty
including type and DECL. Expand the metadata needed accordingly.
rdar://11144023
llvm-svn: 153639
Some targets still mess up the liveness information, but that isn't
verified after MRI->invalidateLiveness().

The verifier can still check other useful things like register classes
and CFG, so it should be enabled after all passes.
llvm-svn: 153615
The late scheduler depends on accurate liveness information if it is
breaking anti-dependencies, so we should be able to verify it.

Relax the terminator checking in the machine code verifier so it can
handle the basic blocks created by if-conversion.
llvm-svn: 153614
llvm-svn: 153599
Branch folding invalidates liveness and disables liveness verification
on some targets.
llvm-svn: 153597
Extract the liveness verification into its own method.

This makes it possible to run the machine code verifier after liveness
information is no longer required to be valid.
llvm-svn: 153596
This avoids the silly double search:

  if (isLiveIn(Reg))
    removeLiveIn(Reg);

llvm-svn: 153592