A min/max operation is represented by a select(cmp(lt/le/gt/ge, X, Y), X, Y)
sequence in LLVM. If we see such a sequence, we can treat it just like any
other commutative binary instruction and reduce it.
This appears to help bzip2 by about 1.5% on an imac12,2.
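A C loop of the shape being recognized might look like this (a hedged
sketch; the function and names are hypothetical, not from the commit):

  // A min-reduction: the compare/choose pair in the loop body is emitted
  // as select(cmp(lt, a[i], m), a[i], m) in LLVM IR.
  int min_elem(const int *a, int n) {  // assumes n >= 1
    int m = a[0];
    for (int i = 1; i < n; i++)
      m = (a[i] < m) ? a[i] : m;
    return m;
  }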
radar://12960601
llvm-svn: 179773
Fixes PR15748.
llvm-svn: 179757
Don't classify idiv/udiv as a reduction operation. Integer division is lossy.
For example: (1 / 2) * 4 != 4 / 2.
Example:
  int a[] = { 2, 5, 2, 2 };
  int x = 80;
  for (int i = 0; i < 4; i++)
    x /= a[i];
Scalar:
  x /= 2  // = 40
  x /= 5  // = 8
  x /= 2  // = 4
  x /= 2  // = 2
Vectorized:
  <80, 1> / <2, 5>  // = <40, 0>
  <40, 0> / <2, 2>  // = <20, 0>
Combining the lanes then gives 20 * 0 = 0 instead of 2.
radar://13640654
llvm-svn: 179381
Pass down the fact that an operand is going to be a vector of constants.
This should bring the performance of MultiSource/Benchmarks/PAQ8p/paq8p on x86
back. It had degraded to scalar performance due to my previous shift cost change
that made all shifts expensive on x86.
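A sketch of the kind of loop affected (hypothetical example): when the
shift amount is a compile-time constant, the vectorized shift's second
operand is a constant splat vector, which x86 handles cheaply, and the
cost model can now see that.

  void shift_down(unsigned *a, int n) {
    for (int i = 0; i < n; i++)
      a[i] >>= 8;  // vectorizes to a shift by the constant vector <8, 8, 8, 8>
  }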
radar://13576547
llvm-svn: 178809
instruction counts.
llvm-svn: 178459
Also remove some unneeded function attributes.
llvm-svn: 177114
llvm-svn: 177102
We generate a select with a vectorized condition argument when the condition is
NOT loop invariant, not the other way around.
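Sketches of the two cases (hypothetical examples):

  // Condition varies per iteration (not loop invariant): the vectorized
  // select takes a vector condition produced by a vector compare.
  void clamp(int *a, const int *b, int n) {
    for (int i = 0; i < n; i++)
      a[i] = (b[i] > 255) ? 255 : b[i];
  }

  // Condition is loop invariant: a scalar condition suffices.
  void pick(int *a, const int *b, const int *c, int n, int k) {
    for (int i = 0; i < n; i++)
      a[i] = (k > 0) ? b[i] : c[i];
  }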
llvm-svn: 177098
llvm-svn: 176772
We want vectorization to happen at -g. Ignore calls to the dbg.value intrinsic
and don't transfer them to the vectorized code.
radar://13378964
llvm-svn: 176768
llvm-svn: 176702
domination.
Fixes PR15344.
llvm-svn: 176701
This matters, for example, in the following matrix multiply:

  int **mmult(int rows, int cols, int **m1, int **m2, int **m3) {
    int i, j, k, val;
    for (i = 0; i < rows; i++) {
      for (j = 0; j < cols; j++) {
        val = 0;
        for (k = 0; k < cols; k++) {
          val += m1[i][k] * m2[k][j];
        }
        m3[i][j] = val;
      }
    }
    return m3;
  }

Taken from the test-suite benchmark Shootout.
We used to estimate the cost of the multiply at 2, while we actually generate 9
instructions for it and end up quite a bit slower than the scalar version (48%
on my machine).
Also, properly differentiate between AVX1 and AVX2. On AVX1 we still split the
vector into two 128-bit halves and handle the subvector muls as above with 9
instructions each. Only on AVX2 do we have a cost of 9 for v4i64.
I changed the test case in test/Transforms/LoopVectorize/X86/avx1.ll to use an
add instead of a mul, because with a mul we now no longer vectorize. I did
verify that the mul would indeed be more expensive when vectorized, with 3
kernels:

  for (i ...)
    r += a[i] * 3;
  for (i ...)
    m1[i] = m1[i] * 3;  // This matches the test case in avx1.ll

and a matrix multiply.
In each case the vectorized version was considerably slower.
radar://13304919
llvm-svn: 176403
The LoopVectorizer often runs multiple times on the same function due to inlining.
When this happens, it can vectorize the same loops multiple times, increasing code
size and adding unneeded branches.
With this patch, the vectorizer puts metadata on the scalar loops it creates during
vectorization, marking them as 'already vectorized' so that it knows to ignore them
when it sees them a second time.
PR14448.
llvm-svn: 176399
Fixes PR15384.
llvm-svn: 176366
This properly asks TargetLibraryInfo whether a call is available and, if it is,
translates it into the corresponding LLVM builtin. We don't vectorize sqrt()
yet because I'm not sure about the semantics for negative numbers. The other
intrinsics should be exact equivalents of the libm functions.
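A sketch of the kind of loop this enables (hypothetical example):
TargetLibraryInfo confirms that floorf() is available, the call is mapped
to the llvm.floor intrinsic, and the loop can then be vectorized.

  #include <math.h>

  void floor_all(float *a, int n) {
    for (int i = 0; i < n; i++)
      a[i] = floorf(a[i]);  // recognized as llvm.floor
  }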
Differential Revision: http://llvm-reviews.chandlerc.com/D465
llvm-svn: 176188
llvm-svn: 175964
llvm-svn: 175898
Store the load/store instructions together with their values and inspect
them using alias analysis to make sure they don't alias, since the GEP
pointer operand doesn't take the offset into account.
We try hard not to add any extra cost to loads and stores that don't
overlap on global values; AA is *only* consulted if all of the previous
attempts failed.
We use the biggest vector register size as the stride for the
vectorization access, as we're being conservative and the cost model
(which calculates the real vectorization factor) is only run after the
legalization phase.
We might rethink this relationship in the future, but for now,
I'd rather be safe than sorry.
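A sketch of the aliasing problem (hypothetical example): the two pointer
arguments may refer to overlapping memory, and comparing GEP offsets alone
cannot prove otherwise, so alias analysis is consulted as a last resort.

  void add_one(const int *a, int *b, int n) {
    for (int i = 0; i < n; i++)
      b[i] = a[i] + 1;  // a and b may overlap; AA must prove independence
  }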
llvm-svn: 175818
metadata, sorry.
llvm-svn: 175311
llvm-svn: 174380
requires +Asserts.
llvm-svn: 174379
In the loop vectorizer cost model, we used to ignore stores/loads of a pointer
type when computing the widest type within a loop. This meant that if we had
only stores/loads of pointers in a loop we would return a widest type of 8 bits
(instead of 32 or 64 bits) and therefore a vectorization factor that was too big.
Now, if we see a consecutive store/load of pointers, we use the size of a pointer
(from the data layout).
This problem occurred in SingleSource/Benchmarks/Shootout-C++/hash.cpp (the
reduced test case is the first test in vector_ptr_load_store.ll).
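A sketch in the spirit of the reduced test case (hypothetical example):
the only memory traffic is loads and stores of pointers, so the widest
type in the loop should be pointer-sized.

  void copy_ptrs(int **dst, int **src, int n) {
    for (int i = 0; i < n; i++)
      dst[i] = src[i];  // widest type is the pointer size, not 8 bits
  }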
radar://13139343
llvm-svn: 174377
breakage with builds without X86 support.
llvm-svn: 174052
Change ARMBaseTargetMachine to return ARMTargetLowering instead of
the generic one (similar to the x86 code).
The tests show which casts require extra instructions and which cost
zero. Downcasts to 16 bits are not lowered in NEON, so those costs are
not there yet.
llvm-svn: 173849
to a command line switch.
llvm-svn: 173837
contain pointers that count backwards.
For example, this is the hot loop in BZIP:

  do {
    m = *--p;
    *p = ( ... );
  } while (--n);
llvm-svn: 173219
We ignore the CPU frontend and focus on pipeline utilization. We do this because
we don't have a good way to estimate the loop body size at the IR level.
llvm-svn: 172964
llvm-svn: 172963
This separates the check for "too few elements to run the vector loop" from the
"memory overlap" check, giving a lot nicer code and allowing to skip the memory
checks when we're not going to execute the vector code anyways. We still leave
the decision of whether to emit the memory checks as branches or setccs, but it
seems to be doing a good job. If ugly code pops up we may want to emit them as
separate blocks too. Small speedup on MultiSource/Benchmarks/MallocBench/espresso.
Most of this is legwork to allow multiple bypass blocks while updating PHIs,
dominators and loop info.
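In rough C terms the new guard structure looks something like this (a
sketch only, with VF = 4; the real transform rewrites the IR CFG, and the
overlap test is shown as a plain pointer comparison for brevity):

  void add_one(const int *a, int *b, int n) {
    int i = 0;
    // Trip-count check runs first; the overlap check is skipped
    // whenever the vector loop could not execute anyway.
    if (n >= 4 && (a + n <= b || b + n <= a)) {
      for (; i + 4 <= n; i += 4) {  // stands in for the vector loop
        b[i]     = a[i]     + 1;
        b[i + 1] = a[i + 1] + 1;
        b[i + 2] = a[i + 2] + 1;
        b[i + 3] = a[i + 3] + 1;
      }
    }
    for (; i < n; i++)  // scalar loop, doubling as the remainder loop
      b[i] = a[i] + 1;
  }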
llvm-svn: 172902
Should fix the arm buildbot (which only builds the arm target).
llvm-svn: 172611
and i16).
llvm-svn: 172348
the target if it supports the different CAST types. We didn't do this
on X86 because of the different register sizes and types, but on ARM
this makes sense.
llvm-svn: 172245
order to select the max vectorization factor.
We don't have a detailed analysis of which values are vectorized and which stay
scalar in the vectorized loop, so we use another method. We look at reduction
variables, loads and stores, which are the only ways to get information into and
out of loop iterations. If the data types are extended and truncated, the cost
model will catch the cost of the vector zext/sext/trunc operations.
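A sketch of the heuristic at work (hypothetical example): the loads and
stores pin the widest type at 8 bits even though the arithmetic is done
in 32 bits, and the widening shows up as vector zext/trunc in the cost
model.

  void scale_bytes(unsigned char *p, int n) {
    for (int i = 0; i < n; i++)
      p[i] = (unsigned char)((p[i] * 3) / 4);  // i8 load, zext, math, trunc, i8 store
  }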
llvm-svn: 172178
A BinaryOperator can be folded to undef, and we don't want to set NSW flags on undef values.
PR14878
llvm-svn: 172079
instruction to determine the max vectorization factor.
llvm-svn: 172010
llvm-svn: 171931
vectorizer does it now.
llvm-svn: 171930
Cost Model support on ARM.
llvm-svn: 171928
llvm-svn: 171812
at once. This is a good thing, except for small loops: there, the scalar
post-loop (which runs slower) can take more time to execute than the rest of
the loop. This patch disables widening of loops with a small static trip count.
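A sketch of the problem case (hypothetical example): with a static trip
count of 3 and a vectorization factor of 4, the vector loop would never
run and every iteration would land in the slower scalar post-loop.

  void init3(int *a) {
    for (int i = 0; i < 3; i++)  // trip count too small to widen
      a[i] = 0;
  }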
llvm-svn: 171798
llvm-svn: 171584
as long as the reduction chain is used in the LHS.
PR14803.
llvm-svn: 171583
This should fix clang-native-arm-cortex-a9. Thanks Renato.
llvm-svn: 171582
Since subtraction does not commute, the loop vectorizer incorrectly vectorizes
reductions such as x = A[i] - x.
Disabling for now.
llvm-svn: 171537
1. Add code to estimate register pressure.
2. Add code to select the unroll factor based on register pressure.
3. Add bits to TargetTransformInfo to provide the number of registers.
llvm-svn: 171469
llvm-svn: 171446
llvm-svn: 171429
LCSSA PHIs may have undef values. The vectorizer updates values that are used by outside users such as PHIs.
The bug happened because undefs are not loop values. This patch handles these PHIs.
PR14725
llvm-svn: 171251
even if the read objects are unidentified.
PR14719.
llvm-svn: 171124