This was only applying the more deeply nested zext pattern and was
missing the special-case code-size fold.
|
The optimization to turn an add into a sub isn't triggering when the
pattern that uses the zeroed high bits is matched.
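For illustration only, a minimal sketch of the kind of case being described; the function name and constants are assumptions, not taken from the patch. A 32-bit add of a negative constant feeds a zext to 64 bits; when the zext is folded using the known-zero high bits, the add is no longer rewritten into an equivalent subtract, even though 64 fits in the AMDGPU inline-immediate range while -64 needs a 32-bit literal.

    ; hypothetical IR: the add feeds a zext that can be folded by relying on
    ; the zeroed high bits of the 32-bit result
    define i64 @add_then_zext(i32 %x) {
      %a = add i32 %x, -64      ; -64 is outside the inline-immediate range (-16..64)
      %w = zext i32 %a to i64   ; high 32 bits are known zero
      ret i64 %w
    }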
|
This should only matter in vectors with an undef component, since a
full undef vector would have been folded out.
llvm-svn: 363941
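As a minimal sketch of the distinction (the values and function name are hypothetical, not from the patch): a vector operand with a single undef lane still reaches this code, whereas an all-undef operand such as `add <2 x i32> %v, undef` would already have been simplified away.

    ; hypothetical IR: a vector constant with one undef component
    define <2 x i32> @partial_undef(<2 x i32> %v) {
      %r = add <2 x i32> %v, <i32 1, i32 undef>
      ret <2 x i32> %r
    }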
|
Should avoid a regression from D62341.
llvm-svn: 363899
|
This will catch regressions from D62341, and show improvements from a
future patch to fix them.
llvm-svn: 363888
|
llvm-svn: 362230
|
v_{add/addc/sub/subrev/subb/subbrev}
See bug 34765: https://bugs.llvm.org//show_bug.cgi?id=34765
Reviewers: tamazov, SamWot, arsenm, vpykhtin
Differential Revision: https://reviews.llvm.org/D40088
llvm-svn: 318675
|
Currently the default C calling convention functions are treated
the same as compute kernels. Make this explicit so the default
calling convention can be changed to a non-kernel.
Converted with perl -pi -e 's/define void/define amdgpu_kernel void/'
on the relevant test directories (and undone in the one place that actually
wanted a non-kernel).
llvm-svn: 298444
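As a sketch of the mechanical rewrite described above (the function is hypothetical, not a specific test):

    ; before: default calling convention, previously treated as a kernel
    define void @store_one(i32 addrspace(1)* %out) {
      store i32 1, i32 addrspace(1)* %out
      ret void
    }

    ; after the s/define void/define amdgpu_kernel void/ conversion
    define amdgpu_kernel void @store_one(i32 addrspace(1)* %out) {
      store i32 1, i32 addrspace(1)* %out
      ret void
    }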
This is worse if the original constant is an inline immediate.
This should also be done for 64-bit adds, but requires fixing
operand folding bugs first.
llvm-svn: 293540
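A hedged illustration of the constant-range consideration (the values and function name are hypothetical): AMDGPU integer inline immediates cover -16..64, so rewriting between an add and a sub only helps when it moves the constant into that range.

    ; hypothetical 32-bit cases
    define void @inline_imm_cases(i32 %x, i32 addrspace(1)* %out) {
      ; -64 is not an inline immediate, so selecting the add directly needs a
      ; 32-bit literal; an equivalent subtract of 64 can encode 64 inline.
      %a = add i32 %x, -64
      ; -16 is already an inline immediate, so a rewrite gains nothing here.
      %b = add i32 %x, -16
      store volatile i32 %a, i32 addrspace(1)* %out
      store volatile i32 %b, i32 addrspace(1)* %out
      ret void
    }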