table
llvm-svn: 218776
No tests for omod since nothing uses it yet, but
this should get rid of the remaining annoying trailing
zeros after some instructions.
llvm-svn: 218692
llvm-svn: 218655
llvm-svn: 218609
These turn into fadds, so combine them into the target
mad node.
fadd (fadd (a, a), b) -> mad 2.0, a, b
llvm-svn: 218608
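As a quick standalone check of the identity behind the combine above (an illustration only, not the backend code): multiplying by 2.0 only adjusts the exponent, so the two forms give bit-identical results.

    #include <cassert>

    int main() {
      // (fadd (fadd a, a), b) is rewritten to (mad 2.0, a, b); both sides
      // evaluate identically because the multiply by 2.0 is exact.
      float a = 1.25f, b = -3.5f;
      assert((a + a) + b == 2.0f * a + b);
      return 0;
    }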
This has weird operand requirements, so it's worthwhile
to have very strict checks for its operands.
Add different combinations of SGPR operands.
llvm-svn: 218535
Instead of moving the first SGPR that differs from the first one seen,
legalize the operand that requires the fewest moves when one
SGPR is used for multiple operands.
This saves extra moves and is also required for some instructions
that need the same register to be used for multiple operands.
llvm-svn: 218532
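A rough sketch of the selection described above, using a made-up operand record rather than the real SIInstrInfo data structures: when only one SGPR may remain among the sources, keep the SGPR that covers the most operands, so legalizing the rest needs the fewest copies.

    #include <map>
    #include <vector>

    // Hypothetical operand record: the SGPR number an operand uses, or -1 if
    // it is already a VGPR or immediate and needs no legalization.
    struct Operand { int SGPR; };

    // Choose which SGPR to leave in place: the one used by the most operands,
    // so the fewest remaining operands have to be copied into VGPRs.
    int pickSGPRToKeep(const std::vector<Operand> &Ops) {
      std::map<int, int> UseCount;
      for (const Operand &Op : Ops)
        if (Op.SGPR >= 0)
          ++UseCount[Op.SGPR];

      int Best = -1, BestCount = 0;
      for (const auto &Entry : UseCount)
        if (Entry.second > BestCount) {
          Best = Entry.first;
          BestCount = Entry.second;
        }
      return Best; // -1 if there are no SGPR operands at all
    }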
Disable the SGPR usage restriction parts of the DAG legalizeOperands.
It should now only do immediate folding until it can be replaced
later. The real legalization work is now done by the other
SIInstrInfo::legalizeOperands.
llvm-svn: 218531
e.g. v_cndmask_b32 requires the condition operand to be an SGPR.
If one of the source operands was an SGPR, it would be counted as
the one allowed SGPR use and the condition operand would be illegally moved.
llvm-svn: 218529
No test since the current SIISelLowering::legalizeOperands
effectively hides this, and the general uses seem to only fire
on SALU instructions which don't have modifiers between
the operands.
When trying to use legalizeOperands immediately after
instruction selection, it now sees a lot more patterns
it did not see before which break on this.
llvm-svn: 218527
llvm-svn: 218487
llvm-svn: 218486
llvm-svn: 218474
llvm-svn: 218473
llvm-svn: 218457
This prevents these from failing in a future commit.
llvm-svn: 218356
We can do this now that the FixSGPRLiveRanges pass is working.
llvm-svn: 218353
This reverts commit r218254.
The global_atomics.ll test fails with asserts disabled. For some reason,
the compiler fails to produce the no-return atomic variants.
llvm-svn: 218257
llvm-svn: 218254
llvm-svn: 218165
llvm-svn: 218164
llvm-svn: 218162
llvm-svn: 218131
Just do the left shift as unsigned to avoid the UB.
llvm-svn: 218092
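For illustration (not the actual patch), the usual way to sidestep that UB: a left shift that overflows a signed type is undefined in C++, but doing the shift in the unsigned type and converting back produces the intended two's-complement bits.

    #include <cstdint>

    // UB-free left shift of a possibly negative constant: shift in the
    // unsigned type, then convert back to signed.
    int64_t shlSigned(int64_t C, unsigned Amount) {
      return static_cast<int64_t>(static_cast<uint64_t>(C) << Amount);
    }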
I'm not sure what the hardware actually does, so don't
bother trying to fold it for now.
llvm-svn: 218057
llvm-svn: 217979
Only 1 decimal place should be printed for inline immediates.
Other constants should be hex constants.
Does not include f64 tests because folding those inline
immediates currently does not work.
llvm-svn: 217964
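A minimal sketch of the printing rule described above, assuming the SI single-precision inline immediates (0.0, ±0.5, ±1.0, ±2.0, ±4.0); the helper name is made up and this is not the actual AsmPrinter code.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Inline immediates get one decimal place; everything else is printed as
    // its raw hex bit pattern.
    void printFloatOperand(float F) {
      const float Inline[] = {0.0f, 0.5f, -0.5f, 1.0f, -1.0f,
                              2.0f, -2.0f, 4.0f, -4.0f};
      for (float I : Inline)
        if (F == I) {
          std::printf("%.1f", F); // e.g. "0.5", "-4.0"
          return;
        }
      uint32_t Bits;
      std::memcpy(&Bits, &F, sizeof(Bits));
      std::printf("0x%x", static_cast<unsigned>(Bits)); // e.g. "0x41200000" for 10.0
    }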
Add some more tests to make sure better operand
choices are still made. Leave alone some cases that seem
to have no reason to ever be e64.
llvm-svn: 217789
llvm-svn: 217787
I noticed some odd-looking cases where addr64 wasn't set
when storing to a pointer in an SGPR. This seems to be intentional,
and is partially tested already.
The documentation seems to describe addr64 in terms of which registers
the addressing modifiers come from, but I would expect to always need
addr64 when using 64-bit pointers. If no offset is applied,
it makes sense not to need to worry about doing a 64-bit add
for the final address. A small immediate offset can be applied,
so is it OK to not have addr64 set if a carry is necessary when adding
the base pointer in the resource to the offset?
llvm-svn: 217785
llvm-svn: 217777
llvm-svn: 217736
The register numbers start at 0, so if only 1 register
was used, this was reported as 0.
llvm-svn: 217636
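The off-by-one in a nutshell (illustrative only; the function name is made up): registers are numbered from 0, so the count of registers used is the highest index touched plus one.

    #include <cstdio>

    // A kernel whose highest register index is 0 still uses one register.
    static unsigned numRegistersUsed(int HighestIndexUsed) {
      return HighestIndexUsed < 0 ? 0u
                                  : static_cast<unsigned>(HighestIndexUsed) + 1;
    }

    int main() {
      std::printf("%u\n", numRegistersUsed(0)); // prints 1, not 0
      return 0;
    }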
Do
(shl (add x, c1), c2) -> (add (shl x, c2), c1 << c2)
This is already done for multiplies, but since multiplies
by powers of two are turned into shifts, we also need
to handle it here.
This might want checks for isLegalAddImmediate to avoid
transforming an add of a legal immediate with one that isn't.
llvm-svn: 217610
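A quick standalone check of the identity behind the combine, done in unsigned arithmetic so the shifts are well defined (an illustration only, not the DAG combine itself):

    #include <cassert>
    #include <cstdint>

    int main() {
      // (x + c1) << c2  ==  (x << c2) + (c1 << c2), modulo 2^32.
      uint32_t x = 12345, c1 = 7, c2 = 4;
      assert(((x + c1) << c2) == ((x << c2) + (c1 << c2)));
      return 0;
    }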
Now that the operations are all implemented, we can test this sub-arch here.
Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 217595
The lost chain resulted in earlier side-effecting nodes
being deleted.
llvm-svn: 217561
llvm-svn: 217553
llvm-svn: 217379
llvm-svn: 217320
Fix a missing check and hardcoded register numbers.
llvm-svn: 217318
This fixes hitting the same negative base offset problem
that was already fixed for regular loads and stores.
llvm-svn: 217256
Round halfway cases away from zero.
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 217250
https://bugs.freedesktop.org/show_bug.cgi?id=83416
llvm-svn: 217248
Also fix a bug this exposed: when legalizing an immediate
operand, a v_mov_b32 would be created with a VSrc destination register.
llvm-svn: 217108
llvm-svn: 217041
This will help with enabling misched.
llvm-svn: 216971
llvm-svn: 216944
This is broken when a 64-bit add is only partially
moved to the VALU.
llvm-svn: 216933
If an fmul was introduced by lowering, it wouldn't be folded
into a multiply by a constant since the earlier combine would
have replaced the fmul with the fadd.
llvm-svn: 216932
We can use a negate source modifier to match
this for fsub.
llvm-svn: 216735
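The identity this relies on, as a standalone check (illustration only): a subtraction is an addition of the negated operand, which the negate source modifier expresses without an extra instruction.

    #include <cassert>

    int main() {
      // a - b == a + (-b) holds exactly in IEEE arithmetic, so fsub can reuse
      // the fadd pattern with a negate modifier on the second source.
      float a = 3.0f, b = 1.5f;
      assert(a - b == a + (-b));
      return 0;
    }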