| author | Sanjay Patel <spatel@rotateright.com> | 2019-02-01 14:14:47 +0000 |
|---|---|---|
| committer | Sanjay Patel <spatel@rotateright.com> | 2019-02-01 14:14:47 +0000 |
| commit | be23a91fcd9870254a8c0f70f8f2cd1b3835f438 (patch) | |
| tree | b962e265d76d66ba0efe60dc4988e3ad9bcec316 /llvm/lib/Support | |
| parent | d9e66e1b44cfdbecde9143c112a93490c8c0223f (diff) | |
| download | bcm5719-llvm-be23a91fcd9870254a8c0f70f8f2cd1b3835f438.tar.gz bcm5719-llvm-be23a91fcd9870254a8c0f70f8f2cd1b3835f438.zip | |
[InstCombine] try to reduce x86 addcarry to generic uaddo intrinsic
If we can reduce the x86-specific intrinsic to the generic op, it allows existing
simplifications and value tracking folds. AFAICT, this always results in x86 codegen
identical to the non-reduced case...which should be true because we semi-generically
(too aggressively IMO) convert to llvm.uadd.with.overflow in CGP, so the DAG/isel must
already combine/lower this intrinsic as expected.
This isn't quite what was requested in:
https://bugs.llvm.org/show_bug.cgi?id=40486
...but we want to have these kinds of folds early for efficiency and to enable greater
simplifications. For the case in the bug report where we have:
_addcarry_u64(0, ahi, 0, &ahi)
...this gets completely simplified away in IR.
Differential Revision: https://reviews.llvm.org/D57453
llvm-svn: 352870
Diffstat (limited to 'llvm/lib/Support')
0 files changed, 0 insertions, 0 deletions

