Commit message | Author | Age | Files | Lines

register kill info.
llvm-svn: 34692

llvm-svn: 34691

llvm-svn: 34690

llvm-svn: 34685

llvm-svn: 34684

llvm-svn: 34681

1. Add unsigned and signed versions of methods so a "bool" argument doesn't
   need to be passed in.
2. Make the various getMin/getMax functions all be inline since they are
   so simple.
3. Simplify sdiv and srem code.
llvm-svn: 34680
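The sdiv/srem simplification in item 3 rests on two's-complement semantics: sign-extend the narrow operands to a full word, divide, then truncate. A minimal sketch of that idea (helper names hypothetical, not the LLVM code):

```cpp
#include <cassert>
#include <cstdint>

// Sign-extend the low BitWidth bits of V to a full 64-bit signed value.
int64_t sextTo64(uint64_t V, unsigned BitWidth) {
    unsigned Shift = 64 - BitWidth;
    return int64_t(V << Shift) >> Shift;  // arithmetic shift restores the sign
}

// Signed divide of two BitWidth-wide values, result truncated to BitWidth.
uint64_t sdivN(uint64_t A, uint64_t B, unsigned BitWidth) {
    uint64_t Mask = BitWidth == 64 ? ~uint64_t(0)
                                   : (uint64_t(1) << BitWidth) - 1;
    return uint64_t(sextTo64(A, BitWidth) / sextTo64(B, BitWidth)) & Mask;
}
```

In 4-bit arithmetic 0xA encodes -6, so sdivN(0xA, 0x2, 4) yields 0xD, i.e. -3.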

llvm-svn: 34678

Implement review feedback:
1. Use new APInt::RoundDoubleToAPInt interface to specify the bit width so
   that we don't have to truncate or extend in constant folding.
2. Fix a pasteo in SDiv that prevented a check for overflow.
3. Fix the shift operators: undef happens when the shift amount is equal
   to the bitwidth.
llvm-svn: 34677
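The shift fix in item 3 mirrors C++'s own rule that shifting a value by its full bit width is undefined, so a full-width shift amount has to be handled explicitly. A small guard sketch (hypothetical helper, not the APInt code):

```cpp
#include <cassert>
#include <cstdint>

// Logical left shift on a BitWidth-wide value that avoids the undefined
// case the commit describes (shift amount == bit width).
uint64_t shlSafe(uint64_t V, unsigned Amt, unsigned BitWidth) {
    assert(BitWidth >= 1 && BitWidth <= 64 && Amt <= BitWidth);
    if (Amt == BitWidth)
        return 0;  // full-width shift: define the result as zero
    uint64_t Res = V << Amt;
    if (BitWidth < 64)
        Res &= (uint64_t(1) << BitWidth) - 1;  // clear bits above BitWidth
    return Res;
}
```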

2. Change RoundDoubleToAPInt to take a bit width parameter. Use that
   parameter to limit the bit width of the result.
llvm-svn: 34673
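The "limit the bit width of the result" behavior can be illustrated on the <= 64-bit case: truncate the double to an integer, then keep only the low N bits (a sketch under that assumption; the real RoundDoubleToAPInt builds an APInt of arbitrary width):

```cpp
#include <cassert>
#include <cstdint>

// Round a non-negative double into an N-bit integer value (N <= 64),
// discarding any bits above the requested width. Illustration only.
uint64_t roundDoubleToNBits(double D, unsigned BitWidth) {
    uint64_t V = static_cast<uint64_t>(D);   // truncate toward zero
    if (BitWidth < 64)
        V &= (uint64_t(1) << BitWidth) - 1;  // keep only the low BitWidth bits
    return V;
}
```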

llvm-svn: 34670

llvm-svn: 34669

llvm-svn: 34668

llvm-svn: 34667

llvm-svn: 34666

llvm-svn: 34664

widths > 64 bits.
llvm-svn: 34663

llvm-svn: 34662

llvm-svn: 34661

Implement constant folding via APInt instead of uint64_t.
llvm-svn: 34660

llvm-svn: 34659

llvm-svn: 34658

lowering uses.
llvm-svn: 34657

llvm-svn: 34656

llvm-svn: 34655

'clients', etc, and adding CCValAssign instead.
llvm-svn: 34654

lib/Analysis/ConstantFolding.
llvm-svn: 34653

to infinite loop:
    PPCMachineFunctionInfo.h updated: 1.2 -> 1.3
    PPCRegisterInfo.cpp updated: 1.110 -> 1.111
    PPCRegisterInfo.h updated: 1.28 -> 1.29
llvm-svn: 34652

instruction between now and next forward() call.
llvm-svn: 34649

llvm-svn: 34648

Implement the first step towards arbitrary precision integer support in
LLVM. The APInt class provides arbitrary precision arithmetic and value
representation. This patch changes ConstantInt to use APInt as its value
representation without supporting bit widths > 64 yet. That change will
come after ConstantFolding handles bit widths > 64 bits.
llvm-svn: 34647
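For the <= 64-bit case this patch starts with, "N-bit value representation" means ordinary machine arithmetic followed by masking to the declared width. A minimal sketch of that wraparound behavior (illustration, not the APInt API):

```cpp
#include <cassert>
#include <cstdint>

// N-bit addition on a single 64-bit word: compute in 64 bits, then mask
// so the result wraps at 2^BitWidth, as an N-bit integer type would.
uint64_t addNBits(uint64_t A, uint64_t B, unsigned BitWidth) {
    uint64_t Mask = BitWidth == 64 ? ~uint64_t(0)
                                   : (uint64_t(1) << BitWidth) - 1;
    return (A + B) & Mask;
}
```

For example, in 8-bit arithmetic 255 + 1 wraps back to 0.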

2. Rewrite operator=(const APInt& RHS) to allow the RHS to be a different
   bit width than the LHS. This makes it possible to use APInt as the key
   of a DenseMap, as needed for the IntConstants map in Constants.cpp.
3. Fix operator=(uint64_t) to clear unused bits in case the client assigns
   a value that has more bits than the APInt allows.
4. Assert that bit widths are equal in operator==.
5. Revise getHashValue() to put the bit width in the low-order six bits.
   This should help make i1 0, i2 0, ... i64 0 all distinct in the
   IntConstants DenseMap.
llvm-svn: 34646
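Items 3 and 5 are easy to sketch on a single 64-bit word (hypothetical helpers, not the actual APInt members): assignment masks off bits beyond the declared width, and the hash folds the bit width into the low six bits so zero values of different widths stop colliding.

```cpp
#include <cassert>
#include <cstdint>

// Item 3: when assigning a plain uint64_t, clear any bits beyond BitWidth
// so a too-wide client value cannot leak into the representation.
uint64_t assignFromUInt64(uint64_t V, unsigned BitWidth) {
    return BitWidth == 64 ? V : V & ((uint64_t(1) << BitWidth) - 1);
}

// Item 5: fold the bit width into the low-order six bits of the hash so
// that i1 0, i2 0, ..., i63 0 all hash to different values.
uint64_t hashValue(uint64_t Val, unsigned BitWidth) {
    return (Val << 6) | (BitWidth & 0x3F);
}
```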

the last use.
llvm-svn: 34645

Fix toString's use of getValue to use getZExtValue().
llvm-svn: 34642

llvm-svn: 34640

llvm-svn: 34639

llvm-svn: 34638

llvm-svn: 34637

conventions. This doesn't do anything yet, but may in the future.
llvm-svn: 34636

llvm-svn: 34634

llvm-svn: 34633

llvm-svn: 34632

mechanics that process it. I'm still not happy with this, but it's a step
in the right direction.
llvm-svn: 34631

2. Fix countTrailingZeros to use a faster algorithm.
3. Simplify sext() slightly by using isNegative().
4. Implement ashr using word-at-a-time logic instead of bit-at-a-time.
5. Rename locals named isNegative so they don't clash with the method name.
6. Fix fromString to compute the negated value correctly.
llvm-svn: 34629
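One common faster countTrailingZeros, in the spirit of item 2, binary-searches for the lowest set bit instead of testing one bit at a time; a self-contained sketch (not necessarily the algorithm the commit adopted):

```cpp
#include <cassert>
#include <cstdint>

// Count trailing zero bits of a 64-bit word by halving the search range:
// at most six mask-and-shift steps rather than up to 64 single-bit tests.
unsigned countTrailingZeros64(uint64_t V) {
    if (V == 0) return 64;
    unsigned N = 0;
    if ((V & 0xFFFFFFFFull) == 0) { N += 32; V >>= 32; }
    if ((V & 0xFFFFull)     == 0) { N += 16; V >>= 16; }
    if ((V & 0xFFull)       == 0) { N += 8;  V >>= 8;  }
    if ((V & 0xFull)        == 0) { N += 4;  V >>= 4;  }
    if ((V & 0x3ull)        == 0) { N += 2;  V >>= 2;  }
    if ((V & 0x1ull)        == 0) { N += 1; }
    return N;
}
```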

llvm-svn: 34628

llvm-svn: 34627

llvm-svn: 34626

llvm-svn: 34625

Capture this so that downstream zext/sext's are optimized out. This
compiles:

    int test(short X) { return (int)X; }

to:

    _test:
            movl %edi, %eax
            ret

instead of:

    _test:
            movswl %di, %eax
            ret

GCC produces this bizarre code:

    _test:
            movw %di, -12(%rsp)
            movswl -12(%rsp),%eax
            ret

llvm-svn: 34623

sextinreg if not needed. This is useful in two cases: before legalize, it
avoids creating a sextinreg that will be trivially removed. After legalize,
if the target doesn't support sextinreg, the trunc/sext would not have been
removed before.
llvm-svn: 34621
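The trunc+sext pair this entry eliminates is equivalent to a sign-extend-in-register; a sketch of that operation (hypothetical helper) shows why it is a no-op when the value already fits in the narrow type:

```cpp
#include <cassert>
#include <cstdint>

// Sign-extend the low FromBits bits of V, as a trunc-to-FromBits followed
// by a sext back to 64 bits would. If V already fits in FromBits as a
// signed value, the result equals V and the operation can be dropped.
int64_t signExtendInReg(int64_t V, unsigned FromBits) {
    uint64_t U = uint64_t(V) << (64 - FromBits);
    return int64_t(U) >> (64 - FromBits);  // arithmetic shift restores sign
}
```

For instance, an 8-bit 0xFF sign-extends to -1, while 5 already fits and comes back unchanged.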