Commit message | Author | Age | Files | Lines

llvm-svn: 34668
llvm-svn: 34667
llvm-svn: 34666
llvm-svn: 34664
widths > 64 bits.
llvm-svn: 34663
llvm-svn: 34662
llvm-svn: 34661
Implement constant folding via APInt instead of uint64_t.
llvm-svn: 34660
llvm-svn: 34659
llvm-svn: 34658
lowering uses.
llvm-svn: 34657
llvm-svn: 34656
llvm-svn: 34655
'clients', etc, and adding CCValAssign instead.
llvm-svn: 34654
lib/Analysis/ConstantFolding.
llvm-svn: 34653
to infinite loop:
PPCMachineFunctionInfo.h updated: 1.2 -> 1.3
PPCRegisterInfo.cpp updated: 1.110 -> 1.111
PPCRegisterInfo.h updated: 1.28 -> 1.29
llvm-svn: 34652
instruction between now and next forward() call.
llvm-svn: 34649
llvm-svn: 34648
Implement the first step towards arbitrary precision integer support in
LLVM. The APInt class provides arbitrary precision arithmetic and value
representation. This patch changes ConstantInt to use APInt as its value
representation without supporting bit widths > 64 yet. That change will
come after ConstantFolding handles bit widths > 64 bits.
llvm-svn: 34647
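
The representation change this commit describes can be pictured with a small width-tagged integer. This is a hypothetical sketch, not LLVM's actual APInt class or API; the name `SmallAPInt` and its members are invented for illustration, and it honors the same "no widths > 64 yet" restriction the commit states.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of a width-tagged constant value, in the spirit of the
// ConstantInt change above. Not the real APInt; widths are capped at 64.
struct SmallAPInt {
  unsigned BitWidth; // number of significant bits, 1..64
  uint64_t Val;      // value, always masked down to BitWidth bits

  SmallAPInt(unsigned width, uint64_t value) : BitWidth(width), Val(value) {
    assert(width >= 1 && width <= 64 && "widths > 64 not supported yet");
    // Keep only BitWidth bits, so e.g. an i8 value of 256 wraps to 0.
    if (width < 64)
      Val &= (1ULL << width) - 1;
  }

  // Two constants match only if both width and value match:
  // i1 0 and i64 0 are distinct constants.
  bool operator==(const SmallAPInt &RHS) const {
    return BitWidth == RHS.BitWidth && Val == RHS.Val;
  }
};
```

Masking in the constructor is what lets width-aware code treat i8 256 and i8 0 as the same constant while still keeping i1 0 and i64 0 distinct.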
2. Rewrite operator=(const APInt& RHS) to allow the RHS to be a different
bit width than the LHS. This makes it possible to use APInt as the key
of a DenseMap, as needed for the IntConstants map in Constants.cpp
3. Fix operator=(uint64_t) to clear unused bits in case the client assigns
a value that has more bits than the APInt allows.
4. Assert that bit widths are equal in operator==
5. Revise getHashValue() to put the bit width in the low order six bits.
This should help to make i1 0, i2 0, ... i64 0 all distinct in the
IntConstants DenseMap.
llvm-svn: 34646
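
Point 5 above can be sketched in isolation. This is a hypothetical reconstruction, not the actual getHashValue() source: it only shows why packing the bit width into the low six bits separates i1 0, i2 0, ..., i64 0, whose raw values are all zero.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of point 5: fold the bit width into the low six bits of
// the hash so that zero values of different widths hash differently. Six bits
// are enough to encode any width up to 64.
uint64_t getHashValueSketch(unsigned bitWidth, uint64_t value) {
  assert(bitWidth >= 1 && bitWidth <= 64);
  // Shift the value up and pack the width into bits [5:0].
  return (value << 6) | (bitWidth & 0x3F);
}
```

Without the width in the hash, every zero-valued constant would collide in the IntConstants DenseMap regardless of type.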
the last use.
llvm-svn: 34645
Fix toString use of getValue to use getZExtValue()
llvm-svn: 34642
llvm-svn: 34640
llvm-svn: 34639
llvm-svn: 34638
llvm-svn: 34637
conventions. This doesn't do anything yet, but may in the future.
llvm-svn: 34636
llvm-svn: 34634
llvm-svn: 34633
llvm-svn: 34632
mechanics that process it. I'm still not happy with this, but it's a step
in the right direction.
llvm-svn: 34631
2. Fix countTrailingZeros to use a faster algorithm.
3. Simplify sext() slightly by using isNegative().
4. Implement ashr using word-at-a-time logic instead of bit-at-a-time
5. Rename locals named isNegative so they don't clash with method name.
6. Fix fromString to compute negated value correctly.
llvm-svn: 34629
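
The word-at-a-time idea behind points 2 and 4 can be illustrated with a standalone trailing-zero count. This is a sketch under invented names (`countTrailingZerosSketch`, a plain vector of 64-bit words), not the committed APInt code.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of a word-at-a-time countTrailingZeros: whole zero words are skipped
// in one step, and bits are scanned only inside the first nonzero word.
// Words[0] holds the least significant 64 bits.
unsigned countTrailingZerosSketch(const std::vector<uint64_t> &Words,
                                  unsigned BitWidth) {
  unsigned Count = 0;
  for (uint64_t W : Words) {
    if (W == 0) {          // a full zero word contributes 64 zeros at once
      Count += 64;
      continue;
    }
    while ((W & 1) == 0) { // bit scan only within the first nonzero word
      ++Count;
      W >>= 1;
    }
    break;
  }
  return Count < BitWidth ? Count : BitWidth;
}
```

For a 128-bit value whose low word is zero, a bit-at-a-time loop would iterate 64 times before reaching the interesting word; here a zero word costs one comparison.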
llvm-svn: 34628
llvm-svn: 34627
llvm-svn: 34626
llvm-svn: 34625
Capture this so that downstream zext/sext's are optimized out. This
compiles:
int test(short X) { return (int)X; }
to:
_test:
movl %edi, %eax
ret
instead of:
_test:
movswl %di, %eax
ret
GCC produces this bizarre code:
_test:
movw %di, -12(%rsp)
movswl -12(%rsp),%eax
ret
llvm-svn: 34623
sextinreg if not needed. This is useful in two cases: before legalize,
it avoids creating a sextinreg that will be trivially removed. After legalize
if the target doesn't support sextinreg, the trunc/sext would not have been
removed before.
llvm-svn: 34621
llvm-svn: 34620
This makes it much more efficient.
llvm-svn: 34618
llvm-svn: 34617
2. Implement the trunc, sext, and zext operations.
3. Improve fromString to accept negative values as input.
llvm-svn: 34616
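
Of the three operations in point 2, sext is the one with a subtlety: the new high bits must replicate the old sign bit. The following is a hypothetical single-word sketch of that rule; the real APInt must also handle multi-word values.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of single-word sign extension: widen a value from oldWidth to
// newWidth bits by replicating the sign bit into the new high bits.
// zext would instead leave the new high bits clear.
uint64_t sextSketch(uint64_t val, unsigned oldWidth, unsigned newWidth) {
  assert(oldWidth >= 1 && oldWidth <= newWidth && newWidth <= 64);
  uint64_t signBit = 1ULL << (oldWidth - 1);
  if (val & signBit)
    val |= ~((signBit << 1) - 1); // fill bits above oldWidth with ones
  if (newWidth < 64)
    val &= (1ULL << newWidth) - 1; // clear anything above the new width
  return val;
}
```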
void foo(short);
void bar(unsigned short A) {
foo(A);
}
into:
_bar:
subq $8, %rsp
movswl %di, %edi
call _foo
addq $8, %rsp
ret
instead of:
_bar:
subq $8, %rsp
call _foo
addq $8, %rsp
ret
Testcase here: test/CodeGen/X86/x86-64-shortint.ll
llvm-svn: 34615
night: fastcc returns should only go in XMM0 if we have SSE2 or above.
llvm-svn: 34613
llvm-svn: 34610
exprs hanging off a global, even if the global is not otherwise dead. This
requires some tricky iterator gymnastics.
This implements Transforms/GlobalOpt/constantexpr-dangle.ll by deleting a
constantexpr that made it appear that the address of the function was taken.
llvm-svn: 34608
llvm-svn: 34606
llvm-svn: 34605
llvm-svn: 34604
2. Move comments for methods to .h file, delete them in .cpp file.
3. All places that were doing manual clear of high order bits now call the
clearUnusedBits() method in order to not depend on undefined behavior
of the >> operator when the number of bits shifted equals the word size.
4. Reduced # of loc by using the new result of clearUnusedBits() method.
5. Simplified logic (decreased indentation) in a few places.
6. Added code comments to larger functions that needed them.
7. Added FIXME notes about weak implementations of things (e.g. bit-by-bit
shift right is sub-optimal).
llvm-svn: 34603
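
Point 3 guards against a real C++ trap: shifting a 64-bit value by 64 is undefined behavior, so a mask built as `~0ULL >> (64 - bitsUsed)` breaks exactly when a word is fully used. A hypothetical standalone version of that guard (not the actual clearUnusedBits() method) looks like this.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of clearing the unused high bits of the top word without relying on
// a full-width shift, which is undefined behavior in C++ when the shift count
// equals the word size.
uint64_t clearUnusedBitsSketch(uint64_t word, unsigned bitsUsed) {
  assert(bitsUsed >= 1 && bitsUsed <= 64);
  if (bitsUsed == 64)
    return word; // nothing to clear; avoids the undefined 64-bit shift
  return word & ((1ULL << bitsUsed) - 1);
}
```

Centralizing this in one helper, as the commit describes, keeps every call site off the undefined-behavior path instead of each one masking by hand.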