| Commit message (Collapse) | Author | Age | Files | Lines |
#include <mmintrin.h>
extern __m64 C;
void baz(__v2si *A, __v2si *B)
{
*A = C;
_mm_empty();
}
We get this:
_baz:
call "L1$pb"
"L1$pb":
popl %eax
movl L_C$non_lazy_ptr-"L1$pb"(%eax), %eax
movq (%eax), %mm0
movl 4(%esp), %eax
movq %mm0, (%eax)
emms
ret
GCC gives us this:
_baz:
pushl %ebx
call L3
"L00000000001$pb":
L3:
popl %ebx
subl $8, %esp
movl L_C$non_lazy_ptr-"L00000000001$pb"(%ebx), %eax
movl (%eax), %edx
movl 4(%eax), %ecx
movl 16(%esp), %eax
movl %edx, (%eax)
movl %ecx, 4(%eax)
emms
addl $8, %esp
popl %ebx
ret
llvm-svn: 35351
llvm-svn: 35350
shift instruction.
llvm-svn: 35349
Fix SingleSource/Regression/C/2003-05-21-UnionBitFields.c by changing a
getHighBitsSet call to getLowBitsSet call that was incorrectly converted
from the original lshr constant expression.
llvm-svn: 35348
llvm-svn: 35347
512).
This is particularly useful for the JIT, which lazily deserializes functions.
llvm-svn: 35346
llvm-svn: 35345
and shifting down without regard for the bitwidth of the APInt can lead
to incorrect initialization values. Instead, check for the word size case
(to avoid undef results from shift) and then do (1 &lt;&lt; loBitsSet) - 1.
llvm-svn: 35344
llvm-svn: 35343
Remove a use of getLowBitsSet that caused the mask used for replacement of
shl/lshr pairs with an AND instruction to be computed incorrectly. It's not
clear exactly why this is the case. This solves the disappearing shifts
problem, but it doesn't fix Regression/C/2003-05-21-UnionBitFields. It
seems there is more going on.
llvm-svn: 35342
llvm-svn: 35341
llvm-svn: 35340
* Don't assume shift amounts are <= 64 bits
* Avoid creating an extra APInt in SubOne and AddOne by using -- and ++
* Add another use of getLowBitsSet
* Convert a series of if statements to a switch
llvm-svn: 35339
strategy, emit JT's where possible.
llvm-svn: 35338
llvm-svn: 35337
llvm-svn: 35336
using the facilities of APInt. While this duplicates a tiny fraction of
the constant folding code, it also makes the code easier to read and
avoids large ConstantExpr overhead for simple, known computations.
llvm-svn: 35335
llvm-svn: 35334
2. Use isStrictlyPositive() instead of isPositive() in two places where
an APInt value &gt; 0 is required, not merely &gt;= 0.
llvm-svn: 35333
CodeGen/X86/2007-03-24-InlineAsmVectorOp.ll
llvm-svn: 35332
llvm-svn: 35331
llvm-svn: 35330
llvm-svn: 35329
llvm-svn: 35328
CodeGen/X86/2007-03-24-InlineAsmXConstraint.ll
llvm-svn: 35327
llvm-svn: 35326
APInt with its type mask.
llvm-svn: 35325
llvm-svn: 35324
llvm-svn: 35323
not just the first letter. No functionality change.
llvm-svn: 35322
* Convert the last use of a uint64_t that should have been an APInt.
* Change ComputeMaskedBits to have a const reference argument for the Mask
so that recursions don't cause unneeded temporaries. This causes temps
to be needed in other places (where the mask has to change) but this
change optimizes for the recursion which is more frequent.
* Remove two instances of &amp;ing a Mask with getAllOnesValue. It's not
needed anymore because APInt is accurate in its bit computations.
* Start using the getLowBitsSet and getHighBitsSet methods on APInt
instead of shifting. This makes it clearer in the code what is
going on.
llvm-svn: 35321
alternatives, and end up not being registers.
llvm-svn: 35320
llvm-svn: 35319
llvm-svn: 35318
llvm-svn: 35317
llvm-svn: 35316
illegal. Instead, do the zero-valued construction for the user. This is because
the caller may not know (or care to check) that the number of bits set is
zero.
llvm-svn: 35315
llvm-svn: 35314
they should have used the uint64_t constructor. This avoids causing
undefined results via shifts by the word size when the bit width is an
exact multiple of the word size.
llvm-svn: 35313
already covered by getLowBitsSet (i.e. when loBits==0). Consequently, remove
the default value for loBits and reorder the arguments to the more natural
loBits, hiBits order. This makes it more clear that this function is for bit
groups in the middle of the bit width and not towards one end or the other.
llvm-svn: 35312
and getLowBitsSet.
llvm-svn: 35311
llvm-svn: 35310
llvm-svn: 35309
llvm-svn: 35308
llvm-svn: 35307
llvm-svn: 35306
modulus. The previous change was a result of incorrect documentation in
the LangRef.html.
llvm-svn: 35305
bug in the srem implementation. Turns out it was a documentation bug
instead.
llvm-svn: 35304
divisor!
llvm-svn: 35303
the result must follow the sign of the divisor.
llvm-svn: 35302