Commit messages

llvm-svn: 149331
over the catch information. The catch information is now tacked to the invoke
instruction.
llvm-svn: 149326
Not committing a testcase because I think it will be too fragile.
llvm-svn: 149315
we should (theoretically) optimize and codegen ConstantDataVector as well
as ConstantVector.
llvm-svn: 149116
more robust) ways to do what it was doing now. Also, add static methods
for decoding a ShuffleVector mask.
llvm-svn: 149028
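For readers unfamiliar with shuffle masks, the standalone C++ sketch below (made-up names, not the LLVM API added here) shows what decoding a ShuffleVector mask means: each mask entry indexes into the concatenation of the two input vectors, and a negative entry denotes an undef result element.

#include <iostream>
#include <vector>

// Standalone sketch, not the LLVM API: a shufflevector mask is a list of
// element indices into the concatenation of the two input vectors; a
// negative index stands for an undef result element.
static std::vector<int> applyShuffleMask(const std::vector<int> &LHS,
                                         const std::vector<int> &RHS,
                                         const std::vector<int> &Mask) {
  std::vector<int> Result;
  for (int Idx : Mask) {
    if (Idx < 0)
      Result.push_back(0); // undef lane; any value is acceptable
    else if (Idx < (int)LHS.size())
      Result.push_back(LHS[Idx]);
    else
      Result.push_back(RHS[Idx - LHS.size()]);
  }
  return Result;
}

int main() {
  // A <4 x i32>-style shuffle selecting lanes 0 and 2 of each operand.
  for (int V : applyShuffleMask({10, 11, 12, 13}, {20, 21, 22, 23}, {0, 2, 4, 6}))
    std::cout << V << ' '; // prints: 10 12 20 22
  std::cout << '\n';
  return 0;
}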
llvm-svn: 148929
llvm-svn: 148897
llvm-svn: 148802
llvm-svn: 148578
This SelectionDAG node will be attached to call nodes by LowerCall(),
and eventually becomes a MO_RegisterMask MachineOperand on the
MachineInstr representing the call instruction.
LowerCall() will attach a register mask that depends on the calling
convention.
llvm-svn: 148436
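As background for the register-mask operand described above: a register mask is conceptually one bit per physical register, and, as I understand LLVM's convention, a set bit means the register is preserved across the call while a clear bit means the call clobbers it. The toy C++ sketch below (hypothetical register numbering and mask, not LLVM code) illustrates that semantics.

#include <cstdint>
#include <iostream>
#include <vector>

// Toy model of a call-site register mask: one bit per physical register,
// set = preserved across the call, clear = clobbered by the call.
static bool clobbersPhysReg(const std::vector<uint32_t> &Mask, unsigned Reg) {
  return !(Mask[Reg / 32] & (1u << (Reg % 32)));
}

int main() {
  // Hypothetical target with 8 registers R0..R7 and a calling convention
  // that preserves only the callee-saved registers R4..R7.
  std::vector<uint32_t> CallPreservedMask = {0xF0u}; // bits 4..7 set
  for (unsigned Reg = 0; Reg < 8; ++Reg)
    std::cout << 'R' << Reg
              << (clobbersPhysReg(CallPreservedMask, Reg) ? " clobbered\n"
                                                          : " preserved\n");
  return 0;
}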
vector type to another, we must not bitcast the result if one type is widened while the other is promoted.
llvm-svn: 148383
TwoAddressInstructionPass to insert copies for any physical reg operands of the REG_SEQUENCE
llvm-svn: 148377
llvm-svn: 148337
type.
llvm-svn: 148297
checked for legalisation
llvm-svn: 148275
unused variables).
llvm-svn: 148230
arithmetic so should not be checked in legalisation
llvm-svn: 148228
We know that the blend instructions only use the MSB, so if the mask is
sign-extended then we can convert it into a SHL instruction. This is a
common pattern because the type-legalizer sign-extends the i1 type which
is used by the LLVM-IR for the condition.
Added a new optimization in SimplifyDemandedBits for SIGN_EXTEND_INREG -> SHL.
llvm-svn: 148225
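To see why the cheaper SHL is enough here: the blend only reads the most significant bit of each lane, and for a mask derived from an i1 condition that bit is the same whether the value is fully sign-extended or merely shifted left. A small standalone check on a 32-bit lane (plain C++, not the DAG combine itself):

#include <cassert>
#include <cstdint>

// SIGN_EXTEND_INREG from i1 replicates bit 0 into all 32 bits; SHL by 31
// only moves bit 0 into the MSB. A consumer that demands nothing but the
// MSB (e.g. a variable blend) cannot tell the two apart.
static uint32_t signExtendInRegFromI1(uint32_t X) {
  return (X & 1u) ? 0xFFFFFFFFu : 0u; // replicate bit 0 everywhere
}
static uint32_t shiftLeft31(uint32_t X) { return X << 31; }

int main() {
  for (uint32_t X : {0u, 1u, 2u, 3u, 0xDEADBEEFu})
    assert((signExtendInRegFromI1(X) >> 31) == (shiftLeft31(X) >> 31));
  return 0;
}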
CodeGen.
llvm-svn: 148218
llvm-svn: 148217
llvm-svn: 148205
overly conservative. It was concerned about cases where it would prohibit
folding simple [r, c] addressing modes. e.g.
ldr r0, [r2]
ldr r1, [r2, #4]
=>
ldr r0, [r2], #4
ldr r1, [r2]
Change the logic to look for such cases, which allows it to form indexed memory
ops more aggressively.
rdar://10674430
llvm-svn: 148086
Promote for those operations.
Sorry, no test case yet
llvm-svn: 148050
llvm-svn: 148033
are used.
When we load the v12i32 type, the GenWidenVectorLoads method generates two loads: v8i32 and v4i32
and attempts to use CONCAT_VECTORS to join them. In this fix I concat undef values to widen
the smaller value. The test "widen_load-2.ll" also exposes this bug on AVX.
llvm-svn: 147964
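A conceptual sketch of the fix described above, with plain C++ vectors standing in for DAG nodes and -1 standing in for an undef lane: the narrow v4i32 half is first padded with undef lanes up to v8i32 so that CONCAT_VECTORS joins two operands of the same type, giving a v16i32 of which only the first 12 lanes are meaningful.

#include <cassert>
#include <vector>

constexpr int Undef = -1; // stand-in for an undef lane

// Pad the narrow half with undef lanes, then concatenate; models joining a
// v8i32 and a v4i32 load into the widened v16i32 for an original v12i32.
static std::vector<int> concatWidened(std::vector<int> Wide8,
                                      std::vector<int> Narrow4) {
  assert(Wide8.size() == 8 && Narrow4.size() == 4);
  Narrow4.resize(8, Undef);                                  // v4i32 -> v8i32
  Wide8.insert(Wide8.end(), Narrow4.begin(), Narrow4.end()); // -> v16i32
  return Wide8;
}

int main() {
  std::vector<int> Result = concatWidened(std::vector<int>(8, 1),
                                          std::vector<int>(4, 2));
  assert(Result.size() == 16); // only the first 12 lanes hold defined values
  return 0;
}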
detect a pattern which can be implemented with a small 'shl' embedded in
the addressing mode scale. This happens in real code as follows:
unsigned x = my_accelerator_table[input >> 11];
Here we have some lookup table that we look into using the high bits of
'input'. Each entity in the table is 4 bytes, which means this
implicitly gets turned into (once lowered out of a GEP):
*(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));
The shift right followed by a shift left is canonicalized to a smaller
shift right and masking off the low bits. That hides the shift right
which x86 has an addressing mode designed to support. We now detect
masks of this form, and produce the longer shift right followed by the
proper addressing mode. In addition to saving a (rather large)
instruction, this also reduces stalls in Intel chips on benchmarks I've
measured.
In order for all of this to work, one part of the DAG needs to be
canonicalized *still further* than it currently is. This involves
removing pointless 'trunc' nodes between a zextload and a zext. Without
that, we end up generating spurious masks and hiding the pattern.
llvm-svn: 147936
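For reference, the canonical form that hides the shift is the equivalent smaller-shift-plus-mask; the standalone check below (not the DAG combine itself) verifies the equivalence the new matching code has to see through in order to recover the index << 2 that an x86 scaled address ([base + index*4]) can absorb.

#include <cassert>

int main() {
  // (x >> 11) << 2 computes the same byte offset as the canonicalized
  // smaller shift plus mask, (x >> 9) & ~3; only the first form exposes
  // the shl that the addressing-mode scale can fold away.
  for (unsigned X = 0; X < (1u << 20); ++X)
    assert(((X >> 11) << 2) == ((X >> 9) & ~3u));
  return 0;
}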
of several newly un-defaulted switches. This also helps optimizers
(including LLVM's) recognize that every case is covered, and we should
assume as much.
llvm-svn: 147861
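A generic illustration of the pattern (not the actual switches touched by this commit): with every enumerator handled and no default, the compiler can warn when a new enumerator is added, and the optimizer may assume the listed cases are exhaustive.

#include <iostream>

enum Kind { KindA, KindB, KindC };

// No default: -Wswitch flags a newly added, unhandled enumerator, and
// -Wcovered-switch-default would flag a redundant default here.
int cost(Kind K) {
  switch (K) {
  case KindA: return 1;
  case KindB: return 2;
  case KindC: return 3;
  }
  return 0; // unreachable when every enumerator is covered above
}

int main() { std::cout << cost(KindB) << '\n'; return 0; } // prints 2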
llvm-svn: 147855
using BUILD_VECTORS we may be using a BV of a different type. Make sure to cast it back.
llvm-svn: 147851
llvm-svn: 147733
subc, turn it into a sub. Turn (subc x, x) into 0 with no borrow. Turn
(subc x, 0) into x with no borrow. Turn (subc -1, x) into (xor x, -1) with no
borrow. Turn a sube with no incoming borrow into a subc.
llvm-svn: 147728
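The identities above are easy to check with ordinary unsigned arithmetic; here is a scalar model of a subtract-with-borrow-out node (difference plus borrow flag, with a hypothetical helper name):

#include <cassert>
#include <cstdint>

// Scalar model of subtract-with-borrow-out on i32: the borrow is set
// exactly when the unsigned subtraction wraps.
static uint32_t subc(uint32_t A, uint32_t B, bool &Borrow) {
  Borrow = A < B;
  return A - B;
}

int main() {
  bool Borrow;
  for (uint32_t X : {0u, 1u, 42u, 0xFFFFFFFFu}) {
    assert(subc(X, X, Borrow) == 0 && !Borrow);             // (subc x, x) -> 0
    assert(subc(X, 0, Borrow) == X && !Borrow);             // (subc x, 0) -> x
    assert(subc(0xFFFFFFFFu, X, Borrow) == ~X && !Borrow);  // (subc -1, x) -> (xor x, -1)
  }
  return 0;
}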
llvm-svn: 147696
a combined-away node and the result of the combine isn't substantially
smaller than the input, it's just canonicalized. This is the first part
of a significant (7%) performance gain for Snappy's hot decompression
loop.
llvm-svn: 147604
are commuted in the shuffle mask.
llvm-svn: 147527
llvm-svn: 147525
Before we'd get:
$ clang t.c
fatal error: error in backend: Invalid operand for inline asm constraint 'i'!
Now we get:
$ clang t.c
t.c:16:5: error: invalid operand for inline asm constraint 'i'!
"movq (%4), %%mm0\n"
^
Which at least gets us the inline asm that is the problem.
llvm-svn: 147502
integer-promoted.
llvm-svn: 147484
Targets can perfectly well support intrinsics on illegal types, as long as they
are prepared to perform custom expansion during type legalization. For example,
a target where i64 is illegal might still support the i64 intrinsic operation
using pairs of i32's. ARM already does some expansions like this for
non-intrinsic operations.
llvm-svn: 147472
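To make the "pairs of i32's" remark concrete, here is a standalone sketch of expanding a 64-bit add into two 32-bit halves with a carry (a plain add standing in for whatever i64 operation the target expands; the helper name is made up):

#include <cassert>
#include <cstdint>
#include <utility>

// Expand a 64-bit add into two 32-bit adds, propagating the carry from the
// low half into the high half.
static std::pair<uint32_t, uint32_t> add64AsI32Pairs(uint32_t ALo, uint32_t AHi,
                                                     uint32_t BLo, uint32_t BHi) {
  uint32_t Lo = ALo + BLo;
  uint32_t Carry = Lo < ALo ? 1u : 0u; // low half wrapped around
  uint32_t Hi = AHi + BHi + Carry;
  return {Lo, Hi};
}

int main() {
  uint64_t A = 0x00000001FFFFFFFFull, B = 0x0000000000000001ull;
  auto [Lo, Hi] = add64AsI32Pairs((uint32_t)A, (uint32_t)(A >> 32),
                                  (uint32_t)B, (uint32_t)(B >> 32));
  assert((((uint64_t)Hi << 32) | Lo) == A + B);
  return 0;
}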
The failure is seen on win32, when the i64 type is illegal.
It happens at the stage of converting VECTOR_SHUFFLE to BUILD_VECTOR.
The failure message is:
llc: SelectionDAG.cpp:784: void VerifyNodeCommon(llvm::SDNode*): Assertion `(I->getValueType() == EltVT || (EltVT.isInteger() && I->getValueType().isInteger() && EltVT.bitsLE(I->getValueType()))) && "Wrong operand type!"' failed.
I added a special test that checks vector shuffle on win32.
llvm-svn: 147445
llvm-svn: 147400
The failure is seen on win32, when the i64 type is illegal.
It happens at the stage of converting VECTOR_SHUFFLE to BUILD_VECTOR.
The failure message is:
llc: SelectionDAG.cpp:784: void VerifyNodeCommon(llvm::SDNode*): Assertion `(I->getValueType() == EltVT || (EltVT.isInteger() && I->getValueType().isInteger() && EltVT.bitsLE(I->getValueType()))) && "Wrong operand type!"' failed.
I added a special test that checks vector shuffle on win32.
llvm-svn: 147399
Promotion of the mask operand needs to be done using PromoteTargetBoolean, and not padded with garbage.
llvm-svn: 147309
location. PR10747, part 2.
llvm-svn: 147283
llvm-svn: 147272
llvm-svn: 147197
llvm-svn: 146986
http://llvm.org/docs/CodingStandards.html#ll_virtual_anch
llvm-svn: 146960
llvm-svn: 146927
attribute themselves.
llvm-svn: 146851
Patch by Kyriakos Georgiou!
llvm-svn: 146670