Commit message | Author | Age | Files | Lines
---|---|---|---|---
... | ||||||
* | implement support for sinking a load out the bottom of a block that | Chris Lattner | 2008-01-12 | 1 | -16/+23
  has no stores between the load and the end of block. This works great and sinks hundreds of stores, but we can't turn it on because machineinstrs don't have volatility information and we don't want to sink volatile stores :(
  llvm-svn: 45894
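The legality test this commit describes can be sketched as a straight-line scan. This is a minimal illustration over made-up types, not the MachineInstr API of the time; the `IsVolatile` bit is exactly the information the commit says machine instructions were missing:

```
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a machine instruction; invented for
// exposition, not LLVM's representation.
struct Instr {
  bool MayStore = false;
  bool IsVolatile = false; // the bit real machineinstrs lacked in 2008
};

// A load may be sunk out the bottom of its block only if no
// instruction between it and the end of the block may store (a store
// could alias the loaded address) and the access is not volatile.
bool canSinkLoadToEnd(const std::vector<Instr> &Block, std::size_t LoadIdx) {
  if (Block[LoadIdx].IsVolatile)
    return false; // volatile accesses must never be reordered
  for (std::size_t I = LoadIdx + 1; I < Block.size(); ++I)
    if (Block[I].MayStore)
      return false; // an intervening store may clobber the loaded value
  return true;
}
```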
* | Simplify the side effect stuff a bit more and make licm/sinking | Chris Lattner | 2008-01-10 | 1 | -11/+13
  both work right according to the new flags. This removes the TII::isReallySideEffectFree predicate, and adds TII::isInvariantLoad. It removes NeverHasSideEffects+MayHaveSideEffects and adds UnmodeledSideEffects as machine instr flags. Now the clients can decide everything they need. I think isRematerializable can be implemented in terms of the flags we have now, though I will let others tackle that.
  llvm-svn: 45843
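As a rough illustration of the "clients decide" design, here is a hedged sketch of the predicate a hoisting/sinking client might derive. Only the flag names (UnmodeledSideEffects, isInvariantLoad) come from the commit message; the struct and function are invented stand-ins, not LLVM's API:

```
// Simplified stand-in for a machine instruction carrying the flags
// this commit introduces; invented for exposition.
struct MI {
  bool MayLoad = false;
  bool MayStore = false;
  bool UnmodeledSideEffects = false; // replaces NeverHas/MayHaveSideEffects
  bool InvariantLoad = false;        // what TII::isInvariantLoad would report
};

// With separate flags, each client composes exactly the predicate it
// needs instead of relying on one isReallySideEffectFree answer.
bool safeToHoistOrSink(const MI &I) {
  if (I.UnmodeledSideEffects || I.MayStore)
    return false;                  // may change observable state
  if (I.MayLoad && !I.InvariantLoad)
    return false;                  // an ordinary load may see different memory
  return true;
}
```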
* | Clamp down on sinking of lots of instructions. | Chris Lattner | 2008-01-10 | 1 | -0/+9
  llvm-svn: 45841
* | The current impl is really trivial, add some comments about how it can be made better. | Chris Lattner | 2008-01-05 | 1 | -2/+24
  llvm-svn: 45625
* | don't sink anything with side effects, this makes lots of stuff work, but sinks almost nothing. | Chris Lattner | 2008-01-05 | 1 | -0/+4
  llvm-svn: 45617
* | fix a common crash. | Chris Lattner | 2008-01-05 | 1 | -0/+4
  llvm-svn: 45614
* | Add a really quick hack at a machine code sinking pass, enabled with --enable-sinking. | Chris Lattner | 2008-01-04 | 1 | -0/+206
  It is missing validity checks, so it is known broken. However, it is powerful enough to compile this contrived code:

      void test1(int C, double A, double B, double *P) {
        double Tmp = A*A+B*B;
        *P = C ? Tmp : A;
      }

  into:

      _test1:
              movsd 8(%esp), %xmm0
              cmpl $0, 4(%esp)
              je LBB1_2       # entry
      LBB1_1: # entry
              movsd 16(%esp), %xmm1
              mulsd %xmm1, %xmm1
              mulsd %xmm0, %xmm0
              addsd %xmm1, %xmm0
      LBB1_2: # entry
              movl 24(%esp), %eax
              movsd %xmm0, (%eax)
              ret

  instead of:

      _test1:
              movsd 16(%esp), %xmm0
              mulsd %xmm0, %xmm0
              movsd 8(%esp), %xmm1
              movapd %xmm1, %xmm2
              mulsd %xmm2, %xmm2
              addsd %xmm0, %xmm2
              cmpl $0, 4(%esp)
              je LBB1_2       # entry
      LBB1_1: # entry
              movapd %xmm2, %xmm1
      LBB1_2: # entry
              movl 24(%esp), %eax
              movsd %xmm1, (%eax)
              ret

  woo.
  llvm-svn: 45570
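The effect visible in the assembly above is the essence of sinking: the A*A+B*B computation moves into the branch that actually uses it, so the other path skips it entirely. Below is a minimal sketch of that core loop over invented types (Block, Inst), not LLVM's MachineFunction API; it deliberately omits the validity checks (PHIs, multiple-predecessor targets, register liveness) that the commit itself says are missing:

```
#include <cstddef>
#include <vector>

struct Inst;

// Invented stand-ins for machine IR; the real pass iterates over
// MachineBasicBlocks and walks register def-use chains instead.
struct Block {
  std::vector<Inst *> Insts;
  std::vector<Block *> Succs;
};

struct Inst {
  Block *Parent = nullptr;
  std::vector<Inst *> Users; // instructions that read this one's result
  bool HasSideEffects = false;
};

// Find the unique direct successor containing every user of I.
// Returns nullptr when users are spread across blocks, live in I's
// own block, or the candidate is not a direct successor.
static Block *singleUseSuccessor(const Inst *I) {
  Block *Target = nullptr;
  for (const Inst *U : I->Users) {
    if (U->Parent == I->Parent || (Target && U->Parent != Target))
      return nullptr;
    Target = U->Parent;
  }
  for (Block *S : I->Parent->Succs)
    if (S == Target)
      return Target; // only hop into a direct successor
  return nullptr;
}

// Core loop: move side-effect-free instructions whose users all live
// in one successor down into that successor, so paths that never use
// the value never pay for it.
void sinkBlock(Block &B) {
  for (std::size_t Idx = 0; Idx < B.Insts.size();) {
    Inst *I = B.Insts[Idx];
    Block *Target = I->HasSideEffects ? nullptr : singleUseSuccessor(I);
    if (!Target) {
      ++Idx; // not sinkable; leave in place
      continue;
    }
    B.Insts.erase(B.Insts.begin() + static_cast<std::ptrdiff_t>(Idx));
    Target->Insts.insert(Target->Insts.begin(), I); // place at top of target
    I->Parent = Target;
  }
}
```

In the test1 example, the two mulsd/addsd-producing instructions have all their users in the taken branch, so this loop would move them there, matching the "after" assembly shown in the commit message.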