path: root/llvm/test/CodeGen/X86/conditional-tailcall.ll
Commit message  Author  Age  Files  Lines
* Revert r282920 "X86: Allow conditional tail calls in Win64 "leaf" functions (PR26302)"  Hans Wennborg  2016-10-05  1  -25/+2

    This is suspected to cause a miscompile in Chromium. Reverting while
    investigating.

    llvm-svn: 283329
* X86: Allow conditional tail calls in Win64 "leaf" functions (PR26302)  Hans Wennborg  2016-09-30  1  -2/+25

    We can't use Jcc to leave a Win64 function in general, because that
    confuses the unwinder. However, for "leaf" functions, that is, functions
    where the return address is always on top of the stack and which don't
    have unwind info, it's OK.

    Differential Revision: https://reviews.llvm.org/D24836

    llvm-svn: 282920
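A minimal sketch of the "leaf" shape this commit is about, assuming a hypothetical function @leaf and external callees @foo/@bar; the expected codegen is an assumption, not taken from the test file:

    target triple = "x86_64-pc-windows-msvc"

    declare void @foo()
    declare void @bar()

    ; No alloca, no non-tail calls and no unwind info, so the return
    ; address stays on top of the stack -- the case where folding the
    ; tail call into the branch (e.g. "jne bar") should be safe even
    ; on Win64.
    define void @leaf(i32 %x, i32 %y) {
    entry:
      %p = icmp eq i32 %x, %y
      br i1 %p, label %bb1, label %bb2
    bb1:
      tail call void @foo()
      ret void
    bb2:
      tail call void @bar()
      ret void
    }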
* X86: Conditional tail calls should not have isBarrier = 1  Hans Wennborg  2016-09-13  1  -2/+30

    That confuses e.g. machine basic block placement, which then doesn't
    realize that control can fall through a block that ends with a
    conditional tail call. Instead, isBranch=1 should be set.

    Also, mark EFLAGS as used by these instructions.

    llvm-svn: 281281
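A sketch of the fall-through situation described above, using hypothetical names @check/@error/@keep_going; the folded machine code shown in the comments is assumed rather than verified:

    declare void @error()
    declare void @keep_going()

    define void @check(i32 %x, i32 %y) {
    entry:
      %p = icmp eq i32 %x, %y
      br i1 %p, label %ok, label %fail
    fail:                            ; block only does a direct tail call: foldable
      tail call void @error()
      ret void
    ok:
      tail call void @keep_going()
      ret void
    }

    ; After the fold, the entry block ends with something like
    ;     jne  error            # conditional tail call
    ; and control falls through into %ok ("jmp keep_going"). If the
    ; conditional tail call were marked isBarrier = 1, block placement
    ; would assume nothing can fall through at that point.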
* X86: Fold tail calls into conditional branches also for 64-bit (PR26302)  Hans Wennborg  2016-09-09  1  -0/+1

    This extends the optimization in r280832 to also work for 64-bit. The
    only quirk is that we can't do this for 64-bit Windows (yet).

    Differential Revision: https://reviews.llvm.org/D24423

    llvm-svn: 281113
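For the 64-bit (non-Windows) case, the same IR as in the r280832 example below should fold the same way; the output sketched here is an assumption based on the SysV calling convention, where the arguments arrive in %edi and %esi instead of on the stack:

    target triple = "x86_64-unknown-linux-gnu"

    declare void @foo()
    declare void @bar()

    define void @f(i32 %x, i32 %y) {
    entry:
      %p = icmp eq i32 %x, %y
      br i1 %p, label %bb1, label %bb2
    bb1:
      tail call void @foo()
      ret void
    bb2:
      tail call void @bar()
      ret void
    }

    ; Assumed codegen after the fold:
    ; f:
    ;         cmpl    %esi, %edi
    ;         jne     bar
    ;         jmp     foo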
* Add more triple to conditional-tailcall.ll test  Hans Wennborg  2016-09-07  1  -1/+1

    llvm-svn: 280835
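Running the test for more than one target is done through RUN lines; a rough sketch of what such lines look like in an .ll test (the actual triples and check prefixes in conditional-tailcall.ll may differ):

    ; RUN: llc -mtriple=i686-linux < %s | FileCheck %s --check-prefix=CHECK32
    ; RUN: llc -mtriple=x86_64-linux < %s | FileCheck %s --check-prefix=CHECK64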
* X86: Fold tail calls into conditional branches where possible (PR26302)  Hans Wennborg  2016-09-07  1  -0/+24

    When branching to a block that immediately tail calls, it is possible
    to fold the call directly into the branch if the call is direct and
    there is no stack adjustment, saving one byte.

    Example:

      define void @f(i32 %x, i32 %y) {
      entry:
        %p = icmp eq i32 %x, %y
        br i1 %p, label %bb1, label %bb2
      bb1:
        tail call void @foo()
        ret void
      bb2:
        tail call void @bar()
        ret void
      }

    before:

      f:
              movl    4(%esp), %eax
              cmpl    8(%esp), %eax
              jne     .LBB0_2
              jmp     foo
      .LBB0_2:
              jmp     bar

    after:

      f:
              movl    4(%esp), %eax
              cmpl    8(%esp), %eax
              jne     bar
      .LBB0_1:
              jmp     foo

    I don't expect any significant size savings from this (on a Clang
    bootstrap I saw 288 bytes), but it does make the code a little tighter.

    This patch only does 32-bit, but 64-bit would work similarly.

    Differential Revision: https://reviews.llvm.org/D24108

    llvm-svn: 280832