path: root/llvm/lib/Target/X86/X86MCInstLower.cpp
* Add support for AVX to materialize +0.0 when doing scalar FP.
  Nate Begeman, 2010-12-09 (1 file, -0/+2)
  llvm-svn: 121415
* Move lowering of TLS_addr32 and TLS_addr64 to X86MCInstLower.
  Rafael Espindola, 2010-11-28 (1 file, -1/+65)
  llvm-svn: 120263
* tidy up, no functionality change.
  Chris Lattner, 2010-11-14 (1 file, -2/+0)
  llvm-svn: 119092
* move the pic base symbol stuff up to MachineFunction
  since it is trivial and will be shared between ppc and x86. This
  substantially simplifies the X86 backend also.
  Chris Lattner, 2010-11-14 (1 file, -9/+4)
  llvm-svn: 119089
* simplify getPICBaseSymbol a bit.
  Chris Lattner, 2010-11-14 (1 file, -1/+1)
  llvm-svn: 119088
* handle X86::EH_RETURN64 and X86::EH_RETURN.
  Rafael Espindola, 2010-10-26 (1 file, -0/+17)
  llvm-svn: 117378
* reapply: Use the new TB_NOT_REVERSABLE flag instead of special
  Reapply "reimplement the second half of the or/add optimization. We
  should now" with no changes. Turns out that one missing
  "Defs = [EFLAGS]" can upset things a bit.
  Chris Lattner, 2010-10-08 (1 file, -3/+9)
  llvm-svn: 116040
* reapply the patch reverted in r116033:
  "Reimplement (part of) the or -> add optimization. Matching 'or' into 'add'"
  With a critical fix: the add pseudos clobber EFLAGS.
  Chris Lattner, 2010-10-08 (1 file, -0/+8)
  llvm-svn: 116039
* Revert "Reimplement (part of) the or -> add optimization. Matching 'or' into
  'add'", which seems to have broken just about everything.
  Daniel Dunbar, 2010-10-08 (1 file, -8/+0)
  llvm-svn: 116033
* Revert "reimplement the second half of the or/add optimization. We should now",
  which depends on r116007, which I am about to revert.
  Daniel Dunbar, 2010-10-08 (1 file, -9/+3)
  llvm-svn: 116031
* reimplement the second half of the or/add optimization. We should now
  only end up emitting LEA instead of OR. If we aren't able to promote
  something into an LEA, we should never be emitting it as an ADD.
  Add some testcases that we emit "or" in cases where we used to produce
  an "add".
  Chris Lattner, 2010-10-08 (1 file, -3/+9)
  llvm-svn: 116026
* Reimplement (part of) the or -> add optimization. Matching 'or' into 'add'
  is general goodness because it allows ORs to be converted to LEA to
  avoid inserting copies. However, this is bad because it makes the
  generated .s file less obvious and gives valgrind heartburn (tons of
  false positives in bitfield code). While the general fix should be in
  valgrind, we can at least try to avoid emitting ADD instructions that
  *don't* get promoted to LEA. This is more work because it requires
  introducing pseudo instructions to represent "add that knows the bits
  are disjoint", but hey, people really love valgrind.
  This fixes this testcase: https://bugs.kde.org/show_bug.cgi?id=242137#c20
  The add r/i cases are coming next.
  Chris Lattner, 2010-10-07 (1 file, -0/+8)
  llvm-svn: 116007
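The rewrite described in this commit rests on a simple bit-level identity: when two operands share no set bits, OR and ADD produce the same result, so the compiler may pick whichever maps onto a better instruction (here, LEA). A minimal sketch of that check, not LLVM code — the helper names are hypothetical and chosen for illustration:

```python
def bits_are_disjoint(a: int, b: int) -> bool:
    """True if no bit position is set in both operands."""
    return (a & b) == 0

def lower_or(a: int, b: int) -> int:
    """Rewrite 'or' as 'add' when the operands are disjoint.

    On x86 an add of disjoint operands is a candidate for LEA, which
    computes base + index/displacement without clobbering EFLAGS.
    """
    if bits_are_disjoint(a, b):
        return a + b   # same value as a | b when no bits overlap
    return a | b       # overlapping bits: must remain a real OR

# Typical bitfield pattern: flags packed into non-overlapping masks.
print(lower_or(0b0001_0000, 0b0000_0011))  # 0b0001_0011 == 19
```

This is exactly why the commit introduces "add that knows the bits are disjoint" pseudo instructions: only ORs proven disjoint are safe to treat as adds.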
* Massive rewrite of MMX:
  The x86_mmx type is used for MMX intrinsics, parameters and return
  values where these use MMX registers, and is also supported in load,
  store, and bitcast. Only the above operations generate MMX
  instructions, and optimizations do not operate on or produce MMX
  intrinsics. MMX-sized vectors <2 x i32> etc. are lowered to XMM or
  split into smaller pieces. Optimizations may occur on these forms and
  the result casted back to x86_mmx, provided the result feeds into a
  previously existing x86_mmx operation. The point of all this is to
  prevent optimizations from introducing MMX operations, which is unsafe
  due to the EMMS problem.
  Dale Johannesen, 2010-09-30 (1 file, -3/+0)
  llvm-svn: 115243
* Check in forgotten file. Should fix build.
  Dale Johannesen, 2010-09-08 (1 file, -1/+1)
  llvm-svn: 113409
* More fixes for win64:
  - Do not clobber al during variadic calls; this is an AMD64 ABI-only
    feature
  - Emit wincall64, where necessary
  Patch by Cameron Esfahani!
  Anton Korobeynikov, 2010-08-17 (1 file, -2/+4)
  llvm-svn: 111289
* Don't attempt to SimplifyShortMoveForm in 64-bit mode.
  Eli Friedman, 2010-08-16 (1 file, -9/+13)
  llvm-svn: 111182
* Define AVX 128-bit pattern versions of SET0PS/PD.
  Bruno Cardoso Lopes, 2010-08-12 (1 file, -2/+5)
  llvm-svn: 110937
* Begin to support some vector operations for AVX 256-bit instructions.
  The long-term goal here is to be able to match enough of vector_shuffle
  and build_vector so that all AVX intrinsics which aren't mapped to
  their own built-ins but to shufflevector calls can be codegen'd. This
  is the first (baby) step: support building zeroed vectors.
  Bruno Cardoso Lopes, 2010-08-12 (1 file, -6/+8)
  llvm-svn: 110897
* Handle the pseudo in MCInstLower.
  Eric Christopher, 2010-08-05 (1 file, -0/+6)
  llvm-svn: 110359
* X86MCInstLower now depends on AsmPrinter being around.
  Chris Lattner, 2010-07-22 (1 file, -27/+9)
  llvm-svn: 109154
* add some rough support for making mcinst lowering work without an
  asmprinter or mangler around. This is option #B for killing off
  X86InstrInfo::GetInstSizeInBytes. Option #A (killing "needsexactsize")
  was sent for consideration to llvmdev.
  Chris Lattner, 2010-07-21 (1 file, -5/+23)
  llvm-svn: 109056
* make asmprinter optional, even though passing in null will cause things
  to explode right now.
  Chris Lattner, 2010-07-20 (1 file, -10/+9)
  llvm-svn: 108955
* continue pushing dependencies around.
  Chris Lattner, 2010-07-20 (1 file, -5/+5)
  llvm-svn: 108952
* reduce X86MCInstLower dependencies on asmprinter.
  Chris Lattner, 2010-07-20 (1 file, -7/+8)
  llvm-svn: 108950
* pass around MF, not MMI.
  Chris Lattner, 2010-07-20 (1 file, -3/+3)
  llvm-svn: 108949
* cleanups.
  Chris Lattner, 2010-07-20 (1 file, -9/+7)
  llvm-svn: 108947
* move two asmprinter methods into the asmprinter .cpp file.
  Chris Lattner, 2010-07-20 (1 file, -38/+0)
  llvm-svn: 108945
* fix a layering problem by moving the x86 implementation
  of AsmPrinter and InstLowering into libx86 and out of the asmprinter
  subdirectory. Now X86/AsmPrinter just depends on MC stuff, not all of
  codegen and LLVM IR.
  Chris Lattner, 2010-07-19 (1 file, -0/+632)
  llvm-svn: 108782