Commit message log
...
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.
Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics.
MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces. Optimizations may occur on these forms, and the
result is then cast back to x86_mmx, provided the result feeds into a
pre-existing x86_mmx operation.
The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
llvm-svn: 115243
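A minimal IR sketch of the intended shape, assuming the padd.d intrinsic takes and returns x86_mmx after this change (the function name and types here are only illustrative):

    declare x86_mmx @llvm.x86.mmx.padd.d(x86_mmx, x86_mmx)

    define <2 x i32> @sum2i32(<2 x i32> %a, <2 x i32> %b) {
      ; x86_mmx values flow only between bitcasts and the MMX intrinsic,
      ; so optimizers never have to create new MMX operations
      %am = bitcast <2 x i32> %a to x86_mmx
      %bm = bitcast <2 x i32> %b to x86_mmx
      %r  = call x86_mmx @llvm.x86.mmx.padd.d(x86_mmx %am, x86_mmx %bm)
      %v  = bitcast x86_mmx %r to <2 x i32>
      ret <2 x i32> %v
    }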
string.
llvm-svn: 113615
llvm-svn: 113603
"llvm.eh.catch.all.value". Only the name needs to be changed.
llvm-svn: 113600
vabd intrinsic and add and/or zext operations. In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests. Auto-upgrade the old intrinsics.
llvm-svn: 112941
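A rough sketch of the replacement pattern for a same-width vaba, assuming the vabdu.v8i8 intrinsic name (the names and element types are illustrative):

    declare <8 x i8> @llvm.arm.neon.vabdu.v8i8(<8 x i8>, <8 x i8>)

    define <8 x i8> @vaba_u8(<8 x i8> %acc, <8 x i8> %a, <8 x i8> %b) {
      ; %acc + |%a - %b| without a dedicated vaba intrinsic
      %abd = call <8 x i8> @llvm.arm.neon.vabdu.v8i8(<8 x i8> %a, <8 x i8> %b)
      %res = add <8 x i8> %acc, %abd
      ret <8 x i8> %res
    }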
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests. Add auto-upgrade support for the old intrinsics.
llvm-svn: 112773
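For instance, a signed widening add (a vaddl-style operation) can be written as plain IR along these lines (the element types are only an example):

    define <8 x i16> @vaddl_s8(<8 x i8> %a, <8 x i8> %b) {
      ; sign-extend both operands, then use an ordinary vector add
      %as  = sext <8 x i8> %a to <8 x i16>
      %bs  = sext <8 x i8> %b to <8 x i16>
      %sum = add <8 x i16> %as, %bs
      ret <8 x i16> %sum
    }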
Auto-upgrade the old intrinsic and update tests.
llvm-svn: 112507
IR add/sub operations with one or both operands sign- or zero-extended.
Auto-upgrade the old intrinsics.
llvm-svn: 112416
Update all the tests using those intrinsics and add support for
auto-upgrading bitcode files with the old versions of the intrinsics.
llvm-svn: 112271
zero-extend operations.
llvm-svn: 111614
llvm-svn: 109092
llvm-svn: 107145
llvm-svn: 106622
llvm-svn: 106573
shifts and null vectors. Autoupgrade these to what we'd lower them to.
Add a testcase to exercise this.
llvm-svn: 101851
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.
llvm-svn: 101579
with a fix for self-hosting
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101465
llvm-svn: 101434
with a fix
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101397
llvm-svn: 101368
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101364
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100304
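Using the signatures above, the auto-upgrade rewrites a call roughly as follows (operand names are illustrative):

    ; before
    call void @llvm.memcpy.i32(i8* %dst, i8* %src, i32 %len, i32 4)
    ; after: address spaces are encoded in the name, and the trailing i1 is isVolatile
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 %len, i32 4, i1 false)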
llvm-svn: 100199
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100191
llvm-svn: 99948
memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of langref will occur in a subsequent checkin.
llvm-svn: 99928
Rewrite the pmulld patterns, and make sure that they fold loads of
arguments into the instruction.
llvm-svn: 99910
its first argument, via function-local metadata (instead of via a bitcast).
This patch also cleans up code that expects there to be a bitcast in the first argument and testcases that call llvm.dbg.declare.
It also strips old llvm.dbg.declare intrinsics that did not pass metadata as the first argument.
llvm-svn: 93531
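A sketch of the change in call form, assuming the era's function-local metadata syntax (the variable names and the old {}* first-argument type are assumptions):

    ; old form: the address was passed through a bitcast
    call void @llvm.dbg.declare({}* %x.cast, metadata !1)
    ; new form: the address is passed as function-local metadata
    call void @llvm.dbg.declare(metadata !{i32* %x}, metadata !1)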
llvm-svn: 92774
Intrinsic::dbg_stoppoint
Intrinsic::dbg_region_start
Intrinsic::dbg_region_end
Intrinsic::dbg_func_start
AutoUpgrade simply ignores these intrinsics now.
llvm-svn: 92557
so get rid of eh.selector.i64 and rename eh.selector.i32 to eh.selector.
Likewise for eh.typeid.for. This aligns us with gcc, which always uses a
32 bit value for the selector on all platforms. My understanding is that
the register allocator used to assert if the selector intrinsic size didn't
match the pointer size, and this was the reason for introducing the two
variants. However my testing shows that this is no longer the case (I
fixed some bugs in selector lowering yesterday, and some more today in the
fastisel path; these might have caused the original problems).
llvm-svn: 84106
where the element is of a basic builtin type. For example, to get
an i8* use getInt8PtrTy.
llvm-svn: 83379
Use MDNodes to encode debug info in llvm IR.
llvm-svn: 80406
llvm-svn: 80073
llvm.dbg.... global variables, to encode debugging information in llvm IR. This is mostly a mechanical change that tests metadata support very well.
This change speeds up llvm-gcc by more than 6% at "-O0 -g" (measured by compiling InstructionCombining.cpp!)
llvm-svn: 79977
llvm-svn: 78948
llvm-svn: 77635
llvm-svn: 77516
llvm-svn: 77366
Also, change MDString to use a StringRef.
llvm-svn: 77098
thanks to contexts-on-types. More to come.
llvm-svn: 77011
llvm-svn: 76702
This adds location info for all llvm_unreachable calls (which is a macro now) in
!NDEBUG builds.
In NDEBUG builds, location info and the message are off (it only prints
"UNREACHABLE executed").
llvm-svn: 75640
Make llvm_unreachable take an optional string, thus moving the cerr<< out of
line.
LLVM_UNREACHABLE is now a simple wrapper that makes the message go away for
NDEBUG builds.
llvm-svn: 75379
llvm-svn: 74973
llvm-svn: 63812
and llvm-gcc.
llvm-svn: 63786
target directories themselves. This also means that VMCore no longer
needs to know about every target's list of intrinsics. Future work
will include converting the PowerPC target to this interface as an
example implementation.
llvm-svn: 63765
s/ParamAttr/Attribute/g
s/PAList/AttrList/g
s/FnAttributeWithIndex/AttributeWithIndex/g
s/FnAttr/Attribute/g
This sets the stage
- to implement function notes as function attributes and
- to distinguish between function attributes and return value attributes.
This requires corresponding changes in llvm-gcc and clang.
llvm-svn: 56622
to different address spaces. This alters the naming scheme for those
intrinsics, e.g., atomic.load.add.i32 => atomic.load.add.i32.p0i32
llvm-svn: 54195
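For example, following the naming scheme above, with p0i32 denoting an i32 pointer in address space 0 (operand names are illustrative):

    ; before
    %old = call i32 @llvm.atomic.load.add.i32(i32* %ptr, i32 1)
    ; after: the pointer's address space is now part of the intrinsic name
    %new = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 1)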