| Commit message | Author | Age | Files | Lines |

uses them.
llvm-svn: 131591

llvm-svn: 131590

indentation level.
llvm-svn: 131589

addr_t
Address::GetCallableLoadAddress (Target *target) const;

This will resolve the load address in the Address object and optionally
decorate the address so that it can be called. For all non-ARM targets,
this essentially just returns the result of Address::GetLoadAddress (target).
For ARM targets, it checks whether the address is Thumb code and, if so,
returns an address with bit zero set to indicate a mode switch to Thumb.
This is the form function pointers must take for return addresses and when
resolving function addresses for the JIT. It is also nice to centralize
this in one spot to avoid having multiple copies of this code.
llvm-svn: 131588

llvm-svn: 131587

Add test case.
llvm-svn: 131582

tweak so that we don't depend on an uninitialized argument.
llvm-svn: 131581

llvm-svn: 131580

llvm-svn: 131579

turned on.
llvm-svn: 131578

of the comparison, so that the resulting expression is fully
normalized. This fixes PR9939.
llvm-svn: 131576

llvm-svn: 131575

llvm-svn: 131574

other things, libcxx not building.
llvm-svn: 131573

- StartChained and EndChained delimit a chained unwind area, which can contain
  additional operations to be undone if an exception occurs inside it.
- UnwindOnly declares that this function doesn't handle any exceptions. If it
  has a handler, it's an unwind handler rather than an exception handler.
- Lsda declares the location and size of the LSDA, which in the Win64 EH
  scheme is kept inside the UNWIND_INFO struct. Windows itself ignores the
  LSDA; it's used by the language-specific handler (the "personality function"
  from DWARF).
llvm-svn: 131572

llvm-svn: 131571

llvm-svn: 131567

llvm-svn: 131566

immediate operand.
llvm-svn: 131565

llvm-svn: 131561

optimized into tail-calls when possible.
llvm-svn: 131560

llvm-svn: 131559

llvm-svn: 131558

x86_64 sibcall logic. I've filed PR9943 for the sibcall problem, and
this patch alters the testcase to work around the flaw. When PR9943
is fixed, this patch should be reverted.
llvm-svn: 131557

bonus inter-library dependencies.
llvm-svn: 131556

rdar://9449159.
llvm-svn: 131555

type as input. Sorry, the test cases only trigger when DAG combine is
disabled. rdar://9449178
llvm-svn: 131553

llvm-svn: 131552

llvm-svn: 131551

bool SectionLoadList::ResolveLoadAddress (addr_t load_addr, Address &so_addr) const;

If the address falls in the last map entry, we need to look it up correctly.
My previous fix was incorrect: it looked in the first entry when there were
no addresses in the map that were > load_addr. Now we correctly look in the
last entry when our std::map::lower_bound search returns the end of the
collection.
llvm-svn: 131550

InstructionLLVM.ctor() unconditionally.
Otherwise, pass m_arch.GetMachine().
Follow-up patch for rdar://problem/9170971.
llvm-svn: 131549

llvm-svn: 131548

llvm-svn: 131547

llvm-svn: 131546

llvm-svn: 131545

llvm-svn: 131544

llvm-svn: 131543

llvm-svn: 131542

llvm-svn: 131541

llvm-svn: 131540

llvm-svn: 131539

llvm-svn: 131538

Patch by Dan Bailey
llvm-svn: 131537

Original log entry:
Refactor getActionType and getTypeToTransformTo; place all of the 'decision'
code in one place.
llvm-svn: 131536

code in one place.
llvm-svn: 131534

than either the primitive size or the element primitive size (in the case
of vectors), simplify the vector logic. No functionality change. There is
some distracting churn in the patch because I lined up comments better
while I was there; sorry about that.
llvm-svn: 131533

happily accept things like "sext <2 x i32> to <999 x i64>". It would
also accept "sext <2 x i32> to i64", though the verifier would catch
that later. Fixed by having castIsValid check that vector lengths match
except when doing a bitcast. (2) When creating a cast instruction, check
that the cast is valid (this was already done when creating constexpr
casts). While there, replace getScalarSizeInBits (used to allow more
vector casts) with getPrimitiveSizeInBits in getCastOpcode and isCastable,
since vector-to-vector casts are now handled explicitly by passing to the
element types; i.e. this bit should result in no functional change.
llvm-svn: 131532

can be used to turn a <4 x i64> into a <4 x i32>, but getCastOpcode would
assert if you passed these types to it. Note that this strictly extends the
previous functionality: if getCastOpcode previously accepted two vector types
(i.e. didn't assert) then it still will, and it returns the same opcode
(BitCast). That's because before it would only accept vectors with the same
bit width, and the new code only touches vectors with the same length.
However, if two vectors have both the same bit width and the same length,
then their element types have the same bit width, so the new logic will
return BitCast as before.
llvm-svn: 131530

splits each half. Therefore, the real problem was that we were using a
VREV64 for a 4 x i16 when we should have been using a VREV32.
Updated the test case and reverted the change to the PerfectShuffle table.
llvm-svn: 131529

llvm-svn: 131528