and provide a different set of call-clobbered registers.
llvm-svn: 77962

llvm-svn: 77956

This is not just a matter of passing in the target triple from the module;
currently backends are making decisions based on the build and host
architecture. The goal is to migrate to making these decisions based off of the
triple (in conjunction with the feature string). Thus most clients pass in the
target triple, or the host triple if that is empty.
This results in one important change in the behavior of the JIT and llc.
For the JIT, it was previously selecting the Target based on the host
(naturally), but it was setting the target machine features based on the triple
from the module. Now it is setting the target machine features based on the
triple of the host.
For llc, -march was previously only used to select the target; the target
machine features were initialized from the module's triple (which may have been
empty). Now the target triple is taken from the module, or the host's triple is
used if that is empty. Then the triple is adjusted to match -march.
The takeaway is that -march for llc is now used in conjunction with the host
triple to initialize the subtarget. If users want more deterministic behavior
from llc, they should use -mtriple, or set the triple in the input module.
llvm-svn: 77946
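As a concrete illustration of the new llc behavior described above (example invocations, not part of the original commit; the input file name is made up):

    llc -march=x86 foo.bc
        # -march picks the target; subtarget features are now initialized from the
        # module's triple (or the host triple if the module has none), adjusted to
        # match -march, so the output may differ between hosts.
    llc -mtriple=i686-pc-linux-gnu foo.bc
        # an explicit -mtriple (or a triple set in the module) makes the choice of
        # target and subtarget features deterministic.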
Fixes PR4669
llvm-svn: 77940

all of multisource as well.
llvm-svn: 77939

llvm-svn: 77926

llvm-svn: 77920

__builtin_bfin_ones does the same as ctpop, so it can be implemented in the front-end.
__builtin_bfin_loadbytes loads from an unaligned pointer with the disalignexcpt instruction. It does the same as loading from a pointer with the low bits masked. It is better if the front-end creates a masked load. We can always instruction select the masked load to disalignexcpt+load.
We keep csync/ssync/idle. These intrinsics represent instructions that need workarounds for some silicon revisions. We may even want to convert inline assembler to intrinsics to enable the workarounds.
llvm-svn: 77917
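A minimal C sketch of the front-end lowering suggested above (not from the commit; the helper names, the 32-bit load width, and the two-bit mask are assumptions, since the message only says "low bits masked"):

    #include <stdint.h>

    /* __builtin_bfin_ones performs the same operation as ctpop. */
    static inline int ones_equiv(uint32_t x) {
        return __builtin_popcount(x);
    }

    /* __builtin_bfin_loadbytes behaves like a load from the pointer with its
       low bits masked off; the masked form can always be selected back to
       disalignexcpt+load. */
    static inline uint32_t loadbytes_equiv(const void *p) {
        return *(const uint32_t *)((uintptr_t)p & ~(uintptr_t)3);
    }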
not been spilled.
llvm-svn: 77912

instruction.
llvm-svn: 77906

Allow imp-def and imp-use of anything in the scavenger asserts, just like the machine code verifier.
Allow redefinition of a sub-register of a live register.
llvm-svn: 77904

llvm-svn: 77903

We use the same constraints as GCC, including those that are slightly insane for inline assembler.
llvm-svn: 77899

Generate code for the Blackfin family of DSPs from Analog Devices:
http://www.analog.com/en/embedded-processing-dsp/blackfin/processors/index.html
We aim to be compatible with the existing GNU toolchain found at:
http://blackfin.uclinux.org/gf/project/toolchain
The back-end is experimental.
llvm-svn: 77897

llvm-svn: 77852

llvm-svn: 77841

to:
.quad X
even on a 32-bit system, where X is not 64 bits. There isn't much that
we can do here, so we just print:
.quad ((X) & 4294967295)
instead.
llvm-svn: 77818

llvm-svn: 77792

myself because I'm getting tired of seeing the red buildbots, which have
been red since 5:30PM PDT last night.
Proposed supplement to developer policy: committers should make sure to
be around to watch for buildbot failures after committing.
llvm-svn: 77785

llvm-svn: 77781

llvm-svn: 77767

alias with predicate.
llvm-svn: 77764

llvm-svn: 77761

instructions for calls since BL and BLX are always 32 bits long and BX is always
16 bits long.
Also, we should be using BLX to call external function stubs.
llvm-svn: 77756

llvm-svn: 77750

llvm-svn: 77749

- Operands which are just a label should be parsed as immediates, not memory
operands (from the assembler perspective).
- Match a few more flavors of immediates.
- Distinguish match functions for memory operands which don't take a segment
register.
- We match the .s for "hello world" now!
llvm-svn: 77745

padding is disabled, tabs get replaced by spaces except in the case of
the first operand, where the tab is output to line up the operands after
the mnemonics.
Add some better comments and eliminate redundant code.
Fix some testcases to not assume tabs.
llvm-svn: 77740

- Uses MCAsmToken::getIdentifier which returns the (sub)string representing the
meaningful contents of a string or identifier token.
- Directives aren't done yet.
llvm-svn: 77739

Also, change the scale value to always be 1 when unspecified, to match the MachineInstr
encoding.
llvm-svn: 77728

llvm-svn: 77716

MCSection subclasses yet, but this is a step in the right direction.
llvm-svn: 77708

to ensure the instruction that follows a TBB (when the number of table entries
is odd) is 2-byte aligned.
Patch by Sandeep Patel.
llvm-svn: 77705

into the mergable section if it is one of our special cases. This could
obviously be improved, but this is the minimal fix and restores us to the
previous behavior.
llvm-svn: 77679

llvm-svn: 77662

llvm-svn: 77659

- This is "experimental" code; I am feeling my way around and working out the
best way to do things (and learning tblgen in the process). Comments welcome,
but keep in mind this stuff will change radically.
- This is enough to match "subb" and friends, but not much else. The next step is to
automatically generate the matchers for individual operands.
llvm-svn: 77657

T2_i8 ones. Take that into consideration when determining the stack size limit for reserving a register scavenging slot.
llvm-svn: 77642

llvm-svn: 77637

llvm-svn: 77627

llvm-svn: 77625

llvm-svn: 77622

__sync_add_and_fetch() and __sync_sub_and_fetch.
When the return value is not used (i.e., we only care about the value in memory), x86 does not have to use xadd to implement these. Instead, it can use add, sub, inc, dec instructions with the "lock" prefix.
This is currently implemented using a bit of an instruction selection trick. The issue is that the target-independent pattern produces one output and a chain, and we want to map it into one that just outputs a chain. The current trick is to select it into a merge_values with the first definition being an implicit_def. The proper solution is to add new ISD opcodes for the no-output variant. The DAG combiner can then transform the node before it gets to target node selection.
Problem #2 is that we are adding a whole bunch of x86 atomic instructions when in fact these instructions are identical to the non-lock versions. We need a way to add target-specific information to target nodes and have this information carried over to machine instructions. The asm printer (or JIT) can use this information to add the "lock" prefix.
llvm-svn: 77582
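For reference, the source-level pattern under discussion looks like this (illustrative C, not from the commit; the function and variable names are invented):

    static long counter;

    void bump(void) {
        /* Result unused: only the memory value matters, so x86 can emit a plain
           "lock add" (or inc, dec, sub for the other variants). */
        __sync_add_and_fetch(&counter, 1);
    }

    long bump_and_read(void) {
        /* Result used: this still needs "lock xadd" plus an extra add to
           recover the new value. */
        return __sync_add_and_fetch(&counter, 1);
    }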
due to x86 encoding restrictions. This is currently off by default
because it may cause code quality regressions. This is for PR4572.
llvm-svn: 77565

llvm-svn: 77522

llvm-svn: 77521

llvm-svn: 77517

- Call RAUW to delete all instructions (this is a patch from Nick Lewycky).
llvm-svn: 77512

llvm-svn: 77478

wide vectors. Likewise, change VSTn intrinsics to take separate arguments
for each vector in a multi-vector struct. Adjust tests accordingly.
llvm-svn: 77468
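For context, this is the shape of a multi-vector struct as seen from the C-level NEON intrinsics (an illustration with an invented function name; the commit itself changes the LLVM IR intrinsics, not this API):

    #include <stdint.h>
    #include <arm_neon.h>

    void deinterleave(const uint8_t *src, uint8_t *even, uint8_t *odd) {
        uint8x8x2_t pair = vld2_u8(src);   /* one struct holding two 8x8-bit vectors */
        vst1_u8(even, pair.val[0]);
        vst1_u8(odd, pair.val[1]);
    }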