Attribute.
This is more code to isolate the use of the Attribute class to that of just
holding one attribute instead of a collection of attributes.
llvm-svn: 173094

(sub 0, (sext bool to A)) to (zext bool to A).
Patch by Muhammad Ahmad
Reviewed by Duncan Sands
llvm-svn: 173093
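
The fold relies on how the two extensions represent a boolean: sext turns true
into all-ones (-1) while zext turns it into 1, so subtracting the sext result
from zero gives exactly the zext result. A minimal standalone check of that
identity (illustration only, not the InstCombine code itself):

```cpp
#include <cassert>
#include <cstdint>

// sext i1 -> i32: true becomes all-ones (-1), false stays 0.
static int32_t sextBool(bool B) { return B ? -1 : 0; }
// zext i1 -> i32: true becomes 1, false stays 0.
static int32_t zextBool(bool B) { return B ? 1 : 0; }

int main() {
  // (sub 0, (sext b)) == (zext b) for both boolean values.
  assert(0 - sextBool(false) == zextBool(false));
  assert(0 - sextBool(true) == zextBool(true));
  return 0;
}
```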

llvm-svn: 173090

llvm-svn: 173087

llvm-svn: 173086

llvm-svn: 173085

llvm-svn: 173083

BLOB (i.e., large, performance-intensive data) in a bitcode file was switched to
invoking one virtual method call per byte read. Now we do one virtual call per
BLOB.
llvm-svn: 173065
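
The speedup comes purely from batching the virtual dispatch. A rough sketch of
the before/after shape of the read loop, using a hypothetical ByteStream
interface rather than the real bitcode reader classes:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical byte-stream interface, standing in for the real reader classes.
struct ByteStream {
  virtual ~ByteStream() = default;
  virtual uint8_t readByte() = 0;                     // one virtual call per byte
  virtual void readBytes(uint8_t *Buf, size_t N) = 0; // one virtual call per blob
};

// Before: an N-byte BLOB costs N virtual calls.
std::vector<uint8_t> readBlobSlow(ByteStream &S, size_t N) {
  std::vector<uint8_t> Blob(N);
  for (size_t I = 0; I != N; ++I)
    Blob[I] = S.readByte();
  return Blob;
}

// After: the whole BLOB is pulled in with a single virtual call.
std::vector<uint8_t> readBlobFast(ByteStream &S, size_t N) {
  std::vector<uint8_t> Blob(N);
  S.readBytes(Blob.data(), N);
  return Blob;
}
```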

scheduler to use it.
A SparseMultiSet adds multiset behavior to SparseSet, while retaining
SparseSet's desirable properties. Essentially, SparseMultiSet provides multiset
behavior by storing its dense data in doubly linked lists that are inlined into
the dense vector. This allows it to provide good data locality as well as
vector-like constant-time clear() and fast constant-time find(), insert(), and
erase(). It also allows SparseMultiSet to have a builtin recycler rather than
keeping SparseSet's behavior of always swapping upon removal, which allows it
to preserve more iterators. It's often a better alternative to a SparseSet of
a growable container or vector-of-vector.
llvm-svn: 173064
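
To make the layout idea concrete, here is a heavily simplified toy sketch that
only shows how per-key doubly linked lists can be threaded through the dense
vector; the real llvm/ADT/SparseMultiSet.h has a different interface, recycles
erased nodes, and validates the sparse array lazily instead of pre-initializing
it:

```cpp
#include <cassert>
#include <vector>

// Toy illustration of the layout only, not the real SparseMultiSet API.
struct ToySparseMultiSet {
  static const unsigned Nil = ~0u;
  struct Node {
    unsigned Key, Value;
    unsigned Prev, Next; // list links stored inline in the dense vector
  };
  std::vector<unsigned> Sparse; // Key -> index of the list head in Dense
  std::vector<Node> Dense;

  explicit ToySparseMultiSet(unsigned Universe) : Sparse(Universe, Nil) {}

  // O(1): append a node and splice it in as the new head of Key's list.
  void insert(unsigned Key, unsigned Value) {
    Node N;
    N.Key = Key;
    N.Value = Value;
    N.Prev = Nil;
    N.Next = Sparse[Key];
    unsigned Idx = static_cast<unsigned>(Dense.size());
    if (N.Next != Nil)
      Dense[N.Next].Prev = Idx;
    Dense.push_back(N);
    Sparse[Key] = Idx;
  }

  // O(1): index of the first node stored for Key, or Nil if none.
  unsigned find(unsigned Key) const { return Sparse[Key]; }
};

int main() {
  ToySparseMultiSet S(16);
  S.insert(3, 10);
  S.insert(3, 11); // duplicate keys are fine: multiset behavior
  assert(S.find(3) != ToySparseMultiSet::Nil);
  assert(S.Dense[S.find(3)].Value == 11); // newest entry is the list head
  return 0;
}
```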

it reason about the current bit position, which is always independent of the
underlying cursor's word size.
llvm-svn: 173063

bytes.
llvm-svn: 173062

llvm-svn: 173061

Patch by: Michel Dänzer
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 173053

Patch by: Michel Dänzer
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 173052

Patch by: Michel Dänzer
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 173051

llvm-svn: 173040

is free. The whole CodeMetrics API should probably be reworked more, but
this is enough to allow deleting the duplicate code there for computing
whether an instruction is free.
All of the passes using this have been updated to pull in TTI and hand
it to the CodeMetrics stuff. Further, a dead CodeMetrics API
(analyzeFunction) is nuked for lack of users.
llvm-svn: 173036

analysis. How cute that it wasn't previously. ;]
Part of this confusion stems from the flattened header file tree. Thanks
to Benjamin for pointing out the goof on IRC, and we're considering
un-flattening the headers, so speak now if that would bug you.
llvm-svn: 173033

old CodeMetrics system. TTI has the specific advantage of being
extensible and customizable by targets to reflect target-specific cost
metrics.
llvm-svn: 173032

depend on and use other analyses (as long as they're either immutable
passes or CGSCC passes of course -- nothing in the pass manager has been
fixed here). Leverage this to thread TargetTransformInfo down through
the inline cost analysis.
No functionality changed here; this just threads things through.
llvm-svn: 173031

a dynamic analysis done on each call to the routine. However, now it can
use the standard pass infrastructure to reference other analyses,
instead of a silly setter method. This will become more interesting as
I teach it about more analysis passes.
This updates the two inliner passes to use the inline cost analysis.
Doing so highlights how utterly redundant these two passes are. Either
we should find a cheaper way to do always inlining, or we should merge
the two and just fiddle with the thresholds to get the desired behavior.
I'm leaning increasingly toward the latter as it would also remove the
Inliner sub-class split.
llvm-svn: 173030

Formatting fixes brought to you by clang-format.
llvm-svn: 173029

functionality changed.
llvm-svn: 173028

llvm-svn: 173010

llvm-svn: 173009

llvm-svn: 173008

llvm-svn: 173006

llvm-svn: 173005

lowered cost.
Currently, this is a direct port of the logic implementing
isInstructionFree in CodeMetrics. The hope is that the interface can be
improved (f.ex. supporting un-formed instruction queries) and the
implementation abstracted so that as we have test cases and target
knowledge we can expose increasingly accurate heuristics to clients.
I'll start switching existing consumers over and kill off the routine in
CodeMetrics in subsequent commits.
llvm-svn: 172998
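
A sketch of how a client might ask the new interface whether instructions are
free. The method and constant names (getUserCost, TCC_Free) and the include
paths follow later LLVM releases and are assumed here; the exact spelling at
this revision may differ:

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instruction.h"

// Count how many instructions in F the target reports as free to lower.
// getUserCost/TCC_Free are taken from later LLVM releases (assumption).
static unsigned countFreeInstructions(const llvm::Function &F,
                                      const llvm::TargetTransformInfo &TTI) {
  unsigned NumFree = 0;
  for (const llvm::BasicBlock &BB : F)
    for (const llvm::Instruction &I : BB)
      if (TTI.getUserCost(&I) == llvm::TargetTransformInfo::TCC_Free)
        ++NumFree;
  return NumFree;
}
```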

llvm-svn: 172995

this file despite it not matching coding standards.
llvm-svn: 172994

llvm-svn: 172992

llvm-svn: 172990

llvm-svn: 172987

llvm-svn: 172986

llvm-svn: 172985

It is not possible to distinguish 3r instructions from 2r / rus instructions
using only the fixed bits. Therefore, if an instruction doesn't match the
2r / rus format, try to decode it as a 3r instruction before returning Fail.
llvm-svn: 172984
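
The resulting decode order is a simple fallback chain. A schematic version with
hypothetical helper names (the real XCore decoder helpers are named differently
and fill in an MCInst):

```cpp
#include <cstdint>

// Hypothetical stand-ins for the disassembler's result type and the per-format
// decode helpers; illustration of the fallback order only.
enum class DecodeStatus { Success, Fail };

DecodeStatus decode2ROrRusInstruction(uint16_t Insn);
DecodeStatus decode3RInstruction(uint16_t Insn);

// The fixed bits alone cannot separate 3r from 2r / rus encodings, so try the
// 2r / rus interpretation first and only give up if the 3r form fails too.
DecodeStatus decodeAmbiguousInstruction(uint16_t Insn) {
  if (decode2ROrRusInstruction(Insn) == DecodeStatus::Success)
    return DecodeStatus::Success;
  return decode3RInstruction(Insn);
}
```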

llvm-svn: 172971

llvm-svn: 172969

The optimization handles esoteric cases but adds a lot of complexity both to
the X86 backend and to other backends. This optimization disables an important
canonicalization of chains of SEXT nodes and makes SEXT and ZEXT asymmetrical.
Disabling the canonicalization of consecutive SEXT nodes into a single node
disables other DAG optimizations that assume that there is only one SEXT node.
The AVX mask optimization is one example. Additionally, this optimization does
not update the cost model.
llvm-svn: 172968

We ignore the CPU frontend and focus on pipeline utilization. We do this
because we don't have a good way to estimate the loop body size at the IR
level.
llvm-svn: 172964

eagerly calling it.
llvm-svn: 172953

new advance() APIs, simplifying things and making a bunch of details more
private to BitstreamCursor.
llvm-svn: 172947

llvm-svn: 172941

method to ease the transition.
llvm-svn: 172940

llvm-svn: 172936

from the SVOp passed in.
llvm-svn: 172935

llvm-svn: 172933

llvm-svn: 172931

llvm-svn: 172930