Now that MachineBasicBlock::reverse_instr_iterator knows when it's at
the end (since r281168 and r281170), implement
MachineBasicBlock::reverse_iterator directly on top of an
ilist::reverse_iterator by adding an IsReverse template parameter to
MachineInstrBundleIterator. This replaces another hard-to-reason-about
use of std::reverse_iterator on list iterators, matching the changes for
ilist::reverse_iterator from r280032 (see the "out of scope" section at
the end of that commit message). MachineBasicBlock::reverse_iterator
now has a handle to the current node and has obvious invalidation
semantics.
r280032 has a more detailed explanation of how list-style reverse
iterators (invalidated when the pointed-at node is deleted) are
different from vector-style reverse iterators like std::reverse_iterator
(invalidated on every operation). A great motivating example is this
commit's changes to lib/CodeGen/DeadMachineInstructionElim.cpp.
Note: If your out-of-tree backend deletes instructions while iterating
on a MachineBasicBlock::reverse_iterator or converts between
MachineBasicBlock::iterator and MachineBasicBlock::reverse_iterator,
you'll need to update your code in similar ways to r280032. The
following table might help:
    [Old]                           ==>  [New]
    delete &*RI, RE = end()              delete &*RI++
    RI->erase(), RE = end()              RI++->erase()
    reverse_iterator(I)                  std::prev(I).getReverse()
    reverse_iterator(I)                  ++I.getReverse()
    --reverse_iterator(I)                I.getReverse()
    reverse_iterator(std::next(I))       I.getReverse()
    RI.base()                            std::prev(RI).getReverse()
    RI.base()                            ++RI.getReverse()
    --RI.base()                          RI.getReverse()
    std::next(RI).base()                 RI.getReverse()
(For more details, have a look at r280032.)
llvm-svn: 281172
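Editor's note: a minimal sketch of the new deletion idiom described above. This is not code from the commit; isDead() is a placeholder predicate, not an LLVM API.

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstr.h"
    using namespace llvm;

    static bool isDead(const MachineInstr &MI); // placeholder predicate

    static void eraseDeadInstrs(MachineBasicBlock &MBB) {
      for (MachineBasicBlock::reverse_iterator RI = MBB.rbegin(),
                                               RE = MBB.rend();
           RI != RE;) {
        MachineInstr *MI = &*RI;
        ++RI; // step past MI first; the list-based reverse_iterator and RE
              // stay valid across the erase below
        if (isDead(*MI))
          MI->eraseFromParent();
      }
    }

This corresponds to the "delete &*RI++" row of the table: advance the iterator off the node before erasing it, and rend() never needs to be recomputed.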
llvm-svn: 273330
Re-commits the optimization bisect support originally added in r267022.
The original commit was reverted because of a buildbot problem with LazyCallGraph::SCC handling (not related to the OptBisect handling).
Differential Revision: http://reviews.llvm.org/D19172
llvm-svn: 267231
This reverts commit r267022, due to an ASan failure:
http://lab.llvm.org:8080/green/job/clang-stage2-cmake-RgSan_check/1549
llvm-svn: 267115
This patch implements an optimization bisect feature, which allows optimizations to be selectively disabled at compile time in order to track down test failures that are caused by incorrect optimizations.
The bisection is enabled using a new command line option (-opt-bisect-limit). Individual passes that may be skipped call the OptBisect object (via an LLVMContext) to see if they should be skipped based on the bisect limit. A finer level of control (disabling individual transformations) can be managed through an additional OptBisect method, but this is not yet used.
The skip checking in this implementation is based on (and replaces) the skipOptnoneFunction check. Where that check was being called, a new call has been inserted in its place which checks the bisect limit and the optnone attribute. A new function call that behaves in a similar way has been added for module and SCC passes.
Differential Revision: http://reviews.llvm.org/D19172
llvm-svn: 267022
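Editor's note: a hedged sketch of the skip pattern described above, inside a pass. The helper is spelled skipFunction() in today's tree and checks both the -opt-bisect-limit counter and the optnone attribute; the exact name at this revision may differ, and ExamplePass is hypothetical.

    #include "llvm/IR/Function.h"
    #include "llvm/Pass.h"
    using namespace llvm;

    namespace {
    struct ExamplePass : public FunctionPass {
      static char ID;
      ExamplePass() : FunctionPass(ID) {}

      bool runOnFunction(Function &F) override {
        if (skipFunction(F)) // bisect limit reached or optnone set
          return false;      // report "no change" when skipped
        // ... the real transformation would run here ...
        return false;
      }
    };
    } // end anonymous namespace

    char ExamplePass::ID = 0;

Bisection is then driven from the command line, e.g. opt -O2 -opt-bisect-limit=N, shrinking N until the miscompile disappears.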
With subregister liveness enabled we can detect the case where only
parts of a register are live in; this is expressed as a 32-bit lane mask.
The current code only keeps registers in the live-in list, so all
subregisters affected by the lane mask had to be enumerated. This turned
out to be too conservative, as a subregister may also cover additional
lanes that are not actually live. Expressing a given lane mask by
enumerating a minimal set of subregisters is computationally expensive,
so the best solution is to simply store the lane masks in the live-in
list as well. This reduces memory usage for targets using subregister
liveness and slightly increases it for other targets.
Differential Revision: http://reviews.llvm.org/D12442
llvm-svn: 247171
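Editor's note: a sketch of reading the extended live-in list. The field names follow the current liveins() interface (PhysReg plus LaneMask); at this revision the mask was a raw 32-bit value rather than today's LaneBitmask, so treat the exact types as an assumption.

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/MC/LaneBitmask.h"
    using namespace llvm;

    // Returns true if all lanes of PhysReg are live into MBB, not just the
    // sub-register lanes named by the stored mask.
    static bool isFullyLiveIn(const MachineBasicBlock &MBB, unsigned PhysReg) {
      for (const auto &LI : MBB.liveins())
        if (LI.PhysReg == PhysReg)
          return LI.LaneMask == LaneBitmask::getAll();
      return false;
    }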
llvm-svn: 245895
Instead of the pattern
for (auto I = x.rbegin(), E = x.rend(); I != E; ++I)
we can use make_range to construct the reverse range and iterate over
that.
llvm-svn: 243163
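Editor's note: a toy illustration of the pattern; SmallVector and sumReversed are only for the example.

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/ADT/iterator_range.h" // make_range
    using namespace llvm;

    static int sumReversed(const SmallVectorImpl<int> &X) {
      int Sum = 0;
      // Range-based form of the explicit rbegin()/rend() loop above.
      for (int V : make_range(X.rbegin(), X.rend()))
        Sum += V;
      return Sum;
    }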
Summary:
Initially, these intrinsics seemed like part of a family of "frame"
related intrinsics, but now I think that's more confusing than helpful.
Initially, the LangRef specified that this would create a new kind of
allocation that would be allocated at a fixed offset from the frame
pointer (EBP/RBP). We ended up dropping that design, and leaving the
stack frame layout alone.
These intrinsics are really about sharing local stack allocations, not
frame pointers. I intend to go further and add an `llvm.localaddress()`
intrinsic that returns whatever register (EBP, ESI, ESP, RBX) is being
used to address locals, which should not be confused with the frame
pointer.
Naming suggestions at this point are welcome; I'm happy to re-run sed.
Reviewers: majnemer, nicholas
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11011
llvm-svn: 241633
Apparently, the style needs to be agreed upon first.
llvm-svn: 240390
The patch is generated using this command:
tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
llvm/lib/
Thanks to Eugene Kosov for the original patch!
llvm-svn: 240137
llvm-svn: 237726
These intrinsics allow multiple functions to share a single stack
allocation from one function's call frame. The function with the
allocation may only perform one allocation, and it must be in the entry
block.
Functions accessing the allocation call llvm.recoverframeallocation with
the function whose frame they are accessing and a frame pointer from an
active call frame of that function.
These intrinsics are very difficult to inline correctly, so the
intention is that they be introduced rarely, or at least very late
during EH preparation.
Reviewers: echristo, andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D6493
llvm-svn: 225746
llvm-svn: 219672
New function to erase a machine instruction and mark DBG_VALUE
for removal. A DBG_VALUE is marked for removal when it references
an operand defined in the instruction.
Use the new function to clean up code in the dead machine instruction
removal pass.
llvm-svn: 215580
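Editor's note: the function's name is not visible in this excerpt; the helper in today's tree that matches the description is MachineInstr::eraseFromParentAndMarkDBGValuesForRemoval(), used below purely for illustration (isDead() is a placeholder predicate).

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstr.h"
    using namespace llvm;

    static bool isDead(const MachineInstr &MI); // placeholder predicate

    static void removeDeadInstrs(MachineBasicBlock &MBB) {
      for (MachineBasicBlock::iterator I = MBB.begin(); I != MBB.end();) {
        MachineInstr &MI = *I++;
        if (isDead(MI))
          // Erases MI and marks any DBG_VALUE that references one of MI's
          // defs, so no DBG_VALUE is left pointing at a deleted def.
          MI.eraseFromParentAndMarkDBGValuesForRemoval();
      }
    }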
shorter/easier and have the DAG use that to do the same lookup. This
can be used in the future for TargetMachine based caching lookups from
the MachineFunction easily.
Update the MIPS subtarget switching machinery to update this pointer
at the same time it runs.
llvm-svn: 214838
information and update all callers. No functional change.
llvm-svn: 214781
define below all header includes in the lib/CodeGen/... tree. While the
current modules implementation doesn't check for this kind of ODR
violation yet, it is likely to grow support for it in the future. It
also removes one layer of macro pollution across all the included
headers.
Other sub-trees will follow.
llvm-svn: 206837
instead of comparing to nullptr.
llvm-svn: 206142
pass normally runs at optimization level None, or is part of the
register allocation pipeline.
llvm-svn: 205228
This patch fixes a bug in the peephole optimization that folds a load defining a vreg into the sole use of that vreg. With debug info, a DBG_VALUE that referenced the vreg was considered to be a use, preventing the optimization. The fix is to ignore DBG_VALUEs during the optimization, and to set to undef any DBG_VALUE that references a vreg that gets removed.
Patch by Trevor Smigiel!
llvm-svn: 203829
class.
llvm-svn: 203220
Remove the old functions.
llvm-svn: 202636
llvm-svn: 182531
Now that return value registers are return instruction uses, there is no
need for special treatment of return blocks.
llvm-svn: 174416
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.
Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]
llvm-svn: 169131
Using the cached bit vector in MRI avoids constantly allocating and
recomputing the reserved register bit vector.
llvm-svn: 165983
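Editor's note: a minimal sketch of the pattern this commit switches to, assuming the MachineRegisterInfo::getReservedRegs() accessor.

    #include "llvm/CodeGen/MachineRegisterInfo.h"
    using namespace llvm;

    // The BitVector is computed once and cached by MRI, so this query does
    // not allocate or recompute anything.
    static bool isReservedReg(const MachineRegisterInfo &MRI, unsigned Reg) {
      return MRI.getReservedRegs().test(Reg);
    }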
No functional change intended.
Sorry for the churn. The iterator classes are supposed to help avoid
giant commits like this one in the future. The TableGen-produced
register lists are getting quite large, and it may be necessary to
change the table representation.
This makes it possible to do so without changing all clients (again).
llvm-svn: 157854
MCRegAliasIterator can optionally visit the register itself, allowing
for simpler code.
llvm-svn: 157837
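Editor's note: a sketch of the simplification this enables; clearRegAndAliases and the LivePhysRegs bit vector are illustrative, not LLVM APIs.

    #include "llvm/ADT/BitVector.h"
    #include "llvm/MC/MCRegisterInfo.h"
    using namespace llvm;

    static void clearRegAndAliases(unsigned Reg, const MCRegisterInfo &MCRI,
                                   BitVector &LivePhysRegs) {
      // With IncludeSelf = true the loop covers Reg itself as well as every
      // register aliasing it, so Reg needs no separate handling.
      for (MCRegAliasIterator AI(Reg, &MCRI, /*IncludeSelf=*/true);
           AI.isValid(); ++AI)
        LivePhysRegs.reset(*AI);
    }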
static data size.
llvm-svn: 152016
llvm-svn: 152001
I think this was already the intention, but DeadMachineInstructionElim
was accidentally tracking the liveness of reserved registers. Now,
instructions with reserved defs are never deleted.
This prevents the call stack adjustment instructions from getting
deleted when enabling register masks.
llvm-svn: 150116
Moving toward a uniform style of pass definition to allow easier target configuration.
Globally declare Pass ID.
Globally declare pass initializer.
Use INITIALIZE_PASS consistently.
Add a call to the initializer from CodeGen.cpp.
Remove redundant "createPass" functions and "getPassName" methods.
While cleaning up declarations, cleaned up comments (sorry for large diff).
llvm-svn: 150100
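Editor's note: a sketch of the uniform style described above for a hypothetical machine-function pass; the names and the exact include set are illustrative.

    #include "llvm/CodeGen/MachineFunctionPass.h"
    using namespace llvm;

    namespace {
    class ExampleCleanup : public MachineFunctionPass {
    public:
      static char ID; // pass ID, defined at namespace scope below
      ExampleCleanup() : MachineFunctionPass(ID) {}
      bool runOnMachineFunction(MachineFunction &MF) override { return false; }
    };
    } // end anonymous namespace

    char ExampleCleanup::ID = 0;

    // INITIALIZE_PASS generates initializeExampleCleanupPass(), which the
    // library's init entry point (e.g. CodeGen.cpp) then calls once.
    INITIALIZE_PASS(ExampleCleanup, "example-cleanup", "Example cleanup pass",
                    false, false)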
llvm-svn: 150094
It doesn't seem worthwhile to give meaning to a NULL register mask
pointer. It complicates all the code using register mask operands.
llvm-svn: 149646
Don't track live physregs that are clobbered by a register mask operand.
llvm-svn: 148588
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.
For properties like mayLoad / mayStore, look into the bundle: if any of
the bundled instructions has the property, return true.
For properties like isPredicable, only return true if *all* of the
bundled instructions have the property.
For properties like canFoldAsLoad or isCompare, conservatively return
false for bundles.
llvm-svn: 146026
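Editor's note: a conceptual sketch of the any-vs-all rule. The real queries live on MachineInstr itself; these free functions over an explicit list of bundled instructions are only for illustration.

    #include "llvm/CodeGen/MachineInstr.h"
    #include <algorithm>
    #include <vector>
    using namespace llvm;

    // mayLoad: true if ANY instruction in the bundle may load.
    static bool bundleMayLoad(const std::vector<const MachineInstr *> &Bundle) {
      return std::any_of(Bundle.begin(), Bundle.end(),
                         [](const MachineInstr *MI) { return MI->mayLoad(); });
    }

    // isPredicable: true only if ALL instructions in the bundle are.
    static bool
    bundleIsPredicable(const std::vector<const MachineInstr *> &Bundle) {
      return std::all_of(Bundle.begin(), Bundle.end(),
                         [](const MachineInstr *MI) { return MI->isPredicable(); });
    }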
Patch by Sanjoy Das!
llvm-svn: 133910
These functions no longer assert when passed 0, but simply return false instead.
No functional change intended.
llvm-svn: 123155
Instead, encode the LLVM IR level property "HasSideEffects" in an operand
(shared with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects()
to check the operand when the instruction is an INLINEASM.
This allows memory instructions to be moved around INLINEASM instructions.
llvm-svn: 123044
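Editor's note: a small sketch of the kind of check this enables, using the query named above; blocksLoadMotion is illustrative.

    #include "llvm/CodeGen/MachineInstr.h"
    using namespace llvm;

    // An INLINEASM whose side-effect operand bit is clear no longer acts as
    // a barrier, so a load can be scheduled across it.
    static bool blocksLoadMotion(const MachineInstr &MI) {
      return MI.mayStore() || MI.hasUnmodeledSideEffects();
    }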
exposes an initializeMyPassFunction(), which
must be called in the pass's constructor. This function uses static dependency declarations to recursively initialize
the pass's dependencies.
Clients that only create passes through the createFooPass() APIs will require no changes. Clients that want to use the
command-line options for passes will need to manually call the appropriate initialization functions in PassInitialization.h
before parsing command-line arguments.
I have tested this with all standard configurations of clang and llvm-gcc on Darwin. It is possible that there are problems
with the static dependencies that will only be visible with non-standard options. If you encounter any crash in pass
registration/creation, please send the testcase to me directly.
llvm-svn: 116820
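Editor's note: a sketch of what such a client's start-up looks like. The grouped initializers below (initializeCore, initializeCodeGen) are the ones in today's tree, where the header is spelled llvm/InitializePasses.h; treat the exact names as assumptions.

    #include "llvm/InitializePasses.h"
    #include "llvm/PassRegistry.h"
    #include "llvm/Support/CommandLine.h"
    using namespace llvm;

    int main(int argc, char **argv) {
      // Register passes (and their command-line options) before parsing.
      PassRegistry &Registry = *PassRegistry::getPassRegistry();
      initializeCore(Registry);
      initializeCodeGen(Registry);
      cl::ParseCommandLineOptions(argc, argv, "example tool\n");
      return 0;
    }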
llvm-svn: 115996
Reserved registers are unpredictable, and are treated as always live by machine
DCE.
Allocatable registers are never reserved, and can be used for virtual registers.
Unreserved, unallocatable registers cannot be used for virtual registers, but
otherwise behave like a normal register. Most targets only have
the flag register in this set.
llvm-svn: 112649
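Editor's note: the three categories above, expressed with today's MachineRegisterInfo queries (which postdate this commit); classifyPhysReg is illustrative.

    #include "llvm/CodeGen/MachineRegisterInfo.h"
    using namespace llvm;

    static const char *classifyPhysReg(const MachineRegisterInfo &MRI,
                                       unsigned Reg) {
      if (MRI.isReserved(Reg))
        return "reserved: unpredictable, treated as always live by DCE";
      if (MRI.isAllocatable(Reg))
        return "allocatable: may be assigned to virtual registers";
      return "unreserved but unallocatable: behaves like a normal register";
    }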
llvm-svn: 110460
llvm-svn: 110410
address of the static
ID member as the sole unique type identifier. Clean up APIs related to this change.
llvm-svn: 110396
llvm-svn: 109045
llvm-svn: 97578
didn't handle
X =
Y<dead> = use X
DBG_VALUE(X)
I was hoping to avoid this approach as it's slower,
but I don't think it can be done.
llvm-svn: 95996
same dead instruction.
llvm-svn: 95890