the other files that have the same typo. All comments, no functionality change! (Merely a "fuctionality" change.)
Bonus change to remove emacs major mode marker from SystemZMachineFunctionInfo.cpp because emacs already knows it's C++ from the extension. Also fix typo "appeary" in AMDGPUMCAsmInfo.h.
llvm-svn: 243585
Previously successor selection was simply wrong.
llvm-svn: 243545
llvm-svn: 243544
Summary:
As added initially, statepoints required their call targets to be a
constant pointer null if ``numPatchBytes`` was non-zero. This turns out
to be a problem ergonomically, since there is no way to mark patchable
statepoints as calling a (readable) symbolic value.
This change removes the restriction of requiring ``null`` call targets
for patchable statepoints, and changes PlaceSafepoints to maintain the
symbolic call target through its transformation.
Reviewers: reames, swaroop.sridhar
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11550
llvm-svn: 243502
llvm-svn: 243471
llvm-svn: 243466
Before the patch, the checks were generated internally in
addRuntimeCheck. Now, we use the new overloaded version of
addRuntimeCheck that takes the ready-made set of checks as a parameter.
The checks are now generated by the client (LoopDistribution) with the
new RuntimePointerChecking::generateChecks API.
Also the new printChecks API is used to print out the checks for
debugging.
This is to continue the transition over to the new model whereby clients
will get the full set of checks from LAA, filter it and then pass it to
LoopVersioning and in turn to addRuntimeCheck.
llvm-svn: 243382
Summary:
If a scale or a base register can be rewritten as "Zext({A,+,1})" then
LSR will now consider a formula of that form in its normal cost
computation.
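As an illustration (not from the original patch), this is the kind of source loop where the address is built from a zero-extended induction variable, i.e. a "zext({0,+,1})" expression, which LSR can now cost directly:

void copy(float *dst, const float *src, unsigned n) {
  // On an LP64 target the 32-bit counter is zero-extended to 64 bits for
  // addressing, so the index expression is zext i32 {0,+,1} to i64.
  for (unsigned i = 0; i != n; ++i)
    dst[i] = src[i];
}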
Depends on D9180
Reviewers: qcolombet, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9181
llvm-svn: 243348
This reverts commit r243167.
Duncan pointed out that dyn_cast can return null in these cases, so this
was an unsafe commit to make. Sorry for the noise.
Worryingly, there were no tests that failed...
llvm-svn: 243302
r243250 appeared to break clang/test/Analysis/dead-store.c on one of the build
slaves, but I couldn't reproduce this failure locally. Probably a false
positive, as I saw this test reported as broken by r243246 or r243247 too, but it
passed later without anyone fixing anything.
llvm-svn: 243253
breaks tests
llvm-svn: 243251
CostModel::getInstructionCost
Summary:
This patch updates TargetTransformInfoImplCRTPBase::getGEPCost to consider
addressing modes. It now returns TCC_Free when the GEP can be completely folded
to an addressing mode.
I started this patch as I refactored SLSR. Function isGEPFoldable looks common
and is indeed used by some WIP of mine. So I extracted that logic to getGEPCost.
Furthermore, I noticed getGEPCost wasn't directly tested anywhere. The best
testing bed seems to be CostModel, but its getInstructionCost method invokes
getAddressComputationCost for GEPs which provides very coarse estimation. So
this patch also makes getInstructionCost call the updated getGEPCost for GEPs.
This change inevitably breaks some tests because the cost model changes, but
nothing looks seriously wrong -- if we believe the new cost model is the right
way to go, these tests should be updated.
This patch is not perfect yet -- the comments in some tests need to be updated.
I want to know whether this is the right approach before fixing those details.
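As a rough illustration of the addressing-mode case (not part of the patch itself): the GEP computing the address below folds into the load on common targets, so the updated getGEPCost can report it as TCC_Free.

int load_elem(const int *a, long i) {
  // gep + load become a single load with a base + index*4 addressing mode,
  // so the address computation itself costs nothing extra.
  return a[i];
}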
Reviewers: chandlerc, hfinkel
Subscribers: aschwaighofer, llvm-commits, aemerson
Differential Revision: http://reviews.llvm.org/D9819
llvm-svn: 243250
conditions more effectively
Summary:
This patch improves trivial loop unswitch.
The current trivial loop unswitch only checks whether the loop header's terminator contains a trivial unswitch condition. But if the loop header has only one reachable successor (due to intentionally or unintentionally missed code simplification), we should consider that successor part of the loop header. Therefore, instead of stopping at the loop header's terminator, we should keep traversing its successors within the loop until we reach a *real* conditional branch or switch (whose condition cannot be constant folded). This change enables a single -loop-unswitch pass to unswitch multiple trivial conditions (unswitching one trivial condition can open an opportunity to unswitch another one in the same loop), whereas the old implementation could unswitch only one per pass.
Reviewers: reames, broune
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11481
llvm-svn: 243203
This patch extends the LoopReroll pass to handle loops similar to the following:
while (len > 1) {
sum4 += buf[len];
sum4 += buf[len-1];
len -= 2;
}
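A hedged sketch of the rerolled form (assuming an even initial len; the actual pass works on the IR and handles the trip count exactly):

while (len > 0) {
  sum4 += buf[len];   // one element per iteration instead of the 2x-unrolled body
  len -= 1;
}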
llvm-svn: 243171
Since both places which set this variable do so with dyn_cast, and not
dyn_cast_or_null, it's impossible to get a nullptr here, so we can remove
the check.
llvm-svn: 243167
Instead of the pattern
for (auto I = x.rbegin(), E = x.rend(); I != E; ++I)
we can use make_range to construct the reverse range and iterate using
that instead.
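A minimal self-contained sketch of the idea (LLVM's own helper is llvm::make_range in ADT/iterator_range.h; the versions below are illustrative stand-ins):

#include <iostream>
#include <vector>

template <typename Iter> struct iterator_range {
  Iter B, E;
  Iter begin() const { return B; }
  Iter end() const { return E; }
};
template <typename Iter> iterator_range<Iter> make_range(Iter B, Iter E) {
  return {B, E};
}

int main() {
  std::vector<int> x = {1, 2, 3};
  // Reverse iteration without spelling out the iterator loop by hand.
  for (int v : make_range(x.rbegin(), x.rend()))
    std::cout << v << ' ';   // prints: 3 2 1
}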
llvm-svn: 243163
The names of the inserted instructions were previously dependent on iteration order. By deriving the names from the original instructions, we can avoid instability in tests without resorting to ordered traversals. It also makes the IR mildly easier to read at large scale.
llvm-svn: 243140
Summary:
Resolving a branch allows us to ignore blocks that won't be executed, and thus make our estimate more accurate.
This patch is intended to be applied after D10205 (though it could be applied independently).
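A hypothetical example of the effect (not from the patch): when the analyzer simulates each unrolled iteration of the loop below, the branch on the induction variable resolves, so the guarded block is only counted for the one iteration that executes it.

int branch_resolution_demo(const int *a) {
  int s = 0;
  for (int i = 0; i < 4; ++i) {
    if (i == 0)   // resolves while simulating each unrolled iteration
      s += 100;   // counted once, not in all four unrolled copies
    s += a[i];
  }
  return s;
}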
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10206
llvm-svn: 243084
llvm-svn: 243076
pointer algorithm [NFC]
The new code should hopefully be equivalent to the old code; it just uses a worklist to track instructions which need to be visited rather than iterating over all visited instructions each time. This should be faster, but the primary benefit is that the purpose should be clearer and the diff of adding another instruction type (forthcoming) much more obvious.
Differential Revision: http://reviews.llvm.org/D11480
llvm-svn: 243071
This check is already done by findClosestMatchingDominator.
llvm-svn: 243065
w/more than just PHIs
Today, Select instructions also have associated PhiStates. In the near future, so will ExtractElement and ShuffleVector.
llvm-svn: 243056
Delete much of the code using trace-rewrite-statepoints and use idiomatic DEBUG statements instead. This includes adding operator<< to a helper class.
llvm-svn: 243054
We don't need to pass in the map from BDV to PhiStates; we can instead handle that externally and let the MeetPhiStates helper class just meet PhiStates.
llvm-svn: 243045
Summary:
Scalarizer has two data structures that hold information about changes
to the function, Gathered and Scattered. These are cleared in finish()
at the end of runOnFunction() if finish() detects any changes to the
function.
However, finish() was checking for changes by only checking if
Gathered was non-empty. The function visitStore() only modifies
Scattered without touching Gathered. As a result, Scattered could have
ended up having stale data if Scalarizer only scalarized store
instructions. Since the data in Scattered is used during the execution
of the pass, this introduced dangling pointer errors.
The fix is to check whether both Scattered and Gathered are empty
before deciding what to do in finish(). This also fixes a problem
where the Function can be modified although the pass returns false.
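A minimal standalone sketch of the fixed check, using the member names mentioned above purely for illustration with placeholder types (this is not the actual Scalarizer code):

#include <map>
#include <vector>

struct ScalarizerSketch {
  std::map<int, std::vector<int>> Gathered, Scattered;

  bool finish() {
    // Before the fix only Gathered was tested, so Scattered could carry
    // stale entries into the next runOnFunction() invocation.
    if (Gathered.empty() && Scattered.empty())
      return false;
    // ... apply the recorded changes to the function ...
    Gathered.clear();
    Scattered.clear();
    return true;
  }
};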
Reviewers: rnk
Subscribers: rnk, srhines, llvm-commits
Differential Revision: http://reviews.llvm.org/D10459
llvm-svn: 243040
the general GMR-in-non-LTO flag.
Without this, we have the global information during the CGSCC pipeline
for GVN and such, but don't have it available during the late loop
optimizations such as the vectorizer. Moreover, after the CGSCC pipeline
has finished we have substantially more accurate and refined call graph
information, function annotations, etc, which will make GMR even more
powerful than it is early in the pipeline.
Note that we have to play silly games with preserving AliasAnalysis
(which is now trivially preserved) in order to let a module analysis
magically be preserved into the entire function pass pipeline.
Simultaneously we have to not make GMR an immutable pass in order to be
able to re-run it and collect fresh data on the final call graph.
llvm-svn: 242999
preparation for de-coupling the AA implementations.
In order to do this, they had to become fake-scoped using the
traditional LLVM pattern of a leading initialism. These can't be actual
scoped enumerations because they're bitfields and thus inherently we use
them as integers.
I've also renamed the behavior enums that are specific to reasoning
about the mod/ref behavior of functions when called. This makes it more
clear that they have a very narrow domain of applicability.
I think there is a significantly cleaner API for all of this, but
I don't want to try to do really substantive changes for now, I just
want to refactor the things away from analysis groups so I'm preserving
the exact original design and just cleaning up the names, style, and
lifting out of the class.
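For illustration only (the names here are invented, not the ones introduced by the patch), the "fake-scoped" pattern looks like this; the enum stays unscoped so its values convert to integers and can be or'd together and stored in bitfields:

enum MRBehavior {
  MRB_NoModRef = 0,
  MRB_Ref = 1,
  MRB_Mod = 2,
  MRB_ModRef = MRB_Ref | MRB_Mod,
};

inline bool mayWriteMemory(unsigned Flags) { return (Flags & MRB_Mod) != 0; }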
Differential Revision: http://reviews.llvm.org/D10564
llvm-svn: 242963
through APIs that are no longer necessary now that the update API has
been removed.
This will make changes to the AA interfaces significantly less
disruptive (I hope). Either way, it seems like a really nice cleanup.
llvm-svn: 242882
non-trivial loop unswitch in processCurrentLoop()
Summary: The current code in LoopUnswitch::processCurrentLoop() mixes trivial and non-trivial loop unswitching together. It goes over all basic blocks in the loop and checks whether a condition is a trivial or non-trivial unswitch condition. However, a trivial unswitch condition can only occur in the loop header basic block (where it controls whether or not the loop does anything at all). This refactoring separates trivial loop unswitch from non-trivial loop unswitch. Before going over all basic blocks in the loop, it checks whether the loop header contains a trivial unswitch condition. If so, it unswitches it. Otherwise, it goes over all blocks as before, but no longer checks for trivial conditions, since they cannot occur in the other blocks. There is no functionality change.
Reviewers: meheff, reames, broune
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11276
llvm-svn: 242873
types and loads, loads or stores widened past the size of an alloca,
etc.
This started off with a bug report about big-endian behavior with
bitfields and loads and stores to a { i32, i24 } struct. An initial
attempt to fix this was sent for review in D10357, but that didn't
really get to the root of the problem.
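For reference, a struct of the shape from that bug report might look like the following at the source level (assumed layout; the exact IR aggregate depends on the frontend and target):

struct Bitfields {
  unsigned a : 32;
  unsigned b : 24;   // storage lowers to something like { i32, i24 }
};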
The core issue was that canConvertValue and convertValue in SROA were
handling different bitwidth integers by doing a zext of the integer. It
wouldn't do a trunc though, only a zext! This would in turn lead SROA to
form an i24 load from an i24 alloca, zext it to i32, and then use it.
This would at least produce the wrong value for big-endian systems.
One of my many false starts here was to correct the computation for
big-endian systems by shifting. But this doesn't actually work because
the original code has a 64-bit store to the entire 8 bytes, and a 32-bit
load of the last 4 bytes, and because the alloc size is 8 bytes, we
can't lose that last (least significant if big-endian) byte! The real
problem here is that we're forming an i24 load in SROA which is actually
not sufficiently wide to load all of the necessary bits here. The source
has an i32 load, and SROA needs to form that as well.
The straightforward way to do this is to disable the zext logic in
canConvertValue and convertValue, forcing us to actually load all
32-bits. This seems like a really good change, but it in turn breaks
several other parts of SROA.
First in the chain of knock-on failures, we had places where we were
doing integer-widening promotion even though some of the integer loads
or stores extended *past the end* of the alloca's memory! There was even
a comment about preventing this, but it only prevented the case where
the type had a different bit size from its store size. So I added checks
to handle the cases where we actually have a widened load or store and
to avoid trying special integer widening promotion in those cases.
Second, we actually rely on the ability to promote in the face of loads
past the end of an alloca! This is important so that we can (for
example) speculate loads around PHI nodes to do more promotion. The bits
loaded are garbage, but as long as they aren't used and the alignment is
suitably high (which it wasn't in the test case!) this is "fine". And we
can't stop promoting here, lots of things stop working well if we do. So
we need to add specific logic to handle the extension (and truncation)
case, but *only* where that extension or truncation are over bytes that
*are outside the alloca's allocated storage* and thus totally bogus to
load or store.
And of course, once we add back this correct handling of extension or
truncation, we need to correctly handle big-endian systems to avoid
re-introducing the exact bug that started us off on this chain of misery
in the first place, but this time even more subtle as it only happens
along speculated loads atop a PHI node.
I've ported an existing test for PHI speculation to the big-endian test
file and checked that we get that part correct, and I've added several
more interesting big-endian test cases that should help check that we're
getting this correct.
Fun times.
llvm-svn: 242869
when inserting a new range. No functionality change intended. Patch by Anthony Pesch!
llvm-svn: 242843
Use a named lambda for readability, common up some code, remove a stale comment, and use LLVM-style variable names.
llvm-svn: 242827
llvm-svn: 242808
A bit more code cleanup: delete a trivially true assertion and its supporting code, remove a redundant cast, and use count in assertions where feasible.
llvm-svn: 242805
We can use builders to simplify part of the code, and we only check for the existence of the metadata value; this enables us to delete some redundant code.
llvm-svn: 242751
directly model in the new PM.
This also was an incredibly brittle and expensive update API that was
never fully utilized by all the passes that claimed to preserve AA, nor
could it reasonably have been extended to all of them. Any number of
places add uses of values. If we ever wanted to reliably instrument
this, we would want a callback hook much like we have with ValueHandles,
but doing this for every use addition seems *extremely* expensive in
terms of compile time.
The only user of this update mechanism is GlobalsModRef. The idea of
using this to keep it up to date doesn't really work anyways as its
analysis requires a symmetric analysis of two different memory
locations. It would be very hard to make updates be sufficiently
rigorous to *guarantee* symmetric analysis in this way, and it pretty
certainly isn't true today.
However, folks have been using GMR with this update for a long time and
seem to not be hitting the issues. The reported issue that the update
hook fixes isn't even a problem any more as other changes to
GetUnderlyingObject worked around it, and that issue stemmed from *many*
years ago. As a consequence, a prior patch provided a flag to control
the unsafe behavior of GMR, and this patch removes the update mechanism
that has questionable compile-time tradeoffs and is causing problems
with moving to the new pass manager. Note the lack of test updates --
not one test in tree actually requires this update, even for a contrived
case.
All of this was extensively discussed on the dev list, this patch will
just enact what that discussion decides on. I'm sending it for review in
part to show what I'm planning, and in part to show the *amazing* amount
of work this avoids. Every call to the AA here is something like three
to six indirect function calls, which in the non-LTO pipeline never do
any work! =[
Differential Revision: http://reviews.llvm.org/D11214
llvm-svn: 242605
This new wrapper pass is useful when we want to do branch probability analysis conditionally (e.g. only in PGO mode) but don't want to add one more pass dependence.
http://reviews.llvm.org/D11241
llvm-svn: 242349
checking HeaderTerm instruction type
Summary:
This is a trivial code change with no functional effect.
When LoopUnswitch determines a trivial unswitch condition, it checks whether the loop header's terminator instruction is a branch or a switch, since a trivial unswitch condition can only apply to these two instruction types. The current code does not fail the check directly on other instruction types, but checks the nullness of the LoopExitBB variable instead. The added else clause makes the check fail immediately on other instruction types and makes the code more obvious.
Reviewers: reames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11239
llvm-svn: 242345
During estimation of the unrolling effect, we should be able to propagate
constants through casts.
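A hypothetical example (not from the patch) of the kind of folding this enables during the unrolled-cost estimate: the cast of the constant table element is propagated, so the dependent branch simplifies in every simulated iteration.

static const short Tbl[4] = {1, -2, 3, -4};

int cast_fold_demo() {
  int s = 0;
  for (int i = 0; i < 4; ++i) {
    int v = static_cast<int>(Tbl[i]); // constant propagated through the cast
    if (v > 0)                        // folds while estimating each iteration
      s += v;
  }
  return s;
}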
Differential Revision: http://reviews.llvm.org/D10207
llvm-svn: 242257
Turn this structure-of-arrays (i.e. the various pointer attributes) into
array-of-structures.
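A generic sketch of that kind of refactoring (the field names here are illustrative, not the actual LAA members):

#include <vector>

struct PointerAttrsSoA {             // before: parallel, index-coupled arrays
  std::vector<const void *> Pointers;
  std::vector<bool> IsWritePtr;
  std::vector<unsigned> AliasSetId;
};

struct PointerInfo {                 // after: one record per pointer
  const void *Ptr;
  bool IsWrite;
  unsigned AliasSetId;
};
using PointerAttrsAoS = std::vector<PointerInfo>;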
llvm-svn: 242219
I am planning to add more nested classes inside RuntimePointerCheck so
all this triple-nesting would be hard to follow.
Also rename it to RuntimePointerChecking (i.e. append 'ing').
llvm-svn: 242218
llvm-svn: 242202
Sometimes an incidentally created instruction can duplicate a Value used
elsewhere. It then often doesn't end up in the leader table. If it's later
removed, we attempt to remove it from the leader table and segfault.
Instead we should just ignore the removal request, which won't cause any
problems. The reverse situation, where the original instruction is replaced by
the new one (which you might think could leave the leader table empty) cannot
occur, because the incidental instruction will never be found in the first
place.
llvm-svn: 242199
Volatile loads and stores are made visible in global state regardless of
what memory is involved. It is not correct to disregard the ordering
and synchronization scope because it is possible to synchronize with
memory operations performed by hardware.
This partially addresses PR23737.
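As a simple illustration of why the ordering matters (not taken from the change itself): a volatile access may target a device register, so it can synchronize with the hardware even though no other instruction in the IR touches that memory.

void ack_device(volatile unsigned *status_reg) {
  *status_reg = 1u;   // visible to the device; cannot be reasoned away
}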
llvm-svn: 242126
When spotting that a loop can use ctpop, we were incorrectly replacing all uses of a value with a value derived from ctpop.
The bug here was exposed because we were replacing a use prior to the ctpop with the ctpop value and so we have a use before def, i.e., we changed
%tobool.5 = icmp ne i32 %num, 0
store i1 %tobool.5, i1* %ptr
br i1 %tobool.5, label %for.body.lr.ph, label %for.end
to
store i1 %1, i1* %ptr
%0 = call i32 @llvm.ctpop.i32(i32 %num)
%1 = icmp ne i32 %0, 0
br i1 %1, label %for.body.lr.ph, label %for.end
Even if we inserted the ctpop so that it dominates the store here, that would still be incorrect. The store doesn’t want the result of ctpop.
The fix is very simple, and involves replacing only the branch condition with the ctpop instead of all uses.
Reviewed by Hal Finkel.
llvm-svn: 242068
Enable runtime unrolling for loops with unroll count metadata ("#pragma unroll N")
and a runtime trip count. Also, do not unroll loops with unroll full metadata if the
loop has a runtime loop count. Previously, such loops would be unrolled with a
very large threshold (pragma-unroll-threshold) if runtime unrolling happened to be
enabled, resulting in a very large (and likely unwise) unroll factor.
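A hypothetical source-level example of the first case: the trip count is only known at runtime, so the pragma now results in runtime unrolling by the requested factor.

void scale(float *a, int n) {
#pragma unroll 4
  for (int i = 0; i < n; ++i)
    a[i] *= 2.0f;
}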
llvm-svn: 242047
Passes should never modify it; just use the const version. While there,
reduce copying in LoopInterchange. No functional change intended.
llvm-svn: 242041
There is no suitable basic block to sink instructions in loops without
exits. The only way an instruction in a loop without exits can be used
is as an incoming value to a PHI. In such cases, the incoming block for
the corresponding value is unreachable.
This fixes PR24013.
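A hypothetical shape of the problematic input (in the spirit of PR24013): the loop never exits, so there is no exit block for the sinking transform to target, and anything computed inside can only feed a PHI whose incoming block is unreachable.

int spin_forever(volatile int *flag) {
  int last = 0;
  for (;;)                 // loop with no exits
    last = *flag + 1;      // no block after the loop can ever use this
  return last;             // unreachable
}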
Differential Revision: http://reviews.llvm.org/D10903
llvm-svn: 241987
No in-tree alias analysis used this facility, and it was not called in
any particularly rigorous way, so it seems unlikely to be correct.
Note that one of the only stateful AA implementations in-tree,
GlobalsModRef is completely broken currently (and any AA passes like it
are equally broken) because Module AA passes are not effectively
invalidated when a function pass that fails to update the AA stack runs.
Ultimately, it doesn't seem like we know how we want to build stateful
AA, and until then trying to support and maintain correctness for an
untested API is essentially impossible. To that end, I'm planning to rip
out all of the update API. It can return if and when we need it and know
how to build it on top of the new pass manager and as part of *tested*
stateful AA implementations in the tree.
Differential Revision: http://reviews.llvm.org/D10889
llvm-svn: 241975
Summary:
The class will obviously need improvement down the road. For one, there
is no reason that addPHINodes would have to be exposed like that. I
will make this and other improvements in follow-up patches.
The main goal is to be able to share this functionality. The
LoopLoadElimination pass I am working on needs it too. Later we can
move other clients as well (LV and Ashutosh's LICMVer).
Reviewers: hfinkel, ashutosh.nema
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10577
llvm-svn: 241932