author    Chandler Carruth <chandlerc@gmail.com>  2016-03-02 15:56:53 +0000
committer Chandler Carruth <chandlerc@gmail.com>  2016-03-02 15:56:53 +0000
commit    12884f7f801eccd8322bd496da0a85bd16ac0c82 (patch)
tree      f290b649fd3852c75adbc63fc05e39b950e4f1df /llvm/lib/Analysis/AliasAnalysis.cpp
parent    4de44b7ef87bcef83798eee69fdcbfab9866d52e (diff)
download  bcm5719-llvm-12884f7f801eccd8322bd496da0a85bd16ac0c82.tar.gz
          bcm5719-llvm-12884f7f801eccd8322bd496da0a85bd16ac0c82.zip
[AA] Hoist the logic to reformulate various AA queries in terms of other
parts of the AA interface out of the base class of every single AA
result object.

Because this logic reformulates the query in terms of some other aspect
of the API, it would easily cause O(n^2) query patterns in alias
analysis. These could in turn be magnified further based on the number
of call arguments, and then further based on the number of AA queries
made for a particular call. This ended up causing problems for Rust
that were actually noticeable enough to get a bug (PR26564) and
probably other places as well.

When originally re-working the AA infrastructure, the desire was to
regularize the pattern of refinement without losing any generality.
While I think it was successful, that is clearly proving to be too
costly. And the cost is needless: we gain no actual improvement for
this generality of making a direct query to TBAA actually be able to
re-use some other alias analysis's refinement logic for one of the
other APIs, or some such. In short, this is entirely wasted work.

To the extent possible, delegation to other API surfaces should be done
at the aggregation layer so that we can avoid re-walking the
aggregation. In fact, this significantly simplifies the logic as we no
longer need to smuggle the aggregation layer into each alias analysis
(or the TargetLibraryInfo into each alias analysis just so we can form
argument memory locations!).

However, we also have some delegation logic inside of BasicAA and some
of it even makes sense. When the delegation logic is baking in specific
knowledge of aliasing properties of the LLVM IR, as opposed to simply
reformulating the query to utilize a different alias analysis interface
entry point, it makes a lot of sense to restrict that logic to a
different layer such as BasicAA. So one aspect of the delegation that
was in every AA base class is that when we don't have operand bundles,
we re-use function AA results as a fallback for callsite alias results.
This relies on the IR properties of calls and functions w.r.t.
aliasing, and so seems a better fit to BasicAA. I've lifted the logic
up to that point where it seems to be a natural fit. This still does a
bit of redundant work (we query function attributes twice, once via the
callsite and once via the function AA query) but it is *exactly* twice
here, no more.

The end result is that all of the delegation logic is hoisted out of
the base class and into either the aggregation layer when it is a pure
retargeting to a different API surface, or into BasicAA when it relies
on the IR's aliasing properties. This should fix the quadratic query
pattern reported in PR26564, although I don't have a stand-alone test
case to reproduce it. It also seems like general goodness. Now the
numerous AAs that don't need target library info don't carry it around
and depend on it. I think I can even rip out the general access to the
aggregation layer and only expose that in BasicAA as it is the only
place where we re-query in that manner.

However, this is a non-trivial change to the AA infrastructure so I
want to get some additional eyes on this before it lands. Sadly, it
can't wait long because we should really cherry-pick this into 3.8 if
we're going to go this route.

Differential Revision: http://reviews.llvm.org/D17329

llvm-svn: 262490
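To make the shape of the change concrete, here is a minimal C++ sketch of the pattern the message describes. It is illustrative only: InnerAA, AggregateAA, and the stripped-down query signatures are invented names for this sketch, not the classes touched by this patch.

#include <vector>

// Simplified stand-ins; the MRI_* encoding mirrors the one used in the
// diff below, but InnerAA and AggregateAA are hypothetical.
enum ModRefInfo {
  MRI_NoModRef = 0,
  MRI_Ref = 1,
  MRI_Mod = 2,
  MRI_ModRef = MRI_Ref | MRI_Mod
};

struct InnerAA {
  virtual ~InnerAA() = default;
  // Each individual analysis answers only what it knows directly.
  // Before this commit, a shared base class re-issued refined queries
  // (behavior, per-argument locations, ...) back through the
  // aggregation from inside each of the N analyses, so one top-level
  // query could walk all N analyses N times: O(N^2).
  virtual ModRefInfo getModRefInfo() { return MRI_ModRef; }
};

struct AggregateAA {
  std::vector<InnerAA *> AAs;

  ModRefInfo getModRefInfo() {
    ModRefInfo Result = MRI_ModRef;
    // One walk over the registered analyses, intersecting results...
    for (InnerAA *AA : AAs) {
      Result = ModRefInfo(Result & AA->getModRefInfo());
      if (Result == MRI_NoModRef)
        return Result; // Nothing left to refine; stop early.
    }
    // ...and any reformulation in terms of other entry points
    // (memory behavior, per-argument locations, constant memory) now
    // happens exactly once, here at the aggregation layer.
    return Result;
  }
};

The point of the sketch is the placement of the refinement: it runs once per top-level query, after the single walk over the analyses, instead of once inside every analysis.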
Diffstat (limited to 'llvm/lib/Analysis/AliasAnalysis.cpp')
-rw-r--r--  llvm/lib/Analysis/AliasAnalysis.cpp | 138
1 file changed, 127 insertions(+), 11 deletions(-)
diff --git a/llvm/lib/Analysis/AliasAnalysis.cpp b/llvm/lib/Analysis/AliasAnalysis.cpp
index 81d8d702885..29669899147 100644
--- a/llvm/lib/Analysis/AliasAnalysis.cpp
+++ b/llvm/lib/Analysis/AliasAnalysis.cpp
@@ -52,18 +52,11 @@ using namespace llvm;
static cl::opt<bool> DisableBasicAA("disable-basicaa", cl::Hidden,
cl::init(false));
-AAResults::AAResults(AAResults &&Arg) : AAs(std::move(Arg.AAs)) {
+AAResults::AAResults(AAResults &&Arg) : TLI(Arg.TLI), AAs(std::move(Arg.AAs)) {
for (auto &AA : AAs)
AA->setAAResults(this);
}
-AAResults &AAResults::operator=(AAResults &&Arg) {
- AAs = std::move(Arg.AAs);
- for (auto &AA : AAs)
- AA->setAAResults(this);
- return *this;
-}
-
AAResults::~AAResults() {
// FIXME; It would be nice to at least clear out the pointers back to this
// aggregation here, but we end up with non-nesting lifetimes in the legacy
@@ -141,6 +134,44 @@ ModRefInfo AAResults::getModRefInfo(ImmutableCallSite CS,
return Result;
}
+ // Try to refine the mod-ref info further using other API entry points to the
+ // aggregate set of AA results.
+ auto MRB = getModRefBehavior(CS);
+ if (MRB == FMRB_DoesNotAccessMemory)
+ return MRI_NoModRef;
+
+ if (onlyReadsMemory(MRB))
+ Result = ModRefInfo(Result & MRI_Ref);
+
+ if (onlyAccessesArgPointees(MRB)) {
+ bool DoesAlias = false;
+ ModRefInfo AllArgsMask = MRI_NoModRef;
+ if (doesAccessArgPointees(MRB)) {
+ for (auto AI = CS.arg_begin(), AE = CS.arg_end(); AI != AE; ++AI) {
+ const Value *Arg = *AI;
+ if (!Arg->getType()->isPointerTy())
+ continue;
+ unsigned ArgIdx = std::distance(CS.arg_begin(), AI);
+ MemoryLocation ArgLoc = MemoryLocation::getForArgument(CS, ArgIdx, TLI);
+ AliasResult ArgAlias = alias(ArgLoc, Loc);
+ if (ArgAlias != NoAlias) {
+ ModRefInfo ArgMask = getArgModRefInfo(CS, ArgIdx);
+ DoesAlias = true;
+ AllArgsMask = ModRefInfo(AllArgsMask | ArgMask);
+ }
+ }
+ }
+ if (!DoesAlias)
+ return MRI_NoModRef;
+ Result = ModRefInfo(Result & AllArgsMask);
+ }
+
+ // If Loc is a constant memory location, the call definitely could not
+ // modify the memory location.
+ if ((Result & MRI_Mod) &&
+ pointsToConstantMemory(Loc, /*OrLocal*/ false))
+ Result = ModRefInfo(Result & ~MRI_Mod);
+
return Result;
}
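For orientation, a hedged client-side sketch of the entry point refined in the hunk above; the helper function is hypothetical and not part of the patch, though the types and the query are the ones from this file's interface.

#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/IR/CallSite.h"

using namespace llvm;

// Hypothetical helper: may the call write Loc? With the hunk above,
// this single aggregate-level query already folds in the callee's
// memory behavior, its pointer arguments, and constant-memory facts,
// so no individual analysis has to re-dispatch to get that narrowing.
static bool callMayWriteLoc(AAResults &AA, ImmutableCallSite CS,
                            const MemoryLocation &Loc) {
  return AA.getModRefInfo(CS, Loc) & MRI_Mod;
}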
@@ -156,6 +187,88 @@ ModRefInfo AAResults::getModRefInfo(ImmutableCallSite CS1,
return Result;
}
+ // Try to refine the mod-ref info further using other API entry points to the
+ // aggregate set of AA results.
+
+ // If CS1 or CS2 are readnone, they don't interact.
+ auto CS1B = getModRefBehavior(CS1);
+ if (CS1B == FMRB_DoesNotAccessMemory)
+ return MRI_NoModRef;
+
+ auto CS2B = getModRefBehavior(CS2);
+ if (CS2B == FMRB_DoesNotAccessMemory)
+ return MRI_NoModRef;
+
+ // If they both only read from memory, there is no dependence.
+ if (onlyReadsMemory(CS1B) && onlyReadsMemory(CS2B))
+ return MRI_NoModRef;
+
+ // If CS1 only reads memory, the only dependence on CS2 can be
+ // from CS1 reading memory written by CS2.
+ if (onlyReadsMemory(CS1B))
+ Result = ModRefInfo(Result & MRI_Ref);
+
+ // If CS2 only access memory through arguments, accumulate the mod/ref
+ // information from CS1's references to the memory referenced by
+ // CS2's arguments.
+ if (onlyAccessesArgPointees(CS2B)) {
+ ModRefInfo R = MRI_NoModRef;
+ if (doesAccessArgPointees(CS2B)) {
+ for (auto I = CS2.arg_begin(), E = CS2.arg_end(); I != E; ++I) {
+ const Value *Arg = *I;
+ if (!Arg->getType()->isPointerTy())
+ continue;
+ unsigned CS2ArgIdx = std::distance(CS2.arg_begin(), I);
+ auto CS2ArgLoc = MemoryLocation::getForArgument(CS2, CS2ArgIdx, TLI);
+
+ // ArgMask indicates what CS2 might do to CS2ArgLoc, and the dependence
+ // of CS1 on that location is the inverse.
+ ModRefInfo ArgMask = getArgModRefInfo(CS2, CS2ArgIdx);
+ if (ArgMask == MRI_Mod)
+ ArgMask = MRI_ModRef;
+ else if (ArgMask == MRI_Ref)
+ ArgMask = MRI_Mod;
+
+ ArgMask = ModRefInfo(ArgMask & getModRefInfo(CS1, CS2ArgLoc));
+
+ R = ModRefInfo((R | ArgMask) & Result);
+ if (R == Result)
+ break;
+ }
+ }
+ return R;
+ }
+
+ // If CS1 only accesses memory through arguments, check if CS2 references
+ // any of the memory referenced by CS1's arguments. If not, return NoModRef.
+ if (onlyAccessesArgPointees(CS1B)) {
+ ModRefInfo R = MRI_NoModRef;
+ if (doesAccessArgPointees(CS1B)) {
+ for (auto I = CS1.arg_begin(), E = CS1.arg_end(); I != E; ++I) {
+ const Value *Arg = *I;
+ if (!Arg->getType()->isPointerTy())
+ continue;
+ unsigned CS1ArgIdx = std::distance(CS1.arg_begin(), I);
+ auto CS1ArgLoc = MemoryLocation::getForArgument(CS1, CS1ArgIdx, TLI);
+
+ // ArgMask indicates what CS1 might do to CS1ArgLoc; if CS1 might Mod
+ // CS1ArgLoc, then we care about either a Mod or a Ref by CS2. If CS1
+ // might Ref, then we care only about a Mod by CS2.
+ ModRefInfo ArgMask = getArgModRefInfo(CS1, CS1ArgIdx);
+ ModRefInfo ArgR = getModRefInfo(CS2, CS1ArgLoc);
+ if (((ArgMask & MRI_Mod) != MRI_NoModRef &&
+ (ArgR & MRI_ModRef) != MRI_NoModRef) ||
+ ((ArgMask & MRI_Ref) != MRI_NoModRef &&
+ (ArgR & MRI_Mod) != MRI_NoModRef))
+ R = ModRefInfo((R | ArgMask) & Result);
+
+ if (R == Result)
+ break;
+ }
+ }
+ return R;
+ }
+
return Result;
}
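The Mod/Ref inversion at the top of the CS2 loop above is easy to misread, so here is a self-contained restatement. The MRI_* encoding matches the tree; invertForCaller itself is an invented name, not a helper in the patch.

enum ModRefInfo { MRI_NoModRef = 0, MRI_Ref = 1, MRI_Mod = 2, MRI_ModRef = 3 };

// What CS2 may do to an argument location, flipped into the mask of
// CS1 accesses that could make CS1 depend on CS2 through that location.
static ModRefInfo invertForCaller(ModRefInfo WhatCS2Does) {
  if (WhatCS2Does == MRI_Mod)
    return MRI_ModRef; // CS2 may write the loc: a CS1 read or write matters.
  if (WhatCS2Does == MRI_Ref)
    return MRI_Mod;    // CS2 only reads the loc: only a CS1 write matters.
  return WhatCS2Does;  // MRI_NoModRef and MRI_ModRef map to themselves.
}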
@@ -464,7 +577,8 @@ bool AAResultsWrapperPass::runOnFunction(Function &F) {
// unregistering themselves with them. We need to carefully tear down the
// previous object first, in this case replacing it with an empty one, before
// registering new results.
- AAR.reset(new AAResults());
+ AAR.reset(
+ new AAResults(getAnalysis<TargetLibraryInfoWrapperPass>().getTLI()));
// BasicAA is always available for function analyses. Also, we add it first
// so that it can trump TBAA results when it proves MustAlias.
@@ -501,6 +615,7 @@ bool AAResultsWrapperPass::runOnFunction(Function &F) {
void AAResultsWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
AU.setPreservesAll();
AU.addRequired<BasicAAWrapperPass>();
+ AU.addRequired<TargetLibraryInfoWrapperPass>();
// We also need to mark all the alias analysis passes we will potentially
// probe in runOnFunction as used here to ensure the legacy pass manager
@@ -516,7 +631,7 @@ void AAResultsWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
AAResults llvm::createLegacyPMAAResults(Pass &P, Function &F,
BasicAAResult &BAR) {
- AAResults AAR;
+ AAResults AAR(P.getAnalysis<TargetLibraryInfoWrapperPass>().getTLI());
// Add in our explicitly constructed BasicAA results.
if (!DisableBasicAA)
@@ -567,10 +682,11 @@ bool llvm::isIdentifiedFunctionLocal(const Value *V) {
return isa<AllocaInst>(V) || isNoAliasCall(V) || isNoAliasArgument(V);
}
-void llvm::addUsedAAAnalyses(AnalysisUsage &AU) {
+void llvm::getAAResultsAnalysisUsage(AnalysisUsage &AU) {
// This function needs to be in sync with llvm::createLegacyPMAAResults -- if
// more alias analyses are added to llvm::createLegacyPMAAResults, they need
// to be added here also.
+ AU.addRequired<TargetLibraryInfoWrapperPass>();
AU.addUsedIfAvailable<ScopedNoAliasAAWrapperPass>();
AU.addUsedIfAvailable<TypeBasedAAWrapperPass>();
AU.addUsedIfAvailable<objcarc::ObjCARCAAWrapperPass>();
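Taken together, the legacy-pass-manager hunks above establish the new construction contract: an AAResults aggregation carries TargetLibraryInfo from birth, and the no-argument construction seen on the removed lines is gone. A minimal sketch of driver code under that contract, inside a hypothetical legacy pass (both calls appear verbatim in the hunks above):

// TLI must be requested via addRequired in getAnalysisUsage and then
// handed to the aggregation at construction. The aggregation itself
// forms argument locations via MemoryLocation::getForArgument, so
// individual analyses no longer need to carry their own TLI.
TargetLibraryInfo &TLI = getAnalysis<TargetLibraryInfoWrapperPass>().getTLI();
AAResults AAR(TLI);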