author    Chandler Carruth <chandlerc@gmail.com>  2017-08-14 07:03:24 +0000
committer Chandler Carruth <chandlerc@gmail.com>  2017-08-14 07:03:24 +0000
commit    37c7b08710be264a216bf474dc5118df5e86ea40 (patch)
tree      f1ea07f2a3747d8bd94f00ef6aabf068c75f0e8d /llvm/lib/Analysis/ValueTracking.cpp
parent    1e09c1363cb7818ffd946354b1e2fe068f85ea5d (diff)
download  bcm5719-llvm-37c7b08710be264a216bf474dc5118df5e86ea40.tar.gz
          bcm5719-llvm-37c7b08710be264a216bf474dc5118df5e86ea40.zip
[ValueTracking] Revert r310583 which enabled functionality that still is
causing compile time issues.

Moreover, the patch *deleted* the flag in addition to changing the default,
and links to a code review that doesn't even discuss the flag and just has
an update to a Clang test case.

I've followed up on the commit thread to ask for numbers on compile time at
this point, leaving the flag in place until things stabilize, and pointing
at specific code that seems to exhibit excessive compile time with this
patch.

Original commit message for r310583:
"""
[ValueTracking] Enabling ValueTracking patch by default (recommit). Part 2.

The original patch was an improvement to IR ValueTracking on non-negative
integers. It has been checked in to trunk (D18777, r284022). But was
disabled by default due to performance regressions. Perf impact has
improved. The patch would be enabled by default.
"""

llvm-svn: 310816
Diffstat (limited to 'llvm/lib/Analysis/ValueTracking.cpp')
-rw-r--r--  llvm/lib/Analysis/ValueTracking.cpp | 9
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/llvm/lib/Analysis/ValueTracking.cpp b/llvm/lib/Analysis/ValueTracking.cpp
index 279d880904e..b81bf9ec9d1 100644
--- a/llvm/lib/Analysis/ValueTracking.cpp
+++ b/llvm/lib/Analysis/ValueTracking.cpp
@@ -54,6 +54,12 @@ const unsigned MaxDepth = 6;
static cl::opt<unsigned> DomConditionsMaxUses("dom-conditions-max-uses",
                                              cl::Hidden, cl::init(20));
+// This optimization is known to cause performance regressions in some cases,
+// keep it under a temporary flag for now.
+static cl::opt<bool>
+DontImproveNonNegativePhiBits("dont-improve-non-negative-phi-bits",
+                              cl::Hidden, cl::init(true));
+
/// Returns the bitwidth of the given scalar or pointer type. For vector types,
/// returns the element type's bitwidth.
static unsigned getBitWidth(Type *Ty, const DataLayout &DL) {
@@ -1252,6 +1258,9 @@ static void computeKnownBitsFromOperator(const Operator *I, KnownBits &Known,
          Known.Zero.setLowBits(std::min(Known2.countMinTrailingZeros(),
                                         Known3.countMinTrailingZeros()));

+          if (DontImproveNonNegativePhiBits)
+            break;
+
          auto *OverflowOp = dyn_cast<OverflowingBinaryOperator>(LU);
          if (OverflowOp && OverflowOp->hasNoSignedWrap()) {
            // If initial value of recurrence is nonnegative, and we are adding
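For context, the code guarded by the new early break reasons about two-incoming-value
recurrence PHIs whose update is an add carrying the no-signed-wrap flag: when the
recurrence's initial value and step are both known non-negative, the PHI's sign bit can
be recorded as a known zero. Below is a minimal, self-contained sketch of that inference
using LLVM's KnownBits helper; the 8-bit width, the concrete values, and the names Init,
Step, and Phi are illustrative assumptions, not code from this commit.

// Illustrative sketch only; not part of the patch above. Models the
// recurrence %i = phi [0, entry], [%i.next, loop] with %i.next = add nsw %i, 1.
#include "llvm/ADT/APInt.h"
#include "llvm/Support/KnownBits.h"
#include <cstdio>

int main() {
  // Known bits of the (assumed) initial value 0 and step 1.
  llvm::KnownBits Init = llvm::KnownBits::makeConstant(llvm::APInt(8, 0));
  llvm::KnownBits Step = llvm::KnownBits::makeConstant(llvm::APInt(8, 1));
  llvm::KnownBits Phi(8); // nothing known about the PHI yet

  // Under no-signed-wrap addition, a non-negative start plus a non-negative
  // step can never produce a negative value, so the PHI's sign bit is a known
  // zero -- the kind of fact the guarded ValueTracking code records.
  if (Init.isNonNegative() && Step.isNonNegative())
    Phi.makeNonNegative();

  std::printf("phi known non-negative: %d\n", int(Phi.isNonNegative()));
  return 0;
}

Since the cl::opt added above defaults to true, the improvement stays disabled after this
revert; it can still be exercised by passing -dont-improve-non-negative-phi-bits=false to
opt (or through clang's -mllvm), since cl::Hidden only hides the flag from -help, which
allows the compile-time behavior mentioned in the commit message to be compared with and
without the optimization.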