Commit messages
|
function calls. NFCI."
This broke two tests. Presumably the non-short-circuiting '|' was
intentional here.
This reverts commit f7efea0ded8e16c7751b378523407a491016edd6.
|
calls. NFCI.
Fixes a warning from PVS-Studio.
|
This file lists every pass in LLVM, and is included by Pass.h, which is
very popular. Every time we add, remove, or rename a pass in LLVM, it
causes a lot of recompilation.
I found this fact by looking at this table, which is sorted by the
number of times a file was changed over the last 100,000 git commits
multiplied by the number of object files that depend on it in the
current checkout:
recompiles touches affected_files header
342380 95 3604 llvm/include/llvm/ADT/STLExtras.h
314730 234 1345 llvm/include/llvm/InitializePasses.h
307036 118 2602 llvm/include/llvm/ADT/APInt.h
213049 59 3611 llvm/include/llvm/Support/MathExtras.h
170422 47 3626 llvm/include/llvm/Support/Compiler.h
162225 45 3605 llvm/include/llvm/ADT/Optional.h
158319 63 2513 llvm/include/llvm/ADT/Triple.h
140322 39 3598 llvm/include/llvm/ADT/StringRef.h
137647 59 2333 llvm/include/llvm/Support/Error.h
131619 73 1803 llvm/include/llvm/Support/FileSystem.h
Before this change, touching InitializePasses.h would cause 1345 files
to recompile. After this change, touching it only causes 550 compiles in
an incremental rebuild.
Reviewers: bkramer, asbirlea, bollu, jdoerfert
Differential Revision: https://reviews.llvm.org/D70211
|
the load chain. NFCI.
Silence a static analyzer uninitialized-variable warning by setting LoadTy to null and then asserting that we find a real value.
llvm-svn: 371936
|
llvm-svn: 370948
|
Added code to truncate or shrink offsets so that we can continue the
base pointer search if the size has changed along the way.
Differential Revision: https://reviews.llvm.org/D65612
llvm-svn: 367646
|
The previous change to fix a crash in the vectorizer introduced
performance regressions. The condition to preserve the pointer
address space during the search is too tight; we only need to
match the size.
Differential Revision: https://reviews.llvm.org/D65600
llvm-svn: 367624
|
When the vectorizer strips pointers, it can eventually end up with
pointers of two different sizes, and then SCEV will crash.
Differential Revision: https://reviews.llvm.org/D65480
llvm-svn: 367443
|
llvm-svn: 365903
|
Properly initialize the store type to null, then ensure we find a real store type in the chain.
Fixes a scan-build null-dereference warning and makes the code clearer.
llvm-svn: 360031
|
This cleans up all LoadInst creation in LLVM to explicitly pass the
value type rather than deriving it from the pointer's element-type.
Differential Revision: https://reviews.llvm.org/D57172
llvm-svn: 352911
|
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
|
Differential Revision: https://reviews.llvm.org/D54848
llvm-svn: 348570
|
This was checking the hardcoded address space 0 for the stack.
Additionally, this should be checking for legality with
the adjusted alignment, so defer the alignment check.
Also try to split if the unaligned access isn't allowed.
llvm-svn: 342442
|
llvm-svn: 337992
|
In some cases LSV sees (load/store _ (select _ <pointer expression>
<pointer expression>)) patterns in input IR, often due to sinking and
other forms of CFG simplification, sometimes interspersed with
bitcasts and all-constant-indices GEPs. With this patch the
areConsecutivePointers method attempts to handle select
instructions. This leads to an increased number of successful
vectorizations.
Technically, select instructions could appear in index arithmetic as
well, however, we don't see those in our test suites / benchmarks.
Also, there is a lot more freedom in IR shapes computing integral
indices in general than in what's common in pointer computations, and
it appears that it's quite unreliable to do anything short of making
select instructions first-class citizens of Scalar Evolution, which
for the purposes of this patch is most definitely overkill.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D49428
llvm-svn: 337965
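For illustration, a minimal hypothetical IR shape of the pattern described above, with a select feeding the load's pointer operand (all names and types are made up):
  %p = select i1 %cond, float addrspace(1)* %a, float addrspace(1)* %b
  %v = load float, float addrspace(1)* %p, align 4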
|
This reapplies commit r337489, which was reverted by r337541.
Additionally, this commit contains a speculative fix to the issue reported in r337541
(the report does not contain an actionable reproducer, just a stack trace)
llvm-svn: 337606
|
This reverts commit r337489.
It causes asserts to fire in some TensorFlow tests, e.g.
tensorflow/compiler/tests/gather_test.py on GPU.
Example stack trace:
Start test case: GatherTest.testHigherRank
assertion failed at third_party/llvm/llvm/lib/Support/APInt.cpp:819 in llvm::APInt llvm::APInt::trunc(unsigned int) const: width && "Can't truncate to 0 bits"
@ 0x5559446ebe10 __assert_fail
@ 0x55593ef32f5e llvm::APInt::trunc()
@ 0x55593d78f86e (anonymous namespace)::Vectorizer::lookThroughComplexAddresses()
@ 0x55593d78f2bc (anonymous namespace)::Vectorizer::areConsecutivePointers()
@ 0x55593d78d128 (anonymous namespace)::Vectorizer::isConsecutiveAccess()
@ 0x55593d78c926 (anonymous namespace)::Vectorizer::vectorizeInstructions()
@ 0x55593d78c221 (anonymous namespace)::Vectorizer::vectorizeChains()
@ 0x55593d78b948 (anonymous namespace)::Vectorizer::run()
@ 0x55593d78b725 (anonymous namespace)::LoadStoreVectorizer::runOnFunction()
@ 0x55593edf4b17 llvm::FPPassManager::runOnFunction()
@ 0x55593edf4e55 llvm::FPPassManager::runOnModule()
@ 0x55593edf563c (anonymous namespace)::MPPassManager::runOnModule()
@ 0x55593edf5137 llvm::legacy::PassManagerImpl::run()
@ 0x55593edf5b71 llvm::legacy::PassManager::run()
@ 0x55593ced250d xla::gpu::IrDumpingPassManager::run()
@ 0x55593ced5033 xla::gpu::(anonymous namespace)::EmitModuleToPTX()
@ 0x55593ced40ba xla::gpu::(anonymous namespace)::CompileModuleToPtx()
@ 0x55593ced33d0 xla::gpu::CompileToPtx()
@ 0x55593b26b2a2 xla::gpu::NVPTXCompiler::RunBackend()
@ 0x55593b21f973 xla::Service::BuildExecutable()
@ 0x555938f44e64 xla::LocalService::CompileExecutable()
@ 0x555938f30a85 xla::LocalClient::Compile()
@ 0x555938de3c29 tensorflow::XlaCompilationCache::BuildExecutable()
@ 0x555938de4e9e tensorflow::XlaCompilationCache::CompileImpl()
@ 0x555938de3da5 tensorflow::XlaCompilationCache::Compile()
@ 0x555938c5d962 tensorflow::XlaLocalLaunchBase::Compute()
@ 0x555938c68151 tensorflow::XlaDevice::Compute()
@ 0x55593f389e1f tensorflow::(anonymous namespace)::ExecutorState::Process()
@ 0x55593f38a625 tensorflow::(anonymous namespace)::ExecutorState::ScheduleReady()::$_1::operator()()
*** SIGABRT received by PID 7798 (TID 7837) from PID 7798; ***
llvm-svn: 337541
|
This is mostly preparation work for adding limited support for
select instructions. It proved to be difficult to do due to the size and
irregularity of Vectorizer::isConsecutiveAccess; this is fixed here, I
believe.
It also turned out that these changes make it simpler to finish one of
the TODOs and fix a number of other small issues, namely:
1. Looking through bitcasts to a type of a different size (requires
careful tracking of the original load/store size and some math
converting sizes in bytes to expected differences in indices of GEPs).
2. Reusing the partial analysis of pointers done by the first attempt at proving
them consecutive instead of starting from scratch. This added limited
support for nested GEPs co-existing with difficult sext/zext
instructions. This also required a careful handling of negative
differences between constant parts of offsets.
3. Handling a case where the first pointer index is not an add, but
something else (a function parameter for instance).
I observe an increased number of successful vectorizations on a large
set of shader programs. Only a few shaders are affected, but those that
are sport >5% fewer loads and stores than before the patch.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D49342
llvm-svn: 337489
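As a sketch (with made-up names) of the kind of consecutive pointers the analysis has to reason about, where the GEP indices go through sext:
  %i.ext  = sext i32 %i to i64
  %p0     = getelementptr inbounds float, float* %base, i64 %i.ext
  %i.next = add nsw i32 %i, 1
  %n.ext  = sext i32 %i.next to i64
  %p1     = getelementptr inbounds float, float* %base, i64 %n.ext
  %v0     = load float, float* %p0, align 4
  %v1     = load float, float* %p1, align 4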
|
pointers.
Summary: Currently, isConsecutiveAccess() detects two pointers (PtrA and PtrB) as consecutive by
comparing PtrB with BaseDelta + PtrA. This works when both pointers are factorized or
both of them are not factorized. But isConsecutiveAccess() fails if one of the
pointers is factorized but the other one is not.
Here is an example:
PtrA = 4 * (A + B)
PtrB = 4 + 4A + 4B
This patch uses getMinusSCEV() to compute the distance between two pointers.
getMinusSCEV() allows combining the expressions and computing the simplified distance.
Author: FarhanaAleen
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D49516
llvm-svn: 337471
|
Review feedback from r328165. Split out just the one function from the
file that's used by Analysis. (As chandlerc pointed out, the original
change only moved the header and not the implementation anyway - which
was fine for the one function that was used (since it's a
template/inlined in the header) but not in general)
llvm-svn: 333954
|
The DEBUG() macro is very generic so it might clash with other projects.
The renaming was done as follows:
- git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
- git diff -U0 master | ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
- Manual change to APInt
- Manually change DOCS as the regex doesn't match it.
In the transition period the DEBUG() macro is still present and aliased
to the LLVM_DEBUG() one.
Differential Revision: https://reviews.llvm.org/D43624
llvm-svn: 332240
|
The memory location an invariant load is using can never be clobbered by
any store, so it's safe to move the load ahead of the store.
Differential Revision: https://reviews.llvm.org/D46011
llvm-svn: 330725
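A minimal sketch of the pattern this enables, assuming a load marked with !invariant.load metadata (names are illustrative):
  store i32 %x, i32* %p, align 4
  %v = load i32, i32* %q, align 4, !invariant.load !0  ; safe to move above the store

  !0 = !{}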
|
When we skip bitcasts while looking for a GEP in the LoadStoreVectorizer,
we should also verify that the type is sized; otherwise we assert.
Differential Revision: https://reviews.llvm.org/D45709
llvm-svn: 330221
|
Remove #include of Transforms/Scalar.h from Transforms/Utils to fix layering.
Transforms depends on Transforms/Utils, not the other way around. So
remove the header and the "createStripGCRelocatesPass" function
declaration (& definition) that is unused and motivated this dependency.
Move Transforms/Utils/Local.h into Analysis because it's used by
Analysis/MemoryBuiltins.cpp.
llvm-svn: 328165
|
There are six separate instances of the getPointerOperand() utility.
LoopVectorize.cpp has one of them,
and I don't want to create a 7th one while I'm trying to move
LoopVectorizationLegality into a separate file
(the eventual objective is to move it to the Analysis tree).
See http://lists.llvm.org/pipermail/llvm-dev/2018-February/120999.html
for llvm-dev discussions
Closes D43323.
Patch by Hideki Saito <hideki.saito@intel.com>.
llvm-svn: 327173
|
Summary: GCN ISA supports instructions that can read 16 consecutive dwords from memory through the scalar data cache;
the LoadStoreVectorizer should take advantage of the wider vector length and pack 16/8 elements of dwords/quadwords.
Author: FarhanaAleen
Reviewed By: rampitec
Subscribers: llvm-commits, AMDGPU
Differential Revision: https://reviews.llvm.org/D44179
llvm-svn: 326910
|
This reverts commit ce988cc100dc65e7c6c727aff31ceb99231cab03.
llvm-svn: 326907
|
llvm-svn: 326904
|
The LoadStoreVectorizer thought that <1 x T> and T were the same types
when merging stores, leading to a crash later.
Patch by Erik Hogeman.
Differential Revision: https://reviews.llvm.org/D44014
llvm-svn: 326884
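A hypothetical pair of adjacent stores of the shape involved, where <1 x i32> and i32 are distinct types that must not be treated as identical:
  store <1 x i32> %a, <1 x i32>* %p, align 4
  store i32 %b, i32* %q, align 4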
|
Make the width of the GEP index, which is used for address calculation, one of the pointer properties in the Data Layout:
p[address space]:size:memory_size:alignment:pref_alignment:index_size_in_bits.
The index size parameter is optional; if not specified, it is equal to the pointer size.
Until now, the InstCombiner normalized GEPs and extended the index operand to the pointer width.
That works fine if you can convert a pointer to an integer for address calculation, and all registered targets do this.
But some ISAs have a very restricted instruction set for pointer calculation. During the discussions it was decided to retrieve the GEP index information from the Data Layout.
http://lists.llvm.org/pipermail/llvm-dev/2018-January/120416.html
I added an interface to the Data Layout and I changed the InstCombiner and some other passes to take the Index width into account.
This change does not affect any in-tree target. I added tests to cover data layouts with explicitly specified index size.
Differential Revision: https://reviews.llvm.org/D42123
llvm-svn: 325102
|
Reviewers: arsenm
Subscribers: rengolin, sanjoy, wdng, hiraditya, asbirlea
Differential Revision: https://reviews.llvm.org/D41890
llvm-svn: 322157
|
This patch implements Chandler's idea [0] for supporting languages that
require support for infinite loops with side effects, such as Rust, providing
part of a solution to bug 965 [1].
Specifically, it adds an `llvm.sideeffect()` intrinsic, which has no actual
effect, but which appears to optimization passes to have obscure side effects,
such that they don't optimize away loops containing it. It also teaches
several optimization passes to ignore this intrinsic, so that it doesn't
significantly impact optimization in most cases.
As discussed on llvm-dev [2], this patch is the first of two major parts.
The second part, to change LLVM's semantics to have defined behavior
on infinite loops by default, with a function attribute for opting into
potential-undefined-behavior, will be implemented and posted for review in
a separate patch.
[0] http://lists.llvm.org/pipermail/llvm-dev/2015-July/088103.html
[1] https://bugs.llvm.org/show_bug.cgi?id=965
[2] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118632.html
Differential Revision: https://reviews.llvm.org/D38336
llvm-svn: 317729
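A minimal sketch of how a trivial infinite loop could carry the intrinsic so that passes keep it (the function name here is made up):
  declare void @llvm.sideeffect()

  define void @spin() {
  entry:
    br label %loop
  loop:
    call void @llvm.sideeffect()
    br label %loop
  }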
|
Summary:
We no longer add vectors of pointers as candidates for
load/store vectorization. It does not seem to work anyway,
but without this patch we can end up in asserts when trying
to create casts between an integer type and a
vector-of-pointers type.
The test case I've added used to assert like this when trying to
cast between i64 and <2 x i16*>:
opt: ../lib/IR/Instructions.cpp:2565: Assertion `castIsValid(op, S, Ty) && "Invalid cast!"' failed.
#0 PrintStackTraceSignalHandler(void*)
#1 SignalHandler(int)
#2 __restore_rt
#3 __GI_raise
#4 __GI_abort
#5 __GI___assert_fail
#6 llvm::CastInst::Create(llvm::Instruction::CastOps, llvm::Value*, llvm::Type*, llvm::Twine const&, llvm::Instruction*)
#7 llvm::IRBuilder<llvm::ConstantFolder, llvm::IRBuilderDefaultInserter>::CreateBitOrPointerCast(llvm::Value*, llvm::Type*, llvm::Twine const&)
#8 Vectorizer::vectorizeStoreChain(llvm::ArrayRef<llvm::Instruction*>, llvm::SmallPtrSet<llvm::Instruction*, 16u>*)
Reviewers: arsenm
Reviewed By: arsenm
Subscribers: nhaehnle, llvm-commits
Differential Revision: https://reviews.llvm.org/D39296
llvm-svn: 316665
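A hypothetical vector-of-pointers access of the kind that is no longer taken as a vectorization candidate:
  store <2 x i16*> %ptrs, <2 x i16*>* %slot, align 16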
|
Summary:
The code comments indicate that no effort has been spent on
correctly handling loads/stores when the size isn't a multiple of the
byte size. However, the code only avoided types
smaller than 8 bits, so for example a load of an i28 could
still be considered as a candidate for vectorization.
This patch adjusts the code to behave according to the code
comment.
The test case used to hit the following assert when
trying to cast an i32 to i28 using CreateBitOrPointerCast:
opt: ../lib/IR/Instructions.cpp:2565: Assertion `castIsValid(op, S, Ty) && "Invalid cast!"' failed.
#0 PrintStackTraceSignalHandler(void*)
#1 SignalHandler(int)
#2 __restore_rt
#3 __GI_raise
#4 __GI_abort
#5 __GI___assert_fail
#6 llvm::CastInst::Create(llvm::Instruction::CastOps, llvm::Value*, llvm::Type*, llvm::Twine const&, llvm::Instruction*)
#7 llvm::IRBuilder<llvm::ConstantFolder, llvm::IRBuilderDefaultInserter>::CreateBitOrPointerCast(llvm::Value*, llvm::Type*, llvm::Twine const&)
#8 (anonymous namespace)::Vectorizer::vectorizeLoadChain(llvm::ArrayRef<llvm::Instruction*>, llvm::SmallPtrSet<llvm::Instruction*, 16u>*)
Reviewers: arsenm
Reviewed By: arsenm
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D39295
llvm-svn: 316663
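A hypothetical access with a type whose size is not a multiple of the byte size, now rejected as a vectorization candidate:
  %v = load i28, i28* %p, align 4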
|
warnings; other minor fixes (NFC).
llvm-svn: 316034
|
possible
This patch adds min/max population count and leading/trailing zero/one bit counting methods.
The min methods return answers based on bits that are known without considering unknown bits. The max methods give answers taking into account the largest count that unknown bits could give.
Differential Revision: https://reviews.llvm.org/D32931
llvm-svn: 302925
|
computeKnownBits
This patch introduces a new KnownBits struct that wraps the two APInt used by computeKnownBits. This allows us to treat them as more of a unit.
Initially I've just altered the signatures of computeKnownBits and InstCombine's simplifyDemandedBits to pass a KnownBits reference instead of two separate APInt references. I'll do the same for the SelectionDAG version of computeKnownBits/simplifyDemandedBits in a separate patch.
I've added a constructor that allows initializing both APInts to the same bit width with a starting value of 0. This reduces the repeated pattern of initializing both APInts. One place default-constructed the APInts, so I added a default constructor for those cases.
Going forward I would like to add more methods that will work on the pairs. For example trunc, zext, and sext occur on both APInts together in several places. We should probably add a clear method that can be used to clear both pieces. Maybe a method to check for conflicting information. A method to return (Zero|One) so we don't write it out everywhere. Maybe a method for (Zero|One).isAllOnesValue() to determine if all bits are known. I'm sure there are many other methods we can come up with.
Differential Revision: https://reviews.llvm.org/D32376
llvm-svn: 301432
|
Differential Revision: https://reviews.llvm.org/D32101
llvm-svn: 301343
|
temporary APInts
This patch uses various APInt methods to reduce temporary APInt creation.
This should be all of the unrelated cleanups that got buried in D32376 (creating a KnownBits struct) as well as some pointed out by Simon during the review of that, plus a few improvements to use counting instead of masking.
I've left out any places where we do something like (KnownZero & KnownOne) != 0 as I plan to add a helper method to KnownBits to ask that question and didn't want to thrash that code an additional time.
Differential Revision: https://reviews.llvm.org/D32495
llvm-svn: 301338
|
Implement isLegalToVectorizeLoadChain for AMDGPU to avoid
producing private address space accesses that will need to be
split up later. This was doing the wrong thing in the case
where the queried chain was an even number of elements.
A possible <4 x i32> store was being split into
store <2 x i32>
store i32
store i32
rather than
store <2 x i32>
store <2 x i32>
when legal.
llvm-svn: 295933
|
llvm-svn: 291916
|
This creates non-linear behavior in the inliner (see more details in
r289755's commit thread).
llvm-svn: 290086
|
After r289755, the AssumptionCache is no longer needed. Variables affected by
assumptions are now found by using the new operand-bundle-based scheme. This
new scheme is more computationally efficient, and also we need much less
code...
llvm-svn: 289756
|
aliasing load
Summary:
The "getVectorizablePrefix" method would give up if it found an aliasing load for a store chain.
In practice, the aliasing load can be treated as a memory barrier and all stores that precede it
are a valid vectorizable prefix.
Issue found by volkan in D26962. Testcase is a pruned version of the one in the original patch.
Reviewers: jlebar, arsenm, tstellarAMD
Subscribers: mzolotukhin, wdng, nhaehnle, anna, volkan, llvm-commits
Differential Revision: https://reviews.llvm.org/D27008
llvm-svn: 287781
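A sketch of a store chain where a possibly aliasing load acts as a barrier; the stores before it still form a vectorizable prefix (all names are made up):
  store i32 %a, i32* %p0, align 4
  store i32 %b, i32* %p1, align 4
  %x = load i32, i32* %q, align 4   ; may alias the chain: acts as a barrier
  store i32 %c, i32* %p2, align 4   ; not part of the vectorizable prefix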
|
Summary: Added 6 new target hooks for the vectorizer in order to filter types, handle size constraints and decide how to split chains.
Reviewers: tstellarAMD, arsenm
Subscribers: arsenm, mzolotukhin, wdng, llvm-commits, nhaehnle
Differential Revision: https://reviews.llvm.org/D24727
llvm-svn: 283099
|
llvm-svn: 283004
|
llvm-svn: 282717
|
If the unaligned access has a dynamic offset, it may be odd, which
would make the adjusted alignment incorrect to use.
llvm-svn: 281110
|
Summary:
LSV replaces multiple adjacent loads with one vectorized load and a
bunch of extractelement instructions. This patch makes the
extractelement instructions' names match those of the original loads,
for (hopefully) improved readability.
Reviewers: asbirlea, tstellarAMD
Subscribers: arsenm, mzolotukhin
Differential Revision: https://reviews.llvm.org/D23748
llvm-svn: 280818
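A hypothetical before/after, with the extractelement results reusing the original load names (the exact suffixes are illustrative):
  ; before
  %x = load i32, i32* %ptr0, align 4
  %y = load i32, i32* %ptr1, align 4
  ; after
  %vec = load <2 x i32>, <2 x i32>* %vptr, align 8
  %x0 = extractelement <2 x i32> %vec, i32 0
  %y1 = extractelement <2 x i32> %vec, i32 1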