| Commit message | Author | Age | Files | Lines |
... | |
|
|
|
|
|
|
|
|
| |
We remove -polly-detect-unprofitable and -polly-no-early-exit. Both have been
superseded by -polly-process-unprofitable and were only kept as aliases so that
our buildbots could continue to work. As all buildbots have been moved to the
new option, we can now remove the old ones for good.
llvm-svn: 251787
|
|
|
|
| |
llvm-svn: 249975
|
|
|
|
| |
llvm-svn: 249427
|
|
|
|
|
|
|
|
|
|
| |
This single option replaces -polly-detect-unprofitable and -polly-no-early-exit
and is intended to be the only option that disables the compile-time heuristics
which bail out early on SCoPs that are believed not to benefit from Polly's
optimizations.
Suggested-by: Johannes Doerfert
llvm-svn: 249426
|
|
|
|
|
|
|
|
|
|
| |
Instead of generating code for an empty assumed context, we bail out early.
As the number of assumptions we generate increases, this becomes more and
more important. Additionally, this change will allow us to hide internal
contexts that are only used in run-time checks, e.g., a boundary context
with constraints not suited for simplification.
llvm-svn: 245540
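A minimal sketch of the check, assuming the assumed context is available as an
isl_set (the helper name is illustrative, not Polly's actual API):

#include <isl/set.h>

// Decide whether code generation is worthwhile: an empty assumed context
// means the collected assumptions can never hold at run time.
// isl_set_is_empty returns 0 (not empty), 1 (empty) or a negative error value.
bool worthGeneratingCode(isl_set *AssumedContext) {
  int Empty = isl_set_is_empty(AssumedContext);
  isl_set_free(AssumedContext);
  return Empty == 0;   // only generate code for a non-empty assumed context
}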
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The July issue of TOPLAS contains a 50-page discussion of the AST generation
techniques used in Polly. This discussion not only gives an in-depth
description of how we (re)generate an imperative AST from our polyhedral,
mathematical program description, but also provides interesting insights about:
- Schedule trees: A tree-based mathematical program description that enables us
to perform loop transformations on an abstract level, while issues like the
generation of the correct loop structure and loop bounds are taken care of
by our AST generator.
- Polyhedral unrolling: We discuss techniques that allow the unrolling of
non-trivial loops in the context of parametric loop bounds, complex tile
shapes and conditionally executed statements. Such unrolling support enables
the generation of predicated code, e.g., in the context of GPGPU computing.
- Isolation for full/partial tile separation: We discuss native support for
handling full/partial tile separation and -- in general -- native support for
isolation of boundary cases to enable smooth code generation for core
computations.
- AST generation with modulo constraints: We discuss how modulo mappings are
lowered to efficient C/LLVM code.
- User-defined constraint sets for run-time checks: We discuss how arbitrary
sets of constraints can be used to automatically create run-time checks that
ensure a set of constraints actually holds. This feature is very useful to
verify at run time various assumptions that have been made during program
optimization.
Polyhedral AST generation is more than scanning polyhedra
Tobias Grosser, Sven Verdoolaege, Albert Cohen
ACM Transactions on Programming Languages and Systems (TOPLAS), 37(4), July 2015
llvm-svn: 245157
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It is common practice to keep constructors lightweight. The reasons
include:
- During a constructor's execution, the vtable still refers to the class
currently being constructed, not to the most derived class. That is, virtual
method calls behave differently in constructors than in ordinary methods.
This makes it possible to end up calling unimplemented methods of abstract
classes, which usually results in a segmentation fault.
- If an exception is thrown in the constructor, the destructor is not
called, potentially leaking memory.
- Code in constructors cannot be called in a regular way, e.g. from
non-constructor methods of derived classes.
- Because it is common practice, people may not expect the constructor
to do more than initialize data and may skip it when looking for bugs.
Not all of these are applicable to LLVM (e.g. exceptions are disabled).
This patch refactors the computational work out of the constructors of
Scop and IslAst into regular init functions and introduces static
create-functions as replacement (see the sketch below).
Differential revision: http://reviews.llvm.org/D11491
Reviewers: grosser, jdoerfert
llvm-svn: 243677
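A minimal sketch of the resulting pattern, with illustrative names and
signatures rather than Polly's actual ones:

#include <memory>

class Region;            // stand-in for the analyzed region
class ScalarEvolution;   // stand-in for the SCEV analysis

class Scop {
  Region &R;
  explicit Scop(Region &R) : R(R) {}            // lightweight: member init only
  bool init(ScalarEvolution &SE) {              // expensive analysis work
    /* real computation omitted in this sketch */
    return true;
  }

public:
  // Static create-function: the only way to obtain a fully built Scop.
  static std::unique_ptr<Scop> create(Region &R, ScalarEvolution &SE) {
    std::unique_ptr<Scop> S(new Scop(R));
    if (!S->init(SE))
      return nullptr;                           // construction failed cleanly
    return S;
  }
};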
|
|
|
|
| |
llvm-svn: 243245
|
|
|
|
| |
llvm-svn: 243215
|
|
|
|
|
|
|
|
|
| |
As specified in PR23888, run-time alias check generation is expensive
in terms of compile time. This change reduces the compile time by computing
the minimal/maximal accesses only once for each base pointer (see the sketch
below).
Contributed-by: Pratik Bhatu <cs12b1010@iith.ac.in>
llvm-svn: 243024
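A hedged sketch of the caching idea; the types and function names are
illustrative, not the actual Polly code:

#include <map>

using BasePtr = const void *;          // stand-in for a base pointer value
struct MinMaxAccess { /* minimal and maximal accessed offsets */ };

// Stand-in for the expensive per-base-pointer computation.
MinMaxAccess computeMinMaxAccess(BasePtr Base) { return MinMaxAccess(); }

// Compute the minimal/maximal access for each base pointer only once, even
// if the pointer shows up in several alias groups.
const MinMaxAccess &getOrComputeMinMax(BasePtr Base,
                                       std::map<BasePtr, MinMaxAccess> &Cache) {
  auto It = Cache.find(Base);
  if (It == Cache.end())
    It = Cache.emplace(Base, computeMinMaxAccess(Base)).first;
  return It->second;
}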
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of flat schedules, we now use so-called schedule trees to represent the
execution order of the statements in a SCoP. Schedule trees make it a lot easier
to analyze, understand and modify properties of a schedule, as specific nodes
in the tree can be chosen and possibly replaced.
This patch does not yet fully move our DependenceInfo pass to schedule trees,
as some additional performance analysis is needed here. (In general, schedule
trees should be faster at compile time, as the more structured representation
is generally easier to analyze and work with.) We also cannot yet perform the
reduction analysis on schedule trees.
For more information regarding schedule trees, please see Section 6 of
https://lirias.kuleuven.be/handle/123456789/497238
llvm-svn: 242130
|
|
|
|
|
|
| |
With this commit 'make check-polly' is now address sanitizer clean.
llvm-svn: 239131
|
|
|
|
|
|
|
|
|
|
| |
Upcoming revisions of isl require us to include header files explicitly that
were previously only included transitively. Before we add them, we sort the
existing includes.
Thanks to Chandler for sort_includes.py, a simple but very convenient script.
llvm-svn: 236930
|
|
|
|
|
|
|
|
|
|
|
| |
The new Dependences struct in the DependenceInfo pass holds all information
that was formerly part of the DependenceInfo. It also provides the
same interface for the user to access this information.
This is another step towards a more general ScopPass interface that allows
multiple SCoPs to be "in flight".
llvm-svn: 231327
|
|
|
|
|
|
|
|
|
|
| |
We rename the Dependences pass to DependenceInfo as a first step towards a
caching pass policy. The new DependenceInfo pass will later provide
"Dependences" for a SCoP.
To keep things consistent, the test folder is renamed too.
llvm-svn: 231308
|
|
|
|
| |
llvm-svn: 230902
|
|
|
|
|
|
| |
This is the first step in the interface simplification.
llvm-svn: 230897
|
|
|
|
|
| |
Suggested-by: Johannes Doerfert
llvm-svn: 230642
|
|
|
|
|
|
|
| |
isl recently introduced a new interface to create run-time checks from
constraint sets. Use this interface to simplify our run-time check generation.
llvm-svn: 230640
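For illustration, the interface in question is presumably
isl_ast_build_expr_from_set, which turns a constraint set into an AST
expression that evaluates to true iff the constraints hold; a hedged sketch
(the wrapper name is illustrative):

#include <isl/ast.h>
#include <isl/ast_build.h>
#include <isl/set.h>

// Build the run-time check condition as an isl AST expression; it can then
// be code generated like any other isl expression.
isl_ast_expr *buildRunTimeCheck(isl_ast_build *Build, isl_set *Constraints) {
  return isl_ast_build_expr_from_set(Build, Constraints);
}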
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This allows us to skip AST and code generation if we did not optimize
a SCoP and will not generate parallel or alias annotations. The
initial heuristic for exiting is simple but allows improvements later on.
All failing test cases have been modified to disable the early exit, thus
keeping their coverage.
Differential Revision: http://reviews.llvm.org/D7254
llvm-svn: 228851
|
|
|
|
| |
llvm-svn: 222106
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of parallelizing every parallel outermost loop, we now use a very
minimalistic cost model. Specifically, we assume that innermost loops are not
worth parallelizing and that all non-innermost loops are.
When parallelizing all loops in LNT we got several slowdowns/timeouts due to
us parallelizing innermost loops that are executed only a couple of times
(with the number of iterations not known statically). With this basic heuristic
enabled, LNT does not show any more timeouts, while several interesting loops
are still parallelized.
There are many ways to obtain an improved heuristic. Constructing such an
improved heuristic from a position of minimal slowdown and zero code-size
increase seems to be the best approach, as it allows us to track progress on
LNT.
llvm-svn: 222096
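A hedged sketch of this minimal cost model (the function and parameter names
are illustrative):

// Only mark a loop for thread-level parallel execution if it is parallel and
// not an innermost loop; innermost loops are assumed not to amortize the
// threading overhead under this minimal cost model.
static bool isProfitableToParallelize(bool IsParallel, bool IsInnermost) {
  if (!IsParallel)
    return false;        // correctness: never parallelize non-parallel loops
  return !IsInnermost;   // cost model: skip innermost loops
}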
|
|
|
|
| |
llvm-svn: 221982
|
|
|
|
| |
llvm-svn: 221483
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We introduce a new flag -polly-parallel and use it to annotate the for-nodes in
the isl ast that we want to execute thread-parallel (e.g., using OpenMP). We
previously already emitted OpenMP annotations, but we did this for various
kinds of parallel loops, including some which we cannot run in parallel.
With this patch we now have three annotations:
1) #pragma known-parallel [reduction]
2) #pragma omp for
3) #pragma simd
meaning:
1) the loop has no loop-carried dependences
2) the loop will be executed thread-parallel
3) the loop can possibly be vectorized
This patch introduces 1) and reduces the use of 2) to only the cases where we
will actually generate thread-parallel code.
It is in preparation for OpenMP code generation in our isl backend.
Legacy:
- We also have a command line option -enable-polly-openmp. This option controls
the OpenMP code generation in CLooG. It will become an alias of
-polly-parallel after the CLooG code generation has been dropped.
http://reviews.llvm.org/D6142
llvm-svn: 221479
|
|
|
|
|
|
|
|
|
| |
This makes sure we consistently use dbgs() when printing debug output.
Previously, the code just mixed calls to isl_*_dump() with printing to dbgs()
and relied on the two methods interacting in predictable ways (same output
stream, no unexpected reordering of output).
llvm-svn: 220443
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This change will build all alias groups (minimal/maximal accesses
to possibly aliasing base pointers) we have to check before
we can assume an alias-free environment. It will also use these
to create Run-Time Alias Checks (RTC) in the isl code generation
backend, thus allowing us to optimize SCoPs despite possibly aliasing
pointers when this backend is used.
This feature will be enabled for the isl code generator, e.g.,
--polly-code-generator=isl, but disabled for:
- The CLooG code generator (still the default).
- The case where delinearization is enabled.
- The case where non-affine accesses are allowed.
llvm-svn: 218046
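Conceptually, for two alias groups the emitted check verifies that their
accessed address ranges do not overlap; an illustrative sketch of that
condition (the names and the boundary convention, i.e. whether Max is
inclusive or one-past-the-end, are assumptions of this sketch):

#include <cstdint>

// True iff the address ranges accessed through two potentially aliasing base
// pointers cannot overlap at run time.
bool rangesDoNotOverlap(std::uintptr_t MinA, std::uintptr_t MaxA,
                        std::uintptr_t MinB, std::uintptr_t MaxB) {
  return MaxA <= MinB || MaxB <= MinA;
}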
|
|
|
|
|
|
|
|
|
|
|
| |
During the IslAst parallelism check, also compute the minimal dependence
distance and store it in the IslAst for-node.
Reviewer: sebpop
Differential Revision: http://reviews.llvm.org/D4987
llvm-svn: 217729
|
|
|
|
|
|
|
|
|
|
| |
There was a bug in the IslAst which caused that no further outermost
parallel loops were detected/checked after a parallel outermost loop
of depth 1.
+ Test case attached
llvm-svn: 217452
|
|
|
|
|
|
|
|
|
|
|
|
| |
+ Remove the class IslGenerator which duplicates the functionality of
IslExprBuilder.
+ Use the IslExprBuilder to create code for memory access relations.
+ Also handle array types during access creation.
+ Enable SCEV codegen for one of the transformed memory access tests, thus
  exercising access creation without canonical induction variables being
  available.
+ Update one test case to the new output.
llvm-svn: 214659
|
|
|
|
|
|
|
|
|
|
|
|
| |
+ Split all reduction dependences and map them to the causing memory accesses.
+ Print the types & base addresses of broken reductions for each "reduction
parallel" marked loop (OpenMP style).
+ 3 test cases to show how reductions are now represented in the isl ast.
The mapping "(ast) loops -> broken reductions" is also needed to find the
memory accesses we need to privatize in a loop.
llvm-svn: 214489
|
|
|
|
|
|
|
|
|
| |
The functions isParallel, isInnermostParallel and isOutermostParallel in
IslAstInfo will now return true even in the presence of broken reductions.
To compensate for this change, the negated result of isReductionParallel can
be used.
llvm-svn: 214488
|
|
|
|
|
|
|
|
| |
+ Perform the parallelism check on the innermost loop only once.
+ Inline the markOpenmpParallel function.
+ Rename all IslAstUserPayload * into Payload to make it consistent.
llvm-svn: 214448
|
|
|
|
| |
llvm-svn: 214447
|
|
|
|
|
|
|
|
| |
When we build the IslAst we visit for-nodes (in pre- and post-order) as well as
user/domain nodes. As these two sets are non-overlapping, we do not need to
check whether we annotated a node earlier when we visit it.
llvm-svn: 214170
|
|
|
|
| |
llvm-svn: 214061
|
|
|
|
|
|
|
|
|
| |
Use the fact that if we visit a for-node in pre-order and immediately
afterwards in post-order, we know we did not visit any child for-nodes, and
thus we have found an innermost loop.
+ Test case for an innermost loop with a conditional inside
llvm-svn: 213870
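A hedged sketch of this pre/post-order trick using isl's per-for-node
callbacks; the payload layout and names are illustrative, not Polly's actual
bookkeeping:

#include <isl/ast.h>
#include <isl/ast_build.h>
#include <isl/id.h>

struct BuildState { bool SawChildFor; };

// Pre-order: called when a for-node is entered during AST construction.
static isl_id *beforeFor(isl_ast_build *Build, void *User) {
  static_cast<BuildState *>(User)->SawChildFor = false;  // no nested for yet
  // Annotate the node; real code would attach its per-loop payload here.
  return isl_id_alloc(isl_ast_build_get_ctx(Build), "loop", User);
}

// Post-order: called when the same for-node is left again.
static isl_ast_node *afterFor(isl_ast_node *Node, isl_ast_build *Build,
                              void *User) {
  BuildState *State = static_cast<BuildState *>(User);
  if (!State->SawChildFor) {
    // Pre- and post-order visits were adjacent: this is an innermost loop.
  }
  State->SawChildFor = true;  // tell the enclosing for-node it has a child
  return Node;
}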
|
|
|
|
|
|
|
|
|
|
|
| |
+ Renamed 'context' to 'build' where it refers to the isl_ast_build
+ Use the IslAstInfo functions to extract the schedule of a node
+ Use the IslAstInfo functions to extract the build/context of a node
+ Move the payload struct into the IslAstInfo class
+ Use a constructor and destructor (also new and delete) to
allocate/initialize the payload struct
llvm-svn: 213792
|
|
|
|
|
|
| |
+ Add const annotations to some member functions
llvm-svn: 213779
|
|
|
|
|
|
|
|
|
|
|
|
| |
Offer the static functions that extract information out of an IslAst for-node
as members of IslAstInfo rather than as top-level entities.
+ Refactor common code
+ Add isParallel and isReductionParallel
+ Rename IslAstUser to IslAstUserPayload to make it clear this is just a (or
the) payload struct.
llvm-svn: 213272
|
|
|
|
| |
llvm-svn: 213026
|
|
|
|
|
|
|
|
|
|
| |
+ Introduced dependency type TYPE_TC_RED to represent the transitive closure
(& the reverse) of reduction dependences. These are used when we check for
reduction parallel loops.
+ Test cases including loop reversals and modulo schedules which compute
  reductions in an alternating order.
llvm-svn: 213019
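A hedged sketch of how such a relation could be computed with isl; the
wrapper name is illustrative and Polly's actual computation may differ in
details (e.g., whether the reverse is added before or after taking the
closure):

#include <isl/union_map.h>

// Combine the reduction dependences with their reverse and take the
// transitive closure, as a sketch of what TYPE_TC_RED represents.
// Takes ownership of 'Red' (isl __isl_take convention).
isl_union_map *computeTCRed(isl_union_map *Red) {
  isl_union_map *Reverse = isl_union_map_reverse(isl_union_map_copy(Red));
  isl_union_map *Both = isl_union_map_union(Red, Reverse);
  // Pass nullptr for 'exact'; we do not care here whether the closure is exact.
  return isl_union_map_transitive_closure(Both, nullptr);
}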
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As our delinearization works optimistically, we sometimes need run-time
checks that verify our optimistic assumptions. A simple example is the
following code:
void foo(long n, long m, long o, double A[n][m][o]) {
  for (long i = 0; i < 100; i++)
    for (long j = 0; j < 150; j++)
      for (long k = 0; k < 200; k++)
        A[i][j][k] = 1.0;
}
After Clang has linearized the access to A and we have delinearized it again
to A[i][j][k], we need to ensure that we do not access the delinearized array
out of bounds (this information is not available in LLVM-IR). Hence, we
need to verify the following constraints at run time:
CHECK: Assumed Context:
CHECK: [o, m] -> { : m >= 150 and o >= 200 }
llvm-svn: 212198
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To translate the old induction variables, as they exist before Polly, into the
new induction variables introduced during AST code generation, we need to
generate code that computes the values of the old induction variables in terms
of the new ones. We can do this by just looking at the arguments isl generates
in each scheduled statement.
Example:
// Old
for i
  S(i)
// New
for c0
  for c1
    S(c0 + c1)
To get the value of i, we need to compute 'c0 + c1'. This expression is readily
available in the user statements generated by isl and just needs to be
translated to LLVM-IR.
This replaces an old, confusing construct that, during AST generation,
constructed an isl multi-affine expression describing this relation, which was
then AST generated again for each statement and argument when translating the
isl AST to LLVM-IR. This approach was difficult to understand and the
additional AST generation calls were entirely redundant, as isl provides the
relevant expressions as arguments of the generated user statements.
llvm-svn: 212186
|
|
|
|
| |
llvm-svn: 211980
|
|
|
|
| |
llvm-svn: 211803
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
+ Collect reduction dependences
+ Introduced TYPE_RED in Dependences.h which can be used to obtain the
reduction dependences
+ Used TYPE_RED to prevent parallelization as long as we do not have
  privatizing code generation
+ Relax the dependences for non-parallel code generation
+ Add privatization dependences to ensure correctness
+ 12 Test cases to check for reduction and privatization dependences
llvm-svn: 211369
|
|
|
|
| |
llvm-svn: 210170
|
|
|
|
|
|
|
|
|
|
| |
definition below all of the header #include lines, Polly edition.
If you want to know more details about this, you can see the recent
commits to Debug.h in LLVM. This is just the Polly segment of a cleanup
I'm doing globally for this macro.
llvm-svn: 206852
|
|
|
|
| |
llvm-svn: 206361
|