path: root/polly/lib/CodeGen/IslNodeBuilder.cpp
Commit message · Author · Age · Files · Lines
...
* [FIX] Look through div & srem instructions in SCEVs (Johannes Doerfert, 2016-04-08, 1 file, -2/+2)
  The findValues() function did not look through div & srem instructions that were part of the argument SCEV, although various other places already do. This mismatch caused us to preload values in the wrong order.
  llvm-svn: 265775
* Fix non-synthesizable loop exit values. (Michael Kruse, 2016-03-01, 1 file, -5/+6)
  Polly recognizes affine loops that ScalarEvolution does not, in particular those with loop conditions that depend on hoisted invariant loads. Check for SCEVAddRec dependencies on such loops and do not consider their exit values as synthesizable, because SCEVExpander would generate them as expressions that depend on the original induction variables. These are not available in generated code.
  llvm-svn: 262404
* [FIX] Prevent compile time problems due to complex invariant loads (Johannes Doerfert, 2016-03-01, 1 file, -1/+18)
  This cures the symptoms we see in h264 of SPEC2006 but not the cause.
  llvm-svn: 262327
* Introduce Scop::getStmtFor. NFC. (Michael Kruse, 2016-02-24, 1 file, -1/+1)
  Replace Scop::getStmtForBasicBlock and Scop::getStmtForRegionNode, and add overloads for llvm::Instruction and llvm::RegionNode. getStmtFor and its overloads become the common interface to get the Stmt that contains something. Named after LoopInfo::getLoopFor and RegionInfo::getRegionFor.
  llvm-svn: 261791
* Annotation of SIMD loops (Roman Gareev, 2016-02-23, 1 file, -5/+21)
  Use 'mark' nodes to annotate a SIMD loop during the ScheduleTransformation and skip parallelism checks. The buildbot shows the following compile/execution time changes:

  Compile time improvements:
    Benchmark       Δ        Previous  Current  σ
    …/gesummv       -6.06%   0.2640    0.2480   0.0055
    …/gemver        -4.46%   0.4480    0.4280   0.0044
    …/covariance    -4.31%   0.8360    0.8000   0.0065
    …/adi           -3.23%   0.9920    0.9600   0.0065
    …/doitgen       -2.53%   0.9480    0.9240   0.0090
    …/3mm           -2.33%   1.0320    1.0080   0.0087

  Execution time regressions:
    Benchmark       Δ        Previous  Current  σ
    …/viterbi       1.70%    5.1840    5.2720   0.0074
    …/smallpt       1.06%    12.4920   12.6240  0.0040

  Reviewed-by: Tobias Grosser <tobias@grosser.es>
  Differential Revision: http://reviews.llvm.org/D14491
  llvm-svn: 261620
* Set AST Build for all statements [NFC] (Johannes Doerfert, 2016-02-16, 1 file, -2/+5)
  llvm-svn: 260956
* Separate invariant equivalence classes by type (Johannes Doerfert, 2016-02-07, 1 file, -11/+6)
  We now distinguish invariant loads to the same memory location if they have different types. This will cause us to pre-load an invariant location once for each type that is used to access it. However, we can thereby avoid invalid casting, especially if an array is accessed through differently typed/sized invariant loads. This basically reverts the changes in r260023 but keeps the test cases.
  llvm-svn: 260045
* Simplify code [NFC] (Johannes Doerfert, 2016-02-07, 1 file, -1/+1)
  llvm-svn: 260030
* IslNodeBuilder: Invariant load hoisting of elements with differing sizes (Tobias Grosser, 2016-02-06, 1 file, -18/+7)
  Always use the access-instruction pointer type to load the invariant values. Otherwise mismatches between the ScopArrayInfo element type and the memory access element type will result in invalid casts. These type mismatches are, after r259784, a lot more common and also arise with types of different size, which had not been handled before. Interestingly, this change actually simplifies the code, as we now have only one code path that is always taken, rather than a standard code path for the common case and a "fixup" code path that replaces the standard code path in case of mismatching types.
  llvm-svn: 260009
* Support accesses with differently sized types to the same array (Tobias Grosser, 2016-02-04, 1 file, -1/+1)
  This allows code such as:

    void multiple_types(char *Short, char *Float, char *Double) {
      for (long i = 0; i < 100; i++) {
        Short[i] = *(short *)&Short[2 * i];
        Float[i] = *(float *)&Float[4 * i];
        Double[i] = *(double *)&Double[8 * i];
      }
    }

  To model such code we use the smallest element type of all original array accesses as the canonical element type of the modeled array, if the type allocation sizes are multiples of each other. Otherwise, we use a newly created iN type, where N is the gcd of the allocation sizes of the types used in the accesses to this array. Accesses with types larger than the canonical element type are modeled as multiple accesses with the smaller type.

  For example, the second load access is modeled as:

    { Stmt_bb2[i0] -> MemRef_Float[o0] : 4i0 <= o0 <= 3 + 4i0 }

  To support code-generating these memory accesses, we introduce a new method getAccessAddressFunction that assigns each statement instance a single memory location, the address we load from/store to. Currently we obtain this address by taking the lexmin of the access function. We may consider keeping track of the memory location more explicitly in the future.

  We currently do _not_ handle multi-dimensional arrays and also keep the restriction of not supporting accesses where the offset expression is not a multiple of the access element type size. This patch adds tests that ensure we correctly invalidate a scop in case these accesses are found. Both types of accesses can be handled using the very same model, but are left to be added in the future.

  We also move the initialization of the scop-context into the constructor to ensure it is already available when invalidating the scop. Finally, we add this as a new item to the 2.9 release notes.

  Reviewers: jdoerfert, Meinersbur
  Differential Revision: http://reviews.llvm.org/D16878
  llvm-svn: 259784
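The canonical-element-type rule described above can be sketched as a small standalone helper. This is an illustrative reconstruction, not Polly's actual code; the function name and interface are hypothetical:

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Hypothetical helper: given the allocation sizes (in bytes) of all
// accesses to one array, pick the canonical element size. If every size
// is a multiple of the smallest one, the smallest size is used; otherwise
// fall back to the gcd of all sizes, corresponding to a new iN type.
static unsigned canonicalElementSize(const std::vector<unsigned> &Sizes) {
  assert(!Sizes.empty() && "need at least one access");
  unsigned Smallest = *std::min_element(Sizes.begin(), Sizes.end());
  bool AllMultiples = std::all_of(Sizes.begin(), Sizes.end(),
                                  [&](unsigned S) { return S % Smallest == 0; });
  if (AllMultiples)
    return Smallest;
  // gcd(0, x) == x, so folding from 0 yields the gcd of all sizes.
  unsigned G = 0;
  for (unsigned S : Sizes)
    G = std::gcd(G, S);
  return G;
}
```

For the example in the commit message (short/float/double accesses of sizes 2, 4 and 8), all sizes are multiples of 2, so the canonical element type would be 2 bytes wide.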
* Introduce MemAccInst helper class; NFC (Michael Kruse, 2016-01-27, 1 file, -1/+2)
  MemAccInst wraps the common members of LoadInst and StoreInst. Also use this class in:
  - ScopInfo::buildMemoryAccess
  - BlockGenerator::generateLocationAccessed
  - ScopInfo::addArrayAccess
  - Scop::buildAliasGroups
  - Replace every use of polly::getPointerOperand
  Reviewers: jdoerfert, grosser
  Differential Revision: http://reviews.llvm.org/D16530
  llvm-svn: 258947
* Make sure we preserve alignment information after hoisting invariant load (Johannes Doerfert, 2016-01-19, 1 file, -4/+11)
  In Polly, after hoisting loop-invariant loads out of the loop, the alignment information for the hoisted loads was missing; this patch restores it.
  Contributed-by: Lawrence Hu <lawrence@codeaurora.org>
  Differential Revision: http://reviews.llvm.org/D16160
  llvm-svn: 258105
* ScopInfo: Harmonize the different array kinds (Tobias Grosser, 2015-12-13, 1 file, -4/+4)
  Over time different vocabulary has been introduced to describe the different memory objects in Polly, resulting in different - often inconsistent - naming schemes in different parts of Polly. We now standardize this to the following scheme:

    KindArray, KindValue, KindPHI, KindExitPHI
               |-------- isScalar ----------|

  In most cases this naming scheme has already been used previously (this minimizes changes and ensures we remain consistent with previous publications). The main change is that we remove KindScalar to clarify the difference between a scalar as a memory object of kind Value, PHI or ExitPHI and a value (former KindScalar), which is a memory object modeling an llvm::Value.

  We also move all documentation to the Kind* enum in the ScopArrayInfo class, remove the second enum in the MemoryAccess class and update documentation to be formulated from the perspective of the memory object, rather than the memory access. The terms "Implicit"/"Explicit", formerly used to describe memory accesses, have been dropped. From the perspective of memory accesses they described the different memory kinds well - especially from the perspective of code generation - but just from the perspective of a memory object it seems more straightforward to talk about scalars and arrays, rather than explicit and implicit arrays. The last comment is clearly subjective, though. A less subjective reason to go for these terms is the historic use both in mailing list discussions and publications.
  llvm-svn: 255467
* Remove non-debug printing of domain set (Tobias Grosser, 2015-11-30, 1 file, -1/+0)
  Contributed-by: Chris Jenneisch <chrisj@codeaurora.org>
  Differential Revision: http://reviews.llvm.org/D15094
  llvm-svn: 254343
* [FIX] Do not generate code for parameters referencing dead values (Johannes Doerfert, 2015-11-11, 1 file, -3/+34)
  Check if a value that is referenced by a parameter is dead and do not generate code for the parameter in such a case.
  llvm-svn: 252813
* [FIX] Cast pre-loaded values correctly or reload them with adjusted type. (Johannes Doerfert, 2015-11-11, 1 file, -1/+22)
  Especially for structs, the SAI object of a base pointer does not describe all the types that the user might expect when loading from that base pointer. While we will still cast integers and pointers, we will now reload the value with the correct type if floating point and non-floating point values are involved. However, there are now TODOs where we use bitcasts instead of a proper conversion or reloading. This fixes bug 25479.
  llvm-svn: 252706
* [FIX] Create empty invariant equivalence classes (Johannes Doerfert, 2015-11-11, 1 file, -3/+15)
  We now create all invariant equivalence classes for required invariant loads instead of creating them on demand. This way we can check if a parameter references an invariant load that is actually not executed and was therefore not materialized. If that happens, the parameter is not materialized either. This fixes bug 25469.
  llvm-svn: 252701
* ScopInfo: Introduce ArrayKind (Tobias Grosser, 2015-11-10, 1 file, -1/+1)
  Since r252422 we do not only distinguish two ScopArrayInfo kinds, PHI nodes and others, but work with three kinds of ScopArrayInfo objects: SCALAR, PHI and ARRAY objects. Instead of keeping two boolean flags isPHI and isScalar and wondering what a ScopArrayInfo object of kind (!isScalar && isPHI) is, we now list explicitly the three different possible types of memory objects. This change also allows us to remove the confusing nested pairs that have been used in ArrayInfoMapTy.
  llvm-svn: 252620
* [FIX] Use same alloca for invariant loads and the scalar users (Johannes Doerfert, 2015-11-09, 1 file, -12/+22)
  llvm-svn: 252451
* [FIX] Introduce different SAI objects for scalar and memory accesses (Johannes Doerfert, 2015-11-08, 1 file, -1/+1)
  Even if a scalar and a memory access have the same base pointer, we cannot use one SAI object for both, as not only the type but also the number of dimensions would be wrong. For the attached test case this caused a crash in the invariant load hoisting, though it could cause various other problems too. This fixes bug 25428 and an execution-time bug in MallocBench/cfrac.
  Reported-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
  llvm-svn: 252422
* [FIX] Bail out if there is a dependence cycle between invariant loads (Johannes Doerfert, 2015-11-07, 1 file, -46/+79)
  While the program cannot cause a dependence cycle between invariant loads, additional constraints (e.g., to ensure finite loops) can introduce them. It is hard to detect them in the SCoP description, thus we will only check for them at code generation time. If such a recursion is detected, we bail out of code generation and place a "false" runtime check to guarantee the original code is used. This fixes bug 25443.
  llvm-svn: 252412
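The cycle check described above can be sketched as a plain depth-first search over the dependence graph of invariant loads. This is a hypothetical standalone illustration (integer node ids standing in for the loads), not the IslNodeBuilder code:

```cpp
#include <map>
#include <set>
#include <vector>

// Deps[L] lists the loads that must be preloaded before L. Returns true
// if following the dependences from Node can reach Node again, i.e.,
// there is a dependence cycle and code generation must bail out and
// fall back to the original code via a "false" runtime check.
static bool hasCycleFrom(int Node,
                         const std::map<int, std::vector<int>> &Deps,
                         std::set<int> &OnStack) {
  if (!OnStack.insert(Node).second)
    return true; // Node is already on the DFS stack: cycle detected.
  auto It = Deps.find(Node);
  if (It != Deps.end())
    for (int Succ : It->second)
      if (hasCycleFrom(Succ, Deps, OnStack))
        return true;
  OnStack.erase(Node); // Backtrack: Node leads to no cycle.
  return false;
}
```

A chain 0 -> 1 -> 2 is fine, while adding 2 -> 0 closes a cycle and would trigger the bail-out.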
* polly/ADT: Remove implicit ilist iterator conversions, NFC (Duncan P. N. Exon Smith, 2015-11-06, 1 file, -19/+19)
  Remove all the implicit ilist iterator conversions from polly, in preparation for making them illegal in ADT. There was one oddity I came across: at line 95 of lib/CodeGen/LoopGenerators.cpp, there was a post-increment `Builder.GetInsertPoint()++`. Since it was a no-op, I removed it, but I admit I wonder if it might be a bug (both before and after this change)? Perhaps it should be a pre-increment?
  llvm-svn: 252357
* [FIX] Simplify and correct preloading of base pointer origin (Johannes Doerfert, 2015-11-03, 1 file, -9/+2)
  To simplify and correct the preloading of a base pointer origin, e.g., the base pointer for the current indirect invariant load, we now just check if there is an invariant access class that involves the base pointer of the current class.
  llvm-svn: 251962
* [FIX] Ensure base pointer origin was preloaded already (Johannes Doerfert, 2015-11-03, 1 file, -1/+13)
  If the base pointer of a preloaded value has a base pointer origin, i.e., it is an indirect invariant load, we have to make sure the base pointer origin is preloaded first.
  llvm-svn: 251946
* [FIX] Correctly update SAI base pointer (Johannes Doerfert, 2015-11-03, 1 file, -2/+10)
  If a base pointer load is preloaded, we have to change the base pointer of the derived SAI. However, as the derived SAI relationship is coarse-grained, we need to check if we actually preloaded the base pointer or a different element of the base pointer SAI array.
  llvm-svn: 251881
* [FIX] Restructure invariant load equivalence classes (Johannes Doerfert, 2015-10-18, 1 file, -46/+76)
  Sorting is replaced by demand-driven code generation that will pre-load a value when it is needed or, if it was not needed before, at some point determined by the order of invariant accesses in the program. Only in very few cases will this demand-driven pre-loading kick in, though it will prevent us from generating faulty code. An example where it is needed is shown in:
    test/ScopInfo/invariant_loads_complicated_dependences.ll
  Invariant loads that appear in parameters but are not on the top level (e.g., the parameter is not a SCEVUnknown) will now be treated correctly.
  Differential Revision: http://reviews.llvm.org/D13831
  llvm-svn: 250655
* [FIX] Cast preloaded values (Johannes Doerfert, 2015-10-18, 1 file, -18/+22)
  Preloaded values have to match the type of their counterpart in the original code and not the type of the base array.
  llvm-svn: 250654
* Consolidate invariant loads (Johannes Doerfert, 2015-10-09, 1 file, -6/+12)
  If an (assumed) invariant location is loaded multiple times, we generated a parameter for each location. However, this caused compile time problems for several benchmarks (e.g., 445_gobmk in SPEC2006 and BT in the NAS benchmarks). Additionally, the code we generate is suboptimal, as we preload the same location multiple times and perform the same checks on all the parameters that refer to the same value.

  With this patch we consolidate the invariant loads in three steps:
  1) During SCoP initialization, required invariant loads are put in equivalence classes based on their pointer operand. One representing load is used to generate a parameter for the whole class, thus we never generate multiple parameters for the same location.
  2) During SCoP simplification, we remove invariant memory accesses that are in the same equivalence class. While doing so we build the union of all execution domains, as it is only important that the location is accessed at least once.
  3) During code generation, we only preload one element of each equivalence class with the unified execution domain. All others are mapped to that preloaded value.
  Differential Revision: http://reviews.llvm.org/D13338
  llvm-svn: 249853
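Step 1 of this consolidation can be sketched independently of the LLVM data structures: required invariant loads are grouped by their pointer operand, and the first member of each class serves as the representative that generates the single parameter. The types and names below are illustrative stand-ins, not Polly's actual API:

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative stand-in for an LLVM load instruction: identified by the
// name of its pointer operand plus an id.
struct InvariantLoad {
  std::string PointerOperand;
  int Id;
};

// Group required invariant loads into equivalence classes keyed by the
// pointer operand. The first load of each class acts as the
// representative used to generate the (single) parameter, so the same
// location never produces multiple parameters.
static std::map<std::string, std::vector<InvariantLoad>>
buildEquivalenceClasses(const std::vector<InvariantLoad> &Loads) {
  std::map<std::string, std::vector<InvariantLoad>> Classes;
  for (const InvariantLoad &L : Loads)
    Classes[L.PointerOperand].push_back(L);
  return Classes;
}
```

Three loads from pointers A, B and A would thus collapse into two classes, with the first A-load representing both A-loads.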
* Allow invariant loads in the SCoP description (Johannes Doerfert, 2015-10-07, 1 file, -8/+9)
  This patch allows invariant loads to be used in the SCoP description, e.g., as loop bounds, conditions or in memory access functions.

  First we collect "required invariant loads" during SCoP detection that would otherwise make an expression we care about non-affine. To this end a new level of abstraction was introduced before SCEVValidator::isAffineExpr(), namely ScopDetection::isAffine() and ScopDetection::onlyValidRequiredInvariantLoads(). Here we can decide if we want a load inside the region to be optimistically assumed invariant or not. If we do, it will be marked as required, and in the SCoP generation we bail if it is actually not invariant. If we don't, it will be a non-affine expression as before.

  At the moment we optimistically assume all "hoistable" (namely non-loop-carried) loads to be invariant. This causes us to expand some SCoPs and dismiss them later, but it also allows us to detect a lot we would dismiss directly if we would ask, e.g., AliasAnalysis::canBasicBlockModify(). We also allow potential aliases between optimistically assumed invariant loads and other pointers, as our runtime alias checks are sound in case the loads are actually invariant. Together with the invariant checks this combination allows us to handle a lot more than LICM can.

  The code generation of the invariant loads had to be extended, as we can now have dependences between parameters and invariant (hoisted) loads as well as the other way around, e.g., test/Isl/CodeGen/invariant_load_parameters_cyclic_dependence.ll. First, it is important to note that we cannot have real cycles, but only dependences from a hoisted load to a parameter and from another parameter to that hoisted load (and so on). To handle such cases we materialize llvm::Values for parameters that are referred to by a hoisted load on demand and then materialize the remaining parameters.

  Second, there are new kinds of dependences between hoisted loads caused by the constraints on their execution. If a hoisted load is conditionally executed, it might depend on the value of another hoisted load. To deal with such situations we sort them already in the ScopInfo such that they can be generated in the order they are listed in the Scop::InvariantAccesses list (see compareInvariantAccesses). The dependences between hoisted loads caused by indirect accesses are handled the same way as before.
  llvm-svn: 249607
* Move the ValueMapT declaration out of BlockGenerator (Johannes Doerfert, 2015-10-07, 1 file, -5/+4)
  Value maps are created and used in many places, and it is not always possible to include CodeGen/BlockGenerators.h. To this end, ValueMapT now lives in ScopHelper.h, which does not have any dependences itself. This patch also replaces uses of different other value map types with ValueMapT.
  llvm-svn: 249606
* Consolidate the different ValueMapTypes we are using (Tobias Grosser, 2015-10-04, 1 file, -2/+2)
  There have been various places where llvm::DenseMap<const llvm::Value *, llvm::Value *> types have been defined, but all types have been expected to be identical. We make this more clear by consolidating the different types and use BlockGenerator::ValueMapT wherever there is a need for types to match BlockGenerator::ValueMapT.
  llvm-svn: 249264
* IslExprBuilder: Use AssertingVH for IdToValueTy (Tobias Grosser, 2015-10-03, 1 file, -4/+4)
  llvm-svn: 249239
* Hand down referenced & globally mapped values to the subfunction (Johannes Doerfert, 2015-10-02, 1 file, -8/+5)
  If a value is globally mapped (IslNodeBuilder::ValueMap) and referenced in the code that will be put into a subfunction, we hand down the new value to the subfunction.

  This patch also removes code that handed down all invariant loads to the subfunction. Instead, only needed invariant loads are given to the subfunction. There are two possible reasons for an invariant load to be handed down:
  1) The invariant load is used in a block that is placed in the subfunction but which is not the parent of the load. In this case, the scalar access that will read the loaded value will cause its base pointer (the preloaded value) to be handed down to the subfunction.
  2) The invariant load is defined and used in a block that is placed in the subfunction.
  With this patch we will hand down the preloaded value to the subfunction, as the invariant load is globally mapped to that value.
  llvm-svn: 249126
* [FIX] Parallel codegen for invariant loads (Johannes Doerfert, 2015-10-01, 1 file, -0/+5)
  Hand down all preloaded values to the parallel subfunction.
  llvm-svn: 249010
* [NFC] Extract materialization of parameters (Johannes Doerfert, 2015-09-30, 1 file, -8/+21)
  llvm-svn: 248882
* [FIX] Use escape logic for invariant loads (Johannes Doerfert, 2015-09-30, 1 file, -7/+18)
  Before, we unconditionally forced all users outside the SCoP to use the preloaded value. However, if the SCoP is not executed due to the runtime checks, we need to use the original value because it might not be invariant in the first place.
  llvm-svn: 248881
* Identify and hoist definitively invariant loads (Johannes Doerfert, 2015-09-29, 1 file, -0/+117)
  As a first step in the direction of assumed invariant loads (loads that are not written in some context), we now detect and hoist definitively invariant loads. These invariant loads will be preloaded in the code generation and used in the optimized version of the SCoP.

  If the load is only conditionally executed, the preloaded version will also only be executed under the same condition, hence we will never access memory that wouldn't have been accessed otherwise. This is also the most distinguishing feature from LICM.

  As hoisting can make statements empty, we will simplify the SCoP and remove empty statements that would otherwise cause artifacts in the code generation.
  Differential Revision: http://reviews.llvm.org/D13194
  llvm-svn: 248861
* Create parallel code in a separate block (Johannes Doerfert, 2015-09-26, 1 file, -0/+8)
  This commit basically reverts r246427 but still solves the issue tackled by that commit. Instead of emitting initialization code at the beginning of the start block, we now generate parallel code in its own block and thereby guarantee separation. This is necessary as we cannot generate code for hoisted loads prior to the start block, but it still needs to be placed prior to everything else.
  llvm-svn: 248674
* Let MemoryAccess remember its purpose (Michael Kruse, 2015-09-25, 1 file, -1/+1)
  There are three possible reasons to add a memory access: for explicit loads and stores, for llvm::Value defs/uses, and to emulate PHI nodes (the latter two called implicit accesses). Previously, MemoryAccess only stored IsPHI. Register accesses could be identified through the isScalar() method if it was not IsPHI. isScalar() determined the number of dimensions of the underlying array, with scalars represented by zero dimensions. For the work on de-LICM, implicit accesses can have more than zero dimensions, making the distinction via isScalar() useless, hence the purpose is now stored explicitly in the MemoryAccess. We replace it by isImplicit() and avoid the term "scalar" for zero-dimensional arrays, as it might be confused with llvm::Values, which are also often referred to as scalars (or alternatively, as registers). No behavioral change intended, under the condition that it was impossible to create explicit accesses to zero-dimensional "arrays".
  llvm-svn: 248616
* Merge TempScopInfo.{cpp|h} into ScopInfo.{cpp|h} (Michael Kruse, 2015-09-10, 1 file, -1/+0)
  This prepares for a series of patches that merges TempScopInfo into ScopInfo to reduce Polly's code complexity. Only ScopInfo.{cpp|h} will be left thereafter. Moving the code of TempScopInfo in one commit makes the main diffs simpler to understand.

  In detail, merging the following classes is planned:
    TempScopInfo into ScopInfo
    TempScop into Scop
    IRAccess into MemoryAccess

  Only moving code, no functional changes intended.
  Differential Revision: http://reviews.llvm.org/D12693
  llvm-svn: 247274
* IslNodeBuilder: Add virtual function to obtain the schedule of an ast node (Tobias Grosser, 2015-09-09, 1 file, -2/+7)
  Not all users of our IslNodeBuilder will attach scheduling information to the AST in the same way IslAstInfo is doing it today. By going through a virtual function when extracting the schedule of an AST node, other users can provide their own functions for extracting scheduling information in case they attach it to the AST nodes in a different way. No functional change for Polly itself intended.
  llvm-svn: 247126
* Add some more documentation and structure to the collection of subtree references (Tobias Grosser, 2015-09-05, 1 file, -24/+63)
  Some of the structures are renamed, subfunctions are introduced to clarify the individual steps, and comments are added describing their functionality.
  llvm-svn: 246929
* IslNodeBuilder: Only obtain the isl_ast_build when needed (Tobias Grosser, 2015-09-05, 1 file, -4/+6)
  In the common case, the access functions are not modified, hence there is no need to obtain the IslAstBuild context at all. This should not only be minimally faster, but it also allows the IslNodeBuilder to work on ASTs that are not annotated with isl_ast_builds, as long as the memory accesses are not modified.
  llvm-svn: 246928
* BlockGenerator: Make GlobalMap a member variable (Tobias Grosser, 2015-09-05, 1 file, -18/+9)
  The GlobalMap variable used in BlockGenerator should always reference the same list throughout the entire code generation, hence we can make it a member variable to avoid passing it around through every function call.

  History: Before we switched to the SCEV-based code generation, the GlobalMap also contained a mapping from old to new induction variables, hence it was different for each ScopStmt, which is why we passed it as a function argument to copyStmt. The new SCEV-based code generation now uses a separate mapping called LTS -> LoopToSCEV that maps each original loop to a new loop iteration variable provided as a SCEVExpr. The GlobalMap is currently mostly used for OpenMP code generation, where references to parameters in the original function need to be rewritten to the locations of these variables after they have been passed to the subfunction.
  Suggested-by: Johannes Doerfert <doerfert@cs.uni-saarland.de>
  llvm-svn: 246920
* OpenMP-codegen: Correctly pass function arguments to subfunctions (Tobias Grosser, 2015-08-31, 1 file, -11/+7)
  Before we only checked if certain instructions can be expanded by us. Now we check any value, including function arguments.
  llvm-svn: 246425
* Add support for scalar dependences to OpenMP code generation (Tobias Grosser, 2015-08-31, 1 file, -7/+17)
  Scalar dependences between scop statements have caused trouble during parallel code generation, as we did not pass on the new stack allocations created for such scalars to the parallel subfunctions. This change now detects all scalar reads/writes in parallel subfunctions, creates the allocas for these scalar objects, passes the resulting memory locations to the subfunctions and ensures that within the subfunction requests for these memory locations will return the rewritten values. Johannes suggested privatizing some of the scalars in the subfunction as a future optimization.
  llvm-svn: 246414
* BlockGenerator: Add the possibility to pass a set of new access functions (Tobias Grosser, 2015-08-27, 1 file, -7/+28)
  This change allows the BlockGenerator to be reused in contexts where we want to provide different/modified isl_ast_expressions, which are not only changed to a different access relation than the original statement, but which may indeed be different for each code-generated instance of the statement.

  We ensure testing of this feature by moving Polly's support for importing changed access functions through a jscop file to use the BlockGenerator's support for generating arbitrary access functions if provided. This commit should not change the behavior of Polly for now. The diff is rather large, but most changes are due to us passing the NewAccesses hash table through functions. This style, even though rather verbose, matches what is done throughout the BlockGenerator with other per-statement properties.
  llvm-svn: 246144
* Only derive number of loop iterations for loops we can actually vectorize (Tobias Grosser, 2015-08-24, 1 file, -0/+27)
  llvm-svn: 245870
* Use marker nodes to annotate the different levels of tiling (Tobias Grosser, 2015-08-23, 1 file, -1/+8)
  Currently, marker nodes are ignored during AST generation, but visible in the -debug-only=polly-ast output.
  llvm-svn: 245809
* Manually check a loop form (Roman Gareev, 2015-08-21, 1 file, -42/+39)
  Add a manual check of the loop form and return a non-negative number of iterations in case of a trivially vectorizable loop.
  llvm-svn: 245680