properly value-typed.
Summary: These were temporary methods used to simplify the transition.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D72548
|
Introduce a set of functions that promote a memref argument of a `gpu.func` to
workgroup memory using memory attribution. The promotion boils down to
additional loops performing the copy from the original argument to the
attributed memory at the beginning of the function, and back at the end of the
function, using all available threads. The loop bounds are specified so as to
adapt to any size of the workgroup. These utilities are intended to compose
with other existing utilities (loop coalescing and tiling) in cases where the
distribution of work across threads is uneven, e.g. copying a 2D memref with
only the threads along the "x" dimension. Similarly, specialization of the
kernel to specific launch sizes should be implemented as a separate pass
combining constant propagation and canonicalization.
Introduce a simple attribute-driven pass to test the promotion transformation,
since we don't have a heuristic at the moment.
Differential revision: https://reviews.llvm.org/D71904
|
memory.
Summary:
This changes the implementation of OpResult to have some of the results be represented inline in Value, via a pointer-int pair of Operation* plus result number, with the rest being trailing objects on the main operation. The full details of the new representation are given in the proposal here:
https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
The only difference between here and the above proposal is that we only steal 2 bits for the Value kind instead of 3. This means that we can only fit 2 results inline instead of 6. This allows other users to steal the final bit for PointerUnion, etc. If necessary, we can always steal this bit back in the future to save more space if 3-6 results are common enough.
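As a rough illustration of the encoding idea (a toy sketch only, not the actual Value/OpResult classes; all names below are hypothetical): an 8-byte-aligned Operation pointer has three zero low bits, and two of them are enough to distinguish the inline results from the out-of-line ones.
```
#include <cassert>
#include <cstdint>

struct alignas(8) Operation {};  // stand-in for the real operation class

// Pack an owner pointer and a 2-bit kind into one word:
// kind 0/1 = inline result #0/#1, kind 2 = results stored out-of-line as
// trailing objects on the operation.
inline uintptr_t encodeResult(Operation *op, unsigned kind) {
  assert(kind < 4 && "only 2 bits are stolen for the kind");
  return reinterpret_cast<uintptr_t>(op) | kind;
}

inline Operation *getOwner(uintptr_t encoded) {
  return reinterpret_cast<Operation *>(encoded & ~uintptr_t(7));
}

inline unsigned getKind(uintptr_t encoded) { return encoded & 3; }
```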
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D72020
|
This change refactors pass options to be more similar to how statistics are modeled. More specifically, the options are specified directly on the pass instead of in a separate options class. (Note that the behavior and specification for pass pipelines remain the same.) This brings about several benefits:
* The specification of options is much simpler
* The round-trip format of a pass can be generated automatically
* This gives a somewhat deeper integration with "configuring" a pass, which we could potentially expose to users in the future.
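For illustration, a pass declaring its options directly might look roughly like the following (a sketch only: the base class, option names, and defaults here are placeholders, not part of this change):
```
#include "mlir/Pass/Pass.h"

namespace {
struct MyPass : public mlir::FunctionPass<MyPass> {
  // Options live directly on the pass, so the textual (round-trip) form of
  // the pass can be generated from them.
  Option<int> unrollFactor{*this, "unroll-factor",
                           llvm::cl::desc("Factor to unroll by"),
                           llvm::cl::init(1)};
  ListOption<int> tileSizes{*this, "tile-sizes",
                            llvm::cl::desc("Tile sizes to use")};

  void runOnFunction() override { /* ... */ }
};
} // namespace
```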
PiperOrigin-RevId: 286953824
|
ValuePtr was a temporary typedef during the transition to a value-typed Value.
PiperOrigin-RevId: 286945714
|
PiperOrigin-RevId: 286906740
|
Value being value-typed.
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.
PiperOrigin-RevId: 286844725
|
Fix the usage of the map size when appending to the map with [].
PiperOrigin-RevId: 284985916
|
PiperOrigin-RevId: 284979684
|
Closes tensorflow/mlir#255
PiperOrigin-RevId: 284935454
|
vector unroll size.
PiperOrigin-RevId: 284880592
|
Centralize all patterns that test Linalg transforms in a single pass.
PiperOrigin-RevId: 284835938
|
This reorganizes the vector transformations to be more easily testable as patterns and more easily composable into fused passes in the future.
PiperOrigin-RevId: 284817474
|
This patch closes issue tensorflow/mlir#271.
It adds an optional permutation map to declarative tiling transformations.
The map is expressed as a list of integers.
Closes tensorflow/mlir#288
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/288 from tetuante:issue271 2df2938d6a1f01b3bc404ded08dea2dd1e10b588
PiperOrigin-RevId: 284064151
|
This CL refactors some of the MLIR vector dependencies to allow decoupling VectorOps, vector analysis, vector transformations and vector conversions from each other.
This makes the system more modular and allows extracting VectorToVector into VectorTransforms that do not depend on vector conversions.
This refactoring exhibited a bunch of cyclic library dependencies that have been cleaned up.
PiperOrigin-RevId: 283660308
|
This will make it easier to scale out test patterns and build specific passes that do not interfere with independent testing.
PiperOrigin-RevId: 281736335
|
This CL uses the pattern rewrite infrastructure to implement a simple VectorOps -> VectorOps legalization strategy to unroll coarse-grained vector operations into finer grained ones.
The transformation is written using local pattern rewrites to allow composition with other rewrites. It proceeds by iteratively introducing fake cast ops and then canonicalizing or lowering them away where appropriate.
This is an example of writing transformations as compositions of local pattern rewrites that should enable us to make them significantly more declarative.
PiperOrigin-RevId: 281555100
|
This is step 1/n in refactoring infrastructure along the Vector dialect to make it ready for retargetability and composable progressive lowering.
PiperOrigin-RevId: 280529784
|
This allows them to be used on non-function operations, or even on other function-like operations. The algorithms are already generic, so this is simply changing the derived pass type. The majority of this change is just ensuring that the nesting of these passes remains the same, as the pass manager won't auto-nest them anymore.
PiperOrigin-RevId: 276573038
|
Closes tensorflow/mlir#175
PiperOrigin-RevId: 275726876
|
This allows individual passes to define options structs, and for these options to be parsed per instance of the pass while building the pass pipeline from the textual specification provided on the command line.
The user can specify these per-instance pipeline options like so:
```
struct MyPassOptions : public PassOptions<MyPassOptions> {
  Option<int> exampleOption{*this, "flag-name", llvm::cl::desc("...")};
  List<int> exampleListOption{*this, "list-flag-name", llvm::cl::desc("...")};
};
static PassRegistration<MyPass, MyPassOptions> pass("my-pass", "description");
```
PiperOrigin-RevId: 273650140
|
PiperOrigin-RevId: 273296399
|
See RFC: https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/xE2IzfhE3Wg.
An opaque location stores two pointers: one points to some data structure that is external to MLIR, and the other is unique for each type and represents the type id of that data structure. OpaqueLoc also stores an optional fallback location that can be used if the first one is not suitable.
OpaqueLoc is managed similarly to FileLineColLoc: it is passed around by MLIR transformations and can be used in compound locations like CallSiteLoc.
PiperOrigin-RevId: 273266510
|
MemRefType::getDynamicStrideOrOffset() method - NFC
This fixes global ODR-use issues, some of which manifest in Parser.cpp.
Fixes tensorflow/mlir#167.
PiperOrigin-RevId: 272886347
|
offset determination.
This also adds a previously missing test, which uncovered a bug in the conditional for testing whether an offset is dynamic or not.
PiperOrigin-RevId: 272505798
|
This CL finishes the implementation of the Linalg + Affine type unification of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
As a consequence, the !linalg.view type, linalg::DimOp, linalg::LoadOp and linalg::StoreOp can now disappear and Linalg can use standard types everywhere.
PiperOrigin-RevId: 272187165
|
This CL finishes the implementation of the lowering part of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Strided memrefs correspond conceptually to the following templated C++ struct:
```
template <typename Elem, size_t Rank>
struct {
Elem *ptr;
int64_t offset;
int64_t sizes[Rank];
int64_t strides[Rank];
};
```
The linearization procedure for address calculation for strided memrefs is the same as for linalg views:
`base_offset + SUM_i index_i * stride_i`.
The following CL will unify Linalg and Standard by removing !linalg.view in favor of strided memrefs.
PiperOrigin-RevId: 272033399
|
MemRefType::getStrides uses AffineExpr::walk, which operates in post-order from the leaves. In order to compute strides properly, it needs to escape on terminal nodes and analyze binary ops only. This did not work for an AffineExpr that consists of a single term (i.e. without a binary op).
This CL fixes the corner case and adds relevant tests.
PiperOrigin-RevId: 271975746
|
Bring back CHECK-LABEL post
PiperOrigin-RevId: 271166428
|
Call llvm::outs().flush() to make sure we don't mix streams.
Remove CHECK-LABEL to avoid assuming the relative order
between the additional info and the output IR.
PiperOrigin-RevId: 271131100
|
Using the two call interfaces, CallOpInterface and CallableOpInterface, this change adds support for an initial multi-level CallGraph. This call graph builds a set of nodes for each callable region, and connects them via edges. An edge may be any of the following types:
* Abstract
- An edge not produced by a call operation, used for connecting to internal nodes from external nodes.
* Call
- A call edge is an edge defined via a call-like operation.
* Child
- This is an artificial edge connecting nested callgraph nodes.
This callgraph will be used, and improved upon, to begin supporting more interesting interprocedural analyses and transformations. In a followup, it will be used to support more complex inlining.
PiperOrigin-RevId: 270724968
|
TestMemRefStrideCalculation.cpp was missing
PiperOrigin-RevId: 270304543
|
The RFC for unifying Linalg and Affine compilation passes into an end-to-end flow discusses the notion of a strided MemRef (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
This CL adds helper functions to extract strides from the layout map which in turn will allow converting between a strided form of the type and a layout map.
For now strides are only computed on a single affine map with a single result (i.e. the closed subset of linearization maps that are compatible with striding semantics). This restriction will be reevaluated / lifted in the future based on concrete use cases.
PiperOrigin-RevId: 270284686
|
These directives were temporary during the generalization of FunctionPass/ModulePass to OpPass.
PiperOrigin-RevId: 268970259
|
This defines a set of initial utilities for inlining a region (or a FuncOp), and defines a simple inliner pass for testing purposes.
A new dialect interface is defined, DialectInlinerInterface, that allows for dialects to override hooks controlling inlining legality. The interface currently provides the following hooks, but these are just preliminary and should be changed/added to/modified as necessary:
* isLegalToInline
- Determine if a region can be inlined into one of this dialect, *or* if an operation of this dialect can be inlined into a given region.
* shouldAnalyzeRecursively
- Determine if an operation with regions should be analyzed recursively for legality. This allows for child operations to be closed off from the legality checks for operations like lambdas.
* handleTerminator
- Process a terminator that has been inlined.
This CL adds support for inlining StandardOps, but other dialects will be added in followups as necessary.
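As an illustration, a dialect could opt in roughly as follows (a sketch: the header path and hook signature are assumptions based on the description above, and MyDialectInlinerInterface is a hypothetical name):
```
#include "mlir/Transforms/InliningUtils.h"

using namespace mlir;

struct MyDialectInlinerInterface : public DialectInlinerInterface {
  using DialectInlinerInterface::DialectInlinerInterface;

  /// Consider every operation of this dialect legal to inline anywhere.
  bool isLegalToInline(Operation *op, Region *dest,
                       BlockAndValueMapping &valueMapping) const final {
    return true;
  }
};

// Typically registered from the dialect constructor, e.g.:
//   addInterfaces<MyDialectInlinerInterface>();
```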
PiperOrigin-RevId: 267426759
|
This interface will allow for providing hooks to interoperate with operation folding. The first hook, 'shouldMaterializeInto', will allow for controlling which region to insert materialized constants into. The folder will generally materialize constants into the top-level isolated region; this hook allows for materializing into a lower-level ancestor region if it is more profitable/correct.
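For illustration, a dialect could steer materialization like this (a sketch only: the DialectFoldInterface base name and MyRegionHoldingOp are assumptions/hypothetical, not taken from this change):
```
// Hypothetical sketch; class and op names are placeholders.
struct MyDialectFoldInterface : public DialectFoldInterface {
  using DialectFoldInterface::DialectFoldInterface;

  /// Materialize folded constants directly into regions held by this
  /// dialect's ops instead of hoisting them to the top-level region.
  bool shouldMaterializeInto(Region *region) const final {
    return isa<MyRegionHoldingOp>(region->getParentOp());
  }
};
```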
PiperOrigin-RevId: 266702972
|
This change refactors and cleans up the implementation of the operation walk methods. After this refactoring, the explicit template parameter for the operation type is no longer needed for explicit op walks. For example:
op->walk<AffineForOp>([](AffineForOp op) { ... });
is now accomplished via:
op->walk([](AffineForOp op) { ... });
PiperOrigin-RevId: 266209552
|
PiperOrigin-RevId: 264482571
|
PiperOrigin-RevId: 264193915
|
Switch to the C++14 standard method, as llvm::make_unique has been removed
(https://reviews.llvm.org/D66259). Also mark some targets as C++14 to ease the
next integrates.
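Concretely, call sites change along these lines (Foo and arg are placeholders):
```
// before
auto p = llvm::make_unique<Foo>(arg);
// after (C++14)
auto p = std::make_unique<Foo>(arg);
```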
PiperOrigin-RevId: 263953918
|
Since raw pointers are always passed around for IR constructs without
implying any ownership transfer, it can be error prone to have ownership
transferred implicitly in the same way.
For example this code can seem harmless:
Pass *pass = ....
pm.addPass(pass);
pm.addPass(pass);
pm.run(module);
but the pass manager would take ownership of the same object twice and
eventually delete it twice.
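With ownership made explicit (assuming the pass manager now takes std::unique_ptr<Pass>, as this change implies), the same fragment would have to be written along these lines, which makes the duplicate registration obvious:
```
pm.addPass(std::make_unique<MyPass>());
pm.addPass(std::make_unique<MyPass>());  // a second, distinct pass instance
pm.run(module);
```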
PiperOrigin-RevId: 263053082
|
There are currently several different terms used to refer to a parent IR unit in 'get' methods: getParent/getEnclosing/getContaining. This CL standardizes all of these methods to use 'getParent*'.
PiperOrigin-RevId: 262680287
|
Remove hardcoded SSA names and make use of CHECK-LABEL directives.
PiperOrigin-RevId: 259767803
|
This CL moves LoopParametricTiling into test/lib as a pass for purely testing purposes.
PiperOrigin-RevId: 259300264
|
This CL introduces a simple loop utility function which rewrites the bounds and step of a loop so that it becomes mappable onto a regular grid of processors whose identifiers are given by SSA values.
A corresponding unit test is added.
For example, using CUDA terminology, and assuming a 2-d grid with processorIds = [blockIdx.x, threadIdx.x] and numProcessors = [gridDim.x, blockDim.x], the loop:
```
loop.for %i = %lb to %ub step %step {
...
}
```
is rewritten into a version resembling the following pseudo-IR:
```
loop.for %i = %lb + threadIdx.x + blockIdx.x * blockDim.x to %ub
step %gridDim.x * blockDim.x {
...
}
```
PiperOrigin-RevId: 258945942
|
PiperOrigin-RevId: 257293379
|
Move the data members out of Function and into a new impl storage class 'FunctionStorage'. This allows Function to become value typed, which will greatly simplify the transition of Function to FuncOp (given that FuncOp is also value typed).
PiperOrigin-RevId: 255983022
|
into the mlir namespace.
Now that Locations are attributes, they have direct access to the MLIR context. This allows for simplifying error emission by removing unnecessary context lookups.
PiperOrigin-RevId: 255112791
|
materializing constants.
The OperationFolder currently just inserts into the entry block of a Function, but regions may be isolated from above, i.e. explicit capture only, and blindly inserting constants may break the invariants of these regions.
PiperOrigin-RevId: 254987796
|
Instead put their impl in test/lib and link them into mlir-test-opt
PiperOrigin-RevId: 254837439