The assert assumed that the escaped character could not appear at the end of the string.
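A minimal sketch of the failure mode, with hypothetical names (not the actual utility that was fixed):
```
#include "llvm/ADT/StringRef.h"

// Hypothetical scan over an escaped string: code of this shape asserted that
// a character always follows '\', which is false for a trailing escape.
static bool endsMidEscape(llvm::StringRef str) {
  bool pendingEscape = false;
  for (char c : str)
    pendingEscape = !pendingEscape && c == '\\';
  // Previously: assert(!pendingEscape); a trailing '\' is now tolerated.
  return pendingEscape;
}
```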
Fixes tensorflow/mlir#117
PiperOrigin-RevId: 266975471
Remove unused variables and attributes from BaseViewConversionHelper
in mlir/lib/Dialect/Linalg/Transforms/LowerToLLVMDialect.cpp
Closes tensorflow/mlir#116
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/116 from alexst07:fix-warnings 5f638e4677492cf71a9cc040eeb6b57427d32e06
PiperOrigin-RevId: 266972082
Some of the operations in the LLVM dialect are required to model the LLVM IR in
MLIR; for example, "constant" operations are needed to declare a constant value,
since MLIR, unlike LLVM, does not support immediate values as operands. To
avoid confusion with actual LLVM operations, we prefix such auxiliary
operations with "mlir.".
PiperOrigin-RevId: 266942838
PiperOrigin-RevId: 266863802
The SelectOp models the semantics of OpSelect from the SPIR-V spec.
PiperOrigin-RevId: 266849559
there is at least one template pattern type
Also remove the other insert overload taking a pattern pointer, as there are no existing users nor any known potential use case.
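A hedged usage sketch of the variadic overload (the pattern and op names are hypothetical):
```
#include "mlir/IR/PatternMatch.h"

// Hypothetical pattern standing in for any RewritePattern subclass.
struct MyPattern : public mlir::RewritePattern {
  MyPattern(mlir::MLIRContext *context)
      : RewritePattern("hypothetical.op", /*benefit=*/1, context) {}
};

void populatePatterns(mlir::MLIRContext *context,
                      mlir::OwningRewritePatternList &patterns) {
  // At least one template pattern type is now required; each listed pattern
  // is constructed with the trailing arguments.
  patterns.insert<MyPattern>(context);
}
```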
PiperOrigin-RevId: 266842920
This change generalizes the structure of the pass manager to allow arbitrary nesting of pass managers for other operations, at any level. The only user-visible change to existing code is the fact that a PassManager must now provide an MLIRContext on construction. A new class `OpPassManager` has been added that represents a pass manager on a specific operation type. `PassManager` will remain the top-level entry point into the pipeline, with OpPassManagers being nested underneath. OpPassManagers will still be implicitly nested if the operation type on the pass differs from the pass manager. To explicitly build a pipeline, the 'nest' methods on OpPassManager may be used:
```
// Pass manager for the top-level module.
PassManager pm(ctx);

// Nest a pipeline operating on FuncOp.
OpPassManager &fpm = pm.nest<FuncOp>();
fpm.addPass(...);

// Nest a pipeline under the FuncOp pipeline that operates on spirv::ModuleOp.
OpPassManager &spvModulePM = pm.nest<spirv::ModuleOp>();

// Nest a pipeline on FuncOps inside of the spirv::ModuleOp.
OpPassManager &spvFuncPM = spvModulePM.nest<FuncOp>();
```
To help accomplish this, a new general OperationPass is added that operates on opaque Operations. This pass can be inserted in a pass manager of any type to operate on any operation opaquely. An example of this opaque OperationPass is a VerifierPass, which simply runs the verifier opaquely on the current operation:
```
/// Pass to verify an operation and signal failure if necessary.
class VerifierPass : public OperationPass<VerifierPass> {
  void runOnOperation() override {
    Operation *op = getOperation();
    if (failed(verify(op)))
      signalPassFailure();
    markAllAnalysesPreserved();
  }
};
```
PiperOrigin-RevId: 266840344
This interface will allow for providing hooks to interop with operation folding. The first hook, 'shouldMaterializeInto', will allow for controlling which region materialized constants are inserted into. The folder will generally materialize constants into the top-level isolated region; this hook allows for materializing into a lower-level ancestor region when that is more profitable/correct.
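A hedged sketch of a dialect interface providing this hook (base-class and accessor names assumed):
```
#include "mlir/Transforms/FoldUtils.h"

// Steer the folder: permit materializing constants directly into regions of
// a hypothetical "my.kernel" op instead of the top-level isolated region.
struct MyFolderInterface : public mlir::OpFolderDialectInterface {
  using OpFolderDialectInterface::OpFolderDialectInterface;

  bool shouldMaterializeInto(mlir::Region *region) const override {
    return region->getParentOp()->getName().getStringRef() == "my.kernel";
  }
};
```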
PiperOrigin-RevId: 266702972
pointer (NFC)
This is a convenient utility around the existing `getUsedValuesDefinedAbove()`
that takes two regions.
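A hedged usage sketch of the single-region convenience form (header path assumed):
```
#include "llvm/ADT/SetVector.h"
#include "mlir/Transforms/RegionUtils.h"

// Collect every value used inside `region` but defined above it; this is
// equivalent to getUsedValuesDefinedAbove(region, region, values).
void collectCapturedValues(mlir::Region &region,
                           llvm::SetVector<mlir::Value *> &values) {
  mlir::getUsedValuesDefinedAbove(region, values);
}
```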
PiperOrigin-RevId: 266686854
underlying `Operation` (NFC)
PiperOrigin-RevId: 266685852
- the list of passes run by mlir-cpu-runner included -lower-affine and
-lower-to-llvm but was missing -lower-to-cfg (because -lower-affine at
some point used to lower straight to CFG); add -lower-to-cfg in
between. IR with affine ops can now be run by mlir-cpu-runner.
- update -lower-to-cfg to be consistent with other passes (create*Pass methods
were changed to return unique ptrs, but -lower-to-cfg appears to have been
missed).
- mlir-cpu-runner was unable to parse the custom form of affine ops; fix
  link options
- drop unnecessary run options from test/mlir-cpu-runner/simple.mlir
(none of the test cases had loops)
- -convert-to-llvmir was changed to -lower-to-llvm at some point, but the
  create pass method name wasn't updated (this pass converts/lowers to the LLVM
  dialect as opposed to LLVM IR). Fix this.
(If we prefer "convert", the cmd-line options could be changed to
"-convert-to-llvm/cfg" then.)
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#115
PiperOrigin-RevId: 266666909
PiperOrigin-RevId: 266583374
pointer in general
PiperOrigin-RevId: 266583029
Use the existing SPV_LogicalOp specification to add the floating-point
comparison operations (both ordered and unordered versions).
To make it easier to import the op-definitions automatically, modify
the dialect generation script to update the different .td files based
on whether the operation is an arithmetic op, logical op, etc. Also
allow specification of multiple opcodes with define_inst.sh.
Since this reuses the SPV_LogicalOp framework, no tests specific to
the floating point comparison ops are added with this CL.
PiperOrigin-RevId: 266561634
PiperOrigin-RevId: 266561495
- more highlighting: numbers, elemental types inside shaped types
- add some more keywords
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#110
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/110 from bondhugula:vim 029777db0ecb95bfc6453c0869af1c233d84d521
PiperOrigin-RevId: 266487768
AffineForOps themselves are pure and can be removed if they contain no internal operations.
PiperOrigin-RevId: 266481293
This maintains consistency with other *AttrBase/Attr splits.
PiperOrigin-RevId: 266469869
This commit adds `TensorRankOf<types, typeNames, ranks>` to specify ranked
tensor types with the specified types and ranks. For example,
`TensorRankOf<[I32, F32], ["i32", "f32"], [0, 1]>` matches `tensor<i32>`,
`tensor<?xi32>`, `tensor<f32>`, or `tensor<?xf32>`.
PiperOrigin-RevId: 266461256
PiperOrigin-RevId: 266452719
This pass class generalizes the current functionality between FunctionPass and ModulePass, and allows for operating on any operation type. The pass manager currently only supports OpPasses operating on FuncOp and ModuleOp, but this restriction will be relaxed in follow-up changes. A utility class OpPassBase<OpT> allows for generically referring to operation specific passes: e.g. FunctionPassBase == OpPassBase<FuncOp>.
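A hedged sketch of an operation-specific pass under the generalized scheme (the class-template shape is assumed from the VerifierPass example earlier in this log; the pass itself is hypothetical):
```
#include "mlir/IR/Function.h"
#include "mlir/Pass/Pass.h"

// Hypothetical pass pinned to FuncOp; it can be referred to generically as
// an OpPassBase<FuncOp> (== FunctionPassBase).
struct MyFuncPass : public mlir::OperationPass<MyFuncPass, mlir::FuncOp> {
  void runOnOperation() override {
    mlir::FuncOp func = getOperation();
    (void)func; // ... transform the function ...
  }
};
```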
PiperOrigin-RevId: 266442239
This commit introduces the bits to be able to dump JIT-compiled
objects to external files by passing an object cache to OrcJit.
The new functionality is tested in mlir-cpu-runner under the flag
`dump-object-file`.
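A hedged sketch of the mechanism: an llvm::ObjectCache whose notification hook writes each JIT-compiled object to disk (class name, file name, and error handling here are illustrative, not the actual implementation):
```
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/raw_ostream.h"

// Dump-only cache: write every compiled object out, never serve one back.
class DumpObjectCache : public llvm::ObjectCache {
public:
  void notifyObjectCompiled(const llvm::Module *m,
                            llvm::MemoryBufferRef objBuffer) override {
    std::error_code ec;
    llvm::raw_fd_ostream os("dumped-object.o", ec, llvm::sys::fs::F_None);
    if (!ec)
      os << objBuffer.getBuffer();
  }

  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *m) override {
    return nullptr; // Always recompile; this cache only dumps.
  }
};
```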
Closes tensorflow/mlir#95
PiperOrigin-RevId: 266439265
Similar to the enum generator, add a generator for structured data. This provides a Dictionary that stores a fixed set of values and guarantees the values are valid. It is intended to store a fixed number of values, each addressed by a given name.
PiperOrigin-RevId: 266437460
This is done by providing a walk callback that returns a WalkResult. This result is either `advance` or `interrupt`. `advance` means that the walk should continue, whereas `interrupt` signals that the walk should stop immediately. An example is shown below:
```
auto result = op->walk([](Operation *op) {
  if (some_invariant)
    return WalkResult::interrupt();
  return WalkResult::advance();
});

if (result.wasInterrupted())
  ...;
```
PiperOrigin-RevId: 266436700
This CL just covers the op definition, its parsing, printing,
and verification. (De)serialization is to be implemented
in a subsequent CL.
PiperOrigin-RevId: 266431077
This avoids potential memory leaks from misuse of the API.
PiperOrigin-RevId: 266305750
This change refactors and cleans up the implementation of the operation walk methods. After this refactoring, the explicit template parameter for the operation type is no longer needed for the explicit op walks. For example:
```
op->walk<AffineForOp>([](AffineForOp op) { ... });
```
is now accomplished via:
```
op->walk([](AffineForOp op) { ... });
```
PiperOrigin-RevId: 266209552
PiperOrigin-RevId: 266198057
We should consider both signed and narrow_range cases.
PiperOrigin-RevId: 266167366
- extend canonicalizeMapAndOperands to propagate constant operands into
  the map's expressions (and thus drop those operands).
- canonicalizeMapAndOperands previously only dropped duplicate and
  unused operands; operands that were constants were
  retained.
This change makes the IR maps/expressions generated by various
utilities/passes even simpler; it also makes some of the test checks more
accurate and simpler -- e.g., `0` instead of `symbol(%{{.*}})`.
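A hedged usage sketch (header path assumed; the constant folding is the new behavior):
```
#include "llvm/ADT/SmallVector.h"
#include "mlir/Dialect/AffineOps/AffineOps.h"

// If `map` is (d0, s0) -> (d0 + s0) and operands = {%i, %c} where %c is a
// constant 42, canonicalization now folds the map to (d0) -> (d0 + 42) and
// drops the constant operand.
void simplifyMap(mlir::AffineMap &map,
                 llvm::SmallVectorImpl<mlir::Value *> &operands) {
  mlir::canonicalizeMapAndOperands(&map, &operands);
}
```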
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#107
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/107 from bondhugula:canonicalize-maps c889a51486d14fbf7db489f224f881e7e1ff7d72
PiperOrigin-RevId: 266085289
- fix operand mapping while cloning sub-blocks to jam -- it was incorrect
  for imperfect nests where a def/use crossed sub-blocks
- strengthen/generalize the first test case to cover the previously
  missed scenario
- clean up the other cases while on this.
Previously, unroll-jamming the following nest
```
affine.for %arg0 = 0 to 2048 {
  %0 = alloc() : memref<512x10xf32>
  affine.for %arg1 = 0 to 10 {
    %1 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
  }
  dealloc %0 : memref<512x10xf32>
}
```
would yield
```
%0 = alloc() : memref<512x10xf32>
%1 = affine.apply #map0(%arg0)
%2 = alloc() : memref<512x10xf32>
affine.for %arg1 = 0 to 10 {
  %4 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
  %5 = affine.apply #map0(%arg0)
  %6 = affine.load %0[%5, %arg1] : memref<512x10xf32>
}
dealloc %0 : memref<512x10xf32>
%3 = affine.apply #map0(%arg0)
dealloc %0 : memref<512x10xf32>
```
instead of
```
module {
  affine.for %arg0 = 0 to 2048 step 2 {
    %0 = alloc() : memref<512x10xf32>
    %1 = affine.apply #map0(%arg0)
    %2 = alloc() : memref<512x10xf32>
    affine.for %arg1 = 0 to 10 {
      %4 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
      %5 = affine.apply #map0(%arg0)
      %6 = affine.load %2[%5, %arg1] : memref<512x10xf32>
    }
    dealloc %0 : memref<512x10xf32>
    %3 = affine.apply #map0(%arg0)
    dealloc %2 : memref<512x10xf32>
  }
}
```
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#98
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/98 from bondhugula:ujam ddbc853f69b5608b3e8ff9b5ac1f6a5a0bb315a4
PiperOrigin-RevId: 266073460
PiperOrigin-RevId: 266073204
- some of it has been adapted from LLVM's vim utils
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#90
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/90 from bondhugula:vim 22b1c958818c4b09de0ec8e1d7a4893171a03dbf
PiperOrigin-RevId: 266071752
PiperOrigin-RevId: 266022088
nesting.
The pass manager is moving towards being able to run on operations at arbitrary levels of nesting. An operation may have both parent and child operations, and the AnalysisManager must be able to handle this generalization. The AnalysisManager class now contains generic 'getCachedParentAnalysis' and 'getChildAnalysis'/'getCachedChildAnalysis' functions to query analyses on parent/child operations. This removes the hard-coded nesting relationship between Module/Function.
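A hedged sketch of the generic queries (the analysis type is hypothetical and the exact signatures are assumed):
```
#include "mlir/Pass/AnalysisManager.h"

// Hypothetical analysis; analyses are constructed from the operation.
struct MyAnalysis {
  MyAnalysis(mlir::Operation *op) {}
};

void queryHierarchy(mlir::AnalysisManager am, mlir::Operation *nestedOp) {
  // Fetch a cached analysis from an ancestor operation, if present.
  auto cached = am.getCachedParentAnalysis<MyAnalysis>(nestedOp->getParentOp());

  // Compute (or fetch) an analysis on a nested operation.
  MyAnalysis &child = am.getChildAnalysis<MyAnalysis>(nestedOp);
  (void)cached;
  (void)child;
}
```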
PiperOrigin-RevId: 266003636
Tweak to the pretty type parser to recognize that `->` is a special token that
shouldn't be split into two characters. This change allows dialect
types to wrap function types as in `!my.ptr_type<(i32) -> i32>`.
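A hedged sketch of the tokenizer tweak (function and variable names hypothetical; assumes the usual NUL-terminated lexer buffer):
```
// While scanning a pretty type body, consume "->" as a single unit rather
// than splitting it at '-', so wrapped function types survive parsing.
static const char *advanceToken(const char *ptr) {
  if (ptr[0] == '-' && ptr[1] == '>')
    return ptr + 2; // one "->" token
  return ptr + 1;   // ordinary single character
}
```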
Closes tensorflow/mlir#105
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/105 from schweitzpgi:parse-arrow 8b2d768053f419daae5a1a864121a44c4319acbe
PiperOrigin-RevId: 265986240
This change adds definitions, parsing and verification for both ops.
PiperOrigin-RevId: 265954051
Instead of lowering the program in two steps (Standard->LLVM followed
by GPU->NVVM), leading to invalid IR in between, the runner now uses
one pattern-based rewrite step to go directly from Standard+GPU to
LLVM+NVVM.
PiperOrigin-RevId: 265861934
Refactor replaceAllMemRefUsesWith to split it into two methods: the new
method does the replacement on a single op, and is used by the existing
one.
- make the methods return LogicalResult instead of bool
- Earlier, when replacement failed (due to non-dereferencing uses of the
  memref), the set of ops that had already been processed would have
  been replaced, leaving the IR in an inconsistent state. Now, a
  pass is made over all ops to first check for non-dereferencing
  uses, and then replacement is performed. No test cases were affected
  because all clients of this method were first checking for
  non-dereferencing uses before calling this method (for other reasons).
  This isn't true for a use case in another upcoming PR (scalar
  replacement); clients can now bail out with consistent IR on failure
  of replaceAllMemRefUsesWith, as sketched below. Add a test case.
- multiple dereferencing uses of the same memref in a single op are
  possible (we have no such use cases/scenarios), and this has always
  remained unsupported. Add an assertion for this.
- minor fix to another pipeline-data-transfer test case.
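A hedged sketch of the LogicalResult-returning API in use (default arguments elided; header path assumed):
```
#include "mlir/Transforms/Utils.h"

// Fails, without having modified the IR, if a non-dereferencing use of the
// old memref is found, so callers can bail out with consistent IR.
mlir::LogicalResult replaceBuffer(mlir::Value *oldMemRef,
                                  mlir::Value *newMemRef) {
  return mlir::replaceAllMemRefUsesWith(oldMemRef, newMemRef);
}
```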
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#87
PiperOrigin-RevId: 265808183
The code and documentation for this chapter of the tutorial have been updated to follow the new flow. The toy 'array' type has been replaced by usages of the MLIR tensor type. The code has also been cleaned up and modernized.
Closes tensorflow/mlir#101
PiperOrigin-RevId: 265744086
block-wide reduce.
PiperOrigin-RevId: 265720077
Each basic block in SPIR-V must start with an OpLabel instruction.
We don't support control flow yet, so this CL just makes sure that
the entry block follows this rule and is valid.
PiperOrigin-RevId: 265718841
To support the conversion of a simple load-compute-store kernel from the GPU
dialect to the SPIR-V dialect, conversion of operations like
"gpu.block_dim" and "gpu.thread_id", which let threads query the launch
configuration, is needed. In SPIR-V these are specified as global
variables with builtin attributes. This CL adds support for specifying
builtin variables in the SPIR-V conversion framework. This is used to
convert the relevant operations from the GPU dialect to the SPIR-V dialect.
Also add support for conversion of load/store operations in the Standard
dialect to the SPIR-V dialect.
To simplify the conversion, add a method to build a spv.AccessChain
operation that automatically determines the return type based on the
base pointer type and the indices provided, as sketched below.
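A hedged sketch of the convenience builder (exact create signature assumed):
```
#include "mlir/Dialect/SPIRV/SPIRVOps.h"
#include "mlir/IR/Builders.h"

// The result pointer type is deduced from `basePtr`'s type and `indices`,
// so the caller does not have to spell it out.
mlir::Value *buildAccessChain(mlir::OpBuilder &builder, mlir::Location loc,
                              mlir::Value *basePtr,
                              llvm::ArrayRef<mlir::Value *> indices) {
  return builder.create<mlir::spirv::AccessChainOp>(loc, basePtr, indices)
      .getResult();
}
```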
PiperOrigin-RevId: 265718525
Add Block decoration for top-level spv.struct.
Closes tensorflow/mlir#102
PiperOrigin-RevId: 265716241
The context can easily be recovered from the Location in these situations.
PiperOrigin-RevId: 265578574
Closes tensorflow/mlir#99
PiperOrigin-RevId: 265538731
The context can be recovered by other means in these methods and doesn't need to be passed explicitly.
PiperOrigin-RevId: 265532956
PiperOrigin-RevId: 265485862
This fixes a bug where ops with inner ops could be folded while the inner ops were still being visited.
PiperOrigin-RevId: 265475780
PiperOrigin-RevId: 265190168