path: root/mlir
Commit message [Author, Age, Files, Lines]
...
* Add the initial inlining infrastructure. [River Riddle, 2019-09-05, 12 files, -2/+859]
  This defines a set of initial utilities for inlining a region (or a FuncOp), and defines a simple inliner pass for testing purposes. A new dialect interface is defined, DialectInlinerInterface, that allows dialects to override hooks controlling inlining legality. The interface currently provides the following hooks, but these are just preliminary and should be changed/added to/modified as necessary:
  * isLegalToInline - Determine if a region can be inlined into one of this dialect, *or* if an operation of this dialect can be inlined into a given region.
  * shouldAnalyzeRecursively - Determine if an operation with regions should be analyzed recursively for legality. This allows for child operations to be closed off from the legality checks for operations like lambdas.
  * handleTerminator - Process a terminator that has been inlined.
  This CL adds support for inlining StandardOps, but other dialects will be added in followups as necessary.
  PiperOrigin-RevId: 267426759
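  As an illustration of the hooks described in the entry above, here is a minimal sketch of a dialect-specific implementation. The hook names come from the commit message; the exact signatures, header path, and example policies are assumptions and may not match the 2019-era API precisely.
  ```cpp
  #include "mlir/Transforms/InliningUtils.h"

  using namespace mlir;

  /// Hypothetical inliner interface for some dialect; illustrative only.
  struct MyDialectInlinerInterface : public DialectInlinerInterface {
    using DialectInlinerInterface::DialectInlinerInterface;

    /// Assume every op of this dialect may be inlined into any region.
    bool isLegalToInline(Operation *op, Region *dest,
                         BlockAndValueMapping &valueMapping) const final {
      return true;
    }

    /// Skip recursive legality analysis for isolated, lambda-like ops.
    bool shouldAnalyzeRecursively(Operation *op) const final {
      return !op->isKnownIsolatedFromAbove();
    }

    /// When a terminator is dropped during inlining, forward its operands to
    /// the values that stood in for the callee's results.
    void handleTerminator(Operation *op,
                          ArrayRef<Value *> valuesToReplace) const final {
      for (unsigned i = 0, e = valuesToReplace.size(); i != e; ++i)
        valuesToReplace[i]->replaceAllUsesWith(op->getOperand(i));
    }
  };
  ```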
* Make GPU kernel outlining test independent of value names. [Stephan Herhut, 2019-09-05, 1 file, -24/+36]
  PiperOrigin-RevId: 267323604
* Generalize I32ElementsAttr definition and introduce I64ElementsAttr [Smit Hinsu, 2019-09-04, 3 files, -14/+38]
  Also, fix constBuilderCall to return an attribute of the storage class DenseIntElementsAttr.
  PiperOrigin-RevId: 267305813
* Use transform function on llvm::Module in the ExecutionEngine [Nicolas Vasilache, 2019-09-04, 1 file, -0/+3]
  The refactoring of ExecutionEngine dropped the usage of the irTransform function used to pass -O3 and other options to LLVM. As a consequence, the proper optimizations do not kick in in LLVM-land.
  This CL makes use of the transform function and allows producing avx512 instructions, on an internal example, when using:
  `mlir-cpu-runner -dump-object-file=1 -object-filename=foo.o`
  combined with `objdump -D foo.o`. Assembly produced resembles:
  ```
  2b2e: 62 72 7d 48 18 04 0e    vbroadcastss (%rsi,%rcx,1),%zmm8
  2b35: 62 71 7c 48 28 ce       vmovaps %zmm6,%zmm9
  2b3b: 62 72 3d 48 a8 c9       vfmadd213ps %zmm1,%zmm8,%zmm9
  2b41: 62 f1 7c 48 28 cf       vmovaps %zmm7,%zmm1
  2b47: 62 f2 3d 48 a8 c8       vfmadd213ps %zmm0,%zmm8,%zmm1
  2b4d: 62 f2 7d 48 18 44 0e    vbroadcastss 0x4(%rsi,%rcx,1),%zmm0
  2b54: 01
  2b55: 62 71 7c 48 28 c6       vmovaps %zmm6,%zmm8
  2b5b: 62 72 7d 48 a8 c3       vfmadd213ps %zmm3,%zmm0,%zmm8
  2b61: 62 f1 7c 48 28 df       vmovaps %zmm7,%zmm3
  2b67: 62 f2 7d 48 a8 da       vfmadd213ps %zmm2,%zmm0,%zmm3
  2b6d: 62 f2 7d 48 18 44 0e    vbroadcastss 0x8(%rsi,%rcx,1),%zmm0
  2b74: 02
  2b75: 62 f2 7d 48 a8 f5       vfmadd213ps %zmm5,%zmm0,%zmm6
  2b7b: 62 f2 7d 48 a8 fc       vfmadd213ps %zmm4,%zmm0,%zmm7
  ```
  etc.
  Fixes tensorflow/mlir#120
  PiperOrigin-RevId: 267281097
* Updated StructAttr to use the struct name for StorageType and ReturnType. [Rob Suderman, 2019-09-04, 1 file, -0/+6]
  PiperOrigin-RevId: 267266687
* Retain address space during MLIR > LLVM conversion. [MLIR Team, 2019-09-04, 2 files, -12/+20]
  PiperOrigin-RevId: 267206460
* Move LLVMIR dialect tests from test/LLVMIR to test/Dialect and test/Conversion [Alex Zinenko, 2019-09-04, 10 files, -0/+0]
  This follows up on the recent restructuring that moved the dialects under lib/Dialect and inter-dialect conversions to lib/Conversion. Originally, the tests for both the LLVMIR dialect itself and the conversion from Standard to LLVMIR dialect lived under test/LLVMIR. This no longer reflects the code structure. Move the tests to either test/Dialect/LLVMIR or test/Conversion/StandardToLLVM depending on the features they exercise.
  PiperOrigin-RevId: 267159219
* Make isIsolatedAbove more robust to invalid IR [Jacques Pienaar, 2019-09-04, 1 file, -0/+9]
  This function is only called from the verifier.
  PiperOrigin-RevId: 267145495
* pipeline-data-transfer: remove dead tag alloc's and improve test coverage for replaceMemRefUsesWith / pipeline-data-transfer [Uday Bondhugula, 2019-09-04, 4 files, -35/+98]
  - address remaining comments from PR tensorflow/mlir#87 for better test coverage for pipeline-data-transfer/replaceAllMemRefUsesWith
  - remove dead tag allocs the same way they are removed for the replaced buffers
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#106
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/106 from bondhugula:followup 9e868666d047e8d43e5f82f43e4093b838c710fa
  PiperOrigin-RevId: 267144774
* Move Linalg dialect tests to test/Dialect/Linalg [Alex Zinenko, 2019-09-04, 10 files, -0/+0]
  This was missing from the commit that moved the Linalg dialect to lib/Dialect.
  PiperOrigin-RevId: 267141176
* Make GPU kernel outlining inline constants. [Stephan Herhut, 2019-09-04, 2 files, -2/+60]
  It is generally beneficial to pass fewer arguments to a kernel, so cloning constants into the kernel is beneficial.
  PiperOrigin-RevId: 267139084
* Add support for array-typed constants. [MLIR Team, 2019-09-04, 2 files, -11/+29]
  PiperOrigin-RevId: 267121729
* Update the syntax of splat attribute in LLVM.md [Alex Zinenko, 2019-09-04, 2 files, -3/+5]
  The syntax for splat attributes changed, but was not updated in the description of the LLVM dialect constant operations in LLVM.md. Update the document to use the correct syntax. Also add a dialect roundtrip test for such attribute, which was previously missing.
  PiperOrigin-RevId: 267116305
* Mention clang-format in the developer guide [Alex Zinenko, 2019-09-04, 1 file, -0/+8]
  PiperOrigin-RevId: 267114122
* Properly clone Linalg ops with regions [Nicolas Vasilache, 2019-09-03, 6 files, -14/+120]
  This CL adds support for proper cloning of Linalg ops that have regions (i.e. the generic linalg op). This is used to properly implement tiling and fusion for such ops. Adequate tests are added.
  PiperOrigin-RevId: 267027176
* Utility to normalize memrefs with non-identity layout maps [Uday Bondhugula, 2019-09-03, 7 files, -11/+250]
  - introduce utility to convert memrefs with non-identity layout maps to ones with identity layout maps: convert the type and rewrite/remap all its uses
  - add this utility to -simplify-affine-structures pass for testing purposes
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#104
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/104 from bondhugula:memref-normalize f2c914aa1890e8860326c9e33f9aa160b3d65e6d
  PiperOrigin-RevId: 266985317
* Add folding rule and dialect materialization hook for spv.constant [Lei Zhang, 2019-09-03, 10 files, -22/+93]
  This will allow us to use MLIR's folding infrastructure to deduplicate SPIR-V constants. This CL also changed isValidSPIRVType in SPIRVDialect to a static method.
  PiperOrigin-RevId: 266984403
* Fix affine data copy generation corner cases/bugs [Uday Bondhugula, 2019-09-03, 2 files, -80/+165]
  - the [begin, end) range identified for copying could end in the middle of the block, which makes hoisting invalid in some cases. Change the range identification to always end with the end of the block.
  - add test case to exercise these (with fast mem capacity set to minimal so that single element memref buffers are generated at the innermost loop)
  - the location of begin/end of the block range for data copying was being confused with the insert points for copy in and copy out code. In cases where we choose to hoist transfers, these are separate.
  - when copy loops are single iteration ones, promote their bodies at the end of the pass.
  - change default fast mem space to 1 (setting it to zero made it generate DMA op's that won't verify in the default case, since the DMA ops have a check for src/dest memref spaces being different).
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Co-Authored-By: Mehdi Amini <joker.eph@gmail.com>
  Closes tensorflow/mlir#88
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/88 from bondhugula:datacopy 88697267c45e850c3ced87671e16e4a930c02a42
  PiperOrigin-RevId: 266980911
* Add information about the SIG + Open Design meetings to the README. [MLIR Team, 2019-09-03, 1 file, -0/+8]
  PiperOrigin-RevId: 266978247
* Fix an invalid assert when processing escaped strings. [River Riddle, 2019-09-03, 2 files, -1/+4]
  The assert assumed that the escaped character could not appear at the end of the string.
  Fixes tensorflow/mlir#117
  PiperOrigin-RevId: 266975471
* Remove unused variables [Alex Torres, 2019-09-03, 1 file, -13/+7]
  Remove unused variables and attributes from BaseViewConversionHelper in mlir/lib/Dialect/Linalg/Transforms/LowerToLLVMDialect.cpp.
  Closes tensorflow/mlir#116
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/116 from alexst07:fix-warnings 5f638e4677492cf71a9cc040eeb6b57427d32e06
  PiperOrigin-RevId: 266972082
* LLVM dialect: prefix auxiliary operations with "mlir." [Alex Zinenko, 2019-09-03, 18 files, -284/+288]
  Some of the operations in the LLVM dialect are required to model the LLVM IR in MLIR, for example "constant" operations are needed to declare a constant value since MLIR, unlike LLVM, does not support immediate values as operands. To avoid confusion with actual LLVM operations, we prefix such auxiliary operations with "mlir.".
  PiperOrigin-RevId: 266942838
* Support bf16 in Builder::getZeroAttr [Smit Hinsu, 2019-09-02, 1 file, -3/+2]
  PiperOrigin-RevId: 266863802
* Add Select operation to SPIR-V dialect. [Mahesh Ravishankar, 2019-09-02, 6 files, -10/+247]
  The SelectOp models the semantics of OpSelect from the SPIR-V spec.
  PiperOrigin-RevId: 266849559
* Enable OwningRewritePatternList insert overload with parameter pack only when there is at least one template pattern type [Smit Hinsu, 2019-09-02, 1 file, -4/+3]
  Also remove the other insert overload with pattern pointer as there are no existing users nor any potential known use-case.
  PiperOrigin-RevId: 266842920
* Refactor the pass manager to support operations other than FuncOp/ModuleOp. [River Riddle, 2019-09-02, 17 files, -323/+375]
  This change generalizes the structure of the pass manager to allow arbitrary nesting of pass managers for other operations, at any level. The only user visible change to existing code is the fact that a PassManager must now provide an MLIRContext on construction. A new class `OpPassManager` has been added that represents a pass manager on a specific operation type. `PassManager` will remain the top-level entry point into the pipeline, with OpPassManagers being nested underneath. OpPassManagers will still be implicitly nested if the operation type on the pass differs from the pass manager. To explicitly build a pipeline, the 'nest' methods on OpPassManager may be used:

    // Pass manager for the top-level module.
    PassManager pm(ctx);
    // Nest a pipeline operating on FuncOp.
    OpPassManager &fpm = pm.nest<FuncOp>();
    fpm.addPass(...);
    // Nest a pipeline under the FuncOp pipeline that operates on spirv::ModuleOp.
    OpPassManager &spvModulePM = pm.nest<spirv::ModuleOp>();
    // Nest a pipeline on FuncOps inside of the spirv::ModuleOp.
    OpPassManager &spvFuncPM = spvModulePM.nest<FuncOp>();

  To help accomplish this, a new general OperationPass is added that operates on opaque Operations. This pass can be inserted in a pass manager of any type to operate on any operation opaquely. An example of this opaque OperationPass is a VerifierPass, which simply runs the verifier opaquely on the current operation.

    /// Pass to verify an operation and signal failure if necessary.
    class VerifierPass : public OperationPass<VerifierPass> {
      void runOnOperation() override {
        Operation *op = getOperation();
        if (failed(verify(op)))
          signalPassFailure();
        markAllAnalysesPreserved();
      }
    };

  PiperOrigin-RevId: 266840344
* Add a new dialect interface for the OperationFolder `OpFolderDialectInterface`. [River Riddle, 2019-09-01, 10 files, -11/+87]
  This interface will allow for providing hooks to interop with operation folding. The first hook, 'shouldMaterializeInto', will allow for controlling which region to insert materialized constants into. The folder will generally materialize constants into the top-level isolated region; this hook allows for materializing into a lower level ancestor region if it is more profitable/correct.
  PiperOrigin-RevId: 266702972
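  As a rough illustration of the hook named in the entry above, here is a sketch of what a dialect-side implementation could look like. The hook name comes from the commit; the header path, signature, and attribute-based policy are assumptions for illustration, not the verified 2019 API.
  ```cpp
  #include "mlir/Transforms/FoldUtils.h"

  using namespace mlir;

  /// Hypothetical folding interface for some dialect; illustrative only.
  struct MyDialectFolderInterface : public OpFolderDialectInterface {
    using OpFolderDialectInterface::OpFolderDialectInterface;

    /// Return true if constants folded inside `region` should be materialized
    /// into `region` itself rather than hoisted into the closest
    /// isolated-from-above ancestor (the folder's default behavior).
    bool shouldMaterializeInto(Region *region) const override {
      // Made-up policy: keep constants local when the parent op carries a
      // hypothetical "my.keep_constants_local" attribute.
      return static_cast<bool>(
          region->getParentOp()->getAttr("my.keep_constants_local"));
    }
  };
  ```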
* Add a `getUsedValuesDefinedAbove()` overload that takes an `Operation` pointer (NFC) [Mehdi Amini, 2019-09-01, 2 files, -0/+11]
  This is a convenient utility around the existing `getUsedValuesDefinedAbove()` that takes two regions.
  PiperOrigin-RevId: 266686854
* Add a convenient `clone()` method on the `Op` class that forwards to the underlying `Operation` (NFC) [Mehdi Amini, 2019-09-01, 2 files, -1/+17]
  PiperOrigin-RevId: 266685852
* Add missing lowering to CFG in mlir-cpu-runner + related cleanup [Mehdi Amini, 2019-09-01, 8 files, -15/+27]
  - the list of passes run by mlir-cpu-runner included -lower-affine and -lower-to-llvm but was missing -lower-to-cfg (because -lower-affine at some point used to lower straight to CFG); add -lower-to-cfg in between. IR with affine ops can now be run by mlir-cpu-runner.
  - update -lower-to-cfg to be consistent with other passes (create*Pass methods were changed to return unique ptrs, but -lower-to-cfg appears to have been missed).
  - mlir-cpu-runner was unable to parse custom form of affine op's - fix link options
  - drop unnecessary run options from test/mlir-cpu-runner/simple.mlir (none of the test cases had loops)
  - -convert-to-llvmir was changed to -lower-to-llvm at some point, but the create pass method name wasn't updated (this pass converts/lowers to LLVM dialect as opposed to LLVM IR). Fix this. (If we prefer "convert", the cmd-line options could be changed to "-convert-to-llvm/cfg" then.)
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#115
  PiperOrigin-RevId: 266666909
* Add a link to the rationale on lack of const for IR units in the developer guide [Mehdi Amini, 2019-08-31, 1 file, -0/+1]
  PiperOrigin-RevId: 266583374
* Document that non-IR units are passed by non-const reference instead of pointer in general [Mehdi Amini, 2019-08-31, 1 file, -2/+5]
  PiperOrigin-RevId: 266583029
* Add floating-point comparison operations to SPIR-V dialect. [Mahesh Ravishankar, 2019-08-31, 4 files, -110/+571]
  Use the existing SPV_LogicalOp specification to add the floating-point comparison operations (both ordered and unordered versions). To make it easier to import the op-definitions automatically, modify the dialect generation script to update the different .td files based on whether the operation is an arithmetic op, logical op, etc. Also allow specification of multiple opcodes with define_inst.sh.
  Since this reuses the SPV_LogicalOp framework, no tests specific to the floating point comparison ops are added with this CL.
  PiperOrigin-RevId: 266561634
* Add missing link dependency to MLIRTableGenTests [Lei Zhang, 2019-08-31, 1 file, -1/+1]
  PiperOrigin-RevId: 266561495
* update vim syntax file [Uday Bondhugula, 2019-08-30, 1 file, -10/+22]
  - more highlighting: numbers, elemental types inside shaped types
  - add some more keywords
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#110
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/110 from bondhugula:vim 029777db0ecb95bfc6453c0869af1c233d84d521
  PiperOrigin-RevId: 266487768
* Add a canonicalization to erase empty AffineForOps. [River Riddle, 2019-08-30, 4 files, -52/+49]
  AffineForOps themselves are pure and can be removed if there are no internal operations.
  PiperOrigin-RevId: 266481293
* Splits DictionaryAttr into DictionaryAttrBase and DictionaryAttr. [MLIR Team, 2019-08-30, 1 file, -2/+4]
  This maintains consistency with other *AttrBase/Attr splits.
  PiperOrigin-RevId: 266469869
* Add TensorRankOf for ranked tensor types with specific ranks [Logan Chien, 2019-08-30, 3 files, -18/+74]
  This commit adds `TensorRankOf<types, typeNames, ranks>` to specify ranked tensor types with the specified types and ranks. For example, `TensorRankOf<[I32, F32], ["i32", "F32"], [0, 1]>` matches `tensor<i32>`, `tensor<?xi32>`, `tensor<f32>`, or `tensor<?xf32>`.
  PiperOrigin-RevId: 266461256
* Fix StructsGenTest.cpp CMakeFile build error [Rob Suderman, 2019-08-30, 1 file, -1/+1]
  PiperOrigin-RevId: 266452719
* Generalize the pass hierarchy by adding a general OpPass<PassT, OpT>. [River Riddle, 2019-08-30, 19 files, -202/+211]
  This pass class generalizes the current functionality between FunctionPass and ModulePass, and allows for operating on any operation type. The pass manager currently only supports OpPasses operating on FuncOp and ModuleOp, but this restriction will be relaxed in follow-up changes. A utility class OpPassBase<OpT> allows for generically referring to operation specific passes: e.g. FunctionPassBase == OpPassBase<FuncOp>.
  PiperOrigin-RevId: 266442239
* Add mechanism to dump JIT-compiled objects to files [Jacques Pienaar, 2019-08-30, 4 files, -14/+70]
  This commit introduces the bits to be able to dump JIT-compiled objects to external files by passing an object cache to OrcJit. The new functionality is tested in mlir-cpu-runner under the flag `dump-object-file`.
  Closes tensorflow/mlir#95
  PiperOrigin-RevId: 266439265
* Added a TableGen generator for structured data [Rob Suderman, 2019-08-30, 8 files, -1/+548]
  Similar to enums, this adds a generator for structured data. It provides a Dictionary that stores a fixed set of values and guarantees the values are valid. It is intended to store a fixed number of values by a given name.
  PiperOrigin-RevId: 266437460
* Add support for early exit walk methods. [River Riddle, 2019-08-30, 8 files, -44/+162]
  This is done by providing a walk callback that returns a WalkResult. This result is either `advance` or `interrupt`. `advance` means that the walk should continue, whereas `interrupt` signals that the walk should stop immediately. An example is shown below:

    auto result = op->walk([](Operation *op) {
      if (some_invariant)
        return WalkResult::interrupt();
      return WalkResult::advance();
    });
    if (result.wasInterrupted())
      ...;

  PiperOrigin-RevId: 266436700
* Add spv.Branch and spv.BranchConditional [Lei Zhang, 2019-08-30, 5 files, -3/+371]
  This CL just covers the op definition, its parsing, printing, and verification. (De)serialization is to be implemented in a subsequent CL.
  PiperOrigin-RevId: 266431077
* Change the parseSource* methods to return OwningModuleRef instead of ModuleOp. [River Riddle, 2019-08-29, 2 files, -14/+19]
  This avoids potential memory leaks from misuse of the API.
  PiperOrigin-RevId: 266305750
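  A small usage sketch of the owning return type described above, assuming the 2019-era `parseSourceFile`/`OwningModuleRef` names from the commit; the exact headers and overloads are assumptions.
  ```cpp
  #include "mlir/IR/MLIRContext.h"
  #include "mlir/IR/Module.h"
  #include "mlir/Parser.h"
  #include "mlir/Support/LogicalResult.h"

  using namespace mlir;

  // Hypothetical helper: parse a file and report success or failure. The
  // parsed module is owned by the OwningModuleRef, so every early return
  // destroys it automatically instead of leaking it, which is the misuse
  // the commit message refers to.
  LogicalResult parseAndCheck(llvm::StringRef filename, MLIRContext &context) {
    OwningModuleRef module = parseSourceFile(filename, &context);
    if (!module)
      return failure(); // parse errors were already emitted as diagnostics
    return success();
  }
  ```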
* Refactor the 'walk' methods for operations. [River Riddle, 2019-08-29, 28 files, -93/+175]
  This change refactors and cleans up the implementation of the operation walk methods. After this refactoring, the explicit template parameter for the operation type is no longer needed for explicit op walks. For example:

    op->walk<AffineForOp>([](AffineForOp op) { ... });

  is now accomplished via:

    op->walk([](AffineForOp op) { ... });

  PiperOrigin-RevId: 266209552
* Make dumping using generic form more robust when IR is ill-formed [Jacques Pienaar, 2019-08-29, 2 files, -2/+11]
  PiperOrigin-RevId: 266198057
* Add tests to verify 0.0 is quantized correctly [Feng Liu, 2019-08-29, 3 files, -1/+94]
  We should consider both signed and narrow_range cases.
  PiperOrigin-RevId: 266167366
* Extend map canonicalization to propagate constant operands [Uday Bondhugula, 2019-08-29, 9 files, -75/+89]
  - extend canonicalizeMapAndOperands to propagate constant operands into the map's expressions (and thus drop those operands).
  - canonicalizeMapAndOperands previously only dropped duplicate and unused operands; however, operands that were constants were retained.
  This change makes IR maps/expressions generated by various utilities/passes even simpler; it also makes some of the test checks more accurate and simpler -- e.g., '0' instead of symbol(%{{.*}}).
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#107
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/107 from bondhugula:canonicalize-maps c889a51486d14fbf7db489f224f881e7e1ff7d72
  PiperOrigin-RevId: 266085289
* fix loop unroll and jam - operand mapping - imperfect nest case [Uday Bondhugula, 2019-08-28, 2 files, -56/+64]
  - fix operand mapping while cloning sub-blocks to jam - was incorrect for imperfect nests where def/use was across sub-blocks
  - strengthen/generalize the first test case to cover the previously missed scenario
  - clean up the other cases while on this.
  Previously, unroll-jamming the following nest
  ```
  affine.for %arg0 = 0 to 2048 {
    %0 = alloc() : memref<512x10xf32>
    affine.for %arg1 = 0 to 10 {
      %1 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
    }
    dealloc %0 : memref<512x10xf32>
  }
  ```
  would yield
  ```
  %0 = alloc() : memref<512x10xf32>
  %1 = affine.apply #map0(%arg0)
  %2 = alloc() : memref<512x10xf32>
  affine.for %arg1 = 0 to 10 {
    %4 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
    %5 = affine.apply #map0(%arg0)
    %6 = affine.load %0[%5, %arg1] : memref<512x10xf32>
  }
  dealloc %0 : memref<512x10xf32>
  %3 = affine.apply #map0(%arg0)
  dealloc %0 : memref<512x10xf32>
  ```
  instead of
  ```
  module {
    affine.for %arg0 = 0 to 2048 step 2 {
      %0 = alloc() : memref<512x10xf32>
      %1 = affine.apply #map0(%arg0)
      %2 = alloc() : memref<512x10xf32>
      affine.for %arg1 = 0 to 10 {
        %4 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
        %5 = affine.apply #map0(%arg0)
        %6 = affine.load %2[%5, %arg1] : memref<512x10xf32>
      }
      dealloc %0 : memref<512x10xf32>
      %3 = affine.apply #map0(%arg0)
      dealloc %2 : memref<512x10xf32>
    }
  }
  ```
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#98
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/98 from bondhugula:ujam ddbc853f69b5608b3e8ff9b5ac1f6a5a0bb315a4
  PiperOrigin-RevId: 266073460