...
* Fix an invalid assert when processing escaped strings. (River Riddle, 2019-09-03; 2 files, -1/+4)
    The assert assumed that the escaped character could not appear at the end of the string.
    Fixes tensorflow/mlir#117
    PiperOrigin-RevId: 266975471
* Remove unused variables (Alex Torres, 2019-09-03; 1 file, -13/+7)
    Remove unused variables and attributes from BaseViewConversionHelper in
    mlir/lib/Dialect/Linalg/Transforms/LowerToLLVMDialect.cpp.
    Closes tensorflow/mlir#116
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/116 from alexst07:fix-warnings 5f638e4677492cf71a9cc040eeb6b57427d32e06
    PiperOrigin-RevId: 266972082
* LLVM dialect: prefix auxiliary operations with "mlir." (Alex Zinenko, 2019-09-03; 18 files, -284/+288)
    Some of the operations in the LLVM dialect are required to model the LLVM IR in MLIR; for
    example, "constant" operations are needed to declare a constant value, since MLIR, unlike
    LLVM, does not support immediate values as operands. To avoid confusion with actual LLVM
    operations, we prefix such auxiliary operations with "mlir.".
    PiperOrigin-RevId: 266942838
* Support bf16 in Builder::getZeroAttr (Smit Hinsu, 2019-09-02; 1 file, -3/+2)
    PiperOrigin-RevId: 266863802
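    A minimal usage sketch in C++ (the wrapper function is illustrative, not part of the change):
    ```
    #include "mlir/IR/Builders.h"
    #include "mlir/IR/MLIRContext.h"

    // Build a zero attribute for the bf16 element type; getZeroAttr picks the
    // attribute kind matching the given type.
    mlir::Attribute makeBF16Zero(mlir::MLIRContext *ctx) {
      mlir::Builder builder(ctx);
      return builder.getZeroAttr(builder.getBF16Type());
    }
    ```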
* Add Select operation to SPIR-V dialect. (Mahesh Ravishankar, 2019-09-02; 6 files, -10/+247)
    The SelectOp models the semantics of OpSelect from SPIR-V spec.
    PiperOrigin-RevId: 266849559
* Enable OwningRewritePatternList insert overload with parameter pack only when there is at least one template pattern type (Smit Hinsu, 2019-09-02; 1 file, -4/+3)
    Also remove the other insert overload taking a pattern pointer, as there are no existing users
    nor any known potential use case.
    PiperOrigin-RevId: 266842920
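    A usage sketch in C++ (MyPatternA/MyPatternB are hypothetical patterns; any RewritePattern
    with an MLIRContext* constructor is registered the same way):
    ```
    #include "mlir/IR/PatternMatch.h"

    // Two illustrative no-op patterns rooted on hypothetical op names.
    struct MyPatternA : public mlir::RewritePattern {
      explicit MyPatternA(mlir::MLIRContext *ctx)
          : RewritePattern("mydialect.op_a", /*benefit=*/1, ctx) {}
      mlir::PatternMatchResult
      matchAndRewrite(mlir::Operation *op,
                      mlir::PatternRewriter &rewriter) const override {
        return matchFailure(); // illustration only
      }
    };
    struct MyPatternB : public mlir::RewritePattern {
      explicit MyPatternB(mlir::MLIRContext *ctx)
          : RewritePattern("mydialect.op_b", /*benefit=*/1, ctx) {}
      mlir::PatternMatchResult
      matchAndRewrite(mlir::Operation *op,
                      mlir::PatternRewriter &rewriter) const override {
        return matchFailure();
      }
    };

    void collectPatterns(mlir::MLIRContext *ctx,
                         mlir::OwningRewritePatternList &patterns) {
      // The variadic insert instantiates each listed pattern type and forwards
      // the trailing arguments (here just the context) to its constructor.
      patterns.insert<MyPatternA, MyPatternB>(ctx);
    }
    ```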
* Refactor the pass manager to support operations other than FuncOp/ModuleOp. (River Riddle, 2019-09-02; 17 files, -323/+375)
    This change generalizes the structure of the pass manager to allow arbitrary nesting of pass
    managers for other operations, at any level. The only user-visible change to existing code is
    that a PassManager must now be provided an MLIRContext on construction.

    A new class `OpPassManager` has been added that represents a pass manager on a specific
    operation type. `PassManager` will remain the top-level entry point into the pipeline, with
    OpPassManagers being nested underneath. OpPassManagers will still be implicitly nested if the
    operation type on the pass differs from that of the pass manager. To explicitly build a
    pipeline, the 'nest' methods on OpPassManager may be used:

        // Pass manager for the top-level module.
        PassManager pm(ctx);

        // Nest a pipeline operating on FuncOp.
        OpPassManager &fpm = pm.nest<FuncOp>();
        fpm.addPass(...);

        // Nest a pipeline under the FuncOp pipeline that operates on spirv::ModuleOp
        OpPassManager &spvModulePM = pm.nest<spirv::ModuleOp>();

        // Nest a pipeline on FuncOps inside of the spirv::ModuleOp.
        OpPassManager &spvFuncPM = spvModulePM.nest<FuncOp>();

    To help accomplish this, a new general OperationPass is added that operates on opaque
    Operations. This pass can be inserted into a pass manager of any type to operate on any
    operation opaquely. An example of this opaque OperationPass is a VerifierPass, which simply
    runs the verifier opaquely on the current operation:

        /// Pass to verify an operation and signal failure if necessary.
        class VerifierPass : public OperationPass<VerifierPass> {
          void runOnOperation() override {
            Operation *op = getOperation();
            if (failed(verify(op)))
              signalPassFailure();
            markAllAnalysesPreserved();
          }
        };

    PiperOrigin-RevId: 266840344
* Add a new dialect interface for the OperationFolder: `OpFolderDialectInterface`. (River Riddle, 2019-09-01; 10 files, -11/+87)
    This interface provides hooks for interoperating with operation folding. The first hook,
    'shouldMaterializeInto', allows for controlling which region to insert materialized constants
    into. The folder will generally materialize constants into the top-level isolated region; this
    hook allows for materializing into a lower-level ancestor region if that is more
    profitable/correct.
    PiperOrigin-RevId: 266702972
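    A rough sketch in C++ of a dialect providing this interface (the header location and the exact
    hook signature are assumptions based on the description above):
    ```
    #include "mlir/Transforms/FoldUtils.h"

    // Hedged sketch: let the folder materialize constants into the local region
    // instead of hoisting them to the top-level isolated region.
    struct MyFolderInterface : public mlir::OpFolderDialectInterface {
      using OpFolderDialectInterface::OpFolderDialectInterface;

      bool shouldMaterializeInto(mlir::Region *region) const override {
        return true; // assumption: always keep constants local for this dialect
      }
    };

    // Registered from the dialect constructor, e.g.:
    //   addInterfaces<MyFolderInterface>();
    ```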
* Add a `getUsedValuesDefinedAbove()` overload that takes an `Operation` pointer (NFC) (Mehdi Amini, 2019-09-01; 2 files, -0/+11)
    This is a convenient utility around the existing `getUsedValuesDefinedAbove()` that takes two
    regions.
    PiperOrigin-RevId: 266686854
* Add a convenient `clone()` method on the `Op` class that forwards to the underlying `Operation` (NFC) (Mehdi Amini, 2019-09-01; 2 files, -1/+17)
    PiperOrigin-RevId: 266685852
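    A small sketch in C++ (assuming the helper behaves like Operation::clone() and the clone must
    still be inserted somewhere; the function and its arguments are illustrative):
    ```
    #include "mlir/IR/Function.h"
    #include "mlir/IR/Module.h"

    // Duplicate a function without spelling out func.getOperation()->clone().
    void duplicateFunc(mlir::FuncOp func, mlir::ModuleOp module) {
      auto copy = func.clone(); // Op-level helper forwarding to Operation::clone()
      // The clone starts out unlinked; attach it, e.g. at the end of the module.
      module.push_back(copy);
    }
    ```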
* Add missing lowering to CFG in mlir-cpu-runner + related cleanup (Mehdi Amini, 2019-09-01; 8 files, -15/+27)
    - The list of passes run by mlir-cpu-runner included -lower-affine and -lower-to-llvm but was
      missing -lower-to-cfg (because -lower-affine at some point used to lower straight to CFG);
      add -lower-to-cfg in between. IR with affine ops can now be run by mlir-cpu-runner.
    - Update -lower-to-cfg to be consistent with other passes (create*Pass methods were changed to
      return unique ptrs, but -lower-to-cfg appears to have been missed).
    - mlir-cpu-runner was unable to parse the custom form of affine ops.
    - Fix link options.
    - Drop unnecessary run options from test/mlir-cpu-runner/simple.mlir (none of the test cases
      had loops).
    - -convert-to-llvmir was changed to -lower-to-llvm at some point, but the create pass method
      name wasn't updated (this pass converts/lowers to the LLVM dialect as opposed to LLVM IR).
      Fix this. (If we prefer "convert", the cmd-line options could be changed to
      "-convert-to-llvm/cfg" then.)
    Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
    Closes tensorflow/mlir#115
    PiperOrigin-RevId: 266666909
* Add a link to the rationale on the lack of const for IR units in the developer guide (Mehdi Amini, 2019-08-31; 1 file, -0/+1)
    PiperOrigin-RevId: 266583374
* Document that non-IR units are passed by non-const reference instead of pointer in general (Mehdi Amini, 2019-08-31; 1 file, -2/+5)
    PiperOrigin-RevId: 266583029
* Add floating-point comparison operations to SPIR-V dialect. (Mahesh Ravishankar, 2019-08-31; 4 files, -110/+571)
    Use the existing SPV_LogicalOp specification to add the floating-point comparison operations
    (both ordered and unordered versions). To make it easier to import the op definitions
    automatically, modify the dialect generation script to update the different .td files based on
    whether the operation is an arithmetic op, logical op, etc. Also allow specification of
    multiple opcodes with define_inst.sh.
    Since this reuses the SPV_LogicalOp framework, no tests specific to the floating-point
    comparison ops are added with this CL.
    PiperOrigin-RevId: 266561634
* Add missing link dependency to MLIRTableGenTests (Lei Zhang, 2019-08-31; 1 file, -1/+1)
    PiperOrigin-RevId: 266561495
* update vim syntax file (Uday Bondhugula, 2019-08-30; 1 file, -10/+22)
    - more highlighting: numbers, elemental types inside shaped types
    - add some more keywords
    Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
    Closes tensorflow/mlir#110
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/110 from bondhugula:vim 029777db0ecb95bfc6453c0869af1c233d84d521
    PiperOrigin-RevId: 266487768
* Add a canonicalization to erase empty AffineForOps. (River Riddle, 2019-08-30; 4 files, -52/+49)
    AffineForOps themselves are pure and can be removed if they contain no internal operations.
    PiperOrigin-RevId: 266481293
* Splits DictionaryAttr into DictionaryAttrBase and DictionaryAttr. (MLIR Team, 2019-08-30; 1 file, -2/+4)
    This maintains consistency with other *AttrBase/Attr splits.
    PiperOrigin-RevId: 266469869
* Add TensorRankOf for ranked tensor types with specific ranks (Logan Chien, 2019-08-30; 3 files, -18/+74)
    This commit adds `TensorRankOf<types, typeNames, ranks>` to specify ranked tensor types with
    the specified types and ranks. For example, `TensorRankOf<[I32, F32], ["i32", "F32"], [0, 1]>`
    matches `tensor<i32>`, `tensor<?xi32>`, `tensor<f32>`, or `tensor<?xf32>`.
    PiperOrigin-RevId: 266461256
* Fix StructsGenTest.cpp CMakeFile build error (Rob Suderman, 2019-08-30; 1 file, -1/+1)
    PiperOrigin-RevId: 266452719
* Generalize the pass hierarchy by adding a general OpPass<PassT, OpT>. (River Riddle, 2019-08-30; 19 files, -202/+211)
    This pass class generalizes the current functionality between FunctionPass and ModulePass, and
    allows for operating on any operation type. The pass manager currently only supports OpPasses
    operating on FuncOp and ModuleOp, but this restriction will be relaxed in follow-up changes. A
    utility class OpPassBase<OpT> allows for generically referring to operation specific passes:
    e.g. FunctionPassBase == OpPassBase<FuncOp>.
    PiperOrigin-RevId: 266442239
* Add mechanism to dump JIT-compiled objects to files (Jacques Pienaar, 2019-08-30; 4 files, -14/+70)
    This commit introduces the bits to be able to dump JIT-compiled objects to external files by
    passing an object cache to OrcJit. The new functionality is tested in mlir-cpu-runner under
    the flag `dump-object-file`.
    Closes tensorflow/mlir#95
    PiperOrigin-RevId: 266439265
* Added a TableGen generator for structured data (Rob Suderman, 2019-08-30; 8 files, -1/+548)
    Similar to enums, this adds a generator for structured data. It provides a Dictionary that
    stores a fixed set of values and guarantees the values are valid. It is intended to store a
    fixed number of values by a given name.
    PiperOrigin-RevId: 266437460
* Add support for early exit walk methods. (River Riddle, 2019-08-30; 8 files, -44/+162)
    This is done by providing a walk callback that returns a WalkResult. This result is either
    `advance` or `interrupt`. `advance` means that the walk should continue, whereas `interrupt`
    signals that the walk should stop immediately. An example is shown below:

        auto result = op->walk([](Operation *op) {
          if (some_invariant)
            return WalkResult::interrupt();
          return WalkResult::advance();
        });

        if (result.wasInterrupted())
          ...;

    PiperOrigin-RevId: 266436700
* Add spv.Branch and spv.BranchConditional (Lei Zhang, 2019-08-30; 5 files, -3/+371)
    This CL just covers the op definition, its parsing, printing, and verification.
    (De)serialization is to be implemented in a subsequent CL.
    PiperOrigin-RevId: 266431077
* Change the parseSource* methods to return OwningModuleRef instead of ModuleOp. (River Riddle, 2019-08-29; 2 files, -14/+19)
    This avoids potential memory leaks from misuse of the API.
    PiperOrigin-RevId: 266305750
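    A usage sketch in C++ (the wrapper function is illustrative):
    ```
    #include "llvm/Support/SourceMgr.h"
    #include "mlir/IR/MLIRContext.h"
    #include "mlir/IR/Module.h"
    #include "mlir/Parser.h"
    #include "mlir/Support/LogicalResult.h"

    // OwningModuleRef owns the parsed module and erases it when it goes out of
    // scope, so early returns no longer leak the ModuleOp.
    mlir::LogicalResult processBuffer(llvm::SourceMgr &sourceMgr,
                                      mlir::MLIRContext *context) {
      mlir::OwningModuleRef module = mlir::parseSourceFile(sourceMgr, context);
      if (!module)
        return mlir::failure();
      // Access the underlying ModuleOp via *module or module.get().
      return mlir::success();
    }
    ```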
* Refactor the 'walk' methods for operations. (River Riddle, 2019-08-29; 28 files, -93/+175)
    This change refactors and cleans up the implementation of the operation walk methods. After
    this refactoring, the explicit template parameter for the operation type is no longer needed
    for the explicit op walks. For example:

        op->walk<AffineForOp>([](AffineForOp op) { ... });

    is now accomplished via:

        op->walk([](AffineForOp op) { ... });

    PiperOrigin-RevId: 266209552
* Make dumping using generic form more robust when IR ill-formed (Jacques Pienaar, 2019-08-29; 2 files, -2/+11)
    PiperOrigin-RevId: 266198057
* Add tests to verify 0.0 is quantized correctly (Feng Liu, 2019-08-29; 3 files, -1/+94)
    We should consider both signed and narrow_range cases.
    PiperOrigin-RevId: 266167366
* Extend map canonicalization to propagate constant operands (Uday Bondhugula, 2019-08-29; 9 files, -75/+89)
    - Extend canonicalizeMapAndOperands to propagate constant operands into the map's expressions
      (and thus drop those operands).
    - canonicalizeMapAndOperands previously only dropped duplicate and unused operands; however,
      operands that were constants were retained.
    This change makes IR maps/expressions generated by various utilities/passes even simpler; it
    also makes some of the test checks more accurate and simpler -- e.g., '0' instead of
    symbol(%{{.*}}).
    Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
    Closes tensorflow/mlir#107
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/107 from bondhugula:canonicalize-maps c889a51486d14fbf7db489f224f881e7e1ff7d72
    PiperOrigin-RevId: 266085289
* fix loop unroll and jam - operand mapping - imperfect nest case (Uday Bondhugula, 2019-08-28; 2 files, -56/+64)
    - fix operand mapping while cloning sub-blocks to jam - was incorrect for imperfect nests
      where def/use was across sub-blocks
    - strengthen/generalize the first test case to cover the previously missed scenario
    - clean up the other cases while on this

    Previously, unroll-jamming the following nest
    ```
    affine.for %arg0 = 0 to 2048 {
      %0 = alloc() : memref<512x10xf32>
      affine.for %arg1 = 0 to 10 {
        %1 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
      }
      dealloc %0 : memref<512x10xf32>
    }
    ```
    would yield
    ```
    %0 = alloc() : memref<512x10xf32>
    %1 = affine.apply #map0(%arg0)
    %2 = alloc() : memref<512x10xf32>
    affine.for %arg1 = 0 to 10 {
      %4 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
      %5 = affine.apply #map0(%arg0)
      %6 = affine.load %0[%5, %arg1] : memref<512x10xf32>
    }
    dealloc %0 : memref<512x10xf32>
    %3 = affine.apply #map0(%arg0)
    dealloc %0 : memref<512x10xf32>
    ```
    instead of
    ```
    module {
      affine.for %arg0 = 0 to 2048 step 2 {
        %0 = alloc() : memref<512x10xf32>
        %1 = affine.apply #map0(%arg0)
        %2 = alloc() : memref<512x10xf32>
        affine.for %arg1 = 0 to 10 {
          %4 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
          %5 = affine.apply #map0(%arg0)
          %6 = affine.load %2[%5, %arg1] : memref<512x10xf32>
        }
        dealloc %0 : memref<512x10xf32>
        %3 = affine.apply #map0(%arg0)
        dealloc %2 : memref<512x10xf32>
      }
    }
    ```
    Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
    Closes tensorflow/mlir#98
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/98 from bondhugula:ujam ddbc853f69b5608b3e8ff9b5ac1f6a5a0bb315a4
    PiperOrigin-RevId: 266073460
* Add verification for dimension attribute on GPUDialect index operations. (Stephan Herhut, 2019-08-28; 3 files, -1/+46)
    PiperOrigin-RevId: 266073204
* Add vim scripts for indent/syntax (Uday Bondhugula, 2019-08-28; 6 files, -51/+192)
    - some of it has been adapted from LLVM's vim utils
    Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
    Closes tensorflow/mlir#90
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/90 from bondhugula:vim 22b1c958818c4b09de0ec8e1d7a4893171a03dbf
    PiperOrigin-RevId: 266071752
* Fix the equality check of two floating point values (Feng Liu, 2019-08-28; 1 file, -3/+5)
    PiperOrigin-RevId: 266022088
* Generalize the analysis manager framework to work on any operation at any nesting. (River Riddle, 2019-08-28; 6 files, -175/+193)
    The pass manager is moving towards being able to run on operations at arbitrary nesting. An
    operation may have both parent and child operations, and the AnalysisManager must be able to
    handle this generalization. The AnalysisManager class now contains generic
    'getCachedParentAnalysis' and 'getChildAnalysis'/'getCachedChildAnalysis' functions to query
    analyses on parent/child operations. This removes the hard coded nesting relationship between
    Module/Function.
    PiperOrigin-RevId: 266003636
* Tweak to the pretty type parser to recognize that `->` is a special token. (Eric Schweitz, 2019-08-28; 4 files, -0/+18)
    Tweak to the pretty type parser to recognize that `->` is a special token that shouldn't be
    split into two characters. This change allows dialect types to wrap function types as in
    `!my.ptr_type<(i32) -> i32>`.
    Closes tensorflow/mlir#105
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/105 from schweitzpgi:parse-arrow 8b2d768053f419daae5a1a864121a44c4319acbe
    PiperOrigin-RevId: 265986240
* Add implementation for tensor_load and tensor_store operations. (Stephan Herhut, 2019-08-28; 8 files, -7/+234)
    This change adds definitions, parsing and verification for both ops.
    PiperOrigin-RevId: 265954051
* Port mlir-cuda-runner to use dialect conversion framework. (Stephan Herhut, 2019-08-28; 4 files, -91/+134)
    Instead of lowering the program in two steps (Standard->LLVM followed by GPU->NVVM), leading
    to invalid IR in between, the runner now uses one pattern-based rewrite step to go directly
    from Standard+GPU to LLVM+NVVM.
    PiperOrigin-RevId: 265861934
* Refactor / improve replaceAllMemRefUsesWith (Uday Bondhugula, 2019-08-27; 5 files, -192/+270)
    Refactor replaceAllMemRefUsesWith to split it into two methods: the new method does the
    replacement on a single op, and is used by the existing one.
    - make the methods return LogicalResult instead of bool
    - Earlier, when replacement failed (due to non-dereferencing uses of the memref), the set of
      ops that had already been processed would have been replaced, leaving the IR in an
      inconsistent state. Now, a pass is made over all ops to first check for non-dereferencing
      uses, and then replacement is performed. No test cases were affected because all clients of
      this method were first checking for non-dereferencing uses before calling this method (for
      other reasons). This isn't true for a use case in another upcoming PR (scalar replacement);
      clients can now bail out with consistent IR on failure of replaceAllMemRefUsesWith. Add test
      case.
    - multiple dereferencing uses of the same memref in a single op is possible (we have no such
      use cases/scenarios), and this has always remained unsupported. Add an assertion for this.
    - minor fix to another test pipeline-data-transfer case.
    Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
    Closes tensorflow/mlir#87
    PiperOrigin-RevId: 265808183
* Update Ch.2 of the Toy tutorial. (River Riddle, 2019-08-27; 7 files, -339/+328)
    The code and documentation for this chapter of the tutorial have been updated to follow the
    new flow. The toy 'array' type has been replaced by usages of the MLIR tensor type. The code
    has also been cleaned up and modernized.
    Closes tensorflow/mlir#101
    PiperOrigin-RevId: 265744086
* Add 3 additional intrinsic ops to NVVM dialect, in preparation to implement block-wide reduce. (MLIR Team, 2019-08-27; 5 files, -35/+159)
    PiperOrigin-RevId: 265720077
* [spirv] Fix the entry block to start with OpLabel (Lei Zhang, 2019-08-27; 4 files, -8/+82)
    Each basic block in SPIR-V must start with an OpLabel instruction. We don't support control
    flow yet, so this CL just makes sure that the entry block follows this rule and is valid.
    PiperOrigin-RevId: 265718841
* Enhance GPU To SPIR-V conversion to support builtins and load/store ops. (Mahesh Ravishankar, 2019-08-27; 8 files, -27/+437)
    To support a conversion of a simple load-compute-store kernel from the GPU dialect to the
    SPIR-V dialect, conversion of operations like "gpu.block_dim" and "gpu.thread_id", which allow
    threads to query the launch configuration, is needed. In SPIR-V these are specified as global
    variables with builtin attributes. This CL adds support for specifying builtin variables in
    the SPIR-V conversion framework. This is used to convert the relevant operations from the GPU
    dialect to the SPIR-V dialect.
    Also add support for conversion of load/store operations in the Standard dialect to the SPIR-V
    dialect. To simplify the conversion, add a method to build a spv.AccessChain operation that
    automatically determines the return type based on the base pointer type and the indices
    provided.
    PiperOrigin-RevId: 265718525
* [spirv] Add Block decoration for spv.struct. (Denis Khalikov, 2019-08-27; 4 files, -0/+165)
    Add Block decoration for top-level spv.struct.
    Closes tensorflow/mlir#102
    PiperOrigin-RevId: 265716241
* NFC: Remove the explicit context from Operation::create and OperationState. (River Riddle, 2019-08-26; 8 files, -27/+23)
    The context can easily be recovered from the Location in these situations.
    PiperOrigin-RevId: 265578574
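    A sketch in C++ of building an op with the updated API (the op name and result types are
    hypothetical):
    ```
    #include "mlir/IR/Builders.h"
    #include "mlir/IR/OperationSupport.h"

    // The context is now recovered from `loc`; it is no longer passed to
    // OperationState or Operation::create explicitly.
    mlir::Operation *buildOp(mlir::OpBuilder &builder, mlir::Location loc,
                             mlir::Value *lhs, mlir::Value *rhs) {
      mlir::OperationState state(loc, "mydialect.make_pair");
      state.addOperands({lhs, rhs});
      state.addTypes({builder.getF32Type(), builder.getF32Type()});
      return builder.createOperation(state);
    }
    ```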
* Add FPToSI/FPExt/FPTrunc cast ops to the LLVM dialect. (Eric Schweitz, 2019-08-26; 2 files, -0/+13)
    Closes tensorflow/mlir#99
    PiperOrigin-RevId: 265538731
* NFC: Remove unnecessary context parameters from several Location getters. (River Riddle, 2019-08-26; 3 files, -20/+16)
    The context can be recovered by other means in these methods and doesn't need to be passed
    explicitly.
    PiperOrigin-RevId: 265532956
* Update documentation for custom rewrite specs. (MLIR Team, 2019-08-26; 1 file, -5/+7)
    PiperOrigin-RevId: 265485862
* Support folding of ops with inner ops in GreedyPatternRewriteDriver. (Andy Ly, 2019-08-26; 4 files, -16/+53)
    This fixes a bug where ops with inner ops were folded while their inner ops were still being
    visited.
    PiperOrigin-RevId: 265475780
* NFC: Add doc for id-punct (Alina Sbirlea, 2019-08-23; 1 file, -0/+1)
    PiperOrigin-RevId: 265190168