path: root/mlir/lib/Dialect
Commit log: each entry shows the commit message (author, date; files changed, -removed/+added lines)
...
* Use vector.InsertStridedSlice in Vector -> Vector unrolling (Nicolas Vasilache, 2019-11-25; 1 file, -12/+9)
  This CL uses the recently added op to finish the implementation of Vector ->
  Vector unrolling by replacing the "fake join op" with a series of
  InsertStridedSliceOp. Test is updated accordingly.
  PiperOrigin-RevId: 282451126
* Add a vector.InsertStridedSliceOp (Nicolas Vasilache, 2019-11-25; 1 file, -117/+211)
  This new op is the counterpart of vector.StridedSliceOp and will be used in
  the pattern rewrites for vector unrolling.
  PiperOrigin-RevId: 282447414
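  Illustration (not from the commit): a hypothetical usage sketch mirroring
  the vector.strided_slice example further down this log; the op and
  attribute spelling here are assumptions.
  ```
  // Insert a 2x4 vector into a 4x8x16 vector; offsets index into the
  // destination, strides apply to the inserted dimensions.
  %1 = vector.insert_strided_slice %small, %big
      {offsets : [0, 2, 0], strides : [1, 1]}
    : vector<2x4xf32> into vector<4x8x16xf32>
  ```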
* Allow LLVM::ExtractElementOp to have non-i32 indices. (MLIR Team, 2019-11-25; 1 file, -17/+14)
  Also change the text format a bit, so that indices are enclosed in square
  brackets.
  PiperOrigin-RevId: 282437095
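  Illustration (not from the commit), assuming the LLVM dialect type
  spelling of the era; shows a non-i32 index in the new square-bracket form.
  ```
  %0 = llvm.extractelement %vec[%idx : i64] : !llvm<"<4 x float>">
  ```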
* Make std.divis and std.diviu support ElementsAttr folding. (Ben Vanik, 2019-11-25; 1 file, -24/+20)
  PiperOrigin-RevId: 282434465
* Add missing rule to generate SPIR-V ABI Attribute using tblgen to CMake. (Mahesh Ravishankar, 2019-11-25; 1 file, -0/+1)
  PiperOrigin-RevId: 282415592
* Update VectorContractionOp to take iterator types and index mapping attributes compatible with linalg ops. (Andy Davis, 2019-11-25; 1 file, -17/+87)
  PiperOrigin-RevId: 282412311
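  Illustration (not from the commit): a hedged sketch of linalg-compatible
  indexing maps and iterator types on a matmul-shaped contraction; the exact
  attribute syntax is an assumption.
  ```
  %res = vector.contract {
      indexing_maps = [(m, n, k) -> (m, k),
                       (m, n, k) -> (k, n),
                       (m, n, k) -> (m, n)],
      iterator_types = ["parallel", "parallel", "reduction"]
    } %a, %b, %acc : vector<4x8xf32>, vector<8x16xf32> into vector<4x16xf32>
  ```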
* Introduce attributes that specify the final ABI for a spirv::ModuleOp. (Mahesh Ravishankar, 2019-11-25; 6 files, -177/+360)
  To simplify the lowering into SPIR-V, while still respecting the ABI
  requirements of SPIR-V/Vulkan, split the process into two:
  1) While lowering a function to SPIR-V (when the function is an entry point
     function), allow specifying attributes on arguments and the function
     itself that describe its ABI.
  2) Add a pass that materializes the ABI described in the function.
  Two attributes are needed:
  1) An attribute on arguments of the entry point function that describes the
     descriptor_set, binding, storage class, etc. of the spv.globalVariable
     this argument will be replaced by.
  2) An attribute on the function that specifies workgroup size, etc. (for
     now only workgroup size).
  Add the pass -spirv-lower-abi-attrs to materialize the ABI described by the
  attributes. This change makes the SPIRVBasicTypeConverter class
  unnecessary, so it is removed, further simplifying the SPIR-V lowering
  path.
  PiperOrigin-RevId: 282387587
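  Illustration (not from the commit): a hedged sketch of the two attributes;
  the attribute names and fields below are inferred from the description,
  not verified syntax.
  ```
  // Argument attribute: descriptor set / binding of the replacing
  // spv.globalVariable; function attribute: workgroup size.
  func @compute(%arg0: memref<4xf32>
      {spv.interface_var_abi = {descriptor_set = 0 : i32, binding = 0 : i32}})
    attributes {spv.entry_point_abi = {local_size = dense<[32, 1, 1]> : vector<3xi32>}} {
    return
  }
  ```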
* Allow memref_cast from static strides to dynamic strides. (Mahesh Ravishankar, 2019-11-25; 1 file, -2/+22)
  Memref_cast supports casts from static-shape to dynamic-shape memrefs. The
  same should be true for strides as well, i.e., a memref with static strides
  can be cast to a memref with dynamic strides.
  PiperOrigin-RevId: 282381862
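  Illustration (not from the commit), assuming the strided-memref type
  syntax of the era:
  ```
  // Cast away static stride/offset information, mirroring the existing
  // static-to-dynamic shape cast.
  %1 = memref_cast %0 : memref<12x4xf32, offset: 0, strides: [4, 1]>
                     to memref<12x4xf32, offset: ?, strides: [?, ?]>
  ```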
* Add vector.insertelement op (Nicolas Vasilache, 2019-11-25; 1 file, -1/+73)
  This is the counterpart of the vector.extractelement op and has the same
  limitations at the moment (static I64IntegerArrayAttr to express position).
  This restriction will be lifted in the future. LLVM lowering will be added
  in a subsequent commit.
  PiperOrigin-RevId: 282365760
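  Illustration (not from the commit): a hypothetical spelling with a static
  position, mirroring vector.extractelement; the exact syntax is an
  assumption.
  ```
  %1 = vector.insertelement %val, %vec[3 : i32] : vector<8xf32>
  ```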
* Introduce gpu.func (Alex Zinenko, 2019-11-25; 1 file, -6/+195)
  Introduce a new function-like operation to the GPU dialect to provide a
  placeholder for the execution semantic description and to add support for
  the GPU memory hierarchy. This aligns with the overall goal of the dialect
  to expose the common abstraction layer for GPU devices, in particular by
  providing an MLIR unit of semantics (i.e., an operation) for memory
  modeling.
  This proposal has been discussed on the mailing list:
  https://groups.google.com/a/tensorflow.org/d/msg/mlir/RfXNP7Hklsc/MBNN7KhjAgAJ
  As decided, the "convergence" aspect of the execution model will be
  factored out into a new discussion and therefore is not included in this
  commit. This commit only introduces the operation but does not hook it up
  with the remaining flow. The intention is to develop the new flow while
  keeping the old flow operational and do the switch in a simple, separately
  reversible commit.
  PiperOrigin-RevId: 282357599
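  Illustration (not from the commit): a hedged sketch of a gpu.func with
  workgroup and private memory attributions modeling the GPU memory
  hierarchy; the exact spelling is an assumption.
  ```
  gpu.func @kernel(%arg0: f32)
      workgroup(%wg: memref<32xf32, 3>)   // workgroup-level buffer
      private(%pv: memref<1xf32, 5>) {    // per-thread buffer
    gpu.return
  }
  ```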
* Support folding of StandardOps with DenseElementsAttr. (Ben Vanik, 2019-11-24; 1 file, -14/+30)
  PiperOrigin-RevId: 282270243
* NFC: Wire up DRR settings for SPIR-V canonicalization patterns (Lei Zhang, 2019-11-23; 3 files, -58/+55)
  This CL added the necessary files and settings for using DRR to write
  SPIR-V canonicalization patterns, and converted the patterns for
  spv.Bitcast and spv.LogicalNot.
  PiperOrigin-RevId: 282132786
* Make isValidSymbol more powerful (Uday Bondhugula, 2019-11-22; 1 file, -10/+42)
  The check in isValidSymbol, as far as a DimOp result went, checked whether
  the dim op was on a top-level memref. However, any alloc'ed, view, or
  subview memref would be fine as long as the corresponding dimension of
  that memref is either static or was in turn created using a valid symbol
  in the case of dynamic dimensions.
  Reported-by: Jose Gomez
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#252
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/252 from bondhugula:symbol 7b57dc394df9375e651f497231c6e4525a32a662
  PiperOrigin-RevId: 282097114
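  Illustration (not from the commit): the kind of case the relaxed check
  accepts; shapes and names are illustrative.
  ```
  %m = alloc(%n) : memref<?x1024xf32>
  %d = dim %m, 1 : memref<?x1024xf32>  // dimension 1 is static: valid symbol
  ```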
* Fix Windows Build (Nicolas Vasilache, 2019-11-22; 1 file, -1/+2)
  PiperOrigin-RevId: 282048102
* [spirv] Add a canonicalizer for `spirv::LogicalNotOp`. (Denis Khalikov, 2019-11-22; 1 file, -0/+41)
  Add a canonicalizer for `spirv::LogicalNotOp`. Converts:
  * spv.LogicalNot(spv.IEqual(...)) -> spv.INotEqual(...)
  * spv.LogicalNot(spv.INotEqual(...)) -> spv.IEqual(...)
  * spv.LogicalNot(spv.LogicalEqual(...)) -> spv.LogicalNotEqual(...)
  * spv.LogicalNot(spv.LogicalNotEqual(...)) -> spv.LogicalEqual(...)
  Also moved the test for spv.IMul to the arithmetic tests.
  Closes tensorflow/mlir#256
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/256 from denis0x0D:sandbox/canon_logical_not 76ab5787b2c777f948c8978db061d99e76453d44
  PiperOrigin-RevId: 282012356
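  Illustration (not from the commit) of the first rewrite above; types are
  illustrative.
  ```
  // Before:
  %0 = spv.IEqual %a, %b : i32
  %1 = spv.LogicalNot %0 : i1
  // After canonicalization:
  %1 = spv.INotEqual %a, %b : i32
  ```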
* Add more canonicalizations for SubViewOp. (Mahesh Ravishankar, 2019-11-22; 1 file, -92/+122)
  Depending on which of the offsets, sizes, or strides are constant, the
  subview op can be canonicalized in different ways. Add such
  canonicalizations, which generalize the existing approach of canonicalizing
  the subview op only when all of the offsets, sizes, and strides are
  constants.
  PiperOrigin-RevId: 282010703
* Unify vector op names with other dialects. (Jean-Michel Gorius, 2019-11-22; 1 file, -66/+63)
  Change vector op names from VectorFooOp to Vector_FooOp and from
  vector::VectorFooOp to vector::FooOp.
  Closes tensorflow/mlir#257
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/257 from Kayjukh:master dfc3a0e04114885aaec8740d5951d6984d6e1577
  PiperOrigin-RevId: 281967461
* Move Linalg Transforms that are actually Conversions - NFC (Nicolas Vasilache, 2019-11-21; 2 files, -530/+0)
  PiperOrigin-RevId: 281844602
* Fix OSS builds - NFC (Nicolas Vasilache, 2019-11-21; 1 file, -1/+1)
  PiperOrigin-RevId: 281757979
* Drop unused function - NFC (Nicolas Vasilache, 2019-11-21; 1 file, -5/+0)
  PiperOrigin-RevId: 281741923
* Split Linalg declarative patterns from specific test patterns - NFC (Nicolas Vasilache, 2019-11-21; 1 file, -62/+37)
  This will make it easier to scale out test patterns and build specific
  passes that do not interfere with independent testing.
  PiperOrigin-RevId: 281736335
* Add VectorContractionOp to the VectorOps dialect. (Andy Davis, 2019-11-20; 1 file, -0/+209)
  PiperOrigin-RevId: 281605471
* Verify subview op result has dynamic shape, when sizes are specified. (Mahesh Ravishankar, 2019-11-20; 1 file, -0/+14)
  If the sizes are specified as arguments to the subview op, then the shape
  must be dynamic as well.
  PiperOrigin-RevId: 281591608
* Changes to SubViewOp to make it more amenable to canonicalization. (Mahesh Ravishankar, 2019-11-20; 1 file, -37/+118)
  The current SubViewOp specification allows either all offsets, sizes, and
  strides to be dynamic or all of them to be static. There are opportunities
  for more fine-grained canonicalization based on which of these are static.
  For example, if the sizes are static, the result memref is of static shape.
  The specification of SubViewOp is modified to allow one or more of the
  offsets, sizes, and strides to be statically specified. The verification
  is updated to ensure that the result type of the subview op is consistent
  with which of these are static and which are dynamic.
  PiperOrigin-RevId: 281560457
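  Illustration (not from the commit): a hedged sketch of the finer-grained
  canonicalization this enables; syntax and types are assumptions.
  ```
  // Constant sizes can be folded into the result type, giving it a static
  // shape even though the offsets remain dynamic.
  %c1 = constant 1 : index
  %c4 = constant 4 : index
  %sv = subview %src[%i, %j][%c4, %c4][%c1, %c1]
    : memref<8x16xf32> to memref<?x?xf32, offset: ?, strides: [?, ?]>
  // can canonicalize toward a result type like memref<4x4xf32, ...>.
  ```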
* Add a new OpAsmOpInterface to allow for ops to directly hook into the AsmPrinter. (River Riddle, 2019-11-20; 1 file, -32/+26)
  This interface provides more fine-grained hooks into the AsmPrinter than
  the dialect interface, allowing operations to define the asm names to use
  for their results directly on the operations themselves. The hook is also
  expanded to enable defining named result "groups": the given callback is
  invoked with a specific result value that starts a result "pack", and the
  name to give this result pack. To signal that a result pack should use the
  default naming scheme, a None can be passed in instead of the name.
  For example, if you have an operation that has four results and you want
  to split these into three distinct groups, you could do the following:
    setNameFn(getResult(0), "first_result");
    setNameFn(getResult(1), "middle_results");
    setNameFn(getResult(3), "");  // use the default numbering.
  This would print the operation as follows:
    %first_result, %middle_results:2, %0 = "my.op" ...
  PiperOrigin-RevId: 281546873
* Extend kernel outlining to also consider dim worth inlining. (Stephan Herhut, 2019-11-20; 1 file, -9/+21)
  PiperOrigin-RevId: 281483447
* Add VectorOps.StridedSliceOp (Nicolas Vasilache, 2019-11-19; 1 file, -2/+176)
  The `vector.strided_slice` op takes an n-D vector, a k-D `offsets` integer
  array attribute, a k-D `sizes` integer array attribute, and a k-D
  `strides` integer array attribute, and extracts the n-D subvector at the
  proper offset. It returns an n-D vector whose first k dimensions match the
  `sizes` attribute. The returned subvector contains the elements starting
  at offset `offsets` and ending at `offsets + sizes`.
  Example:
  ```
  %1 = vector.strided_slice %0 {offsets : [0, 2], sizes : [2, 4], strides : [1, 1]}
    : vector<4x8x16xf32>  // returns a vector<2x4x16xf32>
  ```
  This op will be useful for progressive lowering within the VectorOp
  dialect.
  PiperOrigin-RevId: 281352749
* Support SPIR-V constant op to take DenseElementsAttr as input. (Hanhan Wang, 2019-11-18; 2 files, -174/+110)
  Iterates over each element to build the array. This includes a little
  refactoring to combine the bool/int/float cases into one function, since
  they are similar; the only difference is the function called at the end.
  PiperOrigin-RevId: 281210288
* Use SmallVectorImpl instead of SmallVector for function parameters (NFC) (Tian Jin, 2019-11-18; 1 file, -1/+1)
  Closes tensorflow/mlir#247
  PiperOrigin-RevId: 281185661
* Lower linalg.indexed_generic to loops. (Alexander Belyaev, 2019-11-18; 3 files, -65/+212)
  PiperOrigin-RevId: 281169885
* Fix SubViewOp stride calculation in constant folding. (Andy Davis, 2019-11-18; 1 file, -6/+8)
  Adds unit tests for subview offset and stride argument constant folding.
  PiperOrigin-RevId: 281161041
* [spirv] Add a canonicalizer for BitcastOp. (Denis Khalikov, 2019-11-18; 1 file, -4/+33)
  Convert chained `spirv::BitcastOp` operations into one `spirv::BitcastOp`
  operation.
  Closes tensorflow/mlir#238
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/238 from denis0x0D:sandbox/canon_bitcast 4352ed4f81b959ec92f849c599e733b62a99c010
  PiperOrigin-RevId: 281129234
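  Illustration (not from the commit); types are illustrative.
  ```
  // Before: two chained bitcasts.
  %0 = spv.Bitcast %arg0 : f32 to i32
  %1 = spv.Bitcast %0 : i32 to vector<2xf16>
  // After canonicalization: a single bitcast.
  %1 = spv.Bitcast %arg0 : f32 to vector<2xf16>
  ```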
* Standardize all VectorOps class names to be prefixed by Vector - NFC (Nicolas Vasilache, 2019-11-18; 1 file, -17/+29)
  This improves consistency and will concretely avoid collisions between
  VectorExtractElementOp and ExtractElementOp when they are included in the
  same transforms / rewrites.
  PiperOrigin-RevId: 281101588
* Implement folding of the pattern dim(subview(_)[...][s1, ..., sn][...], i) -> si. (Stephan Herhut, 2019-11-18; 1 file, -1/+9)
  PiperOrigin-RevId: 281042016
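  Illustration (not from the commit): the size operand %s1 directly replaces
  the dim result; names and types are illustrative.
  ```
  %sv = subview %m[%o0, %o1][%s0, %s1][%t0, %t1]
    : memref<?x?xf32> to memref<?x?xf32, offset: ?, strides: [?, ?]>
  %d = dim %sv, 1 : memref<?x?xf32, offset: ?, strides: [?, ?]>
  // folds to a plain use of %s1.
  ```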
* [spirv] Add bit ops (Denis Khalikov, 2019-11-15; 1 file, -0/+86)
  This CL added op definitions for a few bit operations:
  * OpBitFieldInsert
  * OpBitFieldSExtract
  * OpBitFieldUExtract
  Closes tensorflow/mlir#233
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/233 from denis0x0D:sandbox/bit_field_ops e7fd85b00d72d483d7992dc42b9cc4d673903455
  PiperOrigin-RevId: 280691816
* NFC: Convert CmpIPredicate in StandardOps to use EnumAttr (Lei Zhang, 2019-11-15; 1 file, -70/+19)
  This turns several hand-written functions into auto-generated ones.
  PiperOrigin-RevId: 280684326
* Fix build warnings (Nicolas Vasilache, 2019-11-15; 1 file, -0/+1)
  Delete unused constexpr ints in LowerToLLVMDialect. Add (void)toStringRef
  for non-debug builds.
  Fixes tensorflow/mlir#232.
  PiperOrigin-RevId: 280671014
* Templatize linalg::LowerToLoops - NFC (Nicolas Vasilache, 2019-11-15; 1 file, -65/+80)
  This modification will make it easy to plug the lowering of linalg ops
  into different types of loops (affine, loop.for, and other future
  constructs). This is purely NFC for now.
  PiperOrigin-RevId: 280652186
* Refactor the LowerVectorTransfers pass to use the RewritePattern infra - NFC (Nicolas Vasilache, 2019-11-14; 1 file, -1/+1)
  This is step 1/n in refactoring infrastructure along the Vector dialect to
  make it ready for retargetability and composable progressive lowering.
  PiperOrigin-RevId: 280529784
* NFC: Refactor Dialect Conversion targeting SPIR-V. (Mahesh Ravishankar, 2019-11-14; 2 files, -2/+344)
  Refactoring the conversion from the StandardOps/GPU dialects to the SPIR-V
  dialect:
  1) Move the SPIRVTypeConversion and SPIRVOpLowering classes into the
     SPIR-V dialect.
  2) Add header files that expose functions to add patterns for the dialects
     to SPIR-V lowering, as well as a pass that does the dialect to SPIR-V
     lowering.
  3) Make SPIRVOpLowering derive from the OpLowering class.
  PiperOrigin-RevId: 280486871
* Adds canonicalizer to SubViewOp which folds constants from base memref and operands into the subview result memref type. (Andy Davis, 2019-11-14; 1 file, -4/+161)
  Changes SubViewOp to support the zero-operand case, when offsets, strides,
  and sizes are all constant.
  PiperOrigin-RevId: 280485075
* Move Affine to Standard conversion to lib/Conversion (Alex Zinenko, 2019-11-14; 1 file, -1/+1)
  This is essentially a dialect conversion and conceptually belongs to
  conversions.
  PiperOrigin-RevId: 280460034
* Move VectorOps to Tablegen - (almost) NFC (Nicolas Vasilache, 2019-11-14; 1 file, -288/+97)
  This CL moves VectorOps to Tablegen and cleans up the implementation. This
  is almost NFC, but 2 changes occur:
  1. An interface change occurs in the padding value specification in
     vector_transfer_read: the value becomes non-optional. As a shortcut we
     currently use %f0 for all paddings. This should become an OpInterface
     for vectorization in the future.
  2. The return type of vector.type_cast is trivial and simplified to
     `memref<vector<...>>`.
  Relevant roundtrip and invalid tests that used to sit in core are moved to
  the vector dialect. The op documentation is moved to the .td file.
  PiperOrigin-RevId: 280430869
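  Illustration (not from the commit) of the simplified vector.type_cast
  result type mentioned in point 2:
  ```
  %vm = vector.type_cast %m : memref<8x8xf32> to memref<vector<8x8xf32>>
  ```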
* Use MemRefDescriptor in Linalg-to-LLVM conversion (Alex Zinenko, 2019-11-14; 1 file, -96/+61)
  Following up on the consolidation of MemRef descriptor conversion, update
  the Linalg-to-LLVM conversion to use the helper class that abstracts away
  the implementation details of the MemRef descriptor. This required
  MemRefDescriptor to become publicly visible. Since this conversion is
  heavily EDSC-based, introduce locally an additional wrapper that uses the
  builder and location pointed to by the EDSC context while emitting
  descriptor manipulation operations.
  PiperOrigin-RevId: 280429228
* Replace explicit concatenation by llvm::concat (Nicolas Vasilache, 2019-11-13; 1 file, -4/+2)
  PiperOrigin-RevId: 280258938
* Deprecate linalg.subview in favor of std.subview (Nicolas Vasilache, 2019-11-13; 8 files, -173/+118)
  This CL uses the now-standard std.subview in linalg. Two shortcuts are
  currently taken to allow this port:
  1. The type resulting from a view is currently degraded to be fully
     dynamic in order to pass the SubViewOp verifier.
  2. Indexing into a SubViewOp may access out of bounds, since lowering to
     LLVM does not currently enforce bounds by construction.
  These will be fixed in subsequent commits after discussions.
  PiperOrigin-RevId: 280250129
* Make VariableOp instructions be in the first block in the function. (Hanhan Wang, 2019-11-12; 1 file, -20/+97)
  Since VariableOp is serialized during processBlock, we add two more
  fields, `functionHeader` and `functionBody`, to collect instructions for
  a function. After all the blocks have been processed, we append them to
  `functions`.
  Also, fix a bug in processGlobalVariableOp: the global variables should be
  encoded into `typesGlobalValues`.
  PiperOrigin-RevId: 280105366
* Add support for OpPhi in loop header block (Lei Zhang, 2019-11-12; 3 files, -8/+52)
  During deserialization, the loop header block will be moved into the
  spv.loop's region. If the loop header block has block arguments, we need
  to make sure they are correctly carried over to the block where the new
  spv.loop resides. During serialization, we need to make sure block
  arguments from the spv.loop's entry block are not silently dropped.
  PiperOrigin-RevId: 280021777
* Adds affine.min operation which returns the minimum value from a multi-result affine map. (Andy Davis, 2019-11-12; 1 file, -0/+75)
  This operation is useful for things like computing the dynamic value of
  affine loop bounds, and is trivial to constant fold.
  PiperOrigin-RevId: 279959714
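  Illustration (not from the commit), with the affine map syntax of the era
  assumed: affine.min evaluates every result of the map and returns the
  smallest value.
  ```
  // min(1000, %i + 512), e.g. to clamp a tile size at a loop bound.
  %0 = affine.min (d0) -> (1000, d0 + 512) (%i)
  ```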
* Add support for alignment attribute in std.alloc. (Nicolas Vasilache, 2019-11-12; 2 files, -32/+50)
  This CL adds an extra pointer to the memref descriptor to allow specifying
  alignment.
  In a previous implementation, we used 2 types: `linalg.buffer` and `view`,
  where the buffer type was the unit of allocation/deallocation/alignment
  and `view` was the unit of indexing. After multiple discussions it was
  decided to use a single type, which conflates both, so the memref
  descriptor now needs to carry both pointers.
  This is consistent with the [RFC-Proposed Changes to MemRef and Tensor
  MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ).
  PiperOrigin-RevId: 279959463
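  Illustration (not from the commit): a minimal sketch of the attribute;
  spelling is assumed from the description.
  ```
  // Request a 64-byte-aligned allocation.
  %buf = alloc() {alignment = 64} : memref<4096xf32>
  ```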