path: root/mlir/test/Dialect
Commit message | Author | Age | Files | Lines
...
* Minor spelling tweaks | Kazuaki Ishizaki | 2019-12-09 | 4 files | -4/+4
    Closes tensorflow/mlir#304
    PiperOrigin-RevId: 284568358
* [StructuredOps][Linalg] Add a primitive pattern to rewrite the linalg.generic form of matmul to vector form | Nicolas Vasilache | 2019-12-09 | 1 file | -0/+33
    This CL uses the newly expanded matcher support to easily detect when a linalg.generic has a multiply-accumulate body. A linalg.generic with such a body is rewritten as a vector contraction.
    This CL additionally limits the rewrite to the case of matrix multiplication on contiguous and statically shaped memrefs for now. Before expanding further, we should harden the infrastructure for expressing custom ops with the structured ops abstraction.
    PiperOrigin-RevId: 284566659
* [VecOps] Rename vector.[insert|extract]element to just vector.[insert|extract] | Aart Bik | 2019-12-06 | 2 files | -38/+38
    Since these operations lower to [insert|extract][element|value] at the LLVM dialect level, neither "element" nor "value" would correctly reflect the meaning.
    PiperOrigin-RevId: 284240727
* [VectorOps] Add lowering of vector.broadcast to LLVM IR | Aart Bik | 2019-12-06 | 1 file | -3/+11
    For example, a scalar broadcast

        %0 = vector.broadcast %x : f32 to vector<2xf32>
        return %0 : vector<2xf32>

    expands scalar x into the vector [x,x]. It lowers to the following LLVM IR dialect code, which implements the duplication over the leading dimension:

        %0 = llvm.mlir.undef : !llvm<"<2 x float>">
        %1 = llvm.mlir.constant(0 : index) : !llvm.i64
        %2 = llvm.insertelement %x, %0[%1 : !llvm.i64] : !llvm<"<2 x float>">
        %3 = llvm.shufflevector %2, %0 [0 : i32, 0 : i32] : !llvm<"<2 x float>">, !llvm<"<2 x float>">
        return %3 : vector<2xf32>

    In the trailing dimensions, the operand is simply "passed through", unless a more elaborate "stretch" is required. For example

        %0 = vector.broadcast %arg0 : vector<1xf32> to vector<4xf32>
        return %0 : vector<4xf32>

    becomes

        %0 = llvm.mlir.undef : !llvm<"<4 x float>">
        %1 = llvm.mlir.constant(0 : index) : !llvm.i64
        %2 = llvm.extractelement %arg0[%1 : !llvm.i64] : !llvm<"<1 x float>">
        %3 = llvm.mlir.constant(0 : index) : !llvm.i64
        %4 = llvm.insertelement %2, %0[%3 : !llvm.i64] : !llvm<"<4 x float>">
        %5 = llvm.shufflevector %4, %0 [0 : i32, 0 : i32, 0 : i32, 0 : i32] : !llvm<"<4 x float>">, !llvm<"<4 x float>">
        llvm.return %5 : !llvm<"<4 x float>">

    PiperOrigin-RevId: 284219926
* Unroll vector masks along with their associated vector arguments. | Andy Davis | 2019-12-06 | 2 files | -8/+4
    Updates vector ContractionOp to use proper vector masks (produced by CreateMaskOp/ConstantMaskOp).
    Leverages the following canonicalizations in the unrolling unit test: CreateMaskOp -> ConstantMaskOp, StridedSliceOp(ConstantMaskOp) -> ConstantMaskOp.
    Removes IndexTupleOp (no longer needed now that we have vector mask ops). Updates all unit tests.
    PiperOrigin-RevId: 284182168
* DimOp folding for alloc/view dynamic dimensions | Uday Bondhugula | 2019-12-06 | 1 file | -9/+9
    Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
    Closes tensorflow/mlir#253
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/253 from bondhugula:dimop a4b464f24ae63fd259114558d87e11b8ee4dae86
    PiperOrigin-RevId: 284169689
* LLVM::AddressOfOp: properly take into account the address space | Alex Zinenko | 2019-12-06 | 1 file | -0/+16
    The AddressOf operation in the LLVM dialect returns a pointer to a global variable. The latter may be in a non-default address space, as indicated by the "addr_space" attribute. Check that the address space of the pointer returned by AddressOfOp matches that of the referenced GlobalOp. Update the AddressOfOp builder to respect this constraint.
    PiperOrigin-RevId: 284138860
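    For illustration, a minimal sketch of the constraint being checked (global name, value, and attribute placement are assumptions, not from the patch):
    ```
    llvm.mlir.global internal @gv(42 : i32) {addr_space = 3 : i32} : !llvm.i32
    // The result pointer type must carry the same address space as the global:
    %0 = llvm.mlir.addressof @gv : !llvm<"i32 addrspace(3)*">
    ```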
* [Linalg] Add permutation information to tiling | Jose Ignacio Gomez | 2019-12-05 | 1 file | -0/+70
    This patch closes issue tensorflow/mlir#271. It adds an optional permutation map to declarative tiling transformations. The map is expressed as a list of integers.
    Closes tensorflow/mlir#288
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/288 from tetuante:issue271 2df2938d6a1f01b3bc404ded08dea2dd1e10b588
    PiperOrigin-RevId: 284064151
* [spirv] Add CompositeInsertOp operation | Denis Khalikov | 2019-12-05 | 3 files | -144/+188
    A CompositeInsertOp operation makes a copy of a composite object, while modifying one part of it.
    Closes tensorflow/mlir#292
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/292 from denis0x0D:sandbox/composite_insert 2200962b9057bda53cd2f2866b461e2797196380
    PiperOrigin-RevId: 284036551
* Add spv.AtomicCompareExchangeWeak | Lei Zhang | 2019-12-05 | 2 files | -0/+45
    PiperOrigin-RevId: 283997917
* [spirv] Fix nested loop (de)serialization | Lei Zhang | 2019-12-05 | 1 file | -0/+83
    For serialization, when we have nested ops, the inner loop will create multiple SPIR-V blocks. If the outer loop has block arguments (which correspond to OpPhi instructions), we defer the handling of the OpPhi's parent block until we have serialized all blocks, and then fix it up with the result <id>. These two cases happening together was generating an invalid SPIR-V blob because we previously assumed the parent block to be the block containing the terminator. That is no longer true when the block contains structured control flow ops; if that happens, the parent should instead be the structured control flow op's merge block.
    For deserialization, we record a map from header blocks to their corresponding merge and continue blocks during the initial deserialization and then use that info to construct spv.selection/spv.loop. The existing implementation would also fall apart when we have nested loops: we clone all blocks for the outer loop, including the ones for the inner loop, into the spv.loop's region, so the merge info map for header blocks needs to be updated; otherwise we are operating on already-deleted blocks.
    PiperOrigin-RevId: 283949230
* Add canonicalization patterns for vector CreateMaskOp and StridedSliceOp to be used in the unroll vector op transformation | Andy Davis | 2019-12-04 | 3 files | -0/+129
    Adds a ConstantMaskOp to the vector ops dialect. Adds the following canonicalization patterns:
    CreateMaskOp -> ConstantMaskOp (sketched below)
    StridedSliceOp(ConstantMaskOp) -> ConstantMaskOp
    PiperOrigin-RevId: 283816752
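    A minimal sketch of the first pattern (shape and constants chosen for illustration):
    ```
    %c2 = constant 2 : index
    %c3 = constant 3 : index
    // With all-constant operands, the mask contents are known statically:
    %0 = vector.create_mask %c3, %c2 : vector<4x3xi1>
    // canonicalizes to:
    %0 = vector.constant_mask [3, 2] : vector<4x3xi1>
    ```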
* [spirv] Adding sqrt op in the GLSL extension. | Scott Todd | 2019-12-04 | 2 files | -1/+39
    PiperOrigin-RevId: 283769736
* [spirv] Add spv.GroupNonUniformBallot | Lei Zhang | 2019-12-03 | 3 files | -0/+40
    This CL also did the following cleanup:
    - Moved the test for spv.SubgroupBallotKHR to its own file
    - Wrapped generated canonicalization patterns in anonymous namespace
    - Updated header comments in SPVOps.td
    PiperOrigin-RevId: 283650091
* Add CreateMaskOp to the VectorOps dialect. | Andy Davis | 2019-12-03 | 2 files | -0/+22
    PiperOrigin-RevId: 283591888
* Fix ViewOp to have at most one offset operand | Alex Zinenko | 2019-12-03 | 3 files | -13/+13
    As described in the documentation, ViewOp is expected to take an optional dynamic offset followed by a list of dynamic sizes. However, the ViewOp parser did not include a check for the offset being a single value and accepted a list of values instead. Furthermore, several tests had been exercising the wrong syntax of a ViewOp, passing multiple values to the dynamic stride list, which was not caught by the parser. The trailing values could have been erroneously interpreted as dynamic sizes. This is likely due to resyntaxing of the ViewOp, with the previous syntax taking the list of sizes before the offset. Update the tests to use the syntax with the offset preceding the sizes.
    Worse, the conversion of ViewOp to the LLVM dialect assumed the wrong order of operands, with the offset in the trailing position, and erroneously relied on the permissive parsing that interpreted trailing dynamic offset values as leading dynamic sizes. Fix the lowering to use the correct order of operands.
    PiperOrigin-RevId: 283532506
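    A minimal sketch of the corrected operand order (buffer shape, names, and the elided result layout map are illustrative assumptions, not from the patch):
    ```
    %buf = alloc() : memref<2048xi8>
    // At most one dynamic offset in the first bracket, then the dynamic sizes:
    %v = view %buf[%offset][%size0, %size1] : memref<2048xi8> to memref<?x?xf32>
    ```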
* [spirv] Add spv.SubgroupBallotKHROp | Lei Zhang | 2019-12-03 | 2 files | -0/+21
    PiperOrigin-RevId: 283522284
* Add linkage support to LLVMFuncOp | Alex Zinenko | 2019-12-03 | 1 file | -0/+35
    A recent commit introduced the Linkage attribute to the LLVM dialect and used it in the Global Op. Also use it in LLVMFuncOp. As per the LLVM Language Reference, if the linkage attribute is omitted, the function is assumed to have external linkage.
    PiperOrigin-RevId: 283493299
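    A minimal sketch of the resulting syntax (function names are illustrative):
    ```
    // The linkage keyword follows `llvm.func`; omitting it implies external linkage.
    llvm.func internal @inner() {
      llvm.return
    }
    llvm.func @external_decl()
    ```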
* [VectorOps] Add legality rules to broadcast | Aart Bik | 2019-12-02 | 2 files | -2/+20
    PiperOrigin-RevId: 283360101
* Lower linalg.indexed_generic with libcall to LLVM. | Alexander Belyaev | 2019-12-02 | 1 file | -2/+24
    PiperOrigin-RevId: 283328994
* Introduce Linkage attribute to the LLVM dialect | Alex Zinenko | 2019-12-02 | 2 files | -26/+48
    LLVM IR supports linkage on global objects such as global variables and functions. Introduce the Linkage attribute into the LLVM dialect, backed by an integer storage. Use this attribute on LLVM::GlobalOp and make it mandatory. Implement parsing/printing of the attribute and conversion to LLVM IR.
    See tensorflow/mlir#277.
    PiperOrigin-RevId: 283309328
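    A minimal sketch of globals carrying the now-mandatory linkage (names and values are illustrative):
    ```
    // Linkage is written as a keyword before the symbol name:
    llvm.mlir.global internal @g(42 : i32) : !llvm.i32
    llvm.mlir.global external @h(1.0 : f32) : !llvm.float
    ```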
* [spirv] Check that operand of `spirv::CompositeExtractOp` is constant while folding | Denis Khalikov | 2019-11-28 | 1 file | -0/+11
    Closes tensorflow/mlir#281
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/281 from denis0x0D:sandbox/composite_ex_fold d02d73658bd1b9eaa515eb4e0aee34bc41d4252b
    PiperOrigin-RevId: 282971563
* [Linalg] Change attribute n_loop_types to iterator | Jose Ignacio Gomez | 2019-11-28 | 7 files | -37/+37
    This addresses issue tensorflow/mlir#270. Linalg is updated to take the same form of iterator_types as vector contraction.
    Closes tensorflow/mlir#280
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/280 from tetuante:PRissue270 d26d88d090d3765d3b9884bfabdd023143f27287
    PiperOrigin-RevId: 282905396
* [spirv] Add folders for spv.IAdd and spv.IMul | Lei Zhang | 2019-11-27 | 1 file | -73/+107
    Adding zero and multiplying by one are common when generating code for index calculation. This CL also sorted canonicalize.mlir into alphabetical order.
    PiperOrigin-RevId: 282828055
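    A minimal sketch of the identity folds (operand names are illustrative):
    ```
    %zero = spv.constant 0 : i32
    %one = spv.constant 1 : i32
    %0 = spv.IAdd %x, %zero : i32  // folds to %x
    %1 = spv.IMul %x, %one : i32   // folds to %x
    ```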
* Implement Linalg to loops lowering as a pattern | Nicolas Vasilache | 2019-11-27 | 3 files | -3/+9
    This CL rewrites the linalg ops to loops transformations as patterns that can be targeted directly from Tablegen. Reliance on OpFolder is removed; to cope with it, we introduce local folding patterns that are applied greedily.
    PiperOrigin-RevId: 282765550
* [VectorOps] Refine BroadcastOp in VectorOps dialect | Aart Bik | 2019-11-26 | 2 files | -8/+8
    Since the second argument is always fully overwritten and its shape is defined in the "to" clause, it is not needed. Also renamed "into" to "to" now that the argument is dropped.
    PiperOrigin-RevId: 282686475
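    With the refinement, a broadcast reads, for example (shape chosen for illustration):
    ```
    // Single source operand; the result shape lives entirely in the "to" clause.
    %0 = vector.broadcast %x : f32 to vector<4xf32>
    ```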
* [VectorOps] Add a BroadcastOp to the VectorOps dialect | Aart Bik | 2019-11-26 | 2 files | -0/+16
    PiperOrigin-RevId: 282643305
* Misc changes to lowering to SPIR-V. | Mahesh Ravishankar | 2019-11-26 | 2 files | -19/+12
    These changes to SPIR-V lowering were made while adding support for lowering SubViewOp, but are not directly related to it.
    - Change the lowering of MemRefType to !spv.ptr<!spv.struct<!spv.array<...>[offset]>, ..>. This is consistent with the Vulkan spec.
    - To enable testing, a simple pattern for lowering functions is added to ConvertStandardToSPIRVPass. This is just used to convert the type of the arguments of the function. The added function lowering itself is not meant to be the way functions are eventually lowered into the SPIR-V dialect.
    PiperOrigin-RevId: 282589644
* Add a vector.InsertStridedSliceOp | Nicolas Vasilache | 2019-11-25 | 2 files | -7/+56
    This new op is the counterpart of vector.StridedSliceOp and will be used in the pattern rewrites for vector unrolling.
    PiperOrigin-RevId: 282447414
* Allow LLVM::ExtractElementOp to have non-i32 indices. | MLIR Team | 2019-11-25 | 2 files | -4/+4
    Also change the text format a bit, so that indices are enclosed in square brackets.
    PiperOrigin-RevId: 282437095
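    With the new format, a non-i32 index reads, for example (names illustrative; the same form appears in the vector.broadcast lowering entry above):
    ```
    // An i64 index, enclosed in square brackets:
    %0 = llvm.extractelement %v[%i : !llvm.i64] : !llvm<"<4 x float>">
    ```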
* Update VectorContractionOp to take iterator types and index mapping attributes compatible with linalg ops | Andy Davis | 2019-11-25 | 2 files | -26/+180
    PiperOrigin-RevId: 282412311
* Introduce attributes that specify the final ABI for a spirv::ModuleOp. | Mahesh Ravishankar | 2019-11-25 | 2 files | -0/+158
    To simplify the lowering into SPIR-V, while still respecting the ABI requirements of SPIR-V/Vulkan, split the process into two steps:
    1) While lowering a function to SPIR-V (when the function is an entry point function), allow specifying attributes on arguments and the function itself that describe the ABI of the function.
    2) Add a pass that materializes the ABI described in the function.
    Two attributes are needed:
    1) An attribute on arguments of the entry point function that describes the descriptor_set, binding, storage class, etc. of the spv.globalVariable this argument will be replaced by.
    2) An attribute on the function that specifies workgroup size, etc. (for now only workgroup size).
    Add the pass -spirv-lower-abi-attrs to materialize the ABI described by the attributes.
    This change makes the SPIRVBasicTypeConverter class unnecessary, so it is removed, further simplifying the SPIR-V lowering path.
    PiperOrigin-RevId: 282387587
* Add vector.insertelement op | Nicolas Vasilache | 2019-11-25 | 2 files | -4/+48
    This is the counterpart of the vector.extractelement op and has the same limitations at the moment (a static I64IntegerArrayAttr to express the position). This restriction will be lifted in the future. LLVM lowering will be added in a subsequent commit.
    PiperOrigin-RevId: 282365760
* Introduce gpu.func | Alex Zinenko | 2019-11-25 | 2 files | -0/+71
    Introduce a new function-like operation to the GPU dialect to provide a placeholder for the execution semantic description and to add support for the GPU memory hierarchy. This aligns with the overall goal of the dialect to expose the common abstraction layer for GPU devices, in particular by providing an MLIR unit of semantics (i.e. an operation) for memory modeling.
    This proposal has been discussed in the mailing list: https://groups.google.com/a/tensorflow.org/d/msg/mlir/RfXNP7Hklsc/MBNN7KhjAgAJ
    As decided, the "convergence" aspect of the execution model will be factored out into a new discussion and therefore is not included in this commit.
    This commit only introduces the operation but does not hook it up with the remaining flow. The intention is to develop the new flow while keeping the old flow operational and do the switch in a simple, separately reversible commit.
    PiperOrigin-RevId: 282357599
* [spirv] Add a canonicalizer for `spirv::LogicalNotOp`. | Denis Khalikov | 2019-11-22 | 3 files | -12/+69
    Add a canonicalizer for `spirv::LogicalNotOp`. Converts:
    * spv.LogicalNot(spv.IEqual(...)) -> spv.INotEqual(...) (sketched below)
    * spv.LogicalNot(spv.INotEqual(...)) -> spv.IEqual(...)
    * spv.LogicalNot(spv.LogicalEqual(...)) -> spv.LogicalNotEqual(...)
    * spv.LogicalNot(spv.LogicalNotEqual(...)) -> spv.LogicalEqual(...)
    Also moved the test for spv.IMul to the arithmetic tests.
    Closes tensorflow/mlir#256
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/256 from denis0x0D:sandbox/canon_logical_not 76ab5787b2c777f948c8978db061d99e76453d44
    PiperOrigin-RevId: 282012356
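    For the first pattern, a minimal IR-level sketch (operand names are illustrative):
    ```
    %0 = spv.IEqual %a, %b : i32
    %1 = spv.LogicalNot %0 : i1
    // canonicalizes to:
    %1 = spv.INotEqual %a, %b : i32
    ```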
* Don't force newline before function attributes | Alex Zinenko | 2019-11-21 | 2 files | -5/+5
    Due to legacy reasons, a newline character followed by two spaces was always inserted before the attributes of the function Op in pretty form. This breaks formatting when functions are nested in some other operations. Don't print the newline and just put the attributes on the same line, which is also more consistent with module Op. Line breaking aware of indentation can be introduced separately into the parser if deemed useful.
    PiperOrigin-RevId: 281721793
* Add VectorContractionOp to the VectorOps dialect. | Andy Davis | 2019-11-20 | 2 files | -0/+98
    PiperOrigin-RevId: 281605471
* Extend kernel outlining to also consider dim worth inlining. | Stephan Herhut | 2019-11-20 | 1 file | -1/+1
    PiperOrigin-RevId: 281483447
* Add VectorOps.StridedSliceOp | Nicolas Vasilache | 2019-11-19 | 2 files | -0/+71
    The `vector.strided_slice` op takes an n-D vector, a k-D `offsets` integer array attribute, a k-D `sizes` integer array attribute, and a k-D `strides` integer array attribute, and extracts the n-D subvector at the proper offset.
    Returns an n-D vector where the first k dimensions match the `sizes` attribute. The returned subvector contains the elements starting at offset `offsets` and ending at `offsets + sizes`.
    Example:
    ```
    %1 = vector.strided_slice %0 {offsets : [0, 2], sizes : [2, 4], strides : [1, 1]}:
      vector<4x8x16xf32> // returns a vector<2x4x16xf32>
    ```
    This op will be useful for progressive lowering within the VectorOp dialect.
    PiperOrigin-RevId: 281352749
* Support SPIR-V constant op to take DenseElementsAttr as input. | Hanhan Wang | 2019-11-18 | 2 files | -1/+45
    Iterates over each element to build the array. This includes a little refactoring to combine the bool/int/float cases into one function, since they are similar; the only difference is calling a different function at the end.
    PiperOrigin-RevId: 281210288
* Lower linalg.indexed_generic to loops. | Alexander Belyaev | 2019-11-18 | 2 files | -9/+164
    PiperOrigin-RevId: 281169885
* Add a parseAttribute<AttrType> overload for the non-type case. | River Riddle | 2019-11-18 | 1 file | -1/+1
    The variant that accepts a type will check that the parsed attribute is a valid instance of AttrType. The non-type variant would silently fail in this case, leading to garbage attribute values.
    PiperOrigin-RevId: 281136528
* [spirv] Add a canonicalizer for BitcastOp. | Denis Khalikov | 2019-11-18 | 1 file | -0/+28
    Convert chained `spirv::BitcastOp` operations into one `spirv::BitcastOp` operation.
    Closes tensorflow/mlir#238
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/238 from denis0x0D:sandbox/canon_bitcast 4352ed4f81b959ec92f849c599e733b62a99c010
    PiperOrigin-RevId: 281129234
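    A minimal sketch of the chain rewrite (types chosen for illustration):
    ```
    %0 = spv.Bitcast %arg0 : f32 to i32
    %1 = spv.Bitcast %0 : i32 to vector<2xf16>
    // canonicalizes to a single cast:
    %1 = spv.Bitcast %arg0 : f32 to vector<2xf16>
    ```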
* [spirv] Add bit ops | Denis Khalikov | 2019-11-15 | 2 files | -0/+65
    This CL added op definitions for a few bit operations:
    * OpBitFieldInsert
    * OpBitFieldSExtract
    * OpBitFieldUExtract
    Closes tensorflow/mlir#233
    COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/233 from denis0x0D:sandbox/bit_field_ops e7fd85b00d72d483d7992dc42b9cc4d673903455
    PiperOrigin-RevId: 280691816
* Move VectorOps to Tablegen - (almost) NFC | Nicolas Vasilache | 2019-11-14 | 2 files | -0/+165
    This CL moves VectorOps to Tablegen and cleans up the implementation. This is almost NFC, but 2 changes occur:
    1. an interface change occurs in the padding value specification in vector_transfer_read: the value becomes non-optional. As a shortcut we currently use %f0 for all paddings. This should become an OpInterface for vectorization in the future.
    2. the return type of vector.type_cast is trivial and simplified to `memref<vector<...>>` (sketched below)
    Relevant roundtrip and invalid tests that used to sit in core are moved to the vector dialect. The op documentation is moved to the .td file.
    PiperOrigin-RevId: 280430869
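    For the second change, a minimal sketch of the simplified result type (shape illustrative):
    ```
    %A = alloc() : memref<8x8xf32>
    // The cast wraps the whole memref as a single vector element:
    %V = vector.type_cast %A : memref<8x8xf32> to memref<vector<8x8xf32>>
    ```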
* Deprecate linalg.subview in favor of std.subview | Nicolas Vasilache | 2019-11-13 | 8 files | -258/+174
    This CL uses the now-standard std.subview in linalg. Two shortcuts are currently taken to allow this port:
    1. the type resulting from a view is currently degraded to fully dynamic to pass the SubViewOp verifier.
    2. indexing into SubViewOp may access out of bounds since lowering to LLVM does not currently enforce it by construction.
    These will be fixed in subsequent commits after discussions.
    PiperOrigin-RevId: 280250129
* Make VariableOp instructions be in the first block in the function. | Hanhan Wang | 2019-11-12 | 1 file | -0/+13
    Since VariableOp is serialized during processBlock, we add two more fields, `functionHeader` and `functionBody`, to collect instructions for a function. After all the blocks have been processed, we append them to the `functions`.
    Also, fix a bug in processGlobalVariableOp: the global variables should be encoded into `typesGlobalValues`.
    PiperOrigin-RevId: 280105366
* Add support for OpPhi in loop header block | Lei Zhang | 2019-11-12 | 1 file | -1/+45
    During deserialization, the loop header block will be moved into the spv.loop's region. If the loop header block has block arguments, we need to make sure they are correctly carried over to the block where the new spv.loop resides.
    During serialization, we need to make sure block arguments from the spv.loop's entry block are not silently dropped.
    PiperOrigin-RevId: 280021777
* Add support for alignment attribute in std.alloc. | Nicolas Vasilache | 2019-11-12 | 1 file | -48/+51
    This CL adds an extra pointer to the memref descriptor to allow specifying alignment.
    In a previous implementation, we used 2 types: `linalg.buffer` and `view`, where the buffer type was the unit of allocation/deallocation/alignment and `view` was the unit of indexing. After multiple discussions it was decided to use a single type, which conflates both, so the memref descriptor now needs to carry both pointers.
    This is consistent with the [RFC-Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ).
    PiperOrigin-RevId: 279959463
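    A minimal sketch of requesting aligned memory (the exact attribute spelling and value are assumptions for illustration):
    ```
    // Ask for a 64-byte-aligned buffer; the extra descriptor pointer carries
    // the aligned address after lowering to LLVM.
    %0 = alloc() {alignment = 64} : memref<4096xf32>
    ```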
* Look for SymbolRefAttr in KernelOutlining instead of hard-coding CallOp | MLIR Team | 2019-11-08 | 1 file | -4/+14
    This code should be exercised using the existing kernel outlining unit test, but let me know if I should add a dedicated unit test using a fake call instruction as well.
    PiperOrigin-RevId: 279436321