path: root/mlir/lib/Dialect
...
* [spirv] Implement inliner interface (Lei Zhang, 2019-10-16, 2 files, -0/+73)
  We just need to implement a few interface hooks to DialectInlinerInterface
  and CallOpInterface to gain the benefits of an inliner. :) Right now only
  supports some trivial cases:
  * Inlining single block with spv.Return/spv.ReturnValue
  * Inlining multi block with spv.Return
  * Inlining spv.selection/spv.loop without return ops
  More advanced cases will require block argument and Phi support.
  PiperOrigin-RevId: 275151132
* Makes spv.module generated by GPU->SPIRV conversion spec compliant (Mahesh Ravishankar, 2019-10-16, 3 files, -191/+168)
  Makes the spv.module generated by the GPU to SPIR-V conversion SPIR-V spec
  compliant (validated using spirv-val from Vulkan tools).
  1) Separate out the VulkanLayoutUtils from
     DecorateSPIRVCompositeTypeLayoutPass to make it reusable within the type
     converter in the SPIR-V lowering infrastructure. This is used to compute
     the layout of the !spv.struct used in the global variable type
     description.
  2) Set the capabilities of the spv.module to Shader (needed for use of the
     Logical memory model) and the extensions to
     SPV_KHR_storage_buffer_storage_class (needed for use of the
     StorageBuffer storage class).
  PiperOrigin-RevId: 275081486
* Support custom accumulator provided as region to gpu.all_reduce (Christian Sigg, 2019-10-16, 1 file, -0/+32)
  In addition to specifying the type of accumulation through the 'op'
  attribute, the accumulation can now also be specified as an arbitrary code
  region. Adds a gpu.yield op to specify the result of the accumulation.
  Also supports more types (integers) and accumulations (mul).
  PiperOrigin-RevId: 275065447
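  For illustration, a minimal sketch of the region form in generic MLIR
  syntax (the exact assembly format may differ from the committed one;
  %value and the f32 element type are hypothetical):

    %sum = "gpu.all_reduce"(%value) ({
    ^bb(%lhs: f32, %rhs: f32):
      // Custom accumulator: combine two partial values.
      %partial = addf %lhs, %rhs : f32
      // gpu.yield returns the accumulation result to the enclosing op.
      "gpu.yield"(%partial) : (f32) -> ()
    }) : (f32) -> f32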
* Add support for PatternRewriter::eraseOp (River Riddle, 2019-10-16, 3 files, -11/+9)
  This hook is useful when an operation is known to be dead, and no
  replacement values make sense.
  PiperOrigin-RevId: 275052756
* Fix CMake configuration after introduction of LICM and LoopLikeInterface (Mehdi Amini, 2019-10-16, 2 files, -2/+6)
  b843cc5d5a introduced a new op LICM transformation and a LoopLike
  interface, but missed the CMake aspects of it. This should fix the build.
  PiperOrigin-RevId: 275038533
* Implement simple loop-invariant-code-motion based on dialect interfaces (Stephan Herhut, 2019-10-16, 2 files, -1/+60)
  PiperOrigin-RevId: 275004258
* [spirv] Add support for SpecId decoration on spv.specConstant (Lei Zhang, 2019-10-15, 3 files, -35/+88)
  The SpecId decoration is the handle for providing external specialization.
  Similar to descriptor set and binding on global variables, we directly
  bake it into assembly parsing and printing.
  PiperOrigin-RevId: 274893879
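  As an illustration, a spec constant carrying a SpecId decoration might be
  written roughly as follows (a sketch only; the symbol name, value, and
  exact keyword spelling are assumptions, not taken from the commit):

    // SpecId 0 is the externally visible handle for specialization.
    spv.specConstant @gl_local_size_x spec_id(0) = 32 : i32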
* Fix linalg.subview behavior in (partially) static cases (Nicolas Vasilache, 2019-10-14, 3 files, -21/+44)
  When the implementation of the strided memref
  [RFC](https://groups.google.com/a/tensorflow.org/forum/#!msg/mlir/MaL8m2nXuio/1scRqZa6AQAJ)
  landed, linalg started using this type instead of the now retired
  !linalg.view. As static and partially static cases appear, the stride
  information needs to be maintained properly. In particular, the result
  type of the subview op was generally incorrect.
  This CL fixes the issue by computing a return type that:
  1. always has dynamic sizes, which is generally the only correct way to
     construct a subview in the absence of data padding and/or code
     versioning.
  2. has the same strides as the base strided memref.
  Point 1. above can be further refined but will need further analysis and
  canonicalization to optimize the particular case where:
  1. The base memref has static size along a given dimension.
  2. The subview size can be statically derived (e.g. after
     canonicalization).
  3. *And* the subview size is an even divisor of the base memref.
  This 3rd constraint is well-known in the case of tiled layouts that don't
  assume implicit padding: the boundary tile may be only partial and has
  size given by `problem_size % tile_size`.
  Tests are updated as appropriate.
  PiperOrigin-RevId: 274578624
* Add lowering of VectorOps dialect to LLVM to the Linalg LLVM lowering pass (Nicolas Vasilache, 2019-10-14, 1 file, -0/+2)
  This fixes an omission that prevented Linalg from lowering generic ops
  whose regions operate on ops in the VectorOps dialect. To achieve this we
  simply need to `populateVectorToLLVMConversionPatterns` in the conversion.
  Relevant tests are added.
  PiperOrigin-RevId: 274577325
* Add LLVM IR dialect hooks for FP128 and X86_FP80 types (Eric Schweitz, 2019-10-11, 1 file, -1/+10)
  Closes tensorflow/mlir#184
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/184 from schweitzpgi:more-float-types ca27d00510a86ffc9c79c65fb3a0193b5ea097a0
  PiperOrigin-RevId: 274288813
* LLVM Dialect: introduce llvm.mlir.null operation (Alex Zinenko, 2019-10-11, 1 file, -0/+27)
  Similarly to `llvm.mlir.undef`, this auxiliary operation creates an SSA
  value that corresponds to `null` in LLVM IR. This operation is necessary
  to model sizeof(<...>) behavior when allocating memory.
  PiperOrigin-RevId: 274158760
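  A sketch of the classic sizeof idiom this enables, in the era's
  wrapped-type syntax (the surrounding gep/ptrtoint ops and exact type
  spellings are illustrative assumptions):

    // null of the element pointer type.
    %null = llvm.mlir.null : !llvm<"float*">
    %one = llvm.mlir.constant(1 : index) : !llvm.i64
    // Address of element 1 past null == size of one element in bytes.
    %gep = llvm.getelementptr %null[%one] : (!llvm<"float*">, !llvm.i64) -> !llvm<"float*">
    %sizeof = llvm.ptrtoint %gep : !llvm<"float*"> to !llvm.i64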
* Add unary ops and ExpOp to Standard Dialect (Alexander Belyaev, 2019-10-11, 1 file, -2/+17)
  PiperOrigin-RevId: 274152154
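  A plausible use in standard dialect assembly (a sketch; operand names and
  the vector form are assumptions):

    %y = exp %x : f32
    // Unary std ops typically also apply elementwise to float vectors.
    %v = exp %w : vector<4xf32>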
* Remove the need to convert operations in regions of operations that have been replaced (River Riddle, 2019-10-10, 1 file, -9/+0)
  When an operation with regions gets replaced, we currently require that
  all of the remaining nested operations still be converted even though they
  are going to be replaced when the rewrite is finished. This CL adds
  tracking for a minimal set of operations that are known to be "dead". This
  allows for ignoring the legalization of operations that won't survive
  after conversion.
  PiperOrigin-RevId: 274009003
* Use llvm.func to define functions with wrapped LLVM IR function type (Alex Zinenko, 2019-10-10, 3 files, -22/+34)
  This function-like operation allows one to define functions that have a
  wrapped LLVM IR function type, in particular variadic functions. The
  operation was added in parallel to the existing lowering flow; this commit
  only switches the flow to use it.
  Using a custom function type makes the LLVM IR dialect type system more
  consistent and avoids complex conversion rules for functions that
  previously had to use the built-in function type instead of a wrapped LLVM
  IR dialect type and perform conversions during the analysis.
  PiperOrigin-RevId: 273910855
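  For instance, a variadic declaration might look roughly like this in the
  era's wrapped-type syntax (a sketch; printf is just a familiar stand-in,
  not taken from the commit):

    // Variadic signature expressible only with the wrapped LLVM type.
    llvm.func @printf(!llvm<"i8*">, ...) -> !llvm.i32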
* Fix typo in QuantizedType method names (Kazuaki Ishizaki, 2019-10-09, 2 files, -6/+6)
  Closes tensorflow/mlir#172
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/172 from kiszk:quantops e27b57eac8f4c6ef7ee6a6f7b497d3e2f56f6798
  PiperOrigin-RevId: 273879164
* Allow dynamic but ranked types in ops with SameOperandsAndResultShape and SameOperandsAndResultType traits (Smit Hinsu, 2019-10-08, 1 file, -18/+1)
  Currently the SameOperandsAndResultShape trait allows operands to have
  tensor<*xf32> and tensor<2xf32> but doesn't allow tensor<?xf32> and
  tensor<10xf32>.
  Also, use the updated shape compatibility helper function in the
  TensorCastOp::areCastCompatible method.
  PiperOrigin-RevId: 273658336
* Add support for parsing/printing non bare-identifier SymbolRefs (River Riddle, 2019-10-08, 2 files, -13/+18)
  The restriction that symbols can only have identifier names is arbitrary,
  and artificially limits the names that a symbol may have. This change adds
  support for parsing and printing symbols that don't fit in the
  'bare-identifier' grammar by printing the reference in quotes, e.g.
  @"0_my_reference" can now be used as a symbol name.
  PiperOrigin-RevId: 273644768
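  A sketch of what this enables (the function and its call site are
  hypothetical, built around the @"0_my_reference" example from the
  message):

    func @"0_my_reference"() {
      return
    }
    // A quoted symbol reference resolves like any other:
    call @"0_my_reference"() : () -> ()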
* [spirv] Add a pass to decorate the composite types with layout info (Denis Khalikov, 2019-10-08, 4 files, -3/+334)
  Add a pass to decorate the composite types used by composite objects in
  the StorageBuffer, PhysicalStorageBuffer, Uniform, and PushConstant
  storage classes with layout information.
  Closes tensorflow/mlir#156
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/156 from denis0x0D:sandbox/layout_info_decoration 7c50840fd38ca169a2da7ce9886b52b50c868b84
  PiperOrigin-RevId: 273634140
* Linalg to LLVM lowering: decrease the reliance on symbol lookup in a module (Alex Zinenko, 2019-10-08, 1 file, -19/+20)
  During the conversion, both the original and the converted function may
  coexist in the module and have the same symbol name. There is no guarantee
  which of the two will be found by the symbol lookup. Avoid returning the
  result of the library function lookup when lowering Linalg to Standard or
  LLVM. Use the symbol reference instead. After the conversion completes,
  only one symbol will remain and the ops using SymbolRefAttrs will be
  referring to the correct one.
  PiperOrigin-RevId: 273510079
* Use named modules for gpu.launch_func (Alex Zinenko, 2019-10-08, 2 files, -47/+122)
  The kernel function called by gpu.launch_func is now placed into an
  isolated nested module during the outlining stage to simplify separate
  compilation. Until recently, modules did not have names and could not be
  referenced. This limitation was circumvented by introducing a stub kernel
  with the same name, at the same nesting level as the module containing the
  actual kernel. This relation is only effective in one direction: from the
  actual kernel function to its launch_func "caller".
  Leverage the recently introduced symbol name attributes on modules to
  refer to a specific nested module from `gpu.launch_func`. This removes the
  implicit connection between the identically named stub and kernel
  functions. It also enables support for `gpu.launch_func`s calling
  different kernels located in the same module.
  PiperOrigin-RevId: 273491891
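  A rough sketch of the shape this enables (module/function names, attribute
  spellings, and the operand list are illustrative assumptions, not lifted
  from the commit):

    module @kernels attributes {gpu.kernel_module} {
      func @add_one(%arg0: f32) attributes {gpu.kernel} {
        return
      }
    }
    // The launch site now refers to the nested module by symbol name:
    "gpu.launch_func"(%gx, %gy, %gz, %bx, %by, %bz, %data)
        {kernel = "add_one", kernel_module = @kernels}
        : (index, index, index, index, index, index, f32) -> ()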
* Upgrade some uses of mlir::interleave API to take container argument directly (Jing Pu, 2019-10-07, 1 file, -6/+2)
  PiperOrigin-RevId: 273446814
* Expose `fuseProducerOf` in Linalg/Utils/Utils.h (MLIR Team, 2019-10-07, 1 file, -11/+6)
  PiperOrigin-RevId: 273384063
* Do not add spirv::BitcastOp for cast from signed to unsigned type (Mahesh Ravishankar, 2019-10-07, 1 file, -0/+75)
  Since MLIR integer types don't make a distinction between signed vs
  unsigned integers, during deserialization of SPIR-V binaries, the
  OpBitcast might result in a cast from/to the same type. Do not add a
  spv.Bitcast operation to the spv.module in these cases.
  PiperOrigin-RevId: 273381887
* Update UndefOp (de)serialization to generate OpUndef at module level (Mahesh Ravishankar, 2019-10-07, 2 files, -0/+55)
  The SPIR-V spec recommends all OpUndef instructions be generated at module
  level. For the SPIR-V dialect it's better for UndefOp to produce an SSA
  value for use with other instructions. If UndefOp is to be used at module
  level, it cannot produce an SSA value (use of this SSA value within FuncOp
  would need implicit capture). To satisfy the needs of the SPIR-V spec
  while making it simpler to represent UndefOp in the SPIR-V dialect, the
  serialization is updated to create OpUndef instructions at module scope.
  PiperOrigin-RevId: 273355526
* [spirv] Fix function entry block erase after moving to spv.selection (Lei Zhang, 2019-10-07, 2 files, -34/+58)
  The structured selection/loop's entry block does not have arguments. If
  the function's header block is also part of the structured control flow,
  we cannot just simply erase it because it may contain arguments matching
  the function signature and used by the cloned blocks. Instead, turn it
  into a block only containing a spv.Branch op.
  Also, we can directly emit instructions for the spv.selection header block
  to the block containing the spv.selection op. This eliminates unnecessary
  branches in the SPIR-V blob.
  Added a test for nested spv.loop.
  PiperOrigin-RevId: 273351424
* Support AllocOp terminal in Linalg::AliasAnalysis (Nicolas Vasilache, 2019-10-07, 2 files, -0/+6)
  Now that linalg.view and strided memrefs are unified, there is no reason
  to disallow AllocOp in alias analysis. This CL adds support for AllocOp,
  which allows writing shorter tests that do not require explicitly creating
  a view for each operation.
  PiperOrigin-RevId: 273303060
* [spirv] Allow return ops to be in control flow ops (Lei Zhang, 2019-10-04, 1 file, -2/+2)
  Use `getParentOfType<FunctionOp>()` instead of
  `cast<FuncOp>(getParentOp())` to avoid crash when return ops are used
  inside spv.selection/spv.loop.
  PiperOrigin-RevId: 273006041
* Add spv.Undef op to support OpUndef instruction in SPIR-V (Mahesh Ravishankar, 2019-10-04, 1 file, -0/+17)
  Adding support for OpUndef instruction. Updating the dialect generation
  script to fix a few bugs in the instruction spec generation.
  PiperOrigin-RevId: 272975685
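  A sketch of how an undef value might be produced and consumed in the
  dialect (the assembly mnemonic casing and the FAdd use site are
  assumptions):

    %0 = spv.undef : f32
    // The undef result is an ordinary SSA value usable by other ops.
    %1 = spv.FAdd %0, %0 : f32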
* Add some utility builder functions for SPIR-V operations (Mahesh Ravishankar, 2019-10-04, 2 files, -10/+48)
  Add builder functions for spv._address_of, spv.EntryPoint,
  spv.ExecutionMode and spv.Load to make it easier to create these
  operations.
  Fix a minor bug in printing of spv.EntryPoint.
  Add a utility function to get the attribute name associated with a
  decoration.
  PiperOrigin-RevId: 272952846
* Add missing Linalg lowerings to allow roundtrip.mlir to lower to LLVM (Nicolas Vasilache, 2019-10-04, 1 file, -20/+47)
  Certain lowering patterns were reported as
  [missing](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/dkdmHa77sSQ).
  This CL adds them and allows Linalg/roundtrip.mlir and Linalg/loops.mlir
  to lower to LLVM directly. Those 2 tests are updated to additionally check
  that the direct lowering to LLVM does not crash.
  The following points, left as TODOs, still need to be addressed for
  correct end-to-end execution:
  1. the lowering for ConvOp needs to pass attributes such as strides and
     dilations; the external library call needs to support it.
  2. the lowering for GenericOp needs to support lowering to loops as a
     DialectConversion pattern. This is blocked on the DialectConversion
     infrastructure accepting an OperationFolder.
  PiperOrigin-RevId: 272878131
* Fix typos, NFC (Christian Sigg, 2019-10-04, 4 files, -10/+10)
  PiperOrigin-RevId: 272851237
* Add `axis` attribute to the quant.stats op (Feng Liu, 2019-10-03, 1 file, -0/+1)
  The length of the first dimension of the axisStats attribute should equal
  the slice size of the input argument when split along the axis dimension.
  PiperOrigin-RevId: 272798042
* Add fpext and fptrunc to the Standard dialect and include conversion to LLVM (MLIR Team, 2019-10-03, 1 file, -0/+22)
  PiperOrigin-RevId: 272768027
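  Plausible standard-dialect assembly for the new casts (a sketch; operand
  names are hypothetical):

    // Widen then narrow a floating-point value.
    %wide = fpext %x : f32 to f64
    %narrow = fptrunc %wide : f64 to f32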
* NFC: rename Conversion/ControlFlowToCFG to Conversion/LoopToStandard (Alex Zinenko, 2019-10-03, 1 file, -1/+1)
  This makes the name of the conversion pass more consistent with the naming
  scheme, since it actually converts from the Loop dialect to the Standard
  dialect rather than working with arbitrary control flow operations.
  PiperOrigin-RevId: 272612112
* Extract MemRefType::getStridesAndOffset as a free function and fix dynamic offset determination (Nicolas Vasilache, 2019-10-02, 1 file, -2/+2)
  This also adds coverage with a missing test, which uncovered a bug in the
  conditional for testing whether an offset is dynamic or not.
  PiperOrigin-RevId: 272505798
* [spirv] Add support for spv.selection (Lei Zhang, 2019-10-02, 3 files, -81/+287)
  Similar to spv.loop, spv.selection is another op for modelling SPIR-V
  structured control flow. It covers both OpBranchConditional and OpSwitch
  with OpSelectionMerge.
  Instead of having a `spv.SelectionMerge` op to directly model the
  selection merge instruction for indicating the merge target, we use
  regions to delimit the boundary of the selection: the merge target is the
  next op following the `spv.selection` op. This way it's easier to discover
  all blocks belonging to the selection and it plays nicer with the MLIR
  system.
  PiperOrigin-RevId: 272475006
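  A skeletal sketch of the region-based form (the block structure and the
  spv._merge terminator spelling are assumptions modelled on the dialect's
  spv.loop convention):

    spv.selection {
      spv.BranchConditional %cond, ^then, ^merge
    ^then:
      // ... ops for the true branch ...
      spv.Branch ^merge
    ^merge:
      // The op following spv.selection is the actual merge target.
      spv._merge
    }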
* Unify Linalg types by using strided memrefs (Nicolas Vasilache, 2019-10-01, 7 files, -586/+221)
  This CL finishes the implementation of the Linalg + Affine type
  unification of the [strided memref
  RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
  As a consequence, the !linalg.view type, linalg::DimOp, linalg::LoadOp and
  linalg::StoreOp can now disappear and Linalg can use standard types
  everywhere.
  PiperOrigin-RevId: 272187165
* [spirv] Add array length check (Denis Khalikov, 2019-09-30, 2 files, -0/+9)
  According to the SPIR-V spec:
  "Length is the number of elements in the array. It must be at least 1."
  Closes tensorflow/mlir#160
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/160 from denis0x0D:sandbox/array_len 0840dc0986ad0088a3aa7d5d8d3e97d489377ed9
  PiperOrigin-RevId: 272094669
* Add support for Logical Ops in SPIR-V dialect (Mahesh Ravishankar, 2019-09-30, 1 file, -10/+26)
  Add operations corresponding to OpLogicalAnd, OpLogicalNot,
  OpLogicalEqual, OpLogicalNotEqual and OpLogicalOr instructions in SPIR-V
  dialect. This needs changes to the class hierarchy in SPIR-V TableGen
  files to split SPIRVLogicalOp into SPIRVLogicalUnaryOp and
  SPIRVLogicalBinaryOp. All derived classes of SPIRVLogicalOp are updated
  accordingly.
  Update the spirv dialect generation script to
  1) Allow specifying the base class to use for instruction spec generation
     and the file name to generate the specification in separately.
  2) Use the existing descriptions for operations.
  3) Update define_inst.sh to also invoke define_opcode.sh to define the
     corresponding SPIR-V instruction opcode enum.
  PiperOrigin-RevId: 272014876
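  For example, the binary and unary forms likely read along these lines
  (a sketch; the i1 operands are hypothetical):

    %and = spv.LogicalAnd %a, %b : i1
    %not = spv.LogicalNot %and : i1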
* NFC - clean up op accessor usage, std.load/store op verify, other stale info (Uday Bondhugula, 2019-09-27, 3 files, -52/+33)
  Also remove stale terminology/references in docs.
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#148
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/148 from bondhugula:cleanup e846b641a3c2936e874138aff480a23cdbf66591
  PiperOrigin-RevId: 271618279
* Promote MemRefDescriptor to a pointer to struct when passing function boundaries in LLVMLowering (Nicolas Vasilache, 2019-09-27, 1 file, -7/+11)
  The strided MemRef RFC discusses a normalized descriptor and interaction
  with library calls
  (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
  Lowering of nested LLVM structs as value types does not play nicely with
  externally compiled C/C++ functions due to ABI issues. Solving the ABI
  problem generally is a very complex problem and most likely involves
  taking a dependence on clang that we do not want atm. A simple workaround
  is to pass pointers to memref descriptors at function boundaries, which
  this CL implements.
  PiperOrigin-RevId: 271591708
* [ROCm] Adding ROCDL Dialect (Deven Desai, 2019-09-27, 2 files, -0/+91)
  This commit introduces the ROCDL Dialect (i.e. the ROCDL ops + the code to
  lower those ROCDL ops to LLVM intrinsics/functions). Think of the ROCDL
  Dialect as analogous to the NVVM Dialect, but for AMD GPUs.
  This patch contains just the essentials needed to get a simple example up
  and running. We expect to make further additions to the ROCDL Dialect.
  This is the first of 3 commits; the follow-ups will:
  * add a pass that lowers GPU Dialect to ROCDL Dialect
  * add a "mlir-rocm-runner" utility
  Closes tensorflow/mlir#146
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/146 from deven-amd:deven-rocdl-dialect e78e8005c75a78912631116c78dc844fcc4b0de9
  PiperOrigin-RevId: 271511259
* Decouple tiling from fusion in Linalg (Nicolas Vasilache, 2019-09-26, 3 files, -162/+140)
  This CL modifies the linalg-fusion pass such that it does not tile anymore
  as part of the pass. Tiling is a separate concern that enables linalg
  fusion but should happen before. This makes fusion more composable with
  other decisions. In particular the fusion pass now becomes greedy and only
  applies the transformation on a best-effort basis. This should also let
  fusion work in a multi-hop fashion with chains of producers/consumers.
  Since the fusion pass does not perform tiling anymore, tests are rewritten
  to be in pretiled form and make the intent of the tests clearer (albeit
  more verbose).
  PiperOrigin-RevId: 271357741
* Remove unused variables and methods to address compiler warnings (Lei Zhang, 2019-09-25, 2 files, -11/+1)
  PiperOrigin-RevId: 271256784
* Add spv.Bitcast operation to SPIR-V dialect (Mahesh Ravishankar, 2019-09-25, 1 file, -0/+73)
  Support the OpBitcast instruction of SPIR-V using the spv.Bitcast
  operation. The semantics implemented in the dialect differ from the SPIR-V
  spec in that the dialect does not allow conversion to/from pointer types
  from/to non-pointer types.
  PiperOrigin-RevId: 271255957
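  For instance, a same-width scalar reinterpretation might be written as
  follows (a sketch; the operand name is hypothetical):

    // Reinterpret the bits of an f32 as an i32; no pointer types involved.
    %bits = spv.Bitcast %f : f32 to i32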
* [spirv] Add SPV_UnaryOp and spv.FNegate (Lei Zhang, 2019-09-25, 1 file, -57/+56)
  This CL also moves common parsers and printers to the same section in
  SPIRVOps.cpp.
  PiperOrigin-RevId: 271233546
* Miscellaneous fixes to SPIR-V Deserializer (details below) (Mahesh Ravishankar, 2019-09-24, 1 file, -4/+17)
  1) Process and ignore the following debug instructions: OpSource,
     OpSourceContinued, OpSourceExtension, OpString, OpModuleProcessed.
  2) While processing the OpTypeInt instruction, ignore the signedness
     specification. Currently MLIR doesn't make a distinction between signed
     and unsigned integer types.
  3) Process and ignore the BufferBlock decoration (similar to the Buffer
     decoration). StructType needs to be enhanced to track this attribute
     since it's needed for proper validation checks.
  4) Report a better error for unhandled instructions during
     deserialization.
  PiperOrigin-RevId: 271057060
* [spirv] Replace bitwiseCast with llvm::bit_cast (Lei Zhang, 2019-09-24, 1 file, -11/+3)
  PiperOrigin-RevId: 271035618
* Introduce splat op + provide its LLVM lowering (Uday Bondhugula, 2019-09-24, 1 file, -8/+57)
  - introduce splat op in standard dialect (currently for int/float/index
    input type; output type can be vector or statically shaped tensor)
  - implement LLVM lowering (when result type is 1-d vector)
  - add constant folding hook for it
  - while on Ops.cpp, fix some stale names
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#141
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/141 from bondhugula:splat 48976a6aa0a75be6d91187db6418de989e03eb51
  PiperOrigin-RevId: 270965304
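  Likely assembly for the new op (a sketch; the scalar operand %s is
  hypothetical):

    // Broadcast a scalar into every element of the result.
    %v = splat %s : vector<4xf32>
    %t = splat %s : tensor<8x16xf32>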
* Clone called functions into nested GPU module (Christian Sigg, 2019-09-24, 1 file, -7/+31)
  PiperOrigin-RevId: 270891190