path: root/mlir/lib
...
* Enable emitting dialect summary & description during op generation (Jacques Pienaar, 2019-10-05; 1 file, -0/+26)
  Sort ops per dialect and emit summary & description (if provided) of each dialect before emitting the ops of the dialect. PiperOrigin-RevId: 273077138
* Allow element type traits to operate on scalars (Geoffrey Martin-Noble, 2019-10-05; 1 file, -22/+8)
  This allows confirming that a scalar argument has the same element type as a shaped one. It's easy to validate a type is shaped on its own if that's desirable, so this shouldn't make that use case harder. This matches the behavior of other traits that operate on element type (e.g. AllElementTypesMatch). Also this makes the code simpler because now we just use getElementTypeOrSelf. Verified that all uses in core already check the type is shaped in another way. PiperOrigin-RevId: 273068507
* [spirv] Allow return ops to be in control flow ops (Lei Zhang, 2019-10-04; 1 file, -2/+2)
  Use `getParentOfType<FunctionOp>()` instead of `cast<FuncOp>(getParentOp())` to avoid a crash when return ops are used inside spv.selection/spv.loop. PiperOrigin-RevId: 273006041
* Add spv.Undef op to support OpUndef instruction in SPIR-V. (Mahesh Ravishankar, 2019-10-04; 1 file, -0/+17)
  Adding support for OpUndef instruction. Updating the dialect generation script to fix a few bugs in the instruction spec generation. PiperOrigin-RevId: 272975685
* Add some utility builder functions for SPIR-V operations. (Mahesh Ravishankar, 2019-10-04; 2 files, -10/+48)
  Add builder functions for spv._address_of, spv.EntryPoint, spv.ExecutionMode and spv.Load to make it easier to create these operations. Fix a minor bug in printing of spv.EntryPoint. Add a utility function to get the attribute name associated with a decoration. PiperOrigin-RevId: 272952846
* Replace constexpr MemRefType::kDynamicStrideOrOffset by a MemRefType::getDynamicStrideOrOffset() method - NFC (Nicolas Vasilache, 2019-10-04; 3 files, -15/+15)
  This fixes global ODR-use issues, some of which manifest in Parser.cpp. Fixes tensorflow/mlir#167. PiperOrigin-RevId: 272886347
* Add missing Linalg lowerings to allow roundtrip.mlir to lower to LLVM (Nicolas Vasilache, 2019-10-04; 1 file, -20/+47)
  Certain lowering patterns were reported as [missing](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/dkdmHa77sSQ). This CL adds them and allows Linalg/roundtrip.mlir and Linalg/loops.mlir to lower to LLVM directly. Those 2 tests are updated to additionally check that the direct lowering to LLVM does not crash. The following points, left as TODOs, still need to be addressed for correct end-to-end execution: 1. the lowering for ConvOp needs to pass attributes such as strides and dilations; the external library call needs to support it. 2. the lowering for GenericOp needs to support lowering to loops as a DialectConversion pattern. This is blocked on the DialectConversion infrastructure accepting an OperationFolder. PiperOrigin-RevId: 272878131
* Moving the GPUIndexIntrinsicOpLowering template to a common location (Deven Desai, 2019-10-04; 3 files, -139/+96)
  The GPUIndexIntrinsicOpLowering template is currently used by the code in both the GPUToNVVM and GPUToROCDL dirs. Moving it to a common location to remove code duplication. Closes tensorflow/mlir#163 COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/163 from deven-amd:deven-refactor-gpu-index-ops-lowering b8dc2a5f5353df196039b6ff2ad42106028693ed PiperOrigin-RevId: 272863297
* Fix typos, NFC. (Christian Sigg, 2019-10-04; 19 files, -30/+30)
  PiperOrigin-RevId: 272851237
* Add support for inlining calls with different arg/result types from the callable. (River Riddle, 2019-10-03; 2 files, -36/+106)
  Some dialects have implicit conversions inherent in their modeling, meaning that a call may have a different type than the type that the callable expects. To support this, a hook is added to the dialect interface that allows for materializing conversion operations during inlining when there is a mismatch. A hook is also added to the callable interface to allow for introspecting the expected result types. PiperOrigin-RevId: 272814379
* Update the Inliner pass to work on SCCs of the CallGraph. (River Riddle, 2019-10-03; 2 files, -26/+163)
  This allows for the inliner to work on arbitrary call operations. The updated inliner will also work bottom-up through the callgraph enabling support for multiple levels of inlining. PiperOrigin-RevId: 272813876
* Add `axis` attribute to the quant.stats op (Feng Liu, 2019-10-03; 2 files, -3/+4)
  The first dim length of the axisStats attribute should equal the slice size of the input argument when split along the axis dimension. PiperOrigin-RevId: 272798042
* Add fpext and fptrunc to the Standard dialect and include conversion to LLVM (MLIR Team, 2019-10-03; 2 files, -3/+35)
  PiperOrigin-RevId: 272768027
* Generalize parse/printBinaryOp to parse/printOneResultOp. (Christian Sigg, 2019-10-03; 1 file, -9/+10)
  PiperOrigin-RevId: 272722539
* Add syntactic sugar for strided memref parsing. (Nicolas Vasilache, 2019-10-03; 3 files, -13/+104)
  This CL implements the last remaining bit of the [strided memref proposal](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio). The syntax is a bit more explicit than what was originally proposed and resembles: `memref<?x?xf32, offset: 0 strides: [?, 1]>` Nonnegative strides and offsets are currently supported. Future extensions will include negative strides. This also gives a concrete example of syntactic sugar for the [[RFC] Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/-wKHANzDNTg). The underlying implementation still uses AffineMap layout. PiperOrigin-RevId: 272717437
* Make Module::getName return Optional<StringRef> (Alex Zinenko, 2019-10-03; 1 file, -14/+12)
  Module names are optional so it makes more sense to take and return an optional any time the name is involved. Also update the language reference to reflect the module names. PiperOrigin-RevId: 272684698
* Give modules a name (Alex Zinenko, 2019-10-03; 1 file, -7/+31)
  Modules are now Ops and, as such, can be nested. They do not produce an SSA value so there is no possibility to refer to them in the IR. Introduce support for symbol names attached to the module Op so that it can be referred to using SymbolRefAttrs. The name is optional; for example, the implicit top-level module does not have a name. PiperOrigin-RevId: 272671600
* Add parentheses around boolean operators in assert (Alex Zinenko, 2019-10-03; 1 file, -2/+3)
  This removes a warning and is generally a good practice. PiperOrigin-RevId: 272613597
* NFC: rename Conversion/ControlFlowToCFG to Conversion/LoopToStandard (Alex Zinenko, 2019-10-03; 6 files, -18/+19)
  This makes the name of the conversion pass more consistent with the naming scheme, since it actually converts from the Loop dialect to the Standard dialect rather than working with arbitrary control flow operations. PiperOrigin-RevId: 272612112
* Disallow index types in memrefs. (Alex Zinenko, 2019-10-03; 1 file, -0/+7)
  As specified in the MLIR language reference and rationale documents, `memref` types should not be allowed to have `index` as element types. As observed in https://groups.google.com/a/tensorflow.org/forum/#!msg/mlir/P49hVWqTMNc/nW89a4i_AgAJ this restriction was lifted when canonicalization unit tests for affine operations were introduced, without sufficient motivation to lift the restriction itself. The test in question can be trivially rewritten (return the value from a function instead of storing it to prevent DCE from removing the producer operation) and the restriction put back in place. If `memref<...x index>` is relevant for some use cases, the relaxation of the type system can be implemented separately with appropriate modifications to the documentation. PiperOrigin-RevId: 272607043
* Extract MemRefType::getStridesAndOffset as a free function and fix dynamic offset determination. (Nicolas Vasilache, 2019-10-02; 3 files, -18/+18)
  This also adds coverage with a missing test, which uncovered a bug in the conditional for testing whether an offset is dynamic or not. PiperOrigin-RevId: 272505798
* [spirv] Add support for spv.selection (Lei Zhang, 2019-10-02; 3 files, -81/+287)
  Similar to spv.loop, spv.selection is another op for modelling SPIR-V structured control flow. It covers both OpBranchConditional and OpSwitch with OpSelectionMerge. Instead of having a `spv.SelectionMerge` op to directly model the selection merge instruction for indicating the merge target, we use regions to delimit the boundary of the selection: the merge target is the next op following the `spv.selection` op. This way it's easier to discover all blocks belonging to the selection and it plays nicer with the MLIR system. PiperOrigin-RevId: 272475006
* [ROCm] Adding pass to lower GPU Dialect to ROCDL Dialect. (Deven Desai, 2019-10-02; 3 files, -0/+159)
  This is a follow-up to PR tensorflow/mlir#146 which introduced the ROCDL Dialect. This PR introduces a pass to lower the GPU Dialect to the ROCDL Dialect. As with the previous PR, this one builds on the work done by @whchung, and addresses most of the review comments in the original PR. Closes tensorflow/mlir#154 COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/154 from deven-amd:deven-lower-gpu-to-rocdl 809893e08236da5ab6a38e3459692fa04247773d PiperOrigin-RevId: 272390729
* Show type even if elementsattr is elided in graph (Jacques Pienaar, 2019-10-02; 1 file, -2/+4)
  The type is quite useful for debugging and shouldn't be too large. PiperOrigin-RevId: 272390311
* Add a pair of hooks to DominanceInfo. (Eric Schweitz, 2019-10-01; 1 file, -0/+11)
  This exposes hooks for accessing internal dominance nodes, and updating the internal DFS numbers. Closes tensorflow/mlir#151 COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/151 from schweitzpgi:dominance_hooks 69d14214a423b816cbd59feffcacdd02f3b5f921 PiperOrigin-RevId: 272287352
* Fix and simplify CallOp/CallIndirectOp to LLVM::CallOp conversion (Alex Zinenko, 2019-10-01; 1 file, -38/+4)
  A recent ABI compatibility change affected the conversion from standard CallOp/CallIndirectOp to LLVM::CallOp by changing its signature. In order to analyze the signature, the code was looking up the callee symbol in the module. This is incorrect since, during the conversion, the module may contain both the original and the converted function op that have the same symbol name. There is no strict guarantee on which of the two symbols will be found by the lookup. The conversion was not failing because the type legalizer converts the LLVM types to themselves, making the original and the converted function signatures ultimately produce the same type. Instead of looking up the function signature to get the list of result types, use the types of the CallOp/CallIndirectOp results, which must match those of the function in valid IR. These types are guaranteed to be the original, unconverted types when converting the operation. Furthermore, this avoids the need to perform a lookup of a symbol name in the module, which may be expensive. Finally, propagate attributes as-is from the original op to the converted op since they share the attribute name for the callee of direct calls and the rest of the attributes are not affected by the conversion. This removes the need for additional contortions between direct and indirect calls to extract the name of the optional callee attribute only to insert it back. This also prevents the conversion from unintentionally dropping the other attributes of the op. PiperOrigin-RevId: 272218871
* Unify Linalg types by using strided memrefs (Nicolas Vasilache, 2019-10-01; 9 files, -647/+345)
  This CL finishes the implementation of the Linalg + Affine type unification of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio). As a consequence, the !linalg.view type, linalg::DimOp, linalg::LoadOp and linalg::StoreOp can now disappear and Linalg can use standard types everywhere. PiperOrigin-RevId: 272187165
* Change all_reduce lowering to support 2D and 3D blocks. (Christian Sigg, 2019-10-01; 1 file, -43/+124)
  Perform the second reduce only with the first warp. This requires an additional __syncthreads(), but doesn't need special handling when the last warp is small. This simplifies support for block sizes that are not a multiple of 32. Supporting partial warp reduce will be done in a separate CL. PiperOrigin-RevId: 272168917
* Add verification error message for ops that require at least one operand or result. (Christian Sigg, 2019-10-01; 1 file, -5/+8)
  PiperOrigin-RevId: 272153634
* Pass the pointer of the parent pipeline collection pass to PassInstrumentation::run*Pipeline. (River Riddle, 2019-09-30; 2 files, -25/+28)
  For the cases where there are multiple levels of nested pass managers, the parent thread ID is not enough to distinguish the parent of a given pass pipeline. Passing in the parent pass gives an exact anchor point. PiperOrigin-RevId: 272105461
* [spirv] Add array length check. (Denis Khalikov, 2019-09-30; 2 files, -0/+9)
  According to the SPIR-V spec: "Length is the number of elements in the array. It must be at least 1." Closes tensorflow/mlir#160 COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/160 from denis0x0D:sandbox/array_len 0840dc0986ad0088a3aa7d5d8d3e97d489377ed9 PiperOrigin-RevId: 272094669
* Add missing file from cmakelist (Jacques Pienaar, 2019-09-30; 1 file, -0/+1)
  PiperOrigin-RevId: 272054623
* Enable autogenerating OpInterface method declarations (Jacques Pienaar, 2019-09-30; 2 files, -7/+113)
  Add DeclareOpInterfaceFunctions to enable specifying whether OpInterfaceMethods for an OpInterface should be generated automatically. This avoids needing to declare the extra methods, while also allowing adding function declarations by way of trait/inheritance. Most of this change is mechanical/extracting classes to be reusable. PiperOrigin-RevId: 272042739
* Normalize MemRefType lowering to LLVM as strided MemRef descriptor (Nicolas Vasilache, 2019-09-30; 2 files, -61/+127)
  This CL finishes the implementation of the lowering part of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio). Strided memrefs correspond conceptually to the following templated C++ struct:
  ```
  template <typename Elem, size_t Rank>
  struct {
    Elem *ptr;
    int64_t offset;
    int64_t sizes[Rank];
    int64_t strides[Rank];
  };
  ```
  The linearization procedure for address calculation for strided memrefs is the same as for linalg views: `base_offset + SUM_i index_i * stride_i`. The following CL will unify Linalg and Standard by removing !linalg.view in favor of strided memrefs. PiperOrigin-RevId: 272033399
* Add support for Logical Ops in SPIR-V dialect (Mahesh Ravishankar, 2019-09-30; 1 file, -10/+26)
  Add operations corresponding to OpLogicalAnd, OpLogicalNot, OpLogicalEqual, OpLogicalNotEqual and OpLogicalOr instructions in SPIR-V dialect. This needs changes to the class hierarchy in SPIR-V TableGen files to split SPIRVLogicalOp into SPIRVLogicalUnaryOp and SPIRVLogicalBinaryOp. All derived classes of SPIRVLogicalOp are updated accordingly. Update the spirv dialect generation script to:
  1) Allow specifying the base class to use for instruction spec generation and the file name to generate the specification in separately.
  2) Use the existing descriptions for operations.
  3) Update define_inst.sh to also invoke define_opcode.sh to define the corresponding SPIR-V instruction opcode enum.
  PiperOrigin-RevId: 272014876
* Fix MemRefType::getStrides corner case (Nicolas Vasilache, 2019-09-30; 1 file, -9/+45)
  MemRefType::getStrides uses AffineExpr::walk, which operates in post-order from the leaves. In order to compute strides properly, it needs to escape on terminal nodes and analyze binary ops only. This did not work for AffineExpr that consist of a single term (i.e. without a binary op). This CL fixes the corner case and adds relevant tests. PiperOrigin-RevId: 271975746
* Switch comments from GPU dialect terms to CUDA terms (NFC). (Christian Sigg, 2019-09-30; 1 file, -8/+7)
  local workgroup -> block, subgroup -> warp, invocation -> thread. PiperOrigin-RevId: 271946342
* Add InferTypeOpTrait & enable generating its member function definition (Jacques Pienaar, 2019-09-29; 2 files, -0/+33)
  Use OpInterfaces to add an interface for ops defining a return type function. This change does not use this trait in any meaningful way; I'll use it in a follow up to generalize and unify some of the op type traits/constraints. Also, currently the infer type function can only be manually specified in C++; that should rather be the fallback in future. PiperOrigin-RevId: 271883746
* Switch explicit create methods to match generated build's order (Jacques Pienaar, 2019-09-28; 3 files, -8/+8)
  The generated build methods have the result type before the arguments (operands and attributes, which are also now adjacent in the explicit create method). This also results in changing the create method's ordering to match most build methods' ordering. PiperOrigin-RevId: 271755054
* Append a newline when dumping a Value. (Yanan Cao, 2019-09-27; 1 file, -1/+4)
  This is more consistent with other dump methods. Otherwise successive Value dumps are concatenated on the same line, hurting readability. PiperOrigin-RevId: 271669846
* Add TODO to revisit coupling of CallOp to MemRefType lowering (Nicolas Vasilache, 2019-09-27; 1 file, -0/+3)
  PiperOrigin-RevId: 271619132
* NFC - clean up op accessor usage, std.load/store op verify, other stale info (Uday Bondhugula, 2019-09-27; 8 files, -62/+40)
  - also remove stale terminology/references in docs Signed-off-by: Uday Bondhugula <uday@polymagelabs.com> Closes tensorflow/mlir#148 COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/148 from bondhugula:cleanup e846b641a3c2936e874138aff480a23cdbf66591 PiperOrigin-RevId: 271618279
* Promote MemRefDescriptor to a pointer to struct when passing function boundaries in LLVMLowering. (Nicolas Vasilache, 2019-09-27; 4 files, -22/+220)
  The strided MemRef RFC discusses a normalized descriptor and interaction with library calls (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio). Lowering of nested LLVM structs as value types does not play nicely with externally compiled C/C++ functions due to ABI issues. Solving the ABI problem generally is a very complex problem and most likely involves taking a dependence on clang that we do not want atm. A simple workaround is to pass pointers to memref descriptors at function boundaries, which this CL implements. PiperOrigin-RevId: 271591708
* Fix JitRunner.cpp Error creation pattern and reactivate tests. (Nicolas Vasilache, 2019-09-27; 1 file, -7/+13)
  linalg_integration_test.mlir and simple.mlir were temporarily disabled due to an OSS-only failure. The issue is that, once created, an llvm::Error must be explicitly checked before it can be discarded or overwritten. This CL fixes the issue and re-enables the tests. PiperOrigin-RevId: 271589651
* [ROCm] Adding ROCDL Dialect. (Deven Desai, 2019-09-27; 4 files, -0/+224)
  This commit introduces the ROCDL Dialect (i.e. the ROCDL ops + the code to lower those ROCDL ops to LLVM intrinsics/functions). Think of ROCDL Dialect as analogous to the NVVM Dialect, but for AMD GPUs. This patch contains just the essentials needed to get a simple example up and running. We expect to make further additions to the ROCDL Dialect. This is the first of 3 commits; the follow-up will be: * add a pass that lowers GPU Dialect to ROCDL Dialect * add a "mlir-rocm-runner" utility Closes tensorflow/mlir#146 COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/146 from deven-amd:deven-rocdl-dialect e78e8005c75a78912631116c78dc844fcc4b0de9 PiperOrigin-RevId: 271511259
* Decouple tiling from fusion in Linalg. (Nicolas Vasilache, 2019-09-26; 3 files, -162/+140)
  This CL modifies the linalg-fusion pass such that it does not tile anymore as part of the pass. Tiling is a separate concern that enables linalg fusion but should happen before. This makes fusion more composable with other decisions. In particular the fusion pass now becomes greedy and only applies the transformation on a best-effort basis. This should also let fusion work in a multi-hop fashion with chains of producer/consumers. Since the fusion pass does not perform tiling anymore, tests are rewritten to be in pretiled form and make the intent of the test clearer (albeit more verbose). PiperOrigin-RevId: 271357741
* Drop support for memrefs from JitRunner (Alex Zinenko, 2019-09-26; 1 file, -104/+9)
  The support for functions taking and returning memrefs of floats was introduced in the first version of the runner, created before MLIR had reliable lowering of allocation/deallocation to library calls. It forcibly runs the MLIR transformations converting the affine, loop and standard dialects into the LLVM dialect, unlike the other runner flows that accept the LLVM dialect directly. Memref support leads to more complex layering and is generally fragile. Drop it in favor of functions returning a scalar, or library-based function calls to print memrefs and other data structures. PiperOrigin-RevId: 271330839
* Add AllReduceOp to GPU dialect with lowering to NVVM. (Christian Sigg, 2019-09-26; 1 file, -2/+169)
  The reduction operation is currently fixed to "add", and the scope is fixed to "workgroup". The implementation is currently limited to sizes that are a multiple of 32 (the warp size) and no larger than 1024. PiperOrigin-RevId: 271290265
* Remove unused variables and methods to address compiler warnings (Lei Zhang, 2019-09-25; 3 files, -11/+2)
  PiperOrigin-RevId: 271256784
* Add spv.Bitcast operation to SPIR-V dialect (Mahesh Ravishankar, 2019-09-25; 1 file, -0/+73)
  Support the OpBitcast instruction of SPIR-V using the spv.Bitcast operation. The semantics implemented in the dialect differ from the SPIR-V spec in that the dialect does not allow conversion to/from pointer types from/to non-pointer types. PiperOrigin-RevId: 271255957