path: root/mlir/test
Commit log (each entry: subject, author, date; files changed, lines removed/added)
...
* Generalize parse/printBinaryOp to parse/printOneResultOp. (Christian Sigg, 2019-10-03; 1 file, -3/+3)
  PiperOrigin-RevId: 272722539

* Add syntactic sugar for strided memref parsing. (Nicolas Vasilache, 2019-10-03; 10 files, -240/+262)
  This CL implements the last remaining bit of the [strided memref proposal](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio). The syntax is a bit more explicit than what was originally proposed and resembles: `memref<?x?xf32, offset: 0 strides: [?, 1]>`. Nonnegative strides and offsets are currently supported; future extensions will include negative strides.
  This also gives a concrete example of syntactic sugar for the [[RFC] Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/-wKHANzDNTg). The underlying implementation still uses AffineMap layout.
  PiperOrigin-RevId: 272717437
* Give modules a name (Alex Zinenko, 2019-10-03; 1 file, -0/+12)
  Modules are now ops and, as such, can be nested. They do not produce an SSA value, so there is no possibility to refer to them in the IR. Introduce support for symbol names attached to the module op so that it can be referred to using SymbolRefAttrs. The name is optional; for example, the implicit top-level module does not have a name.
  PiperOrigin-RevId: 272671600

* NFC: rename Conversion/ControlFlowToCFG to Conversion/LoopToStandard (Alex Zinenko, 2019-10-03; 1 file, -1/+1)
  This makes the name of the conversion pass more consistent with the naming scheme, since it actually converts from the Loop dialect to the Standard dialect rather than working with arbitrary control flow operations.
  PiperOrigin-RevId: 272612112

* Disallow index types in memrefs. (Alex Zinenko, 2019-10-03; 2 files, -7/+3)
  As specified in the MLIR language reference and rationale documents, `memref` types should not be allowed to have `index` as element types. As observed in https://groups.google.com/a/tensorflow.org/forum/#!msg/mlir/P49hVWqTMNc/nW89a4i_AgAJ, this restriction was lifted when canonicalization unit tests for affine operations were introduced, without sufficient motivation to lift the restriction itself.
  The test in question can be trivially rewritten (return the value from a function instead of storing it, to prevent DCE from removing the producer operation) and the restriction put back in place. If `memref<...x index>` is relevant for some use cases, the relaxation of the type system can be implemented separately with appropriate modifications to the documentation.
  PiperOrigin-RevId: 272607043

* Extract MemRefType::getStridesAndOffset as a free function and fix dynamic offset determination. (Nicolas Vasilache, 2019-10-02; 2 files, -1/+7)
  This also adds coverage with a missing test, which uncovered a bug in the conditional for testing whether an offset is dynamic or not.
  PiperOrigin-RevId: 272505798
* [spirv] Add support for spv.selection (Lei Zhang, 2019-10-02; 2 files, -3/+169)
  Similar to spv.loop, spv.selection is another op for modelling SPIR-V structured control flow. It covers both OpBranchConditional and OpSwitch with OpSelectionMerge. Instead of having a `spv.SelectionMerge` op to directly model the selection merge instruction for indicating the merge target, we use regions to delimit the boundary of the selection: the merge target is the next op following the `spv.selection` op. This way it is easier to discover all blocks belonging to the selection, and it plays nicer with the MLIR system.
  PiperOrigin-RevId: 272475006

* [ROCm] Adding pass to lower GPU Dialect to ROCDL Dialect. (Deven Desai, 2019-10-02; 1 file, -0/+37)
  This is a follow-up to PR tensorflow/mlir#146, which introduced the ROCDL Dialect. This PR introduces a pass to lower the GPU Dialect to the ROCDL Dialect. As with the previous PR, this one builds on the work done by @whchung, and addresses most of the review comments in the original PR.
  Closes tensorflow/mlir#154
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/154 from deven-amd:deven-lower-gpu-to-rocdl 809893e08236da5ab6a38e3459692fa04247773d
  PiperOrigin-RevId: 272390729

* Fix and simplify CallOp/CallIndirectOp to LLVM::CallOp conversion (Alex Zinenko, 2019-10-01; 1 file, -0/+8)
  A recent ABI compatibility change affected the conversion from standard CallOp/CallIndirectOp to LLVM::CallOp by changing its signature. In order to analyze the signature, the code was looking up the callee symbol in the module. This is incorrect since, during the conversion, the module may contain both the original and the converted function op that have the same symbol name. There is no strict guarantee on which of the two symbols will be found by the lookup. The conversion was not failing because the type legalizer converts the LLVM types to themselves, making the original and the converted function signatures ultimately produce the same type.
  Instead of looking up the function signature to get the list of result types, use the types of the CallOp/CallIndirectOp results, which must match those of the function in valid IR. These types are guaranteed to be the original, unconverted types when converting the operation. Furthermore, this avoids the need to perform a lookup of a symbol name in the module, which may be expensive.
  Finally, propagate attributes as-is from the original op to the converted op, since they share the attribute name for the callee of direct calls and the rest of the attributes are not affected by the conversion. This removes the need for additional contortions between direct and indirect calls to extract the name of the optional callee attribute only to insert it back. This also prevents the conversion from unintentionally dropping the other attributes of the op.
  PiperOrigin-RevId: 272218871

* Unify Linalg types by using strided memrefs (Nicolas Vasilache, 2019-10-01; 13 files, -625/+616)
  This CL finishes the implementation of the Linalg + Affine type unification of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio). As a consequence, the !linalg.view type, linalg::DimOp, linalg::LoadOp and linalg::StoreOp can now disappear and Linalg can use standard types everywhere.
  PiperOrigin-RevId: 272187165

* Change all_reduce lowering to support 2D and 3D blocks. (Christian Sigg, 2019-10-01; 1 file, -12/+18)
  Perform the second reduce only with the first warp. This requires an additional `__syncthreads()`, but doesn't need special handling when the last warp is small. This simplifies support for block sizes that are not a multiple of 32. Supporting partial warp reduce will be done in a separate CL.
  PiperOrigin-RevId: 272168917

* Add verification error message for ops that require at least one operand or result. (Christian Sigg, 2019-10-01; 2 files, -5/+40)
  PiperOrigin-RevId: 272153634
* Add integer shift ops to LLVM dialect. (Christian Sigg, 2019-09-30; 1 file, -0/+6)
  PiperOrigin-RevId: 272140049

* [spirv] Add array length check. (Denis Khalikov, 2019-09-30; 1 file, -0/+5)
  According to the SPIR-V spec: "Length is the number of elements in the array. It must be at least 1."
  Closes tensorflow/mlir#160
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/160 from denis0x0D:sandbox/array_len 0840dc0986ad0088a3aa7d5d8d3e97d489377ed9
  PiperOrigin-RevId: 272094669

* Enable autogenerating OpInterface method declarations (Jacques Pienaar, 2019-09-30; 1 file, -7/+1)
  Add DeclareOpInterfaceFunctions to enable specifying whether OpInterfaceMethods for an OpInterface should be generated automatically. This avoids needing to declare the extra methods, while also allowing adding function declarations by way of trait/inheritance. Most of this change is mechanical/extracting classes to be reusable.
  PiperOrigin-RevId: 272042739

* Normalize MemRefType lowering to LLVM as strided MemRef descriptor (Nicolas Vasilache, 2019-09-30; 6 files, -154/+207)
  This CL finishes the implementation of the lowering part of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio). Strided memrefs correspond conceptually to the following templated C++ struct:
  ```
  template <typename Elem, size_t Rank>
  struct {
    Elem *ptr;
    int64_t offset;
    int64_t sizes[Rank];
    int64_t strides[Rank];
  };
  ```
  The linearization procedure for address calculation for strided memrefs is the same as for linalg views: `base_offset + SUM_i index_i * stride_i`. The following CL will unify Linalg and Standard by removing !linalg.view in favor of strided memrefs.
  PiperOrigin-RevId: 272033399
* Add support for Logical Ops in SPIR-V dialect (Mahesh Ravishankar, 2019-09-30; 1 file, -0/+107)
  Add operations corresponding to the OpLogicalAnd, OpLogicalNot, OpLogicalEqual, OpLogicalNotEqual and OpLogicalOr instructions in the SPIR-V dialect. This needs changes to the class hierarchy in the SPIR-V TableGen files to split SPIRVLogicalOp into SPIRVLogicalUnaryOp and SPIRVLogicalBinaryOp. All derived classes of SPIRVLogicalOp are updated accordingly.
  Update the spirv dialect generation script to:
  1) Allow specifying the base class to use for instruction spec generation and the file name to generate the specification in separately.
  2) Use the existing descriptions for operations.
  3) Update define_inst.sh to also invoke define_opcode.sh to define the corresponding SPIR-V instruction opcode enum.
  PiperOrigin-RevId: 272014876

* Fix MemRefType::getStrides corner case (Nicolas Vasilache, 2019-09-30; 2 files, -1/+11)
  MemRefType::getStrides uses AffineExpr::walk, which operates in post-order from the leaves. In order to compute strides properly, it needs to escape on terminal nodes and analyze binary ops only. This did not work for an AffineExpr that consists of a single term (i.e. without a binary op). This CL fixes the corner case and adds relevant tests.
  PiperOrigin-RevId: 271975746

* Add InferTypeOpTrait & enable generating its member function definition (Jacques Pienaar, 2019-09-29; 5 files, -2/+68)
  Use OpInterfaces to add an interface for ops defining a return type function. This change does not use this trait in any meaningful way; I'll use it in a follow-up to generalize and unify some of the op type traits/constraints. Also, currently the infer type function can only be manually specified in C++; that should rather be the fallback in future.
  PiperOrigin-RevId: 271883746

* Tablegen helpers for accessing properties of shaped types (Geoffrey Martin-Noble, 2019-09-27; 1 file, -3/+2)
  Tablegen's lack of functions continues to be annoying.
  PiperOrigin-RevId: 271680947

* Remove spurious debug spew in tests (Nicolas Vasilache, 2019-09-27; 4 files, -4/+0)
  PiperOrigin-RevId: 271624731

* NFC - clean up op accessor usage, std.load/store op verify, other stale info (Uday Bondhugula, 2019-09-27; 1 file, -1/+1)
  - also remove stale terminology/references in docs
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#148
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/148 from bondhugula:cleanup e846b641a3c2936e874138aff480a23cdbf66591
  PiperOrigin-RevId: 271618279
* Promote MemRefDescriptor to a pointer to struct when passing function boundaries in LLVMLowering. (Nicolas Vasilache, 2019-09-27; 6 files, -184/+234)
  The strided MemRef RFC discusses a normalized descriptor and interaction with library calls (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio). Lowering of nested LLVM structs as value types does not play nicely with externally compiled C/C++ functions due to ABI issues. Solving the ABI problem in general is very complex and most likely involves taking a dependence on clang, which we do not want at the moment. A simple workaround is to pass pointers to memref descriptors at function boundaries, which this CL implements.
  PiperOrigin-RevId: 271591708

* Fix JitRunner.cpp Error creation pattern and reactivate tests. (Nicolas Vasilache, 2019-09-27; 2 files, -18/+16)
  linalg_integration_test.mlir and simple.mlir were temporarily disabled due to an OSS-only failure. The issue is that, once created, an llvm::Error must be explicitly checked before it can be discarded or overwritten. This CL fixes the issue and reenables the tests.
  PiperOrigin-RevId: 271589651

* [ROCm] Adding ROCDL Dialect. (Deven Desai, 2019-09-27; 2 files, -0/+65)
  This commit introduces the ROCDL Dialect (i.e. the ROCDL ops plus the code to lower those ROCDL ops to LLVM intrinsics/functions). Think of the ROCDL Dialect as analogous to the NVVM Dialect, but for AMD GPUs.
  This patch contains just the essentials needed to get a simple example up and running. We expect to make further additions to the ROCDL Dialect. This is the first of 3 commits; the follow-ups will be:
  * add a pass that lowers GPU Dialect to ROCDL Dialect
  * add a "mlir-rocm-runner" utility
  Closes tensorflow/mlir#146
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/146 from deven-amd:deven-rocdl-dialect e78e8005c75a78912631116c78dc844fcc4b0de9
  PiperOrigin-RevId: 271511259

* Disable failing tests (Jacques Pienaar, 2019-09-26; 2 files, -16/+18)
  PiperOrigin-RevId: 271460509
* Decouple tiling from fusion in Linalg. (Nicolas Vasilache, 2019-09-26; 2 files, -247/+366)
  This CL modifies the linalg-fusion pass so that it no longer tiles as part of the pass. Tiling is a separate concern that enables linalg fusion but should happen before it. This makes fusion more composable with other decisions. In particular, the fusion pass now becomes greedy and only applies the transformation on a best-effort basis. This should also let fusion work in a multi-hop fashion with chains of producers/consumers.
  Since the fusion pass no longer performs tiling, tests are rewritten to be in pre-tiled form, which makes the intent of the tests clearer (albeit more verbose).
  PiperOrigin-RevId: 271357741

* Drop support for memrefs from JitRunner (Alex Zinenko, 2019-09-26; 1 file, -33/+35)
  The support for functions taking and returning memrefs of floats was introduced in the first version of the runner, created before MLIR had reliable lowering of allocation/deallocation to library calls. It forcibly runs MLIR transformations converting the affine, loop and standard dialects into the LLVM dialect, unlike the other runner flows that accept the LLVM dialect directly. Memref support leads to more complex layering and is generally fragile. Drop it in favor of functions returning a scalar, or library-based function calls to print memrefs and other data structures.
  PiperOrigin-RevId: 271330839

* Add AllReduceOp to GPU dialect with lowering to NVVM. (Christian Sigg, 2019-09-26; 3 files, -0/+35)
  The reduction operation is currently fixed to "add", and the scope is fixed to "workgroup". The implementation is currently limited to sizes that are a multiple of 32 (the warp size) and no larger than 1024.
  PiperOrigin-RevId: 271290265

* Add spv.Bitcast operation to SPIR-V dialect (Mahesh Ravishankar, 2019-09-25; 2 files, -0/+91)
  Support the OpBitcast instruction of SPIR-V using the spv.Bitcast operation. The semantics implemented in the dialect differ from the SPIR-V spec in that the dialect does not allow conversion to/from pointer types from/to non-pointer types.
  PiperOrigin-RevId: 271255957

* [spirv] Add SPV_UnaryOp and spv.FNegate (Lei Zhang, 2019-09-25; 2 files, -0/+17)
  This CL also moves common parsers and printers to the same section in SPIRVOps.cpp.
  PiperOrigin-RevId: 271233546

* Emit function name being tested in TestMemRefStrideCalculation (Jacques Pienaar, 2019-09-25; 2 files, -1/+3)
  Bring back CHECK-LABEL after its earlier removal.
  PiperOrigin-RevId: 271166428
* Add tablegen verification traits for comparing different properties (Geoffrey Martin-Noble, 2019-09-25; 2 files, -0/+36)
  This allows things like comparing the rank of one operand to the size of another that specifies indices into it.
  PiperOrigin-RevId: 271150439

* Fix memref-stride-calculation on Windows (Lei Zhang, 2019-09-25; 2 files, -1/+1)
  Call llvm::outs().flush() to make sure we don't mix streams. Remove CHECK-LABEL to avoid assuming the relative order between the additional info and the output IR.
  PiperOrigin-RevId: 271131100

* Add support for GLSL Binary ops, and use it to implement GLSL FMax. (Mahesh Ravishankar, 2019-09-24; 2 files, -1/+27)
  A base class is added to implement all GLSL binary operations and is used to implement the FMax operation. The existing framework already generates all the necessary (de)serialization code.
  PiperOrigin-RevId: 271037166

* Introduce splat op + provide its LLVM lowering (Uday Bondhugula, 2019-09-24; 4 files, -0/+62)
  - introduce splat op in standard dialect (currently for int/float/index input type; the output type can be a vector or statically shaped tensor)
  - implement LLVM lowering (when the result type is a 1-d vector)
  - add constant folding hook for it
  - while on Ops.cpp, fix some stale names
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#141
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/141 from bondhugula:splat 48976a6aa0a75be6d91187db6418de989e03eb51
  PiperOrigin-RevId: 270965304

* Normalize lowering of MemRef types (Nicolas Vasilache, 2019-09-24; 5 files, -95/+83)
  The RFC for unifying Linalg and Affine compilation passes into an end-to-end flow with a predictable ABI and linkage to external function calls raised the question of why we have variable-sized descriptors for memrefs depending on whether they have static or dynamic dimensions (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
  This CL standardizes the ABI on the rank of the memrefs. The LLVM struct for a memref becomes equivalent to:
  ```
  template <typename Elem, size_t Rank>
  struct {
    Elem *ptr;
    int64_t sizes[Rank];
  };
  ```
  PiperOrigin-RevId: 270947276
* Clone called functions into nested GPU module. (Christian Sigg, 2019-09-24; 1 file, -2/+12)
  PiperOrigin-RevId: 270891190

* [spirv] NFC: clean up (de)serialization tests (Lei Zhang, 2019-09-23; 21 files, -167/+179)
  This CL uses the newly added -split-input-file CLI option to mlir-translate to combine certain (de)serialization tests. It also renames certain test filenames.
  PiperOrigin-RevId: 270816324

* Make spirv::RuntimeArrayType part of spirv::CompositeType. (Mahesh Ravishankar, 2019-09-23; 1 file, -0/+8)
  According to the SPIR-V spec, spirv::CompositeType includes spirv::RuntimeArrayType. This allows using objects of spirv::RuntimeArrayType with spirv::AccessChainOp.
  PiperOrigin-RevId: 270809492

* Handle OpMemberName instruction in SPIR-V deserializer. (Mahesh Ravishankar, 2019-09-23; 1 file, -6/+0)
  Add support in the deserializer for the OpMemberName instruction. For now the name is just processed and not associated with the spirv::StructType being built; that needs an enhancement to spirv::StructType itself. Add tests to check for errors reported during deserialization, with some refactoring to common out some utility functions.
  PiperOrigin-RevId: 270794524

* Use constant's location for reporting errors in parsing of hex constant (Jacques Pienaar, 2019-09-23; 1 file, -1/+1)
  Before this, the line following the error would be reported in some cases.
  PiperOrigin-RevId: 270778722

* Simplify the way spirv::StructTypes are parsed. (Mahesh Ravishankar, 2019-09-23; 3 files, -16/+46)
  The existing logic to parse spirv::StructTypes is very brittle. This change simplifies the parsing logic a lot. The simplification also allows member decorations to be separated by commas instead of spaces (which was an artifact of the existing parsing logic). The change also needs a modification to mlir::parseType to return the number of characters parsed; a new parseType method is added to do so. Also allow specification of a spirv::StructType with no members.
  PiperOrigin-RevId: 270739672

* Add initial callgraph support. (River Riddle, 2019-09-23; 3 files, -0/+92)
  Using the two call interfaces, CallOpInterface and CallableOpInterface, this change adds support for an initial multi-level CallGraph. This call graph builds a set of nodes for each callable region and connects them via edges. An edge may be any of the following types:
  * Abstract - An edge not produced by a call operation, used for connecting to internal nodes from external nodes.
  * Call - A call edge is an edge defined via a call-like operation.
  * Child - This is an artificial edge connecting nested callgraph nodes.
  This callgraph will be used, and improved upon, to begin supporting more interesting interprocedural analyses and transformations. In a follow-up, this callgraph will be used to support more complex inlining support.
  PiperOrigin-RevId: 270724968
* Add interfaces for call-like/callable operations. (River Riddle, 2019-09-23; 2 files, -4/+11)
  These two operation interfaces will be used in a follow-up to support building a callgraph:
  * CallOpInterface - Operations providing this interface are call-like and have a "call" target. A call target may be a symbol reference, via SymbolRefAttr, or an SSA value.
  * CallableOpInterface - Operations providing this interface define destinations for call-like operations, e.g. FuncOp. These operations may define any number of callable regions.
  PiperOrigin-RevId: 270723300

* Outline GPU kernel function into a nested module. (Christian Sigg, 2019-09-23; 4 files, -44/+93)
  Roll forward of commit 5684a12. When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.
  PiperOrigin-RevId: 270639748

* Fix a number of Clang-Tidy warnings. (Christian Sigg, 2019-09-23; 1 file, -1/+1)
  PiperOrigin-RevId: 270632324

* Specialize f32->i8/u8 quantization with C++ native arithmetic to optimize performance. (Jing Pu, 2019-09-22; 1 file, -1/+1)
  The CL adds a rounding mode flag to the class and changes the default to rmNearestTiesToAway from rmNearestTiesToEven because 1) TensorFlow QuantizeV2 ops use rmNearestTiesToAway; 2) the specialization only implements rmNearestTiesToAway.
  PiperOrigin-RevId: 270600739

* Add integer sign- and zero-extension and truncation to standard. (Manuel Freiberger, 2019-09-21; 4 files, -0/+156)
  This adds sign- and zero-extension and truncation of integer types to the standard dialect. This allows performing integer type conversions without having to go to the LLVM dialect and introduce custom type casts (between standard and LLVM integer types).
  Closes tensorflow/mlir#134
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/134 from ombre5733:sext-zext-trunc-in-std c7657bc84c0ca66b304e53ec03797e09152e4d31
  PiperOrigin-RevId: 270479722

* [spirv] Add OpControlBarrier and OpMemoryBarrier. (Denis Khalikov, 2019-09-21; 2 files, -0/+72)
  Add OpControlBarrier and OpMemoryBarrier (de)serialization.
  Closes tensorflow/mlir#130
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/130 from denis0x0D:sandbox/memory_barrier 2e3fff16bca44904dc1039592cb9a65d526faea8
  PiperOrigin-RevId: 270457478