Commit message | Author | Age | Files | Lines
...
| * Misc changes to lowering to SPIR-V.  [Mahesh Ravishankar, 2019-11-26, 10 files, -69/+171]
|   These changes were made to SPIR-V lowering while adding support for lowering
|   SubViewOp, but are not directly related to it.
|   - Change the lowering of MemRefType to
|     !spv.ptr<!spv.struct<!spv.array<...>[offset]>, ..>
|     This is consistent with the Vulkan spec.
|   - To enable testing, a simple pattern for lowering functions is added to
|     ConvertStandardToSPIRVPass. It is only used to convert the types of the
|     function arguments; it is not meant to be the way functions are eventually
|     lowered into the SPIR-V dialect.
|   PiperOrigin-RevId: 282589644
| * Automated rollback of commit d60133f89bb08341718bb3132b19bc891f7d4f4d  [Nicolas Vasilache, 2019-11-26, 8 files, -19/+19]
|   PiperOrigin-RevId: 282574110
| * Relax restriction on affine_apply dim and symbol operands  [Nicolas Vasilache, 2019-11-26, 2 files, -31/+0]
|   The affine_apply operation is currently "doubly" affine and conflates two things:
|   1. it applies an affine map to a list of values of type `index` that are
|      defined as either dim or symbol;
|   2. it restricts (and propagates constraints on) the provenance of dims and
|      symbols to a small subset of ops for which more restrictive polyhedral
|      constraints apply.
|   Point 2 is related to the ability to form so-called static control parts and
|   to dependence analysis and legality of transformations. Point 1, however, is
|   completely independent: the only local implication of dims and symbols for
|   affine_apply is that dims compose while symbols concatenate, along with the
|   structural constraint that dims may not be multiplied together.
|   The properties of composition and canonicalization in affine_apply are more
|   generally useful. This CL relaxes the verifier on affine_apply so it can be
|   used more generally. The relevant affine.for/if/load/store op verifiers
|   already implement the dim and symbol checking.
|   See this thread for the related discussion:
|   https://groups.google.com/a/tensorflow.org/g/mlir/c/HkwCbV8D9N0/m/8srUNrX6CAAJ
|   PiperOrigin-RevId: 282562517
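|   For illustration only (not part of the commit message; the map and value
|   names are hypothetical, and the syntax is a best-effort sketch):
|     #map = (d0)[s0] -> (d0 + s0)
|     // With the relaxed verifier, %i and %n may be produced by arbitrary ops;
|     // they no longer need to qualify as affine dims/symbols for affine.apply.
|     %0 = affine.apply #map(%i)[%n]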
| * Some minor corrections and improvements to LangRef  [Andrew Anderson, 2019-11-25, 1 file, -25/+26]
|   Some productions in the LangRef were using undefined terminals and
|   non-terminals, which have been added to the EBNF.
|   The dialect type and dialect attribute productions matched precisely the
|   same structure and have been deduplicated.
|   The production for ssa-id was ambiguous but the fix is trivial (merging the
|   leading '%') and has been applied.
|   Closes tensorflow/mlir#265
|   PiperOrigin-RevId: 282470892
| * Add support for AttrSizedOperandSegments/AttrSizedResultSegments  [Lei Zhang, 2019-11-25, 9 files, -39/+345]
|   Certain operations can have multiple variadic operands, and their size
|   relationship is not always known statically. For such cases, we need a
|   per-op-instance specification to divide the operands into logical groups or
|   segments. This can be modeled by attributes.
|   This CL introduces the C++ trait AttrSizedOperandSegments for operands and
|   AttrSizedResultSegments for results. The C++ trait just guarantees that the
|   size attribute has the correct type (1D vector) and values (non-negative),
|   etc. It serves as the basis for ODS sugaring: with ODS argument declarations
|   we can further verify that the number of elements matches the number of
|   ODS-declared operands, and we can generate handy getter methods.
|   PiperOrigin-RevId: 282467075
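|   For illustration only (not part of the commit message; the op name and
|   segment split are hypothetical), the generic form of such an op carries the
|   per-instance split as a 1D vector attribute:
|     // Three operands divided into two logical groups of sizes 1 and 2.
|     "my_dialect.my_op"(%a, %b, %c)
|         {operand_segment_sizes = dense<[1, 2]> : vector<2xi32>}
|         : (f32, f32, f32) -> ()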
| * Use vector.InsertStridedSlice in Vector -> Vector unrolling  [Nicolas Vasilache, 2019-11-25, 3 files, -28/+94]
|   This CL uses the recently added op to finish the implementation of
|   Vector -> Vector unrolling by replacing the "fake join op" by a series of
|   InsertStridedSliceOp.
|   The test is updated accordingly.
|   PiperOrigin-RevId: 282451126
| * Add a vector.InsertStridedSliceOp  [Nicolas Vasilache, 2019-11-25, 4 files, -149/+337]
|   This new op is the counterpart of vector.StridedSliceOp and will be used in
|   the pattern rewrites for vector unrolling.
|   PiperOrigin-RevId: 282447414
| * Allow LLVM::ExtractElementOp to have non-i32 indices.  [MLIR Team, 2019-11-25, 6 files, -34/+40]
|   Also change the text format a bit, so that indices are enclosed in square
|   brackets.
|   PiperOrigin-RevId: 282437095
| * Make std.divis and std.diviu support ElementsAttr folding.  [Ben Vanik, 2019-11-25, 2 files, -33/+91]
|   PiperOrigin-RevId: 282434465
| * NFC: Actually expose the implementation of createGPUToSPIRVLoweringPass.  [Mahesh Ravishankar, 2019-11-25, 1 file, -2/+3]
|   A mismatch between the function declaration and the function definition
|   prevented the implementation of createGPUToSPIRVLoweringPass from being
|   exposed.
|   PiperOrigin-RevId: 282419815
| * Add missing rule to generate SPIR-V ABI Attribute using tblgen to CMake.  [Mahesh Ravishankar, 2019-11-25, 2 files, -0/+6]
|   PiperOrigin-RevId: 282415592
| * Update VectorContractionOp to take iterator types and index mapping attributes compatible with linalg ops.  [Andy Davis, 2019-11-25, 4 files, -76/+320]
|   PiperOrigin-RevId: 282412311
| * Changing directory shortcut for CPU/GPU runner utils.  [Christian Sigg, 2019-11-25, 8 files, -19/+19]
|   Move cuda-runtime-wrappers.so into a subdirectory to match
|   libmlir_runner_utils.so. Provide the parent directory when running tests and
|   load the .so from the subdirectory.
|   PiperOrigin-RevId: 282410749
| * De-duplicate EnumAttr overrides by defining defaults  [Lei Zhang, 2019-11-25, 8 files, -51/+19]
|   EnumAttr should provide meaningful defaults so concrete instances do not
|   need to duplicate the fields.
|   PiperOrigin-RevId: 282398431
| * Introduce attributes that specify the final ABI for a spirv::ModuleOp.  [Mahesh Ravishankar, 2019-11-25, 21 files, -292/+662]
|   To simplify the lowering into SPIR-V, while still respecting the ABI
|   requirements of SPIR-V/Vulkan, split the process into two steps:
|   1) While lowering a function to SPIR-V (when the function is an entry point
|      function), allow specifying attributes on the arguments and on the
|      function itself that describe the ABI of the function.
|   2) Add a pass that materializes the ABI described in the function.
|   Two attributes are needed:
|   1) An attribute on the arguments of the entry point function that describes
|      the descriptor_set, binding, storage class, etc. of the spv.globalVariable
|      this argument will be replaced by.
|   2) An attribute on the function that specifies the workgroup size, etc. (for
|      now only the workgroup size).
|   Add the pass -spirv-lower-abi-attrs to materialize the ABI described by the
|   attributes.
|   This change makes the SPIRVBasicTypeConverter class unnecessary, so it is
|   removed, further simplifying the SPIR-V lowering path.
|   PiperOrigin-RevId: 282387587
| * Allow memref_cast from static strides to dynamic strides.  [Mahesh Ravishankar, 2019-11-25, 4 files, -3/+57]
|   memref_cast supports casting from a static-shape to a dynamic-shape memref.
|   The same should be true for strides as well, i.e. a memref with static
|   strides can be cast to a memref with dynamic strides.
|   PiperOrigin-RevId: 282381862
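|   For illustration only (not part of the commit message; the shapes, strides,
|   and layout sugar shown are a best-effort sketch):
|     // Cast a memref with static strides to one with dynamic strides.
|     %1 = memref_cast %0 : memref<4x8xf32, offset: 0, strides: [8, 1]>
|                        to memref<4x8xf32, offset: ?, strides: [?, ?]>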
| * Add vector.insertelement op  [Nicolas Vasilache, 2019-11-25, 5 files, -5/+161]
|   This is the counterpart of the vector.extractelement op and has the same
|   limitations at the moment (a static I64IntegerArrayAttr to express the
|   position). This restriction will be lifted in the future.
|   LLVM lowering will be added in a subsequent commit.
|   PiperOrigin-RevId: 282365760
| * Introduce gpu.func  [Alex Zinenko, 2019-11-25, 7 files, -59/+552]
|   Introduce a new function-like operation to the GPU dialect to provide a
|   placeholder for the execution semantic description and to add support for
|   the GPU memory hierarchy. This aligns with the overall goal of the dialect
|   to expose the common abstraction layer for GPU devices, in particular by
|   providing an MLIR unit of semantics (i.e. an operation) for memory modeling.
|   This proposal has been discussed in the mailing list:
|   https://groups.google.com/a/tensorflow.org/d/msg/mlir/RfXNP7Hklsc/MBNN7KhjAgAJ
|   As decided, the "convergence" aspect of the execution model will be factored
|   out into a new discussion and therefore is not included in this commit.
|   This commit only introduces the operation but does not hook it up with the
|   remaining flow. The intention is to develop the new flow while keeping the
|   old flow operational and do the switch in a simple, separately reversible
|   commit.
|   PiperOrigin-RevId: 282357599
| * Support folding of StandardOps with DenseElementsAttr.  [Ben Vanik, 2019-11-24, 2 files, -14/+58]
|   PiperOrigin-RevId: 282270243
| * NFC: Wire up DRR settings for SPIR-V canonicalization patterns  [Lei Zhang, 2019-11-23, 3 files, -58/+55]
|   This CL adds the necessary files and settings for using DRR to write SPIR-V
|   canonicalization patterns, and converts the patterns for spv.Bitcast and
|   spv.LogicalNot.
|   PiperOrigin-RevId: 282132786
| * [spirv] NFC: rename test files and sort tests inside  [Lei Zhang, 2019-11-23, 3 files, -34/+50]
|   PiperOrigin-RevId: 282132339
| * Make isValidSymbol more powerful  [Uday Bondhugula, 2019-11-22, 7 files, -13/+80]
|   The check in isValidSymbol, as far as a DimOp result went, checked whether
|   the dim op was on a top-level memref. However, any alloc'ed, view, or
|   subview memref is fine as long as the corresponding dimension of that memref
|   is either static or, in the case of dynamic dimensions, was in turn created
|   using a valid symbol.
|   Reported-by: Jose Gomez
|   Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
|   Closes tensorflow/mlir#252
|   COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/252 from bondhugula:symbol 7b57dc394df9375e651f497231c6e4525a32a662
|   PiperOrigin-RevId: 282097114
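|   For illustration only (not part of the commit message; names are
|   hypothetical), a dim taken on an alloc'ed memref can now act as a symbol
|   when the corresponding dimension is defined by a valid symbol:
|     %A = alloc(%n) : memref<?x128xf32>   // %n is a valid symbol
|     %d = dim %A, 0 : memref<?x128xf32>   // %d is now accepted as a symbol
|     affine.for %i = 0 to %d {
|     }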
| * NFC: Remove unnecessarily guarded tablegen includes.  [River Riddle, 2019-11-22, 45 files, -163/+12]
|   Support for including a file multiple times was added in tablegen, removing
|   the need for these extra guards. This is because we already insert C/C++
|   style header guards within each of the specific .td files.
|   PiperOrigin-RevId: 282076728
| * Fix Windows Build  [Nicolas Vasilache, 2019-11-22, 2 files, -3/+3]
|   PiperOrigin-RevId: 282048102
| * [spirv] Add a canonicalizer for `spirv::LogicalNotOp`.  [Denis Khalikov, 2019-11-22, 5 files, -12/+112]
|   Add a canonicalizer for `spirv::LogicalNotOp`. Converts:
|   * spv.LogicalNot(spv.IEqual(...)) -> spv.INotEqual(...)
|   * spv.LogicalNot(spv.INotEqual(...)) -> spv.IEqual(...)
|   * spv.LogicalNot(spv.LogicalEqual(...)) -> spv.LogicalNotEqual(...)
|   * spv.LogicalNot(spv.LogicalNotEqual(...)) -> spv.LogicalEqual(...)
|   Also moved the test for spv.IMul to the arithmetic tests.
|   Closes tensorflow/mlir#256
|   COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/256 from denis0x0D:sandbox/canon_logical_not 76ab5787b2c777f948c8978db061d99e76453d44
|   PiperOrigin-RevId: 282012356
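|   For illustration only (not part of the commit message; value names are
|   hypothetical), the first pattern rewrites:
|     // Before canonicalization:
|     %eq  = spv.IEqual %a, %b : i32
|     %neg = spv.LogicalNot %eq : i1
|     // After canonicalization, uses of %neg are replaced by:
|     %neq = spv.INotEqual %a, %b : i32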
| * Add more canonicalizations for SubViewOp.  [Mahesh Ravishankar, 2019-11-22, 2 files, -102/+184]
|   Depending on which of the offsets, sizes, or strides are constant, the
|   subview op can be canonicalized in different ways. Add such
|   canonicalizations, which generalize the existing approach of canonicalizing
|   the subview op only if all of the offsets, sizes, and strides are constants.
|   PiperOrigin-RevId: 282010703
| * Small formatting fix in Tutorial Ch2.  [Lucy Fox, 2019-11-22, 1 file, -1/+1]
|   PiperOrigin-RevId: 281998069
| * Unify vector op names with other dialects.  [Jean-Michel Gorius, 2019-11-22, 10 files, -137/+129]
|   Change vector op names from VectorFooOp to Vector_FooOp and from
|   vector::VectorFooOp to vector::FooOp.
|   Closes tensorflow/mlir#257
|   COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/257 from Kayjukh:master dfc3a0e04114885aaec8740d5951d6984d6e1577
|   PiperOrigin-RevId: 281967461
| * Add more detail about locations in Chapter 2 of tutorial.  [Lucy Fox, 2019-11-21, 1 file, -6/+21]
|   Resolves issue 241 (tensorflow/mlir#241).
|   PiperOrigin-RevId: 281867192
| * Move Linalg Transforms that are actually Conversions - NFC  [Nicolas Vasilache, 2019-11-21, 7 files, -27/+76]
|   PiperOrigin-RevId: 281844602
| * Add support for using the ODS result names as the Asm result names for multi-result operations.  [River Riddle, 2019-11-21, 6 files, -44/+67]
|   This changes the OpDefinitionsGen to automatically add the OpAsmOpInterface
|   for operations with multiple result groups, using the provided ODS names. We
|   currently limit the generation to multi-result ops, as most single-result
|   operations don't have an interesting name (result/output/etc.). An example
|   is shown below:
|     // The following operation:
|     def MyOp : ... {
|       let results = (outs AnyType:$first, Variadic<AnyType>:$middle, AnyType);
|     }
|     // May now be printed as:
|     %first, %middle:2, %0 = "my.op" ...
|   PiperOrigin-RevId: 281834156
| * Change CUDA tests to use print_memref.  [Christian Sigg, 2019-11-21, 4 files, -31/+18]
|   Swap dimensions in the all-reduce-op test.
|   PiperOrigin-RevId: 281791744
| * NFC: Add wrappers around DenseIntElementsAttr/DenseFPElementsAttr::get to avoid the need to cast.  [River Riddle, 2019-11-21, 1 file, -0/+28]
|   This avoids the need to cast back to the derived type when calling get,
|   i.e. it removes the need to do
|   DenseIntElementsAttr::get(...).cast<DenseIntElementsAttr>().
|   PiperOrigin-RevId: 281772163
| * Fix OSS builds - NFC  [Nicolas Vasilache, 2019-11-21, 2 files, -2/+2]
|   PiperOrigin-RevId: 281757979
| * Drop unused function - NFC  [Nicolas Vasilache, 2019-11-21, 1 file, -5/+0]
|   PiperOrigin-RevId: 281741923
| * Split Linalg declarative patterns from specific test patterns - NFC  [Nicolas Vasilache, 2019-11-21, 9 files, -114/+276]
|   This will make it easier to scale out test patterns and build specific
|   passes that do not interfere with independent testing.
|   PiperOrigin-RevId: 281736335
| * Add missing include after LLVM 049043b598ef5b12a5894c0c22db8608be70f517  [Benjamin Kramer, 2019-11-21, 1 file, -0/+1]
|   PiperOrigin-RevId: 281732683
| * Don't force newline before function attributes  [Alex Zinenko, 2019-11-21, 5 files, -13/+9]
|   Due to legacy reasons, a newline character followed by two spaces was always
|   inserted before the attributes of the function Op in pretty form. This
|   breaks formatting when functions are nested in some other operations. Don't
|   print the newline and just put the attributes on the same line, which is
|   also more consistent with the module Op. Line breaking aware of indentation
|   can be introduced separately into the parser if deemed useful.
|   PiperOrigin-RevId: 281721793
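|   For illustration only (not part of the commit message; the function name
|   and attribute are hypothetical), the attributes now print on the same line
|   as the signature:
|     func @foo(%arg0: f32) attributes {my.attr = 42 : i32} {
|       return
|     }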
| * Fix OSS build  [Nicolas Vasilache, 2019-11-21, 1 file, -0/+1]
|   Add include of ADT/SmallVector.h.
|   Fixes tensorflow/mlir#254.
|   PiperOrigin-RevId: 281721705
| * Fixed typo in 2-d tiled layout  [Aart Bik, 2019-11-20, 1 file, -1/+1]
|   PiperOrigin-RevId: 281671097
| * NFC: Use Region::getBlocks to fix build failure with drop_begin.  [River Riddle, 2019-11-20, 1 file, -1/+1]
|   PiperOrigin-RevId: 281656603
| * Add a document detailing operation traits, how to define them, and the current list.  [River Riddle, 2019-11-20, 2 files, -2/+249]
|   Traits are an important piece of operation definition, but don't really have
|   a good documentation presence at the moment.
|   PiperOrigin-RevId: 281649025
| * Correctly parse empty affine maps.  [MLIR Team, 2019-11-20, 4 files, -6/+22]
|   Previously the test case crashed / produced an error.
|   PiperOrigin-RevId: 281630540
| * Merge DCE and unreachable block elimination into a new utility 'simplifyRegions'.  [River Riddle, 2019-11-20, 8 files, -282/+301]
|   This moves the different canonicalizations of regions into one place and
|   invokes them in the fixed-point iteration of the canonicalizer.
|   PiperOrigin-RevId: 281617072
| * Add VectorContractionOp to the VectorOps dialect.  [Andy Davis, 2019-11-20, 4 files, -0/+422]
|   PiperOrigin-RevId: 281605471
| * Verify subview op result has dynamic shape, when sizes are specified.  [Mahesh Ravishankar, 2019-11-20, 2 files, -1/+25]
|   If the sizes are specified as arguments to the subview op, then the shape
|   must be dynamic as well.
|   PiperOrigin-RevId: 281591608
| * missing outer index %i in search_body  [MLIR Team, 2019-11-20, 1 file, -2/+2]
|   PiperOrigin-RevId: 281580028
| * Add multi-level DCE pass.  [Sean Silva, 2019-11-20, 6 files, -0/+418]
|   This is a simple multi-level DCE pass that operates pretty generically on
|   the IR. Its key feature compared to the existing peephole dead-op folding
|   that happens during canonicalization is being able to delete recursively
|   dead cycles of the use-def graph, including block arguments.
|   PiperOrigin-RevId: 281568202
| * Changes to SubViewOp to make it more amenable to canonicalization.  [Mahesh Ravishankar, 2019-11-20, 6 files, -68/+322]
|   The current SubViewOp specification allows either all of the offsets, sizes,
|   and strides to be dynamic or all of them to be static. There are
|   opportunities for more fine-grained canonicalization based on which of these
|   are static. For example, if the sizes are static, the result memref is of
|   static shape. The specification of SubViewOp is modified to allow one or
|   more of the offsets, sizes, and strides to be statically specified. The
|   verification is updated to ensure that the result type of the subview op is
|   consistent with which of these are static and which are dynamic.
|   PiperOrigin-RevId: 281560457
| * Implement unrolling of vector ops to finer-grained vector ops as a pattern.  [Nicolas Vasilache, 2019-11-20, 13 files, -29/+570]
|   This CL uses the pattern rewrite infrastructure to implement a simple
|   VectorOps -> VectorOps legalization strategy to unroll coarse-grained vector
|   operations into finer-grained ones. The transformation is written using
|   local pattern rewrites to allow composition with other rewrites. It proceeds
|   by iteratively introducing fake cast ops and cleaning, canonicalizing, or
|   lowering them away where appropriate.
|   This is an example of writing transformations as compositions of local
|   pattern rewrites that should enable us to make them significantly more
|   declarative.
|   PiperOrigin-RevId: 281555100