path: root/mlir/test
Commit log (message, author, date, files changed, lines added/removed)
...
* Upgrade/fix/simplify store to load forwarding (Uday Bondhugula, 2019-09-21; 1 file, +33/-0)
  - Fix store-to-load forwarding for a set of cases where forwarding should not
    have happened; use AffineValueMap-difference-based MemRefAccess equality
    checking. The utility logic is also greatly simplified.
  - Add missing ==/!= operators comparing AffineExpr with integers.
  - Add == and != operators on MemRefAccess.
  Closes tensorflow/mlir#136
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/136 from bondhugula:store-load-forwarding d79fd1add8bcfbd9fa71d841a6a9905340dcd792
  PiperOrigin-RevId: 270457011

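For context, a minimal sketch of what affine store-to-load forwarding does; the IR below is a hypothetical illustration (names `%v` and `%m` are invented), not taken from the patch:

```mlir
affine.for %i0 = 0 to 100 {
  // The store writes %v to element %i0; the load that follows reads the very
  // same element, so uses of %1 can be replaced directly with %v and the
  // load deleted.
  affine.store %v, %m[%i0] : memref<100xf32>
  %1 = affine.load %m[%i0] : memref<100xf32>
}
```

The MemRefAccess equality check mentioned above is what establishes that the two accesses touch the same element.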
* [ODS] Add support for FloatElementsAttr (Lei Zhang, 2019-09-21; 3 files, +73/-0)
  This CL adds a new FloatElementsAttr definition to ODS for float elements
  attributes of a certain type. Tests are added to show both verification and
  how to use it in patterns.
  PiperOrigin-RevId: 270455487

* Make GlobalOp's value attribute optional. (Christian Sigg, 2019-09-21; 2 files, +6/-5)
  Make GlobalOp's value attribute an OptionalAttr. Change code that uses the
  value to handle 'nullopt'. Translate an uninitialized value attribute to
  llvm::UndefValue.
  PiperOrigin-RevId: 270423646

* NFC: Pass OpAsmPrinter by reference instead of by pointer. (River Riddle, 2019-09-20; 3 files, +12/-12)
  MLIR follows the LLVM style of pass-by-reference.
  PiperOrigin-RevId: 270401378

* NFC: Pass OperationState by reference instead of by pointer. (River Riddle, 2019-09-20; 6 files, +64/-64)
  MLIR follows the LLVM convention of passing by reference instead of by
  pointer.
  PiperOrigin-RevId: 270396945

* NFC: Pass OpAsmParser by reference instead of by pointer. (River Riddle, 2019-09-20; 3 files, +19/-19)
  MLIR follows the LLVM style of pass-by-reference.
  PiperOrigin-RevId: 270315612

* Fix public build (Nicolas Vasilache, 2019-09-20; 1 file, +1/-0)
  TestMemRefStrideCalculation.cpp was missing.
  PiperOrigin-RevId: 270304543

* Add utility to extract strides from layout map in MemRefType. (Nicolas Vasilache, 2019-09-20; 2 files, +114/-0)
  The RFC for unifying Linalg and Affine compilation passes into an end-to-end
  flow discusses the notion of a strided MemRef
  (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
  This CL adds helper functions to extract strides from the layout map, which
  in turn will allow converting between a strided form of the type and a
  layout map. For now, strides are only computed on a single affine map with a
  single result (i.e. the closed subset of linearization maps that are
  compatible with striding semantics). This restriction will be reevaluated /
  lifted in the future based on concrete use cases.
  PiperOrigin-RevId: 270284686

* Allow specification of decorators on SPIR-V StructType members. (Mahesh Ravishankar, 2019-09-19; 2 files, +44/-2)
  Allow specification of decorators on SPIR-V StructType members. If the
  struct has layout information, these decorations are to be specified after
  the offset specification of the member. These decorations are emitted as
  OpMemberDecorate instructions on the struct <id>. Update (de)serialization
  to handle these decorations.
  PiperOrigin-RevId: 270130136

* Automated rollback of commit 5684a12434f923d03b6870f2aa16226bfb0b38b6 (George Karpenkov, 2019-09-19; 4 files, +44/-93)
  PiperOrigin-RevId: 270126672

* Quantize attribute values by per axis quantization parameters (Feng Liu, 2019-09-19; 1 file, +20/-0)
  A new converter with per-axis quantization parameters is added to quantize a
  dense elements attribute. For each slice along the quantization axis, it
  creates a uniform quantized value converter with a different scale and zero
  point, and quantizes the values in the slice. The current implementation
  doesn't handle sparse elements attributes.
  PiperOrigin-RevId: 270121986

* Add address space attribute to LLVMIR's GlobalOp. (MLIR Team, 2019-09-19; 2 files, +13/-0)
  PiperOrigin-RevId: 270012505

* Outline GPU kernel function into a nested module. (MLIR Team, 2019-09-19; 4 files, +93/-44)
  When outlining GPU kernels, put the kernel function inside a nested module.
  Then use a nested pipeline to generate the cubins, independently per kernel.
  In a final pass, move the cubins back to the parent module.
  PiperOrigin-RevId: 269987720

* Fix nested dominance relationship between parent results and child operations. (River Riddle, 2019-09-18; 1 file, +12/-0)
  This modifies DominanceInfo::properlyDominates(Value *value, Operation *op)
  to return false if the value is defined by a parent operation of 'op'. This
  prevents using values defined by the parent operation from within any child
  regions.
  PiperOrigin-RevId: 269934920

* Support symbolic operands for memref replacement; fix memrefNormalize (Uday Bondhugula, 2019-09-18; 1 file, +15/-0)
  - Allow symbols in the index remapping provided for memref replacement.
  - Fix a memref normalize crash on cases with layout maps with symbols.
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Reported by: Alex Zinenko
  Closes tensorflow/mlir#139
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/139 from bondhugula:memref-rep-symbols 2f48c1fdb5d4c58915bbddbd9f07b18541819233
  PiperOrigin-RevId: 269851182

* Add support to OpAsmParser for parsing unknown keywords. (River Riddle, 2019-09-17; 3 files, +35/-3)
  This is useful in several cases; for example, a user may want to sugar the
  syntax of a string (as we do with custom operation syntax), or avoid many
  nested ifs when parsing a set of known keywords.
  PiperOrigin-RevId: 269695451

* Add (de)serialization support for OpRuntimeArray. (Mahesh Ravishankar, 2019-09-17; 1 file, +8/-0)
  Update the SPIR-V (de)serialization to handle RuntimeArrayType.
  PiperOrigin-RevId: 269667196

* Register a -test-spirv-roundtrip hook to mlir-translate (Lei Zhang, 2019-09-17; 21 files, +21/-21)
  This CL registers a new mlir-translate hook, -test-spirv-roundtrip, for
  testing SPIR-V serialization and deserialization round-trip. This CL also
  moves the existing -serialize-spirv and -deserialize-spirv hooks to one
  source file.
  PiperOrigin-RevId: 269659528

* Add rewrite pattern to compose maps into affine load/stores (Uday Bondhugula, 2019-09-17; 2 files, +29/-3)
  - Add a canonicalization pattern to compose maps into affine loads/stores;
    templatize the pattern and reuse it for affine.apply as well.
  - Rename getIndices -> getMapOperands() (getIndices is confusing since these
    are no longer the indices themselves but operands to the map whose results
    are the indices). This also makes the accessor uniform across
    affine.apply/load/store. Change arg names on the affine load/store
    builders to avoid confusion. Drop an unused, confusing build method on
    AffineStoreOp.
  - Update the incomplete doc comment for canonicalizeMapAndOperands (this was
    missed in a previous update).
  Addresses issue tensorflow/mlir#121
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#122
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/122 from bondhugula:compose-load-store e71de1771e56a85c4282c10cb43f30cef0701c4f
  PiperOrigin-RevId: 269619540

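As an illustration of the composition, here is a hypothetical before/after sketch (the values `%i0` and `%m` are invented, and the sugared inline-map form of affine.load is assumed):

```mlir
// Before: an affine.apply feeds the load's index.
%idx = affine.apply (d0) -> (d0 + 1)(%i0)
%v0 = affine.load %m[%idx] : memref<100xf32>

// After canonicalization: the apply's map is composed into the load's own
// map, and the affine.apply becomes dead.
%v1 = affine.load %m[%i0 + 1] : memref<100xf32>
```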
* Autogenerate (de)serialization for Extended Instruction Sets (Mahesh Ravishankar, 2019-09-16; 2 files, +18/-9)
  A generic mechanism for (de)serialization of extended instruction sets is
  added with this CL. To facilitate this, a new class "SPV_ExtendedInstSetOp"
  is added, which is a base class for all operations corresponding to extended
  instruction sets. The methods to (de)serialize such ops, as well as their
  dispatch, are generated automatically.

  The behavior controlled by autogenSerialization and hasOpcode is also
  slightly modified to enable this. They are now decoupled:
  1) Setting hasOpcode=1 means the operation has a corresponding opcode in the
     SPIR-V binary format, and its dispatch for (de)serialization is
     automatically generated.
  2) Setting autogenSerialization=1 generates the function for
     (de)serialization automatically.
  So now it is possible to have hasOpcode=0 and autogenSerialization=1 (for
  example SPV_ExtendedInstSetOp).

  Since the dispatch functions are also auto-generated, the input file needs
  to contain all operations. To this effect, SPIRVGLSLOps.td is included into
  SPIRVOps.td. This makes the previously added SPIRVGLSLOps.h and
  SPIRVGLSLOps.cpp unnecessary, and they are deleted. The SPIRVUtilsGen.cpp is
  also changed to make better use of formatv, making the code more readable.
  PiperOrigin-RevId: 269456263

* [spirv] Add support for function calls. (Denis Khalikov, 2019-09-16; 2 files, +158/-0)
  Add spv.FunctionCall operation and (de)serialization.
  Closes tensorflow/mlir#137
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/137 from denis0x0D:sandbox/function_call_op e2e6f07d21e7f23e8b44c7df8a8ab784f3356ce4
  PiperOrigin-RevId: 269437167

* Add support for multi-level value mapping to DialectConversion. (River Riddle, 2019-09-16; 4 files, +102/-4)
  When performing A->B->C conversion, an operation may still refer to an
  operand of A. This makes it necessary to unmap through multiple levels of
  replacement for a specific value.
  PiperOrigin-RevId: 269367859

* [spirv] Add support for BitEnumAttr (Lei Zhang, 2019-09-16; 1 file, +78/-6)
  Certain enum classes in SPIR-V, like function/loop control and memory
  access, are bitmasks. This CL introduces a BitEnumAttr to properly model
  this and drive auto-generation of verification code and utility functions.

  We still store the attribute using a 32-bit IntegerAttr for minimal memory
  footprint and easy (de)serialization. But the utility conversion functions
  are adjusted to inspect each bit and generate "|"-concatenated strings for
  the bits, and vice versa.

  Each such enum class has a "None" case that means no bit is set. We need
  special handling for "None". Because of this, the logic is not general
  anymore, so right now the definition is placed in the SPIR-V dialect. If
  later this turns out to be useful for other dialects, then we can see how to
  properly adjust it and move it to OpBase.td.

  Added tests for SPV_MemoryAccess to check and demonstrate.
  PiperOrigin-RevId: 269350620

* Overhaul the SDBM expression kind hierarchy (Alex Zinenko, 2019-09-16; 1 file, +2/-2)
  Swap the allowed nesting of sum and diff expressions: now a diff expression
  can contain a sum expression, but only on the left hand side. A difference
  of two sum expressions must be canonicalized by grouping their constant
  terms in a single expression. This change of structure became possible
  thanks to the introduction of the "direct" super-kind. It is necessary to
  enable support of sum expressions on the left hand side of the stripe
  expression.

  SDBM expressions are now grouped into the following structure:

    - expression
      - varying
        - direct
          - sum <- (term, constant)
          - term
            - symbol
            - dimension
            - stripe <- (term, constant)
        - negation <- (direct)
        - difference <- (direct, term)
      - constant

  The notation <- (...) denotes the types of subexpressions a compound
  expression can combine.
  PiperOrigin-RevId: 269337222

* Add mechanism to specify extended instruction sets in SPIR-V. (Mahesh Ravishankar, 2019-09-15; 1 file, +49/-0)
  Add support for specifying extended instruction sets. The operations in the
  SPIR-V dialect are named as 'spv.<extension-name>.<op-name>'. Use this
  mechanism to define an 'Exp' operation from the GLSL(450) instructions.
  Later CLs will add support for (de)serialization of these operations, and
  update the dialect generation scripts to auto-generate the specification
  from the spec directly.

  Additional changes: add a type constraint to OpBase.td to check for vectors
  of specified lengths. This is used to check that the vector types used in
  the SPIR-V dialect are of lengths 2, 3 or 4. Update SPIRVBase.td to use this
  type constraint for vectors.
  PiperOrigin-RevId: 269234377

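A sketch of what an op under this naming scheme looks like in the IR; the exact assembly form here is an assumption based on the 'spv.<extension-name>.<op-name>' convention described above, with `%x` invented:

```mlir
// 'Exp' from the GLSL(450) extended instruction set.
%y = spv.GLSL.Exp %x : f32
```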
* Fix typo in test/AffineOps/ops.mlir (Uday Bondhugula, 2019-09-15; 1 file, +1/-1)
  Closes tensorflow/mlir#135
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/135 from bondhugula:patch-1 539a7f1b43d09ef539b2fd15875f8ac765600263
  PiperOrigin-RevId: 269187507

* Update the IRPrinter instrumentation to work on non function/module operations. (River Riddle, 2019-09-14; 1 file, +2/-2)
  This is necessary now that the pass manager may work on different types of
  operations.
  PiperOrigin-RevId: 269139669

* update normalizeMemRef utility; handle missing failure check + add more tests (Uday Bondhugula, 2019-09-14; 1 file, +42/-0)
  - Take care of symbolic operands with alloc.
  - Add a missing check for compose-map failure and a test case.
  - Add test cases on strides.
  - Drop an incorrect check for one-to-one'ness.
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#132
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/132 from bondhugula:normalize-memrefs 8aebf285fb0d7c19269d85255aed644657e327b7
  PiperOrigin-RevId: 269105947

* Clean up build trip count analysis method - avoid mutating IR (Uday Bondhugula, 2019-09-14; 1 file, +26/-4)
  - NFC on any pass/utility logic/output.
  - Resolve a TODO: the method building loop trip count maps was creating and
    deleting affine.apply ops (transforming IR from under analysis, strictly
    speaking). Introduce AffineValueMap::difference to do this correctly
    (without the need to create any IR).
  - Move AffineApplyNormalizer out so that its methods are reusable from
    AffineStructures.cpp; add a helper method 'normalize' to it. Fix
    AffineApplyNormalizer::renumberOneDim (Issue tensorflow/mlir#89).
  - Trim includes on touched files.
  - Add a test case for a scenario previously not covered.
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#133
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/133 from bondhugula:trip-count-build 7fc34d857f7788f98b641792cafad6f5bd50e47b
  PiperOrigin-RevId: 269101118

* Add pattern to canonicalize for loop bounds (Uday Bondhugula, 2019-09-13; 1 file, +30/-0)
  - Add a pattern to canonicalize affine.for loop bounds (using
    canonicalizeMapAndOperands).
  - Rename AffineForLoopBoundFolder -> AffineForLoopBoundFolder for
    consistency.
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#111
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/111 from bondhugula:bound-canonicalize ee8fb7f43a7ffd45f6df3f53c95098d8b7e494c7
  PiperOrigin-RevId: 269041220

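A hypothetical sketch of the effect of canonicalizing a loop bound (the constant operand and loop body op are invented for illustration):

```mlir
// Before: the lower bound is a trivial map applied to a constant operand.
%c4 = constant 4 : index
affine.for %i0 = (d0) -> (d0)(%c4) to 10 {
  "test.foo"() : () -> ()
}

// After canonicalization: the constant is folded into the bound itself.
affine.for %i0 = 4 to 10 {
  "test.foo"() : () -> ()
}
```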
* Verify that ModuleOps only contain dialect specific attributes. (River Riddle, 2019-09-13; 1 file, +7/-0)
  ModuleOp has no expected operations, so only dialect-specific attributes are
  valid.
  PiperOrigin-RevId: 269020062

* add missing memref cast fold pattern for dim op (Uday Bondhugula, 2019-09-13; 1 file, +8/-4)
  - Add the missing canonicalization pattern to fold memref_cast + dim to dim
    (needed to propagate the constant when folding a dynamic shape to a static
    one).
  - Also fix an outdated/inconsistent comment in StandardOps/Ops.td.
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#126
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/126 from bondhugula:quickfix 4566e75e49685c532faffff91d64c5d83d4da524
  PiperOrigin-RevId: 269020058

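A minimal sketch of the fold, with a hypothetical memref `%m` of static shape:

```mlir
// Before: the static size is hidden behind a memref_cast to a dynamic shape.
%0 = memref_cast %m : memref<4xf32> to memref<?xf32>
%d = dim %0, 0 : memref<?xf32>

// After folding: dim looks through the cast, and the size becomes a constant.
%c4 = constant 4 : index
```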
* Publicly expose the functionality to parse a textual pass pipeline. (River Riddle, 2019-09-13; 2 files, +17/-0)
  This allows users other than those on the command line to apply a textual
  description of a pipeline to a given pass manager.
  PiperOrigin-RevId: 269017028

* Add type constraints for shaped types with same rank and element count (Geoffrey Martin-Noble, 2019-09-13; 2 files, +78/-0)
  PiperOrigin-RevId: 269000237

* Update SPIR-V symbols and use GLSL450 instead of VulkanKHR (Lei Zhang, 2019-09-13; 19 files, +58/-58)
  SPIR-V recently published v1.5, which brings a bunch of symbols into core,
  so the suffix "KHR"/"EXT"/etc. is removed from those symbols. We use a
  script to pull information from the spec directly.

  Also changed conversion and tests to use the GLSL450 memory model instead of
  VulkanKHR. GLSL450 is still the main memory model supported by Vulkan
  shaders and it does not require an extra capability to enable.
  PiperOrigin-RevId: 268992661

* NFC: Finish replacing FunctionPassBase/ModulePassBase with OpPassBase. (River Riddle, 2019-09-13; 4 files, +4/-4)
  These directives were temporary during the generalization of
  FunctionPass/ModulePass to OpPass.
  PiperOrigin-RevId: 268970259

* Add tablegen class for memrefs with rank constraints (Geoffrey Martin-Noble, 2019-09-13; 2 files, +31/-0)
  PiperOrigin-RevId: 268968004

* Refactor pass pipeline command line parsing to support explicit pipeline strings. (River Riddle, 2019-09-13; 2 files, +37/-1)
  This allows for explicitly specifying the pipeline to add to the pass
  manager. This includes the nesting structure, as well as the
  passes/pipelines to run. A textual pipeline string is defined as a series of
  names, each of which may in itself recursively contain a nested pipeline
  description. A name is either the name of a registered pass or pass pipeline
  (e.g. "cse"), or the name of an operation type (e.g. "func"). For example,
  the following pipeline:

    $ mlir-opt foo.mlir -cse -canonicalize -lower-to-llvm

  could now be specified as:

    $ mlir-opt foo.mlir -pass-pipeline='func(cse, canonicalize), lower-to-llvm'

  This will allow for running pipelines on nested operations, like, say,
  spirv modules. This does not remove any of the current functionality, and in
  fact can be used in unison. The new option is available via 'pass-pipeline'.
  PiperOrigin-RevId: 268954279

* Cmpf constant folding for nan and inf (Geoffrey Martin-Noble, 2019-09-12; 1 file, +76/-8)
  PiperOrigin-RevId: 268783645

* NFC: Clean up constant fold tests (Geoffrey Martin-Noble, 2019-09-12; 1 file, +86/-85)
  Use variable captures to make constant folding tests less sensitive to
  printer/parser implementation details. See guidelines at
  https://github.com/tensorflow/mlir/blob/master/g3doc/TestingGuide.md
  PiperOrigin-RevId: 268780812

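A small sketch of the capture style the guideline recommends; the test lines below are hypothetical, not taken from the patch:

```mlir
// Brittle: hard-codes the SSA name the printer happens to choose.
// CHECK: %cst = constant 1.000000e+00 : f32
// CHECK: return %cst : f32

// Robust: capture whatever name is produced and check its use.
// CHECK: %[[CST:.*]] = constant 1.000000e+00 : f32
// CHECK: return %[[CST]] : f32
```

The captured pattern keeps the test valid even if the printer's name-numbering scheme changes.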
* [spirv] Add support for spv.loop (de)serialization (Lei Zhang, 2019-09-11; 1 file, +59/-0)
  This CL adds support for serializing and deserializing spv.loop ops. This
  adds support for spv.Branch and spv.BranchConditional op (de)serialization,
  too, because they are needed for spv.loop.
  PiperOrigin-RevId: 268536962

* Add folding rule for spv.CompositeExtract (Lei Zhang, 2019-09-10; 1 file, +41/-0)
  If the composite is a constant, we can fold it away. This only supports
  vector and array constants for now, given that struct constants are not
  supported in spv.constant yet.
  PiperOrigin-RevId: 268350340

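A hypothetical illustration of the fold on a vector constant (the exact printed forms are assumptions, not output from the patch):

```mlir
// Before: extracting element 2 from a constant composite.
%0 = spv.constant dense<[1.0, 2.0, 3.0, 4.0]> : vector<4xf32>
%1 = spv.CompositeExtract %0[2 : i32] : vector<4xf32>

// After folding: the extract collapses to the scalar constant.
%1 = spv.constant 3.000000e+00 : f32
```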
* Remove the constraint that min / max should stride zero (Feng Liu, 2019-09-10; 2 files, +37/-27)
  Since we apply nudging for the zero point to make sure the nudged zero
  points can be in the range of [qmin, qmax], the constraint that rmin / rmax
  should stride zero isn't necessary. This also matches the documentation of
  tensorflow's FakeQuantWithMinMaxArgs op, where min and max don't need to
  stride zero:
  https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args
  PiperOrigin-RevId: 268296285

* Convert ConstFakeQuantPerAxis to qcast and dcast pair (Feng Liu, 2019-09-10; 1 file, +19/-0)
  This also adds a test of fakeQuantAttrsToType for per-channel fake quant.
  PiperOrigin-RevId: 268260032

* Add quant.const_fake_quant_per_axis op (Feng Liu, 2019-09-09; 1 file, +15/-0)
  Compared to the existing quant.const_fake_quant op, the min and max
  attributes of this new op are per channel of the last dimension of the
  input.
  PiperOrigin-RevId: 268093722

* Add warpsize and laneid intrinsics to NVVM dialect. (MLIR Team, 2019-09-09; 1 file, +4/-0)
  PiperOrigin-RevId: 268041263

* Add support for coalescing adjacent nested pass pipelines. (River Riddle, 2019-09-09; 1 file, +5/-4)
  This allows for parallelizing across pipelines of multiple operation types.
  AdaptorPasses can now hold pass managers for multiple operation types and
  will dispatch based upon the operation being operated on.
  PiperOrigin-RevId: 268017344

* Addressing some late review comments on kernel inlining. (Stephan Herhut, 2019-09-09; 1 file, +3/-1)
  Just formatting and better lit tests, no functional change.
  PiperOrigin-RevId: 267942907

* Add `parseGenericOperation()` to the OpAsmParser (Mehdi Amini, 2019-09-08; 3 files, +64/-0)
  This method parses an operation in its generic form, from the current parser
  state. It is the symmetric counterpart of OpAsmPrinter::printGenericOp(). An
  immediate use case is illustrated in the test dialect, where an operation
  wraps another one in its region and makes use of a single-line pretty-print
  form.
  PiperOrigin-RevId: 267930869

* Refactor PassTiming to support nested pipelines. (River Riddle, 2019-09-08; 4 files, +82/-0)
  This is done via a new set of instrumentation hooks,
  runBeforePipeline/runAfterPipeline, that signal the lifetime of a pass
  pipeline on a specific operation type. These hooks also provide the parent
  thread of the pipeline, allowing for accurate merging of timers running on
  different threads.
  PiperOrigin-RevId: 267909193
