path: root/mlir/lib/Dialect
Commit message | Author | Age | Files | Lines
...
* Allow null Attribute for value when building GlobalOp. | Christian Sigg | 2019-09-24 | 1 | -1/+2
  PiperOrigin-RevId: 270853596
* Make spirv::RuntimeArrayType part of spirv::CompositeType. | Mahesh Ravishankar | 2019-09-23 | 1 | -0/+17
  According to the SPIR-V spec, spirv::CompositeType includes spirv::RuntimeArrayType. This allows using objects of spirv::RuntimeArrayType with spirv::AccessChainOp.
  PiperOrigin-RevId: 270809492
* Handle OpMemberName instruction in SPIR-V deserializer. | Mahesh Ravishankar | 2019-09-23 | 3 | -28/+52
  Add support in the deserializer for the OpMemberName instruction. For now the name is just processed and not associated with the spirv::StructType being built; that needs an enhancement to spirv::StructType itself.
  Add tests to check for errors reported during deserialization, with some refactoring to factor out common utility functions.
  PiperOrigin-RevId: 270794524
* Simplify the way spirv::StructTypes are parsed. | Mahesh Ravishankar | 2019-09-23 | 3 | -124/+85
  The existing logic to parse spirv::StructTypes is very brittle. This change simplifies the parsing logic a lot. The simplification also allows member decorations to be separated by commas instead of spaces (which was an artifact of the existing parsing logic).
  The change also needs a modification to mlir::parseType to return the number of chars parsed; a new parseType method is added to do so. Also allow specification of spirv::StructType with no members.
  PiperOrigin-RevId: 270739672
* Add interfaces for call-like/callable operations. | River Riddle | 2019-09-23 | 1 | -1/+6
  These two operation interfaces will be used in a followup to support building a callgraph:
  * CallOpInterface - Operations providing this interface are call-like, and have a "call" target. A call target may be a symbol reference, via SymbolRefAttr, or an SSA value.
  * CallableOpInterface - Operations providing this interface define destinations for call-like operations, e.g. FuncOp. These operations may define any number of callable regions.
  PiperOrigin-RevId: 270723300
* Add variants of interleave that take a separator | Jacques Pienaar | 2019-09-23 | 1 | -3/+1
  Make the common case of a string separator easier to specify.
  PiperOrigin-RevId: 270697581
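  The kind of convenience this enables, as a minimal standalone sketch; the helper name and signature below are illustrative, not necessarily the exact MLIR API:

    // Illustrative sketch: an interleave helper that takes the separator as a
    // plain string, so callers no longer need to pass a "print the separator"
    // lambda for the common case of comma-separated output.
    #include <iostream>
    #include <string>
    #include <vector>

    template <typename Container>
    void interleave(const Container &c, std::ostream &os,
                    const std::string &separator) {
      bool first = true;
      for (const auto &value : c) {
        if (!first)
          os << separator;
        os << value;
        first = false;
      }
    }

    int main() {
      std::vector<int> values = {1, 2, 3};
      interleave(values, std::cout, ", "); // prints "1, 2, 3"
      std::cout << "\n";
    }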
* Outline GPU kernel function into a nested module. | Christian Sigg | 2019-09-23 | 1 | -4/+33
  Roll forward of commit 5684a12.
  When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.
  PiperOrigin-RevId: 270639748
* Fix a number of Clang-Tidy warnings. | Christian Sigg | 2019-09-23 | 6 | -8/+8
  PiperOrigin-RevId: 270632324
* Fix undefined reference to mlir::getElementTypeOrSelf(mlir::Type) | Denis Khalikov | 2019-09-22 | 1 | -2/+2
  Fix undefined reference:
  mlir/lib/Dialect/StandardOps/Ops.cpp:2029: undefined reference to `mlir::getElementTypeOrSelf(mlir::Type)'
  Closes tensorflow/mlir#144
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/144 from denis0x0D:sandbox/fix_undef 494d4f7fa2e98ba21954d2b2f7ec1776b9397e08
  PiperOrigin-RevId: 270545190
* Add integer sign- and zero-extension and truncation to standard. | Manuel Freiberger | 2019-09-21 | 1 | -0/+67
  This adds sign- and zero-extension and truncation of integer types to the standard dialect. It allows performing integer type conversions without having to go to the LLVM dialect and introduce custom type casts (between standard and LLVM integer types).
  Closes tensorflow/mlir#134
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/134 from ombre5733:sext-zext-trunc-in-std c7657bc84c0ca66b304e53ec03797e09152e4d31
  PiperOrigin-RevId: 270479722
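  A standalone sketch of the integer conversions these ops model; it illustrates the sign-extension, zero-extension and truncation semantics only and does not use the actual op names or any MLIR API:

    #include <cstdint>
    #include <cstdio>

    int main() {
      int16_t narrow = -2; // bit pattern 0xFFFE

      // Sign-extension preserves the value by replicating the sign bit.
      int32_t sext = static_cast<int32_t>(narrow);          // -2 (0xFFFFFFFE)

      // Zero-extension preserves the bit pattern, filling high bits with zeros.
      uint32_t zext = static_cast<uint16_t>(narrow);        // 65534 (0x0000FFFE)

      // Truncation keeps only the low bits of a wider value.
      int8_t trunc = static_cast<int8_t>(int32_t{0x1234});  // 0x34 == 52

      std::printf("sext=%d zext=%u trunc=%d\n", sext, zext, trunc);
      return 0;
    }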
* [spirv] Add OpControlBarrier and OpMemoryBarrier. | Denis Khalikov | 2019-09-21 | 3 | -11/+186
  Add OpControlBarrier and OpMemoryBarrier (de)serialization.
  Closes tensorflow/mlir#130
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/130 from denis0x0D:sandbox/memory_barrier 2e3fff16bca44904dc1039592cb9a65d526faea8
  PiperOrigin-RevId: 270457478
* Make GlobalOp's value attribute optional. | Christian Sigg | 2019-09-21 | 1 | -11/+19
  Make GlobalOp's value attribute an OptionalAttr. Change code that uses the value to handle 'nullopt'. Translate an uninitialized value attribute to llvm::UndefValue.
  PiperOrigin-RevId: 270423646
* NFC: Pass OpAsmPrinter by reference instead of by pointer. | River Riddle | 2019-09-20 | 9 | -592/+591
  MLIR follows the LLVM style of pass-by-reference.
  PiperOrigin-RevId: 270401378
* NFC: Pass OperationState by reference instead of by pointer. | River Riddle | 2019-09-20 | 10 | -607/+605
  MLIR follows the LLVM convention of passing by reference instead of by pointer.
  PiperOrigin-RevId: 270396945
* NFC: Pass OpAsmParser by reference instead of by pointer. | River Riddle | 2019-09-20 | 9 | -916/+904
  MLIR follows the LLVM style of pass-by-reference.
  PiperOrigin-RevId: 270315612
* Allow specification of decorations on SPIR-V StructType members. | Mahesh Ravishankar | 2019-09-19 | 4 | -107/+226
  Allow specification of decorations on SPIR-V StructType members. If the struct has layout information, these decorations are to be specified after the offset specification of the member. These decorations are emitted as OpMemberDecorate instructions on the struct <id>. Update (de)serialization to handle these decorations.
  PiperOrigin-RevId: 270130136
* Automated rollback of commit 5684a12434f923d03b6870f2aa16226bfb0b38b6 | George Karpenkov | 2019-09-19 | 1 | -33/+4
  PiperOrigin-RevId: 270126672
* Quantize attribute values by per-axis quantization parameters | Feng Liu | 2019-09-19 | 2 | -8/+55
  A new converter with per-axis quantization parameters is added to quantize a dense elements attribute. For each slice along the quantization axis, it creates a uniform quantized value converter, with a different scale and zero point, and quantizes the values in the slice. The current implementation doesn't handle sparse elements attributes.
  PiperOrigin-RevId: 270121986
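  A standalone sketch of the per-axis idea described above (not the MLIR converter itself; the buffer layout and function signature are assumptions for illustration): each slice along the quantization axis is quantized with its own scale and zero point.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Quantize a row-major [channels x elementsPerChannel] buffer with the
    // quantization axis along the channels dimension.
    std::vector<int8_t> quantizePerAxis(const std::vector<float> &values,
                                        const std::vector<float> &scales,
                                        const std::vector<int32_t> &zeroPoints,
                                        std::size_t elementsPerChannel) {
      std::vector<int8_t> result(values.size());
      for (std::size_t c = 0; c < scales.size(); ++c) {
        for (std::size_t i = 0; i < elementsPerChannel; ++i) {
          std::size_t idx = c * elementsPerChannel + i;
          // q = clamp(round(x / scale[c]) + zeroPoint[c], qmin, qmax)
          int32_t q =
              static_cast<int32_t>(std::lround(values[idx] / scales[c])) +
              zeroPoints[c];
          result[idx] = static_cast<int8_t>(std::clamp(q, -128, 127));
        }
      }
      return result;
    }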
* Outline GPU kernel function into a nested module. | MLIR Team | 2019-09-19 | 1 | -4/+33
  When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.
  PiperOrigin-RevId: 269987720
* SDBM: support sum expressions on the LHS of stripe expressions | Alex Zinenko | 2019-09-18 | 2 | -13/+17
  Introduce support for applying the stripe operator to sum expressions, as in (x + A) # B = x + A - (x + A) mod B. This is required to represent a combination of tiling and padding in the SDBM framework, and is a valid SDBM construct that was not originally supported.
  PiperOrigin-RevId: 269758807
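  A quick standalone spot-check of the identity quoted above, reading v # B as "v rounded down to the nearest multiple of B" (an interpretation assumed here, and valid for the non-negative values used):

    #include <cassert>

    // stripe(v, B) = v - v mod B, i.e. v rounded down to a multiple of B.
    // For non-negative v the C++ % operator matches the mathematical mod.
    long stripe(long v, long b) { return v - v % b; }

    int main() {
      long x = 5, A = 3, B = 4;
      // (x + A) # B = (x + A) - (x + A) mod B: (5 + 3) # 4 = 8 - 0 = 8.
      assert(stripe(x + A, B) == (x + A) - (x + A) % B);
      assert(stripe(x + A, B) == 8);
      return 0;
    }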
* Simplify SDBM expressions more aggressively in operators and conversions | Alex Zinenko | 2019-09-18 | 1 | -32/+52
  Extend SDBM simplification patterns to support more cases where the addition of two expressions, each involving one or two variables, would result in a sum expression that only contains one variable and thus remains in the SDBM domain. This is made possible by the new canonical structure of SDBM where the constant term appears once. This simplification will be necessary to support round-tripping of stripe expressions containing constant terms on the LHS through affine expressions.
  PiperOrigin-RevId: 269757732
* Add support to OpAsmParser for parsing unknown keywords. | River Riddle | 2019-09-17 | 2 | -4/+3
  This is useful in several cases; for example, a user may want to sugar the syntax of a string (as we do with custom operation syntax), or avoid many nested ifs for parsing a set of known keywords.
  PiperOrigin-RevId: 269695451
* Add (de)serialization support for OpRuntimeArray. | Mahesh Ravishankar | 2019-09-17 | 2 | -0/+31
  Update the SPIR-V (de)serialization to handle RuntimeArrayType.
  PiperOrigin-RevId: 269667196
* Register a -test-spirv-roundtrip hook to mlir-translate | Lei Zhang | 2019-09-17 | 4 | -134/+158
  This CL registers a new mlir-translate hook, -test-spirv-roundtrip, for testing the SPIR-V serialization and deserialization round-trip. This CL also moves the existing -serialize-spirv and -deserialize-spirv hooks to one source file.
  PiperOrigin-RevId: 269659528
* Change MLIR translation functions signature | Lei Zhang | 2019-09-17 | 2 | -28/+15
  This CL changes translation functions to take MemoryBuffer as input and raw_ostream as output. It is generally better to avoid handling files directly in a library (unless the library is specifically for file manipulation), and we can unify all file handling in the mlir-translate binary itself.
  PiperOrigin-RevId: 269625911
* Add rewrite pattern to compose maps into affine load/stores | Uday Bondhugula | 2019-09-17 | 1 | -30/+73
  - Add a canonicalization pattern to compose maps into affine loads/stores; templatize the pattern and reuse it for affine.apply as well.
  - Rename getIndices -> getMapOperands() (getIndices is confusing since these are no longer the indices themselves but operands to the map whose results are the indices). This also makes the accessor uniform across affine.apply/load/store. Change arg names on the affine load/store builders to avoid confusion. Drop an unused, confusing build method on AffineStoreOp.
  - Update the incomplete doc comment for canonicalizeMapAndOperands (this was missed in a previous update).
  Addresses issue tensorflow/mlir#121
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#122
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/122 from bondhugula:compose-load-store e71de1771e56a85c4282c10cb43f30cef0701c4f
  PiperOrigin-RevId: 269619540
* Autogenerate (de)serialization for Extended Instruction Sets | Mahesh Ravishankar | 2019-09-16 | 6 | -72/+123
  A generic mechanism for (de)serialization of extended instruction sets is added with this CL. To facilitate this, a new class "SPV_ExtendedInstSetOp" is added, which is a base class for all operations corresponding to extended instruction sets. The methods to (de)serialize such ops, as well as their dispatch, are generated automatically.
  The behavior controlled by autogenSerialization and hasOpcode is also slightly modified to enable this. They are now decoupled:
  1) Setting hasOpcode=1 means the operation has a corresponding opcode in the SPIR-V binary format, and its dispatch for (de)serialization is automatically generated.
  2) Setting autogenSerialization=1 generates the functions for (de)serialization automatically.
  So now it is possible to have hasOpcode=0 and autogenSerialization=1 (for example SPV_ExtendedInstSetOp).
  Since the dispatch functions are also auto-generated, the input file needs to contain all operations. To this effect, SPIRVGLSLOps.td is included into SPIRVOps.td. This makes the previously added SPIRVGLSLOps.h and SPIRVGLSLOps.cpp unnecessary, and they are deleted.
  SPIRVUtilsGen.cpp is also changed to make better use of formatv, making the code more readable.
  PiperOrigin-RevId: 269456263
* [spirv] Add support for function calls. | Denis Khalikov | 2019-09-16 | 3 | -6/+206
  Add spv.FunctionCall operation and (de)serialization.
  Closes tensorflow/mlir#137
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/137 from denis0x0D:sandbox/function_call_op e2e6f07d21e7f23e8b44c7df8a8ab784f3356ce4
  PiperOrigin-RevId: 269437167
* [spirv] Add support for BitEnumAttr | Lei Zhang | 2019-09-16 | 3 | -4/+7
  Certain enum classes in SPIR-V, like function/loop control and memory access, are bitmasks. This CL introduces a BitEnumAttr to properly model this and drive auto-generation of verification code and utility functions.
  We still store the attribute using a 32-bit IntegerAttr for minimal memory footprint and easy (de)serialization. But the utility conversion functions are adjusted to inspect each bit and generate "|"-concatenated strings for the bits, and vice versa.
  Each such enum class has a "None" case that means no bit is set, which needs special handling. Because of this, the logic is not general anymore, so right now the definition is placed in the SPIR-V dialect. If later this turns out to be useful for other dialects, then we can see how to properly adjust it and move it to OpBase.td.
  Added tests for SPV_MemoryAccess to check and demonstrate.
  PiperOrigin-RevId: 269350620
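  A standalone sketch of the "|"-concatenated string handling for such a bit enum (the enumerant names and values below are illustrative, not the generated MLIR code):

    #include <cstdint>
    #include <string>

    enum class MemoryAccess : uint32_t {
      None = 0,
      Volatile = 1,
      Aligned = 2,
      Nontemporal = 4,
    };

    // Each set bit contributes its symbol; "None" stands for no bit set.
    std::string stringifyMemoryAccess(MemoryAccess access) {
      uint32_t bits = static_cast<uint32_t>(access);
      if (bits == 0)
        return "None";
      std::string result;
      auto append = [&](uint32_t bit, const char *symbol) {
        if (bits & bit) {
          if (!result.empty())
            result += "|";
          result += symbol;
        }
      };
      append(1, "Volatile");
      append(2, "Aligned");
      append(4, "Nontemporal");
      return result; // e.g. "Volatile|Aligned" for bits 0x3
    }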
* Overhaul the SDBM expression kind hierarchy | Alex Zinenko | 2019-09-16 | 2 | -116/+197
  Swap the allowed nesting of sum and diff expressions: now a diff expression can contain a sum expression, but only on the left hand side. A difference of two sum expressions must be canonicalized by grouping their constant terms in a single expression. This change of structure became possible thanks to the introduction of the "direct" super-kind. It is necessary to enable support of sum expressions on the left hand side of the stripe expression.
  SDBM expressions are now grouped into the following structure:
  - expression
    - varying
      - direct
        - sum <- (term, constant)
        - term
          - symbol
          - dimension
          - stripe <- (term, constant)
      - negation <- (direct)
      - difference <- (direct, term)
    - constant
  The notation <- (...) denotes the types of subexpressions a compound expression can combine.
  PiperOrigin-RevId: 269337222
* Unify how errors are emitted in LaunchFuncOp verification. | MLIR Team | 2019-09-16 | 1 | -3/+3
  PiperOrigin-RevId: 269331869
* Drop makePositionAttr and the like in favor of Builder::getI64ArrayAttr | Alex Zinenko | 2019-09-16 | 1 | -24/+14
  The helper functions makePositionAttr() and positionAttr() were originally introduced in the lowering-to-LLVM-dialect pass to construct integer array attributes that are used for static positions in extract/insertelement. Constructing an integer array attribute being fairly common, a utility function Builder::getI64ArrayAttr was later introduced into the Builder API. Drop makePositionAttr and similar homegrown functions and use that API instead.
  PiperOrigin-RevId: 269295836
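  For reference, a minimal sketch of the Builder API mentioned above (this assumes an MLIR build and headers; the surrounding helper function is made up for illustration):

    #include "mlir/IR/Builders.h"
    #include "mlir/IR/MLIRContext.h"

    // Build an i64 array attribute such as [0, 1] for a static position,
    // instead of going through a hand-rolled makePositionAttr-style helper.
    mlir::ArrayAttr getPositionAttr(mlir::MLIRContext *context) {
      mlir::Builder builder(context);
      return builder.getI64ArrayAttr({0, 1});
    }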
* Add mechanism to specify extended instruction sets in SPIR-V. | Mahesh Ravishankar | 2019-09-15 | 3 | -0/+67
  Add support for specifying extended instruction sets. The operations in the SPIR-V dialect are named 'spv.<extension-name>.<op-name>'. Use this mechanism to define an 'Exp' operation from the GLSL(450) instructions. Later CLs will add support for (de)serialization of these operations, and update the dialect generation scripts to auto-generate the specification using the spec directly.
  Additional changes: add a type constraint to OpBase.td to check for vectors of specified lengths. This is used to check that the vector types used in the SPIR-V dialect are of length 2, 3 or 4. Update SPIRVBase.td to use this type constraint for vectors.
  PiperOrigin-RevId: 269234377
* Clean up build trip count analysis method - avoid mutating IR | Uday Bondhugula | 2019-09-14 | 1 | -70/+12
  - NFC on any pass/utility logic/output.
  - Resolve TODO: the method building loop trip count maps was creating and deleting affine.apply ops (transforming IR from under the analysis, strictly speaking). Introduce AffineValueMap::difference to do this correctly (without the need to create any IR).
  - Move AffineApplyNormalizer out so that its methods are reusable from AffineStructures.cpp; add a helper method 'normalize' to it. Fix AffineApplyNormalizer::renumberOneDim (Issue tensorflow/mlir#89).
  - Trim includes on touched files.
  - Add a test case for a scenario previously not covered.
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#133
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/133 from bondhugula:trip-count-build 7fc34d857f7788f98b641792cafad6f5bd50e47b
  PiperOrigin-RevId: 269101118
* Add pattern to canonicalize for loop bounds | Uday Bondhugula | 2019-09-13 | 1 | -2/+35
  - Add a pattern to canonicalize affine.for loop bounds (using canonicalizeMapAndOperands).
  - Rename AffineForLoopBoundFolder -> AffineForLoopBoundFolder for consistency.
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#111
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/111 from bondhugula:bound-canonicalize ee8fb7f43a7ffd45f6df3f53c95098d8b7e494c7
  PiperOrigin-RevId: 269041220
* add missing memref cast fold pattern for dim op | Uday Bondhugula | 2019-09-13 | 1 | -0/+6
  - Add the missing canonicalization pattern to fold memref_cast + dim to dim (needed to propagate the constant when folding a dynamic shape to a static one).
  - Also fix an outdated/inconsistent comment in StandardOps/Ops.td.
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#126
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/126 from bondhugula:quickfix 4566e75e49685c532faffff91d64c5d83d4da524
  PiperOrigin-RevId: 269020058
* NFC: Finish replacing FunctionPassBase/ModulePassBase with OpPassBase. | River Riddle | 2019-09-13 | 8 | -9/+11
  These directives were temporary during the generalization of FunctionPass/ModulePass to OpPass.
  PiperOrigin-RevId: 268970259
* Cmpf constant folding for nan and inf | Geoffrey Martin-Noble | 2019-09-12 | 1 | -4/+4
  PiperOrigin-RevId: 268783645
* [spirv] Add support for spv.loop (de)serialization | Lei Zhang | 2019-09-11 | 3 | -50/+696
  This CL adds support for serializing and deserializing spv.loop ops. This adds support for spv.Branch and spv.BranchConditional op (de)serialization, too, because they are needed for spv.loop.
  PiperOrigin-RevId: 268536962
* Rename SDBMPositiveExpr to SDBMTermExpr | Alex Zinenko | 2019-09-11 | 3 | -42/+39
  This better reflects how this kind of expression is used and avoids the potential confusion, since the expression can take negative values. Term expressions comprise dimensions, symbols and stripe expressions. In an SDBM domain, a stripe expression always corresponds to a variable, input or temporary. This expression can appear anywhere an input variable can, including on the LHS of other stripe expressions.
  PiperOrigin-RevId: 268486066
* Add folding rule for spv.CompositeExtract | Lei Zhang | 2019-09-10 | 1 | -1/+34
  If the composite is a constant, we can fold it away. This only supports vector and array constants for now, given that struct constants are not yet supported in spv.constant.
  PiperOrigin-RevId: 268350340
* Remove the constraint that min / max should straddle zero | Feng Liu | 2019-09-10 | 1 | -12/+13
  Since we apply nudging to the zero point to make sure the nudged zero points are in the range of [qmin, qmax], the constraint that rmin / rmax should straddle zero isn't necessary. This also matches the documentation of TensorFlow's FakeQuantWithMinMaxArgs op, where min and max don't need to straddle zero: https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args
  PiperOrigin-RevId: 268296285
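  A standalone sketch of the zero-point nudging referred to above, using a standard uniform-quantization formulation (an assumption for illustration, not the MLIR code itself): the ideal zero point is computed from rmin and the scale, then clamped and rounded into [qmin, qmax], so the real range no longer has to straddle zero.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Choose a scale and a nudged zero point mapping the real range
    // [rmin, rmax] onto the integer range [qmin, qmax].
    void chooseQuantParams(double rmin, double rmax, int qmin, int qmax,
                           double &scale, int &zeroPoint) {
      scale = (rmax - rmin) / static_cast<double>(qmax - qmin);
      // The "ideal" zero point may fall outside [qmin, qmax] when the real
      // range does not contain zero; nudge it back into range.
      double zeroPointFromMin = qmin - rmin / scale;
      double nudged =
          std::min(static_cast<double>(qmax),
                   std::max(static_cast<double>(qmin), zeroPointFromMin));
      zeroPoint = static_cast<int>(std::lround(nudged));
    }

    int main() {
      double scale = 0.0;
      int zeroPoint = 0;
      // A range that does not straddle zero works: the zero point gets nudged.
      chooseQuantParams(/*rmin=*/1.0, /*rmax=*/5.0, /*qmin=*/0, /*qmax=*/255,
                        scale, zeroPoint);
      std::printf("scale=%f zeroPoint=%d\n", scale, zeroPoint); // zeroPoint == 0
    }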
* Convert ConstFakeQuantPerAxis to qcast and dcast pair | Feng Liu | 2019-09-10 | 2 | -27/+70
  This also adds a test of fakeQuantAttrsToType for per-channel fake quant.
  PiperOrigin-RevId: 268260032
* Remove unused variable | Jacques Pienaar | 2019-09-10 | 1 | -2/+1
  PiperOrigin-RevId: 268173638
* [NFC] Rename ExpressedToUniformQuantizedType to ExpressedToQuantizedType | Feng Liu | 2019-09-09 | 2 | -9/+8
  PiperOrigin-RevId: 268090906
* Convert per channel fake quant attributes to type | Feng Liu | 2019-09-09 | 1 | -36/+102
  For per-channel fake quant attributes, the returned type should be UniformQuantizedPerAxisType. Currently, this method isn't under test because we haven't added the quant_ConstFakeQuantPerAxis op and the convert method.
  PiperOrigin-RevId: 268084017
* Addressing some late review comments on kernel inlining. | Stephan Herhut | 2019-09-09 | 1 | -15/+15
  Just formatting and better lit tests, no functional change.
  PiperOrigin-RevId: 267942907
* Restrict affine inlining to just Function operations. | River Riddle | 2019-09-06 | 1 | -4/+6
  The current restrictions on dims/symbols require a top-level symbol for the conservative case of a non-affine region. This should be relaxed in the future.
  PiperOrigin-RevId: 267641838
* Add custom builder for AffineIfOp | Nagy Mostafa | 2019-09-06 | 1 | -0/+11
  Closes tensorflow/mlir#109
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/109 from nmostafa:nmostafa/AffineIfOp 7dbf2115f0092ffab26381ea8704aa05a0253971
  PiperOrigin-RevId: 267633077
* Simplify Linalg ABI integration with external function calls. | Nicolas Vasilache | 2019-09-06 | 1 | -121/+123
  View descriptors are converted to a *pointer to* LLVM struct to avoid ABI issues related to C struct packing. This creates unnecessary complexity and hampers unification with memrefs. Instead, this CL makes view descriptors convert to an LLVM struct (as they did originally) and promotes all structs to pointers right before calling an external function.
  PiperOrigin-RevId: 267602693