path: root/mlir
Commit message | Author | Age | Files | Lines
...
* Add the LLVM IR unreachable instruction to the LLVMIR dialect. — Eric Schweitz, 2019-08-08 (2 files, +9/-0)
  http://llvm.org/docs/LangRef.html#unreachable-instruction Closes tensorflow/mlir#64 PiperOrigin-RevId: 262301557
* NFC: Update FuncOp::addEntryBlock to return the newly inserted block. — River Riddle, 2019-08-07 (8 files, +11/-20)
  The entry block is often needed shortly after insertion. This removes the need to perform an additional lookup in such cases. PiperOrigin-RevId: 262265671
* Initialize local variables for opcode to fix MSAN failures — Lei Zhang, 2019-08-07 (1 file, +3/-3)
  PiperOrigin-RevId: 262225919
* Add utility 'replaceAllUsesWith' methods to Operation. — River Riddle, 2019-08-07 (6 files, +33/-13)
  These methods allow replacing the uses of results either with an existing operation that has the same number of results, or with a range of values. This removes a number of hand-rolled result replacement loops and simplifies replacement for operations with multiple results. PiperOrigin-RevId: 262206600
* Improve support for opaque types in MLIR, allowing dialects to opt into supporting opaque types, and providing ODS support for matching them. — Chris Lattner, 2019-08-07 (5 files, +45/-8)
  PiperOrigin-RevId: 262183028
* Fix verification of zero-dim memref in affine.load/affine.store/std.load/std.store — Diego Caballero, 2019-08-07 (8 files, +53/-9)
  Verification complained when using zero-dimensional memrefs in affine.load, affine.store, std.load and std.store. This PR extends verification so that those memrefs can be used. Closes tensorflow/mlir#58 COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/58 from dcaballe:dcaballe/zero-dim 49bcdcd45c52c48beca776431328e5ce551dfa9e PiperOrigin-RevId: 262164916
* Have ValueUseIterator template use OperandType instead of IROperand. — Andy Ly, 2019-08-06 (1 file, +1/-1)
  This was causing some issues when using helper methods like llvm::make_early_inc_range on Value::getUses(), resulting in IROperand instead of OpOperand. PiperOrigin-RevId: 262056425
* NFC: Simplify ModuleTerminatorOp by using the HasParent trait. — River Riddle, 2019-08-06 (3 files, +2/-15)
  PiperOrigin-RevId: 261962104
* Remove ops in regions/blocks from worklist when parent op is being removed via GreedyPatternRewriteDriver::replaceOp. — Andy Ly, 2019-08-06 (4 files, +41/-0)
  This fixes a bug where ops inside the parent op are visited even though the parent op has been removed. PiperOrigin-RevId: 261953580
* NFC: Simplify ModuleOp by using the SingleBlockImplicitTerminator trait. — River Riddle, 2019-08-06 (3 files, +11/-22)
  PiperOrigin-RevId: 261944712
* Emit matchAndRewrite() for declarative rewrite rules — Lei Zhang, 2019-08-06 (1 file, +107/-115)
  Previously we emitted separate match() and rewrite() methods, which required conveying a match state struct in a unique_ptr across these two methods. Changing to emit matchAndRewrite() simplifies the picture. PiperOrigin-RevId: 261906804
* [spirv] Provide decorations in batch for op construction — Lei Zhang, 2019-08-06 (1 file, +10/-11)
  Instead of setting the attributes for decorations one by one after constructing the op, this CL changes to attach all the attributes for decorations to the attribute vector used for constructing the op. This should be simpler and more efficient. PiperOrigin-RevId: 261905578
* Add a region to linalg.generic — Nicolas Vasilache, 2019-08-06 (9 files, +292/-39)
  This CL extends the Linalg GenericOp with an alternative way of specifying the body of the computation, based on a single-block region. The "fun" attribute becomes optional: either a SymbolRef "fun" attribute or a single-block region must be specified to describe the side-effect-free computation. Upon lowering to loops, the new region body is inlined in the innermost loop. The parser, verifier and pretty printer are extended. Appropriate roundtrip, negative and lowering-to-loop tests are added. PiperOrigin-RevId: 261895568
* Refactor Linalg ops to loop lowering (NFC) — Nicolas Vasilache, 2019-08-06 (14 files, +358/-269)
  This CL modifies the LowerLinalgToLoopsPass to use RewritePattern. This will make it easier to inline Linalg generic functions and regions when emitting to loops in a subsequent CL. PiperOrigin-RevId: 261894120
* Add TTI pass initialization to pass managers. — Diego Caballero, 2019-08-05 (4 files, +41/-16)
  Many LLVM transformations benefit from knowing the target. This enables optimizations, especially in a JIT context when the target is (generally) well-known. Closes tensorflow/mlir#49 PiperOrigin-RevId: 261840617
* NFC: Implement OwningRewritePatternList as a class instead of a using directive. — River Riddle, 2019-08-05 (34 files, +113/-133)
  This allows for proper forward declaration, as opposed to leaking the internal implementation via a using directive. This also allows all pattern building to go through 'insert' methods on the OwningRewritePatternList, replacing uses of 'push_back' and 'RewriteListBuilder'. PiperOrigin-RevId: 261816316
* Fix header guard. — Suharsh Sivakumar, 2019-08-05 (1 file, +1/-0)
  PiperOrigin-RevId: 261774919
* Drop linalg.range_intersect op — Nicolas Vasilache, 2019-08-05 (5 files, +1/-120)
  This op is not useful. PiperOrigin-RevId: 261665736
* Use SingleBlockImplicitTerminator trait for spv.module — Lei Zhang, 2019-08-05 (3 files, +17/-11)
  This trait provides the ensureTerminator() utility function and the checks to make sure a spv.module is indeed terminated with spv._module_end. PiperOrigin-RevId: 261664153
* Introduce custom syntax for llvm.func — Alex Zinenko, 2019-08-05 (7 files, +190/-25)
  Similar to all LLVM dialect operations, llvm.func needs to have custom syntax. Use the generic FunctionLike printer and parser to implement it. PiperOrigin-RevId: 261641755
* [mlir-translate] Fix test suite. — Denis Khalikov, 2019-08-05 (1 file, +10/-10)
  The LLVM IR printer was changed in LLVM r367755: it now prints value numbers for unnamed function arguments. Closes tensorflow/mlir#67 COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/67 from denis0x0D:sandbox/fix_mlir_translate ae46844e66f34a02e0cf86782ddadc5bce58b30d PiperOrigin-RevId: 261640048
* Remove non-needed includes from ConvertControlFlowToCFG.cpp (NFC) — Mehdi Amini, 2019-08-04 (1 file, +0/-5)
  The includes related to the LLVM dialect are not used in this file and introduce an implicit dependency between the two libraries which isn't reflected in the CMakeLists.txt, causing non-deterministic build failures. PiperOrigin-RevId: 261576935
* Fix ExecutionEngine post-update in upstream LLVM — Alex Zinenko, 2019-08-04 (1 file, +3/-1)
  LLVM r367686 changed the locking scheme to avoid potential deadlocks, along with the related llvm::orc::ThreadSafeModule APIs that ExecutionEngine was relying upon, breaking the MLIR build. Update our use of ThreadSafeModule to unbreak the build. PiperOrigin-RevId: 261566571
* [ODS] Add new definitions for non-negative integer attributes — Lei Zhang, 2019-08-03 (3 files, +57/-5)
  This CL adds a new NonNegativeIntAttrBase class and two instantiations, one for I32 and the other for I64. PiperOrigin-RevId: 261513292
* Fix clang 5.0 by using type aliases for LLVM DenseSet/Map — Mehdi Amini, 2019-08-03 (4 files, +7/-5)
  When inlining the declaration for llvm::DenseSet/DenseMap in the mlir namespace from a forward declaration, clang does not take the defaults for the template parameters if they are declared later:
  ```
  namespace llvm {
  template <typename Foo> class DenseMap;
  }
  namespace mlir {
  using llvm::DenseMap;
  }
  namespace llvm {
  template <typename Foo = int> class DenseMap {};
  }
  namespace mlir {
  DenseMap<> map;
  }
  ```
  PiperOrigin-RevId: 261495612
* Add a generic Linalg op — Nicolas Vasilache, 2019-08-02 (16 files, +772/-70)
  This CL introduces a linalg.generic op to represent generic tensor contraction operations on views. A linalg.generic operation requires a number of attributes that are sufficient to emit the computation in scalar form as well as compute the appropriate subviews to enable tiling and fusion. These attributes are very similar to the attributes for existing operations such as linalg.matmul etc., and existing operations can be implemented with the generic form. In the future, most existing operations can be implemented using the generic form.
  This CL starts by splitting out most of the functionality of the linalg::NInputsAndOutputs trait into a ViewTrait that queries the per-instance properties of the op. This allows using the attribute information. This exposes an ordering-of-verifiers issue where ViewTrait::verify uses attributes but the verifiers for those attributes have not been run. The desired behavior would be for the verifiers of the attributes specified in the builder to execute first, but it is not the case atm. As a consequence, to emit proper error messages and avoid crashing, some of the linalg.generic methods are defensive as such:
  ```
  unsigned getNumInputs() {
    // This is redundant with the `n_views` attribute verifier but ordering of
    // verifiers may exhibit cases where we crash instead of emitting an error
    // message.
    if (!getAttr("n_views") || n_views().getValue().size() != 2)
      return 0;
  ```
  In pretty-printed form, the specific attributes required for linalg.generic are factored out in an independent dictionary named "_". When parsing, its content is flattened and the "_name" is dropped. This allows using aliasing for reducing boilerplate at each linalg.generic invocation while benefiting from the Tablegen'd verifier form for each named attribute in the dictionary.
  For instance, implementing linalg.matmul in terms of linalg.generic resembles:
  ```
  func @mac(%a: f32, %b: f32, %c: f32) -> f32 {
    %d = mulf %a, %b: f32
    %e = addf %c, %d: f32
    return %e: f32
  }
  #matmul_accesses = [
    (m, n, k) -> (m, k),
    (m, n, k) -> (k, n),
    (m, n, k) -> (m, n)
  ]
  #matmul_trait = {
    doc = "C(m, n) += A(m, k) * B(k, n)",
    fun = @mac,
    indexing_maps = #matmul_accesses,
    library_call = "linalg_matmul",
    n_views = [2, 1],
    n_loop_types = [2, 1, 0]
  }
  ```
  And can be used in multiple places as:
  ```
  linalg.generic #matmul_trait %A, %B, %C [other-attributes] :
    !linalg.view<?x?xf32>, !linalg.view<?x?xf32>, !linalg.view<?x?xf32>
  ```
  In the future it would be great to have a mechanism to alias / register a new linalg.op as a pair of linalg.generic, #trait. Also, note that one could theoretically only specify the `doc` string and parse all the attributes from it. PiperOrigin-RevId: 261338740
* Fully qualify DenseMap. — Jacques Pienaar, 2019-08-02 (1 file, +2/-1)
  PiperOrigin-RevId: 261325481
* Add StdIndexedValue to EDSC helpers — Diego Caballero, 2019-08-02 (2 files, +40/-1)
  Add StdIndexedValue to the EDSC helpers so that we can use it to generate std.load and std.store in EDSC. Closes tensorflow/mlir#59 PiperOrigin-RevId: 261324965
* AffineDataCopyGeneration: don't use CL flag values inside the pass — Alex Zinenko, 2019-08-02 (1 file, +22/-13)
  The AffineDataCopyGeneration pass relied on command-line flags for internal logic in several places, which makes it unusable in a library context (i.e. outside a standalone mlir-opt binary that does the command-line parsing). Define configuration flags in the constructor instead, and set them up with command line-based defaults to maintain the original behavior. PiperOrigin-RevId: 261322364
* WritingAPass doc: demonstrate registration of a non-default-constructible pass — Alex Zinenko, 2019-08-02 (1 file, +19/-0)
  This functionality was added recently and is intended to ensure that parametric passes can be configured programmatically and not only from command-line flags, which are mostly useless outside of a standalone mlir-opt binary. PiperOrigin-RevId: 261320932
* Add missing include of DenseMap in MLIRContext.cpp — Mehdi Amini, 2019-08-01 (1 file, +1/-0)
  This fixes the build of MLIR on macOS when built within TensorFlow. PiperOrigin-RevId: 261223250
* Introduce explicit copying optimization by generalizing the DMA generation pass — Uday Bondhugula, 2019-08-01 (5 files, +400/-195)
  Explicit copying to contiguous buffers is a standard technique to avoid conflict misses and TLB misses, and improve hardware prefetching performance. When done in conjunction with cache tiling, it nearly eliminates all cache conflict and TLB misses, and a single hardware prefetch stream is needed per data tile.
  - generalize/extend the DMA generation pass (renamed data copying pass) to perform either point-wise explicit copies to fast memory buffers or DMAs (depending on a cmd line option). All logic is the same as the erstwhile -dma-generate.
  - -affine-dma-generate is now renamed -affine-data-copy; when the -dma flag is provided, DMAs are generated, or else explicit copy loops are generated (point-wise) by default.
  - point-wise copying could be used for CPUs (or GPUs); some indicative performance numbers with a "C" version of the MLIR when compiled with and without this optimization (about 2x improvement here). With a matmul on 4096^2 matrices on a single core of an Intel Core i7 Skylake i7-8700K with clang 8.0.0:
    clang -O3: 518s
    clang -O3 with MLIR tiling (128x128): 24.5s
    clang -O3 with MLIR tiling + data copying: 12.4s
    (code equivalent to test/Transforms/data-copy.mlir func @matmul)
  - fix some misleading comments.
  - change default fast-mem space to 0 (more intuitive now with the default copy generation using point-wise copies instead of DMAs)
  On a simple 3-d matmul loop nest, code generated with -affine-data-copy:
  ```
  affine.for %arg3 = 0 to 4096 step 128 {
    affine.for %arg4 = 0 to 4096 step 128 {
      %0 = affine.apply #map0(%arg3, %arg4)
      %1 = affine.apply #map1(%arg3, %arg4)
      %2 = alloc() : memref<128x128xf32, 2>  // Copy-in Out matrix.
      affine.for %arg5 = 0 to 128 {
        %5 = affine.apply #map2(%arg3, %arg5)
        affine.for %arg6 = 0 to 128 {
          %6 = affine.apply #map2(%arg4, %arg6)
          %7 = load %arg2[%5, %6] : memref<4096x4096xf32>
          affine.store %7, %2[%arg5, %arg6] : memref<128x128xf32, 2>
        }
      }
      affine.for %arg5 = 0 to 4096 step 128 {
        %5 = affine.apply #map0(%arg3, %arg5)
        %6 = affine.apply #map1(%arg3, %arg5)
        %7 = alloc() : memref<128x128xf32, 2>  // Copy-in LHS.
        affine.for %arg6 = 0 to 128 {
          %11 = affine.apply #map2(%arg3, %arg6)
          affine.for %arg7 = 0 to 128 {
            %12 = affine.apply #map2(%arg5, %arg7)
            %13 = load %arg0[%11, %12] : memref<4096x4096xf32>
            affine.store %13, %7[%arg6, %arg7] : memref<128x128xf32, 2>
          }
        }
        %8 = affine.apply #map0(%arg5, %arg4)
        %9 = affine.apply #map1(%arg5, %arg4)
        %10 = alloc() : memref<128x128xf32, 2>  // Copy-in RHS.
        affine.for %arg6 = 0 to 128 {
          %11 = affine.apply #map2(%arg5, %arg6)
          affine.for %arg7 = 0 to 128 {
            %12 = affine.apply #map2(%arg4, %arg7)
            %13 = load %arg1[%11, %12] : memref<4096x4096xf32>
            affine.store %13, %10[%arg6, %arg7] : memref<128x128xf32, 2>
          }
        }
        // Compute.
        affine.for %arg6 = #map7(%arg3) to #map8(%arg3) {
          affine.for %arg7 = #map7(%arg4) to #map8(%arg4) {
            affine.for %arg8 = #map7(%arg5) to #map8(%arg5) {
              %11 = affine.load %7[-%arg3 + %arg6, -%arg5 + %arg8] : memref<128x128xf32, 2>
              %12 = affine.load %10[-%arg5 + %arg8, -%arg4 + %arg7] : memref<128x128xf32, 2>
              %13 = affine.load %2[-%arg3 + %arg6, -%arg4 + %arg7] : memref<128x128xf32, 2>
              %14 = mulf %11, %12 : f32
              %15 = addf %13, %14 : f32
              affine.store %15, %2[-%arg3 + %arg6, -%arg4 + %arg7] : memref<128x128xf32, 2>
            }
          }
        }
        dealloc %10 : memref<128x128xf32, 2>
        dealloc %7 : memref<128x128xf32, 2>
      }
      %3 = affine.apply #map0(%arg3, %arg4)
      %4 = affine.apply #map1(%arg3, %arg4)
      // Copy out result matrix.
      affine.for %arg5 = 0 to 128 {
        %5 = affine.apply #map2(%arg3, %arg5)
        affine.for %arg6 = 0 to 128 {
          %6 = affine.apply #map2(%arg4, %arg6)
          %7 = affine.load %2[%arg5, %arg6] : memref<128x128xf32, 2>
          store %7, %arg2[%5, %6] : memref<4096x4096xf32>
        }
      }
      dealloc %2 : memref<128x128xf32, 2>
    }
  }
  ```
  With -affine-data-copy -dma:
  ```
  affine.for %arg3 = 0 to 4096 step 128 {
    %0 = affine.apply #map3(%arg3)
    %1 = alloc() : memref<128xf32, 2>
    %2 = alloc() : memref<1xi32>
    affine.dma_start %arg2[%arg3], %1[%c0], %2[%c0], %c128_0 : memref<4096xf32>, memref<128xf32, 2>, memref<1xi32>
    affine.dma_wait %2[%c0], %c128_0 : memref<1xi32>
    %3 = alloc() : memref<1xi32>
    affine.for %arg4 = 0 to 4096 step 128 {
      %5 = affine.apply #map0(%arg3, %arg4)
      %6 = affine.apply #map1(%arg3, %arg4)
      %7 = alloc() : memref<128x128xf32, 2>
      %8 = alloc() : memref<1xi32>
      affine.dma_start %arg0[%arg3, %arg4], %7[%c0, %c0], %8[%c0], %c16384, %c4096, %c128_2 : memref<4096x4096xf32>, memref<128x128xf32, 2>, memref<1xi32>
      affine.dma_wait %8[%c0], %c16384 : memref<1xi32>
      %9 = affine.apply #map3(%arg4)
      %10 = alloc() : memref<128xf32, 2>
      %11 = alloc() : memref<1xi32>
      affine.dma_start %arg1[%arg4], %10[%c0], %11[%c0], %c128_1 : memref<4096xf32>, memref<128xf32, 2>, memref<1xi32>
      affine.dma_wait %11[%c0], %c128_1 : memref<1xi32>
      affine.for %arg5 = #map3(%arg3) to #map5(%arg3) {
        affine.for %arg6 = #map3(%arg4) to #map5(%arg4) {
          %12 = affine.load %7[-%arg3 + %arg5, -%arg4 + %arg6] : memref<128x128xf32, 2>
          %13 = affine.load %10[-%arg4 + %arg6] : memref<128xf32, 2>
          %14 = affine.load %1[-%arg3 + %arg5] : memref<128xf32, 2>
          %15 = mulf %12, %13 : f32
          %16 = addf %14, %15 : f32
          affine.store %16, %1[-%arg3 + %arg5] : memref<128xf32, 2>
        }
      }
      dealloc %11 : memref<1xi32>
      dealloc %10 : memref<128xf32, 2>
      dealloc %8 : memref<1xi32>
      dealloc %7 : memref<128x128xf32, 2>
    }
    %4 = affine.apply #map3(%arg3)
    affine.dma_start %1[%c0], %arg2[%arg3], %3[%c0], %c128 : memref<128xf32, 2>, memref<4096xf32>, memref<1xi32>
    affine.dma_wait %3[%c0], %c128 : memref<1xi32>
    dealloc %3 : memref<1xi32>
    dealloc %2 : memref<1xi32>
    dealloc %1 : memref<128xf32, 2>
  }
  ```
  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com> Closes tensorflow/mlir#50 PiperOrigin-RevId: 261221903
* Qualify StringRef to fix Windows build failure — Lei Zhang, 2019-08-01 (1 file, +2/-1)
  PiperOrigin-RevId: 261195069
* [spirv] Add support for specialization constants — Lei Zhang, 2019-08-01 (6 files, +262/-151)
  This CL extends the existing spv.constant op to also support specialization constants by adding an extra unit attribute on it. PiperOrigin-RevId: 261194869
* Add FIR, the Flang project's IR, to the dialect registry. — Eric Schweitz, 2019-08-01 (1 file, +1/-0)
  Closes tensorflow/mlir#62 PiperOrigin-RevId: 261187850
* [spirv] Add binary logical operations. — Denis Khalikov, 2019-08-01 (5 files, +589/-27)
  Add binary logical operations according to spec section 3.32.15: OpIEqual, OpINotEqual, OpUGreaterThan, OpSGreaterThan, OpUGreaterThanEqual, OpSGreaterThanEqual, OpULessThan, OpSLessThan, OpULessThanEqual, OpSLessThanEqual. Closes tensorflow/mlir#61 PiperOrigin-RevId: 261181281
* Replace the verifyUnusedValue directive with HasNoUseOf constraint — Lei Zhang, 2019-08-01 (8 files, +28/-101)
  verifyUnusedValue is a bit strange given that it is specified in a result pattern but used to generate match statements. Now that we support multi-result ops better, we can retire it and replace it with a HasNoUseOf constraint. This reduces the number of mechanisms. PiperOrigin-RevId: 261166863
* Migrate pattern symbol binding tests to use TestDialect — Lei Zhang, 2019-07-31 (3 files, +62/-70)
  PiperOrigin-RevId: 261045611
* Fix support for auxiliary ops in declarative rewrite rules — Lei Zhang, 2019-07-31 (6 files, +215/-181)
  We allow generating more ops than what are needed for replacing the matched root op. Only the last N static values generated are used as replacements; the others serve as auxiliary ops/values for building the replacement. With the introduction of multi-result op support, an op, if used as a whole, may be used to replace multiple static values of the matched root op. We need to consider this when calculating the result range a generated op is to replace. For example, we can have the following patterns:
  ```tblgen
  def : Pattern<(ThreeResultOp ...),
                [(OneResultOp ...), (OneResultOp ...), (OneResultOp ...)]>;
  // Two ops to replace all three results
  def : Pattern<(ThreeResultOp ...), [(TwoResultOp ...), (OneResultOp ...)]>;
  // One op to replace all three results
  def : Pat<(ThreeResultOp ...), (ThreeResultOp ...)>;
  def : Pattern<(ThreeResultOp ...), [(AuxiliaryOp ...), (ThreeResultOp ...)]>;
  ```
  PiperOrigin-RevId: 261017235
* NFC: refactor ODS builder generation — Lei Zhang, 2019-07-31 (3 files, +192/-150)
  Previously we used one single method with lots of branches to generate multiple builders. This made the method difficult to follow and modify. This CL splits the method into multiple dedicated ones, by extracting common logic into helper methods while leaving logic specific to each builder in its own method. PiperOrigin-RevId: 261011082
* Use operand number during serialization to get the <id>s of the operands — Mahesh Ravishankar, 2019-07-31 (2 files, +12/-1)
  During serialization, the operand number must be used to get the values associated with an operand. Using the argument number in the Op specification was wrong since some of the elements in the arguments list might be attributes on the operation. This resulted in a segfault during serialization. Add a test that exercises that path. PiperOrigin-RevId: 260977758
* [spirv] Add binary arithmetic operations tensorflow/mlir#2. — Denis Khalikov, 2019-07-31 (4 files, +263/-1)
  Add binary operations such as: OpUDiv, OpSDiv, OpUMod, OpSRem, OpSMod. Closes tensorflow/mlir#56 COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/56 from denis0x0D:sandbox/bin_ops_int 4959325a693b4658b978a8b97f79b8237eb39764 PiperOrigin-RevId: 260961681
* Add missing include file to StringExtrasTest.cpp — Mahesh Ravishankar, 2019-07-30 (2 files, +3/-1)
  Use of std::isupper and std::islower needs the <cctype> header file. Fix that, and also fix the header of a file to match the file name. PiperOrigin-RevId: 260816852
* Support hexadecimal floats in tensor literals — Alex Zinenko, 2019-07-30 (3 files, +174/-115)
  Extend the recently introduced support for hexadecimal float literals to tensor literals, which may also contain special floating-point values such as infinities and NaNs. Modify TensorLiteralParser to store the list of tokens representing values until the type is parsed, instead of trying to guess the tensor element type from the token kinds (hexadecimal values can be either integers or floats, and can be mixed with both). Maintain the error reports as close as possible to the existing implementation to avoid disturbing the tests. They can be improved in a separate clean-up if deemed necessary. PiperOrigin-RevId: 260794716
* Add support for (de)serialization of SPIR-V Op Decorations — Mahesh Ravishankar, 2019-07-30 (10 files, +427/-50)
  All non-argument attributes specified for an operation are treated as decorations on the result value and (de)serialized using the OpDecorate instruction. An error is generated if an attribute is not an argument and the name doesn't correspond to a Decoration enum. Names of the attributes that represent decorations are the snake-case-ified version of the Decoration name. Add utility methods to convert to snake case and camel case. PiperOrigin-RevId: 260792638
* Add support for hexadecimal float literals — Alex Zinenko, 2019-07-30 (5 files, +188/-17)
  MLIR does not have support for parsing special floating-point values such as infinities and NaNs. If programmatically constructed, these values are printed as NaN and (+-)Inf and cannot be parsed back. Add parser support for hexadecimal literals in float attributes, following LLVM IR. The literal corresponds to the in-memory representation of the floating-point value. IEEE 754 defines a range of possible values for NaNs; storing the bitwise representation allows MLIR to properly round-trip NaNs with different bit values of significands. The initial version of this commit was missing support for float literals that used to be printed in decimal notation as a fallback but ended up being printed in hexadecimal format, which became the fallback for special values. The decimal fallback behavior was not exercised by tests. It is currently reinstated and tested by the newly added test @f32_potential_precision_loss in parser.mlir. PiperOrigin-RevId: 260790900
* Link in MLIRGPUtoSPIRVTransforms with mlir-opt — Mahesh Ravishankar, 2019-07-30 (1 file, +1/-0)
  Add a missed library that needs to be linked with mlir-opt. Its absence results in a test failure in MLIR due to the pass `-convert-gpu-to-spirv` not being found. PiperOrigin-RevId: 260773067
* Add std::move in UniformSupport. — Jacques Pienaar, 2019-07-30 (1 file, +1/-1)
  Fixes build warnings on clang-8; no warnings on redundant moves on gcc-(6.5, 7.4, 8.3). Closes tensorflow/mlir#41 PiperOrigin-RevId: 260764269
* Initial implementation to translate kernel fn in GPU Dialect to SPIR-V Dialect — Mahesh Ravishankar, 2019-07-30 (20 files, +618/-145)
  This CL adds an initial implementation for translation of a kernel function in the GPU Dialect (used with a gpu.launch_kernel op) to a spv.Module. The original function is translated into an entry function. Most of the heavy lifting is done by adding TypeConversion and other utility functions/classes that provide most of the functionality to translate from the Standard Dialect to the SPIR-V Dialect. These are intended to be reusable in the implementation of different dialect conversion pipelines. Note: some of the files have been renamed to be consistent with the norm used by the other Conversion frameworks. PiperOrigin-RevId: 260759165
* [spirv] Add basic infrastructure for negative deserializer tests — Lei Zhang, 2019-07-30 (8 files, +318/-59)
  We are relying on the serializer to construct positive cases to drive the test for the deserializer. This leaves negative cases untested. This CL adds a basic test fixture for covering the negative corner cases to enforce a more robust deserializer. Refactored common SPIR-V building methods out of the serializer to share them with the deserialization test. PiperOrigin-RevId: 260742733