| field | value | date |
|---|---|---|
| author | Jacques Pienaar <jpienaar@google.com> | 2019-12-18 10:48:02 -0800 |
| committer | A. Unique TensorFlower <gardener@tensorflow.org> | 2019-12-18 10:57:59 -0800 |
| commit | d7e2cc9bd1d17cbc7182bd904a9173817745525a | |
| tree | 9cf94f5abf6ca3992c5e5089ee7bf2cec9cbda44 | |
| parent | 2666b97314ad1b50f88fcc4376ae941f601f67ea | |
Update code block designations
'```mlir' indicates that a code block contains MLIR code and should use MLIR
syntax highlighting, while '{.mlir}' was a markdown extension that used a style
file to color the background of the code block differently. That background-color
extension was a custom one that we can retire now that we have syntax
highlighting.

Also change '```td' to '```tablegen' to match the chroma syntax-highlighting
designation.
PiperOrigin-RevId: 286222976
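As a minimal sketch of the two fence conversions this commit performs (shown
against a hypothetical `example.md`, not one of the files actually touched
below), a typical pair of hunks would look like:

````diff
--- a/example.md
+++ b/example.md
@@ -1,3 +1,3 @@
-```{.mlir}
+```mlir
 %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
 ```
@@ -10,3 +10,3 @@
-```td
+```tablegen
 def MyTrait : NativeOpTrait<"MyTrait">;
 ```
````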
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | mlir/g3doc/Dialects/SPIR-V.md | 6 |
| -rw-r--r-- | mlir/g3doc/Dialects/Standard.md | 2 |
| -rw-r--r-- | mlir/g3doc/QuickstartRewrites.md | 6 |
| -rw-r--r-- | mlir/g3doc/Traits.md | 4 |
| -rw-r--r-- | mlir/g3doc/Tutorials/Toy/Ch-3.md | 20 |
| -rw-r--r-- | mlir/g3doc/Tutorials/Toy/Ch-4.md | 10 |
| -rw-r--r-- | mlir/g3doc/Tutorials/Toy/Ch-5.md | 2 |
| -rw-r--r-- | mlir/g3doc/Tutorials/Toy/Ch-6.md | 4 |
| -rw-r--r-- | mlir/g3doc/Tutorials/Toy/Ch-7.md | 4 |
| -rw-r--r-- | mlir/g3doc/includes/style.css | 11 |
| -rw-r--r-- | mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.h | 2 |
| -rw-r--r-- | mlir/include/mlir/Dialect/Linalg/IR/LinalgTypes.h | 2 |
| -rw-r--r-- | mlir/include/mlir/IR/AffineMap.h | 12 |
| -rw-r--r-- | mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp | 2 |
14 files changed, 38 insertions, 49 deletions
````diff
diff --git a/mlir/g3doc/Dialects/SPIR-V.md b/mlir/g3doc/Dialects/SPIR-V.md
index 1413b181407..b753435c3c4 100644
--- a/mlir/g3doc/Dialects/SPIR-V.md
+++ b/mlir/g3doc/Dialects/SPIR-V.md
@@ -105,7 +105,7 @@ array-type ::= `!spv.array<` integer-literal `x` element-type `>`
 
 For example,
 
-```{.mlir}
+```mlir
 !spv.array<4 x i32>
 !spv.array<16 x vector<4 x f32>>
 ```
@@ -154,7 +154,7 @@ pointer-type ::= `!spv.ptr<` element-type `,` storage-class `>`
 
 For example,
 
-```{.mlir}
+```mlir
 !spv.ptr<i32, Function>
 !spv.ptr<vector<4 x f32>, Uniform>
 ```
@@ -169,7 +169,7 @@ runtime-array-type ::= `!spv.rtarray<` element-type `>`
 
 For example,
 
-```{.mlir}
+```mlir
 !spv.rtarray<i32>
 !spv.rtarray<vector<4 x f32>>
 ```
diff --git a/mlir/g3doc/Dialects/Standard.md b/mlir/g3doc/Dialects/Standard.md
index 05ec703b059..f84a2c94e92 100644
--- a/mlir/g3doc/Dialects/Standard.md
+++ b/mlir/g3doc/Dialects/Standard.md
@@ -374,7 +374,7 @@ Example:
 TODO: This operation is easy to extend to broadcast to dynamically shaped
 tensors in the same way dynamically shaped memrefs are handled.
 
-```mlir {.mlir}
+```mlir
 // Broadcasts %s to a 2-d dynamically shaped tensor, with %m, %n binding
 // to the sizes of the two dynamic dimensions.
 %m = "foo"() : () -> (index)
diff --git a/mlir/g3doc/QuickstartRewrites.md b/mlir/g3doc/QuickstartRewrites.md
index 2e2192071ae..d7bf9a54370 100644
--- a/mlir/g3doc/QuickstartRewrites.md
+++ b/mlir/g3doc/QuickstartRewrites.md
@@ -43,7 +43,7 @@ operations are generated from. To define an operation one needs to specify:
     are ignored by the main op and doc generators, but could be used in, say,
     the translation from a dialect to another representation.
 
-```td {.td}
+```tablegen
 def TFL_LeakyReluOp: TFL_Op<TFL_Dialect, "leaky_relu",
                             [NoSideEffect, SameValueType]>,
                      Results<(outs Tensor)> {
@@ -99,7 +99,7 @@ generated. Let us continue with LeakyRelu. To map from TensorFlow's
 `LeakyRelu` to TensorFlow Lite's `LeakyRelu`:
 
-```td {.td}
+```tablegen
 def : Pat<(TF_LeakyReluOp $arg, F32Attr:$a),
           (TFL_LeakyReluOp $arg, $a)>
 ```
@@ -119,7 +119,7 @@ as destination then one could use a general native code fallback method. This
 consists of defining a pattern as well as adding a C++ function to perform the
 replacement:
 
-```td {.td}
+```tablegen
 def createTFLLeakyRelu : NativeCodeCall<
     "createTFLLeakyRelu($_builder, $0->getDefiningOp(), $1, $2)">;
diff --git a/mlir/g3doc/Traits.md b/mlir/g3doc/Traits.md
index 25e20234691..b233f9bef66 100644
--- a/mlir/g3doc/Traits.md
+++ b/mlir/g3doc/Traits.md
@@ -88,7 +88,7 @@ definition of the trait class. This can be done using the `NativeOpTrait` and
 `ParamNativeOpTrait` classes. `ParamNativeOpTrait` provides a mechanism in
 which to specify arguments to a parametric trait class with an internal `Impl`.
 
-```td
+```tablegen
 // The argument is the c++ trait class name.
 def MyTrait : NativeOpTrait<"MyTrait">;
@@ -100,7 +100,7 @@ class MyParametricTrait<int prop>
 
 These can then be used in the `traits` list of an op definition:
 
-```td
+```tablegen
 def OpWithInferTypeInterfaceOp : Op<...[MyTrait, MyParametricTrait<10>]> {
   ...
 }
 ```
diff --git a/mlir/g3doc/Tutorials/Toy/Ch-3.md b/mlir/g3doc/Tutorials/Toy/Ch-3.md
index 57936e61fa8..07ead64d455 100644
--- a/mlir/g3doc/Tutorials/Toy/Ch-3.md
+++ b/mlir/g3doc/Tutorials/Toy/Ch-3.md
@@ -36,7 +36,7 @@ def transpose_transpose(x) {
 
 Which corresponds to the following IR:
 
-```MLIR(.mlir)
+```mlir
 func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
   %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
   %1 = "toy.transpose"(%0) : (tensor<*xf64>) -> tensor<*xf64>
@@ -131,7 +131,7 @@ similar way to LLVM:
 Finally, we can run `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt` and
 observe our pattern in action:
 
-```MLIR(.mlir)
+```mlir
 func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
   %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
   "toy.return"(%arg0) : (tensor<*xf64>) -> ()
@@ -146,13 +146,13 @@ input. The Canonicalizer knows to clean up dead operations; however, MLIR
 conservatively assumes that operations may have side-effects. We can fix this
 by adding a new trait, `NoSideEffect`, to our `TransposeOp`:
 
-```TableGen(.td):
+```tablegen:
 def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {...}
 ```
 
 Let's retry now `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt`:
 
-```MLIR(.mlir)
+```mlir
 func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
   "toy.return"(%arg0) : (tensor<*xf64>) -> ()
 }
@@ -169,7 +169,7 @@ Declarative, rule-based pattern-match and rewrite (DRR) is an operation
 DAG-based declarative rewriter that provides a table-based syntax for
 pattern-match and rewrite rules:
 
-```TableGen(.td):
+```tablegen:
 class Pattern<
     dag sourcePattern, list<dag> resultPatterns,
     list<dag> additionalConstraints = [],
@@ -179,7 +179,7 @@ class Pattern<
 A redundant reshape optimization similar to SimplifyRedundantTranspose can be
 expressed more simply using DRR as follows:
 
-```TableGen(.td):
+```tablegen:
 // Reshape(Reshape(x)) = Reshape(x)
 def ReshapeReshapeOptPattern : Pat<(ReshapeOp(ReshapeOp $arg)),
                                    (ReshapeOp $arg)>;
@@ -193,7 +193,7 @@ transformation is conditional on some properties of the arguments and results.
 An example is a transformation that eliminates reshapes when they are
 redundant, i.e. when the input and output shapes are identical.
 
-```TableGen(.td):
+```tablegen:
 def TypesAreIdentical : Constraint<CPred<"$0->getType() == $1->getType()">>;
 def RedundantReshapeOptPattern : Pat<
   (ReshapeOp:$res $arg), (replaceWithValue $arg),
@@ -207,7 +207,7 @@ C++. An example of such an optimization is FoldConstantReshape, where we
 optimize Reshape of a constant value by reshaping the constant in place and
 eliminating the reshape operation.
 
-```TableGen(.td):
+```tablegen:
 def ReshapeConstant : NativeCodeCall<"$0.reshape(($1->getType()).cast<ShapedType>())">;
 def FoldConstantReshapeOptPattern : Pat<
   (ReshapeOp:$res (ConstantOp $arg)),
@@ -226,7 +226,7 @@ def main() {
 }
 ```
 
-```MLIR(.mlir)
+```mlir
 module {
   func @main() {
     %0 = "toy.constant"() {value = dense<[1.000000e+00, 2.000000e+00]> : tensor<2xf64>}
@@ -243,7 +243,7 @@ module {
 We can try to run `toyc-ch3 test/trivialReshape.toy -emit=mlir -opt` and
 observe our pattern in action:
 
-```MLIR(.mlir)
+```mlir
 module {
   func @main() {
     %0 = "toy.constant"() {value = dense<[[1.000000e+00], [2.000000e+00]]> \
diff --git a/mlir/g3doc/Tutorials/Toy/Ch-4.md b/mlir/g3doc/Tutorials/Toy/Ch-4.md
index b39380a15f4..ac124699c2f 100644
--- a/mlir/g3doc/Tutorials/Toy/Ch-4.md
+++ b/mlir/g3doc/Tutorials/Toy/Ch-4.md
@@ -107,7 +107,7 @@ and core to a single operation. The interface that we will be adding here is the
 To add this interface we just need to include the definition into our operation
 specification file (`Ops.td`):
 
-```.td
+```tablegen
 #ifdef MLIR_CALLINTERFACES
 #else
 include "mlir/Analysis/CallInterfaces.td"
@@ -116,7 +116,7 @@ include "mlir/Analysis/CallInterfaces.td"
 
 and add it to the traits list of `GenericCallOp`:
 
-```.td
+```tablegen
 def GenericCallOp : Toy_Op<"generic_call",
     [DeclareOpInterfaceMethods<CallOpInterface>]> {
   ...
@@ -176,7 +176,7 @@ the inliner expects an explicit cast operation to be inserted. For this, we
 need to add a new operation to the Toy dialect, `ToyCastOp`(toy.cast), to
 represent casts between two different shapes.
 
-```.td
+```tablegen
 def CastOp : Toy_Op<"cast", [NoSideEffect, SameOperandsAndResultShape]> {
   let summary = "shape cast operation";
   let description = [{
@@ -263,7 +263,7 @@ to be given to the generated C++ interface class as a template argument. For
 our purposes, we will name the generated class a simpler `ShapeInference`. We
 also provide a description for the interface.
 
-```.td
+```tablegen
 def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
   let description = [{
     Interface to access a registered method to infer the return types for an
@@ -279,7 +279,7 @@ the need. See the
 [ODS documentation](../../OpDefinitions.md#operation-interfaces)
 for more information.
 
-```.td
+```tablegen
 def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
   let description = [{
     Interface to access a registered method to infer the return types for an
diff --git a/mlir/g3doc/Tutorials/Toy/Ch-5.md b/mlir/g3doc/Tutorials/Toy/Ch-5.md
index 5573354aef1..1124cf14a43 100644
--- a/mlir/g3doc/Tutorials/Toy/Ch-5.md
+++ b/mlir/g3doc/Tutorials/Toy/Ch-5.md
@@ -237,7 +237,7 @@ def PrintOp : Toy_Op<"print"> {
 
 Looking back at our current working example:
 
-```.mlir
+```mlir
 func @main() {
   %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
   %2 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>
diff --git a/mlir/g3doc/Tutorials/Toy/Ch-6.md b/mlir/g3doc/Tutorials/Toy/Ch-6.md
index 4f1f3177811..939b2b4f776 100644
--- a/mlir/g3doc/Tutorials/Toy/Ch-6.md
+++ b/mlir/g3doc/Tutorials/Toy/Ch-6.md
@@ -113,7 +113,7 @@ that only legal operations will remain after the conversion.
 
 Looking back at our current working example:
 
-```.mlir
+```mlir
 func @main() {
   %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
   %2 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>
@@ -125,7 +125,7 @@ func @main() {
 
 We can now lower down to the LLVM dialect, which produces the following code:
 
-```.mlir
+```mlir
 llvm.func @free(!llvm<"i8*">)
 llvm.func @printf(!llvm<"i8*">, ...) -> !llvm.i32
 llvm.func @malloc(!llvm.i64) -> !llvm<"i8*">
diff --git a/mlir/g3doc/Tutorials/Toy/Ch-7.md b/mlir/g3doc/Tutorials/Toy/Ch-7.md
index 398983ac469..6298e8253e9 100644
--- a/mlir/g3doc/Tutorials/Toy/Ch-7.md
+++ b/mlir/g3doc/Tutorials/Toy/Ch-7.md
@@ -358,7 +358,7 @@ A few of our existing operations will need to be updated to handle
 `StructType`. The first step is to make the ODS framework aware of our Type so
 that we can use it in the operation definitions. A simple example is shown
 below:
 
-```td
+```tablegen
 // Provide a definition for the Toy StructType for use in ODS. This allows for
 // using StructType in a similar way to Tensor or MemRef.
 def Toy_StructType :
@@ -371,7 +371,7 @@ def Toy_Type : AnyTypeOf<[F64Tensor, Toy_StructType]>;
 We can then update our operations, e.g. `ReturnOp`, to also accept the
 `Toy_StructType`:
 
-```td
+```tablegen
 def ReturnOp : Toy_Op<"return", [Terminator, HasParent<"FuncOp">]> {
   ...
   let arguments = (ins Variadic<Toy_Type>:$input);
diff --git a/mlir/g3doc/includes/style.css b/mlir/g3doc/includes/style.css
deleted file mode 100644
index d47c43fab59..00000000000
--- a/mlir/g3doc/includes/style.css
+++ /dev/null
@@ -1,11 +0,0 @@
-.mlir {
-  background-color: #eef;
-}
-
-.ebnf {
-  background-color: #ffe;
-}
-
-.td {
-  background-color: #eef;
-}
diff --git a/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.h b/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.h
index 41155701b8d..2226b5ee6e4 100644
--- a/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.h
+++ b/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.h
@@ -67,7 +67,7 @@ std::string generateLibraryCallName(Operation *op);
 /// `A(i, k) * B(k, j) -> C(i, j)` will have the following, ordered, list of
 /// affine maps:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///   (
 ///     (i, j, k) -> (i, k),
 ///     (i, j, k) -> (k, j),
diff --git a/mlir/include/mlir/Dialect/Linalg/IR/LinalgTypes.h b/mlir/include/mlir/Dialect/Linalg/IR/LinalgTypes.h
index 181a79ce38d..f779c3de6ae 100644
--- a/mlir/include/mlir/Dialect/Linalg/IR/LinalgTypes.h
+++ b/mlir/include/mlir/Dialect/Linalg/IR/LinalgTypes.h
@@ -46,7 +46,7 @@ public:
 /// It is constructed by calling the linalg.range op with three values index of
 /// index type:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///   func @foo(%arg0 : index, %arg1 : index, %arg2 : index) {
 ///     %0 = linalg.range %arg0:%arg1:%arg2 : !linalg.range
 ///   }
diff --git a/mlir/include/mlir/IR/AffineMap.h b/mlir/include/mlir/IR/AffineMap.h
index ab07cfac227..abd3712b0e1 100644
--- a/mlir/include/mlir/IR/AffineMap.h
+++ b/mlir/include/mlir/IR/AffineMap.h
@@ -180,27 +180,27 @@ AffineMap simplifyAffineMap(AffineMap map);
 ///
 /// Example 1:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///   (d0, d1, d2) -> (d1, d1, d0, d2, d1, d2, d1, d0)
 ///                     0       2   3
 /// ```
 ///
 /// returns:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///   (d0, d1, d2, d3, d4, d5, d6, d7) -> (d2, d0, d3)
 /// ```
 ///
 /// Example 2:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///   (d0, d1, d2) -> (d1, d0 + d1, d0, d2, d1, d2, d1, d0)
 ///                     0            2   3
 /// ```
 ///
 /// returns:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///   (d0, d1, d2, d3, d4, d5, d6, d7) -> (d2, d0, d3)
 /// ```
 AffineMap inversePermutation(AffineMap map);
@@ -214,7 +214,7 @@ AffineMap inversePermutation(AffineMap map);
 /// Example:
 /// When applied to the following list of 3 affine maps,
 ///
-/// ```{.mlir}
+/// ```mlir
 ///   {
 ///     (i, j, k) -> (i, k),
 ///     (i, j, k) -> (k, j),
@@ -224,7 +224,7 @@ AffineMap inversePermutation(AffineMap map);
 ///
 /// Returns the map:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///   (i, j, k) -> (i, k, k, j, i, j)
 /// ```
 AffineMap concatAffineMaps(ArrayRef<AffineMap> maps);
diff --git a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
index 0fd29cdc6e0..31545168a6a 100644
--- a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
+++ b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
@@ -512,7 +512,7 @@ static LogicalResult verify(YieldOp op) {
 // A LinalgLibraryOp prints as:
 //
-// ```{.mlir}
+// ```mlir
 // concrete_op_name (ssa-inputs, ssa-outputs) : view-types
 // ```
 //
````