Diffstat (limited to 'mlir/docs/Tutorials/Toy')
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-1.md |  4 ++--
-rwxr-xr-x  mlir/docs/Tutorials/Toy/Ch-2.md |  4 ++--
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-3.md | 12 ++++++------
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-4.md |  2 +-
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-6.md |  7 +++----
5 files changed, 14 insertions(+), 15 deletions(-)
diff --git a/mlir/docs/Tutorials/Toy/Ch-1.md b/mlir/docs/Tutorials/Toy/Ch-1.md
index cb7f97cb3f6..e12f615b49b 100644
--- a/mlir/docs/Tutorials/Toy/Ch-1.md
+++ b/mlir/docs/Tutorials/Toy/Ch-1.md
@@ -51,7 +51,7 @@ of rank <= 2, and the only datatype in Toy is a 64-bit floating point type (aka
 and deallocation is automatically managed. But enough with the long description;
 nothing is better than walking through an example to get a better understanding:
 
-```Toy {.toy}
+```toy
 def main() {
   # Define a variable `a` with shape <2, 3>, initialized with the literal value.
   # The shape is inferred from the supplied literal.
@@ -74,7 +74,7 @@ tensors, but we don't know their dimensions). They are specialized for every
 newly discovered signature at call sites. Let's revisit the previous example by
 adding a user-defined function:
 
-```Toy {.toy}
+```toy
 # User defined generic function that operates on unknown shaped arguments.
 def multiply_transpose(a, b) {
   return transpose(a) * transpose(b);
diff --git a/mlir/docs/Tutorials/Toy/Ch-2.md b/mlir/docs/Tutorials/Toy/Ch-2.md
index ce46788f4ae..11c3936b546 100755
--- a/mlir/docs/Tutorials/Toy/Ch-2.md
+++ b/mlir/docs/Tutorials/Toy/Ch-2.md
@@ -351,7 +351,7 @@ At this point you probably might want to know what the C++ code generated by
 TableGen looks like. Simply run the `mlir-tblgen` command with the
 `gen-op-decls` or the `gen-op-defs` action like so:
 
-```
+```shell
 ${build_root}/bin/mlir-tblgen -gen-op-defs ${mlir_src_root}/examples/toy/Ch2/include/toy/Ops.td -I ${mlir_src_root}/include/
 ```
 
@@ -527,7 +527,7 @@ variadic operands, etc. Check out the
 At this point we can generate our "Toy IR". A simplified version of the previous
 example:
 
-```.toy
+```toy
 # User defined generic function that operates on unknown shaped arguments.
 def multiply_transpose(a, b) {
   return transpose(a) * transpose(b);
diff --git a/mlir/docs/Tutorials/Toy/Ch-3.md b/mlir/docs/Tutorials/Toy/Ch-3.md
index 615c2c1bbec..6ff9d3cb299 100644
--- a/mlir/docs/Tutorials/Toy/Ch-3.md
+++ b/mlir/docs/Tutorials/Toy/Ch-3.md
@@ -28,7 +28,7 @@ Let's start with a simple pattern and try to eliminate a sequence of two
 transpose that cancel out: `transpose(transpose(X)) -> X`. Here is the
 corresponding Toy example:
 
-```Toy(.toy)
+```toy
 def transpose_transpose(x) {
   return transpose(transpose(x));
 }
@@ -146,7 +146,7 @@ input. The Canonicalizer knows to clean up dead operations; however, MLIR
 conservatively assumes that operations may have side-effects. We can fix this
 by adding a new trait, `NoSideEffect`, to our `TransposeOp`:
 
-```tablegen:
+```tablegen
 def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {...}
 ```
 
@@ -169,7 +169,7 @@ Declarative, rule-based pattern-match and rewrite (DRR) is an operation
 DAG-based declarative rewriter that provides a table-based syntax for
 pattern-match and rewrite rules:
 
-```tablegen:
+```tablegen
 class Pattern<
     dag sourcePattern, list<dag> resultPatterns,
     list<dag> additionalConstraints = [],
@@ -179,7 +179,7 @@ class Pattern<
 A redundant reshape optimization similar to SimplifyRedundantTranspose can be
 expressed more simply using DRR as follows:
 
-```tablegen:
+```tablegen
 // Reshape(Reshape(x)) = Reshape(x)
 def ReshapeReshapeOptPattern : Pat<(ReshapeOp(ReshapeOp $arg)),
                                    (ReshapeOp $arg)>;
@@ -193,7 +193,7 @@ transformation is conditional on some properties of the arguments and results.
 An example is a transformation that eliminates reshapes when they are
 redundant, i.e. when the input and output shapes are identical.
 
-```tablegen:
+```tablegen
 def TypesAreIdentical : Constraint<CPred<"$0->getType() == $1->getType()">>;
 def RedundantReshapeOptPattern : Pat<
   (ReshapeOp:$res $arg), (replaceWithValue $arg),
@@ -207,7 +207,7 @@ C++. An example of such an optimization is FoldConstantReshape, where we
 optimize Reshape of a constant value by reshaping the constant in place and
 eliminating the reshape operation.
 
-```tablegen:
+```tablegen
 def ReshapeConstant : NativeCodeCall<"$0.reshape(($1->getType()).cast<ShapedType>())">;
 def FoldConstantReshapeOptPattern : Pat<
   (ReshapeOp:$res (ConstantOp $arg)),
diff --git a/mlir/docs/Tutorials/Toy/Ch-4.md b/mlir/docs/Tutorials/Toy/Ch-4.md
index 4a4e11c68e6..2df009ddc2d 100644
--- a/mlir/docs/Tutorials/Toy/Ch-4.md
+++ b/mlir/docs/Tutorials/Toy/Ch-4.md
@@ -296,7 +296,7 @@ def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
 Now that the interface is defined, we can add it to the necessary Toy operations
 in a similar way to how we added the `CallOpInterface` to the GenericCallOp:
 
-```
+```tablegen
 def MulOp : Toy_Op<"mul",
     [..., DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
   ...
diff --git a/mlir/docs/Tutorials/Toy/Ch-6.md b/mlir/docs/Tutorials/Toy/Ch-6.md
index 939b2b4f776..a25f0d2091d 100644
--- a/mlir/docs/Tutorials/Toy/Ch-6.md
+++ b/mlir/docs/Tutorials/Toy/Ch-6.md
@@ -189,7 +189,7 @@ utility:
 
 Exporting our module to LLVM IR generates:
 
-```.llvm
+```llvm
 define void @main() {
   ...
 
@@ -224,7 +224,7 @@ define void @main() {
 If we enable optimization on the generated LLVM IR, we can trim this down quite
 a bit:
 
-```.llvm
+```llvm
 define void @main()
   %0 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 1.000000e+00)
   %1 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 1.600000e+01)
@@ -237,7 +237,6 @@ define void @main()
   %putchar.2 = tail call i32 @putchar(i32 10)
   ret void
 }
-
 ```
 
 The full code listing for dumping LLVM IR can be found in `Ch6/toy.cpp` in the
@@ -308,7 +307,7 @@ int runJit(mlir::ModuleOp module) {
 
 You can play around with it from the build directory:
 
-```sh
+```shell
 $ echo 'def main() { print([[1, 2], [3, 4]]); }' | ./bin/toyc-ch6 -emit=jit
 1.000000 2.000000
 3.000000 4.000000
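All of the hunks above normalize fenced-code info strings (e.g. ```Toy {.toy}`` ``, `` ```.llvm ``, `` ```tablegen: ``) to plain lowercase language tags. A quick, rough way to look for fences that still carry no info string is to grep for bare ``` lines; this is only a sketch (the scratch file under `/tmp` is illustrative, and closing fences match too, so hits need a manual look):

```shell
# Write a scratch Markdown file containing one unlabeled fence pair and
# one fence labeled "toy".
printf '```\nx\n```\n\n```toy\ny\n```\n' > /tmp/fences.md

# Count lines that are exactly ``` -- fences with no info string.
# Closing fences also match, so this over-counts by design.
grep -c '^```$' /tmp/fences.md   # → 3 (two unlabeled + one closing fence)
```

In a real checkout one would point the same grep at `mlir/docs/Tutorials/Toy/` instead of a scratch file.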