author     Jacques Pienaar <jpienaar@google.com>  2019-12-31 09:52:19 -0800
committer  Jacques Pienaar <jpienaar@google.com>  2019-12-31 09:54:16 -0800
commit     430bba2a0f39b543e66119761d1526d037229936 (patch)
tree       b969316fc20c589905b213465dd75d01eb4f3833
parent     a041c4ec6f7aa659b235cb67e9231a05e0a33b7d (diff)
[mlir] Make code blocks more consistent
Use the same form specification for the same type of code.
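The relabeling this commit performs (`sh` → `shell`, `tblgen` → `tablegen`, `Toy {.toy}`/`.toy` → `toy`, `.llvm` → `llvm`) is the kind of inconsistency that is easy to audit mechanically. A minimal sketch of such an audit, not part of the commit; the sample file and labels below are stand-ins:

```shell
# Tally every fenced-code-block info string in a Markdown file so stray
# variants (e.g. tblgen vs. tablegen) stand out. The sample doc is a stand-in.
doc=$(mktemp)
printf '%s\n' '```tblgen' 'def Foo;' '```' '```tablegen' 'def Bar;' '```' > "$doc"
grep -oE '^```[A-Za-z][^ ]*' "$doc" | sort | uniq -c | sort -rn
```

Running the same pipeline over `mlir/docs/**/*.md` would surface every label variant and its frequency, which is exactly what a cleanup like this needs.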
-rw-r--r--  mlir/docs/ConversionToLLVMDialect.md            2
-rw-r--r--  mlir/docs/DeclarativeRewrites.md               42
-rw-r--r--  mlir/docs/RationaleSimplifiedPolyhedralForm.md 12
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-1.md                 4
-rwxr-xr-x  mlir/docs/Tutorials/Toy/Ch-2.md                 4
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-3.md                12
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-4.md                 2
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-6.md                 7
-rw-r--r--  mlir/lib/Support/CMakeLists.txt                 2
9 files changed, 43 insertions, 44 deletions
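The `43 insertions, 44 deletions` summary above can be recomputed from any unified diff with standard tools. A rough sketch; the sample patch is a stand-in for illustration, not this commit's actual diff:

```shell
# Count added/removed lines in a unified diff, excluding the +++/--- file
# headers, to reproduce a cgit-style insertions/deletions summary.
patch=$(mktemp)
printf '%s\n' '--- a/doc.md' '+++ b/doc.md' '@@ -1,2 +1,2 @@' \
  '-```sh' '+```shell' ' mlir-opt -convert-std-to-llvm in.mlir' > "$patch"
ins=$(grep -c '^+[^+]' "$patch")
del=$(grep -c '^-[^-]' "$patch")
echo "$ins insertion(s), $del deletion(s)"
```

This approximation miscounts changed lines that themselves begin with `+` or `-`; `git apply --stat` handles those cases properly.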
diff --git a/mlir/docs/ConversionToLLVMDialect.md b/mlir/docs/ConversionToLLVMDialect.md
index 19403e27dc4..09cca35c577 100644
--- a/mlir/docs/ConversionToLLVMDialect.md
+++ b/mlir/docs/ConversionToLLVMDialect.md
@@ -3,7 +3,7 @@
Conversion from the Standard to the [LLVM Dialect](Dialects/LLVM.md) can be
performed by the specialized dialect conversion pass by running
-```sh
+```shell
mlir-opt -convert-std-to-llvm <filename.mlir>
```
diff --git a/mlir/docs/DeclarativeRewrites.md b/mlir/docs/DeclarativeRewrites.md
index 67ff102fef9..d5e8ccaac07 100644
--- a/mlir/docs/DeclarativeRewrites.md
+++ b/mlir/docs/DeclarativeRewrites.md
@@ -59,7 +59,7 @@ features:
The core construct for defining a rewrite rule is defined in
[`OpBase.td`][OpBase] as
-```tblgen
+```tablegen
class Pattern<
dag sourcePattern, list<dag> resultPatterns,
list<dag> additionalConstraints = [],
@@ -78,7 +78,7 @@ We allow multiple result patterns to support
convert one DAG of operations to another DAG of operations. There is a handy
wrapper of `Pattern`, `Pat`, which takes a single result pattern:
-```tblgen
+```tablegen
class Pat<
dag sourcePattern, dag resultPattern,
list<dag> additionalConstraints = [],
@@ -113,7 +113,7 @@ attribute).
For example,
-```tblgen
+```tablegen
def AOp : Op<"a_op"> {
let arguments = (ins
AnyType:$a_input,
@@ -155,7 +155,7 @@ bound symbol, for example, `def : Pat<(AOp $a, F32Attr), ...>`.
To match a DAG of ops, use nested `dag` objects:
-```tblgen
+```tablegen
def BOp : Op<"b_op"> {
let arguments = (ins);
@@ -182,7 +182,7 @@ that is, the following MLIR code:
To bind a symbol to the results of a matched op for later reference, attach the
symbol to the op itself:
-```tblgen
+```tablegen
def : Pat<(AOp (BOp:$b_result), $attr), ...>;
```
@@ -200,7 +200,7 @@ potentially **apply transformations**.
For example,
-```tblgen
+```tablegen
def COp : Op<"c_op"> {
let arguments = (ins
AnyType:$c_input,
@@ -222,7 +222,7 @@ method.
We can also reference symbols bound to matched op's results:
-```tblgen
+```tablegen
def : Pat<(AOp (BOp:$b_result) $attr), (COp $b_result $attr)>;
```
@@ -253,7 +253,7 @@ the result type(s). The pattern author will need to define a custom builder
that has result type deduction ability via `OpBuilder` in ODS. For example,
in the following pattern
-```tblgen
+```tablegen
def : Pat<(AOp $input, $attr), (COp (AOp $input, $attr) $attr)>;
```
@@ -282,7 +282,7 @@ parameters mismatch.
`dag` objects can be nested to generate a DAG of operations:
-```tblgen
+```tablegen
def : Pat<(AOp $input, $attr), (COp (BOp), $attr)>;
```
@@ -296,7 +296,7 @@ attaching symbols to the op. (But we **cannot** bind to op arguments given that
they are referencing previously bound symbols.) This is useful for reusing
newly created results where suitable. For example,
-```tblgen
+```tablegen
def DOp : Op<"d_op"> {
let arguments = (ins
AnyType:$d_input1,
@@ -326,7 +326,7 @@ achieved by `NativeCodeCall`.
For example, if we want to capture some op's attributes and group them as an
array attribute to construct a new op:
-```tblgen
+```tablegen
def TwoAttrOp : Op<"two_attr_op"> {
let arguments = (ins
@@ -360,7 +360,7 @@ Attribute createArrayAttr(Builder &builder, Attribute a, Attribute b) {
And then write the pattern as:
-```tblgen
+```tablegen
def createArrayAttr : NativeCodeCall<"createArrayAttr($_builder, $0, $1)">;
def : Pat<(TwoAttrOp $attr1, $attr2),
@@ -399,7 +399,7 @@ handy methods on `mlir::Builder`.
`NativeCodeCall<"...">:$symbol`. For example, if we want to reverse the previous
example and decompose the array attribute into two attributes:
-```tblgen
+```tablegen
class getNthAttr<int n> : NativeCodeCall<"$_self.getValue()[" # n # "]">;
def : Pat<(OneAttrOp $attr),
@@ -427,7 +427,7 @@ Operation *createMyOp(OpBuilder builder, Value input, Attribute attr);
We can wrap it up and invoke it like:
-```tblgen
+```tablegen
def createMyOp : NativeCodeCall<"createMyOp($_builder, $0, $1)">;
def : Pat<(... $input, $attr), (createMyOp $input, $attr)>;
@@ -467,7 +467,7 @@ store %mem, %sum
We cannot fit this into just one result pattern, given that `store` does not
return a value. Instead, we can use multiple result patterns:
-```tblgen
+```tablegen
def : Pattern<(AddIOp $lhs, $rhs),
[(StoreOp (AllocOp:$mem (ShapeOp $lhs)), (AddIOp $lhs, $rhs)),
(LoadOp $mem)]>;
@@ -491,7 +491,7 @@ The `__N` suffix is specifying the `N`-th result as a whole (which can be
[variadic](#supporting-variadic-ops)). For example, we can bind a symbol to some
multi-result op and reference a specific result later:
-```tblgen
+```tablegen
def ThreeResultOp : Op<"three_result_op"> {
let arguments = (ins ...);
@@ -513,7 +513,7 @@ patterns.
We can also bind a symbol and reference one of its specific results at the same
time, which is typically useful when generating multi-result ops:
-```tblgen
+```tablegen
// TwoResultOp has similar definition as ThreeResultOp, but only has two
// results.
@@ -539,7 +539,7 @@ multiple declared values. So it means we do not necessarily need `N` result
patterns to replace an `N`-result op. For example, to replace an op with three
results, you can have
-```tblgen
+```tablegen
// ThreeResultOp/TwoResultOp/OneResultOp generates three/two/one result(s),
// respectively.
@@ -563,7 +563,7 @@ forbidden, i.e., the following is not allowed because the first
`TwoResultOp` generates two results but only the second result is used for
replacing the matched op's result:
-```tblgen
+```tablegen
def : Pattern<(ThreeResultOp ...),
[(TwoResultOp ...), (TwoResultOp ...)]>;
```
@@ -584,7 +584,7 @@ regarding an op's values.
The above terms are needed because ops can have multiple results, and some of the
results can also be variadic. For example,
-```tblgen
+```tablegen
def MultiVariadicOp : Op<"multi_variadic_op"> {
let arguments = (ins
AnyTensor:$input1,
@@ -617,7 +617,7 @@ results. The third parameter to `Pattern` (and `Pat`) is for this purpose.
For example, we can write
-```tblgen
+```tablegen
def HasNoUseOf: Constraint<
CPred<"$_self->use_begin() == $_self->use_end()">, "has no use">;
diff --git a/mlir/docs/RationaleSimplifiedPolyhedralForm.md b/mlir/docs/RationaleSimplifiedPolyhedralForm.md
index ec2ecc9fe50..f904b6e2120 100644
--- a/mlir/docs/RationaleSimplifiedPolyhedralForm.md
+++ b/mlir/docs/RationaleSimplifiedPolyhedralForm.md
@@ -1,4 +1,4 @@
-# MLIR: The case for a <em>simplified</em> polyhedral form
+# MLIR: The case for a simplified polyhedral form
MLIR embraces polyhedral compiler techniques for their many advantages in
representing and transforming dense numerical kernels, but it uses a form that
@@ -80,7 +80,7 @@ will abstract them into S1/S2/S3 in the discussion below. Originally, we planned
to represent this with a classical form like (syntax details are not important
and probably slightly incorrect below):
-```
+```mlir
mlfunc @simple_example(... %N) {
%tmp = call @S1(%X, %i, %j)
domain: (0 <= %i < %N), (0 <= %j < %N)
@@ -104,7 +104,7 @@ a better fit for our needs, because it exposes important structure that will
make analyses and optimizations more efficient, and also makes the scoping of
SSA values more explicit. This leads us to a representation along the lines of:
-```
+```mlir
mlfunc @simple_example(... %N) {
d0/d1 = mlspace
for S1(d0), S2(d0), S3(d0) {
@@ -132,7 +132,7 @@ interesting features, including the ability for instructions within a loop nest
to have non-equal domains, like this - the second instruction ignores the outer
10 points inside the loop:
-```
+```mlir
mlfunc @reduced_domain_example(... %N) {
d0/d1 = mlspace
for S1(d0), S2(d0) {
@@ -151,7 +151,7 @@ It also allows schedule remapping within the instruction, like this example that
introduces a diagonal skew through a simple change to the schedules of the two
instructions:
-```
+```mlir
mlfunc @skewed_domain_example(... %N) {
d0/d1 = mlspace
for S1(d0), S2(d0+d1) {
@@ -350,7 +350,7 @@ In the traditional form though, this is not the case: it seems that a lot of
knowledge about how codegen will emit the code is necessary to determine if SSA
form is correct or not. For example, this is invalid code:
-```
+```mlir
%tmp = call @S1(%X, %0, %1)
domain: (10 <= %i < %N), (0 <= %j < %N)
schedule: (i, j)
diff --git a/mlir/docs/Tutorials/Toy/Ch-1.md b/mlir/docs/Tutorials/Toy/Ch-1.md
index cb7f97cb3f6..e12f615b49b 100644
--- a/mlir/docs/Tutorials/Toy/Ch-1.md
+++ b/mlir/docs/Tutorials/Toy/Ch-1.md
@@ -51,7 +51,7 @@ of rank <= 2, and the only datatype in Toy is a 64-bit floating point type (aka
and deallocation is automatically managed. But enough with the long description;
nothing is better than walking through an example to get a better understanding:
-```Toy {.toy}
+```toy
def main() {
# Define a variable `a` with shape <2, 3>, initialized with the literal value.
# The shape is inferred from the supplied literal.
@@ -74,7 +74,7 @@ tensors, but we don't know their dimensions). They are specialized for every
newly discovered signature at call sites. Let's revisit the previous example by
adding a user-defined function:
-```Toy {.toy}
+```toy
# User defined generic function that operates on unknown shaped arguments.
def multiply_transpose(a, b) {
return transpose(a) * transpose(b);
diff --git a/mlir/docs/Tutorials/Toy/Ch-2.md b/mlir/docs/Tutorials/Toy/Ch-2.md
index ce46788f4ae..11c3936b546 100755
--- a/mlir/docs/Tutorials/Toy/Ch-2.md
+++ b/mlir/docs/Tutorials/Toy/Ch-2.md
@@ -351,7 +351,7 @@ At this point you might want to know what the C++ code generated by
TableGen looks like. Simply run the `mlir-tblgen` command with the
`gen-op-decls` or the `gen-op-defs` action like so:
-```
+```shell
${build_root}/bin/mlir-tblgen -gen-op-defs ${mlir_src_root}/examples/toy/Ch2/include/toy/Ops.td -I ${mlir_src_root}/include/
```
@@ -527,7 +527,7 @@ variadic operands, etc. Check out the
At this point we can generate our "Toy IR". A simplified version of the previous
example:
-```.toy
+```toy
# User defined generic function that operates on unknown shaped arguments.
def multiply_transpose(a, b) {
return transpose(a) * transpose(b);
diff --git a/mlir/docs/Tutorials/Toy/Ch-3.md b/mlir/docs/Tutorials/Toy/Ch-3.md
index 615c2c1bbec..6ff9d3cb299 100644
--- a/mlir/docs/Tutorials/Toy/Ch-3.md
+++ b/mlir/docs/Tutorials/Toy/Ch-3.md
@@ -28,7 +28,7 @@ Let's start with a simple pattern and try to eliminate a sequence of two
transposes that cancel out: `transpose(transpose(X)) -> X`. Here is the
corresponding Toy example:
-```Toy(.toy)
+```toy
def transpose_transpose(x) {
return transpose(transpose(x));
}
@@ -146,7 +146,7 @@ input. The Canonicalizer knows to clean up dead operations; however, MLIR
conservatively assumes that operations may have side-effects. We can fix this by
adding a new trait, `NoSideEffect`, to our `TransposeOp`:
-```tablegen:
+```tablegen
def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {...}
```
@@ -169,7 +169,7 @@ Declarative, rule-based pattern-match and rewrite (DRR) is an operation
DAG-based declarative rewriter that provides a table-based syntax for
pattern-match and rewrite rules:
-```tablegen:
+```tablegen
class Pattern<
dag sourcePattern, list<dag> resultPatterns,
list<dag> additionalConstraints = [],
@@ -179,7 +179,7 @@ class Pattern<
A redundant reshape optimization similar to SimplifyRedundantTranspose can be
expressed more simply using DRR as follows:
-```tablegen:
+```tablegen
// Reshape(Reshape(x)) = Reshape(x)
def ReshapeReshapeOptPattern : Pat<(ReshapeOp(ReshapeOp $arg)),
(ReshapeOp $arg)>;
@@ -193,7 +193,7 @@ transformation is conditional on some properties of the arguments and results.
An example is a transformation that eliminates reshapes when they are redundant,
i.e. when the input and output shapes are identical.
-```tablegen:
+```tablegen
def TypesAreIdentical : Constraint<CPred<"$0->getType() == $1->getType()">>;
def RedundantReshapeOptPattern : Pat<
(ReshapeOp:$res $arg), (replaceWithValue $arg),
@@ -207,7 +207,7 @@ C++. An example of such an optimization is FoldConstantReshape, where we
optimize Reshape of a constant value by reshaping the constant in place and
eliminating the reshape operation.
-```tablegen:
+```tablegen
def ReshapeConstant : NativeCodeCall<"$0.reshape(($1->getType()).cast<ShapedType>())">;
def FoldConstantReshapeOptPattern : Pat<
(ReshapeOp:$res (ConstantOp $arg)),
diff --git a/mlir/docs/Tutorials/Toy/Ch-4.md b/mlir/docs/Tutorials/Toy/Ch-4.md
index 4a4e11c68e6..2df009ddc2d 100644
--- a/mlir/docs/Tutorials/Toy/Ch-4.md
+++ b/mlir/docs/Tutorials/Toy/Ch-4.md
@@ -296,7 +296,7 @@ def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
Now that the interface is defined, we can add it to the necessary Toy operations
in a similar way to how we added the `CallOpInterface` to the GenericCallOp:
-```
+```tablegen
def MulOp : Toy_Op<"mul",
[..., DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
...
diff --git a/mlir/docs/Tutorials/Toy/Ch-6.md b/mlir/docs/Tutorials/Toy/Ch-6.md
index 939b2b4f776..a25f0d2091d 100644
--- a/mlir/docs/Tutorials/Toy/Ch-6.md
+++ b/mlir/docs/Tutorials/Toy/Ch-6.md
@@ -189,7 +189,7 @@ utility:
Exporting our module to LLVM IR generates:
-```.llvm
+```llvm
define void @main() {
...
@@ -224,7 +224,7 @@ define void @main() {
If we enable optimization on the generated LLVM IR, we can trim this down quite
a bit:
-```.llvm
+```llvm
define void @main() {
%0 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 1.000000e+00)
%1 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 1.600000e+01)
@@ -237,7 +237,6 @@ define void @main()
%putchar.2 = tail call i32 @putchar(i32 10)
ret void
}
-
```
The full code listing for dumping LLVM IR can be found in `Ch6/toy.cpp` in the
@@ -308,7 +307,7 @@ int runJit(mlir::ModuleOp module) {
You can play around with it from the build directory:
-```sh
+```shell
$ echo 'def main() { print([[1, 2], [3, 4]]); }' | ./bin/toyc-ch6 -emit=jit
1.000000 2.000000
3.000000 4.000000
diff --git a/mlir/lib/Support/CMakeLists.txt b/mlir/lib/Support/CMakeLists.txt
index 7594a8c7bbe..86802d7c795 100644
--- a/mlir/lib/Support/CMakeLists.txt
+++ b/mlir/lib/Support/CMakeLists.txt
@@ -15,7 +15,7 @@ add_llvm_library(MLIRSupport
ADDITIONAL_HEADER_DIRS
${MLIR_MAIN_INCLUDE_DIR}/mlir/Support
)
-target_link_libraries(MLIRSupport LLVMSupport)
+target_link_libraries(MLIRSupport LLVMSupport ${LLVM_PTHREAD_LIB})
add_llvm_library(MLIROptMain
MlirOptMain.cpp