Diffstat (limited to 'mlir/docs')
-rw-r--r--  mlir/docs/Canonicalization.md                      64
-rw-r--r--  mlir/docs/ConversionToLLVMDialect.md              443
-rw-r--r--  mlir/docs/DeclarativeRewrites.md                  690
-rw-r--r--  mlir/docs/DefiningAttributesAndTypes.md           282
-rw-r--r--  mlir/docs/DeveloperGuide.md                       107
-rw-r--r--  mlir/docs/Diagnostics.md                          402
-rw-r--r--  mlir/docs/DialectConversion.md                    277
-rw-r--r--  mlir/docs/Dialects/Affine.md                      610
-rw-r--r--  mlir/docs/Dialects/GPU.md                         132
-rw-r--r--  mlir/docs/Dialects/LLVM.md                        429
-rw-r--r--  mlir/docs/Dialects/Linalg.md                        8
-rw-r--r--  mlir/docs/Dialects/SPIR-V.md                     1039
-rw-r--r--  mlir/docs/Dialects/Standard.md                   1146
-rw-r--r--  mlir/docs/Dialects/Vector.md                       14
-rw-r--r--  mlir/docs/EDSC.md                                 132
-rw-r--r--  mlir/docs/GenericDAGRewriter.md                   415
-rw-r--r--  mlir/docs/Glossary.md                             174
-rw-r--r--  mlir/docs/Interfaces.md                           200
-rw-r--r--  mlir/docs/LangRef.md                             1497
-rw-r--r--  mlir/docs/MLIRForGraphAlgorithms.md               403
-rw-r--r--  mlir/docs/OpDefinitions.md                       1210
-rw-r--r--  mlir/docs/Passes.md                               298
-rw-r--r--  mlir/docs/Quantization.md                         359
-rw-r--r--  mlir/docs/QuickstartRewrites.md                   255
-rw-r--r--  mlir/docs/Rationale.md                           1121
-rw-r--r--  mlir/docs/RationaleSimplifiedPolyhedralForm.md    415
-rw-r--r--  mlir/docs/TestingGuide.md                         171
-rw-r--r--  mlir/docs/Traits.md                               246
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-1.md                   169
-rwxr-xr-x  mlir/docs/Tutorials/Toy/Ch-2.md                   577
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-3.md                   264
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-4.md                   387
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-5.md                   357
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-6.md                   323
-rw-r--r--  mlir/docs/Tutorials/Toy/Ch-7.md                   539
-rw-r--r--  mlir/docs/UsageOfConst.md                         272
-rw-r--r--  mlir/docs/WritingAPass.md                         835
-rw-r--r--  mlir/docs/includes/img/index-map.svg              380
-rw-r--r--  mlir/docs/includes/img/view-operation.svg         580
39 files changed, 17222 insertions, 0 deletions
diff --git a/mlir/docs/Canonicalization.md b/mlir/docs/Canonicalization.md
new file mode 100644
index 00000000000..642717faa73
--- /dev/null
+++ b/mlir/docs/Canonicalization.md
@@ -0,0 +1,64 @@
+# Operation Canonicalization in MLIR
+
+Canonicalization is an important part of compiler IR design: it makes it easier
+to implement reliable compiler transformations and to reason about what is
+better or worse in the code, and it forces interesting discussions about the
+goals of a particular level of IR. Dan Gohman wrote
+[an article](https://sunfishcode.github.io/blog/2018/10/22/Canonicalization.html)
+exploring these issues; it is worth reading if you're not familiar with these
+concepts.
+
+Most compilers have canonicalization passes, and sometimes they have many
+different ones (e.g. instcombine, dag combine, etc in LLVM). Because MLIR is a
+multi-level IR, we can provide a single canonicalization infrastructure and
+reuse it across many different IRs that it represents. This document describes
+the general approach, global canonicalizations performed, and provides sections
+to capture IR-specific rules for reference.
+
+## General Design
+
+MLIR has a single canonicalization pass, which iteratively applies
+canonicalization transformations in a greedy way until the IR converges. These
+transformations are defined by the operations themselves, which allows each
+dialect to define its own set of operations and canonicalizations together.
+
+Some important things to think about w.r.t. canonicalization patterns:
+
+* Repeated applications of patterns should converge. Unstable or cyclic
+ rewrites will cause infinite loops in the canonicalizer.
+
+* It is generally better to canonicalize towards operations that have fewer
+ uses of a value when the operands are duplicated, because some patterns only
+ match when a value has a single user. For example, it is generally good to
+ canonicalize "x + x" into "x * 2", because this reduces the number of uses
+ of x by one.
+
+* It is always good to eliminate operations entirely when possible, e.g. by
+ folding known identities (like "x + 0 = x").
+
+## Globally Applied Rules
+
+These transformations are applied to all levels of IR:
+
+* Elimination of operations that have no side effects and have no uses.
+
+* Constant folding - e.g. "(addi 1, 2)" to "3". Constant folding hooks are
+ specified by operations.
+
+* Move constant operands to commutative binary operators to the right side -
+ e.g. "(addi 4, x)" to "(addi x, 4)".
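+
+For illustration, the rules above applied to standard-dialect `addi`/`muli` ops
+might transform IR like the following (a sketch, not the output of an actual
+pass run):
+
+```mlir
+// Before canonicalization.
+%c1 = constant 1 : i32
+%0 = addi %c1, %arg0 : i32   // Constant operand on the left.
+%1 = muli %arg0, %arg0 : i32 // No side effects and no uses: erased.
+"use"(%0) : (i32) -> ()
+
+// After canonicalization: %1 is gone and the constant moved right.
+%c1 = constant 1 : i32
+%0 = addi %arg0, %c1 : i32
+"use"(%0) : (i32) -> ()
+```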
+
+## Builtin Ops Canonicalizations
+
+These transformations are applied to builtin ops:
+
+* `constant` ops are uniqued and hoisted into the entry block of the first
+ parent region that is isolated from above, e.g. the entry block of a
+ function.
+* (TODO) Merge `affine.apply` operations that directly feed each other.
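+
+As a sketch of the first rule, identical `constant` ops inside nested regions
+are uniqued and hoisted into the entry block of the enclosing function
+(`foo.region_op` is a hypothetical op whose region is not isolated from above):
+
+```mlir
+// Before: two identical constants in an inner region.
+func @f() {
+  "foo.region_op"() ({
+    %c42 = constant 42 : i32
+    %c42_0 = constant 42 : i32
+    "foo.use"(%c42, %c42_0) : (i32, i32) -> ()
+  }) : () -> ()
+  return
+}
+
+// After: a single constant hoisted into the function entry block.
+func @f() {
+  %c42 = constant 42 : i32
+  "foo.region_op"() ({
+    "foo.use"(%c42, %c42) : (i32, i32) -> ()
+  }) : () -> ()
+  return
+}
+```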
+
+## Standard Ops Canonicalizations
+
+* Shape folding of `alloc` operations to turn dynamic dimensions into static
+ ones.
+* Folding `memref_cast` operations into users where possible.
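+
+For instance, an `alloc` whose "dynamic" dimension is actually a constant can be
+rewritten to a static shape, and a `memref_cast` can often fold into a user that
+accepts the original type (a sketch under these assumptions):
+
+```mlir
+// Before: an allocation with a constant "dynamic" size, and a cast
+// feeding a load.
+%c4 = constant 4 : index
+%0 = alloc(%c4) : memref<?xf32>
+%1 = memref_cast %0 : memref<?xf32> to memref<4xf32>
+%2 = load %1[%i] : memref<4xf32>
+
+// After: the dimension becomes static and the cast folds into the user.
+%0 = alloc() : memref<4xf32>
+%2 = load %0[%i] : memref<4xf32>
+```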
diff --git a/mlir/docs/ConversionToLLVMDialect.md b/mlir/docs/ConversionToLLVMDialect.md
new file mode 100644
index 00000000000..19403e27dc4
--- /dev/null
+++ b/mlir/docs/ConversionToLLVMDialect.md
@@ -0,0 +1,443 @@
+# Conversion to the LLVM Dialect
+
+Conversion from the Standard to the [LLVM Dialect](Dialects/LLVM.md) can be
+performed by the specialized dialect conversion pass by running
+
+```sh
+mlir-opt -convert-std-to-llvm <filename.mlir>
+```
+
+It performs type and operation conversions for a subset of operations from the
+standard dialect (operations on scalars and vectors, control flow operations) as
+described in this document. We use the terminology defined by the
+[LLVM IR Dialect description](Dialects/LLVM.md) throughout this document.
+
+[TOC]
+
+## Type Conversion
+
+### Scalar Types
+
+Scalar types are converted to their LLVM counterparts if they exist. The
+following conversions are currently implemented.
+
+- `i*` converts to `!llvm.i*`
+- `f16` converts to `!llvm.half`
+- `f32` converts to `!llvm.float`
+- `f64` converts to `!llvm.double`
+
+Note: `bf16` type is not supported by LLVM IR and cannot be converted.
+
+### Index Type
+
+Index type is converted to a wrapped LLVM IR integer with bitwidth equal to the
+bitwidth of the pointer size as specified by the
+[data layout](https://llvm.org/docs/LangRef.html#data-layout) of the LLVM module
+[contained](Dialects/LLVM.md#context-and-module-association) in the LLVM Dialect
+object. For example, on x86-64 CPUs it converts to `!llvm.i64`.
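+
+For example, on a target with 64-bit pointers, an `index`-typed constant lowers
+as follows (assuming the default data layout):
+
+```mlir
+// Standard dialect.
+%c42 = constant 42 : index
+
+// LLVM dialect.
+%c42 = llvm.mlir.constant(42 : index) : !llvm.i64
+```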
+
+### Vector Types
+
+LLVM IR only supports *one-dimensional* vectors, unlike MLIR where vectors can
+be multi-dimensional. Vector types cannot be nested in either IR. In the
+one-dimensional case, MLIR vectors are converted to LLVM IR vectors of the same
+size with element type converted using these conversion rules. In the
+n-dimensional case, MLIR vectors are converted to (n-1)-dimensional array types
+of one-dimensional vectors.
+
+For example, `vector<4 x f32>` converts to `!llvm<"<4 x float>">` and `vector<4
+x 8 x 16 x f32>` converts to `!llvm<"[4 x [8 x <16 x float>]]">`.
+
+### Memref Types
+
+Memref types in MLIR have both static and dynamic information associated with
+them. The dynamic information comprises the buffer pointer as well as sizes and
+strides of any dynamically sized dimensions. Memref types are normalized and
+converted to a descriptor that is only dependent on the rank of the memref. The
+descriptor contains:
+
+1. the pointer to the data buffer, followed by
+2. the pointer to properly aligned data payload that the memref indexes,
+ followed by
+3. a lowered `index`-type integer containing the distance between the beginning
+ of the buffer and the first element to be accessed through the memref,
+ followed by
+4. an array containing as many `index`-type integers as the rank of the memref:
+ the array represents the size, in number of elements, of the memref along
+ the given dimension. For constant MemRef dimensions, the corresponding size
+ entry is a constant whose runtime value must match the static value,
+ followed by
+5. a second array containing as many 64-bit integers as the rank of the memref:
+   the second array represents the "stride" (in the tensor abstraction sense),
+   i.e. the number of elements in the underlying buffer between two consecutive
+   elements along the given dimension.
+
+This normalization serves as an ABI for the memref type to interoperate with
+externally linked functions. In the particular case of rank `0` memrefs, the
+size and stride arrays are omitted, resulting in a struct containing the two
+pointers plus the offset.
+
+Examples:
+
+```mlir
+memref<f32> -> !llvm<"{ float*, float*, i64 }">
+memref<1 x f32> -> !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }">
+memref<? x f32> -> !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }">
+memref<10x42x42x43x123 x f32> -> !llvm<"{ float*, float*, i64, [5 x i64], [5 x i64] }">
+memref<10x?x42x?x123 x f32> -> !llvm<"{ float*, float*, i64, [5 x i64], [5 x i64] }">
+
+// Memref types can have vectors as element types
+memref<1x? x vector<4xf32>> -> !llvm<"{ <4 x float>*, <4 x float>*, i64, [1 x i64], [1 x i64] }">
+```
+
+If the rank of the memref is unknown at compile time, the memref is converted to
+an unranked descriptor that contains:
+
+1. a 64-bit integer representing the dynamic rank of the memref, followed by
+2. a pointer to a ranked memref descriptor with the contents listed above.
+
+Unranked memrefs should be used only to pass arguments to external library
+calls that expect a unified memref type. The called functions can parse any
+unranked memref descriptor by reading the rank and parsing the enclosed ranked
+descriptor pointer.
+
+Examples:
+
+```mlir
+// unranked descriptor
+memref<*xf32> -> !llvm<"{i64, i8*}">
+```
+
+**In function signatures,** `memref` is passed as a _pointer_ to the structure
+defined above to comply with the calling convention.
+
+Example:
+
+```mlir
+// A function type with memref as argument
+(memref<?xf32>) -> ()
+// is transformed into the LLVM function with pointer-to-structure argument.
+!llvm<"void ({ float*, float*, i64, [1 x i64], [1 x i64] }*)">
+```
+
+### Function Types
+
+Function types get converted to LLVM function types. The arguments are converted
+individually according to these rules. The result types need to accommodate the
+fact that LLVM IR functions always have a return type, which may be a Void type.
+The converted function always has a single result type. If the original function
+type had no results, the converted function will have one result of the wrapped
+`void` type. If the original function type had one result, the converted
+function will have one result converted using these rules. Otherwise, the result
+type will be a wrapped LLVM IR structure type where each element of the
+structure corresponds to one of the results of the original function, converted
+using these rules. In higher-order functions, function-typed arguments and results
+are converted to a wrapped LLVM IR function pointer type (since LLVM IR does not
+allow passing functions to functions without indirection) with the pointee type
+converted using these rules.
+
+Examples:
+
+```mlir
+// zero-ary function type with no results.
+() -> ()
+// is converted to a zero-ary function with `void` result
+!llvm<"void ()">
+
+// unary function with one result
+(i32) -> (i64)
+// has its argument and result type converted, before creating the LLVM IR function type
+!llvm<"i64 (i32)">
+
+// binary function with one result
+(i32, f32) -> (i64)
+// has its arguments handled separately
+!llvm<"i64 (i32, float)">
+
+// binary function with two results
+(i32, f32) -> (i64, f64)
+// has its result aggregated into a structure type
+!llvm<"{i64, double} (i32, float)">
+
+// function-typed arguments or results in higher-order functions
+(() -> ()) -> (() -> ())
+// are converted into pointers to functions
+!llvm<"void ()* (void ()*)">
+```
+
+## Calling Convention
+
+### Function Signature Conversion
+
+LLVM dialect functions are defined by a dedicated operation, `llvm.func`. The
+function itself has a wrapped LLVM IR function type converted as described
+above. The function definition operation uses MLIR syntax.
+
+Examples:
+
+```mlir
+// zero-ary function type with no results.
+func @foo() -> ()
+// gets LLVM type void().
+llvm.func @foo() -> ()
+
+// function with one result
+func @bar(i32) -> (i64)
+// gets converted to LLVM type i64(i32).
+llvm.func @bar(!llvm.i32) -> !llvm.i64
+
+// function with two results
+func @qux(i32, f32) -> (i64, f64)
+// has its result aggregated into a structure type
+llvm.func @qux(!llvm.i32, !llvm.float) -> !llvm<"{i64, double}">
+
+// function-typed arguments or results in higher-order functions
+func @quux(() -> ()) -> (() -> ())
+// are converted into pointers to functions
+llvm.func @quux(!llvm<"void ()*">) -> !llvm<"void ()*">
+// the call flow is handled by the LLVM dialect `call` operation supporting both
+// direct and indirect calls
+```
+
+### Result Packing
+
+In case of multi-result functions, the returned values are inserted into a
+structure-typed value before being returned and extracted from it at the call
+site. This transformation is a part of the conversion and is transparent to the
+definitions and uses of the values being returned.
+
+Example:
+
+```mlir
+func @foo(%arg0: i32, %arg1: i64) -> (i32, i64) {
+ return %arg0, %arg1 : i32, i64
+}
+func @bar() {
+ %0 = constant 42 : i32
+ %1 = constant 17 : i64
+ %2:2 = call @foo(%0, %1) : (i32, i64) -> (i32, i64)
+ "use_i32"(%2#0) : (i32) -> ()
+ "use_i64"(%2#1) : (i64) -> ()
+}
+
+// is transformed into
+
+func @foo(%arg0: !llvm.i32, %arg1: !llvm.i64) -> !llvm<"{i32, i64}"> {
+  // insert the values into a structure
+ %0 = llvm.mlir.undef : !llvm<"{i32, i64}">
+ %1 = llvm.insertvalue %arg0, %0[0] : !llvm<"{i32, i64}">
+ %2 = llvm.insertvalue %arg1, %1[1] : !llvm<"{i32, i64}">
+
+ // return the structure value
+ llvm.return %2 : !llvm<"{i32, i64}">
+}
+func @bar() {
+ %0 = llvm.mlir.constant(42 : i32) : !llvm.i32
+  %1 = llvm.mlir.constant(17 : i64) : !llvm.i64
+
+ // call and extract the values from the structure
+  %2 = llvm.call @foo(%0, %1) : (!llvm.i32, !llvm.i64) -> !llvm<"{i32, i64}">
+ %3 = llvm.extractvalue %2[0] : !llvm<"{i32, i64}">
+ %4 = llvm.extractvalue %2[1] : !llvm<"{i32, i64}">
+
+ // use as before
+ "use_i32"(%3) : (!llvm.i32) -> ()
+ "use_i64"(%4) : (!llvm.i64) -> ()
+}
+```
+
+### Calling Convention for `memref`
+
+For function _arguments_ of `memref` type, ranked or unranked, the type of the
+argument is a _pointer_ to the memref descriptor type defined above. The caller
+of such function is required to store the descriptor in memory and guarantee
+that the storage remains live until the callee returns. The caller can then pass
+the pointer to that memory as a function argument. The callee loads from the
+pointers it was passed as arguments in the entry block of the function, making
+the descriptor passed in as an argument available for use similarly to
+locally-defined descriptors.
+
+This convention is implemented in the conversion of `std.func` and `std.call` to
+the LLVM dialect. Conversions from other dialects should take it into account.
+The motivation for this convention is to simplify the ABI for interfacing with
+other LLVM modules, in particular those generated from C sources, while avoiding
+platform-specific aspects until MLIR has a proper ABI modeling.
+
+Example:
+
+```mlir
+
+func @foo(memref<?xf32>) -> () {
+ %c0 = constant 0 : index
+ load %arg0[%c0] : memref<?xf32>
+ return
+}
+
+func @bar(%arg0: index) {
+ %0 = alloc(%arg0) : memref<?xf32>
+ call @foo(%0) : (memref<?xf32>)-> ()
+ return
+}
+
+// Gets converted to the following IR.
+// Accepts a pointer to the memref descriptor.
+llvm.func @foo(!llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }*">) {
+ // Loads the descriptor so that it can be used similarly to locally
+ // created descriptors.
+ %0 = llvm.load %arg0 : !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }*">
+}
+
+llvm.func @bar(%arg0: !llvm.i64) {
+ // ... Allocation ...
+ // Definition of the descriptor.
+ %7 = llvm.mlir.undef : !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }">
+ // ... Filling in the descriptor ...
+ %14 = // The final value of the allocated descriptor.
+ // Allocate the memory for the descriptor and store it.
+ %15 = llvm.mlir.constant(1 : index) : !llvm.i64
+ %16 = llvm.alloca %15 x !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }">
+ : (!llvm.i64) -> !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }*">
+ llvm.store %14, %16 : !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }*">
+ // Pass the pointer to the function.
+ llvm.call @foo(%16) : (!llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }*">) -> ()
+ llvm.return
+}
+```
+
+*This convention may or may not apply if the conversion of MemRef types is
+overridden by the user.*
+
+## Repeated Successor Removal
+
+Since the goal of the LLVM IR dialect is to reflect LLVM IR in MLIR, the dialect
+and the conversion procedure must account for the differences between block
+arguments and LLVM IR PHI nodes. In particular, LLVM IR disallows PHI nodes with
+different values coming from the same source. Therefore, the LLVM IR dialect
+disallows operations that have identical successors accepting arguments, which
+would lead to invalid PHI nodes. The conversion process resolves the potential
+PHI source ambiguity by injecting dummy blocks if the same block is used more
+than once as a successor in an instruction. These dummy blocks branch
+unconditionally to the original successors, pass them the original operands
+(available in the dummy block because it is dominated by the original block) and
+are used instead of them in the original terminator operation.
+
+Example:
+
+```mlir
+ cond_br %0, ^bb1(%1 : i32), ^bb1(%2 : i32)
+^bb1(%3 : i32):
+ "use"(%3) : (i32) -> ()
+```
+
+leads to a new basic block being inserted,
+
+```mlir
+ cond_br %0, ^bb1(%1 : i32), ^dummy
+^bb1(%3 : i32):
+ "use"(%3) : (i32) -> ()
+^dummy:
+  br ^bb1(%2 : i32)
+```
+
+before the conversion to the LLVM IR dialect:
+
+```mlir
+ llvm.cond_br %0, ^bb1(%1 : !llvm.i32), ^dummy
+^bb1(%3 : !llvm.i32):
+ "use"(%3) : (!llvm.i32) -> ()
+^dummy:
+ llvm.br ^bb1(%2 : !llvm.i32)
+```
+
+## Default Memref Model
+
+### Memref Descriptor
+
+Within a converted function, a `memref`-typed value is represented by a memref
+_descriptor_, the type of which is the structure type obtained by converting
+from the memref type. This descriptor holds all the necessary information to
+produce the address of a specific element. In particular, it holds runtime
+values even for statically known sizes, and these are expected to match the
+static values at all times.
+
+It is created by the allocation operation and is updated by the conversion
+operations that may change static dimensions into dynamic and vice versa.
+
+**Note**: LLVM IR conversion does not support `memref`s with layouts that are
+not amenable to the strided form.
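+
+For instance, a `dim` operation querying a dynamic dimension would lower to a
+read of the corresponding size entry of the descriptor (a sketch; `%m` is a
+converted 1-D memref value):
+
+```mlir
+// Standard dialect: query the first dimension.
+%s = dim %m, 0 : memref<?xf32>
+
+// LLVM dialect: extract size[0] from the descriptor.
+%s = llvm.extractvalue %m[3, 0] : !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }">
+```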
+
+### Index Linearization
+
+Accesses to a memref element are transformed into an access to an element of the
+buffer pointed to by the descriptor. The position of the element in the buffer
+is calculated by linearizing memref indices in row-major order (lexically first
+index is the slowest varying, similar to C, but accounting for strides). The
+computation of the linear address is emitted as arithmetic operations in the LLVM
+IR dialect. Strides are extracted from the memref descriptor.
+
+Accesses to zero-dimensional memrefs (which are interpreted as pointers to the
+elemental type) are directly converted into `llvm.load` or `llvm.store` without
+any pointer manipulations.
+
+Examples:
+
+An access to a zero-dimensional memref is converted into a plain load:
+
+```mlir
+// before
+%0 = load %m[] : memref<f32>
+
+// after
+%0 = llvm.load %m : !llvm<"float*">
+```
+
+An access to a memref with indices:
+
+```mlir
+%0 = load %m[1,2,3,4] : memref<10x?x13x?xf32>
+```
+
+is transformed into the equivalent of the following code:
+
+```mlir
+// Compute the linearized index from strides. Each block below extracts one
+// stride from the descriptor, multiplies it with the index and accumulates
+// the total offset.
+%stride1 = llvm.extractvalue %m[4, 0] : !llvm<"{float*, float*, i64, [4 x i64], [4 x i64]}">
+%idx1 = llvm.mlir.constant(1 : index) : !llvm.i64
+%addr1 = llvm.mul %stride1, %idx1 : !llvm.i64
+
+%stride2 = llvm.extractvalue %m[4, 1] : !llvm<"{float*, float*, i64, [4 x i64], [4 x i64]}">
+%idx2 = llvm.mlir.constant(2 : index) : !llvm.i64
+%addr2 = llvm.mul %stride2, %idx2 : !llvm.i64
+%addr3 = llvm.add %addr1, %addr2 : !llvm.i64
+
+%stride3 = llvm.extractvalue %m[4, 2] : !llvm<"{float*, float*, i64, [4 x i64], [4 x i64]}">
+%idx3 = llvm.mlir.constant(3 : index) : !llvm.i64
+%addr4 = llvm.mul %stride3, %idx3 : !llvm.i64
+%addr5 = llvm.add %addr3, %addr4 : !llvm.i64
+
+%stride4 = llvm.extractvalue %m[4, 3] : !llvm<"{float*, float*, i64, [4 x i64], [4 x i64]}">
+%idx4 = llvm.mlir.constant(4 : index) : !llvm.i64
+%addr6 = llvm.mul %stride4, %idx4 : !llvm.i64
+%addr7 = llvm.add %addr5, %addr6 : !llvm.i64
+
+// Add the linear offset to the address.
+%offset = llvm.extractvalue %m[2] : !llvm<"{float*, float*, i64, [4 x i64], [4 x i64]}">
+%addr8 = llvm.add %addr7, %offset : !llvm.i64
+
+// Obtain the aligned pointer.
+%aligned = llvm.extractvalue %m[1] : !llvm<"{float*, float*, i64, [4 x i64], [4 x i64]}">
+
+// Get the address of the data element.
+%ptr = llvm.getelementptr %aligned[%addr8]
+    : (!llvm<"float*">, !llvm.i64) -> !llvm<"float*">
+
+// Perform the actual load.
+%0 = llvm.load %ptr : !llvm<"float*">
+```
+
+For stores, the address computation code is identical and only the actual store
+operation is different.
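+
+For example, a store to a zero-dimensional memref mirrors the load case shown
+above:
+
+```mlir
+// before
+store %v, %m[] : memref<f32>
+
+// after
+llvm.store %v, %m : !llvm<"float*">
+```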
+
+Note: the conversion does not perform any sort of common subexpression
+elimination when emitting memref accesses.
diff --git a/mlir/docs/DeclarativeRewrites.md b/mlir/docs/DeclarativeRewrites.md
new file mode 100644
index 00000000000..67ff102fef9
--- /dev/null
+++ b/mlir/docs/DeclarativeRewrites.md
@@ -0,0 +1,690 @@
+# Table-driven Declarative Rewrite Rule (DRR)
+
+In addition to subclassing the `mlir::RewritePattern` C++ class, MLIR also
+supports defining rewrite rules in a declarative manner. Similar to
+[Op Definition Specification](OpDefinitions.md) (ODS), this is achieved via
+[TableGen][TableGen], which is a language to maintain records of domain-specific
+information. The rewrite rules are specified concisely in a TableGen record,
+which will be expanded into an equivalent `mlir::RewritePattern` subclass at
+compiler build time.
+
+This manual explains in detail all of the available mechanisms for defining
+rewrite rules in such a declarative manner. It aims to be a specification
+instead of a tutorial. Please refer to
+[Quickstart tutorial to adding MLIR graph rewrite](QuickstartRewrites.md) for
+the latter.
+
+Given that declarative rewrite rules depend on op definition specification, this
+manual assumes knowledge of the [ODS](OpDefinitions.md) doc.
+
+## Benefits
+
+Compared to the hand-written C++ classes, this declarative approach has several
+benefits, including but not limited to:
+
+* **Being declarative**: The pattern creator just needs to state the rewrite
+ pattern declaratively, without worrying about the concrete C++ methods to
+ call.
+* **Removing boilerplate and showing the very essence of the rewrite**:
+ `mlir::RewritePattern` is already good at hiding boilerplate for defining a
+ rewrite rule. But we still need to write the class and function structures
+ required by the C++ programming language, inspect ops for matching, and call
+ op `build()` methods for constructing. These statements are typically quite
+ simple and similar, so they can be further condensed with auto-generation.
+ Because we reduce the boilerplate to the bare minimum, the declarative
+ rewrite rule will just contain the very essence of the rewrite. This makes
+ it very easy to understand the pattern.
+
+## Strengths and Limitations
+
+The declarative rewrite rule is **operation-based**: it describes a rule to
+match against a directed acyclic graph (DAG) of operations and generate DAGs of
+operations. This gives DRR both its strengths and limitations: it is good at
+expressing op to op conversions, but not that well suited for, say, converting
+an op into a loop nest.
+
+Per the current implementation, DRR does not have good support for the following
+features:
+
+* Matching and generating ops with regions.
+* Matching and generating ops with block arguments.
+* Matching multi-result ops in nested patterns.
+* Matching and generating variadic operand/result ops in nested patterns.
+* Packing and unpacking variadic operands/results during generation.
+* [`NativeCodeCall`](#native-code-call-transforming-the-generated-op)
+  returning more than one result.
+
+## Rule Definition
+
+The core construct for defining a rewrite rule is defined in
+[`OpBase.td`][OpBase] as
+
+```tblgen
+class Pattern<
+ dag sourcePattern, list<dag> resultPatterns,
+ list<dag> additionalConstraints = [],
+ dag benefitsAdded = (addBenefit 0)>;
+```
+
+A declarative rewrite rule contains two main components:
+
+* A _source pattern_, which is used for matching a DAG of operations.
+* One or more _result patterns_, which are used for generating DAGs of
+ operations to replace the matched DAG of operations.
+
+We allow multiple result patterns to support
+[multi-result ops](#supporting-multi-result-ops) and
+[auxiliary ops](#supporting-auxiliary-ops), but frequently we just want to
+convert one DAG of operations to another DAG of operations. There is a handy
+wrapper of `Pattern`, `Pat`, which takes a single result pattern:
+
+```tblgen
+class Pat<
+ dag sourcePattern, dag resultPattern,
+ list<dag> additionalConstraints = [],
+ dag benefitsAdded = (addBenefit 0)> :
+  Pattern<sourcePattern, [resultPattern], additionalConstraints, benefitsAdded>;
+```
+
+Each pattern is specified as a TableGen `dag` object with the syntax of
+`(operator arg0, arg1, ...)`.
+
+`operator` is typically an MLIR op, but it can also be other
+[directives](#special-directives). `argN` is for matching (if used in source
+pattern) or generating (if used in result pattern) the `N`-th argument for
+`operator`. If the `operator` is some MLIR operation, it means the `N`-th
+argument as specified in the `arguments` list of the op's definition.
+Therefore, we say op argument specification in pattern is **position-based**:
+the position where they appear matters.
+
+`argN` can be a `dag` object itself, thus we can have nested `dag` tree to model
+the def-use relationship between ops.
+
+### Source pattern
+
+The source pattern is for matching a DAG of operations. Arguments in the `dag`
+object are intended to **capture** the op arguments. They can also be used to
+**further limit** the match criteria. The capturing is done by specifying a
+symbol starting with the `$` sign, while further constraints are introduced by
+specifying a `TypeConstraint` (for an operand) or an `AttrConstraint` (for an
+attribute).
+
+#### Binding op arguments and limiting the match
+
+For example,
+
+```tblgen
+def AOp : Op<"a_op"> {
+ let arguments = (ins
+ AnyType:$a_input,
+ AnyAttr:$a_attr
+ );
+
+ let results = (outs
+ AnyType:$a_output
+ );
+}
+
+def : Pat<(AOp $input, F32Attr:$attr), ...>;
+```
+
+In the above, we are matching an `AOp` whose `$input` can be anything valid as
+defined by the op and whose `$attr` must be a float attribute. If the match
+succeeds, we bind the `$input` symbol to the op's only input (`$a_input`) and
+`$attr` to the only attribute (`$a_attr`); we can reference them using `$input`
+and `$attr` in result patterns and additional constraints.
+
+The pattern is position-based: the symbol names used for capturing here do not
+need to match with the op definition as shown in the above example. As another
+example, the pattern can be written as `def : Pat<(AOp $a, F32Attr:$b), ...>;`
+and use `$a` and `$b` to refer to the captured input and attribute. But using
+the ODS name directly in the pattern is also allowed.
+
+Also note that we only need to add a `TypeConstraint` or `AttrConstraint`
+when we need to further limit the match criteria. If all valid cases to the op
+are acceptable, then we can leave the constraint unspecified.
+
+`$_` is a special symbol to mean ignore capturing an argument. For example,
+`def : Pat<(AOp $_, $b), ...>` means only `$b` is interesting to capture and
+will be referenced later in result patterns. It's still possible to place
+additional constraints even if the symbol is not to be captured; in such a case,
+you can simply use the `TypeConstraint` or `AttrConstraint` without a bound
+symbol, for example, `def : Pat<(AOp $a, F32Attr), ...>`.
+
+#### Matching DAG of operations
+
+To match a DAG of ops, use nested `dag` objects:
+
+```tblgen
+
+def BOp : Op<"b_op"> {
+ let arguments = (ins);
+
+ let results = (outs
+ AnyType:$b_output
+ );
+}
+
+
+def : Pat<(AOp (BOp), $attr), ...>;
+```
+
+The above pattern matches an `AOp` whose only operand is generated by a `BOp`,
+that is, the following MLIR code:
+
+```mlir
+%0 = "b_op"() : () -> (...)
+%1 = "a_op"(%0) {attr = ...} : (...) -> (...)
+```
+
+#### Binding op results
+
+To bind a symbol to the results of a matched op for later reference, attach the
+symbol to the op itself:
+
+```tblgen
+def : Pat<(AOp (BOp:$b_result), $attr), ...>;
+```
+
+The above will bind `$b_result` to the matched `BOp`'s result. (There are more
+details regarding multi-result ops, which is covered
+[later](#supporting-multi-result-ops).)
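+
+Given the ops defined above, this pattern would match IR like the following,
+binding `$b_result` to `%0` (result types elided):
+
+```mlir
+%0 = "b_op"() : () -> (...)
+%1 = "a_op"(%0) {attr = ...} : (...) -> (...)
+```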
+
+### Result pattern
+
+The result pattern is for generating a DAG of operations. Arguments in the `dag`
+object are intended to **reference** values captured in the source pattern and
+potentially **apply transformations**.
+
+#### Referencing bound symbols
+
+For example,
+
+```tblgen
+def COp : Op<"c_op"> {
+ let arguments = (ins
+ AnyType:$c_input,
+ AnyAttr:$c_attr
+ );
+
+ let results = (outs
+ AnyType:$c_output
+ );
+}
+
+def : Pat<(AOp $input, $attr), (COp $input, $attr)>;
+```
+
+In the above, `AOp`'s only operand and attribute are bound to `$input` and
+`$attr`, respectively. We then reference them in the result pattern for
+generating the `COp` by passing them in as arguments to `COp`'s `build()`
+method.
+
+We can also reference symbols bound to matched op's results:
+
+```tblgen
+def : Pat<(AOp (BOp:$b_result), $attr), (COp $b_result, $attr)>;
+```
+
+In the above, we are using `BOp`'s result for building `COp`.
+
+#### Building operations
+
+Given that `COp` was specified with table-driven op definition, there will be
+several `build()` methods generated for it. One of them has aggregated
+parameters for result types, operands, and attributes in the signature: `void
+COp::build(..., ArrayRef<Type> resultTypes, ArrayRef<Value> operands,
+ArrayRef<NamedAttribute> attr)`. The pattern in the above calls this `build()`
+method for constructing the `COp`.
+
+In general, arguments in the result pattern are passed directly to the
+`build()` method. To leverage the auto-generated `build()` method, list them in
+the pattern following the exact same order as the ODS `arguments` definition.
+Otherwise, a custom `build()` method that matches the argument list is required.
+
+Right now all ODS-generated `build()` methods require specifying the result
+type(s), unless the op has known traits like `SameOperandsAndResultType` that
+we can use to auto-generate a `build()` method with result type deduction.
+When generating an op to replace the result of the matched root op, we can use
+the matched root op's result type when calling the ODS-generated builder.
+Otherwise (e.g., generating an [auxiliary op](#supporting-auxiliary-ops) or
+generating an op with a nested result pattern), DRR will not be able to deduce
+the result type(s). The pattern author will need to define a custom builder
+that has result type deduction ability via `OpBuilder` in ODS. For example,
+in the following pattern
+
+```tblgen
+def : Pat<(AOp $input, $attr), (COp (AOp $input, $attr), $attr)>;
+```
+
+`AOp` is generated via a nested result pattern; DRR won't be able to deduce the
+result type for it. A custom builder for `AOp` should be defined that takes a
+separate parameter for each operand and attribute and deduces the result type
+internally. For example, for the above `AOp`, a possible builder is:
+
+```c++
+void AOp::build(Builder *builder, OperationState &state,
+ Value input, Attribute attr) {
+ state.addOperands({input});
+ state.addAttribute("a_attr", attr);
+ Type type = ...; // Deduce result type here
+ state.addTypes({type});
+}
+```
+
+Failing to define such a builder will result in an error at C++ compilation
+time saying that the call to `AOp::build()` cannot be resolved due to a
+parameter count mismatch.
+
+#### Generating DAG of operations
+
+`dag` objects can be nested to generate a DAG of operations:
+
+```tblgen
+def : Pat<(AOp $input, $attr), (COp (BOp), $attr)>;
+```
+
+In the above, we generate a `BOp`, and then use its result to generate the `COp`
+to replace the matched `AOp`.
+
+#### Binding op results
+
+In the result pattern, we can bind to the result(s) of a newly built op by
+attaching symbols to the op. (But we **cannot** bind to op arguments given that
+they are referencing previously bound symbols.) This is useful for reusing
+newly created results where suitable. For example,
+
+```tblgen
+def DOp : Op<"d_op"> {
+ let arguments = (ins
+ AnyType:$d_input1,
+ AnyType:$d_input2,
+ );
+
+ let results = (outs
+ AnyType:$d_output
+ );
+}
+
+def : Pat<(AOp $input, $ignored_attr), (DOp (BOp:$b_result) $b_result)>;
+```
+
+In this pattern, an `AOp` is matched and replaced with a `DOp` whose two
+operands both come from the result of a single `BOp`. This is only possible by
+binding the result of the `BOp` to a name and reusing it for the second operand
+of the `DOp`.
+
+#### `NativeCodeCall`: transforming the generated op
+
+Sometimes the captured arguments are not exactly what we want so they cannot be
+directly fed in as arguments to build the new op. For such cases, we can apply
+transformations on the arguments by calling into C++ helper functions. This is
+achieved by `NativeCodeCall`.
+
+For example, if we want to capture some op's attributes and group them as an
+array attribute to construct a new op:
+
+```tblgen
+def TwoAttrOp : Op<"two_attr_op"> {
+ let arguments = (ins
+ AnyAttr:$op_attr1,
+ AnyAttr:$op_attr2
+ );
+
+ let results = (outs
+ AnyType:$op_output
+ );
+}
+
+def OneAttrOp : Op<"one_attr_op"> {
+ let arguments = (ins
+ ArrayAttr:$op_attr
+ );
+
+ let results = (outs
+ AnyType:$op_output
+ );
+}
+```
+
+We can write a C++ helper function:
+
+```c++
+Attribute createArrayAttr(Builder &builder, Attribute a, Attribute b) {
+ return builder.getArrayAttr({a, b});
+}
+```
+
+And then write the pattern as:
+
+```tblgen
+def createArrayAttr : NativeCodeCall<"createArrayAttr($_builder, $0, $1)">;
+
+def : Pat<(TwoAttrOp $attr1, $attr2),
+ (OneAttrOp (createArrayAttr $attr1, $attr2))>;
+```
+
+And make sure the generated C++ code from the above pattern has access to the
+definition of the C++ helper function.
+
+In the above example, we are using a string to specialize the `NativeCodeCall`
+template. The string can be an arbitrary C++ expression that evaluates to some
+C++ object expected at the `NativeCodeCall` site (here it would be expecting an
+array attribute). Typically the string should be a function call.
+
+Note that currently `NativeCodeCall` must return no more than one value or
+attribute. This might change in the future.
+
+##### `NativeCodeCall` placeholders
+
+In `NativeCodeCall`, we can use placeholders like `$_builder`, `$N`. The former
+is called _special placeholder_, while the latter is called _positional
+placeholder_.
+
+`NativeCodeCall` right now only supports two special placeholders: `$_builder`
+and `$_self`:
+
+* `$_builder` will be replaced by the current `mlir::PatternRewriter`.
+* `$_self` will be replaced with the entity `NativeCodeCall` is attached to.
+
+We have seen how `$_builder` can be used in the above; it allows us to pass a
+`mlir::Builder` (`mlir::PatternRewriter` is a subclass of `mlir::OpBuilder`,
+which is a subclass of `mlir::Builder`) to the C++ helper function to use the
+handy methods on `mlir::Builder`.
+
+`$_self` is useful when we want to write something in the form of
+`NativeCodeCall<"...">:$symbol`. For example, if we want to reverse the previous
+example and decompose the array attribute into two attributes:
+
+```tblgen
+class getNthAttr<int n> : NativeCodeCall<"$_self.getValue()[" # n # "]">;
+
+def : Pat<(OneAttrOp $attr),
+          (TwoAttrOp (getNthAttr<0>:$attr), (getNthAttr<1>:$attr))>;
+```
+
+In the above, `$_self` is substituted by the attribute bound to `$attr`, which
+is `OneAttrOp`'s array attribute.
+
+Positional placeholders will be substituted by the `dag` object parameters at
+the `NativeCodeCall` use site. For example, if we define `SomeCall :
+NativeCodeCall<"someFn($1, $2, $0)">` and use it like `(SomeCall $in0, $in1,
+$in2)`, then this will be translated into C++ call `someFn($in1, $in2, $in0)`.
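+
+To make the substitution concrete, here is a sketch using hypothetical ops and
+a hypothetical C++ function `someFn`:
+
+```tblgen
+def SomeCall : NativeCodeCall<"someFn($1, $2, $0)">;
+
+// `$in0`/`$in1`/`$in2` substitute `$0`/`$1`/`$2`, so this generates the C++
+// call `someFn($in1, $in2, $in0)`.
+def : Pat<(SomeOp $in0, $in1, $in2), (AnotherOp (SomeCall $in0, $in1, $in2))>;
+```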
+
+##### Customizing entire op building
+
+`NativeCodeCall` is not only limited to transforming arguments for building an
+op; it can be also used to specify how to build an op entirely. An example:
+
+If we have a C++ function for building an op:
+
+```c++
+Operation *createMyOp(OpBuilder builder, Value input, Attribute attr);
+```
+
+We can wrap it up and invoke it like:
+
+```tblgen
+def createMyOp : NativeCodeCall<"createMyOp($_builder, $0, $1)">;
+
+def : Pat<(... $input, $attr), (createMyOp $input, $attr)>;
+```
+
+### Supporting auxiliary ops
+
+A declarative rewrite rule supports multiple result patterns. One of the
+purposes is to allow generating _auxiliary ops_. Auxiliary ops are operations
+used for building the replacement ops, but they are not directly used for
+replacement themselves.
+
+For the case of uni-result ops, if there are multiple result patterns, only the
+value generated from the last result pattern will be used to replace the matched
+root op's result; all other result patterns will be considered as generating
+auxiliary ops.
+
+Normally we want to specify ops as nested `dag` objects if their def-use
+relationship can be expressed such that an op's result feeds directly into an
+argument of the consuming op. But that is not always possible. For example, if
+we want to allocate memory and store some computation (in pseudocode):
+
+```mlir
+%dst = addi %lhs, %rhs
+```
+
+into
+
+```mlir
+%shape = shape %lhs
+%mem = alloc %shape
+%sum = addi %lhs, %rhs
+store %mem, %sum
+%dst = load %mem
+```
+
+This cannot be expressed with just one result pattern, given that `store` does
+not return a value. Instead we can use multiple result patterns:
+
+```tblgen
+def : Pattern<(AddIOp $lhs, $rhs),
+              [(StoreOp (AllocOp:$mem (ShapeOp $lhs)), (AddIOp $lhs, $rhs)),
+               (LoadOp $mem)]>;
+```
+
+In the above we use the first result pattern to generate the first four ops, and
+use the last pattern to generate the last op, which is used to replace the
+matched op.
+
+### Supporting multi-result ops
+
+Multi-result ops bring extra complexity to declarative rewrite rules. We use
+TableGen `dag` objects to represent ops in patterns; there is no native way to
+indicate that an op generates multiple results. The approach adopted is based
+on **naming convention**: a `__N` suffix is added to a symbol to indicate the
+`N`-th result.
+
+#### `__N` suffix
+
+The `__N` suffix specifies the `N`-th result as a whole (which can be
+[variadic](#supporting-variadic-ops)). For example, we can bind a symbol to some
+multi-result op and reference a specific result later:
+
+```tblgen
+def ThreeResultOp : Op<"three_result_op"> {
+ let arguments = (ins ...);
+
+ let results = (outs
+ AnyTensor:$op_output1,
+ AnyTensor:$op_output2,
+ AnyTensor:$op_output3
+ );
+}
+
+def : Pattern<(ThreeResultOp:$results ...),
+ [(... $results__0), ..., (... $results__2), ...]>;
+```
+
+In the above pattern we bind `$results` to all the results generated by
+`ThreeResultOp` and reference its first and third results (`$results__0` and
+`$results__2`) later in the result patterns.
+
+We can also bind a symbol and reference one of its specific results at the same
+time, which is typically useful when generating multi-result ops:
+
+```tblgen
+// TwoResultOp has similar definition as ThreeResultOp, but only has two
+// results.
+
+def : Pattern<(TwoResultOp ...),
+ [(ThreeResultOp:$results__2, ...),
+ (replaceWithValue $results__0)]>;
+```
+
+In the above, we create a `ThreeResultOp` and bind `$results` to its results,
+then use its last result (`$op_output3`) and first result (`$op_output1`) to
+replace the `TwoResultOp`'s two results, respectively.
+
+#### Replacing multi-result ops
+
+The above example also shows how to replace a matched multi-result op.
+
+To replace a `N`-result op, the result patterns must generate at least `N`
+declared values (see [Declared vs. actual value](#declared-vs-actual-value) for
+definition). If there are more than `N` declared values generated, only the
+last `N` declared values will be used to replace the matched op. Note that
+because multi-result ops exist, one result pattern **may** generate multiple
+declared values, so we do not necessarily need `N` result patterns to replace
+an `N`-result op. For example, to replace an op with three results, you can
+have
+
+```tblgen
+// ThreeResultOp/TwoResultOp/OneResultOp generate three/two/one result(s),
+// respectively.
+
+// Replace each result with a result generated from an individual op.
+def : Pattern<(ThreeResultOp ...),
+ [(OneResultOp ...), (OneResultOp ...), (OneResultOp ...)]>;
+
+// Replace the first two results with two results generated from the same op.
+def : Pattern<(ThreeResultOp ...),
+ [(TwoResultOp ...), (OneResultOp ...)]>;
+
+// Replace all three results with three results generated from the same op.
+def : Pat<(ThreeResultOp ...), (ThreeResultOp ...)>;
+
+// Generate an auxiliary op, then replace all three results with the three
+// results of the second op.
+def : Pattern<(ThreeResultOp ...),
+              [(AuxiliaryOp ...), (ThreeResultOp ...)]>;
+```
+
+But using a single op to serve as both an auxiliary op and a replacement op is
+forbidden. That is, the following is not allowed, because the first
+`TwoResultOp` generates two results but only its second result would be used
+for replacing the matched op's results:
+
+```tblgen
+def : Pattern<(ThreeResultOp ...),
+ [(TwoResultOp ...), (TwoResultOp ...)]>;
+```
+
+### Supporting variadic ops
+
+#### Declared vs. actual value
+
+Before going into details on variadic op support, we need to define a few terms
+regarding an op's values.
+
+* _Value_: either an operand or a result
+* _Declared operand/result/value_: an operand/result/value statically declared
+ in ODS of the op
+* _Actual operand/result/value_: an operand/result/value of an op instance at
+ runtime
+
+The above terms are needed because ops can have multiple results, and some of the
+results can also be variadic. For example,
+
+```tblgen
+def MultiVariadicOp : Op<"multi_variadic_op"> {
+ let arguments = (ins
+ AnyTensor:$input1,
+ Variadic<AnyTensor>:$input2,
+ AnyTensor:$input3
+ );
+
+ let results = (outs
+ AnyTensor:$output1,
+ Variadic<AnyTensor>:$output2,
+ AnyTensor:$output3
+ );
+}
+```
+
+We say the above op has 3 declared operands and 3 declared results. But at
+runtime, an instance can have 3 values corresponding to `$input2` and 2 values
+corresponding to `$output2`; we say it has 5 actual operands and 4 actual
+results. A variadic operand/result is a declared value that can correspond to
+multiple actual values.
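+
+As an illustration, an instance of the above op with 3 actual values for
+`$input2` and 2 for `$output2` might look like the following (hypothetical
+generic textual form):
+
+```mlir
+// 5 actual operands (1 + 3 + 1) and 4 actual results (1 + 2 + 1).
+%0:4 = "multi_variadic_op"(%a, %b0, %b1, %b2, %c)
+    : (tensor<f32>, tensor<f32>, tensor<f32>, tensor<f32>, tensor<f32>)
+    -> (tensor<f32>, tensor<f32>, tensor<f32>, tensor<f32>)
+```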
+
+[TODO]
+
+### Supplying additional constraints
+
+Constraints can be placed on op arguments when matching. But sometimes we need
+to also place constraints on the matched op's results, or to limit the matching
+with constraints that cover both the arguments and the results. The third
+parameter to `Pattern` (and `Pat`) is for this purpose.
+
+For example, we can write
+
+```tblgen
+def HasNoUseOf: Constraint<
+ CPred<"$_self->use_begin() == $_self->use_end()">, "has no use">;
+
+def HasSameElementType : Constraint<
+ CPred<"$0.cast<ShapedType>().getElementType() == "
+ "$1.cast<ShapedType>().getElementType()">,
+ "has same element type">;
+
+def : Pattern<(TwoResultOp:$results $input),
+ [(...), (...)],
+ [(F32Tensor:$results__0), (HasNoUseOf:$results__1),
+               (HasSameElementType $results__0, $input)]>;
+```
+
+You can
+
+*   Use normal `TypeConstraint`s on previously bound symbols (the first result
+    of `TwoResultOp` must be a float tensor);
+*   Define new `Constraint`s for previously bound symbols (the second result of
+    `TwoResultOp` must have no use);
+*   Apply constraints on multiple bound symbols (`$input` and `TwoResultOp`'s
+    first result must have the same element type).
+
+### Adjusting benefits
+
+The benefit of a `Pattern` is an integer value indicating how desirable it is
+to match the pattern. It determines the priorities of patterns inside the
+pattern rewrite driver: a pattern with a higher benefit is applied before one
+with a lower benefit.
+
+In DRR, a rule's benefit defaults to the number of ops in the source pattern.
+This is based on the heuristics and assumptions that:
+
+*   Larger matches are more beneficial than smaller ones.
+*   If a smaller one is applied first, the larger one may not apply anymore.
+
+The fourth parameter to `Pattern` (and `Pat`) allows manually tweaking a
+pattern's benefit. Just supply `(addBenefit N)` to add `N` to the benefit
+value.
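+
+For example, assuming hypothetical ops, the following adds 10 to the pattern's
+default benefit so that it is tried before competing patterns (the third
+parameter, additional constraints, is left empty):
+
+```tblgen
+def : Pat<(AOp $input, $attr), (BOp $input, $attr), [], (addBenefit 10)>;
+```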
+
+## Special directives
+
+[TODO]
+
+## Debugging Tips
+
+### Run `mlir-tblgen` to see the generated content
+
+TableGen syntax sometimes can be obscure; reading the generated content can be
+a very helpful way to understand and debug issues. To build `mlir-tblgen`, run
+`cmake --build . --target mlir-tblgen` in your build directory and find the
+`mlir-tblgen` binary in the `bin/` subdirectory. All the supported generators
+can be found via `mlir-tblgen --help`.
+
+To see the generated code, invoke `mlir-tblgen` with a specific generator by
+providing include paths via `-I`. For example,
+
+```sh
+# To see all the C++ pattern rewrite classes
+mlir-tblgen --gen-rewriters -I /path/to/mlir/include /path/to/input/td/file
+```
+
+### Compilation error: no matching member function for call to 'build'
+
+This is because DRR is failing to call a `build()` method with result type
+deduction ability. See [building operations](#building-operations) for more
+details.
+
+[TableGen]: https://llvm.org/docs/TableGen/index.html
+[OpBase]: https://github.com/tensorflow/mlir/blob/master/include/mlir/IR/OpBase.td
diff --git a/mlir/docs/DefiningAttributesAndTypes.md b/mlir/docs/DefiningAttributesAndTypes.md
new file mode 100644
index 00000000000..60243e5fd57
--- /dev/null
+++ b/mlir/docs/DefiningAttributesAndTypes.md
@@ -0,0 +1,282 @@
+# Quickstart tutorial to defining custom dialect attributes and types
+
+This document is a quickstart to defining dialect specific extensions to the
+[attribute](LangRef.md#attributes) and [type system](LangRef.md#type-system).
+The main part of the tutorial focuses on defining types, but the instructions
+are nearly identical for defining attributes.
+
+See [MLIR specification](LangRef.md) for more information about MLIR, the
+structure of the IR, operations, etc.
+
+## Types
+
+Types in MLIR (like attributes, locations, and many other things) are
+value-typed. This means that instances of `Type` should be passed around
+by-value, as opposed to by-pointer or by-reference. The `Type` class in itself
+acts as a wrapper around an internal storage object that is uniqued within an
+instance of an `MLIRContext`.
+
+### Reserving a range of type kinds
+
+Types in MLIR rely on having a unique `kind` value to ensure that casting
+checks remain extremely efficient
+([rationale](Rationale.md#reserving-dialect-type-kinds)). For a dialect author,
+this means that a range of type `kind` values must be explicitly, and
+statically, reserved. A dialect can reserve a range of values by adding a new
+entry to the
+[DialectSymbolRegistry](https://github.com/tensorflow/mlir/blob/master/include/mlir/IR/DialectSymbolRegistry.def).
+To support out-of-tree and experimental dialects, the registry predefines a set
+of private ranges, `PRIVATE_EXPERIMENTAL_[0-9]`, that are free for immediate
+use.
+
+```c++
+DEFINE_SYM_KIND_RANGE(LINALG) // Linear Algebra Dialect
+DEFINE_SYM_KIND_RANGE(TOY) // Toy language (tutorial) Dialect
+
+// The following ranges are reserved for experimenting with MLIR dialects in a
+// private context without having to register them here.
+DEFINE_SYM_KIND_RANGE(PRIVATE_EXPERIMENTAL_0)
+```
+
+For the sake of this tutorial, we will use the predefined
+`PRIVATE_EXPERIMENTAL_0` range. These definitions will provide a range in the
+`Type::Kind` enum to use when defining the derived types.
+
+```c++
+namespace MyTypes {
+enum Kinds {
+ // These kinds will be used in the examples below.
+ Simple = Type::Kind::FIRST_PRIVATE_EXPERIMENTAL_0_TYPE,
+ Complex
+};
+}
+```
+
+### Defining the type class
+
+As described above, `Type` objects in MLIR are value-typed and rely on an
+internal storage object that holds the actual data for the type. When defining
+a new `Type` it isn't always necessary to define a new storage class. So before
+defining the derived `Type`, it's important to know which of the two classes of
+`Type` we are defining. Some types are _primitives_, meaning they do not have
+any parameters and are singletons uniqued by kind, like the
+[`index` type](LangRef.md#index-type). Parametric types, on the other hand,
+have additional information that differentiates instances of the same `Type`
+kind. For example, the [`integer` type](LangRef.md#integer-type) has a
+bitwidth, making `i8` and `i16` different instances of the
+[`integer` type](LangRef.md#integer-type).
+
+#### Simple non-parametric types
+
+For simple parameterless types, we can jump straight into defining the derived
+type class. Given that these types are uniqued solely on `kind`, we don't need
+to provide our own storage class.
+
+```c++
+/// This class defines a simple parameterless type. All derived types must
+/// inherit from the CRTP class 'Type::TypeBase'. It takes as template
+/// parameters the concrete type (SimpleType), and the base class to use (Type).
+/// 'Type::TypeBase' also provides several utility methods to simplify type
+/// construction.
+class SimpleType : public Type::TypeBase<SimpleType, Type> {
+public:
+ /// Inherit some necessary constructors from 'TypeBase'.
+ using Base::Base;
+
+ /// This static method is used to support type inquiry through isa, cast,
+ /// and dyn_cast.
+ static bool kindof(unsigned kind) { return kind == MyTypes::Simple; }
+
+ /// This method is used to get an instance of the 'SimpleType'. Given that
+ /// this is a parameterless type, it just needs to take the context for
+ /// uniquing purposes.
+ static SimpleType get(MLIRContext *context) {
+ // Call into a helper 'get' method in 'TypeBase' to get a uniqued instance
+ // of this type.
+ return Base::get(context, MyTypes::Simple);
+ }
+};
+```
+
+#### Parametric types
+
+Parametric types are those that have additional construction or uniquing
+constraints outside of the type `kind`. As such, these types require defining a
+type storage class.
+
+##### Defining a type storage
+
+Type storage objects contain all of the data necessary to construct and unique a
+parametric type instance. The storage classes must obey the following:
+
+* Inherit from the base type storage class `TypeStorage`.
+* Define a type alias, `KeyTy`, that maps to a type that uniquely identifies
+ an instance of the parent type.
+* Provide a construction method that is used to allocate a new instance of the
+ storage class.
+ - `Storage *construct(TypeStorageAllocator &, const KeyTy &key)`
+* Provide a comparison method between the storage and `KeyTy`.
+ - `bool operator==(const KeyTy &) const`
+* Provide a method to generate the `KeyTy` from a list of arguments passed to
+ the uniquer. (Note: This is only necessary if the `KeyTy` cannot be default
+ constructed from these arguments).
+    -   `static KeyTy getKey(Args &&...args)`
+* Provide a method to hash an instance of the `KeyTy`. (Note: This is not
+ necessary if an `llvm::DenseMapInfo<KeyTy>` specialization exists)
+ - `static llvm::hash_code hashKey(const KeyTy &)`
+
+Let's look at an example:
+
+```c++
+/// Here we define a storage class for a ComplexType, that holds a non-zero
+/// integer and an integer type.
+struct ComplexTypeStorage : public TypeStorage {
+ ComplexTypeStorage(unsigned nonZeroParam, Type integerType)
+ : nonZeroParam(nonZeroParam), integerType(integerType) {}
+
+ /// The hash key for this storage is a pair of the integer and type params.
+ using KeyTy = std::pair<unsigned, Type>;
+
+ /// Define the comparison function for the key type.
+ bool operator==(const KeyTy &key) const {
+ return key == KeyTy(nonZeroParam, integerType);
+ }
+
+ /// Define a hash function for the key type.
+ /// Note: This isn't necessary because std::pair, unsigned, and Type all have
+ /// hash functions already available.
+ static llvm::hash_code hashKey(const KeyTy &key) {
+ return llvm::hash_combine(key.first, key.second);
+ }
+
+ /// Define a construction function for the key type.
+ /// Note: This isn't necessary because KeyTy can be directly constructed with
+ /// the given parameters.
+ static KeyTy getKey(unsigned nonZeroParam, Type integerType) {
+ return KeyTy(nonZeroParam, integerType);
+ }
+
+ /// Define a construction method for creating a new instance of this storage.
+ static ComplexTypeStorage *construct(TypeStorageAllocator &allocator,
+ const KeyTy &key) {
+ return new (allocator.allocate<ComplexTypeStorage>())
+ ComplexTypeStorage(key.first, key.second);
+ }
+
+ unsigned nonZeroParam;
+ Type integerType;
+};
+```
+
+##### Type class definition
+
+Now that the storage class has been created, the derived type class can be
+defined. This structure is similar to the
+[simple type](#simple-non-parametric-types), except that a bit more of the
+functionality of `Type::TypeBase` is put to use.
+
+```c++
+/// This class defines a parametric type. All derived types must inherit from
+/// the CRTP class 'Type::TypeBase'. It takes as template parameters the
+/// concrete type (ComplexType), the base class to use (Type), and the storage
+/// class (ComplexTypeStorage). 'Type::TypeBase' also provides several utility
+/// methods to simplify type construction and verification.
+class ComplexType : public Type::TypeBase<ComplexType, Type,
+ ComplexTypeStorage> {
+public:
+ /// Inherit some necessary constructors from 'TypeBase'.
+ using Base::Base;
+
+ /// This static method is used to support type inquiry through isa, cast,
+ /// and dyn_cast.
+ static bool kindof(unsigned kind) { return kind == MyTypes::Complex; }
+
+ /// This method is used to get an instance of the 'ComplexType'. This method
+ /// asserts that all of the construction invariants were satisfied. To
+ /// gracefully handle failed construction, getChecked should be used instead.
+ static ComplexType get(MLIRContext *context, unsigned param, Type type) {
+ // Call into a helper 'get' method in 'TypeBase' to get a uniqued instance
+ // of this type. All parameters to the storage class are passed after the
+ // type kind.
+ return Base::get(context, MyTypes::Complex, param, type);
+ }
+
+ /// This method is used to get an instance of the 'ComplexType', defined at
+ /// the given location. If any of the construction invariants are invalid,
+ /// errors are emitted with the provided location and a null type is returned.
+ /// Note: This method is completely optional.
+ static ComplexType getChecked(MLIRContext *context, unsigned param, Type type,
+ Location location) {
+ // Call into a helper 'getChecked' method in 'TypeBase' to get a uniqued
+ // instance of this type. All parameters to the storage class are passed
+ // after the type kind.
+ return Base::getChecked(location, context, MyTypes::Complex, param, type);
+ }
+
+ /// This method is used to verify the construction invariants passed into the
+ /// 'get' and 'getChecked' methods. Note: This method is completely optional.
+ static LogicalResult verifyConstructionInvariants(
+ llvm::Optional<Location> loc, MLIRContext *context, unsigned param,
+ Type type) {
+    // Our type only allows non-zero parameters.
+    if (param == 0) {
+      if (loc)
+        context->emitError(*loc) << "zero parameter passed to 'ComplexType'";
+      return failure();
+    }
+    // Our type also expects an integer type.
+    if (!type.isa<IntegerType>()) {
+      if (loc)
+        context->emitError(*loc) << "non integer-type passed to 'ComplexType'";
+      return failure();
+    }
+ return success();
+ }
+
+ /// Return the parameter value.
+ unsigned getParameter() {
+ // 'getImpl' returns a pointer to our internal storage instance.
+ return getImpl()->nonZeroParam;
+ }
+
+ /// Return the integer parameter type.
+ IntegerType getParameterType() {
+ // 'getImpl' returns a pointer to our internal storage instance.
+ return getImpl()->integerType;
+ }
+};
+```
+
+### Registering types with a Dialect
+
+Once the dialect types have been defined, they must be registered with a
+`Dialect`. This is done via a mechanism similar to that for
+[operations](LangRef.md#operations): `addTypes`.
+
+```c++
+struct MyDialect : public Dialect {
+ MyDialect(MLIRContext *context) : Dialect(/*name=*/"mydialect", context) {
+ /// Add these types to the dialect.
+ addTypes<SimpleType, ComplexType>();
+ }
+};
+```
+
+### Parsing and Printing
+
+As a final step after registration, a dialect must override the `printType` and
+`parseType` hooks. These enable native support for roundtripping the type in the
+textual IR.
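+
+A rough sketch of what the print hook could look like for the types defined
+above (the exact hook signatures may differ between MLIR versions; consult
+`mlir/IR/Dialect.h` for the authoritative ones):
+
+```c++
+/// Print a type registered to this dialect.
+void MyDialect::printType(Type type, DialectAsmPrinter &printer) const {
+  if (type.isa<SimpleType>()) {
+    printer << "simple";
+    return;
+  }
+  // Print, e.g., `!mydialect.complex<42, i32>`.
+  ComplexType complexType = type.cast<ComplexType>();
+  printer << "complex<" << complexType.getParameter() << ", "
+          << complexType.getParameterType() << ">";
+}
+```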
+
+## Attributes
+
+As stated in the introduction, the process for defining dialect attributes is
+nearly identical to that of defining dialect types. The key difference is that
+the things named `*Type` are generally now named `*Attr`:
+
+* `Type::TypeBase` -> `Attribute::AttrBase`
+* `TypeStorageAllocator` -> `AttributeStorageAllocator`
+* `addTypes` -> `addAttributes`
+
+Aside from that, all of the interfaces for uniquing and storage construction
+are the same.
diff --git a/mlir/docs/DeveloperGuide.md b/mlir/docs/DeveloperGuide.md
new file mode 100644
index 00000000000..74500995925
--- /dev/null
+++ b/mlir/docs/DeveloperGuide.md
@@ -0,0 +1,107 @@
+# Developer Guide
+
+This document describes a few developer policies used in MLIR (such as coding
+standards) as well as development approaches (such as testing methods).
+
+## Style guide
+
+MLIR follows the [LLVM style](https://llvm.org/docs/CodingStandards.html) guide.
+We also adhere to the following (which deviate from or are not specified in the
+LLVM style guide):
+
+*   Adopt [camelBack](https://llvm.org/docs/Proposals/VariableNames.html)
+    naming for variables.
+*   Except for IR units (Region, Block, and Operation), non-nullable output
+    arguments are generally passed by non-const reference.
+*   IR constructs are not designed for [const correctness](UsageOfConst.md).
+*   Do *not* use recursive algorithms if the recursion can't be bounded
+    statically: that is, avoid recursion if there is a possible IR input that
+    can trigger a stack overflow (for example traversing use-def chains in a
+    recursive way). At the moment, we tolerate it for the two following cases:
+    *   The nesting of the IR: we use recursion when traversing nested regions.
+    *   Type nesting: recursion may be used for the nesting of composite types.
+*   Follow the `git` conventions for writing a commit message; in particular,
+    the first line is the "title" and should be followed by an empty line and
+    an optional description. This
+    [post](https://chris.beams.io/posts/git-commit/) gives examples and more
+    details.
+
+Please run clang-format on the files you modified with the `.clang-format`
+configuration file available in the root directory. Check the clang-format
+[documentation](https://clang.llvm.org/docs/ClangFormat.html) for more details
+on integrating it with your development environment. In particular, if clang is
+installed system-wide, running `git clang-format origin/master` will update the
+files in the working directory with the relevant formatting changes; don't
+forget to include those in the commit.
+
+## Pass name and other command line options
+
+To avoid collisions between options provided by different dialects, the naming
+convention is to prepend the dialect name to every dialect-specific pass and
+option. Options that are specific to a pass should also be prefixed with the
+pass name. For example, the affine dialect provides a loop tiling pass that is
+registered on the command line as `-affine-tile`, and with a tile size option
+that can be set with `-affine-tile-size`.
+
+We also avoid `cl::opt` to provide pass options in favor of the
+[pass options](WritingAPass.md#instance-specific-pass-options) mechanism. This
+allows for these options to be serialized in a pass pipeline description, as
+well as passing different options to multiple instances of a pass in the same
+pipeline.
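+
+As a sketch, a pass-specific option declared via this mechanism could look like
+the following (the pass and option names are hypothetical; see
+[Writing a Pass](WritingAPass.md) for the actual API):
+
+```c++
+struct AffineTilePass : public FunctionPass<AffineTilePass> {
+  /// Settable in a textual pipeline spec, e.g. `affine-tile{tile-size=4}`.
+  Option<int> tileSize{*this, "tile-size",
+                       llvm::cl::desc("Tile size to use"), llvm::cl::init(4)};
+};
+```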
+
+## Testing guidelines
+
+See here for the [testing guide](TestingGuide.md).
+
+## Guidelines on contributing a new dialect (or important components)
+
+To contribute a dialect (or a major component in MLIR), it is usual to write an
+overview "RFC" (it can be just a few informal paragraphs) and send it to the
+MLIR mailing list. When accepting a new component to MLIR, the community is also
+accepting the burden of maintaining it. The following points should be
+considered when evaluating whether a dialect is a good fit for the core MLIR
+repository:
+
+* What is the overall goal of the dialect? What is the first implementation
+ milestone?
+* How does it fit into the MLIR dialect ecosystem?
+ * Connection: how does it connect to the existing dialects in a
+ compilation pipeline(s)?
+ * Consolidation: is there already a dialect with a similar goal or
+ matching abstractions; if so, can it be improved instead of adding a new
+ one?
+ * Reuse: how does it generalize to similar but slightly different
+ use-cases?
+* What is the community of users that it is serving?
+* Who are the future contributors/maintainers beyond those who propose the
+ dialect?
+
+On a practical aspect, we will expect the code to follow the other sections of
+this document, with an emphasis on the documentation alongside the source code.
+
+It is preferred to upstream your dialects/components in small incremental
+patches that can be individually reviewed. That is, after the initial RFC has
+been agreed on, we encourage dialects to be built progressively by faster
+iterations in-tree, as long as it is clear they evolve towards their milestones
+and goals.
+
+We have seen the following broad categories of dialects:
+
+*   Edge dialects that model a representation external to MLIR. Examples
+    include LLVM, SPIR-V dialects, TensorFlow, XLA/HLO, ... Such dialects may
+    be a better fit for the project that contains the original representation
+    instead of being added to the MLIR repository, in particular because MLIR
+    will not take an external dependency on another project.
+* Structured Abstraction dialects that generalize common features of several
+ other dialects or introduce a programming model. Generalization is sometimes
+ demonstrated by having several dialects lower to or originate from a new
+ dialect. While additional abstractions may be useful, they should be traded
+ off against the additional complexity of the dialect ecosystem. Examples of
+ abstraction dialects include the GPU and Loop dialects.
+* Transformation dialects that serve as input/output for program
+ transformations. These dialects are commonly introduced to materialize
+ transformation pre- and post-conditions in the IR, where such conditions
+ would otherwise have to be obtained through analysis or from operation
+ semantics. Examples include the Affine and Linalg dialects.
+
+While it can be useful to frame the goals of a proposal, this categorization is
+not exhaustive or absolute, and the community is open to discussing any new
+dialect beyond this taxonomy.
diff --git a/mlir/docs/Diagnostics.md b/mlir/docs/Diagnostics.md
new file mode 100644
index 00000000000..69a30942c00
--- /dev/null
+++ b/mlir/docs/Diagnostics.md
@@ -0,0 +1,402 @@
+# Introduction and Usage Guide to MLIR's Diagnostics Infrastructure
+
+[TOC]
+
+This document presents an introduction to using and interfacing with MLIR's
+diagnostics infrastructure.
+
+See [MLIR specification](LangRef.md) for more information about MLIR, the
+structure of the IR, operations, etc.
+
+## Source Locations
+
+Source location information is extremely important for any compiler, because it
+provides a baseline for debuggability and error-reporting. MLIR provides several
+different location types depending on the situational need.
+
+### CallSite Location
+
+```
+callsite-location ::= 'callsite' '(' location 'at' location ')'
+```
+
+An instance of this location allows for representing a directed stack of
+location usages. This connects a location of a `callee` with the location of a
+`caller`.
+
+### FileLineCol Location
+
+```
+filelinecol-location ::= string-literal ':' integer-literal ':' integer-literal
+```
+
+An instance of this location represents a tuple of file, line number, and column
+number. This is similar to the type of location that you get from most source
+languages.
+
+### Fused Location
+
+```
+fused-location ::= `fused` fusion-metadata? '[' location (location ',')* ']'
+fusion-metadata ::= '<' attribute-value '>'
+```
+
+An instance of a `fused` location represents a grouping of several other source
+locations, with optional metadata that describes the context of the fusion.
+There are many places within a compiler, e.g. pattern rewriting, in which
+several constructs may be fused together. This would normally result in partial
+or even total loss of location information; with `fused` locations, it is a
+non-issue.
+
+### Name Location
+
+```
+name-location ::= string-literal ('(' location ')')?
+```
+
+An instance of this location allows for attaching a name to a child location.
+This can be useful for representing the locations of variable, or node,
+definitions.
+
+### Opaque Location
+
+An instance of this location essentially contains a pointer to some data
+structure that is external to MLIR and an optional location that can be used if
+the first one is not suitable. Since it contains an external structure, only the
+optional location is used during serialization.
+
+### Unknown Location
+
+```
+unknown-location ::= `unknown`
+```
+
+Source location information is an extremely integral part of the MLIR
+infrastructure. As such, location information is always present in the IR, and
+must explicitly be set to unknown. Thus an instance of the `unknown` location
+represents an unspecified source location.
+
+## Diagnostic Engine
+
+The `DiagnosticEngine` acts as the main interface for diagnostics in MLIR. It
+manages the registration of diagnostic handlers, as well as the core API for
+diagnostic emission. Handlers generally take the form of
+`LogicalResult(Diagnostic &)`. If the result is `success`, it signals that the
+diagnostic has been fully processed and consumed. If `failure`, it signals that
+the diagnostic should be propagated to any previously registered handlers. It
+can be interfaced with via an `MLIRContext` instance.
+
+```c++
+DiagnosticEngine engine = ctx->getDiagEngine();
+
+// Handle the reported diagnostic. Return success to signal that the diagnostic
+// has been fully processed, or failure to propagate it to any previously
+// registered handlers.
+DiagnosticEngine::HandlerID id = engine.registerHandler(
+    [](Diagnostic &diag) -> LogicalResult {
+      bool should_propagate_diagnostic = ...;
+      return failure(should_propagate_diagnostic);
+});
+
+
+// We can also elide the return value completely, in which case the engine
+// assumes that all diagnostics are consumed (i.e. a success() result).
+DiagnosticEngine::HandlerID id = engine.registerHandler([](Diagnostic &diag) {
+ return;
+});
+
+// Unregister this handler when we are done.
+engine.eraseHandler(id);
+```
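+
+The propagation rule described above can be modeled in plain, standalone C++.
+This is an illustrative sketch only, using `bool` in place of `LogicalResult`
+and a string in place of `Diagnostic`; it is not the actual engine
+implementation. Handlers are tried from most recently registered to least,
+stopping at the first one that consumes the diagnostic:
+
+```c++
+#include <cassert>
+#include <functional>
+#include <string>
+#include <vector>
+
+// Toy model of a handler: 'true' plays the role of success() (consumed),
+// 'false' plays the role of failure() (propagate to earlier handlers).
+using ToyHandler = std::function<bool(const std::string &)>;
+
+// Try handlers from most recently registered to least, stopping at the
+// first one that consumes the diagnostic.
+bool emitToHandlers(const std::vector<ToyHandler> &handlers,
+                    const std::string &diag) {
+  for (auto it = handlers.rbegin(); it != handlers.rend(); ++it)
+    if ((*it)(diag))
+      return true;
+  return false; // No handler consumed the diagnostic.
+}
+
+int main() {
+  std::vector<std::string> log;
+  std::vector<ToyHandler> handlers;
+  handlers.push_back(
+      [&](const std::string &d) { log.push_back("first:" + d); return true; });
+  handlers.push_back(
+      [&](const std::string &d) { log.push_back("second:" + d); return false; });
+
+  assert(emitToHandlers(handlers, "err"));
+  // The most recently registered handler runs first; since it declines, the
+  // diagnostic propagates to the earlier handler, which consumes it.
+  assert(log[0] == "second:err" && log[1] == "first:err");
+  return 0;
+}
+```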
+
+### Constructing a Diagnostic
+
+As stated above, the `DiagnosticEngine` holds the core API for diagnostic
+emission. A new diagnostic can be emitted with the engine via `emit`. This
+method returns an [InFlightDiagnostic](#inflight-diagnostic) that can be
+modified further.
+
+```c++
+InFlightDiagnostic emit(Location loc, DiagnosticSeverity severity);
+```
+
+Using the `DiagnosticEngine`, though, is generally not the preferred way to emit
+diagnostics in MLIR. [`operation`](LangRef.md#operations) provides utility
+methods for emitting diagnostics:
+
+```c++
+// `emit` methods available in the mlir namespace.
+InFlightDiagnostic emitError/Remark/Warning(Location);
+
+// These methods use the location attached to the operation.
+InFlightDiagnostic Operation::emitError/Remark/Warning();
+
+// This method creates a diagnostic prefixed with "'op-name' op ".
+InFlightDiagnostic Operation::emitOpError();
+```
+
+## Diagnostic
+
+A `Diagnostic` in MLIR contains all of the necessary information for reporting a
+message to the user. A `Diagnostic` essentially boils down to three main
+components:
+
+* [Source Location](#source-locations)
+* Severity Level
+ - Error, Note, Remark, Warning
+* Diagnostic Arguments
+ - The diagnostic arguments are used when constructing the output message.
+
+### Appending arguments
+
+Once a diagnostic has been constructed, the user can start composing it. The
+output message of a diagnostic is composed of a set of diagnostic arguments
+that have been attached to it. New arguments can be attached to a diagnostic in
+a few different ways:
+
+```c++
+// A few interesting things to use when composing a diagnostic.
+Attribute fooAttr;
+Type fooType;
+SmallVector<int> fooInts;
+
+// Diagnostics can be composed via the streaming operators.
+op->emitError() << "Compose an interesting error: " << fooAttr << ", " << fooType
+ << ", (" << fooInts << ')';
+
+// This could generate something like (FuncAttr:@foo, IntegerType:i32, {0,1,2}):
+"Compose an interesting error: @foo, i32, (0, 1, 2)"
+```
+
+### Attaching notes
+
+Unlike many other compiler frameworks, notes in MLIR cannot be emitted directly.
+They must be explicitly attached to another non-note diagnostic. When emitting
+a diagnostic, notes can be directly attached via `attachNote`. When attaching a
+note, if the user does not provide an explicit source location, the note will
+inherit the location of the parent diagnostic.
+
+```c++
+// Emit a note with an explicit source location.
+op->emitError("...").attachNote(noteLoc) << "...";
+
+// Emit a note that inherits the parent location.
+op->emitError("...").attachNote() << "...";
+```
+
+## InFlight Diagnostic
+
+Now that [Diagnostics](#diagnostic) have been explained, we introduce the
+`InFlightDiagnostic`: an RAII wrapper around a diagnostic that is set to be
+reported. This allows for modifying a diagnostic while it is still in flight.
+If it is not reported directly by the user, it will automatically be reported
+when destroyed.
+
+```c++
+{
+ InFlightDiagnostic diag = op->emitError() << "...";
+} // The diagnostic is automatically reported here.
+```
+
+## Diagnostic Configuration Options
+
+Several options are provided to help control and enhance the behavior of
+diagnostics. These options are listed below:
+
+### Print Operation On Diagnostic
+
+Command Line Flag: `-mlir-print-op-on-diagnostic`
+
+When a diagnostic is emitted on an operation, via `Operation::emitError/...`,
+the textual form of that operation is printed and attached as a note to the
+diagnostic. This option is useful for understanding the current form of an
+operation that may be invalid, especially when debugging verifier failures. An
+example output is shown below:
+
+```shell
+test.mlir:3:3: error: 'module_terminator' op expects parent op 'module'
+ "module_terminator"() : () -> ()
+ ^
+test.mlir:3:3: note: see current operation: "module_terminator"() : () -> ()
+ "module_terminator"() : () -> ()
+ ^
+```
+
+### Print StackTrace On Diagnostic
+
+Command Line Flag: `-mlir-print-stacktrace-on-diagnostic`
+
+When a diagnostic is emitted, attach the current stack trace as a note to the
+diagnostic. This option is useful for understanding which part of the compiler
+generated certain diagnostics. An example output is shown below:
+
+```shell
+test.mlir:3:3: error: 'module_terminator' op expects parent op 'module'
+ "module_terminator"() : () -> ()
+ ^
+test.mlir:3:3: note: diagnostic emitted with trace:
+ #0 0x000055dd40543805 llvm::sys::PrintStackTrace(llvm::raw_ostream&) llvm/lib/Support/Unix/Signals.inc:553:11
+ #1 0x000055dd3f8ac162 emitDiag(mlir::Location, mlir::DiagnosticSeverity, llvm::Twine const&) /lib/IR/Diagnostics.cpp:292:7
+ #2 0x000055dd3f8abe8e mlir::emitError(mlir::Location, llvm::Twine const&) /lib/IR/Diagnostics.cpp:304:10
+ #3 0x000055dd3f998e87 mlir::Operation::emitError(llvm::Twine const&) /lib/IR/Operation.cpp:324:29
+ #4 0x000055dd3f99d21c mlir::Operation::emitOpError(llvm::Twine const&) /lib/IR/Operation.cpp:652:10
+ #5 0x000055dd3f96b01c mlir::OpTrait::HasParent<mlir::ModuleOp>::Impl<mlir::ModuleTerminatorOp>::verifyTrait(mlir::Operation*) /mlir/IR/OpDefinition.h:897:18
+ #6 0x000055dd3f96ab38 mlir::Op<mlir::ModuleTerminatorOp, mlir::OpTrait::ZeroOperands, mlir::OpTrait::ZeroResult, mlir::OpTrait::HasParent<mlir::ModuleOp>::Impl, mlir::OpTrait::IsTerminator>::BaseVerifier<mlir::OpTrait::HasParent<mlir::ModuleOp>::Impl<mlir::ModuleTerminatorOp>, mlir::OpTrait::IsTerminator<mlir::ModuleTerminatorOp> >::verifyTrait(mlir::Operation*) /mlir/IR/OpDefinition.h:1052:29
+ # ...
+ "module_terminator"() : () -> ()
+ ^
+```
+
+## Common Diagnostic Handlers
+
+To interface with the diagnostics infrastructure, users will need to register a
+diagnostic handler with the [`DiagnosticEngine`](#diagnostic-engine).
+Recognizing that many users will want the same handler functionality, MLIR
+provides several common diagnostic handlers for immediate use.
+
+### Scoped Diagnostic Handler
+
+This diagnostic handler is a simple RAII class that registers and unregisters a
+given diagnostic handler. This class can either be used directly, or in
+conjunction with a derived diagnostic handler.
+
+```c++
+// Construct the handler directly.
+MLIRContext context;
+ScopedDiagnosticHandler scopedHandler(&context, [](Diagnostic &diag) {
+ ...
+});
+
+// Use this handler in conjunction with another.
+class MyDerivedHandler : public ScopedDiagnosticHandler {
+ MyDerivedHandler(MLIRContext *ctx) : ScopedDiagnosticHandler(ctx) {
+ // Set the handler that should be RAII managed.
+    setHandler([&](Diagnostic &diag) {
+ ...
+ });
+ }
+};
+```
+
+### SourceMgr Diagnostic Handler
+
+This diagnostic handler is a wrapper around an llvm::SourceMgr instance. It
+provides support for displaying diagnostic messages inline with a line of a
+respective source file. This handler will also automatically load newly seen
+source files into the SourceMgr when attempting to display the source line of a
+diagnostic. Example usage of this handler can be seen in the `mlir-opt` tool.
+
+```shell
+$ mlir-opt foo.mlir
+
+/tmp/test.mlir:6:24: error: expected non-function type
+func @foo() -> (index, ind) {
+ ^
+```
+
+To use this handler in your tool, add the following:
+
+```c++
+SourceMgr sourceMgr;
+MLIRContext context;
+SourceMgrDiagnosticHandler sourceMgrHandler(sourceMgr, &context);
+```
+
+### SourceMgr Diagnostic Verifier Handler
+
+This handler is a wrapper around an llvm::SourceMgr that is used to verify that
+certain diagnostics have been emitted to the context. To use this handler,
+annotate your source file with expected diagnostics in the form of:
+
+* `expected-(error|note|remark|warning) {{ message }}`
+
+A few examples are shown below:
+
+```mlir
+// Expect an error on the same line.
+func @bad_branch() {
+ br ^missing // expected-error {{reference to an undefined block}}
+}
+
+// Expect an error on an adjacent line.
+func @foo(%a : f32) {
+ // expected-error@+1 {{unknown comparison predicate "foo"}}
+ %result = cmpf "foo", %a, %a : f32
+ return
+}
+
+// Expect an error on the next line that does not contain a designator.
+// expected-remark@below {{remark on function below}}
+// expected-remark@below {{another remark on function below}}
+func @bar(%a : f32)
+
+// Expect an error on the previous line that does not contain a designator.
+func @baz(%a : f32)
+// expected-remark@above {{remark on function above}}
+// expected-remark@above {{another remark on function above}}
+
+```
+
+The handler will report an error if any unexpected diagnostics were seen, or if
+any expected diagnostics weren't.
+
+```shell
+$ mlir-opt foo.mlir
+
+/tmp/test.mlir:6:24: error: unexpected error: expected non-function type
+func @foo() -> (index, ind) {
+ ^
+
+/tmp/test.mlir:15:4: error: expected remark "expected some remark" was not produced
+// expected-remark {{expected some remark}}
+ ^~~~~~~~~~~~~~~~~~~~~~~~~~
+```
+
+Similarly to the [SourceMgr Diagnostic Handler](#sourcemgr-diagnostic-handler),
+this handler can be added to any tool via the following:
+
+```c++
+SourceMgr sourceMgr;
+MLIRContext context;
+SourceMgrDiagnosticVerifierHandler sourceMgrHandler(sourceMgr, &context);
+```
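+
+The shape of these annotations can be modeled with a small standalone parser.
+This is a hypothetical sketch for illustrating the syntax only, not the
+verifier handler's actual implementation:
+
+```c++
+#include <cassert>
+#include <regex>
+#include <string>
+
+// Toy parse of the expected-diagnostic annotation form:
+//   expected-(error|note|remark|warning)[@(+N|-N|above|below)] {{ message }}
+struct Expected {
+  std::string severity, designator, message;
+};
+
+bool parseExpected(const std::string &line, Expected &out) {
+  static const std::regex re(
+      "expected-(error|note|remark|warning)(@([+-][0-9]+|above|below))?"
+      " *\\{\\{(.*)\\}\\}");
+  std::smatch m;
+  if (!std::regex_search(line, m, re))
+    return false;
+  out = {m[1], m[3], m[4]};
+  return true;
+}
+
+int main() {
+  Expected e;
+  assert(parseExpected("// expected-error@+1 {{unknown predicate}}", e));
+  assert(e.severity == "error" && e.designator == "+1" &&
+         e.message == "unknown predicate");
+  assert(parseExpected("// expected-remark@below {{remark on function below}}", e));
+  assert(e.designator == "below");
+  assert(!parseExpected("// just a comment", e));
+  return 0;
+}
+```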
+
+### Parallel Diagnostic Handler
+
+MLIR is designed from the ground up to be multi-threaded. One important thing
+to keep in mind when multi-threading is determinism: the behavior seen when
+operating on multiple threads should be the same as when operating on a single
+thread. For diagnostics, this means that the ordering of the diagnostics is the
+same regardless of the number of threads being used. The
+ParallelDiagnosticHandler is introduced to solve this problem.
+
+After creating a handler of this type, the only remaining step is to ensure that
+each thread that will be emitting diagnostics to the handler sets a respective
+'orderID'. The orderID corresponds to the order in which diagnostics would be
+emitted when executing synchronously. For example, if we were processing a list
+of operations [a, b, c] on a single thread, diagnostics emitted while
+processing operation 'a' would be emitted before those for 'b' or 'c'. This
+corresponds 1-1 with the 'orderID'. The thread that is processing 'a' should
+set the orderID to '0'; the thread processing 'b' should set it to '1'; and so
+on and so forth.
+This provides a way for the handler to deterministically order the diagnostics
+that it receives given the thread that it is receiving on.
+
+A simple example is shown below:
+
+```c++
+MLIRContext *context = ...;
+ParallelDiagnosticHandler handler(context);
+
+// Process a list of operations in parallel.
+std::vector<Operation *> opsToProcess = ...;
+llvm::for_each_n(llvm::parallel::par, 0, opsToProcess.size(),
+ [&](size_t i) {
+ // Notify the handler that we are processing the i'th operation.
+ handler.setOrderIDForThread(i);
+ auto *op = opsToProcess[i];
+ ...
+
+ // Notify the handler that we are finished processing diagnostics on this
+ // thread.
+ handler.eraseOrderIDForThread();
+});
+```
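+
+The buffering scheme behind such a handler can be sketched in plain,
+standalone C++. This toy model (which ignores the thread synchronization a
+real handler needs) shows how keying messages by orderID recovers the
+single-threaded order regardless of when each worker finishes:
+
+```c++
+#include <cassert>
+#include <map>
+#include <string>
+#include <vector>
+
+// Toy model of deterministic parallel diagnostics: each worker tags its
+// messages with an orderID; flushing emits them in orderID order, matching
+// what a single-threaded run would produce.
+class ToyParallelHandler {
+  std::map<size_t, std::vector<std::string>> buffered;
+
+public:
+  void emit(size_t orderID, const std::string &msg) {
+    buffered[orderID].push_back(msg);
+  }
+  std::vector<std::string> flush() {
+    std::vector<std::string> out;
+    for (auto &entry : buffered) // std::map iterates in key order.
+      out.insert(out.end(), entry.second.begin(), entry.second.end());
+    return out;
+  }
+};
+
+int main() {
+  ToyParallelHandler handler;
+  // Simulate out-of-order completion of three workers.
+  handler.emit(2, "error in c");
+  handler.emit(0, "error in a");
+  handler.emit(1, "error in b");
+  std::vector<std::string> diags = handler.flush();
+  assert((diags == std::vector<std::string>{"error in a", "error in b",
+                                            "error in c"}));
+  return 0;
+}
+```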
diff --git a/mlir/docs/DialectConversion.md b/mlir/docs/DialectConversion.md
new file mode 100644
index 00000000000..e6b652f2191
--- /dev/null
+++ b/mlir/docs/DialectConversion.md
@@ -0,0 +1,277 @@
+# Dialect Conversion
+
+This document describes a framework in MLIR for performing operation
+conversions between, and within, dialects. This framework allows for
+transforming illegal operations into those supported by a provided conversion
+target, via a set of pattern-based operation rewrites.
+
+[TOC]
+
+To utilize the framework, a few things must be provided:
+
+* A [Conversion Target](#conversion-target)
+* A set of [Rewrite Patterns](#rewrite-pattern-specification)
+* A [Type Converter](#type-conversion) (Optional)
+
+## Modes of Conversion
+
+When applying a conversion to a set of operations, there are several conversion
+modes that can be selected from:
+
+* Partial Conversion
+
+ - A partial conversion will legalize as many operations to the target as
+ possible, but will allow pre-existing operations that were not
+ explicitly marked as `illegal` to remain unconverted. This allows for
+ partially lowering parts of the module in the presence of unknown
+ operations.
+ - A partial conversion can be applied via `applyPartialConversion`.
+
+* Full Conversion
+
+ - A full conversion is only successful if all operations are properly
+ legalized to the given conversion target. This ensures that only known
+ operations will exist after the conversion process.
+ - A full conversion can be applied via `applyFullConversion`.
+
+* Analysis Conversion
+
+ - An analysis conversion will analyze which operations are legalizable to
+ the given conversion target if a conversion were to be applied. Note
+ that no rewrites, or transformations, are actually applied to the input
+ operations.
+ - An analysis conversion can be applied via `applyAnalysisConversion`.
+
+## Conversion Target
+
+The conversion target is the formal definition of what is considered to be legal
+during the conversion process. The final operations generated by the conversion
+framework must be marked as legal on the `ConversionTarget` for the rewrite to
+be a success. Existing operations need not always be legal, though; see the
+different conversion modes for why. Operations and dialects may be marked with
+any of the provided legality actions below:
+
+* Legal
+
+ - This action signals that every instance of a given operation is legal,
+ i.e. any combination of attributes, operands, types, etc. are valid.
+
+* Dynamic
+
+ - This action signals that only some instances of a given operation are
+        legal. This allows for defining fine-grained constraints, e.g. saying that
+ `addi` is only legal when operating on 32-bit integers.
+ - If a specific handler is not provided when setting the action, the
+ target must override the `isDynamicallyLegal` hook provided by
+ `ConversionTarget`.
+
+* Illegal
+
+ - This action signals that no instance of a given operation is legal.
+ Operations marked as `illegal` must always be converted for the
+ conversion to be successful. This action also allows for selectively
+ marking specific operations as illegal in an otherwise legal dialect.
+
+An example conversion target is shown below:
+
+```c++
+struct MyTarget : public ConversionTarget {
+ MyTarget(MLIRContext &ctx) : ConversionTarget(ctx) {
+ //--------------------------------------------------------------------------
+ // Marking an operation as Legal:
+
+    /// Mark all operations within the LLVM dialect as legal.
+    addLegalDialect<LLVMDialect>();
+
+    /// Mark the `std.constant` op as always legal on this target.
+    addLegalOp<ConstantOp>();
+
+ //--------------------------------------------------------------------------
+ // Marking an operation as dynamically legal.
+
+    /// Mark all operations within the Affine dialect as having dynamic
+    /// legality constraints.
+    addDynamicallyLegalDialect<AffineDialect>();
+
+ /// Mark `std.return` as dynamically legal.
+ addDynamicallyLegalOp<ReturnOp>();
+
+ /// Mark `std.return` as dynamically legal, but provide a specific legality
+ /// callback.
+ addDynamicallyLegalOp<ReturnOp>([](ReturnOp op) { ... });
+
+ //--------------------------------------------------------------------------
+ // Marking an operation as illegal.
+
+ /// All operations within the GPU dialect are illegal.
+ addIllegalDialect<GPUDialect>();
+
+ /// Mark `std.br` and `std.cond_br` as illegal.
+ addIllegalOp<BranchOp, CondBranchOp>();
+ }
+
+ /// Implement the default legalization handler to handle operations marked as
+ /// dynamically legal that were not provided with an explicit handler.
+ bool isDynamicallyLegal(Operation *op) override { ... }
+};
+```
+
+### Recursive Legality
+
+In some cases, it may be desirable to mark entire regions of operations as
+legal. This provides an additional granularity of context to the concept of
+"legal". The `ConversionTarget` supports marking operations that were
+previously added as `Legal` or `Dynamic` as `recursively` legal. Recursive
+legality means that if an operation instance is legal, either statically or
+dynamically, all of the operations nested within are also considered legal. An
+operation can be marked via `markOpRecursivelyLegal<>`:
+
+```c++
+ConversionTarget &target = ...;
+
+/// The operation must first be marked as `Legal` or `Dynamic`.
+target.addLegalOp<MyOp>(...);
+target.addDynamicallyLegalOp<MySecondOp>(...);
+
+/// Mark the operation as always recursively legal.
+target.markOpRecursivelyLegal<MyOp>();
+/// Mark optionally with a callback to allow selective marking.
+target.markOpRecursivelyLegal<MyOp, MySecondOp>([](Operation *op) { ... });
+/// The callback may also take the concrete operation type.
+target.markOpRecursivelyLegal<MyOp>([](MyOp op) { ... });
+```
+
+## Rewrite Pattern Specification
+
+After the conversion target has been defined, a set of legalization patterns
+must be provided to transform illegal operations into legal ones. The patterns
+supplied here, that do not [require type changes](#conversion-patterns), are the
+same as those described in the
+[quickstart rewrites guide](QuickstartRewrites.md#adding-patterns), but have a
+few additional [restrictions](#restrictions). The patterns provided do not need
+to generate operations that are directly legal on the target. The framework will
+automatically build a graph of conversions to convert non-legal operations into
+a set of legal ones.
+
+As an example, say you define a target that supports one operation: `foo.add`.
+When providing the following patterns: [`bar.add` -> `baz.add`, `baz.add` ->
+`foo.add`], the framework will automatically detect that it can legalize
+`baz.add` -> `foo.add` even though a direct conversion does not exist. This
+means that you don’t have to define a direct legalization pattern for `bar.add`
+-> `foo.add`.
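+
+This transitive search can be sketched in plain, standalone C++. In this toy
+model (an illustration only, which assumes each pattern maps one op name to
+another and that the pattern graph is acyclic), an op is legalizable if it is
+already legal or if some pattern rewrites it into a legalizable op:
+
+```c++
+#include <cassert>
+#include <map>
+#include <set>
+#include <string>
+
+// Toy sketch of transitive legalization: each pattern is reduced to a
+// "rewrites op X into op Y" edge, and the pattern graph is assumed acyclic.
+bool canLegalize(const std::string &op, const std::set<std::string> &legal,
+                 const std::multimap<std::string, std::string> &patterns) {
+  if (legal.count(op))
+    return true;
+  auto range = patterns.equal_range(op);
+  for (auto it = range.first; it != range.second; ++it)
+    if (canLegalize(it->second, legal, patterns))
+      return true;
+  return false;
+}
+
+int main() {
+  std::set<std::string> legal = {"foo.add"};
+  std::multimap<std::string, std::string> patterns = {
+      {"bar.add", "baz.add"}, {"baz.add", "foo.add"}};
+  // bar.add is legalizable via baz.add, even without a direct pattern.
+  assert(canLegalize("bar.add", legal, patterns));
+  assert(!canLegalize("qux.add", legal, patterns));
+  return 0;
+}
+```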
+
+### Restrictions
+
+The framework processes operations in topological order, trying to legalize them
+individually. As such, patterns used in the conversion framework have a few
+additional restrictions:
+
+1. If a pattern matches, it must erase or replace the op it matched on.
+ Operations can *not* be updated in place.
+2. Match criteria should not be based on the IR outside of the op itself. The
+ preceding ops will already have been processed by the framework (although it
+ may not update uses), and the subsequent IR will not yet be processed. This
+ can create confusion if a pattern attempts to match against a sequence of
+ ops (e.g. rewrite A + B -> C). That sort of rewrite should be performed in a
+ separate pass.
+
+## Type Conversion
+
+It is sometimes necessary as part of a conversion to convert the set of types
+being operated on. In these cases, a `TypeConverter` object may be defined that
+details how types should be converted. The `TypeConverter` is used by patterns
+and by the general conversion infrastructure to convert the signatures of blocks
+and regions.
+
+### Type Converter
+
+As stated above, the `TypeConverter` contains several hooks for detailing how to
+convert types. Several of these hooks are detailed below:
+
+```c++
+class TypeConverter {
+ public:
+ /// This hook allows for converting a type. This function should return
+ /// failure if no valid conversion exists, success otherwise. If the new set
+ /// of types is empty, the type is removed and any usages of the existing
+ /// value are expected to be removed during conversion.
+ virtual LogicalResult convertType(Type t, SmallVectorImpl<Type> &results);
+
+ /// This hook simplifies defining 1-1 type conversions. This function returns
+ /// the type to convert to on success, and a null type on failure.
+ virtual Type convertType(Type t);
+
+ /// This hook allows for materializing a conversion from a set of types into
+ /// one result type by generating a cast operation of some kind. The generated
+ /// operation should produce one result, of 'resultType', with the provided
+ /// 'inputs' as operands. This hook must be overridden when a type conversion
+ /// results in more than one type, or if a type conversion may persist after
+ /// the conversion has finished.
+ virtual Operation *materializeConversion(PatternRewriter &rewriter,
+ Type resultType,
+ ArrayRef<Value> inputs,
+ Location loc);
+};
+```
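+
+The 1->1, 1->N, and 1->0 semantics of the first hook can be illustrated with a
+standalone toy model, using strings in place of `Type`. This is a sketch of
+the conversion contract only, not the MLIR API, and the type names chosen are
+hypothetical:
+
+```c++
+#include <cassert>
+#include <string>
+#include <vector>
+
+// Toy model of the convertType hook: a "type" is just a string, and a
+// conversion maps one type to zero or more result types. Returning false
+// means no valid conversion exists.
+bool convertToyType(const std::string &type,
+                    std::vector<std::string> &results) {
+  if (type == "complex") { // 1->N: split into two scalars.
+    results.push_back("float");
+    results.push_back("float");
+    return true;
+  }
+  if (type == "token") // 1->0: the type is dropped, uses must be removed.
+    return true;
+  if (type == "index") { // 1->1 conversion.
+    results.push_back("i64");
+    return true;
+  }
+  return false; // No valid conversion exists.
+}
+
+int main() {
+  std::vector<std::string> results;
+  assert(convertToyType("complex", results) && results.size() == 2);
+  results.clear();
+  assert(convertToyType("token", results) && results.empty());
+  results.clear();
+  assert(!convertToyType("mystery", results));
+  return 0;
+}
+```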
+
+### Conversion Patterns
+
+When type conversion comes into play, the general rewrite patterns can no
+longer be used. This is because the operands of the operation being matched
+will not correspond to operands of the correct types, as determined by the
+`TypeConverter`. Operation rewrites on type boundaries must thus use a
+special pattern, the `ConversionPattern`. This pattern provides, as an
+additional argument to the `matchAndRewrite` and `rewrite` methods, the set of
+remapped operands corresponding to the desired type. These patterns also utilize
+a special `PatternRewriter`, `ConversionPatternRewriter`, that provides special
+hooks for use with the conversion infrastructure.
+
+```c++
+struct MyConversionPattern : public ConversionPattern {
+ /// The `matchAndRewrite` hooks on ConversionPatterns take an additional
+ /// `operands` parameter, containing the remapped operands of the original
+ /// operation.
+ virtual PatternMatchResult
+ matchAndRewrite(Operation *op, ArrayRef<Value> operands,
+ ConversionPatternRewriter &rewriter) const;
+};
+```
+
+These patterns have the same [restrictions](#restrictions) as the basic rewrite
+patterns used in dialect conversion.
+
+### Region Signature Conversion
+
+From the perspective of type conversion, the entry block to a region is often
+special. The types of the entry block arguments are often tied semantically to
+details on the operation, e.g. FuncOp, AffineForOp, etc. Given this, the
+conversion of the types for this block must be done explicitly via a conversion
+pattern. To convert the signature of a region entry block, a custom hook on the
+`ConversionPatternRewriter`, `applySignatureConversion`, must be invoked. A
+signature conversion, `TypeConverter::SignatureConversion`, can be built
+programmatically:
+
+```c++
+class SignatureConversion {
+public:
+ /// Remap an input of the original signature with a new set of types. The
+ /// new types are appended to the new signature conversion.
+ void addInputs(unsigned origInputNo, ArrayRef<Type> types);
+
+ /// Append new input types to the signature conversion, this should only be
+ /// used if the new types are not intended to remap an existing input.
+ void addInputs(ArrayRef<Type> types);
+
+ /// Remap an input of the original signature with a range of types in the
+ /// new signature.
+ void remapInput(unsigned origInputNo, unsigned newInputNo,
+ unsigned newInputCount = 1);
+
+ /// Remap an input of the original signature to another `replacement`
+ /// value. This drops the original argument.
+ void remapInput(unsigned origInputNo, Value replacement);
+};
+```
+
+The `TypeConverter` provides several default utilities for signature conversion:
+`convertSignatureArg`/`convertBlockSignature`.
diff --git a/mlir/docs/Dialects/Affine.md b/mlir/docs/Dialects/Affine.md
new file mode 100644
index 00000000000..c5dcf6a6790
--- /dev/null
+++ b/mlir/docs/Dialects/Affine.md
@@ -0,0 +1,610 @@
+# Affine Dialect
+
+This dialect provides a powerful abstraction for affine operations and analyses.
+
+[TOC]
+
+## Polyhedral Structures
+
+MLIR uses techniques from polyhedral compilation to make dependence analysis and
+loop transformations efficient and reliable. This section introduces some of the
+core concepts that are used throughout the document.
+
+### Dimensions and Symbols
+
+Dimensions and symbols are the two kinds of identifiers that can appear in the
+polyhedral structures, and are always of [`index`](../LangRef.md#index-type)
+type. Dimensions are declared in parentheses and symbols are declared in square
+brackets.
+
+Examples:
+
+```mlir
+// A 2d to 3d affine mapping.
+// d0/d1 are dimensions, s0 is a symbol
+#affine_map2to3 = (d0, d1)[s0] -> (d0, d1 + s0, d1 - s0)
+```
+
+Dimensional identifiers correspond to the dimensions of the underlying structure
+being represented (a map, set, or more concretely a loop nest or a tensor); for
+example, a three-dimensional loop nest has three dimensional identifiers. Symbol
+identifiers represent an unknown quantity that can be treated as constant for a
+region of interest.
+
+Dimensions and symbols are bound to SSA values by various operations in MLIR and
+use the same parenthesized vs square bracket list to distinguish the two.
+
+Syntax:
+
+```
+// Uses of SSA values that are passed to dimensional identifiers.
+dim-use-list ::= `(` ssa-use-list? `)`
+
+// Uses of SSA values that are used to bind symbols.
+symbol-use-list ::= `[` ssa-use-list? `]`
+
+// Most things that bind SSA values bind dimensions and symbols.
+dim-and-symbol-use-list ::= dim-use-list symbol-use-list?
+```
+
+SSA values bound to dimensions and symbols must always have 'index' type.
+
+Example:
+
+```mlir
+#affine_map2to3 = (d0, d1)[s0] -> (d0, d1 + s0, d1 - s0)
+// Binds %N to the s0 symbol in affine_map2to3.
+%x = alloc()[%N] : memref<40x50xf32, #affine_map2to3>
+```
+
+### Restrictions on Dimensions and Symbols
+
+The affine dialect imposes certain restrictions on dimension and symbolic
+identifiers to enable powerful analysis and transformation. A symbolic
+identifier can be bound to an SSA value that is one of the following:
+
+*   an argument to the function;
+*   a value defined at the top level of that function (outside of all loops and
+    if operations);
+*   the result of a [`constant` operation](Standard.md#constant-operation);
+*   the result of an [`affine.apply` operation](#affineapply-operation) that
+    recursively takes as arguments any symbolic identifiers; or
+*   the result of a [`dim` operation](Standard.md#dim-operation) on either a
+    memref that is a function argument, or a memref where the corresponding
+    dimension is either static or a dynamic one in turn bound to a symbolic
+    identifier.
+
+Dimensions may be bound not only to anything that a symbol is bound to, but
+also to induction variables of enclosing
+[`affine.for` operations](#affinefor-operation), and the result of an
+[`affine.apply` operation](#affineapply-operation) (which recursively may use
+other dimensions and symbols).
+
+### Affine Expressions
+
+Syntax:
+
+```
+affine-expr ::= `(` affine-expr `)`
+ | affine-expr `+` affine-expr
+ | affine-expr `-` affine-expr
+ | `-`? integer-literal `*` affine-expr
+ | affine-expr `ceildiv` integer-literal
+ | affine-expr `floordiv` integer-literal
+ | affine-expr `mod` integer-literal
+ | `-`affine-expr
+ | bare-id
+ | `-`? integer-literal
+
+multi-dim-affine-expr ::= `(` affine-expr (`,` affine-expr)* `)`
+```
+
+`ceildiv` is the ceiling function which maps the result of the division of its
+first argument by its second argument to the smallest integer greater than or
+equal to that result. `floordiv` is a function which maps the result of the
+division of its first argument by its second argument to the largest integer
+less than or equal to that result. `mod` is the modulo operation: since its
+second argument is always positive, its results are always positive in our
+usage. The `integer-literal` operand for ceildiv, floordiv, and mod is always
+expected to be positive. `bare-id` is an identifier which must have type
+[index](../LangRef.md#index-type). The precedence of operations in an affine
+expression is, from highest to lowest: (1) parenthesization, (2) negation, (3)
+modulo, multiplication, floordiv, and ceildiv, and (4) addition and
+subtraction. All of these operators associate from left to right.
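+
+For instance, a small illustrative map (the name is hypothetical) combines
+these operators; for d0 = 10 it yields (1, 2, 3), since 10 floordiv 8 = 1,
+10 mod 8 = 2, and (10 + 7) ceildiv 8 = 3:
+
+```mlir
+#decompose = (d0) -> (d0 floordiv 8, d0 mod 8, (d0 + 7) ceildiv 8)
+```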
+
+A _multidimensional affine expression_ is a comma separated list of
+one-dimensional affine expressions, with the entire list enclosed in
+parentheses.
+
+**Context:** An affine function, informally, is a linear function plus a
+constant. More formally, a function f defined on a vector $$\vec{v} \in
+\mathbb{Z}^n$$ is a multidimensional affine function of $$\vec{v}$$ if
+$$f(\vec{v})$$ can be expressed in the form $$M \vec{v} + \vec{c}$$ where $$M$$
+is a constant matrix from $$\mathbb{Z}^{m \times n}$$ and $$\vec{c}$$ is a
+constant vector from $$\mathbb{Z}^m$$. $$m$$ is the dimensionality of such an
+affine function. MLIR further extends the definition of an affine function to
+allow 'floordiv', 'ceildiv', and 'mod' with respect to positive integer
+constants. Such extensions to affine functions have often been referred to as
+quasi-affine functions by the polyhedral compiler community. MLIR uses the term
+'affine map' to refer to these multidimensional quasi-affine functions. As
+examples, $$(i+j+1, j)$$, $$(i \mod 2, j+i)$$, $$(j, i/4, i \mod 4)$$, and
+$$(2i+1, j)$$ are affine functions of $$(i, j)$$, but $$(i \cdot j, i^2)$$ and
+$$(i \mod j, i/j)$$ are not affine functions of $$(i, j)$$.
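+
+As an illustrative sketch (map names are hypothetical), the first map below is
+a valid affine map, while the second would be rejected because it multiplies
+two dimensional identifiers:
+
+```mlir
+// Affine: linear combinations of d0 and d1 plus constants, with mod by an
+// integer constant.
+#affine_ok = (d0, d1) -> (d0 + d1 + 1, d0 mod 2)
+
+// NOT affine: d0 * d1 multiplies two dimensional identifiers.
+// #affine_bad = (d0, d1) -> (d0 * d1)
+```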
+
+### Affine Maps
+
+Syntax:
+
+```
+affine-map-inline
+ ::= dim-and-symbol-id-lists `->` multi-dim-affine-expr
+```
+
+The identifiers in the dimensions and symbols lists must be unique. These are
+the only identifiers that may appear in 'multi-dim-affine-expr'. Affine maps
+with one or more symbols in their specification are known as "symbolic affine
+maps", and those with no symbols as "non-symbolic affine maps".
+
+**Context:** Affine maps are mathematical functions that transform a list of
+dimension indices and symbols into a list of results, with affine expressions
+combining the indices and symbols. Affine maps distinguish between
+[indices and symbols](#dimensions-and-symbols) because indices are inputs to the
+affine map when the map is called (through an operation such as
+[affine.apply](#affineapply-operation)), whereas symbols are bound when
+the map is established (e.g. when a memref is formed, establishing a
+memory [layout map](../LangRef.md#layout-map)).
+
+Affine maps are used for various core structures in MLIR. The restrictions we
+impose on their form allows powerful analysis and transformation, while keeping
+the representation closed with respect to several operations of interest.
+
+#### Named affine mappings
+
+Syntax:
+
+```
+affine-map-id ::= `#` suffix-id
+
+// Definitions of affine maps are at the top of the file.
+affine-map-def ::= affine-map-id `=` affine-map-inline
+module-header-def ::= affine-map-def
+
+// Uses of affine maps may use the inline form or the named form.
+affine-map ::= affine-map-id | affine-map-inline
+```
+
+Affine mappings may be defined inline at the point of use, or may be hoisted to
+the top of the file and given a name with an affine map definition, and used by
+name.
+
+Examples:
+
+```mlir
+// Affine map out-of-line definition and usage example.
+#affine_map42 = (d0, d1)[s0] -> (d0, d0 + d1 + s0 floordiv 2)
+
+// Use an affine mapping definition in an alloc operation, binding the
+// SSA value %N to the symbol s0.
+%a = alloc()[%N] : memref<4x4xf32, #affine_map42>
+
+// Same thing with an inline affine mapping definition.
+%b = alloc()[%N] : memref<4x4xf32, (d0, d1)[s0] -> (d0, d0 + d1 + s0 floordiv 2)>
+```
+
+### Semi-affine maps
+
+Semi-affine maps are extensions of affine maps to allow multiplication,
+`floordiv`, `ceildiv`, and `mod` with respect to symbolic identifiers.
+Semi-affine maps are thus a strict superset of affine maps.
+
+Syntax of semi-affine expressions:
+
+```
+semi-affine-expr ::= `(` semi-affine-expr `)`
+ | semi-affine-expr `+` semi-affine-expr
+ | semi-affine-expr `-` semi-affine-expr
+ | symbol-or-const `*` semi-affine-expr
+ | semi-affine-expr `ceildiv` symbol-or-const
+ | semi-affine-expr `floordiv` symbol-or-const
+ | semi-affine-expr `mod` symbol-or-const
+ | bare-id
+ | `-`? integer-literal
+
+symbol-or-const ::= `-`? integer-literal | symbol-id
+
+multi-dim-semi-affine-expr ::= `(` semi-affine-expr (`,` semi-affine-expr)* `)`
+```
+
+The precedence and associativity of operations in the syntax above is the same
+as that for [affine expressions](#affine-expressions).
+
+Syntax of semi-affine maps:
+
+```
+semi-affine-map-inline
+ ::= dim-and-symbol-id-lists `->` multi-dim-semi-affine-expr
+```
+
+Semi-affine maps may be defined inline at the point of use, or may be hoisted to
+the top of the file and given a name with a semi-affine map definition, and used
+by name.
+
+```
+semi-affine-map-id ::= `#` suffix-id
+
+// Definitions of semi-affine maps are at the top of file.
+semi-affine-map-def ::= semi-affine-map-id `=` semi-affine-map-inline
+module-header-def ::= semi-affine-map-def
+
+// Uses of semi-affine maps may use the inline form or the named form.
+semi-affine-map ::= semi-affine-map-id | semi-affine-map-inline
+```
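+
+As an illustrative sketch (the map name and use case are hypothetical), a
+semi-affine map may divide by a symbolic tile size, which a strictly affine
+map cannot:
+
+```mlir
+// Semi-affine: the divisor and modulus are the symbol s0 rather than
+// integer literals.
+#tiled = (d0)[s0] -> (d0 floordiv s0, d0 mod s0)
+```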
+
+### Integer Sets
+
+An integer set is a conjunction of affine constraints on a list of identifiers.
+The identifiers associated with the integer set are separated out into two
+classes: the set's dimension identifiers, and the set's symbolic identifiers.
+The set is viewed as being parametric on its symbolic identifiers. In the
+syntax, the list of the set's dimension identifiers is enclosed in parentheses
+while its symbols are enclosed in square brackets.
+
+Syntax of affine constraints:
+
+```
+affine-constraint ::= affine-expr `>=` `0`
+ | affine-expr `==` `0`
+affine-constraint-conjunction ::= affine-constraint (`,` affine-constraint)*
+```
+
+Integer sets may be defined inline at the point of use, or may be hoisted to the
+top of the file and given a name with an integer set definition, and used by
+name.
+
+```
+integer-set-id ::= `#` suffix-id
+
+integer-set-inline
+ ::= dim-and-symbol-id-lists `:` '(' affine-constraint-conjunction? ')'
+
+// Declarations of integer sets are at the top of the file.
+integer-set-decl ::= integer-set-id `=` integer-set-inline
+
+// Uses of integer sets may use the inline form or the named form.
+integer-set ::= integer-set-id | integer-set-inline
+```
+
+The dimensionality of an integer set is the number of identifiers appearing in
+the dimension list of the set. The affine-constraint non-terminals appearing in the
+syntax above are only allowed to contain identifiers from dims and symbols. A
+set with no constraints is a set that is unbounded along all of the set's
+dimensions.
+
+Example:
+
+```mlir
+// An example two-dimensional integer set with two symbols.
+#set42 = (d0, d1)[s0, s1]
+ : (d0 >= 0, -d0 + s0 - 1 >= 0, d1 >= 0, -d1 + s1 - 1 >= 0)
+
+// Inside a Region
+affine.if #set42(%i, %j)[%M, %N] {
+ ...
+}
+```
+
+`d0` and `d1` correspond to dimensional identifiers of the set, while `s0` and
+`s1` are symbol identifiers.
+
+## Operations
+
+#### 'affine.apply' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `affine.apply` affine-map dim-and-symbol-use-list
+```
+
+The `affine.apply` operation applies an
+[affine mapping](#affine-maps) to a list of SSA values,
+yielding a single SSA value. The number of dimension and symbol arguments to
+affine.apply must be equal to the respective number of dimensional and symbolic
+inputs to the affine mapping; the `affine.apply` operation always returns one
+value. The input operands and result must all have 'index' type.
+
+Example:
+
+```mlir
+#map10 = (d0, d1) -> (d0 floordiv 8 + d1 floordiv 128)
+...
+%1 = affine.apply #map10 (%s, %t)
+
+// Inline example.
+%2 = affine.apply (i)[s0] -> (i+s0) (%42)[%n]
+```
+
+#### 'affine.for' operation
+
+Syntax:
+
+```
+operation ::= `affine.for` ssa-id `=` lower-bound `to` upper-bound
+ (`step` integer-literal)? `{` op* `}`
+
+lower-bound ::= `max`? affine-map dim-and-symbol-use-list | shorthand-bound
+upper-bound ::= `min`? affine-map dim-and-symbol-use-list | shorthand-bound
+shorthand-bound ::= ssa-id | `-`? integer-literal
+```
+
+The `affine.for` operation represents an affine loop nest. It has one region
+containing its body. This region must contain one block that terminates with
+[`affine.terminator`](#affineterminator-operation). *Note:* when `affine.for` is
+printed in custom format, the terminator is omitted. The block has one argument
+of [`index`](../LangRef.md#index-type) type that represents the induction
+variable of the loop.
+
+The `affine.for` operation executes its body a number of times iterating from a
+lower bound to an upper bound by a stride. The stride, represented by `step`, is
+a positive constant integer which defaults to "1" if not present. The lower and
+upper bounds specify a half-open range: the range includes the lower bound but
+does not include the upper bound.
+
+The lower and upper bounds of an `affine.for` operation are represented as an
+application of an affine mapping to a list of SSA values passed to the map. The
+[same restrictions](#restrictions-on-dimensions-and-symbols) hold for these SSA
+values as for all bindings of SSA values to dimensions and symbols.
+
+The affine mappings for the bounds may return multiple results, in which case
+the `max`/`min` keywords are required (for the lower/upper bound respectively),
+and the bound is the maximum/minimum of the returned values. There is no
+semantic ambiguity, but MLIR syntax requires the use of these keywords to make
+things more obvious to human readers.
+
+Many upper and lower bounds are simple, so MLIR accepts two custom form
+syntaxes: the form that accepts a single 'ssa-id' (e.g. `%N`) is shorthand for
+applying that SSA value to a function that maps a single symbol to itself, e.g.,
+`()[s]->(s)()[%N]`. The integer literal form (e.g. `-42`) is shorthand for a
+nullary mapping function that returns the constant value (e.g. `()->(-42)()`).
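+
+A sketch of the multi-result bound form (map names and constants are
+illustrative); note the mandatory `max` and `min` keywords:
+
+```mlir
+#lb = ()[s0] -> (0, s0 - 10)
+#ub = ()[s0, s1] -> (s0, s1 + 100)
+
+// %i iterates from max(0, %M - 10) to min(%N, %M + 100).
+affine.for %i = max #lb()[%M] to min #ub()[%N, %M] {
+  ...
+}
+```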
+
+Example showing reverse iteration of the inner loop:
+
+```mlir
+#map57 = (d0)[s0] -> (s0 - d0 - 1)
+
+func @simple_example(%A: memref<?x?xf32>, %B: memref<?x?xf32>) {
+ %N = dim %A, 0 : memref<?x?xf32>
+ affine.for %i = 0 to %N step 1 {
+ affine.for %j = 0 to %N { // implicitly steps by 1
+ %0 = affine.apply #map57(%j)[%N]
+ %tmp = call @F1(%A, %i, %0) : (memref<?x?xf32>, index, index)->(f32)
+ call @F2(%tmp, %B, %i, %0) : (f32, memref<?x?xf32>, index, index)->()
+ }
+ }
+ return
+}
+```
+
+#### 'affine.if' operation
+
+Syntax:
+
+```
+operation ::= `affine.if` if-op-cond `{` op* `}` (`else` `{` op* `}`)?
+if-op-cond ::= integer-set dim-and-symbol-use-list
+```
+
+The `affine.if` operation restricts execution to a subset of the loop iteration
+space defined by an integer set (a conjunction of affine constraints). A single
+`affine.if` may end with an optional `else` clause.
+
+The condition of the `affine.if` is represented by an
+[integer set](#integer-sets) (a conjunction of affine constraints),
+and the SSA values bound to the dimensions and symbols in the integer set. The
+[same restrictions](#restrictions-on-dimensions-and-symbols) hold for these SSA
+values as for all bindings of SSA values to dimensions and symbols.
+
+The `affine.if` operation contains two regions for the "then" and "else"
+clauses. The latter may be empty (i.e. contain no blocks), meaning the absence
+of the else clause. When non-empty, both regions must contain exactly one block
+terminating with [`affine.terminator`](#affineterminator-operation). *Note:*
+when `affine.if` is printed in custom format, the terminator is omitted. These
+blocks must not have any arguments.
+
+Example:
+
+```mlir
+#set = (d0, d1)[s0]: (d0 - 10 >= 0, s0 - d0 - 9 >= 0,
+ d1 - 10 >= 0, s0 - d1 - 9 >= 0)
+func @reduced_domain_example(%A: memref<10xi32>, %X: i32, %N: index) {
+ affine.for %i = 0 to %N {
+ affine.for %j = 0 to %N {
+ %0 = affine.apply #map42(%j)
+ %tmp = call @S1(%X, %i, %0)
+ affine.if #set(%i, %j)[%N] {
+ %1 = affine.apply #map43(%i, %j)
+ call @S2(%tmp, %A, %i, %1)
+ }
+ }
+ }
+ return
+}
+```
+
+#### 'affine.load' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `affine.load` ssa-use `[` multi-dim-affine-map-of-ssa-ids `]` `:` memref-type
+```
+
+The `affine.load` op reads an element from a memref, where the index for each
+memref dimension is an affine expression of loop induction variables and
+symbols. The output of 'affine.load' is a new value with the same type as the
+elements of the memref. An affine expression of loop IVs and symbols must be
+specified for each dimension of the memref. The keyword 'symbol' can be used to
+indicate SSA identifiers which are symbolic.
+
+Example:
+
+```mlir
+// Example 1: simple affine expressions of loop IVs.
+%1 = affine.load %0[%i0 + 3, %i1 + 7] : memref<100x100xf32>
+
+// Example 2: uses the 'symbol' keyword for the symbols %n and %m.
+%1 = affine.load %0[%i0 + symbol(%n), %i1 + symbol(%m)]
+    : memref<100x100xf32>
+```
+
+#### 'affine.store' operation
+
+Syntax:
+
+```
+operation ::= `affine.store` ssa-use, ssa-use `[` multi-dim-affine-map-of-ssa-ids `]` `:` memref-type
+```
+
+The `affine.store` op writes an element to a memref, where the index for each
+memref dimension is an affine expression of loop induction variables and
+symbols. The 'affine.store' op stores a new value which is the same type as the
+elements of the memref. An affine expression of loop IVs and symbols must be
+specified for each dimension of the memref. The keyword 'symbol' can be used to
+indicate SSA identifiers which are symbolic.
+
+Example:
+
+```mlir
+// Example 1: simple affine expressions of loop IVs.
+affine.store %v0, %0[%i0 + 3, %i1 + 7] : memref<100x100xf32>
+
+// Example 2: uses the 'symbol' keyword for the symbols %n and %m.
+affine.store %v0, %0[%i0 + symbol(%n), %i1 + symbol(%m)]
+    : memref<100x100xf32>
+```
+
+#### 'affine.dma_start' operation
+
+Syntax:
+
+```
+operation ::= `affine.dma_start` ssa-use `[` multi-dim-affine-map-of-ssa-ids `]`, ssa-use `[` multi-dim-affine-map-of-ssa-ids `]`, ssa-use `[` multi-dim-affine-map-of-ssa-ids `]`, ssa-use (`,` ssa-use `,` ssa-use)? `:` memref-type `,` memref-type `,` memref-type
+```
+
+The `affine.dma_start` op starts a non-blocking DMA operation that transfers
+data from a source memref to a destination memref. The source and destination
+memref need not be of the same dimensionality, but need to have the same
+elemental type. The operands include the source and destination memrefs, each
+followed by its indices, the size of the data transfer in terms of the number
+of elements (of the elemental type of the memref), a tag memref with its
+indices, and, optionally at the end, stride and number_of_elements_per_stride
+arguments. The tag location is used by an
+AffineDmaWaitOp to check for completion. The indices of the source memref,
+destination memref, and the tag memref have the same restrictions as any
+affine.load/store. In particular, the index for each memref dimension must be an
+affine expression of loop induction variables and symbols.
+The optional stride arguments should be of 'index' type, and specify a
+stride for the slower memory space (memory space with a lower memory space
+id), transferring chunks of number_of_elements_per_stride every stride until
+%num_elements are transferred. Either both or no stride arguments should be
+specified. The value of 'num_elements' must be a multiple of
+'number_of_elements_per_stride'.
+
+
+Example: a DmaStartOp that transfers 256 elements of memref '%src' in memory
+space 0 at indices [%i + 3, %j] to memref '%dst' in memory space 1 at indices
+[%k + 7, %l]:
+
+```mlir
+%num_elements = constant 256
+%idx = constant 0 : index
+%tag = alloc() : memref<1xi32, 2>
+affine.dma_start %src[%i + 3, %j], %dst[%k + 7, %l], %tag[%idx],
+  %num_elements :
+    memref<40x128xf32, 0>, memref<2x1024xf32, 1>, memref<1xi32, 2>
+```
+
+If %stride and %num_elt_per_stride are specified, the DMA transfers
+%num_elt_per_stride elements every %stride elements apart from memory space 0
+until %num_elements are transferred:
+
+```mlir
+affine.dma_start %src[%i, %j], %dst[%k, %l], %tag[%idx], %num_elements,
+  %stride, %num_elt_per_stride : ...
+```
+
+#### 'affine.dma_wait' operation
+
+Syntax:
+
+```
+operation ::= `affine.dma_wait` ssa-use `[` multi-dim-affine-map-of-ssa-ids `]`, ssa-use `:` memref-type
+```
+
+The `affine.dma_wait` op blocks until the completion of a DMA operation
+associated with the tag element '%tag[%index]'. %tag is a memref, and %index
+has to be an index with the same restrictions as any load/store index. In
+particular, the index for each memref dimension must be an affine expression of
+loop induction variables and symbols. %num_elements is the number of elements
+associated with the DMA operation.
+
+Example:
+
+```mlir
+affine.dma_start %src[%i, %j], %dst[%k, %l], %tag[%index], %num_elements :
+  memref<2048xf32, 0>, memref<256xf32, 1>, memref<1xi32, 2>
+...
+affine.dma_wait %tag[%index], %num_elements : memref<1xi32, 2>
+```
+
+#### 'affine.min' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `affine.min` affine-map dim-and-symbol-use-list
+```
+
+The `affine.min` operation applies an
+[affine mapping](#affine-maps) to a list of SSA values, and returns the
+minimum value of all result expressions. The number of dimension and symbol
+arguments to affine.min must be equal to the respective number of dimensional
+and symbolic inputs to the affine mapping; the `affine.min` operation always
+returns one value. The input operands and result must all have 'index' type.
+
+Example:
+
+```mlir
+%0 = affine.min (d0)[s0] -> (1000, d0 + 512, s0) (%arg0)[%arg1]
+```
+
+#### `affine.terminator` operation
+
+Syntax:
+
+```
+operation ::= `"affine.terminator"() : () -> ()`
+```
+
+Affine terminator is a special terminator operation for blocks inside affine
+loops ([`affine.for`](#affinefor-operation)) and branches
+([`affine.if`](#affineif-operation)). It unconditionally transmits the control
+flow to the successor of the operation enclosing the region.
+
+*Rationale*: bodies of affine operations are [blocks](../LangRef.md#blocks) that
+must have terminators. Loops and branches represent structured control flow and
+should not accept arbitrary branches as terminators.
+
+This operation does _not_ have a custom syntax. However, affine control
+operations omit the terminator in their custom syntax for brevity.
diff --git a/mlir/docs/Dialects/GPU.md b/mlir/docs/Dialects/GPU.md
new file mode 100644
index 00000000000..7dcd8f6053c
--- /dev/null
+++ b/mlir/docs/Dialects/GPU.md
@@ -0,0 +1,132 @@
+# GPU Dialect
+
+Note: this dialect is more likely to change than others in the near future; use
+with caution.
+
+This dialect provides middle-level abstractions for launching GPU kernels
+following a programming model similar to that of CUDA or OpenCL. It provides
+abstractions for kernel invocations (and may eventually provide those for device
+management) that are not present at the lower level (e.g., as LLVM IR intrinsics
+for GPUs). Its goal is to abstract away device- and driver-specific
+manipulations to launch a GPU kernel and provide a simple path towards GPU
+execution from MLIR. It may be targeted, for example, by DSLs using MLIR. The
+dialect uses `gpu` as its canonical prefix.
+
+## Memory attribution
+
+Memory buffers are defined at the function level, either in "gpu.launch" or in
+"gpu.func" ops. This encoding makes it clear where the memory belongs and makes
+the lifetime of the memory visible. The memory is only accessible while the
+kernel is launched or the function is invoked. This is stricter than actual
+GPU implementations, but declaring static memory at the function level is
+simply a convenience. It is also always possible to pass pointers to the
+workgroup memory into other functions, provided they expect the correct memory
+space.
+
+The buffers are considered live throughout the execution of the GPU function
+body. The absence of memory attribution syntax means that the function does not
+require special buffers. Rationale: although the underlying models declare
+memory buffers at the module level, we chose to do it at the function level to
+provide some structure for the lifetime of those buffers. This avoids the
+incentive to use the buffers for communicating between different kernels, or
+between launches of the same kernel, which should be done through function
+arguments instead. We also chose not to use an `alloca`-style approach, which
+would require more complex lifetime analysis, following the principles of MLIR
+that promote structure and representing analysis results in the IR.
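+
+A sketch of what memory attribution may look like on a "gpu.func" op (the
+attribution syntax and the memory-space numbers below are assumptions and may
+differ from the current implementation):
+
+```mlir
+gpu.func @kernel(%arg0: memref<?xf32>)
+    // Workgroup-level buffer, shared by all work items of the workgroup.
+    workgroup(%shared: memref<32xf32, 3>)
+    // Private buffer, one per work item.
+    private(%scratch: memref<1xf32, 5>) {
+  // %shared and %scratch are live only for the duration of this invocation.
+  gpu.return
+}
+```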
+
+## Operations
+
+### `gpu.block_dim`
+
+Returns the number of threads in the thread block (aka the block size) along the
+x, y, or z `dimension`.
+
+Example:
+
+```mlir
+ %bDimX = "gpu.block_dim"() {dimension = "x"} : () -> (index)
+```
+
+### `gpu.block_id`
+
+Returns the block id, i.e. the index of the current block within the grid along
+the x, y, or z `dimension`.
+
+Example:
+
+```mlir
+ %bIdY = "gpu.block_id"() {dimension = "y"} : () -> (index)
+```
+
+### `gpu.grid_dim`
+
+Returns the number of thread blocks in the grid along the x, y, or z
+`dimension`.
+
+Example:
+
+```mlir
+ %gDimZ = "gpu.grid_dim"() {dimension = "z"} : () -> (index)
+```
+
+### `gpu.thread_id`
+
+Returns the thread id, i.e. the index of the current thread within the block
+along the x, y, or z `dimension`.
+
+Example:
+
+```mlir
+ %tIdX = "gpu.thread_id"() {dimension = "x"} : () -> (index)
+```
+
+### `gpu.yield`
+
+`gpu.yield` is a special terminator operation for blocks inside regions in gpu
+ops. It returns values to the immediately enclosing gpu op.
+
+Example:
+
+```mlir
+gpu.yield %f0, %f1 : f32, f32
+```
+
+### `gpu.all_reduce`
+
+The "all_reduce" op reduces the value of every work item across a local
+workgroup. The result is equal for all work items of a workgroup.
+
+For example, both
+
+```mlir
+%1 = "gpu.all_reduce"(%0) ({}) { op = "add" } : (f32) -> (f32)
+%2 = "gpu.all_reduce"(%0) ({
+^bb(%lhs : f32, %rhs : f32):
+ %sum = addf %lhs, %rhs : f32
+ "gpu.yield"(%sum) : (f32) -> ()
+}) : (f32) -> (f32)
+```
+
+compute the sum of each work item's %0 value. The first version specifies the
+accumulation as an operation, whereas the second version specifies it as a
+code region. The accumulation operation must be either `add` or `mul`.
+
+Either none or all work items of a workgroup need to execute this op
+in convergence.
+
+### `gpu.barrier`
+
+The "barrier" op synchronizes all work items of a workgroup. It is used
+to coordinate communication between the work items of the workgroup.
+
+```mlir
+gpu.barrier
+```
+
+waits until all work items in the workgroup have reached this point and all
+memory accesses made by these work items prior to the op are visible to all work
+items in the workgroup. Data hazards between work items accessing the same
+memory can be avoided by synchronizing work items in-between these accesses.
+
+Either none or all work items of a workgroup need to execute this op
+in convergence.
diff --git a/mlir/docs/Dialects/LLVM.md b/mlir/docs/Dialects/LLVM.md
new file mode 100644
index 00000000000..00d0fa02fec
--- /dev/null
+++ b/mlir/docs/Dialects/LLVM.md
@@ -0,0 +1,429 @@
+# LLVM IR Dialect
+
+This dialect wraps the LLVM IR types and instructions into MLIR types and
+operations. It provides several additional operations that are necessary to
+cover for the differences in the IR structure (e.g., MLIR does not have `phi`
+operations and LLVM IR does not have a `constant` operation).
+
+In this document, we use "LLVM IR" to designate the
+[intermediate representation of LLVM](https://llvm.org/docs/LangRef.html) and
+"LLVM IR _dialect_" to refer to the MLIR dialect reflecting LLVM instructions
+and types.
+
+[TOC]
+
+## Context and Module Association
+
+The LLVM IR dialect object _contains_ an LLVM Context and an LLVM Module that it
+uses to define, print, parse and manage LLVM IR types. These objects can be
+obtained from the dialect object using `.getLLVMContext()` and
+`.getLLVMModule()`. All LLVM IR objects that interact with the LLVM IR dialect
+must exist in the dialect's context.
+
+## Types
+
+The LLVM IR dialect defines a single MLIR type, `LLVM::LLVMType`, that can wrap
+any existing LLVM IR type. Its syntax is as follows:
+
+```
+type ::= `!llvm<"` llvm-canonical-type `">`
+llvm-canonical-type ::= <canonical textual representation defined by LLVM>
+```
+
+For example, one can use primitive types `!llvm.i32`, pointer types
+`!llvm<"i8*">`, vector types `!llvm<"<4 x float>">` or structure types
+`!llvm<"{i32, float}">`. The parsing and printing of the canonical form is
+delegated to the LLVM assembly parser and printer.
+
+LLVM IR dialect types contain an `llvm::Type*` object that can be obtained by
+calling `.getUnderlyingType()` and used in LLVM API calls directly. These
+objects are allocated within the LLVM context associated with the LLVM IR
+dialect and may be linked to the properties of the associated LLVM module.
+
+An LLVM IR dialect type can be constructed from any `llvm::Type*` that is
+associated with the LLVM context of the dialect. In this document, we use the
+term "wrapped LLVM IR type" to refer to the LLVM IR dialect type containing a
+specific LLVM IR type.
+
+## Operations
+
+All operations in the LLVM IR dialect have a custom form in MLIR. The mnemonic
+of an operation is that used in LLVM IR prefixed with "`llvm.`".
+
+### LLVM functions
+
+MLIR functions are defined by an operation that is not built into the IR itself.
+The LLVM IR dialect provides an `llvm.func` operation to define functions
+compatible with LLVM IR. These functions have wrapped LLVM IR function type but
+use MLIR syntax to express it. They are required to have exactly one result
+type. LLVM function operation is intended to capture additional properties of
+LLVM functions, such as linkage and calling convention, that may be modeled
+differently by the built-in MLIR function.
+
+```mlir
+// The type of @bar is !llvm<"i64 (i64)">
+llvm.func @bar(%arg0: !llvm.i64) -> !llvm.i64 {
+ llvm.return %arg0 : !llvm.i64
+}
+
+// The type of @foo is !llvm<"void (i64)">
+// !llvm.void type is omitted
+llvm.func @foo(%arg0: !llvm.i64) {
+ llvm.return
+}
+
+// A function with `internal` linkage.
+llvm.func internal @internal_func() {
+ llvm.return
+}
+
+```
+
+### LLVM IR operations
+
+The following operations are currently supported. The semantics of these
+operations corresponds to the semantics of the similarly-named LLVM IR
+instructions.
+
+#### Integer binary arithmetic operations
+
+Take two arguments of wrapped LLVM IR integer type, produce one value of the
+same type.
+
+- `add`
+- `sub`
+- `mul`
+- `udiv`
+- `sdiv`
+- `urem`
+- `srem`
+
+Examples:
+
+```mlir
+// Integer addition.
+%0 = llvm.add %a, %b : !llvm.i32
+
+// Unsigned integer division.
+%1 = llvm.udiv %a, %b : !llvm.i32
+```
+
+#### Floating point binary arithmetic operations
+
+Take two arguments of wrapped LLVM IR floating point type, produce one value of
+the same type.
+
+- `fadd`
+- `fsub`
+- `fmul`
+- `fdiv`
+- `frem`
+
+Examples:
+
+```mlir
+// Float addition.
+%0 = llvm.fadd %a, %b : !llvm.float
+
+// Float division.
+%1 = llvm.fdiv %a, %b : !llvm.float
+```
+
+#### Memory-related operations
+
+- `<r> = alloca <size> x <type>`
+- `<r> = getelementptr <address>[<index> (, <index>)+]`
+- `<r> = load <address>`
+- `store <value>, <address>`
+
+In these operations, `<size>` must be a value of wrapped LLVM IR integer type,
+`<address>` must be a value of wrapped LLVM IR pointer type, and `<value>` must
+be a value of wrapped LLVM IR type that corresponds to the pointer type of
+`<address>`.
+
+The `index` operands are integer values whose semantics is identical to the
+non-pointer arguments of LLVM IR's `getelementptr`.
+
+Examples:
+
+```mlir
+// Allocate an array of 4 floats on stack
+%c4 = llvm.mlir.constant(4) : !llvm.i64
+%0 = llvm.alloca %c4 x !llvm.float : (!llvm.i64) -> !llvm<"float*">
+
+// Get the second element of the array (note 0-based indexing).
+%c1 = llvm.mlir.constant(1) : !llvm.i64
+%1 = llvm.getelementptr %0[%c1] : (!llvm<"float*">, !llvm.i64)
+ -> !llvm<"float*">
+
+// Store a constant into this element.
+%cf = llvm.mlir.constant(42.0 : f32) : !llvm.float
+llvm.store %cf, %1 : !llvm<"float*">
+
+// Load the value from this element.
+%3 = llvm.load %1 : !llvm<"float*">
+```
+
+#### Operations on values of aggregate type.
+
+- `<value> = extractvalue <struct>[<index> (, <index>)+]`
+- `<struct> = insertvalue <value>, <struct>[<index> (, <index>)+]`
+
+In these operations, `<struct>` must be a value of wrapped LLVM IR structure
+type and `<value>` must be a value that corresponds to one of the (nested)
+structure element types.
+
+Note the use of integer literals to designate subscripts, which is made
+possible by the fact that `extractvalue` and `insertvalue` must have constant
+subscripts. Internally, the subscripts are modeled as array attributes.
+
+Examples:
+
+```mlir
+// Get the third element of the second element of a structure.
+%0 = llvm.extractvalue %s[1, 2] : !llvm<"{i32, {i1, i8, i16}}">
+
+// Insert a value as the third element of the second element of a structure.
+// Note that this returns a new structure-typed value.
+%1 = llvm.insertvalue %0, %s[1, 2] : !llvm<"{i32, {i1, i8, i16}}">
+```
+
+#### Terminator operations.
+
+Branch operations:
+
+- `br [<successor>(<operands>)]`
+- `cond_br <condition> [<true-successor>(<true-operands>),`
+ `<false-successor>(<false-operands>)]`
+
+In order to comply with MLIR design, branch operations in the LLVM IR dialect
+pass arguments to basic blocks. Successors must be valid block MLIR identifiers
+and operand lists for each of them must have the same types as the arguments of
+the respective blocks. `<condition>` must be a value of wrapped LLVM IR `i1` type.
+
+Since LLVM IR uses the name of the predecessor basic block to identify the
+sources of a PHI node, it is invalid for two entries of the PHI node to indicate
+different values coming from the same block. Therefore, `cond_br` in the LLVM IR
+dialect disallows its successors to be the same block _if_ this block has
+arguments.
+
+Examples:
+
+```mlir
+// Branch without arguments.
+^bb0:
+ llvm.br ^bb0
+
+// Branch and pass arguments.
+^bb1(%arg: !llvm.i32):
+ llvm.br ^bb1(%arg : !llvm.i32)
+
+// Conditionally branch and pass arguments to one of the blocks.
+llvm.cond_br %cond, ^bb0, ^bb1(%arg : !llvm.i32)
+
+// It's okay to use the same block without arguments, but probably useless.
+llvm.cond_br %cond, ^bb0, ^bb0
+
+// ERROR: Passing different arguments to the same block in a conditional branch.
+llvm.cond_br %cond, ^bb1(%0 : !llvm.i32), ^bb1(%1 : !llvm.i32)
+
+```
+
+Call operations:
+
+- `<r> = call(<operands>)`
+- `call(<operands>)`
+
+In LLVM IR, functions may return either 0 or 1 value. LLVM IR dialect implements
+this behavior by providing a variadic `call` operation for 0- and 1-result
+functions. Even though MLIR supports multi-result functions, LLVM IR dialect
+disallows them.
+
+The `call` instruction supports both direct and indirect calls. Direct calls
+start with a function name (`@`-prefixed) and indirect calls start with an SSA
+value (`%`-prefixed). The direct callee, if present, is stored as a function
+attribute `callee`. The trailing type of the instruction is always the MLIR
+function type, which may be different from the indirect callee that has the
+wrapped LLVM IR function type.
+
+Examples:
+
+```mlir
+// Direct call without arguments and with one result.
+%0 = llvm.call @foo() : () -> (!llvm.float)
+
+// Direct call with arguments and without a result.
+llvm.call @bar(%0) : (!llvm.float) -> ()
+
+// Indirect call with an argument and without a result.
+llvm.call %1(%0) : (!llvm.float) -> ()
+```
+
+#### Miscellaneous operations.
+
+Integer comparisons: `icmp "predicate" <lhs>, <rhs>`. The following predicate
+values are supported:
+
+- `eq` - equality comparison;
+- `ne` - inequality comparison;
+- `slt` - signed less-than comparison
+- `sle` - signed less-than-or-equal comparison
+- `sgt` - signed greater-than comparison
+- `sge` - signed greater-than-or-equal comparison
+- `ult` - unsigned less-than comparison
+- `ule` - unsigned less-than-or-equal comparison
+- `ugt` - unsigned greater-than comparison
+- `uge` - unsigned greater-than-or-equal comparison
+
+Bitwise reinterpretation: `bitcast <value>`.
+
+Selection: `select <condition>, <lhs>, <rhs>`.
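+
+As an illustration, the following sketch shows these operations in assembly
+form (the value names `%a`, `%b`, `%f`, and `%cond` are hypothetical, and the
+exact trailing-type syntax follows each op's definition):
+
+```mlir
+// Signed less-than comparison of two wrapped 32-bit integers; produces
+// a wrapped i1.
+%0 = llvm.icmp "slt" %a, %b : !llvm.i32
+
+// Reinterpret the bits of a wrapped float as a wrapped 32-bit integer.
+%1 = llvm.bitcast %f : !llvm.float to !llvm.i32
+
+// Choose between %a and %b based on the i1 condition %cond.
+%2 = llvm.select %cond, %a, %b : !llvm.i1, !llvm.i32
+```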
+
+### Auxiliary MLIR operations
+
+These operations do not have LLVM IR counterparts but are necessary to map LLVM
+IR into MLIR. They should be prefixed with `llvm.mlir`.
+
+#### `llvm.mlir.addressof`
+
+Creates an SSA value containing a pointer to a global variable or constant
+defined by `llvm.mlir.global`. The global value can be defined after it is
+first referenced. If the global value is a constant, storing into it is not
+allowed.
+
+Examples:
+
+```mlir
+func @foo() {
+ // Get the address of a global.
+ %0 = llvm.mlir.addressof @const : !llvm<"i32*">
+
+ // Use it as a regular pointer.
+ %1 = llvm.load %0 : !llvm<"i32*">
+}
+
+// Define the global.
+llvm.mlir.global @const(42 : i32) : !llvm.i32
+```
+
+#### `llvm.mlir.constant`
+
+Unlike LLVM IR, MLIR does not have first-class constant values. Therefore, all
+constants must be created as SSA values before being used in other operations.
+`llvm.mlir.constant` creates such values for scalars and vectors. It has a
+mandatory `value` attribute, which may be an integer or floating-point
+attribute, or a dense or sparse attribute containing integers or floats. The
+type of the attribute is one of the corresponding MLIR standard types. It may
+be omitted for `i64` and `f64`, which are implied. The operation produces a new
+SSA value
+of the specified LLVM IR dialect type. The type of that value _must_ correspond
+to the attribute type converted to LLVM IR.
+
+Examples:
+
+```mlir
+// Integer constant, internal i32 is mandatory
+%0 = llvm.mlir.constant(42 : i32) : !llvm.i32
+
+// It's okay to omit i64.
+%1 = llvm.mlir.constant(42) : !llvm.i64
+
+// Floating point constant.
+%2 = llvm.mlir.constant(42.0 : f32) : !llvm.float
+
+// Splat dense vector constant.
+%3 = llvm.mlir.constant(dense<1.0> : vector<4xf32>) : !llvm<"<4 x float>">
+```
+
+#### `llvm.mlir.global`
+
+Since MLIR allows for arbitrary operations to be present at the top level,
+global variables are defined using the `llvm.mlir.global` operation. Both global
+constants and variables can be defined, and the value may also be initialized in
+both cases.
+
+There are two forms of initialization syntax. Simple constants that can be
+represented as MLIR attributes can be given in-line:
+
+```mlir
+llvm.mlir.global @variable(32.0 : f32) : !llvm.float
+```
+
+This initialization and type syntax is similar to that of `llvm.mlir.constant`
+and may use two types: one for the MLIR attribute and another for the LLVM
+value. These types must be compatible.
+
+More complex constants that cannot be represented as MLIR attributes can be
+given in an initializer region:
+
+```mlir
+// This global is initialized with the equivalent of:
+// i32* getelementptr (i32* @g2, i32 2)
+llvm.mlir.global constant @int_gep() : !llvm<"i32*"> {
+ %0 = llvm.mlir.addressof @g2 : !llvm<"i32*">
+ %1 = llvm.mlir.constant(2 : i32) : !llvm.i32
+ %2 = llvm.getelementptr %0[%1] : (!llvm<"i32*">, !llvm.i32) -> !llvm<"i32*">
+ // The initializer region must end with `llvm.return`.
+ llvm.return %2 : !llvm<"i32*">
+}
+```
+
+Only one of the initializer attribute or initializer region may be provided.
+
+`llvm.mlir.global` must appear at the top level of the enclosing module. It uses an
+@-identifier for its value, which will be uniqued by the module with respect to
+other @-identifiers in it.
+
+Examples:
+
+```mlir
+// Global values use @-identifiers.
+llvm.mlir.global constant @cst(42 : i32) : !llvm.i32
+
+// Non-constant values must also be initialized.
+llvm.mlir.global @variable(32.0 : f32) : !llvm.float
+
+// Strings are expected to be of wrapped LLVM i8 array type and do not
+// automatically include the trailing zero.
+llvm.mlir.global @string("abc") : !llvm<"[3 x i8]">
+
+// For string globals, the trailing type may be omitted.
+llvm.mlir.global constant @no_trailing_type("foo bar")
+
+// A complex initializer is constructed with an initializer region.
+llvm.mlir.global constant @int_gep() : !llvm<"i32*"> {
+ %0 = llvm.mlir.addressof @g2 : !llvm<"i32*">
+ %1 = llvm.mlir.constant(2 : i32) : !llvm.i32
+ %2 = llvm.getelementptr %0[%1] : (!llvm<"i32*">, !llvm.i32) -> !llvm<"i32*">
+ llvm.return %2 : !llvm<"i32*">
+}
+```
+
+#### `llvm.mlir.null`
+
+Unlike LLVM IR, MLIR does not have first-class null pointers. They must be
+explicitly created as SSA values using `llvm.mlir.null`. This operation has no
+operands or attributes, and returns a null value of a wrapped LLVM IR pointer
+type.
+
+Examples:
+
+```mlir
+// Null pointer to i8 value.
+%0 = llvm.mlir.null : !llvm<"i8*">
+
+// Null pointer to a function with signature void() value.
+%1 = llvm.mlir.null : !llvm<"void()*">
+```
+
+#### `llvm.mlir.undef`
+
+Unlike LLVM IR, MLIR does not have first-class undefined values. Such values
+must be created as SSA values using `llvm.mlir.undef`. This operation has no
+operands or attributes. It creates an undefined value of the specified LLVM IR
+dialect type wrapping an LLVM IR structure type.
+
+Example:
+
+```mlir
+// Create a structure with a 32-bit integer followed by a float.
+%0 = llvm.mlir.undef : !llvm<"{i32, float}">
+```
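+
+A common follow-up, sketched here with hypothetical value names `%a` and `%b`,
+is to populate the undefined aggregate field by field with `llvm.insertvalue`:
+
+```mlir
+// Build a {i32, float} struct value element by element.
+%0 = llvm.mlir.undef : !llvm<"{i32, float}">
+%1 = llvm.insertvalue %a, %0[0] : !llvm<"{i32, float}">
+%2 = llvm.insertvalue %b, %1[1] : !llvm<"{i32, float}">
+```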
diff --git a/mlir/docs/Dialects/Linalg.md b/mlir/docs/Dialects/Linalg.md
new file mode 100644
index 00000000000..1ed5a2c2a26
--- /dev/null
+++ b/mlir/docs/Dialects/Linalg.md
@@ -0,0 +1,8 @@
+# Linalg Dialect
+
+To generate the documentation:
+
+```sh
+mlir-tblgen --gen-op-doc -I /path/to/mlir/include \
+/path/to/mlir/include/mlir/Dialect/Linalg/IR/LinalgDoc.td
+```
diff --git a/mlir/docs/Dialects/SPIR-V.md b/mlir/docs/Dialects/SPIR-V.md
new file mode 100644
index 00000000000..1d72e5449d3
--- /dev/null
+++ b/mlir/docs/Dialects/SPIR-V.md
@@ -0,0 +1,1039 @@
+# SPIR-V Dialect
+
+This document describes the design of the SPIR-V dialect in MLIR. It lists
+various design choices we made for modeling different SPIR-V mechanisms, and
+their rationale.
+
+This document also explains in a high-level manner how different components are
+organized and implemented in the code and gives steps to follow for extending
+them.
+
+This document assumes familiarity with SPIR-V. [SPIR-V][Spirv] is the Khronos
+Group’s binary intermediate language for representing graphics shaders and
+compute kernels. It is adopted by multiple Khronos Group’s APIs, including
+Vulkan and OpenCL. It is fully defined in a
+[human-readable specification][SpirvSpec]; the syntax of various SPIR-V
+instructions are encoded in a [machine-readable grammar][SpirvGrammar].
+
+## Design Guidelines
+
+SPIR-V is a binary intermediate language that serves a dual purpose: on one
+side, it is an intermediate language to represent graphics shaders and compute
+kernels for high-level languages to target; on the other side, it defines a
+stable binary format for hardware driver consumption. As a result, SPIR-V has
+design principles that pertain not only to an intermediate language but also to
+a binary format. For example, regularity is one of the design goals of SPIR-V.
+All concepts are represented as SPIR-V instructions, including declaring
+extensions and capabilities, defining types and constants, defining functions,
+attaching additional properties to computation results, etc. This approach
+favors binary encoding and decoding for driver consumption but not necessarily
+compiler transformations.
+
+### Dialect design principles
+
+The main objective of the SPIR-V dialect is to be a proper intermediate
+representation (IR) to facilitate compiler transformations. While we still aim
+to support serializing to and deserializing from the binary format for various
+good reasons, the binary format and its concerns play less of a role in the
+design of the SPIR-V dialect: when there is a trade-off to be made between
+favoring IR and supporting the binary format, we lean towards the former.
+
+On the IR aspect, the SPIR-V dialect aims to model SPIR-V at the same semantic
+level. It is not intended to be a higher level or lower level abstraction than
+the SPIR-V specification. Those abstractions are easily outside the domain of
+SPIR-V and should be modeled with other proper dialects so they can be shared
+among various compilation paths. Because of the dual purpose of SPIR-V, having
+the SPIR-V dialect stay at the same semantic level as the SPIR-V specification
+also means we can still have straightforward serialization and deserialization
+for the majority of functionalities.
+
+To summarize, the SPIR-V dialect follows the following design principles:
+
+* Stay at the same semantic level as the SPIR-V specification by having
+ one-to-one mapping for most concepts and entities.
+* Adopt SPIR-V specification's syntax if possible, but deviate intentionally
+ to utilize MLIR mechanisms if it results in better representation and
+ benefits transformation.
+* Be straightforward to serialize into and deserialize from the SPIR-V binary
+ format.
+
+SPIR-V is designed to be consumed by hardware drivers, so its representation is
+quite clear, yet verbose for some cases. Allowing representational deviation
+gives us the flexibility to reduce the verbosity by using MLIR mechanisms.
+
+### Dialect scopes
+
+SPIR-V supports multiple execution environments, specified by client APIs.
+Notable adopters include Vulkan and OpenCL. It follows that the SPIR-V dialect
+should support multiple execution environments if it is to be a proper proxy of SPIR-V
+in MLIR systems. The SPIR-V dialect is designed with these considerations: it
+has proper support for versions, extensions, and capabilities and is as
+extensible as SPIR-V specification.
+
+## Conventions
+
+The SPIR-V dialect adopts the following conventions for IR:
+
+* The prefix for all SPIR-V types and operations is `spv.`.
+* All instructions in an extended instruction set are further qualified with
+ the extended instruction set's prefix. For example, all operations in the
+ GLSL extended instruction set have the prefix `spv.GLSL.`.
+* Ops that directly mirror instructions in the specification have `CamelCase`
+ names that are the same as the instruction opnames (without the `Op`
+ prefix). For example, `spv.FMul` is a direct mirror of `OpFMul` in the
+ specification. Such an op will be serialized into and deserialized from one
+ SPIR-V instruction.
+* Ops with `snake_case` names are those that have different representation
+ from corresponding instructions (or concepts) in the specification. These
+ ops are mostly for defining the SPIR-V structure. For example, `spv.module`
+ and `spv.constant`. They may correspond to one or more instructions during
+ (de)serialization.
+* Ops with `_snake_case` names are those that have no corresponding
+ instructions (or concepts) in the binary format. They are introduced to
+ satisfy MLIR structural requirements. For example, `spv._module_end` and
+ `spv._merge`. They map to no instructions during (de)serialization.
+
+(TODO: consider merging the last two cases and adopting `spv.mlir.` prefix for
+them.)
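+
+The following sketch (with hypothetical value names) shows one op from each
+naming category side by side:
+
+```mlir
+%0 = spv.FMul %a, %b : f32     // CamelCase: directly mirrors OpFMul
+%1 = spv.constant 1.0 : f32    // snake_case: different representation from the spec
+spv._merge                     // _snake_case: no counterpart in the binary format
+```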
+
+## Module
+
+A SPIR-V module is defined via the `spv.module` op, which has one region that
+contains one block. Module-level instructions, including function definitions,
+are all placed inside the block. Functions are defined using the builtin `func`
+op.
+
+We choose to model a SPIR-V module with a dedicated `spv.module` op based on the
+following considerations:
+
+* It maps cleanly to a SPIR-V module in the specification.
+* We can enforce SPIR-V specific verification that is suitable to be performed
+ at the module-level.
+* We can attach additional module-level attributes.
+* We can control custom assembly form.
+
+The `spv.module` op's region cannot capture SSA values from outside, neither
+implicitly nor explicitly. The `spv.module` op's region is closed as to what ops
+can appear inside: apart from the builtin `func` op, it can only contain ops
+from the SPIR-V dialect. The `spv.module` op's verifier enforces this rule. This
+meaningfully guarantees that a `spv.module` can be the entry point and boundary
+for serialization.
+
+### Module-level operations
+
+SPIR-V binary format defines the following [sections][SpirvLogicalLayout]:
+
+1. Capabilities required by the module.
+1. Extensions required by the module.
+1. Extended instruction sets required by the module.
+1. Addressing and memory model specification.
+1. Entry point specifications.
+1. Execution mode declarations.
+1. Debug instructions.
+1. Annotation/decoration instructions.
+1. Type, constant, global variables.
+1. Function declarations.
+1. Function definitions.
+
+Basically, a SPIR-V binary module contains multiple module-level instructions
+followed by a list of functions. Those module-level instructions are essential
+and they can generate result ids referenced by functions, notably, declaring
+resource variables to interact with the execution environment.
+
+Compared to the binary format, we adjust how these module-level SPIR-V
+instructions are represented in the SPIR-V dialect:
+
+#### Use MLIR attributes for metadata
+
+* Requirements for capabilities, extensions, extended instruction sets,
+ addressing model, and memory model are conveyed using `spv.module`
+ attributes. This is considered better because this information is for the
+ execution environment. It's easier to probe it if it is on the module op
+ itself.
+* Annotations/decoration instructions are "folded" into the instructions they
+ decorate and represented as attributes on those ops. This eliminates
+ potential forward references of SSA values, improves IR readability, and
+ makes querying the annotations more direct. More discussions can be found in
+ the [`Decorations`](#decorations) section.
+
+#### Model types with MLIR custom types
+
+* Types are represented using MLIR standard types and SPIR-V dialect specific
+ types. There are no type declaration ops in the SPIR-V dialect. More
+ discussions can be found in the [Types](#types) section later.
+
+#### Unify and localize constants
+
+* Various normal constant instructions are represented by the same
+ `spv.constant` op. Those instructions are just for constants of different
+ types; using one op to represent them reduces IR verbosity and makes
+ transformations less tedious.
+* Normal constants are not placed in `spv.module`'s region; they are localized
+ into functions. This is to make functions in the SPIR-V dialect isolated,
+ with explicit capturing. Constants are cheap to duplicate given that
+ attributes are uniqued in `MLIRContext`.
+
+#### Adopt symbol-based global variables and specialization constant
+
+* Global variables are defined with the `spv.globalVariable` op. They do not
+ generate SSA values. Instead they have symbols and should be referenced via
+ symbols. To use a global variable in a function block, `spv._address_of` is
+ needed to turn the symbol into an SSA value.
+* Specialization constants are defined with the `spv.specConstant` op. Similar
+ to global variables, they do not generate SSA values and have symbols for
+ reference, too. `spv._reference_of` is needed to turn the symbol into an SSA
+ value for use in a function block.
+
+The above choices enable functions in the SPIR-V dialect to be isolated, with
+explicit capturing.
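+
+For example, a global variable and its use inside a function may look like the
+following sketch (the symbol name `@gv` is hypothetical):
+
+```mlir
+spv.globalVariable @gv : !spv.ptr<f32, Private>
+
+func @foo() -> () {
+  // Turn the symbol into an SSA value before using it as a pointer.
+  %0 = spv._address_of @gv : !spv.ptr<f32, Private>
+  %1 = spv.Load "Private" %0 : f32
+  spv.Return
+}
+```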
+
+#### Disallow implicit capturing in functions
+
+* In SPIR-V specification, functions support implicit capturing: they can
+ reference SSA values defined in modules. In the SPIR-V dialect functions are
+ defined with `func` op, which disallows implicit capturing. This is more
+ friendly to compiler analyses and transformations. More discussions can be
+ found in the [Function](#function) section later.
+
+### Model entry points and execution models as normal ops
+
+* A SPIR-V module can have multiple entry points, and these entry points refer
+ to the function and interface variables. It’s not suitable to model them as
+ `spv.module` op attributes. Instead, we can model them as normal ops that use
+ symbol references.
+* Similarly for execution modes, which are coupled with entry points, we can
+ model them as normal ops in `spv.module`'s region.
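+
+For example, an entry point and its execution mode may be declared as in the
+following sketch (the symbol name `@kernel` and the numbers are hypothetical):
+
+```mlir
+spv.EntryPoint "GLCompute" @kernel
+spv.ExecutionMode @kernel "LocalSize", 32, 1, 1
+```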
+
+## Decorations
+
+Annotations/decorations provide additional information on result ids. In SPIR-V,
+all instructions can generate result ids, including value-computing and
+type-defining ones.
+
+For decorations on value result ids, we can just have a corresponding attribute
+attached to the operation generating the SSA value. For example, for the
+following SPIR-V:
+
+```spirv
+OpDecorate %v1 RelaxedPrecision
+OpDecorate %v2 NoContraction
+...
+%v1 = OpFMul %float %0 %0
+%v2 = OpFMul %float %1 %1
+```
+
+We can represent them in the SPIR-V dialect as:
+
+```mlir
+%v1 = "spv.FMul"(%0, %0) {RelaxedPrecision = unit} : (f32, f32) -> (f32)
+%v2 = "spv.FMul"(%1, %1) {NoContraction = unit} : (f32, f32) -> (f32)
+```
+
+This approach benefits transformations. Essentially those decorations are just
+additional properties of the result ids (and thus their defining instructions).
+In the SPIR-V binary format, they are just represented as instructions.
+Literally following the SPIR-V binary format means we need to walk through
+def-use chains to find the decoration instructions and query information from
+them.
+
+For decorations on type result ids, notice that practically, only result ids
+generated from composite types (e.g., `OpTypeArray`, `OpTypeStruct`) need to be
+decorated for memory layout purposes (e.g., `ArrayStride`, `Offset`, etc.);
+scalar/vector types are required to be uniqued in SPIR-V. Therefore, we can just
+encode them directly in the dialect-specific type.
+
+## Types
+
+Theoretically we can define all SPIR-V types using the MLIR extensible type
+system, but other than representational purity, it does not buy us much.
+Instead, we would need to maintain the code and invest in pretty printing them.
+So we prefer to use builtin/standard types if possible.
+
+The SPIR-V dialect reuses standard integer, float, and vector types:
+
+Specification | Dialect
+:----------------------------------: | :-------------------------------:
+`OpTypeBool` | `i1`
+`OpTypeInt <bitwidth>` | `i<bitwidth>`
+`OpTypeFloat <bitwidth>` | `f<bitwidth>`
+`OpTypeVector <scalar-type> <count>` | `vector<<count> x <scalar-type>>`
+
+Similarly, `mlir::NoneType` can be used for SPIR-V `OpTypeVoid`; builtin
+function types can be used for SPIR-V `OpTypeFunction` types.
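+
+As a quick illustration of this reuse (a sketch; the concrete bitwidths and
+counts are arbitrary):
+
+```mlir
+i1               // OpTypeBool
+i32              // OpTypeInt 32
+f32              // OpTypeFloat 32
+vector<4 x f32>  // OpTypeVector of 4 x OpTypeFloat 32
+```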
+
+In addition, the SPIR-V dialect defines the following dialect-specific types:
+
+```
+spirv-type ::= array-type
+ | image-type
+ | pointer-type
+ | runtime-array-type
+ | struct-type
+```
+
+### Array type
+
+This corresponds to SPIR-V [array type][ArrayType]. Its syntax is
+
+```
+element-type ::= integer-type
+ | floating-point-type
+ | vector-type
+ | spirv-type
+
+array-type ::= `!spv.array<` integer-literal `x` element-type `>`
+```
+
+For example,
+
+```mlir
+!spv.array<4 x i32>
+!spv.array<16 x vector<4 x f32>>
+```
+
+### Image type
+
+This corresponds to SPIR-V [image type][ImageType]. Its syntax is
+
+```
+dim ::= `1D` | `2D` | `3D` | `Cube` | <and other SPIR-V Dim specifiers...>
+
+depth-info ::= `NoDepth` | `IsDepth` | `DepthUnknown`
+
+arrayed-info ::= `NonArrayed` | `Arrayed`
+
+sampling-info ::= `SingleSampled` | `MultiSampled`
+
+sampler-use-info ::= `SamplerUnknown` | `NeedSampler` | `NoSampler`
+
+format ::= `Unknown` | `Rgba32f` | <and other SPIR-V Image Formats...>
+
+image-type ::= `!spv.image<` element-type `,` dim `,` depth-info `,`
+ arrayed-info `,` sampling-info `,`
+ sampler-use-info `,` format `>`
+```
+
+For example,
+
+```mlir
+!spv.image<f32, 1D, NoDepth, NonArrayed, SingleSampled, SamplerUnknown, Unknown>
+!spv.image<f32, Cube, IsDepth, Arrayed, MultiSampled, NeedSampler, Rgba32f>
+```
+
+### Pointer type
+
+This corresponds to SPIR-V [pointer type][PointerType]. Its syntax is
+
+```
+storage-class ::= `UniformConstant`
+ | `Uniform`
+ | `Workgroup`
+ | <and other storage classes...>
+
+pointer-type ::= `!spv.ptr<` element-type `,` storage-class `>`
+```
+
+For example,
+
+```mlir
+!spv.ptr<i32, Function>
+!spv.ptr<vector<4 x f32>, Uniform>
+```
+
+### Runtime array type
+
+This corresponds to SPIR-V [runtime array type][RuntimeArrayType]. Its syntax is
+
+```
+runtime-array-type ::= `!spv.rtarray<` element-type `>`
+```
+
+For example,
+
+```mlir
+!spv.rtarray<i32>
+!spv.rtarray<vector<4 x f32>>
+```
+
+### Struct type
+
+This corresponds to SPIR-V [struct type][StructType]. Its syntax is
+
+```
+struct-member-decoration ::= integer-literal? spirv-decoration*
+struct-type ::= `!spv.struct<` spirv-type (`[` struct-member-decoration `]`)?
+ (`, ` spirv-type (`[` struct-member-decoration `]`)?)* `>`
+```
+
+For example,
+
+```mlir
+!spv.struct<f32>
+!spv.struct<f32 [0]>
+!spv.struct<f32, !spv.image<f32, 1D, NoDepth, NonArrayed, SingleSampled, SamplerUnknown, Unknown>>
+!spv.struct<f32 [0], i32 [4]>
+```
+
+## Function
+
+In SPIR-V, a function construct consists of multiple instructions involving
+`OpFunction`, `OpFunctionParameter`, `OpLabel`, `OpFunctionEnd`.
+
+```spirv
+// int f(int v) { return v; }
+%1 = OpTypeInt 32 0
+%2 = OpTypeFunction %1 %1
+%3 = OpFunction %1 %2
+%4 = OpFunctionParameter %1
+%5 = OpLabel
+ OpReturnValue %4
+ OpFunctionEnd
+```
+
+This construct is very clear yet quite verbose. It is intended for driver
+consumption. There is little benefit to literally replicate this construct in
+the SPIR-V dialect. Instead, we reuse the builtin `func` op to express functions
+more concisely:
+
+```mlir
+func @f(%arg: i32) -> i32 {
+ "spv.ReturnValue"(%arg) : (i32) -> ()
+}
+```
+
+A SPIR-V function can have at most one result. It cannot contain nested
+functions or non-SPIR-V operations. `spv.module` verifies these requirements.
+
+A major difference between the SPIR-V dialect and the SPIR-V specification for
+functions is that the former are isolated and require explicit capturing, while
+the latter allow implicit capturing. In SPIR-V specification, functions can
+refer to SSA values (generated by constants, global variables, etc.) defined in
+modules. The SPIR-V dialect adjusted how constants and global variables are
+modeled to enable isolated functions. Isolated functions are more friendly to
+compiler analyses and transformations. This also enables the SPIR-V dialect to
+better utilize core infrastructure: many functionalities in the core
+infrastructure requires ops to be isolated, e.g., the
+[greedy pattern rewriter][GreedyPatternRewriter] can only act on ops isolated
+from above.
+
+(TODO: create a dedicated `spv.fn` op for SPIR-V functions.)
+
+## Operations
+
+In SPIR-V, an instruction is a generalized concept; a SPIR-V module is just a
+sequence of instructions. Declaring types, expressing computations, annotating
+result ids, expressing control flows and others are all in the form of
+instructions.
+
+We only discuss instructions expressing computations here, which can be
+represented via SPIR-V dialect ops. Module-level instructions for declarations
+and definitions are represented differently in the SPIR-V dialect as explained
+earlier in the [Module-level operations](#module-level-operations) section.
+
+An instruction computes zero or one result from zero or more operands. The
+result is a new result id. An operand can be a result id generated by a previous
+instruction, an immediate value, or a case of an enum type. We can model result
+id operands and results with MLIR SSA values; for immediate values and enum
+cases, we can model them with MLIR attributes.
+
+For example,
+
+```spirv
+%i32 = OpTypeInt 32 0
+%ptr = OpTypePointer Function %i32
+%c42 = OpConstant %i32 42
+...
+%3 = OpVariable %ptr Function %c42
+%4 = OpIAdd %i32 %c42 %c42
+```
+
+can be represented in the dialect as
+
+```mlir
+%0 = "spv.constant"() { value = 42 : i32 } : () -> i32
+%1 = "spv.Variable"(%0) { storage_class = "Function" } : (i32) -> !spv.ptr<i32, Function>
+%2 = "spv.IAdd"(%0, %0) : (i32, i32) -> i32
+```
+
+Operation documentation is written in each op's Op Definition Spec using
+TableGen. A markdown version of the doc can be generated using `mlir-tblgen
+--gen-op-doc`.
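+
+For example, mirroring the command shown for other dialects in these docs (the
+paths below are placeholders for an actual checkout):
+
+```sh
+mlir-tblgen --gen-op-doc -I /path/to/mlir/include \
+/path/to/mlir/include/mlir/Dialect/SPIRV/SPIRVOps.td
+```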
+
+### Ops from extended instruction sets
+
+An extended instruction set is a mechanism to import SPIR-V
+instructions within another namespace. [`GLSL.std.450`][GlslStd450] is an
+extended instruction set that provides common mathematical routines that should
+be supported. Instead of modeling `OpExtInstImport` as a separate op and using a
+single op to model `OpExtInst` for all extended instructions, we model each
+SPIR-V instruction in an extended instruction set as a separate op with the
+proper name prefix. For example, for
+
+```spirv
+%glsl = OpExtInstImport "GLSL.std.450"
+
+%f32 = OpTypeFloat 32
+%cst = OpConstant %f32 ...
+
+%1 = OpExtInst %f32 %glsl 28 %cst
+%2 = OpExtInst %f32 %glsl 31 %cst
+```
+
+we can have
+
+```mlir
+%1 = "spv.GLSL.Log"(%cst) : (f32) -> (f32)
+%2 = "spv.GLSL.Sqrt"(%cst) : (f32) -> (f32)
+```
+
+## Control Flow
+
+SPIR-V binary format uses merge instructions (`OpSelectionMerge` and
+`OpLoopMerge`) to declare structured control flow. They explicitly declare a
+header block before the control flow diverges and a merge block where control
+flow subsequently converges. These blocks delimit constructs that must nest, and
+can only be entered and exited in structured ways.
+
+In the SPIR-V dialect, we use regions to mark the boundary of a structured
+control flow construct. With this approach, it's easier to discover all blocks
+belonging to a structured control flow construct. It is also more idiomatic to
+the MLIR system.
+
+We introduce `spv.selection` and `spv.loop` ops for structured selections and
+loops, respectively. The merge targets are the next ops following them. Inside
+their regions, a special terminator, `spv._merge` is introduced for branching to
+the merge target.
+
+### Selection
+
+`spv.selection` defines a selection construct. It contains one region. The
+region should contain at least two blocks: one selection header block and one
+merge block.
+
+* The selection header block should be the first block. It should contain the
+ `spv.BranchConditional` or `spv.Switch` op.
+* The merge block should be the last block. The merge block should only
+ contain a `spv._merge` op. Any block can branch to the merge block for early
+ exit.
+
+```
+ +--------------+
+ | header block | (may have multiple outgoing branches)
+ +--------------+
+ / | \
+ ...
+
+
+ +---------+ +---------+ +---------+
+ | case #0 | | case #1 | | case #2 | ... (may have branches between each other)
+ +---------+ +---------+ +---------+
+
+
+ ...
+ \ | /
+ v
+ +-------------+
+ | merge block | (may have multiple incoming branches)
+ +-------------+
+```
+
+For example, for the given function
+
+```c++
+void selection(bool cond) {
+ int x = 0;
+ if (cond) {
+ x = 1;
+ } else {
+ x = 2;
+ }
+ // ...
+}
+```
+
+It will be represented as
+
+```mlir
+func @selection(%cond: i1) -> () {
+ %zero = spv.constant 0: i32
+ %one = spv.constant 1: i32
+ %two = spv.constant 2: i32
+ %x = spv.Variable init(%zero) : !spv.ptr<i32, Function>
+
+ spv.selection {
+ spv.BranchConditional %cond, ^then, ^else
+
+ ^then:
+ spv.Store "Function" %x, %one : i32
+ spv.Branch ^merge
+
+ ^else:
+ spv.Store "Function" %x, %two : i32
+ spv.Branch ^merge
+
+ ^merge:
+ spv._merge
+ }
+
+ // ...
+}
+```
+
+### Loop
+
+`spv.loop` defines a loop construct. It contains one region. The region should
+contain at least four blocks: one entry block, one loop header block, one loop
+continue block, and one merge block.
+
+* The entry block should be the first block and it should jump to the loop
+ header block, which is the second block.
+* The merge block should be the last block. The merge block should only
+ contain a `spv._merge` op. Any block except the entry block can branch to
+ the merge block for early exit.
+* The continue block should be the second to last block and it should have a
+ branch to the loop header block.
+* The loop continue block should be the only block, except the entry block,
+ branching to the loop header block.
+
+```
+ +-------------+
+ | entry block | (one outgoing branch)
+ +-------------+
+ |
+ v
+ +-------------+ (two incoming branches)
+ | loop header | <-----+ (may have one or two outgoing branches)
+ +-------------+ |
+ |
+ ... |
+ \ | / |
+ v |
+ +---------------+ | (may have multiple incoming branches)
+ | loop continue | -----+ (may have one or two outgoing branches)
+ +---------------+
+
+ ...
+ \ | /
+ v
+ +-------------+ (may have multiple incoming branches)
+ | merge block |
+ +-------------+
+```
+
+The reason to have another entry block instead of directly using the loop header
+block as the entry block is to satisfy the requirement on regions: the entry
+block of a region may not have predecessors. We have a merge block so that
+branch ops can
+reference it as successors. The loop continue block here corresponds to
+"continue construct" using SPIR-V spec's term; it does not mean the "continue
+block" as defined in the SPIR-V spec, which is "a block containing a branch to
+an OpLoopMerge instruction’s Continue Target."
+
+For example, for the given function
+
+```c++
+void loop(int count) {
+ for (int i = 0; i < count; ++i) {
+ // ...
+ }
+}
+```
+
+It will be represented as
+
+```mlir
+func @loop(%count : i32) -> () {
+ %zero = spv.constant 0: i32
+ %one = spv.constant 1: i32
+ %var = spv.Variable init(%zero) : !spv.ptr<i32, Function>
+
+ spv.loop {
+ spv.Branch ^header
+
+ ^header:
+ %val0 = spv.Load "Function" %var : i32
+ %cmp = spv.SLessThan %val0, %count : i32
+ spv.BranchConditional %cmp, ^body, ^merge
+
+ ^body:
+ // ...
+ spv.Branch ^continue
+
+ ^continue:
+ %val1 = spv.Load "Function" %var : i32
+ %add = spv.IAdd %val1, %one : i32
+ spv.Store "Function" %var, %add : i32
+ spv.Branch ^header
+
+ ^merge:
+ spv._merge
+ }
+ return
+}
+```
+
+### Block argument for Phi
+
+There are no direct Phi operations in the SPIR-V dialect; SPIR-V `OpPhi`
+instructions are modelled as block arguments in the SPIR-V dialect. (See the
+[Rationale][Rationale] doc for "Block Arguments vs Phi nodes".) Each block
+argument corresponds to one `OpPhi` instruction in the SPIR-V binary format. For
+example, for the following SPIR-V function `foo`:
+
+```spirv
+ %foo = OpFunction %void None ...
+%entry = OpLabel
+ %var = OpVariable %_ptr_Function_int Function
+ OpSelectionMerge %merge None
+ OpBranchConditional %true %true %false
+ %true = OpLabel
+ OpBranch %phi
+%false = OpLabel
+ OpBranch %phi
+ %phi = OpLabel
+ %val = OpPhi %int %int_1 %false %int_0 %true
+ OpStore %var %val
+ OpReturn
+%merge = OpLabel
+ OpReturn
+ OpFunctionEnd
+```
+
+It will be represented as:
+
+```mlir
+func @foo() -> () {
+ %var = spv.Variable : !spv.ptr<i32, Function>
+
+ spv.selection {
+ %true = spv.constant true
+ spv.BranchConditional %true, ^true, ^false
+
+ ^true:
+ %zero = spv.constant 0 : i32
+ spv.Branch ^phi(%zero: i32)
+
+ ^false:
+ %one = spv.constant 1 : i32
+ spv.Branch ^phi(%one: i32)
+
+ ^phi(%arg: i32):
+ spv.Store "Function" %var, %arg : i32
+ spv.Return
+
+ ^merge:
+ spv._merge
+ }
+ spv.Return
+}
+```
+
+## Shader interface (ABI)
+
+SPIR-V itself just expresses computation happening on a GPU device. SPIR-V
+programs themselves are not enough for running workloads on a GPU; a companion
+host application is needed to manage the resources referenced by SPIR-V programs
+and dispatch the workload. For the Vulkan execution environment, the host
+application will be written using Vulkan API. Unlike CUDA, the SPIR-V program
+and the Vulkan application are typically authored with different front-end
+languages, which isolates these two worlds. Yet they still need to match
+_interfaces_: the variables declared in a SPIR-V program for referencing
+resources need to match with the actual resources managed by the application
+regarding their parameters.
+
+Still using Vulkan as the example execution environment: there are two primary
+resource types in Vulkan, buffers and images. They back various uses that may
+differ in the classes of operations (load, store, atomic) to be performed.
+These uses are differentiated via descriptor types. (For example, uniform
+storage buffer descriptors can only support load operations, while storage
+buffer descriptors can support load, store, and atomic operations.) Vulkan uses
+a binding model for resources: resources are associated with descriptors, and
+descriptors are further grouped into sets. Each descriptor thus has a set
+number and a binding number. Descriptors in the application correspond to
+variables in the SPIR-V program. Their parameters must match, including but
+not limited to set and binding numbers.
+
+Apart from buffers and images, there is other data set up by Vulkan and
+referenced inside the SPIR-V program, for example, push constants. Such data
+also has parameters that must match between the two worlds.
+
+The interface requirements are information external to the SPIR-V compilation
+path in MLIR. Moreover, each Vulkan application may want to handle resources
+differently. To avoid duplication and to share common utilities, a SPIR-V
+shader interface specification needs to be defined to provide the external
+requirements to, and guide, the SPIR-V compilation path.
+
+### Shader interface attributes
+
+The SPIR-V dialect defines [a few attributes][MlirSpirvAbi] for specifying these
+interfaces:
+
+* `spv.entry_point_abi` is a struct attribute that should be attached to the
+ entry function. It contains:
+ * `local_size` for specifying the local work group size for the dispatch.
+* `spv.interface_var_abi` is a struct attribute that should be attached to
+ each operand and result of the entry function. It contains:
+ * `descriptor_set` for specifying the descriptor set number for the
+ corresponding resource variable.
+ * `binding` for specifying the binding number for the corresponding
+ resource variable.
+ * `storage_class` for specifying the storage class for the corresponding
+ resource variable.
+
+The SPIR-V dialect provides a [`LowerABIAttributesPass`][MlirSpirvPasses] that
+consumes these attributes and creates a SPIR-V module complying with the
+interface.
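+
+As a sketch of how these attributes might look in the IR (the exact attribute
+syntax and enum values below are illustrative, not normative), an entry
+function could carry:
+
+```mlir
+func @kernel(
+  // Bind this operand to the resource at descriptor set 0, binding 0, in the
+  // StorageBuffer storage class (the storage class is given as an enum value).
+  %arg0: !spv.ptr<!spv.struct<!spv.array<16 x f32>>, StorageBuffer>
+    {spv.interface_var_abi = {binding = 0 : i32,
+                              descriptor_set = 0 : i32,
+                              storage_class = 12 : i32}})
+  // Request a 32x1x1 local work group size for the dispatch.
+  attributes {spv.entry_point_abi = {local_size = dense<[32, 1, 1]> : vector<3xi32>}} {
+  ...
+}
+```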
+
+## Serialization and deserialization
+
+Although the main objective of the SPIR-V dialect is to act as a proper IR for
+compiler transformations, being able to serialize to and deserialize from the
+binary format is still very valuable for many reasons. Serialization enables
+the artifacts of SPIR-V compilation to be consumed by an execution environment;
+deserialization allows us to import SPIR-V binary modules and run
+transformations on them. So serialization and deserialization have been
+supported from the very beginning of the development of the SPIR-V dialect.
+
+The serialization library provides two entry points, `mlir::spirv::serialize()`
+and `mlir::spirv::deserialize()`, for converting an MLIR SPIR-V module to
+binary format and back. The [Code organization](#code-organization) section
+explains more about this.
+
+Given that the focus is transformations, which inevitably mean changes to the
+binary module, serialization is not designed to be a general tool for
+investigating the SPIR-V binary module and does not guarantee roundtrip
+equivalence (at least for now). For the latter, please use the
+assembler/disassembler in the [SPIRV-Tools][SpirvTools] project.
+
+A few transformations are performed in the process of serialization because of
+the representational differences between SPIR-V dialect and binary format:
+
+* Attributes on `spv.module` are emitted as their corresponding SPIR-V
+ instructions.
+* Types are serialized into `OpType*` instructions in the SPIR-V binary module
+ section for types, constants, and global variables.
+* `spv.constant`s are unified and placed in the SPIR-V binary module section
+ for types, constants, and global variables.
+* Attributes on ops, if not part of the op's binary encoding, are emitted as
+ `OpDecorate*` instructions in the SPIR-V binary module section for
+ decorations.
+* `spv.selection`s and `spv.loop`s are emitted as basic blocks with `Op*Merge`
+ instructions in the header block as required by the binary format.
+* Block arguments are materialized as `OpPhi` instructions at the beginning of
+ the corresponding blocks.
+
+Similarly, a few transformations are performed during deserialization:
+
+* Instructions for execution environment requirements (extensions,
+ capabilities, extended instruction sets, etc.) will be placed as attributes
+ on `spv.module`.
+* `OpType*` instructions will be converted into proper `mlir::Type`s.
+* `OpConstant*` instructions are materialized as `spv.constant` at each use
+ site.
+* `OpVariable` instructions will be converted to `spv.globalVariable` ops if
+  they are module-level; otherwise they will be converted into `spv.Variable`
+  ops.
+* Every use of a module-level `OpVariable` instruction will materialize a
+  `spv._address_of` op to turn the symbol of the corresponding
+  `spv.globalVariable` into an SSA value.
+* Every use of an `OpSpecConstant` instruction will materialize a
+  `spv._reference_of` op to turn the symbol of the corresponding
+  `spv.specConstant` into an SSA value.
+* `OpPhi` instructions are converted to block arguments.
+* Structured control flow is placed inside `spv.selection` and `spv.loop`.
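+
+For example (a sketch; the exact assembly may differ), a deserialized module
+with a global variable and a spec constant references them through
+materialized ops at each use site:
+
+```mlir
+spv.module "Logical" "GLSL450" {
+  spv.globalVariable @gv : !spv.ptr<f32, Input>
+  spv.specConstant @sc = 42 : i32
+  func @use() -> () {
+    // Each use of @gv materializes an spv._address_of op.
+    %0 = spv._address_of @gv : !spv.ptr<f32, Input>
+    // Each use of @sc materializes an spv._reference_of op.
+    %1 = spv._reference_of @sc : i32
+    spv.Return
+  }
+}
+```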
+
+## Conversions
+
+(TODO: expand this section)
+
+## Code organization
+
+We aim to provide multiple libraries with clear dependencies for SPIR-V related
+functionalities in MLIR so developers can just choose the needed components
+without pulling in the whole world.
+
+### The dialect
+
+The code for the SPIR-V dialect resides in a few places:
+
+* Public headers are placed in [include/mlir/Dialect/SPIRV][MlirSpirvHeaders].
+* Libraries are placed in [lib/Dialect/SPIRV][MlirSpirvLibs].
+* IR tests are placed in [test/Dialect/SPIRV][MlirSpirvTests].
+* Unit tests are placed in [unittests/Dialect/SPIRV][MlirSpirvUnittests].
+
+The whole SPIR-V dialect is exposed via multiple headers for better
+organization:
+
+* [SPIRVDialect.h][MlirSpirvDialect] defines the SPIR-V dialect.
+* [SPIRVTypes.h][MlirSpirvTypes] defines all SPIR-V specific types.
+* [SPIRVOps.h][MlirSpirvOps] defines all SPIR-V operations.
+* [Serialization.h][MlirSpirvSerialization] defines the entry points for
+ serialization and deserialization.
+
+The dialect itself, including all types and ops, is in the `MLIRSPIRV` library.
+Serialization functionalities are in the `MLIRSPIRVSerialization` library.
+
+### Op definitions
+
+We use the [Op Definition Spec][ODS] to define all SPIR-V ops. They are written
+in TableGen syntax and placed in various `*Ops.td` files in the header
+directory. Those `*Ops.td` files are organized according to the instruction
+categories used in the SPIR-V specification; for example, an op belonging to
+the "Atomic Instructions" section is put in the `SPIRVAtomicOps.td` file.
+
+`SPIRVOps.td` serves as the master op definition file that includes all files
+for specific categories.
+
+`SPIRVBase.td` defines common classes and utilities used by various op
+definitions. It contains the TableGen SPIR-V dialect definition, SPIR-V
+versions, known extensions, various SPIR-V enums, TableGen SPIR-V types, and
+base op classes, etc.
+
+Many of the contents in `SPIRVBase.td`, e.g., the opcodes and various enums, and
+all `*Ops.td` files can be automatically updated via a Python script, which
+queries the SPIR-V specification and grammar. This greatly reduces the burden of
+supporting new ops and keeping up to date with the SPIR-V spec. More details on
+this automated development can be found in the
+[Automated development flow](#automated-development-flow) section.
+
+### Dialect conversions
+
+The code for conversions from other dialects to the SPIR-V dialect also resides
+in a few places:
+
+* From the GPU dialect: headers are at
+  [include/mlir/Conversion/GPUToSPIRV][MlirGpuToSpirvHeaders]; libraries are
+  at [lib/Conversion/GPUToSPIRV][MlirGpuToSpirvLibs].
+* From the standard dialect: headers are at
+  [include/mlir/Conversion/StandardToSPIRV][MlirStdToSpirvHeaders]; libraries
+  are at [lib/Conversion/StandardToSPIRV][MlirStdToSpirvLibs].
+
+These dialect to dialect conversions have their dedicated libraries,
+`MLIRGPUToSPIRVTransforms` and `MLIRStandardToSPIRVTransforms`, respectively.
+
+There are also common utilities when targeting SPIR-V from any dialect:
+
+* [include/mlir/Dialect/SPIRV/Passes.h][MlirSpirvPasses] contains SPIR-V
+ specific analyses and transformations.
+* [include/mlir/Dialect/SPIRV/SPIRVLowering.h][MlirSpirvLowering] contains
+ type converters and other utility functions.
+
+These common utilities are implemented in the `MLIRSPIRVTransforms` library.
+
+## Contribution
+
+All kinds of contributions are highly appreciated! :) We have GitHub issues for
+tracking the [dialect][GitHubDialectTracking] and
+[lowering][GitHubLoweringTracking] development. You can find todo tasks there.
+The [Code organization](#code-organization) section gives an overview of how
+SPIR-V related functionalities are implemented in MLIR. This section gives more
+concrete steps on how to contribute.
+
+### Automated development flow
+
+One of the goals of SPIR-V dialect development is to leverage both the SPIR-V
+[human-readable specification][SpirvSpec] and
+[machine-readable grammar][SpirvGrammar] to auto-generate as much content as
+possible. Specifically, the following tasks can be automated (partially or
+fully):
+
+* Adding support for a new operation.
+* Adding support for a new SPIR-V enum.
+* Serialization and deserialization of a new operation.
+
+We achieve this using the Python script
+[`gen_spirv_dialect.py`][GenSpirvUtilsPy]. It fetches the human-readable
+specification and machine-readable grammar directly from the Internet and
+updates various SPIR-V `*.td` files in place. The script gives us an automated
+flow for adding support for new ops or enums.
+
+Afterwards, we have SPIR-V specific `mlir-tblgen` backends for reading the Op
+Definition Spec and generating various components, including (de)serialization
+logic for ops. Together with the standard `mlir-tblgen` backends, we
+auto-generate all op classes, enum classes, etc.
+
+In the following subsections, we list the detailed steps to follow for common
+tasks.
+
+### Add a new op
+
+To add a new op, invoke the `define_inst.sh` script wrapper in `utils/spirv`.
+`define_inst.sh` requires a few parameters:
+
+```sh
+./define_inst.sh <filename> <base-class-name> <opname>
+```
+
+For example, to define the op for `OpIAdd`, invoke
+
+```sh
+./define_inst.sh SPIRVArithmeticOps.td ArithmeticBinaryOp OpIAdd
+```
+
+where `SPIRVArithmeticOps.td` is the filename for hosting the new op and
+`ArithmeticBinaryOp` is the direct base class the newly defined op will derive
+from.
+
+Similarly, to define the op for `OpAtomicAnd`,
+
+```sh
+./define_inst.sh SPIRVAtomicOps.td AtomicUpdateWithValueOp OpAtomicAnd
+```
+
+Note that the generated SPIR-V op definition is just a best-effort template; it
+is still expected to be updated to have more accurate traits, arguments, and
+results.
+
+The generated op will automatically gain the logic for (de)serialization.
+However, tests still need to accompany the change to make sure there are no
+surprises. Serialization tests live in `test/Dialect/SPIRV/Serialization`.
+
+### Add a new enum
+
+To add a new enum, invoke the `define_enum.sh` script wrapper in `utils/spirv`.
+`define_enum.sh` expects the following parameters:
+
+```sh
+./define_enum.sh <enum-class-name>
+```
+
+For example, to add the definition for the SPIR-V storage class into
+`SPIRVBase.td`:
+
+```sh
+./define_enum.sh StorageClass
+```
+
+### Add a new conversion
+
+(TODO: add details for this section)
+
+[Spirv]: https://www.khronos.org/registry/spir-v/
+[SpirvSpec]: https://www.khronos.org/registry/spir-v/specs/unified1/SPIRV.html
+[SpirvLogicalLayout]: https://www.khronos.org/registry/spir-v/specs/unified1/SPIRV.html#_a_id_logicallayout_a_logical_layout_of_a_module
+[SpirvGrammar]: https://raw.githubusercontent.com/KhronosGroup/SPIRV-Headers/master/include/spirv/unified1/spirv.core.grammar.json
+[GlslStd450]: https://www.khronos.org/registry/spir-v/specs/1.0/GLSL.std.450.html
+[ArrayType]: https://www.khronos.org/registry/spir-v/specs/unified1/SPIRV.html#OpTypeArray
+[ImageType]: https://www.khronos.org/registry/spir-v/specs/unified1/SPIRV.html#OpTypeImage
+[PointerType]: https://www.khronos.org/registry/spir-v/specs/unified1/SPIRV.html#OpTypePointer
+[RuntimeArrayType]: https://www.khronos.org/registry/spir-v/specs/unified1/SPIRV.html#OpTypeRuntimeArray
+[StructType]: https://www.khronos.org/registry/spir-v/specs/unified1/SPIRV.html#Structure
+[SpirvTools]: https://github.com/KhronosGroup/SPIRV-Tools
+[Rationale]: https://github.com/tensorflow/mlir/blob/master/g3doc/Rationale.md#block-arguments-vs-phi-nodes
+[ODS]: https://github.com/tensorflow/mlir/blob/master/g3doc/OpDefinitions.md
+[GreedyPatternRewriter]: https://github.com/tensorflow/mlir/blob/master/lib/Transforms/Utils/GreedyPatternRewriteDriver.cpp
+[MlirSpirvHeaders]: https://github.com/tensorflow/mlir/tree/master/include/mlir/Dialect/SPIRV
+[MlirSpirvLibs]: https://github.com/tensorflow/mlir/tree/master/lib/Dialect/SPIRV
+[MlirSpirvTests]: https://github.com/tensorflow/mlir/tree/master/test/Dialect/SPIRV
+[MlirSpirvUnittests]: https://github.com/tensorflow/mlir/tree/master/unittests/Dialect/SPIRV
+[MlirGpuToSpirvHeaders]: https://github.com/tensorflow/mlir/tree/master/include/mlir/Conversion/GPUToSPIRV
+[MlirGpuToSpirvLibs]: https://github.com/tensorflow/mlir/tree/master/lib/Conversion/GPUToSPIRV
+[MlirStdToSpirvHeaders]: https://github.com/tensorflow/mlir/tree/master/include/mlir/Conversion/StandardToSPIRV
+[MlirStdToSpirvLibs]: https://github.com/tensorflow/mlir/tree/master/lib/Conversion/StandardToSPIRV
+[MlirSpirvDialect]: https://github.com/tensorflow/mlir/blob/master/include/mlir/Dialect/SPIRV/SPIRVDialect.h
+[MlirSpirvTypes]: https://github.com/tensorflow/mlir/blob/master/include/mlir/Dialect/SPIRV/SPIRVTypes.h
+[MlirSpirvOps]: https://github.com/tensorflow/mlir/blob/master/include/mlir/Dialect/SPIRV/SPIRVOps.h
+[MlirSpirvSerialization]: https://github.com/tensorflow/mlir/blob/master/include/mlir/Dialect/SPIRV/Serialization.h
+[MlirSpirvBase]: https://github.com/tensorflow/mlir/blob/master/include/mlir/Dialect/SPIRV/SPIRVBase.td
+[MlirSpirvPasses]: https://github.com/tensorflow/mlir/blob/master/include/mlir/Dialect/SPIRV/Passes.h
+[MlirSpirvLowering]: https://github.com/tensorflow/mlir/blob/master/include/mlir/Dialect/SPIRV/SPIRVLowering.h
+[MlirSpirvAbi]: https://github.com/tensorflow/mlir/blob/master/include/mlir/Dialect/SPIRV/SPIRVLowering.td
+[GitHubDialectTracking]: https://github.com/tensorflow/mlir/issues/302
+[GitHubLoweringTracking]: https://github.com/tensorflow/mlir/issues/303
+[GenSpirvUtilsPy]: https://github.com/tensorflow/mlir/blob/master/utils/spirv/gen_spirv_dialect.py
diff --git a/mlir/docs/Dialects/Standard.md b/mlir/docs/Dialects/Standard.md
new file mode 100644
index 00000000000..f84a2c94e92
--- /dev/null
+++ b/mlir/docs/Dialects/Standard.md
@@ -0,0 +1,1146 @@
+# Standard Dialect
+
+This document describes the operations within the Standard dialect.
+
+Note: This dialect is a collection of operations for several different concepts,
+and should be split into multiple more-focused dialects accordingly.
+
+[TOC]
+
+TODO: shape, which returns a 1D tensor, and can take an unknown rank tensor as
+input.
+
+TODO: rank, which returns an index.
+
+## Terminator operations
+
+Terminator operations are required at the end of each block. They may contain a
+list of successors, i.e. other blocks to which the control flow will proceed.
+
+### 'br' terminator operation
+
+Syntax:
+
+```
+operation ::= `br` successor
+successor ::= bb-id branch-use-list?
+branch-use-list ::= `(` ssa-use-list `:` type-list-no-parens `)`
+```
+
+The `br` terminator operation represents an unconditional jump to a target
+block. The count and types of operands to the branch must align with the
+arguments in the target block.
+
+The MLIR branch operation is not allowed to target the entry block for a region.
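+
+For example (SSA names are illustrative), a branch forwarding two values to the
+target block's arguments:
+
+```mlir
+^bb2:
+  br ^bb3(%a, %b : i32, f32)
+
+^bb3(%x : i32, %y : f32):
+  ...
+```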
+
+### 'cond_br' terminator operation
+
+Syntax:
+
+```
+operation ::= `cond_br` ssa-use `,` successor `,` successor
+```
+
+The `cond_br` terminator operation represents a conditional branch on a boolean
+(1-bit integer) value. If the bit is set, the first destination is taken;
+otherwise, the second destination is taken. The count and types of operands
+must align with the arguments in the corresponding target blocks.
+
+The MLIR conditional branch operation is not allowed to target the entry block
+for a region. The two destinations of the conditional branch operation are
+allowed to be the same.
+
+The following example illustrates a function with a conditional branch operation
+that targets the same block:
+
+```mlir
+func @select(i32, i32, i1) -> i32 {
+^bb0(%a : i32, %b : i32, %flag : i1):
+ // Both targets are the same, operands differ
+ cond_br %flag, ^bb1(%a : i32), ^bb1(%b : i32)
+
+^bb1(%x : i32):
+ return %x : i32
+}
+```
+
+### 'return' terminator operation
+
+Syntax:
+
+```
+operation ::= `return` (ssa-use-list `:` type-list-no-parens)?
+```
+
+The `return` terminator operation represents the completion of a function, and
+produces the result values. The count and types of the operands must match the
+result types of the enclosing function. It is legal for multiple blocks in a
+single function to return.
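+
+For example, a minimal function (sketch) whose single block terminates with
+`return`:
+
+```mlir
+func @increment(%x : i32) -> i32 {
+  %one = constant 1 : i32
+  %sum = addi %x, %one : i32
+  return %sum : i32
+}
+```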
+
+## Core Operations
+
+### 'call' operation
+
+Syntax:
+
+```
+operation ::=
+ (ssa-id `=`)? `call` symbol-ref-id `(` ssa-use-list? `)` `:` function-type
+```
+
+The `call` operation represents a direct call to a function. The operands and
+result types of the call must match the specified function type. The callee is
+encoded as a function attribute named "callee".
+
+Example:
+
+```mlir
+// Calling the function my_add.
+%31 = call @my_add(%0, %1) : (tensor<16xf32>, tensor<16xf32>) -> tensor<16xf32>
+```
+
+### 'call_indirect' operation
+
+Syntax:
+
+```
+operation ::= `call_indirect` ssa-use `(` ssa-use-list? `)` `:` function-type
+```
+
+The `call_indirect` operation represents an indirect call to a value of function
+type. Functions are first class types in MLIR, and may be passed as arguments
+and merged together with block arguments. The operands and result types of the
+call must match the specified function type.
+
+Function values can be created with the
+[`constant` operation](#constant-operation).
+
+Example:
+
+```mlir
+%31 = call_indirect %15(%0, %1)
+ : (tensor<16xf32>, tensor<16xf32>) -> tensor<16xf32>
+```
+
+### 'dim' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `dim` ssa-id `,` integer-literal `:` type
+```
+
+The `dim` operation takes a memref or tensor operand and a dimension index, and
+returns an [`index`](../LangRef.md#index-type) that is the size of that
+dimension.
+
+The `dim` operation is represented with a single integer attribute named
+`index`, and the type specifies the type of the memref or tensor operand.
+
+Examples:
+
+```mlir
+// Always returns 4, can be constant folded:
+%x = dim %A, 0 : tensor<4 x ? x f32>
+
+// Returns the dynamic dimension of %A.
+%y = dim %A, 1 : tensor<4 x ? x f32>
+
+// Equivalent generic form:
+%x = "std.dim"(%A) {index = 0 : i64} : (tensor<4 x ? x f32>) -> index
+%y = "std.dim"(%A) {index = 1 : i64} : (tensor<4 x ? x f32>) -> index
+```
+
+## Memory Operations
+
+### 'alloc' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `alloc` dim-and-symbol-use-list `:` memref-type
+```
+
+Allocates a new memref of specified type. Values required for dynamic dimension
+sizes are passed as arguments in parentheses (in the same order in which they
+appear in the shape signature of the memref) while the symbols required by the
+layout map are passed in the square brackets in lexicographical order. If no
+layout maps are specified in the memref, then an identity mapping is used.
+
+The buffer referenced by a memref type is created by the `alloc` operation, and
+destroyed by the `dealloc` operation.
+
+Example:
+
+```mlir
+// Allocating memref for a fully static shape.
+%A = alloc() : memref<1024x64xf32, #layout_map0, memspace0>
+
+// %M, %N, %x, %y are SSA values of integer type. M and N are bound to the
+// two unknown dimensions of the type and x/y are bound to symbols in
+// #layout_map1.
+%B = alloc(%M, %N)[%x, %y] : memref<?x?xf32, #layout_map1, memspace1>
+```
+
+### 'alloc_static' operation
+
+Syntax:
+
+```
+operation ::=
+ ssa-id `=` `alloc_static` `(` integer-literal `)` : memref-type
+```
+
+Allocates a new memref of specified type with a fixed base pointer location in
+memory. 'alloc_static' does not support types that have dynamic shapes or that
+require dynamic symbols in their layout function (use the
+[`alloc` operation](#alloc-operation) in those cases).
+
+Example:
+
+```mlir
+%A = alloc_static(0x1232a00) : memref<1024 x 64 x f32, #layout_map0, memspace0>
+```
+
+The `alloc_static` operation is used to represent code after buffer allocation
+has been performed.
+
+### 'dealloc' operation
+
+Syntax:
+
+```
+operation ::= `dealloc` ssa-use `:` memref-type
+```
+
+Delineates the end of the lifetime of the memory corresponding to a memref
+allocation. It is paired with an [`alloc`](#alloc-operation) or
+[`alloc_static`](#alloc-static-operation) operation.
+
+Example:
+
+```mlir
+dealloc %A : memref<128 x f32, #layout, memspace0>
+```
+
+### 'dma_start' operation
+
+Syntax:
+
+```
+operation ::= `dma_start` ssa-use`[`ssa-use-list`]` `,`
+ ssa-use`[`ssa-use-list`]` `,` ssa-use `,`
+ ssa-use`[`ssa-use-list`]` (`,` ssa-use `,` ssa-use)?
+ `:` memref-type `,` memref-type `,` memref-type
+```
+
+Starts a non-blocking DMA operation that transfers data from a source memref to
+a destination memref. The operands include the source and destination memrefs,
+each followed by its indices; the size of the data transfer in terms of the
+number of elements (of the elemental type of the memref); a tag memref with its
+indices; and optionally two additional arguments corresponding to the stride
+(in terms of number of elements) and the number of elements to transfer per
+stride. The tag location is used by a `dma_wait` operation to check for
+completion. The indices of the source memref, destination memref, and the tag
+memref have the same restrictions as any load/store operation in an affine
+context (whenever DMA operations appear in an affine context). See
+[restrictions on dimensions and symbols](Affine.md#restrictions-on-dimensions-and-symbols)
+in affine contexts. This allows powerful static analysis and transformations
+in the presence of such DMAs, including rescheduling, pipelining / overlap with
+computation, and checking for matching start/end operations. The source and
+destination memrefs need not be of the same dimensionality, but need to have
+the same elemental type.
+
+For example, a `dma_start` operation that transfers 32 vector elements from a
+memref `%src` at location `[%i, %j]` to memref `%dst` at `[%k, %l]` would be
+specified as shown below.
+
+Example:
+
+```mlir
+%size = constant 32 : index
+%tag = alloc() : memref<1 x i32, (d0) -> (d0), 4>
+%idx = constant 0 : index
+dma_start %src[%i, %j], %dst[%k, %l], %size, %tag[%idx] :
+ memref<40 x 8 x vector<16xf32>, (d0, d1) -> (d0, d1), 0>,
+ memref<2 x 4 x vector<16xf32>, (d0, d1) -> (d0, d1), 2>,
+  memref<1 x i32, (d0) -> (d0), 4>
+```
+
+### 'dma_wait' operation
+
+Syntax:
+
+```
+operation ::= `dma_wait` ssa-use`[`ssa-use-list`]` `,` ssa-use `:` memref-type
+```
+
+Blocks until the completion of a DMA operation associated with the tag element
+specified with a tag memref and its indices. The operands include the tag memref
+followed by its indices and the number of elements associated with the DMA being
+waited on. The indices of the tag memref have the same restrictions as
+load/store indices.
+
+Example:
+
+```mlir
+dma_wait %tag[%idx], %size : memref<1 x i32, (d0) -> (d0), 4>
+```
+
+### 'extract_element' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `extract_element` ssa-use `[` ssa-use-list `]` `:` type
+```
+
+The `extract_element` op reads a tensor or vector and returns one element from
+it specified by an index list. The output of `extract_element` is a new value
+with the same type as the elements of the tensor or vector. The arity of
+indices matches the rank of the accessed value (i.e., if a tensor is of rank 3,
+then 3 indices are required for the extract). The indices should all be of
+`index` type.
+
+Examples:
+
+```mlir
+%3 = extract_element %v[%1, %2] : vector<4x4xi32>
+%4 = extract_element %t[%1, %2] : tensor<4x4xi32>
+%5 = extract_element %ut[%1, %2] : tensor<*xi32>
+```
+
+### 'load' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `load` ssa-use `[` ssa-use-list `]` `:` memref-type
+```
+
+The `load` op reads an element from a memref specified by an index list. The
+output of load is a new value with the same type as the elements of the memref.
+The arity of indices is the rank of the memref (i.e., if the memref loaded from
+is of rank 3, then 3 indices are required for the load following the memref
+identifier).
+
+In an `affine.if` or `affine.for` body, the indices of a load are restricted to
+SSA values bound to surrounding loop induction variables,
+[symbols](../LangRef.md#dimensions-and-symbols), results of a
+[`constant` operation](#constant-operation), or the result of an `affine.apply`
+operation that can in turn take as arguments all of the aforementioned SSA
+values or, recursively, the result of such an `affine.apply` operation.
+
+Example:
+
+```mlir
+%1 = affine.apply (d0, d1) -> (3*d0) (%i, %j)
+%2 = affine.apply (d0, d1) -> (d1+1) (%i, %j)
+%12 = load %A[%1, %2] : memref<8x?xi32, #layout, memspace0>
+
+// Example of an indirect load (treated as non-affine)
+%3 = affine.apply (d0) -> (2*d0 + 1)(%12)
+%13 = load %A[%3, %2] : memref<4x?xi32, #layout, memspace0>
+```
+
+**Context:** The `load` and `store` operations are specifically crafted to fully
+resolve a reference to an element of a memref, and (in affine `affine.if` and
+`affine.for` operations) the compiler can follow use-def chains (e.g. through
+[`affine.apply`](Affine.md#affineapply-operation) operations) to precisely
+analyze references at compile-time using polyhedral techniques. This is possible
+because of the
+[restrictions on dimensions and symbols](Affine.md#restrictions-on-dimensions-and-symbols)
+in these contexts.
+
+### 'splat' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `splat` ssa-use `:` ( vector-type | tensor-type )
+```
+
+Broadcast the operand to all elements of the result vector or tensor. The
+operand has to be of either integer or float type. When the result is a tensor,
+it has to be statically shaped.
+
+Example:
+
+```mlir
+ %s = load %A[%i] : memref<128xf32>
+ %v = splat %s : vector<4xf32>
+ %t = splat %s : tensor<8x16xi32>
+```
+
+TODO: This operation is easy to extend to broadcast to dynamically shaped
+tensors in the same way dynamically shaped memrefs are handled.
+```mlir
+// Broadcasts %s to a 2-d dynamically shaped tensor, with %m, %n binding
+// to the sizes of the two dynamic dimensions.
+%m = "foo"() : () -> (index)
+%n = "bar"() : () -> (index)
+%t = splat %s [%m, %n] : tensor<?x?xi32>
+```
+
+### 'store' operation
+
+Syntax:
+
+```
+operation ::= `store` ssa-use `,` ssa-use `[` ssa-use-list `]` `:` memref-type
+```
+
+Store a value to a memref location given by indices. The value stored should
+have the same type as the elemental type of the memref. The number of
+arguments provided within brackets needs to match the rank of the memref.
+
+In an affine context, the indices of a store are restricted to SSA values bound
+to surrounding loop induction variables,
+[symbols](Affine.md#restrictions-on-dimensions-and-symbols), results of a
+[`constant` operation](#constant-operation), or the result of an
+[`affine.apply`](Affine.md#affineapply-operation) operation that can in turn
+take as arguments all of the aforementioned SSA values or, recursively, the
+result of such an `affine.apply` operation.
+
+Example:
+
+```mlir
+store %100, %A[%1, 1023] : memref<4x?xf32, #layout, memspace0>
+```
+
+**Context:** The `load` and `store` operations are specifically crafted to fully
+resolve a reference to an element of a memref, and (in polyhedral `affine.if`
+and `affine.for` operations) the compiler can follow use-def chains (e.g.
+through [`affine.apply`](Affine.md#affineapply-operation) operations) to
+precisely analyze references at compile-time using polyhedral techniques. This
+is possible because of the
+[restrictions on dimensions and symbols](Affine.md#restrictions-on-dimensions-and-symbols)
+in these contexts.
+
+### 'tensor_load' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `tensor_load` ssa-use-and-type
+```
+
+Create a tensor from a memref, making an independent copy of the element data.
+The result value is a tensor whose shape and element type match the memref
+operand.
+
+Example:
+
+```mlir
+// Produces a value of tensor<4x?xf32> type.
+%12 = tensor_load %10 : memref<4x?xf32, #layout, memspace0>
+```
+
+### 'tensor_store' operation
+
+Syntax:
+
+```
+operation ::= `tensor_store` ssa-use `,` ssa-use `:` memref-type
+```
+
+Stores the contents of a tensor into a memref. The first operand is a value of
+tensor type, the second operand is a value of memref type. The shapes and
+element types of these must match, and are specified by the memref type.
+
+Example:
+
+```mlir
+%9 = dim %8, 1 : tensor<4x?xf32>
+%10 = alloc(%9) : memref<4x?xf32, #layout, memspace0>
+tensor_store %8, %10 : memref<4x?xf32, #layout, memspace0>
+```
+
+## Unary Operations
+
+### 'absf' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `absf` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar absolute value.
+%a = absf %b : f64
+
+// SIMD vector element-wise absolute value.
+%f = absf %g : vector<4xf32>
+
+// Tensor element-wise absolute value.
+%x = absf %y : tensor<4x?xf8>
+```
+
+The `absf` operation computes the absolute value. It takes one operand and
+returns one result of the same type. This type may be a float scalar type, a
+vector whose element type is float, or a tensor of floats. It has no standard
+attributes.
+
+### 'ceilf' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `ceilf` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar ceiling value.
+%a = ceilf %b : f64
+
+// SIMD vector element-wise ceiling value.
+%f = ceilf %g : vector<4xf32>
+
+// Tensor element-wise ceiling value.
+%x = ceilf %y : tensor<4x?xf8>
+```
+
+The `ceilf` operation computes the ceiling of a given value. It takes one
+operand and returns one result of the same type. This type may be a float
+scalar type, a vector whose element type is float, or a tensor of floats. It
+has no standard attributes.
+
+### 'cos' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `cos` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar cosine value.
+%a = cos %b : f64
+
+// SIMD vector element-wise cosine value.
+%f = cos %g : vector<4xf32>
+
+// Tensor element-wise cosine value.
+%x = cos %y : tensor<4x?xf8>
+```
+
+The `cos` operation computes the cosine of a given value. It takes one operand
+and returns one result of the same type. This type may be a float scalar type,
+a vector whose element type is float, or a tensor of floats. It has no standard
+attributes.
+
+### 'exp' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `exp` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar natural exponential.
+%a = exp %b : f64
+
+// SIMD vector element-wise natural exponential.
+%f = exp %g : vector<4xf32>
+
+// Tensor element-wise natural exponential.
+%x = exp %y : tensor<4x?xf32>
+```
+
+The `exp` operation computes the base-e exponential of a given value. It takes
+one operand and returns one result of the same type. This type may be a float
+scalar type, a vector whose element type is float, or a tensor of floats. It
+has no standard attributes.
+
+### 'negf' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `negf` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar negation value.
+%a = negf %b : f64
+
+// SIMD vector element-wise negation value.
+%f = negf %g : vector<4xf32>
+
+// Tensor element-wise negation value.
+%x = negf %y : tensor<4x?xf32>
+```
+
+The `negf` operation computes the negation of a given value. It takes one
+operand and returns one result of the same type. This type may be a float
+scalar type, a vector whose element type is float, or a tensor of floats. It
+has no standard attributes.
+
+### 'tanh' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `tanh` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar hyperbolic tangent value.
+%a = tanh %b : f64
+
+// SIMD vector element-wise hyperbolic tangent value.
+%f = tanh %g : vector<4xf32>
+
+// Tensor element-wise hyperbolic tangent value.
+%x = tanh %y : tensor<4x?xf32>
+```
+
+The `tanh` operation computes the hyperbolic tangent. It takes one operand and
+returns one result of the same type. This type may be a float scalar type, a
+vector whose element type is float, or a tensor of floats. It has no standard
+attributes.
+
+## Arithmetic Operations
+
+Basic arithmetic in MLIR is specified by standard operations described in this
+section.
+
+### 'addi' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `addi` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar addition.
+%a = addi %b, %c : i64
+
+// SIMD vector element-wise addition, e.g. for Intel SSE.
+%f = addi %g, %h : vector<4xi32>
+
+// Tensor element-wise addition.
+%x = addi %y, %z : tensor<4x?xi8>
+```
+
+The `addi` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be an integer scalar type, a
+vector whose element type is integer, or a tensor of integers. It has no
+standard attributes.
+
+### 'addf' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `addf` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar addition.
+%a = addf %b, %c : f64
+
+// SIMD vector addition, e.g. for Intel SSE.
+%f = addf %g, %h : vector<4xf32>
+
+// Tensor addition.
+%x = addf %y, %z : tensor<4x?xbf16>
+```
+
+The `addf` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be a floating point scalar
+type, a vector whose element type is a floating point type, or a floating
+point tensor.
+
+It has no standard attributes.
+
+TODO: In the distant future, this will accept optional attributes for fast math,
+contraction, rounding mode, and other controls.
+
+### 'and' operation
+
+Bitwise integer and.
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `and` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar integer bitwise and.
+%a = and %b, %c : i64
+
+// SIMD vector element-wise bitwise integer and.
+%f = and %g, %h : vector<4xi32>
+
+// Tensor element-wise bitwise integer and.
+%x = and %y, %z : tensor<4x?xi8>
+```
+
+The `and` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be an integer scalar type, a
+vector whose element type is integer, or a tensor of integers. It has no
+standard attributes.
+
+### 'cmpi' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `cmpi` string-literal `,` ssa-id `,` ssa-id `:` type
+```
+
+Examples:
+
+```mlir
+// Custom form of scalar "signed less than" comparison.
+%x = cmpi "slt", %lhs, %rhs : i32
+
+// Generic form of the same operation.
+%x = "std.cmpi"(%lhs, %rhs) {predicate = 2 : i64} : (i32, i32) -> i1
+
+// Custom form of vector equality comparison.
+%x = cmpi "eq", %lhs, %rhs : vector<4xi64>
+
+// Generic form of the same operation.
+%x = "std.cmpi"(%lhs, %rhs) {predicate = 0 : i64}
+ : (vector<4xi64>, vector<4xi64>) -> vector<4xi1>
+```
+
+The `cmpi` operation is a generic comparison for integer-like types. Its two
+arguments can be integers, or vectors or tensors thereof, as long as their
+types match. The operation produces an i1 in the scalar case, and a vector or
+tensor of i1 with the same shape as the inputs in the other cases.
+
+Its first argument is an attribute that defines which type of comparison is
+performed. The following comparisons are supported:
+
+- equal (mnemonic: `"eq"`; integer value: `0`)
+- not equal (mnemonic: `"ne"`; integer value: `1`)
+- signed less than (mnemonic: `"slt"`; integer value: `2`)
+- signed less than or equal (mnemonic: `"sle"`; integer value: `3`)
+- signed greater than (mnemonic: `"sgt"`; integer value: `4`)
+- signed greater than or equal (mnemonic: `"sge"`; integer value: `5`)
+- unsigned less than (mnemonic: `"ult"`; integer value: `6`)
+- unsigned less than or equal (mnemonic: `"ule"`; integer value: `7`)
+- unsigned greater than (mnemonic: `"ugt"`; integer value: `8`)
+- unsigned greater than or equal (mnemonic: `"uge"`; integer value: `9`)
+
+The result is `1` if the comparison is true and `0` otherwise. For vector or
+tensor operands, the comparison is performed elementwise and the element of the
+result indicates whether the comparison is true for the operand elements with
+the same indices as those of the result.
+
+Note: while the custom assembly form uses strings, the actual underlying
+attribute has integer type (or rather enum class in C++ code) as seen from the
+generic assembly form. String literals are used to improve readability of the IR
+by humans.
+
+This operation only applies to integer-like operands, but not floats. The main
+reason being that comparison operations have diverging sets of attributes:
+integers require sign specification while floats require various floating
+point-related particularities, e.g., `-ffast-math` behavior, IEEE754
+compliance, etc.
+([rationale](../Rationale.md#splitting-floating-point-vs-integer-operations)).
+The type of comparison is specified as an attribute to avoid introducing ten
+similar operations, taking into account that they are often implemented using
+the same operation downstream
+([rationale](../Rationale.md#specifying-comparison-kind-as-attribute)). The
+separation between signed and unsigned order comparisons is necessary because of
+integers being signless. The comparison operation must know how to interpret
+values with the foremost bit being set: negatives in two's complement or large
+positives
+([rationale](../Rationale.md#specifying-sign-in-integer-comparison-operations)).
+
+### 'constant' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `constant` attribute-value `:` type
+```
+
+The `constant` operation produces an SSA value equal to some constant specified
+by an attribute. This is how MLIR forms simple integer and floating point
+constants, as well as more exotic things like references to functions and
+(TODO!) tensor/vector constants.
+
+The `constant` operation is represented with a single attribute named "value".
+The type specifies the result type of the operation.
+
+Examples:
+
+```mlir
+// Integer constant
+%1 = constant 42 : i32
+
+// Reference to function @myfn.
+%3 = constant @myfn : (tensor<16xf32>, f32) -> tensor<16xf32>
+
+// Equivalent generic forms
+%1 = "std.constant"() {value = 42 : i32} : () -> i32
+%3 = "std.constant"() {value = @myfn}
+ : () -> ((tensor<16xf32>, f32) -> tensor<16xf32>)
+```
+
+MLIR does not allow direct references to functions in SSA operands because the
+compiler is multithreaded, and disallowing SSA values to directly reference a
+function simplifies this
+([rationale](../Rationale.md#multithreading-the-compiler)).
+
+### 'copysign' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `copysign` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar copysign value.
+%a = copysign %b, %c : f64
+
+// SIMD vector element-wise copysign value.
+%f = copysign %g, %h : vector<4xf32>
+
+// Tensor element-wise copysign value.
+%x = copysign %y, %z : tensor<4x?xf32>
+```
+
+The `copysign` operation returns a value with the magnitude of the first
+operand and the sign of the second operand. It takes two operands and returns
+one result of the same type. This type may be a float scalar type, a vector
+whose element type is float, or a tensor of floats. It has no standard
+attributes.
+
+### 'divis' operation
+
+Signed integer division. Rounds towards zero. Treats the leading bit as sign,
+i.e. `6 / -2 = -3`.
+
+Note: the semantics of division by zero or signed division overflow (minimum
+value divided by -1) is TBD; do NOT assume any specific behavior.
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `divis` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar signed integer division.
+%a = divis %b, %c : i64
+
+// SIMD vector element-wise division.
+%f = divis %g, %h : vector<4xi32>
+
+// Tensor element-wise integer division.
+%x = divis %y, %z : tensor<4x?xi8>
+```
+
+The `divis` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be an integer scalar type, a
+vector whose element type is integer, or a tensor of integers. It has no
+standard attributes.
+
+### 'diviu' operation
+
+Unsigned integer division. Rounds towards zero. Treats the leading bit as the
+most significant, i.e. for `i16` given two's complement representation, `6 /
+-2 = 6 / (2^16 - 2) = 0`.
+
+Note: the semantics of division by zero is TBD; do NOT assume any specific
+behavior.
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `diviu` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar unsigned integer division.
+%a = diviu %b, %c : i64
+
+// SIMD vector element-wise division.
+%f = diviu %g, %h : vector<4xi32>
+
+// Tensor element-wise integer division.
+%x = diviu %y, %z : tensor<4x?xi8>
+```
+
+The `diviu` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be an integer scalar type, a
+vector whose element type is integer, or a tensor of integers. It has no
+standard attributes.
+
+### 'memref_cast' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `memref_cast` ssa-use `:` type `to` type
+```
+
+Examples:
+
+```mlir
+// Discard static dimension information.
+%3 = memref_cast %2 : memref<4x?xf32> to memref<?x?xf32>
+
+// Convert to a type with more known dimensions.
+%4 = memref_cast %3 : memref<?x?xf32> to memref<4x?xf32>
+
+// Convert to a type with unknown rank.
+%5 = memref_cast %3 : memref<?x?xf32> to memref<*xf32>
+
+// Convert to a type with static rank.
+%6 = memref_cast %5 : memref<*xf32> to memref<?x?xf32>
+```
+
+Convert a memref from one type to an equivalent type without changing any data
+elements. The types are equivalent if either:
+
+1.  both have the same static rank, element type, layout map, and address
+    space; the operation is invalid if converting to a mismatching constant
+    dimension, or
+2.  exactly one of the two types has an unknown rank, and both have the same
+    element type and address space; the operation is invalid if both operands
+    have an unknown rank or if converting to a mismatching static rank.
+
+### 'mulf' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `mulf` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar multiplication.
+%a = mulf %b, %c : f64
+
+// SIMD pointwise vector multiplication, e.g. for Intel SSE.
+%f = mulf %g, %h : vector<4xf32>
+
+// Tensor pointwise multiplication.
+%x = mulf %y, %z : tensor<4x?xbf16>
+```
+
+The `mulf` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be a floating point scalar
+type, a vector whose element type is a floating point type, or a floating
+point tensor.
+
+It has no standard attributes.
+
+TODO: In the distant future, this will accept optional attributes for fast math,
+contraction, rounding mode, and other controls.
+
+### 'or' operation
+
+Bitwise integer or.
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `or` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar integer bitwise or.
+%a = or %b, %c : i64
+
+// SIMD vector element-wise bitwise integer or.
+%f = or %g, %h : vector<4xi32>
+
+// Tensor element-wise bitwise integer or.
+%x = or %y, %z : tensor<4x?xi8>
+```
+
+The `or` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be an integer scalar type, a
+vector whose element type is integer, or a tensor of integers. It has no
+standard attributes.
+
+### 'remis' operation
+
+Signed integer division remainder. Treats the leading bit as sign, i.e. `6 %
+-2 = 0`.
+
+Note: the semantics of division by zero is TBD; do NOT assume any specific
+behavior.
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `remis` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar signed integer division remainder.
+%a = remis %b, %c : i64
+
+// SIMD vector element-wise division remainder.
+%f = remis %g, %h : vector<4xi32>
+
+// Tensor element-wise integer division remainder.
+%x = remis %y, %z : tensor<4x?xi8>
+```
+
+The `remis` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be an integer scalar type, a
+vector whose element type is integer, or a tensor of integers. It has no
+standard attributes.
+
+### 'remiu' operation
+
+Unsigned integer division remainder. Treats the leading bit as the most
+significant, i.e. for `i16`, `6 % -2 = 6 % (2^16 - 2) = 6`.
+
+Note: the semantics of division by zero is TBD; do NOT assume any specific
+behavior.
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `remiu` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar unsigned integer division remainder.
+%a = remiu %b, %c : i64
+
+// SIMD vector element-wise division remainder.
+%f = remiu %g, %h : vector<4xi32>
+
+// Tensor element-wise integer division remainder.
+%x = remiu %y, %z : tensor<4x?xi8>
+```
+
+The `remiu` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be an integer scalar type, a
+vector whose element type is integer, or a tensor of integers. It has no
+standard attributes.
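+
+The two's-complement reinterpretation described above for `diviu` and `remiu`
+can be checked with a small standalone C++ sketch. This is an illustration of
+the arithmetic only, not MLIR API code:
+
+```c++
+#include <cassert>
+#include <cstdint>
+
+int main() {
+  // The bit pattern of -2 reinterpreted as an unsigned 16-bit value.
+  uint16_t b = static_cast<uint16_t>(-2); // 2^16 - 2 = 65534
+  assert(b == 65534);
+  // diviu: 6 / (2^16 - 2) rounds towards zero to 0.
+  assert(uint16_t{6} / b == 0);
+  // remiu: 6 % (2^16 - 2) leaves the whole dividend, 6.
+  assert(uint16_t{6} % b == 6);
+  return 0;
+}
+```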
+
+### 'select' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `select` ssa-use `,` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Custom form of scalar selection.
+%x = select %cond, %true, %false : i32
+
+// Generic form of the same operation.
+%x = "std.select"(%cond, %true, %false) : (i1, i32, i32) -> i32
+
+// Vector selection is element-wise
+%vx = "std.select"(%vcond, %vtrue, %vfalse)
+ : (vector<42xi1>, vector<42xf32>, vector<42xf32>) -> vector<42xf32>
+```
+
+The `select` operation chooses one value based on a binary condition supplied as
+its first operand. If the value of the first operand is `1`, the second operand
+is chosen, otherwise the third operand is chosen. The second and the third
+operand must have the same type.
+
+The operation applies to vectors and tensors elementwise, provided the _shape_
+of all operands is identical. The choice is made for each element individually
+based on the value at the same position as the element in the condition
+operand.
+
+The `select` operation combined with [`cmpi`](#cmpi-operation) can be used to
+implement `min` and `max` with signed or unsigned comparison semantics.
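+
+For example, a signed integer minimum can be written as follows (a sketch
+grounded in the `cmpi` and `select` definitions above):
+
+```mlir
+// %min holds the smaller of %a and %b under signed comparison.
+%cond = cmpi "slt", %a, %b : i32
+%min = select %cond, %a, %b : i32
+```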
+
+### 'tensor_cast' operation
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `tensor_cast` ssa-use `:` type `to` type
+```
+
+Examples:
+
+```mlir
+// Convert from unknown rank to rank 2 with unknown dimension sizes.
+%2 = "std.tensor_cast"(%1) : (tensor<*xf32>) -> tensor<?x?xf32>
+%2 = tensor_cast %1 : tensor<*xf32> to tensor<?x?xf32>
+
+// Convert to a type with more known dimensions.
+%3 = "std.tensor_cast"(%2) : (tensor<?x?xf32>) -> tensor<4x?xf32>
+
+// Discard static dimension and rank information.
+%4 = "std.tensor_cast"(%3) : (tensor<4x?xf32>) -> tensor<?x?xf32>
+%5 = "std.tensor_cast"(%4) : (tensor<?x?xf32>) -> tensor<*xf32>
+```
+
+Convert a tensor from one type to an equivalent type without changing any data
+elements. The source and destination types must both be tensor types with the
+same element type. If both are ranked, then the rank should be the same and
+static dimensions should match. The operation is invalid if converting to a
+mismatching constant dimension.
+
+### 'xor' operation
+
+Bitwise integer xor.
+
+Syntax:
+
+```
+operation ::= ssa-id `=` `xor` ssa-use `,` ssa-use `:` type
+```
+
+Examples:
+
+```mlir
+// Scalar integer bitwise xor.
+%a = xor %b, %c : i64
+
+// SIMD vector element-wise bitwise integer xor.
+%f = xor %g, %h : vector<4xi32>
+
+// Tensor element-wise bitwise integer xor.
+%x = xor %y, %z : tensor<4x?xi8>
+```
+
+The `xor` operation takes two operands and returns one result, all of which
+are required to have the same type. This type may be an integer scalar type, a
+vector whose element type is integer, or a tensor of integers. It has no
+standard attributes.
diff --git a/mlir/docs/Dialects/Vector.md b/mlir/docs/Dialects/Vector.md
new file mode 100644
index 00000000000..04f5ba71cdb
--- /dev/null
+++ b/mlir/docs/Dialects/Vector.md
@@ -0,0 +1,14 @@
+# Vector Dialect
+
+This dialect provides a mid-level abstraction for the MLIR super-vectorizer.
+
+[TOC]
+
+## Operations
+
+To see op documentation, run:
+
+```sh
+mlir-tblgen --gen-op-doc -I /path/to/mlir/include \
+/path/to/mlir/include/mlir/Dialect/VectorOps/VectorOps.td
+```
diff --git a/mlir/docs/EDSC.md b/mlir/docs/EDSC.md
new file mode 100644
index 00000000000..eaaeb6c7009
--- /dev/null
+++ b/mlir/docs/EDSC.md
@@ -0,0 +1,132 @@
+# Background: declarative builders API
+
+The main purpose of the declarative builders API is to provide an intuitive way
+of constructing MLIR programmatically. In the majority of cases, the IR we wish
+to construct exhibits structured control-flow. Declarative builders provide an
+API to make MLIR construction and manipulation very idiomatic, for the
+structured control-flow case, in C++.
+
+## ScopedContext
+
+`mlir::edsc::ScopedContext` provides an implicit thread-local context,
+supporting a simple declarative API with globally accessible builders. These
+declarative builders are available within the lifetime of a `ScopedContext`.
+
+## ValueHandle and IndexHandle
+
+`mlir::edsc::ValueHandle` and `mlir::edsc::IndexHandle` provide typed
+abstractions around an `mlir::Value`. These abstractions are "delayed", in the
+sense that they allow separating declaration from definition. They may capture
+IR snippets, as they are built, for programmatic manipulation. Intuitive
+operators are provided to allow concise and idiomatic expressions.
+
+```c++
+ValueHandle zero = constant_index(0);
+IndexHandle i, j, k;
+```
+
+## Intrinsics
+
+`mlir::edsc::ValueBuilder` is a generic wrapper for the `mlir::Builder::create`
+method that operates on `ValueHandle` objects and returns a single
+`ValueHandle`. For instructions that return no values or that return multiple
+values, the `mlir::edsc::InstructionBuilder` can be used. Named intrinsics are
+provided as syntactic sugar to further reduce boilerplate.
+
+```c++
+using load = ValueBuilder<LoadOp>;
+using store = InstructionBuilder<StoreOp>;
+```
+
+## LoopBuilder and AffineLoopNestBuilder
+
+`mlir::edsc::AffineLoopNestBuilder` provides an interface to allow writing
+concise and structured loop nests.
+
+```c++
+ ScopedContext scope(f.get());
+ ValueHandle i(indexType),
+ j(indexType),
+ lb(f->getArgument(0)),
+ ub(f->getArgument(1));
+ ValueHandle f7(constant_float(llvm::APFloat(7.0f), f32Type)),
+ f13(constant_float(llvm::APFloat(13.0f), f32Type)),
+ i7(constant_int(7, 32)),
+ i13(constant_int(13, 32));
+ AffineLoopNestBuilder(&i, lb, ub, 3)([&]{
+ lb * index_t(3) + ub;
+ lb + index_t(3);
+ AffineLoopNestBuilder(&j, lb, ub, 2)([&]{
+ ceilDiv(index_t(31) * floorDiv(i + j * index_t(3), index_t(32)),
+ index_t(32));
+ ((f7 + f13) / f7) % f13 - f7 * f13;
+ ((i7 + i13) / i7) % i13 - i7 * i13;
+ });
+ });
+```
+
+## IndexedValue
+
+`mlir::edsc::IndexedValue` provides an index notation around load and store
+operations on abstract data types by overloading the C++ assignment and
+parenthesis operators. The relevant loads and stores are emitted as appropriate.
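+
+As a rough sketch of the idea (with hypothetical stand-in types, not the actual
+EDSC classes), overloading assignment and parentheses lets an expression such
+as `C(ivs) = A(ivs) + B(ivs)` read like math while "lowering" to loads, an add,
+and a store:
+
+```c++
+#include <cassert>
+#include <map>
+#include <vector>
+
+// Hypothetical buffer; in EDSC this role is played by a memref-backed view.
+struct Buffer {
+  std::map<std::vector<int>, double> data;
+};
+
+// Proxy returned by operator(): assignment emits a "store", conversion a "load".
+struct IndexedValue {
+  Buffer &buf;
+  std::vector<int> indices;
+  IndexedValue &operator=(double v) {  // plays the role of emitting a store
+    buf.data[indices] = v;
+    return *this;
+  }
+  operator double() const { return buf.data.at(indices); }  // emits a load
+};
+
+struct Indexed {
+  Buffer buf;
+  // The parenthesis overload captures the index expression.
+  IndexedValue operator()(int i, int j) { return IndexedValue{buf, {i, j}}; }
+};
+
+int main() {
+  Indexed A, B, C;
+  B(0, 1) = 2.0;
+  C(0, 1) = 3.0;
+  A(0, 1) = B(0, 1) + C(0, 1);  // reads as math; loads + add + store
+  assert(double(A(0, 1)) == 5.0);
+  return 0;
+}
+```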
+
+## Putting it all together
+
+With declarative builders, it becomes fairly concise to build rank- and
+type-agnostic custom operations even though MLIR does not yet have generic
+types. Here is what a definition of a general pointwise add looks like in
+TableGen with declarative builders.
+
+```c++
+def AddOp : Op<"x.add">,
+ Arguments<(ins Tensor:$A, Tensor:$B)>,
+ Results<(outs Tensor: $C)> {
+ code referenceImplementation = [{
+ auto ivs = makeIndexHandles(view_A.rank());
+ auto pivs = makePIndexHandles(ivs);
+ IndexedValue A(arg_A), B(arg_B), C(arg_C);
+ AffineLoopNestBuilder(pivs, view_A.getLbs(), view_A.getUbs(), view_A.getSteps())(
+ [&]{
+ C(ivs) = A(ivs) + B(ivs);
+ });
+ }];
+}
+```
+
+Depending on the function signature on which this emitter is called, the
+generated IR resembles the following, for a 4-D memref of `vector<4xi8>`:
+
+```
+// CHECK-LABEL: func @t1(%lhs: memref<3x4x5x6xvector<4xi8>>, %rhs: memref<3x4x5x6xvector<4xi8>>, %result: memref<3x4x5x6xvector<4xi8>>) -> () {
+// CHECK: affine.for {{.*}} = 0 to 3 {
+// CHECK: affine.for {{.*}} = 0 to 4 {
+// CHECK: affine.for {{.*}} = 0 to 5 {
+// CHECK: affine.for {{.*}}= 0 to 6 {
+// CHECK: {{.*}} = load %arg1[{{.*}}] : memref<3x4x5x6xvector<4xi8>>
+// CHECK: {{.*}} = load %arg0[{{.*}}] : memref<3x4x5x6xvector<4xi8>>
+// CHECK: {{.*}} = addi {{.*}} : vector<4xi8>
+// CHECK: store {{.*}}, %arg2[{{.*}}] : memref<3x4x5x6xvector<4xi8>>
+```
+
+or the following, for a 0-D `memref<f32>`:
+
+```
+// CHECK-LABEL: func @t3(%lhs: memref<f32>, %rhs: memref<f32>, %result: memref<f32>) -> () {
+// CHECK: {{.*}} = load %arg1[] : memref<f32>
+// CHECK: {{.*}} = load %arg0[] : memref<f32>
+// CHECK: {{.*}} = addf {{.*}}, {{.*}} : f32
+// CHECK: store {{.*}}, %arg2[] : memref<f32>
+```
+
+Similar APIs are provided to emit the lower-level `loop.for` op with
+`LoopNestBuilder`. See the `builder-api-test.cpp` test for more usage examples.
+
+Since the implementation of declarative builders is in C++, they can also be
+used to program the IR with an embedded-DSL flavor directly integrated in MLIR.
+We make use of these properties in the tutorial.
+
+Spoiler: MLIR also provides Python bindings for these builders, and a
+full-fledged Python machine learning DSL with automatic differentiation
+targeting MLIR was built as an early research collaboration.
diff --git a/mlir/docs/GenericDAGRewriter.md b/mlir/docs/GenericDAGRewriter.md
new file mode 100644
index 00000000000..8cc09f7d17f
--- /dev/null
+++ b/mlir/docs/GenericDAGRewriter.md
@@ -0,0 +1,415 @@
+# MLIR Generic DAG Rewriter Infrastructure
+
+## Introduction and Motivation
+
+The goal of a compiler IR is to represent code at various levels of
+abstraction, which pose different sets of tradeoffs in terms of
+representational capabilities and ease of transformation. However, the ability
+to represent code is not itself very useful - you also need to be able to
+implement those transformations.
+
+There are many different sorts of compiler transformations, but this document
+focuses on a particularly important class of transformation that comes up
+repeatedly at scale, and is important for the immediate goals of MLIR: that of
+pattern matching on a set of operations and replacing them with another set.
+This is the key algorithm required to implement the "op fission" algorithm
+used by the
+tf2xla bridge, pattern matching rewrites from TF ops to TF/Lite, peephole
+optimizations like "eliminate identity nodes" or "replace x+0 with x", as well
+as a useful abstraction to implement optimization algorithms for MLIR graphs at
+all levels.
+
+A particular strength of MLIR (and a major difference vs other compiler
+infrastructures like LLVM, GCC, XLA, TensorFlow, etc) is that it uses a single
+compiler IR to represent code at multiple levels of abstraction: an MLIR
+operation can be a "TensorFlow operation", an "XLA HLO", a "TF Lite
+FlatBufferModel op", a TPU LLO instruction, an LLVM IR instruction (transitively
+including X86, Lanai, CUDA, and other target specific instructions), or anything
+else that the MLIR type system can reasonably express. Because MLIR spans such a
+wide range of different problems, a single infrastructure for performing
+graph-to-graph rewrites can help solve many diverse domain challenges, from
+the TensorFlow graph level down to the machine code level.
+
+[Static single assignment](https://en.wikipedia.org/wiki/Static_single_assignment_form)
+(SSA) representations like MLIR make it easy to access the operands and "users"
+of an operation. As such, a natural abstraction for these graph-to-graph
+rewrites is that of DAG pattern matching: clients define DAG tile patterns, and
+each pattern includes a result DAG to produce and the cost of the result (or,
+inversely, the benefit of doing the replacement). A common infrastructure
+efficiently finds and performs the rewrites.
+
+While this concept is simple, the details are more nuanced. This proposal
+defines and explores a set of abstractions that we feel can solve a wide range
+of different problems, and can be applied to many different sorts of problems
+that MLIR faces - and is expected to face - over time. We do this by separating
+the pattern definition and matching algorithm from the "driver" of the
+computation loop, and by making space for the patterns to be defined
+declaratively in the future.
+
+## Related Work
+
+There is a huge amount of related work to consider, given that pretty much every
+compiler in existence has to solve this problem many times over. Here are a few
+graph rewrite systems we have used, along with the pros and cons of this related
+work. One unifying problem with all of these is that these systems are only
+trying to solve one particular and usually narrow problem: our proposal would
+like to solve many of these problems with a single infrastructure. Of these, the
+most similar design to our proposal is the LLVM DAG-to-DAG instruction selection
+algorithm at the end.
+
+### Constant folding
+
+A degenerate but pervasive case of DAG-to-DAG pattern matching is constant
+folding: an operation whose operands are constants can often be folded to a
+constant result value.
+
+MLIR already has constant folding routines which provide a simpler API than a
+general DAG-to-DAG pattern matcher, and we expect it to remain because the
+simpler contract makes it applicable in some cases that a generic matcher would
+not. For example, a DAG-rewrite can remove arbitrary nodes in the current
+function, which could invalidate iterators. Constant folding as an API does not
+remove any nodes, it just provides a (list of) constant values and allows the
+clients to update their data structures as necessary.
+
+### AST-Level Pattern Matchers
+
+The literature is full of source-to-source translators which transform
+identities in order to improve performance (e.g. transforming `X*0` into `0`).
+One large example that I'm aware of is the GCC `fold` function, which performs
+[many optimizations](https://github.com/gcc-mirror/gcc/blob/master/gcc/fold-const.c)
+on ASTs. Clang has
+[similar routines](http://releases.llvm.org/3.5.0/tools/clang/docs/InternalsManual.html#constant-folding-in-the-clang-ast)
+for simple constant folding of expressions (as required by the C++ standard) but
+doesn't perform general optimizations on its ASTs.
+
+The primary downside of tree optimizers is that you can't see across operations
+that have multiple uses. It is
+[well known in literature](https://llvm.org/pubs/2008-06-LCTES-ISelUsingSSAGraphs.pdf)
+that DAG pattern matching is more powerful than tree pattern matching, but OTOH,
+DAG pattern matching can lead to duplication of computation which needs to be
+checked for.
+
+### "Combiners" and other peephole optimizers
+
+Compilers end up with a lot of peephole optimizers for various things, e.g. the
+GCC
+["combine" routines](https://github.com/gcc-mirror/gcc/blob/master/gcc/combine.c)
+(which try to merge two machine instructions into a single one), the LLVM
+[Inst Combine](http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/InstCombine/)
+[pass](https://llvm.org/docs/Passes.html#instcombine-combine-redundant-instructions),
+LLVM's
+[DAG Combiner](https://github.com/llvm-mirror/llvm/blob/master/lib/CodeGen/SelectionDAG/DAGCombiner.cpp),
+the Swift compiler's
+[SIL Combiner](https://github.com/apple/swift/tree/master/lib/SILOptimizer/SILCombiner),
+etc. These generally match one or more operations and produce zero or more
+operations as a result. The LLVM
+[Legalization](http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/)
+infrastructure has a different outer loop but otherwise works the same way.
+
+These passes have a lot of diversity, but also have a unifying structure: they
+mostly have a worklist outer loop which visits operations. They then use the C++
+visitor pattern (or equivalent) to switch over the class of operation and
+dispatch to a method. That method contains a long list of hand-written C++ code
+that pattern-matches various special cases. LLVM introduced a "match" function
+that allows writing patterns in a somewhat more declarative style using template
+metaprogramming (MLIR has similar facilities). Here's a simple example:
+
+```c++
+ // Y - (X + 1) --> ~X + Y
+ if (match(Op1, m_OneUse(m_Add(m_Value(X), m_One()))))
+ return BinaryOperator::CreateAdd(Builder.CreateNot(X), Op0);
+```
+
+Here is a somewhat more complicated one (this is not the biggest or most
+complicated :)
+
+```c++
+ // C2 is ODD
+ // LHS = XOR(Y,C1), Y = AND(Z,C2), C1==(C2+1) => LHS == NEG(OR(Z, ~C2))
+ // ADD(LHS, RHS) == SUB(RHS, OR(Z, ~C2))
+ if (match(LHS, m_Xor(m_Value(Y), m_APInt(C1))))
+ if (C1->countTrailingZeros() == 0)
+ if (match(Y, m_And(m_Value(Z), m_APInt(C2))) && *C1 == (*C2 + 1)) {
+ Value NewOr = Builder.CreateOr(Z, ~(*C2));
+ return Builder.CreateSub(RHS, NewOr, "sub");
+ }
+```
+
+These systems are simple to set up, and pattern matching templates have some
+advantages (they are extensible for new sorts of sub-patterns, look compact at
+point of use). OTOH, they have lots of well known problems, for example:
+
+* These patterns are very error prone to write, and contain lots of
+ redundancies.
+* The IR being matched often has identities (e.g. when matching commutative
+  operators) and the C++ code has to handle them manually - take a look at
+  [the full code](http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/InstCombine/InstCombineAddSub.cpp?view=markup#l775)
+  for `checkForNegativeOperand`, which defines the second pattern.
+* The matching code compiles slowly, both because it generates tons of code
+ and because the templates instantiate slowly.
+* Adding new patterns (e.g. for count leading zeros in the example above) is
+ awkward and doesn't often happen.
+* The cost model for these patterns is not really defined - it is emergent
+ based on the order the patterns are matched in code.
+* They are non-extensible without rebuilding the compiler.
+* It isn't practical to apply theorem provers and other tools to these
+ patterns - they cannot be reused for other purposes.
+
+In addition to structured "combiners" like these, there are lots of ad-hoc
+systems like the
+[LLVM Machine code peephole optimizer](http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/PeepholeOptimizer.cpp?view=markup)
+which are related.
+
+### LLVM's DAG-to-DAG Instruction Selection Infrastructure
+
+The instruction selection subsystem in LLVM is the result of many years worth of
+iteration and discovery, driven by the need for LLVM to support code generation
+for lots of targets, the complexity of code generators for modern instruction
+sets (e.g. X86), and the fanatical pursuit of reusing code across targets. Eli
+wrote a
+[nice short overview](https://eli.thegreenplace.net/2013/02/25/a-deeper-look-into-the-llvm-code-generator-part-1)
+of how this works, and the
+[LLVM documentation](https://llvm.org/docs/CodeGenerator.html#select-instructions-from-dag)
+describes it in more depth including its advantages and limitations. It allows
+writing patterns like this.
+
+```
+def : Pat<(or GR64:$src, (not (add GR64:$src, 1))),
+ (BLCI64rr GR64:$src)>;
+```
+
+This example defines a matcher for the
+["blci" instruction](https://en.wikipedia.org/wiki/Bit_Manipulation_Instruction_Sets#TBM_\(Trailing_Bit_Manipulation\))
+in the
+[X86 target description](http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrInfo.td?view=markup).
+There are many others in that file (look for `Pat<>` patterns, since they aren't
+entangled in details of the compiler like assembler/disassembler generation
+logic).
+
+For our purposes, there is much to like about this system, for example:
+
+* It is defined in a declarative format.
+* It is extensible to target-defined operations.
+* It automates matching across identities, like commutative patterns.
+* It allows custom abstractions and intense factoring of target-specific
+ commonalities.
+* It generates compact code - it compiles into a state machine, which is
+ interpreted.
+* It allows the instruction patterns to be defined and reused for multiple
+ purposes.
+* The patterns are "type checked" at compile time, detecting lots of bugs
+ early and eliminating redundancy from the pattern specifications.
+* It allows the use of general C++ code for weird/complex cases.
+
+While there is a lot that is good here, there is also a lot that is bad:
+
+* All of this machinery is only applicable to instruction selection. Even
+ directly adjacent problems like the DAGCombiner and Legalizer can't use it.
+* This isn't extensible at compiler runtime, you have to rebuild the compiler
+ to extend it.
+* The error messages when failing to match a pattern
+ [are not exactly optimal](https://www.google.com/search?q=llvm+cannot+select).
+* It has lots of implementation problems and limitations (e.g. can't write a
+ pattern for a multi-result operation) as a result of working with the
+ awkward SelectionDAG representation and being designed and implemented
+ lazily.
+* This stuff all grew organically over time and has lots of sharp edges.
+
+### Summary
+
+MLIR will face a wide range of pattern matching and graph rewrite problems, and
+one of the major advantages of having a common representation for code at
+multiple levels is that it allows us to invest in - and highly leverage - a
+single infrastructure for doing this sort of work.
+
+## Goals
+
+This proposal includes support for defining pattern matching and rewrite
+algorithms on MLIR. We'd like these algorithms to encompass many problems in the
+MLIR space, including 1-to-N expansions (e.g. as seen in the TF/XLA bridge when
+lowering a "tf.AddN" to multiple "add" HLOs), M-to-1 patterns (as seen in
+Grappler optimization passes, e.g. that convert multiply/add into a single
+muladd op), as well as general M-to-N patterns (e.g. instruction selection for
+target instructions). Patterns should have a cost associated with them, and the
+common infrastructure should be responsible for sorting out the lowest cost
+match for a given application.
+
+We separate the task of picking a particular locally optimal pattern from a
+given root node, the algorithm used to rewrite an entire graph given a
+particular set of goals, and the definition of the patterns themselves. We do
+this because DAG tile pattern matching is NP-complete, which means that there
+are no known polynomial time algorithms to optimally solve this problem.
+Additionally, we would like to support iterative rewrite algorithms that
+progressively transform the input program through multiple steps. Furthermore,
+we would like to support many different sorts of clients across the MLIR stack,
+and they may have different tolerances for compile time cost, different demands
+for optimality, and other algorithmic goals or constraints.
+
+We aim for MLIR transformations to be easy to implement and reduce the
+likelihood for compiler bugs. We expect there to be a very very large number of
+patterns that are defined over time, and we believe that these sorts of patterns
+will have a very large number of legality/validity constraints - many of which
+are difficult to reason about in a consistent way, may be target specific, and
+whose implementation may be particularly bug-prone. As such, we aim to design
+the API around pattern definition to be simple, resilient to programmer errors,
+and to allow a separation of concerns between the legality of the generated
+nodes and the idea of the pattern being defined.
+
+Finally, error handling is a topmost concern: in addition to allowing patterns
+to be defined in a target-independent way that may not apply for all hardware,
+we also want failure for any pattern to match to be diagnosable in a reasonable
+way. To be clear, this is not a solvable problem in general - the space of
+malfunction is too great to be fully enumerated and handled optimally, but there
+are better and worse ways to handle the situation. MLIR is already designed to
+represent the provenance of an operation well. This project aims to propagate
+that provenance information precisely, as well as diagnose pattern match
+failures with the rationale for why a set of patterns do not apply.
+
+### Non goals
+
+This proposal doesn't aim to solve all compiler problems; it is simply a
+DAG-to-DAG pattern matching system, starting with a greedy driver algorithm.
+Compiler algorithms that require global dataflow analysis (e.g. common
+subexpression elimination, conditional constant propagation, and many many
+others) will not be directly solved by this infrastructure.
+
+This proposal is limited to DAG patterns, which (by definition) prevent the
+patterns from seeing across cycles in a graph. In an SSA-based IR like MLIR,
+this means that these patterns don't see across PHI nodes / basic block
+arguments. We consider this acceptable given the set of problems we are trying
+to solve - we don't know of any other system that attempts to do so, and
+consider the payoff of worrying about this to be low.
+
+This design includes the ability for DAG patterns to have associated costs
+(benefits), but those costs are defined in terms of magic numbers (typically
+equal to the number of nodes being replaced). For any given application, the
+units of magic numbers will have to be defined.
+
+## Overall design
+
+We decompose the problem into four major pieces:
+
+1. the code that is used to define patterns to match, cost, and their
+ replacement actions
+1. the driver logic to pick the best match for a given root node
+1. the client that is implementing some transformation (e.g. a combiner)
+1. (future) the subsystem that allows patterns to be described with a
+ declarative syntax, which sugars step #1.
+
+We sketch the first three of these pieces, each in turn. This is not intended to
+be a concrete API proposal, merely to describe the design.
+
+### Defining Patterns
+
+Each pattern will be an instance of an `mlir::Pattern` class, whose subclasses
+implement methods like this. Note that this API is meant for exposition, the
+actual details are different for efficiency and coding standards reasons (e.g.
+the memory management of `PatternState` is not specified below, etc):
+
+```c++
+class Pattern {
+ /// Return the benefit (the inverse of "cost") of matching this pattern. The
+ /// benefit of a Pattern is always static - rewrites that may have dynamic
+ /// benefit can be instantiated multiple times (different Pattern instances)
+ /// for each benefit that they may return, and be guarded by different match
+ /// condition predicates.
+ PatternBenefit getBenefit() const { return benefit; }
+
+ /// Return the root node that this pattern matches. Patterns that can
+ /// match multiple root types are instantiated once per root.
+ OperationName getRootKind() const { return rootKind; }
+
+ /// Attempt to match against code rooted at the specified operation,
+ /// which is the same operation code as getRootKind(). On failure, this
+  /// returns a None value. On success, it returns a (possibly null)
+  /// pattern-specific state wrapped in a Some. This state is passed back
+  /// into its rewrite function if this match is selected.
+ virtual Optional<PatternState*> match(Operation *op) const = 0;
+
+ /// Rewrite the IR rooted at the specified operation with the result of
+ /// this pattern, generating any new operations with the specified
+ /// rewriter. If an unexpected error is encountered (an internal
+ /// compiler error), it is emitted through the normal MLIR diagnostic
+ /// hooks and the IR is left in a valid state.
+ virtual void rewrite(Operation *op, PatternState *state,
+ PatternRewriter &rewriter) const;
+};
+```
+
+In practice, the first patterns we implement will directly subclass and
+implement this stuff, but we will define some helpers to reduce boilerplate.
+When we have a declarative way to describe patterns, this should be
+automatically generated from the description.
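+
+To make the shape of such a directly-implemented pattern concrete, here is a
+minimal, self-contained sketch. The `Operation` and `PatternState` structs and
+the "addn" operation name below are hypothetical stand-ins for exposition, not
+the real MLIR classes:
+
+```c++
+#include <optional>
+#include <string>
+
+// Minimal stand-ins for the expository API above; the real types differ.
+struct Operation {
+  std::string name;
+  int numOperands;
+};
+struct PatternState {};
+
+/// Match "addn" operations with exactly two operands, so that a later
+/// rewrite step can turn them into a single binary "add".
+struct AddNToAddPattern {
+  std::optional<PatternState *> match(const Operation &op) const {
+    if (op.name == "addn" && op.numOperands == 2)
+      return nullptr;     // Success; no pattern-specific state is needed.
+    return std::nullopt;  // Failure: this pattern does not apply.
+  }
+};
+```
+
+Note how "matched, with no extra state" is expressed as an engaged optional
+holding a null state pointer, which is distinct from the disengaged optional
+that signals match failure.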
+
+Instances of `Pattern` have a benefit that is static upon construction of the
+pattern instance, but that may be computed dynamically at pattern
+initialization time, e.g. allowing the benefit to be derived from domain
+specific information (like the target architecture). This limitation allows
+MLIR to (eventually) perform pattern fusion and compile patterns into an
+efficient state machine, and
+[Thier, Ertl, and Krall](https://dl.acm.org/citation.cfm?id=3179501) have shown
+that match predicates eliminate the need for dynamically computed costs in
+almost all cases: you can simply instantiate the same pattern one time for each
+possible cost and use the predicate to guard the match.
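+
+The "one instance per possible cost" idea can be sketched as follows. This is
+an illustrative toy, not the real MLIR API; the `Operation` stand-in, the
+bit-width guard, and the benefit values are all made up for the example:
+
+```c++
+#include <vector>
+
+// Hypothetical stand-in for an operation being matched.
+struct Operation {
+  int bitWidth;
+};
+
+// One logical rewrite registered as multiple instances, each with a
+// *static* benefit and a match predicate that guards when it applies.
+struct GuardedPattern {
+  unsigned benefit;                  // Static for this instance.
+  bool (*guard)(const Operation &);  // Match predicate selecting it.
+};
+
+std::vector<GuardedPattern> makeInstances() {
+  return {
+      // High-benefit instance: applies only to 32-bit operations.
+      {3, [](const Operation &op) { return op.bitWidth == 32; }},
+      // Low-benefit fallback instance: applies to any bit width.
+      {1, [](const Operation &) { return true; }},
+  };
+}
+```
+
+A driver that tries patterns from highest to lowest benefit then gets the
+effect of a dynamic cost without any dynamically computed benefit values.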
+
+The two-phase nature of this API (match separate from rewrite) is important for
+two reasons: 1) some clients may want to explore different ways to tile the
+graph, and only rewrite after committing to one tiling. 2) We want to support
+runtime extensibility of the pattern sets, but want to be able to statically
+compile the bulk of known patterns into a state machine at "compiler compile
+time". Both of these reasons lead to us needing to match multiple patterns
+before committing to an answer.
+
+### Picking and performing a replacement
+
+In the short term, this API can be very simple, something like this can work and
+will be useful for many clients:
+
+```c++
+class PatternMatcher {
+ // Create a pattern matcher with a bunch of patterns. This constructor
+ // looks across all of the specified patterns, and builds an internal
+ // data structure that allows efficient matching.
+ PatternMatcher(ArrayRef<Pattern*> patterns);
+
+ // Given a specific operation, see if there is some rewrite that is
+ // interesting. If so, return success and return the list of new
+ // operations that were created. If not, return failure.
+ bool matchAndRewrite(Operation *op,
+ SmallVectorImpl<Operation*> &newlyCreatedOps);
+};
+```
+
+In practice the interesting part of this class is the acceleration structure it
+builds internally. It buckets up the patterns by root operation, and sorts them
+by their static benefit. When performing a match, it tests any dynamic patterns,
+then tests statically known patterns from highest to lowest benefit.
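+
+A simplified sketch of that acceleration structure, assuming a hypothetical
+`PatternInfo` record keyed by root operation name (the real
+`PatternMatcher` internals differ):
+
+```c++
+#include <algorithm>
+#include <map>
+#include <string>
+#include <vector>
+
+// Illustrative stand-in for a registered pattern.
+struct PatternInfo {
+  std::string rootKind;
+  unsigned benefit;
+};
+
+class PatternIndex {
+public:
+  explicit PatternIndex(const std::vector<PatternInfo> &patterns) {
+    // Bucket the patterns by root operation kind...
+    for (const auto &p : patterns)
+      buckets[p.rootKind].push_back(p);
+    // ...and sort each bucket by decreasing static benefit.
+    for (auto &kv : buckets)
+      std::sort(kv.second.begin(), kv.second.end(),
+                [](const PatternInfo &a, const PatternInfo &b) {
+                  return a.benefit > b.benefit;
+                });
+  }
+
+  // Candidate patterns for a given root op, highest benefit first.
+  const std::vector<PatternInfo> &candidatesFor(const std::string &root) {
+    return buckets[root];
+  }
+
+private:
+  std::map<std::string, std::vector<PatternInfo>> buckets;
+};
+```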
+
+### First Client: A Greedy Worklist Combiner
+
+We expect that there will be lots of clients for this, but a simple greedy
+worklist-driven combiner should be powerful enough to serve many important ones,
+including the
+[TF2XLA op expansion logic](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/tf2xla/kernels),
+many of the pattern substitution passes of the
+[TOCO compiler](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/toco)
+for TF-Lite, many
+[Grappler](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/grappler)
+passes, and other general performance optimizations for applying identities.
+
+The structure of this algorithm is straightforward; here is pseudocode:
+
+* Walk a function in preorder, adding each operation to a worklist.
+* While the worklist is non-empty, pull something off the back (processing
+ things generally in postorder)
+ * Perform matchAndRewrite on the operation. If failed, continue to the
+ next operation.
+ * On success, add the newly created ops to the worklist and continue.
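+
+The steps above can be sketched as a self-contained driver. The `Operation`
+struct and the callback type here are hypothetical stand-ins for the real
+MLIR types and `PatternMatcher`, and this sketch ignores termination
+guarantees (a self-reproducing pattern would loop forever):
+
+```c++
+#include <deque>
+#include <functional>
+#include <string>
+#include <vector>
+
+// Stand-in for an MLIR operation, for illustration only.
+struct Operation {
+  std::string name;
+};
+
+// matchAndRewrite callback: returns true on success and appends any newly
+// created ops to `newOps`.
+using MatchAndRewriteFn =
+    std::function<bool(Operation &, std::vector<Operation> &)>;
+
+// Greedy worklist driver: pull ops off the back (roughly postorder) and
+// re-enqueue newly created ops until no more patterns apply.
+std::vector<Operation> greedyRewrite(std::vector<Operation> ops,
+                                     MatchAndRewriteFn matchAndRewrite) {
+  std::deque<Operation> worklist(ops.begin(), ops.end());
+  std::vector<Operation> result;
+  while (!worklist.empty()) {
+    Operation op = worklist.back();
+    worklist.pop_back();
+    std::vector<Operation> newOps;
+    if (matchAndRewrite(op, newOps)) {
+      // On success, add the newly created ops to the worklist.
+      for (auto &n : newOps)
+        worklist.push_back(n);
+    } else {
+      // No pattern applied; keep the operation as-is.
+      result.push_back(op);
+    }
+  }
+  return result;
+}
+```
+
+For example, a 1-to-N expansion rule that rewrites a hypothetical "tf.AddN"
+into several "add" ops would run to a fixed point under this driver, with
+the expanded ops themselves re-examined for further matches.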
+
+## Future directions
+
+It is important to get implementation and usage experience with this, and many
+patterns can be defined using this sort of framework. Over time, we can look to
+make it easier to declare patterns in a declarative form (e.g. with the LLVM
+tblgen tool or something newer/better). Once we have that, we can define an
+internal abstraction for describing the patterns to match, allowing better high
+level optimization of patterns (including fusion of the matching logic across
+patterns, which the LLVM instruction selector does) and allow the patterns to be
+defined without rebuilding the compiler itself.
diff --git a/mlir/docs/Glossary.md b/mlir/docs/Glossary.md
new file mode 100644
index 00000000000..542d3756ac7
--- /dev/null
+++ b/mlir/docs/Glossary.md
@@ -0,0 +1,174 @@
+# MLIR Glossary
+
+This glossary contains definitions of MLIR-specific terminology. It is intended
+to be a quick reference document. For terms which are well-documented elsewhere,
+definitions are kept brief and the header links to the more in-depth
+documentation.
+
+<!-- When contributing, please ensure that entries remain in alphabetical order. -->
+
+#### [Block](LangRef.md#blocks)
+
+A sequential list of operations without control flow.
+
+Also called a [basic block](https://en.wikipedia.org/wiki/Basic_block).
+
+#### Conversion
+
+The transformation of code represented in one dialect into a semantically
+equivalent representation in another dialect (i.e. inter-dialect conversion) or
+the same dialect (i.e. intra-dialect conversion).
+
+In the context of MLIR, conversion is distinct from [translation](#translation).
+Conversion refers to a transformation between (or within) dialects, but all
+still within MLIR, whereas translation refers to a transformation between MLIR
+and an external representation.
+
+#### [Declarative Rewrite Rule](DeclarativeRewrites.md) (DRR)
+
+A [rewrite rule](https://en.wikipedia.org/wiki/Graph_rewriting) which can be
+defined declaratively (e.g. through specification in a
+[TableGen](https://llvm.org/docs/TableGen/) record). At compiler build time,
+these rules are expanded into an equivalent `mlir::RewritePattern` subclass.
+
+#### [Dialect](LangRef.md#dialects)
+
+A dialect is a grouping of functionality which can be used to extend the MLIR
+system.
+
+A dialect creates a unique `namespace` within which new
+[operations](#operation-op), [attributes](LangRef.md#attributes), and
+[types](LangRef.md#type-system) are defined. This is the fundamental method by
+which to extend MLIR.
+
+In this way, MLIR is a meta-IR: its extensible framework allows it to be
+leveraged in many different ways (e.g. at different levels of the compilation
+process). Dialects provide an abstraction for the different uses of MLIR while
+recognizing that they are all a part of the meta-IR that is MLIR.
+
+The tutorial provides an example of
+[interfacing with MLIR](Tutorials/Toy/Ch-2.md#interfacing-with-mlir) in this
+way.
+
+(Note that we have intentionally selected the term "dialect" instead of
+"language", as the latter would wrongly suggest that these different namespaces
+define entirely distinct IRs.)
+
+#### Export
+
+To transform code represented in MLIR into a semantically equivalent
+representation which is external to MLIR.
+
+The tool that performs such a transformation is called an exporter.
+
+See also: [translation](#translation).
+
+#### [Function](LangRef.md#functions)
+
+An [operation](#operation-op) with a name containing one [region](#region).
+
+The region of a function is not allowed to implicitly capture values defined
+outside of the function, and all external references must use function arguments
+or attributes that establish a symbolic connection.
+
+#### Import
+
+To transform code represented in an external representation into a semantically
+equivalent representation in MLIR.
+
+The tool that performs such a transformation is called an importer.
+
+See also: [translation](#translation).
+
+#### Legalization
+
+The process of transforming operations into a semantically equivalent
+representation which adheres to the requirements set by the
+[conversion target](DialectConversion.md#conversion-target).
+
+That is, legalization is accomplished if and only if the new representation
+contains only operations which are legal, as specified in the conversion target.
+
+#### Lowering
+
+The process of transforming a higher-level representation of an operation into a
+lower-level, but semantically equivalent, representation.
+
+In MLIR, this is typically accomplished through
+[dialect conversion](DialectConversion.md). This provides a framework by which
+to define the requirements of the lower-level representation, called the
+[conversion target](DialectConversion.md#conversion-target), by specifying which
+operations are legal versus illegal after lowering.
+
+See also: [legalization](#legalization).
+
+#### [Module](LangRef.md#module)
+
+An [operation](#operation-op) which contains a single region containing a single
+block that is comprised of operations.
+
+This provides an organizational structure for MLIR operations, and is the
+expected top-level operation in the IR: the textual parser returns a Module.
+
+#### [Operation](LangRef.md#operations) (op)
+
+A unit of code in MLIR. Operations are the building blocks for all code and
+computations represented by MLIR. They are fully extensible (there is no fixed
+list of operations) and have application-specific semantics.
+
+An operation can have zero or more [regions](#region). Note that this creates a
+nested IR structure, as regions consist of blocks, which in turn, consist of a
+list of operations.
+
+In MLIR, there are two main classes related to operations: `Operation` and `Op`.
+`Operation` is the actual opaque instance of the operation, and represents the
+general API into an operation instance. An `Op` is the base class of a derived
+operation, like `ConstantOp`, and acts as a smart pointer wrapper around an
+`Operation*`.
+
+#### [Region](LangRef.md#regions)
+
+A [CFG](https://en.wikipedia.org/wiki/Control-flow_graph) of MLIR
+[blocks](#block).
+
+#### Round-trip
+
+The process of converting from a source format to a target format and then back
+to the source format.
+
+This is a good way of gaining confidence that the target format richly models
+the source format. This is particularly relevant in the MLIR context, since
+MLIR's multi-level nature allows for easily writing target dialects that model a
+source format (such as TensorFlow GraphDef or another non-MLIR format)
+faithfully and have a simple conversion procedure. Further cleanup/lowering can
+be done entirely within the MLIR representation. This separation - making the
+[importer](#import) as simple as possible and performing all further
+cleanups/lowering in MLIR - has proven to be a useful design pattern.
+
+#### [Terminator operation](LangRef.md#terminator-operations)
+
+An [operation](#operation-op) which *must* terminate a [block](#block).
+Terminator operations are a special category of operations.
+
+#### Transitive lowering
+
+An A->B->C [lowering](#lowering); that is, a lowering in which multiple patterns
+may be applied in order to fully transform an illegal operation into a set of
+legal ones.
+
+This provides the flexibility that the [conversion](#conversion) framework may
+perform the lowering in multiple stages of applying patterns (which may utilize
+intermediate patterns not in the conversion target) in order to fully legalize
+an operation. This is accomplished through
+[partial conversion](DialectConversion.md#modes-of-conversion).
+
+#### Translation
+
+The transformation of code represented in an external (non-MLIR) representation
+into a semantically equivalent representation in MLIR (i.e.
+[importing](#import)), or the inverse (i.e. [exporting](#export)).
+
+In the context of MLIR, translation is distinct from [conversion](#conversion).
+Translation refers to a transformation between MLIR and an external
+representation, whereas conversion refers to a transformation within MLIR
+(between or within dialects).
diff --git a/mlir/docs/Interfaces.md b/mlir/docs/Interfaces.md
new file mode 100644
index 00000000000..f413cac28bb
--- /dev/null
+++ b/mlir/docs/Interfaces.md
@@ -0,0 +1,200 @@
+# Introduction to MLIR Interfaces
+
+MLIR is generic and very extensible; it allows for opaquely representing many
+different dialects that have their own operations, attributes, types, and so on.
+This allows for dialects to be very expressive in their semantics and for MLIR
+to capture many different levels of abstraction. The downside to this is that
+transformations and analyses must be extremely conservative about the operations
+that they encounter, and must special-case the different dialects that they
+support. To combat this, MLIR provides the concept of `interfaces`.
+
+## Motivation
+
+Interfaces provide a generic way of interacting with the IR. The goal is to be
+able to express transformations/analyses in terms of these interfaces without
+encoding specific knowledge about the exact operation or dialect involved. This
+makes the compiler more extensible by allowing the addition of new dialects and
+operations in a decoupled way with respect to the implementation of
+transformations/analyses.
+
+### Dialect Interfaces
+
+Dialect interfaces are generally useful for transformation passes or analyses
+that want to opaquely operate on operations, even *across* dialects. These
+interfaces generally involve wide coverage over the entire dialect and are only
+used for a handful of transformations/analyses. In these cases, registering the
+interface directly on each operation is overly complex and cumbersome. The
+interface is not core to the operation, just to the specific transformation. An
+example of where this type of interface would be used is inlining. Inlining
+generally queries high-level information about the operations within a dialect,
+like legality and cost modeling, that often is not specific to one operation.
+
+A dialect interface can be defined by inheriting from the CRTP base class
+`DialectInterfaceBase::Base`. This class provides the necessary utilities for
+registering an interface with the dialect so that it can be looked up later.
+Once the interface has been defined, dialects can override it using
+dialect-specific information. The interfaces defined by a dialect are registered
+in a similar mechanism to Attributes, Operations, Types, etc.
+
+```c++
+/// Define an Inlining interface to allow for dialects to opt-in.
+class DialectInlinerInterface :
+ public DialectInterface::Base<DialectInlinerInterface> {
+public:
+ /// Returns true if the given region 'src' can be inlined into the region
+ /// 'dest' that is attached to an operation registered to the current dialect.
+ /// 'valueMapping' contains any remapped values from within the 'src' region.
+ /// This can be used to examine what values will replace entry arguments into
+ /// the 'src' region, for example.
+ virtual bool isLegalToInline(Region *dest, Region *src,
+ BlockAndValueMapping &valueMapping) const {
+ return false;
+ }
+};
+
+/// Override the inliner interface to add support for inlining affine
+/// operations.
+struct AffineInlinerInterface : public DialectInlinerInterface {
+ /// Affine structures have specific inlining constraints.
+ bool isLegalToInline(Region *dest, Region *src,
+ BlockAndValueMapping &valueMapping) const final {
+ ...
+ }
+};
+
+/// Register the interface with the dialect.
+AffineOpsDialect::AffineOpsDialect(MLIRContext *context) ... {
+ addInterfaces<AffineInlinerInterface>();
+}
+```
+
+Once registered, these interfaces can be opaquely queried from the dialect by
+the transformation/analysis that wants to use them:
+
+```c++
+Dialect *dialect = ...;
+if (auto *interface = dialect->getInterface<DialectInlinerInterface>())
+ ... // The dialect provides this interface.
+```
+
+#### DialectInterfaceCollections
+
+An additional utility is provided via DialectInterfaceCollection. This CRTP
+class allows for collecting all of the dialects that have registered a given
+interface within the context.
+
+```c++
+class InlinerInterface : public
+ DialectInterfaceCollection<DialectInlinerInterface> {
+ /// The hooks for this class mirror the hooks for the DialectInlinerInterface,
+ /// with default implementations that call the hook on the interface for a
+ /// given dialect.
+ virtual bool isLegalToInline(Region *dest, Region *src,
+ BlockAndValueMapping &valueMapping) const {
+ auto *handler = getInterfaceFor(dest->getContainingOp());
+ return handler ? handler->isLegalToInline(dest, src, valueMapping) : false;
+ }
+};
+
+MLIRContext *ctx = ...;
+InlinerInterface interface(ctx);
+if (!interface.isLegalToInline(...))
+ ...
+```
+
+### Operation Interfaces
+
+Operation interfaces, as the name suggests, are those registered at the
+Operation level. These interfaces provide an opaque view into derived operations
+by providing a virtual interface that must be implemented. As an example, the
+`Linalg` dialect may implement an interface that provides general queries about
+some of the dialect's library operations. These queries may provide things
+like: the number of parallel loops, the number of inputs and outputs, etc.
+
+Operation interfaces are defined by overriding the CRTP base class
+`OpInterface`. This class takes, as a template parameter, a `Traits` class that
+defines a `Concept` and a `Model` class. These classes provide an implementation
+of concept-based polymorphism, where the Concept defines a set of virtual
+methods that are overridden by the Model that is templated on the concrete
+operation type. It is important to note that these classes should be pure in
+that they contain no non-static data members. Operations that wish to override
+this interface should add the provided trait `OpInterface<..>::Trait` upon
+registration.
+
+```c++
+struct ExampleOpInterfaceTraits {
+ /// Define a base concept class that defines the virtual interface that needs
+ /// to be overridden.
+ struct Concept {
+ virtual ~Concept();
+ virtual unsigned getNumInputs(Operation *op) = 0;
+ };
+
+ /// Define a model class that specializes a concept on a given operation type.
+ template <typename OpT>
+ struct Model : public Concept {
+ /// Override the method to dispatch on the concrete operation.
+ unsigned getNumInputs(Operation *op) final {
+ return llvm::cast<OpT>(op).getNumInputs();
+ }
+ };
+};
+
+class ExampleOpInterface : public OpInterface<ExampleOpInterface,
+ ExampleOpInterfaceTraits> {
+public:
+ /// Use base class constructor to support LLVM-style casts.
+ using OpInterface<ExampleOpInterface, ExampleOpInterfaceTraits>::OpInterface;
+
+ /// The interface dispatches to 'getImpl()', an instance of the concept.
+ unsigned getNumInputs() {
+ return getImpl()->getNumInputs(getOperation());
+ }
+};
+```
+
+Once the interface has been defined, it is registered to an operation by adding
+the provided trait `ExampleOpInterface::Trait`. Using this interface is just
+like using any other derived operation type, i.e. casting:
+
+```c++
+/// When defining the operation, the interface is registered via the nested
+/// 'Trait' class provided by the 'OpInterface<>' base class.
+class MyOp : public Op<MyOp, ExampleOpInterface::Trait> {
+public:
+ /// The definition of the interface method on the derived operation.
+ unsigned getNumInputs() { return ...; }
+};
+
+/// Later, we can query if a specific operation (like 'MyOp') overrides the given
+/// interface.
+Operation *op = ...;
+if (ExampleOpInterface example = dyn_cast<ExampleOpInterface>(op))
+ llvm::errs() << "num inputs = " << example.getNumInputs() << "\n";
+```
+
+#### Utilizing the ODS Framework
+
+Operation interfaces require a bit of boilerplate to connect all of the pieces
+together. The ODS (Operation Definition Specification) framework provides
+simplified mechanisms for
+[defining interfaces](OpDefinitions.md#operation-interfaces).
+
+As an example, using the ODS framework would allow for defining the example
+interface above as:
+
+```tablegen
+def ExampleOpInterface : OpInterface<"ExampleOpInterface"> {
+ let description = [{
+ This is an example interface definition.
+ }];
+
+ let methods = [
+ InterfaceMethod<
+ "Get the number of inputs for the current operation.",
+ "unsigned", "getNumInputs"
+ >,
+ ];
+}
+```
diff --git a/mlir/docs/LangRef.md b/mlir/docs/LangRef.md
new file mode 100644
index 00000000000..da60b8b892e
--- /dev/null
+++ b/mlir/docs/LangRef.md
@@ -0,0 +1,1497 @@
+# MLIR Specification
+
+MLIR (Multi-Level IR) is a compiler intermediate representation with
+similarities to traditional three-address SSA representations (like
+[LLVM IR](http://llvm.org/docs/LangRef.html) or
+[SIL](https://github.com/apple/swift/blob/master/docs/SIL.rst)), but which
+introduces notions from polyhedral loop optimization as first-class concepts.
+This hybrid design is optimized to represent, analyze, and transform high level
+dataflow graphs as well as target-specific code generated for high performance
+data parallel systems. Beyond its representational capabilities, its single
+continuous design provides a framework to lower from dataflow graphs to
+high-performance target-specific code.
+
+This document defines and describes the key concepts in MLIR, and is intended to
+be a dry reference document - the [rationale documentation](Rationale.md),
+[glossary](Glossary.md), and other content are hosted elsewhere.
+
+MLIR is designed to be used in three different forms: a human-readable textual
+form suitable for debugging, an in-memory form suitable for programmatic
+transformations and analysis, and a compact serialized form suitable for storage
+and transport. The different forms all describe the same semantic content. This
+document describes the human-readable textual form.
+
+[TOC]
+
+## High-Level Structure
+
+MLIR is an
+[SSA-based](https://en.wikipedia.org/wiki/Static_single_assignment_form) IR,
+which means that values are defined before use and have scope defined by their
+dominance relations. Operations may produce zero or more results, and each is a
+distinct SSA value with its own type defined by the [type system](#type-system).
+
+The unit of code in MLIR is an [Operation](#operations). Operations allow for
+representing many different concepts: allocating buffers, producing views to
+transform them, target-independent arithmetic, target-specific operations, and
+even arbitrary user-defined high-level operations including the
+[Module](#module) and [Function](#functions) operations. Operations may contain
+[Regions](#regions) that represent a Control Flow Graph (CFG) of
+[Blocks](#blocks), that contain operations and end with a
+[terminator operation](#terminator-operations) (like branches).
+
+Here's an example of an MLIR module:
+
+```mlir
+// Compute A*B using an implementation of multiply kernel and print the
+// result using a TensorFlow op. The dimensions of A and B are partially
+// known. The shapes are assumed to match.
+func @mul(%A: tensor<100x?xf32>, %B: tensor<?x50xf32>) -> (tensor<100x50xf32>) {
+  // Compute the inner dimension of %A using the dim operation.
+  %n = dim %A, 1 : tensor<100x?xf32>
+
+  // Allocate addressable "buffers" and copy tensors %A and %B into them.
+  %A_m = alloc(%n) : memref<100x?xf32>
+  tensor_store %A to %A_m : memref<100x?xf32>
+
+  %B_m = alloc(%n) : memref<?x50xf32>
+  tensor_store %B to %B_m : memref<?x50xf32>
+
+  // Call function @multiply passing memrefs as arguments,
+  // and getting returned the result of the multiplication.
+  %C_m = call @multiply(%A_m, %B_m)
+          : (memref<100x?xf32>, memref<?x50xf32>) -> (memref<100x50xf32>)
+
+  dealloc %A_m : memref<100x?xf32>
+  dealloc %B_m : memref<?x50xf32>
+
+  // Load the buffer data into a higher level "tensor" value.
+  %C = tensor_load %C_m : memref<100x50xf32>
+  dealloc %C_m : memref<100x50xf32>
+
+  // Call TensorFlow built-in function to print the result tensor.
+  "tf.Print"(%C) {message = "mul result"}
+      : (tensor<100x50xf32>) -> (tensor<100x50xf32>)
+
+  return %C : tensor<100x50xf32>
+}
+
+// A function that multiplies two memrefs and returns the result.
+func @multiply(%A: memref<100x?xf32>, %B: memref<?x50xf32>)
+          -> (memref<100x50xf32>) {
+  // Compute the inner dimension of %A.
+  %n = dim %A, 1 : memref<100x?xf32>
+
+  // Allocate memory for the multiplication result.
+  %C = alloc() : memref<100x50xf32>
+
+  // Multiplication loop nest.
+  %zero = constant 0.0 : f32
+  affine.for %i = 0 to 100 {
+    affine.for %j = 0 to 50 {
+      store %zero, %C[%i, %j] : memref<100x50xf32>
+      affine.for %k = 0 to %n {
+        %a_v  = load %A[%i, %k] : memref<100x?xf32>
+        %b_v  = load %B[%k, %j] : memref<?x50xf32>
+        %prod = mulf %a_v, %b_v : f32
+        %c_v  = load %C[%i, %j] : memref<100x50xf32>
+        %sum  = addf %c_v, %prod : f32
+        store %sum, %C[%i, %j] : memref<100x50xf32>
+      }
+    }
+  }
+  return %C : memref<100x50xf32>
+}
+```
+
+## Notation
+
+MLIR has a simple and unambiguous grammar, allowing it to reliably round-trip
+through a textual form. This is important for development of the compiler - e.g.
+for understanding the state of code as it is being transformed and writing test
+cases.
+
+This document describes the grammar using
+[Extended Backus-Naur Form (EBNF)](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form).
+
+This is the EBNF grammar used in this document, presented in yellow boxes.
+
+```
+alternation ::= expr0 | expr1 | expr2 // Either expr0 or expr1 or expr2.
+sequence ::= expr0 expr1 expr2 // Sequence of expr0 expr1 expr2.
+repetition0 ::= expr* // 0 or more occurrences.
+repetition1 ::= expr+ // 1 or more occurrences.
+optionality ::= expr? // 0 or 1 occurrence.
+grouping ::= (expr) // Everything inside parens is grouped together.
+literal ::= `abcd` // Matches the literal `abcd`.
+```
+
+Code examples are presented in blue boxes.
+
+```mlir
+// This is an example use of the grammar above:
+// This matches things like: ba, bana, boma, banana, banoma, bomana...
+example ::= `b` (`an` | `om`)* `a`
+```
+
+### Common syntax
+
+The following core grammar productions are used in this document:
+
+```
+// TODO: Clarify the split between lexing (tokens) and parsing (grammar).
+digit ::= [0-9]
+hex_digit ::= [0-9a-fA-F]
+letter ::= [a-zA-Z]
+id-punct ::= [$._-]
+
+integer-literal ::= decimal-literal | hexadecimal-literal
+decimal-literal ::= digit+
+hexadecimal-literal ::= `0x` hex_digit+
+float-literal ::= [-+]?[0-9]+[.][0-9]*([eE][-+]?[0-9]+)?
+string-literal ::= `"` [^"\n\f\v\r]* `"` TODO define escaping rules
+```
+
+Not listed here, but MLIR does support comments. They use standard BCPL syntax,
+starting with a `//` and going until the end of the line.
+
+### Identifiers and keywords
+
+Syntax:
+
+```
+// Identifiers
+bare-id ::= (letter|[_]) (letter|digit|[_$.])*
+bare-id-list ::= bare-id (`,` bare-id)*
+ssa-id ::= `%` suffix-id
+suffix-id ::= (digit+ | ((letter|id-punct) (letter|id-punct|digit)*))
+
+symbol-ref-id ::= `@` (suffix-id | string-literal)
+ssa-id-list ::= ssa-id (`,` ssa-id)*
+
+// Uses of an SSA value, e.g. in an operand list to an operation.
+ssa-use ::= ssa-id
+ssa-use-list ::= ssa-use (`,` ssa-use)*
+```
+
+Identifiers name entities such as SSA values, types and functions, and are
+chosen by the writer of MLIR code. Identifiers may be descriptive (e.g.
+`%batch_size`, `@matmul`), or may be non-descriptive when they are
+auto-generated (e.g. `%23`, `@func42`). Identifier names for SSA values may be
+used in an MLIR text file but are not persisted as part of the IR - the printer
+will give them anonymous names like `%42`.
+
+MLIR guarantees identifiers never collide with keywords by prefixing identifiers
+with a sigil (e.g. `%`, `#`, `@`, `^`, `!`). In certain unambiguous contexts
+(e.g. affine expressions), identifiers are not prefixed, for brevity. New
+keywords may be added to future versions of MLIR without danger of collision
+with existing identifiers.
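+
+For illustration, the following sketch uses each sigil; the names themselves
+are arbitrary:
+
+```mlir
+#map42 = (d0) -> (d0 + 1)         // `#` introduces an attribute/map alias.
+!avx_m128 = type vector<4 x f32>  // `!` introduces a type alias or dialect type.
+
+func @plus_one(%x: index) -> index {  // `@` names a symbol, `%` an SSA value.
+  br ^next                            // `^` names a block.
+^next:
+  %y = affine.apply #map42(%x)
+  return %y : index
+}
+```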
+
+The scope of SSA values is defined based on the standard definition of
+[dominance](https://en.wikipedia.org/wiki/Dominator_\(graph_theory\)). Argument
+identifiers in mapping functions are in scope for the mapping body. Function
+identifiers and mapping identifiers are visible across the entire module.
+
+## Dialects
+
+Dialects are the mechanism by which to engage with and extend the MLIR
+ecosystem. They allow for defining new [operations](#operations), as well as
+[attributes](#attributes) and [types](#type-system). Each dialect is given a
+unique `namespace` that is prefixed to each defined attribute/operation/type.
+For example, the [Affine dialect](Dialects/Affine.md) defines the namespace:
+`affine`.
+
+MLIR allows for multiple dialects, even those outside of the main tree, to
+co-exist together within one module. Dialects are produced and consumed by
+certain passes. MLIR provides a [framework](DialectConversion.md) to convert
+between, and within, different dialects.
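+
+For illustration, the sketch below mixes operations from two dialects in one
+function:
+
+```mlir
+func @dialect_mix(%buf: memref<10xf32>) {
+  // `affine.for` carries the `affine` namespace prefix; operations from the
+  // Standard dialect, like `load` and `return`, print without a prefix.
+  affine.for %i = 0 to 10 {
+    %v = load %buf[%i] : memref<10xf32>
+  }
+  return
+}
+```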
+
+A few of the dialects supported by MLIR:
+
+* [Affine dialect](Dialects/Affine.md)
+* [GPU dialect](Dialects/GPU.md)
+* [LLVM dialect](Dialects/LLVM.md)
+* [SPIR-V dialect](Dialects/SPIR-V.md)
+* [Standard dialect](Dialects/Standard.md)
+* [Vector dialect](Dialects/Vector.md)
+
+### Target specific operations
+
+Dialects provide a modular way in which targets can expose target-specific
+operations directly through to MLIR. As an example, some targets go through
+LLVM. LLVM has a rich set of intrinsics for certain target-independent
+operations (e.g. addition with overflow check) as well as providing access to
+target-specific operations for the targets it supports (e.g. vector permutation
+operations). LLVM intrinsics in MLIR are represented via operations that start
+with an "llvm." name.
+
+Example:
+
+```mlir
+// LLVM: %x = call {i16, i1} @llvm.sadd.with.overflow.i16(i16 %a, i16 %b)
+%x:2 = "llvm.sadd.with.overflow.i16"(%a, %b) : (i16, i16) -> (i16, i1)
+```
+
+These operations only work when targeting LLVM as a backend (e.g. for CPUs and
+GPUs), and are required to align with the LLVM definition of these intrinsics.
+
+## Operations
+
+Syntax:
+
+```
+operation ::= op-result-list? (generic-operation | custom-operation)
+ trailing-location?
+generic-operation ::= string-literal '(' ssa-use-list? ')' attribute-dict?
+ `:` function-type
+custom-operation ::= bare-id custom-operation-format
+op-result-list ::= op-result (`,` op-result)* `=`
+op-result ::= ssa-id (`:` integer-literal)?
+successor-list ::= successor (`,` successor)*
+successor ::= caret-id (`:` bb-arg-list)?
+region-list ::= region (`,` region)*
+trailing-location ::= (`loc` `(` location `)`)?
+```
+
+MLIR introduces a uniform concept called _operations_ to enable describing many
+different levels of abstractions and computations. Operations in MLIR are fully
+extensible (there is no fixed list of operations) and have application-specific
+semantics. For example, MLIR supports
+[target-independent operations](Dialects/Standard.md#memory-operations),
+[affine operations](Dialects/Affine.md), and
+[target-specific machine operations](#target-specific-operations).
+
+The internal representation of an operation is simple: an operation is
+identified by a unique string (e.g. `dim`, `tf.Conv2d`, `x86.repmovsb`,
+`ppc.eieio`, etc), can return zero or more results, take zero or more SSA
+operands, may have zero or more attributes, may have zero or more successors,
+and zero or more enclosed [regions](#regions). The generic printing form
+includes all these elements literally, with a function type to indicate the
+types of the results and operands.
+
+Example:
+
+```mlir
+// An operation that produces two results.
+// The results of %result can be accessed via the <name> `#` <opNo> syntax.
+%result:2 = "foo_div"() : () -> (f32, i32)
+
+// Pretty form that defines a unique name for each result.
+%foo, %bar = "foo_div"() : () -> (f32, i32)
+
+// Invoke a TensorFlow function called tf.scramble with two inputs
+// and an attribute "fruit".
+%2 = "tf.scramble"(%result#0, %bar) {fruit = "banana"} : (f32, i32) -> f32
+```
+
+In addition to the basic syntax above, dialects may register known operations.
+This allows those dialects to support _custom assembly form_ for parsing and
+printing operations. In the operation sets listed below, we show both forms.
+
+### Terminator Operations
+
+These are a special category of operations that *must* terminate a block, e.g.
+[branches](Dialects/Standard.md#terminator-operations). These operations may
+also have a list of successors ([blocks](#blocks) and their arguments).
+
+Example:
+
+```mlir
+// Branch to ^bb1 or ^bb2 depending on the condition %cond.
+// Pass value %v to ^bb2, but not to ^bb1.
+"cond_br"(%cond)[^bb1, ^bb2(%v : index)] : (i1) -> ()
+```
+
+### Module
+
+```
+module ::= `module` symbol-ref-id? (`attributes` attribute-dict)? region
+```
+
+An MLIR module represents an opaque top-level container operation. It contains
+a single region containing a single block that can be composed of any
+operations. Operations within this region must not implicitly capture values
+defined outside the module. Modules have an optional symbol name that can be
+used to refer to them in operations.
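+
+For example, a module carrying a symbol name and an attribute dictionary might
+look like the following sketch (the `foo.version` attribute name is
+illustrative):
+
+```mlir
+module @mymodule attributes {foo.version = 1 : i32} {
+  func @helper()
+}
+```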
+
+### Functions
+
+An MLIR Function is an operation with a name containing one [region](#regions).
+The region of a function is not allowed to implicitly capture values defined
+outside of the function, and all external references must use function arguments
+or attributes that establish a symbolic connection (e.g. symbols referenced by
+name via a string attribute like [SymbolRefAttr](#symbol-reference-attribute)):
+
+```
+function ::= `func` function-signature function-attributes? function-body?
+
+function-signature ::= symbol-ref-id `(` argument-list `)`
+ (`->` function-result-list)?
+
+argument-list ::= (named-argument (`,` named-argument)*) | /*empty*/
+argument-list ::= (type attribute-dict? (`,` type attribute-dict?)*) | /*empty*/
+named-argument ::= ssa-id `:` type attribute-dict?
+
+function-result-list ::= function-result-list-parens
+ | non-function-type
+function-result-list-parens ::= `(` `)`
+ | `(` function-result-list-no-parens `)`
+function-result-list-no-parens ::= function-result (`,` function-result)*
+function-result ::= type attribute-dict?
+
+function-attributes ::= `attributes` attribute-dict
+function-body ::= region
+```
+
+An external function declaration (used when referring to a function declared in
+some other module) has no body. While the MLIR textual form provides a nice
+inline syntax for function arguments, they are internally represented as "block
+arguments" to the first block in the region.
+
+Only dialect attribute names may be specified in the attribute dictionaries for
+function arguments, results, or the function itself.
+
+Examples:
+
+```mlir
+// External function definitions.
+func @abort()
+func @scribble(i32, i64, memref<? x 128 x f32, #layout_map0>) -> f64
+
+// A function that returns its argument twice:
+func @count(%x: i64) -> (i64, i64)
+  attributes {fruit = "banana"} {
+ return %x, %x: i64, i64
+}
+
+// A function with an argument attribute
+func @example_fn_arg(%x: i32 {swift.self = unit})
+
+// A function with a result attribute
+func @example_fn_result() -> (f64 {dialectName.attrName = 0 : i64})
+
+// A function with an attribute
+func @example_fn_attr() attributes {dialectName.attrName = false}
+```
+
+## Blocks
+
+Syntax:
+
+```
+block ::= block-label operation+
+block-label ::= block-id block-arg-list? `:`
+block-id ::= caret-id
+caret-id ::= `^` suffix-id
+ssa-id-and-type ::= ssa-id `:` type
+
+// Non-empty list of names and types.
+ssa-id-and-type-list ::= ssa-id-and-type (`,` ssa-id-and-type)*
+
+block-arg-list ::= `(` ssa-id-and-type-list? `)`
+```
+
+A [block](https://en.wikipedia.org/wiki/Basic_block) is a sequential list of
+operations without control flow (calls are not considered control flow for this
+purpose) that are executed from top to bottom. The last operation in a block is
+a [terminator operation](#terminator-operations), which ends the block.
+
+Blocks in MLIR take a list of block arguments, which represent SSA PHI nodes in
+a functional notation. The arguments are defined by the block, and values are
+provided for these block arguments by branches that go to the block.
+
+Here is a simple example function showing branches, returns, and block
+arguments:
+
+```mlir
+func @simple(i64, i1) -> i64 {
+^bb0(%a: i64, %cond: i1): // Code dominated by ^bb0 may refer to %a
+ cond_br %cond, ^bb1, ^bb2
+
+^bb1:
+ br ^bb3(%a: i64) // Branch passes %a as the argument
+
+^bb2:
+ %b = addi %a, %a : i64
+ br ^bb3(%b: i64) // Branch passes %b as the argument
+
+// ^bb3 receives an argument, named %c, from predecessors
+// and passes it on to bb4 twice.
+^bb3(%c: i64):
+ br ^bb4(%c, %c : i64, i64)
+
+^bb4(%d : i64, %e : i64):
+ %0 = addi %d, %e : i64
+ return %0 : i64
+}
+```
+
+**Context:** The "block argument" representation eliminates a number of special
+cases from the IR compared to traditional "PHI nodes are operations" SSA IRs
+(like LLVM). For example, the
+[parallel copy semantics](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.524.5461&rep=rep1&type=pdf)
+of SSA is immediately apparent, and function arguments are no longer a special
+case: they become arguments to the entry block
+[[more rationale](Rationale.md#block-arguments-vs-phi-nodes)].
+
+## Regions
+
+### Definition
+
+A region is a CFG of MLIR [Blocks](#blocks). Regions serve to group semantically
+connected blocks, where the semantics is not imposed by the IR. Instead, the
+containing operation defines the semantics of the regions it contains. Regions
+do not have a name or an address, only the blocks contained in a region do.
+Regions are meaningless outside of the containing entity and have no type or
+attributes.
+
+The first block in the region cannot be a successor of any other block. The
+syntax for the region is as follows:
+
+```
+region ::= `{` block* `}`
+```
+
+The function body is an example of a region: it consists of a CFG of blocks and
+has additional semantic restrictions that other types of regions may not have
+(block terminators must either branch to a different block, or return from a
+function where the types of the `return` arguments must match the result types
+of the function signature).
+
+### Control and Value Scoping
+
+Regions provide nested control isolation: it is impossible to branch to a block
+within a region from outside it or to branch from within a region to a block
+outside it. Similarly, it provides a natural scoping for value visibility: SSA
+values defined in a region don't escape to the enclosing region, if any. By
+default, a region can reference values defined outside of the region whenever it
+would have been legal to use them as operands to the enclosing operation.
+
+Example:
+
+```mlir
+func @accelerator_compute(i64, i1) -> i64 {
+^bb0(%a: i64, %cond: i1): // Code dominated by ^bb0 may refer to %a
+ cond_br %cond, ^bb1, ^bb2
+
+^bb1:
+  // This def for %value does not dominate ^bb2.
+  %value = "op.convert"(%a) : (i64) -> i64
+  br ^bb3(%a: i64)    // Branch passes %a as the argument.
+
+^bb2:
+  "accelerator.launch"() {
+  ^bb0:
+    // Region of code nested under "accelerator.launch"; it can reference %a
+    // but not %value.
+    %new_value = "accelerator.do_something"(%a) : (i64) -> i64
+  }
+  // %new_value cannot be referenced outside of the region.
+  br ^bb3(%a: i64)
+
+^bb3(%result: i64):
+  ...
+}
+```
+
+This can be further restricted using the custom verifier associated with the
+enclosing operation, for example, disallowing references to values defined
+outside the region completely.
+
+### Control Flow
+
+Regions are Single-Entry-Multiple-Exit (SEME). This means that control can only
+flow into the first block of the region, but can flow out of the region at the
+end of any of the contained blocks (this behavior is similar to that of a
+function body in most programming languages). When exiting a region, control is
+returned to the enclosing operation.
+
+The enclosing operation determines the way in which control is transmitted into
+the entry block of a Region. The successor to a region’s exit points may not
+necessarily exist: for example a call to a function that does not return.
+Concurrent or asynchronous execution of regions is unspecified. Operations may
+define specific rules of execution, e.g. sequential loops or switch cases.
+
+Control may also flow from one region into another region of the enclosing
+operation. If an operation has multiple regions, the semantics of the operation
+defines into which regions control flows and in which order, if any. An
+operation may transmit control into regions that were specified by other
+operations, in particular those that defined the values the given operation
+uses. Thus, such operations can be treated opaquely in the enclosing control
+flow graph, providing a level of control flow isolation similar to that of the
+call operation.
+
+#### Closure
+
+Regions allow defining an operation that creates a closure, for example by
+“boxing” the body of the region into a value it produces. It remains up to the
+operation to define its semantics. Note that if an operation triggers
+asynchronous execution of the region, it is the responsibility of the
+operation's caller to wait for the region to be executed, guaranteeing that any
+directly used values remain live.
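+
+As a sketch, an operation could produce a closure value by “boxing” its region;
+the `fn.box` operation, `fn.yield` terminator, and `!fn.closure` type below are
+hypothetical, not part of any existing dialect:
+
+```mlir
+// Boxes the region into a callable value; executing the closure later (via a
+// matching hypothetical op) would run the region body.
+%closure = "fn.box"() {
+^bb0(%x: i64):
+  "fn.yield"(%x) : (i64) -> ()
+} : () -> !fn.closure
+```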
+
+### Arguments and Results
+
+The arguments of the first block of a region are treated as arguments of the
+region. The source of these arguments is defined by the semantics of the parent
+operation. They may correspond to some of the values the operation itself uses.
+
+Regions produce a (possibly empty) list of values. The operation semantics
+defines the relation between the region results and the operation results.
+
+## Type System
+
+Each SSA value in MLIR has a type defined by the type system below. There are a
+number of primitive types (like integers) and also aggregate types for tensors
+and memory buffers. MLIR [standard types](#standard-types) do not include
+structures, arrays, or dictionaries.
+
+MLIR has an open type system (i.e. there is no fixed list of types), and types
+may have application-specific semantics. For example, MLIR supports a set of
+[dialect types](#dialect-types).
+
+```
+type ::= type-alias | dialect-type | standard-type
+
+type-list-no-parens ::= type (`,` type)*
+type-list-parens ::= `(` `)`
+ | `(` type-list-no-parens `)`
+
+// This is a common way to refer to an SSA value with a specified type.
+ssa-use-and-type ::= ssa-use `:` type
+
+// Non-empty list of names and types.
+ssa-use-and-type-list ::= ssa-use-and-type (`,` ssa-use-and-type)*
+```
+
+### Type Aliases
+
+```
+type-alias-def ::= '!' alias-name '=' 'type' type
+type-alias ::= '!' alias-name
+```
+
+MLIR supports defining named aliases for types. A type alias is an identifier
+that can be used in the place of the type that it defines. These aliases *must*
+be defined before their uses. Alias names may not contain a '.', since those
+names are reserved for [dialect types](#dialect-types).
+
+Example:
+
+```mlir
+!avx_m128 = type vector<4 x f32>
+
+// Using the original type.
+"foo"(%x) : (vector<4 x f32>) -> ()
+
+// Using the type alias.
+"foo"(%x) : (!avx_m128) -> ()
+```
+
+### Dialect Types
+
+Similarly to operations, dialects may define custom extensions to the type
+system.
+
+```
+dialect-namespace ::= bare-id
+
+opaque-dialect-item ::= dialect-namespace '<' string-literal '>'
+
+pretty-dialect-item ::= dialect-namespace '.' pretty-dialect-item-lead-ident
+ pretty-dialect-item-body?
+
+pretty-dialect-item-lead-ident ::= '[A-Za-z][A-Za-z0-9._]*'
+pretty-dialect-item-body ::= '<' pretty-dialect-item-contents+ '>'
+pretty-dialect-item-contents ::= pretty-dialect-item-body
+ | '(' pretty-dialect-item-contents+ ')'
+ | '[' pretty-dialect-item-contents+ ']'
+ | '{' pretty-dialect-item-contents+ '}'
+ | '[^[<({>\])}\0]+'
+
+dialect-type ::= '!' opaque-dialect-item
+dialect-type ::= '!' pretty-dialect-item
+```
+
+Dialect types can be specified in a verbose form, e.g. like this:
+
+```mlir
+// LLVM type that wraps around llvm IR types.
+!llvm<"i32*">
+
+// Tensor flow string type.
+!tf.string
+
+// Complex type
+!foo<"something<abcd>">
+
+// Even more complex type
+!foo<"something<a%%123^^^>>>">
+```
+
+Dialect types that are simple enough can use the pretty format, which is a
+lighter weight syntax that is equivalent to the above forms:
+
+```mlir
+// Tensor flow string type.
+!tf.string
+
+// Complex type
+!foo.something<abcd>
+```
+
+Sufficiently complex dialect types are required to use the verbose form for
+generality. For example, the more complex type shown above wouldn't be valid in
+the lighter syntax: `!foo.something<a%%123^^^>>>` because it contains characters
+that are not allowed in the lighter syntax, as well as unbalanced `<>`
+characters.
+
+See [here](DefiningAttributesAndTypes.md) to learn how to define dialect types.
+
+### Standard Types
+
+Standard types are a core set of [dialect types](#dialect-types) that are
+defined in a builtin dialect and thus available to all users of MLIR.
+
+```
+standard-type ::= complex-type
+ | float-type
+ | function-type
+ | index-type
+ | integer-type
+ | memref-type
+ | none-type
+ | tensor-type
+ | tuple-type
+ | vector-type
+```
+
+#### Complex Type
+
+Syntax:
+
+```
+complex-type ::= `complex` `<` type `>`
+```
+
+The value of `complex` type represents a complex number with a parameterized
+element type, which is composed of a real and imaginary value of that element
+type. The element must be a floating point or integer scalar type.
+
+Examples:
+
+```mlir
+complex<f32>
+complex<i32>
+```
+
+#### Floating Point Types
+
+Syntax:
+
+```
+// Floating point.
+float-type ::= `f16` | `bf16` | `f32` | `f64`
+```
+
+MLIR supports floating point types of the widely used widths indicated above.
+
+#### Function Type
+
+Syntax:
+
+```
+// MLIR functions can return multiple values.
+function-result-type ::= type-list-parens
+ | non-function-type
+
+function-type ::= type-list-parens `->` function-result-type
+```
+
+MLIR supports first-class functions: for example, the
+[`constant` operation](Dialects/Standard.md#constant-operation) produces the
+address of a function as an SSA value. This SSA value may be passed to and
+returned from functions, merged across control flow boundaries with
+[block arguments](#blocks), and called with the
+[`call_indirect` operation](Dialects/Standard.md#call-indirect-operation).
+
+Function types are also used to indicate the arguments and results of
+[operations](#operations).
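+
+For instance, a function value can be produced and then called indirectly; the
+function `@add` and values `%a`, `%b` are assumed to be defined elsewhere:
+
+```mlir
+// Produce the address of @add as an SSA value of function type.
+%f = constant @add : (i64, i64) -> i64
+
+// Call the function indirectly through the SSA value.
+%result = call_indirect %f(%a, %b) : (i64, i64) -> i64
+```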
+
+#### Index Type
+
+Syntax:
+
+```
+// Target word-sized integer.
+index-type ::= `index`
+```
+
+The `index` type is a signless integer whose size is equal to the natural
+machine word of the target ([rationale](Rationale.md#signless-types)) and is
+used by the affine constructs in MLIR. Unlike fixed-size integers, it cannot be
+used as an element of vector, tensor or memref type
+([rationale](Rationale.md#index-type-disallowed-in-vectortensormemref-types)).
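+
+For example, `index` values typically arise from shape queries and serve as
+subscripts (a sketch):
+
+```mlir
+func @last_row_first_elem(%A: memref<?x42xf32>) -> f32 {
+  %c0 = constant 0 : index
+  %c1 = constant 1 : index
+  // `dim` returns the dynamic size of dimension 0 as an `index` value.
+  %n = dim %A, 0 : memref<?x42xf32>
+  %last = subi %n, %c1 : index
+  %v = load %A[%last, %c0] : memref<?x42xf32>
+  return %v : f32
+}
+```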
+
+**Rationale:** integers of platform-specific bit widths are practical for
+expressing sizes, dimensionalities and subscripts.
+
+#### Integer Type
+
+Syntax:
+
+```
+// Sized integers like i1, i4, i8, i16, i32.
+integer-type ::= `i` [1-9][0-9]*
+```
+
+MLIR supports arbitrary precision integer types. Integer types are signless, but
+have a designated width.
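+
+For example (a sketch):
+
+```mlir
+// Signless integer values of arbitrary widths.
+%flag   = constant 1 : i1
+%narrow = constant 5 : i4
+%wide   = constant 42 : i13
+%sum    = addi %wide, %wide : i13
+```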
+
+**Rationale:** low precision integers (like `i2`, `i4` etc) are useful for
+low-precision inference chips, and arbitrary precision integers are useful for
+hardware synthesis (where a 13 bit multiplier is a lot cheaper/smaller than a 16
+bit one).
+
+TODO: Need to decide on a representation for quantized integers
+([initial thoughts](Rationale.md#quantized-integer-operations)).
+
+#### Memref Type
+
+Syntax:
+
+```
+memref-type ::= ranked-memref-type | unranked-memref-type
+
+ranked-memref-type ::= `memref` `<` dimension-list-ranked tensor-memref-element-type
+                       (`,` layout-specification)? (`,` memory-space)? `>`
+
+unranked-memref-type ::= `memref` `<*x` tensor-memref-element-type
+ (`,` memory-space)? `>`
+
+stride-list ::= `[` (dimension (`,` dimension)*)? `]`
+strided-layout ::= `offset:` dimension `,` `strides: ` stride-list
+layout-specification ::= semi-affine-map | strided-layout
+memory-space ::= integer-literal /* | TODO: address-space-id */
+```
+
+A `memref` type is a reference to a region of memory (similar to a buffer
+pointer, but more powerful). The buffer pointed to by a memref can be allocated,
+aliased and deallocated. A memref can be used to read and write data from/to the
+memory region which it references. Memref types use the same shape specifier as
+tensor types. Note that `memref<f32>`, `memref<0 x f32>`, `memref<1 x 0 x f32>`,
+and `memref<0 x 1 x f32>` are all different types.
+
+A `memref` is allowed to have an unknown rank (e.g. `memref<*xf32>`). The
+purpose of unranked memrefs is to allow external library functions to receive
+memref arguments of any rank without versioning the functions based on the rank.
+Other uses of this type are disallowed or will have undefined behavior.
+
+##### Codegen of Unranked Memref
+
+Using unranked memrefs in codegen besides the case mentioned above is highly
+discouraged. Codegen is concerned with generating loop nests and specialized
+instructions for high performance; unranked memrefs are concerned with hiding
+the rank and thus the number of enclosing loops required to iterate over the
+data.
+However, if there is a need to code-gen unranked memref, one possible path is to
+cast into a static ranked type based on the dynamic rank. Another possible path
+is to emit a single while loop conditioned on a linear index and perform
+delinearization of the linear index to a dynamic array containing the (unranked)
+indices. While this is possible, it is expected to not be a good idea to perform
+this during codegen as the cost of the translations is expected to be
+prohibitive and optimizations at this level are not expected to be worthwhile.
+If expressiveness is the main concern, irrespective of performance, passing
+unranked memrefs to an external C++ library and implementing rank-agnostic logic
+there is expected to be significantly simpler.
+
+Unranked memrefs may provide expressiveness gains in the future and help bridge
+the gap with unranked tensors. Unranked memrefs are not expected to be exposed
+to codegen, but one may query the rank of an unranked memref (a special op will
+be needed for this purpose) and perform a switch and cast to a ranked memref as
+a prerequisite to codegen.
+
+Example:
+
+```mlir
+// With static ranks, we need a function for each possible argument type.
+%A = alloc() : memref<16x32xf32>
+%B = alloc() : memref<16x32x64xf32>
+call @helper_2D(%A) : (memref<16x32xf32>) -> ()
+call @helper_3D(%B) : (memref<16x32x64xf32>) -> ()
+
+// With unknown rank, the functions can be unified under one unranked type.
+%A = alloc() : memref<16x32xf32>
+%B = alloc() : memref<16x32x64xf32>
+// Remove rank info.
+%A_u = memref_cast %A : memref<16x32xf32> to memref<*xf32>
+%B_u = memref_cast %B : memref<16x32x64xf32> to memref<*xf32>
+// Call the same function with dynamic ranks.
+call @helper(%A_u) : (memref<*xf32>) -> ()
+call @helper(%B_u) : (memref<*xf32>) -> ()
+```
+
+The core syntax and representation of a layout specification is a
+[semi-affine map](Dialects/Affine.md#semi-affine-maps). Additionally, syntactic
+sugar is supported to make certain layout specifications more intuitive to read.
+For the moment, a `memref` supports parsing a strided form which is converted to
+a semi-affine map automatically.
+
+The memory space of a memref is specified by a target-specific integer index. If
+no memory space is specified, then the default memory space (0) is used. The
+default space is target specific but always at index 0.
+
+TODO: MLIR will eventually have target-dialects which allow symbolic use of
+memory hierarchy names (e.g. L3, L2, L1, ...) but we have not spec'd the details
+of that mechanism yet. Until then, this document pretends that it is valid to
+refer to these memories by `bare-id`.
+
+Notionally, the dynamic value of a memref includes the address of the allocated
+buffer as well as the symbols referred to by its shape, layout map, and index
+maps.
+
+Examples of memref static types:
+
+```mlir
+// Identity index/layout map
+#identity = (d0, d1) -> (d0, d1)
+
+// Column major layout.
+#col_major = (d0, d1, d2) -> (d2, d1, d0)
+
+// A 2-d tiled layout with tiles of size 128 x 256.
+#tiled_2d_128x256 = (d0, d1) -> (d0 div 128, d1 div 256, d0 mod 128, d1 mod 256)
+
+// A tiled data layout with non-constant tile sizes.
+#tiled_dynamic = (d0, d1)[s0, s1] -> (d0 floordiv s0, d1 floordiv s1,
+ d0 mod s0, d1 mod s1)
+
+// A layout that yields a padding of two at either end of the minor dimension.
+#padded = (d0, d1) -> (d0, (d1 + 2) floordiv 2, (d1 + 2) mod 2)
+
+
+// The dimension list "16x32" defines the following 2D index space:
+//
+// { (i, j) : 0 <= i < 16, 0 <= j < 32 }
+//
+memref<16x32xf32, #identity, memspace0>
+
+// The dimension list "16x4x?" defines the following 3D index space:
+//
+// { (i, j, k) : 0 <= i < 16, 0 <= j < 4, 0 <= k < N }
+//
+// where N is a symbol which represents the runtime value of the size of
+// the third dimension.
+//
+// %N here binds to the size of the third dimension.
+%A = alloc(%N) : memref<16x4x?xf32, #col_major, memspace0>
+
+// A 2-d dynamic shaped memref that also has a dynamically sized tiled layout.
+// The memref index space is of size %M x %N, while %B1 and %B2 bind to the
+// symbols s0, s1 respectively of the layout map #tiled_dynamic. Data tiles of
+// size %B1 x %B2 in the logical space will be stored contiguously in memory.
+// The allocation size will be (%M ceildiv %B1) * %B1 * (%N ceildiv %B2) * %B2
+// f32 elements.
+%T = alloc(%M, %N) [%B1, %B2] : memref<?x?xf32, #tiled_dynamic>
+
+// A memref that has a two-element padding at either end. The allocation size
+// will fit 16 * 68 float elements of data.
+%P = alloc() : memref<16x64xf32, #padded>
+
+// Affine map with symbol 's0' used as offset for the first dimension.
+#imapS = (d0, d1) [s0] -> (d0 + s0, d1)
+// Allocate memref and bind the following symbols:
+// '%n' is bound to the dynamic second dimension of the memref type.
+// '%o' is bound to the symbol 's0' in the affine map of the memref type.
+%n = ...
+%o = ...
+%A = alloc(%n)[%o] : memref<16x?xf32, #imapS>
+```
+
+##### Index Space
+
+A memref dimension list defines an index space within which the memref can be
+indexed to access data.
+
+##### Index
+
+Data is accessed through a memref type using a multidimensional index into the
+multidimensional index space defined by the memref's dimension list.
+
+Examples:
+
+```mlir
+// Allocates a memref with 2D index space:
+// { (i, j) : 0 <= i < 16, 0 <= j < 32 }
+%A = alloc() : memref<16x32xf32, #imapA, memspace0>
+
+// Loads data from memref '%A' using a 2D index: (%i, %j)
+%v = load %A[%i, %j] : memref<16x32xf32, #imapA, memspace0>
+```
+
+##### Index Map
+
+An index map is a one-to-one
+[semi-affine map](Dialects/Affine.md#semi-affine-maps) that transforms a
+multidimensional index from one index space to another. For example, the
+following figure shows an index map which maps a 2-dimensional index from a 2x2
+index space to a 3x3 index space, using symbols `S0` and `S1` as offsets.
+
+![Index Map Example](includes/img/index-map.svg)
+
+The number of domain dimensions and range dimensions of an index map can be
+different, but must match the number of dimensions of the input and output index
+spaces on which the map operates. The index space is always non-negative and
+integral. In addition, an index map must specify the size of each of its range
+dimensions onto which it maps. Index map symbols must be listed in order with
+symbols for dynamic dimension sizes first, followed by other required symbols.
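+
+For example, the mapping in the figure above can be sketched (eliding the range
+dimension sizes it must also specify) as:
+
+```mlir
+// Maps a 2-d index (d0, d1) to (d0 + s0, d1 + s1), using symbols s0 and s1
+// as offsets into the larger target index space.
+#index_map_offset = (d0, d1) [s0, s1] -> (d0 + s0, d1 + s1)
+```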
+
+##### Layout Map
+
+A layout map is a [semi-affine map](Dialects/Affine.md#semi-affine-maps) which
+encodes logical to physical index space mapping, by mapping input dimensions to
+their ordering from most-major (slowest varying) to most-minor (fastest
+varying). Therefore, an identity layout map corresponds to a row-major layout.
+Identity layout maps do not contribute to the MemRef type identification and are
+discarded on construction. That is, a type with an explicit identity map such as
+`memref<?x?xf32, (i,j)->(i,j)>` is strictly the same as the one without a
+layout map, `memref<?x?xf32>`.
+
+Layout map examples:
+
+```mlir
+// MxN matrix stored in row major layout in memory:
+#layout_map_row_major = (i, j) -> (i, j)
+
+// MxN matrix stored in column major layout in memory:
+#layout_map_col_major = (i, j) -> (j, i)
+
+// MxN matrix stored in a 2-d blocked/tiled layout with 64x64 tiles.
+#layout_tiled = (i, j) -> (i floordiv 64, j floordiv 64, i mod 64, j mod 64)
+```
+
+##### Affine Map Composition
+
+A memref specifies a semi-affine map composition as part of its type. A
+semi-affine map composition is a composition of semi-affine maps beginning with
+zero or more index maps, and ending with a layout map. The composition must be
+conformant: the number of dimensions of the range of one map must match the
+number of dimensions of the domain of the next map in the composition.
+
+The semi-affine map composition specified in the memref type maps from accesses
+used to index the memref in load/store operations to other index spaces (i.e.
+logical to physical index mapping). Each of the
+[semi-affine maps](Dialects/Affine.md) and thus its composition is required to
+be one-to-one.
+
+The semi-affine map composition can be used in dependence analysis, memory
+access pattern analysis, and for performance optimizations like vectorization,
+copy elision and in-place updates. If an affine map composition is not specified
+for the memref, the identity affine map is assumed.
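+
+As a sketch (the map names here are illustrative), a conformant composition of
+an index map with a layout map might look like:
+
+```mlir
+// An index map adding a symbolic offset to the first dimension.
+#offset_map = (d0, d1) [s0] -> (d0 + s0, d1)
+// A row-major layout map.
+#layout_map = (i, j) -> (i, j)
+
+// Accesses are mapped through #offset_map first, then #layout_map. The 2-d
+// range of #offset_map matches the 2-d domain of #layout_map, so the
+// composition is conformant.
+memref<16x32xf32, #offset_map, #layout_map>
+```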
+
+##### Strided MemRef
+
+A memref may specify strides as part of its type. A stride specification is a
+list of integer values that are either static or `?` (dynamic case). Strides
+encode the distance, in number of elements, in (linear) memory between
+successive entries along a particular dimension. A stride specification is
+syntactic sugar for an equivalent strided memref representation using
+semi-affine maps. For example, `memref<42x16xf32, offset: 33, strides: [1, 64]>`
+specifies a non-contiguous memory region of `42` by `16` `f32` elements such
+that:
+
+1. the minimal size of the enclosing memory region must be `33 + 42 * 1 + 16 *
+    64 = 1099` elements;
+2. the address calculation for accessing element `(i, j)` computes `33 + i +
+    64 * j`;
+3. the distance between two consecutive elements along the outer dimension is
+    `1` element and the distance between two consecutive elements along the
+    inner dimension is `64` elements.
+
+This corresponds to a column major view of the memory region and is internally
+represented as the type `memref<42x16xf32, (i, j) -> (33 + i + 64 * j)>`.
+
+The specification of strides must not alias: given an n-D strided memref,
+indices `(i1, ..., in)` and `(j1, ..., jn)` may not refer to the same memory
+address unless `i1 == j1, ..., in == jn`.
+
+Strided memrefs represent a view abstraction over preallocated data. They are
+constructed with special ops, yet to be introduced. Strided memrefs are a
+special subclass of memrefs with generic semi-affine map and correspond to a
+normalized memref descriptor when lowering to LLVM.
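+
+The offset and strides may also be dynamic; a sketch following the syntax of
+the example above:
+
+```mlir
+// A 2-d strided memref whose offset and outer stride are only known at
+// runtime; elements remain contiguous along the inner dimension.
+memref<?x?xf32, offset: ?, strides: [?, 1]>
+```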
+
+#### None Type
+
+Syntax:
+
+```
+none-type ::= `none`
+```
+
+The `none` type is a unit type, i.e. a type with exactly one possible value,
+where its value does not have a defined dynamic representation.
+
+#### Tensor Type
+
+Syntax:
+
+```
+tensor-type ::= `tensor` `<` dimension-list tensor-memref-element-type `>`
+tensor-memref-element-type ::= vector-element-type | vector-type | complex-type
+
+// memref requires a known rank, but tensor does not.
+dimension-list ::= dimension-list-ranked | (`*` `x`)
+dimension-list-ranked ::= (dimension `x`)*
+dimension ::= `?` | decimal-literal
+```
+
+SSA values of tensor type represent aggregate N-dimensional data values, and
+have a known element type. A tensor may have an unknown rank (indicated by
+`*`) or may
+have a fixed rank with a list of dimensions. Each dimension may be a static
+non-negative decimal constant or be dynamically determined (indicated by `?`).
+
+The runtime representation of the MLIR tensor type is intentionally abstracted -
+you cannot control layout or get a pointer to the data. For low level buffer
+access, MLIR has a [`memref` type](#memref-type). This abstracted runtime
+representation holds both the tensor data values as well as information about
+the (potentially dynamic) shape of the tensor. The
+[`dim` operation](Dialects/Standard.md#dim-operation) returns the size of a
+dimension from a value of tensor type.
+
+Note: hexadecimal integer literals are not allowed in tensor type declarations
+to avoid confusion between `0xf32` and `0 x f32`. Zero sizes are allowed in
+tensors and treated as other sizes, e.g., `tensor<0 x 1 x i32>` and `tensor<1 x
+0 x i32>` are different types. Since zero sizes are not allowed in some other
+types, such tensors should be optimized away before lowering tensors to vectors.
+
+Examples:
+
+```mlir
+// Tensor with unknown rank.
+tensor<* x f32>
+
+// Known rank but unknown dimensions.
+tensor<? x ? x ? x ? x f32>
+
+// Partially known dimensions.
+tensor<? x ? x 13 x ? x f32>
+
+// Full static shape.
+tensor<17 x 4 x 13 x 4 x f32>
+
+// Tensor with rank zero. Represents a scalar.
+tensor<f32>
+
+// Zero-element dimensions are allowed.
+tensor<0 x 42 x f32>
+
+// Zero-element tensor of f32 type (hexadecimal literals not allowed here).
+tensor<0xf32>
+```
+
+#### Tuple Type
+
+Syntax:
+
+```
+tuple-type ::= `tuple` `<` (type ( `,` type)*)? `>`
+```
+
+The value of `tuple` type represents a fixed-size collection of elements, where
+each element may be of a different type.
+
+**Rationale:** Though this type is first class in the type system, MLIR provides
+no standard operations for operating on `tuple` types
+([rationale](Rationale.md#tuple-types)).
+
+Examples:
+
+```mlir
+// Empty tuple.
+tuple<>
+
+// Single element
+tuple<f32>
+
+// Many elements.
+tuple<i32, f32, tensor<i1>, i5>
+```
+
+#### Vector Type
+
+Syntax:
+
+```
+vector-type ::= `vector` `<` static-dimension-list vector-element-type `>`
+vector-element-type ::= float-type | integer-type
+
+static-dimension-list ::= (decimal-literal `x`)+
+```
+
+The vector type represents a SIMD style vector, used by target-specific
+operation sets like AVX. While the most common use is for 1D vectors (e.g.
+`vector<16 x f32>`), we also support multidimensional registers on targets that
+support them (like TPUs).
+
+Vector shapes must be positive decimal integers.
+
+Note: hexadecimal integer literals are not allowed in vector type declarations:
+`vector<0x42xi32>` is invalid because it is interpreted as a 2D vector with
+shape `(0, 42)`, and zero shapes are not allowed.
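+
+Examples:
+
+```mlir
+// A 1-D vector of 16 f32 elements, e.g. one 512-bit SIMD register.
+vector<16 x f32>
+
+// A 2-D vector for targets with multidimensional registers.
+vector<4 x 8 x i32>
+```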
+
+## Attributes
+
+Syntax:
+
+```
+attribute-dict ::= `{` `}`
+ | `{` attribute-entry (`,` attribute-entry)* `}`
+attribute-entry ::= dialect-attribute-entry | dependent-attribute-entry
+dialect-attribute-entry ::= dialect-namespace `.` bare-id `=` attribute-value
+dependent-attribute-entry ::= dependent-attribute-name `=` attribute-value
+dependent-attribute-name ::= (letter|[_]) (letter|digit|[_$])*
+```
+
+Attributes are the mechanism for specifying constant data on operations in
+places where a variable is never allowed - e.g. the index of a
+[`dim` operation](Dialects/Standard.md#dim-operation), or the stride of a
+convolution. They consist of a name and a concrete attribute value. The set of
+expected attributes, their structure, and their interpretation are all
+contextually dependent on what they are attached to.
+
+There are two main classes of attributes: dependent and dialect. Dependent
+attributes derive their structure and meaning from what they are attached to;
+e.g., the meaning of the `index` attribute on a `dim` operation is defined by
+the `dim` operation. Dialect attributes, on the other hand, derive their context
+and meaning from a specific dialect. An example of a dialect attribute may be a
+`swift.self` function argument attribute that indicates an argument is the
+self/context parameter. The context of this attribute is defined by the `swift`
+dialect and not the function argument.
+
+Attribute values are represented by the following forms:
+
+```
+attribute-value ::= attribute-alias | dialect-attribute | standard-attribute
+```
+
+### Attribute Value Aliases
+
+```
+attribute-alias ::= '#' alias-name '=' attribute-value
+attribute-alias ::= '#' alias-name
+```
+
+MLIR supports defining named aliases for attribute values. An attribute alias is
+an identifier that can be used in the place of the attribute that it defines.
+These aliases *must* be defined before their uses. Alias names may not contain a
+'.', since those names are reserved for
+[dialect attributes](#dialect-attribute-values).
+
+Example:
+
+```mlir
+#map = (d0) -> (d0 + 10)
+
+// Using the original attribute.
+%b = affine.apply (d0) -> (d0 + 10) (%a)
+
+// Using the attribute alias.
+%b = affine.apply #map(%a)
+```
+
+### Dialect Attribute Values
+
+Similarly to operations, dialects may define custom attribute values. The
+syntactic structure of these values is identical to custom dialect type values,
+except that dialect attributes values are distinguished with a leading '#',
+while dialect types are distinguished with a leading '!'.
+
+```
+dialect-attribute ::= '#' opaque-dialect-item
+dialect-attribute ::= '#' pretty-dialect-item
+```
+
+Dialect attributes can be specified in a verbose form, e.g. like this:
+
+```mlir
+// Complex attribute
+#foo<"something<abcd>">
+
+// Even more complex attribute
+#foo<"something<a%%123^^^>>>">
+```
+
+Dialect attributes that are simple enough can use the pretty format, which is a
+lighter weight syntax that is equivalent to the above forms:
+
+```mlir
+// Complex attribute
+#foo.something<abcd>
+```
+
+Sufficiently complex dialect attributes are required to use the verbose form for
+generality. For example, the more complex attribute shown above wouldn't be
+valid in the lighter syntax: `#foo.something<a%%123^^^>>>` because it contains
+characters that are not allowed in the lighter syntax, as well as unbalanced
+`<>` characters.
+
+See [here](DefiningAttributesAndTypes.md) to learn how to define dialect
+attribute values.
+
+### Standard Attribute Values
+
+Standard attributes are a core set of
+[dialect attributes](#dialect-attribute-values) that are defined in a builtin
+dialect and thus available to all users of MLIR.
+
+```
+standard-attribute ::= affine-map-attribute
+ | array-attribute
+ | bool-attribute
+ | dictionary-attribute
+ | elements-attribute
+ | float-attribute
+ | integer-attribute
+ | integer-set-attribute
+ | string-attribute
+ | symbol-ref-attribute
+ | type-attribute
+ | unit-attribute
+```
+
+#### AffineMap Attribute
+
+Syntax:
+
+```
+affine-map-attribute ::= affine-map
+```
+
+An affine-map attribute is an attribute that represents an affine-map object.
+
+#### Array Attribute
+
+Syntax:
+
+```
+array-attribute ::= `[` (attribute-value (`,` attribute-value)*)? `]`
+```
+
+An array attribute is an attribute that represents a collection of attribute
+values.
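+
+Example (the element attributes are illustrative):
+
+```mlir
+// An array of an integer, a string, and a float attribute.
+[10 : i32, "string", 4.2 : f32]
+```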
+
+#### Boolean Attribute
+
+Syntax:
+
+```
+bool-attribute ::= bool-literal
+```
+
+A boolean attribute is a literal attribute that represents a one-bit boolean
+value, true or false.
+
+#### Dictionary Attribute
+
+Syntax:
+
+```
+dictionary-attribute ::= `{` (attribute-entry (`,` attribute-entry)*)? `}`
+```
+
+A dictionary attribute is an attribute that represents a sorted collection of
+named attribute values. The elements are sorted by name, and each name must be
+unique within the collection.
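+
+Example (the attribute names are illustrative):
+
+```mlir
+// A dictionary with an integer, a string, and a type attribute entry,
+// sorted by name.
+{count = 10 : i64, kind = "pear", type = f32}
+```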
+
+#### Elements Attributes
+
+Syntax:
+
+```
+elements-attribute ::= dense-elements-attribute
+ | opaque-elements-attribute
+ | sparse-elements-attribute
+```
+
+An elements attribute is a literal attribute that represents a constant
+[vector](#vector-type) or [tensor](#tensor-type) value.
+
+##### Dense Elements Attribute
+
+Syntax:
+
+```
+dense-elements-attribute ::= `dense` `<` attribute-value `>` `:`
+ ( tensor-type | vector-type )
+```
+
+A dense elements attribute is an elements attribute where the storage for the
+constant vector or tensor value has been packed to the element bitwidth. The
+element type of the vector or tensor constant must be of integer, index, or
+floating point type.
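+
+Examples:
+
+```mlir
+// A splat: every element of the vector is 1.0.
+dense<1.0> : vector<4xf32>
+
+// An explicitly listed 2x3 tensor of i32 values.
+dense<[[1, 2, 3], [4, 5, 6]]> : tensor<2x3xi32>
+```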
+
+##### Opaque Elements Attribute
+
+Syntax:
+
+```
+opaque-elements-attribute ::= `opaque` `<` dialect-namespace `,`
+ hex-string-literal `>` `:`
+ ( tensor-type | vector-type )
+```
+
+An opaque elements attribute is an elements attribute where the content of the
+value is opaque. The representation of the constant stored by this elements
+attribute is only understood, and thus decodable, by the dialect that created
+it.
+
+Note: The parsed string literal must be in hexadecimal form.
+
+##### Sparse Elements Attribute
+
+Syntax:
+
+```
+sparse-elements-attribute ::= `sparse` `<` attribute-value `,` attribute-value
+ `>` `:` ( tensor-type | vector-type )
+```
+
+A sparse elements attribute is an elements attribute that represents a sparse
+vector or tensor object, i.e. one in which very few of the elements are
+non-zero.
+
+The attribute uses COO (coordinate list) encoding to represent the sparse
+elements of the elements attribute. The indices are stored via a 2-D tensor of
+64-bit integer elements with shape [N, ndims], which specifies the indices of
+the elements in the sparse tensor that contains non-zero values. The element
+values are stored via a 1-D tensor with shape [N], that supplies the
+corresponding values for the indices.
+
+Example:
+
+```mlir
+sparse<[[0, 0], [1, 2]], [1, 5]> : tensor<3x4xi32>
+
+// This represents the following tensor:
+//   [[1, 0, 0, 0],
+//    [0, 0, 5, 0],
+//    [0, 0, 0, 0]]
+```
+
+#### Float Attribute
+
+Syntax:
+
+```
+float-attribute ::= (float-literal (`:` float-type)?)
+ | (hexadecimal-literal `:` float-type)
+```
+
+A float attribute is a literal attribute that represents a floating point value
+of the specified [float type](#floating-point-types). It can be represented in
+the hexadecimal form where the hexadecimal value is interpreted as bits of the
+underlying binary representation. This form is useful for representing infinity
+and NaN floating point values. To avoid confusion with integer attributes,
+hexadecimal literals _must_ be followed by a float type to define a float
+attribute.
+
+Examples:
+
+```
+42.0 // float attribute defaults to f64 type
+42.0 : f32 // float attribute of f32 type
+0x7C00 : f16 // positive infinity
+0x7CFF : f16 // NaN (one of the possible values)
+42 : f32 // Error: expected integer type
+```
+
+#### Integer Attribute
+
+Syntax:
+
+```
+integer-attribute ::= integer-literal ( `:` (index-type | integer-type) )?
+```
+
+An integer attribute is a literal attribute that represents an integral value of
+the specified integer or index type. The default type for this attribute, if one
+is not specified, is a 64-bit integer.
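+
+Examples:
+
+```mlir
+10          // defaults to a 64-bit integer
+10 : i32    // 32-bit integer attribute
+1 : index   // attribute of index type
+```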
+
+#### Integer Set Attribute
+
+Syntax:
+
+```
+integer-set-attribute ::= integer-set
+```
+
+An integer-set attribute is an attribute that represents an integer-set object.
+
+#### String Attribute
+
+Syntax:
+
+```
+string-attribute ::= string-literal (`:` type)?
+```
+
+A string attribute is an attribute that represents a string literal value.
+
+#### Symbol Reference Attribute
+
+Syntax:
+
+```
+symbol-ref-attribute ::= symbol-ref-id (`::` symbol-ref-id)*
+```
+
+A symbol reference attribute is a literal attribute that represents a named
+reference to an operation that is nested within an operation with the
+`OpTrait::SymbolTable` trait. As such, this reference is given meaning by the
+nearest parent operation containing the `OpTrait::SymbolTable` trait. It may
+optionally contain a set of nested references that further resolve to a symbol
+nested within a different symbol table.
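+
+Example (the symbol names are illustrative):
+
+```mlir
+// A reference to a top-level symbol.
+@my_func
+
+// A nested reference to a symbol held within the @my_module symbol table.
+@my_module::@my_func
+```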
+
+This attribute can only be held internally by
+[array attributes](#array-attribute) and
+[dictionary attributes](#dictionary-attribute) (including the top-level
+operation attribute dictionary), i.e. no other attribute kinds such as Locations
+or extended attribute kinds. If a reference to a symbol is necessary from
+outside of the symbol table that the symbol is defined in, a
+[string attribute](#string-attribute) can be used to refer to the symbol name.
+
+**Rationale:** Given that MLIR models global accesses with symbol references, to
+enable efficient multi-threading, it becomes difficult to effectively reason
+about their uses. By restricting the places that can legally hold a symbol
+reference, we can always opaquely reason about a symbol's usage characteristics.
+
+#### Type Attribute
+
+Syntax:
+
+```
+type-attribute ::= type
+```
+
+A type attribute is an attribute that represents a [type object](#type-system).
+
+#### Unit Attribute
+
+```
+unit-attribute ::= `unit`
+```
+
+A unit attribute is an attribute that represents a value of `unit` type. The
+`unit` type allows only one value forming a singleton set. This attribute value
+is used to represent attributes that only have meaning from their existence.
+
+One example of such an attribute could be the `swift.self` attribute. This
+attribute indicates that a function parameter is the self/context parameter. It
+could be represented as a [boolean attribute](#boolean-attribute) (true or
+false), but a value of false doesn't really bring any value; the parameter
+either is the self/context parameter or it isn't.
+
+```mlir
+// A unit attribute defined with the `unit` value specifier.
+func @verbose_form(i1) attributes {dialectName.unitAttr = unit}
+
+// A unit attribute can also be defined without the value specifier.
+func @simple_form(i1) attributes {dialectName.unitAttr}
+```
diff --git a/mlir/docs/MLIRForGraphAlgorithms.md b/mlir/docs/MLIRForGraphAlgorithms.md
new file mode 100644
index 00000000000..ac26e5beb9b
--- /dev/null
+++ b/mlir/docs/MLIRForGraphAlgorithms.md
@@ -0,0 +1,403 @@
+# MLIR: Incremental Application to Graph Algorithms in ML Frameworks
+
+The existing documentation about MLIR focuses on long term vision, how its
+pieces fit together, and the benefits of modular and composable infrastructure
+in the vast and distant future. While this viewpoint appeals to some, it causes
+concern for others who care more about the "here and now" - why does it make
+sense to make a "revolutionary" change when any individual problem can be fixed
+in place?
+
+This document explains that adoption of MLIR to solve graph based problems
+_isn't_ a revolutionary change: it is an incremental series of steps which build
+on each other, each of which delivers local value. This document also addresses
+some points of confusion that keep coming up.
+
+One note: even though a major advantage of MLIR is that it can span the full
+spectrum from graph algorithms down to low-level code generation, this document
+focuses on the use of MLIR for **graph-level algorithms**. MLIR will also unlock
+exciting code generation opportunities (particularly given its novel approach to
+integrating state of the art polyhedral techniques), but issues that touch on
+MLIR's relationship to XLA, Eigen, etc, are out of scope for this particular
+doc.
+
+This document uses TensorFlow as the example given that it is the focus of our
+immediate work, but we believe that the same viewpoint could be useful for
+people working in the context of other ML frameworks that may consider adopting
+MLIR in the future.
+
+### How is MLIR relevant?
+
+MLIR is an overloaded acronym which unpacks as "Multi-Level Intermediate
+Representation". Its high-level purpose is to provide mechanics for describing
+and transforming programs and computations in a flexible way. It provides common
+compiler infrastructure for things like constant folding, dead code elimination,
+graph rewriting, and others - which are independent of the representational
+choices picked by a given dialect (e.g. its concurrency semantics). It was built
+with a specific focus on compile time and memory efficiency, accurate
+propagation of source location information (important for reporting high quality
+errors and warnings) and is designed for testability.
+
+TensorFlow has numerous subsystems (some of which are proprietary, e.g.
+Tensor-RT, nGraph, CoreML, etc) as well as translation layers between these
+different subsystems, and these translation layers face similar challenges. (As
+an aside, the internals of each of these subsystems could often benefit from
+MLIR infrastructure, but that isn't a focus of this doc.)
+
+A key observation that MLIR makes is that these subsystems often have two things
+going on: they are both particular data structures and encodings (e.g. HLO
+graphs, TF-Lite's flat buffer format, TensorFlow's Graph format, the ONNX
+abstraction, etc) as well as an abstraction of computation (a specific way of
+modeling a convolution, a set of supported operations etc).
+
+MLIR uses a standard IR (i.e., a set of data structures) for representing these
+computations - this allows a huge amount of shared infrastructure across these
+problem domains. MLIR then allows the definition of domain-specific "dialects"
+that describe the set of operations that are legal and supported for a given
+application. This means that the actual translations between data structures are
+kept as simple as possible - and are thus relatively easy to make "correct".
+This allows the common compiler infrastructure to handle the mapping problems
+and the other issues within the domain.
+
+MLIR's design is directly informed by the experience of building (and then
+living with) intermediate representations like the LLVM IR, LLVM SelectionDAG,
+the LLVM machine instruction representation, Swift SIL IR, and learns new
+lessons from TensorFlow and XLA HLO, as well as learning from building countless
+research and production systems on top of them. Our goal is to drag the state of
+the art in compilers forward, not to merely apply a few well-known techniques to
+the machine learning domain.
+
+### What does adoption mean?
+
+The point of this document is not to advocate for rewriting any particular
+subsystem in TensorFlow - indeed, the burden required to justify a rewrite is
+high, and often very specific to that subsystem. That said, there are several
+subsystems that are about to get rewritten or substantially revised anyway, so
+we use those as examples to concretely describe the benefits that MLIR provides
+in these cases and what it will take. The subsystems discussed are:
+
+1. the TF Lite TOCO translator, which needs improved error reporting and
+   reliability, and needs to be generalized to support more ops, and
+1. the TF/XLA bridge, which needs improved usability (by merging some of its
+   usage models), support for dynamic shapes, and generalized guest subsystem
+   support for Tensor-RT and nGraph.
+1. Grappler is another subsystem that is likely to get substantial revisions in
+ the future, and would definitely benefit from the MLIR framework, but there
+ are no known plans to do that work at this point, so we don't discuss it
+ further.
+
+Adopting MLIR works the same way for each of these - and, in fact, the work to
+support TF Lite is mostly a subset of the larger work to support the
+functionality of the TF/XLA bridge. Both include several compiler passes
+(things like encapsulate, functionalize control flow, lowering of ops, fusion,
+constant folding, shape inference, etc).
+
+MLIR supports converting from TensorFlow Graphs to MLIR and back, which means
+that we can start by putting in a no-op translation to MLIR and back into the
+pipeline, and verify that nothing breaks. Then we can work on replacing the
+compiler transformations one by one by reimplementing them (with the improved
+algorithms that we're planning).
+
+This is a development plan: we wouldn't actually ship a TensorFlow that just
+uses MLIR for a single pass. In practice, we'll have the MLIR flag gated under
+an option, build out a replacement for an entire subsystem (e.g. the TOCO
+translator) and when the time is right, we'll do A/B comparisons and eventually
+make a switch and phase out the old code over time.
+
+## What benefit does MLIR provide?
+
+The adoption plan above might sound like it only makes things worse in the
+immediate term - we have two implementations of the same functionality, we are
+dividing our efforts, etc. In order for this to be worth it, we should have a
+good sense that we are building towards an improved future that will make
+customers and TensorFlow engineers happier when it lands. Here we describe a few
+of the benefits that MLIR provides, in no particular order:
+
+### A Lossless Human Editable Textual Representation
+
+The MLIR in-memory data structure has a human readable and writable format, as
+well as [a specification](LangRef.md) for that format - built just like any
+other programming language. Important properties of this format are that it is
+compact, easy to read, and lossless. You can dump an MLIR program out to disk
+and munge around with it, then send it through a few more passes.
+
+If you haven't worked with a system that works this way, it is hard to overstate
+how big a deal this is in practice: it means that you can call `foo->dump()` on
+an IR object to see its full contents, it means you can diff the IR before and
+after a change, delta reduce IR files, and many other things.
+
+### A Graph Verification Pass
+
+Like many other popular compiler infrastructures, MLIR provides infrastructure
+and implementation for a "verifier" which checks that the IR is well formed. The
+MLIR verifier is a simple framework that makes it easy to provide a single
+source of truth for those correctness properties and is general across all
+Dialects (e.g. TF Graph, TF Lite flat buffer, XLA HLO, etc).
+
+A verifier pass is sort of like a 'super assertion' that catches mistakes in
+program transformations early, making you as an engineer more productive, making
+the product more reliable, and making it easier to track down bugs when they
+appear - because the verifier can be run at any time, either as a compiler pass
+or with a single function call.
+
+While MLIR provides a well-considered infrastructure for IR verification, and
+has simple checks for existing TensorFlow operations, there is a lot that should
+be added here and lots of opportunity to get involved!
+
+### Designed for Testability
+
+There are many aspects of this in MLIR, but we'll focus on compiler
+transformations since they are the easiest to understand. Compiler
+transformations are modeled as subclasses of the `Pass` C++ class, which are
+driven by an `mlir-opt` tool. When combined with a lossless textual
+representation, it becomes really easy to write unit tests for compiler
+transformations, for example, this is a simple test that shows "x-x" is being
+turned into zero:
+
+```mlir
+ // RUN: mlir-opt %s -canonicalize | FileCheck %s
+ func @test_subi_zero_cfg(%arg0: i32) -> i32 {
+ %y = subi %arg0, %arg0 : i32
+ return %y: i32
+ }
+ // CHECK-LABEL: func @test_subi_zero_cfg(%arg0: i32)
+ // CHECK-NEXT: %c0_i32 = constant 0 : i32
+ // CHECK-NEXT: return %c0
+```
+
+The "CHECK" comments are interpreted by the
+[LLVM FileCheck tool](https://llvm.org/docs/CommandGuide/FileCheck.html), which
+is sort of like a really advanced grep. This test is fully self-contained: it
+feeds the input into the [canonicalize pass](Canonicalization.md), and checks
+that the output matches the CHECK lines. See the `test/Transforms` directory for
+more examples. In contrast, standard unit testing exposes the API of the
+underlying framework to lots and lots of tests (making it harder to refactor and
+move the API), typically requires a lot more code, and exacerbates issues with
+link time. For examples, see
+[the TEST_F functions in TensorFlow's testsuite](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/grappler/optimizers/arithmetic_optimizer_test.cc).
+
+MLIR has been pervasively designed around this sort of testability, allowing us
+to put in place a culture that expects every behavior-changing
+commit to include a test case, and for these test cases to be stable and
+reliable over time, since they are testing exactly what they are supposed to.
+End to end integration tests are still super useful for some things of course!
+
+### Infrastructure for Warnings and Error Diagnostics and Location Tracking
+
+MLIR benefits from the lessons learned from building other compilers - including
+Clang, which
+[set the standard](http://blog.llvm.org/2010/04/amazing-feats-of-clang-error-recovery.html)
+for quality of implementation in C/C++ compiler diagnostics. Drawing from this
+experience (and fixing mistakes in LLVM), MLIR requires that operations and
+functions carry abstract location information, that transformations propagate
+this information, and provides standardized mechanisms to emit errors and
+warnings, as well as for clients to hook into them to capture and report them in
+custom ways.
+
+Why is this important? In practice, many graph-to-graph translators can fail
+(e.g. TF Lite when an unsupported op is used) and it is important to be able to
+report the error up through to the user in the most precise way possible, in
+order for it to be actionable. This includes tracking rewrites through fusions
+and fissions of ops, mapping back into language / API specific domains, etc.
+
+More selfishly for infrastructure hackers, this is a huge boon because it means
+that it is easy to write good tests for this: the testing tools for MLIR capture
+the diagnostics produced by passes (using the standard diagnostic hooks) and
+check that they match the expected diagnostics in the testcase. For example, to
+test the dependence analysis infra in the code generator, Andy Davis wrote a
+simple pass that checks dependencies and emits them as "notes", allowing him to
+write tests like this:
+
+```mlir
+ // RUN: mlir-opt %s -memref-dependence-check -verify-diagnostics
+ func @different_memrefs() {
+ %m.a = alloc() : memref<100xf32>
+ %m.b = alloc() : memref<100xf32>
+ %c0 = constant 0 : index
+ %c1 = constant 1.0 : f32
+ store %c1, %m.a[%c0] : memref<100xf32>
+ // expected-note@-1 {{dependence from memref access 0 to access 1 = false}}
+ %v0 = load %m.b[%c0] : memref<100xf32>
+ return
+ }
+```
+
+Note that a major limitation of this is that MLIR suffers from a problem of
+"garbage in, garbage out": if the input locations to MLIR are imprecise, then
+there is nothing that it can do to recover them. There is work underway in
+TensorFlow/Python to improve the situation, and Swift for TensorFlow already has
+perfect location tracking due to its design.
+
+### Shape Information Captured in the IR
+
+In TensorFlow Graphs, each op takes and returns values using a very simple type
+system (TF_DataType) in which each value is a tensor of unknown rank and
+dimensions. At the same time, many graphs have static shapes easily knowable for
+wide swaths of the computation, and even dynamically shaped operations often
+have statically knowable dimensions. Many analyses and transformations benefit
+and use this information when available, but because TensorFlow graphs don't
+capture this (e.g. serialize it to proto), passes have to recompute it on demand
+with ShapeRefiner.
+
+The [MLIR Tensor Type](LangRef.md#tensor-type) directly captures shape
+information, so you can have things like:
+
+```mlir
+ %x = tf.Add %x, %y : tensor<128 x 8 x ? x f32>
+```
+
+Capturing this in the IR is expected to speed up transformations (avoiding
+recomputing the same info over and over again) which therefore makes it
+practical to apply stronger shape analysis algorithms. It also makes it easier
+to work with the IR, because on-the-side representations can get out of date,
+and the API is easier to work with from an ergonomics perspective.
+
+### Unified Graph Rewriting Infrastructure
+
+This is still a work in progress, but we have sightlines towards a
+[general rewriting infrastructure](GenericDAGRewriter.md) for transforming DAG
+tiles into other DAG tiles, using a declarative pattern format. DAG to DAG
+rewriting is a generalized solution for many common compiler optimizations,
+lowerings, and other rewrites and having an IR enables us to invest in building
+a single high-quality implementation.
+
+Declarative pattern rules are preferable to imperative C++ code for a number of
+reasons: they are more compact, easier to reason about, can have checkers
+written against them, and new tools can be built that inspect and manipulate the
+declarative patterns in interesting ways - e.g. applying theorem provers to
+them. It will be exciting to see this ecosystem develop as the infrastructure
+matures.
+
+### Clarified Semantics for TensorFlow Operations
+
+One of the challenging things about working with TensorFlow is that there are
+many invariants and behaviors that need to be preserved and known about when
+working with Graphs, and these can be difficult to reason about and lead to
+bugs. Things like 'dead values', Switch and Merge nodes, concurrency semantics,
+nodes that execute even when passed a dead value, multiple device program
+representation - etc... all add complexities that can make it challenging to
+reason about whether a transformation or analysis is correct in general. Even
+something as simple as constant folding or transforming integer `x-x` into `0`
+is non-trivial because you need to consider control dependence edges.
+
+One of our major goals for the TensorFlow dialect of MLIR is to sort out these
+situations and upgrade existing TensorFlow graphs to semantics that are easier
+to reason about. The solutions to these problems are all still being debated,
+but those discussions have already yielded a lot of potential answers:
+introducing a `tf_dead_or<x>` types for switch/merge, modeling of TF operations
+using futures/async semantics etc. None of these particular battles are critical
+or important for MLIR to succeed (because of its "meta" nature, the abstraction
+decisions of any given dialect are up for it to decide), but each one that works
+out will make it easier to work with and transform TensorFlow operations. We
+expect these issues to get nailed down in the next couple of months when MLIR
+effort moves beyond TF Lite / TOCO support. The discussions that are happening
+now are super valuable and making progress.
+
+### Ergonomics
+
+A minor-in-theory, but important-in-practice point is that MLIR is designed to
+make it easy, memory efficient, and less error prone to transform code than
+other systems. `TensorFlow::Graph` has implementation issues where the same
+information is stored redundantly in different places (which must be manually
+kept up to date), has somewhat unusual representation of certain constructs
+(e.g. the function library, which makes it very difficult to add or remove
+functions, e.g. during interprocedural transformations), and stores information
+in the graph that is used by the executor, but isn't necessary for program
+transformation.
+
+TensorFlow has made a lot of progress in this area over the years, and there are
+lots of ideas about further improvements in the future, we are happy that MLIR
+addresses these needs (making it much easier to implement correct program
+transformations) today, and are committed to pushing hard to make it better.
+
+### Compile Time Performance and Memory Use
+
+MLIR has been designed to be memory and compile-time efficient in its algorithms
+and data structures, using immutable and uniqued structures, low level
+bit-packing, and other well-known techniques to avoid unnecessary heap
+allocations, and allow simple and safe multithreaded optimization of MLIR
+programs. There are other reasons to believe that the MLIR implementations of
+common transformations will be more efficient than the Python and C++
+TensorFlow::Graph implementations of the same things, given the current
+implementation details of TensorFlow.
+
+That said, this is very much a theory at this point. When the new implementation
+of various subsystems are available, we will see what happens in practice: there
+will be no reason to speculate - we can measure.
+
+## Common Questions and Concerns
+
+Here we address some frequently asked questions and concerns.
+
+### Isn't MLIR a big dependency to take on?
+
+We've heard that at least some people are concerned that MLIR is a "big"
+dependency to take on, and could result in large code size. Here are some key
+points MLIR:
+
+1. The entire MLIR codebase is a pretty small C++ code base in absolute terms
+ compared to what goes into a modern ML framework.
+1. Like LLVM, MLIR is designed as a set of libraries that clients can link in
+ or ignore as they wish. For example, the transformations in MLIR kept
+ separate from the core IR abstractions, and dialect specific code (e.g.
+ TensorFlow, TF-Lite, XLA, etc) is all independently selectable by the build
+ system. Clients that don't care about XLA don't link in that code, whether
+ they are a TF-Lite system or a client that is completely unrelated to
+ TensorFlow.
+1. MLIR's only third party dependency is on LLVM, but it doesn't depend on LLVM
+ IR or any other heavy dependency - it just depends on LLVM's support library
+ which provides efficient hash tables and other
+ [memory efficient data structures that the STL does not](http://llvm.org/docs/ProgrammersManual.html#picking-the-right-data-structure-for-a-task).
+ There have been discussions about splitting this set of libraries out to its
+ own subproject in LLVM that the LLVM IR project depends on. This would be
+ great for MLIR as well as other LLVM subprojects.
+1. TensorFlow and many other frameworks already use LLVM - if so, MLIR would
+ not be pulling in an additional dependency at all.
+
+### How does MLIR represent {control flow, concurrency, …} semantics in TensorFlow?
+
+MLIR provides a dialect that is an isomorphic 1-1 mapping between TensorFlow
+graphs and MLIR, as well as a pretty complete translator back and forth (the
+only known gap is that a few TF_DataType enums aren't handled yet). MLIR is a
+"Multi-Level IR", which allows it to represent code with different abstraction
+levels, so the ability to faithfully represent TensorFlow code in a completely
+backwards compatible way (even if there are some historical warts!) is critical.
+
+In *addition* to the isomorphic mapping, we are actively working on efforts to
+raise the abstraction level for working with TensorFlow graphs in MLIR. Doing so
+would make it even easier to write TensorFlow transformations than it is today,
+and would provide a path to migrating TF 1.x graphs forward into the TF 2.x
+world. For example, because MLIR has an extensible type system, we can directly
+model whether it is impossible for a Tensor value to be a "dead" value - similar
+to the use of optional types in modern programming languages.
+
+These discussions occasionally cause confusion because there are several issues
+being mixed up into one:
+
+* What are the current semantics of TensorFlow graphs, and what invariants can
+ we rely on?
+* What should the semantics be in TensorFlow 2.0?
+* What do programs rely on in practice, and if it is unfriendly, can we
+ migrate it?
+* Can we find a way to make it so transforms don't have to worry about the
+ complexities of Switch/Merge, by using higher level control flow
+ representations? (tentative answer: yes)
+* How should MLIR represent async vs sync operations, what invariants are
+ provided, how does this dovetail with control flow?
+* When is it safe and beneficial to perform optimizations that might reduce
+ parallelism?
+
+All of these questions have a "conservative/safe fallback": we can continue
+providing exactly the same abstractions that TensorFlow always has. That said,
+we are trying hard to level-up the representation (taking advantage of the
+"Multi-Level" part of MLIR) because doing so will make it much much easier to
+write analyses and transformations than it currently is in TensorFlow.
+
+### Non Goals
+
+It is important to point out things that MLIR does not aim to do. For example,
+there is no runtime component to MLIR: the TensorFlow executor, the TF Lite
+FlatBuffer interpreter, or other existing runtime should be used as-is.
+
+Another non-goal is that MLIR currently doesn't support a stable binary
+encoding. We will certainly add this at some point, but existing formats should
+be used for serialization and distribution in the meantime.
diff --git a/mlir/docs/OpDefinitions.md b/mlir/docs/OpDefinitions.md
new file mode 100644
index 00000000000..ff3a21fa1bb
--- /dev/null
+++ b/mlir/docs/OpDefinitions.md
@@ -0,0 +1,1210 @@
+# Table-driven Operation Definition Specification (ODS)
+
+In addition to specializing the `mlir::Op` C++ template, MLIR also supports
+defining operations in a table-driven manner. This is achieved via
+[TableGen][TableGen], which is both a generic language and its tooling to
+maintain records of domain-specific information. Facts regarding an operation
+are specified concisely into a TableGen record, which will be expanded into an
+equivalent `mlir::Op` C++ template specialization at compiler build time.
+
+This manual explains in detail all the available mechanisms for defining
+operations in such a table-driven manner. It aims to be a specification instead
+of a tutorial. Please refer to [Quickstart tutorial to adding MLIR graph
+rewrite](QuickstartRewrites.md) for the latter.
+
+In addition to detailing each mechanism, this manual also tries to capture
+best practices. They are rendered as quoted bullet points.
+
+## Motivation
+
+MLIR allows pluggable dialects, and dialects contain, among others, a list of
+operations. This open and extensible ecosystem leads to the "stringly" type IR
+problem, e.g., repetitive string comparisons during optimization and analysis
+passes, unintuitive accessor methods (e.g., generic/error prone `getOperand(3)`
+vs self-documenting `getStride()`) with more generic return types, verbose and
+generic constructors without default arguments, verbose textual IR dump, and
+so on. Furthermore, operation verification is:
+
+1. best case: a central string-to-verification-function map,
+1. middle case: duplication of verification across the code base, or
+1. worst case: no verification functions.
+
+The fix is to support defining ops in a table-driven manner. Then for each
+dialect, we can have a central place that contains everything you need to know
+about each op, including its constraints, custom assembly form, etc. This
+description is also used to generate helper functions and classes to allow
+building, verification, parsing, printing, analysis, and many more.
+
+## Benefits
+
+Compared to the C++ template, this table-driven approach has several benefits
+including but not limited to:
+
+* **Single source of truth**: We strive to encode all facts regarding an
+ operation into the record, so that readers don't need to jump among code
+ snippets to fully understand an operation.
+* **Removing boilerplate**: We can automatically generate
+ operand/attribute/result getter methods, operation build methods, operation
+ verify methods, and many more utilities from the record. This greatly reduces
+ the boilerplate needed for defining a new op.
+* **Facilitating auto-generation**: The usage of these operation information
+ records are by no means limited to op definition itself. We can use them to
+ drive the auto-generation of many other components, like computation graph
+ serialization.
+
+## TableGen Syntax
+
+We use TableGen as the language for specifying operation information. TableGen
+itself just provides syntax for writing records; the syntax and constructs
+allowed in a TableGen file (typically with filename suffix `.td`) can be found
+[here][TableGenIntro]. The formal language specification can be found
+[here][TableGenRef]. _Roughly_ speaking,
+
+* TableGen `class` is similar to C++ class; it can be templated and
+ subclassed.
+* TableGen `def` is similar to C++ object; it can be declared by specializing
+ a TableGen `class` (e.g., `def MyDef : MyClass<...>;`) or completely
+ independently (e.g., `def MyDef;`). It cannot be further templated or
+ subclassed.
+* TableGen `dag` is a dedicated type for directed acyclic graph of elements. A
+ `dag` has one operator and zero or more arguments. Its syntax is `(operator
+ arg0, arg1, argN)`. The operator can be any TableGen `def`; an argument can
+ be anything, including `dag` itself. We can have names attached to both the
+ operator and the arguments like `(MyOp:$op_name MyArg:$arg_name)`.
+
+Please see the [language introduction][TableGenIntro] to learn about all the
+types and expressions supported by TableGen.
+
+## Operation Definition
+
+MLIR defines several common constructs to help operation definition and provide
+their semantics via a special [TableGen backend][TableGenBackend]:
+[`OpDefinitionsGen`][OpDefinitionsGen]. These constructs are defined in
+[`OpBase.td`][OpBase]. The main ones are
+
+* The `Op` class: It is the main construct for defining operations. All facts
+ regarding the operation are specified when specializing this class, with the
+ help of the following constructs.
+* The `Dialect` class: Operations belonging to one logical group are placed in
+ the same dialect. The `Dialect` class contains dialect-level information.
+* The `OpTrait` class hierarchy: They are used to specify special properties
+ and constraints of the operation, including whether the operation has side
+ effect or whether its output has the same shape as the input.
+* The `ins`/`outs` marker: These are two special makers builtin to the
+ `OpDefinitionsGen` backend. They lead the definitions of operands/attributes
+ and results respectively.
+* The `TypeConstraint` class hierarchy: They are used to specify the
+ constraints over operands or results. A notable subclass hierarchy is
+ `Type`, which stands for constraints for common C++ types.
+* The `AttrConstraint` class hierarchy: They are used to specify the
+ constraints over attributes. A notable subclass hierarchy is `Attr`, which
+ stands for constraints for attributes whose values are of common types.
+
+An operation is defined by specializing the `Op` class with concrete contents
+for all the fields it requires. For example, `tf.AvgPool` is defined as
+
+```tablegen
+def TF_AvgPoolOp : TF_Op<"AvgPool", [NoSideEffect]> {
+ let summary = "Performs average pooling on the input.";
+
+ let description = [{
+Each entry in `output` is the mean of the corresponding size `ksize`
+window in `value`.
+ }];
+
+ let arguments = (ins
+ TF_FpTensor:$value,
+
+ Confined<I64ArrayAttr, [ArrayMinCount<4>]>:$ksize,
+ Confined<I64ArrayAttr, [ArrayMinCount<4>]>:$strides,
+ TF_AnyStrAttrOf<["SAME", "VALID"]>:$padding,
+ DefaultValuedAttr<TF_ConvertDataFormatAttr, "NHWC">:$data_format
+ );
+
+ let results = (outs
+ TF_FpTensor:$output
+ );
+
+ TF_DerivedOperandTypeAttr T = TF_DerivedOperandTypeAttr<0>;
+}
+```
+
+In the following we describe all the fields needed. Please see the definition
+of the `Op` class for the complete list of fields supported.
+
+### Operation name
+
+The operation name is a unique identifier of the operation within MLIR, e.g.,
+`tf.Add` for addition operation in the TensorFlow dialect. This is the
+equivalent of the mnemonic in assembly language. It is used for parsing and
+printing in the textual format. It is also used for pattern matching in graph
+rewrites.
+
+The full operation name is composed of the dialect name and the op name, with
+the former provided via the dialect and the latter provided as the second
+template parameter to the `Op` class.
+
+### Operation documentation
+
+This includes both an one-line `summary` and a longer human-readable
+`description`. They will be used to drive automatic generation of dialect
+documentation. They need to be provided in the operation's definition body:
+
+```tablegen
+let summary = "...";
+
+let description = [{
+...
+}];
+```
+
+`description` should be written in Markdown syntax.
+
+Placing the documentation at the beginning is recommended since
+it helps in understanding the operation.
+
+> * Place documentation at the beginning of the operation definition
+> * The summary should be short and concise. It should be a one-liner without
+> trailing punctuation. Put expanded explanation in description.
+
+### Operation arguments
+
+There are two kinds of arguments: operands and attributes. Operands are runtime
+values produced by other ops; while attributes are compile-time known constant
+values, including two categories:
+
+1. Natural attributes: these attributes affect the behavior of the operations
+ (e.g., padding for convolution);
+1. Derived attributes: these attributes are not needed to define the operation
+ but are instead derived from information of the operation. E.g., the output
+ shape of type. This is mostly used for convenience interface generation or
+ interaction with other frameworks/translation.
+
+Both operands and attributes are specified inside the `dag`-typed `arguments`,
+led by `ins`:
+
+```tablegen
+let arguments = (ins
+ <type-constraint>:$<operand-name>,
+ ...
+ <attr-constraint>:$<attr-name>,
+ ...
+);
+```
+
+Here `<type-constraint>` is a TableGen `def` from the `TypeConstraint` class
+hierarchy. Similarly, `<attr-constraint>` is a TableGen `def` from the
+`AttrConstraint` class hierarchy. See [Constraints](#constraints) for more
+information.
+
+There is no requirements on the relative order of operands and attributes; they
+can mix freely. The relative order of operands themselves matters. From each
+named argument a named getter will be generated that returns the argument with
+the return type (in the case of attributes the return type will be
+constructed from the storage type, while for operands it will be `Value`). Each
+attribute's raw value (e.g., as stored) can also be accessed via generated
+`<name>Attr` getters for use in transformation passes where the more user
+friendly return type is less suitable.
+
+All the arguments should be named to 1) provide documentation, 2) drive
+auto-generation of getter methods, 3) provide a handle to reference for other
+places like constraints.
+
+#### Variadic operands
+
+To declare a variadic operand, wrap the `TypeConstraint` for the operand with
+`Variadic<...>`.
+
+Normally operations have no variadic operands or just one variadic operand. For
+the latter case, it is easy to deduce which dynamic operands are for the static
+variadic operand definition. But if an operation has more than one variadic
+operands, it would be impossible to attribute dynamic operands to the
+corresponding static variadic operand definitions without further information
+from the operation. Therefore, the `SameVariadicOperandSize` trait is needed to
+indicate that all variadic operands have the same number of dynamic values.
+
+#### Optional attributes
+
+To declare an optional attribute, wrap the `AttrConstraint` for the attribute
+with `OptionalAttr<...>`.
+
+#### Attributes with default values
+
+To declare an attribute with a default value, wrap the `AttrConstraint` for the
+attribute with `DefaultValuedAttr<..., "...">`.
+
+The second parameter to `DefaultValuedAttr` should be a string containing the
+C++ default value. For example, a float default value should be specified as
+like `"0.5f"`, and an integer array default value should be specified as like
+`"{1, 2, 3}"`.
+
+#### Confining attributes
+
+`Confined` is provided as a general mechanism to help modelling further
+constraints on attributes beyond the ones brought by value types. You can use
+`Confined` to compose complex constraints out of more primitive ones. For
+example, a 32-bit integer attribute whose minimum value must be 10 can be
+expressed as `Confined<I32Attr, [IntMinValue<10>]>`.
+
+Right now, the following primitive constraints are supported:
+
+* `IntMinValue<N>`: Specifying an integer attribute to be greater than or
+ equal to `N`
+* `IntMaxValue<N>`: Specifying an integer attribute to be less than or equal
+ to `N`
+* `ArrayMinCount<N>`: Specifying an array attribute to have at least `N`
+ elements
+* `IntArrayNthElemEq<I, N>`: Specifying an integer array attribute's `I`-th
+ element to be equal to `N`
+* `IntArrayNthElemMinValue<I, N>`: Specifying an integer array attribute's
+ `I`-th element to be greater than or equal to `N`
+
+TODO: Design and implement more primitive constraints
+
+### Operation results
+
+Similar to operands, results are specified inside the `dag`-typed `results`, led
+by `outs`:
+
+```tablegen
+let results = (outs
+ <type-constraint>:$<result-name>,
+ ...
+);
+```
+
+#### Variadic results
+
+Similar to variadic operands, `Variadic<...>` can also be used for results.
+And similarly, `SameVariadicResultSize` for multiple variadic results in the
+same operation.
+
+### Operation traits and constraints
+
+Traits are operation properties that affect syntax or semantics. MLIR C++
+models various traits in the `mlir::OpTrait` namespace.
+
+Both operation traits, [interfaces](#operation-interfaces), and constraints
+involving multiple operands/attributes/results are provided as the second
+template parameter to the `Op` class. They should be deriving from the `OpTrait`
+class. See [Constraints](#constraints) for more information.
+
+### Operation interfaces
+
+[Operation interfaces](Interfaces.md#operation-interfaces) are a mechanism by
+which to opaquely call methods and access information on an *Op instance*,
+without knowing the exact operation type. Operation interfaces defined in C++
+can be accessed in the ODS framework via the `OpInterfaceTrait` class. Aside
+from using pre-existing interfaces in the C++ API, the ODS framework also
+provides a simplified mechanism for defining such interfaces; that removes much
+of the boilerplate necessary.
+
+Providing a definition of the `OpInterface` class will auto-generate the C++
+classes for the interface. An `OpInterface` includes a name, for the C++ class,
+a description, and a list of interface methods.
+
+```tablegen
+def MyInterface : OpInterface<"MyInterface"> {
+ let description = ...;
+ let methods = [...];
+}
+```
+
+There are two types of methods that can be used with an interface,
+`InterfaceMethod` and `StaticInterfaceMethod`. They are both comprised of the
+same core components, with the distinction that `StaticInterfaceMethod` models a
+static method on the derived operation.
+
+An `InterfaceMethod` is comprised of the following components:
+
+* Description
+ - A string description of what this method does and its invariants.
+* ReturnType
+ - A string corresponding to the C++ return type of the method.
+* MethodName
+ - A string corresponding to the desired name of the method.
+* Arguments (Optional)
+ - A dag of strings that correspond to a C++ type and variable name
+ respectively.
+* MethodBody (Optional)
+ - An optional explicit implementation of the interface method.
+ - `ConcreteOp` is an implicitly defined typename that can be used to refer
+ to the type of the derived operation currently being operated on.
+ - In non-static methods, a variable 'ConcreteOp op' is defined and may be
+ used to refer to an instance of the derived operation.
+* DefaultImplementation (Optional)
+ - An optional explicit default implementation of the interface method.
+ - This method is placed within the `Trait` class that is attached to the
+ operation. As such, this method has the same characteristics as any
+ other [`Trait`](Traits.md) method.
+ - `ConcreteOp` is an implicitly defined typename that can be used to refer
+ to the type of the derived operation currently being operated on.
+
+ODS also allows generating the declarations for the `InterfaceMethod` of the op
+if one specifies the interface with `DeclareOpInterfaceMethods` (see example
+below).
+
+Examples:
+
+```tablegen
+def MyInterface : OpInterface<"MyInterface"> {
+ let description = [{
+ My interface is very interesting. ...
+ }];
+
+ let methods = [
+ // A simple non-static method with no inputs.
+ InterfaceMethod<"'foo' is a non-static method with no inputs.",
+ "unsigned", "foo"
+ >,
+
+ // A new non-static method accepting an input argument.
+ InterfaceMethod<"/*insert doc here*/",
+ "Value ", "bar", (ins "unsigned":$i)
+ >,
+
+ // Query a static property of the derived operation.
+ StaticInterfaceMethod<"'fooStatic' is a static method with no inputs.",
+ "unsigned", "fooStatic"
+ >,
+
+ // Provide the definition of a static interface method.
+ // Note: `ConcreteOp` corresponds to the derived operation typename.
+ StaticInterfaceMethod<"/*insert doc here*/",
+ "Operation *", "create", (ins "OpBuilder &":$builder, "Location":$loc), [{
+ return builder.create<ConcreteOp>(loc);
+ }]>,
+
+ // Provide a definition of the non-static method.
+ // Note: `op` corresponds to the derived operation variable.
+ InterfaceMethod<"/*insert doc here*/",
+ "unsigned", "getNumInputsAndOutputs", (ins), [{
+ return op.getNumInputs() + op.getNumOutputs();
+ }]>,
+
+ // Provide only a default definition of the method.
+ // Note: `ConcreteOp` corresponds to the derived operation typename.
+ InterfaceMethod<"/*insert doc here*/",
+ "unsigned", "getNumInputsAndOutputs", (ins), /*methodBody=*/[{}], [{
+ ConcreteOp op = cast<ConcreteOp>(getOperation());
+ return op.getNumInputs() + op.getNumOutputs();
+ }]>,
+ ];
+}
+
+// Interfaces can optionally be wrapped inside DeclareOpInterfaceMethods. This
+// would result in autogenerating declarations for members `foo`, `bar` and
+// `fooStatic`. Methods with bodies are not declared inside the op
+// declaration but instead handled by the op interface trait directly.
+def OpWithInferTypeInterfaceOp : Op<...
+ [DeclareOpInterfaceMethods<MyInterface>]> { ... }
+```
+
+### Builder methods
+
+For each operation, there are a few builders automatically generated based on
+the arguments and returns types. For example, given the following op definition:
+
+```tablegen
+def MyOp : ... {
+ let arguments = (ins
+ I32:$i32_operand,
+ F32:$f32_operand,
+ ...,
+
+ I32Attr:$i32_attr,
+ F32Attr:$f32_attr,
+ ...
+ );
+
+ let results = (outs
+ I32:$i32_result,
+ F32:$f32_result,
+ ...
+ );
+}
+```
+
+The following builders are generated:
+
+```c++
+// All result-types/operands/attributes have one aggregate parameter.
+static void build(Builder *tblgen_builder, OperationState &tblgen_state,
+ ArrayRef<Type> resultTypes,
+ ValueRange operands,
+ ArrayRef<NamedAttribute> attributes);
+
+// Each result-type/operand/attribute has a separate parameter. The parameters
+// for attributes are of mlir::Attribute types.
+static void build(Builder *tblgen_builder, OperationState &tblgen_state,
+ Type i32_result, Type f32_result, ...,
+ Value i32_operand, Value f32_operand, ...,
+ IntegerAttr i32_attr, FloatAttr f32_attr, ...);
+
+// Each result-type/operand/attribute has a separate parameter. The parameters
+// for attributes are raw values unwrapped with mlir::Attribute instances.
+// (Note that this builder will not always be generated. See the following
+// explanation for more details.)
+static void build(Builder *tblgen_builder, OperationState &tblgen_state,
+ Type i32_result, Type f32_result, ...,
+ Value i32_operand, Value f32_operand, ...,
+ APInt i32_attr, StringRef f32_attr, ...);
+
+// Each operand/attribute has a separate parameter but result type is aggregate.
+static void build(Builder *tblgen_builder, OperationState &tblgen_state,
+ ArrayRef<Type> resultTypes,
+ Value i32_operand, Value f32_operand, ...,
+ IntegerAttr i32_attr, FloatAttr f32_attr, ...);
+
+// All operands/attributes have aggregate parameters.
+// Generated if InferTypeOpInterface interface is specified.
+static void build(Builder *tblgen_builder, OperationState &tblgen_state,
+ ValueRange operands,
+ ArrayRef<NamedAttribute> attributes);
+
+// (And manually specified builders depending on the specific op.)
+```
+
+The first form provides basic uniformity so that we can create ops using the
+same form regardless of the exact op. This is particularly useful for
+implementing declarative pattern rewrites.
+
+The second and third forms are good for use in manually written code given that
+they provide better guarantee via signatures.
+
+The third form will be generated if any of the op's attribute has different
+`Attr.returnType` from `Attr.storageType` and we know how to build an attribute
+from an unwrapped value (i.e., `Attr.constBuilderCall` is defined.)
+Additionally, for the third form, if an attribute appearing later in the
+`arguments` list has a default value, the default value will be supplied in the
+declaration. This works for `BoolAttr`, `StrAttr`, `EnumAttr` for now and the
+list can grow in the future. So if possible, default valued attribute should be
+placed at the end of the `arguments` list to leverage this feature. (This
+behavior is essentially due to C++ function parameter default value placement
+restrictions.) Otherwise, the builder of the third form will still be generated
+but default values for the attributes not at the end of the `arguments` list
+will not be supplied in the builder's signature.
+
+And there may potentially exist other builders depending on the specific op;
+please refer to the
+[generated C++ file](#run-mlir-tblgen-to-see-the-generated-content) for the
+complete list.
+
+#### Custom builder methods
+
+However, if the above cases cannot satisfy all needs, you can define additional
+convenience build methods with `OpBuilder`.
+
+`OpBuilder` is a class that takes the parameter list and the optional `build()`
+method body. They are separated because we need to generate op declaration and
+definition into separate files. The parameter list should _include_ `Builder
+*builder, OperationState &state`. If the `body` is not provided, _only_ the
+builder declaration will be generated; this provides a way to define complicated
+builders entirely in C++ files.
+
+For example, for the following op:
+
+```tablegen
+def MyOp : Op<"my_op", []> {
+ let arguments = (ins F32Attr:$attr);
+
+ let results = (outs);
+}
+```
+
+If we want to define a builder with a default value for the only attribute, we
+can add into `MyOp`:
+
+```tablegen
+def MyOp : ... {
+ ...
+
+ let builders = [
+ OpBuilder<"Builder *builder, OperationState &state, float val = 0.5f", [{
+ state.addAttribute("attr", builder->getF32FloatAttr(val));
+ }]>
+ ];
+}
+```
+
+The generated builder will look like:
+
+```c++
+static void build(Builder *builder, OperationState &state, float val = 0.5f) {
+ state.addAttribute("attr", builder->getF32FloatAttr(val));
+}
+```
+
+### Custom parser and printer methods
+
+Functions to parse and print the operation's custom assembly form.
+
+### Custom verifier code
+
+Verification code will be automatically generated for
+[constraints](#constraints) specified on various entities of the op. To
+perform _additional_ verification, you can use
+
+```tablegen
+let verifier = [{
+ ...
+}];
+```
+
+Code placed in `verifier` will be called after the auto-generated verification
+code.
+
+### `hasCanonicalizer`
+
+This boolean field indicate whether canonicalization patterns have been defined
+for this operation. If it is `1`, then `::getCanonicalizationPatterns()` should
+be defined.
+
+### `hasFolder`
+
+This boolean field indicate whether general folding rules have been defined
+for this operation. If it is `1`, then `::fold()` should be defined.
+
+### Extra declarations
+
+One of the goals of table-driven op definition is to auto-generate as much logic
+and methods needed for each op as possible. With that said, there will always be
+long-tail cases that won't be covered. For such cases, you can use
+`extraClassDeclaration`. Code in `extraClassDeclaration` will be copied
+literally to the generated C++ op class.
+
+Note that `extraClassDeclaration` is a mechanism intended for long-tail cases
+by power users; for not-yet-implemented widely-applicable cases, improving the
+infrastructure is preferable.
+
+### Generated C++ code
+
+[OpDefinitionsGen][OpDefinitionsGen] processes the op definition spec file and
+generates two files containing the corresponding C++ code: one for declarations,
+the other for definitions. The former is generated via the `-gen-op-decls`
+command-line option, while the latter is via the `-gen-op-defs` option.
+
+The definition file contains all the op method definitions, which can be
+included and enabled by defining `GET_OP_CLASSES`. For each operation,
+OpDefinitionsGen generates an operation class and an
+[operand adaptor](#operand-adaptors) class. Besides, it also contains a
+comma-separated list of all defined ops, which can be included and enabled by
+defining `GET_OP_LIST`.
+
+#### Class name and namespaces
+
+For each operation, its generated C++ class name is the symbol `def`ed with
+TableGen with dialect prefix removed. The first `_` serves as the delimiter.
+For example, for `def TF_AddOp`, the C++ class name would be `AddOp`.
+We remove the `TF` prefix because it is for scoping ops; other dialects
+may as well define their own `AddOp`s.
+
+The namespaces of the generated C++ class will come from the dialect's
+`cppNamespace` field. For example, if a dialect's `cppNamespace` is `A::B`,
+then an op of that dialect will be placed in
+`namespace A { namespace B { ... } }`. If a dialect does not specify a
+`cppNamespace`, we then use the dialect's name as the namespace.
+
+This means the qualified name of the generated C++ class does not necessarily
+match exactly with the operation name as explained in
+[Operation name](#operation-name). This is to allow flexible naming to satisfy
+coding style requirements.
+
+#### Operand adaptors
+
+For each operation, we automatically generate an _operand adaptor_. This class
+solves the problem of accessing operands provided as a list of `Value`s without
+using "magic" constants. The operand adaptor takes a reference to an array of
+`Value` and provides methods with the same names as those in the operation class
+to access them. For example, for a binary arithmetic operation, it may provide
+`.lhs()` to access the first operand and `.rhs()` to access the second operand.
+
+The operand adaptor class lives in the same namespace as the operation class,
+and has the name of the operation followed by `OperandAdaptor`. A template
+declaration `OperandAdaptor<>` is provided to look up the operand adaptor for
+the given operation.
+
+Operand adaptors can be used in function templates that also process operations:
+
+```c++
+template <typename BinaryOpTy>
+std::pair<Value, Value> zip(BinaryOpTy &&op) {
+ return std::make_pair(op.lhs(), op.rhs());;
+}
+
+void process(AddOp op, ArrayRef<Value> newOperands) {
+ zip(op);
+ zip(OperandAdaptor<AddOp>(newOperands));
+ /*...*/
+}
+```
+
+## Constraints
+
+Constraint is a core concept in table-driven operation definition: operation
+verification and graph operation matching are all based on satisfying
+constraints. So both the operation definition and rewrite rules specification
+significantly involve writing constraints. We have the `Constraint` class in
+[`OpBase.td`][OpBase] has the common base class for all constraints.
+
+An operation's constraint can cover different range; it may
+
+* Only concern a single attribute (e.g. being an 32-bit integer greater than 5),
+* Multiple operands and results (e.g., the 1st result's shape must be the same
+ as the 1st operand), or
+* Intrinsic to the operation itself (e.g., having no side effect).
+
+We call them as single-entity constraint, multi-entity constraint, and traits,
+respectively.
+
+### Single-entity constraint
+
+Constraints scoped to a single operand, attribute, or result are specified at
+the entity's declaration place as described in
+[Operation arguments](#operation-arguments) and
+[Operation results](#operation-results).
+
+To help modelling constraints of common types, a set of `TypeConstraint`s are
+created; they are the `Type` subclass hierarchy. It includes `F32` for the
+constraints of being a float, `TensorOf<[F32]>` for the constraints of being
+a float tensor, and so on.
+
+Similarly, a set of `AttrConstraint`s are created for helping modelling
+constraints of common attribute kinds. They are the `Attr` subclass hierarchy.
+It includes `F32Attr` for the constraints of being a float attribute,
+`F32ArrayAttr` for the constraints of being a float array attribute, and so on.
+
+### Multi-entity constraint
+
+Constraints involving more than one operand/attribute/result are quite common
+on operations, like the element type and shape relation between operands and
+results. These constraints should be specified as the `Op` class template
+parameter as described in
+[Operation traits and constraints](#operation-traits-and-constraints).
+
+Multi-entity constraints are modeled as `PredOpTrait` (a subclass of `OpTrait`)
+in [`OpBase.td`][OpBase].A bunch of constraint primitives are provided to help
+specification. See [`OpBase.td`][OpBase] for the complete list.
+
+### Trait
+
+Traits are intrinsic properties of the operation like having side effect or not,
+commutative or not, whether is a terminator, etc. These constraints should be
+specified as the `Op` class template parameter as described in
+[Operation traits and constraints](#operation-traits-and-constraints).
+
+Traits are modeled as `NativeOpTrait` (a subclass of `OpTrait`) in
+[`OpBase.td`][OpBase]. They are backed and will be translated into the
+corresponding C++ `mlir::OpTrait` classes.
+
+### How to specify new constraint
+
+To write a constraint, you need to provide its predicates and give it a
+descriptive name. Predicates, modeled with the `Pred` class, are the workhorse
+for composing constraints. The predicate for a constraint is typically built up
+in a nested manner, using the two categories of predicates:
+
+1. `CPred`: the primitive leaf predicate.
+2. Compound predicate: a predicate composed from child predicates using
+ predicate combiners (conjunction: `And`, disjunction: `Or`, negation: `Neg`,
+ substitution: `SubstLeaves`, concatenation: `Concat`).
+
+`CPred` is the basis for composing more complex predicates. It is the "atom"
+predicate from the perspective of TableGen and the "interface" between
+TableGen and C++. What is inside is already C++ code, which will be treated
+as opaque strings with special placeholders to be substituted.
+
+You can put any C++ code that returns a boolean value inside a `CPred`,
+including evaluating expressions, calling functions, calling class methods,
+and so on.
+
+To help interaction with the C++ environment, there are a few special
+placeholders provided to refer to entities in the context where this predicate
+is used. They serve as "hooks" to the enclosing environment. This includes
+`$_builder`, `$_op`, and `$_self`:
+
+* `$_builder` will be replaced by a `mlir::Builder` instance so that you can
+ access common build methods.
+* `$_op` will be replaced by the current operation so that you can access
+ information of the current operation.
+* `$_self` will be replaced with the entity this predicate is attached to.
+ E.g., `BoolAttr` is an attribute constraint that wraps a
+ `CPred<"$_self.isa<BoolAttr>()">`. Then for `F32:$attr`,`$_self` will be
+ replaced by `$attr`. For type constraints, it's a little bit special since
+ we want the constraints on each type definition reads naturally and we want
+ to attach type constraints directly to an operand/result, `$_self` will be
+ replaced by the operand/result's type. E.g., for `F32` in `F32:$operand`, its
+ `$_self` will be expanded as `getOperand(...)->getType()`.
+
+TODO(b/130663252): Reconsider the leading symbol for special placeholders.
+Eventually we want to allow referencing operand/result $-names; such $-names
+can start with underscore.
+
+For example, to write an attribute `attr` is an `IntegerAttr`, in C++ you can
+just call `attr.isa<IntegerAttr>()`. The code can be wrapped in a `CPred` as
+`$_self.isa<IntegerAttr>()`, with `$_self` as the special placeholder to be
+replaced by the current attribute `attr` at expansion time.
+
+For more complicated predicates, you can wrap it in a single `CPred`, or you
+can use predicate combiners to combine them. For example, to write the
+constraint that an attribute `attr` is a 32-bit or 64-bit integer, you can
+write it as
+
+```tablegen
+And<[
+ CPred<"$_self.isa<IntegerAttr>()">,
+ Or<[
+ CPred<"$_self.cast<IntegerAttr>().getType().isInteger(32)">,
+ CPred<"$_self.cast<IntegerAttr>().getType().isInteger(64)">
+ ]>
+]>
+```
+
+(Note that the above is just to show with a familiar example how you can use
+`CPred` and predicate combiners to write complicated predicates. For integer
+attributes specifically, [`OpBase.td`][OpBase] already defines `I32Attr` and
+`I64Attr`. So you can actually reuse them to write it as `Or<[I32Attr.predicate,
+I64Attr.predicate]>`.)
+
+TODO: Build up a library of reusable primitive constraints
+
+If the predicate is very complex to write with `CPred` together with predicate
+combiners, you can also write it as a normal C++ function and use the `CPred`
+as a way to "invoke" the function. For example, to verify an attribute `attr`
+has some property, you can write a C++ function like
+
+```cpp
+bool HasSomeProperty(Attribute attr) { ... }
+```
+
+and then define the op as:
+
+```tablegen
+def HasSomeProperty : AttrConstraint<CPred<"HasSomeProperty($_self)">,
+ "has some property">;
+
+def MyOp : Op<...> {
+ let arguments = (ins
+ ...
+ HasSomeProperty:$attr
+ );
+}
+```
+
+As to whether we should define the predicate using a single `CPred` wrapping
+the whole expression, multiple `CPred`s with predicate combiners, or a single
+`CPred` "invoking" a function, there are no clear-cut criteria. Defining using
+`CPred` and predicate combiners is preferable since it exposes more information
+(instead hiding all the logic behind a C++ function) into the op definition spec
+so that it can potentially drive more auto-generation cases. But it will
+require a nice library of common predicates as the building blocks to avoid the
+duplication, which is being worked on right now.
+
+## Attribute Definition
+
+### Enum attributes
+
+Some attributes can only take values from an predefined enum, e.g., the
+comparison kind of a comparison op. To define such attributes, ODS provides
+several mechanisms: `StrEnumAttr`, `IntEnumAttr`, and `BitEnumAttr`.
+
+* `StrEnumAttr`: each enum case is a string, the attribute is stored as a
+ [`StringAttr`][StringAttr] in the op.
+* `IntEnumAttr`: each enum case is an integer, the attribute is stored as a
+ [`IntegerAttr`][IntegerAttr] in the op.
+* `BitEnumAttr`: each enum case is a bit, the attribute is stored as a
+ [`IntegerAttr`][IntegerAttr] in the op.
+
+All these `*EnumAttr` attributes require fully specifying all of the allowed
+cases via their corresponding `*EnumAttrCase`. With this, ODS is able to
+generate additional verification to only accept allowed cases. To facilitate the
+interaction between `*EnumAttr`s and their C++ consumers, the
+[`EnumsGen`][EnumsGen] TableGen backend can generate a few common utilities: a
+C++ enum class, `llvm::DenseMapInfo` for the enum class, conversion functions
+from/to strings. This is controlled via the `-gen-enum-decls` and
+`-gen-enum-defs` command-line options of `mlir-tblgen`.
+
+For example, given the following `EnumAttr`:
+
+```tablegen
+def Case15: I32EnumAttrCase<"Case15", 15>;
+def Case20: I32EnumAttrCase<"Case20", 20>;
+
+def MyIntEnum: I32EnumAttr<"MyIntEnum", "An example int enum",
+ [Case15, Case20]> {
+ let cppNamespace = "Outer::Inner";
+ let stringToSymbolFnName = "ConvertToEnum";
+ let symbolToStringFnName = "ConvertToString";
+}
+```
+
+The following will be generated via `mlir-tblgen -gen-enum-decls`:
+
+```c++
+namespace Outer {
+namespace Inner {
+// An example int enum
+enum class MyIntEnum : uint32_t {
+ Case15 = 15,
+ Case20 = 20,
+};
+
+llvm::Optional<MyIntEnum> symbolizeMyIntEnum(uint32_t);
+llvm::StringRef ConvertToString(MyIntEnum);
+llvm::Optional<MyIntEnum> ConvertToEnum(llvm::StringRef);
+inline constexpr unsigned getMaxEnumValForMyIntEnum() {
+ return 20;
+}
+
+} // namespace Inner
+} // namespace Outer
+
+namespace llvm {
+template<> struct DenseMapInfo<Outer::Inner::MyIntEnum> {
+ using StorageInfo = llvm::DenseMapInfo<uint32_t>;
+
+ static inline Outer::Inner::MyIntEnum getEmptyKey() {
+ return static_cast<Outer::Inner::MyIntEnum>(StorageInfo::getEmptyKey());
+ }
+
+ static inline Outer::Inner::MyIntEnum getTombstoneKey() {
+ return static_cast<Outer::Inner::MyIntEnum>(StorageInfo::getTombstoneKey());
+ }
+
+ static unsigned getHashValue(const Outer::Inner::MyIntEnum &val) {
+ return StorageInfo::getHashValue(static_cast<uint32_t>(val));
+ }
+
+ static bool isEqual(const Outer::Inner::MyIntEnum &lhs, const Outer::Inner::MyIntEnum &rhs) {
+ return lhs == rhs;
+ }
+};
+}
+```
+
+The following will be generated via `mlir-tblgen -gen-enum-defs`:
+
+```c++
+namespace Outer {
+namespace Inner {
+llvm::StringRef ConvertToString(MyIntEnum val) {
+ switch (val) {
+ case MyIntEnum::Case15: return "Case15";
+ case MyIntEnum::Case20: return "Case20";
+ }
+ return "";
+}
+
+llvm::Optional<MyIntEnum> ConvertToEnum(llvm::StringRef str) {
+ return llvm::StringSwitch<llvm::Optional<MyIntEnum>>(str)
+ .Case("Case15", MyIntEnum::Case15)
+ .Case("Case20", MyIntEnum::Case20)
+ .Default(llvm::None);
+}
+llvm::Optional<MyIntEnum> symbolizeMyIntEnum(uint32_t value) {
+ switch (value) {
+ case 15: return MyIntEnum::Case15;
+ case 20: return MyIntEnum::Case20;
+ default: return llvm::None;
+ }
+}
+
+} // namespace Inner
+} // namespace Outer
+```
+
+Similarly for the following `BitEnumAttr` definition:
+
+```tablegen
+def None: BitEnumAttrCase<"None", 0x0000>;
+def Bit1: BitEnumAttrCase<"Bit1", 0x0001>;
+def Bit2: BitEnumAttrCase<"Bit2", 0x0002>;
+def Bit3: BitEnumAttrCase<"Bit3", 0x0004>;
+
+def MyBitEnum: BitEnumAttr<"MyBitEnum", "An example bit enum",
+ [None, Bit1, Bit2, Bit3]>;
+```
+
+We can have:
+
+```c++
+// An example bit enum
+enum class MyBitEnum : uint32_t {
+ None = 0,
+ Bit1 = 1,
+ Bit2 = 2,
+ Bit3 = 4,
+};
+
+llvm::Optional<MyBitEnum> symbolizeMyBitEnum(uint32_t);
+std::string stringifyMyBitEnum(MyBitEnum);
+llvm::Optional<MyBitEnum> symbolizeMyBitEnum(llvm::StringRef);
+inline MyBitEnum operator|(MyBitEnum lhs, MyBitEnum rhs) {
+ return static_cast<MyBitEnum>(static_cast<uint32_t>(lhs) | static_cast<uint32_t>(rhs));
+}
+inline MyBitEnum operator&(MyBitEnum lhs, MyBitEnum rhs) {
+ return static_cast<MyBitEnum>(static_cast<uint32_t>(lhs) & static_cast<uint32_t>(rhs));
+}
+inline bool bitEnumContains(MyBitEnum bits, MyBitEnum bit) {
+ return (static_cast<uint32_t>(bits) & static_cast<uint32_t>(bit)) != 0;
+}
+
+namespace llvm {
+template<> struct DenseMapInfo<::MyBitEnum> {
+ using StorageInfo = llvm::DenseMapInfo<uint32_t>;
+
+ static inline ::MyBitEnum getEmptyKey() {
+ return static_cast<::MyBitEnum>(StorageInfo::getEmptyKey());
+ }
+
+ static inline ::MyBitEnum getTombstoneKey() {
+ return static_cast<::MyBitEnum>(StorageInfo::getTombstoneKey());
+ }
+
+ static unsigned getHashValue(const ::MyBitEnum &val) {
+ return StorageInfo::getHashValue(static_cast<uint32_t>(val));
+ }
+
+ static bool isEqual(const ::MyBitEnum &lhs, const ::MyBitEnum &rhs) {
+ return lhs == rhs;
+ }
+};
+```
+
+```c++
+std::string stringifyMyBitEnum(MyBitEnum symbol) {
+ auto val = static_cast<uint32_t>(symbol);
+ // Special case for all bits unset.
+ if (val == 0) return "None";
+
+ llvm::SmallVector<llvm::StringRef, 2> strs;
+ if (1u & val) { strs.push_back("Bit1"); val &= ~1u; }
+ if (2u & val) { strs.push_back("Bit2"); val &= ~2u; }
+ if (4u & val) { strs.push_back("Bit3"); val &= ~4u; }
+
+ if (val) return "";
+ return llvm::join(strs, "|");
+}
+
+llvm::Optional<MyBitEnum> symbolizeMyBitEnum(llvm::StringRef str) {
+ // Special case for all bits unset.
+ if (str == "None") return MyBitEnum::None;
+
+ llvm::SmallVector<llvm::StringRef, 2> symbols;
+ str.split(symbols, "|");
+
+ uint32_t val = 0;
+ for (auto symbol : symbols) {
+ auto bit = llvm::StringSwitch<llvm::Optional<uint32_t>>(symbol)
+ .Case("Bit1", 1)
+ .Case("Bit2", 2)
+ .Case("Bit3", 4)
+ .Default(llvm::None);
+ if (bit) { val |= *bit; } else { return llvm::None; }
+ }
+ return static_cast<MyBitEnum>(val);
+}
+
+llvm::Optional<MyBitEnum> symbolizeMyBitEnum(uint32_t value) {
+ // Special case for all bits unset.
+ if (value == 0) return MyBitEnum::None;
+
+ if (value & ~(1u | 2u | 4u)) return llvm::None;
+ return static_cast<MyBitEnum>(value);
+}
+```
+
+TODO(b/132506080): This following is outdated. Update it.
+
+An attribute is a compile time known constant of an operation. Attributes are
+required to be known to construct an operation (e.g., the padding behavior is
+required to fully define the `conv2d` op).
+
+Attributes are defined as having a storage type (corresponding to a derived
+class of `mlir::Attribute`), a return type (that corresponds to the C++ type to
+use in the generation of the helper accessors) as well as method to convert
+between the internal storage and the helper method. Derived attributes are a
+special class of attributes that do not have storage but are instead calculated
+based on the operation and its attributes.
+
+## Debugging Tips
+
+### Run `mlir-tblgen` to see the generated content
+
+TableGen syntax sometimes can be obscure; reading the generated content can be
+a very helpful way to understand and debug issues. To build `mlir-tblgen`, run
+`cmake --build . --target mlir-tblgen` in your build directory and find the
+`mlir-tblgen` binary in the `bin/` subdirectory. All the supported generators
+can be found via `mlir-tblgen --help`. For example, `--gen-op-decls` and
+`--gen-op-defs` as explained in [Generated C++ code](#generated-c++-code).
+
+To see the generated code, invoke `mlir-tblgen` with a specific generator by
+providing include paths via `-I`. For example,
+
+```sh
+# To see op C++ class declaration
+mlir-tblgen --gen-op-decls -I /path/to/mlir/include /path/to/input/td/file
+# To see op C++ class definition
+mlir-tblgen --gen-op-defs -I /path/to/mlir/include /path/to/input/td/file
+# To see op documentation
+mlir-tblgen --gen-op-doc -I /path/to/mlir/include /path/to/input/td/file
+
+# To see op interface C++ class declaration
+mlir-tblgen --gen-op-interface-decls -I /path/to/mlir/include /path/to/input/td/file
+# To see op interface C++ class definition
+mlir-tblgen --gen-op-interface-defs -I /path/to/mlir/include /path/to/input/td/file
+# To see op interface documentation
+mlir-tblgen --gen-op-interface-doc -I /path/to/mlir/include /path/to/input/td/file
+```
+
+
+## Appendix
+
+### Requirements and existing mechanisms analysis
+
+The op description should as declarative as possible to allow a wide range of
+tools to work with them and query methods generated from them. In particular
+this means specifying traits, constraints and shape inference information in
+a way that is easily analyzable (e.g., avoid opaque calls to C++ functions where
+possible).
+
+We considered the approaches of several contemporary systems and focused on
+requirements that were desirable:
+
+* Ops registered using a registry separate from C++ code.
+ * Unknown ops are allowed in MLIR, so ops need not be registered. The
+ ability of the compiler to optimize those ops or graphs containing those
+ ops is constrained but correct.
+ * The current proposal does not include a runtime op description, but it
+ does not preclude such description, it can be added later.
+ * The op registry is essential for generating C++ classes that make
+ manipulating ops, verifying correct construction etc. in C++ easier by
+ providing a typed representation and accessors.
+* The op registry will be defined in
+ [TableGen](https://llvm.org/docs/TableGen/index.html) and be used to
+ generate C++ classes and utility functions
+ (builder/verifier/parser/printer).
+ * TableGen is a modelling specification language used by LLVM's backends
+ and fits in well with trait-based modelling. This is an implementation
+ decision and there are alternative ways of doing this. But the
+ specification language is good for the requirements of modelling the
+ traits (as seen from usage in LLVM processor backend modelling) and easy
+ to extend, so a practical choice. If another good option comes up, we
+ will consider it.
+* MLIR allows both defined and undefined ops.
+ * Defined ops should have fixed semantics and could have a corresponding
+ reference implementation defined using, for example, EDSC.
+ * Dialects are under full control of the dialect owner and normally live
+ with the framework of the dialect.
+* The op's traits (e.g., commutative) are modelled along with the op in the
+ registry.
+* The op's operand/return type constraints are modelled along with the op in
+ the registry (see [Shape inference](#shape-inference) discussion below),
+ this allows (e.g.) optimized concise syntax in textual dumps.
+* Behavior of the op is documented along with the op with a summary and a
+ description. The description is written in markdown and extracted for
+ inclusion in the generated LangRef section of the dialect.
+* The generic assembly form of printing and parsing is available as normal,
+ but a custom parser and printer can either be specified or automatically
+ generated from an optional string representation showing the mapping of the
+ "assembly" string to operands/type.
+ * Parser-level remappings (e.g., `eq` to enum) will be supported as part
+ of the parser generation.
+* Matching patterns are specified separately from the op description.
+ * Contrasted with LLVM there is no "base" set of ops that every backend
+ needs to be aware of. Instead there are many different dialects and the
+ transformations/legalizations between these dialects form a graph of
+ transformations.
+* Reference implementation may be provided along with the op definition.
+
+ * The reference implementation may be in terms of either standard ops or
+ other reference implementations.
+
+ TODO: document expectation if the dependent op's definition changes.
+
+### A proposal for auto-generating printer and parser methods
+
+NOTE: Auto-generating printing/parsing (as explained in the below) has _not_
+been prototyped, and potentially just being able to specify custom printer/
+parser methods are sufficient. This should presumably be influenced by the
+design of the assembler/disassembler logic that LLVM backends get for free
+for machine instructions.
+
+The custom assembly form of the operation is specified using a string with
+matching operation name, operands and attributes. With the ability
+to express additional information that needs to be parsed to build the
+operation:
+
+```tablegen
+tfl.add $lhs, $rhs {fused_activation_function: $fused_activation_function}: ${type(self)}
+```
+
+1. The output is never shown in the "mnemonics" string as that is fixed form
+ and cannot be altered.
+1. Custom parsing of ops may include some punctuation (e.g., parenthesis).
+1. The operands/results are added to the created operation in the order that
+ they are shown in the input and output dags.
+1. The `${type(self)}` operator is used to represent the type of the operator.
+ The type of operands can also be queried.
+1. Attributes names are matched to the placeholders in the mnemonic strings.
+ E.g., attribute axis is matched with `$axis`. Custom parsing for attribute
+ type can be defined along with the attribute definition.
+1. The information in the custom assembly form should be sufficient to invoke
+ the builder generated. That may require being able to propagate information
+ (e.g., the `$lhs` has the same type as the result).
+
+Printing is effectively the inverse of the parsing function generated with the
+mnemonic string serving as a template.
+
+### Shape inference
+
+Type constraints are along (at least) three axis: 1) elemental type, 2) rank
+(including static or dynamic), 3) dimensions. While some ops have no compile
+time fixed shape (e.g., output shape is dictated by data) we could still have
+some knowledge of constraints/bounds in the system for that op (e.g., the output
+of a `tf.where` is at most the size of the input data). And so there are
+additional valuable constraints that could be captured even without full
+knowledge.
+
+Initially the shape inference will be declaratively specified using:
+
+* Constraint on the operands of an operation directly. For example
+ constraining the input type to be tensor/vector elements or that the
+ elemental type be of a specific type (e.g., output of sign is of elemental
+ type `i1`) or class (e.g., float like).
+* Constraints across operands and results of an operation. For example,
+ enabling specifying equality constraints on type/constituents of a type
+ (shape and elemental type) between operands and results (e.g., the output
+ type of an add is the same as those of the input operands).
+
+In general there is an input/output transfer function which maps the inputs to
+the outputs (e.g., given input X and Y [or slices thereof] with these sizes, the
+output is Z [or this slice thereof]). Such a function could be used to determine
+the output type (shape) for given input type (shape).
+
+But shape functions are determined by attributes and could be arbitrarily
+complicated with a wide-range of specification possibilities. Equality
+relationships are common (e.g., the elemental type of the output matches the
+primitive type of the inputs, both inputs have exactly the same type [primitive
+type and shape]) and so these should be easy to specify. Algebraic relationships
+would also be common (e.g., a concat of `[n,m]` and `[n,m]` matrix along axis 0
+is `[n+n, m]` matrix), while some ops only have defined shapes under certain
+cases (e.g., matrix multiplication of `[a,b]` and `[c,d]` is only defined if
+`b == c`). As ops are also verified, the shape inference need only specify rules
+for the allowed cases (e.g., shape inference for matmul can ignore the case
+where `b != c`), which would simplify type constraint specification.
+
+Instead of specifying an additional mechanism to specify a shape transfer
+function, the reference implementation of the operation will be used to derive
+the shape function. The reference implementation is general and can support the
+arbitrary computations needed to specify output shapes.
+
+[TableGen]: https://llvm.org/docs/TableGen/index.html
+[TableGenIntro]: https://llvm.org/docs/TableGen/LangIntro.html
+[TableGenRef]: https://llvm.org/docs/TableGen/LangRef.html
+[TableGenBackend]: https://llvm.org/docs/TableGen/BackEnds.html#introduction
+[OpBase]: https://github.com/tensorflow/mlir/blob/master/include/mlir/IR/OpBase.td
+[OpDefinitionsGen]: https://github.com/tensorflow/mlir/blob/master/tools/mlir-tblgen/OpDefinitionsGen.cpp
+[EnumsGen]: https://github.com/tensorflow/mlir/blob/master/tools/mlir-tblgen/EnumsGen.cpp
+[StringAttr]: https://github.com/tensorflow/mlir/blob/master/g3doc/LangRef.md#string-attribute
+[IntegerAttr]: https://github.com/tensorflow/mlir/blob/master/g3doc/LangRef.md#integer-attribute
diff --git a/mlir/docs/Passes.md b/mlir/docs/Passes.md
new file mode 100644
index 00000000000..78ea257b57b
--- /dev/null
+++ b/mlir/docs/Passes.md
@@ -0,0 +1,298 @@
+# MLIR Passes
+
+This document describes the available MLIR passes and their contracts.
+
+[TOC]
+
+## Affine control lowering (`-lower-affine`)
+
+Convert operations related to affine control into a graph of blocks using
+operations from the standard dialect.
+
+Loop statements are converted to a subgraph of blocks (initialization, condition
+checking, subgraph of body blocks) with loop induction variable being passed as
+the block argument of the condition checking block. Conditional statements are
+converted to a subgraph of blocks (chain of condition checking with
+short-circuit logic, subgraphs of 'then' and 'else' body blocks). `affine.apply`
+operations are converted into sequences of primitive arithmetic operations that
+have the same effect, using operands of the `index` type. Consequently, named
+maps and sets may be removed from the module.
+
+For example, `%r = affine.apply (d0, d1)[s0] -> (d0 + 2*d1 + s0)(%d0, %d1)[%s0]`
+can be converted into:
+
+```mlir
+%d0 = <...>
+%d1 = <...>
+%s0 = <...>
+%0 = constant 2 : index
+%1 = muli %0, %d1
+%2 = addi %d0, %1
+%r = addi %2, %s0
+```
+
+### Input invariant
+
+- no `Tensor` types;
+
+These restrictions may be lifted in the future.
+
+### Output IR
+
+Functions with `affine.for` and `affine.if` operations eliminated. These
+functions may contain operations from the Standard dialect in addition to those
+already present before the pass.
+
+### Invariants
+
+- Functions without a body are not modified.
+- The semantics of the other functions is preserved.
+- Individual operations other than those mentioned above are not modified if
+ they do not depend on the loop iterator value or on the result of
+ `affine.apply`.
+
+## Conversion from Standard to LLVM IR dialect (`-convert-std-to-llvm`)
+
+Convert standard operations into the LLVM IR dialect operations.
+
+### Input invariant
+
+- operations including: arithmetic on integers and floats, constants, direct
+ calls, returns and branches;
+- no `tensor` types;
+- all `vector` are one-dimensional;
+- all blocks are reachable by following the successors of the first basic
+ block;
+
+If other operations are present and their results are required by the LLVM IR
+dialect operations, the pass will fail. Any LLVM IR operations or types already
+present in the IR will be kept as is.
+
+### Output IR
+
+Functions converted to LLVM IR. Function arguments types are converted
+one-to-one. Function results are converted one-to-one and, in case more than 1
+value is returned, packed into an LLVM IR struct type. Function calls and
+returns are updated accordingly. Block argument types are updated to use LLVM IR
+types.
+
+## Data Copy DMA generation (`-affine-data-copy-generate`)
+
+Replaces all loads and stores on memref's living in 'slowMemorySpace' by
+introducing DMA operations (strided DMA if necessary) to transfer data to/from
+`fastMemorySpace` and rewriting the original load's/store's to instead
+load/store from the allocated fast memory buffers. Additional options specify
+the identifier corresponding to the fast memory space and the amount of fast
+memory space available. The pass traverses through the nesting structure,
+recursing to inner levels if necessary to determine at what depth DMA transfers
+need to be placed so that the allocated buffers fit within the memory capacity
+provided. If this is not possible (for example, when the elemental type itself
+is of size larger than the DMA capacity), an error with location information is
+emitted. The DMA transfers are also hoisted up past all loops with respect to
+which the transfers are invariant.
+
+Input
+
+```mlir
+func @loop_nest_tiled() -> memref<256x1024xf32> {
+ %0 = alloc() : memref<256x1024xf32>
+ affine.for %i0 = 0 to 256 step 32 {
+ affine.for %i1 = 0 to 1024 step 32 {
+ affine.for %i2 = (d0) -> (d0)(%i0) to (d0) -> (d0 + 32)(%i0) {
+ affine.for %i3 = (d0) -> (d0)(%i1) to (d0) -> (d0 + 32)(%i1) {
+ %1 = affine.load %0[%i2, %i3] : memref<256x1024xf32>
+ }
+ }
+ }
+ }
+ return %0 : memref<256x1024xf32>
+}
+```
+
+Output (with flags: `-affine-data-copy-generate -affine-data-copy-generate-fast-mem-space=2`)
+
+```mlir
+module {
+ func @loop_nest_tiled() -> memref<256x1024xf32> {
+ %c262144 = constant 262144 : index
+ %c0 = constant 0 : index
+ %0 = alloc() : memref<256x1024xf32>
+ %1 = alloc() : memref<256x1024xf32, 2>
+ %2 = alloc() : memref<1xi32>
+ affine.dma_start %0[%c0, %c0], %1[%c0, %c0], %2[%c0], %c262144 : memref<256x1024xf32>, memref<256x1024xf32, 2>, memref<1xi32>
+ affine.dma_wait %2[%c0], %c262144 : memref<1xi32>
+ affine.for %arg0 = 0 to 256 step 32 {
+ affine.for %arg1 = 0 to 1024 step 32 {
+ affine.for %arg2 = #map1(%arg0) to #map2(%arg0) {
+ affine.for %arg3 = #map1(%arg1) to #map2(%arg1) {
+ %3 = affine.load %1[%arg2, %arg3] : memref<256x1024xf32, 2>
+ }
+ }
+ }
+ }
+ dealloc %2 : memref<1xi32>
+ dealloc %1 : memref<256x1024xf32, 2>
+ return %0 : memref<256x1024xf32>
+ }
+}
+```
+
+## Loop tiling (`-affine-loop-tile`)
+
+Performs tiling or blocking of loop nests. It currently works on perfect loop
+nests.
+
+## Loop unroll (`-affine-loop-unroll`)
+
+This pass implements loop unrolling. It is able to unroll loops with arbitrary
+bounds, and generate a cleanup loop when necessary.
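+
+The shape of this transformation can be sketched in ordinary code. The
+following Python fragment is purely illustrative (the pass itself operates on
+`affine.for` loops); it models unrolling by a factor of 4 with a cleanup loop
+for the leftover iterations:
+
+```python
+def sum_unrolled_by_4(data):
+    # Models unrolling a loop by a factor of 4: the main loop's body is
+    # replicated four times, and a cleanup loop handles the remaining
+    # len(data) mod 4 iterations when the trip count is not a multiple
+    # of the unroll factor.
+    n = len(data)
+    total = 0
+    i = 0
+    while i + 4 <= n:  # main unrolled loop
+        total += data[i]
+        total += data[i + 1]
+        total += data[i + 2]
+        total += data[i + 3]
+        i += 4
+    while i < n:  # cleanup loop
+        total += data[i]
+        i += 1
+    return total
+```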
+
+## Loop unroll and jam (`-affine-loop-unroll-jam`)
+
+This pass implements unroll and jam for loops. It works on both perfect or
+imperfect loop nests.
+
+## Loop fusion (`-affine-loop-fusion`)
+
+Performs fusion of loop nests using a slicing-based approach. The fused loop
+nests, when possible, are rewritten to access significantly smaller local
+buffers instead of the original memrefs, and the latter are often either
+completely optimized away or contracted. This transformation leads to enhanced
+locality and a lower memory footprint through the elimination or contraction of
+temporary/intermediate memrefs. These benefits are sometimes achieved at the
+expense of redundant computation, guided by a cost model that evaluates
+available choices such as the depth at which a source slice should be
+materialized in the destination slice.
+
+## Memref bound checking (`-memref-bound-check`)
+
+Checks all loads and stores on memrefs for out-of-bounds accesses, and reports
+any out-of-bounds accesses (both overrun and underrun) with location
+information.
+
+```mlir
+test/Transforms/memref-bound-check.mlir:19:13: error: 'load' op memref out of upper bound access along dimension #2
+ %x = load %A[%idx0, %idx1] : memref<9 x 9 x i32>
+ ^
+test/Transforms/memref-bound-check.mlir:19:13: error: 'load' op memref out of lower bound access along dimension #2
+ %x = load %A[%idx0, %idx1] : memref<9 x 9 x i32>
+ ^
+```
+
+## Memref dataflow optimization (`-memref-dataflow-opt`)
+
+This pass performs store-to-load forwarding for memrefs to eliminate memory
+accesses and potentially the entire memref if all its accesses are forwarded.
+
+Input
+
+```mlir
+func @store_load_affine_apply() -> memref<10x10xf32> {
+ %cf7 = constant 7.0 : f32
+ %m = alloc() : memref<10x10xf32>
+ affine.for %i0 = 0 to 10 {
+ affine.for %i1 = 0 to 10 {
+ affine.store %cf7, %m[%i0, %i1] : memref<10x10xf32>
+ %v0 = affine.load %m[%i0, %i1] : memref<10x10xf32>
+ %v1 = addf %v0, %v0 : f32
+ }
+ }
+ return %m : memref<10x10xf32>
+}
+```
+
+Output
+
+```mlir
+module {
+ func @store_load_affine_apply() -> memref<10x10xf32> {
+ %cst = constant 7.000000e+00 : f32
+ %0 = alloc() : memref<10x10xf32>
+ affine.for %arg0 = 0 to 10 {
+ affine.for %arg1 = 0 to 10 {
+ affine.store %cst, %0[%arg0, %arg1] : memref<10x10xf32>
+ %1 = addf %cst, %cst : f32
+ }
+ }
+ return %0 : memref<10x10xf32>
+ }
+}
+
+```
+
+## Memref dependence analysis (`-memref-dependence-check`)
+
+This pass performs dependence analysis to determine dependences between pairs
+of memory operations (loads and stores) on memrefs. Dependence analysis exploits
+polyhedral information available (affine maps, expressions, and affine.apply
+operations) to precisely represent dependences using affine constraints, while
+also computing dependence vectors from them, where each component of the
+dependence vector provides a lower and an upper bound on the dependence distance
+along the corresponding dimension.
+
+```mlir
+test/Transforms/memref-dataflow-opt.mlir:232:7: note: dependence from 2 to 1 at depth 1 = ([1, 1], [-inf, +inf])
+ store %cf9, %m[%idx] : memref<10xf32>
+```
+
+## Pipeline data transfer (`-affine-pipeline-data-transfer`)
+
+This pass performs a transformation to overlap non-blocking DMA operations in a
+loop with computations through double buffering. This is achieved by advancing
+dma_start operations with respect to other operations.
+
+Input
+
+```mlir
+func @pipelinedatatransfer() {
+ %0 = alloc() : memref<256xf32>
+ %1 = alloc() : memref<32xf32, 1>
+ %2 = alloc() : memref<1xf32>
+ %c0 = constant 0 : index
+ %c128 = constant 128 : index
+ affine.for %i0 = 0 to 8 {
+ affine.dma_start %0[%i0], %1[%i0], %2[%c0], %c128 : memref<256xf32>, memref<32xf32, 1>, memref<1xf32>
+ affine.dma_wait %2[%c0], %c128 : memref<1xf32>
+ %3 = affine.load %1[%i0] : memref<32xf32, 1>
+ %4 = "compute"(%3) : (f32) -> f32
+ affine.store %4, %1[%i0] : memref<32xf32, 1>
+ }
+ return
+}
+```
+
+Output
+
+```mlir
+module {
+ func @pipelinedatatransfer() {
+ %c8 = constant 8 : index
+ %c0 = constant 0 : index
+ %0 = alloc() : memref<256xf32>
+ %c0_0 = constant 0 : index
+ %c128 = constant 128 : index
+ %1 = alloc() : memref<2x32xf32, 1>
+ %2 = alloc() : memref<2x1xf32>
+ affine.dma_start %0[%c0], %1[%c0 mod 2, %c0], %2[%c0 mod 2, symbol(%c0_0)], %c128 : memref<256xf32>, memref<2x32xf32, 1>, memref<2x1xf32>
+ affine.for %arg0 = 1 to 8 {
+ affine.dma_start %0[%arg0], %1[%arg0 mod 2, %arg0], %2[%arg0 mod 2, symbol(%c0_0)], %c128 : memref<256xf32>, memref<2x32xf32, 1>, memref<2x1xf32>
+ %8 = affine.apply #map3(%arg0)
+ %9 = affine.apply #map4(%8)
+ %10 = affine.apply #map4(%8)
+ affine.dma_wait %2[%8 mod 2, symbol(%c0_0)], %c128 : memref<2x1xf32>
+ %11 = affine.load %1[%8 mod 2, %8] : memref<2x32xf32, 1>
+ %12 = "compute"(%11) : (f32) -> f32
+ affine.store %12, %1[%8 mod 2, %8] : memref<2x32xf32, 1>
+ }
+ %3 = affine.apply #map3(%c8)
+ %4 = affine.apply #map4(%3)
+ %5 = affine.apply #map4(%3)
+ affine.dma_wait %2[%3 mod 2, symbol(%c0_0)], %c128 : memref<2x1xf32>
+ %6 = affine.load %1[%3 mod 2, %3] : memref<2x32xf32, 1>
+ %7 = "compute"(%6) : (f32) -> f32
+ affine.store %7, %1[%3 mod 2, %3] : memref<2x32xf32, 1>
+ dealloc %2 : memref<2x1xf32>
+ dealloc %1 : memref<2x32xf32, 1>
+ return
+ }
+}
+```
diff --git a/mlir/docs/Quantization.md b/mlir/docs/Quantization.md
new file mode 100644
index 00000000000..99e450ca84d
--- /dev/null
+++ b/mlir/docs/Quantization.md
@@ -0,0 +1,359 @@
+# MLIR Quantization
+
+This document outlines the design of the MLIR quantization system. While the
+term "quantization" is highly overloaded, in this case, it refers to a fairly
+narrow scope of techniques in use to enable conversion of floating-point
+computations to corresponding and plausible variants expressed in integer math
+for inference, as has historically been supported by low-bit depth inference
+engines such as TFLite, various accelerator hardware, and many DSPs.
+
+Much of this is inspired by the approach taken
+[in this paper](https://arxiv.org/abs/1712.05877) with many extensions and
+adaptations folded in. It specifically documents the positions that MLIR has
+taken on the topic, and is not a general reference.
+
+[TOC]
+
+## Uniform quantization
+
+The primary quantization mechanism supported by MLIR is a scheme which can
+express fixed point and affine transformations via uniformly spaced points on
+the Real number line.
+
+Further, the scheme can be applied:
+
+* *per-layer* : Applying to every value within the target type.
+* *per-axis* (also called *per-channel*) : Applying individually to each index
+ along a specific axis of a tensor type.
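+
+The difference can be sketched in a few lines of illustrative Python (the
+helper and the tensor values are made up for this example): per-layer
+quantization uses a single scale for the whole tensor, while per-axis
+quantization uses one scale per index along the chosen axis:
+
+```python
+def quantize(values, scale):
+    # Round each real value to the nearest multiple of `scale` and
+    # return the integer scaled values.
+    return [round(v / scale) for v in values]
+
+tensor = [[0.5, 1.0], [4.0, 8.0]]
+
+# Per-layer: one scale applied to every value in the tensor.
+per_layer = [quantize(row, 0.5) for row in tensor]
+
+# Per-axis: one scale per index along axis 0 (e.g. per channel).
+scales = [0.5, 2.0]
+per_axis = [quantize(row, s) for row, s in zip(tensor, scales)]
+```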
+
+### Fixed point values
+
+[Fixed point](https://en.wikipedia.org/wiki/Fixed-point_arithmetic) values are a
+[Real](https://en.wikipedia.org/wiki/Real_number) number divided by a *scale*.
+We will call the result of dividing the Real by the scale the *scaled value*.
+
+$$ real\_value = scaled\_value * scale $$
+
+The scale can be interpreted as the distance, in Real units, between neighboring
+scaled values. For example, if the scale is $$ \pi $$, then fixed point values
+with this scale can only represent multiples of $$ \pi $$, and nothing in
+between. The maximum rounding error to convert an arbitrary Real to a fixed
+point value with a given $$ scale $$ is $$ \frac{scale}{2} $$. Continuing the
+previous example, when $$ scale = \pi $$, the maximum rounding error will be $$
+\frac{\pi}{2} $$.
+
+Multiplication can be performed on scaled values with different scales, using
+the same algorithm as multiplication of Real values (note that product scaled
+value has $$ scale_{product} = scale_{left \mbox{ } operand} * scale_{right
+\mbox{ } operand} $$). Addition can be performed on scaled values, as long as
+they have the same scale, using the same algorithm as addition of Real values.
+This makes it convenient to represent scaled values on a computer as signed
+integers, and perform arithmetic on those signed integers, because the results
+will be correct scaled values.
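+
+These properties can be checked with a small sketch (illustrative Python; pairs
+of `(scaled_value, scale)` stand in for fixed point values):
+
+```python
+def real(v):
+    # real_value = scaled_value * scale
+    scaled_value, scale = v
+    return scaled_value * scale
+
+def fxp_mul(a, b):
+    # Multiply the integer scaled values; the product's scale is the
+    # product of the operand scales.
+    return (a[0] * b[0], a[1] * b[1])
+
+def fxp_add(a, b):
+    # Addition is only defined for operands sharing the same scale.
+    assert a[1] == b[1], "operands must have the same scale"
+    return (a[0] + b[0], a[1])
+
+x = (6, 0.25)   # represents 1.5
+y = (10, 0.25)  # represents 2.5
+```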
+
+### Affine values
+
+Mathematically speaking, affine values are the result of
+[adding a Real-valued *zero point*, to a scaled value](https://en.wikipedia.org/wiki/Affine_transformation#Representation).
+Or equivalently, subtracting a zero point from an affine value results in a
+scaled value:
+
+$$ real\_value = scaled\_value * scale = (affine\_value - zero\_point) * scale $$
+
+Essentially, affine values are a shifting of the scaled values by some constant
+amount. Arithmetic (i.e., addition, subtraction, multiplication, division)
+cannot, in general, be directly performed on affine values; you must first
+[convert](#affine-to-fixed-point) them to the equivalent scaled values.
+
+As alluded to above, the motivation for using affine values is to more
+efficiently represent the Real values that will actually be encountered during
+computation. Frequently, the Real values that will be encountered are not
+symmetric around the Real zero. We also make the assumption that the Real zero
+is encountered during computation, and should thus be represented.
+
+In this case, it's inefficient to store scaled values represented by signed
+integers, as some of the signed integers will never be used. The bit patterns
+corresponding to those signed integers are going to waste.
+
+In order to exactly represent the Real zero with an integral-valued affine
+value, the zero point must be an integer between the minimum and maximum affine
+value (inclusive). For example, given an affine value represented by an 8 bit
+unsigned integer, we have: $$ 0 \leq zero\_point \leq 255$$. This is important,
+because in deep neural networks' convolution-like operations, we frequently
+need to zero-pad inputs and outputs, so zero must be exactly representable, or
+the result will be biased.
+
+### Relation
+
+Real values, fixed point values, and affine values relate through the following
+equation, which demonstrates how to convert one type of number to another:
+
+$$ real\_value = scaled\_value * scale = (affine\_value - zero\_point) * scale $$
+
+Note that computers generally store mathematical values using a finite number of
+bits. Thus, while the above conversions are exact, to store the result in a
+finite number of bits, we must, in general, round the result of the conversion
+(this applies to both cases: storing using floating point and storing using
+fixed point). Note that a full discussion of rounding behavior is outside the
+scope of this document, and it is safe to assume unless otherwise stated that
+rounding should be according to the IEEE754 default of RNE (where hardware
+permits).
+
+### Converting between Real and fixed point or affine
+
+To convert a Real value to a fixed point value, you must know the scale. To
+convert a Real value to an affine value, you must know the scale and zero point.
+
+#### Real to affine
+
+To convert an input tensor of Real-valued elements (usually represented by a
+floating point format, frequently
+[Single precision](https://en.wikipedia.org/wiki/Single-precision_floating-point_format))
+to a tensor of affine elements represented by an integral type (e.g. 8-bit
+unsigned integer), the following conversion can be performed (note that it is
+not required that all representable values of the integral type are used):
+
+$$
+\begin{align*}
+af&fine\_value_{uint8 \, or \, uint16} \\
+ &= clampToTargetSize(roundToNearestInteger( \frac{real\_value_{Single}}{scale_{Single}})_{sint32} + zero\_point_{uint8 \, or \, uint16})
+\end{align*}
+$$
+
+In the above, we assume that $$real\_value$$ is a Single, $$scale$$ is a Single,
+$$roundToNearestInteger$$ returns a signed 32 bit integer, and $$zero\_point$$
+is an unsigned 8 or 16 bit integer. Note that the bit depths given are
+indicative of common types on typical hardware; the scheme is neither
+constrained to particular bit depths nor does it require that the entire range
+of an N-bit integer be used.
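+
+A sketch of this conversion in Python (illustrative only; the scale and zero
+point values below are made up), targeting an 8-bit unsigned storage type:
+
+```python
+def real_to_affine(real_value, scale, zero_point, qmin=0, qmax=255):
+    # affine = clampToTargetSize(roundToNearestInteger(real / scale)
+    #          + zero_point), clamped to the representable range of the
+    #          target storage type.
+    v = round(real_value / scale) + zero_point
+    return max(qmin, min(qmax, v))
+```
+
+Note that with an in-range integral zero point, the Real zero maps exactly to
+`zero_point`, so zero padding introduces no bias.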
+
+#### Affine to Real
+
+To convert an output tensor of affine elements represented by uint8
+or uint16 to a tensor of Real-valued elements (usually represented with a
+floating point format, frequently Single precision), the following conversion
+can be performed:
+
+$$
+\begin{align*}
+re&al\_value_{Single} \\
+ &= roundToNearestFloat((affine\_value_{uint8 \, or \, uint16} - zero\_point_{uint8 \, or \, uint16})_{sint32})_{Single} * scale_{Single}
+\end{align*}
+$$
+
+In the above, we assume that the result of subtraction is in 32-bit signed
+integer format, and that $$roundToNearestFloat$$ returns a Single.
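+
+A sketch of this conversion in Python (illustrative scale and zero point
+values):
+
+```python
+def affine_to_real(affine_value, scale, zero_point):
+    # real = (affine - zero_point) * scale, per the formula above.
+    return (affine_value - zero_point) * scale
+```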
+
+#### Affine to fixed point
+
+When the affine and fixed point scales are the same, subtract the zero point
+from the affine value to get the equivalent fixed point value.
+
+$$
+scaled\_value = affine\_value_{non\mbox{-}negative} - zero\_point_{non\mbox{-}negative}
+$$
+
+#### Fixed point to affine
+
+When the affine and fixed point scales are the same, add the zero point to the
+fixed point value to get the equivalent affine value.
+
+$$
+affine\_value_{non\mbox{-}negative} = scaled\_value + zero\_point_{non\mbox{-}negative}
+$$
+
+## Usage within MLIR
+
+There are several components to the quantization system being developed within
+MLIR:
+
+* *Quantization* dialect containing:
+
+ * A family of [QuantizedTypes](#quantized-type) which represent the
+ mapping between *expressed* values (typically of a floating point
+ computer type) and *storage* values (typically of an integral computer
+ type).
+ * [Type conversion ops](#quantized-type-conversion-ops) for converting
+ between types based on a QuantizedType and its *expressed* and *storage*
+ sub-types.
+ * [Instrumentation ops](#instrumentation-and-constraint-ops) for assigning
+ instrumentation points within the computation where runtime statistics
+ may help guide the quantization process.
+
+* [Integration with simulated quantization at training time](#integration-with-simulated-quantization-at-training-time)
+
+* [TFLite native quantization](#tflite-native-quantization)
+
+ * The TFLite op-set natively supports uniform-quantized variants.
+ * Passes and tools exist to convert directly from the *TensorFlow* dialect
+ to the TFLite quantized op-set.
+
+* [*FxpMath* dialect](#fxpmath-dialect) containing (experimental) generalized
+ representations of fixed-point math ops and conversions:
+
+ * [Real math ops](#real-math-ops) representing common combinations of
+ arithmetic operations that closely match corresponding fixed-point math
+ concepts (as opposed to being spread across multiple ops as is typical
+ in source dialects).
+ * [Fixed-point math ops](#fixed-point-math-ops) for carrying out
+ computations on integers, as are typically needed by uniform
+ quantization schemes.
+ * Passes to lower from real math ops to fixed-point math ops.
+
+* [Solver tools](#solver-tools) which can (experimentally and generically)
+ operate on computations expressed in the *FxpMath* dialect in order to
+ convert from floating point types to appropriate *QuantizedTypes*, allowing
+ the computation to be further lowered to integral math ops.
+
+Not every application of quantization will use all facilities. Specifically, the
+TensorFlow to TensorFlow Lite conversion uses the QuantizedTypes but has its own
+ops for type conversion and expression of the backing math.
+
+## Quantization Dialect
+
+### Quantized type
+
+TODO : Flesh this section out.
+
+* QuantizedType base class
+* UniformQuantizedType
+
+### Quantized type conversion ops
+
+* qcast : Convert from an expressed type to QuantizedType
+* dcast : Convert from a QuantizedType to its expressed type
+* scast : Convert between a QuantizedType and its storage type
+
+### Instrumentation and constraint ops
+
+* const_fake_quant : Emulates the logic of the historic TensorFlow
+ fake_quant_with_min_max_args op.
+* stats_ref : Declares that statistics should be gathered at this point with a
+ unique key and made available to future passes of the solver.
+* stats : Declares inline statistics (per layer and per axis) for the point in
+ the computation. stats_ref ops are generally converted to stats ops once
+ trial runs have been performed.
+* coupled_ref : Declares points in the computation to be coupled from a type
+ inference perspective based on a unique key.
+
+## Integration with simulated quantization at training time
+
+TensorFlow has historically used the
+[tf.quantization.fake_quant_\*](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args)
+family of operations to simulate the effect of quantization at training time.
+
+As originally implemented, TensorFlow Lite was the primary user of such
+operations at inference time. When quantized inference was enabled, if every
+eligible tensor passed through an appropriate fake_quant node (the rules of
+which tensors can have fake_quant applied are somewhat involved), then
+TensorFlow Lite would use the attributes of the fake_quant ops to make a
+judgment about how to convert to use kernels from its quantized ops subset.
+
+In MLIR-based quantization, fake_quant_\* ops are handled by converting them to
+a sequence of *qcast* (quantize) followed by *dcast* (dequantize) with an
+appropriate *UniformQuantizedType* as the target of the qcast operation.
+
+This allows subsequent compiler passes to preserve the knowledge that
+quantization was simulated in a certain way while giving the compiler
+flexibility to move the casts as it simplifies the computation and converts it
+to a form based on integral arithmetic.
+
+This scheme also naturally allows computations that are *partially quantized*
+where the parts which could not be reduced to integral ops are still carried out
+in floating point with appropriate conversions at the boundaries.
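+
+The qcast/dcast pair can be modeled directly: a fake-quantized value is a Real
+value that has been quantized and immediately dequantized, so it stays in
+floating point but only takes on exactly-representable quantized values. A
+minimal sketch (illustrative Python; the parameter values are made up):
+
+```python
+def fake_quant(real_value, scale, zero_point, qmin=0, qmax=255):
+    # qcast: quantize to the affine storage domain, with clamping.
+    q = max(qmin, min(qmax, round(real_value / scale) + zero_point))
+    # dcast: immediately dequantize back to the expressed (float) type.
+    return (q - zero_point) * scale
+```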
+
+## TFLite Native Quantization
+
+TODO : Flesh this out
+
+### General algorithm
+
+1. Take input min/max information and set the ArrayInfo (which really is
+ InputOrOutputArrayInfo).
+1. In LegalizeTF, convert ArrayInfo min/max to tf.Quantize and tf.Dequantize
+ nodes (or tf.FakeQuant). Convert all constant FakeQuants to (tf.FQ -> tfl.Q
+ -> tfl.DQ).
+1. Hardcode logic/propagation needs to happen here.
+1. Run TF constant folding.
+1. In PrepareTFL, convert all tf.FQ to (tfl.Q -> tfl.DQ).
+1. Run the quantization pass that takes (tfl.DQ (for both input and weights) ->
+ op -> tfl.Q) and replaces it with (op). Also replace (constant_float ->
+ tfl.Q) with (constant_quant).
+
+## FxpMath Dialect
+
+### Real math ops
+
+Note that these all support explicit clamps, which allows for simple fusions
+and representation of some common sequences of quantization-compatible math. In
+addition, some support explicit biases, which are often represented as separate
+adds in source dialects.
+
+TODO: This op set is still evolving and needs to be completed.
+
+* RealBinaryOp
+ * RealAddEwOp
+ * RealSubEwOp
+ * RealMulEwOp
+ * RealDivEwOp
+* RealUnaryOp
+ * IDENTITY
+ * TANH
+ * SIGMOID
+ * EXP
+ * LOG
+ * NEG
+ * RSQRT
+ * SIN
+ * SQUARE
+ * SQRT
+ * CMPZ
+ * CMPNZ
+ * CMPLZ
+ * CMPGZ
+
+### Fixed-point math ops
+
+TODO: This op set only has enough ops to lower a simple power-of-two
+RealAddEwOp.
+
+* RoundingDivideByPotFxpOp
+* SaturatingAddFxpOp
+
+## Solver tools
+
+Solver tools exist to analyze an MLIR-computation, expressed in either a
+supported source dialect or in the *real math ops* set and solve for appropriate
+QuantizedTypes that allow the computation to be lowered to integral math.
+
+These tools are an active area of work and may be expanded in the future to
+adjacent areas such as solving for transformations to other kinds of lower
+precision types (i.e. bfloat16 or fp16).
+
+Solver tools are expected to operate in several modes, depending on the
+computation and the manner in which it was trained:
+
+* *Transform* : With all available information in the MLIR computation, infer
+ boundaries where the computation can be carried out with integral math and
+ change types accordingly to appropriate QuantizedTypes:
+
+ * For passthrough ops which do not perform active math, change them to
+ operate directly on the storage type, converting in and out at the edges
+ via scast ops.
+ * For ops that have the *Quantizable* trait, the type can be set directly.
+ This includes ops from the [real math ops set](#real-math-ops).
+ * For others, encase them in appropriate dcast/qcast ops, presuming that
+ some follow-on pass will know what to do with them.
+
+* *Instrument* : Most of the time, there are not sufficient implied
+ constraints within a computation to perform many transformations. For this
+ reason, the solver can insert instrumentation ops at points where additional
+ runtime statistics may yield solutions. It is expected that such
+ computations will be lowered as-is for execution, run over an appropriate
+ eval set, and statistics at each instrumentation point made available for a
+ future invocation of the solver.
+
+* *Simplify* : A variety of passes and simplifications are applied once
+ QuantizedTypes are added in order to arrive at a computation that is
+ expressed in as much integral math, with the fewest number of casts as
+ possible.
diff --git a/mlir/docs/QuickstartRewrites.md b/mlir/docs/QuickstartRewrites.md
new file mode 100644
index 00000000000..6a4a7cca8b8
--- /dev/null
+++ b/mlir/docs/QuickstartRewrites.md
@@ -0,0 +1,255 @@
+# Quickstart tutorial to adding MLIR graph rewrite
+
+This document will present a quickstart to adding graph rewrites. We shall start
+by defining an operation, showing multiple ways to define the rewrite using
+patterns, as well as defining the rewrite using a graph walker (note: using
+patterns and the rewrite engine is preferred, showing the walker is for
+demonstration purposes).
+
+See [MLIR specification](LangRef.md) for more information about MLIR, the
+structure of the IR, operations, etc. See
+[Table-driven Operation Definition](OpDefinitions.md) and
+[Declarative Rewrite Rule](DeclarativeRewrites.md) for the detailed explanation
+of all available mechanisms for defining operations and rewrites in a
+table-driven manner.
+
+## Adding operation
+
+An operation in MLIR is specified using a definition in a
+[TableGen](https://llvm.org/docs/TableGen/LangIntro.html) file. TableGen is a
+modeling tool from which both the op specifications and the C++ code to
+interact with these operations are generated. To define an operation one needs
+to specify:
+
+* The operation name. This name is a unique identifier of the operation within
+ MLIR. Most operations are within a dialect, so for example one could have
+ `tfl.add` to represent the add operation in the TensorFlow Lite dialect.
+ Instead of repeating the dialect in the op definition, a base class for the
+ op dialect is commonly created that prepends the dialect namespace given an
+ op name.
+* The traits of the operation. These allow you to specify traits of the
+ operation, such as whether it has side effects or whether it should be
+ verified that the operands and result types are the same. These are backed
+ by C++ traits that perform the verification.
+* The arguments of the operation. These are the input operands (values at
+ runtime produced by other ops) and attributes (compile time known constant
+ values that affect the behavior of the op) that are the inputs of/define the
+ behavior of the operation. The input operands may be named, the attributes
+ must be named.
+* The result(s) of the operation. These may again be named or not.
+* Documentation of the operation. This includes a one-line summary as well as
+ a longer human-readable description of the operation.
+* Dialect specific information. Additional information can be added to the
+ operation definition that is only used by dialect-specific drivers. It is
+ ignored by the main op and doc generators, but can be used in, say, the
+ translation from a dialect to another representation.
+
+```tablegen
+def TFL_LeakyReluOp: TFL_Op<TFL_Dialect, "leaky_relu",
+ [NoSideEffect, SameValueType]>,
+ Results<(outs Tensor)> {
+ let arguments = (ins
+ F32Tensor:$x,
+ // Slope of the activation function at x < 0.
+ F32Attr:$alpha
+ );
+
+ let summary = "Leaky ReLU operator";
+ let description = [{
+ Element-wise Leaky ReLU operator
+ x -> x >= 0 ? x : (alpha * x)
+ }];
+
+ // TFLite specific attribute that is used when generating the output
+ // flatbuffer.
+ let hasOptions = 1;
+}
+```
+
+Note in the above the result types and inputs are specified in different ways,
+one by way of trait and the other by way of let. It is possible to specify both
+in either way.
+
+<!-- TODO: Define a style convention. -->
+
+Operations can also have custom parser, printer, builder, verifier, constant
+folder, or canonicalizer. These require specifying additional C++ methods to
+invoke for additional functionality. For example, if an operation is marked to
+have a folder, the constant folder also needs to be added, e.g.,:
+
+```c++
+OpFoldResult SpecificOp::fold(ArrayRef<Attribute> constOperands) {
+ if (unable_to_fold)
+ return {};
+ ....
+ return val;
+}
+```
+
+## Adding patterns
+
+There are multiple forms of graph rewrite that can be performed in MLIR. One of
+the most common is DAG tile to DAG tile rewrite. Patterns provide a concise way
+to express this transformation as a pair of source pattern to match and
+resultant pattern. There are both the C++ classes to represent this
+transformation, as well as the patterns in TableGen from which these can be
+generated.
+
+### TableGen patterns
+
+Let us continue with LeakyRelu. To map from TensorFlow's `LeakyRelu` to
+TensorFlow Lite's `LeakyRelu`:
+
+```tablegen
+def : Pat<(TF_LeakyReluOp $arg, F32Attr:$a), (TFL_LeakyReluOp $arg, $a)>
+```
+
+The pattern is specified by instantiating a `Pat` with a source and result DAG.
+The arguments in the source pattern are captured and can be used in the result
+pattern. This is a simple pattern as we have a 1:1 mapping and the attribute
+does not need to be transformed (e.g., both have a floating point attribute for
+alpha). The names of the attributes specified in the pattern are for
+matching/referencing and need not match the original attribute names in the op
+definition, but the order of arguments of the DAGs does need to match.
+
+To specify a pattern, both the source and resultant ops need to be defined using
+TableGen.
+
+If this were a more advanced pattern that the current framework could not
+express as a destination, then one could use a general native code fallback
+method. This consists of defining a pattern as well as adding a C++ function to
+perform the replacement:
+
+```tablegen
+def createTFLLeakyRelu : NativeCodeCall<
+ "createTFLLeakyRelu($_builder, $0->getDefiningOp(), $1, $2)">;
+
+def : Pat<(TF_LeakyReluOp:$old_value, $arg, F32Attr:$a),
+ (createTFLLeakyRelu $old_value, $arg, $a)>;
+```
+
+```c++
+static Value createTFLLeakyRelu(PatternRewriter &rewriter, Operation *op,
+                                Value operand, Attribute attr) {
+  return rewriter.create<mlir::TFL::LeakyReluOp>(
+      op->getLoc(), operand->getType(), /*arg=*/operand,
+      /*alpha=*/attr.cast<FloatAttr>());
+}
+```
+
+This allows for arbitrarily complex builders. Input pattern side one can express
+multi-op patterns with constraints on input operands and attributes. But input
+patterns cannot yet express constraints across multiple operands/attributes.
+
+### Register the pattern
+
+The file containing the patterns needs to be processed using `mlir-tblgen` with
+`-gen-rewriters` during compilation time. It can be invoked with the following
+configuration in CMake:
+
+```cmake
+set(LLVM_TARGET_DEFINITIONS <name-of-the-td-file>)
+mlir_tablegen(<name-of-the-generated-inc-file> -gen-rewriters)
+add_public_tablegen_target(<name-of-the-cmake-target>)
+```
+
+Then you can `#include` the generated file in any C++ implementation file you
+like. (You will also need to make sure the library depends on the CMake target
+defined in the above.) The generated file will have a `populateWithGenerated(
+MLIRContext *context, OwningRewritePatternList *patterns)` function that you can
+use to collect all the generated patterns inside `patterns` and then use
+`patterns` in any pass you would like.
+
+### C++ rewrite specification
+
+In case patterns are not sufficient there is also the fully C++ way of
+expressing a rewrite:
+
+```c++
+/// Multi-step rewrite using "match" and "rewrite". This allows for separating
+/// the concerns of matching and rewriting.
+struct ConvertTFLeakyRelu : public RewritePattern {
+ ConvertTFLeakyRelu(MLIRContext *context)
+ : RewritePattern("tf.LeakyRelu", 1, context) {}
+
+ PatternMatchResult match(Operation *op) const override {
+ return matchSuccess();
+ }
+
+ void rewrite(Operation *op, PatternRewriter &rewriter) const override {
+ rewriter.replaceOpWithNewOp<TFL::LeakyReluOp>(
+ op, op->getResult(0)->getType(), op->getOperand(0),
+ /*alpha=*/op->getAttrOfType<FloatAttr>("alpha"));
+ }
+};
+
+/// Single-step rewrite with "matchAndRewrite". This allows for performing the
+/// rewrite immediately upon a successful match.
+struct ConvertTFLeakyRelu : public RewritePattern {
+ ConvertTFLeakyRelu(MLIRContext *context)
+ : RewritePattern("tf.LeakyRelu", 1, context) {}
+
+ PatternMatchResult matchAndRewrite(Operation *op,
+ PatternRewriter &rewriter) const override {
+ rewriter.replaceOpWithNewOp<TFL::LeakyReluOp>(
+ op, op->getResult(0)->getType(), op->getOperand(0),
+ /*alpha=*/op->getAttrOfType<FloatAttr>("alpha"));
+ return matchSuccess();
+ }
+};
+```
+
+In the C++ rewrite the static benefit of the rewrite pattern is specified at
+construction. While in the pattern generator a simple heuristic is currently
+employed based around the number of ops matched and replaced.
+
+The above rule did not capture the matching operands/attributes, but in general
+the `match` function in a multi-step rewrite may populate and return a
+`PatternState` (or a class derived from one) to pass information extracted
+during matching to the rewrite. A single-step rewrite with the `matchAndRewrite`
+function has the benefit of being able to directly use any values created while
+matching, removing the need for a `PatternState`.
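+
+As a sketch (the state class and its contents here are illustrative, not taken
+from the MLIR tree), a multi-step rewrite carrying the matched attribute through
+a `PatternState` might look like:
+
+```c++
+/// Hypothetical state class capturing information during matching.
+struct LeakyReluState : public PatternState {
+  FloatAttr alpha;
+};
+
+struct ConvertTFLeakyReluWithState : public RewritePattern {
+  ConvertTFLeakyReluWithState(MLIRContext *context)
+      : RewritePattern("tf.LeakyRelu", 1, context) {}
+
+  PatternMatchResult match(Operation *op) const override {
+    auto state = llvm::make_unique<LeakyReluState>();
+    state->alpha = op->getAttrOfType<FloatAttr>("alpha");
+    if (!state->alpha)
+      return matchFailure();
+    // Hand the extracted attribute over to the rewrite step.
+    return matchSuccess(std::move(state));
+  }
+
+  void rewrite(Operation *op, std::unique_ptr<PatternState> state,
+               PatternRewriter &rewriter) const override {
+    auto alpha = static_cast<LeakyReluState *>(state.get())->alpha;
+    rewriter.replaceOpWithNewOp<TFL::LeakyReluOp>(
+        op, op->getResult(0)->getType(), op->getOperand(0), /*alpha=*/alpha);
+  }
+};
+```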
+
+## Testing
+
+MLIR uses the [lit](https://llvm.org/docs/CommandGuide/lit.html) (LLVM
+Integrated Testing) tool for performing testing. Testing is performed by
+creating an input IR file, running a transformation, and then verifying the
+output IR. C++ unit tests are the exception, with IR transformation serving as
+the core testing mechanism. This results in fewer binaries that need to be built
+(and linked) and forces a focus on the representation as an important piece.
+
+For the legalization transform above we would have a test (probably as part of
+the legalization pass test in TensorFlow Lite) such as:
+
+```mlir
+// RUN: mlir-opt -tfl-legalize-tf %s | FileCheck %s
+
+func @LeakyRelu(%arg0: tensor<1xf32>) -> tensor<1xf32> {
+ %2 = "tf.LeakyRelu"(%arg0) {alpha: 0.1} : (tensor<1xf32>) -> tensor<1xf32>
+ return %2: tensor<1xf32>
+
+// CHECK-LABEL: LeakyRelu
+// CHECK: %0 = "tfl.leaky_relu"(%arg0) {alpha: 1.000000e-01} : (tensor<1xf32>) -> tensor<1xf32>
+}
+```
+
+The RUN command at the top results in running the `mlir-opt` binary (which is a
+compiler-writer tool to exercise different registered passes) on the current
+file, invoking the optimization pass this transform was added as part of, and
+verifying its output using `FileCheck`. `FileCheck` is a textual output
+verifier. In particular, it uses the CHECK expressions to verify that the given
+output is produced.
+
+There can be multiple RUN commands with different corresponding CHECK prefixes,
+and, in addition, multiple independent tests separated by `// -----` with
+`mlir-opt` invoked with the `-split-input-file` flag. This is especially useful
+for error testing.
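+
+For example, a hypothetical split file might be laid out as follows (op names
+and check lines are illustrative):
+
+```mlir
+// RUN: mlir-opt -tfl-legalize-tf -split-input-file %s | FileCheck %s
+
+// CHECK-LABEL: FirstTest
+func @FirstTest(%arg0: tensor<1xf32>) -> tensor<1xf32> { ... }
+
+// -----
+
+// CHECK-LABEL: SecondTest
+func @SecondTest(%arg0: tensor<1xf32>) -> tensor<1xf32> { ... }
+```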
+
+This results in very simple, directed testing without the need to work around
+constant propagation or other unrelated optimization passes.
+
+## Adding an optimization pass
+
+Optimization passes that do not fit/are difficult to specify in the above
+structure can be specified as general iterations across modules/functions. See
+[Writing a Pass](WritingAPass.md) for a general overview and introduction to
+optimization passes in MLIR.
diff --git a/mlir/docs/Rationale.md b/mlir/docs/Rationale.md
new file mode 100644
index 00000000000..763442dce06
--- /dev/null
+++ b/mlir/docs/Rationale.md
@@ -0,0 +1,1121 @@
+# MLIR Rationale
+
+This document is intended to capture some of the alternatives considered and
+open debates in the design of MLIR, along with the rationale for certain
+decisions we made. This is not intended to be a "finely groomed" document - we
+prefer the ability to dump in interesting tidbits without worrying too much
+about their consistency or readability.
+
+[TOC]
+
+## Abstract
+
+MLIR is a compiler intermediate representation with similarities to traditional
+three-address SSA representations (like
+[LLVM IR](http://llvm.org/docs/LangRef.html) or
+[SIL](https://github.com/apple/swift/blob/master/docs/SIL.rst)), but which
+introduces notions from the polyhedral loop optimization works as first class
+concepts. This hybrid design is optimized to represent, analyze, and transform
+high level dataflow graphs as well as target-specific code generated for high
+performance data parallel systems. Beyond its representational capabilities, its
+single continuous design provides a framework to lower from dataflow graphs to
+high performance target specific code.
+
+MLIR stands for one of "Multi-Level IR", "Multi-dimensional Loop IR", "Machine
+Learning IR", or "Mid Level IR"; we prefer the first. This document only
+provides the rationale behind MLIR -- its actual
+[specification document](LangRef.md) and other content is hosted elsewhere.
+
+## Introduction and Motivation
+
+The Multi-Level Intermediate Representation (MLIR) is intended for easy
+expression and optimization of computations involving deep loop nests and dense
+matrices of high dimensionality. It is thus well-suited to deep learning
+computations in particular. Yet it is general enough to also represent arbitrary
+sequential computation. The representation allows high-level optimization and
+parallelization for a wide range of parallel architectures including those with
+deep memory hierarchies --- general-purpose multicores, GPUs, and specialized
+neural network accelerators.
+
+MLIR uses ideas drawn from IRs of LLVM and Swift for lower level constructs
+while combining them with ideas from the polyhedral abstraction to represent
+loop nests, multidimensional data (tensors), and transformations on these
+entities as first class concepts in the IR.
+
+MLIR is a multi-level IR, i.e., it can represent code at multiple levels of
+abstraction, from a domain-specific representation such as HLO or TensorFlow
+graphs all the way down to the machine level. MLIR is able to represent
+arbitrary control flow and arbitrary data accesses, and is general enough to
+represent nearly all sequential computation.
+This is a key distinction from existing polyhedral representation
+implementations (such as LLVM [Polly](https://polly.llvm.org/)) that are able to
+use the polyhedral abstraction in a way isolated from the LLVM IR and only for
+affine loop nests, i.e., portions of the code where array accesses, loop bounds,
+and conditionals are regular (involve linear functions of loop iterators and
+constant symbols). The presence of statically unpredictable data accesses or
+control flow does not preclude representation in MLIR, but only limits to a
+certain extent the ability to reason about and apply transformations using the
+polyhedral abstraction.
+
+Maps, sets, and relations with affine constraints are the core structures
+underlying a polyhedral representation of high-dimensional loop nests and
+multidimensional arrays. These structures are represented as textual
+expressions in a form close to their mathematical form. These structures are
+used to capture loop nests, tensor data structures, and how they are reordered
+and mapped for a target architecture. All structured or "conforming" loops are
+captured as part of the polyhedral information, and so are tensor variables,
+their layouts, and subscripted accesses to these tensors in memory.
+
+The information captured in the IR allows a compact expression of all loop
+transformations, data remappings, explicit copying necessary for explicitly
+addressed memory in accelerators, mapping to pre-tuned expert written
+primitives, and mapping to specialized vector instructions. Loop transformations
+that can be easily implemented include the body of affine transformations: these
+subsume all traditional loop transformations (unimodular and non-unimodular)
+such as loop tiling, interchange, permutation, skewing, scaling, relative
+shifting, reversal, fusion, and distribution/fission. Transformations on data
+layout such as padding and transforming to blocked layouts are also represented
+well via affine layout maps.
+
+MLIR's design allows a progressive lowering to target-specific forms. Besides
+high-level transformations for loop nests and data layouts that a typical
+mid-level optimizer is expected to deal with, MLIR is also designed to perform
+certain low-level scheduling and mapping decisions that a typical backend IR is
+entrusted with: these include mapping to specialized vector instructions,
+auto-vectorization, and software pipelining. The need to support these
+transformations stems from the fact that neural network accelerators have
+specialized units that deal with large chunks of data whose computation maps
+back to chunks of more than one loop of the loop nests as viewed by a program at
+a level closer to the original specification. Such specialized units or
+instructions operate on multidimensional data chunks from a programmer's
+viewpoint. This makes it hard or infeasible for a backend operating on a very
+low-level IR close to assembly to lift and reconstruct loops and perform such a
+mapping. This is in contrast to classic instruction selection and scheduling in
+today's compilers, which primarily deal only with the body of the innermost
+loop.
+MLIR also facilitates automatic mapping to expert pre-tuned primitives or vendor
+libraries operating on data at higher levels (or at the highest level) of the
+memory hierarchy.
+
+In summary, MLIR is convenient for and closed under the kind of transformations
+needed to lower to general-purpose as well as specialized accelerators. It also
+allows one to build modular and reusable target independent and target dependent
+passes.
+
+## Design Decisions
+
+This section sheds light on some of the design decisions -- some of these are
+indirectly implied by the specification document.
+
+### Loads and stores
+
+The 'load' and 'store' instructions are specifically crafted to fully resolve to
+an element of a memref. These instructions take as arguments a memref along
+with n indices for an n-ranked memref. This disallows the equivalent of
+pointer arithmetic or the
+ability to index into the same memref in other ways (something which C arrays
+allow for example). Furthermore, for the affine constructs, the compiler can
+follow use-def chains (e.g. through
+[affine.apply operations](Dialects/Affine.md#affineapply-operation)) or through
+the map attributes of [affine operations](Dialects/Affine.md#Operations)) to
+precisely analyze references at compile-time using polyhedral techniques. This
+is possible because of the [restrictions on dimensions and symbols](Dialects/Affine.md#restrictions-on-dimensions-and-symbols).
+
+A scalar of element-type (a primitive type or a vector type) that is stored in
+memory is modeled as a 0-d memref. This is also necessary for scalars that are
+live out of for loops and if conditionals in a function, for which we don't yet
+have an SSA representation --
+[an extension](#mlfunction-extensions-for-"escaping-scalars") to allow that is
+described later in this doc.
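+
+For instance, a scalar `f32` stored in memory can be sketched as a 0-d memref
+(names are illustrative):
+
+```mlir
+// A 0-d memref modeling a single f32 in memory; note the empty subscript list.
+%buf = alloc() : memref<f32>
+affine.store %cst, %buf[] : memref<f32>
+%v = affine.load %buf[] : memref<f32>
+```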
+
+### Symbols and types
+
+The current MLIR disallows use of symbols in types. For example, when a tensor
+or memref dimension is statically unknown, it is denoted in the type as '?'. An
+SSA symbol is then bound to it when a memref is created. The actual value of the
+unknown dimension can be queried using the "dim" builtin as shown below.
+
+Example:
+
+```mlir
+func foo(...) {
+ %A = alloc <8x?xf32, #lmap> (%N)
+ ...
+ call bar(%A) : (memref<8x?xf32, #lmap>)
+}
+
+func bar(%A : memref<8x?xf32, #lmap>) {
+ // Type of %A indicates that %A has dynamic shape with 8 rows
+ // and unknown number of columns. The number of columns is queried
+ // dynamically using dim instruction.
+ %N = dim %A, 1 : memref<8x?xf32, #lmap>
+
+ affine.for %i = 0 to 8 {
+ affine.for %j = 0 to %N {
+ // A[i,j] += 1
+ %s1 = affine.load %A[%i, %j] : memref<8x?xf32, #lmap>
+ %s2 = add %s1, 1
+ affine.store %s2, %A[%i, %j] : memref<8x?xf32, #lmap>
+ }
+ }
+ return
+}
+
+```
+
+An alternative design is to embed the reference to symbols directly in the
+type - memref<8x%Nxf32>. We went for the current approach in MLIR because it
+simplifies the design --- types remain immutable when the values of symbols
+change.
+
+### Block Arguments vs PHI nodes
+
+MLIR Regions represent SSA using "[block arguments](LangRef.md#blocks)" rather
+than [PHI instructions](http://llvm.org/docs/LangRef.html#i-phi) used in LLVM.
+This choice is representationally identical (the same constructs can be
+represented in either form) but block arguments have several advantages:
+
+1. LLVM PHI nodes always have to be kept at the top of a block, and
+ transformations frequently have to manually skip over them. This is defined
+ away with BB arguments.
+1. LLVM has a separate function Argument node. This is defined away with BB
+ arguments, because the arguments to the entry block serve this purpose.
+1. Blocks of PHI nodes in LLVM execute atomically, which is surprising and
+ super confusing to compiler engineers and it is easy to introduce bugs with
+ this (very related to the
+ "[lost copy](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.524.5461&rep=rep1&type=pdf)"
+ problem in SSA lowering literature.) With the BB argument representation,
+ this confusion is defined away.
+1. The entry list of PHI nodes in LLVM is unordered, and some blocks have
+ thousands of predecessors (e.g. unwind blocks). This can cause long compile
+ time problems because transformations have to linearly scan this list. This
+ is defined away with BB argument representation.
+1. LLVM has no way to represent values that are available only in one successor
+ but not the other, e.g. its invoke instruction cannot produce the exception
+ value JUST on the exception edge. Instead, the
+ [landingpad instruction](http://llvm.org/docs/LangRef.html#landingpad-instruction)
+ is a hack used to represent this. MLIR doesn't make use of this capability,
+ but SIL uses it extensively, e.g. in the
+ [switch_enum instruction](https://github.com/apple/swift/blob/master/docs/SIL.rst#switch-enum).
+
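+In MLIR, the predecessor terminator passes values directly to the successor's
+block arguments, e.g. (illustrative):
+
+```mlir
+^bb0:
+  // Each edge supplies its own value for ^bb1's argument.
+  cond_br %cond, ^bb1(%a : i32), ^bb1(%b : i32)
+^bb1(%x : i32):  // %x is %a or %b depending on the edge taken.
+  ...
+```
+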
+For more context, block arguments were previously used in the Swift
+[SIL Intermediate Representation](https://github.com/apple/swift/blob/master/docs/SIL.rst),
+and described in
+[a talk on YouTube](https://www.youtube.com/watch?v=Ntj8ab-5cvE). The section of
+interest
+[starts here](https://www.google.com/url?q=https://youtu.be/Ntj8ab-5cvE?t%3D596&sa=D&ust=1529450150971000&usg=AFQjCNFQHEWL7m8q3eO-1DiKw9zqC2v24Q).
+
+### Index type disallowed in vector/tensor/memref types
+
+Index types are not allowed as elements of `vector`, `tensor` or `memref` type.
+Index types are intended to be used for platform-specific "size" values and may
+appear in subscripts, sizes of aggregate types and affine expressions. They are
+also tightly coupled with `affine.apply` and `affine.load`/`affine.store`
+operations; having `index` type is a necessary precondition for a value to be
+acceptable to these operations. While it may be useful to have
+`memref<?xindex>` to express
+indirect accesses, e.g. sparse matrix manipulations or lookup tables, it creates
+problems MLIR is not ready to address yet. MLIR needs to internally store
+constants of aggregate types and emit code operating on values of those types,
+which are subject to target-specific size and alignment constraints. Since MLIR
+does not have a target description mechanism at the moment, it cannot reliably
+emit such code. Moreover, some platforms may not support vectors of type
+equivalent to `index`.
+
+Indirect access use cases can be alternatively supported by providing an
+`index_cast` instruction that allows for conversion between `index` and
+fixed-width integer types at the SSA value level. It has the additional benefit
+of supporting smaller integer types, e.g. `i8` or `i16`, for small indices
+instead of the (presumably larger) `index` type.
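+
+A hypothetical indirect access using such a cast might look like (names and
+shapes are illustrative):
+
+```mlir
+// Load an index stored as i32, widen it to 'index', then subscript with it.
+%raw = load %indices[%i] : memref<128xi32>
+%idx = index_cast %raw : i32 to index
+%val = load %data[%idx] : memref<?xf32>
+```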
+
+### Bit width of non-primitive types and `index` is undefined
+
+The bit width of a compound type is not defined by MLIR; it may be defined by a
+specific lowering pass. In MLIR, bit width is a property of certain primitive
+_types_, in particular integers and floats. It is equal to the number that
+appears in the type definition, e.g. the bit width of `i32` is `32`, so is the
+bit width of `f32`. The bit width is not _necessarily_ related to the amount of
+memory (in bytes) or the size of a register (in bits) that is necessary to store
+the value of the given type. These quantities are target and ABI-specific and
+should be defined during the lowering process rather than imposed from above.
+For example, `vector<3xi57>` is likely to be lowered to a vector of four 64-bit
+integers, so that its storage requirement is `4 x 64 / 8 = 32` bytes, rather
+than `(3 x 57) ceildiv 8 = 22` bytes as can be naively computed from the
+bitwidth. Individual components of MLIR that allocate space for storing values
+may use the bit size as the baseline and query the target description when it is
+introduced.
+
+The bit width is not defined for dialect-specific types at MLIR level. Dialects
+are free to define their own quantities for type sizes.
+
+### Signless types
+
+Integers in the builtin MLIR type system have a bitwidth (note that the `index`
+type has a symbolic width equal to the machine word size), but they do not have
+an intrinsic sign. This means that the "standard ops" operation set has things
+like `addi` and `muli` which do two's complement arithmetic, but some other
+operations get a sign, e.g. `divis` vs `diviu`.
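+
+For instance (illustrative):
+
+```mlir
+// Two's complement addition is sign-agnostic, so one op suffices.
+%sum = addi %a, %b : i8
+// Division is not, so signed and unsigned variants are distinct ops.
+%q_signed   = divis %a, %b : i8
+%q_unsigned = diviu %a, %b : i8
+```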
+
+LLVM uses the [same design](http://llvm.org/docs/LangRef.html#integer-type),
+which was introduced in a revamp rolled out
+[in the LLVM 2.0 integer type](http://releases.llvm.org/2.0/docs/LangRef.html#t_derived).
+Prior to that, from
+[LLVM 1.0](http://releases.llvm.org/1.0/docs/LangRef.html#t_classifications) to
+[1.9](http://releases.llvm.org/1.9/docs/LangRef.html#t_classifications), LLVM
+used signed types like "sbyte" and "ubyte". This shift was important and has
+served LLVM well over the years. The reason this is important is that it is a
+good thing for an intermediate representation to represent the same computation
+with the same instruction. Signed types got in the way, because (e.g.) an "add
+of an sbyte" does the same computation as an "add of a ubyte", but the type
+system made them look artificially different. This split also required casts
+like "cast from sbyte to ubyte" which do nothing at the machine level. Removing
+signs from the type system eliminated these problems, making the compiler
+simpler.
+
+More information about this split is available in an old
+[talk on youtube](https://www.youtube.com/watch?v=VeRaLPupGks) talking about
+LLVM 2.0.
+
+Note that this rationale only applies to the "standard ops" dialect in which we
+can express an opinion about its design. Other dialects generally try to model
+an external system, and should aim to reflect its design as closely as possible.
+
+### Splitting floating point vs integer operations
+
+The MLIR "standard" operation set splits many integer and floating point
+operations into different categories, for example `addf` vs `addi` and `cmpf` vs
+`cmpi`
+([following the design of LLVM](http://llvm.org/docs/LangRef.html#binary-operations)).
+These instructions _are_ polymorphic on the number of elements in the type
+though, for example `addf` is used with scalar floats, vectors of floats, and
+tensors of floats (LLVM does the same thing with its scalar/vector types).
+
+This split is important because floating point and integer operations are quite
+different in practice: for example, floating point values include NaNs, so
+[integer comparisons](http://llvm.org/docs/LangRef.html#icmp-instruction) and
+[floating point comparisons](http://llvm.org/docs/LangRef.html#fcmp-instruction)
+should use different comparison opcodes. On the arithmetic side of things,
+floating point operations support rounding modes, floating point contractions,
+["fast math"](http://llvm.org/docs/LangRef.html#fadd-instruction), and integers
+may want to have two's complement overflow behavior or be undefined on
+[various forms of wrapping](http://llvm.org/docs/LangRef.html#add-instruction)
+for performance.
+
+We are a long way from this sort of thing being a priority to care about in
+MLIR, but since we have experience and know the right way to do this, we'd
+rather design it in from the beginning.
+
+Note that this rationale only applies to the "standard ops" dialect in which we
+can express an opinion about its design. Other dialects generally try to model
+an external system, and should aim to reflect its design as closely as possible.
+
+### Specifying sign in integer comparison operations
+
+Since integers are [signless](#signless-types), it is necessary to define the
+sign for integer comparison operations. This sign indicates how to interpret
+the most significant bit of the integer: as a sign bit or as a regular value
+bit. For example, comparing two `i4` values `0b1000` and `0b0010` yields
+different results for unsigned (`8 > 2`) and signed (`-8 < 2`) interpretations.
+This
+difference is only significant for _order_ comparisons, but not for _equality_
+comparisons. Indeed, for the latter all bits must have the same value
+independently of the sign. Since both arguments have exactly the same bit width
+and cannot be padded by this operation, it is impossible to compare two values
+whose bit representations would differ while the values are interpreted as
+equal.
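+
+Assuming `%a` holds the bit pattern `0b1000` and `%b` holds `0b0010`
+(illustrative):
+
+```mlir
+%sgt = cmpi "sgt", %a, %b : i4  // signed:   -8 > 2 is false
+%ugt = cmpi "ugt", %a, %b : i4  // unsigned:  8 > 2 is true
+%eq  = cmpi "eq",  %a, %b : i4  // equality is sign-independent: false
+```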
+
+### Specifying comparison kind as attribute
+
+Unlike arithmetic operations, comparison operators share several common
+properties, e.g. they cannot be considered associative. In practice,
+comparisons are sometimes
+implemented by the same instruction or its variants so it makes sense to group
+them together at the IR level.
+
+An alternative would be introducing ten distinct operators for all currently
+supported kinds of integer comparisons. These operators would have increased the
+number of "reserved" names used by standard operations as well as the size of
+the C++ API while their implementations would have been mostly identical.
+
+The comparison kind is internally an integer attribute. However, for the sake of
+readability by humans, custom assembly form accepts string literals that are
+mapped to the underlying integer values: `cmpi "eq", %lhs, %rhs` better implies
+integer equality comparison than `cmpi 0, %lhs, %rhs` where it is unclear what
+gets compared to what else. This syntactic sugar is possible thanks to parser
+logic redefinitions for custom assembly form of non-builtin operations.
+Supporting it in the full notation would have required changing how the main
+parsing algorithm works and may have unexpected repercussions. While it would
+have been possible to store the predicate as a string attribute, that would have
+made it impossible to implement switching logic based on the comparison kind and
+would have made attribute validity checks (one out of ten possible kinds) more
+complex.
+
+### 'select' operation to implement min/max
+
+Although `min` and `max` operations are likely to occur as a result of
+transforming affine loops in ML functions, we did not make them first-class
+operations. Instead, we provide the `select` operation that can be combined with
+`cmpi` to implement the minimum and maximum computation. Although they now
+require two operations, they are likely to be emitted automatically during the
+transformation inside MLIR. On the other hand, there are multiple benefits of
+introducing `select`: standalone min/max would concern themselves with the
+signedness of the comparison, already taken into account by `cmpi`; `select` can
+support floats transparently if used after a float-comparison operation; the
+lower-level targets provide `select`-like instructions making the translation
+trivial.
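+
+For instance, a signed minimum of two `i32` values can be sketched as:
+
+```mlir
+// min(%a, %b) via an order comparison followed by select.
+%cond = cmpi "slt", %a, %b : i32
+%min = select %cond, %a, %b : i32
+```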
+
+This operation could have been implemented with additional control flow: `%r =
+select %cond, %t, %f` is equivalent to
+
+```mlir
+^bb0:
+ cond_br %cond, ^bb1(%t), ^bb1(%f)
+^bb1(%r):
+```
+
+However, this control flow granularity is not available in the ML functions
+where min/max, and thus `select`, are likely to appear. In addition, simpler
+control flow may be beneficial for optimization in general.
+
+### Regions
+
+#### Attributes of type 'Block'
+
+We considered representing regions through `ArrayAttr`s containing a list of a
+special type `IRBlockAttr`, which in turn would contain a list of operations.
+All attributes in MLIR are uniqued within the context, which would make the IR
+inside the regions immortal for no good reason.
+
+#### Use "inlined" functions as regions
+
+We considered attaching a "force-inline" attribute on a function and/or a
+function `call` operation. Even the minimal region support (use cases in
+affine.for and affine.if existing before the regions) requires access to the
+values defined in the dominating block, which is not supported by functions.
+Conceptually, function bodies are instances of regions rather than the inverse;
+regions can also be device kernels, alternative sections, etc.
+
+#### Dedicated `region` operation
+
+This would mean we have a special kind of operation that is allowed to have
+regions while other operations are not. Such distinction is similar to the
+Stmt/Op difference we have had and chose to remove to make the IR simpler and
+more flexible. It would also require analyses and passes to consider the
+interplay between operations (e.g., an `affine.for` operation must be followed
+by a region operation). Finally, a region operation can be introduced using the
+current implementation, among other operations and without being special in any
+sense.
+
+#### Explicit capture of the values used in a region
+
+Being able to use values defined outside the region implies that use-def chains
+may contain uses from different nested regions. Consequently, IR transformations
+and analyses can pull the instruction defining the value across region
+boundaries, for example in case of TableGen-defined canonicalization patterns.
+This would not be the case if all used values had been passed as region
+arguments. One of the motivations for introducing regions in the IR is precisely
+to enable cross-region analyses and transformations that are simpler than
+inter-procedural transformations. Having uses from different regions appear in
+the same use-def chain, contrary to an additional data structure maintaining
+correspondence between function call arguments as uses of the original
+definitions and formal arguments as new definitions, enables such
+simplification. Since individual operations now belong to blocks, which belong
+to regions, it is always possible to check if the definition of the value
+belongs to the same region as its particular use. The risk is that any IR
+traversal will need to handle this situation explicitly and it is easy to forget
+a check (or conversely it isn't easy to design the right check in a tablegen
+pattern for example): traversing use-def chains potentially crosses implicitly
+semantic barriers, making it possible to unknowingly break region semantics.
+This is expected to be caught in the verifier after the transformation.
+
+At the same time, one may choose to pass certain or all values as region
+arguments to explicitly break the use-def chains in the current proposal. This
+can be combined with an attribute-imposed semantic requirement disallowing the
+body of the region to refer to any value from outside it.
+
+### Quantized integer operations
+
+We haven't designed integer quantized operations in MLIR, but experience from
+TensorFlow suggests that it is better to put information about the quantization
+range/scale into the type itself, rather than have a single type like "qint8"
+and put these on attributes of the operation.
+
+There are a few ways to do this with MLIR, including at least:
+
+* We could do the same thing TensorFlow does - and we will _have_ to support
+ that model to some extent for compatibility.
+* We can encode the fp range of quantized integers directly into the types
+ when they are constants. The best practice on this seems to be to encode the
+ zero point as well as a scale factor. This ensures that 0.0 is always
+ exactly representable, e.g. `qi8<-1.42, 31.23x>`.
+* We could theoretically encode dynamically determined ranges into the types
+ using something like `qi8<?,?>` with the bounds being determined through the
+ SSA dataflow graph dynamically - similar to how dynamic shapes are handled.
+
+We will definitely need to do #1 for compatibility, we probably want to do #2,
+and we should investigate #3 over time. That said, our short term plan is to get
+more implementation experience with the rest of the system first, then come back
+to re-examine the representation for quantized arithmetic when we have that
+experience. When we do, we should chat with benoitjacob@ and
+[read the paper](https://arxiv.org/abs/1712.05877).
+
+### Dialect type extensions
+
+This section describes the design decisions that shaped the dialect extensible
+type system present in MLIR.
+
+#### Reserving dialect type kinds
+
+Dialects that wish to define type extensions must reserve a range of type kinds
+within a '.def' file within the core IR library. This means that every dialect
+wishing to define custom types must modify this file, but it guarantees that all
+type casting checks are performed in O(1) time.
+
+#### Interactions between dialects
+
+There are two different interactions between dialects that are important to
+understand. When types of a dialect are:
+
+* In operations of other dialects
+
+ - For standard/builtin operations, only standard/builtin types are
+ allowed. This restriction allows for operations to clearly understand
+ the invariants that they are working under.
+ - Outside of standard/builtin operations, dialects are expected to verify
+ the allowable operation types per operation.
+
+* In types of other dialects
+
+ - For standard/builtin types, these types are allowed to contain types
+ from other dialects. This simplifies the type system and removes the
+ need for dialects to redefine all of the standard aggregate types, e.g.
+ tensor, as well as the memref type. Dialects are expected to verify that
+ a specific type is valid within a standard type, e.g. if a type can be
+ an element of a tensor.
+ - For dialect types, the dialect is expected to verify any type
+ invariants, e.g. if the standard tensor type can contain a specific type
+ of that dialect.
+
+#### Separating builtin and standard types
+
+Following the separation between the built-in and standard dialect, it makes
+sense to separate built-in types and standard dialect types. Built-in types are
+required for the validity of the IR itself, e.g. the function type (which
+appears in function signatures and generic assembly forms of operations).
+Integer, float, vector, memref and tensor types, while important, are not
+necessary for IR validity.
+
+#### Unregistered types
+
+MLIR supports unregistered operations in generic assembly form. MLIR also
+supports a similar concept for types. When parsing, if the dialect for a dialect
+type has not been registered, the type is modeled as an 'OpaqueType'. This
+allows for types to be round-tripped without needing to link in the dialect
+library
+that defined them. No additional information about opaque types, outside of
+parsing/printing, will be available.
+
+#### Dialect type syntax
+
+Dialect extended types are represented as string literals wrapped inside of the
+dialect namespace. This means that the parser delegates to the dialect for
+parsing specific type instances. This differs from the representation of
+dialect-defined operations, which have an identifier name that the parser uses
+to identify and parse them.
+
+This representation was chosen for several reasons:
+
+##### Dialects must provide custom type parsers
+
+Dialect type parsing cannot plug into the existing parser infrastructure as
+operations do with the OpAsmParser/Printer. Operations have a defined syntax
+structure that is the same across all dialects. Types, on the other hand, may
+have many different, and sometimes conflicting, parsing constraints that would
+be difficult/unmaintainable to provide within a single interface.
+
+This also has the added benefit of encouraging dialects to reuse existing
+external type parsers. For example, an LLVM dialect may provide an MLIR LLVM
+type that is simply a wrapper around LLVM types. The LLVM dialect would then use
+the existing LLVM type parsing infrastructure.
+
+Example:
+
+```mlir
+%s = "foo"() : () -> !llvm<"i32*">
+```
+
+##### Types do not always have canonical names
+
+Unlike operations, types generally do not have a formal canonical name. For
+example, function types have no defined keyword and integer types are defined by
+a regular expression to support arbitrary bitwidth. Dialects with existing type
+systems, e.g. LLVM, are likely to provide wrappers around their existing type
+systems. For these wrapper types there is no simple canonical name; it's logical
+to think of these types as existing within the namespace of the dialect. If a
+dialect wishes to assign a canonical name to a type, it can be done via
+[type aliases](LangRef.md#type-aliases).
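+
+For example, using the alias syntax from the
+[LangRef](LangRef.md#type-aliases) (the `!avx_m128` name is illustrative):
+
+```mlir
+// Give a canonical name to a vector type within this module.
+!avx_m128 = type vector<4 x f32>
+```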
+
+### Tuple types
+
+The MLIR type system provides first class support for defining
+[tuple types](LangRef.md#tuple-type). This is due to the fact that `Tuple`
+represents a universal concept that is likely to, and has already begun to,
+present itself in many different dialects. Though this type is first class in
+the type system, it merely serves to provide a common mechanism in which to
+represent this concept in MLIR. As such, MLIR provides no standard operations
+for interfacing with `tuple` types. It is up to dialect authors to provide
+operations, e.g. extract_tuple_element, to interpret and manipulate them. When
+possible, operations should prefer to use multiple results instead. These
+provide a myriad of benefits, such as alleviating any need for tuple-extract
+operations that merely get in the way of analysis and transformation.
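+
+As a sketch, a dialect-provided extraction operation could look like the
+following; `foo.extract_tuple_element` is hypothetical, not a standard op:
+
+```mlir
+// Extract element #1 (an f32) from a 2-element tuple value.
+%elem = "foo.extract_tuple_element"(%tup) {index = 1 : i32} : (tuple<i32, f32>) -> f32
+```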
+
+### Assembly forms
+
+MLIR decides to support both generic and custom assembly forms under the
+following considerations:
+
+MLIR is an open system; it is designed to support modular and pluggable
+dialects. Depending on whether there exists a corresponding dialect and whether
+the dialect is plugged in, operations may or may not be registered into the
+MLIR system. Yet we still need a way to investigate these operations, so the
+generic assembly form is mandated by this aspect of the MLIR system: it
+provides a default textual form for any operation.
+
+On the other hand, an assembly form exists to assist developers in
+investigating the IR. The generic form serves as a safe fallback, but it can be too verbose for
+certain ops. Therefore, MLIR gives each dialect the choice to define a custom
+assembly form for each operation according to the operation's semantics and
+specific needs. The custom assembly form can de-duplicate information from the
+operation to derive a more concise form, thus better facilitating the
+comprehension of the IR.
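+
+As an illustration, the same standard addition operation printed in both forms:
+
+```mlir
+// Generic assembly form.
+%sum = "std.addi"(%a, %b) : (i32, i32) -> i32
+
+// Equivalent custom assembly form, which de-duplicates the operand types.
+%sum = addi %a, %b : i32
+```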
+
+## Examples
+
+This section describes a few very simple examples that help understand how MLIR
+represents computation.
+
+### Non-affine control flow
+
+```c
+// A simple linear search in every row of a matrix
+for (i = 0; i < N; i++) {
+ for (j = 0; j < N; j++) {
+ // dynamic control flow
+ if (a[i][j] == key) {
+ s[i] = j;
+ break;
+ }
+ }
+}
+```
+
+The presence of dynamic control flow leads to an inner non-affine function
+nested in an outer function that uses affine loops.
+
+```mlir
+func @search(%A: memref<?x?xi32>, %S: memref<?xi32>, %key : i32) {
+ %ni = dim %A, 0 : memref<?x?xi32>
+ // This loop can be parallelized
+ affine.for %i = 0 to %ni {
+ call @search_body(%A, %S, %key, %i) : (memref<?x?xi32>, memref<?xi32>, i32, i32) -> ()
+ }
+ return
+}
+
+func @search_body(%A: memref<?x?xi32>, %S: memref<?xi32>, %key: i32, %i : i32) {
+ %nj = dim %A, 1 : memref<?x?xi32>
+ %c0 = constant 0 : i32
+ br ^bb1(%c0 : i32)
+
+^bb1(%j: i32):
+ %p1 = cmpi "lt", %j, %nj : i32
+ cond_br %p1, ^bb2, ^bb5
+
+^bb2:
+ %v = affine.load %A[%i, %j] : memref<?x?xi32>
+ %p2 = cmpi "eq", %v, %key : i32
+ cond_br %p2, ^bb3(%j : i32), ^bb4
+
+^bb3(%j2: i32):
+ affine.store %j2, %S[%i] : memref<?xi32>
+ br ^bb5
+
+^bb4:
+ %jinc = addi %j, 1 : i32
+ br ^bb1(%jinc : i32)
+
+^bb5:
+ return
+}
+```
+
+As per the [MLIR spec](LangRef.md), the restrictions on dimensions and symbol
+identifiers to be used with the affine.apply operation only apply to accesses
+inside `affine.for` and `affine.if` operations. However, an analysis of accesses
+inside the called function (`@search_body`) is necessary to determine if the
+`%i` loop could be parallelized: such function access analysis is
+calling-context sensitive.
+
+### Non-affine loop bounds
+
+Loop bounds that are not affine lead to a nesting of functions as shown below.
+
+```c
+for (i = 0; i < N; i++)
+  for (j = 0; j < N; j++)
+    // Non-affine loop bound for k loop.
+    for (k = 0; k < pow(2, j); k++)
+      for (l = 0; l < N; l++) {
+        // block loop body
+        ...
+      }
+```
+
+```mlir
+func @outer_nest(%n : index) {
+ affine.for %i = 0 to %n {
+ affine.for %j = 0 to %n {
+ %pow = call @pow(2, %j) : (index, index) -> index
+ call @inner_nest(%pow, %n) : ...
+ }
+ }
+ return
+}
+
+func @inner_nest(%m : index, %n : index) {
+ affine.for %k = 0 to %m {
+ affine.for %l = 0 to %n {
+ ...
+ }
+ }
+ return
+}
+```
+
+### Reference 2D Convolution
+
+The following example illustrates a reference implementation of a 2D
+convolution, which uses an integer set `#domain` to represent valid input data
+in a dilated convolution.
+
+```mlir
+// Dilation factors S0 and S1 can be constant folded if constant at compile time.
+#domain = (d0, d1)[S0,S1,S2,S3]: (d0 % S0 == 0, d1 % S1 == 0, d0 >= 0, d1 >= 0,
+ S2 - d0 - 1 >= 0, S3 - d1 - 1 >= 0)
+// Identity map (shown here for illustration).
+#map0 = (d0, d1, d2, d3, d4, d5, d6) -> (d0, d1, d2, d3, d4, d5, d6)
+
+// Affine map from output to input coordinate space.
+// d0 = output_h, d1 = output_w, d2 = kernel_h, d3 = kernel_w
+// S0 = h_stride, S1 = w_stride, S2 = h_kernel_dilation, S3 = w_kernel_dilation
+// S4 = h_pad_low, S5 = w_pad_low
+// %out0 = %oh * %h_stride + %kh * %h_kernel_dilation - %h_pad_low
+// %out1 = %ow * %w_stride + %kw * %w_kernel_dilation - %w_pad_low
+#map1_0 = (d0, d1, d2, d3) [S0, S1, S2, S3, S4, S5] -> (d0 * S0 + d2 * S2 - S4)
+#map1_1 = (d0, d1, d2, d3) [S0, S1, S2, S3, S4, S5] -> (d1 * S1 + d3 * S3 - S5)
+
+// Semi-affine map to undilated input coordinate space.
+// d0 = input_h, d1 = input_w, S0 = h_base_dilation, S1 = w_base_dilation.
+#map2_0 = (d0, d1) [S0, S1] -> (d0 / S0)
+#map2_1 = (d0, d1) [S0, S1] -> (d1 / S1)
+
+// Conv2D shapes:
+// input: [batch, input_height, input_width, input_feature]
+// kernel: [kernel_height, kernel_width, input_feature, output_feature]
+// output: [batch, output_height, output_width, output_feature]
+func @conv2d(%input: memref<16x1024x1024x3xf32, #lm0, /*scratchpad=*/1>,
+ %kernel: memref<5x5x3x32xf32, #lm0, /*scratchpad=*/1>,
+ %output: memref<16x512x512x32xf32, #lm0, /*scratchpad=*/1>) {
+ affine.for %b = 0 to %batch {
+ affine.for %oh = 0 to %output_height {
+ affine.for %ow = 0 to %output_width {
+ affine.for %of = 0 to %output_feature {
+ affine.for %kh = 0 to %kernel_height {
+ affine.for %kw = 0 to %kernel_width {
+ affine.for %if = 0 to %input_feature {
+ // Calculate input indices.
+ %1_0 = affine.apply #map1_0 (%oh, %ow, %kh, %kw)
+ [%h_stride, %w_stride, %h_kernel_dilation, %w_kernel_dilation,
+ %h_pad_low, %w_pad_low]
+ %1_1 = affine.apply #map1_1 (%oh, %ow, %kh, %kw)
+ [%h_stride, %w_stride, %h_kernel_dilation, %w_kernel_dilation,
+ %h_pad_low, %w_pad_low]
+
+ // Check if access is not in padding.
+ affine.if #domain(%1_0, %1_1)
+ [%h_base_dilation, %w_base_dilation, %h_bound, %w_bound] {
+ %2_0 = affine.apply #map2_0 (%1_0, %1_1)
+ %2_1 = affine.apply #map2_1 (%1_0, %1_1)
+ // Compute: output[output_indices] += input[input_indices] * kernel[kernel_indices]
+ call @multiply_accumulate(%input, %kernel, %output, %b, %oh, %ow, %of, %kh, %kw, %if, %2_0, %2_1)
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ return
+}
+```
+
+TODO: Add more examples showing the IR for a variety of interesting cases.
+
+## Design alternatives and extensions
+
+This is a list of some design alternatives and extensions that we discussed in
+detail but did not include in the spec or postponed them for future
+consideration on demand. We will revisit these discussions when we have more
+implementation experience and learn more about the challenges and limitations of
+our current design in practice.
+
+### Polyhedral code representation alternatives: schedule lists vs schedule trees vs affine loop/if forms
+
+The current MLIR uses a representation of polyhedral schedules using a tree of
+if/for loops. We extensively debated the tradeoffs involved in the typical
+unordered polyhedral instruction representation (where each instruction has
+multidimensional schedule information), discussed the benefits of schedule tree
+forms, and eventually decided to go with a syntactic tree of affine if/else
+conditionals and affine for loops. Discussion of the tradeoff was captured in
+this document:
+[ MLIR: The case for a simplified polyhedral form](RationaleSimplifiedPolyhedralForm.md).
+
+At a high level, we have two alternatives here:
+
+1. Schedule tree representation instead of an affine loop AST form: The current
+ proposal uses an affine loop and conditional tree form, which is syntactic
+ and has no separation of domains as sets and schedules as multidimensional
+ affine functions. A schedule tree form, however, makes polyhedral domains and
+ schedules a first class concept in the IR allowing compact expression of
+ transformations through the schedule tree without changing the domains of
+ instructions. Such a representation also hides prologues, epilogues, partial
+ tiles, complex loop bounds and conditionals, making loop nests free of
+ "syntax". Cost models instead look at domains and schedules. In addition, if
+ necessary such a domain schedule representation can be normalized to
+ explicitly propagate the schedule into domains and model all the cleanup
+ code. An example and more detail on the schedule tree form is in the next
+ section.
+1. Having two different forms of "affine regions": an affine loop tree form
+ and a polyhedral schedule tree form. In the latter, ops could carry
+ attributes capturing domain, scheduling, and other polyhedral code
+ generation options with IntegerSet, AffineMap, and other attributes.
+
+#### Schedule Tree Representation for Affine Regions
+
+This representation is based on a simplified form of the domain/schedule
+representation used by the polyhedral compiler community. Domains represent what
+has to be executed while schedules represent the order in which domain elements
+are interleaved. We model domains as non-piece-wise convex integer sets, and
+schedules as affine functions; however, the former can be disjunctive, and the
+latter can be piece-wise affine relations. In the schedule tree representation,
+domain and schedules for instructions are represented in a tree-like structure
+which is called a schedule tree. Each non-leaf node of the tree is an abstract
+polyhedral dimension corresponding to an abstract fused loop for each ML
+instruction that appears in that branch. Each leaf node is an ML Instruction.
+
+```mlir
+// A tiled matmul code (128x128x128) represented in schedule tree form
+
+// #map0 = (d0, d1, d2, d3, d4, d5) -> (128*d0 + d3, 128*d1 + d4, 128*d2 + d5)
+#intset_ij = (i, j) [M, N, K] : i >= 0, -i + N - 1 >= 0, j >= 0, -j + N-1 >= 0
+#intset_ijk = (i, j, k) [M, N, K] : i >= 0, -i + N - 1 >= 0, j >= 0,
+ -j + M-1 >= 0, k >= 0, -k + N - 1 >= 0
+func @matmul(%A, %B, %C, %M, %N, %K) : (...) { // %M, N, K are symbols
+ // t1, t2, t3, t4, t5, t6 are abstract polyhedral loops
+ mldim %t1 : {S1,S2,S3,S4,S5} floordiv (i, 128) {
+ mldim %t2 : {S1,S2,S3,S4,S5} floordiv (j, 128) {
+ // (%i, %j) = affine.apply (d0, d1) -> (128*d0, 128*d1) (%t1, %t2)
+ call dma_mem_to_scratchpad(%C, %i, %j, %M, %N, %K)
+ with #intset_ij(%i, %j) [%M, %N, %K]
+ mldim %t3 : {S2,S3,S4,S5} floordiv (k, 128) {
+ // (%i, %j, %k) = affine.apply (d0, d1, d2)
+ // -> (128*d0, 128*d1, 128*d2) (%t1, %t2, %t3)
+ call dma_mem_to_scratchpad(%A, ...) with #intset_ijk (%i, %j, %k) [%M, %N, %K]
+ // (%i, %j, %k) = affine.apply (d0, d1, d2)
+ // -> (128*d0, 128*d1, 128*d2) (%t1, %t2, %t3)
+ call dma_mem_to_scratchpad(%B, ...) with #intset_ijk (%i, %j, %k) [%M, %N, %K]
+ mldim %t4 : {S4} i mod 128 {
+ mldim %t5 : {S4} j mod 128 {
+ mldim %t6 : {S4} k mod 128 {
+ // (%i, %j, %k) = affine.apply #map0 (%t1, %t2, %t3, %t4, %t5, %t6)
+ call matmul_body(%A, %B, %C, %i, %j, %k, %M, %N, %K)
+ with #inset_ijk(%i, %j, %k) [%M, %N, %K]
+ } // end mldim t6
+ } // end mldim t5
+ } // end mldim t4
+ } // end mldim t3
+ // (%i, %j) = affine.apply (d0, d1) -> (128*d0, 128*d1) (%t1, %t2)
+ call @dma_scratchpad_to_mem_C ... with #intset_ij(%i, %j) [%M, %N, %K]
+ } // end mldim t2
+ } // end mldim t1
+ return
+}
+
+```
+
+### Affine Relations
+
+The current MLIR spec includes affine maps and integer sets, but not affine
+relations. Affine relations are a natural way to model read and write access
+information, which can be very useful to capture the behavior of opaque external
+library calls, high-performance vendor libraries, or user-provided / user-tuned
+routines.
+
+An affine relation is a relation between input and output dimension identifiers
+while being symbolic on a list of symbolic identifiers and with affine
+constraints on the identifiers.
+
+Syntax:
+
+```
+// Affine relation definition at the top of file
+affine-rel-def ::= affine-rel-id `=` affine-relation-inline
+
+affine-rel-id ::= `##` prefixed-id
+
+affine-relation-inline ::=
+ `(` input-dims `)` (`[` symbols `]`)? `->`
+ `(` output-dims `)` : affine-constraint-conjunction
+
+input-dims ::= bare-id-list
+output-dims ::= bare-id-list
+symbols ::= bare-id-list
+
+affine-rel ::= affine-rel-id | affine-relation-inline
+
+// Usage
+affine-rel-spec ::= affine-rel dim-and-symbol-use-list
+```
+
+All identifiers appearing in input-dims, output-dims, and symbols are pairwise
+distinct. All affine-constraint non-terminals in the above syntax are allowed
+to contain identifiers only from input-dims, output-dims, and symbols.
+
+Affine relations are used to model read, write, may_read, and may_write sets of
+functions in the IR. The output dimension identifiers correspond to the data
+dimensions.
+
+Example:
+
+```mlir
+// read relation: two elements ( d0 <= r0 <= d0+1 )
+##aff_rel9 = (d0) -> (r0) : r0 - d0 >= 0, d0 - r0 + 1 >= 0
+
+func @count (%A : memref<128xf32>, %pos : i32) -> f32
+ reads: {%A ##aff_rel9 (%pos)}
+ writes: /* empty */
+ may_reads: /* empty */
+ may_writes: /* empty */ {
+^bb0(%0: memref<128xf32>, %1: i32):
+ %val0 = affine.load %A [%pos]
+ %val1 = affine.load %A [%pos + 1]
+ %p = mulf %val0, %val1 : f32
+ return %p : f32
+}
+```
+
+### Regions
+
+#### Making function definition an operation
+
+MLIR supports values of a Function type. Instead of having a first-class IR
+concept for functions, one could define an operation with a body region that
+defines a function value. The particularity of functions is that their names
+are globally visible and can be referred to before being defined, unlike SSA
+values that must be defined first. Implementing a "function definition"
+operation would require relaxing some of the SSA constraints in a region, and
+making the IR Module a region as well. It would also affect the core
+infrastructure (e.g., function passes) only for the sake of concept unification.
+
+#### Having types on a region
+
+Instead of inspecting the types of arguments of the first block, one could give
+the region itself a type. This type would be redundant with block argument
+types, which must be specified anyway, and would create room for type mismatches. While
+functions do have types that are partly redundant with the arguments of the
+first block in the function, this is necessary to support function declarations
+that do not have a body which we can refer to in order to obtain the argument
+types. A region is always contained in an operation or a function that can be
+queried to obtain the “type” of the region if necessary.
+
+A type on a region can be justified if Regions were to be considered separately
+from the enclosing entity (operation or function) and had their own semantics
+that should be checked.
+
+#### Attaching attributes to regions
+
+Regions could be annotated with dialect attributes to use attribute verification
+hooks. An operation could take multiple regions as arguments, and each of them
+may require different attributes. However, there are currently very few
+practical cases where this would be necessary. Instead, one could simulate
+per-region attributes with array attributes attached to the entity containing
+the region (operation or function). This decreases the overall complexity of the
+IR and enables more concise and op-specific forms, e.g., when all regions of an
+op have the same attribute that can be only mentioned once. Since the semantics
+of the region is entirely defined by the enclosing entity, it also makes sense
+to have attributes attached to that entity rather than to the region itself.
+
+This can be reconsidered in the future if we see a non-negligible number of use
+cases.
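+
+As a sketch of this simulation, an operation with two regions could carry one
+attribute per region in an array attribute; the `foo.op` name and the
+`region_attrs` attribute are hypothetical:
+
+```mlir
+"foo.op"() ({
+  // region #0
+}, {
+  // region #1
+}) {region_attrs = [{fast = true}, {fast = false}]} : () -> ()
+```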
+
+### Read/Write/May_Read/May_Write sets for External Functions
+
+Having read, write, may_read, and may_write sets for external functions which
+include opaque ones, high-performance vendor libraries such as CuDNN, CuB, MKL,
+FFT libraries, user-provided/optimized functions, or data movement runtimes such
+as DMA ones is a powerful feature. It allows the compiler to perform analysis,
+composition/transformation in the presence of such calls and with loops around
+such calls on sub-tensors. For user-provided or custom hand-tuned functions, the
+read/write/may_read/may_write sets could be provided a priori by a user as part
+of the external function signature, or they could be part of a database.
+
+TODO: Design this, and update to use function attribute syntax.
+
+Example:
+
+```mlir
+##rel9 ( ) [s0] -> (r0, r1) : 0 <= r0 <= 1023, 0 <= r1 <= s0 - 1
+
+func @cblas_reduce_ffi(%M: memref<1024 x ? x f32, #layout_map0, /*mem=*/0>)
+ -> f32 [
+ reads: {%M, ##rel9() }
+ writes: /* empty */
+ may_reads: /* empty */
+ may_writes: /* empty */
+]
+
+func @dma_mem_to_scratchpad(%a : memref<1024 x f32, #layout_map0, /*mem=*/0>,
+ %b : memref<1024 x f32, #layout_map0, 1>, %c : memref<1024 x f32,
+ #layout_map0>) [
+ reads: {%a, ##rel9() }
+ writes: /* empty */
+ may_reads: /* empty */
+ may_writes: /* empty */
+ ]
+
+```
+
+### Memref Extensions
+
+1. Arbitrary polyhedral shapes for tensors: e.g., triangular shapes in tensor
+ dimensions where there is symmetry: use integer set (affine constraints) to
+ model tensor data space (instead of just extents). Requires some changes to
+ the IR and the in-memory form.
+1. Layout maps
+
+ 1. Allow piece-wise affine maps for layouts: allows clean modeling of
+ boundary cases for images/tensors through padding, wrapping, mirroring,
+ padding where padded values are the results of computation as opposed to
+ data, padding in the interior as opposed to just boundaries.
+ 1. Allow many-to-one layout maps: Index and layout maps in the current
+ proposal are bijective. Extending them to many-to-one layout maps allows
+ cleaner(?) modeling of broadcast/reduce style computations while reusing
+ memory.
+
+ Proposal 2(a) requires non-trivial changes to the IR and the in-memory
+ representation. 2(b) requires no change, but impacts how cost models look at
+ index and layout maps.
+
+### `affine.if` and `affine.for` Extensions for "Escaping Scalars"
+
+We considered providing a representation for SSA values that are live out of
+`if/else` conditional bodies and loop carried in `affine.for` loops. We
+ultimately abandoned this approach due to its complexity. In the current design
+of MLIR, scalar variables cannot escape for loops or if instructions. In
+situations where escaping is necessary, we use zero-dimensional tensors and
+memrefs instead of scalars.
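+
+For example, a value computed inside an `affine.for` body can be communicated
+outward through a zero-dimensional memref; this is an illustrative sketch,
+assuming `%A` is a `memref<10xi32>` in scope:
+
+```mlir
+// Allocate a 0-d memref to act as an "escaping scalar" slot.
+%buf = alloc() : memref<i32>
+affine.for %i = 0 to 10 {
+  %v = affine.load %A[%i] : memref<10xi32>
+  affine.store %v, %buf[] : memref<i32>
+}
+// Read the last stored value after the loop.
+%last = load %buf[] : memref<i32>
+```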
+
+**TODO**: This whole section is obsolete and should be updated to use block
+arguments and a yield like terminator in for/if instructions.
+
+The abandoned design of supporting escaping scalars is as follows:
+
+#### affine.for Instruction
+
+Syntax:
+
+```
+[<out-var-list> =]
+for %<index-variable-name> = <lower-bound> ... <upper-bound> step <step>
+ [with <in-var-list>] { <loop-instruction-list> }
+```
+
+out-var-list is a comma-separated list of SSA values defined in the loop body
+and used outside the loop body. in-var-list is a comma-separated list of SSA
+values used inside the loop body and their initializers. loop-instruction-list
+is a list of instructions that may also include a yield instruction.
+
+Example:
+
+```mlir
+// Return sum of elements in 1-dimensional memref %A
+func i32 @sum(%A : memref<?xi32>, %N : i32) -> (i32) {
+ %init = 0
+ %result = affine.for %i = 0 to %N with %tmp(%init) {
+ %value = affine.load %A[%i]
+ %sum = %value + %tmp
+ yield %sum
+ }
+ return %result : i32
+}
+```
+
+#### affine.if/else Instruction
+
+Syntax:
+
+```
+<out-var-list> = affine.if (<cond-list>) {...} [else {...}]
+```
+
+Out-var-list is a list of SSA values defined by the if-instruction. The values
+are arguments to the yield-instruction that occurs in both the then and else
+clauses when the else clause is present. When the if instruction contains only
+the then clause, the escaping value defined in the then clause should be merged
+with the value the variable had before the if instruction. The design captured
+here does not handle this situation.
+
+Example:
+
+```mlir
+// Compute sum of half of the array
+func i32 @sum_half(%A : memref<?xi32>, %N : i32) -> (i32) {
+ %s0 = 0
+ %s1 = affine.for %i = 1 ... N step 1 with %s2 (%s0) {
+ %s3 = if (%i >= %N / 2) {
+ %v0 = affine.load %A[%i]
+ %s4 = %s2 + %v0
+ yield %s4
+ }
+ yield %s3
+ }
+ return %s1 : i32
+}
+```
+
+### Multithreading the compiler
+
+People want compilers to go fast, and one simple way to do that is to
+multi-thread them. There are multiple strategies for this, but a simple one is
+to optimize and compile separate functions in parallel. LLVM's original pass
+manager anticipated this demand, and the CallGraphSCCPass manager is even
+designed to support this as well, but unfortunately, a few early design
+decisions in LLVM prevent this from ever happening. Instead, things like ThinLTO
+are forced to split programs into separate LLVM modules/contexts and optimize
+those chunks independently.
+
+The problem is that LLVM has several objects in its IR that are globally uniqued
+and also mutable: notably constants like `i32 0`. In LLVM, these constants are
+`Value`s, which allows them to be used as operands to instructions and gives
+them SSA use lists. Because these things are uniqued, every `i32 0` in
+any function shares a use list. This means that optimizing multiple functions in
+parallel won't work (at least without some sort of synchronization on the use
+lists, which would be unbearably inefficient).
+
+MLIR now supports a multithreaded pass manager. We do this through several
+design choices:
+
+1. MLIR makes use of extensive uniqued immutable data structures (affine
+ expressions, types, etc are all immutable, uniqued, and immortal).
+2. Constants are defined in per-function pools, instead of being globally
+ uniqued.
+3. Functions themselves are not SSA values either, so they don't have the same
+ problem as constants.
+4. FunctionPasses are copied (through their copy ctor) into one instance per
+ thread, avoiding sharing of local state across threads.
+
+This allows MLIR function passes to support efficient multithreaded compilation
+and code generation.
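+
+As a concrete illustration of the second point, constants are ordinary
+operations inside each function body, so two functions using the value
+`0 : i32` share no IR object or use list:
+
+```mlir
+func @f() -> i32 {
+  %c0 = constant 0 : i32   // local to @f
+  return %c0 : i32
+}
+func @g() -> i32 {
+  %c0 = constant 0 : i32   // a distinct operation, local to @g
+  return %c0 : i32
+}
+```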
diff --git a/mlir/docs/RationaleSimplifiedPolyhedralForm.md b/mlir/docs/RationaleSimplifiedPolyhedralForm.md
new file mode 100644
index 00000000000..ec2ecc9fe50
--- /dev/null
+++ b/mlir/docs/RationaleSimplifiedPolyhedralForm.md
@@ -0,0 +1,415 @@
+# MLIR: The case for a <em>simplified</em> polyhedral form
+
+MLIR embraces polyhedral compiler techniques for their many advantages in
+representing and transforming dense numerical kernels, but it uses a form that
+differs significantly from other polyhedral frameworks.
+
+**Disclaimer / Warning**
+
+This document is a very early design proposal (which has since been accepted)
+that explored the tradeoffs of using this simplified form vs the traditional
+polyhedral schedule list form. At some point, this document could be dusted off
+and written up as a proper academic paper, but until then, it is better to
+include it in this rough form than not at all. Beware that this document uses
+archaic syntax and should not be considered a canonical reference for modern
+MLIR.
+
+## Introduction
+
+This document discusses general goals of the project, introduces context and the
+two alternatives, then talks about the tradeoffs of these designs. Written by
+Chris Lattner.
+
+## General goals of an IR, and goals of mlfunc's specifically
+
+Our currently planned representation for MLIR consists of two kinds of
+functions: an LLVM-like "CFG Function" and an "ML Function": a function
+represented in multidimensional loop form. The idea is that a CFG function is
+capable of full generality for expressing arbitrary computation, but is awkward
+for loop transformations. In contrast, mlfunc's are limited (e.g. to control
+flow involving loop nests over affine spaces), but these limitations make them
+much easier to transform and analyze, particularly for the set of computations in a
+machine learning kernel.
+
+The design of an intermediate representation is an optimization problem: it
+makes intentional tradeoffs that aim to make certain kinds of compiler
+transformations simple. After all, it is "possible" to do almost any
+transformation on any IR: we could theoretically do loop transformations on
+assembly language. OTOH, such transformations would take too long to write,
+would be fragile due to irrelevant changes, would be difficult to maintain, and
+difficult to make target independent. Performing transformations on the "right
+level" of IR makes it much easier to do analysis and transformation of code, and
+can make them faster by reducing the size of the IR and eliminating
+possibilities that would otherwise have to be considered.
+
+This is the reason we're interested in adding polyhedral techniques to an IR in
+the first place: though our base "CFG function" representation is fully capable
+of expressing any computation, it is "too" expressive. The limitations imposed
+by polyhedral techniques (e.g. on affine loop bounds and array subscripts)
+define a closed algebra that can represent an interesting range of
+transformations and their compositions, and because of their simplicity, we can
+perform (e.g.) dependence analysis more efficiently and more reliably.
+
+This raises an important question that this document examines: given we are
+introducing a redundant and limited way to express code and transformations,
+exactly what form is best to perform the analyses and transformations we want?
+
+We explore two different design points that are capable of expressing the same
+class of affine loop computations, but which use different representational
+forms. These forms trade off verbosity, ease of transformation, and ease of
+analysis in interesting ways.
+
+## Context: Traditional Polyhedral Form
+
+We started by discussing a representation that uses the traditional polyhedral
+schedule set + domain representation, e.g. consider C-like code like:
+
+```c
+ void simple_example(...) {
+ for (int i = 0; i < N; ++i) {
+ for (int j = 0; j < N; ++j) {
+      float tmp = X[i][j];  // S1
+      A[i][j] = tmp + 1;    // S2
+      B[i][j] = tmp * 42;   // S3
+ }
+ }
+ }
+```
+
+The polyhedral representation doesn't care about the actual computation, so we
+will abstract them into S1/S2/S3 in the discussion below. Originally, we planned
+to represent this with a classical form like (syntax details are not important
+and probably slightly incorrect below):
+
+```
+ mlfunc @simple_example(... %N) {
+ %tmp = call @S1(%X, %i, %j)
+ domain: (0 <= %i < %N), (0 <= %j < %N)
+ schedule: (i, j, 0)
+
+ call @S2(%tmp, %A, %i, %j)
+ domain: (0 <= %i < %N), (0 <= %j < %N)
+ schedule: (i, j, 1)
+
+ call @S3(%tmp, %B, %i, %j)
+ domain: (0 <= %i < %N), (0 <= %j < %N)
+ schedule: (i, j, 2)
+ }
+```
+
+In this design, an mlfunc is an unordered bag of instructions whose execution
+order is fully controlled by their schedule.
+
+However, we recently agreed that a more explicit schedule tree representation is
+a better fit for our needs, because it exposes important structure that will
+make analyses and optimizations more efficient, and also makes the scoping of
+SSA values more explicit. This leads us to a representation along the lines of:
+
+```
+ mlfunc @simple_example(... %N) {
+ d0/d1 = mlspace
+ for S1(d0), S2(d0), S3(d0) {
+ for S1(d1), S2(d1), S3(d1) {
+
+ %tmp = call @S1(%X, d0, d1) ;; S1
+ domain: (0 <= d0 < %N), (0 <= d1 < %N)
+
+ call @S2(%tmp, %A, d0, d1) ;; S2
+ domain: (0 <= d0 < %N), (0 <= d1 < %N)
+
+ call @S3(%tmp, %B, d0, d1) ;; S3
+ domain: (0 <= d0 < %N), (0 <= d1 < %N)
+ }
+ }
+ }
+```
+
+This change makes the nesting structure of the loops an explicit part of the
+representation, and makes lexical ordering within a loop significant
+(eliminating the constant 0/1/2 of schedules).
+
+It isn't obvious in the example above, but the representation allows for some
+interesting features, including the ability for instructions within a loop nest
+to have non-equal domains, like the example below, where the second instruction
+ignores the outer 10 points inside the loop:
+
+```
+ mlfunc @reduced_domain_example(... %N) {
+ d0/d1 = mlspace
+ for S1(d0), S2(d0) {
+ for S1(d1), S2(d1) {
+ %tmp = call @S1(%X, d0, d1) ;; S1
+ domain: (0 <= d0 < %N), (0 <= d1 < %N)
+
+ call @S2(%tmp, %A, d0, d1) ;; S2
+ domain: (10 <= d0 < %N-10), (10 <= d1 < %N-10)
+ }
+ }
+ }
+```
+
+It also allows schedule remapping within the instruction, like this example that
+introduces a diagonal skew through a simple change to the schedules of the two
+instructions:
+
+```
+ mlfunc @skewed_domain_example(... %N) {
+ d0/d1 = mlspace
+ for S1(d0), S2(d0+d1) {
+ for S1(d0+d1), S2(d1) {
+ %tmp = call @S1(%X, d0, d1) ;; S1
+ domain: (0 <= d0 < %N), (0 <= d1 < %N)
+
+ call @S2(%tmp, %A, d0, d1) ;; S2
+ domain: (0 <= d0 < %N), (0 <= d1 < %N)
+ }
+ }
+ }
+```
+
+This form has great power, and the polyhedral code generator (which lowers from
+an mlfunc to a cfgfunc representation) handles this power, so passes that
+introduce loop transformations don't have to explicitly manipulate the looping
+structure.
+
+## Proposal: Simplified Polyhedral Form
+
+This document proposes and explores the idea of going one step further, moving
+all of the domain and schedule information into the "schedule tree". In this
+form, we would have a representation where all instructions inside of a given
+for-loop are known to have the same domain, which is maintained by the loop. In
+the simplified form, we also have an "if" instruction that takes an affine
+condition.
+
+Our simple example above would be represented as:
+
+```mlir
+ mlfunc @simple_example(... %N) {
+ affine.for %i = 0 ... %N step 1 {
+ affine.for %j = 0 ... %N step 1 {
+ // identity noop in this case, but can exist in general.
+ %0,%1 = affine.apply #57(%i, %j)
+
+ %tmp = call @S1(%X, %0, %1)
+
+ call @S2(%tmp, %A, %0, %1)
+
+ call @S3(%tmp, %B, %0, %1)
+ }
+ }
+ }
+```
+
+The example with the reduced domain would be represented with an if instruction:
+
+```mlir
+ mlfunc @reduced_domain_example(... %N) {
+ affine.for %i = 0 ... %N step 1 {
+ affine.for %j = 0 ... %N step 1 {
+ // identity noop in this case, but can exist in general.
+ %0,%1 = affine.apply #57(%i, %j)
+
+ %tmp = call @S1(%X, %0, %1)
+
+ if (10 <= %i < %N-10), (10 <= %j < %N-10) {
+
+ %2,%3 = affine.apply #57(%i, %j) // identity noop in this case
+
+ call @S2(%tmp, %A, %2, %3)
+ }
+ }
+ }
+ }
+```
+
+These IRs represent exactly the same information, and use a similar information
+density. The 'traditional' form introduces an extra level of abstraction
+(schedules and domains) that make it easy to transform instructions at the
+expense of making it difficult to reason about how those instructions will come
+out after code generation. With the simplified form, transformations have to do
+parts of code generation inline with their transformation: instead of simply
+changing a schedule to **(i+j, j)** to get skewing, you'd have to generate this
+code explicitly (potentially implemented by making polyhedral codegen a library
+that transformations call into):
+
+```mlir
+mlfunc @skewed_domain_example(... %N) {
+ affine.for %t1 = 0 ... 2*N-2 step 1 {
+ affine.for %t2 = max(0, t1-N+1) ... min(N, t1) step 1 {
+ (%i, %j) = (%t1-%t2, %t2)
+ ...
+ }
+ }
+}
+```
+
+## Evaluation
+
+Both of these forms are capable of expressing the same class of computation:
+multidimensional loop nests with affine loop bounds and affine memory
+references. That said, they pose very different tradeoffs in other ways.
+
+### Commonality: can express same computation
+
+Both of these can express the same sorts of computation, e.g. kernels written in
+one form are representable in the other form in all cases.
+
+### Commonality: dependence analysis
+
+These representations both use affine functions for data layout mapping and
+access subscripts, and dependence analysis works the same way.
+
+### Commonality: difficulty of determining optimal transformation series
+
+One major challenge in optimizing this sort of code is choosing the ordering
+and behavior of the various loop transformations that get applied. There are
+non-local effects of every decision, and neither representation inherently
+helps solve this hard problem.
+
+### Commonality: compactness of IR
+
+In the cases that are most relevant to us (hyper-rectangular spaces), these forms
+are directly equivalent: a traditional instruction with a limited domain (e.g.
+the "reduced_domain_example" above) ends up having one level of ML 'if' inside
+its loops. The simplified form pays for this by eliminating schedules and
+domains from the IR. Both forms allow code duplication to reduce dynamic
+branches in the IR: the traditional approach allows instruction splitting, the
+simplified form supports instruction duplication.
+
+It is important to point out that the traditional form wins on compactness in
+the extreme cases, e.g. loop skewing. These cases will be rare in practice for
+our workloads, and are exactly the cases where downstream transformations want
+to be explicit about what they are doing.
+
+### Simplicity of code generation
+
+A key final stage of an mlfunc is its conversion to a CFG function, which is
+required as part of lowering to the target machine. The simplified form has a
+clear advantage here: the IR has a direct correspondence to the structure of the
+generated code.
+
+In contrast, the traditional form has significant complexity in the lowering
+process to a CFG function, because the detail that is not present in the IR
+must be materialized during code generation. Code generation from ISL shows
+that this is possible, but it is a non-trivial transformation.
+
+### Ease of transformation
+
+An advantage of the traditional form is that certain transformations are easier
+to perform on it: skewing and tiling are just transformations on the schedule
+of the instructions in question, and don't require changing the loop structure.
+
+In practice, the simplified form requires moving the complexity of code
+generation into the transformations themselves - this is sometimes trivial,
+sometimes involved. The author believes that this should be possible by making
+the code generation algorithms themselves be library functions that
+transformations call into, instead of an opaque block that happens at the end of
+the mlfunc processing.
+
+Also, the sorts of transformations performed today by XLA (including tiling,
+padding, unrolling, and other rectangular transformations) should be easy enough
+to implement on either representation. The only cases that are a challenge are
+more advanced cases like skewing, e.g. for DMA data movement generation.
+
+### Ease of analysis: Cost models
+
+The simplified form is much easier for analyses and transformations to build
+cost models for (e.g. answering the question of "how much code bloat will be
+caused by unrolling a loop at this level?"), because it is easier to predict
+what target code will be generated. With the traditional form, these analyses
+will have to anticipate what polyhedral codegen will do to a set of instructions
+under consideration: something that is non-trivial in the interesting cases in
+question (see "Cost of code generation").
+
+### Cost of code generation
+
+State of the art polyhedral code generation is
+[expensive and complicated](https://lirias.kuleuven.be/bitstream/123456789/497238/1/toplas-astgen.pdf),
+sometimes exponential time complexity. We expect that most machine learning
+workloads will be hyper-rectangular, and thus it should be easy to specialize in
+important cases. That said, the traditional polyhedral representation makes it
+very easy to introduce complicated and expensive schedules, and provides no way
+to understand and project a cost model for using them. All downstream clients of
+the IR need to be prepared to handle the full generality of IR that may come to
+them.
+
+The simplified form defines this away: the concepts in the IR remain simple, and
+the code much more directly reflects the cost model for lowering to CFG
+functions and machine code. This is expected to be very important in the late
+stages of a code generator for an accelerator.
+
+### SSA in ML Functions
+
+We agree already that values defined in an mlfunc can include scalar values and
+they are defined based on traditional dominance. In the simplified form, this is
+very simple: arguments and induction variables defined in for-loops are live
+inside their lexical body, and linear series of instructions have the same "top
+down" dominance relation that a basic block does.
+
+In the traditional form though, this is not the case: it seems that a lot of
+knowledge about how codegen will emit the code is necessary to determine if SSA
+form is correct or not. For example, this is invalid code:
+
+```
+ %tmp = call @S1(%X, %0, %1)
+ domain: (10 <= %i < %N), (0 <= %j < %N)
+ schedule: (i, j)
+
+ call @S2(%tmp, %A, %0, %1)
+ domain: (0 <= %i < %N), (0 <= %j < %N)
+ schedule: (i, j)
+```
+
+Because `%tmp` isn't defined on some iterations of the `%i` loop.
+
+This matters because it makes the verifier more complicated, but more
+significantly, it means that load promotion and other optimizations that will
+produce SSA form will need to be aware of this and be able to model what codegen
+does.
+
+An emergent property of this that we discussed recently is that PHI nodes in
+mlfuncs (if we support them) will also have to have domains.
+
+### Lack of redundancy in IR
+
+The traditional form has multiple encodings for the same sorts of behavior: you
+end up having bits on `affine.for` loops to specify whether codegen should use
+"atomic/separate" policies, unroll loops, etc. Instructions can be split, or
+can generate multiple copies of themselves because of overlapping domains,
+etc.
+
+This is a problem for analyses and cost models, because they each have to reason
+about these additional forms in the IR.
+
+### Suitability to purpose: lowering to machine code
+
+One of the main drivers for this work is lowering to low-level accelerator code,
+including two-dimensional vectorization, insertion of DMAs, and other
+utilization of the matrix accelerator units. In the author's opinion, the extra
+compactness of the traditional form is a negative for this purpose: reasoning
+about the generated machine code requires understanding the mapping from the
+mlfunc to the lowered code, which means understanding what code generation
+will do.
+
+In the simplified form, the effect of "code generation" is always obvious from
+the IR itself, which should make it easier to perform vectorization to target
+instructions and other analyses we need to perform.
+
+## Third Alternative: two different levels of mlfunc
+
+One hybrid alternative is to support both the traditional and simplified forms
+of mlfunc in our IR.
+
+The stages could look like this, for example:
+
+1. Early performance transformations could be done on the traditional form.
+1. Partial code generation lowers to the simplified form.
+1. Target-specific lowering phases for tiling, vectorization, and other 2D
+   transforms that don't benefit much from the traditional form could be run.
+1. Final codegen to a cfg func can be done when all of the instructions are
+ replaced with ones valid on the target.
+
+While this is possible, it isn't clear what would justify the complexity of this
+approach. Unless there is a super compelling reason for this, it would be nice
+to not do this. **Update:** we discussed this as a design team and agreed that
+this wouldn't be a good way to go.
diff --git a/mlir/docs/TestingGuide.md b/mlir/docs/TestingGuide.md
new file mode 100644
index 00000000000..723b78bf0f5
--- /dev/null
+++ b/mlir/docs/TestingGuide.md
@@ -0,0 +1,171 @@
+# Testing Guide
+
+Testing is an integral part of any software infrastructure. In general, all
+commits to the MLIR repository should include an accompanying test of some form.
+Commits that include no functional changes, such as API changes like symbol
+renaming, should be tagged with NFC (no functional changes). This signals to
+the reviewer why the change doesn't/shouldn't include a test.
+
+MLIR generally separates testing into two main categories, [Check](#check-tests)
+tests and [Unit](#unit-tests) tests.
+
+## Check tests
+
+Check tests are tests that verify that some set of string tags appear in the
+output of some program. These tests generally encompass anything related to the
+state of the IR (and more): analysis, parsing, transformation, verification,
+etc. They are written utilizing several different tools:
+
+### FileCheck tests
+
+[FileCheck](https://llvm.org/docs/CommandGuide/FileCheck.html) is a utility tool
+that "reads two files (one from standard input, and one specified on the command
+line) and uses one to verify the other." Essentially, one file contains a set of
+tags that are expected to appear in the output file. MLIR utilizes FileCheck, in
+combination with [lit](https://llvm.org/docs/CommandGuide/lit.html), to verify
+different aspects of the IR - such as the output of a transformation pass.
+
+An example FileCheck test is shown below:
+
+```mlir
+// RUN: mlir-opt %s -cse | FileCheck %s
+
+// CHECK-LABEL: func @simple_constant
+func @simple_constant() -> (i32, i32) {
+ // CHECK-NEXT: %[[RESULT:.*]] = constant 1
+ // CHECK-NEXT: return %[[RESULT]], %[[RESULT]]
+
+ %0 = constant 1 : i32
+ %1 = constant 1 : i32
+ return %0, %1 : i32, i32
+}
+```
+
+The above test checks that, after running Common Sub-expression Elimination
+(CSE), only one constant remains in the IR.
+
+#### FileCheck best practices
+
+FileCheck is an extremely useful utility; it allows for easily matching various
+parts of the output. This ease of use means that it becomes easy to write
+brittle tests that are essentially `diff` tests. FileCheck tests should be as
+self-contained as possible and focus on testing the minimal set of
+functionalities needed. Let's see an example:
+
+```mlir
+// RUN: mlir-opt %s -cse | FileCheck %s
+
+// CHECK-LABEL: func @simple_constant() -> (i32, i32)
+func @simple_constant() -> (i32, i32) {
+ // CHECK-NEXT: %result = constant 1 : i32
+ // CHECK-NEXT: return %result, %result : i32, i32
+ // CHECK-NEXT: }
+
+ %0 = constant 1 : i32
+ %1 = constant 1 : i32
+ return %0, %1 : i32, i32
+}
+```
+
+The above example is another way to write the original example shown in the main
+[FileCheck tests](#filecheck-tests) section. There are a few problems with this
+test; below is a breakdown of the no-nos of this test to specifically highlight
+best practices.
+
+* Tests should be self-contained.
+
+This means that tests should not test lines or sections outside of what is
+intended. In the above example, we see lines such as `CHECK-NEXT: }`. This line
+in particular is testing pieces of the Parser/Printer of FuncOp, which is
+outside of the realm of concern for the CSE pass. This line should be removed.
+
+* Tests should be minimal, and only check what is absolutely necessary.
+
+This means that anything in the output that is not core to the functionality
+that you are testing should *not* be present in a CHECK line. This is a separate
+bullet just to highlight the importance of it, especially when checking against
+IR output.
+
+If we naively remove the unrelated `CHECK` lines in our source file, we may end
+up with:
+
+```mlir
+// CHECK-LABEL: func @simple_constant
+func @simple_constant() -> (i32, i32) {
+ // CHECK-NEXT: %result = constant 1 : i32
+ // CHECK-NEXT: return %result, %result : i32, i32
+
+ %0 = constant 1 : i32
+ %1 = constant 1 : i32
+ return %0, %1 : i32, i32
+}
+```
+
+It may seem like this is a minimal test case, but it still checks several
+aspects of the output that are unrelated to the CSE transformation. Namely the
+result types of the `constant` and `return` operations, as well as the actual SSA
+value names that are produced. FileCheck `CHECK` lines may contain
+[regex statements](https://llvm.org/docs/CommandGuide/FileCheck.html#filecheck-regex-matching-syntax)
+as well as named
+[string substitution blocks](https://llvm.org/docs/CommandGuide/FileCheck.html#filecheck-string-substitution-blocks).
+Utilizing the above, we end up with the example shown in the main
+[FileCheck tests](#filecheck-tests) section.
+
+```mlir
+// CHECK-LABEL: func @simple_constant
+func @simple_constant() -> (i32, i32) {
+ /// Here we use a substitution variable as the output of the constant is
+ /// useful for the test, but we omit as much as possible of everything else.
+ // CHECK-NEXT: %[[RESULT:.*]] = constant 1
+ // CHECK-NEXT: return %[[RESULT]], %[[RESULT]]
+
+ %0 = constant 1 : i32
+ %1 = constant 1 : i32
+ return %0, %1 : i32, i32
+}
+```
+
+### Diagnostic verification tests
+
+MLIR provides rich source location tracking that can be used to emit errors,
+warnings, etc. easily from anywhere throughout the codebase. Certain classes of
+tests are written to check that certain diagnostics are emitted for a given
+input program, such as an MLIR file. These tests are useful in that they allow
+checking specific invariants of the IR without transforming or changing
+anything. Some examples of tests in this category are: those that verify
+invariants of operations, or check the expected results of an analysis.
+Diagnostic verification tests are written utilizing the
+[source manager verifier handler](Diagnostics.md#sourcemgr-diagnostic-verifier-handler),
+accessible via the `verify-diagnostics` flag in mlir-opt.
+
+An example .mlir test running under `mlir-opt` is shown below:
+
+```mlir
+// RUN: mlir-opt %s -split-input-file -verify-diagnostics
+
+// Expect an error on the same line.
+func @bad_branch() {
+ br ^missing // expected-error {{reference to an undefined block}}
+}
+
+// -----
+
+// Expect an error on an adjacent line.
+func @foo(%a : f32) {
+ // expected-error@+1 {{unknown comparison predicate "foo"}}
+ %result = cmpf "foo", %a, %a : f32
+ return
+}
+```
+
+## Unit tests
+
+Unit tests are written using
+[Google Test](https://github.com/google/googletest/blob/master/googletest/docs/primer.md)
+and are located in the `unittests/` directory. Tests of this form *should* be
+limited to API tests that cannot be reasonably written as [Check](#check-tests)
+tests, e.g. those for data structures. It is important to keep in mind that the
+C++ APIs are not stable and evolve over time. As such, directly testing the C++
+IR interfaces makes the tests more fragile. This makes future API refactorings,
+which may happen frequently, much more cumbersome as the number of tests
+scales.
diff --git a/mlir/docs/Traits.md b/mlir/docs/Traits.md
new file mode 100644
index 00000000000..b233f9bef66
--- /dev/null
+++ b/mlir/docs/Traits.md
@@ -0,0 +1,246 @@
+# Introduction to MLIR Operation Traits
+
+[TOC]
+
+MLIR allows for a truly open operation ecosystem, as any dialect may define
+operations that suit a specific level of abstraction. `Traits` are a mechanism
+in which to abstract implementation details and properties that are common
+across many different operations. `Traits` may be used to specify special
+properties and constraints of the operation, including whether the operation has
+side effects or whether its output has the same type as the input. Some examples
+of traits are `Commutative`, `SingleResult`, `Terminator`, etc. See the more
+[comprehensive list](#traits) below for more examples of what is possible.
+
+## Defining a Trait
+
+Traits may be defined in C++ by inheriting from the
+`OpTrait::TraitBase<ConcreteType, TraitType>` class. This base class takes as
+template parameters:
+
+* ConcreteType
+ - The concrete operation type that this trait was attached to.
+* TraitType
+ - The type of the trait class that is being defined, for use with the
+ [`Curiously Recurring Template Pattern`](https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern).
+
+A derived trait class is expected to take a single template parameter that
+corresponds to the `ConcreteType`. An example trait definition is shown below:
+
+```c++
+template <typename ConcreteType>
+class MyTrait : public OpTrait::TraitBase<ConcreteType, MyTrait> {
+};
+```
+
+Derived traits may also provide a `verifyTrait` hook that is called when
+verifying the concrete operation. The trait verifiers are currently always
+invoked before the main `Op::verify`.
+
+```c++
+template <typename ConcreteType>
+class MyTrait : public OpTrait::TraitBase<ConcreteType, MyTrait> {
+public:
+ /// Override the 'verifyTrait' hook to add additional verification on the
+ /// concrete operation.
+ static LogicalResult verifyTrait(Operation *op) {
+ // ...
+ }
+};
+```
+
+Note: It is generally good practice to define the implementation of the
+`verifyTrait` hook out-of-line as a free function when possible to avoid
+instantiating the implementation for every concrete operation type.
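+
+As a hedged illustration of that advice, the sketch below forwards the hook to
+a non-template free function. The `Operation`, `LogicalResult`, and `TraitBase`
+definitions here are simplified stand-ins, not the real MLIR types:
+
+```c++
+// Simplified stand-ins for the MLIR types; illustrative only.
+struct Operation { unsigned numResults; };
+struct LogicalResult { bool isFailure; };
+inline LogicalResult success() { return {false}; }
+inline LogicalResult failure() { return {true}; }
+
+namespace OpTrait {
+template <typename ConcreteType, template <typename> class TraitType>
+class TraitBase {};
+} // namespace OpTrait
+
+namespace detail {
+// Out-of-line implementation: not a template, so it is compiled only once
+// rather than once per concrete operation type that attaches MyTrait.
+inline LogicalResult verifyMyTrait(Operation *op) {
+  return op->numResults == 1 ? success() : failure();
+}
+} // namespace detail
+
+template <typename ConcreteType>
+class MyTrait : public OpTrait::TraitBase<ConcreteType, MyTrait> {
+public:
+  // The templated hook stays trivial; it just forwards.
+  static LogicalResult verifyTrait(Operation *op) {
+    return detail::verifyMyTrait(op);
+  }
+};
+```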
+
+### Parametric Traits
+
+The above demonstrates the definition of a simple self-contained trait. It is
+also often useful to provide some static parameters to the trait to control its
+behavior. Given that the definition of the trait class is rigid, i.e. we must
+have a single template argument for the concrete operation, the templates for
+the parameters will need to be split out. An example is shown below:
+
+```c++
+template <int Parameter>
+class MyParametricTrait {
+public:
+ template <typename ConcreteType>
+ class Impl : public OpTrait::TraitBase<ConcreteType, Impl> {
+ // Inside of 'Impl' we have full access to the template parameters
+ // specified above.
+ };
+};
+```
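+
+To make the mechanics concrete, here is a self-contained sketch (again with a
+stubbed-out `TraitBase`, not the real MLIR one) showing that `Impl` has full
+access to the enclosing `Parameter`:
+
+```c++
+namespace OpTrait {
+template <typename ConcreteType, template <typename> class TraitType>
+class TraitBase {};
+} // namespace OpTrait
+
+template <int Parameter>
+class MyParametricTrait {
+public:
+  template <typename ConcreteType>
+  class Impl : public OpTrait::TraitBase<ConcreteType, Impl> {
+  public:
+    // 'Impl' can freely use the enclosing template's parameters.
+    static constexpr int getParameter() { return Parameter; }
+  };
+};
+
+static_assert(MyParametricTrait<10>::Impl<int>::getParameter() == 10,
+              "the parameter is visible inside Impl");
+```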
+
+## Attaching a Trait
+
+Traits may be used when defining a derived operation type, by simply adding the
+name of the trait class to the `Op` class after the concrete operation type:
+
+```c++
+/// Here we define 'MyOp' along with the 'MyTrait' and 'MyParametricTrait'
+/// trait classes we defined previously.
+class MyOp : public Op<MyOp, MyTrait, MyParametricTrait<10>::Impl> {};
+```
+
+To use a trait in the [ODS](OpDefinitions.md) framework, we need to provide a
+definition of the trait class. This can be done using the `NativeOpTrait` and
+`ParamNativeOpTrait` classes. `ParamNativeOpTrait` provides a mechanism in which
+to specify arguments to a parametric trait class with an internal `Impl`.
+
+```tablegen
+// The argument is the c++ trait class name.
+def MyTrait : NativeOpTrait<"MyTrait">;
+
+// The first argument is the parent c++ class name. The second argument is a
+// string containing the parameter list.
+class MyParametricTrait<int prop>
+  : ParamNativeOpTrait<"MyParametricTrait", !cast<string>(!head(parameters))>;
+```
+
+These can then be used in the `traits` list of an op definition:
+
+```tablegen
+def OpWithInferTypeInterfaceOp : Op<...[MyTrait, MyParametricTrait<10>]> { ... }
+```
+
+See the documentation on [operation definitions](OpDefinitions.md) for more
+details.
+
+## Using a Trait
+
+Traits may be used to provide additional methods, static fields, or other
+information directly on the concrete operation. `Traits` internally become
+`Base` classes of the concrete operation, so all of these are directly
+accessible. To expose this information opaquely to transformations and analyses,
+[`interfaces`](Interfaces.md) may be used.
+
+To query if a specific operation contains a specific trait, the `hasTrait<>`
+method may be used. This takes as a template parameter the trait class, which is
+the same as the one passed when attaching the trait to an operation.
+
+```c++
+Operation *op = ...;
+if (op->hasTrait<MyTrait>() || op->hasTrait<MyParametricTrait<10>::Impl>())
+ ...;
+```
+
+## Trait List
+
+MLIR provides a suite of traits that provide various functionalities that are
+common across many different operations. Below is a list of some key traits that
+may be used directly by any dialect. The format of the header for each trait
+section goes as follows:
+
+* `Header`
+ - (`C++ class` -- `ODS class`(if applicable))
+
+### Broadcastable
+
+* `OpTrait::BroadcastableTwoOperandsOneResult` -- `Broadcastable`
+
+This trait provides the API for operations that are known to have
+[broadcast-compatible](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
+operand and result types. Specifically, starting from the most varying
+dimension, each dimension pair of the two operands' types should either be the
+same or one of them is one. Also, the result type should have the corresponding
+dimension equal to the larger one, if known. Shapes are checked partially if
+ranks or dimensions are not known. For example, an op with `tensor<?x2xf32>` and
+`tensor<2xf32>` as operand types and `tensor<3x2xf32>` as the result type is
+broadcast-compatible.
+
+This trait assumes the op has two operands and one result, and it asserts if
+this pre-condition is not satisfied.
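+
+The dimension-pairing rule can be modeled with a small standalone check. This
+is an illustrative sketch, not the actual MLIR implementation; using `-1` to
+encode an unknown (dynamic) dimension is an assumption made here:
+
+```c++
+#include <vector>
+
+// Returns true if the two shapes are broadcast-compatible. -1 encodes an
+// unknown dimension, which is only checked partially.
+inline bool broadcastCompatible(const std::vector<int> &lhs,
+                                const std::vector<int> &rhs) {
+  auto l = lhs.rbegin(), r = rhs.rbegin();
+  // Walk both shapes starting from the most varying (trailing) dimension.
+  for (; l != lhs.rend() && r != rhs.rend(); ++l, ++r) {
+    if (*l == -1 || *r == -1)
+      continue; // Unknown dimensions are accepted.
+    if (*l != *r && *l != 1 && *r != 1)
+      return false; // Dimensions must match, or one of them must be 1.
+  }
+  return true;
+}
+```
+
+For example, `{-1, 2}` and `{2}` are compatible (matching the
+`tensor<?x2xf32>`/`tensor<2xf32>` case above), while `{2, 3}` and `{3, 2}` are
+not, since the trailing dimensions 3 and 2 differ and neither is 1.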
+
+### Commutative
+
+* `OpTrait::IsCommutative` -- `Commutative`
+
+This trait adds the property that the operation is commutative, i.e. `X op Y ==
+Y op X`.
+
+### Function-Like
+
+* `OpTrait::FunctionLike`
+
+This trait provides APIs for operations that behave like functions. In
+particular:
+
+- Ops must be symbols, i.e. also have the `Symbol` trait;
+- Ops have a single region with multiple blocks that corresponds to the body
+ of the function;
+- the absence of a region corresponds to an external function;
+- arguments of the first block of the region are treated as function
+ arguments;
+- they can have argument and result attributes that are stored in dictionary
+ attributes on the operation itself.
+
+This trait does *NOT* provide type support for the functions, meaning that
+concrete Ops must handle the type of the declared or defined function.
+`getTypeAttrName()` is a convenience function that returns the name of the
+attribute that can be used to store the function type, but the trait makes no
+assumption based on it.
+
+### HasParent
+
+* `OpTrait::HasParent<typename ParentOpType>` -- `HasParent<string op>`
+
+This trait provides APIs and verifiers for operations that can only be nested
+within regions that are attached to operations of `ParentOpType`.
+
+### IsolatedFromAbove
+
+* `OpTrait::IsIsolatedFromAbove` -- `IsolatedFromAbove`
+
+This trait signals that the regions of an operation are known to be isolated
+from above. This trait asserts that the regions of an operation will not
+capture, or reference, SSA values defined above the region scope. This means
+that the following is invalid if `foo.region_op` is defined as
+`IsolatedFromAbove`:
+
+```mlir
+%result = constant 10 : i32
+foo.region_op {
+ foo.yield %result : i32
+}
+```
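+
+A valid counterpart, assuming the semantics of `foo.region_op` otherwise allow
+it, defines the value inside the region instead of capturing it:
+
+```mlir
+foo.region_op {
+  %result = constant 10 : i32
+  foo.yield %result : i32
+}
+```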
+
+This trait is an important structural property of the IR, and enables operations
+to have [passes](WritingAPass.md) scheduled under them.
+
+### NoSideEffect
+
+* `OpTrait::HasNoSideEffect` -- `NoSideEffect`
+
+This trait signifies that the operation is pure and has no visible side effects.
+
+### Single Block with Implicit Terminator
+
+* `OpTrait::SingleBlockImplicitTerminator<typename TerminatorOpType>` :
+ `SingleBlockImplicitTerminator<string op>`
+
+This trait provides APIs and verifiers for operations with regions that have a
+single block that must terminate with `TerminatorOpType`.
+
+### Symbol
+
+* `OpTrait::Symbol` -- `Symbol`
+
+This trait is used for operations that define a `Symbol`.
+
+TODO(riverriddle) Link to the proper document detailing the design of symbols.
+
+### SymbolTable
+
+* `OpTrait::SymbolTable` -- `SymbolTable`
+
+This trait is used for operations that define a `SymbolTable`.
+
+TODO(riverriddle) Link to the proper document detailing the design of symbols.
+
+### Terminator
+
+* `OpTrait::IsTerminator` -- `Terminator`
+
+This trait provides verification and functionality for operations that are known
+to be [terminators](LangRef.md#terminator-operations).
diff --git a/mlir/docs/Tutorials/Toy/Ch-1.md b/mlir/docs/Tutorials/Toy/Ch-1.md
new file mode 100644
index 00000000000..cb7f97cb3f6
--- /dev/null
+++ b/mlir/docs/Tutorials/Toy/Ch-1.md
@@ -0,0 +1,169 @@
+# Chapter 1: Toy Tutorial Introduction
+
+[TOC]
+
+This tutorial runs through the implementation of a basic toy language on top of
+MLIR. The goal of this tutorial is to introduce the concepts of MLIR; in
+particular, how [dialects](../../LangRef.md#dialects) can help easily support
+language specific constructs and transformations while still offering an easy
+path to lower to LLVM or other codegen infrastructure. This tutorial is based on
+the model of the
+[LLVM Kaleidoscope Tutorial](https://llvm.org/docs/tutorial/MyFirstLanguageFrontend/index.html).
+
+This tutorial assumes you have cloned and built MLIR; if you have not yet done
+so, see
+[Getting started with MLIR](https://github.com/tensorflow/mlir#getting-started-with-mlir).
+
+## The Chapters
+
+This tutorial is divided in the following chapters:
+
+- [Chapter #1](Ch-1.md): Introduction to the Toy language and the definition
+ of its AST.
+- [Chapter #2](Ch-2.md): Traversing the AST to emit a dialect in MLIR,
+ introducing base MLIR concepts. Here we show how to start attaching
+ semantics to our custom operations in MLIR.
+- [Chapter #3](Ch-3.md): High-level language-specific optimization using
+ pattern rewriting system.
+- [Chapter #4](Ch-4.md): Writing generic dialect-independent transformations
+ with Interfaces. Here we will show how to plug dialect specific information
+ into generic transformations like shape inference and inlining.
+- [Chapter #5](Ch-5.md): Partially lowering to lower-level dialects. We'll
+    convert some of our high-level language-specific semantics towards a
+    generic affine-oriented dialect for optimization.
+- [Chapter #6](Ch-6.md): Lowering to LLVM and code generation. Here we'll
+ target LLVM IR for code generation, and detail more of the lowering
+ framework.
+- [Chapter #7](Ch-7.md): Extending Toy: Adding support for a composite type.
+ We'll demonstrate how to add a custom type to MLIR, and how it fits in the
+ existing pipeline.
+
+## The Language
+
+This tutorial will be illustrated with a toy language that we’ll call “Toy”
+(naming is hard...). Toy is a tensor-based language that allows you to define
+functions, perform some math computation, and print results.
+
+Given that we want to keep things simple, the codegen will be limited to tensors
+of rank <= 2, and the only datatype in Toy is a 64-bit floating point type (aka
+‘double’ in C parlance). As such, all values are implicitly double precision,
+`Values` are immutable (i.e. every operation returns a newly allocated value),
+and deallocation is automatically managed. But enough with the long description;
+nothing is better than walking through an example to get a better understanding:
+
+```Toy {.toy}
+def main() {
+ # Define a variable `a` with shape <2, 3>, initialized with the literal value.
+ # The shape is inferred from the supplied literal.
+ var a = [[1, 2, 3], [4, 5, 6]];
+
+ # b is identical to a, the literal tensor is implicitly reshaped: defining new
+ # variables is the way to reshape tensors (element count must match).
+ var b<2, 3> = [1, 2, 3, 4, 5, 6];
+
+  # transpose() and print() are the only builtin functions; the following will
+  # transpose a and b and perform an element-wise multiplication before
+  # printing the result.
+ print(transpose(a) * transpose(b));
+}
+```
+
+Type checking is statically performed through type inference; the language only
+requires type declarations to specify tensor shapes when needed. Functions are
+generic: their parameters are unranked (in other words, we know these are
+tensors, but we don't know their dimensions). They are specialized for every
+newly discovered signature at call sites. Let's revisit the previous example by
+adding a user-defined function:
+
+```Toy {.toy}
+# User defined generic function that operates on unknown shaped arguments.
+def multiply_transpose(a, b) {
+ return transpose(a) * transpose(b);
+}
+
+def main() {
+ # Define a variable `a` with shape <2, 3>, initialized with the literal value.
+ var a = [[1, 2, 3], [4, 5, 6]];
+ var b<2, 3> = [1, 2, 3, 4, 5, 6];
+
+ # This call will specialize `multiply_transpose` with <2, 3> for both
+ # arguments and deduce a return type of <3, 2> in initialization of `c`.
+ var c = multiply_transpose(a, b);
+
+ # A second call to `multiply_transpose` with <2, 3> for both arguments will
+ # reuse the previously specialized and inferred version and return <3, 2>.
+ var d = multiply_transpose(b, a);
+
+ # A new call with <3, 2> (instead of <2, 3>) for both dimensions will
+ # trigger another specialization of `multiply_transpose`.
+ var e = multiply_transpose(c, d);
+
+ # Finally, calling into `multiply_transpose` with incompatible shape will
+ # trigger a shape inference error.
+ var f = multiply_transpose(transpose(a), c);
+}
+```
+
+## The AST
+
+The AST from the above code is fairly straightforward; here is a dump of it:
+
+```
+Module:
+ Function
+ Proto 'multiply_transpose' @test/ast.toy:5:1'
+ Args: [a, b]
+ Block {
+ Return
+ BinOp: * @test/ast.toy:6:25
+ Call 'transpose' [ @test/ast.toy:6:10
+ var: a @test/ast.toy:6:20
+ ]
+ Call 'transpose' [ @test/ast.toy:6:25
+ var: b @test/ast.toy:6:35
+ ]
+ } // Block
+ Function
+ Proto 'main' @test/ast.toy:9:1'
+ Args: []
+ Block {
+ VarDecl a<> @test/ast.toy:11:3
+ Literal: <2, 3>[<3>[1.000000e+00, 2.000000e+00, 3.000000e+00], <3>[4.000000e+00, 5.000000e+00, 6.000000e+00]] @test/ast.toy:11:17
+ VarDecl b<2, 3> @test/ast.toy:12:3
+ Literal: <6>[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00, 5.000000e+00, 6.000000e+00] @test/ast.toy:12:17
+ VarDecl c<> @test/ast.toy:15:3
+ Call 'multiply_transpose' [ @test/ast.toy:15:11
+ var: a @test/ast.toy:15:30
+ var: b @test/ast.toy:15:33
+ ]
+ VarDecl d<> @test/ast.toy:18:3
+ Call 'multiply_transpose' [ @test/ast.toy:18:11
+ var: b @test/ast.toy:18:30
+ var: a @test/ast.toy:18:33
+ ]
+ VarDecl e<> @test/ast.toy:21:3
+ Call 'multiply_transpose' [ @test/ast.toy:21:11
+ var: b @test/ast.toy:21:30
+ var: c @test/ast.toy:21:33
+ ]
+ VarDecl f<> @test/ast.toy:24:3
+ Call 'multiply_transpose' [ @test/ast.toy:24:11
+ Call 'transpose' [ @test/ast.toy:24:30
+ var: a @test/ast.toy:24:40
+ ]
+ var: c @test/ast.toy:24:44
+ ]
+ } // Block
+```
+
+You can reproduce this result and play with the example in the
+`examples/toy/Ch1/` directory; try running `path/to/BUILD/bin/toyc-ch1
+test/Examples/Toy/Ch1/ast.toy -emit=ast`.
+
+The code for the lexer is fairly straightforward; it is all in a single header:
+`examples/toy/Ch1/include/toy/Lexer.h`. The parser can be found in
+`examples/toy/Ch1/include/toy/Parser.h`; it is a recursive descent parser. If
+you are not familiar with such a Lexer/Parser, these are very similar to their
+LLVM Kaleidoscope equivalents, which are detailed in the first two chapters of the
+[Kaleidoscope Tutorial](https://llvm.org/docs/tutorial/MyFirstLanguageFrontend/LangImpl02.html).
+
+The [next chapter](Ch-2.md) will demonstrate how to convert this AST into MLIR.
diff --git a/mlir/docs/Tutorials/Toy/Ch-2.md b/mlir/docs/Tutorials/Toy/Ch-2.md
new file mode 100755
index 00000000000..ce46788f4ae
--- /dev/null
+++ b/mlir/docs/Tutorials/Toy/Ch-2.md
@@ -0,0 +1,577 @@
+# Chapter 2: Emitting Basic MLIR
+
+[TOC]
+
+Now that we're familiar with our language and the AST, let's see how MLIR can
+help to compile Toy.
+
+## Introduction: Multi-Level Intermediate Representation
+
+Other compilers, like LLVM (see the
+[Kaleidoscope tutorial](https://llvm.org/docs/tutorial/MyFirstLanguageFrontend/index.html)),
+offer a fixed set of predefined types and (usually *low-level* / RISC-like)
+instructions. It is up to the frontend for a given language to perform any
+language-specific type-checking, analysis, or transformation before emitting
+LLVM IR. For example, Clang will use its AST to perform not only static analysis
+but also transformations, such as C++ template instantiation through AST cloning
+and rewriting. Finally, languages with constructs at a higher level than C/C++
+may require non-trivial lowering from their AST to generate LLVM IR.
+
+As a consequence, multiple frontends end up reimplementing significant pieces
+of infrastructure to support these analyses and transformations. MLIR
+addresses this issue by being designed for extensibility. As such, there are few
+pre-defined instructions (*operations* in MLIR terminology) or types.
+
+## Interfacing with MLIR
+
+[Language reference](../../LangRef.md)
+
+MLIR is designed to be a completely extensible infrastructure; there is no
+closed set of attributes (think: constant metadata), operations, or types. MLIR
+supports this extensibility with the concept of
+[Dialects](../../LangRef.md#dialects). Dialects provide a grouping mechanism for
+abstraction under a unique `namespace`.
+
+In MLIR, [`Operations`](../../LangRef.md#operations) are the core unit of
+abstraction and computation, similar in many ways to LLVM instructions.
+Operations can have application-specific semantics and can be used to represent
+all of the core IR structures in LLVM: instructions, globals (like functions),
+modules, etc.
+
+Here is the MLIR assembly for the Toy `transpose` operations:
+
+```mlir
+%t_tensor = "toy.transpose"(%tensor) {inplace = true} : (tensor<2x3xf64>) -> tensor<3x2xf64> loc("example/file/path":12:1)
+```
+
+Let's break down the anatomy of this MLIR operation:
+
+- `%t_tensor`
+
+ * The name given to the result defined by this operation (which includes
+ [a prefixed sigil to avoid collisions](../../LangRef.md#identifiers-and-keywords)).
+ An operation may define zero or more results (in the context of Toy, we
+ will limit ourselves to single-result operations), which are SSA values.
+ The name is used during parsing but is not persistent (e.g., it is not
+ tracked in the in-memory representation of the SSA value).
+
+- `"toy.transpose"`
+
+ * The name of the operation. It is expected to be a unique string, with
+ the namespace of the dialect prefixed before the "`.`". This can be read
+ as the `transpose` operation in the `toy` dialect.
+
+- `(%tensor)`
+
+ * A list of zero or more input operands (or arguments), which are SSA
+ values defined by other operations or referring to block arguments.
+
+- `{ inplace = true }`
+
+ * A dictionary of zero or more attributes, which are special operands that
+ are always constant. Here we define a boolean attribute named 'inplace'
+ that has a constant value of true.
+
+- `(tensor<2x3xf64>) -> tensor<3x2xf64>`
+
+ * This refers to the type of the operation in a functional form, spelling
+ the types of the arguments in parentheses and the type of the return
+ values afterward.
+
+- `loc("example/file/path":12:1)`
+
+ * This is the location in the source code from which this operation
+ originated.
+
+Shown here is the general form of an operation. As described above, the set of
+operations in MLIR is extensible. This means that the infrastructure must be
+able to opaquely reason about the structure of an operation. This is done by
+boiling down the composition of an operation into discrete pieces:
+
+- A name for the operation.
+- A list of SSA operand values.
+- A list of [attributes](../../LangRef.md#attributes).
+- A list of [types](../../LangRef.md#type-system) for result values.
+- A [source location](../../Diagnostics.md#source-locations) for debugging
+ purposes.
+- A list of successors [blocks](../../LangRef.md#blocks) (for branches,
+ mostly).
+- A list of [regions](../../LangRef.md#regions) (for structural operations
+ like functions).
+
+In MLIR, every operation has a mandatory source location associated with it.
+Contrary to LLVM, where debug info locations are metadata and can be dropped, in
+MLIR, the location is a core requirement, and APIs depend on and manipulate it.
+Dropping a location is thus an explicit choice which cannot happen by mistake.
+
+To provide an illustration: if a transformation replaces an operation with
+another, that new operation must still have a location attached. This makes it
+possible to track where that operation came from.
+
+It's worth noting that the `mlir-opt` tool, a tool for testing compiler
+passes, does not include locations in the output by default. The
+`-mlir-print-debuginfo` flag specifies to include locations. (Run `mlir-opt
+--help` for more options.)
+
+### Opaque API
+
+MLIR is designed to be a completely extensible system, and as such, the
+infrastructure has the capability to opaquely represent all of its core
+components: attributes, operations, types, etc. This allows MLIR to parse,
+represent, and [round-trip](../../Glossary.md#round-trip) any valid IR. For
+example, we could place our Toy operation from above into an `.mlir` file and
+round-trip through *mlir-opt* without registering anything:
+
+```mlir
+func @toy_func(%tensor: tensor<2x3xf64>) -> tensor<3x2xf64> {
+ %t_tensor = "toy.transpose"(%tensor) { inplace = true } : (tensor<2x3xf64>) -> tensor<3x2xf64>
+ return %t_tensor : tensor<3x2xf64>
+}
+```
+
+In the case of unregistered attributes, operations, and types, MLIR will
+enforce some structural constraints (SSA, block termination, etc.), but
+otherwise they are completely opaque. This can be useful for bootstrapping
+purposes, but it is generally advised against. Opaque operations must be treated
+conservatively by transformations and analyses, and they are much harder to
+construct and manipulate.
+
+This handling can be observed by crafting what should be an invalid IR for Toy
+and seeing it round-trip without tripping the verifier:
+
+```mlir
+// RUN: toyc %s -emit=mlir
+
+func @main() {
+ %0 = "toy.print"() : () -> tensor<2x3xf64>
+}
+```
+
+There are multiple problems here: the `toy.print` operation is not a terminator;
+it should take an operand; and it shouldn't return any values. In the next
+section, we will register our dialect and operations with MLIR, plug into the
+verifier, and add nicer APIs to manipulate our operations.
+
+## Defining a Toy Dialect
+
+To effectively interface with MLIR, we will define a new Toy dialect. This
+dialect will properly model the semantics of the Toy language, as well as
+provide an easy avenue for high-level analysis and transformation.
+
+```c++
+/// This is the definition of the Toy dialect. A dialect inherits from
+/// mlir::Dialect and registers custom attributes, operations, and types (in its
+/// constructor). It can also override some general behavior exposed via virtual
+/// methods, which will be demonstrated in later chapters of the tutorial.
+class ToyDialect : public mlir::Dialect {
+ public:
+ explicit ToyDialect(mlir::MLIRContext *ctx);
+
+ /// Provide a utility accessor to the dialect namespace. This is used by
+ /// several utilities.
+ static llvm::StringRef getDialectNamespace() { return "toy"; }
+};
+```
+
+The dialect can now be registered in the global registry:
+
+```c++
+ mlir::registerDialect<ToyDialect>();
+```
+
+Any new `MLIRContext` created from now on will contain an instance of the Toy
+dialect and invoke specific hooks for things like parsing attributes and types.
+
+## Defining Toy Operations
+
+Now that we have a `Toy` dialect, we can start registering operations. This will
+allow for providing semantic information that the rest of the system can hook
+into. Let's walk through the creation of the `toy.constant` operation:
+
+```mlir
+ %4 = "toy.constant"() {value = dense<1.0> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
+```
+
+This operation takes zero operands, a
+[dense elements](../../LangRef.md#dense-elements-attribute) attribute named
+`value`, and returns a single result of
+[TensorType](../../LangRef.md#tensor-type). An operation inherits from the
+[CRTP](https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern)
+`mlir::Op` class which also takes some optional [*traits*](../../Traits.md) to
+customize its behavior. These traits may provide additional accessors,
+verification, etc.
+
+```c++
+class ConstantOp : public mlir::Op<ConstantOp,
+ /// The ConstantOp takes zero inputs.
+ mlir::OpTrait::ZeroOperands,
+ /// The ConstantOp returns a single result.
+ mlir::OpTrait::OneResult,
+ /// The ConstantOp is pure and has no visible side-effects.
+ mlir::OpTrait::HasNoSideEffect> {
+
+ public:
+ /// Inherit the constructors from the base Op class.
+ using Op::Op;
+
+ /// Provide the unique name for this operation. MLIR will use this to register
+ /// the operation and uniquely identify it throughout the system.
+ static llvm::StringRef getOperationName() { return "toy.constant"; }
+
+ /// Return the value of the constant by fetching it from the attribute.
+ mlir::DenseElementsAttr getValue();
+
+ /// Operations can provide additional verification beyond the traits they
+ /// define. Here we will ensure that the specific invariants of the constant
+ /// operation are upheld, for example the result type must be of TensorType.
+ LogicalResult verify();
+
+ /// Provide an interface to build this operation from a set of input values.
+ /// This interface is used by the builder to allow for easily generating
+ /// instances of this operation:
+ /// mlir::OpBuilder::create<ConstantOp>(...)
+ /// This method populates the given `state` that MLIR uses to create
+ /// operations. This state is a collection of all of the discrete elements
+ /// that an operation may contain.
+ /// Build a constant with the given return type and `value` attribute.
+ static void build(mlir::Builder *builder, mlir::OperationState &state,
+ mlir::Type result, mlir::DenseElementsAttr value);
+ /// Build a constant and reuse the type from the given 'value'.
+ static void build(mlir::Builder *builder, mlir::OperationState &state,
+ mlir::DenseElementsAttr value);
+ /// Build a constant by broadcasting the given 'value'.
+ static void build(mlir::Builder *builder, mlir::OperationState &state,
+ double value);
+};
+```
+
+and we register this operation in the `ToyDialect` constructor:
+
+```c++
+ToyDialect::ToyDialect(mlir::MLIRContext *ctx)
+ : mlir::Dialect(getDialectNamespace(), ctx) {
+ addOperations<ConstantOp>();
+}
+```
+
+### Op vs Operation: Using MLIR Operations
+
+Now that we have defined an operation, we will want to access and transform it.
+In MLIR, there are two main classes related to operations: `Operation` and `Op`.
+`Operation` is the actual opaque instance of the operation and represents the
+general API into an operation instance. An `Op` is the base class of a derived
+operation, like `ConstantOp`, and acts as a smart-pointer wrapper around an
+`Operation*`. This means that when we define our Toy operations, we are actually
+providing a clean interface for building and interfacing with the `Operation`
+class; this is why our `ConstantOp` defines no class fields. Therefore, we
+always pass these classes around by value, instead of by reference or pointer
+(*passing by value* is a common idiom and applies similarly to attributes,
+types, etc). We can always get an instance of our toy operation by using LLVM's
+casting infrastructure:
+
+```c++
+void processConstantOp(mlir::Operation *operation) {
+ ConstantOp op = llvm::dyn_cast<ConstantOp>(operation);
+
+ // This operation is not an instance of `ConstantOp`.
+ if (!op)
+ return;
+
+ // Get the internal operation instance back.
+ mlir::Operation *internalOperation = op.getOperation();
+ assert(internalOperation == operation &&
+ "these operation instances are the same");
+}
+```
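+
+As a rough, self-contained C++ sketch (hypothetical `Fake*` names, not MLIR's
+actual implementation), the `Op`/`Operation` relationship boils down to a
+value-semantic wrapper around a pointer:
+
+```c++
+#include <cassert>
+#include <string>
+
+// Stand-in for mlir::Operation: the single, opaque in-memory instance.
+struct FakeOperation {
+  std::string name;
+};
+
+// Stand-in for a derived Op such as ConstantOp: it stores only a pointer,
+// so it is cheap to copy and is passed around by value.
+class FakeConstantOp {
+public:
+  explicit FakeConstantOp(FakeOperation *op) : op(op) {}
+
+  // Mimics Op::getOperation(): recover the wrapped instance.
+  FakeOperation *getOperation() const { return op; }
+
+  // A null wrapper models a failed dyn_cast.
+  explicit operator bool() const { return op != nullptr; }
+
+private:
+  FakeOperation *op; // the only state, mirroring why ConstantOp has no fields
+};
+
+int main() {
+  FakeOperation raw{"toy.constant"};
+  FakeConstantOp op(&raw);
+  assert(op && op.getOperation() == &raw);
+  return 0;
+}
+```
+
+Because the wrapper carries no state of its own, copying it never duplicates
+the underlying operation, which is exactly what makes pass-by-value safe.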
+
+### Using the Operation Definition Specification (ODS) Framework
+
+In addition to specializing the `mlir::Op` C++ template, MLIR also supports
+defining operations in a declarative manner. This is achieved via the
+[Operation Definition Specification](../../OpDefinitions.md) framework. Facts
+regarding an operation are specified concisely into a TableGen record, which
+will be expanded into an equivalent `mlir::Op` C++ template specialization at
+compile time. Using the ODS framework is the recommended way to define
+operations in MLIR, given its simplicity, conciseness, and general stability in
+the face of C++ API changes.
+
+Let's see how to define the ODS equivalent of our `ConstantOp`:
+
+The first thing to do is to define a link to the Toy dialect that we defined in
+C++. This is used to link all of the operations that we will define to our
+dialect:
+
+```tablegen
+// Provide a definition of the 'toy' dialect in the ODS framework so that we
+// can define our operations.
+def Toy_Dialect : Dialect {
+ // The namespace of our dialect, this corresponds 1-1 with the string we
+ // provided in `ToyDialect::getDialectNamespace`.
+ let name = "toy";
+
+ // The C++ namespace that the dialect class definition resides in.
+ let cppNamespace = "toy";
+}
+```
+
+Now that we have defined a link to the Toy dialect, we can start defining
+operations. Operations in ODS are defined by inheriting from the `Op` class. To
+simplify our operation definitions, we will define a base class for operations
+in the Toy dialect.
+
+```tablegen
+// Base class for toy dialect operations. This operation inherits from the base
+// `Op` class in OpBase.td, and provides:
+// * The parent dialect of the operation.
+// * The mnemonic for the operation, or the name without the dialect prefix.
+// * A list of traits for the operation.
+class Toy_Op<string mnemonic, list<OpTrait> traits = []> :
+ Op<Toy_Dialect, mnemonic, traits>;
+```
+
+With all of the preliminary pieces defined, we can begin to define the constant
+operation.
+
+We define a toy operation by inheriting from our base 'Toy_Op' class above. Here
+we provide the mnemonic and a list of traits for the operation. The
+[mnemonic](../../OpDefinitions.md#operation-name) here matches the one given in
+`ConstantOp::getOperationName` without the dialect prefix, `toy.`. The constant
+operation here is also marked as 'NoSideEffect'. This is an ODS trait, and
+matches one-to-one with the trait we provided when defining `ConstantOp`:
+`mlir::OpTrait::HasNoSideEffect`. Missing here from our C++ definition are the
+`ZeroOperands` and `OneResult` traits; these will be automatically inferred
+based upon the `arguments` and `results` fields we define later.
+
+```tablegen
+def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+}
+```
+
+At this point you might want to know what the C++ code generated by TableGen
+looks like. Simply run the `mlir-tblgen` command with the `gen-op-decls` or the
+`gen-op-defs` action like so:
+
+```shell
+${build_root}/bin/mlir-tblgen -gen-op-defs ${mlir_src_root}/examples/toy/Ch2/include/toy/Ops.td -I ${mlir_src_root}/include/
+```
+
+Depending on the selected action, this will print either the `ConstantOp` class
+declaration or its implementation. Comparing this output to the hand-crafted
+implementation is incredibly useful when getting started with TableGen.
+
+#### Defining Arguments and Results
+
+With the shell of the operation defined, we can now provide the
+[inputs](../../OpDefinitions.md#operation-arguments) and
+[outputs](../../OpDefinitions.md#operation-results) to our operation. The
+inputs, or arguments, to an operation may be attributes or types for SSA operand
+values. The results correspond to a set of types for the values produced by the
+operation:
+
+```tablegen
+def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+ // The constant operation takes an attribute as the only input.
+ // `F64ElementsAttr` corresponds to a 64-bit floating-point ElementsAttr.
+ let arguments = (ins F64ElementsAttr:$value);
+
+ // The constant operation returns a single value of TensorType.
+ // F64Tensor corresponds to a 64-bit floating-point TensorType.
+ let results = (outs F64Tensor);
+}
+```
+
+By providing a name to the arguments or results, e.g. `$value`, ODS will
+automatically generate a matching accessor: `DenseElementsAttr
+ConstantOp::value()`.
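+
+As a rough, self-contained sketch (hypothetical `Fake*` names, not the actual
+generated code), such an accessor behaves like a lookup of the attribute that
+shares the argument's name; the real accessor returns a typed
+`DenseElementsAttr` rather than a string:
+
+```c++
+#include <cassert>
+#include <map>
+#include <string>
+
+// Stand-in for an operation's attribute dictionary.
+struct FakeOperation {
+  std::map<std::string, std::string> attributes;
+};
+
+// Sketch of what an ODS-generated accessor does: fetch the attribute named
+// after the `$value` argument from the underlying operation.
+struct FakeConstantOp {
+  FakeOperation *op;
+  std::string value() const { return op->attributes.at("value"); }
+};
+
+int main() {
+  FakeOperation o{{{"value", "dense<1.0>"}}};
+  FakeConstantOp c{&o};
+  assert(c.value() == "dense<1.0>");
+  return 0;
+}
+```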
+
+#### Adding Documentation
+
+The next step after defining the operation is to document it. Operations may
+provide
+[`summary` and `description`](../../OpDefinitions.md#operation-documentation)
+fields to describe the semantics of the operation. This information is useful
+for users of the dialect and can even be used to auto-generate Markdown
+documents.
+
+```tablegen
+def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+ // Provide a summary and description for this operation. This can be used to
+ // auto-generate documentation of the operations within our dialect.
+ let summary = "constant operation";
+ let description = [{
+ Constant operation turns a literal into an SSA value. The data is attached
+ to the operation as an attribute. For example:
+
+ %0 = "toy.constant"()
+ { value = dense<[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]> : tensor<2x3xf64> }
+ : () -> tensor<2x3xf64>
+ }];
+
+ // The constant operation takes an attribute as the only input.
+ // `F64ElementsAttr` corresponds to a 64-bit floating-point ElementsAttr.
+ let arguments = (ins F64ElementsAttr:$value);
+
+  // The constant operation returns a single value of TensorType.
+  // F64Tensor corresponds to a 64-bit floating-point TensorType.
+ let results = (outs F64Tensor);
+}
+```
+
+#### Verifying Operation Semantics
+
+At this point we've already covered a majority of the original C++ operation
+definition. The next piece to define is the verifier. Luckily, much like the
+named accessor, the ODS framework will automatically generate a lot of the
+necessary verification logic based upon the constraints we have given. This
+means that we don't need to verify the structure of the return type, or even the
+input attribute `value`. In many cases, additional verification is not even
+necessary for ODS operations. To add additional verification logic, an operation
+can override the [`verifier`](../../OpDefinitions.md#custom-verifier-code)
+field. The `verifier` field allows for defining a C++ code blob that will be run
+as part of `ConstantOp::verify`. This blob can assume that all of the other
+invariants of the operation have already been verified:
+
+```tablegen
+def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+ // Provide a summary and description for this operation. This can be used to
+ // auto-generate documentation of the operations within our dialect.
+ let summary = "constant operation";
+ let description = [{
+ Constant operation turns a literal into an SSA value. The data is attached
+ to the operation as an attribute. For example:
+
+ %0 = "toy.constant"()
+ { value = dense<[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]> : tensor<2x3xf64> }
+ : () -> tensor<2x3xf64>
+ }];
+
+ // The constant operation takes an attribute as the only input.
+ // `F64ElementsAttr` corresponds to a 64-bit floating-point ElementsAttr.
+ let arguments = (ins F64ElementsAttr:$value);
+
+  // The constant operation returns a single value of TensorType.
+  // F64Tensor corresponds to a 64-bit floating-point TensorType.
+ let results = (outs F64Tensor);
+
+ // Add additional verification logic to the constant operation. Here we invoke
+ // a static `verify` method in a C++ source file. This codeblock is executed
+ // inside of ConstantOp::verify, so we can use `this` to refer to the current
+ // operation instance.
+ let verifier = [{ return ::verify(*this); }];
+}
+```
+
+#### Attaching `build` Methods
+
+The final missing components from our original C++ example are the `build`
+methods. ODS can generate some simple build methods automatically, and in this
+case it will generate our first build method for us. For the rest, we define the
+[`builders`](../../OpDefinitions.md#custom-builder-methods) field. This field
+takes a list of `OpBuilder` objects that take a string corresponding to a list
+of C++ parameters, as well as an optional code block that can be used to specify
+the implementation inline.
+
+```tablegen
+def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+ // Provide a summary and description for this operation. This can be used to
+ // auto-generate documentation of the operations within our dialect.
+ let summary = "constant operation";
+ let description = [{
+ Constant operation turns a literal into an SSA value. The data is attached
+ to the operation as an attribute. For example:
+
+ %0 = "toy.constant"()
+ { value = dense<[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]> : tensor<2x3xf64> }
+ : () -> tensor<2x3xf64>
+ }];
+
+ // The constant operation takes an attribute as the only input.
+ // `F64ElementsAttr` corresponds to a 64-bit floating-point ElementsAttr.
+ let arguments = (ins F64ElementsAttr:$value);
+
+  // The constant operation returns a single value of TensorType.
+  // F64Tensor corresponds to a 64-bit floating-point TensorType.
+ let results = (outs F64Tensor);
+
+ // Add additional verification logic to the constant operation. Here we invoke
+  // a static `verify` method in a C++ source file. This codeblock is executed
+ // inside of ConstantOp::verify, so we can use `this` to refer to the current
+ // operation instance.
+ let verifier = [{ return ::verify(*this); }];
+
+ // Add custom build methods for the constant operation. These methods populate
+ // the `state` that MLIR uses to create operations, i.e. these are used when
+ // using `builder.create<ConstantOp>(...)`.
+ let builders = [
+ // Build a constant with a given constant tensor value.
+ OpBuilder<"Builder *builder, OperationState &result, "
+ "DenseElementsAttr value", [{
+ // Call into an autogenerated `build` method.
+ build(builder, result, value.getType(), value);
+ }]>,
+
+ // Build a constant with a given constant floating-point value. This builder
+ // creates a declaration for `ConstantOp::build` with the given parameters.
+ OpBuilder<"Builder *builder, OperationState &result, double value">
+ ];
+}
+```
+
+Above we introduce several of the concepts for defining operations in the ODS
+framework, but there are many more that we haven't had a chance to cover:
+regions, variadic operands, etc. Check out the
+[full specification](../../OpDefinitions.md) for more details.
+
+## Complete Toy Example
+
+At this point we can generate our "Toy IR". A simplified version of the previous
+example:
+
+```.toy
+# User defined generic function that operates on unknown shaped arguments.
+def multiply_transpose(a, b) {
+ return transpose(a) * transpose(b);
+}
+
+def main() {
+ var a<2, 3> = [[1, 2, 3], [4, 5, 6]];
+ var b<2, 3> = [1, 2, 3, 4, 5, 6];
+ var c = multiply_transpose(a, b);
+ var d = multiply_transpose(b, a);
+ print(d);
+}
+```
+
+Results in the following IR:
+
+```mlir
+module {
+ func @multiply_transpose(%arg0: tensor<*xf64>, %arg1: tensor<*xf64>) -> tensor<*xf64> {
+ %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64> loc("test/codegen.toy":5:10)
+ %1 = "toy.transpose"(%arg1) : (tensor<*xf64>) -> tensor<*xf64> loc("test/codegen.toy":5:25)
+ %2 = "toy.mul"(%0, %1) : (tensor<*xf64>, tensor<*xf64>) -> tensor<*xf64> loc("test/codegen.toy":5:25)
+ "toy.return"(%2) : (tensor<*xf64>) -> () loc("test/codegen.toy":5:3)
+ } loc("test/codegen.toy":4:1)
+ func @main() {
+ %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64> loc("test/codegen.toy":9:17)
+ %1 = "toy.reshape"(%0) : (tensor<2x3xf64>) -> tensor<2x3xf64> loc("test/codegen.toy":9:3)
+ %2 = "toy.constant"() {value = dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00, 5.000000e+00, 6.000000e+00]> : tensor<6xf64>} : () -> tensor<6xf64> loc("test/codegen.toy":10:17)
+ %3 = "toy.reshape"(%2) : (tensor<6xf64>) -> tensor<2x3xf64> loc("test/codegen.toy":10:3)
+ %4 = "toy.generic_call"(%1, %3) {callee = @multiply_transpose} : (tensor<2x3xf64>, tensor<2x3xf64>) -> tensor<*xf64> loc("test/codegen.toy":11:11)
+ %5 = "toy.generic_call"(%3, %1) {callee = @multiply_transpose} : (tensor<2x3xf64>, tensor<2x3xf64>) -> tensor<*xf64> loc("test/codegen.toy":12:11)
+ "toy.print"(%5) : (tensor<*xf64>) -> () loc("test/codegen.toy":13:3)
+ "toy.return"() : () -> () loc("test/codegen.toy":8:1)
+ } loc("test/codegen.toy":8:1)
+} loc("test/codegen.toy":0:0)
+```
+
+You can build `toyc-ch2` and try it yourself: `toyc-ch2
+test/Examples/Toy/Ch2/codegen.toy -emit=mlir -mlir-print-debuginfo`. We can also
+check our round-trip: `toyc-ch2 test/Examples/Toy/Ch2/codegen.toy -emit=mlir
+-mlir-print-debuginfo 2> codegen.mlir` followed by `toyc-ch2 codegen.mlir
+-emit=mlir`. You should also use `mlir-tblgen` on the final definition file and
+study the generated C++ code.
+
+At this point, MLIR knows about our Toy dialect and operations. In the
+[next chapter](Ch-3.md), we will leverage our new dialect to implement some
+high-level language-specific analyses and transformations for the Toy language.
diff --git a/mlir/docs/Tutorials/Toy/Ch-3.md b/mlir/docs/Tutorials/Toy/Ch-3.md
new file mode 100644
index 00000000000..615c2c1bbec
--- /dev/null
+++ b/mlir/docs/Tutorials/Toy/Ch-3.md
@@ -0,0 +1,264 @@
+# Chapter 3: High-level Language-Specific Analysis and Transformation
+
+[TOC]
+
+Creating a dialect that closely represents the semantics of an input language
+enables analyses, transformations and optimizations in MLIR that require
+high-level language information and are generally performed on the language AST.
+For example, `clang` has a fairly
+[heavy mechanism](https://clang.llvm.org/doxygen/classclang_1_1TreeTransform.html)
+for performing template instantiation in C++.
+
+We divide compiler transformations into two categories: local and global. In
+this chapter, we focus on how to leverage the Toy Dialect and its high-level
+semantics to perform local pattern-match transformations that would be difficult
+in LLVM. For this, we use MLIR's
+[Generic DAG Rewriter](../../GenericDAGRewriter.md).
+
+There are two methods that can be used to implement pattern-match
+transformations:
+
+1.  Imperative, C++ pattern-match and rewrite
+2.  Declarative, rule-based pattern-match and rewrite using table-driven
+    [Declarative Rewrite Rules](../../DeclarativeRewrites.md) (DRR)
+
+Note that the use of DRR requires that the operations be defined using ODS, as
+described in [Chapter 2](Ch-2.md).
+
+## Optimize Transpose using C++ style pattern-match and rewrite
+
+Let's start with a simple pattern and try to eliminate a sequence of two
+transposes that cancel out: `transpose(transpose(X)) -> X`. Here is the
+corresponding Toy example:
+
+```.toy
+def transpose_transpose(x) {
+ return transpose(transpose(x));
+}
+```
+
+Which corresponds to the following IR:
+
+```mlir
+func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
+ %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
+ %1 = "toy.transpose"(%0) : (tensor<*xf64>) -> tensor<*xf64>
+ "toy.return"(%1) : (tensor<*xf64>) -> ()
+}
+```
+
+This is a good example of a transformation that is trivial to match on the Toy
+IR but that would be quite hard for LLVM to figure out. For example, today Clang
+can't optimize away the temporary array, and the computation with the naive
+transpose is expressed with these loops:
+
+```c++
+#define N 100
+#define M 100
+
+void sink(void *);
+void double_transpose(int A[N][M]) {
+ int B[M][N];
+ for(int i = 0; i < N; ++i) {
+ for(int j = 0; j < M; ++j) {
+ B[j][i] = A[i][j];
+ }
+ }
+ for(int i = 0; i < N; ++i) {
+ for(int j = 0; j < M; ++j) {
+ A[i][j] = B[j][i];
+ }
+ }
+ sink(A);
+}
+```
+
+For a simple C++ approach to rewrite involving matching a tree-like pattern in
+the IR and replacing it with a different set of operations, we can plug into the
+MLIR `Canonicalizer` pass by implementing a `RewritePattern`:
+
+```c++
+/// Fold transpose(transpose(x)) -> x
+struct SimplifyRedundantTranspose : public mlir::OpRewritePattern<TransposeOp> {
+ /// We register this pattern to match every toy.transpose in the IR.
+ /// The "benefit" is used by the framework to order the patterns and process
+ /// them in order of profitability.
+ SimplifyRedundantTranspose(mlir::MLIRContext *context)
+ : OpRewritePattern<TransposeOp>(context, /*benefit=*/1) {}
+
+  /// This method attempts to match a pattern and rewrite it. The rewriter
+ /// argument is the orchestrator of the sequence of rewrites. It is expected
+ /// to interact with it to perform any changes to the IR from here.
+ mlir::PatternMatchResult
+ matchAndRewrite(TransposeOp op,
+ mlir::PatternRewriter &rewriter) const override {
+ // Look through the input of the current transpose.
+ mlir::Value transposeInput = op.getOperand();
+ TransposeOp transposeInputOp =
+ llvm::dyn_cast_or_null<TransposeOp>(transposeInput->getDefiningOp());
+    // If the input is not defined by another transpose, there is nothing to
+    // simplify: signal match failure.
+    if (!transposeInputOp)
+      return matchFailure();
+
+ // Use the rewriter to perform the replacement
+ rewriter.replaceOp(op, {transposeInputOp.getOperand()}, {transposeInputOp});
+ return matchSuccess();
+ }
+};
+```
+
+The implementation of this rewriter is in `ToyCombine.cpp`. The
+[canonicalization pass](../../Canonicalization.md) applies transformations
+defined by operations in a greedy, iterative manner. To ensure that the
+canonicalization pass applies our new transform, we set
+[hasCanonicalizer = 1](../../OpDefinitions.md#hascanonicalizer) and register the
+pattern with the canonicalization framework.
+
+```c++
+// Register our patterns for rewrite by the Canonicalization framework.
+void TransposeOp::getCanonicalizationPatterns(
+ OwningRewritePatternList &results, MLIRContext *context) {
+ results.insert<SimplifyRedundantTranspose>(context);
+}
+```
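+
+The greedy driver underlying the canonicalizer can be pictured as a small
+fixpoint loop. Here is a minimal, self-contained C++ sketch (not MLIR's actual
+implementation) in which a `neg(neg(x)) -> x` rule stands in for our transpose
+pattern:
+
+```c++
+#include <cassert>
+
+// A toy unary-op chain standing in for IR; neg(neg(x)) -> x plays the role
+// of transpose(transpose(x)) -> x.
+struct Expr {
+  bool isNeg;
+  Expr *operand; // nullptr for leaf values
+};
+
+// One rewrite attempt, analogous to matchAndRewrite: return the simplified
+// root on success, nullptr on match failure.
+Expr *simplifyDoubleNeg(Expr *e) {
+  if (e->isNeg && e->operand && e->operand->isNeg)
+    return e->operand->operand;
+  return nullptr;
+}
+
+// The greedy driver: keep applying the pattern until a fixpoint is reached,
+// the way the canonicalizer iterates its registered patterns.
+Expr *canonicalize(Expr *root) {
+  while (Expr *simplified = simplifyDoubleNeg(root))
+    root = simplified;
+  return root;
+}
+
+int main() {
+  Expr leaf{false, nullptr};
+  Expr n1{true, &leaf}, n2{true, &n1}, n3{true, &n2}, n4{true, &n3};
+  // neg(neg(neg(neg(x)))) collapses all the way down to x.
+  assert(canonicalize(&n4) == &leaf);
+  return 0;
+}
+```
+
+The loop terminates because each successful rewrite strictly shrinks the
+expression; MLIR's driver similarly reapplies patterns until none match.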
+
+We also need to update our main file, `toyc.cpp`, to add an optimization
+pipeline. In MLIR, the optimizations are run through a `PassManager` in a
+similar way to LLVM:
+
+```c++
+ mlir::PassManager pm(module.getContext());
+ pm.addNestedPass<mlir::FuncOp>(mlir::createCanonicalizerPass());
+```
+
+Finally, we can run `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt` and
+observe our pattern in action:
+
+```mlir
+func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
+ %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
+ "toy.return"(%arg0) : (tensor<*xf64>) -> ()
+}
+```
+
+As expected, we now directly return the function argument, bypassing any
+transpose operation. However, one of the transposes still hasn't been
+eliminated. That is not ideal! What happened is that our pattern replaced the
+last transform with the function input and left behind the now dead transpose
+input. The Canonicalizer knows to clean up dead operations; however, MLIR
+conservatively assumes that operations may have side-effects. We can fix this by
+adding a new trait, `NoSideEffect`, to our `TransposeOp`:
+
+```tablegen
+def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {...}
+```
+
+Let's now retry `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt`:
+
+```mlir
+func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
+ "toy.return"(%arg0) : (tensor<*xf64>) -> ()
+}
+```
+
+Perfect! No `transpose` operation is left; the code is optimal.
+
+In the next section, we use DRR for pattern match optimizations associated with
+the Reshape op.
+
+## Optimize Reshapes using DRR
+
+Declarative, rule-based pattern-match and rewrite (DRR) is an operation
+DAG-based declarative rewriter that provides a table-based syntax for
+pattern-match and rewrite rules:
+
+```tablegen
+class Pattern<
+ dag sourcePattern, list<dag> resultPatterns,
+ list<dag> additionalConstraints = [],
+ dag benefitsAdded = (addBenefit 0)>;
+```
+
+A redundant reshape optimization similar to `SimplifyRedundantTranspose` can be
+expressed more simply using DRR as follows:
+
+```tablegen
+// Reshape(Reshape(x)) = Reshape(x)
+def ReshapeReshapeOptPattern : Pat<(ReshapeOp(ReshapeOp $arg)),
+ (ReshapeOp $arg)>;
+```
+
+The automatically generated C++ code corresponding to each of the DRR patterns
+can be found under `path/to/BUILD/projects/mlir/examples/toy/Ch3/ToyCombine.inc`.
+
+DRR also provides a method for adding argument constraints when the
+transformation is conditional on some properties of the arguments and results.
+An example is a transformation that eliminates reshapes when they are redundant,
+i.e. when the input and output shapes are identical.
+
+```tablegen
+def TypesAreIdentical : Constraint<CPred<"$0->getType() == $1->getType()">>;
+def RedundantReshapeOptPattern : Pat<
+ (ReshapeOp:$res $arg), (replaceWithValue $arg),
+ [(TypesAreIdentical $res, $arg)]>;
+```
+
+Some optimizations may require additional transformations on instruction
+arguments. This is achieved using `NativeCodeCall`, which allows for more
+complex transformations either by calling into a C++ helper function or by
+using inline C++. An example of such an optimization is `FoldConstantReshape`,
+where we optimize the reshape of a constant value by reshaping the constant
+in place and eliminating the reshape operation.
+
+```tablegen
+def ReshapeConstant : NativeCodeCall<"$0.reshape(($1->getType()).cast<ShapedType>())">;
+def FoldConstantReshapeOptPattern : Pat<
+ (ReshapeOp:$res (ConstantOp $arg)),
+ (ConstantOp (ReshapeConstant $arg, $res))>;
+```
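+
+The essence of this fold can be pictured with a small standalone C++ sketch.
+The `DenseConstant` type and `foldConstantReshape` helper here are hypothetical
+stand-ins, not the MLIR `DenseElementsAttr` API: reshaping a dense constant
+only rewrites its shape metadata, while the flat element data carries over
+unchanged.
+
+```cpp
+#include <cassert>
+#include <cstdint>
+#include <utility>
+#include <vector>
+
+// Hypothetical stand-in for a dense constant attribute: flat element data in
+// row-major order, plus the logical shape.
+struct DenseConstant {
+  std::vector<double> data;
+  std::vector<int64_t> shape;
+};
+
+// Folding Reshape(Constant) keeps the element data and swaps in the result
+// shape, mirroring what the NativeCodeCall asks the constant attribute to do.
+DenseConstant foldConstantReshape(const DenseConstant &input,
+                                  std::vector<int64_t> resultShape) {
+  return DenseConstant{input.data, std::move(resultShape)};
+}
+```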
+
+We demonstrate these reshape optimizations using the following
+`trivialReshape.toy` program:
+
+```c++
+def main() {
+ var a<2,1> = [1, 2];
+ var b<2,1> = a;
+ var c<2,1> = b;
+ print(c);
+}
+```
+
+```mlir
+module {
+ func @main() {
+ %0 = "toy.constant"() {value = dense<[1.000000e+00, 2.000000e+00]> : tensor<2xf64>}
+ : () -> tensor<2xf64>
+ %1 = "toy.reshape"(%0) : (tensor<2xf64>) -> tensor<2x1xf64>
+ %2 = "toy.reshape"(%1) : (tensor<2x1xf64>) -> tensor<2x1xf64>
+ %3 = "toy.reshape"(%2) : (tensor<2x1xf64>) -> tensor<2x1xf64>
+ "toy.print"(%3) : (tensor<2x1xf64>) -> ()
+ "toy.return"() : () -> ()
+ }
+}
+```
+
+We can try to run `toyc-ch3 test/trivialReshape.toy -emit=mlir -opt` and observe
+our pattern in action:
+
+```mlir
+module {
+ func @main() {
+ %0 = "toy.constant"() {value = dense<[[1.000000e+00], [2.000000e+00]]> \
+ : tensor<2x1xf64>} : () -> tensor<2x1xf64>
+ "toy.print"(%0) : (tensor<2x1xf64>) -> ()
+ "toy.return"() : () -> ()
+ }
+}
+```
+
+As expected, no reshape operations remain after canonicalization.
+
+Further details on the declarative rewrite method can be found at
+[Table-driven Declarative Rewrite Rule (DRR)](../../DeclarativeRewrites.md).
+
+In this chapter, we saw how to use certain core transformations through always
+available hooks. In the [next chapter](Ch-4.md), we will see how to use generic
+solutions that scale better through Interfaces.
diff --git a/mlir/docs/Tutorials/Toy/Ch-4.md b/mlir/docs/Tutorials/Toy/Ch-4.md
new file mode 100644
index 00000000000..4a4e11c68e6
--- /dev/null
+++ b/mlir/docs/Tutorials/Toy/Ch-4.md
@@ -0,0 +1,387 @@
+# Chapter 4: Enabling Generic Transformation with Interfaces
+
+[TOC]
+
+## Background: Grappling with an Extensible IR
+
+Through dialects, MLIR allows for the representation of many different levels of
+abstraction; the Toy dialect that we have previously defined is one such
+example. Though these different dialects may represent different abstractions,
+there is often a set of common transformations and analyses that we would like
+to perform. The problem that arises is that naively implementing each
+transformation for each dialect leads to large amounts of code duplication, as
+the internal algorithms are generally very similar, if not the same. We would
+like to provide the ability for transformations to opaquely hook into dialects
+like Toy to get the information they need.
+
+MLIR provides a set of always-available hooks for certain core transformations,
+as seen in the [previous chapter](Ch-3.md), where we registered some
+canonicalizations via a hook on our operations (`getCanonicalizationPatterns`).
+However, these types of hooks don't really scale well. Therefore, a more generic
+solution was designed, in the form of [interfaces](../../Interfaces.md), to make
+the MLIR infrastructure as extensible as the representation. Interfaces provide
+a generic mechanism for dialects and operations to provide information to a
+transformation or analysis.
+
+## Shape Inference: Preparing for Code Generation
+
+Our Toy IR currently operates on generic tensors, meaning that we don't know the
+shape of tensors other than during the initialization of constants. This
+complicates optimizations, as well as code generation. Fortunately, we can
+simply propagate the shapes through the computation until they are all known.
+The issue is how to handle calls to user-defined generic functions: every call
+site could deduce different shapes. One possibility would be to perform symbolic
+inference based on the argument types, but this would be hard to generalize if
+we were to introduce more control flow in the language. Another approach would
+be function specialization, where every call site with new argument shapes
+duplicates the called function and specializes it. The approach we take for Toy
+is to inline all of the function calls, then perform intraprocedural shape
+propagation.
+
+### Inlining
+
+Here we could write an inlining algorithm specifically designed for the Toy
+dialect, but that can become quite complicated depending on the level of
+complexity that we want. Disregarding cost modeling, the pure structural
+transformation is already complex to implement from scratch. Thankfully, MLIR
+provides a generic inliner algorithm that dialects can plug into. All we need to
+do in Toy is to provide the [interfaces](../../Interfaces.md) for the inliner to
+hook into.
+
+The first thing we need to do is to define the constraints on inlining
+operations in the Toy dialect. This information is provided through a
+[dialect interface](../../Interfaces.md#dialect-interfaces). This is essentially
+a class containing a set of virtual hooks for which a dialect may provide a
+specialization. In this case, the interface is `DialectInlinerInterface`.
+
+```c++
+/// This class defines the interface for handling inlining with Toy operations.
+/// We simply inherit from the base interface class and provide a
+/// specialization of the necessary methods.
+struct ToyInlinerInterface : public DialectInlinerInterface {
+ using DialectInlinerInterface::DialectInlinerInterface;
+
+ /// This hook checks to see if the given operation is legal to inline into the
+ /// given region. For Toy this hook can simply return true, as all Toy
+ /// operations are inlinable.
+ bool isLegalToInline(Operation *, Region *,
+ BlockAndValueMapping &) const final {
+ return true;
+ }
+
+ /// This hook is called when a terminator operation has been inlined. The only
+ /// terminator that we have in the Toy dialect is the return
+/// operation (toy.return). We handle the return by replacing the values
+ /// previously returned by the call operation with the operands of the
+ /// return.
+ void handleTerminator(Operation *op,
+ ArrayRef<Value> valuesToRepl) const final {
+ // Only "toy.return" needs to be handled here.
+ auto returnOp = cast<ReturnOp>(op);
+
+ // Replace the values directly with the return operands.
+ assert(returnOp.getNumOperands() == valuesToRepl.size());
+ for (const auto &it : llvm::enumerate(returnOp.getOperands()))
+ valuesToRepl[it.index()]->replaceAllUsesWith(it.value());
+ }
+};
+```
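+
+Outside of MLIR, the value replacement performed by `handleTerminator` can be
+sketched with plain data structures (the `Use` type and `replaceCallResults`
+helper are illustrative, not the MLIR API): the i-th value produced by the call
+is replaced, in every one of its uses, by the i-th operand of the inlined
+`toy.return`.
+
+```cpp
+#include <cassert>
+#include <cstddef>
+#include <vector>
+
+// Hypothetical stand-in for a use of an SSA value: it records which value id
+// it currently points at.
+struct Use {
+  int value;
+};
+
+// Replace every use of each call result with the matching return operand,
+// mirroring the replaceAllUsesWith loop above.
+void replaceCallResults(std::vector<std::vector<Use *>> &usesOfResults,
+                        const std::vector<int> &returnOperands) {
+  assert(usesOfResults.size() == returnOperands.size());
+  for (std::size_t i = 0; i < usesOfResults.size(); ++i)
+    for (Use *use : usesOfResults[i])
+      use->value = returnOperands[i];
+}
+```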
+
+We then register our dialect interface directly on the Toy dialect, similarly to
+how we did for operations.
+
+```c++
+ToyDialect::ToyDialect(mlir::MLIRContext *ctx) : mlir::Dialect("toy", ctx) {
+ addInterfaces<ToyInlinerInterface>();
+}
+```
+
+Next, we need to provide a way for the inliner to know that `toy.generic_call`
+represents a call to a function. MLIR provides an
+[operation interface](../../Interfaces.md#operation-interfaces) that can be used
+to mark an operation as being "call-like". Unlike dialect interfaces, operation
+interfaces provide a more refined granularity of information that is specific
+and core to a single operation. The interface that we will be adding here is the
+`CallOpInterface`.
+
+To add this interface we just need to include the definition into our operation
+specification file (`Ops.td`):
+
+```tablegen
+#ifdef MLIR_CALLINTERFACES
+#else
+include "mlir/Analysis/CallInterfaces.td"
+#endif // MLIR_CALLINTERFACES
+```
+
+and add it to the traits list of `GenericCallOp`:
+
+```tablegen
+def GenericCallOp : Toy_Op<"generic_call",
+ [DeclareOpInterfaceMethods<CallOpInterface>]> {
+ ...
+}
+```
+
+In the above we also use the `DeclareOpInterfaceMethods` directive to
+auto-declare all of the interface methods in the class declaration of
+`GenericCallOp`. This means that we just need to provide a definition:
+
+```c++
+/// Return the callee of the generic call operation, this is required by the
+/// call interface.
+CallInterfaceCallable GenericCallOp::getCallableForCallee() {
+ return getAttrOfType<SymbolRefAttr>("callee");
+}
+
+/// Get the argument operands to the called function, this is required by the
+/// call interface.
+Operation::operand_range GenericCallOp::getArgOperands() { return inputs(); }
+```
+
+Now that the inliner has been informed about the Toy dialect, we can add the
+inliner pass to the pass manager for Toy:
+
+```c++
+ pm.addPass(mlir::createInlinerPass());
+```
+
+Now let's look at a working example:
+
+```mlir
+func @multiply_transpose(%arg0: tensor<*xf64>, %arg1: tensor<*xf64>) -> tensor<*xf64> {
+ %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
+ %1 = "toy.transpose"(%arg1) : (tensor<*xf64>) -> tensor<*xf64>
+ %2 = "toy.mul"(%0, %1) : (tensor<*xf64>, tensor<*xf64>) -> tensor<*xf64>
+ "toy.return"(%2) : (tensor<*xf64>) -> ()
+}
+func @main() {
+ %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
+ %1 = "toy.reshape"(%0) : (tensor<2x3xf64>) -> tensor<2x3xf64>
+ %2 = "toy.constant"() {value = dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00, 5.000000e+00, 6.000000e+00]> : tensor<6xf64>} : () -> tensor<6xf64>
+ %3 = "toy.reshape"(%2) : (tensor<6xf64>) -> tensor<2x3xf64>
+ %4 = "toy.generic_call"(%1, %3) {callee = @multiply_transpose} : (tensor<2x3xf64>, tensor<2x3xf64>) -> tensor<*xf64>
+ %5 = "toy.generic_call"(%3, %1) {callee = @multiply_transpose} : (tensor<2x3xf64>, tensor<2x3xf64>) -> tensor<*xf64>
+ "toy.print"(%5) : (tensor<*xf64>) -> ()
+ "toy.return"() : () -> ()
+}
+```
+
+We have two calls to `multiply_transpose` that we would like to inline into `main`,
+but if we look at the output nothing has changed. We are missing one last subtle
+piece: there is a hidden type conversion on the edge of the call. If we look at
+the above, the operands to the generic_call are of type `tensor<2x3xf64>`, while
+the inputs to the function expect `tensor<*xf64>`. To resolve this difference,
+the inliner expects an explicit cast operation to be inserted. For this, we need
+to add a new operation to the Toy dialect, `CastOp` (`toy.cast`), to represent
+casts between two different shapes.
+
+```tablegen
+def CastOp : Toy_Op<"cast", [NoSideEffect, SameOperandsAndResultShape]> {
+ let summary = "shape cast operation";
+ let description = [{
+ The "cast" operation converts a tensor from one type to an equivalent type
+ without changing any data elements. The source and destination types
+ must both be tensor types with the same element type. If both are ranked
+ then the rank should be the same and static dimensions should match. The
+ operation is invalid if converting to a mismatching constant dimension.
+ }];
+
+ let arguments = (ins F64Tensor:$input);
+ let results = (outs F64Tensor:$output);
+
+ // Set the folder bit so that we can fold redundant cast operations.
+ let hasFolder = 1;
+}
+```
+
+We can then override the necessary hook on the ToyInlinerInterface to insert
+this for us when necessary:
+
+```c++
+struct ToyInlinerInterface : public DialectInlinerInterface {
+ ...
+
+ /// Attempts to materialize a conversion for a type mismatch between a call
+ /// from this dialect, and a callable region. This method should generate an
+ /// operation that takes 'input' as the only operand, and produces a single
+ /// result of 'resultType'. If a conversion can not be generated, nullptr
+ /// should be returned.
+ Operation *materializeCallConversion(OpBuilder &builder, Value input,
+ Type resultType,
+ Location conversionLoc) const final {
+ return builder.create<CastOp>(conversionLoc, resultType, input);
+ }
+};
+```
+
+If we run the working example through the pipeline again, we get the expected:
+
+```mlir
+func @main() {
+ %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
+ %1 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
+ %2 = "toy.cast"(%1) : (tensor<2x3xf64>) -> tensor<*xf64>
+ %3 = "toy.cast"(%0) : (tensor<2x3xf64>) -> tensor<*xf64>
+ %4 = "toy.transpose"(%2) : (tensor<*xf64>) -> tensor<*xf64>
+ %5 = "toy.transpose"(%3) : (tensor<*xf64>) -> tensor<*xf64>
+ %6 = "toy.mul"(%4, %5) : (tensor<*xf64>, tensor<*xf64>) -> tensor<*xf64>
+ "toy.print"(%6) : (tensor<*xf64>) -> ()
+ "toy.return"() : () -> ()
+}
+```
+
+NOTE: The generic inliner will also perform simplifications, so the output may
+be a bit cleaner than expected.
+
+### Intraprocedural Shape Inference
+
+Now that we have inlined all of the functions, we are left with a main function
+containing a mix of static and dynamically shaped operations. We can now write a
+simple shape inference pass to propagate shapes intraprocedurally (within a
+single function). We could write this as a pass that directly encodes the
+constraints of the operations within the Toy dialect, but this seems like a good
+candidate for a transformation that could be written generically. As a good rule
+of thumb, it is best to express a transformation as generically as possible,
+such that it can be extended to other dialects in the future. There is no
+telling how many other dialects may have similar needs or encounter the same
+problems.
+
+For shape inference, if we break down the problem to its core, we really just
+want operations to tell us the expected outputs given a set of statically known
+inputs. (We can definitely get more complex than that, but for our needs we can
+keep it simple.) Given that this property is core to a specific operation, we
+can define an operation interface that can be specified on operations that need
+to have their result shapes inferred.
+
+Similarly to operations, we can also
+[define operation interfaces](../../OpDefinitions.md#operation-interfaces) using
+the operation definition specification (ODS) framework.
+
+The interface is defined by inheriting from `OpInterface`, which takes the name
+to be given to the generated C++ interface class as a template argument. For our
+purposes, we will give the generated class the simpler name `ShapeInference`. We also
+provide a description for the interface.
+
+```tablegen
+def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
+ let description = [{
+ Interface to access a registered method to infer the return types for an
+ operation that can be used during type inference.
+ }];
+}
+```
+
+Next, we define the interface methods that the operations will need to provide.
+An interface method comprises: a description; a C++ return type in string
+form; a method name in string form; and a few optional components, depending on
+the need. See the
+[ODS documentation](../../OpDefinitions.md#operation-interfaces) for more
+information.
+
+```tablegen
+def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
+ let description = [{
+ Interface to access a registered method to infer the return types for an
+ operation that can be used during type inference.
+ }];
+
+ let methods = [
+ InterfaceMethod<"Infer and set the output shape for the current operation.",
+ "void", "inferShapes">
+ ];
+}
+```
+
+Now that the interface is defined, we can add it to the necessary Toy operations
+in a similar way to how we added the `CallOpInterface` to `GenericCallOp`:
+
+```tablegen
+def MulOp : Toy_Op<"mul",
+ [..., DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ ...
+}
+```
+
+Each of these operations will then need to provide a definition for the
+`inferShapes()` method. As an example, for the mul op, the result shape is
+inferred as the shape of the inputs.
+
+```c++
+/// Infer the output shape of the MulOp, this is required by the shape inference
+/// interface.
+void MulOp::inferShapes() { getResult()->setType(getOperand(0)->getType()); }
+```
+
+At this point, each of the necessary Toy operations provides a mechanism by
+which to infer its output shape. The `ShapeInferencePass` is a `FunctionPass`:
+it will run on each function in isolation. MLIR also supports general
+[OperationPasses](../../WritingAPass.md#operation-pass) that run on any isolated
+operation (i.e. other function-like operations), but here our module only
+contains functions, so there is no need to generalize to all operations.
+
+Implementing such a pass is done by creating a class inheriting from
+`mlir::FunctionPass` and overriding the `runOnFunction()` method:
+
+```c++
+class ShapeInferencePass : public mlir::FunctionPass<ShapeInferencePass> {
+ void runOnFunction() override {
+ FuncOp function = getFunction();
+ ...
+ }
+};
+```
+
+The algorithm operates as follows:
+
+1. Build a worklist containing all the operations that return a dynamically
+ shaped tensor: these are the operations that need shape inference.
+2. Iterate on the worklist:
+ - find an operation to process: the next ready operation in the worklist
+ has all of its arguments non-generic,
+ - if no operation is found, break out of the loop,
+ - remove the operation from the worklist,
+ - infer the shape of its output from the argument types.
+3. If the worklist is empty, the algorithm succeeded.
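+
+The steps above can be sketched as a standalone worklist loop. This uses plain
+data structures rather than the MLIR API, and the `Op` type's single-input
+shape rule is a deliberate simplification for illustration:
+
+```cpp
+#include <algorithm>
+#include <cassert>
+#include <cstdint>
+#include <vector>
+
+// Hypothetical stand-in for an operation: its inputs, and a result shape that
+// stays empty while the op is still dynamically shaped.
+struct Op {
+  std::vector<Op *> inputs;
+  std::vector<int64_t> shape;
+  bool hasKnownShape() const { return !shape.empty(); }
+  // Simplified inference rule: take the shape of the first input.
+  void inferShape() { shape = inputs.front()->shape; }
+};
+
+// Returns true if every op on the worklist ends up with a known shape.
+bool inferShapes(std::vector<Op *> worklist) {
+  while (!worklist.empty()) {
+    // Find a ready op: all of its inputs already have known shapes.
+    auto it = std::find_if(worklist.begin(), worklist.end(), [](Op *op) {
+      return std::all_of(op->inputs.begin(), op->inputs.end(),
+                         [](Op *in) { return in->hasKnownShape(); });
+    });
+    if (it == worklist.end())
+      return false; // No progress is possible; the algorithm fails.
+    Op *op = *it;
+    worklist.erase(it);
+    op->inferShape();
+  }
+  return true;
+}
+```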
+
+When processing an operation, we query if it registered the `ShapeInference`
+interface.
+
+```c++
+ // Ask the operation to infer its output shapes.
+ LLVM_DEBUG(llvm::dbgs() << "Inferring shape for: " << *op << "\n");
+
+ /// We check if an operation has a particular interface by casting.
+ if (ShapeInference shapeOp = dyn_cast<ShapeInference>(op)) {
+ shapeOp.inferShapes();
+ } else {
+ op->emitError("unable to infer shape of operation without shape "
+ "inference interface");
+ return signalPassFailure();
+ }
+```
+
+We can then add our pass to the pass manager:
+
+```c++
+ pm.addPass(mlir::createShapeInferencePass());
+```
+
+If we rerun our original example, we now get the following:
+
+```mlir
+func @main() {
+ %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
+ %1 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>
+ %2 = "toy.mul"(%1, %1) : (tensor<3x2xf64>, tensor<3x2xf64>) -> tensor<3x2xf64>
+ "toy.print"(%2) : (tensor<3x2xf64>) -> ()
+ "toy.return"() : () -> ()
+}
+```
+
+You can build `toyc-ch4` and try yourself: `toyc-ch4
+test/Examples/Toy/Ch4/codegen.toy -emit=mlir -opt`.
+
+In the [next chapter](Ch-5.md), we will start the process of code generation by
+targeting a lower level dialect for optimizing some of the more compute-heavy
+Toy operations.
diff --git a/mlir/docs/Tutorials/Toy/Ch-5.md b/mlir/docs/Tutorials/Toy/Ch-5.md
new file mode 100644
index 00000000000..8a4268b498f
--- /dev/null
+++ b/mlir/docs/Tutorials/Toy/Ch-5.md
@@ -0,0 +1,357 @@
+# Chapter 5: Partial Lowering to Lower-Level Dialects for Optimization
+
+[TOC]
+
+At this point, we are eager to generate actual code and see our Toy language
+take life. We will use LLVM to generate code, but just showing the LLVM builder
+interface here wouldn't be very exciting. Instead, we will show how to perform
+progressive lowering through a mix of dialects coexisting in the same function.
+
+To make it more interesting, in this chapter we will consider that we want to
+reuse existing optimizations implemented in a dialect optimizing affine
+transformations: `Affine`. This dialect is tailored to the computation-heavy
+part of the program and is limited: it doesn't support representing our
+`toy.print` builtin, for instance, neither should it! Instead, we can target
+`Affine` for the computation heavy part of Toy, and in the
+[next chapter](Ch-6.md) directly the `LLVM IR` dialect for lowering `print`. As
+part of this lowering, we will be lowering from the
+[TensorType](../../LangRef.md#tensor-type) that `Toy` operates on to the
+[MemRefType](../../LangRef.md#memref-type) that is indexed via an affine
+loop-nest. Tensors represent an abstract value-typed sequence of data, meaning
+that they don't live in any memory. MemRefs, on the other hand, represent lower
+level buffer access, as they are concrete references to a region of memory.
+
+## Dialect Conversions
+
+MLIR has many different dialects, so it is important to have a unified framework
+for [converting](../../Glossary.md#conversion) between them. This is where the
+`DialectConversion` framework comes into play. This framework allows for
+transforming a set of `illegal` operations to a set of `legal` ones. To use this
+framework, we need to provide two things (and an optional third):
+
+* A [Conversion Target](../../DialectConversion.md#conversion-target)
+
+ - This is the formal specification of what operations or dialects are
+ legal for the conversion. Operations that aren't legal will require
+ rewrite patterns to perform
+ [legalization](./../../Glossary.md#legalization).
+
+* A set of
+ [Rewrite Patterns](../../DialectConversion.md#rewrite-pattern-specification)
+
+ - These are the set of [patterns](../../QuickstartRewrites.md) used to
+ convert `illegal` operations into a set of zero or more `legal` ones.
+
+* Optionally, a [Type Converter](../../DialectConversion.md#type-conversion).
+
+ - If provided, this is used to convert the types of block arguments. We
+ won't be needing this for our conversion.
+
+## Conversion Target
+
+For our purposes, we want to convert the compute-intensive `Toy` operations into
+a combination of operations from the `Affine` and `Standard` dialects for further
+optimization. To start off the lowering, we first define our conversion target:
+
+```c++
+void ToyToAffineLoweringPass::runOnFunction() {
+ // The first thing to define is the conversion target. This will define the
+ // final target for this lowering.
+ mlir::ConversionTarget target(getContext());
+
+ // We define the specific operations, or dialects, that are legal targets for
+ // this lowering. In our case, we are lowering to a combination of the
+ // `Affine` and `Standard` dialects.
+ target.addLegalDialect<mlir::AffineOpsDialect, mlir::StandardOpsDialect>();
+
+  // We also define the Toy dialect as illegal so that the conversion will fail
+  // if any of these operations are *not* converted. Given that we actually want
+  // a partial lowering, we explicitly mark the Toy operations that we don't
+  // want to lower, `toy.print`, as `legal`.
+ target.addIllegalDialect<ToyDialect>();
+ target.addLegalOp<PrintOp>();
+ ...
+}
+```
+
+## Conversion Patterns
+
+After the conversion target has been defined, we can define how to convert the
+`illegal` operations into `legal` ones. Similarly to the canonicalization
+framework introduced in [chapter 3](Ch-3.md), the
+[`DialectConversion` framework](../../DialectConversion.md) also uses
+[RewritePatterns](../../QuickstartRewrites.md) to perform the conversion logic.
+These patterns may be the `RewritePatterns` seen before or a new type of pattern
+specific to the conversion framework, `ConversionPattern`. `ConversionPatterns`
+are different from traditional `RewritePatterns` in that they accept an
+additional `operands` parameter containing operands that have been
+remapped/replaced. This is used when dealing with type conversions, as the
+pattern will want to operate on values of the new type but match against the
+old. For our lowering, this invariant will be useful as it translates from the
+[TensorType](../../LangRef.md#tensor-type) currently being operated on to the
+[MemRefType](../../LangRef.md#memref-type). Let's look at a snippet of lowering
+the `toy.transpose` operation:
+
+```c++
+/// Lower the `toy.transpose` operation to an affine loop nest.
+struct TransposeOpLowering : public mlir::ConversionPattern {
+ TransposeOpLowering(mlir::MLIRContext *ctx)
+ : mlir::ConversionPattern(TransposeOp::getOperationName(), 1, ctx) {}
+
+ /// Match and rewrite the given `toy.transpose` operation, with the given
+ /// operands that have been remapped from `tensor<...>` to `memref<...>`.
+ mlir::PatternMatchResult
+ matchAndRewrite(mlir::Operation *op, ArrayRef<mlir::Value> operands,
+ mlir::ConversionPatternRewriter &rewriter) const final {
+ auto loc = op->getLoc();
+
+ // Call to a helper function that will lower the current operation to a set
+ // of affine loops. We provide a functor that operates on the remapped
+ // operands, as well as the loop induction variables for the inner most
+ // loop body.
+ lowerOpToLoops(
+ op, operands, rewriter,
+ [loc](mlir::PatternRewriter &rewriter,
+ ArrayRef<mlir::Value> memRefOperands,
+ ArrayRef<mlir::Value> loopIvs) {
+ // Generate an adaptor for the remapped operands of the TransposeOp.
+ // This allows for using the nice named accessors that are generated
+ // by the ODS. This adaptor is automatically provided by the ODS
+ // framework.
+ TransposeOpOperandAdaptor transposeAdaptor(memRefOperands);
+ mlir::Value input = transposeAdaptor.input();
+
+ // Transpose the elements by generating a load from the reverse
+ // indices.
+ SmallVector<mlir::Value, 2> reverseIvs(llvm::reverse(loopIvs));
+ return rewriter.create<mlir::AffineLoadOp>(loc, input, reverseIvs);
+ });
+ return matchSuccess();
+ }
+};
+```
+
+Now we can prepare the list of patterns to use during the lowering process:
+
+```c++
+void ToyToAffineLoweringPass::runOnFunction() {
+ ...
+
+ // Now that the conversion target has been defined, we just need to provide
+ // the set of patterns that will lower the Toy operations.
+ mlir::OwningRewritePatternList patterns;
+ patterns.insert<..., TransposeOpLowering>(&getContext());
+
+ ...
+```
+
+## Partial Lowering
+
+Once the patterns have been defined, we can perform the actual lowering. The
+`DialectConversion` framework provides several different modes of lowering, but,
+for our purposes, we will perform a partial lowering, as we will not convert
+`toy.print` at this time.
+
+```c++
+void ToyToAffineLoweringPass::runOnFunction() {
+ // The first thing to define is the conversion target. This will define the
+ // final target for this lowering.
+ mlir::ConversionTarget target(getContext());
+
+ // We define the specific operations, or dialects, that are legal targets for
+ // this lowering. In our case, we are lowering to a combination of the
+ // `Affine` and `Standard` dialects.
+ target.addLegalDialect<mlir::AffineOpsDialect, mlir::StandardOpsDialect>();
+
+  // We also define the Toy dialect as illegal so that the conversion will fail
+  // if any of these operations are *not* converted. Given that we actually want
+  // a partial lowering, we explicitly mark the Toy operations that we don't
+  // want to lower, `toy.print`, as `legal`.
+ target.addIllegalDialect<ToyDialect>();
+ target.addLegalOp<PrintOp>();
+
+ // Now that the conversion target has been defined, we just need to provide
+ // the set of patterns that will lower the Toy operations.
+ mlir::OwningRewritePatternList patterns;
+ patterns.insert<..., TransposeOpLowering>(&getContext());
+
+ // With the target and rewrite patterns defined, we can now attempt the
+ // conversion. The conversion will signal failure if any of our `illegal`
+ // operations were not converted successfully.
+ auto function = getFunction();
+ if (mlir::failed(mlir::applyPartialConversion(function, target, patterns)))
+ signalPassFailure();
+}
+```
+
+### Design Considerations With Partial Lowering
+
+Before diving into the result of our lowering, this is a good time to discuss
+potential design considerations when it comes to partial lowering. In our
+lowering, we transform from a value-type, TensorType, to an allocated
+(buffer-like) type, MemRefType. However, given that we do not lower the
+`toy.print` operation, we need to temporarily bridge these two worlds. There are
+many ways to go about this, each with their own tradeoffs:
+
+* Generate `load` operations from the buffer
+
+One option is to generate `load` operations from the buffer type to materialize
+an instance of the value type. This allows for the definition of the `toy.print`
+operation to remain unchanged. The downside to this approach is that the
+optimizations on the `affine` dialect are limited, because the `load` will
+actually involve a full copy that is only visible *after* our optimizations have
+been performed.
+
+* Generate a new version of `toy.print` that operates on the lowered type
+
+Another option would be to have another, lowered, variant of `toy.print` that
+operates on the lowered type. The benefit of this option is that there is no
+hidden, unnecessary copy to the optimizer. The downside is that another
+operation definition is needed that may duplicate many aspects of the first.
+Defining a base class in [ODS](../../OpDefinitions.md) may simplify this, but
+you still need to treat these operations separately.
+
+* Update `toy.print` to allow for operating on the lowered type
+
+A third option is to update the current definition of `toy.print` to allow for
+operating on the lowered type. The benefit of this approach is that it is
+simple, does not introduce an additional hidden copy, and does not require
+another operation definition. The downside to this option is that it requires
+mixing abstraction levels in the `Toy` dialect.
+
+For the sake of simplicity, we will use the third option for this lowering. This
+involves updating the type constraints on the PrintOp in the operation
+definition file:
+
+```tablegen
+def PrintOp : Toy_Op<"print"> {
+ ...
+
+ // The print operation takes an input tensor to print.
+  // We also allow an F64MemRef to enable interop during partial lowering.
+ let arguments = (ins AnyTypeOf<[F64Tensor, F64MemRef]>:$input);
+}
+```
+
+## Complete Toy Example
+
+Looking back at our current working example:
+
+```mlir
+func @main() {
+ %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
+ %2 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>
+ %3 = "toy.mul"(%2, %2) : (tensor<3x2xf64>, tensor<3x2xf64>) -> tensor<3x2xf64>
+ "toy.print"(%3) : (tensor<3x2xf64>) -> ()
+ "toy.return"() : () -> ()
+}
+```
+
+With affine lowering added to our pipeline, we can now generate:
+
+```mlir
+func @main() {
+ %cst = constant 1.000000e+00 : f64
+ %cst_0 = constant 2.000000e+00 : f64
+ %cst_1 = constant 3.000000e+00 : f64
+ %cst_2 = constant 4.000000e+00 : f64
+ %cst_3 = constant 5.000000e+00 : f64
+ %cst_4 = constant 6.000000e+00 : f64
+
+ // Allocating buffers for the inputs and outputs.
+ %0 = alloc() : memref<3x2xf64>
+ %1 = alloc() : memref<3x2xf64>
+ %2 = alloc() : memref<2x3xf64>
+
+ // Initialize the input buffer with the constant values.
+ affine.store %cst, %2[0, 0] : memref<2x3xf64>
+ affine.store %cst_0, %2[0, 1] : memref<2x3xf64>
+ affine.store %cst_1, %2[0, 2] : memref<2x3xf64>
+ affine.store %cst_2, %2[1, 0] : memref<2x3xf64>
+ affine.store %cst_3, %2[1, 1] : memref<2x3xf64>
+ affine.store %cst_4, %2[1, 2] : memref<2x3xf64>
+
+ // Load the transpose value from the input buffer and store it into the
+ // next input buffer.
+ affine.for %arg0 = 0 to 3 {
+ affine.for %arg1 = 0 to 2 {
+ %3 = affine.load %2[%arg1, %arg0] : memref<2x3xf64>
+ affine.store %3, %1[%arg0, %arg1] : memref<3x2xf64>
+ }
+ }
+
+ // Multiply and store into the output buffer.
+ affine.for %arg0 = 0 to 2 {
+ affine.for %arg1 = 0 to 3 {
+ %3 = affine.load %1[%arg0, %arg1] : memref<3x2xf64>
+ %4 = affine.load %1[%arg0, %arg1] : memref<3x2xf64>
+ %5 = mulf %3, %4 : f64
+ affine.store %5, %0[%arg0, %arg1] : memref<3x2xf64>
+ }
+ }
+
+ // Print the value held by the buffer.
+ "toy.print"(%0) : (memref<3x2xf64>) -> ()
+ dealloc %2 : memref<2x3xf64>
+ dealloc %1 : memref<3x2xf64>
+ dealloc %0 : memref<3x2xf64>
+ return
+}
+```
+
+## Taking Advantage of Affine Optimization
+
+Our naive lowering is correct, but it leaves a lot to be desired with regards to
+efficiency. For example, the lowering of `toy.mul` has generated some redundant
+loads. Let's look at how adding a few existing optimizations to the pipeline can
+help clean this up. Adding the `LoopFusion` and `MemRefDataFlowOpt` passes to
+the pipeline gives the following result:
+
+```mlir
+func @main() {
+ %cst = constant 1.000000e+00 : f64
+ %cst_0 = constant 2.000000e+00 : f64
+ %cst_1 = constant 3.000000e+00 : f64
+ %cst_2 = constant 4.000000e+00 : f64
+ %cst_3 = constant 5.000000e+00 : f64
+ %cst_4 = constant 6.000000e+00 : f64
+
+ // Allocating buffers for the inputs and outputs.
+ %0 = alloc() : memref<3x2xf64>
+ %1 = alloc() : memref<2x3xf64>
+
+ // Initialize the input buffer with the constant values.
+ affine.store %cst, %1[0, 0] : memref<2x3xf64>
+ affine.store %cst_0, %1[0, 1] : memref<2x3xf64>
+ affine.store %cst_1, %1[0, 2] : memref<2x3xf64>
+ affine.store %cst_2, %1[1, 0] : memref<2x3xf64>
+ affine.store %cst_3, %1[1, 1] : memref<2x3xf64>
+ affine.store %cst_4, %1[1, 2] : memref<2x3xf64>
+
+ affine.for %arg0 = 0 to 3 {
+ affine.for %arg1 = 0 to 2 {
+ // Load the transpose value from the input buffer.
+ %2 = affine.load %1[%arg1, %arg0] : memref<2x3xf64>
+
+ // Multiply and store into the output buffer.
+ %3 = mulf %2, %2 : f64
+ affine.store %3, %0[%arg0, %arg1] : memref<3x2xf64>
+ }
+ }
+
+ // Print the value held by the buffer.
+ "toy.print"(%0) : (memref<3x2xf64>) -> ()
+ dealloc %1 : memref<2x3xf64>
+ dealloc %0 : memref<3x2xf64>
+ return
+}
+```
+
+Here, we can see that a redundant allocation was removed, the two loop nests
+were fused, and some unnecessary `load`s were removed. You can build `toyc-ch5`
+and try it yourself: `toyc-ch5 test/lowering.toy -emit=mlir-affine`. We can also
+check our optimizations by adding `-opt`.
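Programmatically, these passes would be registered on the pass manager along the following lines. This is only a sketch: the pass-creation function names (`createLoopFusionPass`, `createMemRefDataFlowOptPass`) are assumed from MLIR's usual registration conventions and may differ between versions.

```c++
  // Hypothetical pipeline setup for the optimizations discussed above.
  mlir::PassManager pm(module.getContext());
  pm.addPass(mlir::createLoopFusionPass());
  pm.addPass(mlir::createMemRefDataFlowOptPass());
```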
+
+In this chapter we explored some aspects of partial lowering, with the intent to
+optimize. In the [next chapter](Ch-6.md) we will continue the discussion about
+dialect conversion by targeting LLVM for code generation.
diff --git a/mlir/docs/Tutorials/Toy/Ch-6.md b/mlir/docs/Tutorials/Toy/Ch-6.md
new file mode 100644
index 00000000000..939b2b4f776
--- /dev/null
+++ b/mlir/docs/Tutorials/Toy/Ch-6.md
@@ -0,0 +1,323 @@
+# Chapter 6: Lowering to LLVM and Code Generation
+
+[TOC]
+
+In the [previous chapter](Ch-5.md), we introduced the
+[dialect conversion](../../DialectConversion.md) framework and partially lowered
+many of the `Toy` operations to affine loop nests for optimization. In this
+chapter, we will finally lower to LLVM for code generation.
+
+## Lowering to LLVM
+
+For this lowering, we will again use the dialect conversion framework to perform
+the heavy lifting. However, this time, we will be performing a full conversion
+to the [LLVM dialect](../../Dialects/LLVM.md). Thankfully, we have already
+lowered all but one of the `toy` operations, with the last being `toy.print`.
+Before going over the conversion to LLVM, let's lower the `toy.print` operation.
+We will lower this operation to a non-affine loop nest that invokes `printf` for
+each element. Note that, because the dialect conversion framework supports
+[transitive lowering](../../Glossary.md#transitive-lowering), we don't need to
+directly emit operations in the LLVM dialect. By transitive lowering, we mean
+that the conversion framework may apply multiple patterns to fully legalize an
+operation. In this example, we are generating a structured loop nest instead of
+the branch-form in the LLVM dialect. As long as we then have a lowering from the
+loop operations to LLVM, the lowering will still succeed.
+
+During lowering we can get, or build, the declaration for `printf` as follows:
+
+```c++
+/// Return a symbol reference to the printf function, inserting it into the
+/// module if necessary.
+static FlatSymbolRefAttr getOrInsertPrintf(PatternRewriter &rewriter,
+ ModuleOp module,
+ LLVM::LLVMDialect *llvmDialect) {
+ auto *context = module.getContext();
+ if (module.lookupSymbol<LLVM::LLVMFuncOp>("printf"))
+ return SymbolRefAttr::get("printf", context);
+
+ // Create a function declaration for printf, the signature is:
+ // * `i32 (i8*, ...)`
+ auto llvmI32Ty = LLVM::LLVMType::getInt32Ty(llvmDialect);
+ auto llvmI8PtrTy = LLVM::LLVMType::getInt8PtrTy(llvmDialect);
+ auto llvmFnType = LLVM::LLVMType::getFunctionTy(llvmI32Ty, llvmI8PtrTy,
+ /*isVarArg=*/true);
+
+ // Insert the printf function into the body of the parent module.
+ PatternRewriter::InsertionGuard insertGuard(rewriter);
+ rewriter.setInsertionPointToStart(module.getBody());
+ rewriter.create<LLVM::LLVMFuncOp>(module.getLoc(), "printf", llvmFnType);
+ return SymbolRefAttr::get("printf", context);
+}
+```
+
+Now that the lowering for the printf operation has been defined, we can specify
+the components necessary for the lowering. These are largely the same as the
+components defined in the [previous chapter](Ch-5.md).
+
+### Conversion Target
+
+For this conversion, aside from the top-level module, we will be lowering
+everything to the LLVM dialect.
+
+```c++
+ mlir::ConversionTarget target(getContext());
+ target.addLegalDialect<mlir::LLVM::LLVMDialect>();
+ target.addLegalOp<mlir::ModuleOp, mlir::ModuleTerminatorOp>();
+```
+
+### Type Converter
+
+This lowering will also transform the MemRef types that are currently being
+operated on into a representation in LLVM. To perform this conversion, we use a
+TypeConverter as part of the lowering. This converter specifies how one type
+maps to another. This is necessary now that we are performing more complicated
+lowerings involving block arguments. Given that we don't have any
+Toy-dialect-specific types that need to be lowered, the default converter is
+enough for our use case.
+
+```c++
+ LLVMTypeConverter typeConverter(&getContext());
+```
+
+### Conversion Patterns
+
+Now that the conversion target has been defined, we need to provide the patterns
+used for lowering. At this point in the compilation process, we have a
+combination of `toy`, `affine`, and `std` operations. Luckily, the `std` and
+`affine` dialects already provide the set of patterns needed to transform them
+into LLVM dialect. These patterns allow for lowering the IR in multiple stages
+by relying on [transitive lowering](../../Glossary.md#transitive-lowering).
+
+```c++
+ mlir::OwningRewritePatternList patterns;
+ mlir::populateAffineToStdConversionPatterns(patterns, &getContext());
+ mlir::populateLoopToStdConversionPatterns(patterns, &getContext());
+ mlir::populateStdToLLVMConversionPatterns(typeConverter, patterns);
+
+ // The only remaining operation to lower from the `toy` dialect is the
+ // PrintOp.
+ patterns.insert<PrintOpLowering>(&getContext());
+```
+
+### Full Lowering
+
+We want to completely lower to LLVM, so we use a full conversion. This ensures
+that only legal operations will remain after the conversion.
+
+```c++
+ mlir::ModuleOp module = getModule();
+ if (mlir::failed(mlir::applyFullConversion(module, target, patterns,
+ &typeConverter)))
+ signalPassFailure();
+```
+
+Looking back at our current working example:
+
+```mlir
+func @main() {
+ %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
+ %2 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>
+ %3 = "toy.mul"(%2, %2) : (tensor<3x2xf64>, tensor<3x2xf64>) -> tensor<3x2xf64>
+ "toy.print"(%3) : (tensor<3x2xf64>) -> ()
+ "toy.return"() : () -> ()
+}
+```
+
+We can now lower down to the LLVM dialect, which produces the following code:
+
+```mlir
+llvm.func @free(!llvm<"i8*">)
+llvm.func @printf(!llvm<"i8*">, ...) -> !llvm.i32
+llvm.func @malloc(!llvm.i64) -> !llvm<"i8*">
+llvm.func @main() {
+ %0 = llvm.mlir.constant(1.000000e+00 : f64) : !llvm.double
+ %1 = llvm.mlir.constant(2.000000e+00 : f64) : !llvm.double
+
+ ...
+
+^bb16:
+ %221 = llvm.extractvalue %25[0 : index] : !llvm<"{ double*, i64, [2 x i64], [2 x i64] }">
+ %222 = llvm.mlir.constant(0 : index) : !llvm.i64
+ %223 = llvm.mlir.constant(2 : index) : !llvm.i64
+ %224 = llvm.mul %214, %223 : !llvm.i64
+ %225 = llvm.add %222, %224 : !llvm.i64
+ %226 = llvm.mlir.constant(1 : index) : !llvm.i64
+ %227 = llvm.mul %219, %226 : !llvm.i64
+ %228 = llvm.add %225, %227 : !llvm.i64
+ %229 = llvm.getelementptr %221[%228] : (!llvm<"double*">, !llvm.i64) -> !llvm<"double*">
+ %230 = llvm.load %229 : !llvm<"double*">
+ %231 = llvm.call @printf(%207, %230) : (!llvm<"i8*">, !llvm.double) -> !llvm.i32
+ %232 = llvm.add %219, %218 : !llvm.i64
+ llvm.br ^bb15(%232 : !llvm.i64)
+
+ ...
+
+^bb18:
+ %235 = llvm.extractvalue %65[0 : index] : !llvm<"{ double*, i64, [2 x i64], [2 x i64] }">
+ %236 = llvm.bitcast %235 : !llvm<"double*"> to !llvm<"i8*">
+ llvm.call @free(%236) : (!llvm<"i8*">) -> ()
+ %237 = llvm.extractvalue %45[0 : index] : !llvm<"{ double*, i64, [2 x i64], [2 x i64] }">
+ %238 = llvm.bitcast %237 : !llvm<"double*"> to !llvm<"i8*">
+ llvm.call @free(%238) : (!llvm<"i8*">) -> ()
+ %239 = llvm.extractvalue %25[0 : index] : !llvm<"{ double*, i64, [2 x i64], [2 x i64] }">
+ %240 = llvm.bitcast %239 : !llvm<"double*"> to !llvm<"i8*">
+ llvm.call @free(%240) : (!llvm<"i8*">) -> ()
+ llvm.return
+}
+```
+
+See [Conversion to the LLVM IR Dialect](../../ConversionToLLVMDialect.md) for
+more in-depth details on lowering to the LLVM dialect.
+
+## CodeGen: Getting Out of MLIR
+
+At this point we are right at the cusp of code generation. We can generate code
+in the LLVM dialect, so now we just need to export to LLVM IR and setup a JIT to
+run it.
+
+### Emitting LLVM IR
+
+Now that our module consists only of operations in the LLVM dialect, we can
+export to LLVM IR. To do this programmatically, we can invoke the following
+utility:
+
+```c++
+ std::unique_ptr<llvm::Module> llvmModule = mlir::translateModuleToLLVMIR(module);
+ if (!llvmModule)
+ /* ... an error was encountered ... */
+```
+
+Exporting our module to LLVM IR generates:
+
+```.llvm
+define void @main() {
+ ...
+
+102:
+ %103 = extractvalue { double*, i64, [2 x i64], [2 x i64] } %8, 0
+ %104 = mul i64 %96, 2
+ %105 = add i64 0, %104
+ %106 = mul i64 %100, 1
+ %107 = add i64 %105, %106
+ %108 = getelementptr double, double* %103, i64 %107
+ %109 = load double, double* %108
+ %110 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double %109)
+ %111 = add i64 %100, 1
+ br label %99
+
+ ...
+
+115:
+ %116 = extractvalue { double*, i64, [2 x i64], [2 x i64] } %24, 0
+ %117 = bitcast double* %116 to i8*
+ call void @free(i8* %117)
+ %118 = extractvalue { double*, i64, [2 x i64], [2 x i64] } %16, 0
+ %119 = bitcast double* %118 to i8*
+ call void @free(i8* %119)
+ %120 = extractvalue { double*, i64, [2 x i64], [2 x i64] } %8, 0
+ %121 = bitcast double* %120 to i8*
+ call void @free(i8* %121)
+ ret void
+}
+```
+
+If we enable optimization on the generated LLVM IR, we can trim this down quite
+a bit:
+
+```.llvm
+define void @main() {
+ %0 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 1.000000e+00)
+ %1 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 1.600000e+01)
+ %putchar = tail call i32 @putchar(i32 10)
+ %2 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 4.000000e+00)
+ %3 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 2.500000e+01)
+ %putchar.1 = tail call i32 @putchar(i32 10)
+ %4 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 9.000000e+00)
+ %5 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 3.600000e+01)
+ %putchar.2 = tail call i32 @putchar(i32 10)
+ ret void
+}
+```
+
+The full code listing for dumping LLVM IR can be found in `Ch6/toy.cpp` in the
+`dumpLLVMIR()` function:
+
+```c++
+int dumpLLVMIR(mlir::ModuleOp module) {
+ // Translate the module, that contains the LLVM dialect, to LLVM IR.
+ auto llvmModule = mlir::translateModuleToLLVMIR(module);
+ if (!llvmModule) {
+ llvm::errs() << "Failed to emit LLVM IR\n";
+ return -1;
+ }
+
+ // Initialize LLVM targets.
+ llvm::InitializeNativeTarget();
+ llvm::InitializeNativeTargetAsmPrinter();
+ mlir::ExecutionEngine::setupTargetTriple(llvmModule.get());
+
+ /// Optionally run an optimization pipeline over the llvm module.
+ auto optPipeline = mlir::makeOptimizingTransformer(
+ /*optLevel=*/EnableOpt ? 3 : 0, /*sizeLevel=*/0,
+ /*targetMachine=*/nullptr);
+ if (auto err = optPipeline(llvmModule.get())) {
+ llvm::errs() << "Failed to optimize LLVM IR " << err << "\n";
+ return -1;
+ }
+ llvm::errs() << *llvmModule << "\n";
+ return 0;
+}
+```
+
+### Setting up a JIT
+
+Setting up a JIT to run the module containing the LLVM dialect can be done using
+the `mlir::ExecutionEngine` infrastructure. This is a utility wrapper around
+LLVM's JIT that accepts `.mlir` as input. The full code listing for setting up
+the JIT can be found in `Ch6/toy.cpp` in the `runJit()` function:
+
+```c++
+int runJit(mlir::ModuleOp module) {
+ // Initialize LLVM targets.
+ llvm::InitializeNativeTarget();
+ llvm::InitializeNativeTargetAsmPrinter();
+
+ // An optimization pipeline to use within the execution engine.
+ auto optPipeline = mlir::makeOptimizingTransformer(
+ /*optLevel=*/EnableOpt ? 3 : 0, /*sizeLevel=*/0,
+ /*targetMachine=*/nullptr);
+
+ // Create an MLIR execution engine. The execution engine eagerly JIT-compiles
+ // the module.
+ auto maybeEngine = mlir::ExecutionEngine::create(module, optPipeline);
+ assert(maybeEngine && "failed to construct an execution engine");
+ auto &engine = maybeEngine.get();
+
+ // Invoke the JIT-compiled function.
+ auto invocationResult = engine->invoke("main");
+ if (invocationResult) {
+ llvm::errs() << "JIT invocation failed\n";
+ return -1;
+ }
+
+ return 0;
+}
+```
+
+You can play around with it from the build directory:
+
+```sh
+$ echo 'def main() { print([[1, 2], [3, 4]]); }' | ./bin/toyc-ch6 -emit=jit
+1.000000 2.000000
+3.000000 4.000000
+```
+
+You can also play with `-emit=mlir`, `-emit=mlir-affine`, `-emit=mlir-llvm`, and
+`-emit=llvm` to compare the various levels of IR involved. Also try options like
+[`--print-ir-after-all`](../../WritingAPass.md#ir-printing) to track the
+evolution of the IR throughout the pipeline.
+
+So far, we have worked with primitive data types. In the
+[next chapter](Ch-7.md), we will add a composite `struct` type.
diff --git a/mlir/docs/Tutorials/Toy/Ch-7.md b/mlir/docs/Tutorials/Toy/Ch-7.md
new file mode 100644
index 00000000000..6298e8253e9
--- /dev/null
+++ b/mlir/docs/Tutorials/Toy/Ch-7.md
@@ -0,0 +1,539 @@
+# Chapter 7: Adding a Composite Type to Toy
+
+[TOC]
+
+In the [previous chapter](Ch-6.md), we demonstrated an end-to-end compilation
+flow from our Toy front-end to LLVM IR. In this chapter, we will extend the Toy
+language to support a new composite `struct` type.
+
+## Defining a `struct` in Toy
+
+The first thing we need to define is the interface of this type in our `toy`
+source language. The general syntax of a `struct` type in Toy is as follows:
+
+```toy
+# A struct is defined by using the `struct` keyword followed by a name.
+struct MyStruct {
+ # Inside of the struct is a list of variable declarations without initializers
+ # or shapes, which may also be other previously defined structs.
+ var a;
+ var b;
+}
+```
+
+Structs may now be used in functions as variables or parameters by using the
+name of the struct instead of `var`. The members of the struct are accessed via
+a `.` access operator. Values of `struct` type may be initialized with a
+composite initializer, or a comma-separated list of other initializers
+surrounded by `{}`. An example is shown below:
+
+```toy
+struct Struct {
+ var a;
+ var b;
+}
+
+# User-defined generic functions may operate on struct types as well.
+def multiply_transpose(Struct value) {
+ # We can access the elements of a struct via the '.' operator.
+ return transpose(value.a) * transpose(value.b);
+}
+
+def main() {
+ # We initialize struct values using a composite initializer.
+ Struct value = {[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]]};
+
+ # We pass these arguments to functions like we do with variables.
+ var c = multiply_transpose(value);
+ print(c);
+}
+```
+
+## Defining a `struct` in MLIR
+
+In MLIR, we will also need a representation for our struct types. MLIR does not
+provide a type that does exactly what we need, so we will need to define our
+own. We will simply define our `struct` as an unnamed container of a set of
+element types. The name of the `struct` and its elements are only useful for the
+AST of our `toy` compiler, so we don't need to encode them in the MLIR
+representation.
+
+### Defining the Type Class
+
+#### Reserving a Range of Type Kinds
+
+Types in MLIR rely on having a unique `kind` value to ensure that casting checks
+remain extremely efficient
+([rationale](../../Rationale.md#reserving-dialect-type-kinds)). For `toy`, this
+means we need to explicitly reserve a static range of type `kind` values in the
+symbol registry file
+[DialectSymbolRegistry](https://github.com/tensorflow/mlir/blob/master/include/mlir/IR/DialectSymbolRegistry.def).
+
+```c++
+DEFINE_SYM_KIND_RANGE(LINALG) // Linear Algebra Dialect
+DEFINE_SYM_KIND_RANGE(TOY) // Toy language (tutorial) Dialect
+
+// The following ranges are reserved for experimenting with MLIR dialects in a
+// private context without having to register them here.
+DEFINE_SYM_KIND_RANGE(PRIVATE_EXPERIMENTAL_0)
+```
+
+These definitions will provide a range in the `Type::Kind` enum to use when
+defining the derived types.
+
+```c++
+/// Create a local enumeration with all of the types that are defined by Toy.
+namespace ToyTypes {
+enum Types {
+ Struct = mlir::Type::FIRST_TOY_TYPE,
+};
+} // end namespace ToyTypes
+```
+
+#### Defining the Type Class
+
+As mentioned in [chapter 2](Ch-2.md), [`Type`](../../LangRef.md#type-system)
+objects in MLIR are value-typed and rely on having an internal storage object
+that holds the actual data for the type. The `Type` class in itself acts as a
+simple wrapper around an internal `TypeStorage` object that is uniqued within an
+instance of an `MLIRContext`. When constructing a `Type`, we are internally just
+constructing and uniquing an instance of a storage class.
+
+When defining a new `Type` that requires additional information beyond just the
+`kind` (e.g. the `struct` type, which requires additional information to hold
+the element types), we will need to provide a derived storage class. The
+`primitive` types that don't have any additional data (e.g. the
+[`index` type](../../LangRef.md#index-type)) don't require a storage class.
+
+##### Defining the Storage Class
+
+Type storage objects contain all of the data necessary to construct and unique a
+type instance. Derived storage classes must inherit from the base
+`mlir::TypeStorage` and provide a set of aliases and hooks that will be used by
+the `MLIRContext` for uniquing. Below is the definition of the storage instance
+for our `struct` type, with each of the necessary requirements detailed inline:
+
+```c++
+/// This class represents the internal storage of the Toy `StructType`.
+struct StructTypeStorage : public mlir::TypeStorage {
+ /// The `KeyTy` is a required type that provides an interface for the storage
+ /// instance. This type will be used when uniquing an instance of the type
+ /// storage. For our struct type, we will unique each instance structurally on
+ /// the elements that it contains.
+ using KeyTy = llvm::ArrayRef<mlir::Type>;
+
+ /// A constructor for the type storage instance.
+ StructTypeStorage(llvm::ArrayRef<mlir::Type> elementTypes)
+ : elementTypes(elementTypes) {}
+
+ /// Define the comparison function for the key type with the current storage
+ /// instance. This is used when constructing a new instance to ensure that we
+ /// haven't already uniqued an instance of the given key.
+ bool operator==(const KeyTy &key) const { return key == elementTypes; }
+
+ /// Define a hash function for the key type. This is used when uniquing
+ /// instances of the storage.
+ /// Note: This method isn't necessary as both llvm::ArrayRef and mlir::Type
+ /// have hash functions available, so we could just omit this entirely.
+ static llvm::hash_code hashKey(const KeyTy &key) {
+ return llvm::hash_value(key);
+ }
+
+ /// Define a construction function for the key type from a set of parameters.
+ /// These parameters will be provided when constructing the storage instance
+ /// itself, see the `StructType::get` method further below.
+ /// Note: This method isn't necessary because KeyTy can be directly
+ /// constructed with the given parameters.
+ static KeyTy getKey(llvm::ArrayRef<mlir::Type> elementTypes) {
+ return KeyTy(elementTypes);
+ }
+
+ /// Define a construction method for creating a new instance of this storage.
+ /// This method takes an instance of a storage allocator, and an instance of a
+ /// `KeyTy`. The given allocator must be used for *all* necessary dynamic
+ /// allocations used to create the type storage and its internal data.
+ static StructTypeStorage *construct(mlir::TypeStorageAllocator &allocator,
+ const KeyTy &key) {
+ // Copy the elements from the provided `KeyTy` into the allocator.
+ llvm::ArrayRef<mlir::Type> elementTypes = allocator.copyInto(key);
+
+ // Allocate the storage instance and construct it.
+ return new (allocator.allocate<StructTypeStorage>())
+ StructTypeStorage(elementTypes);
+ }
+
+ /// The following field contains the element types of the struct.
+ llvm::ArrayRef<mlir::Type> elementTypes;
+};
+```
+
+##### Defining the Type Class
+
+With the storage class defined, we can add the definition for the user-visible
+`StructType` class. This is the class that we will actually interface with.
+
+```c++
+/// This class defines the Toy struct type. It represents a collection of
+/// element types. All derived types in MLIR must inherit from the CRTP class
+/// 'Type::TypeBase'. It takes as template parameters the concrete type
+/// (StructType), the base class to use (Type), and the storage class
+/// (StructTypeStorage).
+class StructType : public mlir::Type::TypeBase<StructType, mlir::Type,
+ StructTypeStorage> {
+public:
+ /// Inherit some necessary constructors from 'TypeBase'.
+ using Base::Base;
+
+ /// This static method is used to support type inquiry through isa, cast,
+ /// and dyn_cast.
+ static bool kindof(unsigned kind) { return kind == ToyTypes::Struct; }
+
+ /// Create an instance of a `StructType` with the given element types. There
+ /// *must* be at least one element type.
+ static StructType get(llvm::ArrayRef<mlir::Type> elementTypes) {
+ assert(!elementTypes.empty() && "expected at least 1 element type");
+
+ // Call into a helper 'get' method in 'TypeBase' to get a uniqued instance
+ // of this type. The first two parameters are the context to unique in and
+ // the kind of the type. The parameters after the type kind are forwarded to
+ // the storage instance.
+ mlir::MLIRContext *ctx = elementTypes.front().getContext();
+ return Base::get(ctx, ToyTypes::Struct, elementTypes);
+ }
+
+ /// Returns the element types of this struct type.
+ llvm::ArrayRef<mlir::Type> getElementTypes() {
+ // 'getImpl' returns a pointer to the internal storage instance.
+ return getImpl()->elementTypes;
+ }
+
+ /// Returns the number of element types held by this struct.
+ size_t getNumElementTypes() { return getElementTypes().size(); }
+};
+```
+
+We register this type in the `ToyDialect` constructor in a similar way to how we
+did with operations:
+
+```c++
+ToyDialect::ToyDialect(mlir::MLIRContext *ctx)
+ : mlir::Dialect(getDialectNamespace(), ctx) {
+ addTypes<StructType>();
+}
+```
+
+With this we can now use our `StructType` when generating MLIR from Toy. See
+`examples/toy/Ch7/mlir/MLIRGen.cpp` for more details.
+
+### Parsing and Printing
+
+At this point we can use our `StructType` during MLIR generation and
+transformation, but we can't output or parse `.mlir`. For this we need to add
+support for parsing and printing instances of the `StructType`. This can be done
+by overriding the `parseType` and `printType` methods on the `ToyDialect`.
+
+```c++
+class ToyDialect : public mlir::Dialect {
+public:
+ /// Parse an instance of a type registered to the toy dialect.
+ mlir::Type parseType(mlir::DialectAsmParser &parser) const override;
+
+ /// Print an instance of a type registered to the toy dialect.
+ void printType(mlir::Type type,
+ mlir::DialectAsmPrinter &printer) const override;
+};
+```
+
+These methods take an instance of a high-level parser or printer that allows for
+easily implementing the necessary functionality. Before going into the
+implementation, let's think about the syntax that we want for the `struct` type
+in the printed IR. As described in the
+[MLIR language reference](../../LangRef.md#dialect-types), dialect types are
+generally represented as: `! dialect-namespace < type-data >`, with a pretty
+form available under certain circumstances. The responsibility of our `Toy`
+parser and printer is to provide the `type-data` bits. We will define our
+`StructType` as having the following form:
+
+```
+ struct-type ::= `struct` `<` type (`,` type)* `>`
+```
+
+#### Parsing
+
+An implementation of the parser is shown below:
+
+```c++
+/// Parse an instance of a type registered to the toy dialect.
+mlir::Type ToyDialect::parseType(mlir::DialectAsmParser &parser) const {
+ // Parse a struct type in the following form:
+ // struct-type ::= `struct` `<` type (`,` type)* `>`
+
+ // NOTE: All MLIR parser functions return a ParseResult. This is a
+ // specialization of LogicalResult that auto-converts to a `true` boolean
+ // value on failure to allow for chaining, but may be used with explicit
+ // `mlir::failed/mlir::succeeded` as desired.
+
+ // Parse: `struct` `<`
+ if (parser.parseKeyword("struct") || parser.parseLess())
+ return Type();
+
+ // Parse the element types of the struct.
+ SmallVector<mlir::Type, 1> elementTypes;
+ do {
+ // Parse the current element type.
+ llvm::SMLoc typeLoc = parser.getCurrentLocation();
+ mlir::Type elementType;
+ if (parser.parseType(elementType))
+ return nullptr;
+
+ // Check that the type is either a TensorType or another StructType.
+ if (!elementType.isa<mlir::TensorType>() &&
+ !elementType.isa<StructType>()) {
+ parser.emitError(typeLoc, "element type for a struct must either "
+ "be a TensorType or a StructType, got: ")
+ << elementType;
+ return Type();
+ }
+ elementTypes.push_back(elementType);
+
+ // Parse the optional: `,`
+ } while (succeeded(parser.parseOptionalComma()));
+
+ // Parse: `>`
+ if (parser.parseGreater())
+ return Type();
+ return StructType::get(elementTypes);
+}
+```
+
+#### Printing
+
+An implementation of the printer is shown below:
+
+```c++
+/// Print an instance of a type registered to the toy dialect.
+void ToyDialect::printType(mlir::Type type,
+ mlir::DialectAsmPrinter &printer) const {
+ // Currently the only toy type is a struct type.
+ StructType structType = type.cast<StructType>();
+
+ // Print the struct type according to the parser format.
+ printer << "struct<";
+ mlir::interleaveComma(structType.getElementTypes(), printer);
+ printer << '>';
+}
+```
+
+Before moving on, let's look at a quick example showcasing the functionality
+we have now:
+
+```toy
+struct Struct {
+ var a;
+ var b;
+}
+
+def multiply_transpose(Struct value) {
+}
+```
+
+Which generates the following:
+
+```mlir
+module {
+ func @multiply_transpose(%arg0: !toy.struct<tensor<*xf64>, tensor<*xf64>>) {
+ "toy.return"() : () -> ()
+ }
+}
+```
+
+### Operating on `StructType`
+
+Now that the `struct` type has been defined, we can round-trip it through
+the IR. The next step is to add support for using it within our operations.
+
+#### Updating Existing Operations
+
+A few of our existing operations will need to be updated to handle `StructType`.
+The first step is to make the ODS framework aware of our Type so that we can use
+it in the operation definitions. A simple example is shown below:
+
+```tablegen
+// Provide a definition for the Toy StructType for use in ODS. This allows for
+// using StructType in a similar way to Tensor or MemRef.
+def Toy_StructType :
+ Type<CPred<"$_self.isa<StructType>()">, "Toy struct type">;
+
+// Provide a definition of the types that are used within the Toy dialect.
+def Toy_Type : AnyTypeOf<[F64Tensor, Toy_StructType]>;
+```
+
+We can then update our operations, e.g. `ReturnOp`, to also accept the
+`Toy_StructType`:
+
+```tablegen
+def ReturnOp : Toy_Op<"return", [Terminator, HasParent<"FuncOp">]> {
+ ...
+ let arguments = (ins Variadic<Toy_Type>:$input);
+ ...
+}
+```
+
+#### Adding New `Toy` Operations
+
+In addition to the existing operations, we will be adding a few new operations
+that will provide more specific handling of `structs`.
+
+##### `toy.struct_constant`
+
+This new operation materializes a constant value for a struct. In our current
+modeling, we just use an [array attribute](../../LangRef.md#array-attribute)
+that contains a set of constant values for each of the `struct` elements.
+
+```mlir
+ %0 = "toy.struct_constant"() {
+ value = [dense<[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]> : tensor<2x3xf64>]
+ } : () -> !toy.struct<tensor<*xf64>>
+```
+
+##### `toy.struct_access`
+
+This new operation materializes the Nth element of a `struct` value.
+
+```mlir
+ %0 = "toy.struct_constant"() {
+ value = [dense<[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]> : tensor<2x3xf64>]
+ } : () -> !toy.struct<tensor<*xf64>>
+ %1 = "toy.struct_access"(%0) {index = 0 : i64} : (!toy.struct<tensor<*xf64>>) -> tensor<*xf64>
+```
+
+With these operations, we can revisit our original example:
+
+```toy
+struct Struct {
+ var a;
+ var b;
+}
+
+# User-defined generic functions may operate on struct types as well.
+def multiply_transpose(Struct value) {
+ # We can access the elements of a struct via the '.' operator.
+ return transpose(value.a) * transpose(value.b);
+}
+
+def main() {
+ # We initialize struct values using a composite initializer.
+ Struct value = {[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]]};
+
+ # We pass these arguments to functions like we do with variables.
+ var c = multiply_transpose(value);
+ print(c);
+}
+```
+
+and finally get a full MLIR module:
+
+```mlir
+module {
+ func @multiply_transpose(%arg0: !toy.struct<tensor<*xf64>, tensor<*xf64>>) -> tensor<*xf64> {
+ %0 = "toy.struct_access"(%arg0) {index = 0 : i64} : (!toy.struct<tensor<*xf64>, tensor<*xf64>>) -> tensor<*xf64>
+ %1 = "toy.transpose"(%0) : (tensor<*xf64>) -> tensor<*xf64>
+ %2 = "toy.struct_access"(%arg0) {index = 1 : i64} : (!toy.struct<tensor<*xf64>, tensor<*xf64>>) -> tensor<*xf64>
+ %3 = "toy.transpose"(%2) : (tensor<*xf64>) -> tensor<*xf64>
+ %4 = "toy.mul"(%1, %3) : (tensor<*xf64>, tensor<*xf64>) -> tensor<*xf64>
+ "toy.return"(%4) : (tensor<*xf64>) -> ()
+ }
+ func @main() {
+ %0 = "toy.struct_constant"() {value = [dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>, dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>]} : () -> !toy.struct<tensor<*xf64>, tensor<*xf64>>
+ %1 = "toy.generic_call"(%0) {callee = @multiply_transpose} : (!toy.struct<tensor<*xf64>, tensor<*xf64>>) -> tensor<*xf64>
+ "toy.print"(%1) : (tensor<*xf64>) -> ()
+ "toy.return"() : () -> ()
+ }
+}
+```
+
+#### Optimizing Operations on `StructType`
+
+Now that we have a few operations operating on `StructType`, we also have many
+new constant folding opportunities.
+
+After inlining, the MLIR module in the previous section looks something like:
+
+```mlir
+module {
+ func @main() {
+ %0 = "toy.struct_constant"() {value = [dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>, dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>]} : () -> !toy.struct<tensor<*xf64>, tensor<*xf64>>
+ %1 = "toy.struct_access"(%0) {index = 0 : i64} : (!toy.struct<tensor<*xf64>, tensor<*xf64>>) -> tensor<*xf64>
+ %2 = "toy.transpose"(%1) : (tensor<*xf64>) -> tensor<*xf64>
+ %3 = "toy.struct_access"(%0) {index = 1 : i64} : (!toy.struct<tensor<*xf64>, tensor<*xf64>>) -> tensor<*xf64>
+ %4 = "toy.transpose"(%3) : (tensor<*xf64>) -> tensor<*xf64>
+ %5 = "toy.mul"(%2, %4) : (tensor<*xf64>, tensor<*xf64>) -> tensor<*xf64>
+ "toy.print"(%5) : (tensor<*xf64>) -> ()
+ "toy.return"() : () -> ()
+ }
+}
+```
+
+We have several `toy.struct_access` operations that access into a
+`toy.struct_constant`. As detailed in [chapter 3](Ch-3.md), we can add folders
+for these `toy` operations by setting the `hasFolder` bit on the operation
+definition and providing a definition of the `*Op::fold` method.
+
+```c++
+/// Fold constants.
+OpFoldResult ConstantOp::fold(ArrayRef<Attribute> operands) { return value(); }
+
+/// Fold struct constants.
+OpFoldResult StructConstantOp::fold(ArrayRef<Attribute> operands) {
+ return value();
+}
+
+/// Fold simple struct access operations that access into a constant.
+OpFoldResult StructAccessOp::fold(ArrayRef<Attribute> operands) {
+ auto structAttr = operands.front().dyn_cast_or_null<mlir::ArrayAttr>();
+ if (!structAttr)
+ return nullptr;
+
+ size_t elementIndex = index().getZExtValue();
+ return structAttr.getValue()[elementIndex];
+}
+```
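
The shape of the struct-access fold can be pictured with a small standalone C++ sketch. The `Attribute`, `DenseAttr`, and `ArrayAttr` classes below are simplified stand-ins invented for illustration, not the real MLIR attribute classes: if the struct operand has folded to a constant array, the access folds to the indexed element; otherwise folding fails by returning null.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical stand-ins for MLIR attributes, for illustration only.
struct Attribute {
  virtual ~Attribute() = default;
};
struct DenseAttr : Attribute {
  double value; // a scalar instead of a full tensor, to keep the sketch small
  explicit DenseAttr(double v) : value(v) {}
};
struct ArrayAttr : Attribute {
  std::vector<std::shared_ptr<Attribute>> elements;
};

// Analogue of StructAccessOp::fold: if the struct operand folded to a
// constant ArrayAttr, the access folds to the element at `index`;
// otherwise there is nothing to fold and we return null.
std::shared_ptr<Attribute> foldStructAccess(
    const std::shared_ptr<Attribute> &operand, size_t index) {
  auto *arr = dynamic_cast<ArrayAttr *>(operand.get());
  if (!arr)
    return nullptr; // operand is not a constant struct
  return arr->elements[index];
}
```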
+
+To ensure that MLIR generates the proper constant operations when folding our
+`Toy` operations, i.e. `ConstantOp` for `TensorType` and `StructConstantOp` for
+`StructType`, we will need to provide an override for the dialect hook
+`materializeConstant`. This allows for generic MLIR operations to create
+constants for the `Toy` dialect when necessary.
+
+```c++
+mlir::Operation *ToyDialect::materializeConstant(mlir::OpBuilder &builder,
+ mlir::Attribute value,
+ mlir::Type type,
+ mlir::Location loc) {
+ if (type.isa<StructType>())
+ return builder.create<StructConstantOp>(loc, type,
+ value.cast<mlir::ArrayAttr>());
+ return builder.create<ConstantOp>(loc, type,
+ value.cast<mlir::DenseElementsAttr>());
+}
+```
+
+With this, we can now generate code that can be lowered to LLVM without any
+changes to our pipeline.
+
+```mlir
+module {
+ func @main() {
+ %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
+ %1 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>
+ %2 = "toy.mul"(%1, %1) : (tensor<3x2xf64>, tensor<3x2xf64>) -> tensor<3x2xf64>
+ "toy.print"(%2) : (tensor<3x2xf64>) -> ()
+ "toy.return"() : () -> ()
+ }
+}
+```
+
+You can build `toyc-ch7` and try it yourself: `toyc-ch7
+test/Examples/Toy/Ch7/struct-codegen.toy -emit=mlir`. More details on defining
+custom types can be found in
+[DefiningAttributesAndTypes](../../DefiningAttributesAndTypes.md).
diff --git a/mlir/docs/UsageOfConst.md b/mlir/docs/UsageOfConst.md
new file mode 100644
index 00000000000..6e8ce78e960
--- /dev/null
+++ b/mlir/docs/UsageOfConst.md
@@ -0,0 +1,272 @@
+# Usage of 'Const' in MLIR, for core IR types
+
+aka, where'd `const` go?
+
+The MLIR data structures that represent the IR itself (Instruction, Block, etc)
+form a graph-based data structure, and the compiler analyses and passes
+frequently walk this graph (e.g. traversing from defs to users). The early
+design of MLIR adopted the `const` model of LLVM, which is familiar and well
+understood (even though the LLVM implementation is flawed in many ways).
+
+The design team has since decided to change to a different model, which eschews
+`const` entirely for the core IR types: you should never see a `const` method on
+`Operation`, should never see the type `const Value`, and you shouldn't feel bad
+about this. That said, you *should* use `const` for non-IR types, like
+`SmallVector`'s and many other things.
+
+The document below explains this design point from the viewpoint of "why make a
+change", to explain the rationale and the tradeoffs involved that led us to this
+potentially controversial design point.
+
+Bjarke Roune summarized the situation like this:
+
+> In my opinion `const` correctness is highly valuable, catching many bugs and
+> making it clear in a code base where the mutations happen. In my opinion
+> `const` correctness still isn't worth it in particular for IR elements because
+> of the special uses and properties of IRs, in particular that it is common to
+> transfer a pointer/reference to an instruction from an analysis to an
+> optimization which will change the instruction. The analysis should be const,
+> the optimization needs to get a non-`const` pointer. So all analyses either
+> end up being templates (and if they never get instantiated in a const context,
+> then the point of `const` correctness has been defeated), you need to somehow
+> launder the const in a safe way or there will be `const_cast`s. These options
+> are all bad, probably so bad as to out-weigh the benefits of const.
+
+# Reconsidering `const` in MLIR
+
+This document argues that this design introduces significant sub-optimalities
+into the MLIR codebase, that its cost/benefit tradeoff is poor, and proposes
+switching to a much simpler approach: eliminating the use of const on these IR
+types entirely.
+
+**Note:** **This document is only discussing things like `const Value` and
+`const Operation*`. There is no proposed change for other types, e.g.
+`SmallVector` references, the immutable types like `Attribute`, etc.**
+
+## Background: The LLVM Const Model
+
+The LLVM and MLIR data structures provide the IR data structures (like
+`mlir::Operation`s and their users) as a structured cyclic graph data structure.
+Clients of the IR typically walk up and down the graph, perform dynamic down
+casting (of various sorts) to check for patterns, and use some high-abstraction
+pattern matching and binding facilities to do their work.
+
+The basic idea of LLVM's design is that these traversals of the IR should
+preserve the const'ness of a pointer: if you have a const pointer to an
+instruction and ask for its parent (or operand, users, etc), you should get a
+const pointer to the block containing the instruction (or value defining the
+operand, instruction using the instruction, etc). The instruction class looks
+like this:
+
+```c++
+namespace llvm {
+class Instruction : ... {
+ BasicBlock *Parent;
+public:
+ // A const instruction returns a const parent pointer.
+ inline const BasicBlock *getParent() const { return Parent; }
+ // A non-const instruction returns a non-const parent pointer.
+ inline BasicBlock *getParent() { return Parent; }
+…
+};
+}
+```
+
+The rationale for this design is that it would be const-incorrect to return a
+non-const pointer from getParent, because you could then walk the block to find
+the instruction again and get non-const references to the same instruction - all
+without a `const_cast`.
+
+This const model is simple and the C++ type system generally supports it through
+code duplication of methods. That said, LLVM is actually inconsistent and buggy
+about this. Even the core classes have bugs: `llvm::Instruction::getOperand()`
+isn't currently const correct! There are other subsystems (e.g. the
+`llvm/IR/PatternMatch.h` APIs) where you can perform a pattern match on a const
+IR object and bind a non-const IR object.
+
+LLVM is a mature technology with hundreds of people working on it. The fact that
+it still isn't correctly following the const model it set out for strongly hints
+that one of: 1) The design is too complicated to be practical, 2) the benefits
+of the model aren't worth the cost of the complexity, or 3) both 1 and 2,
+together in some combination.
+
+## Advantages of Const-correctness in MLIR
+
+Even though this doc argues for eliminating const from MLIR, it is important to
+evaluate that as a tradeoff with the advantages the const model provides,
+allowing us to do a cost/benefit tradeoff. These are the benefits we see:
+
+The major advantage of allowing const on MLIR types is as a marker in APIs that
+indicate that the function will not modify the specified values. For example,
+the dominator APIs have a `dominates(const Block*, const Block*)` method, and
+the consts provide a way of indicating that the call won't modify the blocks
+passed in - similarly predicates like `Instruction::isTerminator() const` do not
+modify the receiver object.
+
+It is also an advantage that MLIR follows the generally prevailing pattern of
+C++ code, which generally uses const. Consistency with the community norm is
+important.
+
+## Costs of Const-correctness in MLIR
+
+As mentioned above, early work on MLIR adopted the same design as LLVM intended,
+allowing const-correct traversals in the APIs. Here we discuss the various costs
+of doing this by looking at some examples, listed in roughly increasing order of
+severity.
+
+### Pervasively duplicated accessors
+
+Just as the getParent() example above shows, achieving this const model requires
+that all of the graph traversal accessors be duplicated into const and non-const
+versions. This causes API bloat and slows compile time, but these are minor
+problems.
+
+The more significant issue is that the duplication can become so pervasive that
+the signal disappears in the noise: for example, `mlir::Operation` ends up with
+APIs like the following, which double the surface area just to satisfy const.
+
+```c++
+ operand_iterator operand_begin();
+ operand_iterator operand_end();
+
+ /// Returns an iterator on the underlying Value's (Value *).
+ operand_range getOperands();
+
+ // Support const operand iteration.
+ using const_operand_iterator =
+ OperandIterator<const Operation, const Value>;
+ using const_operand_range = llvm::iterator_range<const_operand_iterator>;
+
+ const_operand_iterator operand_begin() const;
+ const_operand_iterator operand_end() const;
+
+ /// Returns a const iterator on the underlying Value's (Value *).
+ llvm::iterator_range<const_operand_iterator> getOperands() const;
+
+ ArrayRef<OpOperand> getOpOperands() const {
+ return getOperandStorage().getOperands();
+ }
+ MutableArrayRef<OpOperand> getOpOperands() {
+ return getOperandStorage().getOperands();
+ }
+
+ OpOperand &getOpOperand(unsigned idx) { return getOpOperands()[idx]; }
+ const OpOperand &getOpOperand(unsigned idx) const {
+ return getOpOperands()[idx];
+ }
+
+```
+
+### Templated accessors
+
+A related issue is that having to provide both const and non-const versions of
+accessors leads to us having to turn more code into templates than would
+otherwise be desirable. Things like `ResultIterator` and `ResultTypeIterator`
+are templates *_only_* because they are generic over const and non-const
+versions of types. This leads to them being defined inline in headers (instead
+of in .cpp files).
+
+Thus, our const model is leading to more code in headers and more complexity in
+the implementation.
+
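The pattern can be illustrated with a minimal standalone sketch (the `Node` type and iterator below are invented for illustration, not MLIR's real classes): the only way a single iterator implementation can serve both `const` and non-const traversals is to be templated over the possibly-const element type, which in turn forces its definition into a header.

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for an IR node, for illustration only.
struct Node {
  int id;
};

// To serve both const and non-const traversals, the iterator must be a
// template over the (possibly const-qualified) element type -- exactly the
// pattern that pushed ResultIterator and friends into headers.
template <typename NodeT>
class NodeIterator {
  NodeT *ptr;

public:
  explicit NodeIterator(NodeT *p) : ptr(p) {}
  NodeT &operator*() const { return *ptr; }
  NodeIterator &operator++() {
    ++ptr;
    return *this;
  }
  bool operator!=(const NodeIterator &rhs) const { return ptr != rhs.ptr; }
};

// Two instantiations of the same template, one per const-ness.
using iterator = NodeIterator<Node>;
using const_iterator = NodeIterator<const Node>;
```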
+### Const incorrect in practice
+
+For some things, const is more trouble than it is worth, so they never get
+updated.
+
+This means that certain APIs in practice don't provide a const variant, leading
+to pervasive use of `const_cast` to drop the const qualifier. For example, the
+logic in `Matchers.h` doesn't support const pointers at all (b/123355851), even
+though matching and binding values themselves makes perfect sense for both const
+and non-const values. Actually fixing this would cause massive code bloat and
+complexity.
+
+Other parts of the code are just outright incorrect. For example, the operation
+cloning methods are defined on Operation like this:
+
+```c++
+Operation *clone(BlockAndValueMapping &mapper, MLIRContext *context) const;
+
+Operation *clone(MLIRContext *context) const;
+```
+
+While it makes sense for a clone method to be `const` conceptually (the original
+operation isn't modified), this is a violation of the model: the returned
+operation must be mutable, and it provides access to the same graph of operands
+as the original operation, violating the graph-based const model we were
+shooting for.
+
+### The `OpPointer` and `ConstOpPointer` Classes
+
+The "typed operation" classes for registered operations (e.g. like `DimOp` for
+the "std.dim" operation in standard ops) contain a pointer to an operation and
+provide typed APIs for processing it.
+
+However, this is a problem for our current `const` design - `const DimOp` means
+the pointer itself is immutable, not the pointee. The current solution for this
+is the `OpPointer<>` and `ConstOpPointer<>` classes, which exist solely to
+provide const correctness when referring to a typed operation. Instead of
+referring to `DimOp` directly, we need to use `OpPointer<DimOp>` and
+`ConstOpPointer<DimOp>` to preserve this constness.
+
+While `auto` hides many instances of these `OpPointer` classes, their presence
+leads to extremely ugly APIs. It also obscures the fact that the user does not
+have a direct `DimOp` object, creating easy pitfalls with subtly incorrect
+semantics:
+
+```c++
+// OpPointer encodes unnecessary and superfluous information into the API.
+SmallVector<OpPointer<AffineForOp>, 8> stripmineSink(
+ OpPointer<AffineForOp> forOp, uint64_t factor,
+ ArrayRef<OpPointer<AffineForOp>> targets);
+// Compared to the much cleaner and easier to read...
+SmallVector<AffineForOp, 8> stripmineSink(AffineForOp forOp, uint64_t factor,
+ ArrayRef<AffineForOp> targets);
+
+// OpPointer is easy to misuse.
+if (auto *dimOp = inst->dyn_cast<DimOp>()) {
+ // This is actually undefined behavior because dyn_cast actually returns
+ // OpPointer<DimOp>. OpPointer<DimOp> happily implicitly converts to DimOp *
+ // creating undefined behavior that will execute correctly most of the time.
+}
+```
+
+It would be much better to eliminate them entirely, and just pass around `DimOp`
+directly. For example, instead of:
+
+```c++
+LogicalResult mlir::getIndexSet(MutableArrayRef<OpPointer<AffineForOp>> forOps,
+ FlatAffineConstraints *domain) {
+
+```
+
+It would be a lot nicer to just have:
+
+```c++
+LogicalResult mlir::getIndexSet(MutableArrayRef<AffineForOp> forOps,
+ FlatAffineConstraints *domain) {
+```
+
+Particularly since all of the `FooOp` classes are already semantically a smart
+pointer to their underlying operation.
+
+## Proposal: Remove `const` from IR objects
+
+As we can see above, our const design carries significant cost and very little
+benefit, particularly given that the primary purpose of an IR is to represent
+transformations of code.
+
+As such, we propose eliminating support for const references in MLIR. This
+implies the following changes to the codebase:
+
+1. All of the const-duplicated accessors would be eliminated, e.g.
+ `Operation::getParent() const` would be removed. This is expected to remove
+ approximately 130 lines of code from just Operation.h alone.
+1. Const-only predicates would be changed to be non-const, e.g.
+ `Operation::isTerminator() const` would have the const removed.
+1. Iterators and other types and functions that are templated to support
+ `const` can have those template arguments removed.
+1. Types like `OpPointer` and `ConstOpPointer` that exist solely to propagate
+ const can be entirely removed from the codebase.
+1. We can close bugs complaining about const incorrectness in the IR.
diff --git a/mlir/docs/WritingAPass.md b/mlir/docs/WritingAPass.md
new file mode 100644
index 00000000000..5119c469e20
--- /dev/null
+++ b/mlir/docs/WritingAPass.md
@@ -0,0 +1,835 @@
+# Writing a Pass
+
+[TOC]
+
+Passes represent the basic infrastructure for transformation and optimization.
+This document provides a quickstart to the pass infrastructure in MLIR and how
+to use it.
+
+See [MLIR specification](LangRef.md) for more information about MLIR and its
+core aspects, such as the IR structure and operations.
+
+See [MLIR Rewrites](QuickstartRewrites.md) for a quick start on graph rewriting
+in MLIR. If your transformation involves pattern matching operation DAGs, this
+is a great place to start.
+
+## Operation Pass
+
+In MLIR, the main unit of abstraction and transformation is an
+[operation](LangRef.md#operations). As such, the pass manager is designed to
+work on instances of operations at different levels of nesting. The structure of
+the [pass manager](#pass-manager), and the concept of nesting, is detailed
+further below. All passes in MLIR derive from `OperationPass` and adhere to the
+following restrictions; any noncompliance will lead to problematic behavior in
+multithreaded and other advanced scenarios:
+
+* Modify anything within the parent block/region/operation/etc, outside of the
+ current operation being operated on. This includes adding or removing
+ operations from the parent block.
+* Maintain pass state across invocations of `runOnOperation`. A pass may be
+ run on several different operations with no guarantee of execution order.
+ * When multithreading, a specific pass instance may not even execute on
+ all operations within the module. As such, a pass should not rely on
+ running on all operations.
+* Modify the state of another operation not nested within the current
+ operation being operated on.
+ * Other threads may be operating on different operations within the module
+ simultaneously.
+* Maintain any global mutable state, e.g. static variables within the source
+ file. All mutable state should be maintained by an instance of the pass.
+* Must be copy-constructible: multiple instances of the pass may be created by
+ the pass manager to process operations in parallel.
+* Inspect the IR of sibling operations. Other threads may be modifying these
+ operations in parallel.
+
+When creating an operation pass, there are two different types to choose from
+depending on the usage scenario:
+
+### OperationPass : Op-Specific
+
+An `op-specific` operation pass operates explicitly on a given operation type.
+This operation type must adhere to the restrictions set by the pass manager for
+pass execution.
+
+To define an op-specific operation pass, a derived class must adhere to the
+following:
+
+* Inherit from the CRTP class `OperationPass` and provide the operation type
+ as an additional template parameter.
+* Override the virtual `void runOnOperation()` method.
+
+A simple pass may look like:
+
+```c++
+namespace {
+struct MyFunctionPass : public OperationPass<MyFunctionPass, FuncOp> {
+ void runOnOperation() override {
+ // Get the current FuncOp operation being operated on.
+ FuncOp f = getOperation();
+
+ // Walk the operations within the function.
+ f.walk([](Operation *inst) {
+ ....
+ });
+ }
+};
+} // end anonymous namespace
+
+// Register this pass to make it accessible to utilities like mlir-opt.
+// (Pass registration is discussed more below)
+static PassRegistration<MyFunctionPass> pass(
+ "flag-name-to-invoke-pass-via-mlir-opt", "Pass description here");
+```
+
+### OperationPass : Op-Agnostic
+
+An `op-agnostic` pass operates on the operation type of the pass manager that it
+is added to. This means that a pass that operates on several different operation
+types in the same way only needs one implementation.
+
+To define an op-agnostic operation pass, a derived class must adhere to the
+following:
+
+* Inherit from the CRTP class `OperationPass`.
+* Override the virtual `void runOnOperation()` method.
+
+A simple pass may look like:
+
+```c++
+struct MyOperationPass : public OperationPass<MyOperationPass> {
+ void runOnOperation() override {
+ // Get the current operation being operated on.
+ Operation *op = getOperation();
+ ...
+ }
+};
+```
+
+## Analysis Management
+
+An important concept, along with transformation passes, are analyses. These are
+conceptually similar to transformation passes, except that they compute
+information on a specific operation without modifying it. In MLIR, analyses are
+not passes but free-standing classes that are computed lazily on-demand and
+cached to avoid unnecessary recomputation. An analysis in MLIR must adhere to
+the following:
+
+* Provide a valid constructor taking an `Operation*`.
+* Must not modify the given operation.
+
+An analysis may provide additional hooks to control various behavior:
+
+* `bool isInvalidated(const AnalysisManager::PreservedAnalyses &)`
+
+Given a preserved analysis set, the analysis returns true if it should truly be
+invalidated. This allows for more fine-tuned invalidation in cases where an
+analysis wasn't explicitly marked preserved, but may be preserved (or
+invalidated) based upon other properties, such as the preservation of other
+analyses.
+
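The lazy, cached analysis model can be sketched in a few lines of standalone C++ (the `AnalysisMap` and `Operation` types below are simplified stand-ins, not MLIR's real implementation): an analysis is constructed from an `Operation*` on first query, served from a cache afterwards, and recomputed only after invalidation.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <typeindex>
#include <typeinfo>

// Toy stand-ins, for illustration only: a real AnalysisManager keys analyses
// by type and operation, and also consults isInvalidated() hooks.
struct Operation {
  int dummy;
};

class AnalysisMap {
  Operation *op;
  std::map<std::type_index, std::shared_ptr<void>> cache;

public:
  explicit AnalysisMap(Operation *op) : op(op) {}

  // Lazily construct AnalysisT(Operation*) on first request, then cache it.
  template <typename AnalysisT>
  AnalysisT &getAnalysis() {
    auto key = std::type_index(typeid(AnalysisT));
    auto it = cache.find(key);
    if (it == cache.end())
      it = cache.emplace(key, std::make_shared<AnalysisT>(op)).first;
    return *static_cast<AnalysisT *>(it->second.get());
  }

  // Passes that transform the IR invalidate everything not marked preserved.
  void invalidateAll() { cache.clear(); }
};

// An analysis that counts how often it is constructed, to observe caching.
struct CountingAnalysis {
  static int constructions;
  explicit CountingAnalysis(Operation *) { ++constructions; }
};
int CountingAnalysis::constructions = 0;
```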
+### Querying Analyses
+
+The base `OperationPass` class provides utilities for querying and preserving
+analyses for the current operation being processed.
+
+* OperationPass automatically provides the following utilities for querying
+ analyses:
+ * `getAnalysis<>`
+ - Get an analysis for the current operation, constructing it if
+ necessary.
+ * `getCachedAnalysis<>`
+ - Get an analysis for the current operation, if it already exists.
+ * `getCachedParentAnalysis<>`
+ - Get an analysis for a given parent operation, if it exists.
+ * `getCachedChildAnalysis<>`
+ - Get an analysis for a given child operation, if it exists.
+ * `getChildAnalysis<>`
+ - Get an analysis for a given child operation, constructing it if
+ necessary.
+
+Using the example passes defined above, let's see some examples:
+
+```c++
+/// An interesting analysis.
+struct MyOperationAnalysis {
+ // Compute this analysis with the provided operation.
+ MyOperationAnalysis(Operation *op);
+};
+
+void MyOperationPass::runOnOperation() {
+ // Query MyOperationAnalysis for the current operation.
+ MyOperationAnalysis &myAnalysis = getAnalysis<MyOperationAnalysis>();
+
+ // Query a cached instance of MyOperationAnalysis for the current operation.
+ // It will not be computed if it doesn't exist.
+ auto optionalAnalysis = getCachedAnalysis<MyOperationAnalysis>();
+ if (optionalAnalysis)
+ ...
+
+ // Query a cached instance of MyOperationAnalysis for the parent operation of
+ // the current operation. It will not be computed if it doesn't exist.
+ auto optionalParentAnalysis = getCachedParentAnalysis<MyOperationAnalysis>();
+ if (optionalParentAnalysis)
+ ...
+}
+```
+
+### Preserving Analyses
+
+Analyses that are constructed after being queried by a pass are cached to avoid
+unnecessary computation if they are requested again later. To avoid stale
+analyses, all analyses are assumed to be invalidated by a pass. To avoid
+invalidation, a pass must specifically mark analyses that are known to be
+preserved.
+
+* All Pass classes automatically provide the following utilities for
+ preserving analyses:
+ * `markAllAnalysesPreserved`
+ * `markAnalysesPreserved<>`
+
+```c++
+void MyOperationPass::runOnOperation() {
+ // Mark all analyses as preserved. This is useful if a pass can guarantee
+ // that no transformation was performed.
+ markAllAnalysesPreserved();
+
+ // Mark specific analyses as preserved. This is used if some transformation
+ // was performed, but some analyses were either unaffected or explicitly
+ // preserved.
+ markAnalysesPreserved<MyAnalysis, MyAnalyses...>();
+}
+```
+
+## Pass Failure
+
+Passes in MLIR are allowed to gracefully fail. This may happen if some invariant
+of the pass was broken, potentially leaving the IR in some invalid state. If
+such a situation occurs, the pass can directly signal a failure to the pass
+manager. If a pass signals a failure while executing, no further passes in the
+pipeline will execute, and `PassManager::run` will return failure. Failure
+signaling is provided in the form of a `signalPassFailure` method.
+
+```c++
+void MyPass::runOnOperation() {
+ // Signal failure on a broken invariant.
+ if (some_broken_invariant) {
+ signalPassFailure();
+ return;
+ }
+}
+```
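
The stop-on-failure semantics can be sketched with a small standalone example (the names below are hypothetical, not MLIR's real classes): the pipeline runs passes in order and bails out as soon as one signals failure.

```cpp
#include <cassert>
#include <functional>
#include <vector>

enum class LogicalResult { Success, Failure };
inline bool failed(LogicalResult r) { return r == LogicalResult::Failure; }

// Hypothetical sketch of PassManager::run semantics: execute passes in
// order and stop as soon as one signals failure.
LogicalResult
runPipeline(const std::vector<std::function<LogicalResult()>> &passes) {
  for (const auto &pass : passes)
    if (failed(pass()))
      return LogicalResult::Failure; // later passes never execute
  return LogicalResult::Success;
}
```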
+
+## Pass Manager
+
+Above we introduced the different types of passes and their constraints. Now
+that we have our pass we need to be able to run it over a specific module. This
+is where the pass manager comes into play. The `PassManager` class is used to
+configure and run a pipeline. The `OpPassManager` class is used to schedule
+passes to run at a specific level of nesting.
+
+### OpPassManager
+
+An `OpPassManager` is essentially a collection of passes to execute on an
+operation of a given type. This operation type must adhere to the following
+requirement:
+
+* Must be registered and marked `IsolatedFromAbove`.
+
+ * Passes are expected to not modify operations at or above the current
+ operation being processed. If the operation is not isolated, it may
+ inadvertently modify the use-list of an operation it is not supposed to
+ modify.
+
+Passes can be added to a pass manager via `addPass`. The pass must either be an
+`op-specific` pass operating on the same operation type as `OpPassManager`, or
+an `op-agnostic` pass.
+
+An `OpPassManager` cannot be created directly, but must be explicitly nested
+within another `OpPassManager` via the `nest<>` method. This method takes the
+operation type that the nested pass manager will operate on. At the top level, a
+`PassManager` acts as an `OpPassManager` that operates on the
+[`module`](LangRef.md#module) operation. Nesting in this sense corresponds to
+the structural nesting within [Regions](LangRef.md#regions) of the IR.
+
+For example, the following `.mlir`:
+
+```
+module {
+ spv.module "Logical" "GLSL450" {
+ func @foo() {
+ ...
+ }
+ }
+}
+```
+
+Has the nesting structure of:
+
+```
+`module`
+ `spv.module`
+ `function`
+```
+
+Below is an example of constructing a pipeline that operates on the above
+structure:
+
+```c++
+PassManager pm(ctx);
+
+// Add a pass on the top-level module operation.
+pm.addPass(std::make_unique<MyModulePass>());
+
+// Nest a pass manager that operates on spirv module operations nested directly
+// under the top-level module.
+OpPassManager &nestedModulePM = pm.nest<spirv::ModuleOp>();
+nestedModulePM.addPass(std::make_unique<MySPIRVModulePass>());
+
+// Nest a pass manager that operates on functions within the nested SPIRV
+// module.
+OpPassManager &nestedFunctionPM = nestedModulePM.nest<FuncOp>();
+nestedFunctionPM.addPass(std::make_unique<MyFunctionPass>());
+
+// Run the pass manager on the top-level module.
+ModuleOp m = ...;
+if (failed(pm.run(m)))
+ ... // One of the passes signaled a failure.
+```
+
+The above pass manager would contain the following pipeline structure:
+
+```
+OpPassManager<ModuleOp>
+ MyModulePass
+ OpPassManager<spirv::ModuleOp>
+ MySPIRVModulePass
+ OpPassManager<FuncOp>
+ MyFunctionPass
+```
+
+These pipelines are then run over a single operation at a time. This means that,
+for example, given a series of consecutive passes on FuncOp, the pass manager
+will execute all passes on the first function, then all passes on the second
+function, etc., until the entire program has been run through the passes. This
+provides several benefits:
+
+* This improves the cache behavior of the compiler, because it is only
+ touching a single function at a time, instead of traversing the entire
+ program.
+* This improves multi-threading performance by reducing the number of jobs
+ that need to be scheduled, as well as increasing the efficiency of each job.
+ An entire function pipeline can be run on each function asynchronously.
+
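The scheduling described above can be sketched as a pair of loops (a simplified standalone model, not the real implementation): the operations form the outer loop and the pipeline forms the inner loop, so the whole pipeline finishes on one function before the next function is touched.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of per-operation pipeline scheduling: run the entire pipeline on
// each operation in turn, rather than sweeping one pass over all operations.
std::vector<std::string> schedule(const std::vector<std::string> &funcs,
                                  const std::vector<std::string> &passes) {
  std::vector<std::string> trace;
  for (const auto &f : funcs)    // outer loop: the operations
    for (const auto &p : passes) // inner loop: the pipeline
      trace.push_back(p + "(" + f + ")");
  return trace;
}
```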
+## Pass Registration
+
+Briefly shown in the example definitions of the various pass types is the
+`PassRegistration` class. This is a utility to register derived pass classes so
+that they may be created, and inspected, by utilities like mlir-opt. Registering
+a pass class takes the form:
+
+```c++
+static PassRegistration<MyPass> pass("command-line-arg", "description");
+```
+
+* `MyPass` is the name of the derived pass class.
+* "command-line-arg" is the argument to use on the command line to invoke the
+ pass from `mlir-opt`.
+* "description" is a description of the pass.
+
+For passes that cannot be default-constructed, `PassRegistration` accepts an
+optional third argument that takes a callback to create the pass:
+
+```c++
+static PassRegistration<MyParametricPass> pass(
+ "command-line-arg", "description",
+ []() -> std::unique_ptr<Pass> {
+ std::unique_ptr<Pass> p = std::make_unique<MyParametricPass>(/*options*/);
+ /*... non-trivial-logic to configure the pass ...*/;
+ return p;
+ });
+```
+
+This variant of registration can be used, for example, to accept the
+configuration of a pass from command-line arguments and pass it over to the pass
+constructor. Make sure that the pass is copy-constructible in a way that does
+not share data as the [pass manager](#pass-manager) may create copies of the
+pass to run in parallel.
+
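A minimal sketch of copy-safe pass state (the pass below is hypothetical, not a real MLIR pass): configuration and scratch state are held by value, so each copy the pass manager makes operates independently instead of sharing mutable data.

```cpp
#include <cassert>

// Hypothetical sketch: a pass that keeps its configuration and scratch state
// by value is safe to copy; each clone the pass manager makes for a parallel
// worker gets independent state (never static or global).
struct MyParametricPass {
  int threshold;      // configuration captured at construction
  int localCount = 0; // per-instance scratch state
  explicit MyParametricPass(int threshold) : threshold(threshold) {}
  void runOnOperation() { ++localCount; }
};
```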
+### Pass Pipeline Registration
+
+Described above is the mechanism used for registering a specific derived pass
+class. On top of that, MLIR allows for registering custom pass pipelines in a
+similar fashion. This allows for custom pipelines to be available to tools like
+mlir-opt in the same way that passes are, which is useful for encapsulating
+common pipelines like the "-O1" series of passes. Pipelines are registered via a
+similar mechanism to passes in the form of `PassPipelineRegistration`. Compared
+to `PassRegistration`, this class takes an additional parameter in the form of a
+pipeline builder that modifies a provided `OpPassManager`.
+
+```c++
+void pipelineBuilder(OpPassManager &pm) {
+ pm.addPass(std::make_unique<MyPass>());
+ pm.addPass(std::make_unique<MyOtherPass>());
+}
+
+// Register an existing pipeline builder function.
+static PassPipelineRegistration<> pipeline(
+ "command-line-arg", "description", pipelineBuilder);
+
+// Register an inline pipeline builder.
+static PassPipelineRegistration<> pipeline(
+ "command-line-arg", "description", [](OpPassManager &pm) {
+ pm.addPass(std::make_unique<MyPass>());
+ pm.addPass(std::make_unique<MyOtherPass>());
+ });
+```
+
+Pipeline registration also allows for simplified registration of
+specializations for existing passes:
+
+```c++
+static PassPipelineRegistration<> foo10(
+    "foo-10", "Foo Pass 10", [](OpPassManager &pm) {
+      pm.addPass(std::make_unique<FooPass>(10));
+    });
+```
+
+### Textual Pass Pipeline Specification
+
+In the previous sections, we showed how to register passes and pass pipelines
+with a specific argument and description. Once registered, these can be used on
+the command line to configure a pass manager. The limitation of using these
+arguments directly is that they cannot build a nested pipeline. For example, if
+our module has another module nested underneath, with just `-my-module-pass`
+there is no way to specify that this pass should run on the nested module and
+not the top-level module. This is due to the flattened nature of the command
+line.
+
+To circumvent this limitation, MLIR also supports a textual description of a
+pass pipeline. This allows for explicitly specifying the structure of the
+pipeline to add to the pass manager. This includes the nesting structure, as
+well as the passes and pass pipelines to run. A textual pipeline is defined as a
+series of names, each of which may in itself recursively contain a nested
+pipeline description. The syntax for this specification is as follows:
+
+```ebnf
+pipeline ::= op-name `(` pipeline-element (`,` pipeline-element)* `)`
+pipeline-element ::= pipeline | (pass-name | pass-pipeline-name) options?
+options ::= '{' (key ('=' value)?)+ '}'
+```
+
+* `op-name`
+ * This corresponds to the mnemonic name of an operation to run passes on,
+ e.g. `func` or `module`.
+* `pass-name` | `pass-pipeline-name`
+ * This corresponds to the command-line argument of a registered pass or
+ pass pipeline, e.g. `cse` or `canonicalize`.
+* `options`
+ * Options are pass specific key value pairs that are handled as described
+ in the [instance specific pass options](#instance-specific-pass-options)
+ section.
+
+For example, the following pipeline:
+
+```shell
+$ mlir-opt foo.mlir -cse -canonicalize -convert-std-to-llvm
+```
+
+Can also be specified as (via the `-pass-pipeline` flag):
+
+```shell
+$ mlir-opt foo.mlir -pass-pipeline='func(cse, canonicalize), convert-std-to-llvm'
+```
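+For nested pipelines, the op names compose. As a sketch of the intended
+nesting, and assuming the registered `my-module-pass` mentioned above, running
+that pass only on modules nested underneath the top-level module could be
+written as:
+
+```shell
+$ mlir-opt foo.mlir -pass-pipeline='module(my-module-pass)'
+```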
+
+In order to support round-tripping your pass to the textual representation using
+`OpPassManager::printAsTextualPipeline(raw_ostream&)`, override
+`Pass::printAsTextualPipeline(raw_ostream&)` to format your pass name and
+options in the format described above.
+
+### Instance Specific Pass Options
+
+Options may be specified for a parametric pass. Individual options are defined
+using the [LLVM command line](https://llvm.org/docs/CommandLine.html) flag
+definition rules. These options will then be parsed at pass construction time
+independently for each instance of the pass. To provide options for passes, the
+`Option<>` and `ListOption<>` classes may be used:
+
+```c++
+struct MyPass ... {
+ /// Define a valid default constructor and copy constructor to ensure that
+ /// the options are initialized properly.
+ MyPass() = default;
+ MyPass(const MyPass& pass) {}
+
+ // These just forward onto llvm::cl::opt and llvm::cl::list respectively.
+ Option<int> exampleOption{*this, "flag-name", llvm::cl::desc("...")};
+ ListOption<int> exampleListOption{*this, "list-flag-name",
+ llvm::cl::desc("...")};
+};
+```
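+Once defined, instance specific options can be supplied per pass instance
+using the `options` syntax described above. For example, assuming a pass
+registered under the hypothetical argument `my-pass` with the `flag-name`
+option defined above:
+
+```shell
+$ mlir-opt foo.mlir -pass-pipeline='func(my-pass{flag-name=7})'
+```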
+
+For pass pipelines, the `PassPipelineRegistration` templates take an additional
+optional template parameter that is the Option struct definition to be used for
+that pipeline. To use pipeline specific options, create a class that inherits
+from `mlir::PassPipelineOptions` that contains the desired options. When using
+`PassPipelineRegistration`, the constructor now takes a function with the
+signature `void (OpPassManager &pm, const MyPipelineOptions&)`, which should
+construct the passes from the options and add them to the pm:
+
+```c++
+struct MyPipelineOptions : public PassPipelineOptions {
+ // These just forward onto llvm::cl::opt and llvm::cl::list respectively.
+ Option<int> exampleOption{*this, "flag-name", llvm::cl::desc("...")};
+ ListOption<int> exampleListOption{*this, "list-flag-name",
+ llvm::cl::desc("...")};
+};
+
+
+static mlir::PassPipelineRegistration<MyPipelineOptions> pipeline(
+ "example-pipeline", "Run an example pipeline.",
+ [](OpPassManager &pm, const MyPipelineOptions &pipelineOptions) {
+ // Construct the pipeline from pipelineOptions and add the passes to pm.
+ });
+```
+
+## Pass Statistics
+
+Statistics are a way to keep track of what the compiler is doing and how
+effective various transformations are. It is often useful to see what effect
+specific transformations have on a particular program, and how often they
+trigger. Pass statistics are instance specific, which takes this a step
+further: you can see the effect of placing a particular transformation at a
+specific point within the pass pipeline. For example, they help answer
+questions like "What happens if I run CSE again here?".
+
+Statistics can be added to a pass by using the `Pass::Statistic` class. Its
+constructor takes the parent pass, a name, and a description. This class acts
+like an unsigned integer, and may be incremented
+and updated accordingly. These statistics use the same infrastructure as
+[`llvm::Statistic`](http://llvm.org/docs/ProgrammersManual.html#the-statistic-class-stats-option)
+and thus have similar usage constraints. Collected statistics can be dumped by
+the [pass manager](#pass-manager) programmatically via
+`PassManager::enableStatistics`; or via `-pass-statistics` and
+`-pass-statistics-display` on the command line.
+
+An example is shown below:
+
+```c++
+struct MyPass : public OperationPass<MyPass> {
+ Statistic testStat{this, "testStat", "A test statistic"};
+
+ void runOnOperation() {
+ ...
+
+ // Update our statistic after some invariant was hit.
+ ++testStat;
+
+ ...
+ }
+};
+```
+
+The collected statistics may be aggregated in two types of views:
+
+A pipeline view that models the structure of the pass manager, this is the
+default view:
+
+```shell
+$ mlir-opt -pass-pipeline='func(my-pass,my-pass)' foo.mlir -pass-statistics
+
+===-------------------------------------------------------------------------===
+ ... Pass statistics report ...
+===-------------------------------------------------------------------------===
+'func' Pipeline
+ MyPass
+ (S) 15 testStat - A test statistic
+ VerifierPass
+ MyPass
+ (S) 6 testStat - A test statistic
+ VerifierPass
+VerifierPass
+```
+
+And a list view that aggregates all instances of a specific pass together:
+
+```shell
+$ mlir-opt -pass-pipeline='func(my-pass, my-pass)' foo.mlir -pass-statistics -pass-statistics-display=list
+
+===-------------------------------------------------------------------------===
+ ... Pass statistics report ...
+===-------------------------------------------------------------------------===
+MyPass
+ (S) 21 testStat - A test statistic
+```
+
+## Pass Instrumentation
+
+MLIR provides a customizable framework to instrument pass execution and analysis
+computation. This is provided via the `PassInstrumentation` class. This class
+provides hooks into the PassManager that observe various pass events:
+
+* `runBeforePipeline`
+ * This callback is run just before a pass pipeline, i.e. pass manager, is
+ executed.
+* `runAfterPipeline`
+ * This callback is run right after a pass pipeline has been executed,
+ successfully or not.
+* `runBeforePass`
+ * This callback is run just before a pass is executed.
+* `runAfterPass`
+ * This callback is run right after a pass has been successfully executed.
+ If this hook is executed, runAfterPassFailed will not be.
+* `runAfterPassFailed`
+ * This callback is run right after a pass execution fails. If this hook is
+ executed, runAfterPass will not be.
+* `runBeforeAnalysis`
+ * This callback is run just before an analysis is computed.
+* `runAfterAnalysis`
+ * This callback is run right after an analysis is computed.
+
+PassInstrumentation objects can be registered directly with a
+[PassManager](#pass-manager) instance via the `addInstrumentation` method.
+Instrumentations added to the PassManager are run in a stack-like fashion,
+i.e. the last instrumentation to execute a `runBefore*` hook will be the first
+to execute the respective `runAfter*` hook. Below is an example instrumentation
+that counts the number of times DominanceInfo is computed:
+
+```c++
+struct DominanceCounterInstrumentation : public PassInstrumentation {
+ unsigned &count;
+
+ DominanceCounterInstrumentation(unsigned &count) : count(count) {}
+ void runAfterAnalysis(llvm::StringRef, AnalysisID *id, Operation *) override {
+ if (id == AnalysisID::getID<DominanceInfo>())
+ ++count;
+ }
+};
+
+MLIRContext *ctx = ...;
+PassManager pm(ctx);
+
+// Add the instrumentation to the pass manager.
+unsigned domInfoCount;
+pm.addInstrumentation(
+ std::make_unique<DominanceCounterInstrumentation>(domInfoCount));
+
+// Run the pass manager on a module operation.
+ModuleOp m = ...;
+if (failed(pm.run(m)))
+ ...
+
+llvm::errs() << "DominanceInfo was computed " << domInfoCount << " times!\n";
+```
+
+### Standard Instrumentations
+
+MLIR utilizes the pass instrumentation framework to provide a few useful
+developer tools and utilities. Each of these instrumentations is immediately
+available to all users of the MLIR pass framework.
+
+#### Pass Timing
+
+The PassTiming instrumentation provides timing information about the execution
+of passes and computation of analyses. This provides a quick glimpse into what
+passes are taking the most time to execute, as well as how much of an effect
+your pass has on the total execution time of the pipeline. Users can enable this
+instrumentation directly on the PassManager via `enableTiming`. This
+instrumentation is also made available in mlir-opt via the `-pass-timing` flag.
+The PassTiming instrumentation provides several different display modes for the
+timing results, each of which is described below:
+
+##### List Display Mode
+
+In this mode, the results are displayed in a list sorted by total time with each
+pass/analysis instance aggregated into one unique result. This view is useful
+for getting an overview of what analyses/passes are taking the most time in a
+pipeline. This display mode is available in mlir-opt via
+`-pass-timing-display=list`.
+
+```shell
+$ mlir-opt foo.mlir -disable-pass-threading -pass-pipeline='func(cse,canonicalize)' -convert-std-to-llvm -pass-timing -pass-timing-display=list
+
+===-------------------------------------------------------------------------===
+ ... Pass execution timing report ...
+===-------------------------------------------------------------------------===
+ Total Execution Time: 0.0203 seconds
+
+ ---Wall Time--- --- Name ---
+ 0.0047 ( 55.9%) Canonicalizer
+ 0.0019 ( 22.2%) VerifierPass
+ 0.0016 ( 18.5%) LLVMLoweringPass
+ 0.0003 ( 3.4%) CSE
+ 0.0002 ( 1.9%) (A) DominanceInfo
+ 0.0084 (100.0%) Total
+```
+
+##### Pipeline Display Mode
+
+In this mode, the results are displayed in a nested pipeline view that mirrors
+the internal pass pipeline that is being executed in the pass manager. This view
+is useful for understanding specifically which parts of the pipeline are taking
+the most time, and can also be used to identify when analyses are being
+invalidated and recomputed. This is the default display mode.
+
+```shell
+$ mlir-opt foo.mlir -disable-pass-threading -pass-pipeline='func(cse,canonicalize)' -convert-std-to-llvm -pass-timing
+
+===-------------------------------------------------------------------------===
+ ... Pass execution timing report ...
+===-------------------------------------------------------------------------===
+ Total Execution Time: 0.0249 seconds
+
+ ---Wall Time--- --- Name ---
+ 0.0058 ( 70.8%) 'func' Pipeline
+ 0.0004 ( 4.3%) CSE
+ 0.0002 ( 2.6%) (A) DominanceInfo
+ 0.0004 ( 4.8%) VerifierPass
+ 0.0046 ( 55.4%) Canonicalizer
+ 0.0005 ( 6.2%) VerifierPass
+ 0.0005 ( 5.8%) VerifierPass
+ 0.0014 ( 17.2%) LLVMLoweringPass
+ 0.0005 ( 6.2%) VerifierPass
+ 0.0082 (100.0%) Total
+```
+
+##### Multi-threaded Pass Timing
+
+When multi-threading is enabled in the pass manager, the meaning of the
+display changes slightly. First, a new timing column is added, `User Time`,
+that displays the total time spent across all threads. Second, the `Wall
+Time` column displays the longest individual time spent among all of the
+threads. This means that the `Wall Time` column will continue to indicate the
+perceived, or clock, time, whereas the `User Time` will display the total CPU
+time.
+
+```shell
+$ mlir-opt foo.mlir -pass-pipeline='func(cse,canonicalize)' -convert-std-to-llvm -pass-timing
+
+===-------------------------------------------------------------------------===
+ ... Pass execution timing report ...
+===-------------------------------------------------------------------------===
+ Total Execution Time: 0.0078 seconds
+
+ ---User Time--- ---Wall Time--- --- Name ---
+ 0.0177 ( 88.5%) 0.0057 ( 71.3%) 'func' Pipeline
+ 0.0044 ( 22.0%) 0.0015 ( 18.9%) CSE
+ 0.0029 ( 14.5%) 0.0012 ( 15.2%) (A) DominanceInfo
+ 0.0038 ( 18.9%) 0.0015 ( 18.7%) VerifierPass
+ 0.0089 ( 44.6%) 0.0025 ( 31.1%) Canonicalizer
+ 0.0006 ( 3.0%) 0.0002 ( 2.6%) VerifierPass
+ 0.0004 ( 2.2%) 0.0004 ( 5.4%) VerifierPass
+ 0.0013 ( 6.5%) 0.0013 ( 16.3%) LLVMLoweringPass
+ 0.0006 ( 2.8%) 0.0006 ( 7.0%) VerifierPass
+ 0.0200 (100.0%) 0.0081 (100.0%) Total
+```
+
+#### IR Printing
+
+When debugging it is often useful to dump the IR at various stages of a pass
+pipeline. This is where the IR printing instrumentation comes into play. This
+instrumentation allows for conditionally printing the IR before and after pass
+execution by optionally filtering on the pass being executed. This
+instrumentation can be added directly to the PassManager via the
+`enableIRPrinting` method. `mlir-opt` provides a few useful flags for utilizing
+this instrumentation:
+
+* `print-ir-before=(comma-separated-pass-list)`
+ * Print the IR before each of the passes provided within the pass list.
+* `print-ir-before-all`
+ * Print the IR before every pass in the pipeline.
+
+```shell
+$ mlir-opt foo.mlir -pass-pipeline='func(cse)' -print-ir-before=cse
+
+*** IR Dump Before CSE ***
+func @simple_constant() -> (i32, i32) {
+ %c1_i32 = constant 1 : i32
+ %c1_i32_0 = constant 1 : i32
+ return %c1_i32, %c1_i32_0 : i32, i32
+}
+```
+
+* `print-ir-after=(comma-separated-pass-list)`
+ * Print the IR after each of the passes provided within the pass list.
+* `print-ir-after-all`
+ * Print the IR after every pass in the pipeline.
+
+```shell
+$ mlir-opt foo.mlir -pass-pipeline='func(cse)' -print-ir-after=cse
+
+*** IR Dump After CSE ***
+func @simple_constant() -> (i32, i32) {
+ %c1_i32 = constant 1 : i32
+ return %c1_i32, %c1_i32 : i32, i32
+}
+```
+
+* `print-ir-after-change`
+ * Only print the IR after a pass if the pass mutated the IR. This helps to
+ reduce the number of IR dumps for "uninteresting" passes.
+ * Note: Changes are detected by comparing a hash of the operation before
+ and after the pass. This adds additional run-time to compute the hash of
+ the IR, and in some rare cases may result in false-positives depending
+ on the collision rate of the hash algorithm used.
+ * Note: This option should be used in conjunction with one of the other
+ 'print-ir-after' options above, as this option alone does not enable
+ printing.
+
+```shell
+$ mlir-opt foo.mlir -pass-pipeline='func(cse,cse)' -print-ir-after=cse -print-ir-after-change
+
+*** IR Dump After CSE ***
+func @simple_constant() -> (i32, i32) {
+ %c1_i32 = constant 1 : i32
+ return %c1_i32, %c1_i32 : i32, i32
+}
+```
+
+* `print-ir-module-scope`
+ * Always print the top-level module operation, regardless of pass type or
+ operation nesting level.
+ * Note: Printing at module scope should only be used when multi-threading
+ is disabled (`-disable-pass-threading`).
+
+```shell
+$ mlir-opt foo.mlir -disable-pass-threading -pass-pipeline='func(cse)' -print-ir-after=cse -print-ir-module-scope
+
+*** IR Dump After CSE *** ('func' operation: @bar)
+func @bar(%arg0: f32, %arg1: f32) -> f32 {
+ ...
+}
+
+func @simple_constant() -> (i32, i32) {
+ %c1_i32 = constant 1 : i32
+ %c1_i32_0 = constant 1 : i32
+ return %c1_i32, %c1_i32_0 : i32, i32
+}
+
+*** IR Dump After CSE *** ('func' operation: @simple_constant)
+func @bar(%arg0: f32, %arg1: f32) -> f32 {
+ ...
+}
+
+func @simple_constant() -> (i32, i32) {
+ %c1_i32 = constant 1 : i32
+ return %c1_i32, %c1_i32 : i32, i32
+}
+```
+
+## Crash and Failure Reproduction
+
+The [pass manager](#pass-manager) in MLIR contains a built-in mechanism to
+generate reproducers in the event of a crash, or a
+[pass failure](#pass-failure). This functionality can be enabled via
+`PassManager::enableCrashReproducerGeneration` or via the command line flag
+`pass-pipeline-crash-reproducer`. In either case, an argument is provided that
+corresponds to the output `.mlir` file name that the reproducer should be
+written to. The reproducer contains the configuration of the pass manager that
+was executing, as well as the initial IR before any passes were run. A
+potential reproducer may have the form:
+
+```mlir
+// configuration: -pass-pipeline='func(cse, canonicalize), inline'
+// note: verifyPasses=false
+
+module {
+ func @foo() {
+ ...
+ }
+}
+```
diff --git a/mlir/docs/includes/img/index-map.svg b/mlir/docs/includes/img/index-map.svg
new file mode 100644
index 00000000000..6004c2da362
--- /dev/null
+++ b/mlir/docs/includes/img/index-map.svg
@@ -0,0 +1,380 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ version="1.1"
+ viewBox="0 0 573.56073 250.77821"
+ stroke-miterlimit="10"
+ id="svg133"
+ sodipodi:docname="index-map.svg"
+ width="573.56073"
+ height="250.77821"
+ style="fill:none;stroke:none;stroke-linecap:square;stroke-miterlimit:10"
+ inkscape:version="0.92.2pre0 (973e216, 2017-07-25)">
+ <metadata
+ id="metadata139">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs137" />
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="2145"
+ inkscape:window-height="1372"
+ id="namedview135"
+ showgrid="false"
+ fit-margin-top="0"
+ fit-margin-left="0"
+ fit-margin-right="0"
+ fit-margin-bottom="0"
+ inkscape:zoom="0.45"
+ inkscape:cx="685.47816"
+ inkscape:cy="101.31222"
+ inkscape:window-x="413"
+ inkscape:window-y="149"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="svg133" />
+ <clipPath
+ id="p.0">
+ <path
+ d="M 0,0 H 1280 V 960 H 0 Z"
+ id="path2"
+ inkscape:connector-curvature="0"
+ style="clip-rule:nonzero" />
+ </clipPath>
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path5"
+ d="M -12.118111,-9.267716 H 1267.8819 V 950.73228 H -12.118111 Z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path7"
+ d="M 94.598429,41.62992 H 118.063 V 68.338584 H 94.598429 Z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path9"
+ d="M 94.598429,41.62992 H 118.063 V 68.338584 H 94.598429 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path11"
+ d="m 111.1453,60.716754 q -0.92188,0.765625 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.984375 0,-0.71875 0.32812,-1.296875 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.359375 1.1875,-0.546875 0.46875,-0.125 1.45313,-0.25 1.98437,-0.234375 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.421875 0,-1 -0.46875,-1.421875 -0.625,-0.546875 -1.875,-0.546875 -1.15625,0 -1.70313,0.40625 -0.54687,0.40625 -0.8125,1.421875 l -1.60937,-0.21875 q 0.21875,-1.015625 0.71875,-1.640625 0.5,-0.640625 1.45312,-0.984375 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.296875 0.78125,0.28125 1.14062,0.734375 0.375,0.4375 0.51563,1.109375 0.0781,0.421875 0.0781,1.515625 v 2.1875 q 0,2.28125 0.10938,2.890625 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.515625 -0.32812,-1.1875 z m -0.14063,-3.671875 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.140625 -1.4375,0.328125 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.421875 1.09375,-1.140625 0.26562,-0.5625 0.26562,-1.640625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path13"
+ d="m 118.06299,41.62992 h 23.46457 v 26.708664 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path15"
+ d="m 118.06299,41.62992 h 23.46457 v 26.708664 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path17"
+ d="m 129.79737,61.904254 h -1.51563 V 48.544879 h 1.64063 v 4.765625 q 1.04687,-1.296875 2.65625,-1.296875 0.89062,0 1.6875,0.359375 0.79687,0.359375 1.3125,1.015625 0.51562,0.640625 0.79687,1.5625 0.29688,0.921875 0.29688,1.96875 0,2.484375 -1.23438,3.84375 -1.21875,1.359375 -2.95312,1.359375 -1.70313,0 -2.6875,-1.4375 z m -0.0156,-4.90625 q 0,1.734375 0.48438,2.515625 0.76562,1.265625 2.09375,1.265625 1.07812,0 1.85937,-0.9375 0.78125,-0.9375 0.78125,-2.78125 0,-1.890625 -0.75,-2.796875 -0.75,-0.90625 -1.82812,-0.90625 -1.0625,0 -1.85938,0.9375 -0.78125,0.9375 -0.78125,2.703125 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path19"
+ d="m 141.52757,41.62992 h 23.46455 v 26.708664 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path21"
+ d="m 141.52757,41.62992 h 23.46455 v 26.708664 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path23"
+ d="m 158.07444,58.357374 1.60937,0.21875 q -0.26562,1.65625 -1.35937,2.609375 -1.07813,0.9375 -2.67188,0.9375 -1.98437,0 -3.1875,-1.296875 -1.20312,-1.296875 -1.20312,-3.71875 0,-1.578125 0.51562,-2.75 0.51563,-1.171875 1.57813,-1.75 1.0625,-0.59375 2.3125,-0.59375 1.57812,0 2.57812,0.796875 1,0.796875 1.28125,2.265625 l -1.59375,0.234375 q -0.23437,-0.96875 -0.8125,-1.453125 -0.57812,-0.5 -1.39062,-0.5 -1.23438,0 -2.01563,0.890625 -0.78125,0.890625 -0.78125,2.8125 0,1.953125 0.75,2.84375 0.75,0.875 1.95313,0.875 0.96875,0 1.60937,-0.59375 0.65625,-0.59375 0.82813,-1.828125 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path25"
+ d="M 94.598429,68.338584 H 118.063 v 26.70866 H 94.598429 Z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path27"
+ d="M 94.598429,68.338584 H 118.063 v 26.70866 H 94.598429 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path29"
+ d="m 111.09843,88.612914 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.640625 -0.96875,-0.640625 -1.5,-1.78125 -0.53125,-1.140625 -0.53125,-2.625 0,-1.453125 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.359375 1.10937,0.953125 v -4.796875 h 1.64063 v 13.359375 z m -5.17188,-4.828125 q 0,1.859375 0.78125,2.78125 0.78125,0.921875 1.84375,0.921875 1.07813,0 1.82813,-0.875 0.75,-0.890625 0.75,-2.6875 0,-1.984375 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.890625 -0.73438,0.890625 -0.73438,2.8125 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path31"
+ d="m 118.06299,68.338584 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path33"
+ d="m 118.06299,68.338584 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path35"
+ d="m 134.92237,85.503539 1.6875,0.203125 q -0.40625,1.484375 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.296875 -1.23438,-1.3125 -1.23438,-3.671875 0,-2.453125 1.25,-3.796875 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.328125 1.23437,1.3125 1.23437,3.703125 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.453125 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.484375 1.01563,-1.515625 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.828125 -0.78125,-0.953125 -2.03125,-0.953125 -1.125,0 -1.90625,0.765625 -0.76562,0.75 -0.84375,2.015625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path37"
+ d="m 141.52757,68.338584 h 23.46455 v 26.70866 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path39"
+ d="m 141.52757,68.338584 h 23.46455 v 26.70866 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path41"
+ d="m 152.29319,88.612914 v -8.40625 h -1.45313 v -1.265625 h 1.45313 v -1.03125 q 0,-0.96875 0.17187,-1.453125 0.23438,-0.640625 0.82813,-1.03125 0.59375,-0.390625 1.67187,-0.390625 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.09375 -0.95312,-0.09375 -0.75,0 -1.0625,0.328125 -0.3125,0.3125 -0.3125,1.1875 v 0.890625 h 1.89062 v 1.265625 h -1.89062 v 8.40625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path43"
+ d="M 94.598429,95.047244 H 118.063 V 121.7559 H 94.598429 Z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path45"
+ d="M 94.598429,95.047244 H 118.063 V 121.7559 H 94.598429 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path47"
+ d="m 104.5203,116.11844 1.59375,0.23438 q 0.10937,0.75 0.5625,1.07812 0.60937,0.45313 1.67187,0.45313 1.14063,0 1.75,-0.45313 0.625,-0.45312 0.84375,-1.26562 0.125,-0.5 0.10938,-2.10938 -1.0625,1.26563 -2.67188,1.26563 -2,0 -3.09375,-1.4375 -1.09375,-1.4375 -1.09375,-3.45313 0,-1.39062 0.5,-2.5625 0.51563,-1.17187 1.45313,-1.79687 0.95312,-0.64063 2.25,-0.64063 1.70312,0 2.8125,1.375 v -1.15625 h 1.51562 v 8.35938 q 0,2.26562 -0.46875,3.20312 -0.45312,0.9375 -1.45312,1.48438 -0.98438,0.54688 -2.45313,0.54688 -1.71875,0 -2.79687,-0.78126 -1.0625,-0.76563 -1.03125,-2.34375 z m 1.35937,-5.8125 q 0,1.90625 0.75,2.78125 0.76563,0.875 1.90625,0.875 1.125,0 1.89063,-0.85937 0.76562,-0.875 0.76562,-2.73438 0,-1.78125 -0.79687,-2.67187 -0.78125,-0.90625 -1.89063,-0.90625 -1.09375,0 -1.85937,0.89062 -0.76563,0.875 -0.76563,2.625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path49"
+ d="m 118.06299,95.047244 h 23.46457 V 121.7559 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path51"
+ d="m 118.06299,95.047244 h 23.46457 V 121.7559 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path53"
+ d="M 128.29737,115.32157 V 101.9622 h 1.64062 v 4.79687 q 1.14063,-1.32812 2.89063,-1.32812 1.07812,0 1.85937,0.42187 0.79688,0.42188 1.14063,1.17188 0.34375,0.75 0.34375,2.17187 v 6.125 h -1.64063 v -6.125 q 0,-1.23437 -0.53125,-1.79687 -0.53125,-0.5625 -1.51562,-0.5625 -0.71875,0 -1.35938,0.39062 -0.64062,0.375 -0.92187,1.01563 -0.26563,0.64062 -0.26563,1.78125 v 5.29687 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path55"
+ d="m 141.52757,95.047244 h 23.46455 V 121.7559 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path57"
+ d="m 141.52757,95.047244 h 23.46455 V 121.7559 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path59"
+ d="m 152.42181,103.85282 v -1.89062 h 1.64062 v 1.89062 z m 0,11.46875 v -9.67187 h 1.64062 v 9.67187 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path61"
+ d="m 118.06299,196.8609 h 23.46457 v 26.70865 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path63"
+ d="m 118.06299,196.8609 h 23.46457 v 26.70865 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path65"
+ d="m 134.92237,214.02584 1.6875,0.20313 q -0.40625,1.48437 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29688 -1.23438,-1.3125 -1.23438,-3.67187 0,-2.45313 1.25,-3.79688 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32813 1.23437,1.3125 1.23437,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48438 1.01563,-1.51563 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76562,0.75 -0.84375,2.01562 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path67"
+ d="m 141.52757,196.8609 h 23.46455 v 26.70865 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path69"
+ d="m 141.52757,196.8609 h 23.46455 v 26.70865 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path71"
+ d="m 152.15257,217.13522 v -8.40625 h -1.45313 v -1.26563 h 1.45313 v -1.03125 q 0,-0.96875 0.17187,-1.45312 0.23438,-0.64063 0.82813,-1.03125 0.59375,-0.39063 1.67187,-0.39063 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95312,-0.0937 -0.75,0 -1.0625,0.32813 -0.3125,0.3125 -0.3125,1.1875 v 0.89062 h 1.89062 v 1.26563 h -1.89062 v 8.40625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path73"
+ d="m 118.06299,223.56955 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path75"
+ d="m 118.06299,223.56955 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path77"
+ d="M 128.29737,243.84388 V 230.4845 h 1.64062 v 4.79688 q 1.14063,-1.32813 2.89063,-1.32813 1.07812,0 1.85937,0.42188 0.79688,0.42187 1.14063,1.17187 0.34375,0.75 0.34375,2.17188 v 6.125 h -1.64063 v -6.125 q 0,-1.23438 -0.53125,-1.79688 -0.53125,-0.5625 -1.51562,-0.5625 -0.71875,0 -1.35938,0.39063 -0.64062,0.375 -0.92187,1.01562 -0.26563,0.64063 -0.26563,1.78125 v 5.29688 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path79"
+ d="m 141.52757,223.56955 h 23.46455 v 26.70866 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path81"
+ d="m 141.52757,223.56955 h 23.46455 v 26.70866 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path83"
+ d="m 151.76194,232.37513 v -1.89063 h 1.64062 v 1.89063 z m 0,11.46875 V 234.172 h 1.64062 v 9.67188 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path85"
+ d="m 163.38583,132.35694 h 464.50394 v 64.50395 H 163.38583 Z" />
+ <path
+ style="fill:#434343;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path87"
+ d="m 174.1202,159.27694 v -13.35938 h 1.76563 v 13.35938 z m 4.6833,0 v -9.67188 h 1.46875 v 1.375 q 1.0625,-1.59375 3.07813,-1.59375 0.875,0 1.60937,0.3125 0.73438,0.3125 1.09375,0.82813 0.375,0.5 0.51563,1.20312 0.0937,0.45313 0.0937,1.59375 v 5.95313 h -1.64063 v -5.89063 q 0,-1 -0.20312,-1.48437 -0.1875,-0.5 -0.67188,-0.79688 -0.48437,-0.29687 -1.14062,-0.29687 -1.04688,0 -1.8125,0.67187 -0.75,0.65625 -0.75,2.51563 v 5.28125 z m 16.64135,0 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64063 -0.96875,-0.64062 -1.5,-1.78125 -0.53125,-1.14062 -0.53125,-2.625 0,-1.45312 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35938 1.10937,0.95313 v -4.79688 h 1.64063 v 13.35938 z m -5.17188,-4.82813 q 0,1.85938 0.78125,2.78125 0.78125,0.92188 1.84375,0.92188 1.07813,0 1.82813,-0.875 0.75,-0.89063 0.75,-2.6875 0,-1.98438 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89062 -0.73438,0.89063 -0.73438,2.8125 z m 15.90697,1.71875 1.6875,0.20313 q -0.40625,1.48437 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29688 -1.23438,-1.3125 -1.23438,-3.67187 0,-2.45313 1.25,-3.79688 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32813 1.23437,1.3125 1.23437,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48438 1.01563,-1.51563 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76562,0.75 -0.84375,2.01562 z m 8.0476,5.76563 3.53125,-5.03125 -3.26563,-4.64063 h 2.04688 l 1.48437,2.26563 q 0.42188,0.64062 0.67188,1.07812 0.40625,-0.59375 0.73437,-1.0625 l 1.64063,-2.28125 h 1.95312 l -3.34375,4.54688 3.59375,5.125 h -2.01562 l -1.98438,-3 -0.51562,-0.8125 -2.54688,3.8125 z m 15.76142,0 v -13.35938 h 2.65625 l 3.15625,9.45313 q 0.4375,1.32812 0.64063,1.98437 0.23437,-0.73437 
0.70312,-2.14062 l 3.20313,-9.29688 h 2.375 v 13.35938 h -1.70313 v -11.17188 l -3.875,11.17188 h -1.59375 l -3.85937,-11.375 v 11.375 z m 21.69707,-1.1875 q -0.92187,0.76562 -1.76562,1.09375 -0.82814,0.3125 -1.79689,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.98438 0,-0.71875 0.32812,-1.29687 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.35938 1.1875,-0.54688 0.46875,-0.125 1.45313,-0.25 1.98439,-0.23437 2.92189,-0.5625 0.0156,-0.34375 0.0156,-0.42187 0,-1 -0.46875,-1.42188 -0.625,-0.54687 -1.87501,-0.54687 -1.15625,0 -1.70313,0.40625 -0.54687,0.40625 -0.8125,1.42187 l -1.60937,-0.21875 q 0.21875,-1.01562 0.71875,-1.64062 0.5,-0.64063 1.45312,-0.98438 0.95313,-0.34375 2.18752,-0.34375 1.25,0 2.01562,0.29688 0.78125,0.28125 1.14063,0.73437 0.375,0.4375 0.51562,1.10938 0.0781,0.42187 0.0781,1.51562 v 2.1875 q 0,2.28125 0.10937,2.89063 0.10938,0.59375 0.40625,1.15625 h -1.70312 q -0.26563,-0.51563 -0.32813,-1.1875 z m -0.14062,-3.67188 q -0.89063,0.375 -2.67189,0.625 -1.01563,0.14063 -1.4375,0.32813 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.93752,0 1.67189,-0.40625 0.75,-0.42188 1.09375,-1.14063 0.26563,-0.5625 0.26563,-1.64062 z m 4.20382,8.5625 v -13.375 h 1.48438 v 1.25 q 0.53125,-0.73437 1.1875,-1.09375 0.67187,-0.375 1.625,-0.375 1.23437,0 2.17187,0.64063 0.95313,0.625 1.4375,1.79687 0.48438,1.15625 0.48438,2.54688 0,1.48437 -0.53125,2.67187 -0.53125,1.1875 -1.54688,1.82813 -1.01562,0.625 -2.14062,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.70312 z m 1.48438,-8.48437 q 0,1.85937 0.75,2.76562 0.76562,0.89063 1.82812,0.89063 1.09375,0 1.875,-0.92188 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76562,-2.76562 -0.75,-0.92188 -1.8125,-0.92188 -1.04688,0 -1.85938,0.98438 -0.79687,0.96875 -0.79687,2.84375 z m 9.34448,-3.03125 v -1.85938 h 1.85937 v 1.85938 z m 0,7.8125 v -1.875 h 1.85937 v 1.875 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path89"
+ d="m 173.32333,181.51132 0.79687,-3.89063 h -1.54687 v -1.35937 h 1.8125 l 0.67187,-3.29688 h -2.48437 v -1.35937 h 2.76562 l 0.79688,-3.90625 h 1.35937 l -0.79687,3.90625 h 2.875 l 0.79687,-3.90625 h 1.375 l -0.79687,3.90625 h 1.57812 v 1.35937 h -1.84375 l -0.6875,3.29688 h 2.53125 v 1.35937 h -2.8125 l -0.78125,3.89063 h -1.375 l 0.78125,-3.89063 h -2.85937 l -0.78125,3.89063 z m 2.4375,-5.25 h 2.85937 l 0.6875,-3.29688 h -2.875 z m 8.23509,-6.45313 v -1.89062 h 1.64063 v 1.89062 z m 0,11.46875 v -9.67187 h 1.64063 v 9.67187 z m 4.14482,0 v -9.67187 h 1.46875 v 1.35937 q 0.45313,-0.71875 1.20313,-1.14062 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45312 0.6875,0.4375 0.96875,1.23438 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.79687 0.78125,0.79688 0.78125,2.45313 v 6.64062 h -1.64063 v -6.09375 q 0,-0.98437 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70312 -0.42188,-0.26563 -0.98438,-0.26563 -1.01562,0 -1.6875,0.6875 -0.67187,0.67188 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64062 -0.40625,-0.54688 -1.3125,-0.54688 -0.6875,0 -1.28125,0.35938 -0.59375,0.35937 -0.85937,1.0625 -0.25,0.70312 -0.25,2.03125 v 5.01562 z m 21.85331,-1.1875 q -0.92188,0.76563 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.98437 0,-0.71875 0.32812,-1.29688 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.35937 1.1875,-0.54687 0.46875,-0.125 1.45313,-0.25 1.98437,-0.23438 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.42188 0,-1 -0.46875,-1.42187 -0.625,-0.54688 -1.875,-0.54688 -1.15625,0 -1.70313,0.40625 -0.54687,0.40625 -0.8125,1.42188 l -1.60937,-0.21875 q 0.21875,-1.01563 0.71875,-1.64063 0.5,-0.64062 1.45312,-0.98437 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.29687 0.78125,0.28125 1.14062,0.73438 0.375,0.4375 0.51563,1.10937 0.0781,0.42188 0.0781,1.51563 v 2.1875 q 0,2.28125 0.10938,2.89062 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.51562 -0.32812,-1.1875 z m 
-0.14063,-3.67187 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.14062 -1.4375,0.32812 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.42187 1.09375,-1.14062 0.26562,-0.5625 0.26562,-1.64063 z m 4.20384,8.5625 v -13.375 h 1.48438 v 1.25 q 0.53125,-0.73438 1.1875,-1.09375 0.67187,-0.375 1.625,-0.375 1.23437,0 2.17187,0.64062 0.95313,0.625 1.4375,1.79688 0.48438,1.15625 0.48438,2.54687 0,1.48438 -0.53125,2.67188 -0.53125,1.1875 -1.54688,1.82812 -1.01562,0.625 -2.14062,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.70313 z m 1.48438,-8.48438 q 0,1.85938 0.75,2.76563 0.76562,0.89062 1.82812,0.89062 1.09375,0 1.875,-0.92187 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76562,-2.76563 -0.75,-0.92187 -1.8125,-0.92187 -1.04688,0 -1.85938,0.98437 -0.79687,0.96875 -0.79687,2.84375 z m 9.01634,4.78125 v -13.35937 h 5.01562 q 1.53125,0 2.45313,0.40625 0.92187,0.40625 1.4375,1.25 0.53125,0.84375 0.53125,1.76562 0,0.85938 -0.46875,1.625 -0.45313,0.75 -1.39063,1.20313 1.20313,0.35937 1.85938,1.21875 0.65625,0.85937 0.65625,2.01562 0,0.9375 -0.40625,1.75 -0.39063,0.79688 -0.98438,1.23438 -0.57812,0.4375 -1.45312,0.67187 -0.875,0.21875 -2.15625,0.21875 z m 1.78125,-7.75 h 2.875 q 1.1875,0 1.6875,-0.14062 0.67187,-0.20313 1.01562,-0.67188 0.34375,-0.46875 0.34375,-1.17187 0,-0.65625 -0.32812,-1.15625 -0.3125,-0.51563 -0.90625,-0.70313 -0.59375,-0.1875 -2.03125,-0.1875 h -2.65625 z m 0,6.17188 h 3.3125 q 0.85937,0 1.20312,-0.0625 0.60938,-0.10938 1.01563,-0.35938 0.42187,-0.26562 0.6875,-0.75 0.26562,-0.48437 0.26562,-1.125 0,-0.75 -0.39062,-1.29687 -0.375,-0.54688 -1.0625,-0.76563 -0.67188,-0.23437 -1.95313,-0.23437 h -3.07812 z m 18.69357,0 v 1.57812 h -8.82812 q -0.0156,-0.59375 0.1875,-1.14062 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01563 2.17188,-1.78125 2.9375,-2.82812 0.76563,-1.04688 0.76563,-1.96875 0,-0.98438 -0.70313,-1.64063 
-0.6875,-0.67187 -1.8125,-0.67187 -1.1875,0 -1.90625,0.71875 -0.70312,0.70312 -0.70312,1.95312 l -1.6875,-0.17187 q 0.17187,-1.89063 1.29687,-2.875 1.14063,-0.98438 3.03125,-0.98438 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64063 0,0.79687 -0.32813,1.57812 -0.32812,0.78125 -1.09375,1.64063 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23437 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 0.95386,1.57812 5.125,-13.35937 h 1.90625 l 5.46875,13.35937 h -2.01563 l -1.54687,-4.04687 h -5.59375 l -1.46875,4.04687 z m 3.85937,-5.48437 h 4.53125 l -1.40625,-3.70313 q -0.625,-1.6875 -0.9375,-2.76562 -0.26562,1.28125 -0.71875,2.54687 z m 18.15812,9.40625 q -1.35938,-1.70313 -2.29688,-4 -0.9375,-2.29688 -0.9375,-4.76563 0,-2.15625 0.70313,-4.14062 0.82812,-2.3125 2.53125,-4.59375 h 1.17187 q -1.09375,1.89062 -1.45312,2.70312 -0.54688,1.25 -0.875,2.625 -0.39063,1.70313 -0.39063,3.42188 0,4.375 2.71875,8.75 z m 9.3533,-3.92188 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35937 1.10937,0.95312 v -4.79687 h 1.64063 v 13.35937 z m -5.17188,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07813,0 1.82813,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89063 -0.73438,0.89062 -0.73438,2.8125 z m 8.82886,-1.76563 q 0,-2.35937 0.48438,-3.79687 0.48437,-1.45313 1.4375,-2.23438 0.96875,-0.78125 2.42187,-0.78125 1.07813,0 1.89063,0.4375 0.8125,0.42188 1.32812,1.25 0.53125,0.8125 0.82813,1.98438 0.3125,1.15625 0.3125,3.14062 0,2.35938 -0.48438,3.8125 -0.48437,1.4375 -1.45312,2.23438 -0.95313,0.78125 -2.42188,0.78125 -1.92187,0 -3.03125,-1.39063 -1.3125,-1.67187 -1.3125,-5.4375 z m 1.67188,0 q 0,3.29688 0.76562,4.39063 0.78125,1.07812 1.90625,1.07812 1.14063,0 1.90625,-1.09375 
0.76563,-1.09375 0.76563,-4.375 0,-3.29687 -0.76563,-4.375 -0.76562,-1.07812 -1.92187,-1.07812 -1.125,0 -1.79688,0.95312 -0.85937,1.21875 -0.85937,4.5 z m 9.57882,6.59375 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64063 -1.15625,0.98438 l -0.45312,-0.70313 q 0.51562,-0.21875 0.76562,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 16.21036,0 v -1.21875 q -0.90625,1.4375 -2.70312,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48437,-2.625 0.48438,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17188,-0.625 0.875,0 1.54687,0.375 0.6875,0.35937 1.10938,0.95312 v -4.79687 h 1.64062 v 13.35937 z m -5.17187,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07812,0 1.82812,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76562,-2.90625 -0.76563,-0.9375 -1.89063,-0.9375 -1.07812,0 -1.8125,0.89063 -0.73437,0.89062 -0.73437,2.8125 z m 15.00073,4.82812 h -1.64063 v -10.45312 q -0.59375,0.5625 -1.5625,1.14062 -0.95312,0.5625 -1.71875,0.84375 v -1.59375 q 1.375,-0.64062 2.40625,-1.5625 1.03125,-0.92187 1.45313,-1.78125 h 1.0625 z m 5.73507,3.92188 h -1.1875 q 2.73438,-4.375 2.73438,-8.75 0,-1.71875 -0.39063,-3.39063 -0.3125,-1.375 -0.875,-2.625 -0.35937,-0.82812 -1.46875,-2.73437 h 1.1875 q 1.70313,2.28125 2.53125,4.59375 0.6875,1.98437 0.6875,4.14062 0,2.46875 -0.9375,4.76563 -0.9375,2.29687 -2.28125,4 z m 5.16581,-0.21875 v -17.0625 h 3.60937 v 1.35937 h -1.96875 v 14.34375 h 1.96875 v 1.35938 z m 4.76144,-8 1.65625,-0.14063 q 0.125,1 0.54688,1.64063 0.4375,0.64062 1.34375,1.04687 0.92187,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35938,-0.46875 -1.1875,-0.79687 -0.54688,-0.20313 -2.39063,-0.64063 -1.82812,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75 -0.46875,-1.67188 0,-1 0.57813,-1.875 0.57812,-0.89062 1.67187,-1.34375 1.10938,-0.45312 2.45313,-0.45312 1.48437,0 
2.60937,0.48437 1.14063,0.46875 1.75,1.40625 0.60938,0.92188 0.65625,2.09375 l -1.6875,0.125 q -0.14062,-1.26562 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60937,0 -2.34375,0.59375 -0.73437,0.59375 -0.73437,1.42188 0,0.71875 0.53125,1.17187 0.5,0.46875 2.65625,0.96875 2.15625,0.48438 2.95312,0.84375 1.17188,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.60937,2.01562 -0.60938,0.9375 -1.75,1.46875 -1.14063,0.51563 -2.57813,0.51563 -1.8125,0 -3.04687,-0.53125 -1.21875,-0.53125 -1.92188,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 12.38107,-2.29688 q 0,-2.35937 0.48438,-3.79687 0.48437,-1.45313 1.4375,-2.23438 0.96875,-0.78125 2.42187,-0.78125 1.07813,0 1.89063,0.4375 0.8125,0.42188 1.32812,1.25 0.53125,0.8125 0.82813,1.98438 0.3125,1.15625 0.3125,3.14062 0,2.35938 -0.48438,3.8125 -0.48437,1.4375 -1.45312,2.23438 -0.95313,0.78125 -2.42188,0.78125 -1.92187,0 -3.03125,-1.39063 -1.3125,-1.67187 -1.3125,-5.4375 z m 1.67188,0 q 0,3.29688 0.76562,4.39063 0.78125,1.07812 1.90625,1.07812 1.14063,0 1.90625,-1.09375 0.76563,-1.09375 0.76563,-4.375 0,-3.29687 -0.76563,-4.375 -0.76562,-1.07812 -1.92187,-1.07812 -1.125,0 -1.79688,0.95312 -0.85937,1.21875 -0.85937,4.5 z m 9.57883,6.59375 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35938,0.64063 -1.15625,0.98438 l -0.45313,-0.70313 q 0.51563,-0.21875 0.76563,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 9.5541,-4.29687 1.65625,-0.14063 q 0.125,1 0.54688,1.64063 0.4375,0.64062 1.34375,1.04687 0.92187,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35938,-0.46875 -1.1875,-0.79687 -0.54688,-0.20313 -2.39063,-0.64063 -1.82812,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75 -0.46875,-1.67188 0,-1 0.57813,-1.875 0.57812,-0.89062 1.67187,-1.34375 1.10938,-0.45312 2.45313,-0.45312 1.48437,0 2.60937,0.48437 1.14063,0.46875 1.75,1.40625 0.60938,0.92188 0.65625,2.09375 l -1.6875,0.125 q 
-0.14062,-1.26562 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60937,0 -2.34375,0.59375 -0.73437,0.59375 -0.73437,1.42188 0,0.71875 0.53125,1.17187 0.5,0.46875 2.65625,0.96875 2.15625,0.48438 2.95312,0.84375 1.17188,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.60937,2.01562 -0.60938,0.9375 -1.75,1.46875 -1.14063,0.51563 -2.57813,0.51563 -1.8125,0 -3.04687,-0.53125 -1.21875,-0.53125 -1.92188,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 18.55295,4.29687 h -1.64062 v -10.45312 q -0.59375,0.5625 -1.5625,1.14062 -0.95313,0.5625 -1.71875,0.84375 v -1.59375 q 1.375,-0.64062 2.40625,-1.5625 1.03125,-0.92187 1.45312,-1.78125 h 1.0625 z m 7.39136,3.70313 h -3.60938 v -1.35938 h 1.96875 v -14.34375 h -1.96875 v -1.35937 h 3.60938 z m 6.99161,-7.71875 v -1.64063 h 5.03125 v 1.64063 z m 15.4783,-1.82813 -8.84375,3.78125 v -1.625 l 7.01562,-2.90625 -7.01562,-2.875 v -1.64062 l 8.84375,3.73437 z m 10.57825,9.76563 q -1.35938,-1.70313 -2.29688,-4 -0.9375,-2.29688 -0.9375,-4.76563 0,-2.15625 0.70313,-4.14062 0.82812,-2.3125 2.53125,-4.59375 h 1.17187 q -1.09375,1.89062 -1.45312,2.70312 -0.54688,1.25 -0.875,2.625 -0.39063,1.70313 -0.39063,3.42188 0,4.375 2.71875,8.75 z m 9.3533,-3.92188 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35937 1.10937,0.95312 v -4.79687 h 1.64063 v 13.35937 z m -5.17188,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07813,0 1.82813,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89063 -0.73438,0.89062 -0.73438,2.8125 z m 8.82886,-1.76563 q 0,-2.35937 0.48437,-3.79687 0.48438,-1.45313 1.4375,-2.23438 0.96875,-0.78125 2.42188,-0.78125 1.07812,0 1.89062,0.4375 0.8125,0.42188 1.32813,1.25 0.53125,0.8125 0.82812,1.98438 
0.3125,1.15625 0.3125,3.14062 0,2.35938 -0.48437,3.8125 -0.48438,1.4375 -1.45313,2.23438 -0.95312,0.78125 -2.42187,0.78125 -1.92188,0 -3.03125,-1.39063 -1.3125,-1.67187 -1.3125,-5.4375 z m 1.67187,0 q 0,3.29688 0.76563,4.39063 0.78125,1.07812 1.90625,1.07812 1.14062,0 1.90625,-1.09375 0.76562,-1.09375 0.76562,-4.375 0,-3.29687 -0.76562,-4.375 -0.76563,-1.07812 -1.92188,-1.07812 -1.125,0 -1.79687,0.95312 -0.85938,1.21875 -0.85938,4.5 z m 17.77778,4.4375 v -3.67187 h -3.64063 v -1.51563 h 3.64063 v -3.64062 h 1.54687 v 3.64062 h 3.64063 v 1.51563 h -3.64063 v 3.67187 z m 12.25012,-2.14062 1.65625,-0.14063 q 0.125,1 0.54687,1.64063 0.4375,0.64062 1.34375,1.04687 0.92188,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35937,-0.46875 -1.1875,-0.79687 -0.54687,-0.20313 -2.39062,-0.64063 -1.82813,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75 -0.46875,-1.67188 0,-1 0.57812,-1.875 0.57813,-0.89062 1.67188,-1.34375 1.10937,-0.45312 2.45312,-0.45312 1.48438,0 2.60938,0.48437 1.14062,0.46875 1.75,1.40625 0.60937,0.92188 0.65625,2.09375 l -1.6875,0.125 q -0.14063,-1.26562 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60938,0 -2.34375,0.59375 -0.73438,0.59375 -0.73438,1.42188 0,0.71875 0.53125,1.17187 0.5,0.46875 2.65625,0.96875 2.15625,0.48438 2.95313,0.84375 1.17187,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.60938,2.01562 -0.60937,0.9375 -1.75,1.46875 -1.14062,0.51563 -2.57812,0.51563 -1.8125,0 -3.04688,-0.53125 -1.21875,-0.53125 -1.92187,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 12.38107,-2.29688 q 0,-2.35937 0.48438,-3.79687 0.48437,-1.45313 1.4375,-2.23438 0.96875,-0.78125 2.42187,-0.78125 1.07813,0 1.89063,0.4375 0.8125,0.42188 1.32812,1.25 0.53125,0.8125 0.82813,1.98438 0.3125,1.15625 0.3125,3.14062 0,2.35938 -0.48438,3.8125 -0.48437,1.4375 -1.45312,2.23438 -0.95313,0.78125 -2.42188,0.78125 -1.92187,0 -3.03125,-1.39063 
-1.3125,-1.67187 -1.3125,-5.4375 z m 1.67188,0 q 0,3.29688 0.76562,4.39063 0.78125,1.07812 1.90625,1.07812 1.14063,0 1.90625,-1.09375 0.76563,-1.09375 0.76563,-4.375 0,-3.29687 -0.76563,-4.375 -0.76562,-1.07812 -1.92187,-1.07812 -1.125,0 -1.79688,0.95312 -0.85937,1.21875 -0.85937,4.5 z m 9.57882,6.59375 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64063 -1.15625,0.98438 l -0.45312,-0.70313 q 0.51562,-0.21875 0.76562,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 16.21036,0 v -1.21875 q -0.90625,1.4375 -2.70312,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48437,-2.625 0.48438,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17188,-0.625 0.875,0 1.54687,0.375 0.6875,0.35937 1.10938,0.95312 v -4.79687 h 1.64062 v 13.35937 z m -5.17187,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07812,0 1.82812,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76562,-2.90625 -0.76563,-0.9375 -1.89063,-0.9375 -1.07812,0 -1.8125,0.89063 -0.73437,0.89062 -0.73437,2.8125 z m 15.00073,4.82812 h -1.64063 v -10.45312 q -0.59375,0.5625 -1.5625,1.14062 -0.95312,0.5625 -1.71875,0.84375 v -1.59375 q 1.375,-0.64062 2.40625,-1.5625 1.03125,-0.92187 1.45313,-1.78125 h 1.0625 z m 13.27777,-2.15625 v -3.67187 h -3.64063 v -1.51563 h 3.64063 v -3.64062 h 1.54687 v 3.64062 h 3.64063 v 1.51563 h -3.64063 v 3.67187 z m 12.25012,-2.14062 1.65625,-0.14063 q 0.125,1 0.54688,1.64063 0.4375,0.64062 1.34375,1.04687 0.92187,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35938,-0.46875 -1.1875,-0.79687 -0.54688,-0.20313 -2.39063,-0.64063 -1.82812,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75 -0.46875,-1.67188 0,-1 0.57813,-1.875 0.57812,-0.89062 1.67187,-1.34375 1.10938,-0.45312 2.45313,-0.45312 1.48437,0 2.60937,0.48437 1.14063,0.46875 1.75,1.40625 0.60938,0.92188 0.65625,2.09375 l -1.6875,0.125 q 
-0.14062,-1.26562 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60937,0 -2.34375,0.59375 -0.73437,0.59375 -0.73437,1.42188 0,0.71875 0.53125,1.17187 0.5,0.46875 2.65625,0.96875 2.15625,0.48438 2.95312,0.84375 1.17188,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.60937,2.01562 -0.60938,0.9375 -1.75,1.46875 -1.14063,0.51563 -2.57813,0.51563 -1.8125,0 -3.04687,-0.53125 -1.21875,-0.53125 -1.92188,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 18.55292,4.29687 h -1.64063 v -10.45312 q -0.59375,0.5625 -1.5625,1.14062 -0.95312,0.5625 -1.71875,0.84375 v -1.59375 q 1.375,-0.64062 2.40625,-1.5625 1.03125,-0.92187 1.45313,-1.78125 h 1.0625 z m 5.7351,3.92188 h -1.1875 q 2.73438,-4.375 2.73438,-8.75 0,-1.71875 -0.39063,-3.39063 -0.3125,-1.375 -0.875,-2.625 -0.35937,-0.82812 -1.46875,-2.73437 h 1.1875 q 1.70313,2.28125 2.53125,4.59375 0.6875,1.98437 0.6875,4.14062 0,2.46875 -0.9375,4.76563 -0.9375,2.29687 -2.28125,4 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path91"
+ d="M 73.383199,62.081364 H 85.761152 V 78.364827 H 73.383199 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path93"
+ d="m 77.995529,75.816074 v -1.890625 h 1.640625 v 1.890625 z m 0,11.46875 v -9.671875 h 1.640625 v 9.671875 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path95"
+ d="M 128.95013,0 H 156.4147 V 33.007874 H 128.95013 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path97"
+ d="m 139.16888,15.466874 v -1.90625 h 1.64062 v 1.90625 z m -2.07813,15.203123 0.3125,-1.390625 q 0.5,0.125 0.78125,0.125 0.5,0 0.73438,-0.328125 0.25,-0.328125 0.25,-1.671875 V 17.248124 h 1.64062 v 10.203123 q 0,1.78125 -0.46875,2.484375 -0.59375,0.90625 -1.96875,0.90625 -0.65625,0 -1.28125,-0.171875 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path99"
+ d="M 0,25.010498 H 128.18896 V 46.490814 H 0 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path101"
+ d="M 13.359375,55.852374 Q 12,54.149249 11.0625,51.852374 10.125,49.555499 10.125,47.086749 q 0,-2.15625 0.703125,-4.140625 0.828125,-2.3125 2.53125,-4.59375 h 1.171875 q -1.09375,1.890625 -1.453125,2.703125 -0.546875,1.25 -0.875,2.625 -0.390625,1.703125 -0.390625,3.421875 0,4.375 2.71875,8.75 z m 2.697052,-8.21875 1.65625,-0.140625 q 0.125,1 0.546875,1.640625 0.4375,0.640625 1.34375,1.046875 0.921875,0.390625 2.0625,0.390625 1,0 1.78125,-0.296875 0.78125,-0.296875 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.359375,-0.46875 -1.1875,-0.796875 -0.546875,-0.203125 -2.390625,-0.640625 -1.828125,-0.453125 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.234375 -0.46875,-0.75 -0.46875,-1.671875 0,-1 0.578125,-1.875 0.578125,-0.890625 1.671875,-1.34375 1.109375,-0.453125 2.453125,-0.453125 1.484375,0 2.609375,0.484375 1.140625,0.46875 1.75,1.40625 0.609375,0.921875 0.65625,2.09375 l -1.6875,0.125 q -0.140625,-1.265625 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.609375,0 -2.34375,0.59375 -0.734375,0.59375 -0.734375,1.421875 0,0.71875 0.53125,1.171875 0.5,0.46875 2.65625,0.96875 2.15625,0.484375 2.953125,0.84375 1.171875,0.53125 1.71875,1.359375 0.5625,0.828125 0.5625,1.90625 0,1.0625 -0.609375,2.015625 -0.609375,0.9375 -1.75,1.46875 -1.140625,0.515625 -2.578125,0.515625 -1.8125,0 -3.046875,-0.53125 -1.21875,-0.53125 -1.921875,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z M 28.4375,45.336749 q 0,-2.359375 0.484375,-3.796875 0.484375,-1.453125 1.4375,-2.234375 0.96875,-0.78125 2.421875,-0.78125 1.078125,0 1.890625,0.4375 0.8125,0.421875 1.328125,1.25 0.53125,0.8125 0.828125,1.984375 0.3125,1.15625 0.3125,3.140625 0,2.359375 -0.484375,3.8125 -0.484375,1.4375 -1.453125,2.234375 -0.953125,0.78125 -2.421875,0.78125 -1.921875,0 -3.03125,-1.390625 -1.3125,-1.671875 -1.3125,-5.4375 z m 1.671875,0 q 0,3.296875 0.765625,4.390625 0.78125,1.078125 1.90625,1.078125 1.140625,0 1.90625,-1.09375 0.765625,-1.09375 0.765625,-4.375 0,-3.296875 
-0.765625,-4.375 -0.765625,-1.078125 -1.921875,-1.078125 -1.125,0 -1.796875,0.953125 -0.859375,1.21875 -0.859375,4.5 z m 9.578842,6.59375 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.359375,0.640625 -1.15625,0.984375 l -0.453125,-0.703125 q 0.515625,-0.21875 0.765625,-0.671875 0.25,-0.4375 0.28125,-1.265625 z m 9.554108,-4.296875 1.65625,-0.140625 q 0.125,1 0.546875,1.640625 0.4375,0.640625 1.34375,1.046875 0.921875,0.390625 2.0625,0.390625 1,0 1.78125,-0.296875 0.78125,-0.296875 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.359375,-0.46875 -1.1875,-0.796875 -0.546875,-0.203125 -2.390625,-0.640625 -1.828125,-0.453125 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.234375 -0.46875,-0.75 -0.46875,-1.671875 0,-1 0.578125,-1.875 0.578125,-0.890625 1.671875,-1.34375 1.109375,-0.453125 2.453125,-0.453125 1.484375,0 2.609375,0.484375 1.140625,0.46875 1.75,1.40625 0.609375,0.921875 0.65625,2.09375 l -1.6875,0.125 q -0.140625,-1.265625 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.609375,0 -2.34375,0.59375 -0.734375,0.59375 -0.734375,1.421875 0,0.71875 0.53125,1.171875 0.5,0.46875 2.65625,0.96875 2.15625,0.484375 2.953125,0.84375 1.171875,0.53125 1.71875,1.359375 0.5625,0.828125 0.5625,1.90625 0,1.0625 -0.609375,2.015625 -0.609375,0.9375 -1.75,1.46875 -1.140625,0.515625 -2.578125,0.515625 -1.8125,0 -3.046875,-0.53125 -1.21875,-0.53125 -1.921875,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 18.552948,4.296875 H 66.154648 V 41.477374 q -0.59375,0.5625 -1.5625,1.140625 -0.953125,0.5625 -1.71875,0.84375 v -1.59375 q 1.375,-0.640625 2.40625,-1.5625 1.03125,-0.921875 1.453125,-1.78125 h 1.0625 z m 5.735092,3.921875 h -1.1875 q 2.734375,-4.375 2.734375,-8.75 0,-1.71875 -0.390625,-3.390625 -0.3125,-1.375 -0.875,-2.625 -0.359375,-0.828125 -1.46875,-2.734375 h 1.1875 q 1.703125,2.28125 2.53125,4.59375 0.6875,1.984375 0.6875,4.140625 0,2.46875 -0.9375,4.765625 -0.9375,2.296875 -2.28125,4 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path103"
+ d="M 54.629919,54.490814 118.06299,68.349083" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path105"
+ d="M 54.629919,54.490814 112.20125,67.068466" />
+ <path
+ style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt"
+ inkscape:connector-curvature="0"
+ id="path107"
+ d="m 111.84871,68.682134 4.78606,-0.645081 -4.08098,-2.58226 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path109"
+ d="M 153.25985,196.8609 V 121.74278" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path111"
+ d="M 153.25985,196.8609 V 127.74278" />
+ <path
+ style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt"
+ inkscape:connector-curvature="0"
+ id="path113"
+ d="m 154.91157,127.74278 -1.65172,-4.5381 -1.65173,4.5381 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path115"
+ d="m 82.181099,203.58267 h 63.433071 v 31.2756 H 82.181099 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path117"
+ d="m 98.681099,230.50267 v -1.21875 q -0.90625,1.4375 -2.70312,1.4375 -1.15625,0 -2.125,-0.64063 -0.96875,-0.64062 -1.5,-1.78125 -0.53125,-1.14062 -0.53125,-2.625 0,-1.45312 0.48437,-2.625 0.48438,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17188,-0.625 0.875,0 1.54687,0.375 0.6875,0.35938 1.10938,0.95313 v -4.79688 h 1.640621 v 13.35938 z m -5.17187,-4.82813 q 0,1.85938 0.78125,2.78125 0.78125,0.92188 1.84375,0.92188 1.07812,0 1.82812,-0.875 0.75,-0.89063 0.75,-2.6875 0,-1.98438 -0.76562,-2.90625 -0.76563,-0.9375 -1.89063,-0.9375 -1.07812,0 -1.8125,0.89062 -0.73437,0.89063 -0.73437,2.8125 z m 8.828841,-1.76562 q 0,-2.35938 0.48437,-3.79688 0.48438,-1.45312 1.4375,-2.23437 0.96875,-0.78125 2.42188,-0.78125 1.07812,0 1.89062,0.4375 0.8125,0.42187 1.32813,1.25 0.53125,0.8125 0.82812,1.98437 0.3125,1.15625 0.3125,3.14063 0,2.35937 -0.48437,3.8125 -0.48438,1.4375 -1.45313,2.23437 -0.95312,0.78125 -2.42187,0.78125 -1.92188,0 -3.03125,-1.39062 -1.3125,-1.67188 -1.3125,-5.4375 z m 1.67187,0 q 0,3.29687 0.76563,4.39062 0.78125,1.07813 1.90625,1.07813 1.14062,0 1.90625,-1.09375 0.76562,-1.09375 0.76562,-4.375 0,-3.29688 -0.76562,-4.375 -0.76563,-1.07813 -1.92188,-1.07813 -1.125,0 -1.79687,0.95313 -0.85938,1.21875 -0.85938,4.5 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path119"
+ d="m 115.32809,160.28871 h 71.55905 v 31.27559 h -71.55905 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path121"
+ d="m 131.82809,187.20871 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64063 -0.96875,-0.64062 -1.5,-1.78125 -0.53125,-1.14062 -0.53125,-2.625 0,-1.45312 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35938 1.10937,0.95313 v -4.79688 h 1.64063 v 13.35938 z m -5.17188,-4.82813 q 0,1.85938 0.78125,2.78125 0.78125,0.92188 1.84375,0.92188 1.07813,0 1.82813,-0.875 0.75,-0.89063 0.75,-2.6875 0,-1.98438 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89062 -0.73438,0.89063 -0.73438,2.8125 z m 15.00072,4.82813 h -1.64062 v -10.45313 q -0.59375,0.5625 -1.5625,1.14063 -0.95313,0.5625 -1.71875,0.84375 v -1.59375 q 1.375,-0.64063 2.40625,-1.5625 1.03125,-0.92188 1.45312,-1.78125 h 1.0625 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path123"
+ d="m 171.88189,57.732284 h 440 v 26.708664 h -440 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path125"
+ d="M 182.61627,84.652284 V 71.292909 h 1.76562 v 13.359375 z m 4.6833,0 v -9.671875 h 1.46875 v 1.375 q 1.0625,-1.59375 3.07813,-1.59375 0.875,0 1.60937,0.3125 0.73438,0.3125 1.09375,0.828125 0.375,0.5 0.51563,1.203125 0.0937,0.453125 0.0937,1.59375 v 5.953125 h -1.64063 v -5.890625 q 0,-1 -0.20312,-1.484375 -0.1875,-0.5 -0.67188,-0.796875 -0.48437,-0.296875 -1.14062,-0.296875 -1.04688,0 -1.8125,0.671875 -0.75,0.65625 -0.75,2.515625 v 5.28125 z m 16.64135,0 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.640625 -0.96875,-0.640625 -1.5,-1.78125 -0.53125,-1.140625 -0.53125,-2.625 0,-1.453125 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.359375 1.10937,0.953125 v -4.796875 h 1.64063 v 13.359375 z m -5.17188,-4.828125 q 0,1.859375 0.78125,2.78125 0.78125,0.921875 1.84375,0.921875 1.07813,0 1.82813,-0.875 0.75,-0.890625 0.75,-2.6875 0,-1.984375 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.890625 -0.73438,0.890625 -0.73438,2.8125 z m 15.90697,1.71875 1.6875,0.203125 q -0.40625,1.484375 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.296875 -1.23438,-1.3125 -1.23438,-3.671875 0,-2.453125 1.25,-3.796875 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.328125 1.23437,1.3125 1.23437,3.703125 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.453125 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.484375 1.01563,-1.515625 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.828125 -0.78125,-0.953125 -2.03125,-0.953125 -1.125,0 -1.90625,0.765625 -0.76562,0.75 -0.84375,2.015625 z m 8.04759,5.765625 3.53125,-5.03125 -3.26562,-4.640625 h 2.04687 l 1.48438,2.265625 q 0.42187,0.640625 0.67187,1.078125 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.546875 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 15.21456,-4.296875 1.65625,-0.140625 q 0.125,1 
0.54687,1.640625 0.4375,0.640625 1.34375,1.046875 0.92188,0.390625 2.0625,0.390625 1,0 1.78125,-0.296875 0.78125,-0.296875 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35937,-0.46875 -1.1875,-0.796875 -0.54687,-0.203125 -2.39062,-0.640625 -1.82813,-0.453125 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.234375 -0.46875,-0.75 -0.46875,-1.671875 0,-1 0.57812,-1.875 0.57813,-0.890625 1.67188,-1.34375 1.10937,-0.453125 2.45312,-0.453125 1.48438,0 2.60938,0.484375 1.14062,0.46875 1.75,1.40625 0.60937,0.921875 0.65625,2.09375 l -1.6875,0.125 q -0.14063,-1.265625 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60938,0 -2.34375,0.59375 -0.73438,0.59375 -0.73438,1.421875 0,0.71875 0.53125,1.171875 0.5,0.46875 2.65625,0.96875 2.15625,0.484375 2.95313,0.84375 1.17187,0.53125 1.71875,1.359375 0.5625,0.828125 0.5625,1.90625 0,1.0625 -0.60938,2.015625 -0.60937,0.9375 -1.75,1.46875 -1.14062,0.515625 -2.57812,0.515625 -1.8125,0 -3.04688,-0.53125 -1.21875,-0.53125 -1.92187,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 12.83418,8 v -13.375 h 1.48438 v 1.25 q 0.53125,-0.734375 1.1875,-1.09375 0.67187,-0.375 1.625,-0.375 1.23437,0 2.17187,0.640625 0.95313,0.625 1.4375,1.796875 0.48438,1.15625 0.48438,2.546875 0,1.484375 -0.53125,2.671875 -0.53125,1.1875 -1.54688,1.828125 -1.01562,0.625 -2.14062,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.703125 z m 1.48438,-8.484375 q 0,1.859375 0.75,2.765625 0.76562,0.890625 1.82812,0.890625 1.09375,0 1.875,-0.921875 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76562,-2.765625 -0.75,-0.921875 -1.8125,-0.921875 -1.04688,0 -1.85938,0.984375 -0.79687,0.96875 -0.79687,2.84375 z m 15.20385,3.59375 q -0.92187,0.765625 -1.76562,1.09375 -0.82813,0.3125 -1.79688,0.3125 -1.59375,0 -2.45312,-0.78125 -0.85938,-0.78125 -0.85938,-1.984375 0,-0.71875 0.32813,-1.296875 0.32812,-0.59375 0.84375,-0.9375 0.53125,-0.359375 1.1875,-0.546875 0.46875,-0.125 1.45312,-0.25 1.98438,-0.234375 2.92188,-0.5625 
0.0156,-0.34375 0.0156,-0.421875 0,-1 -0.46875,-1.421875 -0.625,-0.546875 -1.875,-0.546875 -1.15625,0 -1.70312,0.40625 -0.54688,0.40625 -0.8125,1.421875 l -1.60938,-0.21875 q 0.21875,-1.015625 0.71875,-1.640625 0.5,-0.640625 1.45313,-0.984375 0.95312,-0.34375 2.1875,-0.34375 1.25,0 2.01562,0.296875 0.78125,0.28125 1.14063,0.734375 0.375,0.4375 0.51562,1.109375 0.0781,0.421875 0.0781,1.515625 v 2.1875 q 0,2.28125 0.10937,2.890625 0.10938,0.59375 0.40625,1.15625 h -1.70312 q -0.26563,-0.515625 -0.32813,-1.1875 z m -0.14062,-3.671875 q -0.89063,0.375 -2.67188,0.625 -1.01562,0.140625 -1.4375,0.328125 -0.42187,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45313,0.4375 0.9375,0 1.67187,-0.40625 0.75,-0.421875 1.09375,-1.140625 0.26563,-0.5625 0.26563,-1.640625 z m 10.51636,1.3125 1.60937,0.21875 q -0.26562,1.65625 -1.35937,2.609375 -1.07813,0.9375 -2.67188,0.9375 -1.98437,0 -3.1875,-1.296875 -1.20312,-1.296875 -1.20312,-3.71875 0,-1.578125 0.51562,-2.75 0.51563,-1.171875 1.57813,-1.75 1.0625,-0.59375 2.3125,-0.59375 1.57812,0 2.57812,0.796875 1,0.796875 1.28125,2.265625 l -1.59375,0.234375 q -0.23437,-0.96875 -0.8125,-1.453125 -0.57812,-0.5 -1.39062,-0.5 -1.23438,0 -2.01563,0.890625 -0.78125,0.890625 -0.78125,2.8125 0,1.953125 0.75,2.84375 0.75,0.875 1.95313,0.875 0.96875,0 1.60937,-0.59375 0.65625,-0.59375 0.82813,-1.828125 z m 9.64062,0.4375 1.6875,0.203125 q -0.40625,1.484375 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.296875 -1.23437,-1.3125 -1.23437,-3.671875 0,-2.453125 1.25,-3.796875 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.328125 1.23438,1.3125 1.23438,3.703125 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.453125 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.484375 1.01562,-1.515625 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.828125 -0.78125,-0.953125 -2.03125,-0.953125 -1.125,0 -1.90625,0.765625 -0.76563,0.75 -0.84375,2.015625 z m 
14.10589,-0.07813 v -1.531245 l 8.84375,-3.734375 v 1.640625 l -7.01562,2.875 7.01562,2.90625 v 1.625 z m 10.66059,2.3125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.015625 0.6875,0.609375 1.65625,0.609375 1.15625,0 1.95312,-0.796875 0.79688,-0.796875 0.79688,-1.984375 0,-1.125 -0.73438,-1.859375 -0.73437,-0.734375 -1.875,-0.734375 -0.46875,0 -1.15625,0.171875 l 0.1875,-1.4375 q 0.15625,0.01563 0.26563,0.01563 1.04687,0 1.875,-0.546875 0.84375,-0.546875 0.84375,-1.671875 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.609375 -0.64063,0.59375 -0.8125,1.796875 l -1.64063,-0.296875 q 0.29688,-1.640625 1.35938,-2.546875 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.859375 -0.46875,1.578125 -0.46875,0.703125 -1.375,1.125 1.1875,0.28125 1.84375,1.140625 0.65625,0.859375 0.65625,2.15625 0,1.734375 -1.28125,2.953125 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.046875 -1.15625,-1.046875 -1.32812,-2.71875 z m 9.73507,3.53125 3.53125,-5.03125 -3.26562,-4.640625 h 2.04687 l 1.48438,2.265625 q 0.42187,0.640625 0.67187,1.078125 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.546875 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 9.96875,-3.53125 1.64063,-0.21875 q 0.28125,1.40625 0.95312,2.015625 0.6875,0.609375 1.65625,0.609375 1.15625,0 1.95313,-0.796875 0.79687,-0.796875 0.79687,-1.984375 0,-1.125 -0.73437,-1.859375 -0.73438,-0.734375 -1.875,-0.734375 -0.46875,0 -1.15625,0.171875 l 0.1875,-1.4375 q 0.15625,0.01563 0.26562,0.01563 1.04688,0 1.875,-0.546875 0.84375,-0.546875 0.84375,-1.671875 0,-0.90625 -0.60937,-1.5 -0.60938,-0.59375 -1.57813,-0.59375 -0.95312,0 -1.59375,0.609375 -0.64062,0.59375 -0.8125,1.796875 l -1.64062,-0.296875 q 0.29687,-1.640625 1.35937,-2.546875 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92188,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.859375 -0.46875,1.578125 
-0.46875,0.703125 -1.375,1.125 1.1875,0.28125 1.84375,1.140625 0.65625,0.859375 0.65625,2.15625 0,1.734375 -1.28125,2.953125 -1.26562,1.21875 -3.21875,1.21875 -1.76562,0 -2.92187,-1.046875 -1.15625,-1.046875 -1.32813,-2.71875 z m 9.73508,3.53125 3.53125,-5.03125 -3.26563,-4.640625 h 2.04688 l 1.48437,2.265625 q 0.42188,0.640625 0.67188,1.078125 0.40625,-0.59375 0.73437,-1.0625 l 1.64063,-2.28125 h 1.95312 l -3.34375,4.546875 3.59375,5.125 h -2.01562 l -1.98438,-3 -0.51562,-0.8125 -2.54688,3.8125 z m 10.8125,0 v -8.40625 h -1.45313 v -1.265625 h 1.45313 v -1.03125 q 0,-0.96875 0.17187,-1.453125 0.23438,-0.640625 0.82813,-1.03125 0.59375,-0.390625 1.67187,-0.390625 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.09375 -0.95312,-0.09375 -0.75,0 -1.0625,0.328125 -0.3125,0.3125 -0.3125,1.1875 v 0.890625 h 1.89062 v 1.265625 h -1.89062 v 8.406255 z m 4.33957,-3.53125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.015625 0.6875,0.609375 1.65625,0.609375 1.15625,0 1.95312,-0.796875 0.79688,-0.796875 0.79688,-1.984375 0,-1.125 -0.73438,-1.859375 -0.73437,-0.734375 -1.875,-0.734375 -0.46875,0 -1.15625,0.171875 l 0.1875,-1.4375 q 0.15625,0.01563 0.26563,0.01563 1.04687,0 1.875,-0.546875 0.84375,-0.546875 0.84375,-1.671875 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.609375 -0.64063,0.59375 -0.8125,1.796875 L 346.225,74.699159 q 0.29688,-1.640625 1.35938,-2.546875 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.859375 -0.46875,1.578125 -0.46875,0.703125 -1.375,1.125 1.1875,0.28125 1.84375,1.140625 0.65625,0.859375 0.65625,2.15625 0,1.734375 -1.28125,2.953125 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.046875 -1.15625,-1.046875 -1.32812,-2.71875 z m 18.98508,1.953125 v 1.57813 h -8.82813 q -0.0156,-0.59375 0.1875,-1.140625 0.34375,-0.90625 1.07813,-1.78125 0.75,-0.875 2.15625,-2.015625 2.17187,-1.78125 2.9375,-2.828125 0.76562,-1.046875 0.76562,-1.96875 0,-0.984375 
-0.70312,-1.640625 -0.6875,-0.671875 -1.8125,-0.671875 -1.1875,0 -1.90625,0.71875 -0.70313,0.703125 -0.70313,1.953125 l -1.6875,-0.171875 q 0.17188,-1.890625 1.29688,-2.875 1.14062,-0.984375 3.03125,-0.984375 1.92187,0 3.04687,1.0625 1.125,1.0625 1.125,2.640625 0,0.796875 -0.32812,1.578125 -0.32813,0.78125 -1.09375,1.640625 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.234375 -1.89063,1.6875 -0.42187,0.4375 -0.6875,0.875 z m 10.84448,-4.265625 -8.84375,3.78125 v -1.625 l 7.01562,-2.90625 -7.01562,-2.875 v -1.64062 l 8.84375,3.734375 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path127"
+ d="m 167.14961,208.14961 h 309.7323 v 42.14172 h -309.7323 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path129"
+ d="m 177.88398,235.06961 v -13.35938 h 1.76562 v 13.35938 z m 4.6833,0 v -9.67188 h 1.46875 v 1.375 q 1.0625,-1.59375 3.07813,-1.59375 0.875,0 1.60937,0.3125 0.73438,0.3125 1.09375,0.82813 0.375,0.5 0.51563,1.20312 0.0937,0.45313 0.0937,1.59375 v 5.95313 h -1.64063 v -5.89063 q 0,-1 -0.20312,-1.48437 -0.1875,-0.5 -0.67188,-0.79688 -0.48437,-0.29687 -1.14062,-0.29687 -1.04688,0 -1.8125,0.67187 -0.75,0.65625 -0.75,2.51563 v 5.28125 z m 16.64135,0 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64063 -0.96875,-0.64062 -1.5,-1.78125 -0.53125,-1.14062 -0.53125,-2.625 0,-1.45312 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35938 1.10937,0.95313 v -4.79688 h 1.64063 v 13.35938 z m -5.17188,-4.82813 q 0,1.85938 0.78125,2.78125 0.78125,0.92188 1.84375,0.92188 1.07813,0 1.82813,-0.875 0.75,-0.89063 0.75,-2.6875 0,-1.98438 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89062 -0.73438,0.89063 -0.73438,2.8125 z m 15.90697,1.71875 1.6875,0.20313 q -0.40625,1.48437 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29688 -1.23438,-1.3125 -1.23438,-3.67187 0,-2.45313 1.25,-3.79688 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32813 1.23437,1.3125 1.23437,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48438 1.01563,-1.51563 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76562,0.75 -0.84375,2.01562 z m 8.0476,5.76563 3.53125,-5.03125 -3.26563,-4.64063 h 2.04688 l 1.48437,2.26563 q 0.42188,0.64062 0.67188,1.07812 0.40625,-0.59375 0.73437,-1.0625 l 1.64063,-2.28125 h 1.95312 l -3.34375,4.54688 3.59375,5.125 h -2.01562 l -1.98438,-3 -0.51562,-0.8125 -2.54688,3.8125 z m 15.21455,-4.29688 1.65625,-0.14062 q 0.125,1 0.54687,1.64062 0.4375,0.64063 1.34375,1.04688 0.92188,0.39062 
2.0625,0.39062 1,0 1.78125,-0.29687 0.78125,-0.29688 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35937,-0.46875 -1.1875,-0.79688 -0.54687,-0.20312 -2.39062,-0.64062 -1.82813,-0.45313 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23438 -0.46875,-0.75 -0.46875,-1.67187 0,-1 0.57812,-1.875 0.57813,-0.89063 1.67188,-1.34375 1.10937,-0.45313 2.45312,-0.45313 1.48438,0 2.60938,0.48438 1.14062,0.46875 1.75,1.40625 0.60937,0.92187 0.65625,2.09375 l -1.6875,0.125 q -0.14063,-1.26563 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60938,0 -2.34375,0.59375 -0.73438,0.59375 -0.73438,1.42187 0,0.71875 0.53125,1.17188 0.5,0.46875 2.65625,0.96875 2.15625,0.48437 2.95313,0.84375 1.17187,0.53125 1.71875,1.35937 0.5625,0.82813 0.5625,1.90625 0,1.0625 -0.60938,2.01563 -0.60937,0.9375 -1.75,1.46875 -1.14062,0.51562 -2.57812,0.51562 -1.8125,0 -3.04688,-0.53125 -1.21875,-0.53125 -1.92187,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 12.8342,8 v -13.375 h 1.48437 v 1.25 q 0.53125,-0.73437 1.1875,-1.09375 0.67188,-0.375 1.625,-0.375 1.23438,0 2.17188,0.64063 0.95312,0.625 1.4375,1.79687 0.48437,1.15625 0.48437,2.54688 0,1.48437 -0.53125,2.67187 -0.53125,1.1875 -1.54687,1.82813 -1.01563,0.625 -2.14063,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.70312 z m 1.48437,-8.48437 q 0,1.85937 0.75,2.76562 0.76563,0.89063 1.82813,0.89063 1.09375,0 1.875,-0.92188 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76563,-2.76562 -0.75,-0.92188 -1.8125,-0.92188 -1.04687,0 -1.85937,0.98438 -0.79688,0.96875 -0.79688,2.84375 z m 15.20386,3.59375 q -0.92188,0.76562 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.98438 0,-0.71875 0.32812,-1.29687 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.35938 1.1875,-0.54688 0.46875,-0.125 1.45313,-0.25 1.98437,-0.23437 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.42187 0,-1 -0.46875,-1.42188 -0.625,-0.54687 -1.875,-0.54687 -1.15625,0 -1.70313,0.40625 
-0.54687,0.40625 -0.8125,1.42187 l -1.60937,-0.21875 q 0.21875,-1.01562 0.71875,-1.64062 0.5,-0.64063 1.45312,-0.98438 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.29688 0.78125,0.28125 1.14062,0.73437 0.375,0.4375 0.51563,1.10938 0.0781,0.42187 0.0781,1.51562 v 2.1875 q 0,2.28125 0.10938,2.89063 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.51563 -0.32812,-1.1875 z m -0.14063,-3.67188 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.14063 -1.4375,0.32813 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.42188 1.09375,-1.14063 0.26562,-0.5625 0.26562,-1.64062 z m 10.51633,1.3125 1.60938,0.21875 q -0.26563,1.65625 -1.35938,2.60938 -1.07812,0.9375 -2.67187,0.9375 -1.98438,0 -3.1875,-1.29688 -1.20313,-1.29687 -1.20313,-3.71875 0,-1.57812 0.51563,-2.75 0.51562,-1.17187 1.57812,-1.75 1.0625,-0.59375 2.3125,-0.59375 1.57813,0 2.57813,0.79688 1,0.79687 1.28125,2.26562 l -1.59375,0.23438 q -0.23438,-0.96875 -0.8125,-1.45313 -0.57813,-0.5 -1.39063,-0.5 -1.23437,0 -2.01562,0.89063 -0.78125,0.89062 -0.78125,2.8125 0,1.95312 0.75,2.84375 0.75,0.875 1.95312,0.875 0.96875,0 1.60938,-0.59375 0.65625,-0.59375 0.82812,-1.82813 z m 9.64063,0.4375 1.6875,0.20313 q -0.40625,1.48437 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29688 -1.23438,-1.3125 -1.23438,-3.67187 0,-2.45313 1.25,-3.79688 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32813 1.23437,1.3125 1.23437,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48438 1.01563,-1.51563 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76562,0.75 -0.84375,2.01562 z m 14.1059,-0.0781 v -1.53125 l 8.84375,-3.73438 v 1.64063 l -7.01563,2.875 7.01563,2.90625 v 1.625 z m 19.26995,4.26562 v 1.57813 h -8.82812 q -0.0156,-0.59375 
0.1875,-1.14063 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01562 2.17188,-1.78125 2.9375,-2.82813 0.76563,-1.04687 0.76563,-1.96875 0,-0.98437 -0.70313,-1.64062 -0.6875,-0.67188 -1.8125,-0.67188 -1.1875,0 -1.90625,0.71875 -0.70312,0.70313 -0.70312,1.95313 l -1.6875,-0.17188 q 0.17187,-1.89062 1.29687,-2.875 1.14063,-0.98437 3.03125,-0.98437 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64062 0,0.79688 -0.32813,1.57813 -0.32812,0.78125 -1.09375,1.64062 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23438 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 1.12574,1.57813 3.53125,-5.03125 -3.26563,-4.64063 h 2.04688 l 1.48437,2.26563 q 0.42188,0.64062 0.67188,1.07812 0.40625,-0.59375 0.73437,-1.0625 l 1.64063,-2.28125 h 1.95312 l -3.34375,4.54688 3.59375,5.125 h -2.01562 l -1.98438,-3 -0.51562,-0.8125 -2.54688,3.8125 z m 18.57812,-1.57813 v 1.57813 h -8.82812 q -0.0156,-0.59375 0.1875,-1.14063 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01562 2.17188,-1.78125 2.9375,-2.82813 0.76563,-1.04687 0.76563,-1.96875 0,-0.98437 -0.70313,-1.64062 -0.6875,-0.67188 -1.8125,-0.67188 -1.1875,0 -1.90625,0.71875 -0.70312,0.70313 -0.70312,1.95313 l -1.6875,-0.17188 q 0.17187,-1.89062 1.29687,-2.875 1.14063,-0.98437 3.03125,-0.98437 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64062 0,0.79688 -0.32813,1.57813 -0.32812,0.78125 -1.09375,1.64062 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23438 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 1.1257,1.57813 3.53125,-5.03125 -3.26562,-4.64063 h 2.04687 l 1.48438,2.26563 q 0.42187,0.64062 0.67187,1.07812 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.54688 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 10.8125,0 v -8.40625 h -1.45312 v -1.26563 h 1.45312 v -1.03125 q 0,-0.96875 0.17188,-1.45312 0.23437,-0.64063 0.82812,-1.03125 0.59375,-0.39063 1.67188,-0.39063 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95313,-0.0937 -0.75,0 -1.0625,0.32813 
-0.3125,0.3125 -0.3125,1.1875 v 0.89062 h 1.89063 v 1.26563 h -1.89063 v 8.40625 z m 4.33957,-3.53125 1.64063,-0.21875 q 0.28125,1.40625 0.95312,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95313,-0.79688 0.79687,-0.79687 0.79687,-1.98437 0,-1.125 -0.73437,-1.85938 -0.73438,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26562,0.0156 1.04688,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60937,-1.5 -0.60938,-0.59375 -1.57813,-0.59375 -0.95312,0 -1.59375,0.60937 -0.64062,0.59375 -0.8125,1.79688 l -1.64062,-0.29688 q 0.29687,-1.64062 1.35937,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92188,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26562,1.21875 -3.21875,1.21875 -1.76562,0 -2.92187,-1.04687 -1.15625,-1.04688 -1.32813,-2.71875 z m 18.98508,1.95312 v 1.57813 h -8.82812 q -0.0156,-0.59375 0.1875,-1.14063 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01562 2.17188,-1.78125 2.9375,-2.82813 0.76563,-1.04687 0.76563,-1.96875 0,-0.98437 -0.70313,-1.64062 -0.6875,-0.67188 -1.8125,-0.67188 -1.1875,0 -1.90625,0.71875 -0.70312,0.70313 -0.70312,1.95313 l -1.6875,-0.17188 q 0.17187,-1.89062 1.29687,-2.875 1.14063,-0.98437 3.03125,-0.98437 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64062 0,0.79688 -0.32813,1.57813 -0.32812,0.78125 -1.09375,1.64062 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23438 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 10.84448,-4.26562 -8.84375,3.78125 v -1.625 l 7.01563,-2.90625 -7.01563,-2.875 v -1.64063 l 8.84375,3.73438 z" />
+</svg>
diff --git a/mlir/docs/includes/img/view-operation.svg b/mlir/docs/includes/img/view-operation.svg
new file mode 100644
index 00000000000..f4d622ee263
--- /dev/null
+++ b/mlir/docs/includes/img/view-operation.svg
@@ -0,0 +1,580 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ version="1.1"
+ viewBox="0 0 781.88983 360.73489"
+ stroke-miterlimit="10"
+ id="svg213"
+ sodipodi:docname="view-operation.svg"
+ width="781.88983"
+ height="360.73489"
+ style="fill:none;stroke:none;stroke-linecap:square;stroke-miterlimit:10"
+ inkscape:version="0.92.2pre0 (973e216, 2017-07-25)">
+ <metadata
+ id="metadata219">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs217" />
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="2312"
+ inkscape:window-height="1165"
+ id="namedview215"
+ showgrid="false"
+ fit-margin-top="0"
+ fit-margin-left="0"
+ fit-margin-right="0"
+ fit-margin-bottom="0"
+ inkscape:zoom="0.9"
+ inkscape:cx="514.61205"
+ inkscape:cy="336.45539"
+ inkscape:window-x="0"
+ inkscape:window-y="0"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="svg213" />
+ <clipPath
+ id="p.0">
+ <path
+ d="M 0,0 H 1280 V 960 H 0 Z"
+ id="path2"
+ inkscape:connector-curvature="0"
+ style="clip-rule:nonzero" />
+ </clipPath>
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path5"
+ d="M -12.118111,-20.430447 H 1267.8819 V 939.56955 H -12.118111 Z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path7"
+ d="M 94.598429,118.46719 H 118.063 v 26.70865 H 94.598429 Z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path9"
+ d="M 94.598429,118.46719 H 118.063 v 26.70865 H 94.598429 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path11"
+ d="m 111.1453,137.55402 q -0.92188,0.76562 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.98438 0,-0.71875 0.32812,-1.29687 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.35938 1.1875,-0.54688 0.46875,-0.125 1.45313,-0.25 1.98437,-0.23437 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.42187 0,-1 -0.46875,-1.42188 -0.625,-0.54687 -1.875,-0.54687 -1.15625,0 -1.70313,0.40625 -0.54687,0.40625 -0.8125,1.42187 l -1.60937,-0.21875 q 0.21875,-1.01562 0.71875,-1.64062 0.5,-0.64063 1.45312,-0.98438 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.29688 0.78125,0.28125 1.14062,0.73437 0.375,0.4375 0.51563,1.10938 0.0781,0.42187 0.0781,1.51562 v 2.1875 q 0,2.28125 0.10938,2.89063 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.51563 -0.32812,-1.1875 z m -0.14063,-3.67188 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.14063 -1.4375,0.32813 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.42188 1.09375,-1.14063 0.26562,-0.5625 0.26562,-1.64062 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path13"
+ d="m 118.06299,118.46719 h 23.46457 v 26.70865 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path15"
+ d="m 118.06299,118.46719 h 23.46457 v 26.70865 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path17"
+ d="m 129.79737,138.74152 h -1.51563 v -13.35938 h 1.64063 v 4.76563 q 1.04687,-1.29688 2.65625,-1.29688 0.89062,0 1.6875,0.35938 0.79687,0.35937 1.3125,1.01562 0.51562,0.64063 0.79687,1.5625 0.29688,0.92188 0.29688,1.96875 0,2.48438 -1.23438,3.84375 -1.21875,1.35938 -2.95312,1.35938 -1.70313,0 -2.6875,-1.4375 z m -0.0156,-4.90625 q 0,1.73437 0.48438,2.51562 0.76562,1.26563 2.09375,1.26563 1.07812,0 1.85937,-0.9375 0.78125,-0.9375 0.78125,-2.78125 0,-1.89063 -0.75,-2.79688 -0.75,-0.90625 -1.82812,-0.90625 -1.0625,0 -1.85938,0.9375 -0.78125,0.9375 -0.78125,2.70313 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path19"
+ d="m 141.52757,118.46719 h 23.46455 v 26.70865 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path21"
+ d="m 141.52757,118.46719 h 23.46455 v 26.70865 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path23"
+ d="m 158.07444,135.19464 1.60937,0.21875 q -0.26562,1.65625 -1.35937,2.60937 -1.07813,0.9375 -2.67188,0.9375 -1.98437,0 -3.1875,-1.29687 -1.20312,-1.29688 -1.20312,-3.71875 0,-1.57813 0.51562,-2.75 0.51563,-1.17188 1.57813,-1.75 1.0625,-0.59375 2.3125,-0.59375 1.57812,0 2.57812,0.79687 1,0.79688 1.28125,2.26563 l -1.59375,0.23437 q -0.23437,-0.96875 -0.8125,-1.45312 -0.57812,-0.5 -1.39062,-0.5 -1.23438,0 -2.01563,0.89062 -0.78125,0.89063 -0.78125,2.8125 0,1.95313 0.75,2.84375 0.75,0.875 1.95313,0.875 0.96875,0 1.60937,-0.59375 0.65625,-0.59375 0.82813,-1.82812 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path25"
+ d="M 94.598429,145.17585 H 118.063 v 26.70866 H 94.598429 Z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path27"
+ d="M 94.598429,145.17585 H 118.063 v 26.70866 H 94.598429 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path29"
+ d="m 111.09843,165.45018 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64063 -0.96875,-0.64062 -1.5,-1.78125 -0.53125,-1.14062 -0.53125,-2.625 0,-1.45312 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35938 1.10937,0.95313 v -4.79688 h 1.64063 v 13.35938 z m -5.17188,-4.82813 q 0,1.85938 0.78125,2.78125 0.78125,0.92188 1.84375,0.92188 1.07813,0 1.82813,-0.875 0.75,-0.89063 0.75,-2.6875 0,-1.98438 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89062 -0.73438,0.89063 -0.73438,2.8125 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path31"
+ d="m 118.06299,145.17585 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path33"
+ d="m 118.06299,145.17585 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path35"
+ d="m 134.92237,162.34081 1.6875,0.20312 q -0.40625,1.48438 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29687 -1.23438,-1.3125 -1.23438,-3.67188 0,-2.45312 1.25,-3.79687 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32812 1.23437,1.3125 1.23437,3.70313 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45312 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48437 1.01563,-1.51562 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82813 -0.78125,-0.95312 -2.03125,-0.95312 -1.125,0 -1.90625,0.76562 -0.76562,0.75 -0.84375,2.01563 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path37"
+ d="m 141.52757,145.17585 h 23.46455 v 26.70866 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path39"
+ d="m 141.52757,145.17585 h 23.46455 v 26.70866 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path41"
+ d="m 152.29319,165.45018 v -8.40625 h -1.45313 v -1.26563 h 1.45313 v -1.03125 q 0,-0.96875 0.17187,-1.45312 0.23438,-0.64063 0.82813,-1.03125 0.59375,-0.39063 1.67187,-0.39063 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95312,-0.0937 -0.75,0 -1.0625,0.32813 -0.3125,0.3125 -0.3125,1.1875 v 0.89062 h 1.89062 v 1.26563 h -1.89062 v 8.40625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path43"
+ d="M 94.598429,171.88451 H 118.063 v 26.70866 H 94.598429 Z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path45"
+ d="M 94.598429,171.88451 H 118.063 v 26.70866 H 94.598429 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path47"
+ d="m 104.5203,192.95572 1.59375,0.23437 q 0.10937,0.75 0.5625,1.07813 0.60937,0.45312 1.67187,0.45312 1.14063,0 1.75,-0.45312 0.625,-0.45313 0.84375,-1.26563 0.125,-0.5 0.10938,-2.10937 -1.0625,1.26562 -2.67188,1.26562 -2,0 -3.09375,-1.4375 -1.09375,-1.4375 -1.09375,-3.45312 0,-1.39063 0.5,-2.5625 0.51563,-1.17188 1.45313,-1.79688 0.95312,-0.64062 2.25,-0.64062 1.70312,0 2.8125,1.375 v -1.15625 h 1.51562 v 8.35937 q 0,2.26563 -0.46875,3.20313 -0.45312,0.9375 -1.45312,1.48437 -0.98438,0.54688 -2.45313,0.54688 -1.71875,0 -2.79687,-0.78125 -1.0625,-0.76563 -1.03125,-2.34375 z m 1.35937,-5.8125 q 0,1.90625 0.75,2.78125 0.76563,0.875 1.90625,0.875 1.125,0 1.89063,-0.85938 0.76562,-0.875 0.76562,-2.73437 0,-1.78125 -0.79687,-2.67188 -0.78125,-0.90625 -1.89063,-0.90625 -1.09375,0 -1.85937,0.89063 -0.76563,0.875 -0.76563,2.625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path49"
+ d="m 118.06299,171.88451 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path51"
+ d="m 118.06299,171.88451 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path53"
+ d="m 128.29737,192.15885 v -13.35938 h 1.64062 v 4.79688 q 1.14063,-1.32813 2.89063,-1.32813 1.07812,0 1.85937,0.42188 0.79688,0.42187 1.14063,1.17187 0.34375,0.75 0.34375,2.17188 v 6.125 h -1.64063 v -6.125 q 0,-1.23438 -0.53125,-1.79688 -0.53125,-0.5625 -1.51562,-0.5625 -0.71875,0 -1.35938,0.39063 -0.64062,0.375 -0.92187,1.01562 -0.26563,0.64063 -0.26563,1.78125 v 5.29688 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path55"
+ d="m 141.52757,171.88451 h 23.46455 v 26.70866 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path57"
+ d="m 141.52757,171.88451 h 23.46455 v 26.70866 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path59"
+ d="m 152.42181,180.69009 v -1.89063 h 1.64062 v 1.89063 z m 0,11.46875 v -9.67188 h 1.64062 v 9.67188 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path61"
+ d="m 22.598423,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path63"
+ d="m 22.598423,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path65"
+ d="m 39.145299,27.086826 q -0.921875,0.765625 -1.765625,1.09375 -0.828125,0.3125 -1.796875,0.3125 -1.59375,0 -2.453125,-0.78125 -0.859375,-0.78125 -0.859375,-1.984375 0,-0.71875 0.328125,-1.296875 0.328125,-0.59375 0.84375,-0.9375 0.53125,-0.359375 1.1875,-0.546875 0.46875,-0.125 1.453125,-0.25 1.984375,-0.234375 2.921875,-0.5625 0.01563,-0.34375 0.01563,-0.421875 0,-1 -0.46875,-1.421875 -0.625,-0.546875 -1.875,-0.546875 -1.15625,0 -1.703125,0.40625 -0.546875,0.40625 -0.8125,1.421875 l -1.609375,-0.21875 q 0.21875,-1.015625 0.71875,-1.640625 0.5,-0.640625 1.453125,-0.984375 0.953125,-0.34375 2.1875,-0.34375 1.25,0 2.015625,0.296875 0.78125,0.28125 1.140625,0.734375 0.375,0.4375 0.515625,1.109375 0.07813,0.421875 0.07813,1.515625 v 2.1875 q 0,2.28125 0.109375,2.890625 0.109375,0.59375 0.40625,1.15625 h -1.703125 q -0.265625,-0.515625 -0.328125,-1.1875 z m -0.140625,-3.671875 q -0.890625,0.375 -2.671875,0.625 -1.015625,0.140625 -1.4375,0.328125 -0.421875,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.453125,0.4375 0.9375,0 1.671875,-0.40625 0.75,-0.421875 1.09375,-1.140625 0.265625,-0.5625 0.265625,-1.640625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path67"
+ d="M 46.062992,8 H 69.527557 V 34.70866 H 46.062992 Z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path69"
+ d="M 46.062992,8 H 69.527557 V 34.70866 H 46.062992 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path71"
+ d="M 57.797363,28.274326 H 56.281738 V 14.914951 h 1.640625 v 4.765625 q 1.046875,-1.296875 2.65625,-1.296875 0.890625,0 1.6875,0.359375 0.796875,0.359375 1.3125,1.015625 0.515625,0.640625 0.796875,1.5625 0.296875,0.921875 0.296875,1.96875 0,2.484375 -1.234375,3.84375 -1.21875,1.359375 -2.953125,1.359375 -1.703125,0 -2.6875,-1.4375 z m -0.01563,-4.90625 q 0,1.734375 0.484375,2.515625 0.765625,1.265625 2.09375,1.265625 1.078125,0 1.859375,-0.9375 0.78125,-0.9375 0.78125,-2.78125 0,-1.890625 -0.75,-2.796875 -0.75,-0.90625 -1.828125,-0.90625 -1.0625,0 -1.859375,0.9375 -0.78125,0.9375 -0.78125,2.703125 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path73"
+ d="m 69.527559,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path75"
+ d="m 69.527559,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path77"
+ d="m 86.074429,24.727451 1.609375,0.21875 q -0.265625,1.65625 -1.359375,2.609375 -1.078125,0.9375 -2.671875,0.9375 -1.984375,0 -3.1875,-1.296875 -1.203125,-1.296875 -1.203125,-3.71875 0,-1.578125 0.515625,-2.75 0.515625,-1.171875 1.578125,-1.75 1.0625,-0.59375 2.3125,-0.59375 1.578125,0 2.578125,0.796875 1,0.796875 1.28125,2.265625 l -1.59375,0.234375 q -0.234375,-0.96875 -0.8125,-1.453125 -0.578125,-0.5 -1.390625,-0.5 -1.234375,0 -2.015625,0.890625 -0.78125,0.890625 -0.78125,2.8125 0,1.953125 0.75,2.84375 0.75,0.875 1.953125,0.875 0.96875,0 1.609375,-0.59375 0.65625,-0.59375 0.828125,-1.828125 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path79"
+ d="M 92.992129,8 H 116.45669 V 34.70866 H 92.992129 Z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path81"
+ d="M 92.992129,8 H 116.45669 V 34.70866 H 92.992129 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path83"
+ d="m 109.49213,28.274326 v -1.21875 q -0.90625,1.4375 -2.70312,1.4375 -1.15625,0 -2.125,-0.640625 -0.96875,-0.640625 -1.5,-1.78125 -0.53125,-1.140625 -0.53125,-2.625 0,-1.453125 0.48437,-2.625 0.48438,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17188,-0.625 0.875,0 1.54687,0.375 0.6875,0.359375 1.10938,0.953125 v -4.796875 h 1.64062 v 13.359375 z m -5.17187,-4.828125 q 0,1.859375 0.78125,2.78125 0.78125,0.921875 1.84375,0.921875 1.07812,0 1.82812,-0.875 0.75,-0.890625 0.75,-2.6875 0,-1.984375 -0.76562,-2.90625 -0.76563,-0.9375 -1.89063,-0.9375 -1.07812,0 -1.8125,0.890625 -0.73437,0.890625 -0.73437,2.8125 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path85"
+ d="m 116.45669,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path87"
+ d="m 116.45669,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path89"
+ d="m 133.31606,25.164951 1.6875,0.203125 q -0.40625,1.484375 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.296875 -1.23438,-1.3125 -1.23438,-3.671875 0,-2.453125 1.25,-3.796875 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.328125 1.23437,1.3125 1.23437,3.703125 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.453125 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.484375 1.01563,-1.515625 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.828125 -0.78125,-0.953125 -2.03125,-0.953125 -1.125,0 -1.90625,0.765625 -0.76562,0.75 -0.84375,2.015625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path91"
+ d="m 139.92126,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path93"
+ d="m 139.92126,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path95"
+ d="m 150.54626,28.274326 v -8.40625 h -1.45313 v -1.265625 h 1.45313 v -1.03125 q 0,-0.96875 0.17187,-1.453125 0.23438,-0.640625 0.82813,-1.03125 0.59375,-0.390625 1.67187,-0.390625 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.09375 -0.95312,-0.09375 -0.75,0 -1.0625,0.328125 -0.3125,0.3125 -0.3125,1.1875 v 0.890625 h 1.89062 v 1.265625 h -1.89062 v 8.40625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path97"
+ d="M 163.38583,8 H 186.8504 V 34.70866 H 163.38583 Z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path99"
+ d="M 163.38583,8 H 186.8504 V 34.70866 H 163.38583 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path101"
+ d="m 173.3077,29.071201 1.59375,0.234375 q 0.10938,0.75 0.5625,1.078125 0.60938,0.453125 1.67188,0.453125 1.14062,0 1.75,-0.453125 0.625,-0.453125 0.84375,-1.265625 0.125,-0.5 0.10937,-2.109375 -1.0625,1.265625 -2.67187,1.265625 -2,0 -3.09375,-1.4375 -1.09375,-1.4375 -1.09375,-3.453125 0,-1.390625 0.5,-2.5625 0.51562,-1.171875 1.45312,-1.796875 0.95313,-0.640625 2.25,-0.640625 1.70313,0 2.8125,1.375 v -1.15625 h 1.51563 v 8.359375 q 0,2.265625 -0.46875,3.203125 -0.45313,0.9375 -1.45313,1.484375 -0.98437,0.546875 -2.45312,0.546875 -1.71875,0 -2.79688,-0.78125 -1.0625,-0.765625 -1.03125,-2.34375 z m 1.35938,-5.8125 q 0,1.90625 0.75,2.78125 0.76562,0.875 1.90625,0.875 1.125,0 1.89062,-0.859375 0.76563,-0.875 0.76563,-2.734375 0,-1.78125 -0.79688,-2.671875 -0.78125,-0.90625 -1.89062,-0.90625 -1.09375,0 -1.85938,0.890625 -0.76562,0.875 -0.76562,2.625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path103"
+ d="m 186.85039,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path105"
+ d="m 186.85039,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path107"
+ d="M 197.08477,28.274326 V 14.914951 h 1.64062 v 4.796875 q 1.14063,-1.328125 2.89063,-1.328125 1.07812,0 1.85937,0.421875 0.79688,0.421875 1.14063,1.171875 0.34375,0.75 0.34375,2.171875 v 6.125 h -1.64063 v -6.125 q 0,-1.234375 -0.53125,-1.796875 -0.53125,-0.5625 -1.51562,-0.5625 -0.71875,0 -1.35938,0.390625 -0.64062,0.375 -0.92187,1.015625 -0.26563,0.640625 -0.26563,1.78125 v 5.296875 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path109"
+ d="m 210.31496,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path111"
+ d="m 210.31496,8 h 23.46457 v 26.70866 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path113"
+ d="m 220.54934,16.805576 v -1.890625 h 1.64062 v 1.890625 z m 0,11.46875 v -9.671875 h 1.64062 v 9.671875 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path115"
+ d="m 118.06299,273.69815 h 23.46457 v 26.70868 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path117"
+ d="m 118.06299,273.69815 h 23.46457 v 26.70868 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path119"
+ d="m 134.92237,290.8631 1.6875,0.20312 q -0.40625,1.48438 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29687 -1.23438,-1.3125 -1.23438,-3.67188 0,-2.45312 1.25,-3.79687 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32812 1.23437,1.3125 1.23437,3.70313 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45312 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48437 1.01563,-1.51562 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82813 -0.78125,-0.95312 -2.03125,-0.95312 -1.125,0 -1.90625,0.76562 -0.76562,0.75 -0.84375,2.01563 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path121"
+ d="m 141.52757,273.69815 h 23.46455 v 26.70868 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path123"
+ d="m 141.52757,273.69815 h 23.46455 v 26.70868 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path125"
+ d="m 152.15257,293.97247 v -8.40625 h -1.45313 v -1.26563 h 1.45313 v -1.03125 q 0,-0.96875 0.17187,-1.45312 0.23438,-0.64063 0.82813,-1.03125 0.59375,-0.39063 1.67187,-0.39063 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95312,-0.0937 -0.75,0 -1.0625,0.32813 -0.3125,0.3125 -0.3125,1.1875 v 0.89062 h 1.89062 v 1.26563 h -1.89062 v 8.40625 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path127"
+ d="m 118.06299,300.40683 h 23.46457 v 26.70865 h -23.46457 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path129"
+ d="m 118.06299,300.40683 h 23.46457 v 26.70865 h -23.46457 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path131"
+ d="m 128.29737,320.68115 v -13.35938 h 1.64062 v 4.79688 q 1.14063,-1.32813 2.89063,-1.32813 1.07812,0 1.85937,0.42188 0.79688,0.42187 1.14063,1.17187 0.34375,0.75 0.34375,2.17188 v 6.125 h -1.64063 v -6.125 q 0,-1.23438 -0.53125,-1.79688 -0.53125,-0.5625 -1.51562,-0.5625 -0.71875,0 -1.35938,0.39063 -0.64062,0.375 -0.92187,1.01562 -0.26563,0.64063 -0.26563,1.78125 v 5.29688 z" />
+ <path
+ style="fill:#cfe2f3;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path133"
+ d="m 141.52757,300.40683 h 23.46455 v 26.70865 h -23.46455 z" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path135"
+ d="m 141.52757,300.40683 h 23.46455 v 26.70865 h -23.46455 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path137"
+ d="m 151.76194,309.2124 v -1.89063 h 1.64062 v 1.89063 z m 0,11.46875 v -9.67188 h 1.64062 v 9.67188 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path139"
+ d="M 156.42782,47.356953 H 652.86877 V 68.837269 H 156.42782 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path141"
+ d="m 166.36532,74.511323 0.79687,-3.890625 h -1.54687 v -1.359375 h 1.8125 l 0.67187,-3.296875 h -2.48437 v -1.359375 h 2.76562 l 0.79688,-3.90625 h 1.35937 l -0.79687,3.90625 h 2.875 l 0.79687,-3.90625 h 1.375 l -0.79687,3.90625 h 1.57812 v 1.359375 h -1.84375 l -0.6875,3.296875 h 2.53125 v 1.359375 h -2.8125 l -0.78125,3.890625 h -1.375 l 0.78125,-3.890625 h -2.85937 l -0.78125,3.890625 z m 2.4375,-5.25 h 2.85937 l 0.6875,-3.296875 h -2.875 z m 8.18822,5.015625 V 60.917573 h 1.64062 v 13.359375 z m 4.19169,0 v -9.671875 h 1.46875 v 1.359375 q 0.45313,-0.71875 1.20313,-1.140625 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.453125 0.6875,0.4375 0.96875,1.234375 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.796875 0.78125,0.796875 0.78125,2.453125 v 6.640625 h -1.64063 v -6.09375 q 0,-0.984375 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.703125 -0.42188,-0.265625 -0.98438,-0.265625 -1.01562,0 -1.6875,0.6875 -0.67187,0.671875 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.640625 -0.40625,-0.546875 -1.3125,-0.546875 -0.6875,0 -1.28125,0.359375 -0.59375,0.359375 -0.85937,1.0625 -0.25,0.703125 -0.25,2.03125 v 5.015625 z m 21.85331,-1.1875 q -0.92188,0.765625 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.984375 0,-0.71875 0.32812,-1.296875 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.359375 1.1875,-0.546875 0.46875,-0.125 1.45313,-0.25 1.98437,-0.234375 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.421875 0,-1 -0.46875,-1.421875 -0.625,-0.546875 -1.875,-0.546875 -1.15625,0 -1.70313,0.40625 -0.54687,0.40625 -0.8125,1.421875 l -1.60937,-0.21875 q 0.21875,-1.015625 0.71875,-1.640625 0.5,-0.640625 1.45312,-0.984375 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.296875 0.78125,0.28125 1.14062,0.734375 0.375,0.4375 0.51563,1.109375 0.0781,0.421875 0.0781,1.515625 v 2.1875 q 0,2.28125 0.10938,2.890625 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.515625 
-0.32812,-1.1875 z m -0.14063,-3.671875 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.140625 -1.4375,0.328125 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.421875 1.09375,-1.140625 0.26562,-0.5625 0.26562,-1.640625 z m 4.20384,8.5625 v -13.375 h 1.48438 v 1.25 q 0.53125,-0.734375 1.1875,-1.09375 0.67187,-0.375 1.625,-0.375 1.23437,0 2.17187,0.640625 0.95313,0.625 1.4375,1.796875 0.48438,1.15625 0.48438,2.546875 0,1.484375 -0.53125,2.671875 -0.53125,1.1875 -1.54688,1.828125 -1.01562,0.625 -2.14062,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.703125 z m 1.48438,-8.484375 q 0,1.859375 0.75,2.765625 0.76562,0.890625 1.82812,0.890625 1.09375,0 1.875,-0.921875 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76562,-2.765625 -0.75,-0.921875 -1.8125,-0.921875 -1.04688,0 -1.85938,0.984375 -0.79687,0.96875 -0.79687,2.84375 z m 7.62571,4.78125 5.125,-13.359375 h 1.90625 l 5.46875,13.359375 h -2.01562 l -1.54688,-4.046875 h -5.59375 l -1.46875,4.046875 z m 3.85938,-5.484375 h 4.53125 l -1.40625,-3.703125 q -0.625,-1.6875 -0.9375,-2.765625 -0.26563,1.28125 -0.71875,2.546875 z m 23.65813,-2.375 h -8.82813 v -1.515625 h 8.82813 z m 0,4.0625 h -8.82813 v -1.53125 h 8.82813 z m 10.57826,7.71875 q -1.35938,-1.703125 -2.29688,-4 -0.9375,-2.296875 -0.9375,-4.765625 0,-2.15625 0.70313,-4.140625 0.82812,-2.3125 2.53125,-4.59375 h 1.17187 q -1.09375,1.890625 -1.45312,2.703125 -0.54688,1.25 -0.875,2.625 -0.39063,1.703125 -0.39063,3.421875 0,4.375 2.71875,8.75 z m 3.08768,-15.390625 v -1.890625 h 1.64062 v 1.890625 z m 0,11.46875 v -9.671875 h 1.64062 v 9.671875 z m 4.56671,0 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35938,0.640625 -1.15625,0.984375 l -0.45313,-0.703125 q 0.51563,-0.21875 0.76563,-0.671875 0.25,-0.4375 0.28125,-1.265625 z m 9.9291,-11.453125 v -1.90625 h 1.64063 v 1.90625 z m -2.07812,15.203125 0.3125,-1.390625 q 0.5,0.125 0.78125,0.125 
0.5,0 0.73437,-0.328125 0.25,-0.328125 0.25,-1.671875 v -10.15625 h 1.64063 v 10.203125 q 0,1.78125 -0.46875,2.484375 -0.59375,0.90625 -1.96875,0.90625 -0.65625,0 -1.28125,-0.171875 z m 7.31668,0.171875 h -1.1875 q 2.73438,-4.375 2.73438,-8.75 0,-1.71875 -0.39063,-3.390625 -0.3125,-1.375 -0.875,-2.625 -0.35937,-0.828125 -1.46875,-2.734375 h 1.1875 q 1.70313,2.28125 2.53125,4.59375 0.6875,1.984375 0.6875,4.140625 0,2.46875 -0.9375,4.765625 -0.9375,2.296875 -2.28125,4 z m 9.67725,-7.9375 v -1.640625 h 5.03125 v 1.640625 z m 15.4783,-1.828125 -8.84375,3.78125 v -1.625 l 7.01562,-2.90625 -7.01562,-2.875 v -1.640625 l 8.84375,3.734375 z m 10.57825,9.765625 q -1.35938,-1.703125 -2.29688,-4 -0.9375,-2.296875 -0.9375,-4.765625 0,-2.15625 0.70313,-4.140625 0.82812,-2.3125 2.53125,-4.59375 h 1.17187 q -1.09375,1.890625 -1.45312,2.703125 -0.54688,1.25 -0.875,2.625 -0.39063,1.703125 -0.39063,3.421875 0,4.375 2.71875,8.75 z m 3.08767,-15.390625 v -1.890625 h 1.64063 v 1.890625 z m 0,11.46875 v -9.671875 h 1.64063 v 9.671875 z m 4.56671,0 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.640625 -1.15625,0.984375 l -0.45312,-0.703125 q 0.51562,-0.21875 0.76562,-0.671875 0.25,-0.4375 0.28125,-1.265625 z m 9.92911,-11.453125 v -1.90625 h 1.64063 v 1.90625 z m -2.07812,15.203125 0.3125,-1.390625 q 0.5,0.125 0.78125,0.125 0.5,0 0.73437,-0.328125 0.25,-0.328125 0.25,-1.671875 v -10.15625 h 1.64063 v 10.203125 q 0,1.78125 -0.46875,2.484375 -0.59375,0.90625 -1.96875,0.90625 -0.65625,0 -1.28125,-0.171875 z m 7.31668,0.171875 h -1.1875 q 2.73437,-4.375 2.73437,-8.75 0,-1.71875 -0.39062,-3.390625 -0.3125,-1.375 -0.875,-2.625 -0.35938,-0.828125 -1.46875,-2.734375 h 1.1875 q 1.70312,2.28125 2.53125,4.59375 0.6875,1.984375 0.6875,4.140625 0,2.46875 -0.9375,4.765625 -0.9375,2.296875 -2.28125,4 z m 9.66162,-6.8125 1.625,-0.25 q 0.125,0.96875 0.75,1.5 0.625,0.515625 1.75,0.515625 1.125,0 1.67187,-0.453125 0.54688,-0.46875 0.54688,-1.09375 0,-0.546875 -0.48438,-0.875 
-0.32812,-0.21875 -1.67187,-0.546875 -1.8125,-0.46875 -2.51563,-0.796875 -0.6875,-0.328125 -1.04687,-0.90625 -0.35938,-0.59375 -0.35938,-1.3125 0,-0.640625 0.29688,-1.1875 0.29687,-0.5625 0.8125,-0.921875 0.375,-0.28125 1.03125,-0.46875 0.67187,-0.203125 1.42187,-0.203125 1.14063,0 2,0.328125 0.85938,0.328125 1.26563,0.890625 0.42187,0.5625 0.57812,1.5 l -1.60937,0.21875 q -0.10938,-0.75 -0.64063,-1.171875 -0.51562,-0.421875 -1.46875,-0.421875 -1.14062,0 -1.625,0.375 -0.46875,0.375 -0.46875,0.875 0,0.3125 0.1875,0.578125 0.20313,0.265625 0.64063,0.4375 0.23437,0.09375 1.4375,0.421875 1.75,0.453125 2.4375,0.75 0.6875,0.296875 1.07812,0.859375 0.39063,0.5625 0.39063,1.40625 0,0.828125 -0.48438,1.546875 -0.46875,0.71875 -1.375,1.125 -0.90625,0.390625 -2.04687,0.390625 -1.875,0 -2.875,-0.78125 -0.98438,-0.78125 -1.25,-2.328125 z m 9.98437,-8.578125 v -1.890625 h 1.64063 v 1.890625 z m 0,11.46875 v -9.671875 h 1.64063 v 9.671875 z m 3.26981,0 v -1.328125 l 6.15625,-7.078125 q -1.04688,0.0625 -1.84375,0.0625 h -3.9375 v -1.328125 h 7.90625 v 1.078125 l -5.25,6.140625 -1,1.125 q 1.09375,-0.07813 2.0625,-0.07813 h 4.46875 v 1.40625 z m 16.82812,-3.109375 1.6875,0.203125 q -0.40625,1.484375 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.296875 -1.23437,-1.3125 -1.23437,-3.671875 0,-2.453125 1.25,-3.796875 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.328125 1.23438,1.3125 1.23438,3.703125 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.453125 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.484375 1.01562,-1.515625 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.828125 -0.78125,-0.953125 -2.03125,-0.953125 -1.125,0 -1.90625,0.765625 -0.76563,0.75 -0.84375,2.015625 z m 17.44965,9.6875 q -1.35938,-1.703125 -2.29688,-4 -0.9375,-2.296875 -0.9375,-4.765625 0,-2.15625 0.70313,-4.140625 0.82812,-2.3125 2.53125,-4.59375 h 1.17187 q -1.09375,1.890625 -1.45312,2.703125 -0.54688,1.25 -0.875,2.625 -0.39063,1.703125 
-0.39063,3.421875 0,4.375 2.71875,8.75 z m 2.63455,-7.453125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.015625 0.6875,0.609375 1.65625,0.609375 1.15625,0 1.95312,-0.796875 0.79688,-0.796875 0.79688,-1.984375 0,-1.125 -0.73438,-1.859375 -0.73437,-0.734375 -1.875,-0.734375 -0.46875,0 -1.15625,0.171875 l 0.1875,-1.4375 q 0.15625,0.01563 0.26563,0.01563 1.04687,0 1.875,-0.546875 0.84375,-0.546875 0.84375,-1.671875 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.609375 -0.64063,0.59375 -0.8125,1.796875 l -1.64063,-0.296875 q 0.29688,-1.640625 1.35938,-2.546875 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.859375 -0.46875,1.578125 -0.46875,0.703125 -1.375,1.125 1.1875,0.28125 1.84375,1.140625 0.65625,0.859375 0.65625,2.15625 0,1.734375 -1.28125,2.953125 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.046875 -1.15625,-1.046875 -1.32812,-2.71875 z m 11.25073,3.53125 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35938,0.640625 -1.15625,0.984375 l -0.45313,-0.703125 q 0.51563,-0.21875 0.76563,-0.671875 0.25,-0.4375 0.28125,-1.265625 z m 9.49161,-3.53125 1.64059,-0.21875 q 0.28125,1.40625 0.95313,2.015625 0.6875,0.609375 1.65625,0.609375 1.15625,0 1.95312,-0.796875 0.79688,-0.796875 0.79688,-1.984375 0,-1.125 -0.73438,-1.859375 -0.73437,-0.734375 -1.875,-0.734375 -0.46875,0 -1.15625,0.171875 l 0.1875,-1.4375 q 0.15625,0.01563 0.26563,0.01563 1.04687,0 1.875,-0.546875 0.84375,-0.546875 0.84375,-1.671875 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.609375 -0.64063,0.59375 -0.8125,1.796875 l -1.6406,-0.296875 q 0.29688,-1.640625 1.35938,-2.546875 1.06247,-0.90625 2.65622,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.859375 -0.46875,1.578125 -0.46875,0.703125 -1.375,1.125 1.1875,0.28125 1.84375,1.140625 0.65625,0.859375 0.65625,2.15625 0,1.734375 -1.28125,2.953125 -1.26563,1.21875 
-3.21875,1.21875 -1.76563,0 -2.92185,-1.046875 -1.15625,-1.046875 -1.32812,-2.71875 z m 11.90695,7.453125 h -1.1875 q 2.73437,-4.375 2.73437,-8.75 0,-1.71875 -0.39062,-3.390625 -0.3125,-1.375 -0.875,-2.625 -0.35938,-0.828125 -1.46875,-2.734375 h 1.1875 q 1.70312,2.28125 2.53125,4.59375 0.6875,1.984375 0.6875,4.140625 0,2.46875 -0.9375,4.765625 -0.9375,2.296875 -2.28125,4 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path143"
+ d="m 163.38583,217.19421 h 582.48816 v 44.34647 H 163.38583 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path145"
+ d="m 173.32333,244.3486 0.79687,-3.89063 h -1.54687 v -1.35937 h 1.8125 l 0.67187,-3.29688 h -2.48437 v -1.35939 h 2.76562 l 0.79688,-3.90625 h 1.35937 l -0.79687,3.90625 h 2.875 l 0.79687,-3.90625 h 1.375 l -0.79687,3.90625 h 1.57812 v 1.35939 h -1.84375 l -0.6875,3.29688 h 2.53125 v 1.35937 h -2.8125 l -0.78125,3.89063 h -1.375 l 0.78125,-3.89063 h -2.85937 l -0.78125,3.89063 z m 2.4375,-5.25 h 2.85937 l 0.6875,-3.29688 h -2.875 z m 8.23509,-6.45314 v -1.89063 h 1.64063 v 1.89063 z m 0,11.46876 v -9.67189 h 1.64063 v 9.67189 z m 4.14482,0 v -9.67189 h 1.46875 v 1.35939 q 0.45313,-0.71876 1.20313,-1.14064 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45313 0.6875,0.4375 0.96875,1.23439 1.15625,-1.68752 2.98438,-1.68752 1.45312,0 2.21875,0.79688 0.78125,0.79689 0.78125,2.45314 v 6.64062 h -1.64063 v -6.09375 q 0,-0.98437 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70312 -0.42188,-0.26563 -0.98438,-0.26563 -1.01562,0 -1.6875,0.6875 -0.67187,0.67188 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64062 -0.40625,-0.54688 -1.3125,-0.54688 -0.6875,0 -1.28125,0.35938 -0.59375,0.35937 -0.85937,1.0625 -0.25,0.70312 -0.25,2.03125 v 5.01562 z m 21.85331,-1.1875 q -0.92188,0.76563 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.98437 0,-0.71875 0.32812,-1.29688 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.35937 1.1875,-0.54687 0.46875,-0.125 1.45313,-0.25 1.98437,-0.23438 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.42188 0,-1 -0.46875,-1.42187 -0.625,-0.54688 -1.875,-0.54688 -1.15625,0 -1.70313,0.40625 -0.54687,0.40625 -0.8125,1.42188 l -1.60937,-0.21875 q 0.21875,-1.01563 0.71875,-1.64064 0.5,-0.64063 1.45312,-0.98438 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.29688 0.78125,0.28125 1.14062,0.73437 0.375,0.43752 0.51563,1.10939 0.0781,0.42188 0.0781,1.51563 v 2.1875 q 0,2.28125 0.10938,2.89062 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.51562 -0.32812,-1.1875 z m 
-0.14063,-3.67187 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.14062 -1.4375,0.32812 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.42187 1.09375,-1.14062 0.26562,-0.5625 0.26562,-1.64063 z m 4.20384,8.5625 v -13.37502 h 1.48438 v 1.25002 q 0.53125,-0.73439 1.1875,-1.09377 0.67187,-0.375 1.625,-0.375 1.23437,0 2.17187,0.64063 0.95313,0.625 1.4375,1.79689 0.48438,1.15625 0.48438,2.54687 0,1.48438 -0.53125,2.67188 -0.53125,1.1875 -1.54688,1.82812 -1.01562,0.625 -2.14062,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.70313 z m 1.48438,-8.48438 q 0,1.85938 0.75,2.76563 0.76562,0.89062 1.82812,0.89062 1.09375,0 1.875,-0.92187 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76562,-2.76563 -0.75,-0.92189 -1.8125,-0.92189 -1.04688,0 -1.85938,0.98439 -0.79687,0.96875 -0.79687,2.84375 z m 9.01634,4.78125 v -13.35939 h 5.01562 q 1.53125,0 2.45313,0.40625 0.92187,0.40625 1.4375,1.25 0.53125,0.84375 0.53125,1.76563 0,0.85937 -0.46875,1.62501 -0.45313,0.75 -1.39063,1.20313 1.20313,0.35937 1.85938,1.21875 0.65625,0.85937 0.65625,2.01562 0,0.9375 -0.40625,1.75 -0.39063,0.79688 -0.98438,1.23438 -0.57812,0.4375 -1.45312,0.67187 -0.875,0.21875 -2.15625,0.21875 z m 1.78125,-7.75 h 2.875 q 1.1875,0 1.6875,-0.14062 0.67187,-0.20313 1.01562,-0.67189 0.34375,-0.46875 0.34375,-1.17188 0,-0.65625 -0.32812,-1.15625 -0.3125,-0.51562 -0.90625,-0.70312 -0.59375,-0.1875 -2.03125,-0.1875 h -2.65625 z m 0,6.17188 h 3.3125 q 0.85937,0 1.20312,-0.0625 0.60938,-0.10938 1.01563,-0.35938 0.42187,-0.26562 0.6875,-0.75 0.26562,-0.48437 0.26562,-1.125 0,-0.75 -0.39062,-1.29687 -0.375,-0.54688 -1.0625,-0.76563 -0.67188,-0.23437 -1.95313,-0.23437 h -3.07812 z m 18.69357,0 v 1.57812 h -8.82812 q -0.0156,-0.59375 0.1875,-1.14062 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01563 2.17188,-1.78125 2.9375,-2.82812 0.76563,-1.04689 0.76563,-1.96877 0,-0.98437 -0.70313,-1.64062 
-0.6875,-0.67188 -1.8125,-0.67188 -1.1875,0 -1.90625,0.71875 -0.70312,0.70313 -0.70312,1.95313 l -1.6875,-0.17188 q 0.17187,-1.89062 1.29687,-2.875 1.14063,-0.98437 3.03125,-0.98437 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64062 0,0.79688 -0.32813,1.57814 -0.32812,0.78125 -1.09375,1.64063 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23437 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 0.95386,1.57812 5.125,-13.35939 h 1.90625 l 5.46875,13.35939 h -2.01563 l -1.54687,-4.04687 h -5.59375 l -1.46875,4.04687 z m 3.85937,-5.48437 h 4.53125 l -1.40625,-3.70314 q -0.625,-1.6875 -0.9375,-2.76563 -0.26562,1.28125 -0.71875,2.54688 z m 23.65812,-2.375 h -8.82813 v -1.51564 h 8.82813 z m 0,4.0625 h -8.82813 v -1.53125 h 8.82813 z m 10.57827,7.71875 q -1.35937,-1.70313 -2.29687,-4 -0.9375,-2.29688 -0.9375,-4.76563 0,-2.15625 0.70312,-4.14064 0.82813,-2.3125 2.53125,-4.59375 h 1.17188 q -1.09375,1.89063 -1.45313,2.70313 -0.54687,1.25 -0.875,2.62501 -0.39062,1.70313 -0.39062,3.42188 0,4.375 2.71875,8.75 z m 9.35331,-3.92188 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48438,-2.625 0.48437,-1.18752 1.4375,-1.81252 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35938 1.10937,0.95313 v -4.79688 h 1.64063 v 13.35939 z m -5.17188,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07813,0 1.82813,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89063 -0.73438,0.89062 -0.73438,2.8125 z m 8.82883,-1.76563 q 0,-2.35939 0.48437,-3.79689 0.48438,-1.45312 1.4375,-2.23437 0.96875,-0.78125 2.42188,-0.78125 1.07812,0 1.89062,0.4375 0.8125,0.42187 1.32813,1.25 0.53125,0.8125 0.82812,1.98437 0.3125,1.15625 0.3125,3.14064 0,2.35938 -0.48437,3.8125 -0.48438,1.4375 -1.45313,2.23438 -0.95312,0.78125 -2.42187,0.78125 -1.92188,0 -3.03125,-1.39063 -1.3125,-1.67187 -1.3125,-5.4375 z m 1.67187,0 
q 0,3.29688 0.76563,4.39063 0.78125,1.07812 1.90625,1.07812 1.14062,0 1.90625,-1.09375 0.76562,-1.09375 0.76562,-4.375 0,-3.29689 -0.76562,-4.37501 -0.76563,-1.07813 -1.92188,-1.07813 -1.125,0 -1.79687,0.95313 -0.85938,1.21875 -0.85938,4.50001 z m 9.57886,6.59375 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64063 -1.15625,0.98438 l -0.45312,-0.70313 q 0.51562,-0.21875 0.76562,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 16.21036,0 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48438,-2.625 0.48437,-1.18752 1.4375,-1.81252 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35938 1.10937,0.95313 v -4.79688 h 1.64063 v 13.35939 z m -5.17188,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07813,0 1.82813,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89063 -0.73438,0.89062 -0.73438,2.8125 z m 15.00071,4.82812 h -1.64063 v -10.45314 q -0.59375,0.5625 -1.5625,1.14063 -0.95312,0.5625 -1.71875,0.84376 v -1.59376 q 1.375,-0.64063 2.40625,-1.5625 1.03125,-0.92188 1.45313,-1.78125 h 1.0625 z m 5.7351,3.92188 h -1.1875 q 2.73438,-4.375 2.73438,-8.75 0,-1.71875 -0.39063,-3.39063 -0.3125,-1.37501 -0.875,-2.62501 -0.35937,-0.82813 -1.46875,-2.73438 h 1.1875 q 1.70313,2.28125 2.53125,4.59375 0.6875,1.98439 0.6875,4.14064 0,2.46875 -0.9375,4.76563 -0.9375,2.29687 -2.28125,4 z m 5.1658,-0.21875 v -17.06252 h 3.60938 v 1.35938 h -1.96875 v 14.34376 h 1.96875 v 1.35938 z m 4.76142,-8 1.65625,-0.14063 q 0.125,1 0.54687,1.64063 0.4375,0.64062 1.34375,1.04687 0.92188,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35937,-0.46875 -1.1875,-0.79687 -0.54687,-0.20313 -2.39062,-0.64063 -1.82813,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75002 -0.46875,-1.67189 
0,-1 0.57812,-1.875 0.57813,-0.89063 1.67188,-1.34375 1.10937,-0.45313 2.45312,-0.45313 1.48438,0 2.60938,0.48438 1.14062,0.46875 1.75,1.40625 0.60937,0.92187 0.65625,2.09375 l -1.6875,0.125 q -0.14063,-1.26563 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60938,0 -2.34375,0.59375 -0.73438,0.59375 -0.73438,1.42187 0,0.71875 0.53125,1.17188 0.5,0.46876 2.65625,0.96876 2.15625,0.48438 2.95313,0.84375 1.17187,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.60938,2.01562 -0.60937,0.9375 -1.75,1.46875 -1.14062,0.51563 -2.57812,0.51563 -1.8125,0 -3.04688,-0.53125 -1.21875,-0.53125 -1.92187,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 12.38107,-2.29688 q 0,-2.35939 0.48438,-3.79689 0.48437,-1.45312 1.4375,-2.23437 0.96875,-0.78125 2.42187,-0.78125 1.07813,0 1.89063,0.4375 0.8125,0.42187 1.32812,1.25 0.53125,0.8125 0.82813,1.98437 0.3125,1.15625 0.3125,3.14064 0,2.35938 -0.48438,3.8125 -0.48437,1.4375 -1.45312,2.23438 -0.95313,0.78125 -2.42188,0.78125 -1.92187,0 -3.03125,-1.39063 -1.3125,-1.67187 -1.3125,-5.4375 z m 1.67188,0 q 0,3.29688 0.76562,4.39063 0.78125,1.07812 1.90625,1.07812 1.14063,0 1.90625,-1.09375 0.76563,-1.09375 0.76563,-4.375 0,-3.29689 -0.76563,-4.37501 -0.76562,-1.07813 -1.92187,-1.07813 -1.125,0 -1.79688,0.95313 -0.85937,1.21875 -0.85937,4.50001 z m 9.57885,6.59375 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64063 -1.15625,0.98438 l -0.45312,-0.70313 q 0.51562,-0.21875 0.76562,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 9.55411,-4.29687 1.65625,-0.14063 q 0.125,1 0.54688,1.64063 0.4375,0.64062 1.34375,1.04687 0.92187,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35938,-0.46875 -1.1875,-0.79687 -0.54688,-0.20313 -2.39063,-0.64063 -1.82812,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75002 -0.46875,-1.67189 0,-1 0.57813,-1.875 0.57812,-0.89063 1.67187,-1.34375 1.10938,-0.45313 2.45313,-0.45313 
1.48437,0 2.60937,0.48438 1.14063,0.46875 1.75,1.40625 0.60938,0.92187 0.65625,2.09375 l -1.6875,0.125 q -0.14062,-1.26563 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60937,0 -2.34375,0.59375 -0.73437,0.59375 -0.73437,1.42187 0,0.71875 0.53125,1.17188 0.5,0.46876 2.65625,0.96876 2.15625,0.48438 2.95312,0.84375 1.17188,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.60937,2.01562 -0.60938,0.9375 -1.75,1.46875 -1.14063,0.51563 -2.57813,0.51563 -1.8125,0 -3.04687,-0.53125 -1.21875,-0.53125 -1.92188,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 18.55295,4.29687 h -1.64063 v -10.45314 q -0.59375,0.5625 -1.5625,1.14063 -0.95312,0.5625 -1.71875,0.84376 v -1.59376 q 1.375,-0.64063 2.40625,-1.5625 1.03125,-0.92188 1.45313,-1.78125 h 1.0625 z m 7.39133,3.70313 h -3.60938 v -1.35938 h 1.96875 v -14.34376 h -1.96875 v -1.35938 h 3.60938 z m 6.9916,-7.71875 v -1.64063 h 5.03125 v 1.64063 z m 15.47831,-1.82813 -8.84375,3.78125 v -1.625 l 7.01562,-2.90625 -7.01562,-2.87501 v -1.64063 l 8.84375,3.73439 z m 10.57827,9.76563 q -1.35937,-1.70313 -2.29687,-4 -0.9375,-2.29688 -0.9375,-4.76563 0,-2.15625 0.70312,-4.14064 0.82813,-2.3125 2.53125,-4.59375 h 1.17188 q -1.09375,1.89063 -1.45313,2.70313 -0.54687,1.25 -0.875,2.62501 -0.39062,1.70313 -0.39062,3.42188 0,4.375 2.71875,8.75 z m 9.35331,-3.92188 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48438,-2.625 0.48437,-1.18752 1.4375,-1.81252 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35938 1.10937,0.95313 v -4.79688 h 1.64063 v 13.35939 z m -5.17188,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07813,0 1.82813,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89063 -0.73438,0.89062 -0.73438,2.8125 z m 8.82883,-1.76563 q 0,-2.35939 0.48437,-3.79689 0.48438,-1.45312 1.4375,-2.23437 
0.96875,-0.78125 2.42188,-0.78125 1.07812,0 1.89062,0.4375 0.8125,0.42187 1.32813,1.25 0.53125,0.8125 0.82812,1.98437 0.3125,1.15625 0.3125,3.14064 0,2.35938 -0.48437,3.8125 -0.48438,1.4375 -1.45313,2.23438 -0.95312,0.78125 -2.42187,0.78125 -1.92188,0 -3.03125,-1.39063 -1.3125,-1.67187 -1.3125,-5.4375 z m 1.67187,0 q 0,3.29688 0.76563,4.39063 0.78125,1.07812 1.90625,1.07812 1.14062,0 1.90625,-1.09375 0.76562,-1.09375 0.76562,-4.375 0,-3.29689 -0.76562,-4.37501 -0.76563,-1.07813 -1.92188,-1.07813 -1.125,0 -1.79687,0.95313 -0.85938,1.21875 -0.85938,4.50001 z m 17.77777,4.4375 v -3.67187 h -3.64062 v -1.51563 h 3.64062 v -3.64064 h 1.54688 v 3.64064 h 3.64062 v 1.51563 h -3.64062 v 3.67187 z m 12.25013,-2.14062 1.65625,-0.14063 q 0.125,1 0.54687,1.64063 0.4375,0.64062 1.34375,1.04687 0.92188,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35937,-0.46875 -1.1875,-0.79687 -0.54687,-0.20313 -2.39062,-0.64063 -1.82813,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75002 -0.46875,-1.67189 0,-1 0.57812,-1.875 0.57813,-0.89063 1.67188,-1.34375 1.10937,-0.45313 2.45312,-0.45313 1.48438,0 2.60938,0.48438 1.14062,0.46875 1.75,1.40625 0.60937,0.92187 0.65625,2.09375 l -1.6875,0.125 q -0.14063,-1.26563 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60938,0 -2.34375,0.59375 -0.73438,0.59375 -0.73438,1.42187 0,0.71875 0.53125,1.17188 0.5,0.46876 2.65625,0.96876 2.15625,0.48438 2.95313,0.84375 1.17187,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.60938,2.01562 -0.60937,0.9375 -1.75,1.46875 -1.14062,0.51563 -2.57812,0.51563 -1.8125,0 -3.04688,-0.53125 -1.21875,-0.53125 -1.92187,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 12.38107,-2.29688 q 0,-2.35939 0.48437,-3.79689 0.48438,-1.45312 1.4375,-2.23437 0.96875,-0.78125 2.42188,-0.78125 1.07812,0 1.89062,0.4375 0.8125,0.42187 1.32813,1.25 0.53125,0.8125 0.82812,1.98437 0.3125,1.15625 0.3125,3.14064 
0,2.35938 -0.48437,3.8125 -0.48438,1.4375 -1.45313,2.23438 -0.95312,0.78125 -2.42187,0.78125 -1.92188,0 -3.03125,-1.39063 -1.3125,-1.67187 -1.3125,-5.4375 z m 1.67187,0 q 0,3.29688 0.76563,4.39063 0.78125,1.07812 1.90625,1.07812 1.14062,0 1.90625,-1.09375 0.76562,-1.09375 0.76562,-4.375 0,-3.29689 -0.76562,-4.37501 -0.76563,-1.07813 -1.92188,-1.07813 -1.125,0 -1.79687,0.95313 -0.85938,1.21875 -0.85938,4.50001 z m 9.57886,6.59375 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64063 -1.15625,0.98438 l -0.45312,-0.70313 q 0.51562,-0.21875 0.76562,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 16.21033,0 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48438,-2.625 0.48437,-1.18752 1.4375,-1.81252 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35938 1.10937,0.95313 v -4.79688 h 1.64063 v 13.35939 z m -5.17188,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07813,0 1.82813,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89063 -0.73438,0.89062 -0.73438,2.8125 z m 15.00074,4.82812 h -1.64063 v -10.45314 q -0.59375,0.5625 -1.5625,1.14063 -0.95312,0.5625 -1.71875,0.84376 v -1.59376 q 1.375,-0.64063 2.40625,-1.5625 1.03125,-0.92188 1.45313,-1.78125 h 1.0625 z m 13.27777,-2.15625 v -3.67187 h -3.64063 v -1.51563 h 3.64063 v -3.64064 h 1.54687 v 3.64064 h 3.64063 v 1.51563 h -3.64063 v 3.67187 z m 12.25012,-2.14062 1.65625,-0.14063 q 0.125,1 0.54687,1.64063 0.4375,0.64062 1.34375,1.04687 0.92188,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35937,-0.46875 -1.1875,-0.79687 -0.54687,-0.20313 -2.39062,-0.64063 -1.82813,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75002 -0.46875,-1.67189 0,-1 0.57812,-1.875 0.57813,-0.89063 1.67188,-1.34375 
1.10937,-0.45313 2.45312,-0.45313 1.48438,0 2.60938,0.48438 1.14062,0.46875 1.75,1.40625 0.60937,0.92187 0.65625,2.09375 l -1.6875,0.125 q -0.14063,-1.26563 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60938,0 -2.34375,0.59375 -0.73438,0.59375 -0.73438,1.42187 0,0.71875 0.53125,1.17188 0.5,0.46876 2.65625,0.96876 2.15625,0.48438 2.95313,0.84375 1.17187,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.60938,2.01562 -0.60937,0.9375 -1.75,1.46875 -1.14062,0.51563 -2.57812,0.51563 -1.8125,0 -3.04688,-0.53125 -1.21875,-0.53125 -1.92187,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 18.55292,4.29687 h -1.64063 v -10.45314 q -0.59375,0.5625 -1.5625,1.14063 -0.95312,0.5625 -1.71875,0.84376 v -1.59376 q 1.375,-0.64063 2.40625,-1.5625 1.03125,-0.92188 1.45313,-1.78125 h 1.0625 z m 5.7351,3.92188 h -1.1875 q 2.73438,-4.375 2.73438,-8.75 0,-1.71875 -0.39063,-3.39063 -0.3125,-1.37501 -0.875,-2.62501 -0.35937,-0.82813 -1.46875,-2.73438 h 1.1875 q 1.70313,2.28125 2.53125,4.59375 0.6875,1.98439 0.6875,4.14064 0,2.46875 -0.9375,4.76563 -0.9375,2.29687 -2.28125,4 z m 9.66162,-6.8125 1.625,-0.25 q 0.125,0.96875 0.75,1.5 0.625,0.51562 1.75,0.51562 1.125,0 1.67188,-0.45312 0.54687,-0.46875 0.54687,-1.09375 0,-0.54688 -0.48437,-0.875 -0.32813,-0.21875 -1.67188,-0.54688 -1.8125,-0.46875 -2.51562,-0.79687 -0.6875,-0.32813 -1.04688,-0.90625 -0.35937,-0.59375 -0.35937,-1.3125 0,-0.64063 0.29687,-1.1875 0.29688,-0.56252 0.8125,-0.92189 0.375,-0.28125 1.03125,-0.46875 0.67188,-0.20313 1.42188,-0.20313 1.14062,0 2,0.32813 0.85937,0.32812 1.26562,0.89062 0.42188,0.56252 0.57813,1.50002 l -1.60938,0.21875 q -0.10937,-0.75 -0.64062,-1.17188 -0.51563,-0.42189 -1.46875,-0.42189 -1.14063,0 -1.625,0.37502 -0.46875,0.375 -0.46875,0.875 0,0.3125 0.1875,0.57812 0.20312,0.26563 0.64062,0.4375 0.23438,0.0937 1.4375,0.42188 1.75,0.45312 2.4375,0.75 0.6875,0.29687 1.07813,0.85937 0.39062,0.5625 0.39062,1.40625 0,0.82813 -0.48437,1.54688 -0.46875,0.71875 -1.375,1.125 
-0.90625,0.39062 -2.04688,0.39062 -1.875,0 -2.875,-0.78125 -0.98437,-0.78125 -1.25,-2.32812 z m 9.98438,-8.57814 v -1.89063 h 1.64062 v 1.89063 z m 0,11.46876 v -9.67189 h 1.64062 v 9.67189 z m 3.26983,0 v -1.32812 l 6.15625,-7.07813 q -1.04687,0.0625 -1.84375,0.0625 h -3.9375 v -1.32814 h 7.90625 v 1.07813 l -5.25,6.14064 -1,1.125 q 1.09375,-0.0781 2.0625,-0.0781 h 4.46875 v 1.40625 z m 16.82813,-3.10937 1.6875,0.20312 q -0.40625,1.48438 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29687 -1.23438,-1.3125 -1.23438,-3.67188 0,-2.45312 1.25,-3.79689 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32813 1.23437,1.31251 1.23437,3.70314 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45312 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48437 1.01563,-1.51562 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82813 -0.78125,-0.95314 -2.03125,-0.95314 -1.125,0 -1.90625,0.76564 -0.76562,0.75 -0.84375,2.01563 z m 17.44965,9.6875 q -1.35937,-1.70313 -2.29687,-4 -0.9375,-2.29688 -0.9375,-4.76563 0,-2.15625 0.70312,-4.14064 0.82813,-2.3125 2.53125,-4.59375 h 1.17188 q -1.09375,1.89063 -1.45313,2.70313 -0.54687,1.25 -0.875,2.62501 -0.39062,1.70313 -0.39062,3.42188 0,4.375 2.71875,8.75 z m 2.63452,-7.45313 1.64063,-0.21875 q 0.28125,1.40625 0.95312,2.01563 0.6875,0.60937 1.65625,0.60937 1.15625,0 1.95313,-0.79687 0.79687,-0.79688 0.79687,-1.98438 0,-1.125 -0.73437,-1.85937 -0.73438,-0.73438 -1.875,-0.73438 -0.46875,0 -1.15625,0.17188 l 0.1875,-1.4375 q 0.15625,0.0156 0.26562,0.0156 1.04688,0 1.875,-0.54687 0.84375,-0.54689 0.84375,-1.67189 0,-0.90625 -0.60937,-1.5 -0.60938,-0.59375 -1.57813,-0.59375 -0.95312,0 -1.59375,0.60937 -0.64062,0.59375 -0.8125,1.79688 l -1.64062,-0.29688 q 0.29687,-1.64062 1.35937,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92188,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57814 -0.46875,0.70312 -1.375,1.125 1.1875,0.28125 1.84375,1.14062 
0.65625,0.85938 0.65625,2.15625 0,1.73438 -1.28125,2.95313 -1.26562,1.21875 -3.21875,1.21875 -1.76562,0 -2.92187,-1.04688 -1.15625,-1.04687 -1.32813,-2.71875 z m 11.25073,3.53125 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64063 -1.15625,0.98438 l -0.45312,-0.70313 q 0.51562,-0.21875 0.76562,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 9.49158,-3.53125 1.64063,-0.21875 q 0.28125,1.40625 0.95312,2.01563 0.6875,0.60937 1.65625,0.60937 1.15625,0 1.95313,-0.79687 0.79687,-0.79688 0.79687,-1.98438 0,-1.125 -0.73437,-1.85937 -0.73438,-0.73438 -1.875,-0.73438 -0.46875,0 -1.15625,0.17188 l 0.1875,-1.4375 q 0.15625,0.0156 0.26562,0.0156 1.04688,0 1.875,-0.54687 0.84375,-0.54689 0.84375,-1.67189 0,-0.90625 -0.60937,-1.5 -0.60938,-0.59375 -1.57813,-0.59375 -0.95312,0 -1.59375,0.60937 -0.64062,0.59375 -0.8125,1.79688 l -1.64062,-0.29688 q 0.29687,-1.64062 1.35937,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92188,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57814 -0.46875,0.70312 -1.375,1.125 1.1875,0.28125 1.84375,1.14062 0.65625,0.85938 0.65625,2.15625 0,1.73438 -1.28125,2.95313 -1.26562,1.21875 -3.21875,1.21875 -1.76562,0 -2.92187,-1.04688 -1.15625,-1.04687 -1.32813,-2.71875 z m 11.90698,7.45313 h -1.1875 q 2.73438,-4.375 2.73438,-8.75 0,-1.71875 -0.39063,-3.39063 -0.3125,-1.37501 -0.875,-2.62501 -0.35937,-0.82813 -1.46875,-2.73438 h 1.1875 q 1.70313,2.28125 2.53125,4.59375 0.6875,1.98439 0.6875,4.14064 0,2.46875 -0.9375,4.76563 -0.9375,2.29687 -2.28125,4 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path147"
+ d="m 73.383199,138.91863 h 12.377953 v 16.28348 H 73.383199 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path149"
+ d="m 77.995529,152.65335 v -1.89063 h 1.640625 v 1.89063 z m 0,11.46875 v -9.67188 h 1.640625 v 9.67188 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path151"
+ d="M 128.95013,76.837268 H 156.4147 V 109.84514 H 128.95013 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path153"
+ d="m 139.16888,92.304143 v -1.90625 h 1.64062 v 1.90625 z m -2.07813,15.203117 0.3125,-1.39062 q 0.5,0.125 0.78125,0.125 0.5,0 0.73438,-0.32813 0.25,-0.32812 0.25,-1.67187 V 94.085393 h 1.64062 v 10.203117 q 0,1.78125 -0.46875,2.48438 -0.59375,0.90625 -1.96875,0.90625 -0.65625,0 -1.28125,-0.17188 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path155"
+ d="m 0,101.84776 h 128.18896 v 21.48032 H 0 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path157"
+ d="m 13.359375,132.68964 q -1.359375,-1.70313 -2.296875,-4 -0.9375,-2.29688 -0.9375,-4.76563 0,-2.15625 0.703125,-4.14062 0.828125,-2.3125 2.53125,-4.59375 h 1.171875 q -1.09375,1.89062 -1.453125,2.70312 -0.546875,1.25 -0.875,2.625 -0.390625,1.70313 -0.390625,3.42188 0,4.375 2.71875,8.75 z m 2.697052,-8.21875 1.65625,-0.14063 q 0.125,1 0.546875,1.64063 0.4375,0.64062 1.34375,1.04687 0.921875,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.359375,-0.46875 -1.1875,-0.79687 -0.546875,-0.20313 -2.390625,-0.64063 -1.828125,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75 -0.46875,-1.67188 0,-1 0.578125,-1.875 0.578125,-0.89062 1.671875,-1.34375 1.109375,-0.45312 2.453125,-0.45312 1.484375,0 2.609375,0.48437 1.140625,0.46875 1.75,1.40625 0.609375,0.92188 0.65625,2.09375 l -1.6875,0.125 q -0.140625,-1.26562 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.609375,0 -2.34375,0.59375 -0.734375,0.59375 -0.734375,1.42188 0,0.71875 0.53125,1.17187 0.5,0.46875 2.65625,0.96875 2.15625,0.48438 2.953125,0.84375 1.171875,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.609375,2.01562 -0.609375,0.9375 -1.75,1.46875 -1.140625,0.51563 -2.578125,0.51563 -1.8125,0 -3.046875,-0.53125 -1.21875,-0.53125 -1.921875,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z M 28.4375,122.17401 q 0,-2.35937 0.484375,-3.79687 0.484375,-1.45313 1.4375,-2.23438 0.96875,-0.78125 2.421875,-0.78125 1.078125,0 1.890625,0.4375 0.8125,0.42188 1.328125,1.25 0.53125,0.8125 0.828125,1.98438 0.3125,1.15625 0.3125,3.14062 0,2.35938 -0.484375,3.8125 -0.484375,1.4375 -1.453125,2.23438 -0.953125,0.78125 -2.421875,0.78125 -1.921875,0 -3.03125,-1.39063 -1.3125,-1.67187 -1.3125,-5.4375 z m 1.671875,0 q 0,3.29688 0.765625,4.39063 0.78125,1.07812 1.90625,1.07812 1.140625,0 1.90625,-1.09375 0.765625,-1.09375 0.765625,-4.375 0,-3.29687 -0.765625,-4.375 -0.765625,-1.07812 -1.921875,-1.07812 -1.125,0 
-1.796875,0.95312 -0.859375,1.21875 -0.859375,4.5 z m 9.578842,6.59375 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.359375,0.64063 -1.15625,0.98438 l -0.453125,-0.70313 q 0.515625,-0.21875 0.765625,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 9.554108,-4.29687 1.65625,-0.14063 q 0.125,1 0.546875,1.64063 0.4375,0.64062 1.34375,1.04687 0.921875,0.39063 2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.359375,-0.46875 -1.1875,-0.79687 -0.546875,-0.20313 -2.390625,-0.64063 -1.828125,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75 -0.46875,-1.67188 0,-1 0.578125,-1.875 0.578125,-0.89062 1.671875,-1.34375 1.109375,-0.45312 2.453125,-0.45312 1.484375,0 2.609375,0.48437 1.140625,0.46875 1.75,1.40625 0.609375,0.92188 0.65625,2.09375 l -1.6875,0.125 q -0.140625,-1.26562 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.609375,0 -2.34375,0.59375 -0.734375,0.59375 -0.734375,1.42188 0,0.71875 0.53125,1.17187 0.5,0.46875 2.65625,0.96875 2.15625,0.48438 2.953125,0.84375 1.171875,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.609375,2.01562 -0.609375,0.9375 -1.75,1.46875 -1.140625,0.51563 -2.578125,0.51563 -1.8125,0 -3.046875,-0.53125 -1.21875,-0.53125 -1.921875,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 18.552948,4.29687 h -1.640625 v -10.45312 q -0.59375,0.5625 -1.5625,1.14062 -0.953125,0.5625 -1.71875,0.84375 v -1.59375 q 1.375,-0.64062 2.40625,-1.5625 1.03125,-0.92187 1.453125,-1.78125 h 1.0625 z m 5.735092,3.92188 h -1.1875 q 2.734375,-4.375 2.734375,-8.75 0,-1.71875 -0.390625,-3.39063 -0.3125,-1.375 -0.875,-2.625 -0.359375,-0.82812 -1.46875,-2.73437 h 1.1875 q 1.703125,2.28125 2.53125,4.59375 0.6875,1.98437 0.6875,4.14062 0,2.46875 -0.9375,4.76563 -0.9375,2.29687 -2.28125,4 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path159"
+ d="m 54.629919,131.32808 63.433071,13.85826" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path161"
+ d="m 54.629919,131.32808 57.571331,12.57765" />
+ <path
+ style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt"
+ inkscape:connector-curvature="0"
+ id="path163"
+ d="m 111.84871,145.51939 4.78606,-0.64507 -4.08098,-2.58227 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path165"
+ d="M 129.79528,118.46719 128.18897,34.719153" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path167"
+ d="M 129.79528,118.46719 128.30402,40.718047" />
+ <path
+ style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt"
+ inkscape:connector-curvature="0"
+ id="path169"
+ d="m 129.95547,40.686383 -1.73846,-4.505589 -1.5644,4.568936 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path171"
+ d="m 153.25985,273.69815 v -75.1181" />
+ <path
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round"
+ inkscape:connector-curvature="0"
+ id="path173"
+ d="m 153.25985,273.69815 v -69.1181" />
+ <path
+ style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt"
+ inkscape:connector-curvature="0"
+ id="path175"
+ d="m 154.91157,204.58005 -1.65172,-4.5381 -1.65173,4.5381 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path177"
+ d="m 82.181099,280.41995 h 63.433071 v 31.27557 H 82.181099 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path179"
+ d="m 98.681099,307.33995 v -1.21875 q -0.90625,1.4375 -2.70312,1.4375 -1.15625,0 -2.125,-0.64063 -0.96875,-0.64062 -1.5,-1.78125 -0.53125,-1.14062 -0.53125,-2.625 0,-1.45312 0.48437,-2.625 0.48438,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17188,-0.625 0.875,0 1.54687,0.375 0.6875,0.35938 1.10938,0.95313 v -4.79688 h 1.640621 v 13.35938 z m -5.17187,-4.82813 q 0,1.85938 0.78125,2.78125 0.78125,0.92188 1.84375,0.92188 1.07812,0 1.82812,-0.875 0.75,-0.89063 0.75,-2.6875 0,-1.98438 -0.76562,-2.90625 -0.76563,-0.9375 -1.89063,-0.9375 -1.07812,0 -1.8125,0.89062 -0.73437,0.89063 -0.73437,2.8125 z m 8.828841,-1.76562 q 0,-2.35938 0.48437,-3.79688 0.48438,-1.45312 1.4375,-2.23437 0.96875,-0.78125 2.42188,-0.78125 1.07812,0 1.89062,0.4375 0.8125,0.42187 1.32813,1.25 0.53125,0.8125 0.82812,1.98437 0.3125,1.15625 0.3125,3.14063 0,2.35937 -0.48437,3.8125 -0.48438,1.4375 -1.45313,2.23437 -0.95312,0.78125 -2.42187,0.78125 -1.92188,0 -3.03125,-1.39062 -1.3125,-1.67188 -1.3125,-5.4375 z m 1.67187,0 q 0,3.29687 0.76563,4.39062 0.78125,1.07813 1.90625,1.07813 1.14062,0 1.90625,-1.09375 0.76562,-1.09375 0.76562,-4.375 0,-3.29688 -0.76562,-4.375 -0.76563,-1.07813 -1.92188,-1.07813 -1.125,0 -1.79687,0.95313 -0.85938,1.21875 -0.85938,4.5 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path181"
+ d="m 115.32809,237.12598 h 71.55905 v 31.2756 h -71.55905 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path183"
+ d="m 131.82809,264.04599 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35937 1.10937,0.95312 v -4.79687 h 1.64063 v 13.35937 z m -5.17188,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07813,0 1.82813,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89063 -0.73438,0.89062 -0.73438,2.8125 z m 15.00072,4.82812 h -1.64062 v -10.45312 q -0.59375,0.5625 -1.5625,1.14062 -0.95313,0.5625 -1.71875,0.84375 v -1.59375 q 1.375,-0.64062 2.40625,-1.5625 1.03125,-0.92187 1.45312,-1.78125 h 1.0625 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path185"
+ d="m 171.88189,134.56955 h 440 v 26.70866 h -440 z" />
+ <path
+ style="fill:#434343;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path187"
+ d="m 182.61627,161.48955 v -13.35938 h 1.76562 v 13.35938 z m 4.6833,0 v -9.67188 h 1.46875 v 1.375 q 1.0625,-1.59375 3.07813,-1.59375 0.875,0 1.60937,0.3125 0.73438,0.3125 1.09375,0.82813 0.375,0.5 0.51563,1.20312 0.0937,0.45313 0.0937,1.59375 v 5.95313 h -1.64063 v -5.89063 q 0,-1 -0.20312,-1.48437 -0.1875,-0.5 -0.67188,-0.79688 -0.48437,-0.29687 -1.14062,-0.29687 -1.04688,0 -1.8125,0.67187 -0.75,0.65625 -0.75,2.51563 v 5.28125 z m 16.64135,0 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64063 -0.96875,-0.64062 -1.5,-1.78125 -0.53125,-1.14062 -0.53125,-2.625 0,-1.45312 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35938 1.10937,0.95313 v -4.79688 h 1.64063 v 13.35938 z m -5.17188,-4.82813 q 0,1.85938 0.78125,2.78125 0.78125,0.92188 1.84375,0.92188 1.07813,0 1.82813,-0.875 0.75,-0.89063 0.75,-2.6875 0,-1.98438 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89062 -0.73438,0.89063 -0.73438,2.8125 z m 15.90697,1.71875 1.6875,0.20313 q -0.40625,1.48437 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29688 -1.23438,-1.3125 -1.23438,-3.67187 0,-2.45313 1.25,-3.79688 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32813 1.23437,1.3125 1.23437,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48438 1.01563,-1.51563 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76562,0.75 -0.84375,2.01562 z m 8.04759,5.76563 3.53125,-5.03125 -3.26562,-4.64063 h 2.04687 l 1.48438,2.26563 q 0.42187,0.64062 0.67187,1.07812 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.54688 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 15.21456,-4.29688 1.65625,-0.14062 q 0.125,1 0.54687,1.64062 0.4375,0.64063 1.34375,1.04688 0.92188,0.39062 
2.0625,0.39062 1,0 1.78125,-0.29687 0.78125,-0.29688 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35937,-0.46875 -1.1875,-0.79688 -0.54687,-0.20312 -2.39062,-0.64062 -1.82813,-0.45313 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23438 -0.46875,-0.75 -0.46875,-1.67187 0,-1 0.57812,-1.875 0.57813,-0.89063 1.67188,-1.34375 1.10937,-0.45313 2.45312,-0.45313 1.48438,0 2.60938,0.48438 1.14062,0.46875 1.75,1.40625 0.60937,0.92187 0.65625,2.09375 l -1.6875,0.125 q -0.14063,-1.26563 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60938,0 -2.34375,0.59375 -0.73438,0.59375 -0.73438,1.42187 0,0.71875 0.53125,1.17188 0.5,0.46875 2.65625,0.96875 2.15625,0.48437 2.95313,0.84375 1.17187,0.53125 1.71875,1.35937 0.5625,0.82813 0.5625,1.90625 0,1.0625 -0.60938,2.01563 -0.60937,0.9375 -1.75,1.46875 -1.14062,0.51562 -2.57812,0.51562 -1.8125,0 -3.04688,-0.53125 -1.21875,-0.53125 -1.92187,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 12.83418,8 v -13.375 h 1.48438 v 1.25 q 0.53125,-0.73437 1.1875,-1.09375 0.67187,-0.375 1.625,-0.375 1.23437,0 2.17187,0.64063 0.95313,0.625 1.4375,1.79687 0.48438,1.15625 0.48438,2.54688 0,1.48437 -0.53125,2.67187 -0.53125,1.1875 -1.54688,1.82813 -1.01562,0.625 -2.14062,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.70312 z m 1.48438,-8.48437 q 0,1.85937 0.75,2.76562 0.76562,0.89063 1.82812,0.89063 1.09375,0 1.875,-0.92188 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76562,-2.76562 -0.75,-0.92188 -1.8125,-0.92188 -1.04688,0 -1.85938,0.98438 -0.79687,0.96875 -0.79687,2.84375 z m 15.20385,3.59375 q -0.92187,0.76562 -1.76562,1.09375 -0.82813,0.3125 -1.79688,0.3125 -1.59375,0 -2.45312,-0.78125 -0.85938,-0.78125 -0.85938,-1.98438 0,-0.71875 0.32813,-1.29687 0.32812,-0.59375 0.84375,-0.9375 0.53125,-0.35938 1.1875,-0.54688 0.46875,-0.125 1.45312,-0.25 1.98438,-0.23437 2.92188,-0.5625 0.0156,-0.34375 0.0156,-0.42187 0,-1 -0.46875,-1.42188 -0.625,-0.54687 -1.875,-0.54687 -1.15625,0 -1.70312,0.40625 
-0.54688,0.40625 -0.8125,1.42187 l -1.60938,-0.21875 q 0.21875,-1.01562 0.71875,-1.64062 0.5,-0.64063 1.45313,-0.98438 0.95312,-0.34375 2.1875,-0.34375 1.25,0 2.01562,0.29688 0.78125,0.28125 1.14063,0.73437 0.375,0.4375 0.51562,1.10938 0.0781,0.42187 0.0781,1.51562 v 2.1875 q 0,2.28125 0.10937,2.89063 0.10938,0.59375 0.40625,1.15625 h -1.70312 q -0.26563,-0.51563 -0.32813,-1.1875 z m -0.14062,-3.67188 q -0.89063,0.375 -2.67188,0.625 -1.01562,0.14063 -1.4375,0.32813 -0.42187,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45313,0.4375 0.9375,0 1.67187,-0.40625 0.75,-0.42188 1.09375,-1.14063 0.26563,-0.5625 0.26563,-1.64062 z m 10.51636,1.3125 1.60937,0.21875 q -0.26562,1.65625 -1.35937,2.60938 -1.07813,0.9375 -2.67188,0.9375 -1.98437,0 -3.1875,-1.29688 -1.20312,-1.29687 -1.20312,-3.71875 0,-1.57812 0.51562,-2.75 0.51563,-1.17187 1.57813,-1.75 1.0625,-0.59375 2.3125,-0.59375 1.57812,0 2.57812,0.79688 1,0.79687 1.28125,2.26562 l -1.59375,0.23438 q -0.23437,-0.96875 -0.8125,-1.45313 -0.57812,-0.5 -1.39062,-0.5 -1.23438,0 -2.01563,0.89063 -0.78125,0.89062 -0.78125,2.8125 0,1.95312 0.75,2.84375 0.75,0.875 1.95313,0.875 0.96875,0 1.60937,-0.59375 0.65625,-0.59375 0.82813,-1.82813 z m 9.64062,0.4375 1.6875,0.20313 q -0.40625,1.48437 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.29688 -1.23437,-1.3125 -1.23437,-3.67187 0,-2.45313 1.25,-3.79688 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.32813 1.23438,1.3125 1.23438,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.48438 1.01562,-1.51563 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76563,0.75 -0.84375,2.01562 z m 14.10589,-0.0781 v -1.53127 l 8.84375,-3.73438 v 1.64063 l -7.01562,2.875 7.01562,2.90625 v 1.625 z m 10.66059,2.3125 1.64062,-0.21875 q 0.28125,1.40625 
0.95313,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95312,-0.79688 0.79688,-0.79687 0.79688,-1.98437 0,-1.125 -0.73438,-1.85938 -0.73437,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26563,0.0156 1.04687,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.60937 -0.64063,0.59375 -0.8125,1.79688 l -1.64063,-0.29688 q 0.29688,-1.64062 1.35938,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.04687 -1.15625,-1.04688 -1.32812,-2.71875 z m 9.73507,3.53125 3.53125,-5.03125 -3.26562,-4.64063 h 2.04687 l 1.48438,2.26563 q 0.42187,0.64062 0.67187,1.07812 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.54688 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 9.96875,-3.53125 1.64063,-0.21875 q 0.28125,1.40625 0.95312,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95313,-0.79688 0.79687,-0.79687 0.79687,-1.98437 0,-1.125 -0.73437,-1.85938 -0.73438,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26562,0.0156 1.04688,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60937,-1.5 -0.60938,-0.59375 -1.57813,-0.59375 -0.95312,0 -1.59375,0.60937 -0.64062,0.59375 -0.8125,1.79688 l -1.64062,-0.29688 q 0.29687,-1.64062 1.35937,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92188,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26562,1.21875 -3.21875,1.21875 -1.76562,0 -2.92187,-1.04687 -1.15625,-1.04688 -1.32813,-2.71875 
z m 9.73508,3.53125 3.53125,-5.03125 -3.26563,-4.64063 h 2.04688 l 1.48437,2.26563 q 0.42188,0.64062 0.67188,1.07812 0.40625,-0.59375 0.73437,-1.0625 l 1.64063,-2.28125 h 1.95312 l -3.34375,4.54688 3.59375,5.125 h -2.01562 l -1.98438,-3 -0.51562,-0.8125 -2.54688,3.8125 z m 10.8125,0 v -8.40625 h -1.45313 v -1.26563 h 1.45313 v -1.03125 q 0,-0.96875 0.17187,-1.45312 0.23438,-0.64063 0.82813,-1.03125 0.59375,-0.39063 1.67187,-0.39063 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95312,-0.0937 -0.75,0 -1.0625,0.32813 -0.3125,0.3125 -0.3125,1.1875 v 0.89062 h 1.89062 v 1.26563 h -1.89062 v 8.40618 z m 4.33957,-3.53125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95312,-0.79688 0.79688,-0.79687 0.79688,-1.98437 0,-1.125 -0.73438,-1.85938 -0.73437,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26563,0.0156 1.04687,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.60937 -0.64063,0.59375 -0.8125,1.79688 l -1.64063,-0.29688 q 0.29688,-1.64062 1.35938,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.04687 -1.15625,-1.04688 -1.32812,-2.71875 z m 18.98508,1.95312 v 1.57811 h -8.82813 q -0.0156,-0.59375 0.1875,-1.14063 0.34375,-0.90625 1.07813,-1.78125 0.75,-0.875 2.15625,-2.01562 2.17187,-1.78125 2.9375,-2.82813 0.76562,-1.04687 0.76562,-1.96875 0,-0.98437 -0.70312,-1.64062 -0.6875,-0.67188 -1.8125,-0.67188 -1.1875,0 -1.90625,0.71875 -0.70313,0.70313 -0.70313,1.95313 l -1.6875,-0.17188 q 0.17188,-1.89062 1.29688,-2.875 1.14062,-0.98437 3.03125,-0.98437 1.92187,0 3.04687,1.0625 1.125,1.0625 1.125,2.64062 0,0.79688 
-0.32812,1.57813 -0.32813,0.78125 -1.09375,1.64062 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23438 -1.89063,1.6875 -0.42187,0.4375 -0.6875,0.875 z m 10.84448,-4.26562 -8.84375,3.78125 v -1.625 l 7.01562,-2.90625 -7.01562,-2.875 v -1.64063 l 8.84375,3.73438 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path189"
+ d="m 181.96001,173.34892 q 0,-1.4375 0.71875,-2.4375 0.71875,-1 2.09375,-1 1.25,0 2.07813,0.90625 0.82812,0.89062 0.82812,2.625 0,1.6875 -0.84375,2.60937 -0.82812,0.92188 -2.04687,0.92188 -1.20313,0 -2.01563,-0.90625 -0.8125,-0.90625 -0.8125,-2.71875 z m 2.85938,-2.3125 q -0.60938,0 -1.01563,0.53125 -0.40625,0.53125 -0.40625,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51562 1.01563,0.51562 0.625,0 1.01562,-0.51562 0.40625,-0.53125 0.40625,-1.9375 0,-1.29688 -0.40625,-1.8125 -0.40625,-0.53125 -1.01562,-0.53125 z m 0,12.9375 7.3125,-14.0625 h 1.32812 l -7.28125,14.0625 z m 5.78125,-3.625 q 0,-1.4375 0.71875,-2.42188 0.71875,-1 2.09375,-1 1.26562,0 2.07812,0.90625 0.82813,0.89063 0.82813,2.625 0,1.6875 -0.82813,2.60938 -0.82812,0.90625 -2.0625,0.90625 -1.21875,0 -2.03125,-0.90625 -0.79687,-0.90625 -0.79687,-2.71875 z m 2.85937,-2.29688 q -0.625,0 -1.03125,0.53125 -0.39062,0.53125 -0.39062,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51563 1.01562,0.51563 0.625,0 1.03125,-0.51563 0.40625,-0.53125 0.40625,-1.9375 0,-1.29687 -0.42187,-1.8125 -0.40625,-0.53125 -1.01563,-0.53125 z m 3.97902,5.4375 5.125,-13.35937 h 1.90625 l 5.46875,13.35937 h -2.01563 l -1.54687,-4.04687 h -5.59375 l -1.46875,4.04687 z m 3.85937,-5.48437 h 4.53125 l -1.40625,-3.70313 q -0.625,-1.6875 -0.9375,-2.76562 -0.26562,1.28125 -0.71875,2.54687 z m 23.65813,-2.375 h -8.82812 v -1.51563 h 8.82812 z m 0,4.0625 h -8.82812 v -1.53125 h 8.82812 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path191"
+ d="m 234.42542,176.7708 -2.32813,-0.42188 q 0.40625,-1.40625 1.35938,-2.07812 0.95312,-0.67188 2.84375,-0.67188 1.70312,0 2.54687,0.40625 0.84375,0.40625 1.17188,1.03125 0.34375,0.625 0.34375,2.28125 l -0.0156,3 q 0,1.26563 0.10938,1.875 0.125,0.60938 0.46875,1.29688 h -2.53125 q -0.10938,-0.25 -0.25,-0.75 -0.0625,-0.23438 -0.0937,-0.3125 -0.65625,0.64062 -1.40625,0.96875 -0.73438,0.3125 -1.59375,0.3125 -1.48438,0 -2.34375,-0.8125 -0.85938,-0.8125 -0.85938,-2.04688 0,-0.82812 0.39063,-1.46875 0.39062,-0.64062 1.09375,-0.96875 0.70312,-0.34375 2.03125,-0.60937 1.79687,-0.32813 2.48437,-0.625 v -0.25 q 0,-0.75 -0.35937,-1.0625 -0.35938,-0.3125 -1.375,-0.3125 -0.6875,0 -1.07813,0.28125 -0.375,0.26562 -0.60937,0.9375 z m 3.42187,2.07812 q -0.48437,0.15625 -1.5625,0.39063 -1.0625,0.21875 -1.39062,0.4375 -0.5,0.35937 -0.5,0.90625 0,0.53125 0.40625,0.9375 0.40625,0.39062 1.01562,0.39062 0.70313,0 1.32813,-0.46875 0.46875,-0.34375 0.60937,-0.84375 0.0937,-0.32812 0.0937,-1.25 z m 5.0476,4.64063 v -13.35938 h 2.5625 v 13.35938 z m 5.18329,0 v -13.35938 h 2.5625 v 13.35938 z m 4.58956,-4.96875 q 0,-1.28125 0.625,-2.46875 0.625,-1.20313 1.78125,-1.82813 1.15625,-0.625 2.57813,-0.625 2.1875,0 3.59375,1.42188 1.40625,1.42187 1.40625,3.60937 0,2.1875 -1.42188,3.64063 -1.42187,1.4375 -3.5625,1.4375 -1.32812,0 -2.54687,-0.59375 -1.20313,-0.60938 -1.82813,-1.76563 -0.625,-1.17187 -0.625,-2.82812 z m 2.625,0.125 q 0,1.45312 0.67188,2.21875 0.6875,0.75 1.6875,0.75 1,0 1.67187,-0.75 0.6875,-0.76563 0.6875,-2.23438 0,-1.42187 -0.6875,-2.1875 -0.67187,-0.76562 -1.67187,-0.76562 -1,0 -1.6875,0.76562 -0.67188,0.76563 -0.67188,2.20313 z m 17.80222,-1.96875 -2.53125,0.45312 q -0.125,-0.75 -0.57812,-1.125 -0.45313,-0.39062 -1.17188,-0.39062 -0.95312,0 -1.53125,0.65625 -0.5625,0.65625 -0.5625,2.20312 0,1.73438 0.57813,2.4375 0.57812,0.70313 1.54687,0.70313 0.73438,0 1.20313,-0.40625 0.46875,-0.42188 0.65625,-1.42188 l 2.51562,0.42188 q -0.39062,1.73437 -1.51562,2.625 -1.10938,0.875 
-2.96875,0.875 -2.125,0 -3.39063,-1.32813 -1.25,-1.34375 -1.25,-3.71875 0,-2.39062 1.26563,-3.71875 1.26562,-1.34375 3.42187,-1.34375 1.76563,0 2.79688,0.76563 1.04687,0.75 1.51562,2.3125 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path193"
+ d="m 280.10711,183.48955 v -9.67188 h 1.46875 v 1.35938 q 0.45312,-0.71875 1.20312,-1.14063 0.76563,-0.4375 1.71875,-0.4375 1.07813,0 1.76563,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98437,-1.6875 1.45313,0 2.21875,0.79688 0.78125,0.79687 0.78125,2.45312 v 6.64063 h -1.64062 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57813,-0.70313 -0.42187,-0.26562 -0.98437,-0.26562 -1.01563,0 -1.6875,0.6875 -0.67188,0.67187 -0.67188,2.15625 v 5.625 h -1.64062 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 -0.59375,0.35938 -0.85938,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 22.16583,-3.10938 1.6875,0.20313 q -0.40625,1.48437 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.29688 -1.23437,-1.3125 -1.23437,-3.67187 0,-2.45313 1.25,-3.79688 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.32813 1.23438,1.3125 1.23438,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.48438 1.01562,-1.51563 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76563,0.75 -0.84375,2.01562 z m 9.14132,5.76563 v -9.67188 h 1.46875 v 1.35938 q 0.45313,-0.71875 1.20313,-1.14063 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.79688 0.78125,0.79687 0.78125,2.45312 v 6.64063 h -1.64063 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70313 -0.42188,-0.26562 -0.98438,-0.26562 -1.01562,0 -1.6875,0.6875 -0.67187,0.67187 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 -0.59375,0.35938 -0.85937,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 15.52518,0 v -9.67188 h 1.46875 v 1.46875 q 0.5625,-1.03125 1.03125,-1.35937 
0.48438,-0.32813 1.0625,-0.32813 0.82813,0 1.6875,0.53125 l -0.5625,1.51563 q -0.60937,-0.35938 -1.20312,-0.35938 -0.54688,0 -0.96875,0.32813 -0.42188,0.32812 -0.60938,0.89062 -0.28125,0.875 -0.28125,1.92188 v 5.0625 z m 12.8533,-3.10938 1.6875,0.20313 q -0.40625,1.48437 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.29688 -1.23437,-1.3125 -1.23437,-3.67187 0,-2.45313 1.25,-3.79688 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.32813 1.23438,1.3125 1.23438,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.48438 1.01562,-1.51563 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76563,0.75 -0.84375,2.01562 z m 9.53195,5.76563 v -8.40625 h -1.45312 v -1.26563 h 1.45312 v -1.03125 q 0,-0.96875 0.17188,-1.45312 0.23437,-0.64063 0.82812,-1.03125 0.59375,-0.39063 1.67188,-0.39063 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95313,-0.0937 -0.75,0 -1.0625,0.32813 -0.3125,0.3125 -0.3125,1.1875 v 0.89062 h 1.89063 v 1.26563 h -1.89063 v 8.4062 z m 4.57394,-5.84375 v -1.53125 l 8.84375,-3.73438 v 1.64063 l -7.01562,2.875 7.01562,2.90625 v 1.625 z m 10.66059,2.3125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95312,-0.79688 0.79688,-0.79687 0.79688,-1.98437 0,-1.125 -0.73438,-1.85938 -0.73437,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26563,0.0156 1.04687,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.60937 -0.64063,0.59375 -0.8125,1.79688 l -1.64063,-0.29688 q 0.29688,-1.64062 1.35938,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 
0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.04687 -1.15625,-1.04688 -1.32812,-2.71875 z m 9.73508,3.53125 3.53125,-5.03125 -3.26563,-4.64063 h 2.04688 l 1.48437,2.26563 q 0.42188,0.64062 0.67188,1.07812 0.40625,-0.59375 0.73437,-1.0625 l 1.64063,-2.28125 h 1.95312 l -3.34375,4.54688 3.59375,5.125 h -2.01562 l -1.98438,-3 -0.51562,-0.8125 -2.54688,3.8125 z m 9.96875,-3.53125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95312,-0.79688 0.79688,-0.79687 0.79688,-1.98437 0,-1.125 -0.73438,-1.85938 -0.73437,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26563,0.0156 1.04687,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.60937 -0.64063,0.59375 -0.8125,1.79688 l -1.64063,-0.29688 q 0.29688,-1.64062 1.35938,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.04687 -1.15625,-1.04688 -1.32812,-2.71875 z m 9.7351,3.53125 3.53125,-5.03125 -3.26562,-4.64063 h 2.04687 l 1.48438,2.26563 q 0.42187,0.64062 0.67187,1.07812 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.54688 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 10.8125,0 v -8.40625 h -1.45312 v -1.26563 h 1.45312 v -1.03125 q 0,-0.96875 0.17188,-1.45312 0.23437,-0.64063 0.82812,-1.03125 0.59375,-0.39063 1.67188,-0.39063 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95313,-0.0937 -0.75,0 -1.0625,0.32813 -0.3125,0.3125 -0.3125,1.1875 v 0.89062 h 1.89063 v 1.26563 h -1.89063 v 8.4062 z m 4.33954,-3.53125 1.64063,-0.21875 q 0.28125,1.40625 
0.95312,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95313,-0.79688 0.79687,-0.79687 0.79687,-1.98437 0,-1.125 -0.73437,-1.85938 -0.73438,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26562,0.0156 1.04688,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60937,-1.5 -0.60938,-0.59375 -1.57813,-0.59375 -0.95312,0 -1.59375,0.60937 -0.64062,0.59375 -0.8125,1.79688 l -1.64062,-0.29688 q 0.29687,-1.64062 1.35937,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92188,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26562,1.21875 -3.21875,1.21875 -1.76562,0 -2.92187,-1.04687 -1.15625,-1.04688 -1.32813,-2.71875 z m 18.98511,1.95312 v 1.57813 h -8.82813 q -0.0156,-0.59375 0.1875,-1.14063 0.34375,-0.90625 1.07813,-1.78125 0.75,-0.875 2.15625,-2.01562 2.17187,-1.78125 2.9375,-2.82813 0.76562,-1.04687 0.76562,-1.96875 0,-0.98437 -0.70312,-1.64062 -0.6875,-0.67188 -1.8125,-0.67188 -1.1875,0 -1.90625,0.71875 -0.70313,0.70313 -0.70313,1.95313 l -1.6875,-0.17188 q 0.17188,-1.89062 1.29688,-2.875 1.14062,-0.98437 3.03125,-0.98437 1.92187,0 3.04687,1.0625 1.125,1.0625 1.125,2.64062 0,0.79688 -0.32812,1.57813 -0.32813,0.78125 -1.09375,1.64062 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23438 -1.89063,1.6875 -0.42187,0.4375 -0.6875,0.875 z m 2.64136,1.57813 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35938,0.64062 -1.15625,0.98437 l -0.45313,-0.70312 q 0.51563,-0.21875 0.76563,-0.67188 0.25,-0.4375 0.28125,-1.26562 z m 9.64782,0.23437 0.79688,-3.89062 h -1.54688 v -1.35938 h 1.8125 l 0.67188,-3.29687 h -2.48438 v -1.35938 h 2.76563 l 0.79687,-3.90625 h 1.35938 l -0.79688,3.90625 h 2.875 l 0.79688,-3.90625 h 1.375 l -0.79688,3.90625 h 1.57813 v 1.35938 h -1.84375 l -0.6875,3.29687 h 2.53125 v 1.35938 h -2.8125 l -0.78125,3.89062 h -1.375 l 
0.78125,-3.89062 h -2.85938 l -0.78125,3.89062 z m 2.4375,-5.25 h 2.85938 l 0.6875,-3.29687 h -2.875 z m 8.18823,5.01563 v -13.35938 h 1.64063 v 13.35938 z m 4.19172,0 v -9.67188 h 1.46875 v 1.35938 q 0.45312,-0.71875 1.20312,-1.14063 0.76563,-0.4375 1.71875,-0.4375 1.07813,0 1.76563,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98437,-1.6875 1.45313,0 2.21875,0.79688 0.78125,0.79687 0.78125,2.45312 v 6.64063 h -1.64062 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57813,-0.70313 -0.42187,-0.26562 -0.98437,-0.26562 -1.01563,0 -1.6875,0.6875 -0.67188,0.67187 -0.67188,2.15625 v 5.625 h -1.64062 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 -0.59375,0.35938 -0.85938,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 21.85327,-1.1875 q -0.92188,0.76562 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.98438 0,-0.71875 0.32812,-1.29687 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.35938 1.1875,-0.54688 0.46875,-0.125 1.45313,-0.25 1.98437,-0.23437 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.42187 0,-1 -0.46875,-1.42188 -0.625,-0.54687 -1.875,-0.54687 -1.15625,0 -1.70313,0.40625 -0.54687,0.40625 -0.8125,1.42187 l -1.60937,-0.21875 q 0.21875,-1.01562 0.71875,-1.64062 0.5,-0.64063 1.45312,-0.98438 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.29688 0.78125,0.28125 1.14062,0.73437 0.375,0.4375 0.51563,1.10938 0.0781,0.42187 0.0781,1.51562 v 2.1875 q 0,2.28125 0.10938,2.89063 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.51563 -0.32812,-1.1875 z m -0.14063,-3.67188 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.14063 -1.4375,0.32813 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.42188 1.09375,-1.14063 0.26562,-0.5625 0.26562,-1.64062 z m 4.20386,8.5625 v -13.375 h 1.48437 v 1.25 q 0.53125,-0.73437 1.1875,-1.09375 0.67188,-0.375 
1.625,-0.375 1.23438,0 2.17188,0.64063 0.95312,0.625 1.4375,1.79687 0.48437,1.15625 0.48437,2.54688 0,1.48437 -0.53125,2.67187 -0.53125,1.1875 -1.54687,1.82813 -1.01563,0.625 -2.14063,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.70312 z m 1.48437,-8.48437 q 0,1.85937 0.75,2.76562 0.76563,0.89063 1.82813,0.89063 1.09375,0 1.875,-0.92188 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76563,-2.76562 -0.75,-0.92188 -1.8125,-0.92188 -1.04687,0 -1.85937,0.98438 -0.79688,0.96875 -0.79688,2.84375 z m 7.62574,4.78125 5.125,-13.35938 h 1.90625 l 5.46875,13.35938 h -2.01563 l -1.54687,-4.04688 h -5.59375 l -1.46875,4.04688 z m 3.85937,-5.48438 h 4.53125 l -1.40625,-3.70312 q -0.625,-1.6875 -0.9375,-2.76563 -0.26562,1.28125 -0.71875,2.54688 z m 10.27167,5.48438 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35938,0.64062 -1.15625,0.98437 l -0.45313,-0.70312 q 0.51563,-0.21875 0.76563,-0.67188 0.25,-0.4375 0.28125,-1.26562 z m 12.63226,0 -3.6875,-9.67188 h 1.73438 l 2.07812,5.79688 q 0.32813,0.9375 0.625,1.9375 0.20313,-0.76563 0.60938,-1.82813 l 2.14062,-5.90625 h 1.6875 l -3.65625,9.67188 z m 6.64063,0 v -9.67188 h 1.46875 v 1.35938 q 0.45312,-0.71875 1.20312,-1.14063 0.76563,-0.4375 1.71875,-0.4375 1.07813,0 1.76563,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98437,-1.6875 1.45313,0 2.21875,0.79688 0.78125,0.79687 0.78125,2.45312 v 6.64063 h -1.64062 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57813,-0.70313 -0.42187,-0.26562 -0.98437,-0.26562 -1.01563,0 -1.6875,0.6875 -0.67188,0.67187 -0.67188,2.15625 v 5.625 h -1.64062 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 -0.59375,0.35938 -0.85938,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 22.16577,-3.10938 1.6875,0.20313 q -0.40625,1.48437 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29688 -1.23438,-1.3125 -1.23438,-3.67187 0,-2.45313 1.25,-3.79688 1.26563,-1.34375 3.26563,-1.34375 
1.9375,0 3.15625,1.32813 1.23437,1.3125 1.23437,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48438 1.01563,-1.51563 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76562,0.75 -0.84375,2.01562 z m 9.14136,5.76563 v -9.67188 h 1.46875 v 1.35938 q 0.45313,-0.71875 1.20313,-1.14063 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.79688 0.78125,0.79687 0.78125,2.45312 v 6.64063 h -1.64063 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70313 -0.42188,-0.26562 -0.98438,-0.26562 -1.01562,0 -1.6875,0.6875 -0.67187,0.67187 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 -0.59375,0.35938 -0.85937,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 24.16583,-5.84375 -8.84375,3.78125 v -1.625 l 7.01563,-2.90625 -7.01563,-2.875 v -1.64063 l 8.84375,3.73438 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path195"
+ d="m 167.14961,276.98688 h 614.7402 v 83.74802 h -614.7402 z" />
+ <path
+ style="fill:#434343;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path197"
+ d="m 177.88398,303.90685 v -13.35937 h 1.76562 v 13.35937 z m 4.6833,0 v -9.67187 h 1.46875 v 1.375 q 1.0625,-1.59375 3.07813,-1.59375 0.875,0 1.60937,0.3125 0.73438,0.3125 1.09375,0.82812 0.375,0.5 0.51563,1.20313 0.0937,0.45312 0.0937,1.59375 v 5.95312 h -1.64063 v -5.89062 q 0,-1 -0.20312,-1.48438 -0.1875,-0.5 -0.67188,-0.79687 -0.48437,-0.29688 -1.14062,-0.29688 -1.04688,0 -1.8125,0.67188 -0.75,0.65625 -0.75,2.51562 v 5.28125 z m 16.64135,0 v -1.21875 q -0.90625,1.4375 -2.70313,1.4375 -1.15625,0 -2.125,-0.64062 -0.96875,-0.64063 -1.5,-1.78125 -0.53125,-1.14063 -0.53125,-2.625 0,-1.45313 0.48438,-2.625 0.48437,-1.1875 1.4375,-1.8125 0.96875,-0.625 2.17187,-0.625 0.875,0 1.54688,0.375 0.6875,0.35937 1.10937,0.95312 v -4.79687 h 1.64063 v 13.35937 z m -5.17188,-4.82812 q 0,1.85937 0.78125,2.78125 0.78125,0.92187 1.84375,0.92187 1.07813,0 1.82813,-0.875 0.75,-0.89062 0.75,-2.6875 0,-1.98437 -0.76563,-2.90625 -0.76562,-0.9375 -1.89062,-0.9375 -1.07813,0 -1.8125,0.89063 -0.73438,0.89062 -0.73438,2.8125 z m 15.90697,1.71875 1.6875,0.20312 q -0.40625,1.48438 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29687 -1.23438,-1.3125 -1.23438,-3.67188 0,-2.45312 1.25,-3.79687 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32812 1.23437,1.3125 1.23437,3.70313 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45312 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48437 1.01563,-1.51562 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82813 -0.78125,-0.95312 -2.03125,-0.95312 -1.125,0 -1.90625,0.76562 -0.76562,0.75 -0.84375,2.01563 z m 8.0476,5.76562 3.53125,-5.03125 -3.26563,-4.64062 h 2.04688 l 1.48437,2.26562 q 0.42188,0.64063 0.67188,1.07813 0.40625,-0.59375 0.73437,-1.0625 l 1.64063,-2.28125 h 1.95312 l -3.34375,4.54687 3.59375,5.125 h -2.01562 l -1.98438,-3 -0.51562,-0.8125 -2.54688,3.8125 z m 15.21455,-4.29687 1.65625,-0.14063 q 0.125,1 0.54687,1.64063 0.4375,0.64062 1.34375,1.04687 0.92188,0.39063 
2.0625,0.39063 1,0 1.78125,-0.29688 0.78125,-0.29687 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35937,-0.46875 -1.1875,-0.79687 -0.54687,-0.20313 -2.39062,-0.64063 -1.82813,-0.45312 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.23437 -0.46875,-0.75 -0.46875,-1.67188 0,-1 0.57812,-1.875 0.57813,-0.89062 1.67188,-1.34375 1.10937,-0.45312 2.45312,-0.45312 1.48438,0 2.60938,0.48437 1.14062,0.46875 1.75,1.40625 0.60937,0.92188 0.65625,2.09375 l -1.6875,0.125 q -0.14063,-1.26562 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60938,0 -2.34375,0.59375 -0.73438,0.59375 -0.73438,1.42188 0,0.71875 0.53125,1.17187 0.5,0.46875 2.65625,0.96875 2.15625,0.48438 2.95313,0.84375 1.17187,0.53125 1.71875,1.35938 0.5625,0.82812 0.5625,1.90625 0,1.0625 -0.60938,2.01562 -0.60937,0.9375 -1.75,1.46875 -1.14062,0.51563 -2.57812,0.51563 -1.8125,0 -3.04688,-0.53125 -1.21875,-0.53125 -1.92187,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 12.8342,8 v -13.375 h 1.48437 v 1.25 q 0.53125,-0.73438 1.1875,-1.09375 0.67188,-0.375 1.625,-0.375 1.23438,0 2.17188,0.64062 0.95312,0.625 1.4375,1.79688 0.48437,1.15625 0.48437,2.54687 0,1.48438 -0.53125,2.67188 -0.53125,1.1875 -1.54687,1.82812 -1.01563,0.625 -2.14063,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.70313 z m 1.48437,-8.48438 q 0,1.85938 0.75,2.76563 0.76563,0.89062 1.82813,0.89062 1.09375,0 1.875,-0.92187 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76563,-2.76563 -0.75,-0.92187 -1.8125,-0.92187 -1.04687,0 -1.85937,0.98437 -0.79688,0.96875 -0.79688,2.84375 z m 15.20386,3.59375 q -0.92188,0.76563 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.98437 0,-0.71875 0.32812,-1.29688 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.35937 1.1875,-0.54687 0.46875,-0.125 1.45313,-0.25 1.98437,-0.23438 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.42188 0,-1 -0.46875,-1.42187 -0.625,-0.54688 -1.875,-0.54688 -1.15625,0 -1.70313,0.40625 
-0.54687,0.40625 -0.8125,1.42188 l -1.60937,-0.21875 q 0.21875,-1.01563 0.71875,-1.64063 0.5,-0.64062 1.45312,-0.98437 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.29687 0.78125,0.28125 1.14062,0.73438 0.375,0.4375 0.51563,1.10937 0.0781,0.42188 0.0781,1.51563 v 2.1875 q 0,2.28125 0.10938,2.89062 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.51562 -0.32812,-1.1875 z m -0.14063,-3.67187 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.14062 -1.4375,0.32812 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.42187 1.09375,-1.14062 0.26562,-0.5625 0.26562,-1.64063 z m 10.51633,1.3125 1.60938,0.21875 q -0.26563,1.65625 -1.35938,2.60937 -1.07812,0.9375 -2.67187,0.9375 -1.98438,0 -3.1875,-1.29687 -1.20313,-1.29688 -1.20313,-3.71875 0,-1.57813 0.51563,-2.75 0.51562,-1.17188 1.57812,-1.75 1.0625,-0.59375 2.3125,-0.59375 1.57813,0 2.57813,0.79687 1,0.79688 1.28125,2.26563 l -1.59375,0.23437 q -0.23438,-0.96875 -0.8125,-1.45312 -0.57813,-0.5 -1.39063,-0.5 -1.23437,0 -2.01562,0.89062 -0.78125,0.89063 -0.78125,2.8125 0,1.95313 0.75,2.84375 0.75,0.875 1.95312,0.875 0.96875,0 1.60938,-0.59375 0.65625,-0.59375 0.82812,-1.82812 z m 9.64063,0.4375 1.6875,0.20312 q -0.40625,1.48438 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29687 -1.23438,-1.3125 -1.23438,-3.67188 0,-2.45312 1.25,-3.79687 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32812 1.23437,1.3125 1.23437,3.70313 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45312 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48437 1.01563,-1.51562 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82813 -0.78125,-0.95312 -2.03125,-0.95312 -1.125,0 -1.90625,0.76562 -0.76562,0.75 -0.84375,2.01563 z m 14.1059,-0.0781 v -1.53125 l 8.84375,-3.73437 v 1.64062 l -7.01563,2.875 7.01563,2.90625 v 1.625 z m 19.26995,4.26563 v 1.57812 h -8.82812 q -0.0156,-0.59375 
0.1875,-1.14062 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01563 2.17188,-1.78125 2.9375,-2.82812 0.76563,-1.04688 0.76563,-1.96875 0,-0.98438 -0.70313,-1.64063 -0.6875,-0.67187 -1.8125,-0.67187 -1.1875,0 -1.90625,0.71875 -0.70312,0.70312 -0.70312,1.95312 l -1.6875,-0.17187 q 0.17187,-1.89063 1.29687,-2.875 1.14063,-0.98438 3.03125,-0.98438 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64063 0,0.79687 -0.32813,1.57812 -0.32812,0.78125 -1.09375,1.64063 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23437 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 1.12574,1.57812 3.53125,-5.03125 -3.26563,-4.64062 h 2.04688 l 1.48437,2.26562 q 0.42188,0.64063 0.67188,1.07813 0.40625,-0.59375 0.73437,-1.0625 l 1.64063,-2.28125 h 1.95312 l -3.34375,4.54687 3.59375,5.125 h -2.01562 l -1.98438,-3 -0.51562,-0.8125 -2.54688,3.8125 z m 18.57812,-1.57812 v 1.57812 h -8.82812 q -0.0156,-0.59375 0.1875,-1.14062 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01563 2.17188,-1.78125 2.9375,-2.82812 0.76563,-1.04688 0.76563,-1.96875 0,-0.98438 -0.70313,-1.64063 -0.6875,-0.67187 -1.8125,-0.67187 -1.1875,0 -1.90625,0.71875 -0.70312,0.70312 -0.70312,1.95312 l -1.6875,-0.17187 q 0.17187,-1.89063 1.29687,-2.875 1.14063,-0.98438 3.03125,-0.98438 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64063 0,0.79687 -0.32813,1.57812 -0.32812,0.78125 -1.09375,1.64063 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23437 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 1.1257,1.57812 3.53125,-5.03125 -3.26562,-4.64062 h 2.04687 l 1.48438,2.26562 q 0.42187,0.64063 0.67187,1.07813 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.54687 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 10.8125,0 v -8.40625 h -1.45312 v -1.26562 h 1.45312 v -1.03125 q 0,-0.96875 0.17188,-1.45313 0.23437,-0.64062 0.82812,-1.03125 0.59375,-0.39062 1.67188,-0.39062 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95313,-0.0937 -0.75,0 -1.0625,0.32812 
-0.3125,0.3125 -0.3125,1.1875 v 0.89063 h 1.89063 v 1.26562 h -1.89063 v 8.40625 z m 4.33957,-3.53125 1.64063,-0.21875 q 0.28125,1.40625 0.95312,2.01563 0.6875,0.60937 1.65625,0.60937 1.15625,0 1.95313,-0.79687 0.79687,-0.79688 0.79687,-1.98438 0,-1.125 -0.73437,-1.85937 -0.73438,-0.73438 -1.875,-0.73438 -0.46875,0 -1.15625,0.17188 l 0.1875,-1.4375 q 0.15625,0.0156 0.26562,0.0156 1.04688,0 1.875,-0.54687 0.84375,-0.54688 0.84375,-1.67188 0,-0.90625 -0.60937,-1.5 -0.60938,-0.59375 -1.57813,-0.59375 -0.95312,0 -1.59375,0.60938 -0.64062,0.59375 -0.8125,1.79687 l -1.64062,-0.29687 q 0.29687,-1.64063 1.35937,-2.54688 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92188,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85938 -0.46875,1.57813 -0.46875,0.70312 -1.375,1.125 1.1875,0.28125 1.84375,1.14062 0.65625,0.85938 0.65625,2.15625 0,1.73438 -1.28125,2.95313 -1.26562,1.21875 -3.21875,1.21875 -1.76562,0 -2.92187,-1.04688 -1.15625,-1.04687 -1.32813,-2.71875 z m 18.98508,1.95313 v 1.57812 h -8.82812 q -0.0156,-0.59375 0.1875,-1.14062 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01563 2.17188,-1.78125 2.9375,-2.82812 0.76563,-1.04688 0.76563,-1.96875 0,-0.98438 -0.70313,-1.64063 -0.6875,-0.67187 -1.8125,-0.67187 -1.1875,0 -1.90625,0.71875 -0.70312,0.70312 -0.70312,1.95312 l -1.6875,-0.17187 q 0.17187,-1.89063 1.29687,-2.875 1.14063,-0.98438 3.03125,-0.98438 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64063 0,0.79687 -0.32813,1.57812 -0.32812,0.78125 -1.09375,1.64063 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23437 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 10.84448,-4.26563 -8.84375,3.78125 v -1.625 l 7.01563,-2.90625 -7.01563,-2.875 v -1.64062 l 8.84375,3.73437 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path199"
+ d="m 177.22773,315.76625 q 0,-1.4375 0.71875,-2.4375 0.71875,-1 2.09375,-1 1.25,0 2.07812,0.90625 0.82813,0.89063 0.82813,2.625 0,1.6875 -0.84375,2.60938 -0.82813,0.92187 -2.04688,0.92187 -1.20312,0 -2.01562,-0.90625 -0.8125,-0.90625 -0.8125,-2.71875 z m 2.85937,-2.3125 q -0.60937,0 -1.01562,0.53125 -0.40625,0.53125 -0.40625,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51563 1.01562,0.51563 0.625,0 1.01563,-0.51563 0.40625,-0.53125 0.40625,-1.9375 0,-1.29687 -0.40625,-1.8125 -0.40625,-0.53125 -1.01563,-0.53125 z m 0,12.9375 7.3125,-14.0625 h 1.32813 l -7.28125,14.0625 z m 5.78125,-3.625 q 0,-1.4375 0.71875,-2.42187 0.71875,-1 2.09375,-1 1.26563,0 2.07813,0.90625 0.82812,0.89062 0.82812,2.625 0,1.6875 -0.82812,2.60937 -0.82813,0.90625 -2.0625,0.90625 -1.21875,0 -2.03125,-0.90625 -0.79688,-0.90625 -0.79688,-2.71875 z m 2.85938,-2.29687 q -0.625,0 -1.03125,0.53125 -0.39063,0.53125 -0.39063,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51562 1.01563,0.51562 0.625,0 1.03125,-0.51562 0.40625,-0.53125 0.40625,-1.9375 0,-1.29688 -0.42188,-1.8125 -0.40625,-0.53125 -1.01562,-0.53125 z m 5.36964,5.4375 V 312.5475 h 5.01563 q 1.53125,0 2.45312,0.40625 0.92188,0.40625 1.4375,1.25 0.53125,0.84375 0.53125,1.76563 0,0.85937 -0.46875,1.625 -0.45312,0.75 -1.39062,1.20312 1.20312,0.35938 1.85937,1.21875 0.65625,0.85938 0.65625,2.01563 0,0.9375 -0.40625,1.75 -0.39062,0.79687 -0.98437,1.23437 -0.57813,0.4375 -1.45313,0.67188 -0.875,0.21875 -2.15625,0.21875 z m 1.78125,-7.75 h 2.875 q 1.1875,0 1.6875,-0.14063 0.67188,-0.20312 1.01563,-0.67187 0.34375,-0.46875 0.34375,-1.17188 0,-0.65625 -0.32813,-1.15625 -0.3125,-0.51562 -0.90625,-0.70312 -0.59375,-0.1875 -2.03125,-0.1875 h -2.65625 z m 0,6.17187 h 3.3125 q 0.85938,0 1.20313,-0.0625 0.60937,-0.10937 1.01562,-0.35937 0.42188,-0.26563 0.6875,-0.75 0.26563,-0.48438 0.26563,-1.125 0,-0.75 -0.39063,-1.29688 -0.375,-0.54687 -1.0625,-0.76562 -0.67187,-0.23438 -1.95312,-0.23438 h -3.07813 z m 24.34563,-6.28125 h -8.82812 v -1.51562 h 8.82812 z m 
0,4.0625 h -8.82812 v -1.53125 h 8.82812 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path201"
+ d="m 230.44314,325.90685 -3.90625,-9.67187 h 2.6875 l 1.82812,4.9375 0.53125,1.64062 q 0.20313,-0.625 0.26563,-0.82812 0.125,-0.40625 0.26562,-0.8125 l 1.84375,-4.9375 h 2.625 l -3.84375,9.67187 z m 7.71947,-10.98437 v -2.375 h 2.5625 v 2.375 z m 0,10.98437 v -9.67187 h 2.5625 v 9.67187 z m 10.77705,-3.07812 2.54688,0.42187 q -0.48438,1.40625 -1.54688,2.14063 -1.0625,0.73437 -2.65625,0.73437 -2.51562,0 -3.73437,-1.65625 -0.95313,-1.3125 -0.95313,-3.32812 0,-2.40625 1.25,-3.76563 1.26563,-1.35937 3.1875,-1.35937 2.15625,0 3.40625,1.42187 1.25,1.42188 1.1875,4.375 h -6.40625 q 0.0312,1.14063 0.60938,1.78125 0.59375,0.625 1.48437,0.625 0.59375,0 1,-0.32812 0.42188,-0.32813 0.625,-1.0625 z m 0.15625,-2.59375 q -0.0312,-1.10938 -0.57812,-1.6875 -0.54688,-0.57813 -1.32813,-0.57813 -0.84375,0 -1.39062,0.60938 -0.54688,0.60937 -0.53125,1.65625 z m 6.42261,5.67187 -3.0625,-9.67187 h 2.48437 l 1.8125,6.34375 1.67188,-6.34375 h 2.46875 l 1.60937,6.34375 1.85938,-6.34375 h 2.51562 l -3.10937,9.67187 h -2.45313 l -1.67187,-6.21875 -1.64063,6.21875 z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path203"
+ d="m 273.30699,325.90685 v -9.67187 h 1.46875 v 1.35937 q 0.45312,-0.71875 1.20312,-1.14062 0.76563,-0.4375 1.71875,-0.4375 1.07813,0 1.76563,0.45312 0.6875,0.4375 0.96875,1.23438 1.15625,-1.6875 2.98437,-1.6875 1.45313,0 2.21875,0.79687 0.78125,0.79688 0.78125,2.45313 v 6.64062 h -1.64062 v -6.09375 q 0,-0.98437 -0.15625,-1.40625 -0.15625,-0.4375 -0.57813,-0.70312 -0.42187,-0.26563 -0.98437,-0.26563 -1.01563,0 -1.6875,0.6875 -0.67188,0.67188 -0.67188,2.15625 v 5.625 h -1.64062 v -6.28125 q 0,-1.09375 -0.40625,-1.64062 -0.40625,-0.54688 -1.3125,-0.54688 -0.6875,0 -1.28125,0.35938 -0.59375,0.35937 -0.85938,1.0625 -0.25,0.70312 -0.25,2.03125 v 5.01562 z m 22.1658,-3.10937 1.6875,0.20312 q -0.40625,1.48438 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.29687 -1.23437,-1.3125 -1.23437,-3.67188 0,-2.45312 1.25,-3.79687 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.32812 1.23438,1.3125 1.23438,3.70313 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45312 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.48437 1.01562,-1.51562 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.82813 -0.78125,-0.95312 -2.03125,-0.95312 -1.125,0 -1.90625,0.76562 -0.76563,0.75 -0.84375,2.01563 z m 9.14132,5.76562 v -9.67187 h 1.46875 v 1.35937 q 0.45313,-0.71875 1.20313,-1.14062 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45312 0.6875,0.4375 0.96875,1.23438 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.79687 0.78125,0.79688 0.78125,2.45313 v 6.64062 h -1.64063 v -6.09375 q 0,-0.98437 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70312 -0.42188,-0.26563 -0.98438,-0.26563 -1.01562,0 -1.6875,0.6875 -0.67187,0.67188 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64062 -0.40625,-0.54688 -1.3125,-0.54688 -0.6875,0 -1.28125,0.35938 -0.59375,0.35937 -0.85937,1.0625 -0.25,0.70312 -0.25,2.03125 v 5.01562 z m 15.52518,0 v -9.67187 h 1.46875 v 1.46875 q 0.5625,-1.03125 1.03125,-1.35938 
0.48438,-0.32812 1.0625,-0.32812 0.82813,0 1.6875,0.53125 l -0.5625,1.51562 q -0.60937,-0.35937 -1.20312,-0.35937 -0.54688,0 -0.96875,0.32812 -0.42188,0.32813 -0.60938,0.89063 -0.28125,0.875 -0.28125,1.92187 v 5.0625 z m 12.8533,-3.10937 1.6875,0.20312 q -0.40625,1.48438 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.29687 -1.23437,-1.3125 -1.23437,-3.67188 0,-2.45312 1.25,-3.79687 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.32812 1.23438,1.3125 1.23438,3.70313 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45312 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.48437 1.01562,-1.51562 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.82813 -0.78125,-0.95312 -2.03125,-0.95312 -1.125,0 -1.90625,0.76562 -0.76563,0.75 -0.84375,2.01563 z m 9.53198,5.76562 v -8.40625 h -1.45313 v -1.26562 h 1.45313 v -1.03125 q 0,-0.96875 0.17187,-1.45313 0.23438,-0.64062 0.82813,-1.03125 0.59375,-0.39062 1.67187,-0.39062 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95312,-0.0937 -0.75,0 -1.0625,0.32812 -0.3125,0.3125 -0.3125,1.1875 v 0.89063 h 1.89062 v 1.26562 h -1.89062 v 8.40625 z m 4.57391,-5.84375 v -1.53125 l 8.84375,-3.73437 v 1.64062 l -7.01562,2.875 7.01562,2.90625 v 1.625 z m 19.26996,4.26563 v 1.57812 h -8.82812 q -0.0156,-0.59375 0.1875,-1.14062 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01563 2.17188,-1.78125 2.9375,-2.82812 0.76563,-1.04688 0.76563,-1.96875 0,-0.98438 -0.70313,-1.64063 -0.6875,-0.67187 -1.8125,-0.67187 -1.1875,0 -1.90625,0.71875 -0.70312,0.70312 -0.70312,1.95312 l -1.6875,-0.17187 q 0.17187,-1.89063 1.29687,-2.875 1.14063,-0.98438 3.03125,-0.98438 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64063 0,0.79687 -0.32813,1.57812 -0.32812,0.78125 -1.09375,1.64063 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23437 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 1.12573,1.57812 3.53125,-5.03125 -3.26562,-4.64062 h 2.04687 l 1.48438,2.26562 q 0.42187,0.64063 
0.67187,1.07813 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.54687 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 18.57813,-1.57812 v 1.57812 h -8.82813 q -0.0156,-0.59375 0.1875,-1.14062 0.34375,-0.90625 1.07813,-1.78125 0.75,-0.875 2.15625,-2.01563 2.17187,-1.78125 2.9375,-2.82812 0.76562,-1.04688 0.76562,-1.96875 0,-0.98438 -0.70312,-1.64063 -0.6875,-0.67187 -1.8125,-0.67187 -1.1875,0 -1.90625,0.71875 -0.70313,0.70312 -0.70313,1.95312 l -1.6875,-0.17187 q 0.17188,-1.89063 1.29688,-2.875 1.14062,-0.98438 3.03125,-0.98438 1.92187,0 3.04687,1.0625 1.125,1.0625 1.125,2.64063 0,0.79687 -0.32812,1.57812 -0.32813,0.78125 -1.09375,1.64063 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23437 -1.89063,1.6875 -0.42187,0.4375 -0.6875,0.875 z m 1.1257,1.57812 3.53125,-5.03125 -3.26562,-4.64062 h 2.04687 l 1.48438,2.26562 q 0.42187,0.64063 0.67187,1.07813 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.54687 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 10.8125,0 v -8.40625 h -1.45312 v -1.26562 h 1.45312 v -1.03125 q 0,-0.96875 0.17188,-1.45313 0.23437,-0.64062 0.82812,-1.03125 0.59375,-0.39062 1.67188,-0.39062 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95313,-0.0937 -0.75,0 -1.0625,0.32812 -0.3125,0.3125 -0.3125,1.1875 v 0.89063 h 1.89063 v 1.26562 h -1.89063 v 8.40625 z m 4.33957,-3.53125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.01563 0.6875,0.60937 1.65625,0.60937 1.15625,0 1.95312,-0.79687 0.79688,-0.79688 0.79688,-1.98438 0,-1.125 -0.73438,-1.85937 -0.73437,-0.73438 -1.875,-0.73438 -0.46875,0 -1.15625,0.17188 l 0.1875,-1.4375 q 0.15625,0.0156 0.26563,0.0156 1.04687,0 1.875,-0.54687 0.84375,-0.54688 0.84375,-1.67188 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.60938 -0.64063,0.59375 -0.8125,1.79687 l -1.64063,-0.29687 q 0.29688,-1.64063 1.35938,-2.54688 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 
0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85938 -0.46875,1.57813 -0.46875,0.70312 -1.375,1.125 1.1875,0.28125 1.84375,1.14062 0.65625,0.85938 0.65625,2.15625 0,1.73438 -1.28125,2.95313 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.04688 -1.15625,-1.04687 -1.32812,-2.71875 z m 18.98508,1.95313 v 1.57812 h -8.82813 q -0.0156,-0.59375 0.1875,-1.14062 0.34375,-0.90625 1.07813,-1.78125 0.75,-0.875 2.15625,-2.01563 2.17187,-1.78125 2.9375,-2.82812 0.76562,-1.04688 0.76562,-1.96875 0,-0.98438 -0.70312,-1.64063 -0.6875,-0.67187 -1.8125,-0.67187 -1.1875,0 -1.90625,0.71875 -0.70313,0.70312 -0.70313,1.95312 l -1.6875,-0.17187 q 0.17188,-1.89063 1.29688,-2.875 1.14062,-0.98438 3.03125,-0.98438 1.92187,0 3.04687,1.0625 1.125,1.0625 1.125,2.64063 0,0.79687 -0.32812,1.57812 -0.32813,0.78125 -1.09375,1.64063 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23437 -1.89063,1.6875 -0.42187,0.4375 -0.6875,0.875 z m 2.64135,1.57812 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64063 -1.15625,0.98438 l -0.45312,-0.70313 q 0.51562,-0.21875 0.76562,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 9.64786,0.23438 0.79688,-3.89063 h -1.54688 v -1.35937 h 1.8125 l 0.67188,-3.29688 h -2.48438 v -1.35937 h 2.76563 l 0.79687,-3.90625 h 1.35938 l -0.79688,3.90625 h 2.875 l 0.79688,-3.90625 h 1.375 l -0.79688,3.90625 h 1.57813 v 1.35937 h -1.84375 l -0.6875,3.29688 h 2.53125 v 1.35937 h -2.8125 l -0.78125,3.89063 h -1.375 l 0.78125,-3.89063 h -2.85938 l -0.78125,3.89063 z m 2.4375,-5.25 H 428.14 l 0.6875,-3.29688 h -2.875 z m 8.1882,5.01562 v -13.35937 h 1.64063 v 13.35937 z m 4.19172,0 v -9.67187 h 1.46875 v 1.35937 q 0.45312,-0.71875 1.20312,-1.14062 0.76563,-0.4375 1.71875,-0.4375 1.07813,0 1.76563,0.45312 0.6875,0.4375 0.96875,1.23438 1.15625,-1.6875 2.98437,-1.6875 1.45313,0 2.21875,0.79687 0.78125,0.79688 0.78125,2.45313 v 6.64062 h -1.64062 v -6.09375 q 0,-0.98437 -0.15625,-1.40625 -0.15625,-0.4375 -0.57813,-0.70312 -0.42187,-0.26563 -0.98437,-0.26563 
-1.01563,0 -1.6875,0.6875 -0.67188,0.67188 -0.67188,2.15625 v 5.625 h -1.64062 v -6.28125 q 0,-1.09375 -0.40625,-1.64062 -0.40625,-0.54688 -1.3125,-0.54688 -0.6875,0 -1.28125,0.35938 -0.59375,0.35937 -0.85938,1.0625 -0.25,0.70312 -0.25,2.03125 v 5.01562 z m 21.8533,-1.1875 q -0.92188,0.76563 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.98437 0,-0.71875 0.32812,-1.29688 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.35937 1.1875,-0.54687 0.46875,-0.125 1.45313,-0.25 1.98437,-0.23438 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.42188 0,-1 -0.46875,-1.42187 -0.625,-0.54688 -1.875,-0.54688 -1.15625,0 -1.70313,0.40625 -0.54687,0.40625 -0.8125,1.42188 l -1.60937,-0.21875 q 0.21875,-1.01563 0.71875,-1.64063 0.5,-0.64062 1.45312,-0.98437 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.29687 0.78125,0.28125 1.14062,0.73438 0.375,0.4375 0.51563,1.10937 0.0781,0.42188 0.0781,1.51563 v 2.1875 q 0,2.28125 0.10938,2.89062 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.51562 -0.32812,-1.1875 z m -0.14063,-3.67187 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.14062 -1.4375,0.32812 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.42187 1.09375,-1.14062 0.26562,-0.5625 0.26562,-1.64063 z m 4.20383,8.5625 v -13.375 h 1.48437 v 1.25 q 0.53125,-0.73438 1.1875,-1.09375 0.67188,-0.375 1.625,-0.375 1.23438,0 2.17188,0.64062 0.95312,0.625 1.4375,1.79688 0.48437,1.15625 0.48437,2.54687 0,1.48438 -0.53125,2.67188 -0.53125,1.1875 -1.54687,1.82812 -1.01563,0.625 -2.14063,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.70313 z m 1.48437,-8.48438 q 0,1.85938 0.75,2.76563 0.76563,0.89062 1.82813,0.89062 1.09375,0 1.875,-0.92187 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76563,-2.76563 -0.75,-0.92187 -1.8125,-0.92187 -1.04687,0 -1.85937,0.98437 -0.79688,0.96875 -0.79688,2.84375 z m 7.62574,4.78125 
5.125,-13.35937 h 1.90625 l 5.46875,13.35937 h -2.01563 l -1.54687,-4.04687 h -5.59375 l -1.46875,4.04687 z m 3.85937,-5.48437 h 4.53125 l -1.40625,-3.70313 q -0.625,-1.6875 -0.9375,-2.76562 -0.26562,1.28125 -0.71875,2.54687 z m 10.2717,5.48437 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64063 -1.15625,0.98438 l -0.45312,-0.70313 q 0.51562,-0.21875 0.76562,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 9.64786,0.23438 0.79687,-3.89063 h -1.54687 v -1.35937 h 1.8125 l 0.67187,-3.29688 h -2.48437 v -1.35937 h 2.76562 l 0.79688,-3.90625 h 1.35934 l -0.79684,3.90625 h 2.87497 l 0.79687,-3.90625 h 1.375 l -0.79687,3.90625 h 1.57812 v 1.35937 h -1.84375 l -0.6875,3.29688 h 2.53125 v 1.35937 h -2.8125 l -0.78125,3.89063 h -1.375 l 0.78125,-3.89063 h -2.85934 l -0.78125,3.89063 z m 2.4375,-5.25 h 2.85934 l 0.6875,-3.29688 h -2.87497 z m 8.23508,-6.45313 v -1.89062 h 1.64062 v 1.89062 z m 0,11.46875 v -9.67187 h 1.64062 v 9.67187 z m 4.14483,0 v -9.67187 h 1.46875 v 1.35937 q 0.45313,-0.71875 1.20313,-1.14062 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45312 0.6875,0.4375 0.96875,1.23438 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.79687 0.78125,0.79688 0.78125,2.45313 v 6.64062 h -1.64063 v -6.09375 q 0,-0.98437 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70312 -0.42188,-0.26563 -0.98438,-0.26563 -1.01562,0 -1.6875,0.6875 -0.67187,0.67188 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64062 -0.40625,-0.54688 -1.3125,-0.54688 -0.6875,0 -1.28125,0.35938 -0.59375,0.35937 -0.85937,1.0625 -0.25,0.70312 -0.25,2.03125 v 5.01562 z m 21.85327,-1.1875 q -0.92187,0.76563 -1.76562,1.09375 -0.82813,0.3125 -1.79688,0.3125 -1.59375,0 -2.45312,-0.78125 -0.85938,-0.78125 -0.85938,-1.98437 0,-0.71875 0.32813,-1.29688 0.32812,-0.59375 0.84375,-0.9375 0.53125,-0.35937 1.1875,-0.54687 0.46875,-0.125 1.45312,-0.25 1.98438,-0.23438 2.92188,-0.5625 0.0156,-0.34375 0.0156,-0.42188 0,-1 -0.46875,-1.42187 -0.625,-0.54688 -1.875,-0.54688 
-1.15625,0 -1.70312,0.40625 -0.54688,0.40625 -0.8125,1.42188 l -1.60938,-0.21875 q 0.21875,-1.01563 0.71875,-1.64063 0.5,-0.64062 1.45313,-0.98437 0.95312,-0.34375 2.1875,-0.34375 1.25,0 2.01562,0.29687 0.78125,0.28125 1.14063,0.73438 0.375,0.4375 0.51562,1.10937 0.0781,0.42188 0.0781,1.51563 v 2.1875 q 0,2.28125 0.10937,2.89062 0.10938,0.59375 0.40625,1.15625 h -1.70307 q -0.26563,-0.51562 -0.32813,-1.1875 z m -0.14062,-3.67187 q -0.89063,0.375 -2.67188,0.625 -1.01562,0.14062 -1.4375,0.32812 -0.42187,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45313,0.4375 0.9375,0 1.67187,-0.40625 0.75,-0.42187 1.09375,-1.14062 0.26563,-0.5625 0.26563,-1.64063 z m 4.20385,8.5625 v -13.375 h 1.48438 v 1.25 q 0.53125,-0.73438 1.1875,-1.09375 0.67187,-0.375 1.625,-0.375 1.23437,0 2.17187,0.64062 0.95313,0.625 1.4375,1.79688 0.48438,1.15625 0.48438,2.54687 0,1.48438 -0.53125,2.67188 -0.53125,1.1875 -1.54688,1.82812 -1.01562,0.625 -2.14062,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.70313 z m 1.48438,-8.48438 q 0,1.85938 0.75,2.76563 0.76562,0.89062 1.82812,0.89062 1.09375,0 1.875,-0.92187 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76562,-2.76563 -0.75,-0.92187 -1.8125,-0.92187 -1.04688,0 -1.85938,0.98437 -0.79687,0.96875 -0.79687,2.84375 z m 9.01636,4.78125 v -13.35937 h 5.01562 q 1.53125,0 2.45313,0.40625 0.92187,0.40625 1.4375,1.25 0.53125,0.84375 0.53125,1.76562 0,0.85938 -0.46875,1.625 -0.45313,0.75 -1.39063,1.20313 1.20313,0.35937 1.85938,1.21875 0.65625,0.85937 0.65625,2.01562 0,0.9375 -0.40625,1.75 -0.39063,0.79688 -0.98438,1.23438 -0.57812,0.4375 -1.45312,0.67187 -0.875,0.21875 -2.15625,0.21875 z m 1.78125,-7.75 h 2.875 q 1.1875,0 1.6875,-0.14062 0.67187,-0.20313 1.01562,-0.67188 0.34375,-0.46875 0.34375,-1.17187 0,-0.65625 -0.32812,-1.15625 -0.3125,-0.51563 -0.90625,-0.70313 -0.59375,-0.1875 -2.03125,-0.1875 h -2.65625 z m 0,6.17188 h 3.3125 q 0.85937,0 1.20312,-0.0625 0.60938,-0.10938 
1.01563,-0.35938 0.42187,-0.26562 0.6875,-0.75 0.26562,-0.48437 0.26562,-1.125 0,-0.75 -0.39062,-1.29687 -0.375,-0.54688 -1.0625,-0.76563 -0.67188,-0.23437 -1.95313,-0.23437 h -3.07812 z m 18.6936,0 v 1.57812 h -8.82812 q -0.0156,-0.59375 0.1875,-1.14062 0.34375,-0.90625 1.07812,-1.78125 0.75,-0.875 2.15625,-2.01563 2.17188,-1.78125 2.9375,-2.82812 0.76563,-1.04688 0.76563,-1.96875 0,-0.98438 -0.70313,-1.64063 -0.6875,-0.67187 -1.8125,-0.67187 -1.1875,0 -1.90625,0.71875 -0.70312,0.70312 -0.70312,1.95312 l -1.6875,-0.17187 q 0.17187,-1.89063 1.29687,-2.875 1.14063,-0.98438 3.03125,-0.98438 1.92188,0 3.04688,1.0625 1.125,1.0625 1.125,2.64063 0,0.79687 -0.32813,1.57812 -0.32812,0.78125 -1.09375,1.64063 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23437 -1.89062,1.6875 -0.42188,0.4375 -0.6875,0.875 z m 0.9538,1.57812 5.125,-13.35937 h 1.90625 l 5.46875,13.35937 h -2.01563 l -1.54687,-4.04687 h -5.59375 l -1.46875,4.04687 z m 3.85937,-5.48437 H 577.52 l -1.40625,-3.70313 q -0.625,-1.6875 -0.9375,-2.76562 -0.26562,1.28125 -0.71875,2.54687 z m 10.27173,5.48437 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64063 -1.15625,0.98438 l -0.45312,-0.70313 q 0.51562,-0.21875 0.76562,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 12.6322,0 -3.6875,-9.67187 h 1.73438 l 2.07812,5.79687 q 0.32813,0.9375 0.625,1.9375 0.20313,-0.76562 0.60938,-1.82812 l 2.14062,-5.90625 h 1.6875 l -3.65625,9.67187 z m 6.64063,0 v -9.67187 h 1.46875 v 1.35937 q 0.45312,-0.71875 1.20312,-1.14062 0.76563,-0.4375 1.71875,-0.4375 1.07813,0 1.76563,0.45312 0.6875,0.4375 0.96875,1.23438 1.15625,-1.6875 2.98437,-1.6875 1.45313,0 2.21875,0.79687 0.78125,0.79688 0.78125,2.45313 v 6.64062 h -1.64062 v -6.09375 q 0,-0.98437 -0.15625,-1.40625 -0.15625,-0.4375 -0.57813,-0.70312 -0.42187,-0.26563 -0.98437,-0.26563 -1.01563,0 -1.6875,0.6875 -0.67188,0.67188 -0.67188,2.15625 v 5.625 h -1.64062 v -6.28125 q 0,-1.09375 -0.40625,-1.64062 -0.40625,-0.54688 -1.3125,-0.54688 -0.6875,0 -1.28125,0.35938 
-0.59375,0.35937 -0.85938,1.0625 -0.25,0.70312 -0.25,2.03125 v 5.01562 z m 22.16583,-3.10937 1.6875,0.20312 q -0.40625,1.48438 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.29687 -1.23437,-1.3125 -1.23437,-3.67188 0,-2.45312 1.25,-3.79687 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.32812 1.23438,1.3125 1.23438,3.70313 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45312 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.48437 1.01562,-1.51562 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.82813 -0.78125,-0.95312 -2.03125,-0.95312 -1.125,0 -1.90625,0.76562 -0.76563,0.75 -0.84375,2.01563 z m 9.14129,5.76562 v -9.67187 h 1.46875 v 1.35937 q 0.45313,-0.71875 1.20313,-1.14062 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45312 0.6875,0.4375 0.96875,1.23438 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.79687 0.78125,0.79688 0.78125,2.45313 v 6.64062 h -1.64063 v -6.09375 q 0,-0.98437 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70312 -0.42188,-0.26563 -0.98438,-0.26563 -1.01562,0 -1.6875,0.6875 -0.67187,0.67188 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64062 -0.40625,-0.54688 -1.3125,-0.54688 -0.6875,0 -1.28125,0.35938 -0.59375,0.35937 -0.85937,1.0625 -0.25,0.70312 -0.25,2.03125 v 5.01562 z m 24.16583,-5.84375 -8.84375,3.78125 v -1.625 l 7.01563,-2.90625 -7.01563,-2.875 v -1.64062 l 8.84375,3.73437 z m 10.57825,9.76563 q -1.35937,-1.70313 -2.29687,-4 -0.9375,-2.29688 -0.9375,-4.76563 0,-2.15625 0.70312,-4.14062 0.82813,-2.3125 2.53125,-4.59375 h 1.17188 q -1.09375,1.89062 -1.45313,2.70312 -0.54687,1.25 -0.875,2.625 -0.39062,1.70313 -0.39062,3.42188 0,4.375 2.71875,8.75 z m 4.16583,0 h -1.1875 q 2.73438,-4.375 2.73438,-8.75 0,-1.71875 -0.39063,-3.39063 -0.3125,-1.375 -0.875,-2.625 -0.35937,-0.82812 -1.46875,-2.73437 h 1.1875 q 1.70313,2.28125 2.53125,4.59375 0.6875,1.98437 0.6875,4.14062 0,2.46875 -0.9375,4.76563 -0.9375,2.29687 -2.28125,4 z m 
10.34906,-0.21875 v -17.0625 h 3.60938 v 1.35937 h -1.96875 v 14.34375 h 1.96875 v 1.35938 z m 4.99579,-13.84375 q 0,-1.4375 0.71875,-2.4375 0.71875,-1 2.09375,-1 1.25,0 2.07813,0.90625 0.82812,0.89062 0.82812,2.625 0,1.6875 -0.84375,2.60937 -0.82812,0.92188 -2.04687,0.92188 -1.20313,0 -2.01563,-0.90625 -0.8125,-0.90625 -0.8125,-2.71875 z m 2.85938,-2.3125 q -0.60938,0 -1.01563,0.53125 -0.40625,0.53125 -0.40625,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51562 1.01563,0.51562 0.625,0 1.01562,-0.51562 0.40625,-0.53125 0.40625,-1.9375 0,-1.29688 -0.40625,-1.8125 -0.40625,-0.53125 -1.01562,-0.53125 z m 0,12.9375 7.3125,-14.0625 h 1.32812 l -7.28125,14.0625 z m 5.78125,-3.625 q 0,-1.4375 0.71875,-2.42188 0.71875,-1 2.09375,-1 1.26562,0 2.07812,0.90625 0.82813,0.89063 0.82813,2.625 0,1.6875 -0.82813,2.60938 -0.82812,0.90625 -2.0625,0.90625 -1.21875,0 -2.03125,-0.90625 -0.79687,-0.90625 -0.79687,-2.71875 z m 2.85937,-2.29688 q -0.625,0 -1.03125,0.53125 -0.39062,0.53125 -0.39062,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51563 1.01562,0.51563 0.625,0 1.03125,-0.51563 0.40625,-0.53125 0.40625,-1.9375 0,-1.29687 -0.42187,-1.8125 -0.40625,-0.53125 -1.01563,-0.53125 z m 10.96338,5.4375 h -1.64062 v -10.45312 q -0.59375,0.5625 -1.5625,1.14062 -0.95313,0.5625 -1.71875,0.84375 v -1.59375 q 1.375,-0.64062 2.40625,-1.5625 1.03125,-0.92187 1.45312,-1.78125 h 1.0625 z m 5.07886,0 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35938,0.64063 -1.15625,0.98438 l -0.45313,-0.70313 q 0.51563,-0.21875 0.76563,-0.67187 0.25,-0.4375 0.28125,-1.26563 z m 9.78845,-10.14062 q 0,-1.4375 0.71875,-2.4375 0.71875,-1 2.09375,-1 1.25,0 2.07813,0.90625 0.82812,0.89062 0.82812,2.625 0,1.6875 -0.84375,2.60937 -0.82812,0.92188 -2.04687,0.92188 -1.20313,0 -2.01563,-0.90625 -0.8125,-0.90625 -0.8125,-2.71875 z m 2.85938,-2.3125 q -0.60938,0 -1.01563,0.53125 -0.40625,0.53125 -0.40625,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51562 1.01563,0.51562 0.625,0 1.01562,-0.51562 0.40625,-0.53125 0.40625,-1.9375 
0,-1.29688 -0.40625,-1.8125 -0.40625,-0.53125 -1.01562,-0.53125 z m 0,12.9375 7.3125,-14.0625 h 1.32812 l -7.28125,14.0625 z m 5.78125,-3.625 q 0,-1.4375 0.71875,-2.42188 0.71875,-1 2.09375,-1 1.26562,0 2.07812,0.90625 0.82813,0.89063 0.82813,2.625 0,1.6875 -0.82813,2.60938 -0.82812,0.90625 -2.0625,0.90625 -1.21875,0 -2.03125,-0.90625 -0.79687,-0.90625 -0.79687,-2.71875 z m 2.85937,-2.29688 q -0.625,0 -1.03125,0.53125 -0.39062,0.53125 -0.39062,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51563 1.01562,0.51563 0.625,0 1.03125,-0.51563 0.40625,-0.53125 0.40625,-1.9375 0,-1.29687 -0.42187,-1.8125 -0.40625,-0.53125 -1.01563,-0.53125 z m 4.90088,-6.17187 v -1.57813 h 8.64063 v 1.28125 q -1.28125,1.35938 -2.53125,3.60938 -1.25,2.25 -1.9375,4.625 -0.48438,1.67187 -0.625,3.67187 h -1.6875 q 0.0312,-1.57812 0.625,-3.8125 0.59375,-2.23437 1.6875,-4.29687 1.10937,-2.07813 2.35937,-3.5 z m 13.45386,15.3125 h -3.60938 v -1.35938 h 1.96875 v -14.34375 h -1.96875 v -1.35937 H 749.89 Z" />
+ <path
+ style="fill:#000000;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path205"
+ d="m 203.14425,337.76625 q 0,-1.4375 0.71875,-2.4375 0.71875,-1 2.09375,-1 1.25,0 2.07812,0.90625 0.82813,0.89063 0.82813,2.625 0,1.6875 -0.84375,2.60938 -0.82813,0.92187 -2.04688,0.92187 -1.20312,0 -2.01562,-0.90625 -0.8125,-0.90625 -0.8125,-2.71875 z m 2.85937,-2.3125 q -0.60937,0 -1.01562,0.53125 -0.40625,0.53125 -0.40625,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51563 1.01562,0.51563 0.625,0 1.01563,-0.51563 0.40625,-0.53125 0.40625,-1.9375 0,-1.29687 -0.40625,-1.8125 -0.40625,-0.53125 -1.01563,-0.53125 z m 0,12.9375 7.3125,-14.0625 h 1.32813 l -7.28125,14.0625 z m 5.78125,-3.625 q 0,-1.4375 0.71875,-2.42187 0.71875,-1 2.09375,-1 1.26563,0 2.07813,0.90625 0.82812,0.89062 0.82812,2.625 0,1.6875 -0.82812,2.60937 -0.82813,0.90625 -2.0625,0.90625 -1.21875,0 -2.03125,-0.90625 -0.79688,-0.90625 -0.79688,-2.71875 z m 2.85938,-2.29687 q -0.625,0 -1.03125,0.53125 -0.39063,0.53125 -0.39063,1.9375 0,1.28125 0.40625,1.8125 0.40625,0.51562 1.01563,0.51562 0.625,0 1.03125,-0.51562 0.40625,-0.53125 0.40625,-1.9375 0,-1.29688 -0.42188,-1.8125 -0.40625,-0.53125 -1.01562,-0.53125 z m 3.97902,5.4375 5.125,-13.35938 h 1.90625 l 5.46875,13.35938 h -2.01563 L 227.56077,343.86 h -5.59375 l -1.46875,4.04688 z m 3.85937,-5.48438 h 4.53125 l -1.40625,-3.70312 q -0.625,-1.6875 -0.9375,-2.76563 -0.26562,1.28125 -0.71875,2.54688 z m 15.48626,-2.32812 V 338.235 h 1.85937 v 1.85938 z m 0,7.8125 v -1.875 h 1.85937 v 1.875 z m 9.91348,0 V 338.235 h 1.46875 v 1.35938 q 0.45312,-0.71875 1.20312,-1.14063 0.76563,-0.4375 1.71875,-0.4375 1.07813,0 1.76563,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98437,-1.6875 1.45313,0 2.21875,0.79688 0.78125,0.79687 0.78125,2.45312 v 6.64063 h -1.64062 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57813,-0.70313 -0.42187,-0.26562 -0.98437,-0.26562 -1.01563,0 -1.6875,0.6875 -0.67188,0.67187 -0.67188,2.15625 v 5.625 h -1.64062 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 
-0.59375,0.35938 -0.85938,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 22.1658,-3.10938 1.6875,0.20313 q -0.40625,1.48437 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.29688 -1.23437,-1.3125 -1.23437,-3.67187 0,-2.45313 1.25,-3.79688 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.32813 1.23438,1.3125 1.23438,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.48438 1.01562,-1.51563 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76563,0.75 -0.84375,2.01562 z m 9.14135,5.76563 V 338.235 h 1.46875 v 1.35938 q 0.45313,-0.71875 1.20313,-1.14063 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.79688 0.78125,0.79687 0.78125,2.45312 v 6.64063 h -1.64063 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70313 -0.42188,-0.26562 -0.98438,-0.26562 -1.01562,0 -1.6875,0.6875 -0.67187,0.67187 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 -0.59375,0.35938 -0.85937,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 15.52518,0 V 338.235 h 1.46875 v 1.46875 q 0.5625,-1.03125 1.03125,-1.35937 0.48438,-0.32813 1.0625,-0.32813 0.82813,0 1.6875,0.53125 l -0.5625,1.51563 q -0.60937,-0.35938 -1.20312,-0.35938 -0.54688,0 -0.96875,0.32813 -0.42188,0.32812 -0.60938,0.89062 -0.28125,0.875 -0.28125,1.92188 v 5.0625 z m 12.8533,-3.10938 1.6875,0.20313 q -0.40625,1.48437 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.29688 -1.23437,-1.3125 -1.23437,-3.67187 0,-2.45313 1.25,-3.79688 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.32813 1.23438,1.3125 1.23438,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01562,0.84375 0.90625,0 
1.54688,-0.46875 0.64062,-0.48438 1.01562,-1.51563 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76563,0.75 -0.84375,2.01562 z m 9.53195,5.76563 v -8.40625 h -1.45313 V 338.235 h 1.45313 v -1.03125 q 0,-0.96875 0.17187,-1.45312 0.23438,-0.64063 0.82813,-1.03125 0.59375,-0.39063 1.67187,-0.39063 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95312,-0.0937 -0.75,0 -1.0625,0.32813 -0.3125,0.3125 -0.3125,1.1875 v 0.89062 h 1.89062 v 1.26563 h -1.89062 v 8.40625 z m 4.57394,-5.84375 v -1.53125 l 8.84375,-3.73438 v 1.64063 l -7.01562,2.875 7.01562,2.90625 v 1.625 z m 10.66059,2.3125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95312,-0.79688 0.79688,-0.79687 0.79688,-1.98437 0,-1.125 -0.73438,-1.85938 -0.73437,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26563,0.0156 1.04687,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.60937 -0.64063,0.59375 -0.8125,1.79688 l -1.64063,-0.29688 q 0.29688,-1.64062 1.35938,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.04687 -1.15625,-1.04688 -1.32812,-2.71875 z m 9.73507,3.53125 3.53125,-5.03125 -3.26562,-4.64063 h 2.04687 l 1.48438,2.26563 q 0.42187,0.64062 0.67187,1.07812 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.54688 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 9.96875,-3.53125 1.64063,-0.21875 q 0.28125,1.40625 0.95312,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95313,-0.79688 0.79687,-0.79687 0.79687,-1.98437 0,-1.125 
-0.73437,-1.85938 -0.73438,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26562,0.0156 1.04688,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60937,-1.5 -0.60938,-0.59375 -1.57813,-0.59375 -0.95312,0 -1.59375,0.60937 -0.64062,0.59375 -0.8125,1.79688 l -1.64062,-0.29688 q 0.29687,-1.64062 1.35937,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92188,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 -0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26562,1.21875 -3.21875,1.21875 -1.76562,0 -2.92187,-1.04687 -1.15625,-1.04688 -1.32813,-2.71875 z m 9.73511,3.53125 3.53125,-5.03125 -3.26562,-4.64063 h 2.04687 l 1.48438,2.26563 q 0.42187,0.64062 0.67187,1.07812 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.54688 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 10.8125,0 v -8.40625 h -1.45312 V 338.235 h 1.45312 v -1.03125 q 0,-0.96875 0.17188,-1.45312 0.23437,-0.64063 0.82812,-1.03125 0.59375,-0.39063 1.67188,-0.39063 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.0937 -0.95313,-0.0937 -0.75,0 -1.0625,0.32813 -0.3125,0.3125 -0.3125,1.1875 v 0.89062 h 1.89063 v 1.26563 h -1.89063 v 8.40625 z m 4.33954,-3.53125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.01562 0.6875,0.60938 1.65625,0.60938 1.15625,0 1.95312,-0.79688 0.79688,-0.79687 0.79688,-1.98437 0,-1.125 -0.73438,-1.85938 -0.73437,-0.73437 -1.875,-0.73437 -0.46875,0 -1.15625,0.17187 l 0.1875,-1.4375 q 0.15625,0.0156 0.26563,0.0156 1.04687,0 1.875,-0.54688 0.84375,-0.54687 0.84375,-1.67187 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.60937 -0.64063,0.59375 -0.8125,1.79688 l -1.64063,-0.29688 q 0.29688,-1.64062 1.35938,-2.54687 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.85937 
-0.46875,1.57812 -0.46875,0.70313 -1.375,1.125 1.1875,0.28125 1.84375,1.14063 0.65625,0.85937 0.65625,2.15625 0,1.73437 -1.28125,2.95312 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.04687 -1.15625,-1.04688 -1.32812,-2.71875 z m 18.98511,1.95312 v 1.57813 h -8.82813 q -0.0156,-0.59375 0.1875,-1.14063 0.34375,-0.90625 1.07813,-1.78125 0.75,-0.875 2.15625,-2.01562 2.17187,-1.78125 2.9375,-2.82813 0.76562,-1.04687 0.76562,-1.96875 0,-0.98437 -0.70312,-1.64062 -0.6875,-0.67188 -1.8125,-0.67188 -1.1875,0 -1.90625,0.71875 -0.70313,0.70313 -0.70313,1.95313 L 376.6137,338.36 q 0.17188,-1.89062 1.29688,-2.875 1.14062,-0.98437 3.03125,-0.98437 1.92187,0 3.04687,1.0625 1.125,1.0625 1.125,2.64062 0,0.79688 -0.32812,1.57813 -0.32813,0.78125 -1.09375,1.64062 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.23438 -1.89063,1.6875 -0.42187,0.4375 -0.6875,0.875 z m 2.64132,1.57813 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35937,0.64062 -1.15625,0.98437 l -0.45312,-0.70312 q 0.51562,-0.21875 0.76562,-0.67188 0.25,-0.4375 0.28125,-1.26562 z m 9.64786,0.23437 0.79688,-3.89062 h -1.54688 v -1.35938 h 1.8125 l 0.67188,-3.29687 h -2.48438 V 338.235 h 2.76563 l 0.79687,-3.90625 h 1.35938 l -0.79688,3.90625 h 2.875 l 0.79688,-3.90625 h 1.375 l -0.79688,3.90625 h 1.57813 v 1.35938 h -1.84375 l -0.6875,3.29687 h 2.53125 v 1.35938 h -2.8125 l -0.78125,3.89062 h -1.375 l 0.78125,-3.89062 h -2.85938 l -0.78125,3.89062 z m 2.4375,-5.25 h 2.85938 l 0.6875,-3.29687 h -2.875 z m 8.18823,5.01563 V 334.5475 h 1.64063 v 13.35938 z m 4.19168,0 V 338.235 h 1.46875 v 1.35938 q 0.45313,-0.71875 1.20313,-1.14063 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.79688 0.78125,0.79687 0.78125,2.45312 v 6.64063 h -1.64063 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70313 -0.42188,-0.26562 -0.98438,-0.26562 -1.01562,0 -1.6875,0.6875 -0.67187,0.67187 -0.67187,2.15625 v 5.625 h 
-1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 -0.59375,0.35938 -0.85937,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 21.85334,-1.1875 q -0.92188,0.76562 -1.76563,1.09375 -0.82812,0.3125 -1.79687,0.3125 -1.59375,0 -2.45313,-0.78125 -0.85937,-0.78125 -0.85937,-1.98438 0,-0.71875 0.32812,-1.29687 0.32813,-0.59375 0.84375,-0.9375 0.53125,-0.35938 1.1875,-0.54688 0.46875,-0.125 1.45313,-0.25 1.98437,-0.23437 2.92187,-0.5625 0.0156,-0.34375 0.0156,-0.42187 0,-1 -0.46875,-1.42188 -0.625,-0.54687 -1.875,-0.54687 -1.15625,0 -1.70313,0.40625 -0.54687,0.40625 -0.8125,1.42187 l -1.60937,-0.21875 q 0.21875,-1.01562 0.71875,-1.64062 0.5,-0.64063 1.45312,-0.98438 0.95313,-0.34375 2.1875,-0.34375 1.25,0 2.01563,0.29688 0.78125,0.28125 1.14062,0.73437 0.375,0.4375 0.51563,1.10938 0.0781,0.42187 0.0781,1.51562 v 2.1875 q 0,2.28125 0.10938,2.89063 0.10937,0.59375 0.40625,1.15625 h -1.70313 q -0.26562,-0.51563 -0.32812,-1.1875 z m -0.14063,-3.67188 q -0.89062,0.375 -2.67187,0.625 -1.01563,0.14063 -1.4375,0.32813 -0.42188,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45312,0.4375 0.9375,0 1.67188,-0.40625 0.75,-0.42188 1.09375,-1.14063 0.26562,-0.5625 0.26562,-1.64062 z m 4.20383,8.5625 v -13.375 h 1.48437 v 1.25 q 0.53125,-0.73437 1.1875,-1.09375 0.67188,-0.375 1.625,-0.375 1.23438,0 2.17188,0.64063 0.95312,0.625 1.4375,1.79687 0.48437,1.15625 0.48437,2.54688 0,1.48437 -0.53125,2.67187 -0.53125,1.1875 -1.54687,1.82813 -1.01563,0.625 -2.14063,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 V 351.61 Z m 1.48437,-8.48437 q 0,1.85937 0.75,2.76562 0.76563,0.89063 1.82813,0.89063 1.09375,0 1.875,-0.92188 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76563,-2.76562 -0.75,-0.92188 -1.8125,-0.92188 -1.04687,0 -1.85937,0.98438 -0.79688,0.96875 -0.79688,2.84375 z m 7.62571,4.78125 5.125,-13.35938 h 1.90625 l 5.46875,13.35938 h -2.01563 L 456.20004,343.86 h 
-5.59375 l -1.46875,4.04688 z m 3.85937,-5.48438 h 4.53125 l -1.40625,-3.70312 q -0.625,-1.6875 -0.9375,-2.76563 -0.26562,1.28125 -0.71875,2.54688 z m 10.2717,5.48438 v -1.875 h 1.875 v 1.875 q 0,1.03125 -0.375,1.65625 -0.35938,0.64062 -1.15625,0.98437 l -0.45313,-0.70312 q 0.51563,-0.21875 0.76563,-0.67188 0.25,-0.4375 0.28125,-1.26562 z m 12.63223,0 -3.6875,-9.67188 h 1.73438 l 2.07812,5.79688 q 0.32813,0.9375 0.625,1.9375 0.20313,-0.76563 0.60938,-1.82813 l 2.14062,-5.90625 h 1.6875 l -3.65625,9.67188 z m 6.64063,0 V 338.235 h 1.46875 v 1.35938 q 0.45312,-0.71875 1.20312,-1.14063 0.76563,-0.4375 1.71875,-0.4375 1.07813,0 1.76563,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98437,-1.6875 1.45313,0 2.21875,0.79688 0.78125,0.79687 0.78125,2.45312 v 6.64063 h -1.64062 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57813,-0.70313 -0.42187,-0.26562 -0.98437,-0.26562 -1.01563,0 -1.6875,0.6875 -0.67188,0.67187 -0.67188,2.15625 v 5.625 h -1.64062 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 -0.59375,0.35938 -0.85938,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 22.1658,-3.10938 1.6875,0.20313 q -0.40625,1.48437 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.29688 -1.23438,-1.3125 -1.23438,-3.67187 0,-2.45313 1.25,-3.79688 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.32813 1.23437,1.3125 1.23437,3.70312 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.45313 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.48438 1.01563,-1.51563 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.82812 -0.78125,-0.95313 -2.03125,-0.95313 -1.125,0 -1.90625,0.76563 -0.76562,0.75 -0.84375,2.01562 z m 9.1413,5.76563 V 338.235 h 1.46875 v 1.35938 q 0.45313,-0.71875 1.20313,-1.14063 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.45313 0.6875,0.4375 0.96875,1.23437 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.79688 0.78125,0.79687 
0.78125,2.45312 v 6.64063 h -1.64063 v -6.09375 q 0,-0.98438 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.70313 -0.42188,-0.26562 -0.98438,-0.26562 -1.01562,0 -1.6875,0.6875 -0.67187,0.67187 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.64063 -0.40625,-0.54687 -1.3125,-0.54687 -0.6875,0 -1.28125,0.35937 -0.59375,0.35938 -0.85937,1.0625 -0.25,0.70313 -0.25,2.03125 v 5.01563 z m 24.16583,-5.84375 -8.84375,3.78125 v -1.625 l 7.01563,-2.90625 -7.01563,-2.875 v -1.64063 l 8.84375,3.73438 z" />
+ <path
+ style="fill:#000000;fill-opacity:0;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path207"
+ d="M 245.88976,0 H 586.86614 V 16.283463 H 245.88976 Z" />
+ <path
+ style="fill:#434343;fill-rule:nonzero"
+ inkscape:connector-curvature="0"
+ id="path209"
+ d="M 256.32726,25.20346 V 11.844085 h 5.04688 q 1.32812,0 2.03125,0.125 0.96875,0.171875 1.64062,0.640625 0.67188,0.453125 1.07813,1.28125 0.40625,0.828125 0.40625,1.828125 0,1.703125 -1.09375,2.890625 -1.07813,1.171875 -3.92188,1.171875 h -3.42187 v 5.421875 z m 1.76563,-7 h 3.45312 q 1.71875,0 2.4375,-0.640625 0.71875,-0.640625 0.71875,-1.796875 0,-0.84375 -0.42187,-1.4375 -0.42188,-0.59375 -1.125,-0.78125 -0.4375,-0.125 -1.64063,-0.125 h -3.42187 z m 10.47482,7 V 11.844085 h 1.64062 v 4.796875 q 1.14063,-1.328125 2.89063,-1.328125 1.07812,0 1.85937,0.421875 0.79688,0.421875 1.14063,1.171875 0.34375,0.75 0.34375,2.171875 v 6.125 h -1.64063 v -6.125 q 0,-1.234375 -0.53125,-1.796875 -0.53125,-0.5625 -1.51562,-0.5625 -0.71875,0 -1.35938,0.390625 -0.64062,0.375 -0.92187,1.015625 -0.26563,0.640625 -0.26563,1.78125 v 5.296875 z m 10.2976,3.71875 -0.1875,-1.53125 q 0.54688,0.140625 0.9375,0.140625 0.54688,0 0.875,-0.1875 0.32813,-0.171875 0.54688,-0.5 0.15625,-0.25 0.5,-1.21875 0.0469,-0.140625 0.14062,-0.40625 l -3.67187,-9.6875 h 1.76562 l 2.01563,5.59375 q 0.39062,1.078125 0.70312,2.25 0.28125,-1.125 0.67188,-2.203125 l 2.07812,-5.640625 h 1.64063 l -3.6875,9.828125 q -0.59375,1.609375 -0.92188,2.203125 -0.4375,0.8125 -1,1.1875 -0.5625,0.375 -1.34375,0.375 -0.48437,0 -1.0625,-0.203125 z m 8.75,-6.609375 1.625,-0.25 q 0.125,0.96875 0.75,1.5 0.625,0.515625 1.75,0.515625 1.125,0 1.67188,-0.453125 0.54687,-0.46875 0.54687,-1.09375 0,-0.546875 -0.48437,-0.875 -0.32813,-0.21875 -1.67188,-0.546875 -1.8125,-0.46875 -2.51562,-0.796875 -0.6875,-0.328125 -1.04688,-0.90625 -0.35937,-0.59375 -0.35937,-1.3125 0,-0.640625 0.29687,-1.1875 0.29688,-0.5625 0.8125,-0.921875 0.375,-0.28125 1.03125,-0.46875 0.67188,-0.203125 1.42188,-0.203125 1.14062,0 2,0.328125 0.85937,0.328125 1.26562,0.890625 0.42188,0.5625 0.57813,1.5 l -1.60938,0.21875 q -0.10937,-0.75 -0.64062,-1.171875 -0.51563,-0.421875 -1.46875,-0.421875 -1.14063,0 -1.625,0.375 -0.46875,0.375 -0.46875,0.875 0,0.3125 
0.1875,0.578125 0.20312,0.265625 0.64062,0.4375 0.23438,0.09375 1.4375,0.421875 1.75,0.453125 2.4375,0.75 0.6875,0.296875 1.07813,0.859375 0.39062,0.5625 0.39062,1.40625 0,0.828125 -0.48437,1.546875 -0.46875,0.71875 -1.375,1.125 -0.90625,0.390625 -2.04688,0.390625 -1.875,0 -2.875,-0.78125 -0.98437,-0.78125 -1.25,-2.328125 z m 9.98438,-8.578125 v -1.890625 h 1.64062 v 1.890625 z m 0,11.46875 v -9.671875 h 1.64062 v 9.671875 z m 10.45731,-3.546875 1.60937,0.21875 q -0.26562,1.65625 -1.35937,2.609375 -1.07813,0.9375 -2.67188,0.9375 -1.98437,0 -3.1875,-1.296875 -1.20312,-1.296875 -1.20312,-3.71875 0,-1.578125 0.51562,-2.75 0.51563,-1.171875 1.57813,-1.75 1.0625,-0.59375 2.3125,-0.59375 1.57812,0 2.57812,0.796875 1,0.796875 1.28125,2.265625 l -1.59375,0.234375 q -0.23437,-0.96875 -0.8125,-1.453125 -0.57812,-0.5 -1.39062,-0.5 -1.23438,0 -2.01563,0.890625 -0.78125,0.890625 -0.78125,2.8125 0,1.953125 0.75,2.84375 0.75,0.875 1.95313,0.875 0.96875,0 1.60937,-0.59375 0.65625,-0.59375 0.82813,-1.828125 z m 9.32812,2.359375 q -0.92187,0.765625 -1.76562,1.09375 -0.82813,0.3125 -1.79688,0.3125 -1.59375,0 -2.45312,-0.78125 -0.85938,-0.78125 -0.85938,-1.984375 0,-0.71875 0.32813,-1.296875 0.32812,-0.59375 0.84375,-0.9375 0.53125,-0.359375 1.1875,-0.546875 0.46875,-0.125 1.45312,-0.25 1.98438,-0.234375 2.92188,-0.5625 0.0156,-0.34375 0.0156,-0.421875 0,-1 -0.46875,-1.421875 -0.625,-0.546875 -1.875,-0.546875 -1.15625,0 -1.70312,0.40625 -0.54688,0.40625 -0.8125,1.421875 l -1.60938,-0.21875 q 0.21875,-1.015625 0.71875,-1.640625 0.5,-0.640625 1.45313,-0.984375 0.95312,-0.34375 2.1875,-0.34375 1.25,0 2.01562,0.296875 0.78125,0.28125 1.14063,0.734375 0.375,0.4375 0.51562,1.109375 0.0781,0.421875 0.0781,1.515625 v 2.1875 q 0,2.28125 0.10937,2.890625 0.10938,0.59375 0.40625,1.15625 h -1.70312 q -0.26563,-0.515625 -0.32813,-1.1875 z m -0.14062,-3.671875 q -0.89063,0.375 -2.67188,0.625 -1.01562,0.140625 -1.4375,0.328125 -0.42187,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 
0,0.65625 0.5,1.09375 0.5,0.4375 1.45313,0.4375 0.9375,0 1.67187,-0.40625 0.75,-0.421875 1.09375,-1.140625 0.26563,-0.5625 0.26563,-1.640625 z m 4.15698,4.859375 V 11.844085 h 1.64062 V 25.20346 Z m 9.53125,0 V 11.844085 h 2.65625 l 3.15625,9.453125 q 0.4375,1.328125 0.64062,1.984375 0.23438,-0.734375 0.70313,-2.140625 l 3.20312,-9.296875 h 2.375 V 25.20346 h -1.70312 V 14.031585 l -3.875,11.171875 h -1.59375 l -3.85938,-11.375 v 11.375 z m 22.00955,-3.109375 1.6875,0.203125 q -0.40625,1.484375 -1.48437,2.3125 -1.07813,0.8125 -2.76563,0.8125 -2.125,0 -3.375,-1.296875 -1.23437,-1.3125 -1.23437,-3.671875 0,-2.453125 1.25,-3.796875 1.26562,-1.34375 3.26562,-1.34375 1.9375,0 3.15625,1.328125 1.23438,1.3125 1.23438,3.703125 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.453125 0.8125,0.84375 2.01562,0.84375 0.90625,0 1.54688,-0.46875 0.64062,-0.484375 1.01562,-1.515625 z m -5.39062,-2.65625 h 5.40625 q -0.10938,-1.21875 -0.625,-1.828125 -0.78125,-0.953125 -2.03125,-0.953125 -1.125,0 -1.90625,0.765625 -0.76563,0.75 -0.84375,2.015625 z m 9.14132,5.765625 v -9.671875 h 1.46875 v 1.359375 q 0.45313,-0.71875 1.20313,-1.140625 0.76562,-0.4375 1.71875,-0.4375 1.07812,0 1.76562,0.453125 0.6875,0.4375 0.96875,1.234375 1.15625,-1.6875 2.98438,-1.6875 1.45312,0 2.21875,0.796875 0.78125,0.796875 0.78125,2.453125 v 6.640625 h -1.64063 v -6.09375 q 0,-0.984375 -0.15625,-1.40625 -0.15625,-0.4375 -0.57812,-0.703125 -0.42188,-0.265625 -0.98438,-0.265625 -1.01562,0 -1.6875,0.6875 -0.67187,0.671875 -0.67187,2.15625 v 5.625 h -1.64063 v -6.28125 q 0,-1.09375 -0.40625,-1.640625 -0.40625,-0.546875 -1.3125,-0.546875 -0.6875,0 -1.28125,0.359375 -0.59375,0.359375 -0.85937,1.0625 -0.25,0.703125 -0.25,2.03125 v 5.015625 z m 14.93143,-4.84375 q 0,-2.6875 1.48437,-3.96875 1.25,-1.078125 3.04688,-1.078125 2,0 3.26562,1.3125 1.26563,1.296875 1.26563,3.609375 0,1.859375 -0.5625,2.9375 -0.5625,1.0625 -1.64063,1.65625 -1.0625,0.59375 -2.32812,0.59375 -2.03125,0 -3.28125,-1.296875 -1.25,-1.3125 
-1.25,-3.765625 z m 1.6875,0 q 0,1.859375 0.79687,2.796875 0.8125,0.921875 2.04688,0.921875 1.21875,0 2.03125,-0.921875 0.8125,-0.9375 0.8125,-2.84375 0,-1.796875 -0.8125,-2.71875 -0.8125,-0.921875 -2.03125,-0.921875 -1.23438,0 -2.04688,0.921875 -0.79687,0.90625 -0.79687,2.765625 z m 9.28198,4.84375 v -9.671875 h 1.46875 v 1.46875 q 0.5625,-1.03125 1.03125,-1.359375 0.48438,-0.328125 1.0625,-0.328125 0.82813,0 1.6875,0.53125 l -0.5625,1.515625 q -0.60937,-0.359375 -1.20312,-0.359375 -0.54688,0 -0.96875,0.328125 -0.42188,0.328125 -0.60938,0.890625 -0.28125,0.875 -0.28125,1.921875 v 5.0625 z m 6.15018,3.71875 -0.1875,-1.53125 q 0.54687,0.140625 0.9375,0.140625 0.54687,0 0.875,-0.1875 0.32812,-0.171875 0.54687,-0.5 0.15625,-0.25 0.5,-1.21875 0.0469,-0.140625 0.14063,-0.40625 l -3.67188,-9.6875 h 1.76563 l 2.01562,5.59375 q 0.39063,1.078125 0.70313,2.25 0.28125,-1.125 0.67187,-2.203125 l 2.07813,-5.640625 h 1.64062 l -3.6875,9.828125 q -0.59375,1.609375 -0.92187,2.203125 -0.4375,0.8125 -1,1.1875 -0.5625,0.375 -1.34375,0.375 -0.48438,0 -1.0625,-0.203125 z m 14.19891,-8.015625 1.65625,-0.140625 q 0.125,1 0.54688,1.640625 0.4375,0.640625 1.34375,1.046875 0.92187,0.390625 2.0625,0.390625 1,0 1.78125,-0.296875 0.78125,-0.296875 1.15625,-0.8125 0.375,-0.53125 0.375,-1.15625 0,-0.625 -0.375,-1.09375 -0.35938,-0.46875 -1.1875,-0.796875 -0.54688,-0.203125 -2.39063,-0.640625 -1.82812,-0.453125 -2.5625,-0.84375 -0.96875,-0.5 -1.4375,-1.234375 -0.46875,-0.75 -0.46875,-1.671875 0,-1 0.57813,-1.875 0.57812,-0.890625 1.67187,-1.34375 1.10938,-0.453125 2.45313,-0.453125 1.48437,0 2.60937,0.484375 1.14063,0.46875 1.75,1.40625 0.60938,0.921875 0.65625,2.09375 l -1.6875,0.125 q -0.14062,-1.265625 -0.9375,-1.90625 -0.78125,-0.65625 -2.3125,-0.65625 -1.60937,0 -2.34375,0.59375 -0.73437,0.59375 -0.73437,1.421875 0,0.71875 0.53125,1.171875 0.5,0.46875 2.65625,0.96875 2.15625,0.484375 2.95312,0.84375 1.17188,0.53125 1.71875,1.359375 0.5625,0.828125 0.5625,1.90625 0,1.0625 -0.60937,2.015625 
-0.60938,0.9375 -1.75,1.46875 -1.14063,0.515625 -2.57813,0.515625 -1.8125,0 -3.04687,-0.53125 -1.21875,-0.53125 -1.92188,-1.59375 -0.6875,-1.0625 -0.71875,-2.40625 z m 12.8342,8 v -13.375 h 1.48438 v 1.25 q 0.53125,-0.734375 1.1875,-1.09375 0.67187,-0.375 1.625,-0.375 1.23437,0 2.17187,0.640625 0.95313,0.625 1.4375,1.796875 0.48438,1.15625 0.48438,2.546875 0,1.484375 -0.53125,2.671875 -0.53125,1.1875 -1.54688,1.828125 -1.01562,0.625 -2.14062,0.625 -0.8125,0 -1.46875,-0.34375 -0.65625,-0.34375 -1.0625,-0.875 v 4.703125 z m 1.48438,-8.484375 q 0,1.859375 0.75,2.765625 0.76562,0.890625 1.82812,0.890625 1.09375,0 1.875,-0.921875 0.78125,-0.9375 0.78125,-2.875 0,-1.84375 -0.76562,-2.765625 -0.75,-0.921875 -1.8125,-0.921875 -1.04688,0 -1.85938,0.984375 -0.79687,0.96875 -0.79687,2.84375 z m 15.20385,3.59375 q -0.92187,0.765625 -1.76562,1.09375 -0.82813,0.3125 -1.79688,0.3125 -1.59375,0 -2.45312,-0.78125 -0.85938,-0.78125 -0.85938,-1.984375 0,-0.71875 0.32813,-1.296875 0.32812,-0.59375 0.84375,-0.9375 0.53125,-0.359375 1.1875,-0.546875 0.46875,-0.125 1.45312,-0.25 1.98438,-0.234375 2.92188,-0.5625 0.0156,-0.34375 0.0156,-0.421875 0,-1 -0.46875,-1.421875 -0.625,-0.546875 -1.875,-0.546875 -1.15625,0 -1.70312,0.40625 -0.54688,0.40625 -0.8125,1.421875 l -1.60938,-0.21875 q 0.21875,-1.015625 0.71875,-1.640625 0.5,-0.640625 1.45313,-0.984375 0.95312,-0.34375 2.1875,-0.34375 1.25,0 2.01562,0.296875 0.78125,0.28125 1.14063,0.734375 0.375,0.4375 0.51562,1.109375 0.0781,0.421875 0.0781,1.515625 v 2.1875 q 0,2.28125 0.10937,2.890625 0.10938,0.59375 0.40625,1.15625 h -1.70312 q -0.26563,-0.515625 -0.32813,-1.1875 z m -0.14062,-3.671875 q -0.89063,0.375 -2.67188,0.625 -1.01562,0.140625 -1.4375,0.328125 -0.42187,0.1875 -0.65625,0.53125 -0.21875,0.34375 -0.21875,0.78125 0,0.65625 0.5,1.09375 0.5,0.4375 1.45313,0.4375 0.9375,0 1.67187,-0.40625 0.75,-0.421875 1.09375,-1.140625 0.26563,-0.5625 0.26563,-1.640625 z m 10.51632,1.3125 1.60938,0.21875 q -0.26563,1.65625 -1.35938,2.609375 
-1.07812,0.9375 -2.67187,0.9375 -1.98438,0 -3.1875,-1.296875 -1.20313,-1.296875 -1.20313,-3.71875 0,-1.578125 0.51563,-2.75 0.51562,-1.171875 1.57812,-1.75 1.0625,-0.59375 2.3125,-0.59375 1.57813,0 2.57813,0.796875 1,0.796875 1.28125,2.265625 l -1.59375,0.234375 q -0.23438,-0.96875 -0.8125,-1.453125 -0.57813,-0.5 -1.39063,-0.5 -1.23437,0 -2.01562,0.890625 -0.78125,0.890625 -0.78125,2.8125 0,1.953125 0.75,2.84375 0.75,0.875 1.95312,0.875 0.96875,0 1.60938,-0.59375 0.65625,-0.59375 0.82812,-1.828125 z m 9.64063,0.4375 1.6875,0.203125 q -0.40625,1.484375 -1.48438,2.3125 -1.07812,0.8125 -2.76562,0.8125 -2.125,0 -3.375,-1.296875 -1.23438,-1.3125 -1.23438,-3.671875 0,-2.453125 1.25,-3.796875 1.26563,-1.34375 3.26563,-1.34375 1.9375,0 3.15625,1.328125 1.23437,1.3125 1.23437,3.703125 0,0.15625 0,0.4375 h -7.21875 q 0.0937,1.59375 0.90625,2.453125 0.8125,0.84375 2.01563,0.84375 0.90625,0 1.54687,-0.46875 0.64063,-0.484375 1.01563,-1.515625 z m -5.39063,-2.65625 h 5.40625 q -0.10937,-1.21875 -0.625,-1.828125 -0.78125,-0.953125 -2.03125,-0.953125 -1.125,0 -1.90625,0.765625 -0.76562,0.75 -0.84375,2.015625 z m 14.1059,-0.07813 v -1.53125 l 8.84375,-3.734375 v 1.640625 l -7.01562,2.875 7.01562,2.90625 v 1.625 z m 10.89496,2.75 1.57812,-0.140625 q 0.20313,1.109375 0.76563,1.609375 0.5625,0.5 1.45312,0.5 0.75,0 1.3125,-0.34375 0.57813,-0.34375 0.9375,-0.921875 0.375,-0.578125 0.60938,-1.5625 0.25,-0.984375 0.25,-2 0,-0.109375 0,-0.328125 -0.5,0.78125 -1.35938,1.265625 -0.84375,0.484375 -1.82812,0.484375 -1.67188,0 -2.8125,-1.203125 -1.14063,-1.203125 -1.14063,-3.171875 0,-2.03125 1.1875,-3.265625 1.20313,-1.234375 3,-1.234375 1.3125,0 2.39063,0.703125 1.07812,0.703125 1.64062,2 0.5625,1.296875 0.5625,3.75 0,2.5625 -0.5625,4.078125 -0.5625,1.515625 -1.65625,2.3125 -1.09375,0.796875 -2.57812,0.796875 -1.5625,0 -2.5625,-0.875 -0.98438,-0.875 -1.1875,-2.453125 z m 6.71875,-5.890625 q 0,-1.40625 -0.75,-2.234375 -0.75,-0.828125 -1.8125,-0.828125 -1.09375,0 -1.90625,0.890625 
-0.8125,0.890625 -0.8125,2.3125 0,1.28125 0.76562,2.078125 0.78125,0.796875 1.90625,0.796875 1.14063,0 1.875,-0.796875 0.73438,-0.796875 0.73438,-2.21875 z m 2.78198,8.984375 3.53125,-5.03125 -3.26562,-4.640625 h 2.04687 l 1.48438,2.265625 q 0.42187,0.640625 0.67187,1.078125 0.40625,-0.59375 0.73438,-1.0625 l 1.64062,-2.28125 h 1.95313 l -3.34375,4.546875 3.59375,5.125 h -2.01563 l -1.98437,-3 -0.51563,-0.8125 -2.54687,3.8125 z m 10.8125,0 v -8.40625 h -1.45312 V 15.53158 h 1.45312 v -1.03125 q 0,-0.96875 0.17188,-1.453125 0.23437,-0.640625 0.82812,-1.03125 0.59375,-0.390625 1.67188,-0.390625 0.6875,0 1.53125,0.15625 l -0.25,1.4375 q -0.5,-0.09375 -0.95313,-0.09375 -0.75,0 -1.0625,0.328125 -0.3125,0.3125 -0.3125,1.1875 v 0.890625 h 1.89063 v 1.265625 h -1.89063 v 8.40625 z m 4.33954,-3.53125 1.64062,-0.21875 q 0.28125,1.40625 0.95313,2.015625 0.6875,0.609375 1.65625,0.609375 1.15625,0 1.95312,-0.796875 0.79688,-0.796875 0.79688,-1.984375 0,-1.125 -0.73438,-1.859375 -0.73437,-0.734375 -1.875,-0.734375 -0.46875,0 -1.15625,0.171875 l 0.1875,-1.4375 q 0.15625,0.01563 0.26563,0.01563 1.04687,0 1.875,-0.546875 0.84375,-0.546875 0.84375,-1.671875 0,-0.90625 -0.60938,-1.5 -0.60937,-0.59375 -1.57812,-0.59375 -0.95313,0 -1.59375,0.609375 -0.64063,0.59375 -0.8125,1.796875 l -1.64063,-0.296875 q 0.29688,-1.640625 1.35938,-2.546875 1.0625,-0.90625 2.65625,-0.90625 1.09375,0 2,0.46875 0.92187,0.46875 1.40625,1.28125 0.5,0.8125 0.5,1.71875 0,0.859375 -0.46875,1.578125 -0.46875,0.703125 -1.375,1.125 1.1875,0.28125 1.84375,1.140625 0.65625,0.859375 0.65625,2.15625 0,1.734375 -1.28125,2.953125 -1.26563,1.21875 -3.21875,1.21875 -1.76563,0 -2.92188,-1.046875 -1.15625,-1.046875 -1.32812,-2.71875 z m 18.98511,1.953125 v 1.578125 h -8.82813 q -0.0156,-0.59375 0.1875,-1.140625 0.34375,-0.90625 1.07813,-1.78125 0.75,-0.875 2.15625,-2.015625 2.17187,-1.78125 2.9375,-2.828125 0.76562,-1.046875 0.76562,-1.96875 0,-0.984375 -0.70312,-1.640625 -0.6875,-0.671875 -1.8125,-0.671875 -1.1875,0 
-1.90625,0.71875 -0.70313,0.703125 -0.70313,1.953125 l -1.6875,-0.171875 q 0.17188,-1.890625 1.29688,-2.875 1.14062,-0.984375 3.03125,-0.984375 1.92187,0 3.04687,1.0625 1.125,1.0625 1.125,2.640625 0,0.796875 -0.32812,1.578125 -0.32813,0.78125 -1.09375,1.640625 -0.75,0.84375 -2.53125,2.34375 -1.46875,1.234375 -1.89063,1.6875 -0.42187,0.4375 -0.6875,0.875 z m 10.84448,-4.265625 -8.84375,3.78125 v -1.625 l 7.01562,-2.90625 -7.01562,-2.875 V 14.09408 l 8.84375,3.734375 z" />
+</svg>