*) Implements AffineValueMap forward substitution for AffineApplyOps.
*) Adds ComposeAffineMaps transformation pass, which composes affine maps for all loads/stores in an MLFunction.
*) Adds multiple affine map composition tests.
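To make the composition step concrete, here is a minimal, self-contained C++ sketch; it models only single-result, one-dimensional maps of the form a*d0 + b, and the names (Affine1D, compose) are made up for illustration rather than taken from the pass:
#include <cstdint>
#include <cstdio>

// Toy model: a one-dimensional affine map d0 -> a*d0 + b.
struct Affine1D {
  int64_t a, b;
  int64_t apply(int64_t d0) const { return a * d0 + b; }
};

// Composition g(f(d0)) is again affine: (g.a*f.a)*d0 + (g.a*f.b + g.b).
// Forward-substituting an affine_apply into a load/store index amounts to
// replacing the access's map with this composition.
Affine1D compose(const Affine1D &g, const Affine1D &f) {
  return {g.a * f.a, g.a * f.b + g.b};
}

int main() {
  Affine1D f{1, 1};             // d0 -> d0 + 1
  Affine1D g{2, 0};             // d0 -> 2*d0
  Affine1D gf = compose(g, f);  // d0 -> 2*d0 + 2
  std::printf("g(f(3)) = %lld, (g o f)(3) = %lld\n",
              (long long)g.apply(f.apply(3)), (long long)gf.apply(3));
  return 0;
}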
PiperOrigin-RevId: 216216446
This CL introduces a series of cleanups for AffineExpr value types:
1. To make it clear that the value types should be used, the pointer AffineExpr types are put in the detail namespace. Unfortunately, since the value type's operator-> only forwards to the underlying pointer type, we still need to expose this in the include file for now.
2. AffineExprKind is fine to use, so it moves out of detail and out of AffineExpr.
3. getAffineDimExpr, getAffineSymbolExpr, and getAffineConstantExpr are similarly extracted as free functions, and their naming is made consistent across Builder, MLIRContext, and AffineExpr.
4. The AffineBinaryOpExpr::simplify functions are made into static free functions. In particular, they are moved out of AffineMap.cpp, where they do not belong.
5. operator AffineExprType is made explicit.
6. The binary operators are used everywhere possible (see the short usage sketch below).
7. Pointer usage is dropped everywhere outside of AffineExpr.cpp, MLIRContext.cpp, and AsmPrinter.cpp.
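A short usage sketch of points 3 and 6, assuming present-day MLIR headers and signatures for the free builder functions and the overloaded operators on AffineExpr (details may have differed at the time of this CL):
#include "mlir/IR/AffineExpr.h"
#include "mlir/IR/MLIRContext.h"

int main() {
  mlir::MLIRContext ctx;
  // Free functions instead of Builder/MLIRContext member builders (point 3).
  mlir::AffineExpr d0 = mlir::getAffineDimExpr(0, &ctx);
  mlir::AffineExpr s0 = mlir::getAffineSymbolExpr(0, &ctx);
  // Binary operators instead of explicit add/mul builder calls (point 6).
  mlir::AffineExpr e = d0 * 2 + s0 + 1;
  e.dump();  // prints something like: d0 * 2 + s0 + 1
  return 0;
}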
PiperOrigin-RevId: 216207212
but is better than the old name. Here is some justification:
1) affineint (as it is named) is not a type suitable for general computation (e.g. the multiplies/adds in an integer matmul). It has undefined width and is undefined on overflow. affineint values are used as the indices of for-statements because they are intended to be used as indexes inside the loop.
2) It can be used in both cfg and ml functions. As you mention, "symbols" are not affine, and we use affineint values for symbols.
3) Integers aren't affine; the algorithms applied to them can be. :)
4) The only suitable use for affineint in MLIR is for indexes and dimension sizes (i.e. the bounds of those indexes).
PiperOrigin-RevId: 216057974
with a new one (of a potentially different rank/shape), with an optional index remapping.
- introduce Utils::replaceAllMemRefUsesWith
- use this for DMA double buffering (a toy sketch of the index remapping follows below)
(This CL also adds a few temporary utilities / code that will be done away with once:
1) abstract DMA ops are added,
2) a memref dereferencing side-effect / trait is available on ops, and
3) b/117159533 is resolved (memref index computation slices).)
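As a toy illustration of the double-buffering use of the index remapping, here is a self-contained C++ sketch; the function name and concrete shapes are made up, and the real utility rewrites memref types and load/store operands in the IR rather than computing indices at runtime:
#include <array>
#include <cstdint>
#include <cstdio>

// Hypothetical remapping: a memref<256xf32> becomes memref<2x256xf32>, and an
// access A[i] inside the pipelined loop becomes A[iv mod 2][i], where iv is the
// surrounding loop's induction variable.
std::array<int64_t, 2> remapDoubleBuffer(int64_t iv, int64_t i) {
  return {iv % 2, i};
}

int main() {
  for (int64_t iv = 0; iv < 4; ++iv) {
    std::array<int64_t, 2> idx = remapDoubleBuffer(iv, 7);
    std::printf("iteration %lld -> buffer %lld, element %lld\n",
                (long long)iv, (long long)idx[0], (long long)idx[1]);
  }
  return 0;
}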
PiperOrigin-RevId: 215831373
This CL starts by replacing AffineExpr* with value-type AffineExprRef in a few places in the IR. By a domino effect that is pretty telling of the inconsistencies in the codebase, const is removed where it makes sense.
The rationale is that the decision was consciously made that unique'd types have pointer semantics without a const specifier. This is fine, but we should be consistent. In the end, the only logical invariant is that there should never be such a thing as a const AffineExpr*, const AffineMap*, or const IntegerSet* in our codebase.
This CL takes a number of shortcuts to killing const with fire, in particular forcing const AffineExprRef to return the underlying non-const AffineExpr*. This will be removed once AffineExpr* has disappeared from containers, but for now such shortcuts allow a bit of sanity in this long quest for cleanups.
The **only** places where const AffineExpr*, const AffineMap*, or const IntegerSet* may still appear are transitive needs from containers, comparison operators, etc.
There is still one major thing remaining here: figure out why cast/dyn_cast return me a const AffineXXX*, which in turn requires a bunch of ugly const_casts. I suspect this is due to classof taking a const AffineXXXExpr*. I wonder whether this is a side effect of 1., whether it is coming from LLVM itself (I'd doubt it), or something else (clattner@?).
In light of this, the whole discussion about const makes total sense to me now, and I would systematically apply the rule that, in the end, we should never have any const XXX in our codebase for unique'd types (assuming we can remove them all from containers and no additional constness constraint is imposed on us from the outside world).
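A tiny self-contained sketch of the underlying point: for a value type with pointer semantics, const on the handle does not make the pointed-to object any more immutable, so it only adds inconsistency (Node and ExprHandle are illustrative names, not the MLIR classes):
struct Node {
  int kind;  // stands in for the uniqued, immutable payload
};

// Value type with pointer semantics, in the spirit of AffineExprRef.
struct ExprHandle {
  Node *impl;
  int getKind() const { return impl->kind; }
};

int main() {
  Node n{42};
  const ExprHandle h{&n};  // const only protects this particular handle...
  ExprHandle copy = h;     // ...and any copy sheds it immediately.
  return copy.getKind() == 42 ? 0 : 1;
}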
PiperOrigin-RevId: 215811554
This CL implements AffineExprBaseRef as a templated type to allow LLVM-style casts to work properly. This also allows making AffineExprBaseRef::expr private.
To achieve this, it is necessary to use llvm::simplify_type and make AffineConstExpr derive from both AffineExpr and llvm::simplify<AffineExprRef>.
Note that llvm::simplify_type is just an interface to enable the proper template resolution of isa/cast/dyn_cast; it otherwise holds no value.
Lastly, note that certain dyn_cast operations wanted the const AffineExpr* form of AffineExprBaseRef, so I made the implicit constructor take that by default and documented the immutable behavior. I think this is consistent with the decision to make unique'd types immutable by convention and never use const on them.
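For readers unfamiliar with the mechanism, here is a self-contained sketch assuming the llvm::simplify_type protocol from llvm/Support/Casting.h; Expr, ConstExpr, and ExprRef are simplified stand-ins, the derivation trick described above is not reproduced, and this only shows how the specialization lets isa/dyn_cast see through the wrapper:
#include "llvm/Support/Casting.h"

struct Expr {
  enum Kind { Constant, Dim };
  Kind kind;
  Kind getKind() const { return kind; }
};

struct ConstExpr : Expr {
  static bool classof(const Expr *e) { return e->getKind() == Constant; }
};

class ExprRef;
namespace llvm {
template <> struct simplify_type<ExprRef>;
} // namespace llvm

// Value-type wrapper in the spirit of AffineExprBaseRef; `expr` stays private.
class ExprRef {
  Expr *expr;
  friend struct llvm::simplify_type<ExprRef>;

public:
  /*implicit*/ ExprRef(Expr *e) : expr(e) {}
};

namespace llvm {
// Teach isa/cast/dyn_cast to look through the wrapper at the raw pointer.
template <> struct simplify_type<ExprRef> {
  using SimpleType = Expr *;
  static SimpleType getSimplifiedValue(ExprRef &ref) { return ref.expr; }
};
} // namespace llvm

int main() {
  ConstExpr c;
  c.kind = Expr::Constant;
  ExprRef ref(&c);
  return llvm::isa<ConstExpr>(ref) ? 0 : 1;  // resolves through the wrapper
}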
PiperOrigin-RevId: 215642247
This CL makes the uses of AffineExprWrap outside of IR uniform.
The public API of the AffineExpr builder is modified to only use AffineExprWrap.
A few places access AffineExprWrap.expr; this is only while the API is in transition, to easily keep track (i.e. make expr private and let the compiler track the errors).
Parser.cpp exhibits patterns that are dependent on nullptr values, so converting it is left for another CL.
PiperOrigin-RevId: 215642005
This CL proposes adding MLIRContext* to AffineExpr, as discussed previously.
This allows the value class to not require the context in its constructor and makes it a POD that it makes sense to pass by value everywhere.
A list of other RFC CLs will build on this. The RFC CLs are small incremental pushes of the API, which would otherwise be a pretty big change.
Pushing the thinking a little bit further, it seems reasonable to use implicit casts/constructors to/from AffineExpr*.
As this thing evolves, it looks to me like IR (and probably Parser, for not-so-good reasons) wants to operate on AffineExpr* and the rest of the code wants to operate on the value type.
For this reason I think AffineExprImpl*/AffineExpr may also make sense, but I do not have a particular naming preference.
The jury is still out for the naming decision between the above and AffineExprBase*/AffineExpr or AffineExpr*/AffineExprRef.
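A toy sketch of why carrying the context inside the value type helps: once an expression remembers its context, derived expressions can be built without threading a context argument through every call (Context and ExprValue are made-up stand-ins, not the MLIR classes):
// Stand-in for MLIRContext.
struct Context {};

// POD-ish value type that remembers where it was created.
struct ExprValue {
  Context *ctx;
  long long constant;
  Context *getContext() const { return ctx; }
};

// No context parameter needed: the operands already know theirs.
ExprValue operator+(ExprValue a, ExprValue b) {
  return {a.getContext(), a.constant + b.constant};
}

int main() {
  Context ctx;
  ExprValue two{&ctx, 2}, three{&ctx, 3};
  ExprValue five = two + three;  // the context flows implicitly
  return five.constant == 5 ? 0 : 1;
}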
PiperOrigin-RevId: 215641596
This CL argues that the builder API for AffineExpr should be used with a lightweight wrapper that supports operator chaining.
This CL takes the ill-named AffineExprWrap and proposes a simple set of operators with built-in constant simplifications.
This allows:
1. removing the getAddMulPureAffineExpr function;
2. avoiding concerns about constant vs non-constant simplifications at **every call site**;
3. writing the mathematical expressions we want to write without unnecessary obfuscations.
The points above represent pure technical debt that we don't want to carry on.
It is important to realize that this is not a mere convenience or "just sugar" but a reduction in cognitive overhead.
This thinking can be pushed significantly further; I have added some comments with some basic ideas, but we could make AffineMap, AffineApply, and other objects that use map applications more functional and value-based.
I am putting this out to get a first batch of reviews and see what people think.
I think in my preferred design I would have the Builder directly return such AffineExprPtr objects by value everywhere and avoid the boilerplate explicit creations that I am doing by hand at this point.
Yes, this AffineExprPtr would implicitly convert to AffineExpr*, because that is what it is.
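A toy sketch of the "operators with built-in constant simplifications" idea; ExprNode and Expr are made-up stand-ins, and the real wrapper builds uniqued AffineExpr nodes rather than a shared_ptr tree:
#include <cstdint>
#include <memory>

// Toy expression tree: a constant leaf or a binary add over two children.
struct ExprNode {
  enum Kind { Constant, Add } kind = Constant;
  int64_t value = 0;                   // for Constant
  std::shared_ptr<ExprNode> lhs, rhs;  // for Add
};

// Lightweight value wrapper with operator sugar, in the spirit of the CL.
struct Expr {
  std::shared_ptr<ExprNode> node;
  static Expr constant(int64_t v) {
    auto n = std::make_shared<ExprNode>();
    n->kind = ExprNode::Constant;
    n->value = v;
    return {n};
  }
};

// The folding decision lives in one place, the operator itself, instead of at
// every call site.
Expr operator+(Expr a, Expr b) {
  if (a.node->kind == ExprNode::Constant && b.node->kind == ExprNode::Constant)
    return Expr::constant(a.node->value + b.node->value);
  auto n = std::make_shared<ExprNode>();
  n->kind = ExprNode::Add;
  n->lhs = a.node;
  n->rhs = b.node;
  return {n};
}

int main() {
  // Call sites just write the math; constant operands fold as they are combined.
  Expr e = Expr::constant(2) + Expr::constant(3);
  return (e.node->kind == ExprNode::Constant && e.node->value == 5) ? 0 : 1;
}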
PiperOrigin-RevId: 215641317
- loopBodySkew shifts statements of a loop body by stmt-wise delays, and is typically meant to be used to:
  - allow overlap of non-blocking start / wait-until-completion operations with other computation
  - allow shifting of statements (for better register reuse/locality/parallelism)
  - do software pipelining (when applied to the innermost loop)
- an additional argument specifies whether to unroll the prologue and epilogue.
- add a method to check SSA dominance preservation.
- add a fake loop pipeline pass to test this utility.
Sample input/output are below. While on this, fix/add the following:
- fix a minor bug in getAddMulPureAffineExpr
- add additional builder methods for common affine map cases
- fix const_operand_iterator's for ForStmt, etc. When there is no such thing as 'const MLValue', the iterator shouldn't be returning const MLValue's. Returning MLValue is const-correct.
Sample input/output examples:
1) Simplest case: shift second statement by one.
Input:
for %i = 0 to 7 {
%y = "foo"(%i) : (affineint) -> affineint
%x = "bar"(%i) : (affineint) -> affineint
}
Output:
#map0 = (d0) -> (d0 - 1)
mlfunc @loop_nest_simple1() {
%c8 = constant 8 : affineint
%c0 = constant 0 : affineint
%0 = "foo"(%c0) : (affineint) -> affineint
for %i0 = 1 to 7 {
%1 = "foo"(%i0) : (affineint) -> affineint
%2 = affine_apply #map0(%i0)
%3 = "bar"(%2) : (affineint) -> affineint
}
%4 = affine_apply #map0(%c8)
%5 = "bar"(%4) : (affineint) -> affineint
return
}
2) DMA overlap: shift dma.wait and compute by one.
Input:
for %i = 0 to 7 {
%pingpong = affine_apply (d0) -> (d0 mod 2) (%i)
"dma.enqueue"(%pingpong) : (affineint) -> affineint
%pongping = affine_apply (d0) -> (d0 mod 2) (%i)
"dma.wait"(%pongping) : (affineint) -> affineint
"compute1"(%pongping) : (affineint) -> affineint
}
Output:
#map0 = (d0) -> (d0 mod 2)
#map1 = (d0) -> (d0 - 1)
#map2 = ()[s0] -> (s0 + 7)
mlfunc @loop_nest_dma() {
%c8 = constant 8 : affineint
%c0 = constant 0 : affineint
%0 = affine_apply #map0(%c0)
%1 = "dma.enqueue"(%0) : (affineint) -> affineint
for %i0 = 1 to 7 {
%2 = affine_apply #map0(%i0)
%3 = "dma.enqueue"(%2) : (affineint) -> affineint
%4 = affine_apply #map1(%i0)
%5 = affine_apply #map0(%4)
%6 = "dma.wait"(%5) : (affineint) -> affineint
%7 = "compute1"(%5) : (affineint) -> affineint
}
%8 = affine_apply #map1(%c8)
%9 = affine_apply #map0(%8)
%10 = "dma.wait"(%9) : (affineint) -> affineint
%11 = "compute1"(%9) : (affineint) -> affineint
return
}
3) With arbitrary affine bound maps:
Shift last two statements by two.
Input:
for %i = %N to ()[s0] -> (s0 + 7)()[%N] {
%y = "foo"(%i) : (affineint) -> affineint
%x = "bar"(%i) : (affineint) -> affineint
%z = "foo_bar"(%i) : (affineint) -> (affineint)
"bar_foo"(%i) : (affineint) -> (affineint)
}
Output:
#map0 = ()[s0] -> (s0 + 1)
#map1 = ()[s0] -> (s0 + 2)
#map2 = ()[s0] -> (s0 + 7)
#map3 = (d0) -> (d0 - 2)
#map4 = ()[s0] -> (s0 + 8)
#map5 = ()[s0] -> (s0 + 9)
for %i0 = %arg0 to #map0()[%arg0] {
%0 = "foo"(%i0) : (affineint) -> affineint
%1 = "bar"(%i0) : (affineint) -> affineint
}
for %i1 = #map1()[%arg0] to #map2()[%arg0] {
%2 = "foo"(%i1) : (affineint) -> affineint
%3 = "bar"(%i1) : (affineint) -> affineint
%4 = affine_apply #map3(%i1)
%5 = "foo_bar"(%4) : (affineint) -> affineint
%6 = "bar_foo"(%4) : (affineint) -> affineint
}
for %i2 = #map4()[%arg0] to #map5()[%arg0] {
%7 = affine_apply #map3(%i2)
%8 = "foo_bar"(%7) : (affineint) -> affineint
%9 = "bar_foo"(%7) : (affineint) -> affineint
}
4) Shift the first statement by zero, the second by one, and the third by two.
Input:
for %i = 0 to 7 {
%y = "foo"(%i) : (affineint) -> affineint
%x = "bar"(%i) : (affineint) -> affineint
%z = "foobar"(%i) : (affineint) -> affineint
}
Output:
#map0 = (d0) -> (d0 - 1)
#map1 = (d0) -> (d0 - 2)
#map2 = ()[s0] -> (s0 + 7)
%c9 = constant 9 : affineint
%c8 = constant 8 : affineint
%c1 = constant 1 : affineint
%c0 = constant 0 : affineint
%0 = "foo"(%c0) : (affineint) -> affineint
%1 = "foo"(%c1) : (affineint) -> affineint
%2 = affine_apply #map0(%c1)
%3 = "bar"(%2) : (affineint) -> affineint
for %i0 = 2 to 7 {
%4 = "foo"(%i0) : (affineint) -> affineint
%5 = affine_apply #map0(%i0)
%6 = "bar"(%5) : (affineint) -> affineint
%7 = affine_apply #map1(%i0)
%8 = "foobar"(%7) : (affineint) -> affineint
}
%9 = affine_apply #map0(%c8)
%10 = "bar"(%9) : (affineint) -> affineint
%11 = affine_apply #map1(%c8)
%12 = "foobar"(%11) : (affineint) -> affineint
%13 = affine_apply #map1(%c9)
%14 = "foobar"(%13) : (affineint) -> affineint
5) SSA dominance violated; no shifting if a shift is specified for the second
statement.
for %i = 0 to 7 {
%x = "foo"(%i) : (affineint) -> affineint
"bar"(%x) : (affineint) -> affineint
}
PiperOrigin-RevId: 214975731
PiperOrigin-RevId: 214722005
Alternatively, we could define a TFComplexType with a width parameter in MLIR; then both types could be converted to the same MLIR type with different widths (like IntegerType).
We chose to use a direct mapping because there are only two TF complex types.
PiperOrigin-RevId: 213856651
PiperOrigin-RevId: 213748573
PiperOrigin-RevId: 213650346
- extend loop unroll-jam, similarly to loop unroll, to affine bounds
- extend both loop unroll/unroll-jam to deal with the cleanup loop when the trip count is not a multiple of the unroll factor
- extend promotion of single-iteration loops to work with affine bounds
- fix typo bugs in loop unroll
- refactor common code between loop unroll and loop unroll-jam
- move prototypes of non-pass transforms to LoopUtils.h
- add additional builder methods
- introduce loopUnrollUpTo(factor) to unroll by either the factor or the trip count, whichever is less (a small sketch follows below)
- remove Statement::isInnermost (not used for now - will come back at the right place/in the right form later)
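A small sketch of the cleanup-loop arithmetic and of the loopUnrollUpTo policy for the constant-trip-count case (function names are illustrative, not the actual transform entry points):
#include <algorithm>
#include <cstdint>
#include <cstdio>

// loopUnrollUpTo: unroll by the requested factor or the trip count, whichever is less.
uint64_t effectiveUnrollFactor(uint64_t factor, uint64_t tripCount) {
  return std::min(factor, tripCount);
}

// For a constant trip count, the unrolled loop covers the largest multiple of
// the factor; the remaining iterations go into the cleanup loop.
void splitIterations(uint64_t tripCount, uint64_t factor) {
  uint64_t unrolled = tripCount - tripCount % factor;
  std::printf("%llu unrolled iterations, %llu cleanup iterations\n",
              (unsigned long long)unrolled,
              (unsigned long long)(tripCount - unrolled));
}

int main() {
  splitIterations(18, 4);  // 16 unrolled, 2 in the cleanup loop
  std::printf("effective factor: %llu\n",
              (unsigned long long)effectiveUnrollFactor(8, 5));  // capped at 5
  return 0;
}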
PiperOrigin-RevId: 213471227
Use these methods to simplify existing code. Rename getConstantMap to getConstantAffineMap. Move declarations to group similar ones together.
PiperOrigin-RevId: 212814829
make loop unroll/unroll-and-jam more powerful; add additional affine expr builder methods.
- use previously added analysis/simplification to infer trip counts that are a multiple of the unroll factor, making loop unroll/unroll-and-jam more general.
- for loop unroll, support bounds that are single-result affine maps with the same set of operands. For unknown loop bounds, loop unroll will now work as long as the trip count can be determined to be a multiple of the unroll factor.
- extend getConstantTripCount to deal with single-result affine maps with the same operands; move it to mlir/Analysis/LoopAnalysis.cpp.
- add additional builder utility methods for affine expr arithmetic (difference, mod/floordiv/ceildiv w.r.t. a positive constant); simplify code to use the utility methods.
- move affine analysis routines to AffineAnalysis.cpp/.h from AffineStructures.cpp/.h.
- rename LoopUnrollJam to LoopUnrollAndJam to match the class name.
- add an additional simplification for simplifyFloorDiv and simplifyCeilDiv.
- rename AffineMap::getNumOperands() to getNumInputs(): an affine map by itself does not have operands. Operands are passed to it through affine_apply, loop bounds, and if conditions, and are stored in the latter.
This should be sufficiently powerful for now as far as unroll/unroll-and-jam go for TPU code generation, and we can move on to other analyses/transformations.
Loop nests like these are now unrolled without any cleanup loop being generated; a small divisibility sketch follows the examples below.
for %i = 1 to 100 {
// unroll factor 4: no cleanup loop will be generated.
for %j = (d0) -> (d0) (%i) to (d0) -> (5*d0 + 3) (%i) {
%x = "foo"(%j) : (affineint) -> i32
}
}
for %i = 1 to 100 {
// unroll factor 4: no cleanup loop will be generated.
for %j = (d0) -> (d0) (%i) to (d0) -> (d0 - d0 mod 4 - 1) (%i) {
%y = "foo"(%j) : (affineint) -> i32
}
}
for %i = 1 to 100 {
for %j = (d0) -> (d0) (%i) to (d0) -> (d0 + 128) (%i) {
%x = "foo"() : () -> i32
}
}
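A minimal sketch of the divisibility reasoning used above, for a trip count that is an affine function a*d0 + b of an outer induction variable; this mirrors the idea rather than the actual analysis code, and the concrete numbers below are illustrative:
#include <cstdint>

// Trip count modeled as a*d0 + b for an outer induction variable d0 >= 0.
// If both coefficients are multiples of `factor`, the trip count is a multiple
// of `factor` for every d0, so no cleanup loop is needed.
bool tripCountIsMultipleOf(int64_t a, int64_t b, int64_t factor) {
  return a % factor == 0 && b % factor == 0;
}

int main() {
  // E.g. a trip count of the form 4*d0 + 8 is a multiple of 4 for every d0.
  return tripCountIsMultipleOf(4, 8, 4) ? 0 : 1;
}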
TODO(bondhugula): extend this to LoopUnrollAndJam as well in the next CL (with minor
changes).
PiperOrigin-RevId: 212661212
This CL also includes two other minor changes:
- change the implemented syntax from 'if (cond)' to 'if cond', as specified by the MLIR spec.
- a minor fix to the implementation of ForStmt.
PiperOrigin-RevId: 210618122
(and more useful) way rather than hacking up a pile of attributes for it. In the future this will grow to represent inlined locations, fusion cases, etc., but for now we start with simple Unknown and File/Line/Col locations. NFC.
PiperOrigin-RevId: 210485775
This revamps the implementation of loop bounds in ForStmt, using a general representation that supports operands. The frequent case of constant bounds is supported via special access methods.
This also includes:
- Operand iterators for the Statement class.
- An OpPointer::is() method to query the class of the Operation.
- Support for parsing and printing the bound shorthand notation.
- Validity checks for the bound operands used as dim ids and symbols.
I didn't mean this CL to be so large. It just happened this way, as one thing led to another.
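A toy sketch of the bound representation described above: a general single-result affine form over operands, with a fast path for the frequent constant case (LoopBound and its members are illustrative names, not the actual ForStmt API):
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Toy bound: a single-result affine function c0 + sum_i coeffs[i]*operands[i],
// general enough to model both constant bounds and operand-dependent ones.
struct LoopBound {
  int64_t c0 = 0;
  std::vector<int64_t> coeffs;    // one coefficient per operand
  std::vector<int64_t> operands;  // stand-ins for SSA operand values

  // Fast path used for the frequent constant-bound case.
  std::optional<int64_t> getConstantValue() const {
    if (coeffs.empty())
      return c0;
    return std::nullopt;
  }

  int64_t evaluate() const {
    assert(coeffs.size() == operands.size());
    int64_t v = c0;
    for (size_t i = 0; i < coeffs.size(); ++i)
      v += coeffs[i] * operands[i];
    return v;
  }
};

int main() {
  LoopBound constantUb{100, {}, {}};          // plain "to 100"
  LoopBound symbolicUb{7, {1}, {/*%N=*/32}};  // like "to ()[s0] -> (s0 + 7)" with %N = 32
  return (constantUb.getConstantValue().value_or(0) == 100 &&
          symbolicUb.evaluate() == 39)
             ? 0
             : 1;
}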
PiperOrigin-RevId: 210204858
- Implement support for the TensorFlow 'If' op, the first TF op definition.
- Fill in some missing basic infra, including the ability to split a basic block, the ability to create a branch with operands, etc.
- Implement basic lowering for some simple forms of If, where the condition is a zero-D bool tensor and when all the types line up. Future patches will generalize this.
There is still much to be done here. I'd like to get some example graphs coming from the converter to play with to direct this work.
PiperOrigin-RevId: 210198760
operation and statement to have a location, and make it so a location is
required to be specified whenever you make one (though a null location is still
allowed). This is to encourage compiler authors to propagate loc info
properly, allowing our failability story to work well.
This is still a WIP - it isn't clear if we want to continue abusing Attribute for location information, or whether we should introduce a new class hierarchy to do so. This is a good step along the way, and it unblocks some of the tf/xla work that builds upon it.
PiperOrigin-RevId: 210001406
This also fixes an infinite recursion in VariadicOperands that this turned up.
PiperOrigin-RevId: 209692932
apply op.
- add builder for AffineApplyOp (first one for an operation that has
non-zero operands)
- add support for loop unrolling by a given factor; uses the affine apply op
builder.
While on this, change 'step' of ForStmt to be 'unsigned' instead of
AffineConstantExpr *. Add setters for ForStmt lb, ub, step.
Sample Input:
// CHECK-LABEL: mlfunc @loop_nest_unroll_cleanup() {
mlfunc @loop_nest_unroll_cleanup() {
for %i = 1 to 100 {
for %j = 0 to 17 {
%x = "addi32"(%j, %j) : (affineint, affineint) -> i32
%y = "addi32"(%x, %x) : (i32, i32) -> i32
}
}
return
}
Output:
$ mlir-opt -loop-unroll -unroll-factor=4 /tmp/single2.mlir
#map0 = (d0) -> (d0 + 1)
#map1 = (d0) -> (d0 + 2)
#map2 = (d0) -> (d0 + 3)
mlfunc @loop_nest_unroll_cleanup() {
for %i0 = 1 to 100 {
for %i1 = 0 to 17 step 4 {
%0 = "addi32"(%i1, %i1) : (affineint, affineint) -> i32
%1 = "addi32"(%0, %0) : (i32, i32) -> i32
%2 = affine_apply #map0(%i1)
%3 = "addi32"(%2, %2) : (affineint, affineint) -> i32
%4 = affine_apply #map1(%i1)
%5 = "addi32"(%4, %4) : (affineint, affineint) -> i32
%6 = affine_apply #map2(%i1)
%7 = "addi32"(%6, %6) : (affineint, affineint) -> i32
}
for %i2 = 16 to 17 {
%8 = "addi32"(%i2, %i2) : (affineint, affineint) -> i32
%9 = "addi32"(%8, %8) : (i32, i32) -> i32
}
}
return
}
PiperOrigin-RevId: 209676220
resolver support.
Still TODO are verifier support (to make sure you don't use an attribute for a
function in another module) and the TODO in ModuleParser::finalizeModule that I
will handle in the next patch.
PiperOrigin-RevId: 209361648
PiperOrigin-RevId: 208245701
- introduce affine integer sets into the IR
- parse and print affine integer sets (both inline or outlined) similar to
affine maps
- use integer set for IfStmt's conditional, and implement parsing of IfStmt's
conditional
- fix an affine expr paren omission bug while on this.
TODO: parse/represent/print MLValue operands to affine integer set references.
PiperOrigin-RevId: 207779408
encapsulates an operation that is yet to be created. This is a patch towards
custom ops providing create methods that don't need to be templated, allowing
them to move out of line in the future.
PiperOrigin-RevId: 207725557
PiperOrigin-RevId: 207235956
PiperOrigin-RevId: 206977161
of a statement block. Rename Statement::getFunction() and StmtBlock::getFunction() to findFunction() to make it clear that this is not a constant-time getter.
Fix b/112039912 - we were recording 'i' instead of '%i' for loop induction variables, causing a "use of undefined SSA value" error.
PiperOrigin-RevId: 206884644
%i<ssa value number>. Add support for future pretty printing of ML function arguments as %arg<ssa value number>.
Induction variables are implemented by inheriting ForStmt from MLValue. ForStmt provides APIs that make this design decision invisible to the ForStmt users.
This CL in combination with cl/206253643 resolves http://b/111769060.
PiperOrigin-RevId: 206655937
and OtherType. OtherType is now the thing that holds AffineInt, Control, and eventually Resource, Variant, String, etc. FloatType holds the floating-point types and allows a convenient isa<FloatType>() query.
This fixes issues where we allowed control to be the element type of tensor, memref, and vector. At the same time, ban AffineInt from being an element of a vector/memref/tensor as well, since we don't need it.
I updated the spec to match this as well.
PiperOrigin-RevId: 206361942
* Add tf_control as a primitive type;
* Allow $ in bare-id to allow attributes with $ (to make it trivial to mangle a TF attribute);
PiperOrigin-RevId: 206342642
functions with equivalent CFG functions. Traverses the module's MLIR, generates CFG functions (empty for now), and removes ML functions. Adds a Transforms library and tests.
PiperOrigin-RevId: 205848367
- Drop sub-classing of affine binary op expressions.
- Drop the affine expr op kind 'sub'. Represent it as a multiply by -1 and an add. This will also be in line with the mathematical form when we need to represent a system of linear equalities/inequalities: the negative number goes into the coefficient of an affine form (e.g., x_1 + (-1)*x_2 + 3*x_3 + (-2) >= 0). The folding simplification will transparently deal with multiplying the -1 with any other constants. This also means we won't need to simplify a multiply expression like in x_1 + (-2)*x_2 to a subtract expression (x_1 - 2*x_2) for canonicalization/uniquing.
- When we print the IR, we will still pretty-print to a subtract when possible (sketched below).
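A small illustrative sketch of the representation/printing split: internally x_1 - 2*x_2 + 3*x_3 is just the coefficient list {1, -2, 3}, and the printer reintroduces the subtraction (purely a toy, not the actual AsmPrinter logic):
#include <cstdio>
#include <vector>

// Internal form: an affine expression over x_1..x_n as a coefficient vector,
// with no dedicated "sub" kind; subtraction is a negative coefficient.
void print(const std::vector<long long> &coeffs) {
  for (size_t i = 0; i < coeffs.size(); ++i) {
    long long c = coeffs[i];
    if (i == 0)
      std::printf("%lld*x_%zu", c, i + 1);
    else if (c < 0)
      std::printf(" - %lld*x_%zu", -c, i + 1);  // pretty-print as a subtract
    else
      std::printf(" + %lld*x_%zu", c, i + 1);
  }
  std::printf("\n");
}

int main() {
  print({1, -2, 3});  // prints: 1*x_1 - 2*x_2 + 3*x_3
  return 0;
}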
PiperOrigin-RevId: 205298958
loop header.
Loop bounds are presumed to be constants for now and are stored in ForStmt as affine constant expressions. ML function arguments, return statement operands, and the loop variable name are dropped for now.
PiperOrigin-RevId: 205256208
PiperOrigin-RevId: 205157390
PiperOrigin-RevId: 204240947
use it.
This also removes "operand" from the affine expr classes: it is unnecessary
verbosity and "operand" will mean something very specific for SSA stuff (we
will have an Operand type).
PiperOrigin-RevId: 203976504
prone to create things.
PiperOrigin-RevId: 203703229