path: root/mlir/lib/Transforms/MaterializeVectors.cpp
Commit message | Author | Date | Files | Lines
* Automated rollback of changelist 232717775.  [Uday Bondhugula | 2019-03-29 | 1 file | -12/+12]
    PiperOrigin-RevId: 232807986
* NFC: Rename the 'for' operation in the AffineOps dialect to 'affine.for'.  [River Riddle | 2019-03-29 | 1 file | -12/+12]
    This is the second step to adding a namespace to the AffineOps dialect.
    PiperOrigin-RevId: 232717775
* NFC: Rename affine_apply to affine.apply.  [River Riddle | 2019-03-29 | 1 file | -4/+4]
    This is the first step to adding a namespace to the affine dialect.
    PiperOrigin-RevId: 232707862
* Remove remaining usages of OperationInst in lib/Transforms.  [River Riddle | 2019-03-29 | 1 file | -23/+21]
    PiperOrigin-RevId: 232323671
* Define the AffineForOp and replace ForInst with it.  [River Riddle | 2019-03-29 | 1 file | -12/+8]
    This patch is largely mechanical, i.e. changing usages of ForInst to OpPointer<AffineForOp>. An important difference is that upon construction an AffineForOp no longer automatically creates the body and induction variable. To generate the body/iv, 'createBody' can be called on an AffineForOp with no body.
    PiperOrigin-RevId: 232060516
* Address performance issue in NestedMatcher  [Nicolas Vasilache | 2019-03-29 | 1 file | -5/+6]
    A performance issue was reported due to the usage of NestedMatcher in ComposeAffineMaps. The main culprit was the ubiquitous copies that were occurring when appending even a single element in `matchOne`.
    This CL generally simplifies the implementation and removes one level of indirection by getting rid of auxiliary storage as well as simplifying the API. The users of the API are updated accordingly.
    The implementation was tested on a heavily unrolled example with ComposeAffineMaps and is now close in performance to an implementation based on a stateless InstWalker.
    As a reminder, the whole ComposeAffineMaps pass is slated to disappear, but the bug report was very useful as a stress test for NestedMatchers.
    Lastly, the following cleanups reported by @aminim were addressed:
    1. make NestedPatternContext scoped within runFunction rather than at the Pass level. This was caused by a previous misunderstanding of Pass lifetime;
    2. use defensive assertions in the constructor of NestedPatternContext to make it clear that a unique such locally scoped context is allowed to exist.
    PiperOrigin-RevId: 231781279
* Recommit: Define an AffineOps dialect as well as an AffineIfOp operation.  [River Riddle | 2019-03-29 | 1 file | -3/+4]
    Replace all instances of IfInst with AffineIfOp and delete IfInst.
    PiperOrigin-RevId: 231342063
* Automated rollback of changelist 231318632.  [Nicolas Vasilache | 2019-03-29 | 1 file | -4/+3]
    PiperOrigin-RevId: 231327161
* Define an AffineOps dialect as well as an AffineIfOp operation.  [River Riddle | 2019-03-29 | 1 file | -3/+4]
    Replace all instances of IfInst with AffineIfOp and delete IfInst.
    PiperOrigin-RevId: 231318632
* Replace too obscure usage of functional::map by declare + reserve + loop.  [Nicolas Vasilache | 2019-03-29 | 1 file | -15/+21]
    Cleanup a usage of functional::map that is deemed too obscure in `reindexAffineIndices`. Also fix a stale comment in `reindexAffineIndices`.
    PiperOrigin-RevId: 231211184
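For readers unfamiliar with the pattern named in the entry above, here is a minimal standalone C++ sketch of "declare + reserve + loop"; the container and the per-element transform are placeholders, not the actual `reindexAffineIndices` code.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// The pattern the commit describes: instead of a higher-order map helper,
// declare the result, reserve its final size, and fill it in a plain loop.
std::vector<int64_t> shiftAll(const std::vector<int64_t> &indices, int64_t offset) {
  std::vector<int64_t> shifted;
  shifted.reserve(indices.size()); // single allocation, no repeated growth
  for (int64_t idx : indices)
    shifted.push_back(idx + offset); // the per-element transform, spelled out in place
  return shifted;
}

int main() {
  for (int64_t v : shiftAll({0, 8, 16, 24}, 4))
    std::cout << v << ' '; // prints: 4 12 20 28
  std::cout << '\n';
}
```

The reserve call keeps the loop to one allocation, which is the readability-plus-performance trade the commit prefers over the obscured functional::map call.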
* Change AffineApplyOp to produce a single result, simplifying the code that works with it, and updating the g3docs.  [Chris Lattner | 2019-03-29 | 1 file | -4/+3]
    PiperOrigin-RevId: 231120927
* Cleanup resource management and rename recursive matchers  [Nicolas Vasilache | 2019-03-29 | 1 file | -2/+2]
    This CL follows up on a memory leak issue related to SmallVector growth that escapes the BumpPtrAllocator. The fix is to properly use ArrayRef and placement new to define away the issue.
    The following renaming is also applied:
    1. MLFunctionMatcher -> NestedPattern
    2. MLFunctionMatches -> NestedMatch
    As a consequence all allocations are now guaranteed to live on the BumpPtrAllocator.
    PiperOrigin-RevId: 231047766
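The fix described above, copying results into bump storage with placement new and handing out a non-owning view, can be sketched in standard C++. `Match`, `makeMatches`, and the use of std::pmr here are illustrative stand-ins for the MLIR classes and LLVM's BumpPtrAllocator, not the actual code.

```cpp
#include <cstddef>
#include <iostream>
#include <memory_resource>
#include <utility>
#include <vector>

struct Match { int depth; };

// Copy matches into arena-owned storage and return a non-owning (pointer, size)
// view, so no later growth of a temporary vector can escape the allocator.
std::pair<const Match *, size_t>
makeMatches(std::pmr::monotonic_buffer_resource &arena, const std::vector<Match> &tmp) {
  void *mem = arena.allocate(tmp.size() * sizeof(Match), alignof(Match));
  auto *storage = static_cast<Match *>(mem);
  for (size_t i = 0; i < tmp.size(); ++i)
    new (storage + i) Match(tmp[i]); // placement new into the arena
  return {storage, tmp.size()};
}

int main() {
  std::pmr::monotonic_buffer_resource arena;
  auto [data, size] = makeMatches(arena, {{1}, {2}, {3}});
  for (size_t i = 0; i < size; ++i)
    std::cout << data[i].depth << ' '; // prints: 1 2 3
  std::cout << '\n';
} // the arena releases everything at once; Match is trivially destructible
```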
* Allow operations to hold a blocklist and add support for parsing/printing a block list for verbose printing.  [River Riddle | 2019-03-29 | 1 file | -0/+3]
    PiperOrigin-RevId: 230951462
* Migrate VectorOrTensorType/MemRefType shape API to use int64_t instead of int.  [River Riddle | 2019-03-29 | 1 file | -2/+2]
    PiperOrigin-RevId: 230605756
* Uniformize composition of AffineApplyOp by construction  [Nicolas Vasilache | 2019-03-29 | 1 file | -2/+2]
    This CL is the 5th on the path to simplifying AffineMap composition. It removes the distinction between a normalized single-result AffineMap and a more general composed multi-result map.
    One nice byproduct of making the implementation driven by single-result maps is that the multi-result extension is a trivial change: the implementation is still single-result and we just use:
    ```
    unsigned idx = getIndexOf(...);
    map.getResult(idx);
    ```
    This CL also fixes an AffineNormalizer implementation issue related to symbols. Namely, it stops performing substitutions on symbols in AffineNormalizer and instead concatenates them all, to be consistent with the call to `AffineMap::compose(AffineMap)`. That call to `compose` cannot simplify symbols coming from different maps based on positions only: dims are applied and renumbered, but symbols must be concatenated. The only way to determine whether symbols from different AffineApply ops are the same is to look at the concrete values. canonicalizeMapAndOperands is thus extended to support replacing operands that appear multiple times.
    Lastly, this CL demonstrates that the implementation is correct by rewriting ComposeAffineMaps using only `makeComposedAffineApply`. The implementation uses a matcher because AffineApplyOps are introduced as composed operations on the fly instead of iteratively forward-substituting; a walker would revisit freshly introduced AffineApplyOps. Regardless, ComposeAffineMaps is scheduled to disappear; this CL replaces the implementation based on iterative `forwardSubstitute` with a composed-by-construction `makeComposedAffineApply`. Remaining calls to `forwardSubstitute` will be removed in the next CL.
    PiperOrigin-RevId: 228830443
* [MLIR] Make SuperVectorization use normalized AffineApplyOp  [Nicolas Vasilache | 2019-03-29 | 1 file | -8/+20]
    Supervectorization does not plan on handling multi-result AffineMaps and non-canonical chains of > 1 AffineApplyOp. This CL uses the simpler single-result unbounded AffineApplyOp in the MaterializeVectors pass.
    PiperOrigin-RevId: 228469085
* Introduce AffineMap::compose(AffineMap)  [Nicolas Vasilache | 2019-03-29 | 1 file | -1/+1]
    This CL is the 2nd on the path to simplifying AffineMap composition. It uses the now accepted `AffineExpr::compose(AffineMap)` to implement `AffineMap::compose(AffineMap)`.
    Implications of keeping the simplification function in Analysis are documented where relevant.
    PiperOrigin-RevId: 228276646
* [MLIR] More graceful failure in MaterializeVectors  [Nicolas Vasilache | 2019-03-29 | 1 file | -3/+6]
    Even though it is unexpected except in pathological cases, a nullptr clone may be returned. This CL handles the nullptr return gracefully.
    PiperOrigin-RevId: 227764615
* [MLIR] Drop strict super-vector requirement in MaterializeVector  [Nicolas Vasilache | 2019-03-29 | 1 file | -1/+1]
    The strict requirement (i.e. at least 2 HW vectors in a super-vector) was a premature optimization to avoid interfering with other vector code potentially introduced via other means.
    This CL avoids this premature optimization and the spurious errors it causes when super-vector size == HW vector size (which is a possible corner case). This may be revisited in the future.
    PiperOrigin-RevId: 227763966
* [MLIR] Handle corner case in MaterializeVectors  [Nicolas Vasilache | 2019-03-29 | 1 file | -4/+11]
    This corner case was found when stress testing with a functional end-to-end CPU path. In the case where the hardware vector size is 1x...x1, the `keep` vector is empty and would result in a crash.
    While there is no reason to expect a 1x...x1 HW vector in practice, this case can just gracefully degrade to scalar, which is what this CL allows.
    PiperOrigin-RevId: 227761097
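A standalone sketch of the corner case above, assuming a helper that collects the hardware-vector dimensions worth keeping: when the hardware shape is all ones, nothing is kept and the materialization should fall back to a scalar type. The names are hypothetical, not the pass's actual helpers.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Keep only the dimensions that remain vectorized after materialization;
// an all-ones hardware shape keeps nothing.
std::vector<int64_t> keptHwDims(const std::vector<int64_t> &hwVectorShape) {
  std::vector<int64_t> keep;
  for (int64_t d : hwVectorShape)
    if (d != 1)
      keep.push_back(d);
  return keep;
}

int main() {
  std::vector<int64_t> hw1 = {1, 1, 1}, hw2 = {1, 8};
  std::cout << (keptHwDims(hw1).empty() ? "degrade to scalar f32\n"
                                        : "emit hardware vector\n");
  std::cout << (keptHwDims(hw2).empty() ? "degrade to scalar f32\n"
                                        : "emit hardware vector\n");
}
```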
* Update and generalize various passes to work on both CFG and ML functions, simplifying them in minor ways.  [Chris Lattner | 2019-03-29 | 1 file | -2/+6]
    The only significant cleanup here is the constant folding pass. All the other changes are simple and easy, but this is still enough to shrink the compiler by 45 LOC.
    The one pass left to merge is the CSE pass, which will be more involved, so I'm splitting it out to its own patch (which I'll tackle right after this).
    This is step 28/n towards merging instructions and statements.
    PiperOrigin-RevId: 227328115
* Introduce PostDominanceInfo, fix properlyDominates() for Instructions  [Uday Bondhugula | 2019-03-29 | 1 file | -2/+5]
    - introduce PostDominanceInfo in the right/complete way and use that for the post dominance check in store-load forwarding
    - replace all uses of Analysis/Utils::dominates/properlyDominates with DominanceInfo::dominates/properlyDominates
    - drop all redundant copies of dominance methods in Analysis/Utils/
    - in pipeline-data-transfer, replace dominates call with a much less expensive check; similarly, substitute dominates() in checkMemRefAccessDependence with a simpler check suitable for that context
    - fix a bug in properlyDominates
    - improve doc for 'for' instruction 'body'
    PiperOrigin-RevId: 227320507
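As background for the properlyDominates fix, proper dominance on an already-built dominator tree reduces to a strict-ancestor walk, and the same check on a post-dominator tree gives post-dominance. A minimal sketch with a hand-built tree, not the DominanceInfo implementation:

```cpp
#include <iostream>
#include <map>
#include <string>

// idom maps each node to its immediate dominator; the root maps to itself.
using DomTree = std::map<std::string, std::string>;

// a properly dominates b iff a != b and a is an ancestor of b in the dominator tree.
bool properlyDominates(const DomTree &idom, const std::string &a, const std::string &b) {
  std::string cur = b;
  while (idom.at(cur) != cur) { // walk up until the root
    cur = idom.at(cur);
    if (cur == a)
      return true;
  }
  return false;
}

int main() {
  // entry dominates bb1 dominates bb2.
  DomTree idom = {{"entry", "entry"}, {"bb1", "entry"}, {"bb2", "bb1"}};
  std::cout << properlyDominates(idom, "entry", "bb2") << '\n'; // 1
  std::cout << properlyDominates(idom, "bb2", "bb1") << '\n';   // 0
}
```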
* Standardize naming of statements -> instructions, revisiting the code base to be consistent and moving the using declarations over.  [Chris Lattner | 2019-03-29 | 1 file | -52/+52]
    Hopefully this is the last truly massive patch in this refactoring.
    This is step 21/n towards merging instructions and statements, NFC.
    PiperOrigin-RevId: 227178245
* Eliminate the using decls for MLFunction and CFGFunction, standardizing on Function.  [Chris Lattner | 2019-03-29 | 1 file | -5/+5]
    This is step 18/n towards merging instructions and statements, NFC.
    PiperOrigin-RevId: 227139399
* Rename BBArgument -> BlockArgument, Op::getOperation -> Op::getInst(), StmtResult -> InstResult, StmtOperand -> InstOperand, and remove the old names.  [Chris Lattner | 2019-03-29 | 1 file | -4/+4]
    This is step 17/n towards merging instructions and statements, NFC.
    PiperOrigin-RevId: 227121537
* Merge Operation into OperationInst and standardize nomenclature around OperationInst.  [Chris Lattner | 2019-03-29 | 1 file | -18/+17]
    This is a big mechanical patch.
    This is step 16/n towards merging instructions and statements, NFC.
    PiperOrigin-RevId: 227093712
* Merge CFGFuncBuilder/MLFuncBuilder/FuncBuilder together into a single new FuncBuilder class. Also rename SSAValue.cpp to Value.cpp.  [Chris Lattner | 2019-03-29 | 1 file | -8/+8]
    This is step 12/n towards merging instructions and statements, NFC.
    PiperOrigin-RevId: 227067644
* Merge SSAValue, CFGValue, and MLValue together into a single Value class, which is the new base of the SSA value hierarchy.  [Chris Lattner | 2019-03-29 | 1 file | -29/+26]
    This CL also standardizes all the nomenclature and comments to use 'Value' where appropriate. This also eliminates a large number of cast<MLValue>(x)'s, which is very soothing.
    This is step 11/n towards merging instructions and statements, NFC.
    PiperOrigin-RevId: 227064624
* Per review on the previous CL, drop MLFuncBuilder::createOperation, changing clients to use OperationState instead.  [Chris Lattner | 2019-03-29 | 1 file | -5/+8]
    This makes MLFuncBuilder more similar to CFGFuncBuilder. This whole area will get tidied up more when the cfg and ml worlds get unified. This patch is just gardening, NFC.
    PiperOrigin-RevId: 226701959
* Extract vector_transfer_* Ops into a SuperVectorDialect.  [Alex Zinenko | 2019-03-29 | 1 file | -0/+1]
    From the beginning, the vector_transfer_read and vector_transfer_write operations were intended as a mid-level vectorization abstraction. In particular, they are lowered to the StandardOps dialect before further processing. As such, it does not make sense to keep them at the same level as StandardOps.
    Introduce the new SuperVectorOps dialect and move the vector_transfer_* operations there. This will be used as a testbed for the generic lowering/legalization pass.
    PiperOrigin-RevId: 225554492
* [MLIR] Fix the name of the MaterializeVectorPass  [Nicolas Vasilache | 2019-03-29 | 1 file | -10/+8]
    PiperOrigin-RevId: 224536381
* Return bool from all emitError methods similar to Operation::emitOpError  [Smit Hinsu | 2019-03-29 | 1 file | -16/+8]
    This simplifies call-sites returning true after emitting an error. After the conversion, dropped braces around single statement blocks as that seems more common.
    Also, switched to the emitError method instead of emitting the Error kind using the emitDiagnostic method.
    TESTED with existing unit tests
    PiperOrigin-RevId: 224527868
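A small C++ sketch of the call-site simplification described above, assuming a hypothetical emitError that reports the message and returns true to signal failure (mirroring the convention the commit mentions); none of these names are the actual MLIR API.

```cpp
#include <iostream>
#include <string>

// Hypothetical diagnostic helper: reports the message and returns true,
// matching the "true means failure" convention of the call sites.
static bool emitError(const std::string &message) {
  std::cerr << "error: " << message << '\n';
  return true;
}

// Before the change a call site needed two statements:
//   if (shape.empty()) { emitError("empty shape"); return true; }
// After it, the error emission and the failure return collapse into one line.
static bool verifyShape(const std::string &shape) {
  if (shape.empty())
    return emitError("empty shape");
  return false; // success
}

int main() {
  std::cout << verifyShape("") << ' ' << verifyShape("8x8xf32") << '\n'; // 1 0
}
```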
* [MLIR] Error handling in MaterializeVectors  [Nicolas Vasilache | 2019-03-29 | 1 file | -16/+41]
    This removes assertions as a means to capture NYI behavior and propagates errors up.
    PiperOrigin-RevId: 224376935
* [MLIR] Add AffineMap composition and use it in Materialization  [Nicolas Vasilache | 2019-03-29 | 1 file | -49/+54]
    This CL adds the following free functions:
    ```
    /// Returns the AffineExpr e o m.
    AffineExpr compose(AffineExpr e, AffineMap m);
    /// Returns the AffineExpr f o g.
    AffineMap compose(AffineMap f, AffineMap g);
    ```
    This addresses the issue that AffineMap composition is only available at a distance via AffineValueMap and is thus unusable on Attributes. This CL thus implements AffineMap composition in a more modular and composable way.
    This CL does not claim that it can be a good replacement for the implementation in AffineValueMap, in particular it does not support bounded maps atm.
    Standalone tests are added that replicate some of the logic of the AffineMap composition pass.
    Lastly, affine map composition is used properly inside MaterializeVectors and a standalone test is added that requires permutation_map composition with a projection map.
    PiperOrigin-RevId: 224376870
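To make the f o g notation above concrete: composing symbol-free affine maps is just substitution of g's results into f's dimensions. Below is a dense-coefficient sketch under that simplifying assumption; it is not the AffineMap implementation, and the representation is invented for illustration.

```cpp
#include <cassert>
#include <cstdint>
#include <iostream>
#include <vector>

// An affine map on n dims stored row-wise: row i holds n dim coefficients
// followed by one constant term, i.e. result_i = sum_j row[i][j]*d_j + row[i][n].
using AffineMapCoeffs = std::vector<std::vector<int64_t>>;

// Returns f o g, i.e. the map x -> f(g(x)); f must consume exactly as many
// dimensions as g produces results.
AffineMapCoeffs compose(const AffineMapCoeffs &f, const AffineMapCoeffs &g,
                        size_t numDimsOfG) {
  size_t k = g.size(); // results of g == dims consumed by f
  AffineMapCoeffs result(f.size(), std::vector<int64_t>(numDimsOfG + 1, 0));
  for (size_t i = 0; i < f.size(); ++i) {
    assert(f[i].size() == k + 1 && "f must take g's results as its dims");
    result[i][numDimsOfG] = f[i][k]; // start from f's constant term
    for (size_t j = 0; j < k; ++j) {
      assert(g[j].size() == numDimsOfG + 1);
      for (size_t d = 0; d <= numDimsOfG; ++d)
        result[i][d] += f[i][j] * g[j][d]; // substitute g_j into f's j-th dim
    }
  }
  return result;
}

int main() {
  // f(x, y) = x + y and g(d0, d1) = (d0, d1 + 8)  =>  (f o g)(d0, d1) = d0 + d1 + 8.
  AffineMapCoeffs f = {{1, 1, 0}};
  AffineMapCoeffs g = {{1, 0, 0}, {0, 1, 8}};
  for (int64_t c : compose(f, g, 2)[0])
    std::cout << c << ' '; // prints: 1 1 8
  std::cout << '\n';
}
```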
* [MLIR] Add support for permutation_map  [Nicolas Vasilache | 2019-03-29 | 1 file | -3/+121]
    This CL hooks up and uses permutation_map in vector_transfer ops. In particular, when going into the nuts and bolts of the implementation, it became clear that cases arose that required supporting broadcast semantics. Broadcast semantics are thus added to the general permutation_map. The verify methods and tests are updated accordingly.
    Examples of interest include:
    Example 1: the following MLIR snippet:
    ```mlir
    for %i3 = 0 to %M {
      for %i4 = 0 to %N {
        for %i5 = 0 to %P {
          %a5 = load %A[%i4, %i5, %i3] : memref<?x?x?xf32>
    }}}
    ```
    may vectorize with {permutation_map: (d0, d1, d2) -> (d2, d1)} into:
    ```mlir
    for %i3 = 0 to %0 step 32 {
      for %i4 = 0 to %1 {
        for %i5 = 0 to %2 step 256 {
          %4 = vector_transfer_read %arg0, %i4, %i5, %i3
               {permutation_map: (d0, d1, d2) -> (d2, d1)} :
               (memref<?x?x?xf32>, index, index) -> vector<32x256xf32>
    }}}
    ```
    Meaning that vector_transfer_read will be responsible for reading the 2-D slice `%arg0[%i4, %i5:%i5+256, %i3:%i3+32]` into vector<32x256xf32>. This will require a transposition when vector_transfer_read is further lowered.
    Example 2: the following MLIR snippet:
    ```mlir
    %cst0 = constant 0 : index
    for %i0 = 0 to %M {
      %a0 = load %A[%cst0, %cst0] : memref<?x?xf32>
    }
    ```
    may vectorize with {permutation_map: (d0) -> (0)} into:
    ```mlir
    for %i0 = 0 to %0 step 128 {
      %3 = vector_transfer_read %arg0, %c0_0, %c0_0
           {permutation_map: (d0, d1) -> (0)} :
           (memref<?x?xf32>, index, index) -> vector<128xf32>
    }
    ```
    Meaning that vector_transfer_read will be responsible for reading the 0-D slice `%arg0[%c0, %c0]` into vector<128xf32>. This will require a 1-D vector broadcast when vector_transfer_read is further lowered.
    Additionally, some minor cleanups and refactorings are performed.
    One notable thing missing here is the composition with a projection map during materialization. This is because I could not find an AffineMap composition that operates on AffineMap directly: everything related to composition seems to require going through SSAValue and only operates on AffineMap at a distance via AffineValueMap. I have raised this concern a bunch of times already; the followup CL will actually do something about it.
    In the meantime, the projection is hacked at a minimum to pass verification, and materialization tests are temporarily incorrect.
    PiperOrigin-RevId: 224376828
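A small illustration of how a permutation_map with broadcast semantics can be read: each vector dimension either follows one memref/loop dimension or is broadcast from a single element (the constant-0 result). The encoding below is purely illustrative, not how MLIR stores the map.

```cpp
#include <iostream>
#include <optional>
#include <vector>

// Each result of the permutation map is either a memref dimension index
// (d0, d1, ...) or std::nullopt standing for the broadcast result `0`.
using PermutationMap = std::vector<std::optional<int>>;

void describe(const char *name, const PermutationMap &map) {
  std::cout << name << ":\n";
  for (size_t v = 0; v < map.size(); ++v) {
    if (map[v])
      std::cout << "  vector dim " << v << " walks memref dim d" << *map[v] << '\n';
    else
      std::cout << "  vector dim " << v << " is broadcast from one scalar element\n";
  }
}

int main() {
  // Example 1 above: (d0, d1, d2) -> (d2, d1) for a vector<32x256xf32> read.
  describe("(d0, d1, d2) -> (d2, d1)", {2, 1});
  // Example 2 above: (d0, d1) -> (0) for a vector<128xf32> read.
  describe("(d0, d1) -> (0)", {std::nullopt});
}
```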
* OpPointer: replace conversion operator to Operation* to OpType*.  [Alex Zinenko | 2019-03-29 | 1 file | -2/+2]
    The implementation of OpPointer<OpType> provides an implicit conversion to Operation *, but not to the underlying OpType *. This has led to awkward-looking code when an OpPointer needs to be passed to a function accepting an OpType *. For example,
        if (auto someOp = genericOp.dyn_cast<OpType>())
          someFunction(&*someOp);
    where "&*" makes it harder to read. Arguably, one does not want to spell out OpPointer<OpType> in the line with dyn_cast. More generally, OpPointer is now being used as an owning pointer to OpType rather than to Operation.
    Replace the implicit conversion to Operation* with a conversion to OpType*, taking into account const-ness of the type. An Operation* can be obtained from an OpType with a simple call. Since an instance of OpPointer owns the OpType value, the pointer to it is never null. However, the OpType value may not be associated with any Operation*. In this case, return nullptr when conversion is attempted, to maintain consistency with the existing null checks.
    PiperOrigin-RevId: 224368103
* [MLIR] Add VectorTransferOps  [Nicolas Vasilache | 2019-03-29 | 1 file | -114/+131]
    This CL implements and uses VectorTransferOps in lieu of the former custom call op. Tests are updated accordingly.
    VectorTransferOps come in 2 flavors: VectorTransferReadOp and VectorTransferWriteOp.
    VectorTransferOps can be thought of as a backend-independent pseudo op/library call that needs to be legalized to MLIR (whiteboxed) before it can be lowered to backend-dependent IR.
    Note that the current implementation does not yet support a real permutation map. Proper support will come in a followup CL.

    VectorTransferReadOp
    ====================
    VectorTransferReadOp performs a blocking read from a scalar memref location into a super-vector of the same elemental type. This operation is called 'read' by opposition to 'load' because the super-vector granularity is generally not representable with a single hardware register. As a consequence, memory transfers will generally be required when lowering VectorTransferReadOp. A VectorTransferReadOp is thus a mid-level abstraction that supports super-vectorization with non-effecting padding for full-tile only code.
    A vector transfer read has semantics similar to a vector load, with additional support for:
    1. an optional value of the elemental type of the MemRef. This value supports non-effecting padding and is inserted in places where the vector read exceeds the MemRef bounds. If the value is not specified, the access is statically guaranteed to be within bounds;
    2. an attribute of type AffineMap to specify a slice of the original MemRef access and its transposition into the super-vector shape. The permutation_map is an unbounded AffineMap that must represent a permutation from the MemRef dim space projected onto the vector dim space.
    Example:
    ```mlir
    %A = alloc(%size1, %size2, %size3, %size4) : memref<?x?x?x?xf32>
    ...
    %val = `ssa-value` : f32
    // let %i, %j, %k, %l be ssa-values of type index
    %v0 = vector_transfer_read %src, %i, %j, %k, %l
          {permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
          (memref<?x?x?x?xf32>, index, index, index, index) -> vector<16x32x64xf32>
    %v1 = vector_transfer_read %src, %i, %j, %k, %l, %val
          {permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
          (memref<?x?x?x?xf32>, index, index, index, index, f32) -> vector<16x32x64xf32>
    ```

    VectorTransferWriteOp
    =====================
    VectorTransferWriteOp performs a blocking write from a super-vector to a scalar memref of the same elemental type. This operation is called 'write' by opposition to 'store' because the super-vector granularity is generally not representable with a single hardware register. As a consequence, memory transfers will generally be required when lowering VectorTransferWriteOp. A VectorTransferWriteOp is thus a mid-level abstraction that supports super-vectorization with non-effecting padding for full-tile only code.
    A vector transfer write has semantics similar to a vector store, with additional support for handling out-of-bounds situations.
    Example:
    ```mlir
    %A = alloc(%size1, %size2, %size3, %size4) : memref<?x?x?x?xf32>
    %val = `ssa-value` : vector<16x32x64xf32>
    // let %i, %j, %k, %l be ssa-values of type index
    vector_transfer_write %val, %src, %i, %j, %k, %l
      {permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
      (vector<16x32x64xf32>, memref<?x?x?x?xf32>, index, index, index, index)
    ```
    PiperOrigin-RevId: 223873234
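The "read with non-effecting padding" semantics described above can be pictured with a plain 2-D tile copy: in-bounds elements come from the source buffer, out-of-bounds positions take the padding value. This is a conceptual model only, not the lowering MLIR emits.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Read a tileRows x tileCols tile of `src` (rows x cols) starting at (r0, c0),
// padding every out-of-bounds position with `pad` -- the scalar analogue of a
// vector_transfer_read carrying an explicit padding value.
std::vector<float> readTileWithPadding(const std::vector<float> &src, int64_t rows,
                                       int64_t cols, int64_t r0, int64_t c0,
                                       int64_t tileRows, int64_t tileCols, float pad) {
  std::vector<float> tile(tileRows * tileCols, pad);
  for (int64_t r = 0; r < tileRows; ++r)
    for (int64_t c = 0; c < tileCols; ++c)
      if (r0 + r < rows && c0 + c < cols)
        tile[r * tileCols + c] = src[(r0 + r) * cols + (c0 + c)];
  return tile;
}

int main() {
  std::vector<float> src = {1, 2, 3, 4, 5, 6};                  // a 2x3 buffer
  auto tile = readTileWithPadding(src, 2, 3, 1, 2, 2, 2, -1.f); // exceeds both bounds
  for (float v : tile)
    std::cout << v << ' '; // prints: 6 -1 -1 -1
  std::cout << '\n';
}
```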
* [MLIR] Fix opt build  [Nicolas Vasilache | 2019-03-29 | 1 file | -0/+1]
    PiperOrigin-RevId: 222491353
* [MLIR][MaterializeVectors] Add a MaterializeVector pass via unrolling.  [Nicolas Vasilache | 2019-03-29 | 1 file | -0/+595]
    This CL adds an MLIR-MLIR pass which materializes super-vectors to hardware-dependent sized vectors.
    While the physical vector size is target-dependent, the pass is written in a target-independent way: the target vector size is specified as a parameter to the pass. This pass is thus a partial lowering that opens the "greybox" that is the super-vector abstraction.
    This first CL adds a first materialization pass that iterates over vector_transfer_write operations and:
    1. computes the program slice including the current vector_transfer_write;
    2. computes the multi-dimensional ratio of super-vector shape to hardware vector shape;
    3. for each possible multi-dimensional value within the bounds of ratio, a new slice is instantiated (i.e. cloned and rewritten) so that all operations in this instance operate on the hardware vector type.
    As a simple example, given:
    ```mlir
    mlfunc @vector_add_2d(%M : index, %N : index) -> memref<?x?xf32> {
      %A = alloc (%M, %N) : memref<?x?xf32>
      %B = alloc (%M, %N) : memref<?x?xf32>
      %C = alloc (%M, %N) : memref<?x?xf32>
      for %i0 = 0 to %M {
        for %i1 = 0 to %N {
          %a1 = load %A[%i0, %i1] : memref<?x?xf32>
          %b1 = load %B[%i0, %i1] : memref<?x?xf32>
          %s1 = addf %a1, %b1 : f32
          store %s1, %C[%i0, %i1] : memref<?x?xf32>
        }
      }
      return %C : memref<?x?xf32>
    }
    ```
    and the following options:
    ```
    -vectorize -virtual-vector-size 32 --test-fastest-varying=0 -materialize-vectors -vector-size=8
    ```
    materialization emits:
    ```mlir
    #map0 = (d0, d1) -> (d0, d1)
    #map1 = (d0, d1) -> (d0, d1 + 8)
    #map2 = (d0, d1) -> (d0, d1 + 16)
    #map3 = (d0, d1) -> (d0, d1 + 24)
    mlfunc @vector_add_2d(%arg0 : index, %arg1 : index) -> memref<?x?xf32> {
      %0 = alloc(%arg0, %arg1) : memref<?x?xf32>
      %1 = alloc(%arg0, %arg1) : memref<?x?xf32>
      %2 = alloc(%arg0, %arg1) : memref<?x?xf32>
      for %i0 = 0 to %arg0 {
        for %i1 = 0 to %arg1 step 32 {
          %3 = affine_apply #map0(%i0, %i1)
          %4 = "vector_transfer_read"(%0, %3#0, %3#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
          %5 = affine_apply #map1(%i0, %i1)
          %6 = "vector_transfer_read"(%0, %5#0, %5#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
          %7 = affine_apply #map2(%i0, %i1)
          %8 = "vector_transfer_read"(%0, %7#0, %7#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
          %9 = affine_apply #map3(%i0, %i1)
          %10 = "vector_transfer_read"(%0, %9#0, %9#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
          %11 = affine_apply #map0(%i0, %i1)
          %12 = "vector_transfer_read"(%1, %11#0, %11#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
          %13 = affine_apply #map1(%i0, %i1)
          %14 = "vector_transfer_read"(%1, %13#0, %13#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
          %15 = affine_apply #map2(%i0, %i1)
          %16 = "vector_transfer_read"(%1, %15#0, %15#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
          %17 = affine_apply #map3(%i0, %i1)
          %18 = "vector_transfer_read"(%1, %17#0, %17#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
          %19 = addf %4, %12 : vector<8xf32>
          %20 = addf %6, %14 : vector<8xf32>
          %21 = addf %8, %16 : vector<8xf32>
          %22 = addf %10, %18 : vector<8xf32>
          %23 = affine_apply #map0(%i0, %i1)
          "vector_transfer_write"(%19, %2, %23#0, %23#1) : (vector<8xf32>, memref<?x?xf32>, index, index) -> ()
          %24 = affine_apply #map1(%i0, %i1)
          "vector_transfer_write"(%20, %2, %24#0, %24#1) : (vector<8xf32>, memref<?x?xf32>, index, index) -> ()
          %25 = affine_apply #map2(%i0, %i1)
          "vector_transfer_write"(%21, %2, %25#0, %25#1) : (vector<8xf32>, memref<?x?xf32>, index, index) -> ()
          %26 = affine_apply #map3(%i0, %i1)
          "vector_transfer_write"(%22, %2, %26#0, %26#1) : (vector<8xf32>, memref<?x?xf32>, index, index) -> ()
        }
      }
      return %2 : memref<?x?xf32>
    }
    ```
    PiperOrigin-RevId: 222455351
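Steps 2 and 3 of the pass description above boil down to a per-dimension ratio of super-vector shape to hardware-vector shape and an enumeration of every offset within that ratio. A self-contained sketch with hypothetical helper names, not the pass's actual code:

```cpp
#include <cassert>
#include <cstdint>
#include <iostream>
#include <vector>

// Per-dimension ratio of the super-vector shape to the hardware vector shape,
// e.g. {32} over {8} -> {4}: four hardware-vector instances per super-vector.
std::vector<int64_t> shapeRatio(const std::vector<int64_t> &superShape,
                                const std::vector<int64_t> &hwShape) {
  assert(superShape.size() == hwShape.size() && "shapes must have equal rank");
  std::vector<int64_t> ratio;
  for (size_t i = 0; i < superShape.size(); ++i) {
    assert(superShape[i] % hwShape[i] == 0 && "super-vector must be a multiple");
    ratio.push_back(superShape[i] / hwShape[i]);
  }
  return ratio;
}

// Enumerate every multi-dimensional offset within `ratio` in row-major order;
// each offset names one instantiation of the cloned slice.
std::vector<std::vector<int64_t>> enumerateOffsets(const std::vector<int64_t> &ratio) {
  std::vector<std::vector<int64_t>> offsets(1, std::vector<int64_t>{});
  for (int64_t extent : ratio) {
    std::vector<std::vector<int64_t>> next;
    for (const auto &prefix : offsets)
      for (int64_t v = 0; v < extent; ++v) {
        auto offset = prefix;
        offset.push_back(v);
        next.push_back(offset);
      }
    offsets = std::move(next);
  }
  return offsets;
}

int main() {
  // -virtual-vector-size 32 with -vector-size=8 gives 4 instances per super-vector,
  // matching the four #map offsets (0, 8, 16, 24) in the example above.
  auto ratio = shapeRatio({32}, {8});
  for (const auto &offset : enumerateOffsets(ratio))
    std::cout << "instance at element offset " << offset[0] * 8 << '\n';
}
```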