path: root/mlir/test/Conversion
* [mlir] Change the syntax of AffineMapAttr and IntegerSetAttr to avoid conflicts with function types (River Riddle, 2020-01-13; 4 files, -33/+33)

  Summary: The current syntax for AffineMapAttr and IntegerSetAttr conflicts with function types, making it currently impossible to round-trip function types (and e.g. FuncOp) in the IR. This revision changes the syntax for the attributes by wrapping them in a keyword. AffineMapAttr is wrapped with `affine_map<>` and IntegerSetAttr is wrapped with `affine_set<>`.

  Reviewed By: nicolasvasilache, ftynse
  Differential Revision: https://reviews.llvm.org/D72429
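  A minimal before/after sketch of the syntax change (the particular map and set are illustrative):

    // Before: a bare map was ambiguous with a function type.
    #map0 = (d0, d1) -> (d0 + d1, d1)
    // After: the attributes are wrapped in a keyword.
    #map0 = affine_map<(d0, d1) -> (d0 + d1, d1)>
    #set0 = affine_set<(d0) : (d0 - 10 >= 0)>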
* [mlir] Added missing GPU lowering ops (Julian Gross, 2020-01-13; 2 files, -0/+90)

  Summary: This diff adds missing GPU lowering ops to MLIR.

  Reviewers: herhut, pifon2a, ftynse
  Tags: #pre-merge_beta_testing, #llvm
  Differential Revision: https://reviews.llvm.org/D72439
* [mlir][VectorOps] Implement insert_strided_slice conversion (Nicolas Vasilache, 2020-01-09; 1 file, -1/+41)

  Summary: This diff implements the progressive lowering of insert_strided_slice. Two cases appear:
  1. When the source and dest vectors have different ranks, extract the dest subvector at the proper offset and reduce to case 2.
  2. When they have the same rank N:
     a. if the source and dest type are the same, the insertion is trivial: just forward the source;
     b. otherwise, iterate over all N-1 D subvectors and create an extract/insert_strided_slice/insert replacement, reducing the problem to vectors of the same N-1 rank.
  This combines properly with the other conversion patterns to lower all the way to LLVM.

  Reviewers: ftynse, rriddle, AlexEichenberger, andydavis1, tetuante, nicolasvasilache
  Reviewed By: andydavis1
  Subscribers: merge_guards_bot, mehdi_amini, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D72317
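  A hedged sketch of the op this pattern lowers (shapes, offsets, and strides are illustrative; the syntax follows the VectorOps dialect of this period):

    %0 = vector.insert_strided_slice %src, %dst
        {offsets = [0, 2], strides = [1, 1]}
        : vector<2x4xf32> into vector<4x8xf32>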
* [mlir][VectorOps] Implement strided_slice conversion (Nicolas Vasilache, 2020-01-09; 1 file, -0/+61)

  Summary: This diff implements the progressive lowering of strided_slice to either:
  1. extractelement + insertelement for the 1-D case;
  2. extract + optional strided_slice + insert for the n-D case.
  This combines properly with the other conversion patterns to lower all the way to LLVM. Appropriate tests are added.

  Reviewers: ftynse, rriddle, AlexEichenberger, andydavis1, tetuante
  Reviewed By: andydavis1
  Subscribers: merge_guards_bot, mehdi_amini, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D72310
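  For reference, a hedged sketch of a 1-D strided_slice as spelled in this period (attribute values are illustrative):

    %1 = vector.strided_slice %0
        {offsets = [2], sizes = [2], strides = [1]}
        : vector<4xf32> to vector<2xf32>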
* [mlir][spirv] Add lowering for std.fpext, std.fptrunc, std.sitofp. (Denis Khalikov, 2020-01-07; 1 file, -0/+33)

  Differential Revision: https://reviews.llvm.org/D72137
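  A hedged sketch of the conversions this adds, assuming the usual SPIR-V counterparts (spv.FConvert, spv.ConvertSToF):

    %0 = fpext %a : f32 to f64      // -> %0 = spv.FConvert %a : f32 to f64
    %1 = fptrunc %b : f64 to f32    // -> %1 = spv.FConvert %b : f64 to f32
    %2 = sitofp %c : i32 to f32     // -> %2 = spv.ConvertSToF %c : i32 to f32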
* Revert "[mlir][spirv] Add lowering for std.fpext, std.fptrunc, std.sitofp."Lei Zhang2020-01-071-33/+0
| | | | | This reverts commit 7e7f849a6d94f77f1a29630419acb7226051f4b6 because it recorded the wrong commit author.
* [mlir][spirv] Add lowering for std cmp ops. (Denis Khalikov, 2020-01-07; 1 file, -0/+41)

  Differential Revision: https://reviews.llvm.org/D72296
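  A hedged sketch of the kind of comparison lowering this adds (the predicates and the exact target ops are illustrative):

    %0 = cmpi "slt", %a, %b : i32    // -> %0 = spv.SLessThan %a, %b : i32
    %1 = cmpf "olt", %x, %y : f32    // -> %1 = spv.FOrdLessThan %x, %y : f32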
* [mlir][spirv] Add lowering for standard bit ops (Denis Khalikov, 2020-01-07; 1 file, -18/+48)

  Differential Revision: https://reviews.llvm.org/D72205
* [mlir][spirv] Add lowering for std.fpext, std.fptrunc, std.sitofp. (Lei Zhang, 2020-01-07; 1 file, -0/+33)

  Differential Revision: https://reviews.llvm.org/D72137
* [mlir][spirv] Fix shader ABI attribute prefix and add verification (Lei Zhang, 2020-01-03; 2 files, -10/+10)

  This commit fixes shader ABI attributes to use `spv.` as the prefix so that they match the dialect's namespace. This enables us to add verification hooks in the SPIR-V dialect to verify them.

  Reviewed By: mravishankar
  Differential Revision: https://reviews.llvm.org/D72062
* [mlir] Fix the wrong computation of dynamic strides for lowering AllocOp to LLVM (Tung Le Duc, 2019-12-28; 1 file, -3/+3)

  Leftover change from before the MLIR merge; reviewed and accepted at https://github.com/tensorflow/mlir/pull/338.
* [mlir] Convert std.and/std.or ops to spv.LogicalAnd/spv.LogicalOr (MaheshRavishankar, 2019-12-27; 1 file, -0/+40)

  The conversion from std.and/std.or to spv.LogicalAnd/spv.LogicalOr is only valid for boolean (i1) types. Modify BinaryOpPattern in StandardToSPIRV.td to allow limiting the type of the operands for which the pattern is applied.

  Differential Revision: https://reviews.llvm.org/D71881
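  A minimal sketch of the restriction (op spellings follow the standard dialect of this period):

    %0 = and %a, %b : i1    // -> %0 = spv.LogicalAnd %a, %b : i1
    %1 = or  %c, %d : i1    // -> %1 = spv.LogicalOr %c, %d : i1
    // and %e, %f : i32 is not matched by this pattern (non-boolean operands).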
* Add integer bit-shift operations to the standard dialect. (Manuel Freiberger, 2019-12-22; 6 files, -26/+38)

  Rename the 'shlis' operation in the standard dialect to 'shift_left'. Add tests for this operation (these have been missing so far) and add a lowering to the 'shl' operation in the LLVM dialect. Also add 'shift_right_signed' (lowered to LLVM's 'ashr') and 'shift_right_unsigned' (lowered to 'lshr').

  The original plan was to name these operations 'shift.left', 'shift.right.signed' and 'shift.right.unsigned'. This works if the operations are prefixed with 'std.' in MLIR assembly. Unfortunately, during import the short form is ambiguous with operations from a hypothetical 'shift' dialect. The best solution seems to be to omit dots in standard operations for now.

  Closes tensorflow/mlir#226
  PiperOrigin-RevId: 286803388
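  The three ops by name, with the LLVM dialect lowerings stated above (operand names are illustrative):

    %0 = shift_left %a, %b : i32              // lowers to LLVM 'shl'
    %1 = shift_right_signed %a, %b : i32      // lowers to LLVM 'ashr'
    %2 = shift_right_unsigned %a, %b : i32    // lowers to LLVM 'lshr'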
* [VectorOps] unify vector dialect "subscripts" (Aart Bik, 2019-12-20; 1 file, -27/+27)

  PiperOrigin-RevId: 286650682
* Add gpu.shuffle op. (Christian Sigg, 2019-12-20; 1 file, -0/+25)

  This will allow us to lower most of gpu.all_reduce (when all_reduce doesn't exist in the target dialect) within the GPU dialect, and only do target-specific lowering for the shuffle op.

  PiperOrigin-RevId: 286548256
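  A hedged sketch of the op (the mode keyword and the value/validity result pair follow the op's general shape; exact syntax may differ):

    %shfl, %valid = gpu.shuffle %val, %offset, %width xor : f32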
* [VectorOps] minor cleanup: vector dialect "subscripts" are i32 (Aart Bik, 2019-12-19; 1 file, -8/+8)

  Introduces some centralized methods to move towards consistent use of i32 as vector subscripts. Note: sizes/strides/offsets attributes are still i64.

  PiperOrigin-RevId: 286434133
* [VectorOps] Add vector.print definition, with lowering support (Aart Bik, 2019-12-18; 1 file, -0/+38)

  Examples:

    vector.print %f : f32
    vector.print %x : vector<4xf32>
    vector.print %y : vector<3x4xf32>
    vector.print %z : vector<2x3x4xf32>

  LLVM lowering replaces these with fully unrolled calls into a small runtime support library that provides some basic printing operations (single value, opening/closing bracket, comma, newline).

  PiperOrigin-RevId: 286230325
* Introduce prefetch op: affine -> std -> llvm intrinsic (Uday Bondhugula, 2019-12-18; 1 file, -0/+30)

  Introduce affine.prefetch: an op to prefetch using a multi-dimensional subscript on a memref; similar to affine.load but with no effect on semantics, only on performance. Provide lowering through std.prefetch and llvm.prefetch, mapping to LLVM's prefetch intrinsic. All attributes are reflected through the lowering: locality hint, rw, and instr/data cache.

    affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>

  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#225
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
  PiperOrigin-RevId: 286212997
* Harden the requirements to memory attribution types in gpu.func (Alex Zinenko, 2019-12-18; 1 file, -5/+5)

  When memory attributions are present in `gpu.func`, require that they are of memref type and live in memory spaces 3 and 5 for workgroup and private memory attributions, respectively. Adapt the conversion from the GPU dialect to the NVVM dialect to drop the private memory space from attributions, as NVVM is able to model them as local `llvm.alloca`s in the default memory space.

  PiperOrigin-RevId: 286161763
* Plug gpu.func into the GPU lowering pipelines (Alex Zinenko, 2019-12-16; 6 files, -22/+23)

  This updates the lowering pipelines from the GPU dialect to lower-level dialects (NVVM, SPIR-V) to use the recently introduced gpu.func operation instead of a standard function annotated with an attribute. In particular, the kernel outlining is updated to produce gpu.func instead of std.func, and the individual conversions are updated to consume gpu.funcs and disallow standard funcs after legalization, if necessary. The attribute "gpu.kernel" is preserved in the generic syntax, but can also be used with the custom syntax on gpu.funcs. The special kind of function for GPU allows one to use additional features such as memory attribution.

  PiperOrigin-RevId: 285822272
* [VectorOps] Add [insert/extract]element definition together with lowering to LLVM (Aart Bik, 2019-12-16; 1 file, -0/+20)

  Similar to the insert/extract vector instructions, but (1) these work on 1-D vectors only and (2) they allow for a dynamic index:

    %c3 = constant 3 : index
    %0 = vector.insertelement %arg0, %arg1[%c3 : index] : vector<4xf32>
    %1 = vector.extractelement %arg0[%c3 : index] : vector<4xf32>

  PiperOrigin-RevId: 285792205
* Make memref promotion during std->LLVM lowering the default calling convention (Alex Zinenko, 2019-12-16; 1 file, -0/+8)

  During the conversion from the standard dialect to the LLVM dialect, memref-typed arguments are promoted from registers to memory and passed into functions by pointer. This had been introduced into the lowering to work around the absence of calling-convention modeling in MLIR, to enable better interoperability with LLVM IR generated from C, and has been exercised for several months. Make this promotion the default calling convention when converting to the LLVM dialect. This adds the documentation, simplifies the code, and makes the conversion consistent across function operations and function types used in other places, e.g. in high-order functions or attributes, which would not follow the same rule previously.

  PiperOrigin-RevId: 285751280
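  A hedged sketch of the effect on a converted signature, using the 1-D memref descriptor struct commonly produced by this lowering (allocated pointer, aligned pointer, offset, sizes, strides):

    // Descriptor passed by value (previous convention):
    llvm.func @f(%arg0: !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }">)
    // Descriptor promoted to memory and passed by pointer (new default):
    llvm.func @f(%arg0: !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }*">)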
* Try to fold operations in DialectConversion when trying to legalize. (River Riddle, 2019-12-13; 1 file, -16/+13)

  This change allows DialectConversion to attempt folding as a mechanism to legalize illegal operations. It also expands folding support in OpBuilder::createOrFold to generate new constants when folding, and enables it to work in the context of a PatternRewriter.

  PiperOrigin-RevId: 285448440
* [VectorOps] Add lowering of vector.shuffle to LLVM IR (Aart Bik, 2019-12-12; 1 file, -165/+215)

  For example, a shuffle

    %1 = vector.shuffle %arg0, %arg1 [0 : i32, 1 : i32] : vector<2xf32>, vector<2xf32>

  becomes a direct LLVM shuffle

    %0 = llvm.shufflevector %arg0, %arg1 [0 : i32, 1 : i32] : !llvm<"<2 x float>">, !llvm<"<2 x float>">

  but

    %1 = vector.shuffle %a, %b [1 : i32, 0 : i32, 2 : i32] : vector<1x4xf32>, vector<2x4xf32>

  becomes the more elaborate (note the index permutation that drives argument selection for the extract operations)

    %0 = llvm.mlir.undef : !llvm<"[3 x <4 x float>]">
    %1 = llvm.extractvalue %arg1[0] : !llvm<"[2 x <4 x float>]">
    %2 = llvm.insertvalue %1, %0[0] : !llvm<"[3 x <4 x float>]">
    %3 = llvm.extractvalue %arg0[0] : !llvm<"[1 x <4 x float>]">
    %4 = llvm.insertvalue %3, %2[1] : !llvm<"[3 x <4 x float>]">
    %5 = llvm.extractvalue %arg1[1] : !llvm<"[2 x <4 x float>]">
    %6 = llvm.insertvalue %5, %4[2] : !llvm<"[3 x <4 x float>]">

  PiperOrigin-RevId: 285268164
* Retire !linalg.buffer type - NFC (Nicolas Vasilache, 2019-12-12; 1 file, -1/+1)

  This type is not used anymore now that Linalg view and subview have graduated to std and that alignment is supported on alloc.

  PiperOrigin-RevId: 285213424
* Added lowering of `std.tanh` to llvm function call to `tanh` and `tanhf`. (Ehsan Toosi, 2019-12-12; 1 file, -1/+23)

  Closes tensorflow/mlir#312
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/312 from dfki-ehna:tanh 9e89b072ff91ff390ad739501745114feb3ac856
  PiperOrigin-RevId: 285205674
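  A hedged sketch of the f32 case (the call shape is illustrative; the f64 case would target `tanh` instead):

    %0 = tanh %arg0 : f32
    // becomes a call to the libm-style symbol:
    %0 = llvm.call @tanhf(%arg0) : (!llvm.float) -> !llvm.float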
* Automated rollback of commit f68ac464d818629e0fe10c23b44ac782d64a12d2 (Christian Sigg, 2019-12-12; 1 file, -2/+2)

  PiperOrigin-RevId: 285162061
* Switch from shfl.bfly to shfl.down. (Christian Sigg, 2019-12-12; 1 file, -2/+2)

  Both work for the current use case, but the latter allows implementing prefix sums and is a little easier to understand for partial warps.

  PiperOrigin-RevId: 285145287
* [spirv] Add lowering for std.fdiv, std.frem, std.fsub (Denis Khalikov, 2019-12-11; 1 file, -0/+28)

  Closes tensorflow/mlir#313
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/313 from denis0x0D:sandbox/lowering_std_farith 41715070a74d13bfa9401957478978c1bb8006c0
  PiperOrigin-RevId: 285023586
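  A hedged sketch of the conversions (the standard float ops are spelled divf/remf/subf in the IR of this period; the SPIR-V targets are assumed to be the corresponding arithmetic ops):

    %0 = divf %a, %b : f32    // -> %0 = spv.FDiv %a, %b : f32
    %1 = remf %a, %b : f32    // -> %1 = spv.FRem %a, %b : f32
    %2 = subf %a, %b : f32    // -> %2 = spv.FSub %a, %b : f32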
* [VectorOps] Add lowering of vector.insert to LLVM IR (Aart Bik, 2019-12-10; 1 file, -0/+53)

  For example, an insert

    %0 = vector.insert %arg0, %arg1[3 : i32] : f32 into vector<4xf32>

  becomes

    %0 = llvm.mlir.constant(3 : i32) : !llvm.i32
    %1 = llvm.insertelement %arg0, %arg1[%0 : !llvm.i32] : !llvm<"<4 x float>">

  A more elaborate example, inserting an element in a higher dimension vector

    %0 = vector.insert %arg0, %arg1[3 : i32, 7 : i32, 15 : i32] : f32 into vector<4x8x16xf32>

  becomes

    %0 = llvm.extractvalue %arg1[3 : i32, 7 : i32] : !llvm<"[4 x [8 x <16 x float>]]">
    %1 = llvm.mlir.constant(15 : i32) : !llvm.i32
    %2 = llvm.insertelement %arg0, %0[%1 : !llvm.i32] : !llvm<"<16 x float>">
    %3 = llvm.insertvalue %2, %arg1[3 : i32, 7 : i32] : !llvm<"[4 x [8 x <16 x float>]]">

  PiperOrigin-RevId: 284882443
* Uniformize Vector transforms as patterns on the model of Linalg - NFC (Nicolas Vasilache, 2019-12-10; 1 file, -238/+0)

  This reorganizes the vector transformations to be more easily testable as patterns and more easily composable into fused passes in the future.

  PiperOrigin-RevId: 284817474
* [VectorOps] Add a ShuffleOp to the VectorOps dialect (Aart Bik, 2019-12-09; 1 file, -5/+5)

  For example:

    %0 = vector.shuffle %x, %y [3 : i32, 2 : i32, 1 : i32, 0 : i32] : vector<2xf32>, vector<2xf32>

  yields a vector<4xf32> result with a permutation of the elements of %x and %y.

  PiperOrigin-RevId: 284657191
* Add lowering for module with gpu.kernel_module attribute. (Mahesh Ravishankar, 2019-12-09; 1 file, -0/+1)

  The existing GPU to SPIR-V lowering created a spv.module for every function with the gpu.kernel attribute. A better approach is to lower the module that the function lives in (which has the attribute gpu.kernel_module) to a spv.module operation. This better captures the host-device separation modeled by the GPU dialect and simplifies the lowering as well.

  PiperOrigin-RevId: 284574688
* Unify vector op unrolling transformation. (Andy Davis, 2019-12-09; 1 file, -29/+51)

  Unifies the vector op unrolling transformation by using the same unrolling implementation for contraction and elementwise operations. Removes fake fork/join operations, which are no longer needed now that we have the InsertStridedSlice operation.

  PiperOrigin-RevId: 284570784
* Minor spelling tweaks (Kazuaki Ishizaki, 2019-12-09; 1 file, -1/+1)

  Closes tensorflow/mlir#304
  PiperOrigin-RevId: 284568358
* [VecOps] Rename vector.[insert|extract]element to just vector.[insert|extract] (Aart Bik, 2019-12-06; 1 file, -2/+2)

  Since these operations lower to [insert|extract][element|value] at the LLVM dialect level, neither "element" nor "value" would correctly reflect the meaning.

  PiperOrigin-RevId: 284240727
* [VectorOps] Add lowering of vector.broadcast to LLVM IR (Aart Bik, 2019-12-06; 1 file, -0/+200)

  For example, a scalar broadcast

    %0 = vector.broadcast %x : f32 to vector<2xf32>
    return %0 : vector<2xf32>

  which expands scalar x into vector [x,x], lowers to the following LLVM IR dialect to implement the duplication over the leading dimension:

    %0 = llvm.mlir.undef : !llvm<"<2 x float>">
    %1 = llvm.mlir.constant(0 : index) : !llvm.i64
    %2 = llvm.insertelement %x, %0[%1 : !llvm.i64] : !llvm<"<2 x float>">
    %3 = llvm.shufflevector %2, %0 [0 : i32, 0 : i32] : !llvm<"<2 x float>">, !llvm<"<2 x float>">
    return %3 : vector<2xf32>

  In the trailing dimensions, the operand is simply "passed through", unless a more elaborate "stretch" is required. For example

    %0 = vector.broadcast %arg0 : vector<1xf32> to vector<4xf32>
    return %0 : vector<4xf32>

  becomes

    %0 = llvm.mlir.undef : !llvm<"<4 x float>">
    %1 = llvm.mlir.constant(0 : index) : !llvm.i64
    %2 = llvm.extractelement %arg0[%1 : !llvm.i64] : !llvm<"<1 x float>">
    %3 = llvm.mlir.constant(0 : index) : !llvm.i64
    %4 = llvm.insertelement %2, %0[%3 : !llvm.i64] : !llvm<"<4 x float>">
    %5 = llvm.shufflevector %4, %0 [0 : i32, 0 : i32, 0 : i32, 0 : i32] : !llvm<"<4 x float>">, !llvm<"<4 x float>">
    llvm.return %5 : !llvm<"<4 x float>">

  PiperOrigin-RevId: 284219926
* Add conversions of GPU func with memory attributions to LLVM/NVVM (Alex Zinenko, 2019-12-06; 1 file, -0/+145)

  GPU functions use memory attributions, a combination of Op attributes and region arguments, to specify function-wide buffers placed in workgroup or private memory spaces. Introduce a lowering pattern for GPU functions to be converted to LLVM functions taking into account memory attributions. Workgroup attributions get transformed into module-level globals with unique names derived from function names. Private attributions get converted into llvm.allocas inside the function body. In both cases, we inject at the beginning of the function the IR that obtains the raw pointer to the data and populates a MemRef descriptor based on the MemRef type of buffer, making attributions compose with the rest of the MemRef lowering and transparent for use with std.load and std.store. While using raw pointers instead of descriptors might have been more efficient, it is better implemented as a canonicalization or a separate transformation so that non-attribution memrefs could also benefit from it.

  PiperOrigin-RevId: 284208396
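  A hedged sketch of a gpu.func carrying attributions (memory spaces 3 and 5 per the hardening commit above; the exact custom syntax may differ):

    gpu.func @kernel(%arg0: f32) workgroup(%wg : memref<32xf32, 3>) private(%pv : memref<4xf32, 5>) kernel {
      // %wg lowers to a module-level global; %pv lowers to an llvm.alloca.
      gpu.return
    }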
* Unroll vector masks along with their associated vector arguments. (Andy Davis, 2019-12-06; 1 file, -18/+32)

  Updates vector ContractionOp to use proper vector masks (produced by CreateMaskOp/ConstantMaskOp). Leverages the following canonicalizations in the unrolling unit test: CreateMaskOp -> ConstantMaskOp, StridedSliceOp(ConstantMaskOp) -> ConstantMaskOp. Removes IndexTupleOp (no longer needed now that we have vector mask ops). Updates all unit tests.

  PiperOrigin-RevId: 284182168
* DimOp folding for alloc/view dynamic dimensions (Uday Bondhugula, 2019-12-06; 1 file, -19/+8)

  Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
  Closes tensorflow/mlir#253
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/253 from bondhugula:dimop a4b464f24ae63fd259114558d87e11b8ee4dae86
  PiperOrigin-RevId: 284169689
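  A hedged sketch of the fold (op spellings follow the standard dialect of this period):

    %0 = alloc(%M, %N) : memref<?x?xf32>
    %d = dim %0, 1 : memref<?x?xf32>    // folds to %N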
* Add UnrankedMemRef Type (nmostafa, 2019-12-05; 1 file, -0/+24)

  Closes tensorflow/mlir#261
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/261 from nmostafa:nmostafa/unranked 96b6e918f6ed64496f7573b2db33c0b02658ca45
  PiperOrigin-RevId: 284037040
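  A hedged sketch of the type and one way such a value might be produced (the cast op spelling is assumed for this period):

    %u = memref_cast %0 : memref<4xf32> to memref<*xf32>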
* Allow specification of the workgroup size for GPUToSPIRV lowering. (Mahesh Ravishankar, 2019-12-05; 1 file, -1/+2)

  The SPIR-V/Vulkan spec requires the workgroup size to be specified with the spv.ExecutionMode operation. This was hard-wired to be set to a particular value. It is now changed to be configurable by clients of the pass or of the patterns that implement the lowering from GPU to SPIR-V.

  PiperOrigin-RevId: 284017482
* Add a CL option to Standard to LLVM lowering to use alloca instead of malloc/free (Nicolas Vasilache, 2019-12-04; 1 file, -0/+6)

  In the future, a more configurable malloc and free interface should be used and exposed via extra parameters to `createLowerToLLVMPass`. Until requirements are gathered, a simple CL flag allows generating code that runs successfully on hardware that cannot use the stdlib.

  PiperOrigin-RevId: 283833424
* Drop MaterializeVectorTransfers in favor of simpler declarative unrolling (Nicolas Vasilache, 2019-12-04; 1 file, -0/+25)

  Now that we have unrolling as a declarative pattern, we can drop a full pass that has gone stale. In the future we may want to add specific unrolling patterns for VectorTransferReadOp.

  PiperOrigin-RevId: 283806880
* Adds support for unrolling single-result vector operations with iterator type lists and indexing maps to a target vector size. (Andy Davis, 2019-12-04; 1 file, -0/+128)

  Adds unit tests for unrolling the vector ContractionOp with different iteration orders.

  PiperOrigin-RevId: 283747503
* Refactor dependencies to expose Vector transformations as patterns - NFC (Nicolas Vasilache, 2019-12-03; 2 files, -1/+1)

  This CL refactors some of the MLIR vector dependencies to allow decoupling VectorOps, vector analysis, vector transformations and vector conversions from each other. This makes the system more modular and allows extracting VectorToVector into VectorTransforms that do not depend on vector conversions. This refactoring exhibited a bunch of cyclic library dependencies that have been cleaned up.

  PiperOrigin-RevId: 283660308
* Add a pass to legalize operations before lowering to SPIR-V. (Mahesh Ravishankar, 2019-12-03; 2 files, -0/+114)

  Not all StandardOps can be lowered to SPIR-V. For example, the subview op implementation requires use of pointer bitcasts, which is not valid according to the SPIR-V spec (or at least the spec is ambiguous about it). Such ops need to be removed/transformed before lowering to SPIR-V. The SPIRVLegalizationPass is added as a place where such legalizations can be added. The current implementation folds the subview ops with loads/stores so that the lowering itself does not have to convert a subview op.

  PiperOrigin-RevId: 283642981
* Convert MemRefType to a linearized array in SPIR-V lowering. (Mahesh Ravishankar, 2019-12-03; 2 files, -10/+17)

  The SPIR-V lowering used nested !spv.arrays to represent multi-dimensional arrays, with the hope that, in conjunction with the layout annotations, the shape and layout of a memref can be represented directly. It is unclear though how portable this representation will end up being. It will rely on driver compilers implementing complex index computations faithfully. A more portable approach is to use linearized arrays to represent memrefs and explicitly instantiate all the index computation in SPIR-V. This gives the added benefit that we can further optimize the generated code in MLIR before generating the SPIR-V binary.

  PiperOrigin-RevId: 283571167
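  A hedged sketch of the explicit index linearization this implies for a 2-D memref (op names from the SPIR-V dialect; the surrounding access chain is omitted):

    // memref<4x8xf32>, element (%i, %j) -> linear index %i * 8 + %j
    %c8 = spv.constant 8 : i32
    %0 = spv.IMul %i, %c8 : i32
    %idx = spv.IAdd %0, %j : i32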
* Fix ViewOp to have at most one offset operand (Alex Zinenko, 2019-12-03; 1 file, -5/+5)

  As described in the documentation, ViewOp is expected to take an optional dynamic offset followed by a list of dynamic sizes. However, the ViewOp parser did not include a check for the offset being a single value and accepted a list of values instead. Furthermore, several tests have been exercising the wrong syntax of a ViewOp, passing multiple values to the dynamic stride list, which was not caught by the parser. The trailing values could have been erroneously interpreted as dynamic sizes. This is likely due to resyntaxing of the ViewOp, with the previous syntax taking the list of sizes before the offset. Update the tests to use the syntax with the offset preceding the sizes. Worse, the conversion of ViewOp to the LLVM dialect assumed the wrong order of operands, with the offset in the trailing position, and erroneously relied on the permissive parsing that interpreted trailing dynamic offset values as leading dynamic sizes. Fix the lowering to use the correct order of operands.

  PiperOrigin-RevId: 283532506
* Extend conversion of SubViewOp to llvm to also support cases where size and stride are constant (i.e., there are no size and stride operands). (Stephan Herhut, 2019-12-03; 1 file, -0/+54)

  We recently added canonicalization that rewrites constant size and stride operands to SubViewOp into static information in the type, so these patterns now occur during code generation.

  PiperOrigin-RevId: 283524688