| Commit message | Author | Age | Files | Lines |
... | |
|
|
|
|
|
| |
The restriction that symbols can only have identifier names is arbitrary, and artificially limits the names that a symbol may have. This change adds support for parsing and printing symbols that don't fit in the 'bare-identifier' grammar by printing the reference in quotes, e.g. @"0_my_reference" can now be used as a symbol name.
PiperOrigin-RevId: 273644768
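For illustration only (not part of the commit), a minimal C++ sketch of creating a symbol whose name is not a bare identifier; the builder calls and header paths are assumptions from MLIR of that era and may differ.

```cpp
// Sketch: build a function whose symbol name starts with a digit.
// Header locations and exact signatures are approximate.
#include "mlir/IR/Builders.h"
#include "mlir/IR/Function.h"

mlir::FuncOp makeOddlyNamedFunc(mlir::MLIRContext *ctx, mlir::Location loc) {
  mlir::OpBuilder builder(ctx);
  auto type = builder.getFunctionType(/*inputs=*/{}, /*results=*/{});
  // "0_my_reference" is not a bare identifier, so the printer emits the
  // symbol and references to it in quoted form: @"0_my_reference".
  return mlir::FuncOp::create(loc, "0_my_reference", type);
}
```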
|
|
|
|
|
|
|
|
|
| |
This matches what the runtime library expects.
Closes tensorflow/mlir#171
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/171 from deven-amd:deven-rocdl-device-func-i64 80762629a8c34e844ebdc542b34dd783990db9db
PiperOrigin-RevId: 273640767
|
|
|
|
|
|
|
|
|
|
|
| |
Add a pass to decorate the composite types used by
composite objects in the StorageBuffer, PhysicalStorageBuffer,
Uniform, and PushConstant storage classes with layout information.
Closes tensorflow/mlir#156
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/156 from denis0x0D:sandbox/layout_info_decoration 7c50840fd38ca169a2da7ce9886b52b50c868b84
PiperOrigin-RevId: 273634140
|
|
|
|
|
|
| |
This is similar to the `inlineRegionBefore` hook, except the original blocks are unchanged. The region to be cloned *must* not have been modified during the conversion process at the point of cloning, i.e. it must belong to an operation that has yet to be converted, or to the operation that is currently being converted.
PiperOrigin-RevId: 273622533
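A minimal sketch of the idea (illustrative, not the commit's code): copy a region into a destination region without touching the original blocks, assuming a `ConversionPatternRewriter` is available in the surrounding pattern.

```cpp
// Sketch: clone the body of `source` into `dest` during dialect conversion.
// Names are illustrative; the rewriter API shape is approximate.
#include "mlir/Transforms/DialectConversion.h"

void cloneBody(mlir::Region &source, mlir::Region &dest,
               mlir::ConversionPatternRewriter &rewriter) {
  // Unlike inlineRegionBefore, the blocks of `source` stay in place and a
  // copy is appended at the end of `dest`.
  rewriter.cloneRegionBefore(source, dest, dest.end());
}
```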
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
- Bodies would previously appear in the order (i, i+3, i+2, i+1) instead of
(i, i+1, i+2, i+3), e.g. for a factor of 4.
- Clean up hardcoded test cases.
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#170
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/170 from bondhugula:ujam b66b405b2b1894a03b376952e32a9d0292042665
PiperOrigin-RevId: 273613131
|
|
|
|
|
|
| |
MLIR uses symbol references to model references to many global entities, such as functions/variables/etc. Before this change, there was no way to actually reason about the uses of such entities. This change provides a walker for symbol references (via SymbolTable::walkSymbolUses) as well as 'use_empty' support (via SymbolTable::symbol_use_empty). It also resolves some deficiencies in the LangRef definition of SymbolRefAttr, namely the restrictions on where a SymbolRefAttr can be stored (e.g. within ArrayAttr and DictionaryAttr) and the relationship with operations containing the SymbolTable trait.
PiperOrigin-RevId: 273549331
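A rough sketch of how the walker might be used; the callback shape and helper signatures below are approximations of the API named above, not its exact form.

```cpp
// Sketch: walk every symbol reference nested under `from`.
#include "mlir/IR/SymbolTable.h"
#include "llvm/Support/raw_ostream.h"

void printSymbolUses(mlir::Operation *from) {
  mlir::SymbolTable::walkSymbolUses(from, [](mlir::SymbolTable::SymbolUse use) {
    // Each use pairs the referenced symbol name with the operation holding it.
    llvm::outs() << "symbol " << use.getSymbolRef() << " is used by "
                 << use.getUser()->getName() << "\n";
    return mlir::WalkResult::advance();
  });
  // SymbolTable::symbol_use_empty provides the analogous "is this symbol
  // unused under this operation" query.
}
```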
|
|
|
|
|
|
| |
The default value is never used, since the value of the elide option is only read when the option is actually specified.
PiperOrigin-RevId: 273545143
|
|
|
|
|
|
|
|
|
|
|
| |
During the conversion, both the original and the converted function may coexist
in the module and have the same symbol name. There is no guarantee which of the
two will be found by the symbol lookup. Avoid returning the result of the
library function lookup when lowering Linalg to Standard or LLVM. Use the
symbol reference instead. After the conversion completes, only one symbol will
remain and the Ops using SymbolRefAttrs will be referring to the correct one.
PiperOrigin-RevId: 273510079
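A minimal sketch of the idea (not the actual Linalg lowering code): build the call from a symbol reference by name rather than from the FuncOp returned by a module lookup, since the lookup may find either the original or the converted function while both coexist. The helper below and its name are illustrative.

```cpp
// Sketch: refer to the library function by symbol name instead of returning
// the result of a symbol lookup.
#include "mlir/IR/Attributes.h"
#include "mlir/IR/Builders.h"

mlir::SymbolRefAttr getLibraryFnSymbol(mlir::OpBuilder &builder,
                                       llvm::StringRef fnName) {
  // Instead of module.lookupSymbol<FuncOp>(fnName), which may resolve to
  // either the original or the converted function during conversion, return
  // only the symbol reference; it resolves correctly once conversion is done.
  return builder.getSymbolRefAttr(fnName);
}
```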
|
|
|
|
|
|
|
|
|
|
|
| |
Originally, we were attaching attributes containing CUBIN blobs to the kernel
function called by `gpu.launch_func`. This kernel is now contained in a nested
module that is used as a compilation unit. Attach compiled CUBIN blobs to the
module rather than to the function since we were compiling the module. This
also avoids duplication of the attribute on multiple kernels within the same
module.
PiperOrigin-RevId: 273497303
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Originally, the CUBIN getter function was introduced as a mechanism to
circumvent the absence of globals in the LLVM dialect. It would allocate memory
and populate it with the CUBIN data. LLVM dialect now supports globals and they
are already used to store CUBIN data, making the getter function a trivial
address computation of a global. Emit the address computation directly at the
place of `gpu.launch_func` instead of putting it in a function and calling it.
This simplifies the conversion flow and prepares it for using the
DialectConversion infrastructure.
PiperOrigin-RevId: 273496221
|
|
|
|
|
|
|
|
|
|
|
| |
Now that the accessor function is a trivial getter of the global variable, it
makes less sense to have the getter generation as a separate pass. Move the
getter generation into the lowering of `gpu.launch_func` to CUDA calls. This
change is mostly code motion, but the process can be simplified further by
generating the addressof in place instead of using a call. This will be done
in a follow-up.
PiperOrigin-RevId: 273492517
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The kernel function called by gpu.launch_func is now placed into an isolated
nested module during the outlining stage to simplify separate compilation.
Until recently, modules did not have names and could not be referenced. This
limitation was circumvented by introducing a stub kernel with the same name at
the same nesting level as the module containing the actual kernel. This
relation is only effective in one direction: from the actual kernel function to its
launch_func "caller".
Leverage the recently introduced symbol name attributes on modules to refer to
a specific nested module from `gpu.launch_func`. This removes the implicit
connection between the identically named stub and kernel functions. It also
enables support for `gpu.launch_func`s to call different kernels located in the
same module.
PiperOrigin-RevId: 273491891
|
|
|
|
|
|
| |
directly.
PiperOrigin-RevId: 273446814
|
|
|
|
|
|
| |
Some modules may have extremely large ElementsAttrs, which makes debugging involving IR dumping extremely slow and painful. This change adds a flag that will elide ElementsAttrs with a "large" (as defined by the user) number of elements by printing "..." instead of the element data.
PiperOrigin-RevId: 273413100
|
|
|
|
| |
PiperOrigin-RevId: 273406833
|
|
|
|
| |
PiperOrigin-RevId: 273384063
|
|
|
|
|
|
|
|
|
| |
Since MLIR integer types don't make a distinction between signed and
unsigned integers, during deserialization of SPIR-V binaries, the
OpBitcast might result in a cast from/to the same type. Do not add a
spv.Bitcast operation to the spv.module in these cases.
PiperOrigin-RevId: 273381887
|
|
|
|
| |
PiperOrigin-RevId: 273379318
|
|
|
|
|
|
|
|
| |
Operation::print behavior.
This allows for controlling the behavior of the AsmPrinter programmatically, instead of relying exclusively on cl::opt flags. This will also allow for more fine-tuned control of printing behavior per call site, instead of it being applied globally.
PiperOrigin-RevId: 273368361
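A short sketch of programmatic printing control, assuming an OpPrintingFlags-style interface as described above; exact flag names may differ from the version at the time.

```cpp
// Sketch: print an op with elided large constants and generic op syntax.
#include "mlir/IR/Operation.h"
#include "mlir/IR/OperationSupport.h"
#include "llvm/Support/raw_ostream.h"

void dumpForDebug(mlir::Operation *op) {
  mlir::OpPrintingFlags flags;
  flags.elideLargeElementsAttrs(/*largeElementLimit=*/16) // elide big ElementsAttrs
      .printGenericOpForm();                              // use the generic form
  op->print(llvm::errs(), flags);
}
```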
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The SPIR-V spec recommends all OpUndef instructions be generated at
module level. For the SPIR-V dialect it is better for UndefOp to produce
an SSA value for use with other instructions. If UndefOp is to be used
at module level, it cannot produce an SSA value (use of this SSA value
within a FuncOp would need implicit capture). To satisfy the needs of the
SPIR-V spec while making it simpler to represent UndefOp in the SPIR-V
dialect, the serialization is updated to create OpUndef instructions
at module scope.
PiperOrigin-RevId: 273355526
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The structured selection/loop's entry block does not have arguments.
If the function's header block is also part of the structured control
flow, we cannot simply erase it because it may contain arguments
matching the function signature and used by the cloned blocks. Instead,
turn it into a block only containing a spv.Branch op.
Also, we can directly emit instructions for the spv.selection header
block to the block containing the spv.selection op. This eliminates
unnecessary branches in the SPIR-V blob.
Added a test for nested spv.loop.
PiperOrigin-RevId: 273351424
|
|
|
|
|
|
|
|
|
| |
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#157
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/157 from bondhugula:quickfix bd1fcd79825fc0bd5b4a3e688153fa0993ab703d
PiperOrigin-RevId: 273316498
|
|
|
|
|
|
| |
because ilist_node_with_parent specifically requires a 'getParent() const' method. If/When ilist_node removes this constraint we should drop the const to fit the rest of the MLIR const model.
PiperOrigin-RevId: 273316153
|
|
|
|
| |
PiperOrigin-RevId: 273308494
|
|
|
|
|
|
|
|
|
|
|
| |
Now that MLIR has a standardized StridedMemRef descriptor, it becomes very easy to interact with external library functions and build utilities directly in C++.
This CL introduces basic printing support in a libmlir_utils.so.
Unit tests are rewritten using this feature and also to improve coverage.
For now, C mandates that we have a unique function for each MemRef element type and rank.
In the future, a simple unranked descriptor can be introduced to only require uniquing by element type.
PiperOrigin-RevId: 273304741
|
|
|
|
|
|
|
|
| |
Now that linalg.view and strided memrefs are unified, there is no reason to
disallow AllocOp in alias analysis. This CL adds support for AllocOp, which allows writing shorter tests that do not require explicitly creating a view for
each operation.
PiperOrigin-RevId: 273303060
|
|
|
|
|
|
| |
Add a new `typeDescription` field (`description` was already used by the base constraint class) to types to allow writing longer descriptions of a type being defined. This allows providing additional information/rationale for a defined type. The generated documentation currently uses `description` as the heading/name for the type.
PiperOrigin-RevId: 273299332
|
|
|
|
| |
PiperOrigin-RevId: 273296399
|
|
|
|
|
|
|
|
|
| |
See RFC: https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/xE2IzfhE3Wg.
OpaqueLoc stores two pointers: one points to a data structure that is external to MLIR, and the other is unique for each type and represents the type id of that data structure. OpaqueLoc also stores an optional location that can be used if the first one is not suitable.
OpaqueLoc is managed similarly to FileLineColLoc. It is passed around by MLIR transformations and can be used in compound locations like CallSiteLoc.
PiperOrigin-RevId: 273266510
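A brief sketch of attaching an external data structure to a location; `MyAstNode` is a hypothetical frontend type, and API details (header paths, cast helpers) are approximate.

```cpp
// Sketch: wrap and unwrap a frontend node in an OpaqueLoc.
#include "mlir/IR/Location.h"

struct MyAstNode; // hypothetical structure external to MLIR

mlir::Location wrapAstNode(MyAstNode *node, mlir::Location fallback) {
  // Stores the raw pointer plus a type id for MyAstNode*; `fallback` is used
  // wherever the opaque payload is not understood.
  return mlir::OpaqueLoc::get<MyAstNode *>(node, fallback);
}

MyAstNode *unwrapAstNode(mlir::Location loc) {
  if (loc.isa<mlir::OpaqueLoc>())
    return mlir::OpaqueLoc::getUnderlyingLocation<MyAstNode *>(loc);
  return nullptr;
}
```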
|
|
|
|
|
|
| |
gpu.all_reduce now supports block sizes that are not a multiple of 32.
PiperOrigin-RevId: 273255204
|
|
|
|
|
|
| |
Sort ops per dialect and emit summary & description (if provided) of each dialect before emitting the ops of the dialect.
PiperOrigin-RevId: 273077138
|
|
|
|
|
|
|
|
| |
This allows confirming that a scalar argument has the same element type as a shaped one. It's easy to validate a type is shaped on its own if that's desirable, so this shouldn't make that use case harder. This matches the behavior of other traits that operate on element type (e.g. AllElementTypesMatch). Also this makes the code simpler because now we just use getElementTypeOrSelf.
Verified that all uses in core already check the type is shaped in another way.
PiperOrigin-RevId: 273068507
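A minimal sketch of the check this enables, using the helper named above; the surrounding function is illustrative.

```cpp
// Sketch: compare element types while accepting either scalar or shaped values.
#include "mlir/IR/TypeUtilities.h"

bool haveSameElementType(mlir::Type a, mlir::Type b) {
  // getElementTypeOrSelf returns the element type of a shaped type, or the
  // type itself for scalars, so tensor<4xf32> and f32 compare equal here.
  return mlir::getElementTypeOrSelf(a) == mlir::getElementTypeOrSelf(b);
}
```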
|
|
|
|
|
|
|
|
|
| |
1. Rename a few ops to make it clear they operate on *element* types.
2. Remove unused and generic operand and result ODS names (e.g. $res, $arg, $input). These are just clutter and don't make the op definitions any clearer.
3. Give test cases with duplicate names clearer names.
4. Add missing test case for no operands in SameOperandAndResultElementType.
PiperOrigin-RevId: 273067933
|
|
|
|
|
|
|
| |
Use `getParentOfType<FunctionOp>()` instead of `cast<FuncOp>(getParentOp())`
to avoid crash when return ops are used inside spv.selection/spv.loop.
PiperOrigin-RevId: 273006041
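A rough sketch of the distinction (illustrative, not the verifier code itself); header paths reflect MLIR of that era and may have moved since.

```cpp
// Sketch: find the enclosing function even through structured control flow.
#include "mlir/IR/Function.h"
#include "mlir/IR/Operation.h"

mlir::FuncOp getEnclosingFunction(mlir::Operation *returnOp) {
  // cast<FuncOp>(returnOp->getParentOp()) asserts when the return op is nested
  // inside spv.selection/spv.loop, because the immediate parent is not a
  // function. Walking up with getParentOfType finds the enclosing function
  // regardless of intermediate structured control-flow ops.
  return returnOp->getParentOfType<mlir::FuncOp>();
}
```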
|
|
|
|
|
|
|
|
|
| |
This fixes a build failure that is usually non-deterministic because of
parallelism in the build, but can be reliably reproduced with:
ninja projects/mlir/test/lib/TestDialect/CMakeFiles/MLIRTestDialect.dir/TestPatterns.cpp.o
PiperOrigin-RevId: 272998436
|
|
|
|
|
|
|
|
| |
Adding support for OpUndef instruction. Updating the dialect
generation script to fix a few bugs in the instruction spec
generation.
PiperOrigin-RevId: 272975685
|
|
|
|
|
|
|
|
|
|
|
| |
Add builder functions for spv._address_of, spv.EntryPoint,
spv.ExecutionMode and spv.Load to make it easier to create these
operations.
Fix a minor bug in the printing of spv.EntryPoint.
Add a utility function to get the attribute name associated with a
decoration.
PiperOrigin-RevId: 272952846
|
|
|
|
|
|
|
|
|
|
| |
MemRefType::getDynamicStrideOrOffset() method - NFC
This fixes global ODR-use issues, some of which manifest in Parser.cpp.
Fixes tensorflow/mlir#167.
PiperOrigin-RevId: 272886347
|
|
|
|
|
|
|
|
|
|
|
|
| |
Certain lowering patterns were reported as [missing](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/dkdmHa77sSQ).
This CL adds them and allows Linalg/roundtrip.mlir and Linalg/loops.mlir to lower to LLVM directly. Those 2 tests are updated to additionally check that the direct lowering to LLVM does not crash.
The following points, left as TODOs still need to be addressed for correct end-to-end execution:
1. the lowering for ConvOp needs to pass attributes such as strides and dilations; the external library call needs to support it.
2. the lowering for GenericOp needs to support lowering to loops as a DialectConversion pattern. This is blocked on the DialectConversion infrastructure accepting an OperationFolder.
PiperOrigin-RevId: 272878131
|
|
|
|
|
|
|
|
|
|
| |
The GPUIndexIntrinsicOpLowering template is currently used by the code in both the GPUToNVVM and GPUToROCDL dirs.
Moving it to a common location to remove code duplication.
Closes tensorflow/mlir#163
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/163 from deven-amd:deven-refactor-gpu-index-ops-lowering b8dc2a5f5353df196039b6ff2ad42106028693ed
PiperOrigin-RevId: 272863297
|
|
|
|
| |
PiperOrigin-RevId: 272851237
|
|
|
|
|
|
|
|
| |
callable.
Some dialects have implicit conversions inherent in their modeling, meaning that a call may have a different type than the type that the callable expects. To support this, a hook is added to the dialect interface that allows for materializing conversion operations during inlining when there is a mismatch. A hook is also added to the callable interface to allow for introspecting the expected result types.
PiperOrigin-RevId: 272814379
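A sketch of a dialect opting in to such materialization. The hook name matches current MLIR and is assumed to match this change; the cast op is hypothetical and the signature is approximate.

```cpp
// Sketch: materialize a conversion op when an inlined value's type does not
// match what the call site expects. `mydialect::ImplicitCastOp` is hypothetical.
#include "mlir/Transforms/InliningUtils.h"

struct MyDialectInlinerInterface : public mlir::DialectInlinerInterface {
  using DialectInlinerInterface::DialectInlinerInterface;

  mlir::Operation *materializeCallConversion(mlir::OpBuilder &builder,
                                             mlir::Value input,
                                             mlir::Type resultType,
                                             mlir::Location loc) const final {
    // Return an op whose single result performs the implicit conversion.
    return builder.create<mydialect::ImplicitCastOp>(loc, resultType, input);
  }
};
```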
|
|
|
|
|
|
| |
This allows for the inliner to work on arbitrary call operations. The updated inliner will also work bottom-up through the callgraph enabling support for multiple levels of inlining.
PiperOrigin-RevId: 272813876
|
|
|
|
|
|
|
| |
The first dim length of the axisStats attribute should equal the slice size
of the input argument when split along the axis dimension.
PiperOrigin-RevId: 272798042
|
|
|
|
| |
PiperOrigin-RevId: 272768027
|
|
|
|
| |
PiperOrigin-RevId: 272722539
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This CL implements the last remaining bit of the [strided memref proposal](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
The syntax is a bit more explicit than what was originally proposed and resembles:
`memref<?x?xf32, offset: 0 strides: [?, 1]>`
Nonnegative strides and offsets are currently supported. Future extensions will include negative strides.
This also gives a concrete example of syntactic sugar for the [[RFC] Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/-wKHANzDNTg).
The underlying implementation still uses AffineMap layout.
PiperOrigin-RevId: 272717437
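Since the sugar still lowers to an AffineMap layout, a pass can recover the strides and offset from a MemRefType; the sketch below uses current helper and header names, which may differ from the version at the time.

```cpp
// Sketch: recover the strided-layout view of a memref type.
#include "mlir/IR/StandardTypes.h"
#include "mlir/Support/LogicalResult.h"
#include "llvm/ADT/SmallVector.h"

bool getLayout(mlir::MemRefType type, llvm::SmallVectorImpl<int64_t> &strides,
               int64_t &offset) {
  // For memref<?x?xf32, offset: 0 strides: [?, 1]> this yields strides
  // {dynamic, 1} and offset 0, with dynamic values encoded as sentinels.
  return succeeded(mlir::getStridesAndOffset(type, strides, offset));
}
```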
|
|
|
|
|
|
|
|
| |
Module names are optional so it makes more sense to take and return an optional
any time the name is involved. Also update the language reference to reflect
the module names.
PiperOrigin-RevId: 272684698
|
|
|
|
|
|
|
|
|
|
| |
Modules are now Ops and, as such, can be nested. They do not produce an SSA
value so there is no possibility to refer to them in the IR. Introduce support
for symbol names attached to the module Op so that it can be referred to using
SymbolRefAttrs. The name is optional, for example the implicit top-level module
does not have a name.
PiperOrigin-RevId: 272671600
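A small sketch of creating a named module that can then be referenced via a SymbolRefAttr; the overload and header path are approximate.

```cpp
// Sketch: build a module with an optional symbol name.
#include "mlir/IR/Module.h"

mlir::ModuleOp makeNamedModule(mlir::Location loc) {
  // The name is optional; the implicit top-level module has none.
  return mlir::ModuleOp::create(loc, llvm::StringRef("kernels"));
}
```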
|
|
|
|
|
|
| |
This removes a warning and is generally a good practice.
PiperOrigin-RevId: 272613597
|