properly value-typed.
Summary: These were temporary methods used to simplify the transition.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D72548

Fix typos related to (de)serialization of spv.selection.
Differential Revision: https://reviews.llvm.org/D72503

ValuePtr was a temporary typedef during the transition to a value-typed Value.
PiperOrigin-RevId: 286945714

PiperOrigin-RevId: 286906740

Value being value-typed.
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.
PiperOrigin-RevId: 286844725

in `mlir` namespace.
Aside from being cleaner, this also makes the codebase more consistent.
PiperOrigin-RevId: 286206974

The iterator should be erased before adding a new entry
into blockMergeInfo to avoid iterator invalidation.
Closes tensorflow/mlir#299
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/299 from denis0x0D:sandbox/reoder_erase 983be565809aa0aadfc7e92962e4d4b282f63c66
PiperOrigin-RevId: 284173235

Closes tensorflow/mlir#290
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/290 from kiszk:spelling_tweaks_201912 9d9afd16a723dd65754a04698b3976f150a6054a
PiperOrigin-RevId: 284169681

For serialization, when we have nested ops, the inner loop will create multiple
SPIR-V blocks. If the outer loop has block arguments (which correspond to
OpPhi instructions), we defer handling the OpPhi's parent block until we have
serialized all blocks and then fix it up with the result <id>. These two
cases happening together were generating an invalid SPIR-V blob, because we
previously assumed the parent block to be the block containing the terminator.
That is no longer true when the block contains structured control flow ops;
if that happens, the parent block should be fixed to the structured control
flow op's merge block.
For deserialization, we record a map from header blocks to their corresponding
merge and continue blocks during the initial deserialization and then use that
info to construct spv.selection/spv.loop. The existing implementation also
falls apart when we have nested loops: we clone all blocks for the outer
loop, including the ones for the inner loop, into the spv.loop's region, so
the header blocks' merge info needs to be updated; otherwise we are operating
on already-deleted blocks.
PiperOrigin-RevId: 283949230

During deserialization, the loop header block will be moved into the
spv.loop's region. If the loop header block has block arguments,
we need to make sure they are correctly carried over to the block where
the new spv.loop resides.
During serialization, we need to make sure block arguments from the
spv.loop's entry block are not silently dropped.
PiperOrigin-RevId: 280021777

This change allows adding nested references to a SymbolRefAttr: if a
referenced symbol also defines a SymbolTable, a nested reference can be used
to refer to a symbol within that table. Nested references are printed after
the main reference in the following form:

  symbol-ref-attribute ::= symbol-ref-id (`::` symbol-ref-id)*

Example:

  module @reference {
    func @nested_reference()
  }
  my_reference_op @reference::@nested_reference

Given that SymbolRefAttr is now more general, the existing functionality
centered around a single reference is moved to a derived class,
FlatSymbolRefAttr. Followup commits will add support for scoped references
to lookups, RAUW, etc.
PiperOrigin-RevId: 279860501

This CL adds another control flow instruction in SPIR-V: OpPhi.
It is modelled as block arguments to be idiomatic with MLIR.
See the rationale.md doc for "Block Arguments vs PHI nodes".
Serialization and deserialization are updated to convert between
block arguments and SPIR-V OpPhi instructions.
PiperOrigin-RevId: 277161545
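
A sketch of the modelling (function and value names are hypothetical, not
from the commit): the block argument on ^merge is what round-trips to an
OpPhi instruction in the merge block.

  func @select_value(%cond : i1) -> i32 {
    spv.BranchConditional %cond, ^then, ^else
  ^then:
    %zero = spv.constant 0 : i32
    spv.Branch ^merge(%zero : i32)
  ^else:
    %one = spv.constant 1 : i32
    spv.Branch ^merge(%one : i32)
  ^merge(%result : i32):  // deserializes from / serializes to an OpPhi
    spv.ReturnValue %result : i32
  }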

We will use block arguments as the way to model SPIR-V OpPhi in
the SPIR-V dialect.
This CL also adds a few useful helper methods to both ops to
get the block arguments.
Also added tests for branch weight (de)serialization.
PiperOrigin-RevId: 275960797
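
As a sketch of the resulting assembly (operands and block names are
hypothetical), block arguments and branch weights look roughly like:

  spv.Branch ^target(%val : i32)
  spv.BranchConditional %cond [5, 10], ^then, ^else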

Closes tensorflow/mlir#177
PiperOrigin-RevId: 275692653

These don't add any value, and some are even more restrictive than the respective static 'get' method.
PiperOrigin-RevId: 275391240

The SpecId decoration is the handle for providing external specialization.
Similar to descriptor set and binding on global variables, we directly
bake it into assembly parsing and printing.
PiperOrigin-RevId: 274893879
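
For instance (a sketch; symbol name and value are hypothetical), a spec
constant carrying SpecId 1 might print with the decoration baked into the
assembly as:

  spv.specConstant @spec_bool spec_id(1) = true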

Since MLIR integer types don't make a distinction between signed vs
unsigned integers, during deserialization of SPIR-V binaries, the
OpBitcast might result in a cast from/to the same type. Do not add a
spv.Bitcast operation to the spv.module in these cases.
PiperOrigin-RevId: 273381887

The SPIR-V spec recommends all OpUndef instructions be generated at
module level. For the SPIR-V dialect, it's better for UndefOp to produce
an SSA value for use with other instructions. If UndefOp were to be used
at module level, it could not produce an SSA value (use of this SSA value
within a FuncOp would need implicit capture). To satisfy the needs of the
SPIR-V spec while making it simpler to represent UndefOp in the SPIR-V
dialect, the serialization is updated to create OpUndef instructions
at module scope.
PiperOrigin-RevId: 273355526
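
Inside a function the op still yields an SSA value (a sketch with
hypothetical names; only the serialized OpUndef lands at module scope):

  %undef = spv.undef : f32
  %sum = spv.FAdd %undef, %undef : f32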

The structured selection/loop's entry block does not have arguments.
If the function's header block is also part of the structured control
flow, we cannot simply erase it, because it may contain arguments
matching the function signature that are used by the cloned blocks. Instead,
turn it into a block containing only a spv.Branch op.
Also, we can directly emit instructions for the spv.selection header
block to the block containing the spv.selection op. This eliminates
unnecessary branches in the SPIR-V blob.
Added a test for nested spv.loop.
PiperOrigin-RevId: 273351424

Similar to spv.loop, spv.selection is another op for modelling
SPIR-V structured control flow. It covers both OpBranchConditional
and OpSwitch with OpSelectionMerge.
Instead of having a `spv.SelectionMerge` op to directly model the
selection merge instruction for indicating the merge target,
we use regions to delimit the boundary of the selection: the
merge target is the next op following the `spv.selection` op.
This way it's easier to discover all blocks belonging to
the selection and it plays nicer with the MLIR system.
PiperOrigin-RevId: 272475006
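
A sketch of the region structure (condition and block names are
hypothetical; %cond is defined before the op):

  spv.selection {
    spv.BranchConditional %cond, ^then, ^merge
  ^then:
    spv.Branch ^merge
  ^merge:
    spv._merge
  }
  // the merge target is the op right after spv.selection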

PiperOrigin-RevId: 271256784

1) Process and ignore the following debug instructions: OpSource,
OpSourceContinued, OpSourceExtension, OpString, OpModuleProcessed.
2) While processing the OpTypeInt instruction, ignore the signedness
specification. Currently MLIR doesn't make a distinction between signed
and unsigned integer types.
3) Process and ignore the BufferBlock decoration (similar to the Buffer
decoration). StructType needs to be enhanced to track this attribute
since it's needed for proper validation checks.
4) Report a better error for unhandled instructions during
deserialization.
PiperOrigin-RevId: 271057060

Add support in the deserializer for the OpMemberName instruction. For now
the name is just processed and not associated with the
spirv::StructType being built; that needs an enhancement to
spirv::StructType itself.
Add tests to check for errors reported during deserialization, with
some refactoring to factor out common utility functions.
PiperOrigin-RevId: 270794524

The existing logic to parse spirv::StructType is very brittle. This
change simplifies the parsing logic a lot. The simplification also
allows member decorations to be separated by commas instead of
spaces (which was an artifact of the existing parsing logic). The
change also needs mlir::parseType to return the number of characters
parsed, so a new parseType method is added to do so.
Also allow specification of spirv::StructType with no members.
PiperOrigin-RevId: 270739672
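
With the new parsing, member decorations (offsets here) are comma
separated, and a memberless struct is expressible (a sketch):

  !spv.struct<f32 [0], vector<4xf32> [16]>
  !spv.struct<>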

Add OpControlBarrier and OpMemoryBarrier (de)serialization.
Closes tensorflow/mlir#130
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/130 from denis0x0D:sandbox/memory_barrier 2e3fff16bca44904dc1039592cb9a65d526faea8
PiperOrigin-RevId: 270457478

MLIR follows the LLVM convention of passing by reference instead of by pointer.
PiperOrigin-RevId: 270396945

Allow specification of decorations on SPIR-V StructType members. If the
struct has layout information, these decorations are to be specified
after the offset specification of the member. These decorations are
emitted as OpMemberDecorate instructions on the struct <id>. Update
(de)serialization to handle these decorations.
PiperOrigin-RevId: 270130136
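
For example, decorations follow the member's offset (a sketch of the
layout-bearing form):

  !spv.struct<f32 [0, NonWritable], i32 [4, NonWritable, NonReadable]>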

Update the SPIR-V (de)serialization to handle RuntimeArrayType.
PiperOrigin-RevId: 269667196
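
The corresponding type in the dialect, for reference:

  // a runtime-sized array of f32; OpTypeRuntimeArray carries no element count
  !spv.rtarray<f32>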

A generic mechanism for (de)serialization of extended instruction sets
is added with this CL. To facilitate this, a new class
"SPV_ExtendedInstSetOp" is added, which is a base class for all
operations corresponding to extended instruction sets. The methods to
(de)serialize such ops, as well as their dispatch, are generated
automatically.
The behavior controlled by autogenSerialization and hasOpcode is also
slightly modified to enable this. They are now decoupled:
1) Setting hasOpcode=1 means the operation has a corresponding
opcode in the SPIR-V binary format, and its dispatch for
(de)serialization is automatically generated.
2) Setting autogenSerialization=1 generates the function for
(de)serialization automatically.
So it is now possible to have hasOpcode=0 and autogenSerialization=1
(for example, SPV_ExtendedInstSetOp).
Since the dispatch functions are also auto-generated, the input file
needs to contain all operations. To this effect, SPIRVGLSLOps.td is
included into SPIRVOps.td. This makes the previously added
SPIRVGLSLOps.h and SPIRVGLSLOps.cpp unnecessary, so they are deleted.
SPIRVUtilsGen.cpp is also changed to make better use of
formatv, making the code more readable.
PiperOrigin-RevId: 269456263

Add spv.FunctionCall operation and (de)serialization.
Closes tensorflow/mlir#137
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/137 from denis0x0D:sandbox/function_call_op e2e6f07d21e7f23e8b44c7df8a8ab784f3356ce4
PiperOrigin-RevId: 269437167
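
A sketch of the op in use (callee names and operands are hypothetical):

  %result = spv.FunctionCall @compute(%arg0) : (i32) -> i32
  spv.FunctionCall @log_event() : () -> ()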

This CL adds support for serializing and deserializing spv.loop ops.
This adds support for spv.Branch and spv.BranchConditional op
(de)serialization, too, because they are needed for spv.loop.
PiperOrigin-RevId: 268536962
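
A sketch of the structure being round-tripped (block names hypothetical;
%cond is defined in the enclosing function); the header, body, continue,
and merge blocks map onto the SPIR-V loop construct:

  spv.loop {
    spv.Branch ^header
  ^header:
    spv.BranchConditional %cond, ^body, ^merge
  ^body:
    spv.Branch ^continue
  ^continue:
    spv.Branch ^header
  ^merge:
    spv._merge
  }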

Each basic block in SPIR-V must start with an OpLabel instruction.
We don't support control flow yet, so this CL just makes sure that
the entry block follows this rule and is valid.
PiperOrigin-RevId: 265718841

Add Block decoration for top-level spv.struct.
Closes tensorflow/mlir#102
PiperOrigin-RevId: 265716241

Only a few important KHR extensions are registered to the
SPIR-V dialect for now.
PiperOrigin-RevId: 264939428

This CL pulls in capabilities defined in the spec and adds support
for (de)serializing the capabilities of a spv.module.
PiperOrigin-RevId: 264877413

In SPIR-V binary format, constants are placed at the module level
and referenced by instructions inside functions using their result
<id>s. To model this natively (using SSA values for result <id>s),
we would need implicitly capturing functions, and we would lose the
ability to run function passes if we went down that path.
Instead, this CL changes to materialize constants at their use
sites during deserialization. It's cheap to copy constants in MLIR
given that attributes are uniqued in the MLIRContext. By localizing
constants into functions, we can preserve isolated functions.
PiperOrigin-RevId: 264582532
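
Conceptually, one module-level OpConstant referenced by two functions
deserializes into a local constant in each, keeping the functions
isolated (a sketch with hypothetical names):

  func @f() -> i32 {
    %c = spv.constant 42 : i32
    spv.ReturnValue %c : i32
  }
  func @g() -> i32 {
    %c = spv.constant 42 : i32
    spv.ReturnValue %c : i32
  }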

Similar to global variables, specialization constants also live
in the module scope and can be referenced by instructions in
functions in native SPIR-V. A direct modelling would be to allow
functions in the SPIR-V dialect to implicitly capture, but that means
losing the ability to write passes over Functions. While in SPIR-V
we normally want to process the module as a whole, we'd like to
leave the door open for function passes. Therefore, similar to
global variables, we introduce spv.specConstant to model three
SPIR-V instructions: OpSpecConstantTrue, OpSpecConstantFalse,
and OpSpecConstant. They do not return SSA value results;
instead they have symbols and can only be referenced by those
symbols. To use one in a function, we need another op,
spv._reference_of, to turn the symbol into an SSA value. This
breaks the tie and keeps functions free of implicit captures.
Previously, specialization constants were handled similarly to
normal constants. That is incorrect given that a specialization
constant actually acts more like a variable (without the need to
load and store); e.g., they cannot be de-duplicated like normal
constants.
This CL also refines various documents and comments.
PiperOrigin-RevId: 264455172
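
A sketch of the symbol-based scheme (names hypothetical):

  spv.specConstant @spec_cond = true
  func @use_spec() -> i1 {
    %0 = spv._reference_of @spec_cond : i1
    spv.ReturnValue %0 : i1
  }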

Support (de)serialization of spv.struct with offset decorations.
Closes tensorflow/mlir#94
PiperOrigin-RevId: 264421427

FuncOps in MLIR use explicit capture. So global variables defined in
module scope need to have a symbol name, and this symbol should be
used to refer to the variable within a function. This deviates from
the SPIR-V spec, which assigns an SSA value to variables at all scopes
that can be used to refer to the variable, and thus requires SPIR-V
functions to allow implicit capture. To handle this, a new op,
spirv::GlobalVariableOp, is added; it can be used to define module-scope
variables.
Since instructions need an SSA value, a new spirv::AddressOfOp is
added to convert a symbol reference to an SSA value for use with other
instructions.
This also means the spirv::EntryPointOp instruction needs to change to
allow its interface variables to be specified using symbol references
instead of SSA values.
The current spirv::VariableOp, which returns an SSA value (as defined
by the SPIR-V spec), can still be used to define function-scope variables.
PiperOrigin-RevId: 263951109
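
A sketch of the pieces working together (symbols, types, and the binding
are hypothetical):

  spv.globalVariable @data bind(0, 0) : !spv.ptr<f32, StorageBuffer>
  func @read_data() -> f32 {
    // turn the symbol into an SSA value usable by other instructions
    %ptr = spv._address_of @data : !spv.ptr<f32, StorageBuffer>
    %val = spv.Load "StorageBuffer" %ptr : f32
    spv.ReturnValue %val : f32
  }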

Extend spv.array with layout info to support (de)serialization.
Closes tensorflow/mlir#80
PiperOrigin-RevId: 263795304

Generate the EnumAttr to represent BuiltIns in the SPIR-V dialect. The
builtin can be specified as a StringAttr whose value is the
name of the builtin. Extend Decoration (de)serialization to handle
BuiltIns.
Also fix an error in the SPIR-V dialect generator script.
PiperOrigin-RevId: 263596624

PiperOrigin-RevId: 262225919

This CL extends the existing spv.constant op to also support
specialization constants, by adding an extra unit attribute on it.
PiperOrigin-RevId: 261194869

All non-argument attributes specified for an operation are treated as
decorations on the result value and (de)serialized using the OpDecorate
instruction. An error is generated if an attribute is not an argument
and its name doesn't correspond to a Decoration enum case. The names of
attributes that represent decorations are the snake-case version of the
Decoration name.
Add utility methods to convert to snake case and camel case.
PiperOrigin-RevId: 260792638

We were relying on the serializer to construct positive cases to drive
the test for the deserializer, which leaves negative cases untested.
This CL adds a basic test fixture covering negative corner cases
to enforce a more robust deserializer.
Common SPIR-V building methods are refactored out of the serializer
to share them with the deserialization test.
PiperOrigin-RevId: 260742733

This CL covers the case of composite spv.constant. We encode/decode
them into/from OpConstantComposite/OpConstantNull.
PiperOrigin-RevId: 259394700
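
A sketch of composite constants that take the OpConstantComposite path
(values hypothetical):

  %v = spv.constant dense<[1, 2, 3]> : vector<3xi32>
  %a = spv.constant [1 : i32, 2 : i32] : !spv.array<2 x i32>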

This CL adds support for float scalar spv.constant in (de)serialization.
PiperOrigin-RevId: 259311776

SPIR-V has multiple constant instructions covering different
constant types:
* `OpConstantTrue` and `OpConstantFalse` for boolean constants
* `OpConstant` for scalar constants
* `OpConstantComposite` for composite constants
* `OpConstantNull` for null constants
* ...
We model them all with a single spv.constant op for uniformity
and friendliness to transformations. This does mean that when
doing (de)serialization, we need to poke spv.constant's type
to determine which SPIR-V binary instruction to use.
This CL only covers the case of bool and integer spv.constant.
The rest will follow.
PiperOrigin-RevId: 259311698
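
For the bool and integer cases covered here (a sketch):

  %t = spv.constant true          // becomes OpConstantTrue
  %f = spv.constant false         // becomes OpConstantFalse
  %i = spv.constant -5 : i32      // becomes OpConstant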

We already have two levels of control in SPIRVBase.td: hasOpcode and
autogenSerialization. The former controls whether to add an entry to
the dispatch table, while the latter controls whether to autogenerate
the op's (de)serialization method specialization. This is enough for
our cases. Remove the indirection from processOp to processOpImpl
to simplify the picture.
PiperOrigin-RevId: 259308711

Since the serialization of EntryPointOp contains the name of the
function as well, the function serialization emits the function name
using the OpName instruction, which is used during deserialization
to recover the correct function name.
PiperOrigin-RevId: 259158784