author    | Alex Zinenko <zinenko@google.com> | 2020-01-07 20:00:54 +0100
committer | Alex Zinenko <zinenko@google.com> | 2020-01-09 10:06:00 +0100
commit    | 08778d8c4fd8a6519c7f27bfa6b09c47262cb844 (patch)
tree      | 195cfbe336a349d0406283006228c0f8ed4e4cbb /mlir/test/lib/Transforms/TestGpuMemoryPromotion.cpp
parent    | e93e0d413f3afa1df5c5f88df546bebcd1183155 (diff)
[mlir][GPU] introduce utilities for promotion to workgroup memory
Introduce a set of functions that promote a memref argument of a `gpu.func` to
workgroup memory using memory attribution. The promotion boils down to
additional loops that copy from the original argument to the attributed memory
at the beginning of the function, and back at the end, using all available
threads. The loop bounds are chosen so that the copies adapt to any workgroup
size. These utilities are intended to compose with other existing utilities
(loop coalescing and tiling) in cases where the distribution of work across
threads is uneven, e.g. copying a 2D memref with only the threads along the
"x" dimension. Similarly, specialization of the kernel to specific launch sizes
should be implemented as a separate pass combining constant propagation and
canonicalization.
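As a schematic sketch of the result (the module, kernel, and buffer names and
the memref shape below are made up, and the generated copy loops are only
summarized in comments rather than shown as the exact IR), promotion rewrites a
gpu.func so that the argument's data is staged through a workgroup attribution:

gpu.module @kernels {
  gpu.func @promoted(%arg0: memref<5xf32>) workgroup(%buf: memref<5xf32, 3>) kernel {
    // Copy-in loops inserted at the top: all workgroup threads cooperatively
    // copy %arg0 into the workgroup attribution %buf.
    // ... original body, rewritten to use %buf instead of %arg0 ...
    // Copy-out loops inserted before the return copy %buf back into %arg0.
    gpu.return
  }
}

The copy loops are emitted so that their bounds adapt to whatever workgroup size
the kernel is launched with, which is also what lets them compose with the loop
coalescing and tiling utilities mentioned above.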
Introduce a simple attribute-driven pass to test the promotion transformation,
since there is no heuristic for it at the moment.
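For example, a test case would annotate the arguments to promote and run the
module through mlir-opt with the -test-gpu-memory-promotion flag registered
below (the kernel itself is a hypothetical input; the attribute and flag names
come from this patch):

gpu.module @kernels {
  // Only %arg0 carries the unit attribute, so only it is promoted.
  gpu.func @annotated(%arg0: memref<5xf32> {gpu.test_promote_workgroup},
                      %arg1: memref<5xf32>) kernel {
    gpu.return
  }
}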
Differential revision: https://reviews.llvm.org/D71904
Diffstat (limited to 'mlir/test/lib/Transforms/TestGpuMemoryPromotion.cpp')
-rw-r--r-- | mlir/test/lib/Transforms/TestGpuMemoryPromotion.cpp | 40
1 file changed, 40 insertions, 0 deletions
diff --git a/mlir/test/lib/Transforms/TestGpuMemoryPromotion.cpp b/mlir/test/lib/Transforms/TestGpuMemoryPromotion.cpp
new file mode 100644
index 00000000000..ee0291827fa
--- /dev/null
+++ b/mlir/test/lib/Transforms/TestGpuMemoryPromotion.cpp
@@ -0,0 +1,40 @@
+//===- TestGPUMemoryPromotionPass.cpp - Test pass for GPU promotion -------===//
+//
+// Part of the MLIR Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the pass testing the utilities for moving data across
+// different levels of the GPU memory hierarchy.
+//
+//===----------------------------------------------------------------------===//
+
+#include "mlir/Dialect/GPU/GPUDialect.h"
+#include "mlir/Dialect/GPU/MemoryPromotion.h"
+#include "mlir/IR/Attributes.h"
+#include "mlir/Pass/Pass.h"
+
+using namespace mlir;
+
+namespace {
+/// Simple pass for testing the promotion to workgroup memory in GPU functions.
+/// Promotes all arguments with "gpu.test_promote_workgroup" attribute. This
+/// does not check whether the promotion is legal (e.g., amount of memory used)
+/// or beneficial (e.g., makes previously uncoalesced loads coalesced).
+class TestGpuMemoryPromotionPass
+    : public OperationPass<TestGpuMemoryPromotionPass, gpu::GPUFuncOp> {
+  void runOnOperation() override {
+    gpu::GPUFuncOp op = getOperation();
+    for (unsigned i = 0, e = op.getNumArguments(); i < e; ++i) {
+      if (op.getArgAttrOfType<UnitAttr>(i, "gpu.test_promote_workgroup"))
+        promoteToWorkgroupMemory(op, i);
+    }
+  }
+};
+} // end namespace
+
+static PassRegistration<TestGpuMemoryPromotionPass> registration(
+    "test-gpu-memory-promotion",
+    "Promotes the annotated arguments of gpu.func to workgroup memory.");