author     Dean Michael Berris <dberris@google.com>    2018-04-29 13:46:30 +0000
committer  Dean Michael Berris <dberris@google.com>    2018-04-29 13:46:30 +0000
commit     26e81209ef843232e7a01a420e3d3ef5781e8b3f
tree       91b761f53899604b2b94cade1e028d544740c63e /compiler-rt/lib/xray/xray_allocator.h
parent     2d569361fcc7ff8969ead46457a75c217eb6f006
[XRay][profiler] Part 1: XRay Allocator and Array Implementations
Summary:
This change is part of the larger XRay Profiling Mode effort.
Here we implement an arena allocator for the fixed-sized buffers used in a
segmented array implementation. This change also adds the segmented array
data structure, which relies on the allocator to provide and maintain the
storage for its segments.
Key features of the `Allocator` type:
* It uses cache-aligned blocks intended to host the actual data. Each
block is a contiguous run of bytes whose size is a multiple of the
cache-line size.
* The `Allocator` has a maximum memory budget, set at construction
time. This allows us to cap the amount of data each specific
`Allocator` instance is responsible for.
* Upon destruction, the `Allocator` cleans up the storage it has used,
handing it back to the internal allocator used in sanitizer_common. (A
usage sketch follows this list.)
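To make the interface concrete, here is a minimal usage sketch against the
header added in the diff below; the call site itself is hypothetical, and
the 48-byte block size and 1 MiB budget are arbitrary examples:

    #include "xray_allocator.h"

    using namespace __xray;

    void ExampleUse() {
      // Hand out blocks of at least 48 bytes each, capped at a 1 MiB budget.
      // The second parameter (PreAllocate) is not yet implemented.
      Allocator<48> A(1 << 20, 0);
      auto B = A.Allocate();
      if (B.Data == nullptr)
        return; // Over the memory budget; Allocate() returned an empty Block.
      // B.Data points to Allocator<48>::Block::Size contiguous, cache-aligned
      // bytes. There is no explicit free: the storage is reclaimed when A is
      // destroyed.
    }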
Key features of the `Array` type:
* Each segmented array is always backed by an `Allocator`, which is
either user-provided or uses a global allocator.
* When an `Array` grows, it grows by appending a fixed-sized segment. The
size of each segment is determined by how many elements of type `T` fit
into a multiple of the cache-line size.
* An `Array` does not return memory to the `Allocator`, but it can keep
track of the current number of "live" objects it stores.
* When an `Array` is destroyed, it will not return memory to the
`Allocator`. Users should clean up the `Allocator` independently of
the `Array`.
* The `Array` type keeps a freelist of the chunks it has used before, so
that trimming and growing re-use previously allocated chunks, as the
sketch after this list illustrates.
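The `Array` implementation itself is not part of this file's diff, so the
behavior described above is only sketched here. This is a simplified
illustration with invented names (SegmentedArraySketch, Append, TrimLast),
not the actual xray_segmented_array.h API, and it uses plain operator
new/delete in place of the backing `Allocator`:

    #include <cstddef>
    #include <new>
    #include <vector>

    template <class T> class SegmentedArraySketch {
      static constexpr size_t CacheLineSize = 64; // Assumed line size.
      // Each segment holds as many T's as fit in one cache line, at least 1.
      static constexpr size_t ElementsPerSegment =
          CacheLineSize / sizeof(T) ? CacheLineSize / sizeof(T) : 1;

      std::vector<T *> Segments; // Stand-in for the chain of segments.
      size_t Size = 0;           // Number of "live" elements.

    public:
      void Append(const T &E) {
        // Only allocate a new segment when we have grown past the ones we
        // already own; after a trim, old segments are re-used (the freelist).
        if (Size / ElementsPerSegment == Segments.size())
          Segments.push_back(static_cast<T *>(
              ::operator new(ElementsPerSegment * sizeof(T))));
        new (Segments[Size / ElementsPerSegment] + Size % ElementsPerSegment)
            T(E);
        ++Size;
      }

      // Trimming only drops the live count; memory stays with the Array.
      // (Assumes trivially destructible T, to keep the sketch short.)
      void TrimLast() {
        if (Size)
          --Size;
      }

      ~SegmentedArraySketch() {
        for (T *S : Segments)
          ::operator delete(S);
      }
    };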
These basic data structures are used by the XRay Profiling Mode
implementation to provide efficient, cache-aware storage for data that is
typically read- and write-heavy when tracking latency information. We rely
on the cache-line characteristics of the architecture to give us good data
isolation and cache friendliness when performing operations such as
searching for elements or updating data hosted in these cache lines.
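As a concrete instance of the sizing math (mirroring the Block::Size
computation in the diff below, and assuming a 64-byte cache line):

    #include <cstddef>

    constexpr size_t CacheLineSize = 64; // Assumption for the example.

    // Round N up to the next multiple of the cache-line size.
    template <size_t N>
    constexpr size_t BlockSize =
        CacheLineSize * ((N / CacheLineSize) + (N % CacheLineSize ? 1 : 0));

    static_assert(BlockSize<16> == 64, "16 bytes round up to one line");
    static_assert(BlockSize<64> == 64, "exact multiples are unchanged");
    static_assert(BlockSize<100> == 128, "100 bytes round up to two lines");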
Reviewers: echristo, pelikan, kpw
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D45756
llvm-svn: 331141
Diffstat (limited to 'compiler-rt/lib/xray/xray_allocator.h')
-rw-r--r--  compiler-rt/lib/xray/xray_allocator.h  149
1 file changed, 149 insertions, 0 deletions
diff --git a/compiler-rt/lib/xray/xray_allocator.h b/compiler-rt/lib/xray/xray_allocator.h
new file mode 100644
index 00000000000..8f8e13a7952
--- /dev/null
+++ b/compiler-rt/lib/xray/xray_allocator.h
@@ -0,0 +1,149 @@
+//===-- xray_allocator.h ---------------------------------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file is a part of XRay, a dynamic runtime instrumentation system.
+//
+// Defines the allocator interface for an arena allocator, used primarily for
+// the profiling runtime.
+//
+//===----------------------------------------------------------------------===//
+#ifndef XRAY_ALLOCATOR_H
+#define XRAY_ALLOCATOR_H
+
+#include "sanitizer_common/sanitizer_allocator_internal.h"
+#include "sanitizer_common/sanitizer_common.h"
+#include "sanitizer_common/sanitizer_internal_defs.h"
+#include "sanitizer_common/sanitizer_mutex.h"
+#include <cstddef>
+#include <cstdint>
+
+namespace __xray {
+
+/// The Allocator type hands out fixed-sized chunks of memory that are
+/// cache-line aligned and sized. This is useful for placement of
+/// performance-sensitive data in memory that's frequently accessed. The
+/// allocator also self-limits the peak memory usage to a dynamically defined
+/// maximum.
+///
+/// N is the lower bound on the size of the blocks returned from the
+/// allocation function. We round N up to a multiple of the cache-line size by
+/// determining how many cache lines' worth of memory are required to contain
+/// N bytes.
+template <size_t N> struct Allocator {
+  // The Allocator returns memory as Block instances.
+  struct Block {
+    /// Compute the minimum cache-line size multiple that is >= N.
+    static constexpr auto Size =
+        kCacheLineSize * ((N / kCacheLineSize) + (N % kCacheLineSize ? 1 : 0));
+    void *Data = nullptr;
+  };
+
+private:
+  // A BlockLink contains a fixed number of blocks and a pointer to the
+  // previous link in the chain. We call it a "link" because we treat these
+  // objects as links in a chain of block containers.
+  struct BlockLink {
+    static_assert(kCacheLineSize % sizeof(void *) == 0,
+                  "Cache line size is not divisible by size of void*; none of "
+                  "the assumptions of the BlockLink will hold.");
+
+    // We compute the number of block pointers that fit in a single cache
+    // line. To ensure that instances of the BlockLink object are themselves
+    // cache-line sized, we deduct one pointer's worth, which represents the
+    // pointer to the previous link.
+    //
+    // This corresponds to the following layout:
+    //
+    //   Blocks [0, 1, 2, ..., BlockPtrCount - 1] | Prev
+    //
+    static constexpr auto BlockPtrCount =
+        (kCacheLineSize / sizeof(Block *)) - 1;
+
+    // FIXME: Align this to cache-line address boundaries?
+    Block Blocks[BlockPtrCount]{};
+    BlockLink *Prev = nullptr;
+  };
+
+  static_assert(sizeof(BlockLink) == kCacheLineSize,
+                "BlockLink instances must be cache-line-sized.");
+
+  static BlockLink NullLink;
+
+  // FIXME: Implement a freelist, in case we actually do intend to return
+  // memory to the allocator, as opposed to de-allocating everything in one
+  // go?
+
+  size_t MaxMemory;
+  SpinMutex Mutex{};
+  BlockLink *Tail = &NullLink;
+  size_t Counter = 0;
+
+  // Allocate a new link along with one contiguous backing store for all of
+  // its blocks, then carve the backing store up among the blocks.
+  BlockLink *NewChainLink() {
+    auto NewChain = reinterpret_cast<BlockLink *>(
+        InternalAlloc(sizeof(BlockLink), nullptr, kCacheLineSize));
+    auto BackingStore = reinterpret_cast<char *>(InternalAlloc(
+        BlockLink::BlockPtrCount * Block::Size, nullptr, kCacheLineSize));
+    size_t Offset = 0;
+    DCHECK_NE(NewChain, nullptr);
+    DCHECK_NE(BackingStore, nullptr);
+    for (auto &B : NewChain->Blocks) {
+      B.Data = BackingStore + Offset;
+      Offset += Block::Size;
+    }
+    NewChain->Prev = Tail;
+    return NewChain;
+  }
+
+public:
+  Allocator(size_t M, size_t PreAllocate) : MaxMemory(M) {
+    // FIXME: Implement PreAllocate support!
+  }
+
+  Block Allocate() {
+    SpinMutexLock Lock(&Mutex);
+    // Check whether we're over quota; if so, hand back an empty Block.
+    if (Counter * Block::Size >= MaxMemory)
+      return {};
+
+    size_t ChainOffset = Counter % BlockLink::BlockPtrCount;
+
+    Block B{};
+    BlockLink *Link = Tail;
+    if (UNLIKELY(Counter == 0 || ChainOffset == 0))
+      Tail = Link = NewChainLink();
+
+    B = Link->Blocks[ChainOffset];
+    ++Counter;
+    return B;
+  }
+
+  ~Allocator() NOEXCEPT {
+    // We need to deallocate all the blocks, including the chain links.
+    for (auto *C = Tail; C != &NullLink;) {
+      // The blocks in a link share one contiguous backing store, so freeing
+      // the first block's pointer releases all of them at once.
+      InternalFree(C->Blocks[0].Data);
+      auto Prev = C->Prev;
+      InternalFree(C);
+      C = Prev;
+    }
+  }
+};
+
+// Storage for the NullLink sentinel.
+template <size_t N>
+typename Allocator<N>::BlockLink Allocator<N>::NullLink;
+
+} // namespace __xray
+
+#endif // XRAY_ALLOCATOR_H