author    | Zachary Turner <zturner@google.com> | 2018-06-13 19:29:16 +0000
committer | Zachary Turner <zturner@google.com> | 2018-06-13 19:29:16 +0000
commit    | 1b76a128a8a6610c3063cc105bfb2cb2857ddcdf (patch)
tree      | 6311a83450b7ca36e02291ed8f82ab4d8cbdb78e /llvm/lib/Support/ThreadPool.cpp
parent    | 9d6fabf9e37b1c2a69ba5d15b1dcbcb044990dc6 (diff)
download  | bcm5719-llvm-1b76a128a8a6610c3063cc105bfb2cb2857ddcdf.tar.gz, bcm5719-llvm-1b76a128a8a6610c3063cc105bfb2cb2857ddcdf.zip
Enable ThreadPool to support tasks that return values.
Previously ThreadPool could only queue async "jobs", i.e. work
that was done for its side effects and not for its result. It's
useful occasionally to queue async work that returns a value.
From an API perspective, this is very intuitive. The previous
API just returned a shared_future<void>, so all we need to do is
make it return a shared_future<T>, where T is the type of value
that the operation returns.
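A minimal usage sketch of the new behavior follows. The `ThreadPool(unsigned ThreadCount)` constructor comes from the patch below; the header path and the `async` member are the usual LLVM ThreadPool API, and the lambda and its values are made up for illustration:

```cpp
#include "llvm/Support/ThreadPool.h"

int main() {
  llvm::ThreadPool Pool(4); // ThreadPool(unsigned ThreadCount), as in the patch

  // Queue work for its result: the returned future is now typed to the
  // callable's return type instead of void.
  std::shared_future<int> Sum = Pool.async([] { return 2 + 3; });

  // get() blocks until the task has run on a worker thread.
  return Sum.get() == 5 ? 0 : 1;
}
```

Because the result still comes back as a std::shared_future, several consumers can wait on and read the same task's value.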
Making this work required a little magic, but ultimately it's not
too bad. Instead of keeping a shared queue<packaged_task<void()>>
we just keep a shared queue<unique_ptr<TaskBase>>, where TaskBase
is a class with a pure virtual execute() method, then have a
templated derived class that stores a packaged_task<T()>. Everything
else works out pretty cleanly.
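For readers skimming the diff, the pattern looks roughly like the sketch below. TaskBase and its execute() method are named in this change; TypedTask, enqueue, and the omitted queue locking and notification are illustrative stand-ins, not the code from the patch:

```cpp
#include <future>
#include <memory>
#include <queue>

// Type-erased interface stored in the shared queue; worker threads only
// ever see this base class.
struct TaskBase {
  virtual ~TaskBase() = default;
  virtual void execute() = 0;
};

// Templated wrapper owning the typed packaged_task; execute() just runs it,
// which fulfils the promise behind the typed future handed to the caller.
template <typename T> struct TypedTask : TaskBase {
  explicit TypedTask(std::packaged_task<T()> PT) : Task(std::move(PT)) {}
  void execute() override { Task(); }
  std::packaged_task<T()> Task;
};

// Sketch of enqueueing: wrap the callable, keep the typed future, and push
// only the type-erased pointer onto the queue (locking omitted for brevity).
template <typename Callable>
auto enqueue(std::queue<std::unique_ptr<TaskBase>> &Tasks, Callable F)
    -> std::shared_future<decltype(F())> {
  std::packaged_task<decltype(F())()> PackagedTask(std::move(F));
  auto Future = PackagedTask.get_future().share();
  Tasks.push(std::make_unique<TypedTask<decltype(F())>>(std::move(PackagedTask)));
  return Future;
}
```

This keeps the queue homogeneous (unique_ptr<TaskBase>) while each queued task still fulfils a correctly typed promise.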
Differential Revision: https://reviews.llvm.org/D48115
llvm-svn: 334643
Diffstat (limited to 'llvm/lib/Support/ThreadPool.cpp')
-rw-r--r-- | llvm/lib/Support/ThreadPool.cpp | 21
1 file changed, 2 insertions, 19 deletions
```diff
diff --git a/llvm/lib/Support/ThreadPool.cpp b/llvm/lib/Support/ThreadPool.cpp
index d0212ca1346..fef665ba3d1 100644
--- a/llvm/lib/Support/ThreadPool.cpp
+++ b/llvm/lib/Support/ThreadPool.cpp
@@ -32,7 +32,7 @@ ThreadPool::ThreadPool(unsigned ThreadCount)
   for (unsigned ThreadID = 0; ThreadID < ThreadCount; ++ThreadID) {
     Threads.emplace_back([&] {
       while (true) {
-        PackagedTaskTy Task;
+        std::unique_ptr<TaskBase> Task;
         {
           std::unique_lock<std::mutex> LockGuard(QueueLock);
           // Wait for tasks to be pushed in the queue
@@ -54,7 +54,7 @@ ThreadPool::ThreadPool(unsigned ThreadCount)
           Tasks.pop();
         }
         // Run the task we just grabbed
-        Task();
+        Task->execute();

         {
           // Adjust `ActiveThreads`, in case someone waits on ThreadPool::wait()
@@ -79,23 +79,6 @@ void ThreadPool::wait() {
                          [&] { return !ActiveThreads && Tasks.empty(); });
 }

-std::shared_future<void> ThreadPool::asyncImpl(TaskTy Task) {
-  /// Wrap the Task in a packaged_task to return a future object.
-  PackagedTaskTy PackagedTask(std::move(Task));
-  auto Future = PackagedTask.get_future();
-  {
-    // Lock the queue and push the new task
-    std::unique_lock<std::mutex> LockGuard(QueueLock);
-
-    // Don't allow enqueueing after disabling the pool
-    assert(EnableFlag && "Queuing a thread during ThreadPool destruction");
-
-    Tasks.push(std::move(PackagedTask));
-  }
-  QueueCondition.notify_one();
-  return Future.share();
-}
-
 // The destructor joins all threads, waiting for completion.
 ThreadPool::~ThreadPool() {
   {
```