Consolidate together Bevy's TaskPools [adopted] #18163
Conversation
So I looked into the regression with loading assets, and it seems like the issue is that the ImageLoader is doing longer-running operations on the task pool. There are a few possible solutions here:
My inclination here is to go with (2) and rework this PR to remove the AsyncComputeTaskPool. Long-running async tasks should be using separate threads from the task pool. (3) probably isn't a good idea, since we can't assume that loaders will always be written to behave. (1) seems like a bad idea for the reasons listed above. There might be a (4) with some type of task priority system, but that requires more extensive executor changes. I'll probably explore overprovisioning threads and put up a PR for it if it looks OK.
Fresh trace of Bistro with compressed textures for this PR, for context on the asset loading regression. Edit: #17914 might improve the blocking in prepare assets after the images are loaded.
Yeah, the "separate pools" model was solving a real problem that I don't think we can just fully ignore. I like (2).
Although I see no reason why IO is any different than "async compute". Seems like plenty of "async compute" cases could also benefit from a pooled model? |
Changes from #12090

- Added `spawn_blocking` and `spawn_blocking_async` to `Scope`. This allows the glTF plugin to avoid blocking the compute task pool when running tasks.

Todo

- `set_var` is now unsafe, and the API in the old PR could potentially set the environment variable from multiple threads. I may just expose setting the environment variable in the docs.
- Verify that `Scope::spawn_blocking` is sound.

Objective
Fixes #1907. Spiritual successor to #4740.
Bevy currently creates a 50%/25%/25% split between the Compute, AsyncCompute, and IO task pools, meaning that any given operation can only be scheduled onto that subset of threads. This is suboptimal when an app is not using any IO or async compute (or vice versa): available parallelism goes underutilized because the split does not reflect the actual workload. This PR aims to fix that underutilization.
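To make the underutilization concrete, the split can be sketched with some illustrative arithmetic (a simplification for this discussion, not Bevy's exact `TaskPoolOptions` logic, which has its own clamping and rounding rules):

```rust
// Illustrative sketch of the 50%/25%/25% split described above.
// Not Bevy's exact TaskPoolOptions arithmetic.
fn split_threads(total: usize) -> (usize, usize, usize) {
    let io = (total / 4).max(1); // ~25% reserved for IO
    let async_compute = (total / 4).max(1); // ~25% reserved for AsyncCompute
    // The remainder (~50%) goes to Compute.
    let compute = total.saturating_sub(io + async_compute).max(1);
    (compute, async_compute, io)
}

fn main() {
    // On an 8-core machine, a pure-compute workload can only use 4 threads.
    let (compute, async_compute, io) = split_threads(8);
    println!("compute={compute} async_compute={async_compute} io={io}");
}
```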
Solution

Add `TaskPool::spawn_blocking` as an alternative for blocking IO operations and any task that would previously have been spawned on the AsyncCompute task pool. It is backed by `blocking`'s dynamically scaled thread pool, which spins threads up and down depending on demand.

This allows ECS systems and parallel iterators to be scheduled onto all CPU cores instead of being artificially constrained to half of the available logical cores, and lets typical async IO tasks interleave onto any thread.
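The dynamic scaling idea behind `blocking` can be sketched in simplified, hypothetical form: spawn a new worker thread only when no idle worker is available. (The real crate also retires idle threads after a timeout and caps the thread count; this stand-in does neither.)

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Shared state: a job queue plus a count of idle workers.
struct State {
    queue: VecDeque<Box<dyn FnOnce() + Send>>,
    idle: usize,
}

// Hypothetical, simplified dynamically scaled blocking pool.
struct DynPool {
    inner: Arc<(Mutex<State>, Condvar)>,
}

impl DynPool {
    fn new() -> Self {
        DynPool {
            inner: Arc::new((
                Mutex::new(State { queue: VecDeque::new(), idle: 0 }),
                Condvar::new(),
            )),
        }
    }

    fn spawn_blocking(&self, job: impl FnOnce() + Send + 'static) {
        let (lock, cvar) = &*self.inner;
        let mut state = lock.lock().unwrap();
        state.queue.push_back(Box::new(job));
        if state.idle == 0 {
            // No idle worker: scale the pool up by one thread.
            let inner = Arc::clone(&self.inner);
            thread::spawn(move || worker(inner));
        } else {
            // Wake an idle worker instead of spawning a new thread.
            cvar.notify_one();
        }
    }
}

fn worker(inner: Arc<(Mutex<State>, Condvar)>) {
    let (lock, cvar) = &*inner;
    let mut state = lock.lock().unwrap();
    loop {
        if let Some(job) = state.queue.pop_front() {
            drop(state); // release the lock while running the (blocking) job
            job();
            state = lock.lock().unwrap();
        } else {
            state.idle += 1;
            state = cvar.wait(state).unwrap();
            state.idle -= 1;
        }
    }
}

fn run_demo() -> Vec<i32> {
    let pool = DynPool::new();
    let (tx, rx) = std::sync::mpsc::channel();
    for i in 0..4 {
        let tx = tx.clone();
        pool.spawn_blocking(move || tx.send(i * i).unwrap());
    }
    let mut results: Vec<i32> = rx.iter().take(4).collect();
    results.sort();
    results
}

fn main() {
    println!("{:?}", run_demo()); // [0, 1, 4, 9]
}
```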
The use of `spawn_blocking` to perform CPU-bound operations will have to rely on the OS's preemptive scheduler to properly schedule the threads when the main task pool's threads are sitting idle. This comes with potential context-switching costs, but is generally preferable to choking out the entire app because a task does not cooperatively yield its thread, or to artificially limiting available parallelism.

Note: we're already using `blocking` through `async-fs` for loading assets from disk. This shift primarily moves the remaining blocking IO tasks and the async compute tasks into `blocking`'s thread pool.

Changelog
- Added: `TaskPool::spawn_blocking`
- Added: `TaskPool::spawn_blocking_async`
- Removed: `IoTaskPool`
- Removed: `AsyncComputeTaskPool`
- Changed: `ComputeTaskPool` now spawns a thread for every available logical CPU core by default.

Migration Guide
`IoTaskPool` and `AsyncComputeTaskPool` have been removed and merged with `ComputeTaskPool`. Replace uses of `IoTaskPool::get` and `AsyncComputeTaskPool::get` with `ComputeTaskPool::get`.

If you were spawning futures that rely on non-blocking IO (i.e. the task spends most of its time yielding via `await`), just spawn the task onto the `ComputeTaskPool`.
If you were spawning futures that rely on blocking IO (e.g. `std::fs::File::read_to_string`), use `ComputeTaskPool::spawn_blocking` instead. This will spawn, or reuse, one of the dynamically scaled threads explicitly made for blocking operations.

If you were spawning futures for async compute, also use `ComputeTaskPool::spawn_blocking`; such tasks no longer get their own pool.

If you were spawning long-running tasks that mix blocking and non-blocking work, use `ComputeTaskPool::spawn_blocking_async` instead.
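The `spawn_blocking` semantics described above can be sketched with std threads alone. This is a hypothetical stand-in for illustration only: the real API returns a `Task` and reuses threads via `blocking`'s pool rather than spawning one per call.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for TaskPool::spawn_blocking: run a blocking
// closure on its own OS thread and hand back a receiver for the result.
// The real API returns a Task and reuses threads via `blocking`.
fn spawn_blocking_sketch<T: Send + 'static>(
    f: impl FnOnce() -> T + Send + 'static,
) -> mpsc::Receiver<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(f());
    });
    rx
}

fn main() {
    // The calling (compute) thread stays free while the blocking work,
    // e.g. std::fs::File::read_to_string, happens on a separate thread.
    let rx = spawn_blocking_sketch(|| "file contents".to_string());
    assert_eq!(rx.recv().unwrap(), "file contents");
}
```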