Description
Problem Statement
I deployed self-hosted Sentry on a large server. We split the server into many LXC containers and run Docker nested inside those containers.
For example, the physical server has 112 cores, and one LXC container on it has an 8-core CPU limit.
I deployed self-hosted Sentry inside that LXC container.
The problem is that Sentry starts so many workers that it exhausts CPU and memory.
I found that (see the sketch after this list):
- Sentry thinks the CPU count is 112 rather than 8, so it started 112 worker processes.
- `docker exec sentry-worker-1 python3 -c "from multiprocessing import cpu_count; print(cpu_count())"` returns `112`.
- `docker exec sentry-worker-1 python3 -c "import os; print(os.cpu_count())"` returns `112`.
- `docker exec sentry-worker-1 python3 -c "import os; print(len(os.sched_getaffinity(0)))"` returns `8`.
- `docker exec sentry-worker-1 nproc` returns `8`.
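A minimal reproduction of the discrepancy from inside the worker container; the values in the comments reflect this setup (112-core host, 8-core LXC limit):

```python
import multiprocessing
import os

# Both of these report the host's hardware CPUs and ignore the
# cgroup/affinity limit applied to the container.
print(multiprocessing.cpu_count())      # 112 on this setup
print(os.cpu_count())                   # 112 on this setup

# This reports only the CPUs the process is actually allowed to run on,
# so it respects the 8-core LXC limit.
print(len(os.sched_getaffinity(0)))     # 8 on this setup
```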
Solution Brainstorm
I think we should replace all uses of `cpu_count()` with `len(os.sched_getaffinity(0))`.
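A minimal sketch of what that replacement could look like, using a hypothetical helper name (`_affinity_cpu_count` is not an existing Sentry function); since `os.sched_getaffinity` is Linux-only, the sketch keeps `os.cpu_count()` as a fallback:

```python
import os


def _affinity_cpu_count() -> int:
    """Number of CPUs this process is actually allowed to use.

    Hypothetical helper for illustration only. os.sched_getaffinity is
    Linux-only, so fall back to os.cpu_count() where it is unavailable.
    """
    if hasattr(os, "sched_getaffinity"):
        return len(os.sched_getaffinity(0))
    return os.cpu_count() or 1


# Example: size the worker pool from the affinity-aware count instead of
# multiprocessing.cpu_count(), so an 8-core LXC limit yields 8 workers.
worker_count = _affinity_cpu_count()
```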
Product Area
Performance