Bug report
Bug description:
```python
from concurrent.futures import ThreadPoolExecutor, wait, ALL_COMPLETED

pending_thread_tasks = set()
with ThreadPoolExecutor(max_workers=2) as thread_executor:
    pending_thread_tasks.add(
        thread_executor.submit(
            get_bucket_objects, source_bucket_name, "source", source_api_key,
            source_ibm_service_instance_id, source_location_constraint, source_cos_credentials
        )
    )
    pending_thread_tasks.add(
        thread_executor.submit(
            get_bucket_objects, target_bucket_name, "target", target_api_key,
            target_ibm_service_instance_id, target_location_constraint, target_cos_credentials
        )
    )
    completed_thread_tasks, _ = wait(pending_thread_tasks, return_when=ALL_COMPLETED)
    for completed_thread_task in completed_thread_tasks:
        bucket_type, objects = completed_thread_task.result()
        if bucket_type == "source":
            LOGGER.info(f"Successfully fetched objects from source bucket '{source_bucket_name}' having "
                        f"id {source_bucket_id}")
            source_objects = objects
        else:
            LOGGER.info(f"Successfully fetched objects from target bucket '{target_bucket_name}' having "
                        f"id {target_bucket_id}")
            target_objects = objects
```
When the Celery task completes, the worker should release its memory, but it does not release it fully: the container still holds about 1 GB of memory after the task finishes when this ThreadPoolExecutor is used. With ProcessPoolExecutor the issue does not occur, but since `objects` is a large list, multiprocessing causes a memory spike from copying and pickling the results, which is why multithreading was chosen. How can this memory retention be avoided in the multithreaded version?
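A minimal, self-contained sketch of the same fan-out pattern, usable to reproduce the retention behaviour outside Celery. The `get_bucket_objects` here is a hypothetical stub standing in for the real IBM COS listing call; it just returns a large list so memory growth is observable:

```python
from concurrent.futures import ThreadPoolExecutor, wait, ALL_COMPLETED

def get_bucket_objects(bucket_name, bucket_type):
    # Stub for the real COS listing call: returns a large list of fake
    # object keys so the worker's memory growth is observable.
    return bucket_type, [f"{bucket_name}/object-{i}" for i in range(100_000)]

pending_thread_tasks = set()
with ThreadPoolExecutor(max_workers=2) as thread_executor:
    pending_thread_tasks.add(thread_executor.submit(get_bucket_objects, "src-bucket", "source"))
    pending_thread_tasks.add(thread_executor.submit(get_bucket_objects, "tgt-bucket", "target"))
    completed_thread_tasks, _ = wait(pending_thread_tasks, return_when=ALL_COMPLETED)

results = {}
for task in completed_thread_tasks:
    bucket_type, objects = task.result()
    results[bucket_type] = objects

# Dropping references to the finished futures lets the result lists be
# garbage-collected once the task is done with them.
pending_thread_tasks.clear()
completed_thread_tasks.clear()
```

Note that even after Python garbage-collects the lists, glibc's allocator may keep the freed pages in per-thread malloc arenas instead of returning them to the OS, so the container's RSS can stay high. Setting `MALLOC_ARENA_MAX` to a small value, or calling `malloc_trim(0)` via `ctypes` on Linux, are commonly suggested mitigations (assumptions worth testing for this workload, not a confirmed fix).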
CPython versions tested on:
3.12
Operating systems tested on:
Linux