
Throttle garbage collector frequency in sig_occurred #227

Open
wants to merge 4 commits into main

Conversation

user202729
Contributor

Would fix #215.

(Actually, this is only a workaround until sagemath/sage#24986 is fixed the correct way, i.e. by catching the exception on the Sage side and avoiding destruction of the relevant objects. Still, I don't think there is any case where running the garbage collector rapidly and repeatedly is desirable.)
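
For illustration, a minimal sketch of the throttling idea in plain C (not the actual cysignals code): only trigger a collection if a minimum interval has elapsed since the previous one. `PyGC_Collect` is the CPython C API call; the 0.1 s interval and all names here are hypothetical.

```c
#include <Python.h>
#include <time.h>

/* Hypothetical minimum interval between GC runs, in seconds. */
#define MIN_GC_INTERVAL 0.1

static double last_gc_time = -1.0;

static double monotonic_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Run the Python garbage collector only if enough time has passed
 * since the last run; otherwise do nothing. Assumes the GIL is held. */
static void maybe_collect(void)
{
    double now = monotonic_seconds();
    if (last_gc_time < 0.0 || now - last_gc_time >= MIN_GC_INTERVAL) {
        PyGC_Collect();
        last_gc_time = monotonic_seconds();
    }
}
```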

@user202729
Contributor Author

The current implementation breaks the tests (which makes sense: if test_sig_occurred is run several times consecutively, the subsequent runs will not try to run the garbage collector).

There are other options, e.g. measure how much time has been taken by garbage collector runs and only start throttling once that exceeds 0.1 s cumulatively (which would preserve the tests and the existing behavior as far as possible, but still skip garbage collector runs if things get too slow). Thoughts?
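
A minimal sketch of this alternative, again in plain C with hypothetical names: let the collector run as before, but track the cumulative time spent collecting and skip further runs once a budget (0.1 s here) has been used up.

```c
#include <Python.h>
#include <time.h>

/* Hypothetical cumulative time budget for GC runs, in seconds. */
#define GC_TIME_BUDGET 0.1

static double gc_time_spent = 0.0;

/* Run the Python garbage collector as usual, but stop once the total
 * time spent collecting exceeds the budget. */
static void maybe_collect_with_budget(void)
{
    struct timespec t0, t1;

    if (gc_time_spent >= GC_TIME_BUDGET)
        return;  /* things got too slow; ignore further GC runs */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    PyGC_Collect();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    gc_time_spent += (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}
```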

@user202729
Contributor Author

user202729 commented Apr 5, 2025

@tornaria @dimpase Can I get a review? Thanks.

As I mentioned in the linked issue, the proper fix is to do it "upstream" at the calling site: don't operate on an internal mpz owned by an object collectible by the Python GC; instead, create an mpz first, call the functions that write to it, then copy it into the Python object afterwards. But there are a lot of locations in the Sage source code that call a function writing to an mpz.
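
A minimal sketch of that calling-site pattern, using plain GMP in C (the Sage code in question is Cython, but the idea is the same); `compute_into` and `result_mpz` are placeholders, not real Sage functions:

```c
#include <gmp.h>

/* result_mpz stands in for the mpz owned by a Python object that the
 * garbage collector might deallocate; compute_into stands in for any
 * function that writes its result into an mpz_t. */
void compute_safely(mpz_t result_mpz, void (*compute_into)(mpz_t))
{
    mpz_t tmp;
    mpz_init(tmp);

    /* Work on a local mpz, so an interruption followed by garbage
     * collection of the owning Python object cannot leave the callee
     * writing into freed memory. */
    compute_into(tmp);

    /* Only touch the object's mpz once the computation is complete. */
    mpz_set(result_mpz, tmp);
    mpz_clear(tmp);
}
```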

There's also the option of modifying GMP (set the pointer member to null first, deallocate later), but I don't know how practical that is.
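
For what it's worth, the idea would be something like the following sketch. It is purely illustrative, not how GMP's mpz_clear is actually written, and it assumes the default malloc-based allocator.

```c
#include <gmp.h>
#include <stdlib.h>

/* Detach the limb pointer before freeing it, so an interrupted reader
 * never observes a pointer to already-freed memory. Real GMP would have
 * to go through its configurable free function instead of free(). */
void clear_detach_first(mpz_ptr x)
{
    mp_limb_t *limbs = x->_mp_d;
    x->_mp_d = NULL;
    x->_mp_size = 0;
    x->_mp_alloc = 0;
    free(limbs);
}
```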

Successfully merging this pull request may close these issues.

Nested signal handling leads to excessive slowdown of sig_occurred()