
The new multiprocessing.[R]Lock.locked() method fails. #132561

@YvesDup


Bug report

Bug description:

Maybe I didn't quite understand what this feature does, but I think there is a bug when using the locked() method with a multiprocessing.[R]Lock.

Here is an example:

import multiprocessing as mp

def acq(lock, event):
    # Acquire the lock in a child process, then report its state there.
    lock.acquire()
    print(f'Acq: {lock = }')
    print(f'Acq: {lock.locked() = }')
    event.set()

def main():
    lock = mp.Lock()
    event = mp.Event()
    p = mp.Process(target=acq, args=(lock, event))
    p.start()
    # Wait until the child holds the lock, then check locked() from the parent.
    event.wait()
    print(f'Main: {lock = }')
    print(f'Main: {lock.locked() = }')

if __name__ == "__main__":
    mp.freeze_support()
    main()

The output is:

Acq: lock = <Lock(owner=Process-1)>
Acq: lock.locked() = True
Main: lock = <Lock(owner=SomeOtherProcess)>
Main: lock.locked() = False

In the locked method, the test self._semlock._count() != 0 is not appropriate. The internal count is really used by multiprocessing.RLock to count the number of reentrant calls to acquire made by the owning thread.
With multiprocessing.Lock, this count is simply set to 1 when the lock is acquired (it can only be acquired once).

In any case, this value is only visible to other threads of the acquiring process, not to the other processes sharing the [R]Lock.

IMO the test should be replaced with self._semlock._is_zero(), and the example above should also be added as a unit test.
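
For reference, here is a minimal sketch of what the suggested replacement could look like in Lib/multiprocessing/synchronize.py, assuming SemLock._is_zero() returns True when the shared semaphore value is zero, i.e. when the lock is currently held by some process or thread:

def locked(self):
    # Current check: self._semlock._count() != 0
    # _count() is the per-process recursion count, so it is only
    # meaningful inside the process that acquired the lock.
    # _is_zero() queries the shared semaphore value instead, which is
    # visible to every process sharing the lock.
    return self._semlock._is_zero()

With such a change, lock.locked() in the main process of the example above would be expected to report True while Process-1 holds the lock, matching the behaviour seen inside the acquiring process.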


CPython versions tested on:

CPython main branch

Operating systems tested on:

macOS
