Description
Location
https://doc.rust-lang.org/std/sync/struct.Mutex.html, including at least:
https://doc.rust-lang.org/std/sync/struct.Mutex.html#method.lock
https://doc.rust-lang.org/std/sync/struct.Mutex.html#method.try_lock
https://doc.rust-lang.org/std/sync/struct.MutexGuard.html
Summary
See https://marabos.nl/atomics/memory-ordering.html#example-locking, emphasis mine:
Mutexes are the most common use case for release and acquire ordering (see "Locking: Mutexes and RwLocks" in Chapter 1). When locking, they use an atomic operation to check if it was unlocked, using acquire ordering, while also (atomically) changing the state to "locked." When unlocking, they set the state back to "unlocked" using release ordering. This means that there will be a happens-before relationship between unlocking a mutex and subsequently locking it.
Many people already rely on this guarantee, but the documentation for Mutex does not state it. We should explicitly guarantee this in the documentation for Mutex.
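To make the acquire/release pairing concrete, here is a minimal spinlock sketch of my own (not std's actual Mutex implementation) whose lock and unlock use exactly the orderings described in the quote:

use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical SpinLock type, for illustration only.
pub struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    pub const fn new() -> Self {
        SpinLock { locked: AtomicBool::new(false) }
    }

    pub fn lock(&self) {
        // Acquire: when this succeeds, it synchronizes-with the Release store
        // in unlock(), so everything the previous holder wrote while holding
        // the lock is visible to us.
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        // Release: publishes every write made while the lock was held to the
        // next thread that acquires the lock.
        self.locked.store(false, Ordering::Release);
    }
}

With these orderings, unlocking and then locking establishes the happens-before relationship described in the quote, which is the guarantee this issue asks to have documented.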
In theory, a mutex could have "consume" semantics instead of acquire/release semantics, such that reads/writes to the "contents" of a mutex (the T in Mutex<T>) and pointers/references contained therein are treated as "consume" loads/stores (even if Rust's atomics API doesn't support consume ordering). In practice, people have already written much code that relies on the acquire/release semantics.
Or, in theory, a mutex could have SeqCst semantics during lock and unlock, but I think we don't want people to rely on that (though I have already seen code that accidentally does).
Why does this matter? Consider:
use std::{
    fs, io,
    os::fd::{IntoRawFd as _, RawFd},
    sync::{
        atomic::{AtomicI32, Ordering},
        Mutex,
    },
};

fn get_lazy_fd() -> Result<RawFd, io::Error> {
    const FD_UNINIT: i32 = -1; // libstd guarantees a RawFd is never -1.

    // FD isn't stored within the mutex so we can usually use it
    // without locking overhead.
    static FD: AtomicI32 = AtomicI32::new(FD_UNINIT);
    static LOCK: Mutex<()> = Mutex::new(());

    match FD.load(Ordering::Relaxed) {
        fd if fd != FD_UNINIT => Ok(fd), // Common case: fast, no Acquire at all.
        _ => {
            let _guard = LOCK.lock().unwrap();
            // Re-check under the lock: another thread may have initialized FD.
            let fd = FD.load(Ordering::Relaxed);
            if fd != FD_UNINIT {
                return Ok(fd);
            }
            let fd = fs::File::open("/dev/urandom")?.into_raw_fd();
            FD.store(fd, Ordering::Relaxed);
            Ok(fd)
        }
    }
}
We want some official reassurance of the "obvious" fact that the FD.store(fd, Ordering::Relaxed) is sequenced before the unlocking of the Mutex, i.e. that the mutex performs an atomic release when it is unlocked, so that the side effect of storing to FD is seen by another thread after it locks the mutex. That is only guaranteed if locking the mutex performs an atomic acquire that synchronizes with that atomic release.
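As a concrete usage sketch (my own, not part of the original example) of get_lazy_fd above, consider two threads racing to initialize FD; the final assertion relies on the release/acquire pairing just described:

fn main() -> Result<(), io::Error> {
    // Both threads call get_lazy_fd. Whichever thread loses the race must
    // still observe the winner's Relaxed store to FD, either on the fast
    // path or after re-checking under the lock.
    let a = std::thread::spawn(get_lazy_fd);
    let b = std::thread::spawn(get_lazy_fd);
    let fd_a = a.join().unwrap()?;
    let fd_b = b.join().unwrap()?;
    // Only one fd is ever opened and published, so both threads see it.
    assert_eq!(fd_a, fd_b);
    Ok(())
}

Without the happens-before relationship between unlock and lock, the losing thread's re-check under the lock could still read FD_UNINIT and open a second file, and the assertion could fail.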
Documenting this guarantee would also clarify the more basic fact that in a Mutex<T>, there is nothing special about what is "contained" in the Mutex, as far as synchronization is concerned. (This is unique to Rust; C/C++/POSIX mutexes don't have the concept of the mutex containing a value.)
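To illustrate that last point, here is a small sketch of my own (assuming the release/acquire behavior this issue asks to document): guarding data stored inside the Mutex and data stored outside of it gives the same synchronization, because the lock and unlock operations, not the container, do the synchronizing.

use std::sync::{
    atomic::{AtomicU64, Ordering},
    Mutex,
};

// Data stored inside the mutex.
static INSIDE: Mutex<u64> = Mutex::new(0);

// Equivalent data stored outside the mutex, with a Mutex<()> used purely
// for mutual exclusion and (assumed) release/acquire synchronization.
static OUTSIDE: AtomicU64 = AtomicU64::new(0);
static OUTSIDE_LOCK: Mutex<()> = Mutex::new(());

fn bump_inside() {
    *INSIDE.lock().unwrap() += 1;
}

fn bump_outside() {
    let _guard = OUTSIDE_LOCK.lock().unwrap();
    // Relaxed suffices here only because lock/unlock are assumed to provide
    // the acquire/release pairing; the lock, not the container, synchronizes.
    let v = OUTSIDE.load(Ordering::Relaxed);
    OUTSIDE.store(v + 1, Ordering::Relaxed);
}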