@Amanieu provided the following example here, where the `Channel` is intended to be placed in inter-process shared memory:
```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

// Single-producer single-consumer channel with a capacity of 1.
pub struct Channel<T: Copy> {
    control: AtomicBool,
    data: UnsafeCell<T>,
}

// I'm keeping the code simple here by having both read() and write()
// available on Channel. A real implementation would split the ends into
// a ReadChannel and WriteChannel to ensure only a single producer/consumer.
impl<T: Copy> Channel<T> {
    pub fn read(&self) -> T {
        // Wait for the channel to be full.
        while !self.control.load(Ordering::Acquire) {}
        // Read the data.
        let data = unsafe { *self.data.get() };
        // Mark the channel as empty.
        self.control.store(false, Ordering::Release);
        data
    }

    pub fn write(&self, data: T) {
        // Wait for the channel to be empty.
        while self.control.load(Ordering::Acquire) {}
        // Write the data.
        unsafe {
            *self.data.get() = data;
        }
        // Mark the channel as full.
        self.control.store(true, Ordering::Release);
    }
}
```
- If Rust proves that the Rust process is single-threaded, is it a sound optimization to replace the atomic loads/stores of the `AtomicBool` with non-atomic memory accesses? If that optimization is sound, a different process attempting to access the `bool` concurrently introduces a data race, and the behavior is undefined.
- What assumptions does LLVM make?
Note: replacing the atomic loads and stores with volatile atomic loads and stores would ensure that the example above is correct independently of the answer to these questions.