This article surveys programming language memory models: the rules that govern how parallel programs can share memory between threads and still produce predictable results. It shows how synchronizing through ordinary variables can fail, since compiler and hardware optimizations may reorder or eliminate memory accesses, producing infinite loops or stale values. Modern languages resolve this by providing atomic variables or atomic operations, which guarantee that such synchronization patterns terminate with the expected results and are free of data races. The text traces the evolution of memory models across modern languages, culminating in the guarantee of sequential consistency for data-race-free programs, and discusses how lessons from hardware memory models and compiler optimizations have shaped the design of programming language memory models.
https://research.swtch.com/plmm