> Rewrite your code in rust, and get something better than the go race detector every time you compile.<p>Congrats, rustc forced you to wrap all your types in Arc<Mutex<_>>, and you no longer have data races. As a gift, you get logical race conditions instead, which are even harder to detect, while being equally difficult to reproduce reliably in unit tests and to patch.<p>Don’t get me wrong, Rust has done a ton for safety and pushed other languages to do better. I love probably 50% of Rust. But Rust doesn’t protect against logical races, livelocks, deadlocks, and so on.<p>Writing concurrent programs that meet the same standards of testability, composability, expressiveness, etc. that we expect of sequential programs is really, really difficult. Either we need new languages, frameworks, or (best case) design and architectural patterns that are <i>easy</i> to apply. As far as I’m concerned, large-scale general-purpose concurrent software development is an unsolved problem.
As a sibling said, Go has all the same deadlocks, livelocks, etc. you point out that Rust doesn't cover, in addition to having the data races that Rust would prevent.<p>But Go also has much worse semantics around things like mutexes, making deadlocks much more likely. In Go, you see all sorts of "mu.Lock(); f(); mu.Unlock()" type code, where if it's called inside an `http.Handler` and 'f' panics, the program is deadlocked forever. In Go, panics are the expected way for http middleware to abort a handler ("panic(http.ErrAbortHandler)"). In Rust, panics are expected to actually be fatal.<p>Rust's mutexes also gate "ownership" of the inner object, which turns a lot of trivial deadlocks into compiler errors, while Go makes it absolutely trivial to forget a "mu.Unlock" in a specific codepath, or to call 'Lock' twice in a case Rust's ownership rules would have caught.<p>In practice, for similarly sized codebases and similarly experienced engineers, I see only a tiny fraction of the deadlocks in concurrent Rust code that I see in concurrent Go code. So regardless of whether it's an "unsolved problem", it's clear that in reality something is at least sort of working.
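To make the panic-while-locked contrast concrete, here's a minimal sketch (my example, not from the comment) of how Rust's RAII `MutexGuard` behaves when the critical section panics: unwinding still releases the lock and marks the mutex "poisoned", rather than leaving it held forever:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(0u32));

    // A thread panics while holding the lock. The MutexGuard is a
    // RAII object, so unwinding still releases the lock -- the mutex
    // is merely marked "poisoned" instead of staying held forever.
    let d = Arc::clone(&data);
    let worker = thread::spawn(move || {
        let _guard = d.lock().unwrap();
        panic!("handler blew up");
    });
    assert!(worker.join().is_err());

    // The next lock() returns an error reporting the poisoning
    // instead of deadlocking; the value is still recoverable.
    let poisoned = data.lock().unwrap_err();
    assert_eq!(*poisoned.into_inner(), 0);
}
```

The Go equivalent without `defer mu.Unlock()` would leave the mutex locked after the panic is recovered, which is exactly the `http.Handler` deadlock described above.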
I may be biased, as I definitely love more than 50% of Rust, but Go also does not protect against logical races, deadlocks, etc.<p>I have heard positive things about the loom crate[1] for detecting races in general, but I have not used it much myself.<p>But in general I agree, writing correct (and readable) concurrent and/or parallel programs is hard. No language has "solved" the problem completely.<p>[1]: <a href="https://crates.io/crates/loom" rel="nofollow">https://crates.io/crates/loom</a>
If it's solved, the solution has been discarded at some point by other developers for being too cumbersome or too much effort, and therefore in violation of some sacred principle that their job must be effortless.
A well-formed Go program would have the same logical race conditions to manage as well.<p>The Arc<Mutex<_>> is only needed when you truly need to <i>mutably</i> share data.<p>Rust, like Go, has the full suite of channels and other patterns for sharing data.
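As a sketch of the channel alternative (my example, using `std::sync::mpsc`): workers own their data outright and only the results cross thread boundaries, so no lock is involved at all:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Each worker owns its own input and sends its result over the
    // channel; nothing is mutably shared, so no Arc<Mutex<_>> needed.
    let (tx, rx) = mpsc::channel();
    for i in 0..4u32 {
        let tx = tx.clone();
        thread::spawn(move || tx.send(i * i).unwrap());
    }
    drop(tx); // close the last sender so rx.iter() terminates

    let total: u32 = rx.iter().sum();
    assert_eq!(total, 0 + 1 + 4 + 9);
}
```

This is morally the same pattern as Go's "share memory by communicating", just with Rust enforcing that a sent value can no longer be touched by the sender.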
I have written plenty of concurrent Rust code, and the number of times I had to use Arc<Mutex> is extremely low (maybe a few times per thousand lines).<p>As for your statement that concurrency is generally hard: yes, it is. But it is even harder with data races.
> Congrats, rustc forced you to wrap all your types in Arc<Mutex<_>><p>Also, don’t people know that a Mutex implies lower throughput, depending on how long said Mutex is held?<p>Lock-free data structures and algorithms are an attempt to address the drawbacks of mutexes.<p><a href="https://en.wikipedia.org/wiki/Lock_(computer_science)#Disadvantages" rel="nofollow">https://en.wikipedia.org/wiki/Lock_(computer_science)#Disadv...</a>
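For illustration, a minimal sketch (mine, not from the comment) of the lock-free flavor: a shared counter built on an atomic `fetch_add` instead of a Mutex, so no thread ever blocks waiting for a lock:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // A counter built on fetch_add: no thread ever blocks waiting
    // for a lock, though every increment still contends on the same
    // cache line (see the sibling comments about contention).
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    c.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(counter.load(Ordering::Relaxed), 4_000);
}
```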
Lock-free and even wait-free approaches are not a panacea. Memory contention is fundamentally expensive with today’s CPU architectures (they lock, even if you ostensibly don’t). High contention lock-free structures routinely perform worse than serialized locking.
Lock-free data structures and algorithms access shared memory via various atomic operations such as compare-and-swap and atomic arithmetic. The throughput of these operations does not scale with the number of CPU cores. On the contrary, throughput usually drops as the number of cores grows, because the cores need more time to synchronize their local per-CPU caches with main memory. So lock-free data structures and algorithms do not scale on systems with a large number of CPU cores. It is preferable to use "shared nothing" data structures and algorithms instead, where every CPU core processes its own portion of the state, which isn't shared with other cores. In that case the local state can be processed from local per-CPU caches at a speed that exceeds main-memory read/write bandwidth, with lower access latency.
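A minimal sketch of the shared-nothing pattern (my example): each thread accumulates into a thread-private counter that stays hot in its own core's cache, and the only cross-core communication is collecting the per-thread results at the end:

```rust
use std::thread;

fn main() {
    // "Shared nothing": each thread accumulates into a thread-private
    // counter; the only cross-core communication is collecting the
    // final per-thread results when the threads are joined.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                let mut local = 0u64; // no atomics, no locks
                for i in 0..1_000u64 {
                    local += i;
                }
                local
            })
        })
        .collect();

    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    assert_eq!(total, 4 * 499_500);
}
```

Compared to the atomic-counter version, the hot loop here touches no shared cache line at all, which is what lets it scale with core count.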
Lock-free data structures do not guarantee higher throughput. They guarantee lower <i>latency</i>, which often comes at the expense of throughput. A typical approach for implementing a lock-free data structure is to allow one thread to "take over" the execution of another by repeating parts of its work. This ensures progress of the system even if one thread isn't being scheduled, which is mainly useful when you have CPUs competing for work running in parallel.<p>The performance of high-contention code is <i>really</i> tricky to reason about and depends on a lot of factors. Just replacing a mutex with a lock-free data structure will not magically speed up your code. Eliminating the contention completely is typically much better in general.
The overhead of a Mutex in the uncontended case is negligible. If Mutex acquisition starts to measurably limit your production performance, you have options, but you will probably need to reconsider the use of shared mutable state anyway.
> Congrats, rustc forced you to wrap all your types in Arc<Mutex<_>>, and you no longer have data races.<p>Or you can just avoid shared mutable state, or use channels, or many of the other patterns for avoiding data races in Rust. The fun thing is that you can be sure that no matter what you do, as long as it's not unsafe, it will not cause a data race.
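One of those other patterns, sketched here as my own example with scoped threads (Rust 1.63+): threads mutate <i>disjoint</i> chunks of one Vec, and the borrow checker proves the chunks can't overlap, so the program is data-race free with no Arc, Mutex, or channel in sight:

```rust
use std::thread;

fn main() {
    // Scoped threads mutate disjoint halves of one Vec. The borrow
    // checker proves the chunks don't overlap, so this is data-race
    // free without Arc, Mutex, or channels.
    let mut data = vec![1u32; 8];
    thread::scope(|s| {
        for chunk in data.chunks_mut(4) {
            s.spawn(move || {
                for x in chunk {
                    *x *= 2;
                }
            });
        }
    });
    assert!(data.iter().all(|&x| x == 2));
}
```

If the chunks could overlap, this simply wouldn't compile, which is the "no matter what you do, no data race" guarantee in action.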