I don't think I'd recommend using this in production. The benchmarks look good, but by immediately waking the waker[0], you've effectively created a spin-lock. Spin-locks may work in some very specific circumstances, but in practice they will most likely be more costly to your scheduler (which likely uses locks itself, btw) than just using locks.<p>[0]: <a href="https://github.com/fortress-build/whirlwind/blob/0e4ae5a2aba14870f64828b7bb3a1059b78bb171/src/shard/futures.rs#L27">https://github.com/fortress-build/whirlwind/blob/0e4ae5a2aba...</a>
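For reference, the pattern I'm referring to looks roughly like this (a minimal sketch of the general technique, not whirlwind's actual code):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Mutex, MutexGuard};
use std::task::{Context, Poll};

/// Hypothetical future that busy-polls a try_lock instead of registering
/// with a wait queue. Under contention this behaves like a spin-lock that
/// spins through the async scheduler.
struct SpinLockFut<'a, T> {
    lock: &'a Mutex<T>,
}

impl<'a, T> Future for SpinLockFut<'a, T> {
    type Output = MutexGuard<'a, T>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        match self.lock.try_lock() {
            Ok(guard) => Poll::Ready(guard),
            Err(_) => {
                // The contentious part: wake ourselves immediately, so the
                // executor re-polls this future as soon as it gets the chance.
                cx.waker().wake_by_ref();
                Poll::Pending
            }
        }
    }
}
```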
+1. I'd be curious how much of a pessimization to uncontended workloads it'd be to just use `tokio::sync::RwLock`.<p>And, if we want to keep it as a spinlock, I'm curious how the immediate wakeup compares to using `tokio::task::yield_now`: <a href="https://docs.rs/tokio/latest/tokio/task/fn.yield_now.html" rel="nofollow">https://docs.rs/tokio/latest/tokio/task/fn.yield_now.html</a>
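For comparison, the cooperative-yield variant would look something like this (illustrative sketch built on a plain try_lock loop; `lock_with_yield` is a made-up helper, not part of either library):

```rust
use std::sync::{Mutex, MutexGuard};

/// Retry a try_lock, but yield back to the tokio scheduler between
/// attempts instead of waking ourselves immediately.
async fn lock_with_yield<T>(lock: &Mutex<T>) -> MutexGuard<'_, T> {
    loop {
        if let Ok(guard) = lock.try_lock() {
            return guard;
        }
        // Puts this task at the back of the run queue and lets other
        // tasks make progress before we try again.
        tokio::task::yield_now().await;
    }
}
```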
I don't believe that waking the waker in `poll` synchronously waits or runs poll again immediately. I think it more likely just adds the future back to the scheduler's queue to be polled again. I could be wrong though, I'll look into this more. Thanks for the info!
One thing which wasn’t obvious to me from the benchmark: What’s the key distribution? A sharded map will probably have great performance on uniform keys, but in my experience it’s far more common to have a power-law distribution in real-life scenarios.<p>It would be good to see a benchmark where it only touches a _single_ key. If Whirlwind is still faster than the others I would be far more convinced to use it unconditionally.<p>EDIT: I also see that you're benchmarking on an M3 Max. Note that this has 6 performance cores and 6 efficiency cores. This means that if you're running at <6 cores it will most likely start out running on the efficiency cores. From my experience it's quite important to do warmup phases in order to get stable results at low thread counts. And even then I find it hard to reason about the results since you're running on a mixed set of cores…
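To make the distribution point concrete, a crude way to approximate a power-law-skewed workload without extra crates is something like this (`skewed_key` is just an illustrative helper; the exponent is arbitrary):

```rust
/// Map a uniform random value in [0, 1) to a key index with a heavy skew
/// toward low indices ("hot" keys). Assumes num_keys >= 1.
fn skewed_key(uniform: f64, num_keys: usize, exponent: f64) -> usize {
    // For exponent > 1, uniform^exponent concentrates mass near 0, so a
    // small set of keys receives most of the traffic.
    let idx = (uniform.powf(exponent) * num_keys as f64) as usize;
    idx.min(num_keys - 1)
}
```

With `num_keys = 1` it degenerates to the single-key case; larger exponents give an increasingly skewed spread.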
Definitely a good point. I used dashmap's benchmark suite because it was already set up to bench many popular libraries, but I definitely want to get this tested in more varied scenarios. I'll try to add a benchmark for a single key only this week.<p>Regarding your edit: damn, I hadn't thought of that. I'll rerun the benchmarks on my Linux desktop with a Ryzen chip and update the readme.
Very fascinating results though! The sharding approach is quite a simple way of doing more fine-grained locking, making sure that writes don't block all the reads. Pretty cool to see that it actually pays off even with the overhead of Tokio scheduling everything!<p>There might be some fairness concerns here? Since we're polling the lock (instead of adding ourselves to a queue) it could be the case that some requests are constantly "too late" to acquire it. Could be interesting to see the min/max/median/P99 of the requests themselves. It seems that Bustle only reports the <i>average</i> latency[1], which honestly doesn't tell us much more than the throughputs.<p>[1]: <a href="https://docs.rs/bustle/latest/bustle/struct.Measurement.html" rel="nofollow">https://docs.rs/bustle/latest/bustle/struct.Measurement.html</a>
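If anyone wants to go beyond bustle's averages, recording per-operation latencies and summarizing them is cheap enough (rough sketch, assuming you collect one `Duration` per operation):

```rust
use std::time::Duration;

/// Very rough latency summary: sort the per-op samples and read off
/// min / median / p99 / max. Assumes `samples` is non-empty.
fn summarize(samples: &mut [Duration]) -> (Duration, Duration, Duration, Duration) {
    samples.sort_unstable();
    let n = samples.len();
    let min = samples[0];
    let median = samples[n / 2];
    let p99 = samples[((n - 1) as f64 * 0.99) as usize];
    let max = samples[n - 1];
    (min, median, p99, max)
}
```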
> Just as dashmap is a replacement for std::sync::RwLock<HashMap>, whirlwind aims to be a replacement for tokio::sync::RwLock<HashMap>.<p>I'm curious about the practical benefits of handling a HashMap with an async interface. My long-standing understanding is that `tokio::sync::RwLock<HashMap>` is only useful when you'd want to hold the lock guard across another `await` operation; when the lock guard is not held across an await, it is always preferable to use the synchronous version.<p>This would lead me to assume that the same applies to dashmap--it should be sufficient for async use cases and doesn't need an async API unless we expect to be blocked, but the benchmarks indicate that whirlwind outperforms dashmap in various situations. Do you have a sense of where this blocking occurs in dashmap?
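For context, the rule of thumb I mean is the one illustrated by something like this hypothetical cache-fill, where the guard genuinely needs to live across an `.await` (`fetch_remote` is a stand-in for real async I/O, and the double-fetch race is ignored for brevity):

```rust
use std::collections::HashMap;
use tokio::sync::RwLock;

async fn get_or_fetch(cache: &RwLock<HashMap<String, String>>, key: &str) -> String {
    if let Some(v) = cache.read().await.get(key) {
        return v.clone();
    }
    let mut guard = cache.write().await;
    // The write guard is held across this await. A std::sync guard is not
    // Send, so holding it here would make the future non-Send, and any
    // contending task would block its worker thread.
    let value = fetch_remote(key).await;
    guard.insert(key.to_string(), value.clone());
    value
}

async fn fetch_remote(_key: &str) -> String {
    // Placeholder for some async I/O call.
    String::new()
}
```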
The blocking mainly occurs due to contention - imo most of the performance gain comes from being able to poll the lock on a shard instead of blocking the thread until it's available.<p>In all honesty I was quite surprised by the benchmarks as well though. I wouldn't expect <i>that</i> much performance gain, but in high-contention scenarios it definitely makes sense.
Benches look promising! My main concern is validating correctness; implementing good concurrency primitives is always challenging. Have you looked into testing against a purpose-built concurrency model checker like tokio-rs/loom [1] or awslabs/shuttle [2]? IMO that would go a long way towards building trust in this impl.<p>[1] <a href="https://github.com/tokio-rs/loom">https://github.com/tokio-rs/loom</a>
[2] <a href="https://github.com/awslabs/shuttle">https://github.com/awslabs/shuttle</a>
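Worth noting that loom primarily models the std-style sync primitives (threads, atomics, mutexes), so it would mostly exercise the shard locks rather than the tokio layer. A bare-bones loom test looks roughly like this (generic example, not tied to whirlwind's internals):

```rust
#[cfg(test)]
mod loom_tests {
    use loom::sync::{Arc, Mutex};
    use loom::thread;

    #[test]
    fn concurrent_increments_are_not_lost() {
        // loom::model exhaustively explores the interleavings of the
        // threads spawned inside the closure.
        loom::model(|| {
            let counter = Arc::new(Mutex::new(0));
            let c2 = Arc::clone(&counter);

            let t = thread::spawn(move || {
                *c2.lock().unwrap() += 1;
            });
            *counter.lock().unwrap() += 1;
            t.join().unwrap();

            assert_eq!(*counter.lock().unwrap(), 2);
        });
    }
}
```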
I'm sure this library does the thing it claims to do very nicely. However, as a programmer I am saddened that so many things that should have been in the standard library by now need to come from external repositories and have weird and unintuitive names.
I kind of like the "thin" stdlib approach tbh.<p>Rather than having 3-5 ways to do, say, a GUI in the stdlib, you have 0, and instead you can pick a dependency that suits your needs.<p>That extends even to data structures: there are a bunch of trade-offs to be made where one is better than another, and if Rust had to maintain 30 different types of maps for every use case, that seems like a waste of the team's time. Push that work off to the people who need those maps.
I'm torn. I don't write Rust much at all, but I have been playing around with it a little here and there for the last couple of years. When I first tried it out, I was shocked that almost nothing I was looking for could be found in the stdlib. I thought I was looking in the wrong place. Coming mostly from the Go world, it was jarring when it turned out that no, it really does have just a bare-bones stdlib.<p>Now, after some pondering, there is a strength in the Rust team not having to focus on maintaining such a large stdlib, as you said. I find myself using external libraries for many things in Go already, so why not just extend that to slightly more fundamental things with lots of variance too.<p>I think the largest downside with this approach, however, is that it's hard for me to know what the de facto, collectively-agreed-upon standard is for a given task.
I agree with this take a lot. I think having lots of custom implementations is inevitable for systems languages - the only reason Rust is different is that Cargo/crates.io makes things available to everyone, whereas in C/C++ land you will often just DIY everything to avoid managing a bunch of dependencies by hand or with cmake/similar.
I try to tell all new programmers who ask me for advice that keeping abreast of the world of tools that are available for use is a big part of the work and shouldn't be left out. If I quit my daily/weekly trawl of what's out there, I'd surely start to atrophy.
tokio itself is not mature/stable enough to be in the standard library, IMO, let alone anything based on tokio.
Hey, I think the name is cool!<p>Fair point though, there would definitely be some benefit to having some of these things in the stdlib.
The teams who compile the standard libraries are still made up of people, and sometimes better libraries get written outside of those teams. It's just a natural consequence of the intense collaboration we have in this industry. In fact, I am GLAD things happen this way, because it shows how flexible and open minded folks are when it comes to such libraries.<p>Naming things is... well, you know...
> This crate is in development, and breaking changes may be made up until a 1.0 release.<p>When will this happen? I imagine a lot of people who want to use it might just forget about it if you say "do not use this until some unspecified point in the future".
I think there's a pretty big difference between committing to semantic versioning and saying "do not use this until some unspecified point in the future." Maybe I'm just not clear enough in the note - I just mean that the API could change. But as long as a consumer doesn't use `version = "*"` in their Cargo.toml, breaking changes will always be opt-in and builds won't start failing if I release something with a big API change.
Maybe I'm a bit weird, but I would never commit to using something if the person making it wasn't providing a consistent interface. It could well be different in your case, but as a general rule, a sub-1.0 version is software that isn't yet ready to be used. The vast, vast majority of projects that say "you can use this, but we won't provide a consistent interface yet" end up either dying before they get to v1 or causing so much pain they weren't worth using. I can see this issue being especially bad in Rust, where small API changes can create big issues with lifetimes and such.
Looks interesting! We use quick-cache [1] for that purpose right now; it might be interesting to add a comparison with those types of key-value caching crates.<p>[1] <a href="https://github.com/arthurprs/quick-cache">https://github.com/arthurprs/quick-cache</a>
How do you determine the number of shards to use?
I use a multiple of `std::thread::available_parallelism()`. Tbh I borrowed the strategy from dashmap, but I tested others and this seemed to work quite well. I'm considering making that configurable in the future so that it can be specialized for different use-cases or used in single-threaded but cooperatively scheduled contexts.
(std::thread::available_parallelism().map_or(1, usize::from) * 4).next_power_of_two()
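The `next_power_of_two()` part matters because it lets shard selection be a bit-mask instead of a modulo, roughly like this (a sketch of the general idea, not necessarily whirlwind's exact scheme):

```rust
/// Pick a shard index from a precomputed key hash. Because `num_shards`
/// is a power of two, `hash % num_shards` reduces to a cheap bit-mask.
fn shard_index(hash: u64, num_shards: usize) -> usize {
    debug_assert!(num_shards.is_power_of_two());
    // Some implementations use higher bits of the hash here so the bits
    // consumed by the in-shard table stay more independent.
    (hash as usize) & (num_shards - 1)
}
```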
Great crate! Why use a HashMap instead of a BTreeMap, which is usually advised in Rust?
There are a few reasons - for one, I'm not sure BTreeMap is always faster in Rust... it may be sometimes, but lookups are still O(log n) due to the tree search, whereas with a HashMap it's (mostly) O(1). They both have their uses - I usually go for BTreeMap when I explicitly need the collection to be ordered.<p>A second reason is sharding - sharding based on a hash is quite simple to do, but sharding an ordered collection would be quite difficult since some reads (range queries, for example) would need to search across multiple shards and thus take multiple locks.<p>If you mean internally (like for each shard), we're using hashbrown's raw HashTable API because it allows us to manage hashing entirely ourselves and avoid recomputing the hash when determining the shard and looking up a key within a shard.
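To make the hash-once point concrete, the general shape with hashbrown's `HashTable` looks something like this (simplified sketch without the per-shard locks, not the actual whirlwind code):

```rust
use hashbrown::HashTable;
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash};

struct Shards<K, V> {
    hasher: RandomState,
    // In the real thing each shard sits behind its own lock.
    shards: Vec<HashTable<(K, V)>>,
}

impl<K: Hash + Eq, V> Shards<K, V> {
    fn get(&self, key: &K) -> Option<&V> {
        // Hash exactly once...
        let hash = self.hasher.hash_one(key);
        // ...use it to pick a shard (assuming a power-of-two shard count)...
        debug_assert!(self.shards.len().is_power_of_two());
        let idx = (hash as usize) & (self.shards.len() - 1);
        // ...and reuse the same hash for the in-shard lookup.
        self.shards[idx]
            .find(hash, |(k, _)| k == key)
            .map(|(_, v)| v)
    }
}
```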
> Why use a HashMap instead of a BTreeMap, which is usually advised in Rust?<p>Is this actually the case? I can't say I've seen the same.
Why would you use a B-tree if you don't need sorting? It will not only be slower but also require a lot more work to make lock-free (is this hash map lock-free?).
I don't see the use case of this