Pretty impressive.<p>When I published Grisu (Google double-conversion), it was multiple times faster than the existing algorithms. I knew that there was still room for improvement, but I was at most expecting a factor 2 or so. Six times faster is really impressive.
This blows my mind TBH. I used to say a few years back that Ryu is my favorite modern algorithm but it felt so complicated. Your C implementation almost feels... simple!<p>Congratulations, can't wait to have some time to study this further
I read the post and the companion post:<p><a href="https://vitaut.net/posts/2025/smallest-dtoa/" rel="nofollow">https://vitaut.net/posts/2025/smallest-dtoa/</a><p>And there’s one detail I found confusing. Suppose I go through the steps to find the rounding interval and determine that k=-3, so there is at most one integer multiple of 10^-3 in the interval (and at least one multiple of 10^-4). For the sake of argument, let’s say that -3 worked: m·10^-3 is in the interval.<p>Then, if m is not a multiple of 10, I believe that m·10^-3 is the right answer. But what if m is a multiple of 10? Then the result will be exactly equal, numerically, to the correct answer, but it will have trailing zeros. So maybe I get 7.460 instead of 7.46 (I made up this number and I have no idea whether any double exists that gives this output.) Even though that 6 is definitely necessary (there is no numerically different value with decimal exponent greater than -3 that rounds correctly), I still want my formatter library to give me the shortest decimal representation of the result.<p>Is this impossible for some reason? Is there logic hiding in the write function to simplify the answer? Am I missing something?
It seems that most research effort goes into better dtoa, and not enough into better atod. There are probably a dozen dtoa algorithms now, and (I think?) two for atod. Anyone know why?
When I saw the title here, my first thought was “wow, these RISC-V ISA extensions are getting out of hand”
Thank you for the code. I could port this easily to C and it solved a lot of portability issues for me.
Very interesting!<p>I wonder how Teju Jaguá compares. I don't see it in the C++ benchmark repo you linked and whose graph you included.<p>I have contributed an implementation in Rust :) <a href="https://crates.io/crates/teju" rel="nofollow">https://crates.io/crates/teju</a> it includes benchmarks which compare it vs Ryu and vs Rust's stdlib, and the readme shows a graph with some test cases. It's quite easy to run if you're interested!
I am not sure how it compares but I did use one idea from Cassio's talk on Teju:<p>> A more interesting improvement comes from a talk by Cassio Neri Fast Conversion From Floating Point Numbers. In Schubfach, we look at four candidate numbers. The first two, of which at most one is in the rounding interval, correspond to a larger decimal exponent. The other two, of which at least one is in the rounding interval, correspond to the smaller exponent. Cassio’s insight is that we can directly construct a single candidate from the upper bound in the first case.
Indeed! I saw that you linked to Neri's work, so you were aware of Teju Jaguá. I might make a pull request to add it to the benchmark repo when I have some time :)<p>Another nice thing about your post is mentioning the "shell" of the algorithm, that is, actually translating the decimal significand and exponent into a string (as opposed to the "core", turning f * 2^e into f' * 10^e'). A decent chunk of the overall time is spent there, so it's worth optimising it as well.
Any ideas why Rust’s stdlib hasn’t adopted faster implementations?
Already done ages ago. Nothing more of interest.<p>The bottlenecks are the 3 conditionals:
- positive or negative
- positive or negative exponent, x > 10.0
- correction for 1.xxxxx * 2^Y => fract(log10(2^Y)) * 1.xxxxxxxx > 10.0