  • floitsch51 days ago
    Pretty impressive.

    When I published Grisu (Google double-conversion), it was multiple times faster than the existing algorithms. I knew that there was still room for improvement, but I was at most expecting a factor of 2 or so. Six times faster is really impressive.
    • vitaut51 days ago
      Thank you! It means a lot coming from you, Grisu was the first algorithm that I implemented =). (I am the author of the blog post.)
  • NovemberWhiskey51 days ago
    When I saw the title here, my first thought was “wow, these RISC-V ISA extensions are getting out of hand”
  • HexDecOctBin51 days ago
    It seems that most research effort goes into better dtoa and not enough into better atod. There are probably a dozen dtoa algorithms now, and (I think?) two for atod. Anyone know why?
    • vitaut51 days ago
      Good question. I am not familiar with string-to-double algorithms but maybe it's an easier problem? double-to-string is relatively complex; people have even done PhDs in this area. There is also some inherent asymmetry: formatting is more common than parsing.
      • dtolnay50 days ago
        In implementing Rust's serde_json library, I have dealt with both string-to-double and double-to-string. Of the two, I found string-to-double was more complex.

        Unlike formatting, correct parsing involves high precision arithmetic.

        Example: the IEEE 754 double closest to the exact value "0.1" is 7205759403792794*2^-56, which has an exact value of A (see below). The next higher IEEE 754 double has an exact value of C (see below). Exactly halfway between these values is B=(A+C)/2.

            A=0.1000000000000000055511151231257827021181583404541015625
            B=0.100000000000000012490009027033011079765856266021728515625
            C=0.10000000000000001942890293094023945741355419158935546875

        So for correctness the algorithm needs the ability to distinguish the following extremely close values, because the first is closer to A (must parse to A) whereas the second is closer to C:

            0.1000000000000000124900090270330110797658562660217285156249
            0.1000000000000000124900090270330110797658562660217285156251

        The problem of "string-to-double for the special case of strings produced by a good double-to-string algorithm" might be relatively easy compared to double-to-string, but correct string-to-double for arbitrarily big inputs is harder.
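        To make the midpoint example concrete, here is a minimal sketch (assuming a correctly rounding strtod, such as glibc's) that feeds the two near-midpoint strings above to the parser and prints the results:

            #include <cstdio>
            #include <cstdlib>

            int main() {
                // The two strings differ only in the last digit and straddle B,
                // the midpoint between the adjacent doubles A and C around 0.1.
                const char* just_below = "0.1000000000000000124900090270330110797658562660217285156249";
                const char* just_above = "0.1000000000000000124900090270330110797658562660217285156251";
                double a = std::strtod(just_below, nullptr);  // should parse to A
                double c = std::strtod(just_above, nullptr);  // should parse to C
                std::printf("%.17g\n%.17g\n", a, c);  // 0.10000000000000001 vs 0.10000000000000002
                std::printf("%a\n%a\n", a, c);        // adjacent binary values, one ulp apart
            }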
        • nly50 days ago
          I guess one aspect of it is that in really high performance fields where you're taking in lots of stringy real inputs (FIX messages coming from trading venues, for example, containing prices and quantities) you would simply parse directly to a fixed point decimal format, and only accept fixed (not scientific) notation inputs. Except for trailing or leading zeros there is no normalisation to be done.

          Parsing a decimal ASCII string to a decimal value already optimizes well, because you can scale each digit by its power of 10 in parallel and just add up the result.
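          A widely used sketch of that "scale digits in parallel" idea (Lemire-style SWAR, assuming a little-endian machine and exactly eight digit characters at the pointer) parses eight ASCII digits with three multiplications:

              #include <cstdint>
              #include <cstring>

              // Parse exactly 8 ASCII digits (most significant first) into an integer.
              // Each step merges adjacent lanes, scaling by 10, then 100, then 10000.
              uint32_t parse_eight_digits(const char* chars) {
                  uint64_t v;
                  std::memcpy(&v, chars, 8);                               // little-endian load
                  v = (v & 0x0F0F0F0F0F0F0F0FULL) * 2561 >> 8;             // pairs of digits
                  v = (v & 0x00FF00FF00FF00FFULL) * 6553601 >> 16;         // groups of four
                  return uint32_t((v & 0x0000FFFF0000FFFFULL) * 42949672960001ULL >> 32);
              }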
        • andrepd50 days ago
          For those wishing to read up on this subject, an excellent starting point is this comprehensive post by one of the main contributors of the fast algorithm currently used in core:

          https://old.reddit.com/r/rust/comments/omelz4/making_rust_float_parsing_fast_libcore_edition/
        • vitaut50 days ago
          > Unlike formatting, correct parsing involves high precision arithmetic.

          Formatting also requires high precision arithmetic unless you disallow user-specified precision. That's why {fmt} still has an implementation of Dragon4 as a fallback for such silly cases.
      • mdf51 days ago
        > formatting is more common than parsing.

        Is it, though? It's genuinely hard for me to tell.

        There's both serialization and deserialization of data sets with, e.g., JSON including floating point numbers, implying formatting and parsing, respectively.

        Source code (including unit tests etc.) with hard-coded floating point values is compiled, linted, automatically formatted again and again, implying lots of parsing.

        Code I usually work with ingests a lot of floating point numbers, but whatever is calculated is seldom displayed as formatted strings and more often gets plotted on graphs.
        • adrian_b50 days ago
          For serialization and deserialization, when the goal is to produce strings that will be read again by a computer, I consider the use of decimal numbers a serious mistake.

          The conversion to string should produce a hexadecimal floating-point number (e.g. with the "a" or "A" printf conversion specifier of recent C library implementations), not a decimal number, so that both serialization and deserialization are trivial and cannot introduce any errors.

          Even if a human inspects the strings produced in this way, comparing numbers to see which is greater or less and examining the order of magnitude can be done as easily as with decimal numbers. Nobody will want to do exact arithmetic computations mentally with such numbers.
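          A minimal sketch of that hexfloat round trip (assuming a C99-conforming printf and strtod): format with "%a" and parse back; both directions are exact, so the bits come back unchanged:

              #include <cstdio>
              #include <cstdlib>

              int main() {
                  double x = 0.1;
                  char buf[64];
                  std::snprintf(buf, sizeof buf, "%a", x);   // e.g. "0x1.999999999999ap-4"
                  double y = std::strtod(buf, nullptr);      // hex parsing is exact, no rounding
                  std::printf("%s %s\n", buf, x == y ? "round-trips exactly" : "mismatch");
              }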
        • vitaut50 days ago
          Think about things like logging and all the uses of printf which are not parsed back. But I agree that parsing is extremely common, just not at the same level.
  • amluto51 days ago
    I read the post and the companion post:

    https://vitaut.net/posts/2025/smallest-dtoa/

    And there's one detail I found confusing. Suppose I go through the steps to find the rounding interval and determine that k=-3, so there is at most one integer multiple of 10^-3 in the interval (and at least one multiple of 10^-4). For the sake of argument, let's say that -3 worked: m·10^-3 is in the interval.

    Then, if m is not a multiple of 10, I believe that m·10^-3 is the right answer. But what if m is a multiple of 10? Then the result will be exactly equal, numerically, to the correct answer, but it will have trailing zeros. So maybe I get 7.460 instead of 7.46 (I made up this number and I have no idea whether any double exists that gives this output.) Even though that 6 is definitely necessary (there is no numerically different value with decimal exponent greater than -3 that rounds correctly), I still want my formatter library to give me the shortest decimal representation of the result.

    Is this impossible for some reason? Is there logic hiding in the write function to simplify the answer? Am I missing something?
    • vitaut51 days ago
      This is possible and the trailing zeros are indeed removed (with the exponent adjusted accordingly) in the write function. The post mentions removing trailing zeros without going into details, but it's a pretty interesting topic and was recently changed to use lzcnt/bsr instead of a lookup table.
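      The basic idea, as a naive sketch (not the lzcnt/bsr technique mentioned above), is to strip factors of 10 from the decimal significand and bump the decimal exponent accordingly:

          #include <cstdint>

          // Naive trailing-zero removal: divide out factors of 10 and adjust
          // the decimal exponent so the printed value stays the same.
          void strip_trailing_zeros(uint64_t& significand, int& exp10) {
              while (significand != 0 && significand % 10 == 0) {
                  significand /= 10;
                  ++exp10;
              }
          }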
  • andrepd51 days ago
    Very interesting!

    I wonder how Teju Jaguá compares. I don't see it in the C++ benchmark repo you linked and whose graph you included.

    I have contributed an implementation in Rust :) (https://crates.io/crates/teju). It includes benchmarks which compare it vs Ryu and vs Rust's stdlib, and the readme shows a graph with some test cases. It's quite easy to run if you're interested!
    • vitaut51 days ago
      I am not sure how it compares but I did use one idea from Cassio's talk on Teju:

      > A more interesting improvement comes from a talk by Cassio Neri, "Fast Conversion From Floating Point Numbers". In Schubfach, we look at four candidate numbers. The first two, of which at most one is in the rounding interval, correspond to a larger decimal exponent. The other two, of which at least one is in the rounding interval, correspond to the smaller exponent. Cassio's insight is that we can directly construct a single candidate from the upper bound in the first case.
      • andrepd51 days ago
        Indeed! I saw that you linked to Neri's work, so you were aware of Teju Jaguá. I might make a pull request to add it to the benchmark repo when I have some time :)

        Another nice thing about your post is mentioning the "shell" of the algorithm, that is, actually translating the decimal significand and exponent into a string (as opposed to the "core", turning f * 2^e into f' * 10^e'). A decent chunk of the overall time is spent there, so it's worth optimising it as well.
    • vlovich12351 days ago
      Any ideas why Rust’s stdlib hasn’t adopted faster implementations?
      • pitaj51 days ago
        I think code size is one notable reason.
        • vitaut50 days ago
          I am pretty sure Dragonbox is smaller than Ryu in terms of code size because it can compress the tables.
  • nhatcher50 days ago
    This blows my mind TBH. I used to say a few years back that Ryu is my favorite modern algorithm but it felt so complicated. Your C implementation almost feels... simple!

    Congratulations, can't wait to have some time to study this further.
    • vitaut50 days ago
      Thank you! The simplicity is mostly thanks to Schubfach although I did simplify it a bit more. Unfortunately the paper makes it appear somewhat complex because of all the talk about generic bases and Java workarounds.
      • adgjlsfhk150 days ago
        I've just started a Julia port and I think it will be even cleaner than the C version, mostly because Julia gives you a first-class (U)Int128 and count leading zeros, and also better compile-time programming that lets you skip writing the first table out explicitly.
        • vitaut50 days ago
          Cool, please share once it is complete.

          C++ also provides countl_zero: https://en.cppreference.com/w/cpp/numeric/countl_zero.html. We currently use our own for maximum portability.

          I considered computing the table at compile time (you can do it in C++ using constexpr) but decided against it to avoid adding compile-time overhead, however small. The table never changes so I'd rather not make users pay for recomputing it every time.
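          A portable count-leading-zeros helper might look roughly like this sketch (my own illustration, not the actual zmij code; it assumes GCC/Clang builtins where available):

              #include <cstdint>

              // Count leading zeros of a 64-bit value, with a portable fallback.
              inline int clz64(uint64_t x) {
              #if defined(__GNUC__) || defined(__clang__)
                  return x ? __builtin_clzll(x) : 64;
              #else
                  int n = 0;
                  for (uint64_t mask = uint64_t(1) << 63; mask != 0 && (x & mask) == 0; mask >>= 1)
                      ++n;
                  return n;
              #endif
              }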
        • nhatcher50 days ago
          Oh wow, I would love to see that if you can share it :)
          • adgjlsfhk150 days ago
            Once I finish it, I&#x27;ll be PRing to the Julia repo (to replace the current Ryu version), and I&#x27;ll drop a link here.
            • vitaut49 days ago
              I started a section to list implementations in other languages: https://github.com/vitaut/zmij?tab=readme-ov-file#other-languages. Once yours is complete feel free to submit a PR to add it there.
              • adgjlsfhk149 days ago
                Not adding it until complete, but https://github.com/JuliaLang/julia/pull/60439 is the draft.
      • nhatcher49 days ago
        Quick question, if you are still around :)

        I have been doing some tests. Is it correct to assume that it converts 1.0 to "0.000000000000001e+15"?

        Is there a test suite it is passing?
        • vitaut49 days ago
          It converts 1.0 to "1.e-01", which reminds me to remove the trailing decimal point =). dtoa-benchmark tests that the algorithm produces valid results on its dataset.
          • nhatcher49 days ago
            So if I use:

                #include "zmij.h"
                #include <stdio.h>

                int main() {
                    char buf[zmij::buffer_size];
                    zmij::dtoa(1.0, buf);
                    puts(buf);
                }

            I get `g++ zmij.cc test.c -o test && ./test` => `0.000000000000001e+15`
            • vitaut49 days ago
              My bad, you are right. The small integer optimization should be switched to a different output method (or disabled since it doesn't provide much value). Thanks for catching this!
              • vitaut49 days ago
                Should be fixed now.
          • dtolnay49 days ago
            "1.e-01" is for 0.1, not 1.0.
            • nhatcher49 days ago
              I assume they meant 1.e+00

              That is what Schubfach does
              • vitaut49 days ago
                Yeah, that's what I meant.
  • mulle_nat51 days ago
    Thank you for the code. I could port this easily to C and it solved a lot of portability issues for me.
    • vitaut51 days ago
      I've added a C implementation in https://github.com/vitaut/zmij/blob/main/zmij.c in case you are interested.
      • mulle_nat50 days ago
        Nice, but it's too late; I needed a different API for future use in my custom sprintf, so I made mulle-dtostr (https://github.com/mulle-core/mulle-dtostr). In a quick try on my machine (AMD) it even benchmarked quite a bit faster, but I was just checking that it didn't regress too badly and didn't look at it closer.
        • vitaut49 days ago
          Please note that there is some error in your port:

              Error: roundtrip fail 4.9406564584124654e-324 -> '5.e-309' -> 4.9999999999999995e-309
              Error: roundtrip fail 6.6302941479442929e-310 -> '6.6302941479443e-309' -> 6.6302941479442979e-309
              Error: roundtrip fail -1.9153028533493997e-310 -> '-1.9153028533494e-309' -> -1.9153028533493997e-309
              Error: roundtrip fail -2.5783653320086361e-312 -> '-2.57836533201e-309' -> -2.5783653320099997e-309
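          For anyone wanting to reproduce checks like these, a minimal round-trip test looks roughly like this sketch (the dtoa function pointer is a placeholder for whatever conversion routine is under test, and strtod is assumed to round correctly):

              #include <cstdint>
              #include <cstdio>
              #include <cstdlib>
              #include <cstring>

              // Format x with the routine under test, parse it back with strtod,
              // and compare bit patterns (so e.g. -0.0 and 0.0 are not conflated).
              bool roundtrips(double x, void (*dtoa)(double, char*)) {
                  char buf[64];
                  dtoa(x, buf);
                  double y = std::strtod(buf, nullptr);
                  uint64_t bx, by;
                  std::memcpy(&bx, &x, sizeof bx);
                  std::memcpy(&by, &y, sizeof by);
                  if (bx != by)
                      std::printf("Error: roundtrip fail %.17g -> '%s' -> %.17g\n", x, buf, y);
                  return bx == by;
              }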
  • Scaevolus50 days ago
    Is the large significands table (~19KB) an issue for cache contention in practical use, or is only a small range of significands generally accessed?
    • vitaut50 days ago
      It depends on the input distribution, specifically exponents. It is also possible to compress the table at the cost of additional computation using the method from Dragonbox.
  • Cold_Miserable51 days ago
    Already done ages ago. Nothing more of interest.

    The bottlenecks are the 3 conditionals:
    - positive or negative
    - positive or negative exponent, x > 10.0
    - correction for 1.xxxxx * 2^Y => fract(log10*(2^Y)) * 1.xxxxxxxx > 10.0