3 comments

  • amluto 5 hours ago
    I'm amused by the burying of the non-lede until halfway through the paper. I, too, can maintain a 59-bit repetition code for over two hours on my quantum laptop (it's just a normal laptop, but it definitely obeys the laws of quantum mechanics).

    Initialize the coded bit, using a 59-qubit repetition code that corrects bit flips but not phase errors, in IPython:

        In [1]: A = 59 * (False,)

    Write a decoder:

        In [2]: def decode(physbits):
           ...:     return sum(physbits) > len(physbits)/2.0

    Wait two hours [0]. I'll be lazy and only decode at the end of the two hours, but if I wanted error correction to get the full advantage, I would periodically run the error correction algorithm and fix detected errors. Now decode it:

        In [3]: decode(A)
        Out[3]: False

    Holy cow, it worked!

    I'm being rather tongue-in-cheek here, of course. But it's genuinely impressive that my laptop can stick 59 bits into DRAM cells containing a handful of electrons each, and *all* of them are just fine after several hours. And it's really, really impressive that this research group got their superconducting qubits to store *classical* states well enough that their rather fancy error correcting device could keep up and preserve the logical state for two hours. [1]

    But this isn't *quantum* error correction going FOOM, per se. It's classical. A bit-flip-corrected but not phase-flip-corrected qubit is precisely a classical bit, no more, no less.

    The authors did also demonstrate that they could do the same trick correcting phase flips and not bit flips, but that's a tiny bit like turning the experiment on its side and getting the same result. Combining both demonstrations is impressive, though -- regardless of whether you look at the DRAM cells in my laptop as though the level is the Z basis or the X basis, they only work in one single basis. You cannot swap the roles of level and phase in DRAM and get it to still work. But the researchers did pull that off on their two-hour-half-life device, and I find that quite impressive, and the fact that it worked strongly suggests that their device is genuinely 59 *qubits*, whereas no one could credibly argue that my laptop contains giga-qubits of DRAM. Fundamentally, you can protect classical storage with a repetition code, but you cannot do quantum computation with it. You need fancier, and more sensitive-to-errors, codes for that, and that's what the second half of the article is about.

    [0] I didn't actually wait two hours. But I could have waited a week and gotten the same result.

    [1] The researchers' qubits are nowhere near as good as my DRAM. They had to run their error correction a billion times or so during the course of their two hours. (My DRAM refreshes maybe ten thousand times over the course of two hours, and one can look at DRAM refreshes as correcting something a bit like a repetition code.)
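    For completeness, here's a minimal sketch of that periodic correction loop. The bit-flip rate and round count are made up purely for illustration; nothing here models real DRAM or the experiment:

        import random

        def correct(physbits, flip_prob=0.01, rounds=1000):
            """Majority-vote repetition decoding, reapplied every round so
            errors never accumulate past half the code distance."""
            bits = list(physbits)
            for _ in range(rounds):
                # Simulated noise (hypothetical rate): each bit flips independently.
                bits = [b ^ (random.random() < flip_prob) for b in bits]
                # Correction step: snap every bit back to the majority value.
                majority = sum(bits) > len(bits) / 2.0
                bits = [majority] * len(bits)
            return bits

        A = 59 * (False,)
        print(sum(correct(A)) > 59 / 2.0)  # almost surely still False

    The point of correcting every round rather than once at the end is that each round only has to beat a per-round error rate, not the accumulated two-hour error rate.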
    • Strilanc 4 hours ago
      Author here: yes, that's all correct.

      This is perhaps not clear enough, but the title refers to a *pattern*. For classical bits on a quantum computer this pattern is already playing out (as shown in the cited experiments), and for quantum bits I think it's about to play out.

      Classical storage of classical bits is still far more reliable, of course. Hell, a rock chucked into one bucket or another is still more reliable. We'll never beat the classical computer at storing classical bits... but the rock in a bucket has some harsh competition coming.

      I should maybe also mention that arbitrarily good qubits are a step on the road, not the end. I've seen a few twitter takes making that incorrect extrapolation. We'll still need hundreds of these logical qubits. It's conceivable that quantity also jumps suddenly... but that'd require even more complex block codes to start working (not just surface codes). I'm way less sure if that will happen in the next five years.
      • amluto 4 hours ago
        I don’t really expect fancier codes to cause a huge jump in the number of logical qubits. At the end of the day, there’s some code rate (logical bits / physical bits) that makes a quantum computer work. The “FOOM” is the transition of that code rate from zero (the lifetime of a logical bit is short) to something distinctly different from zero (the state lasts long enough to be useful under some credible code). Say the code rate is 0.001 when this happens. (I haven’t been in the field for a little while, but I’d expect higher, because those huuuuge codes have huuuuge syndromes, which isn’t so fun. But if true topological QC ever works, it will be a different story.) The code rate is unlikely to ever be higher than 1/7 or so, and it will definitely not exceed 1. So there’s at most a factor of 1000, and probably less, to be gained by improving the code rate. This isn’t an exponential or super-exponential FOOM.

        A factor of 1000 may well be the difference between destroying Shor’s-algorithm-prone cryptography soon and destroying it later, though.
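        A back-of-the-envelope check on that bound, using the illustrative rates above (hypothetical figures from this comment, not measured values):

            # Hypothetical code rates from the comment above, not data.
            initial_rate = 0.001  # logical/physical ratio when the FOOM first happens
            ceiling_rate = 1 / 7  # rough practical ceiling suggested above

            print(f"practical gain: ~{ceiling_rate / initial_rate:.0f}x")  # ~143x
            print(f"hard cap (rate -> 1): {1 / initial_rate:.0f}x")        # 1000x

        So even under generous assumptions, better code rates buy two to three orders of magnitude in logical qubit count, not an open-ended explosion.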
  • Havoc 2 hours ago
    Does quantum speed even matter?

    I would have thought a wide enough array of qubits could functionally do "anything" in one shot.
    • Strilanc 2 hours ago
      Yes, speed matters. No, quantum computers can't do everything instantly, even with unbounded qubits.

      A well studied example is that it's impossible to parallelize the steps in Grover's algorithm. To find a preimage amongst N possibilities, with only black box access, you *need* Ω(sqrt(N)) sequential steps on the quantum computer [1].

      Another well known case is that there's no known way to execute a fault tolerant quantum circuit faster than its reaction depth (other than finding a rewrite that reduces the depth, such as replacing a ripple-carry adder with a carry-lookahead adder) [2]. There's no known way to make the reaction depth small in general.

      Another example is GCD (greatest common divisor). It's conjectured to be an inherently sequential problem (no polylog-depth classical circuit), and there's no known quantum circuit for GCD with lower depth than the classical circuits.

      [1]: https://arxiv.org/abs/quant-ph/9711070

      [2]: https://arxiv.org/abs/1210.4626
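      For a sense of scale on the Grover bound: the optimal iteration count is about (π/4)·sqrt(N), and those iterations must run one after another, so adding qubits doesn't shrink the wall-clock time. A toy calculation (illustrative only, no real hardware assumed):

          import math

          # Grover needs ~ (pi/4) * sqrt(N) *sequential* iterations to find
          # one marked item among N with black-box access; the lower bound
          # says this cannot be traded for parallel hardware.
          def grover_iterations(n_items: int) -> int:
              return math.ceil((math.pi / 4) * math.sqrt(n_items))

          for exp in (20, 40, 80):
              print(f"N = 2^{exp}: ~{grover_iterations(2**exp):.3g} sequential steps")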
    • tsimionescu 2 hours ago
      They most certainly can't, not even close. They can speed up only a very limited subset of problems, and not at all in one shot - just with far, far fewer steps than a classical computer. But even if you reduce an O(e^n) problem to an O(n) or O(n²) problem, that's not instantaneous, and the speed at which you perform those n or n² operations still matters.
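      To put rough numbers on that (the problem size and clock rates below are purely illustrative, not measured figures):

          # Illustrative only: even after an exponential-to-polynomial
          # speedup, wall-clock time depends on how fast each logical
          # operation runs.
          n = 2048                 # hypothetical problem size
          ops = n**2               # ~4.2e6 operations after the speedup
          for rate in (1e3, 1e6):  # hypothetical logical ops per second
              print(f"at {rate:.0e} ops/s: {ops / rate:.0f} seconds")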
  • sparedwhistle 6 hours ago
    What the hell is FOOM?
    • cubefox 15 minutes ago
      It was Yudkowsky's colloquial term for hard takeoff:

      https://www.lesswrong.com/posts/tjH8XPxAnr6JRbh7k/hard-takeoff
    • AlexCoventry 1 hour ago
      Explosive ignition of a fire.
    • MattPalmer1086 5 hours ago
      Usually refers to a sudden increase in AI intelligence to superintelligence, i.e. the singularity. Basically an exponential increase in capability.
      • austinjp 1 hour ago
        It may also be a reference to the (comparatively ancient) "Pentium go F00F" bug.

        https://en.wikipedia.org/wiki/Pentium_F00F_bug