money quote:<p><pre><code> "if Microsoft’s claim stands, then topological qubits have finally reached some sort of parity with where more traditional qubits were 20-30 years ago. I.e., the non-topological approaches like superconducting, trapped-ion, and neutral-atom have an absolutely massive head start: there, Google, IBM, Quantinuum, QuEra, and other companies now routinely do experiments with dozens or even hundreds of entangled qubits, and thousands of two-qubit gates. Topological qubits can win if, and only if, they turn out to be so much more reliable that they leapfrog the earlier approaches—sort of like the transistor did to the vacuum tube and electromechanical relay. Whether that will happen is still an open question, to put it extremely mildly."</code></pre>
The quote that struck me was<p>> I foresee exciting times ahead, provided we still have a functioning civilization in which to enjoy them.
There seems to be a bit of a disconnect between the first and the second sentence (to my completely uneducated mind).<p><i>If</i> topological qubits turn out to be so much more reliable then it doesn't really matter how much time was spent trying to make other types of qubits more reliable. It's not really a head start, is it?<p>Or are there other problems besides preventing unwanted decoherence that might take that much time to solve?
The point I think is this: if topological qubits are similar to other types of qubits, then investing in them is going to be disappointing because the other approaches have so much more work put into them.<p>So, he is saying that this approach will only pay off if topological qubits are a fundamentally better approach than the others being tried. If they turn out to be, say, merely twice as good as trapped ion qubits, they'll still only get to the achievements of current trapped ion designs with another, say, 10-15 years of continued investment.
The whole point, though, is that they are a step-function improvement over traditional qubits, different enough that comparing them directly is a type error.<p>The utility of traditional qubits depends entirely on how reliable and long-lived they are, and on how well they can scale to larger numbers of qubits. These topological qubits are effectively 100% reliable, infinite duration, and scale like semiconductors. According to the marketing literature, at least…
There are caveats there too. Generally topological qubits <i>can</i> be immune to all kinds of noise (i.e. built-in error correction), but Majorana zero modes aren't exactly the right kind of topological for that to be true. They enjoy protection on most operations, but not all. So there is still a need for error correction here (and all the complication that entails); it is just hopefully less onerous, since essentially only one operation requires it.
All the other qubits scaled the same way when they were in a simulator, too. When they actually hit reality, they all had huge problems.
Other qubits in general do not scale the same way. Some, for example, do not allow arbitrary point-to-point interactions, which means doubling your physical qubits doesn’t double your number of logical qubits. There are other ways in which scaling is sometimes nonlinear.<p>Note also that this isn’t a simulated result. Microsoft has an 8-qubit chip they are making available on Azure.
I am well aware of how other qubits scale, but I am also aware that the physicists who created them didn't expect decoherence to scale this rapidly at the time they took that approach.<p>IBM sells you 400 qubits with huge coherence problems. When IBM had an 8-qubit chip, those qubits were also pretty stable.
Yeah I mean that's exactly what MS are talking about, only requiring 1/20 of the checksum qubits or something.<p><a href="https://www.ft.com/content/a60f44f5-81ca-4e66-8193-64c956b09820" rel="nofollow">https://www.ft.com/content/a60f44f5-81ca-4e66-8193-64c956b09...</a>
A very important statement is in the peer review file that everyone should read:<p>"The editorial team wishes to point out that the results in this manuscript do not represent evidence for the presence of Majorana zero modes in the reported devices. The work is published for introducing a device architecture that might enable fusion experiments using future Majorana zero modes."<p><a href="https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-024-08445-2/MediaObjects/41586_2024_8445_MOESM2_ESM.pdf" rel="nofollow">https://static-content.springer.com/esm/art%3A10.1038%2Fs415...</a>
Thanks for your interest. I'm part of the Microsoft team. Here are a couple of comments that might be helpful:<p>1) The Nature paper just released focuses on our technique of qubit readout. We interpret the data in terms of Majorana zero modes, and we also do our best to discuss other possible scenarios. We believe the analysis in the paper and supplemental information significantly constrains alternative explanations but cannot entirely exclude that possibility.<p>2) We have previously demonstrated strong evidence of Majorana zero modes in our devices, see <a href="https://journals.aps.org/prb/pdf/10.1103/PhysRevB.107.245423" rel="nofollow">https://journals.aps.org/prb/pdf/10.1103/PhysRevB.107.245423</a>.<p>3) On top of the Nature paper, we have recently made additional progress, which we just shared with various experts in the field at the Station Q conference in Santa Barbara. We will share more broadly at the upcoming APS March meeting. See also <a href="https://www.linkedin.com/posts/roman-lutchyn-bb9a382_interferometric-single-shot-parity-measurement-activity-7298475476010811392-jfIE/" rel="nofollow">https://www.linkedin.com/posts/roman-lutchyn-bb9a382_interfe...</a> for more context.
><i>signal-to-noise ratio of 1</i><p>Hmmm.. appreciate the honesty :)<p>That's from the abstract of the upcoming conference talk (Mar 14)<p>><i>Towards topological quantum computing using InAs-Al hybrid devices<p>Presenter: Chetan Nayak (Microsoft)<p>The fusion of non-Abelian anyons is a fundamental operation in measurement-only topological quantum computation. In one-dimensional topological superconductors, fusion amounts to a determination of the shared fermion parity of Majorana zero modes. Here, we introduce a device architecture that is compatible with future tests of fusion rules. We implement a single-shot interferometric measurement of fermion parity in indium arsenide-aluminum heterostructures with a gate-defined superconducting nanowire. The interferometer is formed by tunnel-coupling the proximitized nanowire to quantum dots. The nanowire causes a state-dependent shift of these quantum dots' quantum capacitance of up to 1 fF. Our quantum capacitance measurements show flux h/2e-periodic bimodality with a signal-to-noise ratio of 1 in 3.6 microseconds at optimal flux values. From the time traces of the quantum capacitance measurements, we extract a dwell time in the two associated states that is longer than 1 ms at in-plane magnetic fields of approximately 2 T. These measurements are discussed in terms of both topologically trivial and non-trivial origins. The large capacitance shift and long poisoning time enable a parity measurement with an assignment error probability of 1%.</i>
As the recent results from CS and math on the front pages have shown, one doesn't have to be unknown or underfunded in order to produce verifiable breakthroughs, but it might help...<p>Seems like John Baez didn't notice those lines in the peer review either<p><a href="https://mathstodon.xyz/@johncarlosbaez/114031919391285877" rel="nofollow">https://mathstodon.xyz/@johncarlosbaez/114031919391285877</a><p>TIL: read the peer review first
Wait so this tech just...doesn't work yet? Like at all?
Microsoft claims that it works. However, the Nature reviewers apparently do not yet feel comfortable vouching for this claim.
It's worse: it will (likely) never work at all.
Another recent writeup that adds some nuance to this (and other claims), summarizing the quantum-skeptic positions:<p><a href="https://gilkalai.wordpress.com/2025/02/17/robert-alicki-michel-dyakonov-leonid-levin-oded-goldreich-and-others-a-summary-of-some-skeptical-views-on-quantum-computing/" rel="nofollow">https://gilkalai.wordpress.com/2025/02/17/robert-alicki-mich...</a>
I think that Kalai here is very seriously understating how fringe/contrarian his views are. He's not merely stating that there's too much optimism about potential future results, or that there's some kind of intractable theoretical or practical bottleneck that we'll soon reach and won't be able to overcome. He's saying that <i>any</i> kind of quantum advantage—a thing that numerous experiments, from different labs in academia and industry, using a wide variety of approaches, have demonstrated over the past decade—is impossible, and therefore all of those experimental results were wrong and need to be retracted. His position was scientifically respectable back when the possibility he was denying <i>hadn't actually happened yet</i>, but I don't think it is anymore.
I think he is playing it smart. The more fringe/contrarian it is, the bigger the payoff if he turns out to be right. So far nothing of much use has come out of QC, and if nothing ever does, then the hype pendulum swings back at some point, and he wins big. If not, his position will seem silly, but there is not much risk to his reputation; being skeptical of a new model is intellectually fine and even courageous if it goes against the mainstream. I see it like those who called out the "replication crisis" in the social sciences.
Recent and related:<p><i>Microsoft unveils Majorana 1 quantum processor</i> - <a href="https://news.ycombinator.com/item?id=43104071">https://news.ycombinator.com/item?id=43104071</a> - Feb 2025 (150 comments)
I think what many people are missing in the discussion here is that topological qubits are essentially a different type of component. This is analogous to the relay-triode-transistor technology progression.<p>It is still speculation whether the top-q approach will be effective, but there are significant implications if it is. Scalability, reliability, and speed are all significant aspects on the table here.<p>While other technologies have a significant head start, much of the “head start” is transferable knowledge, similar to the relay-triode-transistor-integrated-circuit progression. Each new component type multiplies the effectiveness of the advances made by the previous generation of technologies; it doesn’t start over.<p>IF the topological qubits can be made reliable and they live up to their scalability promises, it COULD be a revolutionary step, enabling exponential gains in cost, scalability, and capability. IF.
Topological analysis shows that, for exchanges of identical particles, multiple distinct pathways exist in 2D (in 3D the topology does not admit distinct pathways for these exchanges). This permits real anyon particles to exist when the physics is confined to 2D in the quantum regime, such as in a layer of graphene. Certain configurations of layers (“moiré materials”) can be made periodic to provide a lattice of suitable scale for anyons to localize and adopt particular quantum states.<p>Anyons lie somewhere between fermions and bosons in their state occupancy and statistics: no 2 fermions may occupy the same state, bosons can all occupy the same state, and anyons follow rational-number patterns, e.g. up to 2 anyons can occupy 3 states.
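To make the "somewhere between fermions and bosons" point concrete, here is a minimal Python sketch of the abelian case: exchanging two identical particles multiplies the wavefunction by a phase exp(i*theta), with theta = 0 for bosons, theta = pi for fermions, and anything in between for anyons (theta = pi/3 below is just an illustrative choice). The non-Abelian anyons relevant to topological qubits generalize this phase to a matrix acting on a degenerate state space.<p><pre><code>import cmath

# Exchanging two identical particles multiplies the wavefunction by exp(i*theta).
# In 3D the exchange pathways only allow theta = 0 (bosons) or theta = pi (fermions);
# in 2D any theta is possible, hence "any-ons".
for name, theta in [("boson", 0.0), ("fermion", cmath.pi), ("anyon (theta=pi/3)", cmath.pi / 3)]:
    phase = cmath.exp(1j * theta)
    print(f"{name:>20}: exchange phase ~ {phase.real:+.3f}{phase.imag:+.3f}j")
</code></pre>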
I enjoy the quality of "it's too early to say" in Aaronson's writing. It won't stop share price movement or hopeless optimism amongst others.<p>I do wonder if he is running a simple first-order differential on his own beliefs. He certainly has the chops here, and self-introspection on the trajectory of the highs and lows and the trends would interest me.
A bit off topic - I really like Scott Aaronson and his blog, but hate the comment section - he engages a lot with the comments (which is great!) but it's really hard to follow, as each comment is presented as a new message.<p>I made this small silly chrome extension to re-structure the comments to a more readable format - if anyone is interested<p><a href="https://github.com/eliovi/shtetl-comment-optimized">https://github.com/eliovi/shtetl-comment-optimized</a>
I find the opposite: he often makes some ridiculous claim in the post, the comments (the ones he lets through) rightfully point out how wrong he was, and then he cherry-picks and engages one of the more outrageous comments, so a superficial observer is left with the impression that the original claim was OK.
I remain curious if you can actually calculate anything with these gadgets? I mean can it add 2 and 2 or work out the factors of 30 or anything?
This experiment only created one qubit, so no.<p>The experiments with lots of qubits... technically yes, they can do things. I think the factoring record is 21. But you might be disappointed a) when you see that most algorithms using quantum computers require conventional computation to transform the problem before and after the quantum steps, b) when you learn we only have a few quantum algorithms (they are not general calculation machines), and c) when you look under the hood and see that the error-correcting stuff makes it actually kinda hard to tell how much is really being done by the actual quantum device.
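To make point a) concrete, here is a minimal Python sketch of the classical post-processing in Shor's algorithm for N = 21, assuming the quantum subroutine has already returned the period r of a^x mod N; in a real run, that period-finding step is the only quantum part.<p><pre><code>from math import gcd

# Classical post-processing around Shor's algorithm, sketched for N = 21.
# The quantum part only finds the period r of a^x mod N; choosing a, checking r,
# and taking gcds are all ordinary classical computation.
N, a = 21, 2
r = 6                          # order of 2 modulo 21: 2^6 = 64 = 3*21 + 1
assert pow(a, r, N) == 1 and r % 2 == 0
x = pow(a, r // 2, N)          # 2^3 mod 21 = 8
p, q = gcd(x - 1, N), gcd(x + 1, N)
print(p, q)                    # 7 3, i.e. 21 = 7 * 3
</code></pre>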
We should celebrate this for what it is: another brick in the wall that we’re building to achieve practical quantum computing systems.
That's the best-case scenario. It remains possible that topological qubits, even if they are theoretically achievable, will turn out to be a dead end engineering-wise. Presumably competing quantum computing labs think this is likely, since they're not working on topological qubits; only Microsoft thinks they'll end up being important.
Yes, just like putting two bricks onto each other is a first step to the moon.
I wonder if this means that AI will have more capabilities with quantum computing.<p>So far, I haven't read how those chips are programmed, but it seems like it requires relearning almost everything.<p>I don't even know if there is an OS for those.
So far, the only known algorithm relevant to AI that would run faster on a theoretical quantum computer is linear search, where quantum computers offer a modest speedup (linear search is O(n) on a classical computer, while Grover's algorithm is O(sqrt(n)) on a quantum computer - this means that for a list of a million elements, you could scan it in roughly 1,000 steps on a quantum computer instead of 1,000,000 steps on a classical one).<p>However, even this is extremely theoretical at this time - no quantum computer built so far can execute Grover's algorithm at any useful scale; they are not reliable enough to get any result with probability higher than noise, and in any case can't apply the number of steps required for even a single pass without losing entanglement. So we are still very, very far away from a quantum computer that could reach anything like the computing performance of a single consumer-grade GPU. We're actually very far away from a quantum computer that even reaches the performance of a hand calculator at this time.
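For a rough sense of the scale of that speedup, a small Python sketch comparing query counts, using the standard ~(pi/4)*sqrt(n) Grover iteration count, which is where the "roughly 1,000 steps for a million elements" figure comes from:<p><pre><code>from math import pi, sqrt

# Oracle-query counts for unstructured search over n items:
# a classical scan needs ~n lookups in the worst case, while Grover's
# algorithm needs ~(pi/4) * sqrt(n) iterations.
for n in (10**3, 10**6, 10**9):
    grover = round(pi / 4 * sqrt(n))
    print(f"n = {n:>13,}   classical ~ {n:>13,}   Grover ~ {grover:>8,}")
</code></pre>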
Pure Quantum Gradient Descent Algorithm and Full Quantum Variational Eigensolver
<a href="https://arxiv.org/abs/2305.04198" rel="nofollow">https://arxiv.org/abs/2305.04198</a>
<a href="https://en.wikipedia.org/wiki/Quantum_optimization_algorithms" rel="nofollow">https://en.wikipedia.org/wiki/Quantum_optimization_algorithm...</a>
There is not an "OS" or anything even remotely like it. For now these things behave more like physics experiments than computers.<p>You can play around with "quantum programming" through (e.g.) some of IBM's offerings, and there has been work on quantum programming languages like Q# from Microsoft, but it's unclear (to me) how useful these are.
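For anyone curious what that looks like in practice, a minimal sketch using IBM's Qiskit (assuming the qiskit package is installed); it just builds and prints a two-qubit Bell-state circuit, which is roughly the "hello world" of quantum programming:<p><pre><code>from qiskit import QuantumCircuit

# Build a two-qubit Bell-state circuit: Hadamard on qubit 0, then CNOT from 0 to 1,
# followed by measurement of both qubits.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()
print(qc)  # prints an ASCII drawing of the circuit
</code></pre>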
That's not the way to think about quantum computing, AFAIK.<p>Think of these as accelerators you use to run some specific algorithm, the result of which your "normal" application uses.<p>More akin to GPUs: your "normal" applications running on "normal" CPUs offload some specific computation to the GPU and then use the result.
> "<i>an OS for those</i>"<p>Or at least an OS driver for the devices supporting quantum computing if/when they become more standard.
Other than fast factorization and linear search, is there anything that quantum computing can do? Those do seem important, but limited in scope - is this a solution in search of a problem?<p>I've heard it could get us very accurate, high-detail physics simulations, which has potential, but I don't know if that's legit or marketing BS.
Hey, anyons in a 2D electron gas. I wrote about it a while ago and got downvoted!
I had a thought while reading this:<p>Are we, in fact, in the very early stages of gradient descent toward what I want to call "software defined matter?"<p>If we're learning to make programmable quantum physics experiments and use them to do work, what is that the <i>very beginning</i> of? Imagine, say, 300 years from now.
This seems like quite a bold claim, that Microsoft proved that neutrinos are Majorana particles...