"if Microsoft’s claim stands, then topological qubits have finally reached some sort of parity with where more traditional qubits were 20-30 years ago. I.e., the non-topological approaches like superconducting, trapped-ion, and neutral-atom have an absolutely massive head start: there, Google, IBM, Quantinuum, QuEra, and other companies now routinely do experiments with dozens or even hundreds of entangled qubits, and thousands of two-qubit gates. Topological qubits can win if, and only if, they turn out to be so much more reliable that they leapfrog the earlier approaches—sort of like the transistor did to the vacuum tube and electromechanical relay. Whether that will happen is still an open question, to put it extremely mildly."
> I foresee exciting times ahead, provided we still have a functioning civilization in which to enjoy them.
I don't agree with many of his posts but I think the blog is interesting in how personal it feels. Often I feel like all media is very cultivated but he seems very willing to put his own anxieties and foibles on the web.
Does Musk understand this? Maybe not. It's not evident so far. He certainly lives in a fictional world of the right wing's devising. Will someone else be able to penetrate that bubble to make him understand it? Will he care if they do? Guess we'll find out.
> For most of my professional life, this blog has been my forum, where anyone in the world could show up to raise any issue they wanted, as if we were tunic-wearing philosophers in the Athenian agora.
Given that this is Scott Aaronson, does he suggest we'll break cryptography and destroy the foundations of the modern internet?
If topological qubits turn out to be so much more reliable, then it doesn't really matter how much time was spent trying to make other types of qubits more reliable. It's not really a head start, is it?
Or are there other problems besides preventing unwanted decoherence that might take that much time to solve?
So, he is saying that this approach will only pay off if topological qubits are a fundamentally better approach than the others being tried. If they turn out to be, say, merely twice as good as trapped ion qubits, they'll still only get to the achievements of current trapped ion designs with another, say, 10-15 years of continued investment.
The utility of traditional qubits depends entirely on how reliable and long-lived they are, and how well they can scale to larger numbers of qubits. These topological qubits are effectively 100% reliable, infinite duration, and scale like semiconductors. According to the marketing literature, at least…
Note also that this isn’t a simulated result. Microsoft has an 8-qubit chip they are making available on Azure.
IBM sells you 400 qubits with huge coherence problems. When IBM had an 8-qubit chip, those qubits were also pretty stable.
https://www.ft.com/content/a60f44f5-81ca-4e66-8193-64c956b09...
Microsoft is saying: we did it!
Everyone else is saying: prove it!
The only expert in the FT article is Dr. Sankar Das Sarma who (from Wikipedia)
"In collaboration with Chetan Nayak and Michael Freedman of Microsoft Research, Das Sarma introduced the ν = 5 / 2 topological qubit in 2005"
So you might understand why this FT article is not adding anything to the discussion, which is not about the theory but about MS's claim of an actual breakthrough. They show a chip; we'd like proof of what the chip actually does.
"The editorial team wishes to point out that the results in this manuscript do not represent evidence for the presence of Majorana zero modes in the reported devices. The work is published for introducing a device architecture that might enable fusion experiments using future Majorana zero modes."
https://static-content.springer.com/esm/art%3A10.1038%2Fs415...
1) The Nature paper just released focuses on our technique of qubit readout. We interpret the data in terms of Majorana zero modes, and we also do our best to discuss other possible scenarios. We believe the analysis in the paper and supplemental information significantly constrains alternative explanations but cannot entirely exclude that possibility.
2) We have previously demonstrated strong evidence of Majorana zero modes in our devices, see https://journals.aps.org/prb/pdf/10.1103/PhysRevB.107.245423.
3) On top of the Nature paper, we have recently made additional progress which we just shared with various experts in the field at the Station Q conference in Santa Barbara. We will share more broadly at the upcoming APS March meeting. See also https://www.linkedin.com/posts/roman-lutchyn-bb9a382_interfe... for more context.
Hmmm.. appreciate the honesty :)
That's from the abstract of the upcoming conference talk (Mar 14)
> Towards topological quantum computing using InAs-Al hybrid devices
Presenter: Chetan Nayak (Microsoft)
The fusion of non-Abelian anyons is a fundamental operation in measurement-only topological quantum computation. In one-dimensional topological superconductors, fusion amounts to a determination of the shared fermion parity of Majorana zero modes. Here, we introduce a device architecture that is compatible with future tests of fusion rules. We implement a single-shot interferometric measurement of fermion parity in indium arsenide-aluminum heterostructures with a gate-defined superconducting nanowire. The interferometer is formed by tunnel-coupling the proximitized nanowire to quantum dots. The nanowire causes a state-dependent shift of these quantum dots' quantum capacitance of up to 1 fF. Our quantum capacitance measurements show flux h/2e-periodic bimodality with a signal-to-noise ratio of 1 in 3.6 microseconds at optimal flux values. From the time traces of the quantum capacitance measurements, we extract a dwell time in the two associated states that is longer than 1 ms at in-plane magnetic fields of approximately 2 T. These measurements are discussed in terms of both topologically trivial and non-trivial origins. The large capacitance shift and long poisoning time enable a parity measurement with an assignment error probability of 1%.
Seems like John Baez didn't notice those lines in the peer review either
https://mathstodon.xyz/@johncarlosbaez/114031919391285877
TIL: read the peer review first
https://gilkalai.wordpress.com/2025/02/17/robert-alicki-mich...
Microsoft unveils Majorana 1 quantum processor - https://news.ycombinator.com/item?id=43104071 - Feb 2025 (150 comments)
It is still speculation whether the topological-qubit approach will be effective, but there are significant implications if it is. Scalability, reliability, and speed are all major aspects on the table here.
While other technologies have a significant head start, much of that “head start” is transferable knowledge, similar to the relay-triode-transistor-integrated-circuit progression. Each new component type multiplies the effectiveness of the advances made by the previous generation of technologies; it doesn’t start over.
IF the topological qubits can be made reliable and they live up to their scalability promises, it COULD be a revolutionary step, enabling exponential gains in cost, scalability, and capability. IF.
Anyons lie somewhere between fermions and bosons in their state occupancy and statistics: no two fermions may occupy the same state, any number of bosons can occupy the same state, and anyons follow rational-number patterns in between (e.g., up to 2 anyons can occupy 3 states).
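One minimal way to make the "in between" concrete is the exchange phase of abelian anyons (a textbook relation; the anyons in Microsoft's scheme are the non-abelian kind, so treat this only as a sketch):

    \psi(x_2, x_1) = e^{i\theta}\,\psi(x_1, x_2), \qquad
    \theta = 0 \ (\text{bosons}), \quad
    \theta = \pi \ (\text{fermions}), \quad
    0 < \theta < \pi \ (\text{abelian anyons}).

The boson and fermion cases are the two endpoints (symmetric and antisymmetric wavefunctions); anyons interpolate between them, which is only possible in two dimensions.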
I do wonder if he is running a simple 1st order differential on his own beliefs. He certainly has the chops here, and self introspection on the trajectory of highs and lows and the trends would interest me.
I made this small silly Chrome extension to re-structure the comments into a more readable format, if anyone is interested
The experiment with lots of qubits... technically yes, they can do things. I think the factoring record is 21. But you might be disappointed a) when you see that most algorithms using quantum computers require conventional computation to transform the problem before and after the quantum steps, b) when you learn we only have a few quantum algorithms, they are not general calculation machines, and c) when you look under the hood and see that the error-correcting stuff makes it actually kinda hard to tell how much is really being done by the actual quantum device.
(Also, the factoring-21 result is from 2012, and may have been surpassed since then depending on how you count. Recent quantum-computing research has focused less on factoring numbers and more on problems like random circuit sampling where it's easier to get meaningful results with the noisy intermediate-scale machines we have today. Factoring is hard mode because you have to get it exactly right or else it's no good at all.)
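To make point (a) above concrete, here's a rough sketch of how much of Shor's algorithm is ordinary classical pre/post-processing. Everything below is illustrative: the names are made up, and the one genuinely quantum step (order finding) is stubbed out with a classical brute-force loop so the snippet actually runs.

    from math import gcd

    # Hypothetical sketch of the classical scaffolding around Shor's algorithm.
    # Only quantum_order_finding() would run on a quantum device; here it is
    # replaced by a classical brute-force loop purely for illustration.

    def quantum_order_finding(a, N):
        """Stand-in for the quantum step: the smallest r with a**r % N == 1."""
        r = 1
        while pow(a, r, N) != 1:
            r += 1
        return r

    def shor_classical_wrapper(N, a):
        g = gcd(a, N)
        if g != 1:                        # lucky guess already shares a factor
            return g, N // g
        r = quantum_order_finding(a, N)   # <-- the only "quantum" step
        if r % 2 == 1:
            return None                   # odd order: retry with another a
        x = pow(a, r // 2, N)
        if x == N - 1:
            return None                   # trivial square root: retry
        p, q = gcd(x - 1, N), gcd(x + 1, N)
        return (p, q) if p * q == N else None

    print(shor_classical_wrapper(21, 2))  # -> (7, 3)

Everything except the order-finding step is classical number theory; the quantum device only answers one narrow question in the middle.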
So far, I haven't read how those chips are programmed, but it seems like it requires relearning almost everything.
I don't even know if there is an OS for those.
However, even this is extremely theoretical at this time: no quantum computer built so far can execute Grover's algorithm; they are not reliable enough to get any result with probability higher than noise, and in any case can't apply the number of steps required for even a single pass without losing entanglement. So we are still very, very, very far away from a quantum computer that could reach anything like the computing performance of a single consumer-grade GPU. We're actually very far away from a quantum computer that even reaches the performance of a hand calculator at this time.
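For a sense of the scale involved (this is just the standard textbook query count, not specific to any hardware), Grover search over N items with M marked entries needs about

    \text{iterations} \approx \left\lfloor \frac{\pi}{4}\sqrt{\frac{N}{M}} \right\rfloor

coherent oracle calls in sequence, so even a modest search over N = 2^{40} items with a single marked entry is on the order of 10^6 sequential steps, every one of which has to complete before the machine decoheres.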
You can play around with "quantum programming" through (e.g.) some of IBM's offerings, and there has been work on quantum programming languages like Q# from Microsoft, but it's unclear (to me) how useful these are.
Think of these as accelerators you use to run some specific algorithm whose result your "normal" application then uses.
More akin to GPUs: your "normal" applications running on "normal" CPUs offload some specific computation to the GPU and then use the result.
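A minimal sketch of that offload pattern (all names here are hypothetical, and the "device" is just a tiny NumPy statevector toy standing in for a real QPU or cloud backend):

    import numpy as np

    # Toy "quantum accelerator": a 2-qubit statevector simulator that prepares
    # a Bell state and returns measurement counts, standing in for real hardware.

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]])         # controlled-NOT

    def run_on_device(shots=1000):
        """'Offloaded' step: prepare a Bell state, sample measurement outcomes."""
        state = np.zeros(4); state[0] = 1.0               # |00>
        state = np.kron(H, np.eye(2)) @ state             # H on qubit 0
        state = CNOT @ state                              # entangle the pair
        probs = np.abs(state) ** 2
        outcomes = np.random.choice(4, size=shots, p=probs)
        return np.bincount(outcomes, minlength=4)         # counts for 00,01,10,11

    # "Normal" host code: call the accelerator, then post-process classically.
    counts = run_on_device()
    print(dict(zip(["00", "01", "10", "11"], counts)))    # ~50/50 between 00 and 11

The point is just the shape of the workflow: the host prepares the job, the accelerator hands back measurement counts, and everything after that is classical again.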
Or at least an OS driver for the devices supporting quantum computing if/when they become more standard.
I've heard it could get us very accurate high-detail physics simulations, which has potential, but don't know if that's legit or marketing BS.
Are we, in fact, in the very early stages of gradient descent toward what I want to call "software defined matter?"
If we're learning to make programmable quantum physics experiments and use them to do work, what is that the very beginning of? Imagine, say, 300 years from now.
Indeed, Majorana fermions are completely unseen/unconsidered outside of neutrinos. In fact, all Standard Model fermions except neutrinos are proven to be Dirac fermions.
> “There’s no slam dunk to know immediately from the experiment” that the qubits are made of topological states, says Simon. (A claim of having created Majorana states made by a Microsoft-funded team based in Delft, The Netherlands, was retracted in 2021.) The ultimate proof will come if the devices perform as expected once they are scaled up, he adds.
My understanding is that they pretty convincingly showed that the thing they built acts as a qubit. This means that if it's not doing what they think it's doing (the "topological" / Majorana stuff), then they accidentally made a qubit which works some other way. That isn't outside the realm of possibility, but it is fairly unlikely.
Also, to say Majorana fermions are not considered outside of neutrinos is a patently ridiculous and ignorant statement. There is absolutely nothing in physics to say the only particle that can possibly be Majorana is a neutrino. For example, there have been theories of Majorana dark matter, which posit fundamental Majorana particles outside of neutrinos.