I’ve recently become very interested in QC and purchased and read Quantum Computation and Quantum Information, which I think is the standard book on the subject right now.
I’m even more interested in applying what I’ve learned, but I’m at a loss as to how to begin working in the industry. Aside from getting a new master’s degree, I wouldn’t know where to begin, and resources on the matter are understandably sparse.
- The section on error correction is still gold, but it doesn't cover "scalable codes" like the surface code (and other LDPC codes; lots of exciting progress there).
- Superconducting qubits: https://arxiv.org/abs/1904.06560
- Rydberg atoms: see the Nature papers from Misha Lukin's group on the subject.
- Photonic quantum computing.
These might be hard to follow now, but if you make it through a good chunk of Nielsen and Chuang, then they might become quite readable. Make sure you solve lots of problems or it won't stick.
Like other commenters have pointed out, quantum computing companies need lots of software engineers, so that's a very viable entry into the field for many people. Here's an arbitrary list of some relevant skills:
- Qutip! You can learn sooo much quantum mechanics by playing around in Qutip, and it's quite easy to use (see the sketch below).
- Rust or C++ (depending on the company?)
- FPGA programming
- Python (ofc)
- Linear algebra
- ...
Source: work at a QC company as a scientist.
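For a taste of the Qutip tinkering mentioned above, here's a minimal sketch (a toy example, not from any company codebase): it simulates Rabi oscillations of a single driven qubit, assuming qutip and numpy are installed.

    import numpy as np
    import qutip as qt

    # Drive a single qubit and watch Rabi oscillations of the |1> population.
    omega = 2 * np.pi * 1.0          # drive strength, arbitrary units (made up)
    H = 0.5 * omega * qt.sigmax()    # driving Hamiltonian
    psi0 = qt.basis(2, 0)            # start in |0>
    times = np.linspace(0, 2, 201)

    # Solve the Schrodinger equation, tracking the excited-state population.
    result = qt.sesolve(H, psi0, times, e_ops=[qt.num(2)])
    print(result.expect[0][:5])      # population oscillates between 0 and 1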
If you're in AI, you might be pleased to know that the probability distribution of a particle over its energy states is the softmax of the negative energies divided by temperature, which is where the concept of LLM "temperature" comes from. If you have a linear algebra background, those energies are the eigenvalues of a Hamiltonian. Physics is actually quite beautiful.
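To make that concrete, here's a tiny numerical sketch (the energies and temperatures are made up): the Boltzmann distribution over energy levels is literally softmax(-E / T), and lowering T concentrates the probability on the ground state.

    import numpy as np

    def softmax(x):
        z = np.exp(x - np.max(x))    # subtract the max for numerical stability
        return z / z.sum()

    E = np.array([0.0, 1.0, 2.5])    # hypothetical energy eigenvalues of some Hamiltonian
    for T in (0.5, 1.0, 10.0):
        p = softmax(-E / T)          # Boltzmann weights exp(-E_i / T), normalized
        print(T, p.round(3))         # low T -> ground state dominates; high T -> nearly uniform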
Getting into the industry is another issue, though; it seems every company favors credentials over learning ability these days. If you haven't published 1500 papers on the subject, you're automatically rejected.
People also keep talking about using it for AI but all you can train with it are Boltzmann machines because those are all that map into QUBO problems.
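For anyone unsure what "map into QUBO problems" means, here's a toy sketch with invented coefficients: a QUBO asks for a binary vector x minimizing x^T Q x, the same quadratic-in-binary-variables form as a Boltzmann machine's energy function.

    import itertools
    import numpy as np

    # Toy QUBO: minimize x^T Q x over binary x; the Q entries are arbitrary.
    Q = np.array([[-1.0,  2.0,  0.0],
                  [ 0.0, -1.0,  2.0],
                  [ 0.0,  0.0, -1.0]])

    def energy(x):
        x = np.array(x)
        return x @ Q @ x

    best = min(itertools.product([0, 1], repeat=3), key=energy)
    print(best, energy(best))        # brute force is fine at this size: (1, 0, 1), -2.0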
Right now it hasn't amounted to anything useful: other than Shor's algorithm, it's 'experiments', promises, and applications that are no better than what a GPU rack can do today.
Shor's paper on polynomial-time factoring is from 1997, and the first real demonstration of quantum hardware (Monroe et al.) is from 1995. Yes, quantum has had decades -- but only barely, and it has certainly only now started to span generations.
To see the kind of progress this means, take a look at some of the recent PhD spin-outs of leading research groups (Oxford Ionics etc.): there are a lot of organisations with nothing but engineering to go before they reach fault tolerance.
When I came back to quantum three years ago, fault tolerance was still going to be based on the surface-code ideas that were floating around when I did my PhD ('04). Today, after everyone has started looking harder, it turns out that a bit of long-range connectivity can cut the error-correction overhead by orders of magnitude (see recent public posts by IBM Quantum): the goalposts for fault tolerance are moving in the right direction.
And this is the key thing about quantum computing: you need error correction, and you need to do it with the same error-prone hardware that you are correcting for. There is a threshold hardware quality that lets you do this at a reasonable overhead, and before you reach that threshold all you have is a fancy random number generator.
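A rough back-of-the-envelope sketch of that threshold behaviour, using the common surface-code heuristic p_logical ≈ A * (p / p_th)^((d+1)/2) with an invented prefactor A and threshold p_th:

    # Heuristic only: A = 0.1 and p_th = 1e-2 are illustrative assumptions, not measured data.
    A, p_th = 0.1, 1e-2

    def p_logical(p_phys, d):
        # Clamp at 1.0 since the formula is only meaningful below threshold.
        return min(1.0, A * (p_phys / p_th) ** ((d + 1) / 2))

    for p_phys in (2e-2, 5e-3, 1e-3):            # above, just below, well below threshold
        rates = [f"d={d}: {p_logical(p_phys, d):.1e}" for d in (3, 5, 11)]
        print(f"p_phys={p_phys:.0e}  " + "  ".join(rates))
    # Below threshold, raising the code distance d suppresses the logical error
    # rate exponentially; above it, bigger codes only make things worse.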
But yes, feel free to be a pessimist -- just remember to own it when quantum happens in a few years.
The next big challenge will be mounting the control hardware, currently connected via coaxial cables, onto the chip while preventing the introduction of new sources of interference, so that error correction can run. That will take a miracle.
Of course, an alternative is a million coaxial cables connected to a chip cooled close to mK temperatures.
The current emphasis on NISQ systems is a bit of a desperate measure because the most we can get out of such systems is evidence that quantum computing can work in theory; they do not advance us towards having a workable quantum computer.
The last paper I saw posted on hackernews from Gil Kalai included a few explicit predictions about what would be impossible in quantum error correction.
This was a paper from a few years back.
The problem is that Google has now published results which imply that some of Kalai's predictions turned out false.
The paper in question is Google's recent "below threshold"/"beyond break-even" QEC paper. IIRC, Kalai was predicting that below-threshold QEC would be impossible, among other things.
Not sure if Kalai has responded or updated his predictions, I haven't been following him closely.
In some ways it is similar to fusion. People have been working on it for a long time. The benefits are potentially significant (Shor is cute and all, but really the big deal would be a cheap way to simulate other quantum systems), but the challenges are also significant. Real progress is being made. Things that were super challenging 10 years ago are solved now. The field is advancing. But we still have a long way to go.
It is not a scam itself, but a lot of scammers use the language of quantum to sell their scams. You should treat anyone trying to convince you that they will have a useful quantum computer in the next 5 years the same way as someone offering you a fusion reactor (i.e. full of shit).
It's still a worthwhile pursuit, even just as a physics experiment. It pushes the "weirdness" of quantum physics to the limit -- by literally disproving the extended Church-Turing thesis. If we make a real quantum computer, that is proof that quantum physics is really how our world works. It's not just something else that is being misinterpreted.
1) While quantum computers are potentially exponentially faster, they also seem to be exponentially more expensive in the number of qubits, so you can't actually save money by building a huge quantum computer. This may or may not change in the future. There is also the problem of error correction, which is made much harder by the nature of quantum computing. Smart people are working on that; I don't know the current state of progress.
2) Despite the hype, only some problems can be computed exponentially faster on a quantum computer, not all of them. This is analogous to parallel computing: having two CPUs instead of one lets you calculate some things twice as fast, but other things take exactly the same amount of time because their steps have to be done sequentially. Similarly, a quantum computer is like a network of billions of computers spread across the multiverse, but they all have to run the same code, and the results of the gigantic computation have to be compressed into about a dozen bytes. So it's great for highly parallelizable tasks where the entire required output is a yes-or-no answer or a single number... and less useful for everything else. That still includes some important scientific problems, such as simulating atoms and molecules. But those are not the things we typically use computers for.
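To put a number on the "only some problems" caveat (toy arithmetic, not the commenter's): for unstructured search the quantum advantage is only quadratic, and the answer is still just a single index.

    import math

    # Expected classical queries (~N/2) vs Grover iterations (~(pi/4) * sqrt(N)).
    for N in (10**3, 10**6, 10**9):
        classical = N / 2
        grover = math.pi / 4 * math.sqrt(N)
        print(f"N={N:>13,}  classical ~ {classical:>13,.0f}  Grover ~ {grover:>10,.0f}")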
Decades is a short amount of time in human history. Many things took centuries to invent.
The Silicon Valley approach of a year or two of runway is how apps are built, but that's not how science is built.
My time in my Ph.D. program and some of the work in my career (getting paid for it) suggest that I was "good at researching". But I left that kind of research because I wanted to get paid more, and settled on starting a business, owning it, and making it valuable. If some research can help the business, fine, but the real goal is just the money from a successful business.
On (academic) research, one lesson no one ever mentioned to me but that I eventually formulated: pick a field of research. In that field, a lot about what is expected, respected, intended, valued or not, ..., is not much spoken about and not made clear -- which is fertile ground for politics. And for such questions, the answers you guess or get in one field will likely be quite different in another. In some fields you can be reminded of the old quip: "Did Haydn write 101 symphonies, or one symphony 101 times?" Or at times you can believe that, with high probability, a paper gets read by just two people, the peer reviewer and the author; in that case, the only accomplishment of papers, good or bad, is that they get counted, as in: someone with 50 papers is regarded as better than someone with only 4. Ah, tough to prove that the paper will never get 1000 readers!
For research, one approach is to study a field (assume an academic one), crawl down some narrow alley or rabbit hole, see a question with no answer, consider the broad status of the field, and then, if making progress on the question seems not obviously impossible, give it a try. Within a few days or weeks you should have an answer, a partial answer, or hints that by continuing you might get something.
Another approach is to pick a problem mostly on your own, not from trying to extend published research. You might follow some instance of personal curiosity or something from some other field, e.g., do some math, optimization, or statistics research on problems from the environment (why the ups and downs of lobsters in east Canada?), medical testing, the supply chain, some engineering problem, some business problem, etc.
Do note that in the US, after radar, the proximity fuze, submarine acoustics, code breaking, jet engines, and the "bomb", the US military had plenty of both money and problems, and that funded a lot of US research. Now there seems to be a general view: we don't know what research directions will yield powerful results, but since we can't afford to miss out on some big result or fall behind, we will continue to fund research. Non-military research seems less eager for results and to have less money.
Ah, be good at the politics, e.g., even follow "Always look for the hidden agenda." If working in an organization, beware of "goal subordination", i.e., others working to have you fail.