PHYS771 Lecture 14: Skepticism of Quantum Computing

Scott Aaronson

Scribe: Chris Granade


Last time, we talked about whether quantum states should be thought of as exponentially long vectors, and I got challenged a bit about why I care about the class BQP/qpoly and concepts like quantum advice. Actually, I'd say that the main reason why I care is something I didn't even think to mention last time, which is that it relates to whether we should expect quantum computing to be fundamentally possible or not. There are people, like Leonid Levin and Oded Goldreich, who just take it as obvious that quantum computing must be impossible. Part of their argument is that it's extravagant to imagine a world where describing the state of 200 particles takes more bits than there are particles in the universe. To them, this is a clear indication that something is going to break down. So part of the reason that I like to study the power of quantum proofs and quantum advice is that it helps us answer the question of whether we really should think of a quantum state as encoding an exponential amount of information.

So, on to the Eleven Objections:

  1. Works on paper, not in practice.
  2. Violates Extended Church-Turing Thesis.
  3. Not enough "real physics."
  4. Small amplitudes are unphysical.
  5. Exponentially large states are unphysical.
  6. Quantum computers are just souped-up analog computers.
  7. Quantum computers aren't like anything we've ever seen before.
  8. Quantum mechanics is just an approximation to some deeper theory.
  9. Decoherence will always be worse than the fault-tolerance threshold.
  10. We don't need fault-tolerance for classical computers.
  11. Errors aren't independent.

What I did was to write out every skeptical argument against the possibility of quantum computing that I could think of. We'll just go through them, and make commentary along the way. Let me just start by saying that my point of view has always been rather simple: it's entirely conceivable that quantum computing is impossible for some fundamental reason. If so, then that's by far the most exciting thing that could happen for us. That would be much more interesting than if quantum computing were possible, because it would change our understanding of physics. To have a quantum computer capable of factoring 10,000-digit integers is the relatively boring outcome -- the outcome that we'd expect based on the theories we already have.

I like to engage skeptics for several reasons. First of all, because I like arguing. Secondly, often I find that the best way to come up with new results is to find someone who's saying something that seems clearly, manifestly wrong to me, and then try to think of counterarguments. Wrong people provide a fertile source of research ideas.


So what are some of the skeptical arguments that I've heard? The one I hear more than any other is "well, it works formally, on paper, but it's not gonna work in the real world." People actually say this, and they actually treat it as if it were an argument. For me, the fallacy is not in thinking that an idea might fail in the real world; it's in thinking that such an idea could nevertheless "work on paper." If an idea doesn't work in the real world, then it doesn't really work on paper either. Of course, a result might rest on assumptions that the real world fails to satisfy -- but then the question becomes whether those assumptions were stated clearly or not.

Q: Do you think maybe this is just a rather unsophisticated way of challenging the assumptions of a result?
Scott: Yes -- but in that case, one hopes the challenge will become more sophisticated!

I was happy to find out that I wasn't the first person to point out this particular fallacy. Immanuel Kant wrote an entire treatise demolishing it: On the Common Saying: "That may be right in theory but does not work in practice."


Before going into the second argument, I'd like to remind you that these are all actual arguments that I've heard -- they aren't strawman arguments. With that in mind, the second argument is that quantum computing must be impossible because it violates the Extended Church-Turing Thesis. That is, we know that quantum computing can't be possible (assuming BPP≠BQP), because we know that BPP defines the limit of the efficiently computable.

Q: What is the Extended Church-Turing Thesis?
Scott: That's the thesis that anything that is efficiently computable in the physical world is computable in polynomial time on a standard Turing machine.

So, we have this thesis, and quantum computing violates the thesis, so it must be impossible. On the other hand, if you replaced Factoring with NP-complete problems, then this argument would actually become more plausible to me, because I would think that any world in which we could solve NP-complete problems efficiently would not look much like our world. For NP-intermediate problems like Factoring and Graph Isomorphism, I'm not willing to take some sort of a priori theological position.

Q: So you're saying that if somebody came up with a brilliant proposal for solving NP-complete problems, you would be skeptical?
Scott: Yeah. I might even take a position not far from the one that Leonid Levin takes toward quantum computing. People actually do have proposals where you could do the first step of your computation in one second, the next in half a second, the next in a quarter second and so on, so that after 2 seconds you'd have done infinitely many steps (since 1 + 1/2 + 1/4 + ... = 2). Of course, if you could do this, you could solve the Halting Problem. As it turns out, we do sort of understand why this model isn't physical: we believe that the very notion of time starts breaking down when you get down to around 10^-43 seconds (the Planck scale). We don't really know what happens there. Nevertheless, no matter what theory we have for quantum gravity, I would argue that it would have to rule out something like this.
Q: It seems that once you get to the Planck scale, you're getting into a really sophisticated argument. Why not just say you're always limited in practice by noise and imperfection?
Scott: The question is why are you limited? I think that if you try to make the argument precise, ultimately, you're going to be talking about the Planck scale.
Q: It's similar to saying that you can't store a real number in a register.
Scott: But why can't you store a real number in a register?
Q: Is there some reason you feel that Factoring is not in P?
Scott: Dare I say that the reason is that no one can solve it efficiently in practice? Though it's not a good argument, people are certainly counting on it not being in P. Admittedly, we don't have as strong a reason to believe that Factoring is not in P as we do to believe that P≠NP. It's even a semi-respectable opinion to say that maybe Factoring is in P, and that we just don't know enough about number theory to prove it. My own intuitive map of the complexity space is shown off to the side. Factoring, Graph Isomorphism, etc. have structure, and structure can potentially be exploited by algorithms. Maybe not by classical, polynomial-time algorithms, but in some cases it can be exploited by quantum algorithms and in some other cases by hidden-variable algorithms, etc. For NP-complete problems, we really don't have this structure --- at least by conjecture. That just serves to underscore the importance of the P≠NP conjecture. If it were false, then that would change how we think about all of this.

So that was the second argument. On to the third: "I'm suspicious of all these quantum computing papers because there isn't enough of the real physics that I learned in school. There's too many unitaries and not enough Hamiltonians. There's all this entanglement, but my professor told me not to even think about entanglement, because it's all just kind of weird and philosophical, and has nothing to do with the structure of the helium atom." What can one say to this? Certainly, this argument succeeds in establishing that we have a different way of talking about quantum mechanics now, in addition to the ways people have had for many years. Those making this argument are advancing an additional claim, though, which is that the way of talking about quantum mechanics they learned is the only way. I don't know if any further response is needed.


The fourth argument is that "these exponentially small amplitudes are clearly unphysical." This is another argument that Leonid Levin makes. Consider some equal superposition over 300 qubits, such that each component has an amplitude of 2^-150. We don't know of any physical law that holds to more than about a dozen decimal places, and here you're asking for accuracy to hundreds of decimal places. Why should anyone even imagine that makes any sense whatsoever?

Q: Intuitively, this is equivalent to the classical case where each 300-bit string has a 2^-300 probability. In that case, this argument would say that classical probability theory is also unphysical.
Scott: You know what? Why don't you tell the skeptics that? Maybe they'll listen to you...

The obvious rebuttal to argument 4, then, is that I can take a classical coin and flip it a thousand times. Then, the probability of any particular sequence is 2^-1000, which is far smaller than any constant we could ever measure in nature. Does this mean that probability theory is some "mere" approximation to a deeper theory, or that it's going to break down if I start flipping the coin too many times?

Bill: Maybe I don't believe this argument myself, but you could argue that in classical probability theory, the extremely small probabilities just reflect things you don't know -- the world itself is deterministic and only happens one way. Meanwhile, in the quantum case, all those amplitudes might matter.
Scott: Right. That is the difference, and that is the argument that is made. Now, though, there's a further problem with the argument, which is that I could take a state like |+⟩^⊗1000. This state has extremely small amplitudes -- each of its 2^1000 components has amplitude 2^-500 -- but presumably not even the staunchest quantum computing skeptic would dispute that we can prepare such a state.
Q: You could probably dispute that we can reliably prepare that state, and that you couldn't actually control 1,000 qubits well enough to verify that each of them is in the |+⟩ state.
Scott: Maybe a physicist could take 1,000 photons and put each of them through a half-silvered mirror.
Q: But are you looking at each individual photon to see if it's in the right state?
Q: You can post-select using a beam splitter, and you might get one photon not in the state, but you can account for that.
Scott: Then the question becomes whether it's somehow illegitimate to put all of the photons into one big tensor product...
Q: We need some kind of formulation of ultra-finitism for physics.
Scott: Right, that's a good way to put it.
Q: Going back to Bill's question, the problem is that the quantum state amplitudes interfere with each other, and they somehow have to "know" what each other's amplitudes are, as opposed to classical probability.
Scott: We'll get to this more later, but for me the key point is that amplitudes evolve linearly, and in that respect are similar to probabilities. We've got minus signs, and so we've got interference, but maybe if we really thought about why probabilities are okay, we could argue that it's not just that we're always in a deterministic state and merely don't know what it is, but that this property of linearity is something more general. Linearity is the thing that prevents small errors from creeping up on us. If we have a bunch of small errors, the errors add rather than multiply. That's linearity.

Argument 5 gets back to what we were talking about in the previous lecture: "it's obvious that quantum states are these extravagant objects; you can't just take 2^n bits and pack them into n qubits." Actually, I was arguing with Paul Davies, and he was making this argument, appealing to the holographic principle and saying that we have a finite upper bound on the number of bits that can be stored in a finite region of spacetime. If you have some 1000-qubit quantum state, it requires 2^1000 bits to describe, and according to Davies we've just violated the holographic bound.

So what does one say to that? First of all, this information, whether or not we think it's "there," can't generally be read out. This is the content of results like Holevo's Theorem. In some sense you might be able to pack 2^n bits into a state of n qubits, but the number of bits that you can reliably get out is only n.
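To state the bound in its standard form: if a classical message is encoded into an ensemble {p_i, ρ_i} of n-qubit states, then the information extractable by any measurement is at most the Holevo quantity

χ = S(ρ) - Σ_i p_i S(ρ_i),   where ρ = Σ_i p_i ρ_i

and since the von Neumann entropy S of an n-qubit state is at most n, we get χ ≤ n: at most n reliable bits out.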

Q: The holographic bound --- I know it's the number of bits per surface area, but what was the proportionality constant?
Scott: 1.4×10^69. It's a lot, but it's still a constant. Basically, it's about one bit per Planck area. (A quick check of this constant appears after this exchange.)
Q: Why isn't it in volume?
Scott: That's a very profound question that people like Witten and Maldacena stay up at night worrying about. The doofus answer is that if you try to take lots and lots of bits and pack them into some volume (such as a cubical hard disk), then at some point your cubical hard disk will collapse and form a black hole. A flat drive will also collapse, but a one-dimensional drive won't.
Q: You haven't been making a very good case for why this bound is measured in square meters.
Scott: Here's the thing: a hard drive will collapse to a black hole when its information density becomes large enough, so at some point, it seems as if you have all these bits that are near the event horizon of the black hole. That's the part that no one really understands yet, but it would suggest that you talk about the surface area of the event horizon.
Q: Why are they at the event horizon?
Scott: If you're standing outside a black hole, you never see someone pass through the event horizon. Then, if you want to preserve unitarity, and not have pure states evolve into mixed states when something gets dropped into a black hole, you say that when the black hole evaporates via Hawking radiation, the bits get peeled off like scales and go flying out into space. Again, this is not something that people really understand. People treat the holographic bound (rightfully) as one of the few clues we have for a quantum theory of gravity, but they don't yet have the detailed theory that implements the bound.
Q: I was wondering if the following would be another approach to understanding it, that doesn't involve black holes. If you're talking about getting the information, you basically have to access what's in the volume, and the only way to do that is to cut through the boundary.
Scott: Maybe, but the problem is why couldn't we say that we've got an amount of information that scales with the volume, but in order to get to the part in the middle, you have to peel away at the stuff on the outside. That seems like a consistent way of looking at it. The information is there, you just have to peel the other stuff away to get at it. The issue that you run up against then is one of gravitational collapse: to store information, you need some amount of energy. If you have enough energy within a bounded region of spacetime, then you pass the Schwarzschild limit, and it collapses.
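By the way, that 1.4×10^69 is easy to sanity-check, if we assume (as is standard) that the bound is the Bekenstein-Hawking entropy -- a quarter of the horizon area in Planck units, converted from nats to bits. A minimal sketch:

    import math

    l_p = 1.616e-35   # Planck length in meters
    # Bekenstein-Hawking: S = A / (4 l_p^2) nats, so the bound in bits per
    # square meter is 1 / (4 l_p^2 ln 2).
    bits_per_m2 = 1.0 / (4 * l_p**2 * math.log(2))
    print(f"{bits_per_m2:.2e} bits per square meter")   # ~1.4e+69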

There actually is an interesting question here. The holographic principle says that you can store only so much information within a region of space, but what does it mean to have stored that information? Do you have to have random access to the information? Do you have to be able to access whatever bit you want and get the answer in a reasonable amount of time? In the case that these bits are stored in a black hole, apparently if there are n bits on the surface, then it takes on the order of n^(3/2) time for the bits to evaporate via Hawking radiation. So the retrieval time is polynomial in the number of bits, but it still isn't particularly efficient. A black hole should not be one's first choice for a hard disk.

The other funny thing about this is that, in classical general relativity, the event horizon doesn't play a particularly special role. You could pass through it and you wouldn't even notice. Eventually, you'll know you passed through it, because you'll be sucked into the singularity, but while you're passing through it, it doesn't feel special. On the other hand, this information point of view says that as you pass through, you'll pass a lot of bits near the event horizon. What is it that singles out the event horizon as being special in terms of information storage? It's very strange, and I wish I understood it.


Argument 6: "a quantum computer would merely be a souped-up analog computer." This I've heard again and again, from people like the Nobel laureate Robert Laughlin, who espouses this argument in his popular book A Different Universe. This is a popular view among physicists. We know that analog computers are not that reliable, and can go haywire because of small errors. The argument proceeds to ask why a quantum computer should be any different, given that amplitudes are continuously varying quantities. Anyone want to answer this one for me?

A: The Threshold Theorem?
Scott: Thank you.

That argument describes what people thought before we had the Threshold Theorem (also called the Quantum Fault-Tolerance Theorem) -- and yet people are still making it ten years after the theorem was proved.
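Stated informally, the theorem says: if the error probability per gate can be made smaller than some constant threshold, then any quantum circuit with T gates can be simulated to within any desired accuracy ε by a circuit built out of the faulty gates, at the cost of blowing up the circuit size by a factor of only polylog(T/ε).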

Q: OK, so you have the Threshold Theorem, but then you have to do some error correction, right? Your computation becomes longer, right?
Scott: Yeah, but by a factor of polylog(n). This isn't challenging the Church-Turing Thesis, but yeah, that's true.
Q: I'm not sure -- wouldn't you then have to keep performing more error correction as you proceed?
Scott: The entire content of the Threshold Theorem is that you're correcting errors faster than they're created. That's the whole point, and the whole non-trivial thing that the theorem shows. That's the problem it solves.
Q: Isn't there a Threshold Theorem for classical computing as well?
Scott: There is.
Q: Is there a Threshold Theorem for analog computers?
Scott: No, and there can't be. The point is, there's a crucial property that is shared by discrete theories, probabilistic theories, and quantum theories, but that is not shared by analog or continuous theories. That property is insensitivity to small errors. That's really a consequence of linearity. When I think about the Threshold Theorem, I try to take a step back and ask "what does this really mean?" It's really a consequence of the linearity of quantum mechanics. If we were content with a weaker Threshold Theorem, we could consider a computation taking t time steps, where the amount of error per time step is at most 1/t. Then the Threshold Theorem would be trivial to prove. If we had a product of unitaries U_1 U_2 ... U_100, and each one were corrupted by an error of size 1/t (1/100 in this case), then we'd have a product like:
(U_1 + U'_1/t)(U_2 + U'_2/t) ... (U_100 + U'_100/t)
The total error in this product still won't be much, again because of linearity. An observation made by Bernstein and Vazirani was that quantum computation is sort of naturally robust against one-over-polynomial errors. In principle, that already answers the question.
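Here is a minimal numerical sketch of that observation (my own toy illustration, with made-up parameters, not anyone's published code): multiply t random unitaries, each perturbed by an error of operator norm ε/t, and watch the final product deviate from the ideal one by only about ε.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_unitary(d):
        # QR-decompose a random complex Gaussian matrix to get a unitary
        z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        q, r = np.linalg.qr(z)
        return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

    d, t, eps = 4, 100, 0.01
    ideal = np.eye(d, dtype=complex)
    noisy = np.eye(d, dtype=complex)
    for _ in range(t):
        U = random_unitary(d)
        E = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        E *= (eps / t) / np.linalg.norm(E, 2)   # per-step error of norm eps/t
        ideal = U @ ideal
        noisy = (U + E) @ noisy

    # Telescoping plus unitarity gives ||noisy - ideal|| <= t * (eps/t) = eps:
    # the errors add instead of compounding.
    print(np.linalg.norm(noisy - ideal, 2))     # prints something <= ~0.01

That's the "errors add rather than multiply" point in action.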
Q: I heard from a physicist that the fidelity of a gate decreases exponentially with the physical distance between the gates. When you increase the number of qubits, then the fidelity decreases exponentially, but you've only gained a linear number of qubits.
Scott: But we know that you can do universal quantum computing in the nearest-neighbor model. Thus, even supposing that what you said was true, I don't see how it's a fundamental obstacle. I didn't even bother to explicitly list arguments that apply only to one specific architecture, because I take for granted that what we're talking about is whether it's possible in principle to build one of these things. If we want to talk about specific architectures, then we can do that too, but no bait-and-switch!

On to argument 7. This is an argument that Dyakonov makes many times in his recent paper. The argument goes that all the systems we have experience with involve very rapid decoherence, and that it therefore isn't plausible to think we could "just" engineer some system unlike any system in nature that we have experience with.

Q: Could we sic the "brains are quantum computers" people on these guys?
Scott: That'll be good... put them in a room together. I hadn't thought of that.

I actually had a less amusing reaction, which is that a nuclear fission reactor is also unlike any naturally occurring system in many ways. What about a spacecraft? Things don't normally use propulsion to escape the earth. We haven't seen anything doing that in nature. Or a classical computer. I don't know if more than that needs to be said.

Q: These arguments are all pretty bad. Aren't there good arguments against quantum computing?
Scott: I keep looking for them. What I'm listing, again, are the arguments that I actually hear most often.
Q: Maybe that's homework for the audience.

Next, there are the people who just take it for granted that quantum mechanics must be an approximate theory that only works for a small number of particles. When you go to a larger number of particles, something else must take over. The trouble is, there have been experiments that have tested quantum mechanics with fairly large numbers of particles, like the Zeilinger group's experiment with buckyballs. There have also been SQUID experiments that have prepared the "Schrödinger cat state" |0...0⟩ + |1...1⟩ on n qubits where, depending on what you want to count as a degree of freedom, n is as large as several billion.

Again, though, the fundamental point is that discovering a breakdown of QM would be the most exciting possible outcome of trying to build a quantum computer. And, how else are you going to discover that, but by investigating these things experimentally and seeing what happens? Astonishingly, I meet people (especially computer scientists) who ask me, "what, you're going to expect a Nobel Prize if your quantum computer doesn't work?" To them, it's just so obvious that a quantum computer isn't going to work that it isn't even interesting.

Q: Would this be a credible objection if it offered a reason why?
Scott: Yes.

Some people will say "no, no, I want to make a separate argument. I don't believe that quantum mechanics is going to break down, but even if it doesn't, quantum computing could still be fundamentally impossible, because there's just too much decoherence in the world." These people are claiming that decoherence is a fundamental problem. That is, that the error will always be worse than the fault-tolerance threshold, or maybe that some graviton will always come through and decohere your quantum computer.

Q: In all fairness, the people making this argument believe that we'll never get the error down to the threshold, but we believe the opposite, only because of faith. These are both arguments of faith, and our counter-argument isn't that much more sound than the original.
Response from the floor: It's much easier to posit that the minimum possible error is zero than that it is, say, 2^-2. They're just saying "it's impossible, so there's no point in even trying to investigate."
Q: I guess there's a fallacy there, but then again, we base some of our complexity theoretic assumptions on faith as well. We assume that anything with a polynomial-time algorithm is "efficient."
Scott: Shor's Algorithm, fortunately, isn't just polynomial, but roughly n^2. I love how, especially when you're discussing this on the Internet, people love to raise this issue as if it's something that no complexity theorist has ever thought of. Gosh, it never occurred to me that, even if it's polynomial, it could be n^500. People love to lecture me about this.
Q: What's the worst best-known polynomial-time algorithm?
Scott: That's an actual question! Well, we could always consider the problem: given a Turing machine, does it halt after n^Ackermann(10,000) steps? But you could ask, what's the largest polynomial runtime that has ever occurred in practice? Again, are we going to count cases where the time is like n^(1/ε^2), where ε is the error with which we want the algorithm to succeed? There are many such algorithms, where the exponent involves a parameter you can vary, and so you can always get an exponent as large as you want by demanding a small enough error. If we exclude those, there are also cases where whole sequences of reductions have been composed with one another. For example, I talked a while ago about the proof from Håstad et al. that you can get a pseudorandom number generator from any one-way function. The whole sequence of reductions involves that kind of blowup, and you get something like n^40.
Q: And there's no known way to improve on that?
Scott: There might be a way; I don't know the latest. Often when there are these large polynomial running times, people find clever ways to bring them down. Hopefully the same will be true with the fault-tolerance threshold in quantum computing.

The next argument is a little more subtle: for a classical computer, we don't have to go through all this effort; fault-tolerance comes naturally. You have some voltage that either is less than a lower threshold or greater than an upper threshold, and that gives us two easily distinguishable states that we can identify as 0 and 1. In modern microprocessors, for example, they don't even bother to build in much redundancy and fault-tolerance, because the components are just so reliable that such safeguards aren't needed. The argument then concedes that you can, in principle, do universal quantum computing by building in all this error-correction machinery, but holds that the very need for such machinery should raise a red flag. Shouldn't it make you suspicious?

Anyone want to give this one a try?

A: The only reason we don't need fault-tolerance machinery for classical computers is that the components are so reliable, but we haven't been able to build reliable quantum computer components yet. Presumably, if we could build extremely reliable components, we wouldn't need error correction and fault-tolerance technology.
Scott: Yes, that's what I would say. In the early days of classical computing, it wasn't at all clear that reliable components would exist. Von Neumann actually proved a classical analogue of the Threshold Theorem; only later was it found that we didn't need it. He did this to answer skeptics who said there was always going to be something making a nest in your JOHNNIAC, insects would always fly into the machine, and these things would impose a physical limit on classical computation. It sort of feels like history's repeating itself.

We can already see hints of how things might eventually turn out. People are currently looking at proposals, such as non-abelian anyons, in which the quantum computer is "naturally fault-tolerant," since the only physical processes that can cause errors are those that wrap around the computer with a nontrivial topology. These proposals show that it's conceivable we'll someday be able to build quantum computers with the same kind of "natural" error correction that classical computers enjoy.


I wanted to have a round number of arguments, but I wound up with eleven. So, Argument 11 comes from people who understand the Fault-Tolerance Theorem, but who take issue with its assumption that the errors are independent. This argument posits that it's ridiculous to suppose the errors are uncorrelated, or even only weakly correlated, from one qubit to the next. Instead, the claim is that the errors are correlated, albeit in some very complicated way. To understand this argument, you have to adopt the skeptics' mindset: to them, this isn't an engineering issue; it's given a priori that quantum computing is not going to work. The only question is how the errors must be correlated so that quantum computing won't work.

My favorite response to this argument comes from Daniel Gottesman, who was arguing about it with Levin -- who of course believes that the errors will be correlated in some conspiracy that defies imagination. Gottesman said: supposing the errors were correlated in such a diabolical fashion, and Nature went to so much work to kill off quantum computation, why couldn't you turn that around and use whatever diabolical process Nature employs to get access to even more computational power? Maybe you could even solve NP-complete problems. It seems like Nature would have to expend enormous amounts of effort just to correlate qubits so as to kill quantum computation.

Q: Not only would your errors have to be correlated in some diabolical way, they'd have to be correlated in some unpredictable diabolical way. Otherwise, you could deal with the problem in general.

To summarize, I think that arguing with skeptics is not only amusing but extremely useful. It could be that quantum computing is impossible for some fundamental reason. So far, though, I haven't seen an argument that's engaged me in a really nontrivial way. That's what I'm still waiting for. People are objecting to this or to that, but they aren't coming up with some alternative picture of the world in which quantum computing wouldn't be possible. That's what's missing for me, and what I keep looking for in skeptical arguments and not finding.

Q: What about the argument that quantum mechanics breaks down with so many particles?
Scott: Even then, they usually don't give an actual theory in which it would fall apart. They just say "it will fall apart."
Q: I was curious as to how you'd respond to a generic argument that tends to come from people outside of a field when discussing a possible invention: "Well, maybe X is useful, but until you build one, it doesn't seem like a good investment." I've heard this about quantum computing, fusion, etc.
Scott: Well, we know nuclear fusion is possible — the Sun does it!
Q: Sometimes, the reason why not is a fundamental problem, other times it's just an engineering problem. You do hear the same argument in other contexts.
Scott: Well, now it just boils down to a question of "what are you interested in?" These same people who say that it's not practical yet, and that we should go back to more practical work, will go and do something like (if they're theoretical computer scientists) improve an n log n algorithm to an n log n / log log log n algorithm. Very little of what we do in theoretical computer science is directly connected to a practical application. That's just not what we're trying to do. Of course, what we do has applications, but indirectly. We're trying to understand computation. If you take that as our goal, then it seems clear that starting from the best physical theories we have is a valuable activity. If you want to ask a different question, such as what we can do in the next five to ten years, then that's fine. Just make it clear that's what you're doing. Again, what annoys me are people who say that they're talking about what's possible even in principle, but then switch to talking about what's possible in the next few years.

I'll close with a question that you should think about before the next lecture. If we see 500 crows, which are all black, should we expect that the 501st crow we see will also be black? If so, why? Why would seeing 500 black crows give you any grounds whatsoever to draw such a conclusion?

