What’s taking so long, Mr. Babbage?

Recently a journalist asked me why we don’t yet have quantum computers.  Since I get that question, in some form, at least 300 times per day, I thought it might be worthwhile to collect my thoughts about it in one place.  The essay below doesn’t say anything that I and others in the field haven’t said many times before, so hardcore Shtetl-Optimized fans should probably skip it.  (Don’t worry, I’ll let y’all know when I have something new to say and am reckless enough to say it.)


When people ask me why we don’t yet have quantum computers, my first response is to imagine someone asking Charles Babbage in the 1820s: “so, when are we going to get these scalable classical computers?  by 1830? or maybe 1840?”  In that case, we know that it took more than a century for the technology to catch up with the theory (and in particular, for the transistor to be invented).  More generally, we have lots of precedents for a technology being imaginable decades or even centuries before it became technologically feasible—heavier-than-air flight is another example.  So there’s nothing weird or anomalous about our current situation.

The central technological obstacle to building a scalable quantum computer is well-known: decoherence, or unwanted interaction between the computer and its external environment.  When information about a quantum state leaks into the outside world—by any means whatsoever, intended or not—the state loses its “quantumness” and reverts to being classical.  So to do a quantum computation, it’s necessary to keep the qubits (atomic nuclei, photons, or whatever else they are) almost fanatically isolated from their environment.  But at the same time, you also need to manipulate the qubits, move them around, etc., in such a way as to carry out the computation.  Those twin requirements are the reason why the most famous ‘success’ of practical quantum computing to date was factoring 15 into 3×5.
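To make that loss of “quantumness” concrete, here’s a toy numerical sketch, in Python, of a single qubit undergoing pure dephasing (the rate gamma is an arbitrary illustrative parameter): the off-diagonal entries of the density matrix, which encode the superposition, decay away, while the classical populations survive.

```python
import numpy as np

# A qubit in the superposition (|0> + |1>)/sqrt(2), written as a
# density matrix. The off-diagonal entries are the "quantumness"
# that decoherence destroys.
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)

gamma = 0.1  # illustrative dephasing rate per time step (arbitrary units)

for step in range(1, 31):
    # Pure dephasing: the populations (diagonal) are untouched, while
    # the coherences (off-diagonal) shrink by exp(-gamma) each step.
    rho[0, 1] *= np.exp(-gamma)
    rho[1, 0] *= np.exp(-gamma)
    if step % 10 == 0:
        print(f"step {step:2d}: coherence |rho01| = {abs(rho[0, 1]):.4f}")

# rho approaches diag(0.5, 0.5): a fair classical coin, useless for
# the interference effects that quantum algorithms rely on.
```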

Indeed, the problem of decoherence is so severe that you might even conjecture that building a quantum computer is impossible in principle.  However, by now people have thought pretty hard about that possibility, and have failed to find any fundamental obstacle to quantum computing.  Indeed, the starting assumption that quantum computing “must be impossible” hasn’t led to a research program with any real successes.

On the positive side, by contrast, in the 1990s computer scientists and physicists developed the beautiful theory of quantum fault-tolerance, which shows that, as long as you can get the decoherence rate below a certain finite threshold (which current estimates put at somewhere around a 10^-3 probability of failure per qubit per gate time), you can fix the errors caused by decoherence faster than you introduce new ones, and thereby perform an arbitrarily long quantum computation.  (I like to think of this fault-tolerance threshold as analogous to the critical mass for an atomic bomb!)  So, today there are large research efforts in two directions: on the experimental side, to push down the decoherence rate, and on the theoretical side, to find error-correction schemes that can cope with higher decoherence rates.  Hopefully those efforts will “meet in the middle” someday, and then we’ll have quantum computers!
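To see what being “below threshold” buys you, here’s a toy sketch of the standard recursion for concatenated codes: if one level of encoding maps a physical error rate p to roughly c·p², then below the threshold 1/c each additional level of concatenation suppresses errors doubly exponentially, while above it the errors blow up. (The constant c here is purely illustrative, chosen to match the ~10^-3 figure above.)

```python
# Toy model of concatenated quantum error-correction: one level of
# encoding maps error rate p to roughly c * p**2, so the threshold
# is p_th = 1/c. The constant c = 1e3 is illustrative only.
c = 1e3

def logical_error_rate(p, levels):
    for _ in range(levels):
        p = min(1.0, c * p * p)
    return p

for p0 in (5e-4, 2e-3):  # one rate below threshold, one above
    trail = " -> ".join(f"{logical_error_rate(p0, k):.1e}" for k in range(5))
    print(f"p = {p0:.0e}: {trail}")
# Below threshold the logical error rate plummets with each level;
# above threshold it explodes to 1. That cliff is the "critical mass."
```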

However, long before we see general-purpose quantum computers, I predict that we’ll see a proliferation of special-purpose quantum simulators—basically, quantum computers that are tailored to some specific physics, chemistry, or optimization problem.  Indeed, arguably we already have that today, in that we have plenty of many-particle entangled quantum systems (such as high-temperature superconductors and quark-gluon plasmas) that we don’t know how to simulate efficiently on a classical computer.  You could think of all these systems as “quantum computers” that compute their own time evolution!  From this perspective, the challenge is “merely” to get a programmable quantum computer, one that can solve a problem of our choosing (like factoring a huge composite number).  But I can easily imagine gradual steps in that direction, starting from what we already have, over the next couple of decades.
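The brute-force obstacle to simulating such systems classically is simple counting: a generic entangled state of n qubits requires keeping track of 2^n complex amplitudes. A short sketch of how fast that wall arrives:

```python
# Memory needed to store a generic n-qubit state vector, at one
# 16-byte complex amplitude per basis state.
for n in (10, 30, 50, 300):
    nbytes = 16 * 2 ** n
    print(f"n = {n:3d} qubits: 2^{n} amplitudes ~ {nbytes:.2e} bytes")
# n = 50 is already ~18 petabytes; n = 300 exceeds the number of atoms
# in the observable universe. Hence many-particle quantum systems
# "compute their own time evolution" faster than we can simulate it.
```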

Finally, maybe it’s worth mentioning that there have already been some significant spinoffs of quantum computing in classical computer science (see here for a beautiful survey of them).  Also, as “ordinary” transistors get closer and closer to the atomic scale, I predict that ideas from quantum information science will be increasingly relevant, if only to suppress the quantum effects that the chip designers don’t want!

50 Responses to “What’s taking so long, Mr. Babbage?”

  1. C.Babbage Says:

    I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question.

  2. rrtucci Says:

    Since I’m a programmer, and you are a new version of Mr. Babbage, I think I’ll call you Mr. Cabbage.

  3. tomslee Says:

    To play the devil’s advocate, asking for a proof that ‘quantum computing “must be impossible”’ is setting a pretty high bar for sceptics. Given the lack of Hamiltonians in the quantum computing formalism, isn’t that asking for a proof that no physical system can exist with a certain set of properties? By the same logic, I have a ladder to the moon I’d like to see funded. Plus, classical systems that computed their own time evolution were around long before Mr. Babbage, and if you have to go to spinoffs to justify a field, well let’s just say I don’t think that’s a useful place to go.

    I recognize you may be right, and that there will be a happy meeting in the middle as you suggest. And I hope so, and can’t prove that it won’t. I just remain sceptical (and ignorant).

  4. Hugh S. Myers Says:

    As both IBM and a British effort have shown, the only holdup in Babbage’s case was funding. With enough cash to continue manufacturing, both engines would have worked and the world would have been very different. The details escape me, but ironically Parliament paid for foreign entrepreneurs to provide the tide tables that were the excuse for the project—Swedes I think? Not that any of this has much to do with your excellent article 😉

  5. Scott Says:

    tomslee:

    To play the devil’s advocate, asking for a proof that ‘quantum computing “must be impossible”’ is setting a pretty high bar for sceptics.

    It’s a fair point, but two quick responses:

    1. I don’t actually set the bar that high—I’d be happy to see any interesting new science come out of the effort to show QC impossible.

    2. Many of the best-known QC skeptics—such as Gerard ‘t Hooft, Stephen Wolfram, Leonid Levin, and Oded Goldreich—do firmly believe QC is impossible for fundamental reasons. And as for the skeptics who think QCs aren’t impossible to build, but merely extremely hard—well, I don’t really have a quarrel with them! 🙂

  6. Job Says:

    So, is the most qubits we’ve had in an actual working QC like 3, 4? Whatever it is, what’s preventing us from adding one more? Is there some sort of a barrier? Why have we been able to build a QC with N qubits but not N + 1?

  7. Scott Says:

    Job: No, there’s no (known) fundamental barrier to adding more qubits. Mostly, there are just huge engineering difficulties in keeping a QC coherent for any appreciable length of time—which means there isn’t any point to adding more qubits, if to take advantage of those qubits you’d need to do a longer computation, and the QC will decohere long before the computation is finished.

    There are also difficulties that arise directly from a large number of qubits—for example, you’d like to be able to address each qubit individually, and that obviously gets harder as the number of qubits grows. But as far as anyone can see, that’s an engineering difficulty, not a fundamental one.

    Depending on how you count, you could say that QC has been demonstrated with 4, 7, or 12 qubits (and no doubt other numbers as well). Shor’s period-finding algorithm was implemented coherently on 4 qubits, in both ion-trap and photonic architectures. It’s also been implemented in liquid NMR (i.e., incoherently) on 7 qubits. And a GHZ state was demonstrated in liquid NMR on 12 qubits.

    It’s funny: when I talk to people outside the field, their first question is always, “so how many qubits have you gotten so far?” And I admit to being pretty curious about that number as well! But then when I talk to the people doing the actual experiments, they don’t seem the slightest bit interested in the number of qubits. To them, it’s self-evident that the real problem is to get one or two qubits that really work (i.e., that you can keep coherent and do operations on for an arbitrary length of time). Once you had that, you could “easily” scale up to a thousand or a million such qubits.

  8. SteveM Says:

    Scott, I think you are not keeping up with developments in the field. We had a workshop at Caltech last week where this paper (http://arxiv.org/abs/1004.1628) and a series of related results on testing the quantum properties of this type of circuit were shown. IMO the opinions you’re providing are really out of touch with the actual state of the field. Also, the “it’s self-evident that the real problem…” line is a huge giveaway that you don’t really get the fundamental problems in this field. Part of the beauty of the work we were shown is that these guys have already solved all of the problems we haven’t even started thinking about yet.

  9. Jonathan Vos Post Says:

    Wonderful survey! Because they’ve been right about many other controversies before, I do not automatically discount QC skeptics such as Gerard ‘t Hooft and Stephen Wolfram. I feel that you put the debate on a clear theoretical basis, which gets away from all the noise in the popular press that tends to drown out the signal.

    I’m sorry that I missed the D-Wave event at Caltech a week or so ago. The assembled Caltech QC and Physics experts grilled the usual suspects. But that’s another story.

    I just hotlinked to this great new thread of yours from Facebook, as many of my 900 Facebook friends keep asking the same question, and I send them to the real experts, starting with you.

  10. Scott Says:

    SteveM: You can disagree with my views about D-Wave, but phrases like “out of touch” and “not keeping up” amount to an accusation that I’m unaware of their claims, an accusation that seems hard to support.

  11. Joe Fitzsimons Says:

    Hi Scott,

    I’m not sure I would agree 100% with you about decoherence being the limiting factor. In many cases the problem seems to have as much or more to do with either cooling or coupling qubits within the system. This is essentially the reason for segmented ion traps and optical entangling schemes. Though these are both certainly noise-related problems, I’m not sure I would classify them as decoherence.

  12. Raoul Ohio Says:

    I am giving 4 to 1 odds on bets of $1 that there will NOT be a working QC by 2100.
    A reasonable if arbitrary definition of a working computer is one that can be programmed to solve the problem: given a positive integer n representable as an int in the C programming language (n less than 2^31), return the n-th Fibonacci number.

    Does anyone have a good definition as to how to determine if a computer is truly “quantum”? If so, I might raise the odds to 5 to 1!
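    (An aside on the Fibonacci benchmark above: for n near 2^31, the n-th Fibonacci number itself has roughly 4.5×10^8 digits, so even the classical version of the test mostly exercises arbitrary-precision arithmetic. A standard fast-doubling sketch in Python, fine for modest n:)

    ```python
    def fib(n: int) -> int:
        """n-th Fibonacci number by fast doubling: O(log n) big-int multiplies.

        Uses F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)**2 + F(k+1)**2.
        """
        def doubling(k):
            if k == 0:
                return (0, 1)           # (F(0), F(1))
            a, b = doubling(k >> 1)     # (F(m), F(m+1)) with m = k // 2
            c = a * (2 * b - a)         # F(2m)
            d = a * a + b * b           # F(2m+1)
            return (d, c + d) if k & 1 else (c, d)
        return doubling(n)[0]

    print(fib(10))                  # 55
    print(len(str(fib(100000))))    # 20899 digits, computed in milliseconds
    # Near n = 2^31 the answer has ~450 million digits, so the real cost
    # is big-integer multiplication, not the number of doubling steps.
    ```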

  13. Sandra T. Says:

    “To them, it’s self-evident that the real problem is to get one or two qubits that really work (i.e., that you can keep coherent and do operations on for an arbitrary length of time). Once you had that, you could “easily” scale up to a thousand or a million such qubits.”

    There is no way we will have qubits that can be kept coherent for an arbitrary length of time, while we apply operations, unless they are encoded qubits. Also, the physicists are not “the slightest bit interested in the number of qubits” because they are PHYSICISTS. The most interesting and accessible physics is in these small systems. They also know that you have to walk before you can fly, so they focus on the short-term problem of manipulating one- and two-qubit systems. Scaling up is not easy at all and would be a major engineering challenge.

  14. Yatima Says:

    @Raoul Ohio

    “I am giving 4 to 1 odds on bets of $1 that there will NOT be a working QC by 2100.”

    What about using real money? Do you intend to stick around that long?

    “A reasonable if arbitrary definition of a working computer…”

    You are off-base and need to go back to computing basics 101. Analog computers _are_ computers yet they don’t manipulate symbols to compute Fibonacci numbers. Turing machines are not the be-all and end-all.

    And yes, there is a good definition of a QC: something that solves BQP-complete problems in polynomial time.

  15. Sim Says:

    Scott: is it possible that there is a fundamental reason for which QC will not work AND that this fundamental reason will not provide us with MORE computing power than QC alone would?

    My understanding is that if something limits entanglement size, then Alice and Bob can communicate faster than light, which makes P=PSPACE, plus a couple of candies such as free energy and time travel. That’s the reason why I personally find it hard to believe QC won’t work… but maybe my understanding is bad?

    (BTW, please check previous post: I still have one reply awaiting moderation, to be separated from the crackpottery)

  16. Raoul Ohio Says:

    Yatima,

    What constitutes a “good definition” is context dependent. “Solves BQP-complete problems in polynomial time” might be good for proving theorems, but perhaps not for whoever decides whether I owe you $4 or not.

    And, of course, I expect to be collecting bets at 155 or so; didn’t you hear about the singularity?

  17. Kaus Hackula Says:

    Just curious, how many professors of scalable classical computing were there in the 1820s, and how many PhDs per year were being produced in the field, etc.? I would guesstimate that, if measured in terms of appropriately weighted person-years of research, we’ve already blown through the 1930s-or-so equivalent, maybe more.

  18. Scott Says:

    Kaus Hackula: Well, the number of researchers in all fields has been increasing exponentially, and on that basis, you might hope that the time to new technologies would decrease exponentially. But the problems we’re trying to solve are also harder—and we also have the enormous disadvantage that anything we build has to not merely work, but be better at something than any of the vast number of technologies that have come before. I agree that it’s hard to say which of these factors predominates in the case of QC. My point was just that, conditioned on QC being possible, there’s no particular reason to expect it to be possible within (say) 10-20 years.

  19. Scott Says:

    Sim:

    is it possible that there is a fundamental reason for which QC will not work AND that this fundamental reason will not provide us with MORE computing power than QC alone would?

    Yes, absolutely! This is a point I’ve often made when debating QC skeptics: they typically assume without argument that, if quantum mechanics fails, then it will fail in such a way that QC becomes impossible. But at a purely mathematical level, it seems much easier to change quantum mechanics in ways that would give you more computational power (say, #P or PSPACE), than in ways that would bring you back down to P or BPP!

    As I’ve said before, I find it implausible that Nature could solve (say) NP-complete problems in polynomial time, and it’s for that reason that my money is on quantum mechanics being correct.
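    (To see the flavor of that asymmetry, here is a deliberately cartoonish sketch of why even a little nonlinearity would make brute-force search easy. It has the flavor of Abrams and Lloyd’s nonlinear-QM argument, but is emphatically not their actual construction:)

    ```python
    # Cartoon: searching N = 2^n items, the "marked" component of the
    # state starts with weight 1/N. Linear quantum mechanics can only
    # grow that weight slowly (Grover needs ~sqrt(N) queries), but a
    # hypothetical nonlinear gate that doubles small weights would need
    # only ~n steps. (Flavor of Abrams-Lloyd 1998, not their construction.)
    n = 50
    weight = 2.0 ** -n   # initial weight of the marked component

    steps = 0
    while weight < 0.5:
        weight = min(1.0, 2 * weight)  # hypothetical nonlinear gate
        steps += 1

    print(f"n = {n}: marked item visible after {steps} nonlinear steps")
    # -> 49 steps, versus ~2^25 Grover queries in ordinary quantum mechanics.
    ```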

    (BTW, please check previous post: I still have one reply awaiting moderation, to be separated from the crackpottery)

    Sorry about that! It’s now moderated.

  20. Jim Says:

    Doesn’t the recent discovery that plants make use of quantum random walks for efficient photosynthesis count as a sort of room-temperature quantum computation? And if so, isn’t this cause for optimism that we can similarly make use of quantum walks for computation?

  21. Sean Barrett Says:

    It’s worth noting that the best current estimates for the fault-tolerance threshold, based on numerical estimates (by Knill, by Raussendorf and Harrington, and others), are around ~0.01. If one also has additional information about the location of the errors, the threshold is even higher, more like ~0.1 or above (this has been shown by Knill, also by Varnava and coworkers, and also in my recent paper on this at http://arxiv.org/abs/1005.2456).

    These estimates are not too far away from what has been achieved in recent experiments. But as Joe pointed out, there may be other issues in scaling these experiments to more qubits.

  22. Sean Barrett Says:

    As well as the two approaches that Scott identifies, there’s also another very active area of experimental and theoretical research in quantum computing which might be analogous to the development of digital classical computer technology: the efforts to find (or construct) physical systems with ‘natural’ error-correcting properties. These include various superconducting approaches, and topological quantum computing (anyons and all that).

  23. Jay Says:

    Shouldn’t journalists run this stuff by you (or someone like you) before publishing it?

    http://www.telegraph.co.uk/technology/news/7759347/Scientists-create-worlds-smallest-electronic-switch.html

  24. Dave Says:

    Thanks for the post Scott! Can you please just explain the difference between the QC you are describing here and what D-Wave is trying to do, i.e., adiabatic quantum computation?

  25. Sim Says:

    Scott: (BTW: thx) Is there a list of the changes to the rules of QM that have been proven to increase computational power? I think you’ve already proven the case for Bohm’s ideas and closed timelike curves. Are there other things we can add to the list?

    Dave: c’mon, this is boring. Why not ask for another “Shor, I’ll do it” instead? Anyway… you may be interested in reading

    https://scottaaronson-production.mystagingwebsite.com/?cat=20

  26. John Sidles Says:

    However, by now people have thought pretty hard about that possibility, and have failed to find any fundamental obstacle to quantum computing.

    Scott, to turn that assertion into a question: “What is the strongest evidence that the state-space of quantum mechanics is a Hilbert space?”

    In particular, what is the strongest such argument that you have set forth? Would it be your STOC 2004 article “Multilinear Formulas and Skepticism of Quantum Computing?”

    I am trying to put together a bibliography of articles on this subject … but it is hard to find arguments other than those that put up strawmen solely for the purpose of knocking them down.

    I have to say that both the experimental and theoretical evidence seem surprisingly weak to me … but maybe there are strong arguments out there that I don’t know about?

    For me, it’s been very impressive that our empirical ability to simulate real-world quantum systems improves hugely year-by-year … and yet (examined closely) our simulation codes seldom exploit the global linear structure of quantum mechanics … but only the local symplectic structure … and even then, only in a surprisingly small number of dimensions.

    Historically, physicists have often mistakenly extrapolated local structure to global structure … largely for reasons of convenience in computation and theorem-proving … maybe we’ve been making this mistake again in QM?

  27. Scott Says:

    Can you please just explain the difference between the QC you are describing here and what D-Wave is trying to do, i.e., adiabatic quantum computation?

    In principle, adiabatic QCs are equivalent in their computational power to ordinary QCs. So from that perspective, adiabatic QC (using, say, superconducting qubits) is one more possible route to implementation, alongside ion traps, photonics, etc. Its advantages and disadvantages compared to the other implementation approaches (in terms of fault-tolerance) aren’t really understood yet—getting a better understanding of adiabatic fault-tolerance is a major open problem right now.

    The biggest difference between the “mainstream” approaches to QC and what D-Wave wants to do is related to noise. As I said before, the academics in experimental QC are mostly focused on getting one or two qubits that actually, demonstrably work—until they have that, they see little point in scaling up to dozens or hundreds of qubits. D-Wave, by contrast, essentially throws together lots of extremely noisy qubits and hopes something interesting will happen. Since the algorithms they’re running “degrade gracefully” into classical algorithms, and since they’re not believed to provide exponential speedups generically (but probably at best polynomial ones), and since the instance sizes are still extremely small, a major problem is then how to prove that quantum coherence actually played any role in the computation—in other words, that they weren’t just running an extremely noisy classical computer with a memory consisting of ~28 bits. Suffice it to say that some of us have been … frustrated with D-Wave’s responses on this central point (if you really want to know more, check out my 200,000 previous D-Wave posts!). I used to get pretty worked up and emotional about this issue, but I’m past that. In any case, the burden of proof is entirely on D-Wave—it’s not on the academics to prove that D-Wave hasn’t done this or that.

  28. SteveM Says:

    If by “actually, demonstrably work” you mean qubits that are actually used in the course of performing a computation, then there’s a legitimate argument that D-Wave has done more than anyone else to show this. The gate-model QC community falls down on this point. It doesn’t appear that they are just throwing qubits together. It looks more like they have a reasonable and well-thought-out plan to build something simple and useful first.

    I keep hearing this “extremely noisy qubits” thing; where is the source of it? It doesn’t appear to be true. They published 1/f and integrated spectral noise densities on their qubits in PRB and PRL, and they are comparable to the best results from academic groups. And those other results are from isolated qubits not connected to all the gizmos you need to make the circuit scalable.

  29. Mr. Ingr. Joe Mizzoni Stevens Nakayama the III Says:

    Would Gentry’s homomorphic encryption scheme shed any light on a way to reduce decoherence errors?

  30. Scott Says:

    Jay:

    Shouldn’t journalists run this stuff by you (or someone like you) before publishing it?

    No, they should run it by someone unlike me—someone who’s actually an expert in the relevant experimental areas!

  31. Scott Says:

    Sim:

    Is there a list of the changes to the rules of QM that have been proven to increase computational power?

    For a partial list, see my NP-complete Problems and Physical Reality survey.

  32. Scott Says:

    Would Gentry’s homomorphic encryption scheme shed any light on a way to reduce decoherence errors?

    I haven’t a clue! As Umesh Vazirani pointed out to me, there’s a tantalizing analogy between Gentry’s scheme and hierarchical quantum fault-tolerance schemes—but I don’t know whether it goes any further than that.

  33. John Sidles Says:

    The following historical reflection is suggested by Abrams and Lloyd 1998, “Nonlinear quantum mechanics implies polynomial-time solution for NP-complete and #P problems”.

    Suppose in the 19th century there had appeared a far-sighted article titled “Nonlinear Newtonian mechanics implies dynamical evolution toward thermodynamic disequilibrium.”

    We imagine that this 19th century article (correctly) noted that nonlinear generalizations of Newton’s law generically lead to dynamical flows that violate Liouville’s theorem (not to mention Hamiltonian conservation).

    Then the article might further (and correctly) note that Euclid’s parallel postulate is necessary to the global linearity of Newton’s law … and draw the remarkable conclusion that Euclid’s parallel postulate is true on fundamental thermodynamic grounds.

    These 19th century authors might reasonably summarize their conclusions as follows: “We would like to note that we believe that Newtonian mechanics is in all likelihood exactly linear, and that the above conclusions might be viewed most profitably as further evidence that this is indeed the case”

    This is simply the Abrams/Lloyd conclusion, with “quantum” replaced by “Newtonian”.

    Hmmm … so maybe it’s fortunate that 19th century physicists did not have access to 20th century ideas of quantum information theory?

  34. Sim Says:

    John: thx, that’s interesting. I personally try to keep an open mind on the question of whether hypercomputing is possible or not, even if it would break all of the most fundamental physical principles we have. However, the point was to answer skeptics of QC, who (sociologically speaking) tend also to be skeptical of hypercomputing and strong AI.

    I’m more interested in the latter question, and I found that one of the most powerful arguments is: “OK, let’s imagine that the human mind is not computable, i.e., that strong AI is false. Do you realise you’re saying that hypercomputation is not only possible, but frequent? How is it that life limits hypercomputation to making the brain uncomputable, and does not exploit it as an unlimited source of energy?” There is no demonstration of anything in saying that, but the burden of proof falls on the most extraordinary assumption. Usually AI skeptics believe that strong AI is an extraordinary claim. No sir! Strong AI is far more boring than you think.

    Scott, thx for the link. I was surprised that nonlinear QM, although likely to give us more computational power, is actually not proven to do so, since “if we allow error, as any physically reasonable model of computation must (…) the standard quantum error-correction theorems break down, since just as a tiny probability of success can be magnified exponentially during the course of a computation, so too can a tiny probability of error. Whether this problem can be overcome (…) deserves further investigation.” So it is still reasonable to be skeptical of both QC and strong AI, contrary to what I thought. If in the future you prove, or hear of a proof of, a valid quantum error-correcting code that would work for nonlinear QM, please let us know 🙂

    Anyone: while reading Scott’s paper, I found this “Interestingly, most of the physical proposals for solving NP-complete problems that we will see do not exploit structure, in the sense that they would still work relative to any oracle. Given this observation, I propose the following challenge: find a physical assumption under which NP-complete problems can provably be solved in polynomial time, but only in a non-black-box setting.”

    I’m not sure I understand the challenge. Would anyone be kind enough to explain it in layman’s terms?

  35. John Sidles Says:

    Dear Sim

    I thought your question was *wonderful*, and I’ll take a three-way shot at answering; first with some (reasonably) solid advice, then with some dubious remarks, and finally with a cartoon.

    The solid advice (especially for students) is this: Wikipedia is your friend … and your even better friend is the “Advanced search” option on arxiv.org … which allows for full text searches of arxiv preprints.

    You’re asking about the technical phrase “relative to an oracle” … so let’s look for the very first arxiv preprints to use this now-common term … on the heuristic that these preprints are likely to be written by folks who are smart … and foresighted … and most of all … who have to explain what these terms mean.

    We find two wonderful articles: first Charles H. Bennett, Ethan Bernstein, Gilles Brassard and Umesh Vazirani, Strengths and Weaknesses of Quantum Computing (1997, arXiv:quant-ph/9701001), and five months later, John Preskill, Quantum Computing: Pro and Con (1997, arXiv:quant-ph/9705032) … followed by dozens of articles … Après moi, le déluge.🙂

    Let’s take as our text Preskill’s remark “Perhaps, though, NP-complete problems are not the best context for exploiting the power of quantum computation. It may be that quantum computers are capable of solving some hard problems that are outside NP, and that quantum simulation is an example. Even were an oracle at our command that could solve NP-complete problems, quantum simulation might still require exponential resources on a classical computer; that is, a classical computer would still be unable to simulate a quantum computer efficiently.”

    Hmmm … oh heck … what does Preskill mean? We recall that Wikipedia too is our friend; we find on Oracle Machine page this remark: “The fact that the P = NP question relativizes both ways is taken as evidence that answering this question will be difficult, because any proof technique that relativizes (i.e., is unaffected by the addition of an oracle) will not answer the P = NP question. Most proof techniques we know today do relativize.”

    OK, that’s the end of the (reasonably) solid advice, namely, search the arxiv and consult Wikipedia (duh!).

    Now for the dubious remarks. Inspired by the arxiv preprints, we ask “What do we get if we map, in the Wikipedia quotation … hmmm … (proof) → (computation) … or even (proof) → (simulation)?” In other words, what if we replace abstract oracles by physical oracles? Does it make mathematical sense? Does it make physical sense? Is it what Scott is talking about?

    Obviously, this kind of question is at least fun to try to answer … and note that all we needed to get started … was a blog, the arxiv server, and Wikipedia.

    To put some flesh on these bones, let’s imagine that Alice and Bob are having a relaxed breakfast together:

    Alice: Mmmmm … Bob … I forgot to tell you last night … our lab has discovered that quantum dynamics are slightly nonsymplectic … and yet the state-space of nature is a Hilbert vector space … it all works just as Weinberg described … with the extravagant informatic consequences that Abrams and Lloyd worked out … and the upshot is, we’ve exploited this nonlinearity to build a general-purpose solver for NP-complete problems.

    Bob: Gosh, that’s terrific, Alice … it’s amazing in fact … I’m no expert … but are you sure your lab’s solver is general purpose … that it functions as a physical oracle for any language?

    Alice: (airily) Oh, Bob … you silly … the first thing we did was ask the machine whether a short proof existed of its own correctness or incorrectness … it assured us that no such proof existed either way … and so we stopped worrying about it!

    I hasten to add that the preceding dialog was written purely for fun … these complexity-theoretic ideas take us engineers far outside our talents, training, and interests. In real-life engineering, our devices (so far!) are always so noisy that we can be entirely confident that they are not solving NP-complete problems.

    Thus, it’s purely on the basis of engineering faith, that we can express our ideas in terms of cartoons, rather than proofs.

  36. John Sidles Says:

    Hmmm … no further comments to Scott’s post? Or to the (very interesting) question that Sim asked?

    OK … I’ll try to write a serious response … which admittedly (when I try it) can yield unintentionally comedic effects … so let’s continue the Alice-and-Bob dialog that began in the preceding post.

    Bob: I’m trying to be serious, Alice! Has your lab answered the “Aaronson Challenge”? You know: “Find a physical assumption under which NP-complete problems can provably be solved in polynomial time, but only in a non-black-box setting.”

    Gosh, it seemed to me that the Aaronson Challenge was asking for a Gedanken experiment, with a view toward using physical intuition to discover non-relativizing proof techniques in complexity theory … but I know you’re working on real quantum computing experiments.

    Alice: (smiles) Well, the short answer is “yes” … but only in the sense that we’ve identified a candidate physical assumption … we’re in the middle of doing the experiments to test its physicality!

    Bob: What’s the assumption?

    Alice: It’s actually a decades-old assumption, Bob … that nowadays lots of experimentalists take seriously … it’s the assumption that the physical state-space of quantum mechanics is not a Hilbert space, but rather a tensor network space. These spaces have a “foamy” geometry, so we can (in principle) meet Scott’s challenge by, say, tailoring that geometry to reflect an NP-complete problem.

    As for specifying quantum dynamics and measurement processes on this “foamy” geometry, that’s mathematically easy and computationally efficient … we write the usual Hilbert description in the language of differential forms, pullback the forms, integrate the quantum trajectory, and (strictly for ease of interpretation) pushforward back into a conventional Hilbert space.

    Bob: (laughs) So it’s actually the quantum experimentalists who are taking Walter Thirring’s advice to heart: “The best and latest mathematical methods to appear on the market have been used wherever possible. In doing this, many an old and trusted favorite of the older generation has been forsaken, as I deemed it best not to hand dull and worn-out tools to the next generation.”

    Alice: (raises her eyebrows) Bob, it seems you’ve been reading William Burke’s epic samizdat of mathematical physics, Div, Grad, Curl Are Dead.

    Yes, we experimentalists are using all the classic tools of vector-space quantum mechanics that we learned as graduate students … but nowadays we need an even sharper toolset … because to design our quantum experiments, we have to simulate them … to simulate them efficiently, we need tensor network state-spaces … and to work with tensor network spaces, we need all the latest mathematical methods of differential and algebraic geometry … and even category theory.

    Bob: (leans forward) But Alice … your lab is still focused on building quantum computers, and on understanding quantum complexity … isn’t that right?

    Alice: (looks thoughtful) Bob, back in the 20th century I would have agreed with you … but nowadays I don’t know what to think … my students are repurposing many of the ideas and goals of quantum computing … in ways I’m not sure that I even understand … and that I have very little control over.

    Bob: Can you give me some examples?

    Alice: Well … when I overhear my students arguing at lunch … about half of them have been watching Craig Venter’s videos on TED … these students argue that Venter’s team are really doing quantum systems engineering … but that Venter’s team haven’t awakened to this yet. These students envision repurposing our quantum computing sensors and simulation codes for purposes of doing quantum synthetic biology and materials science, both observationally and in silico.

    Then there’s another group of my students, who take seriously the notion that the quantum state-space of nature is not a Hilbert space; these students are funded to write efficient pullback subroutines for simulating cavity field dynamics in our ion traps, but what the students really imagine they’re working on … is M-theory.

    Bob: (smiles thoughtfully) Perhaps there are even some students who want to do all four: build quantum computers, understand quantum complexity, create good jobs—for themselves and others—as quantum synthetic biologists and entrepreneurs … and finish by discovering what M-theory really is … at least, I hope this kind of optimism is common among our students.

    Alice: (smiles broadly) Bob, you and I have nothing to complain about. Best … century … ever! 🙂

  37. Sim Says:

    Thx for your efforts John. Please let me try to cut the crap.

    As far as I understand, the challenge is to find a physical property that is powerful enough so that P = NP, but weak enough to permit P != NP relative to some oracle.

    One example of what can hardly work is a physical property authorizing hypercomputing. Certainly it’d make P = NP, but this equality would most certainly continue to hold relative to any extra oracle.

    As far as I understand your story, you assert that if the physical state-space of quantum mechanics is a “tensor network space” then P = NP, in a way such that there exists an oracle A such that P^A != NP^A.

    I have to admit this is pretty obscure to me. Can you explain why you think so? Maybe a good place to begin is with a definition of “tensor network space”. Does it have anything to do with tensor network states, or is it something different?

  38. John Sidles Says:

    Dear Sim

    Some of your questions I can answer pretty cleanly, for others the answers are obscure.

    Let’s regard a “tensor network state-space” as a manifold of quantum states M together with a holomorphic map F: M→N, where N is a Hilbert space, and F has the multilinear functional form of a Kronecker product.

    How much “quantum goodness” can we naturally pullback from N to M? Well, because F is multilinear, hence holomorphic, the metric and symplectic structures of N pullback naturally. So too do all the familiar scalar functions of N, in particular its Hamiltonian functions H. A little less obviously, but without too much work, we can naturally pullback the Lindblad dynamics of N (by writing Lindblad processes in terms of Stratonovich stochastic functions, and pulling those back).

    At this point, we haven’t pulled-back all of quantum mechanics—we haven’t pulled-back field theory, for example—but we have pulled back, onto a tensor network manifold, all of the information-theoretic quantum goodness that is embodied in the postulates of Nielsen and Chuang. In particular, we have enough metric and symplectic structure in-hand to do quantum information theory in a mathematically natural way.

    How much computational goodness survives pullback? It’s clear that the answer will depend on the rank r of the map F. Glossing over some technicalities, for r=1 the pulled-back symplectic structure turns out to be the familiar classical symplectic structure of the Bloch/Newton equations. And when r is exponentially large, we recover Hilbert-space dynamics.

    So from both a practical and a fundamental point of view, the interesting case is dim M < dim N, that is, dynamical state-spaces whose dimensionality is larger than classical, but smaller than Hilbert.
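    (One concrete way to see this dimension counting: take M to be a matrix-product-state manifold, one standard tensor network space, with bond dimension r. A quick parameter count in Python, with purely illustrative numbers:)

    ```python
    # Parameter count for a matrix product state (a standard tensor
    # network space): n sites of local dimension d, bond dimension r.
    # The full Hilbert space has d**n dimensions; the MPS manifold has
    # roughly n * d * r**2 parameters.
    n, d = 40, 2  # 40 qubits
    for r in (1, 4, 16, 64):
        params = n * d * r * r
        print(f"bond dimension r = {r:3d}: ~{params:>9,} MPS parameters "
              f"vs 2^{n} ~ {2.0 ** n:.1e} Hilbert dimensions")
    # r = 1 is essentially the classical product-state case; modest r
    # already covers many noisy/thermal systems; recovering all of
    # Hilbert space would require r exponentially large in n.
    ```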

    From a practical engineering point of view, we have preserved all the Hamiltonian and Lindbladian quantum goodness we need to proceed directly to simulating large quantum systems … especially if they are noisy and/or thermal … and this is exactly what many people are doing!

    From a complexity-theory point of view, there are a number of unanswered questions, and this I took to be the point of Scott’s question. These questions are easy to ask … they are all variations on the theme “What is the computational power of Hamiltonian/Lindbladian dynamics on a specified manifold M?”

    Despite being easy and natural to ask, these informatic questions seem to be hard to answer, for at least two reasons. The first reason is that the Kählerian geometry of M now enters as a design parameter. We can well imagine that some state-space geometries might be natural for solving 3SAT, for example, while other geometries might be natural for subset sum. Thus we are proving theorems on a “landscape” of Kählerian geometry that, as string theorists know, is vast indeed.

    The second reason is that many of the most cherished proof technologies of quantum information theory do not survive pullback. Projective measurements, linear superposition, and density matrix methods all are gone; so are (most) spectral theorems. As Abrams and Lloyd note, quantum error correction too does not survive.

    It takes a while to realize that this annihilation may be a good thing. After all, euclidean theorems do not survive pullback onto riemannian manifolds … and yet geometry turns out to be hugely enriched by this pullback. Thus, the pullback from Hilbert space to Kähler manifolds challenges us to create a more broadly natural context for our present-day quantum informatic proof technologies.

    Forder: “The virtue of a logical proof is not that it compels belief but that it suggests doubts. The proof tells us where to concentrate our doubts.” In particular, for engineers, the natural compressive dynamics of Lindblad processes has served to concentrate our doubts that the quantum devices that we build have Hilbert state-spaces.

  39. sim Says:

    John, sorry for the delay I’m quite busy at this time.

    What you’re talking about seems to be a way by which P could equal NP by virtue of some physical properties. OK, why not.

    What I still don’t get is why you think this would ALSO allow P!=NP given some oracle. In other words, why do you think there’s any relationship with Scott’s challenge, which I reported earlier? Do I misunderstand the challenge, or is there as little relationship as I’m beginning to guess?

  40. John Sidles Says:

    sim Says: “What you’re talking about seems to be a way by which P could equal NP by virtue of some physical properties. OK, why not. What I still don’t get is why you think this would ALSO allow P!=NP given some oracle.”

    Hmmm … well, what I was actually after was a fun, interesting and useful response to Scott’s challenge “Find a physical assumption under which NP-complete problems can provably be solved in polynomial time, but only in a non-black-box setting”.

    Implicit in Scott’s challenge (it seems to me) is that our candidate quantum dynamics has to exclude physical oracle machines. Suppose (for example) that Alice claims to have constructed a physical oracle machine that is simply a memory of enormous capacity.

    To put some flesh on these bare bones, suppose that the quantum state-space is Hilbert, and the quantum dynamics is non-linear in the Weinberg/Abrams/Lloyd sense. In this model (AFAICT) there’s no physical obstacle to Alice’s construction of an oracle device (say) 10 cm in diameter, capable of storing the answers to all queries up to (say) 10^4 bits.

    One problem with this picture is that it conflicts with what we know about thermodynamics—Bekenstein’s bound asserts that, to store that much information, our Weinberg/Abrams/Lloyd oracle device would have a minimum mass of 7×10^2967 kilograms. Ouch.
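    (A rough sanity check of that figure, assuming one stored answer bit per query, queries of up to 10^4 bits, and the textbook form of Bekenstein’s bound, takes a few lines of Python:)

    ```python
    import math

    # Bekenstein's bound: I <= 2*pi*R*E / (hbar*c*ln 2) bits, with
    # E = M*c**2, hence M >= I * hbar * ln 2 / (2*pi*R*c).
    # Assumptions: sphere of radius R = 5 cm; one answer bit for each
    # of the ~2^10001 queries of up to 10^4 bits.
    hbar, c, R = 1.0546e-34, 2.998e8, 0.05
    log10_I = 10001 * math.log10(2)   # I ~ 2^10001 bits, kept as log10
    log10_M = log10_I + math.log10(hbar * math.log(2) / (2 * math.pi * R * c))
    print(f"minimum mass ~ 10^{log10_M:.1f} kg")   # ~10^2968 kg
    # Same ballpark as the 7 x 10^2967 kg quoted above; the exact
    # prefactor depends on how one counts the queries.
    ```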

    This helps us appreciate that thermodynamic extravagance can be a precursor to computational extravagance. Conversely, perhaps if we fix the thermodynamic extravagance, then we will fix the computational extravagance too.

    There are (at least) two reasonable strategies for mitigating this thermodynamic extravagance: one is to tune the dynamical nonlinearity; another is to reduce the dimensionality of the state-space.

    A nice overview of the dimension-reducing approach—”nice” in the sense of blending math, physics, and philosophy—is Jeremy Butterfield’s essay “On Symplectic Reduction in Classical Mechanics” (arXiv:physics/0507194). Although Butterfield’s essay doesn’t seem to talk much about QM, to read it as a quantum essay one needs only to keep in mind one of the deeper results of modern quantum information theory, namely, that Lindblad processes are similarly effective in inducing dimension reduction as are the symmetry principles that are the focus of Butterfield’s review.

    If we keep this principle of dimension-reduction in the foreground of our thinking, then much of the perceived distinction between classical and quantum mechanics evaporates.

    This leaves us free to cheerfully embrace Arnol’d’s dictum “Hamiltonian mechanics is geometry in phase space; phase space has the structure of a symplectic manifold”, as being universally applicable, no matter whether the dynamics is classical, quantum, or (as is increasingly common) hybridized.

    Golly, a tribute to Vladimir Arnol’d’s dynamical insights might be a good breakfast topic for Alice, Bob, and Carla! 🙂

  41. sim Says:

    “Implicit in Scott’s challenge (it seems to me) is that our candidate quantum dynamics has to exclude physical oracle machines.”

    Then I don’t think we have the same (mis)understanding of the challenge. Would you mind stating yours?

    For the record, mine is “to find a physical property that is powerful enough so that P = NP, but weak enough to permit P != NP relative to some oracle.”

    With some luck Scott will give a hand on this one too 🙂

  42. John Sidles Says:

    Sim, I take a talmudic point-of-view—`cuz hey, it’s Shtetl Optimized, after all—that the purpose of Scott’s challenge is mainly to inspire folks to think … in which regard, Shtetl Optimized surely ranks among the very best math/science/engineering blogs … and that’s why I don’t scruple to interpret Scott’s challenges pretty broadly.

    To me, Scott’s “physical oracle” challenge, like his “SURE/SHOR separator” challenge, points toward a broader challenge that he lectured upon in his QIP2010 talk “New Evidence That Quantum Mechanics Is Hard to Simulate On Classical Computers”.

    This over-arching challenge might be phrased as: “Carry out any physical experiment whose resulting dataset provably cannot be simulated with PTIME resources (subject to standard assumptions of complexity theory).”

    Even to conceive of such a (wholly unprecedented) class of experiments is an immensely creative act of the imagination … and to actually carry out such an experiment would rank with Galileo’s (legendary?) ball-dropping at the Tower of Pisa.

    I agree with you, Sim, that with some luck, Scott will give us a post (or better, an in-depth manuscript) on this broad topic … which I take to be (to date) the hardest and most fundamental of all the challenges that Scott has ever issued.

  43. Scott Says:

    John: In 4+ years of your leaving comments here, I think #42 is my all-time favorite. 🙂 As for the “in-depth manuscript”, Alex Arkhipov and I have been writing it for the past year—hopefully we’ll be done in a couple months.

  44. sim Says:

    “the purpose of Scott’s challenge is mainly to inspire folks to think ”

    Well, I’m not sure we can put that to Scott’s credit, but obviously you think a lot. Pardon me if I wait for the paper for now. 🙂

  45. Hyperstig Says:

    I have completed The GUT Theory.

    You can find the equations here:

    http://www.wix.com/Hyperstig/Hyperstig

    0 is a wave function that collapses and re-expands.

    I have invented the 1st quantum computer.

    I am the failsafe.

    You can follow me on Twitter:

    https://twitter.com/Hyperstigzero

  46. John Sidles Says:

    Scott says: John: In 4+ years of your leaving comments here, I think #42 is my all-time favorite.

    Thanks Scott … definitely I am a huge fan of the line of research that you and Alex Arkhipov are pursuing & will be reading your article with great interest, for reasons that I posted about on Dick Lipton’s blog.

    Very possibly, the reasons that you and Alex have for pursuing your research, are considerably different from the reasons that I (and other readers) have for reading the resulting articles … and that ineradicable tension between authors and readers is IMHO one of the most *wonderful* things about *all* forms of literature.

    So if you and Alex are presently having as much fun writing-up this work, as I (and your other colleagues) intend to have reading about it, then for sure, we’ll all be having plenty of fun this summer. Good! 🙂

  47. John Sidles Says:

    I continued this topic over on Dave Bacon’s blog, in a comment that was mainly focused on Dave’s (excellent IMHO) preprint with Isaac Crosson and Kenneth Brown, Making Classical Ground State Spin Computing Fault-Tolerant (arXiv:1006.4388).

    But that Bacon-blog comment was also written with a view toward the coming Aaronson/Arkhipov article. Specifically, the comment links the Bacon/Crosson/Brown preprint to the point-of-view (originated by Ashtekar and Schilling) that Hilbert space can be regarded as a mathematically convenient large-rank “dimension-drowning” approximation, and therefore, that Hilbert space is not the True State-Space of Nature.

    Now, I will cheerfully stipulate that the phrase “dimension-drowning approximation,” with reference to Hilbert space, was conceived as a cheerful provocation, with a view toward countering the similarly provocative phrase “quantum computing skeptic.”

    To address these points more soberly—but hey, what fun is that?—the following three elements (IMHO) are none-of-them particularly controversial, but are each-of-them very exciting: (1) these ideas are subject in principle to experimental test; (2) designing these experiments poses a considerable challenge to both mathematicians and scientists, and (3) analyzing the data will be a considerable challenge too.

    It is with regard to this third point that (AFAICT) Scott and Alex have been thinking deeply and seriously, and I think everyone appreciates the toughness of these issues. That is why I am so interested in their findings, and also, so very grateful (along with many QIS colleagues) for Scott’s and Alex’s willingness to tackle them.

  48. John Sidles Says:

    For that subset of humanity—possibly a vanishingly small subset—that is interested in the mathematical history of geometric mechanics (both classical and quantum), I have posted some of the early 19th century history on Ian Durham’s blog Quantum Moxie, and some of the early 21st century history on Dick Lipton’s blog Gödel’s Lost Letter.

    These two centuries appear to have more in common (in terms of science/technology/engineering/math, i.e., STEM) than might be expected.

    We can hope so anyway … an overheating planet … short on resources … with ten billion people on it … who all need family-supporting jobs … well … that planet needs all the help it can get.

  49. John Sidles Says:

    And for that even smaller subset of humanity that is interested in actually building quantum oracle devices, this objective is listed as Challenge 4 (slide 16) of a talk we gave last week at the 3rd Nano-MRI conference at the Domaine du Tremblay, France.

    That challenge is phrased as follows: “Experimentally demonstrate any oracle device that cannot be simulated in PTIME (that is, with polynomial resources). Alternatively, provide direct experimental evidence that the geometric dynamics of nature’s quantum state-space obstructs such a demonstration, and describe that geometry.”

    The slide takes care to credit Scott and Alex (Arkhipov) for inspiration … it would have been even better if Scott and Alex had shipped their rumored preprint!

    The talk overall adopted pretty much the same cheeky attitude toward “Sure/Shor separators” that the French knights in Monty Python’s Holy Grail had toward the Grail … namely … ‘No thanks, we’ve already got one.’

    I hoped for some lively discussion … but this strongly experiment-driven audience was sanguine … “So the state-space of quantum mechanics is perhaps, after all, not a linear Hilbert space? Hmmm … okay. How hard are these codes to write?”

    And so the French revolution was placid.

  50. John Sidles Says:

    Hmmm … the slide that explicitly credits Scott and Alex’s ideas about oracles is actually this one.

    The accompanying commentary “This challenge aims to untether quantum information science from quantum computers” may or may not accord with Scott and Alex’s views on physical quantum oracles … and that is why, up here in Seattle, we’re so eager to see a preprint!

    The (mostly experimental) attendees at the 3rd Nano-MRI conference welcomed this untethering, because from their pragmatic point-of-view, any challenge that expands the domain of fundamental QIS experiments is a good one.