Archive for the ‘Speaking Truth to Parallelism’ Category

Book Review: “Quantum Supremacy” by Michio Kaku (tl;dr DO NOT BUY)

Friday, May 19th, 2023

Update (June 6): I wish to clarify that I did not write any of the dialogue for the “Scott Aaronson” character who refutes Michio Kaku’s quantum computing hype in this YouTube video, which uses an AI recreation of my voice. The writer appears to be physics/math blogger and podcaster Hassaan Saleem; see his website here. Luckily, the character and I do share many common views; I’m sure we’d hit it off if we met.


When I was a teenager, I enjoyed reading Hyperspace, an early popularization of string theory by the theoretical physicist Michio Kaku. I’m sure I’d have plenty of criticisms if I reread it today, but at the time, I liked it a lot. In the decades since, Kaku has widened his ambit to, well, pretty much everything, regularly churning out popular books with subtitles like “How Science Will Revolutionize the 21st Century” and “How Science Will Shape Human Destiny and Our Daily Lives.” He’s also appeared on countless TV specials, in many cases to argue that UFOs likely contain extraterrestrial visitors.

Now Kaku has a new bestseller about quantum computing, creatively entitled Quantum Supremacy. He even appeared on Joe Rogan a couple weeks ago to promote the book, surely reaching an orders-of-magnitude larger audience than I have in two decades of trying to explain quantum computing to non-experts. (Incidentally, to those who’ve asked why Joe Rogan hasn’t invited me on his show to explain quantum computing: I guess you now have an answer of sorts!)

In the spirit, perhaps, of the TikTokkers who eat live cockroaches or whatever to satisfy their viewers, I decided to oblige loyal Shtetl-Optimized fans by buying Quantum Supremacy and reading it. So I can now state with confidence: beating out a crowded field, this is the worst book about quantum computing, for some definition of the word “about,” that I’ve ever encountered.

Admittedly, it’s not obvious why I’m reviewing the book here at all. Among people who’ve heard of this blog, I expect that approximately zero would be tempted to buy Kaku’s book, at least if they flipped through a few random pages and saw the … level of care that went into them. Conversely, the book’s target readers have probably never visited a blog like this one and never will. So what’s the use of this post?

Well, as the accidental #1 quantum computing blogger on the planet, I feel a sort of grim obligation here. Who knows, maybe this post will show up in the first page of Google results for Kaku’s book, and it will manage to rescue two or three people from the kindergarten of lies.


Where to begin? Should we just go through the first chapter with a red pen? OK then: on the very first page, Kaku writes,

Google revealed that their Sycamore quantum computer could solve a mathematical problem in 200 seconds that would take 10,000 years on the world’s fastest supercomputer.

No, the “10,000 years” estimate was quickly falsified, as anyone following the subject knows. I’d be the first to stress that the situation is complicated; compared to the best currently-known classical algorithms, some quantum advantage remains for the Random Circuit Sampling task, depending on how you measure it. But to repeat the “10,000 years” figure at this point, with no qualifications, is actively misleading.

Turning to the second page:

[Quantum computers] are a new type of computer that can tackle problems that digital computers can never solve, even with an infinite amount of time. For example, digital computers can never accurately calculate how atoms combine to create crucial chemical reactions, especially those that make life possible. Digital computers can only compute on digital tape, consisting of a series of 0s and 1s, which are too crude to describe the delicate waves of electrons dancing deep inside a molecule. For example, when tediously computing the paths taken by a mouse in a maze, a digital computer has to painfully analyze each possible path, one after the other. A quantum computer, however, simultaneously analyzes all possible paths at the same time, with lightning speed.

OK, so here Kaku has already perpetuated two of the most basic, forehead-banging errors about what quantum computers can do. In truth, anything that a QC can calculate, a classical computer can calculate as well, given exponentially more time: for example, by representing the entire wavefunction, all 2^n amplitudes, to whatever accuracy is needed. That’s why it was understood from the very beginning that quantum computers can’t change what’s computable, but only how efficiently things can be computed.
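
To make that concrete, here’s a minimal Python sketch (just NumPy, and obviously nothing like how a serious simulator is engineered) of the brute-force approach I mean: it stores all 2^n amplitudes explicitly, so its memory and time grow exponentially with the number of qubits, but nothing about it is beyond an ordinary classical computer.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 unitary 'gate' to qubit 'target' of an n-qubit statevector."""
    psi = state.reshape([2] * n)                         # one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [target]))  # contract the gate with that axis
    psi = np.moveaxis(psi, 0, target)                    # put the target axis back in place
    return psi.reshape(2 ** n)

n = 3                                                    # n qubits -> 2^n amplitudes
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)             # Hadamard gate

state = np.zeros(2 ** n, dtype=complex)                  # start in |00...0>
state[0] = 1.0
for q in range(n):                                       # Hadamard every qubit:
    state = apply_single_qubit_gate(state, H, q, n)      # the uniform superposition

print(state)   # all 2^n amplitudes, each equal to 1/sqrt(2^n)
```

The catch, of course, is that the array has 2^n entries, so this runs out of memory somewhere around 40-50 qubits. That exponential overhead is precisely the point.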

And then there’s the Misconception of Misconceptions, about how a QC “analyzes all possible paths at the same time”—with no recognition anywhere of the central difficulty, the thing that makes a QC enormously weaker than an exponentially parallel classical computer, but is also the new and interesting part, namely that you only get to see a single, random outcome when you measure, with its probability given by the Born rule. That’s the error so common that I warn against it right below the title of my blog.
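
And here, continuing the same toy simulator (it reuses state and n from the sketch above), is the part such popularizations leave out: a measurement hands you one random n-bit string per run, with probability given by the squared magnitude of its amplitude, and that’s all you get to see.

```python
def measure(state, n, shots=5):
    """Born rule: each shot returns ONE random n-bit outcome,
    sampled with probability |amplitude|^2."""
    probs = np.abs(state) ** 2
    outcomes = np.random.choice(2 ** n, size=shots, p=probs)
    return [format(int(k), f"0{n}b") for k in outcomes]

print(measure(state, n))   # e.g. ['011', '110', '000', '101', '010']
```

Run it a few times: you get different random strings, not a printout of “all possible paths at once.”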

[Q]uantum computers are so powerful that, in principle, they could break all known cybercodes.

Nope, that’s strongly believed to be false, just like the analogous statement for classical computers. Despite its obvious relevance for business and policy types, the entire field of post-quantum cryptography—including the lattice-based public-key cryptosystems that have by now survived 20+ years of efforts to find a quantum algorithm to break them—receives just a single vague mention, on pages 84-85. The possibility of cryptography surviving quantum computers is quickly dismissed because “these new trapdoor functions are not easy to implement.” (But they have been implemented.)


There’s no attempt, anywhere in this book, to explain how any quantum algorithm actually works, let alone a word anywhere about the limitations of quantum algorithms. And yet there’s still enough said to be wrong. On page 84, shortly after confusing the concept of a one-way function with that of a trapdoor function, Kaku writes:

Let N represent the number we wish to factorize. For an ordinary digital computer, the amount of time it takes to factorize a number grows exponentially, like t ~ e^N, times some unimportant factors.

This is a double howler: first, trial division takes only ~√N time; Kaku has confused N itself with its number of digits, ~log₂N. Second, he seems unaware that much better classical factoring algorithms, like the Number Field Sieve, have been known for decades, even though those algorithms play a central role in codebreaking and in any discussion of where the quantum/classical crossover might happen.
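
For readers who want the actual scalings behind that paragraph, here’s a hedged back-of-the-envelope sketch in Python; the toy example and the Number Field Sieve constant are illustrative only, not a careful crossover analysis.

```python
import math

def trial_division(N):
    """Factor N by testing divisors up to sqrt(N): roughly N**0.5 steps, not e**N."""
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d, N // d
        d += 1
    return N, 1   # N is prime

def nfs_cost(N, c=1.923):
    """Heuristic Number Field Sieve cost: exp(c * (ln N)^(1/3) * (ln ln N)^(2/3))."""
    lnN = math.log(N)
    return math.exp(c * lnN ** (1 / 3) * math.log(lnN) ** (2 / 3))

N = 10_403                     # = 101 * 103, a 14-bit toy example
print(trial_division(N))       # (101, 103), found in about sqrt(N) ~ 100 steps
print(f"log2 of the NFS cost for a 2048-bit modulus: ~{math.log2(nfs_cost(2 ** 2048)):.0f}")
```

The last line comes out around 117, i.e., very roughly 2^117 operations: enormous, but nothing remotely like e^N, and it’s the kind of number on which any honest quantum/classical crossover discussion actually hinges.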


Honestly, though, the errors aren’t the worst of it. The majority of the book is not even worth hunting for errors in, because fundamentally, it’s filler.

First there’s page after page breathlessly quoting prestigious-sounding people and organizations—Google’s Sundar Pichai, various government agencies, some report by Deloitte—about just how revolutionary they think quantum computing will be. Then there are capsule hagiographies of Babbage and Lovelace, Gödel and Turing, Planck and Einstein, Feynman and Everett.

And then the bulk of the book is actually about stuff with no direct relation to quantum computing at all—the origin of life, climate change, energy generation, cancer, curing aging, etc.—except with ungrounded speculations tacked onto the end of each chapter about how quantum computers will someday revolutionize all of this. Personally, I’d say that

  1. Quantum simulation speeding up progress in biochemistry, high-temperature superconductivity, and the like is at least plausible—though very far from guaranteed, since one has to beat the cleverest classical approaches that can be designed for the same problems (a point that Kaku nowhere grapples with).
  2. The stuff involving optimization, machine learning, and the like is almost entirely wishful thinking.
  3. Not once in the book has Kaku even mentioned the intellectual tools (e.g., looking at actual quantum algorithms like Grover’s algorithm or phase estimation, and their performance on various tasks) that would be needed to distinguish 1 from 2.

In his acknowledgments section, Kaku simply lists a bunch of famous scientists he’s met in his life—Feynman, Witten, Hawking, Penrose, Brian Greene, Lisa Randall, Neil deGrasse Tyson. Not a single living quantum computing researcher is acknowledged, not one.

Recently, I’d been cautiously optimistic that, after decades of overblown headlines about “trying all answers in parallel,” “cracking all known codes,” etc., the standard for quantum computing popularization was slowly creeping upward. Maybe I was just bowled over by this recent YouTube video (“How Quantum Computers Break the Internet… Starting Now”), which despite its clickbait title and its slick presentation, miraculously gets essentially everything right, shaming the hypesters by demonstrating just how much better it’s possible to do.

Kaku’s slapdash “book,” and the publicity campaign around it, represent a noxious step backwards. The wonder of it, to me, is that Kaku holds a PhD in theoretical physics. And yet the average English major who’s written a “what’s the deal with quantum computing?” article for some obscure link aggregator site has done a more careful and honest job than Kaku has. That’s setting the bar about a millimeter off the floor. I think the difference is that at least the English major knows they’re supposed to call an expert or two, when writing about an enormously complicated subject of which they’re completely ignorant.


Update: I’ve now been immersed in the AI safety field for one year, yet I wouldn’t consider myself nearly ready to write a book on the subject. My knowledge of related parts of CS, my year studying AI in grad school, and my having created the subject of computational learning theory of quantum states would all be relevant but totally insufficient. And AI safety, for all its importance, has less than quantum computing does in the way of difficult-to-understand concepts and results that basically everyone in the field agrees about. And if I did someday write such a book, I’d be pretty terrified of getting stuff wrong, and would have multiple expert colleagues read drafts.

In case this wasn’t clear enough from my post, Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions.

Cargo Cult Quantum Factoring

Wednesday, January 4th, 2023

Just days after we celebrated my wife’s 40th birthday, she came down with COVID, meaning she’s been isolating and I’ve been spending almost all my time dealing with our kids.

But if experience has taught me anything, it’s that the quantum hype train never slows down. In the past 24 hours, at least four people have emailed to ask me about a new paper entitled “Factoring integers with sublinear resources on a superconducting quantum processor.” Even the security expert Bruce Schneier, while skeptical, took the paper surprisingly seriously.

The paper claims … well, it’s hard to pin down what it claims, but it’s certainly given many people the impression that there’s been a decisive advance on how to factor huge integers, and thereby break the RSA cryptosystem, using a near-term quantum computer. Not by using Shor’s Algorithm, mind you, but by using the deceptively similarly named Schnorr’s Algorithm. The latter is a classical algorithm based on lattices, which the authors then “enhance” using the heuristic quantum optimization method called QAOA.

For those who don’t care to read further, here is my 3-word review:

No. Just No.

And here’s my slightly longer review:

Schnorr ≠ Shor. Yes, even when Schnorr’s algorithm is dubiously “enhanced” using QAOA—a quantum algorithm that, incredibly, for all the hundreds of papers written about it, has not yet been convincingly argued to yield any speedup for any problem whatsoever (besides, as it were, the problem of reproducing its own pattern of errors; one possible recent exception comes from Sami Boulebnane and Ashley Montanaro).

In the new paper, the authors spend page after page saying-without-saying that it might soon become possible to break RSA-2048, using a NISQ (i.e., non-fault-tolerant) quantum computer. They do so via two time-tested stratagems:

  1. the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates), and
  2. complete silence about the one crucial point.

Then, finally, they come clean about the one crucial point in a single sentence of the Conclusion section:

It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA.

“Unclear” is an understatement here. It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so.

All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many. Having said that, this actually isn’t the first time I’ve encountered the strange idea that the exponential quantum speedup for factoring integers, which we know about from Shor’s algorithm, should somehow “rub off” onto quantum optimization heuristics that embody none of the actual insights of Shor’s algorithm, as if by sympathetic magic. Since this idea needs a name, I’d hereby like to propose Cargo Cult Quantum Factoring.

And with that, I feel I’ve adequately discharged my duties here to sanity and truth. If I’m slow to answer comments, it’ll be because I’m dealing with two screaming kids.

Dismantling Quantum Hype with Tim Nguyen

Tuesday, November 22nd, 2022

Happy Thanksgiving to my American readers! While I enjoy a family holiday-week vacation in exotic Dallas—and yes, I will follow up on my old JFK post by visiting Dealey Plaza—please enjoy the following Thanksgiving victuals:

I recently recorded a 3-hour (!) YouTube video with Timothy Nguyen, host of the Cartesian Cafe. Our episode is entitled Quantum Computing: Dismantling the Hype. In it, I teach a sort of extremely compressed version of my undergraduate Intro to Quantum Information Science course, unburdening myself about whatever Tim prompts me to explain: the basic rules of quantum information, quantum circuits, the quantum black-box model, the Deutsch-Jozsa algorithm, BQP and its relationship to classical complexity classes, and sampling-based quantum supremacy experiments. This is a lot more technical than an average podcast, a lot less technical than an actual course, and hopefully just right for some nonempty subset of readers.

Outside of his podcasting career, some of you might recognize Nguyen as the coauthor, with Theo Polya, of a rebuttal of “Geometric Unity.” This latter is the proposal by the financier, podcaster, and leading “Intellectual Dark Web” figure Eric Weinstein for a unified theory of particle physics. Now, I slightly know Weinstein, and have even found him fascinating, eloquent, and correct about various issues. So, in an addendum to the main video, Nguyen chats with me about his experience critiquing Weinstein’s theory, and also about something where my knowledge is far greater: namely, my 2002 rebuttal of some of the central claims in Stephen Wolfram’s A New Kind of Science, and whether there are any updates to that story twenty years later.

Enjoy!

That Financial Times QC skepticism piece

Monday, August 29th, 2022

Several people have asked me to comment about a Financial Times opinion piece entitled The Quantum Computing Bubble (subtitle: “The industry has yet to demonstrate any real utility, despite the fanfare, billions of VC dollars and three Spacs”) (archive link). The piece is purely deflationary—not a positive word in it—though it never goes so far as to suggest that QC is blocked by any Gil-Kalai-like fundamental principle, nor does it even evince curiosity about that question.

As it happens, the author, physicist Nikita Gourianov, had emailed me a few days ago with some nice words about my own skeptical efforts on Shtetl-Optimized, and a request for comment on his article. So, as a way to get back into blogging after a 2-week hiatus, I figured I’d share my response.


Hi Nikita,

Thanks for the kind words about my blog, and for your piece, which I just read.  There’s a great deal of truth in what you write, but I also take issue with a few points.  You say:

A convincing strategy for overcoming these errors has not yet been demonstrated, making it unclear as to when — if ever — it will become possible to build a large-scale, fault-tolerant quantum computer.

In one sense this is tautologically true — the only fully convincing and clear demonstration that something is possible is to do it, as with the Wright brothers or the Trinity nuclear test.  In another sense, though, we’ve known the “strategy” since the 1990s.  It’s just that the fault-tolerance theorem called for gate fidelities 5-6 orders of magnitude better than anything achievable at the time.  In the 25 years since, about 3 of those orders of magnitude have been achieved, so it doesn’t take any great imagination to foresee that the remainder could be as well.  A layperson reading your piece might not understand this.

As for applications, my position has always been that if there were zero applications, it would still be at least as scientifically important to try to build QCs as it was to build the LHC, LIGO, or the James Webb telescope.  If there are real applications, such as simulating chemical dynamics, or certifiable randomness — and there very well might be — then those are icing on the cake.  This, of course, radically differs from the vision that now gets presented to investors and the press (hence all the railing on my blog!), but it also differs from what a reader of your piece would take away.

Anyway, thanks again for sharing!

Best,
Scott

Slowly emerging from blog-hibervacation

Wednesday, July 21st, 2021

Alright everyone:

  1. Victor Galitski has an impassioned rant against out-of-control quantum computing hype, which I enjoyed and enthusiastically recommend, although I wished Galitski had engaged a bit more with the strongest arguments for optimism (e.g., the recent sampling-based supremacy experiments, the extrapolations that show gate fidelities crossing the fault-tolerance threshold within the next decade). Even if I’ve been saying similar things on this blog for 15 years, I clearly haven’t been doing so in a style that works for everyone. Quantum information needs as many people as possible who will tell the truth as best they see it, unencumbered by any competing interests, and has nothing legitimate to fear from that. The modern intersection of quantum theory and computer science has raised profound scientific questions that will be with us for decades to come. It’s a lily that need not be gilded with hype.
  2. Last month Limaye, Srinivasan, and Tavenas posted an exciting preprint to ECCC, which apparently proves the first (slightly) superpolynomial lower bound on the size of constant-depth arithmetic circuits, over fields of characteristic 0. Assuming it’s correct, this is another small victory in the generations-long war against the P vs. NP problem.
  3. I’m grateful to the Texas Democratic legislators who fled the state to prevent the legislature, a couple miles from my house, having a quorum to enact new voting restrictions, and who thereby drew national attention to the enormity of what’s at stake. It should go without saying that, if a minority gets to rule indefinitely by forcing through laws to suppress the votes of a majority that would otherwise unseat it, thereby giving itself the power to force through more such laws, etc., then we no longer live in a democracy but in a banana republic. And there’s no symmetry to the situation: no matter how terrified you (or I) might feel about wokeists and their denunciation campaigns, the Democrats have no comparable effort to suppress Republican votes. Alas, I don’t know of any solutions beyond the obvious one, of trying to deal the conspiracy-addled grievance party crushing defeats in 2022 and 2024.
  4. Added: Here’s the video of my recent Astral Codex Ten ask-me-anything session.

QC ethics and hype: the call is coming from inside the house

Saturday, March 20th, 2021

For years, I’d sometimes hear discussions about the ethics of quantum computing research. Quantum ethics!

When the debates weren’t purely semantic, over the propriety of terms like “quantum supremacy” or “ancilla qubit,” they were always about chin-strokers like “but what if cracking RSA encryption gives governments more power to surveil their citizens? or what if only a few big countries or companies get quantum computers, thereby widening the divide between haves and have-nots?” Which, OK, conceivably these will someday be issues. But, besides barely depending on any specific facts about quantum computing, these debates always struck me as oddly safe, because the moral dilemmas were so hypothetical and far removed from us in time.

I confess I may have even occasionally poked fun when asked to expound on quantum ethics. I may have commented that quantum computers probably won’t kill anyone unless a dilution refrigerator tips over onto their head. I may have asked forgiveness for feeding custom-designed oracles to BQP and QMA, without first consulting an ethics committee about the long-term effects on those complexity classes.

Now fate has punished me for my flippancy. These days, I really do feel like quantum computing research has become an ethical minefield—but not for any of the reasons mentioned previously. What’s new is that millions of dollars are now potentially available to quantum computing researchers, along with equity, stock options, and whatever else causes “ka-ching” sound effects and bulging eyes with dollar signs. And in many cases, to have a shot at such riches, all an expert needs to do is profess optimism that quantum computing will have revolutionary, world-changing applications and have them soon. Or at least, not object too strongly when others say that.

Some of today’s rhetoric will of course remind people of the D-Wave saga, which first brought this blog to prominence when it began in earnest in 2007. Quantum computers, we hear now as then, will soon leave the Earth’s fastest supercomputers in the dust. They’re going to harness superposition to try all the exponentially many possible solutions at once. They’ll crack the Traveling Salesman Problem, and will transform machine learning and AI beyond recognition. Meanwhile, simulations of quantum systems will be key to solving global warming and cancer.

Despite the parallels, though, this new gold rush doesn’t feel to me like the D-Wave one, which seems in retrospect like just a little dry run. If I had to articulate what’s new in one sentence, it’s that this time “the call is coming from inside the house.” Many of the companies making wildly overhyped claims are recognized leaders of the field. They have brilliant quantum computing theorists and experimentalists on their staff with impeccable research records. Some of those researchers are among my best friends. And even when I wince at the claims of near-term applications, in many cases (especially with quantum simulation) the claims aren’t obviously false—we won’t know for certain until we try it and see! It’s genuinely gotten harder to draw the line between defensible optimism and exaggerations verging on fraud.

Indeed, this time around virtually everyone in QC is “complicit” to a greater or lesser degree. I, too, have accepted compensation to consult on quantum computing topics, to give talks at hedge funds, and in a few cases to serve as a scientific adviser to quantum computing startups. I tell myself that, by 2021 standards, this stuff is all trivial chump change—a few thousands of dollars here or there, to expound on the same themes that I already discuss free of charge on this blog. I actually get paid to dispel hype, rather than propagate it! I tell myself that I’ve turned my back on the orders of magnitude more money available to those willing to hitch their scientific reputations to the aspirations of this or that specific QC company. (Yes, this blog, and my desire to preserve its intellectual independence and credibility, might well be costing me millions!)

But, OK, some would argue that accepting any money from QC companies or QC investors just puts you at the top of a slope with unabashed snake-oil salesmen at the bottom. With the commercialization of our field that started around 2015, there’s no bright line anymore marking the boundary between pure scientific curiosity and the pursuit of filthy lucre; it’s all just points along a continuum. I’m not sure that these people are wrong.

As some of you might’ve seen already, IonQ, the trapped-ion QC startup that originated from the University of Maryland, is poised to have the first-ever quantum computing IPO—a so-called “SPAC IPO,” which while I’m a financial ignoramus, apparently involves merging with a shell company and thereby bypassing the SEC’s normal IPO rules. Supposedly they’re seeking $650 million in new funding and a $2 billion market cap. If you want to see what IonQ is saying about QC to prospective investors, click here. Lacking any choice in the matter, I’ll probably say more about these developments in a future post.

Meanwhile, PsiQuantum, the Palo-Alto-based optical QC startup, has said that it’s soon going to leave “stealth mode.” And Amazon, Microsoft, Google, IBM, Honeywell, and other big players continue making large investments in QC—treating it, at least rhetorically, not at all like blue-sky basic research, but like a central part of their future business plans.

All of these companies have produced or funded excellent QC research. And of course, they’re all heterogeneous, composed of individuals who might vehemently disagree with each other about the near- or long-term prospects of QC. And yet all of them have, at various times, inspired reflections in me like the ones in this post.

I regret that this post has no clear conclusion. I’m still hashing things out, soliciting thoughts from my readers and friends. Speaking of which: this coming Monday, March 22, at 8-10pm US Eastern time, I’ve decided to hold a discussion around these issues on Clubhouse—my “grand debut” on that app, and an opportunity to see whether I like it or not! My friend Adam Brown will moderate the discussion; other likely participants will be John Horgan, George Musser, Michael Nielsen, and Matjaž Leonardis. If you’re on Clubhouse, I hope to see you there!

Update (March 22): Read this comment by “FB” if you’d like to understand how we got to this point.

Sufficiently amusing that I had no choice

Thursday, January 21st, 2021

From quantum supremacy to classical fallacy

Wednesday, October 2nd, 2019

Retrospective Comment (Dec. 26, 2019): While I basically stand by what I wrote in this post, I wanted to call attention to the fact that, in its aftermath, one of the authors of the p-bit paper—Kerem Camsari—displayed a striking degree of intellectual courage and honesty. He showed up in the comments section to defend the motivation for the p-bit model, but also to concede the points I’d raised about scaling. Notably, on some matters, he explicitly broke with his own coauthors. He treated having his paper harshly criticized on Shtetl-Optimized not as a personal attack, but as an opportunity to learn and grow. I’m not sure that I would’ve been able to do the same in his shoes, and I regard it as one of the happier outcomes in this blog’s history. –SA

Another Update (March 15, 2024): Kerem Camsari has a new Twitter thread, attacking me for having stuck my neck out in this post to tell the truth about the lack of any asymptotic speedup from the p-bit model. As a result, I retract all the conciliatory things that I said above.

————————————

Maybe I should hope that people never learn to distinguish for themselves which claimed breakthroughs in building new forms of computation are obviously serious, and which ones are obviously silly. For as long as they don’t, this blog will always serve at least one purpose. People will cite it, tweet it, invoke its “authority,” even while from my point of view, I’m offering nothing more intellectually special than my toddler does when he calls out “moo-moo cow! baa-baa sheep!” as we pass them on the road.

But that’s too pessimistic. Sure, most readers must more-or-less already know what I’ll say about each thing: that Google’s quantum supremacy claim is serious, that memcomputing to solve NP-complete problems is not, etc. Even so, I’ve heard from many readers that this blog was at least helpful for double-checking their initial impressions, and for making common knowledge what before had merely been known to many. I’m fine for it to continue serving those roles.

Last week, even as I dealt with fallout from Google’s quantum supremacy leak, I also got several people asking me to comment on a Nature paper entitled Integer factorization using stochastic magnetic tunnel junctions (warning: paywalled). See also here for a university press release.

The authors report building a new kind of computer based on asynchronously updated “p-bits” (probabilistic bits). A p-bit is “a robust, classical entity fluctuating in time between 0 and 1, which interacts with other p-bits … using principles inspired by neural networks.” They build a device with 8 p-bits, and use it to factor integers up to 945. They present this as another “unconventional computation scheme” alongside quantum computing, and as a “potentially scalable hardware approach to the difficult problems of optimization and sampling.”

A commentary accompanying the Nature paper goes much further still—claiming that the new factoring approach, “if improved, could threaten data encryption,” and that resources should now be diverted from quantum computing to this promising new idea, one with the advantages of requiring no refrigeration or maintenance of delicate entangled states. (It should’ve added: and how big a number has Shor’s algorithm factored anyway, 21? Compared to 945, that’s peanuts!)

Since I couldn’t figure out a gentler way to say this, here goes: it’s astounding that this paper and commentary made it into Nature in the form that they did. Juxtaposing Google’s sampling achievement with p-bits, as several of my Facebook friends did last week, is juxtaposing the Wright brothers with some guy bouncing around on a pogo stick.

If you were looking forward to watching me dismantle the p-bit claims, I’m afraid you might be disappointed: the task is over almost the moment it begins. “p-bit” devices can’t scalably outperform classical computers, for the simple reason that they are classical computers. A little unusual in their architecture, but still well-covered by the classical Extended Church-Turing Thesis. Just like with the quantum adiabatic algorithm, an energy penalty is applied to coax the p-bits into running a local optimization algorithm: that is, making random local moves that preferentially decrease the number of violated constraints. Except here, because the whole evolution is classical, there doesn’t seem to be even the pretense that anything is happening that a laptop with a random-number generator couldn’t straightforwardly simulate.
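
If it helps to see how little is going on, here’s a minimal sketch of that kind of stochastic local search; this is emphatically not the authors’ actual hardware or energy function, just an illustration that ordinary random bits on a laptop do the same job.

```python
import math, random

def pbit_factor(N=35, bits=3, steps=100_000, T=10.0):
    """Toy CLASSICAL 'p-bit'-style factoring: 2*bits fluctuating bits encode
    candidate factors p and q; flip bits at random and preferentially keep
    flips that reduce the 'energy' |N - p*q|."""
    x = [random.randint(0, 1) for _ in range(2 * bits)]

    def decode(bs):                          # little-endian bits -> integer
        return sum(b << i for i, b in enumerate(bs))

    def energy(s):
        return abs(N - decode(s[:bits]) * decode(s[bits:]))

    E = energy(x)
    for _ in range(steps):
        i = random.randrange(2 * bits)       # propose flipping one random bit
        x[i] ^= 1
        E_new = energy(x)
        if E_new <= E or random.random() < math.exp((E - E_new) / T):
            E = E_new                        # accept the move
        else:
            x[i] ^= 1                        # reject it: flip the bit back
        if E == 0:
            return decode(x[:bits]), decode(x[bits:])
    return None                              # it's a heuristic; it can fail

print(pbit_factor())   # typically (5, 7) or (7, 5)
```

Crank up the number of bits, and the number of flips needed blows up exponentially, which is exactly the issue the paper never discusses.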

Even so, I wouldn’t be writing this post if you opened the paper and it immediately said, in effect, “look, we know. You’re thinking that this is just yet another stochastic local optimization method, which could clearly be simulated efficiently on a conventional computer, thereby putting it into a different conceptual universe from quantum computing. You’re thinking that factoring an n-bit integer will self-evidently take exp(n) time by this method, as compared to exp(n^(1/3)) for the Number Field Sieve, and that no crypto is in even remote danger from this. But here’s why you should still be interested in our p-bit model: because of other advantages X, Y, and Z.” Alas, in vain one searches the whole paper, and the lengthy supplementary material, and the commentary, for any acknowledgment of the pachyderm in the pagoda. Not an asymptotic runtime scaling in sight. Quantum computing is there, but stripped of the theoretical framework that gives it its purpose.

That silence, in the pages of Nature: that’s the part that convinced me that, while on the negative side this blog seems to have accomplished nothing for the world in 14 years of existence, on the positive side it will likely have a role for decades to come.

Update: See a response in the comments, which I appreciated, from Kerem Camsari (one of the authors of the paper), and my response to the response.

(Partly) Unrelated Announcement #1: My new postdoc, Andrea Rocchetto, had the neat idea of compiling a Quantum Computing Fact Sheet: a quick “Cliffs Notes” for journalists, policymakers, and others looking to get the basics right. The fact sheet might grow in the future, but in the meantime, check it out! Or at a more popular level, try the Quantum Atlas made by folks at the University of Maryland.

Unrelated Announcement #2: Daniel Wichs asked me to give a shout-out to a new Conference on Information-Theoretic Cryptography, to be held June 17-19 in Boston.

Third Announcement: Several friends asked me to share that Prof. Peter Wittek, quantum computing researcher at the University of Toronto, has gone missing in the Himalayas. Needless to say we hope for his safe return.

Just says in P

Wednesday, April 17th, 2019

Recently a Twitter account called justsaysinmice was started. The only thing this account does is repost breathless news articles about medical research breakthroughs that fail to mention that the effect in question was only observed in mice, and then add the words “IN MICE” to them. Simple concept, but it already seems to be changing the conversation about science reporting.

It occurred to me that we could do something analogous for quantum computing. While my own deep-seated aversion to Twitter prevents me from doing it myself, which of my readers is up for starting an account that just reposts one overhyped QC article after another, while appending the words “A CLASSICAL COMPUTER COULD ALSO DO THIS” to each one?

Quantum computing for policymakers and philosopher-novelists

Wednesday, June 6th, 2018

Last week Rebecca Newberger Goldstein, the great philosopher and novelist who I’m privileged to call a friend, wrote to ask me whether I “see any particular political and security problems that are raised by quantum computing,” to help her prepare for a conference she’d be attending in which that question would be discussed.  So I sent her the response below, and then decided that it might be of broader interest.

Shtetl-Optimized regulars and QC aficionados will find absolutely nothing new here—move right along, you’ve been warned.  But I decided to post my (slightly edited) response to Rebecca anyway, for two reasons.  First, so I have something to send anyone who asks me the same question in the future—something that, moreover, as Feynman said about the Feynman Lectures on Physics, contains views “not far from my own.”  And second, because, while of course I’ve written many other popular-level quantum computing essays, with basically all of them, my goal was to get the reader to hear the music, so to speak.  On reflection, though, I think there might also be some value in a piece for business and policy people (not to mention humanist intellectuals) that sets aside the harmony of the interfering amplitudes, and just tries to convey some of the words to the song without egregious howlers—which is what Rebecca’s question about “political and security problems” forced me to do.  This being quantum computing, of course, much of what one finds in the press doesn’t even get the lyrics right!  So without further ado:


Dear Rebecca,

If you want something serious and thoughtful about your question, you probably won’t do much better than the recent essay “The Potential Impact of Quantum Computers on Society,” by my longtime friend and colleague Ronald de Wolf.

To elaborate my own thoughts, though: I feel like the political and security problems raised by quantum computing are mostly the usual ones raised by any new technology (national prestige competitions, haves vs have-nots, etc)—but with one added twist, coming from quantum computers’ famous ability to break our current methods for public-key cryptography.

As Ronald writes, you should think of a quantum computer as a specialized device, which is unlikely to improve all or even most of what we do with today’s computers, but which could give dramatic speedups for a few specific problems.  The three most important types of applications that we know about today are:

(1) Simulation of quantum physics and chemistry. This was Richard Feynman’s original application when he proposed quantum computing in 1981, and I think it’s still the most important one economically.  Having a fast, general-purpose quantum simulator could help a lot in designing new drugs, materials, solar cells, high-temperature superconductors, chemical reactions for making fertilizer, etc.  Obviously, these are not applications like web browsing or email that will directly affect the everyday computer user.  But they’re areas where you’d only need a few high-profile successes to generate billions of dollars of value.

(2) Breaking existing public-key cryptography.  This is the most direct political and security implication.  Every time you visit a website that begins with “https,” the authentication and encryption—including, e.g., protecting your credit card number—happen using a cryptosystem based on factoring integers or discrete logarithms or a few other related problems in number theory.  A full, universal quantum computer, if built, is known to be able to break all of this.
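
(For concreteness, here is a textbook toy version of RSA in Python, with absurdly small numbers and none of the padding or encoding real systems use; it shows the sense in which the encryption “is based on” factoring: whoever can factor the public modulus can reconstruct the private key, and that factoring step is the one Shor’s algorithm would make fast.)

```python
# Toy RSA: real moduli are 2048+ bits, and real systems add padding, etc.
p, q = 61, 53                         # the secret primes
N, e = p * q, 17                      # public key: modulus N = 3233, exponent e
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

m = 42                                # a message
c = pow(m, e, N)                      # anyone can encrypt with the public key
assert pow(c, d, N) == m              # only the key holder can decrypt... unless:

# An attacker who can factor N recovers p and q, and with them the private key.
p2 = next(t for t in range(2, N) if N % t == 0)
q2 = N // p2
d_broken = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(c, d_broken, N))            # 42: the attacker reads the message
```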

Having said that, we all know today that hackers, and intelligence agencies, can compromise people’s data in hundreds of more prosaic ways than by building a quantum computer!  Usually they don’t even bother trying to break the encryption, relying instead on poor implementations and human error.

And it’s also important to understand that a quantum computer wouldn’t mean the end of online security.  There are public-key cryptosystems currently under development—most notably, those based on lattices—that are believed to resist attack even by quantum computers; NIST is planning to establish standards for these systems over the next few years.  Switching to these “post-quantum” systems would be a significant burden, much like fixing the Y2K bug (and they’re also somewhat slower than our current systems), but hopefully it would only need to happen once.

As you might imagine, there’s some interest in switching to post-quantum cryptosystems even now—for example, if you wanted to encrypt messages today with some confidence they won’t be decrypted even 30 years from now.  Google did a trial of a post-quantum cryptosystem two years ago.  On the other hand, given that a large fraction of web servers still use 512-bit “export grade” cryptography that was already breakable in the 1990s (good news: commenter Viktor Dukhovni tells me that this has now been mostly fixed, since security experts, including my childhood friend Alex Halderman, raised a stink about it a few years ago), it’s a safe bet that getting everyone to upgrade would take quite a long time, even if the experts agreed (which they don’t yet) which of the various post-quantum cryptosystems should become the new standard.  And since, as I said, most attacks target mistakes in implementation rather than the underlying cryptography, we should expect any switch to post-quantum cryptography to make security worse rather than better in the short run.

As a radical alternative to post-quantum crypto, there’s also (ironically enough) quantum cryptography, which doesn’t work over the existing Internet—it requires setting up new communications infrastructure—but which has already been deployed in a tiny number of places, and which promises security based only on quantum physics (and, of course, on the proper construction of the hardware), as opposed to mathematical problems that a quantum computer or any other kind of computer could potentially solve.  According to a long-running joke (or not-quite-joke) in our field, one of the central applications of quantum computing will be to create demand for quantum cryptography!

Finally, there’s private-key cryptography—i.e., the traditional kind, where the sender and recipient meet in secret to agree on a key in advance—which is hardly threatened by quantum computing at all: you can achieve the same level of security as before, we think, by simply doubling the key lengths.  If there’s no constraint on key length, then the ultimate here is the one-time pad, which when used correctly, is theoretically unbreakable by anything short of actual physical access to the sender or recipient (e.g., hacking their computers, or beating down their doors with an ax).  But while private-key crypto might be fine for spy agencies, it’s impractical for widespread deployment on the Internet, unless we also have a secure way to distribute the keys.  This is precisely where public-key crypto typically gets used today, and where quantum crypto could in principle be used in the future: to exchange private keys that are then used to encrypt and decrypt the actual data.
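
(The one-time pad itself is almost embarrassingly simple; here is a few-line sketch, with Python’s secrets module standing in for however the pad would actually be generated and shared in advance.)

```python
import secrets

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))      # as long as the message, used only once
ciphertext = bytes(m ^ k for m, k in zip(message, pad))
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))
assert recovered == message                  # XORing with the same pad undoes the encryption
```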

I should also mention that, because it breaks elliptic-curve-based signature schemes, a quantum computer might let a thief steal billions of dollars’ worth of Bitcoin.  Again, this could in principle be fixed by migrating Bitcoin (and other cryptocurrencies) to quantum-resistant cryptographic problems, but that hasn’t been done yet.

(3) Optimization and machine learning.  These are obviously huge application areas for industry, defense, and pretty much anything else.  The main issue is just that we don’t know how to get as large a speedup from a quantum computer as we’d like for these applications.  A quantum computer, we think, will often be able to solve optimization and machine learning problems in something like the square root of the number of steps that would be needed classically, using variants of what’s called Grover’s algorithm.  So, that’s significant, but it’s not the exponential speedup and complete game-changer that we’d have for quantum simulation or for breaking public-key cryptography.  Most likely, a quantum computer will be able to achieve exponential speedups for these sorts of problems only in special cases, and no one knows yet how important those special cases will be in practice.  This is a still-developing research area—there might be further theoretical breakthroughs (in inventing new quantum algorithms, analyzing old algorithms, matching the performance of the quantum algorithms by classical algorithms, etc.), but it’s also possible that we won’t really understand the potential of quantum computers for these sorts of problems until we have the actual devices and can test them out.
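
(To put very rough numbers on “the square root of the number of steps”: the comparison below counts queries only, and ignores the large constant-factor and error-correction overheads that matter enormously in practice.)

```python
import math

# Brute-force search or optimization over M candidate solutions.
for log2_M in (20, 40, 60):
    M = 2 ** log2_M
    classical = M                          # check the candidates one by one
    grover = (math.pi / 4) * math.sqrt(M)  # ~(pi/4)*sqrt(M) queries via Grover's algorithm
    print(f"M = 2^{log2_M}: classical ~{classical:.1e} queries, Grover ~{grover:.1e}")
```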

 

As for how far away all this is: given the spectacular progress by Google and others over the last few years, my guess is that we’re at most a decade away from some small, special-purpose quantum computers (with ~50-200 qubits) that could be useful for quantum simulation.  These are what the physicist John Preskill called “Noisy Intermediate Scale Quantum” (NISQ) computers in his excellent recent essay.

However, my guess is also that it will take longer than that to get the full, error-corrected, universal quantum computers that would be needed for optimization and (most relevant to your question) for breaking public-key cryptography.  Currently, the engineering requirements for a “full, universal” quantum computer look downright scary—so we’re waiting either for further breakthroughs that would cut the costs by a few more orders of magnitude (which, by their very nature, can’t be predicted), or for some modern-day General Groves and Oppenheimer who’d be licensed to spend however many hundreds of billions of dollars it would take to make it happen sooner.

The race to build “NISQ” devices has been heating up, with a shift from pure academic research to venture capitalists and industrial efforts just within the last 4-5 years, noticeably changing the character of our field.

In this particular race, I think that the US is the clear world leader right now—specifically, Google, IBM, Intel, Microsoft, University of Maryland / NIST, and various startups—followed by Europe (with serious experimental efforts in the Netherlands, Austria, and the UK among other places).  Here I should mention that the EU has a new 1-billion-Euro initiative in quantum information.  Other countries that have made or are now making significant investments include Canada, Australia, China, and Israel.  Surprisingly, there’s been very little investment in Russia in this area, and less than I would’ve expected in Japan.

China is a very interesting case.  They’ve chosen to focus less on quantum computing than on the related areas of quantum communication and cryptography, where they’ve become the world leader.  Last summer, in a big upset, China launched the first satellite (“Micius”) specifically for quantum communications, and were able to use it to do quantum cryptography and to distribute entanglement over thousands of miles (from one end of China to the other), the previous record being maybe 100 miles.  If the US has anything comparable to this, it isn’t publicly known (my guess is that we don’t).

This past year, there were hearings in Congress about the need for the US to invest more in quantum information, for example to keep up with China, and it looks likely to happen.  As indifferent or hostile as the current administration has been toward science more generally, the government and defense people I’ve met are very much on board with quantum information—often more so than I am!  I’ve even heard China’s Micius satellite referred to as the “quantum Sputnik,” the thing that will spur the US and others to spend much more to keep up.

As you’d imagine, part of me is delighted that something so abstruse, and interesting for fundamental science, and close to my heart, is now getting attention and funding at this level.  But part of me is worried by how much of the current boom I know to be fueled by misconceptions, among policymakers and journalists and the general public, about what quantum computers will be able to do for us once we have them.  Basically, people think they’ll be magic oracles that will solve all problems faster, rather than just special classes of problems like the ones I enumerated above—and that they’ll simply allow the continuation of the Moore’s Law that we know and love, rather than being something fundamentally different.  I’ve been trying to correct these misconceptions, on my blog and elsewhere, to anyone who will listen, for all the good that’s done!  In any case, the history of AI reminds us that a crash could easily follow the current boom-time, if the results of quantum computing research don’t live up to people’s expectations.

I guess there’s one final thing I’ll say.  Quantum computers are sometimes analogized to nuclear weapons, as a disruptive technology with implications for global security that scientists theorized about decades before it became technically feasible.  But there are some fundamental differences.  Most obviously: the deterrent value of a nuclear weapon comes if everyone knows you have it but you never need to use it, whereas the intelligence value of a quantum computer comes if you use it but no one knows you have it.

(Which is related to how the Manhattan Project entered the world’s consciousness in August 1945, whereas Bletchley Park—which was much more important to the actual winning of WWII—remained secret until the 1970s.)

As I said before, once your adversaries realized that you had a universal quantum computer, or might have one soon, they could switch to quantum-resistant forms of encryption, at least for their most sensitive secrets—in which case, as far as encryption was concerned, everyone would be more-or-less back where they started.  Such a switch would be onerous, cost billions of dollars, and (in practice) probably open up its own security holes unrelated to quantum computing.  But we think we already basically understand how to do it.

This is one reason why, even in a hypothetical future where hostile powers got access to quantum computers (and despite the past two years, I still don’t think of the US as a “hostile power”—I mean, like, North Korea or ISIS or something…!)—even in that future, I’d still be much less concerned about the hostile powers having this brand-new technology, than I’d be about their having the generations-old technology of fission and fusion bombs.

Best,
Scott


Unrelated Update (June 8): Ian Tierney asked me to advertise a Kickstarter for a short film that he’s planning to make about Richard Feynman, and a letter that he wrote to his first wife Arlene after she died.