Research
Papers and Surveys
(Mostly-)Quantum
Papers
S. C. Marshall, S. Aaronson, and V. Dunjko. Improved separation between quantum and classical computers for sampling and functional tasks, arXiv:2410.20935, 2024.
This paper furthers existing evidence that quantum computers are capable of computations beyond classical computers. Specifically, we strengthen existing results to a collapse of the polynomial hierarchy to the second level if: (i) quantum computers with postselection are as powerful as classical computers with postselection (PostBQP = PostBPP), or (ii) any one of several quantum sampling experiments (BosonSampling, IQP, DQC1) can be approximately performed by a classical computer (contingent on existing assumptions). This last result implies that if any of these experiments' hardness conjectures hold, then quantum computers can implement functions that classical computers cannot (FBQP ≠ FBPP) unless the polynomial hierarchy collapses to its second level. These results improve on previous work, which either achieved a collapse only to the third level or concerned exact sampling, a physically impractical case.
The workhorse of these results is a new technical complexity-theoretic result which we believe could have value beyond quantum computation. In particular, we prove that if there exists an equivalence between problems solvable with an exact counting oracle and problems solvable with an approximate counting oracle, then the polynomial hierarchy collapses to its second level, indeed to ZPP^NP.
S. Aaronson, M. Bavarian, T. Cubitt, S. Grewal, G. Gueltrini, R. O'Donnell, and M. Raat. Computability Theory of Closed Timelike Curves, arXiv:1609.05507, 2024.
We study the question of what is computable by Turing machines equipped with time travel into the past; i.e., with Deutschian closed timelike curves (CTCs) having no bound on their width or length. An alternative viewpoint is that we study the complexity of finding approximate fixed points of computable Markov chains and quantum channels of countably infinite dimension. Our main result is that the complexity of these problems is precisely Δ_2, the class of languages Turing-reducible to the Halting problem. Establishing this as an upper bound for qubit-carrying CTCs requires recently developed results in the theory of quantum Markov maps.
[Note: This version corrects an error in the proof of the main result in an earlier version of the paper.]
S. Aaronson and Y. Zhang. On Verifiable Quantum Advantage with Peaked Circuit Sampling, arXiv:2404.14493, 2024.
Over a decade after its proposal, the idea of using quantum computers to sample hard distributions has remained a key path to demonstrating quantum advantage. Yet a severe drawback remains: verification seems to require exponential classical computation. As an attempt to overcome this difficulty, we propose a new candidate for quantum advantage experiments with otherwise-random "peaked circuits," i.e., quantum circuits whose outputs have high concentrations on a computational basis state. Naturally, the heavy output string can be used for classical verification.
In this work, we analytically and numerically study an explicit model of peaked circuits, in which τ_r layers of uniformly random gates are augmented by τ_p layers of gates that are optimized to maximize peakedness. We show that getting constant peakedness from such circuits requires τ_p = Ω((τ_r/n)^{0.19}) with overwhelming probability. However, we also give numerical evidence that nontrivial peakedness is possible in this model---decaying exponentially with the number of qubits, but more than can be explained by any approximation where the output of a random quantum circuit is treated as a Haar-random state. This suggests that these peaked circuits have the potential for future verifiable quantum advantage experiments.
Our work raises numerous open questions about random peaked circuits, including how to generate them efficiently, and whether they can be distinguished from fully random circuits in classical polynomial time.
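As a quick numerical illustration of the Haar-random baseline invoked above (our sketch, not code from the paper): for a Haar-random n-qubit state, the largest output probability concentrates around ln(2^n)/2^n, so peakedness beyond that scale is what needs explaining.

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (6, 8, 10):
    dim = 2 ** n
    # A Haar-random state: complex Gaussian vector, normalized.
    psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    p = np.abs(psi) ** 2 / np.vdot(psi, psi).real   # output probabilities
    print(f"n={n}: max output prob {p.max():.4f}, ln(2^n)/2^n = {np.log(dim)/dim:.4f}")
```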
S. Aaronson, S. Grewal, V. Iyer, S. C. Marshall, and R. Ramachandran. PDQMA = DQMA = NEXP: QMA With Hidden Variables and Non-collapsing Measurements, arXiv:2403.02543, 2024.
We define and study a variant of QMA (Quantum Merlin Arthur) in which Arthur can make multiple non-collapsing measurements to Merlin's witness state, in addition to ordinary collapsing measurements. By analogy to the class PDQP defined by Aaronson, Bouland, Fitzsimons, and Lee (2014), we call this class PDQMA. Our main result is that PDQMA = NEXP; this result builds on the MIP = NEXP Theorem and complements the result of Aaronson (2018) that PDQP/qpoly = ALL. While the result has little to do with quantum mechanics, we also show a more "quantum" result: namely, that QMA with the ability to inspect the entire history of a hidden variable is equal to NEXP, under mild assumptions on the hidden-variable theory. We also observe that a quantum computer, augmented with quantum advice and the ability to inspect the history of a hidden variable, can solve any decision problem in polynomial time.
S. Aaronson and S. Hung. Certified Randomness from Quantum Supremacy, in Proceedings of ACM STOC'2023, arXiv:2303.01625.
We propose an application for near-term quantum devices: namely, generating cryptographically certified random bits, to use (for example) in proof-of-stake cryptocurrencies. Our protocol repurposes the existing "quantum supremacy" experiments, based on random circuit sampling, that Google and USTC have successfully carried out starting in 2019. We show that, whenever the outputs of these experiments pass the now-standard Linear Cross-Entropy Benchmark (LXEB), under plausible hardness assumptions they necessarily contain Ω(n) min-entropy, where n is the number of qubits. To achieve a net gain in randomness, we use a small random seed to produce pseudorandom challenge circuits. In response to the challenge circuits, the quantum computer generates output strings that, after verification, can then be fed into a randomness extractor to produce certified nearly-uniform bits -- thereby "bootstrapping" from pseudorandomness to genuine randomness. We prove our protocol sound in two senses: (i) under a hardness assumption called Long List Quantum Supremacy Verification, which we justify in the random oracle model, and (ii) unconditionally in the random oracle model against an eavesdropper who could share arbitrary entanglement with the device. (Note that our protocol's output is unpredictable even to a computationally unbounded adversary who can see the random oracle.) Currently, the central drawback of our protocol is the exponential cost of verification, which in practice will limit its implementation to at most n∼60 qubits, a regime where attacks are expensive but not impossible. Modulo that drawback, our protocol appears to be the only practical application of quantum computing that both requires a QC and is physically realizable today.
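To make the protocol's shape concrete, here is a toy, runnable sketch of the client-side loop. Every name in it is ours, and the "device" is a classical stand-in that samples from a pseudorandom Porter-Thomas-style distribution; a real run would replace it with a quantum server, and verification would require the exponential-time simulation discussed above.

```python
import hashlib
import secrets
import numpy as np

N_QUBITS = 10                        # toy size; the paper targets n ~ 60
DIM = 2 ** N_QUBITS

def challenge_distribution(seed: bytes, i: int) -> np.ndarray:
    """Pseudorandom stand-in for the ideal output distribution of the i-th
    challenge circuit (Porter-Thomas-like, as for random quantum circuits)."""
    digest = hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
    rng = np.random.default_rng(int.from_bytes(digest, "big") % 2**63)
    weights = rng.exponential(size=DIM)
    return weights / weights.sum()

def linear_xeb(p_ideal: np.ndarray, samples: np.ndarray) -> float:
    """LXEB score: 2^n times the mean ideal probability of the returned
    strings (about 2 for an ideal device, about 1 for uniform guessing)."""
    return DIM * p_ideal[samples].mean()

seed = secrets.token_bytes(16)       # small truly random seed
device_rng = np.random.default_rng()
accepted = []
for i in range(20):                  # 20 challenge rounds
    p = challenge_distribution(seed, i)
    samples = device_rng.choice(DIM, size=100, p=p)   # honest "device"
    if linear_xeb(p, samples) < 1.3:                  # verification threshold
        raise RuntimeError("device failed LXEB; abort")
    accepted.extend(samples.tolist())
# The real protocol now feeds the verified samples, which carry Omega(n)
# min-entropy under the paper's assumptions, into a seeded randomness
# extractor; we stop at verification here.
print(len(accepted), "verified samples ready for extraction")
```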
S. Aaronson, A. Bouland, B. Fefferman, S. Ghosh, U. Vazirani, C. Zhang, and Z. Zhou. Quantum Pseudoentanglement, in Innovations in Theoretical Computer Science (ITCS) 2024, arXiv:2211.00747.
Entanglement is a quantum resource, in some ways analogous to randomness in classical computation. Inspired by recent work of Gheorghiu and Hoban, we define the notion of "pseudoentanglement", a property exhibited by ensembles of efficiently constructible quantum states which are indistinguishable from quantum states with maximal entanglement. Our construction relies on the notion of quantum pseudorandom states---first defined by Ji, Liu, and Song---which are efficiently constructible states indistinguishable from (maximally entangled) Haar-random states. Specifically, we give a construction of pseudoentangled states with entanglement entropy arbitrarily close to log n across every cut, a tight bound providing an exponential separation between computational and information-theoretic quantum pseudorandomness. We discuss applications of this result to Matrix Product State testing, entanglement distillation, and the complexity of the AdS/CFT correspondence. As compared with a previous version of this manuscript, this version introduces a new pseudorandom state construction, has a simpler proof of correctness, and achieves a technically stronger result of low entanglement across all cuts simultaneously.
S. Aaronson, H. Buhrman, and W. Kretschmer. A Qubit, a Coin, and an Advice String Walk Into a Relational Problem, in Innovations in Theoretical Computer Science (ITCS) 2024, ECCC TR23-015.
Relational problems (those with many possible valid outputs) are different from decision problems, but it is easy to forget just how different. This paper initiates the study of FBQP/qpoly, the class of relational problems solvable in quantum polynomial-time with the help of polynomial-sized quantum advice, along with its analogues for deterministic and randomized computation (FP, FBPP) and advice (/poly, /rpoly).
Our first result is that FBQP/qpoly ≠ FBQP/poly, unconditionally, with no oracle---a striking contrast with what we know about the analogous decision classes. The proof repurposes the separation between quantum and classical one-way communication complexities due to Bar-Yossef, Jayram, and Kerenidis. We discuss how this separation raises the prospect of near-term experiments to demonstrate "quantum information supremacy," a form of quantum supremacy that would not depend on unproved complexity assumptions.
Our second result is that FBPP ⊄ FP/poly---that is, Adleman's Theorem fails for relational problems---unless PSPACE ⊆ NP/poly. Our proof uses IP=PSPACE and time-bounded Kolmogorov complexity. On the other hand, we show that proving FBPP ⊄ FP/poly unconditionally will be hard, as it implies a superpolynomial circuit lower bound for PromiseBPEXP.
We prove the following further results:
- Unconditionally, FP ≠ FBPP and FP/poly ≠ FBPP/poly (even when these classes are carefully defined).
- FBPP/poly = FBPP/rpoly (and likewise for FBQP). For sampling problems, by contrast, SampBPP/poly ≠ SampBPP/rpoly (and likewise for SampBQP).
S. Aaronson and J. Pollack. Discrete Bulk Reconstruction, Journal of High Energy Physics (JHEP), arXiv:2210.15601, 2022.
According to the AdS/CFT correspondence, the geometries of certain spacetimes are fully determined by quantum states that live on their boundaries---indeed, by the von Neumann entropies of portions of those boundary states. This work investigates to what extent the geometries can be reconstructed from the entropies in polynomial time. Bouland, Fefferman, and Vazirani (2019) argued that the AdS/CFT map can be exponentially complex if one wants to reconstruct regions such as the interiors of black holes. Our main result provides a sort of converse: we show that, in the special case of a single 1D boundary, if the input data consists of a list of entropies of contiguous boundary regions, and if the entropies satisfy a single inequality called Strong Subadditivity, then we can construct a graph model for the bulk in linear time. Moreover, the bulk graph is planar, it has O(N^2) vertices (the information-theoretic minimum), and it's "universal," with only the edge weights depending on the specific entropies in question. From a combinatorial perspective, our problem boils down to an "inverse" of the famous min-cut problem: rather than being given a graph and asked to find a min-cut, here we're given the values of min-cuts separating various sets of vertices, and need to find a weighted undirected graph consistent with those values. Our solution to this problem relies on the notion of a "bulkless" graph, which might be of independent interest for AdS/CFT. We also make initial progress on the case of multiple 1D boundaries---where the boundaries could be connected via wormholes---including an upper bound of O(N^4) vertices whenever a planar bulk graph exists (thus putting the problem into the complexity class NP).
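To make the single-boundary input concrete: the data is just a table of entropies of contiguous intervals, and the one consistency condition is Strong Subadditivity. A minimal checker (ours, for illustration; the paper's reconstruction algorithm does far more) looks like this:

```python
from itertools import combinations

def satisfies_ssa(S, n):
    """S maps an interval (i, j), covering boundary sites i..j-1, to its
    entropy; n is the boundary size. Checks strong subadditivity
    S(X) + S(Y) >= S(X u Y) + S(X n Y) for overlapping intervals X, Y."""
    intervals = [(i, j) for i in range(n) for j in range(i + 1, n + 1)]
    for (a, b), (c, d) in combinations(intervals, 2):
        lo, hi = max(a, c), min(b, d)
        if lo < hi:                                   # X and Y overlap
            union, inter = (min(a, c), max(b, d)), (lo, hi)
            if S[(a, b)] + S[(c, d)] < S[union] + S[inter] - 1e-12:
                return False
    return True

# Example: entropies proportional to interval length always satisfy SSA.
n = 5
S = {(i, j): float(j - i) for i in range(n) for j in range(i + 1, n + 1)}
print(satisfies_ssa(S, n))   # True
```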
W. Gong and S. Aaronson. Learning Distributions over Quantum Measurement Outcomes, in International Conference on Machine Learning (ICML) 2023, arXiv:2209.03007.
Shadow tomography for quantum states provides a sample-efficient approach for predicting the properties of quantum systems when the properties are restricted to expectation values of 2-outcome POVMs. However, these shadow tomography procedures yield poor bounds if there are more than 2 outcomes per measurement. In this paper, we consider a general problem of learning properties from unknown quantum states: given an unknown d-dimensional quantum state ρ and M unknown quantum measurements M_1,...,M_M with K ≥ 2 outcomes, estimate the probability distribution of applying M_i to ρ to within total variation distance ε. Compared to the special case when K = 2, we need to learn unknown distributions instead of values. We develop an online shadow tomography procedure that solves this problem with high success probability requiring ~O(K log^2 M log d / ε^4) copies of ρ. We further prove an information-theoretic lower bound showing that at least Ω(min{d^2, K + log M}/ε^2) copies of ρ are required to solve this problem with high success probability. Our shadow tomography procedure requires sample complexity with only logarithmic dependence on M and d and is sample-optimal in its dependence on K.
S. Aaronson, D. Ingram, and W. Kretschmer. The Acrobatics of BQP, in Proceedings of Computational Complexity Conference (CCC) 2022. ECCC TR21-164. Won the CCC Best Paper Award.
We show that, in the black-box setting, the behavior of quantum polynomial-time (BQP) can be remarkably decoupled from that of classical complexity classes like NP. Specifically:
- There exists an oracle relative to which NP^BQP ⊄ BQP^PH, resolving a 2005 problem of Fortnow. Interpreted another way, we show that AC^0 circuits cannot perform useful homomorphic encryption on instances of the Forrelation problem. As a corollary, there exists an oracle relative to which P = NP but BQP ≠ QCMA.
- Conversely, there exists an oracle relative to which BQP^NP ⊄ PH^BQP.
- Relative to a random oracle, PP = PostBQP is not contained in the "QMA hierarchy" QMA^{QMA^{QMA^{...}}}, and more generally PP ⊄ (MIP*)^{(MIP*)^{(MIP*)^{...}}} (!), despite the fact that MIP* = RE in the unrelativized world. This result shows that there is no black-box quantum analogue of Stockmeyer's approximate counting algorithm.
- Relative to a random oracle, Σ_{k+1} ⊄ BQP^{Σ_k} for every k.
- There exists an oracle relative to which BQP = P^{#P} and yet PH is infinite. (By contrast, if NP ⊆ BPP, then PH collapses relative to all oracles.)
- There exists an oracle relative to which P = NP ≠ BQP = P^{#P}.
To achieve these results, we build on the 2018 achievement by Raz and Tal of an oracle relative to which BQP⊄PH, and associated results about the Forrelation problem. We also introduce new tools that might be of independent interest. These include a "quantum-aware" version of the random restriction method, a concentration theorem for the block sensitivity of AC0 circuits, and a (provable) analogue of the Aaronson-Ambainis Conjecture for sparse oracles.
S. Aaronson and S. Grewal. Efficient Learning of Non-Interacting Fermion Distributions, in TQC'2023, arXiv:2102.10458.
We give an efficient algorithm that recovers the distribution of a non-interacting fermion state over the standard basis, given measurements in additional bases. For a system of n non-interacting fermions and m modes, we show that O(m^2 n^2 log(1/δ)/ε^2) samples and O(m^3 n^2 log(1/δ)/ε^2) time are sufficient to learn the original distribution to total variation distance ε with probability 1−δ. Our algorithm empirically estimates one-mode correlations in O(m) different measurement bases and uses them to reconstruct a succinct description of the entire distribution efficiently.
S. Aaronson, Y. Atia, and L. Susskind. On the Hardness of Detecting Macroscopic Superpositions, 2020. arXiv:2009.07450.
When is decoherence "effectively irreversible"? Here we examine this central question of quantum foundations using the tools of quantum computational complexity. We prove that, if one had a quantum circuit to determine if a system was in an equal superposition of two orthogonal states (for example, the |Alive〉 and |Dead〉 states of Schrödinger's cat), then with only a slightly larger circuit, one could also swap the two states (e.g., bring a dead cat back to life). In other words, observing interference between the |Alive〉 and |Dead〉 states is a "necromancy-hard" problem, technologically infeasible in any world where death is permanent. As for the converse statement (i.e., ability to swap implies ability to detect interference), we show that it holds modulo a single exception, involving unitaries that (for example) map |Alive〉 to |Dead〉 but |Dead〉 to -|Alive〉. We also show that these statements are robust---i.e., even a partial ability to observe interference implies partial swapping ability, and vice versa. Finally, without relying on any unproved complexity conjectures, we show that all of these results are quantitatively tight. Our results have possible implications for the state dependence of observables in quantum gravity, the subject that originally motivated this study.
S. Aaronson, S. Ben-David, R. Kothari, S. Rao, and A. Tal. Degree vs. Approximate Degree and Quantum Implications of Huang's Sensitivity Theorem, in Proceedings of ACM STOC'2021. arXiv:2010.12629.
Based on the recent breakthrough of Huang (2019), we show that for any total Boolean function f,
- deg(f) = O(~deg(f)^2): The degree of f is at most quadratic in the approximate degree of f. This is optimal as witnessed by the OR function.
- D(f) = O(Q(f)^4): The deterministic query complexity of f is at most quartic in the quantum query complexity of f. This matches the known separation (up to log factors) due to Ambainis, Balodis, Belovs, Lee, Santha, and Smotrovs (2017).
We apply these results to resolve the quantum analogue of the Aanderaa-Karp-Rosenberg conjecture. We show that if f is a nontrivial monotone graph property of an n-vertex graph specified by its adjacency matrix, then Q(f)=Ω(n), which is also optimal. We also show that the approximate degree of any read-once formula on n variables is Θ(√n).
S. Aaronson, J. Liu, Q. Liu, M. Zhandry, and R. Zhang. New Approaches for Quantum Copy-Protection, in Proceedings of CRYPTO'2021. arXiv:2004.09674.
Quantum copy protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy protection would be immensely useful, but unfortunately little is known about how to achieve it in general. In this work, we make progress on this goal, by giving the following results:
- We show how to copy protect any program that cannot be learned from its input/output behavior, relative to a classical oracle. This improves on Aaronson [CCC'09], which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes, we obtain a heuristic construction of copy protection.
- We show, roughly, that any program which can be watermarked can be copy detected, a weaker version of copy protection that does not prevent copying but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public-key quantum money. Our construction is general, applicable to many recent watermarking schemes.
S. Aaronson, N.-H. Chia, H.-H. Lin, C. Wang, and R. Zhang. On the Quantum Complexity of Closest Pair and Related Problems, in Proceedings of CCC'2020, p. 16:1-16:43, 2020. arXiv:1911.01973.
The closest pair problem is a fundamental problem of computational geometry: given a set of n points in a d-dimensional space, find a pair with the smallest distance. A classical algorithm taught in introductory courses solves this problem in O(n log n) time in constant dimensions (i.e., when d = O(1)). This paper asks and answers the question of the problem's quantum time complexity. Specifically, we give an ~O(n^{2/3}) algorithm in constant dimensions, which is optimal up to a polylogarithmic factor by the lower bound on the quantum query complexity of element distinctness. The key to our algorithm is an efficient history-independent data structure that supports quantum interference.
In polylog(n) dimensions, no known quantum algorithms perform better than brute-force search, with a quadratic speedup provided by Grover's algorithm. To give evidence that the quadratic speedup is nearly optimal, we initiate the study of quantum fine-grained complexity and introduce the Quantum Strong Exponential Time Hypothesis (QSETH), which is based on the assumption that Grover's algorithm is optimal for CNF-SAT when the clause width is large. We show that the naïve Grover approach to closest pair in higher dimensions is optimal up to an n^{o(1)} factor unless QSETH is false. We also study the bichromatic closest pair problem and the orthogonal vectors problem, with broadly similar results.
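For reference, the classical O(n log n) baseline mentioned above is the textbook divide-and-conquer (our sketch; as written it re-sorts the strip at every level, giving O(n log^2 n), and the standard refinement that maintains y-order recovers O(n log n)):

```python
import math

def closest_pair(points):
    """Textbook divide-and-conquer closest pair in 2D."""
    def solve(P):                                  # P sorted by x
        n = len(P)
        if n <= 3:
            return min(math.dist(p, q) for i, p in enumerate(P) for q in P[i+1:])
        mid = n // 2
        x_mid = P[mid][0]
        d = min(solve(P[:mid]), solve(P[mid:]))
        # Only points within d of the dividing line can do better; in
        # y-order, each point needs comparing against O(1) successors.
        strip = sorted((p for p in P if abs(p[0] - x_mid) < d),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:
                if q[1] - p[1] >= d:
                    break
                d = min(d, math.dist(p, q))
        return d
    return solve(sorted(points))

print(closest_pair([(0, 0), (5, 4), (1, 1), (9, 9), (1.2, 0.9)]))  # ~0.2236
```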
S. Aaronson and S. Gunn. On the Classical Hardness of Spoofing Linear Cross-Entropy Benchmarking [PDF], Theory of Computing 16(11):1--8, 2020. arXiv:1910.12085.
Recently, Google announced the first demonstration of quantum computational supremacy with a programmable superconducting processor. Their demonstration is based on collecting samples from the output distribution of a noisy random quantum circuit, then applying a statistical test to those samples called Linear Cross-Entropy Benchmarking (Linear XEB). This raises a theoretical question: how hard is it for a classical computer to spoof the results of the Linear XEB test? In this short note, we adapt an analysis of Aaronson and Chen [2017] to prove a conditional hardness result for Linear XEB spoofing. Specifically, we show that the problem is classically hard, assuming that there is no efficient classical algorithm that, given a random n-qubit quantum circuit C, estimates the probability of C outputting a specific output string, say 0^n, with variance even slightly better than that of the trivial estimator that always estimates 1/2^n. Our result automatically encompasses the case of noisy circuits.
S. Aaronson and P. Rall. Quantum Approximate Counting, Simplified, in Proceedings of SOSA@SODA2020, pp. 24-32, 2020. arXiv:1908.10846.
In 1998, Brassard, Høyer, Mosca, and Tapp (BHMT) gave a quantum algorithm for approximate counting. Given a list of N items, K of them marked, their algorithm estimates K to within relative error ε by making only O(√(N/K)/ε) queries. Although this speedup is of "Grover" type, the BHMT algorithm has the curious feature of relying on the Quantum Fourier Transform (QFT), more commonly associated with Shor's algorithm. Is this necessary? This paper presents a simplified algorithm, which we prove achieves the same query complexity using Grover iterations only. We also generalize this to a QFT-free algorithm for amplitude estimation. Related approaches to approximate counting were sketched previously by Grover, Abrams and Williams, Suzuki et al., and Wie (the latter two as we were writing this paper), but in all cases without rigorous analysis.
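A toy simulation (ours, not the paper's algorithm) of the one fact every Grover-based counting algorithm exploits: after t iterations, measurement finds a marked item with probability sin^2((2t+1)θ) where sin^2(θ) = K/N, so sampling at well-chosen values of t recovers K:

```python
import math
import random

N, K = 1_000_000, 37                    # N items, K of them marked
theta = math.asin(math.sqrt(K / N))     # Grover angle: sin^2(theta) = K/N

def grover_measure(t, shots):
    """Simulate measuring after t Grover iterations: a marked item is
    observed with probability sin^2((2t+1) * theta)."""
    p = math.sin((2 * t + 1) * theta) ** 2
    return sum(random.random() < p for _ in range(shots)) / shots

# Crude single-t estimate; note (2t+1)*theta must stay below pi/2 for this
# naive inversion -- scheduling many values of t to localize theta without
# that restriction is precisely what the real algorithms do.
t, shots = 100, 2000
p_hat = grover_measure(t, shots)
theta_hat = math.asin(math.sqrt(p_hat)) / (2 * t + 1)
print("estimated K:", round(N * math.sin(theta_hat) ** 2, 1), "| true K:", K)
```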
S. Aaronson and G. N. Rothblum. Gentle Measurement of Quantum States and Differential Privacy [PDF], in Proceedings of ACM STOC'2019. ECCC TR19-060.
In differential privacy (DP), we want to query a database about n users, in a way that "leaks at most ε about any individual user," even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that "damages the states by at most α," even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. This paper proves a new and general connection between the two subjects. Specifically, we show that on products of n quantum states, any measurement that is α-gentle for small α is also O(α)-DP, and any product measurement that is ε-DP is also O(ε√n)-gentle.
Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown d-dimensional quantum state ρ, as well as known two-outcome measurements E_1,...,E_m, shadow tomography asks us to estimate Pr[E_i accepts ρ], for every i ∈ [m], by measuring few copies of ρ. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using O((log m)^2 (log d)^2) copies of ρ, compared to Aaronson's previous bound of ~O((log m)^4 (log d)). Our protocol has the advantages of being online (that is, the E_i's are processed one at a time), gentle, and conceptually simple.
Other applications of our connection include new lower bounds for shadow tomography from lower bounds on DP, and a result on the safe use of estimation algorithms as subroutines inside larger quantum algorithms.
S. Aaronson, D. Grier, and L. Schaeffer. A Quantum Query Complexity Trichotomy for Regular Languages, in Proceedings of IEEE FOCS'2019. arXiv:1812.04219.
We present a trichotomy theorem for the quantum query complexity of regular languages. Every regular language has quantum query complexity Θ(1), ~Θ(√n), or Θ(n). The extreme uniformity of regular languages prevents them from taking any other asymptotic complexity. This is in contrast to even the context-free languages, which we show can have query complexity Θ(n^c) for all computable c in [1/2,1]. Our result implies an equivalent trichotomy for the approximate degree of regular languages, and a dichotomy---either Θ(1) or Θ(n)---for sensitivity, block sensitivity, certificate complexity, deterministic query complexity, and randomized query complexity.
The heart of the classification theorem is an explicit quantum algorithm which decides membership in any star-free language in ~O(√n) time. This well-studied family of the regular languages admits many interesting characterizations, for instance, as those languages expressible as sentences in first-order logic over the natural numbers with the less-than relation. Therefore, not only do the star-free languages capture functions such as OR, they can also express functions such as "there exists a pair of 2's such that everything between them is a 0."
Thus, we view the algorithm for star-free languages as a nontrivial generalization of Grover's algorithm which extends the quantum quadratic speedup to a much wider range of string-processing algorithms than was previously known. We show applications to problems such as evaluating dynamic constant-depth Boolean formulas and recognizing balanced parentheses nested constantly many levels deep.
S. Aaronson, R. Kothari, W. Kretschmer, and J. Thaler. Quantum Lower Bounds for Approximate Counting Via Laurent Polynomials, in Proceedings of CCC'2020, pp. 7:1-7:47, 2020. arXiv:1904.08914.
This paper proves new limitations on the power of quantum computers to solve approximate counting---that is, multiplicatively estimating the size of a nonempty set S ⊆ [N]. Given only a membership oracle for S, it is well-known that approximate counting takes Θ(√(N/|S|)) quantum queries. But what if a quantum algorithm is also given "QSamples"---i.e., copies of the state |S〉 = Σ_{i∈S} |i〉---or even the ability to apply reflections about |S〉? Our first main result is that, even then, the algorithm needs either Θ(√(N/|S|)) queries or else Θ(min{|S|^{1/3}, √(N/|S|)}) reflections or samples. We also give matching upper bounds.
We prove the lower bound using a novel generalization of the polynomial method of Beals et al. to Laurent polynomials, which can have negative exponents. We lower-bound Laurent polynomial degree using two methods: a new "explosion argument" that pits the positive- and negative-degree parts of the polynomial against each other, and a new formulation of the dual polynomials method.
Our second main result rules out the possibility of a black-box Quantum Merlin-Arthur (or QMA) protocol for proving that a set is large. More precisely, we show that, even if Arthur can make T quantum queries to the set S ⊆ [N], and also receives an m-qubit quantum witness from Merlin in support of S being large, we have Tm = Ω(min{|S|, √(N/|S|)}). This resolves the open problem of giving an oracle separation between SBP, the complexity class that captures approximate counting, and QMA.
Note that QMA is "stronger" than the queries+QSamples model in that Merlin's witness can be anything, rather than just the specific state |S〉, but also "weaker" in that Merlin's witness cannot be trusted. Intriguingly, Laurent polynomials also play a crucial role in our QMA lower bound, but in a completely different manner than in the queries+QSamples lower bound. This suggests that the "Laurent polynomial method" might be broadly useful in complexity theory.
S. Aaronson. Quantum Lower Bound for Approximate Counting Via Laurent Polynomials [PDF], 2018. arXiv:1808.02420.
S. Aaronson. PDQP/qpoly=ALL [PDF], Quantum Information and Computation, 2018. arXiv:1805.08577.
We show that combining two different hypothetical enhancements to quantum computation---namely, quantum advice and non-collapsing measurements---would let a quantum computer solve any decision problem whatsoever in polynomial time, even though neither enhancement yields extravagant power by itself. This complements a related result due to Raz. The proof uses locally decodable codes.
S. Aaronson, X. Chen, E. Hazan, and A. Nayak. Online Learning of Quantum States, in Proceedings of NIPS'2018. arXiv:1802.09025.
Suppose we have many copies of an unknown n-qubit state ρ. We measure some copies of ρ using a known two-outcome measurement E_1, then other copies using a measurement E_2, and so on. At each stage t, we generate a current hypothesis σ_t about the state ρ, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that |Tr(E_i σ_t) − Tr(E_i ρ)|, the error in our prediction for the next measurement, is at least ε at most O(n/ε^2) times. Even in the "non-realizable" setting---where there could be arbitrary noise in the measurement outcomes---we show how to output hypothesis states that do significantly worse than the best possible states at most O(√(Tn)) times on the first T measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states, to the online and regret-minimization settings. We give three different ways to prove our results---using convex optimization, quantum postselection, and sequential fat-shattering dimension---which have different advantages in terms of parameters and portability.
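As a hedged sketch of the convex-optimization route (ours; the paper's actual algorithm and parameters differ), a matrix-multiplicative-weights learner that keeps σ_{t+1} ∝ exp(−η Σ_s ∇loss_s) and updates after each measurement:

```python
import numpy as np
from scipy.linalg import expm

def mmw_state(grads, eta=0.2):
    """Hypothesis sigma proportional to exp(-eta * sum of loss gradients)."""
    rho = expm(-eta * sum(grads))
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
dim = 2 ** 3                                       # 3 qubits, toy size
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)
true_rho = np.outer(psi, psi.conj())               # the unknown state

grads = [np.zeros((dim, dim), dtype=complex)]
sigma, errors = np.eye(dim) / dim, []
for t in range(100):
    M = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    H = M @ M.conj().T
    E = H / np.linalg.eigvalsh(H)[-1]              # effect with 0 <= E <= I
    b = np.trace(E @ true_rho).real                # observed statistic
    pred = np.trace(E @ sigma).real                # prediction Tr(E sigma_t)
    errors.append(abs(pred - b))
    grads.append(2 * (pred - b) * E)               # gradient of (pred - b)^2
    sigma = mmw_state(grads)
print("mean error, rounds 1-20:  ", round(float(np.mean(errors[:20])), 4))
print("mean error, rounds 81-100:", round(float(np.mean(errors[-20:])), 4))
```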
A. Rocchetto, S. Aaronson, S. Severini, G. Carvacho, D. Poderini, I. Agresti, M. Bentivegna, and F. Sciarrino. Experimental Learning of Quantum States, Science Advances 5(3), 2019. arXiv:1712.00127.
The number of parameters describing a quantum state is well known to grow exponentially with the number of particles. This scaling clearly limits our ability to do tomography to systems with no more than a few qubits and has been used to argue against the universal validity of quantum mechanics itself. However, from a computational learning theory perspective, it can be shown that, in a probabilistic setting, quantum states can be approximately learned using only a linear number of measurements. Here we experimentally demonstrate this linear scaling in optical systems with up to 6 qubits. Our results highlight the power of computational learning theory to investigate quantum information, provide the first experimental demonstration that quantum states can be "probably approximately learned" with access to a number of copies of the state that scales linearly with the number of qubits, and pave the way to probing quantum states at new, larger scales.
S. Aaronson. Shadow Tomography of Quantum States [PDF], in Proceedings of STOC'2018. ECCC TR17-164, arXiv:1711.01053.
We introduce the problem of shadow tomography: given an unknown D-dimensional quantum mixed state ρ, as well as known two-outcome measurements E_1,...,E_M, estimate the probability that E_i accepts ρ, to within additive error ε, for each of the M measurements. How many copies of ρ are needed to achieve this, with high probability? Surprisingly, we give a procedure that solves the problem by measuring only ~O(ε^{-5} log^4 M log D) copies. This means, for example, that we can learn the behavior of an arbitrary n-qubit state, on all accepting/rejecting circuits of some fixed polynomial size, by measuring only n^{O(1)} copies of the state. This resolves an open problem of the author, which arose from his work on private-key quantum money schemes, but which also has applications to quantum copy-protected software, quantum advice, and quantum one-way communication. Recently, building on this work, Brandão et al. have given a different approach to shadow tomography using semidefinite programming, which achieves a savings in computation time.
S. Aaronson, A. Cojocaru, A. Gheorghiu, and E. Kashefi. Complexity-Theoretic Limitations on Blind Delegated Quantum Computation, in Proceedings of ICALP'2019. Earlier version at arXiv:1704.08482.
Suppose a large scale quantum computer becomes available over the Internet. Could we delegate universal quantum computations to this server, using only classical communication between client and server, in a way that is information-theoretically blind (i.e., the server learns nothing about the input apart from its size, with no cryptographic assumptions required)? In this paper we give strong indications that the answer is no. This contrasts with the situation where quantum communication between client and server is allowed --- where we know that such information-theoretically blind quantum computation is possible. It also contrasts with the case where cryptographic assumptions are allowed: there again, it is now known that there are quantum analogues of fully homomorphic encryption. In more detail, we observe that, if there exist information-theoretically secure classical schemes for performing universal quantum computations on encrypted data, then we get unlikely containments between complexity classes, such as BQP ⊂ NP/poly. Moreover, we prove that having such schemes for delegating quantum sampling problems, such as Boson Sampling, would lead to a collapse of the polynomial hierarchy. We then consider encryption schemes which allow one round of quantum communication and polynomially many rounds of classical communication, yielding a generalization of blind quantum computation. We give a complexity theoretic upper bound, namely QCMA/qpoly ∩ coQCMA/qpoly, on the types of functions that admit such a scheme. This upper bound then lets us show that, under plausible complexity assumptions, such a protocol is no more useful than classical schemes for delegating NP-hard problems to the server. Lastly, we comment on the implications of these results for the prospect of verifying a quantum computation through classical interaction with the server.
S. Aaronson and L. Chen. Complexity-Theoretic Foundations of Quantum Supremacy Experiments [PDF], in Proceedings of CCC'2017, pp. 1-67. ECCC TR16-200, arXiv:1612.05903.
In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible.
First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there's a natural average-case hardness assumption, which has nothing to do with sampling, yet implies that no polynomial-time classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work---for example, on BosonSampling and IQP---the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled.
Second, in an attempt to refute our hardness assumption, we give a new algorithm, inspired by Savitch's Theorem, for simulating a general quantum circuit with n qubits and depth d in polynomial space and d^{O(n)} time. We then discuss why this and other known algorithms fail to refute our assumption.
Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem---of the form "if approximate quantum sampling is classically easy, then the polynomial hierarchy collapses"---must be non-relativizing. This sharply contrasts with the situation for exact sampling.
Fourth, refuting a conjecture by Aaronson and Ambainis, we show that there is a sampling task, namely Fourier Sampling, with a 1 versus n separation between its quantum and classical query complexities.
Fifth, in search of a "happy medium" between black-box and non-black-box arguments, we study quantum supremacy relative to oracles in P/poly. Previous work implies that, if one-way functions exist, then quantum supremacy is possible relative to such oracles. We show, conversely, that some computational assumption is needed: if SampBPP=SampBQP and NP⊆BPP, then quantum supremacy is impossible relative to oracles with small circuits.
S. Aaronson, A. Bouland, G. Kuperberg, and S. Mehraban. The Computational Complexity of Ball Permutations, in Proceedings of ACM STOC'2017, pp. 317-327. arXiv:1610.06646.
Inspired by connections to two-dimensional quantum theory, we define several models of computation based on permuting distinguishable particles (which we call balls), and characterize their computational complexity. In the quantum setting, we find that the computational power of this model depends on the initial input states. More precisely, with a standard basis input state, we show how to approximate the amplitudes of this model within additive error using the model DQC1 (the class of problems solvable with one clean qubit), providing evidence that the model in this case is weaker than universal quantum computing. However, for specific choices of input states, the model is shown to be universal for BQP in an encoded sense. We use representation theory of the symmetric group to partially classify the computational complexity of this model for arbitrary input states. Interestingly, we find some input states which yield a model intermediate between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an integrable scattering problem in 1+1 dimensions. We show it is universal under postselection, if we allow intermediate destructive measurements and specific input states. Therefore, the existence of any classical procedure to sample from the output distribution of this model within multiplicative error implies a collapse of the polynomial hierarchy to its third level. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP. Moreover, we find that a nondeterministic version of this model is NP-complete.
S. Aaronson and S. Ben-David. Sculpting Quantum Speedups, in Proceedings of CCC'2016. ECCC TR15-203, arXiv:1512.04016.
Given a problem which is intractable for both quantum and classical algorithms, can we find a sub-problem for which quantum algorithms provide an exponential advantage? We refer to this problem as the "sculpting problem." In this work, we give a full characterization of sculptable functions in the query complexity setting. We show that a total function f can be restricted to a promise P such that Q(f|P) = O(polylog N) and R(f|P) = N^{Ω(1)}, if and only if f has a large number of inputs with large certificate complexity. The proof uses some interesting techniques: for one direction, we introduce new relationships between randomized and quantum query complexity in various settings, and for the other direction, we use a recent result from communication complexity due to Klartag and Regev. We also characterize sculpting for other query complexity measures, such as R(f) vs. R_0(f) and R_0(f) vs. D(f).
Along the way, we prove some new relationships for quantum query complexity: for example, a nearly quadratic relationship between Q(f) and D(f) whenever the promise of f is small. This contrasts with the recent super-quadratic query complexity separations, showing that the maximum gap between classical and quantum query complexities is indeed quadratic in various settings - just not for total functions!
Lastly, we investigate sculpting in the Turing machine model. We show that if there is any BPP-bi-immune language in BQP, then every language outside BPP can be restricted to a promise which places it in PromiseBQP but not in PromiseBPP. Under a weaker assumption, that some problem in BQP is hard on average for P/poly, we show that every paddable language outside BPP is sculptable in this way.
S. Aaronson, A. Ambainis, J. Iraids, M. Kokainis, and J. Smotrovs. Polynomials, Quantum Query Complexity, and Grothendieck's Inequality, in Proceedings of CCC'2016. arXiv:1511.08682.
We show an equivalence between 1-query quantum algorithms and representations by degree-2 polynomials. Namely, a partial Boolean function f is computable by a 1-query quantum algorithm with error bounded by ε < 1/2 iff f can be approximated by a degree-2 polynomial with error bounded by ε′ < 1/2. This result holds for two different notions of approximation by a polynomial: the standard definition of Nisan and Szegedy, and the approximation by block-multilinear polynomials recently introduced by Aaronson and Ambainis. We also show two results for polynomials of higher degree. First, there is a total Boolean function which requires ~Ω(n) quantum queries but can be represented by a block-multilinear polynomial of degree ~O(√n). Thus, in the general case (for an arbitrary number of queries), block-multilinear polynomials are not equivalent to quantum algorithms. Second, for any constant degree k, the two notions of approximation by a polynomial (the standard and the block-multilinear) are equivalent. As a consequence, we solve an open problem from Aaronson and Ambainis, showing that one can estimate the value of any bounded degree-k polynomial p : {0,1}^n → [-1,1] with O(n^{1-1/(2k)}) queries.
S. Aaronson, S. Ben-David, and R. Kothari. Separations in Query Complexity Using Cheat Sheets, in Proceedings of ACM STOC'2016. ECCC TR15-175, arXiv:1511.01937.
We show a power 2.5 separation between bounded-error randomized and quantum query complexity for a total Boolean function, refuting the widely believed conjecture that the best such separation could only be quadratic (from Grover's algorithm). We also present a total function with a power 4 separation between quantum query complexity and approximate polynomial degree, showing severe limitations on the power of the polynomial method. Finally, we exhibit a total function with a quadratic gap between quantum query complexity and certificate complexity, which is optimal (up to log factors). These separations are shown using a new, general technique that we call the cheat sheet technique. The technique is based on a generic transformation that converts any (possibly partial) function into a new total function with desirable properties for showing separations. The framework also allows many known separations, including some recent breakthrough results of Ambainis et al., to be shown in a unified manner.
S. Aaronson and D. J. Brod. BosonSampling with Lost Photons, Phys. Rev. A 93:012335, 2016. arXiv:1510.05245.
BosonSampling is an intermediate model of quantum computation where linear-optical networks are used to solve sampling problems expected to be hard for classical computers. Since these devices are not expected to be universal for quantum computation, it remains an open question whether any error-correction techniques can be applied to them, and thus it is important to investigate how robust the model is under natural experimental imperfections, such as losses and imperfect control of parameters. Here we investigate the complexity of BosonSampling under photon losses---more specifically, the case where an unknown subset of the photons are randomly lost at the sources. We show that, if k out of n photons are lost, then we cannot sample classically from a distribution that is 1/n^{Θ(k)}-close (in total variation distance) to the ideal distribution, unless a BPP^NP machine can estimate the permanents of Gaussian matrices in n^{O(k)} time. In particular, if k is constant, this implies that simulating lossy BosonSampling is hard for a classical computer, under exactly the same complexity assumption used for the original lossless case.
Z. Liu, C. Perry, Y. Zhu, D. Koh, and S. Aaronson. Doubly Infinite Separation of Quantum Information and Communication, Phys. Rev. A 93:012347, 2016. arXiv:1507.03546.
We prove the existence of (one-way) communication tasks with a vanishing vs. diverging type of asymptotic gap, which we call "doubly infinite", between quantum information and communication complexities. We do so by showing the following: As the size of the task n increases, the quantum communication complexity of a certain regime of the exclusion game, recently introduced by Perry, Jain, and Oppenheim, scales at least logarithmically in n, while the information cost of a winning quantum strategy may tend to zero. The logarithmic lower bound on the quantum communication complexity is shown to hold even if we allow a small probability of error, although the n-qubit quantum message of the zero-error strategy can then be compressed polynomially. We leave open the problems of whether the quantum communication complexity of the specified regime scales polynomially in n, and whether the gap between quantum and classical communication complexities can be superexponential beyond this regime.
S. Aaronson and A. Ambainis. Forrelation: A Problem that Optimally Separates Quantum from Classical Computing [PDF], in Proceedings of ACM STOC'2015, pp. 307-316. Conference version [PDF]. arXiv:1411.5729, ECCC TR14-155.
We achieve essentially the largest possible separation between quantum and classical query complexities. We do so using a property-testing problem called Forrelation, where one needs to decide whether one Boolean function is highly correlated with the Fourier transform of a second function. This problem can be solved using 1 quantum query, yet we show that any randomized algorithm needs Ω(√N / log N) queries (improving an ~N^{1/4} lower bound of Aaronson). Conversely, we show that this 1 versus ~√N separation is optimal: indeed, any t-query quantum algorithm whatsoever can be simulated by an O(N^{1-1/(2t)})-query randomized algorithm. Thus, resolving an open question of Buhrman et al. from 2002, there is no partial Boolean function whose quantum query complexity is constant and whose randomized query complexity is linear. We conjecture that a natural generalization of Forrelation achieves the optimal t versus ~N^{1-1/(2t)} separation for all t. As a bonus, we show that this generalization is BQP-complete. This yields what's arguably the simplest BQP-complete problem yet known, and gives a second sense in which Forrelation "captures the maximum power of quantum computation."
Update: An error was found in our O(N^{1-1/(2t)})-query simulation of t-query quantum algorithms. We currently know how to recover the result only in the case t = 1. For more see this ECCC preprint and this blog post.
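The Forrelation quantity itself is easy to state: Φ_{f,g} = 2^{-3n/2} Σ_{x,y} f(x) (−1)^{x·y} g(y). A brute-force evaluator (ours; exponential-time by design, which is exactly what the 1-query quantum algorithm avoids):

```python
import itertools

def forrelation(f, g, n):
    """Phi_{f,g} = 2^{-3n/2} * sum over x, y of f(x) * (-1)^{x.y} * g(y):
    the correlation between g and the Boolean Fourier transform of f."""
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        for y in itertools.product((0, 1), repeat=n):
            dot = sum(a * b for a, b in zip(x, y))
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (1.5 * n)

# A maximally forrelated pair: a bent function paired with itself.
f = lambda x: (-1) ** (x[0] * x[1])
print(forrelation(f, f, 2))      # -> 1.0
# A typical pair instead has |Phi| on the order of 2^{-n/2}:
g = lambda y: (-1) ** y[0]
print(forrelation(f, g, 2))      # -> 0.5 = 2^{-n/2} for n = 2
```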
S. Aaronson, A. Bouland, J. Fitzsimons, and M. Lee. The Space "Just Above" BQP, in Proceedings of ACM ITCS (Innovations in Theoretical Computer Science) 2016, pp. 271-280. arXiv:1412.6507, ECCC TR14-181.
We explore the space "just above" BQP by defining a complexity class PDQP (Product Dynamical Quantum Polynomial time) which is larger than BQP but does not contain NP relative to an oracle. The class is defined by imagining that quantum computers can perform measurements that do not collapse the wavefunction. This (non-physical) model of computation can efficiently solve problems such as Graph Isomorphism and Approximate Shortest Vector which are believed to be intractable for quantum computers. Furthermore, it can search an unstructured N-element list in O(N^{1/3}) time, but no faster than Ω(N^{1/4}), and hence cannot solve NP-hard problems in a black box manner. In short, this model of computation is more powerful than standard quantum computation, but only slightly so.
Our work is inspired by previous work of Aaronson on the power of sampling the histories of hidden variables. However Aaronson's work contains an error in its proof of the lower bound for search, and hence it is unclear whether or not his model allows for search in logarithmic time. Our work can be viewed as a conceptual simplification of Aaronson's approach, with a provable polynomial lower bound for search.
R. Gross and S. Aaronson. Bounding the Seed Length of Miller and Shi's Unbounded Randomness Expansion Protocol, 2014. arXiv:1410.8019.
Recent randomness expansion protocols have been proposed which are able to generate an unbounded amount of randomness from a finite amount of truly random initial seed. One such protocol, given by Miller and Shi, uses a pair of non-signaling untrusted quantum mechanical devices. These play XOR games with inputs given by the user in order to generate an output. Here we present an analysis of the required seed size, giving explicit upper bounds for the number of initial random bits needed to jump-start the protocol. The bits output from such a protocol are ε-close to uniform even against quantum adversaries. Our analysis yields that for a statistical distance of ε = 10^{-1} and ε = 10^{-6} from uniformity, the number of required bits is smaller than 225,000 and 715,000, respectively; in general it grows as O(log(1/ε)).
A. Nayebi, S. Aaronson, A. Belovs, and L. Trevisan. Quantum Lower Bound for Inverting a Permutation with Advice, Quantum Information and Computation 15(11-12):901-913, 2015. ECCC TR14-109, arXiv:1408.3193.
Given a random permutation f : [N] → [N] as a black box and y ∈ [N], we want to output f^{-1}(y). Supplementary to our input, we are given classical advice in the form of a pre-computed data structure; this advice can depend on the permutation but not on the input y. Classically, there is a data structure of size ~O(S) and an algorithm that, with the help of the data structure, given f(x), can invert f in time ~O(T), for every choice of parameters S, T such that ST ≥ N. We prove a quantum lower bound of T^2 S ≥ ~Ω(εN) for quantum algorithms that invert a random permutation f on an ε fraction of inputs, where T is the number of queries to f and S is the amount of advice. This answers an open question of De et al.
We also give an Ω(√(N/m)) quantum lower bound for the simpler but related Yao's box problem, which is the problem of recovering a bit x_j, given the ability to query an N-bit string x at any index except the j-th, and also given m bits of advice that depend on x but not on j.
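The classical upper bound cited above has a folklore, Hellman-style realization (our illustration, not the paper's contribution): decompose the permutation into cycles and store a landmark every T steps, each pointing T steps back; inversion then costs at most about 2T forward evaluations of f.

```python
import random

def build_advice(f, N, T):
    """Along each cycle of f longer than T, store every T-th point together
    with the point T steps behind it. About N/T entries in total."""
    advice, seen = {}, [False] * N
    for start in range(N):
        if seen[start]:
            continue
        cycle, x = [], start
        while not seen[x]:
            seen[x] = True
            cycle.append(x)
            x = f[x]
        L = len(cycle)
        if L > T:
            for i in range(0, L, T):
                advice[cycle[i]] = cycle[(i - T) % L]
    return advice

def invert(f, advice, y, T):
    """Find x with f(x) = y using at most ~2T forward evaluations of f."""
    z = y
    for _ in range(T + 1):
        if f[z] == y:                 # short cycles resolve with no advice
            return z
        if z in advice:               # jump T steps back, then walk forward
            z = advice[z]
            while f[z] != y:
                z = f[z]
            return z
        z = f[z]

N, T = 10_000, 100
f = list(range(N))
random.shuffle(f)
advice = build_advice(f, N, T)
y = 1234
print(f[invert(f, advice, y, T)] == y, "| advice entries:", len(advice))
```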
J. Barry, D. Barry, and S. Aaronson. Quantum POMDPs, Physical Review A 90:032311, 2014. arXiv:1406.2858.
We present quantum observable Markov decision processes (QOMDPs), the quantum analogues of partially observable Markov decision processes (POMDPs). In a QOMDP, an agent's state is represented as a quantum state and the agent can choose a superoperator to apply. This is similar to the POMDP belief state, which is a probability distribution over world states and evolves via a stochastic matrix. We show that the existence of a policy of at least a certain value has the same complexity for QOMDPs and POMDPs in the polynomial and infinite horizon cases. However, we also prove that the existence of a policy that can reach a goal state is decidable for goal POMDPs and undecidable for goal QOMDPs.
A. Bouland and S. Aaronson. Any Beamsplitter Generates Universal Quantum Linear Optics [PDF], Physical Review A 89:062316, 2014. arXiv:1310.6718, ECCC TR13-147.
In 1994, Reck et al. showed how to realize any linear-optical unitary transformation using a product of beamsplitters and phaseshifters. Here we show that any single beamsplitter that nontrivially mixes two modes also densely generates the set of m × m unitary transformations (or orthogonal transformations, in the real case) on m ≥ 3 modes. (We prove the same result for any 2-mode real optical gate, and for any 2-mode optical gate combined with a generic phaseshifter.) Experimentally, this means that one does not need tunable beamsplitters or phaseshifters for universality: any nontrivial beamsplitter is universal. Theoretically, it means that one cannot produce "intermediate" models of quantum-optical computation (analogous to the Clifford group for qubits) by restricting the allowed beamsplitters and phaseshifters: there is a dichotomy; one either gets a trivial set or else a universal set. No similar classification theorem for gates acting on qubits is currently known. We leave open the problem of classifying optical gates that act on 3 or more modes.
S. Aaronson and A. Arkhipov. BosonSampling Is Far From Uniform [PS] [PDF], Quantum Information and Computation, vol. 14, no. 15&16, pp. 1383--1423, 2014. arXiv:1309.7460, ECCC TR13-135.
BosonSampling, which we proposed three years ago, is a scheme for using linear-optical networks to solve sampling problems that appear to be intractable for a classical computer. In a recent manuscript, Gogolin et al. claimed that even an ideal BosonSampling device's output would be "operationally indistinguishable" from a uniform random outcome, at least "without detailed a priori knowledge"; or at any rate, that telling the two apart might itself be a hard problem. We first answer these claims---explaining why the first is based on a definition of "a priori knowledge" so strange that, were it adopted, almost no quantum algorithm could be distinguished from a pure random-number source; while the second is neither new nor a practical obstacle to interesting BosonSampling experiments. However, we then go further, and address some interesting research questions inspired by Gogolin et al.'s mistaken arguments. We prove that, with high probability over a Haar-random matrix A, the BosonSampling distribution induced by A is far from the uniform distribution in total variation distance. More surprisingly, and directly counter to Gogolin et al., we give an efficient algorithm that distinguishes these two distributions with constant bias. Finally, we offer three "bonus" results about BosonSampling. First, we report an observation of Fernando Brandao: that one can efficiently sample a distribution that has large entropy and that's indistinguishable from a BosonSampling distribution by any circuit of fixed polynomial size. Second, we show that BosonSampling distributions can be efficiently distinguished from uniform even with photon losses and for general initial states. Third, we offer the simplest known proof that FermionSampling is solvable in classical polynomial time, and we reuse techniques from our BosonSampling analysis to characterize random FermionSampling distributions.
S. Aaronson, A. Bouland, L. Chua, and G. Lowther. Psi-Epistemic Theories: The Role of Symmetry [PS] [PDF], Physical Review A 88:032111, 2013. arXiv:1303.2834.
Formalizing an old desire of Einstein, "ψ-epistemic theories" try to reproduce the predictions of quantum mechanics, while viewing quantum states as ordinary probability distributions over underlying objects called "ontic states." Regardless of one's philosophical views about such theories, the question arises of whether one can cleanly rule them out, by proving no-go theorems analogous to the Bell Inequality. In the 1960s, Kochen and Specker (who first studied these theories) constructed an elegant ψ-epistemic theory for Hilbert space dimension d=2, but also showed that any deterministic ψ-epistemic theory must be "measurement contextual" in dimensions 3 and higher. Last year, the topic attracted renewed attention, when Pusey, Barrett, and Rudolph (PBR) showed that any ψ-epistemic theory must "behave badly under tensor product." In this paper, we prove that even without the Kochen-Specker or PBR assumptions, there are no ψ-epistemic theories in dimensions d ≥ 3 that satisfy two reasonable conditions: (1) symmetry under unitary transformations, and (2) "maximum nontriviality" (meaning that the probability distributions corresponding to any two non-orthogonal states overlap). This no-go theorem holds if the ontic space is either the set of quantum states or the set of unitaries. The proof of this result, in the general case, uses some measure theory and differential geometry. On the other hand, we also show the surprising result that without the symmetry restriction, one can construct maximally-nontrivial ψ-epistemic theories in every finite dimension d.
M. A. Broome, A. Fedrizzi, S. Rahimi-Keshari, J. Dove, S. Aaronson, T. Ralph, and A. G. White. Photonic Boson Sampling in a Tunable Circuit, Science 339(6121):794-798, February 2013. arXiv:1212.2234.
Quantum computers are unnecessary for exponentially-efficient computation or simulation if the Extended Church-Turing thesis---a foundational tenet of computer science---is correct. The thesis would be directly contradicted by a physical device that efficiently performs a task believed to be intractable for classical computers. Such a task is BosonSampling: obtaining a distribution of n bosons scattered by some linear-optical unitary process. Here we test the central premise of BosonSampling, experimentally verifying that the amplitudes of 3-photon scattering processes are given by the permanents of submatrices generated from a unitary describing a 6-mode integrated optical circuit. We find the protocol to be robust, working even with the unavoidable effects of photon loss, non-ideal sources, and imperfect detection. Strong evidence against the Extended Church-Turing thesis will come from scaling to large numbers of photons, which is a much simpler task than building a universal quantum computer.
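The permanent formula being tested is easy to state in code. Here is a toy calculation (ours, with a random rather than laboratory unitary) of a 3-photon, no-collision transition amplitude in a 6-mode interferometer:

    import itertools
    import numpy as np

    def permanent(a):
        # permanent by summing over permutations; fine for 3x3 matrices
        n = a.shape[0]
        return sum(np.prod([a[i, p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    rng = np.random.default_rng(1)
    z = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
    u, _ = np.linalg.qr(z)               # Haar-random 6-mode unitary

    s = [0, 1, 2]                        # modes where the 3 photons enter
    t = [1, 3, 5]                        # modes where they are detected
    amplitude = permanent(u[np.ix_(t, s)])
    print(abs(amplitude) ** 2)           # probability of that detection event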
S. Aaronson and P. Christiano. Quantum Money from Hidden Subspaces [PS] [PDF], Theory of Computing 9(9):349-401, 2013. Conference version [PS] [PDF] in Proceedings of ACM STOC 2012, pages 41-60. arXiv:1203.4740, ECCC TR12-024.
Forty years ago, Wiesner pointed out that quantum mechanics raises the striking possibility of money that cannot be counterfeited according to the laws of physics. We propose the first quantum money scheme that is (1) public-key---meaning that anyone can verify a banknote as genuine, not only the bank that printed it, and (2) cryptographically secure, under a "classical" hardness assumption that has nothing to do with quantum money.
Our scheme is based on hidden subspaces, encoded as the zero-sets of random multivariate polynomials. A main technical advance is to show that the "black-box" version of our scheme, where the polynomials are replaced by classical oracles, is unconditionally secure. Previously, such a result had only been known relative to a quantum oracle (and even there, the proof was never published).
Even in Wiesner's original setting---quantum money that can only be verified by the bank---we are able to use our techniques to patch a major security hole in Wiesner's scheme. We give the first private-key quantum money scheme that allows unlimited verifications and that remains unconditionally secure, even if the counterfeiter can interact adaptively with the bank.
Our money scheme is simpler than previous public-key quantum money schemes, including a knot-based scheme of Farhi et al. The verifier needs to perform only two tests, one in the standard basis and one in the Hadamard basis---matching the original intuition for quantum money, based on the existence of complementary observables.
Our security proofs use a new variant of Ambainis's quantum adversary method, and several other tools that might be of independent interest.
Update: The explicit, polynomial-based scheme in this paper has since been broken. For more details, see Section 9.6 in Aaronson's Barbados lecture notes.
S. Aaronson. A Linear-Optical Proof that the Permanent is #P-Hard [PS] [PDF], Proceedings of the Royal Society A, 467:3393-3405, 2011. ECCC TR11-043, arXiv:1109.1674.
One of the crown jewels of complexity theory is Valiant's 1979 theorem that computing the permanent of an n-by-n matrix is #P-hard. Here we show that, by using the model of linear-optical quantum computing---and in particular, a universality theorem due to Knill, Laflamme, and Milburn---one can give a different and arguably more intuitive proof of this theorem.
S. Aaronson. Impossibility of Succinct Quantum Proofs for Collision-Freeness [PS] [PDF], Quantum Information and Computation, 12:21-28, 2012. ECCC TR11-001, arXiv:1101.0403.
We show that any quantum algorithm to decide whether a function f:[n]→[n] is a permutation or far from a permutation must make Ω(n^(1/3)/w) queries to f, even if the algorithm is given a w-qubit quantum witness in support of f being a permutation. This implies that there exists an oracle A such that SZK^A ⊄ QMA^A, answering an eight-year-old open question of the author. Indeed, we show that relative to some oracle, SZK is not in the counting class A0PP defined by Vyalyi. The proof is a fairly simple extension of the quantum lower bound for the collision problem.
S. Aaronson and A. Drucker. Advice Coins for Classical and Quantum Computation [PS] [PDF], in Proceedings of ICALP 2011, pages 61-72. ECCC TR11-008, arXiv:1101.5355.
We study the power of classical and quantum algorithms equipped with nonuniform advice, in the form of a coin whose bias encodes useful information. This question takes on particular importance in the quantum case, due to a surprising result that we prove: a quantum finite automaton with just two states can be sensitive to arbitrarily small changes in a coin's bias. This contrasts with classical probabilistic finite automata, whose sensitivity to changes in a coin's bias is bounded by a classic 1970 result of Hellman and Cover.
Despite this finding, we are able to bound the power of advice coins for space-bounded classical and quantum computation. We define the classes BPPSPACE/coin and BQPSPACE/coin, of languages decidable by classical and quantum polynomial-space machines with advice coins. Our main theorem is that both classes coincide with PSPACE/poly. Proving this result turns out to require substantial machinery. We use an algorithm due to Neff for finding roots of polynomials in NC; a result from algebraic geometry that lower-bounds the separation of a polynomial's roots; and a result on fixed-points of superoperators due to Aaronson and Watrous, originally proved in the context of quantum computing with closed timelike curves.
S. Aaronson and A. Arkhipov. The Computational Complexity of Linear Optics [PDF], Theory of Computing 4:143-252, 2013. Conference version [PS] [PDF] in Proceedings of ACM STOC 2011, pages 333-342. ECCC TR10-170, arXiv:1011.3245. See also BosonSampling Mathematica notebook by Justin Dove.
We give new evidence that quantum computers -- moreover, rudimentary quantum computers built entirely out of linear-optical elements -- cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions.
Our first result says that, if there exists a polynomial-time classical algorithm that samples from the same probability distribution as a linear-optical network, then P^#P = BPP^NP, and hence the polynomial hierarchy collapses to the third level. Unfortunately, this result assumes an extremely accurate simulation.
Our main result suggests that even an approximate or noisy classical simulation would already imply a collapse of the polynomial hierarchy. For this, we need two unproven conjectures: the Permanent-of-Gaussians Conjecture, which says that it is #P-hard to approximate the permanent of a matrix A of independent N(0,1) Gaussian entries, with high probability over A; and the Permanent Anti-Concentration Conjecture, which says that |Per(A)|≥√(n!)/poly(n) with high probability over A. We present evidence for these conjectures, both of which seem interesting even apart from our application.
This paper does not assume knowledge of quantum optics. Indeed, part of its goal is to develop the beautiful theory of noninteracting bosons underlying our model, and its connection to the permanent function, in a self-contained way accessible to theoretical computer scientists.
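For intuition about the Permanent Anti-Concentration Conjecture, here is a small numerical experiment (ours; real Gaussians for simplicity, though the conjecture concerns complex ones) estimating how often |Per(A)| falls below √(n!)/10:

    import itertools, math
    import numpy as np

    def permanent(a):
        n = a.shape[0]
        return sum(np.prod([a[i, p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    rng = np.random.default_rng(0)
    n, trials = 6, 500
    ratios = [abs(permanent(rng.normal(size=(n, n)))) / math.sqrt(math.factorial(n))
              for _ in range(trials)]
    print(sum(r < 0.1 for r in ratios) / trials)   # fraction far below sqrt(n!)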
S. Aaronson and A. Drucker. A Full Characterization of Quantum Advice [PDF], SIAM Journal on Computing 43(3):1131-1183, 2014. Conference version [PS] [PDF] in Proceedings of ACM STOC 2010, pages 131-140. ECCC TR10-057, arXiv:1004.0377.
We prove the following surprising result: given any quantum state ρ on n qubits, there exists a local Hamiltonian H on poly(n) qubits (e.g., a sum of two-qubit interactions), such that any ground state of H can be used to simulate ρ on all quantum circuits of fixed polynomial size. In terms of complexity classes, this implies that BQP/qpoly is contained in QMA/poly, which supersedes the previous result of Aaronson that BQP/qpoly is contained in PP/poly. Indeed, we can exactly characterize quantum advice, as equivalent in power to untrusted quantum advice combined with trusted classical advice.
Proving our main result requires combining a large number of previous tools -- including a result of Alon et al. on learning of real-valued concept classes, a result of Aaronson on the learnability of quantum states, and a result of Aharonov and Regev on "QMA+ super-verifiers" -- and also creating some new ones. The main new tool is a so-called majority-certificates lemma, which is closely related to boosting in machine learning, and which seems likely to find independent applications. In its simplest version, this lemma says the following. Given any set S of Boolean functions on n variables, any function f∈S can be expressed as the pointwise majority of m=O(n) functions f_1,...,f_m∈S, such that each f_i is the unique function in S compatible with O(log|S|) input/output constraints.
A. Lutomirski, S. Aaronson, E. Farhi, D. Gosset, A. Hassidim, J. Kelner, and P. Shor. Breaking and making quantum money: toward a new quantum cryptographic protocol, Proceedings of Innovations in Computer Science (ICS), 2010. arXiv:0912.3825.
Public-key quantum money is a cryptographic protocol in which a bank can create quantum states which anyone can verify but no one except possibly the bank can clone or forge. There are no secure public-key quantum money schemes in the literature; as we show in this paper, the only previously published scheme is insecure. We introduce a category of quantum money protocols which we call collision-free. For these protocols, even the bank cannot prepare multiple identical-looking pieces of quantum money. We present a blueprint for how such a protocol might work as well as a concrete example which we believe may be insecure.
S. Aaronson and A. Ambainis. The Need for Structure in Quantum Speedups [PS] [PDF], in Proceedings of ICS 2011, pages 338-352. arXiv:0911.0996, ECCC TR09-110.
Is there a general theorem that tells us when we can hope for exponential speedups from quantum algorithms, and when we cannot? In this paper, we make two advances toward such a theorem, in the black-box model where most quantum algorithms operate.
First, we show that for any problem that is invariant under permuting inputs and outputs (like the collision or the element distinctness problems), the quantum query complexity is at least the 9th root of the classical randomized query complexity. This resolves a conjecture of Watrous from 2002.
Second, inspired by recent work of O'Donnell et al. and Dinur et al., we conjecture that every bounded low-degree polynomial has a "highly influential" variable. Assuming this conjecture, we show that every T-query quantum algorithm can be simulated on most inputs by a poly(T)-query classical algorithm, and that one essentially cannot hope to prove P≠BQP relative to a random oracle.
S. Aaronson. BQP and the Polynomial Hierarchy [PS] [PDF], in Proceedings of ACM STOC 2010, pages 141-150. arXiv:0910.4698, ECCC TR09-104.
The relationship between BQP and PH has been an open problem since the earliest days of quantum computing. We present evidence that quantum computers can solve problems outside the entire polynomial hierarchy, by relating this question to topics in circuit complexity, pseudorandomness, and Fourier analysis.
First, we show that there exists an oracle relation problem (i.e., a problem with many valid outputs) that is solvable in BQP, but not in PH. This also yields a non-oracle relation problem that is solvable in quantum logarithmic time, but not AC^0.
Second, we show that an oracle decision problem separating BQP from PH would follow from the Generalized Linial-Nisan Conjecture, which we formulate here and which is likely of independent interest. The original Linial-Nisan Conjecture (about pseudorandomness against constant-depth circuits) was recently proved by Braverman, after being open for twenty years.
(Update: See also A Note on Oracle Separations for BQP by Lijie Chen, which fixes some mistaken proofs in this paper)
S. Aaronson. Quantum Copy-Protection and Quantum Money [PS] [PDF], conference version in Proceedings of IEEE Complexity 2009, pages 229-242.
Forty years ago, Wiesner proposed using quantum states to create money that is physically impossible to counterfeit, something that cannot be done in the classical world. However, Wiesner's scheme required a central bank to verify the money, and the question of whether there can be unclonable quantum money that anyone can verify has remained open since. One can also ask a related question, which seems to be new: can quantum states be used as copy-protected programs, which let the user evaluate some function f, but not create more programs for f?
This paper tackles both questions using the arsenal of modern computational complexity. Our main result is that there exist quantum oracles relative to which publicly-verifiable quantum money is possible, and any family of functions that cannot be efficiently learned from its input-output behavior can be quantumly copy-protected. This provides the first formal evidence that these tasks are achievable. The technical core of our result is a "Complexity-Theoretic No-Cloning Theorem," which generalizes both the standard No-Cloning Theorem and the optimality of Grover search, and might be of independent interest. Our security argument also requires explicit constructions of quantum t-designs.
Moving beyond the oracle world, we also present an explicit candidate scheme for publicly-verifiable quantum money, based on random stabilizer states; as well as two explicit schemes for copy-protecting the family of point functions. We do not know how to base the security of these schemes on any existing cryptographic assumption. (Note that without an oracle, we can only hope for security under some computational assumption.)
S. Aaronson, F. Le Gall, A. Russell, and S. Tani. The One-Way Communication Complexity of Group Membership [PS] [PDF], Chicago Journal of Theoretical Computer Science Article 6, 2011. arXiv:0902.3175.
This paper studies the one-way communication complexity of the subgroup membership problem, a classical problem closely related to basic questions in quantum computing. Here Alice receives, as input, a subgroup H of a finite group G; Bob receives an element x∈G. Alice is permitted to send a single message to Bob, after which he must decide if his input x is an element of H. We prove the following upper bounds on the classical communication complexity of this problem in the bounded-error setting:
- The problem can be solved with O(log |G|) communication, provided the subgroup H is normal;
- The problem can be solved with O(d_max log |G|) communication, where d_max is the maximum of the dimensions of the irreducible complex representations of G;
- For any prime p not dividing |G|, the problem can be solved with O(d_max log p) communication, where d_max is the maximum of the dimensions of the irreducible F_p-representations of G.
S. Aaronson and J. Watrous. Closed Timelike Curves Make Quantum and Classical Computing Equivalent [PS] [PDF], Proceedings of the Royal Society A 465:631-647, 2009. arXiv:0808.2669.
While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: both would have the (extremely large) power of the complexity class PSPACE, consisting of all problems solvable by a conventional computer using a polynomial amount of memory. This solves an open problem proposed by one of us in 2005, and gives an essentially complete understanding of computational complexity in the presence of CTCs. Following the work of Deutsch, we treat a CTC as simply a region of spacetime where a "causal consistency" condition is imposed, meaning that Nature has to produce a (probabilistic or quantum) fixed-point of some evolution operator. Our conclusion is then a consequence of the following theorem: given any quantum circuit (not necessarily unitary), a fixed-point of the circuit can be (implicitly) computed in polynomial space. This theorem might have independent applications in quantum information.
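In the classical case, the causal-consistency condition is just a fixed-point equation for a stochastic matrix, as in this minimal sketch (ours; the paper's contribution is computing such fixed points in polynomial space, which this toy example does not illustrate):

    import numpy as np

    # a column-stochastic evolution operator S on three classical states
    s = np.array([[0.0, 1.0, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.0, 0.0]])

    vals, vecs = np.linalg.eig(s)
    p = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    p /= p.sum()                    # the eigenvalue-1 eigenvector, normalized
    print(p, np.allclose(s @ p, p)) # a distribution with S p = p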
S. Aaronson. On Perfect Completeness for QMA [PS] [PDF], Quantum Information & Computation, vol. 9, pp. 81-89, 2009. arXiv:0806.0450.
Whether the class QMA (Quantum Merlin Arthur) is equal to QMA_1, or QMA with one-sided error, has been an open problem for years. This note helps to explain why the problem is difficult, by using ideas from real analysis to give a "quantum oracle" relative to which QMA≠QMA_1. As a byproduct, we find that there are facts about quantum complexity classes that are classically relativizing but not quantumly relativizing, among them such "trivial" containments as BQP⊆ZQEXP.
N. Harrigan, T. Rudolph, and S. Aaronson. Representing Probabilistic Data via Ontological Models, 2008. arXiv:0709.1149.
Ontological models are attempts to quantitatively describe the results of a probabilistic theory, such as Quantum Mechanics, in a framework exhibiting an explicit realism-based underpinning. Unlike either the well known quasi-probability representations, or the "r-p" vector formalism, these models are contextual and by definition only involve positive probability distributions (and indicator functions). In this article we study how the ontological model formalism can be used to describe arbitrary statistics of a system subjected to a finite set of preparations and measurements. We present three models which can describe any such empirical data and then discuss how to turn an indeterministic model into a deterministic one. This raises the issue of how such models manifest contextuality, and we provide an explicit example to demonstrate this. In the second half of the paper we consider the issue of finding ontological models with as few ontic states as possible.
S. Aaronson, S. Beigi, A. Drucker, B. Fefferman and P. Shor. The Power of Unentanglement [PS] [PDF], Theory of Computing, 5(1):1-42, 2009. Conference version [PS] [PDF] in Proceedings of IEEE Complexity 2008, pp. 223-236. arXiv:0804.0802.
The class QMA(k), introduced by Kobayashi et al., consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k)=QMA(2) for k≥2? Can QMA(k) protocols be amplified to exponentially small error?
In this paper, we make progress on all of the above questions.
- We give a protocol by which a verifier can be convinced that a 3SAT formula of size n is satisfiable, with constant soundness, given Õ(√n) unentangled quantum witnesses with O(log n) qubits each. Our protocol relies on the existence of very short PCPs.
- We show that assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k)=QMA(2) for all k≥2.
- We prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
S. Aaronson. The Learnability of Quantum States [PS] [PDF], Proceedings of the Royal Society A, 463(2088), 2007. quant-ph/0608142.
Traditional quantum state tomography requires a number of measurements that grows exponentially with the number of qubits n. But using ideas from computational learning theory, we show that "for most practical purposes" one can learn a state using a number of measurements that grows only linearly with n. Besides possible implications for experimental physics, our learning theorem has two applications to quantum computing: first, a new simulation of quantum one-way protocols, and second, the use of trusted classical advice to verify untrusted quantum advice.
S. Aaronson and G. Kuperberg. Quantum Versus Classical Proofs and Advice [PS] [PDF], Theory of Computing 3(7):129-157, 2007. Conference version [PS] [PDF] in Proceedings of IEEE Complexity 2007, pp. 115-128. quant-ph/0604056.
This paper studies whether quantum proofs are more powerful than classical proofs, or in complexity terms, whether QMA=QCMA. We prove three results about this question. First, we give a "quantum oracle separation" between QMA and QCMA. More concretely, we show that any quantum algorithm needs Ω(sqrt(2^n/(m+1))) queries to find an n-qubit "marked state" |ψ>, even if given an m-bit classical description of |ψ> together with a quantum black box that recognizes |ψ>. Second, we give an explicit QCMA protocol that nearly achieves this lower bound. Third, we show that, in the one previously-known case where quantum proofs seemed to provide an exponential advantage, classical proofs are basically just as powerful. In particular, Watrous gave a QMA protocol for verifying non-membership in finite groups. Under plausible group-theoretic assumptions, we give a QCMA protocol for the same problem. Even with no assumptions, our protocol makes only polynomially many queries to the group oracle. We end with some conjectures about quantum versus classical oracles, and about the possibility of a classical oracle separation between QMA and QCMA.
S. Aaronson. QMA/qpoly Is Contained In PSPACE/poly: De-Merlinizing Quantum Protocols [PS] [PDF], in Proceedings of IEEE Complexity 2006, pages 261-273. quant-ph/0510230.
This paper introduces a new technique for removing existential quantifiers over quantum states. Using this technique, we show that there is no way to pack an exponential number of bits into a polynomial-size quantum state, in such a way that the value of any one of those bits can later be proven with the help of a polynomial-size quantum witness. We also show that any problem in QMA with polynomial-size quantum advice, is also in PSPACE with polynomial-size classical advice. This builds on our earlier result that BQP/qpoly is contained in PP/poly, and offers an intriguing counterpoint to the recent discovery of Raz that QIP/qpoly = ALL. Finally, we show that QCMA/qpoly is contained in PP/poly and that QMA/rpoly = QMA/poly.
Update: See also this paper by Harrow, Lin, and Montanaro, which corrects a mistaken proof in this paper.
S. Aaronson. Quantum Computing, Postselection, and Probabilistic Polynomial-Time [PS] [PDF], Proceedings of the Royal Society A, 461(2063):3473-3482, 2005. quant-ph/0412187.
I study the class of problems efficiently solvable by a quantum computer, given the ability to "postselect" on the outcomes of measurements. I prove that this class coincides with a classical complexity class called PP, or Probabilistic Polynomial-Time. Using this result, I show that several simple changes to the axioms of quantum mechanics would let us solve PP-complete problems efficiently. The result also implies, as an easy corollary, a celebrated theorem of Beigel, Reingold, and Spielman that PP is closed under intersection, as well as a generalization of that theorem due to Fortnow and Reingold. This illustrates that quantum computing can yield new and simpler proofs of major results about classical computation.
S. Aaronson. Quantum Computing and Hidden Variables [PS] [PDF], Physical Review A 71:032325, March 2005. quant-ph/0408035 and quant-ph/0408119.
This paper initiates the study of hidden variables from a quantum computing perspective. For us, a hidden-variable theory is simply a way to convert a unitary matrix that maps one quantum state to another, into a stochastic matrix that maps the initial probability distribution to the final one in some fixed basis. We list five axioms that we might want such a theory to satisfy, and then investigate which of the axioms can be satisfied simultaneously. Toward this end, we propose a new hidden-variable theory based on network flows. In a second part of the paper, we show that if we could examine the entire history of a hidden variable, then we could efficiently solve problems that are believed to be intractable even for quantum computers. In particular, under any hidden-variable theory satisfying a reasonable axiom, we could solve the Graph Isomorphism problem in polynomial time, and could search an N-item database using O(N^(1/3)) queries, as opposed to O(N^(1/2)) queries with Grover's search algorithm. On the other hand, the N^(1/3) bound is optimal, meaning that we could probably not solve NP-complete problems in polynomial time. We thus obtain the first good example of a model of computation that appears slightly more powerful than the quantum computing model.
Update: See also this paper by Aaronson, Bouland, Fitzsimons, and Lee, for a retraction of the claimed optimality proof for the N^(1/3) search algorithm.
S. Aaronson and D. Gottesman. Improved Simulation of Stabilizer Circuits [PS] [PDF] [Webpage], Physical Review A 70:052328, 2004. quant-ph/0406196.
The Gottesman-Knill theorem says that a stabilizer circuit -- that is, a quantum circuit consisting solely of CNOT, Hadamard, and phase gates -- can be simulated efficiently on a classical computer. This paper improves that theorem in several directions. First, by removing the need for Gaussian elimination, we make the simulation algorithm much faster at the cost of a factor-2 increase in the number of bits needed to represent a state. We have implemented the improved algorithm in a freely-available program called CHP (CNOT-Hadamard-Phase), which can handle thousands of qubits easily. Second, we show that the problem of simulating stabilizer circuits is complete for the classical complexity class ParityL, which means that stabilizer circuits are probably not even universal for classical computation. Third, we give efficient algorithms for computing the inner product between two stabilizer states, putting any n-qubit stabilizer circuit into a "canonical form" that requires at most O(n^2/log n) gates, and other useful tasks. Fourth, we extend our simulation algorithm to circuits acting on mixed states, circuits containing a limited number of non-stabilizer gates, and circuits acting on general tensor-product initial states but containing only a limited number of measurements.
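To make the improved algorithm concrete, here is a minimal Python sketch of the tableau updates for the three gates (measurement, and the destabilizer half of the tableau that makes measurement fast, are omitted for brevity):

    import numpy as np

    class Stabilizers:
        def __init__(self, n):
            self.x = np.zeros((n, n), dtype=bool)  # X-part of each generator
            self.z = np.eye(n, dtype=bool)         # Z-part: the state |0...0>
            self.r = np.zeros(n, dtype=bool)       # sign bit of each generator

        def cnot(self, a, b):
            self.r ^= self.x[:, a] & self.z[:, b] & ~(self.x[:, b] ^ self.z[:, a])
            self.x[:, b] ^= self.x[:, a]
            self.z[:, a] ^= self.z[:, b]

        def hadamard(self, a):
            self.r ^= self.x[:, a] & self.z[:, a]
            self.x[:, a], self.z[:, a] = self.z[:, a].copy(), self.x[:, a].copy()

        def phase(self, a):
            self.r ^= self.x[:, a] & self.z[:, a]
            self.z[:, a] ^= self.x[:, a]

    # Prepare a Bell state: H on qubit 0, then CNOT from 0 to 1.
    tab = Stabilizers(2)
    tab.hadamard(0)
    tab.cnot(0, 1)
    print(tab.x.astype(int), tab.z.astype(int), sep="\n")  # generators XX and ZZ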
S. Aaronson. Limitations of Quantum Advice and One-Way Communication [PS] [PDF], Theory of Computing 1:1-28, 2005. Conference version in Proceedings of IEEE Complexity 2004, pp. 320-332 (won the Ron Book Best Student Paper Award). quant-ph/0402095.
Although a quantum state requires exponentially many classical bits to describe, the laws of quantum mechanics impose severe restrictions on how that state can be accessed. This paper shows in three settings that quantum messages have only limited advantages over classical ones.
First, we show that BQP/qpoly is contained in PP/poly, where BQP/qpoly is the class of problems solvable in quantum polynomial time, given a polynomial-size "quantum advice state" that depends only on the input length. This resolves a question of Buhrman, and means that we should not hope for an unrelativized separation between quantum and classical advice. Underlying our complexity result is a general new relation between deterministic and quantum one-way communication complexities, which applies to partial as well as total functions.
Second, we construct an oracle relative to which NP is not contained in BQP/qpoly. To do so, we use the polynomial method to give the first correct proof of a direct product theorem for quantum search. This theorem has other applications; for example, it can be used to fix a result of Klauck about quantum time-space tradeoffs for sorting.
Third, we introduce a new trace distance method for proving lower bounds on quantum one-way communication complexity. Using this method, we obtain optimal quantum lower bounds for two problems of Ambainis, for which no nontrivial lower bounds were previously known even for classical randomized protocols.
Update: See Section 1.3 of Aaronson's Barbados lecture notes for a corrected proof of the "Almost As Good As New Lemma."
S. Aaronson. Is Quantum Mechanics An Island In Theoryspace? [PS] [PDF], Proceedings of the Växjö Conference "Quantum Theory: Reconsideration of Foundations" (A. Khrennikov, ed.), 2004. quant-ph/0401062.
This paper investigates what happens if we change quantum mechanics in several ways. The main results are as follows. First, if we replace the 2-norm by some other p-norm, then there are no nontrivial norm-preserving linear maps. Second, if we relax the demand that norm be preserved, we end up with a theory that allows rapid solution of hard computational problems known as PP-complete problems (as well as superluminal signalling). And third, if we restrict amplitudes to be real, we run into a difficulty much simpler than the usual one based on parameter-counting of mixed states.
Note: The computational results in this paper are superseded by "Quantum Computing, Postselection, and Probabilistic Polynomial-Time."
S. Aaronson. Multilinear Formulas and Skepticism of Quantum Computing [PS] [PDF], in STOC 2004, pp. 118-127. Conference version [PS] [PDF]. quant-ph/0311039.
Several researchers, including Leonid Levin, Gerard 't Hooft, and Stephen Wolfram, have argued that quantum mechanics will break down before the factoring of large numbers becomes possible. If this is true, then there should be a natural "Sure/Shor separator" -- that is, a set of quantum states that can account for all experiments performed to date, but not for Shor's factoring algorithm. We propose as a candidate the set of states expressible by a polynomial number of additions and tensor products. Using a recent lower bound on multilinear formula size due to Raz, we then show that states arising in quantum error-correction require n^(Ω(log n)) additions and tensor products even to approximate, which incidentally yields the first superpolynomial gap between general and multilinear formula size of functions. More broadly, we introduce a complexity classification of pure quantum states, and prove many basic facts about this classification. Our goal is to refine vague ideas about a breakdown of quantum mechanics into specific hypotheses that might be experimentally testable in the near future.
S. Aaronson. Lower Bounds for Local Search by Quantum Arguments [PS] [PDF], Proceedings of ACM STOC 2004, pp. 465-474 (won the Danny Lewin Best Student Paper Award). Also in STOC'04 Special Issue of SIAM Journal on Computing. quant-ph/0307149.
The problem of finding a local minimum of a black-box function is central for understanding local search as well as quantum adiabatic algorithms. For functions on the Boolean hypercube {0,1}^n, we show a lower bound of Ω(2^(n/4)/n) on the number of queries needed by a quantum computer to solve this problem. More surprisingly, our approach, based on Ambainis' quantum adversary method, also yields a lower bound of Ω(2^(n/2)/n^2) on the problem's classical randomized query complexity. This improves and simplifies a 1983 result of Aldous. Finally, in both the randomized and quantum cases, we give the first nontrivial lower bounds for finding local minima on grids of constant dimension greater than 2.
S. Aaronson and A. Ambainis. Quantum Search of Spatial Regions [PS] [PDF], Theory of Computing 1:47-79, 2005. Conference version [PS] [PDF] in Proceedings of IEEE FOCS 2003, pp. 200-209. quant-ph/0303041.
Can Grover's quantum search algorithm speed up search of a physical region - for example a 2-D grid of size sqrt(n) by sqrt(n)? The problem is that sqrt(n) time seems to be needed for each query, just to move amplitude across the grid. Here we show that this problem can be surmounted, refuting a claim to the contrary by Benioff. In particular, we show how to search a d-dimensional hypercube in time O(sqrt(n)) for d at least 3, or O(sqrt(n) log^(5/2)(n)) for d=2. More generally, we introduce a model of quantum query complexity on graphs, motivated by fundamental physical limits on information storage, particularly the holographic principle from black hole thermodynamics. Our results in this model include almost-tight upper and lower bounds for many search tasks; a generalized algorithm that works for any graph with good expansion properties, not just hypercubes; and relationships among several notions of "locality" for unitary matrices acting on graphs. As an application of our results, we give an O(sqrt(n))-qubit communication protocol for the disjointness problem, which improves an upper bound of Høyer and de Wolf and matches a lower bound of Razborov.
S. Aaronson. Quantum Certificate Complexity [PS] [PDF], IEEE Conference on Computational Complexity (CCC) 2003, pp. 171-178 (won the Ron Book Best Student Paper Award). Journal version [PS] [PDF] in JCSS Special Issue for Complexity 2003. ECCC TR03-005, quant-ph/0210020.
Given a Boolean function f, we study two natural generalizations of the certificate complexity C(f): the randomized certificate complexity RC(f) and the quantum certificate complexity QC(f). Using Ambainis' adversary method, we exactly characterize QC(f) as the square root of RC(f). We then use this result to prove the new relation R_0(f) = O(Q_2(f)^2 Q_0(f) log n) for total f, where R_0, Q_2, and Q_0 are zero-error randomized, bounded-error quantum, and zero-error quantum query complexities respectively. Finally we give asymptotic gaps between the measures, including a total f for which C(f) is superquadratic in QC(f), and a symmetric partial f for which QC(f) = O(1) yet Q_2(f) = Ω(n/log n).
Note: If you're interested in the part of this paper dealing with asymptotic separations among C, RC, and bs, then please see some improvements and corrections in a later paper by Avishay Tal.
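For readers who want to experiment, here is a brute-force computation (ours) of the deterministic measure C(f) straight from the definitions; RC(f) and QC(f) need linear programming and the adversary method, so they are omitted:

    from itertools import combinations, product

    def certificate_size(f, x, n):
        # smallest set of coordinates of x whose values force f's output
        for k in range(n + 1):
            for s in combinations(range(n), k):
                if all(f(y) == f(x)
                       for y in product((0, 1), repeat=n)
                       if all(y[i] == x[i] for i in s)):
                    return k

    def c_of_f(f, n):
        return max(certificate_size(f, x, n) for x in product((0, 1), repeat=n))

    f_or = lambda x: int(any(x))
    f_maj = lambda x: int(sum(x) >= 2)
    print(c_of_f(f_or, 3), c_of_f(f_maj, 3))   # 3 (the all-zero input) and 2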
S. Aaronson. Quantum Lower Bound for Recursive Fourier Sampling [PS] [PDF], Quantum Information and Computation 3(2):165-174, 2003. quant-ph/0209060.
One of the earliest quantum algorithms was discovered by Bernstein and Vazirani, for a problem called Recursive Fourier Sampling. This paper shows that the Bernstein-Vazirani algorithm is not far from optimal. The moral is that the need to "uncompute" garbage can impose a fundamental limit on efficient quantum computation. The proof introduces a new parameter of Boolean functions called the "nonparity coefficient," which might be of independent interest.
Note: I've revised this paper since it appeared in QIC, both to correct an error and to emphasize the need to uncompute. The version here is taken from Chapter 9 of my PhD thesis.
S. Aaronson. Quantum Lower Bound for the Collision Problem [PS] [PDF], Proceedings of ACM STOC 2002, pp. 635-642 (won the C. V. Ramamoorthy Award). Journal version (joint with Y. Shi) in Journal of the ACM 51(4):595-605, 2004. quant-ph/0111102.
The collision problem is to decide whether a function X:{1,...,n}→{1,...,n} is one-to-one or two-to-one, given that one of these is the case. We show a lower bound of Ω(n^(1/5)) on the number of queries needed by a quantum computer to solve this problem with bounded error probability. The best known upper bound is O(n^(1/3)), but obtaining any lower bound better than Ω(1) had been an open problem since 1997. Our proof uses the polynomial method augmented by some new ideas. We also give a lower bound of Ω(n^(1/7)) for the problem of deciding whether two sets are equal or disjoint on a constant fraction of elements. Finally we give implications of these results for quantum complexity theory.
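For contrast with these quantum bounds, the classical birthday-paradox algorithm for the collision problem takes Θ(sqrt(n)) queries, as in this sketch (ours):

    import random

    def looks_two_to_one(f, n, rng):
        seen = {}
        for _ in range(4 * int(n ** 0.5)):    # about 4*sqrt(n) random queries
            i = rng.randrange(n)
            if f[i] in seen and seen[f[i]] != i:
                return True                    # found a collision
            seen[f[i]] = i
        return False

    rng = random.Random(0)
    n = 10000
    one_to_one = list(range(n))
    two_to_one = [i // 2 for i in range(n)]
    print(looks_two_to_one(one_to_one, n, rng),   # False
          looks_two_to_one(two_to_one, n, rng))   # True with high probability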
(Mostly-)Classical
Papers
E. Davis and S. Aaronson. Testing GPT-4 with Wolfram Alpha and Code Interpreter plug-ins on math and science problems, 2023. arXiv:2308.05713.
This report describes a test of the large language model GPT-4 with the Wolfram Alpha and the Code Interpreter plug-ins on 105 original problems in science and math, at the high school and college levels, carried out in June-August 2023. Our tests suggest that the plug-ins significantly enhance GPT's ability to solve these problems. Having said that, there are still often "interface" failures; that is, GPT often has trouble formulating problems in a way that elicits useful answers from the plug-ins. Fixing these interface failures seems like a central challenge in making GPT a reliable tool for college-level calculation problems.
G. Marcus, E. Davis, and S. Aaronson. A Very Preliminary Analysis of DALL-E 2, 2022. arXiv:2204.13807.
The DALL-E 2 system generates original synthetic images corresponding to an input text caption. We report here on the outcome of fourteen tests of this system designed to assess its common sense, reasoning, and ability to understand complex texts. All of our prompts were intentionally much more challenging than the typical ones that have been showcased in recent weeks. Nevertheless, for 5 out of the 14 prompts, at least one of the ten images fully satisfied our requests. On the other hand, on no prompt did all of the ten images satisfy our requests.
E. Yolcu, S. Aaronson, and M. Heule. An Automated Approach to the Collatz Conjecture, in Proceedings of 28th International Conference on Automated Deduction (CADE), 2021. arXiv:2105.14697.
We explore the Collatz conjecture and its variants through the lens of termination of string rewriting. We construct a rewriting system that simulates the iterated application of the Collatz function on strings corresponding to mixed binary-ternary representations of positive integers. We prove that the termination of this rewriting system is equivalent to the Collatz conjecture. We also prove that a previously studied rewriting system that simulates the Collatz function using unary representations does not admit termination proofs via matrix interpretations. To show the feasibility of our approach in proving mathematically interesting statements, we implement a minimal termination prover that uses matrix/arctic interpretations and we find automated proofs of nontrivial weakenings of the Collatz conjecture. Finally, we adapt our rewriting system to show that other open problems in mathematics can also be approached as termination problems for relatively small rewriting systems. Although we do not succeed in proving the Collatz conjecture, we believe that the ideas here represent an interesting new approach.
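The rewriting-system encoding is too long to reproduce here, but the function it simulates fits in a few lines; this sketch (ours) checks that every starting value up to 10^5 reaches 1:

    def collatz_steps(n):
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    # the conjecture asserts the loop above terminates for every n >= 1
    print(max(collatz_steps(n) for n in range(1, 100001)))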
N. Roquet, A. P. Soleimany, A. C. Ferris, S. Aaronson, and T. K. Lu. Synthetic Recombinase-Based State Machines in Living Cells [click here] [blog post], Science 353(6297), July 22, 2016.
State machines underlie the sophisticated functionality behind human-made and natural computing systems that perform order-dependent information processing. We developed a recombinase-based framework for building state machines in living cells by leveraging chemically controlled DNA excision and inversion operations to encode states in DNA sequences. This strategy enables convenient readout of states (by sequencing and/or polymerase chain reaction) as well as complex regulation of gene expression. We validated our framework by engineering state machines in Escherichia coli that used one, two, or three chemical inputs to control up to 16 DNA states. These state machines were capable of recording the temporal order of all inputs and performing multi-input, multi-output control of gene expression. We also developed a computational tool for the automated design of gene regulation programs using recombinase-based state machines. Our scalable framework should enable new strategies for recording and studying how combinational and temporal events regulate complex cell functions and for programming sophisticated cell behaviors.
E. Demaine, F. Ma, A. Schvartzman, E. Waingarten, and S. Aaronson. The Fewest Clues Problem [PDF], in Proceedings of FUN'2016.
When analyzing the computational complexity of well-known puzzles, most papers consider the algorithmic challenge of solving a given instance of (a generalized form of) the puzzle. We take a different approach by analyzing the computational complexity of designing a "good" puzzle. We assume a puzzle maker designs part of an instance, but before publishing it, wants to ensure that the puzzle has a unique solution. Given a puzzle, we introduce the FCP (fewest clues problem) version of the problem: Given an instance to a puzzle, what is the minimum number of clues we must add in order to make the instance uniquely solvable? We analyze this question for the Nikoli puzzles Sudoku, Shakashaka, and Akari. Solving these puzzles is NP-complete, and we show their FCP versions are Σ2P-complete. Along the way, we show that the FCP versions of 3SAT, 1-in-3SAT, Triangle Partition, Planar 3SAT, and Latin Square are all Σ2P-complete. We show that even problems in P have difficult FCP versions, sometimes even Σ2P-complete, though "closed under cluing" problems are in the (presumably) smaller class NP; for example, FCP 2SAT is NP-complete.
A. Yedidia and S. Aaronson. A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory [PDF], Complex Systems 25(4), 2016.
Since the definition of the Busy Beaver function by Radó in 1962, an interesting open question has been the smallest value of n for which BB(n) is independent of ZFC set theory. Is this n approximately 10, or closer to 1,000,000, or is it even larger? In this paper, we show that it is at most 7,910 by presenting an explicit description of a 7,910-state Turing machine Z with 1 tape and a 2-symbol alphabet that cannot be proved to run forever in ZFC (even though it presumably does), assuming ZFC is consistent. The machine is based on work of Harvey Friedman on independent statements involving order-invariant graphs. In doing so, we give the first known upper bound on the highest provable Busy Beaver number in ZFC. To create Z, we develop and use a higher-level language, Laconic, which is much more convenient than direct state manipulation. We also use Laconic to design two Turing machines, G and R, which halt if and only if there are counterexamples to Goldbach's Conjecture and the Riemann Hypothesis, respectively.
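The machines G and R are easiest to understand through the programs they implement; for example, here is the Goldbach loop in Python (ours), which halts if and only if a counterexample exists:

    def isprime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    def goldbach_counterexample():
        n = 4
        while True:   # halts iff some even n >= 4 is not a sum of two primes
            if not any(isprime(p) and isprime(n - p)
                       for p in range(2, n // 2 + 1)):
                return n
            n += 2

    # goldbach_counterexample()   # presumably runs forever; don't call it!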
S. Aaronson, D. Grier, and L. Schaefer. The Classification of Reversible Bit Operations [PDF], in Proceedings of Innovations in Theoretical Computer Science (ITCS), 2017. ECCC TR15-066, arXiv:1504.05155.
We present a complete classification of all possible sets of classical reversible gates acting on bits, in terms of which reversible transformations they generate, assuming swaps and ancilla bits are available for free. Our classification can be seen as the reversible-computing analogue of Post's lattice, a central result in mathematical logic from the 1940s. It is a step toward the ambitious goal of classifying all possible quantum gate sets acting on qubits.
Our theorem implies a linear-time algorithm (which we have implemented) that takes as input the truth tables of reversible gates G and H, and that decides whether G generates H. Previously, this problem was not even known to be decidable (though with effort, one can derive from abstract considerations an algorithm that takes triply-exponential time). The theorem also implies that any n-bit reversible circuit can be "compressed" to an equivalent circuit, over the same gates, that uses at most 2^n poly(n) gates and O(1) ancilla bits; these are the first upper bounds on these quantities known, and are close to optimal. Finally, the theorem implies that every non-degenerate reversible gate can implement either every reversible transformation, or every affine transformation, when restricted to an "encoded subspace."
Briefly, the theorem says that every set of reversible gates generates either all reversible transformations on n-bit strings (as the Toffoli gate does); no transformations; all transformations that preserve Hamming weight (as the Fredkin gate does); all transformations that preserve Hamming weight mod k for some k; all affine transformations (as the Controlled-NOT gate does); all affine transformations that preserve Hamming weight mod 2 or mod 4, inner products mod 2, or a combination thereof; or a previous class augmented by a NOT or NOTNOT gate. Prior to this work, it was not even known that every class was finitely generated. Ruling out the possibility of additional classes, not in the list, requires some arguments about polynomials, lattices, and Diophantine equations.
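Two of the invariants that organize the classification are easy to test directly; this small check (ours) confirms that Fredkin preserves Hamming weight but is not affine, while CNOT is affine but does not preserve weight, and Toffoli is neither:

    from itertools import product

    def toffoli(b):
        a, c, d = b
        return (a, c, d ^ (a & c))

    def fredkin(b):
        a, c, d = b
        return (a, d, c) if a else (a, c, d)

    def cnot(b):
        a, c = b
        return (a, a ^ c)

    def preserves_weight(g, n):
        return all(sum(g(x)) == sum(x) for x in product((0, 1), repeat=n))

    def is_affine(g, n):
        # g is affine over GF(2) iff g(x) ^ g(y) ^ g(x^y) ^ g(0) == 0 for all x, y
        zero = g((0,) * n)
        for x in product((0, 1), repeat=n):
            for y in product((0, 1), repeat=n):
                xy = tuple(u ^ v for u, v in zip(x, y))
                if any(p ^ q ^ r ^ t
                       for p, q, r, t in zip(g(x), g(y), g(xy), zero)):
                    return False
        return True

    for g, n in [(fredkin, 3), (cnot, 2), (toffoli, 3)]:
        print(g.__name__, preserves_weight(g, n), is_affine(g, n))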
S. Aaronson and H. Nguyen. Near Invariance of the Hypercube [PS] [PDF], Israel Journal of Mathematics 212(1):385--417, 2016. arXiv:1409.7447.
We give an almost-complete description of orthogonal matrices M of order n that "rotate a non-negligible fraction of the Boolean hypercube C_n = {-1,1}^n onto itself," in the sense that Pr_{x∈C_n}[Mx ∈ C_n] ≥ n^(-C), for some positive constant C, where x is sampled uniformly over C_n. In particular, we show that such matrices M must be very close to products of permutation and reflection matrices. This result is a step toward characterizing those orthogonal and unitary matrices with large permanents, a question with applications to linear-optical quantum computing.
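A quick numerical contrast (ours) behind the theorem: a signed permutation maps all of C_n onto C_n, while a Haar-random rotation essentially never hits it:

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 12, 100000

    perm = np.eye(n)[rng.permutation(n)] * rng.choice([-1.0, 1.0], size=n)
    haar = np.linalg.qr(rng.normal(size=(n, n)))[0]   # Haar-random orthogonal

    def hit_fraction(m):
        x = rng.choice([-1.0, 1.0], size=(trials, n))
        y = x @ m.T
        return np.mean(np.all(np.abs(np.abs(y) - 1.0) < 1e-9, axis=1))

    print(hit_fraction(perm), hit_fraction(haar))     # 1.0 versus 0.0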
S. Aaronson, S. M. Carroll, and L. Ouellette. Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton [PDF], 2014.
In contrast to entropy, which increases monotonically, the "complexity" or "interestingness" of closed systems seems intuitively to increase at first and then decrease as equilibrium is approached. For example, our universe lacked complex structures at the Big Bang and will also lack them after black holes evaporate and particles are dispersed. This paper makes an initial attempt to quantify this pattern. As a model system, we use a simple, two-dimensional cellular automaton that simulates the mixing of two liquids ("coffee" and "cream"). A plausible complexity measure is then the Kolmogorov complexity of a coarse-grained approximation of the automaton's state, which we dub the "apparent complexity." We study this complexity measure, and show analytically that it never becomes large when the liquid particles are non-interacting. By contrast, when the particles do interact, we give numerical evidence that the complexity reaches a maximum comparable to the "coffee cup's" horizontal dimension. We raise the problem of proving this behavior analytically.
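The measure itself is simple to code; this schematic version (ours; it fakes the mixing stages with a random interface rather than running the automaton) shows the compressed size of the coarse-grained cup rising with interface roughness and collapsing again once fully mixed:

    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    n, b = 64, 8

    def apparent_complexity(g):
        # coarse-grain into bxb block averages, quantize, then compress
        h = g.reshape(n // b, b, n // b, b).mean(axis=(1, 3))
        return len(zlib.compress(np.round(h * 4).astype(np.uint8).tobytes()))

    def cup(roughness):
        # cream above a randomly wandering boundary, coffee below
        boundary = n // 2 + np.cumsum(rng.integers(-1, 2, size=n)) * roughness
        return (np.arange(n)[:, None] < boundary[None, :]).astype(np.uint8)

    for r in [0, 1, 4, 8]:
        print("roughness", r, "->", apparent_complexity(cup(r)))
    mixed = rng.integers(0, 2, size=(n, n)).astype(np.uint8)
    print("fully mixed ->", apparent_complexity(mixed))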
S. Aaronson, R. Impagliazzo, and D. Moshkovitz. AM with Multiple Merlins [PDF], in Proceedings of Conference on Computational Complexity (CCC), 2014. ECCC TR14-012, arXiv:1401.6848.
We introduce and study a new model of interactive proofs: AM(k), or Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known MIP, here the assumption is that each Merlin receives an independent random challenge from Arthur. One motivation for this model (which we explore in detail) comes from the close analogies between it and the quantum complexity class QMA(k), but the AM(k) model is also natural in its own right.
We illustrate the power of multiple Merlins by giving an AM(2) protocol for 3SAT, in which the Merlins' challenges and responses consist of only n^(1/2+o(1)) bits each. Our protocol has the consequence that, assuming the Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP with a polynomial-size alphabet must take n^((log n)^(1-o(1))) time. Algorithms nearly matching this lower bound are known, but their running times had never been previously explained. Brandao and Harrow have also recently used our 3SAT protocol to show quasipolynomial hardness for approximating the values of certain entangled games.
In the other direction, we give a simple quasipolynomial-time approximation algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT protocol is essentially optimal. More generally, we show that multiple Merlins never provide more than a polynomial advantage over one: that is, AM(k)=AM for all k=poly(n). The key to this result is a subsampling theorem for free games, which follows from powerful results by Alon et al. and Barak et al. on subsampling dense CSPs, and which says that the value of any free game can be closely approximated by the value of a logarithmic-sized random subgame.
S. Aaronson, A. Ambainis, K. Balodis, and M. Bavarian. Weak Parity [PDF], in Proceedings of International Colloquium on Automata, Languages, and Programming (ICALP), 2014. ECCC TR13-164, arXiv:1312.0036.
We study the query complexity of Weak Parity: the problem of computing the parity of an n-bit input string, where one only has to succeed on a 1/2+ε fraction of input strings, but must do so with high probability on those inputs where one does succeed. It is well-known that n randomized queries and n/2 quantum queries are needed to compute parity on all inputs. But surprisingly, we give a randomized algorithm for Weak Parity that makes only O(n/log^(0.246)(1/ε)) queries, as well as a quantum algorithm that makes only O(n/√(log(1/ε))) queries. We also prove a lower bound of Ω(n/log(1/ε)) in both cases, as well as lower bounds of Ω(log n) in the randomized case and Ω(√(log n)) in the quantum case for any ε>0. We show that improving our lower bounds is intimately related to two longstanding open problems about Boolean functions: the Sensitivity Conjecture, and the relationships between query complexity and polynomial degree.
S. Aaronson and T. Hance. Generalizing and Derandomizing Gurvits's Approximation Algorithm for the Permanent [PS] [PDF], Quantum Information and Computation, 14(7-8):541-559, 2014. ECCC TR12-170.
In 2005, Leonid Gurvits gave a striking randomized algorithm to approximate the permanent of an n×n matrix A. The algorithm runs in O(n^2/ε^2) time, and approximates Per(A) to within ±ε||A||^n additive error. A major advantage of Gurvits's algorithm is that it works for arbitrary matrices, not just for nonnegative matrices. This makes it highly relevant to quantum optics, where the permanents of bounded-norm complex matrices play a central role. Indeed, the existence of Gurvits's algorithm is why, in their recent work on the hardness of quantum optics, Aaronson and Arkhipov (AA) had to talk about sampling problems rather than estimation problems.
In this paper, we improve Gurvits's algorithm in two ways. First, using an idea from quantum optics, we generalize the algorithm so that it yields a better approximation when the matrix A has either repeated rows or repeated columns. Translating back to quantum optics, this lets us classically estimate the probability of any outcome of an AA-type experiment---even an outcome involving multiple photons "bunched" in the same mode---at least as well as that probability can be estimated by the experiment itself. (This does not, of course, let us solve the AA sampling problem.) It also yields a general upper bound on the probabilities of "bunched" outcomes, which resolves a conjecture of Gurvits and might be of independent physical interest.
Second, we use ε-biased sets to derandomize Gurvits's algorithm, in the special case where the matrix A is nonnegative. More interestingly, we generalize the notion of ε-biased sets to the complex numbers, construct "complex ε-biased sets," then use those sets to derandomize even our generalization of Gurvits's algorithm to the multirow/multicolumn case (again for nonnegative A). Whether Gurvits's algorithm can be derandomized for general A remains an outstanding problem.
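The basic estimator being improved is only a few lines; here is a sketch (ours, without the paper's repeated-row or derandomization refinements). For x uniform in {-1,1}^n, the sampled product has expectation exactly Per(A), and each sample is bounded by ||A||^n in magnitude:

    import itertools
    import numpy as np

    def gurvits_estimate(a, trials, rng):
        n = a.shape[0]
        total = 0.0
        for _ in range(trials):
            x = rng.choice([-1.0, 1.0], size=n)
            total += np.prod(x) * np.prod(a @ x)
        return total / trials

    def exact_permanent(a):              # brute force, for comparison
        n = a.shape[0]
        return sum(np.prod([a[i, p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    rng = np.random.default_rng(0)
    a = rng.normal(size=(5, 5)) / 5 ** 0.5
    print(exact_permanent(a), gurvits_estimate(a, 200000, rng))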
F. Mota, S. Aaronson, L. Antunes, and A. Souto. Sophistication as Randomness Deficiency [PDF], Descriptional Complexity of Formal Systems, Lecture Notes in Computer Science Volume 8031, pp. 172-181, 2013.
The sophistication of a string measures how much structural information it contains. We introduce naive sophistication, a variant of sophistication based on randomness deficiency. Naive sophistication measures the minimum number of bits needed to specify a set in which the string is a typical element. Thanks to Vereshchagin and Vitányi, we know that sophistication and naive sophistication are equivalent up to low order terms. We use this to relate sophistication to lossy compression, and to derive an alternative formulation for busy beaver computational depth.
S. Aaronson, B. Aydinlioglu, H. Buhrman, J. Hitchcock, and D. van Melkebeek. A note on exponential circuit lower bounds from derandomizing Arthur-Merlin games, 2010. ECCC TR10-174.
We present an alternate proof of the recent result by Gutfreund and Kawachi that derandomizing Arthur-Merlin games into P^NP implies linear-exponential circuit lower bounds for E^NP.
Our proof is simpler and yields stronger results. In particular, consider the promise-AM problem of distinguishing between the case where a given Boolean circuit C accepts at least a given number b of inputs, and the case where C accepts less than δb inputs for some positive constant δ. If P^NP contains a solution for this promise problem then E^NP requires circuits of size Ω(2^n/n) almost everywhere.
S. Aaronson. The Equivalence of Sampling and Searching [PS] [PDF], in Proceedings of International Computer Science Symposium in Russia (CSR), pp. 1-14, 2011 (won the Best Paper Award). Journal version in Theory of Computing Systems, 2014. arXiv:1009.5104, ECCC TR10-128.
In a sampling problem, we are given an input x∈{0,1}^n, and asked to sample approximately from a probability distribution D_x over poly(n)-bit strings. In a search problem, we are given an input x∈{0,1}^n, and asked to find a member of a nonempty set A_x with high probability. (An example is finding a Nash equilibrium.) In this paper, we use tools from Kolmogorov complexity to show that sampling and search problems are "essentially equivalent." More precisely, for any sampling problem S, there exists a search problem R_S such that, if C is any "reasonable" complexity class, then R_S is in the search version of C if and only if S is in the sampling version. What makes this nontrivial is that the same R_S works for every C.
As an application, we prove the surprising result that SampP=SampBQP if and only if FBPP=FBQP: in other words, classical computers can efficiently sample the output distribution of every quantum circuit, if and only if they can efficiently solve every search problem that quantum computers can solve.
S. Aaronson and D. van Melkebeek. On Circuit Lower Bounds from Derandomization, Theory of Computing 7(1):177-184, 2011. ECCC TR10-105.
We present an alternate proof of the result by Kabanets and Impagliazzo that derandomizing polynomial identity testing implies circuit lower bounds. Our proof is simpler, scales better, and yields a somewhat stronger result than the original argument.
S. Aaronson. A Counterexample to the Generalized Linial-Nisan Conjecture [PS] [PDF], 2010. ECCC TR10-109.
In earlier work, we gave an oracle separating the
relational versions of BQP and the polynomial hierarchy, and showed that
an oracle separating the decision versions would follow from what we
called the Generalized Linial-Nisan (GLN) Conjecture: that
"almost k-wise independent" distributions are indistinguishable from the
uniform distribution by constant-depth circuits. The original
Linial-Nisan Conjecture was recently proved by Braverman; we offered a
$200 prize for the generalized version. In this paper, we save
ourselves $200 by showing that the GLN Conjecture is false, at least for
circuits of depth 3 and higher.
As a byproduct, our counterexample also implies that Π_2^p ⊄ P^NP relative to a random oracle with probability 1. It has been conjectured since the 1980s that PH is infinite relative to a random oracle, but the highest levels of PH previously proved separate were NP and coNP. Update: I retract my claimed proof of this "byproduct" result, which in any case has been superseded by the work of Rossman, Servedio, and Tan.
Finally, our counterexample implies that the famous results of Linial, Mansour, and Nisan, on the structure of AC^0 functions, cannot be improved in several interesting respects.
S. Aaronson and A. Wigderson. Algebrization: A New Barrier in Complexity Theory [PS] [PDF], ACM Transactions on Computing Theory 1(1), 2009. Conference version [PS] [PDF] in Proceedings of ACM STOC'2008, pp. 731-740.
Any proof of P≠NP will have to overcome two barriers: relativization
and natural proofs. Yet over the last decade, we have seen
circuit lower bounds (for example, that PP does not have linear-size
circuits) that overcome both barriers simultaneously. So the question
arises of whether there is a third barrier to progress on the central
questions in complexity theory.
In this paper we present such a barrier, which we call algebraic
relativization or algebrization. The idea is that, when we
relativize some complexity class inclusion, we should give the
simulating machine access not only to an oracle A, but also to a
low-degree extension of A over a finite field or ring.
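For concreteness, here is a sketch of one standard choice, the multilinear extension over a prime field (our illustration; the paper works with general finite fields and rings):

```python
from itertools import product

def multilinear_extension(A, n, p):
    """Return the multilinear extension of A : {0,1}^n -> {0,1} over F_p:
    A~(z) = sum_{x in {0,1}^n} A(x) * prod_i (x_i z_i + (1-x_i)(1-z_i)) mod p.
    It agrees with A on Boolean points and has degree 1 in each variable."""
    def A_tilde(z):
        total = 0
        for x in product([0, 1], repeat=n):
            term = A(x)
            for xi, zi in zip(x, z):
                term = term * (zi if xi else 1 - zi) % p
            total = (total + term) % p
        return total
    return A_tilde

# Sanity check: the extension of 2-bit AND is z1*z2 mod p.
AND = lambda x: int(all(x))
ext = multilinear_extension(AND, 2, p=101)
assert ext((1, 1)) == 1 and ext((1, 0)) == 0
print(ext((3, 5)), (3 * 5) % 101)  # both 15
```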
We systematically go through basic results and open problems in
complexity theory to delineate the power of the new algebrization
barrier. First, we show that all known non-relativizing results based
on arithmetization -- both inclusions such as IP=PSPACE and MIP=NEXP,
and separations such as MA_EXP ⊄ P/poly -- do indeed algebrize.
Second, we show that almost all of the major open problems -- including
P versus NP, P versus RP, and NEXP versus P/poly -- will require non-algebrizing
techniques. In some cases algebrization seems to explain exactly
why progress stopped where it did: for example, why we have superlinear
circuit lower bounds for PromiseMA but not for NP.
Our second set of results follows from lower bounds in a new
model of algebraic query complexity, which we introduce in this
paper and which is interesting in its own right. Some of our lower
bounds use direct combinatorial and algebraic arguments, while others
stem from a surprising connection between our model and communication
complexity. Using this connection, we are also able to give an
MA-protocol for the Inner Product function with O(√n log n)
communication (essentially matching a lower bound of Klauck), as well as
a communication complexity conjecture whose truth would imply NL≠NP.
S. Aaronson. Oracles Are Subtle But Not Malicious [PS] [PDF], in Proceedings of IEEE Complexity 2006, pages 340-354. cs.CC/0504048.
Theoretical computer scientists have been debating the role of
oracles since the 1970s. This paper illustrates both that oracles can
give us nontrivial insights about the barrier problems in circuit
complexity, and that they need not prevent us from trying to solve those
problems.
First, we give an oracle relative to which PP has linear-sized
circuits, by proving a new lower bound for perceptrons and low-degree
threshold polynomials. This oracle settles a longstanding open
question, and generalizes earlier results due to Beigel and to Buhrman,
Fortnow, and Thierauf. More importantly, it implies the first
nonrelativizing separation of "traditional" complexity classes, as
opposed to interactive proof classes such as MIP and MA_EXP.
For Vinodchandran showed, by a nonrelativizing argument, that PP does not have circuits of size n^k for any fixed k. We present an alternative proof of this fact, which shows that PP does not even have quantum circuits of size n^k with quantum advice. To our knowledge, this is the first nontrivial lower bound on quantum circuit size.
Second, we study a beautiful algorithm of Bshouty et al. for learning Boolean circuits in ZPP^NP. We show that the NP queries in this algorithm cannot be parallelized by any relativizing technique, by giving an oracle relative to which ZPP^{||NP} and even BPP^{||NP} have linear-size circuits. On the other hand, we also show that the NP queries could be parallelized if P=NP. Thus, classes such as ZPP^{||NP} inhabit a "twilight zone," where we need to distinguish between relativizing and black-box techniques. Our results on this subject have implications for computational learning theory as well as for the circuit minimization problem.
S. Aaronson. The Complexity of Agreement [PS] [PDF]. Conference version [PS] [PDF] in Proceedings of ACM STOC 2005, pp. 634-643. cs.CC/0406061.
A celebrated 1976 theorem of Aumann asserts that honest, rational
Bayesian agents with common priors will never "agree to disagree": if
their opinions about any topic are common knowledge, then those opinions
must be equal. Economists have written numerous papers examining the
assumptions behind this theorem. But two key questions went
unaddressed: first, can the agents reach agreement after a conversation
of reasonable length? Second, can the computations needed for that
conversation be performed efficiently? This paper answers both
questions in the affirmative, thereby strengthening Aumann's original
conclusion.
We first show that, for two agents with a common prior to agree within ε about the expectation of a [0,1] variable with high probability over their prior, it suffices for them to exchange order 1/ε^2 bits. This bound is completely independent of the number of bits n of relevant knowledge that the agents have. We then extend the bound to three or more agents; and we give an example where the economists' "standard protocol" (which consists of repeatedly announcing one's current expectation) nearly saturates the bound, while a new "attenuated protocol" does better. Finally, we give a protocol that would cause two Bayesians to agree within ε after exchanging order 1/ε^2 messages, and that can be simulated by agents with limited computational resources. By this we mean that, after examining the agents' knowledge and a transcript of their conversation, no one would be able to distinguish the agents from perfect Bayesians. The time used by the simulation procedure is exponential in 1/ε^6 but not in n.
Note: The dependence on δ in this paper's main results can be improved from 1/δ to log(1/δ) without too much difficulty. I thank Mark Sellke for this observation.
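As a toy illustration of the "standard protocol," here is a simulation in the exact-announcement style of Geanakoplos and Polemarchakis: two agents with a uniform common prior alternately announce conditional expectations, and everyone updates on each announcement. All names and the example are our own; the paper's bounds concern discretized variants of such protocols.

```python
from fractions import Fraction

def cell(partition, w):
    """The block of the partition containing state w."""
    return next(S for S in partition if w in S)

def E(f, S):
    """E[f | S] under the uniform prior, in exact rational arithmetic."""
    return sum((f[w] for w in S), Fraction(0)) / len(S)

def standard_protocol(f, partitions, omega, max_rounds=20):
    R = set(range(len(f)))            # states consistent with all announcements
    history = []
    for t in range(max_rounds):
        P = partitions[t % 2]         # whose turn it is to speak
        e = E(f, cell(P, omega) & R)  # speaker's current expectation
        history.append(e)
        # Announcing e reveals which speaker-cells are consistent with it:
        R = {w for w in R if E(f, cell(P, w) & R) == e}
        if len(history) >= 2 and history[-1] == history[-2]:
            break                     # the two expectations have agreed
    return history

# Four equally likely states; f indicates the odd states. Agent 1's partition
# reveals the high bit of the state, agent 2's reveals the low bit.
f = [Fraction(0), Fraction(1), Fraction(0), Fraction(1)]
P1 = [frozenset({0, 1}), frozenset({2, 3})]
P2 = [frozenset({0, 2}), frozenset({1, 3})]
print(standard_protocol(f, (P1, P2), omega=1))  # expectations 1/2, 1, 1
```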
S. Aaronson. Algorithms for Boolean Function Query Properties [PS] [PDF], SIAM Journal on Computing 32(5):1140-1157, 2003.
We investigate efficient algorithms for computing Boolean
function properties relevant to query complexity. Such properties
include, for example, deterministic, randomized, and quantum query
complexities; block sensitivity; certificate complexity; and degree as a
real polynomial. The algorithms compute the properties given an
n-variable function's truth table (of size N=2^n) as input.
Our main results are the following:
- O(N^1.585 log N) algorithms for many common properties.
- An O(N^2.322 log N) algorithm for block sensitivity.
- An O(N) algorithm for testing 'quasisymmetry.'
- A notion of a 'tree decomposition' of a Boolean function, a proof that the decomposition is unique, and an O(N^1.585 log N) algorithm for finding the decomposition.
- A subexponential-time approximation algorithm for space-bounded
quantum query complexity. To develop this algorithm, we give a new way
to search systematically through unitary matrices using
finite-precision arithmetic.
The algorithms discussed have been implemented in a linkable library.
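In the same spirit, two of the simpler properties can be computed from a truth table by direct brute force (our own routines, far slower than the algorithms above; f is a list of length 2^n indexed by input):

```python
from itertools import combinations

def sensitivity(f, n):
    """Max over inputs x of the number of bits whose flip changes f(x)."""
    return max(sum(1 for i in range(n) if f[x] != f[x ^ (1 << i)])
               for x in range(2 ** n))

def certificate_complexity(f, n):
    """Max over x of the smallest set of coordinates that fixes f's value."""
    best = 0
    for x in range(2 ** n):
        for k in range(n + 1):
            for S in combinations(range(n), k):
                mask = sum(1 << i for i in S)
                # Does fixing x's bits on S force f to equal f(x)?
                if all(f[y] == f[x] for y in range(2 ** n)
                       if (y & mask) == (x & mask)):
                    break
            else:
                continue   # no size-k certificate at x; try k+1
            best = max(best, k)
            break
    return best

# OR on 3 bits: both measures equal 3, witnessed at the all-zero input.
f_or = [int(x != 0) for x in range(8)]
print(sensitivity(f_or, 3), certificate_complexity(f_or, 3))  # 3 3
```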
S. Aaronson. Stylometric Clustering: A Comparison of Data-Driven and Syntactic Features [MS Word], 2001.
We present evidence that statistics drawn from an automated
parser can aid in stylometric analysis. The problem considered is that
of clustering a collection of texts by author, without any baseline
texts of known authorship. We use a feature based on relative
frequencies of grammar rules as computed by an automated parser, the CMU
Link Grammar parser. We compare this feature against standard
"data-driven" features: letter, punctuation, and function word
frequencies, and mean and variance of sentence length. On two corpora
-- (1) the Federalist Papers and (2) selections from Twain, Hawthorne,
and Melville -- our results indicate that syntactic and data-driven
features combined yield accuracy as good as or better than data-driven
features alone. We analyze the results using a cluster validity measure
that may be of independent interest.
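A minimal sketch of the data-driven baseline (placeholder word list and hypothetical usage; the paper's exact features and corpora differ):

```python
import numpy as np
from sklearn.cluster import KMeans

# A few common English function words, as a stand-in for the full list.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "he", "for", "it", "with", "as", "his"]

def feature_vector(text):
    """Relative frequency of each function word in the text."""
    tokens = text.lower().split()
    n = max(len(tokens), 1)
    return [tokens.count(w) / n for w in FUNCTION_WORDS]

def cluster_texts(texts, n_authors):
    """Cluster texts by author using k-means on function-word frequencies."""
    X = np.array([feature_vector(t) for t in texts])
    return KMeans(n_clusters=n_authors, n_init=10,
                  random_state=0).fit_predict(X)

# Hypothetical usage: labels = cluster_texts(documents, n_authors=2)
```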
S. Aaronson. Optimal Demand-Oriented Topology for Hypertext Systems [PS] [PDF], in Proceedings of ACM SIGIR 1997, pp. 168-177.
This paper proposes an algorithm to aid in the design of
hypertext systems. A numerical index is presented for rating the
organizational efficiency of hypertexts based on (1) user demand for
pages, (2) the relevance of pages to one another, and (3) the
probability that users can navigate along hypertext paths without
getting lost. Maximizing this index under constraints on the number of
links is proven NP-complete, and a genetic algorithm is used to search
for the optimal link topology. An experiment with computer users
provides evidence that a numerical measure of hypertext efficiency might
have practical value.
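A skeletal version of such a search might look as follows (the fitness function here is a placeholder; the paper's index combines page demand, inter-page relevance, and navigability):

```python
import random

def random_topology(n_pages, max_links):
    """A topology is a set of directed links (i, j), i != j."""
    candidates = [(i, j) for i in range(n_pages)
                  for j in range(n_pages) if i != j]
    return set(random.sample(candidates, max_links))

def crossover(a, b, max_links):
    """Child inherits a random mix of its parents' links."""
    pool = sorted(a | b)
    return set(random.sample(pool, min(max_links, len(pool))))

def mutate(links, n_pages, rate=0.1):
    """Occasionally swap one link for a random new one."""
    links = set(links)
    if links and random.random() < rate:
        links.discard(random.choice(sorted(links)))
        i, j = random.sample(range(n_pages), 2)
        links.add((i, j))
    return links

def evolve(score, n_pages, max_links, pop=30, gens=50):
    population = [random_topology(n_pages, max_links) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=score, reverse=True)
        survivors = population[: pop // 2]
        children = [mutate(crossover(*random.sample(survivors, 2), max_links),
                           n_pages)
                    for _ in range(pop - len(survivors))]
        population = survivors + children
    return max(population, key=score)

# Placeholder index: reward links pointing out of a designated hub, page 0.
best = evolve(lambda L: sum(1 for (i, _) in L if i == 0),
              n_pages=8, max_links=12)
```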
Surveys and Book Reviews
S. Aaronson. How Much Structure Is Needed for Huge Quantum Speedups? [PDF], in Proceedings of the 28th Solvay Conference, 2022.
I survey, for a general scientific audience, three decades of research into
which sorts of problems admit exponential speedups via quantum
computers---from the classics (like the algorithms of Simon and
Shor), to the breakthrough of Yamakawa and
Zhandry from April 2022. I discuss both the quantum circuit model,
which is what we ultimately care about in practice but where our
knowledge is radically incomplete, and the so-called oracle or
black-box or query complexity model, where we've managed to
achieve a much more thorough understanding that then informs our
conjectures about the circuit model. I discuss the strengths and
weaknesses of switching attention to sampling tasks, as was done in
the recent quantum supremacy experiments. I make some
skeptical remarks about widely-repeated claims of exponential quantum
speedups for practical machine learning and optimization problems.
Through many examples, I try to convey the "law of conservation
of weirdness," according to which every problem admitting an exponential
quantum speedup must have some unusual property to allow the amplitude
to be concentrated on the unknown right answer(s).
Edited transcript of a rapporteur talk delivered at the 28th Solvay Physics Conference in Brussels, Belgium on May 21, 2022.
S. Aaronson. Open Problems Related to Quantum Query Complexity [PDF], to appear in ACM Transactions on Quantum Computing, 2021.
I offer a case that quantum query complexity still has loads of enticing and fundamental open problems---from relativized QMA versus QCMA and BQP versus IP, to time/space tradeoffs for collision and element distinctness, to polynomial degree versus quantum query complexity for partial functions, to the Unitary Synthesis Problem and more.
S. Aaronson. The Busy Beaver Frontier [PDF], SIGACT News 51(3):31-55, 2020.
The Busy Beaver function, with its incomprehensibly rapid growth, has captivated generations of computer scientists, mathematicians, and hobbyists. In this survey, I offer a personal view of the BB function 58 years after its introduction, emphasizing lesser-known insights, recent progress, and especially favorite open problems. Examples of such problems include: when does the BB function first exceed the Ackermann function? Is the value of BB(20) independent of set theory? Can we prove that BB(n+1) > 2^BB(n) for large enough n? Given BB(n), how many advice bits are needed to compute BB(n+1)? Do all Busy Beavers halt on all inputs, not just the 0 input? Is it decidable, given n, whether BB(n) is even or odd?
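The definition rewards a concrete look: here is a brute-force sketch of the step-count variant S(n) for tiny n (our illustration; the fixed step cutoff happens to suffice for n ≤ 2, and for larger n no such shortcut can exist, which is the whole point):

```python
from itertools import product

def run(machine, cutoff):
    """Run from state 0 on a blank tape; return the number of steps if the
    machine halts within the cutoff, else None."""
    tape, pos, state = {}, 0, 0
    for step in range(1, cutoff + 1):
        write, move_right, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move_right else -1
        if nxt is None:               # this transition enters the halt state
            return step
        state = nxt
    return None

def busy_beaver_steps(n, cutoff=100):
    """Max steps over all halting n-state, 2-symbol machines (cutoff-limited)."""
    keys = [(s, b) for s in range(n) for b in (0, 1)]
    entries = list(product((0, 1), (False, True), list(range(n)) + [None]))
    best = 0
    for rows in product(entries, repeat=len(keys)):
        steps = run(dict(zip(keys, rows)), cutoff)
        if steps is not None:
            best = max(best, steps)
    return best

print(busy_beaver_steps(1))  # 1
print(busy_beaver_steps(2))  # 6, the known value of S(2)
```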
S. Aaronson. P=?NP [PDF], in Open Problems in Mathematics (Springer), 2016.
ECCC TR17-004.
In 1955, John Nash sent a remarkable letter to the National Security Agency, in which—seeking to build theoretical foundations for cryptography—he all but formulated what today we call the P=?NP problem, considered one of the great open problems of science. Here I survey the status of this problem in 2016, for a broad audience of mathematicians, scientists, and engineers. I offer a personal perspective on what it’s about, why it’s important, why it’s reasonable to conjecture that P≠NP is both true and provable, why proving it is so hard, the landscape of related problems, and crucially, what progress has been made in the last half-century toward solving those problems. The discussion of progress includes diagonalization and circuit lower bounds; the relativization, algebrization, and natural proofs barriers; and the recent works of Ryan Williams and Ketan Mulmuley, which (in different ways) hint at a duality between impossibility proofs and algorithms.
S. Aaronson (with A. Bouland and L. Schaeffer). The Complexity of Quantum States and Transformations: From Quantum Money to Black Holes [PDF], Barbados Lecture Notes, 2016. ECCC TR16-109.
This mini-course will introduce participants to an exciting frontier for quantum computing theory: namely, questions involving the computational complexity of preparing a certain quantum state or applying a certain unitary transformation. Traditionally, such questions were considered in the context of the Nonabelian Hidden Subgroup Problem and quantum interactive proof systems, but they are much broader than that. One important application is the problem of “public-key quantum money” – that is, quantum states that can be authenticated by anyone, but only created or copied by a central bank – as well as related problems such as copy-protected quantum software. A second, very recent application involves the black-hole information paradox, where physicists realized that for certain conceptual puzzles in quantum gravity, they needed to know whether certain states and operations had exponential quantum circuit complexity. These two applications (quantum money and quantum gravity) even turn out to have connections to each other! A recurring theme of the course will be the quest to relate these novel problems to more traditional computational problems, so that one can say, for example, “this quantum money is hard to counterfeit if that cryptosystem is secure,” or “this state is hard to prepare if PSPACE is not in PP/poly.” Numerous open problems and research directions will be suggested, many requiring only minimal quantum background. Some previous exposure to quantum computing and information will be assumed, but a brief review will be provided.
S. Aaronson. Quantum Machine Learning Algorithms: Read the Fine Print [PDF], Nature Physics 11:291-293, 2015.
New quantum algorithms promise exponential speedup for machine learning, clustering, and finding patterns in big data. But in order to achieve real speedup, we need to delve into the details.
S. Aaronson. The Ghost in the Quantum Turing Machine [PDF] [PS], in The Once and Future Turing, edited by S. Barry Cooper and Andrew Hodges, 2016.
In honor of Alan Turing's hundredth birthday, I unwisely set out some thoughts
about one of Turing's obsessions throughout his life, the question of physics
and free will. I focus relatively narrowly on a notion that I call
"Knightian freedom": a certain kind of in-principle physical unpredictability that goes beyond probabilistic
unpredictability. Other, more metaphysical aspects of free will I regard as possibly outside the scope of science.
I examine a viewpoint, suggested independently by Carl Hoefer, Cristi Stoica,
and even Turing himself, that tries to find scope for
"freedom" in the universe's boundary conditions rather than
in the dynamical laws. Taking this viewpoint seriously leads to many
interesting conceptual problems. I investigate how far one can go toward
solving those problems, and along the way, encounter (among other things) the
No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of
time, the holographic principle, Newcomb's paradox, Boltzmann brains,
algorithmic information theory, and the Common Prior Assumption. I also
compare the viewpoint explored here to the more radical speculations of Roger Penrose.
The result of all this is an unusual perspective on time, quantum mechanics,
and causation, of which I myself remain skeptical, but which has several
appealing features. Among other things, it suggests interesting empirical
questions in neuroscience, physics, and cosmology; and takes a millennia-old
philosophical debate into some underexplored territory.
S. Aaronson. Get Real [PDF] [HTML], Nature Physics News & Views, 8:443–444, 2012.
Do quantum states offer a faithful representation of reality or
merely encode the partial knowledge of the experimenter? A new theorem
illustrates how the latter can lead to a contradiction with quantum
mechanics.
S. Aaronson. Why Philosophers Should Care About Computational Complexity [PS] [PDF], pp. 261-328 in Computability: Turing, Gödel, Church, and Beyond, edited by B. J. Copeland, C. Posy, and O. Shagrir, MIT Press, 2013. ECCC TR11-108, arXiv:1108.1791.
One might think that, once we know something is computable, how efficiently
it can be computed is a practical question with little further
philosophical importance. In this essay, I offer a detailed case that
one would be wrong. In particular, I argue that computational complexity theory---the
field that studies the resources (such as time, space, and randomness)
needed to solve computational problems---leads to new perspectives on
the nature of mathematical knowledge, the strong AI debate,
computationalism, the problem of logical omniscience, Hume's problem of
induction, Goodman's grue riddle, the foundations of quantum mechanics,
economic rationality, closed timelike curves, and several other topics
of philosophical interest. I end by discussing aspects of complexity
theory itself that could benefit from philosophical analysis.
[This essay in Spanish]
S. Aaronson. QIP = PSPACE Breakthrough (Technical Perspective) [HTML] [PDF], Communications of the ACM, 53(12):101, December 2010.
S. Aaronson. Why Quantum Chemistry Is Hard [HTML] [PS] [PDF], Nature Physics News & Views, 5(10):707-708, 2009.
The burgeoning field of quantum information science is not only
about building a working device. Already we can learn a lot by thinking
about how computation works under the rule of quantum mechanics.
S. Aaronson. Are Quantum States Exponentially Long Vectors? [PS] [PDF], shorter version in Proceedings of the 2005 Oberwolfach Meeting on Complexity Theory. quant-ph/0507242.
I'm grateful to Oded Goldreich for inviting me to the
Oberwolfach meeting. In this extended abstract, which is based on a
talk that I gave there, I demonstrate that gratitude by explaining why
Goldreich's views about quantum computing are wrong.
S. Aaronson. NP-complete Problems and Physical Reality [PS] [PDF], SIGACT News Complexity Theory Column, March 2005. quant-ph/0502072.
Can NP-complete problems be solved efficiently in the
physical universe? I survey proposals including soap bubbles,
protein folding, quantum computing, quantum advice, quantum
adiabatic algorithms, quantum-mechanical nonlinearities, hidden
variables, relativistic time dilation, analog computing,
Malament-Hogarth spacetimes, quantum gravity, closed timelike
curves, and "anthropic computing." The section on soap bubbles
even includes some "experimental" results. While I do not
believe that any of the proposals will let us solve
NP-complete problems efficiently, I argue that by
studying them, we can learn something not only about computation but
also about physics.
Update: This article leaves it as an "exercise for the reader" to prove that BQP_CTC ⊆ SQG. While that's now known to hold---as follows from BQP_CTC = PSPACE (A.-Watrous 2008) together with SQG = PSPACE (Gutoski-Wu 2010)---it was not easily proved with any of the tools known in 2005, and was therefore a much harder "exercise" than I'd intended.
S. Aaronson. Is P Versus NP Formally Independent? [PS] [PDF], Bulletin of the EATCS 81, October 2003.
This is a survey about the title question, written for people who (like
the author) see logic as forbidding, esoteric, and remote from their
usual concerns. Beginning with a crash course on Zermelo-Fraenkel set
theory, it discusses oracle independence; natural proofs; independence
results of Razborov, Raz, DeMillo-Lipton, Sazanov, and others; and
obstacles to proving P vs. NP independent of strong logical theories.
It ends with some philosophical musings on when one should expect a
mathematical question to have a definite answer.
S. Aaronson. Book Review: A New Kind of Science [PS] [PDF], Quantum Information and Computation 2(5):410-423, 2002. quant-ph/0206089.
This is a critical review of the book 'A New Kind of Science' by
Stephen Wolfram. We do not attempt a chapter-by-chapter evaluation, but
instead focus on two areas: computational complexity and fundamental
physics. In complexity, we address some of the questions Wolfram raises
using standard techniques in theoretical computer science. In physics,
we examine Wolfram's proposal for a deterministic model underlying
quantum mechanics, with 'long-range threads' to connect entangled
particles. We show that this proposal cannot be made compatible with
both special relativity and Bell inequality violations.
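For readers who want the Bell-inequality side of that claim in numbers: with the textbook measurement angles on the singlet state, the CHSH correlation reaches 2√2, beating the bound of 2 that any local-hidden-variable account must obey (a standard calculation, included here only as illustration):

```python
import numpy as np

def correlation(a, b):
    """E(a,b) = -cos(a-b): spin correlation for the singlet state."""
    return -np.cos(a - b)

# Textbook optimal angles for the CHSH test.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4
S = (correlation(a0, b0) - correlation(a0, b1)
     + correlation(a1, b0) + correlation(a1, b1))
print(abs(S), 2 * np.sqrt(2))  # both ~2.828, above the classical bound of 2
```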