PHYS771 Lecture 16: Interactive Proofs and More

Scott Aaronson

Scribe: Chris Granade


Last lecture, I ended by giving you a puzzle problem: can I do ornithology without leaving my office?

I want to know if all ravens are black. The old-fashioned approach would involve going outside, looking for a bunch of ravens and seeing if they're black or not. The more modern approach: look around the room at all of the things which are not black and note that they also are not ravens. In this way, I become increasingly convinced that all not-black things are not ravens, or equivalently, that all ravens are black. Can I be a leader in the field of ornithology this way?

A: Well, when you're doing the not-black things, there are a lot more possible observations. You could get the same effect as measuring millions of not-black things by measuring just one raven.
Scott: Yeah, I think that's a large part of it. Anyone else?
A: You wouldn't be getting a good random sample of non-black things by just sitting in your office.
Scott: I wouldn't be getting a random sample of all ravens either. I'd be getting some of those ravens that live in Waterloo.

Something completely tangential that I'm reminded of: there's this game where you're given four cards, each of which you're promised has a letter on one side and a number on the other. If the visible faces of the four cards are K, Q, 3, and 1, which cards do you need to flip over to test the rule that all cards with a K on one side have a 3 on the other?

Apparently, if you give this puzzle to people, the vast majority get it wrong. In order to test that K ⇒ 3, you need to flip the K and the 1. On the other hand, you can give people a completely equivalent problem, where they're a bouncer at a bar and need to know if anyone under 21 (or 19 in Canada) is drinking, and where they're told that there's someone who is drinking, someone who isn't drinking, someone who's over 21 and someone who's under 21. In this scenario, funnily enough, most people get it right: you check the age of the person who's drinking, and the drink of the person who's under 21. This is a completely equivalent problem to the cards, but if you give it to people in the abstract form, many say (for example) that you have to turn over the 3 and the Q, which is wrong. So, people seem to have this built-in ability to reason logically about social situations, but they have to be painstakingly taught to apply that same ability to abstract mathematical problems.

Anyway, the point is that there are many, many more not-black things than there are ravens, so if there existed a not-black raven, we would be much more likely to find it by randomly sampling ravens than by randomly sampling not-black things. Therefore, if we sample ravens and fail to find a not-black one, then we can be much more confident in saying that "all ravens are black," because our hypothesis had a much higher chance of being falsified by sampling ravens.
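To make the asymmetry concrete, here's a minimal Monte Carlo sketch in Python. The population sizes are made up for illustration: 10^4 ravens, 10^8 not-black things, and (counterfactually) exactly one white raven hiding among both groups.

    import random

    NUM_RAVENS = 10_000
    NUM_NOT_BLACK = 10**8   # the one white raven is also one of these
    TRIALS = 1_000          # observations allowed per strategy

    def sample_ravens():
        """True if 1,000 random raven sightings include the white raven."""
        return any(random.randrange(NUM_RAVENS) == 0 for _ in range(TRIALS))

    def sample_not_black():
        """True if 1,000 random not-black sightings include the white raven."""
        return any(random.randrange(NUM_NOT_BLACK) == 0 for _ in range(TRIALS))

    runs = 2_000
    print(sum(sample_ravens() for _ in range(runs)) / runs)     # ~ 1-(1-10^-4)^1000 ~ 0.10
    print(sum(sample_not_black() for _ in range(runs)) / runs)  # ~ 1000/10^8 = 10^-5

Sampling ravens falsifies the hypothesis roughly ten thousand times more often, which is exactly why the office-bound strategy produces so little evidence.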


Interactive Proofs

Why should we in quantum computing care about interactive proofs? I'll answer this question in a rather unconventional way, by asking a different question: can quantum computers be simulated efficiently by classical computers?

I was talking to Ed Fredkin a while ago, and he said that he believes that the whole universe is a classical computer and thus everything can be simulated classically. But instead of saying that quantum computing is impossible, he takes things in a very interesting direction, and says that BQP must be equal to P. Even though we have factoring algorithms for quantum computers that are faster than known classical algorithms, that doesn't mean that there isn't a fast classical factoring algorithm that we don't know about. On the other side you have David Deutsch, who makes an argument that we've talked about several times before: if Shor's Algorithm doesn't involve these "parallel universes," then how is it factoring the number? Where was the number factored, if not using these exponentially many universes? I guess one way that you could criticize Deutsch's argument (certainly not the only way), is to say he's assuming that there isn't an efficient classical simulation. We believe that there's no way for Nature to perform the same computation using polynomial classical resources, but we don't know that. We can't prove that.

Q: How many people have tried to prove that you can't classically simulate a quantum computer?
Scott: I don't even know that anyone has looked specifically at this question or tried to prove this directly. The crucial point is that if you could prove that P≠BQP, then you would have also proved that P≠PSPACE. (Physicists might think it's obvious these classes are unequal and it doesn't even require proof, but that's another matter...) As for going in the other direction and proving P = BQP, I guess people have tried that. I don't know if I should say this in public, but I've even spent a day or two on it. It would at least be nice to put BQP in AM, or the polynomial hierarchy---some preliminary fact like that. Unfortunately, I think we simply don't yet understand efficient computation well enough to answer such questions, completely leaving aside the quantum aspect.

The question is, if P≠BQP, P≠NP, etc., why can't anyone prove these things? There are several arguments that have been given for that. One of them is relativization. We can talk about giving a P computer and a BQP computer access to the same oracle. That is, give them the same function that they can compute in a single computation step. There will exist an oracle that makes them equal and there will exist another oracle that makes them unequal. The oracle that makes them equal, for example, could just be a PSPACE oracle which kind of sandwiches everything and just makes everything equal to PSPACE. The oracle that makes them unequal could be an oracle for Simon's Problem, or some period-finding problem that the quantum computer can solve but the classical one can't. Then, you see that any proof technique is going to have to be sensitive to the presence of these oracles. This doesn't sound like such a big deal until you realize that almost every proof technique we have is not sensitive to the presence of oracles. It's very hard to come up with a technique that is sensitive, and that---to me---is why interactive proofs are interesting. This is the one clear and unambiguous example I can show you of a technique we have that doesn't relativize. In other words, we can prove that something is true, which wouldn't be true if you just gave everything an oracle. You can see this as the foot in the door or the one distant point of light in this cave that we're stuck in. Through the interactive proof results, we can get a tiny glimmer of what the separation proofs would eventually have to look like if we ever came up with them. The interactive proof techniques seem much too weak to prove anything like P≠NP, or else you would have heard about it. (Note: A year after giving this lecture, Avi Wigderson and I proposed algebrization, which gives a formal explanation for why the interactive proof techniques are too weak to prove P≠NP and other basic conjectures in complexity theory.) Already, though, we can use these techniques to get some non-relativizing separation results. I'll show you some examples of that also.

Q: What about P versus BPP? What's the consensus there?

The consensus is that P and BPP actually are equal. We know from Impagliazzo and Wigderson that if we could prove that there exists a problem solvable in 2^n time that requires circuits of size 2^{Ω(n)}, then we could construct a very good pseudorandom generator; that is, one which cannot be distinguished from random by any circuit of fixed polynomial size. Once you have such a generator, you can use it to derandomize any probabilistic polynomial-time algorithm. You can feed your algorithm the output of the pseudorandom generator, and your algorithm won't be able to tell the difference between it and a truly random string. Therefore, the probabilistic algorithm could be simulated deterministically. So we really seem to be seeing a difference between classical randomness and quantum randomness. It seems like classical randomness really can be efficiently simulated by a deterministic algorithm, whereas quantum "randomness" can't. One intuition for this is that, with a classical randomized algorithm, you can always just "pull the randomness out" (i.e., treat the algorithm as deterministic and the random bits as part of its input). On the other hand, if we want to simulate a quantum algorithm, what does it mean to "pull the quantumness out?"
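Here's a minimal sketch of that derandomization step, assuming we already had such a generator. The names (derandomize, toy_prg, bpp_alg) are mine, and toy_prg is just a stand-in: the real generator would be built from the assumed hard function and would stretch O(log n)-bit seeds to poly(n) bits, so enumerating all seeds takes polynomial time.

    from itertools import product

    def derandomize(randomized_alg, x, prg, seed_len):
        """Majority vote of randomized_alg(x, prg(seed)) over every seed.
        If prg fools the algorithm, the majority answer matches what the
        algorithm would output given truly random bits."""
        votes = sum(bool(randomized_alg(x, prg(seed)))
                    for seed in product([0, 1], repeat=seed_len))
        return 2 * votes > 2 ** seed_len

    toy_prg = lambda seed: list(seed) * 8          # stand-in "stretcher," not a real PRG
    bpp_alg = lambda x, bits: any(bits)            # toy algorithm: accepts if any bit is 1
    print(derandomize(bpp_alg, None, toy_prg, 3))  # True, same as on truly random bits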

Q: I could see how, with two classes that are different, adding an oracle could kind of "boost" them up to the same level, but if two classes are the same, intuitively, how can giving them more power make them different?
Scott: That's a good question, and the key is to realize that when we feed an oracle to a class, we aren't acting on the class itself. We're acting on the definition of the class. As an example, even though we believe P = BPP in the real world, it's very easy to construct an oracle O where P^O ≠ BPP^O. Clearly, if what we were doing was operating on the classes, then operating on two equal classes would give two results that were still equal. But that's not what we're doing, and maybe the notation is confusing that way. (A rough analogy: "The third planet from the Sun is the third planet from the Sun" is a tautology, whereas "Earth is the third planet from the Sun" is not a tautology---even though, as it turns out, Earth = the third planet from the Sun.)
Q: Are there any classes that are provably equal, for which there's an oracle that makes them unequal?
Scott: Yes. We're going to see that today.

So let's see this one example of a non-relativizing technique. So we've got a Boolean formula (like the ones used in SAT) in n variables which is not satisfiable. What we'd like is a proof that it's not satisfiable. That is, we'd like to be convinced that there is no setting of the n variables that causes our formula to evaluate to TRUE. This is what we saw before as an example of a coNP-complete problem. The trouble is that we don't have enough time to loop through every possible assignment and check that none of them work. Now the question that was asked in the 80s was, "what if we have some super-intelligent alien that comes to Earth and can interact with us?" We don't trust the alien and its technology, but we'd like it to prove to us that the formula is unsatisfiable in such a way that we don't have to trust it. Is this possible?

Normally in computational complexity, when you can't answer a question, the first thing you do is find an oracle relative to which the answer goes one way or the other. It's probably like what physicists do when they do perturbative calculations. You do it because you can, not because it necessarily tells you what you ultimately want to know. So this is what Fortnow and Sipser did in the late 80s. They said, all right, suppose you have an exponentially long string, and the alien wants to convince you that this exponentially long string is the all-zero string. That is, that there are no 1's anywhere. So can this prover do it? Let's think of what could happen. The prover could say, "the string is all zeroes."
"Well, I don't believe you. Convince me."
"Here, this location's a zero. This one's also a zero. So is this one..."
OK, now there are only 2^{10,000} bits left to check, and so the alien says "trust me, they're all zeroes." There's not a whole lot the prover can do. Fortnow and Sipser basically formally proved this obvious intuition. Take any protocol of messages between you and the prover that terminates with you saying "yes" if you're convinced and "no" if you aren't. What we could then do is pick one of the bits of the string at random, surreptitiously change it to a 1, and almost certainly the entire protocol will go through exactly as before. You'll still say that the string is all zeroes.

As always, we can define a complexity class: IP. This is the set of problems where you can be convinced of a "yes" answer by interacting with the prover. So we talked before about these classes like MA and AM---those are where you have a constant number of interactions. MA is where the prover sends a message to you and you perform a probabilistic computation to check it. In AM, you send a message to the prover, and then the prover sends a message back to you and you run a probabilistic computation. It turns out that with any constant number of interactions, you get the same class AM, so let's be generous and allow polynomially many interactions. The resulting class is IP. So what Fortnow and Sipser did is they gave a way of constructing an oracle relative to which coNP is not in IP. They showed that, relative to this oracle, you cannot verify the unsatisfiability of a formula via a polynomial number of interactions with a prover. Following the standard paradigm of the field, of course we can't prove unconditionally that coNP is not in IP, but this gives us some evidence; that is, it tells us what we might expect to be true.

Now for the bombshell (which was discovered by Lund, Fortnow, Karloff, and Nisan): in the "real," unrelativized world, how do we show that a formula is unsatisfiable? We're going to somehow have to use the structure of the formula. We'll have to use that it's a Boolean formula that was explicitly given to us, and not just some abstract Boolean function. What will we do? Let's assume this is a 3SAT problem (since 3SAT is NP-complete, that assumption is without loss of generality). There's a bunch of clauses (say m of them) involving three variables each, and we want to verify that there's no way to satisfy all the clauses. Now what we'll do is map this formula to a polynomial over a finite field. This trick is called arithmetization. Basically, we're going to convert this logic problem into an algebra problem, and that'll give us more leverage to work with. This is how it works: we rewrite our 3SAT instance as a product of degree-3 polynomials. Each clause---that is, each OR of three literals---just becomes 1 minus the product of 1 minus each of the literals: e.g., (x OR y OR z) becomes 1-(1-x)(1-y)(1-z). Notice that, so long as x, y, and z can only take the values 0 and 1, this polynomial is exactly equivalent to the logic expression that we started with. But now, what we can do is reinterpret the polynomial as being over some much larger field. Pick some reasonably large prime number N, and we'll interpret the polynomial as being over GF(N) (the field with N elements). I'll call the polynomial P(x1,...,xn). Now what we want to verify is that there are no satisfying assignments: equivalently, that if you take P(x1,...,xn) and sum it over all 2^n possible Boolean settings of x1,...,xn, then you get zero. The problem, of course, is that this doesn't seem any easier than what we started with! We've got this sum over exponentially many terms, and we have to check every one of them and make sure that they're all zero. But now, we can have the prover help us. If we just have this string of all zeroes, and he just tells us that it's all zeroes, we don't believe him. But now, we've lifted everything to a larger field and we have some more structure to work with.
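Here's a minimal sketch of arithmetization in Python. The clause encoding is mine (a clause is three signed integers, with -i meaning NOT xi), but the polynomial is exactly the one just described; the brute-force sum at the end is only there to sanity-check the claim that an unsatisfiable formula sums to zero.

    from itertools import product

    N = 1_000_003  # a reasonably large prime; all arithmetic is over GF(N)

    def P(clauses, assignment):
        """The arithmetized formula: product over clauses of
        1 - (1 - l1)(1 - l2)(1 - l3), evaluated mod N."""
        result = 1
        for clause in clauses:
            prod = 1
            for lit in clause:
                v = assignment[abs(lit) - 1]
                val = v if lit > 0 else 1 - v   # the literal's value (a field element)
                prod = prod * (1 - val) % N
            result = result * (1 - prod) % N    # multiply in this clause's polynomial
        return result

    def cube_sum(clauses, n):
        """Brute-force sum of P over all 2^n Boolean assignments (exponential!)."""
        return sum(P(clauses, a) for a in product([0, 1], repeat=n)) % N

    # (x1 OR x1 OR x1) AND (NOT x1 OR NOT x1 OR NOT x1) is unsatisfiable:
    print(cube_sum([(1, 1, 1), (-1, -1, -1)], 1))  # 0

On Boolean inputs P is 1 on satisfying assignments and 0 everywhere else, so the sum over the cube actually counts satisfying assignments, a fact we'll use again below.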

Q: Why does it follow that if the formula is unsatisfiable, then the sum evaluates to zero?
Scott: If the formula is unsatisfiable, then no matter what setting x1,...,xn you pick for the variables, there's going to be some clause in the formula that isn't satisfied. Hence one of the degree-3 polynomials that we're multiplying together will be zero, and hence the product will itself be zero. And since this is true for all 2^n Boolean settings of x1,...,xn, you'll still get zero if you sum P(x1,...,xn) over all of them.

So now what can we do? What we ask the prover to do is to sum for us over all 2^{n-1} possible settings of the variables x2, ..., xn, leaving x1 unfixed. Thus, the prover sends us a univariate polynomial Q1 in the first variable. Since the polynomial we started with had poly(n) degree, the prover can do this by sending us a polynomial number of coefficients. Then, what we have to verify is that Q1(0)+Q1(1)=0 (everything being mod N). How can we do that? Well, the prover has given us the entire polynomial Q1, so we can evaluate it anywhere we like. So just pick an r1 at random from our field. Now, what we would like to do is verify that Q1(r1) equals what it's supposed to. Forget about 0 and 1, we're just going to go somewhere else in the field. Thus, we send r1 to the prover. Now the prover sends a new polynomial Q2, where the first variable is fixed to be r1, but where x2 is left unfixed and x3, ..., xn are summed over all possible Boolean values (like before). We still don't know that the prover hasn't been lying to us and sending bullshit polynomials. So what can we do?

Check that Q2(0)+Q2(1)=Q1(r1), then pick another element r2 at random and send it to the prover. In response, he'll send us a polynomial Q3. This will be a sum of P(x1,...,xn) over all possible Boolean settings of x4 up to xn, with x1 set to r1 and x2 set to r2, and x3 left unfixed. Again, we'll check and make sure that Q3(0) + Q3(1) = Q2(r2). We'll continue by picking a random r3 and sending it along to the prover. This keeps going for n iterations, until we reach the last variable. What do we do at the last iteration? At that point, we can just evaluate P(r1,...,rn) ourselves without the prover's help, and check directly whether it equals Qn(rn).
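Here is a minimal sketch of the whole protocol, continuing the Python above (it reuses P, N, and product from the arithmetization sketch). To avoid symbolic algebra, the prover sends each Qi as its evaluations at the points 0, ..., d, and the verifier evaluates Qi elsewhere by Lagrange interpolation; the function names are mine.

    import random

    def lagrange_eval(evals, r):
        """Evaluate at r the unique polynomial of degree < len(evals) passing
        through the points (t, evals[t]) for t = 0..d, all mod N."""
        total, k = 0, len(evals)
        for i in range(k):
            num = den = 1
            for j in range(k):
                if j != i:
                    num = num * (r - j) % N
                    den = den * (i - j) % N
            total = (total + evals[i] * num * pow(den, N - 2, N)) % N  # den^-1 via Fermat
        return total

    def honest_prover(clauses, n, fixed, d):
        """Q_i: fix the next variable to each t = 0..d and brute-force sum
        P over all Boolean settings of the remaining variables."""
        i = len(fixed)
        return [sum(P(clauses, list(fixed) + [t] + list(rest))
                    for rest in product([0, 1], repeat=n - i - 1)) % N
                for t in range(d + 1)]

    def sumcheck_verify(clauses, n, claimed, prover):
        """Accept iff the prover convinces us that the sum over the cube = claimed."""
        d = 3 * len(clauses)              # upper bound on the degree of each Q_i
        fixed, current = [], claimed
        for _ in range(n):
            q = prover(clauses, n, fixed, d)
            if (lagrange_eval(q, 0) + lagrange_eval(q, 1)) % N != current:
                return False              # caught the prover in a lie
            r = random.randrange(N)       # our random challenge
            current = lagrange_eval(q, r)
            fixed.append(r)
        return P(clauses, fixed) == current  # final check: no prover needed

    print(sumcheck_verify([(1, 1, 1), (-1, -1, -1)], 1, 0, honest_prover))  # True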

We have a bunch of tests that we're doing along the way. My first claim is that if there is no satisfying assignment, and if the prover was not lying to us, then each of the n tests accepts with certainty. The second claim is that if there was a satisfying assignment, then with high probability, at least one of these tests would fail. Why is that the case? The way I think of it is that the prover is basically like the girl in Rumpelstiltskin. The prover is just going to get trapped in bigger and bigger lies as time goes on, until finally the lies become so preposterous that we're able to catch him in one. This is what's going on. Why? Let's say that, for the first iteration, the real polynomial that the prover should give us is Q1, but that the prover gives us Q1' instead. Here's the thing: these are polynomials of not too large a degree. The final polynomial, P, has degree at most three times the number of clauses, and we can easily choose the field size to be much bigger than that. So let the degree d of the polynomial be much smaller than the field size N.

A quick question: suppose we have two polynomials P1 and P2 of degree d. How many points can they be equal at (assuming they aren't identical)? Consider the difference P1 − P2. Since this is also a polynomial of degree at most d, by the Fundamental Theorem of Algebra, it can have at most d distinct roots (again, assuming it's not identically zero). Thus, two polynomials that are not equal can agree in at most d places, where d is the degree. This means that if these are polynomials over a field of size N, and we pick a random element in the field, we can bound the probability that the two will agree at that point: it's at most d/N.
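A tiny numerical illustration of that bound, over a deliberately small field so we can check every point (the particular polynomials are arbitrary):

    M = 101                                          # small prime field GF(M)
    p1 = lambda x: (x**3 + 2*x + 5) % M
    diff = lambda x: x * (x - 1) * (x - 2) % M       # degree 3, roots at 0, 1, 2
    p2 = lambda x: (p1(x) + diff(x)) % M             # another degree-3 polynomial
    print(sum(p1(x) == p2(x) for x in range(M)))     # 3: they agree only at the roots

So a uniformly random challenge from GF(M) exposes the difference with probability 1 − 3/101 ≈ 0.97.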

Going back to the protocol, we assumed that d is much less than N, and so the probability that Q1 and Q1' agree at some random element of the field is much less than 1. So when we pick r1 at random, the probability that Q1(r1)=Q1'(r1) is at most d/N. Only if we've gotten very unlucky will we pick r1 such that these are equal, so we can go on and assume that Q1(r1)≠Q1'(r1). Now, you can picture the prover sweating a little. He's trying to convince us of a lie, but maybe he can still recover. But next, we're going to pick an r2 at random. Again, the probability that he'll be able to talk himself out of the next lie is going to be at most d/N. This is the same in each of the iterations, so the probability that he can talk himself out of any of the lies is at most nd/N. We can just choose N to be big enough that this will be much smaller than 1.

Q: Why not just run this protocol over the positive integers?
Scott: Because we don't have a way of generating a random positive integer, and we need to be able to do that. So we just pick a very large finite field.

So this protocol gives us that coNP ⊆ IP. Actually, it gives us something stronger. Does anyone see the stronger thing that it gives us?

A: Strictly contained?
Scott: No, we can't show that, though it would be nice.

A standard kind of argument shows that the biggest IP could possibly be, even in our wildest dreams, is PSPACE: you can prove that anything you can do with an interactive protocol, you can simulate in PSPACE. Can we bring IP up? Make it bigger? What we were trying to verify was that all of these values of P(x1,...,xn) summed to zero, but the same proof would go through as before if we were trying to verify that they summed to some other constant (whatever we want). So that actually lets us do counting, and shows that IP contains P^♯P, which in turn we know to contain the entire polynomial hierarchy (by Toda's Theorem). After this "LFKN Theorem" came out, a number of people carried out a discussion by e-mail, and a month later, Shamir figured out that IP = PSPACE---that is, IP actually "hits the roof." I won't go through Shamir's result here, but this means that if a super-intelligent alien came to Earth, it could prove to us whether white or black has the winning strategy in chess, or whether chess is a draw. It could play us and beat us, of course, but then all we'd know is that it's a better chess player. But it can prove to us which player has the winning strategy by reducing chess to this game of summing polynomials over large finite fields. (Technical note: this only works for chess with some reasonable limit on the number of moves, like the "50-move rule" used in tournament play.)

Q: Chess on an n×n board, right?
Scott: Sure. The protocol works in particular for n = 8, but you can generalize chess to arbitrary board sizes. You just have to limit the number of moves to some polynomial in n, or else you get EXP. Then you'd need two provers to convince you---which is another story!

This is already something that is---to me---pretty counterintuitive. Like I said, it gives us a very small glimpse of the kinds of techniques we'd need to use to prove non-relativizing results like P≠NP. A lot of people seem to think that the key is somehow to transform these problems from Boolean to algebraic ones. The question is how to do that. I can show you, though, how these techniques already let you get some new lower bounds. Heck, even some quantum circuit lower bounds.

First claim: if we imagine that there are polynomial-size circuits for counting the number of satisfying assignments of a Boolean formula, then there's also a way to prove to someone what the number of solutions is. Does anyone see why this would follow from the interactive proof result? Well, notice that, to convince the verifier about the number of satisfying assignments of a Boolean formula, the prover itself does not need to have more computational power than is needed to count the number of assignments. After all, the prover just keeps having to compute these exponentially large sums! In other words, the prover for ♯P can be implemented in ♯P. If you had a ♯P oracle, then you too could be the prover. Using this fact, Lund et al. pointed out that if ♯P ⊆ P/poly---that is, if there's some circuit of size polynomial in n for counting the number of solutions to a formula of size n---then P^♯P = MA. For in MA, Merlin can give Arthur the polynomial-size circuit for solving ♯P problems, and then Arthur just has to verify that it works. To do this, Arthur just runs the interactive protocol from before, but where he plays the part of both the prover and the verifier, and uses the circuit itself to simulate the prover. This is an example of what are called self-checking programs. You don't have to trust an alleged circuit for counting the number of solutions to a formula, since you can put it in the role of a prover in an interactive protocol.
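Using the sketch from before, we can act this out: the untrusted counting device just plays the prover in the sumcheck protocol. Here corrupt_prover is my stand-in for a buggy or adversarial circuit; note that it's careful to pass the first consistency check, yet still gets caught in the next round with probability about 1 − d/N.

    def corrupt_prover(clauses, n, fixed, d):
        evals = honest_prover(clauses, n, fixed, d)
        if not fixed:                       # lie in round 1, but keep
            evals[0] = (evals[0] + 1) % N   # Q(0) + Q(1) unchanged so the
            evals[1] = (evals[1] - 1) % N   # first check still passes
        return evals

    phi = [(1, 2, 3), (-1, -2, -3)]  # satisfiable; its true count is 6
    print(sumcheck_verify(phi, 3, 6, honest_prover))   # True: correct count, verified
    print(sumcheck_verify(phi, 3, 0, honest_prover))   # False: wrong claim rejected
    print(sumcheck_verify(phi, 3, 6, corrupt_prover))  # False (with prob ~ 1 - d/N)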

Now, we can prove that the class PP, consisting of problems solvable in probabilistic polynomial-time with unbounded error, does not have linear-sized circuits. (This result is originally due to Vinodchandran.) Why? Well, there are two cases. If PP doesn't even have polynomial-sized circuits, then we're done. On the other hand, if PP does have polynomial-sized circuits, then so does P^♯P, by the basic fact (which you might enjoy proving) that P^♯P = P^PP. Therefore P^♯P = MA by the LFKN Theorem, so P^♯P = MA = PP, since PP is sandwiched in between MA and P^♯P. But one can prove (and we'll do this shortly) that P^♯P doesn't have linear-sized circuits, using a direct diagonalization argument. Therefore, PP doesn't have linear-sized circuits either.

All I'm trying to say is that once you have this interactive proof result, you can leverage it to get new circuit lower bounds. For example, you can show that there's a language in the class PP that doesn't have linear-sized circuits. In fact, for any fixed k, there's a language in PP that doesn't have circuits of size n^k. Of course, that's much weaker than showing that PP doesn't have polynomial-sized circuits, but it's something. (Note: After I gave this lecture, Santhanam improved on Vinodchandran's result, to show that for every fixed k, there's a language in the complexity class PromiseMA that doesn't have circuits of size n^k.)

I'd like to go back now and fill in the missing step in the argument. Let's say you wanted to show for some fixed k that P^♯P doesn't have circuits of size n^k. How many possible circuits are there of size n^k? Something like n^{2n^k}. Now what we can do is define a Boolean function f by looking at the behavior of all circuits of size n^k. Order the 2^n possible inputs of size n as x1, ..., x_{2^n}. If at least half of the circuits accept x1, then set f(x1) = 0, while if more than half of the circuits reject x1, then set f(x1) = 1. This kills off at least half of the circuits of size n^k (i.e., causes them to fail at computing f on at least one input). Now, of those circuits that got the "right answer" for x1, do the majority of them accept or reject x2? If the majority accept, then set f(x2) = 0. If the majority reject, then set f(x2) = 1. Again, this kills off at least half of those circuits remaining. We continue this Darwinian process where each time we define a new value of our function, we kill off at least half of the remaining circuits of size n^k. After log2(n^{2n^k}) + 1 ≈ 2n^k log(n) steps, we will have killed off all of the circuits of size n^k. Furthermore, the process of constructing f involves a polynomial number of counting problems, each of which we can solve in P^♯P. So the end result is a problem which is in P^♯P, but which by construction does not have circuits of size n^k (for any fixed k of our choice). This is an example of a relativizing argument, because we paid no attention to whether these circuits had any oracles or not. To get this argument to go down from P^♯P to the smaller class PP, we had to use a non-relativizing ingredient: namely, the interactive proof result of LFKN.
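Here's a toy version of that Darwinian process in Python. Instead of all circuits of size n^k, I use a small random pool of candidate functions (a stand-in, so the halving is visible); the majority votes are exactly the counting problems that the ♯P oracle answers in the real argument.

    import random

    inputs = list(range(16))                  # stand-in for the inputs x1, ..., x_{2^n}
    pool = [{x: random.randrange(2) for x in inputs} for _ in range(1000)]  # "circuits"

    f, survivors = {}, pool
    for x in inputs:
        if not survivors:
            break                             # everything is already killed off
        ones = sum(c[x] for c in survivors)   # the counting step: how many accept x?
        f[x] = 0 if 2 * ones >= len(survivors) else 1       # disagree with the majority
        survivors = [c for c in survivors if c[x] == f[x]]  # at most half remain

    print(len(survivors))   # 0 after at most log2(1000) ~ 10 of the 16 inputs

Any remaining values of f can be set arbitrarily; by construction, no candidate in the pool computes f.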

But does this actually give us a non-relativizing circuit lower bound? That is, does there exist an oracle relative to which PP has linear-sized circuits? A couple years ago, I was able to construct such an oracle. This shows that Vinodchandran's result was non-relativizing---indeed, it's one of the few examples in all of complexity theory of an indisputably non-relativizing separation result. In other words, the relativization barrier---which is one of the main barriers to showing P≠NP---can be overcome in some very limited cases. It would be nice to overcome it in other cases, but this is what we can do.

Q: So these arguments show that there are no quadratic-size circuits for PP?
Scott: Yes. Let me put this another way: for any fixed k, there exists a language L in PP such that L cannot be decided by a circuit of size O(n^k). That's very different from saying that there is a single language in PP that does not have any circuits of any polynomial size. The second statement is much harder to show! If you give me your (polynomial) bound, then I find a PP problem that defeats circuits constrained by your bound, but the problem might be solvable by circuits with some larger polynomial bound. I could also defeat that larger polynomial bound, but I'd have to construct a different problem, and so on indefinitely.

While we're waiting for better circuit lower bounds in the classical case, I can tell you about the quantum case. We always have to ask about the quantum case. Here, a simple extension of the previous argument shows that not only does PP not have circuits of size n^k, it doesn't even have quantum circuits of size n^k. You can get a quantum circuit lower bound, but that's peanuts. Let's try throwing quantum into something and getting a different answer.

We can define a complexity class QIP: Quantum Interactive Proofs. This is the same as IP, except that now you're a quantum polynomial-time verifier, and instead of exchanging classical messages with the prover, you can exchange quantum messages. For example, you could send the prover half of an EPR pair and keep the other half, and play whatever other such games you want.

Certainly, this class is at least as powerful as IP: you could just restrict yourself to classical messages if that's what you wanted to do. Since IP = PSPACE, QIP has to be at least as big as PSPACE. The other thing that was proved by Kitaev and Watrous, using a semidefinite programming argument, was that QIP is contained in EXP. This is actually all we know about where QIP lies. It would be a great Ph.D. thesis for any of you to show (for example) that QIP can be simulated in PSPACE, and hence QIP = PSPACE. (Note: A few years after this lecture, Jain, Ji, Upadhyay, and Watrous did exactly that, proving QIP = PSPACE.) The exciting thing that we do know (also due to Kitaev and Watrous) is that any quantum interactive protocol can be simulated by one that takes place in three rounds. In the classical case, we had to play this whole Rumpelstiltskin game, where we kept asking the prover one question after another until we finally caught him in a lie. We had to ask the prover polynomially many questions. But in the quantum case it's no longer necessary to do that. The prover sends you a message, you send a message back, then the prover sends you one more message and that's it. That's all you ever need.

We don't have time today to prove why that's true, but I can give you some intuition. Basically, the prover prepares a state that looks like ∑_r |r⟩|q(r)⟩. This r is the sequence of all the random bits that you would use in the classical interactive protocol. Let's say that we're taking the classical protocol for solving coNP or PSPACE, and we just want to simulate it by a three-round quantum protocol. We sort of glom together all the random bits that the verifier would use in the entire protocol, and take a superposition over all possible settings of those random bits. Now what's q(r)? It's the sequence of messages that the prover would send back to you if you were to feed it the random bits in r. Now, the prover will just take the |q(r)⟩ register, together with the |r⟩ register, and send them to you. Certainly, the verifier can then check that q(r) is a valid sequence of messages given r. What's the problem? Why isn't this a good protocol?

A: It could be a superposition over a subset of the possible random bits.

Right! How do we know that the prover didn't just cherry-pick r to be only drawn from those that he could successfully lie about? The verifier needs to pick the challenges. You can't have the prover picking them for you. But now, we're in the quantum world, so maybe things are better. If you imagine in the classical world that there was some way to verify that a bit is random, then maybe this would work. In the quantum world, there is such a way. For example, if you were given a state like

(|0⟩ + |1⟩)/√2

you could just rotate it and verify that, had you measured in the standard basis, you would have gotten 0 and 1 with roughly equal probability. (More precisely: if the outcome in the standard basis would have been random, then you'll accept with probability 1; if the outcome would have been far from random, then you'll reject with noticeable probability.)
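A quick numpy sketch of that rotation test (the variable names are mine): Hadamard the qubit and measure, so the uniform-superposition state passes with certainty while a deterministic state gets caught half the time.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the Hadamard "rotation"
    plus = np.array([1, 1]) / np.sqrt(2)           # (|0> + |1>)/sqrt(2): random outcome
    zero = np.array([1.0, 0.0])                    # |0>: deterministic outcome

    for name, state in [("|+>", plus), ("|0>", zero)]:
        prob_pass = abs((H @ state)[0]) ** 2       # chance of measuring |0> after H
        print(name, "passes with probability", round(prob_pass, 3))
    # |+> passes with probability 1.0; |0> only 0.5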

Still, the trouble is that our |r⟩ is entangled with the |q(r)⟩ qubits. So we can't just apply Hadamard operations to |r⟩---if we did, we'd just get garbage out. However, it turns out that what the verifier can do is to pick a random round i of the protocol being simulated---say there are n such rounds---and then ask the prover to uncompute everything after round i. Once the prover has done that, he's eliminated the entanglement, and the verifier can then check by measuring in the Hadamard basis that the bits for round i really were random. If the prover cheated in some round and didn't send random bits, this lets the verifier detect that with probability that scales inversely with the number of rounds. Finally, you can repeat the whole protocol in parallel a polynomial number of times to increase your confidence. (I'm skipping a whole bunch of details---my goal here was just to give some intuition.)

Q: So this is kind of like quantum MAM (Merlin-Arthur-Merlin)?
Scott: Yes. In the classical world, you've just got MA and AM: every proof protocol between Arthur and Merlin with a larger constant number of rounds collapses to AM. If you allow a polynomial number of rounds, then you go up to IP (which equals PSPACE). In the quantum world, you've got QMA, QAM, and then QMAM, which is the same as QIP. There's also another class, QIP[2], which is different from QAM in that Arthur can send an arbitrary string to Merlin (or even a quantum state) instead of just a random string. In the classical case, AM and IP[2] are the same, but in the quantum case, we have no idea.

That's our tour of interactive proofs, so I'll end with a puzzle for next week. God flips a fair coin. If the coin lands tails, She creates a room with a red-haired person. If the coin lands heads, She creates two rooms: one has a person with red hair and the other has a person with green hair. Suppose that you know this is the whole situation, and you wake up to find a mirror in your room. Your goal is to figure out which way the coin landed. If you see that you've got green hair, then you know right away that it landed heads. Here's the puzzle: if you see that you have red hair, what is the probability that the coin landed heads?

