Great Ideas in Theoretical Computer Science Lectures 12-15

Yeah, yeah, I know. Combination of the end of the semester and family matters. But I’m back with four more GITCS lectures:

  • Lecture 15: Derandomization / Crypto Double Feature

Comments welcome as always.

32 Responses to “Great Ideas in Theoretical Computer Science Lectures 12-15”

  1. Job Says:

    On Lecture 12: “As Gödel argued, P=NP would mean mathematical creativity could be automated.”

    We can still randomly guess theorems and such and check them. I think human creativity amounts to random “educated” guesses. If so, it’s automatable (with the same efficiency).

  2. Liron Says:

    Hey Scott, great stuff as always.

    In section 1.2 of lecture 13, I don’t think it increases your probability of being executed to know the name of a prisoner who isn’t executed.

    Even in the example where 999,998 names of free prisoners are revealed, your probability of being executed is still 1/1,000,000. This is exactly like the Monty Hall problem. Assigning a high chance that the remaining prisoner is going to die is exactly like assigning a high chance that the remaining closed door has the prize.

  3. Liron Says:

    To clarify, it doesn’t increase your probability of being executed to be told the name of one of the *other* two prisoners who won’t be executed. The exaggerated example is different, because the guard tells you 999,998 names at random, so if you were free, you would have expected to see your name with high probability.

  4. Job Says:

    So, if 999,998 prisoners have zero chance of being executed, and I still have 1/1,000,000, does the other person have a 999,999/1,000,000 chance of being executed?

  5. Liron Says:

    Yeah, as long as you ask the guard to “list 999,998 people other than me who won’t be executed” rather than just “list 999,998 people who won’t be executed”.
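    To make the distinction between the two questions concrete, here’s a quick Monte Carlo sketch in Python (purely illustrative; the variable names and the scaled-down prisoner count N = 1000 are mine, chosen so the rare conditioning event in the second question shows up in a reasonable number of trials):

    ```python
    import random

    N = 1000             # prisoners; one, chosen uniformly, will be executed
    TRIALS = 1_000_000
    ME = 0               # the prisoner whose fate we track

    # Question A: "name N-2 people OTHER THAN ME who won't be executed."
    # The guard can always comply, so no trials are filtered out and
    # nothing is learned about ME.
    a_me_executed = 0

    # Question B: "name N-2 people (anyone) who won't be executed."
    # Now the absence of my name from the list is genuine evidence against me.
    b_unnamed = 0
    b_me_executed = 0

    for _ in range(TRIALS):
        victim = random.randrange(N)
        a_me_executed += (victim == ME)

        # For question B the guard hides one of the N-1 non-victims uniformly
        # and names everyone else who is safe.
        hidden = random.randrange(N - 1)
        if hidden >= victim:
            hidden += 1              # now 'hidden' is a uniform non-victim
        if victim == ME or hidden == ME:
            b_unnamed += 1           # my name was not on the guard's list
            b_me_executed += (victim == ME)

    print("Question A:", a_me_executed / TRIALS)     # ~ 1/N = 0.001
    print("Question B:", b_me_executed / b_unnamed)  # ~ 1/2
    ```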

  6. Job Says:

    I’ll be sure to ask that question first next time I’m in that situation.

  7. carter Says:

    This is sort of off topic, but I’m going to be doing comp sci research up the road from MIT this summer, and I was wondering: how do I find out about the nifty theory, algorithms, and math talks afoot at MIT for when I’m in the area?
    thanks!
    -Carter

  8. Monty Says:

    I agree, the prisoners example in lecture 13 is incorrect as stated. Because the prisoner asks the guard a question that the guard can answer regardless of whether the prisoner is the one to be executed, his probability of being executed is still 1/3 even after the guard answers. This is exactly analogous to Monty Hall opening a door with a goat behind it — the chance that you have the lucky door is still 1/3, and this is -exactly- why switching is the correct thing to do. From the prisoner’s perspective, he should gladly switch places with the other prisoner still in jeopardy.

  9. Pedro Says:

    I have to agree that Section 1.2 is just wrong. Suppose there are 1,000,000 prisoners and only one will be chosen uniformly at random to be executed, and you ask the guard: “Tell me 999,998 people who won’t be executed.” Upon learning this information, your probability does not somehow miraculously jump to 1/2. On the contrary, your probability is still 1/1,000,000. The other 99.9999% of the time, it will be the other poor fellow.

  10. Scott Says:

    Liron, Monty, Pedro: Thanks! Fixed. I wish I had a good explanation for how that could’ve not been caught, but I don’t.

  11. Scott Says:

    Carter: You could start here and here. As at most places, talks tend to be sparse over the summer.

  12. Blaise Pascal Says:

    Scott: In Lecture 12, where you discuss the Baker-Gill-Solovay argument, the lecture notes say “On the other hand, we can also create an oracle B such that P^B = NP^B”, and then go on to construct an oracle B which seems to require P^B ≠ NP^B, assuming I’m understanding the argument correctly. Is this a typo, or am I having a thinko?

  13. Scott Says:

    Blaise: No, it was just a typo, and is fixed now.

    Thanks, everyone, for catching embarrassing errors — I wish I had time to go through more carefully prior to posting! (And as I said before, don’t use these notes to build a bridge or a nuclear sub or anything.)

  14. carter Says:

    thanks!

  15. John Galloway Says:

    Regarding your March 2008 Scientific American article: I do not understand the idea of using a CTC to get around the huge runtime of NP-complete problems. This only works if you use the user-perceived run time of the system. But that is no different from putting the user into suspended animation after starting the task and waking them back up a million years later, when the program is about to finish. In either case a clock inside the machine still registers a million years. So I don’t see the CTC argument as being valid. What am I not getting?

  16. Scott Says:

    John: No, the argument is different from that. We specifically don’t exploit the user-perceived runtime (which would be trivial), but rather Deutsch’s requirement of “causal consistency.” In other words, if M is the quantum operation acting inside a CTC, then nature needs to find some quantum state ρ such that M(ρ)=ρ — and even if M is polynomial-time computable, one can still set things up so that finding ρ amounts to solving an NP-complete or even PSPACE-complete problem. For more details, see Section 8 of my survey article in SIGACT News — or my actual paper with Watrous, which, as it happens, should be finished any week now! (Wish I could retrieve it from the near future for you.)
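    To give a feel for where the hardness comes from, here’s a deliberately classical toy sketch in Python. It’s my own simplification for intuition, not the construction from the Aaronson–Watrous paper: the CTC evolution M fixes any string satisfying a predicate phi and increments everything else, so a causally consistent state must land on a satisfying assignment whenever one exists.

    ```python
    # Toy classical analogue of Deutsch's causal-consistency requirement
    # (a simplification for intuition, not the actual Aaronson-Watrous proof).
    # The map M fixes x if x satisfies phi, and otherwise increments x mod 2^n.
    # If phi is satisfiable, the only fixed points of M are satisfying
    # assignments, so merely reading the consistent state solves the search
    # problem.  (If phi is unsatisfiable, the consistent state is instead the
    # uniform distribution over the whole 2^n-cycle.)

    n = 4

    def phi(x):
        # Hypothetical predicate; stands in for an arbitrary SAT instance.
        return x == 0b1011

    def M(x):
        return x if phi(x) else (x + 1) % (2 ** n)

    fixed_points = [x for x in range(2 ** n) if M(x) == x]
    print([bin(x) for x in fixed_points])   # ['0b1011']: the satisfying assignment
    ```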

  17. Boris Says:

    I have a comment on another lecture of yours, Lecture 7 (“Randomness”) of your course “Quantum Computing Since Democritus”, available at
    http://www.scottaaronson.com/democritus/lec7.html

    You say that “E[XY] = E[X] E[Y] … only if X and Y are independent.” That’s not true. It IS true that “E[XY] = E[X] E[Y] IF X and Y are independent” and “E[XY] = E[X] E[Y] if and only if X and Y are uncorrelated” (essentially by definition).

    But I’m loving the entire series! Thanks!
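    A standard counterexample makes the distinction concrete: take X uniform on {-1, 0, 1} and Y = X², which are uncorrelated (so E[XY] = E[X]E[Y]) yet maximally dependent, since Y is a function of X. A quick exact check in Python:

    ```python
    from fractions import Fraction

    # X uniform on {-1, 0, 1}, Y = X^2: dependent but uncorrelated.
    xs = [-1, 0, 1]
    p = Fraction(1, 3)

    EX  = sum(p * x         for x in xs)   # E[X]  = 0
    EY  = sum(p * x * x     for x in xs)   # E[Y]  = 2/3
    EXY = sum(p * x * x * x for x in xs)   # E[XY] = E[X^3] = 0

    print(EXY == EX * EY)   # True: uncorrelated, so the product rule holds...
    # ...even though Pr[Y=1 | X=1] = 1 while Pr[Y=1] = 2/3: not independent.
    ```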

  18. Scott Says:

    Boris: Thanks! Fixed.

  19. Zelah Says:

    Hi Scott,

    A bit off topic, but I have been reading up on complexity classes and have a question for you.

    Please can you help?

    It is known that if NP ⊆ P/poly, then

    MA = AM(2)

    I was wondering if this would also imply that

    QMA = QIP(1) = QIP(2) = QIP(3)?

    Thanks in advance

    Zelah

  20. Alex Says:

    Thanks Scott, cool lectures.

    Are the rules of probability given in lecture as ‘obvious’ as you make them look? I am puzzled for two reasons:
    1- I thought Kolmogorov was such a genius because he came up with them?
    2- I’ve read somewhere, although I don’t remember the exact details (and don’t know enough math to understand them anyway; something about non-measurable sets), that by weakening those rules you could explain Bell’s inequalities without the spooky implications. Is that true? I think the author was Pitowsky.

  21. Alex Says:

    Sorry, I meant lecture 13.
    my apologies!

  22. Scott Says:

    Zelah, very interesting question! No, we don’t know that NP in P/poly would have any such quantum implication. But now that I think about it, a sufficiently strong quantum hypothesis (basically, that “QAM provers can be simulated by poly-size quantum circuits with quantum advice”) would imply that QMA=QAM, for basically the same reason NP in P/poly implies MA=AM. (I.e., the QMA machine could first guess the quantum circuit and then use it to simulate the QAM prover.) To get QMA=QIP(2) or QMA=QIP(3)=QIP, one would need still stronger and less plausible hypotheses.

  23. Scott Says:

    Alex, also interesting questions!

    1 – Kolmogorov was a genius for many reasons, but realizing that Pr[not(A)]=1-Pr[A] was not one of them. His achievement was not showing that this and a few other statements are true of probabilities (which is obvious), but (basically) showing that they imply everything else you’d want and are therefore suitable as axioms (crucially, even with infinite probability spaces). (The axioms themselves are written out below for reference.)

    2 – I can believe it, but changing probability theory (or worse yet, logic) to explain quantum mechanics always struck me as a grave offense against poor Occam. It’s just so much more drastic than is needed for the job! Why not simply say that we have probabilities that obey the usual Kolmogorov axioms, and that we calculate those probabilities from a quantum state vector?
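    For reference, the axioms in question, in their modern form:

    ```latex
    % Kolmogorov's axioms: nonnegativity, normalization, countable additivity.
    \Pr[A] \ge 0 \quad \text{for every event } A, \qquad \Pr[\Omega] = 1,
    \qquad
    \Pr\!\left[\bigcup_{i=1}^{\infty} A_i\right] = \sum_{i=1}^{\infty} \Pr[A_i]
    \quad \text{for pairwise disjoint } A_1, A_2, \ldots
    ```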

  24. Louis Says:

    Ahh, lecture 15 feels like a cliffhanger and leaves me wanting more. *sigh*

  25. Simon Says:

    I always enjoy reading Scott’s blog, and GITCS seems like a very nice course in complexity theory. However, I have to say that I don’t think “Great Ideas in Theoretical Computer Science” is a very appropriate title for the course. It suggests something much more general. In some circles, Theoretical Computer Science seems to be used more or less as a synonym for complexity theory, but there is a large community that interprets it much more broadly. For example, look at the web page of the Theoretical Computer Science journal.

  26. Scott Says:

    Simon: Well, the course title isn’t All Great Ideas in Theoretical Computer Science. 🙂 Seriously, the question came up, and future iterations of the course will likely include something about (say) distributed computing and concurrency. The choice of topics this semester was sharply limited by my own competence, as well as by time constraints.

  27. Adam Says:

    There’s some precedent for the title, also. The analogous course at Carnegie Mellon, taught previously by Steven Rudich and now by Luis von Ahn, has been called Great Theoretical Ideas in Computer Science since at least 1997.

  28. Blake Stacey Says:

    Fuel for griping and perhaps merriment: Seed magazine’s cribsheet on quantum computing.

  29. Jakob Says:

    Typo in lecture 14: the Chernoff bound has its last inequality reversed.
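    For reference, one standard form of the bound, with X a sum of independent 0/1 random variables, μ = E[X], and 0 < δ < 1 (the notes may use a different parametrization):

    ```latex
    % One common form of the Chernoff bound (upper and lower tails).
    \Pr[X \ge (1+\delta)\mu] \le e^{-\delta^2 \mu / 3},
    \qquad
    \Pr[X \le (1-\delta)\mu] \le e^{-\delta^2 \mu / 2}.
    ```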

  30. Zelah Says:

    Hi Scott,

    I have another question regarding complexity classes.

    I have been reading (very slowly…)

    Quantum additive approximation for the Potts polynomial
    arXiv:quant-ph/0702008v1.

    And I have been wondering: if one replaced BQP with QMA, would that allow better approximations of #P problems?

    In particular, could QMA = P^#P? (Or QMA = P^#P[1]?)

    Thanks in advance

    Zelah

  31. Scott Says:

    Hi Zelah,

    Personally I see QMA = P^#P as extraordinarily unlikely. Indeed, I don’t believe QMA even contains coNP (it’s easy to construct an oracle relative to which it doesn’t, though obviously that isn’t decisive). Still, I don’t think anyone has considered QMA in the setting of additive approximations to #P-complete problems (the Jones polynomial, the Potts model, etc.), and it would be interesting to know if you get anything new there.

  32. Alon Says:

    Scott,

    Thanks for a great blog.

    Your overview of the envelopes paradox in 1.4 is slightly inaccurate, in that it seems to suggest that the ONLY problem with this setting is that you can’t uniformly select an integer at random.

    In fact, you can define a perfectly legitimate probability distribution on the positive integers, and STILL get the (apparent) paradox. Let’s say you pick a positive integer k with probability (1/2)^k, and the envelopes contain 3^k and 3^(k+1) dollars, respectively.

    You open an envelope, observe its contents, and are faced with a decision – to switch or not to switch. What do you do?

    If you see $3, you should definitely switch.

    If you see $3^m with m > 1, the other envelope certainly contains either $3^(m-1) or $3^(m+1), depending on whether k was m-1 or m. The former is twice as likely as the latter, so the probabilities are 2/3 for the lower amount and 1/3 for the higher — but that still gives you a positive net expected gain of 2*3^(m-2). Switch!

    It’s true that there’s no real mathematical paradox here (the gain from switching turns out to be a random variable with no expectation), but the situation is subtler than the mere impossibility of choosing an integer uniformly at random.
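    Alon’s numbers check out. Here’s a quick exact verification in Python of the conditional expected gain (the function name and the range of m checked are mine; the setup is exactly as described above):

    ```python
    from fractions import Fraction

    # Exact check of the expected gain from switching after observing $3^m
    # (m > 1), in the setup above: Pr[k] = (1/2)^k, envelopes 3^k and 3^(k+1),
    # and you open one of the two uniformly at random.

    def gain_from_switching(m):
        # Two ways to have seen 3^m, each requiring you to open one
        # specific envelope (probability 1/2 given k):
        w_lo = Fraction(1, 2) ** (m - 1) / 2   # k = m-1: you hold the larger
        w_hi = Fraction(1, 2) ** m / 2         # k = m:   you hold the smaller
        p_lo = w_lo / (w_lo + w_hi)            # posterior 2/3
        p_hi = w_hi / (w_lo + w_hi)            # posterior 1/3
        return p_lo * (3 ** (m - 1) - 3 ** m) + p_hi * (3 ** (m + 1) - 3 ** m)

    for m in range(2, 7):
        g = gain_from_switching(m)
        assert g == 2 * 3 ** (m - 2)           # matches the formula above
        print(m, g)                            # positive, and growing with m

    # The unconditional expected gain diverges (the terms Pr[see 3^m] * gain
    # grow with m), which is the "no expectation" subtlety noted above.
    ```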