The Aaronson Postdoctoral Fellowship

So, I’ve decided to simultaneously do something positive for theoretical computer science, stimulate BQPology research at MIT, and solve the “problem” of having too much grant money by hiring a postdoc.  The main area of interest for this postdoc is quantum complexity theory, but I’ll also consider outstanding applicants of a more classical bent—in fact, the ideal applicant is someone equally excited to tackle meaty open problems in quantum complexity, classical complexity, or any combination of the two.  As a postdoc here, you’ll collaborate (I hope) with me and my PhD students, but you’ll also have plenty of opportunities for interaction with the other quantum computing theory folks at MIT (Peter Shor, Ed Farhi, Seth Lloyd…), as well as the other computational complexity folks (too many to list).  This postdoc is for one year, with the expectation of a renewal for a second year.

If you’re interested, email your CV, a link to your homepage, and what you consider your top 3 or 4 publications to aaronson@csail.mit.edu, and also arrange to have 3 rec letters emailed to me.  Feel free to apply even if you previously applied for other postdocs at MIT: this is a new opportunity that’s somewhat different from previous ones.  The application deadline is, shall we say, December 1st?  Let me know if you have any questions.

Finally, while I was tempted to make “reading Shtetl-Optimized” an effective prerequisite for the postdoc, feel free to distribute this call for applications more widely.


42 Responses to “The Aaronson Postdoctoral Fellowship”

  1. Anonymous Says:

    Excellent idea.
    I will be waiting to read about the winner of the Aaronson fellowship. 🙂

  2. John Sidles Says:

    Scott, the post-doc description is open-ended (by intent we can be sure). Might it encompass the research that you and Alex Arkhipov have been pursuing on nonadaptive linear optics experiments? Your invited lecture at QIP 2010, New evidence that quantum mechanics is hard to simulate on classical computers, was (IMHO) among the most thought-provoking lectures of the past year (in any STEM discipline) … and yet (to date) it is the only invited lecture at QIP 2010 that has no preprint associated to it.

    No opprobrium is intended to you or Alex … you are tackling what seems (to me) to be one of the hardest, deepest open problems in CT/QIT today … a problem that has enough depth to it to keep a dozen postdocs busy.

    But on the other hand, if you’ve got a “last word” manuscript almost ready-to-go … there surely would be postdoc applicants who would benefit from this knowledge … not to mention, a wide audience eager to read it.

  3. Scott Says:

    John: It’s a-comin’! 🙂 Indeed, if reading this particular paper is important for deciding whether or not one wants to do a postdoc with me, I can even promise it’ll be out before December 1st.

  4. GASARCH Says:

    A theorist saying he has Too Much Grant Money?

    Actually this does make sense since we don’t really need
    coders or that much equipment.

    KUDOS to Scott for having too much grant money!

  5. Me too Says:

    Just curious, why a postdoc? Why not n grad students or m undergrads? Should n be bigger than m? If so, how much larger?

  6. Scott Says:

    Me too: I don’t know the optimal n and m values, but FYI I currently have 3 PhD students, and have supervised about 10 undergrad projects since coming here.

  7. BlackSails Says:

    Can we send you emails with papers attached? The biased evil journals won’t accept my proof of P=NP, which is based upon my elementary proof of Fermat’s Last Theorem!

    (just kidding)

  8. Scott Says:

    BlackSails: Sure, emails with papers attached are fine. 🙂

  9. Guy Malachi Says:

    BlackSails: Your proof is supposed to be based on the algorithm itself – so find it, compile it & run it – simple as that.
    (+:|=|

  10. anon Says:

    Just curious where all the grant money is coming from. (This is a real question: I’m curious where big TCS money comes from, since it’s not NSF…)

  11. Scott Says:

    anon: In my case, mostly an NSF CAREER, though I also have smaller grants from DARPA and Sloan.

  12. Jon Tyson Says:

    Hey Scott,

    Thought of putting yourself on mathjobs.org?

  13. I can haz taxpayer dollarz Says:

    What is the scope of the work you can do with your NSF CAREER award? How much flexibility do you get?

    Are you required to work within the confines of traditional academia (postdocs, grad students, etc…) or can you enlist a person off the street to help further your goals?

  14. LifeIsChill Says:

    Money is evil unless it is spent.

  15. Star Dog Says:

    Hey, Scott, big fan. If money were no object–and for a man of your brilliance, the federal govt should ensure that it isn’t–how many grad students, postdocs, and teaching responsibilities would you have? I ask because I’m concerned that you might be impeding your research goals by taking on too many students. Let’s not fool ourselves here, either; postdocs these days can get into a lot of mischief if not looked after fairly closely. As to the virtues of collaboration–don’t you think they are somewhat overrated?

  16. blk Says:

    What happened to the Complexity Zoo? The site is down.

  17. John Sidles Says:

    Apropos of blk’s comment (above), there is a 2009 article by Michael Freedman titled “Complexity classes as mathematical axioms” that provides a thought-provoking perspective on what the Complexity Zoo is (or might be) all about … in essence Freedman’s premise is “Wir werden nicht wissen” (“we shall not know”) … in other words, the exact opposite of Hilbert’s 1900 premise … perhaps this would be a suitable topic for a future Shtetl Optimized essay?

  18. asdf Says:

    The Freedman article is on arxiv:

    http://arxiv.org/abs/0810.0033

  19. Mitchell Porter Says:

    John, see Scott’s article on the proof-theoretic strength of P vs NP (especially section 5).

    Regarding Freedman’s paper… If complexity class A is a proper subclass of complexity class B, then there are problems in B which do not map “simply” (e.g. polynomially) onto problems in A. So I don’t find it very surprising that if you assume a particular separation of complexity classes, it will imply the existence of very complicated constructions in (for example) topology. If you just *assume* P!=NP, there will surely be hundreds of comparable theorems, to be inferred from the thousands of problems known to be NP-complete. It’s the implications in the other direction – existence of a specific construction implies a separation theorem – which are really interesting.

  20. Sniffnoy Says:

    blk: It seems to have just moved, to http://qwiki.stanford.edu/index.php/Complexity_Zoo

  21. BlackSails Says:

    One of the links posted above mentioned that P=NP might be undecidable. I don’t see how that could be. Either algorithms exist to solve NP problems in polynomial time, or they don’t. Saying it’s undecidable would be like saying there are algorithms that we cannot write down, and not for lack of space!

    It differs from purely abstract mathematical questions, like the continuum hypothesis, because we don’t have a physical instance of transfinite sets. But we do for computers and algorithms.

  22. Cranky McCrank Says:

    @BlackSails
    I can’t guarantee to be 100% correct, but as far as I know there are 2 scenarios for “P=NP is undecidable”.

    Scenario 1.
    P≠NP is true, but it is impossible to find a proof of P≠NP. Therefore we’ll never know if P≠NP is true even if it is.

    Scenario 2.
    P=NP is true and we can find an algorithm which solves NP-complete problems in polynomial time. Unfortunately it is impossible to prove that the algorithm is in P for all instances. Therefore we’ll never know if P=NP is true even if it is.

    In both scenarios you can assume P=NP or P≠NP without finding a contradiction with the mathematics of TCS, which makes the problem undecidable.

  23. BlackSails Says:

    Oh, I didn’t see that possibility. That we have an algorithm that we believe is polynomial time for NP-complete problems, but we can’t prove that it runs in P.

  24. John Sidles Says:

    BlackSails, a question that I have been pondering (for several weeks) is “Does P contain algorithms that are not provably in P? If so, can we concretely exhibit such an algorithm?”

    Functionally speaking, algorithms in this (hypothetical) subset of P always halt (in polynomial time), no matter what the input. Yet we are unable (hypothetically) to prove—in ZFC, or an extension thereof?—that these algorithms always halt, despite having full access to their source code, and being able to watch them run in specific instances.

    This question arises naturally in connection with the validation of simulation algorithms. E.g., adaptive-mesh solvers for the Navier-Stokes equations: do they run in polynomial time? Uhhhh … do they even converge to a finite solution? This is the Clay Millennium “Navier-Stokes Equation” problem.

    The point being, that it is conceivable (AFAICT) that we live in an Impagliazzo-world in which many algorithms run in polynomial time … and yet only an (unphysical) oracle can tell us which ones they are. Obviously, this would constitute (yet another) reason for us to expect that P will be hard to separate from NP. Remarks from folks who know the literature in this regard would be very welcome.

  25. Nick Says:

    “Does P contain algorithms that are not provably in P?”

    Isn’t this an ill-posed question? If P contains an algorithm, then it is provably in P. Whether or not an algorithm (as expressed by a Turing machine) is in P is undecidable (due to Rice’s theorem).

  26. John Sidles Says:

    Nick, to the best of my (wholly non-expert) understanding, having checked several textbooks, and relying in particular on Stephen Cook’s Millennium problem definition, there is no requirement that an algorithm in P be accompanied by a proof of its membership in P.

    Indeed, my question amounts to asking whether such proofs exist (and moreover, can be exhibited) for all algorithms in P.

    To phrase it another way, if we amend Cook’s statement of the problem to read as follows: “The P versus NP problem is to determine whether every language accepted by some nondeterministic algorithm in polynomial time is also accepted by some (deterministic) algorithm that provably runs in polynomial time”—where the phrase “that provably runs” has been inserted—then my question is, has the problem definition been altered in any material respect?

    Mathematically speaking, it is “fair play” to postulate (if need be) an oracle that decides membership in P.  But from an engineering point of view … well … it’s mighty tough to find a vendor of these oracles.

  27. Nick Says:

    John Sidles, ah I see. I guess this would be similar to Gödel’s incompleteness theorem, i.e., with true statements not being provable. I (incorrectly) thought that membership in complexity classes was defined in terms of provability.

  28. John Sidles Says:

    Nick, you raise another good point. After all, Gödel not only proved that undecidable statements exist—he constructed concrete examples.

    Might we harbor similar ambitions? Associated to a specified proof formalism (nominally ZFC), might we exhibit concrete algorithms that reside in P, but not provably so?

    You mentioned Rice’s theorem, aka the Rice-Myhill-Shapiro theorem, but it seems to me that this and similar reductions are inapplicable to the question at hand (because the reduction alters the run-time of the algorithms to which it is applied).

    Thus, to the best of my (wholly non-expert) understanding, this is an open problem.

    Such crypto-P  algorithms—especially if they could be concretely instantiated and exhibited—would be a wonderfully interesting addition to the Complexity Zoo.

    And if it should turn out that crypto-P  algorithms exist, but cannot be captured and exhibited, even in principle … well, gosh … that would be interesting too.

  29. Nick Says:

    John Sidles, just to clarify: the incompleteness theorem doesn’t have to do with decidability, but with provability.

    For example, Gödel constructed a version of the following statement:

    S: “The statement S is not provable using the axioms of ZFC”

    Now suppose S is provable. Then it is true, and hence not provable, giving us a contradiction.

    However, if S is not provable, it is clearly true.

    Thus, provided ZFC is consistent, we have a statement which is true but not provable.

    This suggests a strategy for proving the existence of an algorithm in P that is not provably so. Suppose, for example, that Fermat’s Last Theorem were false, but not provably so. We can construct an algorithm which is in P iff there is a solution to Fermat’s equation. And, if it were false, it would find the solution. Of course, FLT is provable. So one would have to find a better example. It seems like there should be one out there, but it’s beyond my (also wholly non-expert) knowledge.
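
    To make that concrete, here is a rough Python sketch of the kind of wrapper described above (the names and the brute-force enumeration are purely illustrative); it runs in polynomial time precisely when Fermat’s equation has a solution, because a counterexample, if one exists, is found after a bounded amount of search once the input is long enough:

        def has_fermat_counterexample_below(m):
            # Exhaustively test x^k + y^k == z^k over 1 <= x, y, z <= m and 3 <= k <= m.
            # This costs only poly(m) arithmetic operations.
            for k in range(3, m + 1):
                for x in range(1, m + 1):
                    for y in range(1, m + 1):
                        for z in range(1, m + 1):
                            if x ** k + y ** k == z ** k:
                                return True
            return False

        def M(w):
            # If a counterexample with all entries and exponent at most some fixed C
            # exists, then on every input of length >= C the search succeeds and M
            # halts after poly(n) steps; if no counterexample exists, M always falls
            # through to the ~2^n loop below, so M is not in P.
            n = len(w)
            if has_fermat_counterexample_below(n):
                return True
            for _ in range(2 ** n):   # deliberate exponential-time padding
                pass
            return False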

    (Rice’s theorem is not related to this, and talks only about decidability.)

  30. John Sidles Says:

    Nick, my thinking ran along similar lines to yours. In particular, I’ve looked at various lists of undecidable problems, with a view toward identifying undecidable problems that naturally lend themselves to instantiation as crypto-P  algorithms.

    For example, the Mortal Matrix problem is formally undecidable, but (frustratingly) this fact does not seem to be of much help in constructing crypto-P algorithms that entail mortal matrix manipulations.

    Even more frustratingly, the nature of the obstruction is not evident (to me at least) … it is just an empirical observation that the (many) well-known problems that are provably undecidable all (seemingly) do not lend themselves to constructing concrete crypto-P  algorithms.

    Eventually I’ll pose the questions “Do crypto-P algorithms exist? If so, can concrete examples be exhibited?” on Math Overflow and/or TCS Stack Exchange.

    Obviously, these questions benefit from careful phrasing and a diligent literature search … that is why comments here on Shtetl Optimized would be very welcome. If and when Dick Lipton’s weblog Gödel’s Lost Letter and P=NP hosts a discussion relating to this topic, I’ll query his readers too.

  31. Cranky McCrank Says:

    @John Sidles
    I have 2 questions about crypto-P algorithms. I don’t know if these questions make sense as my understanding of the matter is very limited. Apologies in advance.

    Do we talk about algorithms in P which cannot be proven to be in P but which are known to be in NP?
    Shouldn’t one expect this question to be extremely difficult to answer? If such an algorithm exists, there can’t be a proof of P = NP, because in that case the algorithm would obviously be in P.

  32. Cranky McCrank Says:

    I’m sorry, I just realised the mistake in my previous statement. I confused a problem in P with its solving algorithm. But part of my confusion comes from this thought: there are P-complete problems, and I assume it is known how to make the reduction from any problem in P to a P-complete problem. If that assumption isn’t wrong, doesn’t that mean that for every problem which is solved by a crypto-P algorithm, there is an algorithm which is provably in P and solves the problem as well?

  33. Cranky McCrank Says:

    And again I’m wrong. As NP-complete problems are not known to be outside P, their membership in P could be tested by trying to reduce them to a P-complete problem. My assumption is obviously wrong. Sorry, after 2 wrong posts I’d better give it a rest ;-).

  34. John Sidles Says:

    Cranky, thinking about the existence versus non-existence of crypto-P algorithms is a good way to appreciate that non-physical oracles play an essential yet non-obvious role in the P versus NP problem as it is conventionally stated.

    These oracular foundations are a peculiar and distinctive feature of complexity theory. For example, given an integer n, we don’t need an oracle to tell us whether it is prime or not. In contrast, given an algorithm A, only a (non-physical) oracle can tell us whether it is in P. Perhaps this is one reason why proving theorems about prime numbers is so much easier than proving theorems that apply globally to algorithms in P.
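
    To make the contrast concrete, here is a tiny, purely illustrative Python fragment; the first test is an ordinary computation, while no analogous total procedure of the second kind can exist:

        def is_prime(n):
            # Deciding a property of a given *number* needs no oracle: trial division.
            if n < 2:
                return False
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 1
            return True

        # By contrast, there is no total, always-correct procedure
        #
        #     def runs_in_polynomial_time(program_source): ...
        #
        # because deciding whether an arbitrary given program runs in polynomial
        # time is undecidable (a standard reduction from the halting problem).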

    If any Shtetl Optimized readers can recommend a textbook/article that discusses these issues—which have considerable practical importance in connection with verification and validation of simulation algorithms—such a recommendation would be very welcome.

  35. Kaveh Says:

    Every problem in P has a formalization which is provably total in Cook’s theory PV.

    On the other hand, take any algorithm M and add an undecidable condition to the algorithm in the following way: take a $\Delta^b_0$ formula A(x) which is consistent with the theory but false for every standard natural number, and define the output to be undefined if A(x) is true, and M(x) if A(x) is false. It is easy to see that this is an algorithm in P (since A(x) is false, and this can be checked in uniform AC^0 for each x), but the theory cannot prove the totality of the new algorithm, since that would imply refuting A(x) for all x.

    You can take A(x) to mean that x is not a (very sparse) proof of consistency of the theory.

  36. Kaveh Says:

    typo:

    You can take A(x) to be: x *is* a (very sparse) proof of the consistency of the theory (in the same theory).

  37. John Sidles Says:

    Kaveh says: Every problem in P has a formalization which is provably total in Cook’s theory PV.

    Kaveh, thank you for your thoughtful reply … which I hope you will expand upon with some references and concrete examples.

    Even the first sentence came as a surprise to me, and I am by no means sure that I have grasped what “provably total” means in this context; here a reference with an extended discussion would be very helpful. Similarly, with respect to the second sentence, do the standard lists of undecidable formulas include concrete examples of the (undecidable, always-false, and PTIME checkable) formula A(x)? Here too an example would be very welcome.

    Broadly speaking, the sentiments of engineers are opposite to Henry George Forder’s aphorism: “The virtue of a logical proof is not that it compels belief but that it suggests doubts. The proof tells us where to concentrate our doubts.” Whereas an engineer might say “The virtue of a logical proof is not that it compels belief but that it facilitates practical instantiations. The proof tells us where to concentrate our search for instantiations.”

    There is a wonderful (but only partly successful) book by Gerald Sussman and Jack Wisdom titled Structure and Interpretation of Classical Mechanics (SICM, available on-line), which is a companion volume to Abelson and Sussman’s classic textbook Structure and Interpretation of Computer Programs (SICP, also available on-line).

    Sussman and Wisdom’s goal was to formalize and instantiate the computational processes associated to (classical) dynamical simulation. In the Preface the authors confess “We quickly learned that many things we thought we understood we did not in fact understand.” This is the great virtue of the struggle for instantiation.

    Similarly in complexity theory as in simulation theory—is there any difference between them?—the struggle to concretely instantiate theorems helps us to a deeper understanding of those theorems. So Kaveh, I hope you will post more!

  38. Kaveh Says:

    Hi John,

    You can take a look at proof complexity books, e.g. Jan Krajicek’s book (1995), or Steve Cook and Phuong Nguyen’s recent book (2010). (You can find a link to the draft of the latter on Steve’s homepage. See chapter VI for the proof that every polynomial-time function has a polynomial-time algorithm that is provably total in the theory $V^1$.)

    Provably total means that we can prove that the algorithm halts on all inputs, which is weaker than saying the algorithm halts on all inputs in polynomial time. So if you cannot prove that an algorithm halts on all inputs, you can not prove that it is in P.

    For any given (recursively axiomatizable, … ) theory, there are universal sentences which are undecidable: $\forall x. A(x)$, where A(x) is a polytime decidable sentence. This is a trick going back to computability theory, see the first chapter of Classical Recursion Theory, vol I.

    (The formula A(x) can be made very simple, since $AC^0$ is capable of checking that a string $c$ is a correct halting computation of a TM $e$ on input $x$ with output $y$. We can also take A(x) to be an equality between two polynomials by the MRDP theorem. This can be done for any universal sentence.)

    Let $T$ be a theory satisfying the requirements in Godel’s first theorem. Let $Prf_T(x,y)$ be the formula expressing that $x$ is a proof for the formula $y$ in the theory $T$ (using some fixed proof system, for example LK). Given $x$ and $y$ it is easy to check if $Prf_T(x,y)$ is true. This can be done in $AC^0$.

    Take A(x) to be $\lnot Prf_T(x,\bot)$ (where $\bot$ is the contradiction). Given $x$, it is easy to compute the truth value of A(x). The theory $T$ is consistent, so A(x) is true for every (standard) natural number $x$. But $\forall x. A(x)$ (which means “T is consistent”) is not provable in $T$.

    Let $M$ be any polytime algorithm. Let $M’$ be the following algorithm: given $x$, first compute the truth value of A(x). If A(x) is true, run $M$ on $x$, if it is false loop forever.

    We know that $T$ is consistent, so this algorithm always halts in polytime and $M’(x) = M(x)$. But $T$ cannot prove this. (Assume it did, i.e. $T$ proves that $M’$ halts on every $x$. Then this would imply that $T$ proves $\forall x. A(x)$, i.e. $T$ would prove its own consistency.)
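
    Here is a small Python rendering of the construction, with a hypothetical placeholder standing in for the efficient proof checker $Prf_T(x,\bot)$ and a trivial stand-in for $M$:

        def M(x):
            # Any fixed polynomial-time algorithm; a trivial stand-in for illustration.
            return len(x) % 2 == 0

        def is_T_proof_of_contradiction(x):
            # Hypothetical placeholder: a real implementation would check, efficiently,
            # whether the string x encodes a proof of a contradiction from the axioms
            # of the fixed theory T in the fixed proof system (e.g. LK).  Since T is
            # consistent, a correct checker returns False on every input.
            return False

        def M_prime(x):
            # If A(x) = "x is not a T-proof of a contradiction" is true, behave like M;
            # otherwise loop forever.  Because T is consistent, M_prime is total, runs
            # in polytime, and computes exactly the same function as M, but T cannot
            # prove that M_prime always halts, since that would prove Con(T).
            if is_T_proof_of_contradiction(x):
                while True:
                    pass
            return M(x)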

    Hope this helps.

  39. John Sidles Says:

    Kaveh Says: …See chapter VI [of Cook & Nguyen] for the proof that every polynomial-time function has a polynomial-time algorithm that is provably total in the theory V^1 …

    Kaveh, please accept my thanks for the effort you put into your reply, and my appreciation of the clarity and thoroughness of your answer. The quoted sentence above supplies precisely what I was hoping for: the necessary key idea and a reference in which it is carefully explained.

    It will take awhile (for me) to digest this material … in particular, it is clear that the capabilities of the theory V^1 have to be carefully delineated, in order that the first paragraph of your post not contradict the construction given in the remainder.

    Clearly these issues have considerable subtlety associated to them, and I am reminded of a contest that Elif Batuman recently hosted to define the term “Kafka Prawn”, in which the winning entries included Dimiter Kenarov’s definition: “undressing a person only to find new and new layers of clothing underneath.”

    Similarly, in grappling with the proof complexity literature, my engineer’s heart is filled with the quixotic hope that peeling away the Witnessing Theorems associated to V^0, V^1 … V^n will eventually expose, not more-and-more layers of complexity, but rather a satisfying view of the role that oracles play in defining the complexity classes P and NP.

    So, thank you very much, Kaveh, for helping me along this hope-driven journey … I see that your rating on Math Overflow comfortably exceeds 1000; this high rating is IMHO well-deserved, and reflects a sustained & valuable service to the broader STEM community.

  40. John Sidles Says:

    Kaveh, I took the occasion to link to your post (and praise it) on Terry Tao’s weblog discussion of The Vagueness Paradox.

    In this regard, the FOCS 2010 presentations are now available (through a link on the Fortnow/GASARCH weblog), and in particular Ketan Mulmuley’s tutorial on geometric complexity theory (GCT) concludes with an extended discussion of the need for broader & more rigorous answers to the question “What is P?”

    It’s presumptuous to ask … but any remarks you might care to offer, relating your post above to the closing themes of Ketan’s GCT talk, surely would be welcome and enlightening to me and many.

    Moreover, as I wade deeper into MathOverflow and TCS StackExchange, I find there are at least two persons who post as “Kaveh” … my appreciation and thanks are extended to all “Kavehs” everywhere.

  41. Kaveh Says:

    John, you are welcome, and thanks for your kind compliments.

    I haven’t found time to watch Ketan Mulmuley’s tutorial yet, and I know too little about GCT to make any comments. You may want to ask that question of Scott, or even better, post it on cstheory! 🙂

    The witnessing theorem shows that all of the functions which are provably total in $V^1$ are in P. I think you are interested in the other direction, i.e. “bootstrapping” (every problem in P has a polytime algorithm provably total in $V^1$).

    The bootstrapping is usually quite tedious but the ideas are not very complicated. The main point (IMHO) is that we have a nice characterization/representation of problems in P, e.g. problems which are uniform $AC^0$-reducible to CircuitValue (CV). If you prove that uniform $AC^0$ functions are total in a theory and then show that CV is total, you have shown that all problems in P have algorithms which are provably total in the theory (provably total functions of a theory are closed under composition). In fact, $V^1$ contains the theory $V^0$ + “CV is total”. (The corresponding complexity class to the theory $V^0$ is uniform $AC^0$, and bootstrapping for $V^0$ gives us the $AC^0$ functions.)

    I think it is similar to the characterization of problems in P as TMs with polynomial clocks. Any problem in P is solved by one of these machines, but the problem is that, given an algorithm, we cannot find which one. There always exists a nice algorithm, but we cannot find it. Also note that the theory might be unable to prove that these algorithms are computing the same function, so although they are computing the same function (they are extensionally equal), the theory does not know this, so it can prove that one of them is total while being unable to prove the totality of the other.
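
    To make the clocked-machine picture concrete, here is a small, purely illustrative Python sketch (the machine interface is hypothetical): whatever machine you plug in, the clocked wrapper is polynomial-time by construction, but given an arbitrary algorithm there is no effective way to tell which clocked machine, if any, computes the same function.

        def clocked(init, step, k):
            # Simulate an abstract machine, given by `init` and `step`, for at most
            # n^k steps on inputs of length n.  `init(x)` returns a starting
            # configuration and `step(conf)` returns either the next configuration
            # or ('halt', result).  These interfaces are hypothetical, chosen only
            # to make the polynomial clock explicit.
            def run(x):
                conf = init(x)
                for _ in range(max(len(x), 1) ** k):     # the polynomial clock
                    nxt = step(conf)
                    if isinstance(nxt, tuple) and nxt[0] == 'halt':
                        return nxt[1]
                    conf = nxt
                return None   # clock expired: by convention, reject
            return run

        # Toy example: a machine that counts down from len(x) - 1; with k = 1 the
        # clocked version already computes the same (always-True) function.
        toy = clocked(init=lambda x: len(x) - 1,
                      step=lambda c: ('halt', True) if c <= 0 else c - 1,
                      k=1)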

  42. John Sidles Says:

    Kaveh Says: I think you are interested in the other direction, i.e. “bootstrapping” (every problem in P has a polytime algorithm provably total in $V^1$).

    You are exactly right … because it is this “other direction” that arises most naturally in engineering practice.

    E.g., we can assert (correctly AFAICT): “Every true formula is associated to a proof in a consistent axiomatic system.” But this doesn’t help us much with the practical problem of finding proofs for true statements!

    Similarly, defining P as “problems which are uniform $AC^0$-reducible to CircuitValue (CV)” doesn’t help us much if we can’t find that reduction, or if, when we are given the reduction by an oracle, we are unable to prove that the reduction is a reduction.

    These issues are why (AFAICT) Ketan Mulmuley is absolutely right to assert, in the concluding summary to his FOCS GCT tutorial:
    ————————-
    “The P versus NP conjecture is not saying that the complexity class P is weak; in fact it is saying that the complexity class P is extremely strong … so to prove that P is not equal to NP, you have to actually show that P is big, not small; this is very paradoxical. … The problem is, we don’t know what P is; this is really the main problem in complexity theory at this point. We have no clue. We have a great definition of the complexity class P, but we have no clue what P is. … What GCT theory is saying is “Now begins the real complexity theory of the hard kind.” … We really have to understand what the complexity class P is [and] this problem has turned out to be far grander than we expected.”
    ————————-
    This is why problems associated to the computational complexity of deciding membership in P are so very interesting, from both a fundamental and a practical point of view. Indeed, it appears (to me) that if we thoroughly understood the “deciding membership in P” problem, then every other problem associated to P  would become considerably easier to attack.