Open thread

I’ve had a miserable week (only partly because of the headaches and coughing fits that have been keeping me up all night), and feel a need to be of use to some other human being without leaving my apartment. So this thread is for you to ask about whatever’s on your mind — complexity classes, philosophy, grad school advice, anteaters … anything asked in earnest will be responded to, in considerably less than the two years it took me for Lev R.

Update (4/13): Having spent a good part of the weekend answering 57 questions about everything from quantum computing to painting elephants, I think it’s time to call it quits. Thanks to everyone who submitted; it really cheered me up! We’ll do this again sometime.

144 Responses to “Open thread”

  1. mg Says:

    What are your thoughts on the ubiquity of binary von Neumann computers when other models are possible?

    For instance, analog computers or something along the lines of what Kittler is getting at at the end of this paper: http://www.ctheory.net/articles.aspx?id=74

  2. Super Says:

    Is quantum computing feasible in the absence of cheap (!) liquid helium? Can decoherence be mastered at 77K?

  3. tom s. Says:

    What’s with the painting elephants? Do they know that what they are painting is a representation of an elephant, or a representation of something? Do they paint the same thing every time? What’s going on and what does it mean?

  4. tom s. Says:

    Your comments remove html I see. Try

  5. Undergrad Says:

    What do you want to see in a prospective student’s statement of purpose?

  6. Scott Says:

    mg: For my thoughts on analog computing, see Sec. 6 of my “NP-complete Problems and Physical Reality” article. Basically, the central challenge for any analog computing model is how to cope with noise — and in my opinion, this challenge was never successfully met in principle, in the sense that it was for quantum computing in the mid-nineties. Furthermore, if physicists’ speculations about space and time breaking down at the Planck scale of 10^-43 seconds or 10^-33 cm are correct, that would give a fundamental reason why analog computing models could never scale in principle.

    Of course, even if it doesn’t change our fundamental ideas about what’s efficiently computable, analog computing could still be useful for some applications. In fact, a large fraction of all experiments done in science and engineering can be seen as analog computations! (Namely, those experiments in any domain where we know the underlying laws, and could in principle simulate things on a digital computer given enough processing power.)

  7. Domenic Says:

    OK, let’s start this one off.

    For the past few years I’ve always assumed I’d (attempt to) go all the way in academia, i.e. undergrad → masters → PhD → postdoc × n → assistant professor → professor → tenured professor. (I think I got that right.) I love research, and I think “uncovering the secrets of the universe” as a job description is pretty much the most fulfilling thing ever. I like college, and from what I can tell the social atmosphere remains pretty fun all the way through, albeit perhaps with a continued “ratio problem” and other such this-isn’t-the-real-world issues.

    Somewhat recently though I’ve become less sure. The thing is, that’s a lot of moving; a lot of application-writing; a lot of grant-chasing; a lot of proving myself to other people in power; and possibly worst of all, there’s the potential for a lot of noninteresting work that must be done as a stepping-stone to more-interesting work. (E.g., do I suck it up and be an IDL code monkey for someone’s physics simulation project freshman year, since it’s the only research opportunity available for people at my level of knowledge?) The social dynamic is different, and in my (Caltech) experience doesn’t function nearly as well for common tasks like, say, meeting girls over dinner instead of over a homework set (paper, conference, etc.), or interacting with a heterogeneous set of people. And while it’s not that big of a concern, there is the fact that in all probability I won’t actually do anything really field-shaping. (But whatever—that isn’t a necessary criterion for having fun doing research.)

    And then on the other hand, I’ve been doing two or three little programming projects (scratch-an-itch type stuff) on the side… and really enjoying myself. I love browsing programming blogs and learning about new tricks, techniques, and technologies; I’m constantly monitoring the pipeline for new stuff that I should be learning or at least getting an overview of. All just because it’s fun! And to add to the pressure toward this kind of vocation, programming is a lot more profitable than academia, and the culture is often pretty nice—after all, it’s the rare college town that can compare with Silicon Valley or New York, say.

    I guess one of the things that really brought this issue to a head was Sabine’s post about being a postdoc: pretty depressing stuff. I thought maybe it would be helpful to get a perspective from you, as you seem likely to have a somewhat different take on things, and I imagine you’ve likely faced down this kind of choice before.

    So… yeah. Thoughts? Mainly I’m just gathering data and perspectives; I should have another 4 years or so before I need to really start making decisions.

  8. roland Says:

    do you agree that the simpsons feature film sucked?
    also, did you spend a full month understanding Mulmuley’s Geometric Complexity Theory approach to P vs NP?

  9. Scott Says:

    Super: Indeed, many quantum computing proposals (like solid-state NMR and ion traps) seem to require extremely low temperatures. But I’m not sure that the availability of liquid helium is a limiting constraint — many labs around the world have liquid helium, and they still face huge decoherence obstacles!

    It might be possible in principle to “master decoherence” at 77K — many of the same intuitions that tell you it’s not possible, would also tell you the Fault-Tolerance Theorem should be false! But not surprisingly, it seems much easier to fight decoherence at lower temperatures.

    (Any readers who actually understand implementation care to comment?)

  10. Liron Says:

    What are your top recommended books?

  11. Nagesh Adluru Says:

    Scott, first of all I hope you will feel better soon!

    Modern science is essentially based on the concept of materialism, that is, reasoning based on observable phenomena.

    Landmark scientists like Newton and Einstein both, in the later parts of their lives, got carried away by “scientific quests” not entirely based on materialism (alchemy in the case of Newton, and a unified theory in the case of Einstein).

    Einstein also said that if quests are not based on the search for an underlying reality, and just based on observables, then the scientific enterprise seems like silly empiricism!

    As a leading scientist do you see why such things happen? Is it because they see the limits of modern science and lose balance? Is there a way to update the standards of scientific effort so that we can answer the really big questions, which, if answered either intuitively or surprisingly, either in an easy way or a hard way, could satisfy all our inquisitiveness?

    Just to avoid misconceptions: I consider the modern scientific effort to be the noblest of all efforts in the history of mankind. I am asking the above questions not because I think current science standards are useless, but just to see if we can do more and be more scientific.

  12. John Armstrong Says:

    To tag onto Domenic’s: will I get an offer of employment from any of the dozens of schools I applied to? What fraction of those that reject me will actually let me know they don’t want me? Why can’t the rest of them even send a form email?

    Seriously, though, I’m advising a junior through reading the Aharonov/Jones/Landau paper, and he’s tearing it up. I introduced him to Sam Lomonaco, and I’m going to try to get him into UMBC for grad school after next year. But where else should he apply for quantum computation, particularly the more mathematical end?

  13. Scott Says:

    What’s with the painting elephants? Do they know that what they are painting is a representation of an elephant, or a representation of something?

    Tom: Thanks for the link! I have to admit that I hadn’t seen the painting elephant video until just now. (Incredibly, I seem to be behind on my Internet procrastination…)

    To state the obvious, this is pretty incredible no matter what the right explanation is. Snopes (the debunking site) says that (1) the video is real, but (2) the elephant is “merely” reproducing a painting it memorized (thus, presumably, leaving open the question of whether it in any sense knows it’s painting a representation of an elephant). Unfortunately, Snopes doesn’t give a source for these assertions besides a BBC News report, which I haven’t seen.

    The question is answerable in principle: for example, one could use fMRI to see whether the elephant is activating “elephant” brain regions when it paints an elephant, or (more simply) whether there’s a systematic difference in how it paints elephants vs. how it paints other animals. Unfortunately, in 10 minutes of googling, I was unable to find a single article that treats this as a scientific question rather than a puff piece. Can anyone point us to one?

  14. Jay Says:

    If a lottery randomly picks 6 numbers out of 49, what is the minimum number of tickets needed to guarantee you win something assuming you win if any ticket matches 6 of 6, 5 of 6, 4 of 6 or 3 of 6 of the winning numbers?

    I’ll settle for whether this problem is NP complete or not and why.

  15. Scott Says:

    What do you want to see in a prospective student’s statement of purpose?

    Specifics, specifics, specifics. Most students lard their statements with flowery phrases about “the wonders of discovery,” “our 21st-century technological society,” and their wanting to be a scientist since the age of 4. This sort of verbiage won’t hurt you (everyone else indulges in it too), but it won’t help either.

    I want to know what research projects you worked on, what your own contribution was, and what you learned from them. If you fell in love with theory after taking a class on it, I want to know which results and open problems you fell in love with: the old standbys (P vs. NP and Turing’s undecidability theorem), or maybe something a bit more unusual? Even if it’s only a few sentences, I want to see you address some technical issue competently, the way an actual computer scientist would address it. I want to know which research areas within theory you’re interested in (it’s fine if you’re not sure yet, but it’s better to be unsure because you can think of so many great options than because you can’t think of a single one). Finally, I want to know which faculty you might want to work with, and what it is specifically about their research that interests you.

    One other thing: DO NOT use the research statement to brag about your grades or class rank or performance in various math competitions. Not only does it make me wonder about your priorities and whether you’re grown-up enough for grad school, but there’s plenty of room for that elsewhere in the application, and I won’t miss it.

    Good luck!

  16. Josh Schraiber Says:

    Scott,

    That post about what you want to see in a statement of purpose is much, much, much appreciated, and even though I didn’t ask that question, and even though I’m going to go into a different field than you work in, I think that that is some damned good advice. I’ll make sure to save that on my damned hard disk when I get home!

    Anyway, I was wondering: what’s your favorite aspect of more current applied computer science (besides quantum computer research, I guess)?

  17. Scott Says:

    Roland:

    do you agree that the simpsons feature film sucked?

    That’s a tough one. I easily enjoyed it more than 93% of movies I’ve seen — it is, after all, The Simpsons. On the other hand, by the impossibly-high standards of the early episodes, it could indeed be said to suck. It contained large elements that didn’t make any sense (for example, the whole subplot involving Alaska), and the Russ Cargill villain couldn’t hold a candle to Monty Burns (who disappointingly played almost no role in the film).

    Related to that, I watched a few recent Simpsons during spring vacation, and the thing is truly suffering under the impossible weight of 19 seasons. (In one episode I saw, they rewrote the whole prehistory of Marge and Homer’s marriage just for something new to do.) Indeed, I think The Simpsons is now the best argument around for assisted suicide: if you truly love something, best to let it die while it still has a shred of dignity.

    also, did you spend a full month understanding Mulmuley’s Geometric Complexity Theory approach to P vs NP?

    No, I did not. I spent a day trying to understand it and then gave up.

    It’s conceivable that if we mastered this approach, it could blast through the relativization, algebrization, and natural proofs barriers. But right now, the approach seems to be much more a hope than a concrete program with identifiable successes — and the known barriers have little to say about hopes.

  18. Scott Says:

    What are your top recommended books?

    In my Facebook profile, I listed my all-time favorites as Huck Finn, Alan Turing: The Enigma by Andrew Hodges, Field Notes from a Catastrophe by Elizabeth Kolbert, The Mind-Body Problem by Rebecca Goldstein, Dialogue Concerning the Two Chief World Systems by Galileo, Computational Complexity by Papadimitriou, and Escape from Childhood by John Holt. But those were just the ones I thought of at the time, and if you asked me again I’d probably have a completely different list.

    Browsing through my Amazon history, here are a few I ordered and enjoyed within the past year…
    The Fortune Cookie Chronicles: Adventures in the World of Chinese Food by Jennifer 8. Lee
    1491: New Revelations of the Americas Before Columbus by Charles C. Mann
    The Stuff of Thought: Language as a Window into Human Nature by Steven Pinker
    The World Without Us by Alan Weisman
    God Is Not Great by Christopher Hitchens

  19. John Sidles Says:

    I hope you feel well soon, Scott!

    Here is a question … what (if any) is the connection between your work on the “learnability of quantum states” and the work of people like Candes, Tao, Romberg, Donoho (etc.) in the area of compressive sensing, compressive sampling, and compressive simulation (CS^3).

    Specifically, are “learnable quantum states” Hilbert-space instances of what Candes and Tao call “compressible objects”? If so, does this mean that all learnable quantum states can be reconstructed from sparse random projections?

  20. anonymous Says:

    Let’s say, hypothetically, that someone (perhaps me) is in their mid-twenties and hasn’t really found their calling. They have, let’s say, a bachelor of science, but with very mediocre grades. Also, they are unimaginative when it comes to figuring out what they should do with their life. My question is: what should that person do, career-wise? Note: if this person isn’t able to come up with a decent answer or get one from someone else, then they will go to law school, that being the default path for someone who doesn’t have a dream (although dentistry also fits that description I guess, but in this case it will be law).

  21. Scott Says:

    Nagesh: Thanks for the well-wishes, and for the joke about my being a “leading scientist”! 🙂

    The phenomenon you describe is what I’ve called “the other Stockholm Syndrome.” It’s a disease that affects many great scientists after they’ve won a Nobel or otherwise become famous, causing them to believe their next contribution will be not “merely technical,” but a Truly Deep Insight into the Nature of Reality.

    The etiology of OSS is unknown, and seems to vary greatly from one famous scientist to the next. To take just the two examples you mentioned: Newton believed in alchemy and Biblical prophecy pretty much his whole life; I don’t think these were new developments after he got famous. Crucially, Newton was also secretive about his mystical beliefs, in stark contrast to the dozens of Stockholm Syndrome sufferers who came later.

    Einstein was a very different case: he spent his last thirty years pursuing research that most physicists at the time saw as a dead end (and that, with the benefit of hindsight, basically was). But it’s really not unusual for half of your research career to go nowhere — what’s unusual is for the other half to change the world!

    And, yes, Einstein did have a research strategy that valued beauty and philosophical clarity above all other desiderata, that was more concerned with “knowing the thoughts of the Old One” than with explaining new experimental results. As we know, this strategy did work spectacularly in at least two cases (SR and GR). So, despite a half-century of attempts to read something deeper into Einstein’s later failure, it could just be a case of you win some, you lose some.

    Also, at no point did Einstein ever embrace mysticism per se. That’s another crucial difference between him and Newton (as well as lesser Stockholm Syndrome sufferers like Brian Josephson).

  22. John Sidles Says:

    Scott, if you need to occupy your mind with something that is fun and not too serious, you might track this weekend’s Freestyle Chess Tournament … http://freestylechess.com/ .

    In “freestyle chess” everything is legal … computer assistance … hire Garry Kasparov … which makes for interesting games.

    Or else, start reading Patrick O’Brian’s Aubrey-Maturin series … these books are long enough to outlast *any* bout of cold or influenza, and they repay reading at almost any level of attention.

    Either way, the strategy is to defeat the viruses by boring them to death. 🙂

  23. Scott Says:

    will I get an offer of employment from any of the dozens of schools I applied to?

    Sorry, I dunno, but good luck! 🙂

    where else should he apply for quantum computation, particularly the more mathematical end?

    Arguably the strongest groups in North America for quantum computing theory are (in no particular order) Waterloo, Caltech, MIT, and Berkeley. Caltech in particular has a strong focus on anyonic/topological QC (or to put it another way, has Kitaev). But there are lots of other great places for great quantum computing theory — U. of New Mexico, Santa Barbara, Michigan, Calgary, Montreal/McGill… — and it never hurts to apply broadly. In the end, it’ll come down to finding an individual advisor who’s a good match for your student.

  24. Robert W Says:

    What the hell is a natural proof?

  25. Scott Says:

    If a lottery randomly picks 6 numbers out of 49, what is the minimum number of tickets needed to guarantee you win something assuming you win if any ticket matches 6 of 6, 5 of 6, 4 of 6 or 3 of 6 of the winning numbers?

    I’ll settle for whether this problem is NP complete or not and why.

    OK then: the problem is not NP-complete, the reason being that it’s a particular finite instance and not a language. 😉
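
    (That said, the underlying finite question is a real combinatorial one, a covering-design problem, and scaled-down instances are small enough to attack directly. Below is a toy Python sketch of my own, not from the thread: a lottery that draws 3 numbers out of 7, where a ticket “wins” if it matches at least 2. The greedy set-cover heuristic gives an upper bound on the minimum number of tickets, not necessarily the true minimum.)

```python
from itertools import combinations

# Toy lottery: 3 numbers drawn out of 7; a ticket "wins" if it
# shares at least WIN numbers with the draw.
numbers = range(1, 8)
WIN = 2
draws = [set(c) for c in combinations(numbers, 3)]
tickets = [set(c) for c in combinations(numbers, 3)]

def covers(ticket, draw):
    return len(ticket & draw) >= WIN

# Greedy set cover: repeatedly buy the ticket that wins against
# the most draws not yet guaranteed a win.
uncovered = list(draws)
cover = []
while uncovered:
    best = max(tickets, key=lambda t: sum(covers(t, d) for d in uncovered))
    cover.append(best)
    uncovered = [d for d in uncovered if not covers(best, d)]

print(len(cover))  # an upper bound on the tickets needed to guarantee a win
```

    (Finding the exact minimum for the real 6-of-49 game is a famously hard finite search, which is part of the joke above.)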

  26. mg Says:

    Thanks for your reply Scott. Just fyi, your link (in the reply to my question) is broken.

    Here’s the real link: http://www.scottaaronson.com/papers/npcomplete.pdf

    You seem the type to scoff at media theory, but I’d like to hear what you think of Kittler’s paper. I’d like a “scientist’s” opinion, if you will.

    Again, it’s: http://www.ctheory.net/articles.aspx?id=74

  27. Scott Says:

    what’s your favorite aspect of more current applied computer science (besides quantum computer research, I guess)?

    Well, I’m a huge fan of work in applied computer security (e.g. the security of voting machines) — this sort of work has enormous social importance but seems systematically undervalued by our community.

    I’m a fan of the stuff Tapan Parikh, Eric Brewer, and others are doing on information technology in the developing world.

    Google is too obvious even to mention (though I see that I have).

    I see CGI animation (what Pixar does) as one of the most spectacular successes of recent CS research — though arguably it’s a solved problem by now.

    Right now I’m pretty excited by the work DARPA’s sponsoring on autonomous vehicles. In principle, we now know how to build self-driving cars; were the technology made practical, it could reduce gas consumption and save thousands of lives.

  28. Scott Says:

    mg: Sorry about the link! It’s fixed now.

  29. Carl Says:

    A.) “(In one episode I saw, they rewrote the whole prehistory of Marge and Homer’s marriage just for something new to do.)”

    I liked that episode! As a child of the ’90s, I grew up watching TV shows about the 50s, 60s, 70s, and (to a lesser extent) 80s, and now finally it’s my turn to laugh because I recognize things. It was like Forrest Gump for people born in 1982!

    My favorite joke was the one about Back to the Future: “It’s your cousin, Marvin Cobain!”

    B.) Is there a good article to cite on “the definition of computation”? Not as in “the Turing machine” or an oracle machine or whatever; I want to know about computation in general. I’m a philosophy grad student, and all I’ve found is Jack Copeland’s “The Broad Concept of Computation,” which is OK, but not quite what I’m looking for.

    (His definition is “Part of the machine must be capable of being prepared in configurations that represent the arguments of whatever functions are to be computed, and part of the machine must be capable of coming to assume, as a result of activity within the machine, configurations that represent values of these functions. Sub-devices of the machine–“black boxes”–must make available some number of primitive operations. The machine must have resources sufficient to enable these operations to be sequenced in some predetermined way.”)

    The definition of computation that I’ve created for my own purposes is “the process by which the fact that one system is rule-governed is used to make reliable inferences about another rule-governed system.” Would you agree with that definition, or is it flawed somehow?

  30. Carl Says:

    Oh, the DARPA car thing. Do you know my brother, Flip Johnson? He’s a first year grad student at MIT in the EE department. I think he’s on the fringes of that, since he’s in the Navy as well as being a student.

  31. VR Says:

    Recently, talking to a VP from Amazon, I learned that the research lab is dying. Microsoft Research seems to be the last of the old-school research labs. Google seems to hire PhDs for software development jobs. That seems to be the way things are heading.

    So if you are a third-year PhD student, and you realize you are probably not going to lead the field (read: you don’t think you are faculty material at a decent school; that may change, but that’s where you are mentally), and you realize that most of the training you receive may not have a direct outlet of use (vanishing research labs), what would you do?

    Would you:
    a) drop out,
    b) enjoy the PhD, then be perfectly happy with a dev job which you could have done right after your bachelor’s/master’s, or
    c) something completely different? Please elucidate.

  32. Nagesh Adluru Says:

    Thanks for the response Scott!

    I see the point now. Also, I now know about the Stockholm Syndrome 🙂

    On a different note, I am glad you are excited about autonomous navigation because, as part of my thesis, I happen to be working on the robot mapping problem, which is one of the crucial components of robust autonomy 🙂

  33. Scott Says:

    what (if any) is the connection between your work on the “learnability of quantum states” and the work of people like Candes, Tao, Romberg, Donoho (etc.) in the area of compressive sensing, compressive sampling, and compressive simulation (CS^3)

    John, I haven’t read the papers of Candes, Tao et al. that you mention (though I really should someday). So can I broaden your question just a little, to “what is the relationship between compressed sensing and the information content of quantum states?” If so, then we get a fantastic question, which I happen to know the answer to, since Piotr Indyk asked me just a few weeks ago!

    Perhaps the simplest result in compressed sensing is the following: there exists a linear transformation that maps unit vectors in R^(2^n) to vectors in R^n, and which is “almost invertible” (i.e., such that the original unit vector can be almost recovered by applying another linear transformation). To compressed sensing experts, this is surely a baby result not even worth stating — but it’s all I’ll need for present purposes.

    Already with this result, an apparent paradox arises: namely, it seems to suggest that it should be possible to map an n-bit classical string x onto a log(n)-qubit quantum state |ψ_x⟩, from which x can later be recovered by making a measurement. But the dense quantum coding lower bound of Ambainis, Nayak, Ta-Shma, and Vazirani — the same lower bound I used in my learning paper — specifically says that this is impossible!

    The resolution of the paradox is that the compressed sensing matrix that gives you the exponential savings, while linear, is necessarily non-unitary, and indeed can’t be simulated by any quantum-mechanical operation. And how do I know that? Well, because of Ambainis et al.’s lower bound…

    To summarize, the relation between compressed sensing and learnability of quantum states is that, if compressed sensing were possible with unitary matrices (which it isn’t), then my learning result would dramatically fail!
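
    For readers who want to see the classical phenomenon in action, here is a small numpy sketch of my own, with made-up dimensions: a random Gaussian matrix maps a 3-sparse vector from R^128 down to just 80 measurements, and a greedy recovery procedure, orthogonal matching pursuit (note that the recovery step is nonlinear), gets the original vector back.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, k = 128, 80, 3          # ambient dimension, measurements, sparsity

x = np.zeros(N)               # a k-sparse signal with well-separated coefficients
support = rng.choice(N, size=k, replace=False)
x[support] = [3.0, -2.0, 1.5]

A = rng.standard_normal((m, N)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x                                      # only m << N linear measurements

# Orthogonal matching pursuit: repeatedly grab the column most
# correlated with the residual, then least-squares-fit on that support.
S, residual = [], y.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    residual = y - A[:, S] @ coef

x_hat = np.zeros(N)
x_hat[S] = coef
print(np.linalg.norm(x_hat - x))  # tiny when recovery succeeds
```

    The measurement map A here is linear but, as discussed above, not something you could implement as a unitary on log(N) qubits.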

  34. Scott Says:

    Let’s say, hypothetically, that someone (perhaps me) is in their mid-twenties and hasn’t really found their calling. They have, let’s say, a bachelor of science, but with very mediocre grades. Also, they are unimaginative when it comes to figuring out what they should do with their life. My question is: what should that person do, career-wise? Note: if this person isn’t able to come up with a decent answer or get one from someone else, then they will go to law school, that being the default path for someone who doesn’t have a dream (although dentistry also fits that description I guess, but in this case it will be law).

    Reading biographies of Lincoln, Gandhi, and other great non-scientists, I was struck over and over by how they seemed to have no clear idea in their twenties of what they wanted to do, and how, before the deeds for which they’re revered, they had this whole prior life that in retrospect seems almost comically disconnected from their great achievements. (Incidentally, in both Lincoln’s and Gandhi’s cases the prior life involved becoming a lawyer.)

    I guess there are two lessons here. The first is that someone with no idea of what he wants to do might actually be in an enviable position: unlike a focused person, the unfocused one isn’t expected by peers and family to do one particular thing (say math or science), and can therefore take a much broader view of his or her options. One can say to such a person, look around you. Are there business opportunities that look promising? Volunteer work? If not, nothing wrong with law school or dentistry school (to some people those are a calling).

    The second lesson is that a “calling” is generally not something you find by “looking deeply into yourself” or meditating in the woods — rather, it’s something you discover through actual engagement with the world. (Imagine that instead of going to South Africa, the teenage Gandhi sat in his room, went through a list of options, and circled “win independence for India.”)

    I know these are generalities and platitudes, but not knowing more about your hypothetical person it’s about the best I can do. Best of luck to him or her!

  35. Pre-final year undergrad Says:

    Hi Scott,

    I am an avid reader of your blog, and would like to ask you something.

    What exactly are the job opportunities after a PhD in theoretical computer science? (Besides academia of course).

    And specifically within theory, I’ve had some introduction to cryptography and security, and would like to pursue some active research in my final year to aid my grad school application. Does this area have good scope for employment outside academia? Sorry to sound prejudiced against academia; I just don’t want to have decided what I want to do in life at the age of 20 itself.

    Any input regarding which areas (in theory) would be a good investment of time will be highly appreciated. And what is the scope for switching branches within computer science after (say) a year or two of grad school?

  36. Scott Says:

    What the hell is a natural proof?

    Robert: A natural proof is a proof of a complexity lower bound — that is, a proof that some Boolean function f:{0,1}^n→{0,1} is not in some class C — that satisfies two conditions called “constructivity” and “largeness.”

    “Constructivity” means that the proof actually identifies some property P of f, which is computable in time polynomial in the size of f’s truth table, by virtue of which f is not in C. In other words,
    (i) f satisfies P,
    (ii) no functions in C satisfy P, and
    (iii) there’s an algorithm that, given the truth table of any Boolean function g, decides in time 2^O(n) whether or not g satisfies P.

    “Largeness” means that a random Boolean function satisfies property P with high probability. That is, of the 2^(2^n) Boolean functions on n inputs, at least a 1/poly(n) fraction of them satisfy P.
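
    To make the two conditions concrete, here is a toy Python sketch of my own (not from Razborov-Rudich): take the property P = “f depends on all n of its variables.” Checking P takes time polynomial in the 2^n-entry truth table, as constructivity requires, and for n = 3 a direct count shows that most functions satisfy it, as largeness requires.

```python
from itertools import product

n = 3
N = 2 ** n  # truth-table size

def satisfies_P(tt):
    """P = 'f depends on all n variables'.  tt is a tuple of 2^n output
    bits, with tt[j] = f(binary digits of j).  The check runs in time
    polynomial in the truth-table size, as constructivity requires."""
    for i in range(n):
        # variable i is irrelevant iff flipping bit i never changes f
        if all(tt[j] == tt[j ^ (1 << i)] for j in range(N)):
            return False
    return True

# Largeness: what fraction of all 2^(2^n) Boolean functions satisfy P?
funcs = list(product([0, 1], repeat=N))  # all 256 functions on 3 bits
frac = sum(map(satisfies_P, funcs)) / len(funcs)
print(frac)  # 218/256, i.e. about 0.85 of all 3-variable functions
```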

    In their famous paper, Razborov and Rudich noticed two things:
    (1) almost all circuit lower bound proofs then known satisfied the constructivity and largeness conditions (though in many cases, this wasn’t immediately obvious).
    (2) any natural proof of the statements we really care about (like P≠NP) would imply a strongly subexponential algorithm for breaking pseudorandom generators (and hence for the factoring and discrete log problems).

    To put it loosely, a natural proof would bite itself in the ass, and yield efficient algorithms for some of the very same problems that we wanted to prove were hard! In fact, as you showed the factoring and discrete log problems to be harder and harder via natural proofs, you’d simultaneously be showing them to be easier, eventually reaching a point where the upper bound implied by the proof was smaller than the lower bound!

    The conclusion is that, assuming these problems are hard, it will have to be shown by an “unnatural” proof, which you can think of as one that zeroes in on a specific property of the function f being shown to be hard — a property that f does not have in common with random functions. Right now, the main example we have of an unnatural proof technique is diagonalization, the same technique used by Gödel and Turing. Diagonalization avoids the largeness condition by zeroing in on the function f’s ability to simulate a whole class of machines, which is not an ability shared by a random function. Alas, if we try to use diagonalization to separate P and NP, we run right into another big barrier, the relativization barrier!
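
    As a toy illustration of the diagonalization trick (my own example, and nothing to do with circuit lower bounds per se): given any list of total functions, you can mechanically build a function that differs from the k-th one at input k, and hence appears nowhere on the list.

```python
# A finite stand-in for an enumeration of machines: total functions int -> int.
machines = [lambda k: 0, lambda k: k, lambda k: k * k]

def diag(k):
    # Differ from the k-th machine on input k.
    return machines[k](k) + 1

# diag disagrees with every listed machine somewhere, so it equals none of them.
assert all(diag(k) != machines[k](k) for k in range(len(machines)))
```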

    I know natural proofs are a hard concept to wrap one’s head around. For more, see the original RR paper, or this blog entry from Lance, or this Wikipedia article.

  37. Robert W Says:

    Thanks Scott!

    When you mention relativization, do you mean using oracles?

    Do we have a grip on the number of NP-hard problems as compared to, say, those in P?

    Why aren’t you in bed? Get better!

  38. Scott Says:

    mg, I read the article you linked to, and I didn’t understand anything it was saying — possibly because it’s too profound for me, but possibly because it’s pompous and deliberately obscure, and abuses terms like “one-way function” in cringe-inducing ways that rival anything in the book by Sokal and Bricmont. (On the other hand, maybe complexity theorists should be flattered that the lit-crit crowd is finally abusing our terms along with the mathematicians’ and physicists’ — we’ve arrived! 🙂 ) For those too lazy to click the link, let me quote what I found to be the clearest two paragraphs in the article, and let you judge for yourselves:

      This ongoing triumph of software is a strange reversal of Turing’s proof that there can be no mathematically computable problem a simple machine would not solve. Instead, the physical Church-Turing hypothesis, by identifying physical hardware with the algorithm forged for its computation has finally got rid of hardware itself. As an effect, software successfully occupied the empty place and profited from its obscurity. The ever-growing hierarchy of high-level programming languages works exactly the same way as one-way functions in recent mathematical cryptography.

      These kinds of functions, when used in their straightforward form, can be computed in reasonable time, in a time growing only in polynomial expressions with the function’s complexity. The time needed for its inverse form, however; that is for reconstructing from the functions’ output its presupposed input; would grow at an exponential and therefore unviable rates. One-way functions, in other words, hide an algorithm from its very result. For software, this cryptographic effect offers a convenient way to bypass the fact that by virtue of Turing’s proof the concept of mental property as applied to algorithms has become meaningless.

  39. Scott Says:

    Is there a good article to cite on “the definition of computation”? Not as in “the Turing machine” or an oracle machine or whatever; I want to know about computation in general. I’m a philosophy grad student, and all I’ve found is Jack Copeland’s “The Broad Conception of Computation,” which is OK, but not quite what I’m looking for … The definition of computation that I’ve created for my own purposes is “the process by which the fact that one system is rule governed is used to make reliable inferences about another rule governed system.” Would you agree with that definition, or is it flawed somehow?

    Carl: Since you’re a philosopher, I can tell you that I think computation is probably best defined as computation by a Turing machine, and that this is a good example of a Kripkean a posteriori necessity. That is, just like in the famous definition “water is H2O,” all uses of the word “computation” prior to 1936 should be interpreted as forward-pointers to the correct definition that Turing eventually gave.

    Of course, there are “computations” that are not Turing machine computations — oracle TM computations and computations over the real and complex numbers spring to mind — but all of them (so far as I know) are obtained by starting with the concept of a Turing machine and then modifying it to include some infinite element. Furthermore, these modified concepts, while extremely interesting mathematically, are not known to have any direct relevance to the physical world. So as the “exemplar” of what we mean by computation, the Turing machine seems pretty hard to beat.

    Having said that, if you insist on an “Aristotelean” definition of computation and not a “Wittgensteinian” one, the definition that you proposed — “the process by which the fact that one system is rule governed is used to make reliable inferences about another rule governed system” — is about as good as I’ve seen. The one objection I’d raise is that it seems to talk about the process of interpreting the results of the computation rather than the computation itself.

    Unfortunately, I don’t know of any good reference for these issues. I tried to read Copeland once, but when he started talking about hypercomputation (i.e. solving the halting problem) as a hot scientific idea that’s being actively pursued, I got disgusted and couldn’t read any further.

  40. Scott Says:

    Do you know my brother, Flip Johnson?

    I don’t think so, no…

  41. Scott Says:

    So if you are a third-year PhD student, and you realize you are probably not going to lead the field … what would you do?
    Would you:
    a) drop out?
    b) enjoy the PhD, then be perfectly happy with a dev job which you could have done right after your bachelor’s/master’s?
    c) Something completely different? Please elucidate.

    I think whether I dropped out or not would depend on whether I was enjoying grad school, as well as (a related question) whether I was getting research results and making progress toward my degree. If so, I’d probably stay and finish, then worry about the job market afterwards. If not, I’d probably start looking for jobs (maybe in software, maybe in something else), and drop out as soon as I found one that I liked.

  42. Scott Says:

    What exactly are the job opportunities after a PhD in theoretical computer science? (Besides academia of course).

    Well, I personally know maybe a half-dozen theoretical computer science PhD’s who now work at Google. There’s also Microsoft Research, IBM research, Akamai, RSA Labs, NEC Labs, various Wall Street firms, the Center for Communications Research, NSA (to apply, call your mom and tell her you want an application…), and others — some involving basic research and some not.

    And specifically within theory, I’ve had some introduction to cryptography and security and would like to pursue some active research in my final year to aid my grad school application. Does this area have good scope for employment outside academia? Any input regarding which areas are going to be a good investment of time (in theory) will be highly appreciated.

    My advice would be not to pick research areas within theory based on what’s most likely to land you a job. As I alluded to in an earlier post, employers probably won’t care much about the specifics of your research anyway (to them, all theory might seem pretty much the same!); they’ll mostly be interested in it insofar as it demonstrates your intelligence, motivation, ability to work with others, ability to learn new things, etc. Knowing that, you should pick a research topic based on (1) what really interests you and (2) where you’re most likely to succeed, noting that the two are strongly correlated with each other. So if you’re interested in crypto and good at it, then yes, crypto would be a fine choice.

    And what is the scope of switching branches within Computer Science after (say) a year or two at grad school?

    People do this all the time, but it will increase the number of years you spend in grad school, so if you don’t want to end up like Slackenerny you should do it only if there are strong intrinsic reasons. In particular, switching areas midstream as a way to game the job market is almost certainly not a good idea. This is yet another area of life where being too strategic is not the best strategy! 🙂

  43. Scott Says:

    Robert W.:

    When you mention relativization, do you mean using oracles?

    Yes, that’s what the word means.

    Do we have a grip on the number of NP-hard problems as compared to, say, those in P?

    The number of both is ℵ0 (i.e., countably infinite).

    Why aren’t you in bed?

    I’m going there now.

    Get better!

    Thanks, will try!

  44. Carl Says:

    Fair criticisms.

    “The one objection I’d raise is that it seems to talk about the process of interpreting the results of the computation rather than the computation itself.”

    True, but I suspect that’s inescapable, since everything can be used as a computation in at least the trivial sense that it can be used to assist counting (like abacus beads), hence it only makes sense to talk about what “is a computation for me” not what “is a computation” in general.

  45. Mark Probst Says:

    How many qubits does a quantum computer need to have to factor an integer of N binary digits in polynomial time?

  46. Pre-final year undergrad Says:

    @Scott:

    Thanks for all the advice. 🙂
    I will try and make an informed choice.

  47. Tez Says:

    Here is my question:

    In Valiant’s “Permanent is #P-complete” paper he has a 4×4 matrix which is the “junction gadget”, which he says must satisfy 4 stated conditions and that these conditions imply at least 2 of the matrix elements must be >1 in magnitude.

    Yet this matrix satisfies all his conditions (matlab notation):

    [0 -1 1 0;0 1 1 1;1 1 -1 0;0 1 1 0]

    My question is, should we discount everything he has ever done since? He clearly only operates at physicists’ levels of “proof”…

  48. Green Elf Says:

    Do you think the changes of Dungeons and Dragons 4th edition are mostly good or mostly bad?

  49. sp Says:

    Do you actually understand how Grover’s algorithm works and how it can be used in practice? Do there exist papers about how Grover’s algorithm works that are more convincing than those currently on the Internet?
    And do you understand how the Deutsch or Deutsch-Jozsa algorithm works?
    And which algorithm is hardest to understand (Shor, Grover, or Deutsch), and which is easiest (if they can be understood without understanding quantum mechanics)?

  50. aravind Says:

    get well soon, scott! you have also been of use to other human beings in another way this week — i am going to tell a few mathematically-interested middle- and high-school kids i know, about the seattle math camp you wrote about.

  51. Scott Says:

    How many qubits does a quantum computer need to have to factor an integer of N binary digits in polynomial time?

    Mark: The best upper bound I’ve seen is ~2N+4 qubits, but maybe there’s a better bound by now.

    Depending on how the model is defined, there might be a lower bound of N qubits, arising from the need to write down the number. Or, if the quantum computer is only called as a subroutine, then the limit beneath which any quantum factoring algorithm would imply faster classical factoring algorithms than what we currently know is ~N^(1/3) qubits.

  52. Scott Says:

    My question is, should we discount everything [Les Valiant] has ever done since? He clearly only operates at physicists’ levels of “proof”…

    Tez: No, we shouldn’t discount everything Valiant has done since (nor the #P-completeness paper itself for that matter). I haven’t read the paper to confirm the error you found, but I can easily believe it might be there (these things happen all the time). If so, it wouldn’t decrease my confidence in the slightest in that particular proof’s validity — the reason being that by now, hundreds of other people have written their own versions of the proof (see Papadimitriou’s textbook, among other places).

    (Incidentally, can anyone who has read the paper confirm or deny this error?)

    It’s always good to distinguish between minor errors (which are annoying but almost inevitable in a proof of any length) and major errors (which can’t be patched with any reasonable amount of effort, and which usually lead to a paper’s retraction, or at least the retraction of the specific result in question). If we’re going to discard any result whose original writeup was full of minor errors, then Turing’s 1936 construction of the Universal TM has to go in the garbage among others.

    Of course, Valiant also happens to be one of the most creative CS theorists on earth — having invented PAC-learning, matchgates, evolvability, and a half-dozen other things people now write STOC/FOCS papers about — so even if one of his famous papers were irreparably flawed, one still wouldn’t want to discount everything else he’s done.

  53. Scott Says:

    Do you think the changes of Dungeons and Dragons 4th edition are mostly good or mostly bad?

    Green Elf: I only played D&D once in my entire life, at a summer camp when I was about 9. As soon as I realized that
    (1) the “Dungeon Master” can do anything he wants (“a lightning bolt comes from the sky, and kills every player I don’t like!”), and
    (2) I was not the Dungeon Master,
    I lost interest.

    So I can’t answer your question.

  54. Scott Says:

    Do you actually understand how Grover’s algorithm works and how it can be used in practice?

    Yes.

    Do there exist papers about how Grover’s algorithm works that are more convincing than those currently on the Internet?

    There are many convincing papers, but almost all of them are on the Internet. You could also try the Nielsen-Chuang textbook or various course lecture notes (Preskill, Vazirani, Nayak…)

    And do you understand how the Deutsch or Deutsch-Jozsa algorithm works?

    Yes.

    And which algorithm is hardest to understand (Shor, Grover, or Deutsch), and which is easiest (if they can be understood without understanding quantum mechanics)?

    Obviously you can’t understand any quantum algorithm without knowing the basics of the quantum computing model (which doesn’t mean you need to be able to calculate the ground state of hydrogen, or know anything else about quantum mechanics as it would be taught in a physics course).

    Mathematically, Deutsch-Jozsa is the easiest algorithm to understand and Shor is the hardest, with Grover somewhere in between. None of them are hard by a mathematician’s standards.
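    To give a flavor of why Deutsch’s algorithm is the easiest, here is a minimal simulation of it as plain linear algebra (an illustrative sketch, not drawn from any particular source): given a one-bit function f, a single oracle query decides whether f is constant or balanced.

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def kron(A, B):
    # Kronecker product of two square matrices.
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
I2 = [[1, 0], [0, 1]]

def deutsch(f):
    # Oracle U_f: |x, y> -> |x, y XOR f(x)>, as a 4x4 permutation matrix.
    U = [[0] * 4 for _ in range(4)]
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x))][2 * x + y] = 1
    state = [0.0, 1.0, 0.0, 0.0]        # start in |0>|1>
    state = matvec(kron(H, H), state)   # Hadamard both qubits
    state = matvec(U, state)            # the single oracle query
    state = matvec(kron(H, I2), state)  # Hadamard the first qubit
    p1 = state[2] ** 2 + state[3] ** 2  # Pr[first qubit measures 1]
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))  # constant
print(deutsch(lambda x: x))  # balanced
```

    The final Hadamard converts the phase information gathered by the one query into a deterministic measurement outcome; that interference step is the whole trick.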

  55. Mark Probst Says:

    Or, if the quantum computer is only called as a subroutine, then the limit beneath which any quantum factoring algorithm would imply faster classical factoring algorithms than what we currently know is ~N^(1/3) qubits.

    I don’t understand that. Does it mean that if we have a quantum computer with at least N^(1/3) qubits which we can use as a subroutine in a classical computer, then we can factor faster than we can now? How much faster?

  56. hk Says:

    Hey, I’m lazy so bear with me if this has already been pointed out.

    1) In the complexity lecture you say that hierarchy theorems hold for “all” functions. They don’t. They hold for all functions you can come up with (that are constructible), but there’s an interesting theory about other functions (with arbitrarily large gaps before you get more languages).

    2) In some comment you say that it may be the case that all languages in P are NP-complete. At least under many-one reductions, two sets (∅ and Σ*) are provably not NP-complete 🙂

  57. Scott Says:

    Mark: No, I’m saying that if we had a polynomial-time quantum factoring algorithm whose “quantum part” used only Q qubits, then we could simulate it classically in O(2^Q · poly(n)) time (by just writing down the whole quantum state). The best known classical factoring algorithm, the number field sieve, uses roughly 2^(O(n^(1/3))) time. Hence, if Q were o(n^(1/3)), then simulating the quantum algorithm would beat the number field sieve.
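    To make the exponent comparison concrete, here’s a back-of-the-envelope sketch (Python; the poly(n) = n^3 overhead and the constant 1 in the sieve’s exponent are illustrative assumptions, not the precise values from the analysis):

```python
import math

def sim_cost_log2(Q, n, c=3):
    # log2 of the classical simulation cost 2^Q * poly(n),
    # taking poly(n) = n^c with an illustrative constant c.
    return Q + c * math.log2(n)

def nfs_cost_log2(n):
    # log2 of the number field sieve's heuristic 2^(O(n^(1/3))) running
    # time, with the constant in the exponent set to 1 for illustration.
    return n ** (1 / 3)

n = 10**9             # bits in the number being factored (asymptotically large)
Q = n ** (1 / 3) / 2  # a hypothetical quantum part with o(n^(1/3)) qubits
print(sim_cost_log2(Q, n) < nfs_cost_log2(n))  # True: simulation beats the sieve
```

    For realistically sized n the poly(n) overhead dominates, which is why the claim is asymptotic rather than practical.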

  58. D. Banks Says:

    Are solely quantum mechanical explanations sufficient to describe all natural phenomena?

  59. mg Says:

    possibly because it’s too profound for me, but possibly because it’s pompous and deliberately obscure

    Hah! How Chomskyan of you to admit that ‘maybe you’re just not smart enough’. The war continues…

    What you quoted above is the sort of thesis of the paper, which bears similarity to the tenets of FOSS.

    I should have been more specific, but what I wanted your opinion on was the last bit, which may very well be misguided:

    Switching components, however, be they telegraph relays, tubes or finally microtransistor cells, pay a price for their very composability. Confronted as they are with a continuous environment of weather, waves, and wars, digital computers can cope with this real number-avalanche only by adding element to element. However, the growth rate of possible interconnections between those elements and, that is, of computing power as such, is proven to have a square root function as its upper bound. In other words, it cannot even “keep up with polynomial growth rates in problem size.”19 Thus, the very isolation between digital or discrete elements accounts for a drawback in connectivity which otherwise, “according to current force law” as well as to the basics of combinatorial logics, would be bounded only by a maximum equalling the square number of all involved elements.20

    Precisely this maximal connectivity defines nonprogrammable systems, on the physical side, be they waves or beings. That is why these systems show polynomial growth rates in complexity and, consequently, why only computations done on nonprogrammable machines could keep up with them. In all evidence, this hypothetical but all too necessary type of machine would constitute sheer hardware, a physical device working amidst physical devices and subjected to the same bounded resources. Software in the usual sense of an ever-feasible abstraction would not exist any more. The procedures of these machines, though still open to an algorithmic notation, would have to work essentially on a material substrate whose very connectivity would allow for cellular reconfigurations. And even though the “substrate can also be described in algorithmic terms, by means of simulation,” its “…characterization is of such immense importance for the effectiveness […] and so closely connected with choice of hardware, that…”21 programming them will have little to do anymore with approximated Turing machines.

    Silicon hardware obeys many of the requisites for such highly connected, non-programmable systems. Between its millions of transistor cells, some million to the power of two interactions take place already; there is electronic diffusion, there is quantum mechanical tunneling all over the chip.22 Yet, technically, these interactions are still treated in terms of system limitations, physical side-effects, and so on. To minimize all the noise that it would be possible to eliminate is the prize paid for structurally programmable machines. The inverse strategy of maximizing noise would not only find the way back from IBM to Shannon, it may well be the only way to enter that body of real numbers originally known as chaos.

  60. mg Says:

    Also,

    postmodernists have been “abusing” computer science terminology since the ’70s (see Deleuze and Guattari, for instance). And Kittler dates himself with references to DOS.

    http://www.amazon.com/review/R1VKDTLO0H1WBN/ref=cm_cr_pr_viewpnt#R1VKDTLO0H1WBN is a nice critique indeed. However, note that Kittler is by no means trying to be scientific. His paper is an example of media theory, something quite distinct from science (and something computer scientists should read more of)!

  61. travis Says:

    Scott, in regards to the definition of computation, it seems that you are a believer in the Strong Turing Thesis: A TM can do (compute) anything that a computer can do. I thought these papers by Peter Wegner raise good questions about STT.

    http://www.engr.uconn.edu/~dqg/papers/myth.pdf
    and
    http://www.cs.brown.edu/people/pw/papers/ficacm.ps

    The reason is that a TM computes functions where all the input is available at the start. Today, most computation has input occurring at all stages of the process. The question is whether TMs are a good model when I/O evolves over time.

  62. Scott Says:

    Travis, yes, I’m a proud believer in the Church-Turing Thesis. (Though I don’t know what the modifier “Strong” means in your context — for me, it usually means the polynomial-time version, which is probably falsified by quantum computation.)

    Like Lance, I find Wegner and Goldin’s insistence that the Church-Turing Thesis is falsified by I/O at intermediate stages to be risible. To paraphrase Babbage, I can’t even rightly apprehend the kind of confusion of ideas that provokes such a claim. To me, it’s precisely like saying that the Church-Turing Thesis is falsified because real computers are hooked up to mice and printers whereas Turing machines are not. In both cases, the resolution is obvious: give your Turing machine (formal models of) whatever I/O devices you want — for example using an oracle tape — and the TM can interact with those devices exactly like any other computer.
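    The “formal I/O device” move is easy to model in code. Here’s a toy sketch (the running-sum transition rule is a placeholder of my own, not anything from Wegner’s papers): a machine that receives input at every step is still an ordinary step-by-step state-transition process.

```python
def interactive_machine():
    # A step-by-step transition process whose input arrives incrementally,
    # one symbol per step, rather than all at once on an input tape.
    total = 0
    while True:
        symbol = yield total  # emit current state, wait for the next input
        total += symbol       # placeholder transition rule: running sum

m = interactive_machine()
next(m)           # run to the first yield
print(m.send(2))  # 2
print(m.send(3))  # 5
```

    Nothing here exceeds Turing-machine power; the interaction is just bookkeeping about when the tape contents arrive.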

  63. Scott Says:

    hk:

    1) In the complexity lecture you say that hierarchy theorems hold for “all” functions. They don’t…

    Thanks, fixed!

    2) In some comment you say that it may be the case that all languages in P are NP-complete. At least under many-one reductions 2 sets are provably not NP-complete

    We discussed this in a previous thread — I was talking about Turing reductions.

  64. Daniel Says:

    New Scientist confuses Travelling Salesman with Shortest Path:

    http://technology.newscientist.com/article/dn13657?DCMP=ILC-arttplnk&nsref=dn13657

    […] In his experiments, the masses of independent amoeba-like cells that act as a single organism would initially spread out to explore all the possible paths of a maze.

    But when one train of cells found the shortest path to some food hidden at the maze’s exit the rest of the mass stopped exploring. The slime mould then withdrew from the dead end routes and followed the direct path to the food.

    This is interesting for computer scientists because maze solving is similar to the travelling salesman problem, which asks for the shortest route between a number of points in space. The problem quickly scales in complexity as more points are added, making it a tough problem for classical computers.

  65. Bharath Says:

    I’m an undergrad at Berkeley seriously interested in going to grad school in CS. People have recently been telling me that the best way to get into a good grad school is to identify your area of interest early and do research in it while still an undergrad. As of right now, I’m fairly split between wanting to go into CS theory (most likely complexity) and AI (probably machine learning). I spoke with my faculty advisor about this topic and asked for his opinion. He seemed to favor AI over theory for a number of reasons:

    1) He made the argument that those in theory often develop a tendency to look for easy proof opportunities. For example, tons of people are currently working on SVMs in machine learning because they are easy to prove theorems about. On the other hand, multimodal classifiers are far more difficult to approach theoretically and are thus often ignored.

    2) He questioned whether theoreticians really understand fields as well as they claimed to. His specific comment was that a theoretical understanding of a field is flawed in some way if it cannot help solve a challenging and interesting problem in the field.

    3) He was skeptical whether years of research in complexity had really yielded useful results.

    I found his arguments fairly compelling, but I’m very interested in hearing what a theoretician would have to say. Would you mind giving your opinion Scott?

  66. Scott Says:

    Are solely quantum mechanical explanations sufficient to describe all natural phenomena?

    In 300 words or less, and standing on one foot? 🙂

    I think it’s fair to say that quantum mechanics seems sufficient to account for all natural phenomena that we know about. Here I’m “merely” talking about things that are uncontroversially observationally accessible, like quarks, molecules, stars, black holes, and the early universe, and not about things like consciousness or the origin of the universe that would take us into philosophical no-man’s-land.

    On the other hand, we’ve known since about the 1930’s that QM conflicts with general relativity, and that when we reach the Planck scale at least one of them has to give. Most physicists assume GR will be the one to fail, for two reasons:

    (1) It’s much, much easier to fiddle with GR and get other consistent theories than it is to fiddle with QM, which seems to demand of us that we either swallow it whole or else reject it entirely.

    (2) Even without QM, we know that GR has to break down anyway, because of singularities.

    On the other hand, a small minority of physicists (including Smolin and Penrose) believe QM will fail as well, and it’s just possible they’ll turn out to be right. Penrose, interestingly, has made specific predictions of where he thinks QM will break down and has been involved in designing experiments to test those predictions. One can view the effort to build quantum computers in similar terms: as possibly the most stringent test to which QM itself has ever been subjected.

    In summary, I don’t view the question of QM’s validity as a closed one (as some of the high-energy physicists I’ve talked to do). But it should be clear to everyone that, as spectacular as some of the predictions of QM are (like Shor’s algorithm), at this point a falsification of QM would be ten thousand times more spectacular.

  67. Scott Says:

    mg:

    Hah! How Chomskyan of you to admit that ‘maybe you’re just not smart enough’.

    On this topic, at least, Chomsky and I stand shoulder to shoulder. 😉

    If you want computer scientists to “read more media studies”, please tell the media studies people to define their terms more carefully! I reread the paragraphs you quoted several times, and they still make zero sense to me. Crucially, my lack of understanding is not similar in kind to what I face when reading (say) a nuclear physics paper, where there are many terms that I don’t understand, but that the author obviously does. Rather, here we seem to have an author throwing terms around with no regard whatsoever for their meaning.

    Let me give two examples:

    However, the growth rate of possible interconnections between those elements and, that is, of computing power as such, is proven to have a square root function as its upper bound.

    Huh? What’s being upper-bounded? The number of interconnections? The number of possibilities for the interconnections? Why is the “growth rate of possible interconnections” equivalent to “computing power as such”? And it’s being upper-bounded by the square root of what — the number of elements? If I don’t understand what’s being talked about here, how much chance is there for someone in media studies?

    The inverse strategy of maximizing noise would not only find the way back from IBM to Shannon, it may well be the only way to enter that body of real numbers originally known as chaos.

    Two things strike me as laughable here: first, the reference to “IBM” is actually to Bennett et al.’s logical depth measure, which the author calls the “IBM measure.” That’s like calling general relativity the “University of Berlin theory,” after where Einstein happened to be when he invented it!

    Second, there’s a “body of real numbers originally known as chaos”? Which body of real numbers?

    In short, you might as well be asking me for my opinion on the truth or falsehood of an LSD trip, or a dream you had last night.

  68. Job Says:

    If i take an encoding that can map any real world object to a unique number, then i go around mapping objects to numbers, then i take all the subsets of the objects i mapped and determine the minimum amount of swaps necessary for each given subset of objects to become sorted. If i repeat this lots of times, and determine which universes are impossible (require a given subset of objects to become sorted in less swaps than are required). Then won’t there be lots and lots of impossible universes? In particular if we consider all lower bounds beyond sorting?

  69. Bilal Says:

    Hi Scott! Hope you feel better. And no, I don’t have a question yet. 🙂

  70. Scott Says:

    Bharath, it won’t surprise you to learn that I disagree with your advisor’s attacks on complexity theory in the strongest possible terms. Criticizing complexity theorists for a “lack of useful results” is like criticizing cosmologists, number theorists, or mathematical logicians for the same offense: it’s not the main goal and they never said it was. (Though in each of these cases, one could argue that the spinoffs have more than justified the field even in crude practical terms.)

    But the question that faces you is only marginally related to all of this. You need to decide: what do you enjoy doing, and where will you be successful? And here the answer might very well be AI. (Had your advisor told you to go into AI for reasons that made some reference to you, I probably would’ve had no quarrel, since he knows you better than I do.)

    It might interest you to know that, when I started grad school at Berkeley, I was a machine learning / AI person — partly because those were the people who took an interest in my application, partly because my undergrad training at Cornell was much more in AI than in theory, partly because I genuinely liked it (at the time, my dream was to forge a connection between AI and quantum computing), and partly because I was convinced that I wasn’t cut out for pure theory. It took me a year to discover that (1) there wasn’t as much to say at the intersection of AI and quantum computing as I’d thought, and (2) I could indeed get results in pure quantum computing theory.

    Again, I don’t know you well enough to know which research area is right for you (and you might not yet know yourself well enough). But I strongly urge you to make your decision based not only on some “global” assessment of the fields (your own or someone else’s), but also on your own inclinations and abilities.

  71. Scott Says:

    Job: Sorry, I didn’t understand your question at all. (E.g., are you identifying a “universe” with the subset of objects that are in it? But in that case, where does sorting come in? A subset doesn’t induce an ordering on its elements.) Any way you can distill the essence of your question down to one sentence?

  72. Job Says:

    I’m not sure, but it stems a little from the many-worlds view in QM. If we consider every computational lower bound, then of all the universes imaginable branching off between now and 10 minutes from now, don’t most of them require going down a path that involves breaking some bound, seeing as there are so many of them?

  73. Scott Says:

    Job, I don’t know of any bound in CS theory that conflicts with the many-worlds interpretation. From the standpoint of any individual branch in MWI, the choices that were made to get to that branch just look random. But for a computer scientist, randomness is relatively mundane — it doesn’t even take you beyond BPP, let alone to BQP! In particular, we don’t believe that having an infinite supply of random bits gives you the power to solve specific hard problems like NP-complete ones efficiently. (Of course it lets you solve them with exponentially small probability, but that’s not useful or interesting.) If you think more concretely about what computational lower bounds we actually know or conjecture, and what they actually say, I have a feeling your difficulties will lessen.

  74. V Says:

    I graduated 4 years ago, and after industry, I am now in my first year in grad school. I tried another research area, but I decided that I am really interested in theoretical work. (Theory in AI, I fancy.) I’ve begun taking all the kewl theory courses I can. What is the most efficient way to acquire all the math I never cared to learn properly?

  75. Scott Says:

    V: For me, the most efficient way was to tackle actual theory problems, and learn a given area of math when and only when I could see that I needed it. (Though note that, at the time I started tackling theory problems, I’d already taken undergrad algebra, analysis, topology, and probability at Cornell.)

    This approach had the benefit that everything I was learning had a clear context (and was therefore much more likely to stick in my head); on the other hand, it also left many embarrassing gaps in my education (for example, I still don’t know representation theory). Which combination of courses and “hands-on training” is right for you depends on your own style, as well as just how mathematical the theory problems you want to work on are.

  76. V Says:

    For some time, I have had the impression that Computational Learning Theory requires tools from diverse areas of mathematics; but I don’t know for sure, as I haven’t had a suitable opportunity to find out yet. So, besides being “Theory in AI”, I think that it will provide me with an opportunity to learn a lot of mathematics.

    If my guesstimate/notion is wrong, please let me know. If it isn’t, it would be very interesting to see a list of the areas of mathematics it draws from. (At least it will help me see what I’m in for next semester.)

  77. anonymous Says:

    Do you have any advice on how to find problems from theoretical computer science that would make fun and challenging online puzzles/games?

    For example, see:

    http://www.planarity.net

    Maybe a puzzle based on adaptive sorting might be interesting?

    Moreover, do you think that having people play the role of an adversary in such games would be interesting? And if so, for which problems in particular?

    For example, one can have a sorting game where you are shown a partial order. One player chooses two nodes in the partial order and the other player determines the outcome of the comparison. The partial order is then updated.

  78. Abel Says:

    Are there any interesting results concerning subsets of Turing Machines for which the Halting Problem can be solved?

    For example, it is trivial to see that a machine which can never return to a previously visited state always halts (it runs out of states within finitely many steps). But it would be interesting if there were any non-trivial result for a subset containing both machines that halt and machines that don’t.

  79. mg Says:

    Thanks for your replies Scott. I had the same criticisms (though for me they were suspicions). As an artist-programmer, I’m easily wooed by pseudo-technical aesthetics. When I read things like that, part of me is inspired, but another part is left with a “does he really know what he’s talking about?” feeling. The question for me remains, then: if there is little scientific credibility to the paper, do the arguments hold real ethico-aesthetic merit?

    Nevertheless, the blind focus on utility — that is, market value and supposed “life-saving ability”, so ethical utility (as in utilitarianism) — in computer science (not theoretical computer science so much) is just silly. Computers have tremendous ethical implications and to ignore the great body of literature on the subject, or to replace it with Wired optimism, know-it-all blogger espousings, “scientific rationality”, and/or manifestos from ‘visionary’ entrepreneurs, is contemptible. It’s not all postmodern trash, as “Olly” contends.

  80. mg Says:

    V, this might interest you: http://steve-yegge.blogspot.com/2006/03/math-for-programmers.html

    Also, you can teach yourself quite a bit of math between Open Courseware video lectures and self-teaching-oriented math books. Janich is great.

  81. D. Banks Says:

    A traveling saleslady problem: Suppose a traveling saleslady (gender for purpose of distinction) always makes route selections on an irrational, but not necessarily random, basis. Is it irrational to suppose that an accidental optimal solution would be envied by a traveling salesman who arrives simultaneously by applying an algorithm of rigorous logic? [Okay, this is a joke!]

  82. Eric Hennigan Says:

    A while back I had a really crazy idea while I was dozing off to sleep, so it is very likely to be complete nonsense. I have no clue as to the details of the idea, but I’d like to know whether you think the approach has any merit, and whether you are aware of anyone in the theoretical CS community having already worked in this direction.

    Hallucinated Abstract:
    This paper presents an isomorphism between the set of problems in P and the Natural Numbers, and an isomorphism between the set of problems in NP and the reals. Using these isomorphisms, the properties of the equivalent classes P and NP are investigated. A property is found which holds for one class but not the other, thus P != NP. The isomorphisms are then used to prove that the Continuum Hypothesis is false because of the relationship between P and NP.

  83. Not Even Right Says:

    Scott,

    I think you must be very confident, since you dared to invite us to ask you whatever is on our minds.

    I have got a question: do you know the differences between Computer Science research and Physics research?

  84. Scott Says:

    V: Yes, COLT involves Fourier analysis, probability theory, polynomial approximation theory, and at least in Steve Smale’s interpretation, pretty much the whole of math. 🙂

  85. Scott Says:

    anonymous: There are hundreds of NP-complete problems that could be turned into entertaining games. (When I was 15, I wrote a game based on the planar Traveling Salesman Problem, which kept me and some friends occupied for a week or two…) The central difficulty, of course, is that the fun-ness of a game depends on all sorts of hard-to-formalize factors that have nothing to do with computational complexity. To illustrate, I guarantee you that a game based on packing Tetris pieces will be more fun than a game based on solving systems of quadratic equations over a finite field — even though the two games are polynomially equivalent!

  86. Scott Says:

    Are there any interesting results concerning subsets of Turing Machines for which the Halting Problem can be solved?

    Abel: That’s an interesting question!

    As a first observation, any nontrivial result that shows that some problem is decidable, could be interpreted as showing that the Halting Problem is decidable for some nontrivial subset of Turing machines. So for example, Tarski gave a decision procedure for the first-order theory of reals. One could interpret his result as saying, “the Halting Problem for Turing machines that enumerate proofs in the first-order theory of reals, halting when they find a valid proof for some given sentence, is decidable.”

    Now, that’s admittedly a contrived example! Probably what you’re looking for is a subset of Turing machines whose halting problem is nontrivially decidable, and which can be defined in purely “syntactic” terms, without referring to what the machines do. In that case, I can’t think of a single good example — can anyone?
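
    On the other hand, once you bound the machine’s resources, halting does become decidable. Here’s a toy Python sketch (my own made-up machine encoding, not any standard library): a Turing machine confined to a fixed-length tape has only finitely many configurations, so you can decide halting by simulating until it either halts or repeats a configuration.

```python
# Sketch (hedged): deciding halting for Turing machines restricted to a
# fixed-length tape. Such a machine has only finitely many
# configurations (state, head position, tape contents), so if a
# configuration ever repeats, the machine must loop forever.
def halts_on_bounded_tape(delta, start_state, halt_states, tape):
    """delta: total map (state, symbol) -> (state, symbol, move),
    with move in {-1, +1}; the head is clamped to the tape ends."""
    tape = list(tape)
    state, head = start_state, 0
    seen = set()
    while state not in halt_states:
        config = (state, head, tuple(tape))
        if config in seen:
            return False              # repeated configuration: runs forever
        seen.add(config)
        state, write, move = delta[(state, tape[head])]
        tape[head] = write
        head = max(0, min(len(tape) - 1, head + move))
    return True

# A machine that marches right forever (head parks at the last cell):
print(halts_on_bounded_tape({('q', 0): ('q', 0, +1)}, 'q', {'h'}, [0, 0, 0]))  # False
# A machine that writes 1s and halts on reading a 1:
print(halts_on_bounded_tape({('q', 0): ('q', 1, +1), ('q', 1): ('h', 1, +1)}, 'q', {'h'}, [0, 0]))  # True
```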

  87. Anon Says:

    For those wondering whether to consider an academic career (numbers 7, 12, 20, …), one piece of advice I can give you is to ask a wide variety of professors/researchers *outside the top 10 schools* their opinions on the matter. A problem is that students most likely to pursue academic careers are also most likely in top 10-20 schools, and therefore get a more positively biased view of what life as a professor is like. To find out whether you would still be happy as a professor at a low-ranked school, you have to *ask* a professor at a low-ranked school rather than a successful professor at MIT. (No knock on you, Scott, just being fair to those who aren’t going to end up at MIT…)

  88. Scott Says:

    mg: I couldn’t agree more strongly that we should think through the ethical implications of everything we do, and that “Wired optimism” often verges on self-parody. The question is, how does postmodern gobbledygook help us to become more ethical? As Martha Nussbaum pointed out in her brilliant takedown of Judith Butler, “writing undemocratically” actually seems diametrically opposed to social justice…

  89. Scott Says:

    Hallucinated Abstract:
    This paper presents an isomorphism between the set of problems in P and the Natural Numbers, and an isomorphism between the set of problems in NP and the reals. Using these isomorphisms, the properties of the equivalent classes P and NP are investigated. A property is found which holds for one class but not the other, thus P != NP. The isomorphisms are then used to prove that the Continuum Hypothesis is false because of the relationship between P and NP.

    Eric: Sounds like a wild dream! Mine usually involve being lost in a gigantic junior high school…

  90. KaoriBlue Says:

    What role does exposure (at an early age) to science fiction, fantasy, associated genres – or lack thereof – play in the maturation and choice of field for scientists/mathematicians?

  91. Scott Says:

    Do you know the differences between Computer Science research and Physics research?

    See my fable of The Physicists and the Wagon.

  92. Scott Says:

    What role does exposure (at an early age) to science fiction, fantasy, associated genres – or lack thereof – play in the maturation and choice of field for scientists/mathematicians?

    KaoriBlue: That’s a great question, and I don’t know the answer!

    Personally, I went through a phase when I was ten or so when I devoured Asimov’s robot stories, but I don’t know if it played any significant role in my going into CS. I do remember that after I learned BASIC, one of my first projects was to write a human-level AI program, whose secret (one that all those scientists missed!) would be that it would rely on Asimov’s Three Laws of Robotics. I wrote a really nice user interface for submitting questions to the program, and only got stuck at the part where I had to make the program answer the questions in a way that would protect human life, obey orders insofar as that didn’t conflict with the First Law, and protect the program’s own existence insofar as that didn’t conflict with the First or Second Laws… 🙂

  93. Job Says:

    Concerning the halting subsets of TMs, if TMs A and B halt, then TM C, a composite of A and B, always halts as well, right? That being the case, there is a whole hierarchy of halting TMs. I wonder, given any TM M, whether we can usually find a TM M’ in the halting hierarchy that is equivalent.

  94. Scott Says:

    Job, you really need to define things more carefully. For example, what is the “composite” of two Turing machines?

  95. Job Says:

    What I had in mind was one of A or B followed by the other: if A accepts, B is called on A’s output; if A rejects, reject.
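
    In code, that composition might look like this (a toy Python model where each “machine” is a function returning an accept flag and an output; the example machines are made up):

```python
# Toy model of the "composite" of two machines: each machine is a
# function x -> (accepted, output). If A and B both halt on every
# input, then so does their composite, since it just runs one after
# the other.
def compose(A, B):
    def C(x):
        accepted, out = A(x)
        if not accepted:
            return (False, out)   # A rejects: reject without running B
        return B(out)             # A accepts: run B on A's output
    return C

# Hypothetical example machines:
strip = lambda s: (len(s.strip()) > 0, s.strip())  # accept iff nonempty after stripping
shout = lambda s: (True, s.upper())

C = compose(strip, shout)
print(C("  hi  "))   # (True, 'HI')
print(C("   "))      # (False, '')
```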

  96. harrison Says:

    so, more on halting TMs:

    is it known that probabilistic Turing machines can’t estimate the halting problem (i.e., that there is no probabilistic Turing machine that returns true with probability at least 2/3 if machine M halts on input X, and returns true with probability at most 1/3 otherwise)? I mean, I’m 98% sure it’s true — I’m just wondering if it’s proven.

  97. Oleg Says:

    Hi Scott, I want to ask you something about PhDs in Computer Science and how they are viewed in industry. Currently I’m tossing up whether to do a PhD in CS, but I have no desire to enter academia thereafter. Now, don’t get me wrong, I love research, which is why I want to do a PhD, but I fully intend to go into industry after completing it. My reason for wanting to do a PhD is that I feel I have “unfinished business” after my undergrad degree, things I want to do and learn and discover, so it’s not like I’m just trying to escape the “real world” for a few more years.

    Anyway, so the question is, is a PhD in CS viewed favourably in the I.T. industry, will it stereotype me as somebody who wanted to escape the real world for a few more years, or is it viewed as a valuable experience by employers?

    If anyone else has a POV on the issue or experience with this, feel free to chime in.

  98. Raoul Ohio Says:

    The academic jobs/graduate study issue:

    1. There are not that many good academic positions. They are hard to get. It is a lot of work. The good aspects/bad aspects ratio drops every year: Administrators get more money, faculty fill out more forms.

    2. But – learning is fun, research is fun, teaching is mostly fun. Being around a college has a lot of upside.

    3. Keep in mind that most people change what area they work in several times.

    4. Invest much of your course work in math, physics, CS (including programming), and subjects you particularly like. Work on your writing skills. You will always be able to get interesting jobs that pay halfway well. These jobs might not be in any area you study now, or even exist yet. But you will be ready when they do.

  99. Job Says:

    In my opinion a PhD is definitely favorable, but the engineering side of CS moves so quickly that losing a few years to a PhD will leave you some distance behind.
    I’m either programming or studying new technologies all of the time, and yet I can’t keep up with even just the web stuff. It’s very frustrating; there’s too much going on.

  100. sp Says:

    Are you computer sciencist about all CS or only about quantum computing (or at least in this dirrection)?
    What is harder to understand, all classical CS or “quantum” CS (all complexity, algorithms, etc)?

    Do algebra has big realation with CS? Do algebra is part of CS? Personly I see algebra in math like science about nothing and mathematical analyse is very realated with physic and elementary math.

  101. Joe Bebel Says:

    I’ve been reading your blog for a while and never posted – but since you’re taking questions a couple just popped in my head:

    (1) It seems to be generally thought that BQP != BPP (due to classically believed-to-be hard problems being in BQP, and currently provable speedups a la Grover).
    What if, on the other hand, BQP = BPP? Besides the obvious conclusion of important problems like factoring being in BPP, are there any profound implications? i.e., that quantum mechanics can provide a polynomial improvement in complexity but not an exponential one? For example, QM seems to already allow an exponential explosion in some sense: classically, n bits encode one state; n qubits can encode the superposition of many states.

    (2) Quantum circuits seem to be the primary useful means of describing and analyzing quantum computers, but it seems like it can be difficult to draw analogy to classic turing machines (in some vague sense, I know, sorry)
    Quantum turing machines, on the other hand, apparently have their own problems (possible superposition of halted/not halted?) Are there effective, alternative models of quantum computation besides quantum circuits? Or is there no major obvious, tangible benefit to models other than circuits?

    Thanks!

  102. Scott Says:

    is it known that probabilistic Turing machines can’t estimate the halting problem

    Harrison: Yes, in fact they decide exactly the same languages as deterministic TM’s. If all branches in the probabilistic TM’s computation tree halt, this is trivial: just create a deterministic TM that calculates the probabilistic TM’s acceptance probability. The one slight subtlety occurs if not all branches of the probabilistic TM halt, but there’s still some limiting acceptance probability. But even in that case, if there’s any finite gap between the limiting accepting probability and limiting rejecting probability, it’s a result of Shannon that a deterministic TM can calculate which case you’re in, and hence (in particular) you can’t solve the halting problem.
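
    To make the first (easy) case concrete, here’s a toy Python sketch (function names are mine, hypothetical): if the machine flips at most k coins and every branch halts, a deterministic procedure recovers the exact acceptance probability by enumerating all 2^k coin sequences.

```python
from itertools import product

# Sketch: derandomizing a bounded-coin probabilistic machine by brute
# force. `algorithm(x, coins)` must be deterministic given the input
# and a fixed tuple of coin flips, and must halt on every branch.
def acceptance_probability(algorithm, x, num_coins):
    accepts = sum(bool(algorithm(x, coins))
                  for coins in product([0, 1], repeat=num_coins))
    return accepts / 2 ** num_coins

# Toy "probabilistic machine": accept iff a majority of 3 coins are 1.
def majority_coin(x, coins):
    return sum(coins) >= 2

print(acceptance_probability(majority_coin, None, 3))  # 0.5
```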

  103. Scott Says:

    is a PhD in CS viewed favourably in the I.T. industry, will it stereotype me as somebody who wanted to escape the real world for a few more years, or is it viewed as a valuable experience by employers?

    As I said in an earlier post, notwithstanding the PhD expunging service, it seems to me (from my friends in industry) that a PhD in CS from a top institution is viewed pretty favorably in Silicon Valley. People with money are not (all!) idiots; they know that Google, Yahoo!, Akamai, etc. all emerged out of academic CS departments. Of course, it will help if you actually learn things of value during your PhD work, as opposed to just having the line to add to your resume. 🙂 I’m sure there are readers in industry who can answer the question better than I can.

  104. Scott Says:

    sp:

    Are you computer sciencist about all CS or only about quantum computing (or at least in this dirrection)?

    I do lots of classical CS too — look at my papers if you don’t believe me.

    What is harder to understand, all classical CS or “quantum” CS (all complexity, algorithms, etc)?

    I think I’ll have to go with “all classical CS,” just because there’s so much more of it.

    Do algebra has big realation with CS?

    Yes.

    Do algebra is part of CS?

    Some aspects of it, maybe.

    sp and everyone else: enough with the multi-part questions! One question at a time (or at most two); thanks!

  105. Scott Says:

    Joe Bebel:

    (1) Yes, the “profound” implications of BPP=BQP would be that (i) quantum mechanics would give at most a polynomial improvement in complexity, and (ii) the RSA cryptosystem would fall. At present, we don’t know of any strange “complexity-theoretic” consequences of BPP=BQP, other than BPP=BQP itself — that is, we don’t know if it would collapse the polynomial hierarchy or anything like that.

    (2) By now, we have several striking ways to think about quantum computation besides just quantum circuits and quantum Turing machines. These include adiabatic quantum computation, cluster-state quantum computation, anyonic/topological quantum computation, and quantum cellular automata. But except for quantum cellular automata, all of these take us further from the intuition of Turing machines rather than closer to it! Also, while the new models are extremely interesting, so far they’ve proven to be more useful for physicists trying to connect quantum computing to existing physics, than for computer scientists hoping to learn something about BQP independent of any model. That could change though.

  106. Anonymous (for his own safety) Says:

    Can you crank dat as well or better than Stallman?

  107. KaoriBlue Says:

    One last one last question I’m going to throw out into the ether (I’ve been having awful insomnia). I was introduced to Valiant’s “Holographic Algorithms” paper a little while back, and how it could be used to solve problems like sharp PL-3-NAE-SAT in polynomial time by reducing it to a PerfMatch problem. It really surprised me, and to be honest I’m still trying to understand how it works… then again I’m not really a computer scientist!

    He states in his paper (in regards to his holographic reductions):

    “…any proof of P (not equal) NP may need to explain, and not only to imply, the unsolvability of our polynomial systems.”

    What exactly does he mean by this? Does he simply mean that holographic algorithms are introducing yet another road-block or constraint on the methods used to prove P (not equal) NP, or that something additional on top of P (!=) NP must now be shown?

  108. Scott Says:

    Can you crank dat as well or better than Stallman?

    Not without years of training.

  109. Scott Says:

    KaoriBlue: Personally I don’t agree with the idea that holographic algorithms have major conceptual implications for the P vs. NP problem. To me, they’re simply another class of extremely interesting polynomial-time algorithms, for problems for which one might not have imagined such algorithms to exist. But we’ve seen many such examples since the 1960’s — indeed the algorithm that started the field, for maximum matching in general graphs, was an excellent example.

    So I don’t know what Valiant meant in the passage you quoted. If he simply meant that any proof of P≠NP will have to fail for sharp PL-3-NAE-SAT, in the same (nontrivial) sense that it will have to fail for 2SAT and maximum matching, then I wholeheartedly agree with him. If he meant something stronger, then it’s quite possible that I disagree.

  110. KaoriBlue Says:

    Hey, thanks for your response. It sounds like we’re on the same page about Valiant’s holographic algorithms… which is nice to know since this work is pretty far removed from what I mostly spend my time thinking about.

  111. Yarden Says:

    What field will uncover the mystery of intelligence/the way the mind works first, neuroscience or artificial intelligence?

  112. Simina Says:

    Hello,
    I’m an undergraduate student and will be applying for grad school this fall. I’ve spent ~ 1 year doing compiler work and a few months this summer will be working with a professor in AI.
    My question is – if I apply for a more theoretical area, like pure and applied logic, + programming languages, is the other experience going to be relevant at all? I’d assume most students applying for a department have done research in that specific area.
    Thank you.

  113. KaoriBlue Says:

    Well, the AI field has certainly made astounding breakthroughs in probing the lower bounds of human intelligence. However, since cutting-edge wet neuroscience labs at places like UCSF barely have probe-arrays working (I have a friend who’s been filling me in on just how tedious and miserable it is to isolate single neurons in a macaque with one electrode), I’m going to say it’s a toss-up.

  114. D. Banks Says:

    Two perhaps very naive questions:

    1. Is there any way, conceptually, that advances in conventional computer hardware/software could emulate what we expect a quantum computer will be able to do?

    2. How can we ever say X = Y exactly if when taken individually X and Y are changed upon observation?

  115. Jack in Danville Says:

    When, if ever, can we expect to see the publication of the rest of the Democritus lectures, and/or the series expanded into a book?

  116. Scott Says:

    What field will uncover the mystery of intelligence/the way the mind works first, neuroscience or artificial intelligence?

    That’s like asking who will get us across the ocean first, the navigators or the shipbuilders. Both seem essential if we’re ever going to get there.

  117. Scott Says:

    Is there any way, conceptually, that advances in conventional computer hardware/software could emulate what we expect a quantum computer will be able to do?

    Not without huge theoretical advances. If BPP=BQP, then even computers today would be able to simulate quantum computers “efficiently” (i.e., with some polynomial slowdown). But if BPP≠BQP, then we don’t expect any improvement to conventional computers, ever, to enable them to simulate quantum computers in a reasonable amount of time.

    (Of course, it’s also conceivable that BPP≠BQP, but that efficient classical algorithms will be discovered for factoring and various other things that we’d actually want to use a quantum computer for.)
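
    To get a feel for where the exponential cost comes from, here’s a toy statevector simulator (a bare-bones sketch of the standard textbook method, not production code): an n-qubit state takes 2^n amplitudes, and every gate sweeps through all of them.

```python
import math

# Sketch: brute-force classical simulation of a quantum circuit. The
# state of n qubits is stored as 2**n complex amplitudes, which is
# exactly the exponential blowup being discussed.
def apply_single_qubit_gate(state, gate, target):
    """Apply a 2x2 gate to qubit `target` of a 2**n amplitude vector."""
    (a, b), (c, d) = gate
    step = 1 << target
    new = state[:]
    for i in range(len(state)):
        if i & step == 0:             # pair amplitudes differing in bit `target`
            x, y = state[i], state[i | step]
            new[i] = a * x + b * y
            new[i | step] = c * x + d * y
    return new

n = 3
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
state = [0.0] * (1 << n)              # 2**n amplitudes: the exponential cost
state[0] = 1.0                        # start in |00...0>
for q in range(n):
    state = apply_single_qubit_gate(state, H, q)
# Hadamard on every qubit yields the uniform superposition:
print(all(abs(amp - 1 / math.sqrt(1 << n)) < 1e-9 for amp in state))  # True
```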

    How can we ever say X = Y exactly if when taken individually X and Y are changed upon observation?

    If (1) X and Y are both changed upon observation, (2) the observation doesn’t reveal the values to which they’re changed, and (3) you don’t have prior knowledge of their values, then I guess you can’t be certain they’re currently equal (at most, that they had been equal at some previous time). So what?

  118. Scott Says:

    When, if ever, can we expect to see the publication of the rest of the Democritus lectures, and/or the series expanded into a book?

    “Soon.”

  119. Scott Says:

    if I apply for a more theoretical area, like pure and applied logic, + programming languages, is the other experience going to be relevant at all?

    Simina: Yes, pretty much any research, teaching, or work experience is relevant, and you should discuss it in your application. (You should also discuss what you gained from the experience, and why you now want to switch to a more theoretical area.)

  120. AF Says:

    A question that keeps coming into my head whenever I read the physics/CS theory-oriented discussions on this page: do you believe that the research process is more rewarding if you come at it from a more broad background?

    While specializing in a very specific field certainly provides a researcher with particularly sharp tools for attacking his or her topic, I would think that progress in a field like quantum computation (which is the amalgamation of a variety of other research fields) would be accelerated by individuals who utilize a collection of different approaches to solve problems.

    Additionally, would you recommend that theory researchers dabble in the experimental side of quantum computation in order to broaden their perspectives on their own topic? For me, doing a research project with an equal mix of theory and experimentation would probably be the most insightful and rewarding.

  121. Scott Says:

    do you believe that the research process is more rewarding if you come at it from a more broad background?

    In general, absolutely! But you also need depth in the specific area you’ll be working in. (I must’ve met a dozen people who would say things like “but there must be a deep connection between quantum computing and biology!,” not knowing much about either, and feel certain they were being infinitely more profound than any mere specialist.)

    Additionally, would you recommend that theory researchers dabble in the experimental side of quantum computation in order to broaden their perspectives on their own topic?

    If the kind of theory you want to do is the kind that’s one step removed from experiment, then spending some time doing lab work is an excellent idea. On the other hand, if the kind of theory you want to do involves BQPSPACE and QMA/qpoly, then you’re probably better off heading to the math department for your dose of interdisciplinarity. (Taking tours of labs is a different story entirely. Even I get to tour QC labs a couple times per year, as long as I promise not to touch anything!)

  122. anonymous Says:

    Do you think CS professors are being deceptive when they say that computer science is a creative field? They do this by redefining creativity to mean implementation-level creativity. Yet, that’s not what most people mean by creativity. Isn’t this wrong?

    On a related note, suppose three people worked on Wikipedia: one person came up with the idea, another wrote the software, and a third evaluated the quality of the result. Which of these three people would be considered computer scientists? Does it make a difference if they write three papers on each of their distinct contributions or whether they write a paper combining all their work?

  123. Scott Says:

    Do you think CS professors are being deceptive when they say that computer science is a creative field?

    No.

    Which of these three people would be considered computer scientists?

    I dunno — the one(s) who considered themselves computer scientists, and/or published in computer science venues?

  124. LK Says:

    Scott, do you think self-modifying programs would have any practical application? And is it even possible for a program to modify any part of itself (including the self-modification mechanism) and still remain useful for some task?

  125. Douglas Knight Says:

    What do you think of the phase transition perspective on P vs NP?

  126. Scott Says:

    do you think self-modifying programs would have any practical application

    Sure. In fact many programs we use all the time — web browsers, operating systems, etc. — could be seen as “self-modifying,” depending on how you define the term. (E.g., they can download updates for themselves, though admittedly they can’t write the updates yet.)
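
    Here’s about the smallest toy demonstration of the mechanism (a contrived sketch; real auto-updaters are of course far more involved): a program that carries its own code as data, edits that data, and re-executes it.

```python
# Toy illustration of "self-modification": the program holds its own
# code as an ordinary string, edits it, and re-executes the edited
# version. (Nothing here resembles how real updaters work.)
source = '''
def greet():
    return "version 1"
'''

namespace = {}
exec(source, namespace)
print(namespace['greet']())            # version 1

# The program modifies its own code as ordinary string data...
source = source.replace("version 1", "version 2")
exec(source, namespace)

# ...and subsequent calls run the modified code.
print(namespace['greet']())            # version 2
```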

  127. Scott Says:

    What do you think of the phase transition perspective on P vs NP?

    I’ve been keenly interested in phase transitions in random k-SAT for more than a decade. I can’t say that this perspective has yet provided much insight into the P vs. NP question, but conceivably it will someday.

  128. Maleq Khan Says:

    Hi Scott, can you shed some light on self-referential stuff?

    E.g., let S be a set of sets such that S is a member of itself. Do you consider S well-defined? How do you explain Russell’s paradox?

    In the proof of Gödel’s theorem, the sentence “This sentence is not provable” strikes me as meaningless and undefined. After many tries, I still don’t accept (meaning I don’t understand) a proof based on such self-referential sentences. How do we make sure that it is leading us to a proof, not to a paradox?

    I am asking you this mainly because, after reading your GITCS lectures, I saw how easily you can explain complex stuff.

  129. lylebot Says:

    Do you think CS professors are being deceptive when they say that computer science is a creative field? They do this by redefining creativity to mean implementation-level creativity. Yet, that’s not what most people mean by creativity. Isn’t this wrong?

    I’ve never understood CS creativity to mean “implementation-level creativity”. Creativity in CS is just like creativity in any other field of science: coming up with interesting research questions and figuring out how to answer them, either by experiment or proof.

    On a related note, suppose three people worked on Wikipedia: one person came up with the idea, another wrote the software, and a third evaluated the quality of the result. Which of these three people would be considered computer scientists?

    What’s the research question?

  130. Scott Says:

    Maleq: Without the Axiom of Regularity, you can indeed have sets that are members of themselves, and it’s not necessarily a disaster. The set of all sets is what leads to contradictions like Russell’s Paradox, and that’s why “unrestricted comprehension” was banished in the early 20th century, with the creation of axiomatic set theory.
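
    To spell out the standard one-line derivation (nothing original here): if unrestricted comprehension let us form R = {x : x ∉ x} as a set, then asking whether R is a member of itself gives

```latex
% Russell's paradox, assuming R = { x : x \notin x } is a set:
\[
  R \in R \iff R \notin R ,
\]
% a contradiction. ZF's separation axiom blocks this by only allowing
% \{ x \in A : \varphi(x) \} for a set A given in advance.
```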

    Gödel’s Theorem is a different story entirely. The whole point is that Gödel showed how to compile “This sentence is not provable” into a sentence purely about positive integers, which doesn’t say anything directly about sentences, provability, or itself. I can’t explain this fully in a blog comment; you’ll have to do some reading if you want to understand it.

    On that note, I recently read Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse by Torkel Franzén, and can recommend it in the strongest possible terms to anyone who wants to understand the theorem and what it says and doesn’t say. (I think I understand pretty well, yet I learned maybe a dozen new things from this book.)

  131. Adam Says:

    Got any tip for “success” in grad school/research in general?

  132. anonymous Says:

    I’ve never understood CS creativity to mean “implementation-level creativity”. Creativity in CS is just like creativity in any other field of science: coming up with interesting research questions and figuring out how to answer them, either by experiment or proof.

    The vast majority of CS majors go on to industry, where much of their work does not involve asking (interesting) research questions or doing (interesting) experiments/proofs. So that sort of creativity does not apply to them.

    What’s the research question?

    It’s sort of obvious for Wikipedia: can an open access wiki result in a high quality encyclopedia? So only the first person, the one who came up with this idea, is being creative?

  133. Anonymous Says:

    Since you invited us to ask any question, I am gonna ask one about graduate school applications…

    I am an Egyptian CS student, have finished my undergrad studies in Egypt (Ain Shams Univ.) – have little research experience – never got published – ranked 1st on my dept. – currently working as a TA/RA in my school – two years experience working in industry – excellent recommendation letters but from unknown professors – good but not outstanding GRE/TOEFL scores.

    Which universities would you suggest for me, either in the US or Canada, for doing my graduate studies? And, by the way, is there ‘any’ hope to get a decent school?!

  134. Scott Says:

    Got any tip for “success” in grad school/research in general?

    Yes: be more specific when you ask questions! 🙂

  135. Scott Says:

    Which universities would you suggest for me, either in the US or Canada, for doing my graduate studies? And, by the way, is there ‘any’ hope to get a decent school?!

    Yes, there’s hope to get into a decent school even without a research record. However, you’ll greatly increase your chances if you apply broadly — i.e. to 20 or 30 places and not just to Berkeley, Stanford, MIT, etc. Not knowing more about you I can’t recommend specific places.

  136. Scott Says:

    Matthew P. Johnson writes in to answer Abel’s question, on whether there exists a “nontrivial” but structurally-defined subset of Turing machines for which the halting problem is decidable:

      Supposedly, the subset is extremely close to trivial, i.e., almost all of them (!):
      “The halting problem for Turing machines is decidable on a set of asymptotic probability one. Specifically, there is a set B of Turing machine programs such that (i) B has asymptotic probability one, so that as the number of states n increases, the proportion of all n-state programs that are in B goes to one; (ii) B is polynomial time decidable; and (iii) the halting problem H intersect B is polynomial time decidable. The proof is sensitive to the particular computational model.”
      “The halting problem is decidable on a set of asymptotic probability one”
      http://arxiv.org/abs/math/0504351

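
    To get a feel for why “almost all” machines can be easy to decide, here is a toy sketch (my own illustration — the simulator, the random-machine distribution, and the step budget are all assumptions of the sketch, not the paper’s actual model or proof): simulate each machine for a bounded number of steps and flag any exact repeat of a configuration as a provable non-halter.

    ```python
    import random

    def classify(tm, max_steps=1000):
        """Run a toy one-tape Turing machine and classify it as 'halts',
        'loops' (a full configuration repeated exactly), or 'unknown'
        (the step budget ran out).

        tm maps (state, symbol) -> (next_state, write_symbol, move);
        a missing transition means the machine halts.  The start state
        is 0 and the tape is initially all zeros (stored sparsely)."""
        state, head, tape = 0, 0, {}
        seen = set()
        for _ in range(max_steps):
            config = (state, head, tuple(sorted(tape.items())))
            if config in seen:
                return "loops"
            seen.add(config)
            key = (state, tape.get(head, 0))
            if key not in tm:
                return "halts"
            state, write, move = tm[key]
            if write:
                tape[head] = write
            else:
                tape.pop(head, None)  # keep the tape dict sparse
            head += move
        return "unknown"

    def random_machine(n_states, n_symbols=2):
        """A random transition table; next_state == n_states acts as an
        implicit halt state, since it has no outgoing transitions."""
        return {(s, a): (random.randrange(n_states + 1),
                         random.randrange(n_symbols),
                         random.choice((-1, 1)))
                for s in range(n_states) for a in range(n_symbols)}

    random.seed(0)
    counts = {"halts": 0, "loops": 0, "unknown": 0}
    for _ in range(1000):
        counts[classify(random_machine(4))] += 1
    print(counts)  # tally over 1000 random 4-state machines
    ```

    Note that this crude detector misses, e.g., machines whose head drifts right forever over blank tape (they come back "unknown"), which is one hint of why the real result is sensitive to the particular computational model.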
  137. asdf Says:

    Interesting. Does that result actually look correct? It is kind of surprising. Calude and Jurgensen have a paper of a few years ago showing that (maybe it’s a little more general than this, I’ll have to check) as arithmetic sentences over (N,+,*,

  138. asdf Says:

    Interesting. Does that result actually look correct? It is kind of surprising. Calude and Jurgensen have a paper from a few years ago showing (maybe it’s a little more general than this, I’ll have to check) that as arithmetic sentences over (N,+,*,<) get longer and longer, the probability of their being decidable (provable or disprovable) in first-order Peano arithmetic approaches zero. The argument (also used a lot by Chaitin) is basically that the set of statements decidable from an axiom set has Kolmogorov complexity (I guess I’m supposed to say Chaitin complexity) no higher than the complexity of the axiom set itself. So as you admit longer and longer propositions, a lot of them have to have higher complexity than that, and so can’t be in the decidable set. I would have thought something similar applied to decidability of the halting problem for Turing machines.
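
    The counting step of that argument — there are fewer than 2^c descriptions of length below c, so most long strings are incompressible — can be checked with elementary arithmetic (an illustrative bound only; no particular universal machine is assumed):

    ```python
    def max_compressible_fraction(n, k):
        """Upper bound on the fraction of n-bit strings that have a
        description shorter than n - k bits: there are at most
        2**(n-k) - 1 such descriptions (summing over all lengths
        0 .. n-k-1), versus 2**n strings of length n."""
        return (2 ** (n - k) - 1) / 2 ** n

    # The bound shrinks exponentially in k, independent of n:
    for k in (1, 5, 10):
        print(k, max_compressible_fraction(64, k))
    ```

    So, for a fixed axiom set of bounded complexity, the fraction of long propositions it can decide must shrink, which is the heart of the Calude–Jurgensen/Chaitin-style argument above.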

  139. cody Says:

    do you think anyone has ever tried to use Turing’s proof of the undecidability of the halting problem as an excuse for running a stop sign or red light? anyone?

  140. Algirdas Says:

    Scott,

    you say CGI animation is a solved problem. And yet… it is interesting that the vast majority of CGI animation films, at least the good ones, like Pixar fare, are about cartoonish characters that are not human. Even humans, when present, are decidedly cartoonish. This is smart – CGI animation has not yet reached the film-realistic level (and arguably it does need to be to make good films). By film-realistic I mean looking like real humans acting. One might imagine a test for CGI animation that parallels the Turing test for machine intelligence: can you make a 2-hour film in which the humans are entirely simulated by CGI, and which would fool audiences into thinking they are human actors? You can imagine a lesser test – a film mainly focusing on body motion – and a greater test – a film focusing on facial expressions, conveying emotion. We are optimized by millennia of evolution to be good at both tasks: appreciating animal bodies in motion, and perceiving the minutiae of facial expressions. So, assuming the film audience are not doofuses (oh, and assuming you are simulating somebody with a bit more depth than Schwarzenegger) – how long until CGI would pass such a test? Care to speculate?

  141. Algirdas Says:

    I should proofread before posting. When I wrote “arguably it does need to be to make good films”, I meant to write “arguably it does not need to be film-realistic to make good films”.

  142. Abel Says:

    I have just read the paper you linked, and although, as the authors say, the answer is “unsatisfactory, because one doesn’t like results in computability theory to be sensitive to the choice of computational model” (and indeed the result rests on exactly that sensitivity), it is still much better than nothing.

    Thanks to Matthew P. Johnson and you for answering my question.

  143. Alex Says:

    Just in case Scott is still answering questions:
    In the paper “Classical Spin Models and the Quantum-Stabilizer Formalism,” M. Van den Nest, W. Dür, and H. J. Briegel, Phys. Rev. Lett. 98, 117207 (2007), the authors mention at one point: “For example, nonplanar graphs of large tree-width would typically give rise to states where MQC (measurement-based quantum computing) is NP-hard to simulate classically.” If simulating MQC classically is NP-hard, doesn’t this mean that a physical implementation of MQC can solve NP-complete problems?

  144. Scott Says:

    Alex, I assume they were just careless in their choice of words. They probably mean that simulating MQC on such graphs in the “natural” or “obvious” way gives rise to an NP-complete problem, not that NP is in BQP!