PHYS771 Lecture 4: Minds and Machines

Scott Aaronson


Today we're going to launch into something I know you've all been waiting for: a philosophical foodfight about minds, machines, and intelligence!


First, though, let's finish talking about computability. One concept we'll need again and again in this class is that of an oracle. The idea is a pretty obvious one: we assume we have a "black box" or "oracle" that immediately solves some hard computational problem, and then see what the consequences are! (When I was a freshman, I once started talking to my professor about the consequences of a hypothetical "NP-completeness fairy": a being that would instantly tell you whether a given Boolean formula was satisfiable or not. The professor had to correct me: they're not called "fairies"; they're called "oracles." Much more professional!)

Oracles were apparently first studied by Turing, in his 1938 PhD thesis. Obviously, anyone who could write a whole thesis about these fictitious entities would have to be an extremely pure theorist, someone who wouldn't be caught dead doing anything relevant. This was certainly true in Turing's case -- indeed he spent the years after his PhD, from 1939 to 1943, studying certain abstruse symmetry transformations on a 26-letter alphabet.

Anyway, we say that problem A is Turing-reducible to problem B, if A is solvable by a Turing machine given an oracle for B. In other words, "A is no harder than B": if we had a hypothetical device to solve B, then we could also solve A. Two problems are Turing-equivalent if each is Turing-reducible to the other. So for example, the problem of whether a statement can be proved from the axioms of set theory is Turing-equivalent to the halting problem: if you can solve one, you can solve the other.
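To make the reduction concrete, here's a minimal sketch in Python-flavored pseudocode. Everything in it is hypothetical: `halting_oracle` is the assumed black box, and the proof-searching program it gets asked about is only described, not implemented.

```python
# Hypothetical sketch: reducing "is this statement provable from the axioms of
# set theory?" to the halting problem. halting_oracle is the assumed black box;
# the searcher program it's asked about is only described here, not implemented.

def make_proof_searcher(statement):
    """Return the source of a program that enumerates all finite strings,
    checks each one as a candidate proof of `statement`, and halts as soon
    as it finds a valid one (so it halts iff `statement` is provable)."""
    return f"""
for candidate in enumerate_all_strings():
    if is_valid_proof(candidate, {statement!r}):
        break   # found a proof: halt
"""

def provable(statement, halting_oracle):
    """The Turing reduction: one oracle query settles provability."""
    return halting_oracle(make_proof_searcher(statement))
```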

Now, a Turing-degree is the set of all problems that are Turing-equivalent to a given problem. What are some examples of Turing-degrees? Well, we've already seen two examples: (1) the set of computable problems, and (2) the set of problems that are Turing-equivalent to the halting problem. Saying that these Turing-degrees aren't equal is just another way of saying that the halting problem isn't solvable.

Are there any Turing-degrees above these two? In other words, is there any problem even harder than the halting problem? Well, consider the following "super halting problem": given a Turing machine with an oracle for the halting problem, decide if it halts! Can we prove that this super halting problem is unsolvable, even given an oracle for the ordinary halting problem? Yes, we can! We simply take Turing's original proof that the halting problem is unsolvable, and "shift everything up a level" by giving all the machines an oracle for the halting problem. Everything in the proof goes through as before, a fact we express by saying that the proof "relativizes."
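In code form, the relativized diagonalization looks roughly like this; it's only a sketch, with `super_halt` standing in for the solver we're assuming (for contradiction) exists.

```python
# Sketch of the relativized diagonal argument. Suppose, for contradiction, that
# super_halt(program, input) decides whether `program` -- a machine equipped
# with an oracle for the ordinary halting problem -- halts on `input`.
# Then we could build the following machine (itself allowed the same oracle):

def diagonal(program_source):
    if super_halt(program_source, program_source):
        while True:     # if the program halts on its own code, loop forever
            pass
    else:
        return          # if it runs forever, halt immediately

# Running diagonal on its own source code now halts if and only if it doesn't
# halt -- Turing's original contradiction, shifted up a level. So no super_halt
# exists, even given an oracle for the ordinary halting problem.
```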

Here's a subtler question: is there any problem of intermediate difficulty between the computable problems and the halting problem? This question was first asked by Emil Post in 1944, and was finally answered in 1956, by Richard Friedberg in the US and (independently) A. A. Muchnik in the USSR. The answer is yes. Indeed, Friedberg and Muchnik actually proved a stronger result: that there are two problems A and B, both of which are solvable given an oracle for the halting problem, but neither of which is solvable given an oracle for the other. These problems are constructed via an infinite process whose purpose is to kill off every Turing machine that might reduce A to B or B to A. Unfortunately, the resulting problems are extremely contrived; they don't look like anything that might arise in practice. And even today, we don't have a single example of a "natural" problem with intermediate Turing degree.

Since Friedberg and Muchnik's breakthrough, the structure of the Turing degrees has been studied in more detail than you can possibly imagine. Here's one of the simplest questions: if two problems A and B are both reducible to the halting problem, then must there be a problem C that's reducible to A and B, such that any problem that's reducible to both A and B is also reducible to C? Hey, whatever floats your boat! But this is the point where some of us say, maybe we should move on to the next topic... (Incidentally, the answer to the question is no.)


Alright, the main philosophical idea underlying computability is what's called the Church-Turing Thesis. It's named after Turing and his adviser Alonzo Church, even though what they themselves believed about "their" thesis is open to dispute! Basically, the thesis is that any function "naturally to be regarded as computable" is computable by a Turing machine. Or in other words, any "reasonable" model of computation will give you either the same set of computable functions as the Turing machine model, or else a proper subset.

Already there's an obvious question: what sort of claim is this? Is it an empirical claim, about which functions can be computed in physical reality? Is it a definitional claim, about the meaning of the word "computable"? Is it a little of both?

Well, whatever it is, the Church-Turing Thesis can only be regarded as extremely successful, as theses go. As you know -- and as we'll discuss later -- quantum computing presents a serious challenge to the so-called "Extended" Church-Turing Thesis: that any function naturally to be regarded as efficiently computable is efficiently computable by a Turing machine. But in my view, so far there hasn't been any serious challenge to the original Church-Turing Thesis -- neither as a claim about physical reality, nor as a definition of "computable."

There have been plenty of non-serious challenges to the Church-Turing Thesis. In fact there are whole conferences and journals devoted to these challenges -- google "hypercomputation." I've read some of this stuff, and it's mostly along the lines of, well, suppose you could do the first step of a computation in one second, the next step in a half second, the next step in a quarter second, the next step in an eighth second, and so on. Then in two seconds you'll have done an infinite amount of computation! Well, as stated it sounds a bit silly, so maybe sex it up by throwing in a black hole or something. How could the hidebound Turing reactionaries possibly object? (It reminds me of the joke about the supercomputer that was so fast, it could do an infinite loop in 2.5 seconds.)

We should immediately be skeptical that, if Nature was going to give us these vast computational powers, she would do so in a way that's so mundane, so uninteresting. Without making us sweat or anything. But admittedly, to really see why the hypercomputing proposals fail, you need the entropy bounds of Bekenstein, Bousso, and others -- which are among the few things the physicists think they know about quantum gravity, and which hopefully we'll say something about later in the course. So the Church-Turing Thesis -- even its original, non-extended version -- really is connected to some of the deepest questions in physics. But in my opinion, neither quantum computing, nor analog computing, nor anything else has mounted a serious challenge to that thesis in the seventy years since it was formulated.


If we interpret the Church-Turing Thesis as a claim about physical reality, then it should encompass everything in that reality, including the goopy neural nets between your respective ears. This leads us, of course, straight into the cratered intellectual battlefield that I promised to lead you into.

As a historical remark, it's interesting that the possibility of thinking machines isn't something that occurred to people gradually, after they'd already been using computers for decades. Instead it occurred to them immediately, the minute they started talking about computers themselves. People like Leibniz and Babbage and Lovelace and Turing and von Neumann understood from the beginning that a computer wouldn't just be another steam engine or toaster -- that, because of the property of universality (whether or not they called it that), it's difficult even to talk about computers without also talking about ourselves.


So, I asked you to read Turing's second famous paper, Computing Machinery and Intelligence. Reactions?

What's the main idea of this paper? As I read it, it's a plea against meat chauvinism. Sure, Turing makes some scientific arguments, some mathematical arguments, some epistemological arguments. But beneath everything else is a moral argument. Namely: if a computer interacted with us in a way that was indistinguishable from a human, then of course we could say the computer wasn't "really" thinking, that it was just a simulation. But on the same grounds, we could also say that other people aren't really thinking, that they merely act as if they're thinking. So what is it that entitles us to go through such intellectual acrobatics in the one case but not the other?

If you'll allow me to editorialize (as if I ever do otherwise...), this moral question, this question of double standards, is really where Searle, Penrose, and every other "strong AI skeptic" comes up empty for me. One can indeed give weighty and compelling arguments against the possibility of thinking machines. The only problem with these arguments is that they're also arguments against the possibility of thinking brains!

So for example: one popular argument is that, if a computer appears to be intelligent, that's merely a reflection of the intelligence of the humans who programmed it. But what if humans' intelligence is just a reflection of the billion-year evolutionary process that gave rise to it? What frustrates me every time I read the AI skeptics is their failure to consider these parallels honestly. The "qualia" and "aboutness" of other people is simply taken for granted. It's only the qualia of machines that's ever in question.

But perhaps a skeptic could retort: I believe other people think because I know I think, and other people look sort of similar to me -- they've also got five fingers, hair in their armpits, etc. But a robot looks different -- it's made of metal, it's got an antenna, it lumbers across the room, etc. So even if the robot acts like it's thinking, who knows? But if I accept this argument, why not go further? Why can't I say, I accept that white people think, but those blacks and Asians, who knows about them? They look too dissimilar from me.


In my view, one can divide everything that's been said about artificial intelligence into two categories: the 70% that's somewhere in Turing's paper from 1950, and the 30% that's emerged from a half-century of research since then.

So, after 56 years, there are some things we can say that would've surprised Alan Turing. What are those things? Well, one of them is how little progress has been made, compared to what he expected! Do you remember, Turing made a falsifiable prediction?

How well has his prediction fared? First let's note that the prediction about computers themselves was damn good. Turing predicted that in 50 years' time (i.e., 2000), we'd be programming computers with a storage capacity of about 10^9 (i.e., one gig).

But what about programming the computers to pass the imitation game? How well has Turing's prediction fared there?

Well, some of you might have heard of a program called ELIZA, written by Joseph Weizenbaum in 1966. This program simulates a psychotherapist who keeps spitting back whatever you said. The amazing thing Weizenbaum found is that many people will spill their hearts out to this program! And sometimes, if you then tell them they were talking to a program (and an extremely simple one at that), they won't believe you.
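To give the flavor (this is my own toy sketch, not Weizenbaum's actual program), an ELIZA-style responder needs little more than a handful of patterns and some pronoun swapping:

```python
import re

# A toy ELIZA-style responder (a sketch in the spirit of the original, not
# Weizenbaum's code): match a pattern, swap the pronouns, reflect it back.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)",        "Please go on."),
]

def reflect(phrase):
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my thesis"))
# -> "How long have you been worried about your thesis?"
```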

A few years ago, someone had the brilliant idea to take the original ELIZA program and let it loose in AOL chat rooms to see what happened. If you go to fury.com/aoliza, you can see some hilarious (fortunately anonymized) examples of people trying to seduce the program, telling it about their infidelities, etc. Here's one of my favorite exchanges, from a guy who (before moving on to amorous solicitations) had told the program that he planned to finish his B of A and then "move into corporate business alliance with Starbucks":

So this is about the state of the art in terms of man-machine repartee. It seems one actually needs to revise the Turing Test, to say that, if we want to verify intelligence in a computer, then we need some minimal level of intelligence in the human interrogator.


Despite what I said about the Turing Test, there have been some dramatic successes of AI. We all know about Kasparov and Deep Blue. Maybe less well-known is that, in 1996, a program called Otter was used to solve a 60-year-old open problem in algebra called the Robbins Conjecture, which Tarski and other famous mathematicians had worked on. (Apparently, for decades Tarski would give the problem to his best students. Then, eventually, he started giving it to his worst students...) The problem is easy to state: given the three axioms

    A or (B or C) = (A or B) or C,
    A or B = B or A,
    Not(Not(A or B) or Not(A or Not(B))) = A,

can one derive as a consequence that Not(Not(A)) = A?
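As a quick aside of my own (not part of the original problem), it's easy to check by brute force that the familiar two-element Boolean algebra satisfies all three axioms; the hard direction, the one the computer settled, is deriving consequences like Not(Not(A)) = A from them.

```python
from itertools import product

# Quick sanity check (my aside): the two-element Boolean algebra, with the
# usual OR and NOT, satisfies all three Robbins axioms listed above.

def OR(a, b):  return a | b
def NOT(a):    return 1 - a

for A, B, C in product((0, 1), repeat=3):
    assert OR(A, OR(B, C)) == OR(OR(A, B), C)                # associativity
    assert OR(A, B) == OR(B, A)                              # commutativity
    assert NOT(OR(NOT(OR(A, B)), NOT(OR(A, NOT(B))))) == A   # Robbins equation

print("All three axioms hold in {0, 1}.")
```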

Let me stress that this was not a case like Appel and Haken's proof of the Four-Color Theorem, where the computer's role was basically to check thousands of cases. In this case, the proof was 17 lines long. A human could check the proof by hand, and say, yeah, I could've come up with that. (In principle!)

What else? Arguably there's a pretty sophisticated AI system that almost all of you used this morning and will use many more times today. What is it? Right, Google.

You can look at any of these examples -- Deep Blue, the Robbins conjecture, Google -- and say, that's not really AI. That's just massive search, helped along by clever programming. Now, this kind of talk drives AI researchers up a wall. They say: if you told someone in the sixties that in 30 years we'd be able to beat the world grandmaster at chess, and asked if that would count as AI, they'd say, of course it's AI! But now that we know how to do it, now it's no longer AI. Now it's just search. (Philosophers have a similar complaint: as soon as a branch of philosophy leads to anything concrete, it's no longer called philosophy! It's called math or science.)


There's another thing we appreciate now that people in Turing's time didn't really appreciate. This is that, in trying to write programs to simulate human intelligence, we're competing against a billion years of evolution. And that's damn hard. One counterintuitive consequence is that it's much easier to program a computer to beat Garry Kasparov at chess than to program a computer to recognize faces under varied lighting conditions. Often the hardest tasks for AI are the ones that are trivial for a 5-year-old -- since those are the ones that are so hardwired by evolution that we don't even think about them.


In the last fifty years, have there been any new insights about the Turing Test itself? In my opinion, no. There has, on the other hand, been a non-insight, which is called Searle's Chinese Room. This is supposed to be an argument that even a computer that did pass the Turing Test wouldn't be intelligent. The way it goes is, let's say you don't speak Chinese. (Debbie isn't here today, so I think that's a safe assumption.) You sit in a room, and someone passes you paper slips through a hole in the wall with questions written in Chinese, and you're able to answer the questions (again in Chinese) just by consulting a rule book. In this case, you might be carrying out an intelligent Chinese conversation, yet by assumption, you don't understand a word of Chinese! Therefore symbol-manipulation can't produce understanding.

So, class, how might a strong AI proponent respond to this argument? Duh: you might not understand Chinese, but the rule book does! Or if you like, understanding Chinese is an emergent property of the system consisting of you and the rule book, in the same sense that understanding English is an emergent property of the neurons in your brain. Like many other thought experiments, the Chinese Room gets its mileage from a deceptive choice of imagery -- and more to the point, from ignoring computational complexity. We're invited to imagine someone pushing around slips of paper with zero understanding or insight -- much like the doofus freshmen who write (a+b)^2 = a^2 + b^2 on their math tests. But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time? If each page of the rule book corresponded to one neuron of (say) Debbie's brain, then probably we'd be talking about a "rule book" at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it's not so hard to imagine that this enormous Chinese-speaking entity -- this dian nao -- that we've brought into being might have something we'd be prepared to call understanding or insight.
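Just to make the scale vivid, here's the sort of back-of-envelope arithmetic I have in mind; the numbers (neuron count, page thickness) are rough order-of-magnitude assumptions, nothing more.

```python
# Rough back-of-envelope (all numbers are order-of-magnitude assumptions):
# one rule-book page per neuron, then see how thick the book gets.

neurons_in_brain = 1e11      # ~10^11 neurons, a standard rough figure
page_thickness_m = 1e-4      # ~0.1 mm per sheet of paper
earth_diameter_m = 1.27e7    # ~12,700 km

book_thickness_m = neurons_in_brain * page_thickness_m
print(f"Rule book thickness: {book_thickness_m:.1e} m")   # ~1.0e7 m
print(f"That's about {book_thickness_m / earth_diameter_m:.1f} Earth diameters.")
```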


Of course, everyone who talks about this stuff is really tiptoeing around the hoary question of consciousness. See, consciousness has this weird dual property: on the one hand, it's arguably the most mysterious thing we know about, and on the other hand, not only are we directly aware of it, but in some sense it's the only thing we're directly aware of. You know, cogito ergo sum and all that. So to give an example, I might be mistaken about Richard's shirt being blue -- I might be hallucinating or whatever -- but I really can't be mistaken about my perceiving it as blue. (Or if I could, then we get an infinite regress.)

Now, is there anything else that also produces the feeling of absolute certainty? Right -- math! Incidentally, I think this similarity between math and subjective experience might go a long way toward explaining mathematicians' "quasi-mystical" tendencies. (I can already hear Greg Kuperberg wincing. Wince, Greg, wince!) This is a good thing for physicists to understand: when you're talking to a mathematician, you might not be talking to someone who fears the real world and who's therefore retreated into intellectual masturbation. You might be talking to someone for whom the real world was never especially real to begin with! I mean, to come back to something we mentioned earlier: why did many mathematicians look askance at the computer proof of the Four-Color Theorem? Sure, the computer might have made a mistake, but humans make plenty of mistakes too!

What it boils down to, I think, is that there is a sense in which the Four-Color Theorem has been proved, and there's another sense in which many mathematicians understand proof, and those two senses aren't the same. For many mathematicians, a statement isn't proved when a physical process (which might be a classical computation, a quantum computation, an interactive protocol, or something else) terminates saying that it's been proved -- however good the reasons might be to believe that physical process is reliable. Rather, the statement is proved when they (the mathematicians) feel that their minds can directly perceive its truth.

Of course, it's hard to discuss these things directly. But what I'm trying to point out is that many people's "anti-robot animus" is probably a combination of two ingredients:

  1. the directly-experienced certainty that they're conscious -- that they perceive colors, sounds, positive integers, etc., regardless of whether anyone else does, and
  2. the belief that if they were just a computation, then they could not be conscious in this way.
For example, I think Penrose's objections to strong AI derive from these two ingredients. I think his arguments about Gödel's Theorem are window dressing added later.

For people who think this way (as even I do, at least in certain moods), granting consciousness to a robot seems strangely equivalent to denying that one is conscious oneself. Is there any respectable way out of this dilemma -- or in other words, any way out that doesn't rely on a meatist double standard, with one rule for ourselves and a different rule for robots?

My own favorite way out is one that's been advocated by the philosopher David Chalmers. Basically, what Chalmers proposes is a "philosophical NP-completeness reduction": a reduction of one mystery to another. He says that if computers someday pass the Turing Test, then we'll be compelled to regard them as conscious. And as for how they could be conscious, we'll understand that just as well and as poorly as we understand how a bundle of neurons could be conscious. Yes, it's mysterious, but the one mystery doesn't seem so different from the other.


Today's Puzzles


Answers to Homework

Recall that BB(n), or the "nth Busy Beaver number," is the largest number of steps that an n-state Turing machine can make on an initially blank tape before halting.

The first problem was to prove that BB(n) grows faster than any computable function. Did people get this one? Excellent!

Yeah, suppose there were a computable function f(n) such that f(n)>BB(n) for every n. Then given an n-state Turing machine M, we could first compute f(n), then simulate M for up to f(n) steps. If M hasn't halted by then, then we know it never will halt, since f(n) is greater than the maximum number of steps any n-state machine could make. But this gives us a way to solve the halting problem, which we already know is impossible. Therefore the function f doesn't exist.
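In code form, the contradiction is just this -- a sketch, where `f`, `num_states`, and `simulate` are stand-ins for the assumed computable bound, a state-counter, and a step-limited simulator:

```python
# Sketch of the contradiction: IF some computable f satisfied f(n) > BB(n) for
# every n, THEN this function would decide the halting problem, which is
# impossible. f, num_states, and simulate are assumed stand-ins, not real code.

def halts(machine):
    n = num_states(machine)
    budget = f(n)                                  # computable by assumption, > BB(n)
    result = simulate(machine, max_steps=budget)   # run for at most f(n) steps
    # Any n-state machine that halts does so within BB(n) < f(n) steps,
    # so "still running after f(n) steps" means "never halts".
    return result.halted
```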

So the BB(n) function grows really, really, really fast. (In case you're curious, here are the first few values, insofar as they've been computed by people with too much free time: BB(1)=1, BB(2)=6, BB(3)=21, BB(4)=107, BB(5)≥47,176,870.)

The second problem was whether

    S = 1/BB(1) + 1/BB(2) + 1/BB(3) + ...

is a computable real number. In other words, is there an algorithm that, given an integer k, outputs a rational number S' such that |S-S'| < 1/k?

People had more trouble with this one? Alright, let's see the answer. The answer is no -- it isn't computable. For suppose it were computable; then we'll give an algorithm to compute BB(n) itself, which we know is impossible.

Assume by induction that we've already computed BB(1), BB(2), ..., BB(n-1). Then consider the sum of the "higher-order terms":

    S_n = 1/BB(n) + 1/BB(n+1) + 1/BB(n+2) + ...

If S is computable, then S_n must be computable as well. But this means we can approximate S_n to within 1/2, 1/4, 1/8, and so on, until the interval in which we've bounded S_n no longer contains 0. When that happens, we have a positive lower bound on S_n, and hence an upper bound on 1/S_n. But since 1/BB(n+1), 1/BB(n+2), and so on are so much smaller than 1/BB(n), any upper bound on 1/S_n immediately yields an upper bound on BB(n) as well. But once we have an upper bound on BB(n), we can then compute BB(n) itself, by simply simulating all n-state Turing machines. So assuming we could compute S, we could also compute BB(n), which we already know is impossible. Therefore S is not computable.
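Schematically, the algorithm buried in this proof looks as follows; `approx_S` is the assumed approximation subroutine, and `max_steps_of_halting_machines` is a stand-in for "simulate every n-state machine for up to the bound and return the longest halting time."

```python
from fractions import Fraction

# Schematic version of the algorithm in the proof. approx_S(k) is the ASSUMED
# subroutine returning a rational within 1/k of S; max_steps_of_halting_machines
# is a stand-in for simulating every n-state machine up to the bound.

def busy_beaver(n, approx_S, known):           # known = [BB(1), ..., BB(n-1)]
    head = sum(Fraction(1, b) for b in known)  # the already-computed terms of S
    k = 2
    while True:
        # S_n = S - head, pinned down to within 1/k; `lower` is a lower bound:
        lower = approx_S(k) - head - Fraction(1, k)
        if lower > 0:                          # interval no longer contains 0
            # Per the argument above, BB(n) < 2/S_n once the later terms are
            # negligible, so a positive lower bound on S_n caps BB(n):
            bound = int(2 / lower) + 1
            return max_steps_of_halting_machines(n, bound)
        k *= 2                                 # tighten the approximation and retry
```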


[Discussion of this lecture on blog]

[← Previous lecture | Next lecture →]

[Return to PHYS771 home page]