Dude, it’s like you read my mind

Newcomb’s Problem, for those of you with social lives, is this. A superintelligent “Predictor” puts two opaque boxes on a table. The first contains either $1,000,000 or nothing, while the second contains $1,000. You have a choice: you can either open the first box or both boxes. Either way, you get to keep whatever you find.

But (duhhh…) there’s a catch: the Predictor has already predicted what you’ll do. If he predicted you’ll open both boxes, then he left the first box empty; if he predicted you’ll open the first box only, then he put $1,000,000 in the first box. Furthermore, the Predictor has played this game hundreds of times before, with you and other people, and has never once been wrong.

So what do you do? As Robert Nozick wrote, in a famous 1969 paper:

“To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.”

Actually, people confronted with Newcomb’s Problem tend to split into three camps: the one-boxers, the two-boxers, and the Wittgensteins.

The one-boxers figure they might as well trust the Predictor: after all, he’s never been wrong. According to the prediction, if you open the first box you’ll get $1,000,000, while if you open both you’ll only get $1,000. So it’s a no-brainer: you should open only the first box.

“But that’s stupid!” say the two-boxers. “By the time you’re making the choice, the $1,000,000 is either in the first box or it isn’t. Your choice can’t possibly change the past. And whatever you’d get by opening the first box, you’ll get $1,000 more by opening both. So obviously you should open both boxes.”

(Incidentally, don’t imagine you can wiggle out of this by basing your decision on a coin flip! For suppose the Predictor predicts you’ll open only the first box with probability p. Then he’ll put the $1,000,000 in that box with the same probability p. So your expected payoff is 1,000,000p² + 1,001,000p(1-p) + 1,000(1-p)² = 1,000,000p + 1,000(1-p), and you’re stuck with the same paradox as before.)
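
For anyone who wants to check that algebra mechanically, here is a minimal sketch using sympy (purely illustrative; the variable names are mine):

```python
# Sanity check of the coin-flip calculation: if you open only the first box
# with probability p, and the Predictor independently fills it with
# probability p, the expected payoff collapses to a linear function of p,
# maximized at p = 1.
from sympy import symbols, expand, simplify

p = symbols('p')

expected = (1_000_000 * p * p              # open one box, and the box is full
            + 1_001_000 * (1 - p) * p      # open both boxes, and the box is full
            + 0 * p * (1 - p)              # open one box, and the box is empty
            + 1_000 * (1 - p) * (1 - p))   # open both boxes, and the box is empty

assert simplify(expand(expected) - (1_000_000 * p + 1_000 * (1 - p))) == 0
print(expand(expected))  # 999000*p + 1000, increasing in p
```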

The Wittgensteins take a third, boring way out. “The whole setup is contradictory!” they say. “It’s like asking what happens if an irresistible force hits an immovable object. If the ‘Predictor’ actually existed, then you wouldn’t have free will, so you wouldn’t be making a choice to begin with. Your very choice implies that the Predictor can’t exist.”

I myself once belonged to the Wittgenstein camp. Recently, however, I came up with a new solution to Newcomb’s Problem — one that I don’t think has ever been discussed in the literature. (Please correct me if I’m wrong.) As I see it, my solution lets me be an intellectually fulfilled one-boxer: someone who can pocket the $1,000,000, yet still believe the future doesn’t affect the past. I was going to write up my solution for a philosophy journal, but what fun is that? Instead, I hereby offer it for the enlightenment and edification of Shtetl-Optimized readers.

We’ll start with a definition:

“You” are anything that suffices to predict your future behavior.

I know this definition seems circular, but it has an important consequence: that if some external entity could predict your future behavior as well as you could, then we’d have to regard that entity as “instantiating” another copy of you. In other words, just as a perfect simulation of multiplication is multiplication, I’m asserting that a perfect simulation of you is you.

Now imagine you’re standing in front of the boxes, agonizing over what to do. As the minutes pass, your mind wanders:

I wonder what the Predictor thinks I’ll decide? “Predictor”! What a pompous asshole. Thinks he knows me better than I do. He’s like that idiot counselor at Camp Kirkville — what was his name again? Andrew. I can still hear his patronizing voice: “You may not believe me now, but someday you’ll realize you were wrong to hide those candy bars under the bed. And I don’t care if you hate the cafeteria food! What about the other kids, who don’t have candy bars? Didn’t you ever think of them?” Well, you know what, Predictor? Let’s see how well you can track my thoughts. Opening only one box would be rather odd, wouldn’t you say? Camp Kirkville, Andrew, candy bar – that’s 27 letters in total. An odd number. So then that settles it: one box.

What’s my point? That reliably predicting whether you’ll take one or both boxes is “you-complete,” in the sense that anyone who can do it should be able to predict anything else about you as well. So by definition, the Predictor must be running a simulation of you so detailed that it’s literally a copy of you. But in that case, how can you possibly know whether you’re the “real” you, or a simulated version running inside the Predictor’s mind?
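
To make the “you-complete” claim concrete, here is a toy sketch (the boxes_to_open function and the memory dictionary below are invented purely for illustration): if you deliberately route your decision through an arbitrary fact about yourself, then anyone who reliably predicts the decision has, in effect, predicted a bit of that fact too.

```python
# Toy illustration of "you-completeness" (everything here is hypothetical):
# route the one-box/two-box decision through an arbitrary personal recollection,
# so predicting the decision entails predicting at least one bit of it.
def boxes_to_open(memory: dict, key: str) -> int:
    """Open only the first box iff the chosen memory has an odd letter count."""
    recollection = memory[key]
    letters = sum(ch.isalpha() for ch in recollection)
    return 1 if letters % 2 == 1 else 2

# "Camp Kirkville, Andrew, candy bar" has 27 letters -- odd, so: one box.
me = {"camp story": "Camp Kirkville, Andrew, candy bar"}
print(boxes_to_open(me, "camp story"))  # 1

# A Predictor that always gets this right must, for any memory you might
# consult, be able to recover that memory's parity bit -- which is why the
# argument treats reliable prediction as tantamount to simulating you.
```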

“But that’s silly!” you interject. “Here, I’ll prove I’m the ‘real’ me by pinching myself!” But of course, your simulated doppelganger says and does exactly the same thing. Let’s face it: the two of you are like IP and PSPACE, water and H2O, Mark Twain and Samuel Clemens.

If you accept that, then the optimal strategy is clear: open the first box only. Sure, you could make an extra $1,000 by opening both boxes if you didn’t lead a double life inside the Predictor’s head, but you do. That, and not “backwards-in-time causation,” is what explains how your decision can affect whether or not there’s $1,000,000 in the first box.

An important point about my solution is that it completely sidesteps the “mystery” of free will and determinism, in much the same way that an NP-completeness proof sidesteps the mystery of P versus NP. What I mean is that, while it is mysterious how your “free will” could influence the output of the Predictor’s simulation, it doesn’t seem more mysterious than how your free will could influence the output of your own brain! It’s six of one, half a dozen of the other. Or at least, that’s what the neural firings in my own brain have inexorably led me to believe.

44 Responses to “Dude, it’s like you read my mind”

  1. Wolfgang Says:

    Pretty clever, but your ‘solution’ hinges on the assumption that predicting your behavior requires complete emulation. I am not sure about that.

    By the way, Cosma Shalizi had a pretty smart answer
    http://cscs.umich.edu/~crshalizi/weblog/337.html

  2. Anonymous Says:

    But isn’t the whole question of “free will” established since quantum mechanics came into the picture? Randomness is at the root of nature and I feel certain that even if I could make an *exact* copy of Scott Aaronson that copy would make different choices than you would.

  3. Wolfgang Says:

    anonymous,

    this is of course an obvious question.
    But then the strange thing is that a pure thought experiment would tell us that brains are unpredictable (not classical). How can a pure thought experiment tell us anything about a collection of molecules?

  4. Scott Says:

    Anonymous: No, even under quantum mechanics, if the Predictor knew the quantum state of your brain then Newcomb’s Paradox would arise as before. This is because the Predictor only needs to be able to calculate the probabilities of your making various choices (see my parenthetical comment for a proof of that). Of course, you might argue that the Predictor couldn’t learn the state of your brain without violating the no-cloning theorem… 🙂

  5. Anonymous Says:

    I am not sure that I agree with the idea that a “pure thought experiment” would tell you anything about the physical world. It might. It might not.

  6. Anonymous Says:

    Certainly original. But there are issues. If you take the environment as “advice,” I will argue that advice will be different even if the machine is cloned. So maybe you can argue that there cannot be a predictor.

    In fact, I am convinced that there cannot be a predictor. But how does one formalize it in the physics_but_not_formal_math framework you are advocating?

    By the way, what is you-complete? are you talking of

  7. Scott Says:

    Wolfgang: “your ‘solution’ hinges on the assumption that predicting your behavior requires complete emulation.”

    I’d put it differently: I’m defining a complete emulation of you to be anything that can predict your behavior! 🙂

    P.S. Thanks for the link to Cosma’s post. I confess that to me, saying that Newcomb’s Paradox demonstrates a “limitation in our ideas about rational decision-making” has always seemed like a cop-out. It’s like “solving” the black hole information paradox by saying that it demonstrates a limitation in our ideas about physics … well, duh! 🙂 The hard part is to explain where the principles of decision theory break down, and what to replace them with.

  8. Scott Says:

    Anonymous: “If you take the environment as ‘advice,’ I will argue that advice will be different even if the machine is cloned. So maybe you can argue that there cannot be a predictor.”

    That’s certainly a defensible position. It’s just that it’s no fun to constantly reject the premises of thought experiments — I’d rather pull out all the stops to imagine how they might be true! In this case, that means postulating a Predictor who can simulate not only you, but your whole environment as well.

  9. Scott Says:

    “By the way, what is you-complete?”

    As hard as predicting any other aspect of your behavior.

  10. Wolfgang Says:

    >I’d put it differently: I’m defining a complete emulation of you to be anything that can predict your behavior! 🙂

    But the predictor does not have to forecast all of your behavior (inner feelings etc.) She only has to forecast which box you will pick.

    It might be that only a small part of your brain needs to be predicted.

  11. Scott Says:

    Wolfgang: “But the predictor does not have to forecast all of your behavior (inner feelings etc.) She only has to forecast which box you will pick.”

    Right, that’s exactly what my argument was about! I’m saying that you could, if you chose, base your decision of whether to open one or two boxes on whatever inner feelings, childhood memories, etc. you cared about. Therefore anyone who can predict the one thing should also be able to predict all the rest. It’s similar to how, if you could solve the halting problem, then you could also predict whether a Turing machine will ever enter a particular state.

  12. Anonymous Says:

    but it seems to me that if you do flip a coin to make your decision, the predictor cannot with 100% accuracy predict what the outcome will be, since it itself is merely using the probability that you will pick some subset of the boxes.

    so you could defeat it?

  13. Scott Says:

    “but it seems to me that if you do flip a coin to make your decision, the predictor cannot with 100% accuracy predict what the outcome will be…”

    Right, it can’t predict with 100% accuracy, but it doesn’t need to. What matters is that, if you open one box with probability p, then the Predictor can make your expected payoff equal to 1,000,000p + 1,000(1-p). This function is maximized when p=1, which leads to the same “paradoxical” conclusion as in the original Newcomb Problem — namely, that your best strategy is to open only one box.

  14. Anonymous Says:

    Hi Scott,

    Very cute solution in my opinion! Perhaps in CS terms we’re in this situation:

    Your free will is your choice of what (possibly quantum, probabilistic) Boolean circuit to run, but then you give the circuit to the predictor (i.e., the free will is that you can choose yourself but then need to live with it).
    Now, it’s clear you want to give a “one-boxer” circuit to the predictor.

    –Boaz

  15. Anonymous Says:

    I’m not at all convinced that the ability to predict one aspect of your behavior will mean the ability to predict the rest of it (though this sounds a bit like The Scott Test for Scott completeness…). Obviously this isn’t a computational constraint on possible functions, as not everything can be reduced, to the best of my knowledge, to “the box opening problem”.
    Can’t we just shake the box… Please?
    The point is that it’s more than regular decision making breaking down here. Faced with a being that showed me it could succeed with 100 test rounds of this game, I’d probably take both boxes to a psychiatric hospital and commit myself… Or at least think this “super powerful being” is a good stage magician. Facing this type of situation is so far from the understanding I’ve built of this world based on my experiences since someone pulled me out of a womb and proceeded (within a few days) to cut off the edge of my penis that I’ve no way to handle it. It’s not exactly logic breaking down here, it’s the ability to translate the world into logical terms.
    If you consider yourself a deterministic machine (which you may well be), you’re not choosing anything (that is, given M, and a machine M’ that _always_ predicts M’s output, M doesn’t really have free choice anyway).
    Gilad.

  16. Bram Says:

    That’s a fairly good exposition, but it still boils down to pointing out that if we assume that the predictor can actually predict then we should just open one box. The two-boxers are basically arguing that the predictor couldn’t possibly do that, so we should call his bluff.

    Unfortunately the two-box theory is quite common. The ‘this is too important a subject to think about probability’ idea is a common (anti-)intellectual thread in the United States, particularly when it comes to the chances of terrorist attacks.

  17. Scott Says:

    Boaz: Exactly! That’s a very nice way to put it.

  18. Wolfgang Says:

    Bram,

    > The two-boxers are basically arguing that the predictor couldn’t possibly do that,

    No, not really.
    The two-boxers argue that the predictor has done his calculation and now the money is there or not. In both cases it is now better to open both boxes.

  19. Scott Says:

    Gilad: LOL! On the other hand, I’m sure a lot of unlikely things have happened to both of us since our foreskins were lopped off.

  20. Cheshire Cat Says:

    Scott, I think your “solution” is interesting as a model of how a Predictor can be powerful enough to induce a one-box decision without contradicting the laws of physics (in an obvious way). But in response to Wolfgang’s posts, you seem to be claiming something stronger – that a Predictor needs to be that powerful to induce a one-box solution. This sounds much more dubious to me… It’s unreasonable to assume that one can “choose” to base the decision on any aspects of one’s experience whatsoever – after all, people are computationally bounded, even if Predictors are not.

  21. Anonymous Says:

    Scott, your argument seems to mean that a more likely conclusion of someone who sees a predictor succeed with 100% accuracy is not that it can completely predict your thoughts but that there is a problem with physical commitment. In a simulated world, after all, physical commitment is impossible (the simulator can always wait until you make your choice and then decide what the contents of the boxes will be).

    If the existence of a Predictor means you’re just as likely to be living in a simulated world, it seems a simpler conclusion is that you’re living in a world without commitment.

  22. Scott Says:

    cheshire cat: You raise a great question — how powerful does the Predictor have to be? (Sort of like asking about the power of the prover in interactive proof systems. 🙂 ) Let me clarify: the Predictor has to be just powerful enough to predict how many boxes you’ll open, and (as a consequence) any other aspect of you or your environment to which that choice might be sensitive. In my account, it follows that the Predictor must bring into being another instantiation of you, at least at the specific moment when you’re making the choice! In other words, whenever you’re thinking about something relevant to the boxes, you ought to be unsure whether those thoughts are yours or the Predictor’s. I admit this gets metaphysically weird, but I think that’s inherent in the problem!

  23. Scott Says:

    Anonymous: “In a simulated world, after all, physical commitment is impossible (the simulator can always wait until you make your choice and then decide what the contents of the boxes will be).”

    Man, now it’s getting interesting! I think you’re making a subtle category mistake. It’s true that you don’t know whether you’re living in a simulated world. But even if you are, your goal (by assumption) is still to maximize the expected earnings of the “you” in the real world! And while I never said so explicitly, it’s clearly fair to assume that in the real world, boxes obey standard physical laws. It follows, then, that the boxes in the simulated world must obey the same laws, at least insofar as the simulated you is concerned!

  24. Cheshire Cat Says:

    Right, the Predictor should be able to tell whether you’ll have quark spaetzle for dinner, but not necessarily whether you’ll end up with an odd or even number of research papers…

    The ambiguity about the power of the predictor is what makes different solutions both reasonable and unreasonable at the same time. Any predictor with normal resource bounds would induce a two-box solution; yet one can conceive of a predictor powerful enough to induce a one-box solution. The premise of the thought experiment encourages us to stretch our imaginations, yet we wish to salvage our intuitions. No wonder the Wittgensteinians have it so good, they can get all metaphysical… By talking about the power of the predictor, you try (as a good Wittgensteinian) to demystify the paradox.

    And now I should stop trying to simulate you, and get some research done…

  25. Anonymous Says:

    Scott, my comment about the simulated world was meant to provide a different justification for the one-box choice:

    Instead of basing my choice on my belief in the fact that perfect prediction is possible, I base it on the belief that commitment is impossible (i.e., the predictor is a “wizard” who can change the contents of the boxes after I make my choice — or like another anonymous said, that he is a very good stage magician).

    By Occam’s razor I should prefer the simplest explanation that explains the evidence, and the “impossibility of commitment” belief is “simpler” than believing I can be completely simulated (since then I could be living in a simulated world in which commitment is impossible).

  26. Anonymous Says:

    An example might be enlightening.

    I choose a smart inhabitant of a two-dimensional plane, the distinguished Mr. Dot. In his proximity I draw two squares: the first with a marked point A and the second with a marked point B on their edges. Then I’ll let Mr. Dot either erase point A and enter the first square, or erase both A and B and enter both squares. In the first square he might count either 1,000,000 points or none, and in the second, for sure, 1,000 points. Eventually I claim that I can predict with probability 1 what his choice will be: if I predict that he opens the first box then I put 1,000,000 points in the first box, otherwise none. Now Mr. Dot is in the dilemma of Newcomb’s chooser.

    However, as I am living in three dimensions, I’ll do a nasty trick. I’ll mold the plane so that point B will intersect the interior of the first square 1,000,000 times. When he erases point B, the first square will look empty to Mr. Dot.

    All people trying to solve this kind of puzzle are in the position of Mr. Dot. They can build fancy theories but never get the truth.

  27. Bram Says:

    One time on Animaniacs, Plucky Duck didn’t feel like doing his homework, so he got the bright idea that after the homework was due, he’d build a time machine, go back in time, do the homework, then give it to himself to hand in. Sure enough, Plucky from the future appears right then. Plucky from the present then goes off and fools around for a while, and comes back. Plucky from the future hands him a sheet of paper, hops in the time machine, and disappears. Plucky from the present excitedly looks at the paper, expecting it to be completed homework, and is instead greeted with the message ‘ha ha! you’ll just have to do the homework yourself.’

  28. optionsScalper Says:

    Scott,

    1. Does The Predictor have the benefit of an oracle (and have a cage in the zoo)?

    2. Does the emulation of you to make a you-complete clone provide sufficient conditions for the reproduction of the test? In particular, if environmental conditions were presented (as a random process), and these conditions were considered by you and the you-complete clone as part of the decision making process, it would seem that the emulation of all exact environmental conditions would be needed as well. Or, I suppose, if I read your definition, the emulation has the capability to perceive all aspects of inputs (ignoring that Heisenberg guy) to the you-state-machine from any location and thus deliver the predicted state.

    3. Is bisimulation an appropriate topic when discussing the clone and the capabilities of The Predictor? This doesn’t follow the emulation thoughts, but I’m thinking about the clone scenario and my previous point.

    4. Would someone who has taken a vow of poverty or otherwise has a demonstrable lack of desire for money view the test differently? A kind of modal or dynamic logic version with an ambivalent perspective might be used as opposed to a minimization of wealth outcome.

    5. Was Wittgenstein the type of person who would bring his own bags for carrying groceries to the grocery store and when presented with the query, “Paper or Plastic?” had the benefit of declaring “neither”?

    6. Dare I ask if The Predictor is also an “Intelligent Designer”? It would seem they have similar capabilities.

    Regards,

    —O

    p.s. Sorry, I had to throw in a little humor. The toughest part is probably determining which of the above I actually find humorous.

  29. Robin Hanson Says:

    Once you are there at the point of choice you are better off opening both boxes. But just before the box-filler gets his last data on you and your tendencies, you are better off being the sort of person who would later only open the one box. So this is just an example of that game theory standard: time-inconsistency of choice.

  30. Wolfgang Says:

    Scott,

    just one more thought.
    Let’s assume you publish your solution (and it is very smart!) in a widely circulated paper and it (along with you) becomes famous as the “Scott Aaronson solution to the Newcomb problem using the emulation argument…”

    In this case two remarkable things happen:

    i) the task becomes much easier for the predictor. It only has to check whether the person knows the famous solution and accepts it. (Just as I said, a complete emulation is perhaps not necessary.)

    ii) Since the predictor most likely will predict the 1-box solution (because everybody chooses it, based on the famous solution) it is really obvious that now the 2-box solution is the best choice 😎

  31. Scott Says:

    Robin: “Once you are there at the point of choice you are better off opening both boxes. But just before the box-filler gets his last data on you and your tendencies, you are better off being the sort of person who would later only open the one box.”

    Right! What makes the problem so strange is that those two moments in time are postulated to be identical.

  32. Scott Says:

    Wolfgang: Yeah, the Predictor business is brutal; you can’t rest on your laurels for an instant… 🙂

  33. Anonymous Says:

    “The Predictor has played this game hundreds of times before, with you and other people and has never once been wrong.”

    This is more than just the ability to predict your choices. Even your choice could be based on advice from a few friends. Must the Predictor then be universal, i.e. complete for the entire web of humanity collectively (or at least all local neighborhoods in this web), rather than simply each individual human?
    I think at least approximately.
    Perfect universality seems to lead into the kinds of conflicts of free will and predestination that give Calvinists logical fits. Approximation seems more tolerable.

    But what level of approximation?
    The only evidence is “hundreds”; i.e., the success rate (in terms of the fraction of web neighborhoods for which the predictor is correct) is then at least 99% but you do not have evidence beyond 99.9%. You also have no evidence against perfect prediction.

    If you are in the 99+% then the box one selection is the optimum choice. The only issue is your confidence that this is right in your case. As your adviser on how to play given this confidence I would need to understand your relative utility of the outcomes: $0, $1000, $1,000,000, and $1,001,000…

    Paul

  34. Osias Says:

    Poor philosophy journals! 🙂

  35. Drew Arrowood Says:

    In order for the predictor to have a reason for his prediction, there has to be something that is the case (possibly a set of states in the emulated Scott) that determines that his prediction will be the right one. Does the mere existence of this state of affairs (call it a real intention in Scott) preclude free will? Are real intentions scientifically respectable (possibly observable) kinds of things? No, and possibly yes. Free will is in fact acting in accord with our intentions.

    Intention does crop up in science in a couple of places. First off, there is the Bayes versus Fisher debate on statistical inference. Must we commit to a stopping rule for taking data and stick with this rule in order to have scientifically reasonable conclusions? A few years back, the philosophers were all on the Bayes bandwagon, and the scientists were Fisherians — but the two camps seem to be reversing allegiance today.

    The second puzzle is Polchinski’s interpretation of Weinberg’s nonlinear quantum mechanics. Polchinski forbids faster-than-light communication by imposing a condition on the observables of the system, but he says that as a result of this move, things in one Everett world affect things in another Everett world, depending on the intentions of the experimenter. The problem goes something like this:

    A spin 1/2 ion enters a Stern-Gerlach machine, which measures the z component of the spin. Things are arranged such that the deflected beam is rejoined to the undeflected beam, and will continue on. If the observer saw spin up, he does nothing. If he saw spin down, he makes a firm intention to either (1) do nothing or (2) rotate the spin by use of a field coupled to the y direction (this is not a measurement). The ion then enters a region of a field coupled to a quadratic x spin (it evolves nonlinearly). Now, if you measure the z spin after the nonlinear evolution (after having observed spin-up earlier), you will ALWAYS see an up spin if you had intended to take Action (1), and you will ALWAYS see a down spin if you had intended Action (2), in the case that you had observed a down spin earlier (if you didn’t observe spin up in step one, do nothing in this step). It seems that your intentions — about what you would do in situations that haven’t arisen — have an effect on the world. Actually, the discussion in John Preskill’s lecture notes is clearer than Polchinski’s.

    For Polchinski (and Preskill), this is an argument against nonlinear quantum mechanics — for me, it is an argument against the many worlds interpretation — but the question remains, what is the scientific status of an intention? We philosophers have talked a lot about this question with regard to beliefs and cognitive science, but comparatively little ink has been spilled with regard to intention.

    I’m not surprised this problem (along with the issues of Bayesian convergence) finds its way into your work.

  36. Jeff Erickson Says:

    Following your logic, if a Predictor offered me Newcomb’s choice, I would assume that I’m not really me, but just a simulation of me facing a simulation of the Predictor, and would therefore “take” only the simulation of the first box, so that the real me could really get both real boxes.

    And then I would take the red pill.

    Wait, something’s not right….

  37. Scott Says:

    “Following your logic, if a Predictor offered me Newcomb’s choice, I would assume that I’m not really me, but just a simulation of me facing a simulation of the Predictor, and would therefore ‘take’ only the simulation of the first box, so that the real me could really get both real boxes.”

    No, so that the real you could take only the first real box (just like the simulated you, who’s actually “the same” person), but have there be $1,000,000 in that box.

  38. Aristus Says:

    Well, hell — I’m not a physicist or a philosopher, but I know solipsism when I see it.

    But there is still a difference between you and the Predictor: whether or not it knows your intentions, you don’t know *its* intentions. Everyone seems to assume that it wants to give you as little money as possible. Why?

  39. Scott Says:

    “But there is still a difference between you and the Predictor: whether or not it knows your intentions, you don’t know *its* intentions. Everyone seems to assume that it wants to give you as little money as possible. Why?”

    Aristus: No, if the Predictor wants anything, it’s to give you $1,000,000 if you take the first box only, or $1,000 if you take both. If it wanted to give you as little money as possible, then presumably it wouldn’t put anything in either box.

  40. Bunny Dee Says:

    The way I see it, your argument basically answers the “how can God (as an idea – whether you choose to believe in it or not) be omniscient and offer us free will at the same time” question.

    And what you’re saying is that, if this holds true, then the only way it can be true is if we are part of God (again, as an idea) – thus having our free will and exercising it will influence the outcome and the knowledge. So, insofar as we know what we do, God (or whatever word you choose to replace the idea of an omniscient entity that still offers us free will) shall know as well, because we are part of it, and its omniscience occurs as a result of the sum of partial knowledges, our own partial knowledge (resulting from and resulting in our own free will) included.

    Well, yeah 😛
    Well put, however. And thanks for mentioning Newcomb’s problem, I didn’t know of it before – I do indeed have a social life, regrettably (or not) 😛

  41. Scott Says:

    Bunny Dee: Thanks for your very interesting interpretation. Strange as it seems in retrospect, I hadn’t realized that my Newcomb idea had possible theological applications. So then maybe I should publish it… 🙂

  42. James Friesen Says:

    A true scientist would want to study this problem at length to try to understand it.

    Therefore, if the Predictor has, in fact, always been right, then the best choice is to open box one, collect the million dollars and use it to fund your research into the workings of this puzzle. You will then have plenty of time to experiment with opening both boxes.

    From an economic standpoint, anyone can come up with a thousand dollars if they work hard and save, and most people could just whip out the old credit card if they needed to make a purchase of that size, but a million is much harder to come by, so why even bother taking chances for a mere thousand dollars? Open box one and have a chance at the million.

    In either case, I think the box with the thousand in it is a waste of time.

    The puzzle should be re-worded so as to offer more of an incentive to risk it all and open both boxes.

  43. Scott Says:

    James: In most of these puzzles involving money, you’re supposed to assume that “$1” is shorthand for “one unit of value to you, whatever that is.” In other words, anything for which you’re indifferent between getting one unit of it, or having a 1/K chance of getting K units of it.

  44. Anonymous Says:

    this link may be interesting for you: new combs paradox