Sponsor: Fastmail

Fast, private email hosting for you or your business. Try Fastmail free for up to 30 days.

Newcomb’s Paradox: ‘Two Boxes, One Choice, and $1,000,000’

The Veritasium team presents Newcomb’s Paradox:

You walk into a room. There’s a supercomputer and two boxes on the table.

One box is open. It’s got a thousand dollars in it. There’s no trick. You know it’s a thousand dollars.

The other box is a mystery box. You can’t see inside.

You also know that this supercomputer is very good at predicting people. It has correctly predicted the choices of thousands of people in the exact problem you’re about to face. You don’t know what the problem is yet, but you know it has been correct almost every time.

Now, the supercomputer says you can either take both boxes (the mystery box and the $1,000) or you can just take the mystery box.

What’s in the mystery box?

The supercomputer tells you that before you walked into the room, it made a prediction about your choice.

If the supercomputer predicted you would just take the mystery box and leave the $1,000 on the table, then it put a million dollars in the mystery box.

If the supercomputer predicted that you would take both boxes, then it put nothing in the mystery box.

The supercomputer made its prediction before you knew about the problem, and has already set up the boxes. It’s not trying to trick you. It’s not trying to deprive you of money. Its only goal is to make a correct prediction.

So do you take both boxes, or do you just take the mystery box?

I encourage you to make your choice, then watch the video—and see if your mind is changed.

I started as a two-boxer, under the assumption that whether the prediction is right or wrong, I end up with $1,000 more than I would with one box. This is causal rationality: whichever prediction was made, the two-box outcome yields more money than its one-box counterpart.
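That dominance argument can be checked mechanically. Here’s a minimal sketch of the payoff table (the dollar amounts come from the problem; the dictionary layout is my own):

```python
# Payoff table for Newcomb's problem, in dollars.
# Outer key: the predictor's (already-made) prediction.
# Inner key: your choice once you're in the room.
payoffs = {
    "predicted one-box": {"one-box": 1_000_000, "two-box": 1_001_000},
    "predicted two-box": {"one-box": 0, "two-box": 1_000},
}

# Causal (dominance) reasoning: hold the prediction fixed, and
# two-boxing pays exactly $1,000 more in every case.
for prediction, row in payoffs.items():
    assert row["two-box"] - row["one-box"] == 1_000
```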

After watching the video, I’m starting to think I should be a one-boxer: if the supercomputer predicts I’d choose one box, my eventual choice is evidence of its prediction, and thus of what’s in the box. This is outcome rationality: I’m making the decision that best correlates with the mystery box containing $1,000,000. Being the kind of person who would pick one box means, most of the time, ending up with a box that holds a million dollars.
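The evidential view turns on how accurate the predictor is. Here’s a minimal expected-value sketch; the 99% accuracy figure is my own assumption, since the problem only says the predictor has been correct almost every time:

```python
def expected_value(choice: str, accuracy: float = 0.99) -> float:
    """Expected payoff when the predictor is right with probability `accuracy`."""
    if choice == "one-box":
        # Right prediction: $1,000,000 in the mystery box; wrong: it's empty.
        return accuracy * 1_000_000 + (1 - accuracy) * 0
    # "two-box": a right prediction means an empty mystery box plus $1,000;
    # a wrong one means $1,001,000 total.
    return accuracy * 1_000 + (1 - accuracy) * 1_001_000

print(expected_value("one-box"))   # about 990,000
print(expected_value("two-box"))   # about 11,000
```

Setting the two expected values equal shows one-boxing comes out ahead whenever the accuracy exceeds 1,001,000 / 2,000,000, just over 50% — so the predictor doesn’t even need to be nearly perfect for outcome rationality to favor one box.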

It seems like the “obviously safe bet” to walk away with something can cost you everything. I wouldn’t have predicted that.

⚙︎

Subscribe to JAG’s Workshop to get new posts by email, and follow JAG’s Workshop using RSS, Mastodon, Bluesky, or LinkedIn. You can also support the site with a one-time tip of any amount.