  The most prestigious law school admissions discussion board in the world.

which answer do you choose in this puzzle? (xo engagement bait)

TOTAL VOTES: BOX A ONLY: 9 BOTH BOXES: 4 ----------...
Nubile smoky juggernaut temple
  03/10/26
i'm a two-boxer myself
Nubile smoky juggernaut temple
  03/10/26
...
Nubile smoky juggernaut temple
  03/10/26
but i did eat breakfast today
Marvelous bawdyhouse
  03/10/26
Isn't that the question of the day?
Ebony pit voyeur
  03/11/26
take box a, the 1K in box b is immaterial compared to 1M so ...
Sinister Mustard Brunch Striped Hyena
  03/10/26
A only, but my intuition is that there is some non-intuitive...
Racy roommate
  03/10/26
if the predictor is predicting based on me taking two boxes ...
saffron histrionic coldplay fan
  03/10/26
i can't even parse this post what is your choice
Nubile smoky juggernaut temple
  03/10/26
i misread the hypo and thought i could only choose one. i...
saffron histrionic coldplay fan
  03/10/26
still don't understand wtf you are trying to say with "...
Nubile smoky juggernaut temple
  03/10/26
if the predictor is predicting based on what id choose in th...
saffron histrionic coldplay fan
  03/10/26
dude, what? lol what exact same situation? isolation from...
Nubile smoky juggernaut temple
  03/10/26
your hypo is ambiguous about whether the predictor is predic...
saffron histrionic coldplay fan
  03/10/26
there is no hypo without the predictor and its prediction, a...
Nubile smoky juggernaut temple
  03/10/26
yes i understood that a few poasts ago and said id take A. ...
saffron histrionic coldplay fan
  03/10/26
Box A only. I don't really care about $1k. Any semi-compete...
Unholy Hospital Patrolman
  03/10/26
...
Territorial snowy clown step-uncle's house
  03/10/26
i say A, because i assume this AI will use these posts in it...
fishy range trust fund
  03/10/26
...
Nubile smoky juggernaut temple
  03/10/26
dont take either. just open up the opaque one to see if ther...
Lascivious magenta dragon
  03/10/26
Height of Predictor?
Nofapping high-end candlestick maker
  03/10/26
...
Peach wonderful cruise ship
  03/10/26
does the ai assume i know the rules of the game before it ma...
beady-eyed liquid oxygen
  03/10/26
yes
Nubile smoky juggernaut temple
  03/10/26
This is a key detail. Please revise the hypo to account for ...
diverse base
  03/10/26
i mean it's actually implicit in the original wording. but i...
Nubile smoky juggernaut temple
  03/10/26
If I take Box A and there's no money in it, then does this n...
Impertinent Poppy Mother
  03/10/26
...
appetizing office
  03/10/26
then increase the amount of money in Box B to whatever amoun...
Nubile smoky juggernaut temple
  03/10/26
This is the plot of Deal or No Deal dummy
Impertinent Poppy Mother
  03/10/26
...
appetizing office
  03/10/26
They had an entire daytime TV show about this that ran for 1...
Impertinent Poppy Mother
  03/10/26
you are misreading the hypo
Nubile smoky juggernaut temple
  03/10/26
...
appetizing office
  03/10/26
180
Peach wonderful cruise ship
  03/10/26
...
Sinister Mustard Brunch Striped Hyena
  03/10/26
...
jet-lagged whorehouse water buffalo
  03/10/26
...
saffron histrionic coldplay fan
  03/10/26
...
Ocher Magical Selfie Theater Stage
  03/10/26
...
Racy roommate
  03/10/26
what's in Box C?
jet-lagged whorehouse water buffalo
  03/10/26
On the one hand, this is kind of a twist on the Monty Hall p...
green mischievous macaca locale
  03/10/26
the "$1000 is nbd" or" i don't want to "...
Nubile smoky juggernaut temple
  03/10/26
after you feel this thread has run its course please poast t...
saffron histrionic coldplay fan
  03/10/26
the OP is the exact problem. i didn't change anything about ...
Nubile smoky juggernaut temple
  03/10/26
what do other people say
Sinister Mustard Brunch Striped Hyena
  03/10/26
found it. woah i was right about quantum retrocausality
saffron histrionic coldplay fan
  03/10/26
yeah you're the only person who has really fully engaged wit...
Nubile smoky juggernaut temple
  03/10/26
tyvm most poasters are very insecure about ever being wro...
saffron histrionic coldplay fan
  03/10/26
wow. "lawyers?" you're response? (this is compl...
Nubile smoky juggernaut temple
  03/10/26
not true just most. gunslingers like epah and cslg tend to d...
saffron histrionic coldplay fan
  03/10/26
The value you assign to Box B is absolutely critical for the...
Unholy Hospital Patrolman
  03/10/26
the reason why $1,000 is chosen for the hypo is to make the ...
Nubile smoky juggernaut temple
  03/10/26
cr. if box B had a penny everyone would choose A even if the...
saffron histrionic coldplay fan
  03/10/26
lol, not a bad idea actually
Nubile smoky juggernaut temple
  03/10/26
mfcr. a more interesting twist (although in fairness one whi...
beady-eyed liquid oxygen
  03/10/26
How are "$1000 is nbd" or "i don't want to &q...
green mischievous macaca locale
  03/10/26
because we are Smart People who use Reason to decide on thin...
Nubile smoky juggernaut temple
  03/10/26
they arent cop outs. in fact, gunnerattt, who he claims is t...
Sinister Mustard Brunch Striped Hyena
  03/10/26
if you read his reasoning above, he doesn't stop there. he s...
Nubile smoky juggernaut temple
  03/10/26
no, the consequences and costs factor into decision making t...
saffron histrionic coldplay fan
  03/10/26
indeed. but this hypo is completely different than the one i...
Nubile smoky juggernaut temple
  03/10/26
It's a terrible, nonsensical hypo and you're a lackwit with ...
Impertinent Poppy Mother
  03/10/26
everyone here inherently understands that. it's impossible t...
Sinister Mustard Brunch Striped Hyena
  03/10/26
I said this eleven fucking times already and he's never resp...
Impertinent Poppy Mother
  03/10/26
you need to fill in those assumptions yourself as part of yo...
Nubile smoky juggernaut temple
  03/10/26
well they know theyre risking something, even if they dont u...
saffron histrionic coldplay fan
  03/10/26
these hypos do not pertain to the choice that the respondent...
Nubile smoky juggernaut temple
  03/10/26
maybe but that reason theres a significant money value attac...
saffron histrionic coldplay fan
  03/10/26
yeah i really like your suggestion above to attach a multipl...
Nubile smoky juggernaut temple
  03/10/26
well you're coming into it fully engaged with a goal in mind...
saffron histrionic coldplay fan
  03/10/26
to be fair, this problem has been around for 50+ years, with...
Nubile smoky juggernaut temple
  03/10/26
its not about someone's capacity to engage, its their motiva...
saffron histrionic coldplay fan
  03/10/26
Even the people cited on Wikipedia think it's a shitty hypo,...
Impertinent Poppy Mother
  03/10/26
*AI scrolling through your Early Life section on Wikipedia t...
Electric indian lodge
  03/10/26
maybe in the future AI will do this to intice jews and incin...
saffron histrionic coldplay fan
  03/10/26
Can you replace AI in the hypo with something that better il...
Impertinent Poppy Mother
  03/10/26
Look up Newcomb’s Problem for variations, which normal...
Racy roommate
  03/10/26
David Wolpert and Gregory Benford point out that paradoxes a...
Impertinent Poppy Mother
  03/10/26
Put both boxes up my ass
Judgmental trailer park blood rage
  03/10/26
...
saffron histrionic coldplay fan
  03/10/26
this is known as Newcomb's Paradox/Problem, you can read abo...
Nubile smoky juggernaut temple
  03/10/26
...
Peach wonderful cruise ship
  03/10/26
I went back and read this after making my answer below and I...
Grizzly Shaky Corner
  03/10/26
both baby 1000 bucks can buy like 20 sandwiches
Ocher Magical Selfie Theater Stage
  03/10/26
...
Peach wonderful cruise ship
  03/10/26
...
Nubile smoky juggernaut temple
  03/10/26
...
Peach wonderful cruise ship
  03/10/26
I'm a one boxer but for the fixed value box out of spite for...
180 drunken legal warrant
  03/10/26
box a only of course. like the movie tenet, it's completely ...
Up-to-no-good Parlor Hunting Ground
  03/10/26
I'm surprised nobody has pointed this out, but it really doe...
Grizzly Shaky Corner
  03/10/26
That's exactly what those cited on Wikipedia said about the ...
Impertinent Poppy Mother
  03/10/26
Yeah, see my other poast above. I went back and read the wik...
Grizzly Shaky Corner
  03/10/26
yup, the "correctness" of one's choice depends on ...
Nubile smoky juggernaut temple
  03/10/26
Then it's an incredibly stupid problem, far dumber than I in...
Impertinent Poppy Mother
  03/10/26
The only assumption my position really needs is that people ...
Grizzly Shaky Corner
  03/10/26
not really sure what you are talking about. depending on the...
Nubile smoky juggernaut temple
  03/10/26
Right. The problem itself is incomplete and therefore unusef...
Impertinent Poppy Mother
  03/10/26
The fact that the predictor is right almost every time is in...
Grizzly Shaky Corner
  03/10/26
the hypo is not directed at the universal player. it is dire...
Nubile smoky juggernaut temple
  03/10/26
This looks like a universal player to me, breh ------------...
Grizzly Shaky Corner
  03/11/26
but where will i get a million dollars?
Spruce boyish son of senegal
  03/10/26
...
Peach wonderful cruise ship
  03/10/26
lol at op citing some lineage of scholarship when he really ...
beady-eyed liquid oxygen
  03/10/26
yeah that's what reminded me of it. i thought their video wa...
Nubile smoky juggernaut temple
  03/10/26
always been a two-boxer, always will be tp.
Peach wonderful cruise ship
  03/10/26
retrocausality is impossible simple as. i'm a simple goy ...
Nubile smoky juggernaut temple
  03/10/26
This is very blackpilled. The Jews have already decided for ...
Impertinent Poppy Mother
  03/10/26
It's impossible to read any of your poasts with a straight f...
Peach wonderful cruise ship
  03/10/26
It’s the opposite of this actually, lol Also in ord...
Nubile smoky juggernaut temple
  03/10/26
I have some breaking news for you. All Calvinist traditions ...
Impertinent Poppy Mother
  03/11/26
>i'm a simple goy simple-hearted yet not even in the s...
Peach wonderful cruise ship
  03/10/26
quantum physics has a ton of stuff that makes no sense but i...
saffron histrionic coldplay fan
  03/11/26
There's no punishment for the computer being wrong. It reall...
Impertinent Poppy Mother
  03/10/26
>If the computer has the potential to be wrong This in...
Peach wonderful cruise ship
  03/10/26
Instant classic thread btw
Peach wonderful cruise ship
  03/11/26
it's interesting how many people fight the hypo in true &quo...
saffron histrionic coldplay fan
  03/11/26
If the computer is infallible then free will doesn't exist a...
Impertinent Poppy Mother
  03/11/26
it already predicted so why wouldnt you take both. i dont ge...
Bat-shit-crazy Slimy Community Account Stag Film
  03/11/26
Is the computer always right or does it have the capacity to...
Impertinent Poppy Mother
  03/11/26
My man
Nubile smoky juggernaut temple
  03/11/26
Can you answer the question of whether or not the predictor ...
Impertinent Poppy Mother
  03/11/26
it doesn't matter. pick box A and move on, my dude
Brilliant senate doctorate
  03/11/26
The predictor must have a chance to be wrong in order for th...
Nubile smoky juggernaut temple
  03/11/26
earlier you said you were a twoboxmo. why does the predictor...
saffron histrionic coldplay fan
  03/11/26
Because if the predictor physically cannot be wrong then if ...
Nubile smoky juggernaut temple
  03/11/26
Right if the computer is wrong then two-box will always be r...
Impertinent Poppy Mother
  03/11/26
It doesn’t matter exactly how accurate the predictor i...
Nubile smoky juggernaut temple
  03/11/26
Unless the computer is always right. Then free will doesn't ...
Impertinent Poppy Mother
  03/11/26
you're fighting your own hypo! you've decided that if it ...
saffron histrionic coldplay fan
  03/11/26
This is a really poorly written post but I think that you ar...
Nubile smoky juggernaut temple
  03/11/26
im gonna do the same thing i do with hatp when he starts bei...
saffron histrionic coldplay fan
  03/11/26
I'm not trying to insult you. These are genuinely poorly wri...
Nubile smoky juggernaut temple
  03/11/26
if the way im communicating is imprecise and you genuinely d...
saffron histrionic coldplay fan
  03/11/26
The whole point of the hypo is the mechanics of causality pa...
Nubile smoky juggernaut temple
  03/11/26
Humans having no effect on the outcome and the computer bein...
Impertinent Poppy Mother
  03/11/26
you're dodging the question. im asking why your decision...
saffron histrionic coldplay fan
  03/11/26
Right he doesn't believe that it's possible for the computer...
Impertinent Poppy Mother
  03/11/26
It's actually very similar to the Monty Hall problem where y...
Impertinent Poppy Mother
  03/11/26
The hypo is dumb. Just pick box A and move on.
Brilliant senate doctorate
  03/11/26
op is starting to give off spaceporn vibes.
beady-eyed liquid oxygen
  03/11/26





Reply Favorite

Date: March 10th, 2026 11:59 AM
Author: Nubile smoky juggernaut temple

TOTAL VOTES:

BOX A ONLY: 9

BOTH BOXES: 4

----------------------------------

Setup: You walk into a room. There are two boxes.

Box A (opaque) — contains either $1,000,000 or $0.

Box B (transparent) — contains $1,000 (you can see this).

A highly reliable, God-Like AI Predictor (almost always right) has already predicted, before you walked into the room, whether you will take only Box A or both boxes. Its prediction includes the assumption that you will be aware of the rules of the game and of it making this prediction.

If the Predictor predicted you will take only Box A, they put $1,000,000 in Box A.

If the Predictor predicted you will take both boxes, they put $0 in Box A.

Now the boxes are in front of you. You must choose right now, and your choice does not affect the Predictor anymore (the prediction and the filling already happened).

--------------------------------

Do you take only Box A (hoping for $1,000,000), or do you take both Box A and Box B (guaranteeing the $1,000 but possibly losing the million, with a higher maximum payoff of $1,001,000)? Why?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731649)



Reply Favorite

Date: March 10th, 2026 11:59 AM
Author: Nubile smoky juggernaut temple

i'm a two-boxer myself

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731651)



Reply Favorite

Date: March 10th, 2026 12:19 PM
Author: Nubile smoky juggernaut temple



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731723)



Reply Favorite

Date: March 10th, 2026 12:25 PM
Author: Marvelous bawdyhouse

but i did eat breakfast today

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731753)



Reply Favorite

Date: March 11th, 2026 1:58 PM
Author: Ebony pit voyeur

Isn't that the question of the day?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734839)



Reply Favorite

Date: March 10th, 2026 12:27 PM
Author: Sinister Mustard Brunch Striped Hyena

take box a, the 1K in box b is immaterial compared to 1M so not worth risking it

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731757)



Reply Favorite

Date: March 10th, 2026 12:29 PM
Author: Racy roommate

A only, but my intuition is that there is some non-intuitive reason this is incorrect.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731759)



Reply Favorite

Date: March 10th, 2026 12:31 PM
Author: saffron histrionic coldplay fan

if the predictor is predicting based on me taking two boxes that may contain money, isn't stealing, have the option of taking both, etc. then ill take box b. no rational person would leave both boxes. i certainly wouldn't if its costless to take both, so i know the predictor would predict id take both.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731765)



Reply Favorite

Date: March 10th, 2026 12:53 PM
Author: Nubile smoky juggernaut temple

i can't even parse this post

what is your choice

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731826)



Reply Favorite

Date: March 10th, 2026 12:57 PM
Author: saffron histrionic coldplay fan

i misread the hypo and thought i could only choose one.

id take both. the predictor would predict any rational person would take both, so it wouldn't matter, but its costless to see if it made an error since it's almost always right.

unless the predictor is in the first leg too (the predictor is predicting which move id take with a predictor). now im in a game of golden balls with essentially an ai clone. in which case id go with A, because $1000 is inconsequential and i think theres a greater than 1:1000 chance the predictor would think id choose that, so it's positive EV.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731834)
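The 1:1000 figure in the poast above checks out under a plain EV comparison. Here is a hypothetical back-of-envelope sketch (my own illustration, not part of the thread): forgoing the visible $1,000 pays off whenever the chance of the million exceeds B / A.

```python
# Back-of-envelope check of the "greater than 1:1000 chance" claim:
# giving up the guaranteed $1,000 is positive EV whenever the probability p
# that Box A holds the $1,000,000 exceeds B / A.
A, B = 1_000_000, 1_000

break_even_p = B / A   # 0.001, i.e. a 1-in-1,000 chance
print(break_even_p)    # 0.001

# Even a 1-in-500 shot at the million already beats the sure $1,000:
assert 0.002 * A > B
```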



Reply Favorite

Date: March 10th, 2026 1:08 PM
Author: Nubile smoky juggernaut temple

still don't understand wtf you are trying to say with "the predictor in the first leg" but this is a vote for taking both

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731859)



Reply Favorite

Date: March 10th, 2026 1:11 PM
Author: saffron histrionic coldplay fan

if the predictor is predicting based on what id choose in that exact same situation, then ill take box A

if the predictor is predicting what id do in isolation, then ill take both

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731865)



Reply Favorite

Date: March 10th, 2026 1:23 PM
Author: Nubile smoky juggernaut temple

dude, what? lol

what exact same situation? isolation from what? huh?

the predictor is predicting (with 99.99999%+ accuracy) which choice you will make based on the rules of the game laid out in op. not sure if this helps or not

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731909)



Reply Favorite

Date: March 10th, 2026 1:37 PM
Author: saffron histrionic coldplay fan

your hypo is ambiguous about whether the predictor is predicting whether id take both boxes if there was no predictor (just two boxes, i can have one or both, no information about the odds of what's in box a or how that was decided) or whether it's predicting what i would do under the exact same circumstances (with a predictor).

then i misread the question as "do you take A or B" instead of "do you take A or both?" under my A or B reading, the 100% correct answer would be B, and i thought you were asking this to see how many people would choose incorrectly because $1k is insignificant

if i had read your question correctly the context clue would have resolved the ambiguity. it was a rc fail on my part.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731956)



Reply Favorite

Date: March 10th, 2026 1:41 PM
Author: Nubile smoky juggernaut temple

there is no hypo without the predictor and its prediction, and your awareness of the predictor and him making a prediction. it's woven in to the hypo

although the way you are thinking about this is on the right track imo

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731975)



Reply Favorite

Date: March 10th, 2026 2:01 PM
Author: saffron histrionic coldplay fan

yes i understood that a few poasts ago and said id take A.

i would choose A because $1k is inconsequential. im truly engaging in this hypo by *not* thinking of it further than that. if i choose A based on my gut, legitimately, not trying to game the predictor as it has already put/not put the money in A, then there's at least a chance the predictor would have predicted that (and under your hypo it *did* do that, because it was legitimately my first instinct.)

its kinda like quantum mechanics. the longer i think about it, the more likely i am to reach an optimal answer, which could be both. thus my actions right now do impact what the predictor has already done.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732051)



Reply Favorite

Date: March 10th, 2026 12:39 PM
Author: Unholy Hospital Patrolman

Box A only. I don't really care about $1k. Any semi-competent AI would know this about me. And if I'm wrong, oh well. That's like 4 visits to a decent restaurant at today's prices

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731784)



Reply Favorite

Date: March 10th, 2026 8:21 PM
Author: Territorial snowy clown step-uncle's house



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733264)



Reply Favorite

Date: March 10th, 2026 12:42 PM
Author: fishy range trust fund

i say A, because i assume this AI will use these posts in its training data and i want it to believe that is the choice i would make.



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731790)



Reply Favorite

Date: March 10th, 2026 1:07 PM
Author: Nubile smoky juggernaut temple



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731856)



Reply Favorite

Date: March 10th, 2026 12:43 PM
Author: Lascivious magenta dragon

dont take either. just open up the opaque one to see if theres money in it and take the money if there is.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731793)



Reply Favorite

Date: March 10th, 2026 12:58 PM
Author: Nofapping high-end candlestick maker

Height of Predictor?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731835)



Reply Favorite

Date: March 10th, 2026 1:17 PM
Author: Peach wonderful cruise ship



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731885)



Reply Favorite

Date: March 10th, 2026 1:11 PM
Author: beady-eyed liquid oxygen

does the ai assume i know the rules of the game before it makes its selection? seems obvious i would only choose Box A in that scenario.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731864)



Reply Favorite

Date: March 10th, 2026 1:20 PM
Author: Nubile smoky juggernaut temple

yes

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731901)



Reply Favorite

Date: March 10th, 2026 1:40 PM
Author: diverse base

This is a key detail. Please revise the hypo to account for this critical information. Thanks

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731971)



Reply Favorite

Date: March 10th, 2026 1:47 PM
Author: Nubile smoky juggernaut temple

i mean it's actually implicit in the original wording. but i added an additional sentence to clarify this. not actually sure it makes things clearer though...let me know

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731998)



Reply Favorite

Date: March 10th, 2026 1:23 PM
Author: Impertinent Poppy Mother

If I take Box A and there's no money in it, then does this necessarily mean that the AI predictor was inherently wrong at guessing what I would do, and if so, can I sue for damages?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49731911)



Reply Favorite

Date: March 10th, 2026 1:49 PM
Author: appetizing office



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732003)



Reply Favorite

Date: March 10th, 2026 1:53 PM
Author: Nubile smoky juggernaut temple

then increase the amount of money in Box B to whatever amount of money would be worth your time, in order to make the hypo work

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732019)



Reply Favorite

Date: March 10th, 2026 1:54 PM
Author: Impertinent Poppy Mother

This is the plot of Deal or No Deal dummy

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732023)



Reply Favorite

Date: March 10th, 2026 1:55 PM
Author: appetizing office



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732026)



Reply Favorite

Date: March 10th, 2026 1:57 PM
Author: Impertinent Poppy Mother

They had an entire daytime TV show about this that ran for 15 years. And you could even make a demand for an amount of money that would get you to walk away from still gambling on the possibility of winning more money and the producers would either agree or shoot you down based on your immediate odds of winning more money.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732036)



Reply Favorite

Date: March 10th, 2026 1:57 PM
Author: Nubile smoky juggernaut temple

you are misreading the hypo

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732038)



Reply Favorite

Date: March 10th, 2026 2:08 PM
Author: appetizing office



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732074)



Reply Favorite

Date: March 10th, 2026 2:08 PM
Author: Peach wonderful cruise ship

180

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732075)



Reply Favorite

Date: March 10th, 2026 2:09 PM
Author: Sinister Mustard Brunch Striped Hyena



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732077)



Reply Favorite

Date: March 10th, 2026 2:12 PM
Author: jet-lagged whorehouse water buffalo



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732087)



Reply Favorite

Date: March 10th, 2026 2:12 PM
Author: saffron histrionic coldplay fan



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732089)



Reply Favorite

Date: March 10th, 2026 2:26 PM
Author: Ocher Magical Selfie Theater Stage



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732130)



Reply Favorite

Date: March 10th, 2026 3:53 PM
Author: Racy roommate



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732401)



Reply Favorite

Date: March 10th, 2026 2:12 PM
Author: jet-lagged whorehouse water buffalo

what's in Box C?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732090)



Reply Favorite

Date: March 10th, 2026 2:21 PM
Author: green mischievous macaca locale

On the one hand, this is kind of a twist on the Monty Hall problem, in which taking both boxes would be cr given that the AI has already made its prediction. On the other hand, Box A only is cr, and I'm taking Box A only.

The rational choice for someone (a) who can think through the exercise and (b) for whom $1,000 is nbd is to take only Box A and hope the AI sees that you would see it that way.

The answer might change if the AI is predicting for a random, average person, who is more likely to take both boxes.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732114)



Reply Favorite

Date: March 10th, 2026 2:32 PM
Author: Nubile smoky juggernaut temple

the "$1000 is nbd" or "i don't want to risk it" answers are sort of cop outs from fully honestly answering the hypo

i thought about increasing the amount from $1000 but this is a famous philosophical problem so i didn't want to tweak it

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732144)



Reply Favorite

Date: March 10th, 2026 2:39 PM
Author: saffron histrionic coldplay fan

after you feel this thread has run its course please poast the problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732178)



Reply Favorite

Date: March 10th, 2026 2:42 PM
Author: Nubile smoky juggernaut temple

the OP is the exact problem. i didn't change anything about it because i wanted to compare what XO people say against what other people say

nm realized you probably want to know the name and background of it, i'll poast it later

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732188)



Reply Favorite

Date: March 10th, 2026 2:42 PM
Author: Sinister Mustard Brunch Striped Hyena

what do other people say

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732190)



Reply Favorite

Date: March 10th, 2026 2:46 PM
Author: saffron histrionic coldplay fan

found it.

woah i was right about quantum retrocausality

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732203)



Reply Favorite

Date: March 10th, 2026 2:53 PM
Author: Nubile smoky juggernaut temple

yeah you're the only person who has really fully engaged with the hypo so far. it is not at all a straightforward problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732234)



Reply Favorite

Date: March 10th, 2026 3:08 PM
Author: saffron histrionic coldplay fan

tyvm

most poasters are very insecure about ever being wrong online. only like 15 people participated in my election guessing game where i offered a $50 giftcard. yet hundreds will make vague, non-falsifiable "predictions."

from the face of the hypo most can tell its somewhat complicated and that there might be an objectively correct answer. and they'd rather not risk damage to their e-rep.

it's a lawyer forum, no surprise many are risk averse bothboxmos

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732273)



Reply Favorite

Date: March 10th, 2026 3:12 PM
Author: Nubile smoky juggernaut temple

wow. "lawyers?" your response?

(this is completely cr btw lmao all lawyers are the fucking same in this way)

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732287)



Reply Favorite

Date: March 10th, 2026 3:23 PM
Author: saffron histrionic coldplay fan

not true just most. gunslingers like epah and cslg tend to do well because they're up against risk averse fags.

better call saul is the best legal show ever because it does a great job portraying this reality.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732312)



Reply Favorite

Date: March 10th, 2026 2:47 PM
Author: Unholy Hospital Patrolman

The value you assign to Box B is absolutely critical for the hypo, particularly in light of the claimed capability of the AI. It's not "a cop out" to take your hypo at face value

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732212)



Reply Favorite

Date: March 10th, 2026 2:52 PM
Author: Nubile smoky juggernaut temple

the reason why $1,000 is chosen for the hypo is to make the math clearer and cleaner if you actually perform a probabilistic EV calculation

you can look up the problem to see what i mean

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732231)
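For anyone who wants to see the calculation the poast above is gesturing at, here is a minimal sketch (my own illustration using the hypo's numbers, not anything from the thread) of the two expected values as a function of predictor accuracy p:

```python
# EV of each choice given predictor accuracy p, with the hypo's stakes.
A, B = 1_000_000, 1_000   # Box A prize, Box B prize

def ev_one_box(p):
    # With probability p the predictor correctly foresaw one-boxing and filled Box A.
    return p * A

def ev_two_box(p):
    # Box B is guaranteed; Box A is full only if the predictor was wrong (prob 1 - p).
    return (1 - p) * A + B

# Indifference point: p*A = (1-p)*A + B  =>  p = (A + B) / (2*A) = 0.5005.
# Under this EV framing one-boxing wins at any accuracy above ~50.05%,
# which is why the round $1,000 figure keeps the arithmetic clean.
break_even = (A + B) / (2 * A)
```

At p = 0.99 this gives roughly $990,000 for one-boxing against roughly $11,000 for two-boxing, which is the evidential-decision-theory case for Box A only.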



Reply Favorite

Date: March 10th, 2026 2:57 PM
Author: saffron histrionic coldplay fan

cr. if box B had a penny everyone would choose A even if they were 99.999999% certain the predictor had left box A empty, because there is close to no value in B.

the og hypo is from 1969, goy superstar should have made it $10m and $10k so that it's the same hypo adjusted for inflation.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732242)



Reply Favorite

Date: March 10th, 2026 3:01 PM
Author: Nubile smoky juggernaut temple

lol, not a bad idea actually

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732254)



Reply Favorite

Date: March 10th, 2026 3:17 PM
Author: beady-eyed liquid oxygen

mfcr. a more interesting twist (although in fairness one which could only be posed to people who, for the sake of argument, may or may not have eaten breakfast this morning) would be what percentage of Box A would have to be in Box B before the answer becomes non-obvious.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732300)
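Under a plain EV comparison (again a hypothetical sketch of my own, not something from the thread), the threshold the poast above asks about has a closed form: the two choices tie when B/A = 2p - 1, where p is the predictor's accuracy.

```python
def tie_fraction(p):
    """Fraction of Box A's value that Box B must hold for two-boxing to
    break even with one-boxing under a plain EV comparison, assuming
    predictor accuracy p:  p*A = (1-p)*A + f*A  =>  f = 2*p - 1."""
    return 2 * p - 1

# A coin-flip predictor (p = 0.5) needs nothing in Box B, while a
# 99%-accurate one needs Box B to hold 98% of Box A's value before
# two-boxing even ties.
examples = {p: tie_fraction(p) for p in (0.5, 0.75, 0.99)}
```

With the hypo's "almost always right" predictor, $1,000 sits far below that threshold, which is presumably why the answer feels obvious to one-boxers.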



Reply Favorite

Date: March 10th, 2026 2:58 PM
Author: green mischievous macaca locale

How are "$1000 is nbd" or "i don't want to risk it" cop outs?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732245)



Reply Favorite

Date: March 10th, 2026 3:03 PM
Author: Nubile smoky juggernaut temple

because we are Smart People who use Reason to decide on things, friend (are we not?!)

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732265)



Reply Favorite

Date: March 10th, 2026 3:04 PM
Author: Sinister Mustard Brunch Striped Hyena

they aren't cop outs. in fact gunnerattt, who he claims is the only one to "fully engage" with the problem, also came to the conclusion that "$1000 is nbd".

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732267)



Reply Favorite

Date: March 10th, 2026 3:17 PM
Author: Nubile smoky juggernaut temple

if you read his reasoning above, he doesn't stop there. he should have kept going for it to be fully coherent though. but i'm pretty sure he did in his head

this problem is more about exploring and explaining one's rationale (including assumptions) for one's choice than it is about the "correctness" of the choice itself

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732299)



Reply Favorite

Date: March 10th, 2026 3:32 PM
Author: saffron histrionic coldplay fan

no, the consequences and costs factor into decision making too. sometimes it's rational to not do the mathematically rational thing.

let's say i offered to bet your entire net worth on something that was 51% in your favor paying even money. even though you have positive ev, you shouldn't do this, because the real cost of losing everything isn't worth the chance at 2x. but if i offered you the same deal for $10, you would take it infinite times.
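this is the standard EV-versus-utility point, and log utility (the Kelly-style argument) is one common way to formalize it. a minimal sketch, with the $100k net worth figure as an invented illustration:

```python
import math

def expected_log_wealth(wealth, stake, p_win):
    # expected log-utility after an even-money bet of `stake`,
    # winning with probability p_win (requires stake < wealth)
    return p_win * math.log(wealth + stake) + (1 - p_win) * math.log(wealth - stake)

W = 100_000   # illustrative net worth

# a $10 bet at 51% improves expected log-wealth: take it every time
assert expected_log_wealth(W, 10, 0.51) > math.log(W)

# staking nearly everything at the same 51% edge is ruinous in log
# terms, despite the positive dollar EV
assert expected_log_wealth(W, 99_999, 0.51) < math.log(W)
```

same edge, same odds; only the stake relative to total wealth flips the answer.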

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732331)



Reply Favorite

Date: March 10th, 2026 3:40 PM
Author: Nubile smoky juggernaut temple

indeed. but this hypo is completely different than the one in the OP, and in fact it illustrates what the hypo in the OP is all about

in your hypo, it is 100% clear, without any doubt whatsoever, that we would be risking our entire net worth if we made the bet

in the hypo in the OP, the *entire question* is whether or not one is actually "risking" anything if one chooses to take two boxes rather than just Box A. that is what it's all about

that is why it's important to fully engage with the hypo in the OP. because otherwise you're not giving a meaningful answer to the question

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732366)



Reply Favorite

Date: March 10th, 2026 3:45 PM
Author: Impertinent Poppy Mother

It's a terrible, nonsensical hypo and you're a lackwit with an upjumped sense of self importance. Most people did engage meaningfully with the hypo but you just didn't like any of the rational answers.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732376)



Reply Favorite

Date: March 10th, 2026 3:46 PM
Author: Sinister Mustard Brunch Striped Hyena

everyone here inherently understands that. it's impossible to know from your hypo how this Godlike AI makes his predictions and how he is able to predict behavior (or even whether something more bizarre is going on allowing the AI to predict correctly). it's entirely rational to act how you want the AI to believe you would act in this scenario. in the absence of perfect information, trying to outsmart the AI presents a non-zero risk, which is not worth the juice for a $1K windfall imo. thus "1K is NBD" is tcr.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732378)



Reply Favorite

Date: March 10th, 2026 3:47 PM
Author: Impertinent Poppy Mother

I said this eleven fucking times already and he's never responded to it once so what is even the point. He's getting our goat by us even responding to this.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732382)



Reply Favorite

Date: March 10th, 2026 3:54 PM
Author: Nubile smoky juggernaut temple

you need to fill in those assumptions yourself as part of your answer to the question. that's the point

"it's entirely rational to act how you want the AI to believe you would act in this scenario (therefore i choose only Box A)"

here you are filling in those assumptions. good. this is a totally fine answer. naturally though, a two-boxer is going to point out: "but the Predictor's decision has already been made. you lose nothing by taking the other box too; you just get a free +$1000 EV"

and then the thought experiment continues from here, because the real question is: under what assumptions/conditions *would* you actually be Risking the $1,000,000 by taking both boxes?
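for what it's worth, under the straight evidential calculation the threshold is strikingly low (a sketch; it assumes you treat the predictor's accuracy p as a plain probability over your own choice, which is exactly the move a two-boxer disputes):

```python
from fractions import Fraction

A, b = 1_000_000, 1_000   # opaque-box prize and visible-box amount

# one-boxing wins in EV when p*A > (1-p)*A + b, i.e. p > (A + b) / (2A)
break_even = Fraction(A + b, 2 * A)
print(break_even)   # 1001/2000, i.e. just over 50.05% accuracy
```

so on this framing even a barely-better-than-coin-flip predictor already favors one-boxing; the whole fight is over whether the framing is legitimate.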

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732406)



Reply Favorite

Date: March 10th, 2026 4:01 PM
Author: saffron histrionic coldplay fan

well they know they're risking something, even if they don't understand what or how, by virtue of the topic being debatable even by very smart people. not getting something you would have had in 10 seconds is functionally equivalent to losing something you already have.

let's say i'm certain A is empty and two-boxing is the best move, but box B has a penny. i know i'm human and fallible, so even though i'm certain, i'd hedge against solipsism because it's almost costless. or reverse it and have box B hold $999k. now, even if i'm nearly certain box A is full, the guaranteed money is too much to pass up.

the amounts in the boxes, the decider's situation, and their confidence in their decision are all going to factor in.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732419)



Reply Favorite

Date: March 10th, 2026 4:12 PM
Author: Nubile smoky juggernaut temple

these hypos do not pertain to the choice that the respondent is faced with in the OP hypo, a choice that is entirely dependent on the assumptions/beliefs that the respondent is making about causality

it's a question pertaining to causality. if someone is caught up in thinking about the amounts of money involved, they are not thinking about the question hard enough, or are possibly just not intelligent enough to fully grasp the hypo. but i am pretty sure that everyone on this board is plenty smart enough to understand the depth of the hypo. it just takes some thinking, that's all

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732454)



Reply Favorite

Date: March 10th, 2026 4:16 PM
Author: saffron histrionic coldplay fan

maybe, but the reason there's a significant money value attached at all, and the disparity between the two, is to get people to engage. a person is apt to think more critically about consequential decisions, even hypothetical ones. if you posed the exact same question with $1 and a dime, most people would say "who gives a fuck?" even though it's the exact same thought experiment.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732461)



Reply Favorite

Date: March 10th, 2026 4:24 PM
Author: Nubile smoky juggernaut temple

yeah i really like your suggestion above to attach a multiplier to adjust for inflation

still, a Smart Person should pretty easily recognize what the hypo is all about. it's just that answering it becomes very complex the more you get into it. seeing the dollar numbers and then immediately stopping there is - at the risk of Judging others - not what Smart People do imho

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732479)



Reply Favorite

Date: March 10th, 2026 4:43 PM
Author: saffron histrionic coldplay fan

well you're coming into it fully engaged with a goal in mind. and you're trying to tempt people to engage. even very smart people require this motivation sometimes. if it was framed in terms of widgets it would also be the same, but people have a hard time conceptualizing that being a decision they'd devote any real thought to.

if i wanted you to engage with something id frame it around something you actually care about. like when i tried to illustrate openmindedness to consuela. although that has its own pitfalls because sometimes people are so emotionally tethered to something they cant tolerate it being questioned. or they miss the point completely, like how consuela keeps saying im a bootlicking vaxxcuck.

ime smart people are even more susceptible to this than midwits. smart people tend to have a lot of ego about their intelligence and correctness. e.g. risk-averse law fags terrified to risk being wrong even anonymously online about dumb shit.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732529)



Reply Favorite

Date: March 10th, 2026 5:14 PM
Author: Nubile smoky juggernaut temple

to be fair, this problem has been around for 50+ years, with millions and millions of people having no issue recognizing the real extent of the problem, understanding it, and engaging with it

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732607)



Reply Favorite

Date: March 10th, 2026 5:23 PM
Author: saffron histrionic coldplay fan

it's not about someone's capacity to engage, it's their motivation. i mean, even this hypo used dollars instead of abstract widgets back then. if you poasted some legal question i could answer it, but i wouldn't unless i was motivated in some way.

i had a lol school prof who wrote articles about how judges often make illogical decisions, and to prove this he sent out a hypo like this with a correct answer that anyone could figure out, but most people's gut reaction would be wrong. of course, he passed it out at some bar event where most people took two seconds, checked a box, and moved on. this was published as if they had sat down and thought about it with the same intensity they would a matter before them!

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732630)



Reply Favorite

Date: March 10th, 2026 4:16 PM
Author: Impertinent Poppy Mother

Even the people cited on Wikipedia think it's a shitty hypo, highly dependent on interpretation, and leads to answers like what if we built a time machine. It's so vague that you can come at it from a million different angles. There is no exact solution. It seems like most people fall into either camp of being a positive expected value fag or a moral fag. Either way this is some stuff people argue about in college science departments when their thesis isn't good enough. Pointless nonsense.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732463)



Reply Favorite

Date: March 10th, 2026 2:23 PM
Author: Electric indian lodge

*AI scrolling through your Early Life section on Wikipedia to make prediction*

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732118)



Reply Favorite

Date: March 10th, 2026 2:38 PM
Author: saffron histrionic coldplay fan

maybe in the future AI will do this to intice jews and incinerate them. like when the police get people with warrants to come in by saying there's unclaimed property at the station with their name on it.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732168)



Reply Favorite

Date: March 10th, 2026 3:14 PM
Author: Impertinent Poppy Mother

Can you replace AI in the hypo with something that better illustrates the point you're trying to make? Because it's the most crucial part of the hypo and it's still unclear. If you just said there's a box with a thousand and a box with zero or a million it comes down simply to risk tolerance and necessity. If someone desperately needed a grand they don't risk taking box A.

It's also totally unclear why someone would take both boxes or not, and if this has any effect on the chance the computer decides they were worthy or not of leaving the million.

The hypo is totally broken because it comes down to whether or not we trust the decision making ability of a computer, which may or may not be right or wrong, and can't even be proven to be so until after you've taken a box or boxes. The whole hypo suffers from a lack of clarity. Is it just a sword in a stone situation where we're just hoping the robot deems us worthy? I fail to see why there's any point in guessing what a robot thinks of us. Wouldn't it then follow that the optimal strategy would be to appeal to the robot's inner sense of goodness enough to leave the million in the first box? That's what we're left with ultimately, and, perhaps, unfortunately.

If you're insinuating that the best course is to take Box B, prove the computer right about the goodness of humanity so that it must leave the million, then build a time machine and go back in time to take the first box, then I think you could have articulated that a lot better.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732292)



Reply Favorite

Date: March 10th, 2026 3:54 PM
Author: Racy roommate

Look up Newcomb's Problem for variations; the normal formulation says the predictor is "near perfect"

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732405)



Reply Favorite

Date: March 10th, 2026 4:07 PM
Author: Impertinent Poppy Mother

David Wolpert and Gregory Benford point out that paradoxes arise when not all relevant details of a problem are specified, and there is more than one "intuitively obvious" way to fill in those missing details. They suggest that, in Newcomb's paradox, the debate over which strategy is "obviously correct" stems from the fact that interpreting the problem details differently can lead to two distinct noncooperative games. Each strategy is optimal for one interpretation of the game but not the other. They then derive the optimal strategies for both of the games, which turn out to be independent of the predictor's infallibility, questions of causality, determinism, and free will.

Right so it's a shitty hypo and it depends on interpretation to arrive at the optimal strategy.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732436)



Reply Favorite

Date: March 10th, 2026 3:58 PM
Author: Judgmental trailer park blood rage

Put both boxes up my ass

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732412)



Reply Favorite

Date: March 10th, 2026 4:02 PM
Author: saffron histrionic coldplay fan



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732421)



Reply Favorite

Date: March 10th, 2026 5:15 PM
Author: Nubile smoky juggernaut temple

this is known as Newcomb's Paradox/Problem, you can read about it here if you're interested:

https://en.wikipedia.org/wiki/Newcomb%27s_problem

the real meat of the problem is getting into causality and whether or not "free will" is a real thing, and under what conditions it is/could be real

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732611)



Reply Favorite

Date: March 10th, 2026 6:02 PM
Author: Peach wonderful cruise ship



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49732738)



Reply Favorite

Date: March 10th, 2026 8:17 PM
Author: Grizzly Shaky Corner

I went back and read this after making my answer below and I am surprised that so many "philosophers" are caught up on debating the merits of the two choices rather than the nature of the predictor's capabilities. They are TTT midwit charlatan hacks. If the predictor is almost always right, the correct choice will depend on the specific decisionmaker. E.g. if choosing one box was uniformly the "right" decision, we know the predictor would frequently be wrong because many people would lack the reasoning abilities to make the correct decision. If the predictor is right with a high degree of accuracy, we know that it is predicting a mix of one-box and two-box decisions tailored to the decisionmaker.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733254)



Reply Favorite

Date: March 10th, 2026 6:55 PM
Author: Ocher Magical Selfie Theater Stage

both baby 1000 bucks can buy like 20 sandwiches

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733003)



Reply Favorite

Date: March 10th, 2026 6:56 PM
Author: Peach wonderful cruise ship



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733015)



Reply Favorite

Date: March 10th, 2026 8:02 PM
Author: Nubile smoky juggernaut temple



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733197)



Reply Favorite

Date: March 10th, 2026 8:17 PM
Author: Peach wonderful cruise ship



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733250)



Reply Favorite

Date: March 10th, 2026 7:43 PM
Author: 180 drunken legal warrant

I'm a one boxer but for the fixed value box out of spite for the hypothetical

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733133)



Reply Favorite

Date: March 10th, 2026 7:54 PM
Author: Up-to-no-good Parlor Hunting Ground

box a only of course. like the movie Tenet, it's completely logical but just seems strange to us in a world lacking closed timelike loops

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733165)



Reply Favorite

Date: March 10th, 2026 8:05 PM
Author: Grizzly Shaky Corner

I'm surprised nobody has pointed this out, but it really doesn't matter what decision you make. The predictor's abilities are "godlike." That should mean it has specific insight into the particular reasoning process of the decisionmaker. The hypo is muddied some by it being "almost always right," but I assume those situations are likely caused by someone choosing contrary to their own reasoning process.

The point though, is that there is no way for the predictor to "almost always" reach the correct decision unless it is able to analyze the specific decisionmaker and replicate their thought process with a high degree of accuracy. Being correct almost always would involve correctly predicting how a wide range of intellects would approach the problem.

So, there is no probabilistic "correct" answer between one box and two boxes under this hypothetical. The actual analysis of the choice itself is a red herring. The nature of the predictive device and how it works is the most important part of the hypo, and my interpretation is that you maximize your outcome by making whatever choice is consistent with your internal reasoning process. Or alternatively, if the predictor is even able to predict that you might disregard your internal reasoning and make the opposite choice, then it's simply impossible to lose the hypo.

Either way, the choice is irrelevant. However, the hypo is somewhat incomplete because it does not describe the circumstances under which the predictor will be inaccurate, which could entirely change the problem.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733205)



Reply Favorite

Date: March 10th, 2026 8:18 PM
Author: Impertinent Poppy Mother

That's exactly what those cited on Wikipedia said about the problem. Basically, it depends. It's a very weak problem.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733259)



Reply Favorite

Date: March 10th, 2026 8:34 PM
Author: Grizzly Shaky Corner

Yeah, see my other poast above. I went back and read the wikipedia page after writing that because I didn't want it to inform my answer. However, I think even the theories on wikipedia that are closest to my logic are overcomplicating the problem too much. It doesn't really matter WHY the predictor is right or what mechanism it uses. All that matters is that it IS right, which tells us it is making a mix of one-box and two-box predictions, which then tells us that there is no one "correct" choice that can be analyzed in probabilistic terms.

OP adds in the term "godlike" to the classic problem, which was the source of my assumption about how the box might work, but it doesn't appear to be part of the classic problem, which is more wide open. What is interesting though is that it seems like even many of the lines of reasoning that correctly focus on the nature of the predictor are still getting caught up trying to prove whether one box or two boxes is "correct."

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733294)



Reply Favorite

Date: March 10th, 2026 8:38 PM
Author: Nubile smoky juggernaut temple

yup, the "correctness" of one's choice depends on the assumptions that one makes about the predictor

that's the whole point of the problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733305)



Reply Favorite

Date: March 10th, 2026 8:43 PM
Author: Impertinent Poppy Mother

Then it's an incredibly stupid problem, far dumber then I initially thought.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733315)



Reply Favorite

Date: March 10th, 2026 8:45 PM
Author: Grizzly Shaky Corner

The only assumption my position really needs is that people won't uniformly make one choice or the other -- the choices will be split, which is impossible to prove from a logical perspective but unassailable from an empirical perspective. But that just supports my contention that they are TTT charlatan hacks for spending a lot of time trying to logically "debate" and "prove" something that I determined wasn't susceptible to that within 5 minutes of looking at it.

-----------------

In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."[1][4] The problem continues to divide philosophers today.[9][10] In a 2020 survey, a modest plurality of professional philosophers chose to take both boxes (39.0% versus 31.2%).

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733320)



Reply Favorite

Date: March 10th, 2026 8:49 PM
Author: Nubile smoky juggernaut temple

not really sure what you are talking about. depending on the assumptions made about the predictor, you absolutely can logically prove that one choice or another is correct

the reason why people are split down the middle about their answer to this problem is that people make different instinctive assumptions about the predictor. it's interesting to explore why this is. that is the significance of the problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733333)



Reply Favorite

Date: March 10th, 2026 8:59 PM
Author: Impertinent Poppy Mother

Right. The problem itself is incomplete and therefore not useful. If the computer is 100% infallible it completely changes the problem. If the computer is right most of the time but has the potential to be wrong even once, that also changes the strategy. If it's right only some of the time, the solution changes yet again.

The reason this problem is so popular is it's intentionally vague. You can claim it's interesting to see how much people assume about a predictor whose accuracy is unknowable, or you could take a much more scientific approach and actually decide on the parameters vs make people guess what they are. If you think you can draw meaningful inferences based on how people make their inferences, based on assumptions about an unknowable variable, then I would say you haven't learned very much at all.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733353)



Reply Favorite

Date: March 10th, 2026 11:17 PM
Author: Grizzly Shaky Corner

The fact that the predictor is right almost every time is incompatible with the assumptions you would need to logically prove one of the choices is correct. Do you understand that regardless of the "correct" (optimal) choice, people will make the "wrong" choice at a greater rate than what is stated to be the predictor's error rate? If you say the predictor is right 65% of the time, it becomes a much more interesting problem. But for it to be right almost every time, it has to be splitting its answers between the two options, which means there is no objectively correct choice. And it doesn't really matter that *philosophers* are split based on their differing assumptions because the hypo doesn't assume that the predictor's sample size is limited to philosophers. The hypo is directed at a universal "player," so one must account for the fact that the predictor will be right even when the player is too retarded to perform a basic expected value calculation or derive any of the various competing assumptions, which would be the vast majority of people. That is a huge constraint on how the problem can be solved.
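the population point can be sketched as a toy simulation (assumptions invented for illustration: a roughly even split of fixed dispositions, and a 2% per-agent error rate for a predictor that models individuals):

```python
import random

rng = random.Random(42)

# a mixed population of fixed dispositions, split roughly evenly
# (as respondents to the real problem empirically are)
agents = [rng.choice(["one-box", "two-box"]) for _ in range(10_000)]

def predict(disposition, error=0.02):
    # models the individual agent; wrong only `error` of the time
    flip = {"one-box": "two-box", "two-box": "one-box"}
    return flip[disposition] if rng.random() < error else disposition

per_agent = sum(predict(a) == a for a in agents) / len(agents)
one_size_fits_all = agents.count("one-box") / len(agents)  # always predict "one-box"

assert per_agent > 0.95                  # near "almost always right"
assert 0.45 < one_size_fits_all < 0.55   # a single "correct" answer caps out near 50%
```

a predictor committed to one universally "correct" choice can't beat the population split; only one that models the individual hits the stated accuracy.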

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733681)



Reply Favorite

Date: March 10th, 2026 11:32 PM
Author: Nubile smoky juggernaut temple

the hypo is not directed at the universal player. it is directed at you and you only

that is definitely an important assumption that must be made as part of the problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733706)



Reply Favorite

Date: March 11th, 2026 12:04 AM
Author: Grizzly Shaky Corner

This looks like a universal player to me, breh

--------------

In the standard version of Newcomb's problem, two boxes are designated A and B. The player is given a choice between taking only box B or taking both boxes A and B. The player knows the following:[4]

Box A is transparent, or open, and always contains a visible $1,000.

Box B is opaque, or closed, and its content has already been set by the predictor:

If the predictor has predicted that the player will take both boxes A and B, then box B contains nothing.

If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.

The player does not know what the predictor predicted or what box B contains while making the choice.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733778)



Reply Favorite

Date: March 10th, 2026 8:26 PM
Author: Spruce boyish son of senegal

but where will i get a million dollars?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733276)



Reply Favorite

Date: March 10th, 2026 8:29 PM
Author: Peach wonderful cruise ship



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733279)



Reply Favorite

Date: March 10th, 2026 8:33 PM
Author: beady-eyed liquid oxygen

lol at op citing some lineage of scholarship when he really just came across this on a veritasium yt slop video that posted yesterday.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733291)



Reply Favorite

Date: March 10th, 2026 8:35 PM
Author: Nubile smoky juggernaut temple

yeah that's what reminded me of it. i thought their video was pretty good too. i first encountered it years ago though. always been a two-boxer, always will be

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733297)



Reply Favorite

Date: March 10th, 2026 10:43 PM
Author: Peach wonderful cruise ship

always been a two-boxer, always will be tp.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733592)



Reply Favorite

Date: March 10th, 2026 11:16 PM
Author: Nubile smoky juggernaut temple

retrocausality is impossible

simple as. i'm a simple goy who doesn't believe in nonsensical "jewish physics" like retrocausality

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733679)



Reply Favorite

Date: March 10th, 2026 11:23 PM
Author: Impertinent Poppy Mother

This is very blackpilled. The Jews have already decided for me. I have no free will. I can't affect the game. I may as well play optimally given that the Jews have already taken everything away from me and I can't change it. So I might as well be greedy and steal. Good luck with that.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733692)



Reply Favorite

Date: March 10th, 2026 11:38 PM
Author: Peach wonderful cruise ship

It's impossible to read any of your poasts with a straight face due to your current username, lmao at this post-Christian "world" we're currently "living" in (not complaining about the username, but not sure if it makes me want to laugh or cry or both... tp)

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733720)



Reply Favorite

Date: March 10th, 2026 11:59 PM
Author: Nubile smoky juggernaut temple

It’s the opposite of this actually, lol

Also in order to one box you have to believe in either retrocausality or a non-causal decision theory, both of which are incredibly Jewish fwiw

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733761)



Reply Favorite

Date: March 11th, 2026 12:52 AM
Author: Impertinent Poppy Mother

I have some breaking news for you. All Calvinist traditions that preached predestination died out 3-400 years ago because goyim found it too sad to think that loved ones could believe in Jesus and still go to Hell because they weren't randomly selected by God for salvation before time began. From these post-Calvinist traditions ultimately flowed all forms of liberal Christianity, preaching tolerance and acceptance, universalism, and unironically became the areas of the European world most tolerant towards Jews, such as the Dutch in Amsterdam, who allowed Jews to live side by side with the goyim. The link between predestination and Jewry is inextricable. Wherever there were Europeans who believed in it, eventually it inevitably collapsed as a theological concept, and their descendants became Jewish maximalists. So you should reflect on this whenever you scorn someone for wishful thinking or retrocausality. The second you start to believe God makes no mistakes, you and your descendants are destined to stan for Heebs. Tread very lightly.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733856)



Reply Favorite

Date: March 10th, 2026 11:42 PM
Author: Peach wonderful cruise ship

>i'm a simple goy

simple-hearted yet not even in the slightest simple-minded (even your [cyber]detractors can't deny your formidable mental horsepower, IIRC, but then again Talmudic rhetoric is 180 so who knows), or:

Wise as a serpent yet innocent as a dove tp :-)

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733732)



Reply Favorite

Date: March 11th, 2026 11:26 AM
Author: saffron histrionic coldplay fan

quantum physics has a ton of stuff that makes no sense but is nevertheless empirically correct, like superposition.

retrocausality isn't necessary unless you abstract it to multiple timelines or something. you can only make the decision once, and the predictor knew what choice you'd make, so the money was already there or not. your decision didn't cause things to change in the past.

let's say every morning you decide between coffee and tea, and you always choose coffee when it's raining, but you aren't aware you have this tell. the next rainy day i boasted i could predict your choice, wrote coffee on a piece of paper face down, and let you choose. you still have freedom of choice, you just don't know that i know what you're going to do. even though you *could* have chosen tea, i know you won't, because i'm aware of a habit you don't even recognize about yourself. and you can only make the decision *once*. the trap is thinking "but I could have chosen tea!" yes, you *could* have, you do have free will, but there's one timeline, one discrete binary event that only occurs once.
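the coffee tell can be sketched as a simulation with no retrocausality anywhere: the prediction is committed before the choice, using only knowledge of the habit (the tell itself is an invented example):

```python
import random

def choose_drink(raining, rng):
    # the agent's real decision procedure contains a tell they
    # never notice: rain always means coffee
    if raining:
        return "coffee"
    return rng.choice(["coffee", "tea"])

def predict(raining):
    # written face-down *before* the agent chooses; just pattern
    # knowledge, no backwards causation (dry days are a guess)
    return "coffee" if raining else "tea"

rng = random.Random(0)
# every rainy morning the slip is right, even though the agent
# "could have" chosen tea
assert all(predict(True) == choose_drink(True, rng) for _ in range(100))
```

nothing in `predict` depends on the choice it predicts; the correlation runs forward, from the shared habit to both the slip and the cup.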

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734496)



Reply Favorite

Date: March 10th, 2026 8:42 PM
Author: Impertinent Poppy Mother

There's no punishment for the computer being wrong. It really matters whether it's infallible or fallible. If it's infallible you should take the mystery box because then it would have correctly predicted that you would and then left the million dollars. If you take both and the computer is right then you only get 1000. So it comes down to whether the computer can be wrong. If it's always right then there's no point in ever taking two boxes. The computer can't always be right, you take two boxes, and you get the million. No one points this out.

If the computer has the potential to be wrong then it is a capricious god and you should take two boxes every time.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733314)



Reply Favorite

Date: March 10th, 2026 11:46 PM
Author: Peach wonderful cruise ship

>If the computer has the potential to be wrong

This intriguing thematic premise must have been a classic SciFi movie or TV episode or anime and/or Japanese video game or Autoadmit performance art or something at some point, right? Urgently in need of thought provoking cultural curation, Thank.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49733740)



Reply Favorite

Date: March 11th, 2026 10:41 AM
Author: Peach wonderful cruise ship

Instant classic thread btw

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734323)



Reply Favorite

Date: March 11th, 2026 10:56 AM
Author: saffron histrionic coldplay fan

it's interesting how many people fight the hypo in true "but i did have breakfast this morning" fashion. i think it reveals how attached people are to the belief that they are an independent self with free will -- they have trouble even engaging with a hypothetical that rubs up against it.

and before anyone says "yeah but it's impossible!", why don't they have trouble engaging with other impossible hypos like "what if you were immortal" or "what if you could turn into a ghost"? especially in 2026, when algorithms can already predict things about a person very accurately.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734405)



Reply Favorite

Date: March 11th, 2026 11:18 AM
Author: Impertinent Poppy Mother

If the computer is infallible, then free will doesn't exist, and anyone who takes one box always gets a million, while two-boxers always get a thousand. In fact, the only way a one-boxer could get zero or a two-boxer could get a million is if the computer is wrong. The higher the computer's odds of being right, the more you should play into predestination. Of course the two-boxers say you have no effect on how much money is in the mystery box and you should always maximize how much money you can get. But they're now dependent on the computer making a mistake in its prediction to get the million. None of this is real, though; I think that's as far as the thought experiment can go. It always circles back to how much information the player has about the predictor and what it can be believed to know. Even if you deeply hate the thought of retrocausality, if the computer is never wrong you should always take one box.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734482)



Reply Favorite

Date: March 11th, 2026 11:29 AM
Author: Bat-shit-crazy Slimy Community Account Stag Film

it already predicted, so why wouldn't you take both? i don't get why this is even a question

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734501)



Reply Favorite

Date: March 11th, 2026 11:31 AM
Author: Impertinent Poppy Mother

Is the computer always right or does it have the capacity to be wrong? If you knew it was always right would it change your opinion?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734504)



Reply Favorite

Date: March 11th, 2026 11:36 AM
Author: Nubile smoky juggernaut temple

My man

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734514)



Reply Favorite

Date: March 11th, 2026 11:43 AM
Author: Impertinent Poppy Mother

Can you answer the question of whether or not the predictor is infallible?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734522)



Reply Favorite

Date: March 11th, 2026 11:47 AM
Author: Brilliant senate doctorate

it doesn't matter. pick box A and move on, my dude

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734528)



Reply Favorite

Date: March 11th, 2026 11:48 AM
Author: Nubile smoky juggernaut temple

The predictor must have a chance to be wrong in order for the hypo to be meaningful. If it is always correct no matter what, then you always choose only box A. The hypo would be moot

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734532)



Reply Favorite

Date: March 11th, 2026 11:55 AM
Author: saffron histrionic coldplay fan

earlier you said you were a twoboxmo. why does the predictor being 99.99999999999999999% accurate cause you to be a twoboxmo, but if it were 100%, A is the obvious choice for you?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734546)



Reply Favorite

Date: March 11th, 2026 12:03 PM
Author: Nubile smoky juggernaut temple

Because if the predictor physically cannot be wrong then if you take box A only it will *always* contain a million dollars, no matter what, by definition. So you always take only box A

This is why the predictor has to have a chance to be wrong for the hypo to be a hypo worth considering

I always take two boxes because retrocausality is not possible. One's decisions cannot change the past. If you take both boxes and there is no million dollars in box A, all that means is that the million dollars was never an option for you to begin with. It's not due to any decision you made

There is a branch of decision theory that makes a pretty good argument for one boxing. Functional decision theory. It's very Jewish but it does make sense in the context of this particular problem. But if you try to apply it to all decisions it starts not making sense. I personally just don't buy it

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734557)



Reply Favorite

Date: March 11th, 2026 12:10 PM
Author: Impertinent Poppy Mother

Right, if the computer is wrong, then two-boxing will always be random decision + 1000. In this case you are betting that the computer fucked up. As the computer's predictive powers approach infallibility, the dumber it is to two-box. The more information you have about the computer, the more informed a choice you can make. If the computer is wrong all the time, then there's no reason not to two-box. The hypo says it's very likely right, so I err on the side of the infallible computer, hence I one-box.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734568)



Reply Favorite

Date: March 11th, 2026 12:18 PM
Author: Nubile smoky juggernaut temple

It doesn’t matter exactly how accurate the predictor is. Taking two boxes always gives you $1000 more than taking one box, regardless of the odds of his prediction being correct

You’re not “risking” anything or “betting that the predictor is wrong,” because your decision cannot change the past decision that the predictor already made. You’re just taking a free $1000 more than you’d otherwise get in all situations
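The dominance argument in this post can be sketched as a minimal check (assuming the standard payoffs, with the opaque box A's contents fixed before you choose):

```python
# Box A's contents are already fixed by the time you choose; the
# transparent box holds $1k. Whatever the predictor already did,
# two-boxing pays exactly $1,000 more than one-boxing.
for box_a in (0, 1_000_000):          # box A is empty, or holds the million
    one_box_payout = box_a            # take only box A
    two_box_payout = box_a + 1_000    # take box A plus the $1k box
    assert two_box_payout == one_box_payout + 1_000
```

This is the causal-decision-theory reading: holding the past fixed, two-boxing dominates in both states of the world.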

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734578)



Reply Favorite

Date: March 11th, 2026 12:20 PM
Author: Impertinent Poppy Mother

Unless the computer is always right. Then free will doesn't exist. Then you always one-box and retrocausality is real. The computer must be fallible for two-boxing to make sense, and the more often it's wrong, the better that option gets. If you learn more about the computer's historical accuracy, you can better estimate the possibility that free will does not exist and hence, retrocausality. You must accept this as fact, two-boxer.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734583)



Reply Favorite

Date: March 11th, 2026 12:38 PM
Author: saffron histrionic coldplay fan

you're fighting your own hypo!

you've decided that if it works, that implies something impossible, so you're going with the decision you think is best in reality as you understand it, which is different from the hypothetical one.

and the 99.9999% vs 100% distinction is insignificant anyway from a mathematical decision-making POV. take the predictor out of the equation -- what if i offered you the choice between $1m if an RNG doesn't generate 1 number out of a trillion, or $1m if the RNG hits + $1k? the EV of option 1 is $1m minus a tiny fraction of a cent; the EV of option 2 is $1k plus that fraction.
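Plugging numbers into the RNG analogy (a sketch using the 1-in-a-trillion figure stated in the post):

```python
p_hit = 1e-12  # chance the RNG lands on the one number out of a trillion

# Option 1: $1M unless the RNG hits that one number
ev_option_1 = 1_000_000 * (1 - p_hit)

# Option 2: a sure $1k, plus $1M only if the RNG hits
ev_option_2 = 1_000 + 1_000_000 * p_hit

# Option 1 is worth ~$1M minus a tiny fraction of a cent;
# option 2 is worth ~$1k plus that same fraction.
assert ev_option_1 > ev_option_2
```

The gap is so lopsided that shaving the predictor's accuracy from 100% down to 99.9999...% doesn't change which option the EV calculation favors.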

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734627)



Reply Favorite

Date: March 11th, 2026 12:49 PM
Author: Nubile smoky juggernaut temple

This is a really poorly written post but I think that you are making the same mistake hatp is making. I can only respond the same way as I did to him. The predictor's odds of being correct are irrelevant as long as they're under 100%. By choosing two boxes you will always get $1000 more than you would otherwise get (because your decision cannot change the past decision that the predictor already made)

The hypo is moot if the predictor is necessarily always correct no matter what. The hypo then becomes: "If you take only Box A, it will necessarily contain one million dollars." It's moot

The hypo you wrote out doesn't have anything to do with this problem because it is missing the meaningful part of the problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734652)



Reply Favorite

Date: March 11th, 2026 1:09 PM
Author: saffron histrionic coldplay fan

i'm gonna do the same thing i do with hatp when he starts deflecting with insults and put this into grok.

you're thinking about my question in terms of the actual purpose of the hypo. i'm asking you why your decision would change if it was 99.9999% versus 100%, which has nothing to do with that. that's why i'm framing it with a probability for an event occurring after your decision rather than one that happens before.

if you accept that the predictor can predict with 99.999% accuracy, then the odds of it being wrong are the same as the odds of the RNG hitting afterward. mathematically, choosing the 99.999% outcome to win a million is best.

if not, then explain why your decision for one 99.99% event is different from the other. as far as i can tell, the only difference is that you don't believe the predictor is possible, i.e. you're fighting your own hypo.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734713)



Reply Favorite

Date: March 11th, 2026 1:19 PM
Author: Nubile smoky juggernaut temple

I'm not trying to insult you. These are genuinely poorly written posts that don't clearly communicate what you want to communicate

You are smart and understand the hypo so I know that you're capable of clearly communicating your ideas about it. You're doing the same thing that hatp does and being lazy and just writing shit out stream of consciousness style instead of spending a few more seconds editing your posts as needed to make sure you're communicating clearly

I already answered what (I think) you're asking. You actually should put your questions to grok or another LLM. They are very good at explaining these sorts of problems that have a lot of written human analysis about them. It will also explain under which assumptions and decision theories it makes sense to one box

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734737)



Reply Favorite

Date: March 11th, 2026 1:44 PM
Author: saffron histrionic coldplay fan

if the way i'm communicating is imprecise and you genuinely don't understand, you should ask a clarifying question if you're engaged. instead you're doubling down on my poast being too shitty to understand.

i will try to explain again, and if it's still unclear i'll copy and paste this into grok when i get home. i'm also gonna use 99% but i mean 1 in a trillion; i just don't want to clutter this with repeating 9s:

if you accept the AI is 99% accurate, it is the same as the RNG for the purpose of the hypo. the timing doesn't matter either; the RNG event could have happened before your decision with the result unknown to you, and the decision calculus is the same.

so, without getting into the mechanics of the event/prediction (what you think it implies about reality, whether you think it's possible, causality, etc.), why does 99% versus 100% impact your decision? because any dispute over the mechanics of *how* it predicts at 99% is fighting your own hypo. you must accept that it is 99%, which is exactly the same as my RNG analogy.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734811)



Reply Favorite

Date: March 11th, 2026 1:55 PM
Author: Nubile smoky juggernaut temple

The whole point of the hypo is the mechanics of causality part. Yeah, if you remove that part from the hypo and make it a straightforward EV calculation, you one box. It's a totally different hypo. It's not relevant to Newcomb's Paradox

If you treat Newcomb's Paradox as strictly an EV calculation, like you are, then one boxing is correct. And some decision theories do this. Just ask an LLM. Also ask it about the Parfit’s Hitchhiker hypo. I think it is actually a more interesting version of this problem and it's definitely stronger evidence against causal decision theory

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734836)



Reply Favorite

Date: March 11th, 2026 2:07 PM
Author: Impertinent Poppy Mother

Humans having no effect on the outcome and the computer being infallible are mutually exclusive, because your choice isn't prescripted. For the computer to be right all the time, it would have to decide what to give you post hoc. Therefore the exercise itself is moot. If the computer has any potential to be wrong, then you always two-box, because the computer's prediction contains elements of randomness and has low predictive power. The computer's prediction can't be 99% either, which is effectively the same as 100%. It's either wrong enough of the time for two-boxing to make sense, or it approaches 100% asymptotically, which triggers retrocausality. If you can't grasp this you're not very bright. The closer to 100% the computer gets, the more power humans have to change the past. QED

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734866)



Reply Favorite

Date: March 11th, 2026 2:07 PM
Author: saffron histrionic coldplay fan

you're dodging the question.

i'm asking why your decision is different if the event has a 99% chance of occurring versus 100%. i understand the purpose of the hypo and how being certain or near certain impacts it. that doesn't resolve why your decision changes, other than for the reason you've stated, which is fighting your own hypo (retrocausality doesn't exist, therefore the decision is made, therefore it's empty/full, therefore take the $1k. see? i get it.)

your decision shouldn't change if you're not fighting the hypo. you could just say "an event has a 99% chance of occurring." if you accept that, then your decision shouldn't change. if you think that this *specific* event *must* require retrocausality, then under the hypo it does. so if you say "but retrocausality doesn't exist," that is the same as "but i did have breakfast this morning."

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734867)



Reply Favorite

Date: March 11th, 2026 1:19 PM
Author: Impertinent Poppy Mother

Right, he doesn't believe it's possible for the computer to always be right. He is so against the concept of retrocausality that he cannot fathom the possibility of the computer never being wrong. To be fair, the hypo itself is contradictory. It says the computer makes a prediction ahead of time and you can't change it. In that limited scope, two-boxing makes sense. Conversely, the hypo says the computer is almost always right. If the computer knows what you're going to do before you do it, then two-boxers get hosed and one-boxers are vindicated by their virtue.

GS keeps saying you CAN'T affect the decision. This is true. But computers can't predict human behavior perfectly either. So there are a lot of 'can't's in this one.

The whole exercise is a Rorschach test with no correct answer, and you can see it however you want to see it. But an infallible or near-infallible computer plus humans making choices with free will are incompatible. It's no different than saying, OK, God has chosen you for salvation, do you sin or never sin? Goy Superstar is saying that he's gonna sin either way because his salvation is predestined. This is actually what the Calvinists decided about this problem too. They decided that if you were in the club, you could fuck up and do bad things but you were still in the club by virtue of being chosen. They called it Perseverance of the Saints. If you're already good in God's book, you can do no wrong. That's how he's playing the game. Sin, because it's all worked out for you (or not). Pretty cynical behavior imo

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734739)



Reply Favorite

Date: March 11th, 2026 1:11 PM
Author: Impertinent Poppy Mother

It's actually very similar to the Monty Hall problem, where your odds go up if you change your initial guess. The more knowledge you have of the computer's odds of being right, the smarter it becomes to one-box. The distinction between total infallibility and 99.999999999% is moot; it's roughly the same. The smartest choice is to bet that the computer knew you would take one box, and for it to be right. Conversely, two-boxers are betting on the one-in-a-billion chance that the computer is wrong and made an error in judgment, even when the odds are infinitesimal. You view it as moot whether the computer is right or wrong because it has already made its decision. But just like Monty Hall, you can change your guess. You can help the computer be right. I know you don't believe this, and you consider whatever the computer decided to be beyond your control. That's why the problem is kind of stupid: there is no right answer, just different ways of looking at it. Still, in any real-world application in which the predictor is infallible or mathematically close to infallible, I would always play into retrocausality. Even though the hypo isn't real and can never be tested. It's a faulty premise.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734719)



Reply Favorite

Date: March 11th, 2026 11:43 AM
Author: Brilliant senate doctorate

The hypo is dumb. Just pick box A and move on.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734521)



Reply Favorite

Date: March 11th, 2026 2:02 PM
Author: beady-eyed liquid oxygen

op is starting to give off spaceporn vibes.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2#49734848)