Talk:Monty Hall problem/Archive 3: Difference between revisions

Somebody please help - I think I understand the correct answer, but my mind is still struggling to get around the following:
Imagine contestant A chooses door 1. Monty Hall then opens door 3 to reveal a goat. At this point you introduce contestant B. Contestant B has no prior knowledge of the game. He is told he has been "allocated" door 1; he does not know why door 3 is open. He is effectively in the same position as contestant A, but he does not know that the game is fixed. This time both contestants A and B are offered the choice to switch or stick. Surely the percentage chance for contestant B is 50/50. If so, how can the same two doors have different probabilities of a prize at the same time for two people standing in front of them? If contestant B does not have 50/50, why not?
: Yes, the chances for B are 50/50; B is playing a different game. For A, there were initially three choices, all equally likely. For B, he sees one open door, with a goat in it. Note also how one of the rules of the game comes into play here: there is always ONE car and TWO goats. But for B, since door 3 is not available as a choice, there are only two choices, one of which has a car, the other a goat. The fact that B has been "allocated" the first door doesn't matter. Summary: A has three choices initially, all equally likely to have the car. B has two choices, both equally likely to have the car. After the host action, A has two choices, NOT equally likely (stay with door 1, or swap to door 2). B arrives after the host action, so there is no before and after for him. Look at it another way: the game is vastly different for B because the whole 1/3 chance that door three might have the car has been taken away, and distributed randomly into the two remaining doors. Lucky B! --Mike Van Emmerik 21:52, 6 February 2006 (UTC)

Revision as of 21:52, 6 February 2006

Monty Hall problem/Archive 3 received a peer review by Misplaced Pages editors, which is now archived. It may contain ideas you can use to improve this article.

I've moved the existing talk page to Talk:Monty Hall problem/Archive2, so the edit history is now with the archive page. I've copied back a few recent threads. Older discussions are in Talk:Monty Hall problem/Archive1. Hope this helps, Wile E. Heresiarch 15:28, 28 July 2005 (UTC)

Analysis of the errors in the intuitive answer

(as opposed to the correctness of the mathematical answer)

It is not enough to describe why the mathematically derived solution is correct. To resolve the paradox to the satisfaction of all, one must also describe why the intuitive solution is wrong. I think this requires three steps.

1) an understanding that the "intuitive solution" is just a dismissive term for a first analysis that turned out to have a flaw

2) a description of the logical steps that were employed in coming to the intuitive solution

3) an analysis of that logic, to find its flaw(s)

(1) I will not argue the point, but rather just hope that people agree with it.

(2) I think that the intuitive solution takes the following steps:

  • transform the game into a simpler yet completely isomorphic game
  • calculate (or intuit) the probabilities for that game
  • extrapolate the answer back to the original Monty Hall game

Here is the game that I believe is used intuitively (I’ll call it the Silly game):

  • There are three doors, A, B, and C.
  • Door A is open and has a goat behind it.
  • Doors B and C are closed, and there is a goat behind one and a car behind the other.
  • The contestant just guessed that door B has the car behind it, and is now being given a chance to change his mind.

Should he change it or not?

Clearly the answer to this is that the chance is 50% either way, and it does not matter whether the contestant changes his mind.

Now compare the Silly game to an original Monty Hall game at a point half way through, in which the contestant has guessed door B and Monty has opened door A.

  • There are three doors, A, B, and C
  • Door A is open and has a goat behind it.
  • Doors B and C are closed and there is a goat behind one and a car behind the other.
  • The contestant just guessed that door B has the car behind it, and is now being given a chance to change his mind

It seems that the Silly game is exactly the same as the original Monty Hall game at this point. To a viewer who has just ‘tuned in’ and does not know what has previously happened, the games look identical. Therefore, logic would dictate that the answer to the original question is that it doesn't matter whether the contestant changes their mind; the probability is 50% either way.

(3) It turns out, of course, that the two games are not completely isomorphic. There is a crucial piece of information missing from the Silly game that is needed to make it isomorphic with the Monty Hall game, as follows: Monty (who knows where the car is)

  • was required to open a door to reveal a goat,
  • was given an opportunity to open door C,
  • chose not to open door C.

Monty has thus said something concrete about door C (“I didn’t choose it, perhaps because I couldn’t choose it”), but nothing about door B. This is the source of the asymmetry between doors B and C, and the reason that door C is more likely to not have a goat behind it. Happyharris 20:57, 25 July 2005 (UTC)
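A short simulation (my sketch, not part of Happyharris's post; names are illustrative) confirms the asymmetry described above: conditioning on Monty opening door A after the contestant guessed door B, switching to door C wins about two thirds of the time, not half.

```python
import random

def monty_trial(rng):
    """One round: contestant picks door B (index 1); Monty, who knows where
    the car is, opens a goat door that is neither the pick nor the car."""
    car = rng.randrange(3)  # doors A=0, B=1, C=2
    pick = 1                # contestant guesses door B
    options = [d for d in range(3) if d != pick and d != car]
    opened = rng.choice(options)
    return car, opened

rng = random.Random(0)
stay = switch = 0
for _ in range(100_000):
    car, opened = monty_trial(rng)
    if opened != 0:
        continue            # keep only rounds where Monty opened door A
    if car == 1:
        stay += 1           # staying with B wins
    else:
        switch += 1         # switching to C wins

print(round(switch / (stay + switch), 2))  # ≈ 0.67, not 0.5
```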

Perhaps, what we need in the main project page is an explanation to "why the intuitive answer is NOT the Monty Hall problem". Something like: Extrapolating the probabilities of an isomorphic game back to the Monty Hall game is the cause of much of the controversy of the game, since answering the probabilities of the game with randomized strategy (1/2) is NOT answering the probabilities of each strategy of switching (2/3) and not switching (1/3). aCute 08:50, 27 July 2005 (UTC)
I believe I've given a start at the top of the Aids to understanding section. Most people I've seen argue the incorrect position assume you can forget past events and look at it as a fifty-fifty chance (as they can with, say, coin flipping). The more-tenacious ones cannot be persuaded from their little optimization, when you ignore that they are using it. I plant the seed of doubt by showing that their premise fails in a case, card counting, in which they will almost undoubtedly acknowledge its failure.
Their premises always trump yours in their reasoning. If you don't try to correct improper premises, no argument you make will matter. This topic inspires debates to no end on Usenet because people ignore that. They end up, effectively, arguing over definitions, which is the prime example of a useless and stupid debate.
Logically, every single one of the article's diagrams can be redrawn and all the article's explanations can be rewritten while simply ignoring the past. Logically, any isomorphism you use can be discounted as an incorrect choice because it contradicts their assumptions, so you must be using hocus pocus. — 131.230.133.185 04:52, 10 August 2005 (UTC)

Breaking it down into steps

For the people who still find it hard to see why the probability is not 50/50 when there are two doors left, I hope the following illustration may help. I'm going to show that the Monty Hall problem is a specific case of a more general game; I'll call all the games that differ in their parameters "Hall games" for ease of use.

The basic idea behind a Hall game is this: We start with one large set of secret-hiding items -- they can be doors to be opened, or cards to be turned over, it doesn't actually matter. What matters is that there are n cards, but only one of them is the Prize card. The rest are Null cards.

Step One: The cards are divided into two hands. Each hand must have at least one card. For simplicity's sake, we call the number of cards in the first and second hands h1 and h2.

Step Two: Someone who can see which cards are Nulls can discard some number of Nulls from one hand, the other, or both. We call the number of cards remaining in each hand after the discarding of Nulls r1 and r2 (like h1 and h2, they must be at least one.)

Step Three: The player makes a guess at which of the two hands contains the Prize.

Step Four: The player makes a guess at which card out of the hand he selected is the Prize.

It's clear that to win the game, the player has to make correct guesses in both Step Three and Step Four. What are the chances of picking the correct hand? The first hand is correct h1/n of the time; the second hand is correct h2/n of the time. If we want, we could enumerate the cases: if h1=2 and h2=5, then the Prize could be the first card of the first hand, the second card of the first hand, the first card of the second hand ... et cetera, et cetera.

Now this is the part that many people find counter-intuitive. If we enumerate the cases, and then we reduce Nulls to meet any legal value of r1 and r2, we find that in no case can the removal of Nulls switch the Prize from one hand to the other. This means that the chance of the card being in the first hand or the second always stays at h1/n and h2/n -- even if the sizes of the hands do not stay at h1 and h2. If it's dealt to that hand, it stays in that hand; therefore the chance of it being in a particular hand is always equal to the chance that it was dealt to that hand.

What about the removal of Nulls? Does it affect anything after all? Yes, it does -- it affects the player's chances in Step Four. The chance that the Prize is in a particular hand is dependent upon h1 and h2 -- how many cards each hand started with. But the chance of finding the Prize in a hand (assuming it's the right hand) is based on how many cards that hand contains after Nulls have been removed -- since there's only one Prize, the chance of finding it in a hand of r1 cards is 1/r1.

Now, what are the chances of making both guesses correctly? If no Nulls get removed from either hand, then the chances of picking the Prize are either h1/n x 1/h1 (since r1 is equal to h1 when no Nulls have been removed) or by similar logic h2/n x 1/h2, which also multiplies out to 1/n. If, however, one of the hands -- say, hand 1 -- has been reduced down to one card (r1 = 1), then the chances of finding the Prize in that hand is h1/n x 1/1 -- if you've correctly guessed that the Prize is in that hand, you have a 100% chance of finding it in that hand when it's the only card in the hand.

With this being the general structure of the Hall game, we can see that the Monty Hall problem is really just the case where n=3, h1=1 and h2=2, r1=1 and r2=1. The chance that the Prize is in the player's hand is h1/n -- 1/3. The chance that it's in Monty's hand is 2/3. Before one Null is removed from Monty's hand, the chance of finding the Prize in his hand is 2/3 x 1/2 -- i.e., 1/3. But when the Null is removed, the chance is now 2/3 x 1/1 -- i.e., 2/3! -- Antaeus Feldspar 03:44, 26 July 2005 (UTC)
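Feldspar's h1/n x 1/r1 formula can be written out directly (a sketch in exact fractions; the function name is mine, not from the thread):

```python
from fractions import Fraction

def hall_game_win_prob(n, h, r):
    """Chance of winning by guessing a given hand and then a uniformly random
    card in it: (chance the Prize was dealt to that hand) x (1 / cards left),
    per the h1/n x 1/r1 argument above. Works for either hand by symmetry."""
    return Fraction(h, n) * Fraction(1, r)

# Monty Hall: n=3; the player's hand has h1=1 card, Monty's has h2=2,
# and both are reduced to a single card (r1 = r2 = 1).
stay = hall_game_win_prob(3, 1, 1)    # player's own hand: 1/3
switch = hall_game_win_prob(3, 2, 1)  # Monty's hand after the Null is removed: 2/3
print(stay, switch)                   # prints: 1/3 2/3
```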

three prisoners problem

kudos to those contributors to this article. i have been thinking about this for days (despite the fact that i "got it" ... after about 20 minutes or so). i find the "two sets" explanation the most clear, though i'm sure some people will love the bayes' theorem explanation. i also found the talk page very entertaining. some contributors are clearly manifesting what kahneman and tversky (writers on cognitive biases - the former won a nobel prize) refer to as "belief perseverance." geez, sit down with a friend and three cards for 10 minutes and you'd realize you're wrong. anyway... i thought you might find this page interesting:

http://econwpa.wustl.edu:8089/eps/exp/papers/9906/9906001.html

besides elaborating on many of the key assumptions (often unstated) underlying the MHP, the paper presents an interesting (and supposedly earlier) problem of the same form, the "three prisoners problem." here's the upshot...

There are three prisoners, you and Prisoners A and B. Two of you are to be executed, while one will be pardoned. The prison warden knows who will be executed and who will be pardoned (like monty must know where the goats and car are). According to policy, the warden is NOT allowed to tell any prisoner if he/she is to be pardoned. You point out to the warden that if he tells you if A or B will be executed, he will not be violating any rule. The warden says A is to be executed. What is the chance that you will be pardoned?

the answer presented, like that to the MHP, is that your chances of being pardoned remain at 1/3, while the chances of B being pardoned, GIVEN WHAT THE WARDEN HAS TOLD YOU, are 2/3.
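A quick simulation of the three prisoners problem (my sketch; it assumes the warden picks at random between A and B when both are to be executed, i.e. when you are the one pardoned) confirms the stated answer:

```python
import random

rng = random.Random(1)
says_a = you_pardoned = b_pardoned = 0
for _ in range(100_000):
    pardoned = rng.choice(["you", "A", "B"])
    # The warden names one of A/B who will be executed; he never names you
    # and never names the pardoned prisoner.
    if pardoned == "A":
        named = "B"
    elif pardoned == "B":
        named = "A"
    else:
        named = rng.choice(["A", "B"])  # both doomed; coin flip
    if named == "A":
        says_a += 1
        you_pardoned += pardoned == "you"
        b_pardoned += pardoned == "B"

# Conditioned on the warden saying "A is to be executed":
print(round(you_pardoned / says_a, 2), round(b_pardoned / says_a, 2))  # ≈ 0.33 0.67
```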

I found looking at this problem reinforced how *$&%^@# hard it is to get one's head around the MHP, because despite having read about the latter for days, i still had to think about the three prisoners problem for several minutes to get my head around IT, despite knowing that it was essentially the same as the MHP. K-razy.

someone mentioned the idea of blocking edits to featured articles for some period of time. while apparently there is a policy against such things, i agree with the suggestion, at least for articles that are sources of dispute. the featuring of an article doubtless attracts many potential editors, some of whom may simply not know enough about the subject to edit appropriately. for example, the first time i read the monty hall page (when it was featured), someone had altered the solution section to express the misguided (given certain assumptions, which too often are unstated) 50/50 approach. needless to say, i was confused by the fact that all other sections of the article contradicted the solution.

good job. Contributed by 24.89.202.141.

The Mueser and Granberg paper is already one of the references, and the three prisoners problem is the Gardner problem referred to in both the "The problem" section and the "Origins" section (although Gardner's version is not described). I suspect the point of allowing featured articles to be edited is to reinforce the notion that wikipedia is really, no kidding, editable by anyone. I find this to be a very principled, and quite admirable, stance even though it does require some effort (and I suspect protecting the main page was only done with the greatest reluctance). I cannot take credit for much more than nominating the article as a WP:FAC, but thanks for the compliment. -- Rick Block (talk) 01:01, July 27, 2005 (UTC)

Game Theory

I copied this back in from the archive:

I would argue it's a problem in probability. Rich Farmbrough 12:44, 23 July 2005 (UTC)
I have switched the statement back because it is certainly a problem of probability. Apparently MathWorld categorizes the problem under game theory, but in my opinion the connection is tenuous. In the standard statement of the problem there are no "conflicting interests" because the host is not an active player in the sense that he cannot make choices that affect the outcome. Because the problem clearly could generalize into a game theoretical topic, I would have no objections to a Wiki categorization as such. In the text, however, this would require some explanation, so in my opinion it isn't appropriate for the first line. Certainly it does not supersede probability. Davilla 16:21, 23 July 2005 (UTC)

Hecatonchires originally changed 'probability' into 'game theory', and then did it again two times. I've changed it back and posted a notice on his/her Talk page, pointing him/her to this discussion. I've invited him/her to start a discussion here if s/he wants it changed to 'game theory' again. Phaunt 10:54, 10 August 2005 (UTC)

I was initially quite dubious about this edit, but became less so when I looked up game theory and realized that it does rather fit the Monty Hall problem after all, at least by the definition in the article. How does a player maximize his chances of walking off with the prize? Of course, this may mean that it's game theory that has to be tweaked to clarify the matter. -- Antaeus Feldspar 23:08, 10 August 2005 (UTC)
Game theory is used to analyze strategic situations, where "strategic" means that there are interacting interests between two players. For instance, Robert Gibbons' book on game theory starts with this sentence: "Game theory is the study of multiperson decision problems." This is echoed (somewhat less clearly) in our article: "A definitive feature of game theory that distinguishes it from decision theory whose main subject is also studying formalized incentive structures is that game theory encompasses decisions that are made in an environment or states of the world in which strategic interaction between various players occurs." (Actually this sentence is really confusing, I will fix it.) The Monty Hall problem is clearly not a multiperson decision problem, since there is only one interested actor, the player. I think probability theory or utility theory would be best descriptions of the problem. --best, kevin ···Kzollman | Talk··· 23:55, August 10, 2005 (UTC)
Better still, decision theory? --best, kevin ···Kzollman | Talk··· 23:59, August 10, 2005 (UTC)

Ahem, don't we have an encyclopedia to write here? It seems that you are arguing about the distance between two points on a beach ... jeesh. Well, if it is really important for one of you to prevail here then fine, go at it. Every once in a while however, look around and notice what you are spending your valuable time doing.  ;-( hydnjo talk 02:15, 11 August 2005 (UTC)

I can't believe this, they're still at it! Jeesh. hydnjo talk 02:31, 16 August 2005 (UTC)

Back and forth, the edits revert again and again. Can we get some consensus on this? It's two words... I don't mind the current compromise of mentioning both terms in the same sentence, but would prefer only mentioning probability. But is there any way we can get people to stop changing it every week or so? Fieari 05:22, August 26, 2005 (UTC)

I, for one, think this qualifies as one of the lamest edit wars ever.
If this is an edit war, it seems to be a rather slow one. Note that the last two reverts were on 10 and 15 August. The 23 August revert had to do with Increasing the number of doors, see below.
Anyway, I don't really have a problem with either formulation, if there's a majority. My problem is just that the 'game theorists' refuse to discuss this here on the talk page, even after having been explicitly invited. That was the reason for the last 8/10 and 8/15 reverts.
I haven't voiced my own opinion yet; I like 'decision theory'. The problem with 'game theory' is that there is only one player. Phaunt 00:44, 27 August 2005 (UTC)

My own understanding is that within the context of game theory, "Monty Hall Problem" refers to some modification of the problem from the one currently given in the introduction to this article. Specifically, some range of behaviors is permitted to the host. In this case, each of the two players chooses a strategy from within their ranges of possible behaviors, and the task is to identify a Nash equilibrium. The Mueser and Granberg paper describes this approach to the question. --Wmarkham 21:05, 5 September 2005 (UTC)

Reverted addition under Increasing the number of doors

I just reverted the additions by User:62.99.223.20 under Increasing the number of doors, because they were copied verbatim from . This page was linked to, but that doesn't change the copyvio. Also, it didn't really belong there (under aids to understanding) but rather under Variants -> n doors, where a shorter discussion of this variant already exists. I invite User:62.99.223.20 to expand on this if he feels it's too short (but without violating copyright, of course).

Congratulations

It's a bit late, but I want to congratulate and thank everyone who worked on this and helped it become a featured article. Good job! Phaunt 11:53, 27 August 2005 (UTC)

"the assumptions explicitly stated below"

The intro refers to "the assumptions explicitly stated below" that do not appear (to me) to exist in the article.

From the introduction to the article:

In this puzzle a player is shown three closed doors; behind one is a car, and behind each of the other two is a goat. The player is allowed to open one door, and will win whatever is behind the door. However, after the player selects a door but before opening it, the game host (who knows what's behind the doors) must open another door, revealing a goat. The host then must offer the player an option to switch to the other closed door. Does switching improve the player's chance of winning the car? With the assumptions explicitly stated below, the answer is yes — switching results in the chances of winning the car improving from 1/3 to 2/3.

The description of the puzzle appears to be quite clear on the constraints on the host's behavior, and I believe that any mention of assumptions is unnecessary here. I happen to be someone who does quibble over the statement of the problem, and I find this one to be quite clear, so this is probably not problematic.

Unfortunately, there are references to "the assumptions" throughout the article. Further unfortunately, my own, possibly nonstandard, position is that the "Monty Hall problem" is really a related family of problems. The ones in Selvin's letter and the Parade article each pose slightly different problems than the one stated above. The stated result (or any clear result) can only be obtained in those cases if additional assumptions are made. In my opinion, understanding the nature of some of the confusion surrounding the Monty Hall problem is made easier if the existence of these (nontrivial, IMO) assumptions is made clearer.

Since the article is already quite good, and my edits could be construed as having an agenda, my hope is that one of the perennial maintainers of the article is willing to adjust it in order to reflect my concerns, in a manner that is true to what the article describes. My guess is that the only changes needed are to move the existing comments about assumptions from the "Anecdotes" section to the introduction of the Parade article, and to eliminate the reference to assumptions from the intro. In fact, after consideration, I think it is safe for me to make the latter change myself. My observation is that these words currently serve no purpose one way or another, and hopefully I am a good representative of the view that the existence of assumptions is important. I will do this shortly.

--Wmarkham 19:44, 5 September 2005 (UTC)

The words referred to the "Mueser and Granberg" constraints in the next section, although I agree the current problem statement in the lead-in is unambiguous without this forward reference. As far as I can tell, the problem itself is generally viewed to be the specific one described in the lead-in and not the family of problems framed by any ambiguous statements of it. The Parade controversy, in particular, was not primarily due to the missing assumptions but the apparent inconsistency between there being two doors to choose but the choice not being 50/50. Marilyn vos Savant has said (in the 1991 NY Times article) that virtually no one complained that the assumptions were not clear and that she thought the constraints on the host others have added as explicit assumptions were implicit in the statement of the problem published in Parade magazine. The constraints on the host are described in "The solution". Perhaps this section and "Anecdotes" could be strengthened with slightly more discussion about the ambiguities in the Parade magazine problem statement, referencing the 1991 NY Times article. -- Rick Block (talk) 21:40, September 5, 2005 (UTC)

Why I think it's wrong...

When you're given the choice to switch or not, it still doesn't matter, because switching or not switching is a binary decision. To give the probability of 1/3 to the scenario where you decide not to switch doesn't make any sense; when the host gives you the option, the scenario and therefore the rules change. You're no longer choosing between three doors, you're choosing between two.

It'd be different though, if you were not given the choice to switch. When one door is revealed to not be winning, you couldn't say "now my chances are 50/50", though from an intuitive standpoint you could say that, because without any ability to act on the events that occur, the probability really doesn't change from the standpoint of the start. (Though if we were weather forecasters we would say the chances were now 50%, this isn't a forecast that continually updates; gamblers want to know what their overall chances are from the start, because that's where they're stuck making their decisions.)

The thing here is when someone decides to base chance on when one assesses the situation or base the chance on when a choice was made. Gamblers will want to base it on when they made their choice, but weather forecasters will want to assess the situation continuously.

If you have read the entire article then you missed the point. It's a scam, "the game host (who knows what's behind the doors) must open another door, revealing a goat". If you believe otherwise let me explain one more time (I'm the scammer and you're the scammee, or mark). To make the example more obvious we are going to start with ten doors. You choose one of them and by default you un-choose nine of them. Would you agree that your chances at this point are one in ten or 10%? OK then, I then slowly and dramatically -ta dahh- open eight doors. It is critical at this point that you understand that I know where the prize is so the eight doors that I (the scammer) open are known to me to be non-winners (goats). So, we're down to you with a 10% door and me with a 90% door. Do you still think it's a 50/50 choice? So, bring it back to three doors and so long as you realize that the host (scammer) knows which doors are non-winners, which he proceeds to open, then you should obviously choose his remaining door whether you start with three or ten or one hundred doors. --hydnjo talk 21:33, 6 November 2005 (UTC)
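The ten-door version of the scam is easy to simulate (a sketch I've added, not part of hydnjo's comment). Since the host only ever opens doors he knows are goats, staying wins exactly when the first pick was right, so the 10%/90% split survives the reveal:

```python
import random

rng = random.Random(2)
n, trials = 10, 100_000
stay_wins = 0
for _ in range(trials):
    car = rng.randrange(n)
    pick = rng.randrange(n)
    # The host opens eight goat doors, leaving the pick and one other door.
    # The car never moves, so "stay" wins iff the first pick was right.
    stay_wins += pick == car

print(round(stay_wins / trials, 2), round(1 - stay_wins / trials, 2))  # ≈ 0.1 0.9
```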
Also note that it doesn't matter if you think it's wrong. It's objectively right. Probability is nothing but a measure of the fraction of attempts that produce a given outcome. Empirical studies have demonstrated that the answer given in this article is right. No amount of argument or logic can change the fact that this article's conclusion matches reality. --P3d0 22:10, 6 November 2005 (UTC)
It does matter to me. I'd like to think that we have explained this in a way that 69.246.138.166 comes away with an understanding rather than a dogmatic. But then... --hydnjo talk 00:36, 7 November 2005 (UTC)
That's a noble goal, but with all due respect, I wasn't talking to you.  :-) --P3d0 14:58, 7 November 2005 (UTC)
I think I see where you're getting confused, so I hope you'll let me try to explain. Let's generalize the problem as given to a new class of problems:
  1. The host presents X doors, one of which has the prize behind it, and the others of which are "misses".
  2. The player divides these doors into two sets, each of which must have at least one door in it.
  3. The host can then adjust either or both sets by removing "miss" doors or by adding new miss doors (the player cannot distinguish just-added doors from the doors that were originally there.) There must still be one door in each set and the host can only add or remove misses, not the door with the prize.
  4. Challenge: The player must correctly guess which of the two sets contains the door with the prize.
  5. Challenge: The player (assuming they picked the correct set) must correctly guess which door in the set contains the prize.
Now, we can quickly confirm that the Monty Hall problem is just a specific case of the general problem. The host presents three doors (step 1); the player divides them into a one-door set and a two-door set (step 2); the host then removes a miss door from the two-door set (step 3).
As for the challenges, let's look at the second (step 5) before the first (step 4). The effect of the host removing a miss door in step 3 is to eliminate any actual "challenge" from the challenge of step 5; you can't make a right choice or a wrong choice if you have no choice to make! This means that the odds for the first challenge, step 4, become the odds for the whole problem.
So what are the odds for step 4? Well, in step 2, the player divided the door into two sets with no idea of which one was the prize door. There's a 1/3 chance that the prize door ended up in the one-door set, and a 2/3 chance that the prize door ended up in the two-door set. Now, 69.246.138.166's challenge to the correctness of the stated solution is that "when the host gives you the option, the scenario and therefore the rules change." My question in response is: "How?" The host can remove a miss door from a set; in our expanded general problem, he can remove multiple miss doors or add multiple miss doors to either set. But nothing he can do can change which set the prize door is in; therefore the odds of which set the prize door can be found in must be exactly the same in step 4 as they were after step 2. If you disagree, reply and spell out exactly how the prize door could change from one set to the other in step 3.
So let's recap. In the official Monty Hall problem, the player divides the doors into two sets; the set with one door has a 1/3 chance of containing the prize door; the set with two doors has a 2/3 chance. The host removes a miss door from the two-door set, and with it he removes the chance that the player could pass the first challenge and fail the second. Even though both sets are now down to one door each, the "two-door set" still has a 2/3 chance of containing the prize door. To pick the correct set is to pick the correct door, so picking the door that was in the two-door set gives you a 2/3 chance of winning. Again, if you disagree, don't just assert that the situation does change; explain how it could have changed. -- Antaeus Feldspar 20:06, 7 November 2005 (UTC)
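Feldspar's two-set argument can also be checked by direct enumeration (a sketch I've added; the door labels match the earlier sections, with the player holding door B). Removing a miss from the two-door set never moves the prize between sets, so switching wins in exactly 2 of the 3 equally likely cases:

```python
# Enumerate the three equally likely prize positions. The player's one-door
# set is {B}; the host removes one miss door from the two-door set {A, C}.
wins_by_switching = 0
for prize in ["A", "B", "C"]:
    one_set, two_set = {"B"}, {"A", "C"}
    # The host may only remove a miss, never the prize door.
    removable = sorted(two_set - {prize})
    two_set.discard(removable[0])
    # The prize is still in whichever set it was dealt to.
    wins_by_switching += prize in two_set

print(wins_by_switching, "/ 3")  # prints: 2 / 3
```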

Easy peasy

Jesus Christ, why are people so thick. It's like this..

  1. Choose a door; you're lucky it's a goat! but you had a 2/3 chance of choosing a goat so the odds were on your side.
  2. Monty reveals the other goat.
  3. You switch; it has to be the car as the other goat is gone.
  4. You win!
  5. If you had chosen not to switch you would have lost.
  6. By not switching you're stuck with that 2/3 chance of getting a goat. Switching meant that you turned it into a 2/3 chance of winning the car.
  7. Easy peasy. End of story. Nighty Night Jooler 02:09, 8 November 2005 (UTC)
But if we let the proles know how simple this is, they might start thinking for themselves, and then where will we be? Bonalaw 14:31, 22 November 2005 (UTC)
OK, how's this for a paraphrase: we do away with the numbers and use the layman's term "chances are." You pick a door. Chances are, it's a goat. Then, Monty opens a door that he knows is a goat door. Now, assuming you picked a goat, and of course Monty showed you a goat, the only door left is the car. You'd be a fool not to switch. Of course, it's much less likely that you'd pick the car at first, in which case you'd lose by switching. Dyfsunctional 17:44, 14 December 2005 (UTC)

I think this reasoning is too simplistic, and would give the wrong answer in some cases. For instance, would this reasoning apply to #paragraph_about_Who_Wants_to_be_a_Millionaire? I think it would lead to the wrong answer. (The right answer is that your odds are 50-50 in that case.) --P3d0 21:31, 18 December 2005 (UTC)

Actual rules for the gameshow

I wonder if somewhere in the article it should be pointed out that under the actual rules for the “Lets Make a Deal” game show that this problem seems to be named after, switching doors didn’t actually increase your chances of winning. On the game show Monty would only offer the chance to switch ½ of the time if the player initially picked incorrectly, but would always offer the choice if the player initially picked correctly. This throws off the normal analysis in which the choice is always offered, since simply being offered the choice to switch increases the chances that your initial door pick was correct. — Preceding unsigned comment added by 128.227.7.193 (talkcontribs)
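If the host really behaved as described (the claim is unsourced, so treat the rules below as hypothetical), the effect on the conditional probabilities is easy to simulate:

```python
import random

def offered_switch_win_rates(n_trials=300_000, seed=5):
    """Hypothetical rules from the comment above: the host always offers a
    switch when the first pick is right, but only half the time when it is
    wrong. Returns (P(win | switch), P(win | stay)) among games with an offer."""
    rng = random.Random(seed)
    offers = stay_wins = switch_wins = 0
    for _ in range(n_trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        correct = (pick == car)
        if correct or rng.random() < 0.5:   # is a switch offered at all?
            offers += 1
            stay_wins += correct            # staying wins iff the pick was right
            switch_wins += (not correct)    # switching wins iff the pick was wrong
    return switch_wins / offers, stay_wins / offers
```

Under these rules both strategies come out around 1/2, because being offered the switch is itself evidence that the initial pick was correct.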

  1. How do you know this to be true?
  2. New stuff goes at the bottom of this page.
  3. As this is your first edit from your IP I'll wait a day or so before moving it down so as to help you find this response. --hydnjo talk 19:46, 17 November 2005 (UTC)

Comment moved from article

  • Note- I'm not the original writer and I may be wrong, but it seems to me like the probability is actually 50/50. It's 2/3 for a person to lose. BUT say you're player 2. If player 1 is eliminated, he had the goat, which means that player 2 either got the car, or it's behind the door that's left. Which means there's an equal chance for him to win or lose by switching... right? -- 24.196.238.213
    • This question relates to the variant where there are two players, one is eliminated, and the question is "should you switch". Assuming you're not eliminated the answer is no. and the probability is 2/3 that you have the car. There are indeed two outcomes left, i.e. you have the car or switching gets it, but they have unequal probabilities in this variant - just like the two outcomes in the original problem have unequal probabilities. The key is the realization that N possible outcomes doesn't mean each one must have a 1/N chance. To make this one more obvious, consider a similar game with 10 doors, 1 car, and 9 contestants. If none of the 9 choose the car, 8 are eliminated randomly. If any of the 9 choose the car the other 8 are eliminated. This game ends up in the same situation, a player and a door to potentially switch to, with the same two possible outcomes with what I hope are clearly not even probabilities. In the 3-door, 2-person case, I assume you agree the unchosen door has a 1/3 chance of having the car at the beginning of the game. Eliminating one of the players doesn't change this, but since the car is either behind the unchosen door (still 1/3 chance) or one of the players has it, when there's only one player left the probability is (1 - 1/3) = 2/3. -- Rick Block (talk) 14:47, 23 November 2005 (UTC)
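The two-player variant described above can also be simulated. The sketch below (the function name is mine) eliminates a goat-holding player, randomly if both hold goats, and counts how often the survivor's door hides the car:

```python
import random

def survivor_stay_rate(n_trials=100_000, seed=2):
    """Two players pick doors 0 and 1; door 2 is unchosen. The player
    holding a goat is eliminated (randomly if both hold goats). Returns
    how often the surviving player's own door hides the car."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        car = rng.randrange(3)
        if car == 2:                     # neither player has the car
            survivor = rng.randrange(2)  # eliminate one of them at random
        else:
            survivor = car               # the car-holder always survives
        wins += (car == survivor)
    return wins / n_trials
```

The rate comes out near 2/3, matching the argument that the surviving player should not switch.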

paragraph about Who Wants to be a Millionaire

I deleted the following paragraph about the Who Wants to be a Millionaire show:

The game show "Who wants to be a millionaire" has the same problem. The player is given 4 possible answers to a question. You can ask the host to remove 2 wrong answers, leaving you with 2 answers, one right and one wrong. Assuming you have no idea which of the 4 is right you can guess one (let's say A), remove two and be left with two. If A is not removed then there is a 1/4 chance that A is right and a 3/4 chance that the other one is right.

On the millionaire show, the contestant does not get to pick an answer and then have two wrong answers removed. Without picking, two are removed and if you then pick randomly there's a 50/50 chance. Even if you mentally (randomly) "pick" and your pick is one of the two remaining answers, the result is a 50/50 chance because your pick is not related to the process by which the other answers are removed. -- Rick Block (talk) 16:28, 16 December 2005 (UTC)

Agreed. No amount of meditation before the removal of two choices will affect the probability of the final outcomes. You need to tell Monty your pick and have that affect his actions. --P3d0 21:29, 18 December 2005 (UTC)

Note that (in the English version at least) the player can tell the presenter which of the 4 answers they think it is before the 50:50 takes two away, but it is still possible that the one they picked could be taken away, leaving two and a true 50:50 chance of picking the correct answer at random. --JP Godfrey 21:10, 23 January 2006 (UTC)
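The 50/50 claim for the Millionaire lifeline can be checked the same way. The sketch below assumes the lifeline removes two wrong answers at random, independent of the player's (possibly mental) pick, and conditions on games where the pick survives:

```python
import random

def surviving_pick_correct_rate(n_trials=200_000, seed=3):
    """Four answers, one correct. The 50:50 keeps the correct answer plus
    one wrong answer chosen at random, ignoring the player's pick. Among
    games where the pick survives, return how often the pick is correct."""
    rng = random.Random(seed)
    kept_correct = kept_total = 0
    for _ in range(n_trials):
        correct = rng.randrange(4)
        pick = rng.randrange(4)
        wrong = [a for a in range(4) if a != correct]
        remaining = {correct, rng.choice(wrong)}  # correct + one random wrong
        if pick in remaining:
            kept_total += 1
            kept_correct += (pick == correct)
    return kept_correct / kept_total
```

Because the removal process never consults the pick, surviving picks are right only about half the time, unlike the Monty Hall game.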

this problem is easier than stated

Maybe I missed it, but it seems no one has brought up the claim that this is a pseudo counter-intuitive paradox. The weirdness of the probability only emerges as a convergence (to that specified probability) over an infinite number of cases. In this particular single case, when you are in a REAL game, switching the door doesn't change YOUR specific-case chance of winning AT ALL. "There are three kinds of lies in this world: lies, damned lies and statistics" The Procrastinator 14:14, 30 December 2005 (UTC)

In a real game that followed the rules set down in the problem, yes, switching the door does change your chance of winning. Picture the following situation: instead of doors, you and Monty have cards, and instead of three cards, you have ten cards, one of which is the Ace. Monty shuffles the cards, lets you pick one, and keeps the other nine in his hand. What are the chances that the card is in Monty's hand? Obviously, nine to one. Now, Monty discards eight non-Ace cards from his hand. Can the Ace possibly change from one hand to the other during this step? Clearly not, so the chance that the Ace is in Monty's hand is still nine to one, even though the actual size of Monty's hand is now one. -- Antaeus Feldspar 15:34, 30 December 2005 (UTC)
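The ten-card version above is especially easy to simulate, because Monty's discards can never move the Ace: his final card is the Ace exactly when the player didn't draw it. A sketch (function name mine):

```python
import random

def ace_in_montys_hand_rate(n_cards=10, n_trials=100_000, seed=4):
    """Player draws one of n_cards (one is the Ace); Monty keeps the rest
    and discards non-Aces until a single card remains. Returns how often
    Monty's last card is the Ace."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        ace = rng.randrange(n_cards)
        player = rng.randrange(n_cards)
        # Discarding non-Aces cannot move the Ace between hands, so
        # Monty's final card is the Ace unless the player drew it.
        hits += (player != ace)
    return hits / n_trials
```

The rate stays near 9/10 even though Monty's hand ends up the same size as the player's.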

Let's revert to the fundamentals of probability

To determine the probability of an outcome you list all the possible outcomes and then count the number of times the outcome you are interested in turns up. Look at the decision tree under the Venn diagrams in the main entry. How many possible outcomes are there under each of the possible contestant choices? How many of these outcomes result in the contestant winning the car?

50% in all cases!

Why?

Because the problem is mis-stated. When the contestant has chosen Goat 1 the quizmaster reveals Goat 2 - he doesn't have a choice. When the contestant has chosen Goat 2 the quizmaster reveals Goat 1 - he doesn't have a choice. When the contestant has chosen the car the quizmaster has to choose whether to reveal Goat 1 or Goat 2. These are independent possibilities and should not be selectively aggregated for the purposes of determining probabilities.

If you dispute this and believe that revealing Goat 1 or Goat 2 are aspects of the same event because the quizmaster only reveals one of them then you don't understand how probability works. The quizmaster also has to make a decision if the contestant picks the car and this decision must be included in the calculation of probabilities, as the decision tree correctly shows. The numbers allocated to the various decisions are meaningless, it is the counts that count.

Wherever you find a paradox, there's a fallacy lurking. (Hodgson's law - you read it here first) 80.47.80.51 01:45, 18 January 2006 (UTC) Graham Hodgson, 17 January 2006

Sorry, nope. They are not independent possibilities. There is a one-in-three chance that the contestant initially picks the car. The host can only choose between revealing Goat 1 or revealing Goat 2 when this one-in-three chance has already happened, and he must choose one of the two; therefore the total probability of these two chances must be exactly one-in-three. No more, no less. If we were to display the probabilities visually on a pie chart, we would see the total size of the user-picks-the-car slice stay exactly the same size as it was divided into the (not-significant) possibilities of "host reveals Goat 1" and "host reveals Goat 2". Are you seriously suggesting that that slice of the pie must actually get bigger because it's being divided into a larger number of slices? -- Antaeus Feldspar 02:18, 18 January 2006 (UTC)
The purpose of this page is to demonstrate that there's a sucker born every minute. The (number of ways to explain the problem logically and correctly) divided by the (number of dissenting opinions) will always be less than one. All you do by trying yet one more logically correct perspective in hopes of persuading only one more true believer is to perpetuate the ranks of non-believers by more than one. Thus the numerator will grow more slowly than the denominator and the ratio will continue to be less than one, (believers ÷ nonbelievers < 1), always. ;-) hydnjo talk 05:03, 18 January 2006 (UTC)
Of course - the chances of making a wrong initial choice are twice as great as the chances of making a correct initial choice, and therefore the chances of improving on the initial choice are also twice as great. I withdraw covered in confusion. 213.78.64.39 12:03, 18 January 2006 (UTC)Graham Hodgson
This has nothing to do with true-believers and non-believers, it's not a philosophical or moral dilemma, it's a simple mathematical problem whose solution is counterintuitive. Some understand it, some don't. Tailpig 19:37, 18 January 2006 (UTC)
I was using the terms believers and non-believers metaphorically for those that do understand and those that don't.  :-) hydnjo talk 19:48, 25 January 2006 (UTC)

Töff's analysis

This is my own plain-language explanation & debunk of "switching increases your odds." (It's essentially the Markov Chain). I hope Tailpig will read it and become a "believer." :)

Well, first off, let me say that I've never really liked that diagram; I know how the problem goes and what the 'trick' of it is and I find it incredibly difficult to see how that diagram relates to it.
With that said, however, your analysis is fundamentally flawed. Let me quote from your analysis: "Let's say you choose Goat1. The host shows Goat2. At that point, you have two paths: switch(Y) or not(N) ... and you have equal 50-50 chances to take either path." (emphasis in original) Well, that's the source of your confusion right there, because that is in no way the original problem. The essence of the problem is that the player chooses his strategy, whether to switch or stay -- he does not have it randomly selected for him with 50-50 probability! It's no wonder that your calculations show the player's chances as 50-50; if the player has one of two strategies randomly assigned to him, that will make his overall chances 50-50 no matter what the probability is for a given strategy.
If you doubt this point, let me illustrate. I will roll six ten-sided dice in a row. If and only if all six of them come up "10" will I put the prize in Box A; otherwise I'll put the prize in Box B. Elementary analysis should confirm that the strategy of picking Box B will pay off 999,999 times out of 1,000,000. If you are allowed to choose your strategy, you can win 999,999 times out of 1,000,000; if, however, I then randomly pick a 'strategy' and therefore a box for you, with equal probability of either, you now win the prize only 1 in 2 times.
Now that you know that the problem you've been dealing with isn't the actual Monty Hall problem, please let us know if you have any problem seeing why the answer of the actual Monty Hall problem is that switching gives you a 2/3 chance of winning. -- Antaeus Feldspar 00:08, 26 January 2006 (UTC)
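The dice-and-boxes illustration above reduces to a short exact computation; this sketch just encodes it with rational arithmetic:

```python
from fractions import Fraction

# Probability that all six ten-sided dice show "10"
p_box_a = Fraction(1, 10) ** 6          # 1/1,000,000: prize goes to Box A
p_box_b = 1 - p_box_a                   # 999,999/1,000,000: prize in Box B

# Choosing your own strategy: always pick Box B
p_win_choose = p_box_b

# A fair coin picks the box for you: the probabilities average out
p_win_random = Fraction(1, 2) * p_box_a + Fraction(1, 2) * p_box_b
```

Randomly assigning the strategy flattens any advantage down to exactly 1/2, which is the point being made about Töff's analysis.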

Monty Hall is a Markov Chain

Well, I may be thick, but I certainly don't get it. After the door has been opened, I am left with two doors. The story so far has been very entertaining, but in fact it has given me no information which will indicate if my first choice was right or not. That means it is now 1:2. How we got to this situation is irrelevant, unless the process of getting there gives me information which is relevant, but I don't see how it does.
The article correctly says that for some statistical calculations the past can be ignored, while for others it cannot. The major fault of the article is that it does not go on to say why in this case the past is relevant. The article cites card counting as an example where the past cannot be ignored - if I know that some cards have already gone (and I know WHICH) then I have information which affects the probability of the next card being an ace, so the past is relevant for future probability. But if we have a sequence of events which are separate events, then we have a Markov chain, and previous events in the chain do not affect the next one: for example a series of coin tosses.
The point about Markov is that there is a difference in perspective before and after any event in the sequence. If the chances of tossing a coin and getting heads is 1:2, obviously the chances of doing it twice is 1:4, but if I toss a coin and get heads, the chances of now doing it a second time are 1:2, because the perspective after the first toss no longer takes the past probability into account. This applies to all sequences of probability, unless there is a CAUSAL link between the past event and the current probability (e.g. the last toss dented the coin so that it now falls differently). The Monty Hall problem looks to me like a Markov chain. If I am wrong, then the article has to show that. It does not address this problem, and I seriously doubt any of you can.
Think about this: I have three cards, and I pick two of them and put them on the table in front of you. I tell you to pick one, and if you pick the higher of the two cards, you win. We all agree that your chances are 1:2. The fact that I have three cards doesn't affect the choice I gave you, and your chances would be the same if I had four cards or only the two. The fact that there WAS another door has no bearing on the chances of getting the car NOW.
(BTW, the article says that "hundreds of maths professors" have attested that the probability is 1:2. Is that not something you 1:3 proponents should be worried about - this smugness is incredibly arrogant! If the article is right, it needs to give serious maths authorities as sources, not internet sites.) --Doric Loon 11:24, 24 January 2006 (UTC)

You say the problem looks to you like a Markov chain. I assure you it is not, for reasons already explained in the article. The CAUSAL link is the constraint that the host MUST open a door, CANNOT open the door you've picked, and the opened door MUST NOT show the car (i.e. the host is NOT opening a random door). Your initial pick is a random event, but the host's action is not. The Bayes' theorem section is effectively a proof of the result explained in numerious other ways in the article. The references section already cites "serious maths authorities, not internet sites" (as you request). I assume you've read the article and the previous discussions on this talk page, and you still think the probability is 50/50. If you seriously want to understand I suggest you either print this article and take it to your maths teacher to discuss, or I can try to help here. -- Rick Block (talk) 15:09, 24 January 2006 (UTC)

Oh sure, I stand by the "assume good faith" principle and wouldn't be here if I didn't really want to understand it. The point is, though, that what Markov proved is that the probability of a future action depends entirely on the present situation and not on how we got here. How we got here is only relevant if it alters the present situation; the probabilities involved in getting here are not in themselves relevant for the probability of the next event. Now, I understand that the show host has no choice. What you haven't explained to me is what I learn from that which makes my next choice (switch or no switch) into an informed choice rather than an arbitrary (i.e. 50:50) one.
I'm not a mathematician, so of course I know I could easily be stumbling in the dark. But I do wonder if you (and the article) are not confusing two different things. Remember Markov and the coin: The chances of tossing a coin heads up twice are 1:4, but after I have tossed it heads up once, the chances of doing it a second time are 1:2. Now is it not possible that here too there are two different phases with different probabilities:

  1. the game is about to begin, I have to choose between the three doors, and know that I will later get the chance to switch. Are my chances better if I plan to switch? Yes.
  2. we are in the middle of the game, the host has opened a goat-door, and I now have the chance to switch. Are my chances better if I switch? No.

In other words, in terms of game theory, if we do it many times, I can optimise my chances by having a switch policy, but in the particular case, when I stand before two doors, it is 50:50. As with Markov's coin tossing, this seems intuitively wrong, but makes sense mathematically. I think. If that is true, then it explains why there are two strongly held views. They are answering different questions. In that case, though, the top of the article needs to rephrase the problem. --Doric Loon 16:11, 24 January 2006 (UTC)

Doric, the Monty Hall problem is the probability equivalent of an optical illusion: the situation is carefully chosen to make the mind jump to false conclusions based on the interpretive shortcuts that speed up everyday processing.
In this case, the situation has tricked you into thinking that there's more than one random event. There isn't; the only random event is the player choosing one door out of the three. (Technically, you can argue that when the player picks the car, there's another random event because Monty has to choose which one of two doors both containing goats should be opened. However, since there is no distinction between the goats, this is not a significant random event; either way, the result is exactly the same, that Monty winds up with one remaining door which has a goat behind it.)
Now, let's look at that random event, of the player choosing one door out of the three. The effect of this choice is to divide the doors into two sets, the player's set of one door and Monty's set of two doors. The car is either in the player's set, or it's in Monty's set; it should be fairly easy to see that the chances are only one in three that the car is in the player's set.
Next comes the other part that most often tricks people: Monty opens a door from his set which he knows to contain a goat. This reduces the size of his set from two down to one; this often fools people into thinking that because the two sets are now the same size, they must have the same probability of containing the car. However, this is not the case: the car cannot move from one set to the other during this step, so obviously the chance of the car being in Monty's set must still be two in three, as it was when the only random event of the problem happened.
(Note: some people get fooled for a different reason when contemplating this step, especially if the problem is phrased incorrectly or ambiguously. Some people think that there is a chance under the rules of the game for Monty to open a door and reveal the car, and that when the problem says "Monty opens the door to reveal a goat", it means that we are to eliminate those cases as not having happened. However, the correctly stated problem makes it clear that Monty knows which doors contain goats, and will always choose a door which contains a goat.)
So, in summary: the player makes a guess, which he has a one-in-three chance of getting right, of where the car is. Monty then reduces the size of his set to one, so that choosing the right set is equivalent to choosing the right door. If the player guessed right the first time, staying wins; if the player guessed wrongly the first time, switching wins. The probabilities are still determined by that first and only random event, the one-in-three chance of the player getting the car in his set on the first try. -- Antaeus Feldspar 17:18, 24 January 2006 (UTC)

First of all, congratulations, that's the clearest presentation of your argument I have heard, and better than what is in the article. In particular, what you say about viewing both parts as one event is helpful. I am now comfortable with the 1:3 solution, provided we are talking about the probability of the whole. In my last comment I accepted that for the sake of argument, but now I accept it without difficulty. Standing at the beginning of the game I am cool about saying, my chances are improved by switching when the time comes.
But can you see my problem about coming into the thing half way through? The article begins by asking about the probability of picking the right door out of two AFTER a random choice has been made. Taking all three doors into account means we are including past events in the calculation (or past phases of the event, if you prefer). But that is selective. Perhaps, unbeknown to me, there were five doors, with three cars and two goats, and two car-doors were eliminated which I never heard about. That would reverse the probability. Taking the past into account is therefore dangerous. This is just instinct, but I sense your idea of viewing both parts as one event is only legitimate when you stand back and look at the whole thing, not when you are standing in the middle with one part done and the next part to be thought about.
But I WILL take your advice about asking a maths prof. --Doric Loon 18:48, 24 January 2006 (UTC)

If you come in half way through (two closed doors, one open, player having originally picked one), unless you know what has happened you would likely think the probability is 50/50. It's not. In the Markov case the next event (the next coin toss) is an independent random event, unrelated to previous coin tosses. In this case, the probability of the player's chosen door having the car is related to the conditions in effect at the time this choice was made. This is the only Markov event. By varying the initial conditions, we could make the probability with two doors left anything we'd like. Start with 100,000 doors and one car. You choose one. The host opens 99,998. There are now two left. It doesn't matter whether you watched this happen from the beginning, came in with 50,000 closed doors, or only at the very end. The initially chosen door has a 1:100,000 chance the whole time (just like when it was picked). At the end the other door has a 99,999:100,000 chance. When there are 101 doors left (the player's and 100 more), as a group the 100 that aren't the player's have a 99,999:100,000 chance, so each individually has roughly a 1:100 chance (99,999:10,000,000, to be exact). Start with 100,000 doors and 99,999 cars. The host opens 99,998 doors with cars. Now the selected door has a 99,999:100,000 chance the whole time (just like when it was picked) and the other door has a 1:100,000 chance. The point is opening the doors has no effect on the probability when the player picks, and unless we reveal enough information to remove any uncertainty (making the "probability" 1 or 0) this probability doesn't change (if doors are not randomly opened). -- Rick Block (talk) 21:36, 24 January 2006 (UTC)
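The many-door argument above scales down to a quick simulation. The sketch below (1,000 doors rather than 100,000, to keep it fast) relies on the fact that the host must leave the car's door closed, so switching wins exactly when the first pick was wrong:

```python
import random

def many_door_switch_rate(n_doors=1000, n_trials=50_000, seed=6):
    """n_doors, one car. The player picks one; the host opens
    n_doors - 2 goat doors, leaving the pick and one other door.
    Returns how often the other (switch) door has the car."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        car = rng.randrange(n_doors)
        pick = rng.randrange(n_doors)
        # The host never opens the car's door, so the switch door
        # has the car whenever the initial pick missed it.
        wins += (pick != car)
    return wins / n_trials
```

The switch door wins about 999 times in 1,000, exactly the initial chance that the car was somewhere other than the first pick.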


OK, I've got it. In maths I'm a plodder, but even plodders get there. Thanks both. --Doric Loon 07:11, 25 January 2006 (UTC)

The fallacy of distinct goats

Just wanted to note something:

  • The player picks goat number 1. The game host picks the other goat. Switching will win the car.
  • The player picks goat number 2. The game host picks the other goat. Switching will win the car.

Umm.. let's say:

  • The player picks the car. The game host picks goat number 1! Switching will lose.
  • The player picks the car. The game host picks goat number 2! Switching will lose.

The chances are now 50-50, as they logically would be. The reveal has actually increased your chance of winning from 1/3 to 1/2: now you have 2 choices and one of them contains the car.

I'm afraid not. You are confusing the fact that four possibilities can be enumerated separately with the idea that they must all be equally probable. That is not the case; since the last two possibilities you mention are both dependent upon the player initially picking the car, they can only divide between them the cases where that in fact occurred. Thus, the possibilities could be written like this:
  • The player picks goat number 1: 1/3
  • The player picks goat number 2: 1/3
  • The player picks the car (1/3) AND the host picks goat 1 (1/2): 1/6
  • The player picks the car (1/3) AND the host picks goat 2 (1/2): 1/6
If you still don't see how absurd it is, then let me pose this: when the door is opened, a goat could be clean or it could be dirty. So that means there are four goat possibilities: the host could pick a clean goat 1, a dirty goat 1, a clean goat 2, or a dirty goat 2! Just by considering whether the goat is clean or dirty, we've elevated the chance that the player initially picks the car to four out of six! Now you're saying to yourself "that's ridiculous -- whether the goat is clean or dirty can't change the probabilities!" Yes, exactly right -- and neither can considering which of two identical goats Monty shows change the probabilities. -- Antaeus Feldspar 16:04, 25 January 2006 (UTC)
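The unequal branch weights listed above can be written out exactly with rational arithmetic rather than simulation; this sketch just encodes the four branches and their probabilities:

```python
from fractions import Fraction

half, third = Fraction(1, 2), Fraction(1, 3)

# The four enumerable branches, with their true (unequal) weights:
# the two car branches split a single 1/3 slice between them.
branches = {
    "player picks goat 1 (switching wins)": third,
    "player picks goat 2 (switching wins)": third,
    "player picks car, host shows goat 1 (switching loses)": third * half,
    "player picks car, host shows goat 2 (switching loses)": third * half,
}

total = sum(branches.values())          # the slices still cover one whole pie
switch_win = sum(p for name, p in branches.items() if "switching wins" in name)
```

Counting four branches does not make them equally likely: the weights sum to exactly 1, and the two winning branches carry 2/3 of it.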
I'm not sure anymore whether the argument is that switching improves your odds, or whether the whole business is a scam. It reminds me of the card game scam that Wednesday uses to cheat the waitress in American Gods. One thing I didn't like is that the probabilities at the bottom of the probability diagram add up to 200%, when they are only allowed to total 100%, i.e. the odds when the player picks goat x and does whatever should only be a portion of the original slice. By the time you get to the end, you are pretending that you have 2 pies.
Pick goat 1 and switch: 1/6
Pick goat 1 and stand pat: 1/6
Pick goat 2 and switch: 1/6
Pick goat 2 and stand pat: 1/6
Pick car, see goat 1 and switch: 1/12
Pick car, see goat 1 and stand pat: 1/12
Pick car, see goat 2 and switch: 1/12
Pick car, see goat 2 and stand pat: 1/12
If you choose blindly between the last two doors, your odds of winning are 50/50. This stays true regardless of how many doors you initially begin with. That must be why the show has the 'no switching once you picked' rule, otherwise they'd give away a lot of cars. Of course, you don't have to choose blindly; in the example you are allowed to choose the door with better odds. -- JethroElfman 23:37, 1 February 2006 (UTC)
Frankly, I think the diagram is not too good, and should be replaced with two trees -- one showing what happens if you use a "switching" strategy each time, one showing what happens if you use a "staying" strategy each time. The current diagram seems to confuse a lot of people into thinking that staying or switching is going to be picked for them, which is of course not the point. -- Antaeus Feldspar 02:51, 2 February 2006 (UTC)

Markov again

Coming back to Markov. I suspect the reason most people have difficulty is because at school they were taught the Markov principle (usually not under that name) and this really does look like it at first sight. Fooled me for long enough. As Antaeus Feldspar says above, it is an optical illusion in this respect. The article doesn't really help here. At the top of the "Aids to understanding" section it points to this issue, saying that "The most common objection to the solution is the idea that, for various reasons, the past can be ignored when assessing the probability." But I am not sure that what follows really helps most people see why it cannot be ignored: I certainly read it the first time with a sense of frustration that the key point was evading me. For me the eureka moment came with Antaeus' pointing to the (in retrospect obvious) fact that the car can't move. I've been mulling over how to explain the difference between Markov and Monty Hall. I think the difference is that after I toss a coin, I don't toss it a second time from where it landed, but rather I pick it up first: the way it landed last time doesn't affect the next toss because I return it to a neutral position before the next event in the sequence. The equivalent of returning to a neutral position would be if, after the first door has been opened, the game organisers were to remove the car and remaining goat and reallocate them by a random principle. THEN the second phase would be unaffected by the first, and that would be Markov. But they don't. What everyone understands is that if we play the game three times, the first guess will be right once and wrong twice; what is so easy to miss is that that cannot change unless the car moves. Now this all seems so obvious, but it is in fact the massive blind spot which makes the optical illusion trick people.
I wonder if it would be worth having a short paragraph on Markov in this article, and discuss the difference properly - and perhaps less chaotically than I can do it. --Doric Loon 15:24, 31 January 2006 (UTC)

I think you've got a good point, Doric. I'm not even sure that really is the most common objection, either: my experience with explaining it to people tends to be evenly split between those who think that probability automatically corresponds to the size of the sets, and those who think that Monty's removing a door has "reset" the probabilities and has generated a new random event, with probabilities independent from previous random events. Though... now that I typed that out, I'm wondering if they're really the same thing, after all. -- Antaeus Feldspar 15:37, 2 February 2006 (UTC)

Staying or switching at random

"Note that switching at random is quite distinct from just keeping the original choice. Having arrived at A, B, C, or D, if the contestant then blindly flips a coin to choose whether to switch or stand pat, then there is a 50% chance of ending up with the car. However, the choice doesn't have to be made at random. The coin flip gives a 50% chance of being the option with 1/3 likelihood, and 50% of being the option with 2/3 likelihood, so they balance."

This has nothing to do with the Monty Hall problem. The Monty Hall problem is about whether choosing a particular strategy can increase or lower your chances. Talking about what would happen if a coin flip decided your strategy for you only confuses the issue and makes it harder for people to understand the real problem. -- Antaeus Feldspar 16:02, 2 February 2006 (UTC)
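For what it's worth, the coin-flip arithmetic in the quoted passage is easy to verify numerically, which also shows why it says nothing about the fixed strategies: randomizing the choice just averages the 2/3 and 1/3 outcomes down to 1/2. A sketch, assuming a fair coin:

```python
import random

def coin_flip_strategy_rate(n_trials=100_000, seed=7):
    """Play Monty Hall, but let a fair coin flip decide whether to switch.
    The result averages the 2/3 switch rate and the 1/3 stay rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        opened = next(d for d in range(3) if d != pick and d != car)
        if rng.random() < 0.5:  # coin says switch
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / n_trials
```

The rate lands near 1/2, but only because the coin hides the difference between a 2/3 strategy and a 1/3 strategy.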

Yes, perhaps the old diagram was just irritating me too much. It showed the odds at 50/50 and was confusing. Still, I think people consider choosing at random to be a strategy in itself and extrapolate from that to the mistaken notion that both doors are equal. The number of cries for help here on this talk page indicates to me that there are still edits to be done so the article makes its point better. That's my idea with the random thing; to tell people that their gut instinct of it being 50/50 is correct, but only if they were choosing blindly. If the page keeps any of the new diagrams I'll make better images to replace these quickies. -- JethroElfman 02:47, 3 February 2006 (UTC)
I agree there's still room for improvement. I think the card game experiment is maybe the best aid to understanding, since people can actually carry it out themselves and see why Monty opening a goat-door doesn't change any probabilities. Would anyone object if I moved that to the top of the "Aids to understanding" section? -- Antaeus Feldspar 16:00, 3 February 2006 (UTC)
Hey, I think that would be a good move. In fact, I would eliminate the 3-card version (or make it the extrapolation) and begin straight away with a 52-card pack where the objective is to find the ace of spades. It is important to get this done early because the length of the article means people may not make it to the end. JethroElfman 16:58, 3 February 2006 (UTC)
I would have to disagree about making the 52-card rather than the 3-card version the primary version; I've seen a lot of people respond to attempts to prove the 2/3 result through simulation with "well, if your simulation comes out with what's obviously the wrong result, it proves that you mis-programmed your simulation!" I'd rather make the simulation correspond as precisely as possible to the actual problem so that there's less room for people to think that some significant factor differs between the door version and the card version. -- Antaeus Feldspar 18:00, 3 February 2006 (UTC)
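For anyone who wants to run the simulation themselves rather than trust a diagram, here is one possible sketch (Python, standard library only; the function name `play` and trial count are just illustrative choices, not from the article). It models the three-door game exactly as stated: the host always opens a goat door that is not the contestant's pick.

```python
import random

def play(switch, trials=100_000):
    """Simulate the standard Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # car placed uniformly behind one of three doors
        pick = random.randrange(3)   # contestant's initial, independent pick
        # Host opens a door that is neither the pick nor the car.
        # (If the pick IS the car, either goat door works; the choice between
        # them doesn't affect the stay/switch win rates.)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # ~0.333
print(play(switch=True))   # ~0.667
```

With 100,000 trials the two fractions settle very close to 1/3 and 2/3, which is the point the card experiment is meant to make physical.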
Additionally, what about a "Common misperceptions" section, explaining common ways in which people either misunderstand the ground rules of the problem or misunderstand the effects of those ground rules on the probabilities? -- Antaeus Feldspar 16:02, 3 February 2006 (UTC)

Simply.

Looking only at winning possibilities.

If you are going to swap, you must have picked a goat in the first place, and your odds of doing that are 2/3. If you are not going to swap, you must have picked the car in the first place, and your odds of doing that are 1/3.

--81.79.90.68 14:55, 4 February 2006 (UTC)Tim Robinson.
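Tim's equivalence ("switching wins exactly when your first pick was a goat") can be checked mechanically. A quick sketch (Python, standard library; variable names are mine, not from the discussion) that verifies the two events coincide in every single trial, not just on average:

```python
import random

trials = 100_000
agree = 0
for _ in range(trials):
    car = random.randrange(3)
    pick = random.randrange(3)
    # Host opens a goat door other than the pick.
    opened = next(d for d in range(3) if d != pick and d != car)
    # The door a switcher would end up with.
    switched = next(d for d in range(3) if d != pick and d != opened)
    picked_goat_first = (pick != car)
    switch_wins = (switched == car)
    agree += (picked_goat_first == switch_wins)

print(agree == trials)  # True: switching wins if and only if the first pick was a goat
```

The equivalence is exact: if the first pick is a goat, the host's hand is forced and the remaining closed door must hide the car; if the first pick is the car, switching necessarily lands on a goat.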

Tricky Host Scenario

I'm not sure I would switch. The Monty Hall problem states that the host allows you to switch only AFTER you have picked a door already (it was not part of the initial rules). So if the host knows that you picked a door with a goat, he COULD directly open that one. This would give you a much reduced probability of getting the car, and you could only get the car if you didn't switch.

No, sorry. That's not possible. Once you pick a door, the host must pick another door to open. He can't choose to open the door that you picked. -- Antaeus Feldspar 02:45, 4 February 2006 (UTC)
But that raises the question: why didn't he tell you about the switch in the first place?
How is that relevant? -- Antaeus Feldspar 03:13, 4 February 2006 (UTC)

Somebody please help - I think I understand the correct answer, but my mind is still struggling to get around the following: Imagine contestant A chooses door 1. Monty Hall then opens door 3 to reveal a goat. At this point you introduce contestant B. Contestant B has no prior knowledge of the game. He is told he has been "allocated" door 1, he does not know why door 3 is open. He is effectively in the same position as contestant A, but he does not know that the game is fixed. This time both contestant A and B are offered the choice to switch or stick. Surely the percentage chance for contestant B is 50/50. If so how can the same two doors have different probabilities of a prize at the same time for two people standing in front of them? If contestant B does not have 50/50 why not?

Yes, the chances for B are 50/50; B is playing a different game. For A, there were initially three choices, all equally likely. For B, he sees one open door, with a goat in it. Note also how one of the rules of the game comes into play here: there is always ONE car and TWO goats. But for B, since door 3 is not available as a choice, there are only two choices, one of which has the car, the other a goat. The fact that B has been "allocated" the first door doesn't matter. Summary: A has three choices initially, all equally likely to have the car. B has two choices, which, given his information, are equally likely to have the car. After the host's action, A has two choices, NOT equally likely (stay with door 1, or swap to door 2). B arrives after the host's action, so there is no before and after for him. Look at it another way: the game is vastly different for B because the whole 1/3 chance that door 3 might have held the car has been taken away, and, as far as B can tell, redistributed over the two remaining doors. The probabilities differ because the two contestants have different information, not because the doors themselves differ. Lucky B! --Mike Van Emmerik 21:52, 6 February 2006 (UTC)
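The information difference can be made concrete with a sketch (Python, standard library; the strategy names are mine): contestant A, who knows the history, always switches; contestant B, who doesn't, guesses blindly between the two closed doors. This is essentially the coin-flip strategy discussed earlier on this page.

```python
import random

trials = 100_000
a_wins = b_wins = 0
for _ in range(trials):
    car = random.randrange(3)
    pick = random.randrange(3)   # A's initial pick; B is merely "allocated" it
    opened = next(d for d in range(3) if d != pick and d != car)
    other = next(d for d in range(3) if d != pick and d != opened)
    a_wins += (other == car)                          # A always switches
    b_wins += (random.choice([pick, other]) == car)   # B picks blindly between the two

print(a_wins / trials)  # ~0.667
print(b_wins / trials)  # ~0.5
```

B's 50% is consistent with A's 2/3: half of B's blind guesses land on the 1/3 door and half on the 2/3 door, and 0.5 × 1/3 + 0.5 × 2/3 = 1/2. Same doors, different information, different win rates.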