Slot machine moguls are on a tear over games with nostalgic themes. One of these is based on yesterday's TV show, Let's Make a Deal. Seeing this model reminded me of an interesting puzzle in probability and logic, and got me wondering whether the same math applies to today's popular Who Wants to Be a Millionaire?
The conundrum is known as the "Monty Hall Problem," named for the host of Let's Make a Deal. At the end of a session, a player had to pick one of three nominally identical curtains. One concealed a fabulous prize; the others hid disappointments. After the contestant chose, Monty, who knew what was where, opened one of the unselected curtains, revealing that it would have been a bummer. The player could then switch. Was it preferable to stick with the original choice, change, or would it be a toss-up?
Compare this to a feature of Millionaire. Picture this scenario. Liz gets a question and four possible answers. She hasn't a clue - A, B, C, and D seem equally likely - but guesses "A." The host, Regis Philbin, asks whether that's final. Liz vacillates, saying "No, let's use the 50-50 lifeline." Two wrong answers are accordingly erased, leaving A and C. Is it better to stay with A, switch to C, or would it be truly 50-50?
Most folks think the odds are the same either way on either show.
Sorry. On Let's Make a Deal, switching is better. The chance of the initial choice being right is one out of three. It's twice as good, two out of three, for the unselected curtain left unopened.
How so? The contestant began with one of three indistinguishable curtains. The chance it was right was one out of three. The chance the biggie was behind one of the two other curtains was the complementary two out of three. Monty opened one of the two unselected curtains, revealing it as wrong. The probability of the first choice being right was still one out of three; that of the other two curtains, combined, still two out of three. When one of the unselected curtains was shown to be wrong, the other inherited the full two out of three.
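For skeptics, the argument lends itself to a quick check by computer. Here's a minimal Monte Carlo sketch in Python (the function name and trial count are my own choices, not anything from the show):

```python
import random

def play_deal(switch, trials=100_000):
    """Simulate the Let's Make a Deal endgame; return the win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)    # curtain hiding the prize
        choice = random.randrange(3)   # contestant's first pick
        # Monty opens an unselected curtain that doesn't hide the prize
        # (which one he opens doesn't affect the stick/switch rates).
        opened = next(c for c in range(3) if c != choice and c != prize)
        if switch:
            # Switch to the curtain that is neither chosen nor opened.
            choice = next(c for c in range(3) if c != choice and c != opened)
        wins += (choice == prize)
    return wins / trials

print(f"stick:  {play_deal(switch=False):.3f}")   # ≈ 0.333
print(f"switch: {play_deal(switch=True):.3f}")    # ≈ 0.667
```

Run it and sticking hovers around one out of three while switching hovers around two out of three, just as the argument above says.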
There's no comparable "Regis Philbin Problem" on Millionaire. The contestant has one chance out of four of being right before the lifeline. Afterward, it's one out of two, sticking or switching.
What if, on Millionaire, two wrong answers were eliminated from three originally unselected options? The chance associated with the first choice would still be one out of four, and with the remaining response three out of four. In this instance, switching would be better; it would have three chances out of four of being correct, versus one out of four for the starting guess.
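This hypothetical rule change is just as easy to simulate. A sketch of my own, assuming the two erased answers are drawn only from the three unselected options:

```python
import random

def play_variant(switch, trials=100_000):
    """Hypothetical lifeline: erase two wrong answers drawn only from
    the three options the contestant did NOT pick."""
    wins = 0
    for _ in range(trials):
        correct = random.randrange(4)
        guess = random.randrange(4)
        # Wrong answers among the three unselected options.
        unselected_wrong = [a for a in range(4) if a != guess and a != correct]
        erased = random.sample(unselected_wrong, 2)
        remaining = [a for a in range(4) if a not in erased]
        if switch:
            # remaining always holds the guess plus exactly one other option.
            guess = next(a for a in remaining if a != guess)
        wins += (guess == correct)
    return wins / trials

print(f"stick:  {play_variant(switch=False):.3f}")   # ≈ 0.25
print(f"switch: {play_variant(switch=True):.3f}")    # ≈ 0.75
```

Under this rule, switching wins about three times out of four, mirroring the Monty Hall result with four options instead of three.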
But this isn't how Millionaire works. Wrong answers are dropped from the whole set of four, not from the three originally unselected alternatives. An incorrect initial choice could just as easily blink off as any other wrong answer. So, the two remaining possibilities are equally likely and switching wouldn't matter.
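Simulating the lifeline as it actually works, and counting only the rounds where the guess survives the cut, as in Liz's scenario, confirms the toss-up (again a sketch of my own):

```python
import random

def play_fifty_fifty(switch, trials=200_000):
    """Actual 50-50 lifeline: two wrong answers are erased from all
    four options, so the contestant's own guess can vanish too.  Win
    rate is computed only over rounds where the guess survived."""
    wins = played = 0
    for _ in range(trials):
        correct = random.randrange(4)
        guess = random.randrange(4)
        wrong = [a for a in range(4) if a != correct]
        erased = random.sample(wrong, 2)
        remaining = [a for a in range(4) if a not in erased]
        if guess not in remaining:
            continue  # lifeline erased the guess; not Liz's situation
        played += 1
        if switch:
            guess = next(a for a in remaining if a != guess)
        wins += (guess == correct)
    return wins / played

print(f"stick:  {play_fifty_fifty(switch=False):.3f}")   # ≈ 0.5
print(f"switch: {play_fifty_fifty(switch=True):.3f}")    # ≈ 0.5
```

Sticking and switching both come out around one out of two, because surviving the cut is itself evidence that raises the guess's chance from one out of four to one out of two.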
Technically, the difference is that Monty Hall's probabilities are "conditional," while Regis's are absolute.
Because the Monty Hall Problem is counterintuitive, it's fun to set it up as a parlor spinoff of the "three-card monte" carnival game. Take three playing cards, an ace and two jacks. Shuffle, then place them face down on a table. A player, trying to find the ace, slides one out of the set without looking. A dealer checks the other two and removes a jack. The player can elect to stick with the first choice or switch. Keep score by noting how many rounds are played, and how often sticking and switching are correct. It won't take long to see that switching is twice as good as sticking.
The Monty Hall Problem illustrates that instinct can fool you. That's especially true in gambling. And it affirms the acumen of the perspicacious poet, Sumner A Ingmark:
What you think and facts authenticate,
Arent always in compliance.
Thats why what you speculate,
Runs second best to science.