OT - Five card flush puzzle


QuikSand
12-14-2000, 12:26 PM
Warning: this puzzle might require slightly more advanced mathematics than most of my previously posted puzzles, which have tended more toward intuition than real math. I think this one is solvable with intuition alone, but a true proof requires some bigger number-crunching.

This puzzle involves a standard, 52-card deck of playing cards, in four suits and ranks from 2 through ace. A flush is a hand that includes all five cards from the same suit, though they may be of any rank. If the ranks happen to fall in sequence, the hand is designated as a straight flush (4-5-6-7-8 for example) or a royal flush (10-J-Q-K-A only) but in either case, the hand still counts as a flush.

For purposes of this puzzle, all the probabilities are as they seem-- the hands described are the product of some purely random selection process-- there is no trickery involved, just pure probability.

Here's the puzzle:

Rank the following sets of 5-card poker hands in descending order of their likelihood of being a flush:

(A) All 5 card hands
(B) Hands whose first card is an ace
(C) Hands whose first card is the ace of spades
(D) Hands with at least one ace
(E) Hands with the ace of spades

I'll give you a (hopefully) helpful hint-- the ranking will not have five different tiers. There is at least one instance where two or more of the groups listed above have an identical likelihood of being a flush.

Enjoy!

[This message has been edited by QuikSand (edited 12-14-2000).]

Passacaglia
12-14-2000, 12:35 PM
I think I know, but I don't want to post it, in case by some freak coincidence, I'm actually right.

------------------
"If we can put a man on the moon, we can grow grass indoors."

Scarecrow
12-14-2000, 01:36 PM
Just at first glance, and not actually trying to figure it out statistically, I would say:

A is the most likely
B & D are next with the same likelyhood
C & E are next with the same likelyhood


------------------
If I only had a brain

Passacaglia
12-14-2000, 01:41 PM
Okay, now that someone has put up a guess different from mine, I'll say mine: I think they all have the same probability.

------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-14-2000, 01:48 PM
I can set your fears to rest, Passacaglia-- you didn't get the correct answer and spoil it for everyone. ;) The correct answer is still out there...

Passacaglia
12-14-2000, 02:00 PM
Okay...after a little more thought, and a little more math, my new guess is:

D has the highest probability.
All others are less likely.

------------------
"If we can put a man on the moon, we can grow grass indoors."

Vaj
12-14-2000, 02:19 PM
I'll go with D having the lowest probability, with A, B, C, and E having the same probability.

Celeval
12-14-2000, 02:30 PM
Looks to me like A, B, and C are equal probability, while D is less likely than any of those, and E is least likely.

Yes?

Kevin

WebEwbank
12-14-2000, 02:33 PM
I think that (D) has the lowest flush probability; the others are all equal and higher.

Reasoning is: presence of any one card has NO effect on "filling" (completing) your flush in the following four cards, since EVERY hand has a first card and it always has a suit.

The reason that (D) has a lower flush probability is that any hand with two or more aces is de facto not a flush. So within the set (D) you have many hands which can't flush, while having eliminated many hands which do flush (any flush without an ace). This is the one that needs math, but I don't have time right now to elaborate the proof.

Well, Q ?

Ctown-Fan
12-14-2000, 02:46 PM
First, (A) is the most probable because it includes all flush hands.

Second, (B) and (D) are equal and the next most probable because they could be the same hand but only include 4 possibilities of receiving an ace.

Third, (C) and (E) are equal because it could be the same flush hand with only one possibility for receiving the correct ace.

My stats book is locked away or I would attempt to do the math... actually, I think I'll dig it out and give it a shot.

[This message has been edited by Ctown-Fan (edited 12-14-2000).]

Dutch
12-14-2000, 02:53 PM
Originally posted by QS
Rank the following sets of 5-card poker hands in descending order of their likelihood of being a flush:

(A) All 5 card hands
(B) Hands whose first card is an ace
(C) Hands whose first card is the ace of spades
(D) Hands with at least one ace
(E) Hands with the ace of spades
Possible outcomes:

A: All flush combinations are still available
B: All flush combinations are still available, depending on the ace.
C: Only spade flushes are still available.
D: All flush combinations are still available (depending on the ace).
E: Only spade flushes are still available.

Most Likely are A, B, and D.
Least Likely are C and E.



[This message has been edited by Dutch (edited 12-14-2000).]

Passacaglia
12-14-2000, 03:00 PM
I think people are on the right track in finding out how many combinations of flushes are available. However, we also need to find out how many hands are available, to get the "denominator" in the probability. I doubt that I did that right with my most recent guess, since I'm tired and in a terrible, bored mood, but both the possible combinations of hands and the combinations of flushes need to be found.

------------------
"If we can put a man on the moon, we can grow grass indoors."

DukeRulesMAB
12-14-2000, 03:11 PM
Hmm...don't have time to give a well-researched answer, but here goes anyway.

The odds of getting a flush in general are:
12/51 * 11/50 * 10/49 * 9/48 = 33/16660

The reason that there are only 4 numbers in that equation is that it doesn't matter what the first card is; the odds are still the same.

In fact, because card order is random anyway, you can reorganize the last two hands such that the ace or ace of spades is picked up first.

Therefore, I'm going with the answer that all the situations are equally likely to land a flush.
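One quick way to double-check that 33/16660 figure is with exact rational arithmetic. A minimal Python sketch (the variable name is just illustrative):

<PRE>
from fractions import Fraction

# Probability that the four cards dealt after the first all match the first card's suit.
p_flush = Fraction(12, 51) * Fraction(11, 50) * Fraction(10, 49) * Fraction(9, 48)

print(p_flush)         # 33/16660
print(float(p_flush))  # ~0.00198, i.e. roughly a 0.198% chance
</PRE>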

Michael

Bad-example
12-14-2000, 03:16 PM
Dutch's answer looks pretty convincing to me.

Celeval
12-14-2000, 03:23 PM
Okay, here's how to do it:

A, B, and C are identical because the first card you draw has no effect on the probability. Take a deck, draw any card. Let's say it's the 3 of Hearts. The chance of getting a flush is exactly the same as it is before you drew that card; since no matter what card it is (any, any ace, or the ace of spades) the chance of the flush is the chance of drawing your next four cards to match that suit.

For D and E, split it into two pieces: that where you draw the required card first (any Ace, Ace of Spades), and that which you don't. For D and E, the probability is the same if you draw the card first; however if you do not draw the required card first (which you will not for 80% of the flushes you get), then the probability decreases.

D: If you draw a 2-K first, then you HAVE to draw the Ace of that suit later in the hand. Therefore, you have the same chance of remaining in flush until that last card you draw, where you have to draw a single card out of 48 (the ace) rather than any of the 8 cards you could have drawn were you going for a simple flush.

E: Same situation as D, except that you must draw a Spade as your first card; which drops the probability significantly.

The math to go with it:

<PRE>
           Drawn card in hand
        (Prob. of staying in suit)

CARD      1      2      3      4      5

A       52/52  11/51  10/50   9/49   8/48
B        4/4*  11/51  10/50   9/49   8/48
C        1/1^  11/51  10/50   9/49   8/48
D
  a      4/52  11/51  10/50   9/49   8/48
  b     48/52   1/51  10/50   9/49   8/48
  c     48/52  10/51   1/50   9/49   8/48
  d     48/52  10/51   9/50   1/49   8/48
  e     48/52  10/51   9/50   8/49   1/48

E
  a      1/52  11/51  10/50   9/49   8/48
  b     11/52   1/51  10/50   9/49   8/48
  c     11/52  10/51   1/50   9/49   8/48
  d     11/52  10/51   9/50   1/49   8/48
  e     11/52  10/51   9/50   8/49   1/48
</PRE>

* - It's a given that the first card is an
ace, therefore there's a 4/4 probability
of getting that ace.

^ - It's a given that the first card is the
Ace of Spades, therefore there's a 1/1
probability of getting that card.

Calculation is left to the reader (I'm lazy), but I think this is right.

Kevin

Celeval
12-14-2000, 03:33 PM
Should have looked at it more closely first. Bleah, I'm tired.

<PRE>
D
  a      4/52  11/51  10/50   9/49   8/48
  b     48/48    1/4  10/50   9/49   8/48
  c     48/48  10/47    1/4   9/49   8/48
  d     48/48  10/47   9/46    1/4   8/48
  e     48/48  10/47   9/46   8/45    1/4

E
  a       1/1  11/51  10/50   9/49   8/48
  b     11/51    1/1  10/50   9/49   8/48
  c     11/51  10/50    1/1   9/49   8/48
  d     11/51  10/50   9/49    1/1   8/48
  e     11/51  10/50   9/49   8/48    1/1
</PRE>

I think I'm on the right track this time, but I'm leaning towards Dutch now... all have the same chance.

Darnit, I've got to learn to post more so I don't look so stupid when my only posts are wrong.

P.S. I've also got to work my tables out so I don't have to edit 12 times.

[This message has been edited by Celeval (edited 12-14-2000).]

Ctown-Fan
12-14-2000, 03:48 PM
Let's try this again...

The probability for A remains constant:
(1/4)*(11/51)*(10/50)*(9/49)*(8/48)=
This will never change for a flush hand.

(B) The probability remains constant:
(1/13)*(11/51)*(10/50)*(9/49)*(8/48)=

(C) Remains constant:
(1/52)*(11/51)*(10/50)*(9/49)*(8/48)=

(D) Does not remain constant, because the ace can be drawn at any time.
If the ace is first:
(1/13)*(11/51)*(10/50)*(9/49)*(8/48)=
Second:
(1/4)*(1/51)*(10/50)*(9/49)*(8/48)
Third:
(1/4)*(11/51)*(1/50)*(9/49)*(8/48)=
...and so on for the fourth and fifth cards.
(E) Does not remain constant, because the ace of spades can be drawn at any time, but the odds only change from (D) on the first drawing from the deck, for two reasons: first, the odds of drawing the ace of spades first are lower (B and C are not mutually exclusive from D and E), and second, the odds of drawing a spade with the first card are lower than drawing a card from any suit (52/52).
Ace is first:
(1/52)*(11/51)*(10/50)*(9/49)*(8/48)=
Ace is second:
(1/4)*(1/51)*(10/50)*(9/49)*(8/48)
...and so on.

If I could remember the math then I would try to solve it, but I can't. Sorry.

Am I at least on the right track?

This is a great question, even though I had to dig out my stats book

[This message has been edited by Ctown-Fan (edited 12-14-2000).]

QuikSand
12-14-2000, 04:00 PM
I'm personally having trouble following the probability matrices above... part of it is the spacing, which doesn't line up properly from my current computer.

That said... three comments:

Dutch doesn't have it quite right (ABD / CE).
They are not all equally likely.
Celeval is correct - AB&C are all on the same level.

When I get home, I'll look again at the numbers above, and see if I can follow more clearly looking through IE.

QuikSand
12-14-2000, 04:53 PM
I'm still working my way through Celeval's math above... it may be accurate, but it is more complicated than it needs to be. (That happens a lot in combination calculations)

I'll start with intuition, because I feel the whole thing can be solved with just that tool.

Celeval said it very well above:


A, B, and C are identical because the first card you draw has no effect on the probability. Take a deck, draw any card. Let's say it's the 3 of Hearts. The chance of getting a flush is exactly the same as it is before you drew that card; since no matter what card it is (any, any ace, or the ace of spades) the chance of the flush is the chance of drawing your next four cards to match that suit.

I buy this thinking hook, line, and sinker. Seeing one card tells us *absolutely nothing* about the probability of getting a flush... as someone else noted, it will reduce both the numerator and denominator of the fraction that represents the chance... but by proportional amounts, so the likelihood remains identical. Agreed. So A, B, and C will all be on the same level in the correct answer.

That said, can't we assess option E with a similar intuitive approach? E narrows us to the subset of possibilities where one of the cards is the ace of spades-- yet, we know nothing of the other cards. Isn't this essentially the same case as B or C? Does it matter which of the cards we gain some knowledge about, first or otherwise? As long as we only have knowledge about one of the five cards, then (intuitively) we've learned nothing about the hand's chances of being a flush.

So, I would suggest that just by intuition and a basic grasp of the probability involved, we should get to the point where there are only three possible answers:

ABCDE

ABCE
D

or

D
ABCE

I've already (above) ruled out the first one, so we're now stuck with the case that D is either less likely or more likely to be a flush (or else I lied to you).

(Incidentally, both of these were thrown out as possible answers in back-to-back posts earlier in the thread)

That said, who will make the case first here... The intuitive? Or the mathematical?

QuikSand
12-14-2000, 05:04 PM
Ctown-fan, I first need to apologize for my memory failure about Vermilion's playing with the "big boys" in high school athletics. I had somehow unfairly lumped you in with the also-rans of Erie County-- Wakeman, New London, that lot. Unfair of me, and I'm sorry.

Now, on to the math. Your calculations above (as of the time I copied them, they seem to be moving a bit):


Let's try this again...
The probability for A remains constant:
(1/4)*(11/51)*(10/50)*(9/49)*(8/48)=
This will never change for a flush hand.

(B) The probability remains constant:
(1/13)*(11/51)*(10/50)*(9/49)*(8/48)=

(C) Remains constant:
(1/52)*(11/51)*(10/50)*(9/49)*(8/48)=


It looks to me like what you will get with each of these calculations is the probability that a hand meeting those conditions will be drawn. So, your calculation for B would be an accurate representation of the probability that a given hand will have an ace as its first card and will be a flush.

However, this is not on point to the question posed. The question is: given the conditions of each lettered case, what is the probability of the hand being a flush? I trust that you can see the difference between the two constructs.

For any of A, B, or C, I think that the probabilities laid out by Celeval (appropriately labeled "probability of staying in suit") are the correct ones. If you multiply across each row of his layout, you ought to get the 33/16660 number that was calculated earlier by Duke/Michael (I'm taking his number on faith).

The first probability is by definition 1-- this is the "given" in the setup. From there, you calculate the incremental likelihood of "staying in suit" and maintaining the flush.

Now, time to crack case D.

Scarecrow
12-14-2000, 05:10 PM
Here's what I came up with. According to QS its wrong, but I'm not sure where.

1st 2nd 3rd 4th 5th TOTAL
A: (52/52) (12/51) (11/50) (10/49) (9/48) 0.1981%
B: (4/52) (12/51) (11/50) (10/49) (9/48) 0.0152%
C: (1/52) (12/51) (11/50) (10/49) (9/48) 0.0038%
D: (52/52) (1/51) (11/50) (10/49) (9/48) 0.0165%
E: (13/52) (1/51) (11/50) (10/49) (9/48) 0.0041%

A: 1st card anything, all others same suit.
B: 1st card ace, all others same suit.
C: 1st card Ace of Spades, all others same suit.
D: 1st card anything, 2nd card ace of that suit, all others same suit.
E: 1st card a spade, 2nd card ace of spades, all others same suit.

Therefore, it should be A,D,B,E,C.

------------------
If I only had a brain

QuikSand
12-14-2000, 05:16 PM
Scarecrow, see my comment above to Ctown-fan. I believe that you made the same errant translation from the narrative to the mathematical.

QuikSand
12-14-2000, 05:36 PM
I'll take a stab at my own set of numbers, but my approach is slightly different. For each case, I'll just calculate two things:

D = total number of cases fitting the criteria stated (flushes or otherwise)

N = total number of cases fitting the criteria that are flushes

Then, I'll just calculate N/D = probability of a flush, given that set of criteria.

(Personally, I find this simpler than the incremental multiplicative probability approach, though they are equally valid)

- - -

Case A - any 5 card hand

D = 52x51x50x49x48
N = 52x12x11x10x9

N / D = 0.1981% (I agree with Scarecrow here)

Case B - start with an ace

D = 4x51x50x49x48
N = 4x12x11x10x9

N / D = 0.1981%

(You can see here that the exact same transfer took place in both cases-- a 52 was reduced to a 4. When we use the ratio, it washes out completely)


Case C - first card is the ace of spades

D = 1x51x50x49x48
N = 1x12x11x10x9

N / D = 0.1981%

(Same logic-- with the ace of spades fixed as the starting card there's just one starting point, and again it washes out of the ratio)


Case E - hand has ace of spades

You can break this into five parts if you insist, but the order is unimportant-- this breaks down into just another run of the same logic...

D = 1x51x50x49x48
N = 1x12x11x10x9

N / D = 0.1981%

- - -

Thus, we are left with Case D.

I'll leave the rest as an exercise for the readers... for now, at least. Brute force (shown here) will get you to the correct answer, but intuition is still the easier path, I think.
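For anyone who would rather not push the buttons by hand, here is a small Python sketch of the same N/D bookkeeping for the four cases above (case D is left out, as in the post; the helper name is just illustrative):

<PRE>
from fractions import Fraction
from math import prod

def ratio(numerator_factors, denominator_factors):
    # Multiply out the ordered counts and return N/D as an exact fraction.
    return Fraction(prod(numerator_factors), prod(denominator_factors))

cases = {
    "A (any hand)":                 ([52, 12, 11, 10, 9], [52, 51, 50, 49, 48]),
    "B (first card an ace)":        ([4, 12, 11, 10, 9],  [4, 51, 50, 49, 48]),
    "C (first card ace of spades)": ([1, 12, 11, 10, 9],  [1, 51, 50, 49, 48]),
    "E (hand has ace of spades)":   ([1, 12, 11, 10, 9],  [1, 51, 50, 49, 48]),
}

for name, (n, d) in cases.items():
    print(f"{name}: {float(ratio(n, d)):.4%}")   # every case prints 0.1981%
</PRE>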

[This message has been edited by QuikSand (edited 12-14-2000).]

jason_tobias
12-14-2000, 05:38 PM
I think it's D.

QuikSand
12-14-2000, 05:40 PM
Once again, our friend J_T cuts through all the nonsense and gets to the heart of the issue. Well done, friend!

Vaj
12-14-2000, 06:37 PM
At the risk of replacing my earlier error with another, I now think the probability of a flush for Case D is 0.2233%. I'm sure Passacaglia can confirm this.

Ctown-Fan
12-14-2000, 11:01 PM
1. No problem, QuikSand, for the misunderstanding. Since I have moved back to Vermilion, I have learned that the conference has disbanded and Vermilion plays a much weaker schedule (Oberlin, Firelands, and the like).
2. I think I understand now; stats was never my strong suit. I only took it after I graduated, when I learned that UMaine required it for grad school... which I found to be really silly because my M.A. is in rhetoric... but back to the point. I think this is really funny: I took stats at Kent State. Thirty people in my class, and 27 of them were Fashion Design majors!! No offense, but it was not the most intellectually challenging course I have taken. But I do see where my assumptions were off.

Thanks, this was a great challenge

WebEwbank
12-15-2000, 07:33 AM
Q: was I right or wrong ?

Passacaglia
12-15-2000, 07:55 AM
Sorry, Vaj -- I had 0.00223311

------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-15-2000, 07:55 AM
Web, et al -- the answer is that (D) has the HIGHEST flush probability of the five, while all the others are even.

I'll offer my intuitive explanation... it looks like Vaj has already crunched the numbers (though I haven't checked for accuracy).

Intuitively, the main issue with flushes is (as you have pointed out) that they cannot have multiple cards of the same rank.

Group (D) includes all the hands that have at least one ace. The intuitive trick here is to reverse that thinking-- then the math gets much easier.

If the entire set of hands has a certain chance of being a flush, then that entire set is composed of two discrete subsets-- hands including one or more aces (Group D), and hands without any aces (the rest). By definition, the number of hands in these two subsets must add up to the total number of hands.

So, instead of looking at the hands in Group D, let's look at what remains after we peel them off-- the hands with no aces. This, essentially, means that we're looking at a new deck of cards-- only 48 cards, 4 suits but ranks of only 2 through K. Well, what are our chances of getting a flush from this kind of deck?

Again, rather than use math, I'll use intuition. As we shrink the size of each suit, the odds grow dimmer that we can "remain in suit" after each card. Take this to its logical conclusion-- a four-suit deck with only 5 cards in each suit. Clearly, in that case our chances of drawing a 5-card flush become extremely remote-- much more so than with the original 52-card deck, right? So, it stands to reason that as we compress each suit, the likelihood of a flush shrinks.

So, the subset of hands that contain no aces-- being the equivalent of hands dealt from a 48-card ace-free deck-- is less likely to be a flush than a hand from Group A (or any of the others).

Since the group we just described is the complement of Group D and they combine to form Group A... then by deduction, Group D must have a greater chance of being a flush.

That's the intuitive proof (with a little math) and it works just fine for me. However, if you prefer the math, here's the recipe:

(again, I prefer to use my N and D calculations as at the bottom of page 1 of this thread)

To calculate the chances of Group D being a flush, you must calculate:

D = total number of hands meeting the conditions;

N = total number of qualifying hands that are flushes;

and then calculate N / D.

To get D, I think it is (again) easier to calculate all the hands from a 52-card deck, which is:

52x51x50x49x48

...and then subtract out all the hands from a 48-card ace-free deck, which is:

48x47x46x45x44

The difference should be the number of hands that include at least one ace = D (quickly, I got 106,398,720 on my $1.00 calculator).

To get N is a bit more tedious, but the number of flushes that include an ace (of course they cannot include two of them) would involve assuming the ace, then calculating the number of combinations of the other 12 cards in the suit, and then multiplying by 4 for the different suits, and then by 5 for the five different positions where the ace could appear.

So, N = (12 x 11 x 10 x 9) x 4 x 5
and I calculate N = 237,600.

So, we then calculate N / D, and I get a number like 0.2233% - which appears to be exactly what my friend Vaj got above and Passacaglia got below.

So, the correct order is:

D
ABCE

q.e.d.

My latest edits corrected my calculator and human errors, and are in italics just above
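The same bookkeeping in a few lines of Python, as a sanity check on the corrected figures (exact integer arithmetic, no $1.00 calculator needed; the helper name is just illustrative):

<PRE>
import math

def falling(n, k):
    # n * (n-1) * ... * (n-k+1): the number of ordered k-card deals from n cards
    return math.prod(range(n, n - k, -1))

D = falling(52, 5) - falling(48, 5)   # ordered hands with at least one ace
N = (12 * 11 * 10 * 9) * 4 * 5        # suit fill-out x 4 suits x 5 positions for the ace

print(D)      # 106398720
print(N)      # 237600
print(N / D)  # 0.002233... -- about 0.2233%, versus 0.1981% for the other groups
</PRE>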



[This message has been edited by QuikSand (edited 12-15-2000).]

Passacaglia
12-15-2000, 08:09 AM
Dola-post.

For the numerator, here's what I did, and probably Vaj as well:

I took all the flushes possible.

52x12x11x10x9 = 617,760

Then subtracted all the flushes with NO aces. Probably a more difficult way of doing it, but it made sense, given how we found the denominator.

Flushes with no aces:
48x11x10x9x8 = 380,160

617,760 - 380,160 = 237,600

237,600/106,398,720 = 0.00223311

That was my reasoning, at least.
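That complement route is easy to verify numerically; a tiny Python check of the figures above:

<PRE>
all_flushes    = 52 * 12 * 11 * 10 * 9   # 617760 ordered flushes
no_ace_flushes = 48 * 11 * 10 * 9 * 8    # 380160 ordered flushes drawn entirely from the 12-card (ace-free) suits
hands_with_ace = 52*51*50*49*48 - 48*47*46*45*44   # 106398720

print(all_flushes - no_ace_flushes)                      # 237600
print((all_flushes - no_ace_flushes) / hands_with_ace)   # 0.00223311...
</PRE>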

------------------
"If we can put a man on the moon, we can grow grass indoors."

Passacaglia
12-15-2000, 08:10 AM
Double-Dola.

Hey, QS! You can't be editing your message while I'm trying to correct you! :P

------------------
"If we can put a man on the moon, we can grow grass indoors."

Passacaglia
12-15-2000, 08:20 AM
QS! You're making me look like a *real* schmuck here, with me replying to everything, then you editing or deleting it! :P

------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-15-2000, 08:22 AM
I got it, Pass. My original calculation was off by x2 from yours, because I made two errors.

I mis-read my overflow of digits on my el cheapo calculator, and was off by a factor of 10 on the number of with-ace hands.

Then, I neglected to multiply my number of flushes by 5, to reflect the five different positions where the ace could appear.

The x10 but /5 errors overlapped to give me double your answer. Now they are untangled above.

Thanks for your help.

Passacaglia
12-15-2000, 08:23 AM
QS, I think the flaw might have been in my reasoning.

I wrote that the number of flushes with no aces is:

48x11x10x9x8 = 380,160

Wouldn't it be:

48x12x11x10x9 = 570,240 ?

Then 617,760-570,240 = 47,520 ?

------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-15-2000, 08:23 AM
And geez, Passacaglia, will you please quit Dolaposting? We're trying to do some math here...

QuikSand
12-15-2000, 08:26 AM
Wouldn't it be:

48x12x11x10x9 = 570,240 ?

the above-quoted selection is from a response that Passacaglia has since sheepishly deleted, after seeing the error of his ways


Nope, that's a 13-card suit. You were right the first time, though that is a clever way to get to my original incorrect answer.

[This message has been edited by QuikSand (edited 12-15-2000).]

Passacaglia
12-15-2000, 08:26 AM
Ha! Now I'm making *you* look like the idiot, since I'm deleting all my dola-posts!

Hey, if I post something, then delete it, do I still get "credit" for the post?

------------------
"If we can put a man on the moon, we can grow grass indoors."

[This message has been edited by Passacaglia (edited 12-15-2000).]

Passacaglia
12-15-2000, 08:28 AM
What do you mean, "that's a 13-card suit?"

------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-15-2000, 08:28 AM
Curse you, Red Baron!

Passacaglia
12-15-2000, 08:43 AM
So here I am, waiting for a response from QS to my question, only to find that he replied to it by editing a post from above! Very sneaky, QS, but it only fooled me for..well, never mind how long it fooled me for! The point is that I know now!

Anyway, I still don't know what you mean by "that's a 13-card suit," but I do see where the x5 comes in, for the different placement possibilities.

AND, I did delete some of my dola-posts, but those were the ones that were covered by your editing of your posts, not the mistakes I made, as you asserted! :P

------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-15-2000, 08:50 AM
Pass, my comment about a "13-card suit" is in response to your now-deleted post where you postulated an alternative calculation for how many flushed include no ace. You came up with:

Wouldn't it be:
48x12x11x10x9 = 570,240 ?


And my response is that your 48 is correct, because you may not start with an ace (so only 48, not 52 to start with). However, by using multipliers starting with 12, you are allowing the ace to be included-- which is forbidden by the restrictions. In my parlance above, when you must exclude the ace, you are essentially working with a deck that only has 12-card suits. Your calculation (above) excluded the ace at first, but then re-introduced it in the later stages.

Curiously, that resulted in the exact same wrong answer for N (by subtraction) as mine did (by neglecting to multiply by five).

Passacaglia
12-15-2000, 09:00 AM
Yep, that's exactly right. It's weird-- I thought I would have been better equipped to do this kind of work today, but it seems I was dead on yesterday. It is interesting how those two incorrect solutions yielded the same answer, though.

------------------
"If we can put a man on the moon, we can grow grass indoors."

Ctown-Fan
12-15-2000, 10:39 AM
The only question I have about the final number crunching is this (and maybe I missed it somewhere through all the numbers talk): isn't (A) all flush hands? That would mean, by virtue of the question, a set which contains hands with or without aces, increasing the pool of possibilities when determining the answer. I guess when I was responding to this before, I was under the assumption that (A) actually contained (B-E) in it. Sort of the whole Venn diagram thing.

Passacaglia
12-15-2000, 10:42 AM
If you look at it as a Venn diagram, imagine a circle, including all possible flush hands, inside the box, all possible hands. As you change states, you reduce the circle, since it's not the case that all flushes are possible, and the box, since it's not the case that all hands are possible!

------------------
"If we can put a man on the moon, we can grow grass indoors."

jason_tobias
12-16-2000, 08:02 PM
Please, in the name of an American icon, Rudy Ray Moore (Dolamite), please refer to a dolapost as anything but a dolapost.

I've tried my very best to maintain myself, but I can't further view the abuse constantly perpetrated on the 'Avenging Disco Godfather,' aka 'DOLAMITE!'

Call it the 'Hamburger King.'

Passacaglia
12-16-2000, 08:30 PM
Yeah, but that was Dolemite, not Dolamite. And if we don't refer to a dolapost as a dolapost, what WOULD we refer to as a dolapost?

------------------
"If we can put a man on the moon, we can grow grass indoors."

ez
12-17-2000, 12:50 AM
yeah, we went through this before. :) the rock/mineral, mountains, and movie character are dolEmite. the namesake of the dolapost (and dolanuts :)) explained that his last name was dola or dola-something, and that his friends(?) therefore tagged him as dolAmite...

Morgado
12-17-2000, 06:35 AM
Originally posted by QuikSand:
The intuitive trick here is to reverse that thinking-- then the math gets much easier.

Just wondering, Quik... have you tried going forwards and doing the counting problem directly? It may be an... interesting... check.

Grunion
12-18-2000, 08:25 PM
QuikSand,

I'm not sure I agree with your conclusions. (However, I'm quite sure that my math skills are not quite as refined as yours, so I'm willing to have the error of my ways pointed out.)

Imagine five independent poker hands being dealt (from separate decks).

Case A:
No cards are known. The first card is exposed and is found to be a four of clubs (for sake of example, in case A it could be any card). The odds of pulling a flush would be the odds of each successive card being a club, or 12/51*11/50*10/49*9/48. (0.198%)

Case B:
The first card is exposed and found to be an ace of spades (for example, in case B it could be any ace). The odds of pulling a flush would be the odds of each successive card being a spade, or 12/51*11/50*10/49*9/48. (0.198% which is identical to case A)

Case C:
The first card is exposed and found to be ace of spades (as a conditional requirement). The probability is identical to cases A & B.

Case D:
Your cards are dealt, and the dealer accidentally deals one face up. It is an ace of hearts (or spades or clubs or diamonds-- the suit of the first known card is always irrelevant). The computed odds of the next four cards matching suit are 12/51*11/50*10/49*9/48. It does not matter that it is not necessarily the first card; the order in which cards are dealt is not significant, making the probabilities identical to Case B.

Case E:
This time the dealer deals an ace of spades face up. As the suit of the first known card is irrelevant, the odds of a flush are identical.

Another, much quicker way of looking at it: no matter how many constraints (suit, number, or order in which it was dealt) are placed on one given card, the odds of matching it with four more of the same suit are always the same. In every case other than A, only one card has any information associated with it.

Also, if it were more likely to pull a flush in case D, it would stand to reason that the same argument could be made for every single card value, making every single card value a higher probability than the default, five-unknown-cards condition.

Hopefully the above made sense. Now who's up for some cards!
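One way to put the competing claims to an empirical test is a rejection-sampling simulation: deal random hands, keep only those meeting the condition, and see how often a flush shows up. A rough Python sketch (the helper names and deal count are just illustrative; a couple of million conditioned deals is enough to separate probabilities that differ in the third decimal of a percent):

<PRE>
import random

RANKS = "23456789TJQKA"
SUITS = "shdc"
DECK = [r + s for r in RANKS for s in SUITS]

def is_flush(hand):
    return len({card[1] for card in hand}) == 1

def estimate(condition, wanted=2_000_000):
    # Estimate P(flush | condition) by dealing random hands and discarding those
    # that fail the condition.
    kept = flushes = 0
    while kept < wanted:
        hand = random.sample(DECK, 5)
        if condition(hand):
            kept += 1
            flushes += is_flush(hand)
    return flushes / kept

print(estimate(lambda hand: True))                            # ~0.00198 (all hands)
print(estimate(lambda hand: any(c[0] == "A" for c in hand)))  # ~0.00223 (at least one ace)
</PRE>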


------------------
Those who don't learn history are doomed to repeat it.

Passacaglia
12-18-2000, 08:48 PM
The only thing I think is wrong with that is where you said that the order of the cards is not significant. It is -- that's how I got to my incorrect answer earlier (if you can glean anything through the rampant editing done by QS and me -- maybe it was just a "had-to-be-there-to-understand" sort of thing -- literally, because most of what we said is gone now). But the order of the cards does matter, so you need to multiply your probability for D by 5, to allow for each position that the ace could be in.

------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-18-2000, 09:03 PM
Grunion, your analysis does make sense. However, the flaw is not in your math per se, but in the translation of the lettered conditions (as set forth) into mathematics.

Your discussion of A,B,C, and E are all right on, and in complete agreement with several other accurate discussions previously on this thread. All good.

As for D, here is your narrative:

Case D:
Your cards are dealt, and the dealer accidentally deals one face up. It is an ace of hearts (or spades or clubs or diamonds-- the suit of the first known card is always irrelevant). The computed odds of the next four cards matching suit are 12/51*11/50*10/49*9/48. It does not matter that it is not necessarily the first card; the order in which cards are dealt is not significant, making the probabilities identical to Case B.

While this has some surface logic, it essentially boils down to a restatement of the conditions of item B, rather than an accurate representation of D. This is even suggested when you say "the next four cards"-- the seeming differentiation between *the first card* in B and *the flipped card* in your example is illusory. It's exactly the same construct-- this is just case B. And thus, the calculation works out the same.

The flaw lies in the assumption that the narrative above is equivalent, in practice, to the conditions of D in this puzzle. The difference is subtle, but meaningful.

In your narrative for D, we now know one thing about the hand-- that it is a hand somewhere within the subset of hands that contain at least one ace. That is true. However, that's not all we "know" about the hand at this point.

We also know that the inherent condition about having at least one ace has already been met. I realize this sounds picky, but it's the difference between right and wrong here.

Consider again all the hands that have at least one ace. Let's break them down into two groups-- based on the outcome of your suggested "misdealt" card.

D1 = the turned card is an ace
D2 = the turned card is not an ace

D1 is the case you describe above. D2 is the rest of the hands from the entire subset D.

Like you said, in D1, we now have no knowledge about the other cards in the hand-- particularly about their rank. Recall, in predicting a flush, it's the duplicate ranks that determine the chances. So, D1 (again, as you have said) has the same probability as A,B,C, or E.

However, D2 is different. For some subset of the D hands, we would have one card turned over accidentally, and we would know that it was a non-ace... say a 4.

Aha! Now we have some information we can use. We know that this hand has at least one ace, so we know that at least one of the other cards *will not match the rank of the exposed card*. This knowledge-- reducing the overall chances of a duplicate rank (I trust that is intuitively obvious)-- therefore increases the chance that the hand is a flush.

And as a result, we now know that:

P(D=flush) is a weighted average of P(D1=flush) and P(D2=flush). And since P(D1=flush) = P(A=flush), and P(D2=flush) > P(A=flush), we now know that P(D=flush) > P(A=flush).

q.e.d.

This all comes back to a fairly tenuous argument (that I fear I have not articulated well) that your narrative description of D above is not in fact a description of D, but instead a description of what I call D1. By excluding the other cases within D, you eliminate the remainder of its probability-- which happens to be the portion that favors a flush.

Hope that helps...
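The D1/D2 split can be checked with direct counts as well. A short Python sketch using the same ordered counting as the earlier N/D calculations (the variable names follow the D1/D2 labels above):

<PRE>
# D1: the exposed card is an ace.
d1_hands   = 4 * 51 * 50 * 49 * 48
d1_flushes = 4 * 12 * 11 * 10 * 9

# D2: the exposed card is a non-ace (48 choices), but the other four cards must still
# contain at least one ace; a flush then forces the ace of the exposed card's suit
# into one of those four slots.
d2_hands   = 48 * (51*50*49*48 - 47*46*45*44)
d2_flushes = 48 * 4 * (11 * 10 * 9)

print(d1_flushes / d1_hands)   # ~0.00198 -- same as an unconditioned hand
print(d2_flushes / d2_hands)   # ~0.00231 -- higher, as argued above

# Weighted together, the two pieces give the overall figure for group D:
print((d1_flushes + d2_flushes) / (d1_hands + d2_hands))   # ~0.002233, the 0.2233% from earlier
</PRE>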

Passacaglia
12-18-2000, 09:14 PM
Um....Go Bucs!

------------------
"If we can put a man on the moon, we can grow grass indoors."

Grunion
12-18-2000, 10:32 PM
QuikSand,

I think we are getting somewhere, however I still disagree.

You have broken D in to D1 & D2 to illustrate your logic. Your explanation was quite clear, I believe I fully understand it.

Our given is that there is at least one ace in our five card hand.

D1: We expose the first card and it is an Ace. No further discussion needed, as we are in agreement.

D2: We expose the first card and it is not an ace. Since an ace-- and therefore a guaranteed non-match to our first card-- is certain, there is an improved probability of a flush. This is because situations where the first two cards match would be eliminated, enhancing flush possibilities. However, let's continue exposing cards. I believe the probability of exposing a pair of aces will be increased, so that while a second-card ace would be guaranteed not to match rank with the first card, there will be a much higher chance for there to be two or more aces in the remainder of the hand, levelling the percentages out.

In case D2, where the first card pulled is not an ace and at least one ace is in the deck, the odds of not pulling a second ace are 47/50*46/49*45/48 = 82.7%.
So a pair of aces is 17.3% likely.

If you remove the mandatory ace condition, the odds of pulling two aces are going to decrease:
48/51*47/50*46/49*45/48 = 77.9% chance of pulling no aces,
which means the probability of pulling at least one ace is 22.1%.
Out of that 22.1%, the odds of not pulling a second ace are identical to the situation above, meaning that the odds of pulling a pair of aces are (0.221*0.173) = 3.82%.

(I know the above is an awkward way to present the calculations, but my statistics book is in my office. I am fairly sure that my logic is sound.)

Anyway, I think the dramatically increased chance of pulling a second ace cancels out the gains in knowing you have an "ace in the hole" as you expose non-ace cards.

Also:
Assume having at least one ace in your five-card hand increased the likelihood of pulling a flush as compared to the average probability (five cards, no info). As each value is equally represented by four cards in a deck, it would stand to reason that having at least one X in your five-card hand increased the likelihood of a flush, where X can be ANY valued playing card. EVERY five-card hand dealt will have at least one X, where X can be any valued playing card.
Therefore, every possible five-card hand has a greater chance to pull a flush than the average probability, which is impossible.

QuikSand
12-19-2000, 08:08 AM
I agree, we are getting somewhere.

In your argument above:


I believe the probability of exposing a pair of aces will be increased, so that while a second-card ace would be guaranteed not to match rank with the first card, there will be a much higher chance for there to be two or more aces in the remainder of the hand, levelling the percentages out.

In case D2, where the first card pulled is not an ace and at least one ace is in the deck, the odds of not pulling a second ace are 47/50*46/49*45/48 = 82.7%.

So a pair of aces is 17.3% likely.

If you remove the mandatory ace condition, the odds of pulling two aces are going to decrease: 48/51*47/50*46/49*45/48 = 77.9% chance of pulling no aces.

Which means the probability of pulling at least one ace is 22.1%. Out of that 22.1%, the odds of not pulling a second ace are identical to the situation above, meaning that the odds of pulling a pair of aces are (0.221*0.173) = 3.82%.

(I know the above is an awkward way to present the calculations, but my statistics book is in my office. I am fairly sure that my logic is sound.)


I'm not checking your math, but I will contest your logic.

Your discussion of the increased likelihood of coming up with two (or more) aces is certainly on point. And, by itself, that suggests that there might be a reduced chance of getting a flush.

However, it's not complete to look at that issue in isolation. What about all the other ranks of cards? If your "shown" card is a deuce, and we add the promise of at least one ace in the hand-- doesn't that clearly mean that the chances of our seeing duplicate threes or queens in this hand decrease? Of course it does-- I'm not going to bother with the mathematics, it's intuitively obvious.

So, the complete analysis of the likelihood of getting duplicate cards within each rank would tell us something like this:

(Shown card = deuce, hand has at least one ace)

-Chances of getting duplicate aces = higher than without second condition

-Chances of getting duplicate deuces = lower than without second condition

-Chances of getting duplicates of any other rank = lower than without second condition

I'm not going to work through all the math here (ugh), but suffice it to say that with multiple effects in countervailing directions, it is no cinch that we end up with anything like the following conclusion:


Anyway, I think the dramatically increased chance of pulling a second ace cancels out the gains in knowing you have an "ace in the hole" as you expose non-ace cards.

- - -

In your second, more intuitive, argument, you observe:

Also: Assume having at least one ace in your five-card hand increased the likelihood of pulling a flush as compared to the average probability (five cards, no info). As each value is equally represented by four cards in a deck, it would stand to reason that having at least one X in your five-card hand increased the likelihood of a flush, where X can be ANY valued playing card. EVERY five-card hand dealt will have at least one X, where X can be any valued playing card. Therefore, every possible five-card hand has a greater chance to pull a flush than the average probability, which is impossible.


The flaw here is that this is an ex post facto condition-- categorizing the hand after we see what's inside. Once we see what's in the hand, of course we will be able (after the fact) to go back and recognize that it fits into various subsets of all the possible hands. (Hands with at least one ace, hands with at least one queen, etc.) Some of these subsets will have a higher likelihood of being a flush than others, and many will have a different flush probability than the entire set of possible hands-- but that doesn't invalidate a mathematical analysis about the flush probability of any of the given subsets.

QuikSand
12-19-2000, 08:14 AM
Another intuitive argument about the "hand with at least one ace" is as follows:

By describing the condition D that the hand has at least one ace, we essentially divide the universe of all possible hands (A) into two subsets. The other subset (A-D) would be the hands that do not include at least one ace.

I would argue that you could characterize the set of (A-D) as being "the hands that do not make use of all the ranks in the deck." (It obviously doesn't matter whether the condition is that the hand has one ace, one queen, or whatever) As I have stated before, this subset (A-D) is exactly represented by the entire universe of hands that could be created by a 48-card deck that simply was stripped of the rank mentioned in the conditions for D.

Intuitively, as you "shorten" each rank within the deck, you make it increasingly improbable that the hand will remain in suit and stay a flush. This intuition is borne out by considering the extreme case-- a deck with only 5-card suits. In that case, it's painfully clear that drawing a flush is an extreme improbability.

By this logic, the subset of (A-D) has a lesser probability of being a flush than does the full set A. Since the subset D is what is "added" to the subset (A-D) to get to the full set A, it's clear that the flush probability of D must be greater than that of A.
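That argument is easy to check numerically: compute the flush probability for the full deck and for the 48-card ace-free deck. A Python sketch (the function name and parameterization are just illustrative):

<PRE>
def flush_prob(ranks_per_suit, suits=4):
    # Probability that five ordered cards from a deck with the given suit length form a flush.
    n = ranks_per_suit * suits
    flushes = n * (ranks_per_suit - 1) * (ranks_per_suit - 2) * (ranks_per_suit - 3) * (ranks_per_suit - 4)
    hands = n * (n - 1) * (n - 2) * (n - 3) * (n - 4)
    return flushes / hands

print(flush_prob(13))   # full 52-card deck:      ~0.00198
print(flush_prob(12))   # 48-card ace-free deck:  ~0.00185 (already smaller)
print(flush_prob(5))    # 20-card, 5-card suits:  ~0.00026 (far smaller)
</PRE>

Since the ace-free subset comes in below the overall 0.198%, the with-ace subset has to come in above it for the weighted average to work out.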

QuikSand
12-19-2000, 09:08 AM
Flush Calculation (the "direct" method)

There was at least one request for this "direct" calculation, which is a little bit more tedious, but I suppose it may help to reinforce the correct answer.

Condition D - Hand has at least one ace

I'll use my standard method -- calculating D and N, where:

D = total number of hands meeting the stated condition

N = number of such hands that are flushes

- - -

Calculating D directly is a pain in the ass.

First, consider the one-ace hand:
4x48x47x46x45 = 18,679,680
x 5 different positions = 93,398,400
and since there is only one card we have no duplication

Second, consider the two-ace hand:
4x3x48x47x46 = 1,245,312
x 20 different positions = 24,906,240
and divide by 2! (2) to eliminate duplicates = 12,453,120

Next, consider the three-ace hand:
4x3x2x48x47 = 54,144
x 60 different positions = 3,248,640
and divide by 3! (6) to eliminate duplicates = 541,440

Finally, the four ace hand:
4x3x2x1x48 = 1,152
x 120 different positions = 138,240
and divide by 4! (24) to eliminate duplicates = 5,760

Adding the four together gives us 106,398,720 different hands that have at least one ace.

As predicted, this is identical to the number reached on page two of this thread by the simpler calculation of subtracting the 48-card-deck hands from the 52-card-deck hands.

D = 106,398,720

- - -

Calculating N directly is actually pretty easy:

Start with the 4 aces, and multiply by the number of cards remaining in suit:

4 x (12 x 11 x 10 x 9) = 47,520

Then, multiply this by 5 to show each of the five positions for the ace:

47,520 x 5 = 237,600

This is the total number of flushes that contain at least one ace. (And, of course, they all contain exactly one ace-- that's what makes this a simpler calculation than D above.)

N = 237,600

- - -

To calculate the probability that a given five-card hand, known to have at least one ace, is a flush, we take N / D.

N = 237,600
D = 106,398,720

N/D = 0.22331%

This probability is, indeed, higher than the 0.198% or so that everyone has calculated as the correct probability for A (as well as B,C, and E).

Therefore, the answer to the problem remains:

D
ABCE

q.e.d.
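As one last cross-check, the same ratio falls out of a brute-force enumeration over unordered hands. A Python sketch using itertools (it walks all C(52,5) = 2,598,960 combinations, so it takes a few seconds):

<PRE>
from itertools import combinations

RANKS = "23456789TJQKA"
SUITS = "shdc"
DECK = [r + s for r in RANKS for s in SUITS]

ace_hands = ace_flushes = 0
for hand in combinations(DECK, 5):
    if any(card[0] == "A" for card in hand):
        ace_hands += 1
        if len({card[1] for card in hand}) == 1:   # all five cards share a suit
            ace_flushes += 1

print(ace_hands)                # 886656 (= 106398720 / 5!, the unordered count)
print(ace_flushes)              # 1980   (= 237600 / 5!)
print(ace_flushes / ace_hands)  # 0.0022331..., the same 0.2233%
</PRE>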

Passacaglia
12-19-2000, 09:32 AM
What does q.e.d. mean again?

------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-19-2000, 09:51 AM
q.e.d. = (Latin) quod erat demonstrandum, meaning "that which was to be shown" or somesuch. Used mathematically to say "this is what we were after."

Grunion
12-19-2000, 03:18 PM
QuikSand,

I think I got it.

# of Total flushes = 52*12*11*10*9
=617,760

# of flushes w/ an Ace = 4*12*11*10*9
=47,520

# of total hands = 52*51*50*49*48
=311,875,200

# of total hands with an ace = 4*51*50*49*48
=23,990,400

617,760/311,875,200=.00198
47,520/23,990,400=.00198

------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-19-2000, 03:24 PM
Once again, Grunion, your math looks fine, but it is the math for the wrong problem.

Above, you are incorrectly labeling your calculations-- your numbers are not showing the total hands with an ace, nor the total flushes with an ace. Instead, they correctly count the hands (of each type) that start with an ace. (This was case B in the original puzzle.)

And, as we have (all) agreed, that condition doesn't change the flush probability... as your correct (albeit mislabeled) mathematics reinforce.

[This message has been edited by QuikSand (edited 12-19-2000).]

Grunion
12-19-2000, 04:20 PM
This case describes a hand with at least one ace, with the position of the ace itself irrelevant. The placement of the known ace is irrelevant, if all other cards are unknown.

# of Total flushes = 52*12*11*10*9
=617,760

# of flushes w/ an Ace = 1/5(4*12*11*10*9)*1/5(12*4*11*10*9)*1/5(12*11*4*10*9)*1/5(12*11*10*4*9)*1/5(12*11*10*9*4)
=47,520

# of total hands = 52*51*50*49*48
=311,875,200

# of total hands with an ace = 1/5(4*51*50*49*48)*1/5(51*4*50*49*48)*1/5(51*50*4*49*48)*1/5(51*50*49*4*48)*1/5(51*50*49*48*4)
=23,990,400

617,760/311,875,200=.00198
47,520/23,990,400=.00198



------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-19-2000, 06:43 PM
Originally posted by Grunion:
This case describes a hand with at least one ace, with the position of the ace itself irrelevant. The placement of the known ace is irrelevant, if all other cards are unknown.

# of flushes w/ an Ace = 1/5(4*12*11*10*9)*1/5(12*4*11*10*9)*1/5(12*11*4*10*9)*1/5(12*11*10*4*9)*1/5(12*11*10*9*4)
=47,520

# of total hands with an ace = 1/5(4*51*50*49*48)*1/5(51*4*50*49*48)*1/5(51*50*4*49*48)*1/5(51*50*49*4*48)*1/5(51*50*49*48*4)
=23,990,400

47,520/23,990,400=.00198


Some very fancy numberworking here, to be sure.

(I'm assuming that some of the asterisks were intended to be plus signs... I'm a little bit perplexed by the presence of all the 1/5s in the formula, but they are in the end harmless, as they cancel out in the final ratio. Converting some * to + is the best remedy I could think of, but I'm still not sure I'm following the logic there)

That said, a problem (I can't say right now if it's the only problem) with this calculation is that it counts certain hands more than once. Specifically, in the latter calculation (finding the number of hands with at least one ace), your numbers strongly suggest that you are considering five independent cases: one representing the appearance of the ace in each position of the hand. This is, of course, correct.

So, you consider each of the following:
a(4*51*50*49*48) - ace is in first position
b(51*4*50*49*48) - ace is in second position
c(51*50*4*49*48) - ace is in third position
d(51*50*49*4*48) - ace is in fourth position
e(51*50*49*48*4) - ace is in fifth position

However, this fails to account properly for the hands in which more than one ace appears. Consider the following hand:

AH-4C-5S-QH-AD

Since the card in slot #1 is an ace, it's certainly being counted in group a above; and since there's an ace in slot 5 (the AD is definitely one of the 48 cards for that slot), it's also being counted in group e above (the ace of hearts is definitely included in the 51 for that slot).

By overstating the total number of hands with an ace, this then goes to understate the final ratio (since the denominator is too big).

Is it coincidence that this overcounting of hands leads to exactly the same probability as in case A? No, because the construct shown above is simply an algebraic restatement of the same calculations in your earlier post, which are themselves just a calculation for case B (rather than case D, which we're trying to solve) in the original problem (case B = hand's first card is an ace). It's no coincidence; it's a direct product of that original mistranslation.
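The size of that double-counting can be pinned down with the exactly-k-aces figures from the direct calculation earlier in the thread. A few lines of Python:

<PRE>
# Ordered counts of hands with exactly k aces, from the "direct method" post above.
exactly = {1: 93_398_400, 2: 12_453_120, 3: 541_440, 4: 5_760}

correct_total = sum(exactly.values())       # 106398720
naive_total = 5 * (4 * 51 * 50 * 49 * 48)   # the five position-by-position groups simply added up

# A hand with k aces gets counted once per ace position, i.e. k times, so the overstatement is:
overcount = sum((k - 1) * n for k, n in exactly.items())

print(naive_total)                # 119952000
print(correct_total + overcount)  # 119952000 -- the overstatement exactly accounts for the gap
</PRE>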

QuikSand
12-19-2000, 06:47 PM
Incidentally, let me apologize for the pedantic tone in my writings on this thread, in particular those directed toward Grunion. I happen to believe (quite forcefully) that I am correct in this problem. If it turns out that I am demonstrably wrong, I'll quickly and publicly acknowledge Grunion and anyone else appropriate for their assistance in showing me the error of my ways.

QuikSand
12-19-2000, 07:09 PM
A couple more ideas for Grunion or anyone who is not yet sold on the "D is higher" answer. If you've got the patience to stick it out this far... more power to you.

Easy one first

--Grunion, since you've (in part) agreed to work this through with my methods (counting all the hands, and taking the ratio)-- how would you set that construct up for selection B? It will, I believe, come out as exactly the same setup as what you are suggesting is correct for selection D. Is this a coincidence, or are they exactly the same hands? (Neither one makes sense to me.)

Tougher one next

--Another idea is this: I could have added another selection F to say "Hands with no card higher than a king." We should be able to agree that if the subset of A that has at least one ace has the same proportion of flushes as does A, then the remainder of set A should have that same proportion. (I don't expect an argument on that.) Therefore, if you answer that D has the same flush probability as A, then you would have to say that F has the same probability.

Wouldn't it stand to reason, then, that the flush probability in selection F shouldn't change when we "lower the bar" by additional ranks? Under your calculations, we ought to be able to add new subsets of hands that would also have the same probability, by incrementally "throwing out" one more rank. First we throw out aces, then kings, etc. Eventually, we should be able to get all the way down to "Hands with no cards over a six." And that one should have the exact same probability as the original set A.

However, that one is easy to calculate. It's just 20x4x3x2x1=480 flush hands out of 20x19x18x17x16=1,860,480 possible hands-- which works out to about 0.026%, a MUCH smaller chance than in the full set A. This makes intuitive sense-- once the suits get smaller, it gets tougher to "stay within the suit" and keep building a flush.

This is, in a sense, a reductio ad absurdum argument that also reinforces the correct answer-- that set D has a greater likelihood of generating a flush.
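The "lower the bar" sequence is easy to tabulate. A Python sketch that strips out one rank at a time and recomputes the flush probability with the same ordered counting (the helper name is just illustrative):

<PRE>
import math

def falling(n, k):
    return math.prod(range(n, n - k, -1))

for ranks in range(13, 4, -1):   # 13 ranks per suit (full deck) down to 5 (the 20-card deck)
    deck = 4 * ranks
    prob = deck * falling(ranks - 1, 4) / falling(deck, 5)
    print(f"{ranks:2d} ranks per suit ({deck:2d} cards): {prob:.4%}")
# The probability drops steadily, from about 0.198% with 13 ranks to about 0.026% with 5.
</PRE>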

Grunion
12-19-2000, 10:10 PM
QuikSand,

First off, I am considering our conversation nothing more than a friendly debate. Even though I haven't posted on them before, I enjoy your frequent OT brainteasers. Please don't take our dialog as anything more than a friendly exchange of ideas.

Yes, you are correct-- I got carried away with my *'s in my formula. Your assumption that there should be plusses before each 1/5 is correct.

I don't know how to reference part of your post as a quote, but what follows is from an earlier post of yours.

So, you consider each of the following:
a(4*51*50*49*48) - ace is in first position
b(51*4*50*49*48) - ace is in second position
c(51*50*4*49*48) - ace is in third position
d(51*50*49*4*48) - ace is in fourth position
e(51*50*49*48*4) - ace is in fifth position

However, this fails to account properly for the hands in which more than one ace appears. Consider the following hand:

AH-4C-5S-QH-AD

Since the card in slot #1 is an ace, it's certainly being counted in group a above; and since there's an ace in slot 5 (the AD is definitely one of the 48 cards for that slot), it's also being counted in group e above (the ace of hearts is definitely included in the 51 for that slot).

This ends your post.

In response to the above: I believe that my equation above does properly account for hands with more than one ace. And that is because of your constraint. We know that at least one card is an ace. In no case do we ever know that more than one card is an ace.
Also, we do not know where that ace is. However, we can be assured that no one position has a greater likelihood of having an ace than any other (implied by the original constraint and stated by yourself in an earlier post). I completely agree that there is a greater than 1-in-5 chance of any given card being an ace, due to duplicate aces. However, the extra aces are accounted for in my equations-- more specifically, the one I posted prior to my last one (4*51*50*49*48).

In essence, our disagreement boils down to this. You believe that the uncertainty of not knowing exactly which ace you have, or where it is, somehow affects the percentage of pulling a flush. I believe that the knowledge that there is one ace provides us with no additional information which would alter the baseline probability. I also believe that whether we have additional information on the location of the ace in the hand, or on the suit of the ace, does not affect the probabilities of a flush.

In response to your last post:

To answer your easy question: Yes, they are identical. That is exactly my point. It is not a coincidence; they are exactly the same hands. In every case which you initially presented, only one card was known (other than the baseline case A). In each of those instances, we know that the card is an ace. Here is what intuitively makes no sense to me: We agree that in case B (first card ace) there is no change in the probability of pulling a flush. We also agree that in case C (first card ace of spades) there is no change in the probability of pulling a flush. Therefore we can conclude that knowing the suit of the card in question does not change the odds of pulling four more cards to match the suit of the first.
Now, in case E (hands w/ the ace of spades) we also agree that the odds of pulling a flush are unchanged. Therefore, we can conclude that the location of the ace of spades is irrelevant, and it is not significant whether we know where the ace of spades is.
We have now established that neither the knowledge of suit nor the knowledge of location has any bearing on the odds of pulling a flush. In case D, we know neither the suit nor the location.

Tough question:
I think I can provide a suitable answer. The difference between the two conditions is that in the case of at least one ace, we know that there is one ace, and the four random cards would be pulled from a pool of 51 cards, as multiple aces would be allowed, which takes us back to:
# of flushes w/ an Ace = 4*12*11*10*9
=47,520
# of total hands with an ace = 4*51*50*49*48
=23,990,400

In the case of no card higher than a king, we have much different information. We now have no idea what the value of any card is. We also have partial knowledge of what all five cards are not (in this case Aces).
Therefore, the # of flushes with no aces allowed is: 12*11*10*9*8=95040
And the total number of hands allowed are:
48*47*46*45*44=205476480
flush probability = 0.046%
The difference between the two is that in the first case, your pool of cards remained the same. Yes, you put a constraint on the available permutations by fixing the rank of one of the cards, but this constraint by itself is meaningless. (Akin to how pulling all of the clubs out of the deck would not influence the 1-in-13 chance of pulling an ace on your first try.)
In the second case, you are changing the size of the deck, and this does impact the probability of pulling a flush. (From 0% when only the 2 through 5 of each suit remain, to 1 in 256 with an infinite number of cards.)

One more argument (and I say argument in the friendliest, most non-confrontational way possible):

Take case B. This case is perfectly represented by dealing one card face down from a deck consisting of only the four aces and then combining the remaining three aces with the remainder of the deck (2-K) and shuffling, then dealing four more from the remainder of the deck face down.

Now, shuffle the hand that you are dealt. Now the condition of the cards exactly mirrors the condition of case d. How did the odds of pulling a flush increase by shuffling our own hand?



------------------
Those who don't learn history are doomed to repeat it.

Passacaglia
12-20-2000, 07:41 AM
Jumping in here..

Grunion, I don't see how you could say that your equation provides for the cases with more than one ace. When I look at your equation, it seems to me to say,
"There is a 1/5 probability that the first card is an ace, and there are no others,
There is a 1/5 probability that the second card is an ace, and there are no others,
There is a 1/5 probability that the third card is an ace, and there are no others,
There is a 1/5 probability that the fourth card is an ace, and there are no others,
There is a 1/5 probability that the fifth card is an ace, and there are no others."

------------------
"If we can put a man on the moon, we can grow grass indoors."

Passacaglia
12-20-2000, 07:45 AM
I remember always having trouble figuring out how many cases of "at least one" of something there are, until I'd remember to count how many cases have none of them and subtract that from the sample set.
So I would say the number of hands with at least one ace is:

Number of possible hands - number of hands with no aces

Number of possible hands = 52x51x50x49x48 = 311,875,200

Number of hands with no aces = 48x47x46x45x44 = 205,476,480

311,875,200 - 205,476,480 = 106,398,720
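For anyone who wants to double-check that subtraction, a minimal Python sketch (math.perm needs Python 3.8 or later; these are ordered five-card deals, matching the products above):

from math import perm

total_hands      = perm(52, 5)              # 52*51*50*49*48
no_ace_hands     = perm(48, 5)              # 48*47*46*45*44
at_least_one_ace = total_hands - no_ace_hands

print(total_hands, no_ace_hands, at_least_one_ace)
# 311875200 205476480 106398720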


------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-20-2000, 08:32 AM
Grunion, I think we are making progress now.
Incidentally, the instructions for "quoting" and other formatting are on the faq link at the top right of this page, I believe.

In the case of no card higher than a king, we have much different information. We now have no idea what the value of any card is. We also have partial knowledge of what all five cards are not (in this case Aces).

Therefore, the # of flushes with no aces allowed is: 12*11*10*9*8=95040

And the total number of hands allowed are:
48*47*46*45*44=205476480

flush probability = 0.046%

Okay, we have a problem here. I think you have calculated the number of non-ace flushes incorrectly-- there are quite a number of examples on this thread of a correct calculation, but 12x11x10x9x8 isn't enough-- I believe that your 12 should be a 48-- which rapidly goes to prove my point.

My point being-- step by step:

-Given: the entire set of hands A has a 0.198% flush probability

-We know that the entire set A can be broken into two mutually exclusive (complementary) sets:

D = hands with at least one ace
F = hands with no card over a king

(We know that each and every hand in A is either in D or in F, but not both, and that there are a positive number of each)

-If, as you argue, the flush probability of the hands in D is EXACTLY the same as the flush probability in A...

-Then, the flush probability in F must be exactly identical to that of both A and D. (Otherwise, adding those hands to D would alter the aggregate set's flush probability)

But, as you calculate above (and others have done elsewhere) - this isn't the case. The flush probability of F is clearly not the same as A (or D, for that matter). In fact, it is less than the flush probability of set A (for a variety of intuitive reasons previously stated on this thread).

Therefore, the correct balance is this:

Add set F (with lower flush probability than A)

to set D (with higher flush probability than A)
and get set A (with a flush probability in between those two).
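The balance described above is easy to check numerically; here is a minimal Python sketch over unordered hands (math.comb), with D as the ace hands and F as the king-or-lower hands:

from math import comb

all_hands = comb(52, 5)                    # set A
f_hands   = comb(48, 5)                    # set F: no card over a king
d_hands   = all_hands - f_hands            # set D: at least one ace

all_flushes = 4 * comb(13, 5)              # flushes in A (straight and royal flushes included)
f_flushes   = 4 * comb(12, 5)              # flushes built without an ace
d_flushes   = all_flushes - f_flushes      # flushes holding an ace

print(d_flushes / d_hands)                 # ~0.00223 -- higher than A
print(f_flushes / f_hands)                 # ~0.00185 -- lower than A
print(all_flushes / all_hands)             # ~0.00198 -- set A, in between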

QuikSand
12-20-2000, 08:49 AM
One more argument (and I say argument in the friendliest, most non-confrontational way possible):

I appreciate that.

Take case B. This case is perfectly represented by dealing one card face down from a deck consisting of only the four aces and then combining the remaining three aces with the remainder of the deck (2-K) and shuffling, then dealing four more from the remainder of the deck face down.

Now, shuffle the hand that you are dealt. Now the condition of the cards exactly mirrors the condition of case d. How did the odds of pulling a flush increase by shuffling our own hand?

Good, thought-provoking question.

The difference between your two cases is that in case D (the second one you describe), you are not generating the hands randomly. By starting with an ace and then drawing randomly from the rest of the deck, you are (among other things) increasing the chances of getting a hand with multiple aces. This does not map accurately to the same subset of hands generated by a random draw to the entire deck. The shuffling only upsets the order-- but it doesn't magically restore the probability.

You could follow that logic ("this technique results in all hands that have at least one ace") to its extreme by starting with an ace (drawn from the 4 aces) and one other randomly-drawn non-ace card (drawn from the 48 non-aces) and then drawing from the remaining shuffled deck of 50 to get the other three. You'll still have some probability of ending up with any of the hands that are within subset D, but you do not have the same probability of drawing each one, since the method of the draw is non-random.

So, in essence, I challenge your statement that "Now the condition of the cards exactly mirrors the condition of case d." I don't believe that to be so.
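A rough Monte Carlo sketch of the difference being argued here (the function names are just for illustration): one method deals five cards at random and keeps only the hands that happen to hold an ace; the other seeds the hand with an ace and draws the remaining four from the other 51 cards. With enough trials the first lands near a 0.223% flush rate and the second near 0.198%, and the two methods produce noticeably different shares of multiple-ace hands.

import random

DECK = [(rank, suit) for rank in range(13) for suit in range(4)]   # rank 0 = ace
ACES = [card for card in DECK if card[0] == 0]

def is_flush(hand):
    return len({suit for _, suit in hand}) == 1

def n_aces(hand):
    return sum(rank == 0 for rank, _ in hand)

def conditioned_deal():
    # deal five cards truly at random; keep only hands that happen to hold an ace
    while True:
        hand = random.sample(DECK, 5)
        if n_aces(hand):
            return hand

def seeded_deal():
    # set one ace aside first, then draw the other four from the remaining 51 cards
    ace = random.choice(ACES)
    return [ace] + random.sample([card for card in DECK if card != ace], 4)

N = 1_000_000
for deal in (conditioned_deal, seeded_deal):
    hands = [deal() for _ in range(N)]
    flush_pct = 100 * sum(map(is_flush, hands)) / N          # roughly 0.223 vs 0.198
    multi_pct = 100 * sum(n_aces(h) > 1 for h in hands) / N  # roughly 12.2  vs 22.1
    print(deal.__name__, round(flush_pct, 3), round(multi_pct, 1))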

QuikSand
12-20-2000, 01:41 PM
Grunion, I'm taking a step back here, and making a broader, simpler appeal to intuition.

Above, your analysis of the puzzle states the following:

# of total hands = 52*51*50*49*48
=311,875,200

# of total hands with an ace =
1/5(4*51*50*49*48) + 1/5(51*4*50*49*48) + 1/5(51*50*4*49*48) + 1/5(51*50*49*4*48) + 1/5(51*50*49*48*4)
=23,990,400

Now, without even getting into any of the main issues upon which we currently disagree... we ought to be able to agree on one thing:

A hell of a lot more than 7 or 8 percent of five-card hands are going to have at least one ace.

The approximately 8% comes from taking the calculations above of total hands with an ace versus total hands-- 24 million into 312 million

This one you can take to the laboratory, if you like. Go and deal out 100 poker hands, and count up how many have aces. It most certainly won't end up being 7 or 8-- it's going to (intuitively) be 30 or 40 or so, right?

If absolutely nothing else I'm arguing sticks, this ought to give you a strong hint that there's something fishy with your set of numbers above.

Passacaglia
12-20-2000, 02:48 PM
Oddly enough, that 7 or 8 percent figure is 1/13, which is the probability that a ONE card hand will include an ace.

------------------
"If we can put a man on the moon, we can grow grass indoors."

QuikSand
12-20-2000, 04:55 PM
Passacaglia, it's not odd at all. It's a direct product of the fact that all he's doing (rather than calculating all the hands that have one or more aces) is calculating all the hands that start with an ace. Therefore, the options for the first card are cut from 52 to 4, and therefore the number of hands by a factor of 1/13. It's no mystery-- it's just another product of the errant translation of the conditions described into the count used.

Correctly counting the total number of hands that contain one or more aces is either pretty easily done using the "subtraction" method that you and I have used above... or a little tougher using the "direct" method that I did above... but either one yields the correct result, which is 106,398,720.

Incidentally, this number also passes the "smell test" when compared to what we'd expect. It represents about a third of the total possible hands (311,875,200) -- which is in keeping with what you'd see if you started dealing yourself random hands of cards: about one in three have at least one ace. (You get 5 shots at a 1/13 probability... that's pretty intuitive, but someone could work out the math)
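The math someone could work out is only a couple of lines; a sketch in Python:

from math import comb

print(1 - comb(48, 5) / comb(52, 5))   # ~0.341 -- about one hand in three holds an ace

# the quick "five shots at a 1/13 chance" estimate lands a bit high,
# because it double-counts the hands that hold more than one ace
print(5 / 13)                          # ~0.385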

Grunion
12-20-2000, 11:42 PM
Hi Guys,

In response to various comments:

From Pass.
Grunion, I don't see how you could say that your equation provides for the cases with more than one ace. When I look at your
equation, it seems to me to say,
"There is a 1/5 probability that the first card is an ace, and there are no others,
There is a 1/5 probability that the second card is an ace, and there are no others,
There is a 1/5 probability that the third card is an ace, and there are no others,
There is a 1/5 probability that the fourth card is an ace, and there are no others,
There is a 1/5 probability that the fifth card is an ace, and there are no others."

That is not what it says, to use the first case as an example:
1/5(4*51*50*49*48) means that with a guaranteed ace, there is a 1/5 chance that the guaranteed ace will fall in the first position, and it can be one of four aces. The following card can be one of 51 remaining cards, which does include the three remaining aces, with the process continued until all five cards are dealt.
The 1/5 factors are meaningless, and will cancel each other out. A flush is a flush is a flush no matter what order the cards were dealt in, so the full equation simplifies to: 4*51*50*49*48. That is how I originally stated it, but broke it out into 1/5's to illustrate a point to QuikSand. The equation above accounts for aces, as there are all 51 remaining cards to choose from.

Also, I must apologize; some of my number crunching has been sloppy, and it has been getting in the way of drawing this discussion to a conclusion. I have calculated that there is a 34.1% chance of pulling an ace, which appears to agree with QuikSand's figures.

From QuikSand:

The difference between your two cases is that in case D (the second one you describe), you are not generating the hands randomly. By starting with an ace and then drawing randomly from the rest of the deck, you are (among other things)increasing the chances of getting a hand with multiple aces. This does not map accurately to the same subset of hands generated by a random draw to the entire deck. The shuffling only upsets the order-- but it doesn't magically restore the probability.

You could follow that logic ("this technique results in all hands that have at least one ace") to its extreme by starting with an
ace (drawn from the 4 aces) and one other randomly-drawn non-ace card (drawn from the 48 non-aces) and then drawing from
the remaining shuffled deck of 50 to get the other three. You'll still have some probability of ending up with any of the hands that are within subset D, but you do not have the same probability of drawing each one, since the method of the draw is non-random.

So, in essence, I challenge your statement that "Now the condition of the cards exactly mirror the condition of case d." I don't
believe that to be so.

I stand by my assessment that this example accurately represents case D. Of course there is an element of unrandomness in the process, but that is because there is a loss of randomness due to the constraint of at least one ace. In my example, the bottom line is we know there is at least one ace, that the remaining four cards were drawn randomly from the remaining 51 cards, and that both the suit of the ace and position of the ace are unknown. I think we agree that we know exactly the same amount of information about my example as your defined case D. It appears that your contention is that I have not undergone a random process to get there, thereby skewing the results.
I "start" with an ace because an ace is a constraint of the condition. We know that an ace is present. Pulling a random ace in the beginning of my exersize is done to meet that condition. Your contention that my method increases the probability of pulling multiple aces is entirely correct. It mirrors your constraint. Case D will result in more multiple ace hands than case A. I also completely agree that shuffling will only upset the order, not the probability. That has been one of my major points throughout this discussion, and is the reason that knowing you have a first card ace is no more relevant than knowing you have at least one ace. In both cases, the rank of exactly one card is known. The location of the ace (or the fact that the ace is known to be in the hand, but its precise location is unknown) is irrelevant to the probability of pulling a flush.

I fully agree with your contention that drawing a non ace as a second card would provide the same subset of results (since a 5 of a kind is impossible) and yet not be random. The fact that this is true does not mean that it is impossible to accurately model a scenario which is random.

Your discussion on mutually exclusive sets is interesting. I'm going to sleep on it.


------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-21-2000, 08:07 AM
Grunion,

Your recently posted:

# of total hands = 52*51*50*49*48
=311,875,200

# of total hands with an ace = 4*51*50*49*48
=23,990,400

But, after some commentary on the inappropriate conclusions this leads to, you have posted this:

I have calculated that there is a 34.1% chance of pulling an ace, which appears to agree with QuikSand's figures.

Okay, I presume that your newfound calculation of 34.1% as the likelihood of getting one or more aces from a draw of five cards must reflect that you have revised your counts of either:

-the total number of hands possible (a pretty simple calculation, upon which we have agreed); or

-the number of hands with an ace (a tougher calculation, upon which we have disagreed).

[This presumes that we are in agreement that the probability that a random hand contains one or more aces is equal to the ratio of these two numbers-- I cannot imagine that we have disagreement there]

I strongly suspect the latter is the case. In that case, I now assume that you have joined me in my calculation of 106,398,720 as the correct total of hands that include at least one ace. The laborious version of this calculation is above, stamped posted 12-19-2000 07:08 AM. I'm assuming at this point that you now agree with that calculation.

This would, I think you will also agree, invalidate your calculations of the likelihood of getting a flush within those 106,398,720 hands-- as your calculations are now invalidated by having a dramatically different denominator.

We're not yet to the point of agreeing on the numerator for that calculation, as we have used different methods to calculate the number of flushes that include at least one ace. Mine is contained in the same post I referenced just above. If we can agree to both parts of that fraction, then we may be past this thing.

- - -

Case D will result in more multiple ace hands than case A.

Your discussion on this topic misses my main point. Of course case D will result in more multiple ace hands than case A. My point is that the case you described (starting with an ace, and then randomly drawing from the deck) will give you the same set of hands as those in case D, but those hands will not be distributed in the same manner as in case D. The non-random distribution skews the probability toward an increased number of hands with more than one ace than were represented in the randomly-drawn case D. By doing so, this reduces the flush probability of that set-- and mathematically, it does so back to the flush probability of the original set A. Again, we are stuck on the issue of the substantive difference between set B and set D. Same old song.

I'm buoyed by the notion that you agree with one observation of mine-- intuitively, we are standing on the same ground, it seems:

I fully agree with your contention that drawing a non ace as a second card would provide the same subset of results (since a 5 of a kind is impossible) and yet not be random.

So, you'll buy the argument that the "two-card headstart" (selecting two cards by a nonrandom manner, then drawing randomly) will generate exactly the same group of hands but in a different, nonrandom distribution. It doesn't seem a huge leap of faith to me to then recognize that your method-- the "one-card" headstart (selecting one card by a nonrandom manner, then drawing randomly)-- is also likely to yield the same thing: the same domain of possibilities, but not the same internal probabilities.

- - -

Your discussion on mutually exclusive sets is interesting. I'm going to sleep on it.

We're in the same time zone... you must lead a far more interesting life than I do if you're just getting around to sleeping on anything at 1am. I'd been cuddled up with my bottle of cough medicine for 2 hours by then. Nighty-nite.

Grunion
12-21-2000, 11:22 AM
So, you'll buy the argument that the "two-card headstart" (selecting two cards by a nonrandom manner, then drawing randomly) will generate exactly the same group of hands but in a different, nonrandom distribution. It doesn't seem a huge leap of faith to me to then recognize that your method-- the "one-card" headstart (selecting one card by a nonrandom manner, then drawing randomly)-- is also likely to yield the same thing: the same domain of possibilities, but not the same internal probabilities.

I completely disagree with the above statement, for the following reason. A concept that everyone has agreed upon is that the probability of pulling a pair is inversely related to the probability of pulling a flush, i.e. as conditions are set which make the odds of pulling a pair greater, the odds of pulling a flush are lessened. In the one-card scenario, the odds of pulling any pair do not change. The rank of the first card has no bearing on the overall chance for a pair. Naturally, upon discovery of the first card, the odds of pulling a pair of the same rank as the first card increase, but the odds of pulling a pair of other cards decrease. The net effect is a cancellation.
The drawing of a second card changes the scenario dramatically. You've created a condition where we know that the first and second cards do not pair up. This is useful information. We now can eliminate all non-flush hands in which the first two cards match rank, thereby reducing the denominator. As there is no reduction in the numerator, because there are no flush hands where the first two cards pair up, the odds of pulling a flush are greater than the default odds.
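The arithmetic behind that last paragraph can be sketched directly: every flush has five distinct ranks, so conditioning on the first two cards not pairing leaves the flush count alone while shrinking the total count. A minimal Python sketch:

from math import perm

total_flushes   = 52 * 12 * 11 * 10 * 9    # ordered flushes, as counted earlier in the thread
unpaired_starts = 52 * 48 * 50 * 49 * 48   # ordered hands whose first two cards differ in rank

print(total_flushes / perm(52, 5))         # ~0.00198 -- the default odds
print(total_flushes / unpaired_starts)     # ~0.00210 -- slightly higher, as claimed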

My selection of a random ace is not to manipulate the percentages; it is merely a mechanism to assure compliance with the stated constraint of case D. The overall composition of the hand is random as it pertains to the constraint. To me this is as evident as removing the aces from a deck prior to dealing to get a random hand which contains no aces. The mechanism to get there is different, due to the constraint being an inclusion rather than an exclusion.

I believe we have successfully isolated our disagreement, I don't know if we are any closer to resolution.

I think your prior reduction arguments are invalid, because the addition of a second card changes the situation.

I think I have an answer to your mutually exclusive argument as well.

I've edited this message and deleted my response, because it was incorrect. See following posts for discussion.

[This message has been edited by Grunion (edited 12-21-2000).]

Grunion
12-21-2000, 11:26 AM
Dolaposting to correct a flaw in my logic.

I am missing a subset: hands that contain at least one spade, but fewer than five spades, which would have a 0% probability of pulling a flush. Disregard my mutually exclusive argument for now.

------------------
Those who don't learn history are doomed to repeat it.

Grunion
12-21-2000, 11:53 AM
QuikSand,

How about this:

Set A = all possible hands
Subset 1 = all hands w/ at least one club
Subset 2 = all hands w/o clubs

P flush(1) = (12/51)*(11/50)*(10/49)*(9/48)
=0.198%
This represents the possibility of each subsequent randomly selected card being a club.

Using your logic, subset 2 would have to have P flush(2) = 0.198%, which is certainly not the case.

------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-21-2000, 01:40 PM
Originally posted by Grunion:
QuikSand,

How about this:

Set A = all possible hands
Subset 1 = all hands w/ at least one club
Subset 2 = all hands w/o clubs

P flush(1) = (12/51)*(11/50)*(10/49)*(9/48)
=0.198%
This represents the possibility of each subsequent randomly selected card being a club.

Using your logic, subset 2 would have to have P flush(2) = 0.198%, which is certainly not the case.


I'm not sure to what you refer by "my logic" but I would make no such statement.

First, I would assert that you are making the same error here that you are making in our general puzzle-- when a condition is stated that is general to the five-card hand, you are assuming that the condition may be fulfilled by using nonrandom means to select the first card, and then randomly selecting the others. This is a/the fundamental flaw in your calculation here, and in several other places throughout your analyses, as I see them.

By this error, you have incorrectly calculated above the probability of "hands with at least one club" being a flush.

Without even attempting the math, I can tell you that the probability of getting a flush among the hands you label above as "subset 1" will be less than that of the entire set of hands, A-- i.e. something less than 0.198%. Intuitively, this is because (as you suggest) we have two mutually exclusive sets of hands, and the intuitive flush probability of the group you label subset 2 will be higher than the whole set A.

I'd rather not work through the entire set of calculations to prove all this, but the flaw in your argument lies not in the logic I'm using (about mutually exclusive sets), but in the calculations that you are using to calculate P flush(1).
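For reference, the club version works out quickly with unordered hands; a short Python sketch (note that a flush containing a club must be all clubs):

from math import comb

all_hands     = comb(52, 5)
no_club_hands = comb(39, 5)
club_hands    = all_hands - no_club_hands       # at least one club

club_flushes    = comb(13, 5)                   # the all-club flushes
no_club_flushes = 3 * comb(13, 5)               # hearts, diamonds, spades

print(club_flushes / club_hands)                # ~0.00064 -- well below 0.198%
print(no_club_flushes / no_club_hands)          # ~0.00671 -- well above it
print(4 * comb(13, 5) / all_hands)              # ~0.00198 -- the whole set A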

QuikSand
12-21-2000, 02:19 PM
Grunion, I think I need a recap-- you seem to feel that we have clarified our differences somehow, but I'm not so sure which of your statements you continue to support.

The puzzle, at this point, boils down to this:

Which of the two sets of hands has a greater probability of being a flush?

A) All five card hands
D) All five card hands that contain at least one ace

It appears to me that our dueling "conceptual" arguments are not successfully persuading "the other side" in this discussion. So, I propose that we try to isolate this into the simple mathematics of the puzzle.

We both seem to agree that one acceptable method to calculate the flush probability of a given set of hands is to calculate:

d (denominator) = the total number of hands in that set;

n (numerator) = the total number of flush hands in that set; and

the flush probability of that set may be calculated as (n/d).

I also believe that we have agreed to the entire calculation (using this method) for set A. I will quote your calculations from mid-page 3 of this thread:


# of Total flushes = 52*12*11*10*9
=617,760

# of total hands = 52*51*50*49*48
=311,875,200

617,760/311,875,200=.00198


And thus you calculate that the (approximate) flush probability for set A is equal to 0.198%.

I agree with these calculations completely.

After this point, as I understand it, we seemingly part company.

I have made (on page three of this thread) a similar calculation for case D above, which includes an explanation of each of my calculations, hopefully making it a bit easier to follow. This I will excerpt below, for ease of reference:

Flush Calculation (the "direct" method)

D = total number of hands meeting the stated condition

N = number of such hands that are flushes

- - -

Calculating D directly is a pain in the ass.

First, consider the one-ace hand:
4x48x47x46x45 = 18,679,680
x 5 different positions = 93,398,400
and since there is only one ace we have no duplication

Second, consider the two-ace hand:
4x3x48x47x46 = 1,245,312
x 20 different positions = 24,906,240
and divide by 2! (2) to eliminate duplicates = 12,453,120

Next, consider the three-ace hand:
4x3x2x48x47 = 54,144
x 60 different positions = 3,248,640
and divide by 3! (6) to eliminate duplicates = 541,440

Finally, the four ace hand:
4x3x2x1x48 = 1,152
x 120 different positions = 138,240
and divide by 4! (24) to eliminate duplicates = 5,760

Adding the four together gives us 106,398,720 different hands that have at least one ace.

As predicted, this is identical to the number reached on page two of this thread, by the more simple calculation of subtracting
out the 48-card hands from the 52-card hands

D = 106,398,720

- - -

Calculating N directly is actually pretty easy:

Start with the 4 aces, and multiply by the number of cards remaining in suit:

4 x (12 x 11 x 10 x 9) = 47,520

Then, multiply this by 5 to show each of the five positions for the ace:

47,520 x 5 = 237,600

This is the total number of flushes that contain at least one ace. (And, of course, they all contain exactly one ace-that's what
makes this a simpler calculation than D above)

N = 237,600

- - -

To calculate the probability that a given five card hand, known to have at least one ace, is a flush, we take N / D.

N = 237,600
D = 106,398,720

N/D = 0.22331%
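Both of these counts are small enough to brute-force; here is a sketch that walks every unordered five-card hand (about 2.6 million of them). The unordered totals are the permutation counts above divided by 5! = 120.

from itertools import combinations

deck = [(rank, suit) for rank in range(13) for suit in range(4)]   # rank 0 = ace

D = N = 0
for hand in combinations(deck, 5):
    if any(rank == 0 for rank, _ in hand):
        D += 1                                       # at least one ace
        if len({suit for _, suit in hand}) == 1:
            N += 1                                   # ...and a flush

print(D, N, N / D)
# 886656 1980 0.002233...  (886,656 x 120 = 106,398,720; 1,980 x 120 = 237,600)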


While calculating these four numbers and their resulting ratios is sufficient to answer our original question, it also stands to reason that the numbers ought to pass a sort of "double check."

Using two of the numbers that we have had to calculate, we can also determine the probability that a random hand will contain one or more aces. Doing so requires that we simply divide the number of hands that have at least one ace (which we have calculated already) into the total number of hands (which we also have already).

Doing so, using my figures, works out as follows:

Total hands: 311,875,200
Hands with one or more aces: 106,398,720

Probability that a hand will contain one or more aces: 106,398,720 / 311,875,200 = 34.1%

This calculation-- again, using my numbers-- comports with the result that we have already agreed upon, as evidenced by your comment from above:

I have calculated that there is a 34.1% chance of pulling an ace, which appears to agree with QuikSand's figures.

-from post marked posted 12-20-2000 09:42 PM


- - -

So, with that long-winded introduction, I make my argument that:

-From the original problem, a hand from set D is more likely to be a flush than is a hand from set A.

-The flush probability of set A is approximately 0.198%

-The flush probability of set D is approximately 0.223%

-These results are calculated using methods undisputed by all involved parties

-The probability of a random hand containing one or more aces is approximately 34.1%

-This result is calculated using the same data and calculations as above

- - - - -

Grunion, it would be helpful to me if you could lay out (with calculations, if necessary) what you now assert to be:

-The total number of hands that contain at least one ace
-The total number of flush hands that contain at least one ace
-The probability of a hand that contains at least one ace of being a flush

It might be useful to then use the first of those three numbers to employ the same "double-check" that I did, and show how it supports the fact that 34.1% of hands will contain one or more aces.

In particular, if you assert that the answer to the second item above is different than that shown by my calculations above, it would be very helpful to have a set of annotated calculations that might serve to guide me/us through your logic.

My suspicion, however, is that you won't be able to construct a set of numbers that resolve all these issues. Doing so clearly requires you to abandon your previously posted calculations both of the number of hands with one or more aces, and of the number of flush hands with at least one ace.

If you actually can get this far, then we'll have really clarified our differences-- and then you may be a lot closer to showing me the error of my ways. Or, I suppose, vice versa.


[This message has been edited by QuikSand (edited 12-21-2000).]

Grunion
12-21-2000, 04:25 PM
Ok,

Our discussion has boiled down to whether or not my methods reflect true randomness. If my methods do indeed reflect true randomness, it would stand to reason that my analysis is correct. Consider this:

Fred has two red pills and two blue pills. If he takes a red pill and a blue pill together he will die. He must select two at random.
Consider the random possibilities:
R1R2
R1B1
R1B2
R2R1
R2B1
R2B2
B1R1
B1R2
B1B2
B2B1
B2R1
B2R2

It should be pretty apparent that he has a 2 in 3 chance of croaking.

Now let's add a constraint: at least one of the pills is a red pill.
R1R2
R1B1
R1B2
R2B1
R2B2
R2R1
B1R1
B1R2
B2R1
B2R2

8 out of the 10 possible outcomes will kill him. Fred has lost the double blue combos which will let him live. Now the question is, are all of the above permutations equally likely to occur? I do not believe that this is the case. Permutations are more appropriate when order matters. Since order is irrelevant (as it is in our flush problem), let's look at combinations.

Total combinations:
R1R2
R1B1
R1B2
R2B1
R2B2
B1B2

There is still a 2/3 chance of Fred dying.

For combos w/ the red pill constraint just remove B1B2. Now four out of five possible outcomes will kill Fred, same as above. However, I contend that it is twice as likely for Fred to pull two red pills, given that at least one of them is red, than it is in the non-conditional scenario. (This should be fairly intuitive, and is fundamentally the same as our agreement that there is a higher probability of pulling an ace given that one is already present.)

I think the method you are using in the flush problem applies the condition after the randomization, not before. (Sorry, I have no better way to explain the concept.)

Anyway, what I believe your method does is allow Fred to draw his two pills completely at random, disregarding any constraints. Then, if he is fortunate enough to pull double blues, telling him "Oh, sorry pal, but at least one of those has to be red. Drop those two blue ones back in the pot and try again." Just as in your flush example, if you draw a hand with no aces, it is null and void. The set you are using for selection differs from the one you are using as a condition. This makes a big difference.

If your methodology is to pull five random cards from a 52 card deck, and throw out the result if an ace is not present, then I agree with your calculated probability completely. I still disagree with its application to this constraint.

By the way, what's your occupation? (Just curious; I'm a civil engineer.)


------------------
Those who don't learn history are doomed to repeat it.

Grunion
12-21-2000, 05:25 PM
Grunion, it would be helpful to me if you could lay out (with calculations, if necessary) what you now assert to be:

-The total number of hands that contain at least one ace
-The total number of flush hands that contain at least one ace
-The probability of a hand that contains at least one ace of being a flush


All calc's. are for permutations.
Total hands = 52*51*50*49*48=311,875,200
Total hands w/o aces = 48*47*46*45*44 =205,476,480
Total hands w/ at least one ace = 106,398,720
Ace % = 34.12%
Total flush hands = 4*13*12*11*10*9= 617,760
Flush % = 0.198%
Flush hands which contain an ace =4*12*11*10*9*5 (since this is a permutation)
=237,600
Flush w/ ace:Flush = 38.46%

Now, if I were to divide 237,600 into 617,760 I would arrive at your answer for D.
I can see one conflict with doing so.

It is evident that the percentage of flushes with an ace versus flushes is greater than the percentage of hands with an ace versus total hands. This is logical, the reason being that there is no possibility of there being more than one ace in the flush hands, while there is a possibility of there being duplicate aces when looking at total hands.

From this we can conclude:
If I have a flush, then it is more likely to have an ace in it than if I have 5 random, non constrained cards.

We can also conclude:
If I have an ace, then it is more likely that I have a flush than if I do not have an ace. (not directly from above, but we are in agreement about this)

However, we can not conclude:
If I have an ace, then it is more likely that I have a flush than if I may or may not have an ace.

My calculation for case D is as follows:

One of four cards must be in the hand (the aces). In addition, the remaining cards all have equal probability of being in the hand.

Total combinations equal
# of flushes w/ an ace: 4*12*11*10*9=47,520
# of hands w/ an ace: 4*51*50*49*48=23,990,400
% flush = 0.198%

To obtain permutations, multiply both calc's by five, which accounts for the location of the ace, and will cancel out.
Which brings us back to:
# of flushes w/ an ace = 237,600
# of hands with an ace = 119,952,000

Which brings us to our disagreement. I believe that the # of hands with an ace in your argument is not correct. I think the problem has to do with multiple aces not being accounted for. I'll see what I can do to account for the difference.


------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-21-2000, 06:43 PM
Grunion,

At first blush, I agree that your red/blue pill construct is an appropriate mirror for our disagreement. As you may not expect, I do, in fact, believe that poor Fred's chances of dying increase to 80% if we add the constraint you suggest.

Your comment:


Anyway, what I believe your method does is allow Fred to draw his two pills completely at random, disregarding any constraints. Then, if he is fortunate enough to pull double blues, telling him "Oh, sorry pal, but at least one of those has to be red. Drop those two blue ones back in the pot and try again." Just as in your flush example, if you draw a hand with no aces, it is null and void. The set you are using for selection differs from the one you are using as a condition. This makes a big difference.

Yes, you are right-- that's exactly how I would suggest that we play the game. And yes, it makes a big difference-- you're darned right it does. My method versus your method is the difference between random (with each outcome having the same chance) and non-random (where some outcomes have different chances). Mine is random, yours isn't. And that is, indeed, a mighty big difference.

As for question two... I'm a lobbyist. I deal with tax policy and local government issues.

As for your numbers... I have to run right now, but I'll get to them fairly soon. I'm intrigued...I see you get to my answer, but then talk your way out of it. I haven't yet digested the rationale, but you'll have my undivided attention a little later.

For now, I'll make this one observation:

We can also conclude:
If I have an ace, then it is more likely that I have a flush than if I do not have an ace. (not directly from above, but we are in agreement about this)

However, we can not conclude:
If I have an ace, then it is more likely that I have a flush than if I may or may not have an ace.

I see these two statements as being mutually inconsistent. Back to my argument about set theory that we had a little trouble with earlier.

Later...

Grunion
12-21-2000, 08:25 PM
Calculating D directly is a pain in the ass.

First, consider the one-ace hand:
4x48x47x46x45 = 18,679,680
x 5 different positions = 93,398,400
and since there is only one ace we have no duplication

Second, consider the two-ace hand:
4x3x48x47x46 = 1,245,312
x 20 different positions = 24,906,240
and divide by 2! (2) to eliminate duplicates = 12,453,120

Next, consider the three-ace hand:
4x3x2x48x47 = 54,144
x 60 different positions = 3,248,640
and divide by 3! (6) to eliminate duplicates = 541,440

Finally, the four ace hand:
4x3x2x1x48 = 1,152
x 120 different positions = 138,240
and divide by 4! (24) to eliminate duplicates = 5,760

Adding the four together gives us 106,398,720 different hands that have at least one ace.


I think you may be combining permutations and combinations.

The probability of pulling exactly one ace is equal to:
the probability that the first card is an ace and all of the other cards are not aces plus the probability that the second card is an ace and all of the other cards are not aces, plus.......the fifth card is an ace and all of the other cards are not aces.

or (4/52)*(48/51)*(47/50)*(46/49)*(45/48) + (48/52)*(4/51)*(47/50)*(46/49)*(45/48) + ..... + (48/52)*(47/51)*(46/50)*(45/49)*(4/48).
This reduces to: 5(4*48*47*46*45)/(52*51*50*49*48) = 29.95%

In like fashion,
Probability of two cards being aces is:
10(4*3*48*47*46)/(52*51*50*49*48) = 3.99%

Three aces:
10(4*3*2*48*47)/(52*51*50*49*48) = 0.17%

Four aces:
5(4*3*2*1*48)/(52*51*50*49*48) is essentially 0.00%.

Adding these up we get: 29.95+3.99+0.17=34.1%, which we both agree is the correct percentage of hands that contain at least one ace. So my process should be valid.
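The same breakdown by exact number of aces can be written with binomial coefficients; a short Python sketch over unordered hands:

from math import comb

total = comb(52, 5)
for k in (1, 2, 3, 4):
    print(f"{k} ace(s): {comb(4, k) * comb(48, 5 - k) / total:.4%}")
# 1 ace(s): 29.9474%   2 ace(s): 3.9930%   3 ace(s): 0.1736%   4 ace(s): 0.0018%
# the four terms sum to ~34.11%, matching the 34.1% figure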

If you agree with the above, it should be intuitive that there is no difference between knowing that your first card is an ace and knowing that there is at least one ace.

What happens is that with known cards in unknown locations, the equations are going to collapse on themselves. The probability of having a first card ace equals the probability of having a second card ace equals the probability of having exactly one ace of unknown location.

This can be applied to case d, as the number of aces is not known, but we know there is one. We can assign that one ace a "slot" and pull from a 51-card deck for the remainder, ensuring true randomness.

or(4*12*11*10*9)/(4*51*50*49*48).

=0.198%

------------------
Those who don't learn history are doomed to repeat it.

Grunion
12-21-2000, 08:41 PM
Going back to the pill:

Resolve this paradox:

Chance of Fred dying =66.7%
Chance of Fred dying if he has at least one red pill = 80%
Chance of Fred dying if he has at least one blue pill =80%
In all cases Fred is assured of having either a red pill or a blue pill.

We are dealing with dependent events:

Bayes Theorem states:

p(A/B) = probability that A will occur given that B has already occurred, where the two events are dependent.

p(A/B) = p(A and B)/p(B)

Let p(A) = Fred gets a blue pill
Let p(B) = Fred gets a red pill
p(A and B) = Fred gets one of each (death)

If we know that Fred has at least one red pill p(B) = 1

We also know that p(A and B) = 0.67

Therefore p(A/B) which represents the probability of Fred getting a blue pill is equal to 0.67/1 = 0.67

I can assure you that the pill scenario (as well as the flush scenario) deals with dependent events. In order for the events to be independent, we would have to select pill 1 from a separate set of pills than pill 2.

------------------
Those who don't learn history are doomed to repeat it.

Grunion
12-21-2000, 08:55 PM
For now, I'll make this one observation:


quote:
--------------------------------------------------------------------------------
We can also conclude:
If I have an ace, then it is more likely that I have a flush than if I do not have an ace. (not directly from above, but we are in agreement about this)
However, we can not conclude:
If I have an ace, then it is more likely that I have a flush than if I may or may not have an ace.


--------------------------------------------------------------------------------

I see these two statements as being mutually inconsistent.

They are consistent. It goes back to the type of information we receive. Knowing that we have no aces reduces our potential card pool to 48, and we both agree that this makes flushes more difficult.

Knowing that we have an ace does not help us at all. We are still working with a pool of 52 cards. Our odds of pulling a flush have not changed (neither have our odds of pulling a pair, three of a kind, or full house. However, our chance of pulling a straight would decrease, unless you play with wrap-around straights.)

And yes, the knowledge that you have an ace eliminates an undesirable subset, which is the one where you know that you do not have an ace. It also enlarges an undesirable subset, which is the one where you pull a pair of aces.

Unfortunately, even though you have eliminated an undesirable subset, you have not created an advantage. Not having to try to construct a flush out of only 48 cards, as opposed to 52, does not provide a flush-making bonus. It merely assures you that your odds have not diminished.


------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-21-2000, 09:30 PM
Trying to reply to several things at once, I'll start at the end...

I find you agreeing with more and more of my supporting statements, without yet agreeing with my conclusions. Go figure.


Unfortunately, even though you have eliminated an undesirable subset, you have not created an advantage. Not having to try to construct a flush out of only 48 cards, as opposed to 52, does not provide a flush-making bonus. It merely assures you that your odds have not diminished.


Nonsense.

Start with a finite set of anything (poker hands, pills, anything) of which a given proportion are of a certain character (flushes, blue). Remove from that set a finite subset of elements which are, on the whole, proportionally less dense with the specified character. What remains from the original set will absolutely be more dense with the character than the original set was.

Here's a more concrete example. Take 100 of your blue/red pills, of which exactly 70% are blue. Now remove any number you like-- as long as, of the ones you remove, fewer than 70% are blue. Now, examine what remains from your original 100. Are they 70% blue? Less? Of course not-- they are in every case more than 70% blue.

(my notation below is xx/yy where xx=number of blue pills, and yy=number of total pills)

Start with 70/100 blue.
Subtract 10/20 (50% of removed are blue).
Remaining are 60/80 blue = 75%.

Start with 70/100 blue.
Subtract 40/60 (66% of removed are blue).
Remaining are 30/40 blue = 75%.

Start with 70/100 blue.
Subtract 0/1 (0% of removed are blue).
Remaining are 70/99 blue = 70.7% blue.

- - -

The only case in which the odds in the remaining pool do not change is when the amount "taken out" approaches zero... and those kinds of measures are obviously meaningless in this exercise with the whole flushes/aces thing.

So, back to the original problem, since you've agreed that it does, in fact, work this way (the exclusive sets concept):

Start with 311,875,200 hands, of which we agree that 0.198% are flushes.

Remove the 205,476,480 hands without aces, which we now agree have a lower proportion of flushes than the entire set. (Just like my extractions from the 100 pills each contained less than 70% blue)

What remains from the set of hands will, mathematically, have a higher density of flushes than the original set.

There is, of course, algebra behind all this-- but I simply don't see how it's needed.

Grunion
12-21-2000, 09:53 PM
Remove the 205,476,480 hands without aces, which we now agree have a lower proportion of flushes than the entire set. (Just like my extractions from the 100 pills each contained less than 70% blue)

What remains from the set of hands will, mathematically, have a higher density of flushes than the original set.

The identical point can be made for case B. We both agree that P flush for case B equals P flush for case A. There is an inconsistency somewhere with your logic.

------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-21-2000, 09:54 PM
I think I buy everything in here:


The probability of pulling exactly one ace is equal to:
the probability that the first card is an ace and all of the other cards are not aces plus the probability that the second card is an ace and all of the other cards are not aces, plus.......the fifth card is an ace and all of the other cards are not aces.

or (4/52)*(48/51)*(47/50)*(46/49)*(45/48) + (48/52)*(4/51)*(47/50)*(46/49)*(45/48) + ..... + (48/52)*(47/51)*(46/50)*(45/49)*(4/48).
This reduces to: 5(4*48*47*46*45)/(52*51*50*49*48) = 29.95%

In like fashion,
Probability of two cards being aces is:
10(4*3*48*47*46)/(52*51*50*49*48) = 3.99%

Three aces:
10(4*3*2*48*47)/(52*51*50*49*48) = 0.17%

Four aces:
5(4*3*2*1*48)/(52*51*50*49*48) is essentially 0.00%.

Adding these up we get: 29.95+3.99+0.17=34.1%, which we both agree is the correct percentage of hands that contain at least one ace. So my process should be valid.


And this also means that you could calculate the precise chances of receiving exactly one ace, given the condition that you have one of these 34.1% of hands which does, indeed, have an ace.

Obviously, the chances of this are (using your numbers, which look right to me):

29.95% / 34.1% = 87.8%

That is, of all the hands with at least one ace, 87.8% will have exactly one ace. We should agree there.

Now, as you propose to set this up-- we'll just draw the ace first, and then revert to a random selection for the rest of the hand. If you're right, that will undoubtedly generate the exact same likelihood of getting exactly one ace.

Well, this will be easy, right? I'll just use your preferred method of using the incremental probabilities...

Chance of only one ace, given that we start with an ace is:

chance that second card is a non-ace x
chance that third card is a non-ace x
chance that fourth card is a non-ace x
chance that fifth card is a non-ace.

Again this is easy:
48/51 times
47/50 times
46/49 times
45/48.

Which equals... 4,669,920 / 5,997,600
= 77.8%!

Geez, sure enough-- "seeding" the first card of the draw by using a non-random method does alter the likelihood of the various possible hands that will result. It makes a rather large difference-- instead of single-ace hands representing a full 87.8% of the hands (which is what happened when we allow the hands to be selected randomly, with every combination having exactly the same chance as any other), they now represent only a 77.8% chance-- since we (in this example) used a non-random process which inherently makes some hands more likely than others.
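Both percentages in that comparison follow from counts already on the table; a two-part Python sketch:

from math import comb

one_ace = comb(4, 1) * comb(48, 4)           # hands with exactly one ace
any_ace = comb(52, 5) - comb(48, 5)          # hands with at least one ace
print(one_ace / any_ace)                     # 0.8778... -- the 87.8% figure

# the "ace first, then four cards from the remaining 51" method
print(48/51 * 47/50 * 46/49 * 45/48)         # 0.7786... -- the lower figure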

- - -

Our original problem simply said "their likelihood of being a flush," not anything about the method of dealing or any other knowledge that might affect the likelihood of any one hand over another. (Which seems to be what you're trying to read into the puzzle)

This puzzle clearly states that we are looking at the entire set, and assessing the likelihood of those hands being a flush. We simply cannot arbitrarily assign some sort of method to their distribution, and accordingly weigh certain hands as being more likely than others-- based on the way that we might have liked a dealer to have prepared this hand for us. The puzzle simply doesn't allow it.

We must work with the concept that each hand is equally likely-- which sends us invariably to the answer that Vaj and Passacaglia provided, and which I and others have demonstrated forwards and backwards.



[This message has been edited by QuikSand (edited 12-21-2000).]

QuikSand
12-21-2000, 10:00 PM
The identical point can be made for case B. We both agree that P flush for case B equals P flush for case A. There is an inconsistency somewhere with your logic.

Not at all. Case B also breaks down the set A into two groups:

Set B = hands with the first card an ace
Set (A-B) = hands with the first card a non-ace

I would solidly assert that all three sets A, B, and (A-B) have precisely the same flush percentage.

That is because case B and case D are fundamentally different-- the basic point on which you and I disagree.

QuikSand
12-21-2000, 10:10 PM
As for your Bayes pill setup:

Bayes Theorem states:

p(A/B) = probability that A will occur given that B has already occurred, where the two events are dependent.

p(A/B) = p(A and B)/p(B)

Let p(A) = Fred gets a blue pill
Let p(B) = Fred gets a red pill
p(A and B) = Fred gets one of each (death)

If we know that Fred has at least one red pill p(B) = 1

We also know that p(A and B) = 0.67

Therefore p(A/B) which represents the probability of Fred getting a blue pill is equal to 0.67/1 = 0.67


Your error here is clearly in your after-the-fact assignation of the probability of Fred's getting at least one red pill.

You state:


If we know that Fred has at least one red pill p(B) = 1

No, no, no. We can't take our ex post facto knowledge of the problem and go back and monkey with the percentages.

The percentages are fixed as we set up the problem, and they do not change (that's the idea behind Bayes' Theorem). So, we stick with:

Let p(A) = Fred gets a blue pill = 10/12
Let p(B) = Fred gets a red pill = 10/12
p(A and B) = Fred gets one of each (death) = 8/12

And, properly using Bayes' Theorem, we calculate it as you suggest:

p(A/B) = p(A and B)/p(B)
p(A/B) = (8/12) / (10/12)
P(A/B) = 80%

Which is exactly what we expect to see-- you rule out the two options that didn't meet the revised criteria, and look back and see that 8 out of 10 equally likely outcomes would have resulted in his death-- therefore he has an 80% chance of buying it.

No combining bundles of outcomes or shifting orders necessary, Bayes and all.
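The Fred question is small enough to enumerate outright; a sketch that lists the twelve equally likely ordered draws (the same twelve listed earlier in the thread) and then keeps only the ones containing a red pill:

from itertools import permutations

pills = ["R1", "R2", "B1", "B2"]
draws = list(permutations(pills, 2))                 # the 12 equally likely ordered draws

deadly   = [d for d in draws if {p[0] for p in d} == {"R", "B"}]
with_red = [d for d in draws if any(p[0] == "R" for p in d)]

print(len(deadly), "of", len(draws))                 # 8 of 12 -> 2 in 3 overall
print(sum(d in deadly for d in with_red), "of", len(with_red))
# 8 of 10 -> 80%, given at least one red pill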



[This message has been edited by QuikSand (edited 12-21-2000).]

Vaj
12-21-2000, 10:12 PM
Pardon my butting in.

Grunion, let's deal strictly with combinations.

The number of combinations of flushes with an ace is C(4,1)*C(12,4).

The possible number of combinations with at least one ace is the sum of the possible number of combinations with one ace, two aces, three aces, and four aces. This is C(4,1)*C(48,4) + C(4,2)*C(48,3) + C(4,3)*C(48,2) + C(4,4)*C(48,1).

The quotient of these is 1980/(778320+103776+4512+48) = .0022331.

Grunion
12-21-2000, 10:12 PM
Well, this will be easy, right? I'll just use your preferred method of using the incremental probabilities...

Chance of only one ace, given that we start with an ace is: chance that second card is a non-ace x chance that third card is a non-ace x chance that fourth card is a non-ace x chance that fifth card is a non-ace.

Again this is easy:
48/51 times
47/50 times
46/49 times
45/48 times
44/47.

Which equals... 205,476,480/281,887,200 = 72.9%!

This is a fairly clear example of your incorrect reasoning. You have five cards which include one given and five unknowns. If one is given, only four can be unknown; this is precisely our discrepancy.

I noticed you skipped over Bayes Theorem.



------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-21-2000, 10:19 PM
You are right that I made an error above - but it is immaterial to everything else I argue, since adjusting it to the proper four terms (which I'll do momentarily) will leave the likelihood at 77.8% - still well below what is correct for an unbiased sampling.

My apologies for muddying the water with my too-hasty typing and multiplying...

QuikSand
12-21-2000, 10:30 PM
I noticed you skipped over Bayes Theorem.

Nope, just posted that one last. It showed up just before yours - please don't miss it.

I'm off to bed... enjoy.

Grunion
12-21-2000, 10:36 PM
Your error here is clearly in your after-the-fact assignation of the probability of Fred's getting at least one red pill.

You state:

quote:


If we know that Fred has at least one red pill p(B) = 1


No, no, no. We can't take our ex post facto knowledge of the problem and go back and monkey with the percentages.

It is not ex post facto knowledge. We have used logic to make a correct determination regarding the possible outcome of the event:

Given: Fred has at least one red pill.

The probability of Fred having a red pill is 100%. We know that beforehand; that is the entire purpose of the given. Any time a condition is provided as a constraint, it is always valid to assume that the condition exists.

I can not understand what you disagree with:

1. Whether or not we "know" that Fred has at least one red pill. (We obviously do, it was stated as a condition of the problem. We do not need to deduce or infer, it is stated.)

2. That the probability of Fred having a red pill if we know that Fred has at least one red pill is 100%.


------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-22-2000, 07:18 AM
What I disagree with is your bastardization of Bayes Theorem, which is a tool to calculate posterior probability by using prior probabilities.

You've created a very elegant little Bayesian problem here, which lends itself quite nicely to a Bayesian analysis. However, in doing so, you are not properly using prior probabilities... specifically the prior probability that p(B) = Fred gets a red pill = 10/12.

In the Bayesian analysis, we properly look at the subset of outcomes that conform to the limiting condition, in this case that he has at least one red pill. There are ten such outcomes, which were equally likely to have occurred to cause the observed condition.

Of the ten equally likely outcomes, eight of them involved the deadly combination. Therefore, the likelihood of Fred's dying is 8/10 - 80%.

If we genuinely disagree on this problem, and you honestly believe that the posterior condition allows us to revisit the probabilities of each outcome and start re-weighting some more than others... then we've probably been wasting our time with the needlessly difficult flush puzzle, as our differences are much more elemental than anything required for that.

As for your biting question:


I can not understand what you disagree with:

1. Whether or not we "know" that Fred has at least one red pill. (We obviously do, it was stated as a condition of the problem. We do not need to deduce or infer, it is stated.)

2. That the probability of Fred having a red pill if we know that Fred has at least one red pill is 100%.


What I disagree with is your claim that either of these tautologies has anything to do with the prior probability of his getting a red pill... which is what this solution calls for using.

a commentary here...

It seems as though (in both the pill puzzle and the flush puzzle) you are reading an extra step into the puzzle-- something along the lines of: "the setup that we are to analyze was generated by a method that was created for the purpose of generating this particular outcome." Specifically, when you look at the set of outcomes in which Fred has at least one red pill, you seem to be saying "well, we have to have had some method of getting to this guaranteed outcome, so I'll just advance him a red pill on his first draw." Similarly with the flush puzzle-- in original case D, you seem to be saying "well, we know the hand has to have at least one ace, so we'll just start it off with one and go from there."

In each case, this results in a distortion of the probabilities of each possible outcome. In the pill puzzle, it skews the probabilities toward an increased likelihood that Fred will draw a blue pill, since you "assumed" that one of the red ones was already out of circulation. In the flush puzzle, you increase the likelihood of any given multiple-ace hand over any given single-ace hand-- again, distorting the inherent probabilities from the unbiased subset that we're asked for in the puzzle.

I think your pill puzzle has served a useful purpose to simplify, or at least illustrate, our differences here. Hopefully, that one is transportable... maybe there is someone else (here or elsewhere) who can do a more articulate job than I in demonstrating the proper application of Bayes' Theorem (or just inductive use of the multiplicative property of probability) to solve it.

[This message has been edited by QuikSand (edited 12-22-2000).]

Vaj
12-22-2000, 07:45 AM
I suppose now wouldn't be the best time to bring up the Monty Hall problem http://dynamic.gamespy.com/~fof/ubb/smile.gif

QuikSand
12-22-2000, 08:02 AM
Originally posted by Vaj:
I suppose now wouldn't be the best time to bring up the Monty Hall problem http://dynamic.gamespy.com/~fof/ubb/smile.gif

Boy howdy!

Grunion
12-22-2000, 08:58 AM
QuikSand,

I agree, and we have completely isolated our difference of opinion. It is very evident in the pill problem, and I am fairly certain that our flush disagreement is for the same reasons as our pill argument.

Alternative Solution for the pill problem.

Q. What are the odds of Fred dying given that he has at least one red pill?

A. If Fred has at least one red pill, then at least one of the following must be true:

Pill 1 is R1
Pill 1 is R2
Pill 2 is R1
Pill 2 is R2

If pill 1 is R1 then pill 2 is R2 1/3 of the time, B1 1/3 of the time and B2 1/3 of the time. Fred will die 2/3 of the time

If pill 1 is R2 then pill 2 is R1 1/3 of the time, B1 1/3 of the time and B2 1/3 of the time. Fred will die 2/3 of the time

If pill 2 is R1 then pill 1 is R2 1/3 of the time, B1 1/3 of the time and B2 1/3 of the time. Fred will die 2/3 of the time.

If pill 2 is R2 then pill 1 is R1 1/3 of the time, B1 1/3 of the time and B2 1/3 of the time.

Out of twelve possible outcomes, Fred has a red and a blue eight times. He has double reds four times, not two.

Also, going back to your contention:

If Fred has one red pill he has an 80% chance of dying.

If this is so, we can safely deduce:

If Fred has one blue pill he has an 80% chance of dying.

The original calculated chance of Fred dying is 67%.

The statement that Fred will always have either at least one red pill or at least one blue pill is true.

We are still left with the paradox stated earlier:

In all possible cases Fred has an 80% chance of dying, yet overall, he has a 67% chance of dying.

Grunion
12-22-2000, 09:24 AM
QuikSand,

First off, I have considered our exchange nothing more than a friendly debate. I just wanted to take a step back and state that. I've noticed our posts have become much more terse than when we began. I can assure you that on my end it is because I am trying to present a large amount of information, and only have a limited amount of time to take from my busy life to do so. I have much respect for your intellect and your reasoning.

I believe everything you have presented is true.....for independent events.

I'll make this one quick, for I know what we will agree on:

2 Dice, chance to roll a nine = 4/36 or 11.1%

Now, given that at least one die is a 6, the chance to roll a nine becomes 1/6 or 16.6%

It does not become 4/21 or 19.0%. The logic of selecting 19% parallels your contentions on both the flush and the pill problem.

However, in order for it to be true, we must have 6 ways of making a 7, 5 ways of making an 8,....to one way of making a 12. But given one 6, we have exactly 2 ways of making each number (6,1 & 1,6), (6,2 & 2,6) etc. Except for the 12 of course, because there is only one 12 combination. However, you must account for the 6 introduced at the beginning of the problem. The 12 must be counted twice for that reason. This is the logic that you are not following.

------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-22-2000, 09:37 AM
...we have completely isolated our difference of opinion. It is very evident in
the pill problem, and I am fairly certain that our flush disagreement is for the same
reasons as our pill argument.

I agree with that statement, as well as your benevolent use of the term "argument."


Alternative Solution for the pill problem.

Q. What are the odds of Fred dying given that he has at least one red pill?

A. If Fred has at least one red pill, then at least one of the following must be true:

Pill 1 is R1
Pill 1 is R2
Pill 2 is R1
Pill 2 is R2

If pill 1 is R1 then pill 2 is R2 1/3 of the time, B1 1/3 of the time and B2 1/3 of the time. Fred will die 2/3 of the time

If pill 1 is R2 then pill 2 is R1 1/3 of the time, B1 1/3 of the time and B2 1/3 of the time. Fred will die 2/3 of the time

If pill 2 is R1 then pill 1 is R2 1/3 of the time, B1 1/3 of the time and B2 1/3 of the time. Fred will die 2/3 of the time.

If pill 2 is R2 then pill 1 is R1 1/3 of the time, B1 1/3 of the time and B2 1/3 of the time.

Out of twelve possible outcomes, Fred has a red and a blue eight times. He has double
reds four times, not two.


And once again, you're playing tricks with information gained after the fact.

There are not 12 possible outcomes represented here... there are 10. You simply may not count the two "double red" outcomes (R1 then R2, and R2 then R1) twice as separate events. They are not separate events.

You set up this problem. Fred is going to draw his pills randomly. We agreed originally that each of the 12 possible outcomes was equally likely. All of that happened behind some sort of curtain-- we don't get to go back and change the method of the draw to suit the new evidence that we add as a later observation of the outcomes (that at least one pill was red).

When we add the observed result (that at least one pill was red) we simply eliminate the outcomes from the original set of 12 which do not comport with this observation-- they are B1/B2 and B2/B1. We're left with 10 outcomes which satisfy the observation, each of which was originally equally likely to have caused the result.

Your analysis is akin to trying to "go back in time" and create some new non-random selection process with some kind of double-weighting for certain outcomes to make sure that when we re-tell the story, the result is 100% likely to conform with the observation. It just doesn't add up.

Out of twelve possible outcomes, Fred has a red and a blue eight times. He has double reds four times, not two.

False.

Also, going back to your contention:

If Fred has one red pill he has an 80% chance of dying.

If this is so, we can safely deduce:

If Fred has one blue pill he has an 80% chance of dying.

The original calculated chance of Fred dying is 67%.

The statement that Fred will always have either at least one red pill or at least one blue
pill is true.

We are still left with the paradox stated earlier:

In all possible cases Fred has an 80% chance of dying, yet overall, he has a 67% chance
of dying.

Nice semantic trick, which has some surface appeal, but doesn't add up to anything at all.

We know that after the pill-drawing (which occurs in a random fashion as stated in the problem) the outcome will be something we could categorize into various subsets. Some of these subsets will have different probabilities of resulting in his death than others-- some will be higher than the original set, some will be lower.

The logical flaw is that the two sets you describe (that he will draw at least one blue; that he will draw at least one red) are not, using the parlance of probability: "exclusive and exhaustive" subsets.

If they were exclusive (meaning no overlap) and exhaustive (meaning they combine to form the entire whole set) then we could have a discussion very similar to the one we had about hands with aces and without aces.

However, it's very clear that these two subsets are exhaustive - meaning that each of the original 12 outcomes is covered by the union of the two subsets. It's equally clear that the two subsets are not exclusive - in fact, it's quite easy to see that there are fully 8 of the 12 original outcomes which are elements of each subset you describe.

In essence, the statement that you're missing in your discussion above is something like "Since we know that the outcome has to be either Fred draws a blue pill or Fred draws a red pill, but it cannot be both..." Were you able to say that (which you wisely have not) then you'd be describing an exclusive and exhaustive group of subsets, and you'd be able to draw some meaningful conclusions about their probabilities.

But by establishing non-exclusive subsets, you can play all sorts of semantic games with what they mean, but they don't add up to any sound logical statement.

Grunion
12-22-2000, 09:41 AM
As for your biting question:


quote:
--------------------------------------------------------------------------------

I can not understand what you disagree with:
1. Whether or not we "know" that Fred has at least one red pill. (We obviously do, it was stated as a condition of the problem. We do not need to deduce or infer, it is stated.)

2. That the probability of Fred having a red pill if we know that Fred has at least one red pill is 100%.

--------------------------------------------------------------------------------

What I disagree with is your claim that either of these tautologies has anything to do with the prior probability of his getting a red pill... which is what this solution calls for using.

It seems as though (in both the pill puzzle and the flush puzzle) you are reading an extra step into the puzzle-- something along the lines of: "the setup that we are to analyze was generated by a method that was created for the purpose of generating this particular outcome." Specifically, when you look at the set of outcomes in which Fred has at least one red pill, you seem to be saying "well, we have to have had some method of getting to this guaranteed outcome, so I'll just advance him a red pill on his first draw." Similarly with the flush puzzle-- in original case D, you seem to be saying "well, we know the hand has to have at least one ace, so we'll just start it off with one and go from there."



QuikSand,

Using information provided in a given to draw a logical conclusion is an appropriate use of logic.

Also, I have worked out the math on several specific issues (the odds of drawing exactly one ace, two aces, etc.), and you have been in agreement with my methodology. I have also demonstrated, through use of that methodology, that when the location of the given is unknown, and order is not an issue (as in the case of a flush, or the pills, or the dice), the equation will simplify from something that looks like this:

(4/52)*(48/51)*(47/50)*(46/49)*(45/48) + (48/52)*(4/51)*(47/50)*(46/49)*(45/48) + ..... + (48/52)*(47/51)*(46/50)*(45/49)*(4/48).

This reduces to: 5(4*48*47*46*45)/(52*51*50*49*48) = 29.95%
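(For reference, both forms of that calculation -- the chance that exactly one of the five cards is an ace -- can be cross-checked against a straight combination count; a minimal Python sketch:)

from math import comb

# Sequential form: sum over the five possible positions of the single ace
p_sequential = 5 * (4*48*47*46*45) / (52*51*50*49*48)

# Combination form: one of the 4 aces plus 4 of the 48 non-aces
p_combination = comb(4, 1) * comb(48, 4) / comb(52, 5)

print(p_sequential, p_combination)    # both 0.2995..., i.e. 29.95%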

In practical application, yes, I am advancing him a red pill on his first draw.

In a pure mathematical application, refer to my above alternate solution to the problem.

The constraints in all examples do not change the possible outcomes. Your calculations on the number of outcomes appear to be correct. However, the probability of certain outcomes occurring does change.



------------------
Those who don't learn history are doomed to repeat it.

QuikSand
12-22-2000, 09:43 AM
Incidentally, I also have a great deal of respect for your intellect, and for the power of your beliefs. Not many would be willing to stick around for even a fraction of all this... and I don't mind the tone at all. Glad you agree there.

Now, on to your little dice puzzle:


I'll make this one quick, for I know what we will agree on:

2 Dice, chance to roll a nine = 4/36 or 11.1%

Now, given that at least one die is a 6, the chance to roll a nine becomes 1/6 or 16.6%

It does not become 4/21 or 19.0%. The logic of selecting 19% parallels your contentions
on both the flush and the pill problem.

However, in order for it to be true, we must have 6 ways of making a 7, 5 ways of
making an 8,....to one way of making a 12. But given one 6, we have exactly 2 ways of
making each number (6,1 & 1,6), (6,2 & 2,6) etc. Except for the 12 of course, because
there is only one 12 combination. However, you must account for the 6 introduced at the
beginning of the problem. The 12 must be counted twice for that reason. This is the logic
that you are not following.

There are 11 outcomes in which at least one die shows a six. They are:

1/6
2/6
3/6
4/6
5/6
6/6

6/1
6/2
6/3
6/4
6/5

Of these 11 outcomes, exactly two of them add to nine.

The likelihood of two dice adding to nine, given the fact that at least one is a six is 2/11 = 18.18%.
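Enumerating all 36 rolls confirms it (a minimal Python sketch):

rolls = [(a, b) for a in range(1, 7) for b in range(1, 7)]    # 36 equally likely ordered rolls

at_least_one_six = [r for r in rolls if 6 in r]               # the 11 outcomes listed above
sum_is_nine = [r for r in at_least_one_six if sum(r) == 9]    # (3, 6) and (6, 3)

print(len(sum_is_nine), len(at_least_one_six))                # 2 and 11
print(len(sum_is_nine) / len(at_least_one_six))               # 0.1818...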

The examples are getting simpler, and your logical flaw is becoming more red-lettered.

We do not re-visit the random sequential rolling of two dice and re-establish their method to ensure that we'll get the observed outcome. You're making the exact same mistake again-- you're just placing the first die down with a six showing, and then rolling the second die. It's a fallacy-- it simply does not comport with the problem as you set it up. And yes, it's the exact same fallacy that has clouded your results in the flush problem and the pill problem.

QuikSand
12-22-2000, 09:49 AM
Back to the flush puzzle for a moment, since your concept seems to be: "we'll revisit the process of selecting the possible hands, to ensure that we get the subset that comports with the final observed outcome (even if that corrupts the probabilities of the component elements of that subset)."

I once asked: why not just start off the hand with one ace and one non-ace? You rejected this method as unfairly skewing the outcome. I ask you now: how can you choose between these two different methods-- both of which will generate the same group of hands, but with different probabilities of certain ones happening? How can you say that "starting the hand with an ace" is appropriate, yet "starting the hand with an ace and then a non-ace" is inappropriate?

Once you betray randomness, how do you ever draw any lines? What's wrong with starting the hand with an ace then a non-ace?

QuikSand
12-22-2000, 09:52 AM
I openly wonder how we might settle this. The examples you are providing keep getting simpler and simpler... are we getting close to something that we could actually conduct ourselves, using playing cards or the like?

I fear that we might not agree on how to set it up... but I'm not certain how else you'll "show me the light" here. (Or conceivably vice-versa)

QuikSand
12-22-2000, 09:57 AM
Maybe we can use your dice problem as our vehicle. Here is the expanded version of how I interpret the dice problem, with overbearing narrative to draw out our differences.

- - -

Two fair dice are rolled, and placed into separate spots (so we can tell which was the first, and which was the second). This takes place randomly, behind a curtain.

We then open the curtain, and we are asked "does at least one of the dice show a six?" We correctly respond affirmatively.

Given this knowledge, what is the likelihood that the sum of the two dice is nine?

- - -

Do you agree with this re-phrasing of the puzzle?

QuikSand
12-22-2000, 10:09 AM
Using information provided in a given to draw a logical conclusion is an appropriate use of logic.

Yes, it is. But that's not a fair characterization of what you are doing. You are using what is defined as posterior knowledge-- knowledge we gain only after the events in question have occurred-- and you are using it to re-define the events themselves.

The statements:

-The hand has at least one ace;
-Fred drew at least one red pill; and
-At least one die shows a six

...all reflect posterior information and by definition are not what you are calling "given." That's exactly the problem.

If you want to have a dice puzzle that includes the presence of a six as a given, then you must state it something like this:

"Two dice are rolled using a certain method that guarantees that at least one of the two dice shows a six. What is the probability that the two dice together add to nine?"

But of course then, you won't have any answer, since there are multiple methods that may have been used to generate the given conditions. If you want a puzzle with the singular answer you suggest, then you have to spell out the method that you want to have as a given:

"Two dice appear before you. The first was manually placed with the six showing. The second was rolled randomly. What is the probability that the two add to nine?"

And then, you'd finally have a puzzle that gives all the information you're taking as a given and places it all right up into the pre-conditions of the puzzle. And if you asked me that puzzle, I'd have no choice but to give you the answer you seek-- 1/6.

Absent the inclusion of that kind of information all contained within the given, it's just a wholly different puzzle. Of course, a parallel argument exists for the pills and the flushes-- but I agree that the simpler puzzles help make the thinking more intuitive.

[This message has been edited by QuikSand (edited 12-22-2000).]

Grunion
12-22-2000, 10:13 AM
Once you betray randomness, how do you ever draw any lines? What's wrong with starting the hand with an ace then a non-ace?



Pretty clear answer here. It creates an entirely different set of circumstances. Starting off in the above manner is not consistent with the constraint. We know that there is at least one ace. And while we know there is at least one non-ace (due to there being only four aces), the arbitrary assignment of a second-card non-ace artificially eliminates potential combinations of cards. As I stated in the past, in determining the probability of pulling a flush, knowledge about one card is meaningless; knowledge about more than one card is important (even partial knowledge, such as no cards are an ace).

Back to the dice:

There are 11 outcomes in which at least one die shows a six. They are:

1/6
2/6
3/6
4/6
5/6
6/6

6/1
6/2
6/3
6/4
6/5

Of these 11 outcomes, exactly two of them add to nine.

Let's rephrase the question:
Given two dice, and at least one of them is a six, what are the odds of the other die being a three?

Passacaglia
12-22-2000, 10:14 AM
dola-post

QuikSand
12-22-2000, 10:20 AM
Let's rephrase the question:
Given two dice, and at least one of them is a six, what are the odds of the other die
being a three?

Okay, I'll bite.

In the 10/11 likely event that the two dice are different, the chances are 1/5 that the "other die" is a three.

In the 1/11 likely event that the two dice are both sixes, the chances are 0 that the "other die" is a three.

The combined chances of the two dice adding to nine are (10/11 * 1/5) + (1/11 * 0) = 10/55 = 18.18%

q.e.d.

QuikSand
12-22-2000, 10:25 AM
Responding to my comment:
Once you betray randomness, how do you ever draw any lines? What's wrong
with starting the hand with an ace then a non-ace?

Grunion wrote:

Pretty clear answer here. It creates an entirely different set of circumstances. Starting off in the above manner is not consistent with the constraint. We know that there is at least one ace. And while we know there is at least one non-ace (due to there being only four aces), the arbitrary assignment of a second-card non-ace artificially eliminates potential combinations of cards.

Simply false. There are no hands that satisfy the requirement "hands that have at least one ace" that may not be generated by the process of selecting one ace, selecting one non-ace, and then randomly selecting the remaining three cards.

Of course this process skews the hand selection, making some hands more probable than others. And of course that's a problem.

But it's the same problem that occurs in the numerous incorrect analyses of problems in this thread's recent history. You simply cannot alter the "selection method" to try to conform with information gained after the fact. The "one card seed" versus the "two card seed" is just more clear evidence of the perils of doing so.

Grunion
12-22-2000, 10:26 AM
The statements:

-The hand has at least one ace;
-Fred drew at least one red pill; and
-At least one die shows a six

...all reflect posterior information and by definition are not what you are calling "given." That's exactly the problem.



I don't agree. We have posterior confirmation, not posterior information.

You can not solve these problems by setting a constraint, then ignoring the constraint to obtain results, and then throwing out the results that do not match the defined constraint. This process is not mathematically sound. The information provided in the constraint must be accounted for in the randomization process.

For your conclusions to be valid, your statement should be:

Draw two pills, and ignore any result that does not have a red pill in it. (Which is fundamentally different from drawing two pills with the constraint that at least one of the pills is red.)

Then, 80% death would be a valid assessment.

QuikSand
12-22-2000, 10:38 AM
It seems as though we are focusing in on our semantic differences, which might be the only ones we have. I confess I didn't see this coming two pages ago, but it certainly makes it clear why we seem to disagree on such fundamental issues.

Your interpretation of these puzzle wordings is one that I have yet to hear from anyone, in any context. You defend it artfully and crisply, and the math that we've exchanged has (save for a few haste-induced errors on both sides) been sound.

Our differences are seemingly in the statement and interpretation of the problem, and I don't know how to resolve that to anyone's satisfaction. I can say with relative certainty that problems like the dice and pill problems you have set up will be replicated (or nearly so) in a wide variety of textbooks, and that in virtually no case will any problem be interpreted or presented in the manner you have argued.

I'm certain that were you to take a test from the instructor materials from any common probability textbook, your interpretation would lead you to any number of answers that would, in the eyes of the text and the instructor, be judged to be "wrong."

I'm doing my best here to avoid saying that you are wrong and I am right, because I now see it more as a matter of convention. Your answers have been, at least to a degree, correct for your interpretation of the problems-- no matter how unusual those interpretations may be.

I do think, though, that your interpretation suffers from a lack of precision. Back to the flush puzzle for a moment-- we probably could agree that there are a number of different non-random methods that could generate five-card hands that contain an ace. You seem to suggest "seeding" an ace as the first card. I've offered that we could "seed" the first card as an ace, and the second card as a non-ace. These two different approaches probably are not the only possible methods that could be used to generate the entire domain of hands that contain one ace-- though with admittedly different probabilities of given hands within that subset. So, how can you ever be sure of a single answer with your interpretations of these puzzles? How can you ever be sure that the nonrandom method you chose to comply with the posterior information is the same method used by the unknown being who set up our five-card hands?

I don't know what else to suggest. Your various challenges to me, while sometimes clever and challenging, simply dance around these same issues of interpretation. I'm doing my best to field each of your challenges, but I'm doing so with the nearly universally-accepted principles of probability on my side, so it's almost unfair.


[This message has been edited by QuikSand (edited 12-22-2000).]

Grunion
12-22-2000, 10:47 AM
Okay, I'll bite.

In the 10/11 likely event that the two dice are different, the chances are 1/5 that the "other die" is a three.

In the 1/11 likely event that the two dice are both sixes, the chances are 0 that the "other die" is a three.

The combined chances of the two dice adding to nine are (10/11 * 1/5) + (1/11 * 0) = 10/55 = 18.18%

Definitely not true; the probability of two dice being different will always be 5/6 (unless, of course, we have a given such as both dice are the same or both dice are different). This should be fairly intuitive.

Given that at least one die is a six, we have one known value and one unknown value. We do not know which die is the six, but that is irrelevant; order does not matter in this case. (Which is defined in combinations, which we have been using exclusively in our dice discussion.) What you are stating implies that the possibility of rolling a three on a given die increases because a separate die happens to land on a six.

The odds of rolling a nine given that at least one of the dice is a six equals the probability of rolling a nine given that the first die is a six, which equals the odds of rolling a nine given that the second die is a six.

I think we can agree that the odds of two dice matching is one in six. Your logic would result in that probability being one in 11. (Just replace the nine with a twelve, and then apply it to ones to get two, twos to get four, etc.)

QuikSand
12-22-2000, 10:50 AM
You can not solve these problems by setting a constraint, then ignoring the constraint to obtain results, and then throwing out the results that do not match the defined constraint. This process is not mathematically sound.

You can solve these problems that way, and statisticians the world around insist upon it. It is the only mathematically sound way of doing so.

The information provided in the constraint must be accounted for in the randomization process.

By making the "randomization process" non-random? How does this possibly comply with the concept of these puzzles?

I realize that these questions/answers seem like they're splitting hairs, but I'm genuinely on the fence right now between believing that (a) you have a valid, if unusual, interpretation of this kind of probability puzzle which has its own branch of defensible solutions; or (b) you're just flat out wrong, and the source of the error just happens to be in the setup of the problem.

To get me toward (a) instead of (b), I think I'd need you to convince me why I can't use my "two-card seed" assumption in the flush puzzle, when we talk about the hands with at least one ace, but why we must use your "one-card seed" assumption. How on earth can you generalize something like that, once the problem gets more complicated than five cards and one suit? (which it obviously can)

Grunion
12-22-2000, 10:55 AM
To get me toward (a) instead of (b), I think I'd need you to convince me why I can't use my "two-card seed" assumption in the flush puzzle, when we talk about the hands with at least one ace, but why we must use your "one-card seed" assumption. How on earth can you generalize something like that, once the problem gets more complicated than five cards and one suit? (which it obviously can)


Without getting into the calculation (if I need to I will), we agree on your calculation on all other subsets. If you use the two-seed method on those, you would arrive at a different result than the ones which we mutually agree upon.

wignasty
12-22-2000, 11:08 AM
Damn. I have read over about 1/4 of this debate, and I have come to a preliminary conclusion.

You are both right.

I promise to finish reading all the info, but as I see it right now, you aren't really debating the answer to the problems. It seems as though you are debating the "correct" way of looking at it. I see one of you looking at it from a mathematical point of view, while the other is looking at it from a logical point of view. I have also noticed that you both seem to flip sides a few times. It's funny how you never seem to be on the same side at any time, though. This interesting exchange has brought up another puzzle.

1. wignasty, QuikSand and Grunion each have a die.
2. They are each behind a separate curtain, so they can't see each other.
3. They each roll their die 1 time.
4. wignasty states that he rolled a 5.
5. QuikSand and Grunion explain the probability of getting a total of 13 between the 3 dice, without saying what they each rolled.
6. Neither QuikSand or Grunion can leave or eat until they both come to the same conclusion.

Q1 - What is the probability that they will ever agree on what each of them rolled?
Q2 - What are the odds that one of them will die of starvation, and then will the other one cheat and look behind the curtain?
Q3 - What are the odds that wignasty lied about his die, because he was afraid "The Man" could use that information against him?

wignasty

------------------
Making a better America..... through paranoia

QuikSand
12-22-2000, 11:26 AM
Without getting into the calculation (if I need to I will), we agree on your calculation on all other subsets. If you use the two-seed method on those, you would arrive at a different result than the ones which we mutually agree upon.

I believe that we have agreed on the total number of hands that include at least one ace.

What we have not yet agreed upon is within that total number of hands, what proportion of them are flushes? Or what proportion have more than one ace? Or how many have exactly three aces?

I believe that the puzzle, as originally stated, asks about that set of hands, and makes a clear assumption that each of the hands within that set is equally represented. I would answer all the above questions by making the original assumption that we're examining the randomly-created set, with each hand equally represented, and the calculations are absolutely clear.

You believe that the puzzle asks about that set of hands, and presumes that we know the non-random method that was used to generate them, which (as it turns out) results in some hands being represented more heavily than others. Based on your assumption of the generation method (which is not the only one which complies completely with the observable characteristics of the set of hands) you then weight certain hands more than others, to calculate a different answer to each of the subordinate questions.

Theoretically, someone else could come along, and using your idea that we are enabled to "re-build" the process used to generate the hands, they could use my "two-card seed" method to create the hands. This method, which is every bit as logically valid as yours, would result in a completely different set of answers to those questions. (what is the flush probability? what is the probability that the hand has more than one ace? what is the probability that the hand has exactly three aces? etc.)

How can you argue that your answer is "correct" when this hypothetical third person follows your exact logical reasoning, then comes to a fork in the road and chooses the other option (of the two I have thought of... there may be an infinite number of methods) and generates an entirely different distribution of hands, and therefore a completely different answer to the puzzle? (He'll say that the set D is less likely to be a flush, I'm guessing)

---

So, on your point-- we've agreed how many different hands exist that contain at least one ace. This admittedly took some doing.

The essential question upon which we still disagree is: given the fact that we know this hand has at least one ace (and no other facts - there is nothing else stated in the original problem) what is the likelihood that the hand is a flush?

My Method

Using a neutral calculation of a random distribution of all the possible hands that fulfill the stated observation, I get 0.223%.

I'm comforted by the fact that it is impossible to use my logical reasoning here, do the math correctly, and come up with any other result.
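The count behind that figure is straightforward (a minimal Python sketch; every qualifying hand is weighted equally):

from math import comb

hands_with_ace = comb(52, 5) - comb(48, 5)    # all hands minus the ace-free hands = 886,656
flushes_with_ace = 4 * comb(12, 4)            # a suit's ace plus 4 of its other 12 cards = 1,980

print(flushes_with_ace / hands_with_ace)      # 0.00223..., i.e. 0.223%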

Your Method

Using a biased calculation of a non-random distribution of all the same possible hands, granting more weight to some than others by virtue of a selection process that you decided was appropriate, you get 0.198%

Another guy uses your method

The other guy agrees with us on the count of total hands, uses your logic, but chooses a different selection process (the two-card seed) and gets a different non-random distribution than yours, which grants different uneven weights to certain hands than your uneven weights. He then calculates the flush probability to be another number, perhaps 0.15% (I have not done the math)

My question is this: what is the airtight argument that your selection of your preferred (but admittedly non-random) "hand generation method" is correct, and that the other guy's method is incorrect?

And what happens when we have a more complicated problem, where there is an intuitively obvious and much larger number of methods that could be used to satisfy the observed results? How do we know how to choose the right one?

[This message has been edited by QuikSand (edited 12-22-2000).]

QuikSand
12-22-2000, 11:40 AM
Grunion, I may be confused about one thing here. Do you believe that this non-random selection process:

1) pick card #1 from the 4-card ace stack
2) pick card #2 from the 48-card non-ace stack
3) pick cards 3-5 from the re-shuffled remaining 50 cards

Do you argue that the set of possible hands resulting from this selection process (setting aside their various probabilities of occurring) is in some fashion different from the set of possible hands represented by the statement:

All hands which include at least one ace


If you believe there to be a difference in these two sets, then I understand your difficulty in following my arguments about the "two-card seed" option. If that's the case, I urge you to reconsider that belief-- I don't think you'll find it to be substantiated.

QuikSand
12-22-2000, 12:21 PM
Just in case the horse isn't "dead enough" yet...

Back to the "flush probability of hands with one or more aces" puzzle. I have dreamed up a few more things:

- - -

Method #1:

-Draw card #1 from the 4-card pile of aces
-Draw cards 2-5 from the remaining 51 reshuffled cards

Method #2:

-Draw card #1 from the 4-card pile of aces
-Draw card #2 from the 48-card pile of non-aces
-Draw cards 3-5 from the remaining 50 reshuffled cards

Method #3:

-Draw cards 1-4 from the shuffled 52-card deck
-Look at cards 1-4
-If they contain at least one ace, draw card #5 from the remaining shuffled 48
-If they don't contain at least one ace, pull the four aces from the remaining cards, shuffle them, and draw card 5 from that four-card stack

Methods #4, #5, #6

Various combinations along the same lines as #3-- seeding the ace in as card #4, #3, or #2 if a check of the cards in the hand thus far does not reveal an ace, then drawing the rest of the hand randomly from the reshuffled deck.

- - -

Okay, here are six different methods that might have been used to generate any of the possible five-card hands that contain at least one ace. None of these methods will generate a five-card hand that does not meet this observed requirement.

When the original puzzle asks:

Rank the following sets of 5-card poker hands in descending order of their likelihood of being a flush:

(A) All 5 card hands
(B) Hands whose first card is an ace
(C) Hands whose first card is the ace of spades
(D) Hands with at least one ace
(E) Hands with the ace of spades


It seems clear that you (Grunion) are "assuming" that the "set of hands" labeled D was generated by a particular method designed to generate only that set of possible outcomes. (I believe I'm articulating your position properly here)

My question is: how did you know which of these six methods was used? Each of them will in every single case generate a hand that satisfies the requirement stated. As an added bonus, it's also true that every single hand that does satisfy that requirement can be generated by each and every one of these six different selection methods.

For your analysis, you chose selection method #1 from above, and you then worked through an internally consistent set of calculations to determine that the likelihood of that set of hands being a flush was exactly and indisputably 0.198%.
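For the record, the arithmetic behind the two competing figures works out as follows (a minimal Python sketch; the 0.198% presumably corresponds to seeding one ace and then requiring the four cards drawn from the remaining 51 to match its suit):

from math import comb

# Equal weighting over every hand with at least one ace
p_uniform = (4 * comb(12, 4)) / (comb(52, 5) - comb(48, 5))

# Selection method #1: seed an ace, then draw four cards from the remaining 51;
# the hand is a flush only if all four match the seeded ace's suit
p_seeded = (12/51) * (11/50) * (10/49) * (9/48)

print(p_uniform, p_seeded)    # ~0.00223 vs ~0.00198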

How is it, from the statement of the puzzle (quoted verbatim above) that the solver is to know to use "selection method" #1 from above, and to use that particular non-random method to determine the likely characteristics of that set? Why not use selection method number 2, or number 3, or any of the others that I have chosen above? (Which is only a sampling of what may well be an infinite number of selection methods that generate the entire set of hands with aces)

Mathematics demands certain answers to properly stated problems. This puzzle seems to have two different interpretations-- at least that's the synthesis of our collective argument.

My interpretation (that the set is evenly distributed) leads to a clear result, with absolutely no dispute about its veracity. There is no judgment involved, and there is only one correct answer.

Your interpretation (that we need to divine some implicit "selection method") leads to a potentially infinite number (but certainly a multiple) of options, each of which might lead to a different calculated result.

How can this be so? Is there some guideline that a puzzle-solver should employ when deciding what "selection method" might have been used to build this nonrandom group of hands? Why #1 over #2 or #3? How do we expand your solution to generally cover all similar cases? (This is, of course, a snap for my solution)

Grunion
12-22-2000, 12:42 PM
QuikSand,

Get back to me on my dice response (8:47 AM), I believe it's extremely relevant. I'm just starting to wade through your recent posts now.

Grunion
12-22-2000, 12:57 PM
I believe that the puzzle, as originally stated, asks about that set of hands, and makes a clear assumption that each of the hands within that set is equally represented. I would answer all the above questions by making the original assumption that we're examining the randomly-created set, with each hand equally represented, and the calculations are absolutely clear.



No, I do not believe that it does. In order to achieve that scenario, the condition would be stated as follows.

What are the odds of drawing a flush from five random cards, given that all hands not containing an ace are to be discarded?

Which is not identical to:

What are the odds of drawing a flush given that your hand contains at least one ace?

The constraint affects the overall probability of obtaining a flush. Knowing that you have an ace makes certain outcomes more likely than others, which should be fairly self evident. Your methods do not refine the process enough (which is quite ironic, and fairly non-intuitive, being that I am the one using the less complex equations).

Your method says: either the event happens, or it doesn't, and if the event happens there is an equal probability for each particular combination (of dice, pills, cards) which is possible to occur.

Which I think closely parallels a statement such as this:
2 green balls, one yellow ball, one pick. The pick can be green or it can be yellow, therefore both have an equal chance of occurring.

QuikSand
12-22-2000, 01:08 PM
Will do.

In response to my comments:

In the 10/11 likely event that the two dice are different, the chances are 1/5 that the "other die" is a three.

In the 1/11 likely event that the two dice are both sixes, the chances are 0 that the "other die" is a three.

The combined chances of the two dice adding to nine are (10/11 * 1/5) + (1/11 * 0) = 10/55 = 18.18%

Grunion replies:

Definitely not true; the probability of two dice being different will always be 5/6 (unless, of course, we have a given such as both dice are the same or both dice are different). This should be fairly intuitive.

Given that at least one die is a six, we have one known value and one unknown value. We do not know which die is the six, but that is irrelevant; order does not matter in this case. (Which is defined in combinations, which we have been using exclusively in our dice discussion.) What you are stating implies that the possibility of rolling a three on a given die increases because a separate die happens to land on a six.

The odds of rolling a nine given that at least one of the dice is a six equals the probability of rolling a nine given that the first die is a six, which equals the odds of rolling a nine given that the second die is a six.

I think we can agree that the odds of two dice matching is one in six. Your logic would
result in that probability being one in 11. (Just replace the nine with a twelve, and then apply it to ones to get two, twos to get four, etc.)

This is just another outpouring of our "difference of opinion" of how to react to after-the-fact observations about random events.

Your entire discussion rests on the belief that the die-rolling was not a random event, but instead was some sort of non-random event built to satisfy the conditions that were observed.

My statement (which conforms to universally-held probability concepts) is that your puzzle:

I'll make this one quick, for I know what we will agree on:

2 Dice, chance to roll a nine = 4/36 or 11.1%

Now, given that at least one die is a 6, the chance to roll a nine becomes...

(I snipped out what follows immediately thereafter, because it's your solution)

...this puzzle simply must be understood to have the following meaning:

Part I:
We roll two fair dice.
We inspect the results.
We calculate the likelihood that the sum is nine.

Part II:
If, upon inspection of the dice after the process in Part I, we see that the actual result is within the subset of all originally-possible results described by "at least one die shows a six," then we can "back out" the calculation of the probability that the two dice show a sum of nine.

This is nothing more than you re-stating your interpretation of this sort of puzzle, and me telling you it doesn't make any sense. And, apparently, vice versa.

Just an idle question-- I don't mean anything prejudicial about it. Are you either a non-native American, or did you perhaps study in another country? I'm wondering if there might be some difference in international conventions with this sort of thing... (which I still find hard to buy given the variability within your arguments, but I'm trying to find some benefit of the doubt)

- - -

Regardless, this brings us to your statement:

Definitely not true; the probability of two dice being different will always be 5/6.

Well, now we can both agree that isn't true. I can set down two dice with each showing a six... and there won't be a 5/6 chance that those two dice are different.

It's all a matter of what knowledge we claim to have about these dice.

I claim (again, in keeping with the universally-held conventions of probability study) that we know the following:

-the two dice were rolled randomly

-the result happened to be one of 11 equally likely combinations that show at least one six

With that knowledge about these two dice, it is decidedly untrue that the dice have a 5/6 chance of being different. They have exactly a 10/11 chance of being different.

I suggest that you'll dispute my claim to the two bits of knowledge stated above (back to our semantic differences), but you can't claim that if those things are true that my calculation is off.

Grunion
12-22-2000, 01:11 PM
Grunion, I may be confused about one thing here. Do you believe that this non-random selection process:
1) pick card #1 from the 4-card ace stack
2) pick card #2 from the 48-card non-ace stack
3) pick cards 3-5 from the re-shuffled remaining 50 cards

Do you argue that the set of possible hands resulting from this selection process (setting aside their various probabilities of occurring) is in some fashion different from the set of possible hands represented by the statement:

All hands which include at least one ace


If you believe there to be a difference in these two sets, then I understand your difficulty in following my arguments about the "two-card seed" option. If that's the case, I urge you to reconsider that belief-- I don't think you'll find it to be substantiated.

In the context of our discussion, no, the above is not a random process, for the reason that we have arbitrarily reduced one of our selection pools to 48, when it clearly should be 51.
In a general context, this could be considered a random process, if it matched the constraint.
Such as:
At least one card is an ace, and one of the remaining four cards can not be considered to be an ace. (And yes, I realize we do know this in the original case D, but it is a function of our relatively complex set of four cards of each value for 5 places, not of some additional constraint placed on the problem.)

If you are not buying into the above, consider a similar question, but either drawing only four cards versus five, or introducing a fifth suit. We could attempt to wade through the details of this particular scenario, but I think you'll agree that we'll end up spending a lot of time hashing through the complexity of the problem, and not focusing on our clearly defined difference of opinion.

QuikSand
12-22-2000, 01:13 PM
...the condition would be stated as follows.

What are the odds of drawing a flush from five random cards, given that all hands not
containing an ace are to be discarded.

Which is not identical to:

What are the odds of drawing a flush given that your hand contains at least one ace.

And this, again, is where we differ. I strongly believe that an overwhelming majority of any "reliable" sources (textbooks, academics, whomever) in such matters would agree that the two questions you juxtapose above are, indeed, identical.

QuikSand
12-22-2000, 01:19 PM
In your 11:11am post, when you answer:

no the above is not a random process

You are not responding to my question. We agree these are not random processes. Both your process (seed an ace, randomly draw the rest) and all the other processes I describe elsewhere are definitely non-random processes. They all skew the distribution within the universe of possible outcomes.

However, my argument is that they all have the exact same universe of possible outcomes. They all can generate each and every hand with an ace, and they all can generate no other type of hand. I hope you follow my point here.

My point is: since you find it acceptable to assume one particular non-random method, and you justify it on the grounds that it generates only the hands which conform to the condition in the puzzle... why stop with that selection method? I've found a bunch of other ones-- how did you choose, and how can you defend that as a unique correct answer to the exclusion of some other answer?

QuikSand
12-22-2000, 01:21 PM
Which I think closely parallels a statement such as this:
2 green balls, one yellow ball, one pick. The pick can be green or it can be yellow,
therefore both have an equal chance of occurring.

Give me a break.

QuikSand
12-22-2000, 01:27 PM
While I don't have a proof of this, I strongly suspect that using your logic and following it to the fullest extreme, the only possible answer to any of the puzzles we've discussed is "there is insufficient information in the puzzle to answer it."

Since you decide that all of these random events with defined probabilities are going to be re-visited and altered into some sort of non-random systems that are rigged to comply with our ex post facto observations... then I submit that there will exist more than one method of generating the results. Since there is no systematic way to choose one among many (or infinitely many) possible non-random systems, I believe you paint yourself into a corner on any one of these puzzles.

So, when you are confronted with a puzzle like:

Two dice were rolled.
At least one came up a six.
What is the chance that they add to nine?

You cannot reach the correct answer of 2/11, but instead you must respond by saying: "To answer that, I need to know the details of the system that was used to generate this result. There are too many systems that generate two dice with at least one six, and I cannot choose among them, and therefore cannot answer the question."

It's nonsensical, but I think it's the only outcome of your way of thinking here... the more I consider it, the more convinced I become that your entire logic leads to this-- at least in many cases, if not every case.

Grunion
12-22-2000, 01:33 PM
Method #1:

-Draw card #1 from the 4-card pile of aces
-Draw cards 2-5 from the remaining 51 reshuffled cards

Method #2:

-Draw card #1 from the 4-card pile of aces
-Draw card #2 from the 48-card pile of non-aces
-Draw cards 3-5 from the remaining 50 reshuffled cards

Method #3:

-Draw cards 1-4 from the shuffled 52-card deck
-Look at cards 1-4
-If they contain at least one ace, draw card #5 from the remaining shuffled 48
-If they don't contain at least one ace, pull the four aces from the remaining cards, shuffle them, and draw card 5 from that four-card stack


Method 1 is random with respect to case B and, since order doesn't matter, with respect to case D as well. Although the equation ends up being the same for cases A, C and E, this is not a suitable method of modeling those scenarios. This would seem to validate my contention that no matter how much or how little we know about a particular card, it is of no added significance in calculating flush probabilities. Order, starting suit, starting card value, and location of the known card do not change the probabilities, provided that all known information pertains to the same card.

Method 2 is incorrect in all stated cases. The other part of my initial contention is that we need to know something about more than one card for there to be any chance of obtaining significant information. We now have knowledge about two cards, and this knowledge needs to be accounted for in our calculation. I believe that you are presenting method two not because you agree with it, but because you think that I find it a valid concept. I do not and never have.

Method 3 is also not valid, because we again have additional information, this time introduced in the middle of the process. We then make a decision based on the information we know about four cards. And to bounce back to one of my main contentions, once we have information about more than one card, there is additional information which can alter our flush probability (in this case full knowledge of four complete cards). I'm not about to do the math for this, because it involves a decision tree.

I am very perplexed that you are able to (correctly) dismiss methods as invalid
because the result does not match those for cases A, B, C & E; yet have been unwavering in your assessment of D, even though it falls into the same category.

------------------
Those who don't learn history are doomed to repeat it.

Grunion
12-22-2000, 01:40 PM
And this, again, is where we differ. I strongly believe that an overwhelming majority of any "reliable" sources (textbooks, academics, whomever) in such matters would agree that the two questions you juxtapose above are, indeed, identical.



Sorry, but I have to call you out on this. It's happened more than once. If you are referring to statistical textbooks, etc. and want to use the info you dig up, I strongly encourage you to do so. However, it's improper and unfair to make an opinionated statement ("I strongly believe...") to draw a conclusion about proper statistical analysis. I strongly believe the reverse is true, because I believe I am correct. However, it is still not appropriate for me to argue that the academic community is in my corner.

QuikSand
12-22-2000, 01:50 PM
Here is a new, and rather simple, puzzle which might get to my point more clearly.

- - -

Let's go back to Fred, and his potentially deadly pills.

There are now six pills from which Fred will draw.

2 are blue
2 are red
2 are green

Fred will draw and ingest two pills, drawing randomly from a container including those remaining.

If he draws and ingests one blue and one red pill, he will die.

- - -

That is the basic setup. That is all we know in advance. That information is not allowed to change (as much as you would prefer that it did later).

First, calculate the likelihood that Fred will die.

We would agree (I suspect) that this is easy:
-there are 6 x 5 = 30 different pill combinations
-there are 4 x 2 = 8 deadly combinations

The likelihood of Fred dying is 8 / 30 = ~23%

I'm pretty sure we'll agree here (unless I just made a bonehead mistake in my math - I don't think this reveals any of our differences is my point).

- - -

Now, let's add some posterior information:

Fred did not select both a green and a red pill.

What, now, is the likelihood that Fred will die?

For me, this is fairly easy-- I'll spell it out.

I had 30 possible outcomes, which were equally likely to begin with. Now, I know that 8 of those outcomes did not happen, based on the posterior knowledge that we have gained. Therefore, I am left with the remaining 22 possible outcomes, which remain equally likely (with respect to one another, though Bayes' Theorem properly applied tells us that they each now represent a larger probability of having been the actual outcome).

Of the 8 deadly combinations, all remain possible among the group of 22 possible outcomes. So, there are 8 of the 22 equally likely outcomes that result in Fred's death.

I calculate Fred's likelihood of dying with this set of knowledge as being 8 / 22 = ~36%.
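The same throw-out-the-impossible counting can be spelled out by enumeration (a minimal Python sketch, labeling the pills by color):

from itertools import permutations

pills = ["R1", "R2", "B1", "B2", "G1", "G2"]
draws = list(permutations(pills, 2))                  # 30 equally likely ordered outcomes

# Posterior knowledge: the draw was not one red and one green pill (8 outcomes eliminated)
possible = [d for d in draws if {p[0] for p in d} != {"R", "G"}]

# Deadly outcomes: one red and one blue pill
deadly = [d for d in possible if {p[0] for p in d} == {"R", "B"}]

print(len(deadly), len(possible))                     # 8 and 22
print(len(deadly) / len(possible))                    # 0.3636..., i.e. ~36%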

Now, how do you solve this one? You seem to need a "random" method of ensuring that the final outcome will not contain both a red and a green pill. I built this problem because (at least to me) it isn't intuitively obvious how one might do that. Do you rush in before he starts drawing pills and flip a coin, taking out both greens if it's heads and both reds if it's tails?

I'm not sure how you determine the "selection method" here in any unambiguous way.

What's your answer to this puzzle? Can you even reach any answer?

[This message has been edited by QuikSand (edited 12-22-2000).]

QuikSand
12-22-2000, 01:53 PM
Sorry, but I have to call you out on this.

Point taken. I have edited another such remark out of my last post (the red/blue/green pill puzzle) and I won't use them again.

Grunion
12-22-2000, 01:55 PM
While I don't have a proof of this, I strongly suspect that using your logic and following it to the fullest extreme, the only possible answer to any of the puzzles we've discussed is "there is insufficient information in the puzzle to answer it."

Essentially, you are correct in your assessment. But it can be answered; we just can not draw pertinent conclusions from the question posed. Information on a second card is required to alter the probabilities,
Such as,
Ace of Hearts and four of spades
or ace of clubs and seven of clubs.

Interestingly enough, while knowing that we have the ace and seven of clubs helps us (because we have successfully restricted our pool) knowing that we have at least two clubs does not (as every possible hand will have at least two matching cards).

So, we need info pertaining to a minimum of two cards for us to obtain additional conclusions.

Having information on two cards may or may not help us, depending on how this information pertains to the characteristics of our deck and our flush subset.

Knowing we have at least one 4 and one seven does not help, but knowing we have two kings does.

Grunion
12-22-2000, 02:01 PM
Also,

You are still referencing ex post facto observations, without addressing my claim that we are merely dealing with posterior confirmation, not posterior information.

I contend that you are dealing with posterior information. You wait until the process is complete to "throw out" results (i.e. two blue pills) rather than correctly applying the constraint of a given at the beginning of the process. In all examples, my only possible results agree with the given. In yours, a decision needs to be made on the result to determine if it agrees with the given. This is what is skewing our results.

Grunion
12-22-2000, 02:21 PM
The likelihood of Fred dying is 8 / 30 = ~23%

agreed


Fred did not select both a green and a red pill.

What, now, is the likelihood that Fred will die?

Remembering that order is not important:
P(1) has a 1/3 chance of being red, a 1/3 chance of being green and a 1/3 chance of being blue.

If P(1) is red, P(2) has a 2/3 chance of being blue and a 1/3 chance of being red (as a red/green combo is impossible)

If P(1) is blue, P(2) has a 2/5 chance of being red, a 2/5 chance of being green and a 1/5 chance of being blue.

If P(1) is green, P(2) has a 1/3 chance of being green and a 2/3 chance of being blue (as a red/green combo is impossible)

P(red/blue) = .33(.67)+.33(.4)+ .33(0)= 35.5%

QuikSand
12-22-2000, 02:28 PM
Grunion, I confess that I have not been consulting a textbook or other resource to guide my terminology. I may well have made loose use of certain terms like ex post facto and posterior information, and may have used the terms in a context that is not in keeping with some convention.

However, I think the context of all my discussions has been clear, and I think that we've successfully isolated our differences-- which revolve around the proper use of the information, and not really its label.

- - -

However, you're right when you characterize my method as being (paraphrasing): we let the random events occur, and just throw out all the outcomes that don't match the
observed condition.

That's exactly how I do it, and I continue to assert that this is the proper way to approach any such problem, unless the "condition" referenced is not an observation of outcomes, but is instead clearly built into the set of givens.

I argue that in each of the puzzles we've discussed, these various observations (the hand has at least one ace, Fred drew at least one red pill, at least one die shows a six, Fred did not draw both a red and a green pill) are all of the exact same character-- they are all observations made after the random event- not conditions around which one should build nonrandom events.

QuikSand
12-22-2000, 02:45 PM
With my red/green/blue pill puzzle - well done. I neglected to make the puzzle complex enough to avoid a simple brute force calculation, which would require you to depart into your theory of "selection process." You win there.

(Though, isn't it at least a little troubling to you that my method of "throwing out" the random outcomes that don't mesh with the observed condition generates the correct answer, which you also got by intuitive brute force? You've suggested that my methods are somehow laughable, yet they're right here. Of course, that proves nothing-- it could have been a coincidence, or a by-product of the setup somehow)

Note: I've since revised my position on your answer to this puzzle, and I now disagree with you. I was temporarily fooled because your method (which, of course, differs from mine) resulted in an answer within a rounding error of mine.

- - -

I'll try again to illustrate my point.

Let's say that my original flush puzzle contained an option F that read:

F - hands that contain at least one ace or at least one queen, have at least two face cards, and do not contain the jack of diamonds

(or some other nonsensically complicated description)

If the puzzle asks you to determine the likelihood of this set of hands being a flush... how would you go about calculating that?

My method is simple: I calculate the total number of hands that meet the restrictions, and then I calculate the total number of hands that meet the restrictions and happen to be flushes. I divide the latter by the former, and have the correct answer.

Since you reject my notion that each possible hand must be equally-weighted, I presume that you would have a different approach to a problem like this.

Consistent with your solution to the original flush puzzle (specifically the original case D) you would be required to come up with some hand-selection methodology that would be certain to generate the exact subset of hands described by the set F. Am I right-- that's how you would approach such a puzzle?

Can we generalize this flush puzzle to discuss our various approaches?

My approach: for any given set of conditions x, you calculate the total number of hands that meet x and the total number of flushes that meet x, divide the latter by the former, and you have the correct "flush probability" of the set defined by x.

Can you generalize? Do you have any way to solve a problem like the hypothetical F I describe above (or something even more tedious and complicated)?

That puzzle should be solvable, shouldn't it?
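
For what it's worth, that count-and-divide recipe is easy to mechanize by brute force. A minimal sketch in Python (assuming "face cards" means J, Q and K; the deck encoding and helper names are just illustrative):

from itertools import combinations

# Brute-force illustration of the count-and-divide approach for the
# hypothetical condition F. Ranks run 2..14 with 11=J, 12=Q, 13=K, 14=A.
DECK = [(rank, suit) for rank in range(2, 15) for suit in "CDHS"]

def is_flush(hand):
    return len({suit for _, suit in hand}) == 1

def meets_f(hand):
    ranks = [rank for rank, _ in hand]
    ace_or_queen = 14 in ranks or 12 in ranks
    face_cards = sum(rank in (11, 12, 13) for rank in ranks)
    return ace_or_queen and face_cards >= 2 and (11, "D") not in hand

total = flushes = 0
for hand in combinations(DECK, 5):          # all 2,598,960 five-card hands
    if meets_f(hand):
        total += 1
        flushes += is_flush(hand)

print(flushes, total, flushes / total)

Any other condition-- including the original cases A through E-- is handled by swapping out meets_f; the final division never changes.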

[This message has been edited by QuikSand (edited 12-22-2000).]


Grunion
12-22-2000, 02:54 PM
So, you consider each of the following:
a(4*51*50*49*48) - ace is in first position
b(51*4*50*49*48) - ace is in second position
c(51*50*4*49*48) - ace is in third position
d(51*50*49*4*48) - ace is in fourth position
e(51*50*49*48*4) - ace is in fifth position

However, this fails to account properly for the hands in which more than one ace appears. Consider the following hand:

AH-4C-5S-QH-AD

Since the card in slot #1 is an ace, it's certainly being counted in group a above (with the AD as one of the 48 possibilities for slot 5). But since there's an ace in slot 5, it's also being counted in group e above (with the ace of hearts included in the 51 possibilities for slot 1).

By overstating the total number of hands with an ace, this then goes to understate the final ratio (since the denominator is too big).



The hand listed above should be included in both sets. This is also a core difference in our conceptual disagreement.
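
As a side note, the size of the disputed overcount is easy to check numerically. A quick sketch (Python) comparing the sum of the five positional products against a direct "all ordered hands minus ordered hands with no ace" count:

from math import prod

# Sum of the five positional products versus an inclusion-exclusion count
# of ordered five-card hands containing at least one ace.
positional_sum   = 5 * (4 * 51 * 50 * 49 * 48)       # groups a through e added together
all_ordered      = prod(range(48, 53))               # 52*51*50*49*48
no_ace_ordered   = prod(range(44, 49))               # 48*47*46*45*44
at_least_one_ace = all_ordered - no_ace_ordered

print(positional_sum)                      # 119,952,000
print(at_least_one_ace)                    # 106,398,720 -- the "106 million and change"
print(positional_sum - at_least_one_ace)   # the extra counts picked up by multi-ace hands

The positional sum comes to 119,952,000 against 106,398,720, the difference being the extra times the multi-ace hands get counted.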

QuikSand
12-22-2000, 03:01 PM
I'm guessing that last post was just a hiccup-- I think we're pretty far beyond that stage, and have more closely zeroed in on our differences.

I think we now both understand the precise differences between the way you're calculating things and the way that I am-- the card-counting stuff was one vehicle that got us this far.

QuikSand
12-22-2000, 03:24 PM
Remembering that order is not important:

P(1) has a 1/3 chance of being red, a 1/3 chance of being green and a 1/3 chance of being blue.

If P(1) is red, P(2) has a 2/3 chance of being blue and a 1/3 chance of being red (as a red/green combo is impossible)

If P(1) is blue, P(2) has a 2/5 chance of being red, a 2/5 chance of being green and a 1/5 chance of being blue.

If P(1) is green, P(2) has a 1/3 chance of being green and a 2/3 chance of being blue (as a red/green combo is impossible)

P(red/blue) = .33(.67)+.33(.4)+ .33(0)= 35.5%

Oops. I got fooled by the coincidence that your answer happened to round off to the same percent as mine.

I, of course, disagree with your approach to this puzzle. You have cleverly come up with a "pill-draw generation system" which does result in only the outcomes specified, and you calculated your probabilities based on that. The nonrandom "system" you use seems to be "draw one random pill, and then if the second pill causes an invalid combo, draw a different second pill."

Your answer breaks down as 48/135 = 35.55%

I would argue again that the correct answer is 8/22 = 36.36%

I disagree with your very first assertion-- that P(1) has a 1/3 chance of being each color. I would argue that of the 22 combinations that could have generated the "non red and green" combo we have observed, 10 begin with a blue pill, 6 with a red, and 6 with a green. I would, of course, argue that P(1) has a 10/22 chance of being blue, a 6/22 chance of being red, and a 6/22 chance of being green.


Regrettably, my puzzle was not sufficiently complex to require you to have much difficulty in coming up with a "pill-draw generation system" that ends up with the same distribution of outcomes. You did, and when you worked through the math, you came up with an answer consistent with your logic on all the other puzzles.

We again disagree on the proper use of the after-the-event observed information. It's just the same story over again.

I think my puzzle above (with the ridiculously complicated information about the hands of cards) is a better example of the meaningful differences between your approach and mine.

Sorry for this too-simple puzzle getting us off track and proving to be largely redundant.
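
For reference, both numbers in dispute here can be reproduced in a few lines (Python; the pill labels are arbitrary):

from itertools import permutations
from fractions import Fraction

PILLS = ["R1", "R2", "G1", "G2", "B1", "B2"]

def colors(draw):
    return {pill[0] for pill in draw}

# Enumerate-then-discard reading: list the 30 ordered draws, drop the
# red/green combinations, count the red/blue (fatal) ones among the rest.
kept  = [d for d in permutations(PILLS, 2) if colors(d) != {"R", "G"}]
fatal = [d for d in kept if colors(d) == {"R", "B"}]
print(Fraction(len(fatal), len(kept)))     # 4/11, i.e. 8/22 ~ 36.36%

# Tree reading quoted above, with the first pill kept at 1/3 per color.
print(Fraction(1, 3) * Fraction(2, 3) + Fraction(1, 3) * Fraction(2, 5))   # 16/45 ~ 35.56%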

[This message has been edited by QuikSand (edited 12-22-2000).]

Grunion
12-22-2000, 03:47 PM
Unfortunately, I believe our answers are similar because of coincidence, not because both are correct. I come to this conclusion because our denominators factor differently. (Yours are 2 and 11, and mine are 3 and 5, with repetitions.)

This brings me to an observation (as much as I would like to draw a definitive conclusion, I am not well educated enough on the subject to do so).

It would stand to reason that, when dealing with 52 cards, we should not have to deal with prime numbers in the numerator or denominator greater than 52 (or 36 for the dice, or 6 for the pills). Yet pretty much all of your equations introduce primes greater than the original set. I've long since thrown out the scratch paper I was working with yesterday, but your case D total combination (106 million and change) broke down into many prime factors, one of which was 2309 (if I recall correctly). This seems suspect, but as I said, I cannot make a rock-solid argument that this should not be.


F - hands that contain at least one ace or at least one queen, have at least two face cards, and do not contain the jack of diamonds


Yes, to calculate an answer using my method would be tedious, but that does not make it inappropriate.

Addressing the first condition (and I choose this condition not because it is sequentially first, but because it contains an "or" statement), at least one ace or one queen, I would make the following statement.

The chance that a hand with at least one ace also has a queen is equal to the chance that a hand with at least one queen also has an ace. I could prove this, but since there are the same number of aces and queens in the deck, I believe it is unnecessary.

Therefore, there is a 1/2 chance that case one of the or statement will drive the remainder of the problem, and a 1/2 chance that case two will drive the remainder of the problem.

This is valid, as long as my model for one given allows for the inclusion of the other condition down the line. And yes, this will result in particular duplicate answers, which I have contended all along to be necessary.

Now, since the cases are not identical with respect to the constraints (the queen is a face card, while the ace is not), the probabilities of both cases would have to be analyzed separately, as in the six-pill problem.

There is no way I'm going to crank this thing out, but since case 2 is more complex, let's look at that case more closely.

We know we have a queen, and therefore a face card, so we now must model four cards such that at least one of the four is a face card, and none of the four is a jack of diamonds, and that they combine to make a flush (working in probabilities, not total hands). Additionally, we know that if the given queen is the queen of diamonds, we can not have a flush, so only 3/4 of the queens work.

Anyway, I hope you were not expecting me to solve this, but I am confident that I could.
The main point is, with "or" statements, you can assign one as given as long as your model allows for the inclusion of the other condition (as opposed to either/or statements, which are exclusive).
Then you multiply the probability of case 1 occurring by the probability of pulling a flush given case 1, and add it to the probability of case 2 occurring multiplied by the probability of a flush given case 2. And, of course, each case would have subcases which would have to follow the same logic.
And once again, it is OK to have combinations of hands in case 1 which would also be valid for case 2. Each case describes an applied constraint, not the presence of a card (although, in this particular example, by the nature of the constraint, we know that there is an ace in case 1 and a queen in case 2).

Or:
We have at least one ace or queen.
In case 1 we know we have an ace; we may have a queen.
In case 2 we know we have a queen; we may have an ace.

Also,
We would not need to break the problem down into cases if the two face cards were not an added condition. As far as flushes are concerned, aces are as good as queens.

This leads me to an interesting question:

Case G: Hands that contain at least one ace or one queen.

I would contend that the answer is 0.198%. I assume your answer would be different, most likely slightly less than your answer for case D. The odds would get lower and lower until you reached hands that contain at least one card greater than a two; since having five twos is impossible, every hand satisfies that condition, and at that point your answer would become 0.198% as well.
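
For comparison, the count-and-divide reading of case G can be computed directly; a small sketch (Python, using binomial coefficients):

from math import comb

# Count-and-divide reading of case G (at least one ace or one queen).
hands_g   = comb(52, 5) - comb(44, 5)          # hands touching at least one of the 8 aces/queens
flushes_g = 4 * (comb(13, 5) - comb(11, 5))    # per suit: all flushes minus those avoiding both A and Q
print(flushes_g, hands_g, flushes_g / hands_g) # 3300, 1512952, ~0.00218

Read that way, case G lands at roughly 0.218%, between the 0.198% and 0.223% figures already on the table.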

QuikSand
12-22-2000, 03:59 PM
Responding to my post:

Method #1:

-Draw card #1 from the 4-card pile of aces
-Draw cards 2-5 from the remaining 51 reshuffled cards

Method #2:

-Draw card #1 from the 4-card pile of aces
-Draw card #2 from the 48-card pile of non-aces
-Draw cards 3-5 from the remaining 50 reshuffled cards

Method #3:

-Draw cards 1-4 from the shuffled 52-card deck
-Look at cards 1-4
-If they contain at least one ace, draw card #5 from the remaining shuffled 48
-If they don't contain at least one ace, pull the four aces from the remaining cards, shuffle them, and draw card 5 from that four-card stack

Grunion replied:


Method 1 is random with respect to case B, and, since order doesn't matter, to case D as well. Although the equation ends up being the same for cases A, C and E, this is not a suitable method of modeling those scenarios. This would seem to validate my contention that no matter how much or how little we know about a particular card, it is of no added significance in calculating flush probabilities. Order, starting suit, starting card value, and location of the known card do not change the probabilities, provided that all known information pertains to the same card.

Method 2 is incorrect in all stated cases. The other part of my initial contention is that we need to know something about more than one card for there to be any chance of obtaining significant information. We now have knowledge about two cards, and this knowledge needs to be accounted for in our calculation. I believe that you are presenting method two not because you agree with it, but because you think that I find it a valid concept. I do not and never have.

Method 3 is also invalid, because we again have additional information, this time introduced in the middle of the process. We then make a decision based on the information we know about four cards. And to bounce back to one of my main contentions, once we have information about more than one card, there is additional information which can alter our flush probability (in this case full knowledge of four complete cards). I'm not about to do the math for this, because it involves a decision tree.

I am very perplexed that you are able to (correctly) dismiss methods as invalid
because the result does not match those for cases A, B, C & E; yet have been unwavering in your assessment of D, even though it falls into the same category.


First of all, how is it that you can fairly describe "seeding" the first card of the hand as an ace (Method #1) to be "random"? You've already acknowledged that it will cause there to be a higher density of multi-ace hands than the method I endorse, which has each hand weighted with absolutely equal probability. You skew the likelihood of given hands, making some more likely than others-- that is inherently non-random.

I add Method 2 and the others not because I believe them to be "valid" but because I believe them to be of the same level of validity as Method #1 (which I also deem to be "invalid" using your terms). My point is that any of these selection methods could have been employed by the dealer-- how can you choose which one?

I think you may not follow my use of these "method" descriptions. I am discussing a process that would be used by a third party to prepare the "set of hands" described in the original puzzle. The dealer follows the rules set forth in the various methods in order to prepare the hands that conform to the conditions specified. The fact that at some intermediate point the dealer "knows" what certain cards are does not add anything to our knowledge, which seems to be the principal reason you reject my alternative methods.

---

Again, I point back to my last discussion of the generalized flush puzzle.

If asked the question:

"What is the probability that this set of hands meeting condition x will be a flush?"

I have a simple decision process.

Count D = total number of hands meeting X
Count N = total number of flush hands meeting X
Calculate N / D = probability of flush within that set of hands

- - -

In the generic, the best summary I can put on your solution process is something like this:

Divine or select a hand-creation process (presumably but not necessarily non random) that would result in exactly the hands that meet the description X.

By following the probabilities of the divined/selected process, calculate the chance that the result would be a flush.

- - -

Is that a fair statement of our two approaches?

Or do you put more conditions on the process used to determine the "hand selection" method? You accept what I label Method #1 as appropriate, but reject Method #2 and the others. Why is that?

What if I modified Method 2 to read:

Method #2:

-Draw card #1 from the 4-card pile of aces
-Draw card #2 from the 48-card pile of non-aces
-Draw cards 3-5 from the remaining 50 reshuffled cards
-Before looking, shuffle cards 2-5, to randomize the location of the guaranteed non-ace card


Does this make it better? (It shouldn't, of course.) This hand-creation method simply translates to: we know the first card is an ace, and we know that there's one non-ace card somewhere in the hand.

Obviously, we haven't gained any more knowledge from that, right? We weren't going to have 5 aces anyway...
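
As a point of reference, the flush rates under the two constructions being argued about can be worked out exactly; a short sketch (Python):

from math import comb

# Flush rates under the two hand-construction readings, by exact counting.

# Method #1 style: seed an ace as card #1, deal 4 more from the other 51.
seeded_total   = comb(51, 4)
seeded_flushes = comb(12, 4)           # the 4 extras must come from the ace's 12 remaining suit-mates
print(seeded_flushes / seeded_total)   # ~0.00198

# Uniform deal, keeping only hands with at least one ace.
kept_total   = comb(52, 5) - comb(48, 5)
kept_flushes = 4 * comb(12, 4)         # pick the flush suit: its ace plus 4 of its other 12 cards
print(kept_flushes / kept_total)       # ~0.00223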

Grunion
12-22-2000, 04:04 PM
(Though, isn't it at least a little troubling to you that my method of "throwing out" the random outcomes that don't mesh with the observed condition generates the correct answer, which you also got by intuitive brute force? You've suggested that my methods are somehow laughable, yet they're right here. Of course, that proves nothing-- it could have been a coincidence, or a by-product of the setup somehow)

One minute you essentially accept my calculation as correct (you did add the disclaimer, though), and take it as further evidence that your methods are correct because they agree.

I, of course, disagree with your approach to this puzzle. You have cleverly come up with a "pill-draw generation system" which does result in only the outcomes specified, and you calculated your probabilities based on that. The nonrandom "system" you use seems to be "draw one random pill, and then if the second pill causes an invalid combo, draw a different second pill."

The next minute, you dismiss them completely, on the basis that they actually don't agree with yours, and then state that as further evidence that your process is correct. If you are assessing correctness based on the outcome of my result as compared to yours, we may as well quit right now.

Regrettably, my puzzle was not sufficiently complex to require you to have much difficulty in coming up with a "pill-draw generation system" that ends up with the same distribution of outcomes. You did, and when you worked through the math, you came up with an answer consistent with your logic on all the other puzzles.

Also, I don't think the creation of a scenario which A) has complexity greater than my statistical knowledge, B) is so complex as to greatly increase my chance of error, or C) would require massive amounts of time and effort to calculate would really help prove your point. We have pinned down our disagreement; let's keep it simple.

If I wanted to be a pain in the ass, I would ask that you recalculate your solution for case D using only probabilities, not by calculating total combinations and dividing one into the other. (And please don't take that as a request or challenge; I wouldn't expect you to do it.)

QuikSand
12-22-2000, 04:10 PM
(On the red/blue/green pill puzzle)

Sorry if I wasn't clear. At first blush, since I saw your answer was very close to mine, I presumed it to be correct. I then realized that it was not (in my judgment) correct. Rather than completely covering my tracks, I added my post-script right after my original commentary on your answer. Sorry if that caused any confusion-- that was the opposite of my intent.

Let me be clear here. I disagree with your method, and your answer for that puzzle. I believe our disagreement results from the exact same fundamental disagreement about how to solve this kind of problem, and more specifically about how to "use" the information we receive about the outcome of random processes.

QuikSand
12-22-2000, 04:16 PM
Also, I don't think the creation of a scenario which A) has complexity greater than my statistical knowledge, B) is so complex as to greatly increase my chance of error, or C) would require massive amounts of time and effort to calculate would really help prove your point. We have pinned down our disagreement; let's keep it simple.

If I wanted to be a pain in the ass, I would ask that you recalculate your solution for case D using only probabilities, not by calculating total combinations and dividing one into the other. (And please don't take that as a request or challenge; I wouldn't expect you to do it.)

My point is not to exhaust your statistical knowledge, nor to "trip you up" into making an arithmetic error. I couldn't care less about either one.

My point with all this is to demonstrate that your process for solving these puzzles (replacing the completely random selection with a new non-random one of your choosing) is fraught with critical problems.

I have tried to make the same point using our original problem, by using my "method #1" and "method #2" illustration, but I have failed to sufficiently articulate that point (or so it seems), and you continue to insist that there is something inherently "valid" about your method and something that makes the other method(s) "invalid."

My point is that trying to divine some process of building the set of hands-- about which we know nothing except the results-- is unsupportable. It invariably leads to making assumptions, choosing one method over another, and so forth. In my judgment, it leads to every such problem being impossible.

Grunion
12-22-2000, 05:04 PM
First of all, how is it that you can fairly describe "seeding" the first card of the hand as an ace (Method #1) to be "random"? You've already acknowledged that it will cause there to be a higher density of multi-ace hands than the method I endorse, which has each hand weighted with absolutely equal probability. You skew the likelihood of given hands, making some more likely than others-- that is inherently non-random.

Because I am not "seeding" an ace. (although the full calculation that I use will cross cancel itself so that the remaining calculation will mirror the "seeded ace" calculation.
So yes, the reduced calculation is identical to a seeded ace calc, which would seem to support my contention that we need info from more than one card.

The full calc is as follows.

Q. P(flush) given at least one ace.

(Commentary - given the constraint, we know that the hand will contain at least one ace. However, that is all we know, so we must identify not only the possible cases in which a known ace can occur, but the probability of each case occurring. To be truly mathematically appropriate, we should also account for the suit of the known ace, which is also unknown. We could do so either by creating four subcases for each case, or by creating 20 cases instead of five. I can assure you the same result would be achieved as below.)

Case 1 = The known ace is in the first position.
Case 2 = The known ace is in the second position.
Case 3 = The known ace is in the third position.
Case 4 = The known ace is in the fourth position.
Case 5 = The known ace is in the fifth position.
P(case 1) = 1/5
P(case 2) = 1/5
P(case 3) = 1/5
P(case 4) = 1/5
P(case 5) = 1/5

Case 1:
P(flush) = 4/4*12/51*11/50*10/49*9/48
=0.198%
Case 2:
P(flush) = 48/51*1/4*11/50*10/49*9/48
=0.198%
Case 3:
P(flush) = 48/51*11/50*1/4*10/49*9/48
=0.198%
Case 4:
P(flush) = 48/51*11/50*10/49*1/4*9/48
=0.198%
Case 5:
P(flush) = 48/51*11/50*10/49*9/48*1/4
=0.198%

P(flush) = 1/5(0.198%) + 1/5(0.198%) + 1/5(0.198%) + 1/5(0.198%) + 1/5(0.198%)
=0.198%

As you can see, I have not seeded an ace. The above process gives us no better idea where the ace is than the original constraint posed.

Based on your earlier contentions, I do not believe that we have any disagreement on the calculation for each individual case.
I don't know for sure, but I believe that you have no problem with my assigning cases 1 through 5 an equal probability of occurring.

And, I do not believe I described method one as merely random. I described it as random with respect to case B. I should have no difficulty convincing you of this, since you agree it is a valid solution for case B. Technically, it should not be considered random for case D. However, the full process to determine case D, which is as above but adds cases or subcases to deal with the unknown suit, will result in a calculation which will cross-cancel until it is exactly the one used for B. So from a practical standpoint, method 1 does accurately model case D as well.

Which reduces our discussion to this:
Is it appropriate to break the question down into separate cases in the first place?
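
The five per-case products quoted above do all come out to the same figure; a quick check in exact fractions (Python):

from fractions import Fraction as F

# The five per-case products, written out with exact fractions.
cases = [
    F(4, 4) * F(12, 51) * F(11, 50) * F(10, 49) * F(9, 48),    # case 1
    F(48, 51) * F(1, 4) * F(11, 50) * F(10, 49) * F(9, 48),    # case 2
    F(48, 51) * F(11, 50) * F(1, 4) * F(10, 49) * F(9, 48),    # case 3
    F(48, 51) * F(11, 50) * F(10, 49) * F(1, 4) * F(9, 48),    # case 4
    F(48, 51) * F(11, 50) * F(10, 49) * F(9, 48) * F(1, 4),    # case 5
]
print([float(c) for c in cases])                # each ~0.00198
print(float(sum(F(1, 5) * c for c in cases)))   # the 1/5-weighted total, also ~0.00198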

Grunion
12-22-2000, 05:17 PM
You skew the likelihood of given hands, making some more likely than others-- that is inherently non-random.

There are 6 ways to roll a seven with two dice, and only one way to roll a 12. Yet the results are fully random.

The equation:

P(x) = C(1)P(1)+C(2)P(2)

This states that the probability of x occurring is equal to the probability of case 1 occurring times the probability of x occurring given case 1, plus the probability of case 2 occurring times the probability of x occurring given case 2, where C(1) + C(2) must equal 1.0.
(Of course, more cases could be added, so long as the probabilities of all the cases sum to 1.0.)

The above equation can make some outcomes more likely than others, yet it is valid (this equation is presented in my civil engineering handbook, which I paid $140 for, so it had better be valid).
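
As a worked instance of that equation, using the two-dice remark: take the cases to be the six possible faces of the first die, each with weight 1/6; in every case the second die completes a seven with probability 1/6. A tiny sketch (Python):

from fractions import Fraction as F

# Total-probability form of the "six ways to roll a seven" remark:
# condition on the first die's face, weight each case by 1/6.
p_seven = sum(F(1, 6) * F(1, 6) for first_face in range(1, 7))
print(p_seven)    # 1/6, i.e. 6 of the 36 equally likely rolls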

QuikSand
12-22-2000, 05:33 PM
The equation:

P(x) = C(1)P(1)+C(2)P(2)

This states that the probability of x occurring is equal to the probability of case 1 occurring times the probability of x occurring given case 1, plus the probability of case 2 occurring times the probability of x occurring given case 2, where C(1) + C(2) must equal 1.0.
(Of course, more cases could be added, so long as the probabilities of all the cases sum to 1.0.)

The above equation can make some outcomes more likely than others, yet it is valid (this equation is presented in my civil engineering handbook, which I paid $140 for, so it had better be valid).


Right you are. It is, of course, possible for a system of different outcomes to have different probabilities and still be a "valid" system.

However, when we're given no set of conditions about the distribution of the original set of hands (as we were not in the original flush problem) I argue that it is invalid to use the conditions that are observed, and to then use any process to generate anything other than an even distribution of hands.

So, I'm of course right back to my model-- you take all the hands that exist, and eliminate the ones which don't meet the specified condition(s). Then, look at how many from that set are flushes... and you have your flush probability. I realize you disagree with this, but it's the exact same model I would use to solve any problem of this general type.

[This message has been edited by QuikSand (edited 12-22-2000).]

Vaj
12-22-2000, 06:04 PM
...the condition would be stated as follows.

What are the odds of drawing a flush from five random cards, given that all hands not containing an ace are to be discarded.

Which is not identical to:

What are the odds of drawing a flush given that your hand contains at least one ace.


As I see it, this is the core disagreement that problems involving pills and dice are obscuring. FWIW, my calculation on page 4 assumed the equivalence of these two statements, as I restricted the possible universe of randomly drawn hands to only those including at least one ace.

Perhaps we can attempt to calculate the probability of having 1, 2, 3, or 4 aces in hands generated by these two methods, and see how they compare.
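
That comparison can be tabulated exactly rather than simulated; a sketch (Python) of the chance of holding exactly k aces under the two constructions:

from math import comb

# Chance of holding exactly k aces under the two constructions:
# (a) first card seeded as an ace, 4 more dealt from the other 51;
# (b) five cards dealt uniformly, conditioned on at least one ace.
denom_a = comb(51, 4)
denom_b = comb(52, 5) - comb(48, 5)
for k in range(1, 5):
    p_a = comb(3, k - 1) * comb(48, 5 - k) / denom_a   # k-1 further aces among the 4 extra cards
    p_b = comb(4, k) * comb(48, 5 - k) / denom_b
    print(k, round(p_a, 4), round(p_b, 4))

The seeded-first-card column puts noticeably more weight on the multi-ace hands than the conditioned-uniform column.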

Vaj
12-22-2000, 06:34 PM
As for the pill problem, I think the point of confusion involves Grunion maintaining the a priori probabilities of the color of each pill, and then adjusting the probabilities of the color of the second pill drawn based on the additional knowledge provided.

There are 30 possible permutations of pills that can be drawn. Let's list them:

R1-R2, R1-B1, R1-B2, R1-G1, R1-G2
R2-R1, R2-B1, R2-B2, R2-G1, R2-G2
G1-G2, G1-B1, G1-B2, G1-R1, G1-R2
G2-G1, G2-B1, G2-B2, G2-R1, G2-R2
B1-B2, B1-R1, B1-R2, B1-G1, B1-G2
B2-B1, B2-R1, B2-R2, B2-G1, B2-G2

Clearly, there are 8 fatal permutations here. Also, each pill has an equal probability of being the first one selected.

Now, let's list the possible permutations after we know that a red/green combination was not drawn:

R1-R2, R1-B1, R1-B2
R2-R1, R2-B1, R2-B2
G1-G2, G1-B1, G1-B2
G2-G1, G2-B1, G2-B2
B1-B2, B1-R1, B1-R2, B1-G1, B1-G2
B2-B1, B2-R1, B2-R2, B2-G1, B2-G2

Again, 8 fatal permutations, only now there are only 22 possible permutations. Grunion's calculation reduces to 8/22.5. I don't see how we can have 22.5 possible permutations.

Alternatively, we can use Grunion's method of calculating a red/blue combination, consistently applying the posterior probabilities. The posterior probability that the first pill is red is 6/22. Given that the first pill is red, the posterior probability that Fred croaks is 2/3. The posterior probability that the first pill is blue is 10/22. Given that the first pill is blue, the posterior probability that Fred dies is 2/5. If the first pill chosen is green, Fred lives, so I'll ignore the resultant zero term. Thus, the probability that Fred dies, given that he did not choose a red and green pill, is (6/22)*(2/3) + (10/22)*(2/5) = 12/66 + 20/110 = 12/66 + 12/66 = 24/66 = 8/22.
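
A one-line check of that weighted sum, in exact fractions (Python):

from fractions import Fraction as F

# The weighted sum written out above, in exact fractions.
print(F(6, 22) * F(2, 3) + F(10, 22) * F(2, 5))   # 4/11, the same 8/22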

QuikSand
12-22-2000, 07:33 PM
Here is my best re-statement of the semantic differences between Grunion and QuikSand at this point:

Grunion states his preferred wording of my original puzzle thusly:


What are the odds of drawing a flush given that your hand contains at least one ace.


And it appears that he and I differ in how we might "elucidate" on this phrase's intended meaning.

I would state it this way:

Given only the knowledge that your hand does, in fact, include at least one ace (and absent any knowledge about the method used to generate the hand), what is the likelihood that your hand is a flush?

And I'd answer it 0.223%.

He would (I believe) state it something like this:

Given that you have a hand that was created by a method ensured to generate a hand that includes at least one ace, what is the likelihood that your hand is a flush?

And he'd answer 0.189%.

---

Since this seems to pretty rapidly dissolve into a Clintonesque debate over the meanings of specific words, I'm not sure if there is a way to "resolve" this. But at this point, I don't think there are any real mathematical differences between us-- the differences are in the translation of sentences and phrases into the mathematical constructs needed to solve them.

Incidentally, Grunion, I'm hoping to provide a fair recap here-- if you would prefer to re-state your position in the context above in some manner, I'd be more than happy to re-write it in your words. I think I fairly understand our difference, but I don't seek to put improper words into your mouth.

Grunion
12-23-2000, 02:43 AM
However, when we're given no set of conditions about the distribution of the original set of hands (as we were not
in the original flush problem) I argue that it is invalid to use the conditions that are observed, and to then use any process to generate anything other than an even distribution of hands.

Ahhh, but we are given information about the distribution of the original set of hands. We are being told that we have at least one ace, which is information relevant to the distribution of hands.

Grunion
12-23-2000, 02:52 AM
Again, 8 fatal permutations, only now there are only 22 possible permutations. Grunion's calculation reduces to 8/22.5. I don't see how we can have 22.5 possible permutations.

8/22.5=16/45=(2*2*2*2)/(5*3*3)

So, in actuality, my solution factors quite nicely.


Alternatively, we can use Grunion's method of calculating a red/blue combination, consistently applying the posterior probabilities. The posterior probability that the first pill is red is 6/22. Given that the first pill is red, the posterior probability that Fred croaks is 2/3. The posterior probability that the first pill is blue is 10/22. Given that the first pill is blue, the posterior probability that Fred dies is 2/5. If the first pill chosen is green, Fred lives, so I'll ignore the resultant zero term. Thus, the probability that Fred dies, given that he did not choose a red and green pill, is (6/22)*(2/3) + (10/22)*(2/5) = 12/66 + 20/110 = 12/66 + 12/66 = 24/66 = 8/22.


Vaj,

Debate fairly or don't debate at all. You can't solve a problem using an initial setup which I have clearly indicated I disagree with, call it Grunion's method, and then use "Grunion's" method to achieve QuikSand's result, implying that QuikSand is correct.

Grunion
12-23-2000, 03:09 AM
I would state it this way:

Given only the knowledge that your hand does, in fact, include at least one ace (and absent any knowledge about the method used to generate the hand), what is the likelihood that your hand is a flush?

And I'd answer it 0.223%.


But we do have knowledge about the method used:



For purposes of this puzzle, all the probabilities are as they seem-- the hands described are the product of some purely random selection process-- there is no trickery involved, just pure probability.


So in essence, our disagreement can also be looked at this way:
Given a condition, does a random model apply the constraint prior to the generation of combinations, or afterwards?


What I disagree with is your bastardization of Bayes' Theorem, which is a tool to calculate posterior probability by using prior probabilities. You've created a very elegant little Bayesian problem here, which lends itself quite nicely to a Bayesian analysis. However, in doing so, you are not properly using prior probabilities... specifically the prior probability that p(B) = Fred gets a red pill = 10/12.

Your own words seem to indicate that you believe it to be more appropriate to apply the constraint prior to the generation of combinations.
My method clearly does this, yours clearly does not.

Grunion
12-23-2000, 03:12 AM
QuikSand,

I believe it would help us bring this to a conclusion if you could look at the "full" process for the flush problem I posted on 12/22 at 5:04 PM, and identify specifically what you believe to be improper with it, and why.

Vaj
12-23-2000, 06:25 AM
I'm sorry you took offense to my post, Grunion. I was just trying to show how you would have arrived at what I feel is the correct answer to the question at hand by weighting the probabilities of Fred croaking given the color of the first pill ingested by the probability of that first pill being selected. I was too lazy to type that all out, so I used "Grunion's method" as a shorthand description. I won't do that again!

All I was trying to show was that not only does the posterior probability of the color of the second pill chosen change with additional knowledge, but that the posterior probability of the color of the *first* pill must change as well.

Suppose we were told, after the fact, that neither pill Fred picked was green. What would be the posterior probability of the first pill being green? Can we agree that it's not 1/3, but 0? Perhaps more relevant questions are in my 7:43 post. I really shouldn't post before eating breakfast.


Incidentally, I still can't see how, when you start with 30 *discrete* permutations, a correct answer can compare the 8 fatal permutations to a winnowed-down universe of 22.5 possible permutations.


[This message has been edited by Vaj (edited 12-23-2000).]

QuikSand
12-23-2000, 07:54 AM
Responding to my comments:

However, when we're given no set of conditions about the distribution of the original set of hands (as we were not
in the original flush problem) I argue that it is invalid to use the conditions that are observed, and to then use any process to generate anything other than an even distribution of hands.

- - -

Grunion replied:

Ahhh, but we are given information about the distribution of the original set of hands. We are being told that we have at least one ace, which is information relevant to the distribution of hands.

I disagree, but again, let me be clear about my semantics here. When I say "distribution" I mean "the likelihood of appearance of the various 106 million+ given hands." I'm pretty sure that is/was clear, but I wanted to make sure.

I would argue that, since the puzzle simply calls for the "flush probability" *of the subset*, we must use the simple exercise of counting the elements of the subset, then counting its elements that meet the condition (are a flush), then dividing. No other choice.

I believe I understand your interpretation, but I would re-word it: "If we have one hand that we know is from that subset (the hands with aces), then we'll use some information about a/the process that would be used to generate all the hands in that subset, employ the resulting probabilities from that process (under which some hands are more likely than others), and then we can calculate the overall resulting likelihood that this particular hand is a flush."

I think this is a fair statement of your method. I disagree with the interpretation that gets from the original puzzle to this method, but I understand your method and as of yet, have found nothing else with which I disagree (your math, to my reckoning, has been on target).

Another way I would articulate our differences is that I am calculating the probability of the set, while you are calculating the probability of a hand drawn from the set, using an uneven probability for the various hands within the set. We have both argued our sides ad infinitum, but the question remains-- which is in keeping with the original question?

QuikSand
12-23-2000, 07:57 AM
Regrettably (or perhaps not), I am walking out the door to embark on holiday travels. Perhaps it comes at a good time for this debate, I'm not sure.

Anyway, I'll probably be away for a few days.

Grunion, I've truly enjoyed this back-and-forth, and I apologize for the times when I may have slipped into a tone that suggested otherwise. I suspect you've reached a point where you are saying (about me) "he seems smart and able to understand this stuff, why can't he see he's wrong?" I feel the same way (with an emphasis on the former portions).

I hope you and your family have a happy and safe holiday season.

Vaj
12-23-2000, 09:43 AM
Here's another pill question. Given the same 6 pills, and given that a red/green combination was not drawn, what is the probability of two green pills being drawn? Is this different from the probability of two blue pills being drawn?

I submit that the two possible ways to select two green pills are G1-G2 and G2-G1, and that there are 22 possible permutations (the original 30 less the 8 red/green). Thus, P(2 green|!red/green)=2/22. I would go through an analogous procedure to arrive at a 2/22 chance of 2 blue pills being selected (B1-B2, B2-B1) given that one red pill and one green pill weren't chosen.

Now I'll go through this exercise using my understanding of your approach. The probability of the first pill being red, blue, or green are all 1/3. If the first pill is green, the probability of the second pill being green is 1/3, and the probability of the second pill being blue is 2/3, as we know green/red is not allowed. Thus, the probability of two pills being green is (1/3)*(1/3), or 1/9. Now, if the first pill is blue, the probabilities of the second pill being red, green, or blue are 2/5, 2/5, and 1/5, respectively. So the probability of two blue pills being selected, given that one red and one green pill weren't selected, is (1/3)*(1/5), or 1/15.

Is there really a higher probability of choosing two green pills than two blue pills? Absent any knowledge about the two pills chosen, I think we would agree that the chances of choosing 2 blue pills and 2 green pills are, in fact, 2 in 30. However, once we know that a red/green combination wasn't selected, your method implies that suddenly it's more likely that 2 green pills were selected than 2 blue pills. I disagree.
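
Both readings of the two-green versus two-blue question can be tabulated in a few lines (Python; labels arbitrary):

from itertools import permutations
from fractions import Fraction

PILLS = ["R1", "R2", "G1", "G2", "B1", "B2"]
kept = [d for d in permutations(PILLS, 2)
        if {p[0] for p in d} != {"R", "G"}]                 # drop the red/green combinations

two_green = sum({p[0] for p in d} == {"G"} for d in kept)
two_blue  = sum({p[0] for p in d} == {"B"} for d in kept)
print(Fraction(two_green, len(kept)), Fraction(two_blue, len(kept)))     # 1/11 and 1/11 (2/22 each)

# The mid-stream tree described above gives different figures:
print(Fraction(1, 3) * Fraction(1, 3), Fraction(1, 3) * Fraction(1, 5))  # 1/9 and 1/15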

I apologize in advance if I've misrepresented your methodology. It's not my intent at all.

Grunion
12-23-2000, 10:44 AM
Vaj,


Suppose we were told, after the fact, that neither pill Fred picked was green. What would be the posterior probability of the first pill being green? Can we agree that it's not 1/3, but 0? Perhaps more relevant questions are in my 7:43 post. I really shouldn't post before eating breakfast.

Given that Fred does not have a green pill, Fred has a 0% chance of having a green pill.

It may seem like splitting hairs, but this is not the same as:

Fred picks two pills at random. If Fred has a green pill, the result does not count. If Fred does not have a green pill, the result counts. What is the probability that a counted result will have a green pill? (Answer is 0%)


Incidentally, I still can't see how, when you start with 30 *discrete* permutations, a correct answer can compare the 8 fatal permutations to a winnowed down universe of possible permutations to 22.5.


Once again:

Remembering that order is not important:
P(1) has a 1/3 chance of being red, a 1/3 chance of being green and a 1/3 chance of being blue.

If P(1) is red, P(2) has a 2/3 chance of being blue and a 1/3 chance of being red (as a red/green combo is impossible)

If P(1) is blue, P(2) has a 2/5 chance of being red, a 2/5 chance of being green and a 1/5 chance of being blue.

If P(1) is green, P(2) has a 1/3 chance of being green and a 2/3 chance of being blue (as a red/green combo is impossible)

P(red/blue) = .33(.67)+.33(.4)+ .33(0)= 35.5%

As you can see, the factors in my denominators are all threes and fives:
= (1/3)(2/3) + (1/3)(2/5) + (1/3)(0)

No offense taken on any of your posts Vaj, I enjoy the exchange.

Grunion
12-23-2000, 10:48 AM
QuikSand,

Anyway, I'll probably be away for a few days.

Grunion, I've truly enjoyed this back-and-forth, and I apologize for the times when I may have slipped into a tone that suggested otherwise. I suspect you've reached a point where you are saying (about me) "he seems smart and able to understand this stuff, why can't he see he's wrong?" I feel the same way (with an emphasis on the former portions).

I hope you and your family have a happy and safe holiday season.



I feel exactly the same way. Best wishes over the holiday.

Grunion
12-23-2000, 11:10 AM
Vaj,

Here's another pill question. Given the same 6 pills, and given that a red/green combination was not drawn, what is the probability of two green pills being drawn? Is this different from the probability of two blue pills being drawn?

Ok, given: a red/green combo was not drawn.


Is there really a higher probability of choosing two green pills than two blue pills? Absent any knowledge about the two pills chosen, I think we would agree that the chances of choosing 2 blue pills and 2 green pills are, in fact, 2 in 30. However, once we know that a red/green combination wasn't selected, your method implies that suddenly it's more likely that 2 green pills were selected than 2 blue pills. I disagree.


I contend that the given stated as a constraint in your problem gives us knowledge about the two pills chosen, which is: if one is red, the other cannot be green.

I have not verified your math or your calculations, but your result in this exercise is not really at issue.

Our disagreement, it seems, is that I believe the constraint of a given must be applied before the randomization process, while you contend that the constraint of a given must be applied after a result (which may or may not be in agreement with the stated given), with all non-conforming results thrown out. This is exactly what my discussion with QuikSand boils down to, and unfortunately I believe we (QuikSand and I) are at an impasse.
Our arguments have raised many, many issues, and when all is said and done they all boil down to the above disagreement. Unfortunately, our disagreement is conceptual. We can't resolve it because:

While I think we have both demonstrated we are fairly adept at math and logic, most likely neither one of us has the knowledge that will allow us to resolve our issue, as it is a theoretical one.

Wignasty said it best earlier: both of us are right. This is true in the sense that, given that each of our methodologies is sound, both of our processes are right. Unfortunately, one of our methodologies is unsound, and it is proving difficult to prove that either one is incorrect.

QuikSand mentioned something at one point about getting together and settling the issue through experimental means. Then we both quickly realized that we would never be able to agree on the rules for conducting the experiment.

Vaj,
I see you are on the same page as QuikSand (which is fine). If you have time, go back through my posts to get a better understanding of my point of view. I'm not asking you to do so because I think it will change your mind; I am asking you to do so in order to fully understand the one issue on which QuikSand and I disagree.

Vaj
12-23-2000, 11:58 AM
I suspect you're right about us being at an impasse. In the pill problem, I was viewing each permutation as a unit (say a piece of paper in a hat), and, when we know that a red/green combination was not picked, I would remove the 8 pieces of paper with a red/green combination (R1-G1, R1-G2, etc.), and then choose a permutation at random. I actually see this as applying the constraint before the randomization process.

I see your procedure as providing the constraint *during* the randomization process. My last problem [P(2 green) and P(2 blue)] was an example of what I feel is a nonsensical result caused by changing the randomization process midstream.

FWIW, another way I would solve these would be as follows. Given no red/green combinations, the probabilities of pill 1 being red, green, or blue are 6/22, 6/22, and 10/22, respectively. I do agree with your breakdown of the probabilities of the color of the second pill, given the color of the first. Specifically, P(2=G|1=G)=1/3, and P(2=B|1=B)=1/5. I would then calculate P(2 greens) as (6/22)*(1/3) = 6/66, and P(2 blues) as (10/22)*(1/5) = 10/110, which equals 6/66.

I will review your posts on this again to make sure I'm not misunderstanding you. Again, please correct me if I misrepresented/misunderstood how you would solve these last problems.

Considering this thread is almost longer than the religion discussion, I think declaring an impasse is a good idea http://dynamic.gamespy.com/~fof/ubb/smile.gif Happy holidays.

Grunion
12-23-2000, 01:31 PM
QuikSand, Vaj, etc.

Unless you guys want to break the post record (I think after this we will be four behind the religion discussion), I'm going to throw in the towel.

I believe I am correct, however I do not know I am correct. I'm sure both of you can make the identical statement.

I would much rather know that my premise is incorrect than believe that my premise is valid, without absolute proof. Unfortunately, this is not the case at this time.

Of course, if anybody comes across anything which could possibly resolve this once and for all, please post it.

Anyway, QuikSand, I believe I have monopolized enough of your time; it's been a while since your last OT challenge.

QuikSand
12-12-2006, 07:54 AM
Retrieved from the archives and bumped for (nearly) the sixth anniversary of the granddaddy of all FOFC puzzle debates. Not for the weak of heart, I'll warn you.

albionmoonlight
12-12-2006, 08:16 AM
A classic.

If Monty Hall was the "Son House" of FOFC Puzzles, then this is the "Robert Johnson."

Dutch
12-12-2006, 08:56 AM
Apparently I was a lot smarter in December 2000 than now. When I looked at the start of this thread, I thought, "Who fucking knows that!?" then six posts down I am stunned to see myself trying to give the answer.

Maple Leafs
12-12-2006, 09:16 AM
Any truth to the rumor that Grunion disappeared from the board because he spent the next six years in a cabin in the woods repeatedly dealing himself five-card hands to prove he was right?

Arctus
12-12-2006, 11:48 AM
Any truth to the rumor that Grunion disappeared from the board because he spent the next six years in a cabin in the woods repeatedly dealing himself five-card hands to prove he was right?

None whatsoever.

He stopped visiting this board (and a bunch of others) due to some very demanding out of town disaster recovery assignments.

Years later, after his existence returned to normal, he rediscovered this board on May 10, 2005. After spending half an hour trying to remember his old password, he decided to just re-register under a different username instead.

MJ4H
12-12-2006, 12:38 PM
None whatsoever.

He stopped visiting this board (and a bunch of others) due to some very demanding out of town disaster recovery assignments.

Years later, after his existence returned to normal, he rediscovered this board on May 10, 2005. After spending half an hour trying to remember his old password, he decided to just re-register under a different username instead.

You sound like his mom or something


(yes i get it)

Passacaglia
12-12-2006, 12:38 PM
Apparently I was a lot smarter in December 2000 than now. When I looked at the start of this thread, I thought, "Who fucking knows that!?" then six posts down I am stunned to see myself trying to give the answer.

Same here. Although, I guess I was just out of college, then.

QuikSand
12-16-2015, 12:42 PM
fun times

Marc Vaughan
12-16-2015, 06:29 PM
Interesting question ...

Dutch
12-16-2015, 08:41 PM
Damn, I lived in Los Angeles way back then when you started this....and I was 29 years old...

QuikSand
05-29-2021, 07:56 AM
came across this old post while searching for another one... fun fun