Stud Versus Hold'em: Part III

Brian Alspach

Poker Digest Vol. 3, No. 19, September 8 - 21, 2000

We are in the midst of comparing the probabilities of at least one player having a flush or straight in seven-card stud and hold'em. The groundwork was established in Part I, and in Part II we determined the probability of at least one player having a flush given that 10 players each have been dealt two cards and five community cards have been displayed in the center. This is the context in which hold'em takes place, but in a real game, of course, seldom are all ten players still in the hand by the time the river card has been placed in the center. We shall discuss the implications for real games in future parts.

Now we turn our attention to seven-card stud. Again we are going to work with an idealized situation in order to obtain a meaningful number. Let's determine the probability of at least one player having a flush, given that we deal seven cards to each of seven players. To make the computation feasible, we assume each player receives all seven cards; since the standard deck contains only 52 cards, this limits us to seven players. Most seven-card stud games in cardrooms seat eight players, but because many players fold early in almost all hands, the dealer only infrequently runs short of cards. As mentioned above, we shall discuss the relationship between real games and these idealized situations in an upcoming article.

Unlike the idealized hold'em setting, we're not going to determine the exact probability of at least one player having a flush when seven players are each dealt seven cards. Doing so would require examining a number of subcases that is beyond tedious. Instead, we shall use the method of inclusion-exclusion discussed in Part I (Poker Digest Vol. 3/No. 17). When performing inclusion-exclusion, one obtains an expression of the form

\begin{displaymath}A(0)-A(1)+A(2)-A(3)+A(4)-A(5)+\cdots.\end{displaymath}

Here's the key to everything else we do. If the values A(0), A(1), A(2), and so on, are decreasing in magnitude, then because the terms alternate in sign, the value A(i) is a bound on the error if we use the first i terms as an approximation. Think about that statement and see if you can convince yourself of its validity. Once you understand it, our strategy is obvious: we compute as many of the initial terms as can be done reasonably easily. Suppose we then find a term such as A(4) and discover it has the value .0003. If we use A(0)-A(1)+A(2)-A(3) as an approximation, the error is less than .0003, and if this is sufficiently small, we stop.
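This bound is a standard fact about alternating series whose terms shrink in magnitude. As a generic illustration (using the familiar series for ln 2 rather than any poker quantity), a few lines of Python show the error after each partial sum never exceeding the magnitude of the first omitted term:

    import math

    # ln 2 = 1 - 1/2 + 1/3 - 1/4 + ... : the terms alternate in sign and
    # decrease in magnitude, so the truncation error after any partial sum
    # is at most the magnitude of the first omitted term.
    partial = 0.0
    for k in range(8):
        partial += (-1) ** k / (k + 1)
        omitted = 1 / (k + 2)                 # magnitude of the next term
        error = abs(math.log(2) - partial)
        print(f"{k + 1} terms: error {error:.5f} <= bound {omitted:.5f}")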

The setting for inclusion-exclusion is described as follows. Let P(i) denote the property that player i is dealt a flush. We want to count the number of deals for which none of the properties $P(1), P(2), \ldots, P(7)$ holds, that is, none of the players is dealt a flush. The first term in an inclusion-exclusion argument counts all possible deals; expressed as a probability, it is 1. Thus, A(0) = 1 and it certainly has been easily determined.

It is known that the probability of a fixed player i being dealt a flush is about .031 (see, for example, http://www.math.sfu.ca/~alspach under Poker Computations), where we are allowing straight flushes as well. Since there are seven players, we multiply the exact single-player probability by seven. Carrying out the arithmetic to nine decimal places, we find that A(1) = .213960385.
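The single-player probability itself is a short counting exercise: count the seven-card hands containing five or more cards of one suit and divide by the number of seven-card hands. A small Python sketch of this count (offered as an illustration, not as the article's own derivation) reproduces the figure:

    from math import comb

    # Seven-card hands holding five or more cards of a single suit
    # (straight flushes count as flushes, matching the convention above).
    flush_hands = 4 * sum(comb(13, k) * comb(39, 7 - k) for k in (5, 6, 7))
    p_single = flush_hands / comb(52, 7)       # about .030566, i.e. .031 rounded

    A1 = 7 * p_single                          # first inclusion-exclusion term
    print(round(p_single, 6), round(A1, 9))    # 0.030566  0.213960385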

To determine A(2), we must calculate the probability that two fixed players are both dealt flushes and then multiply by 21, since that is the number of ways of choosing two players from seven. This is where the calculations begin to be tedious because the number of subcases starts increasing quickly. Carrying them out, we find that A(2) = .033634185. Similarly, to determine A(3), we calculate the probability of three fixed players all being dealt flushes and multiply by 35, since that is the number of ways of choosing three players from seven. In this case we obtain A(3) = .005364137.
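The exact subcase counts behind A(2) and A(3) are too long to reproduce here, but the figures can be spot-checked by random dealing. The following Python sketch is a Monte Carlo check (not the exact enumeration used in the article); with enough trials, the returned estimate should land in the neighbourhood of .0336:

    import random

    def has_flush(cards):
        # cards are integers 0-51; card // 13 gives the suit
        counts = [0, 0, 0, 0]
        for c in cards:
            counts[c // 13] += 1
        return max(counts) >= 5

    def estimate_A2(trials=1_000_000):
        deck = list(range(52))
        hits = 0
        for _ in range(trials):
            random.shuffle(deck)
            # two fixed players receive the first seven and next seven cards
            if has_flush(deck[:7]) and has_flush(deck[7:14]):
                hits += 1
        return 21 * hits / trials      # 21 ways to choose two of seven players

    print(estimate_A2())               # expect a value close to .0336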

We now stop and obtain A(0)-A(1)+A(2)-A(3) = .814309663. This is the approximate probability that no one will be dealt a flush if we deal seven cards to each of seven players. The pertinent question is just how good an approximation this is. Since the terms in the inclusion-exclusion expression alternate in sign, the approximations we obtain by using more and more terms bounce back and forth between being greater than and smaller than the true value. Notice that the last term in our approximation corresponds to subtracting some amount (A(3)). This means our approximation above is smaller than the true value. On the other hand, as mentioned above, the magnitude of A(4) is a bound on the error. Can we say something about the magnitude of A(4) without going through the tedious exercise of actually computing it? Notice that A(1) is slightly more than one-fifth of A(0), that A(2) is slightly less than one-sixth of A(1), and that A(3) is less than one-sixth of A(2). It is the case that A(4) is less than one-sixth of A(3), that is, A(4) is slightly less than .0009.
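The arithmetic here is easy to verify directly from the terms quoted above (a quick Python check, with the A(4) bound taken from the one-sixth observation just made):

    # The inclusion-exclusion terms computed above.
    A0, A1, A2, A3 = 1.0, 0.213960385, 0.033634185, 0.005364137

    approx = A0 - A1 + A2 - A3    # 0.814309663, the approximation for "no flush"
    bound = A3 / 6                # about .0009, bounding the omitted A(4)
    print(approx, bound)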

Hence, we shall use .815 as the probability that no one is dealt a flush, and we know this is accurate to three decimal places. This means the probability of one or more players being dealt a flush, when seven cards are dealt to each of seven players, is .185.

