The context of a multinomial distribution is similar to that for the binomial distribution, except that one is interested in the more general case in which $k > 2$ outcomes are possible for each trial.

As an example of a situation involving a multinomial distribution where there are $3$ outcomes possible for each trial, suppose that two chess players had played numerous games and it was determined that the probability that Player $A$ would win is $0.40$, the probability that Player $B$ would win is $0.35$, and the probability that the game would end in a draw is $0.25$.

The multinomial distribution can be used to answer questions such as: "If these two chess players played $12$ games, what is the probability that Player $A$ would win $7$ games, Player $B$ would win $2$ games, and the remaining $3$ games would each end in a draw?"

Consider one way in which this might occur, as suggested by the sequence of letters $AAABDADAAABD$. The probability of this happening is clearly $$(0.40)(0.40)(0.40)(0.35)(0.25)(0.40)(0.25)(0.40)(0.40)(0.40)(0.35)(0.25)$$ which reduces to $$(0.40)^7 (0.35)^2 (0.25)^3$$ as it would for any other sequence of 7 $A$'s, 2 $B$'s and 3 $D$'s.
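This claim is easy to check numerically. The sketch below (with hypothetical variable names; the second ordering is an arbitrary rearrangement chosen for illustration) multiplies the per-game probabilities for the sequence above and for another ordering of the same letters, and confirms the products agree:

```python
from math import prod

# Per-game outcome probabilities from the chess example
p = {"A": 0.40, "B": 0.35, "D": 0.25}

# The specific sequence discussed, and another ordering of the same letters
seq1 = "AAABDADAAABD"
seq2 = "DDDBBAAAAAAA"

prob1 = prod(p[game] for game in seq1)
prob2 = prod(p[game] for game in seq2)

# Both equal (0.40)^7 (0.35)^2 (0.25)^3, about 3.136e-06
assert abs(prob1 - prob2) < 1e-15
```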

To find the probability of this distribution of wins for $A$, wins for $B$, and draws, irrespective of the order in which they occurred, we must therefore multiply the aforementioned probability by the number of such sequences that are possible.

This number of possible sequences, of course, is simply the number of permutations of these letters, acknowledging that several are indistinguishable from one another.

This is a familiar problem, whose answer is given by $$\frac{12!}{7!2!3!}$$
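This count can be computed directly, as in the short sketch below (a minimal illustration using Python's standard library):

```python
from math import factorial

# Number of distinct orderings of 7 A's, 2 B's, and 3 D's
n_sequences = factorial(12) // (factorial(7) * factorial(2) * factorial(3))
print(n_sequences)  # 7920
```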

Putting all of this together, we have: $$P(7 \textrm{ wins for A}; 2 \textrm{ wins for B}; 3 \textrm{ draws}) = \frac{12!}{7!2!3!} (0.40)^7 (0.35)^2 (0.25)^3$$ Now, let us consider the more general case...
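Evaluating this expression numerically (a quick sketch of the arithmetic above):

```python
from math import factorial

# Number of orderings times the probability of any one ordering
coeff = factorial(12) // (factorial(7) * factorial(2) * factorial(3))
prob = coeff * 0.40**7 * 0.35**2 * 0.25**3
print(round(prob, 4))  # about 0.0248
```

So even this most likely split of $7$ wins, $2$ losses, and $3$ draws has only about a $2.5\%$ chance of occurring exactly.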

Suppose a random variable $X$ has $k$ possible outcomes, $x_1, x_2, \ldots, x_k$, with probabilities $p_1, p_2, \ldots, p_k$, and we wish to know the probability that in $n$ trials, we see $n_1$ outcomes of $x_1$, $n_2$ outcomes of $x_2$, ..., and $n_k$ outcomes of $x_k$ (noting that it must be the case that $n_1 + n_2 + \cdots + n_k = n$).

The probability of any single ordering of these desired outcomes is, of course, given by $$p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k}$$ while the number of ways we can reorder these outcomes is given by $$\frac{n!}{n_1! n_2! \cdots n_k!}$$ Thus, the probability of seeing $n_1$ outcomes of $x_1$, $n_2$ outcomes of $x_2$, ..., and $n_k$ outcomes of $x_k$ is given by $$P(n_1;n_2;\ldots;n_k) = \frac{n!}{n_1! n_2! \cdots n_k!}p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k}$$
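The general formula translates directly into code. The function below (a hypothetical helper name, not from any particular library) computes the multinomial probability for arbitrary counts and probabilities, and reproduces the chess example:

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """P(n_1; n_2; ...; n_k) = n!/(n_1! n_2! ... n_k!) * p_1^n_1 ... p_k^n_k."""
    n = sum(counts)
    coeff = factorial(n)
    for n_i in counts:
        coeff //= factorial(n_i)  # exact integer division at each step
    return coeff * prod(p_i ** n_i for n_i, p_i in zip(counts, probs))

# The chess example: 7 wins for A, 2 wins for B, 3 draws in 12 games
print(round(multinomial_pmf([7, 2, 3], [0.40, 0.35, 0.25]), 4))  # 0.0248
```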