# Mode (statistics)

In statistics, the mode is the most frequent value assumed by a random variable, or the value occurring most often in a sample drawn from a random variable. The term applies both to probability distributions and to collections of experimental data. In some fields, notably education, sample data are often called scores, and the sample mode is known as the modal score.

Like the statistical mean and the median, the mode is a way of capturing important information about a random variable or a population in a single quantity. The mode is in general different from mean and median, and may be very different for strongly skewed distributions.

The mode is not necessarily unique, since the same maximum frequency may be attained at several different values. The most extreme case occurs in uniform distributions, in which all values occur equally often.

## Mode of a probability distribution

The mode of a discrete probability distribution is the value x at which its probability mass function takes its maximum value. In other words, it is the value that is most likely to be sampled.

The mode of a continuous probability distribution is the value x at which its probability density function attains its maximum value, so, informally speaking, the mode is at the peak.

As noted above, the mode is not necessarily unique, since the probability mass function or probability density function may achieve its maximum value at several points x1, x2, etc.
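As a minimal sketch, the discrete-case definition is an argmax over the probability mass function, and a tie at the maximum yields several modes; the pmf values here are an assumption chosen for illustration:

```python
# Toy probability mass function (hypothetical values chosen for illustration).
pmf = {1: 0.3, 2: 0.3, 3: 0.25, 4: 0.15}

# The mode(s) are the values where the pmf attains its maximum.
peak = max(pmf.values())
modes = [x for x, p in pmf.items() if p == peak]
print(modes)  # [1, 2] -- two values share the maximum probability
```

Because two values share the maximum probability 0.3, this toy distribution is bimodal.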

When a probability density function has multiple local maxima, it is common to refer to all of the local maxima as modes of the distribution (even though the above definition implies that only global maxima are modes). Such a continuous distribution is called multimodal (as opposed to unimodal).

In symmetric unimodal distributions, such as the normal (or Gaussian) distribution (the distribution whose density function, when graphed, gives the famous "bell curve"), the mean (if defined), median and mode all coincide. For samples, if it is known that they are drawn from a symmetric distribution, the sample mean can be used as an estimate of the population mode.

## Mode of a sample

The mode of a data sample is the element that occurs most often in the collection. For example, the mode of the sample [1, 3, 6, 6, 6, 6, 7, 7, 12, 12, 17] is 6. Given the list of data [1, 1, 2, 4, 4], the mode is not unique, since both 1 and 4 occur twice; the arithmetic mean, by contrast, is always unique.
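These sample modes can be computed directly with Python's standard library; `statistics.multimode` returns every most-frequent value, which handles the non-unique case:

```python
from statistics import mode, multimode

sample = [1, 3, 6, 6, 6, 6, 7, 7, 12, 12, 17]
print(mode(sample))                # 6

# Non-unique case: both 1 and 4 occur twice.
print(multimode([1, 1, 2, 4, 4]))  # [1, 4]
```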

For a sample from a continuous distribution, such as [0.935..., 1.211..., 2.430..., 3.668..., 3.874...], the concept is unusable in its raw form, since each value will occur precisely once. The usual practice is to discretize the data by assigning the values to equidistant intervals, as for making a histogram, effectively replacing the values by the midpoints of the intervals they are assigned to. The mode is then the value where the histogram reaches its peak. For small or middle-sized samples the outcome of this procedure is sensitive to the choice of interval width if chosen too narrow or too wide; typically one should have a sizable fraction of the data concentrated in a relatively small number of intervals (5 to 10), while the fraction of the data falling outside these intervals is also sizable.
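A sketch of this histogram procedure, using NumPy and a synthetic sample as assumptions, taking the midpoint of the tallest bin as the mode estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=1000)  # assumed synthetic sample

# Discretize into equidistant intervals, then take the tallest bin's midpoint.
counts, edges = np.histogram(data, bins=10)
peak = np.argmax(counts)
mode_estimate = (edges[peak] + edges[peak + 1]) / 2
print(mode_estimate)  # close to the true mode 3.0; exact value depends on binning
```

Rerunning with very narrow or very wide `bins` shifts the estimate noticeably, which is exactly the sensitivity to interval width described above.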

## Comparison of mean, median and mode

For a probability distribution, the mean is also called the expected value of the random variable. For a data sample, the mean is also called the average.

### When do these measures make sense?

Unlike mean and median, the concept of mode also makes sense for "nominal data" (i.e., not consisting of numerical values). For example, taking a sample of Korean family names, one might find that "Kim" occurs more often than any other name. Then "Kim" might be called the mode of the sample. However, this use is not common.

Unlike the median, the concept of the mean makes sense for any random variable taking values in a vector space, including the real numbers (a one-dimensional vector space) and the integers (which can be considered embedded in the reals). For example, a distribution of points in the plane will typically have a mean and a mode, but the concept of median does not apply: the median makes sense only when there is a linear order on the possible values.

### Uniqueness and definedness

For the remainder, the assumption is that we have (a sample of) a real-valued random variable.

For some probability distributions, the expected value may be infinite or undefined, but if defined, it is unique. The average of a (finite) sample is always defined. The median is the value such that the fractions not exceeding it and not falling below it are both at least 1/2. It is not necessarily unique, but never infinite or totally undefined. For a data sample it is the "halfway" value when the list of values is ordered in increasing value, where usually for a list of even length the numerical average is taken of the two values closest to "halfway". Finally, as said before, the mode is not necessarily unique. Certain pathological distributions (for example, the Cantor distribution) have no defined mode at all. For a finite data sample, the mode is one (or more) of the values in the sample.
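The "halfway" convention for even-length samples, and the fact that a finite sample's mode is always one of its values, can be seen with the standard library (the sample values are arbitrary):

```python
from statistics import median, multimode

print(median([1, 3, 5]))           # 3 -- middle value of an odd-length list
print(median([1, 3, 5, 7]))        # 4.0 -- average of the two middle values 3 and 5

# The mode of a finite sample is always one (or more) of its own values:
print(multimode([2, 2, 9, 9, 5]))  # [2, 9]
```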

### Properties

Assuming definedness, and for simplicity uniqueness, the following are some of the most interesting properties.

• All three measures have the following property: if the random variable (or each value from the sample) is subjected to the affine transformation which replaces X by aX + b, then the mean, median and mode are transformed in the same way.
• Under an arbitrary monotonic transformation, however, only the median follows: for example, if X is replaced by exp(X), the median changes from m to exp(m), but the mean and mode will not, in general, transform so simply.
• Except for extremely small samples, the median is totally insensitive to "outliers" (such as occasional, rare, false experimental readings). The mode is also very robust in the presence of outliers, while the mean is rather sensitive.
• In continuous unimodal distributions the median lies, as a rule of thumb, between the mean and the mode, about one third of the way going from mean to mode. In a formula, median ≈ (2 × mean + mode)/3. This rule, due to Karl Pearson, is however not always true and the three statistics can appear in any order. It often applies to slightly non-symmetric distributions that resemble a normal distribution.
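The transformation properties above can be checked numerically on a small sample (the values are an arbitrary assumption for illustration):

```python
from statistics import mean, median
from math import exp, isclose

x = [1.0, 2.0, 2.0, 3.0, 7.0]  # assumed sample
a, b = 2.0, 5.0

# Affine transformation X -> aX + b: mean and median both follow.
y = [a * v + b for v in x]
assert isclose(mean(y), a * mean(x) + b)
assert isclose(median(y), a * median(x) + b)

# Monotone transformation X -> exp(X): only the median follows.
z = [exp(v) for v in x]
assert isclose(median(z), exp(median(x)))
assert not isclose(mean(z), exp(mean(x)))  # mean(exp(X)) != exp(mean(X))
```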

### Example for a skewed distribution

A well-known example of a skewed distribution is personal wealth: Few people are very rich, but among those some are extremely rich. However, many are rather poor.

A well-known class of distributions that can be arbitrarily skewed is given by the log-normal distribution. It is obtained by transforming a random variable X having a normal distribution into the random variable Y = exp(X). Then the logarithm of the random variable Y is normally distributed, whence the name.

Taking the mean μ of X to be 0, the median of Y will be 1, independent of the standard deviation σ of X. This is so because X has a symmetric distribution, so its median is also 0. The transformation from X to Y is monotonic, and so we find the median exp(0) = 1 for Y.

When X has standard deviation σ = 0.2, the distribution of Y is not very skewed. We find (see under Log-normal distribution), with values rounded to four digits:

• mean = 1.0202
• mode = 0.9608

Indeed, the median is about one third of the way from the mean to the mode: (2 × 1.0202 + 0.9608)/3 ≈ 1.0004, very close to the median 1.

When X has a much larger standard deviation, σ = 2, the distribution of Y is strongly skewed. Now

• mean = 7.3891
• mode = 0.0183

Here, Pearson's rule of thumb fails, though for this distribution it holds exactly for the logarithms of the mean, median and mode: (2 × 2 + (−4))/3 = 0 = log 1.
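Both cases can be checked from the standard closed forms for a log-normal with μ = 0 (mean e^(σ²/2), median e^μ = 1, mode e^(−σ²)); the helper function below is illustrative:

```python
from math import exp, log

# Closed-form statistics of Y = exp(X) with X ~ Normal(0, sigma^2):
# mean = exp(sigma^2 / 2), median = 1, mode = exp(-sigma^2)
def lognormal_stats(sigma):
    return exp(sigma**2 / 2), 1.0, exp(-sigma**2)

# Mildly skewed case, sigma = 0.2:
mean1, median1, mode1 = lognormal_stats(0.2)
print(round(mean1, 4), round(mode1, 4))   # 1.0202 0.9608
print(round((2 * mean1 + mode1) / 3, 4))  # 1.0004 -- close to the median 1

# Strongly skewed case (mean e^2, mode e^-4):
mean2, median2, mode2 = lognormal_stats(2.0)
print(round(mean2, 4), round(mode2, 4))   # 7.3891 0.0183
# Pearson's rule fails for Y itself, but holds exactly for log Y:
print((2 * log(mean2) + log(mode2)) / 3)  # essentially 0 = log(median)
```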