Confidence interval

In this diagram, the bars represent observation means and the red lines represent the confidence intervals surrounding them. The difference between the two populations on the left is significant. However, "[i]t is a common statistical misconception to suppose that two quantities whose 95% confidence intervals just fail to overlap are significantly different at the 5% level"[1].

In statistics, a confidence interval (CI) is an interval estimate of a population parameter. Instead of estimating the parameter by a single value, an interval of likely estimates is given. How likely the estimates are is determined by the confidence coefficient. The more likely it is for the interval to contain the parameter, the wider the interval will be.

Confidence intervals are used to indicate the reliability of an estimate. For example, a CI can be used to describe how reliable survey results are. All other things being equal, a survey result with a small CI is more reliable than a result with a large CI.

In a narrower sense, a CI for a population parameter is an interval with an associated probability p that is generated from a random sample of an underlying population such that if the sampling was repeated numerous times and the confidence interval recalculated from each sample according to the same method, a proportion p of the confidence intervals would contain the population parameter in question. Confidence intervals are the most prevalent form of interval estimation.

If U and V are statistics (i.e., observable random variables) whose probability distribution depends on some unobservable parameter θ, and

<math>\Pr(U<\theta<V|\theta)=x</math> (where x is a number between 0 and 1)

then the random interval (U, V) is a "(100·x)% confidence interval for θ". The number x (or 100·x%) is called the confidence level or confidence coefficient. In modern applied practice, most confidence intervals are stated at the 95% level (Zar 1984).

Practical example

A machine fills cups with margarine, and is supposed to be adjusted so that the mean content of the cups is close to 250 grams of margarine. Of course it is not possible to fill every cup with exactly 250 grams of margarine. Hence the weight of the filling can be considered to be a random variable X. The distribution of X is assumed here to be a normal distribution with unknown expectation μ and (for the sake of simplicity) known standard deviation σ = 2.5 grams. To check if the machine is adequately adjusted, a sample of n = 25 cups of margarine is chosen at random and the cups weighed. The weights of margarine are <math>X_1,\dots,X_{25}</math>, a random sample from X.

To get an impression of the expectation μ, it is sufficient to give an estimate. The appropriate estimator is the sample mean:

<math>\hat \mu=\bar X=\frac{1}{n}\sum_{i=1}^n X_i.</math>

The sample shows actual weights <math>x_1,\dots,x_{25}</math>, with mean:

<math>\bar x=\frac {1}{25} \sum_{i=1}^{25} x_i = 250.2</math> (grams).

If we take another sample of 25 cups, we could easily expect to find mean values like 250.4 or 251.1 grams. A sample mean value of 280 grams, however, would be extremely rare if the mean content of the cups is in fact close to 250 grams. There is a whole interval of values around the observed sample mean 250.2 such that, if the true population mean takes a value in this range, the observed data would not be considered particularly unusual. Such an interval is called a confidence interval for the parameter μ. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample <math>X_1,\dots,X_{25}</math> and hence random variables themselves.

In our case we may determine the endpoints by considering that the sample mean <math>\bar X </math> from a normally distributed sample is also normally distributed, with the same expectation μ, but with standard deviation <math>\sigma/\sqrt{n} = 0.5 </math> (grams). By standardizing we get a random variable

<math>Z = \frac {\bar X-\mu}{\sigma/\sqrt{n}} =\frac {\bar X-\mu}{0.5} </math>

dependent on μ, but with a standard normal distribution independent of the parameter μ to be estimated. Hence it is possible to find numbers −z and z, independent of μ, where Z lies in between with probability 1 − α, a measure of how confident we want to be. We take 1 − α = 0.95. So we have:

<math>P(-z\le Z\le z) = 1-\alpha = 0.95.</math>

The number z follows from:

<math>\Phi(z) = P(Z \le z) = 1 - \frac{\alpha}2 = 0.975\,,</math>
<math>z=\Phi^{-1}(\Phi(z)) = \Phi^{-1}(0.975) = 1.96\,,</math>

(see probit and cumulative distribution function), and we get:

<math>0.95 = 1-\alpha=P(-z \le Z \le z)=P \left(-1.96 \le \frac {\bar X-\mu}{\sigma/\sqrt{n}} \le 1.96 \right) = </math>
<math>=P \left( \bar X - 1.96 \frac{\sigma}{\sqrt{n}} \le \mu \le \bar X + 1.96 \frac{\sigma}{\sqrt{n}}\right) =</math>
<math>=P\left(\bar X - 1.96 \times 0.5 \le \mu \le \bar X + 1.96 \times 0.5\right)= </math>
<math>=P \left( \bar X - 0.98 \le \mu \le \bar X + 0.98 \right) </math>.

This might be interpreted as follows: with probability 0.95 we will choose a confidence interval in which the parameter μ lies between the stochastic endpoints

<math> \bar X - 0.98 </math>

and

<math> \bar X + 0.98. </math>

This does not mean that the probability that μ lies in one particular calculated interval is 95%.

Every time the measurements are repeated, there will be another value for the mean <math>\bar X</math> of the sample. In 95% of the cases μ will be between the endpoints calculated from this mean, but in 5% of the cases it will not be. The actual confidence interval is calculated by entering the measured weights in the formula. Our 0.95 confidence interval becomes:

<math>(\bar x - 0.98;\bar x + 0.98) = (250.2 - 0.98; 250.2 + 0.98) = (249.22; 251.18).\,</math>

This interval has fixed endpoints, where μ might be in between (or not). There is no probability of such an event. We cannot say: "with probability (1 − α) the parameter μ lies in the confidence interval." We only know that by repetition, μ will be in the calculated interval in 100(1 − α)% of the cases, and outside it in 100α% of the cases. Unfortunately, we do not know in which of the cases this happens. That is why we say: "with confidence level 100(1 − α)%, μ lies in the confidence interval."
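The computation above can be verified with a few lines of Python; this is a sketch using only the standard library, whose NormalDist supplies the inverse of Φ:

```python
from statistics import NormalDist

# Margarine example: known sigma = 2.5 g, n = 25 cups, observed mean 250.2 g.
sigma, n, xbar = 2.5, 25, 250.2
z = NormalDist().inv_cdf(0.975)   # Phi^{-1}(0.975), approximately 1.96
half = z * sigma / n ** 0.5       # approximately 0.98 grams
lo, hi = xbar - half, xbar + half
print(round(z, 2), round(lo, 2), round(hi, 2))
```

The printed endpoints reproduce the interval (249.22, 251.18) derived above.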

The following picture shows 50 realisations of a confidence interval for μ.


By observing the sample mean, we choose one interval from the population of all possible realisations. Beforehand, the probability is 95% that we will end up having chosen an interval that contains the parameter; after the realisation, we simply have our chosen interval. As the picture shows, there was a fair chance of choosing an interval that contains μ; however, we may be unlucky and have picked one that does not. We will never know; we are stuck with our interval.
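The experiment behind such a picture is easy to simulate. This sketch uses the margarine example's parameters and counts how many of 50 simulated intervals cover μ:

```python
import random
from statistics import NormalDist

# 50 simulated samples from the margarine example (mu = 250, sigma = 2.5,
# n = 25), each yielding a 95% confidence interval for mu.
random.seed(1)
mu, sigma, n = 250.0, 2.5, 25
z = NormalDist().inv_cdf(0.975)
half = z * sigma / n ** 0.5       # approximately 0.98 grams

hits = 0
for _ in range(50):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    if xbar - half < mu < xbar + half:
        hits += 1

print(hits)   # most of the 50 intervals contain mu
```

In the long run about 95% of such intervals cover μ, so a typical run misses on only a few of the 50.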

Theoretical example

Suppose X1, ..., Xn are an independent sample from a normally distributed population with mean μ and variance σ2. Let

<math>\bar X=\frac{X_1+\cdots+X_n}{n},</math>
<math>S^2=\frac{1}{n-1}\sum_{i=1}^n\left(X_i-\bar X\right)^2.</math>

Then

<math>T=\frac{\bar X-\mu}{S/\sqrt{n}}</math>

has a Student's t-distribution with n − 1 degrees of freedom. Note that the distribution of T does not depend on the values of the unobservable parameters μ and σ2; i.e., it is a pivotal quantity. If c is the 95th percentile of this distribution, then

<math>\Pr(-c<T<c)=0.9.</math>


(Note: "95th" and "0.9" are correct in the preceding expressions. There is a 5% chance that T will be less than −c and a 5% chance that it will be larger than +c. Thus, the probability that T will be between −c and +c is 90%.)



Consequently,

<math>\Pr\left(\bar X-\frac{cS}{\sqrt{n}}<\mu<\bar X+\frac{cS}{\sqrt{n}}\right)=0.9,</math>

and we have a theoretical (stochastic) 90% confidence interval for μ.

After observing the sample we find values <math>\overline{x}</math> for <math>\overline{X}</math> and s for S, from which we compute the confidence interval


<math>\left(\bar x-\frac{cs}{\sqrt{n}},\ \bar x+\frac{cs}{\sqrt{n}}\right),</math>

an interval with fixed numbers as endpoints, of which we can no longer say that there is a certain probability it contains the parameter μ; either μ is in this interval or it is not.
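For a concrete illustration, the following sketch applies the formula to five hypothetical weights; c ≈ 2.132 is the 95th percentile of Student's t with 4 degrees of freedom, taken from standard tables because the Python standard library has no t quantile function:

```python
from statistics import mean, stdev

# Hypothetical data for the t-based 90% interval described above.
data = [249.8, 250.5, 251.0, 250.1, 249.9]
n = len(data)
xbar, s = mean(data), stdev(data)   # stdev divides by n - 1, matching S
c = 2.132                            # t table value, 95th percentile, 4 d.f.
half = c * s / n ** 0.5
lo, hi = xbar - half, xbar + half
print(round(lo, 2), round(hi, 2))
```

With unknown σ, the interval uses the sample standard deviation s and the wider t quantile in place of the normal quantile.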

How to understand confidence intervals

Confidence levels are typically given alongside statistics resulting from sampling.

In a statement "we are 90% confident that between 35% and 45% of voters favor Candidate A", 90% is our confidence level and 35%-45% is our confidence interval.

It is very tempting to misunderstand this statement in the following way. We used capital letters U and V for random variables; it is conventional to use lower-case letters u and v for their observed values in a particular instance. The misunderstanding is the conclusion that


<math>\Pr(u<\theta<v)=0.9,</math>

so that after the data has been observed, a conditional probability distribution of θ, given the data, is inferred. For example, suppose X is normally distributed with expected value θ and variance 1. (It is grossly unrealistic to take the variance to be known while the expected value must be inferred from the data, but it makes the example simple.) The random variable X is observable. (The random variable X − θ is not observable, since its value depends on θ.) Then X − θ is normally distributed with expectation 0 and variance 1. Given that 90% of the standard normal distribution lies between −1.645 and 1.645, we know:

<math>\Pr(-1.645<X-\theta<1.645)=0.9,</math>

that is,

<math>\Pr(X-1.645<\theta<X+1.645)=0.9,</math>




so the interval from X − 1.645 to X + 1.645 is a 90% confidence interval for θ. But when X = 82 is observed, can we then say that

<math>\Pr(82-1.645<\theta<82+1.645)=0.9\ \mbox{?}</math>

This conclusion does not follow from the laws of probability because θ is not a "random variable"; i.e., no probability distribution has been assigned to it. Confidence intervals are generally a frequentist method, i.e., employed by those who interpret "90% probability" as "occurring in 90% of all cases". Suppose, for example, that θ is the mass of the planet Neptune, and the randomness in our measurement error means that 90% of the time our statement that the mass is between this number and that number will be correct. The mass is not what is random. Therefore, given that we have measured it to be 82 units, we cannot say that in 90% of all cases, the mass is between 82 − 1.645 and 82 + 1.645. There are no such cases; there is, after all, only one planet Neptune.

But if probabilities are construed as degrees of belief rather than as relative frequencies of occurrence of random events, i.e., if we are Bayesians rather than frequentists, can we then say we are 90% sure that the mass is between 82 − 1.645 and 82 + 1.645? Many answers to this question have been proposed, and are philosophically controversial. The answer will not be a mathematical theorem, but a philosophical tenet. Less controversial are Bayesian credible intervals, in which one starts with a prior probability distribution of θ, and finds a posterior probability distribution, which is the conditional probability distribution of θ given the data.

For users of frequentist methods, the explanation of a confidence interval can amount to something like: "The confidence interval represents values for the population parameter for which the difference between the parameter and the observed estimate is not statistically significant at the 10% level". Critics of frequentist methods suggest that this hides the real and, to the critics, incomprehensible frequentist interpretation which might be expressed as: "If the population parameter in fact lies within the confidence interval, then the probability that the estimator either will be the estimate actually observed, or will be closer to the parameter, is less than or equal to 90%". The confidence interval can also be expressed in terms of samples: "Were this procedure to be repeated on multiple samples, the calculated confidence interval (which would differ for each sample) would encompass the true population parameter 90% of the time." Users of Bayesian methods, if they produced a confidence interval, might by contrast say "My degree of belief that the parameter is in fact in the confidence interval is 90%". Disagreements about these issues are not disagreements about solutions to mathematical problems. Rather they are disagreements about the ways in which mathematics is to be applied.

Confidence intervals in measurement

More concretely, the results of measurements are often accompanied by confidence intervals. For instance, suppose a scale is known to yield the actual mass of an object plus a normally distributed random error with mean 0 and known standard deviation σ. If we weigh 100 objects of known mass on this scale and report the values ±σ, then we can expect to find that around 68% of the reported ranges include the actual mass.

If we wish to report values with a smaller standard error value, then we repeat the measurement n times and average the results. Then the 68.2% confidence interval is <math>\pm \sigma/\sqrt{n}</math>. For example, repeating the measurement 100 times reduces the confidence interval to 1/10 of the original width.
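As a quick check of the square-root scaling (σ = 0.5 here is a hypothetical per-measurement standard deviation):

```python
# The standard error of the mean of n repeated measurements is sigma/sqrt(n).
sigma = 0.5
for n in (1, 4, 100):
    print(n, sigma / n ** 0.5)
```

Quadrupling the number of measurements halves the interval width; 100 repetitions shrink it tenfold.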

Note that when we report a 68.2% confidence interval (usually termed standard error) as v ± σ, this does not mean that the true mass has a 68.2% chance of being in the reported range. In fact, the true mass is either in the range or not. How can a value outside the range be said to have any chance of being in the range? Rather, our statement means that 68.2% of the ranges we report using ± σ are likely to include the true mass.

This is not just a quibble. Under the incorrect interpretation, each of the 100 measurements described above would be specifying a different range, and the true mass supposedly has a 68% chance of being in each and every range. Also, it supposedly has a 32% chance of being outside each and every range. If two of the ranges happen to be disjoint, the statements are obviously inconsistent. Say one range is 1 to 2, and the other is 2 to 3. Supposedly, the true mass has a 68% chance of being between 1 and 2, but only a 32% chance of being less than 2 or more than 3. The incorrect interpretation reads more into the statement than is meant.

On the other hand, under the correct interpretation, each and every statement we make is really true, because the statements are not about any specific range. We could report that one mass is 10.2 ± 0.1 grams, while really it is 10.6 grams, and not be lying. But if we report fewer than 1000 values and more than two of them are that far off, we will have some explaining to do.

It is also possible to estimate a confidence interval without knowing the standard deviation of the random error. This is done using the T distribution, or by using non-parametric resampling methods such as the bootstrap, which do not require that the error have a normal distribution.
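A percentile bootstrap can be sketched with the standard library alone; the data below are hypothetical measurements:

```python
import random
from statistics import mean

# Percentile bootstrap: resample the data with replacement many times and
# take empirical quantiles of the resampled means. No normality assumed.
random.seed(2)
data = [10.1, 9.8, 10.4, 10.0, 10.2, 9.9, 10.3, 10.1, 9.7, 10.2]

boot = sorted(mean(random.choices(data, k=len(data))) for _ in range(10000))
lo, hi = boot[249], boot[9749]    # 2.5th and 97.5th percentiles
print(round(lo, 2), round(hi, 2))  # an approximate 95% CI for the mean
```

Resampling approximates the sampling distribution of the mean directly from the data, which is why no assumption about the error distribution is needed.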

Robust confidence intervals

In the process of weighing 1000 objects, under practical conditions, it is easy to believe that the operator might make a mistake in procedure and so report an incorrect mass (thereby making one type of systematic error). Suppose he has 100 objects and he weighed them all, one at a time, and repeated the whole process ten times. Then he can calculate a sample standard deviation for each object, and look for outliers. Any object with an unusually large standard deviation probably has an outlier in its data. These can be removed by various non-parametric techniques. If he repeated the process only three times, he would simply take the median of the three measurements and use σ to give a confidence interval. The 200 extra weighings served only to detect and correct for operator error and did nothing to improve the confidence interval. With more repetitions, he could use a truncated mean, discarding say the largest and smallest values and averaging the rest. He could then use a bootstrap calculation to determine a confidence interval narrower than that calculated from σ, and so obtain some benefit from a large amount of extra work.

These procedures are robust against procedural errors which are not modeled by the assumption that the balance has a fixed known standard deviation σ. In practical applications where the occasional operator error can occur, or the balance can malfunction, the assumptions behind simple statistical calculations cannot be taken for granted. Before trusting the results of 100 objects weighed just three times each to have confidence intervals calculated from σ, it is necessary to test for and remove a reasonable number of outliers (testing the assumption that the operator is careful and correcting for the fact that he is not perfect), and to test the assumption that the data really have a normal distribution with standard deviation σ.

The theoretical analysis of such an experiment is complicated, but it is easy to set up a spreadsheet that draws random numbers from a normal distribution with standard deviation σ to simulate the situation (for example, using =NORMINV(RAND(),0,σ) in Excel). See, for example, Wittwer, J.W., "Monte Carlo Simulation in Excel: A Practical Guide", June 1, 2004. These techniques also work in OpenOffice and Gnumeric.

After removing obvious outliers, one could subtract the median from the other two values for each object, and examine the distribution of the 200 resulting numbers. It should be normal with mean near zero and standard deviation a little larger than σ. A simple Monte Carlo spreadsheet calculation would reveal typical values for the standard deviation (around 105 to 115% of σ). Or, one could subtract the mean of each triplet from the values, and examine the distribution of 300 values. The mean is identically zero, but the standard deviation should be somewhat smaller (around 75 to 85% of σ).
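The spreadsheet simulation suggested above can also be sketched in Python; this checks the claim about the spread of the value-minus-median residuals directly:

```python
import random
from statistics import median, pstdev

# Simulate triplets of normal measurement errors (sigma = 1) and examine
# the distribution of each value minus its triplet's median.
random.seed(3)
sigma = 1.0
residuals = []
for _ in range(5000):
    triplet = [random.gauss(0.0, sigma) for _ in range(3)]
    m = median(triplet)
    residuals.extend(x - m for x in triplet if x != m)

print(round(pstdev(residuals), 2))   # a little larger than sigma
```

The standard deviation of the residuals comes out somewhat above σ, consistent with the 105 to 115% range quoted above.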

Confidence intervals for proportions and related quantities

An approximate confidence interval for a population mean can be constructed for random variables that are not normally distributed in the population, relying on the central limit theorem, if the sample sizes and counts are big enough. The formulae are identical to the case above (where the sample mean is actually normally distributed about the population mean). The approximation will be quite good with only a few dozen observations in the sample if the probability distribution of the random variable is not too different from the normal distribution (e.g. its cumulative distribution function does not have any discontinuities and its skewness is moderate).

One type of sample mean is the mean of an indicator variable, which takes on the value 1 for true and the value 0 for false. (Statisticians often call indicator variables "dummy variables", but that term is also frequently used by mathematicians for the concept of a bound variable.) The mean of such a variable is equal to the proportion that have the variable equal to one (both in the population and in any sample). Thus, the sample mean for a variable labeled MALE in data is just the proportion of sampled observations who have MALE = 1, i.e. the proportion who are male. This is a useful property of indicator variables, especially for hypothesis testing.

To apply the central limit theorem, one must use a large enough sample. A rough rule of thumb is that one should see at least 5 cases in which the indicator is 1 and at least 5 in which it is 0. Confidence intervals constructed using the above formulae may include negative numbers or numbers greater than 1, but proportions obviously cannot be negative or exceed 1. The probability assigned to negative numbers and numbers greater than 1 is usually small when the sample size is large and the proportion being estimated is not too close to 0 or 1.
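The normal-approximation (Wald) interval described above, sketched for a hypothetical sample in which 40 of 100 respondents answer yes:

```python
from statistics import NormalDist

# Wald interval for a proportion: p +/- z * sqrt(p * (1 - p) / n).
successes, n = 40, 100            # hypothetical sample counts
p = successes / n
z = NormalDist().inv_cdf(0.975)   # 95% confidence level
half = z * (p * (1 - p) / n) ** 0.5
print(round(p - half, 3), round(p + half, 3))
```

With a proportion near 0 or 1 and the same formula, the endpoints could fall outside [0, 1], which is the defect discussed next.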

Confidence intervals for cases where the method above assigns a substantial probability to (−∞, 0) or to (1, ∞) may be constructed by inverting hypothesis tests. If we think of conducting hypothesis tests over the whole feasible range of parameter values, and including any values for which a single hypothesis test would not reject the null hypothesis that the true value was that value, given our sample value, we can make a confidence interval based on the central limit theorem that does not violate the basic properties of proportions.
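One standard way of inverting the normal-approximation test is the Wilson score interval, whose endpoints always stay inside [0, 1]; a minimal sketch:

```python
from statistics import NormalDist

# Wilson score interval: solve the score test inequality for p, giving a
# recentred interval that respects the [0, 1] range of a proportion.
def wilson(successes, n, level=0.95):
    z = NormalDist().inv_cdf(0.5 + level / 2)
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * ((p * (1 - p) + z * z / (4 * n)) / n) ** 0.5 / denom
    return centre - half, centre + half

# Even an extreme sample proportion (1 success in 20) gives a valid interval.
lo, hi = wilson(1, 20)
print(round(lo, 3), round(hi, 3))
```

Unlike the Wald interval, the lower endpoint here stays non-negative even though the observed proportion is only 0.05.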

On the other hand, sample proportions can only take on a finite number of values, so the central limit theorem and the normal distribution are not the best tools for building a confidence interval. A better method would rely on the binomial distribution or the beta distribution, and there are a number of better methods in widespread use. For details on advantages and disadvantages of each, see:

  • "Interval Estimation for a Binomial Proportion", Lawrence D. Brown, T. Tony Cai, Anirban DasGupta, Statistical Science, volume 16, number 2 (May, 2001), pages 101-117.

References


  • Fisher, R.A. (1956) Statistical Methods and Scientific Inference. Oliver and Boyd, Edinburgh. (See p. 32.)
  • Freund, J.E. (1962) Mathematical Statistics Prentice Hall, Englewood Cliffs, NJ. (See pp. 227-228.)
  • Hacking, I. (1965) Logic of Statistical Inference. Cambridge University Press, Cambridge
  • Keeping, E.S. (1962) Introduction to Statistical Inference. D. Van Nostrand, Princeton, NJ.
  • Kiefer, J. (1977) Journal of the American Statistical Association, 72, 789-827.
  • Neyman, J. (1937) Philosophical Transactions of the Royal Society of London A, 236, 333-380. (Seminal work.)
  • Robinson, G.K. (1975) Biometrika, 62, 151-161.
  • Zar, J.H. (1984) Biostatistical Analysis. Prentice Hall International, New Jersey. pp 43-45
  1. Goldstein, H., & Healy, M.J.R. (1995). The graphical presentation of a collection of means. Journal of the Royal Statistical Society, Series A, 158, 175-177.
