Behrens-Fisher problem


Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]


Overview

In statistics, the Behrens-Fisher problem is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.

Ronald Fisher in 1935 introduced fiducial inference in order to apply it to this problem. He referred to an earlier paper by W. V. Behrens from 1929. Behrens and Fisher proposed to find the probability distribution of

<math> T \equiv {\bar x_1 - \bar x_2 \over \sqrt{s_1^2/n_1 + s_2^2/n_2}} </math>

where <math> \bar x_1 </math> and <math> \bar x_2 </math> are the two sample means, and <math> s_1 </math> and <math> s_2 </math> are their standard deviations. Fisher approximated the distribution of this by ignoring the random variation of the relative sizes of the standard deviations, <math> {s_1 / \sqrt{n_1} \over \sqrt{s_1^2/n_1 + s_2^2/n_2}} </math>.
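The statistic <math>T</math> above is straightforward to compute from two independent samples. The following is a minimal sketch in Python; the sample data are purely illustrative and not from the article, and the sample variances use the usual <math>n-1</math> denominator:

```python
import math

# Two hypothetical independent samples (illustrative values only)
x1 = [20.1, 22.3, 19.8, 21.6, 20.9]
x2 = [18.4, 19.0, 17.7, 18.9]

n1, n2 = len(x1), len(x2)
mean1 = sum(x1) / n1
mean2 = sum(x2) / n2

# Sample variances s_i^2 with the n-1 denominator
s1_sq = sum((x - mean1) ** 2 for x in x1) / (n1 - 1)
s2_sq = sum((x - mean2) ** 2 for x in x2) / (n2 - 1)

# Behrens-Fisher statistic: difference of means over its estimated standard error
T = (mean1 - mean2) / math.sqrt(s1_sq / n1 + s2_sq / n2)
```

The point of the Behrens-Fisher problem is not computing <math>T</math>, which is easy, but deciding which reference distribution to compare it against when <math>\sigma_1^2 \ne \sigma_2^2</math>.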

Fisher's solution provoked controversy because it did not have the property that the hypothesis of equal means would be rejected with probability α if the means were in fact equal. Many other methods of treating the problem have been proposed since.

Welch's approximate t solution

The most widely used method (implemented, for example, in statistical packages and in Microsoft Excel) is that of B. L. Welch, who, like Fisher, was at University College London. The variance of the mean difference <math>\bar d =\bar x_1 - \bar x_2</math> is estimated by <math> s_{\bar d}^2 = s_1^2/n_1 + s_2^2/n_2</math>. Welch (1938) approximated the distribution of <math>s_{\bar d}^2</math> by the Type III Pearson distribution (a scaled chi-squared distribution) whose first two moments agree with those of <math>s_{\bar d}^2</math>. This leads to the following number of degrees of freedom (d.f.), which is generally non-integer:

<math> \nu = {(\gamma_1 + \gamma_2)^2 \over \gamma_1^2/(n_1-1) + \gamma_2^2/(n_2-1)} \qquad{\rm where~~}\gamma_i = \sigma_i^2/n_i</math>.

Under the null hypothesis of equal expectations, <math>\mu_1=\mu_2</math>, the distribution of the Behrens-Fisher statistic <math>T</math>, which also depends on the variance ratio <math>\sigma_1^2/\sigma_2^2</math>, can now be approximated by Student's t distribution with these <math>\nu</math> degrees of freedom. But this <math>\nu</math> contains the population variances <math>\sigma_i^2</math>, which are unknown. The following estimate replaces the population variances with the sample variances:

<math>\hat\nu = {(g_1 + g_2)^2 \over g_1^2/(n_1-1) + g_2^2/(n_2-1)} \qquad{\rm where~~}g_i = s_i^2/n_i</math>.

This <math>\hat\nu</math> is a random variable, and a t distribution with a random number of degrees of freedom does not exist. Nevertheless, the Behrens-Fisher statistic <math>T</math> can be compared with the corresponding quantile of Student's t distribution with this estimated number of degrees of freedom, <math>\hat\nu</math>, which is generally non-integer. In this way, the boundary between the acceptance and rejection regions of the test statistic <math>T</math> is described by a smooth function of the empirical variances <math>s_i^2</math>.
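The estimate <math>\hat\nu</math> (the Welch-Satterthwaite formula) can be sketched as a small function; the check below uses the known property that equal sample variances and equal sample sizes recover the pooled degrees of freedom <math>n_1+n_2-2</math>:

```python
def welch_df(s1_sq, n1, s2_sq, n2):
    """Welch-Satterthwaite estimated degrees of freedom (generally non-integer).

    s1_sq, s2_sq: sample variances; n1, n2: sample sizes.
    """
    g1 = s1_sq / n1
    g2 = s2_sq / n2
    return (g1 + g2) ** 2 / (g1 ** 2 / (n1 - 1) + g2 ** 2 / (n2 - 1))

# Equal variances and equal sizes recover the pooled d.f. n1 + n2 - 2
print(welch_df(4.0, 10, 4.0, 10))  # approximately 18.0

# Unequal variances pull the d.f. below the pooled value,
# but never below min(n1 - 1, n2 - 1)
print(welch_df(1.0, 5, 10.0, 20))
```

The test statistic <math>T</math> is then compared against the quantile of Student's t distribution with `welch_df(...)` degrees of freedom; most statistics libraries accept a non-integer d.f. directly.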

This method does not give exactly the nominal rejection rate either, but it is generally not far off. However, if the population variances are equal, or if the samples are rather small and the population variances can be assumed to be approximately equal, it is more accurate to use the standard two-sample t-test.
