Gauss–Markov theorem



This article is not about Gauss–Markov processes.

Overview

In statistics, the Gauss–Markov theorem, named after Carl Friedrich Gauss and Andrey Markov, states that in a linear model in which the errors have expectation zero, are uncorrelated, and have equal variances, a best linear unbiased estimator (BLUE) of the coefficients is given by the least-squares estimator. The errors are not assumed to be normally distributed, nor are they assumed to be independent (only uncorrelated, a weaker condition), nor are they assumed to be identically distributed (only having zero mean and equal variances).

Statement

Suppose we have

<math>Y_i=\sum_{j=1}^{K}\beta_j X_{ij}+\varepsilon_i</math>

for i = 1, . . ., n, where βj are non-random but unobservable parameters, Xij are non-random and observable (called the "explanatory variables"), εi are random, and so Yi are random. The random variables εi are called the "errors" (not to be confused with "residuals"; see errors and residuals in statistics). Note that to include a constant in the model above, one can set XiK = 1 for all i.
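As a concrete illustration (not part of the theorem's statement), the following Python sketch simulates one such model with NumPy; the sample size, coefficient values, and error distribution are illustrative assumptions only, and the columns of X are generated once and then treated as fixed:

import numpy as np

rng = np.random.default_rng(0)
n, K = 100, 3                       # illustrative sample size and number of coefficients
beta = np.array([2.0, -1.0, 0.5])   # "true" coefficients, unobservable in practice

# Explanatory variables: generated once, then treated as fixed (non-random).
# The last column is identically 1, so the model includes a constant term.
X = np.column_stack([rng.uniform(0, 10, n), rng.uniform(0, 10, n), np.ones(n)])

# Errors: zero mean, common variance, uncorrelated. Normality is used only for
# convenience here; the Gauss–Markov theorem does not require it.
eps = rng.normal(0.0, 2.0, n)
Y = X @ beta + eps                  # Y_i = sum_j beta_j * X_ij + eps_i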

The Gauss–Markov assumptions state that

  • <math>{\rm E}\left(\varepsilon_i\right)=0,</math>
  • <math>{\rm Var}\left(\varepsilon_i\right)=\sigma^2<\infty,</math>

(i.e., all errors have the same variance; that is "homoscedasticity"), and

  • <math>{\rm Cov}\left(\varepsilon_i,\varepsilon_j\right)=0</math>

for <math>i\not=j</math>; that is "uncorrelatedness."

A linear estimator of βj is a linear combination

<math>\widehat\beta_j = c_{1j}Y_1+\cdots+c_{nj}Y_n</math>

in which the coefficients cij are not allowed to depend on the underlying coefficients β, since those are not observable, but are allowed to depend on X, since these data are observable, and whose expected value remains βj even if the values of X change. (The dependence of the coefficients on X is typically nonlinear; the estimator is linear in Y and hence in ε, which is random; that is why this is "linear" regression.) The estimator is unbiased if and only if

<math>{\rm E}(\widehat\beta_j)=\beta_j.\,</math>
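In matrix notation (a standard restatement, with the n × K coefficient matrix C, whose entries are the cij above, introduced here for convenience so that <math>\widehat\beta=C^{T}Y</math>): substituting <math>Y=X\beta+\varepsilon</math> and using <math>{\rm E}(\varepsilon)=0</math> gives <math>{\rm E}(\widehat\beta)=C^{T}X\beta</math>, so unbiasedness for every β holds exactly when

<math>C^{T}X=I.\,</math>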

Now, let <math>\sum_{j=1}^K\lambda_j\beta_j</math> be some linear combination of the coefficients. Then the mean squared error of the corresponding estimator <math>\sum_{j=1}^K\lambda_j\widehat\beta_j</math> is

<math>{\rm E}\left(\left(\sum_{j=1}^K\lambda_j(\widehat\beta_j-\beta_j)\right)^2\right),</math>

i.e., it is the expectation of the square of the difference between the estimator and the parameter to be estimated. (The mean squared error of an estimator coincides with the estimator's variance if the estimator is unbiased; for biased estimators the mean squared error is the sum of the variance and the square of the bias.) A best linear unbiased estimator of β is the one with the smallest mean squared error for every linear combination λ. This is equivalent to the condition that

<math>{\rm Var}(\tilde\beta)-{\rm Var}(\widehat\beta)</math>

is a positive semi-definite matrix for every other linear unbiased estimator <math>\tilde\beta</math>.
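For completeness, the variance–bias decomposition mentioned parenthetically above can be written out; for a scalar estimator <math>\widehat\theta</math> of a parameter θ (notation used only for this identity),

<math>{\rm E}\left((\widehat\theta-\theta)^2\right)={\rm Var}(\widehat\theta)+\left({\rm E}(\widehat\theta)-\theta\right)^2.</math>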

The ordinary least squares (OLS) estimator is the function

<math>\widehat\beta=(X^{T}X)^{-1}X^{T}Y</math>

of Y and X that minimizes the sum of squares of residuals

<math>\sum_{i=1}^n\left(Y_i-\widehat{Y}_i\right)^2=\sum_{i=1}^n\left(Y_i-\sum_{j=1}^K\widehat\beta_j X_{ij}\right)^2.</math>

(It is easy to confuse the concept of error introduced early in this article, with this concept of residual. For an account of the differences and the relationship between them, see errors and residuals in statistics).
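Continuing the simulated example above, a minimal sketch of the OLS computation; np.linalg.lstsq is used in place of the explicit inverse <math>(X^{T}X)^{-1}</math>, a numerical-stability choice rather than anything the theorem requires:

beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves min_b ||Y - X b||^2

Y_hat = X @ beta_hat            # fitted values
residuals = Y - Y_hat           # residuals: observable, unlike the errors eps
rss = np.sum(residuals**2)      # the sum of squared residuals that OLS minimizes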

The theorem now states that the OLS estimator is a BLUE. The main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combination <math>a_1Y_1+\cdots+a_nY_n</math> whose coefficients do not depend upon the unobservable β but whose expected value is always zero.
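This idea can be made concrete in the matrix notation used above (a standard step of the proof): any other linear unbiased estimator can be written as <math>\tilde\beta=\widehat\beta+D^{T}Y</math> for some matrix D with <math>D^{T}X=0</math>, so that <math>D^{T}Y</math> is a linear unbiased estimator of zero. Since <math>\widehat\beta</math> and <math>D^{T}Y</math> are then uncorrelated,

<math>{\rm Var}(\tilde\beta)={\rm Var}(\widehat\beta)+\sigma^{2}D^{T}D,</math>

and <math>\sigma^{2}D^{T}D</math> is positive semi-definite, which is exactly the comparison stated above.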

Generalized least squares estimator

The generalized least squares (GLS) or Aitken estimator extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix; the Aitken estimator is also a BLUE.[1]
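A minimal sketch of the Aitken estimator, assuming the error covariance matrix Omega is known (the function name gls and the use of np.linalg.solve are illustrative choices):

# Aitken / GLS estimate: beta = (X^T Omega^{-1} X)^{-1} X^T Omega^{-1} Y
def gls(X, Y, Omega):
    Oinv_X = np.linalg.solve(Omega, X)    # Omega^{-1} X, without forming the inverse
    Oinv_Y = np.linalg.solve(Omega, Y)    # Omega^{-1} Y
    return np.linalg.solve(X.T @ Oinv_X, X.T @ Oinv_Y)

With Omega = σ²I the estimator reduces to ordinary least squares, recovering the Gauss–Markov setting.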

External links

  • Proof of the Gauss Markov theorem for multiple linear regression (makes use of matrix algebra): http://www.xycoon.com/ols1.htm
  • A proof of the Gauss Markov theorem using geometry: http://emlab.berkeley.edu/GMTheorem/index.html

References

  1. Aitken, A.C. (1935) "On Least Squares and Linear Combinations of Observations", Proceedings of the Royal Society of Edinburgh 55: 42-48
  • Plackett, R.L. (1950) "Some Theorems in Least Squares", Biometrika 37: 149-157
