Akaike information criterion

Editor-In-Chief: C. Michael Gibson, M.S., M.D.

Overview

Akaike's information criterion, developed by Hirotugu Akaike under the name "an information criterion" (AIC) in 1971 and proposed in Akaike (1974), is a measure of the goodness of fit of an estimated statistical model. It is grounded in the concept of information entropy. The AIC is an operational way of trading off the complexity of an estimated model against how well the model fits the data.

Definition

In the general case, the AIC is

<math>AIC = 2k - 2\ln(L)\,</math>

where k is the number of estimated parameters in the statistical model, and L is the maximized value of the likelihood function for the estimated model.
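
As a minimal sketch (not taken from any of the sources cited below), the general formula translates directly into a short Python function; here log_likelihood is assumed to be the maximized log-likelihood ln(L) of the fitted model and k its number of estimated parameters.

<syntaxhighlight lang="python">
def aic(log_likelihood, k):
    """General AIC: 2k - 2*ln(L), where log_likelihood is ln(L) at its maximum."""
    return 2 * k - 2 * log_likelihood
</syntaxhighlight>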

Over the remainder of this entry, it will be assumed that the model errors are normally and independently distributed. Let n be the number of observations and RSS be the residual sum of squares. Then, up to an additive constant that depends only on n, AIC becomes

<math>AIC=2k + n\ln(RSS/n)\,</math>
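
Under the normal-errors assumption, the criterion can therefore be computed from the residual sum of squares alone. The sketch below is an illustration rather than library code; rss, n, and k are assumed to be supplied by the user from a fitted model.

<syntaxhighlight lang="python">
import math

def aic_gaussian(rss, n, k):
    """AIC for a model with normally and independently distributed errors:
    2k + n*ln(RSS/n), up to an additive constant that depends only on n."""
    return 2 * k + n * math.log(rss / n)
</syntaxhighlight>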

Increasing the number of free parameters to be estimated improves the goodness of fit, regardless of the number of free parameters in the data-generating process. Hence AIC not only rewards goodness of fit, but also includes a penalty that is an increasing function of the number of estimated parameters. This penalty discourages overfitting. The preferred model is the one with the lowest AIC value. The AIC methodology attempts to find the model that best explains the data with a minimum of free parameters. By contrast, more traditional approaches to modeling start from a null hypothesis and add parameters only if they are judged statistically significant. The AIC penalizes free parameters less strongly than does the Schwarz criterion (the Bayesian information criterion, BIC).
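
For example, given several candidate models fitted to the same data, the preferred model is simply the one with the smallest AIC. The sketch below reuses the aic_gaussian function above; the model names, RSS values, and parameter counts are hypothetical.

<syntaxhighlight lang="python">
# Hypothetical fits to the same data set of n = 100 observations:
# model name -> (residual sum of squares, number of estimated parameters)
candidates = {"linear": (52.3, 2), "quadratic": (47.1, 3), "cubic": (46.9, 4)}
n = 100

scores = {name: aic_gaussian(rss, n, k) for name, (rss, k) in candidates.items()}
best = min(scores, key=scores.get)  # the model with the lowest AIC is preferred
</syntaxhighlight>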

AICc and AICu

AICc is AIC with a second-order correction for small sample sizes:

<math>AICc = AIC + \frac{2k(k + 1)}{n - k - 1}\,</math>

Since AICc converges to AIC as n gets large, AICc should be employed regardless of sample size (Burnham and Anderson, 2004).
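
As a small illustration in this entry's notation (not code from Burnham and Anderson), the correction is a one-line adjustment to any AIC value; it requires n > k + 1.

<syntaxhighlight lang="python">
def aicc(aic_value, n, k):
    """Second-order correction: AIC + 2k(k+1)/(n - k - 1); valid only for n > k + 1."""
    return aic_value + 2 * k * (k + 1) / (n - k - 1)
</syntaxhighlight>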

McQuarrie and Tsai (1998: 22) define AICc as:

<math>AICc = \ln{\frac{RSS}{n}} + \frac{n+k}{n-k-2}\ ,</math>

and propose (p. 32) the closely related measure:

<math>AICu = \ln{\frac{RSS}{n-k}} + \frac{n+k}{n-k-2}\ .</math>

McQuarrie and Tsai ground their high opinion of AICc and AICu on extensive simulation work.
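
The two formulas above transcribe directly into Python; this is a sketch in this entry's notation (RSS, n, k), not McQuarrie and Tsai's own code.

<syntaxhighlight lang="python">
import math

def aicc_mcquarrie_tsai(rss, n, k):
    """AICc as defined by McQuarrie and Tsai (1998: 22): ln(RSS/n) + (n+k)/(n-k-2)."""
    return math.log(rss / n) + (n + k) / (n - k - 2)

def aicu(rss, n, k):
    """AICu as proposed by McQuarrie and Tsai (1998: 32): ln(RSS/(n-k)) + (n+k)/(n-k-2)."""
    return math.log(rss / (n - k)) + (n + k) / (n - k - 2)
</syntaxhighlight>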

QAIC

QAIC (the quasi-AIC) is defined as:

<math>QAIC = 2k-\frac{1}{c}2\ln{L}\,</math>

where c is a variance inflation factor. QAIC adjusts for over-dispersion or lack of fit. The small-sample version of QAIC is:

<math>QAICc = QAIC + \frac{2k(k + 1)}{n - k - 1}\,</math>.
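
Both quantities again reduce to elementary arithmetic once the maximized log-likelihood and an estimate of c are available; in the sketch below, c_hat is assumed to be a user-supplied estimate of the variance inflation factor.

<syntaxhighlight lang="python">
def qaic(log_likelihood, k, c_hat):
    """Quasi-AIC: 2k - (2/c)*ln(L), with c_hat an estimate of the variance inflation factor."""
    return 2 * k - (2 * log_likelihood) / c_hat

def qaicc(log_likelihood, k, c_hat, n):
    """Small-sample version: QAIC + 2k(k+1)/(n - k - 1)."""
    return qaic(log_likelihood, k, c_hat) + 2 * k * (k + 1) / (n - k - 1)
</syntaxhighlight>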

References

  • Akaike, Hirotugu (1974). "A new look at the statistical model identification". IEEE Transactions on Automatic Control 19 (6): 716–723.
  • Burnham, K. P., and Anderson, D. R. (2002). Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd ed. Springer-Verlag. ISBN 0-387-95364-7.
  • Burnham, K. P., and Anderson, D. R. (2004). "Multimodel Inference: Understanding AIC and BIC in Model Selection". Amsterdam Workshop on Model Selection.
  • Hurvich, C. M., and Tsai, C.-L. (1989). "Regression and time series model selection in small samples". Biometrika 76: 297–307.
  • McQuarrie, A. D. R., and Tsai, C.-L. (1998). Regression and Time Series Model Selection. World Scientific.

