Autocovariance


Editor-In-Chief: C. Michael Gibson, M.S., M.D.


Overview

In statistics, given a stochastic process X(t), the autocovariance is the covariance of the process with a time-shifted version of itself. If each state of the series has a mean, <math>E[X_t] = \mu_t</math>, then the autocovariance is given by

<math>\, K_\mathrm{XX} (t,s) = E[(X_t - \mu_t)(X_s - \mu_s)] = E[X_t\cdot X_s]-\mu_t\cdot\mu_s,\,</math>

where E is the expectation operator.
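This definition can be illustrated numerically: a minimal sketch (hypothetical AR(1)-style data, NumPy assumed) that estimates <math>K_\mathrm{XX}(t,s)</math> by replacing the expectations with sample means over many realisations, and checks that the two forms of the definition agree.

```python
import numpy as np

# Sketch: estimate K_XX(t, s) from repeated realisations of a process,
# using sample means in place of the expectations E[.].
rng = np.random.default_rng(0)

# 1000 realisations of a short process X_0..X_4 (hypothetical AR(1)-like data)
n_real, n_steps = 1000, 5
X = np.empty((n_real, n_steps))
X[:, 0] = rng.normal(size=n_real)
for k in range(1, n_steps):
    X[:, k] = 0.6 * X[:, k - 1] + rng.normal(size=n_real)

mu = X.mean(axis=0)  # mu_t = E[X_t], estimated separately at each time step

def autocov(t, s):
    """Sample estimate of K_XX(t, s) = E[(X_t - mu_t)(X_s - mu_s)]."""
    return np.mean((X[:, t] - mu[t]) * (X[:, s] - mu[s]))

def autocov_alt(t, s):
    """Equivalent form E[X_t X_s] - mu_t mu_s from the definition."""
    return np.mean(X[:, t] * X[:, s]) - mu[t] * mu[s]
```

The two estimators are algebraically identical given the same sample means, so they agree up to floating-point rounding.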


Stationarity

If X(t) is wide-sense stationary, then the following conditions hold:

<math>\mu_t = \mu_s = \mu \,</math> for all t, s

and

<math>K_\mathrm{XX}(t,s) = K_\mathrm{XX}(s-t) = K_\mathrm{XX}(\tau) \, </math>

where

<math>\tau = s - t \,</math>

is the lag time, or the amount of time by which the signal has been shifted.

As a result, the autocovariance becomes

<math>\, K_\mathrm{XX} (\tau) = E \{ (X(t) - \mu)(X(t+\tau) - \mu) \} </math>
<math> = E \{ X(t)\cdot X(t+\tau) \} -\mu^2,\,</math>
<math> = R_\mathrm{XX}(\tau) - \mu^2,\,</math>

where <math>R_\mathrm{XX}(\tau) = E\{X(t)\cdot X(t+\tau)\}</math> is the autocorrelation.
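For a stationary series, the lag-based form can be checked from a single long realisation: a sketch (hypothetical AR(1) data with a non-zero mean, NumPy assumed) that estimates <math>K_\mathrm{XX}(\tau)</math> by averaging over t, and verifies <math>K_\mathrm{XX}(\tau) \approx R_\mathrm{XX}(\tau) - \mu^2</math>.

```python
import numpy as np

# Sketch: for a wide-sense stationary series, estimate K_XX(tau) from one
# long realisation, and check K_XX(tau) = R_XX(tau) - mu^2.
rng = np.random.default_rng(1)

n = 100_000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for k in range(1, n):           # AR(1) with coefficient 0.5: stationary
    x[k] = 0.5 * x[k - 1] + e[k]
x += 2.0                        # shift so the mean mu is non-zero

mu = x.mean()

def K(tau):
    """Sample autocovariance at lag tau (mean removed)."""
    return np.mean((x[:n - tau] - mu) * (x[tau:] - mu))

def R(tau):
    """Sample autocorrelation (raw second moment) at lag tau."""
    return np.mean(x[:n - tau] * x[tau:])
```

At lag 0 the autocovariance reduces to the variance, and K(tau) matches R(tau) − mu² up to finite-sample error.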

Normalization

When normalised by dividing by the variance <math>\sigma^2</math>, the autocovariance becomes the autocorrelation coefficient <math>\rho</math>. That is,

<math> \rho_\mathrm{XX}(\tau) = \frac{ K_\mathrm{XX}(\tau)}{\sigma^2}.\,</math>

Note, however, that some disciplines use the terms autocovariance and autocorrelation interchangeably.

The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself; an autocovariance of <math>\sigma^2</math> at some lag indicates perfect correlation at that lag. Normalising by the variance puts this measure into the range [−1, 1].
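The normalisation can be sketched as follows (hypothetical white-noise data, NumPy assumed); the divide-by-n autocovariance estimator is used so that the sample coefficient stays within [−1, 1], with <math>\rho(0) = 1</math> by construction.

```python
import numpy as np

# Sketch: the autocorrelation coefficient rho(tau) = K(tau) / sigma^2
# lies in [-1, 1], and rho(0) = 1 by construction.
rng = np.random.default_rng(2)
x = rng.normal(size=5000)   # example series: white noise
mu, var = x.mean(), x.var()
n = len(x)

def rho(tau):
    """Autocorrelation coefficient rho(tau) = K(tau) / sigma^2, using the
    biased (divide-by-n) autocovariance estimator."""
    k = np.sum((x[:n - tau] - mu) * (x[tau:] - mu)) / n
    return k / var
```

For white noise, <math>\rho(\tau)</math> at non-zero lags is close to 0, reflecting the absence of correlation between shifted copies of the signal.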
