Least-squares spectral analysis

Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum, based on a least squares fit of sinusoids to data samples, similar to Fourier analysis[1][2]. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems[3] and is a superior alternative for analyzing long incomplete records such as most natural datasets.[4]

LSSA is also known as the Vaníček method[5] after Canadian geodesist and geophysicist Petr Vaníček, and as the Lomb method[3] (or the Lomb periodogram[6]) and the Lomb–Scargle method[7] (or Lomb–Scargle periodogram[8][2]), based on the contributions of Nicholas R. Lomb[9] and, independently, Jeffrey D. Scargle[10].

Historical background

The close connections between Fourier analysis, the periodogram, and least-squares fitting of sinusoids have long been known.[11] Most developments, however, are restricted to complete data sets of equally spaced samples. Barning, in 1963[12], handled unequally spaced data by similar techniques, including both a periodogram analysis equivalent to what is now referred to as the Lomb method, and least-squares fitting of selected frequencies of sinusoids determined from such periodograms, connected by a procedure that is now known as matching pursuit with backfitting.[13]

In 1969, Vaníček also proposed a similar matching-pursuit approach, which he called "successive spectral analysis", though initially for equally spaced data[14]. He further developed the method, and analyzed the treatment of unequally spaced samples, in 1971[15]. The Vaníček method was then simplified in 1976 by Lomb, who pointed out its close connection to periodogram analysis[9]. It was subsequently further modified and analyzed by Scargle[10], who pointed out the now-standard way to decorrelate the sine and cosine components evaluated at the set of sample times.

Scargle states that his paper "does not introduce a new detection technique, but instead studies the reliability and efficiency of detection with the most commonly used technique, the periodogram, in the case where the observation times are unevenly spaced," and further points out in reference to least-squares fitting of sinusoids compared to periodogram analysis, that his paper "establishes, apparently for the first time, that (with the proposed modifications) these two methods are exactly equivalent"[10].

Terminology

In this article, the term sinusoids can mean sine and cosine functions, or, in the Lomb–Scargle method, time-shifted (phase-shifted) sines and cosines. Vaníček and his students usually refer to them simply as trigonometric functions rather than specifically as sinusoids[16][4]. This usage was introduced by Vaníček in 1971[15], and is explained by Wells et al. in 1985[17], who "define the function trig(x) as being either cos(x) or sin(x)."

The Vaníček method

In the Vaníček method, a discrete data set is approximated by a weighted sum of sinusoids of predetermined frequencies, using a standard linear regression, or least-squares fit. The number of sinusoids must be less than or equal to the number of data samples (counting sines and cosines of the same frequency as separate sinusoids).

A data vector ϕ is represented as a weighted sum of sinusoidal basis functions, tabulated in a matrix A by evaluating each function at the sample times, with weight vector x:

<math>\phi \approx \textbf{A}x</math>

where the weight vector x is chosen to minimize the sum of squared errors in approximating ϕ. The solution for x is closed-form, using standard linear regression[16]:

<math>x = (\textbf{A}^{\mathrm{T}}\textbf{A})^{-1}\textbf{A}^{\mathrm{T}}\phi</math>.

Here the matrix A can be based on any set of functions that are mutually independent (not necessarily orthogonal) when evaluated at the sample times; for spectral analysis, the functions used are typically sines and cosines evenly distributed over the frequency range of interest. If too many frequencies are chosen in a too-narrow frequency range, the functions will not be sufficiently independent, the matrix will be badly conditioned, and the resulting spectrum will not be meaningful[16].
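
As an illustration (not taken from the cited sources), a minimal NumPy sketch of such a simultaneous fit might look as follows; the function name vanicek_spectrum and the variance-fraction normalization of the output are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

def vanicek_spectrum(t, phi, freqs):
    # Design matrix A: one cosine and one sine column per trial frequency,
    # each evaluated at the (possibly unevenly spaced) sample times t.
    cols = []
    for f in freqs:
        w = 2.0 * np.pi * f
        cols.append(np.cos(w * t))
        cols.append(np.sin(w * t))
    A = np.column_stack(cols)
    # Weight vector x minimizing ||phi - A x||^2; numerically equivalent to the
    # closed-form solution x = (A^T A)^(-1) A^T phi when A is well conditioned.
    x, *_ = np.linalg.lstsq(A, phi, rcond=None)
    # Attribute to each frequency the fraction of the data's sum of squares
    # explained by its cosine/sine pair (one common way to display the spectrum).
    power = np.empty(len(freqs))
    for k in range(len(freqs)):
        contrib = A[:, 2 * k:2 * k + 2] @ x[2 * k:2 * k + 2]
        power[k] = (contrib @ contrib) / (phi @ phi)
    return power, x

# Example: two tones observed at 200 irregular times.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 200))
phi = np.sin(2 * np.pi * 0.10 * t) + 0.5 * np.cos(2 * np.pi * 0.23 * t)
power, x = vanicek_spectrum(t, phi, np.linspace(0.01, 0.45, 45))
</syntaxhighlight>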

When the basis functions in A are orthogonal (that is, not correlated, meaning the columns have zero pair-wise dot products), the matrix <math>\textbf{A}^{\mathrm{T}}\textbf{A}</math> is diagonal; when the columns all have the same power (sum of squares of elements), then that matrix is the identity matrix times a constant, so the inversion is trivial. The latter is the case when the sample times are equally spaced and the sinusoids are chosen to be sines and cosines equally spaced in pairs on the frequency interval 0 to a half cycle per sample (spaced by 1/N cycle per sample, omitting the sine phases at 0 and the maximum frequency, where they are identically zero). This particular case is known as the discrete Fourier transform, slightly rewritten in terms of real data and coefficients[16].

<math>x = \textbf{A}^{\mathrm{T}}\phi</math>     (DFT case for N equally spaced samples and frequencies, within a scalar factor)
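
A quick numerical check of this special case (a sketch only; the variable names are illustrative) shows the columns becoming mutually orthogonal for equally spaced samples and frequencies, so that <math>\textbf{A}^{\mathrm{T}}\textbf{A}</math> is diagonal and no inversion is needed:

<syntaxhighlight lang="python">
import numpy as np

N = 16
n = np.arange(N)
cols = []
for k in range(N // 2 + 1):                       # frequencies k/N cycles per sample
    cols.append(np.cos(2 * np.pi * k * n / N))
    if 0 < k < N // 2:                            # sine is identically zero at k = 0 and k = N/2
        cols.append(np.sin(2 * np.pi * k * n / N))
A = np.column_stack(cols)                         # N columns for N samples
G = A.T @ A                                       # diagonal: N/2 per column, N at k = 0 and k = N/2
print(np.allclose(G, np.diag(np.diag(G))))        # True: off-diagonal terms vanish
</syntaxhighlight>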

Lomb proposed using this simplification in general, since the correlations between pairs of sinusoids are often small, at least when they are not too closely spaced. This is essentially the traditional periodogram formulation, but now adopted for use with unevenly spaced samples. The vector x is a good estimate of an underlying spectrum, but since correlations are ignored, Ax is no longer a good approximation to the signal, and the method is no longer a least-squares method – yet it has continued to be referred to as such.
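
A minimal sketch of this simplified, dot-product-only estimate (the function name and the per-column normalization are illustrative choices, and roughly zero-mean data are assumed):

<syntaxhighlight lang="python">
import numpy as np

def lomb_dot_product_power(t, phi, freqs):
    # Project the data onto each sine/cosine pair independently, normalizing by
    # each column's own power and ignoring correlations between frequencies
    # (the simplification x ~ A^T phi described above).
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        c, s = np.cos(w * t), np.sin(w * t)
        a = np.dot(phi, c) / np.dot(c, c)
        b = np.dot(phi, s) / np.dot(s, s)
        power[i] = a * a + b * b                  # squared-amplitude estimate at frequency f
    return power
</syntaxhighlight>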

The Lomb–Scargle periodogram

Rather than just taking dot products of the data with sine and cosine waveforms directly, Scargle modified Lomb's method to first find a time delay τ such that this pair of sinusoids would be mutually orthogonal at sample times tj, and also adjusted for the potentially unequal powers of these two basis functions, to obtain a better estimate of the power at a frequency[10][3]:

<math>\tan{2 \omega \tau} = \frac{\sum_j \sin 2 \omega t_j}{\sum_j \cos 2 \omega t_j}</math>

The periodogram at frequency ω is then estimated as:

<math>P_x(\omega) = \frac{1}{2} \left( \frac{\left[ \sum_j X_j \cos \omega ( t_j - \tau ) \right]^2}{\sum_j \cos^2 \omega ( t_j - \tau )} + \frac{\left[ \sum_j X_j \sin \omega ( t_j - \tau ) \right]^2}{\sum_j \sin^2 \omega ( t_j - \tau )} \right)</math>

which Scargle reports then has the same statistical distribution as in the evenly-sampled case[10].

At any individual frequency ω, this method gives the same power as does a least-squares fit to sinusoids of that frequency, of the form

<math>\phi(t) \approx A \sin \omega t + B \cos \omega t</math> [18].
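
A compact sketch of the formulas above (not the original authors' code; the function name is illustrative, and zero-mean data and angular frequencies are assumed):

<syntaxhighlight lang="python">
import numpy as np

def lomb_scargle_power(t, x, omegas):
    P = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        # Time delay tau that makes the shifted sine and cosine orthogonal over
        # the sample times: tan(2*w*tau) = sum(sin(2*w*t)) / sum(cos(2*w*t)).
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        ct = np.cos(w * (t - tau))
        st = np.sin(w * (t - tau))
        P[i] = 0.5 * ((np.dot(x, ct) ** 2) / np.dot(ct, ct)
                      + (np.dot(x, st) ** 2) / np.dot(st, st))
    return P

# Example: irregular sampling of a single tone; the periodogram peaks near omega = 0.7.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 50.0, 120))
x = np.sin(0.7 * t + 0.3)
x = x - x.mean()
P = lomb_scargle_power(t, x, np.linspace(0.05, 3.0, 300))
</syntaxhighlight>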

Applications

Vaníček analysis has many scientific applications, ranging from astronomy, geophysics, physics, microbiology, genetics, and medicine to mathematics and finance[21]. This wide applicability stems from many useful properties of the least-squares fit. The most useful feature of the method is that it allows incomplete records to be spectrally analyzed without the need to manipulate the record or to invent otherwise non-existent data.

Magnitudes in the Vaníček spectrum depict the contribution of a period to the variance of the time series, typically of the order of several percent[14]. Spectral magnitudes defined in this way allow a straightforward significance-level regime for the output[20]. Alternatively, magnitudes in the Vaníček spectrum can be expressed in dB[19]. Note that magnitudes in the Vaníček spectrum follow a β-distribution[22].

Inverse transformation of Vaníček's LSSA is possible, as is most easily seen by writing the forward transform as a matrix; the matrix inverse (when the matrix is not singular) or pseudo-inverse will then be an inverse transformation; the inverse will exactly match the original data if the chosen sinusoids are mutually independent at the sample points and their number is equal to the number of data points[16]. No such inverse procedure is known for the periodogram method.
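
A minimal sketch of this exact-inverse property (the sample times and frequencies chosen here are arbitrary illustrative values):

<syntaxhighlight lang="python">
import numpy as np

# With as many mutually independent sinusoid columns as data points, the square
# matrix A is invertible, and A x reproduces the original data vector exactly.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 10.0, 8))                  # 8 sample times
freqs = [0.10, 0.25, 0.40, 0.55]                        # 4 frequencies -> 8 columns
A = np.column_stack([f(2 * np.pi * fr * t) for fr in freqs for f in (np.cos, np.sin)])
phi = rng.normal(size=8)                                # arbitrary data vector
x = np.linalg.solve(A, phi)                             # forward transform (square case)
print(np.max(np.abs(A @ x - phi)))                      # ~machine precision if A is well conditioned
</syntaxhighlight>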

Implementation

The LSSA can be implemented in less than a page of MATLAB code[23]. For each frequency in a desired set of frequencies, sine and cosine functions are evaluated at the times corresponding to the data samples, and dot products of the data vector with the sinusoid vectors are taken and appropriately normalized. Following the Lomb/Scargle periodogram method, a time shift is calculated for each frequency to orthogonalize the sine and cosine components before the dot product, as described by Craymer[16]. Finally, a power is computed from those two amplitude components. This same process implements a discrete Fourier transform when the data are uniformly spaced in time and the frequencies chosen correspond to integer numbers of cycles over the finite data record.

As Craymer explains, this method treats each sinusoidal component independently, or out of context, even though they may not be orthogonal on the data points, whereas Vaníček's original method does a full simultaneous least-squares fit by solving a matrix equation, partitioning the total data variance between the specified sinusoid frequencies[16]. Such a matrix least-squares solution is natively available in MATLAB as the backslash operator[24].

Craymer explains that the least-squares method, as opposed to the independent or periodogram version due to Lomb, cannot fit more components (sines and cosines) than there are data samples, and further that[16]:

"...serious repercussions can also arise if the selected frequencies result in some of the Fourier components (trig functions) becoming nearly linearly dependent with each other, thereby producing an ill-conditioned or near singular N. To avoid such ill conditioning it becomes necessary to either select a different set of frequencies to be estimated (e.g. equally spaced frequencies) or simply neglect the correlations in N (i.e., the off-diagonal blocks) and estimate the inverse least squares transform separately for the individual frequencies..."

Lomb's periodogram method, on the other hand, can use an arbitrarily high number of, or density of, frequency components, as in a standard periodogram; that is, the frequency domain can be over-sampled by an arbitrary factor[3].

In Fourier analysis, such as the Fourier transform or the discrete Fourier transform, the sinusoids being fitted to the data are all mutually orthogonal, so there is no distinction between the simple out-of-context dot-product-based projection onto basis functions versus a least-squares fit; that is, no matrix inversion is required to least-squares partition the variance between orthogonal sinusoids of different frequencies[25]. This method is usually preferred for its efficient fast Fourier transform implementation, when complete data records with equally spaced samples are available.

References

  1. Cafer Ibanoglu (2000). Variable Stars As Essential Astrophysical Tools. Springer. ISBN 0792360842.
  2. D. Scott Birney, David Oesper, and Guillermo Gonzalez (2006). Observational Astronomy. Cambridge University Press. ISBN 0521853702.
  3. Press, W. H.; et al. (2007). Numerical Recipes (3rd ed.). Cambridge University Press. ISBN 0521880688.
  4. Omerbashich, M. (2006). "Gauss–Vanicek spectral analysis of the Sepkoski compendium: no new life cycles". Computing in Science & Engineering 8 (4): 26–30. ISSN 1521-9615.
  5. J. Taylor and S. Hamilton (1972). "Some tests of the Vaníček Method of spectral analysis". Astrophysics and Space Science.
  6. Alistair I. Mees (2001). Nonlinear Dynamics and Statistics. Springer. ISBN 0817641637.
  7. Frank Chambers (2002). Climate Change: Critical Concepts in the Environment. Routledge. ISBN 0415278589.
  8. Hans P. A. Van Dongen; et al. (1999). "Searching for Biological Rhythms: Peak Detection in the Periodogram of Unequally Spaced Data". Journal of Biological Rhythms 14 (6): 617–620.
  9. Lomb, N. R. (1976). "Least-squares frequency analysis of unequally spaced data". Astrophysics and Space Science 39: 447–462.
  10. Scargle, J. D. (1982). "Studies in astronomical time series analysis. II: Statistical aspects of spectral analysis of unevenly spaced data". Astrophysical Journal 263: 835–853.
  11. David Brunt (1931). The Combination of Observations (2nd ed.). Cambridge University Press.
  12. Barning, F. J. M. (1963). "The numerical analysis of the light-curve of 12 Lacertae". Bulletin of the Astronomical Institutes of the Netherlands 17: 22–28.
  13. Pascal Vincent and Yoshua Bengio (2002). "Kernel Matching Pursuit" (PDF). Machine Learning 48: 165–187.
  14. Vaníček, P. (1969). "Approximate spectral analysis by least-squares fit". Astrophysics and Space Science 4: 387–391.
  15. Vaníček, P. (1971). "Further development and properties of the spectral analysis by least-squares fit". Astrophysics and Space Science 12: 10–33.
  16. Craymer, M. R. (1998). The Least Squares Spectrum, Its Inverse Transform and Autocorrelation Function: Theory and Some Applications in Geodesy. Ph.D. Dissertation, University of Toronto, Canada.
  17. D. Wells, P. Vaníček, and S. Pagiatakis (1985). "Least-Squares Spectral Analysis Revisited". TR84, University of New Brunswick.
  18. William J. Emery and Richard E. Thomson (2001). Data Analysis Methods in Physical Oceanography. Elsevier. ISBN 0444507566.
  19. Pagiatakis, S. (1999). "Stochastic significance of peaks in the least-squares spectrum". Journal of Geodesy 73: 67–78.
  20. Beard, A. G., Williams, P. J. S., Mitchell, N. J., and Muller, H. G. (2001). "A spectral climatology of planetary waves and tidal variability". Journal of Atmospheric and Solar-Terrestrial Physics 63 (9): 801–811.
  21. Omerbashich, M. (2003). Earth-Model Discrimination Method. Ph.D. dissertation, University of New Brunswick, Canada, 129 pp.
  22. Steeves, R. R. (1981). "A statistical test for significance of peaks in the least squares spectrum". Collected Papers of the Geodetic Survey, Department of Energy, Mines and Resources, Surveys and Mapping, Ottawa, Canada: 149–166.
  23. Richard A. Muller and Gordon J. MacDonald (2000). Ice Ages and Astronomical Causes: Data, Spectral Analysis, and Mechanisms. Springer. ISBN 3540437797.
  24. Timothy A. Davis and Kermit Sigmon (2005). MATLAB Primer. CRC Press. ISBN 1584885238.
  25. Darrell Williamson (1999). Discrete-Time Signal Processing: An Algebraic Approach. Springer. ISBN 1852331615.
