This chapter describes routines for performing least squares fits to experimental data using linear combinations of functions. The data may be weighted or unweighted. For weighted data the functions compute the best fit parameters and their associated covariance matrix. For unweighted data the covariance matrix is estimated from the scatter of the points, giving a variance-covariance matrix. The functions are divided into separate versions for simple one- or two-parameter regression and multiple-parameter fits. The functions are declared in the header file 'gsl_fit.h'.
The functions described in this section can be used to perform least-squares fits to a straight line model, Y = c_0 + c_1 X. For weighted data the best-fit is found by minimizing the weighted sum of squared residuals, \chi^2,
\chi^2 = \sum_i w_i (y_i - (c_0 + c_1 x_i))^2
for the parameters c_0, c_1. For unweighted data the sum is computed with w_i = 1.
The covariance matrix for the parameters (c_0, c_1) is estimated from the weighted data and returned via the parameters cov00, cov01 and cov11. The weighted sum of squares of the residuals from the best-fit line, \chi^2, is returned in chisq.
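For unweighted data the corresponding function gsl_fit_linear takes the same arguments without the weight array. The following fragment is a minimal sketch of its use; the arrays xdata and ydata and the length n are placeholders for the caller's own data,

#include <stdio.h>
#include <gsl/gsl_fit.h>

/* Sketch: fit Y = c_0 + c_1 X to unweighted data.  The arrays
   xdata[] and ydata[] and the length n are placeholders for the
   caller's own data. */
void
fit_line (const double *xdata, const double *ydata, size_t n)
{
  double c0, c1, cov00, cov01, cov11, sumsq;

  /* Strides of 1 mean the data are stored contiguously. */
  gsl_fit_linear (xdata, 1, ydata, 1, n,
                  &c0, &c1, &cov00, &cov01, &cov11, &sumsq);

  printf ("best fit: Y = %g + %g X (sum of squares %g)\n",
          c0, c1, sumsq);
}

For unweighted data the final output argument is the plain sum of squared residuals rather than \chi^2.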
The functions described in this section can be used to perform least-squares fits to a straight line model without a constant term, Y = c_1 X. For weighted data the best-fit is found by minimizing the weighted sum of squared residuals, \chi^2,
\chi^2 = \sum_i w_i (y_i - c_1 x_i)^2
for the parameter c_1. For unweighted data the sum is computed with w_i = 1.
The variance of the parameter c_1 is estimated from the weighted data and returned via the parameter cov11. The weighted sum of squares of the residuals from the best-fit line, \chi^2, is returned in chisq.
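For illustration, the weighted fit through the origin is performed by gsl_fit_wmul. A minimal sketch follows, where the arrays x, w and y and the length n stand in for the caller's data,

#include <stdio.h>
#include <math.h>
#include <gsl/gsl_fit.h>

/* Sketch: fit Y = c_1 X (no constant term) to weighted data.
   x[], w[] and y[] are placeholders for the caller's arrays. */
void
fit_through_origin (const double *x, const double *w,
                    const double *y, size_t n)
{
  double c1, cov11, chisq;

  gsl_fit_wmul (x, 1, w, 1, y, 1, n, &c1, &cov11, &chisq);

  printf ("best fit: Y = %g X\n", c1);
  /* One standard-deviation error on c_1 from its variance. */
  printf ("c1 error: %g\n", sqrt (cov11));
}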
The functions described in this section perform least-squares fits to a general linear model, y = X c where y is a vector of n observations, X is an n by p matrix of predictor variables, and c are the p unknown best-fit parameters, which are to be estimated.
The best-fit is found by minimizing the weighted sums of squared residuals, \chi^2,
\chi^2 = (y - X c)^T W (y - X c)
with respect to the parameters c. The weights are specified by the diagonal elements of the n by n matrix W. For unweighted data W is replaced by the identity matrix.
This formulation can be used for fits to any number of functions and/or variables by preparing the n-by-p matrix X appropriately. For example, to fit to a p-th order polynomial in x, use the following matrix,
X_{ij} = x_i^j
where the index i runs over the observations and the index j runs from 0 to p-1.
To fit to a set of p sinusoidal functions with fixed frequencies \omega_1, \omega_2, ..., \omega_p, use,
X_{ij} = sin(\omega_j x_i)
To fit to p independent variables x_1, x_2, ..., x_p, use,
X_{ij} = x_j(i)
where x_j(i) is the i-th value of the predictor variable x_j.
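For example, the sinusoidal matrix above could be filled element by element with gsl_matrix_set. The following is a minimal sketch in which the arrays x (the observation points) and omega (the fixed frequencies) are assumed to be supplied by the caller,

#include <math.h>
#include <gsl/gsl_matrix.h>

/* Sketch: fill the n-by-p predictor matrix X_{ij} = sin(omega_j x_i)
   for the sinusoidal basis described above.  The arrays x[] and
   omega[] are placeholders for the caller's data and frequencies. */
void
build_design_matrix (gsl_matrix *X, const double *x,
                     const double *omega, size_t n, size_t p)
{
  size_t i, j;

  for (i = 0; i < n; i++)
    for (j = 0; j < p; j++)
      gsl_matrix_set (X, i, j, sin (omega[j] * x[i]));
}

The polynomial and multiple-variable cases are filled in the same way, changing only the expression stored in each element.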
The functions described in this section are declared in the header file 'gsl_multifit.h'.
The solution of the general linear least-squares system requires an additional working space for intermediate results, such as the singular value decomposition of the matrix X.
The best-fit is found by singular value decomposition of the matrix X using the preallocated workspace provided in work. The modified Golub-Reinsch SVD algorithm is used, with column scaling to improve the accuracy of the singular values. Any components which have zero singular value (to machine precision) are discarded from the fit.
This function computes the best-fit parameters c of the model y = X c for the observations y and the matrix of predictor variables X. The covariance matrix of the model parameters cov is estimated from the weighted data. The weighted sum of squares of the residuals from the best-fit, \chi^2, is returned in chisq.
The best-fit is found by singular value decomposition of the matrix X using the preallocated workspace provided in work. Any components which have zero singular value (to machine precision) are discarded from the fit.
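As a minimal sketch of how these pieces fit together for an unweighted fit, assuming the caller has already prepared the n-by-p matrix X, the n-vector y, a p-vector c and a p-by-p matrix cov,

#include <gsl/gsl_multifit.h>

/* Sketch: solve y = X c for the p best-fit parameters c, given an
   n-by-p matrix X and an n-vector y prepared by the caller. */
void
solve_linear_system (const gsl_matrix *X, const gsl_vector *y,
                     gsl_vector *c, gsl_matrix *cov, double *chisq,
                     size_t n, size_t p)
{
  /* Allocate the workspace needed for the SVD of X. */
  gsl_multifit_linear_workspace *work = gsl_multifit_linear_alloc (n, p);

  gsl_multifit_linear (X, y, c, cov, chisq, work);

  gsl_multifit_linear_free (work);
}

The weighted version gsl_multifit_wlinear is used in the same pattern, with an additional n-vector of weights, as in the example program later in this chapter.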
The following program computes a least squares straight-line fit to a simple (fictitious) dataset, and outputs the best-fit line and its associated one standard-deviation error bars.
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_fit.h>

int
main (void)
{
  int i, n = 4;
  double x[4] = { 1970, 1980, 1990, 2000 };
  double y[4] = { 12, 11, 14, 13 };
  double w[4] = { 0.1, 0.2, 0.3, 0.4 };

  double c0, c1, cov00, cov01, cov11, chisq;

  gsl_fit_wlinear (x, 1, w, 1, y, 1, n,
                   &c0, &c1, &cov00, &cov01, &cov11,
                   &chisq);

  printf ("# best fit: Y = %g + %g X\n", c0, c1);
  printf ("# covariance matrix:\n");
  printf ("# [ %g, %g\n# %g, %g]\n",
          cov00, cov01, cov01, cov11);
  printf ("# chisq = %g\n", chisq);

  for (i = 0; i < n; i++)
    printf ("data: %g %g %g\n",
            x[i], y[i], 1/sqrt(w[i]));

  printf ("\n");

  for (i = -30; i < 130; i++)
    {
      double xf = x[0] + (i/100.0) * (x[n-1] - x[0]);
      double yf, yf_err;

      gsl_fit_linear_est (xf, c0, c1,
                          cov00, cov01, cov11,
                          &yf, &yf_err);

      printf ("fit: %g %g\n", xf, yf);
      printf ("hi : %g %g\n", xf, yf + yf_err);
      printf ("lo : %g %g\n", xf, yf - yf_err);
    }

  return 0;
}
The following commands extract the data from the output of the program and display it using the GNU plotutils graph utility,
$ ./demo > tmp
$ more tmp
# best fit: Y = -106.6 + 0.06 X
# covariance matrix:
# [ 39602, -19.9
# -19.9, 0.01]
# chisq = 0.8

$ for n in data fit hi lo ;
   do
     grep "^$n" tmp | cut -d: -f2 > $n ;
   done

$ graph -T X -X x -Y y -y 0 20 -m 0 -S 2 -Ie data
     -S 0 -I a -m 1 fit -m 2 hi -m 2 lo
The next program performs a quadratic fit y = c_0 + c_1 x + c_2 x^2 to a weighted dataset using the generalised linear fitting function gsl_multifit_wlinear. The model matrix X for a quadratic fit is given by,
X = [ 1 , x_0 , x_0^2 ;
      1 , x_1 , x_1^2 ;
      1 , x_2 , x_2^2 ;
      ... , ... , ... ]
where the column of ones corresponds to the constant term c_0. The two remaining columns correspond to the terms c_1 x and c_2 x^2.
The program reads n lines of data in the format (x, y, err) where err is the error (standard deviation) in the value y.
#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_multifit.h>

int
main (int argc, char **argv)
{
  int i, n;
  double xi, yi, ei, chisq;
  gsl_matrix *X, *cov;
  gsl_vector *y, *w, *c;

  if (argc != 2)
    {
      fprintf (stderr, "usage: fit n < data\n");
      exit (-1);
    }

  n = atoi (argv[1]);

  X = gsl_matrix_alloc (n, 3);
  y = gsl_vector_alloc (n);
  w = gsl_vector_alloc (n);

  c = gsl_vector_alloc (3);
  cov = gsl_matrix_alloc (3, 3);

  for (i = 0; i < n; i++)
    {
      int count = fscanf (stdin, "%lg %lg %lg",
                          &xi, &yi, &ei);

      if (count != 3)
        {
          fprintf (stderr, "error reading file\n");
          exit (-1);
        }

      printf ("%g %g +/- %g\n", xi, yi, ei);

      gsl_matrix_set (X, i, 0, 1.0);
      gsl_matrix_set (X, i, 1, xi);
      gsl_matrix_set (X, i, 2, xi*xi);

      gsl_vector_set (y, i, yi);
      gsl_vector_set (w, i, 1.0/(ei*ei));
    }

  {
    gsl_multifit_linear_workspace * work
      = gsl_multifit_linear_alloc (n, 3);
    gsl_multifit_wlinear (X, w, y, c, cov,
                          &chisq, work);
    gsl_multifit_linear_free (work);
  }

#define C(i) (gsl_vector_get(c,(i)))
#define COV(i,j) (gsl_matrix_get(cov,(i),(j)))

  {
    printf ("# best fit: Y = %g + %g X + %g X^2\n",
            C(0), C(1), C(2));

    printf ("# covariance matrix:\n");
    printf ("[ %+.5e, %+.5e, %+.5e \n",
            COV(0,0), COV(0,1), COV(0,2));
    printf ("  %+.5e, %+.5e, %+.5e \n",
            COV(1,0), COV(1,1), COV(1,2));
    printf ("  %+.5e, %+.5e, %+.5e ]\n",
            COV(2,0), COV(2,1), COV(2,2));
    printf ("# chisq = %g\n", chisq);
  }

  return 0;
}
A suitable set of data for fitting can be generated using the following program. It outputs a set of points with gaussian errors from the curve y = e^x in the region 0 < x < 2.
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

int
main (void)
{
  double x;
  const gsl_rng_type * T;
  gsl_rng * r;

  gsl_rng_env_setup ();

  T = gsl_rng_default;
  r = gsl_rng_alloc (T);

  for (x = 0.1; x < 2; x += 0.1)
    {
      double y0 = exp (x);
      double sigma = 0.1 * y0;
      double dy = gsl_ran_gaussian (r, sigma);

      printf ("%g %g %g\n", x, y0 + dy, sigma);
    }

  return 0;
}
The data can be prepared by running the resulting executable program,
$ ./generate > exp.dat
$ more exp.dat
0.1 0.97935 0.110517
0.2 1.3359 0.12214
0.3 1.52573 0.134986
0.4 1.60318 0.149182
0.5 1.81731 0.164872
0.6 1.92475 0.182212
....
To fit the data use the previous program, with the number of data points given as the first argument. In this case there are 19 data points.
$ ./fit 19 < exp.dat
0.1 0.97935 +/- 0.110517
0.2 1.3359 +/- 0.12214
...
# best fit: Y = 1.02318 + 0.956201 X + 0.876796 X^2
# covariance matrix:
[ +1.25612e-02, -3.64387e-02, +1.94389e-02
  -3.64387e-02, +1.42339e-01, -8.48761e-02
  +1.94389e-02, -8.48761e-02, +5.60243e-02 ]
# chisq = 23.0987
The parameters of the quadratic fit match the coefficients of the expansion of e^x, taking into account the errors on the parameters and the O(x^3) difference between the exponential and quadratic functions for the larger values of x. The errors on the parameters are given by the square-root of the corresponding diagonal elements of the covariance matrix. The chi-squared per degree of freedom is 23.1/(19 - 3) = 1.4, indicating a reasonable fit to the data.
A summary of formulas and techniques for least squares fitting can be found in the "Statistics" chapter of the Annual Review of Particle Physics prepared by the Particle Data Group.
The Review of Particle Physics is available online from the Particle Data Group website.
The tests used to prepare these routines are based on the NIST Statistical Reference Datasets. The datasets and their documentation are available from NIST at the following website,
http://www.nist.gov/itl/div898/strd/index.html.