
How to call and implement the least squares method in Python


The linear least squares method can be understood as an extension of solving systems of equations. The difference is that when there are far fewer unknowns than equations, the system generally has no exact solution. The essence of the least squares method is to assign values to the unknowns so that the overall error is as small as possible.
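Written as a formula, for a system Ax = b with more equations than unknowns (A of size M×N, M > N), this is the standard objective

\min_{x} \lVert Ax - b \rVert_2^2 \;=\; \min_{x} \sum_{i=1}^{M} \Big( \sum_{j=1}^{N} a_{ij} x_j - b_i \Big)^2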

The least squares method is a classic and extremely common algorithm that most of us first met in high school. I have previously written about the principle of linear least squares and its Python implementation, about calling nonlinear least squares in scipy (see the supplement at the end of this article), and about least squares for sparse matrices.

The following describes the linear least squares method implemented in numpy and scipy, and compares the speed of the two.

numpy implementation

In numpy, the least squares method is provided by lstsq(a, b), which solves for x in a@x=b, where a is an M×N matrix. When b is a vector of M rows, this is simply equivalent to solving a system of linear equations. For a system Ax=b, if A is a full-rank square matrix, the solution can be written as x = A⁻¹b; otherwise the least-squares solution is x = (AᵀA)⁻¹Aᵀb.

When b is an M×K matrix, a separate x is computed for each of its columns.

There are 4 return values: the fitted x, the fitting error, the rank of matrix a, and the singular values of matrix a.

import numpy as np
np.random.seed(42)
M = np.random.rand(4,4)
x = np.arange(4)
y = M@x
xhat = np.linalg.lstsq(M, y, rcond=None)  # rcond=None selects the current default cutoff and avoids the FutureWarning
print(xhat[0])
#[0. 1. 2. 3.]
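To make the overdetermined case and the four return values concrete, here is a small sketch with made-up random data: a tall 6×3 system is solved, the result is checked against the normal-equation formula (AᵀA)⁻¹Aᵀb, and a two-column right-hand side shows that each column is solved independently.

import numpy as np

np.random.seed(0)
A = np.random.rand(6, 3)   # 6 equations, 3 unknowns: overdetermined
b = np.random.rand(6)

# Four return values: solution, residual sum of squares, rank of A, singular values of A
xhat, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# The same solution from the normal equations (A^T A)^{-1} A^T b
x_normal = np.linalg.inv(A.T @ A) @ A.T @ b
print(np.allclose(xhat, x_normal))   # True

# A two-column right-hand side: one least-squares solution per column
B = np.random.rand(6, 2)
X = np.linalg.lstsq(A, B, rcond=None)[0]
print(X.shape)   # (3, 2)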

scipy package

scipy.linalg also provides the least squares function. The function name is also lstsq, and its parameter list is

lstsq(a, b, cond=None, overwrite_a=False, overwrite_b=False, check_finite=True, lapack_driver=None)

where a and b correspond to A and b in Ax=b. Both have overwrite switches (overwrite_a, overwrite_b); setting them to True can save running time. The function also supports a finiteness check (check_finite), an option many functions in linalg have. Its return values are the same as those of the least squares function in numpy.

cond is a floating-point parameter that acts as a cutoff for singular values: singular values smaller than cond times the largest singular value are treated as zero.

lapack_driver is a string option specifying which LAPACK routine is used; the choices are 'gelsd', 'gelsy', and 'gelss'.

import scipy.linalg as sl
xhat1 = sl.lstsq(M, y)
print(xhat1[0])
# [0. 1. 2. 3.]
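A small sketch of those options, reusing M and y from the numpy example: an explicit cond cutoff is passed and the 'gelsy' driver is selected. Note that with 'gelsy' scipy does not compute singular values, so the last return value comes back as None.

import scipy.linalg as sl

# cond: relative cutoff below which singular values are treated as zero
# lapack_driver: which LAPACK routine to use ('gelsd' is the default)
x2, res, rank, sv = sl.lstsq(M, y, cond=1e-10, lapack_driver='gelsy')
print(x2)    # [0. 1. 2. 3.]
print(sv)    # None: 'gelsy' does not return singular values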

Speed comparison

Finally, let's compare the speed of the two least squares functions.

from timeit import timeit
N = 100
A = np.random.rand(N,N)
b = np.arange(N)

timeit(lambda:np.linalg.lstsq(A, b), number=10)
# 0.015487500000745058
timeit(lambda:sl.lstsq(A, b), number=10)
# 0.011151800004881807

The two are not far apart; even if the matrix dimension is increased to 500, they remain roughly the same.

N = 500
A = np.random.rand(N,N)
b = np.arange(N)

timeit(lambda:np.linalg.lstsq(A, b), number=10)
# 0.389679799991427
timeit(lambda:sl.lstsq(A, b), number=10)
# 0.35642060000100173

Supplement

Calling the nonlinear least squares method in Python

Introduction and function signature

In scipy, the purpose of the nonlinear least squares method is to find a set of parameters that minimizes the sum of squares of the error functions, which can be expressed as the following formula

min over x:  (1/2) Σᵢ ρ( fᵢ(x)² ),   subject to lb ≤ x ≤ ub

where ρ represents the loss function, which can be understood as a preprocessing applied to fᵢ(x).

scipy.optimize encapsulates the nonlinear least squares function least_squares, which is defined as

least_squares(fun, x0, jac, bounds, method, ftol, xtol, gtol, x_scale, f_scale, loss, jac_sparsity, max_nfev, verbose, args, kwargs)

Among them, fun and x0 are required parameters: fun is the function to be solved and x0 is the initial value of the function's input. Neither has a default value, so both must be supplied.

bounds is the solution interval, defaulting to (−∞, ∞). When verbose is 1, a termination report is printed; when verbose is 2, more information is printed during the run. In addition, the following parameters control the tolerances and scaling and are relatively straightforward.

Parameter    Default    Remarks
ftol         1e-8       Function tolerance
xtol         1e-8       Independent variable tolerance
gtol         1e-8       Gradient tolerance
x_scale      1.0        Characteristic scale of the variables
f_scale      1.0        Margin value of the residuals
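As a minimal sketch of how these options fit together (the residual function and bounds below are made up for illustration): tightening ftol, xtol and gtol trades runtime for precision, bounds restricts the search region, and verbose=1 prints a termination report.

import numpy as np
from scipy.optimize import least_squares

def residual(x):
    # a simple made-up residual vector
    return np.array([x[0] - 1.0, 10.0 * (x[1] - x[0]**2)])

res = least_squares(residual, x0=np.zeros(2),
                    bounds=([-2.0, -2.0], [2.0, 2.0]),   # lower and upper bounds
                    ftol=1e-10, xtol=1e-10, gtol=1e-10,  # tighter than the 1e-8 defaults
                    verbose=1)                           # print a termination report
print(res.x)   # close to [1., 1.]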

loss is the loss function, i.e. ρ in the formula above. The default is linear, and the available options include

  • linear: ρ(z) = z, the ordinary squared loss (default)

  • soft_l1: ρ(z) = 2((1 + z)^0.5 − 1), a smooth approximation of the l1 loss

  • huber: similar to soft_l1

  • cauchy: ρ(z) = ln(1 + z)

  • arctan: ρ(z) = arctan(z)
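As a sketch of how loss and f_scale interact (the data and straight-line model below are made up): when fitting a line with a few outliers, the 'soft_l1' loss down-weights the outliers compared with the default 'linear' loss.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = 2.0 * t + 1.0 + 0.1 * rng.standard_normal(t.size)
y[::10] += 20.0                      # inject a few large outliers

def residual(p):
    # residuals of the straight-line model p[0]*t + p[1]
    return p[0] * t + p[1] - y

fit_linear = least_squares(residual, x0=[1.0, 0.0])                               # ordinary least squares
fit_robust = least_squares(residual, x0=[1.0, 0.0], loss='soft_l1', f_scale=1.0)  # robust loss
print(fit_linear.x)   # pulled toward the outliers
print(fit_robust.x)   # close to the true [2.0, 1.0]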

Iteration strategy

The formula above only states the goal of the algorithm without exposing its details. To find the minimum, a search scheme has to be chosen: method selects the minimization scheme, with three options, defaulting to trf

  • trf: Trust Region Reflective, the trust-region reflective algorithm

  • dogbox: the trust-region dogleg algorithm

  • lm: the Levenberg-Marquardt algorithm

All three methods are extensions of the trust-region approach. The idea of trust-region optimization is essentially to move from iterating on a single point to iterating on a region. Since the purpose of this article is to introduce the nonlinear least squares function packaged in scipy, only a brief introduction to the principle is given.

min over s:  m_k(s) = f(x_k) + g_kᵀ s + (1/2) sᵀ B_k s,   subject to ‖s‖ ≤ r

where r is the trust radius. Assuming the objective function can be approximated by a linear or quadratic function within this neighborhood, the minimum point s_k of the region can be obtained from the quadratic model. The trust region is then re-centered on this minimum point and the optimization continues on the corresponding interval.
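In practice, choosing among the three comes down to the method argument. A small sketch (the residual function below is made up for illustration): 'lm' wraps MINPACK's Levenberg-Marquardt implementation and does not accept bounds, while 'trf' and 'dogbox' do.

import numpy as np
from scipy.optimize import least_squares

def residual(x):
    return np.array([x[0] - 1.0, 10.0 * (x[1] - x[0]**2)])

# 'lm' (Levenberg-Marquardt) does not accept bounds
fit_lm = least_squares(residual, x0=np.zeros(2), method='lm')

# 'trf' (the default) and 'dogbox' do
fit_trf = least_squares(residual, x0=np.zeros(2), method='trf',
                        bounds=([-2.0, -2.0], [2.0, 2.0]))
print(fit_lm.x, fit_trf.x)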


Jacobian matrix

Having understood the trust-region method, it becomes clear how important the Jacobian matrix is in the numerical solution; how to compute the Jacobian is the next question. The jac parameter specifies the method for computing the Jacobian, with three main options: '2-point', based on two points; '3-point', based on three points; and 'cs', based on complex steps. In general, the three-point scheme is more accurate than the two-point scheme, but it is also about twice as slow.

In addition, a custom function can be passed in to compute the Jacobian matrix, as sketched below.
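A minimal sketch (the exponential-decay model and data are made up for illustration): the callable passed as jac returns the m×n matrix of partial derivatives of each residual with respect to each parameter.

import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 4, 30)
y = 2.5 * np.exp(-1.3 * t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)

def residual(p):
    # residuals of the model p[0] * exp(-p[1] * t)
    return p[0] * np.exp(-p[1] * t) - y

def jacobian(p):
    # analytic Jacobian: one row per residual, one column per parameter
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(-p[1] * t)               # d r_i / d p0
    J[:, 1] = -p[0] * t * np.exp(-p[1] * t)   # d r_i / d p1
    return J

fit_fd = least_squares(residual, x0=[1.0, 1.0], jac='3-point')   # finite-difference Jacobian
fit_an = least_squares(residual, x0=[1.0, 1.0], jac=jacobian)    # analytic Jacobian
print(fit_fd.x, fit_an.x)   # both close to [2.5, 1.3]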

Test

Finally, let's test the nonlinear least squares method.

import numpy as np
from scipy.optimize import least_squares

def test(xs):
    # scalar residual; least_squares treats it as a single residual and minimizes its square
    _sum = 0.0
    for i in range(len(xs)):
        _sum = _sum + (1 - np.cos((xs[i]*i)/5)*(i+1))
    return _sum

x0 = np.random.rand(5)
ret = least_squares(test, x0)
msg = f"最小值" + ", ".join([f"{x:.4f}" for x in ret.x])
msg += f"\nf(x)={ret.fun[0]:.4f}"
print(msg)
'''
Minimum: 0.9557, 0.5371, 1.5714, 1.6931, 5.2294
f(x)=0.0000
'''


