Detailed explanation of gradient descent algorithm in Python


Gradient descent is one of the most widely used optimization algorithms in machine learning. Python is a popular language for data science, and its numerical libraries make the algorithm straightforward to implement. This article introduces the gradient descent algorithm in Python in detail, covering both the underlying concepts and a working implementation.

1. Definition of Gradient Descent
Gradient descent is an iterative algorithm for optimizing the parameters of a function. In machine learning, it is most often used to minimize a loss function, so it can be thought of simply as a method for finding the minimum of a function. Gradient descent can be applied to any model whose loss is differentiable with respect to its parameters, including linear regression, logistic regression, neural networks, and more.

2. The principle of gradient descent
The basic idea of the gradient descent algorithm is to find a minimum of a function of the model parameters (the parameters are the variables we need to optimize). To do this, we compute the derivative (gradient) of the function with respect to those parameters. The gradient gives the slope of the function at the current parameter values and points in the direction of steepest ascent, so each step moves the parameters in the opposite direction, with the step size controlled by the learning rate: theta := theta - learning_rate * gradient. For example, minimizing f(x) = x^2 with learning rate 0.25 starting from x = 4 gives x = 4 - 0.25*8 = 2, then 2 - 0.25*4 = 1, and so on toward the minimum at x = 0. When the gradient reaches zero, we are at a stationary point, typically a local minimum. In practical applications we do not need to guarantee that the global minimum is found; a good local minimum is usually enough.

3. Steps of gradient descent algorithm
1. Initialize the parameters. Set the parameters to be optimized to an initial value, for example all zeros or small random numbers.
2. Compute the loss. Evaluate the loss function using the current parameters.
3. Compute the gradient. Calculate the gradient of the loss function with respect to the parameters; the gradient gives the slope of the loss at the current parameter values.
4. Update the parameters. Move the parameters a small step in the direction opposite to the gradient, scaled by the learning rate, which brings the loss closer to its minimum.
5. Repeat steps 2 to 4 until a stopping condition is met, for example a maximum number of iterations or the change in loss falling below a tolerance. A minimal sketch of this loop on a toy problem is shown below.
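The following is a minimal, self-contained sketch of these five steps. The quadratic toy loss f(w) = (w - 3)^2, the learning rate and the tolerance are illustrative choices, not part of the Iris example that follows later in this article:

# Minimal sketch of the five steps on the toy loss f(w) = (w - 3)**2
w = 0.0                      # step 1: initialize the parameter
alpha = 0.1                  # learning rate
tolerance = 1e-8
prev_loss = float('inf')
for i in range(1000):        # step 5: repeat until a stopping condition is met
    loss = (w - 3) ** 2      # step 2: compute the loss
    grad = 2 * (w - 3)       # step 3: compute the gradient
    w = w - alpha * grad     # step 4: update the parameter
    if abs(prev_loss - loss) < tolerance:   # stop when the loss barely changes
        break
    prev_loss = loss
print(w)                     # close to 3, the minimizer of the toy loss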

4. Python implements gradient descent (batch gradient descent)
Next, we will implement the batch gradient descent algorithm in Python. Batch gradient descent is the variant of gradient descent that computes the gradient over the entire training set in every iteration, which assumes we have enough memory to process all training samples at once.

Data preparation
We use the Iris dataset, built into sklearn.datasets, as the sample data for our batch gradient descent implementation. Below are the Python packages we need and the code to load the dataset:

from sklearn.datasets import load_iris
import numpy as np

iris = load_iris()
X = iris.data
y = iris.target
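
For reference (a quick check added here, not part of the original code), the Iris feature matrix X contains 150 samples with 4 features each, and y holds the integer class labels 0, 1 and 2:

print(X.shape)        # (150, 4): 150 samples, 4 features
print(np.unique(y))   # [0 1 2]: three class labels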

Data preprocessing
Before running batch gradient descent, we need to standardize the data. This is done by computing the mean and standard deviation of each feature, then subtracting the mean and dividing by the standard deviation, so that all features are on a comparable scale.

mean = np.mean(X, axis=0)   # per-feature mean
std = np.std(X, axis=0)     # per-feature standard deviation
X = (X - mean) / std        # standardize each feature
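
Equivalently, this standardization step could be done with sklearn's StandardScaler. This is just an optional alternative to the manual version above:

from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(X)   # subtracts the mean and divides by the std of each feature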

Define the loss function
We will use the squared error as the loss function of the model: the cost is the sum of squared prediction errors divided by 2m, where m is the number of samples. In code:

def loss_function(X, y, theta):
    m = len(y)                                               # number of samples
    predictions = np.dot(X, theta)                           # model predictions X @ theta
    cost = (1 / (2 * m)) * np.sum((predictions - y) ** 2)    # halved mean squared error
    return cost
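
As a quick sanity check (an illustrative usage added here, not from the original article), we can evaluate the loss at the all-zeros starting point. The predictions are then all zero, so the cost reduces to the sum of the squared targets divided by 2m, which for the Iris labels works out to roughly 0.83:

initial_theta = np.zeros(X.shape[1])
print(loss_function(X, y, initial_theta))   # np.sum(y**2) / (2 * len(y)), about 0.83 for this dataset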

Define the training function
Next we define the function that implements the batch gradient descent algorithm. Besides the final parameters, it records the cost and the parameter values at every iteration so the optimization can be inspected afterwards.

def gradient_descent(X, y, theta, learning_rate, num_iterations):
    m = len(y)
    cost_history = np.zeros(num_iterations)                      # cost recorded at every iteration
    theta_history = np.zeros((num_iterations, theta.shape[0]))   # parameters recorded at every iteration
    for i in range(num_iterations):
        prediction = np.dot(X, theta)                 # predictions with the current parameters
        gradient = (1 / m) * X.T.dot(prediction - y)  # gradient of the loss with respect to theta
        theta = theta - learning_rate * gradient      # gradient descent update step
        theta_history[i, :] = theta
        cost_history[i] = loss_function(X, y, theta)

    return theta, cost_history, theta_history

Run the training function
We now run the training function on the standardized Iris data and print the final parameter values and the final cost.

theta = np.zeros(X.shape[1])   # initialize parameters to zero
learning_rate = 0.1
num_iterations = 1000
theta, cost_history, theta_history = gradient_descent(X, y, theta, learning_rate, num_iterations)

print('Theta: ', theta)
print('Final cost: ', cost_history[-1])
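
To check that training actually converged, one optional step (a sketch assuming matplotlib is installed; it is not required by the rest of the article) is to plot the recorded cost history, which should decrease and flatten out:

import matplotlib.pyplot as plt

plt.plot(cost_history)
plt.xlabel('Iteration')
plt.ylabel('Cost')
plt.title('Batch gradient descent on the Iris data')
plt.show()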

5. Summary
In this article, we explained the gradient descent algorithm in Python, including both its concepts and an implementation. We first introduced the definition and principle of the algorithm, then described its steps in detail, and finally implemented batch gradient descent and ran it on a sample dataset to obtain the trained parameters and their cost.

Gradient descent is an essential piece of machine learning knowledge, and Python is one of the most widely used programming languages in data science, so learning how to implement gradient descent in Python is well worth the effort. I hope this article helps you do exactly that.
