
Detecting and handling multicollinearity issues in regression using Python

王林
Release: 2023-08-18 15:05:20

Multicollinearity refers to a high degree of intercorrelation among the independent variables in a regression model. It can make the estimated coefficients unstable and unreliable, which makes it difficult to judge how each independent variable affects the dependent variable. In that case the multicollinearity has to be detected and handled, combining several procedures and their outputs, which we explain step by step below.

Method

  • Detecting multicollinearity

  • Dealing with multicollinearity

Algorithm

Step 1 − Import the necessary libraries

Step 2 − Load the data into a pandas DataFrame

Step 3 − Create a correlation matrix of the predictor variables

Step 4 − Create a heat map of the correlation matrix to visualize the correlations

Step 5 − Calculate the variance inflation factor (VIF) for each predictor

Step 6 − Determine which predictors have a high VIF

Step 7 − Remove those predictors from the model

Step 8 − Rerun the regression model

Step 9 − Check the VIF values again (a condensed sketch of Steps 6 to 9 follows this list)
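The examples later in the article focus on Steps 1 to 5, while the last steps are only described in prose. The following is a minimal sketch of Steps 6 to 9, assuming a hypothetical mydata.csv with predictor columns independent_var1 to independent_var3 and a dependent_var column (the same placeholder names used in Example-1 below):

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X):
    # VIF of every column of the predictor matrix X
    return pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
        index=X.columns,
    )

# Hypothetical file and column names, used only to illustrate the workflow
data = pd.read_csv("mydata.csv")
X = data[['independent_var1', 'independent_var2', 'independent_var3']]
y = data['dependent_var']

# Step 6: find the predictor with the highest VIF
vif = vif_table(X)
print(vif)

# Step 7: remove it if it exceeds the usual threshold of 5
if vif.max() > 5:
    X = X.drop(columns=[vif.idxmax()])

# Step 8: rerun the regression on the reduced predictor set
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())

# Step 9: check the VIF values of the remaining predictors again
print(vif_table(X))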

Method 1: Detecting multicollinearity

Use the corr() function from the pandas package to compute the correlation matrix of the independent variables, and use the seaborn library to draw a heat map that visualizes this matrix. Use the variance_inflation_factor() function from the statsmodels package to determine the variance inflation factor (VIF) of each independent variable. A VIF greater than 5 or 10 indicates high multicollinearity.
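Example-1 below only covers the VIF part of this step; the correlation matrix and heat map can be sketched as follows, assuming the same hypothetical mydata.csv and column names used in that example:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Load the data and keep only the independent variables (placeholder names)
data = pd.read_csv("mydata.csv")
X = data[['independent_var1', 'independent_var2', 'independent_var3']]

# Correlation matrix of the predictors
corr_matrix = X.corr()

# Heat map of the correlation matrix; pairs with a correlation close to 1 or -1
# are candidates for multicollinearity
sns.heatmap(corr_matrix, annot=True, cmap='coolwarm')
plt.show()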

Example-1

In this code, once the data has been loaded into a pandas DataFrame, the independent variables are selected into X. To calculate the VIF for each predictor variable, we use the variance_inflation_factor() function from the statsmodels package. In the final step, we store the VIF values together with the names of the predictors in a new pandas DataFrame and display the result. Running this code produces a table with the variable name and VIF value of each predictor. When a variable has a high VIF value (above 5 or 10, depending on the situation), it should be analyzed further.

import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Load data into a pandas DataFrame
data = pd.read_csv("mydata.csv")

# Select independent variables
X = data[['independent_var1', 'independent_var2', 'independent_var3']]

# Calculate VIF for each independent variable
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif["features"] = X.columns

# Print the VIF results
print(vif)

Output

   VIF Factor          features
0    3.068988  independent_var1
1    3.870567  independent_var2
2    3.843753  independent_var3

Method 2: Dealing with multicollinearity

Exclude one or more strongly correlated independent variables from the model. Principal component analysis (PCA) can be used to combine highly correlated independent variables into a single variable. Regularization methods such as ridge regression or lasso regression can be used to reduce the impact of strongly correlated independent variables on the model coefficients. The following example code uses these approaches to identify and address multicollinearity −

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Load the data into a pandas DataFrame
data = pd.read_csv('data.csv')

# Calculate the correlation matrix
corr_matrix = data.corr()

# Create a heatmap to visualize the correlation matrix
sns.heatmap(corr_matrix, annot=True, cmap='coolwarm')
plt.show()

# Check the VIF for each independent variable
X_vif = data.drop('dependent_var', axis=1)
for i in range(X_vif.shape[1]):
    vif = variance_inflation_factor(X_vif.values, i)
    print('VIF for variable {}: {:.2f}'.format(X_vif.columns[i], vif))

# Use PCA to combine the highly correlated independent variables into one component
pca = PCA(n_components=1)
data['pca'] = pca.fit_transform(data[['var1', 'var2']])

# Remove the original highly correlated independent variables
data = data.drop(['var1', 'var2'], axis=1)

# Use Ridge regression to reduce the impact of correlated independent variables on the coefficients
X = data.drop('dependent_var', axis=1)
y = data['dependent_var']
ridge = Ridge(alpha=0.1)
ridge.fit(X, y)

Running this code prints the VIF value of each independent variable and displays the correlation heat map; no model performance metrics are printed to the console.

In this example, the data is first loaded into a pandas DataFrame, the correlation matrix is calculated, and a heat map is created to display it. After checking the VIF of each independent variable, we combine the highly correlated independent variables into a single variable with principal component analysis and then remove the original columns. Finally, ridge regression is used to reduce the impact of the remaining correlated independent variables on the model coefficients.
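Lasso regression is mentioned above as an alternative regularization method but does not appear in the example. A minimal sketch, assuming the same hypothetical data.csv layout as the ridge example, could look like this:

import pandas as pd
from sklearn.linear_model import Lasso

# Same hypothetical data layout as the ridge example above
data = pd.read_csv('data.csv')
X = data.drop('dependent_var', axis=1)
y = data['dependent_var']

# Lasso adds an L1 penalty; it can shrink the coefficients of redundant,
# highly correlated predictors all the way to zero (alpha sets the penalty strength)
lasso = Lasso(alpha=0.1)
lasso.fit(X, y)

# Predictors whose coefficients were driven to zero are candidates for removal
print(dict(zip(X.columns, lasso.coef_)))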

import pandas as pd

#create DataFrame
df = pd.DataFrame({'rating': [90, 85, 82, 18, 14, 90, 16, 75, 87, 86],
         'points': [22, 10, 34, 46, 27, 20, 12, 15, 14, 19],
         'assists': [1, 3, 5, 6, 5, 7, 6, 9, 9, 5],
         'rebounds': [11, 8, 10, 6, 3, 4, 4, 10, 10, 7]})

#view DataFrame
print(df)

Output

   rating  points  assists  rebounds
0      90      22        1        11
1      85      10        3         8
2      82      34        5        10
3      18      46        6         6
4      14      27        5         3
5      90      20        7         4
6      16      12        6         4
7      75      15        9        10
8      87      14        9        10
9      86      19        5         7

This Python program uses the pandas package to build a DataFrame, a tabular data structure. It contains four columns: rating, points, assists and rebounds. The library is imported at the beginning of the code and referred to as "pd" afterwards for brevity. The DataFrame itself is constructed by the pd.DataFrame() call in the second statement.

The print() call in the last statement writes the DataFrame to the console. The lists of values for each column are passed to pd.DataFrame() as the values of a dictionary whose keys become the column names. The information is displayed in table form, with the statistics (rating, points, assists and rebounds) arranged in columns and each row representing one player.
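To connect this DataFrame back to the topic of the article, the VIF check from Method 1 can be applied to it. A short sketch, continuing from the code above and assuming rating is the dependent variable while the other three columns serve as predictors:

from statsmodels.stats.outliers_influence import variance_inflation_factor

# Treat rating as the dependent variable; the remaining columns are the predictors
X = df[['points', 'assists', 'rebounds']]

# VIF of each predictor, as in Example-1
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif["features"] = X.columns
print(vif)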

Conclusion

In summary, multicollinearity arises when two or more predictor variables in a model are strongly correlated with each other. It makes the model results harder to interpret, because it becomes difficult to determine how each individual predictor variable affects the outcome variable. Detecting it with correlation matrices and VIF values, and handling it by removing variables, combining them with PCA, or applying regularization, keeps the regression coefficients interpretable.


Source: tutorialspoint.com