
Detailed explanation of classification evaluation indicators and regression evaluation indicators and Python code implementation

零到壹度
Release: 2018-04-16 11:11:17

This article gives a detailed explanation of classification evaluation metrics and regression evaluation metrics, together with Python code implementations. It is shared here for reference.

1. Concept

Performance evaluation metrics fall into two main categories:
1) Classification metrics, which apply to discrete (class-valued) predictions. They include accuracy, precision, recall, the F score, the P-R curve, and the ROC curve with its AUC value.
2) Regression metrics, which apply to continuous (real-valued) predictions. They include the explained variance score (explained_variance_score), mean absolute error MAE (mean_absolute_error), mean squared error MSE (mean_squared_error), root mean squared error RMSE, cross-entropy loss (log loss), and the R squared value (coefficient of determination, r2_score).

1.1. Premise

Assume there are only two classes, positive and negative. The class of interest is usually taken as the positive class and everything else as the negative class (so multi-class problems can also be reduced to two classes).
The confusion matrix is as follows:

Actual class | Predicted positive | Predicted negative | Total
Positive     | TP                 | FN                 | P (actually positive)
Negative     | FP                 | TN                 | N (actually negative)

Each cell in the table follows a two-letter pattern: the first letter says whether the prediction is right (T) or wrong (F), and the second gives the predicted class. For example, TP (True Positive) is a positive sample correctly predicted as positive; FN (False Negative) is a positive sample wrongly predicted as negative.
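As a quick sanity check, the four cells can be read off directly with scikit-learn's confusion_matrix; a minimal sketch with made-up toy labels:

from sklearn.metrics import confusion_matrix

# Made-up toy labels: 1 = positive class, 0 = negative class
y_true = [1, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1]

# For binary labels [0, 1], sklearn returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP:", tp, "FN:", fn, "FP:", fp, "TN:", tn)  # TP: 2 FN: 1 FP: 1 TN: 1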

2. Evaluation indicators (performance measurement)

2.1. Classification evaluation indicators

2.1.1 Value metrics: accuracy, precision, recall, F score

Metric    | Definition | Formula
Accuracy  | The ratio of correctly classified samples to the total number of samples (the proportion of all text messages classified correctly) | accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision | Among samples predicted positive, the fraction that are truly positive (of the messages flagged as spam, how many really are spam) | precision = TP / (TP + FP)
Recall    | Among truly positive samples, the fraction predicted positive (of all real spam messages, how many were found) | recall = TP / (TP + FN)
F score   | The weighted harmonic mean of precision and recall | F_beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall)

1. Precision is also often called the precision rate, and recall the recall rate.
2. The most commonly used variant is F1 (beta = 1): F1 = 2 * precision * recall / (precision + recall).

Python 3.6 code implementation:

# Use the metric functions from the sklearn library
from sklearn import metrics
from sklearn.metrics import accuracy_score

# Classification results
y_pred = [0, 1, 0, 0]
y_true = [0, 1, 1, 1]

print("accuracy_score:", accuracy_score(y_true, y_pred))
print("precision_score:", metrics.precision_score(y_true, y_pred))
print("recall_score:", metrics.recall_score(y_true, y_pred))
print("f1_score:", metrics.f1_score(y_true, y_pred))
print("f0.5_score:", metrics.fbeta_score(y_true, y_pred, beta=0.5))
print("f2_score:", metrics.fbeta_score(y_true, y_pred, beta=2.0))
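For the toy labels above (y_true = [0, 1, 1, 1], y_pred = [0, 1, 0, 0]) the confusion-matrix cells are TP = 1, FN = 2, FP = 0, TN = 1, so accuracy = 2/4 = 0.5, precision = 1/1 = 1.0, recall = 1/3 ≈ 0.33, and F1 = 2 * 1.0 * (1/3) / (1.0 + 1/3) = 0.5, matching the script's output.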

2.1.2 Curve metrics: the P-R curve, the ROC curve, and the AUC value
1) P-R curve

Steps:
1. Sort the samples by their "score" values from high to low and use each value in turn as the threshold;
2. For each threshold, test samples whose score is greater than or equal to the threshold are treated as positive examples and the rest as negative examples, yielding one (precision, recall) point per threshold.
e.g.

Setting 0.9 as the threshold makes the first test sample a predicted positive and samples 2, 3, 4 and 5 predicted negatives. At this threshold, precision = TP / (TP + FP) and recall = TP / (TP + FN) are computed from the resulting counts; repeating the computation for every threshold traces out the P-R curve.

Python pseudocode sketch:

# precision and recall are computed as above; the focus here is plotting
# matplotlib, Python's plotting library
import matplotlib.pyplot as plt
# numpy, used mainly for matrix operations
import numpy as np
from sklearn.metrics import precision_recall_curve
# Importing the iris data and training the classifier: see the previous post
...
# Add 800 noise features to make the figure more complex:
# column-concatenate the 150x800 noise matrix with the 150x4 iris data
X = np.c_[X, np.random.RandomState(0).randn(n_samples, 200 * n_features)]
# Compute the precision and recall arrays for each class
precision = dict()
recall = dict()
for i in range(n_classes):
    # Metrics for each of the three iris classes; _ discards the thresholds
    precision[i], recall[i], _ = precision_recall_curve(y_test[:, i], y_score[:, i])
# Plot the curves
plt.clf()
for i in range(n_classes):
    plt.plot(recall[i], precision[i])
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.show()

Running the above code produces the P-R curve for the iris data set.

2) ROC curve
Horizontal axis: false positive rate, FPR = FP / N (N = number of actual negative samples)
Vertical axis: true positive rate, TPR = TP / P (P = number of actual positive samples)
Steps:
1. Sort the samples by their "score" values from high to low and use each value in turn as the threshold;
2. For each threshold, test samples whose score is greater than or equal to the threshold are treated as positive examples and the rest as negative examples, yielding one (FPR, TPR) point per threshold.

The calculation is similar to that of the P-R curve, so it is not repeated here.

The ROC curve of the iris data set is shown below.


AUC (Area Under Curve) is defined as the area under the ROC curve.
The AUC summarizes the classifier's performance in a single number in [0, 1]; generally, the larger the AUC, the better the classifier.
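As with the P-R curve, scikit-learn can compute the ROC points and the AUC directly via roc_curve and roc_auc_score. A minimal binary-classification sketch (the labels and scores here are made up for illustration, not taken from the iris data):

import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

# Made-up true labels and classifier scores
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

# roc_curve sweeps the threshold and returns the FPR and TPR arrays
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))

plt.plot(fpr, tpr)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.show()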

2.2. Regression evaluation metrics

1) Explained variance score
explained_variance = 1 - Var(y_true - y_pred) / Var(y_true)

2) Mean absolute error MAE (mean absolute error)
MAE = (1/n) * sum(|y_i - ŷ_i|)

3) Mean squared error MSE (mean squared error)
MSE = (1/n) * sum((y_i - ŷ_i)^2)

4) Logistic regression loss (log loss, cross-entropy loss)
For binary labels: log_loss = -(1/N) * sum(y_i * log(p_i) + (1 - y_i) * log(1 - p_i))

5) Consistency evaluation: the Pearson correlation coefficient
r = cov(X, Y) / (σ_X * σ_Y)
(Cohen's kappa, used in the code below, is another common agreement measure.)
Python code implementation:

from sklearn.metrics import log_loss, cohen_kappa_score
from scipy.stats import pearsonr

# Log loss expects predicted probabilities rather than hard labels
log_loss(y_true, y_pred)
# Pearson correlation between two raters' score vectors
pearsonr(rater1, rater2)
# Cohen's kappa measures inter-rater agreement
cohen_kappa_score(rater1, rater2)
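The remaining regression metrics from the list above are one-liners in scikit-learn as well; a minimal sketch with made-up data (RMSE is taken as the square root of MSE):

import numpy as np
from sklearn.metrics import explained_variance_score, mean_absolute_error, mean_squared_error, r2_score

# Made-up ground truth and predictions
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

print("explained variance:", explained_variance_score(y_true, y_pred))
print("MAE:", mean_absolute_error(y_true, y_pred))
mse = mean_squared_error(y_true, y_pred)
print("MSE:", mse)
print("RMSE:", np.sqrt(mse))  # root mean squared error
print("R^2:", r2_score(y_true, y_pred))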

This concludes the detailed explanation of classification and regression evaluation metrics and their Python code implementation.
