In this article, we will learn how to choose the best model from among multiple models with different hyperparameters. In some cases we may have more than 50 different models, so knowing how to select one is essential for getting the best-performing model for your dataset.
We will perform model selection by choosing both the best learning algorithm and its best hyperparameters.
But first, what are hyperparameters? They are additional settings chosen by the user that influence how the model learns its parameters. Parameters, on the other hand, are what the model learns during training.
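As a quick illustration (a minimal sketch using scikit-learn's LogisticRegression on the iris dataset, not part of the original example), C is a hyperparameter we choose before training, while the coefficients are parameters the model learns:

# Load libraries
from sklearn import linear_model, datasets

# Load data
iris = datasets.load_iris()

# C is a hyperparameter: we set it before training
model = linear_model.LogisticRegression(C=1.0, max_iter=500, solver='liblinear')

# The coefficients and intercept are parameters: the model learns them during fit
model.fit(iris.data, iris.target)
print(model.coef_)       # learned weights
print(model.intercept_)  # learned intercepts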
Exhaustive search selects the best model by searching over a range of hyperparameters. To do this, we use scikit-learn's GridSearchCV.
How GridSearchCV works:
Example:
We set logistic regression as our learning algorithm and tune two hyperparameters (C and the regularization penalty). We also specify two parameters: the solver and the maximum number of iterations.
For each combination of C and regularization penalty values, we train a model and evaluate it with k-fold cross-validation.
With 10 possible values of C, 2 possible regularization penalties, and 5 folds, there are 10 x 2 = 20 candidate hyperparameter combinations and 10 x 2 x 5 = 100 model fits in total, from which the best combination is selected.
# Load libraries
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')

# Create range of candidate penalty hyperparameter values
penalty = ['l1', 'l2']

# Create range of candidate regularization hyperparameter values
C = np.logspace(0, 4, 10)

# Create dictionary of hyperparameter candidates
hyperparameters = dict(C=C, penalty=penalty)

# Create grid search
gridsearch = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)

# Fit grid search
best_model = gridsearch.fit(features, target)

# Show the best model
print(best_model.best_estimator_)
# LogisticRegression(C=7.742636826811269, max_iter=500, penalty='l1', solver='liblinear')
Viewing the best model's hyperparameters:
# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])
# Best Penalty: l1
# Best C: 7.742636826811269
Randomized search (RandomizedSearchCV) is commonly used when you want a computationally cheaper method than exhaustive search for selecting the best model.
It is worth noting that RandomizedSearchCV is not inherently faster than GridSearchCV, but it often achieves performance comparable to GridSearchCV in less time simply by testing fewer combinations.
How RandomizedSearchCV works: it samples a user-defined number of hyperparameter combinations (n_iter) from the supplied distributions or lists of candidate values and evaluates each one with cross-validation.
Example:
# Load libraries
from scipy.stats import uniform
from sklearn import linear_model, datasets
from sklearn.model_selection import RandomizedSearchCV

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')

# Create range of candidate regularization penalty hyperparameter values
penalty = ['l1', 'l2']

# Create distribution of candidate regularization hyperparameter values
C = uniform(loc=0, scale=4)

# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)

# Create randomized search
randomizedsearch = RandomizedSearchCV(
    logistic, hyperparameters, random_state=1, n_iter=100, cv=5, verbose=0, n_jobs=-1)

# Fit randomized search
best_model = randomizedsearch.fit(features, target)

# Print best model
print(best_model.best_estimator_)
# LogisticRegression(C=1.668088018810296, max_iter=500, penalty='l1', solver='liblinear')
Viewing the best model's hyperparameters:
# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])
# Best Penalty: l1
# Best C: 1.668088018810296
Note: the number of candidate models trained is specified by the n_iter (number of iterations) setting.
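As a small illustration (not part of the original example), the scipy uniform distribution used above can be sampled by hand to see the kind of candidate C values RandomizedSearchCV draws on each iteration:

# Illustrative sketch: RandomizedSearchCV draws hyperparameter values from the
# supplied distributions; here we sample three candidate C values ourselves
from scipy.stats import uniform

C_distribution = uniform(loc=0, scale=4)  # uniform distribution over [0, 4]
print(C_distribution.rvs(size=3, random_state=1))  # three sampled candidate C values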
In this section, we will see how to select the best model by searching over a range of learning algorithms as well as their respective hyperparameters.
We can do this by simply creating a dictionary of candidate learning algorithms and their hyperparameters to use as the search space for GridSearchCV.
Steps:
# Load libraries
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Set random seed
np.random.seed(0)

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create a pipeline
pipe = Pipeline([("classifier", RandomForestClassifier())])

# Create dictionary with candidate learning algorithms and their hyperparameters
search_space = [{"classifier": [LogisticRegression(max_iter=500, solver='liblinear')],
                 "classifier__penalty": ['l1', 'l2'],
                 "classifier__C": np.logspace(0, 4, 10)},
                {"classifier": [RandomForestClassifier()],
                 "classifier__n_estimators": [10, 100, 1000],
                 "classifier__max_features": [1, 2, 3]}]

# Create grid search
gridsearch = GridSearchCV(pipe, search_space, cv=5, verbose=0)

# Fit grid search
best_model = gridsearch.fit(features, target)

# Print best model
print(best_model.best_estimator_)
# Pipeline(steps=[('classifier',
#                  LogisticRegression(C=7.742636826811269, max_iter=500,
#                                     penalty='l1', solver='liblinear'))])
The best model:
After the search completes, we can use best_estimator_ to view the learning algorithm and hyperparameters of the best model.
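For example, a short follow-up (building on the best_model fitted above) pulls the selected classifier step out of the winning pipeline:

# View the learning algorithm (and its hyperparameters) chosen by the search
best_classifier = best_model.best_estimator_.get_params()['classifier']
print(best_classifier)
# LogisticRegression(C=7.742636826811269, max_iter=500, penalty='l1', solver='liblinear')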
Sometimes we may want to include a preprocessing step during model selection.
The best solution is to create a pipeline that includes the preprocessing step along with any of its parameters:
First Challenge:
GridSearchCV uses cross-validation to determine the model with the highest performance.
However, in cross-validation we pretend that the held-out fold is unseen test data, so it must not be used to fit any preprocessing steps (for example, scaling or standardization).
For this reason, the preprocessing steps must be part of the set of actions performed by GridSearchCV.
Solution:
Scikit-learn provides FeatureUnion, which allows us to combine multiple preprocessing actions properly.
Steps:
This allows us to outsource the proper handling of fitting, transforming, and training the models with combinations of hyperparameters to scikit-learn.
Second Challenge:
Some preprocessing methods, such as PCA, have their own parameters: dimensionality reduction with PCA requires the user to define the number of principal components used to produce the transformed feature set. Ideally, we would choose the number of components that produces the model with the greatest performance on some evaluation metric.
Solution:
In scikit-learn, when we include candidate component values in the search space, they are treated like any other hyperparameter to be searched over.
# Load libraries
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Set random seed
np.random.seed(0)

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create a preprocessing object that includes StandardScaler and PCA
preprocess = FeatureUnion([("std", StandardScaler()), ("pca", PCA())])

# Create a pipeline
pipe = Pipeline([("preprocess", preprocess),
                 ("classifier", LogisticRegression(max_iter=1000, solver='liblinear'))])

# Create space of candidate values
search_space = [{"preprocess__pca__n_components": [1, 2, 3],
                 "classifier__penalty": ["l1", "l2"],
                 "classifier__C": np.logspace(0, 4, 10)}]

# Create grid search
clf = GridSearchCV(pipe, search_space, cv=5, verbose=0, n_jobs=-1)

# Fit grid search
best_model = clf.fit(features, target)

# Print best model
print(best_model.best_estimator_)
# Pipeline(steps=[('preprocess',
#                  FeatureUnion(transformer_list=[('std', StandardScaler()),
#                                                 ('pca', PCA(n_components=1))])),
#                 ('classifier',
#                  LogisticRegression(C=7.742636826811269, max_iter=1000,
#                                     penalty='l1', solver='liblinear'))])
After model selection is complete, we can view the preprocessing values that produced the best model.
Preprocessing steps that produced the best model:
# View best n_components
best_model.best_estimator_.get_params()['preprocess__pca__n_components']
# 1
At times you may need to reduce the time it takes to select a model.
We can do this by training multiple models simultaneously, using all of the cores on our machine by setting n_jobs=-1.
# Load libraries
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')

# Create range of candidate regularization penalty hyperparameter values
penalty = ["l1", "l2"]

# Create range of candidate values for C
C = np.logspace(0, 4, 1000)

# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)

# Create grid search
gridsearch = GridSearchCV(logistic, hyperparameters, cv=5, n_jobs=-1, verbose=1)

# Fit grid search
best_model = gridsearch.fit(features, target)

# Print best model
print(best_model.best_estimator_)
# Fitting 5 folds for each of 2000 candidates, totalling 10000 fits
# LogisticRegression(C=5.926151812475554, max_iter=500, penalty='l1', solver='liblinear')
This is a way to speed up model selection without using additional compute power.
This is possible because scikit-learn has model-specific cross-validation hyperparameter tuning.
Sometimes the characteristics of a learning algorithm allow us to search for the best hyperparameters significantly faster.
Example:
LogisticRegression implements a standard logistic regression classifier.
LogisticRegressionCV implements an efficient cross-validated logistic regression classifier that can identify the optimum value of the hyperparameter C.
# Load libraries
from sklearn import linear_model, datasets

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create cross-validated logistic regression
logit = linear_model.LogisticRegressionCV(Cs=100, max_iter=500, solver='liblinear')

# Train model
logit.fit(features, target)

# Print model
print(logit)
# LogisticRegressionCV(Cs=100, max_iter=500, solver='liblinear')
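As a brief follow-up (a minimal sketch using the logit model trained above), the value of C that LogisticRegressionCV selected can be inspected through its C_ attribute:

# View the C value(s) selected by cross-validation; for multiclass data
# there is typically one value per class
print(logit.C_)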
Note: A major downside to LogisticRegressionCV is that it can only search a range of values for C. This limitation is common to many of scikit-learn's model-specific cross-validated approaches.
I hope this article was helpful as a quick overview of how to select a machine learning model.