How is the scikit-learn cross_val_predict accuracy score calculated?

Posted 2024-05-08 21:28:59


Does cross_val_predict (see doc, v0.18), as used in the code below, apply a k-fold method, compute the accuracy for each fold, and finally average them?

cv = KFold(n_splits=20)
clf = SVC()
ypred = cross_val_predict(clf, td, labels, cv=cv)
accuracy = accuracy_score(labels, ypred)
print(accuracy)

3 Answers

As stated in the documentation for sklearn.model_selection.cross_val_predict:

It is not appropriate to pass these predictions into an evaluation metric. Use cross_validate to measure generalization error.

No, it does not!

According to the cross validation doc page, cross_val_predict does not return any score, only the labels predicted under a specific strategy, as described here:

The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised).

So, by calling accuracy_score(labels, ypred) you are just computing the accuracy of the labels predicted by the aforementioned strategy compared to the true labels. This is again specified in the same documentation page:

These prediction can then be used to evaluate the classifier:

predicted = cross_val_predict(clf, iris.data, iris.target, cv=10) 
metrics.accuracy_score(iris.target, predicted)

Note that the result of this computation may be slightly different from those obtained using cross_val_score as the elements are grouped in different ways.
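The note quoted above can be made concrete: the accuracy over the pooled out-of-fold predictions is a *size-weighted* mean of the per-fold accuracies, so it can differ from the unweighted mean that `cross_val_score(...).mean()` gives whenever the folds have unequal sizes. A minimal sketch, using the iris dataset and an SVC purely for illustration (not the asker's data):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold, cross_val_predict, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=7)   # 150 samples / 7 splits -> folds of unequal size
clf = SVC()

# One accuracy over the pooled out-of-fold predictions (what the asker computes)
pooled = accuracy_score(y, cross_val_predict(clf, X, y, cv=cv))

# One accuracy per fold; the unweighted mean is what cross_val_score reports
per_fold = cross_val_score(clf, X, y, cv=cv)

# The pooled accuracy equals the size-weighted mean of the fold accuracies,
# so it can deviate from per_fold.mean() when fold sizes differ.
weights = [len(test) for _, test in cv.split(X)]
print(pooled, np.average(per_fold, weights=weights), per_fold.mean())
```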

If you need accuracy scores for the different folds, you should try:

>>> scores = cross_val_score(clf, X, y, cv=cv)
>>> scores                                              
array([ 0.96...,  1.  ...,  0.96...,  0.96...,  1.        ])

For the mean accuracy over all folds, use scores.mean():

>>> print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Accuracy: 0.98 (+/- 0.03)

How to compute the Cohen kappa coefficient and confusion matrix for each fold?

To compute the Cohen Kappa coefficient and confusion matrix, I assume you mean the kappa coefficient and confusion matrix between the true labels and each fold's predicted labels:

from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, confusion_matrix

cv = KFold(n_splits=20)
clf = SVC()
for train_index, test_index in cv.split(X):
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    kappa_score = cohen_kappa_score(labels[test_index], ypred)
    conf_mat = confusion_matrix(labels[test_index], ypred)  # don't shadow the imported name
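If you only need a per-fold score (rather than the confusion matrix), the explicit loop can also be replaced by cross_val_score with a custom scorer built via make_scorer. A self-contained sketch, using the iris dataset purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
clf = SVC()

# make_scorer wraps the metric so cross_val_score can evaluate it
# on each fold's held-out test set.
kappa_per_fold = cross_val_score(clf, X, y, cv=cv,
                                 scoring=make_scorer(cohen_kappa_score))
print(kappa_per_fold)         # one Cohen kappa value per fold
print(kappa_per_fold.mean())  # average kappa over the folds
```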

What does cross_val_predict return?

It uses KFold to split the data into k parts and then, for i = 1..k iterations:

  • takes the i'th part as the test data and all other parts as the training data
  • trains the model with the training data (all parts except the i'th)
  • then, using this trained model, predicts labels for the i'th part (the test data)

In each iteration, the labels of the i'th part of the data get predicted. In the end, cross_val_predict merges all the partially predicted labels and returns them as the final result.

This code shows the process step by step:

import numpy as np
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.svm import SVC

X = np.array([[0], [1], [2], [3], [4], [5]])
labels = np.array(['a', 'a', 'a', 'b', 'b', 'b'])

cv = KFold(n_splits=3)
clf = SVC()
ypred_all = np.full(labels.shape, '', dtype=labels.dtype)
i = 1
for train_index, test_index in cv.split(X):
    print("iteration", i, ":")
    print("train indices:", train_index)
    print("train data:", X[train_index])
    print("test indices:", test_index)
    print("test data:", X[test_index])
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    print("predicted labels for data of indices", test_index, "are:", ypred)
    ypred_all[test_index] = ypred
    print("merged predicted labels:", ypred_all)
    i = i + 1
    print("=====================================")
y_cross_val_predict = cross_val_predict(clf, X, labels, cv=cv)
print("predicted labels by cross_val_predict:", y_cross_val_predict)

The result is:

iteration 1 :
train indices: [2 3 4 5]
train data: [[2] [3] [4] [5]]
test indices: [0 1]
test data: [[0] [1]]
predicted labels for data of indices [0 1] are: ['b' 'b']
merged predicted labels: ['b' 'b' '' '' '' '']
=====================================
iteration 2 :
train indices: [0 1 4 5]
train data: [[0] [1] [4] [5]]
test indices: [2 3]
test data: [[2] [3]]
predicted labels for data of indices [2 3] are: ['a' 'b']
merged predicted labels: ['b' 'b' 'a' 'b' '' '']
=====================================
iteration 3 :
train indices: [0 1 2 3]
train data: [[0] [1] [2] [3]]
test indices: [4 5]
test data: [[4] [5]]
predicted labels for data of indices [4 5] are: ['a' 'a']
merged predicted labels: ['b' 'b' 'a' 'b' 'a' 'a']
=====================================
predicted labels by cross_val_predict: ['b' 'b' 'a' 'b' 'a' 'a']

As can be seen in the cross_val_predict code on GitHub, the function computes the prediction for each fold and concatenates them. The predictions are made based on models learned from the other folds.

Here is a combination of your code and the example provided in the docs:

from sklearn import datasets, linear_model
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import accuracy_score

diabetes = datasets.load_diabetes()
X = diabetes.data[:400]
y = diabetes.target[:400]
cv = KFold(n_splits=20)
lasso = linear_model.Lasso()
y_pred = cross_val_predict(lasso, X, y, cv=cv)
accuracy = accuracy_score(y_pred.astype(int), y.astype(int))

print(accuracy)
# >>> 0.0075

Finally, to answer your question: "No, the accuracy is not averaged for each fold."
