ValueError from a logistic regression model, and how do I check prediction accuracy?

This is the logistic regression model; the code below runs without problems:

import pandas as pd
import statsmodels.api as sm

# activity.csv (a sample is shown below) loaded into the DataFrame used throughout
df = pd.read_csv('activity.csv')

# one-hot encode the categorical columns, dropping the first level of each
# (gender_F, metro_area_Austin, device_Desktop) to avoid redundancy with the constant
dummy_genders = pd.get_dummies(df['gender'], prefix='gender')
dummy_metro = pd.get_dummies(df['metropolitan_area'], prefix='metro_area')
dummy_device = pd.get_dummies(df['device_type'], prefix='device')
cols_to_keep = ['active', 'age']
activity_data = df[cols_to_keep].join(dummy_genders.loc[:, 'gender_M':])
activity_data = activity_data.join(dummy_metro.loc[:, 'metro_area_Birmingham':])
activity_data = activity_data.join(dummy_device.loc[:, 'device_Mobile':])
activity_data = sm.add_constant(activity_data, prepend=False)

# every column except 'active' (including the constant) is an explanatory variable
explanatory_cols = activity_data.columns[1:]
full_logit_model = sm.GLM(activity_data['active'], activity_data[explanatory_cols],
                          family=sm.families.Binomial())
result = full_logit_model.fit()

Here is a sample of the actual data from activity.csv; it has been loaded into a DataFrame named "df", on which the model above is built.

Data

userid,date,age,gender,metropolitan_area,device_type,active
4e3a9ea937b3a,8/4/2015,30,F,Detroit,Tablet,1
4e3dd5154a08c,8/6/2015,43,F,Charlotte,Desktop,1
4e3df1ecd131a,8/6/2015,41,F,Tampa,Mobile,1
4e4e77461b1e3,8/19/2015,56,F,Nashville,Desktop,1
4e4eb59b6de55,8/19/2015,33,F,Detroit,Mobile,1
4e551b9fbe969,8/24/2015,24,F,Birmingham,Mobile,1
4e57131ec1699,8/25/2015,51,F,Nashville,Desktop,1
4e5c9ff1eb382,8/30/2015,54,F,Birmingham,Tablet,1
4e5e7f3552b42,8/31/2015,24,F,Houston,Tablet,1
4e5e8bedd74e3,8/31/2015,26,F,Detroit,Mobile,1
4e5ea3c755939,8/31/2015,28,F,Austin,Mobile,1
4e5eaf5faf4e3,8/31/2015,30,F,Tampa,Mobile,1
4e61068267066,9/2/2015,18,M,Houston,Mobile,1
4e654e1357d7c,9/5/2015,50,F,Birmingham,Mobile,1
4e659cb802325,9/5/2015,39,F,Birmingham,Tablet,1
4e69f1bebcd65,9/9/2015,46,F,Austin,Mobile,1
4e794f9957f84,9/20/2015,42,F,Tampa,Mobile,1
4e7a202537b55,9/21/2015,53,F,Tampa,Mobile,1
4e7ba180f1a51,9/22/2015,23,F,Houston,Mobile,1
4e812357d66c3,9/26/2015,19,F,Detroit,Mobile,1
4e81fb5f749e3,9/27/2015,35,F,Birmingham,Mobile,1
4e8a53a78cc08,10/3/2015,30,F,Tampa,Mobile,1
4e96621a98060,10/12/2015,47,F,Houston,Tablet,1
4e97104767c85,10/13/2015,42,F,Austin,Mobile,1
4e97a4b5caed1,10/13/2015,50,F,Tampa,Mobile,1
4e9a11f238065,11/2/2015,32,F,Tampa,Mobile,1
4e9db901cddd3,10/18/2015,22,F,Houston,Mobile,1
4ea95ca93a5e9,10/27/2015,37,F,Houston,Tablet,1
4ea9b90293dd8,10/27/2015,26,F,Houston,Mobile,1
4eaab6781b2db,10/28/2015,25,F,Houston,Tablet,1
4eac151468326,11/1/2015,52,F,Austin,Tablet,1
4eae91e25757d,11/1/2015,34,F,Houston,Tablet,1
4eb0dd31cdb2f,11/1/2015,40,F,Birmingham,Mobile,1
4eb126e841245,11/2/2015,39,F,Houston,Mobile,1
4eb21a71863b3,11/2/2015,19,F,Birmingham,Mobile,1
4eb2eb12c95e3,11/3/2015,21,F,Austin,Mobile,1
4eb339b4c5424,11/3/2015,29,F,Birmingham,Mobile,1
4eb9ecf8efca2,11/8/2015,29,F,Detroit,Mobile,1
4ec17af8a4b6a,11/14/2015,53,F,Nashville,Mobile,1
4ec5493f7aca4,11/17/2015,32,F,Birmingham,Mobile,1
4ed2893798eb8,11/27/2015,52,F,Austin,Mobile,1
4ed8e311d24d5,12/2/2015,29,F,Houston,Mobile,1
4eecb2bb3b72c,12/17/2015,45,F,Detroit,Tablet,1
4eef423e165ec,12/19/2015,47,F,Birmingham,Tablet,1
4ef7b4bf58f95,12/26/2015,50,M,Austin,Mobile,1
4efa171ac6898,12/27/2015,29,F,Birmingham,Tablet,1
4efa4cfe3956a,12/27/2015,33,F,Houston,Mobile,1
4efccb9a28467,12/29/2015,45,F,Detroit,Mobile,1
4f05f49e6a588,1/5/2016,44,F,Detroit,Tablet,1
4f05fc42599c7,1/5/2016,46,M,Tampa,Mobile,1
4f07539176958,1/6/2016,33,F,Tampa,Tablet,1
4f0780b360b91,1/6/2016,39,F,Birmingham,Tablet,1
4f0b6496addfe,1/9/2016,28,F,Tampa,Mobile,1
4f0bd18e55134,1/9/2016,46,F,Tampa,Mobile,1
4f10ce90364d0,1/13/2016,30,F,Tampa,Mobile,1
4f14781697fe4,1/16/2016,22,M,Houston,Mobile,1
4f14c10ec50a7,1/16/2016,31,F,Birmingham,Tablet,1
4f164258b1bb6,1/17/2016,21,F,Houston,Tablet,1
4f1846a730a25,1/19/2016,21,F,Houston,Tablet,1
4f18b6615a703,1/19/2016,32,M,Tampa,Mobile,1
4f1e55553d7de,1/23/2016,28,F,Austin,Mobile,1
4f2093259bbd6,1/25/2016,29,M,Detroit,Mobile,1
4f23182154d52,1/27/2016,40,F,Austin,Mobile,1
4f242c4752b99,2/1/2016,49,F,Tampa,Mobile,1
4f2764d0cf434,1/30/2016,29,M,Tampa,Mobile,1
4f2d9e64779d0,2/4/2016,31,M,Birmingham,Mobile,1
4f2efb8f639ff,2/5/2016,35,F,Houston,Tablet,1
4f32cd83638db,2/8/2016,18,F,Houston,Mobile,1
4f36053fc68b3,2/10/2016,52,F,Birmingham,Tablet,1
4f39e32eea4d7,2/13/2016,35,F,Houston,Tablet,1
4f3d9a46a8bfd,2/16/2016,22,F,Detroit,Tablet,1
4f43c9093d832,2/21/2016,24,F,Tampa,Mobile,1
4f43d3ae21f85,2/21/2016,49,F,Houston,Tablet,1
4f4679ef62352,2/23/2016,45,F,Nashville,Mobile,1
4f4a53d5af035,2/26/2016,34,F,Tampa,Mobile,1
4f4d7474bfc32,2/28/2016,48,F,Nashville,Desktop,1
4f56dd35509e7,3/6/2016,35,F,Detroit,Mobile,1
4f57969aaeb8c,3/7/2016,37,F,Tampa,Mobile,1
4f58c73e6d91b,3/8/2016,41,F,Austin,Mobile,1
4f5995d4f26b6,3/8/2016,50,F,Detroit,Tablet,1
4f5d0dd6a39c4,3/11/2016,54,F,Houston,Mobile,1
4f626e2a28b2c,3/15/2016,32,F,Houston,Mobile,1
4f661940111b4,3/18/2016,22,F,Houston,Tablet,1
4f66737ea0a55,3/18/2016,20,F,Houston,Tablet,1
4f6a9ee5c553c,3/21/2016,32,F,Tampa,Mobile,1
4f6b9274864d7,3/22/2016,30,F,Birmingham,Mobile,1
4f6b9e7d8ea3e,3/22/2016,44,F,Austin,Tablet,1
4f6f548048d7d,3/25/2016,30,F,Houston,Mobile,1
4f6fb89399f8a,3/25/2016,30,F,Birmingham,Tablet,1
4f70bc0c20e2a,3/26/2016,23,M,Detroit,Tablet,1
4f71b84ece5bf,3/27/2016,37,F,Houston,Mobile,1
4f764c74b3e76,3/30/2016,47,F,Tampa,Mobile,1
4f768f1c3eec5,3/30/2016,39,F,Austin,Tablet,1
4e382ac9dd10a,8/2/2015,27,F,Tampa,Mobile,1
4e40221b84a45,8/8/2015,36,F,Detroit,Mobile,1
4e468d7e16236,8/13/2015,38,M,Nashville,Desktop,1
4e489c228a57a,8/14/2015,22,F,Austin,Tablet,1
4e4e950f4ed32,8/19/2015,27,F,Austin,Tablet,1
4e56adec17bfa,8/25/2015,61,F,Birmingham,Mobile,1

Now I want to apply this model to the same data it was trained on and evaluate how accurate the predictions are, so I tried:

full_logit_model.predict(activity_data[explanatory_cols])

But it gives me a ValueError:


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-30-f7ee19ed420b> in <module>
----> 1 full_logit_model.predict(activity_data[explanatory_cols])

e:\Anaconda3\lib\site-packages\statsmodels\genmod\generalized_linear_model.py in predict(self, params, exog, exposure, offset, linear)
    870             exog = self.exog
    871 
--> 872         linpred = np.dot(exog, params) + offset + exposure
    873         if linear:
    874             return linpred

<__array_function__ internals> in dot(*args, **kwargs)

ValueError: shapes (5420,12) and (5420,12) not aligned: 12 (dim 1) != 5420 (dim 0)
  1. What could be going wrong here?
  2. How do I check the prediction accuracy?

Tags: model, df, data, device, activity, mobile, dummy, cols
2 Answers

If the goal is to make predictions on the same data used for training, you need to call the predict method on the fitted results object, e.g. result.predict(activity_data[explanatory_cols]).
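For example, a minimal sketch of the corrected call, assuming result is the fitted object returned by full_logit_model.fit() in the question's code:

# call predict on the fitted results object, not on the unfitted GLM model;
# this returns the in-sample predicted probability of active == 1 for each row
in_sample_probs = result.predict(activity_data[explanatory_cols])
print(in_sample_probs[:5])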

The pd.get_dummies function one-hot encodes the categorical columns and passes numeric columns through unchanged, so you can simplify building the dependent and independent variables to:

# one-hot encode the categorical columns; the numeric column (age) passes through unchanged
# drop_first=True drops one level per category to avoid collinearity with the constant
X = pd.get_dummies(df[['age', 'gender', 'metropolitan_area', 'device_type']], drop_first=True)
X = sm.add_constant(X, prepend=False)
y = df['active']

Then fit the model:

full_logit_model = sm.GLM(y, X, family=sm.families.Binomial())
result = full_logit_model.fit()
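Once the model is fitted, result.summary() prints the estimated coefficients and their standard errors, which is a quick sanity check before looking at predictions:

# coefficient table for the fitted GLM
print(result.summary())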

The fitted values can then be obtained with either of:

result.predict()
result.fittedvalues
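Both return the same in-sample probabilities; with no arguments, result.predict() defaults to the design matrix the model was fitted on:

import numpy as np

# the two should agree element-wise
print(np.allclose(result.predict(), result.fittedvalues))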

These values are probabilities, so you need to turn them into 0/1 labels, for example to build a confusion matrix:

from sklearn.metrics import confusion_matrix

# threshold the predicted probabilities at 0.5 to get 0/1 class labels
prediction = (result.fittedvalues > 0.5).astype(int)
confusion_matrix(y, prediction)
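To answer the second question, the overall accuracy on the training data can then be read off the confusion matrix or computed directly, for example with sklearn's accuracy_score:

from sklearn.metrics import accuracy_score

# share of rows where the 0/1 prediction matches the observed label
print(accuracy_score(y, prediction))

# equivalently, from the confusion matrix: (true negatives + true positives) / total
cm = confusion_matrix(y, prediction)
print(cm.trace() / cm.sum())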
