Logistic regression: numpy.float64 error

Posted 2024-10-03 21:30:28

I'm trying to implement logistic regression from scratch using a squared loss function, but I'm getting an error I can't figure out: can't multiply sequence by non-int of type 'numpy.float64'. Any help would be greatly appreciated! (Yes, I know I'm being lazy about the coefficients.)

def logisticReg(data):
  X_train = [(d[0],d[1],d[2]) for d, _ in data]
  Y_train = [y for _, y in data]
  
  LogReg = LogisticRegression(random_state=42, solver='sag', penalty='none', max_iter=10000, fit_intercept=False)
  LogReg.fit(X_train, Y_train)
  w=[round(c,2) for c in LogReg.coef_[0]]

  sigmoid = lambda y: 1/(1+np.exp(-y))
  classify = lambda y: 1 if y > 0.5 else 0
  F = lambda W, X: sum([w*x for w,x in zip(W,X)])

  for i in range(len(X_train)):
    Function = F(w,X_train)
    y_pred = sigmoid(Fucntion)
    Data_m = (-2) * sum(x*(y-y_pred)) 
    Data_b = (-2) * sum(y - y_pred)
    m = m- L*Data_m #update weights
    b = b-L*Data_b
  
  weights = zip(m,b)
  print(weights)

   
data = [((1, 0, 0), 1), ((1, 1, 7), 0), ((1, -3, -2), 0), ((1, 8, 9), 1), ((1, 4, 3), 1), ((1, 5, -2), 1), ((1, 0, 0), 1), ((1, 6, 9), 1), ((1, 4, 2), 1), ((1, 1, -9), 1), ((1, -7, 7), 0), ((1, 0, -1), 1), ((1, 9, -4), 1), ((1, 1, 0), 1), ((1, -2, -5), 1), ((1, 2, 3), 1), ((1, -7, 2), 0), ((1, -3, 0), 0), ((1, 5, 0), 1), ((1, 0, -3), 1), ((1, -2, 3), 0), ((1, 9, 6), 1), ((1, 0, -8), 1), ((1, 0, 2), 0), ((1, -8, 6), 0), ((1, 1, 9), 0), ((1, 0, 5), 0), ((1, -4, 9), 0), ((1, 8, 2), 1), ((1, 2, 6), 0)] 
logisticReg(data)

1 answer

User · #1 · Posted 2024-10-03 21:30:28

I found that the error occurs when the function named F is called.
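
For context: F(w, X_train) zips the three rounded coefficients against whole rows of X_train, so each term is a numpy.float64 multiplied by a plain tuple, and Python only allows multiplying a tuple by an int (sequence repetition). A minimal reproduction of the message (the names here are just for illustration):

import numpy as np

coef = np.float64(0.5)   # a NumPy float, like the rounded coefficients
row = (1, 0, 0)          # a row of X_train is a plain tuple

try:
    coef * row           # attempted tuple "repetition" by a non-int
except TypeError as e:
    print(e)             # can't multiply sequence by non-int of type 'numpy.float64'

print(coef * np.array(row))   # element-wise product instead: [0.5 0.  0. ]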

I fixed it by slightly modifying your code:

import numpy as np
import sklearn.linear_model as sk

def logisticReg(data):
    X_train = [(d[0], d[1], d[2]) for d, _ in data]
    Y_train = [y for _, y in data]
  
    LogReg = sk.LogisticRegression(random_state=42, solver='sag', penalty='l2', max_iter=10000, fit_intercept=False)
    LogReg.fit(X_train, Y_train)
    w=[round(c,2) for c in LogReg.coef_[0]]

    sigmoid = lambda y: 1 / (1 + np.exp(-y))
    thresh = lambda y: 1 if y > 0.5 else 0
    # Casting W and X to arrays turns * into an element-wise product,
    # which removes the "can't multiply sequence by non-int" TypeError
    F = lambda W, X: sum([w * x for w, x in zip(np.array(W), np.array(X))])

    for i in range(len(X_train)):
        res = F(w, X_train[i])   # linear combination of the coefficients with one sample
        y_pred = sigmoid(res)
#    Data_m = (-2) * sum(x*(y - y_pred))
#    Data_b = (-2) * sum(y - y_pred)
#    m = m - L*Data_m
#    b = b - L*Data_b
  
#  weights = zip(m,b)
#  print(weights)
    
    return None
    
    
data = [((1, 0, 0), 1), ((1, 1, 7), 0), ((1, -3, -2), 0), ((1, 8, 9), 1), ((1, 4, 3), 1), ((1, 5, -2), 1), ((1, 0, 0), 1), ((1, 6, 9), 1), ((1, 4, 2), 1), ((1, 1, -9), 1), ((1, -7, 7), 0), ((1, 0, -1), 1), ((1, 9, -4), 1), ((1, 1, 0), 1), ((1, -2, -5), 1), ((1, 2, 3), 1), ((1, -7, 2), 0), ((1, -3, 0), 0), ((1, 5, 0), 1), ((1, 0, -3), 1), ((1, -2, 3), 0), ((1, 9, 6), 1), ((1, 0, -8), 1), ((1, 0, 2), 0), ((1, -8, 6), 0), ((1, 1, 9), 0), ((1, 0, 5), 0), ((1, -4, 9), 0), ((1, 8, 2), 1), ((1, 2, 6), 0)] 
logisticReg(data)
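
The essential change is inside F: converting W and X to NumPy arrays makes * an element-wise product, so summing the zipped terms behaves like a dot product instead of attempting sequence repetition. Note that penalty was also switched from 'none' to 'l2', so the reference coefficients will be slightly regularized rather than a plain unpenalized fit.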

However, you should rethink what your algorithm is actually supposed to do, because it is unclear: you want to write a logistic regression algorithm from scratch, yet you use the one from sklearn to compute the model coefficients for the given dataset, and then you appear to use those coefficients inside the for loop.

What your logisticReg function should do is compute the coefficients of a logistic regression model fitted to the given dataset without using sklearn. You could then run your own implementation on the dataset to obtain the coefficients and compare them with the ones computed by sklearn's LogisticRegression; see the sketch after the link below.

A useful reference on fitting a logistic regression model: https://en.wikipedia.org/wiki/Logistic_regression#Model_fitting
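
As a starting point, here is a minimal sketch of what that from-scratch version could look like, using plain gradient descent on the log-loss rather than the squared loss from the question (the learning rate, iteration count and function name are my own illustrative choices). Its output can then be compared with LogReg.coef_[0] from the code above.

import numpy as np

def logistic_reg_scratch(data, lr=0.1, n_iter=2000):
    """Fit logistic regression by gradient descent on the log-loss, without sklearn."""
    X = np.array([x for x, _ in data], dtype=float)   # rows already include the bias column of 1s
    y = np.array([t for _, t in data], dtype=float)
    w = np.zeros(X.shape[1])                          # one weight per column, bias included

    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))              # predicted probabilities
        grad = X.T @ (p - y) / len(y)                 # gradient of the average log-loss
        w -= lr * grad                                # gradient descent update

    return w

# Same data format as above (truncated here for brevity)
data = [((1, 0, 0), 1), ((1, 1, 7), 0), ((1, -3, -2), 0), ((1, 8, 9), 1)]
print(logistic_reg_scratch(data))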
