RuntimeWarning: divide by zero encountered in log when using gradient descent for logistic regression

I don't know how to solve this problem; I'm just a beginner. Here is the code:


import numpy as np
from scipy.special import expit

def sigmoid(X, theta):
    # theta[0] is the intercept, theta[1:] are the feature weights
    z = np.dot(X, theta[1:]) + theta[0]
    # expit(z) already computes 1 / (1 + np.exp(-z))
    y = expit(z)
    return y

def costFunction(y, yhat):
    # Binary cross-entropy loss; np.log(0) here is what raises the warning
    cost = -y.dot(np.log(yhat)) - (1 - y).dot(np.log(1 - yhat))
    return cost

def gradientFun(X, y, theta, learning_rate, iterations):
    cost = []

    for i in range(iterations):
        yhat = sigmoid(X, theta)
        error = yhat - y

        # Gradient of the cross-entropy loss with respect to the weights
        grad = X.T.dot(error)

        # Update the intercept and the weights separately
        theta[0] = theta[0] - learning_rate * error.sum()
        theta[1:] = theta[1:] - learning_rate * grad

        cost.append(costFunction(y, yhat))

    return cost
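
X and y come from my own dataset, so they are not in the snippet; a hypothetical stand-in with synthetic data (my own invention, just to make the example self-contained) would be something like:

# Hypothetical data: 100 samples, 2 features, labels from a linear rule
np.random.seed(0)
X = np.random.randn(100, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(float)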



m, n = X.shape

theta = np.zeros(1 + n)  # one extra slot for the intercept

learning_rate = 0.001
iterations = 1000

cost = gradientFun(X, y, theta, learning_rate, iterations)

After running the code, I get this warning:

<ipython-input-87-26e2d5e7bf1c>:2: RuntimeWarning: divide by zero encountered in log
  cost = -y.dot(np.log(yhat)) - (1-y).dot(np.log(1-yhat))
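
From what I understand, the warning means np.log received an exact 0: once expit saturates to 0.0 or 1.0 for large |z|, either log(yhat) or log(1 - yhat) becomes log(0). One workaround I have seen suggested (the epsilon value below is my guess, not something I know to be standard) is to clip the predictions away from 0 and 1 before taking the log:

def costFunction(y, yhat, eps=1e-15):
    # Clip predictions into (eps, 1 - eps) so np.log never sees an exact 0
    yhat = np.clip(yhat, eps, 1 - eps)
    cost = -y.dot(np.log(yhat)) - (1 - y).dot(np.log(1 - yhat))
    return cost

Is that the right way to handle it, or is something else wrong in my code?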