Numerical jumps in sklearn's GradientBoostingRegressor

Posted 2024-10-05 14:31:15


I have been working on a "manual" version of gradient-boosted regression trees. My version's error agrees very closely with sklearn's GradientBoostingRegressor module, until I increase the tree-building loop beyond a certain value. I am not sure whether this is a bug in my code or a property of the algorithm itself, so I am looking for some guidance on what might be happening. Below is my full code using the Boston housing data, followed by the output as I change the loop parameter.
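For reference, the loop below implements the standard gradient-boosting update for squared-error loss: at stage i a shallow tree h_i is fit to the current residuals r_i = y − ŷ_{i−1}, and the running prediction is updated as ŷ_i = ŷ_{i−1} + α·h_i(x), starting from ŷ_0 = 0 (which is why the sklearn model below is given init="zero").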

from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import load_boston

X, y = load_boston(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)


alpha = 0.5          # learning rate
loop = 44            # number of boosting stages
yhi_1 = 0            # running prediction on the training set
ypT = 0              # accumulated prediction on the test set
for i in range(loop):
    dt = DecisionTreeRegressor(max_depth=2, random_state=42)
    ri = y_train - yhi_1              # residuals of the current ensemble
    dt.fit(X_train, ri)               # fit the next tree to the residuals
    hi = dt.predict(X_train)
    yhi = yhi_1 + alpha * hi          # update the training prediction
    ypi = dt.predict(X_test) * alpha
    ypT = ypT + ypi                   # accumulate the test prediction
    yhi_1 = yhi


r2Loop = metrics.r2_score(y_test, ypT)
print("dtL: R^2 = ", r2Loop)

from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=loop,
                                 learning_rate=alpha, random_state=42, init="zero")
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_test)
r2GBRT = metrics.r2_score(y_test, y_pred)
print("GBT: R^2 = ", r2GBRT)

print("R2loop - GBT: ", r2Loop - r2GBRT)

With loop = 44, the output is:

dtL: R^2 =  0.8702681499951852
GBT: R^2 =  0.8702681499951852
R2loop - GBT:  0.0

The two agree exactly. If I increase the loop parameter to loop = 45, I get:

dtL: R^2 =  0.8726215419913225
GBT: R^2 =  0.8720222156381275
R2loop - GBT:  0.0005993263531949289

That is an abrupt change from the two algorithms agreeing to 15-16 decimal places. Any thoughts on what is happening?
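One way to localize the divergence is to compare the two prediction vectors elementwise rather than the aggregate R² scores; a minimal sketch, reusing ypT and y_pred from the code above:

import numpy as np

# Largest per-sample gap between the manual loop (ypT) and the
# GradientBoostingRegressor (y_pred) predictions computed above.
print("max abs diff:", np.max(np.abs(ypT - y_pred)))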


1 Answer

Posted 2024-10-05 14:31:15

I believe there are two sources of difference here. The bigger one is the randomness in the DecisionTreeRegressor.fit method. Both your loop and GradientBoostingRegressor fit a DecisionTreeRegressor at every stage, but your training loop does not reproduce the way GradientBoostingRegressor handles the random seed. In your loop the seed is reset at every iteration, because each tree is constructed with random_state=42. In the GradientBoostingRegressor.fit method, the seed is (I assume) set only once, at the start of training. I have modified your code as follows:

from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import load_boston
import numpy as np

X, y = load_boston(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)


alpha = 0.5
loop = 45
yhi_1 = 0
ypT = 0

np.random.seed(42)                    # seed once, before the whole loop
for i in range(loop):
    dt = DecisionTreeRegressor(max_depth=2)   # no per-tree random_state
    ri = y_train - yhi_1
    dt.fit(X_train, ri)
    hi = dt.predict(X_train)
    yhi = yhi_1 + alpha * hi
    ypi = dt.predict(X_test) * alpha
    ypT = ypT + ypi
    yhi_1 = yhi


r2Loop = metrics.r2_score(y_test, ypT)
print("dtL: R^2 = ", r2Loop)

np.random.seed(42)                    # reseed so GBRT starts from the same state
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=loop,
                                 learning_rate=alpha, init="zero")
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_test)
r2GBRT = metrics.r2_score(y_test, y_pred)
print("GBT: R^2 = ", r2GBRT)

print("R2loop - GBT: ", r2Loop - r2GBRT)

The only difference is how I set the random seed: I now use numpy to seed the global random state once before each training run. With this change, I get the following output with loop = 45:

dtL: R^2 =  0.8720222156381277
GBT: R^2 =  0.8720222156381275
R2loop - GBT:  1.1102230246251565e-16

That leaves floating-point error (the other source of difference I mentioned in the first sentence), and for many values of loop I see no difference at all.
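To see why summation order alone can move the last decimal places, here is a minimal, self-contained sketch (the array is an arbitrary illustration, not data from the question):

import numpy as np

# Floating-point addition is not associative: accumulating the same
# numbers in a different order or grouping perturbs the last bits.
# This is the same effect behind the ~1e-16 residual above.
rng = np.random.default_rng(42)
terms = rng.random(1000)

sequential = sum(terms)       # plain left-to-right accumulation
pairwise = np.sum(terms)      # numpy's pairwise summation
print(sequential - pairwise)  # typically a few multiples of machine epsilon
print(np.finfo(float).eps)    # 2.220446049250313e-16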
