Why doesn't RandomizedSearchCV work with BaggingRegressor?

Posted 2024-09-29 19:25:55


I am trying to run RandomizedSearchCV on a BaggingRegressor model in scikit-learn. The model uses a scikit-learn HuberRegressor as its base estimator. When I fit the BaggingRegressor on its own, without RandomizedSearchCV, it works fine. But when I run it through RandomizedSearchCV, I get an error. My code is below, followed by the error message.

kf = KFold(n_splits=5, shuffle=True, random_state=baseseed)

modelfinal = HuberRegressor(fit_intercept=True, alpha=4.2200999999999995, epsilon=1.3100000000000003, warm_start=True, max_iter=450)
bagmodel = BaggingRegressor(modelfinal, n_estimators=100, max_samples=0.33, max_features=0.5, bootstrap=True, 
                            bootstrap_features=True, random_state=baseseed, verbose=1, n_jobs=njobs)
bagmodel.fit(X_df_train, y_df_train)
# fitting directly like this works fine

modelfinal = HuberRegressor(fit_intercept=True, alpha=4.2200999999999995, epsilon=1.3100000000000003, warm_start=True, max_iter=450)
bagmodel = BaggingRegressor()

bagparams = {"max_samples": np.arange(0.05, 1.01, 0.05), "max_features": np.arange(0.05, 1.01, 0.05), "bootstrap": [True, False],
             "bootstrap_features": [True, False], 
             "n_estimators": np.arange(10, 600, 10), 
             "base_estimator": [modelfinal], 
             "random_state": [baseseed], "verbose": [1]}

bagmodelrs = RandomizedSearchCV(bagmodel, bagparams, n_iter=100, n_jobs=njobs, scoring="neg_mean_absolute_error",
                                verbose=1, cv=kf)
bagmodelrs.fit(X_df_train, y_df_train)
# this raises the error below
Fitting 5 folds for each of 100 candidates, totalling 500 fits
[Parallel(n_jobs=7)]: Using backend LokyBackend with 7 concurrent workers.
Traceback (most recent call last):

  File "<ipython-input-5-ca5c3359581c>", line 20, in <module>
    bagmodelrs.fit(X_df_train, y_df_train)

  File "C:\Users\controllingde\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py", line 688, in fit
    self._run_search(evaluate_candidates)

  File "C:\Users\controllingde\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py", line 1469, in _run_search
    random_state=self.random_state))

  File "C:\Users\controllingde\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py", line 667, in evaluate_candidates
    cv.split(X, y, groups)))

  File "C:\Users\controllingde\Anaconda3\lib\site-packages\joblib\parallel.py", line 934, in __call__
    self.retrieve()

  File "C:\Users\controllingde\Anaconda3\lib\site-packages\joblib\parallel.py", line 833, in retrieve
    self._output.extend(job.get(timeout=self.timeout))

  File "C:\Users\controllingde\Anaconda3\lib\site-packages\joblib\_parallel_backends.py", line 521, in wrap_future_result
    return future.result(timeout=timeout)

  File "C:\Users\controllingde\Anaconda3\lib\concurrent\futures\_base.py", line 432, in result
    return self.__get_result()

  File "C:\Users\controllingde\Anaconda3\lib\concurrent\futures\_base.py", line 384, in __get_result
    raise self._exception

AttributeError: 'NoneType' object has no attribute 'write'

I have also tried the following, without success:
- using HuberRegressor() instead of modelfinal in the parameter dict
- passing modelfinal directly in the BaggingRegressor call instead of in the param dict
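The `'NoneType' object has no attribute 'write'` message usually means something tried to write to a stdout that is `None`, which can happen when verbose output is produced inside Loky worker processes (e.g. under Spyder on Windows). Below is a minimal sketch of a variant worth trying, with `verbose` and `random_state` removed from the searched parameters; the data and parameter values are toy placeholders, not the original `X_df_train`/`y_df_train`:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import KFold, RandomizedSearchCV

# Toy data standing in for X_df_train / y_df_train (hypothetical).
X, y = make_regression(n_samples=200, n_features=8, noise=0.1, random_state=0)

kf = KFold(n_splits=3, shuffle=True, random_state=0)

# Base estimator passed positionally to the constructor, so the same line
# works whether the keyword is `base_estimator` (older scikit-learn) or
# `estimator` (1.2+).
bagmodel = BaggingRegressor(HuberRegressor(max_iter=450))

# No `verbose` and no `random_state` in the searched parameters; plain
# Python lists also avoid float round-off pushing max_samples above 1.0,
# which np.arange(0.05, 1.01, 0.05) can do.
bagparams = {
    "max_samples": [0.2, 0.4, 0.6, 0.8, 1.0],
    "max_features": [0.5, 0.75, 1.0],
    "bootstrap": [True, False],
    "n_estimators": [10, 20, 30],
}

search = RandomizedSearchCV(
    bagmodel, bagparams, n_iter=5, scoring="neg_mean_absolute_error",
    cv=kf, random_state=0, n_jobs=1,  # n_jobs=1 also rules out worker stdout issues
)
search.fit(X, y)
print(search.best_params_)
```

If this runs, re-adding `n_jobs` and `verbose` one at a time should show which one triggers the error.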

I am using Windows, Spyder, scikit-learn 0.21.3 and Python 3.7.3.

Can anyone help?

Thanks,

Christoph


