I am trying to generate predictions with PySpark from a pickled model, which I load with:

model = deserialize_python_object(filename)

where deserialize_python_object(filename) is defined as:
import pickle

def deserialize_python_object(filename):
    try:
        with open(filename, 'rb') as f:
            obj = pickle.load(f)
    except:
        obj = None
    return obj
The error log looks like this:
File "/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/sql/udf.py", line 189, in wrapper
    return self(*args)
File "/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/sql/udf.py", line 167, in __call__
    judf = self._judf
File "/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/sql/udf.py", line 151, in _judf
    self._judf_placeholder = self._create_judf()
File "/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/sql/udf.py", line 160, in _create_judf
    wrapped_func = _wrap_function(sc, self.func, self.returnType)
File "/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/sql/udf.py", line 35, in _wrap_function
    pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
File "/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/rdd.py", line 2420, in _prepare_for_python_RDD
    pickled_command = ser.dumps(command)
File "/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/serializers.py", line 600, in dumps
    raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: TypeError: can't pickle _abc_data objects
It seems you have run into the same problem reported in this issue: https://github.com/cloudpipe/cloudpickle/issues/180

What is happening is that the cloudpickle library bundled with PySpark is outdated for Python 3.7; you can work around the problem with a purpose-built patch until PySpark gets that module updated.
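As a quick sanity check that the standalone cloudpickle package does handle the `_abc_data` case that trips up PySpark's bundled copy, consider this sketch (the `Estimator`/`Doubler` classes are hypothetical, made up for illustration; it assumes `cloudpickle` is installed):

```python
import abc
import pickle

import cloudpickle

# In Python 3.7+, abc.ABC subclasses carry an internal _abc_data object,
# which the outdated cloudpickle vendored into PySpark cannot serialize.
class Estimator(abc.ABC):
    @abc.abstractmethod
    def predict(self, x): ...

class Doubler(Estimator):
    def predict(self, x):
        return 2 * x

# The up-to-date standalone cloudpickle serializes the ABC-derived class
# by value; the resulting bytes load back with the plain pickle module.
payload = cloudpickle.dumps(Doubler)
restored = pickle.loads(payload)
print(restored().predict(21))  # → 42
```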
Try this workaround:

Install cloudpickle:

pip install cloudpickle

Add this to your code (monkeypatch credit: https://github.com/cloudpipe/cloudpickle/issues/305):