What I want to do is load a machine-learning model used for generating summaries from a pickle object, so that when I deploy the code to my web application it doesn't have to load the model manually over and over again. Generating a summary takes me 10 minutes, and users can't afford to wait that long.
import cPickle as pickle
from skip_thoughts import configuration
from skip_thoughts import encoder_manager
import en_coref_md
def load_models():
    VOCAB_FILE = "skip_thoughts_uni/vocab.txt"
    EMBEDDING_MATRIX_FILE = "skip_thoughts_uni/embeddings.npy"
    CHECKPOINT_PATH = "skip_thoughts_uni/model.ckpt-501424"
    encoder = encoder_manager.EncoderManager()
    print "loading skip model"
    encoder.load_model(configuration.model_config(),
                       vocabulary_file=VOCAB_FILE,
                       embedding_matrix_file=EMBEDDING_MATRIX_FILE,
                       checkpoint_path=CHECKPOINT_PATH)
    print "loaded"
    return encoder

encoder = load_models()
print "Starting cPickle dumping"
pickle.dump(encoder, open('/path_to_loaded_model/loaded_model.pkl', "wb"))
print "pickle.dump executed"
print "starting cpickle loading"
# the pickle file must be opened in binary mode ('rb'), not 'r'
loaded_model = pickle.load(open('loaded_model.pkl', 'rb'))
print "pickle load done"
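As a side note on the pickle calls themselves, here is a generic sketch of the round trip (not specific to skip-thoughts; shown with the Python 3 `pickle` module, of which `cPickle` is the Python 2 equivalent). The file must be opened in binary mode for both dump and load, and using the highest protocol produces a smaller, faster binary encoding:

```python
import pickle

# A small stand-in object; any picklable Python object works the same way.
model_state = {"vocab": ["hello", "world"], "weights": [0.1, 0.2, 0.3]}

# Dump: binary mode ("wb") is required, and HIGHEST_PROTOCOL gives the
# most compact binary representation.
with open("loaded_model.pkl", "wb") as f:
    pickle.dump(model_state, f, protocol=pickle.HIGHEST_PROTOCOL)

# Load: binary mode ("rb") must match -- opening with "r" raises an
# error or corrupts the byte stream.
with open("loaded_model.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == model_state)  # True
```

Note that pickling only helps for plain Python objects; an object holding a live TensorFlow session, as `EncoderManager` does, generally cannot be pickled meaningfully, which is consistent with the enormous file and stalls described below.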
cPickle did start pickling it, but it couldn't finish in a reasonable amount of time. The first time I tried, the pickle file it created was 11.2 GB, which I think is far too large. It ran for over an hour and bogged my computer down completely; the code never finished executing, and I had to force-restart my machine because it was taking so long.
If anyone can help, I would greatly appreciate it.
I'd suggest checking whether storing the model in HDF5 improves performance:
Writing to HDF5:
Reading from HDF5:
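The answer's original snippets are not shown here; a minimal sketch of an HDF5 round trip using `h5py` is below (an assumption on my part -- the linked tutorials use TensorFlow's own checkpoint/Keras HDF5 saving, but the underlying idea of storing large numeric arrays in an HDF5 file is the same):

```python
import numpy as np
import h5py

# Example data standing in for a model's weight matrices.
embeddings = np.random.rand(100, 62).astype(np.float32)

# Write to HDF5: each array is stored as a named dataset;
# gzip compression keeps large float arrays compact on disk.
with h5py.File("model_weights.h5", "w") as f:
    f.create_dataset("embeddings", data=embeddings, compression="gzip")

# Read from HDF5: datasets are read lazily; [...] pulls the full array.
with h5py.File("model_weights.h5", "r") as f:
    restored = f["embeddings"][...]

print(np.allclose(embeddings, restored))  # True
```

For a TensorFlow model like skip-thoughts, the equivalent is to save only the variables with a checkpoint (as the tutorials below show) and rebuild the graph at load time, rather than serializing the whole in-memory object.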
Sources:
https://www.tensorflow.org/tutorials/keras/save_and_restore_models
https://geekyisawesome.blogspot.com/2018/06/savingloading-tensorflow-model-using.html