import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23; use the standalone joblib package
##Suppose your trained model is named MyTrainedModel
##This is how you save it to a file called MyTrainedModelFile.txt:
joblib.dump(MyTrainedModel, 'MyTrainedModelFile.txt')
##Later you can reload the model and use it:
Loaded_model = joblib.load('MyTrainedModelFile.txt')
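As a concrete round-trip check, here is a minimal sketch using the standalone joblib package (the model, dataset, and file name here are illustrative, not from the original answer):

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small model on a toy dataset (illustrative stand-in for MyTrainedModel)
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

joblib.dump(model, 'model.joblib')    # persist the fitted model to disk
loaded = joblib.load('model.joblib')  # restore it later

# The restored model makes identical predictions
assert (loaded.predict(X) == model.predict(X)).all()
```

The `.joblib` extension is only a convention; joblib does not care about the file name, as the `.txt` extension above shows.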
import pickle
import numpy as np  # needed for np.unique below
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=1000, n_classes=4, n_features=11,
                           n_informative=4, weights=[0.25, 0.25, 0.25, 0.25],
                           random_state=0)
x_batch1 = X[0:500]
y_batch1 = y[0:500]
x_batch2 = X[500:1000]  # note: X[500:999] would silently drop the last sample
y_batch2 = y[500:1000]
clf = MLPClassifier()
clf.partial_fit(x_batch1, y_batch1, classes=np.unique(y))  # classes must be passed on the first call to partial_fit
with open("MLP_classifier", 'wb') as f:
    pickle.dump(clf, f)
with open("MLP_classifier", 'rb') as f:
    restored_clf = pickle.load(f)
restored_clf.partial_fit(x_batch2, y_batch2)
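To confirm that the pickle round trip preserves the learned weights and that the restored model can keep learning, here is a minimal self-contained sketch (dataset parameters mirror the example above; the in-memory `pickle.dumps`/`loads` calls stand in for the file-based version):

```python
import pickle
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_classes=4, n_features=11,
                           n_informative=4, random_state=0)

clf = MLPClassifier(random_state=0)
clf.partial_fit(X[:500], y[:500], classes=np.unique(y))  # first batch

blob = pickle.dumps(clf)        # serialize the partially trained model
restored = pickle.loads(blob)   # deserialize it

# identical predictions before and after the round trip
assert (restored.predict(X) == clf.predict(X)).all()

# the restored model can continue incremental training on the next batch
restored.partial_fit(X[500:], y[500:])
```

Each `partial_fit` call performs a single pass over the given batch, so in practice you would loop over incoming batches (and possibly over epochs) rather than call it once per batch as shown here.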
You can try the approach below. A tutorial is here.

The partial_fit method of MLPClassifier handles this nicely, and I have written sample code for it above. If your data arrives in batches and training is an expensive operation for you, you can retrain the saved model on each new batch instead of training on the entire dataset every time a new batch arrives. Let me know if this is what you were looking for. Hope this helps!