I am running the following example:
https://keras.io/examples/nlp/text_classification_with_transformer/
I have created and trained a model as described there, and it works fine:
inputs = layers.Input(shape=(maxlen,))
embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
x = embedding_layer(inputs)
transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
x = transformer_block(x,training=True)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(20, activation="relu")(x)
x = layers.Dropout(0.1)(x)
outputs = layers.Dense(2, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
"""
## Train and Evaluate
"""
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(
x_train, y_train, batch_size=1024, epochs=1, validation_data=(x_val, y_val)
)
model.save('SPAM.h5')
How do I correctly save and load such a custom model in Keras?
I tried
best_model=tf.keras.models.load_model('SPAM.h5')
ValueError: Unknown layer: TokenAndPositionEmbedding
so the saved model apparently does not know about the custom layers. However, the following does not work either:
best_model=tf.keras.models.load_model('SPAM.h5',custom_objects={"TokenAndPositionEmbedding": TokenAndPositionEmbedding()})
TypeError: __init__() missing 3 required positional arguments:
'maxlen', 'vocab_size', and 'embed_dim'
Similarly, passing the class itself instead of an instance does not solve it:
best_model=tf.keras.models.load_model('SPAM.h5',
custom_objects={"TokenAndPositionEmbedding": TokenAndPositionEmbedding})
TypeError: __init__() got an unexpected keyword argument 'name'
best_model=tf.keras.models.load_model('SPAM.h5',
    {"TokenAndPositionEmbedding": TokenAndPositionEmbedding,
     'TransformerBlock': TransformerBlock,
     'MultiHeadSelfAttention': MultiHeadSelfAttention})
Based on this answer, you need to add a get_config method to each custom class (TokenAndPositionEmbedding and TransformerBlock):
TransformerBlock:
add get_config and change the constructor accordingly.
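The answer's original code block was not preserved here. As a sketch, assuming the `TransformerBlock` from the Keras example (constructor arguments `embed_dim`, `num_heads`, `ff_dim`, and a dropout `rate`), a serializable version could look like this — note it accepts `**kwargs` so Keras can pass `name` during deserialization (which was the cause of the `TypeError` above), and it uses the built-in `layers.MultiHeadAttention`, whereas older versions of the example define a custom `MultiHeadSelfAttention`:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


class TransformerBlock(layers.Layer):
    def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1, **kwargs):
        # **kwargs forwards 'name', 'dtype', etc. to the base Layer,
        # which Keras supplies when rebuilding the layer from a config.
        super().__init__(**kwargs)
        self.embed_dim = embed_dim
        self.num_heads = num_heads
        self.ff_dim = ff_dim
        self.rate = rate
        self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.ffn = keras.Sequential(
            [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim)]
        )
        self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
        self.dropout1 = layers.Dropout(rate)
        self.dropout2 = layers.Dropout(rate)

    def call(self, inputs, training=False):
        attn_output = self.att(inputs, inputs)
        attn_output = self.dropout1(attn_output, training=training)
        out1 = self.layernorm1(inputs + attn_output)
        ffn_output = self.ffn(out1)
        ffn_output = self.dropout2(ffn_output, training=training)
        return self.layernorm2(out1 + ffn_output)

    def get_config(self):
        # Store every constructor argument so the layer can be rebuilt
        # from its config when the saved model is loaded.
        config = super().get_config()
        config.update(
            {
                "embed_dim": self.embed_dim,
                "num_heads": self.num_heads,
                "ff_dim": self.ff_dim,
                "rate": self.rate,
            }
        )
        return config
```

With this in place, `TransformerBlock.from_config(block.get_config())` reconstructs an equivalent layer, which is exactly what `load_model` does internally.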
TokenAndPositionEmbedding:
similarly, add get_config to this class as well
and replace the constructor with one that stores its arguments.
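Again, the original code block is missing; a sketch following the Keras example (token embedding plus a learned positional embedding, constructor arguments `maxlen`, `vocab_size`, `embed_dim`) could be:

```python
import tensorflow as tf
from tensorflow.keras import layers


class TokenAndPositionEmbedding(layers.Layer):
    def __init__(self, maxlen, vocab_size, embed_dim, **kwargs):
        # **kwargs forwards 'name' etc. to Layer, so deserialization
        # does not fail with "unexpected keyword argument 'name'".
        super().__init__(**kwargs)
        self.maxlen = maxlen
        self.vocab_size = vocab_size
        self.embed_dim = embed_dim
        self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)

    def call(self, x):
        # Sum of token embeddings and position embeddings 0..maxlen-1.
        maxlen = tf.shape(x)[-1]
        positions = tf.range(start=0, limit=maxlen, delta=1)
        positions = self.pos_emb(positions)
        x = self.token_emb(x)
        return x + positions

    def get_config(self):
        # Report all constructor arguments for round-trip serialization.
        config = super().get_config()
        config.update(
            {
                "maxlen": self.maxlen,
                "vocab_size": self.vocab_size,
                "embed_dim": self.embed_dim,
            }
        )
        return config
```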
The linked answer explains why custom layers cannot be saved and loaded without this. When loading, simply pass the classes (not instances) as custom_objects:
best_model = tf.keras.models.load_model('SPAM.h5',
    custom_objects={"TokenAndPositionEmbedding": TokenAndPositionEmbedding,
                    "TransformerBlock": TransformerBlock})