Keras load_model for a custom model with custom layers (Transformer documentation example)

I am running the following example:

https://keras.io/examples/nlp/text_classification_with_transformer/

I have created and trained a model as described there, and it works fine:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# maxlen, vocab_size, embed_dim, num_heads and ff_dim are set as in the tutorial
inputs = layers.Input(shape=(maxlen,))
embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
x = embedding_layer(inputs)
transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
x = transformer_block(x, training=True)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(20, activation="relu")(x)
x = layers.Dropout(0.1)(x)
outputs = layers.Dense(2, activation="softmax")(x)

model = keras.Model(inputs=inputs, outputs=outputs)


"""
## Train and Evaluate
"""

model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(
    x_train, y_train, batch_size=1024, epochs=1, validation_data=(x_val, y_val)
)

model.save('SPAM.h5')

How do I correctly save and load such a custom model in Keras?

I tried

best_model = tf.keras.models.load_model('SPAM.h5')

ValueError: Unknown layer: TokenAndPositionEmbedding

so the loader does not recognize the custom layers. The following does not work either:

best_model = tf.keras.models.load_model('SPAM.h5',
    custom_objects={"TokenAndPositionEmbedding": TokenAndPositionEmbedding()})

TypeError: __init__() missing 3 required positional arguments:
'maxlen', 'vocab_size', and 'embed_dim'

Similarly, passing the class itself does not solve the problem:

best_model = tf.keras.models.load_model('SPAM.h5',
    custom_objects={"TokenAndPositionEmbedding": TokenAndPositionEmbedding})

TypeError: __init__() got an unexpected keyword argument 'name'



I also tried passing all of the custom classes:

best_model = tf.keras.models.load_model('SPAM.h5',
    {"TokenAndPositionEmbedding": TokenAndPositionEmbedding,
     "TransformerBlock": TransformerBlock,
     "MultiHeadSelfAttention": MultiHeadSelfAttention})

1 Answer

Based on this answer, you need to add the following method (get_config) to each of the custom classes (TokenAndPositionEmbedding and TransformerBlock).

TransformerBlock:

def get_config(self):
    config = super().get_config().copy()
    config.update({
        'embed_dim': self.embed_dim,
        'num_heads': self.num_heads,
        'ff_dim': self.ff_dim,
        'rate': self.rate
    })
    return config

and change the constructor to:

def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1, **kwargs):
    super(TransformerBlock, self).__init__(**kwargs)  # forward name and other base-Layer kwargs
    self.embed_dim = embed_dim
    self.num_heads = num_heads
    self.ff_dim = ff_dim
    self.rate = rate
    self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
    self.ffn = keras.Sequential(
        [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim),]
    )
    self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
    self.dropout1 = layers.Dropout(rate)
    self.dropout2 = layers.Dropout(rate)
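
The call method itself does not need to change. For reference, here is a sketch of it roughly as it appears in the tutorial (the variant built on layers.MultiHeadAttention, matching the constructor above):

def call(self, inputs, training):
    # self-attention over the sequence, then residual connection + layer norm
    attn_output = self.att(inputs, inputs)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(inputs + attn_output)
    # position-wise feed-forward network, then residual connection + layer norm
    ffn_output = self.ffn(out1)
    ffn_output = self.dropout2(ffn_output, training=training)
    return self.layernorm2(out1 + ffn_output)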

TokenAndPositionEmbedding:

Similarly, add this to the class:

def get_config(self):
    config = super().get_config().copy()
    config.update({
        'maxlen': self.maxlen,
        'vocab_size': self.vocab_size,
        'embed_dim': self.embed_dim
    })
    return config

and replace the constructor with:

def __init__(self, maxlen, vocab_size, embed_dim, **kwargs):
    super(TokenAndPositionEmbedding, self).__init__(**kwargs)  # forward name and other base-Layer kwargs
    self.maxlen = maxlen
    self.vocab_size = vocab_size
    self.embed_dim = embed_dim
    self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
    self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)
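
Again, call stays the same; a sketch of it from the tutorial, for reference:

def call(self, x):
    # embed the positions 0..maxlen-1 and add them to the token embeddings
    maxlen = tf.shape(x)[-1]
    positions = tf.range(start=0, limit=maxlen, delta=1)
    positions = self.pos_emb(positions)
    x = self.token_emb(x)
    return x + positions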

The reason custom layers cannot be saved and loaded without this is explained in the linked answer. When loading, simply do:

x = load_model('model.h5',
               custom_objects={"TransformerBlock": TransformerBlock,
                               "TokenAndPositionEmbedding": TokenAndPositionEmbedding})
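
If you prefer not to pass custom_objects on every load, Keras also provides tf.keras.utils.custom_object_scope. A minimal sketch, assuming TransformerBlock and TokenAndPositionEmbedding are defined (or imported) in the current session and using the SPAM.h5 file from the question:

import tensorflow as tf

with tf.keras.utils.custom_object_scope(
        {"TransformerBlock": TransformerBlock,
         "TokenAndPositionEmbedding": TokenAndPositionEmbedding}):
    best_model = tf.keras.models.load_model('SPAM.h5')

best_model.summary()  # the custom layers are resolved inside the scope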
