OSError: Can't load tokenizer

Posted 2024-09-29 19:32:51


I want to train an XLNet language model from scratch. First, I trained the tokenizer as follows:

from tokenizers import ByteLevelBPETokenizer

# Initialize a tokenizer
tokenizer = ByteLevelBPETokenizer()
# Customize training
tokenizer.train(files='data.txt', min_frequency=2, special_tokens=[  # default vocab size
    "<s>",
    "<pad>",
    "</s>",
    "<unk>",
    "<mask>",
])
tokenizer.save_model("tokenizer model")

Afterwards, the given directory contains two files:

merges.txt
vocab.json

I have defined the following configuration for the model:

from transformers import XLNetConfig, XLNetModel
config = XLNetConfig()
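
A bare XLNetConfig() is not yet a trainable model, and its default vocab_size will not necessarily match the tokenizer trained above. Below is a minimal sketch of how the two might be wired together, assuming the ByteLevelBPETokenizer from the first snippet is still in scope; XLNetLMHeadModel is just one possible choice of head for language-model pretraining:

from transformers import XLNetConfig, XLNetLMHeadModel

# Make the config's vocabulary size match the trained tokenizer
# (get_vocab_size() counts the learned BPE vocabulary plus special tokens).
config = XLNetConfig(vocab_size=tokenizer.get_vocab_size())

# A randomly initialized XLNet with a language-modeling head,
# ready to be trained from scratch.
model = XLNetLMHeadModel(config)
print(model.num_parameters())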

Now I want to recreate my tokenizer in transformers:

from transformers import XLNetTokenizerFast

tokenizer = XLNetTokenizerFast.from_pretrained("tokenizer model")

However, I get the following error:

File "dfgd.py", line 8, in <module>
    tokenizer = XLNetTokenizerFast.from_pretrained("tokenizer model")
  File "C:\Users\DSP\AppData\Roaming\Python\Python37\site-packages\transformers\tokenization_utils_base.py", line 1777, in from_pretrained
    raise EnvironmentError(msg)
OSError: Can't load tokenizer for 'tokenizer model'. Make sure that:

- 'tokenizer model' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'tokenizer model' is the correct path to a directory containing relevant tokenizer files

What should I do?


1 Answer

User
#1 · Posted 2024-09-29 19:32:51

Instead of

tokenizer = XLNetTokenizerFast.from_pretrained("tokenizer model")

you should write the following. The files you saved (vocab.json and merges.txt) belong to a ByteLevelBPETokenizer, not to XLNet's SentencePiece-based tokenizer, so XLNetTokenizerFast.from_pretrained cannot find the files it expects; reload the tokenizer with the same class you trained it with:

from tokenizers.implementations import ByteLevelBPETokenizer
tokenizer = ByteLevelBPETokenizer(
    "tokenizer model/vocab.json",
    "tokenizer model/merges.txt",
)
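
As a quick sanity check (using a made-up sample sentence), the reloaded tokenizer can be used directly:

# Encode a sample sentence to verify the vocab and merges were loaded correctly.
encoding = tokenizer.encode("This is a test sentence.")
print(encoding.tokens)  # byte-level BPE tokens
print(encoding.ids)     # corresponding vocabulary ids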
