How to remove person names from a corpus using Python

Published 2024-05-05 20:46:10


I have been searching for quite a while, and most of the material I found is about named entity recognition. I am running topic modeling, but my data contains far too many person names in the text.
Is there a Python library that contains (English) person names? Or, if not, what is a good way to remove person names from every document in a corpus? Here is a simple example:

texts=['Melissa\'s home was clean and spacious. I would love to visit again soon.','Kevin was nice and Kevin\'s home had a huge parking spaces.'] 

2 Answers

I am not sure whether this solution is efficient and robust, but it is easy to understand (at least to me):

import re

# load a list of known names (over 18,000) from a file
with open('names.txt', 'r') as f:
    NAMES = set(f.read().splitlines())

# your list of texts
texts=["Melissa's home was clean and spacious. I would love to visit again soon.",
"Kevin was nice and Kevin's home had a huge parking spaces."]

# join the texts into one string
texts = ' | '.join(texts)

# find all the words that look like names
pattern = r"(\b[A-Z][a-z]+('s)?\b)"
found_names = re.findall(pattern, texts)

# strip the possessive suffix and remove duplicates
found_names = set([name[0].replace("'s", "") for name in found_names])

# keep only the words that actually appear in the NAMES set
found_names = [name for name in found_names if name in NAMES]

# loop through the found names and remove each one (and its possessive form);
# the \b anchors stop a name from matching inside a longer word
for name in found_names:
    texts = re.sub(r"\b" + name + r"('s)?\b", "", texts)

# split the string back into the list of texts
texts = texts.split(' | ')

print(texts) 

Output:

[' home was clean and spacious. I would love to visit again soon.',
' was nice and  home had a huge parking spaces.']
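The same word-list idea can also be packaged into a single pre-compiled pattern applied per document, which avoids the join/split round-trip above (that round-trip breaks if a text happens to contain ' | '). This is only a sketch: `NAMES` here is a small stand-in for the full 18,000-name file, and `remove_names` is a helper name I made up:

```python
import re

# stand-in for the full name list loaded from names.txt
NAMES = {"Melissa", "Kevin"}

# one alternation pattern covering every name and its possessive form
name_pattern = re.compile(
    r"\b(?:" + "|".join(map(re.escape, sorted(NAMES))) + r")(?:'s)?\b"
)

def remove_names(text):
    # delete the names, then collapse the double spaces left behind
    return re.sub(r"\s{2,}", " ", name_pattern.sub("", text)).strip()

print(remove_names("Kevin was nice and Kevin's home had a huge parking space."))
# -> "was nice and home had a huge parking space."
```

`re.escape` keeps any punctuation inside a name (e.g. an apostrophe) from being read as regex syntax.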

The name list was obtained here: https://www.usna.edu/Users/cs/roche/courses/s15si335/proj1/files.php%3Ff=names.txt.html

I fully agree with @James_SO's suggestion to use smarter tools.

I suggest using a tokenizer that can recognize and distinguish proper nouns. spaCy is quite full-featured, and its default tagger does a good job at this.

Using a list of names as if they were stop words is risky. Let me illustrate:

import spacy
import pandas as pd
nlp = spacy.load("en_core_web_sm")
texts = ["Melissa's home was clean and spacious. I would love to visit again soon.",
         "Kevin was nice and Kevin's home had a huge parking spaces.",
         "Bill sold a work of art to Art and gave him a bill"]
tokenList = []
for i, sentence in enumerate(texts):
    doc = nlp(sentence)
    for token in doc:
        tokenList.append([i, token.text, token.lemma_, token.pos_, token.tag_, token.dep_])
tokenDF = pd.DataFrame(tokenList, columns=["i", "text", "lemma", "POS", "tag", "dep"]).set_index("i")

So the first two sentences are straightforward: spaCy tags the proper nouns as "PROPN".

[screenshot of the token DataFrame]

Now, the third sentence exposes the problem: many person names are also ordinary words. spaCy's default tagger is not perfect, but it does well on both sides of the task: it does not remove names when they are used as regular words (e.g., "a work of art", "gave him a bill"), and it does recognize them when they are used as names. (You can see it mishandles one mention of Art, the person.)

[screenshot of the token DataFrame]
