How to iterate over text stored in a dataframe to extract sentences and look up values in a loop?

Posted on 2024-05-21 13:34:21


I have text stored in a dataframe, where each cell contains several sentences. I have written a separate function that looks for certain keywords and values in a sentence, and I want to store those values in a different column of the same dataframe. My problem arises when I iterate over the rows of the dataframe and first tokenize each cell into sentences.

It works when I pass an explicit sentence to the function. The problem occurs when I try to tokenize the text into sentences inside the loop: I get empty results in rf["Nod_size"], whereas "2.9x1.7" and "2.5x1.3" are my expected results.

Here is the code I am using:

 import pandas as pd
 import numpy as np
 import nltk
 import re
 from nltk.tokenize import TweetTokenizer, sent_tokenize, word_tokenize

 rf = pd.DataFrame([{"Text": "CHEST CA lung. -Increased sizes of nodules in RLL. There is further increased size and solid component of part-solid nodule associated with internal bubbly lucency and pleural tagging at apicoposterior segment of the LUL (SE 3; IM 38-50), now measuring about 2.9x1.7 cm in greatest transaxial dimension (previously size 2.5x1.3 cm in 2015).", "Stage": "T2aN2M0"},
               {"Text": "CHEST CA lung. Post LL lobectomy. As compared to study obtained on 30/10/2018, -Top normal heart size. -Increased sizes of nodules in RLL.", "Stage": "T2aN2M0"}])

 nodule_keywords = ["nodules","nodule"]
 nodule_length_keyword = ["cm","mm", "centimeters", "milimeters"]

 def GetNodule(sentence):
     sentence = re.sub('-', ' ', sentence)
     token_words = nltk.word_tokenize(sentence)
     df = pd.DataFrame(token_words)
     df['check_nodkeywords'] = df[0].str.lower().isin(nodule_keywords)
     df['check_nod_len_keywords'] = df[0].str.lower().isin(nodule_length_keyword)
     check = np.any(df['check_nodkeywords']==True)
     check1 =np.any(df['check_nod_len_keywords']==True)
     if ((check==True)&(check1==True)):
          position = np.where(df['check_nod_len_keywords']==True)
          position = position[0]
          nodule_size = df[0].iloc[position-1]
          return nodule_size

 for sub_list in rf['Text']:
     sent = sent_tokenize(str(sub_list))
     for sub_sent_list in sent:
         result_calcified_nod = GetNodule(sub_sent_list)
         rf["Nod_size"] = result_calcified_nod 

Please help! I think this is a conceptual problem rather than a programming one.


1 Answer

The following code should do what you want. Note that your original loop assigned each result to the whole "Nod_size" column, so every row ended up holding only the last sentence's result; the result for each row must be written into that row's cell instead.

rf["Nod_size"] = ""
for i, text in enumerate(rf['Text']):
    temp = []
    for sentence in sent_tokenize(text):
        result_calcified_nod = GetNodule(sentence)
        temp.append(result_calcified_nod)
    # .at assigns the list into a single cell; chained indexing like
    # rf.loc[i]["Nod_size"] writes to a temporary copy and is silently lost
    rf.at[i, "Nod_size"] = temp
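If you cannot rely on NLTK being available, the same idea can be sketched without it. This is a minimal, hypothetical variant (not your original function): sentences are split with a rough regex instead of sent_tokenize, and for each sentence mentioning a nodule keyword it collects the token immediately preceding a unit keyword.

```python
import re
import pandas as pd

nodule_keywords = {"nodule", "nodules"}
unit_keywords = {"cm", "mm", "centimeters", "milimeters"}

def get_nodule_sizes(text):
    """Return the tokens that precede a unit keyword, for sentences
    that also mention a nodule keyword."""
    sizes = []
    # crude sentence split: a ., ! or ? followed by whitespace
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = sentence.replace("-", " ").split()
        lower = [w.lower() for w in words]
        if nodule_keywords & set(lower):
            sizes += [words[i - 1] for i, w in enumerate(lower)
                      if w in unit_keywords and i > 0]
    return sizes

rf = pd.DataFrame([
    {"Text": "There is a part-solid nodule at the LUL, now measuring "
             "about 2.9x1.7 cm (previously size 2.5x1.3 cm in 2015)."},
    {"Text": "Increased sizes of nodules in RLL."},
])
# .apply returns one list per row, so each cell holds that row's sizes
rf["Nod_size"] = rf["Text"].apply(get_nodule_sizes)
```

Using `.apply` on the "Text" column sidesteps the row-index bookkeeping entirely, since pandas aligns the returned values with the rows for you.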
