Error "power iteration failed to converge within 100 iterations" when trying to summarize a text document with python networkx

Posted 2024-10-05 14:29:40


When I try to summarize a text document with python networkx, I get a PowerIterationFailedConvergence: (PowerIterationFailedConvergence(...), "power iteration failed to converge within 100 iterations"), as shown in the code below. The error is raised at the line scores = nx.pagerank(sentence_similarity_graph).

import re

import networkx as nx
import numpy as np
from nltk.cluster.util import cosine_distance
from nltk.corpus import stopwords


def read_article(file_name):
    file = open(file_name, "r", encoding="utf8")
    filedata = file.readlines()
    file.close()
    text = ""
    for s in filedata:
        text = text + s.replace("\n", "")
        text = re.sub(' +', ' ', text)  # collapse repeated spaces
        text = re.sub('—', ' ', text)

    article = text.split(". ")
    sentences = []
    for sentence in article:
        sentences.append(sentence.replace("[^a-zA-Z]", "").split(" "))
    sentences.pop()
    # drop immediately repeated words (case-insensitive)
    new_sent = []
    for lst in sentences:
        newlst = []
        for i in range(len(lst)):
            if i == 0 or lst[i].lower() != lst[i - 1].lower():
                newlst.append(lst[i])
        new_sent.append(newlst)
    return new_sent
def sentence_similarity(sent1, sent2, stopwords=None):
    if stopwords is None:
        stopwords = []
 
    sent1 = [w.lower() for w in sent1]
    sent2 = [w.lower() for w in sent2]
 
    all_words = list(set(sent1 + sent2))
 
    vector1 = [0] * len(all_words)
    vector2 = [0] * len(all_words)
 
    # build the vector for the first sentence
    for w in sent1:
        if w in stopwords:
            continue
        vector1[all_words.index(w)] += 1
 
    # build the vector for the second sentence
    for w in sent2:
        if w in stopwords:
            continue
        vector2[all_words.index(w)] += 1
 
    return 1 - cosine_distance(vector1, vector2)
def build_similarity_matrix(sentences, stop_words):
    # Create an empty similarity matrix
    similarity_matrix = np.zeros((len(sentences), len(sentences)))
 
    for idx1 in range(len(sentences)):
        for idx2 in range(len(sentences)):
            if idx1 == idx2: #ignore if both are same sentences
                continue 
            similarity_matrix[idx1][idx2] = sentence_similarity(sentences[idx1], sentences[idx2], stop_words)

    return similarity_matrix
stop_words = stopwords.words('english')
summarize_text = []

# Step 1 - Read the text and split it into sentences
new_sent = read_article("C:\\Users\\Documents\\fedPressConference_0620.txt")

# Step 2 - Generate the similarity matrix across sentences
sentence_similarity_matrix = build_similarity_matrix(new_sent, stop_words)

# Step 3 - Rank sentences in the similarity matrix
sentence_similarity_graph = nx.from_numpy_array(sentence_similarity_matrix)
scores = nx.pagerank(sentence_similarity_graph)

# Step 4 - Sort the ranks and pick the top sentences
ranked_sentence = sorted(((scores[i], s) for i, s in enumerate(new_sent)), reverse=True)
print("Indexes of top ranked_sentence order are ", ranked_sentence)

for i in range(min(10, len(ranked_sentence))):
    summarize_text.append(" ".join(ranked_sentence[i][1]))

# Step 5 - Of course, output the summarized text
print("Summarize Text: \n", ". ".join(summarize_text))


1 Answer

Posted 2024-10-05 14:29:40

Maybe you have already solved this by now.

The problem is that the vectors you are using are too long. They are built over the entire vocabulary, which may be too large for the model to converge within 100 iterations (the default for pagerank).

You can either reduce the size of the vocabulary (did you check that the stop words are actually being removed?) or apply some other technique, such as dropping the least frequent words or using TF-IDF weighting; see the sketch below.
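For the TF-IDF route, a minimal sketch could look like the following, assuming scikit-learn is available; build_similarity_matrix_tfidf is a hypothetical drop-in replacement for the build_similarity_matrix function above, not part of the original code:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_similarity_matrix_tfidf(sentences, stop_words):
    # join the tokenized sentences back into strings for the vectorizer
    docs = [" ".join(s) for s in sentences]
    # TF-IDF down-weights very common words, shortening the effective vocabulary
    tfidf = TfidfVectorizer(stop_words=stop_words).fit_transform(docs)
    similarity_matrix = cosine_similarity(tfidf)
    np.fill_diagonal(similarity_matrix, 0.0)  # ignore self-similarity
    return similarity_matrix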

In my case I was facing the same problem, but with GloVe word embeddings. With 300 dimensions I could not get convergence, and the issue was easily solved by switching to the 100-dimension model.
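For reference, building sentence vectors from a smaller embedding model might look roughly like this; it is only a sketch and assumes gensim and its "glove-wiki-gigaword-100" download, neither of which appears in the original code:

import numpy as np
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")  # 100-dimensional GloVe vectors

def sentence_vector(words):
    # average the embeddings of the words present in the GloVe vocabulary
    vecs = [glove[w.lower()] for w in words if w.lower() in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(glove.vector_size)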

Another thing you can try is to increase the max_iter parameter when calling nx.pagerank:

nx.pagerank(nx_graph, max_iter=600) # Or any number that will work for you.

The default value is 100 iterations.
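If you prefer not to hard-code a large value, one option (a sketch, not from the original answer; robust_pagerank is a hypothetical helper) is to catch the exception and retry with a higher iteration limit and a looser tolerance:

import networkx as nx

def robust_pagerank(graph, max_iter=100, tol=1e-6):
    # retry with more iterations and a looser tolerance if power iteration fails
    try:
        return nx.pagerank(graph, max_iter=max_iter, tol=tol)
    except nx.PowerIterationFailedConvergence:
        return nx.pagerank(graph, max_iter=max_iter * 6, tol=tol * 10)

scores = robust_pagerank(sentence_similarity_graph)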
