Finding the most similar sentences among all sentences in Python

Published 2024-09-25 10:17:48


Suggestions, reference links, or code would be greatly appreciated.

I have a dataset of more than 1500 rows. Each row contains one sentence. I am trying to find the best way to identify the most similar sentences among all of them.

What I have tried

  1. I tried the K-means algorithm, which groups similar sentences together. But I found a drawback: I have to pass K to create the clusters, and it is hard to guess. I tried the elbow method to estimate the number of clusters, but grouping everything into clusters is not enough. With this approach I end up grouping all of the data, whereas what I am looking for is that data with a similarity above 0.90 should be returned together with its IDs.

  2. I tried cosine similarity, using TfidfVectorizer to create a matrix and then passing it to cosine_similarity. Even this approach did not work properly.

What I am looking for

I want an approach where I can pass a threshold, for example 0.90, and the result should return, across all rows, the data that is more than 0.90 similar to each other.

Data Sample
ID    |   DESCRIPTION
-----------------------------
10    | Cancel ASN WMS Cancel ASN   
11    | MAXPREDO Validation is corect
12    | Move to QC  
13    | Cancel ASN WMS Cancel ASN   
14    | MAXPREDO Validation is right
15    | Verify files are sent every hours for this interface from Optima
16    | MAXPREDO Validation are correct
17    | Move to QC  
18    | Verify files are not sent

Expected result

The data above is more than 0.90 similar to each other, so it should be returned with its IDs:

ID    |   DESCRIPTION
-----------------------------
10    | Cancel ASN WMS Cancel ASN
13    | Cancel ASN WMS Cancel ASN
11    | MAXPREDO Validation is corect  # even spelling is not correct
14    | MAXPREDO Validation is right
16    | MAXPREDO Validation are correct
12    | Move to QC  
17    | Move to QC  

3 Answers

Why didn't cosine similarity with the TfidfVectorizer work for you?

I tried it, and it works with the following code:

import pandas as pd

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

df = pd.DataFrame({
    "ID": [10, 11, 12, 13, 14, 15, 16, 17, 18],
    "DESCRIPTION": [
        "Cancel ASN WMS Cancel ASN",
        "MAXPREDO Validation is corect",
        "Move to QC",
        "Cancel ASN WMS Cancel ASN",
        "MAXPREDO Validation is right",
        "Verify files are sent every hours for this interface from Optima",
        "MAXPREDO Validation are correct",
        "Move to QC",
        "Verify files are not sent",
    ],
})

corpus = list(df["DESCRIPTION"].values)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)

threshold = 0.4

for x in range(X.shape[0]):
    for y in range(x + 1, X.shape[0]):
        # compare each pair of sentences only once (x < y)
        similarity = cosine_similarity(X[x], X[y])
        if similarity > threshold:
            print(df["ID"][x], ":", corpus[x])
            print(df["ID"][y], ":", corpus[y])
            print("Cosine similarity:", similarity)
            print()

The threshold can also be adjusted, but a threshold of 0.9 will not produce the desired result.

The output with a threshold of 0.4 is:

10 : Cancel ASN WMS Cancel ASN
13 : Cancel ASN WMS Cancel ASN
Cosine similarity: [[1.]]

11 : MAXPREDO Validation is corect
14 : MAXPREDO Validation is right
Cosine similarity: [[0.64183024]]

12 : Move to QC
17 : Move to QC
Cosine similarity: [[1.]]

15 : Verify files are sent every hours for this interface from Optima
18 : Verify files are not sent
Cosine similarity: [[0.44897995]]

With a threshold of 0.39, all expected sentence pairs appear in the output, but one additional pair, with indices [15, 18], is found as well:

10 : Cancel ASN WMS Cancel ASN
13 : Cancel ASN WMS Cancel ASN
Cosine similarity: [[1.]]

11 : MAXPREDO Validation is corect
14 : MAXPREDO Validation is right
Cosine similarity: [[0.64183024]]

11 : MAXPREDO Validation is corect
16 : MAXPREDO Validation are correct
Cosine similarity: [[0.39895808]]

12 : Move to QC
17 : Move to QC
Cosine similarity: [[1.]]

14 : MAXPREDO Validation is right
16 : MAXPREDO Validation are correct
Cosine similarity: [[0.39895808]]

15 : Verify files are sent every hours for this interface from Optima
18 : Verify files are not sent
Cosine similarity: [[0.44897995]]
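The pairwise loop above calls cosine_similarity once per pair, which gets slow for 1500+ rows. As a sketch of a vectorized variant (same data and threshold as above, but computing the full similarity matrix in one call), one could do:

```python
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

df = pd.DataFrame({
    "ID": [10, 11, 12, 13, 14, 15, 16, 17, 18],
    "DESCRIPTION": [
        "Cancel ASN WMS Cancel ASN",
        "MAXPREDO Validation is corect",
        "Move to QC",
        "Cancel ASN WMS Cancel ASN",
        "MAXPREDO Validation is right",
        "Verify files are sent every hours for this interface from Optima",
        "MAXPREDO Validation are correct",
        "Move to QC",
        "Verify files are not sent",
    ],
})

X = TfidfVectorizer().fit_transform(df["DESCRIPTION"])
sim = cosine_similarity(X)  # dense (n, n) similarity matrix, one call

threshold = 0.39
# upper-triangle indices (x < y) whose similarity exceeds the threshold,
# so each pair is reported only once
xs, ys = np.where(np.triu(sim, k=1) > threshold)
pairs = [(df["ID"][x], df["ID"][y], sim[x, y]) for x, y in zip(xs, ys)]
for a, b, s in pairs:
    print(a, b, round(s, 4))
```

With threshold 0.39 this finds the same six pairs as the loop version.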

One possible approach is to use word embeddings to create vector representations of the sentences, for example by using pre-trained word embeddings and letting an RNN layer build a sentence vector representation in which the word embeddings of each sentence are combined. You then have a vector on which you can compute the distance between two sentences. But you have to decide which threshold to set for a sentence to count as similar, since the scale of word embeddings is not fixed.
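A minimal sketch of the combining step, using hand-made toy word vectors instead of pre-trained embeddings (the vectors and the simple averaging in place of an RNN are illustrative assumptions, not a real model):

```python
import numpy as np

# Toy word vectors for illustration only; a real setup would load
# pre-trained embeddings such as GloVe or fastText instead.
word_vectors = {
    "move":   np.array([1.0, 0.0, 0.0]),
    "to":     np.array([0.0, 1.0, 0.0]),
    "qc":     np.array([0.0, 0.0, 1.0]),
    "cancel": np.array([1.0, 1.0, 0.0]),
    "asn":    np.array([0.0, 1.0, 1.0]),
}

def sentence_vector(sentence):
    """Combine the sentence's word embeddings by averaging them."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = sentence_vector("Move to QC")
v2 = sentence_vector("Move to QC")
v3 = sentence_vector("Cancel ASN")
print(cosine(v1, v2))  # identical sentences -> 1.0
print(cosine(v1, v3))  # different sentences -> lower score
```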

Update

I did some experiments. In my opinion this is a viable approach for such a task; however, you may want to check for yourself how well it works in your case. I created an example in my git repository.

In addition, the Word Mover's Distance algorithm can also be used for this task. You can find more information on this topic in this Medium article.

You can use this Python 3 library to compute sentence similarity: https://github.com/UKPLab/sentence-transformers

Code example from https://www.sbert.net/docs/usage/semantic_textual_similarity.html:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('paraphrase-MiniLM-L12-v2')

# Two lists of sentences
sentences1 = ['The cat sits outside',
              'A man is playing guitar',
              'The new movie is awesome']

sentences2 = ['The dog plays in the garden',
              'A woman watches TV',
              'The new movie is so great']

# Compute embeddings for both lists
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)

# Compute cosine similarities
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)

# Output the pairs with their score
for i in range(len(sentences1)):
    print("{} \t\t {} \t\t Score: {:.4f}".format(sentences1[i], sentences2[i], cosine_scores[i][i]))

The library contains state-of-the-art sentence embedding models.

See https://stackoverflow.com/a/68728666/395857 for performing sentence clustering.
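Whatever embedding you choose, the pairs above a threshold can be merged into groups so that each group is returned together, as the question asks. A lightweight sketch of that grouping step (using TF-IDF vectors here only for self-containment; the linked answer uses sentence embeddings):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Cancel ASN WMS Cancel ASN",
    "MAXPREDO Validation is corect",
    "Move to QC",
    "Cancel ASN WMS Cancel ASN",
    "Move to QC",
]

X = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(X)
threshold = 0.9

# Union-find: sentences whose similarity exceeds the threshold share a group
parent = list(range(len(sentences)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

for x in range(len(sentences)):
    for y in range(x + 1, len(sentences)):
        if sim[x, y] > threshold:
            parent[find(x)] = find(y)

groups = {}
for i, s in enumerate(sentences):
    groups.setdefault(find(i), []).append(s)
for members in groups.values():
    print(members)
```

Here only the two identical pairs clear the 0.9 threshold, so three groups come out: the two "Cancel ASN" rows, the two "Move to QC" rows, and the misspelled sentence on its own.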
