RecursionError: maximum recursion depth exceeded

Published 2024-09-27 07:30:25


from __future__ import print_function
import os, codecs, nltk.stem

english_stemmer = nltk.stem.SnowballStemmer('english')
for root, dirs, files in os.walk("/Users/Documents/corpus/source-document/test1"):
    for file in files:
        if file.endswith(".txt"):
            posts = codecs.open(os.path.join(root, file), "r", "utf-8-sig")

from sklearn.feature_extraction.text import CountVectorizer

class StemmedCountVectorizer(CountVectorizer):
    def build_analyzer(self):
        analyzer = super(StemmedCountVectorizer, self.build_analyzer())  # raises the RecursionError
        return lambda doc: (english_stemmer.stem(w) for w in analyzer(doc))

vectorizer = StemmedCountVectorizer(min_df=1, stop_words='english')
X_train = vectorizer.fit_transform(posts)
num_samples, num_features = X_train.shape
print("#samples: %d, #features: %d" % (num_samples, num_features))  # #samples: 5, #features: 25
print(vectorizer.get_feature_names())

When I run the above code on all the text files in that directory, it raises the following error: RecursionError: maximum recursion depth exceeded.

I tried sys.setrecursionlimit, but to no avail; when I set it to a value as large as 20000, the kernel crashed.


1 Answer

Answered 2024-09-27 07:30:25

Your error is in analyzer = super(StemmedCountVectorizer, self.build_analyzer()). Here you call build_analyzer before the super call is applied, which causes an infinite recursion loop. Change it to analyzer = super(StemmedCountVectorizer, self).build_analyzer().
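The difference can be shown without sklearn or nltk at all; the sketch below (class names Base, Broken, and Fixed are illustrative, not from the original post) uses a plain class hierarchy to show why the misplaced parenthesis recurses while the corrected form does not:

```python
class Base:
    def build_analyzer(self):
        # stands in for CountVectorizer.build_analyzer
        return lambda doc: doc.split()

class Broken(Base):
    def build_analyzer(self):
        # BUG: self.build_analyzer() is evaluated as super()'s argument,
        # which dispatches straight back to Broken.build_analyzer,
        # so the method calls itself forever
        analyzer = super(Broken, self.build_analyzer())
        return lambda doc: (w.lower() for w in analyzer(doc))

class Fixed(Base):
    def build_analyzer(self):
        # Correct: bind super() to self first, THEN call the parent's method
        analyzer = super(Fixed, self).build_analyzer()
        return lambda doc: (w.lower() for w in analyzer(doc))

try:
    Broken().build_analyzer()
except RecursionError as e:
    print("Broken:", type(e).__name__)   # Broken: RecursionError

print("Fixed:", list(Fixed().build_analyzer()("Hello World")))  # Fixed: ['hello', 'world']
```

In super(StemmedCountVectorizer, self).build_analyzer(), the second argument must be the instance itself so that Python looks the method up on the parent class; moving the call inside the parentheses turns it into an unconditional self-call.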
