Scikit: removing features that are present in all documents

Published 2024-06-28 20:23:59


I am working on text classification. I have about 32K documents (spam and ham).

import numpy as np
import pandas as pd
import sklearn.datasets as dataset
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import BernoulliNB
from sklearn.preprocessing import LabelEncoder
import re
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from sklearn.linear_model import SGDClassifier
from BeautifulSoup import BeautifulSoup
from sklearn.feature_extraction import text 
from sklearn import cross_validation
from sklearn import svm
from sklearn.grid_search import GridSearchCV
from sklearn.feature_selection import VarianceThreshold

# Now load files from spam and ham
data = dataset.load_files("/home/voila/Downloads/enron1/")
xData = data.data
yData = data.target
print data.target_names


countVector  = CountVectorizer(decode_error='ignore' , stop_words = 'english')
countmatrix = countVector.fit_transform(xData)

countmatrix will be the matrix where countmatrix[i][j] gives the count of word j in document i.

Now I want to remove all features (words) for which countmatrix[i][j] > 1 in more than 80% of the documents (meaning the word is too common).

How can I do this?

Thanks


Tags: text, from, import, data, as, load, files, sklearn
3 Answers

You can set max_df to a value smaller than 1, see the docs.
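For reference, a minimal sketch of that suggestion, reusing the vectorizer settings from the question. Note that max_df works on document frequency (the fraction of documents a term appears in at all), which is close to, but not exactly, the asker's "count > 1 in 80% of documents" criterion:

from sklearn.feature_extraction.text import CountVectorizer

# ignore terms that occur in more than 80% of the documents
countVector = CountVectorizer(decode_error='ignore', stop_words='english',
                              max_df=0.8)
countmatrix = countVector.fit_transform(xData)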

I think you can go through the matrix column by column:

def remove_feature(countmatrix):
    # countmatrix is expected to be a dense 2-D array (documents x words)
    remove_index = []
    for index in range(countmatrix.shape[1]):
        # collect every column whose maximum count over all documents is > 1
        if countmatrix[:, index].max() > 1:
            remove_index.append(index)

    return np.delete(countmatrix, remove_index, 1)
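The countmatrix produced by fit_transform above is a scipy sparse matrix, while np.delete only works on dense arrays, so a dense copy is assumed in this rough usage sketch (the variable names are illustrative):

# densify the sparse count matrix before filtering
dense_counts = countmatrix.toarray()
reduced_counts = remove_feature(dense_counts)
print(reduced_counts.shape)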

Try this:

goodwords = ((countmatrix > 1).mean(axis=0) <= 0.8).nonzero()[0]

It first computes a boolean matrix that is True wherever countmatrix > 1 and takes its column-wise mean. Wherever that mean is at most 0.8 (80%), the corresponding column index is returned by nonzero().

So goodwords will contain the indices of all words that are not too frequent. Now you can simply reduce the matrix:

countmatrix = countmatrix[:, goodwords]
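If countmatrix is the sparse matrix returned by fit_transform, .mean(axis=0) yields a 1×n matrix rather than a flat array, so a small adaptation along these lines (a sketch, assuming the variable names from the question) keeps the indexing straightforward:

import numpy as np

# fraction of documents in which each word occurs more than once
too_common = np.asarray((countmatrix > 1).mean(axis=0)).ravel()

# indices of the words that are not too frequent
goodwords = np.nonzero(too_common <= 0.8)[0]

# keep only those columns
countmatrix = countmatrix[:, goodwords]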
