<p>There is no such option in <code>RandomForestClassifier</code>, but the random forest algorithm is just an ensemble of decision trees, where each tree considers only a subset of all possible features and is trained on a bootstrap subsample of the training data.</p>
<p>So it isn't too difficult to create this manually ourselves for trees that are forced to use a particular set of features. I have written a class below to do this. It does not perform robust input validation or anything like that, but you can refer to the source of sklearn's random forest <code>fit</code> function for that. This is meant to give you a sense of how to build it yourself:</p>
<blockquote>
<p>FixedFeatureRFC.py</p>
</blockquote>
<pre><code>import numpy as np
from sklearn.tree import DecisionTreeClassifier

class FixedFeatureRFC:
    def __init__(self, n_estimators=10, random_state=None):
        self.n_estimators = n_estimators
        # RandomState(None) seeds from system entropy, so this covers
        # both the seeded and the unseeded case
        self.random_state = np.random.RandomState(random_state)

    def fit(self, X, y, feats_fixed=None, max_features=None, bootstrap_frac=0.8):
        """
        feats_fixed: indices of features (columns of X) to be
                     always used to train each estimator
        max_features: number of features that each estimator will use,
                      including the fixed features.
        bootstrap_frac: size of bootstrap sample that each estimator will use.
        """
        self.estimators = []
        self.feats_used = []
        self.n_classes = np.unique(y).shape[0]
        if feats_fixed is None:
            feats_fixed = []
        if max_features is None:
            max_features = X.shape[1]
        n_samples = X.shape[0]
        n_bs = int(bootstrap_frac * n_samples)  # bootstrap sample size
        feats_fixed = list(feats_fixed)
        feats_all = range(X.shape[1])
        # how many features each tree draws at random beyond the fixed ones
        random_choice_size = max_features - len(feats_fixed)
        feats_choosable = set(feats_all).difference(set(feats_fixed))
        feats_choosable = np.array(list(feats_choosable))
        for i in range(self.n_estimators):
            chosen = self.random_state.choice(feats_choosable,
                                              size=random_choice_size,
                                              replace=False)
            feats = feats_fixed + list(chosen)
            self.feats_used.append(feats)
            # draw row indices with replacement for the bootstrap sample
            bs_sample = self.random_state.choice(n_samples,
                                                 size=n_bs,
                                                 replace=True)
            dtc = DecisionTreeClassifier(random_state=self.random_state)
            dtc.fit(X[bs_sample][:, feats], y[bs_sample])
            self.estimators.append(dtc)

    def predict_proba(self, X):
        out = np.zeros((X.shape[0], self.n_classes))
        for i in range(self.n_estimators):
            # each tree only sees the feature subset it was trained on
            out += self.estimators[i].predict_proba(X[:, self.feats_used[i]])
        return out / self.n_estimators

    def predict(self, X):
        return self.predict_proba(X).argmax(axis=1)

    def score(self, X, y):
        return (self.predict(X) == y).mean()
</code></pre>
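<p>Note that <code>predict_proba</code> combines the trees by averaging their per-tree class probabilities (soft voting), which is the same aggregation scheme sklearn's <code>RandomForestClassifier</code> uses internally.</p>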
<p>Here is a test script to check whether the class above works as expected:</p>
<blockquote>
<p>test.py</p>
</blockquote>
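<p>A minimal sketch of such a test, assuming sklearn's breast cancer dataset (which has 30 features, matching the output below); the fixed feature indices, <code>max_features</code>, the train/test split, and the number of estimators here are illustrative choices:</p>
<pre><code>import numpy as np
from sklearn.datasets import load_breast_cancer

from FixedFeatureRFC import FixedFeatureRFC

data = load_breast_cancer()
X, y = data.data, data.target
print("n_features =", X.shape[1])

# hold out a test set (illustrative 80/20 split)
rs = np.random.RandomState(0)
perm = rs.permutation(X.shape[0])
n_train = int(0.8 * X.shape[0])
train, test = perm[:n_train], perm[n_train:]

feats_fixed = [0, 3]   # features every tree must use (arbitrary choice)
max_features = 10      # total features per tree, fixed ones included

rfc = FixedFeatureRFC(n_estimators=100, random_state=0)
rfc.fit(X[train], y[train],
        feats_fixed=feats_fixed,
        max_features=max_features)

# every tree must use all fixed features and exactly max_features in total
for feats in rfc.feats_used:
    assert len(feats) == max_features
    for f in feats_fixed:
        assert f in feats

print(rfc.score(X[test], y[test]))
</code></pre>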
<p>The output is:</p>
<pre><code>n_features = 30
0.983739837398
</code></pre>
<p>No assertion fails, which indicates that the features we chose to fix were used in every random feature subsample and that each feature subsample had the required <code>max_features</code> size. The high accuracy on the held-out data suggests that the classifier is working properly.</p>