I have some working code that correctly loads data from a CSV file into a PyBrain dataset:
    import csv
    from pybrain.datasets import SupervisedDataSet

    def old_get_dataset():
        reader = csv.reader(open('test.csv', 'rb'))
        header = reader.next()
        fields = dict(zip(header, range(len(header))))
        print header
        # assume last field in csv is single target variable
        # and all other fields are input variables
        dataset = SupervisedDataSet(len(fields) - 1, 1)
        for row in reader:
            dataset.addSample(row[:-1], row[-1])
        return dataset
Now I am trying to rewrite this code to use numpy's loadtxt function. I believe addSample can accept numpy arrays, so I shouldn't have to add the data one row at a time.

Given that the loaded numpy array is m x n, how do I pass the m x (n-1) block of input columns as the first argument and the last column as the second argument? This is what I tried:
    import numpy
    from pybrain.datasets import SupervisedDataSet

    def get_dataset():
        array = numpy.loadtxt('test.csv', delimiter=',', skiprows=1)
        # assume last field in csv is single target variable
        # and all other fields are input variables
        number_of_columns = array.shape[1]
        dataset = SupervisedDataSet(number_of_columns - 1, 1)
        dataset.addSample(array[:,:-1], array[:,-1])
        return dataset
But I get the following error:
    Traceback (most recent call last):
      File "C:\test.py", line 109, in <module>
        (d, n, t) = main()
      File "C:\test.py", line 87, in main
        ds = get_dataset()
      File "C:\test.py", line 45, in get_dataset
        dataset.addSample(array[:,:-1], array[:,-1])
      File "C:\Python27\lib\site-packages\pybrain\datasets\supervised.py", line 45, in addSample
        self.appendLinked(inp, target)
      File "C:\Python27\lib\site-packages\pybrain\datasets\dataset.py", line 215, in appendLinked
        self._appendUnlinked(l, args[i])
      File "C:\Python27\lib\site-packages\pybrain\datasets\dataset.py", line 197, in _appendUnlinked
        self.data[label][self.endmarker[label], :] = row
    ValueError: output operand requires a reduction, but reduction is not enabled
How can I fix this?
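The traceback hints at what goes wrong: `addSample` appends a single sample, so PyBrain tries to assign the entire 2-D array into one preallocated row of its internal buffer, and the assignment cannot broadcast. A minimal sketch of that shape mismatch in plain numpy (the buffer name `buf` and its sizes are illustrative, not PyBrain's actual internals; newer numpy raises a "could not broadcast" ValueError rather than the "reduction" message above):

```python
import numpy as np

# Illustrative stand-in for PyBrain's preallocated per-field buffer,
# which addSample fills one row slot at a time.
buf = np.zeros((100, 3))
samples = np.array([[1., 2., 3.],
                    [4., 5., 6.]])

# Writing a single sample into one row slot works:
buf[0, :] = samples[0]

# Writing the whole 2-D batch into one row slot cannot broadcast:
try:
    buf[1, :] = samples
    raised = False
except ValueError:
    raised = True
```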
After a lot of experimentation and re-reading the dataset documentation, the following commands ran without errors:

I still need to double-check that they do the right thing. I wrote a function to do this.
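For reference, here is a sketch of a whole-array approach, assuming PyBrain's `DataSet.setField(label, array)` method, which replaces an entire field at once instead of appending row by row (this is my reading of the dataset documentation, not code from the original post). The key detail is that the `'target'` field must stay 2-D, hence the `-1:` slice:

```python
import numpy as np

array = np.array([[1., 2., 10.],
                  [3., 4., 20.],
                  [5., 6., 30.]])

inputs = array[:, :-1]    # m x (n-1) block of input columns
targets = array[:, -1:]   # last column kept 2-D: shape (m, 1)

# With those slices, the PyBrain side would look like (assumed API):
#
#     ds = SupervisedDataSet(array.shape[1] - 1, 1)
#     ds.setField('input', inputs)
#     ds.setField('target', targets)
```

Note that `array[:, -1]` (without the colon) gives a 1-D array of shape (m,), which would not match the m x 1 target field.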