I need to process some files in parallel. I am using a Pool, but I am having trouble saving the files the pool processes. The code is below:
... All imports...
def extract(text_lines):
    line_tr01 = []
    line_tr02 = []
    line_tr03 = []
    line_tr04 = []
    for line in text_lines:
        treatment01 = treatment_a(line, args)
        line_tr01.append(treatment01)
        treatment02 = treatment_b(line, args)
        line_tr02.append(treatment02)
        treatment03 = treatment_c(line, args)
        line_tr03.append(treatment03)
        treatment04 = treatment_d(line, args)
        line_tr04.append(treatment04)
for file in folder:
    text_lines = read_file_into_list(file_path)
    chunk_size = len(text_lines) / 6
    divided = []
    divided.append(text_lines[0:chunk_size])
    divided.append(text_lines[chunk_size:2 * chunk_size])
    divided.append(text_lines[2 * chunk_size:3 * chunk_size])
    divided.append(text_lines[3 * chunk_size:4 * chunk_size])
    divided.append(text_lines[4 * chunk_size:5 * chunk_size])
    divided.append(text_lines[5 * chunk_size:6 * chunk_size])
    lines = []
    p = Pool(6)
    lines.extend(p.map(extract(text_lines), divided))
    p.close()
    p.join()
    p.terminate()
    line_tr01 = lines[0]
    with open(pkl_filename, 'wb') as f:
        pickle.dump(line_tr01, f)
    line_tr02 = lines[1]
    with open(pkl_filename, 'wb') as f:
        pickle.dump(line_tr02, f)
    line_tr03 = lines[2]
    with open(pkl_filename, 'wb') as f:
        pickle.dump(line_tr03, f)
    line_tr04 = lines[3]
    with open(pkl_filename, 'wb') as f:
        pickle.dump(line_tr04, f)
Any clue as to how I can stop overwriting the file? Any help is welcome. Thanks in advance.
So the problem is that when you break things out into a pool, you no longer have the shared global namespace you are currently (ab)using. So let's rewrite it to pass the information around properly.
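The rewritten code is not shown in the original answer, so here is a minimal sketch of what it could look like. The `treatment_a`..`treatment_d` bodies below are hypothetical stand-ins (the real ones are not given in the question); the key points are that `extract` builds a local `treatments` dict and returns it, and that `Pool.map` is passed the function itself rather than the result of calling it:

```python
from multiprocessing import Pool

# Placeholder treatments -- stand-ins for the real
# treatment_a..treatment_d, which the question does not show.
def treatment_a(line): return line.upper()
def treatment_b(line): return line.lower()
def treatment_c(line): return len(line)
def treatment_d(line): return line[::-1]

def extract(text_lines):
    # Build the results locally and return them; a child process
    # cannot mutate the parent's globals, so everything must come
    # back through the return value of Pool.map.
    treatments = {'tr01': [], 'tr02': [], 'tr03': [], 'tr04': []}
    for line in text_lines:
        treatments['tr01'].append(treatment_a(line))
        treatments['tr02'].append(treatment_b(line))
        treatments['tr03'].append(treatment_c(line))
        treatments['tr04'].append(treatment_d(line))
    return treatments

if __name__ == '__main__':
    text_lines = [f'line {i}' for i in range(12)]
    n = 6
    chunk = -(-len(text_lines) // n)   # ceiling division so no line is dropped
    divided = [text_lines[i:i + chunk] for i in range(0, len(text_lines), chunk)]
    with Pool(n) as p:
        # Pass the function itself, not extract(text_lines):
        # Pool.map calls extract once per chunk.
        results = p.map(extract, divided)
    # Merge the per-chunk dicts back into one.
    treatments = {k: [] for k in ('tr01', 'tr02', 'tr03', 'tr04')}
    for r in results:
        for k, v in r.items():
            treatments[k].extend(v)
    print(treatments['tr01'][:2])  # -> ['LINE 0', 'LINE 1']
```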
This should pile all the data into a dict named `treatments`, since it returns the data from the child processes running `extract`; you can then write the data out any way you like.
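For the overwriting problem itself: every `pickle.dump` in the question writes to the same `pkl_filename`, so each dump clobbers the previous one. One way to write out the `treatments` dict the answer describes is to derive a distinct filename from each key. The helper name and `prefix` parameter below are illustrative, not from the original:

```python
import pickle

def dump_treatments(treatments, prefix='output'):
    # One pickle per treatment key, so each dump goes to its own
    # file instead of overwriting a single shared pkl_filename.
    paths = []
    for key, data in treatments.items():
        path = f'{prefix}_{key}.pkl'
        with open(path, 'wb') as f:
            pickle.dump(data, f)
        paths.append(path)
    return paths
```

Called as `dump_treatments(treatments, prefix='myfile')`, this produces `myfile_tr01.pkl`, `myfile_tr02.pkl`, and so on, one per treatment.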