<p>I need to run a function for every element in my database.</p>
<p>When I try the following:</p>
<pre><code>from multiprocessing import Pool
from pymongo import Connection

def foo(item):
    ...

connection1 = Connection('127.0.0.1', 27017)
db1 = connection1.data
my_pool = Pool(6)
my_pool.map(foo, db1.index.find())
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>Job 1, 'python myscript.py ' terminated by signal SIGKILL (Forced quit)</p>
</blockquote>
<p>I suppose this happens because <code>db1.index.find()</code> eats all available RAM while trying to return millions of database elements...</p>
<p>How should I modify my code so that it works?</p>
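<p>One way to keep memory bounded (a sketch with hypothetical stand-ins, not tested against the real collection) is to let the pool pull from the cursor lazily with <code>imap</code> and a modest <code>chunksize</code>, so only a small batch of documents is in flight at any time. Here <code>process_item</code> stands in for <code>foo</code> and <code>fake_cursor</code> for <code>db1.index.find()</code>:</p>

```python
from multiprocessing import Pool

def process_item(item):
    # hypothetical stand-in for the real per-document work
    return item * 2

def fake_cursor(n):
    # simulates a lazy MongoDB cursor: yields items one at a time
    for i in range(n):
        yield i

if __name__ == '__main__':
    pool = Pool(6)
    # imap pulls from the iterable lazily and hands items to workers in
    # batches of `chunksize` instead of calling list() on the whole thing
    for result in pool.imap(process_item, fake_cursor(1000), chunksize=32):
        pass  # consume results one at a time; nothing accumulates
    pool.close()
    pool.join()
```

<p>Consuming the results in a <code>for</code> loop, rather than keeping the full list that <code>map</code> would return, also avoids holding every result in memory at once.</p>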
<p>The actual function is as follows:</p>
<pre><code>def create_barrel(item):
    connection = Connection('127.0.0.1', 27017)
    db = connection.data
    print db.index.count()
    barrel = []
    fls = []
    if 'name' in item.keys():
        barrel.append(WhitespaceTokenizer().tokenize(item['name']))
        name = item['name']
    elif 'name.utf-8' in item.keys():
        barrel.append(WhitespaceTokenizer().tokenize(item['name.utf-8']))
        name = item['name.utf-8']
    else:
        print item.keys()
    if 'files' in item.keys():
        for file in item['files']:
            if 'path' in file.keys():
                barrel.append(WhitespaceTokenizer().tokenize(" ".join(file['path'])))
                fls.append(("\\".join(file['path']), file['length']))
            elif 'path.utf-8' in file.keys():
                barrel.append(WhitespaceTokenizer().tokenize(" ".join(file['path.utf-8'])))
                fls.append(("\\".join(file['path.utf-8']), file['length']))
            else:
                print file
                barrel.append(WhitespaceTokenizer().tokenize(file))
    if len(fls) < 1:
        fls.append((name, item['length']))
    barrel = sum(barrel, [])
    for s in barrel:
        vs = re.findall("\d[\d|\.]*\d", s)  # versions, i.e. numbers such as 4.2.7500
    b0 = []
    for s in barrel:
        b0.append(re.split("[" + string.punctuation + "]", s))
    b1 = filter(lambda x: x not in string.punctuation, sum(b0, []))
    flag = True
    while flag:
        bb = []
        flag = False
        for bt in b1:
            if bt[0] in string.punctuation:
                bb.append(bt[1:])
                flag = True
            elif bt[-1] in string.punctuation:
                bb.append(bt[:-1])
                flag = True
            else:
                bb.append(bt)
        b1 = bb
    b2 = b1 + barrel + vs
    b3 = list(set(b2))
    b4 = map(lambda x: x.lower(), b3)
    b_final = {}
    b_final['_id'] = item['_id']
    b_final['tags'] = b4
    b_final['name'] = name
    b_final['files'] = fls
    print db.barrels.insert(b_final)
</code></pre>
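<p>A side note on the function above: it opens a new <code>Connection</code> for every single document, which adds a lot of overhead. A common pattern (sketched here with a plain dictionary standing in for the database handle, since this isn't run against a real server) is to open one connection per worker process via the pool's <code>initializer</code>:</p>

```python
from multiprocessing import Pool

_db = None  # per-worker global, set once when each worker starts

def init_worker():
    # runs once in every worker process; with pymongo this would be
    # something like: _db = Connection('127.0.0.1', 27017).data
    global _db
    _db = {'connected': True}  # hypothetical stand-in for the db handle

def handle(item):
    # reuses the per-worker handle instead of reconnecting on each call
    return (_db['connected'], item)

if __name__ == '__main__':
    pool = Pool(2, initializer=init_worker)
    print(pool.map(handle, [1, 2, 3]))  # prints [(True, 1), (True, 2), (True, 3)]
    pool.close()
    pool.join()
```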
<p>I noticed one interesting thing. When I press ctrl+c to stop the process, I get the following:</p>
<pre><code>python index2barrel.py
Traceback (most recent call last):
File "index2barrel.py", line 83, in <module>
my_pool.map(create_barrel, db1.index.find, 6)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 227, in map
return self.map_async(func, iterable, chunksize).get()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 280, in map_async
iterable = list(iterable)
TypeError: 'instancemethod' object is not iterable
</code></pre>
<p>I mean, why does multiprocessing convert things to a list? Isn't this the source of the problem?</p>
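<p>Note that the traceback also reveals a second problem: <code>db1.index.find</code> was passed without parentheses, so <code>map_async</code> tried to call <code>list()</code> on the bound method object itself, which is what raised the <code>TypeError</code>. (<code>Pool.map</code> needs the whole sequence up front to split it into chunks, which is why it calls <code>list()</code> on iterables that have no length.) A minimal reproduction, with a hypothetical class standing in for the collection:</p>

```python
class FakeCollection(object):
    # hypothetical stand-in for a pymongo collection
    def find(self):
        return iter([{'_id': 1}, {'_id': 2}])

coll = FakeCollection()

try:
    list(coll.find)       # the method object itself is not iterable
except TypeError as e:
    print(e)              # a TypeError, like the one in the traceback

print(list(coll.find()))  # calling it returns an iterable cursor
```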
<p>From the strace output:</p>
<pre><code>brk(0x231ccf000) = 0x231ccf000
futex(0x1abb150, FUTEX_WAKE_PRIVATE, 1) = 1
sendto(3, "+\0\0\0\260\263\355\356\0\0\0\0\325\7\0\0\0\0\0\0data.index\0\0"..., 43, 0, NULL, 0) = 43
recvfrom(3, "Some text from my database."..., 491663, 0, NULL, NULL) = 491663
... [manymany times]
brk(0x2320d5000) = 0x2320d5000
.... manymany times
</code></pre>
<p>The sample above is from the strace output. Running <code>strace -o logfile python myscript.py</code> does not stop; it just eats all available RAM and keeps writing to the log file.</p>
<p>UPDATE: Using <code>imap</code> instead of <code>map</code> solved my problem.</p>