<p>This is too long to cover in a comment, so:</p>
<p>Again, I have no Django expertise, but I don't think this would be a problem on either Windows or Linux/Unix (you did not, however, specify the platform). <strong>More importantly, the code you posted accomplishes very little, because your loop creates a process and then waits for it to complete before creating the next one. Consequently there is never more than one process running at a time, and thus no parallelism</strong>. To correct this, try the following:</p>
<pre><code>import multiprocessing

def functiontomultiprocess(request):
    processes = []
    for doc in alldocs:  # where is alldocs defined?
        # target= names the worker function; args= passes doc to function2
        p = multiprocessing.Process(target=function2, args=(doc,))
        processes.append(p)
        p.start()
    # now wait for all the processes to complete
    for p in processes:
        p.join()
</code></pre>
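<p>As a quick sanity check of the pattern above, here is a self-contained sketch with a stand-in <code>function2</code> and <code>alldocs</code> (both names assumed from your snippet; substitute your real worker and document list):</p>
<pre><code>import multiprocessing

def function2(doc):
    # stand-in for your real worker: just report which doc it received
    print(f"processing {doc}")

if __name__ == "__main__":  # required on Windows, where child processes are spawned
    alldocs = ["doc1", "doc2", "doc3"]  # stand-in for your real document list
    processes = []
    for doc in alldocs:
        p = multiprocessing.Process(target=function2, args=(doc,))
        processes.append(p)
        p.start()  # all processes now run concurrently
    for p in processes:
        p.join()   # wait for every one of them to finish
</code></pre>
<p>Note the <code>if __name__ == "__main__":</code> guard: without it, the child processes would re-execute the module-level process-creation code on platforms that use the spawn start method.</p>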
<p>Alternatively, if you want to use a process pool, you have choices. This version uses the <code>concurrent.futures</code> module:</p>
<pre><code>import concurrent.futures
import multiprocessing  # needed for cpu_count()

def functiontomultiprocess(request):
    """
    Does it make sense to create more processes than CPUs you have?
    It might if there is a lot of I/O, in which case try:
    n_processes = len(alldocs)
    """
    n_processes = min(len(alldocs), multiprocessing.cpu_count())
    with concurrent.futures.ProcessPoolExecutor(max_workers=n_processes) as executor:
        futures = [executor.submit(function2, doc) for doc in alldocs]  # create sub-processes
        return_values = [future.result() for future in futures]  # get return values from function2
</code></pre>
<p>And this version uses the <code>multiprocessing</code> module:</p>
<pre><code>import multiprocessing

def functiontomultiprocess(request):
    n_processes = min(len(alldocs), multiprocessing.cpu_count())
    with multiprocessing.Pool(processes=n_processes) as pool:
        results = [pool.apply_async(function2, (doc,)) for doc in alldocs]  # create sub-processes
        return_values = [result.get() for result in results]  # get return values from function2
</code></pre>
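<p>Since <code>function2</code> appears to take a single argument, <code>pool.map</code> is a slightly simpler spelling of the same idea: it submits one task per document and collects the return values in order. A runnable sketch, again with stand-in <code>function2</code> and <code>alldocs</code>:</p>
<pre><code>import multiprocessing

def function2(doc):
    # stand-in worker: pretend to process a document and return a result
    return f"processed {doc}"

if __name__ == "__main__":
    alldocs = ["doc1", "doc2", "doc3"]  # stand-in document list
    n_processes = min(len(alldocs), multiprocessing.cpu_count())
    with multiprocessing.Pool(processes=n_processes) as pool:
        # map blocks until all workers finish; results come back in input order
        return_values = pool.map(function2, alldocs)
    print(return_values)
</code></pre>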
<p><strong>Now you just have to try it and see.</strong></p>