Combining multithreading and multiprocessing with concurrent.futures

Published 2024-09-28 21:00:49


I have a function that is both heavily I/O-bound and CPU-intensive. I tried to parallelize it by combining multiprocessing with multithreading, but it hangs. This question has been asked before, but in a different context. My function is fully independent and returns nothing. Why does it hang, and how can it be fixed?

import concurrent.futures
import os
import numpy as np
import time


ids = [1,2,3,4,5,6,7,8]

def f(x):
    time.sleep(1)
    x**2

def multithread_accounts(AccountNumbers, f, n_threads = 2):

    slices = np.array_split(AccountNumbers, n_threads)
    slices = [list(i) for i in slices]

    with concurrent.futures.ThreadPoolExecutor() as executor:
        executor.map(f, slices)



def parallelize_distribute(AccountNumbers, f, n_threads = 2, n_processors = os.cpu_count()):

    slices = np.array_split(AccountNumbers, n_processors)
    slices = [list(i) for i in slices]

    with concurrent.futures.ProcessPoolExecutor(max_workers=n_processors) as executor:
        executor.map( lambda x: multithread_accounts(x, f, n_threads = n_threads) , slices)
        
parallelize_distribute(ids, f, n_processors=2, n_threads=2)

1 Answer

#1 · Posted 2024-09-28 21:00:49

Sorry, I don't have time to explain all of this, so I'll just give code "that works". I urge you to start with something simpler, because the learning curve here is nontrivial: leave numpy out of it entirely at first; next use only threads; then move on to processes; and don't try to parallelize anything other than named module-level functions (no, not function-local anonymous lambdas) unless you're an expert.
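The module-level-function requirement exists because ProcessPoolExecutor has to pickle the callable to ship it to a worker process, and functions pickle by qualified name. A minimal sketch of the difference (`square` here is just an illustrative name, not part of the code above):

```python
import pickle

def square(x):
    # A named module-level function pickles by reference to its name.
    return x * x

# Round-trips fine: unpickling just looks the name up again.
clone = pickle.loads(pickle.dumps(square))
print(clone(3))

# A lambda has no importable name, so pickling it fails -- the same
# failure ProcessPoolExecutor hits when handed a lambda.
try:
    pickle.dumps(lambda x: x * x)
    picklable = True
except Exception:
    picklable = False
print("lambda picklable?", picklable)
```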

As so often happens, the error messages you "should" be getting are suppressed because they occur asynchronously, so there's no good way to report them. Feel free to add print() statements to see how far you're getting.
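One way those errors stay hidden: Executor.map() reports a worker's exception only when you iterate its results (or call .result() on a submitted future). A small sketch, with `boom` as an illustrative stand-in for a failing worker:

```python
import concurrent.futures as cf

def boom(x):
    raise ValueError(f"failed on {x}")

caught = None
with cf.ThreadPoolExecutor(max_workers=2) as executor:
    results = executor.map(boom, [1, 2])  # no exception raised here
    try:
        list(results)                     # iterating surfaces the error
    except ValueError as e:
        caught = str(e)
print("caught:", caught)
```

The original code never consumes the map() iterator, so any failure in the workers simply vanishes.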

Note: I removed numpy and added the bits needed so this also runs on Windows. I expect that switching back to numpy's array_split() would work fine, but numpy wasn't installed on the machine I was using at the time.
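For reference, numpy's array_split() produces the same grouping (assuming numpy is installed), so swapping it back in should be a drop-in change:

```python
import numpy as np

# Split 10 ids into 4 chunks; lengths differ by at most 1,
# matching the pure-Python array_split used below.
parts = [a.tolist() for a in np.array_split(list(range(1, 11)), 4)]
print(parts)
```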

import concurrent.futures as cf
import os
import time

def array_split(xs, n):
    # Pure-Python stand-in for numpy.array_split(): split xs into n
    # contiguous chunks whose lengths differ by at most 1.
    from itertools import islice
    it = iter(xs)
    result = []
    q, r = divmod(len(xs), n)
    for i in range(r):            # the first r chunks get one extra item
        result.append(list(islice(it, q+1)))
    for i in range(n - r):
        result.append(list(islice(it, q)))
    return result
    
ids = range(1, 11)

def f(x):
    print(f"called with {x}")
    time.sleep(5)
    x**2

def multithread_accounts(AccountNumbers, f, n_threads=2):
    with cf.ThreadPoolExecutor(max_workers=n_threads) as executor:
        for slice in array_split(AccountNumbers, n_threads):
            executor.map(f, slice)

def parallelize_distribute(AccountNumbers, f, n_threads=2, n_processors=os.cpu_count()):
    slices = array_split(AccountNumbers, n_processors)
    print("top slices", slices)
    with cf.ProcessPoolExecutor(max_workers=n_processors) as executor:
        executor.map(multithread_accounts, slices,
                                           [f] * len(slices),
                                           [n_threads] * len(slices))

if __name__ == "__main__":
    parallelize_distribute(ids, f, n_processors=2, n_threads=2)

By the way, I'd suggest that this makes more sense for the threaded part:

def multithread_accounts(AccountNumbers, f, n_threads=2):
    with cf.ThreadPoolExecutor(max_workers=n_threads) as executor:
        executor.map(f, AccountNumbers)

That is, there's really no need to split the list yourself here; the thread machinery does that on its own. You may have missed this in your original attempt because the ThreadPoolExecutor() call in the code you posted forgot to specify the max_workers argument.
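The point about the pool distributing work itself can be seen directly; `record` below is an illustrative function that just logs which worker thread handled each item:

```python
import concurrent.futures as cf
import threading

seen = []
lock = threading.Lock()

def record(x):
    with lock:
        seen.append((threading.current_thread().name, x))

with cf.ThreadPoolExecutor(max_workers=2) as executor:
    # Hand the pool the flat list; it parcels items out to the
    # 2 worker threads on its own -- no manual chunking needed.
    executor.map(record, range(6))
# Exiting the with-block waits for all submitted work to finish.

print(sorted(x for _, x in seen))            # every item was processed
print(len({name for name, _ in seen}) <= 2)  # by at most 2 threads
```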
