Converting multithreading to multiprocessing in Python

Published 2024-10-03 19:21:26


How do I convert this from multithreading to multiprocessing? With multithreading it actually runs slower, and CPU usage stays low, so I am hoping multiprocessing will help.

from threading import Thread

def multiprocess(sentences):

    responselist = []

    # called by each thread
    def processfunction(asentence, i):
        pro_sentence = processthesentence(asentence[0],asentence[1],asentence[2],asentence[3],asentence[4],asentence[5],asentence[6],asentence[7],asentence[8])
        mytuple = (asentence, pro_sentence)
        responselist.append(mytuple)

    # ----- function end --------- #

    # start the threads
    threadlist = []
    for i in range(2):
        asentence = sentences[i]
        t = Thread(target=processfunction, args=(asentence, i))
        threadlist.append(t)
        t.start()

    for thr in threadlist:
        thr.join()

    return responselist

I tried the following (replacing the word Thread with Process), but it did not work:

^{pr2}$

I also tried this, but it had some errors:

import gevent

def dotheprocess(sentences):

    responselist = []

    # called by each greenlet
    def task(asentence):
        thesentence = processsentence(asentence[0],asentence[1],asentence[2],asentence[3],asentence[4],asentence[5],asentence[6],asentence[7],asentence[8])
        mytuple = (asentence, thesentence)
        responselist.append(mytuple)

    # ----- function end --------- #

    def asynchronous():
        threads = [gevent.spawn(task, asentence) for asentence in sentences]
        gevent.joinall(threads)

    asynchronous()

    return responselist

3 Answers

Threads will not make a function faster unless it spends time waiting (on I/O, for example). Multiprocessing can help in theory, but a simple function will not benefit from it because of its overhead, so use it with care. Use a Manager for the shared variable.

from multiprocessing import Process, Manager, freeze_support

class multiProcess():
    def __init__(self, sentences):
        self.responseList = Manager().list()
        self.processList = []
        self.sentences = sentences

    def processSentence(self,a0,a1,a2,a3,a4,a5,a6,a7,a8):
        reversedValue = a8+a7+a6+a5+a4+a3+a2+a1+a0
        return reversedValue

    #called by each process
    def processFunction(self,asentence):
        pro_sentence = self.processSentence(asentence[0],asentence[1],asentence[2],asentence[3],asentence[4],asentence[5],asentence[6],asentence[7],asentence[8])
        mytuple = (asentence,pro_sentence)
        self.responseList.append(mytuple)
        return

    def run(self):
        for i in range(2):
            asentence = self.sentences[i]
            p = Process(target=self.processFunction, args=(asentence,))
            self.processList.append(p)
            p.start()

        for pro in self.processList:
            pro.join()

        return self.responseList


if __name__=="__main__":
    freeze_support()
    sentences = ['interesting','wonderful']
    output = multiProcess(sentences).run()
    print(output)
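The Manager-based sharing this answer relies on can be shown in isolation. The helper names below (append_square, collect_squares) are my own for illustration: a plain Python list appended to in a child process would not be visible to the parent, while a Manager().list() proxy is.

```python
from multiprocessing import Process, Manager

def append_square(shared, n):
    # runs in a child process; writes go through the manager's proxy
    shared.append(n * n)

def collect_squares(numbers):
    with Manager() as manager:
        shared = manager.list()  # proxy list visible to all processes
        procs = [Process(target=append_square, args=(shared, n)) for n in numbers]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        # copy out before the manager shuts down
        return list(shared)

if __name__ == "__main__":
    print(sorted(collect_squares([1, 2, 3])))
```

Note that the children may finish in any order, so the proxy list is unordered; sort it or key the results if order matters.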

This is what worked best for me, although not using multiprocessing at all was still 50% faster:

import multiprocessing as mympro

def processthesentence(asentence):
     return asentence

if __name__=="__main__":
    sentences = ['I like something','Great cool']
    numberofprocesses = 3
    thepool = mympro.Pool(processes=numberofprocesses)
    results = [thepool.apply_async(processthesentence, args=(asent,)) for asent in sentences]
    output = [item.get() for item in results]
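For reference, the same pattern can be written a little more compactly with Pool.map and a with-block, which closes and joins the pool automatically. This is a sketch using the answer's stand-in processthesentence; the wrapper name process_all is my own.

```python
import multiprocessing as mympro

def processthesentence(asentence):
    # stand-in for the real per-sentence work
    return asentence

def process_all(sentences, numberofprocesses=3):
    # the with-block closes and joins the pool on exit
    with mympro.Pool(processes=numberofprocesses) as thepool:
        return thepool.map(processthesentence, sentences)

if __name__ == "__main__":
    print(process_all(['I like something', 'Great cool']))
```

Unlike apply_async, Pool.map blocks until all results are in and preserves the input order, so no .get() calls are needed.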

Try using gevent to spawn multiple greenlets. Note that greenlets are cooperative coroutines running in a single OS thread, so they give you concurrency rather than use of additional CPUs. Here is a version of your example; a queue is used so that results are collected correctly across greenlet context switches.

import gevent
from gevent import monkey
from gevent.queue import Queue
monkey.patch_all()

def dotheprocess(sentences):
    queue = Queue()

    # called by each greenlet
    def task(asentence):
        thesentence = processsentence(asentence[0],asentence[1],asentence[2],asentence[3],asentence[4],asentence[5],asentence[6],asentence[7],asentence[8])
        queue.put((asentence, thesentence))

    threads = [gevent.spawn(task, asentence) for asentence in sentences]
    gevent.joinall(threads)

    return queue

# call the dotheprocess function with your sentences
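The function above returns the gevent queue itself, but a caller will usually want a plain list. A minimal self-contained sketch might look like this; the stub processsentence (which simply rejoins the nine pieces) and the wrapper name processall are my own, standing in for the poster's real function, which is not shown.

```python
import gevent
from gevent.queue import Queue

def processsentence(*parts):
    # stub for the poster's real function: just rejoin the nine pieces
    return ''.join(parts)

def processall(sentences):
    queue = Queue()

    def task(asentence):
        # each greenlet processes one sentence and puts the result on the queue
        queue.put((asentence, processsentence(*asentence[:9])))

    gevent.joinall([gevent.spawn(task, s) for s in sentences])
    # drain the queue into an ordinary list for the caller
    return [queue.get() for _ in range(len(sentences))]

results = processall(['wonderful', 'fantastic'])
```

Since one result is put on the queue per sentence, draining exactly len(sentences) items empties it without blocking.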
