Multiprocessing in Python for batch image processing

Posted on 2024-10-08 20:15:46


I have developed a multiprocessing program in Python. Image batch streaming is done in one process, and batch processing is done in another.

Once the streaming side has accumulated a predefined number of images, it signals the processing loop with a multiprocessing `Event()`. So the two processes need to stay in step with each other.

Batch streaming takes longer than batch processing, so the processing side should not lose any images.

Most of the time, the batch processing time is indeed shorter than the batch streaming time, so the processing side appears to work correctly. But sometimes batch streaming runs twice in a row before the next batch processing, for example:

batch streaming 2.35
batch processing 2.05
batch streaming 2.25
batch processing 2.05
batch streaming 2.32  repeated
batch streaming 2.36
batch processing 3.25
batch streaming 2.35
batch processing 2.15
batch streaming 2.35
batch processing 2.25
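One plausible reading of the "repeated" line: the streaming side called `set()` twice before the processing side finished a `wait()`/`clear()` cycle. A `multiprocessing.Event` does not count signals, so two `set()` calls collapse into a single wake-up. A minimal sketch of this coalescing behavior (single process, same `Event` API):

```python
from multiprocessing import Event

e = Event()

# The fast streaming side signals twice before the consumer wakes up:
e.set()   # "batch1 is ready"
e.set()   # "batch2 is ready" -- coalesced; Event has no counter

# The consumer wakes once and clears -- the second signal is absorbed:
e.wait()
e.clear()
print(e.is_set())  # False: nothing tells the consumer a second batch was ready
```

A counting primitive (a blocking `Queue.get()`, or a semaphore) would preserve one wake-up per batch.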

This means images are being lost on the processing side. How can I fix this?

I cannot post the whole code. The two processes look like this.

Batch streaming loop

while not stopbit.is_set():
    if not cam_queue.empty():
        #print('Got frame')
        cmd, val = cam_queue.get()

        # calculate FPS
        '''diffTime = time.time() - lastFTime
        fps = 1 / diffTime
        print(fps)
        lastFTime = time.time()'''

        # if cmd == vs.StreamCommands.RESOLUTION:
        #     pass  # print(val)

        if cmd == vs.StreamCommands.FRAME:
            if val is not None:
                missCount = 0
                image = np.array(val, dtype=np.float32, order='C')
                image = image.transpose([2, 0, 1])
                imgrshp = image.reshape(921600)  # flat 921,600-element CHW array
                #print(str(val.shape))
                if not batch1_is_processed:
                    batch1_[count] = imgrshp
                    batch3_[count] = val
                else:
                    batch2_[count] = imgrshp
                    batch4_[count] = val
                count = count + 1
                if count >= BATCHSIZE:  # start inference and post-processing
                    diffTime = time.time() - lastFTime
                    print("batching time " + str(diffTime))
                    if not batch1_is_processed:  # hand over batch1
                        q.put('batch1')
                        batch1_is_processed = True
                        #print('batch1 is set')
                    else:  # hand over batch2
                        q.put('batch2')
                        batch1_is_processed = False
                        #print('batch2 is set')

                    e.set()  # signal that the buffer is full
                    count = 0
                    lastFTime = time.time()
            else:
                missCount = missCount + 1
                print("miss frame after " + str(time.time() - startTime))
                if missCount >= 10:
                    q.put('lostframes')
                    e.set()  # return to the caller immediately to stop with the lostframes option
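The flat length of 921,600 implies 640×480 RGB frames (480 × 640 × 3 = 921,600); the loop stores them in CHW order, which TensorRT-style engines typically expect. A small sketch of that transpose-and-flatten step and how the consumer can restore the layout (the `HEIGHT`/`WIDTH`/`CHANNEL` values are my inference from that number, not stated in the question):

```python
import numpy as np

HEIGHT, WIDTH, CHANNEL = 480, 640, 3  # assumed: 480 * 640 * 3 = 921,600

frame = np.random.randint(0, 256, (HEIGHT, WIDTH, CHANNEL)).astype(np.float32)

# HWC -> CHW, then flatten into the shared buffer's shape
chw_flat = frame.transpose([2, 0, 1]).reshape(921600)

# The consumer can recover the CHW tensor from the flat shared array:
restored = chw_flat.reshape(CHANNEL, HEIGHT, WIDTH)
assert np.array_equal(restored.transpose([1, 2, 0]), frame)
```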

Batch processing loop

while self.stopbit is not None:
    self.e.wait()
    batch = self.queue.get()
    lastFTime = time.time()
    if batch == 'batch1':  # process batch1
        #print('batch1 is processed')
        for idx in range(BATCHSIZE):
            images[idx] = np.frombuffer(self.sharedbatch1[idx], dtype=np.float32)
            uimg = np.frombuffer(self.sharedbatch3[idx], dtype=np.uint8)
            uimgs[idx] = uimg.reshape(HEIGHT, WIDTH, CHANNEL)
    elif batch == 'batch2':  # process batch2
        #print('batch2 is processed')
        for idx in range(BATCHSIZE):
            images[idx] = np.frombuffer(self.sharedbatch2[idx], dtype=np.float32)
            uimg = np.frombuffer(self.sharedbatch4[idx], dtype=np.uint8)
            uimgs[idx] = uimg.reshape(HEIGHT, WIDTH, CHANNEL)
    elif batch == 'lostframes':
        self.e.clear()
        self.stopbit.set()  # stop streaming
        break

    # do batch inference with NVIDIA's TensorRT
    with engine.create_execution_context() as context:
        inputs, outputs, bindings, stream = common.allocate_buffers(engine)
        inputs[0].host = np.ascontiguousarray(images, dtype=np.float32)
        [outputs] = common.do_inference(context, bindings, inputs, outputs, stream, BATCHSIZE)
        outputs = outputs.reshape((BATCHSIZE, 60, 80, 57))
        humans = []
        for i in range(BATCHSIZE):
            heat_map = outputs[i, :, :, :19]
            puf_map = outputs[i, :, :, 19:]
            humans.append(self.est.inference(heat_map, puf_map, 4.0))
            #uimgs[i] = TfPoseEstimatorTRT.draw_humans(uimgs[i], humans[i], imgcopy=False)
            #cv2.imwrite("images/image_" + str(cnt) + ".jpeg", uimgs[i])
            #cnt = cnt + 1
        hdp.ProcessHumanData(humans, uimgs)
        humans.clear()
        diffTime = time.time() - lastFTime
        print("batch processing time " + str(diffTime))
        self.e.clear()
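The split of the 57 output channels into 19 and 38 matches the usual OpenPose-style layout: 19 part-confidence heatmaps followed by 38 part-affinity-field channels (my reading of the slicing above, not stated explicitly in the question). A shapes-only sketch with dummy data:

```python
import numpy as np

BATCHSIZE = 4

# dummy network output, same shape as the reshaped TensorRT result above
outputs = np.zeros((BATCHSIZE, 60, 80, 57), dtype=np.float32)

heat_map = outputs[0, :, :, :19]  # 19 part-confidence maps
puf_map = outputs[0, :, :, 19:]   # 38 part-affinity channels (19 + 38 = 57)
assert heat_map.shape == (60, 80, 19)
assert puf_map.shape == (60, 80, 38)
```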

1 Answer

#1 · Posted on 2024-10-08 20:15:46

Solved the problem using a multiprocessing Lock. Also, debugging with print can be misleading: a print call itself takes a few milliseconds, which you have to keep in mind when debugging parallel code.
