Running a process in parallel while using OpenCV

I am capturing video from my webcam with OpenCV. Every 5 seconds I process one frame/image, which can take a few seconds. So far everything works. But whenever a frame is being processed, the whole video freezes for a couple of seconds (until the processing is finished). I am trying to get rid of this freeze by using threads. Here is what I have done so far:

Inside the while loop that captures the video:

    while True:
        ret, image = cap.read()

        if next_time <= datetime.now():

            # Prepare headers for the HTTP request and JPEG-encode the frame
            content_type = 'image/jpeg'
            headers = {'content-type': content_type}
            _, img_encoded = cv2.imencode('.jpg', image)

            # Hand the encoded frame to the asynchronous face detection
            loop = asyncio.get_event_loop()
            future = asyncio.ensure_future(self.async_faces(img_encoded, headers))
            loop.run_until_complete(future)

            next_time += period
            ...

        cv2.imshow('img', image)

The methods look like this:

async def async_faces(self, img, headers):
    with ThreadPoolExecutor(max_workers=10) as executor:

        loop = asyncio.get_event_loop()

        tasks = [
            loop.run_in_executor(
                executor,
                self.face_detection,
                *(img, headers)  # Allows us to pass multiple arguments to `face_detection`
            )
        ]

        for response in await asyncio.gather(*tasks):
            pass

def face_detection(self, img, headers):
    try:
        response = requests.post(self.url, data=img.tostring(), headers=headers)
        ...
    except Exception as e:
        ...

    ...

But unfortunately, it does not work. Any ideas?
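
To make the intent clearer: what I am trying to achieve is that the POST request runs in the background while the capture loop keeps drawing frames, roughly like this minimal sketch with a plain background thread (url and on_result are placeholder names here, not part of my actual code):

import threading
import requests

def post_frame(img_encoded, headers, url, on_result):
    # Runs in a worker thread: send the frame and hand the parsed JSON back
    try:
        response = requests.post(url, data=img_encoded.tobytes(), headers=headers)
        on_result(response.json())
    except Exception as e:
        print(e)

# In the capture loop the request would then be started without waiting for it,
# so cv2.imshow() keeps running while the request is in flight:
# threading.Thread(target=post_frame,
#                  args=(img_encoded, headers, self.url, on_result),
#                  daemon=True).start()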

Edit 1

Below I have added what the whole thing is supposed to do.

Originally, the function looked like this:

import requests
import cv2
from datetime import datetime, timedelta

def face_recognition(self):

    # Start camera
    cap = cv2.VideoCapture(0)

    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

    emotional_states = []
    font = cv2.FONT_HERSHEY_SIMPLEX

    period = timedelta(seconds=self.time_period)
    next_time = datetime.now() + period

    cv2.namedWindow('img', cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty('img', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

    while True:
        ret, image = cap.read()

        if next_time <= datetime.now():

            # Prepare headers for http request
            content_type = 'image/jpeg'
            headers = {'content-type': content_type}
            _, img_encoded = cv2.imencode('.jpg', image)

            try:
                # Send http request with image and receive response
                response = requests.post(self.url, data=img_encoded.tostring(), headers=headers)
                emotional_states = response.json().get("emotions")
                face_locations = response.json().get("locations")
            except Exception as e:
                emotional_states = []
                face_locations = []
                print(e)

            next_time += period

        for i in range(0, len(emotional_states)):
            emotion = emotional_states[i]
            face_location = face_locations[i]
            cv2.putText(image, emotion, (int(face_location[0]), int(face_location[1])),
                        font, 0.8, (0, 255, 0), 2, cv2.LINE_AA)

        cv2.imshow('img', image)
        k = cv2.waitKey(1) & 0xff
        if k == 27:
            cv2.destroyAllWindows()
            cap.release()
            break
        if k == ord('a'):
            cv2.resizeWindow('img', 700,700)

With the method above I film myself, and the video is shown live on my screen. In addition, every 5 seconds one frame is sent to an API, which processes the image and returns the emotions of the people in it. Those emotions are then drawn on the screen right next to my face. The problem is that the live video freezes for a few seconds until the API returns the emotions.
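
For reference, the JSON that the loop above expects back from the API has roughly this shape (the field names come from the code above; the concrete values are made up for illustration):

{
    "emotions": ["happy", "neutral"],
    "locations": [[230.0, 140.0], [820.0, 160.0]]
}

Each entry in "locations" is used as the (x, y) position at which the corresponding emotion label is drawn with cv2.putText.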

My operating system is Ubuntu.

Edit 2

The API runs locally. I created a Flask app, and the following method receives the request:

from flask import Flask, request, Response
import numpy as np
import cv2
import json

app = Flask(__name__)

@app.route('/api', methods=['POST'])
def facial_emotion_recognition():

    # Convert the raw request body (JPEG bytes) to a uint8 array
    nparr = np.frombuffer(request.data, np.uint8)
    # Decode image
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

    # Analyse the image
    emotional_state, face_locations = emotionDetection.analyze_facial_emotions(img)

    json_dump = json.dumps({'emotions': emotional_state, 'locations': face_locations}, cls=NumpyEncoder)

    return Response(json_dump, mimetype='application/json')
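
For a quick local test I call the endpoint from a separate script roughly like this (the URL assumes Flask's default host and port, which may not match my actual setup, and test_face.jpg is just a placeholder image):

import cv2
import requests

img = cv2.imread('test_face.jpg')  # placeholder test image
_, img_encoded = cv2.imencode('.jpg', img)

response = requests.post('http://localhost:5000/api',  # assumed default Flask URL
                         data=img_encoded.tobytes(),
                         headers={'content-type': 'image/jpeg'})
print(response.json())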
