<p>I have a Flask application that reads frames from a camera and streams them to a website.</p>
<p>Camera.py</p>
<pre><code>from threading import Thread
from copy import deepcopy
import queue
import cv2


class Camera(Thread):
    def __init__(self, cam, normalQue, detectedQue):
        Thread.__init__(self)
        self.__cam = cam
        self.__normalQue = normalQue
        self.__detectedQue = detectedQue
        self.__shouldStop = False

    def __del__(self):
        self.__cam.release()
        print('Camera released')

    def run(self):
        while True:
            rval, frame = self.__cam.read()
            if rval:
                frame = cv2.resize(frame, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
                _, jpeg = cv2.imencode('.jpg', frame)
                self.__normalQue.put(jpeg.tobytes())
                self.__detectedQue.put(deepcopy(jpeg.tobytes()))
            if self.__shouldStop:
                break

    def stopCamera(self):
        self.__shouldStop = True
</code></pre>
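<p>The run() loop above boils down to a fan-out of each encoded frame into the two queues. A minimal, self-contained sketch of that hand-off (the function name fan_out is hypothetical); note that since jpeg.tobytes() returns immutable bytes, deepcopy is effectively a no-op here:</p>
<pre><code>import queue
from copy import deepcopy


def fan_out(frames, normal_que, detected_que):
    # Push each encoded frame into both consumer queues,
    # mirroring one iteration of Camera.run()
    for frame_bytes in frames:
        normal_que.put(frame_bytes)
        # deepcopy of an immutable bytes object returns the same object
        detected_que.put(deepcopy(frame_bytes))


normal_que = queue.Queue(maxsize=0)      # unbounded, as in main.py
detected_que = queue.Queue(maxsize=0)
fan_out([b'frame1', b'frame2'], normal_que, detected_que)
</code></pre>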
<p>As you can see, I just read a frame, resize it, and store it in two different queues. Nothing too complicated.
I also have two classes responsible for the MJPEG streams:</p>
<p>NormalVideoStream.py</p>
<pre><code>from threading import Thread
import traceback
import cv2


class NormalVideoStream(Thread):
    def __init__(self, framesQue):
        Thread.__init__(self)
        self.__frames = framesQue
        self.__img = None

    def run(self):
        while True:
            if self.__frames.empty():
                continue
            self.__img = self.__frames.get()

    def gen(self):
        while True:
            try:
                if self.__img is None:
                    print('Normal stream frame is none')
                    continue
                yield (b'--frame\r\n'
                       b'Content-Type: image/jpeg\r\n\r\n' + self.__img + b'\r\n')
            except:
                traceback.print_exc()
                print('Normal video stream generation exception')
</code></pre>
<p>and</p>
<p>DetectionVideoStream.py</p>
<pre><code>from threading import Thread
import traceback
import cv2
import numpy as np


class DetectionVideoStream(Thread):
    def __init__(self, framesQue):
        Thread.__init__(self)
        self.__frames = framesQue
        self.__img = None
        self.__faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    def run(self):
        while True:
            if self.__frames.empty():
                continue
            self.__img = self.__detectFace()

    def gen(self):
        while True:
            try:
                if self.__img is None:
                    print('Detected stream frame is none')
                    continue
                yield (b'--frame\r\n'
                       b'Content-Type: image/jpeg\r\n\r\n' + self.__img + b'\r\n')
            except:
                traceback.print_exc()
                print('Detection video stream generation exception')

    def __detectFace(self):
        retImg = None
        try:
            # frames arrive JPEG-encoded as bytes, so decode back to a BGR image first
            img = cv2.imdecode(np.frombuffer(self.__frames.get(), np.uint8), cv2.IMREAD_COLOR)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = self.__faceCascade.detectMultiScale(gray, 1.1, 4)
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
            (_, encodedImage) = cv2.imencode('.jpg', img)
            retImg = encodedImage.tobytes()
        except:
            traceback.print_exc()
            print('Face detection exception')
        return retImg
</code></pre>
<p>As you can see, in both streams I read camera frames from the queues in an infinite loop. Both classes have a gen() method that generates frames for the site itself. The only difference is that in the detection stream I also run face detection.</p>
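<p>For reference, each chunk yielded by gen() is one part of a multipart/x-mixed-replace response: a boundary line, a Content-Type header, a blank line, the JPEG payload, and a trailing CRLF. A minimal sketch of building one such part (the function name mjpeg_part is hypothetical, and the JPEG bytes are placeholders):</p>
<pre><code>def mjpeg_part(jpeg_bytes):
    # One part of a multipart/x-mixed-replace body: boundary line,
    # Content-Type header, blank line, JPEG payload, trailing CRLF
    return (b'--frame\r\n'
            b'Content-Type: image/jpeg\r\n\r\n' + jpeg_bytes + b'\r\n')


# placeholder bytes standing in for a real JPEG (SOI ... EOI markers)
part = mjpeg_part(b'\xff\xd8 ... \xff\xd9')
</code></pre>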
<p>现在在我的主文件中:</p>
<p>main.py</p>
<pre><code>from flask import Blueprint, render_template, Response, abort, redirect, url_for
from flask_login import login_required, current_user
from queue import Queue
from . import db
from .Camera import Camera
from .NormalVideoStream import NormalVideoStream
from .DetectionVideoStream import DetectionVideoStream
from .models import User
import cv2

main = Blueprint('main', __name__)

# Queues for both streams
framesNormalQue = Queue(maxsize=0)
framesDetectionQue = Queue(maxsize=0)
print('Queues created')

# RPi camera instance
camera = Camera(cv2.VideoCapture(0), framesNormalQue, framesDetectionQue)
camera.start()
print('Camera thread started')

# Streams
normalStream = NormalVideoStream(framesNormalQue)
detectionStream = DetectionVideoStream(framesDetectionQue)
print('Streams created')
normalStream.start()
print('Normal stream thread started')
detectionStream.start()
print('Detection stream thread started')


@main.route('/')
def index():
    return render_template('index.html')


@main.route('/profile', methods=["POST", "GET"])
def profile():
    if not current_user.is_authenticated:
        abort(403)
    return render_template('profile.html', name=current_user.name, id=current_user.id, detectionState=current_user.detectionState)


@main.route('/video_stream/<int:stream_id>')
def video_stream(stream_id):
    if not current_user.is_authenticated:
        abort(403)
    print(f'Current user detection: {current_user.detectionState}')
    global detectionStream
    global normalStream
    stream = None
    if current_user.detectionState:
        stream = detectionStream
        print('Stream set to detection one')
    else:
        stream = normalStream
        print('Stream set to normal one')
    return Response(stream.gen(), mimetype='multipart/x-mixed-replace; boundary=frame')


@main.route('/detection')
def detection():
    if not current_user.is_authenticated:
        abort(403)
    current_user.detectionState = not current_user.detectionState
    user = User.query.filter_by(id=current_user.id).first()
    user.detectionState = current_user.detectionState
    db.session.commit()
    return redirect(url_for('main.profile', id=current_user.id, user_name=current_user.name))


@main.errorhandler(404)
def page_not_found(e):
    return render_template('404.html'), 404


@main.errorhandler(403)
def page_forbidden(e):
    return render_template('403.html'), 403
</code></pre>
<p>I create the camera, queue, and stream objects globally. When a user logs in to the website, they can see the live video stream. There is also a button that switches which stream is currently displayed.</p>
<p>The whole project works fine, with one exception: when I switch to the detection stream, it has a huge delay (around 10-15 seconds), which makes the whole project unusable. I tried to hunt for bugs and optimizations on my own but couldn't find anything. I deliberately run everything on separate threads to offload the application, but it looks like that isn't enough. A delay of 1-2 seconds would be acceptable, but not 10+. Maybe you can spot some bugs? Or do you know how to optimize it?</p>
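<p>One detail worth noting: both queues are created unbounded (maxsize=0), so whenever a consumer runs slower than the camera produces frames, the backlog, and with it the delay, can grow without limit. A minimal sketch of a consumer that blocks for the next frame and then drains the backlog so only the freshest frame is kept (the helper name latest_frame is hypothetical, not part of my code):</p>
<pre><code>import queue


def latest_frame(frames_que, timeout=1.0):
    # Block until at least one frame is available (no busy-waiting),
    # then discard any backlog so only the most recent frame is returned
    frame = frames_que.get(timeout=timeout)
    while True:
        try:
            frame = frames_que.get_nowait()
        except queue.Empty:
            return frame


q = queue.Queue(maxsize=0)
for payload in (b'old1', b'old2', b'newest'):
    q.put(payload)
</code></pre>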
<p>I should also mention that the whole application runs on a Raspberry Pi 4B (4 GB), and I access the website from my desktop. The default server has been changed to Nginx and Gunicorn. From what I can see, the Pi's CPU usage is at 100% while the application is running. The behaviour is the same when testing on the default server. I'd guess a 1.5 GHz CPU should have enough power to make it run more smoothly.</p>