I'm trying to stream an Android screen using Python + aiortc. I have a POC that captures the device screen with adb + screenrecord. The code below reads the raw H.264 output from the screenrecord command and displays it using ffmpeg and OpenCV:
import subprocess as sp
import cv2
import numpy as np

# Pipe raw H.264 from the device to stdout.
adbCmd = ['adb', 'exec-out', 'screenrecord', '--output-format=h264', '-']
stream = sp.Popen(adbCmd, stdout=sp.PIPE)  # binary pipe: no universal_newlines

# Decode the H.264 stream into uncompressed BMP frames at 5 fps.
ffmpegCmd = ['ffmpeg', '-i', '-', '-f', 'rawvideo', '-vcodec', 'bmp', '-vf', 'fps=5', '-']
ffmpeg = sp.Popen(ffmpegCmd, stdin=stream.stdout, stdout=sp.PIPE)

while True:
    # BMP header: 2 magic bytes 'BM', then the file size as a little-endian uint32.
    fileSizeBytes = ffmpeg.stdout.read(6)
    fileSize = 0
    for i in range(4):
        fileSize += fileSizeBytes[i + 2] * 256 ** i
    bmpData = fileSizeBytes + ffmpeg.stdout.read(fileSize - 6)
    image = cv2.imdecode(np.frombuffer(bmpData, dtype=np.uint8), 1)
    cv2.imshow("im", image)
    cv2.waitKey(25)
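For reference, the byte-by-byte loop above reconstructs the little-endian 32-bit file size stored at offset 2 of each BMP header; `struct.unpack_from` expresses the same decoding more directly (a sketch with a fabricated header, not part of the original code):

```python
import struct

def bmp_file_size(header: bytes) -> int:
    """Return the total BMP file size from the first 6 header bytes.

    Bytes 0-1 are the 'BM' magic; bytes 2-5 hold the file size as an
    unsigned little-endian 32-bit integer.
    """
    (size,) = struct.unpack_from("<I", header, 2)
    return size

# A fake 6-byte header: 'BM' magic followed by size 70 in little-endian.
header = b"BM" + (70).to_bytes(4, "little")
print(bmp_file_size(header))  # → 70
```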
Now I'm trying to feed the output of ffmpeg (or of the adb command directly) into an aiortc media stream. Based on this example, I replaced the recv method with the following code:
async def recv(self):
    # Read one BMP frame from ffmpeg: 'BM' magic plus file size at offset 2.
    fileSizeBytes = ffmpeg.stdout.read(6)
    fileSize = 0
    for i in range(4):
        fileSize += fileSizeBytes[i + 2] * 256 ** i
    bmpData = fileSizeBytes + ffmpeg.stdout.read(fileSize - 6)
    image = cv2.imdecode(numpy.frombuffer(bmpData, dtype=numpy.uint8), 1)
    frame = VideoFrame.from_ndarray(image, format="bgr24")
    pts, time_base = await self.next_timestamp()
    frame.pts = pts
    frame.time_base = time_base
    self.counter += 1
    return frame
But this code does not stream correct video from the device screen, and no error is raised. I'm looking for a way to fix this. I also tried using the output of adbCmd directly, but that didn't work either.