How can I use a PIL screen capture as the video source in OpenCV and TensorFlow?


But first: if anyone has a better way to get screen captures into OpenCV, I'm all ears. This seems to be how most people do it.

I want to use a live screen capture in OpenCV for object detection. I have no problem getting the video to display:

printscreen_pil = ImageGrab.grab()
printscreen_numpy = np.array(printscreen_pil.getdata(), dtype='uint8')\
    .reshape((printscreen_pil.size[1], printscreen_pil.size[0], 3))
cv2.imshow('window', printscreen_numpy)

But when I try to use 'ret, frame = video.read()'

I get: "AttributeError: 'numpy.ndarray' object has no attribute 'read'"

I have to assume printscreen_numpy is in the wrong format, so how do I convert it into a video that OpenCV can read?
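As far as I can tell, .read() is something a cv2.VideoCapture object has, not a numpy array: the screenshot is already a single finished frame, so there is no source left to "read" from. A minimal sketch of the difference as I understand it (the webcam at index 0 is just an assumption for comparison):

import cv2
import numpy as np
from PIL import ImageGrab

# A cv2.VideoCapture is a video *source*; .read() pulls the next frame from it.
cap = cv2.VideoCapture(0)        # assumes a webcam exists at index 0
ret, frame = cap.read()          # frame is a numpy array (BGR) if ret is True
cap.release()

# ImageGrab.grab() already returns one finished image; converting it to a
# numpy array gives the frame directly, so there is nothing to .read() from.
screenshot = np.array(ImageGrab.grab())   # RGB numpy array, shape (H, W, 3)
# screenshot.read()  # -> AttributeError: 'numpy.ndarray' object has no attribute 'read'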

This is where I got the code from:

Screen Capture with OpenCV and Python-2.7

I've tried plugging the video into video.read() in all sorts of combinations, but with no success.

Edit: I have tried:

printscreen_pil = ImageGrab.grab() ---> printscreen_pil.read()
as well as:
printscreen_pil = np.array(ImageGrab.grab())
and so on.
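Both of those seem to hit the same wall: as far as I can tell, neither a PIL Image nor a numpy array is a video-source object, so neither one has a .read() method. A quick check along these lines (same imports as above):

from PIL import ImageGrab

img = ImageGrab.grab()
print(type(img))              # PIL.Image.Image
print(hasattr(img, 'read'))   # False, which is why img.read() blows up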

The relevant block of code at the moment:

while(True):
    printscreen_pil = ImageGrab.grab()
    printscreen_numpy = np.array(printscreen_pil.getdata(), dtype='uint8')\
        .reshape((printscreen_pil.size[1], printscreen_pil.size[0], 3))
    cv2.imshow('window', printscreen_numpy)

    # Acquire frame and expand frame dimensions to have shape: [1, None, None, 3]
    # i.e. a single-column array, where each item in the column has the pixel RGB value
    ret, frame = printscreen_numpy.read()   # <-- this is the line that raises the AttributeError
    frame_expanded = np.expand_dims(frame, axis=0)

    # Perform the actual detection by running the model with the image as input
    (boxes, scores, classes, num) = sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: frame_expanded})

    # Draw the results of the detection (aka 'visualize the results')
    vis_util.visualize_boxes_and_labels_on_image_array(
        frame,
        np.squeeze(boxes),
        np.squeeze(classes).astype(np.int32),
        np.squeeze(scores),
        category_index,
        use_normalized_coordinates=True,
        line_thickness=8,
        min_score_thresh=0.60)

    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break
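For completeness, this is the rough direction I've been experimenting with instead: dropping the .read() call and treating the grabbed array itself as the frame. The cv2.cvtColor RGB-to-BGR swap is my own assumption (PIL gives RGB, OpenCV expects BGR), and I haven't wired it back into the detector yet, so treat it as a sketch rather than a working answer:

import cv2
import numpy as np
from PIL import ImageGrab

while(True):
    # Treat the grabbed screenshot itself as the frame -- no .read() anywhere.
    frame = np.array(ImageGrab.grab())              # RGB array, shape (H, W, 3)
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)  # OpenCV windows expect BGR

    frame_expanded = np.expand_dims(frame, axis=0)
    # (boxes, scores, classes, num) = sess.run(...)  # detection step as in the block above

    cv2.imshow('Object detector', frame)

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break

cv2.destroyAllWindows()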


Tags: numpy, read, size, video, pil, np, array