Video processing with Python and OpenCV

Posted 2024-06-14 08:24:16


I am building a sign-language recognition system that takes live video from a webcam. The system has two coordinated parts: the first produces the keypoints and skeleton of the signer's hands, and the second takes that keypoint/skeleton data, analyzes it, and outputs what the signer is trying to say. My problem is passing the keypoint/skeleton video file from the first part to the analysis part. I don't know how serious the problem is, but I urgently need help.

I have tried several approaches I found online, but none of them seem to work.

cv2.imshow("Output Skeleton", frame)
# cv2.imwrite("video_output/{:03d}.jpg".format(k), frame)

key = cv2.waitKey(1)
if key == 27:
    break

print("total = {}".format(time.time() - t))

vid_writer.write(frame)


def predict(image_data):
    predictions = sess.run(softmax_tensor, \
                           {'DecodeJpeg/contents:0': image_data})

    # Sort to show labels of first prediction in order of confidence
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]

    max_score = 0.0
    res = ''
    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        if score > max_score:
            max_score = score
            res = human_string
    return res, max_score


# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line in tf.gfile.GFile("logs/trained_labels.txt")]

# Unpersists graph from file
with tf.gfile.FastGFile("logs/trained_graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # Feed the image_data as input to the graph and get first prediction
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')

    c = 0
    # cv2.imshow() returns None, so it cannot be used as a capture source;
    # pass a file path (the skeleton video from the first stage) or 0 for the webcam.
    input_source = "video_output/skeleton.avi"  # placeholder path
    cap = cv2.VideoCapture(input_source)

    res, score = '', 0.0
    i = 0
    mem = ''
    consecutive = 0
    sequence = ''
