I am trying to convert a frozen SSD MobileNet v2 model to TFLite format for use on Android. Here are all my steps:
1. Retrained with train.py from the TF Object Detection API, starting from ssd_mobilenet_v2_coco_2018_03_29 from the model zoo. (OK)
2. Exported the trained model.ckpt to a frozen model file with export_inference_graph.py, provided by the TF Object Detection API. (OK)
3. Tested the frozen graph in Python, both on the GPU and restricted to CPU only. It works. (OK)
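For reference, the export in step 2 was roughly the standard invocation below (a sketch; the config path and checkpoint number are placeholders for my setup):

```shell
# Export the trained checkpoint to a frozen inference graph
# (paths and checkpoint number are placeholders).
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/ssd_mobilenet_v2_coco.config \
    --trained_checkpoint_prefix training/model.ckpt-XXXX \
    --output_directory inference_graph
```

This produces frozen_inference_graph.pb and the saved_model/ directory used below.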
Then I tried to convert it with the following code:
import tensorflow as tf
tf.enable_eager_execution()
saved_model_dir = 'inference_graph/saved_model/'
# Tensor names taken from the SavedModel's serving_default signature
input_arrays = ["image_tensor"]
output_arrays = ["num_detections", "detection_boxes", "detection_scores", "detection_classes"]
converter = tf.contrib.lite.TFLiteConverter.from_saved_model(
    saved_model_dir, input_arrays=input_arrays, output_arrays=output_arrays,
    input_shapes={"image_tensor": [1, 832, 832, 3]})
converter.post_training_quantize = True
At first I tried without the input_shapes argument, but that didn't work. Then I read that you can put almost anything there, it doesn't really matter.
The output up to this line:
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:The specified SavedModel has no variables; no checkpoints were restored.
INFO:tensorflow:The given SavedModel MetaGraphDef contains SignatureDefs with the following keys: {'serving_default'}
INFO:tensorflow:input tensors info:
INFO:tensorflow:Tensor's key in saved_model's tensor_map: inputs
INFO:tensorflow: tensor name: image_tensor:0, shape: (-1, -1, -1, 3), type: DT_UINT8
INFO:tensorflow:output tensors info:
INFO:tensorflow:Tensor's key in saved_model's tensor_map: num_detections
INFO:tensorflow: tensor name: num_detections:0, shape: (-1), type: DT_FLOAT
INFO:tensorflow:Tensor's key in saved_model's tensor_map: detection_boxes
INFO:tensorflow: tensor name: detection_boxes:0, shape: (-1, 100, 4), type: DT_FLOAT
INFO:tensorflow:Tensor's key in saved_model's tensor_map: detection_scores
INFO:tensorflow: tensor name: detection_scores:0, shape: (-1, 100), type: DT_FLOAT
INFO:tensorflow:Tensor's key in saved_model's tensor_map: detection_classes
INFO:tensorflow: tensor name: detection_classes:0, shape: (-1, 100), type: DT_FLOAT
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:The specified SavedModel has no variables; no checkpoints were restored.
INFO:tensorflow:Froze 0 variables.
INFO:tensorflow:Converted 0 variables to const ops.
Then I ran the conversion:
tflite_quantized_model = converter.convert()
And this is the output:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-6-61a136476642> in <module>
----> 1 tflite_quantized_model = converter.convert()
~/.local/lib/python3.5/site-packages/tensorflow/contrib/lite/python/lite.py in convert(self)
451 input_tensors=self._input_tensors,
452 output_tensors=self._output_tensors,
--> 453 **converter_kwargs)
454 else:
455 # Graphs without valid tensors cannot be loaded into tf.Session since they
~/.local/lib/python3.5/site-packages/tensorflow/contrib/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, *args, **kwargs)
340 data = toco_convert_protos(model_flags.SerializeToString(),
341 toco_flags.SerializeToString(),
--> 342 input_data.SerializeToString())
343 return data
344
~/.local/lib/python3.5/site-packages/tensorflow/contrib/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str)
133 else:
134 raise RuntimeError("TOCO failed see console for info.\n%s\n%s\n" %
--> 135 (stdout, stderr))
136
137
RuntimeError: TOCO failed see console for info.
I can't paste the full console output here because it exceeds the 30000 character limit, but you can see it here: https://pastebin.com/UyT2x2Vk
Please help me at this point, what can I do to make it work?
My configuration: Ubuntu 16.04, TensorFlow GPU 1.12
Thanks in advance!
I had the same problem last week and solved it by following the steps described here.
Basically, the problem is that their main script does not support SSD models. Instead of using bazel to do this, I used the tflite_convert utility. Be careful with the export_tflite_ssd_graph.py script: read all of its options before using it (mainly --max_detections, which saved my life). Hope this helps.
Edit: Your step 2 is invalid. If the saved_model contains an SSD, it cannot be converted to a tflite model. You need to export the trained model.ckpt using the export_tflite_ssd_graph.py script, and then convert the created .pb file to tflite with the tflite_convert tool.