<p>The warning appears because those operations are <a href="https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html" rel="nofollow noreferrer">not supported yet</a> by TensorRT, as you already mentioned.
Unfortunately, there is no easy way around this. You either have to modify the graph (even <a href="https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/graphsurgeon/graphsurgeon.html" rel="nofollow noreferrer">after training</a>) so that it uses only supported combinations of operations, or write those operations yourself as a <a href="https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#plugin_sample" rel="nofollow noreferrer">custom layer</a>; a sketch of the graph-modification route follows below.</p>
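<p>For the graph-modification route, NVIDIA's <code>graphsurgeon</code> package (linked above) can collapse unsupported nodes in a frozen graph into plugin placeholders that a TensorRT custom layer then implements. Below is a minimal sketch, assuming a frozen graph <code>frozen_model.pb</code>, a namespace <code>unsupported_scope</code> containing the offending ops, and a plugin op name <code>MyPlugin_TRT</code>; all three names are hypothetical and must be adapted to your model:</p>

<pre><code># Minimal graphsurgeon sketch, assuming a frozen graph "frozen_model.pb".
# "unsupported_scope" and "MyPlugin_TRT" are hypothetical names -- replace
# them with the namespace of the unsupported ops and your plugin's op name.
import graphsurgeon as gs
import tensorflow as tf

graph = gs.DynamicGraph("frozen_model.pb")

# Collapse every node under the unsupported namespace into a single plugin
# node that a TensorRT custom layer (IPluginV2) will implement at runtime.
plugin_node = gs.create_plugin_node(name="MyPlugin", op="MyPlugin_TRT")
graph.collapse_namespaces({"unsupported_scope": plugin_node})

# Serialize the modified graph for the UFF / TensorRT conversion step.
with tf.gfile.GFile("frozen_model_modified.pb", "wb") as f:
    f.write(graph.as_graph_def().SerializeToString())
</code></pre>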
<p>However, there is a better way to run inference in C++: you can use <a href="https://devblogs.nvidia.com/tensorrt-integration-speeds-tensorflow-inference/" rel="nofollow noreferrer">TensorFlow mixed with TensorRT together</a>. TensorRT analyzes the graph for the ops it supports and converts them to TensorRT nodes, while the rest of the graph is handled by TensorFlow as usual. More information <a href="https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html" rel="nofollow noreferrer">here</a>. This solution is much faster than rewriting the operations yourself (see the sketch after this paragraph). The only complicated part is building TensorFlow from source on the target device and <a href="https://medium.com/@fanzongshaoxing/use-tensorflow-c-api-with-opencv3-bacb83ca5683" rel="nofollow noreferrer">generating the dynamic library</a> <code>tensorflow_cc</code>. Recently there have been many guides for, and support of, TensorFlow ports to various architectures, e.g. <a href="https://jkjung-avt.github.io/build-tensorflow-1.8.0/" rel="nofollow noreferrer">ARM</a>.</p>
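<p>For reference, here is a minimal sketch of that conversion using the TensorFlow 1.x <code>contrib</code> API described in the TF-TRT guide above; the graph file <code>frozen_model.pb</code> and the output node name <code>logits</code> are hypothetical placeholders for your own model:</p>

<pre><code># Minimal TF-TRT sketch using the TensorFlow 1.x contrib API.
# "frozen_model.pb" and the output node "logits" are hypothetical --
# replace them with your own frozen graph and output tensor names.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# TensorRT replaces every supported subgraph with a TRTEngineOp node;
# everything else keeps running in TensorFlow as usual.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["logits"],
    max_batch_size=1,
    max_workspace_size_bytes=1073741824,  # 1 GiB for TensorRT engines
    precision_mode="FP16",                # or "FP32" / "INT8"
    minimum_segment_size=3)               # min ops per TensorRT subgraph

# The optimized graph can be imported and run like any other GraphDef.
tf.import_graph_def(trt_graph, name="")
</code></pre>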