<h2>Dynamic placeholders</h2>
<p>Tensorflow allows a placeholder to have multiple <em>dynamic</em> (i.e. <code>None</code>) dimensions. While the graph is being built, the engine can't ensure correctness, so the client is responsible for feeding correct input, but in return this provides a lot of flexibility.</p>
<p>So I'm going from this:</p>
<pre class="lang-py prettyprint-override"><code>x = tf.placeholder(tf.float32, shape=[None, N*M*P])
y_ = tf.placeholder(tf.float32, shape=[None, N*M*P, 3])
...
x_image = tf.reshape(x, [-1, N, M, P, 1])
</code></pre>
<p>... to this:</p>
<pre class="lang-py prettyprint-override"><code>x_image = tf.placeholder(tf.float32, shape=[None, None, None, None, 1])
label = tf.placeholder(tf.float32, shape=[None, None, 3])
</code></pre>
<p>Since you intend to reshape the input to 5D anyway, why not use 5D in <code>x_image</code> right from the start. At this point, the second dimension of <code>label</code> is arbitrary, but we <em>promise</em> tensorflow that it will match <code>x_image</code>.</p>
<h2>Dynamic shapes in deconvolution</h2>
<p>Next, the nice thing about <a href="https://www.tensorflow.org/api_docs/python/tf/nn/conv3d_transpose" rel="noreferrer"><code>tf.nn.conv3d_transpose</code></a> is that its output shape can be dynamic. So instead of this:</p>
<pre class="lang-py prettyprint-override"><code># Hard-coded output shape
DeConnv1 = tf.nn.conv3d_transpose(layer1, w, output_shape=[1,32,32,7,1], ...)
</code></pre>
<p>... you can do this:</p>
<pre class="lang-py prettyprint-override"><code># Dynamic output shape
DeConnv1 = tf.nn.conv3d_transpose(layer1, w, output_shape=tf.shape(x_image), ...)
</code></pre>
<p>This way, the transposed convolution can be applied to <em>any</em> image, and the result takes the shape of whatever <code>x_image</code> was actually passed in at runtime.</p>
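<p>The shape arithmetic behind this can be sanity-checked in plain Python (a hand check of the <code>'SAME'</code>-padding rules, not TensorFlow's implementation): a stride-2 <code>'SAME'</code> convolution maps a spatial dimension <code>n</code> to <code>ceil(n / 2)</code>, and a stride-2 transposed convolution can produce a range of output sizes, which is why passing <code>output_shape=tf.shape(x_image)</code> is needed to pin the result back to exactly <code>n</code>:</p>

```python
import math

def conv_same_out(n, stride=2):
    # Output size of a 'SAME'-padded strided convolution: ceil(n / stride)
    return math.ceil(n / stride)

def deconv_same_out(n, stride=2):
    # A stride-s transposed convolution can produce any size in
    # [n*s - s + 1, n*s]; output_shape selects the exact value.
    return (n * stride - stride + 1, n * stride)

for n in (32, 16, 7):
    down = conv_same_out(n)
    lo, hi = deconv_same_out(down)
    # The original size n always falls in the valid range, which is
    # why output_shape=tf.shape(x_image) is a legal choice.
    print(n, down, lo <= n <= hi)
```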
<p>Note that the static shape of <code>x_image</code> is <code>(?, ?, ?, ?, 1)</code>.</p>
<h2>All-convolutional network</h2>
<p>The final and most important piece of the puzzle is making the whole network <strong>convolutional</strong>, and that includes the final dense layer too. A dense layer <em>must</em> define its dimensions statically, which forces the whole neural network to fix the input image dimensions.</p>
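<p>To see why a dense layer pins the input size while a <code>1x1x1</code> convolution does not, compare their weight shapes in a small NumPy sketch (illustrative only, the names here are made up): a <code>1x1x1</code> convolution is just a per-voxel matrix multiply over the channel axis, so its weights depend only on the channel counts, never on N, M, P.</p>

```python
import numpy as np

def conv_1x1x1(x, w):
    # x: (B, N, M, P, C_in), w: (C_in, C_out) — a 1x1x1 convolution
    # is a per-voxel matrix multiply over the channel axis.
    return np.einsum('bnmpc,cd->bnmpd', x, w)

w = np.random.randn(64, 3)          # weight shape independent of N, M, P
small = np.random.randn(1, 16, 16, 3, 64)
large = np.random.randn(1, 32, 32, 7, 64)
print(conv_1x1x1(small, w).shape)   # (1, 16, 16, 3, 3)
print(conv_1x1x1(large, w).shape)   # (1, 32, 32, 7, 3)

# A dense layer, by contrast, would need weights of shape
# (N * M * P * 64, 3), hard-coding one specific image size.
```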
<p>Fortunately, Springenberg et al. describe a way to replace an FC layer with a CONV layer in the <a href="https://arxiv.org/abs/1412.6806" rel="noreferrer">"Striving for Simplicity: The All Convolutional Net"</a> paper. I'm going to use a convolution with 3 <code>1x1x1</code> filters (see also <a href="https://stats.stackexchange.com/q/194142/130598">this question</a>):</p>
<pre class="lang-py prettyprint-override"><code>final_conv = conv3d_s1(final, weight_variable([1, 1, 1, 1, 3]))
y = tf.reshape(final_conv, [-1, 3])
</code></pre>
<p>If we make sure that <code>final</code> has the same dimensions as <code>DeConnv1</code> (and the others), it makes <code>y</code> exactly the shape we want: <code>[-1, N * M * P, 3]</code>.</p>
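<p>The reshape arithmetic can be checked with NumPy alone (a sanity check using the shapes from this answer): flattening a <code>final_conv</code> of shape <code>(B, N, M, P, 3)</code> with <code>[-1, 3]</code> lines up row for row with a label tensor of shape <code>(B, N*M*P, 3)</code> flattened the same way.</p>

```python
import numpy as np

B, N, M, P = 2, 16, 16, 3
final_conv = np.random.randn(B, N, M, P, 3)   # logits, one row of 3 per voxel
label = np.random.randn(B, N * M * P, 3)      # labels, one row of 3 per voxel

y = final_conv.reshape(-1, 3)
labels_flat = label.reshape(-1, 3)
print(y.shape, labels_flat.shape)  # (1536, 3) (1536, 3)

# Both flatten in C order, so row i of y and row i of labels_flat
# refer to the same voxel — which is what the per-voxel softmax
# cross-entropy between them assumes.
```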
<h2>Putting it all together</h2>
<p>Your network is pretty large, but all of the deconvolutions basically follow the same pattern, so I've simplified my <em>proof-of-concept</em> code to just one deconvolution. The goal is simply to show what kind of network is able to handle images of arbitrary size. One final remark: image dimensions can vary <em>between</em> batches, but within one batch they have to be the same.</p>
<p>The full code:</p>
<pre class="lang-py prettyprint-override"><code>sess = tf.InteractiveSession()
def conv3d_dilation(tempX, tempFilter):
return tf.layers.conv3d(tempX, filters=tempFilter, kernel_size=[3, 3, 1], strides=1, padding='SAME', dilation_rate=2)
def conv3d(tempX, tempW):
return tf.nn.conv3d(tempX, tempW, strides=[1, 2, 2, 2, 1], padding='SAME')
def conv3d_s1(tempX, tempW):
return tf.nn.conv3d(tempX, tempW, strides=[1, 1, 1, 1, 1], padding='SAME')
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def max_pool_3x3(x):
return tf.nn.max_pool3d(x, ksize=[1, 3, 3, 3, 1], strides=[1, 2, 2, 2, 1], padding='SAME')
x_image = tf.placeholder(tf.float32, shape=[None, None, None, None, 1])
label = tf.placeholder(tf.float32, shape=[None, None, 3])
W_conv1 = weight_variable([3, 3, 1, 1, 32])
h_conv1 = conv3d(x_image, W_conv1)
# second convolution
W_conv2 = weight_variable([3, 3, 4, 32, 64])
h_conv2 = conv3d_s1(h_conv1, W_conv2)
# third convolution path 1
W_conv3_A = weight_variable([1, 1, 1, 64, 64])
h_conv3_A = conv3d_s1(h_conv2, W_conv3_A)
# third convolution path 2
W_conv3_B = weight_variable([1, 1, 1, 64, 64])
h_conv3_B = conv3d_s1(h_conv2, W_conv3_B)
# fourth convolution path 1
W_conv4_A = weight_variable([3, 3, 1, 64, 96])
h_conv4_A = conv3d_s1(h_conv3_A, W_conv4_A)
# fourth convolution path 2
W_conv4_B = weight_variable([1, 7, 1, 64, 64])
h_conv4_B = conv3d_s1(h_conv3_B, W_conv4_B)
# fifth convolution path 2
W_conv5_B = weight_variable([1, 7, 1, 64, 64])
h_conv5_B = conv3d_s1(h_conv4_B, W_conv5_B)
# sixth convolution path 2
W_conv6_B = weight_variable([3, 3, 1, 64, 96])
h_conv6_B = conv3d_s1(h_conv5_B, W_conv6_B)
# concatenation
layer1 = tf.concat([h_conv4_A, h_conv6_B], 4)
w = tf.Variable(tf.constant(1., shape=[2, 2, 4, 1, 192]))
DeConnv1 = tf.nn.conv3d_transpose(layer1, filter=w, output_shape=tf.shape(x_image), strides=[1, 2, 2, 2, 1], padding='SAME')
final = DeConnv1
final_conv = conv3d_s1(final, weight_variable([1, 1, 1, 1, 3]))
y = tf.reshape(final_conv, [-1, 3])
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=label, logits=y))
print('x_image:', x_image)
print('DeConnv1:', DeConnv1)
print('final_conv:', final_conv)
def try_image(N, M, P, B=1):
    batch_x = np.random.normal(size=[B, N, M, P, 1])
    batch_y = np.ones([B, N * M * P, 3]) / 3.0
    deconv_val, final_conv_val, loss = sess.run([DeConnv1, final_conv, cross_entropy],
                                                feed_dict={x_image: batch_x, label: batch_y})
    print(deconv_val.shape)
    print(final_conv_val.shape)
    print(loss)
    print()
tf.global_variables_initializer().run()
try_image(32, 32, 7)
try_image(16, 16, 3)
try_image(16, 16, 3, 2)
</code></pre>