<p>I am training a U-Net on TensorFlow 2. When I load the model it takes up almost all of the GPU memory (22 GB out of 26 GB), even though the model, with roughly 190 million parameters, should need at most about 1.5 GB. To understand the problem I tried loading a model with no layers at all, and to my surprise it still consumed the same amount of memory. My model code is attached below:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D

x = tf.keras.layers.Input(shape=(256, 256, 1))
model = Sequential([
    Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    # The original had Activation('relu')(Add()([conv5_0, conv5_2])) here, which
    # does not work: conv5_0/conv5_2 are undefined, and a Sequential model cannot
    # express skip connections -- that requires the Keras functional API.
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(2048, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(2048, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(2048, 3, padding='same', kernel_initializer='he_normal'),
    UpSampling2D(size=(2, 2)),
    Conv2D(1024, 2, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    UpSampling2D(size=(2, 2)),
    Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    UpSampling2D(size=(2, 2)),
    Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    UpSampling2D(size=(2, 2)),
    Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    UpSampling2D(size=(2, 2)),
    Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal'),
    Conv2D(1, 3, activation='linear', padding='same', kernel_initializer='he_normal')
])
y = model(x)
</code></pre>
<p>I commented out all of the layers and it still took 22 GB. I am running the code in a Jupyter notebook. I expected that adding <code>tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=x)</code> at the top of the notebook would solve the problem, but it did not. My goal is to run several scripts on the GPU at the same time so I can use my time more efficiently. Any help would be greatly appreciated. Thanks in advance!</p>
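<p>For context: by default TensorFlow maps nearly all free GPU memory as soon as the first op touches the device, which is why even an empty model appears to consume 22 GB; and in TF2 a bare <code>tf.compat.v1.GPUOptions</code> object has no effect unless it is wired into a v1 session config. The TF2-native way to avoid the upfront grab is to enable memory growth before any GPU work happens. A minimal sketch (this must run at the very top of the notebook, before building any model):</p>
<pre><code>import tensorflow as tf

# Must run before any tensor or op is placed on the GPU.
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    # Allocate GPU memory on demand instead of mapping almost all of it upfront.
    tf.config.experimental.set_memory_growth(gpu, True)
</code></pre>
<p>Equivalently, setting the environment variable <code>TF_FORCE_GPU_ALLOW_GROWTH=true</code> before launching Jupyter has the same effect without code changes.</p>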
<p>Note: this does not happen only with this code; any other TensorFlow module triggers it as well. For example, at one point in my code I call <code>tf.signal.ifft2d</code> before loading the model, and that alone takes up almost as much memory as the model. How can this be fixed?</p>
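<p>For the goal of running several scripts on one GPU concurrently, memory growth alone may not be enough, since each process can still grow without bound. A hard per-process cap, the TF2 analogue of <code>per_process_gpu_memory_fraction</code>, can be set with a logical device configuration; a sketch, assuming a 4096 MB slice per process (adjust for your workload):</p>
<pre><code>import tensorflow as tf

# Cap this process at a fixed slice of the card so several scripts can
# share the GPU. Must run before the GPU is initialized by any op.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)]  # MB
    )
</code></pre>
<p>Note that any op that touches the GPU, including a lone <code>tf.signal.ifft2d</code> call, initializes the device, so this configuration has to come before all of them.</p>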