Training each "head" of a multi-output neural network independently


I'm trying to train a model that uses a shared feature extractor which then splits into n "heads" made of small layers, each producing a different output.

When I train head "a" first, everything works fine, but when I switch to head "b", Python raises an InvalidArgumentError from TensorFlow. The same thing happens if I start with head "b" and then train head "a".

I've tried different approaches found on Stack Overflow, such as this one, but without success.

I build my model as follows:

from keras.layers import Input, ZeroPadding2D, LocallyConnected2D, Flatten, Dense, Activation
from keras.layers import advanced_activations
from keras.models import Model
from keras.optimizers import Adamax

# state_shape, action_number and PAS_INITIAL are defined elsewhere in my code
alphaLeaky = 0.3

# shared feature extractor
inputs = Input(shape=(state_shape[0], state_shape[1], state_shape[2]))
outputs = ZeroPadding2D(padding=(1, 1))(inputs)
outputs = LocallyConnected2D(1, (6, 6), activation='linear', padding='valid')(outputs)
outputs = Flatten()(outputs)
outputs = Dense(768, kernel_initializer='lecun_uniform', bias_initializer='zeros')(outputs)
outputs = advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs)

outputs = Dense(512, kernel_initializer='lecun_uniform', bias_initializer='zeros')(outputs)
outputs = advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs)

# head 1
outputs1 = Dense(256, kernel_initializer='lecun_uniform', bias_initializer='zeros')(outputs)
outputs1 = advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs1)
outputs1 = Dense(action_number, kernel_initializer='lecun_uniform', bias_initializer='zeros')(outputs1)
outputs1 = Activation('linear')(outputs1)

# head 2
outputs2 = Dense(256, kernel_initializer='lecun_uniform', bias_initializer='zeros')(outputs)
outputs2 = advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs2)
outputs2 = Dense(action_number, kernel_initializer='lecun_uniform', bias_initializer='zeros')(outputs2)
outputs2 = Activation('linear')(outputs2)

# head 3
outputs3 = Dense(256, kernel_initializer='lecun_uniform', bias_initializer='zeros')(outputs)
outputs3 = advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs3)
outputs3 = Dense(action_number, kernel_initializer='lecun_uniform', bias_initializer='zeros')(outputs3)
outputs3 = Activation('linear')(outputs3)

# one model per head, all sharing the same feature-extractor layers
model1 = Model(inputs=inputs, outputs=outputs1)
model2 = Model(inputs=inputs, outputs=outputs2)
model3 = Model(inputs=inputs, outputs=outputs3)

model1.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))
model2.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))
model3.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))

I then train them with the fit method.

For example, if I run model1.fit(...) it works, but when I then run model2.fit(...) or model3.fit(...) I get the following error:

W tensorflow/core/framework/op_kernel.cc:993] Invalid argument: You must feed a value for placeholder tensor 'activation_1_target' with dtype float
         [[Node: activation_1_target = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'activation_1_target' with dtype float
         [[Node: activation_1_target = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
         [[Node: dense_5/bias/read/_1075 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_60_dense_5/bias/read", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]


Caused by op 'activation_1_target', defined at:
  File "main.py", line 100, in <module>
    agent.init_brain()
  File "/dds/work/DQL/dql_last_version/8th_code_multi/agent_per.py", line 225, in init_brain
    self.brain = Brain_2D(self.state_shape,self.action_number)
  File "/dds/work/DQL/dql_last_version/8th_code_multi/brain.py", line 141, in __init__
    Brain.__init__(self, action_number)
  File "/dds/work/DQL/dql_last_version/8th_code_multi/brain.py", line 20, in __init__
    self.models, self.full_model = self._create_model()
  File "/dds/work/DQL/dql_last_version/8th_code_multi/brain.py", line 216, in _create_model
    neuralNet1.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/keras/engine/training.py", line 755, in compile
    dtype=K.dtype(self.outputs[i]))
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 497, in placeholder
    x = tf.placeholder(dtype, shape=shape, name=name)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1502, in placeholder
    name=name)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2149, in _placeholder
    name=name)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'activation_1_target' with dtype float
         [[Node: activation_1_target = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
         [[Node: dense_5/bias/read/_1075 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_60_dense_5/bias/read", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

I only want to optimize the weights of the head I choose, but it seems that once some input has passed through the network, it expects to be fed through the same head again, even when I want to train the other weights.

I have thought about building a single model with several outputs:

model = Model(inputs=inputs, outputs=[outputs1, outputs2, outputs3])

but I want each head to be trained on a different batch of data (I'm working on a reinforcement learning project).
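As far as I understand, if I compile that single model with one loss per output, fit expects the same input batch and one target per head in a single call, roughly like this (x, y1, y2, y3 are just placeholder names):

model.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL))
# every head receives the same input batch x here, which is what I want to avoid
model.fit(x, [y1, y2, y3], batch_size=32, epochs=1)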

Thank you!


1 Answer

I solved my problem.

In the end I compiled only a single model, but with n inputs and n outputs, where n is the number of heads. I associate a different batch with each input, so that each head is trained on its own data distribution.
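A minimal sketch of the idea, with a simplified shared extractor and hypothetical names (state_dim, action_number, PAS_INITIAL and the per-head batches are placeholders):

from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adamax

n_heads = 3

# shared layer object: created once, applied to every input,
# so its weights are shared across all heads
shared_dense = Dense(512, activation='relu')

head_inputs, head_outputs = [], []
for i in range(n_heads):
    inp = Input(shape=(state_dim,), name='input_%d' % i)
    features = shared_dense(inp)                      # shared weights
    out = Dense(action_number, activation='linear',
                name='head_%d' % i)(features)         # head-specific weights
    head_inputs.append(inp)
    head_outputs.append(out)

model = Model(inputs=head_inputs, outputs=head_outputs)
model.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL))

# training: a different batch for each input/head
model.fit([x_batch_1, x_batch_2, x_batch_3],
          [y_batch_1, y_batch_2, y_batch_3],
          batch_size=32, epochs=1)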

For the test part, I simply duplicate the same input n times and feed it to the model. It may not be the best approach, but it works.
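In code, the test step looks roughly like this (x is one hypothetical batch of states):

# inference: feed the same batch to every input, then read the head you need
predictions = model.predict([x] * n_heads)   # returns one array per head
q_values_head_0 = predictions[0]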

If you have any thoughts or comments on my solution, please don't hesitate to share them; I'd be glad to see other approaches.

Thanks
