I created a custom loss function:
def customLoss(true, pred):
    # do stuff
    # print(variables)
    return loss
Now I compile the model with model.compile(optimizer='Adamax', loss=customLoss), but the print statements never produce any output.
Edit: I tried tf.Print; here is my code and the result.
def customLoss(params):
    def lossFunc(true, pred):
        true = tf.Print(true, [true.shape], 'loss-func')  # obviously this won't work because the tensors aren't the same shape; however, this is what I want to do
        # stuff
        return loss
    return lossFunc

model = Model(inputs=[inputs], outputs=[outputs])
parallel_model = multi_gpu_model(model, gpus=8)
parallel_model.compile(optimizer='Adam', loss=customLoss(params), metrics=[mean_iou])
history = parallel_model.fit(X_train, Y_train, validation_split=0.25, batch_size=32, verbose=1)
In the output, the print statement still isn't printing. Am I missing something, or is my input to tf.Print incorrect?
It's not that Keras discards the output buffer or performs some magic behind the scenes: it simply never executes your print statements again! The loss function is called once to construct the computation graph; it returns a symbolic tensor representing the loss value, which TensorFlow then uses to compute the loss, gradients, and so on.
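This "trace once, run many times" behavior can be demonstrated outside of a loss function with a small tf.function sketch (my own illustration, not code from the question):

```python
import tensorflow as tf

trace_calls = []  # records how many times the Python body actually runs

@tf.function
def double(x):
    trace_calls.append(1)          # Python side effect: runs only while the graph is traced
    tf.print("graph op saw:", x)   # graph op: runs on every invocation
    return x * 2

double(tf.constant(1.0))
double(tf.constant(2.0))
# The Python body ran once (during tracing), even though the graph executed twice,
# so len(trace_calls) == 1. A bare print() in a Keras loss behaves the same way.
```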
Instead, you may be interested in tf.Print, a no-op whose side effect is printing the arguments passed to it. Because tf.Print is part of the computation graph, it also runs at training time; see the TensorFlow documentation for details.