<p>I am testing a custom loss function that is supposed to minimize MSE and cross-entropy at the same time (one classification output and one regression output from the same network). I defined the loss below, but the loss does not seem to decrease during training. I am new to deep learning, so any advice is welcome. Thanks.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.preprocessing import MinMaxScaler
from keras.utils import to_categorical

# synthetic data: one 3-class classification target (y1) and four regression targets
x1 = np.random.uniform(0, 100, 1000)
x2 = np.random.uniform(-20, 20, 1000)
y1 = [int(i) for i in x1 % 3]
y2 = x1 * 1.5 + np.random.randn(1000)
y3 = 2 * x1 * x1 + np.random.randn(1000)
y4 = x1 * 2.5 - 10 + np.random.randn(1000)
y5 = x1 * 2.5 + 10 + np.random.randn(1000)
xs = np.array([x1, x2]).T
y1s = np.array(y1).reshape(-1, 1)
y2s = np.array([y2, y3, y4, y5]).T
y1s = to_categorical(y1s)  # one-hot encode the 3 classes

scaler = MinMaxScaler()
xs_normed = scaler.fit_transform(xs)
from keras.layers import Dense, Input
from keras.models import Model
from keras.losses import mse, categorical_crossentropy
import keras.backend as K

def loss_fun(real, pred, alpha=1, beta=1):
    # intended: index 0 = classification head, index 1 = regression head
    c1_pred = pred[0]
    c1_real = real[0]
    c2_pred = pred[1]
    c2_real = real[1]
    loss1 = mse(c2_real, c2_pred)
    loss2 = categorical_crossentropy(c1_real, c1_pred)
    loss = K.sum(alpha * loss1 + beta * loss2)
    return loss
inputs = Input(shape=(2,))
d1 = Dense(256, activation='relu')(inputs)
d2 = Dense(256, activation='relu')(d1)
d3 = Dense(3, activation='softmax')(d2)  # classification head
d5 = Dense(256)(d2)
d6 = Dense(4)(d5)                        # regression head (4 targets)

model = Model(inputs=inputs, outputs=[d3, d6])
model.compile(optimizer='Adam', loss=loss_fun)
model.fit(x=xs_normed, y=[y1s, y2s], batch_size=128, epochs=100)
</code></pre>
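<p>For reference, the quantity I want the training loss to equal is <code>alpha * MSE + beta * cross-entropy</code>. A plain-NumPy sketch of just that computation (a hypothetical standalone helper, not the Keras code above), so it is clear what I expect the network to minimize:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def combined_loss(y_class_true, y_class_pred, y_reg_true, y_reg_pred,
                  alpha=1.0, beta=1.0, eps=1e-7):
    # mean squared error over the regression targets
    mse_part = np.mean((y_reg_true - y_reg_pred) ** 2)
    # categorical cross-entropy over the one-hot class targets
    ce_part = -np.mean(np.sum(y_class_true * np.log(y_class_pred + eps), axis=1))
    return alpha * mse_part + beta * ce_part

# tiny sanity check on hand-made predictions
y_c = np.array([[1., 0., 0.], [0., 1., 0.]])
p_c = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]])
y_r = np.array([[1.0], [2.0]])
p_r = np.array([[1.1], [1.9]])
print(combined_loss(y_c, p_c, y_r, p_r))
</code></pre>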