From the TensorFlow documentation:
https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization
"Normalize the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1."
So I expected this layer to compute the mean and standard deviation of the previous layer's output, subtract the mean, and divide by the standard deviation for each sample in the batch. But apparently I was wrong:
import numpy as np
import tensorflow as tf

if __name__ == "__main__":
    # flattened tensor, batch size of 2
    xnp = np.array([[1, 2, 3], [4, 5, 6]])
    xtens = tf.constant(xnp, dtype=tf.float32)
    nbatchnorm = tf.keras.layers.BatchNormalization()(xtens)
    # TensorFlow output
    print(nbatchnorm)

    # what I expect to see: mean 0 and standard deviation 1 for each sample
    xmean = np.mean(xnp, axis=1)
    xstd = np.std(xnp, axis=1)
    normalized = (xnp - xmean.reshape(-1, 1)) / xstd.reshape(-1, 1)
    print(normalized)
Output:
tf.Tensor(
[[0.9995004 1.9990008 2.9985013]
[3.9980016 4.997502 5.9970026]], shape=(2, 3), dtype=float32)
[[-1.22474487 0. 1.22474487]
[-1.22474487 0. 1.22474487]]
Can someone explain to me why these outputs are not the same, or at least similar? I don't see how this is normalizing anything.
Well, what Batch Normalization actually computes depends on several details of its algorithm, which are explained below.
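Two things are going on. First, when a BatchNormalization layer is called directly (outside of training), it runs in inference mode: it ignores the batch statistics entirely and uses its moving mean and moving variance, which are initialized to 0 and 1. The output is therefore x / sqrt(1 + epsilon) ≈ 0.9995 * x with the default epsilon of 1e-3, which is exactly the tensor printed in the question. Second, even in training mode the layer normalizes each feature across the batch (axis 0), not each sample across its features (axis 1) as the question's NumPy code does. A small sketch demonstrating both points:

```python
import numpy as np
import tensorflow as tf

x = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.float32)
bn = tf.keras.layers.BatchNormalization()  # epsilon defaults to 1e-3

# Inference mode: batch statistics are ignored; the layer uses its
# moving mean (initialized to 0) and moving variance (initialized to 1),
# so the output is x / sqrt(1 + 1e-3), roughly 0.9995 * x.
inference_out = bn(x, training=False)
print(inference_out)  # matches the tensor printed in the question

# Training mode: each feature (column) is normalized across the batch.
training_out = bn(x, training=True)
print(training_out)

# The same computation by hand in NumPy, per column (axis 0),
# not per row (axis 1) as in the question's code:
xnp = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
expected = (xnp - xnp.mean(axis=0)) / np.sqrt(xnp.var(axis=0) + 1e-3)
print(expected)  # approximately [[-1, -1, -1], [1, 1, 1]]
```

So the per-sample normalization the question expects would correspond to something like LayerNormalization, not BatchNormalization, and the "do nothing visible" output simply reflects the freshly initialized moving statistics in inference mode.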