import torch
import torch.nn as nn
batch_size = 2
hidden_size = 32
n_groups = 8
group_size = hidden_size // n_groups # = 4
# An example input tensor, e.g. the output of a fully-connected layer
x = torch.rand(batch_size, hidden_size)
# GroupNorm with affine disabled to simplify the inspection of results
gn1 = nn.GroupNorm(n_groups, hidden_size, affine=False)
r = gn1(x)
# Each row is split into n_groups (8) groups of size group_size (4),
# and the normalization is applied independently to each of these slices.
# We can check this for the first group, x[0, :group_size], with the following code:
first_group = x[0, :group_size]
normalized_first_group = (first_group - first_group.mean())/torch.sqrt(first_group.var(unbiased=False) + gn1.eps)
print(r[0, :group_size])
print(normalized_first_group)
if torch.allclose(r[0, :group_size], normalized_first_group):
    print('The result on the first group is the expected one')
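The same check extends to every group of every sample; here is a quick sketch of my own that reuses the variables defined above:

# Repeat the manual normalization for every group of every sample in the batch
# and compare it with GroupNorm's output.
for b in range(batch_size):
    for g in range(n_groups):
        group = x[b, g * group_size:(g + 1) * group_size]
        expected = (group - group.mean()) / torch.sqrt(group.var(unbiased=False) + gn1.eps)
        assert torch.allclose(r[b, g * group_size:(g + 1) * group_size], expected)
print('Every group of every sample matches the manual normalization')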
Your code is correct, but let's look at what happens in a small example, which is what the snippet at the top of this answer does.

The output of a fully-connected layer is usually a 2D tensor of shape (batch_size, hidden_size), so I will focus on this kind of input, but keep in mind that GroupNorm supports tensors with an arbitrary number of dimensions. In fact, GroupNorm always acts on the channel dimension (the second dimension of the input, which for a 2D tensor is also the last one). It treats all samples in the batch as independent and splits the channel dimension into n_groups groups, as shown in the figure. When the input tensor is 2D, the cubes in the figure become squares, because there is no third, spatial dimension; in practice, the normalization is therefore performed on contiguous, fixed-size slices of the rows of the input matrix, which is exactly what the code above verifies.
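To illustrate the point about higher-dimensional inputs, here is a minimal sketch of my own, assuming a 4D, image-like input of shape (N, C, H, W); the statistics are then computed per sample over each group of channels together with all spatial positions:

# Hypothetical example: 2 samples, 8 channels split into 4 groups of 2, 5x5 spatial size.
x4 = torch.rand(2, 8, 5, 5)
gn2 = nn.GroupNorm(4, 8, affine=False)
r4 = gn2(x4)
# For the first sample, the first group consists of channels 0 and 1; the mean and
# variance are computed over those two channels and all 5x5 spatial positions.
first_group_4d = x4[0, :2]  # shape (2, 5, 5)
expected_4d = (first_group_4d - first_group_4d.mean()) / torch.sqrt(first_group_4d.var(unbiased=False) + gn2.eps)
print(torch.allclose(r4[0, :2], expected_4d))  # should print True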