How does the bias work in a PyTorch quantized convolution?

Posted 2024-10-06 08:12:48


I am trying to perform static post-training quantization in PyTorch. For this example, I am trying to quantize a Conv2d layer that has a bias:

import torch
import torch.nn as nn
import torch.quantization as tq

def quantize(model, input_shape):
    with torch.no_grad():
        # model = tq.QuantWrapper(model)
        observer = tq.PerChannelMinMaxObserver()
        model.qconfig = torch.quantization.QConfig(activation=tq.MinMaxObserver,
                                                   weight=observer.with_args(dtype=torch.qint8,
                                                                             qscheme=torch.per_channel_affine))
        #model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
        model = tq.QuantWrapper(model)
        tq.prepare(model, inplace=True)

        # calibration: run data through the prepared model so the observers record ranges
        for i in range(1000):
            x = torch.ones(2, *input_shape)
            #x = torch.randn(2, *input_shape)
            tmp = model(x)
        tq.convert(model, inplace=True)
    return model

input_shape = (5, 7, 7)
model_b = nn.Conv2d(input_shape[0], 2, 3, bias=True)
for p in model_b.parameters():
    torch.nn.init.zeros_(p)
model_b.bias.data.fill_(.5)  # weights are all zero, bias is 0.5
model_b = quantize(model_b, input_shape)
model_b.eval()
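
For reference, the two printouts below can be reproduced roughly like this (a sketch assuming the QuantWrapper layout above, i.e. the quantize stub is model_b.quant and the converted quantized Conv2d is model_b.module; int_repr() is the standard accessor for the raw uint8 values of a quantized tensor):

# Sketch: run the quantized conv directly, bypassing the DeQuantStub,
# to look at both the integer and the quantized-float view of the output.
x = torch.ones(1, *input_shape)
xq = model_b.quant(x)        # quantize the input with the calibrated scale/zero_point
yq = model_b.module(xq)      # quantized Conv2d -> quint8 tensor of shape (1, 2, 5, 5)
print(yq.int_repr())         # raw uint8 values (first printout below)
print(yq)                    # dequantized values plus scale/zero_point (second printout)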

The PyTorch documentation explicitly states that the bias is not quantized and is kept as a float tensor. The integer representation of the output is:

tensor([[[[255, 255, 255, 255, 255],
          [255, 255, 255, 255, 255],
          [255, 255, 255, 255, 255],
          [255, 255, 255, 255, 255],
          [255, 255, 255, 255, 255]],

         [[255, 255, 255, 255, 255],
          [255, 255, 255, 255, 255],
          [255, 255, 255, 255, 255],
          [255, 255, 255, 255, 255],
          [255, 255, 255, 255, 255]]]], dtype=torch.uint8)

However, the floating-point representation gives:

tensor([[[[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000, 0.5000, 0.5000]],

         [[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000, 0.5000, 0.5000]]]], size=(1, 2, 5, 5),
       dtype=torch.quint8, quantization_scheme=torch.per_tensor_affine,
       scale=0.0019607844296842813, zero_point=0)

I searched for information on this issue and concluded that the scale and zero point used to requantize the convolution output take the bias into account, and that during the GEMM operation the bias is quantized to int32_t before being added to the GEMM's int32_t result. In the example above, if the bias were simply cast to int32_t, both the integer and the floating-point output would be 0, since casting 0.5 to int32_t truncates it to 0.
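
If that conclusion is correct, the numbers in this example can be reproduced with a small emulation of the integer pipeline. This is only a sketch of the commonly described convention bias_scale = input_scale * weight_scale, not a trace of the actual fbgemm/qnnpack kernel, and the weight scale s_w below is a made-up placeholder because the weights here are all zero:

# Sketch: emulate the commonly described int32 bias handling for this example.
# s_in comes from MinMaxObserver seeing only ones, s_out/z_out are the values
# reported in the printout above; s_w is a hypothetical per-channel weight scale.
s_in = 1.0 / 255                 # input scale (activation observer saw values in 0..1)
s_out, z_out = 0.0019607844, 0   # output scale / zero_point (0.5 / 255)
s_w = 1e-7                       # placeholder weight scale

bias_fp = 0.5
acc_int32 = 0                                    # conv with all-zero weights -> zero accumulator
bias_int32 = int(round(bias_fp / (s_in * s_w)))  # bias quantized on the fly to int32
acc_int32 += bias_int32

# requantize the int32 accumulator into the output quint8 domain
out_q = int(round(acc_int32 * (s_in * s_w) / s_out)) + z_out
out_q = max(0, min(255, out_q))
print(out_q)          # 255  -> matches the integer representation above
print(out_q * s_out)  # ~0.5 -> matches the floating-point representation

Because the bias is divided by the tiny product input_scale * weight_scale, its int32 value is large rather than truncated to zero, which is why the output still comes out as 0.5.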

My question is: how is the bias quantized to int32_t if it is never converted into a quantized tensor?


Tags: true, input, model, tq, torch, pytorch, float, bias