How to manually implement PyTorch convolution padding

Posted 2024-09-29 21:27:33


I'm trying to port some PyTorch code to TensorFlow 2.0, but I'm having trouble understanding how to translate the convolution functions between the two. The key difference is how the two libraries handle padding. Basically, I want to understand how to manually reproduce the padding PyTorch applies under the hood, so that I can translate it to TensorFlow.

The code below works if I do no padding at all, but once any padding is added I can't figure out how to make the two implementations match.

output_padding = SOME NUMBER
padding = SOME OTHER NUMBER
strides = 128

tensor = np.random.rand(2, 258, 249)
filters = np.random.rand(258, 1, 256)

out_torch = F.conv_transpose1d(
    torch.from_numpy(tensor).float(),
    torch.from_numpy(filters).float(),
    stride=strides,
    padding=padding,
    output_padding=output_padding)

def pytorch_transpose_conv1d(inputs, filters, strides, padding, output_padding):
    N, L_in = inputs.shape[0], inputs.shape[2]
    out_channels, kernel_size = filters.shape[1], filters.shape[2]
    time_out = (L_in - 1) * strides - 2 * padding + (kernel_size - 1) + output_padding + 1
    padW = (kernel_size - 1) - padding
    
    # HOW DO I PAD HERE TO GET THE SAME OUTPUT AS IN PYTORCH
    inputs = tf.pad(inputs, [(?, ?), (?, ?), (?, ?)])

    return tf.nn.conv1d_transpose(
        inputs,
        tf.transpose(filters, perm=(2, 1, 0)),
        output_shape=(N, out_channels, time_out),
        strides=strides,
        padding="VALID",
        data_format="NCW")

out_tf = pytorch_transpose_conv1d(tensor, filters, strides, padding, output_padding)
assert np.allclose(out_tf.numpy(), out_torch.numpy())

Tags: numpy, output, tf, np, torch, pytorch, out, filters
1 Answer

Answer #1 · Posted 2024-09-29 21:27:33

Padding


To translate convolution and transposed convolution functions (with padding) between PyTorch and TensorFlow, we first need to understand the F.pad() and tf.pad() functions.

torch.nn.functional.pad(input, pad, mode='constant', value=0):

  • pad: describes the padding sizes for some dimensions of input, starting from the last dimension and working backwards
  • to pad only the last dimension of the input tensor, pad has the form (padding_left, padding_right)
  • to pad the last 3 dimensions, use (padding_left, padding_right, padding_top, padding_bottom, padding_front, padding_back)

tf.pad(tensor, paddings, mode='CONSTANT', constant_values=0, name=None):

  • paddings: an integer tensor of shape [n, 2], where n is the rank of tensor. For each dimension D of the input, paddings[D, 0] indicates how many values to add before the contents of the tensor in that dimension, and paddings[D, 1] indicates how many values to add after

The table below showed the F.pad and tf.pad equivalents, along with their output tensors, for an input tensor [[[1, 1], [1, 1]]] of shape (1, 2, 2).

[Image: table of F.pad / tf.pad equivalents and their outputs]
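Since the table image is lost, here is a minimal sketch of the same equivalence (assuming torch and tensorflow are installed): padding the last dimension of a (1, 2, 2) tensor with both APIs.

```python
import numpy as np
import torch
import torch.nn.functional as F
import tensorflow as tf

x = np.array([[[1.0, 1.0], [1.0, 1.0]]])   # shape (1, 2, 2)

# PyTorch: pad sizes are listed starting from the LAST dimension
out_torch = F.pad(torch.from_numpy(x), [1, 2])      # last dim: 1 before, 2 after

# TensorFlow: one (before, after) pair PER dimension, first to last
out_tf = tf.pad(x, [[0, 0], [0, 0], [1, 2]])

print(out_torch.shape)                              # torch.Size([1, 2, 5])
assert np.allclose(out_torch.numpy(), out_tf.numpy())
```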


Padding in convolution


Now let's move on to PyTorch padding in convolution layers.

  1. F.conv1d(input, ..., padding, ...):

    • padding controls the amount of implicit zero padding on both sides for padding number of points
    • padding=(size) applies F.pad(input, [size, size]), i.e. pads the last dimension with (size, size), which is equivalent to tf.pad(input, [[0, 0], [0, 0], [size, size]])
  2. F.conv2d(input, ..., padding, ...):

    • padding=(size) applies F.pad(input, [size, size, size, size]), i.e. pads the last 2 dimensions with (size, size), which is equivalent to tf.pad(input, [[0, 0], [0, 0], [size, size], [size, size]])
    • padding=(size1, size2) applies F.pad(input, [size2, size2, size1, size1]), which is equivalent to tf.pad(input, [[0, 0], [0, 0], [size1, size1], [size2, size2]])
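The conv1d rule above can be checked with a small sketch (the shapes and values here are arbitrary): F.conv1d with padding=p should match a 'VALID' tf.nn.conv1d after manually padding the W axis with tf.pad.

```python
import numpy as np
import torch
import torch.nn.functional as F
import tensorflow as tf

p = 2                                 # padding on both sides
x = np.random.rand(2, 3, 10)          # input,  PyTorch layout (N, C_in, W)
w = np.random.rand(4, 3, 5)           # kernel, PyTorch layout (C_out, C_in, kernel_size)

out_torch = F.conv1d(torch.from_numpy(x), torch.from_numpy(w), padding=p).numpy()

# TensorFlow side: move to NWC, pad the W axis manually, run a VALID conv
x_nwc = np.transpose(x, (0, 2, 1))                        # (N, W, C_in)
x_pad = tf.pad(x_nwc, [[0, 0], [p, p], [0, 0]])
out_tf = tf.nn.conv1d(x_pad,
                      tf.transpose(w, perm=(2, 1, 0)),    # (kernel_size, C_in, C_out)
                      stride=1, padding='VALID')
out_tf = np.transpose(out_tf.numpy(), (0, 2, 1))          # back to (N, C_out, W_out)

assert np.allclose(out_torch, out_tf)
```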

Padding in transposed convolution


PyTorch padding in transposed convolution layers:

  1. F.conv_transpose1d(input, ..., padding, output_padding, ...):
    • dilation * (kernel_size - 1) - padding zero padding is added to both sides of each dimension in the input
    • in a transposed convolution, padding can be viewed as fake outputs that get removed
    • output_padding controls the additional size added to one side of the output shape
    • check this to understand exactly what happens during a transposed convolution in PyTorch
    • below is the formula for computing the output size of a transposed convolution:

output_size = (input_size - 1) * stride + (kernel_size - 1) + 1 + output_padding - 2 * padding
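This formula is easy to verify numerically against F.conv_transpose1d itself; the small sizes below are arbitrary (output_padding must stay smaller than the stride):

```python
import torch
import torch.nn.functional as F

W, k, stride, padding, output_padding = 7, 3, 3, 2, 1   # arbitrary small example
x = torch.rand(1, 1, W)
w = torch.rand(1, 1, k)            # PyTorch layout (in_ch, out_ch, kernel_size)

out = F.conv_transpose1d(x, w, stride=stride, padding=padding,
                         output_padding=output_padding)

expected = (W - 1) * stride + (k - 1) + 1 + output_padding - 2 * padding
assert out.shape[-1] == expected   # (7-1)*3 + 2 + 1 + 1 - 4 = 18
```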


Code


Transposed convolution

import torch
import torch.nn as nn
import torch.nn.functional as F
import tensorflow as tf
import numpy as np

# to stop a tf check-failed error; not relevant to the actual code
import os
os.environ["CUDA_DEVICE_ORDER"]    = "PCI_BUS_ID"   
os.environ["CUDA_VISIBLE_DEVICES"] = "1"




def tconv(tensor, filters, output_padding=0, padding=0, strides=1):
    '''
    tensor         : input tensor of shape (batch_size, channels, W) i.e (NCW)
    filters        : input kernel of shape (in_ch, out_ch, kernel_size)
    output_padding : single number, must be smaller than either stride or dilation
    padding        : single number, should be less than or equal to ((valid output size + output_padding) // 2)
    strides        : single number
    '''
    bs, in_ch, W = tensor.shape
    in_ch, out_ch, k_sz = filters.shape
    
    out_torch = F.conv_transpose1d(torch.from_numpy(tensor).float(), 
                                   torch.from_numpy(filters).float(),
                                   stride=strides, padding=padding, 
                                   output_padding=output_padding)
    out_torch = out_torch.numpy()
 
    # output_size = (input_size - 1)*stride + (kernel_size - 1) + 1 + output_padding - 2*padding
    # valid out size -> padding=0, output_padding=0
    # -> valid_out_size = (input_size - 1)*stride + (kernel_size - 1) + 1
    out_size  = (W - 1)*strides + (k_sz - 1) + 1

    # input shape -> (batch_size, W, in_ch) and filters shape -> (kernel_size, out_ch, in_ch) for tf conv
    valid_tf  = tf.nn.conv1d_transpose(np.transpose(tensor, axes=(0, 2, 1)), 
                                       np.transpose(filters, axes=(2, 1, 0)), 
                                       output_shape=(bs, out_size, out_ch), 
                                       strides=strides, padding='VALID', 
                                       data_format='NWC')
    # output padding
    tf_outpad = tf.pad(valid_tf, [[0, 0], [0, output_padding], [0, 0]])
    # NWC to NCW
    tf_outpad = np.transpose(tf_outpad, (0, 2, 1))

    # padding -> (input, begin, size) -> remove `padding` elements on both sides
    out_tf    = tf.slice(tf_outpad, [0, 0, padding], [bs, out_ch, tf_outpad.shape[2]-2*padding])

    out_tf    = np.array(out_tf)

    print('output size(tf, torch):', out_tf.shape, out_torch.shape)
    # print('out_torch:\n', out_torch)
    # print('out_tf:\n', out_tf)
    print('outputs are close:', np.allclose(out_tf, out_torch))



tensor  = np.random.rand(2, 1, 7)
filters = np.random.rand(1, 2, 3)
tconv(tensor, filters, output_padding=2, padding=5, strides=3)

Results

>>> tensor  = np.random.rand(2, 258, 249)
>>> filters = np.random.rand(258, 1, 7)
>>> tconv(tensor, filters, output_padding=4, padding=9, strides=6)
output size(tf, torch): (2, 1, 1481) (2, 1, 1481)
outputs are close: True

Some useful links:

  1. PyTorch 'SAME' convolution

  2. how PyTorch transposed convolution works
