How can I make normalizing a numpy array faster?


I am currently normalizing a numpy array in Python that is created by windowing an image, which produces roughly 20K patches. The current normalization implementation is a big pain point in my runtime, and I am trying to replace it with a C extension or something similar. I would like to hear the community's suggestions for a simple, easy way to do this.

The current runtime is about 0.34 s for the normalization part alone, and I am trying to get below 0.1 s or better. As you can see, creating the patches with windows (view_as_windows) is already very efficient. Note: you can simply comment/uncomment the lines marked "# ---- Normalization" to see the runtimes of the different implementations for yourself.

Here is the current implementation:

import gc
import os
import time

import cv2
import numpy
from libraries import GCN
from skimage.util.shape import view_as_windows


def create_imageArray(patch_list):
    # Copy every patch into one float array of shape (numPatches, 1, 40, 60).
    returnImageArray = numpy.zeros(shape=(len(patch_list), 1, 40, 60))
    idx = 0
    for patch, name, coords in patch_list:
        imgArray = numpy.asarray(patch[:, :], dtype=numpy.float32)
        imgArray = imgArray[numpy.newaxis, ...]  # add channel axis: (40, 60) -> (1, 40, 60)
        returnImageArray[idx] = imgArray
        idx += 1
    return returnImageArray


def NormalizeData(imageArray):
    tempImageArray = imageArray

    # Normalize the data in batches
    batchSize = 25000
    dataSize = tempImageArray.shape[0]
    imageChannels = tempImageArray.shape[1]
    imageHeight = tempImageArray.shape[2]
    imageWidth = tempImageArray.shape[3]

    for i in xrange(0, dataSize, batchSize):
        stop = i + batchSize
        print("Normalizing data [{0} to {1}]...".format(i, stop))
        dataTemp = tempImageArray[i:stop]
        dataTemp = dataTemp.reshape(dataTemp.shape[0], imageChannels * imageHeight * imageWidth)
        #print("Performing GCN [{0} to {1}]...".format(i, stop))
        dataTemp = GCN(dataTemp)
        #print("Reshaping data again [{0} to {1}]...".format(i, stop))
        dataTemp = dataTemp.reshape(dataTemp.shape[0], imageChannels, imageHeight, imageWidth)
        #print("Updating data with new values [{0} to {1}]...".format(i, stop))
        tempImageArray[i:stop] = dataTemp
    del dataTemp
    gc.collect()

    return tempImageArray


start_time = time.time()
img1_path = "777628-1032-0048.jpg"
img_list = ["images/1.jpg", "images/2.jpg", "images/3.jpg", "images/4.jpg", "images/5.jpg"]

patchWidth = 60
patchHeight = 40
channels = 1
stride = patchWidth // 6  # integer division: 10-pixel stride
multiplier = 1.31
finalImgArray = []
vaw_time = 0
norm_time = 0
array_time = 0

for im_path in img_list:
    start = time.time()
    baseFileWithExt = os.path.basename(im_path)
    baseFile = os.path.splitext(baseFileWithExt)[0]
    img = cv2.imread(im_path, cv2.IMREAD_GRAYSCALE)
    nxtWidth = 800
    nxtHeight = 1200
    patchesList = []
    for i in xrange(7):
        img = cv2.resize(img, (nxtWidth, nxtHeight))
        nxtWidth = int(nxtWidth//multiplier)
        nxtHeight = int(nxtHeight//multiplier)
        patches = view_as_windows(img, (patchHeight, patchWidth), stride)
        cols = patches.shape[0]
        rows = patches.shape[1]
        patchCount = cols*rows
        print "patchCount:",patchCount, "     patches.shape:",patches.shape
        returnImageArray = numpy.zeros(shape=(patchCount, channels, patchHeight, patchWidth))
        idx = 0

        for col in xrange(cols):
            for row in xrange(rows):
                patch = patches[col][row]
                imageName = "{0}-patch{1}-{2}.jpg".format(baseFile, i, idx)
                patchCoordinates = (0, 1, 2, 3)  # don't need these for this example
                patchesList.append((patch, imageName, patchCoordinates))
                # ---- Normalization inside 7 iterations <> Part 1
                # imgArray = numpy.asarray(patch[:,:], dtype=numpy.float32)
                # imgArray = patch.astype(numpy.float32)
                # imgArray = imgArray[numpy.newaxis, ...] # Add a new axis for channel so goes from shape [40,60] to [1,40,60]
                # returnImageArray[idx] = imgArray
                idx += 1

        # if i == 0: finalImgArray = returnImageArray
        # else: finalImgArray = numpy.concatenate((finalImgArray, returnImageArray), axis=0)

    vaw_time += time.time() - start

    # ---- Normalization inside 7 iterations <> Part 2
    # start = time.time()
    # normImageArray = NormalizeData(finalImgArray)
    # norm_time += time.time() - start
    # print "returnImageArray.shape:", finalImgArray.shape

    # ---- Normalization outside 7 iterations
    start = time.time()
    imgArray = create_imageArray(patchesList)
    array_time += time.time() - start

    start = time.time()
    normImgArray = NormalizeData(imgArray)
    norm_time += time.time() - start
    print "len(patchesList):",len(patchesList)


total_time = (time.time() - start_time)/len(img_list)
print "\npatches_time per img: {0:.3f} s".format(vaw_time/len(img_list))
print "create imgArray per img: {0:.3f} s".format(array_time/len(img_list))
print "normalization_time per img: {0:.3f} s".format(norm_time/len(img_list))
print "total time per image: {0:.3f} s \n".format(total_time)

Here is the GCN code, in case you need to download it to run this: http://pastebin.com/RdVMD2P3

More details about the code inside GCN:

I call GCN with its default parameters. [Image: GCN parameters]

At a high level, GCN takes the mean of all the pixels and then divides every pixel by that mean. So for an image array like [1, 2, 3], the mean is 2, and dividing each value by 2 gives [0.5, 1, 1.5]. That is all the normalization does. One thing I forgot to highlight in the image above: the mean is taken per row, i.e. mean = X.mean(axis=1). [Image: normalization function]
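Based only on that description, the divide-by-mean step can be vectorized over the whole patch array at once with broadcasting, with no batching loop and no reshape round-trip. This is a minimal sketch (the function name normalize_by_mean is mine) that reproduces just the behaviour described above, one mean per flattened patch, and is not a drop-in replacement for everything GCN() does with its default parameters:

import numpy

def normalize_by_mean(imageArray):
    # Flatten each (1, 40, 60) patch to one row and work in float32 to cut memory traffic.
    data = imageArray.reshape(imageArray.shape[0], -1).astype(numpy.float32)
    means = data.mean(axis=1, keepdims=True)  # one mean per patch, shape (N, 1)
    data /= means                             # broadcasted in-place divide
    return data.reshape(imageArray.shape)

Whether this beats the batched GCN call is worth timing on your data, but a single float32 pass over ~20K patches of 40x60 pixels is usually far cheaper than the reshape/copy cycle above.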

Notes: In case you are wondering why I re-iterate and create a new imgArray for the normalization instead of doing it during the initial patch creation, it is to keep data transfer to a minimum. I run this with the multiprocessing library, and serializing the data takes a long time, so I try to keep the amount of serialized data as small as possible (meaning the less data passed back from the worker processes, the better). I have measured the difference between doing it inside the 7 loops and outside; both variants are left in the code above (commented out) so I can keep working with them. That said, if you know of a faster implementation, please do let me know.
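As a rough illustration of the serialization concern (shapes taken from this script, and assuming the patches start out as the 8-bit grayscale values that cv2.imread with IMREAD_GRAYSCALE returns), the same patch block grows several times larger once it is converted to floats:

import numpy

patches_uint8 = numpy.zeros((20000, 1, 40, 60), dtype=numpy.uint8)
print(patches_uint8.nbytes)                           # 48,000,000 bytes as uint8
print(patches_uint8.astype(numpy.float64).nbytes)     # 384,000,000 bytes as float64

So passing the raw patches back and only converting/normalizing in the parent process keeps the pickled payload about 8x smaller than shipping float64 arrays around.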

Runtimes when creating the imageArray inside the 7 loops:


Runtimes when creating the imageArray and normalizing outside the 7 iterations:

patches_time per img: 0.040 s
create imgArray per img: 0.146 s
normalization_time per img: 0.339 s
total time per image: 0.524 s 

I had not noticed this before, but creating the array also takes a fair amount of time.
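For what it's worth, that array-creation step can be shrunk considerably by building the array straight from the view_as_windows output with one reshape, instead of copying 20K patches in a Python loop. This is a minimal sketch (the helper name patches_to_array is mine), assuming the per-patch names and coordinates can be generated separately if they are still needed:

import numpy
from skimage.util.shape import view_as_windows

def patches_to_array(img, patchHeight=40, patchWidth=60, stride=10):
    # (rows, cols, 40, 60) view over the image; no pixel data is copied yet
    windows = view_as_windows(img, (patchHeight, patchWidth), stride)
    # Collapse the window grid into one axis (this is where the copy happens),
    # then add the channel axis and convert to float32: (N, 1, 40, 60)
    block = windows.reshape(-1, patchHeight, patchWidth)
    return block[:, numpy.newaxis, :, :].astype(numpy.float32)

The result can be fed directly into the normalization step, so the per-patch loop in create_imageArray drops out of the timing entirely.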

