Cython NumPy code no faster than pure Python

Posted 2024-10-01 09:36:30


First off, I know there are many questions on similar topics, but after a day of searching, reading, and testing I still can't find an answer.

I have a Python function that computes the pairwise correlations of the rows of a NumPy ndarray (m x n). I originally did this in plain NumPy, but that approach also computed the reciprocal pairs (i.e., as well as the correlation between row A and row B of the matrix, it computed the correlation between row B and row A), so I switched to a slightly different approach that works better for matrices with large m (the actual size in my problem is m ~ 8000).
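That upper-triangle idea, correlating each row only against the rows after it, can be sketched in plain NumPy like this (an illustration of the approach, not the asker's original code):

```python
import numpy as np

def upper_triangle_correlations(a):
    """Correlate each row with every later row, skipping the
    reciprocal (j, i) duplicates and the (i, i) self-correlations."""
    a = np.asarray(a, dtype=float)
    centered = a - a.mean(axis=1, keepdims=True)
    norms = np.sqrt((centered ** 2).sum(axis=1))
    parts = []
    for i in range(len(a) - 1):
        # one vectorized dot of row i against rows i+1 .. m-1
        r = centered[i + 1:] @ centered[i] / (norms[i + 1:] * norms[i])
        parts.append(r)
    return np.concatenate(parts)

rng = np.random.default_rng(0)
x = rng.integers(0, 3, size=(5, 20))  # toy genotype-like matrix
r = upper_triangle_correlations(x)
print(r.shape)  # m*(m-1)/2 = 10 unique pairs
```

For m rows this produces m*(m-1)/2 correlations instead of m*m, matching the upper triangle of `np.corrcoef`.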

This is great, but it's still a bit slow: there will be many such matrices, and doing them all will take a long time. So I started looking into Cython as a way to speed things up. From what I've read, though, Cython doesn't really speed up NumPy code much. Is that true, or is there something I'm missing?

I think the bottlenecks below are the calls to np.sqrt and np.dot, the .T attribute of the ndarray, and np.absolute. I've seen people use sqrt from libc.math to replace np.sqrt, so I suppose my first question is: do equivalent functions in libc.math exist for the other calls I could use? I'm afraid I'm completely unfamiliar with C/C++ or any language in the C family, so this typing-and-Cython business is new territory for me; apologies if the cause/solution is obvious.
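For what it's worth, Cython's libc.math does expose scalar counterparts such as `sqrt` and `fabs` (there is no libc equivalent of `np.dot` or `.T`), but those only pay off when they replace *scalar* calls: `np.sqrt` applied to a whole array already runs its loop in C. The same distinction can be seen in pure Python with `math.sqrt` versus `np.sqrt` (illustrative only, not the asker's code):

```python
import math
import timeit

import numpy as np

# For a single scalar, math.sqrt avoids NumPy's ufunc dispatch overhead,
# much as a libc.math sqrt call does inside Cython.
t_math = timeit.timeit(lambda: math.sqrt(2.0), number=100_000)
t_np = timeit.timeit(lambda: np.sqrt(2.0), number=100_000)
print(f"math.sqrt: {t_math:.4f}s  np.sqrt: {t_np:.4f}s")

# For whole arrays the situation reverses: np.sqrt loops in C already,
# so a scalar replacement has nothing to win back.
arr = np.arange(1.0, 1001.0)
roots = np.sqrt(arr)
```

In other words, swapping in libc.math helps only where the code calls a NumPy function on one number at a time inside a tight loop.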

Failing that, is there anything else I can do to improve performance?

Below are my pyx code, the setup code, and the call to the pyx function. I don't know if it matters, but when I run `python setup.py build_ext --inplace` it works, yet with a lot of warnings I don't really understand. Could that also be why I'm not seeing a speed-up?

Any help is much appreciated, and apologies for the very long post.

setup.py

from distutils.core import setup
from distutils.extension import Extension
import numpy
from Cython.Distutils import build_ext


setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = [Extension("calcBrownCombinedP", 
                            ["calcBrownCombinedP.pyx"], 
                            include_dirs=[numpy.get_include()])]
)
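(As an aside: later Cython releases recommend building through `cythonize` from `Cython.Build` with setuptools, rather than the `build_ext` class from `Cython.Distutils` used above. A roughly equivalent modern setup.py, assuming the same file layout, might look like this:)

```python
from setuptools import setup, Extension
from Cython.Build import cythonize
import numpy

# cythonize() generates the C source and returns ready-to-build extensions.
setup(
    ext_modules=cythonize(
        Extension(
            "calcBrownCombinedP",
            ["calcBrownCombinedP.pyx"],
            include_dirs=[numpy.get_include()],
        )
    )
)
```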

And the output from setup:

^{pr2}$

The pyx code - 'calcBrownCombinedP.pyx'

import numpy as np
cimport numpy as np
from scipy import stats
DTYPE = np.int
ctypedef np.int_t DTYPE_t

def calcBrownCombinedP(np.ndarray genotypeArray):
    cdef int nSNPs, i
    cdef np.ndarray ms, datam, datass, d, rs, temp
    cdef float runningSum, sigmaSq, E, df 
    nSNPs = genotypeArray.shape[0]
    ms = genotypeArray.mean(axis=1)[(slice(None,None,None),None)]
    datam = genotypeArray - ms
    datass = np.sqrt(stats.ss(datam,axis=1)) 
    runningSum = 0
    for i in xrange(nSNPs):
        temp = np.dot(datam[i:],datam[i].T)
        d = (datass[i:]*datass[i])
        rs = temp / d
        rs = np.absolute(rs)[1:]
        runningSum += sum(rs*(3.25+(0.75*rs)))

    sigmaSq = 4*nSNPs+2*runningSum

    E = 2*nSNPs

    df = (2*(E*E))/sigmaSq

    runningSum = sigmaSq/(2*E)
    return runningSum

The code that tests the above against pure Python - 'test.py'

import numpy as np
from scipy import stats
import random
import time
from calcBrownCombinedP import calcBrownCombinedP
from PycalcBrownCombinedP import PycalcBrownCombinedP

ms = [10,50,100,500,1000,5000]

for m in ms:
    print '---testing implentation with m = {0}---'.format(m)    
    genotypeArray = np.empty((m,20),dtype=int)

    for i in xrange(m):
        genotypeArray[i] = [random.randint(0,2) for j in xrange(20)] 

    print genotypeArray.shape 


    start = time.time()
    print calcBrownCombinedP(genotypeArray)
    print 'cython implementation took {0}'.format(time.time() - start)

    start = time.time()
    print PycalcBrownCombinedP(genotypeArray)
    print 'python implementation took {0}'.format(time.time() - start)

The output of the code is:

---testing implentation with m = 10---
(10L, 20L)
2.13660168648
cython implementation took 0.000999927520752
2.13660167749
python implementation took 0.000999927520752
---testing implentation with m = 50---
(50L, 20L)
8.82721138
cython implementation took 0.00399994850159
8.82721130234
python implementation took 0.00500011444092
---testing implentation with m = 100---
(100L, 20L)
16.7438983917
cython implementation took 0.0139999389648
16.7438965333
python implementation took 0.0120000839233
---testing implentation with m = 500---
(500L, 20L)
80.5343856812
cython implementation took 0.183000087738
80.5343694046
python implementation took 0.161000013351
---testing implentation with m = 1000---
(1000L, 20L)
160.122573853
cython implementation took 0.615000009537
160.122491308
python implementation took 0.598000049591
---testing implentation with m = 5000---
(5000L, 20L)
799.813842773
cython implementation took 10.7159998417
799.813880445
python implementation took 11.2510001659

And finally, the pure Python implementation, 'PycalcBrownCombinedP.py'

import numpy as np
from scipy import stats
def PycalcBrownCombinedP(genotypeArray):
    nSNPs = genotypeArray.shape[0]
    ms = genotypeArray.mean(axis=1)[(slice(None,None,None),None)]
    datam = genotypeArray - ms
    datass = np.sqrt(stats.ss(datam,axis=1)) 
    runningSum = 0
    for i in xrange(nSNPs):
        temp = np.dot(datam[i:],datam[i].T)
        d = (datass[i:]*datass[i])
        rs = temp / d
        rs = np.absolute(rs)[1:]
        runningSum += sum(rs*(3.25+(0.75*rs)))

    sigmaSq = 4*nSNPs+2*runningSum

    E = 2*nSNPs

    df = (2*(E*E))/sigmaSq

    runningSum = sigmaSq/(2*E)
    return runningSum

1 Answer

Profiling the code line by line shows that the bottleneck is the last line of the loop:

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
<snip>
    16      5000      6145280   1229.1     86.6          runningSum += sum(rs*(3.25+(0.75*rs)))

That's not surprising, since both the Python and the Cython version use the Python builtin sum. Switching to np.sum speeds the code up by a factor of 4.5 when the input array has shape (5000, 20).
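The builtin `sum` pulls each element of `rs` out as a Python object one at a time, while `np.sum` performs the whole reduction in C; the two agree numerically, as a quick check on made-up data shows:

```python
import numpy as np

rng = np.random.default_rng(42)
rs = rng.random(5000)  # stand-in for the |r| values inside the loop

slow = sum(rs * (3.25 + (0.75 * rs)))     # Python-level iteration over scalars
fast = np.sum(rs * (3.25 + (0.75 * rs)))  # single reduction in C
```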

If a small loss of precision is acceptable, you can exploit linear algebra to speed up that final line further:

sum(rs*(3.25+(0.75*rs)))

is really a vector dot product, that is,

np.dot(rs, 3.25 + 0.75 * rs)

This is still suboptimal, though, because it loops over rs three times and constructs two rs-sized temporary arrays. Using elementary algebra, the expression can be rewritten as

3.25 * np.sum(rs) + 0.75 * np.dot(rs, rs)

This not only gives the original result without the rounding error of the previous version, it also loops over rs only twice and uses constant memory. (*)
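The whole chain of rewrites is easy to verify on throwaway data (a sketch, not benchmark code):

```python
import numpy as np

rs = np.abs(np.random.default_rng(7).standard_normal(2000))

v1 = np.sum(rs * (3.25 + 0.75 * rs))             # original expression with np.sum
v2 = np.dot(rs, 3.25 + 0.75 * rs)                # dot form: 3 passes, 2 temporaries
v3 = 3.25 * np.sum(rs) + 0.75 * np.dot(rs, rs)   # algebraic form: 2 passes, constant memory
```

All three agree to within floating-point rounding; only the number of passes over `rs` and the temporary memory differ.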

The bottleneck is now np.dot, so installing a better BLAS library will buy you more than rewriting the whole thing in Cython.

(*) Or logarithmic memory in recent NumPy versions, which have a recursive reimplementation of np.sum that is faster than the old iterative one.
