Why does subclassing in Python make things so much slower?

I'm working on a simple class that extends dict, and I noticed that key lookups and pickling are very slow.

I thought the problem was in my class, so I ran some trivial benchmarks:

(venv) marco@buzz:~/sources/python-frozendict/test$ python --version
Python 3.9.0a0
(venv) marco@buzz:~/sources/python-frozendict/test$ sudo pyperf system tune --affinity 3
[sudo] password for marco: 
Tune the system configuration to run benchmarks

Actions
=======

CPU Frequency: Minimum frequency of CPU 3 set to the maximum frequency

System state
============

CPU: use 1 logical CPUs: 3
Perf event: Maximum sample rate: 1 per second
ASLR: Full randomization
Linux scheduler: No CPU is isolated
CPU Frequency: 0-3=min=max=2600 MHz
CPU scaling governor (intel_pstate): performance
Turbo Boost (intel_pstate): Turbo Boost disabled
IRQ affinity: irqbalance service: inactive
IRQ affinity: Default IRQ affinity: CPU 0-2
IRQ affinity: IRQ affinity: IRQ 0,2=CPU 0-3; IRQ 1,3-17,51,67,120-131=CPU 0-2
Power supply: the power cable is plugged

Advices
=======

Linux scheduler: Use isolcpus=<cpu list> kernel parameter to isolate CPUs
Linux scheduler: Use rcu_nocbs=<cpu list> kernel parameter (with isolcpus) to not schedule RCU on isolated CPUs
(venv) marco@buzz:~/sources/python-frozendict/test$ python -m pyperf timeit --rigorous --affinity 3 -s '                    
x = {0:0, 1:1, 2:2, 3:3, 4:4}
' 'x[4]'
.........................................
Mean +- std dev: 35.2 ns +- 1.8 ns
(venv) marco@buzz:~/sources/python-frozendict/test$ python -m pyperf timeit --rigorous --affinity 3 -s '
class A(dict):
    pass             

x = A({0:0, 1:1, 2:2, 3:3, 4:4})
' 'x[4]'
.........................................
Mean +- std dev: 60.1 ns +- 2.5 ns
(venv) marco@buzz:~/sources/python-frozendict/test$ python -m pyperf timeit --rigorous --affinity 3 -s '
x = {0:0, 1:1, 2:2, 3:3, 4:4}
' '5 in x'
.........................................
Mean +- std dev: 31.9 ns +- 1.4 ns
(venv) marco@buzz:~/sources/python-frozendict/test$ python -m pyperf timeit --rigorous --affinity 3 -s '
class A(dict):
    pass

x = A({0:0, 1:1, 2:2, 3:3, 4:4})
' '5 in x'
.........................................
Mean +- std dev: 64.7 ns +- 5.4 ns
(venv) marco@buzz:~/sources/python-frozendict/test$ python
Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01) 
[GCC 9.2.1 20190909] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from timeit import timeit
>>> class A(dict):
...     def __reduce__(self):                 
...         return (A, (dict(self), ))
... 
>>> timeit("dumps(x)", """
... from pickle import dumps
... x = {0:0, 1:1, 2:2, 3:3, 4:4}
... """, number=10000000)
6.70694484282285
>>> timeit("dumps(x)", """
... from pickle import dumps
... x = A({0:0, 1:1, 2:2, 3:3, 4:4})
... """, number=10000000, globals={"A": A})
31.277778962627053
>>> timeit("loads(x)", """
... from pickle import dumps, loads
... x = dumps({0:0, 1:1, 2:2, 3:3, 4:4})
... """, number=10000000)
5.767975459806621
>>> timeit("loads(x)", """
... from pickle import dumps, loads
... x = dumps(A({0:0, 1:1, 2:2, 3:3, 4:4}))
... """, number=10000000, globals={"A": A})
22.611666693352163

The results are really surprising. Key lookups are 2x slower and pickling is 5x slower.

How is this possible? Other methods, such as get(), __eq__(), and __init__(), as well as iteration over keys(), values(), and items(), are as fast as with dict.


EDIT: I looked at the Python 3.9 source code. In Objects/dictobject.c, the __getitem__() method seems to be implemented by dict_subscript(), and dict_subscript() only slows down subclasses when the key is missing, because the subclass can implement __missing__() and dict_subscript() tries to check whether it exists. But the benchmark used an existing key.
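
For reference, a minimal sketch of the __missing__ hook that dict_subscript() has to account for (the class F here is just an example):

class F(dict):
    def __missing__(self, key):
        # only consulted by dict's [] lookup when the key is absent
        return f"default for {key!r}"

f = F({0: 0})
print(f[0])    # 0                -> normal lookup, __missing__ not involved
print(f[99])   # default for 99   -> dict_subscript falls back to __missing__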

But I noticed something: __getitem__() is defined with the flag METH_COEXIST. And __contains__(), the other method that is 2x slower, has the same flag. From the official documentation:

The method will be loaded in place of existing definitions. Without METH_COEXIST, the default is to skip repeated definitions. Since slot wrappers are loaded before the method table, the existence of a sq_contains slot, for example, would generate a wrapped method named contains() and preclude the loading of a corresponding PyCFunction with the same name. With the flag defined, the PyCFunction will be loaded in place of the wrapper object and will co-exist with the slot. This is helpful because calls to PyCFunctions are optimized more than wrapper object calls.

So if I understand correctly, in theory METH_COEXIST should speed things up, but it seems to have the opposite effect. Why?
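
As a quick illustrative check from my own interpreter session: methods loaded from tp_methods show up as method descriptors, while methods generated from C slots show up as slot wrappers:

>>> dict.__getitem__        # explicit PyCFunction from tp_methods (METH_COEXIST)
<method '__getitem__' of 'dict' objects>
>>> dict.__contains__
<method '__contains__' of 'dict' objects>
>>> dict.__eq__             # wrapper generated from the tp_richcompare slot
<slot wrapper '__eq__' of 'dict' objects>
>>> type(dict.__getitem__).__name__, type(dict.__eq__).__name__
('method_descriptor', 'wrapper_descriptor')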


EDIT 2: I discovered a bit more.

__getitem__() and __contains__() are flagged METH_COEXIST because they are declared twice in PyDict_Type.

They are both present in the slot tp_methods, where they are explicitly declared as __getitem__() and __contains__(). But the official documentation says that tp_methods is not inherited by subclasses.

So a subclass of dict does not call __getitem__(), but rather the subslot mp_subscript. Indeed, mp_subscript is contained in the slot tp_as_mapping, which does allow subclasses to inherit its subslots.

The problem is that both __getitem__() and mp_subscript use the same function, dict_subscript. Is it possible that it's only the way it is inherited that makes it slower?
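
One more illustrative check from my session: at the Python level the subclass sees exactly the same method objects, so the difference has to come from which C slot gets called:

>>> class A(dict):
...     pass
... 
>>> A.__getitem__ is dict.__getitem__
True
>>> A.__contains__ is dict.__contains__
True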


1 Answer

Indexing and the in operator are slower in dict subclasses because of a bad interaction between a dict optimization and the logic subclasses use to inherit C slots. This should be fixable, though not from your end.

The CPython implementation has two sets of hooks for operator overloading. There are Python-level methods like __contains__ and __getitem__, but there is also a separate set of slots for C function pointers in the memory layout of a type object. Usually, either the Python method is a wrapper around the C implementation, or the C slot contains a function that searches for and calls the Python method. It is more efficient for the C slot to implement the operation directly, since the C slot is what Python actually accesses.
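
A minimal illustration of the second arrangement (the class Box below is hypothetical, purely for the example): defining __contains__ at the Python level fills the type's sq_contains slot with a generic function that looks the method up and calls it whenever in is used:

class Box:
    def __init__(self, items):
        self._items = set(items)

    def __contains__(self, item):
        # The in operator reaches this method via the type's sq_contains slot,
        # which for Python-level classes holds a generic "look up and call
        # __contains__" function.
        return item in self._items

b = Box([1, 2, 3])
print(2 in b)   # True
print(9 in b)   # False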

Mappings written in C implement the C slots sq_contains and mp_subscript to provide in and indexing. Ordinarily, the Python-level __contains__ and __getitem__ methods would be automatically generated as wrappers around the C functions, but the dict class has explicit implementations of __contains__ and __getitem__, because the explicit implementations are a bit faster than the generated wrappers:

static PyMethodDef mapp_methods[] = {
    DICT___CONTAINS___METHODDEF
    {"__getitem__", (PyCFunction)(void(*)(void))dict_subscript,        METH_O | METH_COEXIST,
     getitem__doc__},
    ...

(Actually, the explicit __getitem__ implementation is the same function as the mp_subscript implementation, just with a different kind of wrapper.)

Ordinarily, a subclass would inherit its parent's implementations of C-level hooks like sq_contains and mp_subscript, and the subclass would be just as fast as the superclass. However, the logic in update_one_slot looks for the parent implementation by trying to find the generated wrapper methods through an MRO search.

dict does not have generated wrappers for sq_contains and mp_subscript, because it provides explicit __contains__ and __getitem__ implementations.

Instead of inheriting sq_contains and mp_subscript, update_one_slot ends up giving the subclass sq_contains and mp_subscript implementations that perform an MRO search for __contains__ and __getitem__ and call those. This is much less efficient than inheriting the C slots directly.

Fixing this would require changes to the update_one_slot implementation.
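
To make the cost concrete, here is a rough pure-Python approximation (my own sketch, not the actual C code) of what the generated subclass slot effectively does on every lookup, compared with what dict itself does:

def fast_subscript(obj, key):
    # roughly what dict itself does: go straight to the C-level
    # dict_subscript, i.e. a single hash-table lookup
    return dict.__getitem__(obj, key)

def slow_subclass_subscript(obj, key):
    # roughly what the generated mp_subscript slot of a dict subclass does:
    # search the MRO for a __getitem__ method on every lookup, then call it
    for klass in type(obj).__mro__:
        if '__getitem__' in klass.__dict__:
            return klass.__dict__['__getitem__'](obj, key)
    raise TypeError(f"'{type(obj).__name__}' object is not subscriptable")

class A(dict):
    pass

x = A({0: 0, 1: 1, 2: 2, 3: 3, 4: 4})
print(fast_subscript(x, 4), slow_subclass_subscript(x, 4))   # 4 4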


Aside from what is described above, dict_subscript also looks up __missing__ for dict subclasses, so fixing the slot inheritance issue would not make subclasses completely on par with dict itself for lookup speed, but it should get them a lot closer.


As for pickling, on the dumps side, the pickle implementation has a dedicated fast path for dicts, while dict subclasses take a more roundabout path through object.__reduce_ex__ and save_reduce.
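
The exact-type dispatch is easiest to see in the pure-Python pickler, pickle._Pickler, which keeps a dispatch table keyed by type (the C accelerator follows the same idea). A small illustrative check:

import pickle

# the pure-Python pickler registers a fast path for dict exactly...
print(pickle._Pickler.dispatch[dict])      # <function _Pickler.save_dict ...>

# ...but dispatch is by exact type, so a dict subclass is not in the table
class A(dict):
    pass

print(A in pickle._Pickler.dispatch)       # False -> falls back to save_reduce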

On the loads side, the time difference mostly comes from the extra opcodes and lookups needed to retrieve and instantiate the __main__.A class, whereas dicts have a dedicated pickle opcode for making a new dict. If we compare the disassembly of the two pickles:

In [26]: pickletools.dis(pickle.dumps({0: 0, 1: 1, 2: 2, 3: 3, 4: 4}))                                                                                                                                                           
    0: \x80 PROTO      4
    2: \x95 FRAME      25
   11: }    EMPTY_DICT
   12: \x94 MEMOIZE    (as 0)
   13: (    MARK
   14: K        BININT1    0
   16: K        BININT1    0
   18: K        BININT1    1
   20: K        BININT1    1
   22: K        BININT1    2
   24: K        BININT1    2
   26: K        BININT1    3
   28: K        BININT1    3
   30: K        BININT1    4
   32: K        BININT1    4
   34: u        SETITEMS   (MARK at 13)
   35: .    STOP
highest protocol among opcodes = 4

In [27]: pickletools.dis(pickle.dumps(A({0: 0, 1: 1, 2: 2, 3: 3, 4: 4})))                                                                                                                                                        
    0: \x80 PROTO      4
    2: \x95 FRAME      43
   11: \x8c SHORT_BINUNICODE '__main__'
   21: \x94 MEMOIZE    (as 0)
   22: \x8c SHORT_BINUNICODE 'A'
   25: \x94 MEMOIZE    (as 1)
   26: \x93 STACK_GLOBAL
   27: \x94 MEMOIZE    (as 2)
   28: )    EMPTY_TUPLE
   29: \x81 NEWOBJ
   30: \x94 MEMOIZE    (as 3)
   31: (    MARK
   32: K        BININT1    0
   34: K        BININT1    0
   36: K        BININT1    1
   38: K        BININT1    1
   40: K        BININT1    2
   42: K        BININT1    2
   44: K        BININT1    3
   46: K        BININT1    3
   48: K        BININT1    4
   50: K        BININT1    4
   52: u        SETITEMS   (MARK at 31)
   53: .    STOP
highest protocol among opcodes = 4

we can see that the difference between the two is that the second pickle needs a whole bunch of opcodes to look up __main__.A and instantiate it, while the first needs only EMPTY_DICT to get an empty dict. After that, both pickles push the same keys and values onto the pickle operand stack and run SETITEMS.
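
The extra prologue also shows up in the raw pickle size; an illustrative comparison (the exact lengths depend on the module and class names):

import pickle

class A(dict):
    pass

plain = pickle.dumps({0: 0, 1: 1, 2: 2, 3: 3, 4: 4})
sub = pickle.dumps(A({0: 0, 1: 1, 2: 2, 3: 3, 4: 4}))
# the subclass pickle is larger because of the SHORT_BINUNICODE /
# STACK_GLOBAL / NEWOBJ prologue shown in the disassembly above
print(len(plain), len(sub))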
