FastAPI: Python code execution speed affected by deploying with uvicorn vs. gunicorn

Published 2024-09-29 01:32:26


I have written a FastAPI application and am now looking into deploying it, but I seem to be hitting strange, unexpected performance differences depending on whether I use uvicorn or gunicorn. In particular, all code (even standard-library, pure-Python code) seems to get slower if I use gunicorn. For performance debugging, I wrote a small app that demonstrates this:

import asyncio, time
from fastapi import FastAPI, Path
from datetime import datetime

app = FastAPI()

@app.get("/delay/{delay1}/{delay2}")
async def get_delay(
    delay1: float = Path(..., title="Nonblocking time taken to respond"),
    delay2: float = Path(..., title="Blocking time taken to respond"),
):
    total_start_time = datetime.now()
    times = []
    for i in range(100):
        start_time = datetime.now()
        await asyncio.sleep(delay1)
        time.sleep(delay2)
        times.append(str(datetime.now()-start_time))
    return {"delays":[delay1,delay2],"total_time_taken":str(datetime.now()-total_start_time),"times":times}

Running the FastAPI app with:

gunicorn api.performance_test:app -b localhost:8001 -k uvicorn.workers.UvicornWorker --workers 1

a GET request to http://localhost:8001/delay/0.0/0.0 consistently produces a response body like:

{
  "delays": [
    0.0,
    0.0
  ],
  "total_time_taken": "0:00:00.057946",
  "times": [
    "0:00:00.000323",
    ...similar values omitted for brevity...
    "0:00:00.000274"
  ]
}

However, using:

uvicorn api.performance_test:app --port 8001 

I consistently get timings like this:

{
  "delays": [
    0.0,
    0.0
  ],
  "total_time_taken": "0:00:00.002630",
  "times": [
    "0:00:00.000037",
    ...snip...
    "0:00:00.000020"
  ]
}

This difference becomes even more pronounced when I comment out the await asyncio.sleep(delay1) statement.

So I am wondering: what do gunicorn/uvicorn do to the Python/FastAPI runtime to create this factor-of-10 difference in code execution speed?

For what it's worth, I ran these tests with Python 3.8.2 on OS X 11.2.3 with an Intel i7 processor.

These are the relevant parts of my pip freeze output:

fastapi==0.65.1
gunicorn==20.1.0
uvicorn==0.13.4
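One way to narrow down a question like this is to take the web stack out of the picture entirely and microbenchmark `await asyncio.sleep(0)` on its own, since with both delays at 0.0 the endpoint's loop is dominated by that call (it yields to the event loop without actually waiting). This is a minimal standalone sketch, not from the question; absolute numbers will vary with the machine and the event loop implementation:

```python
import asyncio
import time

async def sleep0_cost(n: int = 10_000) -> float:
    """Average seconds per `await asyncio.sleep(0)` on the current event loop."""
    start = time.perf_counter()
    for _ in range(n):
        await asyncio.sleep(0)  # yields control to the loop, no real waiting
    return (time.perf_counter() - start) / n

per_iter = asyncio.run(sleep0_cost())
print(f"{per_iter * 1e6:.2f} microseconds per iteration")
```

If this standalone number differs between the two deployment setups' Python environments, the event loop implementation (rather than gunicorn or uvicorn per se) is the likely culprit.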

3 Answers

I was unable to reproduce your results.

My environment: Ubuntu on WSL2 on Windows 10.

The relevant parts of my pip freeze output:

fastapi==0.65.1
gunicorn==20.1.0
uvicorn==0.14.0

I modified the code slightly:

import asyncio, time
from fastapi import FastAPI, Path
from datetime import datetime
import statistics

app = FastAPI()

@app.get("/delay/{delay1}/{delay2}")
async def get_delay(
    delay1: float = Path(..., title="Nonblocking time taken to respond"),
    delay2: float = Path(..., title="Blocking time taken to respond"),
):
    total_start_time = datetime.now()
    times = []
    for i in range(100):
        start_time = datetime.now()
        await asyncio.sleep(delay1)
        time.sleep(delay2)
        time_delta = (datetime.now()-start_time).microseconds  # sub-second component only; fine here since each iteration is well under 1 s
        times.append(time_delta)

    times_average = statistics.mean(times)

    return {"delays":[delay1,delay2],"total_time_taken":(datetime.now()-total_start_time).microseconds,"times_avarage":times_average,"times":times}

Apart from the first load of the site, my results for both methods are almost identical.

Most of the times fall between 0:00:00.000530 and 0:00:00.000620 for both methods.

The first attempt of each method takes longer: around 0:00:00.003000. However, after I restarted Windows and tried these tests again, I noticed I no longer got increased times on the first request after server startup (I think that is thanks to a lot of free RAM after the restart).
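A side note on the measurement itself (my observation, not part of the original answer): `datetime.now()` has limited resolution for microbenchmarks, and `timedelta.microseconds` holds only the sub-second component, so it would wrap for deltas of 1 second or more. A hedged sketch of a higher-resolution alternative using `time.perf_counter()`:

```python
import time

start = time.perf_counter()
time.sleep(0.01)  # stand-in for the work being measured
elapsed_us = (time.perf_counter() - start) * 1e6  # elapsed microseconds, no wrap-around
print(f"{elapsed_us:.1f} microseconds")
```

For the sub-millisecond deltas measured here the two approaches should agree, so this does not change the answer's conclusion.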


Examples of non-first runs (3 attempts):

# `uvicorn performance_test:app --port 8083`

{"delays":[0.0,0.0],"total_time_taken":553,"times_avarage":4.4,"times":[15,7,5,4,4,4,4,5,5,4,4,5,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,5,4,4,5,4,4,5,4,4,4,4,4,5,4,5,5,4,4,4,4,4,4,5,4,4,4,5,4,4,4,4,4,4,5,4,4,5,4,4,4,4,5,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,4,4,4,4,5,4,4,4,5,4]}
{"delays":[0.0,0.0],"total_time_taken":575,"times_avarage":4.61,"times":[15,6,5,5,5,5,5,5,5,5,5,4,5,5,5,5,4,4,4,4,4,5,5,5,4,5,4,4,4,5,5,5,4,5,5,4,4,4,4,5,5,5,5,4,4,4,4,5,5,4,4,4,4,4,4,4,4,5,5,4,4,4,4,5,5,5,5,5,5,5,4,4,4,4,5,5,4,5,5,4,4,4,4,4,4,5,5,5,4,4,4,4,5,5,5,5,4,4,4,4]}
{"delays":[0.0,0.0],"total_time_taken":548,"times_avarage":4.31,"times":[14,6,5,4,4,4,4,4,4,4,5,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,5,4,4,4,5,4,4,4,4,4,4,4,4,5,4,4,4,4,4,4,5,4,4,4,4,4,5,5,4,4,4,4,4,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,4,4,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,4]}


# `gunicorn performance_test:app -b localhost:8084 -k uvicorn.workers.UvicornWorker --workers 1`

{"delays":[0.0,0.0],"total_time_taken":551,"times_avarage":4.34,"times":[13,6,5,5,5,5,5,4,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,4,4,4,5,4,4,4,4,4,5,4,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,4,5,4,4,4,4,4,4,4,5,4,4,4,4,4,4,4,4,4,5,4,4,5,4,5,4,4,5,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,5,4,4,5]}
{"delays":[0.0,0.0],"total_time_taken":558,"times_avarage":4.48,"times":[14,7,5,5,5,5,5,5,4,4,4,4,4,4,5,5,4,4,4,4,5,4,4,4,5,5,4,4,4,5,5,4,4,4,5,4,4,4,5,5,4,4,4,4,5,5,4,4,5,5,4,4,5,5,4,4,4,5,4,4,5,4,4,5,5,4,4,4,5,4,4,4,5,4,4,4,5,4,5,4,4,4,5,4,4,4,5,4,4,4,5,4,4,4,5,4,4,4,5,4]}
{"delays":[0.0,0.0],"total_time_taken":550,"times_avarage":4.34,"times":[15,6,5,4,4,4,4,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,4,4,4,5,4,4,4,4,5,5,4,4,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4]}

Examples of non-first runs with await asyncio.sleep(delay1) commented out (3 attempts):

# `uvicorn performance_test:app --port 8083`

{"delays":[0.0,0.0],"total_time_taken":159,"times_avarage":0.6,"times":[3,1,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,0,0,1,1,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,0,0,1,0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0]}
{"delays":[0.0,0.0],"total_time_taken":162,"times_avarage":0.49,"times":[3,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,0,1,0,0,0,0,1,1,1,1,1,0,0,0,0,1,1,1,1,0,0,1,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1]}
{"delays":[0.0,0.0],"total_time_taken":156,"times_avarage":0.61,"times":[3,1,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,1,0,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1]}


# `gunicorn performance_test:app -b localhost:8084 -k uvicorn.workers.UvicornWorker --workers 1`

{"delays":[0.0,0.0],"total_time_taken":159,"times_avarage":0.59,"times":[2,0,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,0,0,0,1,0,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,0]}
{"delays":[0.0,0.0],"total_time_taken":165,"times_avarage":0.62,"times":[3,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,1,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1]}
{"delays":[0.0,0.0],"total_time_taken":164,"times_avarage":0.54,"times":[2,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0,1,1,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1]}

I made a Python script to benchmark those times more precisely:

import statistics
import requests
from time import sleep

number_of_tests=1000

sites_to_test=[
    {
        'name':'only uvicorn    ',
        'url':'http://127.0.0.1:8083/delay/0.0/0.0'
    },
    {
        'name':'gunicorn+uvicorn',
        'url':'http://127.0.0.1:8084/delay/0.0/0.0'
    }]


for test in sites_to_test:

    total_time_taken_list=[]
    times_avarage_list=[]

    requests.get(test['url']) # first request may be slower, so better to not measure it

    for a in range(number_of_tests):
        r = requests.get(test['url'])
        json= r.json()

        total_time_taken_list.append(json['total_time_taken'])
        times_avarage_list.append(json['times_avarage'])
        # sleep(1) # results are slightly different with sleep between requests

    total_time_taken_avarage=statistics.mean(total_time_taken_list)
    times_avarage_avarage=statistics.mean(times_avarage_list)

    print({'name':test['name'], 'number_of_tests':number_of_tests, 'total_time_taken_avarage':total_time_taken_avarage, 'times_avarage_avarage':times_avarage_avarage})

Results:

{'name': 'only uvicorn    ', 'number_of_tests': 2000, 'total_time_taken_avarage': 586.5985, 'times_avarage_avarage': 4.820865}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 2000, 'total_time_taken_avarage': 571.8415, 'times_avarage_avarage': 4.719035}

Results with await asyncio.sleep(delay1) commented out:

{'name': 'only uvicorn    ', 'number_of_tests': 2000, 'total_time_taken_avarage': 151.301, 'times_avarage_avarage': 0.602495}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 2000, 'total_time_taken_avarage': 144.4655, 'times_avarage_avarage': 0.59196}

I also made another version of the above script that switches URLs every 1 request (it gives slightly higher times):

import statistics
import requests
from time import sleep

number_of_tests=1000

sites_to_test=[
    {
        'name':'only uvicorn    ',
        'url':'http://127.0.0.1:8083/delay/0.0/0.0',
        'total_time_taken_list':[],
        'times_avarage_list':[]
    },
    {
        'name':'gunicorn+uvicorn',
        'url':'http://127.0.0.1:8084/delay/0.0/0.0',
        'total_time_taken_list':[],
        'times_avarage_list':[]
    }]


for test in sites_to_test:
    requests.get(test['url']) # first request may be slower, so better to not measure it

for a in range(number_of_tests):

    for test in sites_to_test:
        r = requests.get(test['url'])
        json= r.json()

        test['total_time_taken_list'].append(json['total_time_taken'])
        test['times_avarage_list'].append(json['times_avarage'])
        # sleep(1) # results are slightly different with sleep between requests


for test in sites_to_test:
    total_time_taken_avarage=statistics.mean(test['total_time_taken_list'])
    times_avarage_avarage=statistics.mean(test['times_avarage_list'])

    print({'name':test['name'], 'number_of_tests':number_of_tests, 'total_time_taken_avarage':total_time_taken_avarage, 'times_avarage_avarage':times_avarage_avarage})

Results:

{'name': 'only uvicorn    ', 'number_of_tests': 2000, 'total_time_taken_avarage': 589.4315, 'times_avarage_avarage': 4.789385}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 2000, 'total_time_taken_avarage': 589.0915, 'times_avarage_avarage': 4.761095}

Results with await asyncio.sleep(delay1) commented out:

{'name': 'only uvicorn    ', 'number_of_tests': 2000, 'total_time_taken_avarage': 152.8365, 'times_avarage_avarage': 0.59173}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 2000, 'total_time_taken_avarage': 154.4525, 'times_avarage_avarage': 0.59768}

This answer should help you debug your results better.

I think it might help the investigation if you shared more details about your OS/machine.

Also, please restart your computer/server; it might have an impact.


Update 1:

I noticed that I used a newer version of uvicorn than stated in the question. I also tested with the older version 0.13.4, but the results are similar; I still cannot reproduce your results.


Update 2:

I ran some more benchmarks and noticed something interesting:

With uvloop in requirements.txt:

Full requirements.txt:

uvicorn==0.14.0
fastapi==0.65.1
gunicorn==20.1.0
uvloop==0.15.2

Results:

{'name': 'only uvicorn    ', 'number_of_tests': 500, 'total_time_taken_avarage': 362.038, 'times_avarage_avarage': 2.54142}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 500, 'total_time_taken_avarage': 366.814, 'times_avarage_avarage': 2.56766}

Without uvloop in requirements.txt:

Full requirements.txt:

uvicorn==0.14.0
fastapi==0.65.1
gunicorn==20.1.0

Results:

{'name': 'only uvicorn    ', 'number_of_tests': 500, 'total_time_taken_avarage': 595.578, 'times_avarage_avarage': 4.83828}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 500, 'total_time_taken_avarage': 584.64, 'times_avarage_avarage': 4.7155}
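Since adding uvloop roughly halved the timings above, it can be worth verifying which event loop implementation a given process actually ends up using. A hypothetical probe (the function name is mine, not from the answer): under the stdlib loop it reports a class such as _UnixSelectorEventLoop or ProactorEventLoop, while under uvloop it would report uvloop.Loop:

```python
import asyncio

def loop_class_name() -> str:
    """Report the fully qualified class of the event loop asyncio.run() creates."""
    async def probe() -> str:
        loop = asyncio.get_running_loop()
        return f"{type(loop).__module__}.{type(loop).__name__}"
    return asyncio.run(probe())

print(loop_class_name())
```

A similar check could be dropped into an endpoint to confirm what loop the server process itself is running on, since the loop used by the server depends on its configuration, not just on which packages are installed.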

Update 3:

I used only Python 3.9.5 in this answer.

Since FastAPI is an ASGI framework, it will perform better on an ASGI server such as uvicorn or hypercorn. A WSGI-type server like gunicorn cannot deliver uvicorn-like performance, because ASGI servers are optimized for asynchronous workloads. FastAPI's official documentation also encourages using an ASGI server such as uvicorn or hypercorn:

https://fastapi.tiangolo.com/#installation

The difference is due to the underlying web server you are using.

An analogy would be: two cars, same brand, same options, just a different engine; what's the difference?

Web servers are not exactly like cars, but I guess you get the point I am trying to make.

Basically, gunicorn is a synchronous web server, while uvicorn is an asynchronous one. Since you are using FastAPI and the await keyword, I guess you already know what asyncio/asynchronous programming is.
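To make the blocking vs. non-blocking distinction concrete (this sketch is mine, not from the answer): `await asyncio.sleep()` yields so other tasks keep making progress, while `time.sleep()` stalls the whole event loop, which is exactly the mix the endpoint in the question exercises:

```python
import asyncio
import time

async def ticker(ticks: list) -> None:
    # Cooperative task: records a timestamp a few times while another task runs.
    for _ in range(3):
        await asyncio.sleep(0.05)
        ticks.append(time.perf_counter())

async def main() -> list:
    ticks = []
    task = asyncio.create_task(ticker(ticks))
    await asyncio.sleep(0.2)   # yields: ticker keeps running concurrently
    # time.sleep(0.2)          # would block the loop and stall ticker instead
    await task
    return ticks

ticks = asyncio.run(main())
print(len(ticks))  # the ticker completed all its iterations while main waited
```

In a server handling concurrent requests, the same property determines whether one slow handler delays everyone else.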

I don't know the differences at the code level, so take my answer with a grain of salt, but uvicorn is more performant because of the asynchronous part. My guess about the timing difference is that an async web server is already configured at startup to handle async functions, whereas a sync web server is not, and there is some kind of overhead in abstracting that part away.

This isn't a proper answer, but it gives you a hint about where the difference lies.
