How do I save a JSON response with asynchronous requests?

Published 2024-10-02 22:30:50


I have a question about asynchronous requests:

How do I save response.json() to a file, on the fly?

I want to make the requests and save each response to a .json file without keeping them all in memory.


import asyncio
import aiohttp


async def fetch(sem, session, url):
    async with sem:
        async with session.get(url) as response:
            return await response.json()  # here: the whole parsed body is held in memory


async def fetch_all(urls, loop):
    sem = asyncio.Semaphore(4)  # limit to 4 concurrent requests
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(
            *[fetch(sem, session, url) for url in urls]
        )
        return results


if __name__ == '__main__':

    urls = (
        "https://public.api.openprocurement.org/api/2.5/tenders/6a0585fcfb05471796bb2b6a1d379f9b",
        "https://public.api.openprocurement.org/api/2.5/tenders/d1c74ec8bb9143d5b49e7ef32202f51c",
        "https://public.api.openprocurement.org/api/2.5/tenders/a3ec49c5b3e847fca2a1c215a2b69f8d",
        "https://public.api.openprocurement.org/api/2.5/tenders/52d8a15c55dd4f2ca9232f40c89bfa82",
        "https://public.api.openprocurement.org/api/2.5/tenders/b3af1cc6554440acbfe1d29103fe0c6a",
        "https://public.api.openprocurement.org/api/2.5/tenders/1d1c6560baac4a968f2c82c004a35c90",
    ) 

    loop = asyncio.get_event_loop()
    data = loop.run_until_complete(fetch_all(urls, loop))
    print(data)

At the moment the script just prints the parsed JSON; I could save the responses after they have all been fetched:

import json

data = loop.run_until_complete(fetch_all(urls, loop))
for i, resp in enumerate(data):
    with open(f"{i}.json", "w") as f:
        json.dump(resp, f)

But that feels wrong, because it is bound to fail once I run out of memory, for example.

Any suggestions?
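One way to avoid accumulating every response before writing is asyncio.as_completed, which hands you each result as soon as its task finishes, so only one parsed document needs to be in memory at a time. A minimal sketch, with a dummy coroutine standing in for the HTTP call (fake_fetch is illustrative, not part of the original code):

```python
import asyncio
import json


async def fake_fetch(i):
    # stand-in for an HTTP request that returns parsed JSON
    await asyncio.sleep(0)
    return {"id": i}


async def main():
    tasks = [fake_fetch(i) for i in range(3)]
    for task in asyncio.as_completed(tasks):
        resp = await task
        # write each response as soon as it arrives instead of
        # gathering the whole list first
        with open(f"{resp['id']}.json", "w") as f:
            json.dump(resp, f)


asyncio.run(main())
```

Note that the results arrive in completion order, not submission order, so the output file name has to come from the response itself rather than from an enumerate index.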


Edit

Limited my post to a single question.


Tags: https, org, loop, api, asyncio, json, url, async
1 Answer

How do I save response.json() to a file, on the fly?

First of all, don't use response.json(); use the streaming API instead:

async def fetch(sem, session, url):
    async with sem, session.get(url) as response:
        with open("some_file_name.json", "wb") as out:
            # stream the raw bytes to disk in 4 KiB chunks
            async for chunk in response.content.iter_chunked(4096):
                out.write(chunk)
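Putting it together, here is one possible end-to-end sketch that streams every response straight to its own file, deriving the file name from the last path segment of the URL (the tender id). The helper name filename_for is my own, not from the original code:

```python
import asyncio

import aiohttp


def filename_for(url):
    # derive an output name from the last path segment of the URL
    return url.rstrip("/").rsplit("/", 1)[-1] + ".json"


async def fetch_to_file(sem, session, url):
    async with sem, session.get(url) as response:
        with open(filename_for(url), "wb") as out:
            # stream the body in 4 KiB chunks; the full JSON
            # document is never held in memory
            async for chunk in response.content.iter_chunked(4096):
                out.write(chunk)


async def fetch_all(urls):
    sem = asyncio.Semaphore(4)  # at most 4 requests in flight
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(fetch_to_file(sem, session, url) for url in urls))


# usage: asyncio.run(fetch_all(urls))
```

If disk writes turn out to be the bottleneck, the synchronous open/write calls could be swapped for a library such as aiofiles, but for files this size plain blocking writes are usually fine.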
