How can I loop my code to scrape multiple pages?

Posted 2024-10-05 15:21:46


Here is my code, which scrapes only one page, but I have 11,000 of them. The only difference between them is the ID:

https://www.rlsnet.ru/mkb_index_id_1.htm
https://www.rlsnet.ru/mkb_index_id_2.htm
https://www.rlsnet.ru/mkb_index_id_3.htm
....
https://www.rlsnet.ru/mkb_index_id_11000.htm

How can I loop my code so it scrapes all 11,000 pages? Is that even feasible with so many pages? Putting them all in a list and scraping from that would work, but writing out 11,000 entries by hand would take forever.

import requests
from pandas import DataFrame
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup

# Fetch and parse a single page
page_sc = requests.get('https://www.rlsnet.ru/mkb_index_id_1.htm')
soup_sc = BeautifulSoup(page_sc.content, 'html.parser')

# Extract the link text of every subcategory item
items_sc = soup_sc.find_all(class_='subcatlist__item')
mkb_names_sc = [item_sc.find(class_='subcatlist__link').get_text() for item_sc in items_sc]

# Save the results to CSV
mkb_stuff_sce = pd.DataFrame(
    {
        'first': mkb_names_sc,
    })
mkb_stuff_sce.to_csv('/Users/gfidarov/Desktop/Python/MKB/mkb.csv')
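Since all 11,000 URLs share one pattern, the list would not have to be typed out by hand; a minimal sketch, where the upper bound 11001 simply mirrors the IDs shown above:

# Generate all 11,000 URLs from the shared pattern instead of listing them manually
urls = ['https://www.rlsnet.ru/mkb_index_id_{}.htm'.format(i) for i in range(1, 11001)]
print(len(urls))          # 11000
print(urls[0], urls[-1])  # first and last URL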


2 Answers

My approach is simple: I just loop over the code above.

all_names = []

for i in range(1, 11001):
    # Build the URL for page i and fetch it
    page_sc = requests.get('https://www.rlsnet.ru/mkb_index_id_{}.htm'.format(i))
    soup_sc = BeautifulSoup(page_sc.content, 'html.parser')

    # Collect the link texts from this page
    items_sc = soup_sc.find_all(class_='subcatlist__item')
    mkb_names_sc = [item_sc.find(class_='subcatlist__link').get_text() for item_sc in items_sc]
    all_names.extend(mkb_names_sc)

# Write the combined results once, after the loop,
# so each iteration does not overwrite the previous one
mkb_stuff_sce = pd.DataFrame({'first': all_names})
mkb_stuff_sce.to_csv('/Users/gfidarov/Desktop/Python/MKB/mkb.csv')

What I did was run the code in a for loop: range() generates the sequence of IDs, and format() inserts each one into the URL. The results are collected across iterations and written to the CSV once at the end, so each page does not overwrite the previous one.

This should work like a charm. Hope it helps :)
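If some of the 11,000 IDs turn out not to exist, you may want to skip them instead of collecting empty results; a minimal sketch that checks the HTTP status code (whether the site returns a non-200 status for missing IDs is an assumption, not something stated in the question):

import requests
from bs4 import BeautifulSoup

all_names = []
for i in range(1, 11001):
    page_sc = requests.get('https://www.rlsnet.ru/mkb_index_id_{}.htm'.format(i))

    # Skip IDs that do not resolve to a real page
    # (assumption: missing IDs answer with a non-200 status code)
    if page_sc.status_code != 200:
        continue

    soup_sc = BeautifulSoup(page_sc.content, 'html.parser')
    items_sc = soup_sc.find_all(class_='subcatlist__item')
    all_names.extend(
        item_sc.find(class_='subcatlist__link').get_text() for item_sc in items_sc
    )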

You can build the URL string dynamically like this. You may also want to add a timed delay between iterations so the server doesn't block you.

import os

import requests
import pandas as pd
from bs4 import BeautifulSoup


path_of_csv = '/Users/gfidarov/Desktop/Python/MKB/mkb.csv'

first_string = 'https://www.rlsnet.ru/mkb_index_id_'
third_string = '.htm'

df = pd.DataFrame(columns=['scraping results'])

try:
    for second_string in range(1, 11001):
        second_string = str(second_string)
        url = first_string + second_string + third_string
        page_sc = requests.get(url)
        soup_sc = BeautifulSoup(page_sc.content, 'html.parser')
        items_sc = soup_sc.find_all(class_='subcatlist__item')
        mkb_names_sc = [item_sc.find(class_='subcatlist__link').get_text() for item_sc in items_sc]
        # concat returns a new DataFrame, so assign the result back to df
        df = pd.concat(
            [df, pd.DataFrame({'scraping results': [mkb_names_sc]})],
            ignore_index=True,
        )

    df.to_csv(path_or_buf=path_of_csv)

except Exception:
    # If it fails in the middle of the process, the results won't be lost;
    # the backup is written next to the original file
    path_of_csv = os.path.join(
        os.path.dirname(path_of_csv), 'backup_' + os.path.basename(path_of_csv)
    )
    df.to_csv(path_or_buf=path_of_csv)
    print('Failed at index ' + second_string + '. Please start again from this index '
          'by setting the beginning of the range to it. A backup was made of the '
          'results that were already scraped. You may want to rename the backup to '
          'avoid overwriting it in the next run.')
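The timed delay mentioned above is not in the code itself; a minimal sketch of one way to add it with time.sleep (the one-second pause and the request timeout are arbitrary choices, not values the answer specifies):

import time

import requests
from bs4 import BeautifulSoup

all_names = []
for i in range(1, 11001):
    url = 'https://www.rlsnet.ru/mkb_index_id_{}.htm'.format(i)
    # timeout keeps one slow page from hanging the whole run (assumed value)
    page_sc = requests.get(url, timeout=10)

    soup_sc = BeautifulSoup(page_sc.content, 'html.parser')
    items_sc = soup_sc.find_all(class_='subcatlist__item')
    all_names.extend(
        item_sc.find(class_='subcatlist__link').get_text() for item_sc in items_sc
    )

    # Pause between requests so the server is not hammered;
    # one second is an arbitrary, polite choice
    time.sleep(1)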
