Python/Pandas: scraping web search results across multiple pages

Published 2024-10-03 02:35:20


A friend and I are working together, trying to pull the results from several web pages into a single DataFrame (https://motos.coches.net/ocasion/barcelona/?pg=1&fi=oTitle&or=1&Tops=1, where the page number increments). I haven't done much web scraping before; I've tried pandas read_html and BeautifulSoup, but I'm struggling to figure out where to start.

Ideally, we'd like to pull all 5000+ results into one CSV showing the title, date posted, kilometers, year, CC, and location.

Is something like this easy to do with pandas and web-scraping libraries? Thanks for your help!
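As a note on why read_html didn't get far: pandas.read_html only extracts literal <table> elements, and this page renders its listings as <div> cards (as the answers below show). A minimal sketch of that attempt, with the URL taken from the question:

import pandas as pd

url = 'https://motos.coches.net/ocasion/barcelona/?pg=1&fi=oTitle&or=1&Tops=1'
# read_html only finds <table> markup; if the page contains no <table>
# elements, it raises ValueError("No tables found") instead of returning
# the listings, which is likely what happened here
tables = pd.read_html(url)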


2 Answers

I came up with a solution, although it may not be the most elegant:

import requests
from bs4 import BeautifulSoup
import pandas as pd
from time import sleep

base_url = 'https://motos.coches.net/ocasion/barcelona/?pg={}&fi=CreationDate&or=-1'

# the page number is left out of base_url and filled in with .format() below
res = []

for page in range(1, 300):  # upper bound guess; the real last page is unknown

    response = requests.get(
        base_url.format(page),  # insert the page number here
        headers={'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'},
    )
    if response.status_code == 404:  # stop once we run past the last page
        break
    soup = BeautifulSoup(response.content, 'lxml')

    # each listing is a <div class="col2-grid"> card; pull out the raw fields
    for card in soup.find_all('div', class_='col2-grid'):
        res.append([
            str(card.find('h2', class_='floatleft').contents[0]),
            str(card.find('p', class_='data floatright').contents[0]),
            str(card.find('p', class_='preu').contents[0]),
            str(card.find('span', class_='d1').contents[0]),
            str(card.find('span', class_='d2').contents[0]),
            str(card.find('span', class_='d3').contents[0]),
            str(card.find('span', class_='lloc').contents[0]),
        ])
    sleep(2)  # be polite: pause between requests

# build the dataframe once, after the loop has collected every page
df = pd.DataFrame(data=res, columns=['title', 'date_posted', 'price_in_euros', 'km', 'year', 'engine_size', 'location'])
df = df.replace({'<span>|</span>': ''}, regex=True)  # strip leftover <span> tags

# record whether the engine size was listed in cc or kw before stripping the unit
df['engine_size_metric'] = None
df.loc[df['engine_size'].str.contains(' cc'), 'engine_size_metric'] = 'cc'
df.loc[df['engine_size'].str.contains(' kw'), 'engine_size_metric'] = 'kw'

# e.g. '12.500 €' -> 12500.0
df['price_in_euros'] = df['price_in_euros'].replace({r'\.|€': ''}, regex=True)
df['price_in_euros'] = df['price_in_euros'].astype(float)

# e.g. '25.000 km' -> 25000.0; 'N/D' (no data) -> NaN
df['km'] = df['km'].replace({r'\.| km': ''}, regex=True)
df['km'] = df['km'].replace({'N/D': None}, regex=True)
df['km'] = df['km'].astype(float)

# keep only the numeric part of the engine size
df['engine_size'] = df['engine_size'].str.split(' ').str[0].replace({r'\.|cc|kw': ''}, regex=True)
df.loc[df['engine_size'] == '', 'engine_size'] = None
df['engine_size'] = df['engine_size'].astype(float)

df.to_csv('output.csv', index=False)
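After the run, a quick way to sanity-check the output is to read the CSV back and inspect the shape and dtypes; this is a minimal check with nothing site-specific assumed:

import pandas as pd

df = pd.read_csv('output.csv')
print(df.shape)   # expect roughly 5000+ rows if every page was reached
print(df.dtypes)  # price_in_euros, km, engine_size should be float64
print(df.head())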

You haven't shown your own attempt at a solution, but you could do something like this:

offset = 0
pg = 1
page_size = 30  # results per page; adjust to whatever the site returns
base_url = 'https://url?start={0}&pg={1}'

url = base_url.format(offset, pg)
results = scrape_page(url)  # placeholder: your BeautifulSoup/requests parsing of one page
all_results = results

while results:
    # rebuild the url based on the current offset and page number
    offset += page_size
    pg += 1
    url = base_url.format(offset, pg)
    results = scrape_page(url)  # returns an empty list once we run past the last page
    all_results += results
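A concrete version of the scrape_page placeholder, borrowing the col2-grid selector and User-Agent idea from the first answer (the function name and return shape here are illustrative assumptions, not part of either answer), might look like:

import requests
from bs4 import BeautifulSoup

def scrape_page(url):
    # fetch one results page and return a list of parsed listings;
    # an empty list signals the paging loop above to stop
    response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
    if response.status_code != 200:
        return []
    soup = BeautifulSoup(response.content, 'lxml')
    cards = soup.find_all('div', class_='col2-grid')  # same selector as the first answer
    return [card.get_text(' ', strip=True) for card in cards]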
