Creating a specific web scraper

Posted 2024-09-27 21:24:58


I'm trying to learn scraping with Python, and in this case my idea is to build a tool that gets data from a web page. My question is how to use a `for` loop to go through the pages and collect the data from each box (item), namely:

  • IDoffer
  • List
  • Title
  • Location
  • Content
  • Phone

This isn't an assignment, it's my own initiative, but I'm stuck, so I'd appreciate your help.

Here is what I have so far:

from bs4 import BeautifulSoup
import requests

URL_BASE = "https://www.milanuncios.com/ofertas-de-empleo-en-madrid/?dias=3&demanda=n&pagina="
MAX_PAGES = 2
counter = 0

for i in range(0, MAX_PAGES):

    #Building the URL
    if i > 0:
        url = "%s%d" % (URL_BASE, i)
    else:
        url = URL_BASE

    #We make the request to the web
    req = requests.get(url)
    
    #We check that the request returns a Status Code = 200
    statusCode = req.status_code
    if statusCode == 200:

        #We pass the HTML content of the web to a BeautifulSoup () object
        html = BeautifulSoup(req.text, "html.parser")

        #We get all the divs where the inputs are
        entradas_IDoffer = html.find_all('div', {'class': 'aditem-header'})
        
        #We go through all the inputs and extract info
        for entrada1 in entradas_IDoffer:
            
            #THESE ARE SOME ATTEMPTS
            #Title = entrada.find('div', {'class': 'aditem-detail-title'}).getText()
            #location = entrada.find('div', {'class': 'list-location-region'}).getText()
            #content = entrada.find('div', {'class': 'tx'}).getText()
            #phone = entrada.find('div', {'class': 'telefonos'}).getText()
        
            #Offer Title
            entradas_Title = html.find_all('div', {'class': 'aditem-detail'})
            for entrada2 in entradas_Title:
                counter += 1
                Title = entrada2.find('a', {'class': 'aditem-detail-title'}).getText()
                
            counter += 1
            IDoffer = entrada1.find('div', {'class': 'x5'}).getText()
                    
                    

        #Location
        #entradas_location = html.find_all('div', {'class': 'aditem-detail'})
        #for entrada4 in entradas_location:
        #    counter += 1
        #    location = entrada4.find('div', {'class': 'list-location-region'}).getText()

        #Offer content
        #entradas_content = html.find_all('div', {'class': 'aditem-detail'})
        #for entrada3 in entradas_content:
        #    counter += 1
        #    content = entrada3.find('div', {'class': 'tx'}).getText()

            print("%d - %s\n%s\n%s" % (counter, IDoffer.strip(), url, Title))

    else:
        try:
            r = requests.head(url)
            print(r.status_code)

        except requests.ConnectionError:
            print("failed to connect")
        #If the page no longer exists and it gives me a 400
        break

1 Answer

Posted 2024-09-27 21:24:58

The correct div to grab is:

entradas_IDoffer = html.find_all("div", class_="aditem CardTestABClass")

The title is under an "a" tag, not a "div":

title = entrada.find("a", class_="aditem-detail-title").text.strip()
location = entrada.find("div", class_="list-location-region").text.strip()
content = entrada.find("div", class_="tx").text.strip()
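As a self-contained illustration of these selectors (the HTML fragment below is made up, not taken from the real site), you can exercise the same `find` calls against a static snippet and guard against `find` returning `None` when a class is missing, as happens with the phone field:

```python
from bs4 import BeautifulSoup

# Hypothetical HTML fragment mimicking the structure the answer describes
html = """
<div class="aditem CardTestABClass">
  <a class="aditem-detail-title">Python developer</a>
  <div class="x5">Ref: 12345</div>
  <div class="list-location-region">Madrid</div>
  <div class="tx">Backend position, remote possible.</div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
entrada = soup.find("div", class_="aditem CardTestABClass")

# find() returns None when the element is absent, so guard before .text
title_tag = entrada.find("a", class_="aditem-detail-title")
title = title_tag.text.strip() if title_tag else ""

phone_tag = entrada.find("div", class_="telefonos")  # not present in this fragment
phone = phone_tag.text.strip() if phone_tag else "N/A"

print(title, phone)
```

Without the `None` guard, a missing element would raise `AttributeError: 'NoneType' object has no attribute 'text'` and stop the whole scrape.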

Do the same for the other fields.

They are probably loading the phone number with JavaScript, so you may not be able to get it with bs4; you could use Selenium for that.

You wrote very long code just to loop over multiple pages. Simply use range to iterate over pages 1 and 2, and put the page number into the URL with an f-string:

for page in range(1, 3):
    url = f'https://www.milanuncios.com/ofertas-de-empleo-en-madrid/?dias=3&demanda=n&pagina={page}'
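As a quick sanity check (standard library only, assuming the same URL pattern), `range(1, 3)` yields 1 and 2, so exactly pages 1 and 2 are built:

```python
URL_BASE = "https://www.milanuncios.com/ofertas-de-empleo-en-madrid/?dias=3&demanda=n&pagina="

# range(1, 3) produces 1 and 2; the upper bound is exclusive
urls = [f"{URL_BASE}{page}" for page in range(1, 3)]
for u in urls:
    print(u)
```

This also avoids the special case in the original code where page 0 used the bare base URL.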

Full code:

import requests
from bs4 import BeautifulSoup

for page in range(1, 5):
    url = f'https://www.milanuncios.com/ofertas-de-empleo-en-madrid/?dias=3&demanda=n&pagina={page}'
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    entradas_IDoffer = soup.find_all("div", class_="aditem CardTestABClass")

    for entrada in entradas_IDoffer:
        title = entrada.find("a", class_="aditem-detail-title").text.strip()
        ID = entrada.find("div", class_="x5").text.strip()
        location = entrada.find("div", class_="list-location-region").text.strip()
        content = entrada.find("div", class_="tx").text.strip()
        
        print(title, ID, location, content)
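The answer above only prints the fields. If you want to keep them, one option (not in the original answer, a sketch using only the standard library) is to write each row to a CSV file with the `csv` module:

```python
import csv
import io

# Hypothetical rows in the same order the scraper prints them: title, ID, location, content
rows = [
    ("Python developer", "Ref: 12345", "Madrid", "Backend position"),
    ("Data analyst", "Ref: 67890", "Madrid", "Office-based role"),
]

# StringIO stands in for a file; swap for open("ofertas.csv", "w", newline="") to write to disk
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["title", "id", "location", "content"])  # header row
writer.writerows(rows)

csv_text = buf.getvalue()
print(csv_text)
```

Using `csv.writer` instead of manual string joining handles quoting automatically when an ad's content contains commas.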
