Item is not scraped if one field is missing

Posted 2024-10-03 02:31:47


Over the past two days I have spent several hours building my first Scrapy spider, but now I'm stuck. My main goal is to extract all the data so I can filter it later in a CSV. Right now the data that really matters to me (companies with no website!) gets dropped, because if an item has no homepage, Scrapy cannot find the XPath I provided. I tried an if statement here, but it doesn't work.

Example with a website: https://www.achern.de/de/Wirtschaft/Unternehmen-A-Z/Unternehmen?view=publish&item=company&id=1345

The XPath selector I use: response.xpath("//div[@class='cCore_contactInformationBlockWithIcon cCore_wwwIcon']/a/@href").extract()

Example without a website: https://www.achern.de/de/Wirtschaft/Unternehmen-A-Z/Unternehmen?view=publish&item=company&id=1512

Spider code:

# -*- coding: utf-8 -*-
import scrapy

class AchernSpider(scrapy.Spider):
    name = 'achern'
    allowed_domains = ['www.achern.de']
    start_urls = ['https://www.achern.de/de/Wirtschaft/Unternehmen-A-Z/']

    def parse(self, response):
        for href in response.xpath("//ul[@class='cCore_list cCore_customList']/li[*][*]/a/@href"):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.scrape)

    def scrape(self, response):
        # Extracting the content using css selectors
        print("Processing: " + response.url)
        firma = response.css('div>#cMpu_publish_company>h2.cCore_headline::text').extract()
        anschrift = response.xpath("//div[contains(@class,'cCore_addressBlock_address')]/text()").extract()
        tel = response.xpath("//div[@class='cCore_contactInformationBlockWithIcon cCore_phoneIcon']/text()").extract()
        mail = response.xpath(".//div[@class='cCore_contactInformationBlock']//*[contains(text(), '@')]/text()").extract()
        web1 = response.xpath("//div[@class='cCore_contactInformationBlockWithIcon cCore_wwwIcon']/a/@href").extract()
        if "http:" not in web1:
            web = "na"
        else:
            web = web1

        row_data = zip(firma, anschrift, tel, mail, web1)  # web1 must be changed to web, but then it only gives out "n" for every link
        # Give the extracted content row wise
        for item in row_data:
            # create a dictionary to store the scraped info
            scraped_info = {
                'Firma': item[0],
                'Anschrift': item[1] + ' 77855 Achern',
                'Telefon': item[2],
                'Mail': item[3],
                'Web': item[4],
            }

            # yield or give the scraped info to scrapy
            yield scraped_info

So, in short: even when there is no "web", the items that are currently dropped should still be exported.

Hope someone can help, kind regards.


Tags: text, div, response, www, extract, de, item, xpath
1 answer

Answered 2024-10-03 02:31:47

Use

response.css(".cCore_wwwIcon > a::attr(href)").get()

which returns either None or the website address; you can then use or to supply a default value:

website = response.css(".cCore_wwwIcon > a::attr(href)").get() or 'na'
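
The reason the item disappeared entirely in your version is that .extract() returns an empty list when nothing matches, and zip() stops at the shortest of its inputs, so the whole row is silently dropped. A minimal sketch in plain Python (the variable names just mirror your spider):

# zip() stops at the shortest input, so an empty list wipes out the row
firma = ['Zappenduster-RC Steffen Liepe']
web1 = []                      # extract() found no website -> empty list

print(list(zip(firma, web1)))  # [] -- nothing left to yield, the item is lost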

I also refactored your scraper to use CSS selectors. Note that I used .get() instead of .extract() to retrieve a single item rather than a list, which cleans up the code considerably.
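
A quick illustration of the .get() vs .extract() difference, using parsel (the selector library Scrapy is built on) with a made-up HTML snippet:

from parsel import Selector

sel = Selector(text='<div class="cCore_headline">Example GmbH</div>')

# .extract() / .getall() always return a list, possibly empty
print(sel.css('.cCore_headline::text').extract())  # ['Example GmbH']
print(sel.css('.missing::text').extract())          # []

# .get() returns the first match as a string, or None if nothing matches
print(sel.css('.cCore_headline::text').get())        # 'Example GmbH'
print(sel.css('.missing::text').get())                # None
print(sel.css('.missing::text').get() or 'na')        # 'na'

Here is the refactored spider: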

import scrapy
from scrapy.crawler import CrawlerProcess

class AchernSpider(scrapy.Spider):
    name = 'achern'
    allowed_domains = ['www.achern.de']
    start_urls = ['https://www.achern.de/de/Wirtschaft/Unternehmen-A-Z/']

    def parse(self, response):
        for url in response.css("[class*=cCore_listRow] > a::attr(href)").extract():
            yield scrapy.Request(url, callback=self.scrape)

    def scrape(self, response):
        # Extracting the content using css selectors
        firma = response.css('.cCore_headline::text').get()
        anschrift = response.css('.cCore_addressBlock_address::text').get()
        tel = response.css(".cCore_phoneIcon::text").get()
        mail = response.css("[href^=mailto]::attr(href)").get().replace('mailto:', '')
        website = response.css(".cCore_wwwIcon > a::attr(href)").get() or 'na'

        scraped_info = {
            'Firma': firma,
            'Anschrift': anschrift + ' 77855 Achern',
            'Telefon': tel,
            'Mail': mail,
            'Web': website,
        }
        yield scraped_info


if __name__ == "__main__":
    p = CrawlerProcess()
    p.crawl(AchernSpider)
    p.start()

Output:

with website:
{'Firma': 'Wölfinger Fahrschule GmbH', 'Anschrift': 'Güterhallenstraße 8 77855 Achern', 'Telefon': '07841 6738132', 'Mail': 'info@woelfinger-fahrschule.de', 'Web': 'http://www.woelfinger-fahrschule.de'}

without website:
{'Firma': 'Zappenduster-RC Steffen Liepe', 'Anschrift': 'Am Kirchweg 16 77855 Achern', 'Telefon': '07841 6844700', 'Mail': 'Zappenduster-Rc@hotmail.de', 'Web': 'na'}
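
If you want the scraped items written straight to a CSV for later filtering, one option (a sketch; the filename achern.csv is just an example) is to pass a FEEDS setting to CrawlerProcess instead of relying on the command line:

if __name__ == "__main__":
    # FEEDS tells Scrapy to export every yielded item to the given file/format
    p = CrawlerProcess(settings={
        "FEEDS": {"achern.csv": {"format": "csv"}},  # example output path
    })
    p.crawl(AchernSpider)
    p.start()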
