How to scrape newly found links with Scrapy


I recently started using Scrapy, so I'm not very good with it yet; this is a newbie question.

For practice I scraped a random convention site. I scraped the company name and booth number, but I also want each company's link, which opens on a separate page. I found and stored the link from the anchor tag, but I don't know how to scrape those new links. Any help or direction would be great.

import scrapy

class ConventionSpider(scrapy.Spider):
    name = 'convention'
    allowed_domains = ['events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    def parse(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()
            url = row2.xpath('.//a/@href').extract_first()

            yield {'Company': company,'Booth Number': booth_num}

3 Answers

A simpler approach is to subclass scrapy.spiders.CrawlSpider and specify the rules attribute:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ConventionSpider(CrawlSpider):
    name = 'convention'
    allowed_domains = ['events.jspargo.com']  # domains only; allowed_domains must not contain paths or query strings
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    rules = (
        Rule(
            LinkExtractor(allow=('',),  # an empty pattern matches every URL, i.e. follow all links
                          deny=()),     # no deny patterns; deny=('') would match and drop every link
            callback='parse_item',      # called for each page fetched from an extracted link
            follow=True,
        ),
    )

    def parse_item(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()
            # url = row2.xpath('.//a/@href').extract_first()
            # No need to parse links because we are using CrawlSpider

            yield {'Company': company,'Booth Number': booth_num}

However, make sure not to use parse as the callback function, because scrapy.spiders.CrawlSpider uses the parse method to implement its own logic.
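
Also note that allow=('',) matches every URL, so the spider above will end up crawling the whole site. If that is too broad, the allow pattern can be narrowed to just the exhibitor detail pages. The regex below is only a hypothetical example; check the real detail-page URLs on events.jspargo.com and adjust it accordingly:

rules = (
    Rule(
        # Hypothetical pattern; replace it with the URL fragment actually used by
        # the exhibitor detail pages. This is a drop-in replacement for the rules
        # attribute of the CrawlSpider above.
        LinkExtractor(allow=(r'Public/.*Exhibitor',)),
        callback='parse_item',
        follow=True,
    ),
)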

Your code has an indentation problem in the parse_page method, and you named it "parse" instead of "parse_page". That is probably why your code does not work correctly. The modified code below works fine for me:

import scrapy
from scrapy import Request

class ConventionSpider(scrapy.Spider):
    name = 'Convention'
    allowed_domains = ['events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    def parse(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()  # no trailing comma, otherwise company becomes a 1-tuple
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()

            next_page_url = row2.xpath('.//a/@href').extract_first()
            next_page_url = response.urljoin(next_page_url)
            yield Request(next_page_url, callback=self.parse_page, meta={'Company': company, 'Booth Number': booth_num}, dont_filter=True)

    def parse_page(self, response):
        company = response.meta.get('Company')
        booth_num = response.meta.get('Booth Number')
        website = response.xpath('//a[@class="aa-BoothContactUrl"]/text()').extract_first()
        yield {'Company': company, 'Booth Number': booth_num, 'Website': website}
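
If you are on Scrapy 1.7 or newer, cb_kwargs is a slightly cleaner alternative to meta for carrying the scraped fields into the next callback, because the values arrive as ordinary keyword arguments. A minimal sketch (the spider name is just for illustration, and the booth number is left out for brevity):

import scrapy

class ConventionCbKwargsSpider(scrapy.Spider):
    name = 'convention_cb_kwargs'
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    def parse(self, response):
        for row in response.xpath('//*[@class="companyName"]'):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            url = row.xpath('.//a/@href').extract_first()
            yield scrapy.Request(
                response.urljoin(url),
                callback=self.parse_page,
                cb_kwargs={'company': company},  # delivered as the keyword argument below
            )

    def parse_page(self, response, company):
        website = response.xpath('//a[@class="aa-BoothContactUrl"]/text()').extract_first()
        yield {'Company': company, 'Website': website}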

Reference: https://github.com/NilanshBansal/Craigslist_Scrapy/blob/master/craigslist/spiders/jobs.py

import scrapy
from scrapy import Request

class ConventionSpider(scrapy.Spider):
    name = 'convention'
    # allowed_domains = ['events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    def parse(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()
            url = row2.xpath('.//a/@href').extract_first()

            # urljoin resolves the relative href against the listing page URL
            yield Request(response.urljoin(url), callback=self.parse_page,
                          meta={'Url': url, 'Company': company, 'Booth Number': booth_num})

    def parse_page(self, response):
        company = response.meta.get('Company')
        booth_num = response.meta.get('Booth Number')
        website = response.xpath('//a[@class="aa-BoothContactUrl"]/text()').extract_first()

        yield {'Company': company, 'Booth Number': booth_num, 'Website': website}

Edit: Commenting out the allowed_domains line also lets the crawler work on other domains.
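
For reference, allowed_domains is meant to contain bare domain names, not full URLs; if you want to keep the offsite filter instead of commenting the line out, the intended form is:

# Domain names only; no scheme, path or query string.
allowed_domains = ['events.jspargo.com']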

In response to your code at https://stackoverflow.com/a/52792350
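
To try any of the spiders above without a full Scrapy project, one option is a small runner script. This sketch assumes Scrapy 2.1+ for the FEEDS setting (older versions use FEED_URI/FEED_FORMAT instead) and that ConventionSpider is defined in, or imported into, the same file; inside a project, scrapy crawl convention -o exhibitors.json does the same job:

from scrapy.crawler import CrawlerProcess

# ConventionSpider is assumed to be defined above or imported from your own module.
process = CrawlerProcess(settings={
    'FEEDS': {'exhibitors.json': {'format': 'json'}},  # write the scraped items to a JSON file
})
process.crawl(ConventionSpider)
process.start()  # blocks until the crawl finishes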
