Creating a multi-level menu with Scrapy 1.5


I'm trying to get all the links from a multi-level menu. The start URL is https://www.bbcgoodfood.com/recipes/category/ingredients:

import scrapy

from foodisgood.items import FoodisgoodItem
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst


class BbcSpider(CrawlSpider):

    name = 'bbc'
    allowed_domains = ['bbcgoodfood.com']

    start_urls = ['https://www.bbcgoodfood.com/recipes/category/ingredients']

    rules = (
        # First level: sub-category pages, e.g. /recipes/category/meat
        Rule(LinkExtractor(allow=r'/recipes/category/[\w-]+$', restrict_xpaths='//article[contains(@class, "cleargridindent")]'), callback='parse_sub_categories', follow=True),
        # Second level: collection pages, e.g. /recipes/collection/steak
        Rule(LinkExtractor(allow=r'/recipes/collection/[\w-]+$', restrict_xpaths='//article[contains(@class, "cleargridindent")]'), callback='parse_collections', follow=True),
    )

    def parse_sub_categories(self, response):
        # Yields one item per sub-category page (first menu level).
        l = ItemLoader(item=FoodisgoodItem(), response=response)

        l.default_output_processor = TakeFirst()

        l.add_xpath('category_title', '//h1[@class="section-head--title"]/text()')
        l.add_value('page_url', response.url)

        yield l.load_item()

    def parse_collections(self, response):
        # Yields one item per collection page (second menu level).
        l = ItemLoader(item=FoodisgoodItem(), response=response)

        l.default_output_processor = TakeFirst()

        l.add_xpath('collection_title', '//h1[@class="section-head--title"]/text()')
        l.add_value('page_url', response.url)

        yield l.load_item()

Results of menu scraping (screenshot). I don't understand how to fill the empty first column before each collection title.

At the moment I get:

(empty) | Steak recipes | https://www.bbcgoodfood.com/recipes/collection/steak

But I need:

Meat | Steak recipes | https://www.bbcgoodfood.com/recipes/collection/steak

Can anyone tell me what I need to do to get the sub-category into the first column?

Thanks to everyone :)


1 answer

What you want isn't really feasible with CrawlSpider's rules (at least not in a simple way).

The usual approach is the one described in Passing additional data to callback functions: extract the category in the first callback, then create new requests that pass this information along in the meta dict.
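A minimal sketch of that approach, using a plain scrapy.Spider instead of CrawlSpider so that each request can carry its parent category. The link XPaths below are assumptions adapted from your restrict_xpaths and allow patterns, and response.follow requires Scrapy 1.4 or later; adjust the selectors to the real page markup:

import scrapy

from foodisgood.items import FoodisgoodItem
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst


class BbcMenuSpider(scrapy.Spider):
    # Plain Spider instead of CrawlSpider, so every request we build
    # can carry the parent category title in its meta dict.
    name = 'bbc_menu'
    allowed_domains = ['bbcgoodfood.com']
    start_urls = ['https://www.bbcgoodfood.com/recipes/category/ingredients']

    # Assumed menu container, copied from your restrict_xpaths.
    menu_xpath = '//article[contains(@class, "cleargridindent")]'

    def parse(self, response):
        # First level: follow each sub-category link and remember its text.
        for link in response.xpath(self.menu_xpath + '//a[contains(@href, "/recipes/category/")]'):
            category = link.xpath('normalize-space(.)').extract_first()
            yield response.follow(link, callback=self.parse_sub_category,
                                  meta={'category_title': category})

    def parse_sub_category(self, response):
        # Second level: follow collection links, handing the category down.
        for link in response.xpath(self.menu_xpath + '//a[contains(@href, "/recipes/collection/")]'):
            yield response.follow(link, callback=self.parse_collection,
                                  meta={'category_title': response.meta['category_title']})

    def parse_collection(self, response):
        l = ItemLoader(item=FoodisgoodItem(), response=response)
        l.default_output_processor = TakeFirst()

        # The sub-category arrives via meta and fills the first column.
        l.add_value('category_title', response.meta['category_title'])
        l.add_xpath('collection_title', '//h1[@class="section-head--title"]/text()')
        l.add_value('page_url', response.url)

        yield l.load_item()

Each collection item now picks up the sub-category title it was reached through, so the first column is no longer empty.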

相关问题 更多 >