Crawling and scraping a complete site with Scrapy


import scrapy
from scrapy import Request

#scrapy crawl jobs9 -o jobs9.csv -t csv
class JobsSpider(scrapy.Spider):
name = "jobs9"
allowed_domains = ["vapedonia.com"]
start_urls = ["https://www.vapedonia.com/7-principiantes-kit-s-de-inicio-", 
              "https://www.vapedonia.com/10-cigarrillos-electronicos-", 
              "https://www.vapedonia.com/11-mods-potencia-", 
              "https://www.vapedonia.com/12-consumibles", 
              "https://www.vapedonia.com/13-baterias", 
              "https://www.vapedonia.com/23-e-liquidos", 
              "https://www.vapedonia.com/26-accesorios", 
              "https://www.vapedonia.com/31-atomizadores-reparables", 
              "https://www.vapedonia.com/175-alquimia-", 
              "https://www.vapedonia.com/284-articulos-en-liquidacion"]

    def parse(self, response):
        products = response.xpath('//div[@class="product-container clearfix"]')
        for product in products:
            image = product.xpath('div[@class="center_block"]/a/img/@src').extract_first()
            link = product.xpath('div[@class="center_block"]/a/@href').extract_first()
            name = product.xpath('div[@class="right_block"]/p/a/text()').extract_first()
            price = product.xpath('div[@class="right_block"]/div[@class="content_price"]/span[@class="price"]/text()').extract_first().encode("utf-8")
            yield {'Image': image, 'Link': link, 'Name': name, 'Price': price}

        # On the last page there is no "next" link and extract_first() returns
        # None, so guard before building the absolute URL.
        relative_next_url = response.xpath('//*[@id="pagination_next"]/a/@href').extract_first()
        if relative_next_url:
            absolute_next_url = "https://www.vapedonia.com" + relative_next_url
            yield Request(absolute_next_url, callback=self.parse)

With this code, I can correctly scrape the products of a page and its subpages. All paginated pages get crawled.

But if I want to scrape the whole website, I have to put the category URLs manually into start_urls. Crawling should let me discover those URLs itself and make the scraping dynamic.

How can I combine crawling with scraping, beyond the simple pagination crawl?

Thanks a lot.

Now, I have improved my code. Here is the new version:

import scrapy
from scrapy import Request
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

#scrapy crawl jobs10 -o jobs10.csv -t csv
class JobsSpider(scrapy.spiders.CrawlSpider):
name = "jobs10"
allowed_domains = ["vapedonia.com"]
start_urls = ["https://www.vapedonia.com/"]

rules = (Rule(LinkExtractor(allow=(r"https://www.vapedonia.com/\d+.*",)), callback='parse_category'), )

def parse_category(self, response):
    products = response.xpath('//div[@class="product-container clearfix"]')
    for product in products:
        image = product.xpath('div[@class="center_block"]/a/img/@src').extract_first()
        link = product.xpath('div[@class="center_block"]/a/@href').extract_first()
        name = product.xpath('div[@class="right_block"]/p/a/text()').extract_first()
        price = product.xpath('div[@class="right_block"]/div[@class="content_price"]/span[@class="price"]/text()').extract_first().encode("utf-8")
        yield{'Image' : image, 'Link' : link, 'Name': name, 'Price': price}

The changes I made are the following:

1 - I import CrawlSpider, Rule and LinkExtractor:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

2 - The JobsSpider class no longer inherits from scrapy.Spider. It now inherits from scrapy.spiders.CrawlSpider (imported in the previous step).

3 - start_urls is no longer a static list of URLs; we only need the domain, so

start_urls = ["https://www.vapedonia.com/7-principiantes-kit-s-de-inicio-", 
    "https://www.vapedonia.com/10-cigarrillos-electronicos-", 
    "https://www.vapedonia.com/11-mods-potencia-", 
    "https://www.vapedonia.com/12-consumibles", 
    "https://www.vapedonia.com/13-baterias", 
    "https://www.vapedonia.com/23-e-liquidos", 
    "https://www.vapedonia.com/26-accesorios", 
    "https://www.vapedonia.com/31-atomizadores-reparables", 
    "https://www.vapedonia.com/175-alquimia-", 
    "https://www.vapedonia.com/284-articulos-en-liquidacion"]

is replaced by

start_urls = ["https://www.vapedonia.com/"]

4 - We put the rules in place:

rules = (Rule(LinkExtractor(allow=(r"https://www.vapedonia.com/\d+.*",)), callback='parse_category'), )

The callback is no longer called "parse" but "parse_category". (CrawlSpider uses the parse method internally for its own link-following logic, so a rule callback must not be named parse.)
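
As a quick sanity check on the allow pattern, it can be tested against a few link targets outside of Scrapy. A minimal sketch (the paginated URL is an assumed format for this site, e.g. a ?p=2 query string):

import re

# The allow pattern used in the Rule above.
pattern = re.compile(r"https://www.vapedonia.com/\d+.*")

urls = [
    "https://www.vapedonia.com/23-e-liquidos",       # category page: matches
    "https://www.vapedonia.com/23-e-liquidos?p=2",   # assumed pagination URL: matches too
    "https://www.vapedonia.com/content/4-contacto",  # no digits after the slash: no match
]
for url in urls:
    print(url, "->", bool(pattern.match(url)))

Since paginated category URLs (if they keep this shape) match the same pattern, a single rule can in principle cover both the category links and their pagination links.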

5 - The previous pagination crawl disappears, so the following code goes away:

relative_next_url = response.xpath('//*[@id="pagination_next"]/a/@href').extract_first()
absolute_next_url = "https://www.vapedonia.com" + str(relative_next_url)
yield Request(absolute_next_url, callback=self.parse)

So it seems quite logical to me: the pagination crawling process is replaced by a URL crawling process.

But... it does not work. Even the "price" field, which uses encode("utf-8"), no longer works.
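
For what it's worth, two things plausibly explain the failure. First, when a Rule is given a callback, its follow argument defaults to False, so links found on the matched category pages (pagination links included) are never followed; passing follow=True restores the deep crawl. Second, extract_first() returns None whenever the XPath matches nothing (say, a product without a listed price), and calling .encode("utf-8") on None raises AttributeError, killing the item. A minimal sketch combining both fixes, assuming Python 3 so the CSV feed export handles unicode without a manual encode:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class JobsSpider(CrawlSpider):
    name = "jobs10"
    allowed_domains = ["vapedonia.com"]
    start_urls = ["https://www.vapedonia.com/"]

    # follow=True keeps extracting links (pagination included) from the pages
    # matched by this rule; with a callback set, follow defaults to False.
    rules = (
        Rule(LinkExtractor(allow=(r"https://www.vapedonia.com/\d+.*",)),
             callback='parse_category', follow=True),
    )

    def parse_category(self, response):
        products = response.xpath('//div[@class="product-container clearfix"]')
        for product in products:
            image = product.xpath('div[@class="center_block"]/a/img/@src').extract_first()
            link = product.xpath('div[@class="center_block"]/a/@href').extract_first()
            name = product.xpath('div[@class="right_block"]/p/a/text()').extract_first()
            # extract_first() returns None when nothing matches; a default
            # avoids the AttributeError from None.encode("utf-8").
            price = product.xpath(
                'div[@class="right_block"]/div[@class="content_price"]'
                '/span[@class="price"]/text()').extract_first(default='').strip()
            yield {'Image': image, 'Link': link, 'Name': name, 'Price': price}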


1 Answer

In this case, you need to use a CrawlSpider with rules. Below is a simple translation of your scraper:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class JobsSpider(CrawlSpider):
    name = "jobs9"
    allowed_domains = ["vapedonia.com"]
    start_urls = ["https://www.vapedonia.com"]

    rules = (Rule(LinkExtractor(allow=(r"https://www.vapedonia.com/\d+.*",)), callback='parse_category'), )

    def parse_category(self, response):
        products = response.xpath('//div[@class="product-container clearfix"]')
        for product in products:
            image = product.xpath('div[@class="center_block"]/a/img/@src').extract_first()
            link = product.xpath('div[@class="center_block"]/a/@href').extract_first()
            name = product.xpath('div[@class="right_block"]/p/a/text()').extract_first()
            price = product.xpath('div[@class="right_block"]/div[@class="content_price"]/span[@class="price"]/text()').extract_first().encode("utf-8")
            yield {'Image': image, 'Link': link, 'Name': name, 'Price': price}

Take a look at the different kinds of spiders at https://doc.scrapy.org/en/latest/topics/spiders.html
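
As a general debugging tip, scrapy shell lets you test the XPath expressions against a live page before running the whole crawl, which should help pin down the failing price selector. A sketch of such a session, using one of the category URLs from the question:

$ scrapy shell "https://www.vapedonia.com/23-e-liquidos"
>>> products = response.xpath('//div[@class="product-container clearfix"]')
>>> len(products)   # how many product blocks the selector matched
>>> products[0].xpath('div[@class="right_block"]/div[@class="content_price"]/span[@class="price"]/text()').extract_first()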

