My Scrapy script is very slow: it takes 3 minutes to extract 100 items

Posted 2024-09-29 19:35:16


I'm learning Scrapy because I know it works asynchronously and is therefore faster than Selenium. But in practice it takes a full 3 minutes just to scrape 100 items, and I can't figure out why. Please, I need help.

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from batt_data.items import BattDataItem
import urllib.parse
from selenium import webdriver

class BatterySpider(CrawlSpider):
    name = 'battery'
#     allowed_domains = ['web']
    start_urls = ['https://www.made-in-china.com/multi-search/24v%2Bbattery/F1/1.html']
    base_url = ['https://www.made-in-china.com/multi-search/24v%2Bbattery/F1/1.html']
    
    
    # driver = webdriver.Chrome()
    # driver.find_element_by_xpath('//a[contains(@class,"list-switch-btn list-switch-btn-right selected")]').click()

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//*[contains(@class, "nextpage")]'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # .getall() is the current Scrapy spelling of the older .extract()
        price = response.css('.price::text').getall()
        description = response.xpath('//img[@class="J-firstLazyload"]/@alt').getall()
        chemistry = response.xpath('//li[@class="J-faketitle ellipsis"][1]/span/text()').getall()
        applications = response.xpath('//li[@class="J-faketitle ellipsis"][2]/span/text()').getall()
        discharge_rate = response.xpath('//li[@class="J-faketitle ellipsis"][4]/span/text()').getall()
        shape = response.xpath('//li[@class="J-faketitle ellipsis"][5]/span/text()').getall()
        
        # Note: zip() silently stops at the shortest list, so a page where one
        # field is missing will drop items. The yield must also sit inside the
        # loop; at the original indentation only the last item was emitted.
        data = zip(description, price, chemistry, applications, discharge_rate, shape)
        for item in data:
            yield {
                'description': item[0],
                'price': item[1],
                'chemistry': item[2],
                'applications': item[3],
                'discharge_rate': item[4],
                'shape': item[5],
            }

1 Answer

Posted 2024-09-29 19:35:16

It turned out I was sending too many requests. I fixed it by looping over a single container that holds all the items I need, so each page yields all of its items from one response. The updated spider finishes the job in under a minute.
