Scraping AJAX pages with Scrapy?

Published 2024-09-28 13:15:10


I am using Scrapy to collect data from this page:

https://www.bricoetloisirs.ch/magasins/gardena

The product list is rendered dynamically. I found the URL that fetches the products:

https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272

But when I scrape it with Scrapy it gives me an empty page:

<span class="pageSizeInformation" id="page0" data-page="0" data-pagesize="12">Page: 0 / Size: 12</span>

Here is my code:

# -*- coding: utf-8 -*-
import scrapy

from v4.items import Product


class GardenaCoopBricoLoisirsSpider(scrapy.Spider):
    name = "Gardena_Coop_Brico_Loisirs_py"

    start_urls = [
            'https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272'
        ]

    def parse(self, response):
        print(response.body)

3 answers

I believe you need to send the additional request the way a browser would. Try modifying your code as follows:

# -*- coding: utf-8 -*-
import scrapy

from scrapy.http import Request
from v4.items import Product


class GardenaCoopBricoLoisirsSpider(scrapy.Spider):
    name = "Gardena_Coop_Brico_Loisirs_py"

    start_urls = [
        'https://www.bricoetloisirs.ch/coop/ajax/nextPage/'
    ]

    def parse(self, response):
        request_body = '(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272'
        yield Request(url=response.url, body=request_body, callback=self.parse_page)

    def parse_page(self, response):
        print(response.body)
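If the endpoint still answers with an empty fragment, it may be distinguishing AJAX requests from plain page loads by their headers. A minimal sketch of the idea (the `ajax_headers` helper and the exact header set are assumptions, not part of the original answer — copy the real headers from your browser's DevTools):

```python
def ajax_headers(referer):
    """Build a minimal browser-like header set for an AJAX request.

    Hypothetical helper: many AJAX endpoints check X-Requested-With
    and/or Referer before returning content.
    """
    return {
        'X-Requested-With': 'XMLHttpRequest',  # marks the request as AJAX
        'Referer': referer,                    # page the call originates from
        'Accept': 'text/html, */*; q=0.01',    # AJAX fragments, not full pages
    }
```

These can then be passed to `scrapy.Request(url, headers=ajax_headers(...), ...)` so the request looks like the one the page's own JavaScript sends.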

As far as I know, the site uses JavaScript to make the AJAX calls.
When you use Scrapy, the page's JS is not executed.

You should look into whether Selenium can scrape this kind of page.

Or find the AJAX calls being made and send them yourself.
Have a look at Can scrapy be used to scrape dynamic content from websites that are using AJAX? — it may help you as well.
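Following the "send the AJAX calls yourself" suggestion: the paginated URL observed in DevTools can be rebuilt per page. A minimal sketch, assuming the parenthesised parameter block stays constant and only the `page` query parameter changes (`build_page_url` is a hypothetical helper, not from the original answer):

```python
# Endpoint as captured in the browser's network tab; the parenthesised
# segment carries the layout/pagesize parameters and is left untouched.
BASE = ('https://www.bricoetloisirs.ch/coop/ajax/nextPage/'
        '(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2'
        '&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do')

def build_page_url(page):
    """Return the AJAX URL for a given result page (assumed scheme)."""
    return '%s?page=%d' % (BASE, page)
```

Each URL produced this way can be fed to a plain `scrapy.Request`, replaying exactly what the page's JavaScript would have requested.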

Here is how I solved it.

# -*- coding: utf-8 -*-
import scrapy

from v4.items import Product


class GardenaCoopBricoLoisirsSpider(scrapy.Spider):
    name = "Gardena_Coop_Brico_Loisirs_py"

    start_urls = [
            'https://www.bricoetloisirs.ch/magasins/gardena'
        ]

    def parse(self, response):
        for page in range(1, 50):
            url = response.url + '/.do?page=%s&_=1473841539272' % page
            yield scrapy.Request(url, callback=self.parse_page)

    def parse_page(self, response):
        print(response.body)
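Requesting pages 1 through 49 blindly will keep hitting the endpoint past the last page. One way to stop early, sketched under the assumption that an exhausted page no longer contains any product markup (`page_has_products` and the `product` class pattern are hypothetical — check the real class names in the returned fragment):

```python
import re

def page_has_products(html):
    """Crude check: does the AJAX fragment contain any product markup?

    Assumption: product entries carry a CSS class containing 'product',
    while an empty page only returns the pageSizeInformation span.
    """
    return bool(re.search(r'class="[^"]*product', html))
```

Inside `parse_page`, the spider could stop scheduling further pages once this returns `False` for a response.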
