Scrapy - sending an AJAX FormRequest returns error 419

Published 2024-09-28 18:59:20


https://unicodono.com.br/anuncios?page=3&uf_id=&cidade_id=&marca=&modelo=&versao=&cambio=&valor_minimo=&valor_maximo=&ano_minimo=&ano_maximo=&km_minimo=&km_maximo=&usado=&novos=&moto=&carro=&orderby=menor_valor&blindagem=&nao_blindagem=&cep=

This is the URL where I start the crawl.

https://unicodono.com.br/anuncios/itens/ajax?page=1

This is the AJAX call that dynamically loads the data into the page.

I believe I am sending the form data and headers correctly, but the response comes back with error 419 and the crawler stops running. I don't know how to interpret this error. Is something missing from my spider?

import logging

import scrapy
from scrapy.spiders import CrawlSpider


class MySpider(CrawlSpider):

    name = 'myspider'

    start_urls = ['https://unicodono.com.br/anuncios?page=1&uf_id=&cidade_id=&marca=&modelo=&versao=&cambio=&valor_minimo=&valor_maximo=&ano_minimo=&ano_maximo=&km_minimo=&km_maximo=&usado=&novos=&moto=&carro=&orderby=menor_valor&blindagem=&nao_blindagem=&cep=',]

    form_data = {'uf_id': '',
                 'cidade_id': '',
                 'marca': '',
                 'modelo': '',
                 'versao': '',
                 'valor_minimo': '',
                 'valor_maximo': '',
                 'ano_minimo': '',
                 'ano_maximo': '',
                 'km_minimo': '',
                 'km_maximo': '',
                 'orderby': 'menor_valor'}

    def parse(self, response):
        for url in self.start_urls:
            yield scrapy.FormRequest(
                url='https://unicodono.com.br/anuncios/itens/ajax?page=1',
                method='POST',
                headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
                         'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
                         'Referer': url},
                callback=self.parse_page,
                formdata=self.form_data
            )

    def parse_page(self, response):

        logging.info('parse_page function called on %s', response.url)

        from scrapy.shell import inspect_response
        inspect_response(response, self)

        yield {'data': response.text}

2 Answers

To avoid hard-coding the headers and to fetch the JSON response with a POST request, you could do something like this:

import scrapy


class UnicodonoSpider(scrapy.Spider):
    name = "unicodono"
    start_urls = ["https://unicodono.com.br/anuncios?page=3&uf_id=&cidade_id=&marca=&modelo=&versao=&cambio=&valor_minimo=&valor_maximo=&ano_minimo=&ano_maximo=&km_minimo=&km_maximo=&usado=&novos=&moto=&carro=&orderby=menor_valor&blindagem=&nao_blindagem=&cep="]
    purl = 'https://unicodono.com.br/anuncios/itens/ajax?page=3'

    def parse(self, response):
        # The session cookie and CSRF token both come from the initial HTML page.
        cookie = response.headers.getlist('Set-Cookie')[0]
        csrf = response.css("meta[name='csrf-token']::attr(content)").get()
        yield scrapy.FormRequest(
            self.purl,
            headers={"Cookie": cookie, "X-CSRF-TOKEN": csrf},
            formdata={'orderby': 'menor_valor'},
            callback=self.parse_json,
        )

    def parse_json(self, response):
        # response.json() replaces the deprecated response.body_as_unicode()
        print(response.json())
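To crawl more than one page, the hard-coded `page=3` isn't necessary: the page number is just a query-string parameter on the AJAX endpoint, so the URLs can be generated per page. The helper name and page range below are my own illustration, not part of the answer above:

```python
from urllib.parse import urlencode

AJAX_BASE = "https://unicodono.com.br/anuncios/itens/ajax"


def ajax_url(page):
    """Build the AJAX endpoint URL for a given result page."""
    return f"{AJAX_BASE}?{urlencode({'page': page})}"


# Generate the endpoints for the first three pages.
urls = [ajax_url(p) for p in range(1, 4)]
print(urls)
# → ['https://unicodono.com.br/anuncios/itens/ajax?page=1',
#    'https://unicodono.com.br/anuncios/itens/ajax?page=2',
#    'https://unicodono.com.br/anuncios/itens/ajax?page=3']
```

In the spider above, each of these URLs could be passed to its own `scrapy.FormRequest` (with the same cookie and CSRF token) to fetch every page of results.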

Using the requests library:

import requests

form_data = {'orderby': 'menor_valor'}

# NOTE: the Cookie and X-CSRF-TOKEN values below were captured from one
# browser session and will expire; copy fresh values from your browser's
# developer tools before running.
headers = {
    "Cookie": "ga=GA1.3.500010858.1561116466; _gid=GA1.3.1602312084.1561116466; _gat_gtag_UA_15308183_2=1; XSRF-TOKEN=eyJpdiI6IlZQaFZua1pFUW9ORDFlNTV0UGpnT3c9PSIsInZhbHVlIjoiMkpPd1lac1VTNzcrV2hETzk0V3grcFNrSnA2eEJ3SmdUZkpIUGdIQjNCa01tdWJNdDI4VlR4ODlkVlVTemRcLzUiLCJtYWMiOiJhY2I1NDJiYmFmZDA2MWNlNTQ5NGJmYjZhNDM3NTExMTIzZDYyYTY5YjM3MmJhZWE1NTE1MzA0MGNmMjY5M2M1In0%3D; unicodono_session=eyJpdiI6ImhyWE8xbGhtQVBacnpyTTJ6NmxPanc9PSIsInZhbHVlIjoieEwybHBaNTFzaWR0elplRjcxWHc2RFJjK1Q1WlJmZmFsdGVyZVZtaEhPcmdrNVQ1bVZpZFBoS2RuNDVreEhBWCIsIm1hYyI6ImIyZWNhNWE5ODE1YjU5OTEyNjRkNWQ4ZTg5ZmMwOTVmNWEyYjhiMzE0MzJmODE4OWM3NTQ2ZTNmOTliMzZhNjQifQ%3D%3D",
    "X-CSRF-TOKEN": "R8UOoGWhksZEUJsdmIsQwpA9Gx9qSTXpvIgBZcXX"
}

url = 'https://unicodono.com.br/anuncios/itens/ajax?page=1'

response = requests.post(url, data=form_data, headers=headers)
print(response.json())
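Rather than pasting the whole Cookie header by hand, you can let requests manage cookies for you: fetch the listing page first with a `requests.Session` (the session stores the cookies automatically) and extract only the CSRF token from the HTML. The regex-based helper below is my own sketch, not code from the answer above:

```python
import re


def extract_csrf_token(html):
    """Pull the token out of <meta name="csrf-token" content="...">."""
    match = re.search(
        r'<meta\s+name=["\']csrf-token["\']\s+content=["\']([^"\']+)["\']',
        html,
    )
    return match.group(1) if match else None


# With a live session it would be used roughly like this (untested sketch):
#
#   import requests
#   session = requests.Session()
#   page = session.get("https://unicodono.com.br/anuncios?page=1")
#   token = extract_csrf_token(page.text)
#   resp = session.post(
#       "https://unicodono.com.br/anuncios/itens/ajax?page=1",
#       data={"orderby": "menor_valor"},
#       headers={"X-CSRF-TOKEN": token},
#   )

sample = '<head><meta name="csrf-token" content="R8UOoGWh"></head>'
print(extract_csrf_token(sample))  # → R8UOoGWh
```

Because the session sends the cookies it received on the first GET, only the `X-CSRF-TOKEN` header needs to be set manually, and the token stays in sync with the session instead of going stale.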
