This is the URL where I start the crawl process:
https://unicodono.com.br/anuncios/itens/ajax?page=1
This is the AJAX call that loads the data into the page dynamically.
I think I am sending the form and the headers correctly, but the response comes back with error 419 and the crawler stops. I don't know how to interpret this error. Is something missing from my spider?
import logging

import scrapy
from scrapy.spiders import CrawlSpider


class MySpider(CrawlSpider):
    name = 'myspider'
    start_urls = ['https://unicodono.com.br/anuncios?page=1&uf_id=&cidade_id=&marca=&modelo=&versao=&cambio=&valor_minimo=&valor_maximo=&ano_minimo=&ano_maximo=&km_minimo=&km_maximo=&usado=&novos=&moto=&carro=&orderby=menor_valor&blindagem=&nao_blindagem=&cep=']

    form_data = {
        'uf_id': '',
        'cidade_id': '',
        'marca': '',
        'modelo': '',
        'versao': '',
        'valor_minimo': '',
        'valor_maximo': '',
        'ano_minimo': '',
        'ano_maximo': '',
        'km_minimo': '',
        'km_maximo': '',
        'orderby': 'menor_valor',
    }

    def parse(self, response):
        for url in self.start_urls:
            yield scrapy.FormRequest(
                url='https://unicodono.com.br/anuncios/itens/ajax?page=1',
                method='POST',
                headers={
                    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
                    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
                    'Referer': url,
                },
                callback=self.parse_page,
                formdata=self.form_data,
            )

    def parse_page(self, response):
        logging.info('parse_page function called on %s', response.url)
        from scrapy.shell import inspect_response
        inspect_response(response, self)
        yield {'data': response.text}
To avoid hardcoding the headers, and to fetch the JSON response with a POST request, you could do something like the following using the requests library.
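A minimal sketch of that idea with requests, under the same assumption that the 419 comes from Laravel's CSRF check (`get_csrf_token` is a hypothetical helper; the `csrf-token` meta tag and `X-CSRF-TOKEN` header are Laravel conventions, not confirmed for this site): a `Session` keeps the cookies from the first GET, so the follow-up POST carries both the session cookie and the token.

```python
import re

import requests


def get_csrf_token(html):
    """Extract the CSRF token from a Laravel page's <meta> tag, if present."""
    match = re.search(r'<meta name="csrf-token" content="([^"]+)"', html)
    return match.group(1) if match else None


def fetch_ads_page(page=1):
    session = requests.Session()
    session.headers.update({
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/74.0.3729.169 Safari/537.36',
    })

    # 1) GET the listing page: the session stores Laravel's cookies and
    #    the HTML contains the CSRF token.
    listing = session.get('https://unicodono.com.br/anuncios?page=%d' % page)
    token = get_csrf_token(listing.text)

    # 2) POST to the AJAX endpoint with the token; without it Laravel
    #    answers 419 "Page Expired".
    response = session.post(
        'https://unicodono.com.br/anuncios/itens/ajax?page=%d' % page,
        headers={'X-CSRF-TOKEN': token,
                 'X-Requested-With': 'XMLHttpRequest'},
        data={'orderby': 'menor_valor'},  # illustrative subset of the form
    )
    response.raise_for_status()
    return response.json()
```

Note that this sketch assumes the endpoint returns JSON; if it returns an HTML fragment instead, use `response.text` and parse it.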