Can't log in to a website with a web crawler (Scrapy)

Posted 2024-09-28 22:20:31


I'm working on a project where I have to scrape the website "http://app.bmiet.net/student/login" after logging in. However, I can't log in using Scrapy. I think this is because my code can't read the CSRF token from the site, but I'm still learning Scrapy, so I'm not sure. Please help me with the code and tell me what mistake I've made. The code is shown below:

import scrapy
from scrapy.http import FormRequest
from scrapy.utils.response import open_in_browser


class spidey(scrapy.Spider):
    name = 'spidyy'
    start_urls = [
        'http://app.bmiet.net/student/login'
    ]


def parse(self, response):
    token = response.css('form input::attr(value)').extract_first()
    return FormRequest.from_response(response, formdata={
        'csrf_token': token,
        'username': '//username//',
        'password': '//password//'
    }, callback=self.start_scrapping)


def start_scrapping(self, response):
    open_in_browser(response)
    all = response.css('.table-hover td')
    for x in all:
        att = x.css('td:nth-child(2)::text').extract()
        sub = x.css('td~ td+ td::text').extract()
        yield {
            'Subject': sub,
            'Status': att
        }

I removed the username and password for obvious reasons. I'm also sharing the output I get in the terminal when I run the program:

2020-03-21 17:06:49 [scrapy.core.engine] INFO: Spider opened
2020-03-21 17:06:49 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-03-21 17:06:49 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-03-21 17:06:50 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://app.bmiet.net/robots.txt> (referer: None)
2020-03-21 17:06:54 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://app.bmiet.net/student/login> (referer: None)
2020-03-21 17:06:54 [scrapy.core.scraper] ERROR: Spider error processing <GET http://app.bmiet.net/student/login> (referer: None)
Traceback (most recent call last):
  File "c:\users\administrator\pycharmprojects\sarthak_project\venv\lib\site-packages\twisted\internet\defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "c:\users\administrator\pycharmprojects\sarthak_project\venv\lib\site-packages\scrapy\spiders\__init__.py", line 84, in parse
    raise NotImplementedError('{}.parse callback is not defined'.format(self.__class__.__name__))
NotImplementedError: spidey.parse callback is not defined
2020-03-21 17:06:54 [scrapy.core.engine] INFO: Closing spider (finished)
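The traceback points at the real problem: Scrapy falls back to the base `Spider.parse`, which raises `NotImplementedError`, because the `parse` in the file is defined at module level rather than inside the `spidey` class. A minimal sketch of the same Python behavior (the `BaseSpider` class here is illustrative, not Scrapy's actual implementation):

```python
class BaseSpider:
    def parse(self, response):
        # Scrapy's base Spider does essentially this when parse is not overridden
        raise NotImplementedError(
            '{}.parse callback is not defined'.format(self.__class__.__name__))


class spidey(BaseSpider):
    pass  # no methods of its own


# This def sits at module level -- exactly what unindented methods produce.
# It is NOT attached to spidey, so the class never sees it.
def parse(self, response):
    return 'parsed'


try:
    spidey().parse(None)
except NotImplementedError as e:
    print(e)  # spidey.parse callback is not defined
```

Because `parse` never becomes a method of `spidey`, the inherited base implementation runs and raises, which is exactly the error in the log above.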

1 Answer

I suggest you reformat your code and indent the methods so that they are part of the class, like this:

import scrapy
from scrapy.http import FormRequest
from scrapy.utils.response import open_in_browser


class spidey(scrapy.Spider):
    name = 'spidyy'
    start_urls = [
        'http://app.bmiet.net/student/login'
    ]

    def parse(self, response):
        token = response.css('form input::attr(value)').extract_first()
        return FormRequest.from_response(response, formdata={
            'csrf_token': token,
            'username': '//username//',
            'password': '//password//'
        }, callback=self.start_scrapping)

    def start_scrapping(self, response):
        open_in_browser(response)
        all = response.css('.table-hover td')
        for x in all:
            att = x.css('td:nth-child(2)::text').extract()
            sub = x.css('td~ td+ td::text').extract()
            yield {
                'Subject': sub,
                'Status': att
            }
