Scrapy crawler ignores DOWNLOADER_MIDDLEWARES when run as a script


I want to use Scrapy to fetch data from several different sites and then run some analysis on that data. Since the crawlers and the analysis code belong to the same project, I'd like to keep everything in a single Git repository. I have created a minimal reproducible example on Github.

The project structure looks like this:

./crawlers
./crawlers/__init__.py
./crawlers/myproject
./crawlers/myproject/__init__.py
./crawlers/myproject/myproject
./crawlers/myproject/myproject/__init__.py
./crawlers/myproject/myproject/items.py
./crawlers/myproject/myproject/pipelines.py
./crawlers/myproject/myproject/settings.py
./crawlers/myproject/myproject/spiders
./crawlers/myproject/myproject/spiders/__init__.py
./crawlers/myproject/myproject/spiders/example.py
./crawlers/myproject/scrapy.cfg
./scrapyScript.py

From inside the ./crawlers/myproject folder, I can run the crawler by typing:

scrapy crawl example

The crawler uses some downloader middleware, in particular alecxe's excellent scrapy-fake-useragent. From settings.py:

DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
}

When executed with scrapy crawl ..., the user agent looks like a real browser. Here is a sample record from the web server:

24.8.42.44 - - [16/Jun/2015:05:07:59 +0000] "GET / HTTP/1.1" 200 27161 "-" "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36"

Looking at the documentation, the equivalent of scrapy crawl ... can be run from a script. Based on the documentation, my scrapyScript.py file looks like this:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals

from scrapy.utils.project import get_project_settings
from crawlers.myproject.myproject.spiders.example import ExampleSpider

spider = ExampleSpider()
settings = get_project_settings()

crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)

crawler.start()
log.start()
reactor.run()

When I run the script, I can see the crawler making page requests. Unfortunately, it ignores DOWNLOADER_MIDDLEWARES. For example, the user agent is no longer spoofed:

24.8.42.44 - - [16/Jun/2015:05:32:04 +0000] "GET / HTTP/1.1" 200 27161 "-" "Scrapy/0.24.6 (+http://scrapy.org)"

Somehow, when the crawler is run from the script, it seems to ignore the settings in settings.py.

Can you see what I'm doing wrong?


1 Answer

For get_project_settings() to find the right settings.py, set the SCRAPY_SETTINGS_MODULE environment variable:

import os
import sys

# ...

sys.path.append(os.path.join(os.path.curdir, "crawlers/myproject"))
os.environ['SCRAPY_SETTINGS_MODULE'] = 'myproject.settings'

settings = get_project_settings()

Note that, because of where the runner script is located, you need to add myproject to sys.path. Alternatively, move scrapyScript.py into the myproject directory.
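
Putting the two pieces together, here is a minimal sketch of what scrapyScript.py could look like with that fix applied, assuming it is still run from the repository root and keeping the same Scrapy 0.24-era API used in the question:

import os
import sys

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings

# Make the Scrapy project importable and tell get_project_settings() which
# settings module to load, so settings.py (and with it DOWNLOADER_MIDDLEWARES)
# is actually picked up.
sys.path.append(os.path.join(os.path.curdir, "crawlers/myproject"))
os.environ['SCRAPY_SETTINGS_MODULE'] = 'myproject.settings'

from crawlers.myproject.myproject.spiders.example import ExampleSpider

spider = ExampleSpider()
settings = get_project_settings()  # now loads myproject/settings.py

crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)

crawler.start()
log.start()
reactor.run()

If it is working, Scrapy's startup log should list RandomUserAgentMiddleware under "Enabled downloader middlewares", and the web server log should show the spoofed user agent again.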
