Running Scrapy from a script always blocks script execution after scraping

I'm running Scrapy from a script, following this guide: http://doc.scrapy.org/en/0.16/topics/practices.html#run-scrapy-from-a-script. Here is the relevant part of my script:

    # imports needed to run this snippet (Scrapy 0.16-era, Python 2)
    from twisted.internet import reactor
    from scrapy import log
    from scrapy.crawler import Crawler
    from scrapy.settings import Settings

    # `settings` and `spider_name` are defined earlier in the (elided) script
    crawler = Crawler(Settings(settings))
    crawler.configure()
    spider = crawler.spiders.create(spider_name)
    crawler.crawl(spider)
    crawler.start()
    log.start()
    reactor.run()
    print "It can't be printed out!"

It works as it should: it visits the pages, collects the information I need, and stores the output JSON where I tell it to (via FEED_URI). But when the spider finishes its work (I can tell by the item count in the output JSON), my script's execution doesn't resume. It's probably not a Scrapy problem; the answer should be somewhere in Twisted's reactor. How can I release the thread so execution continues?


2 Answers

In Scrapy 0.19.x you should do it like this:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from testspiders.spiders.followall import FollowAllSpider
from scrapy.utils.project import get_project_settings

spider = FollowAllSpider(domain='scrapinghub.com')
settings = get_project_settings()
crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()  # the script will block here until the spider_closed signal is sent

Note these lines:

settings = get_project_settings()
crawler = Crawler(settings)

Without them, your spider will not use your settings and will not save the items. It took me a while to figure out why the example in the documentation didn't save my items. I sent a pull request to fix the doc example.
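
To make the difference concrete, here is a minimal sketch (not from the original answer; it assumes you run it from inside a Scrapy project directory, so get_project_settings() can find your settings.py). Settings() starts from Scrapy's built-in defaults only, while get_project_settings() also loads the project's settings.py (FEED_URI, pipelines, and so on):

    from scrapy.settings import Settings
    from scrapy.utils.project import get_project_settings

    bare = Settings()                  # Scrapy's built-in defaults only
    project = get_project_settings()   # defaults + your project's settings.py

    print bare.get('FEED_URI')         # None: no project settings loaded
    print project.get('FEED_URI')      # whatever your settings.py defines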

Another way is to invoke the command directly from your script:

from scrapy import cmdline
cmdline.execute("scrapy crawl followall".split())  # followall is the spider's name
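
One caveat, not stated in the original answer: cmdline.execute() ends the process when the crawl finishes (it calls sys.exit() internally), so any code after it never runs. If, as in the question, the script must continue afterwards, a workaround is to run the crawl in a child process instead; a minimal sketch, assuming the scrapy executable is on your PATH and the script is launched from inside the project directory:

    import subprocess

    # Run the crawl in a separate process; the parent script blocks here
    # until the child exits, then continues normally.
    subprocess.check_call(['scrapy', 'crawl', 'followall'])
    print "This line IS printed out."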

You need to stop the reactor when the spider finishes. You can accomplish this by listening for the spider_closed signal:

from twisted.internet import reactor

from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher

from testspiders.spiders.followall import FollowAllSpider

def stop_reactor():
    reactor.stop()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Running reactor...')
reactor.run()  # the script will block here until the spider is closed
log.msg('Reactor stopped.')

The command-line log output might look something like this:

stav@maia:/srv/scrapy/testspiders$ ./api
2013-02-10 14:49:38-0600 [scrapy] INFO: Running reactor...
2013-02-10 14:49:47-0600 [followall] INFO: Closing spider (finished)
2013-02-10 14:49:47-0600 [followall] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 23934,...}
2013-02-10 14:49:47-0600 [followall] INFO: Spider closed (finished)
2013-02-10 14:49:47-0600 [scrapy] INFO: Reactor stopped.
stav@maia:/srv/scrapy/testspiders$
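
Both answers target long-obsolete Scrapy releases (0.16-0.19); APIs such as scrapy.log, Crawler.configure() and scrapy.xlib.pydispatch have since been removed. On current Scrapy versions the same "crawl, then let the script continue" pattern is handled by CrawlerProcess, which starts and stops the Twisted reactor for you. A minimal sketch, assuming a modern Scrapy install and the same example spider:

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    from testspiders.spiders.followall import FollowAllSpider

    # CrawlerProcess owns the reactor: start() blocks until every
    # scheduled crawl is finished, then returns control to the script.
    process = CrawlerProcess(get_project_settings())
    process.crawl(FollowAllSpider, domain='scrapinghub.com')
    process.start()  # the reactor is stopped automatically here
    print('This line IS printed out.')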
