Running two consecutive Scrapy CrawlerProcesses from a script with different settings

I have two different Scrapy spiders, which I launch with:

scrapy crawl spidername -o data\whatever.json

Of course, I know I could replicate that command with a system call from the script, but I would rather stick to CrawlerProcess, or any other way of making it work from within the script.
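(For reference, the system-call route I am trying to avoid would look roughly like this; it is just a sketch, with placeholder spider names and output paths.)

import subprocess

# Each call blocks until the corresponding crawl finishes;
# the -o flag tells Scrapy to write the scraped items to that file.
subprocess.run(['scrapy', 'crawl', 'spidername', '-o', 'data/whatever.json'], check=True)
subprocess.run(['scrapy', 'crawl', 'otherspider', '-o', 'data/other.json'], check=True)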

The thing is that, as explained in this SO question (and also in the Scrapy docs), I have to set the output file through the settings dict passed to the CrawlerProcess constructor, i.e. via FEED_FORMAT and FEED_URI.
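Something along these lines (a minimal sketch; MySpider and the output path are placeholders):

from scrapy.crawler import CrawlerProcess

# The feed settings go into the dict handed to the CrawlerProcess constructor.
process = CrawlerProcess({
    'FEED_FORMAT': 'json',
    'FEED_URI': 'data/whatever.json',  # placeholder output path
})
process.crawl(MySpider)  # MySpider stands for one of my spider classes
process.start()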

The problem is that I do not want both spiders to store their data in the same output file, but in two different files. So my first attempt was, obviously, to create a new CrawlerProcess with different settings once the first job has finished:

import os
import time
from datetime import datetime

from scrapy.crawler import CrawlerProcess

# MyFirstSpider and MyOtherSpider are my two spider classes, imported from
# the project's spider modules (exact import path omitted here).

session_date_format = '%Y%m%d'
session_date = datetime.now().strftime(session_date_format)

try:
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
        'FEED_FORMAT': 'json',
        'FEED_URI': os.path.join('data', 'an_origin', '{}.json'.format(session_date)),
        'DOWNLOAD_DELAY': 3,
        'LOG_STDOUT': True,
        'LOG_FILE': 'scrapy_log.txt',
        'ROBOTSTXT_OBEY': False,
        'RETRY_ENABLED': True,
        'RETRY_HTTP_CODES': [500, 503, 504, 400, 404, 408],
        'RETRY_TIMES': 5
    })
    process.crawl(MyFirstSpider)
    process.start()  # the script will block here until the crawling is finished
except Exception as e:
    print('ERROR while crawling: {}'.format(e))
else:
    print('Data successfully crawled')

time.sleep(3)  # Wait 3 seconds

try:
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
        'FEED_FORMAT': 'json',
        'FEED_URI': os.path.join('data', 'other_origin', '{}.json'.format(session_date)),
        'DOWNLOAD_DELAY': 3,
        'LOG_STDOUT': True,
        'LOG_FILE': 'scrapy_log.txt',
        'ROBOTSTXT_OBEY': False,
        'RETRY_ENABLED': True,
        'RETRY_HTTP_CODES': [500, 503, 504, 400, 404, 408],
        'RETRY_TIMES': 5
    })
    process.crawl(MyOtherSpider)
    process.start()  # the script will block here until the crawling is finished
except Exception as e:
    print('ERROR while crawling: {}'.format(e))
else:
    print('Data successfully crawled')

When I do this, the first crawler works as expected. However, the second one creates an empty output file and fails. This also happens if I store the second CrawlerProcess in a different variable, e.g. process2. Obviously, I also tried swapping the order of the spiders to check whether the problem was specific to one of them, but the one that fails is always the second one.

If I check the log file, after the first job finishes it looks like two Scrapy bots are started, so something weird may be going on:

2017-05-29 23:51:41 [scrapy.extensions.feedexport] INFO: Stored json feed (2284 items) in: data\one_origin\20170529.json
2017-05-29 23:51:41 [scrapy.core.engine] INFO: Spider closed (finished)
2017-05-29 23:51:41 [stdout] INFO: Data successfully crawled
2017-05-29 23:51:44 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: scrapybot)
2017-05-29 23:51:44 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: scrapybot)
2017-05-29 23:51:44 [scrapy.utils.log] INFO: Overridden settings: {'LOG_FILE': 'scrapy_output.txt', 'FEED_FORMAT': 'json', 'FEED_URI': 'data\\other_origin\\20170529.json', 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'LOG_STDOUT': True, 'RETRY_TIMES': 5, 'RETRY_HTTP_CODES': [500, 503, 504, 400, 404, 408], 'DOWNLOAD_DELAY': 3}
2017-05-29 23:51:44 [scrapy.utils.log] INFO: Overridden settings: {'LOG_FILE': 'scrapy_output.txt', 'FEED_FORMAT': 'json', 'FEED_URI': 'data\\other_origin\\20170529.json', 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'LOG_STDOUT': True, 'RETRY_TIMES': 5, 'RETRY_HTTP_CODES': [500, 503, 504, 400, 404, 408], 'DOWNLOAD_DELAY': 3}
...
2017-05-29 23:51:44 [scrapy.core.engine] INFO: Spider opened
2017-05-29 23:51:44 [scrapy.core.engine] INFO: Spider opened
2017-05-29 23:51:44 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-29 23:51:44 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-29 23:51:44 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-05-29 23:51:44 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-05-29 23:51:44 [stdout] INFO: ERROR while crawling:
2017-05-29 23:51:44 [stdout] INFO: ERROR while crawling:

Any idea what is going on and how to solve this?
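For reference, I suspect this is related to the Twisted reactor not being restartable: the second process.start() would be trying to start it again. The Scrapy docs show a pattern for running spiders sequentially in a single process with CrawlerRunner and chained deferreds, roughly like the sketch below (it assumes each spider sets its own FEED_URI via custom_settings, which I have not verified), but I am not sure whether that is the right way to get two separate output files:

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

configure_logging()
runner = CrawlerRunner()  # shared settings could be passed here as a dict

@defer.inlineCallbacks
def crawl():
    # Run the spiders one after the other inside a single reactor,
    # so the reactor is started and stopped only once.
    yield runner.crawl(MyFirstSpider)
    yield runner.crawl(MyOtherSpider)
    reactor.stop()

crawl()
reactor.run()  # blocks here until both crawls are finished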

