<p>From the docs: <a href="https://doc.scrapy.org/en/latest/topics/practices.html#running-multiple-spiders-in-the-same-process" rel="nofollow noreferrer">https://doc.scrapy.org/en/latest/topics/practices.html#running-multiple-spiders-in-the-same-process</a></p>
<pre><code>from scrapy.crawler import CrawlerProcess
from .spiders import Spider1, Spider2  # import your own spider classes

process = CrawlerProcess()
process.crawl(Spider1)
process.crawl(Spider2)
process.start()  # the script will block here until all crawling jobs are finished
</code></pre>
<p>EDIT: If you want to run the spiders one after another instead of in parallel, you can do the following:</p>
<pre><code>from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from .spiders import Spider1, Spider2  # import your own spider classes

configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(Spider1)
    yield runner.crawl(Spider2)
    reactor.stop()

crawl()
reactor.run()  # the script will block here until the last crawl is finished
</code></pre>
<p>I haven't tested all of this, though, so let me know whether it works!</p>