Learning Python and trying out Scrapy

I'm going through the boring tutorial at http://doc.scrapy.org/en/latest/intro/tutorial.html and was following along fine until I ran this command:

scrapy crawl dmoz

It gave me an error.

I'm not very familiar with Python, so I don't know what it's complaining about.

Here is my domz_spider.py file:

from scrapy.spider import BaseSpider

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        # Save the raw page body to a file named after the trailing path
        # segment of the URL ("Books" or "Resources")
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)

And here is my items file

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy.item import Item, Field

class DmozItem(Item):
    title = Field()
    link = Field()
    desc = Field()
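
The item class isn't actually used by the spider above yet. For context, the next step in the tutorial (sketched here against the Scrapy 0.18-era HtmlXPathSelector API matching the version in the logs below, so the XPaths and field values are an approximation rather than the asker's code) has parse() return DmozItem objects instead of writing files:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

from tutorial.items import DmozItem

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        items = []
        # Each book/resource listing on the dmoz category page is an <li>
        for site in hxs.select('//ul/li'):
            item = DmozItem()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            items.append(item)
        return items

Running scrapy crawl dmoz again would then log the scraped items instead of just saving raw HTML.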

Here is the directory structure:

scrapy.cfg
tutorial/
tutorial/items.py
tutorial/pipelines.py
tutorial/settings.py
tutorial/spiders/
tutorial/spiders/domz_spider.py
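
(For completeness: a project generated by scrapy startproject tutorial normally also contains __init__.py files in tutorial/ and tutorial/spiders/, which Python needs in order to import tutorial.spiders as a package. They may simply have been omitted from the listing above; the expected layout would be:)

scrapy.cfg
tutorial/
tutorial/__init__.py
tutorial/items.py
tutorial/pipelines.py
tutorial/settings.py
tutorial/spiders/
tutorial/spiders/__init__.py
tutorial/spiders/domz_spider.py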

And here is the settings.py file:

BOT_NAME = 'tutorial'

SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'

1 Answer

#1 · from a forum user

OK, I found that this fixed the problem:

sudo pip install --upgrade zope.interface
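
If you want to double-check that the upgrade actually took effect, one quick sanity check (just a suggestion, not something from the original answer) is to try importing the pieces Scrapy sits on from the same Python interpreter:

import zope.interface   # the package the command above upgraded
import twisted          # Scrapy is built on Twisted, which depends on zope.interface
import scrapy

print("imports OK, Scrapy version: " + scrapy.__version__)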

I don't know exactly what happened when I ran that command, but it solved my problem, and now I see this:

2013-08-25 13:30:05-0700 [scrapy] INFO: Scrapy 0.18.0 started (bot: tutorial)
2013-08-25 13:30:05-0700 [scrapy] DEBUG: Optional features available: ssl, http11
2013-08-25 13:30:05-0700 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2013-08-25 13:30:05-0700 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-08-25 13:30:05-0700 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-08-25 13:30:05-0700 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-08-25 13:30:05-0700 [scrapy] DEBUG: Enabled item pipelines:
2013-08-25 13:30:05-0700 [dmoz] INFO: Spider opened
2013-08-25 13:30:05-0700 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-08-25 13:30:05-0700 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-08-25 13:30:05-0700 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-08-25 13:30:06-0700 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2013-08-25 13:30:06-0700 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2013-08-25 13:30:06-0700 [dmoz] INFO: Closing spider (finished)
2013-08-25 13:30:06-0700 [dmoz] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 530,
     'downloader/request_count': 2,
     'downloader/request_method_count/GET': 2,
     'downloader/response_bytes': 14738,
     'downloader/response_count': 2,
     'downloader/response_status_count/200': 2,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 8, 25, 20, 30, 6, 559375),
     'log_count/DEBUG': 10,
     'log_count/INFO': 4,
     'response_received_count': 2,
     'scheduler/dequeued': 2,
     'scheduler/dequeued/memory': 2,
     'scheduler/enqueued': 2,
     'scheduler/enqueued/memory': 2,
     'start_time': datetime.datetime(2013, 8, 25, 20, 30, 5, 664310)}
2013-08-25 13:30:06-0700 [dmoz] INFO: Spider closed (finished)
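
Since parse() just writes response.body to a file named after the trailing URL segment, a run like the one above should leave two files, Books and Resources, in the directory the crawl was started from. A quick way to confirm (again just a suggestion, not part of the original answer):

import os

# parse() wrote response.body to one file per start URL, named after
# the trailing path segment ("Books" and "Resources")
for name in ("Books", "Resources"):
    if os.path.exists(name):
        print(name + ": " + str(os.path.getsize(name)) + " bytes")
    else:
        print(name + ": missing")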
