Unable to crawl the site with this Scrapy spider



I can't get this script to run. Can someone tell me what exactly is wrong with it? Are all the XPaths correct?

I suspect this part is wrong:

item['job_title'] = site.select('h2/a/@title').extract()
link_url= site.select('h2/a/@href').extract()

because the XPath looks incorrect.
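
The XPaths can be sanity-checked against the live page in scrapy shell; the //h2/a paths below assume the job links use the same markup the snippet above targets:

scrapy shell "http://www.indeed.com/jobs?q=linux&l=Chicago&sort=date"
>>> response.xpath('//h2/a/@title').extract()   # job titles
>>> response.xpath('//h2/a/@href').extract()    # job links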


Here is the error log:

hakuna@hakuna-Inspiron-3542:~/indeeda$ scrapy crawl indeed
/home/hakuna/indeeda/indeeda/spiders/test.py:1: ScrapyDeprecationWarning: Module `scrapy.spider` is deprecated, use `scrapy.spiders` instead
  from scrapy.spider import BaseSpider
/home/hakuna/indeeda/indeeda/spiders/test.py:3: ScrapyDeprecationWarning: Module `scrapy.contrib.spiders` is deprecated, use `scrapy.spiders` instead
  from scrapy.contrib.spiders import CrawlSpider, Rule
/home/hakuna/indeeda/indeeda/spiders/test.py:5: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors` is deprecated, use `scrapy.linkextractors` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
/home/hakuna/indeeda/indeeda/spiders/test.py:5: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors.sgml` is deprecated, use `scrapy.linkextractors.sgml` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
/home/hakuna/indeeda/indeeda/spiders/test.py:15: ScrapyDeprecationWarning: SgmlLinkExtractor is deprecated and will be removed in future releases. Please use scrapy.linkextractors.LinkExtractor
  Rule(SgmlLinkExtractor(allow=('/jobs.q=linux&l=Chicago&sort=date$','q=linux&l=Chicago&sort=date&start=[0-9]+$',),deny=('/my/mysearches', '/preferences', '/advanced_search','/my/myjobs')), callback='parse_item', follow=True),
2016-01-21 21:31:22 [scrapy] INFO: Scrapy 1.0.4 started (bot: indeeda)
2016-01-21 21:31:22 [scrapy] INFO: Optional features available: ssl, http11, boto
2016-01-21 21:31:22 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'indeeda.spiders', 'SPIDER_MODULES': ['indeeda.spiders'], 'BOT_NAME': 'indeeda'}
2016-01-21 21:31:22 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-01-21 21:31:22 [boto] DEBUG: Retrieving credentials from metadata server.
2016-01-21 21:31:23 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/home/hakuna/anaconda/lib/python2.7/urllib2.py", line 431, in open
    response = self._open(req, data)
  File "/home/hakuna/anaconda/lib/python2.7/urllib2.py", line 449, in _open
    '_open', req)
  File "/home/hakuna/anaconda/lib/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/home/hakuna/anaconda/lib/python2.7/urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/home/hakuna/anaconda/lib/python2.7/urllib2.py", line 1197, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2016-01-21 21:31:23 [boto] ERROR: Unable to read instance data, giving up
2016-01-21 21:31:23 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-01-21 21:31:23 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-01-21 21:31:23 [scrapy] INFO: Enabled item pipelines: 
2016-01-21 21:31:23 [scrapy] INFO: Spider opened
2016-01-21 21:31:23 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-01-21 21:31:23 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-01-21 21:31:23 [scrapy] DEBUG: Crawled (200) <GET http://www.indeed.com/jobs?q=linux&l=Chicago&sort=date?> (referer: None)
2016-01-21 21:31:23 [scrapy] ERROR: Spider error processing <GET http://www.indeed.com/jobs?q=linux&l=Chicago&sort=date?> (referer: None)
Traceback (most recent call last):
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output
    for x in result:
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 54, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/spiders/crawl.py", line 73, in _parse_response
    for request_or_item in self._requests_to_follow(response):
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/spiders/crawl.py", line 52, in _requests_to_follow
    links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/linkextractors/sgml.py", line 138, in extract_links
    links = self._extract_links(body, response.url, response.encoding, base_url)
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/linkextractors/__init__.py", line 103, in _extract_links
    return self.link_extractor._extract_links(*args, **kwargs)
  File "/home/hakuna/anaconda/lib/python2.7/site-packages/scrapy/linkextractors/sgml.py", line 36, in _extract_links
    self.feed(response_text)
  File "/home/hakuna/anaconda/lib/python2.7/sgmllib.py", line 104, in feed
    self.goahead(0)
  File "/home/hakuna/anaconda/lib/python2.7/sgmllib.py", line 174, in goahead
    k = self.parse_declaration(i)
  File "/home/hakuna/anaconda/lib/python2.7/markupbase.py", line 98, in parse_declaration
    decltype, j = self._scan_name(j, i)
  File "/home/hakuna/anaconda/lib/python2.7/markupbase.py", line 392, in _scan_name
    % rawdata[declstartpos:declstartpos+20])
  File "/home/hakuna/anaconda/lib/python2.7/sgmllib.py", line 111, in error
    raise SGMLParseError(message)
SGMLParseError: expected name token at "<!\\\\])/g, '\\\\$1').\n "
2016-01-21 21:31:23 [scrapy] INFO: Closing spider (finished)
2016-01-21 21:31:23 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 245,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 28427,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 1, 22, 3, 31, 23, 795599),
 'log_count/DEBUG': 3,
 'log_count/ERROR': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/SGMLParseError': 1,
 'start_time': datetime.datetime(2016, 1, 22, 3, 31, 23, 504391)}
2016-01-21 21:31:23 [scrapy] INFO: Spider closed (finished)

The log says most of the imported modules are deprecated.


1 Answer

SgmlLinkExtractor is deprecated; use LinkExtractor instead. It is built on Python 2's sgmllib, which is what raises the SGMLParseError in your traceback once it hits markup it cannot parse.

from scrapy.linkextractors import LinkExtractor

...
    rules = (
        Rule(
            LinkExtractor(
                allow=('/jobs.q=linux&l=Chicago&sort=date$',
                       'q=linux&l=Chicago&sort=date&start=[0-9]+$'),
                deny=('/my/mysearches', '/preferences',
                      '/advanced_search', '/my/myjobs'),
            ),
            callback='parse_item',
            follow=True,
        ),
    )
...
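
Putting it together, a minimal sketch of the whole updated spider; the spider name, start URL and item fields are taken from the question and the crawl log above, while the flat //h2/a selector and the dict item are assumptions, not your original code:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class IndeedSpider(CrawlSpider):
    # name and start URL as seen in the crawl log
    name = 'indeed'
    allowed_domains = ['indeed.com']
    start_urls = ['http://www.indeed.com/jobs?q=linux&l=Chicago&sort=date']

    rules = (
        Rule(
            LinkExtractor(
                allow=('/jobs.q=linux&l=Chicago&sort=date$',
                       'q=linux&l=Chicago&sort=date&start=[0-9]+$'),
                deny=('/my/mysearches', '/preferences',
                      '/advanced_search', '/my/myjobs'),
            ),
            callback='parse_item',
            follow=True,
        ),
    )

    def parse_item(self, response):
        # the h2/a attribute paths come from the question; selecting the
        # anchors directly with //h2/a is an assumed simplification
        for site in response.xpath('//h2/a'):
            yield {
                'job_title': site.xpath('@title').extract(),
                'link_url': site.xpath('@href').extract(),
            }

With the sgmllib-based extractor gone, the ScrapyDeprecationWarning lines disappear as well, since all imports now come from scrapy.spiders and scrapy.linkextractors rather than scrapy.spider and scrapy.contrib.*.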
