Can't scrape data with Scrapy shell (Python)


I am using the Scrapy shell with the URL http://www.yelp.com/search?find_desc=&find_loc=60089

I need to get the data and the URLs from that link. For example, I need to scrape the following entries from that page:

  1. Firewood Kabob Mediterranean Grill
  2. Lou Malnati's Pizzeria
  3. Hakuya Sushi
  4. Nails & Spa Studio, etc.

I used the command

hxs.select('//span[@class="indexed-biz-name"]/a/text()').extract()

to extract that data.

I have tried many ways to get the other data, but what I get back is not relevant to that page.

Please send me the code as soon as possible.


Tags: data, com, url, search, link, www, find, shell
1 Answer

Your expression works:

paul@wheezy:~$ scrapy shell "http://www.yelp.com/search?find_desc=&find_loc=60089"
2014-01-29 22:48:22+0100 [scrapy] INFO: Scrapy 0.23.0 started (bot: scrapybot)
2014-01-29 22:48:22+0100 [scrapy] INFO: Optional features available: ssl, http11, boto, django
2014-01-29 22:48:22+0100 [scrapy] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0}
2014-01-29 22:48:22+0100 [scrapy] INFO: Enabled extensions: TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-01-29 22:48:22+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-01-29 22:48:22+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-01-29 22:48:22+0100 [scrapy] INFO: Enabled item pipelines: 
2014-01-29 22:48:22+0100 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-01-29 22:48:22+0100 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-01-29 22:48:22+0100 [default] INFO: Spider opened
2014-01-29 22:48:24+0100 [default] DEBUG: Crawled (200) <GET http://www.yelp.com/search?find_desc=&find_loc=60089> (referer: None)
[s] Available Scrapy objects:
[s]   item       {}
[s]   request    <GET http://www.yelp.com/search?find_desc=&find_loc=60089>
[s]   response   <200 http://www.yelp.com/search?find_desc=&find_loc=60089>
[s]   sel        <Selector xpath=None data=u'<html xmlns:fb="http://www.facebook.com/'>
[s]   settings   <CrawlerSettings module=None>
[s]   spider     <Spider 'default' at 0x3ba6b50>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser

In [1]: sel.xpath('//span[@class="indexed-biz-name"]/a/text()').extract()
Out[1]: 
[u'Firewood Kabob Mediterranean Grill',
 u"Lou Malnati's Pizzeria",
 u'Hakuya Sushi',
 u'Nails & Spa Studio',
 u'Wooil Korean Restaurant',
 u"Grande Jake's Fresh Mexican Grill",
 u'Hanabi Japanese Restaurant',
 u'India House',
 u'Deerfields Bakery',
 u'Wiener Take All']

In [2]: 
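For reference, the same XPath can be used inside a spider rather than the interactive shell. The sketch below is not from the original answer: the spider name, file name, and output field are assumptions, and it uses response.xpath(), which newer Scrapy versions provide in place of hxs.select().

import scrapy


class YelpSpider(scrapy.Spider):
    # Hypothetical spider name; adjust to your own project.
    name = "yelp_60089"
    start_urls = [
        "http://www.yelp.com/search?find_desc=&find_loc=60089",
    ]

    def parse(self, response):
        # Same expression as in the shell session above, run against the
        # downloaded search page; each business name is yielded as one item.
        for name in response.xpath(
            '//span[@class="indexed-biz-name"]/a/text()'
        ).extract():
            yield {"business_name": name}

Assuming the file is saved as yelp_spider.py, it can be run with "scrapy runspider yelp_spider.py -o names.json" to write the extracted names to a JSON file.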
