Scraping with a crawl spider

Posted 2024-09-30 22:13:19


I tried to do this with a crawl spider. Here is the code, but the spider returns no results (it opens and then closes without scraping anything):

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from torent.items import TorentItem

class MultiPagesSpider(CrawlSpider):
    name = 'job'
    allowed_domains = ['tanitjobs.com/']
    start_urls = ['http://tanitjobs.com/browse-by-category/Nurse/?searchId=1393459812.065&action=search&page=1&view=list',]
    rules = (
        Rule(SgmlLinkExtractor(allow=('page=*',), restrict_xpaths=('//div[@class="pageNavigation"]',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        items = hxs.select('//div[@class="offre"]/div[@class="detail"]')
        scraped_items = []
        for item in items:
            scraped_item = TorentItem()
            scraped_item["title"] = item.select('a/strong/text()').extract()
            scraped_items.append(scraped_item)
        return items

1 Answer

Answer #1 (forum user):

What @paul t. said in the comment above applies, but in addition you need to return scraped_items instead of items; otherwise you get lots of errors like this:

2014-02-26 23:40:59+0000 [job] ERROR: Spider must return Request, BaseItem or None, got 'HtmlXPathSelector' in 
<GET http://tanitjobs.com/browse-by-category/Nurse/?action=search&page=3&searchId=1393459812.065&view=list>
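The fix the answer describes is to accumulate the extracted fields into a list and return that list, rather than the raw selector results. A minimal, Scrapy-free sketch of the same pattern using only the standard library (the HTML snippet and the dict standing in for TorentItem are illustrative assumptions, not the real site's markup):

```python
import xml.etree.ElementTree as ET

# Illustrative markup mimicking the structure the spider targets
# (an assumption; the real tanitjobs.com markup may differ).
HTML = (
    '<div class="offre"><div class="detail">'
    '<a href="/job/1"><strong>Nurse A</strong></a></div></div>'
    '<div class="offre"><div class="detail">'
    '<a href="/job/2"><strong>Nurse B</strong></a></div></div>'
)

def parse_item(html):
    # Wrap the fragment so it parses as a single XML document.
    root = ET.fromstring('<root>' + html + '</root>')
    scraped_items = []
    for detail in root.findall('.//div[@class="detail"]'):
        # A plain dict stands in for the TorentItem scrapy.Item subclass.
        item = {'title': detail.find('a/strong').text}
        scraped_items.append(item)
    # Return the items we built, not the selector/element nodes.
    return scraped_items

print(parse_item(HTML))
```

In the spider itself the change is simply `return scraped_items` at the end of `parse_item`. Note also that the imports shown in the question (`scrapy.contrib.spiders`, `SgmlLinkExtractor`, `HtmlXPathSelector`) are deprecated in modern Scrapy, which uses `scrapy.spiders`, `scrapy.linkextractors.LinkExtractor`, and `response.xpath(...)` instead.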
