Scrapy: CrawlSpider not generating all links and data from the given URLs

Published 2024-10-01 04:46:38


I am not able to scrape the data at the URLs below. When I try, my machine returns some irrelevant data.

URL 1: http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=samsung%20appliances&sprefix=samsung+applia%2Caps&rh=i%3Aaps%2Ck%3Asamsung%20appliances

URL 2: http://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Asamsung+appliances&page=2&keywords=samsung+appliances&ie=UTF8&qid=1391033912

Code:

Line 1: hxs.select('//h3[@class="newaps"]/span/text()').extract()

Line 2: hxs.select('//h3[@class="newaps"]/a/@href').extract()

Expected output:

For URL 1 & Line 1:

Samsung RF4289HARS
Samsung Heating Element DC47-00019A
Samsung WIS12ABGNX Wireless LAN Adapter
Samsung SMH1816S 1.8 Cu. Ft. Stainless Steel Microwave
Samsung RF4287 28 Cu. Ft. French Door Refrigerator with 4 Doors, Integrated Water & Ice, Real Stainless Steel
... and so on

I need the corresponding output for the Line 2 code above as well, and then the same again for URL 2.
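One quick way to sanity-check the two selectors outside the spider is Scrapy's interactive shell, which on versions of this vintage exposes the response as hxs. A minimal session, assuming URL 1 is reachable from your machine and the page still uses the newaps class:

    scrapy shell "http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=samsung%20appliances"
    >>> hxs.select('//h3[@class="newaps"]/span/text()').extract()   # product titles
    >>> hxs.select('//h3[@class="newaps"]/a/@href').extract()       # product links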

Here is my code:

    from scrapy.spider import BaseSpider
    from scrapy.http import Request
    from urlparse import urljoin
    from scrapy.selector import HtmlXPathSelector
    from amazon.items import AmazonItem

    class amzspider(BaseSpider):
        name = "amz"

        start_urls = ["http://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Asamsung+appliances&page=2&keywords=samsung+appliances&ie=UTF8&qid=1386153209"]

        def parse(self, response):
            hxs = HtmlXPathSelector(response)

            # Run the link XPath once and reuse the result, instead of
            # re-extracting the whole list on every pass of a range() loop.
            hrefs = [h.encode('utf-8').strip() for h in
                     hxs.select('//h3[@class="newaps"]/a/@href').extract()]
            print "URLs parsed"

            # One request per product detail page, with relative hrefs
            # resolved against the page URL.
            for href in hrefs:
                yield Request(urljoin(response.url, href),
                              callback=self.parse_sub)

            # extract() returns a list; indexing it with [0] on the last
            # results page (which has no "Next" link) raises IndexError and
            # aborts the generator, so test the list before following it.
            next_page = hxs.select('//a[@id="pagnNextLink"]/@href').extract()
            if next_page:
                yield Request(urljoin(response.url, next_page[0].encode('utf-8')),
                              callback=self.parse)

        def parse_sub(self, response):
            print "sub called"
            # item = response.meta.get('item')
            item = AmazonItem()
            hxs = HtmlXPathSelector(response)
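For reference, here is a sketch of the same crawl on current Scrapy releases, where BaseSpider and HtmlXPathSelector no longer exist. The newaps and pagnNextLink selectors are taken straight from the question; response.follow() resolves relative hrefs against the page URL for you:

    import scrapy

    class AmzSpider(scrapy.Spider):
        name = "amz"
        start_urls = [
            "http://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Asamsung+appliances&page=2&keywords=samsung+appliances&ie=UTF8&qid=1386153209",
        ]

        def parse(self, response):
            # One request per product link; response.follow() joins the
            # relative href against response.url automatically.
            for href in response.xpath('//h3[@class="newaps"]/a/@href').getall():
                yield response.follow(href, callback=self.parse_sub)

            # .get() returns None instead of raising IndexError when the
            # last results page has no "Next" link, so pagination ends cleanly.
            next_page = response.xpath('//a[@id="pagnNextLink"]/@href').get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

        def parse_sub(self, response):
            # Detail-page extraction goes where the question's parse_sub()
            # leaves off.
            pass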
