How to crawl an entire domain instead of providing individual links

Published 2024-10-02 10:25:59


Currently our spider works from a hard-coded list of URLs; we would like to change it so that it only crawls the main domain.

How can we change the code below so that it only needs the domain

https://www.example.com/shop/

A pointer to a good example would be great.

    def start_requests(self):
        urls = [
            # 'https://www.example.com/shop/outdoors-unknown-hart-creek-fleece-hoodie',
            'https://www.example.com/shop/adidas-unknown-essentials-cotton-fleece-3s-over-head-hoodie#repChildCatSku=111767466',
            'https://www.example.com/shop/unknown-metallic-long-sleeve-shirt#repChildCatSku=115673740',
            'https://www.example.com/shop/unknown-fleece-full-zip-hoodie#repChildCatSku=111121673',
            'https://www.example.com/shop/unknown-therma-fleece-training-hoodie#repChildCatSku=114784077',
            'https://www.example.com/shop/under-unknown-rival-fleece-crew-sweater#repChildCatSku=114636980',
            'https://www.example.com/shop/unknown-element-1-2-zip-top#repChildCatSku=114794996',
            'https://www.example.com/shop/unknown-element-1-2-zip-top#repChildCatSku=114794996',
            'https://www.example.com/shop/under-unknown-rival-fleece-full-zip-hoodie#repChildCatSku=115448841',
            'https://www.example.com/shop/under-unknown-rival-fleece-crew-sweater#repChildCatSku=114636980',
            'https://www.example.com/shop/adidas-unknown-essentials-3-stripe-fleece-sweatshirt#repChildCatSku=115001812',
            'https://www.example.com/shop/under-unknown-fleece-logo-hoodie#repChildCatSku=115305875',
            'https://www.example.com/shop/under-unknown-heatgear-long-sleeve-shirt#repChildCatSku=107534192',
            'https://www.example.com/shop/unknown-long-sleeve-legend-hoodie#repChildCatSku=112187421',
            'https://www.example.com/shop/unknown-element-1-2-zip-top#repChildCatSku=114794996',
            'https://www.example.com/shop/unknown-sportswear-funnel-neck-hoodie-111112208#repChildCatSku=111112208',
            'https://www.example.com/shop/unknown-therma-swoosh-fleece-training-hoodie#repChildCatSku=114784481',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # Derive a filename from the last path segment of the URL.
        page = response.url.split("/")[-1]
        filename = 'academy-%s.txt' % page
        # Grab both the price and SKU spans in one XPath query.
        res2 = response.xpath("//span[@itemprop='price']/text()|//span[@itemprop='sku']/text()").extract()

        res = '\n'.join(res2)

        with open(filename, 'w') as f:
            f.write(res)
        self.log('Saved file %s' % filename)
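At its core, restricting a crawl to one domain means filtering every discovered link by its host before following it. A minimal stdlib illustration of that check (the helper name `on_domain` is ours, not part of Scrapy):

```python
# Minimal sketch of the link filtering a domain-wide crawl needs:
# resolve each href against the page it was found on, then keep it
# only if its host stays on the target domain.
from urllib.parse import urljoin, urlparse


def on_domain(base_url, href, domain='example.com'):
    """Resolve href against base_url and check it stays on `domain`."""
    host = urlparse(urljoin(base_url, href)).netloc
    return host == domain or host.endswith('.' + domain)


base = 'https://www.example.com/shop/'
assert on_domain(base, '/shop/some-hoodie')        # relative link, same site
assert not on_domain(base, 'https://other.com/x')  # off-domain link, skipped
```

Scrapy performs exactly this kind of filtering for you when a spider sets `allowed_domains`, so in practice you rarely write it by hand.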


1 Answer

Posted 2024-10-02 10:25:59

For pure traversal, you can do:

class MySpider(scrapy.Spider):
    name = 'my'
    allowed_domains = ['example.com']
    start_urls = ['https://www.example.com/shop/']

    def parse(self, response):
        for link in response.css('a'):
            yield response.follow(link)

But this task seems pointless on its own. Could you elaborate on what you are actually trying to achieve?
