How to paginate once the number of pages has been scraped? (Scrapy)

2024-09-30 05:21:11 发布


I am trying to paginate through a review site whose pagination buttons have no hyperlinks. I have written the pagination logic, but the number of pages is hard-coded for each link. I would like to know whether I can instead use information scraped via start_requests as the page count for a given link.

The spider code is here (with two pagination links):

import scrapy

class TareviewsSpider(scrapy.Spider):
    name = 'tareviews'
    allowed_domains = ['tripadvisor.com']
    # start_urls = []

    def start_requests(self):
        for page in range(0,395,5):
            yield self.make_requests_from_url('https://www.tripadvisor.com/Hotel_Review-g60795-d102542-Reviews-or{}-Courtyard_Philadelphia_Airport-Philadelphia_Pennsylvania.html'.format(page))
        for page in range(0,1645,5):
            yield self.make_requests_from_url('https://www.tripadvisor.com/Hotel_Review-g60795-d122332-Reviews-or{}-The_Ritz_Carlton_Philadelphia-Philadelphia_Pennsylvania.html'.format(page))

    def parse(self, response):
        for idx,review in enumerate(response.css('div.review-container')):
            item = {
                'num_reviews': response.css('span.reviews_header_count::text')[0].re(r'\d{0,3}\,?\d{1,3}'),
                'hotel_name': response.css('h1.heading_title::text').extract_first(),
                'review_title': review.css('span.noQuotes::text').extract_first(),
                'review_body': review.css('p.partial_entry::text').extract_first(),
                'review_date': review.xpath('//*[@class="ratingDate relativeDate"]/@title')[idx].extract(),
                'num_reviews_reviewer': review.css('span.badgetext::text').extract_first(),
                'reviewer_name': review.css('span.scrname::text').extract(),
                'bubble_rating': review.xpath("//div[contains(@class, 'reviewItemInline')]//span[contains(@class, 'ui_bubble_rating')]/@class")[idx].re(r'(?<=ui_bubble_rating bubble_).+?(?=0)')
            }
            yield item

'num_reviews' is the number on the last review page for each link. In the for loops these are the hard-coded values 395 and 1645.
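The hard-coded loops step through review offsets five at a time (each page shows five reviews), so the first range covers offsets 0 through 390:

```python
# Offsets generated by the first hard-coded loop in start_requests.
offsets = list(range(0, 395, 5))
print(offsets[:3], offsets[-1], len(offsets))  # prints: [0, 5, 10] 390 79
```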

Is this possible? If so, I would like to avoid using a headless browser. Thanks!


1 Answer

I made this code.

I use the plain url (without -or{}) to get the page and find the number of reviews.
Next, I add -or{} to the url (it can be at any position) to generate the urls of the pages with reviews.
Then I use a for loop and Request() to fetch the pages with reviews.
The reviews are parsed in a separate method - parse_reviews().

In the code I use scrapy.crawler.CrawlerProcess() to run it without a full project, so everyone can easily run and test it.

It saves the data in output.csv

import scrapy

class TareviewsSpider(scrapy.Spider):

    name = 'tareviews'
    allowed_domains = ['tripadvisor.com']

    start_urls = [ # without `-or{}`
        'https://www.tripadvisor.com/Hotel_Review-g60795-d102542-Reviews-Courtyard_Philadelphia_Airport-Philadelphia_Pennsylvania.html',
        'https://www.tripadvisor.com/Hotel_Review-g60795-d122332-Reviews-The_Ritz_Carlton_Philadelphia-Philadelphia_Pennsylvania.html',
    ]

    def parse(self, response):
        # get number of reviews
        num_reviews = response.css('span.reviews_header_count::text').extract_first()
        num_reviews = num_reviews[1:-1] # remove `( )`
        num_reviews = num_reviews.replace(',', '') # remove `,`
        num_reviews = int(num_reviews) # convert to integer
        print('num_reviews:', num_reviews, type(num_reviews))

        # create template to generate urls to pages with reviews
        url = response.url.replace('.html', '-or{}.html')
        print('template:', url)

        # add requests to list
        for offset in range(0, num_reviews, 5):
            print('url:', url.format(offset))
            yield scrapy.Request(url=url.format(offset), callback=self.parse_reviews)

    def parse_reviews(self, response):
        print('reviews')
        for idx,review in enumerate(response.css('div.review-container')):
            item = {
                'num_reviews': response.css('span.reviews_header_count::text')[0].re(r'\d{0,3}\,?\d{1,3}'),
                'hotel_name': response.css('h1.heading_title::text').extract_first(),
                'review_title': review.css('span.noQuotes::text').extract_first(),
                'review_body': review.css('p.partial_entry::text').extract_first(),
                'review_date': review.xpath('//*[@class="ratingDate relativeDate"]/@title')[idx].extract(),
                'num_reviews_reviewer': review.css('span.badgetext::text').extract_first(),
                'reviewer_name': review.css('span.scrname::text').extract(),
                'bubble_rating': review.xpath("//div[contains(@class, 'reviewItemInline')]//span[contains(@class, 'ui_bubble_rating')]/@class")[idx].re(r'(?<=ui_bubble_rating bubble_).+?(?=0)')
            }
            yield item


#  - run without project  -

import scrapy.crawler

c = scrapy.crawler.CrawlerProcess({
    "FEED_FORMAT": 'csv',
    "FEED_URI": 'output.csv',
})
c.crawl(TareviewsSpider)
c.start()
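The count-cleaning steps in parse() can be checked in isolation; a minimal sketch (the helper name is hypothetical, the logic is the same three steps as in the answer's code):

```python
def parse_review_count(text):
    # Strip the surrounding parentheses, drop the thousands
    # separator, and convert to an integer -- the same steps
    # parse() applies to the span.reviews_header_count text.
    return int(text[1:-1].replace(',', ''))

print(parse_review_count('(1,645)'))  # prints 1645
print(parse_review_count('(395)'))    # prints 395
```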

BTW: to get the url of the page you need

^{pr2}$

The other words in the url are only for SEO - to get a better position in Google search results.
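A quick way to see which parts of the url actually identify the hotel: the numeric -g and -d ids can be pulled out with a regex, while the trailing name slug is the SEO-only part (reading -g as location id and -d as hotel id is an assumption from the observed url pattern, not official TripAdvisor documentation):

```python
import re

url = ('https://www.tripadvisor.com/Hotel_Review-g60795-d102542-Reviews-'
       'Courtyard_Philadelphia_Airport-Philadelphia_Pennsylvania.html')

# Capture the numeric ids that follow -g and -d in the path.
m = re.search(r'-g(\d+)-d(\d+)-', url)
print(m.group(1), m.group(2))  # prints: 60795 102542
```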
