How do I scrape subpages and merge their data with the parent page's item?

Posted on 2024-09-30 14:20:59


I'm using Scrapy to parse a page. The page has subpages (categories) that I also need to pull information from and merge into a single item (possibly saving the other pages' info as JSON), then write it all to a CSV. I've tried different options, e.g.:

requests = scrapy.Request(url, meta={'meta_item': item}, callback=self.parse_category)

yield scrapy.Request(url, meta={'meta_item': item}, callback=self.parse_category)

but neither of these does what I want.

For example, I take the result pages from https://www.webscorer.com/findraces?pg=results (example: https://www.webscorer.com/seriesresult?seriesid=211565) and extract information from each of them. After that, I need to get more information from the category pages (e.g. https://www.webscorer.com/seriesresult?seriesid=211565&gender=F) and put it all into the CSV. My current code:

import re

import scrapy
from scrapy.http import Response

# WebscorerEvent and parse_webscorer_date are defined elsewhere in my project


class WebscorerSpider(scrapy.Spider):
    name = 'webscorer'
    allowed_domains = ['webscorer.com']

    def start_requests(self):
        url = 'https://www.webscorer.com/findraces?pg=results'
        yield scrapy.Request(url, callback=self.parse_page)

    def parse_page(self, response, **kwargs):
        for href in response.css('table.results-table tbody tr a::attr(href)').extract():
            url = response.urljoin(href)
            # NOTE: this overrides the href above with a fixed example series
            url = 'https://www.webscorer.com/seriesresult?seriesid=211565'
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response: Response, **kwargs):
        latlong_match = re.search('lat=(.*)&lng=(.*)', response.css('span#FSrc::text').get())
        item = dict()

        for href in response.css('table.category-table .category-name').css('a::attr(href)').extract():
            url = response.urljoin(href)

            # requests = scrapy.Request(url, meta={'meta_item': item}, callback=self.parse_category)

            yield scrapy.Request(url, meta={'meta_item': item}, callback=self.parse_category)

        yield WebscorerEvent(name=response.css('h1.race-name::text').get(),
                             source_url=response.request.url,
                             sport_discipline=response.css('td.spec+td').css('strong::text').get(),
                             description=response.css('span.regnotes span::text').get(),
                             hero_image=response.css('p.associated-race-pic img::attr(src)').get(),
                             start_date=parse_webscorer_date(response.css('p.race-date::text').get()),
                             location={
                                 "link": f"https://www.google.com/maps/search/?api=1&query={latlong_match.group(1)},{latlong_match.group(2)}",
                                 "description": response.css('td.spec:contains("Location:")+td strong::text').get()})

    def parse_category(self, response, **kwargs):
        item = response.meta['meta_item']
        item['winner'] = response.css('table.results-table .r-racername span::text').get()

        return item

1 Answer

Answered on 2024-09-30 14:20:59

You do yield WebscorerEvent, so you have already "given away" the item before the data you need from the next page has been fetched.

You can do something like this:

def parse(self, response: Response, **kwargs):
    latlong_match = re.search('lat=(.*)&lng=(.*)', response.css('span#FSrc::text').get())

    # Build the complete item as a plain dict first...
    item = {
        "name": response.css('h1.race-name::text').get(),
        "source_url": response.request.url,
        "sport_discipline": response.css('td.spec+td').css('strong::text').get(),
        "description": response.css('span.regnotes span::text').get(),
        "hero_image": response.css('p.associated-race-pic img::attr(src)').get(),
        "start_date": parse_webscorer_date(response.css('p.race-date::text').get()),
        "location": {
            "link": f"https://www.google.com/maps/search/?api=1&query={latlong_match.group(1)},{latlong_match.group(2)}",
            "description": response.css('td.spec:contains("Location:")+td strong::text').get()
        }
    }

    # ...then hand it to every category page instead of yielding it here
    for href in response.css('table.category-table .category-name').css('a::attr(href)').extract():
        url = response.urljoin(href)

        yield scrapy.Request(url, meta={'meta_item': item}, callback=self.parse_category)

def parse_category(self, response, **kwargs):
    # pick the item back up, add the category data, and only now yield it
    item = response.meta['meta_item']
    item['winner'] = response.css('table.results-table .r-racername span::text').get()

    yield WebscorerEvent(item)

This way, you only yield the item at the very end, once it has all the data you need.
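As a side note: on Scrapy 1.7+ you can pass the item to the callback via cb_kwargs instead of meta, which keeps your data separate from the keys Scrapy itself stores in meta. A minimal sketch of the same flow under that assumption (only the two relevant callbacks shown):

def parse(self, response, **kwargs):
    # build the full item dict exactly as above, then hand it to each category page
    item = {"name": response.css('h1.race-name::text').get()}  # ...plus the other fields

    for href in response.css('table.category-table .category-name a::attr(href)').getall():
        # response.follow() resolves relative hrefs, so no urljoin() is needed
        yield response.follow(href, callback=self.parse_category, cb_kwargs={'item': item})

def parse_category(self, response, item, **kwargs):
    # cb_kwargs entries arrive as named arguments
    item['winner'] = response.css('table.results-table .r-racername span::text').get()
    yield WebscorerEvent(item)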

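For the CSV part of the question, no extra spider code is needed: Scrapy's built-in feed exports serialize whatever the spider yields. For example, from the command line:

scrapy crawl webscorer -o events.csv

or declaratively in settings.py (Scrapy 2.1+):

FEEDS = {
    'events.csv': {'format': 'csv'},
}

One caveat: the CSV exporter does not flatten nested fields, so a dict value like location is written as its string representation. Exporting to JSON instead (-o events.json) preserves the nested structure, which matches the idea of "saving the other pages' info as json" from the question.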