Scrapy Python: recursively following href references

Posted 2024-10-01 02:27:25


I am trying to find and print all the hrefs starting from the start page:

import scrapy

class Ejercicio2(scrapy.Spider):
    name = "Ejercicio2"
    Ejercicio2 = {}
    category = None
    lista_urls = []  # define a list to hold the urls

    def __init__(self, *args, **kwargs):
        super(Ejercicio2, self).__init__(*args, **kwargs)
        self.start_urls = ['http://www.masterdatascience.es/']
        self.allowed_domains = ['www.masterdatascience.es/']
        url = ['http://www.masterdatascience.es/']

    def parse(self, response):
        print(response)
        # hay_enlace = response.css('a::attr(href)')
        # if hay_enlace:
        links = response.xpath("a/@href")
        for el in links:
            url = response.css('a::attr(href)').extract()
            print(url)
            next_url = response.urljoin(el.xpath("a/@href").extract_first())
            print(next_url)
            print('pasa por aqui')  # debug marker: "passes through here"
            yield scrapy.Request(url, self.parse())
            # yield scrapy.Request(next_url, callback=self.parse)
            print(next_url)

But it does not work as expected: instead of following every href it encounters, it only follows the first one.


2 Answers

The following code will print out all the hrefs on the page:

import scrapy

class stackoverflow20170129Spider(scrapy.Spider):
    name = "stackoverflow20170129"
    allowed_domains = ["masterdatascience.es"]
    start_urls = ["http://www.masterdatascience.es/"]

    def parse(self, response):
        for href in response.xpath('//a/@href'):
            url = response.urljoin(href.extract())
            print(url)
            # yield scrapy.Request(url, callback=self.parse_dir_contents)

One more thing: it is worth dropping the www. from allowed_domains. If you crawl deeper into the site and start reaching pages such as anewpage.masterdatascience.es, keeping the www. would block those pages.
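
To actually follow the links rather than only printing them, the commented-out yield can be re-enabled with parse itself as the callback. Here is a minimal sketch under that assumption (the spider name is arbitrary); Scrapy's built-in duplicate filter skips URLs that are already scheduled, so the recursion does not loop forever:

import scrapy

class RecursiveHrefSpider(scrapy.Spider):
    # Hypothetical name; the domains and start URL mirror the answer above.
    name = "recursive_href"
    allowed_domains = ["masterdatascience.es"]  # no www., so subdomains stay allowed
    start_urls = ["http://www.masterdatascience.es/"]

    def parse(self, response):
        for href in response.xpath('//a/@href'):
            url = response.urljoin(href.extract())
            print(url)
            # Pass the method itself as the callback; do not call it.
            yield scrapy.Request(url, callback=self.parse)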

You can try changing the XPath to //a/@href.
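
Applied to the parse method from the question, that change would look roughly like the sketch below. Note it also passes self.parse as a callback rather than calling it, a second fix the original code needs:

    def parse(self, response):
        # '//a/@href' matches every <a> href in the document; the original
        # 'a/@href' is evaluated relative to the root node and matches
        # almost nothing, which is why at most one link was followed.
        for href in response.xpath('//a/@href').extract():
            next_url = response.urljoin(href)
            print(next_url)
            yield scrapy.Request(next_url, callback=self.parse)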
