Python Scrapy Selenium: requesting the next page


I'm trying to build a web crawler that goes to a link and waits for the JavaScript content to load. Before moving on to the next page, it should collect all the links to the listed articles. The problem is that it always scrapes the first URL ("https://techcrunch.com/search/heartbleed") instead of following the one I pass to it. Why doesn't the code below scrape from the new URL that I pass in the request? I'm out of ideas...

import scrapy
from scrapy.http.request import Request
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
import time


class TechcrunchSpider(scrapy.Spider):
    name = "techcrunch_spider_performance"
    allowed_domains = ['techcrunch.com']
    start_urls = ['https://techcrunch.com/search/heartbleed']



    def __init__(self):
        self.driver = webdriver.PhantomJS()
        self.driver.set_window_size(1120, 550)
        #self.driver = webdriver.Chrome(r"C:\Users\Daniel\Desktop\Sonstiges\chromedriver.exe")
        self.driver.wait = WebDriverWait(self.driver, 5)    # waits up to 5 seconds

    def parse(self, response):
        start = time.time()     # start timing
        self.driver.get(response.url)

        # waits up to 5 seconds (defined above) for the condition; after that a TimeoutException is thrown
        try:    

            self.driver.wait.until(EC.presence_of_element_located(
                (By.CLASS_NAME, "block-content")))
            print("Found : block-content")

        except TimeoutException:
            self.driver.close()
            print(" block-content NOT FOUND IN TECHCRUNCH !!!")


        # Crawl the JavaScript-generated content with Selenium

        ahref = self.driver.find_elements(By.XPATH, '//h2[@class="post-title st-result-title"]/a')

        hreflist = []
        # Collect all links to the individual articles
        for elem in ahref:
            hreflist.append(elem.get_attribute("href"))


        for elem in hreflist:
            print(elem)
            yield scrapy.Request(url=elem, callback=self.parse_content)


        # Fetch the link to the next page
        try:
            next = self.driver.find_element(By.XPATH, "//a[@class='page-link next']")
            nextpage = next.get_attribute("href")
            print("HERE COMES NEXT :")
            print(nextpage)
            #newresponse = response.replace(url=nextpage)
            yield scrapy.Request(url=nextpage, dont_filter=False)

        except TimeoutException:
            self.driver.close()
            print(" NEXT NOT FOUND (OR EOF), I'M CLOSING MYSELF !!!")



        end = time.time()
        print("Time elapsed : ")
        finaltime = end-start
        print(finaltime)


    def parse_content(self, response):    
        title = self.driver.find_element(By.XPATH,"//h1")
        titletext = title.get_attribute("innerHTML")
        print(" h1 : ")
        print(title)
        print(titletext)

1 Answer

The first problem is:

for elem in hreflist:
    print(elem)
    yield scrapy.Request(url=elem, callback=self.parse_content)

This code yields Scrapy requests for every link it finds. But:

def parse_content(self, response):
    title = self.driver.find_element(By.XPATH, "//h1")
    titletext = title.get_attribute("innerHTML")

The parse_content function tries to parse the page with the driver. You could instead parse using the response object Scrapy hands the callback, or first load the page with the webdriver (self.driver.get(...)).
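
A minimal sketch of the first option, parsing the response Scrapy already downloaded for this callback instead of touching the shared driver (assuming, as in the question, that the article title sits in an h1):

def parse_content(self, response):
    # Use the response Scrapy downloaded for this request; the shared
    # driver may meanwhile be pointing at a completely different URL.
    titletext = response.xpath("//h1/text()").get()
    print(" h1 : ")
    print(titletext)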

Also, Scrapy is asynchronous and Selenium is not. Scrapy does not block after a yielded request; it is built on top of Twisted, keeps executing the code, and can launch several concurrent requests. A single Selenium driver instance cannot keep up with several concurrent requests coming from Scrapy. (One way around this is to replace each yield with Selenium code, as sketched below, even if that means losing execution time.)
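
A rough sketch of that workaround, visiting each article sequentially with the one driver instead of yielding concurrent requests (the //h1 XPath is carried over from the question's code):

for href in hreflist:
    # One page at a time: the single driver instance is never asked
    # to serve two in-flight requests at once.
    self.driver.get(href)
    title = self.driver.find_element(By.XPATH, "//h1")
    print(title.get_attribute("innerHTML"))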
