How to save data from multiple pages into a single csv with webdriver

Published 2024-09-20 22:54:24


I'm trying to save data from Google Scholar using selenium (webdriver). So far I can print the data I want, but when I save it to a csv it only saves the first page.

from selenium import webdriver
from selenium.webdriver.common.by import By
# Import statements for explicit wait
from selenium.webdriver.support.ui import WebDriverWait as W
from selenium.webdriver.support import expected_conditions as EC
import time
import csv
from csv import writer

exec_path = r"C:\Users\gvste\Desktop\proyecto\chromedriver.exe"
URL = r"https://scholar.google.com/citations?view_op=view_org&hl=en&authuser=2&org=8337597745079551909"

button_locators = ['//*[@id="gsc_authors_bottom_pag"]/div/button[2]', '//*[@id="gsc_authors_bottom_pag"]/div/button[2]','//*[@id="gsc_authors_bottom_pag"]/div/button[2]']
wait_time = 3
driver = webdriver.Chrome(executable_path=exec_path)
driver.get(URL)
wait = W(driver, wait_time)
#driver.maximize_window()
for j in range(len(button_locators)):
    button_link = wait.until(EC.element_to_be_clickable((By.XPATH, button_locators[j])))

address = driver.find_elements_by_class_name("gsc_1usr")

    #for post in address:
        #print(post.text)
time.sleep(4)

with open('post.csv','a') as s:
    for i in range(len(address)):

        addresst = address
            #if addresst == 'NONE':
            #   addresst = str(address)
            #else:
        addresst = address[i].text.replace('\n',',')
        s.write(addresst+ '\n')

button_link.click()
time.sleep(4)

    #driver.quit()

1 answer

Posted by a forum user on 2024-09-20 22:54:24

You only get the first page of data because your program stops after clicking the "next page" button once. You need to put all of that inside a for loop.

Note that I wrote range(7) because I know there are 7 pages to open; in practice you shouldn't do that. Imagine if there were thousands of pages. You should add some logic that checks whether the "next page" button still exists (or some similar condition) and loop until it doesn't.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as W
from selenium.webdriver.support import expected_conditions as EC
import time

exec_path = r"C:\Users\gvste\Desktop\proyecto\chromedriver.exe"
URL = r"https://scholar.google.com/citations?view_op=view_org&hl=en&authuser=2&org=8337597745079551909"

button_locators = "/html/body/div/div[8]/div[2]/div/div[12]/div/button[2]"
wait_time = 3
driver = webdriver.Chrome(executable_path=exec_path)
driver.get(URL)
wait = W(driver, wait_time)

time.sleep(4)

# 7 pages. In reality, we should get this number programmatically 
for page in range(7):

    # read data from new page
    address = driver.find_elements_by_class_name("gsc_1usr")

    # write to file
    with open('post.csv','a') as s:
        for i in range(len(address)):
            addresst = address[i].text.replace('\n',',')
            s.write(addresst+ '\n')

    # find and click next page button
    button_link = wait.until(EC.element_to_be_clickable((By.XPATH, button_locators)))
    button_link.click()
    time.sleep(4)
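The hard-coded range(7) can be replaced with an open-ended loop that keeps going until there is no next page. Here is a browser-free sketch of that pattern; fetch_rows and click_next are hypothetical callables standing in for the Selenium calls (the find_elements_by_class_name call and the button click):

```python
from typing import Callable, Iterator, List

def scrape_all_pages(fetch_rows: Callable[[], List[str]],
                     click_next: Callable[[], bool]) -> Iterator[str]:
    """Yield rows from every page; stop when click_next reports no next page."""
    while True:
        yield from fetch_rows()
        if not click_next():   # no (enabled) next button -> we are done
            break

# Simulated pagination: three "pages" of rows stand in for the scraped site.
pages = [["a1", "a2"], ["b1"], ["c1", "c2"]]
state = {"i": 0}

def fetch_rows() -> List[str]:
    return pages[state["i"]]

def click_next() -> bool:
    if state["i"] + 1 < len(pages):
        state["i"] += 1
        return True
    return False

rows = list(scrape_all_pages(fetch_rows, click_next))
print(rows)  # every row from every page, in order
```

With Selenium, click_next would typically check something like button_link.is_enabled() on the next-page button, or catch the TimeoutException raised by wait.until when the button is gone, instead of counting pages by hand.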

Also, in the future you should change all of these time.sleep calls to wait.until. Sometimes the page loads faster and the program can finish its job sooner; worse, your network may lag, and a fixed sleep will make your script error out.
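One more cleanup: the original script imports csv.writer but never uses it, and the manual text.replace('\n', ',') will corrupt any field that itself contains a comma. Letting csv.writer handle the quoting is safer. A minimal sketch, with made-up scraped text in place of the Selenium results (the file name post.csv follows the question):

```python
import csv

# Each scraped card's text as Selenium returns it: fields separated by newlines.
scraped = [
    "Jane Doe\nExample University\nVerified email at example.edu",
    "John Smith\nInstitute of Testing, Dept. A\nCited by 1,234",  # comma inside a field
]

with open('post.csv', 'a', newline='', encoding='utf-8') as s:
    w = csv.writer(s)
    for text in scraped:
        w.writerow(text.split('\n'))  # one column per line; embedded commas get quoted
```

Reading the file back with csv.reader recovers the original fields intact, which the plain s.write approach cannot guarantee.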
