Need to load a table via AJAX using Python (Selenium)

Posted 2024-09-26 17:43:38


I have a page with a table (table id="ctl00_ContentPlaceHolder_ctl00_ctl00_GV" class="GridListings") that I need to scrape. I usually use BeautifulSoup & urllib for this, but in this case the problem is that the table takes some time to load, so it isn't captured when I try to fetch it with BS. I cannot use PyQt4, dryscrape or windmill because of some installation issues, so the only possible way is to use Selenium/PhantomJS. I tried the following, still with no success:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.PhantomJS()
driver.get(url)
wait = WebDriverWait(driver, 10)
# the locator must be passed to the expected condition as a single (By, selector) tuple
table = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'table#ctl00_ContentPlaceHolder_ctl00_ctl00_GV')))

The above code does not give me the content of the table. How can I achieve this?
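
For reference, once the explicit wait above succeeds, the located element can be read directly; a minimal sketch of the continuation, assuming the page renders under PhantomJS (which is deprecated in recent Selenium releases):

# continuing the snippet above after the WebDriverWait call
print(table.text)                         # rendered text of the table
html = table.get_attribute("outerHTML")   # or the full markup, e.g. to hand to BeautifulSoup
driver.quit()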


Tags: the, to, from, import, is, use, driver, selenium
2 Answers

If you want to scrape something, it is best to first install a web debugger (for example Firebug for Mozilla Firefox) to observe how the website you want to scrape actually works.

Next, you need to replicate the requests the website makes to its backend.

As you said, the content you want to scrape is loaded asynchronously (only once the document is ready).

Assuming the debugger is running and you have refreshed the page, you will see the following request on the Network tab:

POST https://seahawks.strmarketplace.com/Charter-Seat-Licenses/Charter-Seat-Licenses.aspx

The overall process to reach your goal is:

  • 1/ use the requests python module
  • 2/ open a requests session to the site's index page (with cookie handling)
  • 3/ scrape all the inputs of the specific POST form
  • 4/ build a POST payload containing all the input names and values scraped in the previous step, plus some specific fixed parameters
  • 5/ send the POST request (which returns the desired data)
  • 6/ finally use the BS4 module (as usual) on the returned html to scrape the data

See the working code below:

#!/usr/bin/env python
# -*- coding: UTF-8 -*-

from bs4 import BeautifulSoup
import requests

base_url = "https://seahawks.strmarketplace.com/Charter-Seat-Licenses/Charter-Seat-Licenses.aspx"

# create a requests session (cookies are kept between requests)
s = requests.Session()

# get the index page
r = s.get(base_url)

# soup the page
bs = BeautifulSoup(r.text, "html.parser")

# extract the FORM html
form_soup = bs.find('form', {'name': 'aspnetForm'})

# extract all inputs
input_div = form_soup.findAll("input")

# build the data parameters for the POST request
# we add some required <fixed> data parameters for the post
data = {
    '__EVENTARGUMENT': 'LISTINGS;0',
    '__EVENTTARGET': 'ctl00$ContentPlaceHolder$ctl00$ctl00$RadAjaxPanel_GV',
    '__EVENTVALIDATION': '/wEWGwKis6fzCQLDnJnSDwLq4+CbDwK9jryHBQLrmcucCgL56enHAwLRrPHhCgKDk6P+CwL1/aWtDQLm0q+gCALRvI2QDAKch7HjBAKWqJHWBAKil5XsDQK58IbPAwLO3dKwCwL6uJOtBgLYnd3qBgKyp7zmBAKQyTBQK9qYAXAoieq54JAuG/rDkC1djKyQMC1qnUtgoC0OjaygUCv4b7sAhfkEODRvsa3noPfz2kMsxhAwlX3Q=='
}
# we add the <dynamic> data parameters scraped from the form
for input_d in input_div:
    try:
        data[input_d['name']] = input_d['value']
    except KeyError:
        pass  # skip input fields without a name or value

# post the request
r2 = s.post(base_url, data=data)

# write the result (bytes mode so this works on both Python 2 and 3)
with open("post_result.html", "wb") as f:
    f.write(r2.text.encode('utf8'))

Now look at the contents of "post_result.html" and you will find the data.
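
The script above stops after saving the raw HTML; for step 6, a minimal sketch of parsing the saved file with BS4, assuming the table still carries the GridListings class given in the question (adjust the selector if the markup differs):

from bs4 import BeautifulSoup

with open("post_result.html", "rb") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

# the GridListings class comes from the question; it may sit on a div instead of the table
listings = soup.find("table", {"class": "GridListings"})
if listings is not None:
    for tr in listings.find_all("tr"):
        print([td.get_text(strip=True) for td in tr.find_all(["th", "td"])])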

Regards

You can get the data with requests and bs4; almost all asp sites require you to supply some post parameters such as __EVENTTARGET, __EVENTVALIDATION etc.:

from bs4 import BeautifulSoup
import requests

data = {"__EVENTTARGET": "ctl00$ContentPlaceHolder$ctl00$ctl00$RadAjaxPanel_GV",
    "__EVENTARGUMENT": "LISTINGS;0",
    "ctl00$ContentPlaceHolder$ctl00$ctl00$ctl00$hdnProductID": "139",
    "ctl00$ContentPlaceHolder$ctl00$ctl00$hdnProductID": "139",
    "ctl00$ContentPlaceHolder$ctl00$ctl00$drpSortField": "Listing Number",
    "ctl00$ContentPlaceHolder$ctl00$ctl00$drpSortDirection": "A-Z, Low-High",
    "__ASYNCPOST": "true"}

For the actual post we need to add a few more values to our post data:

post = "https://seahawks.strmarketplace.com/Charter-Seat-Licenses/Charter-Seat-Licenses.aspx"
with requests.Session() as s:
    s.headers.update({"User-Agent":"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"})
    soup = BeautifulSoup(s.get(post).content)

    data["__VIEWSTATEGENERATOR"] = soup.select_one("#__VIEWSTATEGENERATOR")["value"]
    data["__EVENTVALIDATION"] = soup.select_one("#__EVENTVALIDATION")["value"]
    data["__VIEWSTATE"] = soup.select_one("#__VIEWSTATE")["value"]

    r = s.post(post, data=data)
    soup2 = BeautifulSoup(r.content)
    table = soup2.select_one("div.GridListings")
    print(table)

When you run the code you will see the table printed.
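
If you would rather keep the listings as structured data than print the raw markup, a minimal Python 3 sketch of a follow-up step, assuming `table` above is not None and contains ordinary <tr>/<td> rows (listings.csv is just an illustrative file name):

import csv

# hypothetical post-processing: dump the scraped rows to a CSV file
with open("listings.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for tr in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all(["th", "td"])]
        if cells:
            writer.writerow(cells)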
