I'm working on data collection and machine learning, and I'm new to both Python and scraping. I'm trying to scrape this website. From what I can see in the network monitor, several scripts execute between the login and the next page, and that is how the site gets its table data. I can log in successfully and then fetch the next page through the session, but what I'm missing is the data produced by those in-between scripts. I need the data in the satcat table, with pagination. Here is my code:
import time

from requests_html import HTMLSession

url = 'https://www.space-track.org/'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0'}

login_data = {
    'identity': '',
    'password': '',
    'btnLogin': 'LOGIN',
}

session = HTMLSession()

# GET the login page first so the server sets the CSRF cookie on the session.
preLogin = session.get(url + 'auth/login', headers=headers)
time.sleep(3)

# The login form expects the token field to match the CSRF cookie value.
csrf = session.cookies.get('spacetrack_csrf_cookie')
login_data['spacetrack_csrf_token'] = csrf

login = session.post(url + 'auth/login', data=login_data,
                     headers=headers, allow_redirects=True)
time.sleep(1)
print('login landed on:', login.url)

# Fetch the post-login page and render it so the in-page scripts execute.
time.sleep(3)
postLogin = session.get(url)
postLogin.html.render(sleep=5, keep_page=True)
As you can see, I used the requests_html library to render the HTML, but I still fail to get the data. This is the url executed inside the js that fetches my data.
Can anyone help me scrape that data, or the javascript?
Thanks :)
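For reference, once render() has executed the page's scripts, the populated table can be parsed out of postLogin.html.html with BeautifulSoup. A minimal sketch; the sample markup and column names below are invented for illustration and the real table may differ:

```python
from bs4 import BeautifulSoup

def parse_table(html):
    # Collect every non-empty <tr> as a list of its cell texts.
    soup = BeautifulSoup(html, 'html.parser')
    rows = []
    for tr in soup.find_all('tr'):
        cells = [c.get_text(strip=True) for c in tr.find_all(['td', 'th'])]
        if cells:
            rows.append(cells)
    return rows

# Invented sample standing in for the rendered satcat table:
sample = """
<table>
  <tr><th>NORAD_CAT_ID</th><th>OBJECT_NAME</th></tr>
  <tr><td>25544</td><td>ISS (ZARYA)</td></tr>
</table>
"""
print(parse_table(sample))
```

In the code above the real call would be parse_table(postLogin.html.html), made after render() returns.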
You can go with selenium. It has a function browser.execute_script(). This will help you execute the scripts. Hope this helps :)
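A minimal sketch of that approach. It assumes geckodriver is installed and on PATH; the form field names are taken from the login payload in the question, while the table selector in the script is an assumption:

```python
def satcat_rows_script():
    # JS passed to browser.execute_script(); collects the text of every
    # row of the first table on the page (the selector is an assumption).
    return ("return Array.from(document.querySelectorAll('table tr'))"
            ".map(tr => tr.innerText);")

def scrape_with_selenium(identity, password):
    # Imported inside the function so the sketch can be read (and the
    # script builder tested) without selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    browser = webdriver.Firefox()  # needs geckodriver on PATH
    try:
        browser.get('https://www.space-track.org/auth/login')
        browser.find_element(By.NAME, 'identity').send_keys(identity)
        browser.find_element(By.NAME, 'password').send_keys(password)
        browser.find_element(By.NAME, 'btnLogin').click()
        # The in-between scripts run in the real browser, so the table
        # is already populated when the script reads it.
        return browser.execute_script(satcat_rows_script())
    finally:
        browser.quit()
```

Because the page executes in a real browser, there is no need to replay the intermediate requests by hand; the trade-off is the extra dependency on a driver binary.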