How can I force-execute or render a page's scripts from Python, as a browser would?

Posted 2024-09-30 04:31:14


I am working on data collection for machine learning. I am new to both Python and scraping, and I am trying to scrape this site:

https://www.space-track.org/

From watching the browser, several scripts execute between the login page and the next page, and it is those scripts that fetch the table data. I can log in successfully and then fetch the next page through the session, but I am missing the data loaded by those intermediate scripts. I need the data in the

satcat

table, which is paginated. Here is my code:

    import time
    from requests_html import HTMLSession

    url = 'https://www.space-track.org/'
    headers = {
        'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:66.0) '
                      'Gecko/20100101 Firefox/66.0'
    }

    # One HTMLSession holds the cookies and can render JavaScript;
    # a separate requests.Session is not needed.
    session = HTMLSession()

    login_data = {
        "identity": "",   # your username
        "password": "",   # your password
        "btnLogin": "LOGIN",
    }

    # GET the login page first so the server sets the CSRF cookie...
    preLogin = session.get(url + 'auth/login', headers=headers)
    login_data['spacetrack_csrf_token'] = session.cookies.get('spacetrack_csrf_cookie')

    # ...then POST the credentials together with the CSRF token.
    login = session.post(url + 'auth/login', data=login_data,
                         headers=headers, allow_redirects=True)
    print('login landed on:', login.url)

    time.sleep(3)
    postLogin = session.get(url)
    postLogin.html.render(sleep=5, keep_page=True)
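When rendering does succeed, the table rows can be pulled out of the rendered HTML (`postLogin.html.html`) with BeautifulSoup. A minimal sketch; the plain `table tr` selector is an assumption, so narrow it to the satcat table's real id/class once you see the rendered markup:

```python
from bs4 import BeautifulSoup

def extract_rows(rendered_html):
    """Return each table row as a list of cell strings."""
    soup = BeautifulSoup(rendered_html, 'html.parser')
    rows = []
    # 'table tr' is a placeholder selector -- adjust to the real table.
    for tr in soup.select('table tr'):
        cells = [td.get_text(strip=True) for td in tr.find_all(['td', 'th'])]
        if cells:
            rows.append(cells)
    return rows

# usage after render():
# rows = extract_rows(postLogin.html.html)
```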

As you can see, I used requests-html to render the page, but I still fail to get the data. This is the URL that the JavaScript calls internally to fetch the data:

https://www.space-track.org/master/loadSatCatData
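Since the table is filled in by an XHR to that endpoint, one common approach is to skip rendering entirely and call the endpoint directly with the already-logged-in session. A sketch under assumptions: the `X-Requested-With` header mimics the browser's AJAX call, and any payload the endpoint expects (page number, sort column, ...) is unknown here, so copy the real request from the browser's network tab:

```python
BASE = 'https://www.space-track.org/'

def fetch_satcat_json(session):
    """POST to the XHR endpoint the page itself calls.

    `session` is an already-authenticated requests/requests_html
    session carrying the login cookies. The header below marks the
    request as AJAX; the endpoint's required payload is a guess --
    replicate the browser's actual request parameters.
    """
    resp = session.post(BASE + 'master/loadSatCatData',
                        headers={'X-Requested-With': 'XMLHttpRequest'})
    resp.raise_for_status()
    return resp.json()
```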

Can anyone help me scrape that data, or execute that JavaScript?

Thanks :)
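As an aside, Space-Track also exposes a documented REST API (see the site's documentation pages), which avoids scraping and JS rendering altogether: log in via `ajaxauth/login` and query the `satcat` class. A sketch; the `limit`/`orderby` values are just examples, and you must supply your own credentials:

```python
import requests

BASE = 'https://www.space-track.org'

def satcat_query_url(limit=100, fmt='json'):
    """Build a satcat query URL (limit/format values are examples)."""
    return (f'{BASE}/basicspacedata/query/class/satcat'
            f'/orderby/NORAD_CAT_ID/limit/{limit}/format/{fmt}')

def download_satcat(identity, password, limit=100):
    """Log in through the API endpoint, then fetch satcat rows as JSON."""
    with requests.Session() as s:
        r = s.post(BASE + '/ajaxauth/login',
                   data={'identity': identity, 'password': password})
        r.raise_for_status()
        return s.get(satcat_query_url(limit)).json()
```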


