I am trying to scrape all the .pdf links, each PDF's title, and the time it was received on this webpage. While trying to find the href links on the page, I tried the following code:
from bs4 import BeautifulSoup
import requests
source = requests.get('https://www.bseindia.com/corporates/ann.html?scrip=532538').text
soup = BeautifulSoup(source, 'lxml')
for link in soup.find_all('a'):
    if link.has_attr('href'):
        print(link.attrs['href'])
I get the following output:
{{CorpannData.Table[0].NSURL}}
{{CorpannData.Table[0].NSURL}}
#
/xml-data/corpfiling/AttachLive/{{cann.ATTACHMENTNAME}}
/xml-data/corpfiling/AttachHis/{{cann.ATTACHMENTNAME}}
/xml-data/corpfiling/AttachLive/{{CorpannDataByNewsId[0].ATTACHMENTNAME}}
/xml-data/corpfiling/AttachHis/{{CorpannDataByNewsId[0].ATTACHMENTNAME}}
The output I want is all the PDF links, like these:
https://www.bseindia.com/xml-data/corpfiling/AttachHis/e525dbbb-5ec1-4327-a5ea-9662c66f32a5.pdf
https://www.bseindia.com/xml-data/corpfiling/AttachHis/d2355247-3287-4c41-be61-2a5655276e79.pdf
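The `{{CorpannData.Table[0].NSURL}}` placeholders are AngularJS template expressions: the announcement data is filled in by JavaScript after the page loads, so `requests` only ever sees the empty template, never the real links. One option is to obtain the rendered HTML some other way (e.g. with Selenium, or by calling the JSON endpoint the page itself queries) and then filter for `.pdf` hrefs. A minimal sketch of that filtering step, assuming `html` already holds rendered markup (the sample snippet below is hypothetical, not the live page):

```python
from urllib.parse import urljoin
from bs4 import BeautifulSoup

BASE_URL = 'https://www.bseindia.com/'

def extract_pdf_links(html):
    """Return absolute URLs for every <a href> that ends in .pdf."""
    soup = BeautifulSoup(html, 'html.parser')
    return [
        urljoin(BASE_URL, a['href'])          # make relative paths absolute
        for a in soup.find_all('a', href=True)
        if a['href'].lower().endswith('.pdf')  # keep only PDF links
    ]

# Hypothetical rendered-HTML snippet for demonstration:
sample = ('<a href="/xml-data/corpfiling/AttachHis/abc.pdf">PDF</a>'
          '<a href="#">skip me</a>')
print(extract_pdf_links(sample))
# → ['https://www.bseindia.com/xml-data/corpfiling/AttachHis/abc.pdf']
```

`urljoin` handles the site-relative paths (`/xml-data/...`) seen in the output above, so the returned links match the absolute form you want.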
(Optional) My desired output for the whole program is:
Title: Compliances-Reg. 39 (3) - Details of Loss of Certificate / Duplicate Certificate
Exchange received time: 19-12-2019 13:49:14
PDF link: https://www.bseindia.com/xml-data/corpfiling/AttachHis/e525dbbb-5ec1-4327-a5ea-9662c66f32a5.pdf
...
I would also like the program to check the webpage for new updates every second, and to write the output to a CSV file.