Collecting and printing event titles from a webpage

Posted 2024-05-19 14:31:46


I'm trying to get my program to collect and print event titles from a website. The problem with my code is that it prints more than just the titles: it also outputs the hyperlinks. How do I get rid of the hyperlinks?

from urllib.request import urlopen
from bs4 import BeautifulSoup

url_toscrape = "https://www.ntu.edu.sg/events/Pages/default.aspx"
response = urlopen(url_toscrape)
info_type = response.info()
responseData = response.read()
soup = BeautifulSoup(responseData, 'lxml')

events_absAll = soup.find_all("div",{"class": "ntu_event_summary_title_first"})
for events in events_absAll:
    if len(events.text) > 0:
        print(events.text.strip())
print(events_absAll)  # prints the raw tags themselves, hyperlinks and all

Also, how do I make the for loop keep going so that I get the complete list of events, like the one below?

- 7th ASEF Rectors' Conference and Students' Forum (ARC7)
- Be a Youth Corps Leader
- NIE Visiting Artist Programme January 2019
- Exercise Classes for You: Healthy Campus@NTU
- [eLearning Course] Information & Media Literacy (From January 2019)

Thanks in advance.


3 Answers

You can use an attribute = value CSS selector with the ^ (starts-with) operator to match the beginning of each title's class attribute:

import requests
from bs4 import BeautifulSoup

url = 'https://www.ntu.edu.sg/events/Pages/default.aspx'
headers = {'User-Agent': 'Mozilla/5.0'}  # send a browser-like User-Agent with the request
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'lxml')
titles = [item.text.replace('\u200b','') for item in soup.select("[class^='ntu_event_summary_title']")]
print(titles)
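
As a side note, the replace('\u200b','') call strips the zero-width spaces the page embeds in its titles, and the starts-with selector matches both ntu_event_summary_title_first and ntu_event_summary_title, so one pass covers every row. The same idea should carry over to the other summary fields; here is a small sketch reusing the ntu_event_summary_date class from your own code (an assumption on my part that the page still serves that class):

import requests
from bs4 import BeautifulSoup

url = 'https://www.ntu.edu.sg/events/Pages/default.aspx'
headers = {'User-Agent': 'Mozilla/5.0'}
soup = BeautifulSoup(requests.get(url, headers=headers).content, 'lxml')

# same starts-with selector, pointed at the date blocks instead of the titles
dates = [item.text.replace('\u200b', '').strip() for item in soup.select("[class^='ntu_event_summary_date']")]
print(dates)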

Thank you very much for your help. I now have another question: I'm collecting each event's date, time and venue. They print successfully, but the output isn't reader-friendly. How can I display the date, time and venue separately, like this:

- event
Date:
Time:
Venue:

I tried splitting the string, but that produced a lot of pieces and made the output look even messier. I tried stripping with my regex, but it doesn't seem to do anything. Any suggestions?


from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

url_toscrape = "https://www.ntu.edu.sg/events/Pages/default.aspx"
response = urlopen(url_toscrape)
info_type = response.info()
responseData = response.read()
soup = BeautifulSoup(responseData, 'lxml')

events_absFirst = soup.find_all("div",{"class": "ntu_event_summary_title_first"})

for first in events_absFirst:
    print('-', first.text.strip())

for tr in soup.find_all("div",{"class":"ntu_event_detail"}):
    date_absAll = tr.find_all("div",{"class": "ntu_event_summary_date"})
    events_absAll = tr.find_all("div",{"class": "ntu_event_summary_title"})

    for events in events_absAll:
        events = events.text.strip()
    for date in date_absAll:
        date = date.text.strip('^Time.*')  # strip() removes characters from the ends, not a regex match
    print ('-',events)
    print (' ',date)
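
A note on why the strip appears to do nothing: str.strip('^Time.*') does not treat its argument as a regular expression; it removes any of the individual characters ^ T i m e . * from the ends of the string. To actually pull the fields apart you need the re module. Below is a minimal sketch; the sample string is made up, since I don't know the exact text the ntu_event_summary_date divs produce, so adjust the 'Time:' and 'Venue:' labels to whatever the real markup contains:

import re

def split_event_details(raw):
    # hypothetical helper: assumes the summary text runs the three fields
    # together, e.g. "25 January 2019Time: 9.00am - 5.00pmVenue: LT2A"
    raw = raw.replace('\u200b', '').strip()
    m = re.search(r'^(?P<date>.*?)\s*Time:\s*(?P<time>.*?)\s*Venue:\s*(?P<venue>.*)$', raw, re.S)
    if not m:
        return {'Date': raw, 'Time': '', 'Venue': ''}
    return {'Date': m.group('date'), 'Time': m.group('time'), 'Venue': m.group('venue')}

details = split_event_details("25 January 2019Time: 9.00am - 5.00pmVenue: LT2A")
for label in ('Date', 'Time', 'Venue'):
    print(label + ':', details[label])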

Following up from the comments:

from urllib.request import urlopen
from bs4 import BeautifulSoup

url_toscrape = "https://www.ntu.edu.sg/events/Pages/default.aspx"
response = urlopen(url_toscrape)
info_type = response.info()
responseData = response.read()
soup = BeautifulSoup(responseData, 'lxml')

events_absFirst = soup.find_all("div",{"class": "ntu_event_summary_title_first"})
events_absAll = soup.find_all("div",{"class": "ntu_event_summary_title"})
for first in events_absFirst:
    print(first.text.strip())
for events in events_absAll:
    print(events.text.strip())

Or (better):

Use the ntu_event_detail class and find the a tags inside it:

import requests
from bs4 import BeautifulSoup

page = requests.get("https://www.ntu.edu.sg/events/Pages/default.aspx")
soup = BeautifulSoup(page.content, 'html.parser')

# each event sits in its own ntu_event_detail div; its title is the <a> tag inside
events_absAll = soup.find_all("div",{"class": "ntu_event_detail"})
for events in events_absAll:
    for a in events.find_all('a'):
        print(a.text.strip())

Output:

7th ASEF Rectors' Conference and Students' Forum (ARC7)
​Be a Youth Corps Leader
​NIE Visiting Artist Programme January 2019
​Exercise Classes for You: Healthy Campus@NTU
​[eLearning Course] Information & Media Literacy (From January 2019)
​[Workshop] Introduction to Zotero (Jan to Apr 2019)
​[Workshop] Introduction to Mendeley (Jan to Apr 2019)
​Sembcorp
Marine Green Wave Environmental Care Competition 2019 - Submit by 31 March 2019
​[Consultation] Consultation for EndNote-Mac Users (Jan to Apr 2019)
​The World Asian Business Case Competition, WACC 2019 at Seoul (proposal submission by 01 April 2019)
​Heartware Network
.
.
.

Edit: even better is to build a list, store the results in it, and filter out the empty strings (if any):

data =[]
for events in events_absAll:
    for a in events.find_all('a'):
        data.append(a.text)

filtered = list(filter(None, data))  # filter(None, ...) drops falsy items, i.e. the empty strings
for elem in filtered: print(elem)
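
As a quick illustration of the filtering step with made-up data: filter(None, ...) keeps only truthy elements, so the empty strings disappear while the real titles survive:

data = ['Event A', '', 'Event B', '']
print(list(filter(None, data)))  # ['Event A', 'Event B']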
