BeautifulSoup not picking up the meta tag

Published 2024-09-28 23:14:47


I have a simple script that fetches an HTML page and tries to print the content of the keywords meta tag. Somehow it is not picking up the content of that tag, even though the HTML contains it. Any help is appreciated.

    import urllib2
    from bs4 import BeautifulSoup

    url = "https://www.mediapost.com/publications/article/316086/google-facebook-others-pitch-in-app-ads-brand-s.html"
    req = urllib2.Request(url=url)
    f = urllib2.urlopen(req)
    mycontent = f.read()
    soup = BeautifulSoup(mycontent, 'html.parser')
    keywords = soup.find("meta", property="keywords")
    print keywords

Tags: https, tag, script, url, content, html, www, keywords
3 Answers

I'd strongly recommend using requests together with a CSS selector.

Code:

from bs4 import BeautifulSoup
import requests

url = 'https://www.mediapost.com/publications/article/316086/google-facebook-others-pitch-in-app-ads-brand-s.html'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
keywords = soup.select_one('meta[name="keywords"]')['content']

>>> keywords
'Many more major brands are pumping big ad dollars into mobile games, pushing Google, Facebook and others into the in-app gaming ad space. Some believe this is in response to brands searching for a secure, safe place to run video ads and engage with consumers. 03/16/2018'
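One caveat worth keeping in mind with this approach: select_one returns None when nothing matches, so indexing ['content'] directly raises a TypeError on pages that lack the tag. A minimal, self-contained sketch (parsing an inline HTML snippet rather than the live page, so it does not depend on the network):

```python
from bs4 import BeautifulSoup

# Illustrative snippet, not the real article page
html = '<html><head><meta name="keywords" content="ads, mobile games"></head></html>'
soup = BeautifulSoup(html, 'html.parser')

# Guard against a missing tag before indexing ['content']
tag = soup.select_one('meta[name="keywords"]')
keywords = tag['content'] if tag is not None else None
print(keywords)  # ads, mobile games

missing = soup.select_one('meta[name="description"]')
print(missing)  # None -- missing['content'] here would raise TypeError
```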

If you inspect the page carefully, the meta tag you are looking for has the attribute name, not property, so change your code to

keywords = soup.find("meta", attrs={'name':'keywords'})

Then, to display the content, write

print keywords['content']

Output:

Many more major brands are pumping big ad dollars into mobile games, pushing Google, Facebook and others into the in-app gaming ad space. Some believe this is in response to brands searching for a secure, safe place to run video ads and engage with consumers. 03/16/2018
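The distinction this answer points at can be shown with a small self-contained example (an illustrative snippet, not the live page): passing property="keywords" to find() matches the HTML attribute literally named property (as used by Open Graph tags), while a standard keywords tag uses name and must be matched via attrs.

```python
from bs4 import BeautifulSoup

# Illustrative head section with both attribute styles
html = ('<head>'
        '<meta name="keywords" content="in-app ads">'
        '<meta property="og:title" content="Article title">'
        '</head>')
soup = BeautifulSoup(html, 'html.parser')

# No tag has property="keywords", so this returns None -- the asker's bug
print(soup.find("meta", property="keywords"))

# The keywords tag is found by its name attribute
print(soup.find("meta", attrs={"name": "keywords"})["content"])  # in-app ads

# property= is the right filter for Open Graph tags
print(soup.find("meta", property="og:title")["content"])  # Article title
```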

Use 'lxml' instead of 'html.parser', and use soup.find_all:

soup = BeautifulSoup(doc, 'lxml')  # doc is the HTML fetched earlier
keywords = soup.find_all('meta', attrs={"name": "keywords"})
for x in keywords:
    print(x['content'])

Output

Many more major brands are pumping big ad dollars into mobile games, pushing Google, Facebook and others into the in-app gaming ad space. Some believe this is in response to brands searching for a secure, safe place to run video ads and engage with consumers. 03/16/2018
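A self-contained sketch of this find_all approach, using a hypothetical inline document instead of the live page ('html.parser' is substituted here only so the example runs without the lxml dependency): find_all returns a list, so it handles pages with zero or several matching meta tags gracefully.

```python
from bs4 import BeautifulSoup

# Hypothetical document with two keywords meta tags
doc = ('<head>'
       '<meta name="keywords" content="google, facebook">'
       '<meta name="keywords" content="mobile games">'
       '</head>')
soup = BeautifulSoup(doc, 'html.parser')

# find_all yields every match; an empty list if the tag is absent
contents = [x['content'] for x in soup.find_all('meta', attrs={"name": "keywords"})]
print(contents)  # ['google, facebook', 'mobile games']
```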
