Webscraping CrunchBase: access denied even when using a User-Agent header

Posted 2024-06-02 21:49:12


I am trying to webscrape Crunchbase to find the total funding amount for certain companies. Here is a link to an example.

At first I tried using just Beautiful Soup, but I kept getting an error saying:

Access to this page has been denied because we believe you are using automation tools to browse the website.

So I looked up how to spoof a browser visit and changed my code accordingly, but I still get the same error. What am I doing wrong?

import requests
from bs4 import BeautifulSoup as BS


url = 'https://www.crunchbase.com/organization/incube-labs'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}

response = requests.get(url, headers=headers)
print(response.content)
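
(Side note: a minimal way to tell whether the block page, rather than the real page, came back is to check the status code and look for the denial phrase from the error above; the snippet below is just a sketch built on the same request.)

import requests

url = 'https://www.crunchbase.com/organization/incube-labs'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}

response = requests.get(url, headers=headers)

# A non-200 status or the denial phrase in the body means the anti-bot page was served
if response.status_code != 200 or b'you are using automation tools' in response.content:
    print('Blocked: received the anti-bot page, status', response.status_code)
else:
    print('OK: received the real page, status', response.status_code)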

1 Answer
User
#1 · Posted 2024-06-02 21:49:12

In short, your code looks fine! It appears the site you are trying to scrape requires more elaborate headers than what you currently have. The following code should solve your problem:

import requests
from bs4 import BeautifulSoup as BS


url = 'https://www.crunchbase.com/organization/incube-labs'
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate", "DNT": "1", "Connection": "close", "Upgrade-Insecure-Requests": "1"}

response = requests.get(url, headers=headers)
print(response.content)
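
For completeness, once the request goes through, the response can be handed to BeautifulSoup (already imported as BS above) for parsing. The snippet below is only a sketch: the 'total-funding' class name is a hypothetical placeholder, since Crunchbase's actual markup is not shown here.

# Minimal parsing sketch, assuming the request above succeeded.
# NOTE: 'total-funding' is a hypothetical class name used for illustration only.
soup = BS(response.content, 'html.parser')

funding_element = soup.find(class_='total-funding')
if funding_element is not None:
    print('Total funding:', funding_element.get_text(strip=True))
else:
    print('Funding element not found; inspect soup.prettify() to find the right selector')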

Hope this helps.
