Python web scraping: how do I scrape a site like this?

Posted 2024-09-28 01:26:18


OK, I need to scrape the following page: https://www.programmableweb.com/category/all/apis?deadpool=1

It's a list of APIs; there are roughly 22,000 of them to scrape.


I need to:

1) Get the URL for each API in the table (pages 1-889) and grab the following fields:

  • API name
  • Description
  • Category
  • Submitted

2) Then scrape a bunch of information from each of those URLs.

3) Export the data to CSV.


The problem is that I'm not quite sure how to approach this project. As far as I can tell there are no AJAX calls populating the table, which means I'll have to parse the HTML directly (right?).


The way I see it, the logic goes something like this:

  1. Use the requests & BS4 libraries to scrape the table

  2. Then grab the href from each row

  3. Visit that href, scrape the data, move on to the next one

  4. Rinse and repeat for every table row.


Am I on the right track, and will requests and BS4 work for this?
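
Here is roughly the shape I have in mind (just a sketch; the selectors are guesses, I haven't checked the real class names yet):

import requests
from bs4 import BeautifulSoup

base = 'https://www.programmableweb.com/category/all/apis?deadpool=1&page={}'

for page in range(0, 889):                      # one request per listing page
    soup = BeautifulSoup(requests.get(base.format(page)).text, 'html.parser')
    for row in soup.select('table tbody tr'):   # one row per API
        link = row.find('a')
        if link is None:
            continue
        detail_url = 'https://www.programmableweb.com' + link.get('href')
        # ...then fetch detail_url, scrape the extra fields, and write everything to CSV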

Here are some screenshots of what I've been trying to explain.

Thanks a lot for the help. This is hurting my head, haha.


2 Answers

Here we use requests, BeautifulSoup and pandas:

import requests
from bs4 import BeautifulSoup
import pandas as pd

url = 'https://www.programmableweb.com/category/all/apis?deadpool=1&page='

num = int(input('How Many Page to Parse?> '))
print('please wait....')
name = []
desc = []
cat = []
sub = []
for i in range(0, num):
    r = requests.get(f"{url}{i}")
    soup = BeautifulSoup(r.text, 'html.parser')
    for item1 in soup.findAll('td', attrs={'class': 'views-field views-field-title col-md-3'}):
        name.append(item1.text)
    for item2 in soup.findAll('td', attrs={'class': 'views-field views-field-search-api-excerpt views-field-field-api-description hidden-xs visible-md visible-sm col-md-8'}):
        desc.append(item2.text)
    for item3 in soup.findAll('td', attrs={'class': 'views-field views-field-field-article-primary-category'}):
        cat.append(item3.text)
    for item4 in soup.findAll('td', attrs={'class': 'views-field views-field-created'}):
        sub.append(item4.text)

result = list(zip(name, desc, cat, sub))

df = pd.DataFrame(
    result, columns=['API Name', 'Description', 'Category', 'Submitted'])
df.to_csv('output.csv')

print('Task Completed, Result saved to output.csv file.')
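
Note that the four lists above are filled independently and only zipped together at the end, so if any row is missing one of its cells the columns can drift out of alignment. A variation (same idea as the second answer below) collects each row's cells together instead; it reuses url and num from above and assumes every data row keeps Name/Description/Category/Submitted as its first four cells:

rows = []
for i in range(0, num):
    r = requests.get(f"{url}{i}")
    soup = BeautifulSoup(r.text, 'html.parser')
    for tr in soup.select('table tbody tr'):             # walk one table row at a time
        tds = tr.find_all('td')
        if len(tds) < 4:                                  # skip header/incomplete rows
            continue
        rows.append([td.get_text(strip=True) for td in tds[:4]])

df = pd.DataFrame(rows, columns=['API Name', 'Description', 'Category', 'Submitted'])
df.to_csv('output.csv')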


Now for the href parsing:

import requests
from bs4 import BeautifulSoup
import pandas as pd

url = 'https://www.programmableweb.com/category/all/apis?deadpool=1&page='  # same listing (deadpool=1) as above, so the rows line up for the later merge

num = int(input('How Many Page to Parse?> '))
print('please wait....')

links = []
for i in range(0, num):
    r = requests.get(f"{url}{i}")
    soup = BeautifulSoup(r.text, 'html.parser')
    for link in soup.findAll('td', attrs={'class': 'views-field views-field-title col-md-3'}):
        for href in link.findAll('a'):
            result = 'https://www.programmableweb.com'+href.get('href')
            links.append(result)

spans = []
for link in links:
    r = requests.get(link)
    soup = BeautifulSoup(r.text, 'html.parser')
    span = [span.text for span in soup.select('div.field span')]
    spans.append(span)

df = pd.DataFrame(spans)
df.to_csv('data.csv')
print('Task Completed, Result saved to data.csv file.')
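
With roughly 22,000 detail pages this loop fires a lot of HTTP requests, so it is worth reusing one connection and pausing between requests. An optional tweak to the loop above (the 0.5s delay is an arbitrary guess; tune or drop it as needed):

import time

session = requests.Session()              # reuse the underlying connection
spans = []
for link in links:
    r = session.get(link)
    soup = BeautifulSoup(r.text, 'html.parser')
    spans.append([span.text for span in soup.select('div.field span')])
    time.sleep(0.5)                       # small pause so we don't hammer the site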


If you want to put these two CSV files together, here is the code:

import pandas as pd

a = pd.read_csv("output.csv")
b = pd.read_csv("data.csv")
merged = a.merge(b)
merged.to_csv("final.csv", index=False)
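
merge() with no on= argument joins on whatever column names the two frames share; here that is only the unnamed index column both to_csv calls wrote out, so rows are matched by their original position. If you want to make that positional assumption explicit, a side-by-side concatenation does the same thing:

import pandas as pd

a = pd.read_csv("output.csv", index_col=0)
b = pd.read_csv("data.csv", index_col=0)
pd.concat([a, b], axis=1).to_csv("final.csv", index=False)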


If you want to keep going with this, you should read up a bit more on web scraping.

from bs4 import BeautifulSoup
import csv, os, requests
from urllib import parse


def SaveAsCsv(list_of_rows):
    try:
        with open('data.csv', mode='a',  newline='', encoding='utf-8') as outfile:
            csv.writer(outfile).writerow(list_of_rows)
    except PermissionError:
        print("Please make sure data.csv is closed\n")

if os.path.isfile('data.csv') and os.access('data.csv', os.R_OK):
    print("File data.csv Already exists \n")
else:
    SaveAsCsv([ 'api_name','api_link','api_desc','api_cat'])
BaseUrl = 'https://www.programmableweb.com/category/all/apis?deadpool=1&page={}'
for i in range(0, 889):
    # the pager is zero-based: ?page=0 is the first of the 889 listing pages
    print('## Getting Page {} out of 889'.format(i + 1))
    url = BaseUrl.format(i)
    res = requests.get(url)
    soup = BeautifulSoup(res.text,'html.parser')
    table_rows = soup.select('div.view-content > table[class="views-table cols-4 table"] > tbody tr')
    for row in table_rows:
        tds = row.select('td')
        api_name = tds[0].text.strip()
        api_link = parse.urljoin(url, tds[0].find('a').get('href'))
        api_desc = tds[1].text.strip()
        api_cat  = tds[2].text.strip()  if len(tds) >= 3 else ''
        SaveAsCsv([api_name,api_link,api_desc,api_cat])
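
The question also asks for the Submitted date, which this loop does not save. If the listing table carries it as a fourth cell (as the first answer assumes), the row loop can be extended like this, with the header row written by SaveAsCsv updated to match:

    for row in table_rows:
        tds = row.select('td')
        api_name = tds[0].text.strip()
        api_link = parse.urljoin(url, tds[0].find('a').get('href'))
        api_desc = tds[1].text.strip()
        api_cat = tds[2].text.strip() if len(tds) >= 3 else ''
        api_submitted = tds[3].text.strip() if len(tds) >= 4 else ''
        SaveAsCsv([api_name, api_link, api_desc, api_cat, api_submitted])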
