How can I convert multiple HTML tables into a DataFrame using a custom function?

Posted 2024-09-30 01:21:39


I am trying to convert multiple HTML tables into a single DataFrame. For this task I defined a function that should return all of these HTML tables as a DataFrame.

However, instead of returning a DataFrame, the function returns an empty list [].

Here is what I have tried so far:

Getting all the needed links as a list

import requests
from bs4 import BeautifulSoup
import lxml
import html5lib
import pandas as pd
import string

###  defining a list for all the needed links ###

first_url='https://www.salario.com.br/tabela-salarial/?cargos='
second_url='#listaSalarial'
allTheLetters = string.ascii_uppercase

links = []

for letter in allTheLetters:
    links.append(first_url+letter+second_url)
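The loop above can also be written as a one-line list comprehension, which builds the same 26 links:

```python
import string

first_url = 'https://www.salario.com.br/tabela-salarial/?cargos='
second_url = '#listaSalarial'

# One link per uppercase letter, A through Z
links = [first_url + letter + second_url for letter in string.ascii_uppercase]
```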

Defining the function


### defining function to parse html objects ###

def getUrlTables(links):
    for link in links:

        # requesting link, parsing and finding tag:table #
        page = requests.get(link)
        soup = BeautifulSoup(page.content, 'html.parser')
        tab_div = soup.find_all('table', {'class':'listas'})

    # writing html files into directory #
    with open('listas_salariales.html', "w") as file:
        file.write(str(tab_div))
        file.close

    # reading html file as a pandas dataframe #
    tables = pd.read_html('listas_salariales.html')
    return tables

Testing the output

getUrlTables(links)

[]

Am I missing something in getUrlTables()?

Is there an easier way to accomplish this task?


1 Answer

The code below fetches the HTML from all the links, parses them to extract the table data, and builds one large combined DataFrame (I have not stored the intermediate DataFrames to disk; you may want to do that if the tables get too large):

import requests
from bs4 import BeautifulSoup
import lxml
import html5lib
import pandas as pd
import string

###  defining a list for all the needed links ###

first_url='https://www.salario.com.br/tabela-salarial/?cargos='
second_url='#listaSalarial'
allTheLetters = string.ascii_uppercase

links = []

for letter in allTheLetters:
    links.append(first_url+letter+second_url)

### defining function to parse html objects ###

def getUrlTables(links, master_df):
    for link in links:
        page = requests.get(link)
        soup = BeautifulSoup(page.content, 'lxml')   # using the lxml parser
        try:
            table = soup.find('table', attrs={'class':'listas'})

            # finding table headers
            heads = table.find('thead').find('tr').find_all('th')
            colnames = [hdr.text for hdr in heads]
            #print(colnames)
            
            # Now extracting the values
            data = {k:[] for k in colnames}
            rows = table.find('tbody').find_all('tr')
            for rw in rows:
                for col in colnames:
                    cell = rw.find('td', attrs={'data-label': col})
                    data[col].append(cell.text)

            # Constructing a pandas dataframe using the data just parsed
            df = pd.DataFrame.from_dict(data)
            master_df = pd.concat([master_df, df], ignore_index=True)
        except AttributeError:
            print('No data from the link: {}'.format(link))
    return master_df


master_df = pd.DataFrame()
master_df = getUrlTables(links, master_df)
print(master_df)

The output of the above code looks like this:

         CBO                    Cargo  ... Teto Salarial Salário Hora
0     612510            Abacaxicultor  ...      2.116,16         6,86
1     263105                    Abade  ...      5.031,47        17,25
2     263105                 Abadessa  ...      5.031,47        17,25
3     622020  Abanador na Agricultura  ...      2.075,81         6,27
4     862120  Abastecedor de Caldeira  ...      3.793,98        11,65
...      ...                      ...  ...           ...          ...
9345  263110      Zenji (missionário)  ...      3.888,52        12,65
9346  723235                 Zincador  ...      2.583,20         7,78
9347  203010               Zoologista  ...      4.615,45        14,21
9348  203010                  Zoólogo  ...      4.615,45        14,21
9349  223310              Zootecnista  ...      5.369,59        16,50

[9350 rows x 8 columns]
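On the asker's second question ("is there an easier way"): pandas.read_html can parse tables directly, and its attrs parameter narrows the match to tables with class="listas", with no manual cell-by-cell extraction. Here is a minimal, self-contained sketch on an inline HTML fragment; the fragment is illustrative, not the real page markup:

```python
from io import StringIO

import pandas as pd

# Small stand-in for the page's <table class="listas"> markup
html = """
<table class="listas">
  <thead><tr><th>CBO</th><th>Cargo</th></tr></thead>
  <tbody>
    <tr><td>612510</td><td>Abacaxicultor</td></tr>
    <tr><td>263105</td><td>Abade</td></tr>
  </tbody>
</table>
"""

# read_html returns a list of DataFrames, one per matching <table>;
# attrs={'class': 'listas'} keeps only tables carrying that class
tables = pd.read_html(StringIO(html), attrs={'class': 'listas'})
df = tables[0]
```

For the real pages one would pass `StringIO(requests.get(link).text)` instead of the literal fragment. Note that `pd.read_html` relies on lxml or html5lib under the hood, so those libraries still need to be installed.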
