Extracting text from an HTML file with bs4

Posted 2024-05-05 19:26:32


I want to extract the text from my HTML file. If I run the following code on one specific file:

import bs4, sys
from urllib import urlopen
#filin = open(sys.argv[1], 'r')
filin = '/home/iykeln/Desktop/R_work/file1.html' 
webpage = urlopen(filin).read().decode('utf-8')
soup = bs4.BeautifulSoup(webpage)
for node in soup.findAll('html'):
    print u''.join(node.findAll(text=True)).encode('utf-8')

it works. But when I try to handle an arbitrary file using open(sys.argv[1], 'r'), as below:

import bs4, sys
from urllib import urlopen
filin = open(sys.argv[1], 'r')
#filin = '/home/iykeln/Desktop/R_work/file1.html' 
webpage = urlopen(filin).read().decode('utf-8')
soup = bs4.BeautifulSoup(webpage)
for node in soup.findAll('html'):
    print u''.join(node.findAll(text=True)).encode('utf-8')

or

import bs4, sys
from urllib import urlopen
with open(sys.argv[1], 'r') as filin:
    webpage = urlopen(filin).read().decode('utf-8')
    soup = bs4.BeautifulSoup(webpage)
    for node in soup.findAll('html'):
        print u''.join(node.findAll(text=True)).encode('utf-8')

I get the following error:

Traceback (most recent call last):
  File "/home/iykeln/Desktop/py/clean.py", line 5, in <module>
    webpage = urlopen(filin).read().decode('utf-8')
  File "/usr/lib/python2.7/urllib.py", line 87, in urlopen
    return opener.open(url)
  File "/usr/lib/python2.7/urllib.py", line 180, in open
    fullurl = unwrap(toBytes(fullurl))
  File "/usr/lib/python2.7/urllib.py", line 1057, in unwrap
    url = url.strip()
AttributeError: 'file' object has no attribute 'strip'

1 Answer

Answered by a forum user:

You shouldn't call open at all: urlopen expects a URL string (internally it calls .strip() on its argument, which is why passing a file object raises the AttributeError you see). Just pass the filename straight to urlopen:

import bs4, sys
from urllib import urlopen

webpage = urlopen(sys.argv[1]).read().decode('utf-8')
soup = bs4.BeautifulSoup(webpage)
for node in soup.findAll('html'):
    print u''.join(node.findAll(text=True)).encode('utf-8')
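As a side note beyond the original answer (which, like the traceback, targets Python 2.7): on Python 3, urlopen lives in urllib.request and print is a function, so the same approach would look roughly like the sketch below. The fetch_text helper name and the explicit "html.parser" argument are my additions, not part of the original answer.

```python
import sys

import bs4
from urllib.request import urlopen  # Python 3 location of urlopen


def fetch_text(url):
    """Fetch a URL (http:// or file://) and return the page's visible text."""
    # read() returns bytes on Python 3, so decode explicitly.
    webpage = urlopen(url).read().decode("utf-8")
    # Naming the stdlib parser explicitly avoids the "no parser was
    # explicitly specified" warning in newer bs4 releases.
    soup = bs4.BeautifulSoup(webpage, "html.parser")
    return soup.get_text()


if __name__ == "__main__":
    print(fetch_text(sys.argv[1]))
```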

FYI, you don't need urllib at all to open a local file:

import bs4, sys

with open(sys.argv[1], 'r') as f:
    webpage = f.read().decode('utf-8')

soup = bs4.BeautifulSoup(webpage)
for node in soup.findAll('html'):
    print u''.join(node.findAll(text=True)).encode('utf-8')
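Again assuming Python 3 rather than the 2.7 of the original thread, the local-file version gets even simpler: open() in text mode already returns str, so the manual decode/encode dance disappears. The extract_text helper name is my own for illustration.

```python
import sys

import bs4


def extract_text(path):
    """Read a local HTML file and return its visible text."""
    with open(path, "r", encoding="utf-8") as f:
        # "html.parser" is the stdlib parser bundled with Python.
        soup = bs4.BeautifulSoup(f.read(), "html.parser")
    return soup.get_text()


if __name__ == "__main__":
    print(extract_text(sys.argv[1]))
```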

Hope that helps.
