Downloading Word documents with Python

Posted 2024-10-02 22:37:46


For my coursework I have to build a web scraper that scrapes a website for images, Word documents and PDFs and downloads them to a folder. I have the image download working, but when I change the code to download the documents or PDFs it doesn't find any at all. I am scraping the site with BeautifulSoup, and I know the site contains documents and PDFs that the scraper fails to pick up.

from bs4 import BeautifulSoup
import shutil
import requests
from urllib.parse import urljoin
import time
import os

url = 'http://www.soc.napier.ac.uk/~40009856/CW/'

path = 'c:\\temp\\'

def ensure_dir(path):
    # create the download directory if it does not already exist
    if not os.path.exists(path):
        os.makedirs(path)
    return path

os.chdir(ensure_dir(path))

def webget(url):
    response = requests.get(url)
    return response.content

def make_soup(url):
    # parse the fetched page so get_docs() can search it
    return BeautifulSoup(webget(url), 'html.parser')

def get_docs(url):
    soup = make_soup(url)
    documents = soup.findAll('doc')
    print(str(len(documents)) + " documents found.")
    print('Downloading documents to current working directory.')
    documents_links = [each.get('src') for each in documents]
    for each in documents_links:
        try:
            filename = each.strip().split('/')[-1].strip()
            src = urljoin(url, each)
            print('Getting: ' + filename)
            response = requests.get(src, stream=True)
            # delay to avoid corrupted previews
            time.sleep(1)
            with open(filename, 'wb') as out_file:
                shutil.copyfileobj(response.raw, out_file)
        except Exception:
            print('  An error occurred. Continuing.')
    print('Done.')

if __name__ == '__main__':
    get_docs(url)

2 Answers

More of an aside, but you can use CSS selector syntax to gather the pdf, docx etc. links in one pass. Note that you will still need to complete some of the paths, e.g. with the prefix "http://www.soc.napier.ac.uk/~40009856/CW/". The following uses the [attribute$=value] CSS selector syntax with the $ operator (meaning the attribute's string value ends with the given suffix):

from bs4 import BeautifulSoup
import requests

url = 'http://www.soc.napier.ac.uk/~40009856/CW/'
res = requests.get(url)
soup = BeautifulSoup(res.content, 'lxml')
# hrefs ending in .docx or .pdf, plus every <img> that has a src
items = soup.select("[href$='.docx'], [href$='.pdf'], img[src]")
print([item['href'] if 'href' in item.attrs else item['src'] for item in items])
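
As mentioned, relative links still need to be completed against the base URL before they can be fetched. A minimal sketch of doing that with urljoin and then saving each file (the save_dir value here is an assumption for illustration, matching the folder used in the question):

import os
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

base = 'http://www.soc.napier.ac.uk/~40009856/CW/'
save_dir = 'c:\\temp\\'  # assumed download folder, as in the question

soup = BeautifulSoup(requests.get(base).content, 'lxml')
for item in soup.select("[href$='.docx'], [href$='.pdf'], img[src]"):
    link = item.get('href') or item.get('src')     # whichever attribute the tag has
    full_url = urljoin(base, link)                 # complete relative paths
    filename = full_url.rstrip('/').split('/')[-1]
    with open(os.path.join(save_dir, filename), 'wb') as f:
        f.write(requests.get(full_url).content)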

First of all, you should read up on what .find_all() and the other methods actually do: see the BeautifulSoup documentation for .find_all().

The first argument of .find_all() is the tag name. That works fine for

<img src='some_url'>

tags: you get all the img tags with soup.find_all('img'), extract the URLs of the actual files and download them.
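
In other words, the working image branch amounts to something like this (a sketch, assuming the page has already been parsed into soup):

images = soup.find_all('img')                      # every <img> tag on the page
image_links = [img.get('src') for img in images]   # URLs of the actual image files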

Now you are looking for tags like

<a href='some_url'>

where the URL contains ".doc". Something like this should do it:

soup.select('a[href*=".doc"]')
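
Plugged into the question's code, that means selecting <a> tags and reading their href attribute, instead of searching for a nonexistent <doc> tag and reading src. A sketch of a corrected get_docs(), under the same assumptions as the original code:

from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def get_docs(url):
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    # <a> tags whose href contains ".doc" (matches both .doc and .docx)
    documents = soup.select('a[href*=".doc"]')
    print(str(len(documents)) + ' documents found.')
    for each in documents:
        href = each.get('href')
        filename = href.strip().split('/')[-1]
        print('Getting: ' + filename)
        response = requests.get(urljoin(url, href))  # complete relative paths
        with open(filename, 'wb') as out_file:
            out_file.write(response.content)

get_docs('http://www.soc.napier.ac.uk/~40009856/CW/')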
