Code runs for more than 20 minutes, then stops with no output. What is wrong?


I am trying to get the "src" of 500 profile pictures from Transfermarkt, i.e. the picture on each player's profile page, not the small thumbnail in the list. I have managed to store each player's URL in a list. Now, when I try to loop through it, the code just keeps running and then stops after 20 minutes without any error or any output from the print command. As I said, I want the image source (src) of each player's picture on their respective profile page.

I am not sure what exactly is wrong with the code, because I do not get any error message. I built it with the help of various posts on Stack Overflow.

from bs4 import BeautifulSoup
import requests
import pandas as pd


playerID = []
playerImgSrc = []


result = []

for page in range(1, 21):

    r = requests.get("https://www.transfermarkt.com/spieler-statistik/wertvollstespieler/marktwertetop?land_id=0&ausrichtung=alle&spielerposition_id=alle&altersklasse=alle&jahrgang=0&kontinent_id=0&plus=1",
        params= {"page": page},
        headers= {"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0"}
    )
    soup = BeautifulSoup(r.content, "html.parser")

    links = soup.select('a.spielprofil_tooltip')

    for i in range(len(links)):
        playerID.append(links[i].get('id'))

    playerProfile = ["https://www.transfermarkt.com/josh-maja/profil/spieler/" + x for x in playerID]

    for p in playerProfile:
        html = requests.get(p).text
        soup = BeautifulSoup(html, "html.parser")

        link = soup.select('div.dataBild')

    for i in range(len(link)):
        playerImgSrc.append(link[i].get('src'))
print(playerImgSrc)

1 Answer

Basically, the site's navigation uses AJAX, which is very fast, much like browsing folders on your local machine.

So the data shown in the UI (user interface) actually comes from a background XHR request to the host, which uses AJAX for the marktwertetop listing.

I was able to locate the XHR request being made, and then called it directly with the required parameters while looping over the pages.
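
For illustration, a minimal sketch of calling that endpoint directly (the ajax=yw1 and page parameters are taken from the answer's code below; the shape of the returned fragment is an assumption, best verified in a browser's network tab):

import requests

# The listing grid is served by this AJAX endpoint; "yw1" identifies the
# grid module and "page" selects the result page.
endpoint = "https://www.transfermarkt.com/spieler-statistik/wertvollstespieler/marktwertetop"
resp = requests.get(
    endpoint,
    params={"ajax": "yw1", "page": 1},
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0"},
)
print(resp.status_code)   # 200 if the XHR endpoint answered
print(resp.text[:200])    # the fragment is plain HTML for the grid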

I found that the difference between the small and header photos is actually just a different segment within the src URL itself, namely small vs. header, so I replace it within the src string.
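
As a quick illustration of that replacement (the URL here is hypothetical; only the small/header segment matters):

# Hypothetical thumbnail URL from the listing page; swapping the "small"
# path segment for "header" yields the larger profile-page image.
thumb = "https://img.example/portrait/small/12345-1.jpg"
full = thumb.replace("small", "header")
print(full)  # https://img.example/portrait/header/12345-1.jpg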

I also figured out that you are protected by an antibiotic (😋), by which I mean cookies and a session that are maintained during requests and navigation, i.e. a layer of security that prevents flooding during requests.

Imagine you have a browser open and are navigating between pages of the same website: there is an established cookie created by the session, and it refreshes itself for as long as you are connected to the site, even if idle.

But the way you are doing it, you are effectively opening a browser, closing it, opening it again, closing it, and so on. The host side sees that as a DDoS attack?! Or flood behavior. Handling that is a very basic part of backend operation.
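
In requests terms, the difference looks roughly like this (a sketch with placeholder URLs, not the answer's exact code):

import requests

urls = ["https://example.com/page1", "https://example.com/page2"]  # placeholder URLs

# What the question's code effectively does: each requests.get() opens a new
# connection and starts a fresh cookie handshake, i.e. open a browser, close
# it, open it again.
for u in urls:
    requests.get(u, timeout=10)

# What the answer's code does: one Session reuses the connection and carries
# cookies across requests, like a single browser window navigating the site.
with requests.Session() as s:
    for u in urls:
        s.get(u, timeout=10)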

import requests
from bs4 import BeautifulSoup

# The AJAX endpoint behind the listing grid; {} is filled with the page number.
site = "https://www.transfermarkt.com/spieler-statistik/wertvollstespieler/marktwertetop?ajax=yw1&page={}"

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0'
}


def main(url):
    # A single Session keeps the connection and cookies alive across all pages.
    with requests.Session() as req:
        allin = []
        for item in range(1, 21):
            print(f"Collecting Links From Page# {item}")
            r = req.get(url.format(item), headers=headers)
            soup = BeautifulSoup(r.content, 'html.parser')
            # The listing's thumbnail <img> tags carry the photo URLs.
            img = [item.get("src") for item in soup.findAll(
                "img", class_="bilderrahmen-fixed")]
            # Swap the "small" segment for "header" to get the full-size
            # profile photo instead of the thumbnail.
            convert = [item.replace("small", "header") for item in img]
            allin.extend(convert)
    return allin


def download():
    urls = main(site)
    with requests.Session() as req:
        for url in urls:
            r = req.get(url, headers=headers)
            # Build a file name from the URL: drop the fixed-length prefix,
            # then strip the query string.
            name = url[52:]
            name = name.split('?')[0]
            print(f"Saving {name}")
            with open(f"{name}", 'wb') as f:
                f.write(r.content)


download()
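
One note on download(): name = url[52:] assumes the image URL prefix is always exactly 52 characters long. A position-independent sketch of the same step (a hypothetical alternative, not what the answer uses):

# Take everything after the last "/" and strip any query string, instead of
# relying on the prefix length being exactly 52 characters.
name = url.rsplit("/", 1)[-1].split("?")[0]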


Update following the user's comment

import requests
from bs4 import BeautifulSoup
import csv

site = "https://www.transfermarkt.com/spieler-statistik/wertvollstespieler/marktwertetop?ajax=yw1&page={}"

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0'
}


def main(url):
    with requests.Session() as req:
        allin = []
        names = []
        for item in range(1, 21):
            print(f"Collecting Links From Page# {item}")
            r = req.get(url.format(item), headers=headers)
            soup = BeautifulSoup(r.content, 'html.parser')
            img = [item.get("src") for item in soup.findAll(
                "img", class_="bilderrahmen-fixed")]
            convert = [item.replace("small", "header") for item in img]
            # Player names come from the profile links; the last five anchors
            # matched on each page are not player rows, so drop them.
            name = [name.text for name in soup.findAll(
                "a", class_="spielprofil_tooltip")][:-5]
            allin.extend(convert)
            names.extend(name)
    # Write the collected name/image pairs to CSV.
    with open("data.csv", 'w', newline="", encoding="UTF-8") as f:
        writer = csv.writer(f)
        writer.writerow(["Name", "IMG"])
        data = zip(names, allin)
        writer.writerows(data)


main(site)

Output: view online
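
Since the question already imports pandas, the generated data.csv can be sanity-checked with a couple of lines (a usage sketch):

import pandas as pd

# Load the Name/IMG pairs written by main() above.
df = pd.read_csv("data.csv")
print(df.head())          # first few rows
print(len(df), "rows")    # roughly 25 players per page over 20 pages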

