Scraping multiple URLs with urllib in Python/Anaconda (a loop problem)

Posted 2024-10-01 00:26:04


First-time poster here, so please be gentle! I'm very new to Python and I'm having some trouble scraping multiple URLs with the following code:

    from urllib import urlopen as uReq
    from bs4 import BeautifulSoup as soup

    my_url = ["https://www.zoopla.co.uk/for-sale/property/birmingham/?q=birmingham&results_sort=newest_listings&search_source=home&page_size=100", "https://www.zoopla.co.uk/for-sale/property/birmingham/?identifier=birmingham&page_size=100&q=birmingham&search_source=home&radius=0&pn=2"]



    for urls in my_url:

    uClient = uReq(my_url)
    page_html = uClient.read()
    uClient.close()

    page_soup = soup(page_html,"html.parser")

    containers = page_soup.findAll("div",{"class":"listing-results-wrapper"})

    filename = "links.csv"
    f = open (filename, "w")

    headers = "link\n"

    f.write(headers)

    for container in containers:
        link =  container.div.div.a["href"]

        print("link: " + link)

        f.write(link + "\n")

       f.close()

I'm guessing I've made a very basic mistake, but I can't seem to find anything by searching the forums/Google etc., so I must be looking in the wrong places.

Edit: I should probably explain what I'm trying to achieve! I'm trying to create a CSV file containing the information captured in the variable `containers`.

The code seems to work fine with just 1 URL, but I get `AttributeError: 'list' object has no attribute 'strip'` when I add the other URLs.
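That traceback comes from passing the whole list where a single URL string is expected. A minimal sketch of the mistake, with no network access (`fetch` here is a hypothetical stand-in for `urlopen`, which likewise normalises its argument with string methods such as `.strip()`):

```python
my_url = ["https://example.com/page1", "https://example.com/page2"]

def fetch(url):
    # Stand-in for urlopen: works only on a single URL string.
    return url.strip()

# Buggy pattern from the question: the loop variable `urls` is
# ignored and the whole list is passed in on every iteration.
try:
    for urls in my_url:
        fetch(my_url)
except AttributeError as exc:
    print(exc)  # 'list' object has no attribute 'strip'

# Fixed: pass the loop variable, one URL at a time.
for urls in my_url:
    print(fetch(urls))
```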

Is anyone willing to help?

Any help would be greatly appreciated!


1 Answer

#1 · Posted 2024-10-01 00:26:04

The code's indentation got mangled, and the whole list is being passed to `urlopen` instead of the loop variable.

from urllib.request import urlopen as uReq  # Python 3 (the original `from urllib import urlopen` is Python 2)
from bs4 import BeautifulSoup as soup

my_url = ["https://www.zoopla.co.uk/for-sale/property/birmingham/?q=birmingham&results_sort=newest_listings&search_source=home&page_size=100", "https://www.zoopla.co.uk/for-sale/property/birmingham/?identifier=birmingham&page_size=100&q=birmingham&search_source=home&radius=0&pn=2"]

# Open the file once, before the loop: mode "w" truncates the file,
# so opening it inside the loop would keep only the last page's links.
filename = "links.csv"
f = open(filename, "w")
f.write("link\n")

for urls in my_url:
    uClient = uReq(urls)  # pass the loop variable, not the whole list
    page_html = uClient.read()
    uClient.close()
    page_soup = soup(page_html, "html.parser")
    containers = page_soup.findAll("div", {"class": "listing-results-wrapper"})
    for container in containers:
        link = container.div.div.a["href"]
        print("link: " + link)
        f.write(link + "\n")

f.close()
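Since the goal is a CSV file, a small variant worth considering is the standard-library `csv` module with a context manager, which handles quoting and closes the file for you. The `links` list below is a hypothetical stand-in for the hrefs scraped above:

```python
import csv

# Hypothetical stand-in for the hrefs collected from `containers`.
links = [
    "https://www.zoopla.co.uk/for-sale/details/1",
    "https://www.zoopla.co.uk/for-sale/details/2",
]

# newline="" is what the csv module's docs recommend when opening
# a file for csv.writer, so row endings are handled consistently.
with open("links.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["link"])       # header row
    for link in links:
        writer.writerow([link])     # one link per row
```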
