Iterating over URLs loaded from a CSV in Python

Published 2024-04-25 08:41:58


Please help me. I have a CSV file of URLs with 100 rows and 1 column, and I want to load and process row 1 through row 100 of that CSV in Python. How should I write the code?

However, after running it, the scrape only runs once, on a single row; it never reaches the end of the URLs in the CSV and does not continue to the next URL.

disc_information = html.find('div', class_='alert alert-info global-promo').text.strip().strip('\n')
AttributeError: 'NoneType' object has no attribute 'text'

How can I handle this error when the element cannot be found in the HTML?

My Python code is below. Please help me make the scraping loop run through to the end of the URL list.

from bs4 import BeautifulSoup
import requests
import pandas as pd
import csv


with open('Url Torch.csv', 'rt') as f:
    data = csv.reader(f, delimiter=',')
    for row in data:
        URL_GO = row[2]

def variable_Scrape(url):
    try:
        cookies = dict(cookie="............")
        request = requests.get(url, cookies=cookies)
        html = BeautifulSoup(request.content, 'html.parser')
        title = html.find('div', class_='title').text.strip().strip('\n')
        desc = html.find('div', class_='content').text
        link = html.find_all('img', class_='lazyload slide-item owl-lazy')
        normal_price = html.find('div', class_='amount public').text.strip().strip('\n')
        disc_information = html.find('div', class_='alert alert-info global-promo').text.strip().strip('\n')

    except AttributeError as e:
        # raised when one of the find() calls above returns None (element not on the page)
        print(e)
        return False
    else:
        print(title)
        #print(desc)
        #print(link)
    finally:
        print(title)
        print(desc)
        print(link)
        print('Finally.....')
variable_Scrape(URL_GO)

Tags: csv, text, import, div, url, title, html, as
2 Answers

It is hard to give an exact answer without seeing your CSV file, but try something like this:

import csv

f = open('you_file.csv')
csv_f = csv.reader(f)

for row in csv_f:
    print(row[0])
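
Since your own snippet already imports pandas, a one-column file of URLs could also be read with it. A minimal sketch, assuming 'Url Torch.csv' has a single column and no header row:

import pandas as pd

# read the single-column CSV (no header row assumed) into a flat list of URL strings
urls = pd.read_csv('Url Torch.csv', header=None)[0].tolist()

for url in urls:
    print(url)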

Here is the code:

import csv

data = []  # create an empty list to store the rows
with open('emails.csv') as csv_file:
    reader = csv.reader(csv_file)
    for row in reader:
        data.append(row)  # add each row to the list
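
Note that each row produced by csv.reader is itself a list, so data above ends up as a list of lists. If the file really has only one column, it may be easier to keep just the first field of each row, so that the loop below works with plain URL strings:

import csv

data = []
with open('emails.csv') as csv_file:
    reader = csv.reader(csv_file)
    for row in reader:
        data.append(row[0])  # keep only the URL string from the single column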

Regarding your comment about moving on to the next URL when one of them fails:

import requests

for url in data:  # data is the list where the urls are stored
    try:
        # do your scraping here (requests, beautifulsoup), for example:
        r = requests.get(url)
    except Exception:
        pass  # ignore the error and move on to the next url
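
Putting both answers together, here is a minimal end-to-end sketch. The file name, the cookie placeholder, and the CSS class names are copied from the question and are assumptions about your actual page; the key points are that the scrape function is called inside the CSV loop and that every find() result is checked for None before .text is accessed:

import csv
import requests
from bs4 import BeautifulSoup

def variable_Scrape(url):
    cookies = dict(cookie="............")
    response = requests.get(url, cookies=cookies)
    html = BeautifulSoup(response.content, 'html.parser')

    # find() returns None when the element is missing, so guard before using .text
    title_tag = html.find('div', class_='title')
    title = title_tag.text.strip() if title_tag else ''

    promo_tag = html.find('div', class_='alert alert-info global-promo')
    disc_information = promo_tag.text.strip() if promo_tag else ''

    print(title, disc_information)

with open('Url Torch.csv', 'rt') as f:
    for row in csv.reader(f):
        try:
            variable_Scrape(row[0])  # scrape each URL while still inside the loop
        except requests.RequestException as e:
            print(e)  # skip this URL and continue with the next one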
