Get URLs from a text file with BeautifulSoup

Posted 2024-10-04 09:19:38


How can I get URLs from a .txt file into BeautifulSoup? I'm new to web scraping. I want to scrape multiple pages, and I need to read those pages from a txt file.

import pandas as pd
import requests
from bs4 import BeautifulSoup
from selenium import webdriver

chrome_driver_path = r'C:\chromedriver_win32\chromedriver.exe'
driver = webdriver.Chrome(executable_path=chrome_driver_path)


urls = r'C:\chromedriver_win32\asin.txt'
url = ('https://www.amazon.com/dp/'+urls)
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'lxml')

stock = soup.find(id='availability').get_text()

stok_kontrol = pd.DataFrame(  {  'Url': [url], 'Stok Durumu': [stock] })
stok_kontrol.to_csv('stok-kontrol.csv', encoding='utf-8-sig')


print(stok_kontrol)

This text file contains Amazon ASIN numbers.

The file is located at C:\chromedriver_win32\asin.txt and contains:

B00004SU18

B07L9178GQ

B01M35N6CZ
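
For reference, a minimal sketch of just the file-reading step (assuming the path above and one ASIN per line; the variable name asins is only illustrative):

# Minimal sketch: read the ASINs from the text file into a list.
# Assumes one ASIN per line; blank lines are skipped.
with open(r'C:\chromedriver_win32\asin.txt', 'r') as f:
    asins = [line.strip() for line in f if line.strip()]

print(asins)  # e.g. ['B00004SU18', 'B07L9178GQ', 'B01M35N6CZ']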

2 Answers

If I understand the question correctly, you just need to pass the ASIN numbers into the URL to tell BeautifulSoup what to scrape. That is just a simple file operation: loop over the lines of the file to get the numbers, and pass each one to BeautifulSoup to scrape.

urls = r'C:\chromedriver_win32\asin.txt'
rows = []
with open(urls, 'r') as f:
    for line in f:
        asin = line.strip()  # drop the trailing newline so the URL is valid
        if not asin:
            continue
        url = 'https://www.amazon.com/dp/' + asin
        driver.get(url)
        soup = BeautifulSoup(driver.page_source, 'lxml')
        stock = soup.find(id='availability').get_text().strip()
        rows.append({'Url': url, 'Stok Durumu': stock})

# Build the DataFrame once, so the CSV keeps every product instead of only the last one
stok_kontrol = pd.DataFrame(rows)
stok_kontrol.to_csv('stok-kontrol.csv', encoding='utf-8-sig')
print(stok_kontrol)

This gets each product's URL and whether the product is in stock,
prints that information to the console, and then
saves it to the file 'stok-kontrol.csv'.

Tested with: Python 3.7.4

import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
import re

chrome_driver_path = r'C:\chromedriver_win32\chromedriver.exe'
driver = webdriver.Chrome(executable_path=chrome_driver_path)

# Gets whether the products in the array are in stock, from www.amazon.com
# Returns an Array of Dictionaries, with keys ['asin','instock','url']
def IsProductsInStock(array_of_ASINs):
    results = []
    for asin in array_of_ASINs:
        url = 'https://www.amazon.com/dp/'+str(asin)
        driver.get(url)
        soup = BeautifulSoup(driver.page_source, 'lxml')

        stock = soup.find(id='availability').get_text().strip()

        isInStock = False
        if('In Stock' in stock): 
            # If 'In Stock' is the text of 'availability' element
            isInStock=True
        else: 
            # If Not, extract the number from it, if any, and see if it's in stock.
            tmp = re.search(re.compile('[0-9]+'), stock)
            if( tmp is not None and int(tmp[0]) > 0):
                isInStock = True

        results.append({"asin": asin, "instock": isInStock, "url": url})
    return results

# Saves the product information to 'toFile'
# Returns a pandas.core.frame.DataFrame object, with the product info ['url', 'instock'] as columns
# inStockDict MUST be either a Dictionary, or a 'list' of Dictionaries with ['asin','instock','url'] keys
def SaveProductInStockInformation(inStockDict, toFile):
    if(isinstance(inStockDict, dict)):
        stok_kontrol = pd.DataFrame(  {  'Url': [inStockDict['url']], 'Stok Durumu': [inStockDict['instock']]  } )
    elif(isinstance(inStockDict, list)):
        stocksSimple = []
        for stock in inStockDict:
            stocksSimple.append([stock['url'], stock['instock']])
        stok_kontrol = pd.DataFrame(stocksSimple, columns=['Url', 'Stok Durumu'])
    else:
        raise Exception("The inStockDict param must be either a dictionary, or a 'list' of dictionaries with ['asin','instock','url'] keys!")

    stok_kontrol.to_csv(toFile, encoding='utf-8-sig')
    return stok_kontrol

# Get ASINs From File
f = open(r'C:\chromedriver_win32\asin.txt','r')
urls = f.read().split()
f.close()

# Get a list of Dictionaries containing all the products information
stocks = IsProductsInStock(urls)

# Save and Print the ['url', 'instock'] information
print( SaveProductInStockInformation(stocks, 'stok-kontrol.csv') )


# Remove if you need to use the driver later on in the program
driver.close() 

Result (file 'stok-kontrol.csv'):

,Url,Stok Durumu
0,https://www.amazon.com/dp/B00004SU18,True
1,https://www.amazon.com/dp/B07L9178GQ,True
2,https://www.amazon.com/dp/B01M35N6CZ,True
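
For reference, a minimal sketch (not part of either answer) of reading stok-kontrol.csv back with pandas and listing only the URLs that are not in stock; the column names match the CSV above:

import pandas as pd

# Minimal sketch: load the result CSV written above and print the URLs
# whose 'Stok Durumu' value is False (handles both bool and string values).
df = pd.read_csv('stok-kontrol.csv', index_col=0)
out_of_stock = df[df['Stok Durumu'].astype(str).str.lower() == 'false']
print(out_of_stock['Url'].tolist())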
