Logic flow for iterating over a website's pages with BeautifulSoup and a CSV writer

Posted 2024-10-03 19:29:52


I can't seem to figure out the right indentation/clause placement to make this loop run over more than one page. This code prints the CSV file fine, but only for the first page:

#THIS WORKS BUT ONLY PRINTS THE FIRST PAGE

from bs4 import BeautifulSoup
from urllib2 import urlopen
import csv

page_num = 1
total_pages = 20

with open("MegaMillions.tsv","w") as f:
    fieldnames = ['date', 'numbers', 'moneyball']
    writer = csv.writer(f, delimiter = '\t')
    writer.writerow(fieldnames)

    while page_num < total_pages:
        page_num = str(page_num)
        soup = BeautifulSoup(urlopen('http://www.usamega.com/mega-millions-history.asp?p='+page_num).read())

    for row in soup('table',{'bgcolor':'white'})[0].findAll('tr'):

        tds = row('td')
        if tds[1].a is not None:
            date = tds[1].a.string.encode("utf-8")
            if tds[3].b is not None:
                uglynumber = tds[3].b.string.split()
                betternumber = [int(uglynumber[i]) for i in range(len(uglynumber)) if i%2==0]
                moneyball = tds[3].strong.string.encode("utf-8")

                writer.writerow([date, betternumber, moneyball])
        page_num = int(page_num)
        page_num += 1

print 'We\'re done here.'

And this version, of course, only prints the last page:

#THIS WORKS BUT ONLY PRINTS THE LAST PAGE

from bs4 import BeautifulSoup
from urllib2 import urlopen
import csv

page_num = 1
total_pages = 20

while page_num < total_pages:
    page_num = str(page_num)
    soup = BeautifulSoup(urlopen('http://www.usamega.com/mega-millions-history.asp?p='+page_num).read())

    with open("MegaMillions.tsv","w") as f:
        fieldnames = ['date', 'numbers', 'moneyball']
        writer = csv.writer(f, delimiter = '\t')
        writer.writerow(fieldnames)

        for row in soup('table',{'bgcolor':'white'})[0].findAll('tr'):

            tds = row('td')
            if tds[1].a is not None:
                date = tds[1].a.string.encode("utf-8")
                if tds[3].b is not None:
                    uglynumber = tds[3].b.string.split()
                    betternumber = [int(uglynumber[i]) for i in range(len(uglynumber)) if i%2==0]
                    moneyball = tds[3].strong.string.encode("utf-8")

                    writer.writerow([date, betternumber, moneyball])
        page_num = int(page_num)
        page_num += 1

print 'We\'re done here.'

2 Answers

Thanks to the suggestions, here is a working variant:

from bs4 import BeautifulSoup
from urllib2 import urlopen
import csv

page_num = 1
total_pages = 73

with open("MegaMillions.tsv","w") as f:
    fieldnames = ['date', 'numbers', 'moneyball']
    writer = csv.writer(f, delimiter = '\t')
    writer.writerow(fieldnames)

    while page_num <= total_pages:
        page_num = str(page_num)
        soup = BeautifulSoup(urlopen('http://www.usamega.com/mega-millions-history.asp?p='+page_num).read())

        for row in soup('table',{'bgcolor':'white'})[0].findAll('tr'):

            tds = row('td')
            if tds[1].a is not None:
                date = tds[1].a.string.encode("utf-8")
                if tds[3].b is not None:
                    uglynumber = tds[3].b.string.split()
                    betternumber = [int(uglynumber[i]) for i in range(len(uglynumber)) if i%2==0]
                    moneyball = tds[3].strong.string.encode("utf-8")

                    writer.writerow([date, betternumber, moneyball])
        page_num = int(page_num)
        page_num += 1

print 'We\'re done here.'

I went with this structure rather than "a" mode, because with "a" the header row would have been written out once for every page.
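The shape of that fix can be sketched without touching the live site: open the file once, write the header exactly once, and keep the page loop inside the scope of the single writer. The page data below is made up for illustration (nothing here is scraped from usamega.com), and an in-memory buffer stands in for the TSV file so the sketch runs offline:

```python
import csv
import io

# Hypothetical stand-in for parsed pages of results --
# invented rows, not real drawings from usamega.com.
fake_pages = {
    1: [["12/30/2014", [1, 2, 3, 4, 5], "6"]],
    2: [["12/26/2014", [7, 8, 9, 10, 11], "12"]],
}

buf = io.StringIO()                       # stands in for the open TSV file
writer = csv.writer(buf, delimiter="\t")
writer.writerow(["date", "numbers", "moneyball"])  # header exactly once

# The page loop lives *inside* the lifetime of the one writer,
# so rows from every page land in the same output.
for page_num in sorted(fake_pages):
    for row in fake_pages[page_num]:
        writer.writerow(row)

print(buf.getvalue())
```

Opening the file (or buffer) in the loop, by contrast, forces a choice between truncating it each pass ("w") or repeating the header each pass ("a" with the header inside the loop), which is exactly the trap the two broken versions fell into.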

The problem with your second code sample is that you overwrite the file on every pass through the loop. Instead of

open("MegaMillions.tsv","w")

use

open("MegaMillions.tsv","a")

"a" opens the file for appending, which is what you want here.
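To make the difference concrete, here is a minimal standalone sketch (throwaway demo rows in a temp file, not the lottery data) contrasting the two modes:

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.tsv")

# "w" truncates on every open: each iteration wipes the previous rows.
for page in (1, 2, 3):
    with open(path, "w") as f:
        csv.writer(f, delimiter="\t").writerow(["page", page])
with open(path) as f:
    after_w = f.read()        # only the last row survives

# "a" appends: rows from every open accumulate.
os.remove(path)               # start fresh ("a" creates the file if missing)
for page in (1, 2, 3):
    with open(path, "a") as f:
        csv.writer(f, delimiter="\t").writerow(["page", page])
with open(path) as f:
    after_a = f.read()        # all three rows are present

print(after_w)
print(after_a)
```

This is why the second sample ends up with only the last page: every reopen in "w" mode discards everything written so far.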
