Writing a loop over multiple pages with BeautifulSoup

Posted 2024-10-01 04:58:56


I'm trying to scrape several pages of results from this county search tool: http://www2.tceq.texas.gov/oce/waci/index.cfm?fuseaction=home.main

But I can't figure out how to iterate beyond the first page.

import csv
from mechanize import Browser
from bs4 import BeautifulSoup

url = 'http://www2.tceq.texas.gov/oce/waci/index.cfm?fuseaction=home.main'

br = Browser()
br.set_handle_robots(False)
br.open(url)

br.select_form("county_search_form")

br.form['county_select'] = ['111111111111180']
br.form['start_date_month'] = ['1']
br.form['start_date_day'] = ['1']
br.form['start_date_year'] = ['2014']

br.submit()

soup = BeautifulSoup(br.response(), 'html.parser')

complaints = soup.find('table', class_='waciList')

output = []

import requests  # needs `pip install requests` (the cause of the ImportError below)
for i in xrange(1, 8):
    page = requests.get("http://www2.tceq.texas.gov/oce/waci/index.cfm?fuseaction=home.search&pageNumber={}".format(i))
    if not page.ok:
        continue
    soup = BeautifulSoup(page.text, 'html.parser')  # parse this page's HTML, not the response module
    complaints = soup.find('table', class_='waciList')  # re-find the table on each page

    for tr in complaints.findAll('tr'):
        print tr
        output_row = []
        for td in tr.findAll('td'):
            output_row.append(td.text.strip())

        output.append(output_row)

br.open(url)
print 'page 2'
complaints = soup.find('table', class_='waciList')

for tr in complaints.findAll('tr'):
    print tr

with open('out-tceq.csv', 'w') as csvfile:
    my_writer = csv.writer(csvfile, delimiter='|')
    my_writer.writerows(output)

The output CSV contains only the first page of results. After looking at other scraping examples that use bs4, I tried adding the requests loop, but got the error message "ImportError: No module named requests".

Any ideas on how I should loop over all eight pages of results to get them into the .csv?
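The row-extraction part of the loop can be checked in isolation on a small HTML snippet. A minimal sketch (the table markup below is invented for illustration; only the class name `waciList` comes from the real page):

```python
from bs4 import BeautifulSoup

# Invented sample markup; only the 'waciList' class name matches the real site.
html = """
<table class="waciList">
  <tr><td> 2014-01-05 </td><td>Harris</td></tr>
  <tr><td>2014-02-10</td><td> Dallas </td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
complaints = soup.find("table", class_="waciList")

# Each <tr> becomes one list of stripped cell strings, ready for csv.writerows().
output = []
for tr in complaints.find_all("tr"):
    output.append([td.text.strip() for td in tr.find_all("td")])

print(output)
```

If this prints the expected rows but the full script still only yields page one, the problem is in the fetching, not the parsing.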


1 Answer

You don't actually need the requests module to iterate over the paginated search results; mechanize alone is enough. Here is one possible approach using mechanize.

First, collect all the page links from the current page:

links = br.links(url_regex=r"fuseaction=home.search&pageNumber=")

Then iterate over those page links, opening each one and collecting the useful information from each page on every iteration:

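A runnable sketch of that iterate-and-collect pattern, with the network fetching stubbed out so the loop logic is visible on its own. The `fake_pages` dict stands in for following each mechanize link (`br.follow_link(link).read()`); the markup is invented, and only the `waciList` class name comes from the real site:

```python
from bs4 import BeautifulSoup

# Stand-ins for the paginated responses; with mechanize you would instead do
#   for link in br.links(url_regex=r"fuseaction=home.search&pageNumber="):
#       html = br.follow_link(link).read()
fake_pages = {
    1: '<table class="waciList"><tr><td>row-1a</td></tr></table>',
    2: '<table class="waciList"><tr><td>row-2a</td></tr>'
       '<tr><td>row-2b</td></tr></table>',
}

output = []
for page_number in sorted(fake_pages):
    soup = BeautifulSoup(fake_pages[page_number], "html.parser")
    table = soup.find("table", class_="waciList")
    for tr in table.find_all("tr"):  # collect every row from this page
        output.append([td.text.strip() for td in tr.find_all("td")])

print(output)
```

Accumulating into one `output` list across pages means a single `csv.writer(...).writerows(output)` at the end captures every page, which is what the original script was missing.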
