Web scraping from a directory of HTML files using BS4 and Python


I have a website where each person's details are stored in a separate .html file. So, in total, the details of 100 people are stored in 100 different .html files, but they all have the same HTML structure.

Here is the website link: http://www.coimbatore.com/doctors/home.htm.

So, as you can see, the site has a lot of categories, and all the doctors' .html files are in the same directory.

{a2} has the contact details of 5 doctors. If I click on any doctor's name, it goes to

http://www.coimbatore.com/doctors/<doctor-name>.htm. So all the files are in the same /doctors/ directory, if I'm not mistaken. How do I scrape each doctor's details?

I was planning to save all the files under http://www.coimbatore.com/doctors/ locally and merge them into a single whole.html file with the join command on Linux. Is there a better way?
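
A rough sketch of that download step in Python instead of shell tools (the '/doctors/ .htm' link filter and the local file naming are assumptions about the site's layout):

import os
import urllib2
import urlparse
from bs4 import BeautifulSoup

BASE = "http://www.coimbatore.com/doctors/home.htm"

# collect every .htm link under /doctors/ reachable from the index page
soup = BeautifulSoup(urllib2.urlopen(BASE))
urls = set()
for a in soup.find_all('a', href=True):
    url = urlparse.urljoin(BASE, a['href'])
    if '/doctors/' in url and url.endswith('.htm'):
        urls.add(url)

# save each page locally so it can be parsed offline, no merging needed
for url in urls:
    try:
        html = urllib2.urlopen(url).read()
    except urllib2.HTTPError:
        continue
    with open(os.path.basename(url), 'w') as f:
        f.write(html)

Note that this only follows links one level down from home.htm, so it saves the category pages but not the doctor pages linked from them; a full crawl, like the Scrapy answer below, would also visit the links inside those pages.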

UPDATE

import urllib2

letters = ['doctor1', 'doctor2']  # ... one entry per doctor page name
for letter in letters:
    try:
        page = urllib2.urlopen("http://www.coimbatore.com/doctors/{}.htm".format(letter))
    except urllib2.HTTPError:
        continue
    else:
        pass  # TODO: parse `page` here

2 Answers

One way is to use Scrapy:

Create the project:

scrapy startproject doctors && cd doctors

Define the data to load (items.py):

from scrapy.item import Item, Field

class DoctorsItem(Item):
    # one Field per value the spider extracts below
    doctor_name = Field()
    qualification = Field()
    membership = Field()
    visiting_hospitals = Field()
    phone = Field()
    consulting_hours = Field()
    specialist_in = Field()

Create the spider. The stock basic template doesn't quite fit this task as-is:

scrapy genspider -t basic doctors_spider 'coimbatore.com'

Change it so that it keeps yielding Request objects until every page holding a doctor's details has been reached:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from doctors.items import DoctorsItem
from scrapy.http import Request
from urlparse import urljoin

class DoctorsSpiderSpider(BaseSpider):
    name = "doctors_spider"
    allowed_domains = ["coimbatore.com"]
    start_urls = [ 
        'http://www.coimbatore.com/doctors/home.htm'
    ]   


    def parse(self, response):
        hxs = HtmlXPathSelector(response)

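        # each doctor page keeps the details in a seven-row label/value table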
        for row in hxs.select('/html/body/center[1]/table[@cellpadding = 0]'):
            i = DoctorsItem()
            i['doctor_name'] = '|'.join(row.select('./tr[1]/td[2]//font[@size = -1]/text()').extract()).replace('\n', ' ')
            i['qualification'] ='|'.join( row.select('./tr[2]/td[2]//font[@size = -1]/text()').extract()).replace('\n', ' ')
            i['membership'] = '|'.join(row.select('./tr[3]/td[2]//font[@size = -1]/text()').extract()).replace('\n', ' ')
            i['visiting_hospitals'] = '|'.join(row.select('./tr[4]/td[2]//font[@size = -1]/text()').extract()).replace('\n', ' ')
            i['phone'] = '|'.join(row.select('./tr[5]/td[2]//font[@size = -1]/text()').extract()).replace('\n', ' ')
            i['consulting_hours'] = '|'.join(row.select('./tr[6]/td[2]//font[@size = -1]/text()').extract()).replace('\n', ' ')
            i['specialist_in'] = '|'.join(row.select('./tr[7]/td[2]//font[@size = -1]/text()').extract()).replace('\n', ' ')
            yield i

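        # follow the category links, and every other on-site link, back into this same callback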
        for url in hxs.select('/html/body/center[3]//a/@href').extract():
            yield Request(urljoin(response.url, url), callback=self.parse)

        for url in hxs.select('/html/body//a/@href').extract():
            yield Request(urljoin(response.url, url), callback=self.parse)

Run it like this:

scrapy crawl doctors_spider -o doctors.csv -t csv

This will create a CSV file like:

phone,membership,visiting_hospitals,qualification,specialist_in,consulting_hours,doctor_name
(H)00966 4 6222245|(R)00966 4 6230143 ,,Domat Al Jandal Hospital|Al Jouf |Kingdom Of Saudi Arabia ,"MBBS, MS, MCh ( Cardio-Thoracic)",Cardio Thoracic Surgery,,Dr. N. Rajaratnam
210075,FRCS(Edinburgh) FIACS,"SRI RAMAKRISHNA HOSPITAL|CHEST CLINIC,COWLEY BROWN ROAD,R.S.PURAM,CBE-2","MD.,DPPR.,FACP",PULMONOLOGY/ RESPIRATORY MEDICINE,"9-1, 5-8",DR.T.MOHAN KUMAR
+91-422-827784-827790,Member -IAPMR,"Kovai Medical Center & Hospital, Avanashi Road,|Coimbatore-641 014","M.B.B.S., Dip.in. Physical Medicine & Rehabilitation","Neck and Back pain, Joint pain, Amputee Rehabilitation,|Spinal cord Injuries & Stroke",9.00am to 5.00pm (Except Sundays),Dr.Edmund M.D'Couto
+91-422-303352,*********,"206, Puliakulam Road, Coimbatore-641 045","M.B.B.S., M.D., D.V.",Sexually Transonitted Diseases.,5.00pm - 7.00pm,Dr.M.Govindaswamy
...

This code should get you started.

import urllib2
from bs4 import BeautifulSoup

doctors = ['thomas']
for doctor in doctors:
    try:
        page = urllib2.urlopen("http://www.coimbatore.com/doctors/{}.htm".format(doctor))
        soup = BeautifulSoup(page)
    except urllib2.HTTPError:
        continue

    # the details sit in the page's label/value table; attribute values parse as strings
    rows = soup.find("table", cellspacing="0").find_all('tr')

    for row in rows:
        cols = row.find_all('td')
        # first cell is the label, second the value; newlines become spaces
        print "%s: %s" % (cols[0].get_text().replace('\n', ' '), cols[1].get_text().replace('\n', ' '))

It prints one label: value line per table row, starting with the "Name of Doctor" line and continuing through the qualification, phone number and so on.

A couple of things you may want to handle differently. I replaced all the newlines (\n) with spaces, because the source HTML has odd line breaks in it, like this:

<td><b><font face="Arial,Helvetica"><font color="#0000FF"><font size=-1>Name
of Doctor</font></font></font></b></td>

Note how it forces a break in the middle of "Name of Doctor".

If you are trying to produce a CSV from this, the script is easily modified to pull out only the second cell of each row, as in the sketch below.
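
For instance, a minimal CSV-writing variant of the script above might look like this (the doctors.csv file name is arbitrary, and the doctor list is the same placeholder as before):

import csv
import urllib2
from bs4 import BeautifulSoup

doctors = ['thomas']
with open('doctors.csv', 'wb') as out:  # 'wb' mode is what the Python 2 csv module expects
    writer = csv.writer(out)
    for doctor in doctors:
        try:
            page = urllib2.urlopen("http://www.coimbatore.com/doctors/{}.htm".format(doctor))
        except urllib2.HTTPError:
            continue
        soup = BeautifulSoup(page)
        rows = soup.find("table", cellspacing="0").find_all('tr')
        # keep only the second cell of each row, collapsing the stray newlines
        cells = [' '.join(row.find_all('td')[1].get_text().split()).encode('utf-8') for row in rows]
        writer.writerow(cells)  # one CSV row per doctor page

Each doctor page then becomes a single CSV row, in the same spirit as the Scrapy answer's output above.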
