Can't get past a form with Scrapy


I'm just getting started with Scrapy, and I want to scrape some information from a real-estate website. The site's home page has a search form that uses the GET method. In start_requests I try to go straight to the results page (recherche.php), passing every GET parameter visible in the address bar through the formdata argument. I also set the cookies from my browser session, but that doesn't work either.

Here is my spider:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import FormRequest, Request

from robots_immo.items import AnnonceItem

class ElyseAvenueSpider(BaseSpider):
    name = "elyse_avenue"
    allowed_domains = ["http://www.elyseavenue.com/"]

    def start_requests(self):
        return [FormRequest(url="http://www.elyseavenue.com/recherche.php",
                            formdata={'recherche':'recherche',
                                      'compteurLigne':'2',
                                      'numLigneCourante':'0',
                                      'inseeVille_0':'',
                                      'num_rubrique':'',
                                      'rechercheOK':'recherche',
                                      'recherche_budget_max':'',
                                      'recherche_budget_min':'',
                                      'recherche_surface_max':'',
                                      'recherche_surface_min':'',
                                      'recherche_distance_km_0':'20',
                                      'recherche_reference_bien':'',
                                      'recherche_type_logement':'9',
                                      'recherche_ville_0':''
                                     },
                            cookies={'PHPSESSID':'4e1d729f68d3163bb110ad3e4cb8ffc3',
                                     '__utma':'150766562.159027263.1340725224.1340725224.1340727680.2',
                                     '__utmc':'150766562',
                                     '__utmz':'150766562.1340725224.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)',
                                     '__utmb':'150766562.14.10.1340727680'
                                    },
                            callback=self.parseAnnonces
                           )]

    def parseAnnonces(self, response):
        hxs = HtmlXPathSelector(response)
        annonces = hxs.select('//div[@id="contenuCentre"]/div[@class="blocVignetteBien"]')
        items = []
        for annonce in annonces:
            item = AnnonceItem()
            item['nom'] = annonce.select('span[contains(@class,"nomBienImmo")]/a/text()').extract()
            item['superficie'] = annonce.select('table//tr[2]/td[2]/span/text()').extract()
            item['prix'] = annonce.select('span[@class="prixVignette"]/span[1]/text()').extract()
            items.append(item)
        return items


SPIDER = ElyseAvenueSpider()
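
(AnnonceItem comes from robots_immo.items, which isn't shown here; it is presumably just an Item declaring the three fields the spider fills, roughly:)

from scrapy.item import Item, Field

class AnnonceItem(Item):
    # one Field per value extracted in parseAnnonces
    nom = Field()         # listing name
    superficie = Field()  # surface area
    prix = Field()        # price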

When I run the spider there are no errors, but the page it loads is not the right one (it says "Please specify your search" and I get no results…):

2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Spider opened
2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-06-26 20:04:54+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-06-26 20:04:54+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-06-26 20:04:54+0200 [elyse_avenue] DEBUG: Crawled (200) <POST http://www.elyseavenue.com/recherche.php> (referer: None)
2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Closing spider (finished)
2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Dumping spider stats:
    {'downloader/request_bytes': 808,
     'downloader/request_count': 1,
     'downloader/request_method_count/POST': 1,
     'downloader/response_bytes': 7590,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2012, 6, 26, 18, 4, 54, 924624),
     'scheduler/memory_enqueued': 1,
     'start_time': datetime.datetime(2012, 6, 26, 18, 4, 54, 559230)}
2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Spider closed (finished)
2012-06-26 20:04:54+0200 [scrapy] INFO: Dumping global stats:
    {'memusage/max': 27410432, 'memusage/startup': 27410432}

Thanks for your help!


2 Answers

I would use FormRequest.from_response(), which does all of this work for you; building the request by hand, you may still be missing some fields:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import FormRequest, Request

from robots_immo.items import AnnonceItem

class ElyseAvenueSpider(BaseSpider):

    name = "elyse_avenue"
    allowed_domains = ["elyseavenue.com"]  # fixed: just the domain, no URL scheme
    start_urls = ["http://www.elyseavenue.com/"]  # added: fetch the page that contains the search form

    def parse(self, response):
        yield FormRequest.from_response(response,
                                        formname='moteurRecherche',
                                        formdata={'recherche_distance_km_0':'20',
                                                  'recherche_type_logement':'9'},
                                        callback=self.parseAnnonces)

    def parseAnnonces(self, response):
        hxs = HtmlXPathSelector(response)
        annonces = hxs.select('//div[@id="contenuCentre"]/div[@class="blocVignetteBien"]')
        items = []
        for annonce in annonces:
            item = AnnonceItem()
            item['nom'] = annonce.select('span[contains(@class,"nomBienImmo")]/a/text()').extract()
            item['superficie'] = annonce.select('table//tr[2]/td[2]/span/text()').extract()
            item['prix'] = annonce.select('span[@class="prixVignette"]/span[1]/text()').extract()
            items.append(item)
        return items
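
Note that FormRequest.from_response() parses the <form> element out of the downloaded page, so it picks up the form's action URL, its method (GET here), and all hidden or pre-filled inputs automatically; the formdata argument only overrides the fields you actually want to change.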

Your log output shows that the spider made a POST request to http://www.elyseavenue.com/recherche.php, yet you say the form uses the GET method.

A POST request to that URL is exactly what produces the "Merci de préciser votre recherche." ("Please specify your search") message, as a quick curl check confirms:

➜ curl -d "" http://www.elyseavenue.com/recherche.php | grep "Merci de préciser votre recherche."
% Total    % Received % Xferd  Average Speed   Time    Time     Time   Dload  Upload   Total   Spent    Left  Speed
100 37494    0 37494    0     0  54582      0 --:--:-- --:--:-- --:--:-- 60866
    <span class="Nbannonces">Merci de préciser votre recherche.</span>

FormRequest is a subclass of Request that lets you specify the HTTP method. Since FormRequest defaults to POST when formdata is given, you need to request GET explicitly:

FormRequest(url="http://www.elyseavenue.com/recherche.php",
            formdata={'recherche':'recherche',
                      'compteurLigne':'2',
                      'numLigneCourante':'0',
                      'inseeVille_0':'',
                      'num_rubrique':'',
                      'rechercheOK':'recherche',
                      'recherche_budget_max':'',
                      'recherche_budget_min':'',
                      'recherche_surface_max':'',
                      'recherche_surface_min':'',
                      'recherche_distance_km_0':'20',
                      'recherche_reference_bien':'',
                      'recherche_type_logement':'9',
                      'recherche_ville_0':''},
            cookies={'PHPSESSID':'4e1d729f68d3163bb110ad3e4cb8ffc3',
                     '__utma':'150766562.159027263.1340725224.1340725224.1340727680.2',
                     '__utmc':'150766562',
                     '__utmz':'150766562.1340725224.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)',
                     '__utmb':'150766562.14.10.1340727680'},
            callback=self.parseAnnonces,
            method="GET")
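
As a side note, since a GET form submission is just the field values urlencoded into the query string, you can build an equivalent plain Request by hand. A minimal sketch, assuming the same Python 2-era Scrapy as above and showing only a subset of the fields:

from urllib import urlencode  # Python 2 stdlib, matching the Scrapy version used here

from scrapy.http import Request

# Encode the search fields into the query string by hand; this is
# functionally equivalent to FormRequest(..., method="GET").
params = {'recherche':'recherche',
          'recherche_distance_km_0':'20',
          'recherche_type_logement':'9'}  # add the remaining (empty) fields as needed
url = "http://www.elyseavenue.com/recherche.php?" + urlencode(params)
request = Request(url, callback=self.parseAnnonces)  # use inside the spider class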
