Scrapy: error 10054 after retrying image download


I'm running a Scrapy spider in Python to scrape images from a website. One of the images fails to download (even when I try to fetch it normally through the site) because of an internal error on the server. That's fine; I don't care about getting that particular image, I just want to skip it and move on to the others when it fails, but I keep getting the 10054 error below.

Traceback (most recent call last):
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 588, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "C:\Python27\Scripts\nhtsa\nhtsa\spiders\NHTSA_spider.py", line 137, in parse_photo_page
    self.retrievePhoto(base_url_photo + url[0], url_text)
  File "C:\Python27\Scripts\nhtsa\nhtsa\retrying.py", line 49, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "C:\Python27\Scripts\nhtsa\nhtsa\retrying.py", line 212, in call
    raise attempt.get()
  File "C:\Python27\Scripts\nhtsa\nhtsa\retrying.py", line 247, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "C:\Python27\Scripts\nhtsa\nhtsa\retrying.py", line 200, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "C:\Python27\Scripts\nhtsa\nhtsa\spiders\NHTSA_spider.py", line 216, in retrievePhoto
    code.write(f.read())
  File "c:\python27\lib\socket.py", line 355, in read
    data = self._sock.recv(rbufsize)
  File "c:\python27\lib\httplib.py", line 612, in read
    s = self.fp.read(amt)
  File "c:\python27\lib\socket.py", line 384, in read
    data = self._sock.recv(left)
error: [Errno 10054] An existing connection was forcibly closed by the remote

My parse function walks the photo page and extracts the relevant URLs, then hands each one to my download function, shown below with its retry decorator:

from retrying import retry

@retry(stop_max_attempt_number=5, wait_fixed=2000)
def retrievePhoto(self, url, filename):
    fullPath = self.saveLocation + "/" + filename
    urllib.urlretrieve(url, fullPath)

It retries the download five times, but then throws the 10054 error and does not move on to the next image. How can I get the spider to continue after the retries are exhausted? Again, I don't care about downloading the problem image, I just want to skip it.


1 Answer

You're right that you shouldn't use urllib inside Scrapy, because it blocks everything; try reading up on "scrapy twisted" and "scrapy asynchronous". Anyway... I don't think your main problem is "continuing after retries" but rather that you're not using a relative XPath in your expression. Here is a version that works for me (note the './' in './td/font/a/@href'):

import os
import string
import urllib

import scrapy
from retrying import retry


class MyspiderSpider(scrapy.Spider):
    name = "myspider"
    start_urls = (
        'file:index.html',
    )

    saveLocation = os.getcwd()

    def parse(self, response):
        for sel in response.xpath('//table[@id="tblData"]/tr'):
            # the leading './' makes these XPaths relative to the current <tr>
            url = sel.xpath('./td/font/a/@href').extract()
            table_fields = sel.xpath('./td/font/text()').extract()
            if url:
                base_url_photo = "http://www-nrd.nhtsa.dot.gov/"
                # the 4th text node carries the target filename; strip the junk
                url_text = table_fields[3]
                url_text = string.replace(url_text, "&nbsp", "")
                url_text = string.replace(url_text, " ", "")
                self.retrievePhoto(base_url_photo + url[0], url_text)

    @retry(stop_max_attempt_number=5, wait_fixed=2000)
    def retrievePhoto(self, url, filename):
        fullPath = os.path.join(self.saveLocation, filename)
        urllib.urlretrieve(url, fullPath)
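A side note on the "continue after retry" part of your question: once its five attempts are exhausted, the retrying decorator re-raises the last exception (that's the raise attempt.get() frame in your traceback), so the call in parse has to catch it if you want to skip a permanently broken image. A minimal sketch of that change (on Python 2, socket.error, and therefore the 10054, is a subclass of IOError):

# inside parse(), replacing the direct call:
try:
    self.retrievePhoto(base_url_photo + url[0], url_text)
except IOError:
    # all five attempts failed; note it and move on to the next row
    self.log("giving up on %s" % url[0])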

And here is a (better) version that follows your pattern but uses the ImagesPipeline that @paultrmbth suggested.

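A minimal sketch of that approach, assuming a reasonably recent Scrapy (the PhotoItem class and the custom_settings block are illustrative choices of mine; on older Scrapy releases the pipeline path is scrapy.contrib.pipeline.images.ImagesPipeline instead of scrapy.pipelines.images.ImagesPipeline, and the pipeline needs Pillow installed):

import os

import scrapy


class PhotoItem(scrapy.Item):
    # `image_urls` and `images` are the field names the stock
    # ImagesPipeline looks for on each item
    image_urls = scrapy.Field()
    images = scrapy.Field()


class MyspiderSpider(scrapy.Spider):
    name = "myspider"
    start_urls = (
        'file:index.html',
    )

    # enable the stock images pipeline just for this spider;
    # IMAGES_STORE is the directory downloaded files end up in
    custom_settings = {
        'ITEM_PIPELINES': {'scrapy.pipelines.images.ImagesPipeline': 1},
        'IMAGES_STORE': os.getcwd(),
    }

    def parse(self, response):
        base_url_photo = "http://www-nrd.nhtsa.dot.gov/"
        for sel in response.xpath('//table[@id="tblData"]/tr'):
            url = sel.xpath('./td/font/a/@href').extract()
            if url:
                # the pipeline downloads everything in image_urls
                # asynchronously and records the results in `images`
                yield PhotoItem(image_urls=[base_url_photo + url[0]])

The win over urlretrieve is that downloads go through Scrapy's own asynchronous downloader, and an image that permanently fails is simply logged and left out of the item's `images` field while the crawl keeps going, which is exactly the "skip it and move on" behaviour you asked for. The stock pipeline names files by a hash of the URL, so to keep your url_text naming you would have to subclass ImagesPipeline and override its file-naming hook (file_path in current releases).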

The demo file I used is:

$ cat index.html 
<table id="tblData"><tr>
<td><font>hi <a href="img/2015/cav.jpg"> foo </a> <span /> <span /> green.jpg     </font></td>
</tr><tr>
<td><font>hi <a href="img/2015/caw.jpg"> foo </a> <span /> <span /> blue.jpg     </font></td>
</tr></table>
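With that file in the working directory, either spider above can be exercised locally; assuming it's saved as myspider.py (the filename is arbitrary):

$ scrapy runspider myspider.py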
