Scrapy crawler on Heroku returns 503 Service Unavailable


I have a Scrapy crawler that scrapes data from a website and uploads the scraped data to a remote MongoDB server. I want to host it on Heroku so it can keep scraping automatically over a long period. I use scrapy-user-agents to rotate between different user agents. When I run scrapy crawl <spider> locally on my PC, the spider runs correctly and delivers the data to the MongoDB database.
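For reference, the upload happens in a Scrapy item pipeline roughly like the following (a simplified sketch using pymongo; the actual IndkanoonPipeline code is not shown here, and the collection name is a placeholder):

    # pipelines.py -- simplified sketch of the MongoDB upload pipeline
    import pymongo

    class IndkanoonPipeline:
        def __init__(self, mongo_uri, mongo_db):
            self.mongo_uri = mongo_uri
            self.mongo_db = mongo_db

        @classmethod
        def from_crawler(cls, crawler):
            # Read the connection settings defined in settings.py
            return cls(
                mongo_uri=crawler.settings.get('MONGO_URI'),
                mongo_db=crawler.settings.get('MONGO_DATABASE'),
            )

        def open_spider(self, spider):
            self.client = pymongo.MongoClient(self.mongo_uri)
            self.db = self.client[self.mongo_db]

        def close_spider(self, spider):
            self.client.close()

        def process_item(self, item, spider):
            # 'cases' collection name is a placeholder for illustration
            self.db['cases'].insert_one(dict(item))
            return item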

However, when I deploy the project to Heroku, I get the following lines in the Heroku logs:

2020-12-22T12:50:21.132731+00:00 app[web.1]: 2020-12-22 12:50:21 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://indiankanoon.org/browse/> (failed 1 times): 503 Service Unavailable

2020-12-22T12:50:21.134186+00:00 app[web.1]: 2020-12-22 12:50:21 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36

(it keeps failing in the same way, 9 times in total, until:)

2020-12-22T12:50:23.594655+00:00 app[web.1]: 2020-12-22 12:50:23 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://indiankanoon.org/browse/> (failed 9 times): 503 Service Unavailable

2020-12-22T12:50:23.599310+00:00 app[web.1]: 2020-12-22 12:50:23 [scrapy.core.engine] DEBUG: Crawled (503) <GET https://indiankanoon.org/browse/> (referer: None)

2020-12-22T12:50:23.701386+00:00 app[web.1]: 2020-12-22 12:50:23 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <503 https://indiankanoon.org/browse/>: HTTP status code is not handled or not allowed

2020-12-22T12:50:23.714834+00:00 app[web.1]: 2020-12-22 12:50:23 [scrapy.core.engine] INFO: Closing spider (finished)

In short, my local IP address is able to scrape the data, but when Heroku tries, it cannot. Can changing something in the settings.py file fix this?

My settings.py file:

    BOT_NAME = 'indKanoon'

    SPIDER_MODULES = ['indKanoon.spiders']
    NEWSPIDER_MODULE = 'indKanoon.spiders'

    # Remote MongoDB connection (URI left blank here)
    MONGO_URI = ''
    MONGO_DATABASE = 'casecounts'

    ROBOTSTXT_OBEY = False
    CONCURRENT_REQUESTS = 32
    DOWNLOAD_DELAY = 3
    COOKIES_ENABLED = False

    # Rotate user agents with scrapy-user-agents
    DOWNLOADER_MIDDLEWARES = {
        'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
        'scrapy_user_agents.middlewares.RandomUserAgentMiddleware': 400,
    }

    ITEM_PIPELINES = {
        'indKanoon.pipelines.IndkanoonPipeline': 300,
    }

    # Retry failed requests up to 8 times
    RETRY_ENABLED = True
    RETRY_TIMES = 8
    RETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408]

1 Answer

This is most likely caused by DDoS protection or IP blacklisting on the server you are trying to scrape.

To get around this, you can use proxies.

I would recommend a middleware such as scrapy-proxies. With it you can rotate proxies, filter out bad ones, or use a single proxy for all your requests. It also saves you the trouble of setting a proxy on every request.

The following is taken directly from the developers' README on GitHub (GitHub link):

Install the scrapy-proxies library:

pip install scrapy_proxies

Add the following settings to your settings.py:

# Retry many times since proxies often fail
RETRY_TIMES = 10
# Retry on most error codes since proxies fail for different reasons
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    'scrapy_proxies.RandomProxy': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}

# Proxy list containing entries like
# http://host1:port
# http://username:password@host2:port
# http://host3:port
# ...
PROXY_LIST = '/path/to/proxy/list.txt'

# Proxy mode
# 0 = Every request has a different proxy
# 1 = Take only one proxy from the list and assign it to every request
# 2 = Put a custom proxy to use in the settings
PROXY_MODE = 0

# If proxy mode is 2, uncomment this line:
#CUSTOM_PROXY = "http://host1:port"

Here you can change the retry settings and choose between a single proxy and rotating proxies.

Then add your proxies to the list.txt file like this:

http://host1:port
http://username:password@host2:port
http://host3:port

With this, all your requests will be sent through a proxy that is rotated randomly for each request, so concurrency is not affected.
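If you just want to test whether a proxy fixes the 503 before adding a middleware, Scrapy's built-in HttpProxyMiddleware also honors a proxy set per request via request.meta (a minimal sketch; the proxy URL is a placeholder):

    # Minimal sketch: per-request proxy without any extra middleware.
    # Scrapy's built-in HttpProxyMiddleware picks up request.meta['proxy'].
    import scrapy

    class BrowseSpider(scrapy.Spider):
        name = 'browse'

        def start_requests(self):
            yield scrapy.Request(
                'https://indiankanoon.org/browse/',
                # Placeholder proxy URL -- replace with a working proxy
                meta={'proxy': 'http://host1:port'},
                callback=self.parse,
            )

        def parse(self, response):
            self.logger.info('Got %s from %s', response.status, response.url)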

Other similar middlewares are also available, for example (a minimal setup for the first one is sketched after the list):

scrapy-rotating-proxies

scrapy-proxies-tool
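For comparison, a scrapy-rotating-proxies setup would look roughly like this (a sketch following that project's README; the proxy URLs are placeholders):

    # settings.py -- sketch of an equivalent setup with scrapy-rotating-proxies
    # (install with: pip install scrapy-rotating-proxies)
    ROTATING_PROXY_LIST = [
        'http://host1:port',
        'http://host2:port',
    ]

    DOWNLOADER_MIDDLEWARES = {
        'rotating_proxies.middlewares.RotatingProxyMiddleware': 610,
        'rotating_proxies.middlewares.BanDetectionMiddleware': 620,
    }

It detects bans and temporarily removes dead proxies from rotation, which is handy when the target site returns 503 for blacklisted IPs.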
