Why are these proxy servers getting 400 Bad Request errors?

Posted 2024-09-28 01:30:11


So I'm fairly new to networking and to using proxy servers. I have a scraper that scrapes certain websites, but I realized I need to rotate my IP address and so on so that I don't get kicked off those sites. I found the following program on GitHub that I'd like to use:

https://github.com/aivarsk/scrapy-proxies

I've set everything up as follows:

My spider:

# This package will contain the spiders of your Scrapy project
#
# Please refer to the documentation for information on how to create and manage
# your spiders.
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from backpage_scrape import items
#from toolz import first
#import ipdb
#from lxml import html
from datetime import datetime, timedelta
import os
import time  # needed for time.sleep() in parse() below

HOME = os.environ['HOMEPATH']
os.chdir(HOME + "/Desktop/GitHub/Rover/backpage_scrape/backpage_scrape/spiders/")

# Method that gets today's date
def backpage_date_today():
    now = datetime.utcnow() - timedelta(hours=4)
    weekdays = ['Mon. ','Tue. ','Wed. ','Thu. ','Fri. ','Sat. ','Sun. ']
    months = ['Jan. ','Feb. ','Mar. ','Apr. ','May. ', 'Jun. ','Jul. ','Aug. ','Sep. ','Oct. ','Nov. ','Dec. ']
    backpage_date = weekdays[now.weekday()] + months[now.month-1] + str(now.day)
    return backpage_date

# Method that gets yesterday's date
def backpage_date_yesterday():
    now = datetime.utcnow() - timedelta(days=1, hours=4)
    weekdays = ['Mon. ','Tue. ','Wed. ','Thu. ','Fri. ','Sat. ','Sun. ']
    months = ['Jan. ','Feb. ','Mar. ','Apr. ','May. ', 'Jun. ','Jul. ','Aug. ','Sep. ','Oct. ','Nov. ','Dec. ']
    backpage_date = weekdays[now.weekday()] + months[now.month-1] + str(now.day)
    return backpage_date

# Open file which contains input urls
with open("test_urls.txt","rU") as infile:
    urls = [row.strip("\n") for row in infile]

class BackpageSpider(CrawlSpider):
    name = 'backpage'
    allowed_domains = ['backpage.com']
    start_urls = urls

    def parse(self, response):

        if response.status < 600:

            todays_links = []

            backpage_date = backpage_date_today()
            yesterday_date = backpage_date_yesterday()

            if backpage_date in response.body:
                # Get all URLs to iterate through
                todays_links = response.xpath("//div[@class='date'][1]/following-sibling::div[@class='date'][1]/preceding-sibling::div[preceding-sibling::div[@class='date']][contains(@class, 'cat')]/a/@href").extract()

            # timeOut = 0
            for url in todays_links:
                # Iterate through pages and scrape
                # if timeOut == 10:
                #   time.sleep(600)
                #   timeOut = 0
                # else:
                #   timeOut += 1

                yield scrapy.Request(url, callback=self.parse_ad_into_content)

            for url in set(response.xpath('//a[@class="pagination next"]/@href').extract()):
                yield scrapy.Request(url, callback=self.parse)

        else:
            time.sleep(600)
            yield scrapy.Request(response.url, callback=self.parse)

    # Parse page
    def parse_ad_into_content(self, response):
        item = items.BackpageScrapeItem(url=response.url,
            backpage_id=response.url.split('.')[0].split('/')[2].encode('utf-8'),
            text=response.body,
            posting_body=response.xpath("//div[@class='postingBody']").extract()[0].encode('utf-8'),
            date=datetime.utcnow() - timedelta(hours=5),
            posted_date=response.xpath("//div[@class='adInfo']/text()").extract()[0].encode('utf-8'),
            posted_age=response.xpath("//p[@class='metaInfoDisplay']/text()").extract()[0].encode('utf-8'),
            posted_title=response.xpath("//div[@id='postingTitle']//h1/text()").extract()[0].encode('utf-8')
        )
        return item

Part of my settings.py:

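It's essentially the configuration from the scrapy-proxies README, roughly the following (the PROXY_LIST path below is a placeholder, and the dotted path to RandomProxy has to match wherever randomproxy.py actually lives in the project):

# Retry many times since proxies often fail
RETRY_TIMES = 10
# Retry on these error codes since proxies fail for different reasons
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]

DOWNLOADER_MIDDLEWARES = {
    # Built-in retry middleware runs first, then the random proxy middleware
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    # Adjust this dotted path to the real location of randomproxy.py
    'backpage_scrape.randomproxy.RandomProxy': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}

# Path to the proxy list file (placeholder; mine points at the proxy.txt below)
PROXY_LIST = '/path/to/proxy/list.txt'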

My randomproxy.py is exactly the same as the one at the GitHub link.

And my proxy.txt:

https://6.hidemyass.com/ip-4
https://5.hidemyass.com/ip-1
https://4.hidemyass.com/ip-1
https://4.hidemyass.com/ip-2
https://4.hidemyass.com/ip-3
https://3.hidemyass.com/ip-1
https://3.hidemyass.com/ip-2
https://3.hidemyass.com/ip-3
https://2.hidemyass.com/ip-1
https://2.hidemyass.com/ip-2
https://2.hidemyass.com/ip-3
https://1.hidemyass.com/ip-1
https://1.hidemyass.com/ip-2
https://1.hidemyass.com/ip-3
https://1.hidemyass.com/ip-4
https://1.hidemyass.com/ip-5
https://1.hidemyass.com/ip-6
https://1.hidemyass.com/ip-7
https://1.hidemyass.com/ip-8

So, if you look at the top of the GitHub README, you'll see it says to "copy-paste into text file and reformat to http://host:port format." I'm not sure how to do that, or whether my list is already in that format.
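From what I can tell, that means each line should be an actual proxy endpoint, not a link to a hidemyass.com page, so something like the following (these addresses are made up, just to show the shape; the README also allows a http://user:password@host:port form):

http://203.0.113.10:8080
http://198.51.100.22:3128
http://user:password@203.0.113.44:1080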

Like I said, the errors I'm getting are 400 Bad Request. I'm not sure whether it helps, but the console shows:

Retrying <GET http://sf.backpage.com/restOfURL> <failed 10 times>: 400 Bad Request

Should it be showing the proxy in that URL, in place of the "sf.backpage.com" part?

Thanks a lot for your time... I really appreciate any help.

Edit: Also, I'm not sure where/how to insert the code snippet at the bottom of the GitHub README. Any advice on that would also be helpful.
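If it's the snippet that logs which proxy was actually used, my understanding is that it just goes at the top of each callback, something along these lines (a hypothetical sketch: response.request.meta['proxy'] is where Scrapy's HttpProxyMiddleware records the proxy, and the exact logging call may differ by Scrapy version):

def parse(self, response):
    # Log which proxy (if any) this response actually went through.
    # HttpProxyMiddleware stores the proxy URL in request.meta['proxy'].
    self.logger.info('Proxy used: %s', response.request.meta.get('proxy'))
    # ... rest of the callback ...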

