Foreach loop to get next-page links with BeautifulSoup/Mechanize/Python

Posted 2024-09-30 06:26:48


I have a view:

import re

import mechanize
from bs4 import BeautifulSoup
from django.conf import settings
from django.shortcuts import render

mechanizeBrowser = mechanize.Browser()


def Processinitialscan(request):
    EnteredDomain = request.GET.get('domainNm')

    # Build the Google query URL and fetch the raw result page
    getDomainLinksFromGoo = settings.GOOGLE_BASEURL_FOR_HARVEST + settings.GOO_RESULT_DOMAIN_QUERIED + EnteredDomain
    rawGatheredGooOutput = mechanizeBrowser.open(getDomainLinksFromGoo)

    beautifulSoupObj = BeautifulSoup(mechanizeBrowser.response().read())  # parse the raw response
    getFirstPageLinks = beautifulSoupObj.find_all('cite')  # result URLs on the first page

    pattern = re.compile('^.*start=')  # match hrefs containing "start=", i.e. the pagination ("next") links
    getRemainingPageUrls = beautifulSoupObj.find_all('a', attrs={'class': 'fl', 'href': pattern})

    NumberOfUrlsFound = len(getRemainingPageUrls)

    # Each "next" link represents 10 more results; the setting accounts for the results on page 1
    MaxUrlsToGather = (NumberOfUrlsFound * 10) + settings.GOOGLE_RESULT_AMT_ACCOUNT_FOR_PAGE_1

    url_data = UrlData(NumberOfUrlsFound, pattern)
    #return HttpResponse(MaxUrlsToGather)

    return render(request, 'VA/scan/process_scan.html', {
        'url_data': url_data, 'EnteredDomain': EnteredDomain, 'getDomainLinksFromGoo': getDomainLinksFromGoo,
        'getRemainingPageUrls': getRemainingPageUrls, 'NumberOfUrlsFound': NumberOfUrlsFound,
        'getFirstPageLinks': getFirstPageLinks, 'MaxUrlsToGather': MaxUrlsToGather
    })

The template:

^{pr2}$

This template outputs:

url used: https://www.google.com/search?q=site%3Aasite.com


first page of links [<cite>www.google.com/webmasters/</cite>, <cite>www.asite.com</cite>, <cite>www.asite.com/blog/</cite>, <cite>www.asite.com/blog/projects/</cite>, <cite>www.asite.com/blog/category/internet/</cite>, <cite>www.asite.com/blog/category/goals/</cite>, <cite>www.asite.com/blog/category/uncategorized/</cite>, <cite>www.asite.com/blog/why-i-left-facebook/2013/01/</cite>, <cite>www.asite.com/blog/category/startups-2/</cite>, <cite>www.asite.com/blog/category/goals/</cite>, <cite>www.asite.com/blog/category/internet/</cite>]


number of "next" links 2

My question is: how can I use NumberOfUrlsFound in a loop inside the template to generate links such as /search?q=site:entereddomain.com&start=10 and /search?q=site:entereddomain.com&start=20, and then follow those links with BeautifulSoup according to the value of NumberOfUrlsFound? So if NumberOfUrlsFound = 2, the URLs search?q=site:asite.com&start=10 and search?q=site:asite.com&start=20 should be generated, and:

(pseudocode…):

if NumberOfUrlsFound > 1:
    for n in range(1, NumberOfUrlsFound + 1):
        # generate a url with start = n * 10
        #   e.g. asite.com/search?q=site:asite.com&start=10
        #   then asite.com/search?q=site:asite.com&start=20
        #   and so on ...
        # NumberOfUrlsFound caps the sequence: a value of 2, for example,
        # means start=20 is the largest value to generate urls for
        pass  # build the url here, then follow it
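
For reference, here is a minimal sketch of what that loop could look like done server-side (in the view rather than in the template), fetching each paginated result page with mechanize and parsing it with BeautifulSoup. The helper name collect_paginated_cites, and the assumption that the first-page query URL (getDomainLinksFromGoo) plus "&start=N" is the correct pagination URL, are illustrative and not taken from the original code.

import mechanize
from bs4 import BeautifulSoup

def collect_paginated_cites(base_query_url, number_of_next_links):
    # Hypothetical helper: base_query_url would be the first-page query url
    # (e.g. getDomainLinksFromGoo) and number_of_next_links corresponds to
    # NumberOfUrlsFound from the view.
    browser = mechanize.Browser()
    all_cites = []
    for n in range(1, number_of_next_links + 1):
        # Each "next" page is the same query with start=10, start=20, ...
        page_url = base_query_url + '&start=' + str(n * 10)
        response = browser.open(page_url)
        soup = BeautifulSoup(response.read())
        # Collect the <cite> elements, mirroring the first-page scrape
        all_cites.extend(cite.get_text() for cite in soup.find_all('cite'))
    return all_cites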

1 Answer

You could create a data object that represents the data you want to display in the template.

class UrlData(object):
    def __init__(self, num_of_urls, url_pattern):
        # num_of_urls: how many "next" links were found (NumberOfUrlsFound)
        # url_pattern: the base url string that 'start=...' gets appended to
        self.num_of_urls = num_of_urls
        self.url_pattern = url_pattern

    def url_list(self):
        # Returns a list of url strings based on num_of_urls,
        # e.g. asite.com/?search?start=10
        urls = []
        for i in range(self.num_of_urls):
            urls.append(self.url_pattern + 'start=' + str((i + 1) * 10))
        return urls

In your views.py:

^{pr2}$

Just create the object with the data in your view function and pass the object to the template.
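
For example, a minimal sketch of what that could look like inside Processinitialscan, assuming UrlData is given the query url string (built from getDomainLinksFromGoo) rather than the compiled regex pattern used earlier in the view:

# Assumed usage: pass a base url string so that UrlData.url_list can
# simply append 'start=...' to it.
base_url = getDomainLinksFromGoo + '&'
url_data = UrlData(NumberOfUrlsFound, base_url)

return render(request, 'VA/scan/process_scan.html', {
    'url_data': url_data,
    # ... plus the other context variables from the original view
})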

In the template you can then do something like this:

{# Mirroring your check #}
{% if url_data.num_of_urls > 1 %}
    {# Iterate over the url_list built by the method defined on UrlData #}
    {% for url in url_data.url_list %}
         {{ url }} {# e.g. asite.com/?search... #}
    {% endfor %}
{% endif %}

When you reference url_data.url_list in the template, Django calls the url_list method on UrlData for you, because template variable resolution automatically invokes callables that take no arguments.
