Scraping Google search results with BeautifulSoup

Published 2024-10-01 22:42:29


My goal is to scrape Google search results using BeautifulSoup. I am running Anaconda Python with IPython as my console. Why do I get no output when I run the following code?

import urllib
import urllib2
from bs4 import BeautifulSoup  # with the old BeautifulSoup 3, use: from BeautifulSoup import BeautifulSoup

def google_scrape(query):
    address = "http://www.google.com/search?q=%s&num=100&hl=en&start=0" % (urllib.quote_plus(query))
    request = urllib2.Request(address, None, {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11'})
    urlfile = urllib2.urlopen(request)
    page = urlfile.read()
    soup = BeautifulSoup(page)

    linkdictionary = {}

    for li in soup.findAll('li', attrs={'class':'g'}):
        sLink = li.find('a')
        print sLink['href']
        sSpan = li.find('span', attrs={'class':'st'})
        print sSpan

    return linkdictionary

if __name__ == '__main__':
    links = google_scrape('english')

1 Answer
Answer #1 · Posted 2024-10-01 22:42:29

You never add anything to linkdictionary:

import urllib
import urllib2
from bs4 import BeautifulSoup

def google_scrape(query):
    address = "http://www.google.com/search?q=%s&num=100&hl=en&start=0" % (urllib.quote_plus(query))
    request = urllib2.Request(address, None, {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11'})
    urlfile = urllib2.urlopen(request)
    page = urlfile.read()
    soup = BeautifulSoup(page)

    linkdictionary = {}

    for li in soup.findAll('li', attrs={'class': 'g'}):
        sLink = li.find('a')
        sSpan = li.find('span', attrs={'class': 'st'})

        # Key each entry by its URL; assigning to a fixed key such as
        # linkdictionary['href'] would overwrite the previous result on
        # every iteration, leaving only the last one.
        linkdictionary[sLink['href']] = sSpan

    return linkdictionary

if __name__ == '__main__':
    links = google_scrape('english')
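The accumulation fix can be checked without hitting Google at all, by parsing a static snippet that mimics the result markup from the question. This is a minimal Python 3 sketch assuming bs4 is installed; the `g` and `st` CSS classes are taken from the question's code, and Google's live markup has long since changed, so treat the classes as illustrative only:

```python
from bs4 import BeautifulSoup

# Static HTML mimicking the old result markup used in the question.
html = """
<ol>
  <li class="g"><a href="http://example.com/a">A</a>
      <span class="st">snippet A</span></li>
  <li class="g"><a href="http://example.com/b">B</a>
      <span class="st">snippet B</span></li>
</ol>
"""

def collect_links(page):
    soup = BeautifulSoup(page, 'html.parser')
    linkdictionary = {}
    for li in soup.find_all('li', attrs={'class': 'g'}):
        sLink = li.find('a')
        sSpan = li.find('span', attrs={'class': 'st'})
        # Key each result by its URL so nothing is overwritten.
        linkdictionary[sLink['href']] = sSpan.get_text()
    return linkdictionary

print(collect_links(html))
# {'http://example.com/a': 'snippet A', 'http://example.com/b': 'snippet B'}
```

Because the dictionary is keyed by URL, both results survive the loop, which is exactly what the original fixed-key assignment failed to do.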
