(1) Scraping Google search result links with urllib2 + BeautifulSoup
Recently, a project I was working on needed to process Google search results. I had previously studied Python tools for handling web pages, and in the actual application I used urllib2 and BeautifulSoup to fetch pages. When scraping Google search results, however, I found that processing the raw HTML of the results page directly yields a lot of "dirty" links.
The figure below shows the results of searching for "titanic james":
In the figure, the links marked in red are unwanted; the ones marked in blue are what we actually want to fetch and process.
These "dirty" links can of course be removed with rule-based filtering, but that drives up the program's complexity. Just as I was glumly writing filter rules, a classmate reminded me that Google probably provides an API for this, and it suddenly all made sense.
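For reference, here is a minimal sketch of the direct-scraping approach described above. The User-agent header and the idea of walking every anchor tag are my assumptions for illustration; the original approach was along these lines but this exact code is not from the post:

import urllib2, urllib
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3, the usual pairing with urllib2

query = urllib.quote('titanic james')
request = urllib2.Request('http://www.google.com/search?q=' + query,
                          None, {'User-agent': 'Mozilla/5.0'})
html = urllib2.urlopen(request).read()

soup = BeautifulSoup(html)
for a in soup.findAll('a'):
    href = a.get('href')
    # The raw page mixes real result links with navigation, ads and
    # redirect wrappers -- the "dirty" links -- so every href printed
    # here would need rule-based filtering.
    if href:
        print href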
(2) Google Web Search API + multithreading
The documentation gives an example of performing a search in Python:
import urllib2
import simplejson

# The request also includes the userip parameter which provides the end
# user's IP address. Doing so will help distinguish this legitimate
# server-side traffic from traffic which doesn't come from an end-user.
url = ('https://ajax.googleapis.com/ajax/services/search/web'
       '?v=1.0&q=Paris%20Hilton&userip=USERS-IP-ADDRESS')

request = urllib2.Request(
    url, None, {'Referer': 'YOUR-SITE-URL'})  # enter the URL of your site here
response = urllib2.urlopen(request)

# Process the JSON string.
results = simplejson.load(response)
# now have some fun with the results...
In practice you may need to fetch many Google pages, so multithreading is used to spread out the fetching work. For a detailed reference on using the Google Web Search API, see here (the page covers the Standard URL Arguments). Note in particular that the rsz parameter in the URL must be a value of 8 or below; anything greater than 8 causes an error!
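As a quick illustration of one such request, here is a sketch that also checks the API's JSON status field. Treating responseStatus/responseDetails as the error-reporting mechanism is my reading of the API's JSON format, so take that part as an assumption:

import urllib, urllib2
import simplejson

def search_page(keywords, rnum_perpage=8, start=0):
    # rnum_perpage maps to the rsz argument and must stay at 8 or below
    url = ('https://ajax.googleapis.com/ajax/services/search/web'
           '?v=1.0&q=%s&rsz=%s&start=%s') % (urllib.quote(keywords), rnum_perpage, start)
    request = urllib2.Request(url, None, {'Referer': 'http://www.sina.com'})
    results = simplejson.load(urllib2.urlopen(request))
    if results['responseStatus'] != 200:
        # e.g. an out-of-range rsz comes back as an error, not result data
        raise ValueError(results.get('responseDetails'))
    return [r['url'] for r in results['responseData']['results']]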
(3) Code implementation
The implementation below still has issues, but it runs; its robustness is poor and it needs further improvement. I'm new to Python, so I'd be very grateful if readers could point out mistakes.
#-*-coding:utf-8-*-
import urllib2, urllib
import simplejson
import os, time, threading
import common, html_filter

#input the keywords
keywords = raw_input('Enter the keywords: ')

#define rnum_perpage, pages
rnum_perpage = 8
pages = 8

#define the thread function
def thread_scratch(url, rnum_perpage, page):
    url_set = []
    try:
        request = urllib2.Request(url, None, {'Referer': 'http://www.sina.com'})
        response = urllib2.urlopen(request)
        # Process the JSON string.
        results = simplejson.load(response)
        info = results['responseData']['results']
    except Exception, e:
        print 'error occurred'
        print e
    else:
        for minfo in info:
            url_set.append(minfo['url'])
            print minfo['url']
    #process the links
    i = 0
    for u in url_set:
        try:
            request_url = urllib2.Request(u, None, {'Referer': 'http://www.sina.com'})
            request_url.add_header('User-agent', 'CSC')
            response_data = urllib2.urlopen(request_url).read()
            #filter the page
            #content_data = html_filter.filter_tags(response_data)
            #write to file
            filenum = i + page
            filename = dir_name + '/related_html_' + str(filenum)
            print ' write start: related_html_' + str(filenum)
            f = open(filename, 'w+', -1)
            f.write(response_data)
            #print content_data
            f.close()
            print ' write down: related_html_' + str(filenum)
        except Exception, e:
            print 'error occurred 2'
            print e
        i = i + 1
    return

#create the output directory
dir_name = 'related_html_' + urllib.quote(keywords)
if os.path.exists(dir_name):
    print 'exists file'
    common.delete_dir_or_file(dir_name)
os.makedirs(dir_name)

#scrape the pages
print 'start to scratch web pages:'
for x in range(pages):
    print "page:%s" % (x + 1)
    page = x * rnum_perpage
    url = ('https://ajax.googleapis.com/ajax/services/search/web'
           '?v=1.0&q=%s&rsz=%s&start=%s') % (urllib.quote(keywords), rnum_perpage, page)
    print url
    t = threading.Thread(target=thread_scratch, args=(url, rnum_perpage, page))
    t.start()

#wait in the main thread for the worker threads to finish
main_thread = threading.currentThread()
for t in threading.enumerate():
    if t is main_thread:
        continue
    t.join()
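The script above imports two helper modules, common and html_filter, that are not shown in the post. A minimal stand-in for each (a sketch of what they might contain, not the original code) could be:

# common.py -- hypothetical stand-in: delete a directory tree or a single file
import os, shutil

def delete_dir_or_file(path):
    if os.path.isdir(path):
        shutil.rmtree(path)   # remove the directory and everything in it
    elif os.path.isfile(path):
        os.remove(path)

# html_filter.py -- hypothetical stand-in: strip HTML tags, keeping only text
import re

def filter_tags(html):
    return re.sub(r'<[^>]+>', '', html)

With these two files next to the script, it runs on its own; the real modules may of course do more.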