Crawlers may seem very mysterious, but in fact they are not as magical as we imagine. (Of course, the crawlers of Google and Baidu are complex and powerful, yet their power lies not so much in the crawler itself as in the back-end data processing and data-mining algorithms.) Today we will unveil the mystery. Haha, you can implement a web weather crawler in just two simple steps...
Simply put, a crawler consists of two parts: 1. obtain the text of the web page; 2. parse that text to extract the data we want.
1. Obtain web page text information.
Python makes it very convenient to fetch HTML: with the help of the urllib library, only a few lines of code are needed to implement the function we need.

# import the urllib library
import urllib

def getHtml(url):
    page = urllib.urlopen(url)   # open the URL
    html = page.read()           # read the page source
    page.close()
    return html
What is returned here is the source code of the web page, that is, the HTML.
So how do we get the information we want out of it? This is where the most commonly used tool in web page parsing comes in: regular expressions.
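Note that urllib.urlopen only exists in Python 2. If you are on Python 3, a minimal equivalent sketch would use urllib.request instead (the encoding below is an assumption and should be adjusted to the page you fetch):

# Python 3 sketch of the same helper, using urllib.request
import urllib.request

def getHtml(url):
    page = urllib.request.urlopen(url)
    html = page.read().decode('utf-8')   # bytes -> str; adjust to the page's actual charset
    page.close()
    return html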
2. Extract the required content with regular expressions.
When using regular expressions, you need to look carefully at how the information is structured in the page source and write a correct pattern accordingly.
Using regular expressions in Python is also very simple:

# import the regular expression library
import re

def getWeather(html):
    # three capture groups: city name, lowest temperature, highest temperature
    reg = '<a title=.*?>(.*?)</a>.*?<span>(.*?)</span>.*?<b>(.*?)</b>'
    weatherList = re.compile(reg).findall(html)
    return weatherList
Explanation:
Here reg is the regular expression and html is the text obtained in the first step. findall finds all the substrings of html that match the pattern and stores them in weatherList; you can then iterate over weatherList and print the data, as in the sketch that follows.
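Putting the two steps together might look like this; the URL is only a placeholder (not the page used in the original article), and the loop assumes the three capture groups are city, low and high:

# End-to-end sketch using the two helpers above (Python 2 urllib version).
# The URL is a placeholder; replace it with the weather page you want to scrape.
if __name__ == '__main__':
    url = 'http://example.com/weather.html'   # hypothetical URL
    html = getHtml(url)
    for city, low, high in getWeather(html):
        print('%s: %s ~ %s' % (city, low, high))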
There are two things to note about the regular expression reg here.
One is "(.*?)". Whatever the parentheses () enclose is the content we capture; if there are several pairs of parentheses, each result returned by findall is a tuple containing the contents of all of them. There are three pairs of parentheses above, corresponding to the city, the lowest temperature and the highest temperature.
The other is ".*?". Python's regular expression matching is greedy by default, that is, it matches as many characters as possible. Adding a question mark after a quantifier switches it to non-greedy mode, which matches as few characters as possible. Since there are several cities to match here, non-greedy mode must be used; otherwise only a single match would be returned, which is wrong.
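To see the difference, here is a small self-contained demo on a made-up HTML fragment shaped like the pattern above (the city names and temperatures are invented for illustration): with .*? every city is matched, while the greedy version swallows everything up to the last tags and returns only one result.

import re

# A made-up fragment in the shape the pattern expects:
# city in <a title=...>, low temperature in <span>, high temperature in <b>.
sample = ('<a title="a">Beijing</a><span>-2</span><b>5</b>'
          '<a title="b">Shanghai</a><span>4</span><b>11</b>')

nongreedy = '<a title=.*?>(.*?)</a>.*?<span>(.*?)</span>.*?<b>(.*?)</b>'
greedy    = '<a title=.*>(.*)</a>.*<span>(.*)</span>.*<b>(.*)</b>'

print(re.findall(nongreedy, sample))
# [('Beijing', '-2', '5'), ('Shanghai', '4', '11')]

print(re.findall(greedy, sample))
# [('Shanghai', '4', '11')]  -- the greedy version collapses everything into one match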