
Why choose python as a crawler?

silencement (Original)
2019-06-27 11:03:08


What is a web crawler?

A web crawler is a program that automatically extracts web pages. It downloads pages from the World Wide Web on behalf of a search engine and is an important component of one. A traditional crawler starts from the URLs of one or more seed pages and collects the URLs found on them. As it crawls, it continuously extracts new URLs from the current page and puts them into a queue, until certain stopping conditions of the system are met.
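The seed-URL, queue, and stopping-condition loop described above can be sketched in a few lines. This is a minimal illustration, not a production crawler: the `fetch` callable, the link-extraction regex, and the `max_pages` stopping condition are all simplifying assumptions for demonstration.

```python
import re
from collections import deque

def crawl(seed_urls, fetch, max_pages=100):
    """fetch(url) -> html string; returns the sorted list of visited URLs."""
    queue = deque(seed_urls)   # URLs waiting to be crawled
    visited = set()            # URLs already processed
    while queue and len(visited) < max_pages:  # stopping conditions
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        html = fetch(url)
        # Extract new URLs from the current page and put them into the queue.
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in visited:
                queue.append(link)
    return sorted(visited)
```

Passing a real HTTP fetcher (e.g. one built on `urllib.request`, as in the script later in this article) turns the sketch into a working breadth-first crawler.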

What is the use of crawlers?

As the web page collector of a general search engine (Google, Baidu) or of a vertical search engine. Scientific research: online human behavior, online community evolution, human dynamics, econometric sociology, complex networks, data mining, and other fields all require large amounts of data, and web crawlers are a powerful tool for collecting it. And less reputable uses: snooping, hacking, sending spam...

Crawling is the first and simplest step for a search engine:

Webpage collection

Build index

Query sorting
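The "build index" and "query sorting" steps above can be illustrated with a toy inverted index: each word maps to the set of pages containing it, and a query is answered by intersecting those sets. Real engines add ranking signals on top; this is only a sketch with made-up page data.

```python
from collections import defaultdict

def build_index(pages):
    """pages: dict of url -> page text. Returns word -> set of urls."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return the urls containing every query word, sorted for stable output."""
    results = None
    for word in query.lower().split():
        hits = index.get(word, set())
        results = hits if results is None else results & hits
    return sorted(results or [])
```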

What language should I use to write the crawler?

C, C++. Highly efficient and fast, suitable for a general search engine that crawls the entire web. Disadvantages: development is slow, and the code tends to be verbose and hard to maintain; see, for example, the Skynet search source code.

Scripting languages: Perl, Python, Java, Ruby. Simple, easy to learn, with good text processing that makes fine-grained extraction of web content convenient; but efficiency is often not high, so they suit focused crawling of a small number of sites.
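The "good text processing" point is easy to demonstrate: Python's standard library alone can parse HTML and pull out all the links in a few lines, no third-party packages needed. The sample HTML below is made up for demonstration.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag fed to the parser."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<p><a href="/page1">one</a> <a href="/page2">two</a></p>')
print(parser.links)  # ['/page1', '/page2']
```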

C#? (It seems to be the language favored by people in information management.)

Why did you choose Python in the end?

Cross-platform, with good support for Linux and Windows.

Scientific computing, numerical fitting: Numpy, Scipy

Visualization: 2D: Matplotlib (produces very beautiful plots); 3D: Mayavi2

Complex network: Networkx

Statistics: Interface with R language: Rpy

Interactive terminal

Rapid development of websites

A simple Python crawler

import urllib.request
import urllib.parse

def loadPage(url):
    """Send a request to the given url and return the html data."""
    request = urllib.request.Request(url)
    html = urllib.request.urlopen(request).read()
    return html.decode('utf-8')

def writePage(html, filename):
    """Write the html content returned by the server to a local file."""
    with open(filename, 'w', encoding='utf-8') as f:
        f.write(html)
    print('-' * 30)

def tiebaSpider(url, beginPage, endPage):
    """Tieba crawler scheduler: builds and processes the url of each page."""
    for page in range(beginPage, endPage + 1):
        pn = (page - 1) * 50          # each tieba page holds 50 posts
        fullurl = url + "&pn=" + str(pn)
        print(fullurl)
        filename = 'page_' + str(page) + '.html'
        html = loadPage(fullurl)      # fetch the paged url, not the base url
        writePage(html, filename)

if __name__ == "__main__":
    kw = input('Enter the name of the tieba to crawl: ')
    beginPage = int(input('Enter the start page: '))
    endPage = int(input('Enter the end page: '))
    url = 'https://tieba.baidu.com/f?'
    key = urllib.parse.urlencode({'kw': kw})
    fullurl = url + key
    tiebaSpider(fullurl, beginPage, endPage)
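The query string in the script above is built with `urllib.parse.urlencode`, which percent-encodes the keyword so that even a non-ASCII tieba name forms a valid URL. A quick illustration (the keyword values are examples):

```python
import urllib.parse

base = 'https://tieba.baidu.com/f?'
query = urllib.parse.urlencode({'kw': 'python'})
fullurl = base + query + '&pn=0'
print(fullurl)  # https://tieba.baidu.com/f?kw=python&pn=0

# Non-ASCII keywords are percent-encoded as UTF-8 bytes:
print(urllib.parse.urlencode({'kw': '编程'}))  # kw=%E7%BC%96%E7%A8%8B
```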

