This article introduces the basics of writing a web crawler in Python. A web crawler, or Web Spider, is a very vivid name: if the Internet is compared to a spider web, then the spider is a program crawling around on that web.
1. The definition of web crawler
A web spider finds web pages through their link addresses. Starting from one page of a website (usually the home page), it reads the page content, finds the other link addresses in that page, and follows those links to the next pages, looping until every page of the site has been crawled. If the entire Internet is regarded as one website, a web spider can use this principle to crawl every page on the Internet. In short, a web crawler is a program that crawls web pages, and crawling pages is its basic operation.
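The loop just described can be sketched in a few lines of Python. This is only an illustration: fetch and extract_links are hypothetical placeholders standing in for real download and link-extraction code.

```python
from collections import deque

def crawl(seed, fetch, extract_links):
    """Visit every page reachable from seed, breadth-first."""
    seen = {seed}          # link addresses already discovered
    queue = deque([seed])  # pages waiting to be crawled
    pages = {}             # url -> downloaded content
    while queue:
        url = queue.popleft()
        html = fetch(url)              # download the page
        pages[url] = html
        for link in extract_links(html):  # find link addresses in the page
            if link not in seen:          # skip pages already queued
                seen.add(link)
                queue.append(link)
    return pages
```

With a toy in-memory "site" that maps each page to its outgoing links, crawl visits every reachable page exactly once, which is the looping behaviour described above.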
2. The process of browsing the webpage
The process of crawling a web page is essentially the same as what happens when a reader browses it in a browser such as IE. For example, you enter the address www.baidu.com in the browser's address bar.
Opening a web page means that the browser, acting as the browsing "client", sends a request to the server, "grabs" the server-side files to the local machine, and then interprets and displays them.
HTML is a markup language: it marks content with tags so that the pieces can be parsed and told apart. The browser's job is to parse the HTML code it receives and turn that raw code into the page we actually see.
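For example, a minimal HTML document marks each piece of content with a pair of tags (this fragment is purely illustrative):

```html
<html>
  <body>
    <h4>A heading marked by h4 tags</h4>
    <p>A paragraph marked by p tags</p>
  </body>
</html>
```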
3. Web crawler function based on python
1). Get html page with python
In fact, the most basic page fetch takes just two lines:
import urllib2
content = urllib2.urlopen('http://XXXX').read()
This gives you the entire HTML document. The key issue is that we usually need to extract just the useful information from that document rather than keep the whole thing, and that requires parsing HTML filled with all kinds of tags.
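Note that the urllib2 module is Python 2 only; in Python 3 the same call lives in urllib.request. A minimal sketch of the equivalent fetch, using a data: URL here purely so it runs without network access (with a real site you would pass its http:// address):

```python
from urllib.request import urlopen

# A data: URL embeds the document in the address itself, so this sketch
# needs no network; in practice, pass a real 'http://...' address instead.
content = urlopen('data:text/html,<html><body>hello</body></html>').read()
print(content)  # the fetched document, as bytes
```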
2). How to parse html after python crawler crawls the page
python crawler html parsing library SGMLParser
Python ships with parsers such as HTMLParser and SGMLParser by default. The former is genuinely hard to use, so here is a sample program written with SGMLParser:
import urllib2
from sgmllib import SGMLParser

class ListName(SGMLParser):
    def __init__(self):
        SGMLParser.__init__(self)
        self.is_h4 = ""
        self.name = []
    def start_h4(self, attrs):
        self.is_h4 = 1
    def end_h4(self):
        self.is_h4 = ""
    def handle_data(self, text):
        if self.is_h4 == 1:
            self.name.append(text)

content = urllib2.urlopen('http://169it.com/xxx.htm').read()
listname = ListName()
listname.feed(content)
for item in listname.name:
    print item.decode('gbk').encode('utf8')
It is very simple. A class called ListName is defined here, inheriting the methods of SGMLParser. The variable is_h4 serves as a flag marking whether we are inside an h4 tag in the HTML file: when an h4 tag is encountered, the content inside the tag is appended to the list variable name. The start_h4() and end_h4() functions deserve a word of explanation; their prototypes are
start_tagname(self, attrs)
end_tagname(self)
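Note that sgmllib was removed in Python 3. The same event-driven pattern works with the standard library's html.parser module; the sketch below rewrites the ListName idea for Python 3 and feeds it an in-memory string (the sample HTML is made up) instead of a downloaded page:

```python
from html.parser import HTMLParser

class ListName(HTMLParser):
    """Collect the text found inside <h4> tags."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.is_h4 = False
        self.name = []
    def handle_starttag(self, tag, attrs):
        if tag == 'h4':          # entering an h4 tag
            self.is_h4 = True
    def handle_endtag(self, tag):
        if tag == 'h4':          # leaving the h4 tag
            self.is_h4 = False
    def handle_data(self, data):
        if self.is_h4:           # only keep text while inside h4
            self.name.append(data)

parser = ListName()
parser.feed('<div><h4>First</h4><p>skip</p><h4>Second</h4></div>')
print(parser.name)  # ['First', 'Second']
```

Unlike SGMLParser, html.parser dispatches all tags through handle_starttag/handle_endtag rather than per-tag start_h4/end_h4 methods, so the tag name is checked explicitly.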
python crawler html parsing library pyQuery
pyQuery is the Python implementation of jQuery. On Debian/Ubuntu it can be installed with: sudo apt-get install python-pyquery
from pyquery import PyQuery as pyq

doc = pyq(url=r'http://169it.com/xxx.html')
cts = doc('.market-cat')
for i in cts:
    print '====', pyq(i).find('h4').text(), '===='
    for j in pyq(i).find('.sub'):
        print pyq(j).text(),
    print '\n'
Python crawler html parsing library BeautifulSoup
One headache is that most web pages are not written in full compliance with the standards; they contain all kinds of inexplicable errors that make you want to track down the author and beat him up. To deal with this, we can choose the famous BeautifulSoup to parse HTML documents; it has good fault tolerance. That covers the basics of implementing a web crawler in Python. I hope it is helpful to everyone's learning.
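A small sketch of that fault tolerance, assuming the third-party bs4 package is installed (pip install beautifulsoup4); the broken markup below is made up for illustration:

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Deliberately malformed HTML: an unclosed <p> and a missing final </h4>.
broken = '<html><body><h4>First</h4><p>oops<h4>Second</body>'

soup = BeautifulSoup(broken, 'html.parser')
for tag in soup.find_all('h4'):
    print(tag.get_text())
```

Despite the errors, BeautifulSoup still recovers both headings, which is exactly why it is the usual choice for real-world pages.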