Python is an object-oriented, dynamically typed programming language. It is well suited to building web crawlers, but do you know what a Python crawler can actually do?
Crawlers can collect information from web pages and other network sources for intelligent analysis and content push. A large share of the world's crawlers are written in Python, and the data they gather provides large, important data sources for big data analysis, data mining, machine learning, and more.
1. A Python crawler can start from one page of a website (usually the home page), read that page's content, find the other link addresses it contains, then follow those links to the next pages, and so on, looping until every page of the site has been crawled. If the entire Internet is regarded as one website, a web spider can use the same principle to crawl every page on the Internet (see the sketch after this list).
2. A web crawler (also known as a web spider or web robot, and more commonly called a web page chaser in the FOAF community) is a program or script that automatically retrieves information from the World Wide Web according to certain rules. Other, less commonly used names include ant, auto-indexer, emulator, and worm.
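Taken together, points 1 and 2 describe a breadth-first traversal of a site's link graph. Below is a minimal sketch of that loop using only the Python standard library; the seed URL is a placeholder, and a real crawler would also respect robots.txt and rate limits.

```python
# Minimal sketch of the crawl loop described above: start from a seed
# page, read it, extract its links, and follow same-site links until
# every reachable page (up to a cap) has been visited.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=50):
    """Breadth-first crawl that stays on the seed page's domain."""
    domain = urlparse(seed_url).netloc
    queue = deque([seed_url])
    visited = set()
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that fail to load
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if urlparse(absolute).netloc == domain:
                queue.append(absolute)
    return visited


if __name__ == "__main__":
    # Hypothetical seed URL; replace with a site you are allowed to crawl.
    for page in crawl("https://example.com/"):
        print(page)
```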
For example, you can crawl the authors and answers on Zhihu, or crawl the shared resources on Baidu Netdisk and save them to a database (saving only the links and titles of the resources, of course) to build a search engine for the network disk. Torrent-site search engines work the same way.
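As a sketch of the "save links and titles to a database" step, the snippet below stores scraped records in SQLite and runs a naive keyword search over them. The table name, file name, and sample row are illustrative assumptions, not taken from Zhihu's or Baidu Netdisk's actual pages.

```python
# Illustrative only: persist scraped (title, link) records in SQLite so
# a search engine can query them later. Schema and sample data are made up.
import sqlite3


def save_records(db_path, records):
    """Insert (title, link) pairs, skipping duplicate links."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS resources ("
        "link TEXT PRIMARY KEY, title TEXT)"
    )
    conn.executemany(
        "INSERT OR IGNORE INTO resources (link, title) VALUES (?, ?)",
        [(link, title) for title, link in records],
    )
    conn.commit()
    conn.close()


def search(db_path, keyword):
    """Naive substring search over saved titles."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT title, link FROM resources WHERE title LIKE ?",
        (f"%{keyword}%",),
    ).fetchall()
    conn.close()
    return rows


if __name__ == "__main__":
    save_records("resources.db",
                 [("Sample shared file", "https://example.com/f1")])
    print(search("resources.db", "Sample"))
```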