A large share of the world's crawlers are developed in Python. Learning crawling well provides an important data source for subsequent big data analysis, data mining, machine learning, and other work.
# What is a crawler?
A web crawler (also known as a web spider or web robot, and in the FOAF community as a web chaser) is a program or script that automatically crawls information from the World Wide Web according to certain rules. Other, less common names include ant, automatic indexer, emulator, and worm.
In layman's terms, a crawler is a program that obtains the data you want from web pages; that is, it captures data automatically.
# What can a crawler do?
You can use a crawler to fetch images, videos, and any other data you want. As long as you can access the data through a browser, you can obtain it with a crawler.
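As a minimal sketch, fetching a resource the way a browser would can be done with Python's standard library alone (the image URL below is hypothetical; real sites may additionally require headers, cookies, or rate limiting):

```python
import urllib.request

def fetch(url: str) -> bytes:
    """Download the raw bytes at url, as a browser would request them."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# Hypothetical usage: save an image to disk.
# data = fetch("https://example.com/picture.jpg")
# with open("picture.jpg", "wb") as f:
#     f.write(data)
```

The same `fetch` call works for HTML pages, images, or any other URL the browser can reach, which is exactly the point made above.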
# What is the nature of a crawler?
A crawler simulates a browser opening a web page and extracts the parts of the page data we want.
The process by which a browser opens a web page: after you enter an address, the browser finds the server host through a DNS server and sends a request to that server. The server processes the request and sends the results back to the user's browser, including HTML, JS, CSS, and other file contents. The browser parses these results and finally presents the page to the user.
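The request step above is just plain text over the network. A sketch of the raw HTTP GET message a browser would send after DNS resolution (the host and path are illustrative):

```python
from urllib.parse import urlparse

def build_http_request(url: str) -> str:
    """Build the raw HTTP/1.1 GET request a browser would send for url."""
    parts = urlparse(url)
    path = parts.path or "/"
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {parts.hostname}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

print(build_http_request("http://example.com/index.html"))
```

A real browser adds more headers (User-Agent, Accept, and so on), but the structure is the same: a request line, headers, and a blank line.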
So the page a user sees is rendered from HTML code. A crawler obtains the resources we want by analyzing and filtering that HTML code.
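For example, "filtering the HTML" for image links can be done with the standard library's `html.parser`; the HTML snippet below is made up for illustration:

```python
from html.parser import HTMLParser

class ImageLinkExtractor(HTMLParser):
    """Collect the src attribute of every <img> tag in an HTML document."""

    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src":
                    self.images.append(value)

html = '<html><body><img src="a.jpg"><p>text</p><img src="b.png"></body></html>'
parser = ImageLinkExtractor()
parser.feed(html)
print(parser.images)  # ['a.jpg', 'b.png']
```

In practice many crawlers use third-party parsers such as Beautiful Soup for convenience, but the principle is the same: parse the HTML, then keep only the elements you care about.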
The above is the detailed content of What is a python crawler in layman's terms?. For more information, please follow other related articles on the PHP Chinese website!