
What are the Python crawler frameworks?

(*-*)浩
Original | 2019-06-12 14:38:16

Today I'd like to share some of the more efficient Python crawler frameworks with everyone.


1. Scrapy

Scrapy is an application framework written to crawl websites and extract structured data. It can be used for a range of purposes, including data mining, information processing, and archiving historical data. With this framework you can easily scrape data such as Amazon product listings.

Project address: https://scrapy.org/
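As a quick illustration, here is a minimal sketch of a Scrapy spider. The URL and the CSS selectors (div.product, h2::text, and so on) are placeholders for whatever markup the target site actually uses:

    import scrapy

    class ProductSpider(scrapy.Spider):
        name = "products"
        start_urls = ["https://example.com/products"]  # placeholder URL

        def parse(self, response):
            # extract one item per product block on the page
            for product in response.css("div.product"):
                yield {
                    "title": product.css("h2::text").get(),
                    "price": product.css("span.price::text").get(),
                }
            # follow the pagination link, if the page has one
            next_page = response.css("a.next::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

Save this as, say, products_spider.py and run scrapy runspider products_spider.py -o products.json to dump the scraped items to a JSON file.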

2. PySpider

pyspider is a powerful web crawler system written in Python. It lets you write scripts, schedule tasks, and monitor crawl results in real time from a browser-based interface. The backend stores crawl results in any of several commonly used databases, and you can also schedule recurring tasks and set task priorities.

Project address: https://github.com/binux/pyspider
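The sketch below follows the shape of pyspider's quickstart: an entry point that is re-run on a schedule, an index handler that queues links, and a detail handler that returns the result. The target URL is a placeholder:

    from pyspider.libs.base_handler import *

    class Handler(BaseHandler):
        crawl_config = {}

        @every(minutes=24 * 60)  # re-run the entry point once a day
        def on_start(self):
            self.crawl("https://example.com/", callback=self.index_page)

        @config(age=10 * 24 * 60 * 60)  # treat a fetched page as fresh for 10 days
        def index_page(self, response):
            # queue every outgoing link for the detail handler
            for each in response.doc("a[href^='http']").items():
                self.crawl(each.attr.href, callback=self.detail_page)

        def detail_page(self, response):
            # whatever is returned here is stored as the crawl result
            return {"url": response.url, "title": response.doc("title").text()}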

3. Crawley

Crawley can crawl website content at high speed, supports both relational and non-relational databases, and can export data to JSON, XML, and other formats.

Project address: http://project.crawley-cloud.com/

4. Newspaper

Newspaper can extract news articles and perform content analysis. It uses multi-threading and supports more than ten languages.

Project address: https://github.com/codelucas/newspaper
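A minimal sketch of extracting a single article with Newspaper (the URL is a placeholder; point it at any real news story):

    from newspaper import Article

    article = Article("https://example.com/some-news-story")  # placeholder URL

    article.download()  # fetch the HTML
    article.parse()     # extract title, authors, body text, and so on

    print(article.title)
    print(article.authors)
    print(article.text[:200])

    # the optional NLP pass (requires the nltk punkt data) adds keywords and a summary
    article.nlp()
    print(article.keywords)
    print(article.summary)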

5. Beautiful Soup

Beautiful Soup is a Python library for extracting data from HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. Beautiful Soup will save you hours or even days of work.

Project address: https://www.crummy.com/software/BeautifulSoup/bs4/doc/
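A minimal sketch of the typical Beautiful Soup workflow, using requests to fetch a placeholder page and the standard library's html.parser to parse it:

    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com/").text  # placeholder URL

    # "html.parser" ships with Python; "lxml" is a faster option if installed
    soup = BeautifulSoup(html, "html.parser")

    print(soup.title.string)  # text of the <title> tag

    # collect every hyperlink on the page
    for a in soup.find_all("a"):
        print(a.get("href"), a.get_text(strip=True))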

6. Grab

Grab is a Python framework for building web scrapers. With Grab you can build scrapers of varying complexity, from simple five-line scripts to complex asynchronous scrapers that handle millions of pages. Grab provides an API for performing network requests and processing the received content, for example by interacting with the DOM tree of an HTML document.


Project address: http://docs.grablib.org/en/latest/#grab-spider-user-manual
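A minimal sketch using Grab's documented go()/doc.select() calls; the URL is a placeholder, and the exact selector behaviour should be checked against the version of Grab you install:

    from grab import Grab

    g = Grab()
    resp = g.go("https://example.com/")  # placeholder URL

    print(resp.code)  # HTTP status code of the response
    # select() evaluates an XPath expression against the parsed DOM
    print(g.doc.select("//title").text())
    for link in g.doc.select("//a"):
        print(link.attr("href", default=""))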

7. Cola

Cola is a distributed crawler framework. Users only need to write a few specific functions, without worrying about the details of distributed operation: tasks are automatically distributed across multiple machines, and the entire process is transparent to the user.


Project address: https://github.com/chineking/cola

For more Python-related technical articles, please visit the Python Tutorial column!

The above is the detailed content of What are the Python crawler frameworks? For more information, please follow other related articles on the PHP Chinese website!
