
How does Python crawler use MongoDB?

Guanhui
Release: 2020-07-28 18:24:52


Why Python crawlers use MongoDB:

1. Document-oriented storage

Put simply, you can store JSON objects and lists directly, without flattening them into rows and columns first.

2. No need to define a "table" in advance; collections are created on the fly, the moment you first write to them.

3. Records in the same "table" can have different shapes.

That is, the first record can have 10 fields while the second has only 3; nothing forces every record to match.

This makes MongoDB a very good fit for the messy, irregular data a crawler produces.
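The three points above can be sketched with PyMongo. This is a minimal sketch, not a full crawler: it assumes the pymongo package is installed and a MongoDB server is reachable at mongodb://localhost:27017, and names such as "crawler_db" and "pages" are made up for illustration.

```python
# Two scraped records with different fields. MongoDB accepts both in
# the same collection, with no table definition made in advance.
pages = [
    {"url": "https://example.com/a", "title": "Page A",
     "links": ["https://example.com/b"]},                 # 3 fields, nested list
    {"url": "https://example.com/b", "title": "Page B",
     "status": 404, "error": "Not Found", "retries": 2},  # 5 fields
]

def store_pages(docs, uri="mongodb://localhost:27017"):
    """Insert scraped documents; the database and collection are
    created automatically on the first write."""
    from pymongo import MongoClient  # imported lazily so the sample
                                     # data above works without pymongo
    client = MongoClient(uri)
    coll = client["crawler_db"]["pages"]  # no schema declared anywhere
    result = coll.insert_many(docs)       # dicts go in as-is
    return result.inserted_ids

if __name__ == "__main__":
    print(store_pages(pages))
```

Note that the two documents neither share a schema nor need one: `insert_many` takes plain Python dicts, lists and all, exactly as the crawler built them.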

Content expansion:

MongoDB introduction:

MongoDB is characterized by high performance, easy deployment, ease of use, and convenient data storage. Its main features are:

* Collection-oriented storage, well suited to storing object-type data.

* Schema-free.

* Supports dynamic queries.

* Supports full indexing, including on fields of embedded objects.

* Supports replication and failure recovery.

* Uses an efficient binary storage format, including for large objects (such as videos).

* Automatic sharding, supporting horizontal scalability at cloud scale.

* Has drivers for Go, Ruby, Python, Java, C, PHP, C# and other languages.

* Stores documents as BSON (a binary extension of JSON).

* Accessible over the network.
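The "dynamic queries" feature above means a MongoDB query is itself just a document, so a crawler can compose filter conditions at runtime. A short sketch, using the standard MongoDB query operators `$gte` and `$regex`; the field names ("status", "retries", "url") are illustrative, not part of any fixed schema.

```python
def build_filter(status=None, min_retries=None, url_prefix=None):
    """Compose a MongoDB query document from optional conditions."""
    query = {}
    if status is not None:
        query["status"] = status                     # exact match
    if min_retries is not None:
        query["retries"] = {"$gte": min_retries}     # range operator
    if url_prefix is not None:
        query["url"] = {"$regex": f"^{url_prefix}"}  # regex operator
    return query

# The resulting dict is passed straight to a collection,
# e.g. coll.find(build_filter(status=404)).
print(build_filter(status=404, url_prefix="https://example.com"))
```

Because the filter is an ordinary dict, conditions can be added or dropped per request, which suits a crawler whose queries change as it runs.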

The above is the detailed content of "How does Python crawler use MongoDB?". For more information, please follow other related articles on the PHP Chinese website!

source:jb51.net