How does Python crawler use MongoDB?
Why Python crawlers use MongoDB:
1. Document-oriented storage
Simply put, you can store JSON objects and lists directly, without mapping them to rows and columns first.
2. No need to define a "table" in advance; a collection is created the moment you first insert into it.
3. Records in the same "table" can have different lengths
That is, if the first record has 10 fields, the second record does not need to have all 10.
This makes MongoDB very well suited to the messy, irregular data that crawlers collect.
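As an illustration of points 1–3, the sketch below builds two scraped records with different field counts. The pymongo calls for inserting them are shown only in comments, since they assume a running MongoDB server; the `crawler` database and `pages` collection names are hypothetical.

```python
# Two scraped records with different shapes: MongoDB stores both
# in the same collection without any schema defined up front.
page_a = {
    "url": "https://example.com/a",
    "title": "Page A",
    "links": ["https://example.com/b", "https://example.com/c"],  # a list, stored as-is
}
page_b = {
    "url": "https://example.com/b",
    "title": "Page B",
    "links": [],
    "author": "unknown",         # extra field that page_a does not have
    "published": "2021-01-01",
}

# With a running MongoDB server, inserting them needs no table definition
# (database and collection names here are hypothetical):
#   from pymongo import MongoClient
#   client = MongoClient("mongodb://localhost:27017")
#   coll = client["crawler"]["pages"]
#   coll.insert_many([page_a, page_b])  # collection is created on first insert

print(len(page_a), len(page_b))  # the two "rows" have different numbers of fields
```

Because each document carries its own field names, a crawler can add or drop fields per page without any migration step.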
Further details:
MongoDB introduction:
MongoDB is characterized by high performance, easy deployment, and ease of use, and it makes storing data very convenient. Its main features are:
* Collection-oriented storage, well suited to storing object-type data.
* Schema-free.
* Supports dynamic queries.
* Supports full indexing, including on fields of embedded (internal) objects.
* Supports replication and failure recovery.
* Uses efficient binary storage for data, including large objects (such as videos).
* Automatically handles sharding to support scalability at cloud-computing scale.
* Has drivers for Golang, Ruby, Python, Java, C, PHP, C#, and other languages.
* Stores documents in BSON, a binary extension of JSON.
* Accessible via the web.
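To make the "dynamic query" and "internal object" bullets concrete, here is a minimal pure-Python sketch of how a MongoDB-style filter document matches records, including dotted paths into embedded objects. This illustrates the query model only, not the real query engine; the `match` helper and the sample data are hypothetical.

```python
def get_path(doc, path):
    """Walk a dotted path like 'meta.lang' into nested dicts (MongoDB-style)."""
    for key in path.split("."):
        if not isinstance(doc, dict) or key not in doc:
            return None
        doc = doc[key]
    return doc

def match(doc, query):
    """Return True if doc satisfies every field in the filter document."""
    return all(get_path(doc, path) == value for path, value in query.items())

pages = [
    {"url": "https://example.com/a", "meta": {"lang": "en", "status": 200}},
    {"url": "https://example.com/b", "meta": {"lang": "zh", "status": 200}},
]

# Analogous to the shell query: db.pages.find({"meta.lang": "en"})
hits = [p["url"] for p in pages if match(p, {"meta.lang": "en"})]
print(hits)
```

In real MongoDB, the same dotted-path filter can be served by an index built on the embedded field, which is what "full indexing, including internal objects" refers to.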
The above is the detailed content of "How does Python crawler use MongoDB?". For more information, see related articles on the PHP Chinese website.