What this means is that the crawled web pages are stored directly on the local disk as files.
You can also use an object storage service instead.
It is recommended that you use the Shenjianshou Cloud Crawler (http://www.shenjianshou.cn). Crawlers are written and executed entirely in the cloud, so there is no development environment to configure and you can develop and deploy quickly.
A few lines of JavaScript are enough to implement a complex crawler, and the platform provides built-in features for the problems commonly encountered when developing crawlers: bypassing anti-crawler defenses, JavaScript rendering, data publishing, chart analysis, hotlink protection, and so on.
For the collected data, you can:
(1) publish it to a website, such as WeCenter, WordPress, Discuz!, DedeCMS, Empire CMS, and other CMS systems;
(2) publish it to a database;
(3) or export it as a file to your local machine.
The specific settings are under "Data Publishing & Export".
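To give a flavor of the "few lines of JavaScript" idea, here is a toy link extractor in plain Node.js. This is an illustrative sketch only: the real Shenjianshou platform supplies its own hosted JavaScript API for fetching and extraction, which is not reproduced here.

```javascript
// Toy extraction step of a crawler: pull anchors out of an HTML string.
// Generic JavaScript for illustration — not the Shenjianshou API.
function extractLinks(html) {
  const links = [];
  // Naive regex for simple <a href="...">text</a> anchors; a real
  // crawler would use a proper HTML parser.
  const re = /<a\s[^>]*href="([^"]+)"[^>]*>([^<]*)<\/a>/gi;
  let m;
  while ((m = re.exec(html)) !== null) {
    links.push({ url: m[1], text: m[2].trim() });
  }
  return links;
}
```

In a hosted crawler, a loop like this would feed the extracted URLs back into the fetch queue and hand the extracted fields to the data-publishing step described above.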