PhantomJS can do this.
Unless the front-end logic is very complex (for example, a lot of logic for computing tokens), simulating JS execution is not recommended. If the data is loaded dynamically, it is simpler to just fetch the JSON directly.
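A minimal sketch of that approach in Java, assuming the JSON endpoint and headers shown here (they are placeholders; the real request can be found in the browser's DevTools Network tab while the page loads):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JsonFetchExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical JSON endpoint -- replace with the request the page
        // actually makes for its dynamic data.
        String apiUrl = "https://example.com/api/items?page=1";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(apiUrl))
                // Some endpoints check these headers; mimic the browser request.
                .header("Accept", "application/json")
                .header("X-Requested-With", "XMLHttpRequest")
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The body is the raw JSON the page would otherwise render via JS;
        // parse it with any JSON library (e.g. Jackson or Gson).
        System.out.println(response.body());
    }
}
```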
There is a Java jar package for parsing JS scripts, but I can't remember which one specifically.
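The answer above does not name the library, so purely as one possibility: the Nashorn engine bundled with JDK 8-14 (available through javax.script) can evaluate small pieces of page JS, such as a token-computation function. The function below is hypothetical:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class EvalPageScript {
    public static void main(String[] args) throws Exception {
        // "nashorn" ships with JDK 8-14; on newer JDKs add the
        // org.openjdk.nashorn:nashorn-core dependency instead.
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

        // Hypothetical token function copied out of the page's JS.
        String pageScript = "function sign(ts) { return ts * 31 % 97; }";
        engine.eval(pageScript);

        // Call the function the same way the page would.
        Object token = engine.eval("sign(" + System.currentTimeMillis() + ")");
        System.out.println("computed token: " + token);
    }
}
```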
As far as crawlers are concerned, directly simulating a browser to parse JavaScript is not advisable. Instead, capture the request that returns the corresponding page's JSON and fetch that data directly.
If you use Java, you can try Selenium's WebDriver. If you use JS, just use PhantomJS.
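A minimal Selenium sketch in Java, assuming the selenium-java dependency and a ChromeDriver available on the PATH (the URL and CSS selector are placeholders). Note that PhantomJS itself is no longer maintained, so headless Chrome is the more common choice today:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class RenderedPageExample {
    public static void main(String[] args) {
        // Run Chrome headless so no browser window is needed.
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless");

        WebDriver driver = new ChromeDriver(options);
        try {
            // Hypothetical page whose content is filled in by JavaScript.
            driver.get("https://example.com/dynamic-page");

            // Hypothetical selector for the JS-rendered element.
            String text = driver.findElement(By.cssSelector("#content")).getText();
            System.out.println(text);

            // The fully rendered HTML is also available if needed.
            String html = driver.getPageSource();
            System.out.println(html.length() + " chars of rendered HTML");
        } finally {
            driver.quit();
        }
    }
}
```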
Refer to this document: How to crawl data dynamically generated by JS? http://doc.shenjianshou.cn/de...