With a server-side crawler you run into all kinds of problems. When a visitor opens my web page, can I use that visitor's IP to fetch the target website and then upload the data to my server? In other words, can this work as a kind of distributed crawler, where Ajax grabs the crawled data in the browser and sends it back to my own server?
Are there any similar examples or open source projects?
That amounts to stealing user privacy; it won't work~
The basic principle is to create a hidden iframe that requests the target website, and once the request succeeds, use Ajax to save the result to your own server. Because many websites have anti-crawling strategies, server-side crawlers often fail, and a client-side crawler can be very useful in those cases.
However, the user experience is not great. A sketch of the idea is below.
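A minimal sketch of this hidden-iframe approach in TypeScript, assuming hypothetical names (`crawlViaIframe`, `/api/upload`). One important caveat: the browser's same-origin policy prevents scripts from reading a cross-origin iframe's content, so this only works when the target page is on the same origin or explicitly cooperates (e.g. via postMessage).

```typescript
// Sketch only: crawl a page in a hidden iframe, then upload the HTML
// to our own server via Ajax. Names (crawlViaIframe, /api/upload) are
// hypothetical. Reading a cross-origin iframe's document is blocked by
// the same-origin policy, so this only works for same-origin targets.

function crawlViaIframe(targetUrl: string, uploadUrl: string): void {
  const iframe = document.createElement("iframe");
  iframe.style.display = "none"; // keep the iframe invisible to the visitor
  iframe.src = targetUrl;

  iframe.onload = () => {
    try {
      // Returns null or throws for cross-origin documents.
      const html = iframe.contentDocument?.documentElement.outerHTML ?? "";

      // Send the captured page back to our own server.
      fetch(uploadUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ url: targetUrl, html }),
      });
    } catch (err) {
      console.error("Could not read iframe content:", err);
    } finally {
      iframe.remove(); // clean up once the data has been sent
    }
  };

  document.body.appendChild(iframe);
}

// Usage: crawl a same-origin page and upload it.
crawlViaIframe("/some/page", "/api/upload");
```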