In real applications we often need special content such as news or weather forecasts, but a personal or small site rarely has the manpower, material, or money to produce all of it itself. So what can we do?
Fortunately, the Internet is a resource-sharing system: we can use a program to automatically fetch pages from other sites and process them for our own use.
What should we use for this? As it happens, PHP has exactly this capability built in, in the form of the curl library. Look at the code below!
<?php
// Open a curl session pointed at the Sina news front page.
$ch = curl_init("http://dailynews.sina.com.cn");
// Write the fetched page into a local file.
$fp = fopen("php_homepage.txt", "w");
curl_setopt($ch, CURLOPT_FILE, $fp);
// Drop the HTTP headers; we only want the page body.
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
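The discussion below refers to a variable $txt holding the page contents. As a minimal sketch of how you might get it, assuming you would rather keep the page in a string than in a file, curl's CURLOPT_RETURNTRANSFER option makes curl_exec() return the body directly:

<?php
// Variant sketch: keep the page in a string instead of a file.
// CURLOPT_RETURNTRANSFER makes curl_exec() return the body,
// which we store in $txt for the analysis discussed below.
$ch = curl_init("http://dailynews.sina.com.cn");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
$txt = curl_exec($ch);
curl_close($ch);
?>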
Sometimes curl raises a warning even though the download has in fact completed. I asked the maintainers overseas but never got a reply, so the practical workaround is to put PHP's @ error-suppression operator in front of the call, as in @curl_exec($ch). With the page in $txt, a little appropriate analysis lets us quietly grab Sina's news, as sketched below. That said, it is better not to do this in practice, to avoid legal disputes; the point is simply that PHP is very powerful and there is a great deal you can do with it!
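As a rough sketch of what that analysis might look like, here is one way to pull headline links out of $txt with a regular expression. The pattern is an illustrative assumption about the page's markup, not a robust HTML parser:

<?php
// Minimal sketch: extract anchor tags from the page fetched
// into $txt above. The pattern is a crude illustration; real
// pages may need a proper HTML parser instead.
preg_match_all('/<a[^>]+href="([^"]+)"[^>]*>([^<]+)<\/a>/i', $txt, $matches);
foreach ($matches[2] as $i => $title) {
    echo $title . " => " . $matches[1][$i] . "\n";
}
?>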
[The copyright of this article is jointly owned by the author and Oso.com. If you need to reprint it, please indicate the author and source.]