
When crawling pages with curl, why does my script always stop automatically after fetching only about 18 pages?

WBOY
Release: 2016-06-13 13:12:11

When crawling pages with curl, why does my script always stop automatically after fetching only about 18 pages?
The code is as follows:

PHP code
<?php
for ($i = 2; $i < 30; $i++) {
    // One curl handle and one output file per page
    $ch = curl_init("http://www.readnovel.com/novel/169509/$i.html");
    $fp = fopen("novel-$i.txt", "w");

    curl_setopt($ch, CURLOPT_FILE, $fp);    // write the response body straight to the file
    curl_setopt($ch, CURLOPT_HEADER, 0);    // do not write response headers
    curl_setopt($ch, CURLOPT_TIMEOUT, 100); // per-request timeout in seconds (integer, not string)

    curl_exec($ch);
    curl_close($ch);
    fclose($fp);

    echo "Page $i saved successfully<br />";
}
echo "Crawl finished";
?>

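One way to narrow the problem down is to record what curl reports for each page instead of writing blindly to a file. The sketch below is a minimal diagnostic using the same URL pattern as above; it checks curl_error() and the HTTP status code so the failing request (timeout, connection refused, a 403/503 from the server, and so on) becomes visible.

PHP code

<?php
// Diagnostic sketch: report the outcome of every request so it is clear
// where the loop stops and what the server returned at that point.
for ($i = 2; $i < 30; $i++) {
    $ch = curl_init("http://www.readnovel.com/novel/169509/$i.html");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the body as a string
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_TIMEOUT, 100);

    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);

    if ($body === false) {
        echo "Page $i failed: " . curl_error($ch) . "<br />";
    } else {
        echo "Page $i: HTTP $code, " . strlen($body) . " bytes<br />";
        file_put_contents("novel-$i.txt", $body);
    }
    curl_close($ch);
}
?>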



------ Solution --------------------
If this doesn't happen with every website, the likely cause is that this particular site imposes a restriction.
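If the site does throttle bursts of requests, pacing the loop sometimes helps. Below is a minimal sketch under that assumption; the 2-second pause and the user-agent string are arbitrary choices, not values the site documents.

PHP code

<?php
set_time_limit(0); // the added pauses would otherwise exceed PHP's default 30-second limit when run via a web server

for ($i = 2; $i < 30; $i++) {
    $ch = curl_init("http://www.readnovel.com/novel/169509/$i.html");
    $fp = fopen("novel-$i.txt", "w");

    curl_setopt($ch, CURLOPT_FILE, $fp);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_TIMEOUT, 100);
    // Some sites reject requests without a user agent; sending one is an assumption, not a documented fix.
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; novel-fetcher)');

    curl_exec($ch);
    curl_close($ch);
    fclose($fp);

    sleep(2); // pause between requests so the site is not hit in a rapid burst
}
echo "Crawl finished";
?>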