
How to crawl web pages using PHP

不言
Release: 2023-04-03 08:58:01
Original

Crawling web page data in PHP generally comes down to two steps: first fetch the entire page, then use regular expressions to match out the data you need.
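As a minimal illustration of that two-step process, the sketch below matches the `<title>` out of a page; the HTML string literal here is a stand-in for the content you would actually download with one of the methods that follow:

```php
<?php
// Stand-in for the HTML you would get back from step 1 (fetching the page)
$html = "<html><head><title>Example Page</title></head><body>...</body></html>";

// Step 2: use a regular expression to match out just the data you need
if (preg_match('/<title>(.*?)<\/title>/', $html, $m)) {
    echo $m[1]; // prints "Example Page"
}
```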

PHP offers several ways to read a remote page. The methods below are collected from the experience of others online; I haven't tried them all yet, so I'm saving them here to test later.

1. file() function

2. file_get_contents() function

3. fopen()->fread()->fclose() mode

4. curl method (the one I mainly use)

5. fsockopen() function, socket mode

6. Plug-ins (e.g. Snoopy: http://sourceforge.net/projects/snoopy/)

1. Use the file() function.

<?php
// Define the URL
$url = 'http://t.qq.com';
// file() reads the page into an array of lines
$lines_array = file($url);
// Join the array into a single string
$lines_string = implode('', $lines_array);
// Output the content
echo $lines_string;

2. Use the file_get_contents() method, which is simpler.

Using file_get_contents() or fopen() requires allow_url_fopen to be enabled. To enable it, edit php.ini and set allow_url_fopen = On. When allow_url_fopen is Off, neither fopen() nor file_get_contents() can open remote files.
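Rather than inspecting php.ini by hand, a script can check the setting at runtime; a small sketch:

```php
<?php
// allow_url_fopen cannot be changed with ini_set() at runtime; it must be set
// in php.ini, so the best a script can do is detect it and fail early.
$enabled = (bool) ini_get('allow_url_fopen');
echo $enabled
    ? "allow_url_fopen is On - file_get_contents(\$url) will work for remote URLs\n"
    : "allow_url_fopen is Off - fall back to curl\n";
```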

$url = "http://news.sina.com.cn/c/nd/2016-10-23/doc-ifxwztru6951143.shtml";
$html = file_get_contents($url);
// If Chinese characters come out garbled, convert the encoding:
// $getcontent = iconv("gb2312", "utf-8", $html);
echo "<textarea style='width:800px;height:600px;'>" . $html . "</textarea>";
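The commented-out iconv() line above is the usual fix for garbled Chinese text. A self-contained sketch of the conversion, using a literal GB2312 byte string as a stand-in for a page downloaded from a GBK site:

```php
<?php
// "中文" ("Chinese") encoded as GB2312 bytes - a stand-in for a page
// downloaded from a GB2312/GBK site
$gb = "\xD6\xD0\xCE\xC4";
// Convert to UTF-8 so it displays correctly in a UTF-8 page
$utf8 = iconv("gb2312", "utf-8", $gb);
echo $utf8; // prints "中文"
```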

3. fopen()->fread()->fclose() mode. I haven't used this one yet; noting it down for later.

<?php
// Define the URL
$url = 'http://t.qq.com';
// Open the URL in binary mode
$handle = fopen($url, "rb");
// Initialize the buffer
$lines_string = "";
// Read the data in a loop
do {
  $data = fread($handle, 1024);
  if (strlen($data) == 0) {
    break;
  }
  $lines_string .= $data;
} while (true);
// Close the fopen handle and release the resource
fclose($handle);
// Output the content
echo $lines_string;

4. Use curl (the one I usually use).

Using curl requires the host to have the curl extension enabled. On Windows, edit php.ini, remove the semicolon in front of extension=php_curl.dll, and copy ssleay32.dll and libeay32.dll to C:\WINDOWS\system32; on Linux, install the curl extension.
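As with allow_url_fopen, a script can detect whether the extension is present before calling curl_init(); a small sketch:

```php
<?php
// The curl extension cannot be enabled from a script - it is configured in
// php.ini - so detect it up front instead of letting curl_init() fail.
$has_curl = extension_loaded('curl');
echo $has_curl
    ? "curl is available\n"
    : "curl is missing - enable extension=php_curl.dll (Windows) or install the extension (Linux)\n";
```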

<?php
header("Content-Type: text/html;charset=utf-8");
date_default_timezone_set('PRC');
$url = "https://***********ycare"; // URL to crawl (redacted)
$res = curl_get_contents($url); // curl wrapper, defined below
// This page delivers its data via inline JS, so grab the <script> blocks directly
preg_match_all('/<script>(.*?)<\/script>/', $res, $arr_all);
// Match the wanted data out of the JS block
preg_match_all('/"id"\:"(.*?)",/', $arr_all[1][1], $arr1);
$list = array_unique($arr1[1]); // (optional) de-duplicate
// The detail pages work the same way, so just loop
for ($i = 0; $i <= 6; $i = $i + 2) {
  $detail_url = 'ht*****em/' . $list[$i];
  $detail_res = curl_get_contents($detail_url);
  preg_match_all('/<script>(.*?)<\/script>/', $detail_res, $arr_detail);
  preg_match('/"desc"\:"(.*?)",/', $arr_detail[1][1], $arr_content);
  ***
    ***
    ***
  $ret = curl_post('http://**********cms.php', $result); // this script isn't on the server - you know why
}
function curl_get_contents($url, $cookie = '', $referer = '', $timeout = 300, $ishead = 0) {
  $curl = curl_init();
  curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
  curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
  curl_setopt($curl, CURLOPT_URL, $url);
  curl_setopt($curl, CURLOPT_TIMEOUT, $timeout);
  curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36');
  if ($cookie) {
    curl_setopt($curl, CURLOPT_COOKIE, $cookie);
  }
  if ($referer) {
    curl_setopt($curl, CURLOPT_REFERER, $referer);
  }
  $ssl = substr($url, 0, 8) == "https://" ? TRUE : FALSE;
  if ($ssl) {
    curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, false);
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
  }
  $res = curl_exec($curl);
  curl_close($curl); // free the handle before returning
  return $res;
}
// POST data to the server with curl
function curl_post($url, $data) {
  $ch = curl_init();
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
  //curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
  curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
  curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36');
  curl_setopt($ch, CURLOPT_URL, $url);
  curl_setopt($ch, CURLOPT_POST, true);
  curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
  $output = curl_exec($ch);
  curl_close($ch);
  return $output;
}
?>

5. fsockopen() function, socket mode (never used it; worth trying later).

Whether socket mode works also depends on the server's settings; you can check which communication protocols are enabled on the server via phpinfo().

<?php
$fp = fsockopen("t.qq.com", 80, $errno, $errstr, 30);
if (!$fp) {
  echo "$errstr ($errno)<br />\n";
} else {
  $out = "GET / HTTP/1.1\r\n";
  $out .= "Host: t.qq.com\r\n";
  $out .= "Connection: Close\r\n\r\n";
  fwrite($fp, $out);
  while (!feof($fp)) {
    echo fgets($fp, 128);
  }
  fclose($fp);
}
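Note that the socket approach echoes the raw HTTP response, status line and headers included. A small follow-up sketch (using a string literal as a stand-in for the bytes read from the socket) shows how to split the headers from the body at the first blank line:

```php
<?php
// Stand-in for a raw HTTP response as read from the socket
$raw = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>hello</html>";

// Headers and body are separated by the first blank line (\r\n\r\n)
list($headers, $body) = explode("\r\n\r\n", $raw, 2);
echo $body; // prints "<html>hello</html>"
```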

6. The Snoopy plug-in; the latest version is Snoopy-1.2.4.zip (last updated 2013-05-30).

Snoopy, which is very popular online, is recommended for collection. It is a very powerful scraping class and very convenient to use; you can also set a user agent in it to simulate browser information.

Note: The agent is set around line 45 of the Snoopy.class.php file (search the file for "var $agent"). You can echo $_SERVER['HTTP_USER_AGENT'] to get your own browser's information, then copy the echoed string into the agent value.

<?php
// Include the Snoopy class file
require('Snoopy.class.php');
// Instantiate the Snoopy class
$snoopy = new Snoopy;
$url = "http://t.qq.com";
// Start fetching the content
$snoopy->fetch($url);
// Save the fetched content into $lines_string
$lines_string = $snoopy->results;
// Output the content (you could also save it on your own server)
echo $lines_string;

Source: php.cn