How to crawl and analyze web pages with PHP

不言
Release: 2023-03-30 14:04:01

This article introduces how to crawl and analyze web pages with PHP, explaining the principles and techniques involved through worked examples. Readers who need this can use it as a reference.

The examples in this article show how to crawl and analyze web pages with PHP. They are shared here for your reference; the details are as follows:

Crawling and analyzing a file is very simple. This tutorial walks you through an example step by step. Let's start!

First, we must decide which URL we will crawl. This can be set directly in the script or passed in via the query string (an example of the query-string approach follows the first snippet below). For simplicity, let's set the variable directly in the script.

<?php
$url = 'http://www.php.net';
?>
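If you would rather pass the target through the query string, a minimal sketch looks like this (the parameter name url is only an illustration, and anything supplied by the client should be validated before it is fetched):

<?php
// Read the target URL from the query string, falling back to a default.
$url = isset($_GET['url']) ? $_GET['url'] : 'http://www.php.net';
?>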

The second step is to grab the specified file and store it in an array using the file() function.

<?php
$url = 'http://www.php.net';
$lines_array = file($url);
?>

Okay, the file is now in the array. However, the text we want to analyze may not all be on one line. To deal with this, we can simply convert the array $lines_array into a string using the implode(x, y) function. If you plan to use explode() later (to split the string back into an array), it may be better to set x to "|" or "!" or some other similar delimiter, but for our purposes it is best to set x to a space. y is the other required parameter: the array you want implode() to join.

<?php
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode('', $lines_array);
?>
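As an aside, file_get_contents() (assuming URL fopen wrappers are enabled in your PHP configuration) returns the whole page as a single string in one call, which makes the implode() step unnecessary; the rest of the tutorial works the same either way.

<?php
// Alternative to file() + implode(): fetch the page as one string.
$url = 'http://www.php.net';
$lines_string = file_get_contents($url);
?>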

Now that the crawling work is done, it's time to analyze. For the purposes of this example, we want to get everything between <head> and </head>. To parse that part out of the string, we also need something called a regular expression.

<?php
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode('', $lines_array);
eregi("<head>(.*)</head>", $lines_string, $head);
?>

Let's take a look at the code. As you can see, the eregi() function is called in the following format:

eregi("<head>(.*)</head>", $lines_string, $head);

"(.*)" means everything and can be interpreted as, "analysis Everything between and

". $lines_string is the string we are analyzing, and $head is the array where the analyzed results are stored.

Finally, we can output the data. Since there is only one <head>...</head> block on the page, we can safely assume that the first match in the array is the one we want. Let's print it out.
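A minimal version of the finished script might look like this (echoing $head[0] prints the whole matched block, including the tags; $head[1] would print only the captured text):

<?php
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode('', $lines_array);
eregi("<head>(.*)</head>", $lines_string, $head);
echo $head[0];  // the full <head>...</head> block; use $head[1] for just the inner text
?>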


Finally, here is the full code of a more complete example:

<?php
// Fetch every index page and save all matched content URLs to a file
function get_index($save_file, $prefix = "index_"){
  $count = 68;
  $i = 1;
  if (file_exists($save_file)) @unlink($save_file);
  $fp = fopen($save_file, "a+") or die("Open " . $save_file . " failed");
  while ($i < $count) {
    $url = $prefix . $i . ".htm";
    echo "Get " . $url . "...";
    // get_content_url() needs a base URL (prepended to matched links) and the page content
    $url_str = get_content_url($url, get_url($url));
    echo " OK\n";
    fwrite($fp, $url_str);
    ++$i;
  }
  fclose($fp);
}
// Fetch the target multimedia objects for every URL listed in $url_file
function get_object($url_file, $save_file, $split = "|--:**:--|"){
  if (!file_exists($url_file)) die($url_file . " not exist");
  $file_arr = file($url_file);
  if (!is_array($file_arr) || empty($file_arr)) die($url_file . " has no content");
  $url_arr = array_unique($file_arr);
  if (file_exists($save_file)) @unlink($save_file);
  $fp = fopen($save_file, "a+") or die("Open save file " . $save_file . " failed");
  foreach ($url_arr as $url) {
    $url = trim($url);   // strip the trailing newline left by file()
    if (empty($url)) continue;
    echo "Get " . $url . "...";
    $html_str = get_url($url);
    // echo $html_str; echo $url; exit;   // leftover debug lines; they would stop after the first URL
    $obj_str = get_content_object($html_str);
    echo " OK\n";
    fwrite($fp, $obj_str);
  }
  fclose($fp);
}
// Walk a directory and extract the objects from every file in it
function get_dir($save_file, $dir){
  $dp = opendir($dir);
  if (file_exists($save_file)) @unlink($save_file);
  $fp = fopen($save_file, "a+") or die("Open save file " . $save_file . " failed");
  while (($file = readdir($dp)) !== false) {
    if ($file != "." && $file != "..") {
      echo "Read file " . $file . "...";
      $file_content = file_get_contents($dir . $file);
      $obj_str = get_content_object($file_content);
      echo " OK\n";
      fwrite($fp, $obj_str);
    }
  }
  closedir($dp);
  fclose($fp);
}
// Fetch the content of the given URL
function get_url($url){
  $reg = '/^http:\/\/[^\/].+$/';
  if (!preg_match($reg, $url)) die($url . " invalid");
  $fp = fopen($url, "r") or die("Open url: " . $url . " failed.");
  $content = "";
  while ($fc = fread($fp, 8192)) {
    $content .= $fc;
  }
  fclose($fp);
  if (empty($content)) {
    die("Get url: " . $url . " content failed.");
  }
  return $content;
}
// Fetch a page over a raw socket instead of the URL wrappers
function get_content_by_socket($url, $host){
  $fp = fsockopen($host, 80) or die("Open " . $url . " failed");
  $header  = "GET /" . $url . " HTTP/1.1\r\n";
  $header .= "Accept: */*\r\n";
  $header .= "Accept-Language: zh-cn\r\n";
  $header .= "Accept-Encoding: gzip, deflate\r\n";
  $header .= "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Maxthon; InfoPath.1; .NET CLR 2.0.50727)\r\n";
  $header .= "Host: " . $host . "\r\n";
  $header .= "Connection: Keep-Alive\r\n";
  //$header .= "Cookie: cnzz02=2; rtime=1; ltime=1148456424859; cnzz_eid=56601755-\r\n\r\n";
  $header .= "Connection: Close\r\n\r\n";
  fwrite($fp, $header);
  $contents = "";
  while (!feof($fp)) {
    $contents .= fgets($fp, 8192);
  }
  fclose($fp);
  return $contents;
}
// Extract the URLs contained in the given content
function get_content_url($host_url, $file_contents){
  //$reg = '/^(#|javascript.*?|ftp:\/\/.+|http:\/\/.+|.*?href.*?|play.*?|index.*?|.*?asp)+$/i';
  //$reg = '/^(down.*?\.html|\d+_\d+\.htm.*?)$/i';
  $rex = "/([hH][rR][eE][Ff])\s*=\s*['\"]*([^>'\"\s]+)[\"'>]*\s*/i";
  $reg = '/^(down.*?\.html)$/i';
  preg_match_all($rex, $file_contents, $r);
  $result = ""; //array();
  foreach ($r as $c) {
    if (is_array($c)) {
      foreach ($c as $d) {
        if (preg_match($reg, $d)) { $result .= $host_url . $d . "\n"; }
      }
    }
  }
  return $result;
}
// Extract the multimedia entries from the given content
function get_content_object($str, $split = "|--:**:--|"){
  $regx = "/href\s*=\s*['\"]*([^>'\"\s]+)[\"'>]*\s*(.*?<\/b>)/i";
  preg_match_all($regx, $str, $result);
  if (count($result) == 3) {
    $result[2] = str_replace("多媒体: ", "", $result[2]);   // strip the "多媒体: " (multimedia) label
    $result[2] = str_replace(" ", "", $result[2]);
    $result = $result[1][0] . $split . $result[2][0] . "\n";
  }
  return $result;
}
?>
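As a rough usage sketch (the host, prefix, and file names below are placeholders for illustration; adjust them to the site you are actually crawling), you could first collect the URLs and then extract the media entries:

<?php
// Hypothetical driver script: collect the index URLs, then pull the objects.
get_index('url_list.txt', 'http://www.example.com/index_');  // writes matched URLs to url_list.txt
get_object('url_list.txt', 'media_list.txt');                // fetches each URL and saves the media entries
?>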

That is the entire content of this article. I hope it is helpful for your study. For more related content, please follow the PHP Chinese website!
