This article shows how to fetch and analyze web pages with PHP, explaining the principles of the technique in detail through a worked example.
Fetching and analyzing a page is very simple. This tutorial walks you through an example step by step. Let's start!
First, we must decide which URL to fetch. The address can be hard-coded in the script or passed in through the query string. For simplicity, let's set the variable directly in the script.
<?php $url = 'http://www.php.net'; ?>
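If you would rather pass the address in via the query string, a minimal sketch could look like this (the "url" parameter name is an assumption for illustration, not from the original):

```php
<?php
// Alternative to hard-coding the URL: read it from the query string,
// e.g. fetch.php?url=http://www.php.net ("url" is a hypothetical parameter name).
$url = isset($_GET['url']) ? $_GET['url'] : 'http://www.php.net';
?>
```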
The second step is to fetch the specified page and store it in an array with the file() function, which reads the remote file into an array of lines.
<?php $url = 'http://www.php.net'; $lines_array = file($url); ?>
Okay, now the page's lines are in the array. However, the text we want to analyze may not all be on one line. To handle this, we can simply flatten the array $lines_array into a single string. The implode($glue, $pieces) function does exactly that. If you plan to use explode() later to split the string back into an array, it may be better to use "|" or "!" or some other such delimiter as the glue. For our purposes, though, an empty string (or a space) works fine. The second parameter is required as well: it is the array you want implode() to join.
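As a quick illustration of the glue parameter, a string joined with a visible delimiter can be split back apart with explode():

```php
<?php
// Join an array of lines with "|" as the glue, then split it back apart.
$lines = array("first line", "second line", "third line");
$joined = implode('|', $lines);     // "first line|second line|third line"
$restored = explode('|', $joined);  // the original three-element array again
echo $joined;
?>
```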
<?php $url = 'http://www.php.net'; $lines_array = file($url); $lines_string = implode('', $lines_array); ?>
Now that the fetching work is done, it's time to analyze. For the purposes of this example, we want to get everything between <head> and </head>. To extract that string, we need something called a regular expression.
<?php $url = 'http://www.php.net'; $lines_array = file($url); $lines_string = implode('', $lines_array); preg_match('/<head>(.*)<\/head>/is', $lines_string, $head); ?>
Let's take a look at the code. As you can see, the preg_match() function is called in the following form (the older eregi() function served the same purpose, but it was removed in PHP 7; preg_match() with the /i modifier is its case-insensitive equivalent):
preg_match('/<head>(.*)<\/head>/is', $lines_string, $head);
"(.*)" matches everything, so the pattern can be read as "capture everything between <head> and </head>". The /i modifier makes the match case-insensitive, and /s lets the dot match across line breaks. $lines_string is the string we are analyzing, and $head is the array where the captured results are stored.
Finally, we can output the data. Since the page contains only one section between <head> and </head>, we can safely assume there is exactly one match, and $head[1] holds the captured text we want. Let's print it out.
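Putting the steps together, the complete minimal script looks like this (a sketch using preg_match(); error handling for the network fetch is omitted):

```php
<?php
// Fetch the page, flatten it into one string, and print the <head> section.
$url = 'http://www.php.net';
$lines_array = file($url);                  // the page as an array of lines
$lines_string = implode('', $lines_array);  // flattened into a single string
// /i = case-insensitive, /s = let "." match across line breaks
if (preg_match('/<head>(.*)<\/head>/is', $lines_string, $head)) {
    echo $head[1];                          // everything between <head> and </head>
}
?>
```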
Here is all the code for a fuller example: a set of helper functions for crawling index pages, collecting URLs, and extracting multimedia links.
<?php
// Collect all content URLs and save them to a file
function get_index($save_file, $prefix = "index_") {
    $count = 68;
    $i = 1;
    if (file_exists($save_file)) @unlink($save_file);
    $fp = fopen($save_file, "a+") or die("Open " . $save_file . " failed");
    while ($i < $count) {
        $url = $prefix . $i . ".htm";
        echo "Get " . $url . "...";
        // prepend the page URL to each link extracted from its content
        $url_str = get_content_url($url, get_url($url));
        echo " OK\n";
        fwrite($fp, $url_str);
        ++$i;
    }
    fclose($fp);
}

// Fetch the target multimedia objects
function get_object($url_file, $save_file, $split = "|--:**:--|") {
    if (!file_exists($url_file)) die($url_file . " not exist");
    $file_arr = file($url_file);
    if (!is_array($file_arr) || empty($file_arr)) die($url_file . " has no content");
    $url_arr = array_unique($file_arr);
    if (file_exists($save_file)) @unlink($save_file);
    $fp = fopen($save_file, "a+") or die("Open save file " . $save_file . " failed");
    foreach ($url_arr as $url) {
        if (empty($url)) continue;
        echo "Get " . $url . "...";
        $html_str = get_url($url);
        $obj_str = get_content_object($html_str);
        echo " OK\n";
        fwrite($fp, $obj_str);
    }
    fclose($fp);
}

// Walk a directory and extract content from every file in it
function get_dir($save_file, $dir) {
    $dp = opendir($dir);
    if (file_exists($save_file)) @unlink($save_file);
    $fp = fopen($save_file, "a+") or die("Open save file " . $save_file . " failed");
    while (($file = readdir($dp)) !== false) {
        if ($file != "." && $file != "..") {
            echo "Read file " . $file . "...";
            $file_content = file_get_contents($dir . $file);
            $obj_str = get_content_object($file_content);
            echo " OK\n";
            fwrite($fp, $obj_str);
        }
    }
    fclose($fp);
}

// Fetch the content of the specified URL
function get_url($url) {
    $reg = '/^http:\/\/[^\/].+$/';
    if (!preg_match($reg, $url)) die($url . " invalid");
    $fp = fopen($url, "r") or die("Open url: " . $url . " failed.");
    $content = '';
    while ($fc = fread($fp, 8192)) {
        $content .= $fc;
    }
    fclose($fp);
    if (empty($content)) {
        die("Get url: " . $url . " content failed.");
    }
    return $content;
}

// Fetch the specified page over a raw socket
function get_content_by_socket($url, $host) {
    $fp = fsockopen($host, 80) or die("Open " . $url . " failed");
    $header  = "GET /" . $url . " HTTP/1.1\r\n";
    $header .= "Accept: */*\r\n";
    $header .= "Accept-Language: zh-cn\r\n";
    $header .= "Accept-Encoding: gzip, deflate\r\n";
    $header .= "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Maxthon; InfoPath.1; .NET CLR 2.0.50727)\r\n";
    $header .= "Host: " . $host . "\r\n";
    $header .= "Connection: Close\r\n\r\n";
    fwrite($fp, $header);
    $contents = '';
    while (!feof($fp)) {
        $contents .= fgets($fp, 8192);
    }
    fclose($fp);
    return $contents;
}

// Extract the URLs contained in the given content
function get_content_url($host_url, $file_contents) {
    $rex = "/([hH][rR][eE][Ff])\s*=\s*['\"]*([^>'\"\s]+)[\"'>]*\s*/i";
    $reg = '/^(down.*?\.html)$/i';
    preg_match_all($rex, $file_contents, $r);
    $result = "";
    foreach ($r as $c) {
        if (is_array($c)) {
            foreach ($c as $d) {
                if (preg_match($reg, $d)) {
                    $result .= $host_url . $d . "\n";
                }
            }
        }
    }
    return $result;
}

// Extract the multimedia file links from the given content
function get_content_object($str, $split = "|--:**:--|") {
    $regx = "/href\s*=\s*['\"]*([^>'\"\s]+)[\"'>]*\s*(.*?<\/b>)/i";
    preg_match_all($regx, $str, $result);
    if (count($result) == 3) {
        $result[2] = str_replace("多媒体: ", "", $result[2]); // strip the "多媒体: " ("Multimedia: ") label
        $result[2] = str_replace(" ", "", $result[2]);
        $result = $result[1][0] . $split . $result[2][0] . "\n";
    }
    return $result;
}
?>
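To see what get_content_object() actually matches, here is a small self-contained check against a fabricated fragment (the sample HTML and expected values are assumptions based on the pattern the function uses, not data from the original article):

```php
<?php
// A fragment of the kind get_content_object() targets: an href followed by bold text.
$html = '<a href="down1.html"><b>song.mp3</b>';
$regx = "/href\s*=\s*['\"]*([^>'\"\s]+)[\"'>]*\s*(.*?<\/b>)/i";
preg_match_all($regx, $html, $result);
echo $result[1][0];  // the link target
?>
```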
The above is the entire content of this article. I hope it is helpful to everyone's study.