Web crawler tool phpSpider: How to maximize its effectiveness?
With the rapid development of the Internet, information has become ever easier to access, and the big-data era has made acquiring and processing large amounts of data a real need for many companies and individuals. Web crawlers are an effective data-acquisition tool and have seen steadily growing attention and use. phpSpider, a powerful web crawler framework for PHP, is easy to use and highly extensible, which has made it a first choice for many developers.
This article introduces the basic use of phpSpider and demonstrates how to get the most out of it.
1. Installation and configuration of phpSpider
Installing phpSpider is very simple and can be done through Composer. First, enter the project's root directory on the command line, then execute the following command:
composer require phpspider/phpspider
After the installation is complete, create a spider.php file in the project's root directory to hold our crawler code.
Before writing the code, we also need to configure some basic information and set a few crawler parameters. Here is a simple configuration example:
<?php
require './vendor/autoload.php';

use phpspider\core\phpspider;

$configs = array(
    'name' => 'phpSpider demo',
    'domains' => array(
        'example.com',
    ),
    'scan_urls' => array(
        'https://www.example.com/',
    ),
    'content_url_regexes' => array(
        'https://www.example.com/article/\w+',
    ),
    'list_url_regexes' => array(
        'https://www.example.com/article/\w+',
    ),
    'fields' => array(
        array(
            'name' => "title",
            'selector' => "//h1",
            'required' => true
        ),
        array(
            'name' => "content",
            'selector' => "//div[@id='content']",
            'required' => true
        ),
    ),
);

$spider = new phpspider($configs);

// Strip HTML tags from the extracted article content before it is stored.
$spider->on_extract_field = function($fieldname, $data, $page) {
    if ($fieldname == 'content') {
        $data = strip_tags($data);
    }
    return $data;
};

$spider->start();
The above is a simple crawler configuration that crawls article titles and content from https://www.example.com/.
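Once the configuration is saved, the crawler is started from the command line in the project root:

php spider.php

phpSpider runs as a command-line script and reports its crawl progress in the terminal while it works.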
2. The core functions and extended usage of phpSpider
In the example above, the scan_urls and list_url_regexes parameters determine the list-page URLs to be crawled, and the content_url_regexes parameter determines the content-page URLs to be crawled. You can configure these to match the site you are targeting.
In the fields parameter, we define each field to be extracted: its name, its extraction rule (written in XPath syntax), and whether it is required. phpSpider automatically applies these rules to every page and stores the extracted data in the results.
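For instance, extra fields can be defined alongside title and content. In the sketch below the field names and XPath selectors (author, publish_date, and the class names they match) are hypothetical and must be adapted to the markup of the pages you actually crawl; the lines must run before new phpspider($configs), since the spider reads its configuration when it is constructed:

// Hypothetical extra fields -- adapt the selectors to the real page markup.
$configs['fields'][] = array(
    'name' => "author",
    'selector' => "//span[@class='author']", // assumed class name
    'required' => false // optional: pages without an author are still kept
);
$configs['fields'][] = array(
    'name' => "publish_date",
    'selector' => "//span[@class='date']", // assumed class name
    'required' => false
);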
The $spider->on_extract_field callback lets us preprocess extracted data before it is stored, such as stripping HTML tags as in the example above.
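The same hook can go further than stripping tags. A minimal sketch that also normalizes whitespace, using the field names from the configuration above:

$spider->on_extract_field = function($fieldname, $data, $page) {
    if ($fieldname == 'content') {
        // Remove markup, then collapse the whitespace it leaves behind.
        $data = preg_replace('/\s+/', ' ', trim(strip_tags($data)));
    } elseif ($fieldname == 'title') {
        $data = trim($data);
    }
    return $data;
};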
phpSpider also provides an on_download_page callback, which runs after each page is downloaded and is a convenient place to post-process or save the raw page:

$spider->on_download_page = function($page, $phpspider) {
    // Save the page content to a local file
    file_put_contents('/path/to/save', $page['body']);
    return true;
};
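One caveat about the snippet above: the fixed path means each page overwrites the previous one. A sketch of one way around this, assuming the $page array exposes the page URL as $page['url'] alongside $page['body'] (verify this against your phpSpider version):

$spider->on_download_page = function($page, $phpspider) {
    // Derive a distinct file name for each page from a hash of its URL.
    $file = '/path/to/save/' . md5($page['url']) . '.html';
    file_put_contents($file, $page['body']);
    return true; // returning true lets phpSpider continue processing the page
};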
The number of crawler workers is set through the worker_num parameter. Running more workers in parallel speeds up crawling, but it also consumes more server resources, so choose a number that suits your server's performance and bandwidth:
$configs['worker_num'] = 10;
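Worker count is not the only lever: pacing requests keeps the crawler from overloading the target site. The sketch below assumes phpSpider's interval (delay between requests, in milliseconds) and timeout (per-request timeout, in seconds) configuration options; verify the exact names and units against the version you use:

$configs['interval'] = 1000; // assumed option: wait 1000 ms between requests
$configs['timeout'] = 5;     // assumed option: give up on a request after 5 s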
If the crawler needs to send its requests through a proxy, this can be configured with the proxy parameter:
$configs['proxy'] = array(
    'host' => '127.0.0.1',
    'port' => 8888,
);