
Practical sharing of using Swoole to asynchronously crawl web pages


PHP programmers know that PHP programs are synchronous by default. So how do you write an asynchronous program in PHP? The answer is Swoole. Here we use crawling web content as an example to show how to write an asynchronous program with Swoole.

The synchronous PHP program

Before writing the asynchronous program, let's not rush: first implement a synchronous version in plain PHP.

<?php
/**
 * Class Crawler
 * Path: /Sync/Crawler.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    public function __construct($url)
    {
        $this->url = $url;
    }
    public function visitOneDegree()
    {
        $this->loadPageUrls();
        $this->visitAll();
    }
    private function loadPageUrls()
    {
        $content = $this->visit($this->url);
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }
    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }
    private function visit($url)
    {
        return @file_get_contents($url);
    }
}
<?php
/**
 * crawler.php
 */
require_once 'Sync/Crawler.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree();
$timeUsed = microtime(true) - $start;
echo "time used: " . $timeUsed;
/* output:
time used: 6.2610177993774
*/
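Before moving on to the asynchronous version, here is a minimal standalone sketch, not part of the original article, showing what the regex in loadPageUrls() actually extracts; the sample HTML string is invented purely for illustration:

<?php
// Standalone check of the URL-matching pattern used in loadPageUrls().
$content = 'See <a href="http://www.swoole.com/docs">the docs</a> or ftp://mirror.example.com/file.tar.gz for details.';
$pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
preg_match_all($pattern, $content, $matched);
// $matched[0] holds each full match (the URL plus the character that ended it);
// the crawler pushes these onto $toVisit after filtering out duplicates.
print_r($matched[0]);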

A first attempt at an asynchronous crawler with Swoole

First, let's look at the official usage example for asynchronous page fetching:

Swoole\Async::dnsLookup("www.baidu.com", function ($domainName, $ip) {
    $cli = new swoole_http_client($ip, 80);
    $cli->setHeaders([
        'Host' => $domainName,
        "User-Agent" => 'Chrome/49.0.2587.3',
        'Accept' => 'text/html,application/xhtml+xml,application/xml',
        'Accept-Encoding' => 'gzip',
    ]);
    $cli->get('/index.html', function ($cli) {
        echo "Length: " . strlen($cli->body) . "\n";
        echo $cli->body;
    });
});

It looks as if asynchronous fetching can be achieved by slightly modifying the synchronous file_get_contents() code; success seems just around the corner.
So we end up with the following code:

<?php
/**
 * Class Crawler
 * Path: /Async/CrawlerV1.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    private $loaded = false;
    public function __construct($url)
    {
        $this->url = $url;
    }
    public function visitOneDegree()
    {
        $this->visit($this->url, true);
        $retryCount = 3;
        do {
            sleep(1);
            $retryCount--;
        } while ($retryCount > 0 && $this->loaded == false);
        $this->visitAll();
    }
    private function loadPage($content)
    {
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }
    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }
    private function visit($url, $root = false)
    {
        $urlInfo = parse_url($url);
        Swoole\Async::dnsLookup($urlInfo['host'], function ($domainName, $ip) use($urlInfo, $root) {
            $cli = new swoole_http_client($ip, 80);
            $cli->setHeaders([
                'Host' => $domainName,
                "User-Agent" => 'Chrome/49.0.2587.3',
                'Accept' => 'text/html,application/xhtml+xml,application/xml',
                'Accept-Encoding' => 'gzip',
            ]);
            $cli->get($urlInfo['path'], function ($cli) use ($root) {
                if ($root) {
                    $this->loadPage($cli->body);
                    $this->loaded = true;
                }
            });
        });
    }
}
<?php
/**
 * crawler.php
 */
require_once 'Async/CrawlerV1.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree();
$timeUsed = microtime(true) - $start;
echo "time used: " . $timeUsed;
/* output:
time used: 3.011773109436
*/

The run took 3 seconds. Note how the implementation works: after initiating the request for the home page, it polls for the result once a second and gives up after three polls. The 3 seconds here appear to be the program exiting after polling three times without a result.
Maybe I was just too impatient and didn't give the requests enough time to finish. Fine, let's raise the number of polls to 10 and look at the result.

time used: 10.034232854843

You can imagine how I felt at this point.

Is this a performance problem in Swoole? Why is there still no result after 10 seconds? Or is my approach simply wrong? As Marx said, "Practice is the sole criterion for testing truth." It seems we need to debug it to find out why.

So I set breakpoints at $this->visitAll(); and at $this->loadPage($cli->body);. It turned out that visitAll() always executes first, and loadPage() only runs afterwards.

After thinking about it for a while, I had a rough idea of the cause. So what is actually going on behind the scenes?

The asynchronous execution model I expected looks like this:

[Figure: the expected asynchronous execution model]

However, that is not what really happens. From debugging, the actual model looks roughly like this:


[Figure: the actual execution model]

In other words, no matter how much I increase the number of retries, the data will never be ready inside visitOneDegree(): the queued callbacks only start executing after the current function has finished, because that is when the event loop gets a chance to run. The asynchrony here merely saves the time spent setting up connections.
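To make this concrete, here is a minimal sketch, not from the original article, assuming the old Swoole async API with swoole_timer_after(). It shows that a callback queued on the event loop cannot fire while the current function is blocking in sleep(), which is exactly why the polling loop in CrawlerV1 never saw the data arrive:

<?php
// A callback queued on Swoole's event loop only runs once the current
// function returns; sleep() blocks the whole process, so the loop below
// never observes $done becoming true.
$done = false;
swoole_timer_after(100, function () use (&$done) {
    $done = true;
    echo "callback fired\n";
});
$retry = 3;
do {
    sleep(1); // blocks the process; the event loop is starved
    $retry--;
} while ($retry > 0 && !$done);
echo $done ? "data ready\n" : "still not ready\n"; // always prints "still not ready"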
The question, then, is how to make the program run my logic once the data has actually been prepared.
Let's first look at how Swoole's official example code for executing asynchronous tasks is written:

$serv = new swoole_server("127.0.0.1", 9501);
// Set the number of task worker processes
$serv->set(array('task_worker_num' => 4));
$serv->on('receive', function($serv, $fd, $from_id, $data) {
    // Dispatch an asynchronous task
    $task_id = $serv->task($data);
    echo "Dispatch AsyncTask: id=$task_id\n";
});
// Handle the asynchronous task
$serv->on('task', function ($serv, $task_id, $from_id, $data) {
    echo "New AsyncTask[id=$task_id]".PHP_EOL;
    // Return the result of the task
    $serv->finish("$data -> OK");
});
// Handle the result of the asynchronous task
$serv->on('finish', function ($serv, $task_id, $data) {
    echo "AsyncTask[$task_id] Finish: $data".PHP_EOL;
});
$serv->start();

As you can see, the official example passes the follow-up execution logic in as an anonymous function. Seen this way, things become much simpler.
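Stripped of the Swoole specifics, the pattern is simply: hand the what-happens-next logic to the asynchronous step as a callable. Here is a minimal, framework-free sketch (fetchAsync and $onDone are made-up names, and the response is simulated immediately rather than delivered by an event loop):

<?php
// Generic callback pattern: the caller supplies the "what to do when done" logic.
function fetchAsync($url, callable $onDone)
{
    // In real code the request would be issued here and $onDone invoked later
    // from the event loop; for illustration we hand back a fake body at once.
    $onDone("<html>pretend response body for $url</html>");
}

fetchAsync('http://www.swoole.com/', function ($body) {
    // Runs only once the data is available.
    echo "got " . strlen($body) . " bytes\n";
});

Applying the same idea to our crawler, visit() now accepts a callback and invokes it once the page body has arrived: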

<?php
/**
 * Class Crawler
 * Path: /Async/Crawler.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    public function __construct($url)
    {
        $this->url = $url;
    }
    public function visitOneDegree()
    {
        $this->visit($this->url, function ($content) {
            $this->loadPage($content);
            $this->visitAll();
        });
    }
    private function loadPage($content)
    {
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }
    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }
    private function visit($url, $callBack = null)
    {
        $urlInfo = parse_url($url);
        Swoole\Async::dnsLookup($urlInfo['host'], function ($domainName, $ip) use($urlInfo, $callBack) {
            if (!$ip) {
                return;
            }
            $cli = new swoole_http_client($ip, 80);
            $cli->setHeaders([
                'Host' => $domainName,
                "User-Agent" => 'Chrome/49.0.2587.3',
                'Accept' => 'text/html,application/xhtml+xml,application/xml',
                'Accept-Encoding' => 'gzip',
            ]);
            $cli->get($urlInfo['path'], function ($cli) use ($callBack) {
                if ($callBack) {
                    call_user_func($callBack, $cli->body);
                }
                $cli->close();
            });
        });
    }
}

After reading this code, it feels familiar: the callbacks seen everywhere in Node.js development exist for a reason. Now it suddenly makes sense that callbacks exist precisely to solve asynchronous problems.
I ran the program and it took only 0.0007s; it was over before it even started! Can asynchrony really improve efficiency that much? Of course not: there is something wrong with our code.
Because the work is now asynchronous, the logic that calculates the elapsed time runs before the tasks have actually finished. It seems it's time to use a callback again.

/**
 * Async/Crawler.php
 */
    public function visitOneDegree($callBack)
    {
        $this->visit($this->url, function ($content) use($callBack) {
            $this->loadPage($content);
            $this->visitAll();
            call_user_func($callBack);
        });
    }
<?php
/**
 * crawler.php
 */
require_once 'Async/Crawler.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree(function () use($start) {
    $timeUsed = microtime(true) - $start;
    echo "time used: " . $timeUsed;
});
/*output:
time used: 0.068463802337646
*/

Now the result looks much more believable.
Let's compare synchronous and asynchronous: the synchronous version took 6.26s while the asynchronous one took 0.068s, a difference of 6.192s. Or, put more accurately, roughly 90 times faster (6.26 / 0.068 ≈ 92)!
Of course, asynchronous code is far more efficient than synchronous code, but the logic is more convoluted: the code fills up with callbacks, which are not easy to follow.
The Swoole documentation has a very pertinent note on choosing between asynchronous and synchronous, which I'd like to share:

We do not advocate using asynchronous callbacks for feature development. The traditional synchronous PHP approach is the simplest and best way to implement features and logic. Callbacks everywhere, as in node.js, only sacrifice maintainability and development efficiency.


That's all for this article. If you have any questions, feel free to discuss them in the comments below.
