Every PHP programmer knows that plain PHP code runs synchronously. So how do you write an asynchronous program in PHP? The answer is Swoole.
Swoole is a PHP extension; once it is installed, you can call the methods it provides to do asynchronous work.
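Before going any further, it's worth confirming the extension is actually available. A minimal sanity check (swoole_version() ships with the extension itself):
<?php
// Bail out early if the Swoole extension is missing.
if (!extension_loaded('swoole')) {
    exit("swoole extension not installed\n");
}
echo 'Swoole version: ' . swoole_version() . "\n";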
This article uses crawling web pages as the example to show how to write an asynchronous program with Swoole.
The synchronous PHP version
Before rushing into the asynchronous version, let's first implement the program synchronously in plain PHP.
<?php
/**
 * Class Crawler
 * Path: /Sync/Crawler.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];

    public function __construct($url)
    {
        $this->url = $url;
    }

    public function visitOneDegree()
    {
        $this->loadPageUrls();
        $this->visitAll();
    }

    private function loadPageUrls()
    {
        $content = $this->visit($this->url);
        // Crude pattern: an http/ftp URL followed by a delimiter character.
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }

    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }

    private function visit($url)
    {
        // The @ suppresses warnings from unreachable URLs.
        return @file_get_contents($url);
    }
}
<?php
/**
* crawler.php
*/
require_once 'Sync/Crawler.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree();
$timeUsed = microtime(true) - $start;
echo "time used: " . $timeUsed;
/* output:
time used: 6.2610177993774
*/
A first look at an async crawler with Swoole
Six-plus seconds, because every file_get_contents call blocks until its response arrives; the per-page latencies simply add up. Can Swoole overlap them? Let's first see how the official documentation fetches a page asynchronously.
Usage example:
Swoole\Async::dnsLookup("www.baidu.com", function ($domainName, $ip) {
    // Once the domain resolves, open an async HTTP client to the IP.
    $cli = new swoole_http_client($ip, 80);
    $cli->setHeaders([
        'Host' => $domainName,
        "User-Agent" => 'Chrome/49.0.2587.3',
        'Accept' => 'text/html,application/xhtml+xml,application/xml',
        'Accept-Encoding' => 'gzip',
    ]);
    // The response is delivered to this callback when it arrives.
    $cli->get('/index.html', function ($cli) {
        echo "Length: " . strlen($cli->body) . "\n";
        echo $cli->body;
    });
});
It looks like all we need is a light retrofit of the synchronous file_get_contents code and we'll have an async version; success seems trivially within reach.
And so we end up with the following code:
<?php
/**
 * Class Crawler
 * Path: /Async/CrawlerV1.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    private $loaded = false;

    public function __construct($url)
    {
        $this->url = $url;
    }

    public function visitOneDegree()
    {
        // Fire the request for the front page, then poll for the result
        // once a second, giving up after three tries.
        $this->visit($this->url, true);
        $retryCount = 3;
        do {
            sleep(1);
            $retryCount--;
        } while ($retryCount > 0 && $this->loaded == false);
        $this->visitAll();
    }

    private function loadPage($content)
    {
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }

    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }

    private function visit($url, $root = false)
    {
        $urlInfo = parse_url($url);
        Swoole\Async::dnsLookup($urlInfo['host'], function ($domainName, $ip) use ($urlInfo, $root) {
            $cli = new swoole_http_client($ip, 80);
            $cli->setHeaders([
                'Host' => $domainName,
                "User-Agent" => 'Chrome/49.0.2587.3',
                'Accept' => 'text/html,application/xhtml+xml,application/xml',
                'Accept-Encoding' => 'gzip',
            ]);
            $cli->get($urlInfo['path'], function ($cli) use ($root) {
                // Only the root page gets parsed for links.
                if ($root) {
                    $this->loadPage($cli->body);
                    $this->loaded = true;
                }
            });
        });
    }
}
<?php
/**
* crawler.php
*/
require_once 'Async/CrawlerV1.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree();
$timeUsed = microtime(true) - $start;
echo "time used: " . $timeUsed;
/* output:
time used: 3.011773109436
*/
It ran for three seconds. Pay attention to my implementation: after firing the request for the front page, it polls for the result once a second and, if three polls come up empty, gives up. The three seconds here appear to be exactly that: three fruitless polls followed by the exit.
It seems I was just too impatient and didn't give it enough time to get ready. Fine, let's raise the poll count to ten and see what happens.
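The only change is the retry counter in visitOneDegree() above:
$retryCount = 10; // was 3: poll up to ten times, one second apart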
time used: 10.034232854843
You can imagine my mood at this point. Could it be a performance problem in Swoole itself? Why was there still no result after ten whole seconds? Or was my approach simply wrong? As old Marx said, "practice is the sole criterion for testing truth." Clearly it would take some debugging to find the cause.
So I set a breakpoint at
$this->visitAll();
and another at
$this->loadPage($cli->body);
What I found was that execution always reached visitAll() first, and only then went on to loadPage().
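The behavior is easy to reproduce in isolation. Here is a minimal standalone sketch (my own, using the same Swoole 1.x async API as above) that shows the same ordering:
<?php
echo "1: before the async call\n";
Swoole\Async::dnsLookup("www.swoole.com", function ($domainName, $ip) {
    echo "3: inside the callback: $domainName -> $ip\n";
});
sleep(1);
echo "2: after sleep(1), the callback still hasn't run\n";
// The output order is always 1, 2, 3 -- no matter how long we sleep.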
After mulling it over, I have a rough idea of the cause. So what exactly is it?
But I've already filled my 800 words, so let's stop here. To find out what happens next, stay tuned for the next installment.