https://github.com/nietaki/crawlie
Crawlie implements a parallel web crawler using Elixir's GenStage. Most of the work is done by Crawlie.Stage.UrlManager, which consumes the user-provided list of URLs, receives the URLs discovered during processing, makes sure every URL is processed only once, and keeps the set of discovered URLs as small as possible by traversing the URL tree depth-first.
URLs are taken from Crawlie.Stage.UrlManager by a GenStage Flow, fetched in parallel using HTTPoison, and the responses are processed with the user-provided callback functions. Newly discovered URLs are sent back to the UrlManager.
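A minimal usage sketch of the flow described above, assuming a module `MyLogic` that implements the `Crawlie.ParserLogic` behaviour (`MyLogic` and the option values here are placeholders for illustration, not part of the library):

```elixir
# Hypothetical usage sketch; MyLogic is assumed to implement the
# Crawlie.ParserLogic callbacks for parsing fetched pages and
# extracting URLs and data from them.
urls = ["https://en.wikipedia.org/wiki/Elixir_(programming_language)"]

results =
  urls
  |> Crawlie.crawl(MyLogic, max_depth: 2)  # returns a Flow of results
  |> Enum.to_list()                        # runs the crawl and collects them
```

Because `Crawlie.crawl` returns a `Flow`, nothing is fetched until the flow is consumed (here by `Enum.to_list/1`).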
(Figure: flow diagram of Crawlie's crawling pipeline.)
If you are interested in crawl statistics, have a look at Crawlie.crawl_and_track_stats/3. It starts a Stats GenServer in Crawlie's supervision tree that collects data about the crawling process.
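A hedged sketch of the stats-tracking variant. The return shape and the query function below are assumptions based on the library's documentation; verify the exact names against your Crawlie version:

```elixir
# Assumed shape: crawl_and_track_stats/3 returns the results flow together
# with a reference used to query the Stats server (names are assumptions).
{flow, stats_ref} = Crawlie.crawl_and_track_stats(urls, MyLogic, max_depth: 2)

results = Enum.to_list(flow)

# Query the statistics collected during the crawl:
stats = Crawlie.Stats.Server.get_stats(stats_ref)
```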
To use the package:
Add crawlie to your list of dependencies in mix.exs:
def deps do
  [{:crawlie, "~> 0.6.0"}]
end
Ensure crawlie is started before your application:
def application do
  [applications: [:crawlie]]
end
Check out the example project:
https://github.com/nietaki/crawlie_example
$ mix deps.get
$ mix crawlie.example
most popular words longer than 5 letters in the vicinity of ["https://en.wikipedia.org/wiki/Elixir_(programming_language)", "https://en.wikipedia.org/wiki/Mainframe_computer"]:
{word, count}
=============
{"system", 1973}
{"computer", 1618}
{"systems", 1257}
{"programming", 1165}
{"language", 1147}
{"software", 1052}
{"operating", 1022}
{"computers", 887}
{"languages", 873}
{"program", 825}
{"memory", 814}
{"number", 798}
{"called", 767}
{"between", 724}
{"company", 693}
{"support", 678}
{"different", 649}
{"including", 623}
{"however,", 620}
{"control", 590}
The listing above is the output of the example run.
For the source code, the two files to read are lib/mix/tasks/crawlie/example.ex and lib/crawlie_example/word_count_logic.ex.
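The core of the word counting can be sketched in plain Elixir as follows (a simplification for illustration, not the example's actual code; `Enum.frequencies/1` requires Elixir 1.10+):

```elixir
# Count words longer than 5 letters, most frequent first.
text = "computer systems and computer languages run on computer systems"

text
|> String.downcase()
|> String.split(~r/\s+/, trim: true)
|> Enum.filter(&(String.length(&1) > 5))
|> Enum.frequencies()
|> Enum.sort_by(fn {_word, count} -> -count end)
# → [{"computer", 3}, {"systems", 2}, {"languages", 1}]
```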