2.3、User’s guide (Queue)

Author: 宝宝家的隔壁老王 | Published 2018-03-28 15:30

    Queue

    Example - a concurrent web spider

    Tornado's tornado.queues module implements an asynchronous producer/consumer pattern for coroutines, analogous to the pattern implemented for threads by the Python standard library's queue module.
    
    A coroutine that yields Queue.get pauses until there is an item in the queue. If the queue has a maximum size set, a coroutine that yields Queue.put pauses until there is room for another item.
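
    The following is a minimal sketch of that blocking behaviour (it is not part of the spider example below; the producer/consumer names, the maxsize of 2, and the sleep interval are assumptions chosen for illustration). The consumer pauses on yield q.get() until an item arrives, and because the queue is bounded the producer pauses on yield q.put() whenever the queue is full.

    from tornado import gen, ioloop, queues


    @gen.coroutine
    def producer(q):
        for i in range(5):
            # Pauses here whenever the queue already holds 2 items.
            yield q.put(i)
            print('put %d' % i)


    @gen.coroutine
    def consumer(q):
        while True:
            # Pauses here until the producer has put something.
            item = yield q.get()
            print('got %d' % item)
            # Consume slowly so the producer actually has to wait.
            yield gen.sleep(0.1)


    @gen.coroutine
    def main():
        q = queues.Queue(maxsize=2)
        consumer(q)        # start the consumer in the background
        yield producer(q)  # wait until everything has been put


    if __name__ == '__main__':
        ioloop.IOLoop.current().run_sync(main)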
    
    A Queue maintains a count of unfinished tasks, which begins at zero.
    
    put increments the count; task_done decrements it.
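
    As a minimal sketch of this counting (again not part of the spider example; the single worker and the three items are assumptions for illustration), join below resolves only after task_done has been called once for every put:

    from tornado import gen, ioloop, queues


    @gen.coroutine
    def worker(q):
        while True:
            item = yield q.get()
            try:
                print('processing %s' % item)
            finally:
                # Mark one unit of work as finished.
                q.task_done()


    @gen.coroutine
    def main():
        q = queues.Queue()
        for item in ('a', 'b', 'c'):
            q.put(item)    # the unfinished-task count is now 1, 2, then 3
        worker(q)          # start a single worker in the background
        yield q.join()     # resolves once task_done() has been called 3 times
        print('all tasks done')


    if __name__ == '__main__':
        ioloop.IOLoop.current().run_sync(main)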
    
    In the web-spider example, the queue begins containing only base_url. When a worker fetches a page, it parses the links and puts the new ones in the queue, then calls task_done to decrement the counter once.
    
    Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. That worker's call to task_done then decrements the counter to zero. The main coroutine, which is waiting for join, is unpaused and finishes.
    
    #!/usr/bin/env python
    
    import time
    from datetime import timedelta
    
    try:
        from HTMLParser import HTMLParser
        from urlparse import urljoin, urldefrag
    except ImportError:
        from html.parser import HTMLParser
        from urllib.parse import urljoin, urldefrag
    
    from tornado import httpclient, gen, ioloop, queues
    
    base_url = 'http://www.tornadoweb.org/en/stable/'
    concurrency = 10
    
    
    @gen.coroutine
    def get_links_from_url(url):
        """Download the page at `url` and parse it for links.
    
        Returned links have had the fragment after `#` removed, and have been made
        absolute so, e.g. the URL 'gen.html#tornado.gen.coroutine' becomes
        'http://www.tornadoweb.org/en/stable/gen.html'.
        """
        try:
            response = yield httpclient.AsyncHTTPClient().fetch(url)
            print('fetched %s' % url)
    
            html = response.body if isinstance(response.body, str) \
                else response.body.decode(errors='ignore')
            urls = [urljoin(url, remove_fragment(new_url))
                    for new_url in get_links(html)]
        except Exception as e:
            print('Exception: %s %s' % (e, url))
            raise gen.Return([])
    
        raise gen.Return(urls)
    
    
    def remove_fragment(url):
        pure_url, frag = urldefrag(url)
        return pure_url
    
    
    def get_links(html):
        class URLSeeker(HTMLParser):
            def __init__(self):
                HTMLParser.__init__(self)
                self.urls = []
    
            def handle_starttag(self, tag, attrs):
                href = dict(attrs).get('href')
                if href and tag == 'a':
                    self.urls.append(href)
    
        url_seeker = URLSeeker()
        url_seeker.feed(html)
        return url_seeker.urls
    
    
    @gen.coroutine
    def main():
        q = queues.Queue()
        start = time.time()
        fetching, fetched = set(), set()
    
        @gen.coroutine
        def fetch_url():
            current_url = yield q.get()
            try:
                # Skip URLs that another worker has already started fetching.
                if current_url in fetching:
                    return
    
                print('fetching %s' % current_url)
                fetching.add(current_url)
                urls = yield get_links_from_url(current_url)
                fetched.add(current_url)
    
                for new_url in urls:
                    # Only follow links beneath the base URL
                    if new_url.startswith(base_url):
                        yield q.put(new_url)
    
            finally:
                q.task_done()
    
        @gen.coroutine
        def worker():
            while True:
                yield fetch_url()
    
        # Seed the queue with the start URL.
        q.put(base_url)
    
        # Start workers, then wait for the work queue to be empty.
        for _ in range(concurrency):
            worker()
        yield q.join(timeout=timedelta(seconds=300))
        assert fetching == fetched
        print('Done in %d seconds, fetched %s URLs.' % (
            time.time() - start, len(fetched)))
    
    
    if __name__ == '__main__':
        io_loop = ioloop.IOLoop.current()
        io_loop.run_sync(main)
    
