Scrapy Getting Started Tutorial

Author: 疯帮主 | Published 2018-11-06 13:55

    1. Create a Scrapy project

    (Crawler) master@ubuntu-of-master:~/code/crawler/project$ scrapy startproject tutorial
    New Scrapy project 'tutorial', using template directory '/home/master/anaconda3/envs/Crawler/lib/python3.7/site-packages/scrapy/templates/project', created in:
        /home/master/code/crawler/project/tutorial
    
    You can start your first spider with:
        cd tutorial
        scrapy genspider example example.com
    (Crawler) master@ubuntu-of-master:~/code/crawler/project$ cd tutorial/
    (Crawler) master@ubuntu-of-master:~/code/crawler/project/tutorial$ scrapy genspider douban douban.com
    Created spider 'douban' using template 'basic' in module:
      tutorial.spiders.douban
    (Crawler) master@ubuntu-of-master:~/code/crawler/project/tutorial$ tree
    .
    ├── scrapy.cfg
    └── tutorial
        ├── __init__.py
        ├── items.py
        ├── middlewares.py
        ├── pipelines.py
        ├── __pycache__
        │   ├── __init__.cpython-37.pyc
        │   └── settings.cpython-37.pyc
        ├── settings.py
        └── spiders
            ├── douban.py
            ├── __init__.py
            └── __pycache__
                └── __init__.cpython-37.pyc
    
    4 directories, 11 files
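
    For reference, scrapy genspider douban douban.com fills tutorial/spiders/douban.py with roughly the following skeleton (Scrapy's 'basic' template; the exact defaults may differ slightly between versions). Step 3 below replaces it with the real parsing logic.

    # filename: tutorial/spiders/douban.py (as generated by genspider; rewritten in step 3)
    # -*- coding: utf-8 -*-
    import scrapy

    class DoubanSpider(scrapy.Spider):
        name = 'douban'
        allowed_domains = ['douban.com']
        start_urls = ['http://douban.com/']

        def parse(self, response):
            pass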
    

    2. Define the Item to extract

    # filename: tutorial/items.py
    import scrapy

    class TutorialItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        title = scrapy.Field()
        url = scrapy.Field()
        description = scrapy.Field()
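
    Items behave much like Python dicts, so you can sanity-check the definition in a plain Python (or scrapy shell) session. A minimal illustration; the sample values are taken from the crawl output in step 6:

    from tutorial.items import TutorialItem

    item = TutorialItem(title='肖申克的救赎')
    item['url'] = 'https://movie.douban.com/subject/1292052/'
    print(dict(item))        # {'title': '肖申克的救赎', 'url': 'https://movie.douban.com/subject/1292052/'}
    # item['year'] = 1994    # would raise KeyError: only declared Fields can be set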
    

    3. Write a spider to crawl the site

    # filename: tutorial/spiders/douban.py
    # -*- coding: utf-8 -*-
    import scrapy
    from tutorial.items import TutorialItem
    
    class DoubanSpider(scrapy.Spider):
        name = 'douban'
        allowed_domains = ['douban.com']
        start_urls = ['https://movie.douban.com/top250']
    
        def parse(self, response):
            # one div.item per movie on the Top 250 list page
            movie_list = response.xpath("//ol[@class='grid_view']/li/div[@class='item']")
    
            for movie in movie_list:
                item = TutorialItem()
                item['title'] = movie.xpath(".//div[@class='hd']/a/span[1]/text()").extract_first()
                item['url'] = movie.xpath(".//div[@class='hd']/a/@href").extract_first()
                # the one-line tag line; extract_first() returns None if it is missing
                item['description'] = movie.xpath(".//span[@class='inq']/text()").extract_first()
                yield item
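
    Note that start_urls only covers the first page of the Top 250, which is why the crawl in step 6 scrapes exactly 25 items. A possible extension, sketched below, is to follow the "next page" link at the end of parse(); the XPath here is an assumption about Douban's markup and should be verified in scrapy shell first.

    # Possible extension (not part of the original tutorial): follow pagination.
    # These lines would go at the end of parse(), after the for loop.
    next_page = response.xpath("//span[@class='next']/a/@href").extract_first()
    if next_page:
        # the link is relative (e.g. '?start=25&filter='), so join it with the current URL
        yield scrapy.Request(response.urljoin(next_page), callback=self.parse)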
    
    

    4. Write an Item Pipeline to store the extracted Items (i.e. the data)

    # filename: tutorial/pipelines.py
    class TutorialPipeline(object):
        def open_spider(self, spider):
            # called once when the spider starts; open the output file in append mode
            self.fo = open('doubantop250.csv', 'a')
    
        def process_item(self, item, spider):
            data = "{},{},{}\n".format(item['title'], item['description'], item['url'])
            self.fo.write(data)
            return item
    
        def close_spider(self, spider):
            # called once when the spider finishes
            self.fo.close()
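
    One caveat: the pipeline builds each CSV row by hand, so a comma inside a description field would shift the columns. A safer variant (a sketch, not the original author's code) uses Python's csv module, which takes care of quoting, and opens the file with an explicit encoding:

    import csv

    class TutorialPipeline(object):
        def open_spider(self, spider):
            self.fo = open('doubantop250.csv', 'a', newline='', encoding='utf-8')
            self.writer = csv.writer(self.fo)

        def process_item(self, item, spider):
            # csv.writer quotes fields that contain commas or quotes
            self.writer.writerow([item['title'], item['description'], item['url']])
            return item

        def close_spider(self, spider):
            self.fo.close()

    For a simple dump like this, Scrapy's built-in feed exports can also write the items without any custom pipeline, e.g. scrapy crawl douban -o douban.csv.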
    

    5. Configuration

    # filename: settings.py
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    
    # See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
       'tutorial.pipelines.TutorialPipeline': 300,
    }
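
    If the spider is later extended to crawl more than one page, it is worth throttling requests. The settings below are standard Scrapy options, shown only as a suggestion; they were not part of the original run:

    # Optional: be polite when crawling more than one page
    DOWNLOAD_DELAY = 1             # wait about 1 second between requests
    # AUTOTHROTTLE_ENABLED = True  # or let Scrapy adjust the delay automatically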
    
    

    6. Crawl

    (Crawler) master@ubuntu-of-master:~/code/crawler/project/tutorial$ scrapy crawl douban
    2018-11-06 12:30:20 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: tutorial)
    2018-11-06 12:30:20 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.1 (default, Oct 23 2018, 19:19:42) - [GCC 7.3.0], pyOpenSSL 18.0.0 (OpenSSL 1.1.1  11 Sep 2018), cryptography 2.3.1, Platform Linux-4.15.0-38-generic-x86_64-with-debian-stretch-sid
    2018-11-06 12:30:20 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'tutorial', 'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'}
    2018-11-06 12:30:20 [scrapy.middleware] INFO: Enabled extensions:
    ['scrapy.extensions.corestats.CoreStats',
     'scrapy.extensions.telnet.TelnetConsole',
     'scrapy.extensions.memusage.MemoryUsage',
     'scrapy.extensions.logstats.LogStats']
    2018-11-06 12:30:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
    ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
     'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
     'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
     'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
     'scrapy.downloadermiddlewares.retry.RetryMiddleware',
     'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
     'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
     'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
     'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
     'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
     'scrapy.downloadermiddlewares.stats.DownloaderStats']
    2018-11-06 12:30:20 [scrapy.middleware] INFO: Enabled spider middlewares:
    ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
     'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
     'scrapy.spidermiddlewares.referer.RefererMiddleware',
     'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
     'scrapy.spidermiddlewares.depth.DepthMiddleware']
    2018-11-06 12:30:20 [scrapy.middleware] INFO: Enabled item pipelines:
    ['tutorial.pipelines.TutorialPipeline']
    2018-11-06 12:30:20 [scrapy.core.engine] INFO: Spider opened
    2018-11-06 12:30:20 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2018-11-06 12:30:20 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
    2018-11-06 12:30:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://movie.douban.com/top250> (referer: None)
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '希望让人自由。',
     'title': '肖申克的救赎',
     'url': 'https://movie.douban.com/subject/1292052/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '风华绝代。',
     'title': '霸王别姬',
     'url': 'https://movie.douban.com/subject/1291546/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '怪蜀黍和小萝莉不得不说的故事。',
     'title': '这个杀手不太冷',
     'url': 'https://movie.douban.com/subject/1295644/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '一部美国近现代史。',
     'title': '阿甘正传',
     'url': 'https://movie.douban.com/subject/1292720/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '最美的谎言。',
     'title': '美丽人生',
     'url': 'https://movie.douban.com/subject/1292063/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '失去的才是永恒的。 ',
     'title': '泰坦尼克号',
     'url': 'https://movie.douban.com/subject/1292722/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '最好的宫崎骏,最好的久石让。 ',
     'title': '千与千寻',
     'url': 'https://movie.douban.com/subject/1291561/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '拯救一个人,就是拯救整个世界。',
     'title': '辛德勒的名单',
     'url': 'https://movie.douban.com/subject/1295124/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '诺兰给了我们一场无法盗取的梦。',
     'title': '盗梦空间',
     'url': 'https://movie.douban.com/subject/3541415/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '小瓦力,大人生。',
     'title': '机器人总动员',
     'url': 'https://movie.douban.com/subject/2131459/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '永远都不能忘记你所爱的人。',
     'title': '忠犬八公的故事',
     'url': 'https://movie.douban.com/subject/3011091/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '英俊版憨豆,高情商版谢耳朵。',
     'title': '三傻大闹宝莱坞',
     'url': 'https://movie.douban.com/subject/3793023/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '每个人都要走一条自己坚定了的路,就算是粉身碎骨。 ',
     'title': '海上钢琴师',
     'url': 'https://movie.douban.com/subject/1292001/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '天籁一般的童声,是最接近上帝的存在。 ',
     'title': '放牛班的春天',
     'url': 'https://movie.douban.com/subject/1291549/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '一生所爱。',
     'title': '大话西游之大圣娶亲',
     'url': 'https://movie.douban.com/subject/1292213/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '如果再也不能见到你,祝你早安,午安,晚安。',
     'title': '楚门的世界',
     'url': 'https://movie.douban.com/subject/1292064/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '千万不要记恨你的对手,这样会让你失去理智。',
     'title': '教父',
     'url': 'https://movie.douban.com/subject/1291841/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '人人心中都有个龙猫,童年就永远不会消失。',
     'title': '龙猫',
     'url': 'https://movie.douban.com/subject/1291560/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '爱是一种力量,让我们超越时空感知它的存在。',
     'title': '星际穿越',
     'url': 'https://movie.douban.com/subject/1889243/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '我们一路奋战不是为了改变世界,而是为了不让世界改变我们。',
     'title': '熔炉',
     'url': 'https://movie.douban.com/subject/5912992/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '香港电影史上永不过时的杰作。',
     'title': '无间道',
     'url': 'https://movie.douban.com/subject/1307914/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '满满温情的高雅喜剧。',
     'title': '触不可及',
     'url': 'https://movie.douban.com/subject/6786002/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '平民励志片。 ',
     'title': '当幸福来敲门',
     'url': 'https://movie.douban.com/subject/1849031/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': 'Tomorrow is another day.',
     'title': '乱世佳人',
     'url': 'https://movie.douban.com/subject/1300267/'}
    2018-11-06 12:30:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
    {'description': '真正的幸福是来自内心深处。',
     'title': '怦然心动',
     'url': 'https://movie.douban.com/subject/3319755/'}
    2018-11-06 12:30:21 [scrapy.core.engine] INFO: Closing spider (finished)
    2018-11-06 12:30:21 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 302,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 12878,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2018, 11, 6, 4, 30, 21, 48580),
     'item_scraped_count': 25,
     'log_count/DEBUG': 27,
     'log_count/INFO': 7,
     'memusage/max': 53002240,
     'memusage/startup': 53002240,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2018, 11, 6, 4, 30, 20, 394134)}
    2018-11-06 12:30:21 [scrapy.core.engine] INFO: Spider closed (finished)
    
    

    Result

    [screenshot: doubantop250.csv]

    Reference: http://scrapy-chs.readthedocs.io/zh_CN/1.0/intro/tutorial.html
