Python Crawler - Scrapy Framework: Spider


Author: 复苏的兵马俑 | Published 2020-04-27 12:26

    1. The Scrapy Architecture

    [Scrapy architecture diagram (1) | Scrapy architecture diagram (2)]

      Module overview:
      1) Scrapy Engine: the core of the framework, responsible for communication and data transfer between the Spider, Item Pipeline, Downloader and Scheduler;
      2) Spider: sends the links to be crawled to the engine; the engine later hands the data fetched by the other components back to the spider, which then parses out the data it wants. This is the part we developers write ourselves, since which links to crawl and which data on a page is needed is up to the programmer (a minimal sketch of this flow follows the list);
      3) Scheduler: receives the requests sent over by the engine, queues and organizes them in a certain order, and schedules the order in which requests are dispatched;
      4) Downloader: receives download requests from the engine, fetches the corresponding data from the network and hands it back to the engine;
      5) Item Pipeline: saves the data handed over by the Spider; where exactly it is saved depends on the developer's needs;
      6) Downloader Middlewares: middleware that can extend the communication between the downloader and the engine;
      7) Spider Middlewares: middleware that can extend the communication between the engine and the spider.
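
      To make that flow concrete, here is a minimal, hypothetical spider sketch (the class name and URL are illustrative, not part of this project): items yielded by the spider are routed by the engine to the Item Pipeline, while yielded Requests go back through the engine to the Scheduler.

    import scrapy


    class DemoSpider(scrapy.Spider):
        # Hypothetical spider, used only to illustrate the data flow described above.
        name = 'demo'
        start_urls = ['http://example.com/']  # initial requests, queued by the Scheduler

        def parse(self, response):
            # The Downloader fetched the page; the Engine passes the response back here.
            # A yielded dict/item is routed by the Engine to the Item Pipeline.
            yield {'title': response.xpath('//title/text()').get()}
            # A yielded Request goes back through the Engine to the Scheduler:
            # yield scrapy.Request('http://example.com/next', callback=self.parse)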

    2. Installation and Documentation

      1) Installation: pip install scrapy;
      2) Official Scrapy documentation: http://doc.scrapy.org/en/latest
      3) Scrapy documentation in Chinese: http://scrapy-chs.readthedocs.io/zh_CN/latest/index.html

      Notes:
      1) On Ubuntu, install the following dependencies before installing Scrapy: sudo apt-get install python3 python3-dev python3-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev, then run pip install scrapy;
      2) On Windows, if you get the error ModuleNotFoundError: No module named 'win32api', it can be fixed with: pip install pypiwin32
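
      To confirm the installation succeeded, a quick check from a Python shell (the exact version string will depend on what pip installed):

    import scrapy
    print(scrapy.__version__)  # prints the installed Scrapy version, e.g. '2.x'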

    3. Creating a Project

      A Scrapy project must be created from the command line. First change into the directory where you want the project to live, then run scrapy startproject [project name].

    D:\学习笔记\Python学习\Python_Crawler>scrapy startproject qiushibk
    New Scrapy project 'qiushibk', using template directory 'c:\python38\lib\site-packages\scrapy\templates\project', created in:
        D:\学习笔记\Python学习\Python_Crawler\qiushibk
    
    You can start your first spider with:
        cd qiushibk
        scrapy genspider example example.com
    
    Scrapy project directory structure
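
      The screenshot is not reproduced here; the layout produced by scrapy startproject qiushibk is essentially the following:

    qiushibk/
        scrapy.cfg
        qiushibk/
            __init__.py
            items.py
            middlewares.py
            pipelines.py
            settings.py
            spiders/
                __init__.py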

      What the main files are for:
      1) items.py: holds the models for the data the spider scrapes;
      2) middlewares.py: holds the various middleware classes;
      3) pipelines.py: stores the item models to local disk;
      4) settings.py: configuration for this crawler (request headers, how often to send requests, IP proxy pool, and so on);
      5) scrapy.cfg: the project's configuration file;
      6) spiders: all spiders live inside this package.

      The spiders/__init__.py file initially contains:

    # This package will contain the spiders of your Scrapy project
    #
    # Please refer to the documentation for information on how to create and manage
    # your spiders.
    

      The items.py file initially contains:

    # -*- coding: utf-8 -*-
    
    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://docs.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class QiushibkItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        pass
    

      The middlewares.py file initially contains:

    # -*- coding: utf-8 -*-
    
    # Define here the models for your spider middleware
    #
    # See documentation in:
    # https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    
    from scrapy import signals
    
    
    class QiushibkSpiderMiddleware:
        # Not all methods need to be defined. If a method is not defined,
        # scrapy acts as if the spider middleware does not modify the
        # passed objects.
    
        @classmethod
        def from_crawler(cls, crawler):
            # This method is used by Scrapy to create your spiders.
            s = cls()
            crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
            return s
    
        def process_spider_input(self, response, spider):
            # Called for each response that goes through the spider
            # middleware and into the spider.
    
            # Should return None or raise an exception.
            return None
    
        def process_spider_output(self, response, result, spider):
            # Called with the results returned from the Spider, after
            # it has processed the response.
    
            # Must return an iterable of Request, dict or Item objects.
            for i in result:
                yield i
    
        def process_spider_exception(self, response, exception, spider):
            # Called when a spider or process_spider_input() method
            # (from other spider middleware) raises an exception.
    
            # Should return either None or an iterable of Request, dict
            # or Item objects.
            pass
    
        def process_start_requests(self, start_requests, spider):
            # Called with the start requests of the spider, and works
            # similarly to the process_spider_output() method, except
            # that it doesn’t have a response associated.
    
            # Must return only requests (not items).
            for r in start_requests:
                yield r
    
        def spider_opened(self, spider):
            spider.logger.info('Spider opened: %s' % spider.name)
    
    
    class QiushibkDownloaderMiddleware:
        # Not all methods need to be defined. If a method is not defined,
        # scrapy acts as if the downloader middleware does not modify the
        # passed objects.
    
        @classmethod
        def from_crawler(cls, crawler):
            # This method is used by Scrapy to create your spiders.
            s = cls()
            crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
            return s
    
        def process_request(self, request, spider):
            # Called for each request that goes through the downloader
            # middleware.
    
            # Must either:
            # - return None: continue processing this request
            # - or return a Response object
            # - or return a Request object
            # - or raise IgnoreRequest: process_exception() methods of
            #   installed downloader middleware will be called
            return None
    
        def process_response(self, request, response, spider):
            # Called with the response returned from the downloader.
    
            # Must either;
            # - return a Response object
            # - return a Request object
            # - or raise IgnoreRequest
            return response
    
        def process_exception(self, request, exception, spider):
            # Called when a download handler or a process_request()
            # (from other downloader middleware) raises an exception.
    
            # Must either:
            # - return None: continue processing this exception
            # - return a Response object: stops process_exception() chain
            # - return a Request object: stops process_exception() chain
            pass
    
        def spider_opened(self, spider):
            spider.logger.info('Spider opened: %s' % spider.name)
    

      The pipelines.py file initially contains:

    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    
    
    class QiushibkPipeline:
        def process_item(self, item, spider):
            return item
    

      The settings.py file initially contains:

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for qiushibk project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://docs.scrapy.org/en/latest/topics/settings.html
    #     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'qiushibk'
    
    SPIDER_MODULES = ['qiushibk.spiders']
    NEWSPIDER_MODULE = 'qiushibk.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    #USER_AGENT = 'qiushibk (+http://www.yourdomain.com)'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = True
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'qiushibk.middlewares.QiushibkSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'qiushibk.middlewares.QiushibkDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://docs.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    #ITEM_PIPELINES = {
    #    'qiushibk.pipelines.QiushibkPipeline': 300,
    #}
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    

      The scrapy.cfg file initially contains:

    # Automatically created by: scrapy startproject
    #
    # For more information about the [deploy] section see:
    # https://scrapyd.readthedocs.io/en/latest/deploy.html
    
    [settings]
    default = qiushibk.settings
    
    [deploy]
    #url = http://localhost:6800/
    project = qiushibk
    

    4. Creating a Spider

    D:\学习笔记\Python学习\Python_Crawler>cd qiushibk
    D:\学习笔记\Python学习\Python_Crawler\qiushibk>scrapy genspider qsbk "qiushibaike.com"
    Created spider 'qsbk' using template 'basic' in module:
      qiushibk.spiders.qsbk
    

      Note: the spider name must not be the same as the project name.


    [Screenshot: the generated qsbk.py file]

      As shown above, a spider file named qsbk.py has been created, and the pages it may crawl are restricted to the qiushibaike.com domain.
      The qsbk.py file initially contains:

    # -*- coding: utf-8 -*-
    import scrapy
    
    
    class QsbkSpider(scrapy.Spider):
        name = 'qsbk'
        allowed_domains = ['qiushibaike.com']
        start_urls = ['http://qiushibaike.com/']
    
        def parse(self, response):
            pass
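
      In the generated template, name is the identifier used with scrapy crawl, allowed_domains restricts requests to the listed domain (off-site links are filtered out), start_urls lists the first pages to fetch, and parse is the default callback invoked with each downloaded response.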
    

    5. Spider Example (Qiushibaike)

      Before writing a spider, always remember to adjust the settings in settings.py. Two changes are strongly recommended:
      1) Set ROBOTSTXT_OBEY to False. The default is True, which means obeying the robots protocol: Scrapy first fetches the site's robots.txt and will not crawl any URL that the rules disallow;
      2) Add a User-Agent to DEFAULT_REQUEST_HEADERS; this tells the server that the request looks like a normal browser request rather than a crawler.

      Notes:
      1) response is a scrapy.http.response.html.HtmlResponse object; you can run xpath and css expressions on it to extract data;
      2) the extracted data is a Selector or a SelectorList object; to get the actual strings out of it, call the getall or get method (see the short example after this list);
      3) getall: returns all the text matched by the Selector, as a list;
      4) get: returns the first text matched by the Selector, as a str;
      5) if the parsed data should be handed to a pipeline, yield it from the callback, or collect all the items and return them together at the end;
      6) item: it is recommended to define a proper model in items.py rather than keep passing plain dicts around;
      7) pipeline: this is dedicated to saving data; three of its methods are used frequently:
      7.1) open_spider(self, spider): called when the spider is opened;
      7.2) process_item(self, item, spider): called whenever the spider hands over an item;
      7.3) close_spider(self, spider): called when the spider is closed.
      To activate a pipeline, set ITEM_PIPELINES in settings.py.
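
      A quick illustration of the difference between get and getall, using a standalone Selector on a made-up HTML snippet (purely illustrative, not part of the project):

    from scrapy import Selector

    html = "<div><span>first</span><span>second</span></div>"
    sel = Selector(text=html)

    print(sel.xpath("//span/text()").get())     # 'first'  -> str, first match only
    print(sel.xpath("//span/text()").getall())  # ['first', 'second'] -> list of all matches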

      A) Changes to settings.py

    ROBOTSTXT_OBEY = False
    
    DOWNLOAD_DELAY = 1
    
    DEFAULT_REQUEST_HEADERS = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'en',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.9 Safari/537.36',
    }
    
    ITEM_PIPELINES = {
       'qiushibk.pipelines.QiushibkPipeline': 300,
    }
    

      B) Create a start.py file
      In the project's root directory, create a start.py file with the code below, so the spider can be launched from an IDE instead of typing scrapy crawl qsbk in a terminal each time.

    from scrapy import cmdline
    cmdline.execute("scrapy crawl qsbk".split())
    

      C) qsbk.py is as follows:

    # -*- coding: utf-8 -*-
    import scrapy
    from qiushibk.items import QiushibkItem
    # from scrapy.http.response.html import HtmlResponse
    # from scrapy.selector.unified import SelectorList
    
    
    class QsbkSpider(scrapy.Spider):
        name = 'qsbk'
        allowed_domains = ['qiushibaike.com']
        baseDomain = "https://www.qiushibaike.com"
        start_urls = ['https://www.qiushibaike.com/text/page/1/']
    
        def parse(self, response):
            duanziDivs = response.xpath("//div[@class='col1 old-style-col1']/div")
            for duanziDiv in duanziDivs:
                author = duanziDiv.xpath(".//h2/text()").get().strip()
                content = duanziDiv.xpath(".//div[@class='content']//text()").getall()
                content = "".join(content).strip()
                item = QiushibkItem(author=author, content=content)
                yield item
            nextUrl = response.xpath("//ul[@class='pagination']/li[last()]/a/@href").get()
            if not nextUrl:
                return
            else:
                yield scrapy.Request(self.baseDomain + nextUrl, callback=self.parse)
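
      Yielding a new scrapy.Request with callback=self.parse hands the next-page URL back to the engine and scheduler, so parse runs again for each following page until no next link is found. Note that the XPath expressions above (for example the div with class='col1 old-style-col1') depend on qiushibaike.com's markup at the time of writing and may need adjusting if the page layout changes.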
    

      D) items.py is as follows:

    # -*- coding: utf-8 -*-
    
    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://docs.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class QiushibkItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        author = scrapy.Field()
        content = scrapy.Field()
    

      E) pipelines.py is as follows:

    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    import json
    
    class QiushibkPipeline:
        def __init__(self):
            self.fp = open("duanzi.json", "w", encoding='utf-8')
    
        def open_spider(self,spider):
            print("爬虫开始了……")
    
        def process_item(self, item, spider):
            itemJson = json.dumps(dict(item), ensure_ascii=False)
            self.fp.write(itemJson+"\n")
            return item
    
        def close_spider(self,spider):
            self.fp.close()
            print("爬虫结束了……")
    

      E.1) pipelines.py, optimized version (1):
      When saving JSON data you can use the JsonItemExporter class. It keeps every exported item in memory and writes them all to disk at the end. The upside is that the resulting file is a single valid JSON document; the downside is that with a large amount of data it uses a lot of memory.

    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    
    from scrapy.exporters import JsonItemExporter
    
    class QiushibkPipeline:
        def __init__(self):
            self.fp = open("duanzi.json", "wb")
            self.exporter = JsonItemExporter(self.fp, ensure_ascii = False, encoding='utf-8')
            self.exporter.start_exporting()
    
        def open_spider(self,spider):
            print("爬虫开始了……")
    
        def process_item(self, item, spider):
            self.exporter.export_item(item)
            return item
    
        def close_spider(self,spider):
            self.exporter.finish_exporting()
            self.fp.close()
            print("爬虫结束了……")
    

      E.2) pipelines.py, optimized version (2):
      When saving JSON data you can also use the JsonLinesItemExporter class. Every call to export_item writes that item straight to disk. The downside is that each dict sits on its own line, so the file as a whole is not a single valid JSON document; the upside is that items are written out as they are processed, which saves memory and keeps the data safer.

    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    
    from scrapy.exporters import JsonLinesItemExporter
    
    class QiushibkPipeline:
        def __init__(self):
            self.fp = open("duanzi.json", "wb")
            self.exporter = JsonLinesItemExporter(self.fp, ensure_ascii = False, encoding='utf-8')
    
        def open_spider(self,spider):
            print("爬虫开始了……")
    
        def process_item(self, item, spider):
            self.exporter.export_item(item)
            return item
    
        def close_spider(self,spider):
            self.fp.close()
            print("爬虫结束了……")
    
