Scrapy (Python crawler framework): getting-started notes

Author: 草丛里的黄盖 | Published 2018-05-14 11:59

    This article is intended only as personal notes.

    Scrapy official site

    Scrapy official documentation

    Scrapy documentation in Chinese

    My ScrapyDemo project repository

    Setting up the Python environment
    • On Windows:
      • python: download the Python installer and run it
      • pip: easy_install pip
    • On macOS:
      • python: macOS ships with Python 2.7
      • pip: easy_install pip
    • On CentOS 7:
      • python: CentOS 7 ships with Python 2.7
      • pip: easy_install pip
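
    To confirm the environment is ready on any of the three systems, you can check the interpreter and pip from a terminal:
    python --version
    pip --version
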
    Install Scrapy
    pip install Scrapy
    
    Create a project
    scrapy startproject <project_name>
    
    Create a spider
    scrapy genspider <spider_name> <host_name>
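
    For reference, with Scrapy's default "basic" template, a command such as scrapy genspider demo www.example.com generates a spider skeleton roughly like this (the name and domain are only an example):
    # -*- coding: utf-8 -*-
    import scrapy


    class DemoSpider(scrapy.Spider):
        name = 'demo'
        allowed_domains = ['www.example.com']
        start_urls = ['http://www.example.com/']

        def parse(self, response):
            pass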
    
    Create a requirements.txt file in the project root and list the packages you need, for example:
    Scrapy==1.5.0
    beautifulsoup4==4.6.0
    requests==2.18.4
    
    Set up the project environment
    pip install -r requirements.txt
    
    Run a single spider
    scrapy crawl <spider_name>
    
    Run multiple spiders (Scrapy itself cannot run several spiders from one command-line invocation; create a new Python file with the following content, adjust it as needed, and run that file instead)
    # -*- coding: utf-8 -*-
    import sys
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings
    from ScrapyDemo.spiders.news_estadao import EstadaoSpider
    from ScrapyDemo.spiders.news_gazetaesportiva import DemoSpider
    from ScrapyDemo.spiders.news_megacurioso import MegacuriosoSpider
    
    # Force UTF-8 as the default encoding (Python 2 only)
    if sys.getdefaultencoding() != 'utf-8':
        reload(sys)
        sys.setdefaultencoding('utf-8')
    
    process = CrawlerProcess(get_project_settings())
    process.crawl(EstadaoSpider)
    process.crawl(DemoSpider)
    process.crawl(MegacuriosoSpider)
    process.start()
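
    A usage note on the script above: save it as, say, run_all.py (any name works) in the project root next to scrapy.cfg and start it with python run_all.py; get_project_settings() picks up the project's settings.py, so the pipelines and other settings configured there apply to all three spiders.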
    
    Enable pipelines to process the results
    • Open settings.py and add your pipeline under the ITEM_PIPELINES setting with a value from 0 to 1000; the lower the number, the higher the priority (a full example is shown further down)
    Output a single spider's results to a file (note that with -o Scrapy appends to an existing output file rather than overwriting it, which is why the scripts below clear the file before each run)
    scrapy crawl demo -o /path/to/demo.json
    
    Merging the results of multiple spiders:
    • The multi-spider script above does not merge the results of the individual spiders
    • Because the business needs merged output, the next best thing is a workaround
      • Idea: use the commands module to run shell commands that execute the spiders one after another and export each result to a file, then read the files back and parse them into objects for combined processing (a fuller multi-spider sketch follows the code below)

      • Code (adjust as needed):

        #!/usr/bin/env python
        # encoding: utf-8
        import commands
        import json

        def test():
            result = []
            try:
                commands.getoutput("echo '' > /path/to/demo.json")  # clear the previous run's output
                commands.getoutput("scrapy crawl demo -o /path/to/demo.json")  # run the spider and export its results
                result = json.loads(commands.getoutput("cat /path/to/demo.json"))  # read the results back as objects
            except:
                print "Get demo result error."
            return result
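
      • A fuller sketch of the merge idea above, assuming each spider exports its JSON to its own file (the spider names and file paths are placeholders to adapt):

        #!/usr/bin/env python
        # encoding: utf-8
        import commands
        import json

        # spider name -> output file; placeholders, adjust to your own project
        SPIDERS = {
            'demo': '/path/to/demo.json',
            'news_megacurioso': '/path/to/megacurioso.json',
        }

        def crawl_and_merge():
            merged = []
            for name, path in SPIDERS.items():
                commands.getoutput("echo '' > %s" % path)  # clear the previous run's output
                commands.getoutput("scrapy crawl %s -o %s" % (name, path))  # run the spider and export
                try:
                    merged.extend(json.loads(commands.getoutput("cat %s" % path)))  # read back and merge
                except ValueError:
                    print "Parse %s output error." % name
            return merged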
        
    Fixing garbled text in the spider results:
    • Add the following code at the top of any Python file that has encoding problems (Python 2 only):

      import sys

      if sys.getdefaultencoding() != 'utf-8':
        reload(sys)
        sys.setdefaultencoding('utf-8')
      
    Crawler examples; you can also refer to the GitHub repository linked at the top of this article
    • Item example (items.py):

      # -*- coding: utf-8 -*-
      
      # Define here the models for your scraped items
      #
      # See documentation in:
      # https://doc.scrapy.org/en/latest/topics/items.html
      
      import scrapy
      
      
      class ScrapydemoItem(scrapy.Item):
          title = scrapy.Field()
          imageUrl = scrapy.Field()
          des = scrapy.Field()
          source = scrapy.Field()
          actionUrl = scrapy.Field()
          contentType = scrapy.Field()
          itemType = scrapy.Field()
          createTime = scrapy.Field()
          country = scrapy.Field()
          headUrl = scrapy.Field()
          pass
      
    • Pipeline example (pipelines.py):

      # -*- coding: utf-8 -*-
      
      # Define your item pipelines here
      #
      # Don't forget to add your pipeline to the ITEM_PIPELINES setting
      # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
      from ScrapyDemo.items import ScrapydemoItem
      import json
      
      
      class ScrapydemoPipeline(object):
          DATA_LIST_NEWS = []
      
          def open_spider(self, spider):
              self.DATA_LIST_NEWS = []  # reset the buffer when the spider starts
              print 'Spider start.'
      
          def process_item(self, item, spider):
              if isinstance(item, ScrapydemoItem):
                  self.DATA_LIST_NEWS.append(dict(item))
              return item
      
          def close_spider(self, spider):
              print json.dumps(self.DATA_LIST_NEWS)
              print 'Spider end.'
      
    • Spider example (demo.py):

          # -*- coding: utf-8 -*-
          import scrapy
          from ScrapyDemo.items import ScrapydemoItem
      
      
          class DemoSpider(scrapy.Spider):
              name = 'news_gazetaesportiva'
              allowed_domains = ['www.gazetaesportiva.com']
              start_urls = ['https://www.gazetaesportiva.com/noticias/']
              headers = {
                  'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
                  'accept-language': 'zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7',
                  'cache-control': 'max-age=0',
                  'upgrade-insecure-requests': '1',
                  'User-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36'
              }
      
              def start_requests(self):
                  # issue the initial requests with the custom headers defined above
                  # (Scrapy does not apply a plain `headers` attribute automatically)
                  for url in self.start_urls:
                      yield scrapy.Request(url, headers=self.headers, callback=self.parse)

              def parse(self, response):
                  print('Start parse.')
                  for element in response.xpath('//article'):
                      title = element.xpath(".//h3[@class='entry-title no-margin']/a/text()").extract_first()
                      imageUrl = [element.xpath(".//img[@class='medias-object wp-post-image']/@src").extract_first()]
                      des = element.xpath(".//div[@class='entry-content space']/text()").extract_first()
                      source = 'gazeta'
                      actionUrl = element.xpath(".//a[@class='blog-image']/@href").extract_first()
                      contentType = ''
                      itemType = ''
                      createTime = element.xpath(".//small[@class='updated']/text()").extract_first()
                      country = 'PZ'
                      headUrl = ''
                      if title is not None and title != "" and actionUrl is not None and actionUrl != "" and imageUrl[0] is not None and imageUrl[0] != "":
                          item = ScrapydemoItem()
                          item['title'] = title
                          item['imageUrl'] = imageUrl
                          item['des'] = des
                          item['source'] = source
                          item['actionUrl'] = actionUrl
                          item['contentType'] = contentType
                          item['itemType'] = itemType
                          item['createTime'] = createTime
                          item['country'] = country
                          item['headUrl'] = headUrl
                          yield item
                  print('End parse.')
      
    • My notes on the code:

      • settings.py holds shared configuration and registers the pipelines that aggregate the spiders' results, for example (the larger the number, the lower the priority; valid values are 0-1000):

        ITEM_PIPELINES = {
            'ScrapyDemo.pipelines.ScrapydemoPipeline': 300,
        }
        
      • Once pipelines are configured, running a spider from the command line first calls open_spider, then process_item for each item parsed from the results, and finally close_spider when the spider finishes

      • The items file defines the result objects, i.e. the fields each scraped item carries

      • The spider files created under the spiders folder (via the genspider command) configure the pages to crawl and the request headers used to disguise the requests. Once a page is fetched, the response is passed to the parse method, where the data can be extracted with XPath; see the links at the top of this article for XPath details. Personally, before scraping I locate the target tags in Chrome, copy the page source, and work out the matching rules from the tag positions. Because the rules rarely come out right on the first try, I recommend refining them with the debugger (or in scrapy shell, as sketched below) rather than trying to write them all in one pass.
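
        For example, you can try candidate rules interactively in scrapy shell before hard-coding them in the spider (the URL and expressions below are simply the ones from the demo spider above; adapt them to your own pages):

        scrapy shell 'https://www.gazetaesportiva.com/noticias/'
        # then, inside the shell, run the candidate rules against the live response:
        response.xpath("//article//h3[@class='entry-title no-margin']/a/text()").extract_first()
        response.xpath("//article//a[@class='blog-image']/@href").extract_first()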

    • Debugging spiders in PyCharm:

      • If some packages cannot be installed after opening PyCharm, you can set up a virtual environment for the project


      • Running Scrapy under the debugger (a runner-script sketch is given at the end of this section):


        • Once execution stops at a breakpoint, right-click and choose Evaluate Expression


          This lets you run arbitrary code on the spot and inspect the results
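
      • A minimal runner-script sketch for this debug setup (the file name run_demo.py and spider name demo are just examples): point the PyCharm Run/Debug configuration at this file instead of the scrapy command, and breakpoints inside the spider will then be hit:

        # -*- coding: utf-8 -*-
        # run_demo.py - place it in the project root; it starts the crawl in-process,
        # which lets the PyCharm debugger step into the spider code.
        from scrapy import cmdline

        cmdline.execute("scrapy crawl demo".split())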
