
Scrapy in Practice: Crawling Hearthstone Original Artwork

Author: CNSTT | Published 2018-12-17 22:50

    Preface:

    This article walks through using Scrapy inside PyCharm to crawl the original card artwork of Hearthstone (炉石传说).

    Approach:

    Target page: the Hearthstone original artwork gallery at http://news.4399.com/gonglue/lscs/kptj/

    (screenshot: the page seems to hide content behind a "点击查看更多" / "view more" button)
    Opening the developer tools reveals that we do not need to simulate clicking "点击查看更多" at all: the rest of the artwork is already in the DOM, merely hidden with display:none.
    So all we need to do is collect every <li> tag:
    (screenshot: the full list of <li> elements)

    $x('//*[@id="dq_list"]/li')  // grab all the li tags

    (screenshot: testing the XPath in the Chrome console)
    For each <li>, we then only need the src of its <img> and the text of its <div class="kp-name">.
    (screenshot: the two fields each item carries)
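
    Before writing the spider, these selectors can be sanity-checked in scrapy shell. A quick sketch (one detail surfaces here: the site lazy-loads its images, so the real URL sits in @lz_src rather than @src):

    scrapy shell http://news.4399.com/gonglue/lscs/kptj/
    >>> lis = response.xpath('//*[@id="dq_list"]/li')
    >>> lis[0].xpath('a/img/@lz_src').extract_first()
    >>> lis[0].xpath('a/div[@class="kp-name"]/text()').extract_first()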

    Code setup:

    Quickly scaffold a Scrapy project:

    scrapy startproject lushichuanshuo
    
    (screenshot: project directory structure)
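
    The command generates the standard Scrapy layout, roughly the tree below (lushispider1.py and main.py are the two files we add by hand later):

    lushichuanshuo/
        scrapy.cfg
        main.py                  # run helper (added by hand)
        lushichuanshuo/
            __init__.py
            items.py
            middlewares.py
            pipelines.py
            settings.py
            spiders/
                __init__.py
                lushispider1.py  # our spider (added by hand)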

    items.py (the fields to scrape)

    # -*- coding: utf-8 -*-
    
    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://doc.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class LushichuanshuoItem(scrapy.Item):
        img = scrapy.Field()
        name = scrapy.Field()
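
    An Item behaves like a dict that only accepts its declared fields; a quick illustrative check (the values here are made up):

    from lushichuanshuo.items import LushichuanshuoItem

    item = LushichuanshuoItem()
    item['img'] = ['http://example.com/card.jpg']  # hypothetical URL
    item['name'] = ['示例卡牌']                     # hypothetical name
    print(item)        # prints like a dict: {'img': [...], 'name': [...]}
    # item['foo'] = 1  # would raise KeyError: 'foo' is not a declared field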
    
    

    pipelines.py (the pipeline that stores the scraped data)

    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    
    import os


    class LushichuanshuoPipeline(object):
        def open_spider(self, spider):
            # open the output file once when the spider starts;
            # the data/ directory next to pipelines.py must already exist
            self.file = open(os.path.join(os.path.dirname(__file__), 'data', 'lushi1.txt'),
                             'a', encoding='utf-8')

        def process_item(self, item, spider):
            # append each item as one line of text
            self.file.write(str(item))
            self.file.write('\n')
            return item

        def close_spider(self, spider):
            # close (and flush) the file when the spider exits
            self.file.close()
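
    Each item lands in data/lushi1.txt as one line; a line will look roughly like this (illustrative values):

    {'img': ['http://.../some-card.jpg'], 'name': ['某卡牌']}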
    
    

    settings.py (configuration)

    
    BOT_NAME = 'lushichuanshuo'
    
    SPIDER_MODULES = ['lushichuanshuo.spiders']
    NEWSPIDER_MODULE = 'lushichuanshuo.spiders'
    
    ITEM_PIPELINES = {
        'lushichuanshuo.pipelines.LushichuanshuoPipeline': 100,
    }
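    # the value (here 100) is the pipeline's order: 0-1000, lower runs first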
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    #USER_AGENT = 'lushichuanshuo (+http://www.yourdomain.com)'
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = True
    
    # (the remaining auto-generated default settings stay commented out and are omitted here)
    
    

    lushispider1.py (the spider)

    import scrapy
    from lushichuanshuo.items import LushichuanshuoItem


    class LushichuanshuoSpider(scrapy.Spider):
        name = "lushichuanshuo_spider1"
        start_urls = (
            'http://news.4399.com/gonglue/lscs/kptj/',
        )

        def parse(self, response):
            sel = scrapy.Selector(response)
            illustrates = sel.xpath('//*[@id="dq_list"]/li')
            for each in illustrates:
                # the page lazy-loads its images, so the real URL is in
                # @lz_src rather than @src
                img = each.xpath('a/img/@lz_src').extract()
                name = each.xpath('a/div/text()').extract()
                item = LushichuanshuoItem()  # a fresh item per artwork
                item['img'] = img
                item['name'] = name
                yield item
            # pagination hook, unused here since everything is on one page:
            # next_page = sel.xpath('//*[@id="content"]/div/div[1]/div[31]/span[@class="next"]/a/@href').extract()
            # if next_page:
            #     next = next_page[0]
            #     yield scrapy.http.Request(next, callback=self.parse)
    

    main.py (the entry point)

    
    from scrapy import cmdline

    # "lushichuanshuo_spider1" must match the name attribute in lushispider1.py
    cmdline.execute(["scrapy", "crawl", "lushichuanshuo_spider1"])
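
    Running main.py is equivalent to typing the following in a terminal at the project root:

    scrapy crawl lushichuanshuo_spider1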
    
    

    Crawling the data:

    Run main.py.


    (screenshot: the crawl running)

    The image URLs and names of all 800 pieces of original artwork have now been scraped.


    (screenshot: the results in lushi1.txt)
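
    Note that the pipeline above records only URLs and names. If you also want the image files themselves, Scrapy's built-in ImagesPipeline can download them; a minimal sketch of the extra settings (assuming @lz_src yields absolute URLs, and with Pillow installed):

    # settings.py additions (sketch)
    ITEM_PIPELINES = {
        'lushichuanshuo.pipelines.LushichuanshuoPipeline': 100,
        'scrapy.pipelines.images.ImagesPipeline': 200,
    }
    IMAGES_STORE = 'data/images'   # where downloaded files go
    IMAGES_URLS_FIELD = 'img'      # our item keeps its URL list in 'img'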

    And with that, our first Scrapy crawling demo in PyCharm is complete!

    Thanks for reading; if this helped, leave a ❤!
