Crawling AJAX-Loaded Images with Scrapy and Saving Them Locally

Author: 乔治大叔 | Published 2020-06-20 12:29

    First, a little teaser:

    (sample images: 66347-2.jpg, 66250-5.jpg, 66158-3.jpg, 68185.jpg, 68205.jpg)

    As the saying goes, better to teach a man to fish than to give him a fish, so let's walk through the code.

    Project structure


    (screenshot: project structure)

    First, locate the photo-collection element on the home page so we can grab its URL.


    (screenshot: locating the element in DevTools)

    Right-click the element and choose Copy → Copy XPath to copy the XPath directly.

    # find the link to the photo collection
    url = response.xpath('//*[@id="home-collections"]/ul/li[4]/div/div[1]/a/@href').extract()[0]
    

    Following that link leads to the next page, where we want all of its URLs, but I found that the images there are loaded dynamically via AJAX.
    So how do we capture the AJAX request that fetches the next page?
    Press F12, open the Network tab, and click XHR to filter out the dynamic requests.


    (screenshot: the XHR request in the Network panel)

    Click the request and scroll to the bottom to see its Form Data.


    (screenshot: the request's Form Data)
    That's exactly the data we need:
            for i in range(76):   # the photo collection has 76 pages in total
                formdata = {
                    'type': 'collection29',
                    'paged': str(i+1)   # paged starts from 1
                }
                # replay the AJAX request in Scrapy and hand the response to a callback for the next level of crawling
                yield scrapy.FormRequest(url, callback=self.parse_page, formdata=formdata)
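
    (Optional) Before wiring this into Scrapy, you can replay the XHR once with the requests library to confirm the form data is right. A minimal sketch; the URL below is a placeholder for whatever address the Network panel shows (here the collection page itself):

        # hypothetical sanity check, not part of the original spider
        import requests

        resp = requests.post(
            'https://www.peibanni.com/',   # placeholder: use the XHR's real URL
            data={'type': 'collection29', 'paged': '2'},
            headers={'User-Agent': 'Mozilla/5.0'},
        )
        print(resp.status_code, len(resp.text))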
    

    Now we dig one level deeper and collect every image-set URL on each page:


    (screenshot: the image-set links on a collection page)
        def parse_page(self, response):
            urls = response.xpath('//*[@id="main"]/div[1]/div/div/div[1]/a/@href').extract() # all image-set URLs on the current page
            for url in urls:
                # follow each URL with a callback for the next level of crawling
                yield scrapy.Request(url, callback=self.parse_img)
    

    One level deeper still, we scrape each set's title and every image in it:


    (screenshot: the title and image elements of an image set)
        def parse_img(self, response):
            name = response.xpath('//*[@id="post-single"]/h1/text()').extract()[0] # title of the image set
            print(name)
            img_urls = response.xpath('//*[@id="content-innerText"]/p/img/@src').extract() # URL of every single image
            for img_url in img_urls:
                # build a fresh item per image; mutating and re-yielding one item
                # would hand the pipeline several references to the same object
                item = BeautifulgirlItem()
                item['name'] = name
                item['img_url'] = img_url
                print(img_url)
                yield item
    

    Here is the code pulled together in one file,
    peibanni.py:

    # -*- coding: utf-8 -*-
    import scrapy
    from beautifulgirl.items import BeautifulgirlItem
    
    class PeibanniSpider(scrapy.Spider):
        name = 'peibanni'
        allowed_domains = ['www.peibanni.com']
        start_urls = ['https://www.peibanni.com/']
    
        def parse(self, response):
            url = response.xpath('//*[@id="home-collections"]/ul/li[4]/div/div[1]/a/@href').extract()[0] # link to the photo collection
            for i in range(76):
                formdata = {
                    'type': 'collection29',
                    'paged': str(i+1)   # lowercase i, and paged starts from 1
                }
                # replay the AJAX request and hand the response to the next callback
                yield scrapy.FormRequest(url, callback=self.parse_page, formdata=formdata)

        def parse_page(self, response):
            urls = response.xpath('//*[@id="main"]/div[1]/div/div/div[1]/a/@href').extract() # all image-set URLs on the current page
            for url in urls:
                yield scrapy.Request(url, callback=self.parse_img)  # follow each URL one level deeper

        def parse_img(self, response):
            name = response.xpath('//*[@id="post-single"]/h1/text()').extract()[0] # title of the image set
            print(name)
            img_urls = response.xpath('//*[@id="content-innerText"]/p/img/@src').extract() # URL of every image
            for img_url in img_urls:
                item = BeautifulgirlItem()  # fresh item per image
                item['name'] = name
                item['img_url'] = img_url
                print(img_url)
                yield item
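
    To launch the spider you would normally run "scrapy crawl peibanni" from the project root. If you prefer a plain Python entry point, here is a minimal sketch using Scrapy's CrawlerProcess (the import path assumes the project layout shown above):

        from scrapy.crawler import CrawlerProcess
        from scrapy.utils.project import get_project_settings
        from beautifulgirl.spiders.peibanni import PeibanniSpider

        # load settings.py so ITEM_PIPELINES and IMAGES_STORE take effect
        process = CrawlerProcess(get_project_settings())
        process.crawl(PeibanniSpider)
        process.start()   # blocks until the crawl finishes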
    
    

    Next comes saving the image data.
    In items.py:

    import scrapy

    class BeautifulgirlItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        name = scrapy.Field()    # title of the image set
        img_url = scrapy.Field() # URL of a single image
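
    Items behave like dicts, which is how the pipeline later reads item['img_url']. A quick illustration with made-up values:

        item = BeautifulgirlItem(name='demo set', img_url='https://example.com/1.jpg')
        print(item['name'])   # demo set
        print(dict(item))     # {'name': 'demo set', 'img_url': 'https://example.com/1.jpg'}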
    

    In pipelines.py, subclass Scrapy's built-in ImagesPipeline to handle the downloads:

    import scrapy
    from scrapy.pipelines.images import ImagesPipeline
    
    # class BeautifulgirlPipeline(object):
    #     def process_item(self, item, spider):
    #         return item
    
    class BeautifulgirlPipeline(ImagesPipeline):
        def get_media_requests(self, item, info):
            # called for every item the spider yields: the engine hands the item to
            # the pipeline, whose built-in methods then run in turn
            yield scrapy.Request(url=item['img_url'], meta={'item': item})

        def file_path(self, request, response=None, info=None):
            item = request.meta['item']
            # take the file name from the image URL so repeated downloads map to the same path;
            # e.g. "https://www.peibanni.com/wp-content/uploads/2020/06/68215.jpg" gives 68215.jpg
            img_tail = request.url.split('/')[-1]
            path = u'{0}/{1}'.format(item['name'], img_tail)
            return path
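
    A quick illustration of what file_path returns (plain Python, the set title is made up):

        url = 'https://www.peibanni.com/wp-content/uploads/2020/06/68215.jpg'
        img_tail = url.split('/')[-1]            # '68215.jpg'
        print(u'{0}/{1}'.format('some set title', img_tail))
        # -> some set title/68215.jpg  (one folder per image set under IMAGES_STORE)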
    
    
    (screenshot: images saved locally, one folder per set)

    Finally, don't forget the setting in settings.py:
    IMAGES_STORE must be spelled exactly right, or nothing will be saved locally!

    IMAGES_STORE = 'images'
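
    Two related notes: the path may also be absolute, and the built-in ImagesPipeline needs Pillow installed (pip install Pillow) to process images. A sketch of optional, illustrative variations:

        import os

        # an absolute path works too (illustrative; this article uses the relative 'images')
        IMAGES_STORE = os.path.join(os.path.dirname(__file__), 'images')

        # optional ImagesPipeline knobs:
        # IMAGES_MIN_WIDTH = 110    # skip tiny thumbnails
        # IMAGES_MIN_HEIGHT = 110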
    

    The full settings.py:

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for beautifulgirl project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://docs.scrapy.org/en/latest/topics/settings.html
    #     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'beautifulgirl'
    
    SPIDER_MODULES = ['beautifulgirl.spiders']
    NEWSPIDER_MODULE = 'beautifulgirl.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    #USER_AGENT = 'beautifulgirl (+http://www.yourdomain.com)'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    DEFAULT_REQUEST_HEADERS = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0.3 Safari/605.1.15',
    }
    
    
    # Enable or disable spider middlewares
    # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'beautifulgirl.middlewares.BeautifulgirlSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'beautifulgirl.middlewares.BeautifulgirlDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://docs.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
       'beautifulgirl.pipelines.BeautifulgirlPipeline': 300,
    }
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    
    IMAGES_STORE = 'images'
    

    If you think this write-up is decent, give it a big thumbs-up. Your support is what keeps me going!
