Python3 [Crawler in Practice] Scraping 爱上程序网 with Scrapy and Storing to M


Author: 简书用户9527 | Published 2017-08-26 00:56 · 336 reads

    爱上程序网

    http://www.aichengxu.com/android

    How this came about: I found this site while googling for a problem at work and noticed it has quite a lot of articles. I generally like reading technical articles - I want to understand everything, though never very deeply. I'd like to switch to a crawler/scraping job, but for now I'm still doing Android development...

    Enough preamble.

    First, the results in the database:


    [screenshot: the crawled records in MongoDB]

    Why only this many records for now? Because the loop was run 10,000 times, and there could well be more; the data already reaches back to 2013, so the site has clearly been around for a long time, and a full traversal would probably turn up even more. On the second crawl I collected roughly 240,000 records, and there may still be more.

    A lesson learned along the way:
    When I first created the Scrapy project directly inside a folder in PyCharm, importing modules kept failing - the imports simply could not be resolved. Compare the two screenshots:

    [screenshot: PyCharm project view, correct vs. broken project root]

    You can see that the lower project is not shown in black as a project root, which is why importing the item classes from items.py inside the spider kept failing with "No module named 'xxx'".

    Solution:

    Re-open the project from the File menu, import it again and attach it to the current window (the "Add to ..." option); once the project shows up in black, the imports resolve normally.

    The data scraped is mainly:

    view count, title, title link (detail page URL), content, and time


    [screenshot: a listing page showing the scraped fields]

    The crawl is done with Scrapy. I am still not very fluent with the parse() method, so I debugged it as I went along.
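
    A handy way to debug selectors before wiring them into parse() is scrapy shell; the snippet below is only a sketch of that workflow, using the listing URL and the XPaths that appear in the spider further down:

    # Open an interactive shell on one listing page (run inside the project):
    #   scrapy shell "http://www.aichengxu.com/android/1/"
    # Then try the selectors before copying them into parse():
    node_list = response.xpath("//*[@class='item-box']")
    len(node_list)  # number of article boxes found on the page
    node_list[0].xpath("./div[@class='bd']/h3/a/text()").extract()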

    Step 1:
    Configure the settings file:
    
    # -*- coding: utf-8 -*-
    
    # Scrapy settings for aichengxu project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     http://doc.scrapy.org/en/latest/topics/settings.html
    #     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'aichengxu'
    
    SPIDER_MODULES = ['aichengxu.spiders']
    NEWSPIDER_MODULE = 'aichengxu.spiders'
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    # USER_AGENT = 'aichengxu (+http://www.yourdomain.com)'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    # CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    # DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    # CONCURRENT_REQUESTS_PER_DOMAIN = 16
    # CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    # COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    # TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    DEFAULT_REQUEST_HEADERS = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Cookie': 'ras=24656333; cids_AC31=24656333',
        'Host': 'www.aichengxu.com',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36',
    }
    
    # MongoDB configuration
    MONGO_HOST = "127.0.0.1"  # host IP
    MONGO_PORT = 27017  # port
    MONGO_DB = "aichengxu2"  # database name
    MONGO_COLL = "ai_chengxu"  # collection name
    
    # Enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    # SPIDER_MIDDLEWARES = {
    #    'aichengxu.middlewares.AichengxuSpiderMiddleware': 543,
    # }
    
    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    # DOWNLOADER_MIDDLEWARES = {
    #    'aichengxu.middlewares.MyCustomDownloaderMiddleware': 543,
    # }
    
    # Enable or disable extensions
    # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
    # EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    # }
    
    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
        # distinct priority values make the pipeline execution order explicit
        'aichengxu.pipelines.AichengxuPipeline': 300,
        'aichengxu.pipelines.DuoDuoMongo': 301,
        'aichengxu.pipelines.JsonWritePipline': 302,
    }
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
    # AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    # AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    # AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    # AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    # HTTPCACHE_ENABLED = True
    # HTTPCACHE_EXPIRATION_SECS = 0
    # HTTPCACHE_DIR = 'httpcache'
    # HTTPCACHE_IGNORE_HTTP_CODES = []
    # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    

    Of course, the only settings that really matter are the few that are not commented out. Keeping the rest in place is just a habit; otherwise you can later find yourself wondering why Scrapy is not running, or it simply throws an error.
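
    If you ever want to sanity-check which values are actually in effect (including the custom MONGO_* keys above), here is a small sketch using Scrapy's own helper, run from inside the project directory:

    # Quick check of the effective project settings.
    from scrapy.utils.project import get_project_settings

    settings = get_project_settings()
    print(settings.get('BOT_NAME'))             # 'aichengxu'
    print(settings.get('MONGO_DB'))             # 'aichengxu2'
    print(settings.getbool('ROBOTSTXT_OBEY'))   # False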

    Step 2:

    Define the item fields in items.py:
    
    # -*- coding: utf-8 -*-
    
    # Define here the models for your scraped items
    #
    # See documentation in:
    # http://doc.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class AichengxuItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        pass
    
    class androidItem(scrapy.Item):
        # view count
        count = scrapy.Field()
        # title
        title = scrapy.Field()
        # link to the article detail page
        titleLink = scrapy.Field()
        # description / content
        desc = scrapy.Field()
        # publication time
        time = scrapy.Field()
    
    

    Step 3:

    Write the spider code in spiders/:
    
    # -*- coding: utf-8 -*-
    # @Time    : 2017/8/25 21:54
    # @Author  : 蛇崽
    # @Email   : 17193337679@163.com
    # @File    : aichengxuspider.py 爱程序网 www.aichengxu.com
    
    import scrapy
    from aichengxu.items import androidItem
    import logging
    class aiChengxu(scrapy.Spider):
    
        name = 'aichengxu'
    
        allowed_domains = ['www.aichengxu.com']
    
        start_urls = ["http://www.aichengxu.com/android/{}/".format(n) for n in range(1,10000)]
    
        def parse(self, response):
            node_list = response.xpath("//*[@class='item-box']")
            print('nodelist',node_list)
            for node in node_list:
                android_item = androidItem()
                count = node.xpath("./div[@class='views']/text()").extract()
                title_link = node.xpath("./div[@class='bd']/h3/a/@href").extract()
                title = node.xpath("./div[@class='bd']/h3/a/text()").extract()
                desc = node.xpath("./div[@class='bd']/div[@class='desc']/text()").extract()
                time = node.xpath("./div[@class='bd']/div[@class='item-source']/span[2]").extract()
                print(count,title,title_link,desc,time)
                android_item['title'] = title
                android_item['titleLink'] = title_link
                android_item['desc'] = desc
                android_item['count'] = count
                android_item['time'] = time
                yield android_item
    
    

    A note on my own understanding of the yield keyword here: it hands the current item back and then carries on with the next task; in other words, the item is returned while the crawl keeps going.
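
    A minimal standalone sketch of the same idea (plain Python, nothing Scrapy-specific): a function containing yield becomes a generator that hands out one value at a time and then resumes where it left off, which is why parse() can keep emitting items while the crawl continues:

    # A function with yield is a generator: each yield hands one value back to
    # the caller, and execution resumes right after it on the next iteration.
    def parse_fake_page():
        for n in range(3):
            yield {'title': 'post %d' % n}  # hand out one "item", then go on

    for item in parse_fake_page():
        print(item)  # Scrapy drives parse() the same way, passing each
                     # yielded item on to the item pipelines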

    Step 4:

    The pipelines code:
    
    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
    import json
    
    import pymongo
    from scrapy.conf import settings
    
    class AichengxuPipeline(object):
        def process_item(self, item, spider):
            return item
    class DuoDuoMongo(object):
        def __init__(self):
            self.client = pymongo.MongoClient(host=settings['MONGO_HOST'], port=settings['MONGO_PORT'])
            self.db = self.client[settings['MONGO_DB']]
            self.post = self.db[settings['MONGO_COLL']]
    
        def process_item(self, item, spider):
            postItem = dict(item)
            self.post.insert_one(postItem)  # insert_one() replaces pymongo's deprecated insert()
            return item
    
    # Write each item to a JSON file, one JSON object per line
    class JsonWritePipline(object):
        def __init__(self):
            self.file = open('爱上程序网2.json','w',encoding='utf-8')
    
        def process_item(self, item, spider):
            line = json.dumps(dict(item), ensure_ascii=False) + "\n"
            self.file.write(line)
            return item

        def close_spider(self, spider):
            # close_spider is the hook Scrapy calls on a pipeline when the
            # spider finishes; a method named spider_closed would never run
            self.file.close()
    

    Two kinds of storage are implemented here: one writes to MongoDB, the other to a JSON file.
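
    One caveat: from scrapy.conf import settings is the old way of reaching the settings and has been deprecated in later Scrapy versions. A rough sketch of the same MongoDB pipeline reading the MONGO_* settings through the from_crawler hook instead (the class name here is just for illustration):

    import pymongo

    class DuoDuoMongoFromCrawler(object):
        """Sketch: the same MongoDB pipeline, wired up via from_crawler."""

        def __init__(self, host, port, db_name, coll_name):
            self.client = pymongo.MongoClient(host=host, port=port)
            self.post = self.client[db_name][coll_name]

        @classmethod
        def from_crawler(cls, crawler):
            # crawler.settings exposes the same values defined in settings.py
            s = crawler.settings
            return cls(s.get('MONGO_HOST'), s.getint('MONGO_PORT'),
                       s.get('MONGO_DB'), s.get('MONGO_COLL'))

        def process_item(self, item, spider):
            self.post.insert_one(dict(item))
            return item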

    And with that, the crawl code is done. When you have not written code for a while, you get rusty.
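
    For anyone reproducing this, the spider is started from the project directory (where scrapy.cfg lives) with the usual command; the -o feed export is optional, since the JSON pipeline above already writes a file:

    scrapy crawl aichengxu
    # or additionally let Scrapy's built-in feed export dump the items:
    scrapy crawl aichengxu -o items.json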

    One lingering question: I crawled about 240,000 records here without ever being blocked mid-way over cookies or proxies; presumably setting the request headers in settings.py (DEFAULT_REQUEST_HEADERS) is what helped.
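
    If a site does start blocking, the throttling options that are commented out in the settings above are the first knobs to turn; the values below are only illustrative, not what was used for this crawl:

    # Example politeness settings for settings.py (illustrative values only):
    DOWNLOAD_DELAY = 1                   # wait about 1 second between requests
    CONCURRENT_REQUESTS_PER_DOMAIN = 8   # fewer parallel requests per domain
    AUTOTHROTTLE_ENABLED = True          # let Scrapy adapt the delay to latency
    AUTOTHROTTLE_START_DELAY = 5
    AUTOTHROTTLE_MAX_DELAY = 60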

    A last word on why I wanted to crawl this site at all: I suppose I am just strongly drawn to anything technical. Android is certainly not my favourite - if anything I am a bit afraid of it, because Android adaptation is painful: device models, screen sizes and all that.

    The site also has plenty of other sections:


    [screenshot: the site's other sections]
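
    Crawling all of them would mostly be a matter of widening start_urls. A rough sketch, assuming the other sections use the same /{section}/{page}/ URL pattern as /android/ (the slugs below are placeholders, not the site's real section names):

    # Sketch only: replace the placeholder slugs with the real section names
    # taken from the site's category menu.
    SECTIONS = ['android', 'java', 'python']

    start_urls = [
        "http://www.aichengxu.com/{}/{}/".format(section, n)
        for section in SECTIONS
        for n in range(1, 10000)
    ]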

    Honestly, I have never crawled the whole site myself; anyone who feels like it can give it a try. Finally, I have uploaded the code to my GitHub:

    My GitHub
