Crawling Baidu related searches with scrapy

Author: 今夕何夕_walker | Published 2017-02-15 22:35, read 1022 times

    Introduction

    Scrapy makes simple, high-volume crawlers very convenient: a project usually needs only three files (settings.py, items.py, xxx_spider.py) and very little code. The largest crawl I have saved to JSON produced over 600 MB of text. Last year, when writing into PostgreSQL, the biggest single run collected more than 10 million keywords as (key, [related1, related2, ...]) rows; inverting key and related only succeeded by holding the intermediate results in Redis and computing in batches, and after swapping the aging mechanical hard drive for an SSD, Redis was no longer needed.
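
    The key/related inversion above is not shown in the post; the following is only a minimal sketch of the idea, using redis-py and assuming some source of (key, [related1, related2, ...]) pairs (e.g. a hypothetical generator reading them back from the database):
    # -*- coding:utf8 -*-
    import redis
    
    r = redis.Redis(host='localhost', port=6379, db=0)
    
    def invert(rows):
        # rows yields (key, [related1, related2, ...]) pairs;
        # build the reverse mapping related -> {keys} as Redis sets, so the
        # intermediate result lives in Redis memory and can be flushed to
        # PostgreSQL in batches afterwards
        for key, related_words in rows:
            for rw in related_words:
                r.sadd(u'rel:' + rw, key)
    
    # reading the inverted mapping back in batches, e.g.:
    # for name in r.scan_iter(match=u'rel:*'):
    #     keys = r.smembers(name)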

    The code

    Spider code. It reads the seed keywords from a local muci.txt; crawl depth is controlled in settings.py (DEPTH_LIMIT = 1 means only the related-search terms of the current keywords are fetched).
    # -*- coding:utf8 -*-
    
    from scrapy.spiders import CrawlSpider
    from scrapy import Request
    
    from mbaidu.items import baiduItemtj
    import os
    # scrapy.conf is the old global-settings interface (this was written for Python 2
    # and an older Scrapy; newer versions would use the spider's custom_settings instead)
    from scrapy.conf import settings
    
    settings.overrides['RESULT_JSONFILE'] = 'mbaidutj.json'
    
    class MbaiduSpider(CrawlSpider):
    
        name = 'mbaidu_xg'
        allowed_domains = ['m.baidu.com']
        def start_requests(self):
            # seed keywords, one per line, in muci.txt at the project root
            muci_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), "muci.txt")
            with open(muci_path, 'r') as mucifile:
                for key in mucifile.readlines():
                    nextword = key.strip("\n").strip()
                    if nextword != "":
                        yield Request('https://m.baidu.com/s?word=' + nextword, self.parse)
    
        def parse(self, response):
            related = response.css('#reword .rw-list a::text').extract()
            if related:
                for rw in related:
                    item = baiduItemtj()
                    item['keyword'],item['description'] = [rw,'']
                    yield item
            rwlink = response.css('#reword .rw-list a::attr(href)').extract()
            if rwlink:
                for link in rwlink:
                    yield Request(link,self.parse)
            tj = response.css('.wa-sigma-celebrity-rela-entity.c-scroll-element-gap a')
            if tj:
                for i in tj:
                    item = baiduItemtj()
                    item['keyword'],item['description'] = i.css('p::text').extract()
                    yield item
                tjlink = response.css('.wa-sigma-celebrity-rela-entity.c-scroll-element-gap a::attr(href)').extract()
                if tjlink:
                    for link in tjlink:
                        yield Request(link,self.parse)
    
    The JSON encoding is handled in pipelines.py (register the pipeline in settings.py); without it, the Chinese in the locally saved JSON comes out escaped/garbled. That is most likely a Python 2 issue and probably unnecessary on Python 3.
    import codecs
    import json
    
    from scrapy.conf import settings
    
    
    class JsonWriterPipeline(object):
        def __init__(self):
            self.file = codecs.open(settings.get('RESULT_JSONFILE', 'default.json'), 'w', encoding='utf-8')
    
        def process_item(self, item, spider):
            line = json.dumps(dict(item)) + "\n"
            # decode the \uXXXX escapes so the file contains readable Chinese (Python 2)
            self.file.write(line.decode('unicode_escape'))
            # item = {"haha": "hehe"}
            # return {"log": "no need to return the item here; returned data would be converted back to Unicode and handed to the built-in exporter"}
            # return item
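
    On Python 3 the decode('unicode_escape') trick should not be needed; the following is only a minimal sketch of an equivalent pipeline using ensure_ascii=False, assuming the same RESULT_JSONFILE setting is read from the crawler settings:
    import json
    
    class JsonWriterPipeline(object):
        def open_spider(self, spider):
            # the crawler settings are available on the spider once it is bound to a crawler
            self.file = open(spider.settings.get('RESULT_JSONFILE', 'default.json'), 'w', encoding='utf-8')
    
        def close_spider(self, spider):
            self.file.close()
    
        def process_item(self, item, spider):
            # ensure_ascii=False keeps the Chinese characters readable in the output file
            self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
            return item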
    
    items.py
    import scrapy
    
    
    class baiduItemtj(scrapy.Item):
        # the right-hand recommendation block has a description; the related searches at the bottom do not, so it stays empty
        keyword = scrapy.Field()
        description = scrapy.Field()
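
    The settings.py mentioned above is not included in the post; the following is only a minimal sketch of what it might contain, based on the names used in the code (the module name mbaidu comes from the imports, while the pipeline path and priority are assumptions):
    # -*- coding:utf8 -*-
    
    BOT_NAME = 'mbaidu'
    
    SPIDER_MODULES = ['mbaidu.spiders']
    NEWSPIDER_MODULE = 'mbaidu.spiders'
    
    # only fetch the related-search terms of the seed keywords
    DEPTH_LIMIT = 1
    
    # output file read by JsonWriterPipeline (the spider overrides this at runtime)
    RESULT_JSONFILE = 'default.json'
    
    # register the custom pipeline
    ITEM_PIPELINES = {
        'mbaidu.pipelines.JsonWriterPipeline': 300,
    }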
    
