Crawler Notes (3)

Author: 虎七 | Published 2018-09-13 19:26

Word clouds

http://blog.csdn.net/xiemanr/article/details/72796739

https://github.com/adobe-fonts/source-han-serif/tree/release
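The two links above cover generating word clouds for Chinese text; the second is the Source Han Serif font repository, useful because a CJK-capable font is needed to render Chinese glyphs. A minimal sketch of the usual pipeline, assuming the wordcloud and jieba packages are installed; the input file name and font path below are placeholders:

    # -*- coding: utf-8 -*-
    # Segment Chinese text with jieba, then render a word cloud with a CJK font.
    import jieba
    from wordcloud import WordCloud

    text = open('article.txt', encoding='utf-8').read()   # placeholder input file
    tokens = ' '.join(jieba.cut(text))                     # WordCloud expects space-separated tokens

    wc = WordCloud(
        font_path='SourceHanSerifSC-Regular.otf',  # placeholder path to a downloaded Source Han font
        width=800,
        height=600,
        background_color='white',
    )
    wc.generate(tokens)
    wc.to_file('wordcloud.png')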

Text analysis

http://blog.csdn.net/ns2250225/article/details/51291775

https://www.cnblogs.com/zhzhang/p/6785125.html
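The text-analysis links point at similar material; the step that usually precedes a word cloud is plain word-frequency counting. A small sketch, again assuming jieba is installed (the input file and the tiny stop-word set are placeholders):

    # -*- coding: utf-8 -*-
    # Count the most frequent words in a Chinese text with jieba + Counter.
    from collections import Counter
    import jieba

    text = open('article.txt', encoding='utf-8').read()   # placeholder input file
    stopwords = {'的', '了', '和', '是', '在'}              # placeholder stop-word set

    words = [w for w in jieba.cut(text) if w.strip() and w not in stopwords]
    for word, count in Counter(words).most_common(20):
        print(word, count)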

    # -*- coding: utf-8 -*-

    # Define here the models for your spider middleware
    #
    # See documentation in:
    # https://doc.scrapy.org/en/latest/topics/spider-middleware.html

    from scrapy import signals


    class HelloscrapySpiderMiddleware(object):
        # Not all methods need to be defined. If a method is not defined,
        # scrapy acts as if the spider middleware does not modify the
        # passed objects.

        @classmethod
        def from_crawler(cls, crawler):
            # This method is used by Scrapy to create your spiders.
            s = cls()
            crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
            return s

        def process_spider_input(self, response, spider):
            # Called for each response that goes through the spider
            # middleware and into the spider.
            # Should return None or raise an exception.
            return None

        def process_spider_output(self, response, result, spider):
            # Called with the results returned from the Spider, after
            # it has processed the response.
            # Must return an iterable of Request, dict or Item objects.
            for i in result:
                yield i

        def process_spider_exception(self, response, exception, spider):
            # Called when a spider or process_spider_input() method
            # (from other spider middleware) raises an exception.
            # Should return either None or an iterable of Response, dict
            # or Item objects.
            pass

        def process_start_requests(self, start_requests, spider):
            # Called with the start requests of the spider, and works
            # similarly to the process_spider_output() method, except
            # that it doesn't have a response associated.
            # Must return only requests (not items).
            for r in start_requests:
                yield r

        def spider_opened(self, spider):
            spider.logger.info('Spider opened: %s' % spider.name)


    class HelloscrapyDownloaderMiddleware(object):
        # Not all methods need to be defined. If a method is not defined,
        # scrapy acts as if the downloader middleware does not modify the
        # passed objects.

        @classmethod
        def from_crawler(cls, crawler):
            # This method is used by Scrapy to create your spiders.
            s = cls()
            crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
            return s

        def process_request(self, request, spider):
            # Called for each request that goes through the downloader
            # middleware.
            #
            # Must either:
            # - return None: continue processing this request
            # - or return a Response object
            # - or return a Request object
            # - or raise IgnoreRequest: process_exception() methods of
            #   installed downloader middleware will be called
            return None

        def process_response(self, request, response, spider):
            # Called with the response returned from the downloader.
            #
            # Must either:
            # - return a Response object
            # - return a Request object
            # - or raise IgnoreRequest
            return response

        def process_exception(self, request, exception, spider):
            # Called when a download handler or a process_request()
            # (from other downloader middleware) raises an exception.
            #
            # Must either:
            # - return None: continue processing this exception
            # - return a Response object: stops process_exception() chain
            # - return a Request object: stops process_exception() chain
            pass

        def spider_opened(self, spider):
            spider.logger.info('Spider opened: %s' % spider.name)
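The block above is the boilerplate middlewares.py that scrapy startproject generates; neither class takes effect until it is enabled in the project's settings.py. A minimal sketch, assuming the project module is named helloscrapy (543 is just the order value the template suggests; lower numbers run closer to the engine):

    # settings.py -- enable the two middlewares defined in middlewares.py
    SPIDER_MIDDLEWARES = {
        'helloscrapy.middlewares.HelloscrapySpiderMiddleware': 543,
    }

    DOWNLOADER_MIDDLEWARES = {
        'helloscrapy.middlewares.HelloscrapyDownloaderMiddleware': 543,
    }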
