【Scrapy】Bypassing Anti-Crawler Strategies and Storage (Part 2)

Author: 是Jonathan | Published 2017-03-25 22:24
How Scrapy Works

The data flow in Scrapy is controlled by the execution engine and proceeds as follows:
1. The engine opens a website (open a domain), finds the Spider that handles it, and asks that Spider for the first URL(s) to crawl.
2. The engine receives the first URL from the Spider and schedules it in the Scheduler as a Request.
3. The engine asks the Scheduler for the next URL to crawl.
4. The Scheduler returns the next URL to the engine, which forwards it to the Downloader through the downloader middleware (request direction).
5. Once the page is downloaded, the Downloader generates a Response for it and sends it back to the engine through the downloader middleware (response direction).
6. The engine receives the Response from the Downloader and passes it to the Spider for processing through the spider middleware (input direction).
7. The Spider processes the Response and returns scraped items and (follow-up) Requests to the engine.
8. The engine sends the scraped items (returned by the Spider) to the Item Pipeline and the Requests (returned by the Spider) to the Scheduler.
9. The process repeats from step 2 until the Scheduler holds no more Requests, and the engine then closes the site.

0x01 Bypassing Anti-Crawler Strategies
1. Set a download delay
Configure the DOWNLOAD_DELAY setting in settings.py (a download_delay attribute on the spider works as well).
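A minimal sketch of the relevant settings; note that RANDOMIZE_DOWNLOAD_DELAY is on by default and spreads each wait over 0.5x-1.5x of DOWNLOAD_DELAY, which makes the request rhythm look less mechanical:

    # settings.py
    DOWNLOAD_DELAY = 3               # seconds between requests to the same site
    RANDOMIZE_DOWNLOAD_DELAY = True  # the default; waits 0.5x-1.5x of the delay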
2. Disable cookies
Set COOKIES_ENABLED = False in settings.py. This turns off the cookies middleware, so no cookies are sent to the web server and the site cannot use them to track the crawler.
3. Use a user agent pool
Modify settings.py to configure USER_AGENTS (and the PROXIES list used by strategy 4 below).
a) Add the USER_AGENTS list; the full list appears in the settings.py example below.
The user agent is a string, carried in the User-Agent HTTP request header, that describes the visitor's browser and operating system. Servers inspect it to judge whether the current client is a browser, a mail client, or a crawler.
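The settings.py below registers a 'vuls360.middlewares.RandomUserAgent' downloader middleware, but its code is never shown. A minimal sketch of what it might look like (the class and setting names match the configuration below; the implementation itself is an assumption):

    # middlewares.py -- sketch of a random user agent middleware
    import random

    class RandomUserAgent(object):
        """Stamp each outgoing request with a random entry from USER_AGENTS."""

        def __init__(self, user_agents):
            self.user_agents = user_agents

        @classmethod
        def from_crawler(cls, crawler):
            return cls(crawler.settings.getlist('USER_AGENTS'))

        def process_request(self, request, spider):
            if self.user_agents:
                request.headers['User-Agent'] = random.choice(self.user_agents)

At priority 543 it runs after Scrapy's built-in UserAgentMiddleware (priority 400), so the random value overwrites the default header.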

4. Use an IP pool
One of a web server's bluntest countermeasures is to ban your IP, or an entire IP range, outright. Once an IP is blocked, switching to another IP lets the crawl continue.
b) Add the proxy list PROXIES:

    PROXIES = [
        {'ip_port': '111.11.228.75:80', 'user_pass': ''},
        {'ip_port': '120.198.243.22:80', 'user_pass': ''},
        {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
        {'ip_port': '101.71.27.120:80', 'user_pass': ''},
        {'ip_port': '122.96.59.104:80', 'user_pass': ''},
        {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
    ]
Free proxy IPs are easy to find online; the list above was taken from http://www.xici.net.co/.
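The middleware that consumes PROXIES is likewise not shown in the article; a sketch of a downloader middleware that routes each request through a random proxy (the class name RandomProxy is hypothetical):

    # middlewares.py -- sketch of a proxy rotation middleware
    import base64
    import random

    class RandomProxy(object):
        """Route each request through a random entry from PROXIES."""

        def __init__(self, proxies):
            self.proxies = proxies

        @classmethod
        def from_crawler(cls, crawler):
            return cls(crawler.settings.getlist('PROXIES'))

        def process_request(self, request, spider):
            proxy = random.choice(self.proxies)
            request.meta['proxy'] = 'http://%s' % proxy['ip_port']
            if proxy['user_pass']:
                # HTTP proxies with authentication expect a Proxy-Authorization header
                creds = base64.b64encode(proxy['user_pass'].encode()).decode()
                request.headers['Proxy-Authorization'] = 'Basic ' + creds

To take effect it would also need an entry in DOWNLOADER_MIDDLEWARES, e.g. 'vuls360.middlewares.RandomProxy': 544 (a hypothetical priority).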

5. Distributed crawling
Spreading the crawl across multiple machines (and therefore multiple IPs) also defeats per-IP throttling; that topic is left for another post.

0x02 Storage
MongoDB is used to store the scraped vulnerability data.

Code Examples

Code in pipelines.py:

    # -*- coding: utf-8 -*-

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
    import pymongo
    from scrapy.conf import settings  # deprecated in newer Scrapy; see the note below
    from vuls360.items import Vuls360Item

    class Vuls360Pipeline(object):
        '''
        # Earlier variant: write items to a JSON-lines file instead of MongoDB.
        import json
        import codecs

        def __init__(self):
            self.file = codecs.open('vul.json', 'wb', encoding='utf-8')

        def process_item(self, item, spider):
            line = json.dumps(dict(item)) + '\n'
            self.file.write(line.decode("unicode_escape"))
            return item
        '''

        def __init__(self):
            host = settings['MONGODB_HOST']
            port = settings['MONGODB_PORT']
            dbname = settings['MONGODB_DBNAME']  # database name
            client = pymongo.MongoClient(host=host, port=port)
            tdb = client[dbname]
            self.collection = tdb[settings['MONGODB_DOCNAME']]  # collection name

        def process_item(self, item, spider):
            vul_info = dict(item)
            self.collection.insert_one(vul_info)  # insert() is deprecated in pymongo 3
            return item
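Since scrapy.conf is deprecated in newer Scrapy releases, here is a sketch of the same pipeline written with the from_crawler pattern that current Scrapy documentation recommends (behavior is intended to be identical):

    import pymongo

    class Vuls360Pipeline(object):
        def __init__(self, host, port, dbname, docname):
            client = pymongo.MongoClient(host=host, port=port)
            self.collection = client[dbname][docname]

        @classmethod
        def from_crawler(cls, crawler):
            # Pull connection details from the crawler's settings object.
            s = crawler.settings
            return cls(s['MONGODB_HOST'], s['MONGODB_PORT'],
                       s['MONGODB_DBNAME'], s['MONGODB_DOCNAME'])

        def process_item(self, item, spider):
            self.collection.insert_one(dict(item))
            return item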

Code in settings.py:

    # -*- coding: utf-8 -*-
    
    BOT_NAME = 'vuls360'
    
    SPIDER_MODULES = ['vuls360.spiders']
    NEWSPIDER_MODULE = 'vuls360.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    #USER_AGENT = 'vuls360 (+http://www.yourdomain.com)'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = True
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'vuls360.middlewares.Vuls360SpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    DOWNLOADER_MIDDLEWARES = {
       'vuls360.middlewares.RandomUserAgent': 543,
    }
    
    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
        'vuls360.pipelines.Vuls360Pipeline': 30,
    }
    
    # save to MongoDB
    # MONGO_URI = 'mongodb://127.0.0.1:27017'
    MONGODB_HOST = '127.0.0.1'
    MONGODB_PORT = 27017
    MONGODB_DBNAME = 'vuls360'
    MONGODB_DOCNAME = 'vuls360_info'
    
    USER_AGENTS = [
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
        "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
        "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
        "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
        "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
        "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
        "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
        "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
        "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
        "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    ]
    

Spider code in vuls.py:

    import scrapy

    class VulsSpider(scrapy.Spider):
        name = "vuls"
        allowed_domains = ["bobao.360.cn"]
        start_urls = ['http://bobao.360.cn/vul/index?type=all&page=']
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36',
        }

        def parse(self, response):
            # Total page count, taken from the last pagination link.
            pages = response.xpath('/html/body/div[2]/div[2]/div[2]/div[1]/div/div[3]/div/ul/li[8]/a/@href').extract()[0].split('=')[2]
            # for page in range(1, int(pages) + 1):  # crawl every listing page
            for page in range(1, 5):  # limited to the first few pages while testing
                url = 'http://bobao.360.cn/vul/index?type=all&page={}'.format(page)
                yield scrapy.Request(url, self.parse_detail)  # , headers=self.headers)

        def parse_detail(self, response):
            vuls = response.xpath('/html/body/div[2]/div[2]/div[2]/div[1]/div/div[3]/ul/li')
            for vul in vuls:
                # Some entries have no severity label; fall back to "null".
                danger = vul.xpath('.//div/div/span/text()').extract()
                label_danger = danger[0] if danger else "null"
                yield {
                    'url': 'http://bobao.360.cn' + vul.xpath('.//div/div/a/@href').extract()[0],
                    'title': vul.xpath('.//div/div/a/text()').extract()[0],
                    'label_danger': label_danger,
                    'ori': vul.xpath('.//div/span[2]/text()').extract()[0],
                }
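
pipelines.py imports Vuls360Item from vuls360.items, but items.py itself is not shown (the spider above yields plain dicts, which Scrapy also accepts). A minimal items.py consistent with the fields yielded above, as a sketch:

    # items.py -- field names taken from the dict yielded by VulsSpider
    import scrapy

    class Vuls360Item(scrapy.Item):
        url = scrapy.Field()           # detail-page link on bobao.360.cn
        title = scrapy.Field()         # vulnerability title
        label_danger = scrapy.Field()  # severity label; "null" when absent
        ori = scrapy.Field()           # source column of the listing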
    

References:
http://www.tuicool.com/articles/VRfQR3U
Scrapy wiki: http://wiki.jikexueyuan.com/project/scrapy

Defining a user-agent pool & Scrapy HTTP proxies:
http://www.2cto.com/os/201406/312688.html
https://github.com/jackgitgz/CnblogsSpider

Scrapy storage:
http://blog.csdn.net/u012150179?viewmode=contents

Using MongoDB:

    # drop the database
    > use vuls360
    switched to db vuls360
    > db.dropDatabase()
    { "dropped" : "vuls360", "ok" : 1 }
    # inspect the scraped data
    > use vuls360
    switched to db vuls360
    > show collections
    system.indexes
    vuls360_info
    > db.vuls360_info.find()
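To check the same data from Python rather than the mongo shell, a short pymongo snippet (host, database, and collection names match the settings above; count_documents needs pymongo >= 3.7):

    import pymongo

    client = pymongo.MongoClient('127.0.0.1', 27017)
    coll = client['vuls360']['vuls360_info']

    print(coll.count_documents({}))   # number of stored vulnerability records
    for vul in coll.find().limit(5):  # peek at a few documents
        print(vul['title'], vul['url'])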
    

Scrapy crawls depth-first by default; breadth-first order can be configured instead:
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'

Resources on XPath and Scrapy selectors:
http://www.cnblogs.com/lonenysky/p/4649455.html
http://www.cnblogs.com/sufei-duoduo/p/5868027.html
Resources on XPath and lxml:
http://cuiqingcai.com/2621.html

Other crawlers:
https://www.figotan.org/2016/08/10/pyspider-as-a-web-crawler-system/
