Python Crawler Basics | Tips for Keeping Your Crawler from Getting Banned

Author: JaeGwen | Published 2017-05-05 17:50

    According to the Scrapy official documentation (http://doc.scrapy.org/en/master/topics/practices.html#avoiding-getting-banned), the main strategies for keeping a Scrapy crawler from getting banned are:

    • Rotate the user agent dynamically
    • Disable cookies / enable cookies as needed
    • Set a download delay
    • Use Google cache (not covered here)
    • Use an IP pool (Tor project, VPN, and proxy IPs)
    • Use a third-party platform such as Crawlera to shield the crawler (not covered here)

    Dynamically setting the user agent

    # -*- coding:utf-8 -*-
    
    import random
    
    def get_headers():
        # Build request headers with a User-Agent picked at random from the pool below
        useragent_list = [
            'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
            'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11',
            'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
            'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
            'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36',
            'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Maxthon/4.9.2.1000 Chrome/39.0.2146.0 Safari/537.36',
            'Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11',
            'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/532.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/532.3',
            'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5',
            'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36'
        ]
        useragent = random.choice(useragent_list)
        header = {'User-Agent': useragent}
        return header
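
    As a quick sanity check outside of Scrapy, the returned headers can be passed to any HTTP client. A minimal sketch, assuming the requests library is installed; the httpbin.org URL is just a convenient echo service, not part of the original article:

    import requests

    # get_headers() is the function defined above
    response = requests.get('http://httpbin.org/user-agent', headers=get_headers())
    # httpbin echoes back the User-Agent it received, so each run should show
    # a different agent from the pool
    print(response.text)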
    
    
    

    Disabling cookies / enabling cookies

    Cookies

    Cookies (from Wikipedia): HTTP is a stateless protocol, meaning the server does not know what a user did on the previous request, which makes interactive web applications hard to build. In a typical online-shopping scenario, a user browses several pages and puts a box of cookies and two drinks in the cart; at checkout, because of HTTP's statelessness, the server has no way of knowing what was bought without some extra mechanism. Cookies are one such "extra mechanism" for working around HTTP's statelessness: the server can set and read the information stored in cookies and thereby keep track of state across the user's session.
    Another typical use of cookies is logging in to a website. The site asks for a username and password and often offers a "keep me signed in" checkbox. If it is ticked, the next visit to the same site is already logged in without re-entering anything. This works because, on the first login, the server sent a cookie containing the login credential (some encrypted form of the username and password) to the user's disk; on the next visit, as long as that cookie has not expired, the browser sends it back, the server verifies the credential, and the user is logged in without typing a username or password.
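
    To make the mechanism concrete, here is a minimal sketch (assuming the requests library; the URLs are purely hypothetical): a Session object stores whatever Set-Cookie headers the server returns and sends them back on later requests, which is exactly the state-keeping described above.

    import requests

    session = requests.Session()
    # The first response may carry Set-Cookie headers; the Session stores them.
    session.get('https://www.example.com/login')       # hypothetical URL
    print(session.cookies.get_dict())
    # Later requests through the same Session automatically send the stored
    # cookies back in the Cookie header, so the server can recognize the user.
    session.get('https://www.example.com/profile')      # hypothetical URL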

    Use Selenium + PhantomJS to simulate a browser login to Lagou and capture the cookies.

    PhantomJS download page

    # -*- coding:utf-8 -*-
    
    import sys
    import time
    import random
    from selenium import webdriver

    # Python 2 only: force the default string encoding to UTF-8
    reload(sys)
    sys.setdefaultencoding('utf-8')

    def random_sleep_time():
        # Sleep for a random number of seconds so the actions look less robotic
        sleeptime = random.randint(0, 10)
        return time.sleep(sleeptime)

    def get_headers_with_cookie():
        # Download PhantomJS and unpack it to a local path of your choice
        driver = webdriver.PhantomJS(executable_path=r"D:\phantomjs-2.1.1-windows\bin\phantomjs.exe")
        url_login = 'https://passport.lagou.com/login/login.html'
        driver.get(url_login)
        driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[1]/input').clear()
        driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[1]/input').send_keys('username')  # replace with a valid account
        random_sleep_time()
        driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[2]/input').clear()
        driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[2]/input').send_keys('password')  # replace with a valid password
        random_sleep_time()
        driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[5]/input').click()
        random_sleep_time()
        # Flatten the browser cookies into a single Cookie header value
        cookies = "; ".join([item["name"] + "=" + item["value"] for item in driver.get_cookies()])
        headers = get_headers()  # get_headers() is the function defined in the previous snippet
        headers['cookie'] = cookies.encode('utf-8')
        return headers
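
    Once the logged-in cookies have been captured, they can be attached to ordinary requests. A minimal sketch, again assuming the requests library; the URL is only a placeholder for whichever Lagou page you actually want to crawl:

    import requests

    headers = get_headers_with_cookie()
    # The 'cookie' header now carries the logged-in session obtained via PhantomJS
    response = requests.get('https://www.lagou.com/', headers=headers)
    print(response.status_code)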
    
    
    

    XPath parsing: the XPaths above were grabbed with the browser's Copy XPath trick. See 向右奔跑's post "009 - 使用XPath解析网页" (Using XPath to parse web pages) for details.

    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[1]/input')  
    
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[2]/input')
    
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[5]/input')
    
    (Screenshot: the "Copy XPath" option in the browser's developer tools)

    Disabling cookies in Scrapy

    In settings.py:

    COOKIES_ENABLED = False
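
    Conversely, when a login session is needed, cookies can stay enabled; COOKIES_DEBUG additionally logs the Cookie and Set-Cookie headers of every request and response, which helps when troubleshooting sessions. A small sketch of the relevant settings:

    COOKIES_ENABLED = True
    COOKIES_DEBUG = True    # log cookies sent and received for each request/response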
    
    

    Proxy settings (PROXIES)

    In settings.py:

    PROXIES = [
        {'ip_port': '111.11.228.75:80', 'user_pass': ''},
        {'ip_port': '120.198.243.22:80', 'user_pass': ''},
        {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
        {'ip_port': '101.71.27.120:80', 'user_pass': ''},
        {'ip_port': '122.96.59.104:80', 'user_pass': ''},
        {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
    ]
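
    The user_pass field is left empty for open proxies; for an authenticated proxy, the ProxyMiddleware shown later expects it in user:password form and turns it into a Proxy-Authorization header. A hypothetical entry:

    # Hypothetical entry for a proxy that requires Basic authentication:
    # {'ip_port': '10.10.1.10:3128', 'user_pass': 'myuser:mypassword'},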
    

    Setting the download delay

    In settings.py:

    DOWNLOAD_DELAY = 3
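
    A perfectly fixed delay is easy for a site to fingerprint. Scrapy's own settings can soften this: RANDOMIZE_DOWNLOAD_DELAY (on by default) waits between 0.5 and 1.5 times DOWNLOAD_DELAY, and the AutoThrottle extension adapts the delay to observed server latency. A sketch of the relevant settings.py entries:

    DOWNLOAD_DELAY = 3
    RANDOMIZE_DOWNLOAD_DELAY = True   # wait between 0.5 * and 1.5 * DOWNLOAD_DELAY
    AUTOTHROTTLE_ENABLED = True       # optionally let Scrapy adapt the delay to server latency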
    
    

    Create the middlewares (middlewares.py)

    import random
    import base64
    from settings import PROXIES  # the PROXIES list defined in the project's settings.py
    
    class RandomUserAgent(object):
        """Randomly rotate user agents based on a list of predefined ones"""
    
        def __init__(self, agents):
            self.agents = agents
    
        @classmethod
        def from_crawler(cls, crawler):
            # Reads the USER_AGENTS list defined in settings.py
            return cls(crawler.settings.getlist('USER_AGENTS'))
    
        def process_request(self, request, spider):
            # print "**************************" + random.choice(self.agents)
            request.headers.setdefault('User-Agent', random.choice(self.agents))
    
    class ProxyMiddleware(object):
    
        def process_request(self, request, spider):
            # Pick a proxy at random for every request
            proxy = random.choice(PROXIES)
            request.meta['proxy'] = "http://%s" % proxy['ip_port']

            if proxy['user_pass']:
                # The proxy needs authentication: send the credentials as a Basic
                # Proxy-Authorization header (strip the trailing newline that
                # base64.encodestring appends)
                encoded_user_pass = base64.encodestring(proxy['user_pass']).strip()
                request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
                print "**************ProxyMiddleware have pass************" + proxy['ip_port']
            else:
                print "**************ProxyMiddleware no pass************" + proxy['ip_port']
    

    Configure the downloader middlewares (settings.py)

    DOWNLOADER_MIDDLEWARES = {
    #   'myproject.middlewares.MyCustomDownloaderMiddleware': 543,
        'myproject.middlewares.RandomUserAgent': 1,
        'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    #   'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,  # pre-Scrapy-1.0 path
        'myproject.middlewares.ProxyMiddleware': 100,
    }
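
    The numbers are priorities: middlewares with lower values sit closer to the engine and have their process_request called earlier, so RandomUserAgent (1) stamps the User-Agent first, ProxyMiddleware (100) then picks the proxy, and the built-in HttpProxyMiddleware (110) runs last; assigning None to a built-in middleware disables it.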
    

