
Python 3.5.0 Crawler Basics: Using urllib

Author: 改变自己_now | Published 2016-12-12 14:41

GET

urllib's request module makes it very easy to fetch the content of a URL: it sends a GET request to the given page and returns the HTTP response:

    from urllib import request

    with request.urlopen('http://www.baidu.com') as f:
        data = f.read()
        # print the status code to check whether the request succeeded
        print('status:', f.status, f.reason)
        for k, v in f.getheaders():
            print('%s:%s' % (k, v))
        print('data:', data.decode('utf-8'))

You should see output like the following:

    Connection:close
    Content-Type:text/html
    Last-Modified:Mon Dec 12 14:35:49 2016
    Vary:Accept-Encoding
    Date:Mon Dec 12 14:35:49 2016
    Cache-Control:no-cache
    Content-Length:2090
    data: <!doctype html><html><body><script>… (Baidu's inline redirect script, abridged) …</script></body></html>
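Not part of the original article, but worth noting for a crawler: urlopen raises an exception when a request fails, so real code usually wraps it in error handling. A minimal sketch using the standard urllib.error module:

    from urllib import request, error

    # Minimal error-handling sketch: HTTPError carries the server's status
    # code; URLError covers lower-level failures such as DNS errors.
    try:
        with request.urlopen('http://www.baidu.com') as f:
            print('status:', f.status, f.reason)
    except error.HTTPError as e:
        print('HTTP error:', e.code, e.reason)
    except error.URLError as e:
        print('failed to reach the server:', e.reason)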
    
    
If we want to simulate a browser sending the GET request, we need a Request object: by adding HTTP headers to it, we can disguise the request as coming from a browser. For example, to simulate an iPhone 6 requesting the Douban homepage:
    
    from urllib import request

    # create the Request object
    req = request.Request('http://www.douban.com/')
    # add a request header
    req.add_header('User-Agent', 'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25')
    with request.urlopen(req) as f:
        data = f.read()
        # print the status code to check whether the request succeeded
        print('status:', f.status, f.reason)
        for k, v in f.getheaders():
            print('%s:%s' % (k, v))
        print('data:', data.decode('utf-8'))
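As a side note (an addition, not from the original), the same headers can also be passed to the Request constructor as a dict, instead of calling add_header() repeatedly:

    from urllib import request

    # Equivalent sketch: pass all headers at construction time via the
    # headers= dict argument of Request.
    headers = {
        'User-Agent': 'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) '
                      'AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 '
                      'Mobile/10A5376e Safari/8536.25',
    }
    req = request.Request('http://www.douban.com/', headers=headers)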
    
POST

To send a POST request, just pass the parameter data as bytes.

Here we simulate a login to Weibo: first read the login email and password, then encode them as username=xxx&password=xxx, the format expected by weibo.cn's login page:
    
    from urllib import request, parse

    print('login to weibo.cn...')
    email = input('Email: ')
    password = input('Password: ')
    # the POST request body, the equivalent of httpBody in Objective-C
    login_data = parse.urlencode([('username', email), ('password', password), ('entry', 'mweibo')])

    # create the Request object
    req = request.Request('https://passport.weibo.cn/sso/login')

    # set the request headers
    req.add_header('Origin', 'https://passport.weibo.cn')
    req.add_header('User-Agent', 'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25')
    req.add_header('Referer', 'https://passport.weibo.cn/signin/login?entry=mweibo&res=wel&wm=3349&r=http%3A%2F%2Fm.weibo.cn%2F')

    with request.urlopen(req, data=login_data.encode('utf-8')) as f:
        print('status:', f.status, f.reason)
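A variation worth knowing (added here as a sketch, not part of the original): the encoded body can be attached to the Request object itself, together with an explicit method, so urlopen needs no extra data argument; the 'xxx' values are placeholders:

    from urllib import request, parse

    # Alternative sketch: attach the POST body to the Request directly.
    login_data = parse.urlencode({'username': 'xxx', 'password': 'xxx', 'entry': 'mweibo'})
    req = request.Request('https://passport.weibo.cn/sso/login',
                          data=login_data.encode('utf-8'),
                          method='POST')
    # request.urlopen(req) would now issue the POST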
    
Handler
    
If you need more complex control, for example accessing a site through a proxy, you can use a ProxyHandler. Sample code:
    
    import urllib.request

    proxy_handler = urllib.request.ProxyHandler({'http': 'http://www.example.com:3128/'})
    proxy_auth_handler = urllib.request.ProxyBasicAuthHandler()
    proxy_auth_handler.add_password('realm', 'host', 'username', 'password')
    opener = urllib.request.build_opener(proxy_handler, proxy_auth_handler)
    with opener.open('http://www.example.com/login.html') as f:
        pass
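An optional follow-up not covered in the original: if every request in the program should go through the proxy, install_opener() makes the configured opener the global default, so plain urlopen() calls use it too. A minimal sketch:

    import urllib.request

    # Sketch: make a proxy-aware opener the process-wide default.
    proxy_handler = urllib.request.ProxyHandler({'http': 'http://www.example.com:3128/'})
    opener = urllib.request.build_opener(proxy_handler)
    urllib.request.install_opener(opener)
    # From here on, urllib.request.urlopen(...) goes through the proxy.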
            
    
    
    
    
