Python web scraping with the urllib library - demos

Author: 青铜搬砖工 | Published 2018-05-06 21:37

    1. urlopen() and urlencode(): an encode/decode demo

    from urllib import request, parse

    # POST form fields for Lagou's job-list Ajax endpoint
    data = {
        "first": "true",
        "pn": 1,
        "kd": "Android"
    }
    # Browser-like headers; Lagou rejects requests without User-Agent and Referer
    header = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.117 Safari/537.36",
        "Referer": "https://www.lagou.com/jobs/list_Android?px=default&city=%E5%B9%BF%E5%B7%9E"
    }
    # urlencode() serializes the dict; encode() turns the str into the bytes a POST body needs
    req = request.Request(
        "https://www.lagou.com/jobs/positionAjax.json?px=default&city=%E5%B9%BF%E5%B7%9E&needAddtionalResult=false",
        data=parse.urlencode(data).encode("utf-8"),
        headers=header,
        method="POST"
    )
    result = request.urlopen(req)
    print(result.read().decode("utf-8"))
    

    urlopen() sends a request to the given URL and returns the response.
    Data on the wire is bytes, so the dict is first serialized with urlencode() and then converted to bytes with encode(); likewise the response body must be decoded with decode() after reading, or the output is likely to come out garbled.
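
    A minimal sketch of that encode/decode round trip (my addition: it targets httpbin.org, a public echo service, instead of the Lagou endpoint, so it runs standalone):

    from urllib import request, parse

    params = {"kd": "Android", "pn": 1}
    body = parse.urlencode(params)  # str: "kd=Android&pn=1"
    # encode() turns the str into the bytes urllib requires for a POST body
    req = request.Request("http://httpbin.org/post",
                          data=body.encode("utf-8"),
                          method="POST")
    with request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # decode bytes back to str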

    2. Proxy IPs

    from urllib import request

    url = "http://ip.chinaz.com/getip.aspx"
    # 1. ProxyHandler builds a handler that routes traffic through the proxy
    handler = request.ProxyHandler({'http': '183.159.80.115:18118'})  # errors here are most likely an unstable proxy IP
    # 2. Build an opener from the handler
    opener = request.build_opener(handler)
    # 3. Send the request through the opener
    resp = opener.open(url)
    print(resp.read().decode("utf-8"))
    

    My understanding of handlers is still shallow; for now I just know a handler is a processor, and I think of the opener as something like a browser that actually goes and requests the URL.
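
    One way to make that "opener as browser" analogy concrete (my own addition, not from the original post): request.install_opener() registers a custom opener as the global default, so even plain urlopen() calls then go through the proxy:

    from urllib import request

    # the proxy address below is a placeholder; substitute a live one
    handler = request.ProxyHandler({'http': 'http://183.159.80.115:18118'})
    opener = request.build_opener(handler)
    request.install_opener(opener)  # all later urlopen() calls use this opener

    # httpbin.org/ip echoes the caller's IP, so it should report the proxy's IP
    resp = request.urlopen("http://httpbin.org/ip")
    print(resp.read().decode("utf-8"))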

    3. Cookies

    from urllib import request

    url = "http://www.renren.com/880792860/profile"
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.90 Safari/537.36 2345Explorer/9.3.2.17331',
        'Cookie': 'anonymid=jgtj84q4-txf6wv; depovince=HUB; _r01_=1; JSESSIONID=abcP30X8kXtzn-f8q4Wmw; ick_login=2fd80652-4a11-46c0-8390-620909c51cad; t=72bbd4f70c95d18f1459f9679acb27ad0; societyguester=72bbd4f70c95d18f1459f9679acb27ad0; id=965779570; xnsid=cf1e945b; jebecookies=e0a3ddd4-105d-47aa-8381-14d732843363|||||; ver=7.0; loginfrom=null; jebe_key=a928342b-f0b3-4a41-b26d-ed332bb8120b%7C7e4751b99e97b478a812137eab779aed%7C1525533471094%7C1%7C1525533472021; wp_fold=0'
    }
    req = request.Request(url, headers=header)
    result = request.urlopen(req)

    # read() consumes the whole body and leaves the pointer at the end,
    # so a second read() would return an empty string - read once and reuse it
    html = result.read().decode("utf-8")
    print(html)

    with open("renren.html", 'w', encoding="utf-8") as file:
        file.write(html)
    

    Adding the Cookie value to the request headers lets you scrape pages that normally require a login.
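
    A quick self-contained check (my addition, using httpbin.org rather than Renren) that the Cookie header really goes out with the request; the service echoes back whatever cookies it received:

    from urllib import request
    import json

    req = request.Request("http://httpbin.org/cookies",
                          headers={"Cookie": "sessionid=abc123; theme=dark"})
    with request.urlopen(req) as resp:
        echoed = json.loads(resp.read().decode("utf-8"))
    print(echoed["cookies"])  # {'sessionid': 'abc123', 'theme': 'dark'}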

    4. Saving and loading cookies

    from urllib import request
    from http.cookiejar import MozillaCookieJar

    cookie = MozillaCookieJar("cookie.txt")
    handler = request.HTTPCookieProcessor(cookie)
    opener = request.build_opener(handler)

    opener.open("http://httpbin.org/cookies/set?name=lizhe22")

    cookie.save(ignore_discard=True)  # save the jar to cookie.txt
    cookie.load(ignore_discard=True)  # load it back from cookie.txt
    for c in cookie:
        print(c)
    

    cookie.save() and cookie.load() write the cookies to, and read them back from, the file given to MozillaCookieJar. The ignore_discard=True parameter matters for session cookies: a session cookie is destroyed when the browser closes, and as I understand it, in a script the opener object going away plays the same role as closing the browser, so without the flag the session cookie is neither saved nor shown.
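
    The point of save()/load() is reusing cookies across program runs. A minimal sketch (my addition, assuming cookie.txt was written by the snippet above): a fresh jar loads the file, and the opener then sends those cookies automatically:

    from urllib import request
    from http.cookiejar import MozillaCookieJar

    cookie = MozillaCookieJar("cookie.txt")  # same file the demo above saved
    cookie.load(ignore_discard=True)         # restore the saved cookies
    opener = request.build_opener(request.HTTPCookieProcessor(cookie))

    # httpbin echoes the cookies it received, proving the jar was restored
    resp = opener.open("http://httpbin.org/cookies")
    print(resp.read().decode("utf-8"))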

    5. Fetching cookies automatically

    from urllib import request, parse
    from http.cookiejar import CookieJar

    # 1. Create the CookieJar
    cookie = CookieJar()
    handler = request.HTTPCookieProcessor(cookie)  # a handler is a processor that adds specific information to requests or responses
    opener = request.build_opener(handler)  # the opener sends requests, much like a browser object

    # 2. POST the account and password to the login URL; the response sets the cookie
    login_url = "http://www.renren.com/PLogin.do"  # login_url taken from the login form's action attribute
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.90 Safari/537.36 2345Explorer/9.3.2.17331'
    }
    data = {
        "email": "15925550603",
        "password": "1111130330"
    }
    req = request.Request(login_url, headers=header, data=parse.urlencode(data).encode("utf-8"))
    rep = opener.open(req)
    print(rep.read().decode("utf-8"))

    # 3. Visit the profile page; the opener now sends the login cookie automatically
    main_url = "http://www.renren.com/880792860/profile"
    req = request.Request(main_url, headers=header)
    rep = opener.open(req)

    with open("renren.html", 'w', encoding="utf-8") as file:
        file.write(rep.read().decode("utf-8"))
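
    After opener.open(req) hits the login URL, any Set-Cookie in the response lands in the CookieJar automatically. A self-contained check of that mechanism (my addition, against httpbin.org instead of Renren):

    from urllib import request
    from http.cookiejar import CookieJar

    jar = CookieJar()
    opener = request.build_opener(request.HTTPCookieProcessor(jar))

    # the Set-Cookie header in this response is captured by the jar
    opener.open("http://httpbin.org/cookies/set?session=demo")
    for c in jar:
        print(c.name, c.value, c.domain)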
    
