
Web Scraping, Lecture 3: The Basic urllib Library

Author: 谢谢_d802 | Published 2018-08-20 14:18

    The urllib library is built into Python (part of the standard library).

    What is urllib?

    1. urllib.request -- the request module
    2. urllib.error -- the exception handling module
    3. urllib.parse -- the URL parsing module
    4. urllib.robotparser -- the robots.txt parsing module

    Usage

    • urlopen

      urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)

      A GET request:

      import urllib.request
      response = urllib.request.urlopen('http://www.baidu.com')
      print(response.read().decode('utf-8'))  # read the body and decode it as UTF-8
      

      A POST request:

      import urllib.parse
      import urllib.request
      # urlencode the form fields, then convert to bytes; supplying data makes urlopen send a POST
      data = bytes(urllib.parse.urlencode({'word': 'hello'}), encoding='utf8')
      response = urllib.request.urlopen('http://httpbin.org/post', data=data)
      print(response.read())
      

      With a timeout parameter:

      import urllib.request
      response = urllib.request.urlopen('http://httpbin.org/get', timeout=1)
      print(response.read())
      

      Testing a very short timeout and handling the resulting exception:

      import socket
      import urllib.request
      import urllib.error
      try:
        response = urllib.request.urlopen('http://httpbin.org/get', timeout=0.1)
      except urllib.error.URLError as e:
        # a timeout surfaces as a URLError whose reason is a socket.timeout
        if isinstance(e.reason, socket.timeout):
            print('TIMEOUT')
      else:
        print('It is OK!')
      

      Checking the type of the object urlopen returns:

      import urllib.request
      response = urllib.request.urlopen('https://www.python.org')
      print(type(response))
      

      Output: <class 'http.client.HTTPResponse'>

    Response content -- the status code and response headers

    import urllib.request
    response = urllib.request.urlopen('https://www.python.org')
    print(response.status)
    print(response.getheaders())
    print(response.getheader('Server'))
    print(response.read().decode('utf-8'))  # the response body, decoded as UTF-8
    

    Output:
    200
    [('Server', 'nginx'), ('Content-Type', 'text/html; charset=utf-8'), ('X-Frame-Options', 'SAMEORIGIN'), ('x-xss-protection', '1; mode=block'), ('X-Clacks-Overhead', 'GNU Terry Pratchett'), ('Via', '1.1 varnish'), ('Content-Length', '48809'), ('Accept-Ranges', 'bytes'), ('Date', 'Sat, 18 Aug 2018 12:56:38 GMT'), ('Via', '1.1 varnish'), ('Age', '129'), ('Connection', 'close'), ('X-Served-By', 'cache-iad2128-IAD, cache-nrt6150-NRT'), ('X-Cache', 'HIT, HIT'), ('X-Cache-Hits', '2, 48'), ('X-Timer', 'S1534596999.663138,VS0,VE0'), ('Vary', 'Cookie'), ('Strict-Transport-Security', 'max-age=63072000; includeSubDomains')]
    nginx

    • The Request class

    For more complex requests, pass a Request object to urlopen instead of a bare URL; constructing a Request makes it easy to set the request method, headers, and body.

    import urllib.request
    request = urllib.request.Request('https://python.org')
    response = urllib.request.urlopen(request)
    print(response.read().decode('utf-8'))
    
    

    The output is the response body returned for https://python.org.
    Now send a POST request, building the Request with its constructor and passing it to urlopen:

    from urllib import request,parse
    url = 'http://httpbin.org/post'
    headers = {
          'User-Agent':'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
          'Host':'httpbin.org'
          }
    form = {
          'name':"Germey"
          }
    data = bytes(parse.urlencode(form),encoding='utf8')
    req = request.Request(url=url,data=data,headers=headers,method='POST')
    response = request.urlopen(req)
    print(response.read().decode('utf-8'))
    

    Again we send a POST request built with the Request constructor, but this time headers is not passed to the constructor; instead a header is added via request.add_header. If there are many key-value pairs to set, you can call add_header repeatedly in a for loop, as sketched after the example below.

    from urllib import request,parse
    url = 'http://httpbin.org/post'
    form = {
          'name':'XieZ'
          }
    data = bytes(parse.urlencode(form),encoding='utf8')
    req = request.Request(url=url,data=data,method='POST')
    req.add_header('User-Agent','Mozilla/4.0(compatible;MSIE 5.5;Windows NT)')
    response = request.urlopen(req)
    print(response.read().decode('utf-8'))
    
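    A minimal sketch of the for-loop variant mentioned above, assuming a purely illustrative set of headers:

    from urllib import request, parse
    url = 'http://httpbin.org/post'
    data = bytes(parse.urlencode({'name': 'XieZ'}), encoding='utf8')
    # hypothetical header set, purely for illustration
    headers = {
        'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
        'Host': 'httpbin.org'
    }
    req = request.Request(url=url, data=data, method='POST')
    for key, value in headers.items():  # add each key-value pair as a header
        req.add_header(key, value)
    response = request.urlopen(req)
    print(response.read().decode('utf-8'))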

    Handlers -- urllib's advanced machinery: proxies, cookies, and the other advanced features are all implemented by different handlers.

    • Proxies

      import urllib.request
      # map each URL scheme to the proxy that should handle it
      proxy_handler = urllib.request.ProxyHandler({
        'http':'http://127.0.0.1:9743',
        'https':'https://127.0.0.1:9743'
        })
      opener = urllib.request.build_opener(proxy_handler)
      response = opener.open('http://www.baidu.com')
      print(response.read())
      
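      The opener above only applies to requests made through opener.open. As a side note beyond the original example, urllib.request also provides install_opener, which makes an opener the process-wide default; a minimal sketch:

      import urllib.request
      proxy_handler = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:9743'})
      opener = urllib.request.build_opener(proxy_handler)
      urllib.request.install_opener(opener)  # later urlopen calls go through this opener
      response = urllib.request.urlopen('http://www.baidu.com')
      print(response.status)
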
    • Cookies -- cookies can store login session information

      import http.cookiejar,urllib.request
      cookie = http.cookiejar.CookieJar()  # in-memory cookie store
      handler = urllib.request.HTTPCookieProcessor(cookie)  # handler that captures cookies from responses
      opener = urllib.request.build_opener(handler)
      response = opener.open('http://www.baidu.com')
      for item in cookie:
        print(item.name+"="+item.value)
      

      Save the cookies to a file, so a crawler can later use them to log in to the site and stay logged in:

      import http.cookiejar,urllib.request
      filename = "cookie.txt"
      cookie = http.cookiejar.MozillaCookieJar(filename)
      handler = urllib.request.HTTPCookieProcessor(cookie)
      opener = urllib.request.build_opener(handler)
      response = opener.open('http://www.baidu.com')
      cookie.save(ignore_discard=True,ignore_expires=True)
      

      The cookies are now stored in the file. MozillaCookieJar is a subclass of CookieJar that saves cookies in Firefox's format; other formats exist as well, such as LWPCookieJar. Whichever format you save in, load the cookies back with the same format.

      Save cookies to a file with LWPCookieJar, then load the cookies back from that file and request a page:

      import http.cookiejar,urllib.request
      filename = "cookie.txt"
      cookie = http.cookiejar.LWPCookieJar(filename)
      handler = urllib.request.HTTPCookieProcessor(cookie)
      opener = urllib.request.build_opener(handler)
      response = opener.open('http://www.baidu.com')
      cookie.save(ignore_discard=True,ignore_expires=True)
      mycookie = http.cookiejar.LWPCookieJar()
      mycookie.load('cookie.txt',ignore_discard=True,ignore_expires=True)
      handler = urllib.request.HTTPCookieProcessor(mycookie)
      opener = urllib.request.build_opener(handler)
      response = opener.open('http://www.baidu.com')
      print(response.read().decode('utf-8'))
      
    • urllib's exception handling module

      from urllib import request,error
      try:
        response = request.urlopen('http://ljlhhljl.com/index.htm')
      except error.URLError as e:
        print(e.reason)
      

      Output: [Errno -2] Name or service not known

      HTTPError has reason, code, and headers attributes:

      from urllib import request,error
      try:
        response = request.urlopen('http://www.sina.com.cn/99999.html')
      except error.HTTPError as e:
        print(e.reason,e.code,e.headers,sep='\n')
        print('This is end of HTTPError\n')
      except error.URLError as e:
        print(e.reason)
      else:
        print('Request Successful')
      
      Output:
      Not Found
      404
      Server: nginx
      Date: Sun, 19 Aug 2018 22:06:25 GMT
      Content-Type: text/html
      Transfer-Encoding: chunked
      Connection: close
      Vary: Accept-Encoding
      Age: 0
      Via: http/1.1 ctc.nanjing.ha2ts4.77 (ApacheTrafficServer/6.2.1 [cMsSf ])
      X-Cache: MISS.77
      X-Via-CDN:     f=edge,s=ctc.nanjing.ha2ts4.65.nb.sinaedge.com,c=61.171.236.224;f=Edge,s=ctc.nanjing.ha2ts4.77,c=202.102.94.65
      X-Via-Edge: 1534716385494e0ecab3d7c5e66ca3150b8e6
      
      
      This is end of HTTPError
      
    • The URL parsing module -- urlparse and urlunparse
      1. The urlparse function

      from urllib.parse import urlparse
      result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
      print(type(result),result)
      

      Output: <class 'urllib.parse.ParseResult'> ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5', fragment='comment')
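
      ParseResult is a namedtuple, so the components can also be read by attribute or by index; a small sketch:

      from urllib.parse import urlparse
      result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
      print(result.scheme, result[1])  # by attribute and by index: http www.baidu.com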

    2. The urlunparse function

    from urllib.parse import urlunparse
    data = ['http','www.baidu.com','index.html','user','a=6','comment']
    print(urlunparse(data))
    

    Output: http://www.baidu.com/index.html;user?a=6#comment
    urlunparse is the inverse of urlparse: it assembles the individual components back into a single URL.
    3. The urljoin function

    from urllib.parse import urljoin
    print(urljoin('http://www.baidu.com','FAQ.html'))
    print(urljoin('http://www.baidu.com','https://lllll.com/FAQ.html'))
    

    Output:
    http://www.baidu.com/FAQ.html
    https://lllll.com/FAQ.html
    urljoin resolves its second argument against the first: a relative name is completed with the base URL, while a second argument carrying its own scheme and host wins outright, as the extra cases below show.
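
    A couple more illustrative cases of the resolution rule (the URLs are made up):

    from urllib.parse import urljoin
    # a relative name replaces the last path segment of the base
    print(urljoin('http://www.baidu.com/about/intro.html', 'FAQ.html'))
    # -> http://www.baidu.com/about/FAQ.html
    # a bare query string is appended to the base URL
    print(urljoin('http://www.baidu.com/index.html', '?category=2'))
    # -> http://www.baidu.com/index.html?category=2
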
    4. urlencode -- converts a dict into GET request parameters; very commonly used

    from urllib.parse import urlencode
    params = {
          'name':'xiezheng',
          'age':23
          }
    base_url = 'http://www.baidu.com?'
    url = base_url + urlencode(params)
    print(url)
    

    Output: http://www.baidu.com?name=xiezheng&age=23

    • The urllib.robotparser module parses robots.txt files, which tell crawlers which parts of a site they may fetch; a minimal usage sketch follows.
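
    A minimal sketch of typical RobotFileParser usage (the target site is only an illustration):

      import urllib.robotparser
      rp = urllib.robotparser.RobotFileParser()
      rp.set_url('http://www.baidu.com/robots.txt')  # point the parser at the site's robots.txt
      rp.read()  # fetch and parse the file
      # can_fetch(useragent, url) reports whether that user agent may crawl the url
      print(rp.can_fetch('*', 'http://www.baidu.com/index.html'))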
