What is the Requests library?
Requests is an HTTP library written in Python, built on top of urllib and released under the Apache2 License.
It is more convenient than urllib, saves a great deal of work, and fully covers everyday HTTP needs.
In one sentence: a simple, easy-to-use HTTP library implemented in Python.
Installing requests
pip install requests
A first example
Here we use requests to send a GET request to Baidu and print the status code and cookie information:
import requests
response = requests.get('http://www.baidu.com')
print(response.status_code)
# print(response.text)
print(response.cookies)
200
<RequestsCookieJar[<Cookie BDORZ=27315 for .baidu.com/>]>
Other request methods
requests makes it simple to send all kinds of requests, far more conveniently than urllib.
import requests
requests.post('http://httpbin.org/post')
requests.put('http://httpbin.org/put')
requests.head('http://httpbin.org/get')  # httpbin has no /head route; send HEAD to an existing resource
requests.delete('http://httpbin.org/delete')
<Response [200]>
Basic GET requests
import requests
response = requests.get('http://httpbin.org/get')
print(response.text)  # response.text returns the response body as text
{"args":{},"headers":{"Accept":"*/*","Accept-Encoding":"gzip, deflate","Connection":"close","Host":"httpbin.org","User-Agent":"python-requests/2.18.4"},"origin":"117.139.10.7","url":"http://httpbin.org/get"}
GET requests with parameters
Pass a dict to the params argument to build the query string; requests appends it to the URL, so the request URL can be constructed dynamically.
import requests
data = {
    'name': 'gemmey',
    'age': 22
}
response = requests.get('http://httpbin.org/get', params=data)
print(response.text)
{"args":{"age":"22","name":"gemmey"},"headers":{"Accept":"*/*","Accept-Encoding":"gzip, deflate","Connection":"close","Host":"httpbin.org","User-Agent":"python-requests/2.18.4"},"origin":"117.172.26.83","url":"http://httpbin.org/get?name=gemmey&age=22"}
Parsing JSON
When the response body is JSON, we can parse it with the response.json() method; the result is the same as calling json.loads(response.text).
import requests
import json
response = requests.get('http://httpbin.org/get')
print(type(response.text))
print(response.json())
print(json.loads(response.text))
<class 'str'>
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.18.4'}, 'origin': '117.139.10.7', 'url': 'http://httpbin.org/get'}
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.18.4'}, 'origin': '117.139.10.7', 'url': 'http://httpbin.org/get'}
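Note that response.json() raises a ValueError when the body is not valid JSON, so it is worth guarding the call when the content type is uncertain. A minimal sketch, using httpbin's /html endpoint (which returns an HTML page, not JSON):

```python
import requests

# httpbin.org/html deliberately returns HTML, not JSON, so json() fails.
response = requests.get('http://httpbin.org/html')
try:
    data = response.json()
except ValueError:
    print('Response body is not JSON')
```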
Getting binary data
response.content
returns the response body as raw bytes; this is the usual way to download binary files such as images and videos.
import requests
response = requests.get('https://github.com/favicon.ico')
print(type(response.text), type(response.content))
print(response.status_code)
<class 'str'> <class 'bytes'>
200
The binary content can then be saved as an image or video file with the appropriate extension; note that the file must be opened in 'wb' (binary write) mode.
# Save the icon to disk
import requests
response = requests.get('https://github.com/favicon.ico')
with open('favicon.ico', 'wb') as f:
    f.write(response.content)  # the with block closes the file automatically
Adding headers
Pass a dict of header fields to the headers parameter:
import requests
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64)'
}
response = requests.get('https://www.zhihu.com/explore', headers=headers)
print(response.status_code)
# print(response.text)
200
Basic POST requests
Pass a dict to the data parameter to build a form-encoded POST request. In fact, most optional arguments to requests.get and requests.post are passed as dicts.
import requests
data = {'name':'gemmey', 'age':22}
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64)'
}
response = requests.post('http://httpbin.org/post', data=data, headers=headers)
print(response.text)
{"args":{},"data":"","files":{},"form":{"age":"22","name":"gemmey"},"headers":{"Accept":"*/*","Accept-Encoding":"gzip, deflate","Connection":"close","Content-Length":"18","Content-Type":"application/x-www-form-urlencoded","Host":"httpbin.org","User-Agent":"Mozilla/5.0 (Windows NT 6.1; WOW64)"},"json":null,"origin":"117.139.10.7","url":"http://httpbin.org/post"}
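Besides form data, a dict can be sent as a JSON request body via the json parameter, in which case requests serializes it and sets the Content-Type header to application/json automatically. A minimal sketch against the same httpbin endpoint:

```python
import requests

# The json parameter serializes the dict to a JSON body and sets
# Content-Type: application/json automatically.
payload = {'name': 'gemmey', 'age': 22}
response = requests.post('http://httpbin.org/post', json=payload)
print(response.json()['json'])  # httpbin echoes back the parsed JSON body
```

The server-side echo appears under the "json" key of the response rather than "form", since the body is JSON rather than form-encoded.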
Response attributes
import requests
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64)'
}
response = requests.get('http://jianshu.com', headers=headers)
print(type(response.status_code), response.status_code)
print(type(response.headers), response.headers)
print(type(response.url), response.url)
print(type(response.history), response.history)
<class 'int'> 200
<class 'requests.structures.CaseInsensitiveDict'> {'Date': 'Wed, 30 May 2018 06:49:54 GMT', 'Server': 'Tengine', 'Content-Type': 'text/html; charset=utf-8', 'Transfer-Encoding': 'chunked', 'X-Frame-Options': 'DENY', 'X-XSS-Protection': '1; mode=block', 'X-Content-Type-Options': 'nosniff', 'ETag': 'W/"29924b69c281f1f5708aa7338021b7da"', 'Cache-Control': 'max-age=0, private, must-revalidate', 'Set-Cookie': 'locale=zh-CN; path=/', 'X-Request-Id': '0d995087-8c71-4405-bf82-e67ba1dbcd3b', 'X-Runtime': '0.008407', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'Content-Encoding': 'gzip', 'X-Via': '1.1 PSbjwjBGP2uz240:4 (Cdn Cache Server V2.0), 1.1 PSzjtzsx2kf43:3 (Cdn Cache Server V2.0), 1.1 myd65:5 (Cdn Cache Server V2.0)', 'Connection': 'keep-alive', 'X-Dscp-Value': '0'}
<class 'str'> https://www.jianshu.com/
<class 'list'> [<Response [301]>, <Response [301]>]
Checking the status code
import requests
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64)'
}
response = requests.get('http://jianshu.com', headers=headers)
if response.status_code == requests.codes.ok:
    print('Request Successfully')
else:
    exit()
Request Successfully
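Instead of comparing status codes by hand, requests also offers response.raise_for_status(), which raises an HTTPError for any 4xx/5xx response. A sketch using httpbin's /status endpoint, which returns whatever status code the URL names:

```python
import requests
from requests.exceptions import HTTPError

# httpbin.org/status/404 deliberately returns a 404 response.
response = requests.get('http://httpbin.org/status/404')
try:
    response.raise_for_status()  # raises HTTPError for 4xx/5xx responses
except HTTPError as e:
    print('Request failed:', e)
```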
Advanced usage
File upload
Files are uploaded with a POST request by passing the open file objects to the files parameter:
import requests
with open('favicon.ico', 'rb') as f:  # with ensures the file handle is closed
    files = {'file': f}
    response = requests.post('http://httpbin.org/post', files=files)
print(response.status_code)
200
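A files entry can also be a (filename, fileobj, content_type) tuple when the filename or MIME type needs to be set explicitly. In this sketch an in-memory BytesIO stands in for a real file on disk:

```python
import io
import requests

# The tuple form lets us set the filename and content type explicitly;
# io.BytesIO provides file-like bytes without touching the filesystem.
files = {'file': ('favicon.ico', io.BytesIO(b'icon bytes'), 'image/x-icon')}
response = requests.post('http://httpbin.org/post', files=files)
print(response.status_code)
```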
Getting cookies
The response object has a cookies attribute from which the cookies set by the server can be read directly.
import requests
response = requests.get('http://www.baidu.com')
print(response.cookies)
for key, value in response.cookies.items():
    print(key + ' = ' + value)
<RequestsCookieJar[<Cookie BDORZ=27315 for .baidu.com/>]>
BDORZ = 27315
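Cookies can also be sent with a request by passing a dict to the cookies parameter; httpbin's /cookies endpoint echoes back what it received:

```python
import requests

# Pass a dict to the cookies parameter to send cookies with the request.
cookies = {'session_id': 'abc123'}
response = requests.get('http://httpbin.org/cookies', cookies=cookies)
print(response.json())
```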
Session persistence
requests.Session() keeps state, most importantly cookies, across requests, simulating a continuous browser session (such as staying logged in). Here there are two GET requests: the first sets a cookie on the test server, and the second, made through the same session, automatically sends that cookie back, so the two requests are connected.
import requests
s = requests.Session()
s.get('http://httpbin.org/cookies/set/number/123456789')
response = s.get('http://httpbin.org/cookies')
print(response.text)
{"cookies":{"number":"123456789"}}
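For contrast, two plain requests.get calls share no state, so the second request does not see the cookie set by the first:

```python
import requests

# Each top-level requests.get call is independent: the cookie set by
# the first request is not carried over to the second.
requests.get('http://httpbin.org/cookies/set/number/123456789')
response = requests.get('http://httpbin.org/cookies')
print(response.text)  # the cookies dict comes back empty
```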
Certificate verification
By default, requests verifies the server's TLS certificate when making a request and raises an exception if the certificate is not valid. At the time of writing, the 12306 site used a certificate that failed verification, so a plain GET request would fail. Setting the verify parameter to False (it defaults to True) skips certificate verification; requests will still emit a warning, which we can ignore here.
import requests
response = requests.get('https://www.12306.cn', verify=False)
print(response.status_code)
200
D:\Anaconda\Anaconda\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
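If the warning is unwanted noise, urllib3 (the library requests is built on) provides disable_warnings to silence it; a sketch:

```python
import requests
import urllib3

# Silence the InsecureRequestWarning emitted for unverified HTTPS requests.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
response = requests.get('https://www.12306.cn', verify=False)
print(response.status_code)
```

Suppressing the warning does not make the request any safer; skipping verification should be limited to testing against hosts you control or trust.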
Timeout
The timeout parameter sets how many seconds to wait for a response before raising an exception.
import requests
response = requests.get('http://httpbin.org/get', timeout=1)
print(response.status_code)
200
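timeout can also be a (connect, read) tuple to limit the connection and read phases separately:

```python
import requests

# A (connect, read) tuple limits the two phases independently:
# up to 3.05 s to establish the connection, then up to 27 s for the response.
response = requests.get('http://httpbin.org/get', timeout=(3.05, 27))
print(response.status_code)
```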
Exception handling
ReadTimeout (a subclass of Timeout) and HTTPError are both subclasses of RequestException, so a single except RequestException clause is enough to catch every exception requests can raise; list the more specific exceptions first if you want to handle them separately.
import requests
from requests.exceptions import ReadTimeout, HTTPError, RequestException
try:
    response = requests.get('http://httpbin.org/get', timeout=0.5)
    print(response.status_code)
except ReadTimeout:
    print('TIME OUT')
except HTTPError:
    print('HTTP ERROR')
except RequestException:
    print('error')
200
Learn a little every day; improve a little every day.