Web Crawler: urllib Module, Part 12: Error Handling
Author: 牛耀 | Published 2018-12-23 15:02

from urllib import request, error

def check_urlerror():
"""
1.没有网络
2. 服务器连接失败
3. 找不到指定服务器
:return:
"""
    url = 'http://www.baiduxxx.com/'
    try:
        req_header = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:64.0) Gecko/20100101 Firefox/64.0',
        }
        # Build a Request object with the custom headers
        req = request.Request(url, headers=req_header)
        # Issue the request from the Request object
        response = request.urlopen(req)
        print(response.reason)
    except error.URLError as err:
        print(err.reason)

check_urlerror()
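As the handler above shows, URLError exposes the underlying failure through its `.reason` attribute. A minimal offline sketch (the error message below is a hypothetical example, not a real lookup failure) illustrates what that handler would print:

```python
from urllib import error

# URLError wraps the lower-level exception that caused the failure;
# .reason exposes it (the OSError message here is a made-up example).
cause = OSError('getaddrinfo failed')
err = error.URLError(cause)

print(err.reason)           # getaddrinfo failed
print(err.reason is cause)  # True
```

In real use, `.reason` is typically the `socket.gaierror` or `OSError` that the lookup or connection raised.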
# error.HTTPError is a subclass of URLError
def check_httperror():
    url = 'https://www.qidian.com/all/nsacnscn.htm'
    try:
        response = request.urlopen(url)
        print(response.status)
    except error.HTTPError as err:
        # HTTPError has three useful attributes:
        # the status code
        print(err.code)
        # the reason phrase
        print(err.reason)
        # the response headers
        print(err.headers)
    except error.URLError as err:
        print(err.reason)

check_httperror()
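Because HTTPError is a subclass of URLError, the except clauses above must list HTTPError first; otherwise the URLError handler would catch every HTTP error too. A small sketch (constructing an HTTPError by hand with a hypothetical URL, so it runs without a network) demonstrates the subclass relationship and the attributes used above:

```python
from io import BytesIO
from email.message import Message
from urllib import error

def classify(err):
    # HTTPError must be checked before URLError, since it is a subclass.
    if isinstance(err, error.HTTPError):
        return 'HTTP %d: %s' % (err.code, err.reason)
    if isinstance(err, error.URLError):
        return 'URL error: %s' % err.reason
    return 'other'

# Build an HTTPError by hand (hypothetical URL) so this runs offline.
e = error.HTTPError('http://example.com/x', 404, 'Not Found',
                    Message(), BytesIO(b''))
print(classify(e))                    # HTTP 404: Not Found
print(isinstance(e, error.URLError))  # True
```

The same ordering rule applies to `isinstance` checks: testing URLError first would match HTTPError instances as well.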