Having previously walked through OkHttp's overall call flow, this time we look at what each interceptor actually does.
To make things easier to follow, here is the entry method that assembles the interceptor chain:
//RealCall.java
Response getResponseWithInterceptorChain() throws IOException {
  // Build a full stack of interceptors.
  // Assemble the interceptor list
  List<Interceptor> interceptors = new ArrayList<>();
  interceptors.addAll(client.interceptors()); // application interceptors added by the user, e.g. a logging interceptor
  interceptors.add(retryAndFollowUpInterceptor); // retry and redirect interceptor
  interceptors.add(new BridgeInterceptor(client.cookieJar())); // bridge interceptor, mainly adds header information
  interceptors.add(new CacheInterceptor(client.internalCache())); // cache interceptor
  interceptors.add(new ConnectInterceptor(client)); // connect interceptor, establishes the connection
  if (!forWebSocket) {
    interceptors.addAll(client.networkInterceptors()); // network interceptors are only added for non-WebSocket calls
  }
  interceptors.add(new CallServerInterceptor(forWebSocket)); // call-server interceptor (the connect interceptor only sets up the connection; this one actually writes the request and reads the response)

  Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
      originalRequest, this, eventListener, client.connectTimeoutMillis(),
      client.readTimeoutMillis(), client.writeTimeoutMillis());

  return chain.proceed(originalRequest); // kick off the chain
}
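client.interceptors() above holds the application interceptors registered by the user. As a reminder of what such an interceptor looks like, here is a minimal sketch of a logging interceptor; LoggingInterceptor is a made-up name, OkHttp only requires implementing the Interceptor interface.

import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

public class LoggingInterceptor implements Interceptor {
  @Override public Response intercept(Interceptor.Chain chain) throws IOException {
    Request request = chain.request();
    long start = System.nanoTime();
    System.out.println("Sending " + request.method() + " " + request.url());

    // Hand the request to the next interceptor in the chain and wait for its response
    Response response = chain.proceed(request);

    long tookMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("Received " + response.code() + " for " + response.request().url()
        + " in " + tookMs + "ms");
    return response;
  }
}

// Registered when building the client:
// OkHttpClient client = new OkHttpClient.Builder()
//     .addInterceptor(new LoggingInterceptor())       // application interceptor
//     .addNetworkInterceptor(new LoggingInterceptor()) // or as a network interceptor
//     .build();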
Next, let's go through RetryAndFollowUpInterceptor, BridgeInterceptor, CacheInterceptor, ConnectInterceptor, and CallServerInterceptor in turn.
Retry and redirect interceptor (RetryAndFollowUpInterceptor)
@Override public Response intercept(Chain chain) throws IOException {
  Request request = chain.request();
  RealInterceptorChain realChain = (RealInterceptorChain) chain;
  Call call = realChain.call();
  EventListener eventListener = realChain.eventListener();

  // The key class for acquiring a network connection; ConnectInterceptor uses it further down the chain
  StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
      createAddress(request.url()), call, eventListener, callStackTrace);
  this.streamAllocation = streamAllocation;

  int followUpCount = 0;
  Response priorResponse = null;
  // An endless loop: this is the key to how the interceptor implements retries and redirects
  while (true) {
    if (canceled) {
      // If canceled, throw; an async timeout or an explicit call to cancel() flips this flag
      streamAllocation.release();
      throw new IOException("Canceled");
    }

    Response response;
    boolean releaseConnection = true;
    try {
      // The groundwork is done (the loop and the StreamAllocation); hand off to the next interceptor in the chain
      response = realChain.proceed(request, streamAllocation, null, null);
      releaseConnection = false; // reaching here means the request succeeded: no exception, and the server responded
    } catch (RouteException e) {
      // The attempt to connect via a route failed. The request will not have been sent.
      // An expected exception: ask recover() whether a retry is possible.
      // Note this only decides whether a retry is possible; it does not actually retry
      if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
        // Cannot recover, so rethrow and exit the loop
        throw e.getFirstConnectException();
      }
      releaseConnection = false;
      continue;
    } catch (IOException e) {
      // An attempt to communicate with a server failed. The request may have been sent.
      // An expected exception: ask recover() whether a retry is possible.
      // Again, this only decides whether a retry is possible; it does not actually retry
      boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
      if (!recover(e, streamAllocation, requestSendStarted, request)) throw e; // cannot recover, rethrow and exit the loop
      releaseConnection = false;
      continue;
    } finally {
      // We're throwing an unchecked exception. Release any resources.
      // If an unexpected error slipped through (one not caught above), release the connection
      if (releaseConnection) {
        streamAllocation.streamFailed(null);
        streamAllocation.release();
      }
    }

    // Attach the prior response if it exists. Such responses never have a body.
    // A non-null priorResponse means there was already an earlier request, i.e. a retry or a redirect
    if (priorResponse != null) {
      // Mark it so that followUpRequest() later knows this request has a history,
      // i.e. it has already been attempted once
      response = response.newBuilder()
          .priorResponse(priorResponse.newBuilder()
              .body(null)
              .build())
          .build();
    }

    Request followUp;
    try {
      // Returns a non-null Request only if a retry or redirect is warranted; everything else
      // (e.g. a successful response, or conditions not met) yields null.
      // This is the key method that decides whether to retry or follow a redirect
      followUp = followUpRequest(response, streamAllocation.route());
    } catch (IOException e) {
      streamAllocation.release();
      throw e;
    }

    if (followUp == null) {
      // No follow-up request, so release resources and return the response
      streamAllocation.release();
      return response;
    }

    // Reaching here means a retry or redirect is needed; close the current response body
    closeQuietly(response.body());

    if (++followUpCount > MAX_FOLLOW_UPS) {
      // Too many follow-ups: MAX_FOLLOW_UPS is 20. This mainly guards against endless redirects;
      // a retry happens at most once anyway
      streamAllocation.release();
      throw new ProtocolException("Too many follow-up requests: " + followUpCount);
    }

    if (followUp.body() instanceof UnrepeatableRequestBody) {
      streamAllocation.release();
      throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
    }

    if (!sameConnection(response, followUp.url())) {
      // If the follow-up does not go to the same connection, allocate a new StreamAllocation
      streamAllocation.release();
      streamAllocation = new StreamAllocation(client.connectionPool(),
          createAddress(followUp.url()), call, eventListener, callStackTrace);
      this.streamAllocation = streamAllocation;
    } else if (streamAllocation.codec() != null) {
      throw new IllegalStateException("Closing the body of " + response
          + " didn't close its backing stream. Bad interceptor?");
    }

    request = followUp;        // the request for the next loop iteration (retry or redirect)
    priorResponse = response;  // used in the next iteration (mainly for the retry check)
  }
}
Now let's look at followUpRequest, the key method that decides whether to retry or redirect. It branches on the response code; here we pick out one retry case for analysis.
private Request followUpRequest(Response userResponse, Route route) throws IOException {
  if (userResponse == null) throw new IllegalStateException();
  int responseCode = userResponse.code();

  final String method = userResponse.request().method();
  switch (responseCode) {
    // ... handling for several other status codes omitted ...

    case HTTP_CLIENT_TIMEOUT:
      // 408's are rare in practice, but some servers like HAProxy use this response code. The
      // spec says that we may repeat the request without modifications. Modern browsers also
      // repeat the request (even non-idempotent ones.)
      if (!client.retryOnConnectionFailure()) {
        // The application layer has directed us not to retry the request.
        // retryOnConnectionFailure controls whether to retry on connection failure; if it is
        // false, return null. The flag can be set when building the OkHttpClient.
        return null;
      }

      if (userResponse.request().body() instanceof UnrepeatableRequestBody) {
        return null;
      }

      if (userResponse.priorResponse() != null
          && userResponse.priorResponse().code() == HTTP_CLIENT_TIMEOUT) {
        // There was already a previous attempt that also timed out, so one retry has
        // already failed; do not retry again
        return null;
      }

      if (retryAfter(userResponse, 0) > 0) {
        // The server asked for a retry delay, so do not retry
        return null;
      }

      // The retry conditions are met: return the Request so it is sent again
      return userResponse.request();

    // ... handling for several other status codes omitted ...

    default:
      // Return null by default; e.g. a normal 200 needs no follow-up
      return null;
  }
}
The other status codes follow a similar pattern: the next step is decided from the returned Response, so they are not covered here.
The overall flow is that the interceptor maintains an endless loop. On a normal response the loop body runs once and the result is returned. If a timeout or similar failure triggers a retry, the request is sent once more and, success or not, the loop then exits, so the body runs twice. For redirects, at most 20 follow-ups are allowed, i.e. the loop body can run up to 20 times before the loop ends.
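For reference, the behaviour this interceptor consults can be tuned when the OkHttpClient is built; a small example (the values are only illustrative):

// Configuration of the retry/redirect behaviour discussed above.
// retryOnConnectionFailure() feeds the recover()/followUpRequest() checks;
// followRedirects()/followSslRedirects() control whether 3xx responses are followed at all.
OkHttpClient client = new OkHttpClient.Builder()
    .retryOnConnectionFailure(false) // never retry silently on connection failures
    .followRedirects(true)           // follow HTTP redirects (the default)
    .followSslRedirects(true)        // follow redirects between HTTP and HTTPS (the default)
    .build();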
Bridge interceptor (BridgeInterceptor)
@Override public Response intercept(Chain chain) throws IOException {
  Request userRequest = chain.request();
  Request.Builder requestBuilder = userRequest.newBuilder();

  RequestBody body = userRequest.body();
  if (body != null) {
    // Add the various request headers
    MediaType contentType = body.contentType();
    if (contentType != null) {
      requestBuilder.header("Content-Type", contentType.toString());
    }

    long contentLength = body.contentLength();
    if (contentLength != -1) {
      // A known length (not -1) means the body is sent in one piece, so set Content-Length
      requestBuilder.header("Content-Length", Long.toString(contentLength));
      requestBuilder.removeHeader("Transfer-Encoding");
    } else {
      // A length of -1 means chunked transfer, so set the header accordingly
      requestBuilder.header("Transfer-Encoding", "chunked");
      requestBuilder.removeHeader("Content-Length");
    }
  }

  if (userRequest.header("Host") == null) {
    requestBuilder.header("Host", hostHeader(userRequest.url(), false));
  }

  // If no Connection header was set, default it to Keep-Alive
  if (userRequest.header("Connection") == null) {
    requestBuilder.header("Connection", "Keep-Alive");
  }

  // If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
  // the transfer stream.
  // Whether to use gzip for the transfer: HTTP supports gzip compression, which can greatly
  // reduce the size of the transferred content, speeding things up and saving bandwidth.
  boolean transparentGzip = false;
  if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
    transparentGzip = true;
    requestBuilder.header("Accept-Encoding", "gzip");
  }

  // Look up any cookies saved from earlier visits and attach them
  List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
  if (!cookies.isEmpty()) {
    requestBuilder.header("Cookie", cookieHeader(cookies));
  }

  if (userRequest.header("User-Agent") == null) {
    requestBuilder.header("User-Agent", Version.userAgent());
  }

  // Hand off to the next interceptor
  Response networkResponse = chain.proceed(requestBuilder.build());

  // Save the cookies returned by the server
  HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());

  Response.Builder responseBuilder = networkResponse.newBuilder()
      .request(userRequest);

  // If the server used gzip, decompress the body before returning it
  if (transparentGzip
      && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
      && HttpHeaders.hasBody(networkResponse)) {
    GzipSource responseBody = new GzipSource(networkResponse.body().source());
    Headers strippedHeaders = networkResponse.headers().newBuilder()
        .removeAll("Content-Encoding")
        .removeAll("Content-Length")
        .build();
    responseBuilder.headers(strippedHeaders);
    String contentType = networkResponse.header("Content-Type");
    responseBuilder.body(new RealResponseBody(contentType, -1L, Okio.buffer(responseBody)));
  }

  return responseBuilder.build();
}
This interceptor is mainly responsible for adding request headers, saving the necessary server state (such as cookies) when the response comes back, and, if the server used gzip, decompressing the body back to the original data before handing it up the chain.
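The cookieJar used above is just an interface, and OkHttp's default implementation stores nothing. A minimal in-memory sketch of my own could look like this; InMemoryCookieJar is a made-up name, and it only keeps cookies per host without expiry handling:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import okhttp3.Cookie;
import okhttp3.CookieJar;
import okhttp3.HttpUrl;

public class InMemoryCookieJar implements CookieJar {
  private final Map<String, List<Cookie>> store = new ConcurrentHashMap<>();

  // Called by BridgeInterceptor (via HttpHeaders.receiveHeaders) after the response arrives
  @Override public void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
    store.put(url.host(), cookies);
  }

  // Called by BridgeInterceptor before the request is sent, to build the Cookie header
  @Override public List<Cookie> loadForRequest(HttpUrl url) {
    List<Cookie> cookies = store.get(url.host());
    return cookies != null ? cookies : new ArrayList<Cookie>();
  }
}

// Wired up when building the client:
// OkHttpClient client = new OkHttpClient.Builder()
//     .cookieJar(new InMemoryCookieJar())
//     .build();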
Cache interceptor (CacheInterceptor)
OkHttp has no cache by default; it is only enabled if we configure one when building the OkHttpClient. Basic usage looks like this:
// Cache directory
File cacheFile = new File(getExternalCacheDir().toString(), "cache");
// Cache size: 10 MB
int cacheSize = 10 * 1024 * 1024;
// Create the cache object (which also shows that the cache goes straight to local storage, i.e. the SD card)
Cache cache = new Cache(cacheFile, cacheSize);

OkHttpClient client = new OkHttpClient.Builder()
    .cache(cache) // set the cache
    .build();
Only once a Cache is configured does the cache interceptor have anything to do. As usual, let's walk through its flow:
@Override public Response intercept(Chain chain) throws IOException {
  // If a cache is configured, look up a cached response for this request in local storage
  Response cacheCandidate = cache != null
      ? cache.get(chain.request())
      : null;

  long now = System.currentTimeMillis();

  // This get() decides whether the request is ultimately served from the cache or from the
  // network. The lookup above does not consider freshness (a year-old entry is obviously
  // useless); the factory performs that check, along with the other necessary ones.
  CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
  Request networkRequest = strategy.networkRequest;
  Response cacheResponse = strategy.cacheResponse;

  if (cache != null) {
    cache.trackResponse(strategy);
  }

  if (cacheCandidate != null && cacheResponse == null) {
    // There was a local cache entry, but it failed some condition, so it is discarded
    closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
  }

  // If we're forbidden from using the network and the cache is insufficient, fail.
  // No network request and no usable cache: return 504 directly
  if (networkRequest == null && cacheResponse == null) {
    return new Response.Builder()
        .request(chain.request())
        .protocol(Protocol.HTTP_1_1)
        .code(504)
        .message("Unsatisfiable Request (only-if-cached)")
        .body(Util.EMPTY_RESPONSE)
        .sentRequestAtMillis(-1L)
        .receivedResponseAtMillis(System.currentTimeMillis())
        .build();
  }

  // If we don't need the network, we're done.
  if (networkRequest == null) {
    // CacheStrategy.Factory.get() produced no network request, and getting past the check above
    // means the cached response is non-null, so return the cache directly
    return cacheResponse.newBuilder()
        .cacheResponse(stripBody(cacheResponse))
        .build();
  }

  Response networkResponse = null;
  try {
    // Hand off to the next interceptor to hit the network
    networkResponse = chain.proceed(networkRequest);
  } finally {
    // If we're crashing on I/O or otherwise, don't leak the cache body.
    if (networkResponse == null && cacheCandidate != null) {
      closeQuietly(cacheCandidate.body());
    }
  }

  // If we have a cache response too, then we're doing a conditional get.
  if (cacheResponse != null) {
    if (networkResponse.code() == HTTP_NOT_MODIFIED) {
      // The cache is non-null and the server says nothing changed: merge them, then close the
      // body of the new network response
      Response response = cacheResponse.newBuilder()
          .headers(combine(cacheResponse.headers(), networkResponse.headers()))
          .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
          .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
          .cacheResponse(stripBody(cacheResponse))
          .networkResponse(stripBody(networkResponse))
          .build();
      networkResponse.body().close();

      // Update the cache after combining headers but before stripping the
      // Content-Encoding header (as performed by initContentStream()).
      cache.trackConditionalCacheHit();
      // Also update the cache entry
      cache.update(cacheResponse, response);
      return response;
    } else {
      closeQuietly(cacheResponse.body());
    }
  }

  Response response = networkResponse.newBuilder()
      .cacheResponse(stripBody(cacheResponse))
      .networkResponse(stripBody(networkResponse))
      .build();

  if (cache != null) {
    if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
      // Offer this request to the cache.
      // A cache is configured, so write this network response to local storage and also return
      // it to the interceptor above
      CacheRequest cacheRequest = cache.put(response);
      return cacheWritingResponse(cacheRequest, response);
    }

    // Check the HTTP method: for methods such as POST or DELETE that invalidate the cache,
    // remove the corresponding entry
    if (HttpMethod.invalidatesCache(networkRequest.method())) {
      try {
        cache.remove(networkRequest);
      } catch (IOException ignored) {
        // The cache cannot be written.
      }
    }
  }

  return response;
}
Connect interceptor (ConnectInterceptor)
@Override public Response intercept(Chain chain) throws IOException {
  RealInterceptorChain realChain = (RealInterceptorChain) chain;
  Request request = realChain.request();
  // Get the StreamAllocation object, created earlier by RetryAndFollowUpInterceptor and passed along the chain
  StreamAllocation streamAllocation = realChain.streamAllocation();

  // We need the network to satisfy this request. Possibly for validating a conditional GET.
  // The request method determines whether extensive health checks are done when acquiring a connection
  boolean doExtensiveHealthChecks = !request.method().equals("GET");
  HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks); // the key call
  // Get the connection
  RealConnection connection = streamAllocation.connection();

  // Pass everything down the chain
  return realChain.proceed(request, streamAllocation, httpCodec, connection);
}
The connect interceptor looks like it has the least code of the five interceptors, but the machinery behind it is actually the largest. The key is streamAllocation.newStream(client, chain, doExtensiveHealthChecks): that is where the network connection is obtained, so let's step into it.
//StreamAllocation.java
public HttpCodec newStream(
    OkHttpClient client, Interceptor.Chain chain, boolean doExtensiveHealthChecks) {
  int connectTimeout = chain.connectTimeoutMillis();
  int readTimeout = chain.readTimeoutMillis();
  int writeTimeout = chain.writeTimeoutMillis();
  int pingIntervalMillis = client.pingIntervalMillis();
  boolean connectionRetryEnabled = client.retryOnConnectionFailure();

  try {
    // Find a healthy connection that satisfies the request
    RealConnection resultConnection = findHealthyConnection(connectTimeout, readTimeout,
        writeTimeout, pingIntervalMillis, connectionRetryEnabled, doExtensiveHealthChecks); // the key call
    // Also create the HTTP-level codec (as I understand it, this class just exposes HTTP-level
    // reads and writes over the connection, refining the raw operations)
    HttpCodec resultCodec = resultConnection.newCodec(client, chain, this);

    synchronized (connectionPool) {
      codec = resultCodec;
      return resultCodec;
    }
  } catch (IOException e) {
    throw new RouteException(e);
  }
}
//StreamAllocation.java
private RealConnection findHealthyConnection(int connectTimeout, int readTimeout,
    int writeTimeout, int pingIntervalMillis, boolean connectionRetryEnabled,
    boolean doExtensiveHealthChecks) throws IOException {
  // Another endless loop
  while (true) {
    // Find a candidate connection
    RealConnection candidate = findConnection(connectTimeout, readTimeout, writeTimeout,
        pingIntervalMillis, connectionRetryEnabled); // the key call

    // If this is a brand new connection, we can skip the extensive health checks.
    synchronized (connectionPool) {
      if (candidate.successCount == 0) {
        // Never used before, so it is brand new: return it directly
        return candidate;
      }
    }

    // Do a (potentially slow) check to confirm that the pooled connection is still good. If it
    // isn't, take it out of the pool and start again.
    // Otherwise run the health check on the connection
    if (!candidate.isHealthy(doExtensiveHealthChecks)) {
      noNewStreams();
      // Unhealthy, so keep looping until a usable, healthy connection is found
      continue;
    }

    return candidate;
  }
}
findHealthyConnection shows the outer flow of acquiring a connection quite plainly: loop forever, health-check each candidate, and keep trying until a usable connection is found.
Now let's step into findConnection to see how a connection is actually obtained.
private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
    int pingIntervalMillis, boolean connectionRetryEnabled) throws IOException {
  boolean foundPooledConnection = false;
  RealConnection result = null;
  Route selectedRoute = null;
  Connection releasedConnection;
  Socket toClose;
  synchronized (connectionPool) {
    if (released) throw new IllegalStateException("released");
    if (codec != null) throw new IllegalStateException("codec != null");
    if (canceled) throw new IOException("Canceled");

    // Attempt to use an already-allocated connection. We need to be careful here because our
    // already-allocated connection may have been restricted from creating new streams.
    releasedConnection = this.connection;
    toClose = releaseIfNoNewStreams();
    if (this.connection != null) {
      // A connection was already allocated earlier, so use it
      result = this.connection;
      releasedConnection = null;
    }
    if (!reportedAcquired) {
      // If the connection was never reported acquired, don't report it as released!
      releasedConnection = null;
    }

    if (result == null) {
      // No existing connection, so try the connection pool
      Internal.instance.get(connectionPool, address, this, null); // looks up the pool and assigns to this.connection
      if (connection != null) {
        // A reusable connection was found in the pool
        foundPooledConnection = true;
        result = connection;
      } else {
        selectedRoute = route;
      }
    }
  }
  closeQuietly(toClose);

  if (releasedConnection != null) {
    eventListener.connectionReleased(call, releasedConnection);
  }
  if (foundPooledConnection) {
    eventListener.connectionAcquired(call, result);
  }
  if (result != null) {
    // If we found an already-allocated or pooled connection, we're done.
    return result;
  }

  // If we need a route selection, make one. This is a blocking operation.
  // If no route has been selected yet, select one and then try the pool again
  boolean newRouteSelection = false;
  if (selectedRoute == null && (routeSelection == null || !routeSelection.hasNext())) {
    newRouteSelection = true;
    routeSelection = routeSelector.next();
  }

  synchronized (connectionPool) {
    if (canceled) throw new IOException("Canceled");

    if (newRouteSelection) {
      // Now that we have a set of IP addresses, make another attempt at getting a connection from
      // the pool. This could match due to connection coalescing.
      List<Route> routes = routeSelection.getAll();
      for (int i = 0, size = routes.size(); i < size; i++) {
        Route route = routes.get(i);
        // Look in the pool again, this time matching on the route and host information
        Internal.instance.get(connectionPool, address, this, route);
        if (connection != null) {
          foundPooledConnection = true;
          result = connection;
          this.route = route;
          break;
        }
      }
    }

    if (!foundPooledConnection) {
      if (selectedRoute == null) {
        selectedRoute = routeSelection.next();
      }

      // No existing connection and nothing reusable in the pool
      route = selectedRoute;
      refusedStreamCount = 0;
      // So create a brand new connection
      result = new RealConnection(connectionPool, selectedRoute);
      acquire(result, false);
    }
  }

  // If we found a pooled connection on the 2nd time around, we're done.
  if (foundPooledConnection) {
    eventListener.connectionAcquired(call, result);
    return result;
  }

  // Do the handshake and establish the connection; this is a blocking operation
  result.connect(connectTimeout, readTimeout, writeTimeout, pingIntervalMillis,
      connectionRetryEnabled, call, eventListener); // the key call
  routeDatabase().connected(result.route());

  Socket socket = null;
  synchronized (connectionPool) {
    reportedAcquired = true;

    // The connection is established, so put it in the pool for reuse
    Internal.instance.put(connectionPool, result);

    // If this RealConnection is multiplexed (HTTP/2-related; this part can be ignored for now),
    // deduplicate and close the redundant socket
    if (result.isMultiplexed()) {
      socket = Internal.instance.deduplicate(connectionPool, address, this);
      result = connection;
    }
  }
  closeQuietly(socket);

  eventListener.connectionAcquired(call, result);
  return result;
}
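As an aside, the connectionPool that findConnection keeps querying can itself be configured when the client is built; a small sketch (the numbers are only illustrative, and as far as I recall 5 idle connections with a 5-minute keep-alive are also the defaults):

import java.util.concurrent.TimeUnit;

// Parameters: maxIdleConnections and keepAliveDuration
ConnectionPool pool = new ConnectionPool(5, 5, TimeUnit.MINUTES);

OkHttpClient client = new OkHttpClient.Builder()
    .connectionPool(pool)
    .build();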
So the lookup order is: first check whether this call already holds a connection (e.g. from a retry or redirect), then check the pool for a reusable one, and only then create a new connection. Next, let's look at RealConnection's connect method.
public void connect(int connectTimeout, int readTimeout, int writeTimeout,
    int pingIntervalMillis, boolean connectionRetryEnabled, Call call,
    EventListener eventListener) {
  // ... some code omitted ...
  while (true) {
    try {
      if (route.requiresTunnel()) {
        // An HTTPS request tunneled through an HTTP proxy
        connectTunnel(connectTimeout, readTimeout, writeTimeout, call, eventListener);
        if (rawSocket == null) {
          // We were unable to connect the tunnel but properly closed down our resources.
          break;
        }
      } else {
        // Establish the socket connection; the key call
        connectSocket(connectTimeout, readTimeout, call, eventListener);
      }
      establishProtocol(connectionSpecSelector, pingIntervalMillis, call, eventListener);
      eventListener.connectEnd(call, route.socketAddress(), route.proxy(), protocol);
      break;
    } catch (IOException e) {
      // ... some code omitted ...
    }
  }
  // ... some code omitted ...
}
private void connectSocket(int connectTimeout, int readTimeout, Call call,
    EventListener eventListener) throws IOException {
  Proxy proxy = route.proxy();
  Address address = route.address();

  rawSocket = proxy.type() == Proxy.Type.DIRECT || proxy.type() == Proxy.Type.HTTP
      ? address.socketFactory().createSocket()
      : new Socket(proxy);

  eventListener.connectStart(call, route.socketAddress(), proxy);
  rawSocket.setSoTimeout(readTimeout);
  try {
    // Establish the socket connection
    Platform.get().connectSocket(rawSocket, route.socketAddress(), connectTimeout);
  } catch (ConnectException e) {
    ConnectException ce = new ConnectException("Failed to connect to " + route.socketAddress());
    ce.initCause(e);
    throw ce;
  }

  // The following try/catch block is a pseudo hacky way to get around a crash on Android 7.0
  // More details:
  // https://github.com/square/okhttp/issues/3245
  // https://android-review.googlesource.com/#/c/271775/
  try {
    // Wrap the socket's input and output streams with Okio
    source = Okio.buffer(Okio.source(rawSocket));
    sink = Okio.buffer(Okio.sink(rawSocket));
  } catch (NullPointerException npe) {
    if (NPE_THROW_WITH_NULL.equals(npe.getMessage())) {
      throw new IOException(npe);
    }
  }
}
That completes the process of producing a connection. As the code shows, OkHttp3 works directly with Socket plus Okio streams. HTTP is built on TCP, and TCP is exposed through sockets, so OkHttp3 effectively reimplements the HTTP layer itself. In my view it does this for two reasons (see the sketch after this list):
1. Dissatisfaction with the standard Java IO streams, which it replaces with Okio.
2. Direct access to the underlying socket, which makes connection pooling and reuse possible.
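To make that concrete, here is a minimal sketch of my own (not OkHttp source) that wraps a raw Socket with Okio in the same way connectSocket does above; the host, port, and hand-written request line are placeholders:

import java.net.InetSocketAddress;
import java.net.Socket;

import okio.BufferedSink;
import okio.BufferedSource;
import okio.Okio;

public class OkioSocketSketch {
  public static void main(String[] args) throws Exception {
    Socket socket = new Socket();
    socket.connect(new InetSocketAddress("example.com", 80), 10_000);

    // Buffered Okio streams over the raw socket, as in RealConnection.connectSocket()
    BufferedSink sink = Okio.buffer(Okio.sink(socket));
    BufferedSource source = Okio.buffer(Okio.source(socket));

    // Hand-write a tiny HTTP/1.1 request to show that the streams are just bytes on the wire
    sink.writeUtf8("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
    sink.flush();

    // Read the status line of the response
    System.out.println(source.readUtf8LineStrict());

    socket.close();
  }
}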
Call-server interceptor (CallServerInterceptor)
@Override public Response intercept(Chain chain) throws IOException {
  RealInterceptorChain realChain = (RealInterceptorChain) chain;
  HttpCodec httpCodec = realChain.httpStream();
  StreamAllocation streamAllocation = realChain.streamAllocation();
  RealConnection connection = (RealConnection) realChain.connection();
  Request request = realChain.request();

  long sentRequestMillis = System.currentTimeMillis();

  realChain.eventListener().requestHeadersStart(realChain.call());
  // Write the request headers; internally they go into the RealBufferedSink's buffer
  // (nothing has been flushed to the server's output stream yet)
  httpCodec.writeRequestHeaders(request);
  realChain.eventListener().requestHeadersEnd(realChain.call(), request);

  Response.Builder responseBuilder = null;
  // "Expect: 100-continue" lets the client ask the server, before sending a POST body, whether it
  // will accept the body. If the server declines, the client skips the upload; if it accepts,
  // the body is sent. In practice this is mainly used when POSTing large bodies.
  if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
    if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
      // The request really does carry 100-continue, so flush the buffered output to the server
      httpCodec.flushRequest();
      realChain.eventListener().responseHeadersStart(realChain.call());
      // Build a Response.Builder from what the server sent back;
      // readResponseHeaders(true) returns null for a 100 response and non-null otherwise
      responseBuilder = httpCodec.readResponseHeaders(true);
    }

    if (responseBuilder == null) {
      // Either no Expect header was set, or the server answered 100: write the request body
      realChain.eventListener().requestBodyStart(realChain.call());
      long contentLength = request.body().contentLength();
      CountingSink requestBodyOut =
          new CountingSink(httpCodec.createRequestBody(request, contentLength));
      BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);

      // Write the request body into the buffer
      request.body().writeTo(bufferedRequestBody);
      bufferedRequestBody.close();
      realChain.eventListener()
          .requestBodyEnd(realChain.call(), requestBodyOut.successfulCount);
    } else if (!connection.isMultiplexed()) {
      // If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
      // from being reused. Otherwise we're still obligated to transmit the request body to
      // leave the connection in a consistent state.
      streamAllocation.noNewStreams();
    }
  }

  // Flush everything to the server's output stream; at this point the request has been sent
  httpCodec.finishRequest(); // same effect as httpCodec.flushRequest()

  if (responseBuilder == null) {
    // As analysed above, responseBuilder can still be null here (e.g. the 100 case), so read
    // the headers again to build the responseBuilder used for the real Response
    realChain.eventListener().responseHeadersStart(realChain.call());
    // Read the response headers from the server and build the responseBuilder
    responseBuilder = httpCodec.readResponseHeaders(false);
  }

  // Build the actual Response
  Response response = responseBuilder
      .request(request)
      .handshake(streamAllocation.connection().handshake())
      .sentRequestAtMillis(sentRequestMillis)
      .receivedResponseAtMillis(System.currentTimeMillis())
      .build();

  int code = response.code();
  if (code == 100) {
    // The server still returned 100, so read once more to build a new Response.Builder;
    // this one should finally yield a Response carrying the real payload
    responseBuilder = httpCodec.readResponseHeaders(false);

    response = responseBuilder
        .request(request)
        .handshake(streamAllocation.connection().handshake())
        .sentRequestAtMillis(sentRequestMillis)
        .receivedResponseAtMillis(System.currentTimeMillis())
        .build();

    code = response.code();
  }

  realChain.eventListener()
      .responseHeadersEnd(realChain.call(), response);

  if (forWebSocket && code == 101) {
    // Connection is upgrading, but we need to ensure interceptors see a non-null response body.
    response = response.newBuilder()
        .body(Util.EMPTY_RESPONSE)
        .build();
  } else {
    // Build the Response carrying the actual payload.
    // After httpCodec.finishRequest() the server has already processed the request and written
    // its reply into the stream, but the client has not yet read anything from that input stream
    response = response.newBuilder()
        .body(httpCodec.openResponseBody(response)) // wrap an input stream so the body can be read later
        .build();
  }

  // If the server or the client does not want a keep-alive connection, mark noNewStreams here
  // so the connection is torn down once the response has been consumed
  if ("close".equalsIgnoreCase(response.request().header("Connection"))
      || "close".equalsIgnoreCase(response.header("Connection"))) {
    streamAllocation.noNewStreams();
  }

  if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
    throw new ProtocolException(
        "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
  }

  // Return the Response to the interceptor above
  return response;
}
Notice that even in this last interceptor we still have not read any data from the input stream (even though the server has already written its reply into it); we have merely opened the entrance to that stream. So where does the actual reading happen?
Take an asynchronous request I made earlier as an example:
NetWorkUtil.postNetWork(jsonObject, Constants.getBaseUrl(), new Callback() {
  @Override
  public void onFailure(Call call, IOException e) {
    ...
  }

  @Override
  public void onResponse(Call call, Response response) throws IOException {
    String stringGson = response.body().string();
    ...
  }
});
Let's jump into that response.body().string() call:
//ResponseBody.java
public final String string() throws IOException {
  BufferedSource source = source();
  try {
    // Decide which charset to use when decoding the data
    Charset charset = Util.bomAwareCharset(source, charset());
    return source.readString(charset); // read from the input stream and decode it into a String
  } finally {
    // Close the socket input stream and clear the buffer
    Util.closeQuietly(source);
  }
}
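One practical note: since string() reads the stream to the end and then closes it, the body can only be consumed once, and a very large payload ends up entirely in memory. A small sketch of my own for streaming the body instead (the URL is a placeholder):

import java.io.IOException;
import java.io.InputStream;

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.ResponseBody;

public class StreamBodySketch {
  public static void main(String[] args) throws IOException {
    OkHttpClient client = new OkHttpClient();
    Request request = new Request.Builder().url("https://example.com/large-file").build();

    Response response = client.newCall(request).execute();
    try (ResponseBody body = response.body();
         InputStream in = body.byteStream()) {
      byte[] buffer = new byte[8 * 1024];
      int read;
      long total = 0;
      while ((read = in.read(buffer)) != -1) {
        total += read; // process each chunk here instead of holding the whole body in memory
      }
      System.out.println("Read " + total + " bytes");
    }
  }
}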
Summary
Working through this gave me a basic understanding of OkHttp3's call flow, and much of its internals turned out quite different from what I had imagined. There is still plenty left to explore: this article only sketches the flow of each interceptor, and many details deserve deeper digging, such as how Okio works internally and how the connection pool actually operates.