This article is my original work. Please credit the author and source when reposting.
In the previous chapter we looked at how OkHttp's dispatcher handles synchronous and asynchronous requests. In this chapter we walk through OkHttp's most important module, the interceptor chain, and see how OkHttp actually processes a request.
0. The Overall Flow of the Interceptor Chain
Last chapter we got as far as RealCall's getResponseWithInterceptorChain method. Let's pick up from there and look at it:
Response getResponseWithInterceptorChain() throws IOException {
  // Build a full stack of interceptors.
  List<Interceptor> interceptors = new ArrayList<>();
  interceptors.addAll(client.interceptors());
  interceptors.add(retryAndFollowUpInterceptor);
  interceptors.add(new BridgeInterceptor(client.cookieJar()));
  interceptors.add(new CacheInterceptor(client.internalCache()));
  interceptors.add(new ConnectInterceptor(client));
  if (!forWebSocket) {
    interceptors.addAll(client.networkInterceptors());
  }
  interceptors.add(new CallServerInterceptor(forWebSocket));

  Interceptor.Chain chain = new RealInterceptorChain(
      interceptors, null, null, null, 0, originalRequest);
  return chain.proceed(originalRequest);
}
The first part of this method is straightforward: it creates a list to hold interceptors. Interceptor is an interface, and every element added afterwards is an implementation of it. First it adds the interceptors held by the OkHttpClient object; these are the application interceptors defined by the user, and all of them go in here. It then adds, in order, the five built-in interceptors: retryAndFollowUpInterceptor, BridgeInterceptor, CacheInterceptor, ConnectInterceptor and CallServerInterceptor. In the second-to-last position, provided this is not a WebSocket call, the user's network interceptors (networkInterceptors) are added as well. Pay attention to the order of insertion: as we will see, the order in which interceptors run matches the order in which they were added.
Once the list is built, the proceed method of RealInterceptorChain, the interceptor chain, is called with the original request. As the name suggests, the chain strings the interceptors together: each one in turn intercepts, or rather processes, the request; once a response is finally obtained it travels back up the chain, and each interceptor gets a chance to process the response before it reaches the user. Now let's see how proceed links the interceptors together:
// Call the next interceptor in the chain.
RealInterceptorChain next = new RealInterceptorChain(
    interceptors, streamAllocation, httpCodec, connection, index + 1, request);
Interceptor interceptor = interceptors.get(index);
Response response = interceptor.intercept(next);
The full method is longer than this; I've stripped the various exception checks and kept only the core snippet. When reading source code, always keep hold of the main thread of execution; nobody understands or memorizes every detail in one pass. Grab the key path first, or you will only ever see trees and never the forest.
Here it creates another RealInterceptorChain object, identical except that index is incremented by one. This index records the current position in the interceptors list. It then takes the current interceptor out of the list, calls its intercept method with the freshly created chain, and returns the resulting response.
This may look puzzling at first: intuition says a single RealInterceptorChain object should manage all the interceptors, yet each interception step seems to create a new RealInterceptorChain, only with the index advanced by one. The key is the intercept method. It is declared on the Interceptor interface, and every implementation apart from the final CallServerInterceptor, including any custom interceptor we write ourselves, must call chain.proceed(request) for the chain to keep going; you will see that call in every snippet quoted below. Does proceed look familiar? It should: it is the same method we called on the very first chain object inside getResponseWithInterceptorChain. Here it is called again, only now on the newly created chain whose index is one higher, so that call in turn creates yet another chain object, looks up the next interceptor by index, and runs its intercept method. The recursion continues down the list until the last interceptor produces a response, which is then passed back up.
The interceptor chain is a clever construction, a bit like a stack: each interceptor's call is pushed in turn, the response is produced at the bottom, and the calls then pop back out one by one. If I had written it myself I might have used a simple for loop to invoke each interceptor, but then I would need a second, reversed loop so that each interceptor could post-process the response. The recursive, stack-like structure clearly fits this scenario better.
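To make the contract concrete, here is a minimal sketch (not OkHttp source, just an illustration) of a custom interceptor being registered both as an application interceptor and as a network interceptor; the class name and timing logic are made up, but note that intercept must call chain.proceed exactly once for the chain to continue:
import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class InterceptorDemo {
  // A custom interceptor: it sees the request on the way down and the response on the way up.
  static class TimingInterceptor implements Interceptor {
    @Override public Response intercept(Chain chain) throws IOException {
      Request request = chain.request();
      long start = System.nanoTime();
      // Hand off to the next interceptor; without this call the chain stops here.
      Response response = chain.proceed(request);
      System.out.println(request.url() + " took "
          + (System.nanoTime() - start) / 1_000_000 + " ms");
      return response;
    }
  }

  public static void main(String[] args) {
    OkHttpClient client = new OkHttpClient.Builder()
        .addInterceptor(new TimingInterceptor())        // application interceptor: runs before retryAndFollowUpInterceptor
        .addNetworkInterceptor(new TimingInterceptor()) // network interceptor: runs just before CallServerInterceptor
        .build();
  }
}
Because of where they sit in the list built above, the application interceptor measures the whole call including retries and cache handling, while the network interceptor measures only the single wire round-trip.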
1. RetryAndFollowUpInterceptor
Now for the first interceptor, RetryAndFollowUpInterceptor. As the name suggests, it is responsible for retrying failed connections and for following redirects.
Straight to its intercept method:
@Override public Response intercept(Chain chain) throws IOException {
  Request request = chain.request();

  streamAllocation = new StreamAllocation(
      client.connectionPool(), createAddress(request.url()), callStackTrace);

  int followUpCount = 0;
  Response priorResponse = null;
  while (true) {
    if (canceled) {
      streamAllocation.release();
      throw new IOException("Canceled");
    }

    Response response = null;
    boolean releaseConnection = true;
    try {
      response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null);
      releaseConnection = false;
    } catch (RouteException e) {
      // The attempt to connect via a route failed. The request will not have been sent.
      if (!recover(e.getLastConnectException(), false, request)) {
        throw e.getLastConnectException();
      }
      releaseConnection = false;
      continue;
    } catch (IOException e) {
      // An attempt to communicate with a server failed. The request may have been sent.
      boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
      if (!recover(e, requestSendStarted, request)) throw e;
      releaseConnection = false;
      continue;
    } finally {
      // We're throwing an unchecked exception. Release any resources.
      if (releaseConnection) {
        streamAllocation.streamFailed(null);
        streamAllocation.release();
      }
    }

    // Attach the prior response if it exists. Such responses never have a body.
    if (priorResponse != null) {
      response = response.newBuilder()
          .priorResponse(priorResponse.newBuilder()
              .body(null)
              .build())
          .build();
    }

    Request followUp = followUpRequest(response);

    if (followUp == null) {
      if (!forWebSocket) {
        streamAllocation.release();
      }
      return response;
    }

    closeQuietly(response.body());

    if (++followUpCount > MAX_FOLLOW_UPS) {
      streamAllocation.release();
      throw new ProtocolException("Too many follow-up requests: " + followUpCount);
    }

    if (followUp.body() instanceof UnrepeatableRequestBody) {
      streamAllocation.release();
      throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
    }

    if (!sameConnection(response, followUp.url())) {
      streamAllocation.release();
      streamAllocation = new StreamAllocation(
          client.connectionPool(), createAddress(followUp.url()), callStackTrace);
    } else if (streamAllocation.codec() != null) {
      throw new IllegalStateException("Closing the body of " + response
          + " didn't close its backing stream. Bad interceptor?");
    }

    request = followUp;
    priorResponse = response;
  }
}
This is a long stretch of code that is hard to trim without losing the flow, so I've quoted it all. Don't be intimidated, it is fairly easy to follow. First it creates a StreamAllocation object. This object wraps the key resources an HTTP connection needs, such as RealConnection, RouteSelector and HttpCodec; dig into it yourself if you are interested. We won't expand on it here; just remember that it is an essential resource holder for an HTTP connection.
Next it defines followUpCount, the number of follow-ups so far, and a null priorResponse, and then enters a while (true) loop. You can guess without reading further that this loop keeps retrying and following up. The core line is response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null): a retry is simply another call into the next interceptor down the chain. The difference from the very first call made in RealCall is that streamAllocation is now passed to the next chain link, where previously null was passed.
So how does this endless loop ever end? Apart from the return once a response has been obtained and no follow-up is needed, there seems to be no other exit. Does it really keep re-requesting until it gets a result? Obviously not: the loop is also terminated by throwing exceptions. It throws when:
- the current call has been canceled;
- the number of follow-ups exceeds MAX_FOLLOW_UPS (20);
- the follow-up request has a body that cannot be replayed (UnrepeatableRequestBody);
- the codec inside streamAllocation was not released properly.
Except for the last case, streamAllocation.release() is called to free resources before throwing. The other important piece here is the call to followUpRequest, which decides whether a follow-up is needed. The method itself is quite long; in short, it reads the response code and, depending on it, builds a different follow-up request. If no follow-up is needed it returns null; for redirect codes such as 301 and 302 it builds a new request against the redirected URL, which the loop above then sends again. A rough, hand-written sketch of the redirect branch is shown below.
That is essentially all there is to RetryAndFollowUpInterceptor: it retries failed connections and rebuilds the request for redirects. Not too bad, right?
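This is only an illustrative approximation of the redirect handling, not the real followUpRequest (which also deals with 401/407 authentication, 408 retries, proxy selection and more); the class and method names are made up:
import okhttp3.HttpUrl;
import okhttp3.Request;
import okhttp3.Response;

class RedirectSketch {
  // Hypothetical helper: build a follow-up request for a 3xx response, or return null.
  static Request buildRedirect(Response response) {
    int code = response.code();
    if (code != 301 && code != 302 && code != 303 && code != 307 && code != 308) {
      return null; // not a redirect handled by this sketch
    }
    String location = response.header("Location");
    if (location == null) return null;
    // Resolve the Location header against the original URL (it may be relative).
    HttpUrl url = response.request().url().resolve(location);
    if (url == null) return null;
    return response.request().newBuilder()
        .url(url)
        .build();
  }
}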
2. BridgeInterceptor
BridgeInterceptor is the simplest of the five. As its name says, it is the bridge between the raw network representation and the Request/Response objects our program works with.
@Override public Response intercept(Chain chain) throws IOException {
  Request userRequest = chain.request();
  Request.Builder requestBuilder = userRequest.newBuilder();

  RequestBody body = userRequest.body();
  if (body != null) {
    MediaType contentType = body.contentType();
    if (contentType != null) {
      requestBuilder.header("Content-Type", contentType.toString());
    }

    long contentLength = body.contentLength();
    if (contentLength != -1) {
      requestBuilder.header("Content-Length", Long.toString(contentLength));
      requestBuilder.removeHeader("Transfer-Encoding");
    } else {
      requestBuilder.header("Transfer-Encoding", "chunked");
      requestBuilder.removeHeader("Content-Length");
    }
  }

  if (userRequest.header("Host") == null) {
    requestBuilder.header("Host", hostHeader(userRequest.url(), false));
  }

  if (userRequest.header("Connection") == null) {
    requestBuilder.header("Connection", "Keep-Alive");
  }

  // If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
  // the transfer stream.
  boolean transparentGzip = false;
  if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
    transparentGzip = true;
    requestBuilder.header("Accept-Encoding", "gzip");
  }

  List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
  if (!cookies.isEmpty()) {
    requestBuilder.header("Cookie", cookieHeader(cookies));
  }

  if (userRequest.header("User-Agent") == null) {
    requestBuilder.header("User-Agent", Version.userAgent());
  }

  Response networkResponse = chain.proceed(requestBuilder.build());

  HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());

  Response.Builder responseBuilder = networkResponse.newBuilder()
      .request(userRequest);

  if (transparentGzip
      && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
      && HttpHeaders.hasBody(networkResponse)) {
    GzipSource responseBody = new GzipSource(networkResponse.body().source());
    Headers strippedHeaders = networkResponse.headers().newBuilder()
        .removeAll("Content-Encoding")
        .removeAll("Content-Length")
        .build();
    responseBuilder.headers(strippedHeaders);
    responseBuilder.body(new RealResponseBody(strippedHeaders, Okio.buffer(responseBody)));
  }

  return responseBuilder.build();
}
You can see that it adds a number of headers to the request we passed in, which is why the packets you capture contain plenty of headers even if you set none yourself. Note "Connection: Keep-Alive" in particular: it tells the server not to close the connection once the request completes. HTTP requests are often thought of as one-shot, short-lived connections, so why does OkHttp add this header by default? To reuse connections: when another request is made to the same host, the cost of opening a new connection is saved.
When the response comes back, if the transfer was gzip-compressed the interceptor transparently decompresses the response stream, so we never need to do that ourselves.
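As a quick illustration (the URL is made up), a completely bare request still goes out with Host, Connection, Accept-Encoding and User-Agent filled in by BridgeInterceptor, and a gzip-encoded body is already decompressed by the time body().string() is called. A hedged sketch:
import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class BridgeDemo {
  public static void main(String[] args) throws IOException {
    OkHttpClient client = new OkHttpClient();

    // No headers are set explicitly here; BridgeInterceptor supplies them on the way out.
    Request request = new Request.Builder()
        .url("https://example.com/")  // hypothetical URL
        .build();

    try (Response response = client.newCall(request).execute()) {
      // If the server answered with gzip, BridgeInterceptor has already gunzipped the stream
      // and stripped the Content-Encoding/Content-Length headers it consumed.
      System.out.println(response.body().string());
    }
  }
}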
3. CacheInterceptor
CacheInterceptor, as the name implies, handles everything cache-related. Since the next chapter is devoted to OkHttp's caching mechanism, I will skip some details here. Its intercept method is very long and every part deserves attention, so I'll go through it piece by piece.
Response cacheCandidate = cache != null
    ? cache.get(chain.request())
    : null;

long now = System.currentTimeMillis();

CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
Request networkRequest = strategy.networkRequest;
Response cacheResponse = strategy.cacheResponse;
First it asks the cache object for a cached response matching the current request, named cacheCandidate, a candidate that may or may not end up being used. It then builds a CacheStrategy from the current time, the request and the candidate; this class implements the cache policy. We will explore CacheStrategy in the next chapter; for now all we need to know is that, given the request and cacheCandidate, it yields networkRequest (the request that will actually go to the network) and cacheResponse (the cached response to use). If networkRequest is null (for example when the request carries an only-if-cached cache directive), no network request will be made. If cacheResponse is null there is no usable cache: nothing was stored, or it has expired, or the cache policy forbids using it. The sketch below shows how a caller can force the cache-only mode.
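For illustration, here is how a caller can force the networkRequest == null branch using the real CacheControl API; when nothing usable is cached, the synthetic 504 described further below is what comes back (the URL is made up):
import java.io.IOException;
import okhttp3.CacheControl;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class OnlyIfCachedDemo {
  public static void main(String[] args) throws IOException {
    OkHttpClient client = new OkHttpClient(); // no Cache configured, so nothing can be served locally

    Request request = new Request.Builder()
        .url("https://example.com/data")        // hypothetical URL
        .cacheControl(CacheControl.FORCE_CACHE) // sends "only-if-cached", so networkRequest == null
        .build();

    try (Response response = client.newCall(request).execute()) {
      // With no usable cached response either, CacheInterceptor answers with a synthetic 504.
      System.out.println(response.code()); // 504 "Unsatisfiable Request (only-if-cached)"
    }
  }
}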
if (cache != null) {
  cache.trackResponse(strategy);
}

if (cacheCandidate != null && cacheResponse == null) {
  closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
}
Cache.trackResponse just keeps statistics, nothing crucial. The check that follows: if cacheCandidate is non-null we did find something in the cache, but if cacheResponse is null that candidate has expired or is disallowed by the cache policy, so it is of no further use and we must close its body stream.
// If we're forbidden from using the network and the cache is insufficient, fail.
if (networkRequest == null && cacheResponse == null) {
  return new Response.Builder()
      .request(chain.request())
      .protocol(Protocol.HTTP_1_1)
      .code(504)
      .message("Unsatisfiable Request (only-if-cached)")
      .body(Util.EMPTY_RESPONSE)
      .sentRequestAtMillis(-1L)
      .receivedResponseAtMillis(System.currentTimeMillis())
      .build();
}

// If we don't need the network, we're done.
if (networkRequest == null) {
  return cacheResponse.newBuilder()
      .cacheResponse(stripBody(cacheResponse))
      .build();
}
As explained above, networkRequest == null means no network request will be made, and there are two cases. If cacheResponse is also null, we have no valid cached response and will not hit the network either, so a synthetic response with code 504 is returned to the caller. If cacheResponse is not null, we have a usable cached response and no network round-trip is needed, so the cached response is returned directly.
Response networkResponse = null;
try {
  networkResponse = chain.proceed(networkRequest);
} finally {
  // If we're crashing on I/O or otherwise, don't leak the cache body.
  if (networkResponse == null && cacheCandidate != null) {
    closeQuietly(cacheCandidate.body());
  }
}

// If we have a cache response too, then we're doing a conditional get.
if (cacheResponse != null) {
  if (networkResponse.code() == HTTP_NOT_MODIFIED) {
    Response response = cacheResponse.newBuilder()
        .headers(combine(cacheResponse.headers(), networkResponse.headers()))
        .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
        .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
        .cacheResponse(stripBody(cacheResponse))
        .networkResponse(stripBody(networkResponse))
        .build();
    networkResponse.body().close();

    // Update the cache after combining headers but before stripping the
    // Content-Encoding header (as performed by initContentStream()).
    cache.trackConditionalCacheHit();
    cache.update(cacheResponse, response);
    return response;
  } else {
    closeQuietly(cacheResponse.body());
  }
}
We have now covered every case where networkRequest is null, i.e. no network request at all. From here on a network request must be made, so the chain's proceed method is called and the interceptors further down fetch the response from the network. If cacheResponse is not null we now hold two responses, the cached one and the one the network returned, and we must decide which to use. If the server answered with HTTP_NOT_MODIFIED, the familiar 304, the resource has not changed and the client should use its local copy; the server sends no body in that case. So a new response is built from the cached cacheResponse and returned, the cache records this conditional hit for its statistics (OkHttp's cache is backed by a least-recently-used disk cache), and the freshly built response is written back to refresh the cache entry. If the code is not 304, the cached copy is stale and its body stream is closed. A hand-built illustration of the kind of conditional request that produces such a 304 follows.
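This is illustration only: CacheStrategy normally builds the conditional request itself from the cached response's ETag or Last-Modified validator, but conceptually it amounts to something like this:
import okhttp3.Request;

class ConditionalGetSketch {
  // Hand-built conditional GET: the server replies 304 (no body) if the entity still matches.
  static Request conditionalGet(Request original, String cachedEtag) {
    return original.newBuilder()
        .header("If-None-Match", cachedEtag)
        .build();
  }
}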
Response response = networkResponse.newBuilder()
    .cacheResponse(stripBody(cacheResponse))
    .networkResponse(stripBody(networkResponse))
    .build();

if (cache != null) {
  if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
    // Offer this request to the cache.
    CacheRequest cacheRequest = cache.put(response);
    return cacheWritingResponse(cacheRequest, response);
  }

  if (HttpMethod.invalidatesCache(networkRequest.method())) {
    try {
      cache.remove(networkRequest);
    } catch (IOException ignored) {
      // The cache cannot be written.
    }
  }
}

return response;
The remaining case is that the cached copy is unusable and we must use the response that came from the network: a response is built from networkResponse and returned. One more thing happens here: the cache is updated by writing the final response into the cache object. But if the request method does not support caching (of the common methods, GET responses are cacheable while POST responses are not), not only do we skip writing, we also remove any existing cache entry it invalidates. None of this happens, of course, unless a cache has been configured on the client; a minimal setup sketch follows.
That wraps up the cache interceptor; the next chapter digs further into OkHttp's cache policy implementation.
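A minimal sketch of wiring up the cache that CacheInterceptor reads from and writes to (the directory and size are arbitrary choices for illustration):
import java.io.File;
import okhttp3.Cache;
import okhttp3.OkHttpClient;

public class CacheSetupDemo {
  public static void main(String[] args) {
    // A 10 MiB disk cache in an app-chosen directory (hypothetical path).
    Cache cache = new Cache(new File("/tmp/okhttp-cache"), 10L * 1024 * 1024);

    OkHttpClient client = new OkHttpClient.Builder()
        .cache(cache) // CacheInterceptor will now consult and populate this cache
        .build();
  }
}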
4. ConnectInterceptor
After three interceptors we still have not made an actual request. Don't worry: ConnectInterceptor is the one that establishes the HTTP connection. Its intercept method is short, so let's look at it right away:
@Override public Response intercept(Chain chain) throws IOException {
  RealInterceptorChain realChain = (RealInterceptorChain) chain;
  Request request = realChain.request();
  StreamAllocation streamAllocation = realChain.streamAllocation();

  // We need the network to satisfy this request. Possibly for validating a conditional GET.
  boolean doExtensiveHealthChecks = !request.method().equals("GET");
  HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
  RealConnection connection = streamAllocation.connection();

  return realChain.proceed(request, streamAllocation, httpCodec, connection);
}
Very short, isn't it? It does two things: it produces an HttpCodec, the object that encodes HTTP requests and decodes responses, and it calls streamAllocation.connection() to obtain the RealConnection object. But after getting the connection object I spent quite a while looking for where the connection is actually established, and it turns out to happen inside streamAllocation.newStream(). Let's look at that method:
public HttpCodec newStream(OkHttpClient client, boolean doExtensiveHealthChecks) {
  int connectTimeout = client.connectTimeoutMillis();
  int readTimeout = client.readTimeoutMillis();
  int writeTimeout = client.writeTimeoutMillis();
  boolean connectionRetryEnabled = client.retryOnConnectionFailure();

  try {
    RealConnection resultConnection = findHealthyConnection(connectTimeout, readTimeout,
        writeTimeout, connectionRetryEnabled, doExtensiveHealthChecks);
    HttpCodec resultCodec = resultConnection.newCodec(client, this);

    synchronized (connectionPool) {
      codec = resultCodec;
      return resultCodec;
    }
  } catch (IOException e) {
    throw new RouteException(e);
  }
}
It pulls a few connection parameters out of the OkHttpClient, calls findHealthyConnection to obtain a connection, and uses that RealConnection to create the HttpCodec for this request. Stepping into findHealthyConnection, it is an endless loop that keeps calling findConnection until a healthy connection is found, so let's go straight to findConnection:
private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
    boolean connectionRetryEnabled) throws IOException {
  Route selectedRoute;
  synchronized (connectionPool) {
    if (released) throw new IllegalStateException("released");
    if (codec != null) throw new IllegalStateException("codec != null");
    if (canceled) throw new IOException("Canceled");

    // Attempt to use an already-allocated connection.
    RealConnection allocatedConnection = this.connection;
    if (allocatedConnection != null && !allocatedConnection.noNewStreams) {
      return allocatedConnection;
    }

    // Attempt to get a connection from the pool.
    Internal.instance.get(connectionPool, address, this, null);
    if (connection != null) {
      return connection;
    }

    selectedRoute = route;
  }

  // If we need a route, make one. This is a blocking operation.
  if (selectedRoute == null) {
    selectedRoute = routeSelector.next();
  }

  RealConnection result;
  synchronized (connectionPool) {
    if (canceled) throw new IOException("Canceled");

    // Now that we have an IP address, make another attempt at getting a connection from the pool.
    // This could match due to connection coalescing.
    Internal.instance.get(connectionPool, address, this, selectedRoute);
    if (connection != null) {
      route = selectedRoute;
      return connection;
    }

    // Create a connection and assign it to this allocation immediately. This makes it possible
    // for an asynchronous cancel() to interrupt the handshake we're about to do.
    route = selectedRoute;
    refusedStreamCount = 0;
    result = new RealConnection(connectionPool, selectedRoute);
    acquire(result);
  }

  // Do TCP + TLS handshakes. This is a blocking operation.
  result.connect(connectTimeout, readTimeout, writeTimeout, connectionRetryEnabled);
  routeDatabase().connected(result.route());

  Socket socket = null;
  synchronized (connectionPool) {
    // Pool the connection.
    Internal.instance.put(connectionPool, result);

    // If another multiplexed connection to the same address was created concurrently, then
    // release this connection and acquire that one.
    if (result.isMultiplexed()) {
      socket = Internal.instance.deduplicate(connectionPool, address, this);
      result = connection;
    }
  }
  closeQuietly(socket);

  return result;
}
Another fairly long method. The first half tries to find an existing connection: first it checks whether this allocation already holds a connection that can still be used; if not, it tries the connection pool; if the pool has nothing either, a new connection is created and added to the pool. Once it has a connection object, it finally calls connect to actually establish the connection. As the comment says, connect performs the TCP + TLS handshakes and is a blocking operation. Afterwards it also checks whether the connection is multiplexed and another connection to the same address was created concurrently; if so, this one is released in favour of the existing one. Having come this far, we should of course see how connect establishes the connection:
public void connect(
    int connectTimeout, int readTimeout, int writeTimeout, boolean connectionRetryEnabled) {
  if (protocol != null) throw new IllegalStateException("already connected");

  RouteException routeException = null;
  List<ConnectionSpec> connectionSpecs = route.address().connectionSpecs();
  ConnectionSpecSelector connectionSpecSelector = new ConnectionSpecSelector(connectionSpecs);

  if (route.address().sslSocketFactory() == null) {
    if (!connectionSpecs.contains(ConnectionSpec.CLEARTEXT)) {
      throw new RouteException(new UnknownServiceException(
          "CLEARTEXT communication not enabled for client"));
    }
    String host = route.address().url().host();
    if (!Platform.get().isCleartextTrafficPermitted(host)) {
      throw new RouteException(new UnknownServiceException(
          "CLEARTEXT communication to " + host + " not permitted by network security policy"));
    }
  }

  while (true) {
    try {
      if (route.requiresTunnel()) {
        connectTunnel(connectTimeout, readTimeout, writeTimeout);
      } else {
        connectSocket(connectTimeout, readTimeout);
      }
      establishProtocol(connectionSpecSelector);
      break;
    } catch (IOException e) {
      closeQuietly(socket);
      closeQuietly(rawSocket);
      socket = null;
      rawSocket = null;
      source = null;
      sink = null;
      handshake = null;
      protocol = null;
      http2Connection = null;

      if (routeException == null) {
        routeException = new RouteException(e);
      } else {
        routeException.addConnectException(e);
      }

      if (!connectionRetryEnabled || !connectionSpecSelector.connectionFailed(e)) {
        throw routeException;
      }
    }
  }

  if (http2Connection != null) {
    synchronized (connectionPool) {
      allocationLimit = http2Connection.maxConcurrentStreams();
    }
  }
}
This code is getting close to the wire, and even I find it heavy going. First it checks whether the Protocol object is non-null; Protocol represents the negotiated application protocol, so a non-null value means the connection has already been made. It then obtains the list of ConnectionSpecs from the route's address; a ConnectionSpec describes the socket configuration for the connection, including which TLS versions and cipher suites may be used. If the address has no sslSocketFactory, i.e. this would be a cleartext connection, and cleartext is not allowed by the connection specs or by the platform's network security policy, a RouteException is thrown and no connection is made. This is one of the reasons OkHttp can claim to be a security-conscious HTTP client.
Next comes the loop that actually connects: it opens the socket (through a tunnel if the route requires one) and calls establishProtocol to set up the real client-server protocol; this can throw an IOException. If everything goes well it breaks out of the loop; on an exception it either gives up and rethrows, or tries again, depending on whether connection retries are enabled and whether another connection spec is available. Finally, for HTTP/2 it limits the number of concurrent streams on this connection.
Going further down, for TLS connections establishProtocol ends up in connectTls, where the TLS handshake and the certificate checks relevant to HTTPS take place; for reasons of space I won't go into that here. For HTTP/2 it ultimately calls start on the Http2Connection object to bring the connection up. There is a great deal of detail in this area, but since our focus is the interceptors I'll stop here and perhaps cover it in a dedicated article later.
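The timeouts and retry flag read at the top of newStream, and the pool consulted in findConnection, all come from the OkHttpClient configuration. A hedged sketch of tuning them (the numbers are arbitrary examples, not recommendations):
import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

public class ConnectionConfigDemo {
  public static void main(String[] args) {
    OkHttpClient client = new OkHttpClient.Builder()
        .connectTimeout(10, TimeUnit.SECONDS)  // read back as client.connectTimeoutMillis() in newStream
        .readTimeout(30, TimeUnit.SECONDS)     // client.readTimeoutMillis()
        .writeTimeout(30, TimeUnit.SECONDS)    // client.writeTimeoutMillis()
        .retryOnConnectionFailure(true)        // the connectionRetryEnabled flag used in connect()
        // The pool that findConnection consults before creating a new RealConnection:
        .connectionPool(new ConnectionPool(5, 5, TimeUnit.MINUTES))
        .build();
  }
}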
5. CallServerInterceptor
This is the last interceptor in OkHttp's chain. Where ConnectInterceptor establishes the HTTP connection, CallServerInterceptor is the one that actually sends the request to the server. Straight to its intercept code:
RealInterceptorChain realChain = (RealInterceptorChain) chain;
HttpCodec httpCodec = realChain.httpStream();
StreamAllocation streamAllocation = realChain.streamAllocation();
RealConnection connection = (RealConnection) realChain.connection();
Request request = realChain.request();

long sentRequestMillis = System.currentTimeMillis();
httpCodec.writeRequestHeaders(request);
First it retrieves the HttpCodec, the StreamAllocation and the RealConnection, and uses the HttpCodec to encode and write the final request headers.
Response.Builder responseBuilder = null;
if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
  // If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
  // Continue" response before transmitting the request body. If we don't get that, return what
  // we did get (such as a 4xx response) without ever transmitting the request body.
  if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
    httpCodec.flushRequest();
    responseBuilder = httpCodec.readResponseHeaders(true);
  }

  if (responseBuilder == null) {
    // Write the request body if the "Expect: 100-continue" expectation was met.
    Sink requestBodyOut = httpCodec.createRequestBody(request, request.body().contentLength());
    BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
    request.body().writeTo(bufferedRequestBody);
    bufferedRequestBody.close();
  } else if (!connection.isMultiplexed()) {
    // If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection from
    // being reused. Otherwise we're still obligated to transmit the request body to leave the
    // connection in a consistent state.
    streamAllocation.noNewStreams();
  }
}
This part writes the request body. It first checks whether the request method permits a body at all (a GET, for instance, has none). If the request carries an "Expect: 100-continue" header, the headers are flushed and the body is transmitted only once the server agrees; otherwise the body is encoded and written out through the HttpCodec. A hedged sketch of a request that would exercise this path is shown below; after it, the interceptor source continues.
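This is not part of CallServerInterceptor, just an illustration from the caller's side (the URL and payload are made up):
import okhttp3.MediaType;
import okhttp3.Request;
import okhttp3.RequestBody;

public class ExpectContinueDemo {
  // A POST whose body CallServerInterceptor will write through the HttpCodec, and which asks the
  // server for permission first via "Expect: 100-continue".
  static Request buildUpload() {
    RequestBody body = RequestBody.create(
        MediaType.parse("application/json; charset=utf-8"),
        "{\"hello\":\"world\"}");
    return new Request.Builder()
        .url("https://example.com/upload") // hypothetical URL
        .header("Expect", "100-continue")  // triggers the flushRequest()/readResponseHeaders(true) branch above
        .post(body)
        .build();
  }
}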
httpCodec.finishRequest();

if (responseBuilder == null) {
  responseBuilder = httpCodec.readResponseHeaders(false);
}

Response response = responseBuilder
    .request(request)
    .handshake(streamAllocation.connection().handshake())
    .sentRequestAtMillis(sentRequestMillis)
    .receivedResponseAtMillis(System.currentTimeMillis())
    .build();

int code = response.code();
if (forWebSocket && code == 101) {
  // Connection is upgrading, but we need to ensure interceptors see a non-null response body.
  response = response.newBuilder()
      .body(Util.EMPTY_RESPONSE)
      .build();
} else {
  response = response.newBuilder()
      .body(httpCodec.openResponseBody(response))
      .build();
}

if ("close".equalsIgnoreCase(response.request().header("Connection"))
    || "close".equalsIgnoreCase(response.header("Connection"))) {
  streamAllocation.noNewStreams();
}

if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
  throw new ProtocolException(
      "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
}

return response;
Once the request has been fully written, the HttpCodec reads and decodes the response headers and body and the final response is built. A few special cases: for a WebSocket call receiving 101 (protocol upgrade), the body is replaced with an empty one so that interceptors still see a non-null body; a 204 or 205 (responses that must carry no content) with a non-zero Content-Length triggers a ProtocolException; and if the Connection header on either the request or the response is "close", the connection is marked so it will not be reused for new streams.
Finally the response travels back up through all the interceptors, each doing its own post-processing, until it reaches the user.
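To close the loop, here is what all of the above looks like from the user's side: one client, one request, one execute() call, behind which the whole interceptor chain described in this chapter runs (the URL is made up):
import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class EndToEndDemo {
  public static void main(String[] args) throws IOException {
    OkHttpClient client = new OkHttpClient();

    Request request = new Request.Builder()
        .url("https://example.com/api") // hypothetical URL
        .build();

    // execute() -> RealCall.getResponseWithInterceptorChain() -> retry/redirect -> bridge
    // -> cache -> connect -> call server, then the Response bubbles back up the same chain.
    try (Response response = client.newCall(request).execute()) {
      System.out.println(response.code());
      System.out.println(response.body().string());
    }
  }
}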
That's it, finally done. My knowledge has its limits, so please point out any mistakes you find. Thanks for reading!