Reading the Volley Source Code

Author: Jaesoon | Published 2018-07-13 16:14


    How do you read source code? Plenty of people have written about this. When I analyze a library like this, the first step is to open its official developer site, which introduces the project: what it is, what problem it solves, and how it compares to similar tools (some projects skip that last part).
    To analyze a library, you also need a sense of how its developers built it: what techniques they used and what the architecture looks like. Then there is the tutorial; developers usually provide one. From the tutorial we learn how the library is used, which is also the program's entry point. Finally, others have already gone through the source, and their blog posts can clear up a lot of confusion.
    Now let's start the analysis. From the tutorial we know that to send a request we need to create at least two objects: a RequestQueue and a Request. The Request is the request we want to send; on it we can set the HTTP method, headers, and URL. The RequestQueue is the object that executes Requests.
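    As a reminder of the entry point, this is the typical usage shown in the tutorial (a minimal sketch; the URL and listener bodies are placeholders):

        RequestQueue queue = Volley.newRequestQueue(context);
        StringRequest request = new StringRequest(
                Request.Method.GET,
                "https://example.com/api", // placeholder URL
                new Response.Listener<String>() {
                    @Override
                    public void onResponse(String response) {
                        // Runs on the main thread with the parsed result.
                    }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) {
                        // Runs on the main thread with the error.
                    }
                });
        queue.add(request);
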
    There are several simple ways to create a RequestQueue:

        /**
         * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
         *
         * @param context A {@link Context} to use for creating the cache dir.
         * @param stack A {@link BaseHttpStack} to use for the network, or null for default.
         * @return A started {@link RequestQueue} instance.
         */
        public static RequestQueue newRequestQueue(Context context, BaseHttpStack stack)
    
        /**
         * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
         *
         * @param context A {@link Context} to use for creating the cache dir.
         * @param stack An {@link HttpStack} to use for the network, or null for default.
         * @return A started {@link RequestQueue} instance.
         * @deprecated Use {@link #newRequestQueue(Context, BaseHttpStack)} instead to avoid depending
         *     on Apache HTTP. This method may be removed in a future release of Volley.
         */
        @Deprecated
        @SuppressWarnings("deprecation")
        public static RequestQueue newRequestQueue(Context context, HttpStack stack)
    
        private static RequestQueue newRequestQueue(Context context, Network network)
    
        /**
         * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
         *
         * @param context A {@link Context} to use for creating the cache dir.
         * @return A started {@link RequestQueue} instance.
         */
        public static RequestQueue newRequestQueue(Context context)
    

    There are four factory methods. The first takes a Context and a BaseHttpStack. From the Javadoc we know the Context is used to create the cache directory and the BaseHttpStack performs the network requests; if null is passed, a default stack is used. It returns a RequestQueue instance that is already running (that is, the cache thread and the network dispatcher threads have been created and started).
    The second one is deprecated, so we won't go into it; the Javadoc explains why.
    The third one is simpler: it takes a Context and a Network, and again the Network is what actually performs the requests.
    The fourth is the one we normally use: you just pass a Context, which is used to create the cache path. Looking at the source, it simply calls the first overload with null as the second argument, as the snippet below shows.
    So let's look at the source of the first and the third factory methods.
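    For reference, in the Volley version this article reads, the fourth overload is just a one-line delegation:

        public static RequestQueue newRequestQueue(Context context) {
            return newRequestQueue(context, (BaseHttpStack) null);
        }
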

        /** Default on-disk cache directory. */
        private static final String DEFAULT_CACHE_DIR = "volley";
    
        /**
         * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
         *
         * @param context A {@link Context} to use for creating the cache dir.
         * @param stack A {@link BaseHttpStack} to use for the network, or null for default.
         * @return A started {@link RequestQueue} instance.
         */
        public static RequestQueue newRequestQueue(Context context, BaseHttpStack stack) {
            BasicNetwork network;
            if (stack == null) {
                if (Build.VERSION.SDK_INT >= 9) {
                    network = new BasicNetwork(new HurlStack());
                } else {
                    // Prior to Gingerbread, HttpUrlConnection was unreliable.
                    // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                    // At some point in the future we'll move our minSdkVersion past Froyo and can
                    // delete this fallback (along with all Apache HTTP code).
                    String userAgent = "volley/0";
                    try {
                        String packageName = context.getPackageName();
                        PackageInfo info =
                                context.getPackageManager().getPackageInfo(packageName, /* flags= */ 0);
                        userAgent = packageName + "/" + info.versionCode;
                    } catch (NameNotFoundException e) {
                    }
    
                    network =
                            new BasicNetwork(
                                    new HttpClientStack(AndroidHttpClient.newInstance(userAgent)));
                }
            } else {
                network = new BasicNetwork(stack);
            }
    
            return newRequestQueue(context, network);
        }
    
        private static RequestQueue newRequestQueue(Context context, Network network) {
            File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
            RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
            queue.start();
            return queue;
        }
    

    Let's start with the third one, because the first, second, and fourth all end up calling it indirectly. (In my actual reading I started from the fourth, which led me to the first and finally to the third. Yes, this is the follow-the-thread approach.)
    This factory method is private and static and takes two parameters, a Context and a Network. The Context is used to build the cache path, and the Network performs the transfer. The first line builds a File named cacheDir from the app's cache directory plus Volley's default subdirectory. It then creates a DiskBasedCache on that directory and uses it, together with the supplied network, to construct a RequestQueue. It calls the RequestQueue's start() method and returns the queue. This confirms what the tutorial says: newRequestQueue creates a RequestQueue and starts it. The key class here is RequestQueue, so let's open its source.
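    If you need more control, nothing stops you from doing the same steps by hand. As an illustrative sketch (the 10 MB figure is arbitrary), DiskBasedCache also has a constructor that takes a maximum cache size in bytes:

        File cacheDir = new File(context.getCacheDir(), "volley");
        Network network = new BasicNetwork(new HurlStack());
        RequestQueue queue = new RequestQueue(
                new DiskBasedCache(cacheDir, 10 * 1024 * 1024), // 10 MB cap, for illustration
                network);
        queue.start();
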

    The RequestQueue class

    The class description reads:

    /**
     * A request dispatch queue with a thread pool of dispatchers.
     *
     * <p>Calling {@link #add(Request)} will enqueue the given Request for dispatch, resolving from
     * either cache or network on a worker thread, and then delivering a parsed response on the main
     * thread.
     */
    

    In short, it is a request dispatch queue backed by a pool of dispatcher threads. Calling add() enqueues the given request for dispatch; the result is resolved either from the cache or from the network on a worker thread, and the parsed response is then delivered on the main thread (a sketch of add() follows below).
    When analyzing a concrete class, after reading its basic description I look at its constructors first. This class has three of them:
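    To make the add() part concrete, this is roughly what it does (simplified; the exact code varies a bit across Volley versions):

        public <T> Request<T> add(Request<T> request) {
            // Tag the request as belonging to this queue.
            request.setRequestQueue(this);
            // Process requests in the order they are added.
            request.setSequence(getSequenceNumber());
            request.addMarker("add-to-queue");

            // Uncacheable requests skip the cache queue and go straight to the network.
            if (!request.shouldCache()) {
                mNetworkQueue.add(request);
                return request;
            }
            mCacheQueue.add(request);
            return request;
        }
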

        /** Number of network request dispatcher threads to start. */
        private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;
    
        /** Cache interface for retrieving and storing responses. */
        private final Cache mCache;
    
        /** Network interface for performing requests. */
        private final Network mNetwork;
    
        /** Response delivery mechanism. */
        private final ResponseDelivery mDelivery;
    
        /** The network dispatchers. */
        private final NetworkDispatcher[] mDispatchers;
    
        /** The cache dispatcher. */
        private CacheDispatcher mCacheDispatcher;
    
        private final List<RequestFinishedListener> mFinishedListeners = new ArrayList<>();
    
        /**
         * Creates the worker pool. Processing will not begin until {@link #start()} is called.
         *
         * @param cache A Cache to use for persisting responses to disk
         * @param network A Network interface for performing HTTP requests
         * @param threadPoolSize Number of network dispatcher threads to create
         * @param delivery A ResponseDelivery interface for posting responses and errors
         */
        public RequestQueue(
                Cache cache, Network network, int threadPoolSize, ResponseDelivery delivery) {
            mCache = cache;
            mNetwork = network;
            mDispatchers = new NetworkDispatcher[threadPoolSize];
            mDelivery = delivery;
        }
    
        /**
         * Creates the worker pool. Processing will not begin until {@link #start()} is called.
         *
         * @param cache A Cache to use for persisting responses to disk
         * @param network A Network interface for performing HTTP requests
         * @param threadPoolSize Number of network dispatcher threads to create
         */
        public RequestQueue(Cache cache, Network network, int threadPoolSize) {
            this(
                    cache,
                    network,
                    threadPoolSize,
                    new ExecutorDelivery(new Handler(Looper.getMainLooper())));
        }
    
        /**
         * Creates the worker pool. Processing will not begin until {@link #start()} is called.
         *
         * @param cache A Cache to use for persisting responses to disk
         * @param network A Network interface for performing HTTP requests
         */
        public RequestQueue(Cache cache, Network network) {
            this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
        }
    

    Let's analyze the first constructor directly, since the other two simply delegate to it.
    As the Javadoc says, this constructor creates the worker pool, but nothing starts running until start() is called. It takes four parameters: cache, network, threadPoolSize, and delivery. The cache persists responses to disk; the network performs the HTTP requests; threadPoolSize is the number of network dispatcher threads; and the ResponseDelivery interface is used to post responses and errors. The constructor stores these arguments in private fields and allocates the dispatcher array according to threadPoolSize (the dispatcher threads themselves are created later, in start()).
    The second constructor takes three parameters; it calls the first constructor and supplies a ResponseDelivery of its own. This ResponseDelivery is the key to the callbacks, so let's look at its source.
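    Since nothing runs until start() is called, it is worth seeing what start() does (lightly abridged from the source):

        public void start() {
            stop(); // Make sure any currently running dispatchers are stopped first.

            // Create the cache dispatcher and start it.
            mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
            mCacheDispatcher.start();

            // Create network dispatchers (and corresponding threads) up to the pool size.
            for (int i = 0; i < mDispatchers.length; i++) {
                NetworkDispatcher networkDispatcher =
                        new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
                mDispatchers[i] = networkDispatcher;
                networkDispatcher.start();
            }
        }
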

    ResponseDelivery

    It is an interface that defines three methods:

    public interface ResponseDelivery {
        /** Parses a response from the network or cache and delivers it. */
        void postResponse(Request<?> request, Response<?> response);
    
        /**
         * Parses a response from the network or cache and delivers it. The provided Runnable will be
         * executed after delivery.
         */
        void postResponse(Request<?> request, Response<?> response, Runnable runnable);
    
        /** Posts an error for the given request. */
        void postError(Request<?> request, VolleyError error);
    }
    
    • void postResponse(Request<?> request, Response<?> response);
      Parses a response from the network or cache and delivers it.
    • void postResponse(Request<?> request, Response<?> response, Runnable runnable);
      Same as above, except that the extra Runnable is executed after delivery.
    • void postError(Request<?> request, VolleyError error);
      Posts an error for the given request.
      In practice what we use is ExecutorDelivery, the implementation of ResponseDelivery.
    import android.os.Handler;
    import java.util.concurrent.Executor;
    
    /** Delivers responses and errors. */
    public class ExecutorDelivery implements ResponseDelivery {
        /** Used for posting responses, typically to the main thread. */
        private final Executor mResponsePoster;
    
        /**
         * Creates a new response delivery interface.
         *
         * @param handler {@link Handler} to post responses on
         */
        public ExecutorDelivery(final Handler handler) {
            // Make an Executor that just wraps the handler.
            mResponsePoster =
                    new Executor() {
                        @Override
                        public void execute(Runnable command) {
                            handler.post(command);
                        }
                    };
        }
    
        /**
         * Creates a new response delivery interface, mockable version for testing.
         *
         * @param executor For running delivery tasks
         */
        public ExecutorDelivery(Executor executor) {
            mResponsePoster = executor;
        }
    
        @Override
        public void postResponse(Request<?> request, Response<?> response) {
            postResponse(request, response, null);
        }
    
        @Override
        public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
            request.markDelivered();
            request.addMarker("post-response");
            mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
        }
    
        @Override
        public void postError(Request<?> request, VolleyError error) {
            request.addMarker("post-error");
            Response<?> response = Response.error(error);
            mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
        }
    
        /** A Runnable used for delivering network responses to a listener on the main thread. */
        @SuppressWarnings("rawtypes")
        private static class ResponseDeliveryRunnable implements Runnable {
            private final Request mRequest;
            private final Response mResponse;
            private final Runnable mRunnable;
    
            public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
                mRequest = request;
                mResponse = response;
                mRunnable = runnable;
            }
    
            @SuppressWarnings("unchecked")
            @Override
            public void run() {
                // NOTE: If cancel() is called off the thread that we're currently running in (by
                // default, the main thread), we cannot guarantee that deliverResponse()/deliverError()
                // won't be called, since it may be canceled after we check isCanceled() but before we
                // deliver the response. Apps concerned about this guarantee must either call cancel()
                // from the same thread or implement their own guarantee about not invoking their
                // listener after cancel() has been called.
    
                // If this request has canceled, finish it and don't deliver.
                if (mRequest.isCanceled()) {
                    mRequest.finish("canceled-at-delivery");
                    return;
                }
    
                // Deliver a normal response or error, depending.
                if (mResponse.isSuccess()) {
                    mRequest.deliverResponse(mResponse.result);
                } else {
                    mRequest.deliverError(mResponse.error);
                }
    
                // If this is an intermediate response, add a marker, otherwise we're done
                // and the request can be finished.
                if (mResponse.intermediate) {
                    mRequest.addMarker("intermediate-response");
                } else {
                    mRequest.finish("done");
                }
    
                // If we have been provided a post-delivery runnable, run it.
                if (mRunnable != null) {
                    mRunnable.run();
                }
            }
        }
    }
    
    • Constructors
      As usual, let's start with the constructors. There are two of them:
        /**
         * Creates a new response delivery interface.
         *
         * @param handler {@link Handler} to post responses on
         */
        public ExecutorDelivery(final Handler handler)
    
        /**
         * Creates a new response delivery interface, mockable version for testing.
         *
         * @param executor For running delivery tasks
         */
        public ExecutorDelivery(Executor executor)
    

    The difference is that one takes a Handler and the other an Executor. Either way, the goal is to end up with an Executor that posts responses, typically onto the main thread. Show me the code.

        /**
         * Creates a new response delivery interface.
         *
         * @param handler {@link Handler} to post responses on
         */
        public ExecutorDelivery(final Handler handler) {
            // Make an Executor that just wraps the handler.
            mResponsePoster =
                    new Executor() {
                        @Override
                        public void execute(Runnable command) {
                            handler.post(command);
                        }
                    };
        }
    

    See that? In the Executor we build, execute() calls handler.post(), which makes the Runnable run on the thread the handler belongs to. The other constructor is a single line that assigns the supplied Executor to mResponsePoster.
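    The Javadoc calls the Executor-based constructor the "mockable version for testing". For example (a hypothetical test setup, not part of the Volley source), you can pass an executor that runs everything inline on the calling thread:

        ResponseDelivery testDelivery = new ExecutorDelivery(new Executor() {
            @Override
            public void execute(Runnable command) {
                command.run(); // deliver synchronously, no thread hop
            }
        });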

    • ResponseDeliveryRunnable
      A Runnable used to deliver a network response to a listener on the main thread. Its job is simple but important; the code is short and a nice example of "simple, yet not trivial".
        /** A Runnable used for delivering network responses to a listener on the main thread. */
        @SuppressWarnings("rawtypes")
        private static class ResponseDeliveryRunnable implements Runnable {
            private final Request mRequest;
            private final Response mResponse;
            private final Runnable mRunnable;
    
            public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
                mRequest = request;
                mResponse = response;
                mRunnable = runnable;
            }
    
            @SuppressWarnings("unchecked")
            @Override
            public void run() {
                // NOTE: If cancel() is called off the thread that we're currently running in (by
                // default, the main thread), we cannot guarantee that deliverResponse()/deliverError()
                // won't be called, since it may be canceled after we check isCanceled() but before we
                // deliver the response. Apps concerned about this guarantee must either call cancel()
                // from the same thread or implement their own guarantee about not invoking their
                // listener after cancel() has been called.
    
                // If this request has canceled, finish it and don't deliver.
                if (mRequest.isCanceled()) {
                    mRequest.finish("canceled-at-delivery");
                    return;
                }
    
                // Deliver a normal response or error, depending.
                if (mResponse.isSuccess()) {
                    mRequest.deliverResponse(mResponse.result);
                } else {
                    mRequest.deliverError(mResponse.error);
                }
    
                // If this is an intermediate response, add a marker, otherwise we're done
                // and the request can be finished.
                if (mResponse.intermediate) {
                    mRequest.addMarker("intermediate-response");
                } else {
                    mRequest.finish("done");
                }
    
                // If we have been provided a post-delivery runnable, run it.
                if (mRunnable != null) {
                    mRunnable.run();
                }
            }
        }
    

    Starting with the constructor: it takes three parameters, a Request, a Response, and a Runnable, that is, the request, the response, and an optional follow-up Runnable. The constructor simply assigns them to the member fields.
    The important part is run(). It performs a series of checks. First it checks whether the request has been canceled; if so, it calls mRequest.finish("canceled-at-delivery") to finish the request and delivers nothing to the main thread (no callback fires).
    Next it checks whether the response was successful. If it was, mRequest.deliverResponse() hands mResponse.result over on the handler's thread; otherwise mRequest.deliverError() delivers mResponse.error instead. If the response is marked as intermediate (for example, a soft-expired cache hit that will still be refreshed from the network), the request only gets an "intermediate-response" marker rather than being finished. Finally, if a post-delivery Runnable was supplied, its run() method is invoked.

    • The three ResponseDelivery methods
        @Override
        public void postResponse(Request<?> request, Response<?> response) {
            postResponse(request, response, null);
        }
    
        @Override
        public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
            request.markDelivered();
            request.addMarker("post-response");
            mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
        }
    
        @Override
        public void postError(Request<?> request, VolleyError error) {
            request.addMarker("post-error");
            Response<?> response = Response.error(error);
            mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
        }
    

    These are straightforward, so I won't go over them in detail.

    NetworkDispatcher

    Back to RequestQueue: as we saw, it sets up a pool of worker threads (the NetworkDispatcher array that start() fills in). That means we need to analyze the NetworkDispatcher class. First, the class description.

     Provides a thread for performing network dispatch from a queue of requests.
    
     <p>Requests added to the specified queue are processed from the network via a specified {@link
     Network} interface. Responses are committed to cache, if eligible, using a specified {@link
     Cache} interface. Valid responses and errors are posted back to the caller via a {@link
     ResponseDelivery}.
    

    In short, it provides a thread that performs network dispatch from a queue of requests, which is why the class extends Thread.
    The class has a single constructor:
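    Its skeleton looks roughly like this (fields abridged):

        public class NetworkDispatcher extends Thread {
            /** The queue of requests to service. */
            private final BlockingQueue<Request<?>> mQueue;
            /** The network interface for performing requests. */
            private final Network mNetwork;
            /** The cache to write responses to. */
            private final Cache mCache;
            /** For posting responses and errors. */
            private final ResponseDelivery mDelivery;
            /** Used for telling us to die. */
            private volatile boolean mQuit = false;
            // ...
        }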

        /**
         * Creates a new network dispatcher thread. You must call {@link #start()} in order to begin
         * processing.
         *
         * @param queue Queue of incoming requests for triage
         * @param network Network interface to use for performing requests
         * @param cache Cache interface to use for writing responses to cache
         * @param delivery Delivery interface to use for posting responses
         */
        public NetworkDispatcher(
                BlockingQueue<Request<?>> queue,
                Network network,
                Cache cache,
                ResponseDelivery delivery) {
            mQueue = queue;
            mNetwork = network;
            mCache = cache;
            mDelivery = delivery;
        }
    

    From the Javadoc we can see that it creates a new network dispatcher thread and that you must call start() for it to begin processing. It takes four parameters: queue, network, cache, and delivery. The queue is the queue of incoming requests to triage; the network performs the requests; the cache is where responses are written; and the delivery posts results back to the caller.
    Now for the method every Thread has to override: run().

        @Override
        public void run() {
            Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
            while (true) {
                try {
                    processRequest();
                } catch (InterruptedException e) {
                    // We may have been interrupted because it was time to quit.
                    if (mQuit) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                    VolleyLog.e(
                            "Ignoring spurious interrupt of NetworkDispatcher thread; "
                                    + "use quit() to terminate it");
                }
            }
        }
    

    It first calls Process.setThreadPriority to set the thread to Process.THREAD_PRIORITY_BACKGROUND. For reference, here are Android's thread priority levels:

    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND); // background priority, so that when many such threads run concurrently they take less CPU time away from the main thread. The available levels are:
    int THREAD_PRIORITY_AUDIO // standard priority for audio playback threads
    int THREAD_PRIORITY_BACKGROUND // standard priority for background threads
    int THREAD_PRIORITY_DEFAULT // default priority for an application
    int THREAD_PRIORITY_DISPLAY // standard priority for the display system, mainly to keep UI updates smooth
    int THREAD_PRIORITY_FOREGROUND // standard priority for foreground threads
    int THREAD_PRIORITY_LESS_FAVORABLE // one step less favorable than the default
    int THREAD_PRIORITY_LOWEST // lowest valid thread priority
    int THREAD_PRIORITY_MORE_FAVORABLE // one step more favorable than the default
    int THREAD_PRIORITY_URGENT_AUDIO // standard priority for the most important audio threads
    int THREAD_PRIORITY_URGENT_DISPLAY // standard priority for the most important display threads; also applies to input event handling
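
    One more note before moving on: the while (true) loop in run() only exits when the thread is interrupted while mQuit is set, which is exactly what quit() arranges (paraphrased from the source):

        public void quit() {
            mQuit = true;
            interrupt();
        }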

    Next, run() calls the internal method processRequest(). The comment explains why it was extracted into its own method: to ensure that locals have a constrained liveness scope for the GC, which avoids keeping references to previous requests alive for an indeterminate amount of time.

    
        @TargetApi(Build.VERSION_CODES.ICE_CREAM_SANDWICH)
        private void addTrafficStatsTag(Request<?> request) {
            // Tag the request (if API >= 14)
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
                TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
            }
        }
    
        // Extracted to its own method to ensure locals have a constrained liveness scope by the GC.
        // This is needed to avoid keeping previous request references alive for an indeterminate amount
        // of time. Update consumer-proguard-rules.pro when modifying this. See also
        // https://github.com/google/volley/issues/114
        private void processRequest() throws InterruptedException {
            // Take a request from the queue.
            Request<?> request = mQueue.take();
            processRequest(request);
        }
    
        @VisibleForTesting
        void processRequest(Request<?> request) {
            // Get the time elapsed since the device booted.
            long startTimeMs = SystemClock.elapsedRealtime();
            try {
                request.addMarker("network-queue-take");
    
                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    request.notifyListenerResponseNotUsable();
                    return;
                }
    
                addTrafficStatsTag(request);
    
                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");
    
                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    request.notifyListenerResponseNotUsable();
                    return;
                }
    
                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");
    
                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }
    
                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
                request.notifyListenerResponseReceived(response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
                request.notifyListenerResponseNotUsable();
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
                request.notifyListenerResponseNotUsable();
            }
        }
    
        private void parseAndDeliverNetworkError(Request<?> request, VolleyError error) {
            error = request.parseNetworkError(error);
            mDelivery.postError(request, error);
        }
    

    As you can see, all of the processing happens in this method: void processRequest(Request<?> request).
    Let's go through it line by line.
    The first line records the time elapsed since the device booted:

    long startTimeMs = SystemClock.elapsedRealtime();
    

    The second line adds a marker to the request. Per the Javadoc, its purpose is to add an event to this request's event log, for debugging. It's not important here, so we'll ignore it.

    request.addMarker("network-queue-take");
    

    The next few lines check whether the request has already been canceled. If it has, the network request is not performed and a "response not usable" notification is sent for this request.

        // If the request was cancelled already, do not perform the
        // network request.
        if (request.isCanceled()) {
            request.finish("network-discard-cancelled");
            request.notifyListenerResponseNotUsable();
            return;
        }
    

    Next, a traffic-stats tag is set for the request. The TrafficStats class deserves its own write-up, so I won't rehash its usage here; there is a small illustrative sketch after the snippet below.

    addTrafficStatsTag(request);
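
    For the curious, thread-level traffic tagging looks roughly like this (an illustrative sketch; the 0xF00D tag value is arbitrary):

        TrafficStats.setThreadStatsTag(0xF00D); // attribute this thread's traffic to tag 0xF00D
        try {
            // ... perform the HTTP request on this thread ...
        } finally {
            TrafficStats.clearThreadStatsTag();
        }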
    

    Then comes the core step: the network request itself. Network.performRequest() is called to carry it out. Note that this call blocks; once it returns, the request gets a "network-http-complete" marker.

        // Perform the network request.
        NetworkResponse networkResponse = mNetwork.performRequest(request);
        request.addMarker("network-http-complete");
    

    Next it checks whether the server said the content has not changed since the last request (HTTP 304) and a response has already been delivered; if both are true, the request finishes with "not-modified" and a "response not usable" notification is sent.

        // If the server returned 304 AND we delivered a response already,
        // we're done -- don't deliver a second identical response.
        if (networkResponse.notModified && request.hasHadResponseDelivered()) {
            request.finish("not-modified");
            request.notifyListenerResponseNotUsable();
            return;
        }
    

    Now for the best part: this is where the raw network result is turned into the object we actually want. How? By calling request.parseNetworkResponse().

        // Parse the response here on the worker thread.
        Response<?> response = request.parseNetworkResponse(networkResponse);
        request.addMarker("network-parse-complete");
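
    parseNetworkResponse() is abstract in Request, so each concrete request type decides how to interpret the raw bytes. As an illustration, a string request can be written roughly like this (a simplified sketch in the spirit of Volley's own StringRequest, which additionally reads the charset from the response headers):

        import com.android.volley.NetworkResponse;
        import com.android.volley.Request;
        import com.android.volley.Response;
        import com.android.volley.toolbox.HttpHeaderParser;
        import java.nio.charset.StandardCharsets;

        public class SimpleStringRequest extends Request<String> {
            private final Response.Listener<String> mListener;

            public SimpleStringRequest(String url,
                                       Response.Listener<String> listener,
                                       Response.ErrorListener errorListener) {
                super(Method.GET, url, errorListener);
                mListener = listener;
            }

            @Override
            protected Response<String> parseNetworkResponse(NetworkResponse response) {
                // Runs on the worker thread: decode the body and build the cache entry.
                String parsed = new String(response.data, StandardCharsets.UTF_8);
                return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
            }

            @Override
            protected void deliverResponse(String response) {
                // Called back on the main thread by the ResponseDelivery.
                mListener.onResponse(response);
            }
        }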
    

    Then comes caching.

        // Write to cache if applicable.
        // TODO: Only update cache metadata instead of entire record for 304s.
        if (request.shouldCache() && response.cacheEntry != null) {
            mCache.put(request.getCacheKey(), response.cacheEntry);
            request.addMarker("network-cache-written");
        }
    

    And now the most critical part of all: after caching, the parsed result has to be delivered back to the main thread. First the request is marked as delivered, then the response is posted through the delivery, and finally a "response received" notification is sent.

        // Post the response back.
        request.markDelivered();
        mDelivery.postResponse(request, response);
        request.notifyListenerResponseReceived(response);
    

    --M: OK, that's a wrap.
    --Y: Wait, no. How can there be no error handling?
    --M: Right, sharp eyes.
    Let's look at the exceptions, then. Two kinds are caught. First, the shared error-handling helper: in it, the request parses the network error via parseNetworkError(), and then the delivery's postError() hands the error off to the main thread.

    private void parseAndDeliverNetworkError(Request<?> request, VolleyError error) {
        error = request.parseNetworkError(error);
        mDelivery.postError(request, error);
    }
    
    • VolleyError
      volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
      parseAndDeliverNetworkError(request, volleyError);
      request.notifyListenerResponseNotUsable();
      
      It records the total time the network request took, then parses the error and delivers it to the main thread, and finally sends a "not usable" notification.
    • Exception
      VolleyLog.e(e, "Unhandled exception %s", e.toString());
      VolleyError volleyError = new VolleyError(e);
      volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
      mDelivery.postError(request, volleyError);
      request.notifyListenerResponseNotUsable();
      
      Straightforward: a VolleyError is created from the Exception, the total request time is recorded, the error is posted to the main thread, and a notification is sent.
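
    On the app side, both paths end up in the error callback registered on the request, for example (an illustrative snippet; the log tag is arbitrary):

        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // Runs on the main thread; networkTimeMs was set by the dispatcher above.
                Log.e("VolleyDemo", "Request failed after " + error.getNetworkTimeMs() + " ms", error);
            }
        };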

    With that, the NetworkDispatcher class is fully parsed, and we have traced one complete network request from start to finish.
    To wrap up, let's look at a diagram of Volley's workflow.


    [Figure: Volley's request workflow] What you learn on paper always feels shallow; to really get it, you need a diagram.
