
Learning the Volley Source Code

Author: 俗人浮生 | Published 2019-03-31 21:09

When it comes to networking on Android, I suspect most people are very familiar with these three: HttpClient, HttpURLConnection, and OkHttp.
1) HttpClient needs little discussion: Google deprecated it and removed it outright after Android 6.0.
2) HttpURLConnection is what many of us used when we first learned Android. I remember that before I knew any third-party frameworks, I made network requests with HttpURLConnection and wrapped the calls in my own helper methods; once I switched to third-party frameworks, it actually became unfamiliar again.
3) OkHttp has won official approval and is now the networking framework Google recommends for Android; you can see OkHttp show up directly in Android Studio, as in the screenshot below:

(Screenshot: OkHttp officially endorsed and recommended by Google)

So what is Volley? Volley was introduced at Google I/O 2013. It supports both HttpClient and HttpURLConnection as its underlying transports, making it something of a hybrid; we will see how when we walk through the source below. The framework is no longer actively updated.

Since OkHttp already has the official blessing, why study Volley, a framework that has stopped updating? Simply because my company's project uses Volley, and I happen to be reading the source of a few frameworks lately anyway, so I'm using it as study material and recording notes here. Experts are welcome to skip ahead. O(∩_∩)O~

First, don't be scared off by the words "source code", even for a networking framework; it sounds deep, but it really isn't. Let's start with a screenshot of the Volley package:

(Screenshot: the Volley source package)

See? That's roughly all of it; it isn't as complicated as you might think. Let's dig in!

Below we analyze the source step by step:

1. Creating the RequestQueue

The first step in using Volley is creating a RequestQueue, which we normally do with a single call:

RequestQueue mRequestQueue = Volley.newRequestQueue(context);

Now let's look at the source:

    public static RequestQueue newRequestQueue(Context context) {
         return newRequestQueue(context, null);
    }
    // The overload below does the real work
     public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();

        return queue;
    }

Note the second parameter of the second overload, HttpStack. Let's look at it:

HttpStack
HttpStack is an interface, and the Volley package ships two implementations, HttpClientStack and HurlStack, which correspond to the HttpClient and HttpURLConnection transports mentioned above.
Naturally, a nice idea should occur to you here: if we want the network layer to use OkHttp, the stack Google currently recommends, can't we just follow the pattern of HttpClientStack and HurlStack and write a class of our own that implements HttpStack, or even extend HurlStack directly?

The answer is yes, absolutely. This is one of the clever parts of the framework's design: programming against an interface keeps it extensible. We won't pursue a full OkHttp integration here, since today is about the Volley source, but a rough sketch follows.
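To make the idea concrete, here is a minimal sketch of such a stack, not part of Volley itself: it extends HurlStack and only changes where the HttpURLConnection comes from, assuming OkHttp 2.x and its okhttp-urlconnection module (which provides OkUrlFactory) are on the classpath.

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import com.android.volley.toolbox.HurlStack;
    import com.squareup.okhttp.OkHttpClient;
    import com.squareup.okhttp.OkUrlFactory;

    // Hypothetical OkHttpStack: routes HurlStack's connections through OkHttp.
    public class OkHttpStack extends HurlStack {

        private final OkUrlFactory mFactory;

        public OkHttpStack() {
            this(new OkHttpClient());
        }

        public OkHttpStack(OkHttpClient client) {
            mFactory = new OkUrlFactory(client);
        }

        @Override
        protected HttpURLConnection createConnection(URL url) throws IOException {
            // HurlStack normally calls url.openConnection(); here OkHttp supplies the connection instead.
            return mFactory.open(url);
        }
    }

You would then hand it to Volley as the second argument: RequestQueue queue = Volley.newRequestQueue(context, new OkHttpStack());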

Back to the newRequestQueue() snippet above:

    if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

When we don't supply an HttpStack, Volley picks one for us: on API 9 and above the network layer uses HttpURLConnection (via HurlStack), and below API 9 it uses HttpClient (though there is probably no device below API 9 still in the wild; those are antiques).

Our goal is to create the RequestQueue, so let's look at its constructors:

       /** Number of network request dispatcher threads to start. */
    private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

     public RequestQueue(Cache cache, Network network, int threadPoolSize,ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

The constructors chain into one another, so let's go straight to the last one and walk through its parameters:
1) Cache handles caching. Volley's default is DiskBasedCache, which keeps its index in a LinkedHashMap, i.e. it follows an LRU policy, with a default size of 5 MB; a constructor overload lets you change that (see the sketch after this list).
2) Network sends the actual network requests. Volley's default is BasicNetwork, which performs the request and always hands back a NetworkResponse object. Here is what that object looks like, with notes in the comments:

 public NetworkResponse(int statusCode, byte[] data, Map<String, String> headers,
            boolean notModified, long networkTimeMs) {
        this.statusCode = statusCode; // the HTTP status code
        this.data = data; // the response body; this is usually where we get the data we need
        this.headers = headers; // the response headers, may be null
        this.notModified = notModified; // true means the server returned 304, i.e. the data is already in the cache
        this.networkTimeMs = networkTimeMs; // how long the whole network request took
    }

In addition, BasicNetwork uses a ByteArrayPool to recycle the byte buffers used when reading responses, which reduces heap churn and GC pressure and improves performance; this is one of Google's optimizations.
Related reading:
https://blog.csdn.net/tiankongcheng6/article/details/57085806
https://blog.csdn.net/hfy8971613/article/details/81952375
3) new NetworkDispatcher[threadPoolSize]: NetworkDispatcher extends Thread, and this line clearly creates an array of threads, four by default, so by default four threads serve network requests (the size is configurable). NetworkDispatcher is an important class that we analyze further below.
4) ResponseDelivery handles thread switching, delivering results back on the main thread; this is why, after adopting Volley, we never have to switch back to the main thread ourselves before updating the UI. How does it do that? Note how it is created above:

// how it is created
new ExecutorDelivery(new Handler(Looper.getMainLooper()))
// ExecutorDelivery's constructor
public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

Clearly this is the Handler mechanism at work: a Handler bound to the main thread's Looper is passed in, so handler.post(command) moves the delivery back onto the main thread.
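Since item 1 mentioned that the 5 MB default can be changed, here is a hedged sketch of wiring the same pieces together by hand with a larger cache; it mirrors what newRequestQueue() does internally, and the 20 MB figure is just an example.

    File cacheDir = new File(context.getCacheDir(), "volley");
    int maxCacheBytes = 20 * 1024 * 1024; // 20 MB instead of the 5 MB default
    Cache cache = new DiskBasedCache(cacheDir, maxCacheBytes);
    Network network = new BasicNetwork(new HurlStack());
    RequestQueue queue = new RequestQueue(cache, network); // 4 NetworkDispatcher threads by default
    queue.start(); // newRequestQueue() normally calls this for us; here we must do it ourselves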

With that, the RequestQueue has been created. Note that before returning the RequestQueue object, newRequestQueue() calls start(); let's keep reading:

  public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

   public void stop() {
        if (mCacheDispatcher != null) {
            mCacheDispatcher.quit();
        }
        for (int i = 0; i < mDispatchers.length; i++) {
            if (mDispatchers[i] != null) {
                mDispatchers[i].quit();
            }
        }
    }

There isn't much to say about start() itself. CacheDispatcher and NetworkDispatcher are the two important classes here, both extending Thread, and we analyze them below; for now, just note that RequestQueue.start() starts one CacheDispatcher thread plus every thread in the NetworkDispatcher array.
And, as the comment says, it first stops any dispatchers that are already running.
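In practice Volley.newRequestQueue() calls start() for us, so the part application code usually has to remember is the teardown side. A hedged sketch of what an Activity might do, assuming its requests were tagged beforehand with request.setTag(TAG) (TAG being a hypothetical tag object):

    @Override
    protected void onStop() {
        super.onStop();
        if (mRequestQueue != null) {
            // Cancels every request previously tagged with setTag(TAG);
            // the dispatchers simply skip cancelled requests, as we'll see in run() below.
            mRequestQueue.cancelAll(TAG);
        }
    }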

2. Analyzing CacheDispatcher and NetworkDispatcher

As noted above, these two classes matter a great deal, so let's analyze them.
Both are created in RequestQueue.start(); here are their creation sites and constructors:

  // creating the CacheDispatcher
CacheDispatcher mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
  // creating the NetworkDispatcher
NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,mCache, mDelivery);
  // CacheDispatcher's constructor
   public CacheDispatcher(
            BlockingQueue<Request<?>> cacheQueue, BlockingQueue<Request<?>> networkQueue,
            Cache cache, ResponseDelivery delivery) {
        mCacheQueue = cacheQueue;
        mNetworkQueue = networkQueue;
        mCache = cache;
        mDelivery = delivery;
    }
 // NetworkDispatcher's constructor
  public NetworkDispatcher(BlockingQueue<Request<?>> queue,
            Network network, Cache cache,
            ResponseDelivery delivery) {
        mQueue = queue;
        mNetwork = network;
        mCache = cache;
        mDelivery = delivery;
    }

mCache, mNetwork and mDelivery were covered above when we discussed the RequestQueue constructors, so here let's look at mCacheQueue and mNetworkQueue. As the names suggest, they are queues, and both can be found in the RequestQueue source:

      /** The cache triage queue. */
    private final PriorityBlockingQueue<Request<?>> mCacheQueue = new PriorityBlockingQueue<Request<?>>();

    /** The queue of requests that are actually going out to the network. */
    private final PriorityBlockingQueue<Request<?>> mNetworkQueue =new PriorityBlockingQueue<Request<?>>();   

Both mCacheQueue and mNetworkQueue are PriorityBlockingQueues, a queue with priority ordering. Let's look at the priority-related code in the Request class:

    public enum Priority {
        LOW,
        NORMAL,
        HIGH,
        IMMEDIATE
    }
   public Priority getPriority() {
        return Priority.NORMAL;
    }
    @Override
    public int compareTo(Request<T> other) {
        Priority left = this.getPriority();
        Priority right = other.getPriority();

        // High-priority requests are "lesser" so they are sorted to the front.
        // Equal priorities are sorted by sequence number to provide FIFO ordering.
        return left == right ?
                this.mSequence - other.mSequence :
                right.ordinal() - left.ordinal();
    }

From this code we learn:
1) Request has four priority levels, from low to high: LOW, NORMAL, HIGH, IMMEDIATE.
2) The default priority is NORMAL. Request is abstract, so a subclass can override getPriority() and return its own level; for example, ImageRequest uses LOW and ClearCacheRequest uses IMMEDIATE (a small sketch follows this list).
3) A PriorityBlockingQueue orders its elements by their compareTo() result. As the code above shows, requests of equal priority keep FIFO order by sequence number, while a higher-priority request sorts ahead of a lower-priority one.
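As a hedged illustration of point 2, here is a hypothetical subclass that bumps its own priority, modeled on StringRequest; nothing like it ships with Volley.

    import com.android.volley.Response;
    import com.android.volley.toolbox.StringRequest;

    // Hypothetical example: a StringRequest that jumps ahead of NORMAL-priority requests.
    public class ImportantStringRequest extends StringRequest {

        public ImportantStringRequest(String url,
                                      Response.Listener<String> listener,
                                      Response.ErrorListener errorListener) {
            super(Method.GET, url, listener, errorListener);
        }

        @Override
        public Priority getPriority() {
            // compareTo() above will sort this ahead of any NORMAL request waiting in the queue.
            return Priority.HIGH;
        }
    }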

Since CacheDispatcher and NetworkDispatcher are both threads, what we really care about is their run() methods.
Let's start with CacheDispatcher.run():

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        // Set the thread priority to background: with several threads running concurrently, these less critical threads get less CPU time, which helps the main thread.
        // Make a blocking call to initialize the cache.
        mCache.initialize(); // initialize the DiskBasedCache

        while (true) { // loop forever
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take(); // blocks the current thread if the PriorityBlockingQueue is empty
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) { // the request was cancelled, so skip to the next iteration
                    request.finish("cache-discard-canceled"); // finishes the request, calling back into RequestQueue.finish()
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey()); // look it up in the cache
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request); // cache miss: hand the request over to mNetworkQueue
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request); // cache hit but expired: still send the request to mNetworkQueue
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response); // the cache entry needs no refresh: deliver the response directly
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
               // the entry needs a refresh: deliver the cached response and, at the same time, put the request on mNetworkQueue to refresh it
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }

The points worth noting are annotated inline in the code above.

Next, NetworkDispatcher.run():

    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        // Set the thread priority to background: with several threads running concurrently, these less critical threads get less CPU time, which helps the main thread.
        while (true) { // loop forever
            long startTimeMs = SystemClock.elapsedRealtime(); // record when this network request starts
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take(); // blocks the current thread if the PriorityBlockingQueue is empty
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) { // the request was cancelled, so skip to the next iteration
                    request.finish("network-discard-cancelled"); // finishes the request, calling back into RequestQueue.finish()
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request); // actually performs the request and returns the raw result
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                // the server returned 304 and a response was already delivered, so don't deliver a second identical one; this is an optimization
                    request.finish("not-modified"); // finishes the request, calling back into RequestQueue.finish()
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse); // convert the raw result into a typed Response
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) { // if the request should be cached and there is an entry, write it to the local cache
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response); // deliver the response
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

Again, see the inline comments above; there isn't much more to add.
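To see how parseNetworkResponse() and the delivery fit together from a request's point of view, here is a hedged sketch of a custom Request subclass, essentially a stripped-down StringRequest; the class name and behavior are illustrative only.

    import java.io.UnsupportedEncodingException;

    import com.android.volley.NetworkResponse;
    import com.android.volley.Request;
    import com.android.volley.Response;
    import com.android.volley.toolbox.HttpHeaderParser;

    public class PlainTextRequest extends Request<String> {

        private final Response.Listener<String> mListener;

        public PlainTextRequest(String url, Response.Listener<String> listener,
                                Response.ErrorListener errorListener) {
            super(Method.GET, url, errorListener);
            mListener = listener;
        }

        @Override
        protected Response<String> parseNetworkResponse(NetworkResponse response) {
            // Runs on the NetworkDispatcher thread, right after performRequest() returns.
            String parsed;
            try {
                parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
            } catch (UnsupportedEncodingException e) {
                parsed = new String(response.data);
            }
            return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
        }

        @Override
        protected void deliverResponse(String response) {
            // Runs on the main thread, posted there by ExecutorDelivery.
            mListener.onResponse(response);
        }
    }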

3. Sending a request

With the important classes covered, let's look at sending a request. As we all know, sending a request with Volley is as simple as mRequestQueue.add(request); here is its source:

  public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this); // bind the request to this RequestQueue
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber()); // assign a sequence number, via AtomicInteger.incrementAndGet()
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) { // if this request skips the cache (caching is on by default), go straight to the network queue
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey(); // by default the URL serves as the cache key
            if (mWaitingRequests.containsKey(cacheKey)) { // a request with the same cache key is already in flight
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request); // no identical request is in flight, so add this one to mCacheQueue
            }
            return request;
        }
    }

The inline comments explain most of it. One point worth highlighting: when no identical request is in flight, the request goes onto mCacheQueue, not mNetworkQueue, which is exactly the sensible choice, since the cache gets the first chance to answer.
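As a quick, hedged usage sketch of the two branches above: the request below is tagged and told to skip the cache, so add() drops it straight onto mNetworkQueue. The URL, tag, and listeners are placeholders.

    StringRequest request = new StringRequest(Request.Method.GET, "https://example.com/api",
            new Response.Listener<String>() {
                @Override
                public void onResponse(String response) {
                    // Delivered on the main thread by ExecutorDelivery.
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    // Errors are delivered on the main thread as well.
                }
            });
    request.setShouldCache(false); // add() will skip mCacheQueue and go straight to mNetworkQueue
    request.setTag("example-tag"); // lets us call mRequestQueue.cancelAll("example-tag") later
    mRequestQueue.add(request);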

And that wraps up this tour of the main Volley source; when I find some time I'll add a flow diagram based on the code.
