Basic Glide usage:
Glide.with(Context).load(ImgResource).into(ImageView);
The class involved in disk caching: DiskCacheStrategy
public abstract class DiskCacheStrategy {
// Caches remote data with both Data and Resource, and local data with Resource only.
DiskCacheStrategy ALL;
// Saves no data to cache.
DiskCacheStrategy NONE;
// Writes retrieved data directly to the disk cache before it's decoded.
DiskCacheStrategy DATA;
// Writes resources to disk after they've been decoded.
DiskCacheStrategy RESOURCE;
// Tries to intelligently choose a strategy based on the data source of the
// DataFetcher and the EncodeStrategy of the ResourceEncoder (if one is available).
DiskCacheStrategy AUTOMATIC;
}
- DiskCacheStrategy.ALL: caches both the original image and the transformed image. For remote images, both DATA and RESOURCE are cached; for local images, only RESOURCE.
- DiskCacheStrategy.NONE: caches nothing.
- DiskCacheStrategy.DATA: caches only the unprocessed data. My understanding: the stream we receive cannot be displayed as-is; it has to go through decoding, compression, transformation, and so on to produce the final displayable image.
- DiskCacheStrategy.RESOURCE: caches only the transformed image (i.e. the image after decoding, conversion, and cropping).
- DiskCacheStrategy.AUTOMATIC: tries to use the optimal strategy for local and remote images. When you load remote data (e.g. downloaded from a URL), AUTOMATIC stores only the raw data (DATA), untouched by your load process (e.g. transformations or crops), because downloading remote data is far more expensive than adjusting data already on disk. For local data, AUTOMATIC stores only the transformed thumbnail (RESOURCE), because retrieving the original data again is cheap even if you later need another size or type.
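The AUTOMATIC rule above can be modeled as a minimal runnable sketch. The class, enum, and the isRemote flag are illustrative names, not Glide's code; the flag stands in for Glide's DataSource check:

```java
// Sketch of DiskCacheStrategy.AUTOMATIC's rule: remote data is expensive to
// fetch again, so cache the raw DATA; local data is cheap to re-read, so
// cache only the transformed RESOURCE.
public class AutomaticStrategySketch {
    enum CacheFormat { DATA, RESOURCE }

    // isRemote stands in for Glide's DataSource check (remote URL vs local file).
    static CacheFormat formatToCache(boolean isRemote) {
        return isRemote ? CacheFormat.DATA : CacheFormat.RESOURCE;
    }

    public static void main(String[] args) {
        System.out.println(formatToCache(true));   // a URL download -> DATA
        System.out.println(formatToCache(false));  // a local file   -> RESOURCE
    }
}
```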
1. The flow inside into is extremely complex and uses many interface callbacks; when reading the source it is easy to lose the thread.
2. The analysis below is split into the following modules:
Part 1: preparation before Glide.into loads the resource.
Part 2: fetching data from disk.
Part 3: fetching data from the network.
(1) After the data is fetched, it is first stored in DATA format.
(2) The data is then read back from disk and decoded, downsampled, etc. to produce a Resource object.
(3) That Resource is then disk-cached, i.e. stored in RESOURCE format.
- Each module's method call chain is long, so the chains are listed here first and will be listed again in each module's own section.
3.1 Module 1 method call chain:
RequestBuilder.into
RequestManager.track
RequestTracker.runRequest
SingleRequest.begin
SingleRequest.onSizeReady
Engine.load
3.2 Module 2 method call chain:
EngineJob.start
DecodeJob.willDecodeFromCache
(1) GlideExecutor::diskCacheExecutor.execute;
(2) GlideExecutor::sourceExecutor.execute;
DecodeJob.run
DecodeJob.runWrapped
DecodeJob.runGenerators
3.3 Module 3 method call chain:
SourceGenerator.startNext
HttpUrlFetcher.loadData
SourceGenerator.onDataReady
DecodeJob.reschedule
DecodeJob.run
DecodeJob.runWrapped
DecodeJob.runGenerators
SourceGenerator.startNext
SourceGenerator.cacheData
DataCacheGenerator.startNext
DataCacheGenerator.onDataReady
DecodeJob.onDataFetcherReady
DecodeJob.decodeFromRetrievedData
DecodeJob.notifyEncodeAndRelease
DeferredEncodeManager.encode
- The heavy use of interface callbacks for method invocation is one reason it's easy to lose the thread while reading the code, so first list what each module's variables actually point to.
4.1 Module 1 variable referents
Module 1 variable | Actual referent |
---|---|
Engine.load(ResourceCallback cb) | SingleRequest |
4.2 Module 2 variable referents
Module 2 variable | Actual referent |
---|---|
EngineJob.diskCacheExecutor | GlideExecutor.newDiskCacheExecutor() |
EngineJob.sourceExecutor | GlideExecutor.newSourceExecutor() |
4.3 Module 3 variable referents
Module 3 variable | Actual referent |
---|---|
SourceGenerator.cb | DecodeJob |
SourceGenerator.loadData.fetcher | HttpUrlFetcher |
HttpUrlFetcher.callback | SourceGenerator |
DecodeJob.callback | EngineJob |
DataCacheGenerator.cb | DecodeJob |
Part 1: preparation before Glide.into loads the resource
Method call chain
RequestBuilder.into
RequestManager.track
RequestTracker.runRequest
SingleRequest.begin
SingleRequest.onSizeReady
Engine.load
1.1 RequestBuilder.into
@NonNull
public ViewTarget<ImageView, TranscodeType> into(@NonNull ImageView view) {
...
BaseRequestOptions<?> requestOptions = this;
if (!requestOptions.isTransformationSet()
&& requestOptions.isTransformationAllowed()
&& view.getScaleType() != null) {
switch (view.getScaleType()) {
case CENTER_CROP:
requestOptions = requestOptions.clone().optionalCenterCrop();
break;
case CENTER_INSIDE:
requestOptions = requestOptions.clone().optionalCenterInside();
break;
case FIT_CENTER:
case FIT_START:
case FIT_END:
requestOptions = requestOptions.clone().optionalFitCenter();
break;
case FIT_XY:
requestOptions = requestOptions.clone().optionalCenterInside();
break;
case CENTER:
case MATRIX:
default:
// Do nothing.
}
}
return into(
glideContext.buildImageViewTarget(view, transcodeClass),
/*targetListener=*/ null,
requestOptions,
Executors.mainThreadExecutor());
}
Some intermediate steps are skipped here; let's go straight to the SingleRequest flow.
1.2 Obtaining the SingleRequest
private Request obtainRequest(
Target<TranscodeType> target,
RequestListener<TranscodeType> targetListener,
BaseRequestOptions<?> requestOptions,
RequestCoordinator requestCoordinator,
TransitionOptions<?, ? super TranscodeType> transitionOptions,
Priority priority,
int overrideWidth,
int overrideHeight,
Executor callbackExecutor) {
return SingleRequest.obtain(
context,
glideContext,
model,
transcodeClass,
requestOptions,
overrideWidth,
overrideHeight,
priority,
target,
targetListener,
requestListeners,
requestCoordinator,
glideContext.getEngine(),
transitionOptions.getTransitionFactory(),
callbackExecutor);
}
1.3 Engine.load: the starting point of resource loading
Glide relies heavily on callbacks, so first pin down what the interface refers to:
ResourceCallback cb = SingleRequest;
public synchronized <R> LoadStatus load(
GlideContext glideContext,
Object model,
Key signature,
int width,
int height,
Class<?> resourceClass,
Class<R> transcodeClass,
Priority priority,
DiskCacheStrategy diskCacheStrategy,
Map<Class<?>, Transformation<?>> transformations,
boolean isTransformationRequired,
boolean isScaleOnlyOrNoTransform,
Options options,
boolean isMemoryCacheable,
boolean useUnlimitedSourceExecutorPool,
boolean useAnimationPool,
boolean onlyRetrieveFromCache,
ResourceCallback cb,
Executor callbackExecutor) {
long startTime = VERBOSE_IS_LOGGABLE ? LogTime.getLogTime() : 0;
// 1. Build the cache key.
EngineKey key = keyFactory.buildKey(model, signature, width, height, transformations,
resourceClass, transcodeClass, options);
// 2. Look up the in-memory cache; whether it is the soft-reference cache or the strong cache is not analyzed here.
EngineResource<?> active = loadFromActiveResources(key, isMemoryCacheable);
if (active != null) {
// If a cached resource exists, use it.
cb.onResourceReady(active, DataSource.MEMORY_CACHE);
return null;
}
// 3. Look up the cache again, this time the memory (LRU) cache.
EngineResource<?> cached = loadFromCache(key, isMemoryCacheable);
if (cached != null) {
cb.onResourceReady(cached, DataSource.MEMORY_CACHE);
return null;
}
// Reaching this point means memory holds no cached data for this request.
EngineJob<?> current = jobs.get(key, onlyRetrieveFromCache);
if (current != null) {
current.addCallback(cb, callbackExecutor);
return new LoadStatus(cb, current);
}
EngineJob<R> engineJob =
engineJobFactory.build(
key,
isMemoryCacheable,
useUnlimitedSourceExecutorPool,
useAnimationPool,
onlyRetrieveFromCache);
DecodeJob<R> decodeJob =
decodeJobFactory.build(
glideContext,
model,
key,
signature,
width,
height,
resourceClass,
transcodeClass,
priority,
diskCacheStrategy,
transformations,
isTransformationRequired,
isScaleOnlyOrNoTransform,
onlyRetrieveFromCache,
options,
engineJob);
jobs.put(key, engineJob);
engineJob.addCallback(cb, callbackExecutor);
// From here a thread is started to load the resource: from disk or from the network.
engineJob.start(decodeJob);
// cb points to SingleRequest.
return new LoadStatus(cb, engineJob);
}
Part 2: fetching data from disk or from the network
Method call chain:
EngineJob.start
DecodeJob.willDecodeFromCache
(1) GlideExecutor::diskCacheExecutor.execute;
(2) GlideExecutor::sourceExecutor.execute;
DecodeJob.run
DecodeJob.runWrapped
DecodeJob.runGenerators
2.1 EngineJob.start
// diskCacheExecutor = GlideExecutor.newDiskCacheExecutor(); initialized in GlideBuilder.
// sourceExecutor = GlideExecutor.newSourceExecutor(); initialized in GlideBuilder.
public synchronized void start(DecodeJob<R> decodeJob) {
this.decodeJob = decodeJob;
// Assuming the strategy is DiskCacheStrategy.ALL, will the cached data fetched be the source data or the transformed data?
GlideExecutor executor = decodeJob.willDecodeFromCache()
? diskCacheExecutor
: getActiveSourceExecutor();
// Ultimately triggers the execution of decodeJob.run.
executor.execute(decodeJob);
}
2.2 DecodeJob.willDecodeFromCache: should cached data be fetched from disk?
boolean willDecodeFromCache() {
// Determine the disk cache stage; this depends mainly on the DiskCacheStrategy passed when using Glide.
Stage firstStage = getNextStage(Stage.INITIALIZE);
// RESOURCE_CACHE: cache of data modified from the source (e.g. scaled).
// DATA_CACHE: cache of the source data.
return firstStage == Stage.RESOURCE_CACHE || firstStage == Stage.DATA_CACHE;
}
Disk caching covers two cases: caching the source data, and caching the data produced by transforming the source.
2.2.1 DecodeJob.getNextStage
private Stage getNextStage(Stage current) {
switch (current) {
case INITIALIZE:
// This depends on the DiskCacheStrategy:
return diskCacheStrategy.decodeCachedResource()
? Stage.RESOURCE_CACHE : getNextStage(Stage.RESOURCE_CACHE);
case RESOURCE_CACHE:
return diskCacheStrategy.decodeCachedData()
? Stage.DATA_CACHE : getNextStage(Stage.DATA_CACHE);
case DATA_CACHE:
// Skip loading from source if the user opted to only retrieve the resource from cache.
return onlyRetrieveFromCache ? Stage.FINISHED : Stage.SOURCE;
case SOURCE:
case FINISHED:
return Stage.FINISHED;
default:
throw new IllegalArgumentException("Unrecognized stage: " + current);
}
}
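Assuming DiskCacheStrategy.ALL (both decodeCachedResource() and decodeCachedData() return true) and onlyRetrieveFromCache == false, the stage progression above collapses to a fixed order. A runnable sketch (StageSketch is an illustrative name, not Glide's class):

```java
// Models DecodeJob.getNextStage under DiskCacheStrategy.ALL with
// onlyRetrieveFromCache == false. The resulting order is:
// RESOURCE_CACHE -> DATA_CACHE -> SOURCE -> FINISHED.
public class StageSketch {
    enum Stage { INITIALIZE, RESOURCE_CACHE, DATA_CACHE, SOURCE, FINISHED }

    static Stage getNextStage(Stage current) {
        switch (current) {
            case INITIALIZE:     return Stage.RESOURCE_CACHE; // decodeCachedResource() == true
            case RESOURCE_CACHE: return Stage.DATA_CACHE;     // decodeCachedData() == true
            case DATA_CACHE:     return Stage.SOURCE;         // onlyRetrieveFromCache == false
            case SOURCE:
            case FINISHED:       return Stage.FINISHED;
            default: throw new IllegalArgumentException("Unrecognized stage: " + current);
        }
    }

    public static void main(String[] args) {
        Stage s = Stage.INITIALIZE;
        while (s != Stage.FINISHED) {
            s = getNextStage(s);
            System.out.println(s);
        }
    }
}
```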
DiskCacheStrategy | Cached content |
---|---|
ALL | DATA and RESOURCE |
RESOURCE | RESOURCE |
DATA | DATA |
NONE | NONE |
2.3 DecodeJob.run
@SuppressWarnings("PMD.AvoidRethrowingException")
@Override
public void run() {
// This should be much more fine grained, but since Java's thread pool implementation silently
// swallows all otherwise fatal exceptions, this will at least make it obvious to developers
// that something is failing.
GlideTrace.beginSectionFormat("DecodeJob#run(model=%s)", model);
// Methods in the try statement can invalidate currentFetcher, so set a local variable here to
// ensure that the fetcher is cleaned up either way.
DataFetcher<?> localFetcher = currentFetcher;
try {
// Data loading now happens on a worker thread, but whether it loads from disk or from the network is still undetermined.
// And if from disk, whether it reads the DATA cache or the RESOURCE cache is also unknown.
runWrapped();
} catch (CallbackException e) {
...
} finally {
// Keeping track of the fetcher here and calling cleanup is excessively paranoid, we call
// close in all cases anyway.
if (localFetcher != null) {
localFetcher.cleanup();
}
GlideTrace.endSection();
}
}
Two questions arise at this point:
- 1. Is the data loaded from disk or fetched from the network?
- 2. If loaded from disk, is it the DATA data or the RESOURCE data?
2.4 DecodeJob.runWrapped
private void runWrapped() {
switch (runReason) {
case INITIALIZE:
stage = getNextStage(Stage.INITIALIZE);
currentGenerator = getNextGenerator();
runGenerators();
break;
case SWITCH_TO_SOURCE_SERVICE:
runGenerators();
break;
case DECODE_DATA:
decodeFromRetrievedData();
break;
default:
throw new IllegalStateException("Unrecognized run reason: " + runReason);
}
}
With DiskCacheStrategy.ALL as the cache strategy, will it first try to fetch RESOURCE-format data from disk?
2.4.1 DecodeJob.getNextGenerator
private DataFetcherGenerator getNextGenerator() {
switch (stage) {
case RESOURCE_CACHE:
// Loads RESOURCE-format data from the disk cache.
return new ResourceCacheGenerator(decodeHelper, this);
case DATA_CACHE:
// Loads DATA-format data from the disk cache.
return new DataCacheGenerator(decodeHelper, this);
case SOURCE:
// Fetches data from the network.
return new SourceGenerator(decodeHelper, this);
case FINISHED:
return null;
default:
throw new IllegalStateException("Unrecognized stage: " + stage);
}
}
Each loading path corresponds to its own DataFetcherGenerator implementation.
2.5 DecodeJob.runGenerators
private void runGenerators() {
currentThread = Thread.currentThread();
startFetchTime = LogTime.getLogTime();
boolean isStarted = false;
// 1. This while loop calls DataFetcherGenerator.startNext on each generator in turn.
// 2. If the current strategy yields no data, loading is handed to the next strategy.
// 3. This continues until stage == Stage.SOURCE, at which point data is fetched from the network.
while (!isCancelled && currentGenerator != null
&& !(isStarted = currentGenerator.startNext())) {
stage = getNextStage(stage);
currentGenerator = getNextGenerator();
if (stage == Stage.SOURCE) {
reschedule();
return;
}
}
// We've run out of stages and generators, give up.
if ((stage == Stage.FINISHED || isCancelled) && !isStarted) {
notifyFailed();
}
// Otherwise a generator started a new load and we expect to be called back in
// onDataFetcherReady.
}
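The while loop above can be modeled as a chain of generators tried in order, where each startNext() either kicks off a load or hands over to the next stage. A toy sketch (class and method names are illustrative, not Glide's):

```java
import java.util.List;

// Models the shape of DecodeJob.runGenerators: try each generator in order
// until one starts a load; if all decline, the load has failed.
public class GeneratorChainSketch {
    interface Generator { boolean startNext(); }

    static String runGenerators(List<Generator> pipeline) {
        for (Generator g : pipeline) {
            if (g.startNext()) {
                return "started"; // a generator found data; wait for its callback
            }
        }
        return "failed"; // out of stages and generators -> notifyFailed()
    }

    public static void main(String[] args) {
        String result = runGenerators(List.of(
                () -> false,  // ResourceCacheGenerator: no RESOURCE cache hit
                () -> false,  // DataCacheGenerator: no DATA cache hit
                () -> true)); // SourceGenerator: fetch from the network
        System.out.println(result);
    }
}
```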
Summary:
Analyzing the while loop in runGenerators requires combining the disk cache strategy with startNext, getNextStage, and getNextGenerator. The flow:
- 1. With DiskCacheStrategy.ALL, first try to fetch RESOURCE-format data from disk.
- 2. If no RESOURCE-format data is cached, try to fetch DATA-format data.
- 3. If no DATA-format data is found, fetch the data from the network.
Given this flow, two questions about disk caching remain:
- 1. If the RESOURCE-format data has been deleted, will the data loaded from DATA be cached as RESOURCE again?
- 2. After data is fetched from the network, when are DATA and RESOURCE cached?
Part 3: fetching data from the network
Method call chain:
SourceGenerator.startNext
Overview of relevant variables
Variable | Actual referent |
---|---|
SourceGenerator.cb | DecodeJob |
SourceGenerator.loadData.fetcher | HttpUrlFetcher |
HttpUrlFetcher.callback | SourceGenerator |
DecodeJob.callback | EngineJob |
DataCacheGenerator.cb | DecodeJob |
3.1 SourceGenerator.startNext
@Override
public boolean startNext() {
if (dataToCache != null) {
Object data = dataToCache;
dataToCache = null;
cacheData(data);
}
if (sourceCacheGenerator != null && sourceCacheGenerator.startNext()) {
return true;
}
sourceCacheGenerator = null;
loadData = null;
boolean started = false;
while (!started && hasNextModelLoader()) {
loadData = helper.getLoadData().get(loadDataListIndex++);
if (loadData != null
&& (helper.getDiskCacheStrategy().isDataCacheable(loadData.fetcher.getDataSource())
|| helper.hasLoadPath(loadData.fetcher.getDataClass()))) {
started = true;
// A long run of intermediate methods is skipped; the end result is loadData.fetcher = HttpUrlFetcher;
loadData.fetcher.loadData(helper.getPriority(), this);
}
}
return started;
}
3.2 HttpUrlFetcher.loadData
@Override
public void loadData(Priority priority, DataCallback<? super InputStream> callback) {
long startTime = LogTime.getLogTime();
try {
// loadDataWithRedirects fetches the raw stream data from the server.
InputStream result = loadDataWithRedirects(glideUrl.toURL(), 0, null, glideUrl.getHeaders());
// callback = SourceGenerator;
callback.onDataReady(result);
} catch (IOException e) {
if (Log.isLoggable(TAG, Log.DEBUG)) {
Log.d(TAG, "Failed to load data for url", e);
}
callback.onLoadFailed(e);
} finally {
if (Log.isLoggable(TAG, Log.VERBOSE)) {
Log.v(TAG, "Finished http url fetcher fetch in " + LogTime.getElapsedMillis(startTime));
}
}
}
3.3 SourceGenerator.onDataReady: the raw stream is passed in
What arrives here is the completely unprocessed raw stream: not yet decoded, not yet scaled, and so on.
@Override
public void onDataReady(Object data) {
DiskCacheStrategy diskCacheStrategy = helper.getDiskCacheStrategy();
// Whether data can be written to the DATA cache depends on the data source and the disk cache strategy.
if (data != null && diskCacheStrategy.isDataCacheable(loadData.fetcher.getDataSource())) {
// Assign data to dataToCache; it will be used later when writing the disk cache.
dataToCache = data;
// cb = DecodeJob.
cb.reschedule();
} else {
cb.onDataFetcherReady(loadData.sourceKey, data, loadData.fetcher,
loadData.fetcher.getDataSource(), originalKey);
}
}
Summary:
Whether data can be written to the disk cache depends on the data source and the cache strategy; assume DiskCacheStrategy.ALL here. The precondition for caching data is that it did not come from the data disk cache or from the memory cache.
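The branch in onDataReady can be sketched as follows. isDataCacheable here mirrors DiskCacheStrategy.ALL, which caches raw data only when it came from a remote source; the trimmed-down DataSource enum and method names are illustrative, not Glide's actual signatures:

```java
// Models the decision in SourceGenerator.onDataReady: remote data is stashed
// and rescheduled so it gets written to the DATA disk cache first; data that
// already came from a cache is handed straight to DecodeJob for decoding.
public class OnDataReadySketch {
    enum DataSource { REMOTE, DATA_DISK_CACHE, MEMORY_CACHE }

    // Mirrors DiskCacheStrategy.ALL.isDataCacheable(dataSource).
    static boolean isDataCacheable(DataSource source) {
        return source == DataSource.REMOTE;
    }

    static String onDataReady(Object data, DataSource source) {
        if (data != null && isDataCacheable(source)) {
            return "reschedule";       // stash in dataToCache, then cb.reschedule()
        }
        return "onDataFetcherReady";   // decode the data directly
    }

    public static void main(String[] args) {
        System.out.println(onDataReady("bytes", DataSource.REMOTE));
        System.out.println(onDataReady("bytes", DataSource.DATA_DISK_CACHE));
    }
}
```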
Method call chain
DecodeJob.reschedule
DecodeJob.run
DecodeJob.runWrapped
DecodeJob.runGenerators
SourceGenerator.startNext
3.4 DecodeJob.reschedule
@Override
public void reschedule() {
// Note this: it is used later in runWrapped().
runReason = RunReason.SWITCH_TO_SOURCE_SERVICE;
// callback = EngineJob;
callback.reschedule(this);
}
3.5 DecodeJob.runWrapped
private void runWrapped() {
switch (runReason) {
case INITIALIZE:
stage = getNextStage(Stage.INITIALIZE);
currentGenerator = getNextGenerator();
runGenerators();
break;
// DecodeJob.reschedule already set runReason to SWITCH_TO_SOURCE_SERVICE.
case SWITCH_TO_SOURCE_SERVICE:
runGenerators();
break;
case DECODE_DATA:
decodeFromRetrievedData();
break;
default:
throw new IllegalStateException("Unrecognized run reason: " + runReason);
}
}
3.6 SourceGenerator.startNext
@Override
public boolean startNext() {
// At this point dataToCache already holds a value.
if (dataToCache != null) {
Object data = dataToCache;
dataToCache = null;
cacheData(data);
}
// sourceCacheGenerator points to the DataCacheGenerator created at the end of cacheData(...).
if (sourceCacheGenerator != null && sourceCacheGenerator.startNext()) {
return true;
}
sourceCacheGenerator = null;
loadData = null;
boolean started = false;
while (!started && hasNextModelLoader()) {
loadData = helper.getLoadData().get(loadDataListIndex++);
if (loadData != null
&& (helper.getDiskCacheStrategy().isDataCacheable(loadData.fetcher.getDataSource())
|| helper.hasLoadPath(loadData.fetcher.getDataClass()))) {
started = true;
loadData.fetcher.loadData(helper.getPriority(), this);
}
}
return started;
}
Summary:
Execution reaches here again once the network data has been fetched and assigned to dataToCache, which in turn triggers cacheData to write that data to the cache.
3.6.1 SourceGenerator.cacheData
private void cacheData(Object dataToCache) {
long startTime = LogTime.getLogTime();
try {
// Get the encoder matching dataToCache.
Encoder<Object> encoder = helper.getSourceEncoder(dataToCache);
DataCacheWriter<Object> writer =
new DataCacheWriter<>(encoder, dataToCache, helper.getOptions());
originalKey = new DataCacheKey(loadData.sourceKey, helper.getSignature());
helper.getDiskCache().put(originalKey, writer);
if (Log.isLoggable(TAG, Log.VERBOSE)) {
Log.v(TAG, "Finished encoding source to cache"
+ ", key: " + originalKey
+ ", data: " + dataToCache
+ ", encoder: " + encoder
+ ", duration: " + LogTime.getElapsedMillis(startTime));
}
} finally {
loadData.fetcher.cleanup();
}
// Creating the DataCacheGenerator here ties back to SourceGenerator.startNext.
sourceCacheGenerator =
new DataCacheGenerator(Collections.singletonList(loadData.sourceKey), helper, this);
}
Summary:
At this point, under DiskCacheStrategy.ALL, fetching data from the network and storing the original (DATA-format) data is complete. The flow:
- 1. Before the request, first try to fetch cached RESOURCE-format data.
- 2. If there is no RESOURCE-format data, try the cached DATA-format data.
- 3. If none is found, fetch the data from the network.
- 4. After fetching from the network, write the DATA-format disk cache.
- 5. Once the DATA-format cache is written, create a DataCacheGenerator; the data is then read back from disk for decoding.
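Steps 4 and 5 can be modeled as a toy cache round-trip: write the raw bytes under the source key, then read the same entry back the way DataCacheGenerator does. The Map stands in for Glide's DiskCache and all names here are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of SourceGenerator.cacheData followed by DataCacheGenerator:
// the raw network bytes are written to the "disk cache" and then re-read
// so the normal disk-decode path can handle them.
public class CacheDataSketch {
    // Stands in for helper.getDiskCache(); real keys are DataCacheKey objects.
    static final Map<String, byte[]> diskCache = new HashMap<>();

    // Like SourceGenerator.cacheData: store the fetched bytes under the source key.
    static void cacheData(String sourceKey, byte[] raw) {
        diskCache.put(sourceKey, raw);
    }

    // Like DataCacheGenerator.startNext: read the just-written DATA entry back.
    static byte[] startNext(String sourceKey) {
        return diskCache.get(sourceKey);
    }

    public static void main(String[] args) {
        cacheData("https://example.com/a.png", new byte[] {1, 2, 3});
        System.out.println(startNext("https://example.com/a.png").length);
    }
}
```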
3.6.2 DataCacheGenerator.startNext
@Override
public boolean startNext() {
while (modelLoaders == null || !hasNextModelLoader()) {
sourceIdIndex++;
if (sourceIdIndex >= cacheKeys.size()) {
return false;
}
Key sourceId = cacheKeys.get(sourceIdIndex);
// PMD.AvoidInstantiatingObjectsInLoops The loop iterates a limited number of times
// and the actions it performs are much more expensive than a single allocation.
@SuppressWarnings("PMD.AvoidInstantiatingObjectsInLoops")
Key originalKey = new DataCacheKey(sourceId, helper.getSignature());
cacheFile = helper.getDiskCache().get(originalKey);
if (cacheFile != null) {
this.sourceKey = sourceId;
modelLoaders = helper.getModelLoaders(cacheFile);
modelLoaderIndex = 0;
}
}
loadData = null;
boolean started = false;
while (!started && hasNextModelLoader()) {
ModelLoader<File, ?> modelLoader = modelLoaders.get(modelLoaderIndex++);
loadData =
modelLoader.buildLoadData(cacheFile, helper.getWidth(), helper.getHeight(),
helper.getOptions());
if (loadData != null && helper.hasLoadPath(loadData.fetcher.getDataClass())) {
started = true;
// The intermediate hops are omitted; just remember that this is passed in as the callback,
// which later invokes this.onDataReady.
loadData.fetcher.loadData(helper.getPriority(), this);
}
}
return started;
}
Next, we analyze when, under DiskCacheStrategy.ALL, the RESOURCE-format data gets cached.
3.6.2.1 DataCacheGenerator.onDataReady
@Override
public void onDataReady(Object data) {
// cb = DecodeJob.
cb.onDataFetcherReady(sourceKey, data, loadData.fetcher, DataSource.DATA_DISK_CACHE, sourceKey);
}
3.6.2.2 DecodeJob.onDataFetcherReady
@Override
public void onDataFetcherReady(Key sourceKey, Object data, DataFetcher<?> fetcher,
DataSource dataSource, Key attemptedKey) {
this.currentSourceKey = sourceKey;
// The DATA-format data fetched from disk.
this.currentData = data;
this.currentFetcher = fetcher;
// The data source: the DATA-format disk cache.
this.currentDataSource = dataSource;
this.currentAttemptingKey = attemptedKey;
if (Thread.currentThread() != currentThread) {
runReason = RunReason.DECODE_DATA;
callback.reschedule(this);
} else {
GlideTrace.beginSection("DecodeJob.decodeFromRetrievedData");
try {
decodeFromRetrievedData();
} finally {
GlideTrace.endSection();
}
}
}
Summary:
After the DATA-format data is fetched from disk (per the caching logic above, DATA format is the original data), it is decoded, downsampled, and otherwise transformed, and the result is finally displayed in the View.
Method call chain:
DecodeJob.decodeFromRetrievedData
DecodeJob.notifyEncodeAndRelease
DecodeJob.notifyComplete
DeferredEncodeManager.hasResourceToEncode
DeferredEncodeManager.encode
3.6.2.3 DecodeJob.decodeFromRetrievedData
private void decodeFromRetrievedData() {
if (Log.isLoggable(TAG, Log.VERBOSE)) {
logWithTimeAndKey("Retrieved data", startFetchTime,
"data: " + currentData
+ ", cache key: " + currentSourceKey
+ ", fetcher: " + currentFetcher);
}
Resource<R> resource = null;
try {
resource = decodeFromData(currentFetcher, currentData, currentDataSource);
} catch (GlideException e) {
e.setLoggingDetails(currentAttemptingKey, currentDataSource);
throwables.add(e);
}
if (resource != null) {
notifyEncodeAndRelease(resource, currentDataSource);
} else {
runGenerators();
}
}
3.6.2.4 DecodeJob.notifyEncodeAndRelease
private void notifyEncodeAndRelease(Resource<R> resource, DataSource dataSource) {
if (resource instanceof Initializable) {
((Initializable) resource).initialize();
}
Resource<R> result = resource;
LockedResource<R> lockedResource = null;
if (deferredEncodeManager.hasResourceToEncode()) {
lockedResource = LockedResource.obtain(resource);
result = lockedResource;
}
// The Resource result is delivered here via notifyComplete.
notifyComplete(result, dataSource);
stage = Stage.ENCODE;
try {
if (deferredEncodeManager.hasResourceToEncode()) {
// This completes the RESOURCE-format cache write.
deferredEncodeManager.encode(diskCacheProvider, options);
}
} finally {
if (lockedResource != null) {
lockedResource.unlock();
}
}
onEncodeComplete();
}
Summary:
notifyEncodeAndRelease accomplishes two things:
- 1. notifyComplete completes the display of the Resource (the details are not analyzed this time).
- 2. deferredEncodeManager.encode stores the Resource data to the local disk. Note that the Resource at this point is the decoded and transformed data, so what gets stored is the RESOURCE format.
With that, the analysis of the image-loading flow under DiskCacheStrategy.ALL, with no memory cache hit, is complete.