Chromium uses the grit tool to package the resources the program needs, such as images, web pages, and strings. The packaged resources end up in chrome_100_percent.pak in the installation directory (images are split into separate packs by DPI). This post walks through how the program reads an individual resource out of these pak files.
Tracing the code shows that the resource-loading paths in Chromium eventually end up calling the following:
base::RefCountedMemory* ChromeContentClient::GetDataResourceBytes(
    int resource_id) const {
  return ResourceBundle::GetSharedInstance().LoadDataResourceBytes(resource_id);
}
This shows that ResourceBundle is the key class for resource loading. ResourceBundle::GetSharedInstance reveals that it is a global singleton, with one instance shared per process. Tracing further, the instance is initialized in ChromeBrowserMainParts::PreCreateThreadsImpl. The initialization mainly passes in a Delegate object used to construct the ResourceBundle, loads the localized string resources for the locale matching the system language setting, and collects auxiliary information such as the screen DPI.
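As a rough illustration of that setup, the sketch below shows how the shared instance might be created and how packs for different scale factors get registered. It is a minimal sketch, assuming the ui::ResourceBundle API of that era (InitSharedInstanceWithLocale, AddDataPackFromPath); the locale, pak file names, and the null delegate are placeholders, and exact signatures differ between Chromium versions.

#include "base/files/file_path.h"
#include "ui/base/resource/resource_bundle.h"

void InitResourceBundleSketch() {
  // Create the process-wide singleton and load the localized strings for
  // the chosen language; a null Delegate keeps the default behavior.
  ui::ResourceBundle::InitSharedInstanceWithLocale(
      "en-US", /*delegate=*/nullptr,
      ui::ResourceBundle::LOAD_COMMON_RESOURCES);

  // Register image packs for specific scale factors; the lookup code shown
  // later picks among these packs by scale factor.
  ui::ResourceBundle& bundle = ui::ResourceBundle::GetSharedInstance();
  bundle.AddDataPackFromPath(
      base::FilePath(FILE_PATH_LITERAL("chrome_100_percent.pak")),
      ui::SCALE_FACTOR_100P);
  bundle.AddDataPackFromPath(
      base::FilePath(FILE_PATH_LITERAL("chrome_200_percent.pak")),
      ui::SCALE_FACTOR_200P);
}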
Stepping into LoadDataResourceBytes, we can see that it calls GetRawDataResourceForScale and wraps the returned data in a RefCountedStaticMemory object; a rough sketch of that wrapping step follows, and after it we look at where the data actually comes from.
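This is a minimal sketch of the wrapping described above, assuming the ResourceBundle members named in this post; it is not the verbatim Chromium source (the real method also gives the Delegate a chance to supply the bytes, and signatures vary across versions):

// Sketch: look up the raw bytes in the loaded packs and expose them
// through a RefCountedStaticMemory, which ref-counts a pointer into the
// pack's memory without copying it.
base::RefCountedMemory* ResourceBundle::LoadDataResourceBytes(
    int resource_id) const {
  base::StringPiece data =
      GetRawDataResourceForScale(resource_id, ui::SCALE_FACTOR_NONE);
  if (data.empty())
    return nullptr;
  return new base::RefCountedStaticMemory(data.data(), data.length());
}

The raw bytes themselves come from GetRawDataResourceForScale: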
base::StringPiece ResourceBundle::GetRawDataResourceForScale(
    int resource_id,
    ScaleFactor scale_factor) const {
  base::StringPiece data;
  ......
  for (size_t i = 0; i < data_packs_.size(); i++) {
    if ((data_packs_[i]->GetScaleFactor() == ui::SCALE_FACTOR_100P ||
         data_packs_[i]->GetScaleFactor() == ui::SCALE_FACTOR_200P ||
         data_packs_[i]->GetScaleFactor() == ui::SCALE_FACTOR_300P ||
         data_packs_[i]->GetScaleFactor() == ui::SCALE_FACTOR_NONE) &&
        data_packs_[i]->GetStringPiece(static_cast<uint16_t>(resource_id),
                                       &data))
      return data;
  }
  return base::StringPiece();
}
At this point we are close to the truth: Chromium loads resources into separate data packs according to the screen scale factor they target. The header file shows that data_packs_ holds ResourceHandle instances; jumping to that definition reveals an abstract class, and attaching a debugger shows the concrete objects are ui::DataPack instances. This is the key implementation:
#pragma pack(push, 2)
struct DataPackEntry {
  uint16_t resource_id;
  uint32_t file_offset;

  static int CompareById(const void* void_key, const void* void_entry) {
    uint16_t key = *reinterpret_cast<const uint16_t*>(void_key);
    const DataPackEntry* entry =
        reinterpret_cast<const DataPackEntry*>(void_entry);
    if (key < entry->resource_id) {
      return -1;
    } else if (key > entry->resource_id) {
      return 1;
    } else {
      return 0;
    }
  }
};
#pragma pack(pop)

static_assert(sizeof(DataPackEntry) == 6, "size of entry must be six");

......

bool DataPack::GetStringPiece(uint16_t resource_id,
                              base::StringPiece* data) const {
  // It won't be hard to make this endian-agnostic, but it's not worth
  // bothering to do right now.
#if defined(__BYTE_ORDER)
  // Linux check
  static_assert(__BYTE_ORDER == __LITTLE_ENDIAN,
                "datapack assumes little endian");
#elif defined(__BIG_ENDIAN__)
  // Mac check
#error DataPack assumes little endian
#endif

  const DataPackEntry* target = reinterpret_cast<const DataPackEntry*>(bsearch(
      &resource_id, data_source_->GetData() + kHeaderLength, resource_count_,
      sizeof(DataPackEntry), DataPackEntry::CompareById));
  if (!target) {
    return false;
  }

  const DataPackEntry* next_entry = target + 1;
  // If the next entry points beyond the end of the file this data pack's entry
  // table is corrupt. Log an error and return false. See
  // http://crbug.com/371301.
  if (next_entry->file_offset > data_source_->GetLength()) {
    size_t entry_index = target - reinterpret_cast<const DataPackEntry*>(
                                      data_source_->GetData() + kHeaderLength);
    LOG(ERROR) << "Entry #" << entry_index << " in data pack points off end "
               << "of file. This should have been caught when loading. Was the "
               << "file modified?";
    return false;
  }

  MaybePrintResourceId(resource_id);
  size_t length = next_entry->file_offset - target->file_offset;
  data->set(reinterpret_cast<const char*>(data_source_->GetData() +
                                          target->file_offset),
            length);
  return true;
}
As the code shows, each resource has a corresponding DataPackEntry structure in the entry table near the start of the file. The structure is 6 bytes: the first two bytes are the resource id and the following four bytes are the offset into the file. To fetch a resource by id, a binary search locates the offsets of that resource id and of the next entry; the bytes between the two offsets are the resource content. Analyzing the code further, we can draw the following diagram of the pak file structure:
[Figure: pak file structure diagram]
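To make the layout concrete, here is a small standalone reader that follows the structure above. It is an illustration under a few assumptions, not grit's or DataPack's code: it assumes the version-4 pak format implied by the snippet (a 9-byte header of uint32 version, uint32 resource count, and uint8 text encoding), little-endian data, and the extra sentinel entry after the last resource that the next_entry trick in GetStringPiece relies on.

#include <cstdint>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

namespace {

constexpr size_t kHeaderLength = 4 + 4 + 1;  // version + resource count + encoding.
constexpr size_t kEntrySize = 2 + 4;         // uint16 id + uint32 offset.

uint16_t ReadU16(const uint8_t* p) {
  return static_cast<uint16_t>(p[0] | (p[1] << 8));
}

uint32_t ReadU32(const uint8_t* p) {
  return static_cast<uint32_t>(p[0]) | (static_cast<uint32_t>(p[1]) << 8) |
         (static_cast<uint32_t>(p[2]) << 16) |
         (static_cast<uint32_t>(p[3]) << 24);
}

// Returns the bytes of |resource_id|, or an empty string if it is absent.
std::string GetResource(const std::vector<uint8_t>& pak, uint16_t resource_id) {
  if (pak.size() < kHeaderLength)
    return std::string();
  const size_t resource_count = ReadU32(&pak[4]);
  // The entry table holds resource_count entries plus one sentinel entry
  // whose offset marks the end of the data area.
  if (pak.size() < kHeaderLength + (resource_count + 1) * kEntrySize)
    return std::string();

  // Binary search over the id-sorted entry table, like DataPack's bsearch.
  size_t lo = 0, hi = resource_count;
  while (lo < hi) {
    const size_t mid = lo + (hi - lo) / 2;
    if (ReadU16(&pak[kHeaderLength + mid * kEntrySize]) < resource_id)
      lo = mid + 1;
    else
      hi = mid;
  }
  const uint8_t* entry = &pak[kHeaderLength + lo * kEntrySize];
  if (lo == resource_count || ReadU16(entry) != resource_id)
    return std::string();

  // The body sits between this entry's offset and the next entry's offset.
  const uint32_t begin = ReadU32(entry + 2);
  const uint32_t end = ReadU32(entry + kEntrySize + 2);
  if (begin > end || end > pak.size())
    return std::string();
  return std::string(reinterpret_cast<const char*>(&pak[begin]), end - begin);
}

}  // namespace

int main(int argc, char** argv) {
  if (argc != 3) {
    std::fprintf(stderr, "usage: %s <pak file> <resource id>\n", argv[0]);
    return 1;
  }
  std::ifstream file(argv[1], std::ios::binary);
  std::vector<uint8_t> pak((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
  const std::string data =
      GetResource(pak, static_cast<uint16_t>(std::stoi(argv[2])));
  std::printf("resource size: %zu bytes\n", data.size());
  return 0;
}

Pass it a pak file and a numeric resource id; if the id is present, it prints the size of the body found between that entry's offset and the next entry's offset.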