Introduction
servicemanager is the housekeeper of binder services in Android. A binder service normally registers itself with servicemanager first; other clients then ask servicemanager for the target binder's proxy, and once they hold that proxy they can make IPC calls through it. This article walks through how servicemanager works.
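As a quick illustration of the client side, here is a minimal sketch (libbinder C++) of getting a proxy from servicemanager and calling through it. The service name "some.service" and the transaction code are placeholders; real clients normally go through a generated AIDL proxy (interface_cast) rather than a raw transact().

```cpp
#include <binder/IServiceManager.h>
#include <binder/Parcel.h>
#include <utils/String16.h>

using namespace android;

// Minimal sketch, assuming some service registered under the hypothetical
// name "some.service".
void callSomeService() {
    // Ask servicemanager for the proxy binder of "some.service".
    sp<IBinder> binder = defaultServiceManager()->getService(String16("some.service"));
    if (binder == nullptr) {
        return;  // the service is not registered (yet)
    }

    // Issue an IPC call through the proxy. FIRST_CALL_TRANSACTION is used here
    // only as a placeholder transaction code.
    Parcel data, reply;
    binder->transact(IBinder::FIRST_CALL_TRANSACTION, data, &reply);
}
```

The rest of this article looks at the other side of that exchange: how servicemanager itself is started and how it serves these requests.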
How servicemanager starts
servicemanager is started by init; below is its rc configuration.
service servicemanager /system/bin/servicemanager
    class core animation
    user system
    group system readproc
    critical
    onrestart restart apexd
    onrestart restart audioserver
    onrestart restart gatekeeperd
    onrestart class_restart main
    onrestart class_restart hal
    onrestart class_restart early_hal
    writepid /dev/cpuset/system-background/tasks
    shutdown critical
As you can see, servicemanager is essential: because of the `critical` option, if it crashes more than four times within the fatal crash window (four minutes by default), the device reboots into bootloader mode.
Now let's walk through the startup flow:
int main(int argc, char** argv) {
    if (argc > 2) {
        LOG(FATAL) << "usage: " << argv[0] << " [binder driver]";
    }
    const char* driver = argc == 2 ? argv[1] : "/dev/binder";

    sp<ProcessState> ps = ProcessState::initWithDriver(driver); // initialize the process-wide ProcessState singleton
    ps->setThreadPoolMaxThreadCount(0); // no extra binder threads: everything is served from the main loop
    ps->setCallRestriction(ProcessState::CallRestriction::FATAL_IF_NOT_ONEWAY); // outgoing calls that are not oneway (i.e. could block) are fatal

    sp<ServiceManager> manager = sp<ServiceManager>::make(std::make_unique<Access>()); // create the ServiceManager
    if (!manager->addService("manager", manager, false /*allowIsolated*/, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk()) {
        LOG(ERROR) << "Could not self register servicemanager";
    } // register servicemanager itself in its own service list

    IPCThreadState::self()->setTheContextObject(manager);
    ps->becomeContextManager(); // tell the kernel that this process is the binder context manager

    sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);

    BinderCallback::setupTo(looper); // register a callback on the binder fd
    ClientCallbackCallback::setupTo(looper, manager);

    while(true) {
        looper->pollAll(-1); // start polling the binder node
    }

    // should not be reached
    return EXIT_FAILURE;
}
servicemanager's startup thus boils down to a few steps:
- initialize ProcessState
- create the ServiceManager object and register it as the binder context manager
- register callbacks on the binder driver node through a Looper

Let's look at each step in turn.
Initializing ProcessState
```
sp<ProcessState> ProcessState::initWithDriver(const char* driver)
{
    return init(driver, true /*requireDefault*/);
}
```
As mentioned in the earlier article introducing defaultServiceManager, Android has three binder domains: /dev/binder, /dev/hwbinder and /dev/vndbinder. The three domains are isolated from each other — a binder service in one domain cannot be looked up through vndbinder, for example. This separation is one of the ways Google decoupled the system in Project Treble.
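For example, a process that wants to talk on the vndbinder domain has to initialize its ProcessState against /dev/vndbinder before making any other binder call. A minimal sketch (the choice of domain here is just for illustration):

```cpp
#include <binder/ProcessState.h>

using namespace android;

int main() {
    // Sketch: pick the binder domain explicitly before any other binder call.
    // servicemanager does the same thing with the driver passed on its
    // command line, defaulting to /dev/binder.
    sp<ProcessState> ps = ProcessState::initWithDriver("/dev/vndbinder");
    ps->startThreadPool();  // spawn binder threads for this process
    // ... register or look up services on the vndbinder domain ...
    return 0;
}
```

With that in mind, here is ProcessState::init():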
sp<ProcessState> ProcessState::init(const char *driver, bool requireDefault)
{
    [[clang::no_destroy]] static sp<ProcessState> gProcess;
    [[clang::no_destroy]] static std::mutex gProcessMutex;

    if (driver == nullptr) { // a null driver just returns the already-initialized instance (if any)
        std::lock_guard<std::mutex> l(gProcessMutex);
        return gProcess;
    }

    [[clang::no_destroy]] static std::once_flag gProcessOnce;
    std::call_once(gProcessOnce, [&](){
        if (access(driver, R_OK) == -1) {
            ALOGE("Binder driver %s is unavailable. Using /dev/binder instead.", driver);
            driver = "/dev/binder";
        }

        std::lock_guard<std::mutex> l(gProcessMutex);
        gProcess = sp<ProcessState>::make(driver); // construct the ProcessState
    });

    if (requireDefault) {
        // Detect if we are trying to initialize with a different driver, and
        // consider that an error. ProcessState will only be initialized once above.
        LOG_ALWAYS_FATAL_IF(gProcess->getDriverName() != driver,
                            "ProcessState was already initialized with %s,"
                            " can't initialize with %s.",
                            gProcess->getDriverName().c_str(), driver);
    }

    return gProcess;
}
So init() simply constructs a ProcessState for the given driver, which defaults to /dev/binder. Next, the ProcessState constructor:
ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))
    , mDriverFD(open_driver(driver)) // open the binder node; the driver creates the matching binder_proc for this process
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mWaitingForThreads(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
    , mCallRestriction(CallRestriction::NONE)
{
// TODO(b/166468760): enforce in build system
#if defined(__ANDROID_APEX__)
    LOG_ALWAYS_FATAL("Cannot use libbinder in APEX (only system.img libbinder) since it is not stable.");
#endif
    // libbinder cannot be used from inside an APEX. APEX is a mechanism introduced in
    // Android 10 to speed up system updates. Project Treble had the same goal: let Google
    // update the Android framework code independently of the chip vendors' hardware code.
    // APEX goes one step further: instead of updating whole system images, the system is
    // split into modules, each one an APEX, which can be updated directly from the
    // Play Store.
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
            close(mDriverFD);
            mDriverFD = -1;
            mDriverName.clear();
        }
    }

#ifdef __ANDROID__
    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver '%s' could not be opened. Terminating.", driver);
#endif
}
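One note on the mmap above: BINDER_VM_SIZE is a constant local to ProcessState.cpp, and in AOSP it works out to roughly 1 MB minus two pages. The exact expression may differ between releases, so treat the following as an approximation:

```cpp
#include <cstdio>
#include <unistd.h>

// Approximation of BINDER_VM_SIZE from ProcessState.cpp:
// ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2), i.e. about 1 MB minus two
// pages -- the size of the read-only buffer the binder driver copies incoming
// transactions into.
int main() {
    long vmSize = (1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2;
    printf("binder mmap size: %ld bytes\n", vmSize);
    return 0;
}
```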
That completes the construction of ProcessState. A side topic that came up here is APEX, i.e. modular system updates, which deserves its own article later.
Creating the ServiceManager object and making it the binder context manager
After creating the ServiceManager, main() calls addService to add servicemanager itself to the service list.
Let's look at what addService does:
Status ServiceManager::addService(const std::string& name, const sp<IBinder>& binder, bool allowIsolated, int32_t dumpPriority) {
    auto ctx = mAccess->getCallingContext();

    // apps cannot add services
    if (multiuser_get_app_id(ctx.uid) >= AID_APP) {
        return Status::fromExceptionCode(Status::EX_SECURITY);
    }

    if (!mAccess->canAdd(ctx, name)) { // SELinux check
        return Status::fromExceptionCode(Status::EX_SECURITY);
    }

    if (binder == nullptr) {
        return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
    }

    if (!isValidServiceName(name)) {
        LOG(ERROR) << "Invalid service name: " << name;
        return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
    }

#ifndef VENDORSERVICEMANAGER
    if (!meetsDeclarationRequirements(binder, name)) {
        // already logged
        return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
    }
#endif // !VENDORSERVICEMANAGER

    // implicitly unlinked when the binder is removed
    if (binder->remoteBinder() != nullptr &&
        binder->linkToDeath(sp<ServiceManager>::fromExisting(this)) != OK) { // register a death notification
        LOG(ERROR) << "Could not linkToDeath when adding " << name;
        return Status::fromExceptionCode(Status::EX_ILLEGAL_STATE);
    }

    // Overwrite the old service if it exists
    mNameToService[name] = Service { // store the service in the name -> service map
        .binder = binder,
        .allowIsolated = allowIsolated,
        .dumpPriority = dumpPriority,
        .debugPid = ctx.debugPid,
    };

    auto it = mNameToRegistrationCallback.find(name); // if any client registered for this name, notify it now
    if (it != mNameToRegistrationCallback.end()) {
        for (const sp<IServiceCallback>& cb : it->second) {
            mNameToService[name].guaranteeClient = true;
            // permission checked in registerForNotifications
            cb->onRegistration(name, binder);
        }
    }

    return Status::ok();
}
As the code above shows, addService essentially just stores the service name together with its binder. In some scenarios a client needs the binder registered under a given name but has no idea when it will become available; there are two ways to handle that: keep polling with getService, or register a callback via registerForNotifications so that servicemanager notifies the client when the name appears.
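Here is a minimal sketch of the polling variant (the service name is whatever the target registered under; checkService is used inside the loop so that the lookup itself never tries to start anything, while the notification variant avoids the loop entirely):

```cpp
#include <binder/IServiceManager.h>
#include <utils/String16.h>
#include <unistd.h>

using namespace android;

// Sketch of "poll until the service shows up"; the caller passes the
// registered service name, e.g. a placeholder like "some.service".
sp<IBinder> waitForServiceByPolling(const char* name) {
    while (true) {
        // checkService only looks the name up; it does not try to start the service.
        sp<IBinder> binder = defaultServiceManager()->checkService(String16(name));
        if (binder != nullptr) {
            return binder;  // the service has registered itself by now
        }
        usleep(100 * 1000);  // back off 100 ms before asking again
    }
}
```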
A natural question at this point: what happens if a second binder service registers under a name that is already taken? From the implementation above, the new registration simply overwrites the old one; it does not fail just because the name already exists.
Next, let's see how servicemanager becomes the binder context manager.
bool ProcessState::becomeContextManager()
{
    AutoMutex _l(mLock);

    flat_binder_object obj {
        .flags = FLAT_BINDER_FLAG_TXN_SECURITY_CTX,
    };

    int result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR_EXT, &obj);

    // fallback to original method
    if (result != 0) {
        android_errorWriteLog(0x534e4554, "121035042");

        int unused = 0;
        result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR, &unused);
    }

    if (result == -1) {
        ALOGE("Binder ioctl to become context manager failed: %s\n", strerror(errno));
    }

    return result == 0;
}
It goes straight into the kernel with an ioctl, sending the BINDER_SET_CONTEXT_MGR_EXT command (and falling back to the older BINDER_SET_CONTEXT_MGR if that fails).
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ...
    case BINDER_SET_CONTEXT_MGR_EXT: {
        struct flat_binder_object fbo;

        if (copy_from_user(&fbo, ubuf, sizeof(fbo))) {
            ret = -EINVAL;
            goto err;
        }
        ret = binder_ioctl_set_ctx_mgr(filp, &fbo); // install the binder context manager
        if (ret)
            goto err;
        break;
    }
    ...
}
binder_ioctl_set_ctx_mgr then does the real work:
static int binder_ioctl_set_ctx_mgr(struct file *filp,
                                    struct flat_binder_object *fbo)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    struct binder_context *context = proc->context;
    struct binder_node *new_node;
    kuid_t curr_euid = current_euid();

    mutex_lock(&context->context_mgr_node_lock);
    if (context->binder_context_mgr_node) { // a context manager is already set, bail out
        pr_err("BINDER_SET_CONTEXT_MGR already set\n");
        ret = -EBUSY;
        goto out;
    }
    ret = security_binder_set_context_mgr(proc->tsk); // SELinux hook: may this process become the context manager?
    if (ret < 0)
        goto out;
    if (uid_valid(context->binder_context_mgr_uid)) {
        if (!uid_eq(context->binder_context_mgr_uid, curr_euid)) {
            pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
                   from_kuid(&init_user_ns, curr_euid),
                   from_kuid(&init_user_ns,
                             context->binder_context_mgr_uid));
            ret = -EPERM;
            goto out;
        }
    } else {
        context->binder_context_mgr_uid = curr_euid;
    }
    new_node = binder_new_node(proc, fbo); // create the binder_node that represents the context manager
    if (!new_node) {
        ret = -ENOMEM;
        goto out;
    }
    binder_node_lock(new_node);
    new_node->local_weak_refs++;
    new_node->local_strong_refs++;
    new_node->has_strong_ref = 1;
    new_node->has_weak_ref = 1;
    context->binder_context_mgr_node = new_node;
    binder_node_unlock(new_node);
    binder_put_node(new_node);
out:
    mutex_unlock(&context->context_mgr_node_lock);
    return ret;
}
Now let's look at the interfaces servicemanager exposes:
| Interface | Description |
| --- | --- |
| getService | Get a binder service; if it is not running, an attempt is made to start it |
| checkService | Get a binder service; if it is not found, no attempt is made to start it |
| addService | Register a binder service |
| listServices | List the registered binder services |
| un/registerForNotifications | Unregister/register a callback that fires when a service with the given name is registered |
| getDeclaredInstances | Get the declared instances of a given interface |
| registerClientCallback | Register a callback that is invoked when the service's clients change |
| getServiceDebugInfo | Get the list of registered service names and the PIDs that registered them |
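As a small client-side illustration of these interfaces, the following sketch prints every registered service name via listServices; the dump-priority flag comes from IServiceManager, and this is essentially what the `service list` shell command shows:

```cpp
#include <binder/IServiceManager.h>
#include <utils/String16.h>
#include <utils/String8.h>
#include <utils/Vector.h>
#include <cstdio>

using namespace android;

// Sketch: enumerate everything currently registered with servicemanager.
int main() {
    Vector<String16> services =
            defaultServiceManager()->listServices(IServiceManager::DUMP_FLAG_PRIORITY_ALL);
    for (size_t i = 0; i < services.size(); i++) {
        printf("%zu\t%s\n", i, String8(services[i]).c_str());
    }
    return 0;
}
```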
Setting up the driver callbacks
First a Looper is created, and two callbacks are registered on it: one handles incoming binder requests, the other fires on a timer so servicemanager can report client changes. Here is the first one.
BinderCallback::setupTo(looper);
class BinderCallback : public LooperCallback {
public:
    static sp<BinderCallback> setupTo(const sp<Looper>& looper) {
        sp<BinderCallback> cb = sp<BinderCallback>::make();

        int binder_fd = -1;
        IPCThreadState::self()->setupPolling(&binder_fd); // get the binder fd and switch to polling mode
        LOG_ALWAYS_FATAL_IF(binder_fd < 0, "Failed to setupPolling: %d", binder_fd);

        int ret = looper->addFd(binder_fd, // watch the binder fd for input events
                                Looper::POLL_CALLBACK,
                                Looper::EVENT_INPUT,
                                cb,
                                nullptr /*data*/);
        LOG_ALWAYS_FATAL_IF(ret != 1, "Failed to add binder FD to Looper");

        return cb;
    }

    int handleEvent(int /* fd */, int /* events */, void* /* data */) override {
        IPCThreadState::self()->handlePolledCommands(); // read the pending binder commands and execute them
        return 1;  // Continue receiving callbacks.
    }
};
Now the second callback:
class ClientCallbackCallback : public LooperCallback {
public:
    static sp<ClientCallbackCallback> setupTo(const sp<Looper>& looper, const sp<ServiceManager>& manager) {
        sp<ClientCallbackCallback> cb = sp<ClientCallbackCallback>::make(manager);

        int fdTimer = timerfd_create(CLOCK_MONOTONIC, 0 /*flags*/); // create a periodic timerfd
        LOG_ALWAYS_FATAL_IF(fdTimer < 0, "Failed to timerfd_create: fd: %d err: %d", fdTimer, errno);

        itimerspec timespec {
            .it_interval = {
                .tv_sec = 5,
                .tv_nsec = 0,
            },
            .it_value = {
                .tv_sec = 5,
                .tv_nsec = 0,
            },
        };

        int timeRes = timerfd_settime(fdTimer, 0 /*flags*/, &timespec, nullptr);
        LOG_ALWAYS_FATAL_IF(timeRes < 0, "Failed to timerfd_settime: res: %d err: %d", timeRes, errno);

        int addRes = looper->addFd(fdTimer, // watch the timer fd on the same looper
                                   Looper::POLL_CALLBACK,
                                   Looper::EVENT_INPUT,
                                   cb,
                                   nullptr);
        LOG_ALWAYS_FATAL_IF(addRes != 1, "Failed to add client callback FD to Looper");

        return cb;
    }

    int handleEvent(int fd, int /*events*/, void* /*data*/) override {
        uint64_t expirations;
        int ret = read(fd, &expirations, sizeof(expirations));
        if (ret != sizeof(expirations)) {
            ALOGE("Read failed to callback FD: ret: %d err: %d", ret, errno);
        }

        mManager->handleClientCallbacks(); // every tick (5 s), check and report client changes
        return 1;  // Continue receiving callbacks.
    }

private:
    friend sp<ClientCallbackCallback>;
    ClientCallbackCallback(const sp<ServiceManager>& manager) : mManager(manager) {}
    sp<ServiceManager> mManager;
};
After all that, servicemanager just keeps polling on this looper.
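The 5-second client-callback tick relies only on the standard Linux timerfd API. Stripped of Looper, the same mechanism looks like this (a self-contained sketch, not servicemanager code):

```cpp
#include <sys/timerfd.h>
#include <poll.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

// Sketch of the timerfd pattern used by ClientCallbackCallback: arm a periodic
// 5-second timer, wait for the fd to become readable, then read the expiration
// count (the read clears the readable state).
int main() {
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);

    itimerspec spec = {
        .it_interval = { .tv_sec = 5, .tv_nsec = 0 },  // period
        .it_value    = { .tv_sec = 5, .tv_nsec = 0 },  // first expiry
    };
    timerfd_settime(fd, 0, &spec, nullptr);

    pollfd pfd = { .fd = fd, .events = POLLIN, .revents = 0 };
    while (true) {
        poll(&pfd, 1, -1);  // block until the timer fires
        uint64_t expirations = 0;
        read(fd, &expirations, sizeof(expirations));
        printf("tick (%llu expirations)\n", (unsigned long long)expirations);
        // this is where servicemanager calls handleClientCallbacks()
    }
}
```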
Summary
This article covered servicemanager: how it is created, how it starts, and how it runs. Compared with older versions, servicemanager now lets services and clients watch each other (registration callbacks for clients and client callbacks for services), which makes it noticeably more convenient to use.