An Analysis Starting from AMS.attachApplicationLocked()

Author: 我叫王菜鸟 | Published 2017-08-20 19:10

    After the system creates an app process, AMS.attachApplicationLocked() is called. Inside this method, a death recipient is registered for that process:

    // thread is the Binder proxy for the app's ActivityThread/ApplicationThread, obtained via cross-process communication; linkToDeath() is then called on its binder
    AppDeathRecipient adr = new AppDeathRecipient(app, pid, thread);
    thread.asBinder().linkToDeath(adr, 0);
    

    We find that this is an empty (no-op) implementation:

    Binder.java (the local implementation, inherited by ApplicationThreadNative)

    /**
     * Local implementation is a no-op.
     */
    public void linkToDeath(DeathRecipient recipient, int flags) {
    }
    

    An empty implementation naturally makes us curious, since it does nothing at all. But think about it: thread.asBinder() represents the ActivityThread, but is it actually the ActivityThread object itself? The answer is no. With this question in mind, let's trace the code backwards and figure out what this thread really is.

    Our work after the child process is created starts in ActivityThread.main(), so the flow is as follows:

    ActivityThread.main

    ActivityThread thread = new ActivityThread(); // here thread is the ActivityThread
    thread.attach(false);
    

    attach()

     final ApplicationThread mAppThread = new ApplicationThread(); // a member field of ActivityThread
    -------
    final IActivityManager mgr = ActivityManagerNative.getDefault(); // at this point we cross process boundaries into AMS.attachApplicationLocked(), back to where we started
    try {
        mgr.attachApplication(mAppThread);
    } catch (RemoteException ex) {
        // Ignore
    }
    

    So now it is clear that thread.asBinder() represents the ApplicationThread. Note that I said "represents"; see below.

    ActivityManagerNative.java

    public void attachApplication(IApplicationThread app) throws RemoteException
    {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IActivityManager.descriptor);
        data.writeStrongBinder(app.asBinder()); // note this line
        mRemote.transact(ATTACH_APPLICATION_TRANSACTION, data, reply, 0);
        reply.readException();
        data.recycle();
        reply.recycle();
    }
    

    What gets passed is the Binder proxy, i.e. the proxy of ApplicationThread. Of course we are still not satisfied; we have to see what ApplicationThread's asBinder() really is.

    ApplicationThread.java

    private class ApplicationThread extends ApplicationThreadNative {
    ...
    }
    

    ApplicationThreadNative.java

    public abstract class ApplicationThreadNative extends Binder
            implements IApplicationThread {
        public IBinder asBinder()
        {
        return this; // this is the ApplicationThread itself, thanks to the inheritance chain
        }
    }        
    

    So now it is clear: thread.asBinder() returns the ApplicationThreadNative, and what attachApplication() passes in is the ApplicationThread. ApplicationThread's asBinder() is the ApplicationThread itself, because ApplicationThread extends ApplicationThreadNative, which in turn extends Binder; in other words, the reference itself is passed. When it is transferred through Binder, the remote end receives a proxy for the ApplicationThread entity. So what we need to look at is ApplicationThread's proxy object, ApplicationThreadProxy. Since it is a proxy, it wraps a BinderProxy, and therefore the linkToDeath() that actually runs is the one in BinderProxy.


    Let's move on to BinderProxy.java.

    BinderProxy.java

    // it is a native method
    public native void linkToDeath(DeathRecipient recipient, int flags)
            throws RemoteException;
    
    

    This also confirms that it is the holders of BinderProxy objects, i.e. the client side, that need to handle death notifications; the Binder server side does not, which is why its implementation is empty.
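
    As a side note, this same client-side path is what ordinary application code uses to register death notifications. Below is a minimal, hypothetical sketch (not framework source); serviceBinder is assumed to be an IBinder for some remote service, e.g. obtained in onServiceConnected():

    // Hypothetical client-side usage of IBinder.linkToDeath()
    import android.os.IBinder;
    import android.os.RemoteException;
    import android.util.Log;

    class ServiceDeathWatcher implements IBinder.DeathRecipient {
        private final IBinder mServiceBinder;

        ServiceDeathWatcher(IBinder serviceBinder) throws RemoteException {
            mServiceBinder = serviceBinder;
            // Registers this recipient; the call goes through BinderProxy.linkToDeath()
            // and ultimately BpBinder::linkToDeath(), as analyzed below.
            mServiceBinder.linkToDeath(this, 0 /* flags */);
        }

        @Override
        public void binderDied() {
            // Invoked on a binder thread once the remote process dies.
            Log.w("ServiceDeathWatcher", "remote service process died");
            mServiceBinder.unlinkToDeath(this, 0 /* flags */);
        }
    }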

    Let's see how the native side is implemented.

    static const JNINativeMethod gBinderProxyMethods[] = {
         {"linkToDeath", "(Landroid/os/IBinder$DeathRecipient;I)V", (void*)android_os_BinderProxy_linkToDeath}
     };
    

    android_util_Binder.cpp
    //incoming arguments: recipient is the AppDeathRecipient built from the app record, pid and thread; flags is 0

    static void android_os_BinderProxy_linkToDeath(JNIEnv* env, jobject obj,
            jobject recipient, jint flags) // throws RemoteException
    {
        // incidentally, this shows how to throw a Java exception from JNI
        if (recipient == NULL) {
            jniThrowNullPointerException(env, NULL);
            return;
        }
        // fetch the native IBinder (a BpBinder for proxies)
        IBinder* target = (IBinder*)
            env->GetLongField(obj, gBinderProxyOffsets.mObject);//[1.0]
        if (target == NULL) {
            ALOGW("Binder has been finalized when calling linkToDeath() with recip=%p)\n", recipient);
            assert(false);
        }
        // also note the debug log printed here
        LOGDEATH("linkToDeath: binder=%p recipient=%p\n", target, recipient);
    
        if (!target->localBinder()) { // [1.0] proceed only if this is not a local binder, i.e. it is a BpBinder (proxy)
            DeathRecipientList* list = (DeathRecipientList*)
                    env->GetLongField(obj, gBinderProxyOffsets.mOrgue);
            // create a JavaDeathRecipient object
            sp<JavaDeathRecipient> jdr = new JavaDeathRecipient(env, recipient, list);
            // this is where the death notification is actually registered [3.0]
            status_t err = target->linkToDeath(jdr, NULL, flags);
            if (err != NO_ERROR) {
                // Failure adding the death recipient, so clear its reference
                // now.
                jdr->clearReference();//[2.0]
                signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/);
            }
        }
    }
    

    1.0

    IBinder* target = (IBinder*)
    env->GetLongField(obj, gBinderProxyOffsets.mObject);
    -------------------
    This uses the JNI function
    jlong       (*GetLongField)(JNIEnv*, jobject, jfieldID);
    whose purpose is to read from obj the value of the field identified by mObject.
    --------------------
    obj is the incoming parameter: the BinderProxy object on which linkToDeath() was called.
    Its mObject field holds the address of the native BpBinder, stored when the BinderProxy was created.
    //note how this field is set in javaObjectForIBinder()
    jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val){
        // The proxy holds a reference to the native object.
        env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
    }
    
    

    1.0.1

    For example:
    jfieldID fid = (*env)->GetFieldID(env, cls, "key", "Ljava/lang/String;"); // get the field's jfieldID
    jstring jstr = (*env)->GetObjectField(env, jobj, fid); // read the value of the field identified by that jfieldID
    
    
    Get<type>Field
    NativeType Get<type>Field(JNIEnv *env, jobject obj, jfieldID fieldID);
    Purpose:
      This family of accessor routines returns the value of an instance (non-static) field of an object. The field to access is identified by a field ID obtained by calling GetFieldID().
    Parameters:
      env: the JNI interface pointer.
      obj: the Java object (must not be NULL).
      fieldID: a valid field ID.
    
    <type> can be Boolean, Char, etc.; all Get<type>Field variants are listed below:
    
    jboolean (*GetBooleanField)(JNIEnv*, jobject, jfieldID);
    jbyte (*GetByteField)(JNIEnv*, jobject, jfieldID);
    jchar (*GetCharField)(JNIEnv*, jobject, jfieldID);
    jshort (*GetShortField)(JNIEnv*, jobject, jfieldID);
    jint (*GetIntField)(JNIEnv*, jobject, jfieldID);
    jlong (*GetLongField)(JNIEnv*, jobject, jfieldID);
    jfloat (*GetFloatField)(JNIEnv*, jobject, jfieldID);
    jdouble (*GetDoubleField)(JNIEnv*, jobject, jfieldID);
    

    1.1

    BBinder* BBinder::localBinder()
    {
        return this;
    }
    

    Let's briefly summarize the android_os_BinderProxy_linkToDeath() method:

    We first obtain the BpBinder, and then the DeathRecipientList, which records the JavaDeathRecipient entries for that BpBinder, since a single BpBinder can register multiple death recipients.
    We then create a JavaDeathRecipient, which derives from IBinder::DeathRecipient:

    class JavaDeathRecipient : public IBinder::DeathRecipient
    {
    public:
        JavaDeathRecipient(JNIEnv* env, jobject object, const sp<DeathRecipientList>& list)
            : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)),
              mObjectWeak(NULL), mList(list)
        {
            //add a strong pointer to this object to the DeathRecipientList
            LOGDEATH("Adding JDR %p to DRL %p", this, list.get());
            list->add(this);
    
            android_atomic_inc(&gNumDeathRefs);
            incRefsCreated(env);
        }
    }
    
    • env->NewGlobalRef(object) creates a global reference for the recipient and stores it in the mObject member;
    • a strong pointer (sp) to this JavaDeathRecipient is added to the DeathRecipientList.

    android_util_Binder.cpp

    static void incRefsCreated(JNIEnv* env)
    {
        int old = android_atomic_inc(&gNumRefsCreated);
        if (old == 2000) {
            android_atomic_and(0, &gNumRefsCreated);
            // trigger a forceGc
            env->CallStaticVoidMethod(gBinderInternalOffsets.mClass,
                    gBinderInternalOffsets.mForceGc);
        }
    }
    

    This method mainly keeps a counter; every time the count reaches 2000 it triggers a forceGc.

    It is called in the following scenarios:

    In the JavaBBinder constructor:
        JavaBBinder(JNIEnv* env, jobject object)
            : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object))
        {
            ALOGV("Creating JavaBBinder %p\n", this);
            android_atomic_inc(&gNumLocalRefs);
            incRefsCreated(env);
        }
    
    When creating a JavaDeathRecipient object:
    JavaDeathRecipient(JNIEnv* env, jobject object, const sp<DeathRecipientList>& list)
        : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)),
          mObjectWeak(NULL), mList(list)
    {
        // These objects manage their own lifetimes so are responsible for final bookkeeping.
        // The list holds a strong reference to this object.
        LOGDEATH("Adding JDR %p to DRL %p", this, list.get());
        list->add(this);
    
        android_atomic_inc(&gNumDeathRefs);
        incRefsCreated(env);
    }
    
    
    When converting a native BpBinder object into a Java-level BinderProxy object:
    jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
    {
     incRefsCreated(env);
    }
    
    

    2.0 clearReference

    // clear the reference: remove the JavaDeathRecipient from the DeathRecipientList
    void clearReference()
     {
         sp<DeathRecipientList> list = mList.promote();
         if (list != NULL) {
             list->remove(this); // remove the reference from the list
         }
     }
    

    3.0

    status_t BpBinder::linkToDeath(
        const sp<DeathRecipient>& recipient, void* cookie, uint32_t flags)
    {
        Obituary ob;
        ob.recipient = recipient; //this is the JavaDeathRecipient
        ob.cookie = cookie; // cookie=NULL
        ob.flags = flags; // flags=0
        {
            AutoMutex _l(mLock);
            if (!mObitsSent) { // sendObituary() has not run yet, so proceed
                if (!mObituaries) {
                    mObituaries = new Vector<Obituary>;
                    if (!mObituaries) {
                        return NO_MEMORY;
                    }
                    getWeakRefs()->incWeak(this);
                    IPCThreadState* self = IPCThreadState::self();
                    //[3.1]
                    self->requestDeathNotification(mHandle, this);
                    //[3.2]
                    self->flushCommands();
                }
                //add the newly created Obituary to mObituaries
                ssize_t res = mObituaries->add(ob);
                return res >= (ssize_t)NO_ERROR ? (status_t)NO_ERROR : res;
            }
        }
        return DEAD_OBJECT;
    }
    

    3.1 requestDeathNotification

    It simply writes the BC_REQUEST_DEATH_NOTIFICATION command into the mOut buffer:

    status_t IPCThreadState::requestDeathNotification(int32_t handle, BpBinder* proxy)
    {
        mOut.writeInt32(BC_REQUEST_DEATH_NOTIFICATION);
        mOut.writeInt32((int32_t)handle);
        mOut.writePointer((uintptr_t)proxy);
        return NO_ERROR;
    }
    

    3.2 flushCommands
    This sends the pending commands to the driver; the false argument means it does not block waiting to read a reply.

    void IPCThreadState::flushCommands()
    {
        if (mProcess->mDriverFD <= 0)
            return;
        talkWithDriver(false);
    }
    

    binder.c

    static int binder_thread_write(struct binder_proc *proc,
          struct binder_thread *thread,
          binder_uintptr_t binder_buffer, size_t size,
          binder_size_t *consumed)
    {
      uint32_t cmd;
      // proc and thread both refer to the calling (client) process
      struct binder_context *context = proc->context;
      void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
      void __user *ptr = buffer + *consumed; 
      void __user *end = buffer + size;
      while (ptr < end && thread->return_error == BR_OK) {
        get_user(cmd, (uint32_t __user *)ptr); // read the command, here BC_REQUEST_DEATH_NOTIFICATION
        ptr += sizeof(uint32_t);
        switch (cmd) {
            case BC_REQUEST_DEATH_NOTIFICATION:{ // register a death notification
                uint32_t target;
                void __user *cookie;
                struct binder_ref *ref;
                struct binder_ref_death *death;
    
                get_user(target, (uint32_t __user *)ptr); // read the target handle
                ptr += sizeof(uint32_t);
                get_user(cookie, (void __user * __user *)ptr); // read the cookie, i.e. the BpBinder pointer
                ptr += sizeof(void *);
    
                ref = binder_get_ref(proc, target); // get the binder_ref for the target service
    
                if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
                    // a native BpBinder may register multiple recipients, but the kernel allows only one death notification per ref
                    if (ref->death) {
                        break; 
                    }
                    death = kzalloc(sizeof(*death), GFP_KERNEL);
    
                    INIT_LIST_HEAD(&death->work.entry);
                    death->cookie = cookie;
                    ref->death = death;
                    // if the process hosting the target binder service is already dead, send the death notification right away (the unusual case)
                    if (ref->node->proc == NULL) { 
                        ref->death->work.type = BINDER_WORK_DEAD_BINDER;
                        // if the current thread is a binder thread, add it directly to this thread's todo queue
                        if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
                            list_add_tail(&ref->death->work.entry, &thread->todo);
                        } else {
                            list_add_tail(&ref->death->work.entry, &proc->todo);
                            wake_up_interruptible(&proc->wait);
                        }
                    }
                } else {
                    ...
                }
            } break;
          case ...;
        }
        *consumed = ptr - buffer;
      }
    }
    

    So at this point the kernel has recorded the death notification (a binder_ref_death whose cookie is the BpBinder) on the target service's binder_ref. This means that as soon as the remote process dies, the driver can queue the dead-binder work onto the client's todo list and the registered callback will eventually be invoked.

    From the analysis above we also know that multiple BpBinder clients can register death notifications against the same server; each registration goes through the real BpBinder's linkToDeath() and is recorded in the kernel. Work items queued onto a todo list carry a work.type field, and that is how the driver marks the items produced by a dead binder (BINDER_WORK_DEAD_BINDER).

     DeathRecipientList* list = (DeathRecipientList*)env->GetLongField(obj, gBinderProxyOffsets.mOrgue);
    //create the JavaDeathRecipient object
    sp<JavaDeathRecipient> jdr = new JavaDeathRecipient(env, recipient, list);
    //this is where the death notification is actually registered [3.0]
    status_t err = target->linkToDeath(jdr, NULL, flags);
    

    So when does it actually fire?

    Following this line of thought: the kernel now holds the linkToDeath registration, so when does the driver walk those records and queue the dead-binder work? That is exactly the purpose of the mechanism: it fires when the Binder server side dies. So we need to understand what happens after a Binder process dies, which is what we analyze next.

    A small observation

    start


    When debugging Binder, the kernel log contains some debug information, for example:

    With the BINDER_DEBUG_OPEN_CLOSE debug switch enabled, the driver mainly logs its open, mmap, close, flush and release paths.

    Concrete kernel log lines look like this:

    • binder_open: 4681:4681
    • binder_mmap: 4681 b6b42000-b6c40000 (1016 K) vma 200071 pagep 79f
    • binder: 4681 close vm area b6b42000-b6c40000 (1016 K) vma 2220051 pagep 79f
    • binder_flush: 4681 woke 0 threads
    • binder_release: 4681 threads 1, nodes 0 (ref 0), refs 2, active transactions 0, buffers 1, pages 1

    The corresponding log formats are:

    • binder_open: group_leader->pid:pid
    • binder_mmap: pid vm_start-vm_end (vm_size K) vma vm_flags pagep vm_page_prot
    • binder: pid close vm area vm_start-vm_end (vm_size K) vma vm_flags pagep vm_page_prot
    • binder_flush: pid woke wake_count threads
    • binder_release: pid threads threads, nodes nodes (ref incoming_refs), refs outgoing_refs, active transactions active_transactions, buffers buffers, pages page_count

    The meanings of the fields:

    • vm_page_prot: the VMA access permissions of the current process;
    • wake_count: the number of threads in the BINDER_LOOPER_STATE_WAITING sleep state that this process woke up;
    • threads: the number of binder threads in this process;
    • nodes: the number of binder_node objects created in this process;
    • incoming_refs: the number of refs pointing at this process's nodes;
    • outgoing_refs: the number of refs this process holds to other processes;
    • active_transactions: the total number of transactions across all binder threads of this process;
    • buffers: the number of buffers currently allocated by this process;
    • page_count: the number of physical pages currently allocated by this process.

    The corresponding functions:

    • binder_open()
    • binder_vma_open() 或者 binder_mmap()
    • binder_vma_close()
    • binder_deferred_flush(), called from binder_flush (see the call stack below)
    • binder_deferred_release(), called from binder_release (see the call stack below)

    end


    Here we focus on the call stack of binder_release:

    binder_release  
      binder_defer_work(proc, BINDER_DEFERRED_RELEASE);
        queue_work(binder_deferred_workqueue, &binder_deferred_work);
          binder_deferred_func    // bound via DECLARE_WORK(binder_deferred_work, binder_deferred_func);
            binder_deferred_release
    

    As the name suggests, binder_release is called when the process owning the binder ends. binder_open opens the binder driver /dev/binder, which is a character device, and obtains a file descriptor. When the process exits, its open files are closed; close() is called, and the corresponding driver method is release().

    Think about it this way: in Linux everything is a file. Android operates on many device nodes, such as input event nodes and the binder node. Since these are files, they have file operations, which necessarily include opening and closing, and we have already confirmed this on the binder side with binder_open(). There must therefore be a matching close path for this node, so starting the analysis from close is the natural move.

    binder.c

    void binder_release(struct binder_state *bs, uint32_t target)
    {
        uint32_t cmd[2];
        cmd[0] = BC_RELEASE;
        cmd[1] = target;
        binder_write(bs, cmd, sizeof(cmd));
    }
    
    int binder_write(struct binder_state *bs, void *data, size_t len)
    {
        struct binder_write_read bwr;
        int res;
    
        bwr.write_size = len;
        bwr.write_consumed = 0;
        bwr.write_buffer = (uintptr_t) data;
        bwr.read_size = 0;
        bwr.read_consumed = 0;
        bwr.read_buffer = 0;
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        if (res < 0) {
            fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                    strerror(errno));
        }
        return res;
    }
    

    We know that all BC commands written by user space go through binder_thread_write:

    binder_thread_write(){
        while (ptr < end && thread->return_error == BR_OK) {
        get_user(cmd, (uint32_t __user *)ptr); // read the Binder protocol command (BC code) from the IPC data
            switch (cmd) {
                case BC_INCREFS: ...
                case BC_ACQUIRE: ...
                case BC_RELEASE: ...
                case BC_DECREFS: ...
                case BC_INCREFS_DONE: ...
                case BC_ACQUIRE_DONE: ...
                case BC_FREE_BUFFER: ...
                
                case BC_TRANSACTION:
                case BC_REPLY: {
                    struct binder_transaction_data tr;
                    copy_from_user(&tr, ptr, sizeof(tr)); // copy tr from user space into the kernel
                    // handled by binder_transaction()
                    binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
                    break;
    
                case BC_REGISTER_LOOPER: ...
                case BC_ENTER_LOOPER: ...
                case BC_EXIT_LOOPER: ...
                case BC_REQUEST_DEATH_NOTIFICATION: ...
                case BC_CLEAR_DEATH_NOTIFICATION:  ...
                case BC_DEAD_BINDER_DONE: ...
                }
            }
        }
    }
    

    We can clearly see that BC_RELEASE is among the handled commands.
    This function needs no further explanation; binder has been analyzed before, see my other posts.
    Writing a BINDER_WRITE_READ ioctl tells the driver "I want to write some data", and that data carries the BC_RELEASE command.
    BC_RELEASE drops one reference on the binder object, much like decrementing a file descriptor's reference count; separately, when the process finally closes /dev/binder, the driver's release() callback runs, which is what we look at next.

    binder.c

    static const struct file_operations binder_fops = {
      .owner = THIS_MODULE,
      .poll = binder_poll,
      .unlocked_ioctl = binder_ioctl,
      .compat_ioctl = binder_ioctl,
      .mmap = binder_mmap,
      .open = binder_open,
      .flush = binder_flush,
      .release = binder_release, // the release() callback
    };
    
    
    static int binder_release(struct inode *nodp, struct file *filp)
    {
      struct binder_proc *proc = filp->private_data;
      debugfs_remove(proc->debugfs_entry);
      binder_defer_work(proc, BINDER_DEFERRED_RELEASE); // see below
      return 0;
    }
    
    static void binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
    {
      mutex_lock(&binder_deferred_lock); // acquire the lock
      // record BINDER_DEFERRED_RELEASE
      proc->deferred_work |= defer; 
      if (hlist_unhashed(&proc->deferred_work_node)) {
        hlist_add_head(&proc->deferred_work_node, &binder_deferred_list);
        // queue binder_deferred_work onto the work queue
        queue_work(binder_deferred_workqueue, &binder_deferred_work);
      }
      mutex_unlock(&binder_deferred_lock); // release the lock
    }
    
    // the global work queue
    static struct workqueue_struct *binder_deferred_workqueue;
    
    static int __init binder_init(void)
    {
      int ret;
      // create a work queue named "binder"
      binder_deferred_workqueue = create_singlethread_workqueue("binder");
      if (!binder_deferred_workqueue)
        return -ENOMEM;
      ...
    }
    
    device_initcall(binder_init);
    
    
    static DECLARE_WORK(binder_deferred_work, binder_deferred_func);
    
    #define DECLARE_WORK(n, f)            \
      struct work_struct n = __WORK_INITIALIZER(n, f)
    
    #define __WORK_INITIALIZER(n, f) {          \
      .data = WORK_DATA_STATIC_INIT(),        \
      .entry  = { &(n).entry, &(n).entry },        \
      .func = (f),              \
      __WORK_INIT_LOCKDEP_MAP(#n, &(n))        \
      }
    

    During binder device driver initialization, binder_init() calls create_singlethread_workqueue("binder") to create a workqueue named "binder". A workqueue is a simple and effective kernel-thread mechanism provided by the kernel for deferring work.

    binder_deferred_func

    static void binder_deferred_func(struct work_struct *work)
    {
        binder_deferred_release(proc);
    }
    
    static void binder_deferred_release(struct binder_proc *proc)
    {
      struct binder_transaction *t;
      struct rb_node *n;
      int threads, nodes, incoming_refs, outgoing_refs, buffers,
        active_transactions, page_count;
    
      hlist_del(&proc->proc_node); // remove the proc_node entry
    
      if (binder_context_mgr_node && binder_context_mgr_node->proc == proc) {
        binder_context_mgr_node = NULL;
      }
    
      // release binder_thread objects
      threads = 0;
      active_transactions = 0;
      while ((n = rb_first(&proc->threads))) {
        struct binder_thread *thread;
        thread = rb_entry(n, struct binder_thread, rb_node);
        threads++;
        active_transactions += binder_free_thread(proc, thread);
      }
    
      // release binder_node objects
      nodes = 0;
      incoming_refs = 0;
      while ((n = rb_first(&proc->nodes))) {
        struct binder_node *node;
        node = rb_entry(n, struct binder_node, rb_node);
        nodes++;
        rb_erase(&node->rb_node, &proc->nodes);
        incoming_refs = binder_node_release(node, incoming_refs);
      }
    
      // release binder_ref objects
      outgoing_refs = 0;
      while ((n = rb_first(&proc->refs_by_desc))) {
        struct binder_ref *ref;
    
        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        outgoing_refs++;
        binder_delete_ref(ref);
      }
      
      // release pending binder_work items
      binder_release_work(&proc->todo);
      binder_release_work(&proc->delivered_death);
    
      buffers = 0;
      while ((n = rb_first(&proc->allocated_buffers))) {
        struct binder_buffer *buffer;
        buffer = rb_entry(n, struct binder_buffer, rb_node);
    
        t = buffer->transaction;
        if (t) {
          t->buffer = NULL;
          buffer->transaction = NULL;
        }
        // release the binder_buffer
        binder_free_buf(proc, buffer);
        buffers++;
      }
    
      binder_stats_deleted(BINDER_STAT_PROC);
    
      page_count = 0;
      if (proc->pages) {
        int i;
    
        for (i = 0; i < proc->buffer_size / PAGE_SIZE; i++) {
          void *page_addr;
          if (!proc->pages[i])
            continue;
    
          page_addr = proc->buffer + i * PAGE_SIZE;
          unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
          __free_page(proc->pages[i]);
          page_count++;
        }
        kfree(proc->pages);
        vfree(proc->buffer);
      }
      put_task_struct(proc->tsk);
      kfree(proc);
    }
    

    Here proc is the binder_proc of the dying server (Bn) side.

    The main work of binder_deferred_release is:

    • binder_free_thread(proc, thread)
    • binder_node_release(node, incoming_refs);
    • binder_delete_ref(ref);
    • binder_release_work(&proc->todo);
    • binder_release_work(&proc->delivered_death);
    • binder_free_buf(proc, buffer);
      plus freeing the various pieces of memory.

    What we care about now is the release of binder_node, i.e. the binder entity:

    
    static int binder_node_release(struct binder_node *node, int refs)
    {
      struct binder_ref *ref;
      int death = 0;
    
      list_del_init(&node->work.entry);
      binder_release_work(&node->async_todo); // important
    
      if (hlist_empty(&node->refs)) {
        kfree(node); // no refs remain, so delete the node directly
        binder_stats_deleted(BINDER_STAT_NODE);
        return refs;
      }
    
      node->proc = NULL;
      node->local_strong_refs = 0;
      node->local_weak_refs = 0;
      hlist_add_head(&node->dead_node, &binder_dead_nodes);
    
      hlist_for_each_entry(ref, &node->refs, node_entry) {
        refs++;
        if (!ref->death)
          continue;
        death++;
    
        if (list_empty(&ref->death->work.entry)) {
          // key step: queue a BINDER_WORK_DEAD_BINDER work item onto the todo list
          ref->death->work.type = BINDER_WORK_DEAD_BINDER;
          list_add_tail(&ref->death->work.entry, &ref->proc->todo);
          wake_up_interruptible(&ref->proc->wait);
        } 
      }
      return refs;
    }
    

    This method iterates over all binder_refs of the binder_node. For every ref that has registered a death notification, it queues a BINDER_WORK_DEAD_BINDER work item onto the todo list of the process owning that binder_ref and wakes up the binder thread waiting on proc->wait.

    static void binder_release_work(struct list_head *list)
    {
      struct binder_work *w;
      while (!list_empty(list)) {
        w = list_first_entry(list, struct binder_work, entry);
        list_del_init(&w->entry); // remove the binder_work from the list
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
          struct binder_transaction *t;
          t = container_of(w, struct binder_transaction, work);
          if (t->buffer->target_node &&
              !(t->flags & TF_ONE_WAY)) {
            // send a failed reply
            binder_send_failed_reply(t, BR_DEAD_REPLY);
          } else {
            t->buffer->transaction = NULL;
            kfree(t);
            binder_stats_deleted(BINDER_STAT_TRANSACTION);
          }
        } break;
        
        case BINDER_WORK_TRANSACTION_COMPLETE: {
          kfree(w);
          binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
        } break;
        
        case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
        case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
          struct binder_ref_death *death;
          death = container_of(w, struct binder_ref_death, work);
          kfree(death);
          binder_stats_deleted(BINDER_STAT_DEATH);
        } break;
        
        default:
          break;
        }
      }
    
    }
    

    At this point it is clear that during binder_node_release, a BINDER_WORK_DEAD_BINDER work item is queued and the binder thread waiting on proc->wait is woken up.

    Now let's go back and look at binder_thread_read:

    static int binder_thread_read(struct binder_proc *proc,
                      struct binder_thread *thread,
                      binder_uintptr_t binder_buffer, size_t size,
                      binder_size_t *consumed, int non_block)
        ...
        // the binder thread waits here until there is work for it
        wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
        binder_lock(__func__); // acquire the lock
    
        if (wait_for_proc_work)
            proc->ready_threads--; // one fewer idle binder thread
        thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
    
        while (1) {
            uint32_t cmd;
            struct binder_transaction_data tr;
            struct binder_work *w;
            struct binder_transaction *t = NULL;
    
            // take the previously queued binder_work off the todo list; its type is BINDER_WORK_DEAD_BINDER
            if (!list_empty(&thread->todo)) {
                w = list_first_entry(&thread->todo, struct binder_work,
                             entry);
            } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
                w = list_first_entry(&proc->todo, struct binder_work,
                             entry);
            }
    
            switch (w->type) {
              case BINDER_WORK_DEAD_BINDER:
                case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
                case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
                    struct binder_ref_death *death;
                    uint32_t cmd;
    
                    death = container_of(w, struct binder_ref_death, work);
                    if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                        cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE; // clearing finished
                    ...
                    if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
                        list_del(&w->entry); // remove the death-notification work from the queue
                        kfree(death);
                        binder_stats_deleted(BINDER_STAT_DEATH);
                    } 
                    ...
                    if (cmd == BR_DEAD_BINDER)
                        goto done;
                } break;
            }
        }
        ...
        return 0;
    }
    

    queue_work(binder_deferred_workqueue, &binder_deferred_work);

    This adds binder_deferred_work to the work queue, where binder_deferred_workqueue = create_singlethread_workqueue("binder").

    The definition static DECLARE_WORK(binder_deferred_work, binder_deferred_func); binds the work item to a handler function, so the queued work later runs binder_deferred_func.

    Inside binder_deferred_func we can see:

     if (defer & BINDER_DEFERRED_RELEASE)
          binder_deferred_release(proc);
    

    Let's now condense the call stack:

    static int binder_release(struct inode *nodp, struct file *filp)
    {
        binder_defer_work(proc, BINDER_DEFERRED_RELEASE);
    }
    
    static void binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
    {
        // record BINDER_DEFERRED_RELEASE
        proc->deferred_work |= defer; 
        // queue binder_deferred_work onto the work queue
        queue_work(binder_deferred_workqueue, &binder_deferred_work);
    }
    

    We already know that the work queued here ends up running binder_deferred_func:

    static void binder_deferred_func(struct work_struct *work)
    {
        if (defer & BINDER_DEFERRED_RELEASE)
          binder_deferred_release(proc); 
    }
    
    
    static void binder_deferred_release(struct binder_proc *proc)
    {
        hlist_del(&proc->proc_node); // remove the proc_node entry
        // release binder_thread, binder_node, binder_ref, binder_work and binder_buf objects;
        // while releasing each binder_node, binder_node_release is called
        incoming_refs = binder_node_release(node, incoming_refs);
    }
    
    static int binder_node_release(struct binder_node *node, int refs)
    {
        binder_release_work(&node->async_todo);
        if (list_empty(&ref->death->work.entry)) {
            // queue a BINDER_WORK_DEAD_BINDER work item onto the todo list
            ref->death->work.type = BINDER_WORK_DEAD_BINDER;
            list_add_tail(&ref->death->work.entry, &ref->proc->todo);
            wake_up_interruptible(&ref->proc->wait);
        }
    }
    

    So now we understand: binder_node_release iterates over all binder_refs of the binder_node, and whenever a death notification is registered it queues a BINDER_WORK_DEAD_BINDER work item onto the todo list of the process owning that binder_ref and wakes up the binder thread waiting on proc->wait.

    As always, the hub of binder data transfer is still binder_thread_read; let's see how this method handles a binder death.

    static int binder_thread_read(struct binder_proc *proc,
                      struct binder_thread *thread,
                      binder_uintptr_t binder_buffer, size_t size,
                      binder_size_t *consumed, int non_block){
        while (1) {
            // take the previously queued binder_work off the todo list; its type is BINDER_WORK_DEAD_BINDER
            if (!list_empty(&thread->todo)) {
                w = list_first_entry(&thread->todo, struct binder_work,
                             entry);
            } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
                w = list_first_entry(&proc->todo, struct binder_work,
                             entry);
            }
            switch (w->type) {
                case BINDER_WORK_DEAD_BINDER: {
                    //write the dead-binder notification back to user space
                    put_user(cmd, (uint32_t __user *)ptr);
                    //move this work onto the delivered_death queue
                    list_move(&w->entry, &proc->delivered_death);
                }
                
            }
        }          
    }
    

    Since the result is written to user space, user space must be blocked waiting to read it.

    IPCThreadState.cpp

    
    status_t IPCThreadState::getAndExecuteCommand()
    {
        status_t result;
        int32_t cmd;
        result = talkWithDriver(); // interact with the binder driver
        if (result >= NO_ERROR) {
            cmd = mIn.readInt32(); // read the command
            result = executeCommand(cmd); // the core part
        }
        return result;
    }
    
    status_t IPCThreadState::executeCommand(int32_t cmd)
    {
        BBinder* obj;
        switch ((uint32_t)cmd) {
          case BR_DEAD_BINDER:
          {
              BpBinder *proxy = (BpBinder*)mIn.readPointer();
              proxy->sendObituary();
              mOut.writeInt32(BC_DEAD_BINDER_DONE);
              mOut.writePointer((uintptr_t)proxy);
          } break;
          ...
        }
        ...
        return result;
    }
    

    The death notification is delivered only once here because there is only one underlying binder entity, so the death callback is sent a single time.

    BpBinder::sendObituary

    
    void BpBinder::sendObituary()
    {
        ...
        IPCThreadState* self = IPCThreadState::self();
        // clear the death notification in the driver first
        self->clearDeathNotification(mHandle, this);
        self->flushCommands();
        ...
        // the obituaries were saved into obits before mObituaries was cleared,
        // so each saved recipient is reported here
        reportOneDeath(obits->itemAt(i));
    }
    

    reportOneDeath

    void BpBinder::reportOneDeath(const Obituary& obit)
    {
        //promote the weak reference to a strong pointer (sp)
        sp<DeathRecipient> recipient = obit.recipient.promote();
        if (recipient == NULL) return;
        //invoke the death notification callback
        recipient->binderDied(this);
    }
    

    binderDied

    private final class AppDeathRecipient implements IBinder.DeathRecipient {
        ...
        public void binderDied() {
            synchronized(ActivityManagerService.this) {
                appDiedLocked(mApp, mPid, mAppThread, true);
            }
        }
    }
    

    At last we arrive at the familiar appDiedLocked() method. We will analyze it next time.

    unlinkToDeath

    With the groundwork above, this is now easy to analyze. On the Java side it is simply the counterpart of linkToDeath(), as in the sketch below; then we look at the native side.
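
    A minimal, hypothetical sketch (not framework source) of the Java-side call, assuming recipient is the same IBinder.DeathRecipient that was previously passed to linkToDeath() on serviceBinder:

    // Hypothetical sketch: unregistering a previously registered death recipient
    void stopWatching(android.os.IBinder serviceBinder,
                      android.os.IBinder.DeathRecipient recipient) {
        // The call goes through BinderProxy.unlinkToDeath() and, natively,
        // BpBinder::unlinkToDeath(), which removes the Obituary and sends
        // BC_CLEAR_DEATH_NOTIFICATION to the driver, as shown below.
        serviceBinder.unlinkToDeath(recipient, 0 /* flags */);
    }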

    BpBinder

    status_t BpBinder::unlinkToDeath(
        const wp<DeathRecipient>& recipient, void* cookie, uint32_t flags,
        wp<DeathRecipient>* outRecipient)
    {
        mObituaries->removeAt(i); // remove the death notification entry
        // clear the death notification in the driver
        self->clearDeathNotification(mHandle, this);
        self->flushCommands();
    }
    
    status_t IPCThreadState::clearDeathNotification(int32_t handle, BpBinder* proxy)
    {
        mOut.writeInt32(BC_CLEAR_DEATH_NOTIFICATION);
        mOut.writeInt32((int32_t)handle);
        mOut.writePointer((uintptr_t)proxy);
        return NO_ERROR;
    }
    

    Again, a BC_CLEAR_DEATH_NOTIFICATION command is written down to the kernel.

    And, as always, you know where it ends up: binder_thread_write.

    static int binder_thread_write(struct binder_proc *proc,
          struct binder_thread *thread,
          binder_uintptr_t binder_buffer, size_t size,
          binder_size_t *consumed)
    {
        switch (cmd) {
            case BC_CLEAR_DEATH_NOTIFICATION: { // clear the death notification
            
                ref = binder_get_ref(proc, target); // get the binder_ref for the target service
                // queue a BINDER_WORK_CLEAR_DEATH_NOTIFICATION work item
                death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION;
                list_add_tail(&death->work.entry, &thread->todo);
                
            }
        }
    }
    

    The work type is set to BINDER_WORK_CLEAR_DEATH_NOTIFICATION and the item is added to the todo list; in other words, the pending registration is replaced by a clear-notification work item.

    Every process that takes part in Binder IPC opens the /dev/binder file. When a process exits abnormally, the binder driver makes sure the /dev/binder file that the dying process never closed properly is released: the driver runs the release callback registered for /dev/binder, performs the cleanup, and checks whether any death notifications were registered against that process's BBinder objects. If any are found, it sends a death notification to the corresponding BpBinder side.
