Android Cross-Process Communication IPC Part 18 — Binder: Frame

Author: 凯玲之恋 | Published 2018-08-21 13:51

    This is part of the series: Android Cross-Process Communication IPC Series

    On using system services, see Android Cross-Process Communication IPC Part 20 — Using System Services

    1 The ServiceManager.getService() method

    //frameworks/base/core/java/android/os/ServiceManager.java    line 49
    public static IBinder getService(String name) {
        try {
            //check the cache first
            IBinder service = sCache.get(name); 
            if (service != null) {
                return service;
            } else {
                return getIServiceManager().getService(name); 
            }
        } catch (RemoteException e) {
            Log.e(TAG, "error in getService", e);
        }
        return null;
    }
    
    • First, look the name up in the cache and, on a hit, return the result directly. sCache is a HashMap-based cache.
    • Second, on a miss, call getIServiceManager().getService(name) to fetch the binder, and return that.
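    The two steps above form a simple look-aside cache. Here is a minimal, hypothetical sketch of that pattern (Registry, cacheService and the Function-based remote lookup are all made-up names, not the AOSP code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical illustration of the look-aside cache in getService():
// consult a local HashMap first and fall back to a remote lookup on a miss.
public class Registry {
    private final Map<String, Object> sCache = new HashMap<>();
    private final Function<String, Object> remoteLookup;

    public Registry(Function<String, Object> remoteLookup) {
        this.remoteLookup = remoteLookup;
    }

    // stand-in for the code that pre-populates sCache elsewhere
    public void cacheService(String name, Object service) {
        sCache.put(name, service);
    }

    public Object getService(String name) {
        Object service = sCache.get(name);   // fast path: local cache
        if (service != null) {
            return service;
        }
        return remoteLookup.apply(name);     // slow path: ask the remote manager
    }
}
```

    Note that, as in the real framework, a miss does not populate the cache here; the cache is filled elsewhere.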

    From Android Cross-Process Communication IPC Part 17 — Binder: Framework Layer, Java (Registering a Service), we already know that

    getIServiceManager()

    is equivalent to

    new ServiceManagerProxy(new BinderProxy())

    2 ServiceManagerProxy.getService(name)

    // frameworks/base/core/java/android/os/ServiceManagerNative.java     line 118
        public IBinder getService(String name) throws RemoteException {
            Parcel data = Parcel.obtain();
            Parcel reply = Parcel.obtain();
            data.writeInterfaceToken(IServiceManager.descriptor);
            data.writeString(name);
            //mRemote is a BinderProxy
            mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
            //parse the returned IBinder object out of reply
            IBinder binder = reply.readStrongBinder();
            reply.recycle();
            data.recycle();
            return binder;
        }
    

    There are two key calls here: mRemote.transact() and reply.readStrongBinder(). Let's examine each in turn.

    3 The mRemote.transact() method

    We know mRemote is actually a BinderProxy, so let's look at BinderProxy's transact() method

    //frameworks/base/core/java/android/os/Binder.java   line 501
        public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
            Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
            if (Binder.isTracingEnabled()) { Binder.getTransactionTracker().addTrace(); }
            return transactNative(code, data, reply, flags);
        }
    
        // frameworks/base/core/java/android/os/Binder.java   line 507
        public native boolean transactNative(int code, Parcel data, Parcel reply,
                int flags) throws RemoteException;
    

    The Binder.checkParcel() method was covered earlier, so we won't repeat it here. transact() simply delegates to the native transactNative() method, which takes us into JNI.

    3.1 The android_os_BinderProxy_transact() function

    // frameworks/base/core/jni/android_util_Binder.cpp     line 1083
    static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
            jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
    {
        if (dataObj == NULL) {
            jniThrowNullPointerException(env, NULL);
            return JNI_FALSE;
        }
        //convert the Java Parcel to a native Parcel
        Parcel* data = parcelForJavaObject(env, dataObj);
        if (data == NULL) {
            return JNI_FALSE;
        }
        Parcel* reply = parcelForJavaObject(env, replyObj);
        if (reply == NULL && replyObj != NULL) {
            return JNI_FALSE;
        }
        // gBinderProxyOffsets.mObject holds the new BpBinder(0) object
        IBinder* target = (IBinder*)
            env->GetLongField(obj, gBinderProxyOffsets.mObject);
        if (target == NULL) {
            jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
            return JNI_FALSE;
        }
    
        ALOGV("Java code calling transact on %p in Java object %p with code %" PRId32 "\n",
                target, obj, code);
    
        bool time_binder_calls;
        int64_t start_millis;
        if (kEnableBinderSample) {
            // Only log the binder call duration for things on the Java-level main thread.
            // But if we don't
            time_binder_calls = should_time_binder_calls();
    
            if (time_binder_calls) {
                start_millis = uptimeMillis();
           }
        }
    
        //printf("Transact from Java code to %p sending: ", target); data->print();
        //gBinderProxyOffsets.mObject holds the new BpBinder(0) object
        status_t err = target->transact(code, *data, reply, flags);
        //if (reply) printf("Transact from Java code to %p received: ", target); reply->print();
    
        if (kEnableBinderSample) {
            if (time_binder_calls) {
                conditionally_log_binder_call(start_millis, target, code);
            }
        }
    
        if (err == NO_ERROR) {
            return JNI_TRUE;
        } else if (err == UNKNOWN_TRANSACTION) {
            return JNI_FALSE;
        }
    
        signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
        return JNI_FALSE;
    }
    

    The key line in the code above is:

    status_t err = target->transact(code, *data, reply, flags);
    

    3.2 The BpBinder::transact() function

    //frameworks/native/libs/binder/BpBinder.cpp    line 159
    status_t BpBinder::transact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
        if (mAlive) {
            status_t status = IPCThreadState::self()->transact(
                mHandle, code, data, reply, flags);
            if (status == DEAD_OBJECT) mAlive = 0;
            return status;
        }
        return DEAD_OBJECT;
    }
    

    This in turn calls IPCThreadState's transact() function

    3.3 The IPCThreadState::transact() function

    //frameworks/native/libs/binder/IPCThreadState.cpp    line 548
    status_t IPCThreadState::transact(int32_t handle,
                                      uint32_t code, const Parcel& data,
                                      Parcel* reply, uint32_t flags)
    {
        status_t err = data.errorCheck(); //sanity-check the data
        flags |= TF_ACCEPT_FDS;
        ....
        if (err == NO_ERROR) {
             // write the transaction data
            err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
        }
        ...
    
        // By default the call is not oneway, i.e. we must wait for the server's reply
        if ((flags & TF_ONE_WAY) == 0) {
            if (reply) {
                //wait for the response
                err = waitForResponse(reply);
            } else {
                Parcel fakeReply;
                err = waitForResponse(&fakeReply);
            }
        } else {
            err = waitForResponse(NULL, NULL);
        }
        return err;
    }
    

    There are two main steps:

    • First, call writeTransactionData() to write the transaction data
    • Then, call waitForResponse() to obtain the result
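    The two steps above (write the transaction, then loop until a reply command arrives) can be sketched as follows. This is purely illustrative: the queue-backed "driver" and all names here are stand-ins for /dev/binder, not the AOSP code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the shape of IPCThreadState::transact(): first write
// the transaction, then loop in waitForResponse() until the "driver" (here a
// plain queue) hands back a BR_REPLY command.
public class TransactSketch {
    static final int BR_NOOP  = 0;
    static final int BR_REPLY = 1;

    private final Deque<int[]> driver = new ArrayDeque<>();

    int transact(int code, int payload) {
        // writeTransactionData(): our toy driver echoes the request as a reply,
        // preceded by a no-op command so the loop has something to skip
        driver.add(new int[]{BR_NOOP, 0});
        driver.add(new int[]{BR_REPLY, code + payload});
        return waitForResponse();
    }

    private int waitForResponse() {
        while (!driver.isEmpty()) {
            int[] msg = driver.poll();              // talkWithDriver() + readInt32()
            if (msg[0] == BR_REPLY) return msg[1];  // got the reply, stop looping
            // other commands would be dispatched here (executeCommand())
        }
        return -1; // roughly where the real code reports a dead object
    }
}
```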

    Now let's look at the key parts of waitForResponse()

    3.4 The IPCThreadState::waitForResponse() function

    //frameworks/native/libs/binder/IPCThreadState.cpp    line 712
    status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
    {
        int32_t cmd;
        int32_t err;
        while (1) {
            if ((err=talkWithDriver()) < NO_ERROR) break; 
            ...
            cmd = mIn.readInt32();
            switch (cmd) {
              case BR_REPLY:
              {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                    //when the reply Parcel is recycled, freeBuffer is called to release this memory
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        ...
                    }
                }
              }
              case :...
            }
        }
        ...
        return err;
    }
    

    At this point the thread is waiting for a reply; when one arrives, the command is read via cmd = mIn.readInt32()

    3.5 The binder_send_reply() function

    //
    void binder_send_reply(struct binder_state *bs,
                           struct binder_io *reply,
                           binder_uintptr_t buffer_to_free,
                           int status)
    {
        struct {
            uint32_t cmd_free;
            binder_uintptr_t buffer;
            uint32_t cmd_reply;
            struct binder_transaction_data txn;
        } __attribute__((packed)) data;
        //free-buffer command
        data.cmd_free = BC_FREE_BUFFER; 
        data.buffer = buffer_to_free;
        //reply command
        data.cmd_reply = BC_REPLY;
        data.txn.target.ptr = 0;
        data.txn.cookie = 0;
        data.txn.code = 0;
        if (status) {
            ...
        } else {
            data.txn.flags = 0;
            data.txn.data_size = reply->data - reply->data0;
            data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
            data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
            data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
        }
        //send to the Binder driver
        binder_write(bs, &data, sizeof(data));
    }
    

    binder_write() sends the BC_FREE_BUFFER and BC_REPLY command protocols to the driver.
    Inside the driver, the path is binder_ioctl -> binder_ioctl_write_read -> binder_thread_write; since the command is BC_REPLY, it enters binder_transaction(), which inserts a transaction into the todo queue of the thread that requested the service. The requesting process, while executing talkWithDriver(), then reaches binder_thread_read() and processes the work on its todo queue.
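    The packed struct above concatenates two command protocols into one buffer so a single write reaches the driver. A rough ByteBuffer illustration of that layout (field sizes simplified, the txn body truncated, and the command constants invented for the demo):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical sketch: binder_send_reply() packs two commands (BC_FREE_BUFFER
// and BC_REPLY) back to back so one write delivers both to the driver.
public class ReplyPacker {
    static final int BC_FREE_BUFFER = 0x6303; // illustrative values only
    static final int BC_REPLY       = 0x6301;

    static ByteBuffer pack(long bufferToFree, long replyDataPtr, long replyDataSize) {
        ByteBuffer data = ByteBuffer.allocate(4 + 8 + 4 + 16).order(ByteOrder.nativeOrder());
        data.putInt(BC_FREE_BUFFER);  // cmd_free
        data.putLong(bufferToFree);   // buffer
        data.putInt(BC_REPLY);        // cmd_reply
        data.putLong(replyDataPtr);   // txn.data.ptr.buffer (txn abbreviated here)
        data.putLong(replyDataSize);  // txn.data_size
        data.flip();
        return data;
    }
}
```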

    4 The Parcel.readStrongBinder() method

    Parcel.readStrongBinder() essentially mirrors the writeStrongBinder() process in reverse.
    Let's look at its source first

    //frameworks/base/core/java/android/os/Parcel.java    line 1686
        /**
         * Read an object from the parcel at the current dataPosition().
         */
        public final IBinder readStrongBinder() {
            return nativeReadStrongBinder(mNativePtr);
        }
    
    
      private static native IBinder nativeReadStrongBinder(long nativePtr);
    

    Internally it calls nativeReadStrongBinder(), which, as the declaration above shows, is a native method; via JNI it reaches the android_os_Parcel_readStrongBinder() function

    4.1 The android_os_Parcel_readStrongBinder() function

    //frameworks/base/core/jni/android_os_Parcel.cpp          line 429
    static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr)
    {
        Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
        if (parcel != NULL) {
            return javaObjectForIBinder(env, parcel->readStrongBinder());
        }
        return NULL;
    }
    

    javaObjectForIBinder() converts a native-layer BpBinder object into a Java-layer BinderProxy object.
    The function above also calls readStrongBinder()

    4.2 The Parcel::readStrongBinder() function

    //frameworks/native/libs/binder/Parcel.cpp  line 1334
    sp<IBinder> Parcel::readStrongBinder() const
    {
        sp<IBinder> val;
        unflatten_binder(ProcessState::self(), *this, &val);
        return val;
    }
    

    This is straightforward: it mainly calls the unflatten_binder() function

    4.3 The unflatten_binder() function

    //frameworks/native/libs/binder/Parcel.cpp  line 293
    status_t unflatten_binder(const sp<ProcessState>& proc,
        const Parcel& in, sp<IBinder>* out)
    {
        const flat_binder_object* flat = in.readObject(false);
        if (flat) {
            switch (flat->type) {
                case BINDER_TYPE_BINDER:
                    *out = reinterpret_cast<IBinder*>(flat->cookie);
                    return finish_unflatten_binder(NULL, *flat, in);
                case BINDER_TYPE_HANDLE:
                    //this branch is taken
                    *out = proc->getStrongProxyForHandle(flat->handle);
                    //creates the BpBinder object
                    return finish_unflatten_binder(
                        static_cast<BpBinder*>(out->get()), *flat, in);
            }
        }
        return BAD_TYPE;
    }
    

    PS: /frameworks/native/libs/binder/Parcel.cpp contains two unflatten_binder() overloads. They differ only in the last parameter: one takes sp<IBinder>* out, the other wp<IBinder>* out. Don't confuse them.

    In unflatten_binder() we take the case BINDER_TYPE_HANDLE: branch and call getStrongProxyForHandle().
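    The dispatch above hinges on the type tag of the flattened object: a local binder is recovered directly from its cookie pointer, while a remote one is reconstructed as a proxy around its handle. A hypothetical Java sketch of that idea (all names invented):

```java
// Hypothetical sketch of the dispatch in unflatten_binder(): a flattened binder
// carries a type tag; BINDER_TYPE_BINDER refers to a local object, while
// BINDER_TYPE_HANDLE names a remote object by handle.
public class FlatBinder {
    enum Type { BINDER, HANDLE }

    final Type type;
    final Object localObject; // used when type == BINDER (the "cookie")
    final int handle;         // used when type == HANDLE

    FlatBinder(Type type, Object localObject, int handle) {
        this.type = type;
        this.localObject = localObject;
        this.handle = handle;
    }

    // unflatten: same-process binders come back as the local object itself,
    // cross-process binders as a proxy wrapping the handle
    Object unflatten() {
        switch (type) {
            case BINDER: return localObject;
            case HANDLE: return "proxy-for-handle-" + handle; // stand-in for BpBinder
            default:     throw new IllegalStateException("BAD_TYPE");
        }
    }
}
```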

    4.4 The getStrongProxyForHandle() function

    sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
    {
        sp<IBinder> result;
    
        AutoMutex _l(mLock);
        //look up the resource entry for this handle
        handle_entry* e = lookupHandleLocked(handle);
    
        if (e != NULL) {
            IBinder* b = e->binder;
            if (b == NULL || !e->refs->attemptIncWeak(this)) {
                ...
                //if no IBinder exists for this handle, or its weak reference is invalid, create a BpBinder
                b = new BpBinder(handle);
                e->binder = b;
                if (b) e->refs = b->getWeakRefs();
                result = b;
            } else {
                result.force_set(b);
                e->refs->decWeak(this);
            }
        }
        return result;
    }
    

    Through this function we finally create the BpBinder proxy object that points at the Binder server. javaObjectForIBinder() then converts that native BpBinder into a Java-layer BinderProxy. In other words, getService() ultimately yields a BinderProxy that refers to the target Binder service.
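    The lookup-or-create behaviour of getStrongProxyForHandle() amounts to keeping one proxy per handle, created lazily and reused afterwards. A hypothetical Java sketch of that table (names invented, ref-counting omitted):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of ProcessState::getStrongProxyForHandle(): one proxy
// per handle, created on first lookup and returned on every later lookup.
public class ProxyTable {
    public static class Proxy {
        public final int handle;
        Proxy(int handle) { this.handle = handle; }
    }

    private final Map<Integer, Proxy> entries = new HashMap<>();

    public synchronized Proxy getProxyForHandle(int handle) {
        Proxy p = entries.get(handle);     // lookupHandleLocked(handle)
        if (p == null) {
            p = new Proxy(handle);         // new BpBinder(handle)
            entries.put(handle, p);
        }
        return p;                          // same object on every later call
    }
}
```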

    5 Summary

    The core of getService() therefore boils down to:

    public static IBinder getService(String name) {
        ...
        //the Java-layer Parcel also has to be converted to a native Parcel here
        Parcel reply = Parcel.obtain(); 
        // interact with the Binder driver
        BpBinder::transact(GET_SERVICE_TRANSACTION, *data, reply, 0);  
        IBinder binder = javaObjectForIBinder(env, new BpBinder(handle));
        ...
    }
    

    javaObjectForIBinder() creates the BinderProxy object and stores the BpBinder object's address in the BinderProxy's mObject field. Getting a service thus amounts to sending a GET_SERVICE_TRANSACTION command through BpBinder, exchanging data with the binder driver.

    Reference

    Android Cross-Process Communication IPC Part 10 — Binder: Framework Layer, Java
