Binder: From Initiating Communication to talkWithDriver()


Author: nianxing | Published 2020-09-26 00:10

    A First Look at Binder

    While investigating ANR issues, I often see traces where an app's main thread is making a binder call into a method of a target process.
    Seeing this kind of trace so often made me wonder what a binder call actually does.

    When the native frames of such a trace show the thread inside talkWithDriver(), the command has already been handed to the binder driver and the thread is blocked waiting for the result. Why is that?

    Binder is big and difficult, but binder IPC is what connects every process in the Android system, so to answer the question above I decided to take a first step into exploring it.

    Note: all code in this article is from Android Q, i.e. Android 10.

    Here is a trace of one of WeChat's binder calls:
    ----- pid 3007 at 2020-08-18 17:55:36 -----
    Cmd line: com.tencent.mm
     
    "main" prio=5 tid=1 Native
     
    group="main" sCount=1 dsCount=0 flags=1 obj=0x72f22ab8 self=0xb4000071bbc3a380
    sysTid=3007 nice=-10 cgrp=default sched=0/0 handle=0x72e298c4f8
    state=S schedstat=( 1467439708 342974433 1729 ) utm=117 stm=29 core=3 HZ=100
    stack=0x7fc0a57000-0x7fc0a59000 stackSize=8192KB
    held mutexes=
    native: #00 pc 000000000009ab94 /apex/com.android.runtime/lib64/bionic/libc.so (__ioctl+4)
    native: #01 pc 00000000000576c8 /apex/com.android.runtime/lib64/bionic/libc.so (ioctl+156)
    native: #02 pc 0000000000050a44 /system/lib64/libbinder.so (android::IPCThreadState::talkWithDriver(bool)+296)
    native: #03 pc 0000000000051a30 /system/lib64/libbinder.so (android::IPCThreadState::waitForResponse(android::Parcel*, int*)+60)
    native: #04 pc 00000000000517a0 /system/lib64/libbinder.so (android::IPCThreadState::transact(int, unsigned int, android::Parcel const&, android::Parcel*, unsigned int)+184)
    native: #05 pc 000000000004a014 /system/lib64/libbinder.so (android::BpBinder::transact(unsigned int, android::Parcel const&, android::Parcel*, unsigned int)+180)
    native: #06 pc 0000000000128ce4 /system/lib64/libandroid_runtime.so (android_os_BinderProxy_transact(_JNIEnv*, _jobject*, int, _jobject*, _jobject*, int)+304)
    at android.os.BinderProxy.transactNative(Native method)
    at android.os.BinderProxy.transact(BinderProxy.java:540)
    at android.media.IAudioService$Stub$Proxy.isBluetoothScoOn(IAudioService.java:3511)
    at android.media.AudioManager.isBluetoothScoOn(AudioManager.java:2141)
    at com.tencent.mm.plugin.audio.d.a.bEu(SourceFile:27)
    at com.tencent.mm.plugin.audio.broadcast.bluetooth.a.U(SourceFile:38)
    at com.tencent.mm.plugin.audio.broadcast.bluetooth.BluetoothReceiver.onReceive(SourceFile:60)
    at android.app.LoadedApk$ReceiverDispatcher$Args.lambda$getRunnable$0$LoadedApk$ReceiverDispatcher$Args(LoadedApk.java:1562)
    at android.app.-$$Lambda$LoadedApk$ReceiverDispatcher$Args$_BumDX2UKsnxLVrE6UJsJZkotuA.run(lambda:-1)
    at android.os.Handler.handleCallback(Handler.java:938)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at android.os.Looper.loop(Looper.java:236)
    at android.app.ActivityThread.main(ActivityThread.java:7869)
    at java.lang.reflect.Method.invoke(Native method)
    at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:656)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:967)
    

    Exploring the Binder Communication Process

    Before digging in, here is an informal step-by-step picture of one binder communication. It shows the registration of MediaPlayerService, which uses binder to talk to servicemanager.

    (1): The client process packs the IPC data into a Parcel object. The binder driver does not understand this object, so the data must then be wrapped, step by step, into structs the driver can work with.

    (2): The client process sends the binder driver a BC_TRANSACTION command protocol, meaning "I want to call / register a service of process XX". The driver does the legwork: it locates the target server process, sends the client a BR_TRANSACTION_COMPLETE return protocol ("I found the one you asked for"), and then sends the server a BR_TRANSACTION return protocol asking it to handle the request ("someone asked me to bring you work").

    (3): The client receives and processes BR_TRANSACTION_COMPLETE. It now knows the request has been delivered, so all that is left is to wait: it re-enters the binder driver and waits for the target server to return the communication result.

    (4): The server process receives the binder request protocol, sees it has new work, handles it, and sends the driver a BC_REPLY command protocol. The driver acknowledges the server with a BR_TRANSACTION_COMPLETE, then tells the client "the job you gave me is done" with a BR_REPLY. Once the server and client each receive and process these, one round of communication is complete.
    [Note]: With work time tight and a technical-sharing session approaching, the diagram below is taken directly from gityuan's blog.

    [Figure: binder communication sequence diagram, from gityuan's blog]

    Exploring Binder Communication via Service Registration

    Now for the dry source-code walkthrough.
    Registering a service simply means the service provider registers itself with ServiceManager. Here the provider acts as the client, and ServiceManager acts as the server.

    frameworks/av/media/mediaserver/main_mediaserver.cpp

    int main(int argc __unused, char **argv __unused)
    {
        signal(SIGPIPE, SIG_IGN);
        //1: obtain the ProcessState instance
        sp<ProcessState> proc(ProcessState::self());
        //2: obtain the BpServiceManager proxy object
        sp<IServiceManager> sm(defaultServiceManager());
        AIcu_initializeIcuOrDie();
        //3: register MediaPlayerService
        MediaPlayerService::instantiate();
        ResourceManagerService::instantiate();
        registerExtensions();
        ::android::hardware::configureRpcThreadpool(16, false); 
        //4: start the binder thread pool
        ProcessState::self()->startThreadPool();
        //5: add the current thread to the thread pool
        IPCThreadState::self()->joinThreadPool();
        ::android::hardware::joinRpcThreadpool();
    }
    

    Registering the Media component

    void MediaPlayerService::instantiate() {
        //obtain the Service Manager proxy object and call its addService member function to register the MediaPlayerService component with Service Manager.
        defaultServiceManager()->addService(
        //"media.player" -> service name; new MediaPlayerService() -> service object.
                String16("media.player"), new MediaPlayerService());
    }
    

    The Parcel is packed here.
    frameworks/native/libs/binder/IServiceManager.cpp

    183    virtual status_t addService(const String16& name, const sp<IBinder>& service,
    184                                bool allowIsolated, int dumpsysPriority) {
           //pack the request into a Parcel
    185        Parcel data, reply;
    186        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    187        data.writeString16(name);//service name
    188        data.writeStrongBinder(service);//<------------------the new MediaPlayerService is stored here
        
    189        data.writeInt32(allowIsolated ? 1 : 0);
    190        data.writeInt32(dumpsysPriority);
        
    191        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    192        return err == NO_ERROR ? reply.readExceptionCode() : err;
    193    }
    

    The service is flattened into the Parcel object. Flattening turns the IBinder into a flat_binder_object struct with a fixed layout, so the binder driver can recognize the binder object inside the IPC data and translate it.
    frameworks/native/libs/binder/Parcel.cpp

    715status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
    716{
    717    return flatten_binder(ProcessState::self(), val, this);
    718}
    
    204status_t flatten_binder(const sp<ProcessState>& /*proc*/,// the ProcessState object
    205    const sp<IBinder>& binder, Parcel* out)//binder points to the MediaPlayerService component; out is the Parcel object
    206{
        //declare the flat_binder_object struct obj
    207    flat_binder_object obj;
    208    ...
    216
    217    if (binder != nullptr) {
        //binder->localBinder() returns the interface of a local binder object; here it is the MediaPlayerService component's local binder object
    218        BBinder *local = binder->localBinder();
    219        if (!local) {
    220           ...
    229        } else {
                  ...
    233            obj.hdr.type = BINDER_TYPE_BINDER;
    234            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
    235            obj.cookie = reinterpret_cast<uintptr_t>(local);
    236        }
    237    } else {
                  ...
    241    }
    242//call finish_flatten_binder() to write the flat_binder_object struct obj into the Parcel object out.
    243    return finish_flatten_binder(binder, obj, out);
    244}
    
    
    

    frameworks/native/libs/binder/Parcel.cpp

    Parcel has two internal buffers: mData and mObjects.

    mData: the data buffer, which stores the payload, including any flat_binder_object structs (binder objects).

    mObjects: the offsets array, which records the position of every flat_binder_object inside mData.

    The binder driver uses this offsets array to locate the binder objects inside the IPC data.

    198inline static status_t finish_flatten_binder(
    199    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
    200{
        //call Parcel's writeObject member method, passing the flat_binder_object struct obj.
    201    return out->writeObject(flat, false);
    202}
    203
        
    1370status_t Parcel::writeObject(const flat_binder_object& val, bool nullMetaData)//val: the flat_binder_object obj; nullMetaData: false
    1371{
        //does the data buffer mData have enough room to write the next binder object?
    1372    const bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity;
        //does the offsets array have enough room to record the next binder object's offset?
    1373    const bool enoughObjects = mObjectsSize < mObjectsCapacity;
    1374    if (enoughData && enoughObjects) {
    1375restart_write:
    1376        *reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val;
    1377
    1387        // Need to write meta-data?
    1388        if (nullMetaData || val.binder != 0) {
    1389            mObjects[mObjectsSize] = mDataPos;
    1390            acquire_object(ProcessState::self(), val, this, &mOpenAshmemSize);
    1391            mObjectsSize++;
    1392        }
    1393
    1394        return finishWrite(sizeof(flat_binder_object));
    1395    }
    1396
        //grow the buffers if not
    1397    if (!enoughData) {
    1398        const status_t err = growData(sizeof(val));
    1399        if (err != NO_ERROR) return err;
    1400    }
    1401    if (!enoughObjects) {
    1402        size_t newSize = ((mObjectsSize+2)*3)/2;
    1403        if (newSize*sizeof(binder_size_t) < mObjectsSize) return NO_MEMORY;   // overflow
    1404        binder_size_t* objects = (binder_size_t*)realloc(mObjects, newSize*sizeof(binder_size_t));
    1405        if (objects == nullptr) return NO_MEMORY;
    1406        mObjects = objects;
    1407        mObjectsCapacity = newSize;
    1408    }
    1409//after growing, jump back up and write the binder object
    1410    goto restart_write;
    1411}
    

    At this point, all the IPC data needed to register the MediaPlayerService component has been packed into the Parcel object. Next it is sent to the binder driver with the BC_TRANSACTION command protocol.

    In the steps that follow, so that the binder driver can understand our IPC data, the Parcel's contents are wrapped, step by step, into structs the driver recognizes.

    ServiceManager's proxy object (a BpBinder) now performs the transact.

    frameworks/native/libs/binder/[BpBinder.cpp]

    217status_t BpBinder::transact(
    218    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    219{
    220    // Once a binder has died, it will never come back to life.
        //mAlive records whether the local binder object this BpBinder refers to is still alive
    221    if (mAlive) {
    227        status_t status = IPCThreadState::self()->transact(
    228            mHandle, code, data, reply, flags);
    241        if (status == DEAD_OBJECT) mAlive = 0;
    242        return status;
    243    }
    244
    245    return DEAD_OBJECT;
    246}
    

    mHandle: the handle of this binder proxy object; since we are talking to ServiceManager here, mHandle is 0.

    code: ADD_SERVICE_TRANSACTION

    data: the Parcel object holding this binder request

    reply: a Parcel object paired with data, used to hold the IPC result.

    flags: whether the IPC is asynchronous (ONEWAY); defaults to 0, i.e. a synchronous binder call.

    frameworks/native/libs/binder/[IPCThreadState.cpp]

    650status_t IPCThreadState::transact(int32_t handle,
    651                                  uint32_t code, const Parcel& data,
    652                                  Parcel* reply, uint32_t flags)
    653{
    654    status_t err;
        //binder.h
    655 //enum transaction_flags {
    247 //TF_ONE_WAY    = 0x01, /* this is a one-way call: async, no return */
    248 //TF_ROOT_OBJECT    = 0x04, /* contents are the component's root object */
    249 //TF_STATUS_CODE    = 0x08, /* contents are a 32-bit status code */
    250 //TF_ACCEPT_FDS = 0x10, /* allow replies with file descriptors */
    251 //};
        
        //1: the |= sets the TF_ACCEPT_FDS bit of flags, allowing the server process to carry file descriptors in its reply
    656    flags |= TF_ACCEPT_FDS;
    657 //2: write the contents of the Parcel object data directly into a binder_transaction_data struct.
    667    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, nullptr);
    668
    669    if (err != NO_ERROR) {
    670        if (reply) reply->setError(err);
    671        return (mLastError = err);
    672    }
    673 //3: is the TF_ONE_WAY bit of flags clear?
    674    if ((flags & TF_ONE_WAY) == 0) {
        ...//synchronous call: use waitForResponse to deliver the command and wait for the reply
    692        if (reply) {
    693            err = waitForResponse(reply);
    694        } else {
    695            Parcel fakeReply;
    696            err = waitForResponse(&fakeReply);
    697        }
        ...
    713    } else {
    714        err = waitForResponse(nullptr, nullptr);
    715    }
    716
    717    return err;
    718}
    

    Here the Parcel object data becomes the binder_transaction_data struct tr.

    1025status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    1026    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
    1027{
            //declare the binder_transaction_data struct tr.
    1028    binder_transaction_data tr;
    1029    //initialize the struct
    1030    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
        
    1031    tr.target.handle = handle;//0
    1032    tr.code = code;//ADD_SERVICE_TRANSACTION
    1033    tr.flags = binderFlags;//TF_ACCEPT_FDS
        
    1034    tr.cookie = 0;
    1035    tr.sender_pid = 0;
    1036    tr.sender_euid = 0;
    1037    
        //sanity-check the Parcel data
    1038    const status_t err = data.errorCheck();
    1039    if (err == NO_ERROR) {
        //to exchange data with the binder driver, wrap the Parcel object data into tr:
        //data's internal data buffer (mData) and offsets array become tr's data buffer and offsets array
    1040        tr.data_size = data.ipcDataSize();//size of tr's data buffer
    1041        tr.data.ptr.buffer = data.ipcData();//tr's data buffer
    1042        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);//size of tr's offsets array
    1043        tr.data.ptr.offsets = data.ipcObjects();//tr's offsets array
    1045    } else if (statusBuffer) {
                ...
    1051    } else {
    1052        return (mLastError = err);
    1053    }
        //IPCThreadState.h declares two Parcel members:
        //Parcel              mIn;
        //Parcel              mOut;
    1054//write the cmd (BC_TRANSACTION) and the packed tr into IPCThreadState's member mOut, meaning a BC_TRANSACTION command protocol is waiting to be sent to the driver. mOut is the outgoing command-protocol buffer.
    1055    mOut.writeInt32(cmd);
    1056    mOut.write(&tr, sizeof(tr));
    1058    return NO_ERROR;
    1059}
    
    [Figure: the BC_TRANSACTION command word followed by the binder_transaction_data tr in the mOut buffer]

    The BC_TRANSACTION command protocol and the binder_transaction_data struct tr are now in the mOut command-protocol buffer.

    IPCThreadState::transact calls waitForResponse(), which repeatedly interacts with the binder driver through talkWithDriver(). Since this is a synchronous binder call, it waits for the driver to return the communication result and dispatches on the returned protocol.

    803status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
    804{
    805    uint32_t cmd;
    806    int32_t err;
    807    //loop, calling talkWithDriver() to interact with the binder driver, so that the BC_TRANSACTION command protocol is delivered for processing, then wait for the driver to return the communication result.
    808    while (1) {
    809        if ((err=talkWithDriver()) < NO_ERROR) break;
    810        err = mIn.errorCheck();
    811        if (err < NO_ERROR) break;
    812        if (mIn.dataAvail() == 0) continue;
    813
    814        cmd = (uint32_t)mIn.readInt32();
    815
    820
    821        switch (cmd) {
    822        case BR_TRANSACTION_COMPLETE:
    823            if (!reply && !acquireResult) goto finish;
    824            break;
    825
    842        case ...
    843        case BR_REPLY:
    844            {
    845                binder_transaction_data tr;
    846                err = mIn.read(&tr, sizeof(tr));
    848                if (err != NO_ERROR) goto finish;
    849
    850                if (reply) {
    851                    if ((tr.flags & TF_STATUS_CODE) == 0) {
    852                        reply->ipcSetDataReference(
    853                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
    854                            tr.data_size,
    855                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
    856                            tr.offsets_size/sizeof(binder_size_t),
    857                            freeBuffer, this);
    858                    } else {
    859                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
    860                        freeBuffer(nullptr,
    861                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
    862                            tr.data_size,
    863                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
    864                            tr.offsets_size/sizeof(binder_size_t), this);
    865                    }
    866                } else {
    867                    freeBuffer(nullptr,
    868                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
    869                        tr.data_size,
    870                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
    871                        tr.offsets_size/sizeof(binder_size_t), this);
    872                    continue;
    873                }
    874            }
    875            goto finish;
    876
    877        default:
    878            err = executeCommand(cmd);
    879            if (err != NO_ERROR) goto finish;
    880            break;
    881        }
    882    }
    883
    884finish:
    885    if (err != NO_ERROR) {
    886        if (acquireResult) *acquireResult = err;
    887        if (reply) reply->setError(err);
    888        mLastError = err;
    889    }
    890
    891    return err;
    892}
    
    894status_t IPCThreadState::talkWithDriver(bool doReceive)
    895{
    899//talkWithDriver interacts with the binder driver via the BINDER_WRITE_READ ioctl. It declares a binder_write_read struct bwr naming an output buffer and an input buffer:
        //the output buffer holds the command protocols to send to the driver; the input buffer holds the return protocols the driver sends back to the process.
        //bwr is set up here, just before entering the kernel.
    900    binder_write_read bwr;
    901
    902    // Is the read buffer empty?
           //needRead is true when every return protocol the driver placed in mIn has been processed.
    903    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    904
    905    // We don't want to write anything if we are still reading
    906    // from data left in the input buffer and the caller
    907    // has requested to read the next data.
           //doReceive means the caller of talkWithDriver only wants to receive return protocols from the driver; it defaults to true.
           //In that case, if mIn still holds unprocessed returns, bwr.write_size is forced to 0: read first, then write.
        //Once there is nothing left to read, bwr.write_size = mOut.dataSize(), ready to push the pending commands into the kernel.
    908    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    909
    910    bwr.write_size = outAvail;
    911    bwr.write_buffer = (uintptr_t)mOut.data();
    912
    913    // This is what we'll read.
        //doReceive and needRead together decide bwr.read_size and read_buffer.
        //doReceive defaults to true: the caller only wants to receive return protocols from the driver.
        //1: doReceive true, needRead true: mIn has been fully processed, so request a read.
        //2: doReceive true, needRead false: mIn still holds unprocessed returns, so do not read more.
        
    914    if (doReceive && needRead) {
        //read_size is set to the capacity of the mIn return-protocol buffer.
    915        bwr.read_size = mIn.dataCapacity();
    916        bwr.read_buffer = (uintptr_t)mIn.data();
    917    } else {
    918        bwr.read_size = 0;
    919        bwr.read_buffer = 0;
    920    }
    921
    934
    935    // Return immediately if there is nothing to do.
    936    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    937
    938    bwr.write_consumed = 0;
    939    bwr.read_consumed = 0;
    940    status_t err;
    941    do {
        //the ioctl syscall enters the kernel and lands in binder_ioctl(), which performs the actual read/write exchange with the binder driver.
    946        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
    947            err = NO_ERROR;
    948        else
    949            err = -errno;
    959    } while (err == -EINTR);//if interrupted by a signal, retry
    960
    966
    967    if (err >= NO_ERROR) {
        //once the ioctl returns, remove the consumed command protocols from mOut.
    968        if (bwr.write_consumed > 0) {
    969            if (bwr.write_consumed < mOut.dataSize())
    970                mOut.remove(0, bwr.write_consumed);
    971            else {
    972                mOut.setDataSize(0);
    974            }
    975        }
        //then save the return protocols read from the driver into mIn, so that waitForResponse's switch can dispatch into the handling path for each return protocol.
    976        if (bwr.read_consumed > 0) {
    977            mIn.setDataSize(bwr.read_consumed);
    978            mIn.setDataPosition(0);
    979        }
    989        return NO_ERROR;
    990    }
    991
    992    return err;
    993}
    

    kernel/msm-4.14/drivers/android/[binder.c]

    We have now arrived in kernel space. binder_ioctl dispatches on cmd to the method that performs the corresponding operation.

    5045static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
    5046{
    5047    int ret;
    5048    struct binder_proc *proc = filp->private_data;
    5049    struct binder_thread *thread;
    5050    unsigned int size = _IOC_SIZE(cmd);
    5051    void __user *ubuf = (void __user *)arg;
    5052
    5053    /*pr_info("binder_ioctl: %d:%d %x %lx\n",
    5054            proc->pid, current->pid, cmd, arg);*/
    5055
    5056    binder_selftest_alloc(&proc->alloc);
    5057
    5058    trace_binder_ioctl(cmd, arg);
    5059
    5060    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    5061    if (ret)
    5062        goto err_unlocked;
    5063
    5064    thread = binder_get_thread(proc);
    5065    if (thread == NULL) {
    5066        ret = -ENOMEM;
    5067        goto err;
    5068    }
    5069
    5070    switch (cmd) {
    5071    case BINDER_WRITE_READ:
    5072        delayacct_binder_start();
    5073        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
    5074        delayacct_binder_end();
    5075        if (ret)
    5076            goto err;
    5077        break;
    
    5165    default:
    5166        ret = -EINVAL;
    5167        goto err;
    5168    }
    5169    ret = 0;
    5178    return ret;
    5179}
    
    4870static int binder_ioctl_write_read(struct file *filp,
    4871                unsigned int cmd, unsigned long arg,
    4872                struct binder_thread *thread)
    4873{
    4874    int ret = 0;
    4875    struct binder_proc *proc = filp->private_data;
    4876    unsigned int size = _IOC_SIZE(cmd);
    4877    void __user *ubuf = (void __user *)arg;
            //the binder transaction data
    4878    struct binder_write_read bwr;
    4879
    4880    if (size != sizeof(struct binder_write_read)) {
    4881        ret = -EINVAL;
    4882        goto out;
    4883    }
            //copy the userspace bwr struct into kernel space
    4884    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
    4885        ret = -EFAULT;
    4886        goto out;
    4887    }
    4893
    4894    if (bwr.write_size > 0) {
            //process the commands in the write buffer (this queues the work for the target process)
    4895        ret = binder_thread_write(proc, thread,
    4896                      bwr.write_buffer,
    4897                      bwr.write_size,
    4898                      &bwr.write_consumed);
    4899        trace_binder_write_done(ret);
    4900        if (ret < 0) {
    4901            bwr.read_consumed = 0;
    4902            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
    4903                ret = -EFAULT;
    4904            goto out;
    4905        }
    4906    }
    4907    if (bwr.read_size > 0) {
            //read the work queued for this thread/process
    4908        ret = binder_thread_read(proc, thread, bwr.read_buffer,
    4909                     bwr.read_size,
    4910                     &bwr.read_consumed,
    4911                     filp->f_flags & O_NONBLOCK);
    4912        trace_binder_read_done(ret);
    4913        binder_inner_proc_lock(proc);
    4914        if (!binder_worklist_empty_ilocked(&proc->todo))
    4915            binder_wakeup_proc_ilocked(proc);
    4916        binder_inner_proc_unlock(proc);
    4917        if (ret < 0) {
    4918            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
    4919                ret = -EFAULT;
    4920            goto out;
    4921        }
    4922    }
            //copy the kernel bwr struct back to user space
    4928    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
    4929        ret = -EFAULT;
    4930        goto out;
    4931    }
    4932out:
    4933    return ret;
    4934}
    

    As an aside, here is a simple variant of copy_from_user, the helper that copies data from user space into kernel space:

    37static inline int copy_from_user(void *to, const void __user volatile *from,
    38               unsigned long n)
    39{
    //check that the user-space address is valid
    40  __chk_user_ptr(from, n);
    //the copy itself is delegated to volatile_memcpy(), which moves the data from user space to kernel space
    41  volatile_memcpy(to, from, n);
    42  return 0;
    43}
    
    static void volatile_memcpy(volatile char *to, const volatile char *from, 
                    unsigned long n)
    {
        while (n--)
            *(to++) = *(from++);
    }
    
    3691static int binder_thread_write(struct binder_proc *proc,
    3692            struct binder_thread *thread,
    3693            binder_uintptr_t binder_buffer, size_t size,
    3694            binder_size_t *consumed)
    3695{
    3696    uint32_t cmd;
    3697    struct binder_context *context = proc->context;
    3698    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    3699    void __user *ptr = buffer + *consumed;
    3700    void __user *end = buffer + size;
    3701
    3702    while (ptr < end && thread->return_error.cmd == BR_OK) {
    3703        int ret;
    3704
    3705        if (get_user(cmd, (uint32_t __user *)ptr))
    3706            return -EFAULT;
    3707        ptr += sizeof(uint32_t);
    3708        trace_binder_command(cmd);
    3714        switch (cmd) {
    3912        ....
            //with cmd == BC_TRANSACTION, execution lands in this case.
    3924        case BC_TRANSACTION:
    3925        case BC_REPLY: {
    3926            struct binder_transaction_data tr;
    3927
    3928            if (copy_from_user(&tr, ptr, sizeof(tr)))
    3929                return -EFAULT;
    3930            ptr += sizeof(tr);
    3931            binder_transaction(proc, thread, &tr,
    3932                       cmd == BC_REPLY, 0);
    3933            break;
    3934        }
    3935        case...
                case ...
                ...
    4154    }
    4155    return 0;
    4156}
    

    frameworks/native/cmds/servicemanager/[binder.c]
    Time was short this round and I have not finished writing up the binder-driver part, so let's jump straight to the other end and look at the ServiceManager process.

    415void binder_loop(struct binder_state *bs, binder_handler func)
    416{
    417    int res;
    418    struct binder_write_read bwr;
    419    uint32_t readbuf[32];
    420
    421    bwr.write_size = 0;
    422    bwr.write_consumed = 0;
    423    bwr.write_buffer = 0;
    424
    425    readbuf[0] = BC_ENTER_LOOPER;
    426    binder_write(bs, readbuf, sizeof(uint32_t));
    427
    428    for (;;) {
    429        bwr.read_size = sizeof(readbuf);
    430        bwr.read_consumed = 0;
    431        bwr.read_buffer = (uintptr_t) readbuf;
    432
    433        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    434
    435        if (res < 0) {
    436            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
    437            break;
    438        }
    439
    440        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
    441        if (res == 0) {
    442            ALOGE("binder_loop: unexpected reply?!\n");
    443            break;
    444        }
    445        if (res < 0) {
    446            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
    447            break;
    448        }
    449    }
    450}
    

    frameworks/native/cmds/servicemanager/[binder.c]

    229int binder_parse(struct binder_state *bs, struct binder_io *bio,
    230                 uintptr_t ptr, size_t size, binder_handler func)
    231{
    232    int r = 1;
    233    uintptr_t end = ptr + (uintptr_t) size;
    234
    235    while (ptr < end) {
    236        uint32_t cmd = *(uint32_t *) ptr;
    237        ptr += sizeof(uint32_t);
    241        switch(cmd) {
    256        case BR_TRANSACTION: {
               ...
    277            if (func) {
    278                unsigned rdata[256/4];
    279                struct binder_io msg;
    280                struct binder_io reply;
    281                int res;
    282
    283                bio_init(&reply, rdata, sizeof(rdata), 4);
        //parse the binder payload
    284                bio_init_from_txn(&msg, &txn.transaction_data);
        //hand the received binder transaction to the handler func
    285                res = func(bs, &txn, &msg, &reply);
        //a reply is sent back only if this is not a ONE_WAY binder call
    286                if (txn.transaction_data.flags & TF_ONE_WAY) {
    287                    binder_free_buffer(bs, txn.transaction_data.data.ptr.buffer);
    288                } else {
    289                    binder_send_reply(bs, &reply, txn.transaction_data.data.ptr.buffer, res);
    290                }
    291            }
    292            break;
    293        }
    323        default:
    324            ALOGE("parse: OOPS %d\n", cmd);
    325            return -1;
    326        }
    327    }
    328
    329    return r;
    330}
    
    //add a Service to ServiceManager
    status_t ServiceManagerShim::addService(const String16& name, const sp<IBinder>& service,
                                            bool allowIsolated, int dumpsysPriority)
    {
        Status status = mTheRealServiceManager->addService(
            String8(name).c_str(), service, allowIsolated, dumpsysPriority);
        return status.exceptionCode();
    }
    

    frameworks/native/cmds/servicemanager/ServiceManager.cpp

    Status ServiceManager::addService(const std::string& name, const sp<IBinder>& binder, bool allowIsolated, int32_t dumpPriority) {
     ...
        // Overwrite the old service if it exists
        mNameToService[name] = Service {
            .binder = binder,
            .allowIsolated = allowIsolated,
            .dumpPriority = dumpPriority,
            .debugPid = ctx.debugPid,
        };
    ...
        return Status::ok();
    }
    

    frameworks/native/cmds/servicemanager/ServiceManager.h

        struct Service {
            sp<IBinder> binder; // not null
            bool allowIsolated;
            int32_t dumpPriority;
            bool hasClients = false; // notifications sent on true -> false.
            bool guaranteeClient = false; // forces the client check to true
            pid_t debugPid = 0; // the process in which this service runs
    
            // the number of clients of the service, including servicemanager itself
            ssize_t getNodeStrongRefCount();
        };
    
       using ServiceMap = std::map<std::string, Service>;
       ServiceMap mNameToService;
    

    References

    [1] 萧晓. 想掌握 Binder 机制?驱动核心源码详解和Binder超系统学习资源,想学不会都难! https://www.bilibili.com/read/cv7592830 (accessed 2020-09-24)

    [2] gityuan. Binder系列5—注册服务(addService). http://gityuan.com/2015/11/14/binder-add-service/ (accessed 2020-09-24)

    [3] 稀土掘金. 一文让你深入了解 Android 系统 Services 机制. https://www.colabug.com/2020/0608/7441589/ (accessed 2020-09-24)

    [4] 罗升阳. Android系统源代码情景分析(第三版). 2017-10

    Original: https://www.haomeiwen.com/subject/nvgquktx.html