I recently looked into why the following facts about Binder IPC hold:
When a client passes the same Binder object multiple times, the server receives the same object every time.
When a client passes the same AIDL Stub subclass object multiple times, the server receives a different object each time.
That an object of an AIDL-generated Stub class, passed to another process through a remote interface, arrives as a different object on the server side each time can be seen directly from the server-side code that AIDL generates:
case TRANSACTION_callbackTrans:
{
    data.enforceInterface(descriptor);
    personal.jayhou.mydemos.aidl.ICallback _arg0;
    _arg0 = personal.jayhou.mydemos.aidl.ICallback.Stub.asInterface(data.readStrongBinder());
    this.callbackTrans(_arg0);
    reply.writeNoException();
    return true;
}
After data.readStrongBinder(), for a cross-process call Stub.asInterface() simply news up an object. Even if readStrongBinder() returns the same object each time, what is handed to the user's own server implementation is a freshly created Proxy object:
public static personal.jayhou.mydemos.aidl.IRemoteService asInterface(android.os.IBinder obj)
{
    if ((obj==null)) {
        return null;
    }
    android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
    if (((iin!=null)&&(iin instanceof personal.jayhou.mydemos.aidl.IRemoteService))) {
        return ((personal.jayhou.mydemos.aidl.IRemoteService)iin);
    }
    return new personal.jayhou.mydemos.aidl.IRemoteService.Stub.Proxy(obj);
}
So the server is bound to receive a different object on every call.
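A practical consequence: if a service needs to deduplicate or unregister callbacks across calls, it cannot rely on the identity (or default equals/hashCode) of the incoming IInterface, because each call produces a fresh Proxy. The stable thing to compare is the underlying IBinder returned by asBinder(). Below is a minimal sketch along those lines, assuming the ICallback interface from the generated code above; the CallbackRegistry class and its methods are names I made up for illustration, and the framework's RemoteCallbackList does essentially the same bookkeeping for you.

import java.util.ArrayList;
import java.util.List;

import personal.jayhou.mydemos.aidl.ICallback;

public class CallbackRegistry {
    private final List<ICallback> mCallbacks = new ArrayList<>();

    public synchronized void register(ICallback cb) {
        for (ICallback existing : mCallbacks) {
            // asBinder() returns the same underlying IBinder for the same
            // client-side Binder, so this is the reliable identity check
            // across calls, unlike comparing the Proxy objects themselves.
            if (existing.asBinder() == cb.asBinder()) {
                return; // already registered
            }
        }
        mCallbacks.add(cb);
    }

    public synchronized void unregister(ICallback cb) {
        mCallbacks.removeIf(existing -> existing.asBinder() == cb.asBinder());
    }
}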
For a raw Binder object passed across processes, on the other hand, the object is written into a Parcel with writeStrongBinder() and read back on the server with readStrongBinder(), and the client-side and server-side binder objects correspond one to one: no matter how many times the same Binder object is sent through remote calls, the server always receives the same Binder object (same Java hashCode).
Passing a Binder object to another process through a remote interface, without using an AIDL interface, can be demonstrated with the following demo:
Client side:
public class ClientActivity extends Activity
        implements ServiceConnection, View.OnClickListener {

    private IBinder mRemote;
    private final IBinder mBinderToPass1 = new Binder();
    private final IBinder mBinderToPass2 = new Binder();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Bind the remote service; its IBinder arrives in onServiceConnected().
        // (Layout setup and wiring a button's OnClickListener to this Activity
        // are omitted here.)
        Intent intent = new Intent(this, RemoteService.class);
        bindService(intent, this, Context.BIND_AUTO_CREATE);
    }

    @Override
    public void onServiceConnected(ComponentName componentName, IBinder iBinder) {
        mRemote = iBinder;
    }

    @Override
    public void onServiceDisconnected(ComponentName componentName) {
        mRemote = null;
    }

    private void passBinder(IBinder binderObject) {
        Parcel _data = Parcel.obtain();
        Parcel _result = Parcel.obtain();
        try {
            // Must match the descriptor the service enforces in onTransact().
            _data.writeInterfaceToken(RemoteService.DESCRIPTOR);
            _data.writeStrongBinder(binderObject);
            mRemote.transact(RemoteService.TRANSACTION_CODE_PASS_BINDER, _data, _result, 0);
            _result.readException();
        } catch (RemoteException e) {
            e.printStackTrace();
        } finally {
            _result.recycle();
            _data.recycle();
        }
    }

    @Override
    public void onClick(View view) {
        // Send both Binder objects; click again to repeat the transfer.
        passBinder(mBinderToPass1);
        passBinder(mBinderToPass2);
    }
}
Service side:
public class RemoteService extends Service {
    private static final String TAG = "RemoteCallbackTest-Service";
    public static final int TRANSACTION_CODE_PASS_BINDER = IBinder.FIRST_CALL_TRANSACTION + 0;
    public static final String DESCRIPTOR = "binder-service";

    @Override
    public IBinder onBind(Intent intent) {
        return new MyService();
    }

    public static class MyService extends Binder {
        @Override
        protected boolean onTransact(int code, @NonNull Parcel data, @Nullable Parcel reply, int flags)
                throws RemoteException {
            if (code == TRANSACTION_CODE_PASS_BINDER) {
                data.enforceInterface(DESCRIPTOR);
                // Read the Binder the client wrote; its identity is what we log.
                IBinder binder = data.readStrongBinder();
                Log.d(TAG, "binder object:" + binder);
                if (reply != null) {
                    reply.writeNoException();
                }
                return true;
            }
            return super.onTransact(code, data, reply, flags);
        }
    }
}
Log output after running it a few times:
02-18 17:57:24.726 27527 27542 D RemoteCallbackTest-Service: binder object:android.os.BinderProxy@f349f38
02-18 17:57:24.726 27527 27542 D RemoteCallbackTest-Service: binder object:android.os.BinderProxy@f3b0b11
02-18 17:57:26.594 27527 27542 D RemoteCallbackTest-Service: binder object:android.os.BinderProxy@f349f38
02-18 17:57:26.595 27527 27541 D RemoteCallbackTest-Service: binder object:android.os.BinderProxy@f3b0b11
You can see that the two objects sent are different, and the server receives two different objects; when the transfer is repeated, the server receives the same objects again. This is also what makes it possible for a Binder object to serve as a token on the server side.
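To make the "token" idea concrete, here is a minimal sketch of how a service process might key per-client state off the received IBinder. The TokenRegistry and ClientRecord names are hypothetical and only for illustration; it works only because, as shown above, the same client-side Binder always unparcels to the same object in the service process, so reference identity and the default hashCode/equals are stable across calls.

import android.os.IBinder;
import android.os.RemoteException;
import android.util.ArrayMap;

public class TokenRegistry {
    private static class ClientRecord {
        final IBinder token;
        ClientRecord(IBinder token) { this.token = token; }
    }

    // Keyed directly by the BinderProxy received from the client.
    private final ArrayMap<IBinder, ClientRecord> mRecords = new ArrayMap<>();

    public void onRegister(IBinder token) throws RemoteException {
        ClientRecord record = new ClientRecord(token);
        // Clean up automatically if the client process dies.
        token.linkToDeath(() -> {
            synchronized (mRecords) {
                mRecords.remove(token);
            }
        }, 0);
        synchronized (mRecords) {
            mRecords.put(token, record);
        }
    }

    public boolean isKnown(IBinder token) {
        synchronized (mRecords) {
            return mRecords.containsKey(token);
        }
    }
}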
Now let's look at how this Binder object travels to another process yet still maps to the same object every time. Analyzing this boils down to one question: when the client writes the same Binder object with writeStrongBinder(), why does the service side always read the same object back out with readStrongBinder()?
The Java-layer Parcel ends up in the following JNI function:
frameworks/base/core/jni/android_os_Parcel.cpp
static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
        if (err != NO_ERROR) {
            signalExceptionForError(env, clazz, err);
        }
    }
}
Here nativePtr is the address of the C++ Parcel object that is stored inside the Java Parcel object; in the end, the return value of ibinderForJavaObject() is what gets written into the C++ Parcel.
frameworks/base/core/jni/android_util_Binder.cpp
sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
    if (obj == NULL) return NULL;

    // Instance of Binder?
    if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
        JavaBBinderHolder* jbh = (JavaBBinderHolder*)
            env->GetLongField(obj, gBinderOffsets.mObject);
        return jbh->get(env, obj);
    }

    // Instance of BinderProxy?
    if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
        return getBPNativeData(env, obj)->mObject;
    }

    ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
    return NULL;
}
ibinderForJavaObject() checks whether the Java object passed in is a server-side instance (Binder) or a proxy-side instance (BinderProxy). Here it is a server-side instance, so what it returns is the sp<IBinder> looked up from the JavaBBinderHolder for this Java Binder object. Every binder in the Java world is backed by a native-layer binder, and the sp<IBinder> obtained here is a strong smart pointer to the native binder that corresponds to the Java Binder passed in.
Next, the native-layer Parcel::writeStrongBinder():
frameworks/native/libs/binder/Parcel.cpp
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
The relevant part of flatten_binder() is shown below. What ultimately gets written into the Parcel is a flat_binder_object struct. In our case the binder being written is a local one (local != null), so its type is set to BINDER_TYPE_BINDER and its binder/cookie fields carry pointers to the native object. When the binder driver copies the transaction into the target process, it rewrites this entry into a BINDER_TYPE_HANDLE object with a handle value, which is what the driver uses to locate which process, which thread, and which binder object the data belongs to. (If the binder being written is already a proxy, i.e. local == null, the BpBinder's handle is written directly with type BINDER_TYPE_HANDLE.) So on the receiving side, the only thing in the Parcel that is associated with this binder object is that handle value.
if (binder != nullptr) {
    BBinder *local = binder->localBinder();
    if (!local) {
        BpBinder *proxy = binder->remoteBinder();
        if (proxy == nullptr) {
            ALOGE("null proxy");
        }
        const int32_t handle = proxy ? proxy->handle() : 0;
        obj.hdr.type = BINDER_TYPE_HANDLE;
        obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
        obj.handle = handle;
        obj.cookie = 0;
    } else {
        if (local->isRequestingSid()) {
            obj.flags |= FLAT_BINDER_FLAG_TXN_SECURITY_CTX;
        }
        obj.hdr.type = BINDER_TYPE_BINDER;
        obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
        obj.cookie = reinterpret_cast<uintptr_t>(local);
    }
} else {
    obj.hdr.type = BINDER_TYPE_BINDER;
    obj.binder = 0;
    obj.cookie = 0;
}
When the Parcel data arrives in the other process, it is read out through Parcel::readStrongBinder():
sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    // Note that a lot of code in Android reads binders by hand with this
    // method, and that code has historically been ok with getting nullptr
    // back (while ignoring error codes).
    readNullableStrongBinder(&val);
    return val;
}

status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
{
    return unflatten_binder(ProcessState::self(), *this, val);
}

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->hdr.type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(nullptr, *flat, in);
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}
On the receiving side the object arrives as BINDER_TYPE_HANDLE, so that branch is taken and an sp<IBinder> is obtained from the handle value:
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != nullptr) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // handle 0 is the ServiceManager's BpBinder; ping the context
                // manager first to make sure it is alive.
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, nullptr, 0);
                if (status == DEAD_OBJECT)
                    return nullptr;
            }

            b = BpBinder::create(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
lookupHandleLocked():
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N = mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = nullptr;
        e.refs = nullptr;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return nullptr;
    }
    return &mHandleToObject.editItemAt(handle);
}
As you can see, the first BpBinder obtained for a given handle is created by the ProcessState object and cached in mHandleToObject. Each process has only one ProcessState object, so at the native layer the same handle value is guaranteed to yield the same object.
Back to the Java layer: Java's readStrongBinder() calls the following function at the JNI layer:
static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        return javaObjectForIBinder(env, parcel->readStrongBinder());
    }
    return NULL;
}
From the flow above, Parcel::readStrongBinder() returns the same native BpBinder object for the same handle value; for the same sp<IBinder>, javaObjectForIBinder() in turn returns the same Java object:
// If the argument is a JavaBBinder, return the Java object that was used to create it.
// Otherwise return a BinderProxy for the IBinder. If a previous call was passed the
// same IBinder, and the original BinderProxy is still alive, return the same BinderProxy.
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;

    if (val->checkSubclass(&gBinderOffsets)) {
        // It's a JavaBBinder created by ibinderForJavaObject. Already has Java object.
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

    BinderProxyNativeData* nativeData = new BinderProxyNativeData();
    nativeData->mOrgue = new DeathRecipientList;
    nativeData->mObject = val;

    jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
            gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
    if (env->ExceptionCheck()) {
        // In the exception case, getInstance still took ownership of nativeData.
        return NULL;
    }
    BinderProxyNativeData* actualNativeData = getBPNativeData(env, object);
    if (actualNativeData == nativeData) {
        // Created a new Proxy
        uint32_t numProxies = gNumProxies.fetch_add(1, std::memory_order_relaxed);
        uint32_t numLastWarned = gProxiesWarned.load(std::memory_order_relaxed);
        if (numProxies >= numLastWarned + PROXY_WARN_INTERVAL) {
            // Multiple threads can get here, make sure only one of them gets to
            // update the warn counter.
            if (gProxiesWarned.compare_exchange_strong(numLastWarned,
                    numLastWarned + PROXY_WARN_INTERVAL, std::memory_order_relaxed)) {
                ALOGW("Unexpectedly many live BinderProxies: %d\n", numProxies);
            }
        }
    } else {
        delete nativeData;
    }

    return object;
}
The lookup of the corresponding object in the Java world happens mainly through this call:
jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
gBinderProxyOffsets.mGetInstance is assigned at initialization time; as shown below, it refers to the getInstance() method of the Java-world BinderProxy class:
static int int_register_android_os_BinderProxy(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, "java/lang/Error");
    gErrorOffsets.mClass = MakeGlobalRefOrDie(env, clazz);

    clazz = FindClassOrDie(env, kBinderProxyPathName);
    gBinderProxyOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderProxyOffsets.mGetInstance = GetStaticMethodIDOrDie(env, clazz, "getInstance",
            "(JJ)Landroid/os/BinderProxy;");
    gBinderProxyOffsets.mSendDeathNotice = GetStaticMethodIDOrDie(env, clazz, "sendDeathNotice",
            "(Landroid/os/IBinder$DeathRecipient;)V");
    gBinderProxyOffsets.mNativeData = GetFieldIDOrDie(env, clazz, "mNativeData", "J");

    clazz = FindClassOrDie(env, "java/lang/Class");
    gClassOffsets.mGetName = GetMethodIDOrDie(env, clazz, "getName", "()Ljava/lang/String;");

    return RegisterMethodsOrDie(
        env, kBinderProxyPathName,
        gBinderProxyMethods, NELEM(gBinderProxyMethods));
}
android/os/BinderProxy.java
private static BinderProxy getInstance(long nativeData, long iBinder) {
    BinderProxy result;
    synchronized (sProxyMap) {
        try {
            result = sProxyMap.get(iBinder);
            if (result != null) {
                return result;
            }
            result = new BinderProxy(nativeData);
        } catch (Throwable e) {
            // We're throwing an exception (probably OOME); don't drop nativeData.
            NativeAllocationRegistry.applyFreeFunction(NoImagePreloadHolder.sNativeFinalizer,
                    nativeData);
            throw e;
        }
        NoImagePreloadHolder.sRegistry.registerNativeAllocation(result, nativeData);
        // The registry now owns nativeData, even if registration threw an exception.
        sProxyMap.set(iBinder, result);
    }
    return result;
}
The long iBinder here is the address of the native-layer binder object, and the same binder object always has the same address. It is used to look up the corresponding BinderProxy in sProxyMap: the first time a new one is created, and afterwards it is simply fetched from sProxyMap, so the result is always the same Java object.
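For contrast, the checkSubclass() branch in javaObjectForIBinder() above explains what happens when the reader lives in the same process as the writer: no BinderProxy is involved at all, and the original Java Binder itself comes back. A minimal sketch you can run in any process to observe this (the class name is mine):

import android.os.Binder;
import android.os.IBinder;
import android.os.Parcel;
import android.util.Log;

public final class LocalRoundTrip {
    public static void check() {
        Binder original = new Binder();
        Parcel parcel = Parcel.obtain();
        try {
            parcel.writeStrongBinder(original);
            parcel.setDataPosition(0);          // rewind before reading back
            IBinder readBack = parcel.readStrongBinder();
            // Expected to print "true": the JavaBBinder branch hands back the
            // very object that was written, so reference equality holds.
            Log.d("LocalRoundTrip", "same object: " + (readBack == original));
        } finally {
            parcel.recycle();
        }
    }
}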
Putting it all together: when the same Binder object is passed from process A to process B through a Parcel, process B always gets back the same object. At the native layer that object is cached in the mHandleToObject vector of the per-process ProcessState singleton, keyed by the handle value; at the Java layer it is cached in the BinderProxy class's static sProxyMap, keyed by the address of the native binder object.
At the same time, a Binder-typed object from process A becomes a BinderProxy object once it reaches process B; that proxy holds the handle value corresponding to process A's Binder object, and all Binder IPC in the Android world uses this handle to find the right process to ship data to, with the IPC business logic layered on top.
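A quick way to observe that transformation from the client side of the demo above is to log the concrete class of the IBinder handed to onServiceConnected(); a small sketch (the class name is mine):

import android.content.ComponentName;
import android.content.ServiceConnection;
import android.os.IBinder;
import android.util.Log;

public class LoggingConnection implements ServiceConnection {
    private static final String TAG = "LoggingConnection";

    @Override
    public void onServiceConnected(ComponentName name, IBinder iBinder) {
        // For a service in another process this prints "android.os.BinderProxy";
        // binding an in-process service would yield the Binder subclass itself.
        Log.d(TAG, "received binder class: " + iBinder.getClass().getName());
    }

    @Override
    public void onServiceDisconnected(ComponentName name) {
        Log.d(TAG, "disconnected from " + name);
    }
}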