
OC Runtime Internals 08: Analyzing cache_t in the Class Structure

Author: AndyGF | Published 2020-09-21 17:15

    Today we'll dig into the cache in the class structure, i.e. the class's method cache. What is it, what exactly does it cache, and how is it stored? Let's find out.

    We shouldn't explore blindly. First, let's use the source code to locate cache within the class structure and understand its own layout; then we can explore with a clear goal.

    [Screenshots: the objc_class source, the cache_t struct definition, and the bucket_t struct, showing cache's position in the class structure and its own layout]

    Because these three structs contain many functions, the screenshots show only the member fields; the full source is included at the end of this article.

    From this we can see that the core contents cache_t stores are _sel (the method selector) and _imp (the pointer to the method implementation); everything else plays a supporting role.

    Remember: the focus of this article is cache_t, bucket_t, _sel, and _imp.

    cache_t

    Looking at the cache_t source, there are three storage layouts, one for each runtime environment:

    • CACHE_MASK_STORAGE_OUTLINED : macOS or the simulator
    • CACHE_MASK_STORAGE_HIGH_16 : 64-bit devices
    • CACHE_MASK_STORAGE_LOW_4 : 32-bit devices

    The relevant macro definitions:

    #if defined(__arm64__) && __LP64__ 
    // 64-bit devices
    #define CACHE_MASK_STORAGE CACHE_MASK_STORAGE_HIGH_16
    
    #elif defined(__arm64__) && !__LP64__ 
    // 32-bit devices
    #define CACHE_MASK_STORAGE CACHE_MASK_STORAGE_LOW_4
    
    #else
    // simulator or macOS
    #define CACHE_MASK_STORAGE CACHE_MASK_STORAGE_OUTLINED
    #endif
    

    First, let's compare how the members are defined on a real device versus the simulator (macOS):

    The simulator:

    explicit_atomic<struct bucket_t *> _buckets;    
    explicit_atomic<mask_t> _mask;
    

    A real device:

    explicit_atomic<uintptr_t> _maskAndBuckets;
    

    The simulator clearly has two members, _buckets and _mask, while a real device has only a single _maskAndBuckets. Why? The reason is simple: a phone has much less memory than a desktop machine, so here the mask and the buckets pointer are packed into one word, and at use time each is recovered with its own bit mask and a shift.

    A simple example: suppose the smallest storage unit is 8 bits and two values a and b each need at most 4 bits. Stored separately, a and b would occupy two 8-bit units; packed into a single unit at 4 bits each, they still fit, and bitwise operations recover a and b individually later.
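    A minimal sketch of that idea in C (toy nibble-sized values, not the real cache layout):

    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        uint8_t a = 0x9, b = 0x5;                 // each value fits in 4 bits
        uint8_t packed = (uint8_t)((a << 4) | b); // a in the high nibble, b in the low nibble

        uint8_t a2 = (packed >> 4) & 0xF;         // shift, then mask, to recover a
        uint8_t b2 = packed & 0xF;                // mask alone recovers b

        assert(a2 == a && b2 == b);               // both round-trip intact
        return 0;
    }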


    [Diagram: how _maskAndBuckets packs the mask and the buckets pointer]

    Note that the diagram only illustrates the principle; the real layout uses more bits.
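    Concretely, on CACHE_MASK_STORAGE_HIGH_16 the mask sits in the top 16 bits of _maskAndBuckets and the buckets pointer in the low bits (see the constants maskShift = 48 and bucketsMask in the cache_t source at the end of this article). A minimal sketch of the two extractions, mirroring what the buckets() and mask() accessors do on that layout:

    #include <stdint.h>

    typedef uint32_t mask_t;

    static const uintptr_t maskShift    = 48;   // the mask occupies the top 16 bits
    static const uintptr_t maskZeroBits = 4;    // zero bits kept below the mask for msgSend
    static const uintptr_t bucketsMask  = ((uintptr_t)1 << (maskShift - maskZeroBits)) - 1;

    // Keep only the low bits: the buckets pointer.
    static uintptr_t bucketsPart(uintptr_t maskAndBuckets) {
        return maskAndBuckets & bucketsMask;
    }

    // Shift the top 16 bits down: the mask.
    static mask_t maskPart(uintptr_t maskAndBuckets) {
        return (mask_t)(maskAndBuckets >> maskShift);
    }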

    At this point we know how to obtain _buckets. Judging by the name, it should be an array of bucket_t. That's not a wild guess: Apple's source naming is quite disciplined. So next, let's look at the structure of bucket_t.

    bucket_t

    The struct has only two members, and only their order differs by platform: _sel is the method selector, and _imp is the (encoded) pointer to the method implementation.

    #if __arm64__
    // real devices (arm64)
        explicit_atomic<uintptr_t> _imp;
        explicit_atomic<SEL> _sel;
    #else
    // everything else
        explicit_atomic<SEL> _sel;
        explicit_atomic<uintptr_t> _imp;
    #endif
    

    That completes the analysis of cache_t's member layout. Next we'll verify it with LLDB. Since the only difference between a real device and macOS is how cache_t stores _buckets and the mask, while the principle and purpose are identical, we'll verify on macOS.

    Before we start, a few important functions:

    Functions on cache_t :
    struct bucket_t *buckets(); : returns the bucket array (its first address).

    Functions on bucket_t (declarations only) :
    SEL sel() : returns the method selector.
    IMP imp(Class cls) : returns the address of the method implementation.

    Inspecting cache_t and bucket_t with LLDB

    First we create a GFPerson class with the following code:

    
    #import <Foundation/Foundation.h>
    @interface GFPerson : NSObject
    @property (nonatomic, copy) NSString *gfName;
    @property (nonatomic, strong) NSString *nickName;
    
    - (void)sayHello;
    
    - (void)sayCode;
    
    - (void)sayMaster;
    
    - (void)sayNB;
    
    + (void)sayHappy;
    
    @end
    
    #import "GFPerson.h"
    
    @implementation GFPerson
    - (void)sayHello{
        NSLog(@"GFPerson say : %s",__func__);
    }
    
    - (void)sayCode{
        NSLog(@"GFPerson say : %s",__func__);
    }
    
    - (void)sayMaster{
        NSLog(@"GFPerson say : %s",__func__);
    }
    
    - (void)sayNB{
        NSLog(@"GFPerson say : %s",__func__);
    }
    
    + (void)sayHappy{
        NSLog(@"GFPerson say : %s",__func__);
    }
    @end
    
    

    The main function is shown in the screenshot below (screenshotted because of the breakpoints):

    [Screenshots: the main function; inspecting cache_t with LLDB]
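    The screenshot is not reproduced here, but from the text that follows (two property assignments, then the method calls, with [p sayHello] on what the text calls line 23), main looked roughly like this. This is a reconstruction; the string literals are invented:

    #import <Foundation/Foundation.h>
    #import "GFPerson.h"

    int main(int argc, const char * argv[]) {
        @autoreleasepool {
            GFPerson *p = [GFPerson alloc];
            p.gfName   = @"AndyGF";   // calls setGfName:, which lands in the cache
            p.nickName = @"GF";       // calls setNickName:, also cached
            [p sayHello];             // breakpoint here: the first sayHello call
            [p sayCode];
            [p sayMaster];
        }
        return 0;
    }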

    Because we assigned p's properties gfName and nickName, their setters were called, so the cache already holds those two entries. If we then run the next LLDB command and read the bucket at $3 + 2, we get nothing valid, which confirms there are only two cached methods. So we step past one more breakpoint to call sayHello.

    [Screenshot: reading $3 + 2]

    Re-reading $1 after the call shows that sayHello has been added to the cache.

    [Screenshot: re-reading $1, the sayHello bucket]
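    The LLDB screenshots can be recreated with commands roughly like these; the address and the $N register numbers are placeholders from a hypothetical run, so substitute your own:

    (lldb) p/x GFPerson.class
    (Class) $0 = 0x0000000100002108 GFPerson   // example output; your address will differ
    (lldb) p (cache_t *)0x0000000100002118     // class address + 0x10 (past isa and superclass) -> $1
    (lldb) p *$1                               // dump the cache_t members -> $2
    (lldb) p $1->buckets()                     // the bucket_t array -> $3
    (lldb) p $3->sel()                         // one cached selector, e.g. "setGfName:"
    (lldb) p ($3 + 1)->sel()                   // the other setter (slot order depends on the hash)
    (lldb) p $3 + 2                            // past the two filled entries: nothing meaningful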

    The core contents of cache_t are now verified. Next let's examine the remaining members, such as _occupied and _mask, starting with a few important cache_t functions.

    cache_t functions :

    • mask_t mask(); : returns _mask
    • mask_t occupied(); : returns _occupied
    • void incrementOccupied(); : increments _occupied by 1
    • void insert(Class cls, SEL sel, IMP imp, id receiver); : inserts a new _sel / _imp pair
    • void reallocate(mask_t oldCapacity, mask_t newCapacity, bool freeOld); : reallocates the bucket storage

    Re-run the project and debug with LLDB again.


    [Screenshots: the main function; _occupied and _mask before and after]

    After executing line 23, i.e. [p sayHello], we check _occupied and _mask again and find they have changed: _occupied went from 2 to 1 and _mask from 3 to 7. Why? When cache_t is first filled, it allocates capacity for 4 buckets, and _mask = capacity - 1;. _occupied counts the cached methods and starts at 0; each method call increments it by 1 to produce newOccupied. When newOccupied plus the end marker exceeds 3/4 of the capacity, the old bucket storage is discarded (its entries are not carried over), _occupied is reset to 0, and a new buffer twice the old size is allocated. That is exactly what happened above: at [p sayHello], newOccupied = 3, so newOccupied + 1 > capacity / 4 * 3 (that is, 4 > 3), the code takes the else branch, and the cache is expanded.
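    To make the arithmetic concrete, the trace of this run (with INIT_CACHE_SIZE = 4 and one slot reserved for the end marker) works out like this:

    insert setGfName:   : empty cache -> allocate capacity 4, _mask 3; _occupied 0 -> 1
    insert setNickName: : newOccupied 2, 2 + 1 <= 3 (3/4 of 4)      ; _occupied 1 -> 2
    insert sayHello     : newOccupied 3, 3 + 1 >  3 -> expand:
                          capacity 4 -> 8, _mask 3 -> 7,
                          _occupied reset to 0, then sayHello inserted -> 1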

    [Screenshots: excerpts from the insert function]

    That wraps up the key pieces of local logic. Next, let's look at the complete cache_t workflow.

    Everything in the cache is written through the insert function, so insert can be considered the entry point of cache_t. Let's start from this function: searching the current file turns up no callers, but a global search for ->insert( or .insert( finds exactly one call, inside void cache_fill(Class cls, SEL sel, IMP imp, id receiver) in objc-cache.mm.
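    For reference, that call site reads roughly as follows (quoted from objc4-781 from memory, so check it against your own copy of the source):

    void cache_fill(Class cls, SEL sel, IMP imp, id receiver)
    {
        runtimeLock.assertLocked();

    #if !DEBUG_TASK_THREADS
        // Never cache before +initialize is done
        if (cls->isInitialized()) {
            cache_t *cache = getCache(cls);
    #if CONFIG_USE_CACHE_LOCK
            mutex_locker_t lock(cacheUpdateLock);
    #endif
            cache->insert(cls, sel, imp, receiver);
        }
    #else
        _collecting_in_critical();
    #endif
    }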

    We set breakpoints in both main and insert and run the project, keeping only the main breakpoint enabled at first. The runtime's own startup also goes through insert, so this is how we make sure we are debugging the cache of our own class.

    [Screenshot: breakpoints in main and insert]

    The cache_t workflow

    [Flowchart: insert core logic; see the condensed sketch below the list]

    1. Compute capacity and newOccupied.
    2. Allocate bucket storage (a fresh allocation, an expansion, or unchanged).
    3. Assign the internal members and bind sel, imp, and cls.
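    The same three-step logic, condensed from the insert source reproduced at the end of this article (an abridged excerpt, not compilable on its own):

    // Abridged from cache_t::insert; see the full source at the end of the article.
    mask_t newOccupied = occupied() + 1;                        // step 1: compute the sizes
    unsigned capacity = oldCapacity;
    if (isConstantEmptyCache()) {
        reallocate(oldCapacity, INIT_CACHE_SIZE, false);        // step 2a: first fill, allocate 4 buckets
    } else if (newOccupied + CACHE_END_MARKER <= capacity / 4 * 3) {
        // step 2b: still under 3/4 full, keep the buckets as-is
    } else {
        reallocate(oldCapacity, capacity * 2, true);            // step 2c: double and free the old buckets
    }
    mask_t i = cache_hash(sel, capacity - 1);                   // step 3: hash sel to a start slot,
    // probe forward to the first empty slot, set(sel, imp, cls),
    // and incrementOccupied()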

    On the first call to [p sayHello]:

    [Screenshots: computing the sizes; allocating new storage; assigning members and binding sel, imp, and cls]

    On the first call to [p sayCode] (the project was re-run, so this LLDB session does not continue from the one above):

    [Screenshots: before _occupied changes; after the sel binding completes]

    On the first call to [p sayMaster]:

    [Screenshots: before reallocate; after reallocate]

    The main execution flow is now clear. Next, a few important functions:

    • void reallocate(mask_t oldCapacity, mask_t newCapacity, bool freeOld); : allocates new bucket storage; when expanding, it also frees the old storage, and _occupied goes back to 0
    • void setBucketsAndMask(struct bucket_t *newBuckets, mask_t newMask); : stores the new buckets pointer and mask, and zeroes _occupied

    [Screenshots: allocateBuckets; setBucketsAndMask]
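    reallocate itself is short; from objc4-781 it reads roughly as follows (again quoted from memory). Note that the old cache entries are not copied into the new buckets:

    void cache_t::reallocate(mask_t oldCapacity, mask_t newCapacity, bool freeOld)
    {
        bucket_t *oldBuckets = buckets();
        bucket_t *newBuckets = allocateBuckets(newCapacity);

        // The cache's old contents are not propagated;
        // this trades extra cache fills for less cache memory.

        ASSERT(newCapacity > 0);
        ASSERT((uintptr_t)(mask_t)(newCapacity-1) == newCapacity-1);

        setBucketsAndMask(newBuckets, newCapacity - 1);   // this is where _occupied goes back to 0

        if (freeOld) {
            cache_collect_free(oldBuckets, oldCapacity);  // queue the old buckets for freeing
            cache_collect(false);
        }
    }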

    The complete cache_t structure diagram (image credit: 月月):

    [Diagram: the complete cache_t structure]

    Finally, the source of the related structs:

    cache_t source
    objc_class source
    bucket_t source

    The cache_t source:

    struct cache_t {
    #if CACHE_MASK_STORAGE == CACHE_MASK_STORAGE_OUTLINED
        explicit_atomic<struct bucket_t *> _buckets;
        explicit_atomic<mask_t> _mask;
    #elif CACHE_MASK_STORAGE == CACHE_MASK_STORAGE_HIGH_16
        explicit_atomic<uintptr_t> _maskAndBuckets;
        mask_t _mask_unused;
        
        // How much the mask is shifted by.
        static constexpr uintptr_t maskShift = 48;
        
        // Additional bits after the mask which must be zero. msgSend
        // takes advantage of these additional bits to construct the value
        // `mask << 4` from `_maskAndBuckets` in a single instruction.
        static constexpr uintptr_t maskZeroBits = 4;
        
        // The largest mask value we can store.
        static constexpr uintptr_t maxMask = ((uintptr_t)1 << (64 - maskShift)) - 1;
        
        // The mask applied to `_maskAndBuckets` to retrieve the buckets pointer.
        static constexpr uintptr_t bucketsMask = ((uintptr_t)1 << (maskShift - maskZeroBits)) - 1;
        
        // Ensure we have enough bits for the buckets pointer.
        static_assert(bucketsMask >= MACH_VM_MAX_ADDRESS, "Bucket field doesn't have enough bits for arbitrary pointers.");
    #elif CACHE_MASK_STORAGE == CACHE_MASK_STORAGE_LOW_4
        // _maskAndBuckets stores the mask shift in the low 4 bits, and
        // the buckets pointer in the remainder of the value. The mask
        // shift is the value where (0xffff >> shift) produces the correct
        // mask. This is equal to 16 - log2(cache_size).
        explicit_atomic<uintptr_t> _maskAndBuckets;
        mask_t _mask_unused;
    
        static constexpr uintptr_t maskBits = 4;
        static constexpr uintptr_t maskMask = (1 << maskBits) - 1;
        static constexpr uintptr_t bucketsMask = ~maskMask;
    #else
    #error Unknown cache mask storage type.
    #endif
        
    #if __LP64__
        uint16_t _flags;
    #endif
        uint16_t _occupied;
    
    public:
        static bucket_t *emptyBuckets();
        
        struct bucket_t *buckets();
        mask_t mask();
        mask_t occupied();
        void incrementOccupied();
        void setBucketsAndMask(struct bucket_t *newBuckets, mask_t newMask);
        void initializeToEmpty();
    
        unsigned capacity();
        bool isConstantEmptyCache();
        bool canBeFreed();
    
    #if __LP64__
        bool getBit(uint16_t flags) const {
            return _flags & flags;
        }
        void setBit(uint16_t set) {
            __c11_atomic_fetch_or((_Atomic(uint16_t) *)&_flags, set, __ATOMIC_RELAXED);
        }
        void clearBit(uint16_t clear) {
            __c11_atomic_fetch_and((_Atomic(uint16_t) *)&_flags, ~clear, __ATOMIC_RELAXED);
        }
    #endif
    
    #if FAST_CACHE_ALLOC_MASK
        bool hasFastInstanceSize(size_t extra) const
        {
            if (__builtin_constant_p(extra) && extra == 0) {
                return _flags & FAST_CACHE_ALLOC_MASK16;
            }
            return _flags & FAST_CACHE_ALLOC_MASK;
        }
    
        size_t fastInstanceSize(size_t extra) const
        {
            ASSERT(hasFastInstanceSize(extra));
    
            if (__builtin_constant_p(extra) && extra == 0) {
                return _flags & FAST_CACHE_ALLOC_MASK16;
            } else {
                size_t size = _flags & FAST_CACHE_ALLOC_MASK;
                // remove the FAST_CACHE_ALLOC_DELTA16 that was added
                // by setFastInstanceSize
                return align16(size + extra - FAST_CACHE_ALLOC_DELTA16);
            }
        }
    
        void setFastInstanceSize(size_t newSize)
        {
            // Set during realization or construction only. No locking needed.
            uint16_t newBits = _flags & ~FAST_CACHE_ALLOC_MASK;
            uint16_t sizeBits;
    
            // Adding FAST_CACHE_ALLOC_DELTA16 allows for FAST_CACHE_ALLOC_MASK16
            // to yield the proper 16byte aligned allocation size with a single mask
            sizeBits = word_align(newSize) + FAST_CACHE_ALLOC_DELTA16;
            sizeBits &= FAST_CACHE_ALLOC_MASK;
            if (newSize <= sizeBits) {
                newBits |= sizeBits;
            }
            _flags = newBits;
        }
    #else
        bool hasFastInstanceSize(size_t extra) const {
            return false;
        }
        size_t fastInstanceSize(size_t extra) const {
            abort();
        }
        void setFastInstanceSize(size_t extra) {
            // nothing
        }
    #endif
    
        static size_t bytesForCapacity(uint32_t cap);
        static struct bucket_t * endMarker(struct bucket_t *b, uint32_t cap);
    
        void reallocate(mask_t oldCapacity, mask_t newCapacity, bool freeOld);
        void insert(Class cls, SEL sel, IMP imp, id receiver);
    
        static void bad_cache(id receiver, SEL sel, Class isa) __attribute__((noreturn, cold));
    };
    

    The objc_class source:

    struct objc_class : objc_object {
        // Class ISA;
        Class superclass;
        cache_t cache;             // formerly cache pointer and vtable
        class_data_bits_t bits;    // class_rw_t * plus custom rr/alloc flags
    
        class_rw_t *data() const {
            return bits.data();
        }
        void setData(class_rw_t *newData) {
            bits.setData(newData);
        }
    
        void setInfo(uint32_t set) {
            ASSERT(isFuture()  ||  isRealized());
            data()->setFlags(set);
        }
    
        void clearInfo(uint32_t clear) {
            ASSERT(isFuture()  ||  isRealized());
            data()->clearFlags(clear);
        }
    
        // set and clear must not overlap
        void changeInfo(uint32_t set, uint32_t clear) {
            ASSERT(isFuture()  ||  isRealized());
            ASSERT((set & clear) == 0);
            data()->changeFlags(set, clear);
        }
    
    #if FAST_HAS_DEFAULT_RR
        bool hasCustomRR() const {
            return !bits.getBit(FAST_HAS_DEFAULT_RR);
        }
        void setHasDefaultRR() {
            bits.setBits(FAST_HAS_DEFAULT_RR);
        }
        void setHasCustomRR() {
            bits.clearBits(FAST_HAS_DEFAULT_RR);
        }
    #else
        bool hasCustomRR() const {
            return !(bits.data()->flags & RW_HAS_DEFAULT_RR);
        }
        void setHasDefaultRR() {
            bits.data()->setFlags(RW_HAS_DEFAULT_RR);
        }
        void setHasCustomRR() {
            bits.data()->clearFlags(RW_HAS_DEFAULT_RR);
        }
    #endif
    
    #if FAST_CACHE_HAS_DEFAULT_AWZ
        bool hasCustomAWZ() const {
            return !cache.getBit(FAST_CACHE_HAS_DEFAULT_AWZ);
        }
        void setHasDefaultAWZ() {
            cache.setBit(FAST_CACHE_HAS_DEFAULT_AWZ);
        }
        void setHasCustomAWZ() {
            cache.clearBit(FAST_CACHE_HAS_DEFAULT_AWZ);
        }
    #else
        bool hasCustomAWZ() const {
            return !(bits.data()->flags & RW_HAS_DEFAULT_AWZ);
        }
        void setHasDefaultAWZ() {
            bits.data()->setFlags(RW_HAS_DEFAULT_AWZ);
        }
        void setHasCustomAWZ() {
            bits.data()->clearFlags(RW_HAS_DEFAULT_AWZ);
        }
    #endif
    
    #if FAST_CACHE_HAS_DEFAULT_CORE
        bool hasCustomCore() const {
            return !cache.getBit(FAST_CACHE_HAS_DEFAULT_CORE);
        }
        void setHasDefaultCore() {
            return cache.setBit(FAST_CACHE_HAS_DEFAULT_CORE);
        }
        void setHasCustomCore() {
            return cache.clearBit(FAST_CACHE_HAS_DEFAULT_CORE);
        }
    #else
        bool hasCustomCore() const {
            return !(bits.data()->flags & RW_HAS_DEFAULT_CORE);
        }
        void setHasDefaultCore() {
            bits.data()->setFlags(RW_HAS_DEFAULT_CORE);
        }
        void setHasCustomCore() {
            bits.data()->clearFlags(RW_HAS_DEFAULT_CORE);
        }
    #endif
    
    #if FAST_CACHE_HAS_CXX_CTOR
        bool hasCxxCtor() {
            ASSERT(isRealized());
            return cache.getBit(FAST_CACHE_HAS_CXX_CTOR);
        }
        void setHasCxxCtor() {
            cache.setBit(FAST_CACHE_HAS_CXX_CTOR);
        }
    #else
        bool hasCxxCtor() {
            ASSERT(isRealized());
            return bits.data()->flags & RW_HAS_CXX_CTOR;
        }
        void setHasCxxCtor() {
            bits.data()->setFlags(RW_HAS_CXX_CTOR);
        }
    #endif
    
    #if FAST_CACHE_HAS_CXX_DTOR
        bool hasCxxDtor() {
            ASSERT(isRealized());
            return cache.getBit(FAST_CACHE_HAS_CXX_DTOR);
        }
        void setHasCxxDtor() {
            cache.setBit(FAST_CACHE_HAS_CXX_DTOR);
        }
    #else
        bool hasCxxDtor() {
            ASSERT(isRealized());
            return bits.data()->flags & RW_HAS_CXX_DTOR;
        }
        void setHasCxxDtor() {
            bits.data()->setFlags(RW_HAS_CXX_DTOR);
        }
    #endif
    
    #if FAST_CACHE_REQUIRES_RAW_ISA
        bool instancesRequireRawIsa() {
            return cache.getBit(FAST_CACHE_REQUIRES_RAW_ISA);
        }
        void setInstancesRequireRawIsa() {
            cache.setBit(FAST_CACHE_REQUIRES_RAW_ISA);
        }
    #elif SUPPORT_NONPOINTER_ISA
        bool instancesRequireRawIsa() {
            return bits.data()->flags & RW_REQUIRES_RAW_ISA;
        }
        void setInstancesRequireRawIsa() {
            bits.data()->setFlags(RW_REQUIRES_RAW_ISA);
        }
    #else
        bool instancesRequireRawIsa() {
            return true;
        }
        void setInstancesRequireRawIsa() {
            // nothing
        }
    #endif
        void setInstancesRequireRawIsaRecursively(bool inherited = false);
        void printInstancesRequireRawIsa(bool inherited);
    
        bool canAllocNonpointer() {
            ASSERT(!isFuture());
            return !instancesRequireRawIsa();
        }
    
        bool isSwiftStable() {
            return bits.isSwiftStable();
        }
    
        bool isSwiftLegacy() {
            return bits.isSwiftLegacy();
        }
    
        bool isAnySwift() {
            return bits.isAnySwift();
        }
    
        bool isSwiftStable_ButAllowLegacyForNow() {
            return bits.isSwiftStable_ButAllowLegacyForNow();
        }
    
        bool isStubClass() const {
            uintptr_t isa = (uintptr_t)isaBits();
            return 1 <= isa && isa < 16;
        }
    
        // Swift stable ABI built for old deployment targets looks weird.
        // The is-legacy bit is set for compatibility with old libobjc.
        // We are on a "new" deployment target so we need to rewrite that bit.
        // These stable-with-legacy-bit classes are distinguished from real
        // legacy classes using another bit in the Swift data
        // (ClassFlags::IsSwiftPreStableABI)
    
        bool isUnfixedBackwardDeployingStableSwift() {
            // Only classes marked as Swift legacy need apply.
            if (!bits.isSwiftLegacy()) return false;
    
            // Check the true legacy vs stable distinguisher.
            // The low bit of Swift's ClassFlags is SET for true legacy
            // and UNSET for stable pretending to be legacy.
            uint32_t swiftClassFlags = *(uint32_t *)(&bits + 1);
            bool isActuallySwiftLegacy = bool(swiftClassFlags & 1);
            return !isActuallySwiftLegacy;
        }
    
        void fixupBackwardDeployingStableSwift() {
            if (isUnfixedBackwardDeployingStableSwift()) {
                // Class really is stable Swift, pretending to be pre-stable.
                // Fix its lie.
                bits.setIsSwiftStable();
            }
        }
    
        _objc_swiftMetadataInitializer swiftMetadataInitializer() {
            return bits.swiftMetadataInitializer();
        }
    
        // Return YES if the class's ivars are managed by ARC, 
        // or the class is MRC but has ARC-style weak ivars.
        bool hasAutomaticIvars() {
            return data()->ro()->flags & (RO_IS_ARC | RO_HAS_WEAK_WITHOUT_ARC);
        }
    
        // Return YES if the class's ivars are managed by ARC.
        bool isARC() {
            return data()->ro()->flags & RO_IS_ARC;
        }
    
    
        bool forbidsAssociatedObjects() {
            return (data()->flags & RW_FORBIDS_ASSOCIATED_OBJECTS);
        }
    
    #if SUPPORT_NONPOINTER_ISA
        // Tracked in non-pointer isas; not tracked otherwise
    #else
        bool instancesHaveAssociatedObjects() {
            // this may be an unrealized future class in the CF-bridged case
            ASSERT(isFuture()  ||  isRealized());
            return data()->flags & RW_INSTANCES_HAVE_ASSOCIATED_OBJECTS;
        }
    
        void setInstancesHaveAssociatedObjects() {
            // this may be an unrealized future class in the CF-bridged case
            ASSERT(isFuture()  ||  isRealized());
            setInfo(RW_INSTANCES_HAVE_ASSOCIATED_OBJECTS);
        }
    #endif
    
        bool shouldGrowCache() {
            return true;
        }
    
        void setShouldGrowCache(bool) {
            // fixme good or bad for memory use?
        }
    
        bool isInitializing() {
            return getMeta()->data()->flags & RW_INITIALIZING;
        }
    
        void setInitializing() {
            ASSERT(!isMetaClass());
            ISA()->setInfo(RW_INITIALIZING);
        }
    
        bool isInitialized() {
            return getMeta()->data()->flags & RW_INITIALIZED;
        }
    
        void setInitialized();
    
        bool isLoadable() {
            ASSERT(isRealized());
            return true;  // any class registered for +load is definitely loadable
        }
    
        IMP getLoadMethod();
    
        // Locking: To prevent concurrent realization, hold runtimeLock.
        bool isRealized() const {
            return !isStubClass() && (data()->flags & RW_REALIZED);
        }
    
        // Returns true if this is an unrealized future class.
        // Locking: To prevent concurrent realization, hold runtimeLock.
        bool isFuture() const {
            return data()->flags & RW_FUTURE;
        }
    
        bool isMetaClass() {
            ASSERT(this);
            ASSERT(isRealized());
    #if FAST_CACHE_META
            return cache.getBit(FAST_CACHE_META);
    #else
            return data()->flags & RW_META;
    #endif
        }
    
        // Like isMetaClass, but also valid on un-realized classes
        bool isMetaClassMaybeUnrealized() {
            static_assert(offsetof(class_rw_t, flags) == offsetof(class_ro_t, flags), "flags alias");
            static_assert(RO_META == RW_META, "flags alias");
            return data()->flags & RW_META;
        }
    
        // NOT identical to this->ISA when this is a metaclass
        Class getMeta() {
            if (isMetaClass()) return (Class)this;
            else return this->ISA();
        }
    
        bool isRootClass() {
            return superclass == nil;
        }
        bool isRootMetaclass() {
            return ISA() == (Class)this;
        }
    
        const char *mangledName() { 
            // fixme can't assert locks here
            ASSERT(this);
    
            if (isRealized()  ||  isFuture()) {
                return data()->ro()->name;
            } else {
                return ((const class_ro_t *)data())->name;
            }
        }
        
        const char *demangledName(bool needsLock);
        const char *nameForLogging();
    
        // May be unaligned depending on class's ivars.
        uint32_t unalignedInstanceStart() const {
            ASSERT(isRealized());
            return data()->ro()->instanceStart;
        }
    
        // Class's instance start rounded up to a pointer-size boundary.
        // This is used for ARC layout bitmaps.
        uint32_t alignedInstanceStart() const {
            return word_align(unalignedInstanceStart());
        }
    
        // May be unaligned depending on class's ivars.
        uint32_t unalignedInstanceSize() const {
            ASSERT(isRealized());
            return data()->ro()->instanceSize;
        }
    
        // Class's ivar size rounded up to a pointer-size boundary.
        uint32_t alignedInstanceSize() const {
            return word_align(unalignedInstanceSize());
        }
    
        size_t instanceSize(size_t extraBytes) const {
            if (fastpath(cache.hasFastInstanceSize(extraBytes))) {
                return cache.fastInstanceSize(extraBytes);
            }
    
            size_t size = alignedInstanceSize() + extraBytes;
            // CF requires all objects be at least 16 bytes.
            if (size < 16) size = 16;
            return size;
        }
    
        void setInstanceSize(uint32_t newSize) {
            ASSERT(isRealized());
            ASSERT(data()->flags & RW_REALIZING);
            auto ro = data()->ro();
            if (newSize != ro->instanceSize) {
                ASSERT(data()->flags & RW_COPIED_RO);
                *const_cast<uint32_t *>(&ro->instanceSize) = newSize;
            }
            cache.setFastInstanceSize(newSize);
        }
    
        void chooseClassArrayIndex();
    
        void setClassArrayIndex(unsigned Idx) {
            bits.setClassArrayIndex(Idx);
        }
    
        unsigned classArrayIndex() {
            return bits.classArrayIndex();
        }
    };
    

    The bucket_t source:

    struct bucket_t {
    private:
        // IMP-first is better for arm64e ptrauth and no worse for arm64.
        // SEL-first is better for armv7* and i386 and x86_64.
    #if __arm64__
        explicit_atomic<uintptr_t> _imp;
        explicit_atomic<SEL> _sel;
    #else
        explicit_atomic<SEL> _sel;
        explicit_atomic<uintptr_t> _imp;
    #endif
    
        // Compute the ptrauth signing modifier from &_imp, newSel, and cls.
        uintptr_t modifierForSEL(SEL newSel, Class cls) const {
            return (uintptr_t)&_imp ^ (uintptr_t)newSel ^ (uintptr_t)cls;
        }
    
        // Sign newImp, with &_imp, newSel, and cls as modifiers.
        uintptr_t encodeImp(IMP newImp, SEL newSel, Class cls) const {
            if (!newImp) return 0;
    #if CACHE_IMP_ENCODING == CACHE_IMP_ENCODING_PTRAUTH
            return (uintptr_t)
                ptrauth_auth_and_resign(newImp,
                                        ptrauth_key_function_pointer, 0,
                                        ptrauth_key_process_dependent_code,
                                        modifierForSEL(newSel, cls));
    #elif CACHE_IMP_ENCODING == CACHE_IMP_ENCODING_ISA_XOR
            return (uintptr_t)newImp ^ (uintptr_t)cls;
    #elif CACHE_IMP_ENCODING == CACHE_IMP_ENCODING_NONE
            return (uintptr_t)newImp;
    #else
    #error Unknown method cache IMP encoding.
    #endif
        }
    
    public:
        inline SEL sel() const { return _sel.load(memory_order::memory_order_relaxed); }
    
        inline IMP imp(Class cls) const {
            uintptr_t imp = _imp.load(memory_order::memory_order_relaxed);
            if (!imp) return nil;
    #if CACHE_IMP_ENCODING == CACHE_IMP_ENCODING_PTRAUTH
            SEL sel = _sel.load(memory_order::memory_order_relaxed);
            return (IMP)
                ptrauth_auth_and_resign((const void *)imp,
                                        ptrauth_key_process_dependent_code,
                                        modifierForSEL(sel, cls),
                                        ptrauth_key_function_pointer, 0);
    #elif CACHE_IMP_ENCODING == CACHE_IMP_ENCODING_ISA_XOR
            return (IMP)(imp ^ (uintptr_t)cls);
    #elif CACHE_IMP_ENCODING == CACHE_IMP_ENCODING_NONE
            return (IMP)imp;
    #else
    #error Unknown method cache IMP encoding.
    #endif
        }
    
        template <Atomicity, IMPEncoding>
        void set(SEL newSel, IMP newImp, Class cls);
    };
    

    The insert source:

    void cache_t::insert(Class cls, SEL sel, IMP imp, id receiver)
    {
    #if CONFIG_USE_CACHE_LOCK
        cacheUpdateLock.assertLocked();
    #else
        runtimeLock.assertLocked();
    #endif
    
        ASSERT(sel != 0 && cls->isInitialized());
    
        // Use the cache as-is if it is less than 3/4 full
        mask_t newOccupied = occupied() + 1;
        unsigned oldCapacity = capacity(), capacity = oldCapacity;
        if (slowpath(isConstantEmptyCache())) {
            // Cache is read-only. Replace it.
            if (!capacity) capacity = INIT_CACHE_SIZE;
            reallocate(oldCapacity, capacity, /* freeOld */false);
        }
    else if (fastpath(newOccupied + CACHE_END_MARKER <= capacity / 4 * 3)) { // under 3/4 full, counting the end-marker bucket
            // Cache is less than 3/4 full. Use it as-is.
        }
        else {
    capacity = capacity ? capacity * 2 : INIT_CACHE_SIZE;  // double the capacity (4 -> 8 on first expansion)
            if (capacity > MAX_CACHE_SIZE) {
                capacity = MAX_CACHE_SIZE;
            }
    reallocate(oldCapacity, capacity, true);  // reallocate and free the old buckets
        }
    
        bucket_t *b = buckets();
        mask_t m = capacity - 1;
        mask_t begin = cache_hash(sel, m);
        mask_t i = begin;
    
        // Scan for the first unused slot and insert there.
        // There is guaranteed to be an empty slot because the
        // minimum size is 4 and we resized at 3/4 full.
        do {
            if (fastpath(b[i].sel() == 0)) {
                incrementOccupied();
                b[i].set<Atomic, Encoded>(sel, imp, cls);
                return;
            }
            if (b[i].sel() == sel) {
                // The entry was added to the cache by some other thread
                // before we grabbed the cacheUpdateLock.
                return;
            }
        } while (fastpath((i = cache_next(i, m)) != begin));
    
        cache_t::bad_cache(receiver, (SEL)sel, cls);
    }
    
