
Multithreading: Locks

Author: GTMYang | Published 2018-05-17 14:00

Mutex: NSLock

After the lock is acquired and until it is released, the resource is held exclusively; no other thread can access it.
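A minimal sketch of NSLock protecting a shared array (the array and the code placement inside a method are illustrative, not from the original post):

// inside a method:
NSLock *lock = [[NSLock alloc] init];
NSMutableArray *sharedItems = [NSMutableArray array];   // shared resource (illustrative)

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [lock lock];                     // other threads block here until unlock
    [sharedItems addObject:@"a"];    // critical section
    [lock unlock];
});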

Recursive Lock: NSRecursiveLock

A lock that the same thread can acquire again without deadlocking (re-entrant).
It solves the deadlock that would otherwise occur when the same thread requests the same lock more than once, as in the sketch below.

import Foundation

let lock = NSRecursiveLock()   // with a plain NSLock, b() below would deadlock

func a() {
    lock.lock()
    b()              // re-acquires the same lock on the same thread
    lock.unlock()
}

func b() {
    lock.lock()
    // do sth.
    lock.unlock()
}

Read-Write Lock

While a read is in progress, other reads may proceed but writes may not.
While a write is in progress, neither other writes nor reads may proceed.
When data is read far more often than it is written, replacing a mutex with a read-write lock can improve efficiency.
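Foundation does not provide a read-write lock class; pthread_rwlock_t is the usual choice. A minimal sketch (the function names are illustrative):

#import <pthread.h>

static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;

// Readers: several threads may hold the read lock at the same time
void readShared(void) {
    pthread_rwlock_rdlock(&rwlock);
    // read shared data
    pthread_rwlock_unlock(&rwlock);
}

// Writer: waits until all readers and writers are out, then holds the lock exclusively
void writeShared(void) {
    pthread_rwlock_wrlock(&rwlock);
    // write shared data
    pthread_rwlock_unlock(&rwlock);
}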

Spin Lock: OSSpinLock

It polls the lock state in a loop instead of blocking the thread.
On a multiprocessor system where the expected wait for the lock is short, not blocking the thread can improve performance, because it avoids the overhead of a thread context switch (saving register and stack state, and updating the thread's data structures in kernel memory).
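A minimal sketch of OSSpinLock. Note that Apple deprecated OSSpinLock in iOS 10 / macOS 10.12 because it can cause priority inversion; os_unfair_lock is the suggested replacement, shown here only for context (not covered in the original post):

#import <libkern/OSAtomic.h>   // OSSpinLock (deprecated)
#import <os/lock.h>            // os_unfair_lock (replacement)

// inside a method:
OSSpinLock spin = OS_SPINLOCK_INIT;
OSSpinLockLock(&spin);          // busy-waits instead of blocking the thread
// critical section
OSSpinLockUnlock(&spin);

// The same pattern with os_unfair_lock on current systems:
os_unfair_lock unfair = OS_UNFAIR_LOCK_INIT;
os_unfair_lock_lock(&unfair);
// critical section
os_unfair_lock_unlock(&unfair);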

Condition Lock: NSConditionLock

A lock with an associated condition value.
It can be used to control the order in which threads run: each thread blocks in lockWhenCondition: until the lock's condition equals the given value, and unlockWithCondition: releases the lock while setting a new condition, which lets the next waiting thread proceed.

static int index = 1;
// on the main thread
NSConditionLock *lock = [[NSConditionLock alloc] initWithCondition:0];

// Thread 1 runs 3rd
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [lock lockWhenCondition:2];
    NSLog(@"Thread 1 runs as number %i", index++);
    sleep(2);
    [lock unlockWithCondition:3];
});

// Thread 2 runs 1st
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [lock lockWhenCondition:0];
    NSLog(@"Thread 2 runs as number %i", index++);
    sleep(2);
    [lock unlockWithCondition:1];
});

// Thread 3 runs 2nd
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [lock lockWhenCondition:1];
    NSLog(@"Thread 3 runs as number %i", index++);
    sleep(2);
    [lock unlockWithCondition:2];
});

// Thread 4 runs 4th
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [lock lockWhenCondition:3];
    NSLog(@"Thread 4 runs as number %i", index++);
    sleep(2);
    [lock unlock];
});

Condition: NSCondition

Lets a thread wait and be woken up.
Typical use case: dependencies between threads.
Compared with NSLock it has four extra methods:

- (void)wait;                          // wait until woken by signal or broadcast
- (BOOL)waitUntilDate:(NSDate *)limit; // wait to be woken, giving up after the time limit
- (void)signal;                        // wake one waiting thread
- (void)broadcast;                     // wake all waiting threads
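A minimal producer/consumer sketch with NSCondition (the shared queue array is illustrative, not from the original post):

// inside a method:
NSCondition *condition = [[NSCondition alloc] init];
NSMutableArray *queue = [NSMutableArray array];   // shared work queue (illustrative)

// Consumer: waits until there is something to take
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [condition lock];
    while (queue.count == 0) {    // loop guards against spurious wakeups
        [condition wait];         // releases the lock while waiting, re-acquires on wakeup
    }
    id item = queue.firstObject;
    [queue removeObjectAtIndex:0];
    [condition unlock];
    NSLog(@"consumed %@", item);
});

// Producer: adds work, then wakes one waiting thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [condition lock];
    [queue addObject:@"task"];
    [condition signal];           // use broadcast to wake every waiting thread
    [condition unlock];
});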

Synchronized Block: @synchronized

// Source code
@synchronized(obj) {
    // do work
}
// Roughly what the compiler generates
@try {
    objc_sync_enter(obj);
    // do work
} @finally {
    objc_sync_exit(obj);    
}

objc_sync_enter and objc_sync_exit

// Begin synchronizing on 'obj'. 
// Allocates recursive mutex associated with 'obj' if needed.
// Returns OBJC_SYNC_SUCCESS once lock is acquired.  
int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        assert(data);
        data->mutex.lock();
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }

    return result;
}


// End synchronizing on 'obj'. 
// Returns OBJC_SYNC_SUCCESS or OBJC_SYNC_NOT_OWNING_THREAD_ERROR
int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;
    
    if (obj) {
        SyncData* data = id2data(obj, RELEASE); 
        if (!data) {
            result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
        } else {
            bool okay = data->mutex.tryUnlock();
            if (!okay) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            }
        }
    } else {
        // @synchronized(nil) does nothing
    }
    return result;
}

typedef struct SyncData {
    struct SyncData* nextData;
    DisguisedPtr<objc_object> object;
    int32_t threadCount;  // number of THREADS using this block
    recursive_mutex_t mutex;
} SyncData;

static SyncData* id2data(id object, enum usage why)
{
    spinlock_t *lockp = &LOCK_FOR_OBJ(object);
    SyncData **listp = &LIST_FOR_OBJ(object);
    SyncData* result = NULL;

#if SUPPORT_DIRECT_THREAD_KEYS
    // Check per-thread single-entry fast cache for matching object
    bool fastCacheOccupied = NO;
    SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
    if (data) {
        fastCacheOccupied = YES;

        if (data->object == object) {
            // Found a match in fast cache.
            uintptr_t lockCount;

            result = data;
            lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
            if (result->threadCount <= 0  ||  lockCount <= 0) {
                _objc_fatal("id2data fastcache is buggy");
            }

            switch(why) {
            case ACQUIRE: {
                lockCount++;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                break;
            }
            case RELEASE:
                lockCount--;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                if (lockCount == 0) {
                    // remove from fast cache
                    tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
                    // atomic because may collide with concurrent ACQUIRE
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }

            return result;
        }
    }
#endif

    // Check per-thread cache of already-owned locks for matching object
    SyncCache *cache = fetch_cache(NO);
    if (cache) {
        unsigned int i;
        for (i = 0; i < cache->used; i++) {
            SyncCacheItem *item = &cache->list[i];
            if (item->data->object != object) continue;

            // Found a match.
            result = item->data;
            if (result->threadCount <= 0  ||  item->lockCount <= 0) {
                _objc_fatal("id2data cache is buggy");
            }
                
            switch(why) {
            case ACQUIRE:
                item->lockCount++;
                break;
            case RELEASE:
                item->lockCount--;
                if (item->lockCount == 0) {
                    // remove from per-thread cache
                    cache->list[i] = cache->list[--cache->used];
                    // atomic because may collide with concurrent ACQUIRE
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }

            return result;
        }
    }

    // Thread cache didn't find anything.
    // Walk in-use list looking for matching object
    // Spinlock prevents multiple threads from creating multiple 
    // locks for the same new object.
    // We could keep the nodes in some hash table if we find that there are
    // more than 20 or so distinct locks active, but we don't do that now.
    
    lockp->lock();

    {
        SyncData* p;
        SyncData* firstUnused = NULL;
        for (p = *listp; p != NULL; p = p->nextData) {
            if ( p->object == object ) {
                result = p;
                // atomic because may collide with concurrent RELEASE
                OSAtomicIncrement32Barrier(&result->threadCount);
                goto done;
            }
            if ( (firstUnused == NULL) && (p->threadCount == 0) )
                firstUnused = p;
        }
    
        // no SyncData currently associated with object
        if ( (why == RELEASE) || (why == CHECK) )
            goto done;
    
        // an unused one was found, use it
        if ( firstUnused != NULL ) {
            result = firstUnused;
            result->object = (objc_object *)object;
            result->threadCount = 1;
            goto done;
        }
    }

    // malloc a new SyncData and add to list.
    // XXX calling malloc with a global lock held is bad practice,
    // might be worth releasing the lock, mallocing, and searching again.
    // But since we never free these guys we won't be stuck in malloc very often.
    result = (SyncData*)calloc(sizeof(SyncData), 1);
    result->object = (objc_object *)object;
    result->threadCount = 1;
    new (&result->mutex) recursive_mutex_t();
    result->nextData = *listp;
    *listp = result;
    
 done:
    lockp->unlock();
    if (result) {
        // Only new ACQUIRE should get here.
        // All RELEASE and CHECK and recursive ACQUIRE are 
        // handled by the per-thread caches above.
        if (why == RELEASE) {
            // Probably some thread is incorrectly exiting 
            // while the object is held by another thread.
            return nil;
        }
        if (why != ACQUIRE) _objc_fatal("id2data is buggy");
        if (result->object != object) _objc_fatal("id2data is buggy");

#if SUPPORT_DIRECT_THREAD_KEYS
        if (!fastCacheOccupied) {
            // Save in fast thread cache
            tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
        } else 
#endif
        {
            // Save in thread cache
            if (!cache) cache = fetch_cache(YES);
            cache->list[cache->used].data = result;
            cache->list[cache->used].lockCount = 1;
            cache->used++;
        }
    }
    return result;
}

One thing I don't understand: why does an object map to a linked list of lock-holding structs (SyncData)? Why is the relationship one-to-many? Insights from experts are welcome.

Other

Synchronized block: @synchronized
Mutex: NSLock
Mutex: pthread_mutex
GCD semaphore: dispatch_semaphore (minimal sketches of the last two follow below)
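Minimal sketches of pthread_mutex and of dispatch_semaphore used for mutual exclusion, neither of which is shown elsewhere in this post (variable names are illustrative):

#import <pthread.h>

// inside a method:
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_lock(&mutex);
// critical section
pthread_mutex_unlock(&mutex);

// A dispatch_semaphore created with value 1 behaves like a mutex
dispatch_semaphore_t sem = dispatch_semaphore_create(1);
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);   // "lock": decrements, blocks while the count is 0
// critical section
dispatch_semaphore_signal(sem);                        // "unlock": increments, wakes a waiter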

Further Reading

iOS中的多种锁(Lock)
iOS开发中的11种锁以及性能对比
同步块@synchronized
iOS 各种锁机制
