iOS Objective-C GCD: The Dispatch Functions

Author: just东东 | Published 2020-10-27 17:59

1. A brief introduction to GCD's functions

In the previous article on queues we briefly introduced GCD's functions. The functions that execute tasks in GCD fall into two groups: synchronous and asynchronous.

  • The asynchronous function, dispatch_async:
    • Returns without waiting for the submitted task to finish, so the next statement can run immediately
    • Can bring up a new thread to execute the block's task
    • Asynchrony is what gives GCD its multithreading
  • The synchronous function, dispatch_sync:
    • Does not return until the submitted task has finished executing
    • Does not spawn a new thread
    • Executes the block's task on the current thread

Both functions take the same two parameters: a queue of type dispatch_queue_t and a task of type dispatch_block_t.
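To make the contrast concrete, here is a toy sketch using plain pthreads, not libdispatch internals (all the toy_* names are invented for illustration): "sync" runs the task on the calling thread before returning, while "async" hands it to a freshly created thread.

```c
#include <pthread.h>

typedef void (*task_fn)(void *);

struct toy_task { task_fn fn; void *ctx; };

static void *toy_trampoline(void *arg) {
    struct toy_task *t = arg;
    t->fn(t->ctx);
    return NULL;
}

/* Like dispatch_sync: does not return until the task has run,
   and the task runs on the calling thread. */
static void toy_sync(task_fn fn, void *ctx) {
    fn(ctx);
}

/* Like dispatch_async: returns immediately; the task runs on
   another thread (callers join the returned thread only so the
   demo can observe the result). */
static pthread_t toy_async(struct toy_task *t) {
    pthread_t tid;
    pthread_create(&tid, NULL, toy_trampoline, t);
    return tid;
}
```

Real GCD, of course, draws threads from a managed pool rather than creating one per task; the sketch only captures the "wait vs. don't wait" distinction.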

Some questions to guide the exploration:

How are these functions executed? Where exactly is our task block invoked? And what does a dispatch_sync deadlock look like at the bottom of the stack? With these questions in mind, let's start digging.

2. dispatch_sync source analysis

Open the libdispatch source and do a global search for dispatch_sync(dispatch_queue_t; that lands on the implementation of dispatch_sync:

void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
    uintptr_t dc_flags = DC_FLAG_BLOCK;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
    }
    _dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

Here unlikely is a branch-prediction hint telling the compiler the path is rarely taken, so the private-data branch is essentially error tolerance; dispatch_sync itself is just a thin layer of wrapping. We can therefore go straight to _dispatch_sync_f. Note its third argument, which massages the block we passed in: _dispatch_Block_invoke is a macro, defined as follows:

#define _dispatch_Block_invoke(bb) \
        ((dispatch_function_t)((struct Block_layout *)bb)->invoke)
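The macro simply treats the block object as a Block_layout struct and pulls out its invoke function pointer. A minimal sketch of that cast, with a hand-built stand-in for the layout (the real definition in Block_private.h also carries flags semantics and a descriptor; toy_* names are invented here):

```c
/* Simplified stand-in for the Blocks ABI's struct Block_layout.
   Only the invoke slot matters for the cast the macro performs. */
struct toy_block_layout {
    void *isa;
    int flags;
    int reserved;
    void (*invoke)(void *);   /* called with the block itself as argument 0 */
};

typedef void (*toy_dispatch_function_t)(void *);

/* Same shape as _dispatch_Block_invoke: cast the object to a block
   layout and read out its invoke pointer. */
#define toy_Block_invoke(bb) \
        ((toy_dispatch_function_t)((struct toy_block_layout *)(bb))->invoke)
```

So what dispatch passes around internally is not the block object alone but a plain C function pointer plus a context, which is why the later callout is an ordinary func(ctxt) call.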

Continuing on to _dispatch_sync_f, it is a single line that calls _dispatch_sync_f_inline:

static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
        uintptr_t dc_flags)
{
    _dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}

So let's look inside _dispatch_sync_f_inline:

static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    if (likely(dq->dq_width == 1)) {
        return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
    }

    if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
        DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
    }

    dispatch_lane_t dl = upcast(dq)._dl;
    // Global concurrent queues and queues bound to non-dispatch threads
    // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
    if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
    }

    if (unlikely(dq->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
            _dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

Here we can focus on the first branch. Why? A queue width (dq_width) of 1 means a serial queue: its tasks execute strictly one after another. For serial queues the call goes to _dispatch_barrier_sync_f, the barrier machinery, so dispatch_sync on a serial queue goes down the barrier path; you could say the synchronous function is a special case of the barrier function. Jumping into _dispatch_barrier_sync_f, it is again just a wrapper whose real work is done by _dispatch_barrier_sync_f_inline:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    dispatch_tid tid = _dispatch_tid_self();

    if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
        DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
    }

    dispatch_lane_t dl = upcast(dq)._dl;
    // The more correct thing to do would be to merge the qos of the thread
    // that just acquired the barrier lock into the queue state.
    //
    // However this is too expensive for the fast path, so skip doing it.
    // The chosen tradeoff is that if an enqueue on a lower priority thread
    // contends with this fast path, this thread may receive a useless override.
    //
    // Global concurrent queues and queues bound to non-dispatch threads
    // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
    if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
                DC_FLAG_BARRIER | dc_flags);
    }

    if (unlikely(dl->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func,
                DC_FLAG_BARRIER | dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

Inside _dispatch_barrier_sync_f_inline:

  • First it fetches the current thread's tid
  • Then it checks the queue's metatype, crashing with "Queue type doesn't support dispatch_sync" if the queue type can't handle a sync call
  • Next it downcasts dq to a dispatch_lane_t, dl
  • Then comes the only place in this function where tid is used. Code that fetches a tid and then consumes it usually matters, so even though the branch is marked unlikely, it deserves a closer look.

_dispatch_queue_try_acquire_barrier_sync source:

DISPATCH_ALWAYS_INLINE DISPATCH_WARN_RESULT
static inline bool
_dispatch_queue_try_acquire_barrier_sync(dispatch_queue_class_t dq, uint32_t tid)
{
    return _dispatch_queue_try_acquire_barrier_sync_and_suspend(dq._dl, tid, 0);
}

_dispatch_queue_try_acquire_barrier_sync_and_suspend source:

/* Used by _dispatch_barrier_{try,}sync
 *
 * Note, this fails if any of e:1 or dl!=0, but that allows this code to be a
 * simple cmpxchg which is significantly faster on Intel, and makes a
 * significant difference on the uncontended codepath.
 *
 * See discussion for DISPATCH_QUEUE_DIRTY in queue_internal.h
 *
 * Initial state must be `completely idle`
 * Final state forces { ib:1, qf:1, w:0 }
 */
DISPATCH_ALWAYS_INLINE DISPATCH_WARN_RESULT
static inline bool
_dispatch_queue_try_acquire_barrier_sync_and_suspend(dispatch_lane_t dq,
        uint32_t tid, uint64_t suspend_count)
{
    uint64_t init  = DISPATCH_QUEUE_STATE_INIT_VALUE(dq->dq_width);
    uint64_t value = DISPATCH_QUEUE_WIDTH_FULL_BIT | DISPATCH_QUEUE_IN_BARRIER |
            _dispatch_lock_value_from_tid(tid) |
            (suspend_count * DISPATCH_QUEUE_SUSPEND_INTERVAL);
    uint64_t old_state, new_state;

    return os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, acquire, {
        uint64_t role = old_state & DISPATCH_QUEUE_ROLE_MASK;
        if (old_state != (init | role)) {
            os_atomic_rmw_loop_give_up(break);
        }
        new_state = value | role;
    });
}
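Conceptually this is a single compare-and-swap: if the queue state is exactly "completely idle" (plus its role bits), atomically install a new state recording full width, the barrier bit, and the owning tid; any other state makes it give up. A hedged stdatomic sketch of that shape (the TOY_* bit values are invented for illustration, not libdispatch's real constants):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit values, not libdispatch's real ones. */
#define TOY_IDLE        0x0ull
#define TOY_ROLE_MASK   0xF000ull
#define TOY_WIDTH_FULL  0x0100ull
#define TOY_IN_BARRIER  0x0200ull

static bool toy_try_acquire_barrier(_Atomic uint64_t *dq_state, uint64_t tid) {
    uint64_t old_state = atomic_load_explicit(dq_state, memory_order_relaxed);
    for (;;) {
        uint64_t role = old_state & TOY_ROLE_MASK;
        if (old_state != (TOY_IDLE | role)) {
            return false;                 /* not completely idle: give up */
        }
        uint64_t new_state = TOY_WIDTH_FULL | TOY_IN_BARRIER | tid | role;
        if (atomic_compare_exchange_weak_explicit(dq_state, &old_state,
                new_state, memory_order_acquire, memory_order_relaxed)) {
            return true;                  /* barrier "lock" acquired */
        }
        /* old_state was refreshed by the failed CAS; loop and retry */
    }
}
```

The comment block above the real function explains the same tradeoff: keeping this a single cmpxchg makes the uncontended fast path cheap, at the cost of sending every other state to the slow path.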

So this function attempts a single atomic state transition: if the queue's state is exactly "completely idle" (plus its role bits), it installs a new state recording full width, the barrier bit, and the owning tid, and returns true; any other state makes it give up and return false. Back in _dispatch_barrier_sync_f_inline, when this acquisition fails we take the unlikely branch into _dispatch_sync_f_slow.

_dispatch_sync_f_slow source:

DISPATCH_NOINLINE
static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
        dispatch_function_t func, uintptr_t top_dc_flags,
        dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
    dispatch_queue_t top_dq = top_dqu._dq;
    dispatch_queue_t dq = dqu._dq;
    if (unlikely(!dq->do_targetq)) {
        return _dispatch_sync_function_invoke(dq, ctxt, func);
    }

    pthread_priority_t pp = _dispatch_get_priority();
    struct dispatch_sync_context_s dsc = {
        .dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
        .dc_func     = _dispatch_async_and_wait_invoke,
        .dc_ctxt     = &dsc,
        .dc_other    = top_dq,
        .dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
        .dc_voucher  = _voucher_get(),
        .dsc_func    = func,
        .dsc_ctxt    = ctxt,
        .dsc_waiter  = _dispatch_tid_self(),
    };

    _dispatch_trace_item_push(top_dq, &dsc);
    __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

    if (dsc.dsc_func == NULL) {
        // dsc_func being cleared means that the block ran on another thread ie.
        // case (2) as listed in _dispatch_async_and_wait_f_slow.
        dispatch_queue_t stop_dq = dsc.dc_other;
        return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
    }

    _dispatch_introspection_sync_begin(top_dq);
    _dispatch_trace_item_pop(top_dq, &dsc);
    _dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
            DISPATCH_TRACE_ARG(&dsc));
}

_dispatch_sync_f_slow step by step:

  • First it unpacks the local variables top_dq and dq
  • If unlikely(!dq->do_targetq), i.e. the queue has no target queue, it calls _dispatch_sync_function_invoke straight away to run the block we passed in
  • Otherwise it reads the current pthread priority pp
  • It then initializes a dispatch_sync_context_s struct, dsc, capturing the function, its context, the waiting thread's tid, the priority, and the voucher
  • _dispatch_trace_item_push pushes this item onto the queue; items are processed first-in, first-out, and GCD's underlying thread pool executes them
  • __DISPATCH_WAIT_FOR_QUEUE__ then parks the current thread until the queue hands it the work
  • If dsc.dsc_func is NULL afterwards, the block already ran on another thread, and _dispatch_sync_complete_recurse unwinds the target-queue recursion
  • Otherwise it marks the sync as begun, pops the trace item, and calls _dispatch_sync_invoke_and_complete_recurse to execute the block passed to our synchronous function

It turns out _dispatch_sync_invoke_and_complete_recurse ends up calling _dispatch_sync_function_invoke_inline; the _dispatch_sync_function_invoke taken on the early no-target-queue path goes through the very same inline function. Its source:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
        dispatch_function_t func)
{
    dispatch_thread_frame_s dtf;
    _dispatch_thread_frame_push(&dtf, dq);
    _dispatch_client_callout(ctxt, func);
    _dispatch_perfmon_workitem_inc();
    _dispatch_thread_frame_pop(&dtf);
}

On the third line of _dispatch_sync_function_invoke_inline's body, _dispatch_client_callout(ctxt, func), the block we handed to GCD is finally invoked: this is where a dispatch function's task actually runs at the bottom of GCD.
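The shape of that push/callout/pop pattern can be sketched as follows (toy_* names invented; the real thread frame carries much more state than a label):

```c
typedef void (*toy_dispatch_function_t)(void *);

/* Toy thread frame: remembers which queue this thread is draining. */
struct toy_thread_frame { const char *saved; };

static const char *toy_current_queue = "com.apple.main-thread";

static void toy_frame_push(struct toy_thread_frame *dtf, const char *q) {
    dtf->saved = toy_current_queue;   /* save the caller's queue */
    toy_current_queue = q;            /* the block observes the target queue */
}

static void toy_frame_pop(struct toy_thread_frame *dtf) {
    toy_current_queue = dtf->saved;   /* restore on the way out */
}

/* Mirrors _dispatch_sync_function_invoke_inline's shape:
   push frame, client callout, pop frame. */
static void toy_sync_invoke(const char *q, void *ctxt,
        toy_dispatch_function_t func) {
    struct toy_thread_frame dtf;
    toy_frame_push(&dtf, q);
    func(ctxt);                       /* _dispatch_client_callout */
    toy_frame_pop(&dtf);
}
```

The frame push/pop is why a block submitted with dispatch_sync sees itself as running "on" the target queue even though no thread switch happened.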

That answers the invocation question; what about deadlock? In _dispatch_sync_f_slow there is a line we glossed over, __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);, which we described above as parking the thread in a waiting state. Let's look at its source:

__DISPATCH_WAIT_FOR_QUEUE__ source:

static void
__DISPATCH_WAIT_FOR_QUEUE__(dispatch_sync_context_t dsc, dispatch_queue_t dq)
{
    uint64_t dq_state = _dispatch_wait_prepare(dq);
    if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
        DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
                "dispatch_sync called on queue "
                "already owned by current thread");
    }

    // Blocks submitted to the main thread MUST run on the main thread, and
    // dispatch_async_and_wait also executes on the remote context rather than
    // the current thread.
    //
    // For both these cases we need to save the frame linkage for the sake of
    // _dispatch_async_and_wait_invoke
    _dispatch_thread_frame_save_state(&dsc->dsc_dtf);

    if (_dq_state_is_suspended(dq_state) ||
            _dq_state_is_base_anon(dq_state)) {
        dsc->dc_data = DISPATCH_WLH_ANON;
    } else if (_dq_state_is_base_wlh(dq_state)) {
        dsc->dc_data = (dispatch_wlh_t)dq;
    } else {
        _dispatch_wait_compute_wlh(upcast(dq)._dl, dsc);
    }

    if (dsc->dc_data == DISPATCH_WLH_ANON) {
        dsc->dsc_override_qos_floor = dsc->dsc_override_qos =
                (uint8_t)_dispatch_get_basepri_override_qos_floor();
        _dispatch_thread_event_init(&dsc->dsc_event);
    }
    dx_push(dq, dsc, _dispatch_qos_from_pp(dsc->dc_priority));
    _dispatch_trace_runtime_event(sync_wait, dq, 0);
    if (dsc->dc_data == DISPATCH_WLH_ANON) {
        _dispatch_thread_event_wait(&dsc->dsc_event); // acquire
    } else {
        _dispatch_event_loop_wait_for_ownership(dsc);
    }
    if (dsc->dc_data == DISPATCH_WLH_ANON) {
        _dispatch_thread_event_destroy(&dsc->dsc_event);
        // If _dispatch_sync_waiter_wake() gave this thread an override,
        // ensure that the root queue sees it.
        if (dsc->dsc_override_qos > dsc->dsc_override_qos_floor) {
            _dispatch_set_basepri_override_qos(dsc->dsc_override_qos);
        }
    }
}

This is where deadlock is detected, right at the top of the function: it fetches the queue's state and passes it to _dq_state_drain_locked_by, which boils down to _dispatch_lock_is_locked_by:

static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
    // equivalent to _dispatch_lock_owner(lock_value) == tid
    return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

This function is the crux of deadlock detection: if the queue's drain lock is already owned by the very thread that is now about to wait on it, the check returns true and the code takes the DISPATCH_CLIENT_CRASH branch with the message "dispatch_sync called on queue already owned by current thread". That crash is exactly the deadlock you hit when calling dispatch_sync onto the serial queue you are already running on.
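The check itself is plain bit arithmetic: XOR the lock word with the waiter's tid and mask off the non-owner bits; a zero result means the owner is that tid. A sketch (the mask value here is an assumption for illustration; the real DLOCK_OWNER_MASK lives in src/shims/lock.h):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative owner mask: low bits of the lock word are flag bits,
   the rest encode the owning thread id. */
#define TOY_OWNER_MASK 0xfffffffcu

static bool toy_lock_is_locked_by(uint32_t lock_value, uint32_t tid) {
    /* equivalent to owner(lock_value) == tid */
    return ((lock_value ^ tid) & TOY_OWNER_MASK) == 0;
}
```

When a thread dispatch_syncs onto the serial queue it is already draining, lock_value carries that same thread's tid, the XOR cancels it out, the function returns true, and DISPATCH_CLIENT_CRASH fires.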

Back in _dispatch_barrier_sync_f_inline:

  • The next branch is another fallback whose eventual calls are much the same as above, so we won't trace it here
  • Then comes _dispatch_introspection_sync_begin, whose implementation is an introspection hook that is effectively empty, so there is nothing to see
  • Finally it calls _dispatch_lane_barrier_sync_invoke_and_complete

_dispatch_lane_barrier_sync_invoke_and_complete source:

static void
_dispatch_lane_barrier_sync_invoke_and_complete(dispatch_lane_t dq,
        void *ctxt, dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
    _dispatch_sync_function_invoke_inline(dq, ctxt, func);
    _dispatch_trace_item_complete(dc);
    if (unlikely(dq->dq_items_tail || dq->dq_width > 1)) {
        return _dispatch_lane_barrier_complete(dq, 0, 0);
    }

    // Presence of any of these bits requires more work that only
    // _dispatch_*_barrier_complete() handles properly
    //
    // Note: testing for RECEIVED_OVERRIDE or RECEIVED_SYNC_WAIT without
    // checking the role is sloppy, but is a super fast check, and neither of
    // these bits should be set if the lock was never contended/discovered.
    const uint64_t fail_unlock_mask = DISPATCH_QUEUE_SUSPEND_BITS_MASK |
            DISPATCH_QUEUE_ENQUEUED | DISPATCH_QUEUE_DIRTY |
            DISPATCH_QUEUE_RECEIVED_OVERRIDE | DISPATCH_QUEUE_SYNC_TRANSFER |
            DISPATCH_QUEUE_RECEIVED_SYNC_WAIT;
    uint64_t old_state, new_state;

    // similar to _dispatch_queue_drain_try_unlock
    os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, release, {
        new_state  = old_state - DISPATCH_QUEUE_SERIAL_DRAIN_OWNED;
        new_state &= ~DISPATCH_QUEUE_DRAIN_UNLOCK_MASK;
        new_state &= ~DISPATCH_QUEUE_MAX_QOS_MASK;
        if (unlikely(old_state & fail_unlock_mask)) {
            os_atomic_rmw_loop_give_up({
                return _dispatch_lane_barrier_complete(dq, 0, 0);
            });
        }
    });
    if (_dq_state_is_base_wlh(old_state)) {
        _dispatch_event_loop_assert_not_owned((dispatch_wlh_t)dq);
    }
}

  • The function starts with _dispatch_sync_function_invoke_inline(dq, ctxt, func);, the same inline we analyzed above, whose main job is to invoke the block we passed in
  • The rest is state checking and unlock bookkeeping, which we won't dwell on

Summary:

That wraps up the synchronous path. The branching is intricate: layer upon layer of checks and wrapper calls, tedious to trace and hard to memorize, but working through it helps a great deal in understanding GCD. We now know both where the task function gets invoked and how deadlock is detected.

3. dispatch_async source analysis

With the synchronous function covered, let's see how the asynchronous one is implemented underneath. As we know, its key difference from dispatch_sync is that it can bring up a thread to execute the task. Start at its implementation:

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME;
    dispatch_qos_t qos;

    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
  • First it allocates a dispatch_continuation_t and sets up some flags
  • _dispatch_continuation_init then wraps the block we passed in and returns a priority, qos
  • Finally _dispatch_continuation_async takes over:
    • Internally its main job is to call dx_push
    • Jumping to dx_push shows it is just a macro, #define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z), which dispatches through the queue's vtable to dq_push
    • dq_push has no single definition to jump to, so we must search globally; the search turns up many implementations
  • Of the many dq_push implementations we'll follow _dispatch_root_queue_push (they are all broadly similar, and async work commonly lands on a root queue drawn from the global _dispatch_root_queues array)
  • _dispatch_root_queue_push ends up calling _dispatch_root_queue_push_inline
  • which calls _dispatch_root_queue_poke
  • which calls _dispatch_root_queue_poke_slow
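dx_push is C-style polymorphism: every queue class carries a vtable of function pointers, and the macro just indirects through it, which is why jump-to-definition cannot resolve which dq_push runs. A toy sketch of that dispatch shape (toy_* names invented):

```c
/* Toy vtable dispatch mirroring dx_push's shape:
   dx_vtable(x)->dq_push(x, y, z). */
struct toy_queue;

struct toy_queue_vtable {
    void (*dq_push)(struct toy_queue *dq, void *item, unsigned qos);
};

struct toy_queue {
    const struct toy_queue_vtable *do_vtable;
    void *last_item;
};

#define toy_dx_vtable(x) ((x)->do_vtable)
#define toy_dx_push(x, y, z) toy_dx_vtable(x)->dq_push(x, y, z)

/* One concrete "class": a root-queue-like push that records the item
   (standing in for enqueue + worker poke). */
static void toy_root_queue_push(struct toy_queue *dq, void *item, unsigned qos) {
    (void)qos;
    dq->last_item = item;
}

static const struct toy_queue_vtable toy_root_vtable = { toy_root_queue_push };
```

Each queue type (root queue, serial lane, main queue, ...) installs its own dq_push in its vtable, so the same dx_push call site reaches different code per queue.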

_dispatch_root_queue_poke_slow source:

static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
    int remaining = n;
    int r = ENOSYS;

    _dispatch_root_queues_init();
    _dispatch_debug_root_queue(dq, __func__);
    _dispatch_trace_runtime_event(worker_request, dq, (uint64_t)n);

#if !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
    if (dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE)
#endif
    {
        _dispatch_root_queue_debug("requesting new worker thread for global "
                "queue: %p", dq);
        r = _pthread_workqueue_addthreads(remaining,
                _dispatch_priority_to_pp_prefer_fallback(dq->dq_priority));
        (void)dispatch_assume_zero(r);
        return;
    }
#endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_POOL
    dispatch_pthread_root_queue_context_t pqc = dq->do_ctxt;
    if (likely(pqc->dpq_thread_mediator.do_vtable)) {
        while (dispatch_semaphore_signal(&pqc->dpq_thread_mediator)) {
            _dispatch_root_queue_debug("signaled sleeping worker for "
                    "global queue: %p", dq);
            if (!--remaining) {
                return;
            }
        }
    }

    bool overcommit = dq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    if (overcommit) {
        os_atomic_add2o(dq, dgq_pending, remaining, relaxed);
    } else {
        if (!os_atomic_cmpxchg2o(dq, dgq_pending, 0, remaining, relaxed)) {
            _dispatch_root_queue_debug("worker thread request still pending for "
                    "global queue: %p", dq);
            return;
        }
    }

    int can_request, t_count;
    // seq_cst with atomic store to tail <rdar://problem/16932833>
    t_count = os_atomic_load2o(dq, dgq_thread_pool_size, ordered);
    do {
        can_request = t_count < floor ? 0 : t_count - floor;
        if (remaining > can_request) {
            _dispatch_root_queue_debug("pthread pool reducing request from %d to %d",
                    remaining, can_request);
            os_atomic_sub2o(dq, dgq_pending, remaining - can_request, relaxed);
            remaining = can_request;
        }
        if (remaining == 0) {
            _dispatch_root_queue_debug("pthread pool is full for root queue: "
                    "%p", dq);
            return;
        }
    } while (!os_atomic_cmpxchgvw2o(dq, dgq_thread_pool_size, t_count,
            t_count - remaining, &t_count, acquire));

#if !defined(_WIN32)
    pthread_attr_t *attr = &pqc->dpq_thread_attr;
    pthread_t tid, *pthr = &tid;
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
    if (unlikely(dq == &_dispatch_mgr_root_queue)) {
        pthr = _dispatch_mgr_root_queue_init();
    }
#endif
    do {
        _dispatch_retain(dq); // released in _dispatch_worker_thread
        while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
            if (r != EAGAIN) {
                (void)dispatch_assume_zero(r);
            }
            _dispatch_temporary_resource_shortage();
        }
    } while (--remaining);
#else // defined(_WIN32)
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
    if (unlikely(dq == &_dispatch_mgr_root_queue)) {
        _dispatch_mgr_root_queue_init();
    }
#endif
    do {
        _dispatch_retain(dq); // released in _dispatch_worker_thread
#if DISPATCH_DEBUG
        unsigned dwStackSize = 0;
#else
        unsigned dwStackSize = 64 * 1024;
#endif
        uintptr_t hThread = 0;
        while (!(hThread = _beginthreadex(NULL, dwStackSize, _dispatch_worker_thread_thunk, dq, STACK_SIZE_PARAM_IS_A_RESERVATION, NULL))) {
            if (errno != EAGAIN) {
                (void)dispatch_assume(hThread);
            }
            _dispatch_temporary_resource_shortage();
        }
        if (_dispatch_mgr_sched.prio > _dispatch_mgr_sched.default_prio) {
            (void)dispatch_assume_zero(SetThreadPriority((HANDLE)hThread, _dispatch_mgr_sched.prio) == TRUE);
        }
        CloseHandle((HANDLE)hThread);
    } while (--remaining);
#endif // defined(_WIN32)
#else
    (void)floor;
#endif // DISPATCH_USE_PTHREAD_POOL
}

That is a lot of code, so let's pick out the highlights:

  • The overcommit branch handles queues whose dq_priority carries DISPATCH_PRIORITY_FLAG_OVERCOMMIT (serial queues among them), adjusting the pending count accordingly
  • t_count = os_atomic_load2o(dq, dgq_thread_pool_size, ordered); reads the thread pool's remaining capacity
  • The request is clamped to what the pool can still grant; if the pool is full the function just logs "pthread pool is full" and returns rather than creating more threads
  • Finally pthread_create(pthr, attr, _dispatch_worker_thread, dq) creates the worker threads
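The clamping arithmetic is compact enough to lift out on its own: can_request is how many threads the pool can still grant above the floor, and remaining is shrunk to that. A sketch of just that computation (toy_ name invented):

```c
/* Clamp a worker-thread request against remaining pool capacity,
   mirroring the can_request computation in _dispatch_root_queue_poke_slow:
   can_request = t_count < floor ? 0 : t_count - floor. */
static int toy_clamp_request(int remaining, int t_count, int floor) {
    int can_request = (t_count < floor) ? 0 : t_count - floor;
    return (remaining > can_request) ? can_request : remaining;
}
```

In the real function this runs inside an os_atomic_cmpxchgvw2o loop so that dgq_thread_pool_size is decremented atomically by exactly the granted amount.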

Summary

That concludes our first pass over the asynchronous function. One question lingered for me: where is the explicit block invocation for the async path, analogous to the callout we saw for sync? The answer suggested by the code above is that the work is pushed into the queue and then picked up by the worker threads created here (_dispatch_worker_thread), which drain the queue and perform the callout when they run.


Original post: https://www.haomeiwen.com/subject/zhqtmktx.html