
A Brief Look at How the ART Virtual Machine Allocates an Object

Author: 珍惜Any | Published 2021-03-05 15:48

    Preface:

    This article walks through how the ART virtual machine allocates an object, including where an instance of A ends up when we write `new A()`.

    The analysis is based mainly on the Android 7.1 source code.

    Background concepts:

    Reference types:

    Strong reference (StrongReference):

    The JVM would rather throw an OOM than let the GC reclaim an object that is strongly reachable.

    Soft reference (SoftReference):

    A softly referenced object is reclaimed only when memory is running low.

    Weak reference (WeakReference):

    During GC, once an object is found to be reachable only through weak references, its memory is reclaimed regardless of whether memory is currently sufficient.

    Phantom reference (PhantomReference):

    A phantom-referenced object can be reclaimed by the GC at any time. When the garbage collector is about to reclaim an object and finds that it still has a phantom reference, it enqueues that phantom reference into its associated reference queue before reclaiming the object's memory. A program can check whether the object's phantom reference has appeared in the queue to learn that the object is about to be reclaimed, so a phantom reference can serve as a marker that the GC is reclaiming an Object.

    ART heap layout:

    Image Space

    Image Space: a contiguous address range that is never garbage collected. It holds the classes preloaded by the system; those objects live in the OAT file system@framework@boot.art@classes.oat, so on every boot the system classes only need to be mapped into the Image Space.

    It is created only once at boot and is globally unique.

    Zygote Space

    Zygote Space: a contiguous address range backed by anonymous shared memory. It is garbage collected and manages the objects and resources that the Zygote process preloads and creates during startup.

    Allocation Space

    Allocation Space: has the same nature as the Zygote Space. Just before the Zygote process forks its first child, the Zygote Space is split in two: the already-used part keeps the name Zygote Space, while the unused part becomes the Allocation Space. From then on, objects are allocated in the Allocation Space.

    Large Object Space

    Large Object Space: a set of discontiguous address ranges, garbage collected, used to allocate large objects bigger than about 12 KB.

    Most of the objects we allocate are managed in the Allocation Space and the Large Object Space.

    Note: the Image Space and the Zygote Space are shared between the Zygote process and application processes, while each process owns its own independent Allocation Space.

    Also note that although both the Image Space and the Zygote Space are shared between the Zygote process and application processes, the objects in the former are created only once, whereas the objects in the latter are recreated on every system boot according to the runtime situation.

    An object is allocated on the large object heap only when the following three conditions are all met; otherwise it is allocated in the Zygote Space or the Allocation Space (a simplified sketch of this check follows the list):

    • The requested allocation size is greater than or equal to the value of the Heap member variable large_object_threshold_. This value equals 3 * kPageSize, i.e. three pages.

      The threshold is not the same on every device; it is mainly related to how much memory the phone has.

    • The Allocation Space has already been carved out of the Zygote Space, i.e. the Heap member variable have_zygote_space_ is true.

    • The object being allocated is a primitive array, i.e. a byte[], int[], boolean[] array and so on (the code below also treats String the same way), because an array occupies a contiguous block of memory.
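
    To make the three conditions concrete, here is a minimal C++ sketch of the decision, written against the description above rather than copied from the ART source. kPageSize, the two boolean parameters and the namespace are placeholders for illustration, not the exact ART declarations.

    // Hedged sketch: a simplified stand-in for the large-object check described above.
    #include <cstddef>

    namespace sketch {

    constexpr size_t kPageSize = 4096;
    constexpr size_t kLargeObjectThreshold = 3 * kPageSize;  // 12 KB, as stated above.

    // The caller tells us whether the class is a primitive array (or a String) and
    // whether the Allocation Space has already been split off from the Zygote Space.
    inline bool ShouldAllocLargeObject(bool is_primitive_array_or_string,
                                       bool have_zygote_space,
                                       size_t byte_count) {
      // All three conditions from the list above must hold; otherwise the object is
      // allocated in the Zygote Space or the Allocation Space instead.
      return byte_count >= kLargeObjectThreshold &&
             have_zygote_space &&
             is_primitive_array_or_string;
    }

    }  // namespace sketch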

    How memory gets allocated:

    We usually say that in a Java program, an object created with `new` is allocated in the heap. In reality, most newly created objects do go to the heap, but not all of them: there are two other places a new object's data can live, which we call stack allocation and the TLAB.

    Background:

    In our applications, many objects never escape the method that creates them; their lifetime begins when the method is called and ends when the method returns. For objects like these, shouldn't we consider not allocating them on the heap at all?

    Once such an object is allocated on the heap, it has no remaining references when the method returns and must be reclaimed by the GC; if this happens on a large scale, it is clearly a burden on the GC.

    Characteristics of stack allocation

    For this reason the JVM (and ART alike) offers a concept called stack allocation: for an object whose scope does not escape the method, the object is not placed in heap memory at all. Instead its fields are broken apart and allocated on the stack (which is thread-private stack memory). When the method returns, reclaiming the stack frame automatically reclaims the scattered fields of the stack-allocated object, so no extra useless work is pushed onto the GC and overall application performance improves.
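
    As a concrete (C++) analogy for what this optimization does, the two functions below show the same computation before and after the object's fields are scattered onto the stack. This is only a conceptual model of scalar replacement, not code that the ART or HotSpot compiler actually emits.

    // Hedged illustration: what "breaking an object's fields onto the stack" means for
    // an object that never escapes the method that creates it.
    #include <cstdio>

    struct Point { int x; int y; };

    // Before: the object lives on the heap and must later be reclaimed.
    int DistanceSquaredHeap() {
      Point* p = new Point{3, 4};        // heap allocation; reclaiming it is extra work
      int d = p->x * p->x + p->y * p->y;
      delete p;
      return d;
    }

    // After scalar replacement: the fields become plain stack locals, the "object"
    // disappears together with the stack frame, and the collector never sees it.
    int DistanceSquaredStack() {
      int x = 3;                         // p->x scattered onto the stack
      int y = 4;                         // p->y scattered onto the stack
      return x * x + y * y;
    }

    int main() {
      std::printf("%d %d\n", DistanceSquaredHeap(), DistanceSquaredStack());
      return 0;
    }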

    Next, let's look at the TLAB (Thread Local Allocation Buffer).

    This TLAB part is written from my own understanding; at the time I also puzzled for quite a while over why a TLAB is needed at all.

    We know that objects are allocated on the heap, and the heap is a globally shared region. When several threads allocate object space from the heap at the same moment, synchronization is required, and that synchronization makes allocation slower (even though the JVM uses CAS to handle a failed allocation attempt); under heavily contended allocation, efficiency still suffers.

    Hence the TLAB approach to allocating memory.

    Allocating memory via a TLAB:

    TLAB stands for thread-local allocation buffer. To reduce synchronization between threads and speed things up, Android uses the Thread's local storage area for allocation. If a TLAB allocation is possible, the allocation ultimately goes through the Thread object's AllocTlab() method.

    The whole point of the TLAB is to raise the efficiency of allocating objects on the heap: each thread gets a small private slice of the heap, so a TLAB is a thread-private piece of heap space (in HotSpot it is carved out of the Eden area; in ART, as the code below shows, it is carved out of the Bump Pointer Space or the Region Space).
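
    Conceptually, allocating from a TLAB is nothing more than bumping a pointer inside the thread's private buffer, which is why no lock or CAS is needed on the fast path. The sketch below is a simplified model of that idea; the TlabBuffer struct and its field names are invented for illustration (ART keeps the equivalent fields, thread_local_pos and thread_local_end, inside the Thread object and uses them in Thread::AllocTlab).

    // Hedged sketch: thread-local bump-pointer allocation, the mechanism behind a TLAB.
    #include <cstddef>
    #include <cstdint>

    struct TlabBuffer {
      uint8_t* pos;  // Next free byte inside this thread's buffer.
      uint8_t* end;  // One past the last usable byte.

      size_t Remaining() const { return static_cast<size_t>(end - pos); }

      // No lock and no atomic operation: the buffer belongs to exactly one thread.
      void* Alloc(size_t bytes) {
        if (bytes > Remaining()) {
          return nullptr;  // Caller must request a fresh TLAB from the shared space.
        }
        uint8_t* result = pos;
        pos += bytes;
        return result;
      }
    };

    The thread only goes back to the shared space, with synchronization, once per TLAB refill instead of once per object, which is where the speed-up comes from.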

    Stack allocation vs. TLAB

    Technique            What it targets                              Position in the allocation flow
    Stack allocation     Avoiding needless GC work                    1
    TLAB                 Speeding up object allocation on the heap    2

    The Heap allocation phase:

    Let's go straight to Heap's memory-allocation method.

    • allocator indicates the allocator type, i.e. which space the object should be allocated in. AllocatorType is an enum, defined as follows:

    This enum is defined in /art/runtime/gc/allocator_type.h.
    AllocatorType has eight values in total; their meanings are as follows:

    kAllocatorTypeBumpPointer: allocate the object in the Bump Pointer Space.

    kAllocatorTypeTLAB: allocate the object in a thread-local allocation buffer provided by the Bump Pointer Space.
    kAllocatorTypeRosAlloc: allocate the object in the RosAlloc Space.
    kAllocatorTypeDlMalloc: allocate the object in the DlMalloc Space.
    kAllocatorTypeNonMoving: allocate the object in the Non Moving Space.
    kAllocatorTypeLOS: allocate the object in the Large Object Space.
    kAllocatorTypeRegion: allocate the object in the Region Space.
    kAllocatorTypeRegionTLAB: allocate the object in a thread-local allocation buffer carved out of the Region Space.

    // Different types of allocators.
    enum AllocatorType {
    
      kAllocatorTypeBumpPointer,  // Use BumpPointer allocator, has entrypoints.
      kAllocatorTypeTLAB,  // Use TLAB allocator, has entrypoints.
      kAllocatorTypeRosAlloc,  // Use RosAlloc allocator, has entrypoints.
      kAllocatorTypeDlMalloc,  // Use dlmalloc allocator, has entrypoints.
      kAllocatorTypeNonMoving,  // Special allocator for non moving objects, doesn't have entrypoints.
      kAllocatorTypeLOS,  // Large object space, also doesn't have entrypoints.
      kAllocatorTypeRegion,
      kAllocatorTypeRegionTLAB,
    };
    
    
    • pre_fence_visitor is a callback used to run initialization on the current execution path as soon as the object has been allocated. For example, right after an array object is allocated, this callback immediately sets the array's length, which guarantees the completeness and consistency of the array object without having to take a lock to achieve the same thing in a multithreaded environment (a sketch of such a visitor follows).
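
    For example, when an array is allocated, the allocator is handed a visitor that installs the array's length before the fence publishes the object (the ART source has a SetLengthVisitor for this purpose). The snippet below is a hedged sketch of that idea; MiniArray stands in for mirror::Array and is not the real ART type.

    // Hedged sketch: a pre-fence visitor that initializes a freshly allocated array's
    // length before any other thread can observe the object.
    #include <cstddef>
    #include <cstdint>

    struct MiniArray {
      int32_t length;  // Stand-in for mirror::Array's length field.
    };

    class SetLengthVisitor {
     public:
      explicit SetLengthVisitor(int32_t length) : length_(length) {}

      // Called by the allocator with the new object and its usable size, on the
      // allocating thread, right after the raw memory has been handed out.
      void operator()(MiniArray* obj, size_t usable_size) const {
        (void)usable_size;   // A real visitor may use this to compute the length.
        obj->length = length_;
      }

     private:
      const int32_t length_;
    };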

    The core method is shown below:

    
    //Allocate an object in the space selected by the given allocator.
    template <bool kInstrumented, bool kCheckLargeObject, typename PreFenceVisitor>
    inline mirror::Object* Heap::AllocObjectWithAllocator(Thread* self,
                                                          mirror::Class* klass,
                                                          size_t byte_count,
                                                          AllocatorType allocator,
                                                          const PreFenceVisitor& pre_fence_visitor) {
      if (kIsDebugBuild) {
        CheckPreconditionsForAllocObject(klass, byte_count);
        // Since allocation can cause a GC which will need to SuspendAll, make sure all allocations are
        // done in the runnable state where suspension is expected.
        CHECK_EQ(self->GetState(), kRunnable);
        self->AssertThreadSuspensionIsAllowable();
      }
    
    
      
      // Need to check that we aren't the large object allocator since the large object allocation
      // code path includes this function. If we didn't check we would have an infinite loop.
      mirror::Object* obj;
      
      //Decide whether the object needs to be allocated in the Large Object Space:
      //1) The requested size is >= large_object_threshold_ (which equals 3 * kPageSize, i.e. three pages).
      //2) The object being allocated is a primitive array (byte[], int[], boolean[], etc.) or a String.
      //3) kCheckLargeObject is true.
      if (kCheckLargeObject && UNLIKELY(ShouldAllocLargeObject(klass, byte_count))) {
        //If the check returns true, call AllocLargeObject to allocate the large object.
        obj = AllocLargeObject<kInstrumented, PreFenceVisitor>(self, &klass, byte_count,
                                                               pre_fence_visitor);
        if (obj != nullptr) {
          return obj;
        } else {
          // There should be an OOM exception, since we are retrying, clear it.
          self->ClearException();
        }
        // If the large object allocation failed, try to use the normal spaces (main space,
        // non moving space). This can happen if there is significant virtual address space
        // fragmentation.
      }
      // bytes allocated for the (individual) object.
      
      // Number of bytes allocated for the object.
      size_t bytes_allocated;
      // Usable size of the memory handed back for this object.
      size_t usable_size;
      size_t new_num_bytes_allocated = 0;
      if (allocator == kAllocatorTypeTLAB || allocator == kAllocatorTypeRegionTLAB) {
        byte_count = RoundUp(byte_count, space::BumpPointerSpace::kAlignment);
      }
      // If we have a thread local allocation we don't need to update bytes allocated.
      if ((allocator == kAllocatorTypeTLAB || allocator == kAllocatorTypeRegionTLAB) &&
          byte_count <= self->TlabSize()) {
        //If the conditions are met, allocate the object from the thread's TLAB.
        obj = self->AllocTlab(byte_count);
        DCHECK(obj != nullptr) << "AllocTlab can't fail";
        //Install the class pointer on the newly allocated object; note that the very
        //first thing done to a freshly created object is setting its class.
        obj->SetClass(klass);
        if (kUseBakerOrBrooksReadBarrier) {
          if (kUseBrooksReadBarrier) {
            obj->SetReadBarrierPointer(obj);
          }
          obj->AssertReadBarrierPointer();
        }
        bytes_allocated = byte_count;
        usable_size = bytes_allocated;
        pre_fence_visitor(obj, usable_size);
        QuasiAtomic::ThreadFenceForConstructor();
      } else if (!kInstrumented && allocator == kAllocatorTypeRosAlloc &&
                 (obj = rosalloc_space_->AllocThreadLocal(self, byte_count, &bytes_allocated)) &&
                 LIKELY(obj != nullptr)) {
        DCHECK(!is_running_on_memory_tool_);
        obj->SetClass(klass);
        if (kUseBakerOrBrooksReadBarrier) {
          if (kUseBrooksReadBarrier) {
            obj->SetReadBarrierPointer(obj);
          }
          obj->AssertReadBarrierPointer();
        }
        usable_size = bytes_allocated;
        pre_fence_visitor(obj, usable_size);
        QuasiAtomic::ThreadFenceForConstructor();
      } else {
        // bytes allocated that takes bulk thread-local buffer allocations into account.
        size_t bytes_tl_bulk_allocated = 0;
        
        //TryToAllocate is another core method: it tries to allocate in the space
        //indicated by the allocator (typically the Allocation Space).
        obj = TryToAllocate<kInstrumented, false>(self, allocator, byte_count, &bytes_allocated,
                                                  &usable_size, &bytes_tl_bulk_allocated);
                                                  
                                                  
        if (UNLIKELY(obj == nullptr)) {
          // AllocateInternalWithGc can cause thread suspension, if someone instruments the entrypoints
          // or changes the allocator in a suspend point here, we need to retry the allocation.
          //Reaching here means all of the attempts above failed: run a GC first and then
          //retry the allocation. This is another core method.
          obj = AllocateInternalWithGc(self,
                                       allocator,
                                       kInstrumented,
                                       byte_count,
                                       &bytes_allocated,
                                       &usable_size,
                                       &bytes_tl_bulk_allocated, &klass);
          if (obj == nullptr) {
            // The only way that we can get a null return if there is no pending exception is if the
            // allocator or instrumentation changed.
            if (!self->IsExceptionPending()) {
              // AllocObject will pick up the new allocator type, and instrumented as true is the safe
              // default.
              return AllocObject</*kInstrumented*/true>(self,
                                                        klass,
                                                        byte_count,
                                                        pre_fence_visitor);
            }
            return nullptr;
          }
        }
        DCHECK_GT(bytes_allocated, 0u);
        DCHECK_GT(usable_size, 0u);
        obj->SetClass(klass);
        if (kUseBakerOrBrooksReadBarrier) {
          if (kUseBrooksReadBarrier) {
            obj->SetReadBarrierPointer(obj);
          }
          obj->AssertReadBarrierPointer();
        }
        if (collector::SemiSpace::kUseRememberedSet && UNLIKELY(allocator == kAllocatorTypeNonMoving)) {
          // (Note this if statement will be constant folded away for the
          // fast-path quick entry points.) Because SetClass() has no write
          // barrier, if a non-moving space allocation, we need a write
          // barrier as the class pointer may point to the bump pointer
          // space (where the class pointer is an "old-to-young" reference,
          // though rare) under the GSS collector with the remembered set
          // enabled. We don't need this for kAllocatorTypeRosAlloc/DlMalloc
          // cases because we don't directly allocate into the main alloc
          // space (besides promotions) under the SS/GSS collector.
          WriteBarrierField(obj, mirror::Object::ClassOffset(), klass);
        }
        pre_fence_visitor(obj, usable_size);
        QuasiAtomic::ThreadFenceForConstructor();
        new_num_bytes_allocated = static_cast<size_t>(
            num_bytes_allocated_.FetchAndAddRelaxed(bytes_tl_bulk_allocated)) + bytes_tl_bulk_allocated;
      }
      if (kIsDebugBuild && Runtime::Current()->IsStarted()) {
        CHECK_LE(obj->SizeOf(), usable_size);
      }
      // TODO: Deprecate.
      if (kInstrumented) {
        if (Runtime::Current()->HasStatsEnabled()) {
          RuntimeStats* thread_stats = self->GetStats();
          ++thread_stats->allocated_objects;
          thread_stats->allocated_bytes += bytes_allocated;
          RuntimeStats* global_stats = Runtime::Current()->GetStats();
          ++global_stats->allocated_objects;
          global_stats->allocated_bytes += bytes_allocated;
        }
      } else {
        DCHECK(!Runtime::Current()->HasStatsEnabled());
      }
      if (kInstrumented) {
        if (IsAllocTrackingEnabled()) {
          // allocation_records_ is not null since it never becomes null after allocation tracking is
          // enabled.
          DCHECK(allocation_records_ != nullptr);
          allocation_records_->RecordAllocation(self, &obj, bytes_allocated);
        }
      } else {
        DCHECK(!IsAllocTrackingEnabled());
      }
      if (AllocatorHasAllocationStack(allocator)) {
        PushOnAllocationStack(self, &obj);
      }
      if (kInstrumented) {
        if (gc_stress_mode_) {
          CheckGcStressMode(self, &obj);
        }
      } else {
        DCHECK(!gc_stress_mode_);
      }
      // IsConcurrentGc() isn't known at compile time so we can optimize by not checking it for
      // the BumpPointer or TLAB allocators. This is nice since it allows the entire if statement to be
      // optimized out. And for the other allocators, AllocatorMayHaveConcurrentGC is a constant since
      // the allocator_type should be constant propagated.
      if (AllocatorMayHaveConcurrentGC(allocator) && IsGcConcurrent()) {
        CheckConcurrentGC(self, new_num_bytes_allocated, &obj);
      }
      VerifyObject(obj);
      self->VerifyStack();
      return obj;
    }
    

    The function is summarized with a flow chart in the original article (image not reproduced here).

    TryToAllocate

    template <const bool kInstrumented, const bool kGrow>
    inline mirror::Object* Heap::TryToAllocate(Thread* self,
                                               AllocatorType allocator_type,
                                               size_t alloc_size,
                                               size_t* bytes_allocated,
                                               size_t* usable_size,
                                               size_t* bytes_tl_bulk_allocated) {
    
      //Unless the allocation is targeting the current ART runtime thread's TLAB (or RosAlloc),
      if (allocator_type != kAllocatorTypeTLAB &&
          allocator_type != kAllocatorTypeRegionTLAB &&
          allocator_type != kAllocatorTypeRosAlloc &&
          //and the requested object size would exceed the current heap limit, the allocation fails and nullptr is returned.
          UNLIKELY(IsOutOfMemoryOnAllocation<kGrow>(allocator_type, alloc_size))) {
        return nullptr;
      }
      //Dispatch on the allocator type.
      mirror::Object* ret;
      switch (allocator_type) {
        //kAllocatorTypeBumpPointer: allocate the object in the Bump Pointer Space by
        //calling AllocNonvirtual on the BumpPointerSpace pointed to by the Heap member
        //variable bump_pointer_space_.
        case kAllocatorTypeBumpPointer: {
          DCHECK(bump_pointer_space_ != nullptr);
          alloc_size = RoundUp(alloc_size, space::BumpPointerSpace::kAlignment);
          ret = bump_pointer_space_->AllocNonvirtual(alloc_size);
          if (LIKELY(ret != nullptr)) {
            *bytes_allocated = alloc_size;
            *usable_size = alloc_size;
            *bytes_tl_bulk_allocated = alloc_size;
          }
          break;
        }
        //kAllocatorTypeRosAlloc: allocate the object in the RosAlloc Space. Depending on
        //kInstrumented and is_running_on_memory_tool_, either Alloc or AllocNonvirtual is
        //called on the RosAllocSpace pointed to by the Heap member variable rosalloc_space_.
        case kAllocatorTypeRosAlloc: {
          if (kInstrumented && UNLIKELY(is_running_on_memory_tool_)) {
            // If running on valgrind or asan, we should be using the instrumented path.
            size_t max_bytes_tl_bulk_allocated = rosalloc_space_->MaxBytesBulkAllocatedFor(alloc_size);
            if (UNLIKELY(IsOutOfMemoryOnAllocation<kGrow>(allocator_type,
                                                          max_bytes_tl_bulk_allocated))) {
              return nullptr;
            }
            ret = rosalloc_space_->Alloc(self, alloc_size, bytes_allocated, usable_size,
                                         bytes_tl_bulk_allocated);
          } else {
            DCHECK(!is_running_on_memory_tool_);
            size_t max_bytes_tl_bulk_allocated =
                rosalloc_space_->MaxBytesBulkAllocatedForNonvirtual(alloc_size);
            if (UNLIKELY(IsOutOfMemoryOnAllocation<kGrow>(allocator_type,
                                                          max_bytes_tl_bulk_allocated))) {
              return nullptr;
            }
            if (!kInstrumented) {
              DCHECK(!rosalloc_space_->CanAllocThreadLocal(self, alloc_size));
            }
            ret = rosalloc_space_->AllocNonvirtual(self, alloc_size, bytes_allocated, usable_size,
                                                   bytes_tl_bulk_allocated);
          }
          break;
        }
        //kAllocatorTypeDlMalloc: allocate the object in the DlMalloc Space by calling
        //Alloc or AllocNonvirtual on the DlMallocSpace pointed to by dlmalloc_space_
        //(same instrumentation check as for kAllocatorTypeRosAlloc).
        case kAllocatorTypeDlMalloc: {
          if (kInstrumented && UNLIKELY(is_running_on_memory_tool_)) {
            // If running on valgrind, we should be using the instrumented path.
            ret = dlmalloc_space_->Alloc(self, alloc_size, bytes_allocated, usable_size,
                                         bytes_tl_bulk_allocated);
          } else {
            DCHECK(!is_running_on_memory_tool_);
            ret = dlmalloc_space_->AllocNonvirtual(self, alloc_size, bytes_allocated, usable_size,
                                                   bytes_tl_bulk_allocated);
          }
          break;
        }
        //kAllocatorTypeNonMoving: allocate the object in the Non Moving Space by calling
        //Alloc on the RosAllocSpace or DlMallocSpace pointed to by the Heap member
        //variable non_moving_space_.
        case kAllocatorTypeNonMoving: {
          ret = non_moving_space_->Alloc(self, alloc_size, bytes_allocated, usable_size,
                                         bytes_tl_bulk_allocated);
          break;
        }
    
        //kAllocatorTypeLOS: allocate the object in the Large Object Space by calling
        //Alloc on the LargeObjectSpace pointed to by the Heap member variable
        //large_object_space_.
        case kAllocatorTypeLOS: {
          ret = large_object_space_->Alloc(self, alloc_size, bytes_allocated, usable_size,
                                           bytes_tl_bulk_allocated);
          // Note that the bump pointer spaces aren't necessarily next to
          // the other continuous spaces like the non-moving alloc space or
          // the zygote space.
          DCHECK(ret == nullptr || large_object_space_->Contains(ret));
          break;
        }
        //kAllocatorTypeTLAB: allocate the object in the current thread's TLAB, which is
        //carved out of the Bump Pointer Space. If the remaining TLAB space is smaller than
        //the requested size, first request a new thread-local buffer from the Bump Pointer
        //Space via AllocNewTlab, then satisfy the allocation with Thread::AllocTlab.
        case kAllocatorTypeTLAB: {
          DCHECK_ALIGNED(alloc_size, space::BumpPointerSpace::kAlignment);
          if (UNLIKELY(self->TlabSize() < alloc_size)) {
            const size_t new_tlab_size = alloc_size + kDefaultTLABSize;
            if (UNLIKELY(IsOutOfMemoryOnAllocation<kGrow>(allocator_type, new_tlab_size))) {
              return nullptr;
            }
            // Try allocating a new thread local buffer, if the allocaiton fails the space must be
            // full so return null.
            if (!bump_pointer_space_->AllocNewTlab(self, new_tlab_size)) {
              return nullptr;
            }
            *bytes_tl_bulk_allocated = new_tlab_size;
          } else {
            *bytes_tl_bulk_allocated = 0;
          }
          // The allocation can't fail.
          ret = self->AllocTlab(alloc_size);
          DCHECK(ret != nullptr);
          *bytes_allocated = alloc_size;
          *usable_size = alloc_size;
          break;
        }
        //kAllocatorTypeRegion: allocate the object in the Region Space by calling
        //AllocNonvirtual on the RegionSpace pointed to by the Heap member variable
        //region_space_. kAllocatorTypeRegionTLAB (the case after this one) works like
        //kAllocatorTypeTLAB, except that the thread-local buffer is carved out of the
        //Region Space.
    
        case kAllocatorTypeRegion: {
          DCHECK(region_space_ != nullptr);
          alloc_size = RoundUp(alloc_size, space::RegionSpace::kAlignment);
          ret = region_space_->AllocNonvirtual<false>(alloc_size, bytes_allocated, usable_size,
                                                      bytes_tl_bulk_allocated);
          break;
        }
        case kAllocatorTypeRegionTLAB: {
          DCHECK(region_space_ != nullptr);
          DCHECK_ALIGNED(alloc_size, space::RegionSpace::kAlignment);
          if (UNLIKELY(self->TlabSize() < alloc_size)) {
            if (space::RegionSpace::kRegionSize >= alloc_size) {
              // Non-large. Check OOME for a tlab.
              if (LIKELY(!IsOutOfMemoryOnAllocation<kGrow>(allocator_type, space::RegionSpace::kRegionSize))) {
                // Try to allocate a tlab.
                if (!region_space_->AllocNewTlab(self)) {
                  // Failed to allocate a tlab. Try non-tlab.
                  ret = region_space_->AllocNonvirtual<false>(alloc_size, bytes_allocated, usable_size,
                                                              bytes_tl_bulk_allocated);
                  return ret;
                }
                *bytes_tl_bulk_allocated = space::RegionSpace::kRegionSize;
                // Fall-through.
              } else {
                // Check OOME for a non-tlab allocation.
                if (!IsOutOfMemoryOnAllocation<kGrow>(allocator_type, alloc_size)) {
                  ret = region_space_->AllocNonvirtual<false>(alloc_size, bytes_allocated, usable_size,
                                                              bytes_tl_bulk_allocated);
                  return ret;
                } else {
                  // Neither tlab or non-tlab works. Give up.
                  return nullptr;
                }
              }
            } else {
              // Large. Check OOME.
              if (LIKELY(!IsOutOfMemoryOnAllocation<kGrow>(allocator_type, alloc_size))) {
                ret = region_space_->AllocNonvirtual<false>(alloc_size, bytes_allocated, usable_size,
                                                            bytes_tl_bulk_allocated);
                return ret;
              } else {
                return nullptr;
              }
            }
          } else {
            *bytes_tl_bulk_allocated = 0;  // Allocated in an existing buffer.
          }
          // The allocation can't fail.
          ret = self->AllocTlab(alloc_size);
          DCHECK(ret != nullptr);
          *bytes_allocated = alloc_size;
          *usable_size = alloc_size;
          break;
        }
        default: {
          LOG(FATAL) << "Invalid allocator type";
          ret = nullptr;
        }
      }
      return ret;
    }
    

    AllocateInternalWithGc

    mirror::Object* Heap::AllocateInternalWithGc(Thread* self,
                                                 AllocatorType allocator,
                                                 bool instrumented,
                                                 size_t alloc_size,
                                                 size_t* bytes_allocated,
                                                 size_t* usable_size,
                                                 size_t* bytes_tl_bulk_allocated,
                                                 mirror::Class** klass) {
                                                 
      bool was_default_allocator = allocator == GetCurrentAllocator();
      // Make sure there is no pending exception since we may need to throw an OOME.
      self->AssertNoPendingException();
      DCHECK(klass != nullptr);
      StackHandleScope<1> hs(self);
      HandleWrapper<mirror::Class> h(hs.NewHandleWrapper(klass));
      klass = nullptr;  // Invalidate for safety.
      // The allocation failed. If the GC is running, block until it completes, and then retry the
      // allocation.
      collector::GcType last_gc = WaitForGcToComplete(kGcCauseForAlloc, self);
      // If we were the default allocator but the allocator changed while we were suspended,
      // abort the allocation.
      if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
          (!instrumented && EntrypointsInstrumented())) {
        return nullptr;
      }
      if (last_gc != collector::kGcTypeNone) {
        // A GC was in progress and we blocked, retry allocation now that memory has been freed.
        mirror::Object* ptr = TryToAllocate<true, false>(self, allocator, alloc_size, bytes_allocated,
                                                         usable_size, bytes_tl_bulk_allocated);
        if (ptr != nullptr) {
          return ptr;
        }
      }
    
      collector::GcType tried_type = next_gc_type_;
      const bool gc_ran =
          CollectGarbageInternal(tried_type, kGcCauseForAlloc, false) != collector::kGcTypeNone;
      if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
          (!instrumented && EntrypointsInstrumented())) {
        return nullptr;
      }
      if (gc_ran) {
        mirror::Object* ptr = TryToAllocate<true, false>(self, allocator, alloc_size, bytes_allocated,
                                                         usable_size, bytes_tl_bulk_allocated);
        if (ptr != nullptr) {
          return ptr;
        }
      }
    
      // Loop through our different Gc types and try to Gc until we get enough free memory.
      for (collector::GcType gc_type : gc_plan_) {
        if (gc_type == tried_type) {
          continue;
        }
        // Attempt to run the collector, if we succeed, re-try the allocation.
        const bool plan_gc_ran =
            CollectGarbageInternal(gc_type, kGcCauseForAlloc, false) != collector::kGcTypeNone;
        if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
            (!instrumented && EntrypointsInstrumented())) {
          return nullptr;
        }
        if (plan_gc_ran) {
          // Did we free sufficient memory for the allocation to succeed?
          mirror::Object* ptr = TryToAllocate<true, false>(self, allocator, alloc_size, bytes_allocated,
                                                           usable_size, bytes_tl_bulk_allocated);
          if (ptr != nullptr) {
            return ptr;
          }
        }
      }
      // Allocations have failed after GCs;  this is an exceptional state.
      // Try harder, growing the heap if necessary.
      mirror::Object* ptr = TryToAllocate<true, true>(self, allocator, alloc_size, bytes_allocated,
                                                      usable_size, bytes_tl_bulk_allocated);
      if (ptr != nullptr) {
        return ptr;
      }
      // Most allocations should have succeeded by now, so the heap is really full, really fragmented,
      // or the requested size is really big. Do another GC, collecting SoftReferences this time. The
      // VM spec requires that all SoftReferences have been collected and cleared before throwing
      // OOME.
      VLOG(gc) << "Forcing collection of SoftReferences for " << PrettySize(alloc_size)
               << " allocation";
      // TODO: Run finalization, but this may cause more allocations to occur.
      // We don't need a WaitForGcToComplete here either.
      DCHECK(!gc_plan_.empty());
      CollectGarbageInternal(gc_plan_.back(), kGcCauseForAlloc, true);
      if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
          (!instrumented && EntrypointsInstrumented())) {
        return nullptr;
      }
      ptr = TryToAllocate<true, true>(self, allocator, alloc_size, bytes_allocated, usable_size,
                                      bytes_tl_bulk_allocated);
      if (ptr == nullptr) {
        const uint64_t current_time = NanoTime();
        switch (allocator) {
          case kAllocatorTypeRosAlloc:
            // Fall-through.
          case kAllocatorTypeDlMalloc: {
            if (use_homogeneous_space_compaction_for_oom_ &&
                current_time - last_time_homogeneous_space_compaction_by_oom_ >
                min_interval_homogeneous_space_compaction_by_oom_) {
              last_time_homogeneous_space_compaction_by_oom_ = current_time;
              HomogeneousSpaceCompactResult result = PerformHomogeneousSpaceCompact();
              // Thread suspension could have occurred.
              if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
                  (!instrumented && EntrypointsInstrumented())) {
                return nullptr;
              }
              switch (result) {
                case HomogeneousSpaceCompactResult::kSuccess:
                  // If the allocation succeeded, we delayed an oom.
                  ptr = TryToAllocate<true, true>(self, allocator, alloc_size, bytes_allocated,
                                                  usable_size, bytes_tl_bulk_allocated);
                  if (ptr != nullptr) {
                    count_delayed_oom_++;
                  }
                  break;
                case HomogeneousSpaceCompactResult::kErrorReject:
                  // Reject due to disabled moving GC.
                  break;
                case HomogeneousSpaceCompactResult::kErrorVMShuttingDown:
                  // Throw OOM by default.
                  break;
                default: {
                  UNIMPLEMENTED(FATAL) << "homogeneous space compaction result: "
                      << static_cast<size_t>(result);
                  UNREACHABLE();
                }
              }
              // Always print that we ran homogeneous space compation since this can cause jank.
              VLOG(heap) << "Ran heap homogeneous space compaction, "
                        << " requested defragmentation "
                        << count_requested_homogeneous_space_compaction_.LoadSequentiallyConsistent()
                        << " performed defragmentation "
                        << count_performed_homogeneous_space_compaction_.LoadSequentiallyConsistent()
                        << " ignored homogeneous space compaction "
                        << count_ignored_homogeneous_space_compaction_.LoadSequentiallyConsistent()
                        << " delayed count = "
                        << count_delayed_oom_.LoadSequentiallyConsistent();
            }
            break;
          }
          case kAllocatorTypeNonMoving: {
            // Try to transition the heap if the allocation failure was due to the space being full.
            if (!IsOutOfMemoryOnAllocation<false>(allocator, alloc_size)) {
              // If we aren't out of memory then the OOM was probably from the non moving space being
              // full. Attempt to disable compaction and turn the main space into a non moving space.
              DisableMovingGc();
              // Thread suspension could have occurred.
              if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
                  (!instrumented && EntrypointsInstrumented())) {
                return nullptr;
              }
              // If we are still a moving GC then something must have caused the transition to fail.
              if (IsMovingGc(collector_type_)) {
                MutexLock mu(self, *gc_complete_lock_);
                // If we couldn't disable moving GC, just throw OOME and return null.
                LOG(WARNING) << "Couldn't disable moving GC with disable GC count "
                             << disable_moving_gc_count_;
              } else {
                LOG(WARNING) << "Disabled moving GC due to the non moving space being full";
                ptr = TryToAllocate<true, true>(self, allocator, alloc_size, bytes_allocated,
                                                usable_size, bytes_tl_bulk_allocated);
              }
            }
            break;
          }
          default: {
            // Do nothing for others allocators.
          }
        }
      }
      // If the allocation hasn't succeeded by this point, throw an OOM error.
      if (ptr == nullptr) {
        ThrowOutOfMemoryError(self, alloc_size, allocator);
      }
      return ptr;
    }
    

    1. First check the current GC state; if a GC is in progress, wait until it finishes.

    2. Check whether the current allocator type has changed in the meantime; if it has, the allocation fails.

    3. If last_gc != collector::kGcTypeNone, a GC has just completed, so TryToAllocate can be called right away to retry the allocation.

    4. Call CollectGarbageInternal to run a garbage collection that does not force soft references to be cleared.

    5. If that GC ran, call TryToAllocate again to retry the allocation.

    6. Step through the GC types from weakest to strongest, collecting and retrying until enough memory has been freed for the allocation. TryToAllocate may be called several times during this process.

    Note: none of the allocation attempts above grow the heap.

    7. Grow the heap and try the allocation again, by calling TryToAllocate with the template parameter kGrow set to true.

    8. If that still has not succeeded, run one more GC; this time soft references are also cleared.

    9. Grow the heap and try the allocation again, again with the template parameter kGrow set to true.

    10. If that fails, the failure is handled according to the allocator type:

    • For kAllocatorTypeRosAlloc and kAllocatorTypeDlMalloc: if homogeneous space compaction on OOM is enabled and the time since the last such compaction exceeds the allowed minimum interval, call PerformHomogeneousSpaceCompact to compact the space. If compaction succeeds, call TryToAllocate one last time.

    • For kAllocatorTypeNonMoving: first check against the maximum heap size; if we are not actually out of memory, try to disable the moving GC and turn the main space into a non-moving space. If that succeeds, call TryToAllocate one last time.

    11. If all of the steps above fail, an OutOfMemoryError is finally thrown (a simplified sketch of this fallback ladder follows).
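
    Putting the eleven steps together, the fallback ladder looks roughly like the sketch below. It is a simplified model with every ART helper replaced by a trivial stub, so it only demonstrates the control flow; none of the function names here are the real ART APIs.

    // Hedged sketch: the retry ladder of AllocateInternalWithGc, control flow only.
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    static void* TryToAllocate(size_t size, bool /*grow*/) { return std::malloc(size); }
    static bool  WaitForGcToComplete() { return false; }             // No GC was running.
    static bool  RunGc(int /*strength*/, bool /*clear_soft_refs*/) { return true; }
    static void  ThrowOutOfMemoryError(size_t size) { std::fprintf(stderr, "OOM: %zu bytes\n", size); }

    static void* AllocateWithGcSketch(size_t size) {
      // Steps 1-3: if a GC was already in progress, wait for it and retry (heap not grown).
      if (WaitForGcToComplete()) {
        if (void* p = TryToAllocate(size, /*grow=*/false)) return p;
      }
      // Steps 4-6: run progressively stronger collections (e.g. sticky -> partial -> full),
      // retrying after each one; the heap is still not grown.
      for (int strength = 0; strength < 3; ++strength) {
        if (RunGc(strength, /*clear_soft_refs=*/false)) {
          if (void* p = TryToAllocate(size, /*grow=*/false)) return p;
        }
      }
      // Step 7: allow the heap to grow and retry.
      if (void* p = TryToAllocate(size, /*grow=*/true)) return p;
      // Steps 8-9: last-resort GC that also clears SoftReferences, then retry with growth.
      RunGc(2, /*clear_soft_refs=*/true);
      if (void* p = TryToAllocate(size, /*grow=*/true)) return p;
      // Step 10 (allocator-specific recovery such as space compaction) is omitted here.
      // Step 11: give up.
      ThrowOutOfMemoryError(size);
      return nullptr;
    }

    int main() {
      void* obj = AllocateWithGcSketch(64);
      std::free(obj);
      return 0;
    }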

    References:

    https://blog.csdn.net/melody157398/article/details/106394066/

    https://blog.csdn.net/u011578734/article/details/99692289

