Troubleshooting an Atypical CPU Alert


Author: java欧阳丰 | Published 2019-08-13 18:53 · 22 reads

    Recently our production systems kept firing alerts for CPU usage above threshold; clearly something was wrong.
    After some digging I found the culprit, and realized this was a rather interesting, "atypical" CPU problem, so I'm writing it up here.
    Why atypical? In my experience a typical CPU spike comes from an infinite loop in the business code, or a slow RPC call blocking a large number of threads, and so on.
    This time, however, the CPU was being burned by GC. Interesting.
    Here's the investigation, step by step.

    Finding the threads hogging the CPU

    top

    The first step is to see which threads use the most CPU, via the top command:

    top -Hp $pid -b -n 1|sed -n "7,17p"
    
       PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           
     94349 app     20   0 11.2g 5.0g  12m S 15.0 32.1 215:03.69 java              
     94357 app     20   0 11.2g 5.0g  12m S 15.0 32.1  88:22.39 java              
     94352 app     20   0 11.2g 5.0g  12m S 13.1 32.1 215:05.71 java              
     94350 app     20   0 11.2g 5.0g  12m S 11.2 32.1 215:04.86 java              
     94351 app     20   0 11.2g 5.0g  12m S 11.2 32.1 215:04.99 java              
     94935 app     20   0 11.2g 5.0g  12m S 11.2 32.1  63:11.75 java              
     94926 app     20   0 11.2g 5.0g  12m S  9.4 32.1  63:10.58 java              
     94927 app     20   0 11.2g 5.0g  12m S  5.6 32.1  63:06.89 java              
     94932 app     20   0 11.2g 5.0g  12m S  5.6 32.1  63:12.65 java              
     94939 app     20   0 11.2g 5.0g  12m S  5.6 32.1  63:01.75 java  
    

    $pid is the process ID of our Java process. sed -n "7,17p" keeps lines 7 through 17: the first lines are top's summary header, so this leaves the column header plus the 10 most CPU-hungry threads of the process.
    The first column, PID, is the OS thread ID, which corresponds to a thread inside the JVM; once we locate that thread in a jstack dump we'll know who is misbehaving.
    Note, however, that top shows thread IDs in decimal while jstack's thread IDs (nid) are hexadecimal, so one more step is needed: convert the PIDs above to hex. Here I converted only the top 3 CPU consumers:

    [app@linux-v-l-02:/app/tmp/]$printf '%x\n' 94349
    1708d
    [app@linux-v-l-02:/app/tmp/]$printf '%x\n' 94357
    17095
    [app@linux-v-l-02:/app/tmp/]$printf '%x\n' 94352
    17090
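    The conversion can also be done in one pass for all three; a quick sketch (the TIDs are the ones from the top output above):

```shell
# convert the three busiest thread IDs from top (decimal)
# into the hex form used by jstack's nid field
for tid in 94349 94357 94352; do
  printf '%x\n' "$tid"
done
```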
    

    jstack

    Now that we know which thread IDs are burning CPU, let's find out what those threads actually are.

    First, dump all thread information in the JVM with jstack:

    [app@linux-v-l-02:/app/tmp/]jstack -l $pid >>/tmp/jstack.txt
    

    One thing worth mentioning: threads in the JVM come and go, and so do the entries in top, so if jstack and top are run separately the thread IDs may well fail to match up. It's best to script the two commands so they run together; that's what I did.
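    A minimal sketch of such a wrapper script (assuming jstack is on the PATH; pass the Java PID as the first argument):

```shell
#!/bin/sh
# capture top's per-thread view and a jstack dump as close
# together in time as possible, so thread IDs can be matched
pid="$1"
ts=$(date +%s)
top -Hp "$pid" -b -n 1 > "/tmp/top.$ts.txt"
jstack -l "$pid"       > "/tmp/jstack.$ts.txt"
echo "saved /tmp/top.$ts.txt and /tmp/jstack.$ts.txt"
```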

    Next, let's see what threads 1708d, 17095 and 17090 really are:

    [app@linux-v-l-02:/app/tmp/]$egrep "1708d|17095|17090" jstack.txt --color
    "Gang worker#0 (Parallel GC Threads)" os_prio=0 tid=0x00007f4d4c023000 nid=0x1708d runnable 
    "Gang worker#3 (Parallel GC Threads)" os_prio=0 tid=0x00007f4d4c028800 nid=0x17090 runnable 
    "G1 Concurrent Refinement Thread#0" os_prio=0 tid=0x00007f4d4c032000 nid=0x17095 runnable 
    

    The nid field above is the hexadecimal thread ID. Astonishingly, jstack shows that the most CPU-hungry threads are all GC threads.
    We do monitor full GCs, and since this application switched to G1 a full GC happened only about once a week, so we had always assumed the heap was healthy. Yet all the signs now said the JVM really was in trouble.

    The GC problem

    GC logs

    We had GC logging enabled all along. Opening the log, sure enough, there were a great many GC pauses, such as:

    2019-08-12T20:12:23.002+0800: 501598.612: [GC pause (G1 Humongous Allocation) (young) (initial-mark), 0.0907586 secs]
       [Parallel Time: 84.5 ms, GC Workers: 4]
          [GC Worker Start (ms): Min: 501598615.0, Avg: 501598615.0, Max: 501598615.0, Diff: 0.1]
          [Ext Root Scanning (ms): Min: 4.9, Avg: 5.0, Max: 5.0, Diff: 0.2, Sum: 19.8]
          [Update RS (ms): Min: 76.6, Avg: 76.7, Max: 76.7, Diff: 0.1, Sum: 306.7]
             [Processed Buffers: Min: 945, Avg: 967.0, Max: 1007, Diff: 62, Sum: 3868]
          [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
          [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
          [Object Copy (ms): Min: 2.4, Avg: 2.5, Max: 2.6, Diff: 0.2, Sum: 9.8]
          [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
             [Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 4]
          [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.3]
          [GC Worker Total (ms): Min: 84.2, Avg: 84.2, Max: 84.2, Diff: 0.1, Sum: 336.7]
          [GC Worker End (ms): Min: 501598699.2, Avg: 501598699.2, Max: 501598699.2, Diff: 0.0]
       [Code Root Fixup: 0.2 ms]
       [Code Root Purge: 0.0 ms]
       [Clear CT: 0.1 ms]
       [Other: 5.9 ms]
          [Choose CSet: 0.0 ms]
          [Ref Proc: 1.3 ms]
          [Ref Enq: 0.1 ms]
          [Redirty Cards: 0.1 ms]
          [Humongous Register: 0.1 ms]
          [Humongous Reclaim: 0.7 ms]
          [Free CSet: 0.2 ms]
       [Eden: 230.0M(1968.0M)->0.0B(1970.0M) Survivors: 8192.0K->8192.0K Heap: 1693.6M(4096.0M)->1082.1M(4096.0M)]
     [Times: user=0.34 sys=0.00, real=0.10 secs] 
    2019-08-12T20:12:23.094+0800: 501598.703: [GC concurrent-root-region-scan-start]
    2019-08-12T20:12:23.101+0800: 501598.711: [GC concurrent-root-region-scan-end, 0.0076353 secs]
    2019-08-12T20:12:23.101+0800: 501598.711: [GC concurrent-mark-start]
    2019-08-12T20:12:23.634+0800: 501599.243: [GC concurrent-mark-end, 0.5323465 secs]
    2019-08-12T20:12:23.639+0800: 501599.249: [GC remark 2019-08-12T20:12:23.639+0800: 501599.249: [Finalize Marking, 0.0019652 secs] 2019-08-12T20:12:23.641+0800: 501599.251: [GC ref-proc, 0.0027393 secs] 2019-08-12T20:12:23.644+0800: 501599.254: [Unloading, 0.0307159 secs], 0.0366784 secs]
     [Times: user=0.13 sys=0.00, real=0.04 secs] 
    2019-08-12T20:12:23.682+0800: 501599.291: [GC cleanup 1245M->1226M(4096M), 0.0041762 secs]
     [Times: user=0.02 sys=0.00, real=0.01 secs] 
    2019-08-12T20:12:23.687+0800: 501599.296: [GC concurrent-cleanup-start]
    2019-08-12T20:12:23.687+0800: 501599.296: [GC concurrent-cleanup-end, 0.0000487 secs]
    2019-08-12T20:12:30.022+0800: 501605.632: [GC pause (G1 Humongous Allocation) (young) (to-space exhausted), 0.3849037 secs]
       [Parallel Time: 165.7 ms, GC Workers: 4]
          [GC Worker Start (ms): Min: 501605635.2, Avg: 501605635.2, Max: 501605635.3, Diff: 0.1]
          [Ext Root Scanning (ms): Min: 3.5, Avg: 3.8, Max: 4.4, Diff: 0.9, Sum: 15.2]
          [Update RS (ms): Min: 135.5, Avg: 135.8, Max: 136.0, Diff: 0.5, Sum: 543.3]
             [Processed Buffers: Min: 1641, Avg: 1702.2, Max: 1772, Diff: 131, Sum: 6809]
          [Scan RS (ms): Min: 1.5, Avg: 1.6, Max: 1.6, Diff: 0.0, Sum: 6.2]
          [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
          [Object Copy (ms): Min: 24.1, Avg: 24.4, Max: 24.6, Diff: 0.4, Sum: 97.4]
          [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.1]
             [Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 4]
          [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
          [GC Worker Total (ms): Min: 165.6, Avg: 165.6, Max: 165.6, Diff: 0.0, Sum: 662.4]
          [GC Worker End (ms): Min: 501605800.8, Avg: 501605800.9, Max: 501605800.9, Diff: 0.0]
       [Code Root Fixup: 0.2 ms]
       [Code Root Purge: 0.0 ms]
       [Clear CT: 0.3 ms]
       [Other: 218.7 ms]
          [Evacuation Failure: 210.1 ms]
          [Choose CSet: 0.0 ms]
          [Ref Proc: 1.5 ms]
          [Ref Enq: 0.1 ms]
          [Redirty Cards: 0.3 ms]
          [Humongous Register: 0.2 ms]
          [Humongous Reclaim: 2.2 ms]
          [Free CSet: 0.2 ms]
       [Eden: 666.0M(1970.0M)->0.0B(204.0M) Survivors: 8192.0K->0.0B Heap: 2909.5M(4096.0M)->1712.4M(4096.0M)]
     [Times: user=1.44 sys=0.00, real=0.39 secs] 
    2019-08-12T20:12:32.225+0800: 501607.834: [GC pause (G1 Evacuation Pause) (mixed), 0.0800708 secs]
       [Parallel Time: 74.8 ms, GC Workers: 4]
          [GC Worker Start (ms): Min: 501607835.5, Avg: 501607835.6, Max: 501607835.6, Diff: 0.1]
          [Ext Root Scanning (ms): Min: 3.7, Avg: 4.0, Max: 4.4, Diff: 0.6, Sum: 16.2]
          [Update RS (ms): Min: 67.8, Avg: 68.0, Max: 68.1, Diff: 0.3, Sum: 272.0]
             [Processed Buffers: Min: 863, Avg: 899.8, Max: 938, Diff: 75, Sum: 3599]
    

    One annoyance with G1's logs is their sheer volume; they are dizzying to read. To make the discussion easier, I've stripped the less meaningful parts and condensed the log above into the following three segments:

    2019-08-12T20:12:23.002+0800: 501598.612: [GC pause (G1 Humongous Allocation) (young) (initial-mark), 0.0907586 secs]
       [Parallel Time: 84.5 ms, GC Workers: 4]
          [GC Worker Start (ms): Min: 501598615.0, Avg: 501598615.0, Max: 501598615.0, Diff: 0.1]
    ......
        
       [Eden: 230.0M(1968.0M)->0.0B(1970.0M) Survivors: 8192.0K->8192.0K Heap: 1693.6M(4096.0M)->1082.1M(4096.0M)]
     [Times: user=0.34 sys=0.00, real=0.10 secs] 
    2019-08-12T20:12:23.094+0800: 501598.703: [GC concurrent-root-region-scan-start]
    2019-08-12T20:12:23.101+0800: 501598.711: [GC concurrent-root-region-scan-end, 0.0076353 secs]
    2019-08-12T20:12:23.101+0800: 501598.711: [GC concurrent-mark-start]
    2019-08-12T20:12:23.634+0800: 501599.243: [GC concurrent-mark-end, 0.5323465 secs]
    2019-08-12T20:12:23.639+0800: 501599.249: [GC remark 2019-08-12T20:12:23.639+0800: 501599.249: [Finalize Marking, 0.0019652 secs] 2019-08-12T20:12:23.641+0800: 501599.251: [GC ref-proc, 0.0027393 secs] 2019-08-12T20:12:23.644+0800: 501599.254: [Unloading, 0.0307159 secs], 0.0366784 secs]
     [Times: user=0.13 sys=0.00, real=0.04 secs] 
    2019-08-12T20:12:23.682+0800: 501599.291: [GC cleanup 1245M->1226M(4096M), 0.0041762 secs]
     [Times: user=0.02 sys=0.00, real=0.01 secs] 
    2019-08-12T20:12:23.687+0800: 501599.296: [GC concurrent-cleanup-start]
    2019-08-12T20:12:23.687+0800: 501599.296: [GC concurrent-cleanup-end, 0.0000487 secs]
    2019-08-12T20:12:30.022+0800: 501605.632: [GC pause (G1 Humongous Allocation) (young) (to-space exhausted), 0.3849037 secs]
       [Parallel Time: 165.7 ms, GC Workers: 4]
          [GC Worker Start (ms): Min: 501605635.2, Avg: 501605635.2, Max: 501605635.3, Diff: 0.1]
    ......
    
       [Eden: 666.0M(1970.0M)->0.0B(204.0M) Survivors: 8192.0K->0.0B Heap: 2909.5M(4096.0M)->1712.4M(4096.0M)]
     [Times: user=1.44 sys=0.00, real=0.39 secs] 
    2019-08-12T20:12:32.225+0800: 501607.834: [GC pause (G1 Evacuation Pause) (mixed), 0.0800708 secs]
       [Parallel Time: 74.8 ms, GC Workers: 4]
          [GC Worker Start (ms): Min: 501607835.5, Avg: 501607835.6, Max: 501607835.6, Diff: 0.1]
    ......
    
    

    This condensed view is much clearer, and it already reveals at least three problems:

    • A mixed Evacuation Pause occurred
    • Frequent G1 Humongous Allocations led to to-space exhausted, meaning large objects were being allocated continuously
    • A GC pause reached 0.3849037 secs, which is the least tolerable part

    And one even more serious problem does not show in this excerpt: logs like these were extremely frequent! At peak, one appeared roughly every 2 seconds.

    jmap -histo

    From the GC logs we can basically conclude that the application keeps allocating large (humongous) objects.
    So what counts as a large object?
    Often it's a big local List or array. jmap -histo shows which classes occupy the most heap memory and how many instances each has.
    So let's start with jmap -histo $pid to see how objects are distributed in the heap:

    
    num   #instances  #bytes  class name
    --------------------------------------------
    1:       1120   1032796420   [B
    2:     838370    105246813   [C
    3:     117631     55348463   [I
    4:     352652     31033457   java.lang.reflect.Method
    5:     665505     13978410   java.lang.String
     6:     198567     12368412   [Ljava.lang.Object;
     7:     268941      9467465   java.util.HashMap$Node
     8:     268941      8064567   java.util.TreeMap$Entry
     9:     268941      7845665   java.lang.reflect.Field
     10:    268941      7754771   [Ljava.util.HashMap$Node;
    
    ....
    
    

    Generally, with a bit of luck, when the business code is at fault you can spot one of your own class names in the jmap -histo output.
    Unfortunately, not this time.
    Still, a sharp-eyed reader may already see that this heap has a real problem.
    Look at the top entry, [B (byte arrays): 1032796420 bytes (about 1GB) of heap, yet only some 1120 instances. A quick division (1032796420 / 1120 ≈ 922KB) shows each array averages nearly 1MB!
    These are clearly the large objects we're after. But "some byte arrays" isn't enough; we still don't know what the arrays contain, so the investigation has to continue.

    Why does roughly 1MB already count as large? Our heap is only 4GB, and G1 aims for at most 2048 regions by default, so each region is 2MB. When G1 allocates a young-generation object and finds it larger than half a region size, it treats it as a humongous object, hence the G1 Humongous Allocation pauses.
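    The arithmetic can be spelled out in a few lines; a small sketch (the 4GB heap and the 2048-region default come from our setup, the 1048600 and jmap figures from the dump above):

```java
public class HumongousMath {
    public static void main(String[] args) {
        long heapBytes = 4L * 1024 * 1024 * 1024;   // -Xmx4g
        long regionSize = heapBytes / 2048;         // G1's default region target: 2MB
        long humongousThreshold = regionSize / 2;   // half a region: 1MB
        long avgByteArray = 1032796420L / 1120;     // from jmap -histo: ~922KB on average
        System.out.println("region size         = " + regionSize);
        System.out.println("humongous threshold = " + humongousThreshold);
        System.out.println("avg byte[] size     = " + avgByteArray);
        System.out.println("1048600-byte array is humongous: "
                + (1048600 > humongousThreshold));
    }
}
```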

    jmap -dump:format=b

    The command jmap -dump:format=b,file=head.hprof $pid dumps the contents of the JVM heap. A raw dump is of little use on the command line; download it and analyze it locally with one of the many available tools, e.g. jvisualvm, JProfiler or MAT.
    I used jvisualvm: File ==> Load ==> select the head.hprof just downloaded, then click "Classes" and sort by size, which gives the view below.


    As the screenshot shows, byte arrays account for only 0.9% of instances but a full 30% of the memory, meaning every instance is large.
    Double-clicking the "byte[]" row shows the individual arrays: many of them are exactly 1048600 bytes, i.e. just over 1MB. That still doesn't reveal the arrays' contents, so I exported one of them to a local file, as shown below.


    Opening the exported file in Sublime Text gives the following.


    As you can see, the array's actual payload is only about 1KB (count the non-zero bytes at the front); the rest is meaningless zero values.
    We still can't tell which code produced the array, but the cause is now fairly certain:
    somewhere, code allocates a 1048600-byte array while the real data only needs about 1KB; the unfilled positions keep their default value of 0.
    To confirm the guess, simply print the array's content with:

     String str= new String (bytearr, "UTF-8");
     System.out.println("str = [" + str + "]");
    

    The printed result looks roughly like this (most of it omitted):

    str = [p� C0+org.apache.camel.impl.DefaultExchangeHolder�
    exchangeId�inFaultFlagoutFaultFlag exception�inBody�outBody    inHeaders
    outHeaders
    remoteOptRole�faceUrl�version�fromIfromUser�failFaceUrl
    .....
    
    

    Searching the code base for the keywords in that output finally exposed the real culprit:

           data = DefaultExchangeHolder.marshal(exchange, false);
       baos = new ByteArrayOutputStream(1048600);// the culprit
           hessianOut = new Hessian2Output(baos);
           hessianOut.startMessage();
           hessianOut.writeObject(data);
           hessianOut.completeMessage();
           hessianOut.flush();
           exchangeData = baos.toByteArray();
    

    The constructor of ByteArrayOutputStream:

    public ByteArrayOutputStream(int size) {
        if (size < 0) {
            throw new IllegalArgumentException("Negative initial size: "
                                               + size);
        }
        buf = new byte[size];
    }
    

    In short, before serializing with Hessian, the code allocated a 1MB byte array, producing a steady stream of large objects. Yet the array is only used as a buffer, and the buffer grows automatically when it's too small ("The buffer automatically grows as data is written to it."), so there was no need to make it anywhere near that big.
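    To illustrate why the preallocation was unnecessary, here is a minimal sketch (not the project's code): a ByteArrayOutputStream created with its default capacity absorbs a ~1KB payload by growing on demand:

```java
import java.io.ByteArrayOutputStream;

public class BaosGrowth {
    public static void main(String[] args) {
        // the no-arg constructor starts with a small (32-byte) internal buffer
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] payload = new byte[1024];         // ~1KB, like our real data
        baos.write(payload, 0, payload.length);  // buffer grows as needed
        System.out.println(baos.size());         // prints 1024
    }
}
```

    With the default constructor the buffer doubles as it fills, so for payloads around 1KB the allocation never comes close to humongous territory.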


        Original link: https://www.haomeiwen.com/subject/ddobjctx.html