filebeat+logstash+influxdb+grafana

Author: 骆的沙 | Published 2019-01-26 17:45

    Overview

    In a Kubernetes cluster, the time the scheduler spends on each phase of scheduling a pod is a good indicator of scheduler performance. I could not find an existing metric for this in the community, so this article uses the filebeat+logstash+influxdb+grafana stack to collect these timings and display them as metrics.

    Trace log lines in kube-scheduler.INFO
    I0117 18:08:43.224106   87811 trace.go:76] Trace[1863564834]: "Scheduling fat-app/xxxxxxxx-xxxxxxxx-3404-0" (started: 2019-01-17 18:08:43.106820332 +0800 CST m=+116911.416416886) (total time: 117.213043ms):
    Trace[1863564834]: [117.187µs] [117.187µs] Computing predicates
    Trace[1863564834]: [2.093583ms] [1.976396ms] Prioritizing
    Trace[1863564834]: [117.172797ms] [115.079214ms] Selecting host
    Trace[1863564834]: [117.213043ms] [40.246µs] END
    
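    The second bracketed value on each trace line is the incremental time spent in that stage, and these should sum to the reported total. A quick check with the sample above:

```python
# Incremental stage durations from the sample trace, in milliseconds.
stages_ms = {
    "Computing predicates": 117.187 / 1000,  # 117.187µs
    "Prioritizing": 1.976396,                # 1.976396ms
    "Selecting host": 115.079214,            # 115.079214ms
    "END": 40.246 / 1000,                    # 40.246µs
}
total_ms = sum(stages_ms.values())
print(round(total_ms, 6))  # 117.213043, matching "total time: 117.213043ms"
```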

    filebeat configuration

    filebeat.prospectors:
    - type: log
      paths:
        - /var/log/kubernetes/kube-scheduler.*.INFO.*
      include_lines: ['trace','Trace']
    # pattern: a regular expression
    # negate: true or false (default false); false merges lines that match pattern into the previous line, true merges lines that do NOT match pattern into the previous line
    #  match:  after or before — append to the end, or prepend to the beginning, of the previous line
      multiline:
        pattern: '^Trace\[[0-9]+\]'
        negate: false
        match: after
      fields:
        tag: scheduler  # add a tag
      fields_under_root: true
      scan_frequency: 10s
      ignore_older: 6h
      close_inactive: 5m
      close_removed: true
      clean_removed: true
      tail_files: false
    
    fields:
      zone: $zone_name  # add your own environment info
    fields_under_root: true
    
    output.logstash:
      enabled: true
      hosts:
        - $logstash_host:5044  # logstash address
    
    logging.level: info
    logging.metrics.enabled: false
    http.enabled: true
    http.host: localhost
    http.port: 5066
    
    setup.dashboards.enabled: false
    setup.template.enabled: false
    
    path.home: /usr/share/filebeat
    path.data: /var/lib/filebeat
    path.logs: /var/log/filebeat
    filebeat.registry_file: ${path.data}/registry
    max_procs: 2
    
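    With negate: false and match: after, every line matching the multiline pattern is appended to the preceding non-matching line, so each trace (header line plus its Trace[...] continuation lines) becomes a single event. A quick check of which sample lines the pattern matches:

```python
import re

# The multiline pattern from the filebeat config above.
pattern = re.compile(r'^Trace\[[0-9]+\]')

lines = [
    'I0117 18:08:43.224106 87811 trace.go:76] Trace[1863564834]: "Scheduling ..."',
    'Trace[1863564834]: [117.187µs] [117.187µs] Computing predicates',
    'Trace[1863564834]: [117.213043ms] [40.246µs] END',
]
# The header line does not match (it starts with the klog prefix),
# so the continuation lines are folded into it.
print([bool(pattern.match(l)) for l in lines])  # [False, True, True]
```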
    

    note:

    • When debugging, run filebeat -e -c filebeat.yml -d "publish" to confirm that the collected data is correct
    • If you see the following error, stop filebeat, delete /var/lib/filebeat/registry, and restart filebeat so it re-registers

    ERROR registrar/registrar.go:346 Writing of registry returned error: rename /var/lib/filebeat/registry.new /var/lib/filebeat/registry: no such file or directory. Continuing...

    Logstash installation and configuration

    install logstash

    • Download and install logstash
    $ wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.rpm
    $ rpm -ivh logstash-6.5.4.rpm
    
    • In logstash's config path, symlink the files from /etc/logstash
    $ cd /usr/share/logstash/config
    $ ln -s /etc/logstash/* .
    
    • Fix ownership of the logstash directories
    $ chown -R logstash:logstash /etc/logstash
    $ chown -R logstash:logstash /usr/share/logstash
    $ chown -R logstash:logstash /var/lib/logstash
    
    • Install the required plugins (details below)
    $ cd /usr/share/logstash/
    $ bin/logstash-plugin install logstash-output-influxdb
    
    • Check that the configuration file is valid
    $ bin/logstash -f /etc/logstash/logstash.conf --config.test_and_exit
    
    • Start
    $ bin/logstash -f /etc/logstash/logstash.conf --config.reload.automatic
    

    install-plugin

    • logstash-filter-grok
    • logstash-output-influxdb
      These plugins must be installed separately. If the host cannot reach the external network, you can install them from the mirror below:
      https://gems.ruby-china.com/

    (1) Use a reasonably recent RubyGems version, 2.6.x or above is recommended

    $ yum install -y gem
    $ gem update --system # this may require a proxy
    $ gem -v
      2.6.3
    

    (2) Switch to the China mirror and make sure gems.ruby-china.com is the only source

    $ gem sources --add https://gems.ruby-china.com/ --remove https://rubygems.org/
    $ gem sources -l
      https://gems.ruby-china.com
    

    (3) Install a single plugin

    $ gem install logstash-output-influxdb
    

    (4) Install all plugins listed in the Gemfile

    # add the desired plugins to the Gemfile
      gem 'logstash-output-influxdb', '~> 5.0', '>= 5.0.5'
    $ bin/logstash-plugin install --no-verify
    

    self-defined grok-pattern

    KUBESCHEDULER .*Scheduling %{NOTSPACE:namespace:tag}/%{NOTSPACE:podname:tag}\".*total time: %{NUMBER:total_time}%{NOTSPACE:total_mesurement}\).*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:computing_time}%{NOTSPACE:computing_mesurement}\]\s+Computing %{WORD:ComputingPredicates:tag}.*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:prioritizing_time}%{NOTSPACE:prioritizing_mesurement}\]\s+%{WORD:Prioritizing:tag}.*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:SelectingHost_time}%{NOTSPACE:SelectingHost_mesurement}\]\s+%{WORD:SelectingHost:tag}.*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:end_time}%{NOTSPACE:end_mesurement}\]\s+%{WORD:END:tag}
    
    KUBESCHEDULERSHORT .*Scheduling %{NOTSPACE:namespace:tag}/%{NOTSPACE:podname:tag}\".*total time: %{NUMBER:total_time}%{NOTSPACE:total_mesurement}\).*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:computing_time}%{NOTSPACE:computing_mesurement}\]\s+Computing %{WORD:ComputingPredicates:tag}.*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:end_time}%{NOTSPACE:end_mesurement}\]\s+%{WORD:END:tag}
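    A simplified Python analogue of the first few captures (namespace, pod name, total time) can be used to sanity-check the pattern against the merged sample event; the regex here is an illustration, not the exact grok expansion:

```python
import re

# Multiline-merged sample trace event (abbreviated).
event = ('Trace[1863564834]: "Scheduling fat-app/xxxxxxxx-xxxxxxxx-3404-0" '
         '(total time: 117.213043ms):')

# Rough equivalent of the pattern's leading captures; "mesurement"
# matches the field spelling used throughout the logstash config.
m = re.search(r'Scheduling (?P<namespace>\S+)/(?P<podname>\S+)"'
              r'.*total time: (?P<total_time>[\d.]+)(?P<total_mesurement>\S+)\)',
              event)
print(m.group('namespace'), m.group('podname'),
      m.group('total_time'), m.group('total_mesurement'))
# fat-app xxxxxxxx-xxxxxxxx-3404-0 117.213043 ms
```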
    

    logstash.conf configuration file

    input {
        beats {
            port => "5044"
            client_inactivity_timeout => "60" # default 60s
        }   
    }
    filter {
        grok {
            # requires the grok plugin; output is produced only when a pattern matches
            patterns_dir => ["/usr/share/logstash/config/pattern"] # self-defined grok patterns
            match => {
                       "message" => ["%{KUBESCHEDULER}", "%{KUBESCHEDULERSHORT}"]
                     }
            remove_field => ["message"]
    
        } 
        ruby {
        # arithmetic in logstash: normalize all durations to the same unit (µs)
            code => "
                total_unit = event.get('total_mesurement')
                computing_unit = event.get('computing_mesurement')
                prioritizing_unit = event.get('prioritizing_mesurement')
                selecting_unit = event.get('SelectingHost_mesurement')
                ending_unit = event.get('end_mesurement')
                if total_unit == 'ms'
                    event.set('total_time',(event.get('total_time').to_f*1000))
                elsif total_unit == 's'
                    event.set('total_time',(event.get('total_time').to_f*1000000))
                else
                    event.set('total_time',(event.get('total_time').to_f))
                end
                if computing_unit == 'ms'
                    event.set('computing_time',(event.get('computing_time').to_f*1000))
                elsif computing_unit == 's'
                    event.set('computing_time',(event.get('computing_time').to_f*1000000))
                else
                    event.set('computing_time',(event.get('computing_time').to_f))
                end
                if prioritizing_unit == 'ms'
                    event.set('prioritizing_time',(event.get('prioritizing_time').to_f*1000))
                elsif prioritizing_unit == 's'
                    event.set('prioritizing_time',(event.get('prioritizing_time').to_f*1000000))
                else
                    event.set('prioritizing_time',(event.get('prioritizing_time').to_f))
                end
                if selecting_unit == 'ms'
                    event.set('SelectingHost_time',(event.get('SelectingHost_time').to_f*1000))
                elsif selecting_unit == 's'
                    event.set('SelectingHost_time',(event.get('SelectingHost_time').to_f*1000000))
                else
                    event.set('SelectingHost_time',(event.get('SelectingHost_time').to_f))
                end
                if ending_unit == 'ms'
                    event.set('end_time',(event.get('end_time').to_f*1000))
                elsif ending_unit == 's'
                    event.set('end_time',(event.get('end_time').to_f*1000000))
                else
                    event.set('end_time',(event.get('end_time').to_f))
                end
               "   
            remove_field => ["total_mesurement","computing_mesurement","prioritizing_mesurement","SelectingHost_mesurement","end_mesurement","prospector","offset","tags","beat","source"]
        }   
    }
    
    output {
      influxdb {
        db => "$dbname"
        host => "$influxdb-host"
        port => "8086"
        user => "$username"
        password => "$yourpasswd"
        measurement => "kubeSchedulerTimeCost"
        coerce_values => {
          "total_time" => "float"
          "computing_time" => "float"
          "prioritizing_time" => "float"
          "SelectingHost_time" => "float"
          "end_time" => "float"
        }
        data_points => {
          "namespace" => "%{namespace}"
          "podname" => "%{podname}"
          "ComputingPredicates" => "%{ComputingPredicates}"
          "Prioritizing" => "%{Prioritizing}"
          "SelectingHost" => "%{SelectingHost}"
          "End" => "%{END}"
          "zone" => "%{zone}"
          "total_time" => "%{total_time}"
          "computing_time" => "%{computing_time}"
          "prioritizing_time" => "%{prioritizing_time}"
          "SelectingHost_time" => "%{SelectingHost_time}"
          "end_time" => "%{end_time}"
          "host" => "%{host}"
         }   
        send_as_tags => ["host","zone","namespace","podname","ComputingPredicates","Prioritizing","SelectingHost","End"]
      }
    #  elasticsearch {
    #    hosts => "$es-host:9200"
    #    manage_template => false
    #    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #    document_type => "%{[@metadata][type]}"
    #  }
      stdout { codec => rubydebug }
    }
    
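    The ruby filter above normalizes every captured duration to microseconds (ms × 1000, s × 1,000,000, µs left as-is). The same logic as a standalone sketch:

```python
def to_micros(value, unit):
    """Normalize a scheduler trace duration to microseconds.

    Mirrors the ruby filter above: the grok pattern captures values
    whose unit suffix is 'µs', 'ms', or 's'.
    """
    v = float(value)
    if unit == "ms":
        return v * 1000
    if unit == "s":
        return v * 1_000_000
    return v  # already µs

print(to_micros("2", "s"))  # 2000000.0
```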

    note:

    • You can enable automatic config reloading by running bin/logstash -f ./config/conf.d/scheduler.conf --config.reload.automatic
    • If the logstash service starts but the beats input port 5044 is not open, it is probably a permissions problem; fix the queue directory ownership with chown -R logstash:logstash /var/lib/logstash

    influxdb

    InfluxDB must have the corresponding port (8086) open.
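    The logstash influxdb output writes each event as an InfluxDB 1.x line-protocol point: entries in send_as_tags become tags, entries in coerce_values become float fields. A sketch of what one point for the kubeSchedulerTimeCost measurement looks like (tag and field values here are illustrative):

```python
# Illustrative line-protocol point; zone "zone-a" is a made-up example value.
tags = {"zone": "zone-a", "namespace": "fat-app",
        "podname": "xxxxxxxx-xxxxxxxx-3404-0"}
fields = {"total_time": 117213.043, "SelectingHost_time": 115079.214}

# Line protocol: measurement,tag=... field=... (tags sorted by key).
tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
line = f"kubeSchedulerTimeCost,{tag_str} {field_str}"
print(line)
```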

    Grafana dashboard

    You can see, per cluster, how long pods spend in each scheduling phase and when they were scheduled.


    (figure: scheduler.png — Grafana dashboard of scheduler phase timings)
