Using ClickHouse as the Jaeger storage backend

Author: 偷油考拉 | Published 2024-02-21 16:55

    https://www.jaegertracing.io/docs/1.18/deployment/
    https://github.com/jaegertracing/jaeger-clickhouse

    I. Download the jaeger-clickhouse project

    git clone https://github.com/jaegertracing/jaeger-clickhouse.git
    

    II. Deploy ClickHouse

    1. Docker runtime environment

    sudo yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-engine
    
    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    
    sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    
    cat <<EOF > /etc/docker/daemon.json
    {
       "registry-mirrors": [
       "https://mirror.ccs.tencentyun.com"
      ]
    }
    EOF
    
    sudo systemctl enable --now docker
    

    2. Run ClickHouse in Docker

    mkdir -p /lvmdata/clickhouse/{logs,data}
    docker run -d -p 8123:8123 -p 9000:9000 -p 9009:9009 \
        -v /lvmdata/clickhouse/data:/var/lib/clickhouse/ \
        -v /lvmdata/clickhouse/logs:/var/log/clickhouse-server/ \
        --name ch --ulimit nofile=262144:262144 clickhouse/clickhouse-server
    
    [root@VM-201-2-centos ~]# echo 'SELECT version()' | curl 'http://localhost:8123/' --data-binary @-
    24.1.5.6
    
    

    By default, the server instance started above runs as the default user with no password.

    [root@VM-201-2-centos ~]# curl http://10.41.201.2:8123
    Ok.
    [root@VM-201-2-centos ~]# echo 'SELECT 1' | curl 'http://10.41.201.2:8123/' --data-binary @-
    1
    [root@VM-201-2-centos jaeger-clickhouse]# echo 'CREATE DATABASE IF NOT EXISTS jaeger;' | curl 'http://10.41.201.2:8123/' --data-binary @-
    [root@VM-201-2-centos jaeger-clickhouse]# echo 'show databases;' | curl 'http://10.41.201.2:8123/' --data-binary @-
    INFORMATION_SCHEMA
    default
    information_schema
    jaeger
    system
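
    Besides the HTTP interface, queries can also go through clickhouse-client, which ships inside the official image. The sketch below assumes the container is named ch as in the docker run above; it only prints the command unless RUN_CH=1 is set, so it is safe to paste.

```shell
# Query ClickHouse via the clickhouse-client bundled in the official image.
# Assumption: the container started above is named "ch".
CH_CONTAINER=ch
CH_SQL='SELECT version()'
if [ "${RUN_CH:-0}" = "1" ]; then
  docker exec -i "$CH_CONTAINER" clickhouse-client --query "$CH_SQL"
else
  # Dry run: show the command instead of executing it.
  echo "docker exec -i $CH_CONTAINER clickhouse-client --query \"$CH_SQL\""
fi
```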
    

    3. Create the database while starting the container

    With the default configuration this step can be skipped: ClickHouse initializes itself with username default, an empty password, and the database default. The following merely demonstrates creating them explicitly.

    https://clickhouse.com/docs/en/interfaces/http

    docker run -d -p 8123:8123 -p 9000:9000 -p 9009:9009 \
        -e TZ='Asia/Shanghai' \
        -e CLICKHOUSE_DB=jaeger \
        -e CLICKHOUSE_USER=jaeger  \
        -e CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1 \
        -e CLICKHOUSE_PASSWORD=jaeger \
        -v /lvmdata/clickhouse/data:/var/lib/clickhouse/ \
        -v /lvmdata/clickhouse/logs:/var/log/clickhouse-server/ \
        --name ch --ulimit nofile=262144:262144 clickhouse/clickhouse-server:24.1.3
    
    [root@VM-201-2-centos clickhouse]# docker exec -it ch date
    Wed 21 Feb 2024 03:39:39 PM CST
    
    [root@VM-201-2-centos clickhouse]# echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @- -ujaeger:jaeger
    1
    [root@VM-201-2-centos clickhouse]# echo 'show databases' | curl 'http://localhost:8123/' --data-binary @- -ujaeger:jaeger
    INFORMATION_SCHEMA
    default
    information_schema
    jaeger
    system
    

    III. Configuration file reference

    Options in config.yaml:

    address: ClickHouse server address, e.g. some-clickhouse-server:9000.
    init_sql_scripts_dir: Directory holding startup SQL scripts (.sql files), mainly used for integration tests. Depending on init_tables, the scripts can replace or supplement the automatically created span storage tables; when init_tables is enabled, the scripts in this directory run first.
    init_tables: Whether to create the tables in ClickHouse automatically. Defaults to enabled when init_sql_scripts_dir is empty, and to disabled when it is set.
    max_span_count: Maximum number of spans that may be pending for write at once; new spans beyond this limit are dropped, which protects memory when writes to ClickHouse run into trouble. Watch the jaeger_clickhouse_discarded_spans metric to track drops. 0 means unlimited. Default 10_000_000.
    batch_write_size: Batch write size. Default 10_000.
    batch_flush_interval: Interval between batch flushes. Default 5s.
    encoding: Data encoding, json or protobuf. Default json.
    ca_file: Path to the TLS CA certificate.
    username: Username for connecting to ClickHouse. Default "default".
    password: Password for connecting to ClickHouse.
    database: ClickHouse database name; the database must be created manually before Jaeger starts. Default "default".
    tenant: Tenant name; when set, a tenant column with this value is added to the tables. Default empty. See guide-multitenancy.md.
    metrics_endpoint: Endpoint serving Prometheus metrics, e.g. localhost:9090. Default localhost:9090.
    replication: Whether to use SQL scripts that support replication and sharding. Replication can only be enabled on databases using the Atomic engine. Default false.
    spans_table: Span table. Default jaeger_spans_local, or jaeger_spans when replication is enabled.
    spans_index_table: Span index table. Default jaeger_index_local, or jaeger_index when replication is enabled.
    operations_table: Operations table. Default jaeger_operations_local, or jaeger_operations when replication is enabled.
    ttl: Data retention in days. 0 means no TTL. Default 0.
    max_num_spans: Maximum number of spans fetched per trace. 0 means unlimited. Default 0.
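
    Putting the options together, a small config.yaml with explicit batching and a retention period might look like this (a sketch; the address, credentials, and the 14-day ttl are illustrative values, not recommendations):

```yaml
address: some-clickhouse-server:9000
username: default
password: ""
database: jaeger
encoding: json
batch_write_size: 10000
batch_flush_interval: 5s
ttl: 14
metrics_endpoint: localhost:9090
```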

    IV. Building and configuring the jaeger-clickhouse plugin

    https://www.jaegertracing.io/docs/next-release/deployment/#storage-plugin
    https://github.com/jaegertracing/jaeger/tree/main/plugin/storage/grpc
    https://github.com/jaegertracing/jaeger-clickhouse

    1. Edit the configuration file config.yaml

    address: 127.0.0.1:9000
    username: jaeger
    password: jaeger
    database: jaeger
    
    address: 127.0.0.1:9000
    init_sql_scripts_dir:
    init_tables:
    max_span_count:
    batch_write_size:
    batch_flush_interval:
    encoding:
    ca_file:
    username: jaeger
    password: jaeger
    database: jaeger
    tenant:
    metrics_endpoint: localhost:9090
    replication:
    spans_table:
    spans_index_table:
    operations_table:
    ttl:
    max_num_spans:
    

    2. Build the plugin

    cd jaeger-clickhouse
    yum install golang -y
    go env -w GOPROXY="https://goproxy.cn,direct"
    make build
    

    After the build completes, the jaeger-clickhouse-linux-amd64 binary is produced in the project directory.

    V. Start the service (binary)

    Run the following in the jaeger-clickhouse directory:

     SPAN_STORAGE_TYPE=grpc-plugin /root/jaeger-1.54.0-linux-amd64/jaeger-all-in-one \
     --query.ui-config=jaeger-ui.json \
     --grpc-storage-plugin.binary=./jaeger-clickhouse-linux-amd64 \
     --grpc-storage-plugin.configuration-file=config.yaml \
     --grpc-storage-plugin.log-level=debug
    

    The output looks like this:

    [root@VM-201-2-centos jaeger-clickhouse]# SPAN_STORAGE_TYPE=grpc-plugin /root/jaeger-1.54.0-linux-amd64/jaeger-all-in-one \
     --query.ui-config=jaeger-ui.json \
     --grpc-storage-plugin.binary=./jaeger-clickhouse-linux-amd64 \
     --grpc-storage-plugin.configuration-file=config.yaml \
     --grpc-storage-plugin.log-level=debug
    2024/02/22 16:20:48 maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
    2024/02/22 16:20:48 application version: git-commit=a614bb9ab161020b17917452e7d9680819622480, git-version=v1.54.0, build-date=2024-02-06T23:10:30Z
    {"level":"info","ts":1708590048.2264655,"caller":"flags/service.go:119","msg":"Mounting metrics handler on admin server","route":"/metrics"}
    {"level":"info","ts":1708590048.2293284,"caller":"flags/service.go:125","msg":"Mounting expvar handler on admin server","route":"/debug/vars"}
    {"level":"info","ts":1708590048.2300792,"caller":"flags/admin.go:130","msg":"Mounting health check on admin server","route":"/"}
    {"level":"info","ts":1708590048.230168,"caller":"flags/admin.go:144","msg":"Starting admin HTTP server","http-addr":":14269"}
    {"level":"info","ts":1708590048.2302058,"caller":"flags/admin.go:122","msg":"Admin server started","http.host-port":"[::]:14269","health-status":"unavailable"}
    {"level":"info","ts":1708590048.2303848,"caller":"grpc@v1.61.0/clientconn.go:429","msg":"[core][Channel #1] Channel created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2318678,"caller":"grpc@v1.61.0/clientconn.go:1724","msg":"[core][Channel #1] original dial target is: \"localhost:4317\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2319472,"caller":"grpc@v1.61.0/clientconn.go:1731","msg":"[core][Channel #1] parsed dial target is: resolver.Target{URL:url.URL{Scheme:\"localhost\", Opaque:\"4317\", User:(*url.Userinfo)(nil), Host:\"\", Path:\"\", RawPath:\"\", OmitHost:false, ForceQuery:false, RawQuery:\"\", Fragment:\"\", RawFragment:\"\"}}","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2320015,"caller":"grpc@v1.61.0/clientconn.go:1745","msg":"[core][Channel #1] fallback to scheme \"passthrough\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2320445,"caller":"grpc@v1.61.0/clientconn.go:1753","msg":"[core][Channel #1] parsed dial target is: passthrough:///localhost:4317","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2320836,"caller":"grpc@v1.61.0/clientconn.go:1874","msg":"[core][Channel #1] Channel authority set to \"localhost:4317\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2335765,"caller":"grpc@v1.61.0/resolver_wrapper.go:196","msg":"[core][Channel #1] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"localhost:4317\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"localhost:4317\",\n          \"ServerName\": \"\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.234026,"caller":"grpc@v1.61.0/balancer_wrapper.go:161","msg":"[core][Channel #1] Channel switches to new LB policy \"pick_first\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.234117,"caller":"grpc@v1.61.0/balancer_wrapper.go:213","msg":"[core][Channel #1 SubChannel #2] Subchannel created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2341633,"caller":"grpc@v1.61.0/clientconn.go:532","msg":"[core][Channel #1] Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.234216,"caller":"grpc@v1.61.0/clientconn.go:335","msg":"[core][Channel #1] Channel exiting idle mode","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2342336,"caller":"grpc@v1.61.0/clientconn.go:1223","msg":"[core][Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2343643,"caller":"grpc@v1.61.0/clientconn.go:1338","msg":"[core][Channel #1 SubChannel #2] Subchannel picks a new address \"localhost:4317\" to connect","system":"grpc","grpc_log":true}
    {"level":"warn","ts":1708590048.237665,"caller":"grpc@v1.61.0/clientconn.go:1400","msg":"[core][Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"localhost:4317\", ServerName: \"localhost:4317\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp [::1]:4317: connect: connection refused\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.237798,"caller":"grpc@v1.61.0/clientconn.go:1225","msg":"[core][Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp [::1]:4317: connect: connection refused\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.2378736,"caller":"grpc@v1.61.0/clientconn.go:532","msg":"[core][Channel #1] Channel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
    2024/02/22 16:20:48 (deprecated, will be removed after 2024-03-01) using sidecar model of grpc-plugin storage, please upgrade to 'reomte' gRPC storage. https://github.com/jaegertracing/jaeger/issues/4647
    2024-02-22T16:20:48.239+0800 [DEBUG] starting plugin: path=./jaeger-clickhouse-linux-amd64 args=["./jaeger-clickhouse-linux-amd64", "--config", "config.yaml"]
    2024-02-22T16:20:48.239+0800 [DEBUG] plugin started: path=./jaeger-clickhouse-linux-amd64 pid=1087055
    2024-02-22T16:20:48.240+0800 [DEBUG] waiting for RPC address: plugin=./jaeger-clickhouse-linux-amd64
    2024-02-22T16:20:48.292+0800 [DEBUG] jaeger-clickhouse-linux-amd64: Running SQL statement: @module=jaeger-clickhouse
      statement=
      | CREATE TABLE IF NOT EXISTS jaeger_index_local
      | 
      | (
      |     timestamp  DateTime CODEC (Delta, ZSTD(1)),
      |     traceID    String CODEC (ZSTD(1)),
      |     service    LowCardinality(String) CODEC (ZSTD(1)),
      |     operation  LowCardinality(String) CODEC (ZSTD(1)),
      |     durationUs UInt64 CODEC (ZSTD(1)),
      |     tags Nested
      |     (
      |         key LowCardinality(String),
      |         value String
      |     ) CODEC (ZSTD(1)),
      |     INDEX idx_tag_keys tags.key TYPE bloom_filter(0.01) GRANULARITY 64,
      |     INDEX idx_duration durationUs TYPE minmax GRANULARITY 1
      | ) ENGINE MergeTree()
      |     
      |     PARTITION BY (
      |         toDate(timestamp)
      |     )
      |     ORDER BY (service, -toUnixTimestamp(timestamp))
      |     SETTINGS index_granularity = 1024
       timestamp="2024-02-22T16:20:48.292+0800"
    2024-02-22T16:20:48.431+0800 [DEBUG] jaeger-clickhouse-linux-amd64: Running SQL statement: @module=jaeger-clickhouse
      statement=
      | CREATE MATERIALIZED VIEW IF NOT EXISTS jaeger_operations_local
      | 
      |     ENGINE SummingMergeTree
      |     
      |     PARTITION BY (
      |         toYYYYMM(date)
      |     )
      |     ORDER BY (
      |         date,
      |         service,
      |         operation
      |     )
      |     SETTINGS index_granularity = 32
      |     POPULATE
      | AS SELECT
      |     toDate(timestamp) AS date,
      |     service,
      |     operation,
      |     count() AS count,
      |     if(
      |         has(tags.key, 'span.kind'),
      |         tags.value[indexOf(tags.key, 'span.kind')],
      |         ''
      |     ) AS spankind
      | FROM jaeger.jaeger_index_local
      | GROUP BY
      |     date,
      |     service,
      |     operation,
      |     tags.key,
      |     tags.value
       timestamp="2024-02-22T16:20:48.431+0800"
    2024-02-22T16:20:48.484+0800 [DEBUG] jaeger-clickhouse-linux-amd64: Running SQL statement: @module=jaeger-clickhouse
      statement=
      | CREATE TABLE IF NOT EXISTS jaeger_spans_local
      | 
      | (
      |     timestamp DateTime CODEC (Delta, ZSTD(1)),
      |     traceID   String CODEC (ZSTD(1)),
      |     model     String CODEC (ZSTD(3))
      | ) ENGINE MergeTree()
      |     
      |     PARTITION BY (
      |         toDate(timestamp)
      |     )
      |     ORDER BY traceID
      |     SETTINGS index_granularity = 1024
       timestamp="2024-02-22T16:20:48.484+0800"
    2024-02-22T16:20:48.491+0800 [DEBUG] jaeger-clickhouse-linux-amd64: Running SQL statement: @module=jaeger-clickhouse
      statement=
      | CREATE TABLE IF NOT EXISTS jaeger_spans_archive_local
      | 
      | (
      |     timestamp DateTime CODEC (Delta, ZSTD(1)),
      |     traceID   String CODEC (ZSTD(1)),
      |     model     String CODEC (ZSTD(3))
      | ) ENGINE MergeTree()
      |     
      |     PARTITION BY (
      |         toYYYYMM(timestamp)
      |     )
      |     ORDER BY traceID
      |     SETTINGS index_granularity = 1024
       timestamp="2024-02-22T16:20:48.490+0800"
    2024-02-22T16:20:48.498+0800 [DEBUG] using plugin: version=1
    {"level":"info","ts":1708590048.498503,"caller":"grpc@v1.61.0/clientconn.go:429","msg":"[core][Channel #3] Channel created","system":"grpc","grpc_log":true}
    2024-02-22T16:20:48.498+0800 [DEBUG] jaeger-clickhouse-linux-amd64: plugin address: network=unix address=/tmp/plugin664906769 timestamp="2024-02-22T16:20:48.498+0800"
    {"level":"info","ts":1708590048.498544,"caller":"grpc@v1.61.0/clientconn.go:1724","msg":"[core][Channel #3] original dial target is: \"unused\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.4986176,"caller":"grpc@v1.61.0/clientconn.go:1731","msg":"[core][Channel #3] parsed dial target is: resolver.Target{URL:url.URL{Scheme:\"\", Opaque:\"\", User:(*url.Userinfo)(nil), Host:\"\", Path:\"unused\", RawPath:\"\", OmitHost:false, ForceQuery:false, RawQuery:\"\", Fragment:\"\", RawFragment:\"\"}}","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.498634,"caller":"grpc@v1.61.0/clientconn.go:1745","msg":"[core][Channel #3] fallback to scheme \"passthrough\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.498649,"caller":"grpc@v1.61.0/clientconn.go:1753","msg":"[core][Channel #3] parsed dial target is: passthrough:///unused","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.4986608,"caller":"grpc@v1.61.0/clientconn.go:1874","msg":"[core][Channel #3] Channel authority set to \"unused\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.4994872,"caller":"grpc@v1.61.0/resolver_wrapper.go:196","msg":"[core][Channel #3] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"unused\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"unused\",\n          \"ServerName\": \"\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.4995494,"caller":"grpc@v1.61.0/balancer_wrapper.go:161","msg":"[core][Channel #3] Channel switches to new LB policy \"pick_first\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.4995987,"caller":"grpc@v1.61.0/balancer_wrapper.go:213","msg":"[core][Channel #3 SubChannel #4] Subchannel created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.499647,"caller":"grpc@v1.61.0/clientconn.go:532","msg":"[core][Channel #3] Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.4996738,"caller":"grpc@v1.61.0/clientconn.go:335","msg":"[core][Channel #3] Channel exiting idle mode","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.4997873,"caller":"grpc@v1.61.0/clientconn.go:1223","msg":"[core][Channel #3 SubChannel #4] Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.4998105,"caller":"grpc@v1.61.0/clientconn.go:1338","msg":"[core][Channel #3 SubChannel #4] Subchannel picks a new address \"unused\" to connect","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5013008,"caller":"grpc@v1.61.0/clientconn.go:1223","msg":"[core][Channel #3 SubChannel #4] Subchannel Connectivity change to READY","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.501366,"caller":"grpc@v1.61.0/clientconn.go:532","msg":"[core][Channel #3] Channel Connectivity change to READY","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5029986,"caller":"grpc/factory.go:96","msg":"External plugin storage configuration","configuration":{"PluginBinary":"./jaeger-clickhouse-linux-amd64","PluginConfigurationFile":"config.yaml","PluginLogLevel":"debug","RemoteServerAddr":"","RemoteTLS":{"Enabled":false,"CAPath":"","CertPath":"","KeyPath":"","ServerName":"","ClientCAPath":"","CipherSuites":null,"MinVersion":"","MaxVersion":"","SkipHostVerify":false,"ReloadInterval":0},"RemoteConnectTimeout":5000000000,"TenancyOpts":{"Enabled":false,"Header":"x-tenant","Tenants":[]}}}
    {"level":"info","ts":1708590048.5040607,"caller":"static/strategy_store.go:210","msg":"No sampling strategies provided or URL is unavailable, using defaults"}
    {"level":"info","ts":1708590048.5140727,"caller":"grpc@v1.61.0/server.go:681","msg":"[core][Server #6] Server created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5142136,"caller":"server/grpc.go:104","msg":"Starting jaeger-collector gRPC server","grpc.host-port":"[::]:14250"}
    {"level":"info","ts":1708590048.5142415,"caller":"server/http.go:56","msg":"Starting jaeger-collector HTTP server","http host-port":":14268"}
    {"level":"info","ts":1708590048.5143456,"caller":"grpc@v1.61.0/server.go:881","msg":"[core][Server #6 ListenSocket #7] ListenSocket created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5145783,"caller":"app/collector.go:146","msg":"Not listening for Zipkin HTTP traffic, port not configured"}
    {"level":"info","ts":1708590048.5146635,"caller":"handler/otlp_receiver.go:77","msg":"OTLP receiver status change","status":"StatusStarting"}
    {"level":"warn","ts":1708590048.5152948,"caller":"internal@v0.93.0/warning.go:40","msg":"Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks","documentation":"https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
    {"level":"info","ts":1708590048.515404,"caller":"grpc@v1.61.0/server.go:681","msg":"[core][Server #8] Server created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5154247,"caller":"otlpreceiver@v0.93.0/otlp.go:102","msg":"Starting GRPC server","endpoint":"0.0.0.0:4317"}
    {"level":"warn","ts":1708590048.515577,"caller":"internal@v0.93.0/warning.go:40","msg":"Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks","documentation":"https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
    {"level":"info","ts":1708590048.5157154,"caller":"grpc@v1.61.0/server.go:881","msg":"[core][Server #8 ListenSocket #9] ListenSocket created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5162594,"caller":"otlpreceiver@v0.93.0/otlp.go:152","msg":"Starting HTTP server","endpoint":"0.0.0.0:4318"}
    {"level":"info","ts":1708590048.5163774,"caller":"grpc/builder.go:74","msg":"Agent requested insecure grpc connection to collector(s)"}
    {"level":"info","ts":1708590048.5164711,"caller":"grpc@v1.61.0/clientconn.go:429","msg":"[core][Channel #10] Channel created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5165427,"caller":"grpc@v1.61.0/clientconn.go:1724","msg":"[core][Channel #10] original dial target is: \"localhost:14250\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5166228,"caller":"grpc@v1.61.0/clientconn.go:1731","msg":"[core][Channel #10] parsed dial target is: resolver.Target{URL:url.URL{Scheme:\"localhost\", Opaque:\"14250\", User:(*url.Userinfo)(nil), Host:\"\", Path:\"\", RawPath:\"\", OmitHost:false, ForceQuery:false, RawQuery:\"\", Fragment:\"\", RawFragment:\"\"}}","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5166798,"caller":"grpc@v1.61.0/clientconn.go:1745","msg":"[core][Channel #10] fallback to scheme \"passthrough\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5167496,"caller":"grpc@v1.61.0/clientconn.go:1753","msg":"[core][Channel #10] parsed dial target is: passthrough:///localhost:14250","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5168161,"caller":"grpc@v1.61.0/clientconn.go:1874","msg":"[core][Channel #10] Channel authority set to \"localhost:14250\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5169356,"caller":"grpc@v1.61.0/resolver_wrapper.go:196","msg":"[core][Channel #10] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"localhost:14250\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"localhost:14250\",\n          \"ServerName\": \"\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.517038,"caller":"grpc@v1.61.0/balancer_wrapper.go:161","msg":"[core][Channel #10] Channel switches to new LB policy \"round_robin\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5171227,"caller":"grpc@v1.61.0/balancer_wrapper.go:213","msg":"[core][Channel #10 SubChannel #11] Subchannel created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5171983,"caller":"base/balancer.go:182","msg":"[roundrobin]roundrobinPicker: Build called with info: {map[]}","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5172563,"caller":"grpc@v1.61.0/clientconn.go:532","msg":"[core][Channel #10] Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.517287,"caller":"grpc@v1.61.0/clientconn.go:1223","msg":"[core][Channel #10 SubChannel #11] Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5173216,"caller":"grpc@v1.61.0/clientconn.go:335","msg":"[core][Channel #10] Channel exiting idle mode","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5173326,"caller":"grpc@v1.61.0/clientconn.go:1338","msg":"[core][Channel #10 SubChannel #11] Subchannel picks a new address \"localhost:14250\" to connect","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5185287,"caller":"grpc@v1.61.0/clientconn.go:1223","msg":"[core][Channel #10 SubChannel #11] Subchannel Connectivity change to READY","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5185935,"caller":"base/balancer.go:182","msg":"[roundrobin]roundrobinPicker: Build called with info: {map[SubConn(id:11):{{Addr: \"localhost:14250\", ServerName: \"\", }}]}","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5186214,"caller":"grpc@v1.61.0/clientconn.go:532","msg":"[core][Channel #10] Channel Connectivity change to READY","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.518698,"caller":"grpc/builder.go:115","msg":"Checking connection to collector"}
    {"level":"info","ts":1708590048.518712,"caller":"grpc/builder.go:131","msg":"Agent collector connection state change","dialTarget":"localhost:14250","status":"READY"}
    {"level":"info","ts":1708590048.5192022,"caller":"all-in-one/main.go:263","msg":"Starting agent"}
    {"level":"info","ts":1708590048.5199,"caller":"app/agent.go:69","msg":"Starting jaeger-agent HTTP server","http-port":5778}
    {"level":"info","ts":1708590048.5205953,"caller":"grpc@v1.61.0/server.go:681","msg":"[core][Server #14] Server created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5221837,"caller":"app/static_handler.go:107","msg":"Using UI configuration","path":"jaeger-ui.json"}
    {"level":"info","ts":1708590048.523758,"caller":"app/server.go:220","msg":"Query server started","http_addr":"[::]:16686","grpc_addr":"[::]:16685"}
    {"level":"info","ts":1708590048.5238283,"caller":"healthcheck/handler.go:129","msg":"Health Check state change","status":"ready"}
    {"level":"info","ts":1708590048.5241168,"caller":"app/server.go:300","msg":"Starting GRPC server","port":16685,"addr":":16685"}
    {"level":"info","ts":1708590048.5241666,"caller":"grpc@v1.61.0/server.go:881","msg":"[core][Server #14 ListenSocket #15] ListenSocket created","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590048.5242453,"caller":"app/server.go:284","msg":"Starting HTTP server","port":16686,"addr":":16686"}
    {"level":"info","ts":1708590049.23799,"caller":"grpc@v1.61.0/clientconn.go:1225","msg":"[core][Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp [::1]:4317: connect: connection refused\"","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590049.2380733,"caller":"grpc@v1.61.0/clientconn.go:1223","msg":"[core][Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590049.2380922,"caller":"grpc@v1.61.0/clientconn.go:1338","msg":"[core][Channel #1 SubChannel #2] Subchannel picks a new address \"localhost:4317\" to connect","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590049.2388952,"caller":"grpc@v1.61.0/clientconn.go:1223","msg":"[core][Channel #1 SubChannel #2] Subchannel Connectivity change to READY","system":"grpc","grpc_log":true}
    {"level":"info","ts":1708590049.2389312,"caller":"grpc@v1.61.0/clientconn.go:532","msg":"[core][Channel #1] Channel Connectivity change to READY","system":"grpc","grpc_log":true}
    2024-02-22T16:20:58.499+0800 [DEBUG] jaeger-clickhouse-linux-amd64: Flush due to timer: @module=jaeger-clickhouse timestamp="2024-02-22T16:20:58.499+0800"
    2024-02-22T16:20:58.499+0800 [DEBUG] jaeger-clickhouse-linux-amd64: Writing spans: size=4 @module=jaeger-clickhouse timestamp="2024-02-22T16:20:58.499+0800"
    2024-02-22T16:21:53.526+0800 [DEBUG] jaeger-clickhouse-linux-amd64: Flush due to timer: @module=jaeger-clickhouse timestamp="2024-02-22T16:21:53.525+0800"
    2024-02-22T16:21:53.526+0800 [DEBUG] jaeger-clickhouse-linux-amd64: Writing spans: @module=jaeger-clickhouse size=1 timestamp="2024-02-22T16:21:53.526+0800"
    
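    Once the plugin starts cleanly, the same command line can be kept running beyond the current shell session with a systemd unit (a sketch; the paths match the example above, and the WorkingDirectory assumes the repository was cloned to /root/jaeger-clickhouse, so adjust both for your host):

```ini
[Unit]
Description=Jaeger all-in-one with the jaeger-clickhouse storage plugin
After=network-online.target

[Service]
Environment=SPAN_STORAGE_TYPE=grpc-plugin
WorkingDirectory=/root/jaeger-clickhouse
ExecStart=/root/jaeger-1.54.0-linux-amd64/jaeger-all-in-one \
    --query.ui-config=jaeger-ui.json \
    --grpc-storage-plugin.binary=./jaeger-clickhouse-linux-amd64 \
    --grpc-storage-plugin.configuration-file=config.yaml \
    --grpc-storage-plugin.log-level=info
Restart=on-failure

[Install]
WantedBy=multi-user.target
```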

    The UI is available at:

    http://10.41.201.2:16686/search
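
    With the process up, the OTLP HTTP receiver on port 4318 (visible in the startup log above) can take a hand-written test span, and ClickHouse can confirm it was stored. This is a sketch: the trace/span IDs and timestamps are made-up illustrative values, and the network calls only fire when RUN_CHECK=1, so the snippet is safe to paste.

```shell
# Send one illustrative OTLP/JSON span to the all-in-one collector, then count
# stored spans in ClickHouse. IDs and timestamps are arbitrary example values.
OTLP_PAYLOAD='{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"demo-service"}}]},"scopeSpans":[{"spans":[{"traceId":"5b8aa5a2d2c872e8321cf37308d69df2","spanId":"051581bf3cb55c13","name":"demo-span","kind":1,"startTimeUnixNano":"1708590048000000000","endTimeUnixNano":"1708590049000000000"}]}]}]}'
CH_COUNT_SQL='SELECT count() FROM jaeger.jaeger_spans_local'
if [ "${RUN_CHECK:-0}" = "1" ]; then
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "$OTLP_PAYLOAD" http://localhost:4318/v1/traces
  # Give the plugin a moment; it flushes batches every ~5s by default.
  sleep 6
  echo "$CH_COUNT_SQL" | curl -s 'http://localhost:8123/' --data-binary @- -ujaeger:jaeger
else
  echo "dry run: set RUN_CHECK=1 with the services running to execute"
fi
```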

    VI. Start the service (Docker)

    1. Point the address in config.yaml at the ClickHouse host

    [root@VM-201-2-centos jaeger-clickhouse]# cat config.yaml
    address: 10.41.201.2:9000
    username: jaeger
    password: jaeger
    database: jaeger
    

    2. Start ClickHouse

      docker run -d -p 8123:8123 -p 9000:9000 -p 9009:9009 \
        -e TZ='Asia/Shanghai' \
        -e CLICKHOUSE_DB=jaeger \
        -e CLICKHOUSE_USER=jaeger  \
        -e CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1 \
        -e CLICKHOUSE_PASSWORD=jaeger \
        -v /lvmdata/clickhouse/data:/var/lib/clickhouse/ \
        -v /lvmdata/clickhouse/logs:/var/log/clickhouse-server/ \
        --name ch --ulimit nofile=262144:262144 clickhouse/clickhouse-server:24.1.3
    

    3. Start Jaeger

    docker run -d --name jaeger \
      -p 5775:5775/udp \
      -p 6831:6831/udp \
      -p 6832:6832/udp \
      -p 5778:5778 \
      -p 16686:16686 \
      -p 14268:14268 \
      -p 9411:9411 \
      -v /root/jaeger-clickhouse:/jaeger-clickhouse \
      -e SPAN_STORAGE_TYPE=grpc-plugin \
      -e GRPC_STORAGE_PLUGIN_BINARY=/jaeger-clickhouse/jaeger-clickhouse-linux-amd64 \
      -e GRPC_STORAGE_PLUGIN_CONFIGURATION_FILE=/jaeger-clickhouse/config.yaml \
      jaegertracing/all-in-one
    

    The -e TZ='Asia/Shanghai' flag does not actually change the container's time zone. The timestamps shown for Jaeger traces look correct regardless; revisit this if a problem surfaces.
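
    The two docker run commands above can also be combined into one compose file (a sketch; in this layout, address in config.yaml must point at clickhouse:9000, the service name on the compose network, instead of the host IP):

```yaml
services:
  clickhouse:
    image: clickhouse/clickhouse-server:24.1.3
    environment:
      TZ: Asia/Shanghai
      CLICKHOUSE_DB: jaeger
      CLICKHOUSE_USER: jaeger
      CLICKHOUSE_PASSWORD: jaeger
      CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: "1"
    volumes:
      - /lvmdata/clickhouse/data:/var/lib/clickhouse/
      - /lvmdata/clickhouse/logs:/var/log/clickhouse-server/
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
  jaeger:
    image: jaegertracing/all-in-one
    depends_on:
      - clickhouse
    environment:
      SPAN_STORAGE_TYPE: grpc-plugin
      GRPC_STORAGE_PLUGIN_BINARY: /jaeger-clickhouse/jaeger-clickhouse-linux-amd64
      GRPC_STORAGE_PLUGIN_CONFIGURATION_FILE: /jaeger-clickhouse/config.yaml
    volumes:
      - /root/jaeger-clickhouse:/jaeger-clickhouse
    ports:
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
```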

    VII. Notes

    According to the roadmap, Jaeger v2 will support ClickHouse natively.

    Original post: https://www.haomeiwen.com/subject/nkhmadtx.html