Installing Elasticsearch 8.5 with docker-compose

Author: 卖菇凉的小火柴丶 | Published 2022-12-07 15:48

1. Introduction to Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you describe all of the services your application needs in a YAML file, and then create and start every service from that configuration with a single command.
Using Compose is a three-step process:

  • Define your application's environment with a Dockerfile.
  • Define the services that make up your application in docker-compose.yml so they can run together in an isolated environment.
  • Finally, run docker-compose up to start the whole application.

2. Installing docker-compose on CentOS 7

First, download Docker Compose:

sudo curl -L "https://get.daocloud.io/docker/compose/releases/download/v2.14.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
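
If the DaoCloud mirror is slow or unavailable, the same release can be fetched from the official GitHub releases instead (an alternative download, assuming a 64-bit Linux host; pick the asset that matches your platform):

# Official release asset for 64-bit Linux
sudo curl -L "https://github.com/docker/compose/releases/download/v2.14.2/docker-compose-linux-x86_64" -o /usr/local/bin/docker-compose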

Apply executable permissions to the binary:

sudo chmod +x /usr/local/bin/docker-compose

Create a symbolic link:

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Test that the installation succeeded:

docker-compose version

3. CentOS 7 environment settings

  • Set vm.max_map_count, otherwise Elasticsearch will fail to start.
# Check the current value of max_map_count (the default is 65530)
cat /proc/sys/vm/max_map_count
# Raise max_map_count to the value Elasticsearch requires
sysctl -w vm.max_map_count=262144
  • Note: on CentOS 7 you also need to open ports 9200 and 5601 to the outside world; one way to do this with firewalld is shown in the sketch below.
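
The sysctl -w change above only lasts until the next reboot. A minimal sketch for persisting it and for opening the two ports, assuming the default firewalld firewall on CentOS 7:

# Persist vm.max_map_count across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# Open the Elasticsearch (9200) and Kibana (5601) ports
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=5601/tcp
sudo firewall-cmd --reload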

4. Creating the cluster

4.1 Cluster without passwords

  • An Elasticsearch instance used only for development does not need certificate verification.
  • Create a clean directory and change into it.
# es is the user created for Elasticsearch; adjust the path to your own setup
cd /home/es
mkdir es_docker_compose
cd es_docker_compose
# The key step is creating the following two files
touch .env
touch docker-compose.yml
  • Contents of the .env file:
# Password for the kibana_system account (at least six characters); this account is only used for Kibana's internal setup and cannot be used to query Elasticsearch
KIBANA_PASSWORD=abcdef

# Version of Elasticsearch and Kibana
STACK_VERSION=8.5.0

# Cluster name
CLUSTER_NAME=docker-cluster

# Port on the host that Elasticsearch is mapped to
ES_PORT=9200

# Port on the host that Kibana is mapped to
KIBANA_PORT=5601

# Memory limit for each container, in bytes (currently 1 GB); adjust to your hardware
MEM_LIMIT=1073741824

# Project name; it becomes the prefix of the container names
COMPOSE_PROJECT_NAME=demo
  • Contents of the docker-compose.yml file. It defines a three-node Elasticsearch cluster plus one Kibana instance.
    Note: the version field below is the Compose file format version, not the version of the docker-compose binary installed in step 2.
# Compose file format version
version: "3.8"

services:
  es01:
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1

  es02:
    depends_on:
      - es01
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1

  es03:
    depends_on:
      - es02
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: kibana:${STACK_VERSION}
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=http://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
    mem_limit: ${MEM_LIMIT}

volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
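
Before starting, it can be useful to validate the file and see how the variables from .env are substituted (an optional sanity check):

# Print the fully resolved configuration, with .env values filled in
docker-compose config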
  • Start the application.
    In the directory containing docker-compose.yml and .env, run:
# -d runs the containers in the background
docker-compose up -d

Check the containers with docker ps; the output looks like:

CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS         PORTS                              NAMES
11663375288d   elasticsearch:8.5.0   "/bin/tini -- /usr/l…"   4 minutes ago   Up 4 minutes   9200/tcp, 9300/tcp                 demo_es03_1
ad6f0390b9cf   elasticsearch:8.5.0   "/bin/tini -- /usr/l…"   4 minutes ago   Up 4 minutes   9200/tcp, 9300/tcp                 demo_es02_1
5080709e5358   kibana:8.5.0          "/bin/tini -- /usr/l…"   4 minutes ago   Up 4 minutes   0.0.0.0:5601->5601/tcp             demo_kibana_1
4b1e576fbfd3   elasticsearch:8.5.0   "/bin/tini -- /usr/l…"   4 minutes ago   Up 4 minutes   0.0.0.0:9200->9200/tcp, 9300/tcp   demo_es01_1
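
If a node does not come up, its startup logs can be followed per service (service names as defined in docker-compose.yml):

# Follow the logs of the first Elasticsearch node
docker-compose logs -f es01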

Verify that it works

  • In a browser, go to http://<docker-host>:9200/ — note that it is plain http here, since security is disabled.
  • No username or password is required for this cluster (xpack.security.enabled=false).
  • Kibana is at http://<docker-host>:5601/ and likewise requires no login. A quick command-line check is sketched below.
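
A minimal command-line check, assuming the cluster is published on port 9200 of the Docker host:

# Basic node info and cluster health (no credentials needed because security is disabled)
curl http://localhost:9200
curl "http://localhost:9200/_cluster/health?pretty"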

4.2 Cluster with passwords and security enabled (self-signed certificates + credentials)

  • Create a clean directory and change into it.
# es is the user created for Elasticsearch; adjust the path to your own setup
cd /home/es
mkdir es_docker_compose
cd es_docker_compose
# The key step is creating the following two files
touch .env
touch docker-compose.yml
  • Contents of the .env file:
# Password for the elastic account (at least six characters)
ELASTIC_PASSWORD=123456

# Password for the kibana_system account (at least six characters); this account is only used for Kibana's internal setup and cannot be used to query Elasticsearch
KIBANA_PASSWORD=abcdef

# Version of Elasticsearch and Kibana
STACK_VERSION=8.5.0

# Cluster name
CLUSTER_NAME=docker-cluster

# X-Pack license type; basic is used here. A trial license will expire after 30 days.
LICENSE=basic
#LICENSE=trial

# Port on the host that Elasticsearch is mapped to
ES_PORT=9200

# Port on the host that Kibana is mapped to
KIBANA_PORT=5601

# Memory limit for each container, in bytes (currently 1 GB); adjust to your hardware
MEM_LIMIT=1073741824

# Project name; it becomes the prefix of the container names
COMPOSE_PROJECT_NAME=es
  • Contents of the docker-compose.yml file. A setup service first bootstraps the certificates and passwords, followed by a three-node Elasticsearch cluster plus one Kibana instance.
    Note: the version field below is the Compose file format version, not the version of the docker-compose binary installed in step 2.
# Compose file format version
version: "3.8"

services:
  setup:
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
  • Start the application.
    In the directory containing docker-compose.yml and .env, run:
# -d runs the containers in the background
docker-compose up -d

Output:

Creating network "demo_default" with the default driver
Pulling setup (elasticsearch:8.5.0)...
8.5.0: Pulling from library/elasticsearch
Digest: sha256:8c666cb1e76650306655b67644a01663f9c7a5422b2c51dd570524267f11ce3d
Status: Downloaded newer image for elasticsearch:8.5.0
Pulling kibana (kibana:8.5.0)...
8.5.0: Pulling from library/kibana
Digest: sha256:cf34801f36a2e79c834b3cdeb0a3463ff34b8d8588c3ccdd47212c4e0753f8a5
Status: Downloaded newer image for kibana:8.5.0
Creating demo_setup_1 ... done
Creating demo_es01_1  ... done
Creating demo_es02_1  ... done
Creating demo_es03_1  ... done
Creating demo_kibana_1 ... done
  • Check the container status: the setup container that performed the bootstrap has exited, and the other containers are running normally.
docker ps -a

Output:

CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS                      PORTS                              NAMES
c8ce010cddfc   kibana:8.5.0          "/bin/tini -- /usr/l…"   20 minutes ago   Up 20 minutes (healthy)     0.0.0.0:5601->5601/tcp             demo_kibana_1
78662d44ae31   elasticsearch:8.5.0   "/bin/tini -- /usr/l…"   21 minutes ago   Up 21 minutes (healthy)     9200/tcp, 9300/tcp                 demo_es03_1
7e96273872cb   elasticsearch:8.5.0   "/bin/tini -- /usr/l…"   21 minutes ago   Up 21 minutes (healthy)     9200/tcp, 9300/tcp                 demo_es02_1
8b8be1d645ba   elasticsearch:8.5.0   "/bin/tini -- /usr/l…"   21 minutes ago   Up 21 minutes (healthy)     0.0.0.0:9200->9200/tcp, 9300/tcp   demo_es01_1
c48ffb724ca2   elasticsearch:8.5.0   "/bin/tini -- /usr/l…"   21 minutes ago   Exited (0) 20 minutes ago                                      demo_setup_1
  • Check the setup container's logs; they show that the bootstrap completed successfully.
docker logs demo_setup_1

Output:

Setting file permissions
Waiting for Elasticsearch availability
Setting kibana_system password
All done!
  • To send requests to Elasticsearch from the host with curl, first copy the CA certificate out of the container (the per-node certificate es01.crt sits alongside it); a usage sketch follows below.
docker cp demo_es01_1:/usr/share/elasticsearch/config/certs/ca/ca.crt .
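
A minimal request sketch, assuming the ELASTIC_PASSWORD from the .env above (123456) and that Elasticsearch is published on port 9200 of the Docker host:

# Verify the server certificate against the copied CA and authenticate as the elastic user
curl --cacert ./ca.crt -u elastic:123456 https://localhost:9200
curl --cacert ./ca.crt -u elastic:123456 "https://localhost:9200/_cluster/health?pretty"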

Verify that it works

  • In a browser, go to https://<docker-host>:9200/ — note that it is https this time, because the cluster uses certificates; accept the self-signed certificate warning.
  • Log in with the elastic account and the ELASTIC_PASSWORD value from the .env file.
  • Kibana is at http://<docker-host>:5601/. Log in with the elastic account there as well; the kibana_system account is reserved for Kibana's own connection to Elasticsearch and cannot be used to log in.

5. Cleaning up

  • Running docker-compose down removes the containers, but it does not remove the data: the next time you run docker-compose up -d, the new cluster will still contain any indices you created earlier (for example a test001 index), data included.
  • This is because docker-compose.yml stores the cluster's key data in named volumes, which live on the host's disk:
> docker volume ls

DRIVER    VOLUME NAME
local     demo_certs
local     demo_esdata01
local     demo_esdata02
local     demo_esdata03
local     demo_kibanadata
  • Run docker volume rm demo_certs demo_esdata01 demo_esdata02 demo_esdata03 demo_kibanadata to remove them completely, or use the shortcut below.
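
Alternatively, the containers and the named volumes declared in docker-compose.yml can be removed in one step (this deletes all cluster data, so use it deliberately):

# Stop and remove containers, networks, and the named volumes
docker-compose down -v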
