Kubernetes-prd-应用部署


Author: Coderove | Published 2019-04-09 10:38


A. SIP-PROXY Installation

1. Local RPM preparation

  • bzip2-1.0.6-13.el7.x86_64.rpm
  • docker-rpm.tar.gz

Docker local install

cat >> /etc/yum.repos.d/docker-ce-local.repo << EOF
[docker]
name=docker-local-repo
baseurl=file:///breeze/rpms/
gpgcheck=0
enabled=1
EOF

# refresh the yum cache
yum clean all && yum makecache

# install docker and docker-compose
yum install -y docker docker-compose

# start the docker service and enable it at boot
systemctl start docker && systemctl enable docker

docker load -i docker-image.tar
docker images


docker load -i fsdeb.tar
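The repo and install steps above can be condensed into one small function. A minimal sketch, assuming the same /breeze/rpms path from this document; note it writes the repo file with `>` instead of the original `>>`, so re-running it does not append a duplicate `[docker]` section:

```shell
#!/bin/sh
# Write the local docker repo definition. Idempotent: overwrites rather
# than appends, so repeated runs leave a single [docker] section.
setup_docker_repo() {
  repo_file="${1:-/etc/yum.repos.d/docker-ce-local.repo}"
  cat > "$repo_file" << 'EOF'
[docker]
name=docker-local-repo
baseurl=file:///breeze/rpms/
gpgcheck=0
enabled=1
EOF
}
```

After this, `yum clean all && yum makecache` and the install commands proceed exactly as above.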



  • Installation history
    1  [2019-03-28 17:29:19] vim /etc/ssh/sshd_config 
    2  [2019-03-28 17:30:06] cp /etc/ssh/sshd_config /etc/ssh/sshd_config.ori
    3  [2019-03-28 17:30:08] vim /etc/ssh/sshd_config 
    4  [2019-03-28 17:31:04] systemctl restart sshd
    5  [2019-03-28 17:35:06] cd /etc/ssh
    6  [2019-03-28 17:35:07] ls
    7  [2019-03-28 17:35:14] cp sshd_config sshd_config.ori
    8  [2019-03-28 17:35:18] cp sshd_config sshd_config.ori2
    9  [2019-03-28 17:35:21] vi sshd_config
   10  [2019-03-28 17:35:30] ifconfig
   11  [2019-03-28 17:35:39] systemctl restart sshd
   12  [2019-03-29 14:07:24] ls
   13  [2019-03-29 14:07:29] cd /app
   14  [2019-03-29 14:07:30] ls
   15  [2019-03-29 14:07:54] cat >> /etc/yum.repos.d/docker-ce-local.repo << EOF
[docker]
name=docker-local-repo
baseurl=file:///breeze/rpms/
gpgcheck=0
enabled=1
EOF

   16  [2019-03-29 14:08:17] vim /etc/yum.repos.d/docker-ce-local.repo 
   17  [2019-03-29 14:09:32] ls
   18  [2019-03-29 14:09:34] cd localrpm/
   19  [2019-03-29 14:09:35] ls
   20  [2019-03-29 14:09:51] tar -zxvf docker-rpm.tar.gz 
   21  [2019-03-29 14:10:02] ls
   22  [2019-03-29 14:10:14] vim /etc/yum.repos.d/docker-ce-local.repo 
   23  [2019-03-29 14:10:28] yum clean all && yum makecache
   24  [2019-03-29 14:11:00] yum install -y docker docker-compose
   25  [2019-03-29 14:11:59] systemctl start docker && systemctl enable docker
   26  [2019-03-29 14:12:11] ls
   27  [2019-03-29 14:12:34] yum localinstall bzip2-1.0.6-13.el7.x86_64.rpm 
   28  [2019-03-29 14:34:24] cd ../SIPPROXY/
   29  [2019-03-29 14:34:25] ls
   30  [2019-03-29 14:34:37] tar -zxvf rainny.tar.gz 
   31  [2019-03-29 14:34:44] ls
   32  [2019-03-29 14:34:55] cd app/
   33  [2019-03-29 14:34:57] ls
   34  [2019-03-29 14:35:05] mv rainny ../
   35  [2019-03-29 14:35:06] ls
   36  [2019-03-29 14:35:10] cd ..
   37  [2019-03-29 14:35:11] ls
   38  [2019-03-29 14:35:21] rm -rf app
   39  [2019-03-29 14:35:23] ls
   40  [2019-03-29 14:35:31] cd rainny/
   41  [2019-03-29 14:35:33] ls
   42  [2019-03-29 14:46:22] vim FreeswitchWatchdog 
   43  [2019-03-29 14:47:27] chmod +x FreeswitchWatchdog 
   44  [2019-03-29 14:47:31] ll
   45  [2019-03-29 14:47:46] chmod +x docker.sh 
   46  [2019-03-29 14:47:49] ll
   47  [2019-03-29 14:48:08] ./docker.sh 
   48  [2019-03-29 14:48:25] vim docker.sh 
   49  [2019-03-29 15:40:01] ps -ef |grep docker.sh 
   50  [2019-03-29 15:40:06] ps -ef |grep docke
   51  [2019-03-29 15:40:20] ps -ef |grep freeswitch
   52  [2019-03-29 15:43:30] docker images
   53  [2019-03-29 15:43:51] ls
   54  [2019-03-29 15:43:58] cd /app
   55  [2019-03-29 15:43:59] ls
   56  [2019-03-29 15:44:04] cd SIPPROXY/
   57  [2019-03-29 15:44:05] ls
   58  [2019-03-29 15:44:14] cd rainny/
   59  [2019-03-29 15:44:15] ls
   60  [2019-03-29 15:47:02] cd ..
   61  [2019-03-29 15:47:04] ls
   62  [2019-03-29 15:47:07] cd ..
   63  [2019-03-29 15:47:09] ls
   64  [2019-03-29 15:47:13] cd ..
   65  [2019-03-29 15:47:15] ls
   66  [2019-03-29 15:56:48] history 

Operations

Common commands

PATH: /var/rainny/ippbx

  • Packet capture and analysis
tcpdump -np -s 0 -i eth0 -w /app/demp.pcap udp

tcpdump -np -s 0 -i eth0   udp
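For long-running captures, the disk can fill quickly, so tcpdump's size-based rotation flags (-C / -W) are worth adding. The helper below only assembles the command string for review before running it as root; capture_cmd is a hypothetical name, while the interface, filter, and -np -s 0 flags match the commands above:

```shell
#!/bin/sh
# Build (not run) a tcpdump command line matching the captures above.
# $1 = interface, $2 = output file (optional), $3 = rotate size in MB (optional).
capture_cmd() {
  cmd="tcpdump -np -s 0 -i $1"
  if [ -n "${2:-}" ]; then cmd="$cmd -w $2"; fi
  if [ -n "${3:-}" ]; then cmd="$cmd -C $3 -W 10"; fi  # rotate at $3 MB, keep 10 files
  echo "$cmd udp"
}
```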

  • K8s debugging commands

  • Master node

  • Path: /root/

kubectl delete -f pcc.yml;
kubectl apply -f pcc.yml ; sleep 6;
kubectl get pod -n pcc -o wide;

kubectl exec -it ippbx-1 -n pcc sh
Function | Command | Remarks
---------|---------|--------
Save history | history >> /root/nodeippbx2 |
View logs | kubectl logs -f ippbx-0 |
Delete pods | kubectl delete -f pcc.yml |
Create pods | kubectl apply -f pcc.yml ; sleep 6 |
List pods | kubectl get pod -n pcc -o wide |
Enter a pod | kubectl exec -it ippbx-1 -n pcc sh |
  • Search for files
  252  [2019-03-27 11:43:11] find -name "*.yml"
  253  [2019-03-27 11:44:24] find -name "*.yaml"
kubectl delete -f pcc.yml ; kubectl apply -f pcc.yml ; sleep 6 ;kubectl get pod -n pcc -o wide

./fs_cli -H 127.0.0.1 -p wzw -P 4521
docker load -i fsdeb.tar

docker images

mv  freeswitch freeswitch.old


kubectl delete -f pcc.yml ; kubectl apply -f pcc.yml ; sleep 6 ;kubectl get pod -n pcc -o wide

IPPBX / CTI application node deployment

1. File preparation

  • Copy zs招商copy.tar.gz to the jump host,
  • copy it from the jump host to /root on every machine,
  • and extract it to get three directories: keepalived, freeswitch, and a生产环境部署

zs招商copy.tar.gz

2. Prepare local image files

Prepare the application-node image files

  • Load the freeswitch archive into a Docker image and push it to the Harbor registry:

(a) Run on the deploy node, i.e. 100.67.33.10:

  • Preparation
[root@nodecti2 ~]# cd /root
[root@nodecti2 ~]# tar -xzvf zs*.tar.gz  
[root@nodecti2 ~]# ls
bin  slogs  zs????copy.tar.gz  zs招商copy
[root@nodecti2 ~]# cd zs招商copy/
[root@nodecti2 zs招商copy]# ls
a生产环境部署  freeswitch  keepalived
[root@nodecti2 zs招商copy]# cd freeswitch/
[root@nodecti2 freeswitch]# ls
freeswitch.tar.bz2  fsdeb.tar  rainny.tar.bz2

  • Load the freeswitch archive into a Docker image
[root@nodecti2 freeswitch]# docker load -i fsdeb.tar 
f33a13616df9: Loading layer [==================================================>] 82.96 MB/82.96 MB
c8f952ba8693: Loading layer [==================================================>] 1.024 kB/1.024 kB
5ca3e1235786: Loading layer [==================================================>] 1.024 kB/1.024 kB
8cda936b4d69: Loading layer [==================================================>]  5.12 kB/5.12 kB
5329b221f182: Loading layer [==================================================>] 93.21 MB/93.21 MB
9c9167e39e1a: Loading layer [==================================================>] 359.9 MB/359.9 MB
9aa6a792bc60: Loading layer [==================================================>] 443.9 MB/443.9 MB

[root@nodecti2 freeswitch]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
wise2c/playbook                        v1.11.8             e52b775e14b3        2 weeks ago         1.14 GB
wise2c/yum-repo                        v1.11.8             6ff733f70993        3 weeks ago         758 MB
100.67.33.9/library/kube-proxy-amd64   v1.11.8             2ed65dca1a98        3 weeks ago         98.1 MB
fs                                     deb1                9c3b419a16e0        4 weeks ago         960 MB
100.67.33.9/library/flannel            v0.11.0-amd64       ff281650a721        8 weeks ago         52.6 MB
wise2c/pagoda                          v1.1                d4f2b4cabdec        2 months ago        483 MB
wise2c/deploy-ui                       v1.3                2f159b37bf13        2 months ago        40.1 MB
100.67.33.9/library/pause              3.1                 da86e6ba6ca1        15 months ago       742 kB
[root@nodecti2 freeswitch]# 

  • Supplement: common image commands
    docker images                              # list local images
    docker rmi <repo>:<TAG>                    # delete an image by name
    docker rmi <IMAGE ID>                      # delete an image by ID
    docker load -i <compressed image tar>      # load an image from a tar archive
    docker tag <old repo>:<tag> <new repo>:<tag>
    docker push <repo>:<tag>                   # requires docker login <registry> first
    docker login <registry>
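The tag-and-push flow described here always maps a local name to <registry>/library/<name>. A hedged sketch of just that naming step (harbor_name is a hypothetical helper; 100.67.33.9 is the Harbor address used in this deployment):

```shell
#!/bin/sh
# Compute the Harbor-side name for a local image.
REGISTRY="100.67.33.9"
harbor_name() {
  echo "${REGISTRY}/library/$1"
}

# Intended use on the deploy node (not run here):
#   docker tag fs:deb1 "$(harbor_name fs:deb1)"
#   docker push "$(harbor_name fs:deb1)"
#   docker rmi fs:deb1
```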

b) Push the image to the Harbor registry

  • Open the Harbor registry at 100.67.33.9 in a browser (the login button looks grayed out but is still clickable) and open the library project to see the existing images; no action is needed in the UI at this point.
  • Run on the deploy node:
[root@nodecti2 freeswitch]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
wise2c/playbook                        v1.11.8             e52b775e14b3        2 weeks ago         1.14 GB
wise2c/yum-repo                        v1.11.8             6ff733f70993        3 weeks ago         758 MB
100.67.33.9/library/kube-proxy-amd64   v1.11.8             2ed65dca1a98        3 weeks ago         98.1 MB
fs                                     deb1                9c3b419a16e0        4 weeks ago         960 MB
100.67.33.9/library/flannel            v0.11.0-amd64       ff281650a721        8 weeks ago         52.6 MB
wise2c/pagoda                          v1.1                d4f2b4cabdec        2 months ago        483 MB
wise2c/deploy-ui                       v1.3                2f159b37bf13        2 months ago        40.1 MB
100.67.33.9/library/pause              3.1                 da86e6ba6ca1        15 months ago       742 kB
  • Re-tag the image
[root@nodecti2 freeswitch]# docker  tag fs:deb1 100.67.33.9/library/fs:deb1

[root@nodecti2 freeswitch]# 
[root@nodecti2 freeswitch]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
wise2c/playbook                        v1.11.8             e52b775e14b3        2 weeks ago         1.14 GB
wise2c/yum-repo                        v1.11.8             6ff733f70993        3 weeks ago         758 MB
100.67.33.9/library/kube-proxy-amd64   v1.11.8             2ed65dca1a98        3 weeks ago         98.1 MB
100.67.33.9/library/fs                 deb1                9c3b419a16e0        4 weeks ago         960 MB
fs                                     deb1                9c3b419a16e0        4 weeks ago         960 MB
100.67.33.9/library/flannel            v0.11.0-amd64       ff281650a721        8 weeks ago         52.6 MB
wise2c/pagoda                          v1.1                d4f2b4cabdec        2 months ago        483 MB
wise2c/deploy-ui                       v1.3                2f159b37bf13        2 months ago        40.1 MB
100.67.33.9/library/pause              3.1                 da86e6ba6ca1        15 months ago       742 kB
[root@nodecti2 freeswitch]# docker rmi fs:deb1
Untagged: fs:deb1

[root@nodecti2 freeswitch]# docker images     
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
wise2c/playbook                        v1.11.8             e52b775e14b3        2 weeks ago         1.14 GB
wise2c/yum-repo                        v1.11.8             6ff733f70993        3 weeks ago         758 MB
100.67.33.9/library/kube-proxy-amd64   v1.11.8             2ed65dca1a98        3 weeks ago         98.1 MB
100.67.33.9/library/fs                 deb1                9c3b419a16e0        4 weeks ago         960 MB
100.67.33.9/library/flannel            v0.11.0-amd64       ff281650a721        8 weeks ago         52.6 MB
wise2c/pagoda                          v1.1                d4f2b4cabdec        2 months ago        483 MB
wise2c/deploy-ui                       v1.3                2f159b37bf13        2 months ago        40.1 MB
100.67.33.9/library/pause              3.1                 da86e6ba6ca1        15 months ago       742 kB
[root@nodecti2 freeswitch]# cat /etc/docker/daemon.json 
{
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  },
  "insecure-registries": [
      "100.67.33.9"
  ],
  "storage-driver": "overlay2"
}
  • Log in to the Docker registry
[root@nodecti2 freeswitch]# docker login 100.67.33.9
Username: admin
Password:           # enter Harbor12345 here
Login Succeeded
  • Push the image to Harbor
[root@nodecti2 freeswitch]# docker push 100.67.33.9/library/fs:deb1
The push refers to a repository [100.67.33.9/library/fs]
8c7f9e41cf2f: Pushed 
0e8f26b8afa9: Pushed 
683e839e85ce: Pushed 
1999ce8fd68d: Pushed 
5f70bf18a086: Pushed 
16ada34affd4: Pushed 
deb1: digest: sha256:37a3b7114091c8ea7dfd2f6b16a6c708469443e6da6edf96a2b0201c226b0ed2 size: 1787
  • Refresh Harbor in the browser; the pushed image is now visible.

3. Install and configure keepalived on the four worker nodes

a. Install keepalived locally

Run this on 100.67.33.7; the other three nodes are similar:

[root@nodeippbx1 ~]# cd zs招商copy/
[root@nodeippbx1 zs招商copy]# ls
a生产环境部署  freeswitch  keepalived
[root@nodeippbx1 zs招商copy]# cd keepalived/
[root@nodeippbx1 keepalived]# ls
keepalived-1.3.5-8.el7_6.x86_64.rpm  lm_sensors-libs-3.4.0-6.20160601gitf9185e5.el7.x86_64.rpm  net-snmp-agent-libs-5.7.2-37.el7.x86_64.rpm  net-snmp-libs-5.7.2-37.el7.x86_64.rpm
  • Install keepalived locally
    (same directory as above)
[root@nodeippbx1 keepalived]# yum localinstall *.rpm -y
...
Installed:
  keepalived.x86_64 0:1.3.5-8.el7_6                         net-snmp-agent-libs.x86_64 1:5.7.2-37.el7                         net-snmp-libs.x86_64 1:5.7.2-37.el7                        

Updated:
  lm_sensors-libs.x86_64 0:3.4.0-6.20160601gitf9185e5.el7                                                                                                                                 

Complete!
[root@nodeippbx1 keepalived]# 
systemctl enable keepalived.service;systemctl start keepalived.service

b. Configure IPPBX

Note that in keepalived.conf the master and backup must share the same virtual_router_id, which must not exceed 255 (it is an 8-bit VRRP field), while router_id must differ between them; router_id, state, and priority may not be identical on master and backup. Details below.
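These consistency rules are easy to violate when editing two files by hand. A hedged sketch of a pre-flight check, assuming standard keepalived.conf field names (get_vrid, get_router_id, and check_pair are hypothetical helpers):

```shell
#!/bin/sh
# Extract fields from a keepalived.conf so master/backup can be compared
# before starting the service.
get_vrid()      { awk '/virtual_router_id/ {print $2}' "$1"; }
get_router_id() { awk '/^[[:space:]]*router_id/ {print $2}' "$1"; }

check_pair() {  # $1 = master conf, $2 = backup conf
  if [ "$(get_vrid "$1")" != "$(get_vrid "$2")" ]; then
    echo "virtual_router_id differs"; return 1
  fi
  if [ "$(get_router_id "$1")" = "$(get_router_id "$2")" ]; then
    echo "router_id must differ"; return 1
  fi
  echo "ok"
}
```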

  • 100.67.33.7
 # Backup
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.ori



vim /etc/keepalived/keepalived.conf
  ###### begin
  [root@nodeippbx1 keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id ippbx_master
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 188
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       100.67.33.16/23 dev eth0 label eth0:1
    }
}
###### end
  • 100.67.33.8
###### begin
[root@nodeippbx2 keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id ippbx_backup
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 188
    priority 20
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       100.67.33.16/23 dev eth0 label eth0:1
    }
}
 ###### end
  • Supplement
 On both hosts, run:  systemctl enable keepalived && systemctl start keepalived && ps -ef | grep keepalived
 Check the IPs on each with ifconfig; on the master node you will see:
   eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 100.67.33.16  netmask 255.255.254.0  broadcast 0.0.0.0
        ether fa:16:3e:dd:97:61  txqueuelen 1000  (Ethernet)
 Stop keepalived on the master and check the IPs again: the VIP now appears on the backup node and is gone from the master:
   eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 100.67.33.16  netmask 255.255.254.0  broadcast 0.0.0.0
        ether fa:16:3e:06:a4:f6  txqueuelen 1000  (Ethernet)
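The failover check above can be scripted. A minimal sketch that looks for the VIP in interface output read from stdin (has_vip is a hypothetical helper; 100.67.33.16 is the IPPBX VIP from the configs above):

```shell
#!/bin/sh
# Succeed if the ifconfig output piped in contains the VIP.
# The trailing space in the pattern avoids matching longer addresses.
has_vip() {
  vip="${1:-100.67.33.16}"
  grep -q "inet ${vip} "
}

# Intended use on a node (not run here):
#   if ifconfig eth0:1 2>/dev/null | has_vip; then echo "this node holds the VIP"; fi
```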

c. Configure the two CTI nodes

  • 100.67.33.9
##################begin
[root@nodecti1 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id cti_master
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 189
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
       100.67.33.17/23 dev eth0 label eth0:1
    }
}
##################end
  • 100.67.33.10
 ##################begin
 [root@nodecti2 keepalived]# cat /etc/keepalived/keepalived.conf
global_defs {
   router_id cti_backup
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 189
    priority 20
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
       100.67.33.17/23 dev eth0 label eth0:1
    }
}
##################end

d. Test keepalived

Run on both nodes:

[root@nodecti1 keepalived]# systemctl enable keepalived
[root@nodecti1 keepalived]# systemctl start keepalived 
[root@nodecti1 keepalived]# ps -ef | grep keep
root      1780     1  0 Jan29 ttyS0    00:00:00 /sbin/agetty --keep-baud 115200 38400 9600 ttyS0 vt220
root     16941     1  0 19:20 ?        00:00:00 /usr/sbin/keepalived -D
root     16942 16941  0 19:20 ?        00:00:00 /usr/sbin/keepalived -D
root     16943 16941  0 19:20 ?        00:00:00 /usr/sbin/keepalived -D
root     18826  4506  0 19:22 pts/1    00:00:00 grep --color=auto keep

[root@nodecti1 keepalived]# ifconfig | grep eth0   ## the master shows two IPs, the backup only one; after stopping keepalived on the master, the backup gains the VIP
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
[root@nodecti1 keepalived]# ifconfig | grep 67
        inet 100.67.33.9  netmask 255.255.254.0  broadcast 100.67.33.255
        inet 100.67.33.17  netmask 255.255.254.0  broadcast 0.0.0.0
        RX packets 723588  bytes 667124730 (636.2 MiB)
        RX packets 459  bytes 86709 (84.6 KiB)
vethe922676: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        TX packets 750673  bytes 669843972 (638.8 MiB)
        inet6 fe80::867:1dff:fe29:55d0  prefixlen 64  scopeid 0x20<link>
        ether 0a:67:1d:29:55:d0  txqueuelen 0  (Ethernet)

4. Configure the YAML file for the fs app

a. Install the bzip2 package locally and extract the files

  • RPM preparation
    Copy bzip2-1.0.6-13.el7.x86_64.rpm to the deploy machine from outside.

  • Install bzip2 locally

yum localinstall ./bzip2-1.0.6-13.el7.x86_64.rpm
  • Extract rainny.tar.bz2 and freeswitch.tar.bz2
[root@nodecti2 freeswitch]# yum localinstall ./bzip2-1.0.6-13.el7.x86_64.rpm
[root@nodecti2 freeswitch]# pwd
/root/zs招商copy/freeswitch
[root@nodecti2 freeswitch]# ls
freeswitch.tar.bz2  fsdeb.tar  rainny.tar.bz2    
[root@nodecti2 freeswitch]# tar -xjvf rainny.tar.bz2  
[root@nodecti2 freeswitch]# tar -xjvf freeswitch.tar.bz2
[root@nodecti2 freeswitch]# ls
freeswitch  freeswitch.tar.bz2  fsdeb.tar  rainny  rainny.tar.bz2
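The two tar commands above generalize to any number of archives. A hedged sketch that extracts every .tar.bz2 in the current directory and stops on the first failure (extract_all_bz2 is a hypothetical helper; tar -j requires the bzip2 package installed above):

```shell
#!/bin/sh
# Extract all .tar.bz2 archives in the current directory.
extract_all_bz2() {
  for f in *.tar.bz2; do
    if [ ! -e "$f" ]; then return 0; fi   # glob did not match: nothing to do
    tar -xjf "$f" || return 1
  done
}
```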

b. Prepare the host directories mapped into the pods

  • Label the four worker nodes
    Run on the master:
[root@master1 ~]# kubectl get nodes -o wide
NAME         STATUS    ROLES     AGE       VERSION
master1      Ready     master    1d        v1.11.8
master2      Ready     master    1d        v1.11.8
master3      Ready     master    1d        v1.11.8
nodecti1     Ready     <none>    1d        v1.11.8
nodecti2     Ready     <none>    1d        v1.11.8
nodeippbx1   Ready     <none>    1d        v1.11.8
nodeippbx2   Ready     <none>    1d        v1.11.8
[root@master1 ~]# kubectl label node nodeippbx1 pcc/ippbx=true
[root@master1 ~]# kubectl label node nodeippbx2 pcc/ippbx=true
[root@master1 ~]# kubectl label node nodecti1 pcc/cti=true
[root@master1 ~]# kubectl label node nodecti2 pcc/cti=true
  • [?] Configure passwordless (SSH key) login for file transfer
  • Prepare the host directories mapped into the pods:
    copy the rainny directory extracted above to the /data directory (a dedicated logical volume) on each of the four worker nodes:
 scp -r rainny 100.67.33.7:/data
 scp -r rainny 100.67.33.8:/data
 scp -r rainny 100.67.33.9:/data
 scp -r rainny 100.67.33.10:/data
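The four scp commands above can be written as one loop. A hedged sketch; NODES lists the four worker nodes from this document, and COPY can be overridden (e.g. COPY=echo) for a dry run:

```shell
#!/bin/sh
# Copy the extracted rainny directory to /data on every worker node,
# stopping on the first failure.
NODES="100.67.33.7 100.67.33.8 100.67.33.9 100.67.33.10"
COPY="${COPY:-scp}"

sync_rainny() {
  for node in $NODES; do
    "$COPY" -r rainny "${node}:/data" || { echo "copy to ${node} failed" >&2; return 1; }
  done
}
```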

Supplementary script

IPPBX

 cp /root/zs招商copy/freeswitch/config/rc.local-pbbxc rc.local

c. Modify the configuration file pcc.yaml

  • Path: /data
  • command

Keepalived config

code

Operations

Common commands
Action | Command | Remarks
-------|---------|--------
view logs | kubectl logs -f pod/ippbx-0 -n pcc |
describe a pod | kubectl describe pod/ippbx-0 -n pcc |
list nodes and pods | kubectl get nodes,pod -n pcc |
restart | systemctl start keepalived |
check process | ps -ef \| grep keepalived |
check ip | ifconfig \| grep eth0 |
check ip | ip a |




Original: https://www.haomeiwen.com/subject/dpoviqtx.html