k8s

Author: 慕知 | Published 2021-04-01 08:31

K8s Introduction and Setup Tweaks

Kubernetes, abbreviated k8s, is a container orchestration and management platform.



Tweak (enable kubectl bash completion):
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc


I. Pods

1. Pod Overview

1) The Pod is the smallest deployable unit in a k8s cluster. Its main job is to group together all the services (containers) that make up one application or one call chain.


2) By design, a Pod lets multiple containers share a network address and filesystem, so they can be composed into a service through the simple, efficient mechanisms of inter-process communication and file sharing.


3) Pods are the foundation of every workload type in a k8s cluster. Think of them as small robots running in the cluster; different kinds of workloads need different kinds of robots to carry them out.

2. Pod Lifecycle

Lifecycle diagram: (image omitted)
1. Create the pod
2. Create the pause infrastructure container, which provides the shared namespaces (main container --> business containers)
3. Run init containers serially
4. Start the business containers; at that moment the postStart hook defined on the main container also runs
5. Health checks determine whether the containers started successfully; liveness and readiness probes then run continuously
6. On shutdown, the preStop hook runs
7. Terminate the containers (business containers first ---> then the pause container)
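The hook and probe steps above map directly onto fields of the pod spec. A minimal sketch (names, images, and commands are illustrative, not taken from the cluster above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo            # hypothetical name
spec:
  initContainers:                 # step 3: run serially before the business container
    - name: init-wait
      image: busybox
      command: ["sh", "-c", "sleep 2"]
  containers:
    - name: nginx
      image: nginx
      lifecycle:
        postStart:                # step 4: runs as the container starts
          exec:
            command: ["sh", "-c", "echo started > /tmp/flag"]
        preStop:                  # step 6: runs before termination
          exec:
            command: ["nginx", "-s", "quit"]
      livenessProbe:              # step 5: ongoing liveness check
        tcpSocket:
          port: 80
      readinessProbe:             # step 5: readiness check
        httpGet:
          path: /
          port: 80
```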

3. Pod Advantages and How They Work

Managing multiple containers:

A Pod can run multiple cooperating processes (as containers) at the same time;
containers in the same Pod are automatically placed on the same node. They share resources, the network environment, and dependencies, so they are always scheduled together.
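The sharing described above can be sketched in a manifest (all names are illustrative): both containers reach each other over localhost, and an emptyDir volume gives them a common filesystem path.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo               # hypothetical name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                # scratch volume shared by both containers
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader                # can read /data/msg written by the other container
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```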




Pod advantages:

1. As an independently runnable service unit, a Pod lowers the difficulty of application deployment and, through a higher level of abstraction, makes deployment and management far more convenient.

2. As the smallest application instance, a Pod runs independently, so it is easy to deploy, scale horizontally in and out, schedule, and allocate resources for.

3. Containers in a Pod share the same data and network address space, and Pods are subject to uniform resource management and allocation.

4. Pod Restart Policies

1. Always: when the container fails, the kubelet automatically restarts it.
  (restarts no matter why the container died)

2. OnFailure: when the container exits with a non-zero code, the kubelet automatically restarts it
    (restarts on unexpected failure)    ----- recommended

3. Never: the kubelet never restarts the container, regardless of its state.
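The policy is set through the pod-level restartPolicy field. A sketch (name and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo              # hypothetical name
spec:
  restartPolicy: OnFailure        # restart only on non-zero exit
  containers:
    - name: job
      image: busybox
      command: ["sh", "-c", "exit 1"]   # non-zero exit triggers a restart
```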


5. Trying Out a Pod

apiVersion: v1
kind: Pod
metadata:
  name: wordpress
spec:
  containers:
    - name: nginx
      image: nginx
    - name: php
      image: php



apiVersion : the k8s API version to deploy with
kind :  the resource type (Pod)
metadata : basic information about the deployed application
spec :  the deployment details


# The version and kind can be queried with:   kubectl explain <resource-type>
[root@\ k8s-m-01~]# kubectl explain pod
KIND:     Pod
VERSION:  v1
... ...

6. Creating a Pod

[root@\ k8s-m-01~]# kubectl apply -f test1.yaml 
pod/test1 created

# List pods
[root@\ k8s-m-01~]# kubectl get pods
NAME                          READY   STATUS              RESTARTS   AGE
deployment-5849786498-nvhg2   1/1     Running             0          4h27m
test1                         0/2     ContainerCreating   0          12s




II. Namespaces and Labels

1. Namespaces

1) Namespace concept
Namespaces in k8s isolate cluster resources; k8s resources are either namespace-scoped or cluster-scoped. (De facto industry convention: one namespace per microservice.)

In a k8s cluster:
1. Cluster-scoped resources: usable from every namespace
2. Namespace-scoped resources: usable only within their own namespace


# kubectl is the k8s command-line client; it is a standalone tool, not part of the cluster itself.
## kubectl get <resource> lists cluster resources

2) Naming rules
1. Must be lowercase
2. Must start with a letter
3. May contain only letters, digits, and hyphens (-)

3) Creating and Listing Namespaces
# Note: applications are normally deployed into their own namespace

[root@\ k8s-m-01~]# kubectl get namespaces 
NAME              STATUS   AGE
default           Active   5d20h
kube-node-lease   Active   5d20h
kube-public       Active   5d20h
kube-system       Active   5d20h
wordpress         Active   6h58m

[root@\ k8s-m-01~]# kubectl get ns
NAME              STATUS   AGE
default           Active   5d20h
kube-node-lease   Active   5d20h
kube-public       Active   5d20h
kube-system       Active   5d20h
wordpress         Active   6h58m



# Create a namespace
[root@\ k8s-m-01~]# kubectl create namespace lnmp
namespace/lnmp created

[root@\ k8s-m-01~]# kubectl get ns
NAME              STATUS   AGE
default           Active   5d20h
kube-node-lease   Active   5d20h
kube-public       Active   5d20h
kube-system       Active   5d20h
lnmp              Active   2s
wordpress         Active   6h59m
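A namespace can also be created declaratively, and a pod is placed into it through metadata.namespace, for example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: lnmp
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: lnmp                 # deploy into the lnmp namespace
spec:
  containers:
    - name: nginx
      image: nginx
```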

2. Labels (on Pods)

1) Concept

Label: just like a real-world tag. Attaching a Label to a resource object tags it; a Label Selector can then be used to query and filter resources that carry particular Labels.


# Compare a Docker image tag: registry-URL/namespace/repository:version

Labels in k8s identify (and group) containers, making it easy to manage and monitor everything that carries the same label.
A label serves as a resource identifier, generally used for resource discovery.

[root@\ k8s-m-01~]# kubectl get deployments --show-labels 

[root@\ k8s-m-01~]# kubectl get pods --show-labels

2) Using Labels

1. Adding labels in the manifest

[root@\ k8s-m-01~]# vim deloyment.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-tag
  labels:
    release: stable
spec:
  containers:
    - name: nginx
      image: nginx





2. Viewing labels

[root@\ k8s-m-01~]# kubectl apply -f test.yaml 
pod/test-tag created

[root@\ k8s-m-01~]# kubectl get pods --show-labels 
NAME                          READY   STATUS    RESTARTS   AGE     LABELS
test-tag                      1/1     Running   0          80s     release=stable






3. Adding a label

# kubectl label <resource-type> <resource-name> app=tag


[root@\ k8s-m-01~]# kubectl label pod test-tag app=tag
pod/test-tag labeled

[root@\ k8s-m-01~]# kubectl get pods --show-labels 
NAME                          READY   STATUS    RESTARTS   AGE     LABELS
test-tag                      1/1     Running   0          3m22s   app=tag,release=stable







4. Removing a label

[root@\ k8s-m-01~]# kubectl label pod test-tag app-
pod/test-tag labeled
[root@\ k8s-m-01~]# kubectl get pods --show-labels 
NAME                          READY   STATUS    RESTARTS   AGE     LABELS
test-tag                      1/1     Running   0          4m7s    release=stable

# To change a label, remove it and add it again (or pass --overwrite to kubectl label)


III. Controllers

Controllers manage Pods.

The controller types in k8s are Deployment, DaemonSet, and StatefulSet.

1. Deployment: deploys long-running, stateless applications (no startup-order requirements)
        Characteristic: Pods land anywhere in the cluster

2. DaemonSet: deploys one Pod on every node; deleting a node deletes its Pod
       Characteristic: one and only one Pod per node

3. StatefulSet: deploys stateful applications (startup order matters)
       Characteristic: ordered startup

1. The Deployment Controller

1) Overview

Deployment: deploys long-running, stateless applications (no startup-order requirements)
        Characteristic: Pods land anywhere in the cluster

You describe the desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate.
(If a container on a node is deleted, it is recreated.)

2) Test: Deleting a Container on a Node

Example 1:
Delete the container on the node where it runs; because of the Pod, the container is recreated.


# test-tag is running on node n01
[root@\ k8s-m-01~]# kubectl get pod -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
test-tag                      1/1     Running   0          27m     10.244.1.11   k8s-n-01   <none>           <none>



# On n01, check the running containers
[root@\ k8s-n-01~]# docker ps | grep test-tag
698a8f018d45   nginx                                               "/docker-entrypoint.…"   29 minutes ago   Up 29 minutes             k8s_nginx_test-tag_default_370ed585-6f6d-403b-848b-609a8ba00b23_0
c43c269f628a   registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2   "/pause"                 29 minutes ago   Up 29 minutes             k8s_POD_test-tag_default_370ed585-6f6d-403b-848b-609a8ba00b23_0





# Delete them
[root@\ k8s-n-01~]# docker ps | grep test-tag | awk '{print $1}' | xargs -I {} docker rm -f {}
698a8f018d45
c43c269f628a


# Check again: they have been recreated
[root@\ k8s-n-01~]# docker ps | grep test-tag
d4c7c29e1a02   nginx                                               "/docker-entrypoint.…"   38 seconds ago   Up 37 seconds             k8s_nginx_test-tag_default_370ed585-6f6d-403b-848b-609a8ba00b23_0
f7c41394b5c1   registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2   "/pause"                 42 seconds ago   Up 40 seconds             k8s_POD_test-tag_default_370ed585-6f6d-403b-848b-609a8ba00b23_0












Example 2:
Delete the Pod directly from the master

[root@\ k8s-m-01~]# kubectl explain deployment
KIND:     Deployment
VERSION:  apps/v1
... ...




[root@\ k8s-m-01~]# vim dep-test.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-test
spec:
  selector:                                 # associated via labels
    matchLabels:
      release: stable
  template:                                 # the controller manages pods; everything under template is the pod template
    metadata:
      name: test-tag                        # optional; a name is generated automatically
      labels:
        release: stable
    spec:
      containers:
        - name: nginx
          image: nginx



# Create and inspect
[root@\ k8s-m-01~]# kubectl apply -f dep-test.yaml 
deployment.apps/dep-test created


[root@\ k8s-m-01~]# kubectl get deployments.apps 
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
dep-test     0/1     1            0           9s

[root@\ k8s-m-01~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
dep-test-5849786498-z8jhf     1/1     Running   0          75s




# Delete the pod
[root@\ k8s-m-01~]# kubectl delete pod dep-test-5849786498-z8jhf 
pod "dep-test-5849786498-z8jhf" deleted

# Query again: a replacement has been created
[root@\ k8s-m-01~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
dep-test-5849786498-khlnz     1/1     Running   0          17s


The examples above illustrate this statement:
you describe the desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate.
In other words, it self-heals automatically.

3) Scaling

[root@\ k8s-m-01~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
dep-test-5849786498-khlnz     1/1     Running   0          55s







# Method 1: edit the live object
[root@\ k8s-m-01~]# kubectl edit deployments dep-test 
    replicas: 3                      # change to 3 (the default is 1)
           


# Watch: three pods now
[root@\ k8s-m-01~]# kubectl get pod  -w
NAME                          READY   STATUS    RESTARTS   AGE
dep-test-5849786498-khlnz     1/1     Running   0          5m53s
dep-test-5849786498-99b5x     0/1     Pending   0          0s
dep-test-5849786498-99b5x     0/1     Pending   0          0s
dep-test-5849786498-dq6fq     0/1     Pending   0          0s
dep-test-5849786498-dq6fq     0/1     Pending   0          0s
dep-test-5849786498-99b5x     0/1     ContainerCreating   0          0s
dep-test-5849786498-dq6fq     0/1     ContainerCreating   0          0s
dep-test-5849786498-99b5x     1/1     Running             0          7s
dep-test-5849786498-dq6fq     1/1     Running             0          10s







# Method 2: kubectl patch
[root@\ k8s-m-01~]# kubectl patch deployments.apps dep-test  -p '{"spec":{"replicas":5}}'
deployment.apps/dep-test patched


# Now five replicas
[root@\ k8s-m-01~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
dep-test-5849786498-4fqkq     1/1     Running   0          42s
dep-test-5849786498-99b5x     1/1     Running   0          5m21s
dep-test-5849786498-dq6fq     1/1     Running   0          5m21s
dep-test-5849786498-khlnz     1/1     Running   0          11m
dep-test-5849786498-pgnzz     1/1     Running   0          42s








# Method 3: kubectl scale
[root@\ k8s-m-01~]# kubectl scale deployment/dep-test --replicas=2   # <controller-type>/<controller-name>
deployment.apps/dep-test scaled     
   

# Check
[root@\ k8s-m-01~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
dep-test-5849786498-99b5x     1/1     Running   0          8m18s
dep-test-5849786498-khlnz     1/1     Running   0          14m



4) Updating the Image

[root@\ k8s-m-01~]# vim dep-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stable
  template:
    metadata:
      labels:
        app: stable
    spec:
      containers:
        - name: nginx
          image: nginx:1.17.10



[root@\ k8s-m-01~]# kubectl apply -f dep-v2.yaml 
deployment.apps/dep-v2 created

[root@\ k8s-m-01~]# kubectl get deployments.apps 
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
dep-v2       1/1     1            1           31s

[root@\ k8s-m-01~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
dep-v2-5b6fddfc87-9pfpc       1/1     Running   0          40s



# Verify the version
[root@\ k8s-m-01~]# kubectl exec -it dep-v2-5b6fddfc87-9pfpc -- bash
root@dep-v2-5b6fddfc87-9pfpc:/# nginx -v
nginx version: nginx/1.17.10







# Method 1: kubectl edit
[root@\ k8s-m-01~]# kubectl edit deployments dep-v2
  
      containers:
      - image: nginx:1.18.0



# Method 2: modify the manifest and re-apply

[root@\ k8s-m-01~]# vim deloyment.yaml 
... ...
   spec:
      containers:
        - name: nginx
          image: nginx:1.18.0






# Method 3: kubectl set image
[root@\ k8s-m-01~]# kubectl set image deployment/dep-v2 nginx=nginx:1.16.0
deployment.apps/dep-v2 image updated



Verify
[root@\ k8s-m-01~]# kubectl exec -it dep-v2-66fd455d7f-xgfbl -- bash
root@dep-v2-66fd455d7f-xgfbl:/# nginx -v
nginx version: nginx/1.16.0






# Method 4: kubectl patch

[root@\ k8s-m-01~]# kubectl patch deployments.apps dep-v2  -p '{"spec":{"template":{"spec":{"containers":[{"image":"nginx:1.18.0","name":"nginx"}]}}}}'
deployment.apps/dep-v2 patched


# Check and verify
[root@\ k8s-m-01~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
dep-v2-5559b96bbf-r65ph       1/1     Running   0          14s


[root@\ k8s-m-01~]# kubectl exec -it dep-v2-5559b96bbf-r65ph -- bash
root@dep-v2-5559b96bbf-r65ph:/# nginx -v
nginx version: nginx/1.18.0


5) Rollback

# View rollout history
[root@\ k8s-m-01~]# kubectl rollout history deployment dep-v2
deployment.apps/dep-v2 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>




# Method 1: roll back to the previous revision
[root@\ k8s-m-01~]# kubectl rollout undo deployment dep-v2 
deployment.apps/dep-v2 rolled back
[root@\ k8s-m-01~]# kubectl rollout history deployment dep-v2
deployment.apps/dep-v2 
REVISION  CHANGE-CAUSE
1         <none>
3         <none>
4         <none>

There were three revisions; rolling back to the previous one (revision 2) re-creates it as a new revision 4, which is why revision 2 no longer appears in the list.






# Method 2: roll back to a specific revision
[root@\ k8s-m-01~]# kubectl rollout history deployment dep-v2
deployment.apps/dep-v2 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
5         <none>


[root@\ k8s-m-01~]# kubectl rollout undo deployment dep-v2 --to-revision 3
deployment.apps/dep-v2 rolled back


2. The DaemonSet Controller

1) Overview

Runs exactly one Pod on every node in the cluster (elastic scaling is not supported).

If a node is deleted and later rejoins the cluster, the Pod is recreated on it.

2) Test

[root@\ k8s-m-01~]# kubectl explain DaemonSet
KIND:     DaemonSet
VERSION:  apps/v1
... ...



[root@\ k8s-m-01~]# vim zabbix.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: zabbix-agent
spec:
  selector:
    matchLabels:
      app: zabbix-agent
  template:
    metadata:
      labels:
        app: zabbix-agent
    spec:
      containers:
        - name: zabbix-agent
          image: zabbix/zabbix-agent:5.2.6-centos



[root@\ k8s-m-01~]# kubectl apply -f zabbix.yaml 
daemonset.apps/zabbix-agent created


[root@\ k8s-m-01~]# kubectl get daemonsets.apps 
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
zabbix-agent   2         2         2       2            2           <none>          6h18m



# Watch the status: one pod on each node
[root@\ k8s-m-01~]# kubectl get pods -w -o wide
NAME                          READY   STATUS             RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
zabbix-agent-bgsr2            1/1     Running            0          100s    10.244.1.9    k8s-n-01   <none>           <none>
zabbix-agent-sz686            1/1     Running            0          100s    10.244.0.12   k8s-m-01   <none>           <none>
zabbix-agent-xcgxj            1/1     Running            0          100s    10.244.2.10   k8s-n-02   <none>           <none>





# Delete a node
[root@\ k8s-m-01~]# kubectl  get nodes
NAME       STATUS   ROLES                  AGE     VERSION
k8s-m-01   Ready    control-plane,master   5d22h   v1.20.5
k8s-n-01   Ready    <none>                 5d21h   v1.20.5
k8s-n-02   Ready    <none>                 5d21h   v1.20.5

[root@\ k8s-m-01~]# kubectl delete nodes k8s-n-02 
node "k8s-n-02" deleted


# Now only one worker node remains
[root@\ k8s-m-01~]# kubectl  get nodes
NAME       STATUS   ROLES                  AGE     VERSION
k8s-m-01   Ready    control-plane,master   5d22h   v1.20.5
k8s-n-01   Ready    <none>                 5d21h   v1.20.5


Steps to rejoin the cluster:
1. Reset
[root@\ k8s-n-02~]# kubeadm reset 
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y


2. Remove the config
[root@\ k8s-n-02~]# rm -rf /etc/kubernetes/



3. On the master, print a new join command
[root@\ k8s-m-01~]# kubeadm token create --print-join-command
kubeadm join 192.168.15.31:6443 --token ix1klt.0sxid4ugubhd2ywa     --discovery-token-ca-cert-hash sha256:7bf88b32c590e1057664ec33e93cc239babd8a30efe7677852b924d9b121a4b4 


4. Run the join command on node 2
[root@\ k8s-n-02~]# kubeadm join 192.168.15.31:6443 --token ix1klt.0sxid4ugubhd2ywa     --discovery-token-ca-cert-hash sha256:7bf88b32c590e1057664ec33e93cc239babd8a30efe7677852b924d9b121a4b4 



5. Node 2 has rejoined
[root@\ k8s-m-01~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
k8s-m-01   Ready    control-plane,master   5d22h   v1.20.5
k8s-n-01   Ready    <none>                 5d21h   v1.20.5
k8s-n-02   Ready    <none>                 55s     v1.20.5




6. Watch: as soon as node 2 joins, the DaemonSet pod is deployed on it

[root@\ k8s-m-01~]# kubectl get pod -o wide -w
zabbix-agent-2jvjk            1/1     Running             0          3s      10.244.2.2    k8s-n-02   <none>           <none




This confirms that a DaemonSet keeps exactly one Pod on each node in the cluster and recreates it automatically.


3) Updating

# Method 1: kubectl edit
[root@\ k8s-m-01~]# kubectl  edit daemonsets.apps zabbix-agent 
      - image: zabbix/zabbix-agent:centos-5.2.5


# Method 2: kubectl patch
[root@\ k8s-m-01~]# kubectl patch daemonsets.apps zabbix-agent  -p '{"spec":{"template":{"spec":{"containers":[{"image":"zabbix/zabbix-agent:centos-5.2.4", "name":"zabbix-agent"}]}}}}'



# Method 3: kubectl set image
[root@\ k8s-m-01~]# kubectl set image daemonset/zabbix-agent zabbix-agent=zabbix/zabbix-agent:centos-5.2.3


4) Rollback

## Roll back to the previous revision
[root@k8s-m-01 ~]# kubectl rollout undo daemonset zabbix-agent 
daemonset.apps/zabbix-agent rolled back


## Roll back to a specific revision
[root@k8s-m-01 ~]# kubectl rollout undo daemonset zabbix-agent --to-revision=1
daemonset.apps/zabbix-agent rolled back

3. The StatefulSet Controller

StatefulSet: an ordered controller
Example 1:

[root@\ k8s_master~]# kubectl explain StatefulSet
KIND:     StatefulSet
VERSION:  apps/v1



[root@\ k8s-m-01~]# vim statefulset.yaml
kind: Service
apiVersion: v1
metadata:
  name: statefulset
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: test
spec:
  serviceName: statefulset
  selector:
    matchLabels:
      app: stable
  template:
    metadata:
      labels:
        app: stable
    spec:
      containers:
        - name: nginx
          image: nginx



[root@\ k8s-m-01~]# kubectl apply -f statefulset.yaml

[root@\ k8s-m-01~]# kubectl get pod
NAME                          READY   STATUS             RESTART  
test-0                        1/1     Running            0  



[root@\ k8s-m-01~]# kubectl get statefulsets.apps 
NAME   READY   AGE
test   1/1     7m16s
[root@\ k8s-m-01~]# kubectl get statefulsets test 
NAME   READY   AGE
test   1/1     7m27s



# Scale up to five replicas
[root@\ k8s-m-01~]# kubectl edit statefulsets.apps test 
  replicas: 5


# Watch the pods: they are created in order; test-1 must be Running before test-2 starts
[root@\ k8s_master~]# kubectl get pod -w
test-1                        0/1     Pending            0      
test-1                        0/1     Pending            0      
test-1                        0/1     ContainerCreating   0     
test-1                        1/1     Running             0     
test-2                        0/1     Pending             0     
test-2                        0/1     Pending             0     
test-2                        0/1     ContainerCreating   0     
test-2                        1/1     Running             0     
test-3                        0/1     Pending             0     
test-3                        0/1     Pending             0     
test-3                        0/1     ContainerCreating   0     
test-3                        1/1     Running             0     
test-4                        0/1     Pending             0     
test-4                        0/1     Pending             0     
test-4                        0/1     ContainerCreating   0     
test-4                        1/1     Running             0    



# Scale back down to 1
[root@\ k8s-m-01~]# kubectl edit statefulsets.apps test 
  replicas: 1


# Watch again: pods terminate in reverse order
[root@\ k8s_master~]# kubectl get pod -w
test-4                        1/1     Terminating         0     
test-4                        0/1     Terminating         0     
test-4                        0/1     Terminating         0     
test-4                        0/1     Terminating         0     
test-3                        1/1     Terminating         0     
test-3                        0/1     Terminating         0      
test-3                        0/1     Terminating         0     
test-3                        0/1     Terminating         0     
test-2                        1/1     Terminating         0     
test-2                        0/1     Terminating         0     
test-2                        0/1     Terminating         0     
test-2                        0/1     Terminating         0     
test-1                        1/1     Terminating         0     
test-1                        0/1     Terminating         0     
test-1                        0/1     Terminating         0     
test-1                        0/1     Terminating         0 

IV. Smart Load Balancing

service

1) Overview

Provides load balancing and automatic service discovery; one Service corresponds to one microservice.
# Pods have a lifecycle; a recreated Pod gets a new random IP
# Pod networking is reachable within the cluster, but not from outside

2) Tests

Test 1:

# After deletion the pod is recreated, but with a different IP
[root@\ k8s-m-01~]# kubectl delete pod zabbix-agent-z9dmz 
pod "zabbix-agent-z9dmz" deleted


[root@\ k8s-m-01~]# kubectl get pods -o wide -w
NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
zabbix-agent-z9dmz            0/1     Terminating   0          10m     10.244.0.14   k8s-m-01   <none>           <none>
... ... 
zabbix-agent-b5lfk            1/1     Running             0          2s      10.244.0.15   k8s-m-01   <none>           <none>





Test 2:

# dep-test-5849786498-vnct9 runs nginx
[root@\ k8s-m-01~]# kubectl get pods -o wide -w
NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
dep-test-5849786498-khlnz     1/1     Running   0          84m     10.244.1.13   k8s-n-01   <none>           <none>
dep-test-5849786498-vnct9     1/1     Running   0          30m     10.244.1.19   k8s-n-01   <none>           <none>




# Reachable from inside the cluster, but not from outside
[root@\ k8s-m-01~]# curl 10.244.1.19 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title


3) Load-Balancing Test

[root@\ k8s-m-01~]# vim service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  selector:
    release: stable
  ports:
    - name: http
      port: 80                                # port the Service exposes
      targetPort: 80                          # port on the pod
      protocol: "TCP"
    - name: https
      port: 443
      targetPort: 443
      protocol: "TCP"



[root@\ k8s-m-01~]# kubectl apply -f service.yaml 
service/service created



[root@\ k8s-m-01~]# kubectl get service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service      ClusterIP   10.96.60.35   <none>        80/TCP,443/TCP   82s


# It responds; load balancing is already in place, though with a single backend there is nothing to observe yet
[root@\ k8s-m-01~]# curl 10.96.60.35
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
... ...



[root@\ k8s-m-01~]# vim deloyment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      release: stable
  template:
    metadata:
      name: test-tag
      labels:
        release: stable
    spec:
      containers:
        - name: nginx
          image: alvinos/django:v1


[root@\ k8s-m-01~]# kubectl apply -f deloyment.yaml 
deployment.apps/deployment configured



[root@\ k8s-m-01~]# kubectl get service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP          5d22h
service      ClusterIP   10.96.60.35   <none>        80/TCP,443/TCP   8m31s

[root@\ k8s-m-01~]# curl 10.96.60.35/index
主机名:deployment-5d4fd8d67-l9nsf,版本:v1




# Scale up

[root@\ k8s-m-01~]# kubectl edit deployments.apps deployment 

  replicas: 5




# Check: requests are load-balanced across the replicas
[root@\ k8s-m-01~]# while true ;do curl 10.96.60.35/index; sleep 1; echo ; done
主机名:deployment-5d4fd8d67-q6mbx,版本:v1
主机名:deployment-5d4fd8d67-qbh7z,版本:v1
主机名:deployment-5d4fd8d67-q6mbx,版本:v1
主机名:deployment-5d4fd8d67-q6mbx,版本:v1
主机名:deployment-5d4fd8d67-qbh7z,版本:v1



PS:
[root@\ k8s-m-01~]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  selector:
    release: stable
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: "TCP"
    - name: https
      port: 443
      targetPort: 443
      protocol: "TCP"



[root@\ k8s-m-01~]# cat deloyment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      release: stable
  template:
    metadata:
      name: test-tag
      labels:
        release: stable
    spec:
      containers:
        - name: nginx
          image: alvinos/django:v1

The connectivity above works because the Service and the Pods are associated through labels.

4) Service Types

1. ClusterIP (the default): exposes an IP inside the cluster

2. NodePort: opens a port on each node, mapped one-to-one to the load balancer's port, so external clients can reach in-cluster services through a node's port


3. LoadBalancer: another way to expose a service; relies on a public cloud's elastic IP (can be tested with a cloud provider's elastic IP)


4. ExternalName

--1) ClusterIP
# ClusterIP, the default, is recommended
[root@\ k8s-m-01~]# kubectl get service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP          5d23h
service      ClusterIP   10.96.60.35   <none>        80/TCP,443/TCP   24m

--2) NodePort
Edit the config
[root@\ k8s-m-01~]# kubectl edit service service 
  type: NodePort

# Mapped node ports appear; the service can now be reached directly
[root@\ k8s-m-01~]# kubectl get service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP                      5d23h
service      NodePort    10.96.60.35   <none>        80:30765/TCP,443:30253/TCP   27m
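The type can also be declared in the manifest instead of edited interactively. A sketch based on the service above; the explicit nodePort value is illustrative and optional (when omitted, k8s picks one from the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  type: NodePort
  selector:
    release: stable
  ports:
    - name: http
      port: 80                    # Service port inside the cluster
      targetPort: 80              # pod port
      nodePort: 30080             # illustrative; must fall in 30000-32767
```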



(screenshots omitted: requests are load-balanced, and any node IP can be used for access)
--3)ExternalName
[root@\ k8s-m-01~]# vim exter.yaml
apiVersion: v1
kind: Service
metadata:
  name: external-name
spec:
  type: ExternalName
  externalName: www.baidu.com


[root@\ k8s-m-01~]# kubectl apply -f exter.yaml 
service/external-name created
[root@\ k8s-m-01~]# kubectl get service
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
external-name   ExternalName   <none>        www.baidu.com   <none>                       6s
kubernetes      ClusterIP      10.96.0.1     <none>          443/TCP                      6d14h
service         NodePort       10.96.60.35   <none>          80:30765/TCP,443:30253/TCP   15h



[root@\ k8s-m-01~]# kubectl run test -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup external-name
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      external-name
Address 1: 103.235.46.39
    
Accessing 103.235.46.39    =====> serves Baidu's page.
Change externalName in the manifest to www.jd.com and resolve again to get JD's page instead.

--4) Headless
# The clusterIP can be set explicitly before the Service is created; setting it to None makes the Service headless


[root@\ k8s-m-01~]# vim headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-service
spec:
  clusterIP: None
  selector:
    app: wordpress
  ports:
    - name: http
      port: 80
      targetPort: 80




[root@\ k8s-m-01~]# kubectl apply -f headless.yaml 
service/headless-service created

[root@\ k8s-m-01~]# kubectl get svc
NAME               TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
headless-service   ClusterIP      None          <none>        80/TCP                       11s
kubernetes         ClusterIP      10.96.0.1     <none>        443/TCP                      7d18h
service            NodePort       10.96.60.35   <none>        80:30765/TCP,443:30253/TCP   44h






[root@\ k8s-m-01~]# cat deloyment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      release: stable
  template:
    metadata:
      name: test-tag
      labels:
        release: stable
    spec:
      containers:
        - name: nginx
          image: alvinos/django:v1





[root@\ k8s-m-01~]# kubectl apply -f deloyment.yaml 
deployment.apps/deployment created

[root@\ k8s-m-01~]# kubectl get pod
deployment-c8fc95c-975rr     1/1     Running       0          3s


[root@\ k8s-m-01~]# kubectl get pod --show-labels 
deployment-c8fc95c-975rr    1/1     Running   0          40s   pod-template-hash=c8fc95c,release=stable






[root@\ k8s-m-01~]# vim service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  selector:
    release: stable
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: "TCP"


[root@\ k8s-m-01~]# kubectl delete -f service.yaml 
service "service" deleted
[root@\ k8s-m-01~]# kubectl apply -f service.yaml 
service/service created







# Inspect the service
[root@\ k8s-m-01~]# kubectl describe service service 
Name:              service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          release=stable
Type:              ClusterIP
IP Families:       <none>
IP:                10.97.244.168
IPs:               10.97.244.168
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.28:80,10.244.1.29:80,10.244.2.11:80
Session Affinity:  None
Events:            <none>

# Note the type: ClusterIP



# List pods
[root@\ k8s-m-01~]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
dep-test-5849786498-bx4r7   1/1     Running   0          22h     10.244.1.29   k8s-n-01   <none>           <none>
dep-test-5849786498-hdp7t   1/1     Running   0          22h     10.244.1.28   k8s-n-01   <none>           <none>
deployment-c8fc95c-975rr    1/1     Running   0          5m39s   10.244.2.11   k8s-n-02   <none>           <none>


# List services
[root@\ k8s-m-01~]# kubectl get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service            ClusterIP      10.97.244.168   <none>        80/TCP    11s


# Inspect the Endpoints behind the service
[root@\ k8s-m-01~]# kubectl describe endpoints service 
Name:         service
Namespace:    default
Labels:       <none>
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2021-04-02T06:57:07Z
Subsets:
  Addresses:          10.244.1.28,10.244.1.29,10.244.2.11
  NotReadyAddresses:  <none>                    # addresses currently not ready
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  80    TCP

Events:  <none>


Verifying the not-ready addresses above:
run on node 2

[root@\ k8s-n-02~]# docker ps | grep deployment-c8fc95c-975rr 
894d507b2576   0abe8858796c                                        "python3 manage.py r…"   11 minutes ago   Up 11 minutes             k8s_nginx_deployment-c8fc95c-975rr_default_91884bd4-fb53-4e23-ab54-b34c9d36e619_0
387951ec8f82   registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2   "/pause"                 11 minutes ago   Up 11 minutes             k8s_POD_deployment-c8fc95c-975rr_default_91884bd4-fb53-4e23-ab54-b34c9d36e619_0

[root@\ k8s-n-02~]# docker rm -f 894d507b2576
894d507b2576




# Back on the master: one address, the container on node 2, shows as not ready (it restarts again very quickly)
[root@\ k8s-m-01~]# kubectl describe endpoints service 
Name:         service
Namespace:    default
Labels:       <none>
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2021-04-02T07:07:35Z
Subsets:
  Addresses:          10.244.1.28,10.244.1.29
  NotReadyAddresses:  10.244.2.11
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  80    TCP

Events:  <none>

As shown, an address that is not ready is removed from the load-balancing pool; once it becomes available again it is pulled back in.












Summary:
The relationship between Service and Pod:
creating a Service creates a matching Endpoints object, and the Endpoints object connects to the Pods.
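Endpoints can even be managed by hand: a Service without a selector gets no automatic Endpoints, so a manually created Endpoints object of the same name can point the Service at arbitrary backends. A sketch (name, IP, and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db               # hypothetical name
spec:
  ports:                          # no selector: endpoints are managed manually
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db               # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.15.100        # illustrative backend address
    ports:
      - port: 3306
```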



# Deleting the service also removes its endpoints
[root@\ k8s-m-01~]# kubectl delete -f service.yaml 
service "service" deleted
[root@\ k8s-m-01~]# kubectl describe endpoints service 
Error from server (NotFound): endpoints "service" not found



--5) Ingress
A resource created in the cluster via an add-on controller; not built into k8s itself.
Ingress routes traffic based on domain names.


1. Download
[root@\ k8s-m-01~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
--2021-04-02 15:18:41--  https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18189 (18K) [text/plain]
Saving to: ‘deploy.yaml.1’

100%[=========================================================================================>] 18,189      78.8KB/s   in 0.2s   

2021-04-02 15:18:42 (78.8 KB/s) - ‘deploy.yaml’ saved [18189/18189]



# Among the image lines found, the first image cannot be pulled; switch to a different registry
[root@\ k8s-m-01~]# cat deploy.yaml | grep image
         



2. Swap the image
sed -i 's#k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a#registry.cn-hangzhou.aliyuncs.com/k8sos/ingress-controller:v0.44.0#g'  deploy.yaml




3. Deploy
[root@\ k8s-m-01~]# kubectl apply -f deploy.yaml 



4. Write the Ingress manifest


[root@\ k8s-m-01~]# kubectl get service
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE

[root@\ k8s-m-01~]# kubectl get endpoints service
NAME      ENDPOINTS                                      AGE
service   10.244.1.28:80,10.244.1.29:80,10.244.2.11:80   7s



# Use the service from above
[root@\ k8s-m-01~]# vim ingress.yaml 
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: www.test.com
      http:
        paths:
          - path: /
            backend:
              serviceName: service
              servicePort: 80
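Note: `extensions/v1beta1` works on the cluster version used here, but that Ingress API was deprecated and moved to `networking.k8s.io/v1` in Kubernetes 1.19+, where each path needs a `pathType` and the backend is a nested `service` block. A sketch of the equivalent manifest under the v1 API:

```yaml
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: www.test.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service
                port:
                  number: 80
```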





[root@\ k8s-m-01~]# kubectl apply -f  ingress.yaml 

[root@\ k8s-m-01~]# kubectl get ingress
NAME              CLASS    HOSTS          ADDRESS         PORTS   AGE
ingress-ingress   <none>   www.test.com   192.168.15.33   80      4h45m




5, Add a hosts entry and test access
# Check the NodePort of the ingress controller
[root@\ k8s-m-01~]# kubectl get svc -n ingress-nginx 
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.102.254.20   <none>        80:30654/TCP,443:31066/TCP   5m43s
ingress-nginx-controller-admission   ClusterIP   10.111.163.84   <none>        443/TCP                      5m43s


Access works. Next, create a second Deployment and Service to serve a second domain:
[root@\ k8s-m-01~]# vim dep-test.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-test
spec:
  selector:
    matchLabels:
      release: stable
  template:
    metadata:
      name: test-tag
      labels:
        release: stable
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: test-svc
spec:
  selector:
    release: stable   # must match the pod label set in the dep-test Deployment above
  ports:
    - name: http
      port: 80
      targetPort: 80


[root@\ k8s-m-01~]# kubectl apply -f dep-test.yaml 
service/test-svc created




[root@\ k8s-m-01~]# kubectl get svc
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
test-svc           ClusterIP      10.106.138.213   <none>        80/TCP    42s




# Add a rule for the second host to the Ingress
[root@\ k8s-m-01~]# vim ingress.yaml 
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: www.test.com
      http:
        paths:
          - path: /
            backend:
              serviceName: service
              servicePort: 80
    - host: www.abc.com
      http:
        paths:
          - path: /
            backend:
              serviceName: test-svc
              servicePort: 80



# Before applying the update, the Ingress only has www.test.com
[root@\ k8s-m-01~]# kubectl get ingress
NAME              CLASS    HOSTS          ADDRESS         PORTS   AGE
ingress-ingress   <none>   www.test.com   192.168.15.33   80      4h56m

# Apply the updated manifest
[root@\ k8s-m-01~]# kubectl apply -f ingress.yaml 
ingress.extensions/ingress-ingress configured

# Check again
[root@\ k8s-m-01~]# kubectl get ingress
NAME              CLASS    HOSTS                      ADDRESS         PORTS   AGE
ingress-ingress   <none>   www.test.com,www.abc.com   192.168.15.33   80      4h56m



Update the hosts file, then visit www.abc.com
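Both hostnames must resolve to a node IP, not to the ClusterIPs. A minimal sketch of the hosts line (192.168.15.33 is the node address shown in the `kubectl get ingress` output above; adjust it to your cluster):

```shell
# Build the /etc/hosts entry mapping both Ingress hosts to one node IP.
node_ip='192.168.15.33'
entry="$node_ip www.test.com www.abc.com"
echo "$entry"   # append this line to /etc/hosts on the client machine
```

With the entry in place, requests to www.test.com and www.abc.com hit the same NodePort, and the controller picks the backend Service by the Host header.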

How it works

[root@\ k8s-m-01~]# kubectl exec -n ingress-nginx -it ingress-nginx-controller-57dc855f79-j7xgj -- bash
bash-5.1$ cd /etc/nginx/
bash-5.1$ ls
fastcgi.conf            koi-utf                 modsecurity             owasp-modsecurity-crs   uwsgi_params.default
fastcgi.conf.default    koi-win                 modules                 scgi_params             win-utf
fastcgi_params          lua                     nginx.conf              scgi_params.default
fastcgi_params.default  mime.types              nginx.conf.default      template
geoip                   mime.types.default      opentracing.json        uwsgi_params


# Search for www.test.com
bash-5.1$ vi nginx.conf
                              ... ...                                                                                                
        ## end server www.abc.com                                                                                               
                                                                                                                                
        ## start server www.test.com                                                                                            
        server {                                                                                                                
                server_name www.test.com ;                                                                                         
                                                                                                                                   
                listen 80  ;                                                                                                       
                listen 443  ssl http2 ;                                                                                            
                                                                                                                                   
                set $proxy_upstream_name "-";                                                                                      
                                                                                                                                   
                ssl_certificate_by_lua_block {                                                                                     
                        certificate.call()                                                                                         
                }                                                                                                                  
                                                                                                                                   
                location / {                                                                                                       
                                                                                                                                   
                        set $namespace      "default";                                                                             
                        set $ingress_name   "ingress-ingress";                                                                     
                        set $service_name   "service";                                                                             
                        set $service_port   "80";                                                                                  
                        set $location_path  "/";                                                                                   
                        set $global_rate_limit_exceeding n;                                                 
   ... ...
                       proxy_pass http://upstream_balancer;      


The routing is expressed through these per-location variables; the actual backend is resolved dynamically, which is why proxy_pass points at the generic upstream_balancer.
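A simplified illustration (not the controller's real template) of what the controller does with each Ingress rule: render one server block per host, set the routing variables in the location, and proxy everything to the single shared upstream:

```shell
# Toy renderer: one nginx server block per Ingress host, all proxying to the
# shared upstream_balancer, mirroring the structure seen in nginx.conf above.
conf=''
for h in www.test.com www.abc.com; do
  conf="$conf
server {
    server_name $h;
    listen 80;
    location / {
        set \$service_name \"service\";
        proxy_pass http://upstream_balancer;
    }
}"
done
printf '%s\n' "$conf"
```

Because every server block proxies to the same upstream_balancer, adding or removing hosts only changes the server_name routing, not the balancing logic.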


bash-5.1$ cd lua/

bash-5.1$ ls
balancer                   global_throttle.lua        plugins.lua                util
balancer.lua               lua_ingress.lua            tcp_udp_balancer.lua       util.lua
certificate.lua            monitor.lua                tcp_udp_configuration.lua
configuration.lua          plugins                    test


# The Lua code here does the dynamic balancing: balancer.lua resolves
# upstream_balancer to live Pod endpoints for HTTP traffic (tcp_udp_balancer.lua
# does the same for TCP/UDP), so endpoint changes need no nginx reload

# List the ingress-nginx controller pods
[root@\ k8s-m-01~]# kubectl get pod -n ingress-nginx 

      Title: k8s

      Link: https://www.haomeiwen.com/subject/kckohltx.html