Environment
- Kubernetes clusters
  - Cluster 1: k8s-admin, access credentials saved as ~/kubeconfig-k8s-admin
  - Cluster 2: k8s-cube, access credentials saved as ~/kubeconfig-k8s-cube
- Software required on the ops host: helm, kubectl, ansible
- On macOS:
brew install helm kubectl ansible
cat >> ~/.zshrc << 'EOF'
export PATH="/opt/homebrew/opt/ansible/bin:$PATH"
export PATH="/opt/homebrew/opt/helm/bin:$PATH"
export PATH="/opt/homebrew/opt/kubectl/bin:$PATH"
EOF
source ~/.zshrc
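After sourcing the profile, a quick sanity check confirms the three tools resolved on PATH; a minimal sketch (pure shell, no cluster access needed):

```shell
# Report any of the three required tools that did not end up on PATH.
missing=""
for tool in helm kubectl ansible; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "all tools found"
else
  echo "missing:$missing"
fi
```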
Approach 1: manage multiple clusters with multiple kubeconfig files
Use the --kubeconfig command-line flag to choose which cluster a command operates on:
kubectl --kubeconfig ~/kubeconfig-k8s-admin get ns
kubectl --kubeconfig ~/kubeconfig-k8s-cube get ns
helm --kubeconfig ~/kubeconfig-k8s-admin list -A
helm --kubeconfig ~/kubeconfig-k8s-cube list -A
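An alternative to repeating --kubeconfig on every call is the KUBECONFIG environment variable, which both kubectl and helm honor. A sketch assuming the same two files as above (commands that need a live cluster are commented out, so the snippet is safe to paste anywhere):

```shell
# Point kubectl/helm at one cluster for the whole shell session:
export KUBECONFIG="$HOME/kubeconfig-k8s-admin"
# kubectl get ns        # now talks to k8s-admin
# helm list -A          # helm honors KUBECONFIG as well

# A colon-separated list makes kubectl merge all listed files in memory,
# so contexts from both clusters become visible at once:
export KUBECONFIG="$HOME/kubeconfig-k8s-admin:$HOME/kubeconfig-k8s-cube"
# kubectl config get-contexts
```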
Approach 2: manage multiple clusters with a single kubeconfig file
To use the default kubeconfig, merge the per-cluster kubeconfig files into one and save it at the default location, $HOME/.kube/config. The merged file follows this format:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: cluster-a
  cluster:
    certificate-authority-data: <cluster-a-ca-data>
    server: https://cluster-a-api-lb:6443
- name: cluster-b
  cluster:
    certificate-authority-data: <cluster-b-ca-data>
    server: https://cluster-b-api-lb:6443
users:
- name: cluster-a-user
  user:
    token: <cluster-a-user-token>
- name: cluster-b-user
  user:
    token: <cluster-b-user-token>
contexts:
- name: cluster-a-context
  context:
    cluster: cluster-a
    user: cluster-a-user
- name: cluster-b-context
  context:
    cluster: cluster-b
    user: cluster-b-user
current-context: cluster-a-context
A kubeconfig defines clusters, users, and the contexts that tie them together. If you use UK8s, the required credentials can be found in the console under Overview -> Internal credential / External credential; fill them into the format above.
- When running kubectl, pass the --context flag to choose which cluster to operate on.
- When running helm, pass the --kube-context flag to choose which cluster to operate on.
- If kubectl or helm is run without the flag, the current-context defined in the file is used as the default cluster.
kubectl get pods -A #targets the cluster defined by current-context
kubectl get pods -A --context cluster-a-context #targets cluster-a
kubectl get pods -A --context cluster-b-context #targets cluster-b
helm list -A --kube-context cluster-a-context #targets cluster-a
helm list -A --kube-context cluster-b-context #targets cluster-b
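Rather than hand-writing the merged file, kubectl can generate it: every file listed in KUBECONFIG is merged, and --flatten inlines credential data so the result is self-contained. A sketch (commands that need a live kubectl are left commented):

```shell
# Merge the two per-cluster files into the default kubeconfig location:
export KUBECONFIG="$HOME/kubeconfig-k8s-admin:$HOME/kubeconfig-k8s-cube"
# mkdir -p ~/.kube
# kubectl config view --flatten > ~/.kube/config
# unset KUBECONFIG                        # fall back to ~/.kube/config

# Once merged, the default cluster can be switched once instead of
# passing --context on every command:
# kubectl config use-context cluster-b-context
# kubectl config current-context          # prints cluster-b-context
```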
Approach 3: manage in-cluster container applications with an Ansible playbook
sudo ansible-galaxy collection install kubernetes.core
sudo pip3 install kubernetes
Scenario: we need to install external-dns with helm, so that DNS records for the domains attached to Ingress resources are synced to the DNS provider automatically. The equivalent shell commands look like this:
helm repo add stable https://harbor.onwalk.net/chartrepo/knative
helm repo update
cat > admin-values.yaml << EOF
clusterDomain: admin.local
sources:
  - service
  - ingress
domainFilters:
  - onwalk.net
policy: upsert-only
provider: alibabacloud
alibabacloud:
  accessKeyId: xxxxxxxxxx
  accessKeySecret: xxxxxxxxx
  regionId: rg-xxxxxx
  zoneType: public
EOF
helm upgrade -i external-dns stable/external-dns --version '5.4.11' -f admin-values.yaml -n external-dns --create-namespace --kube-context k8s-admin
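For the second cluster the same two steps repeat, with only the kube context and the clusterDomain changed; this duplication is exactly what the Ansible-based approaches below remove. A sketch, with the file name cube-values.yaml as an assumed choice and the placeholder credentials copied from above:

```shell
# Same chart, same version; only the context and clusterDomain differ.
cat > cube-values.yaml << 'EOF'
clusterDomain: cube.local
sources:
  - service
  - ingress
domainFilters:
  - onwalk.net
policy: upsert-only
provider: alibabacloud
alibabacloud:
  accessKeyId: xxxxxxxxxx
  accessKeySecret: xxxxxxxxx
  regionId: rg-xxxxxx
  zoneType: public
EOF
# Requires access to the k8s-cube cluster, hence commented here:
# helm upgrade -i external-dns stable/external-dns --version '5.4.11' \
#   -f cube-values.yaml -n external-dns --create-namespace --kube-context k8s-cube
```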
Converted to ansible-playbook tasks, the operations above break down into four tasks:
- task1: Add stable chart repo, using the kubernetes.core.helm_repository module
- task2: Update repo, using the shell module
- task3: Create NameSpace, using the kubernetes.core.k8s module
- task4: Deploy External Dns, using the kubernetes.core.helm module
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Add stable chart repo
      kubernetes.core.helm_repository:
        name: stable
        repo_url: "https://harbor.onwalk.net/chartrepo/knative"
    - name: Update repo
      shell: "helm repo update"
    - name: Create NameSpace
      kubernetes.core.k8s:
        api_version: v1
        kind: Namespace
        context: k8s-cube
        name: external-dns
        state: present
    - name: Deploy External Dns
      kubernetes.core.helm:
        name: external-dns
        chart_ref: stable/external-dns
        chart_version: 5.4.11
        context: k8s-cube
        release_namespace: external-dns
        values:
          clusterDomain: cube.local
          sources:
            - service
            - ingress
          domainFilters:
            - onwalk.net
          policy: upsert-only
          provider: alibabacloud
          alibabacloud:
            accessKeyId: xxxxxxxxxx
            accessKeySecret: xxxxxxxxx
            regionId: rg-xxxxxx
            zoneType: public
Save the file above as deploy_external_dns.yaml and run ansible-playbook deploy_external_dns.yaml. On success you will see output similar to the following:
PLAY [localhost] *****************************************************************************************************************************************************************
TASK [Add stable chart repo] *****************************************************************************************************************************************************
ok: [localhost]
TASK [Update repo] ***************************************************************************************************************************************************************
changed: [localhost]
TASK [Create NameSpace] **********************************************************************************************************************************************************
ok: [localhost]
TASK [Deploy External Dns] *******************************************************************************************************************************************************
ok: [localhost]
PLAY RECAP ***********************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Approach 4: organize cluster-change tasks with ansible-playbook roles
With approach 3, a single deploy_external_dns.yaml is enough to manage one cluster. To manage many clusters, you would have to write one YAML file per cluster and duplicate many nearly identical tasks. In practice it would look like this:
ansible-playbook k8s_dev1_deploy_external_dns.yaml
ansible-playbook k8s_pre1_deploy_external_dns.yaml
ansible-playbook k8s_prd1_deploy_external_dns.yaml
ansible-playbook ...
Comparing these playbooks, the tasks differ only in two variables:
- context: <cluster context>
- clusterDomain: xxx.local
Once these two variables are made configurable, the original four tasks can be reused. We therefore reorganize the task files with Ansible roles, splitting them into two:
- helm-repository
- external_dns
where external_dns depends on helm-repository. The directory structure is as follows:
roles/helm-repository
└── tasks
    └── main.yml
roles/external_dns
├── meta
│   └── main.yml
└── tasks
    └── main.yml
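The skeleton can be created from the shell; a sketch using the underscore spelling external_dns that the include_role task further below refers to (ansible-galaxy init would generate a fuller skeleton, but only tasks/ and meta/ are needed here):

```shell
# Create the role directory tree and empty task files.
mkdir -p roles/helm-repository/tasks roles/external_dns/tasks roles/external_dns/meta
touch roles/helm-repository/tasks/main.yml \
      roles/external_dns/tasks/main.yml \
      roles/external_dns/meta/main.yml
find roles -name main.yml | sort
```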
roles/helm-repository/tasks/main.yml
- name: Add stable chart repo
  kubernetes.core.helm_repository:
    name: stable
    repo_url: "https://harbor.onwalk.net/chartrepo/knative"
- name: Update repo
  shell: "helm repo update"
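One optional refinement, shown as a hedged sketch: a bare shell task reports "changed" on every run, which is where the changed=1 in the play recaps comes from. Adding changed_when: false to the repo-update task makes reruns report ok instead:

```shell
# Rewrite roles/helm-repository/tasks/main.yml with the refinement applied.
mkdir -p roles/helm-repository/tasks
cat > roles/helm-repository/tasks/main.yml << 'EOF'
- name: Add stable chart repo
  kubernetes.core.helm_repository:
    name: stable
    repo_url: "https://harbor.onwalk.net/chartrepo/knative"
- name: Update repo
  ansible.builtin.shell: "helm repo update"
  changed_when: false   # refreshing the repo cache is read-only
EOF
```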
roles/external_dns/tasks/main.yml
- name: "cluster {{ clusterContext }} : Create NameSpace"
kubernetes.core.k8s:
api_version: v1
kind: Namespace
context: "{{ clusterContext }}"
name: external-dns
state: present
- name: "cluster {{ clusterContext }} : Deploy External Dns"
kubernetes.core.helm:
name: external-dns
chart_ref: stable/external-dns
chart_version: 5.4.11
context: "{{ clusterContext }}"
release_namespace: external-dns
values:
clusterDomain: "{{ clusterDomain }}"
sources:
- service
- ingress
domainFilters:
- onwalk.net
policy: upsert-only
provider: alibabacloud
alibabacloud:
accessKeyId: xxxxxxxxx
accessKeySecret: xxxxxxxxx
regionId: rg-xxxxxxxxx
zoneType: public
roles/external_dns/meta/main.yml
dependencies:
  - role: helm-repository
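As an optional addition not in the original layout, a defaults file lets the role run even when the caller forgets to pass a variable; role defaults sit at the lowest variable precedence, so the vars: passed from the playbook still win:

```shell
# Hypothetical roles/external_dns/defaults/main.yml with safe defaults.
mkdir -p roles/external_dns/defaults
cat > roles/external_dns/defaults/main.yml << 'EOF'
# Lowest-precedence defaults; overridden by vars: from the calling playbook
clusterContext: k8s-admin
clusterDomain: admin.local
EOF
```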
Create a playbook file, deploy-chart-external-dns, that invokes the external_dns role (helm-repository is pulled in automatically through the meta dependency):
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include_role:
        name: external_dns
      vars:
        clusterContext: "{{ item.clusterContext }}"
        clusterDomain: "{{ item.clusterDomain }}"
      with_items:
        - { clusterContext: 'k8s-admin', clusterDomain: 'admin.local' }
        - { clusterContext: 'k8s-cube', clusterDomain: 'cube.local' }
        - { clusterContext: 'k8s-dev', clusterDomain: 'dev.local' }
        - { clusterContext: 'k8s-pre', clusterDomain: 'pre.local' }
        - ...
In the end, you only need to maintain the reusable roles plus the per-cluster variables defined in deploy-chart-external-dns, and managing all kinds of container applications across many clusters becomes straightforward.
Run ansible-playbook deploy-chart-external-dns; the output looks like this:
PLAY [localhost] *****************************************************************************************************************************************************************
TASK [include_role : external_dns] ***********************************************************************************************************************************************
TASK [helm-repository : Add stable chart repo] ***********************************************************************************************************************************
ok: [localhost]
TASK [helm-repository : Update repo] *********************************************************************************************************************************************
changed: [localhost]
TASK [external_dns : cluster k8s-admin : Create NameSpace] ***********************************************************************************************************************
ok: [localhost]
TASK [external_dns : cluster k8s-admin : Deploy External Dns] ********************************************************************************************************************
ok: [localhost]
TASK [external_dns : cluster k8s-cube : Create NameSpace] ************************************************************************************************************************
ok: [localhost]
TASK [external_dns : cluster k8s-cube : Deploy External Dns] *********************************************************************************************************************
ok: [localhost]
PLAY RECAP ***********************************************************************************************************************************************************************
localhost : ok=6 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
References
- Ansible kubernetes.core.helm module
- Organizing multi-cluster access with kubeconfig files:
  - https://kubernetes.io/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig/
  - https://kubernetes.io/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable