Version matrix

| kubernetes | helm | ingress-nginx | cri-dockerd | Contour |
|---|---|---|---|---|
| 1.19 | 3.7.2 | v1.2.1 | - | 1.18.3 |
| 1.21 | 3.9.4 | v1.3.1 | - | 1.21.3 |
| 1.22 | 3.10 | v1.4.0 | - | 1.22.4 |
| 1.23 | 3.11 | v1.6.4 | - | 1.23.3 |
| 1.24 | 3.11 | v1.6.4 | v0.3.1 | 1.24.1 |
| 1.25 | 3.11 | v1.6.4 | v0.3.1 | 1.24.1 |
| 1.26 | 3.11 | v1.6.4 | v0.3.1 | 1.24.1 |
| 1.27 | 3.12 | v1.6.4 | v0.3.1 | |
Features
kubernetes 1.19: Ingress GA
kubernetes 1.21: EndpointSlices
kubernetes 1.23: IPv4/IPv6 dual-stack networking GA
kubernetes 1.25: cgroups v2 GA
Kubernetes 1.26: drops CRI v1alpha2 and supports only v1; requires containerd 1.6 or later.
ingress-nginx: Ingress networking.k8s.io/v1 requires 1.19+.
ingress-nginx: Ingress networking.k8s.io/v1beta1 is available from 1.14 (extensions/v1beta1 before that; both removed in 1.22).
discovery.k8s.io/v1: 1.21+; discovery.k8s.io/v1beta1 is removed in 1.25.
https://kubernetes.io/docs/reference/using-api/deprecation-guide/
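As a quick reference, the availability rules above can be encoded in a small helper (a sketch; the version boundaries follow the deprecation guide linked above):

```shell
# Preferred Ingress apiVersion for a given Kubernetes minor version.
ingress_api() {
  local minor=$1
  if [ "$minor" -ge 19 ]; then
    echo networking.k8s.io/v1         # GA since 1.19
  elif [ "$minor" -ge 14 ]; then
    echo networking.k8s.io/v1beta1    # removed in 1.22
  else
    echo extensions/v1beta1           # removed in 1.22
  fi
}
ingress_api 19   # -> networking.k8s.io/v1
ingress_api 16   # -> networking.k8s.io/v1beta1
```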
helm repo
helm repo add longhorn https://charts.longhorn.io
helm repo add minio https://charts.min.io/
helm repo add carina-csi-driver https://carina-io.github.io
helm repo add pingcap https://charts.pingcap.org/
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --version 1.5.1
By default the IPVS connection hash table has 4096 entries; the command below raises it to pow(2,20)=1048576:
cat > /etc/modprobe.d/ip_vs_options.conf <<-"EOF"
options ip_vs conn_tab_bits=20
EOF
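conn_tab_bits is a power-of-two exponent, so a quick check of the arithmetic (the sysfs path in the comment assumes the ip_vs module is loaded):

```shell
default_bits=12   # kernel default: 2^12 = 4096 entries
tuned_bits=20     # 2^20 = 1048576 entries
echo "default: $((1 << default_bits))"
echo "tuned:   $((1 << tuned_bits))"
# After the module is reloaded, the live value can be verified with:
# cat /sys/module/ip_vs/parameters/conn_tab_bits
```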
# Stop all pods and reload the ip_vs modules; no system reboot required
docker ps -q | xargs -r docker stop
ipvsadm -C
ipvsadm -ln
rmmod ip_vs_rr ip_vs_sh ip_vs_wrr ip_vs
ipvsadm -ln |head -1
## DRBD
1. install kernel-lt or kernel-ml
wget https://github.com/piraeusdatastore/piraeus-operator/archive/refs/tags/v2.1.0.tar.gz -O piraeus-operator-2.1.0.tar.gz
tar zxfv piraeus-operator-2.1.0.tar.gz
helm show values piraeus-operator-2.1.0/charts/piraeus/
helm install piraeus-operator piraeus-operator-2.1.0/charts/piraeus --create-namespace -n piraeus-datastore --set installCRDs=true
# or adjust the values with an upgrade
helm upgrade piraeus-operator -n piraeus-datastore piraeus-operator-2.1.0/charts/piraeus --reuse-values --set installCRDs=true
kubectl wait pod --for=condition=Ready -n piraeus-datastore -l app.kubernetes.io/component=piraeus-operator
pod/piraeus-operator-controller-manager-dd898f48c-bhbtv condition met
kubectl apply -f - <<EOF
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
  name: linstorcluster
spec: {}
EOF
helm repo add piraeus-charts https://piraeus.io/helm-charts/
helm install piraeus-ha-controller piraeus-charts/piraeus-ha-controller
# cert-manager
https://cert-manager.io/docs/installation/supported-releases/
1.8.2 ==> 1.19 - 1.24
1.12.0 ==> 1.22 - 1.27 [not released yet]
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.8.2 \
--set installCRDs=true
Ingress version compatibility
ingress-nginx: see the matrix above
apisix ingress requires k8s 1.16+
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace ingress-nginx --set controller.service.type=NodePort --disable-openapi-validation
# show the chart's default values
helm show values ~/.cache/helm/repository/ingress-nginx-4.6.0.tgz
# pull a specific chart version
Installing ingress chart 4.6.0 on k8s 1.19 fails with: failed to list *v1.EndpointSlice: the server could not find the requested resource
helm pull ingress-nginx/ingress-nginx --version=4.6.0
cd ~/.cache/helm/repository
tar zxfv ingress-nginx-4.6.0.tgz
helm install ingress-nginx ~/.cache/helm/repository/ingress-nginx --create-namespace --namespace ingress-nginx --set controller.service.type=NodePort \
--set controller.image.registry=registry.k8s.io --set controller.image.image=ingress-nginx/controller --set controller.image.tag="v1.7.0" \
--set controller.admissionWebhooks.patch.image.digest='' \
--set controller.image.digest=""
# on k8s 1.19, try ingress chart 4.1.4 (controller v1.2.1) instead
helm pull ingress-nginx/ingress-nginx --version=4.1.4
helm uninstall ingress-nginx -n ingress-nginx
helm install ingress-nginx -n ingress-nginx ~/.cache/helm/repository/ingress-nginx-4.1.4.tgz --set controller.hostNetwork=true --set controller.watchIngressWithoutClass=true --create-namespace \
--set controller.service.type=NodePort \
--set controller.admissionWebhooks.patch.image.registry=registry.cn-hangzhou.aliyuncs.com \
--set controller.admissionWebhooks.patch.image.image=google_containers/kube-webhook-certgen \
--set controller.admissionWebhooks.patch.image.tag=v20230404-helm-chart-4.6.0-11-gc76179c04 \
--set controller.admissionWebhooks.patch.image.digest='' \
--set controller.image.digest="" \
--set controller.image.registry=registry.cn-hangzhou.aliyuncs.com \
--set controller.image.image=google_containers/nginx-ingress-controller \
--set controller.image.tag=v1.2.1
registry.k8s.io/ingress-nginx/kube-webhook-certgen
v1.1.1
v20230404-helm-chart-4.6.0-11-gc76179c04
kubectl port-forward -n ingress-nginx service/ingress-nginx-controller 8890:443 --address 0.0.0.0
# registry.k8s.io/ingress-nginx/controller:v1.2.1
#
helm repo add horizon https://horizoncd.github.io/helm-charts
# https://github.com/horizoncd/helm-charts/releases/
helm pull horizon/horizon --version=2.1.17
----
yum install skopeo
docker login ccr.ccs.tencentyun.com --username=myuser
skopeo login -u myuser -p mypassword docker.io
skopeo copy --multi-arch all --insecure-policy --src-tls-verify=false --dest-tls-verify=false --dest-authfile /root/.docker/config.json docker://registry.k8s.io/conformance:v1.26.3 docker://docker.io/myuser/conformance:v1.26.3
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
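The image list above can be mirrored in a loop with skopeo; the sketch below only echoes the commands (dry run), and DEST is a placeholder account:

```shell
DEST=docker.io/myuser   # placeholder destination namespace
for img in kube-apiserver:v1.26.3 kube-controller-manager:v1.26.3 \
           kube-scheduler:v1.26.3 kube-proxy:v1.26.3 \
           pause:3.9 etcd:3.5.6-0 coredns/coredns:v1.9.3; do
  # ${img##*/} drops the coredns/ sub-path so the destination repo is flat
  echo skopeo copy --multi-arch all \
    "docker://registry.k8s.io/$img" "docker://$DEST/${img##*/}"
done
```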
------------
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# install with default values
# helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --version 6.0.7
One-shot helm install of dashboard + metrics-server (tested on 1.19, installs cleanly):
helm upgrade --install kubernetes-dashboard -n kube-system ~/.cache/helm/repository/kubernetes-dashboard-6.0.7.tgz \
--set metricsScraper.enabled=true --set metrics-server.enabled=true --set metrics-server.args="{--kubelet-insecure-tls}" \
--set metrics-server.image.repository=registry.k8s.io/metrics-server/metrics-server \
--set metrics-server.image.tag=v0.6.3 \
--set image.repository=docker.io/kubernetesui/dashboard \
--set image.tag=v2.7.0 \
--set metricsScraper.image.repository=docker.io/kubernetesui/metrics-scraper \
--set metricsScraper.image.tag=v1.0.9
The three images used:
registry.k8s.io/metrics-server/metrics-server v0.6.3 817bbe3f2e517 70.3MB
docker.io/kubernetesui/dashboard v2.7.0 07655ddf2eebe 75.8MB
docker.io/kubernetesui/metrics-scraper v1.0.9 ac9017206ce55 19.8MB
kubectl get pods -A |grep kubernetes-dashboard
export POD_NAME=$(kubectl get pods -n default -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}")
echo https://127.0.0.1:8443/
kubectl -n default port-forward $POD_NAME 8443:8443 --address=0.0.0.0
kubectl port-forward -n default service/kubernetes-dashboard 8443:443 --address 0.0.0.0
or
kubectl -n default port-forward $(kubectl get pods -n default -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}") 8443:8443 --address=0.0.0.0
# print the token
kubectl get secret | grep kubernetes-dashboard-token |awk '{print $1}' | xargs -I{} kubectl describe secret {} | grep 'token:' | awk '{print $2}'
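The pipeline above just greps the secret name out of tabular kubectl output; a dry run of the same text processing on sample output (the names are made up):

```shell
sample='kubernetes-dashboard-token-abcde   kubernetes.io/service-account-token   3   5d
default-token-xyz12                        kubernetes.io/service-account-token   3   9d'
echo "$sample" | grep kubernetes-dashboard-token | awk '{print $1}'
# -> kubernetes-dashboard-token-abcde
```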
The default user system:serviceaccount:default:kubernetes-dashboard has no permissions.
Create an admin account dashboard-admin and a read-only user dashboard-read.
dashboard-read gets the default view role: it cannot exec into pods or read Secrets.
---
# dashboard-admin
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-admin
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "dashboard-admin"
type: kubernetes.io/service-account-token
---
# dashboard-read user
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-read
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: dashboard-read
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-read
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "dashboard-read"
type: kubernetes.io/service-account-token
---
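The view role bound above cannot read Secrets; if dashboard-read ever needs that, an extra ClusterRole can be bound alongside it (a sketch only; the name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-read-secrets   # illustrative name
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
```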
# ingress dashboard
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/dashboard)$ $1/ redirect;
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /dashboard(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
----
➜ kubectl -n kube-system get secret $(kubectl -n kube-system get sa/dashboard-admin -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
➜ kubectl -n kube-system get secret $(kubectl -n kube-system get sa/dashboard-read -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
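The go-template base64decode above is equivalent to piping the secret data through base64 -d; a local round trip shows the mechanics (the token string is an example):

```shell
token=$(printf 'my-example-token' | base64)   # what the API stores in .data.token
printf '%s' "$token" | base64 -d
# -> my-example-token
```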
-------------
docker pull multiarch/qemu-user-static
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker run --rm -ti arm64v8/alpine:3.17 sh
Building the ingress-nginx image
version="v1.6.4"
helm="4.5.2"
#wget -c https://github.com/kubernetes/ingress-nginx/archive/refs/tags/controller-${version}.tar.gz
#tar zxfv controller-${version}.tar.gz
#cd ingress-nginx-controller-${version}
git clone https://github.com/kubernetes/ingress-nginx.git
cd ingress-nginx
git checkout controller-${version}
docker pull multiarch/qemu-user-static
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker run --rm arm64v8/alpine:3.17 uname -m # echo aarch64
# Key step for building the arm64 ingress image on an x86 host. If the build fails, delete the existing ingress-related images first.
DOCKER_DEFAULT_PLATFORM=linux/arm64 PLATFORMS=linux/arm64 ARCH=arm64 make clean-image build image
On success, verify with: DOCKER_DEFAULT_PLATFORM=linux/arm64 docker run --rm gcr.io/k8s-staging-ingress-nginx/controller:v1.6.4 uname -m
https://github.com/kubernetes/ingress-nginx/issues/7653
-------------
# delete all coredns pods in kube-system
kubectl get pods -n kube-system |grep coredns | awk '{print $1}' | xargs -I{} kubectl delete -n kube-system pods/{}
# tail the logs
kubectl get pods -n kube-system |grep coredns | awk '{print $1}' | head -1| xargs -I{} kubectl logs -f pods/{} -n kube-system
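The xargs -I{} idiom substitutes each pod name into the kubectl command; a dry run with echo shows exactly what would be executed:

```shell
printf 'coredns-abc\ncoredns-def\n' \
  | xargs -I{} echo kubectl delete -n kube-system pods/{}
# -> kubectl delete -n kube-system pods/coredns-abc
#    kubectl delete -n kube-system pods/coredns-def
```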
#
kubectl delete pod-name -n namespace-name
kubectl logs pod-name -n namespace-name
kubectl logs -f pod-name -n namespace-name
------------
HELM REPO
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add apisix https://charts.apiseven.com
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm upgrade --install metrics-server ~/.cache/helm/repository/metrics-server-3.10.0.tgz --set metricsScraper.enabled=true \
--set metrics-server.enabled=true --set metrics-server.args=' - --kubelet-insecure-tls'
helm get values metrics-server
USER-SUPPLIED VALUES:
metrics-server:
args: ' - --kubelet-insecure-tls'
enabled: true
metricsScraper:
enabled: true
This flag is required to skip kubelet certificate verification:
- --kubelet-insecure-tls
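An alternative to the awkward --set quoting is a values file; the fragment below is an assumed shape matching the values output above (the file name is arbitrary):

```shell
cat > metrics-server-values.yaml <<'EOF'
metrics-server:
  enabled: true
  args:
    - --kubelet-insecure-tls
EOF
# then: helm upgrade --install metrics-server ... -f metrics-server-values.yaml
cat metrics-server-values.yaml
```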
----
----
## KIND
docker pull kindest/node:v1.14.10
docker pull kindest/node:v1.19.16
docker pull kindest/node:v1.21.14
docker pull kindest/node:v1.23.14
docker pull kindest/node:v1.24.13
sudo kind create cluster --name kind-v21 --image kindest/node:v1.21.14
kubectl cluster-info --context kind-kind-v21
# load a local image into the kind cluster
kind load docker-image --name kind-v21 registry.k8s.io/metrics-server/metrics-server:v0.6.3
docker exec -it kind-v21-control-plane crictl images
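Loading several images into kind is just a loop over `kind load docker-image`; a dry-run sketch with echo (cluster name and image list taken from the examples above):

```shell
cluster=kind-v21
for img in registry.k8s.io/metrics-server/metrics-server:v0.6.3 \
           docker.io/kubernetesui/dashboard:v2.7.0; do
  echo kind load docker-image --name "$cluster" "$img"
done
```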
-----------------
sudo kind create cluster --name kind-v19 --image kindest/node:v1.19.16
Creating cluster "kind-v19" ...
✓ Ensuring node image (kindest/node:v1.19.16) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind-v19"
You can now use your cluster with:
kubectl cluster-info --context kind-kind-v19
sudo kubectl get nodes -o wide
## Switching clusters
sudo kubectl cluster-info --context kind-kind-v19
# install redis: multi-node replication with sentinel
helm install crm bitnami/redis \
--set auth.password="abc123456" \
--set sentinel.masterSet="mymaster" \
--set architecture=replication --set sentinel.enabled=true \
--set sentinel.quorum=2 \
--set replica.replicaCount=3 \
--create-namespace \
--namespace redis \
--set fullnameOverride=crm
# load a local image into the kind cluster
kind load docker-image --name kind-v19 registry.k8s.io/metrics-server/metrics-server:v0.6.3
docker exec -it kind-v19-control-plane crictl images
#
docker pull bitnami/redis:7.0.4-debian-11-r17
docker pull apache/apisix-ingress-controller:1.5.0-rc1
docker pull apache/apisix-dashboard:2.13-alpine
docker pull openresty/openresty:1.21.4.1-3-buster-fat
docker pull openkruise/kruise-manager:v1.2.0
## install apisix-ingress
helm repo add apisix https://charts.apiseven.com
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
kubectl create ns ingress-apisix
helm install apisix apisix/apisix \
--set gateway.type=NodePort \
--set ingress-controller.enabled=true \
--namespace ingress-apisix \
--set ingress-controller.config.apisix.serviceNamespace=ingress-apisix \
--create-namespace
kubectl get service --namespace ingress-apisix
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apisix-admin ClusterIP 10.96.186.202 <none> 9180/TCP 0s
apisix-etcd ClusterIP 10.96.132.32 <none> 2379/TCP,2380/TCP 0s
apisix-etcd-headless ClusterIP None <none> 2379/TCP,2380/TCP 0s
apisix-gateway NodePort 10.96.2.63 <none> 80:30739/TCP 0s
apisix-ingress-controller ClusterIP 10.96.74.5 <none> 80/TCP 0s
helm install apisix-dashboard apisix/apisix-dashboard --namespace ingress-apisix --create-namespace
# for k8s 1.19
helm install my-release bitnami/nginx-ingress-controller --version=9.2.26
docker pull bitnami/nginx-ingress-controller:1.2.0-debian-10-r0
helm install contour-operator bitnami/contour-operator
helm install zk bitnami/zookeeper
# kubectl run
kubectl run --image=nybase/ping ping -- sleep 90d
kubectl exec -ti ping -- sh
kubectl delete pods/ping
kubectl run --image=nginx:1.23 nginx
kind load docker-image docker.io/alpine:3.17 -n kind-v19
kubectl run --image=alpine:3.17 alpine -- sleep 90d
kubectl exec -ti alpine -- sh
# forward the gateway (application traffic)
kubectl port-forward -n ingress-apisix svc/apisix-gateway 7000:80 --address 0.0.0.0
# forward the dashboard UI
kubectl port-forward -n ingress-apisix svc/apisix-dashboard 7001:80 --address 0.0.0.0
# list the images on the kind node:
docker exec -it kind-v19-control-plane crictl images
nerdctl pull registry.k8s.io/ingress-nginx/controller:v1.6.4 --platform=amd64,arm64
# upgrading coredns
Kubernetes 1.11/1.12/1.14/1.16 are all compatible with CoreDNS 1.6.2; on these older k8s versions, upgrading to CoreDNS 1.6.2 is recommended.
Kubernetes 1.19 ships 1.7.0 and is compatible with CoreDNS 1.8.6 / 1.9.3 / 1.10.1 (the clusterrole must be extended, see below).
docker pull coredns/coredns:1.9.3
docker pull coredns/coredns:1.10.1
Backup:
mkdir -p coredns
cd coredns
kubectl get deployments -n kube-system coredns -oyaml > coredns-deployments-bak.yaml
kubectl get cm -n kube-system coredns -o yaml > coredns-config-bak.yaml
kubectl get deploy -n kube-system coredns -o yaml > coredns-controllers-bak.yaml
kubectl get clusterrole system:coredns -o yaml > coredns-clusterrole-bak.yaml
kubectl get clusterrolebinding system:coredns -o yaml > coredns-clusterrolebinding-bak.yaml
kubectl get deployment coredns -n kube-system -o jsonpath="{.spec.template.spec.containers[0].image}"
=>
k8s.gcr.io/coredns:1.7.0
kubectl get deployment/coredns -n kube-system -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
coredns 1/2 2 1 104m coredns docker.io/coredns/coredns:1.10.1 k8s-app=kube-dns
kubectl get deployment/coredns -n kube-system
=>
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 99m
# Either of the two commands below updates the image address; both work
kubectl set image deployment/coredns -n kube-system coredns=docker.io/coredns/coredns:1.10.1
kubectl patch deployment coredns -n kube-system --patch '{"spec": {"template": {"spec": {"containers": [{"name": "coredns","image":"docker.io/coredns/coredns:1.10.1"}]}}}}'
Syntax:
$ kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
kubectl set image deployment/<deployment> <container>=<image>
# https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md
cat > coredns-clusterrole-patch.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
EOF
kubectl apply -f coredns-clusterrole-patch.yaml
Or edit the clusterrole directly and append the following at the end:
kubectl edit clusterrole system:coredns
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  - nodes
  verbs:
  - get
  - list
  - watch
https://blog.csdn.net/qq_30632605/article/details/121555327
The configuration above must be added when upgrading.
Alternatively, install via helm:
helm repo add coredns https://coredns.github.io/helm
helm --namespace=kube-system install coredns coredns/coredns \
--set image.repository=docker.io/coredns/coredns --set image.tag=1.9.3
Reference links
https://helm.sh/docs/topics/version_skew/
https://github.com/kubernetes/ingress-nginx
https://github.com/Mirantis/cri-dockerd/releases/tag/v0.2.5 (upstream currently ships only amd64 binaries)
https://get.helm.sh/helm-v3.7.2-linux-arm64.tar.gz
https://get.helm.sh/helm-v3.7.2-linux-amd64.tar.gz
https://projectcontour.io/resources/compatibility-matrix/
https://developer.aliyun.com/article/941614 monitoring nginx-ingress-controller with SkyWalking
https://zhuanlan.zhihu.com/p/547615442 deploying Kubernetes 1.24 on docker with cri-dockerd
https://github.com/apache/apisix-ingress-controller/blob/master/docs/en/latest/deployments/kind.md
https://artifacthub.io/packages/helm/bitnami/redis
https://cloud-atlas.readthedocs.io/zh_CN/latest/docker/docker_in_docker/load_kind_image.html
https://blog.csdn.net/qq_30632605/article/details/121555327 coredns not Ready while the pod is Running
https://projectcontour.io/guides/kind/
Pitfalls with the bitnami nginx-ingress chart
# install ingress-nginx controller 1.2.1 on k8s 1.19
helm install nginx-ingress-121 bitnami/nginx-ingress-controller --version 9.2.19
# install ingress nginx 1.3.1
helm install nginx-ingress-131 bitnami/nginx-ingress-controller --version 9.3.11
# helm install mynginx ./nginx-1.2.3.tgz
Pitfall: certificate.lua:259: call(): failed to set DER private key: d2i_PrivateKey_bio() failed, context: ssl_certificate_by_lua
Roll back to: bitnami/nginx-ingress-controller:1.3.0-debian-11-r9
https://www.cnblogs.com/guoyabin/p/16698141.html
nerdctl usage
nerdctl pull --platform=arm64 nginx:alpine
nerdctl pull --platform=amd64 nginx:alpine
nerdctl save --platform=amd64 --platform=arm64 nginx:alpine -o nginx-alpine-multi.save
gzip nginx-alpine-multi.save
docker load < nginx-alpine-multi.save.gz
docker run --rm -ti nginx:alpine sh
nginx -g 'daemon off;'
Broken command: nerdctl pull --platform=amd64 --platform=arm64 nginx:1.22 => specifying both platforms in one pull fails; pull them one at a time.
nerdctl images: a non-zero SIZE means the image has been downloaded locally.