K8S02 - Source Package Installation (v1.11.0)


Author: K8S_Goearth | Published 2018-11-16 14:37

    5. Installing the flannel network

    flannel startup order
    1. Start etcd (the virtual subnets for flannel and docker are allocated there first)
    2. Start flanneld
    3. Start dockerd with the parameters flannel writes out (/run/flannel/subnet.env); see the example below
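    For reference, once flanneld is running, /run/flannel/subnet.env and the docker options file derived from it look roughly like this (the subnet values are made-up examples; yours will differ):
    # /run/flannel/subnet.env  (written by flanneld)
    FLANNEL_NETWORK=10.10.0.0/16
    FLANNEL_SUBNET=10.10.34.1/24
    FLANNEL_MTU=1500
    FLANNEL_IPMASQ=false
    # /run/flannel/docker  (generated by mk-docker-opts.sh, see the ExecStartPost line in the unit below)
    DOCKER_NETWORK_OPTIONS=" --bip=10.10.34.1/24 --ip-masq=true --mtu=1500"
    dockerd must be started with this file sourced (EnvironmentFile=-/run/flannel/docker) and $DOCKER_NETWORK_OPTIONS on its command line so that it uses the flannel-assigned bridge address.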

    Install the flannel network plugin
    Every node needs the network plugin installed so that all Pods join the same flat network; if you also want to reach Pod IPs from the master nodes, install it on the masters as well.
    yum install -y flannel
    Edit the flannel startup configuration files
    systemd unit file: vi /usr/lib/systemd/system/flanneld.service

    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network.target
    After=network-online.target
    Wants=network-online.target
    After=etcd.service
    Before=docker.service
    [Service]
    Type=notify
    EnvironmentFile=/etc/sysconfig/flanneld
    EnvironmentFile=-/etc/sysconfig/docker-network
    ExecStart=/usr/bin/flanneld-start \
      -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
      -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
      $FLANNEL_OPTIONS
    ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    RequiredBy=docker.service
    Edit the parameter file
    vi /etc/sysconfig/flanneld
    FLANNEL_ETCD_ENDPOINTS="https://172.16.0.5:2379,https://172.16.0.6:2379,https://172.16.0.7:2379"
    # keep the prefix unchanged
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
    FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
    Note: if the host has multiple NICs, add the NIC that carries inter-node traffic to FLANNEL_OPTIONS, for example -iface=eth2.
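    With that flag added, the last line above would become (eth2 is only a placeholder; use your actual interface name):
    FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem -iface=eth2"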
    ------------------------------------------------------------------ the two flannel configuration files above are now in place, but do not start the service yet
    Run the following command on any one of the etcd servers (here it is run on etcd1). It allocates the IP range that docker will use.
    etcdctl --endpoints=https://172.16.0.5:2379,https://172.16.0.6:2379,https://172.16.0.7:2379 \
      --ca-file=/etc/kubernetes/ssl/ca.pem \
      --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
      --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
      mk /atomic.io/network/config '{"Network":"10.10.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
    The backend type here is host-gw; to use vxlan instead, just change "host-gw" to "vxlan". According to the original author's tests, host-gw gives somewhat better network performance.
    Start the flannel service
    systemctl daemon-reload
    systemctl enable flanneld
    systemctl start flanneld
    systemctl status flanneld
    Note: on the node machines, stop docker before starting flannel; once flanneld is up, start docker again (see the sequence below).
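    A possible sequence on a node where docker is already running (a sketch; adjust to your environment):
    systemctl stop docker
    systemctl start flanneld
    systemctl status flanneld     # confirm flanneld is up and /run/flannel/subnet.env exists
    systemctl start docker        # docker now picks up the flannel-assigned subnet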
    Query with etcdctl
    etcdctl --endpoints=https://172.16.0.5:2379,https://172.16.0.6:2379,https://172.16.0.7:2379 \
      --ca-file=/etc/kubernetes/ssl/ca.pem \
      --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
      --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
      get /atomic.io/network
    { "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
    etcdctl --endpoints=https://172.16.0.5:2379,https://172.16.0.6:2379,https://172.16.0.7:2379 \
      --ca-file=/etc/kubernetes/ssl/ca.pem \
      --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
      --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
      get /atomic.io/network/subnets/172.30.14.0-24
    {"PublicIP":"192.168.223.204","BackendType":"vxlan","BackendData":{"VtepMAC":"56:27:7d:1c:08:22"}}
    Run ifconfig and check that the docker0 and flannel.1 interfaces are in the same subnet.
    If you can see the output above, flannel is installed and the kubernetes network segment has been allocated correctly.
    To change the flannel network from 10.10.0.0 to 192.168.0.0, run the following:
    etcdctl rm /atomic.io/network/config    # delete the old config key
    etcdctl mk /atomic.io/network/config '{ "Network": "192.168.0.0/16", "SubnetMin": "10.10.1.0", "SubnetMax": "192.168.0.0" }'    # create the new config key
    rm -f /run/flannel/docker    # this file must be deleted, otherwise docker keeps handing out IPs from the old subnet
    --------------------------------------------------------------------------------------------------------------------------- flannel network configuration is complete

    6. Master cluster

    A kubernetes master node runs the following components:
    kube-apiserver kube-scheduler kube-controller-manager
    kube-scheduler, kube-controller-manager and kube-apiserver are tightly related; only one kube-scheduler and one kube-controller-manager process may be active at a time, so when running multiple copies a leader must be elected (set --leader-elect in the configuration).
    kube-apiserver is stateless; make it highly available with haproxy+keepalived or nginx+keepalived.
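    The tutorial does not show the load balancer itself. As a rough sketch only (assuming haproxy runs on the host that holds the VIP 172.16.0.100, and <MASTER*_IP> are placeholders for your real master addresses), a TCP pass-through configuration might look like:
    # /etc/haproxy/haproxy.cfg (fragment, sketch only)
    frontend kube-apiserver
        bind *:6443
        mode tcp
        default_backend kube-masters
    backend kube-masters
        mode tcp
        balance roundrobin
        server master1 <MASTER1_IP>:6443 check
        server master2 <MASTER2_IP>:6443 check
        server master3 <MASTER3_IP>:6443 check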
    kube-apiserver
    Configure and start kube-apiserver. The apiserver involves two files: /usr/lib/systemd/system/kube-apiserver.service (systemd unit) and /etc/kubernetes/apiserver (parameter file).
    Create the kube-apiserver unit file
    vi /usr/lib/systemd/system/kube-apiserver.service

    [Unit]
    Description=Kubernetes API Service
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    After=etcd.service
    [Service]
    # to keep the layout uniform, each component has a single parameter file, which is referenced in ExecStart below
    EnvironmentFile=-/etc/kubernetes/apiserver
    # the apiserver parameter file defines only one variable, KUBE_API_ARGS; every flag is written inside it
    ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target

    apiserver parameter file

    vim /etc/kubernetes/apiserver

    KUBE_API_ARGS="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
    --advertise-address=172.16.0.100 \
    --bind-address=0.0.0.0 \
    --secure-port 6443 \
    --service-cluster-ip-range=169.169.0.0/16 \
    --authorization-mode=Node,RBAC \
    --kubelet-https=true \
    --token-auth-file=/etc/kubernetes/token.csv \
    --service-node-port-range=10000-60000 \
    --tls-cert-file=/etc/kubernetes/pki/kubernetes.pem \
    --tls-private-key-file=/etc/kubernetes/pki/kubernetes-key.pem \
    --client-ca-file=/etc/kubernetes/pki/ca.pem \
    --service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
    --etcd-cafile=/etc/kubernetes/pki/ca.pem \
    --etcd-certfile=/etc/kubernetes/pki/kubernetes.pem \
    --etcd-keyfile=/etc/kubernetes/pki/kubernetes-key.pem \
    --storage-backend=etcd3 \
    --etcd-servers=https://172.16.0.5:2379,https://172.16.0.6:2379,https://172.16.0.7:2379 \
    --enable-swagger-ui=true \
    --allow-privileged=true \
    --apiserver-count=3 \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/lib/audit.log \
    --event-ttl=1h \
    --logtostderr=false \
    --log-dir=/var/log/kubernetes/apiserver \
    --v=2 1>>/var/log/kubernetes/apiserver/kube-apiserver.log 2>&1"
    Note: --token-auth-file is required so that kubelet can authenticate during startup; without it kubelet cannot find the token and the node fails to register.
    If you change --service-cluster-ip-range later, you must delete the kubernetes service in the default namespace (kubectl delete service kubernetes); the system then recreates it with an IP from the new range. Otherwise the apiserver log reports: the cluster IP x.x.x.x for service kubernetes/default is not within the service CIDR x.x.x.x/16; please recreate.
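    A sketch of the recreate-and-verify steps the note above describes:
    kubectl delete service kubernetes -n default    # the apiserver recreates the service automatically
    kubectl get service kubernetes -n default       # confirm the new ClusterIP falls inside the new CIDR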
    Start kube-apiserver
    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl start kube-apiserver
    systemctl status kube-apiserver
    If the status output shows no errors, as below, the start succeeded:
    [root@master1 ~]# systemctl status kube-apiserver
    kube-apiserver.service - Kubernetes API Service
    Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
    Active: active (running) since Fri 2018-09-21 18:46:35 CST; 2h 14min ago
    Docs: https://github.com/GoogleCloudPlatform/kubernetes
    Main PID: 30835 (kube-apiserver)
    CGroup: /system.slice/kube-apiserver.service
    └─30835 /usr/local/bin/kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction --advertise-address=172.16.0.100 --bind-address=0.0.0.0 --secure-port 6443 --authorization-mode...
    Sep 21 18:46:26 master1 systemd[1]: Starting Kubernetes API Service...
    Sep 21 18:46:30 master1 kube-apiserver[30835]: [restful] 2018/09/21 18:46:30 log.go:33: [restful/swagger] listing is available at https://172.16.0.100:6443/swaggerapi
    Sep 21 18:46:30 master1 kube-apiserver[30835]: [restful] 2018/09/21 18:46:30 log.go:33: [restful/swagger] https://172.16.0.100:6443/swaggerui/ is mapped to folder /swagger-ui/
    Sep 21 18:46:32 master1 kube-apiserver[30835]: [restful] 2018/09/21 18:46:32 log.go:33: [restful/swagger] listing is available at https://172.16.0.100:6443/swaggerapi
    Sep 21 18:46:32 master1 kube-apiserver[30835]: [restful] 2018/09/21 18:46:32 log.go:33: [restful/swagger] https://172.16.0.100:6443/swaggerui/ is mapped to folder /swagger-ui/
    Sep 21 18:46:35 master1 systemd[1]: Started Kubernetes API Service.
    --------------------------------------------------------------- kube-apiserver is now running; the next two services depend on it, which is why it is installed first
    Check the cluster status
    [root@master1 kubernetes]# kubectl cluster-info
    Kubernetes master is running at https://172.16.0.100:6443
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    Check the cluster IP of the kubernetes service in the default namespace
    [root@master1 kubernetes]# kubectl get all --all-namespaces
    NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    default service/kubernetes ClusterIP 169.169.0.1 <none> 443/TCP 1d

    Check the status of the master components (apiserver, controller-manager, scheduler); at this point only controller-manager is not yet usable.
    [root@master1 kubernetes]# kubectl get componentstatuses
    NAME STATUS MESSAGE ERROR
    controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
    scheduler Healthy ok
    etcd-1 Healthy {"health": "true"}
    etcd-0 Healthy {"health": "true"}
    etcd-2 Healthy {"health": "true"}

    Configure and start kube-controller-manager

    Create the kube-controller-manager unit file
    vim /usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    [Service]
    EnvironmentFile=-/etc/kubernetes/controller-manager
    ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    Parameter file
    vi /etc/kubernetes/controller-manager
    KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 \
    --port=10252 \
    --master=http://127.0.0.1:8080 \
    --service-cluster-ip-range=169.169.0.0/16 \
    --cluster-name=kubernetes \
    --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
    --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
    --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
    --root-ca-file=/etc/kubernetes/pki/ca.pem \
    --leader-elect=true \
    --logtostderr=false \
    --v=2"
    --service-cluster-ip-range sets the CIDR range for cluster Services; this network must not be routable between the Nodes, and the value must match the one given to kube-apiserver.
    --cluster-signing-* point to the certificate and private key used to sign the certificates and keys created for TLS BootStrap.
    --root-ca-file is used to validate the kube-apiserver certificate; only when this parameter is set is the CA certificate placed into the ServiceAccount of Pod containers.
    --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same machine.
    Start kube-controller-manager
    systemctl daemon-reload
    systemctl enable kube-controller-manager
    systemctl start kube-controller-manager
    systemctl status kube-controller-manager
    After starting each component, check its state with kubectl get componentstatuses:
    kubectl get componentstatuses    (or: kubectl get cs)
    NAME STATUS MESSAGE ERROR
    scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused
    controller-manager Healthy ok
    etcd-2 Healthy {"health": "true"}
    etcd-0 Healthy {"health": "true"}
    etcd-1 Healthy {"health": "true"}
    Note: the scheduler has not been started yet, so its error is expected.
    ------------------------------------------------------------------------------------------------------------------------------------ kube-controller-manager is now installed
    The CSR requests issued by kubelet are actually signed by kube-controller-manager, and all the certificates involved come from the root CA key pair. Because kube-controller-manager runs on the same node as kube-apiserver and talks to it over the insecure port 8080, it does not need its own certificate.
    Certificates used and their purpose:
    ca.pem        CA root certificate
    ca-key.pem    private key used for the kube-apiserver TLS certificates
    --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem    root certificate of the signing CA, used to sign the certificates and keys created for TLS BootStrap
    --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem    private key of the signing CA, used to sign the certificates and keys created for TLS BootStrap
    --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem    same as above
    --root-ca-file=/etc/kubernetes/ssl/ca.pem    path to the root CA certificate, used to validate the kube-apiserver certificate; only when set is the CA certificate placed into the ServiceAccount of Pod containers
    --kubeconfig    path to a kubeconfig file that contains the master address and the necessary credentials
    Check which node currently holds the controller-manager leader lease:
    [root@master3 kubernetes]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      annotations:
        control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master3_ca1927b0-bf10-11e8-9421-5254d2b1bb60","leaseDurationSeconds":15,"acquireTime":"2018-09-23T09:22:58Z","renewTime":"2018-09-23T10:24:09Z","leaderTransitions":5}'
      creationTimestamp: 2018-09-23T08:51:38Z
      name: kube-controller-manager
      namespace: kube-system
      resourceVersion: "103906"
      selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
      uid: e2091533-bf0d-11e8-a13d-525494b06dee
    kube-scheduler
    Configure and start kube-scheduler. The scheduler involves two files: /usr/lib/systemd/system/kube-scheduler.service (systemd unit) and /etc/kubernetes/scheduler (parameter file).
    [root@master1 kubernetes]# vim /usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    [Service]
    EnvironmentFile=-/etc/kubernetes/scheduler
    ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
    Restart=on-failure
    RestartSec=5
    [Install]
    WantedBy=multi-user.target
    [root@master1 kubernetes]# cat kubeconfig

    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service

    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"

    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"

    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=true"

    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=https://172.16.0.100:6443"
    [root@master2 kubernetes]# cat scheduler
    KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
    systemctl daemon-reload
    systemctl enable kube-scheduler
    systemctl start kube-scheduler
    systemctl status kube-scheduler
    ----------------------------------------------------------------------------------------------------------------------------- kube-scheduler installation is complete
    kube-scheduler
    kube-scheduler is usually deployed on the same node as kube-apiserver and talks to it over the insecure port 8080, so its startup parameters do not need any certificate flags. If it is deployed on a separate node, specify the certificates in a kubeconfig file and authenticate through that kubeconfig; kube-proxy is handled the same way.
    Check the scheduler health:
    [root@master1 kubernetes]# curl -L http://127.0.0.1:10251/healthz
    ok
    Check which node is the current scheduler leader:
    [root@master1 pki]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
    annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master2_41aa3ed4-bf12-11e8-b2d2-52545f4f975a","leaseDurationSeconds":15,"acquireTime":"2018-09-23T09:28:23Z","renewTime":"2018-09-23T10:26:41Z","leaderTransitions":9}'
    creationTimestamp: 2018-09-22T06:35:21Z
    name: kube-scheduler
    namespace: kube-system
    resourceVersion: "104103"
    selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
    uid: ae19fe71-be31-11e8-80b1-525494b06dee


    7. Node installation

    When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does kubelet have permission to create certificate signing requests:
    cd /etc/kubernetes
    kubectl create clusterrolebinding kubelet-bootstrap \
      --clusterrole=system:node-bootstrapper \
      --user=kubelet-bootstrap
    --user=kubelet-bootstrap is the user name specified in /etc/kubernetes/token.csv; the same user is also written into /etc/kubernetes/bootstrap.kubeconfig.
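    The tutorial does not show how token.csv was produced; a common way to generate it (a sketch only; the token value is random and must match the one embedded in bootstrap.kubeconfig) is:
    export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    cat > /etc/kubernetes/token.csv <<EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF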

    Installing kubelet

    Starting a node requires the files bootstrap.kubeconfig, kubelet.conf and kubelet.service.
    [root@node1 ~]# cat /usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
    [Service]
    Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice" #此处要单独写在启动文件中,如果写在kubelet.conf文件中,会报错,在1.12.0可以直接写到kubelet.conf文件中
    EnvironmentFile=-/etc/kubernetes/kubelet.conf
    ExecStart=/usr/local/bin/kubelet KUBELET_ARGSKUBELET_MY_ARGS
    Restart=on-failure
    KillMode=process
    [Install]
    WantedBy=multi-user.target

    Generate bootstrap.kubeconfig on master1 and copy it to the node:
    [root@node1 kubernetes]# cat bootstrap.kubeconfig
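    The file body is not reproduced above; a typical way to generate bootstrap.kubeconfig on master1 (a sketch: BOOTSTRAP_TOKEN must match the token in token.csv, and the apiserver address and CA path are assumed from the earlier sections) looks like:
    export KUBE_APISERVER="https://172.16.0.100:6443"
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig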
    kubelet parameter file

    [root@node1 kubernetes]# cat kubelet.conf
    KUBELET_ARGS="--cgroup-driver=systemd \
    --hostname-override=172.16.0.8 \
    --cert-dir=/etc/kubernetes/pki \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
    # kubelet.kubeconfig is generated automatically once the CSR is approved on the server side; here you only specify the path where it will be written
    token.csv is also generated on master1 and copied to the node:
    [root@node1 kubernetes]# cat token.csv
    3d6e22f8d0ab15681f9b9317386ab0c4,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    Start kubelet
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl start kubelet
    systemctl status kubelet
    Note: once kubelet starts successfully, it generates the following files on the node:
    /etc/kubernetes/kubelet.kubeconfig    # kubeconfig created after the CSR is approved
    /etc/kubernetes/pki/kubelet.key, kubelet.crt and kubelet-client-current.pem    # three certificate files
    ------------------------------------------------------------------------------------------------------------------------------------------------ kubelet installation is complete
    Use kubectl get clusterrolebinding and kubectl get clusterrole to inspect the roles and role bindings in the system.
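    For instance, to look at the binding created earlier for TLS bootstrapping:
    kubectl get clusterrolebinding kubelet-bootstrap -o yaml    # should reference clusterrole system:node-bootstrapper and user kubelet-bootstrap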
    Installing kube-proxy
    kube-proxy on the node involves three files.
    systemd unit file
    [root@node1 kubernetes]# cat /usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    [Service]
    EnvironmentFile=-/etc/kubernetes/kube-proxy.conf
    ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    Parameter file
    [root@node1 kubernetes]# cat kube-proxy.conf
    KUBE_PROXY_ARGS="--bind-address=172.16.0.8 --hostname-override=172.16.0.8 --cluster-cidr=169.169.0.0/16 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"
    kubeconfig file; it contains the certificates needed for CA authentication plus the apiserver IP and port
    [root@node1 kubernetes]# cat kube-proxy.kubeconfig
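    As with bootstrap.kubeconfig, the file body is not shown; a sketch of generating it on the master (it assumes a kube-proxy client certificate kube-proxy.pem/kube-proxy-key.pem signed by the same CA, which the tutorial does not list explicitly):
    export KUBE_APISERVER="https://172.16.0.100:6443"
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-credentials kube-proxy \
      --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
      --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig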
    Start kube-proxy
    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl start kube-proxy
    systemctl status kube-proxy
    ------------------------------------------------------------------------------------------------------------------------------ with the configuration above, kube-proxy starts successfully
    Validate the node from the master. On the master run:
    kubectl get csr    # check whether any node CSRs are pending approval
    Manually approve a pending node CSR:
    kubectl certificate approve node-csr-rX1lBLN1lwS6T-ffV412xdUctIVUrZLNBBLZqR2sURE
    Once the CSR is approved, the node shows up:
    [root@master1 ~]# kubectl get node
    NAME STATUS ROLES AGE VERSION
    172.16.0.8 Ready <none> 9h v1.11.0
    172.16.0.9 Ready <none> 9h v1.11.0
    If you see the output above, the node has passed certification.
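    If several nodes are waiting, the pending CSRs can also be approved in one pass (a sketch; it approves every CSR returned, so review the list first):
    kubectl get csr -o name | xargs kubectl certificate approve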

    8. Setting up a local docker registry

    1. Create the local image registry
    yum -y install docker
    systemctl enable docker;systemctl start docker
    docker run -d -p 5000:5000 --restart=always --name="docker-image" --hostname="master1" -v /data/docker-image:/registry -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/registry registry
    Browse to the registry; it is still empty because no images have been pushed yet:
    http://173.248.227.20:5000/v2/
    2. Push images to the local registry; run these steps only on the docker host where the registry was set up.

    vim /etc/docker/daemon.json
    {
      "insecure-registries": ["172.16.0.2:5000"],
      "registry-mirrors": ["http://hub-mirror.c.163.com"]
    }
    Restart docker
    systemctl restart docker

    docker images    # list the local images

    docker tag docker.io/registry 172.16.0.2:5000/docker.io/registry    # retag the image with the registry prefix

    docker push 172.16.0.2:5000/docker.io/registry    # push the image to the local registry

    curl -X GET http://172.16.0.2:5000/v2/_catalog    # list the images in the private registry

    Or view it in a browser:
    http://173.248.227.20:5000/v2/_catalog
    {"repositories":["docker.io/registry","k8s.gcr.io/etcd-amd64","k8s.gcr.io/kube-apiserver-amd64","k8s.gcr.io/kube-controller-manager-amd64","k8s.gcr.io/kube-proxy-amd64","k8s.gcr.io/kube-scheduler-amd64","k8s.gcr.io/pause-amd64"]}
    3. Pull the images on the other master servers
    Before pulling, edit the docker configuration to add the local registry as an insecure registry:
    vim /etc/docker/daemon.json
    {
      "insecure-registries": ["172.16.0.2:5000"],
      "registry-mirrors": ["http://hub-mirror.c.163.com"]
    }
    systemctl restart docker
    Then run the commands below on the target server. Note: the format 172.16.0.2:5000/k8s.gcr.io/kube-apiserver-amd64:v1.11.3 must be exact, and the tag after the colon must match what docker images shows.
    docker pull 172.16.0.2:5000/k8s.gcr.io/kube-apiserver-amd64:v1.11.3
    docker pull 172.16.0.2:5000/k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
    docker pull 172.16.0.2:5000/k8s.gcr.io/kube-scheduler-amd64:v1.11.3
    docker pull 172.16.0.2:5000/k8s.gcr.io/kube-proxy-amd64:v1.11.3
    docker pull 172.16.0.2:5000/k8s.gcr.io/pause:3.1
    docker pull 172.16.0.2:5000/k8s.gcr.io/etcd-amd64:3.2.18
    docker pull 172.16.0.2:5000/k8s.gcr.io/coredns:1.1.3
    Then retag the images, stripping the 172.16.0.2:5000 prefix:
    docker tag 172.16.0.2:5000/k8s.gcr.io/kube-apiserver-amd64:v1.11.3 k8s.gcr.io/kube-apiserver-amd64:v1.11.3
    docker tag 172.16.0.2:5000/k8s.gcr.io/kube-controller-manager-amd64:v1.11.3 k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
    docker tag 172.16.0.2:5000/k8s.gcr.io/kube-scheduler-amd64:v1.11.3 k8s.gcr.io/kube-scheduler-amd64:v1.11.3
    docker tag 172.16.0.2:5000/k8s.gcr.io/kube-proxy-amd64:v1.11.3 k8s.gcr.io/kube-proxy-amd64:v1.11.3
    docker tag 172.16.0.2:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag 172.16.0.2:5000/k8s.gcr.io/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
    docker tag 172.16.0.2:5000/k8s.gcr.io/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
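    The pull-and-retag steps above can also be scripted; a sketch using the same image list and tags:
    REG=172.16.0.2:5000
    for IMG in k8s.gcr.io/kube-apiserver-amd64:v1.11.3 \
               k8s.gcr.io/kube-controller-manager-amd64:v1.11.3 \
               k8s.gcr.io/kube-scheduler-amd64:v1.11.3 \
               k8s.gcr.io/kube-proxy-amd64:v1.11.3 \
               k8s.gcr.io/pause:3.1 \
               k8s.gcr.io/etcd-amd64:3.2.18 \
               k8s.gcr.io/coredns:1.1.3; do
      docker pull ${REG}/${IMG}
      docker tag ${REG}/${IMG} ${IMG}
    done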
    vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    --pod-infra-container-image=172.16.0.2/senyint/pause-amd64:3.0
    Environment="KUBELET_INFRA_IMAGE=--pod-infra-container-image=172.16.0.2:5000/kube-apiserver-amd64:v1.11.3"
    Environment="KUBELET_INFRA_IMAGE=--pod-infra-container-image=172.16.0.2:5000/kube-controller-manager-amd64:v1.11.3"
    Environment="KUBELET_INFRA_IMAGE=--pod-infra-container-image=172.16.0.2:5000/kube-scheduler-amd64:v1.11.3"
    Environment="KUBELET_INFRA_IMAGE=--pod-infra-container-image=172.16.0.2:5000/kube-proxy-amd64:v1.11.3"
    Environment="KUBELET_INFRA_IMAGE=--pod-infra-container-image=172.16.0.2:5000/etcd-amd64:3.2.18"
    Environment="KUBELET_INFRA_IMAGE=--pod-infra-container-image=172.16.0.2:5000/coredns:1.1.3"
    Environment="KUBELET_INFRA_IMAGE=--pod-infra-container-image=172.16.0.2:5000/pause-amd64:3.0"
