Setting up a Kubernetes 1.12.3 cluster

Author: 张家锋 | Published 2018-12-22 12:02, 46 reads

This installation targets CentOS 7, uses the Aliyun mirrors, and installs Kubernetes 1.12.3.

Add the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Install dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repository (note: yum-config-manager does not take a -y flag, so it is dropped here)

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker 18.06.1.ce

yum makecache fast && yum -y install docker-ce-18.06.1.ce

Create the /etc/docker directory.

mkdir -p /etc/docker
mkdir -p /etc/systemd/system/docker.service.d

Enable and restart Docker.

systemctl daemon-reload
systemctl enable docker
systemctl restart docker
systemctl status docker

Disable the firewall.

systemctl disable firewalld
systemctl stop firewalld

Disable SELinux and swap

setenforce 0
sed -i 's/^SELINUX=.*$/SELINUX=permissive/' /etc/selinux/config
swapoff -a
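Note that swapoff -a only disables swap until the next reboot. A common follow-up (a sketch, not part of the original article) is to comment out the swap entries in /etc/fstab so swap stays off permanently. The sketch below works on a sample file with hypothetical contents so the real fstab is untouched; on an actual node the sed command would target /etc/fstab.

```shell
# Build a sample fstab (hypothetical contents, for illustration only).
cat > /tmp/fstab.sample <<'EOF'
/dev/sda1 /    ext4 defaults 0 1
/dev/sda2 swap swap defaults 0 0
EOF

# Comment out every line with a swap entry so that swap is not
# re-enabled on reboot.
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample

cat /tmp/fstab.sample
```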

Install the Kubernetes components

yum install -y kubelet-1.12.3 kubeadm-1.12.3 kubectl-1.12.3 --disableexcludes=kubernetes

systemctl enable kubelet && systemctl start kubelet

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

Download the images required by Kubernetes 1.12.3

echo 'docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.12.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.12.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.12.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.12.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.24
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.3 k8s.gcr.io/coredns:1.2.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.4 k8s.gcr.io/coredns:1.2.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.12.3 k8s.gcr.io/kube-scheduler:v1.12.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.12.3 k8s.gcr.io/kube-controller-manager:v1.12.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.12.3 k8s.gcr.io/kube-apiserver:v1.12.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.12.3 k8s.gcr.io/kube-proxy:v1.12.3' > ~/down-images.sh

chmod +x ~/down-images.sh

sh ~/down-images.sh
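Every pull/tag pair in the script above follows the same pattern: pull from the Aliyun mirror, then retag under k8s.gcr.io with the -amd64 suffix dropped. The same list can be generated with a loop; this sketch only prints the docker commands (so they can be inspected before running) and assumes the same mirror prefix as above:

```shell
# Mirror prefix used in the article's download script.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers

for img in kube-apiserver-amd64:v1.12.3 kube-controller-manager-amd64:v1.12.3 \
           kube-scheduler-amd64:v1.12.3 kube-proxy-amd64:v1.12.3 \
           etcd-amd64:3.2.24 pause:3.1 coredns:1.2.4; do
  # The k8s.gcr.io name drops the -amd64 suffix but keeps the tag.
  target="k8s.gcr.io/$(echo "$img" | sed 's/-amd64//')"
  echo docker pull "$MIRROR/$img"
  echo docker tag "$MIRROR/$img" "$target"
done
```

Replacing each echo with the real command (or piping the output to sh) performs the actual pulls.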

All of the steps above must be run on every node.

Initialize the cluster

This step is run on the master node only.

sysctl net.bridge.bridge-nf-call-iptables=1

kubeadm init --kubernetes-version=1.12.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.9.246

Replace 192.168.9.246 with the IP address of your own master node.

After init succeeds, follow its output and run the steps below. If init goes wrong, run kubeadm reset and then run it again.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

On each slave node, run:

kubeadm join 192.168.9.246:6443 --token vldt2a.tuo0oe0z6n6lal5m --discovery-token-ca-cert-hash sha256:981c9a9d29a921d0519c3e800e2d16cb40760678ab0783bdf6a7e9d7405a50bf

This joins the node to the cluster. The exact join command is printed at the end of a successful kubeadm init; copy it to each slave node and run it as-is.

If a node joins later, note that the token has an expiry, so a new token (and the CA certificate hash) must be generated. To do so:

kubeadm token create

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
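The openssl pipeline above computes the sha256 hex digest of the cluster CA's public key, which is what --discovery-token-ca-cert-hash expects. As a sketch of the same pipeline (assuming openssl is available), it can be tried against a throwaway self-signed certificate instead of the real /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway certificate as a stand-in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Same pipeline as above: extract the public key, DER-encode it, hash it,
# and strip the "(stdin)= " prefix from openssl's digest output.
hash=$(openssl x509 -pubkey -in /tmp/demo.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "$hash"
```

The result is a 64-character hex string; on a real master, run the pipeline against ca.crt and prefix the value with sha256: in the join command.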

Substitute the newly generated token and hash for the corresponding parts of the original join command.

# Install the k8s-dashboard add-on

kubectl apply -f ~/kubernetes-dashboard.yaml

kubectl apply -f ~/kubernetes-dashboard-admin.rbac.yaml

# Wait for the dashboard pod to be created and started

# Check pod status

kubectl get pods -n kube-system

# Check service status

kubectl get service -n kube-system

# Open a browser at https://localhost:30001 and log in with a token; obtain the token as follows:

# kubectl -n kube-system get secret

# kubectl -n kube-system describe secret kubernetes-dashboard-admin-token-skhfh   # replace kubernetes-dashboard-admin-token-skhfh with the matching key copied from the previous command's output

# Copy the token value into the login form to sign in.

In a browser, open https://IP:30001 to reach the dashboard login page. Select "Token", paste the token obtained above, and click "Sign in".

Contents of kubernetes-dashboard.yaml:


# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard


Contents of kubernetes-dashboard-admin.rbac.yaml (note that this binds the dashboard service account to the cluster-admin role, granting it full cluster privileges):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system


Original link: https://www.haomeiwen.com/subject/lsdikqtx.html