Rancher 2.0 (HA) Deployment

Author: 橘子基因 | Published 2018-12-07 14:44 · 298 reads

    Rancher 2.0 HA installation

    Environment

    • CentOS 7.5
    • Docker-ce 17.03
    • rke v0.1.11
    • kubectl Client v1.12.2, kubectl Server v1.11.3
    • helm Client v2.11.0, helm Server v2.11.0
    Hostname     IP             Notes
    k8s-master   10.176.56.232  load balancer, serves the Rancher URL
    k8s-node00   10.176.56.240  rancher node: etcd, controlplane
    k8s-node01   10.176.57.151  rancher node: worker
    k8s-node02   10.176.57.152  rancher node: worker

    Note: the IPs are arbitrary; the nodes just need to be mutually reachable.

    1. Base environment setup (all nodes)

    1.1 hostname and hosts configuration

    Configure /etc/hosts on every host, adding a "host_ip $hostname" line for each node.

    [admin@k8s-master home]$ cat /etc/hostname 
    k8s-master
    [admin@k8s-master home]$ cat /etc/hosts
    10.176.57.152 k8s-node02
    10.176.57.151 k8s-node01
    10.176.56.240 k8s-node00
    10.176.56.232 k8s-master
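    The entries above can also be appended idempotently with a small loop. A minimal sketch; it writes to a scratch file (hosts.example is a hypothetical name) so you can inspect the result before applying the same loop to /etc/hosts on each node:

```shell
# Build the four host entries and append only the ones that are missing.
# target is a scratch file for illustration; on a real node use /etc/hosts.
target=./hosts.example
touch "$target"
while read -r entry; do
  grep -qF "$entry" "$target" || echo "$entry" >> "$target"
done <<'EOF'
10.176.56.232 k8s-master
10.176.56.240 k8s-node00
10.176.57.151 k8s-node01
10.176.57.152 k8s-node02
EOF
```

    Running it twice leaves the file unchanged, so it is safe to re-run on nodes that already have some of the entries.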
    

    1.2 Disable SELinux on CentOS

    sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

    1.3 Disable the firewall (optional) or open the required ports

    systemctl stop firewalld.service && systemctl disable firewalld.service
    Note: to avoid network connectivity problems, I simply disable the firewall in this example. If you would rather keep it running, open the required ports instead; see: https://www.cnrancher.com/docs/rancher/v2.x/cn/installation/references/
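    If you keep firewalld enabled, the rules amount to something like the sketch below. The port list here is an assumption covering the common Rancher/RKE ports; confirm it against the port-requirements page linked above before relying on it:

```shell
# Open the usual Rancher/RKE ports (assumed list -- verify against the
# Rancher port-requirements documentation for your exact setup).
sudo firewall-cmd --permanent --add-port=22/tcp           # ssh (used by rke)
sudo firewall-cmd --permanent --add-port=80/tcp           # rancher http
sudo firewall-cmd --permanent --add-port=443/tcp          # rancher https
sudo firewall-cmd --permanent --add-port=6443/tcp         # kube-apiserver
sudo firewall-cmd --permanent --add-port=2379-2380/tcp    # etcd
sudo firewall-cmd --permanent --add-port=10250/tcp        # kubelet
sudo firewall-cmd --permanent --add-port=8472/udp         # canal/flannel VXLAN overlay
sudo firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort services
sudo firewall-cmd --reload
```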

    1.4 Configure host time, time zone, and locale

    Set the time zone
    sudo ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

    Set the system locale (note: sudo does not survive the >> redirection, so use tee)
    echo 'LANG="en_US.UTF-8"' | sudo tee -a /etc/profile; source /etc/profile

    Install the NTP service
    yum install ntp ntpdate -y

    Edit /etc/ntp.conf: comment out the server 0-3 lines and add your own (or another reachable) NTP server address

    #server 0.centos.pool.ntp.org iburst
    #server 1.centos.pool.ntp.org iburst
    #server 2.centos.pool.ntp.org iburst
    #server 3.centos.pool.ntp.org iburst
    server 10.176.56.9 iburst
    

    Restart the service
    systemctl restart ntpd.service

    1.5 Kernel tuning

    cat >> /etc/sysctl.conf << EOF
    net.ipv4.ip_forward=1
    net.bridge.bridge-nf-call-iptables=1
    net.ipv4.neigh.default.gc_thresh1=4096
    net.ipv4.neigh.default.gc_thresh2=6144
    net.ipv4.neigh.default.gc_thresh3=8192
    EOF
    

    Apply the configuration
    sysctl -p

    2. Install and configure Docker (all nodes)

    2.1 Switch the yum repositories

    sudo cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
    cat > /etc/yum.repos.d/CentOS-Base.repo << 'EOF'
    
    [base]
    name=CentOS-$releasever - Base - mirrors.aliyun.com
    failovermethod=priority
    baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
            http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
            http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
    gpgcheck=1
    gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
    
    #released updates
    [updates]
    name=CentOS-$releasever - Updates - mirrors.aliyun.com
    failovermethod=priority
    baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
            http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
            http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
    gpgcheck=1
    gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
    
    #additional packages that may be useful
    [extras]
    name=CentOS-$releasever - Extras - mirrors.aliyun.com
    failovermethod=priority
    baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
            http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
            http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
    gpgcheck=1
    gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
    
    #additional packages that extend functionality of existing packages
    [centosplus]
    name=CentOS-$releasever - Plus - mirrors.aliyun.com
    failovermethod=priority
    baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
            http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
            http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
    gpgcheck=1
    enabled=0
    gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
    
    #contrib - packages by Centos Users
    [contrib]
    name=CentOS-$releasever - Contrib - mirrors.aliyun.com
    failovermethod=priority
    baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
            http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
            http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
    gpgcheck=1
    enabled=0
    gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
    
    EOF
    

    Rebuild the yum cache
    sudo yum makecache

    2.2 Install docker-ce

    yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm  -y
    yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm  -y
    

    Add the current user to the docker group (the docker-ce packages create the group "docker"; "dockerroot" is only used by the CentOS-packaged docker)

    sudo usermod -aG docker USERNAME
    sudo systemctl restart docker
    sudo chmod a+rw /var/run/docker.sock
    

    Note: replace USERNAME with your own user name
    Note: because of security restrictions on CentOS, the root account cannot be used when installing the K8S cluster through RKE, so CentOS users should run docker as a non-root user

    Enable docker at boot
    sudo systemctl enable docker

    2.3 Configure docker-ce

    Edit the /etc/docker/daemon.json file

    {
            "max-concurrent-downloads": 3,
            "max-concurrent-uploads": 5,
            "registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com/"],
            "storage-driver": "overlay2",
            "storage-opts": ["overlay2.override_kernel_check=true"],
            "log-driver": "json-file",
            "log-opts": {
                "max-size": "100m",
                "max-file": "3"
            }
    }
    

    If you have a private registry, add it with "insecure-registries": ["IP:PORT"]
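    A malformed daemon.json keeps dockerd from starting, so it is worth checking the syntax before restarting Docker. A small sketch that stages the config above in a scratch file (daemon.json.example is a hypothetical name) and parses it:

```shell
# Stage the daemon.json from above and confirm it is valid JSON before
# copying it to /etc/docker/daemon.json and restarting Docker.
cfg=./daemon.json.example
cat > "$cfg" <<'EOF'
{
    "max-concurrent-downloads": 3,
    "max-concurrent-uploads": 5,
    "registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com/"],
    "storage-driver": "overlay2",
    "storage-opts": ["overlay2.override_kernel_check=true"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    }
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json parses"
```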

    3. Create a layer-4 load balancer (node 56.232)

    3.1 Recommended architecture

    A layer-4 (TCP) load balancer sits in front of the Rancher cluster nodes and forwards traffic to them; this is what the nginx configuration below implements.

    3.2 Required tools

    nginx, used here as the layer-4 load balancer.

    3.3 Configure the load balancer (node 56.232)

    Add the nginx repository

    vim /etc/yum.repos.d/nginx.repo
    [nginx]
    name=nginx repo
    baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
    gpgcheck=0
    enabled=1
    

    Install nginx
    yum install -y nginx

    Configure the /etc/nginx/nginx.conf file

    worker_processes 4;
    worker_rlimit_nofile 40000;
    
    events {
        worker_connections 8192;
    }
    
    http {
        server {
            listen         80;
            return 301 https://$host$request_uri;
        }
    }
    
    stream {
        upstream rancher_servers {
            least_conn;
            server 10.176.56.240:443 max_fails=3 fail_timeout=5s;
            server 10.176.57.151:443 max_fails=3 fail_timeout=5s;
            server 10.176.57.152:443 max_fails=3 fail_timeout=5s;
        }
        server {
            listen     443;
            proxy_pass rancher_servers;
        }
    }
    

    Test the configuration and start nginx (nginx -s reload only works if nginx is already running)
    sudo nginx -t && sudo systemctl start nginx && sudo systemctl enable nginx

    4. Install kubernetes with RKE (node 56.232)

    4.1 Create the rancher-cluster.yml file

    [admin@k8s-master home]$ cat rancher-cluster.yml 
    nodes:
      - address: 10.176.56.240
        user: admin
        role: [controlplane,etcd]
      - address: 10.176.57.151
        user: admin
        role: [worker]
      - address: 10.176.57.152
        user: admin
        role: [worker]
    
    services:
      etcd:
        snapshot: true
        creation: 6h
        retention: 24h
    

    If your nodes have both public and internal addresses, it is recommended to set internal_address: so that Kubernetes uses the internal one for intra-cluster communication
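    For example, a node entry with both addresses would look like this (the public address below is a made-up placeholder):

```yaml
nodes:
  - address: 203.0.113.10            # public address, used by rke over SSH
    internal_address: 10.176.56.240  # used for intra-cluster traffic
    user: admin
    role: [controlplane,etcd]
```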

    4.2 Run the rke command

    Download rke_linux-amd64, put it in the same directory as rancher-cluster.yml, and rename it to rke

    mv rke_linux-amd64 rke
    chmod +x ./rke  
    

    Install kubernetes with rke
    ./rke up --config ./rancher-cluster.yml


    Note: if rke reports that it has no permission to create the kube_config_rancher-cluster.yml file, and sudo does not help either, create the file manually first and make it readable and writable by your normal user

    Copy the kube_config_rancher-cluster.yml file to $HOME/.kube/config
    mkdir -p $HOME/.kube && cp kube_config_rancher-cluster.yml $HOME/.kube/config

    Point the KUBECONFIG environment variable at the kube_config_rancher-cluster.yml file
    export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml

    4.3 Install kubectl

    Download kubectl_linux-amd64

    Make sure the kubectl binary is executable
    chmod +x ./kubectl

    Move the kubectl binary onto your PATH
    sudo mv ./kubectl /usr/local/bin/kubectl


    4.4 Verify the installation

    (Screenshot: checking the cluster with kubectl after rke up completes)

    Keep copies of the kube_config_rancher-cluster.yml and rancher-cluster.yml files; you will need them to maintain and upgrade this Rancher instance
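    The check behind the screenshot comes down to a couple of kubectl queries (assuming the KUBECONFIG from step 4.2 is exported):

```shell
kubectl get nodes                    # all three nodes should report Ready
kubectl get pods --all-namespaces    # system pods should be Running or Completed
```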

    5. Install and configure helm (node 56.232)

    5.1 Grant the helm client cluster access

    kubectl -n kube-system create serviceaccount tiller
    kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
    

    Helm installs the tiller service in the cluster to manage charts. Because RKE enables RBAC by default, we need kubectl to create a serviceaccount and a clusterrolebinding so that tiller has permission to deploy to the cluster

    5.2 Install the helm client

    Download the version you need from: https://github.com/helm/helm/releases

    Unpack it
    tar -zxvf helm-v2.11.0-linux-amd64.tgz

    Put helm on your PATH
    sudo mv linux-amd64/helm /usr/local/bin/helm

    5.3 Install the Helm server (tiller)

    helm init --service-account tiller --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.11.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts


    Check whether tiller came up successfully
    kubectl get pods --namespace kube-system


    6. Install rancher with helm

    6.1 Add the chart repository

    helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

    6.2 Install the certificate manager

    helm install stable/cert-manager \
      --name cert-manager \
      --namespace kube-system
    

    cert-manager is only needed for certificates generated automatically by Rancher or issued by Let's Encrypt. If you bring your own certificate, point at it with the ingress.tls.source=secret option and skip this step. See https://www.cnrancher.com/docs/rancher/v2.x/cn/installation/server-installation/ha-install/helm-rancher/rancher-install/

    6.3 Choose an SSL configuration and install Rancher server (Rancher-generated certificates)

    helm install rancher-stable/rancher \
      --name rancher \
      --namespace cattle-system \
      --set hostname=k8s-master
    

    On an internal network you can add --set proxy=" " and --set noProxy=" " to configure a proxy
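    Once the chart is installed, you can wait for the Rancher deployment to become ready; the commands below assume the cattle-system namespace and the release name used above:

```shell
kubectl -n cattle-system rollout status deploy/rancher   # blocks until the pods are up
kubectl -n cattle-system get deploy rancher
```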

    6.4 Add host aliases (/etc/hosts) for the agent pods

    If you have no internal DNS server and instead resolve the Rancher server domain through /etc/hosts aliases, then no matter how the K8S cluster is created (custom, imported, host driver, etc.), once the cluster is running the cattle-cluster-agent pod and cattle-node-agent cannot find the Rancher server through a DNS record, and communication fails. Patch host aliases into both:

    export KUBECONFIG=xxx/xxx/xx.kubeconfig.yaml # kubectl config file for the target cluster
    kubectl -n cattle-system patch  deployments cattle-cluster-agent --patch '{
        "spec": {
            "template": {
                "spec": {
                    "hostAliases": [
                        {
                            "hostnames":
                            [
                                "k8s-master"
                            ],
                                "ip": "10.176.56.232"
                        }
                    ]
                }
            }
        }
    }'
    
    export KUBECONFIG=xxx/xxx/xx.kubeconfig.yaml # kubectl config file for the target cluster
    kubectl -n cattle-system patch  daemonsets cattle-node-agent --patch '{
        "spec": {
            "template": {
                "spec": {
                    "hostAliases": [
                        {
                            "hostnames":
                            [
                                "k8s-master"
                            ],
                                "ip": "10.176.56.232"
                        }
                    ]
                }
            }
        }
    }'
    
    (Screenshot: the browser warning that the connection is not private)

    I have no DNS server, so I added the IP directly to the hosts file of the machine I use to access Rancher. The browser then warns that the connection is not secure; click Advanced → Proceed.


    Browse to https://k8s-master and set the admin password
