Kubernetes Installation and Deployment - Day 06

Author: 会笑的熊猫 | Published 2019-06-11 15:59

    8. Building the CentOS base image:

    8.1 Building a custom CentOS base image:

    The custom image is built on top of the official CentOS 7.2.1511 image, so that base image must be pulled first. The default tag is latest (the current newest release); to pull a specific release, specify its tag explicitly.
    [root@docker-server1 ~]# docker pull centos:7.2.1511
    7.2.1511: Pulling from library/centos
    f2d1d709a1da: Pull complete
    Digest: sha256:7c47810fd05ba380bd607a1ece3b4ad7e67f5906b1b981291987918cb22f6d4d
    Status: Downloaded newer image for centos:7.2.1511
    Alternatively, load a saved image archive directly: docker load -i centos7.5-docker-image.tar.gz

    8.1.1: Create the Dockerfile directory:

    Design the directory tree around the business: create a layered hierarchy of business directories.

    [root@docker-server1 opt]# mkdir -pv /opt/dockerfile/system/{centos,redhat,ubuntu}
    [root@docker-server1 opt]# mkdir -pv /opt/dockerfile/web/{nginx/boss/{nginx-pre,nginx-online},jdk/{jdk7,jdk6},tomcat/boss/{tomcat-pre,tomcat-online}}
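The brace expansions above can be rehearsed safely in a scratch directory before touching /opt; a minimal sketch (the /tmp path is an assumption for illustration):

```shell
# Rehearse the same layered layout under /tmp first. Brace expansion
# is a bash feature, so invoke bash explicitly to be safe under sh.
base=/tmp/dockerfile-layout
bash -c "mkdir -p ${base}/system/{centos,redhat,ubuntu}"
bash -c "mkdir -p ${base}/web/{nginx/boss/{nginx-pre,nginx-online},jdk/{jdk7,jdk6},tomcat/boss/{tomcat-pre,tomcat-online}}"
# List every directory that was created
find ${base} -type d | sort
```

Once the listing matches the intended layout, rerun the same mkdir commands against /opt/dockerfile.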
    

    8.1.2: Create the Dockerfile:

    The Dockerfile is the key input when building a Docker image: it defines the packages to install, the files to add, and every other build step. Install whatever additional packages the business requires.

    [root@docker-server1 ~]# cd /opt/dockerfile/system/centos
    [root@docker-server1 centos]# vim Dockerfile
    #Centos Base Image
    FROM centos:7.2.1511
    MAINTAINER xxxxx
    
    RUN yum clean all && yum makecache && yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm
    RUN yum install -y vim wget tree pcre pcre-devel gcc gcc-c++ zlib zlib-devel openssl openssl-devel net-tools iotop unzip zip iproute ntpdate nfs-utils tcpdump telnet traceroute
    

    8.1.3: Run the build command:

    [root@docker-server1 centos]# docker build -t 192.168.10.210/images/centos7.2.1511-base .

    It is recommended to save each image's build command as a script in the same directory, which makes later rebuilds easy, e.g.:

    [root@docker-server1 centos]# cat build-command.sh

    #!/bin/bash
    docker build -t 192.168.10.210/images/centos7.2.1511-base  .
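One way to make such scripts reusable is to parameterize the tag. A hypothetical sketch (the v1 default tag is an assumption, not from the original) that only assembles and prints the commands, so they can be reviewed before being piped to sh:

```shell
#!/bin/sh
# Hypothetical parameterized variant of build-command.sh: accept the
# image tag as an optional first argument (default v1) and print the
# resulting commands; pipe the output to `sh` to actually run them.
TAG="${1:-v1}"
IMAGE="192.168.10.210/images/centos7.2.1511-base"
echo "docker build -t ${IMAGE}:${TAG} ."
echo "docker push ${IMAGE}:${TAG}"
```

This way successive rebuilds (v1, v2, ...) do not silently overwrite one another.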
    

    8.2 Building the nginx business image

    8.2.1 Build the nginx base image

    Edit the Dockerfile:

    #Nginx Base Image
    
    FROM k8s-harbor1.example.com/library/centos-7.5-base:latest
    
    ADD nginx-1.12.2.tar.gz /usr/local/src
    RUN yum -y install gcc gcc-c++ autoconf automake make zlib zlib-devel openssl openssl-devel pcre pcre-devel
    RUN cd /usr/local/src/nginx-1.12.2 && ./configure --prefix=/usr/local/nginx && make && make install && ln -sv /usr/local/nginx/sbin/nginx /usr/bin/
    RUN useradd nginx -u 1000
    ADD filebeat-6.2.4-x86_64.rpm /usr/local/src
    RUN yum localinstall -y /usr/local/src/filebeat-6.2.4-x86_64.rpm
    RUN rm -rf /usr/local/src/filebeat-6.2.4-x86_64.rpm
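Each RUN line above produces a separate image layer. A common refinement, shown here as a hypothetical alternative rather than the original's method, chains the filebeat install and cleanup into one RUN so they share a layer:

```dockerfile
# Hypothetical alternative: chain install and cleanup into a single RUN
# to reduce the number of image layers. (Note the .rpm still exists in
# the layer created by ADD itself.)
ADD filebeat-6.2.4-x86_64.rpm /usr/local/src
RUN yum localinstall -y /usr/local/src/filebeat-6.2.4-x86_64.rpm && \
    rm -rf /usr/local/src/filebeat-6.2.4-x86_64.rpm
```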
    

    Write the build-command.sh script:

    #!/bin/bash
    
    docker build -t k8s-harbor1.example.com/library/nginx-base:v1 .
    sleep 2
    docker push k8s-harbor1.example.com/library/nginx-base:v1
    

    Run the build-command.sh script:

    [root@k8s-harbor1 nginx-web1]# ./build-command.sh 
    Sending build context to Docker daemon   12.8kB
    Step 1/9 : FROM k8s-harbor1.example.com/baseimages/nginx-base:v1
     ---> 56fdf1eb39d0
    Step 2/9 : ADD nginx.conf /usr/local/nginx/conf/nginx.conf
     ---> Using cache
     ---> 970391583d4f
    Step 3/9 : ADD webapp/* /usr/local/nginx/html/webapp/
     ---> Using cache
     ---> 210944e07001
    Step 4/9 : ADD index.html /usr/local/nginx/html/index.html
     ---> Using cache
     ---> fce226452978
    Step 5/9 : RUN mkdir /usr/local/nginx/html/webapp/{img,static}
     ---> Using cache
     ---> 88e4d1c4e0e3
    Step 6/9 : ADD filebeat.yml /etc/filebeat/filebeat.yml
     ---> Using cache
     ---> 84b6788d980a
    Step 7/9 : ADD run_nginx.sh /usr/local/nginx/sbin/run_nginx.sh
     ---> 18ca13a55224
    Step 8/9 : EXPOSE 80 443
     ---> Running in d2e23f9d020c
    Removing intermediate container d2e23f9d020c
     ---> 22e14633bd0f
    Step 9/9 : CMD ["/usr/local/nginx/sbin/run_nginx.sh"]
     ---> Running in 610923128def
    Removing intermediate container 610923128def
     ---> 86cdfeb93a11
    Successfully built 86cdfeb93a11
    Successfully tagged k8s-harbor1.example.com/library/nginx-web1:app1
    The push refers to repository [k8s-harbor1.example.com/library/nginx-web1]
    6386144e82d3: Pushed 
    b690d87ca69d: Layer already exists 
    d1ea132f4212: Layer already exists 
    f92693d12043: Layer already exists 
    ae302d3b4b1f: Layer already exists 
    6b1a44c14af0: Layer already exists 
    921b64e67401: Layer already exists 
    1a38cd873ab2: Layer already exists 
    03fe738cbac6: Layer already exists 
    bacae692bcdc: Layer already exists 
    0466167696d7: Layer already exists 
    c2c7b781c557: Layer already exists 
    4d7c2f02fa21: Layer already exists 
    a7db5a01a52d: Layer already exists 
    e140baabf03f: Layer already exists 
    bcc97fbfc9e1: Layer already exists 
    app1: digest: sha256:d8fadee2f5901d1cedb07ea4e425209dc3380c1d2b323d59b6b37894f991758d size: 3669
    [root@k8s-harbor1 nginx-web1]# docker images
    REPOSITORY                                                           TAG                 IMAGE ID            CREATED             SIZE
    k8s-harbor1.example.com/library/nginx-web1                           app1                86cdfeb93a11        6 seconds ago       1.02GB
    k8s-harbor1.example.com/baseimages/nginx-base                        v1                  56fdf1eb39d0        3 hours ago         1.02GB
    k8s-harbor1.example.com/library/nginx-base                           v1                  56fdf1eb39d0        3 hours ago         1.02GB
    centos-7.5-base                                                      latest              9a34a3cc5984        3 hours ago         773MB
    k8s-harbor1.example.com/library/centos-7.5-base                      latest              9a34a3cc5984        3 hours ago         773MB
    centos                                                               latest              49f7960eb7e4        12 months ago       200MB
    centos                                                               7.2.1511            0a2bad7da9b5        19 months ago       195MB
    vmware/harbor-log                                                    v1.2.2              36ef78ae27df        19 months ago       200MB
    vmware/harbor-jobservice                                             v1.2.2              e2af366cba44        19 months ago       164MB
    vmware/harbor-ui                                                     v1.2.2              39efb472c253        19 months ago       178MB
    vmware/harbor-adminserver                                            v1.2.2              c75963ec543f        19 months ago       142MB
    vmware/harbor-db                                                     v1.2.2              ee7b9fa37c5d        19 months ago       329MB
    vmware/nginx-photon                                                  1.11.13             6cc5c831fc7f        19 months ago       144MB
    vmware/registry                                                      2.6.2-photon        5d9100e4350e        21 months ago       173MB
    vmware/postgresql                                                    9.6.4-photon        c562762cbd12        21 months ago       225MB
    vmware/clair                                                         v2.0.1-photon       f04966b4af6c        23 months ago       297MB
    vmware/harbor-notary-db                                              mariadb-10.1.10     64ed814665c6        2 years ago         324MB
    vmware/notary-photon                                                 signer-0.5.0        b1eda7d10640        2 years ago         156MB
    vmware/notary-photon                                                 server-0.5.0        6e2646682e3c        2 years ago         157MB
    photon                                                               1.0                 e6e4e4a2ba1b        2 years ago         128MB
    mirrorgooglecontainers/pause-amd64                                   3.0                 99e59f495ffa        3 years ago         747kB
    k8s-harbor1.example.com/library/mirrorgooglecontainers/pause-amd64   3.0                 99e59f495ffa        3 years ago         747kB
    

    8.3 Building the tomcat business image

    8.3.1 Build the JDK base image

    Edit the Dockerfile:

    #JDK Base Image
    
    FROM k8s-harbor1.example.com/library/centos-7.5-base:latest
    
    ADD jdk-8u181-linux-x64.tar.gz /usr/local/src
    RUN ln -sv /usr/local/src/jdk1.8.0_181 /usr/local/jdk
    ADD profile /etc/profile
    
    ENV JAVA_HOME /usr/local/jdk
    ENV JRE_HOME $JAVA_HOME/jre
    ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/bin/
    ENV PATH $PATH:$JAVA_HOME/bin
    
    #date
    RUN rm -rf /etc/localtime && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone
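Each ENV line may reference variables set by earlier ENV lines, so the order above matters. The resulting environment is equivalent to this pure-shell sketch:

```shell
#!/bin/sh
# Shell equivalent of the Dockerfile's ENV chain: later assignments
# reuse the values set by earlier ones.
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib/:$JRE_HOME/bin/
export PATH=$PATH:$JAVA_HOME/bin
echo "CLASSPATH=$CLASSPATH"
```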
    

    Write build-command.sh:

    #!/bin/bash
    
    docker build -t k8s-harbor1.example.com/library/cnetos-jdk8-base:v1 .
    sleep 2
    docker push k8s-harbor1.example.com/library/cnetos-jdk8-base:v1
    

    Run build-command.sh:

    [root@k8s-harbor1 jdk-base]# ./build-command.sh 
    Sending build context to Docker daemon  570.1MB
    Step 1/9 : FROM k8s-harbor1.example.com/library/centos-7.5-base:latest
     ---> 9a34a3cc5984
    Step 2/9 : ADD jdk-8u181-linux-x64.tar.gz /usr/local/src
     ---> a2176fec8f21
    Step 3/9 : RUN ln -sv /usr/local/src/jdk1.8.0_181 /usr/local/jdk
     ---> Running in 77494ef7905e
    '/usr/local/jdk' -> '/usr/local/src/jdk1.8.0_181'
    Removing intermediate container 77494ef7905e
     ---> 2983233c0aba
    Step 4/9 : ADD profile /etc/profile
     ---> 97e10360f265
    Step 5/9 : ENV JAVA_HOME /usr/local/jdk
     ---> Running in 5110451d4766
    Removing intermediate container 5110451d4766
     ---> 9a8b766f81ac
    Step 6/9 : ENV JRE_HOME $JAVA_HOME/jre
     ---> Running in a0ffee7e7310
    Removing intermediate container a0ffee7e7310
     ---> fd9c918c6f7d
    Step 7/9 : ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/bin/
     ---> Running in d1869d5833e9
    Removing intermediate container d1869d5833e9
     ---> aa9da8734fb3
    Step 8/9 : ENV PATH $PATH:$JAVA_HOME/bin
     ---> Running in 864b814c3a85
    Removing intermediate container 864b814c3a85
     ---> f1faa5ad76f8
    Step 9/9 : RUN rm -rf /etc/localtime && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone
     ---> Running in 71ee6cb835f0
    Removing intermediate container 71ee6cb835f0
     ---> 1d2a936f9436
    Successfully built 1d2a936f9436
    Successfully tagged k8s-harbor1.example.com/library/cnetos-jdk8-base:v1
    The push refers to repository [k8s-harbor1.example.com/library/cnetos-jdk8-base]
    0fd807ed489d: Pushed 
    5e3efaa4d967: Pushed 
    a972f8312cc6: Pushed 
    0b5fd2bfb267: Pushed 
    a7db5a01a52d: Mounted from library/nginx-web2 
    e140baabf03f: Mounted from library/nginx-web2 
    bcc97fbfc9e1: Mounted from library/nginx-web2 
    v1: digest: sha256:a2f9d558be500db48fb8b74ac329c1d4da3314bea3e18db3301696cb9786fc09 size: 1789
    

    8.3.2 Build the tomcat base image

    Edit the Dockerfile:

    #Tomcat Base Image
    
    FROM k8s-harbor1.example.com/library/cnetos-jdk8-base:v1
    
    ADD apache-tomcat-8.5.34.tar.gz /opt/apps
    RUN ln -sv /opt/apps/apache-tomcat-8.5.34 /opt/apps/tomcat
    RUN useradd tomcat -u 2000
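The tomcat-app1 and tomcat-app2 images referenced by the manifests in 8.4 are built on top of this base. A purely hypothetical sketch of such an app Dockerfile (the WAR name, run_tomcat.sh startup script, and paths are assumptions, not from the original):

```dockerfile
#Tomcat app image (hypothetical sketch; app.war and run_tomcat.sh are assumed example files)
FROM k8s-harbor1.example.com/library/tomcat-base:v1

ADD app.war /opt/apps/tomcat/webapps/myapp.war
ADD run_tomcat.sh /opt/apps/tomcat/bin/run_tomcat.sh
RUN chown -R tomcat.tomcat /opt/apps && chmod a+x /opt/apps/tomcat/bin/run_tomcat.sh
EXPOSE 8080
CMD ["/opt/apps/tomcat/bin/run_tomcat.sh"]
```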
    

    Write build-command.sh:

    #!/bin/bash
    
    docker build -t k8s-harbor1.example.com/library/tomcat-base:v1 .
    sleep 2
    docker push k8s-harbor1.example.com/library/tomcat-base:v1
    

    Run build-command.sh:

    [root@k8s-harbor1 tomcat-base]# ./build-command.sh 
    Sending build context to Docker daemon  23.93MB
    Step 1/4 : FROM k8s-harbor1.example.com/library/cnetos-jdk8-base:v1
     ---> 1d2a936f9436
    Step 2/4 : ADD apache-tomcat-8.5.34.tar.gz /opt/apps
     ---> f6342c9044d6
    Step 3/4 : RUN ln -sv /opt/apps/apache-tomcat-8.5.34 /opt/apps/tomcat
     ---> Running in 80035f5d8898
    '/opt/apps/tomcat' -> '/opt/apps/apache-tomcat-8.5.34'
    Removing intermediate container 80035f5d8898
     ---> b02be36e3b0d
    Step 4/4 : RUN useradd tomcat -u 2000
     ---> Running in 4f96fca0841d
    Removing intermediate container 4f96fca0841d
     ---> 6a7e6cf30620
    Successfully built 6a7e6cf30620
    Successfully tagged k8s-harbor1.example.com/library/tomcat-base:v1
    The push refers to repository [k8s-harbor1.example.com/library/tomcat-base]
    7e6afc84675d: Pushed 
    9451df4b281d: Pushed 
    25c476cd1fbd: Pushed 
    0fd807ed489d: Mounted from library/cnetos-jdk8-base 
    5e3efaa4d967: Mounted from library/cnetos-jdk8-base 
    a972f8312cc6: Mounted from library/cnetos-jdk8-base 
    0b5fd2bfb267: Mounted from library/cnetos-jdk8-base 
    a7db5a01a52d: Mounted from library/cnetos-jdk8-base 
    e140baabf03f: Mounted from library/cnetos-jdk8-base 
    bcc97fbfc9e1: Mounted from library/cnetos-jdk8-base 
    

    8.4 Create the tomcat YAML files

    Edit the tomcat YAML; it is split into two manifests, one for APP1 and one for APP2:

    [root@k8s-master1 tomcat]# vim tomcat.yml 
    
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      labels:
        #pod name
        app: python-test-tomcat-app
      #parent Deployment; to delete the pods, delete this Deployment first
      name: python-test-tomcat-deployment
      namespace: python1
    spec:
    #number of replicas
      replicas: 1
      selector:
        matchLabels:
          app: python-test-tomcat-app
      template:
        metadata:
          labels:
            app: python-test-tomcat-app
          # Comment the following annotation if Dashboard must not be deployed on master
          #annotations:
          #  scheduler.alpha.kubernetes.io/tolerations: |
          #    [
          #      {
          #        "key": "dedicated",
          #        "operator": "Equal",
          #        "value": "master",
          #        "effect": "NoSchedule"
          #      }
          #    ]
        spec:
          containers:
          - name: python-test-tomcat-spec
            image: k8s-harbor1.example.com/library/tomcat-app1:v1
            #command: ["/apps/tomcat/bin/run_tomcat.sh"]
            #imagePullPolicy: IfNotPresent
            #always pull the image on every deployment
            imagePullPolicy: Always
            ports:
            - containerPort: 8080
              protocol: TCP
            #resources:
            #  requests:
            #    memory: 4Gi
                #cpu: 2
            #  limits:
            #    memory: 4Gi
                #cpu: 4
            #args:
              # Uncomment the following line to manually specify Kubernetes API server Host
              # If not specified, Dashboard will attempt to auto discover the API server and connect
              # to it. Uncomment only if the default does not work.
            #  - --apiserver-host=http://10.20.15.209:8080
            #livenessProbe:
            #  httpGet:
            #    path: /
            #    port: 8080
            #  initialDelaySeconds: 30
            #  timeoutSeconds: 30
    ---
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        app: python-test-tomcat-app
      name: python-test-tomcat-spec
      namespace: python1
    spec:
      type: NodePort
      ports:
      - port: 80
        #port the container exposes; must match containerPort
        targetPort: 8080
        nodePort: 30011
      selector:
        app: python-test-tomcat-app

    The second file, tomcat-app02.yml, follows the same structure for APP2:

    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      labels:
        #pod name
        app: python-test-tomcat2-app
      #parent Deployment; to delete the pods, delete this Deployment first
      name: python-test-tomcat2-deployment
      namespace: python1
    spec:
    #number of replicas
      replicas: 1
      selector:
        matchLabels:
          app: python-test-tomcat2-app
      template:
        metadata:
          labels:
            app: python-test-tomcat2-app
          # Comment the following annotation if Dashboard must not be deployed on master
          #annotations:
          #  scheduler.alpha.kubernetes.io/tolerations: |
          #    [
          #      {
          #        "key": "dedicated",
          #        "operator": "Equal",
          #        "value": "master",
          #        "effect": "NoSchedule"
          #      }
          #    ]
        spec:
          containers:
          - name: python-test-tomcat2-spec
            image: k8s-harbor1.example.com/library/tomcat-app2:v1
            #command: ["/apps/tomcat2/bin/run_tomcat2.sh"]
            #imagePullPolicy: IfNotPresent
            #always pull the image on every deployment
            imagePullPolicy: Always
            ports:
            - containerPort: 8080
              protocol: TCP
            #resources:
            #  requests:
            #    memory: 4Gi
                #cpu: 2
            #  limits:
            #    memory: 4Gi
                #cpu: 4
            #args:
              # Uncomment the following line to manually specify Kubernetes API server Host
              # If not specified, Dashboard will attempt to auto discover the API server and connect
              # to it. Uncomment only if the default does not work.
            #  - --apiserver-host=http://10.20.15.209:8080
            #livenessProbe:
            #  httpGet:
            #    path: /
            #    port: 8080
            #  initialDelaySeconds: 30
            #  timeoutSeconds: 30
    ---
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        app: python-test-tomcat2-app
      name: python-test-tomcat2-spec
      namespace: python1
    spec:
      type: NodePort
      ports:
      - port: 80
        #port the container exposes; must match containerPort
        targetPort: 8080
        nodePort: 30012
      selector:
        app: python-test-tomcat2-app
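The commented-out livenessProbe in the Deployments above can be enabled once the app answers on 8080. A minimal sketch of the probe stanzas (the readiness settings and timings are assumptions to adapt, placed under the container spec):

```yaml
# Hypothetical probe stanzas for the container spec
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 30
  timeoutSeconds: 30
readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
```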
    

    Start APP01 and APP02:

    [root@k8s-master1 tomcat]# kubectl apply -f tomcat.yml
    deployment.extensions/python-test-tomcat-deployment created
    service/python-test-tomcat-spec created
    [root@k8s-master1 tomcat]# kubectl apply -f tomcat-app02.yml
    deployment.extensions/python-test-tomcat2-deployment created
    service/python-test-tomcat2-spec created
    [root@k8s-master1 tomcat]# kubectl get pods --all-namespaces -o wide
    NAMESPACE     NAME                                             READY     STATUS    RESTARTS   AGE       IP           NODE
    default       busybox                                          1/1       Running   78         3d        10.2.38.4    10.170.186.216
    kube-system   heapster-587f6c9b46-hwljv                        1/1       Running   0          3d        10.2.36.13   10.51.67.209
    kube-system   kube-dns-65f747f6c8-4p7gn                        3/3       Running   241        5d        10.2.38.3    10.170.186.216
    kube-system   kubernetes-dashboard-7f4f96b579-5hxdw            1/1       Running   0          3d        10.2.36.11   10.51.67.209
    kube-system   kubernetes-dashboard-7f4f96b579-glqnh            1/1       Running   0          3d        10.2.38.5    10.170.186.216
    kube-system   monitoring-grafana-5dc657db9f-cqxjq              1/1       Running   0          3d        10.2.36.12   10.51.67.209
    kube-system   monitoring-influxdb-789d98f4cb-ktsl7             1/1       Running   0          3d        10.2.36.14   10.51.67.209
    python1       python-test-nginx-deployment-6d9cf9fcb5-m8wch    1/1       Running   0          1d        10.2.38.6    10.170.186.216
    python1       python-test-tomcat-deployment-85967b4dc9-kxdp7   1/1       Running   0          21m       10.2.36.15   10.51.67.209
    python1       python-test-tomcat2-deployment-8ff58467d-zcv67   1/1       Running   0          6s        10.2.36.16   10.51.67.209
    

    Test the network:

    [root@k8s-master1 tomcat]# kubectl get service -n python1
    NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    python-test-nginx-spec     NodePort   10.1.202.247   <none>        80:30010/TCP   1d
    python-test-tomcat-spec    NodePort   10.1.77.89     <none>        80:30011/TCP   22m
    python-test-tomcat2-spec   NodePort   10.1.175.181   <none>        80:30012/TCP   1m
    [root@k8s-master1 tomcat]# kubectl get deployment -n python1
    NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    python-test-nginx-deployment     1         1         1            1           1d
    python-test-tomcat-deployment    1         1         1            1           23m
    python-test-tomcat2-deployment   1         1         1            1           2m
    [root@k8s-master1 tomcat]# kubectl exec busybox nslookup python-test-tomcat-spec.python1.svc.cluster.local
    Server:    10.1.0.254
    Address 1: 10.1.0.254 kube-dns.kube-system.svc.cluster.local
    
    Name:      python-test-tomcat-spec.python1.svc.cluster.local
    Address 1: 10.1.77.89 python-test-tomcat-spec.python1.svc.cluster.local
    [root@k8s-master1 tomcat]# kubectl exec busybox nslookup python-test-tomcat2-spec.python1.svc.cluster.local
    Server:    10.1.0.254
    Address 1: 10.1.0.254 kube-dns.kube-system.svc.cluster.local
    
    Name:      python-test-tomcat2-spec.python1.svc.cluster.local
    Address 1: 10.1.175.181 python-test-tomcat2-spec.python1.svc.cluster.local
    [root@k8s-master1 tomcat]# kubectl exec busybox nslookup python-test-nginx-spec.python1.svc.cluster.local
    Server:    10.1.0.254
    Address 1: 10.1.0.254 kube-dns.kube-system.svc.cluster.local
    
    Name:      python-test-nginx-spec.python1.svc.cluster.local
    Address 1: 10.1.202.247 python-test-nginx-spec.python1.svc.cluster.local
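The names being resolved follow the standard in-cluster Service DNS pattern, service.namespace.svc.cluster-domain. A pure-shell sketch of how the FQDNs above are composed:

```shell
#!/bin/sh
# Compose a Service's in-cluster DNS name:
# <service>.<namespace>.svc.<cluster-domain>
svc_fqdn() {
    service=$1; namespace=$2; domain=${3:-cluster.local}
    echo "${service}.${namespace}.svc.${domain}"
}
svc_fqdn python-test-tomcat-spec python1
svc_fqdn python-test-tomcat2-spec python1
```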
    

    Modify the nginx image's configuration to add the tomcat proxy rules:

    [root@k8s-harbor1 nginx-web1]# vim nginx.conf 
    user  nginx nginx;
    #number of worker processes
    worker_processes  auto;
    daemon off;
    events {
        worker_connections  1024;
    }
    http {
        include       mime.types;
        default_type  application/octet-stream;
    
        #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
        #                  '$status $body_bytes_sent "$http_referer" '
        #                  '"$http_user_agent" "$http_x_forwarded_for"';
    
        #access_log  logs/access.log  main;
        log_format access_json '{"@timestamp":"$time_iso8601",'
            '"host":"$server_addr",'
            '"clientip":"$remote_addr",'
            '"size":$body_bytes_sent,'
            '"responsetime":$request_time,'
            '"upstreamtime":"$upstream_response_time",'
            '"upstreamhost":"$upstream_addr",'
            '"http_host":"$host",'
            '"url":"$uri",'
            '"domain":"$host",'
            '"xff":"$http_x_forwarded_for",'
            '"referer":"$http_referer",'
            '"status":"$status"}';
        access_log  logs/access.log  access_json;
        sendfile        on;
        keepalive_timeout  65;
        upstream tomcat_webserver{
            server python-test-tomcat-spec.python1.svc.cluster.local:80;
            server python-test-tomcat2-spec.python1.svc.cluster.local:80;
        }
        server {
            listen       80;
            server_name  localhost;
    
            #charset koi8-r;
    
            #access_log  logs/host.access.log  main;
    
            location / {
                root   html;
                index  index.html index.htm;
            }
            location /webapp {
                root   html;
                index  index.html index.htm;
            }
            location /myapp  {
                proxy_pass http://tomcat_webserver;
                proxy_set_header  Host     $host;
                proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header  X-Real-IP $remote_addr;
            }
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   html;
            }
        }
    }
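Note that nginx resolves the upstream hostnames once at startup; that works here because a Service's ClusterIP is stable. If per-request re-resolution were ever needed, one common pattern (hypothetical for this setup) uses the cluster DNS resolver with a variable proxy_pass:

```nginx
    # Hypothetical alternative: force runtime DNS resolution via kube-dns
    # (10.1.0.254 is the kube-dns ClusterIP from the nslookup output above;
    # adjust to your cluster).
    resolver 10.1.0.254 valid=30s;
    location /myapp {
        set $tomcat_backend python-test-tomcat-spec.python1.svc.cluster.local;
        proxy_pass http://$tomcat_backend;
    }
```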
    

    Rebuild the nginx-web1 docker image:

    [root@k8s-harbor1 nginx-web1]# bash build-command.sh 
    Sending build context to Docker daemon   12.8kB
    Step 1/9 : FROM k8s-harbor1.example.com/baseimages/nginx-base:v1
     ---> 56fdf1eb39d0
    Step 2/9 : ADD nginx.conf /usr/local/nginx/conf/nginx.conf
     ---> cd340030fcd5
    Step 3/9 : ADD webapp/* /usr/local/nginx/html/webapp/
     ---> e6971f806aca
    Step 4/9 : ADD index.html /usr/local/nginx/html/index.html
     ---> 4948612828ca
    Step 5/9 : RUN mkdir /usr/local/nginx/html/webapp/{img,static}
     ---> Running in 96c1417a1020
    Removing intermediate container 96c1417a1020
     ---> d52b530129c3
    Step 6/9 : ADD filebeat.yml /etc/filebeat/filebeat.yml
     ---> 526b54119a91
    Step 7/9 : ADD run_nginx.sh /usr/local/nginx/sbin/run_nginx.sh
     ---> 20c2f42a29ba
    Step 8/9 : EXPOSE 80 443
     ---> Running in db138e01d716
    Removing intermediate container db138e01d716
     ---> e1dcf473cdea
    Step 9/9 : CMD ["/usr/local/nginx/sbin/run_nginx.sh"]
     ---> Running in 6b7a5e889ea5
    Removing intermediate container 6b7a5e889ea5
     ---> 71dfcfc428f6
    Successfully built 71dfcfc428f6
    Successfully tagged k8s-harbor1.example.com/library/nginx-web1:app1
    The push refers to repository [k8s-harbor1.example.com/library/nginx-web1]
    7fbb010a956a: Pushed 
    e05475c74a43: Pushed 
    bf42b15f65a9: Pushed 
    f98fb749ead8: Pushed 
    a5f4386305f5: Pushed 
    d7a47c031313: Pushed 
    921b64e67401: Layer already exists 
    1a38cd873ab2: Layer already exists 
    03fe738cbac6: Layer already exists 
    bacae692bcdc: Layer already exists 
    0466167696d7: Layer already exists 
    c2c7b781c557: Layer already exists 
    4d7c2f02fa21: Layer already exists 
    a7db5a01a52d: Layer already exists 
    e140baabf03f: Layer already exists 
    bcc97fbfc9e1: Layer already exists 
    app1: digest: sha256:7b90abaae19a00a796435fbf4d8e2497ccc6d43d3c5e64040a5314b2558d1fee size: 3669
    

    Recreate the nginx deployment:

    [root@k8s-master1 nginx]# kubectl delete -f nginx.yaml 
    deployment.extensions "python-test-nginx-deployment" deleted
    service "python-test-nginx-spec" deleted
    [root@k8s-master1 nginx]# vim nginx.yaml 
    [root@k8s-master1 nginx]# kubectl apply -f nginx.yaml 
    deployment.extensions/python-test-nginx-deployment created
    service/python-test-nginx-spec created
    [root@k8s-master1 nginx]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                             READY     STATUS    RESTARTS   AGE
    default       busybox                                          1/1       Running   196        8d
    kube-system   heapster-587f6c9b46-hwljv                        1/1       Running   0          7d
    kube-system   kube-dns-65f747f6c8-4p7gn                        3/3       Running   630        10d
    kube-system   kubernetes-dashboard-7f4f96b579-5hxdw            1/1       Running   0          8d
    kube-system   kubernetes-dashboard-7f4f96b579-glqnh            1/1       Running   0          8d
    kube-system   monitoring-grafana-5dc657db9f-cqxjq              1/1       Running   0          7d
    kube-system   monitoring-influxdb-789d98f4cb-ktsl7             1/1       Running   0          7d
    python1       python-test-nginx-deployment-75d57f78f9-bpxwg    1/1       Running   0          5s
    python1       python-test-tomcat-deployment-85967b4dc9-kxdp7   1/1       Running   0          4d
    python1       python-test-tomcat2-deployment-8ff58467d-zcv67   1/1       Running   0          4d
    [root@k8s-master1 nginx]# 
    

    Finally, check the result in a browser.

    Source: https://www.haomeiwen.com/subject/vtmjxctx.html