How to Deploy the Application Management/Delivery System Spinnaker on Alibaba Cloud Container Service for Kubernetes (ACK)

Author: Alibaba Cloud Technology | Published 2019-09-25 16:54

    Spinnaker is an open-source multi-cloud continuous delivery platform that helps you manage applications and deliver them quickly.

    Spinnaker's two main capabilities are application management and application delivery.

    Applications, clusters, and server groups are key concepts in Spinnaker; load balancers and firewalls describe how your services are exposed to users:


    Application deployment and deployment strategies:


    Steps to deploy Spinnaker on ACK:
    (1) Create an ACK cluster
    (2) Create the Kubernetes resources Spinnaker needs
    (3) Configure the Spinnaker installation files
    (4) Deploy and access Spinnaker

    1. Create a cluster

    Refer to Create an Alibaba Cloud Container Service for Kubernetes (ACK) cluster.
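    Once the cluster is ready, it is worth checking that kubectl can reach it before going further. A minimal sanity check (output depends on your cluster):

```shell
# Verify the kubeconfig obtained from the ACK console works
kubectl cluster-info
# All nodes should report STATUS Ready
kubectl get nodes
```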

    2. Create the Kubernetes resources Spinnaker needs

    2.1 Create a namespace

    $ kubectl create ns spinnaker
    

    2.2 Create the ServiceAccount and ClusterRoleBinding resources that Halyard uses to deploy Spinnaker

    Contents of rbac.yaml:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: spinnaker-service-account
      namespace: spinnaker
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: spinnaker-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - namespace: spinnaker
      kind: ServiceAccount
      name: spinnaker-service-account
    

    Run the following command to create the resources:

    $ kubectl create -f rbac.yaml
    

    3. Configure the Spinnaker installation files

    Spinnaker's configuration and deployment are managed with the Halyard tool.

    3.1 Deploy Halyard

    Contents of hal-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: hal
      name: hal
      namespace: spinnaker
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hal
      template:
        metadata:
          labels:
            app: hal
        spec:
          containers:
          - image: registry.cn-hangzhou.aliyuncs.com/haoshuwei24/halyard:stable
            name: halyard
          serviceAccountName: spinnaker-service-account
    

    Run the following command to create the Deployment:

    $ kubectl create -f hal-deployment.yaml
    

    Check that the pod is running correctly:

    $ kubectl -n spinnaker get po
    NAME                   READY   STATUS    RESTARTS   AGE
    hal-77b4cf787f-p25h5   1/1     Running   0          9m54s
    

    3.2 Configure the Cloud Provider

    • Exec into the hal pod:
    $ kubectl -n spinnaker exec -it hal-77b4cf787f-p25h5 bash
    
    • Copy your kubeconfig file into the pod as ~/.kube/config
    • Enable the Kubernetes provider:
    $ hal config provider kubernetes enable
    + Get current deployment
      Success
    + Edit the kubernetes provider
      Success
    Problems in default.provider.kubernetes:
    - WARNING Provider kubernetes is enabled, but no accounts have been
      configured.
    
    + Successfully enabled kubernetes
    
    • Add a Spinnaker account:
    $ CONTEXT=$(kubectl config current-context)
    
    $ hal config provider kubernetes account add my-k8s-v2-account \
        --provider-version v2 \
        --context $CONTEXT
    + Get current deployment
      Success
    + Add the my-k8s-v2-account account
      Success
    + Successfully added account my-k8s-v2-account for provider
      kubernetes.
    $ hal config features edit --artifacts true
    + Get current deployment
      Success
    + Get features
      Success
    + Edit features
      Success
    + Successfully updated features.
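    The "copy your kubeconfig" step above can also be done from outside the cluster with kubectl cp. A sketch, assuming the container's home directory is /root (the halyard image used here may differ; adjust the paths accordingly):

```shell
# Find the running hal pod (label comes from hal-deployment.yaml)
HAL_POD=$(kubectl -n spinnaker get po -l app=hal -o jsonpath='{.items[0].metadata.name}')
# Copy the local kubeconfig into the pod as ~/.kube/config
kubectl -n spinnaker exec "$HAL_POD" -- mkdir -p /root/.kube
kubectl cp ~/.kube/config "spinnaker/$HAL_POD:/root/.kube/config"
```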
    

    3.3 Choose Spinnaker's deployment environment

    Run the following commands:

    $ ACCOUNT=my-k8s-v2-account
    $ hal config deploy edit --type distributed --account-name $ACCOUNT
    + Get current deployment
      Success
    + Get the deployment environment
      Success
    + Edit the deployment environment
      Success
    + Successfully updated your deployment environment.
    

    3.4 Configure storage

    Spinnaker needs a reliable external storage service to persist your application settings and configured pipelines. This data is sensitive and costly to recover if lost. For this walkthrough we set up a temporary Minio service.

    • Deploy Minio
      Contents of minio-deployment.yaml:
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: minio
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: minio
      name: minio
      labels:
        component: minio
    spec:
      selector:
        matchLabels:
          component: minio
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            component: minio
        spec:
          volumes:
          - name: storage
            emptyDir: {}
          - name: config
            emptyDir: {}
          containers:
          - name: minio
            image: minio/minio:latest
            imagePullPolicy: IfNotPresent
            args:
            - server
            - /storage
            - --config-dir=/config
            env:
            - name: MINIO_ACCESS_KEY
              value: "<your MINIO_ACCESS_KEY>"
            - name: MINIO_SECRET_KEY
              value: "<your MINIO_SECRET_KEY>"
            ports:
            - containerPort: 9000
            volumeMounts:
            - name: storage
              mountPath: "/storage"
            - name: config
              mountPath: "/config"
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: minio
      name: minio
      labels:
        component: minio
    spec:
      # ClusterIP is recommended for production environments.
      # LoadBalancer (or NodePort) exposes Minio outside the cluster;
      # use it only in a test/trial environment like this walkthrough.
      type: LoadBalancer
      ports:
        - port: 9000
          targetPort: 9000
          protocol: TCP
      selector:
        component: minio
    

    Set the MINIO_ACCESS_KEY and MINIO_SECRET_KEY values, then deploy Minio:

    $ kubectl create -f minio-deployment.yaml
    

    Check the pod status and the exposed service port:

    $ kubectl -n minio get po
    NAME                     READY   STATUS    RESTARTS   AGE
    minio-59fd966974-nn5ns   1/1     Running   0          12m
    $ kubectl -n minio get svc
    NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
    minio   LoadBalancer   172.27.12.130   xxx.xx.xxx.xx   9000:30771/TCP   12m
    

    Create a Job to create the bucket and path in Minio.
    Contents of job.yaml:

    apiVersion: batch/v1
    kind: Job
    metadata:
      namespace: minio
      name: minio-setup
      labels:
        component: minio
    spec:
      template:
        metadata:
          name: minio-setup
        spec:
          restartPolicy: OnFailure
          volumes:
          - name: config
            emptyDir: {}
          containers:
          - name: mc
            image: minio/mc:latest
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - "mc --config-dir=/config config host add spinnaker http://xxx.xx.xxx.xx:9000 MINIO_ACCESS_KEY MINIO_SECRET_KEY && mc --config-dir=/config mb -p spinnaker/spinnaker"
            volumeMounts:
            - name: config
              mountPath: "/config"
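    The article defines job.yaml but does not show the command that runs it; following the same pattern as the other manifests, create the Job and wait for it to complete (the timeout is illustrative):

```shell
kubectl create -f job.yaml
# Block until the setup Job has created the spinnaker bucket
kubectl -n minio wait --for=condition=complete job/minio-setup --timeout=180s
```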
    

    Record the ENDPOINT, MINIO_ACCESS_KEY, and MINIO_SECRET_KEY values; they are used below.

    • Edit and configure the storage settings
      Continue inside the hal pod with the following steps:
    $ mkdir -p ~/.hal/default/profiles
    $ echo "spinnaker.s3.versioning: false" >> ~/.hal/default/profiles/front50-local.yml
    $ ENDPOINT=http://xxx.xx.xxx.xx:9000
    $ MINIO_ACCESS_KEY=<your key>
    $ MINIO_SECRET_KEY=<your secret>
    $ echo $MINIO_SECRET_KEY | hal config storage s3 edit --endpoint $ENDPOINT \
        --path-style-access true \
        --bucket spinnaker \
        --root-folder spinnaker \
        --access-key-id $MINIO_ACCESS_KEY \
        --secret-access-key
    + Get current deployment
      Success
    + Get persistent store
      Success
    + Edit persistent store
      Success
    + Successfully edited persistent store "s3".
    
    $ hal config storage edit --type s3
    + Get current deployment
      Success
    + Get persistent storage settings
      Success
    + Edit persistent storage settings
      Success
    + Successfully edited persistent storage.
    

    4. Deploy Spinnaker and access the service

    • List the available versions and choose one. Note: this step fetches a versions.yml file from Google Cloud; make sure the hal pod has network access to it.
    $ hal version list
    + Get current deployment
      Success
    + Get Spinnaker version
      Success
    + Get released versions
      Success
    + You are on version "", and the following are available:
     - 1.13.12 (BirdBox):
       Changelog: https://gist.github.com/spinnaker-release/9ee98b0cbed65e334cd498bc31676295
       Published: Mon Jul 29 18:18:59 UTC 2019
       (Requires Halyard >= 1.17)
     - 1.14.15 (LoveDeathAndRobots):
       Changelog: https://gist.github.com/spinnaker-release/52b1de1551a8830a8945b3c49ef66fe3
       Published: Mon Sep 16 18:09:49 UTC 2019
       (Requires Halyard >= 1.17)
     - 1.15.2 (ExtremelyWickedShockinglyEvilAndVile):
       Changelog: https://gist.github.com/spinnaker-release/e72cc8015d544738d07d57a183cb5404
       Published: Mon Aug 12 20:48:52 UTC 2019
       (Requires Halyard >= 1.17)
     - 1.15.4 (ExtremelyWickedShockinglyEvilAndVile):
       Changelog: https://gist.github.com/spinnaker-release/2229c2172952e9a485d68788bd4560b0
       Published: Tue Sep 17 17:35:54 UTC 2019
       (Requires Halyard >= 1.17)
     - 1.16.1 (SecretObsession):
       Changelog: https://gist.github.com/spinnaker-release/21ff4522a9e46ba5f27c52f67da88dc9
       Published: Tue Sep 17 17:48:07 UTC 2019
       (Requires Halyard >= 1.17)
    
    • Choose version 1.16.1:
    $ hal config version edit --version 1.16.1
    + Get current deployment
      Success
    + Edit Spinnaker version
      Success
    + Spinnaker has been configured to update/install version "1.16.1".
      Deploy this version of Spinnaker with `hal deploy apply`.
    
    • Deploy Spinnaker:
    $ hal deploy apply
    + Get current deployment
      Success
    + Prep deployment
      Success
    Problems in default.security:
    - WARNING Your UI or API domain does not have override base URLs
      set even though your Spinnaker deployment is a Distributed deployment on a
      remote cloud provider. As a result, you will need to open SSH tunnels against
      that deployment to access Spinnaker.
    ? We recommend that you instead configure an authentication
      mechanism (OAuth2, SAML2, or x509) to make it easier to access Spinnaker
      securely, and then register the intended Domain and IP addresses that your
      publicly facing services will be using.
    
    + Preparation complete... deploying Spinnaker
    + Get current deployment
      Success
    + Apply deployment
      Success
    + Deploy spin-redis
      Success
    + Deploy spin-clouddriver
      Success
    + Deploy spin-front50
      Success
    + Deploy spin-orca
      Success
    + Deploy spin-deck
      Success
    + Deploy spin-echo
      Success
    + Deploy spin-gate
      Success
    + Deploy spin-rosco
      Success
    + Run `hal deploy connect` to connect to Spinnaker.
    
    • Check that the Spinnaker pods are running:
    $ kubectl -n spinnaker get po
    NAME                                READY   STATUS    RESTARTS   AGE
    hal-77b4cf787f-xlr5g                1/1     Running   0          18m
    spin-clouddriver-66bf54c684-6ns9b   1/1     Running   0          7m49s
    spin-deck-cd6489797-7fqzj           1/1     Running   0          7m52s
    spin-echo-85cd9fb85c-dzkrz          1/1     Running   0          7m54s
    spin-front50-6c57c79995-7d5sj       1/1     Running   0          7m46s
    spin-gate-5dc9b977c6-5kl8d          1/1     Running   0          7m51s
    spin-orca-dfdbdf448-gp8s2           1/1     Running   0          7m47s
    spin-redis-7bff9789b6-lmpb4         1/1     Running   0          7m50s
    spin-rosco-666d4889c8-vh7p5         1/1     Running   0          7m47s
    
    $ kubectl -n spinnaker get svc
    NAME               TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
    spin-clouddriver   ClusterIP      172.21.1.183    <none>          7002/TCP         13m
    spin-deck          ClusterIP      172.21.6.203    <none>          9000/TCP         13m
    spin-echo          ClusterIP      172.21.10.119   <none>          8089/TCP         13m
    spin-front50       ClusterIP      172.21.13.128   <none>          8080/TCP         13m
    spin-gate          ClusterIP      172.21.6.130    <none>          8084/TCP         13m
    spin-orca          ClusterIP      172.21.4.37     <none>          8083/TCP         13m
    spin-redis         ClusterIP      172.21.9.201    <none>          6379/TCP         13m
    spin-rosco         ClusterIP      172.21.11.27    <none>          8087/TCP         13m
    
    • Access the Spinnaker service
      Run kubectl -n spinnaker edit svc spin-deck and change the spin-deck Service (which serves the UI) to type: LoadBalancer
    $ kubectl -n spinnaker get svc |grep spin-deck
    spin-deck          LoadBalancer   172.21.6.203    xxx.xx.xx.xx   9000:30680/TCP   16m
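    Instead of interactively editing the Service, the same change can be applied with a one-line patch (equivalent to the kubectl edit step above):

```shell
# Switch the UI Service to a cloud load balancer
kubectl -n spinnaker patch svc spin-deck -p '{"spec":{"type":"LoadBalancer"}}'
# Once provisioned, read the external address
kubectl -n spinnaker get svc spin-deck -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```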
    
    • In the hal pod, configure the UI for external access:
    $ hal config security ui edit --override-base-url http://xxx.xx.xx.xx:9000
    + Get current deployment
      Success
    + Get UI security settings
      Success
    + Edit UI security settings
      Success
    Problems in default.security:
    - WARNING Your UI or API domain does not have override base URLs
      set even though your Spinnaker deployment is a Distributed deployment on a
      remote cloud provider. As a result, you will need to open SSH tunnels against
      that deployment to access Spinnaker.
    ? We recommend that you instead configure an authentication
      mechanism (OAuth2, SAML2, or x509) to make it easier to access Spinnaker
      securely, and then register the intended Domain and IP addresses that your
      publicly facing services will be using.
    
    + Successfully updated UI security settings.
    

    Open the Spinnaker UI in a browser at http://xxx.xx.xx.xx:9000

    Note: Spinnaker does not ship with a user management module. For production use you need to integrate your own authentication system; see [Spinnaker Authentication](https://www.spinnaker.io/setup/security/authentication/)
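    For example, a minimal OAuth2 setup via Halyard looks roughly like this (the provider, client ID, and secret are placeholders, not values from this walkthrough; run inside the hal pod):

```shell
# Sketch only: configure an OAuth2 identity provider for Spinnaker
hal config security authn oauth2 edit \
    --provider google \
    --client-id <your-client-id> \
    --client-secret <your-client-secret>
hal config security authn oauth2 enable
# Re-deploy so the gate service picks up the auth settings
hal deploy apply
```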

    • To access the Spinnaker API externally, do the following:
      Change the spin-gate Service to type: LoadBalancer
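    As with spin-deck, this can be done non-interactively:

```shell
kubectl -n spinnaker patch svc spin-gate -p '{"spec":{"type":"LoadBalancer"}}'
```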

    Configure the API base URL for external access:

    $ hal config security api edit --override-base-url http://xx.xx.xxx.xx:8084
    + Get current deployment
      Success
    + Get API security settings
      Success
    + Edit API security settings
      Success
    

    5. What's next

    In follow-up articles we will cover how to use Spinnaker to manage and deliver applications.

    References:
    https://www.spinnaker.io/setup/install/
    https://www.mirantis.com/blog/how-to-deploy-spinnaker-on-kubernetes-a-quick-and-dirty-guide/

    This article is original content from the Alibaba Cloud Yunqi Community and may not be reproduced without permission.

    Original article link: https://www.haomeiwen.com/subject/xhkwuctx.html