Deploying Ceph with the Rook v1.11.2 Operator

Author: 橘子基因 | Published 2023-04-17

    1. Environment Prerequisites

    • Kernel version 4.17 or newer on every Kubernetes node
    • Kubernetes v1.21 or newer
    • At least 3 nodes in the cluster, each with one raw disk (no filesystem, no partitions) to back the 3 Ceph OSDs
    • lvm2 installed on every node (a quick check follows below)
      • yum install -y lvm2 (RHEL/CentOS) or apt-get install -y lvm2 (Debian/Ubuntu)
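
    To sanity-check the kernel and disk requirements on a node, the following standard commands are enough (the disks you plan to give to Ceph should show no filesystem or partitions in the lsblk output):

    ~ # uname -r
    ~ # lsblk -f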

    2. Admission Controller

    Enable the Rook admission controller so that resources created through Rook's custom resources (CRs) are validated. The admission controller depends on cert-manager, which can be installed with:

    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.1/cert-manager.yaml
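
    Before continuing, you may want to confirm that the cert-manager pods are Running (assuming the default cert-manager namespace created by the manifest above):

    ~ # kubectl -n cert-manager get pods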
    

    3. Deploy the Rook Operator

    Download Rook v1.11.2 and extract it

    wget https://github.com/rook/rook/archive/refs/tags/v1.11.2.zip && unzip v1.11.2.zip
    

    Deploy the Rook operator

    ~ # cd rook-1.11.2/deploy/examples/
    ~ # kubectl create -f crds.yaml -f common.yaml -f operator.yaml
    

    Verify that the Rook operator pod is Running

    ~ # kubectl -n rook-ceph get pod | grep operator                                                                                    
    rook-ceph-operator-5b4fd55548-9h6wp                               1/1     Running     0             10d
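
    If the operator pod is not Running, its logs are the first place to look:

    ~ # kubectl -n rook-ceph logs deploy/rook-ceph-operator -f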
    

    4. Install the Rook Ceph Cluster

    My environment has 7 nodes, but only 3 of them have a raw disk attached, so I modified the storage section of the default cluster.yaml as follows:

      storage: # cluster level storage configuration and selection
        useAllNodes: false
        useAllDevices: false
        #deviceFilter:
        config:
          # crushRoot: "custom-root" # specify a non-default root label for the CRUSH map
          # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
          # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
          # journalSizeMB: "1024"  # uncomment if the disks are 20 GB or smaller
          # osdsPerDevice: "1" # this value can be overridden at the node or device level
          # encryptedDevice: "true" # the default value for this option is "false"
        # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
        # nodes below will be used as storage resources.  Each node's 'name' field should match their 'kubernetes.io/hostname' label.
        nodes:
          # - name: "172.17.4.201"
          #   devices: # specific devices to use for storage can be specified for each node
          #     - name: "sdb"
          #     - name: "nvme01" # multiple osds can be created on high performance devices
          #       config:
          #         osdsPerDevice: "5"
          #     - name: "/dev/disk/by-id/ata-ST4000DM004-XXXX" # devices can be specified using full udev paths
          #   config: # configuration can be specified at the node level which overrides the cluster level config
          - name: "cdcloud"
            devices:
            - name: "sda"
          - name: "cd-c1"
            devices:
            - name: "sda"
          - name: "cd-c2"
            devices:
            - name: "sda"
    

    Create the Ceph cluster

    kubectl create -f cluster.yaml
    

    Note: the Ceph-related images are pulled from registries that require external network access. My environment has such access, so I left the image settings unchanged; when deploying in mainland China you will need to point them at reachable mirrors (suitable mirrored images can be found online).
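
    For reference, the images involved are the Ceph image set in cluster.yaml and the CSI images configured through the operator settings in operator.yaml. Roughly, the fields to change look like this (a sketch; the tag shown is illustrative and may differ from the one shipped with v1.11.2):

      # cluster.yaml
      spec:
        cephVersion:
          image: quay.io/ceph/ceph:v17.2.5   # replace with a registry/mirror reachable from your nodes
      # operator.yaml (rook-ceph-operator-config settings, commented out by default):
      #   ROOK_CSI_CEPH_IMAGE, ROOK_CSI_REGISTRAR_IMAGE, ROOK_CSI_PROVISIONER_IMAGE,
      #   ROOK_CSI_ATTACHER_IMAGE, ROOK_CSI_RESIZER_IMAGE, ROOK_CSI_SNAPSHOTTER_IMAGE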

    Check the deployment status

    ~ # kubectl -n rook-ceph get pod                                                                                                              
    NAME                                                              READY   STATUS      RESTARTS   AGE
    csi-cephfsplugin-7g5sx                                            2/2     Running     0          10d
    csi-cephfsplugin-dqqmz                                            2/2     Running     0          10d
    csi-cephfsplugin-g5mn2                                            2/2     Running     0          10d
    csi-cephfsplugin-provisioner-66ff7944fd-q5m69                     5/5     Running     0          10d
    csi-cephfsplugin-provisioner-66ff7944fd-vlwxx                     5/5     Running     0          10d
    csi-cephfsplugin-vp8k9                                            2/2     Running     2          10d
    csi-cephfsplugin-xmxbz                                            2/2     Running     0          10d
    csi-rbdplugin-4mg9z                                               2/2     Running     0          10d
    csi-rbdplugin-fcx8w                                               2/2     Running     0          10d
    csi-rbdplugin-gsn7q                                               2/2     Running     2          10d
    csi-rbdplugin-m7km7                                               2/2     Running     0          10d
    csi-rbdplugin-provisioner-5bc74d7569-qbrhb                        5/5     Running     0          10d
    csi-rbdplugin-provisioner-5bc74d7569-z6rrv                        5/5     Running     0          10d
    csi-rbdplugin-vdvrb                                               2/2     Running     0          10d
    rook-ceph-crashcollector-cd-c1-756b9b56555hnzg                    1/1     Running     0          10d
    rook-ceph-crashcollector-cd-c2-6d48bc9d8dvbmzf                    1/1     Running     0          21h
    rook-ceph-crashcollector-cdcloud-64d4fcxlr2c                      1/1     Running     0          10d
    rook-ceph-mgr-a-745755867d-mg52v                                  3/3     Running     0          10d
    rook-ceph-mgr-b-584f9fb44c-d7cvb                                  3/3     Running     0          10d
    rook-ceph-mon-a-779b9ddc75-7g6vn                                  2/2     Running     0          10d
    rook-ceph-mon-b-7f87674f57-rg2xs                                  2/2     Running     0          10d
    rook-ceph-mon-d-9d9d94cc9-lnx75                                   2/2     Running     0          10d
    rook-ceph-operator-5b4fd55548-9h6wp                               1/1     Running     0          10d
    rook-ceph-osd-0-67c4577fb9-snrdx                                  2/2     Running     0          10d
    rook-ceph-osd-1-59899fd5d6-jgch8                                  2/2     Running     0          10d
    rook-ceph-osd-2-8c47c66-k4mvt                                     2/2     Running     0          10d
    rook-ceph-osd-prepare-cd-c1-pc9jd                                 0/1     Completed   0          146m
    rook-ceph-osd-prepare-cd-c2-9x6r6                                 0/1     Completed   0          146m
    rook-ceph-osd-prepare-cdcloud-8h56m                               0/1     Completed   0          146m
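
    The overall cluster state can also be read from the CephCluster resource itself (see the PHASE and HEALTH columns):

    ~ # kubectl -n rook-ceph get cephcluster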
    

    5. Toolbox

    The Rook toolbox can be used to verify the state of the Rook Ceph cluster

    kubectl create -f deploy/examples/toolbox.yaml
    

    Check that the toolbox pod is running

    ~ # kubectl get pods -n rook-ceph|grep tools                                                                                                 
    rook-ceph-tools-74bb778c5-l7wpc                                   1/1     Running     0             92m
    

    Exec into the toolbox and check the Ceph status

    ~ # kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ ceph status 
      cluster:
        id:     b54e6410-5825-4142-8fc9-542ec0f2aedd
        health: HEALTH_OK
     
      services:
        mon: 3 daemons, quorum a,b,d (age 10d)
        mgr: a(active, since 22h), standbys: b
        osd: 3 osds: 3 up (since 10d), 3 in (since 10d)
        rgw: 1 daemon active (1 hosts, 1 zones)
     
      data:
        volumes: 1/1 healthy
        pools:   12 pools, 169 pgs
        objects: 574 objects, 424 MiB
        usage:   1.6 GiB used, 2.6 TiB / 2.6 TiB avail
        pgs:     169 active+clean
     
      io:
        client:   853 B/s rd, 1 op/s rd, 0 op/s wr
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ ceph osd status
    ID  HOST     USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
     0  cd-c1    546M   893G      0        0       3      106   exists,up
     1  cdcloud  542M   893G      0        0       0        0   exists,up
     2  cd-c2    546M   893G      0        0       0        0   exists,up
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ ceph df
    --- RAW STORAGE ---
    CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
    ssd    2.6 TiB  2.6 TiB  1.6 GiB   1.6 GiB       0.06
    TOTAL  2.6 TiB  2.6 TiB  1.6 GiB   1.6 GiB       0.06
     
    --- POOLS ---
    POOL                         ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
    .mgr                          1    1  2.8 MiB        2  8.4 MiB      0    849 GiB
    replicapool                   2   32  366 MiB      160  1.1 GiB   0.04    849 GiB
    myfs-metadata                 3   16   48 KiB       24  228 KiB      0    849 GiB
    myfs-replicated               4   32    158 B        1   12 KiB      0    849 GiB
    my-store.rgw.control          5    8      0 B        8      0 B      0    849 GiB
    my-store.rgw.buckets.non-ec   6    8      0 B        0      0 B      0    849 GiB
    my-store.rgw.otp              7    8      0 B        0      0 B      0    849 GiB
    my-store.rgw.log              8    8   23 KiB      340  1.9 MiB      0    849 GiB
    my-store.rgw.buckets.index    9    8      0 B       11      0 B      0    849 GiB
    my-store.rgw.meta            10    8  2.2 KiB       11   96 KiB      0    849 GiB
    .rgw.root                    11    8  4.5 KiB       16  180 KiB      0    849 GiB
    my-store.rgw.buckets.data    12   32    8 KiB        1   12 KiB      0    1.7 TiB
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ rados df
    POOL_NAME                       USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS       RD  WR_OPS       WR  USED COMPR  UNDER COMPR
    .mgr                         8.4 MiB        2       0       6                   0        0         0     790  1.4 MiB    1075   21 MiB         0 B          0 B
    .rgw.root                    180 KiB       16       0      48                   0        0         0     393  434 KiB      36   31 KiB         0 B          0 B
    my-store.rgw.buckets.data     12 KiB        1       0       3                   0        0         0       5    5 KiB      11    1 KiB         0 B          0 B
    my-store.rgw.buckets.index       0 B       11       0      33                   0        0         0      38   38 KiB      14    2 KiB         0 B          0 B
    my-store.rgw.buckets.non-ec      0 B        0       0       0                   0        0         0       0      0 B       0      0 B         0 B          0 B
    my-store.rgw.control             0 B        8       0      24                   0        0         0       0      0 B       0      0 B         0 B          0 B
    my-store.rgw.log             1.9 MiB      340       0    1020                   0        0         0   79422   69 MiB   45061  5.3 MiB         0 B          0 B
    my-store.rgw.meta             96 KiB       11       0      33                   0        0         0     220  184 KiB      23   11 KiB         0 B          0 B
    my-store.rgw.otp                 0 B        0       0       0                   0        0         0       0      0 B       0      0 B         0 B          0 B
    myfs-metadata                228 KiB       24       0      72                   0        0         0  158822   78 MiB      40   68 KiB         0 B          0 B
    myfs-replicated               12 KiB        1       0       3                   0        0         0       0      0 B       1    1 KiB         0 B          0 B
    replicapool                  1.1 GiB      160       0     480                   0        0         0    1003  4.0 MiB   62540  712 MiB         0 B          0 B
    
    total_objects    574
    total_used       1.6 GiB
    total_avail      2.6 TiB
    total_space      2.6 TiB
    

    6. Ceph Dashboard

    The dashboard is enabled by default in cluster.yaml:

      dashboard:
        enabled: true
        # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
        # urlPrefix: /
        # serve the dashboard at the given port.
        # port: 8443
        # serve the dashboard using SSL
        ssl: true
    

    List the services in the rook-ceph namespace

    ~ # kubectl -n rook-ceph get service                                                                                                   
    NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    rook-ceph-mgr                            ClusterIP   10.101.16.43     <none>        9283/TCP            10d
    rook-ceph-mgr-dashboard                  ClusterIP   10.98.91.18      <none>        8443/TCP            10d
    rook-ceph-mon-a                          ClusterIP   10.104.253.46    <none>        6789/TCP,3300/TCP   10d
    rook-ceph-mon-b                          ClusterIP   10.102.128.246   <none>        6789/TCP,3300/TCP   10d
    rook-ceph-mon-d                          ClusterIP   10.104.248.147   <none>        6789/TCP,3300/TCP   10d
    

    Expose the dashboard outside the cluster with a NodePort service

    kubectl create -f deploy/examples/dashboard-external-https.yaml
    

    Check the services again

    ~ # kubectl get svc -n rook-ceph
    NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    rook-ceph-mgr                            ClusterIP   10.101.16.43     <none>        9283/TCP            10d
    rook-ceph-mgr-dashboard                  ClusterIP   10.98.91.18      <none>        8443/TCP            10d
    rook-ceph-mgr-dashboard-external-https   NodePort    10.106.53.198    <none>        8443:30782/TCP      10d
    rook-ceph-mon-a                          ClusterIP   10.104.253.46    <none>        6789/TCP,3300/TCP   10d
    rook-ceph-mon-b                          ClusterIP   10.102.128.246   <none>        6789/TCP,3300/TCP   10d
    rook-ceph-mon-d                          ClusterIP   10.104.248.147   <none>        6789/TCP,3300/TCP   10d
    

    Retrieve the admin password

    ~ # kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
    

    Open the Ceph dashboard in a browser at https://<node-IP>:<NodePort>, and log in as admin with the password retrieved above.
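
    If you prefer to script it, the NodePort can be read straight from the external service created above:

    ~ # kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https -o jsonpath='{.spec.ports[0].nodePort}'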


    [Image: ceph dashboard.png]

    7. Storage

    7.1 Shared Filesystem

    A shared filesystem can be mounted with read/write permission from multiple pods. The contents of filesystem.yaml are:

    apiVersion: ceph.rook.io/v1
    kind: CephFilesystem
    metadata:
      name: myfs
      namespace: rook-ceph
    spec:
      metadataPool:
        replicated:
          size: 3
      dataPools:
        - name: replicated
          replicated:
            size: 3
      preserveFilesystemOnDelete: true
      metadataServer:
        activeCount: 1
        activeStandby: true
    
    ~ # kubectl apply -f deploy/examples/filesystem.yaml
    

    Confirm the filesystem is configured and that the MDS pods have started

    ~ # kubectl -n rook-ceph get pod -l app=rook-ceph-mds                                                                 
    NAME                                    READY   STATUS    RESTARTS   AGE
    rook-ceph-mds-myfs-a-d9d5bbbcc-wqpcq    2/2     Running   0          22h
    rook-ceph-mds-myfs-b-85fd84c564-stjgm   2/2     Running   0          22h
    

    Exec into the toolbox and check the Ceph status; the services section should now report the MDS:

      services:
        mon: 3 daemons, quorum a,b,d (age 10d)
        mgr: a(active, since 23h), standbys: b
        mds: 1/1 daemons up, 1 hot standby
    

    Create a StorageClass backed by the shared filesystem

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: rook-cephfs
    # Change "rook-ceph" provisioner prefix to match the operator namespace if needed
    provisioner: rook-ceph.cephfs.csi.ceph.com
    parameters:
      # clusterID is the namespace where the rook cluster is running
      # If you change this namespace, also change the namespace below where the secret namespaces are defined
      clusterID: rook-ceph
    
      # CephFS filesystem name into which the volume shall be created
      fsName: myfs
    
      # Ceph pool into which the volume shall be created
      # Required for provisionVolume: "true"
      pool: myfs-replicated
    
      # The secrets contain Ceph admin credentials. These are generated automatically by the operator
      # in the same namespace as the cluster.
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
      csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
      csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
      csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
    
    reclaimPolicy: Delete
    
    kubectl create -f deploy/examples/csi/cephfs/storageclass.yaml
    

    Example:

    Start kube-registry pods that use the shared filesystem as backing storage. Create kube-registry.yaml as follows:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cephfs-pvc
      namespace: kube-system
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: rook-cephfs
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kube-registry
      namespace: kube-system
      labels:
        k8s-app: kube-registry
        kubernetes.io/cluster-service: "true"
    spec:
      replicas: 3
      selector:
        matchLabels:
          k8s-app: kube-registry
      template:
        metadata:
          labels:
            k8s-app: kube-registry
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: registry
            image: registry:2
            imagePullPolicy: Always
            resources:
              limits:
                cpu: 100m
                memory: 100Mi
            env:
            # Configuration reference: https://docs.docker.com/registry/configuration/
            - name: REGISTRY_HTTP_ADDR
              value: :5000
            - name: REGISTRY_HTTP_SECRET
              value: "Ple4seCh4ngeThisN0tAVerySecretV4lue"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: /var/lib/registry
            volumeMounts:
            - name: image-store
              mountPath: /var/lib/registry
            ports:
            - containerPort: 5000
              name: registry
              protocol: TCP
            livenessProbe:
              httpGet:
                path: /
                port: registry
            readinessProbe:
              httpGet:
                path: /
                port: registry
          volumes:
          - name: image-store
            persistentVolumeClaim:
              claimName: cephfs-pvc
              readOnly: false
    

    Deploy the kube-registry Deployment:

    kubectl create -f deploy/examples/csi/cephfs/kube-registry.yaml
    

    Check the PVC and pods

    ~ # kubectl get pvc -n kube-system                                                                                          
    NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    cephfs-pvc   Bound    pvc-6ca32df4-121f-4a01-9815-0fd64feaff8c   1Gi        RWX            rook-cephfs    22h
    ~ # kubectl get pods -n kube-system| grep kube-registry                                                                     
    kube-registry-5d6d8877f7-f67bs                       1/1     Running   0          22h
    kube-registry-5d6d8877f7-l8tgl                       1/1     Running   0          22h
    kube-registry-5d6d8877f7-vtk7t                       1/1     Running   0          22h
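
    To confirm the volume really is shared, you can touch a file from one registry pod and list it from another (pod names are taken from the output above and will differ in your environment; this assumes the registry image ships basic busybox utilities):

    ~ # kubectl -n kube-system exec kube-registry-5d6d8877f7-f67bs -- touch /var/lib/registry/shared-test
    ~ # kubectl -n kube-system exec kube-registry-5d6d8877f7-l8tgl -- ls /var/lib/registry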
    

    7.2 Block Storage

    Block storage allows a single pod to mount a volume. Create a CephBlockPool and a StorageClass for block storage:

    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: replicapool
      namespace: rook-ceph
    spec:
      failureDomain: host
      replicated:
        size: 3
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
       name: rook-ceph-block
    # Change "rook-ceph" provisioner prefix to match the operator namespace if needed
    provisioner: rook-ceph.rbd.csi.ceph.com
    parameters:
        # clusterID is the namespace where the rook cluster is running
        clusterID: rook-ceph
        # Ceph pool into which the RBD image shall be created
        pool: replicapool
    
        # (optional) mapOptions is a comma-separated list of map options.
        # For krbd options refer
        # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
        # For nbd options refer
        # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
        # mapOptions: lock_on_read,queue_depth=1024
    
        # (optional) unmapOptions is a comma-separated list of unmap options.
        # For krbd options refer
        # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
        # For nbd options refer
        # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
        # unmapOptions: force
    
        # RBD image format. Defaults to "2".
        imageFormat: "2"
    
        # RBD image features
        # Available for imageFormat: "2". Older releases of CSI RBD
        # support only the `layering` feature. The Linux kernel (KRBD) supports the
        # full complement of features as of 5.4
        # `layering` alone corresponds to Ceph's bitfield value of "2" ;
        # `layering` + `fast-diff` + `object-map` + `deep-flatten` + `exclusive-lock` together
        # correspond to Ceph's OR'd bitfield value of "63". Here we use
        # a symbolic, comma-separated format:
        # For 5.4 or later kernels:
        #imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock
        # For 5.3 or earlier kernels:
        imageFeatures: layering
    
        # The secrets contain Ceph admin credentials.
        csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
        csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
        csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
        csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
    
        # Specify the filesystem type of the volume. If not specified, csi-provisioner
        # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
        # in hyperconverged settings where the volume is mounted on the same node as the osds.
        csi.storage.k8s.io/fstype: ext4
    
    # Delete the rbd volume when a PVC is deleted
    reclaimPolicy: Delete
    
    # Optional, if you want to add dynamic resize for PVC.
    # For now only ext3, ext4, xfs resize support provided, like in Kubernetes itself.
    allowVolumeExpansion: true
    

    Note: this example requires at least one OSD per node, with OSDs on 3 different nodes. Each OSD must be on a different node because failureDomain is set to host and replicated.size is set to 3.

    kubectl create -f deploy/examples/csi/rbd/storageclass.yaml
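
    For reference, a minimal PVC that consumes this StorageClass would look something like the following (a sketch; the claim name block-pvc is just an example):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: block-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: rook-ceph-block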
    

    Example:

    Create MySQL and WordPress applications backed by block storage

    ~ # kubectl create -f deploy/examples/mysql.yaml
    ~ # kubectl create -f deploy/examples/wordpress.yaml
    

    Check the PVCs and pods

    ~ # kubectl get pvc | grep rook-ceph-block                                                                                  
    mysql-pv-claim                 Bound    pvc-9253d723-945a-4748-858d-ae7981928bb4   20Gi       RWO            rook-ceph-block   23h
    wp-pv-claim                    Bound    pvc-1cd8e9e8-c77b-4b1b-965c-3cb971718f04   20Gi       RWO            rook-ceph-block   23h
    ~ # kubectl get pods | grep wordpress                                                                                       
    wordpress-7b989dbf57-sxngr             1/1     Running   0              23h
    wordpress-mysql-6965fc8cc8-wzr9b       1/1     Running   0              23h
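
    The backing RBD images can be listed from the toolbox to confirm they were created in the pool (image names are generated by the CSI driver):

    ~ # kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- rbd ls -p replicapool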
    

    7.3 Object Storage

    Create a local object store; the object.yaml configuration file is as follows.

    Note: this example requires at least 3 bluestore OSDs, each on a different node. The OSDs must be on different nodes because failureDomain is set to host and the erasureCoded chunk settings require at least 3 different OSDs (2 data chunks + 1 coding chunk).

    apiVersion: ceph.rook.io/v1
    kind: CephObjectStore
    metadata:
      name: my-store
      namespace: rook-ceph
    spec:
      metadataPool:
        failureDomain: host
        replicated:
          size: 3
      dataPool:
        failureDomain: host
        erasureCoded:
          dataChunks: 2
          codingChunks: 1
      preservePoolsOnDelete: true
      gateway:
        sslCertificateRef:
        port: 80
        # securePort: 443
        instances: 1
    
    kubectl create -f object.yaml
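
    The CephObjectStore resource reports its provisioning state, which is useful while the RGW is still coming up:

    ~ # kubectl -n rook-ceph get cephobjectstore my-store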
    

    Confirm the object store is configured and check the RGW pod

    ~ # kubectl -n rook-ceph get pod -l app=rook-ceph-rgw                                                                       
    NAME                                        READY   STATUS    RESTARTS   AGE
    rook-ceph-rgw-my-store-a-6dd6cd74fc-7sdbb   2/2     Running   0          23h
    

    Create a bucket. Buckets can be created by defining a storage class, similar to the pattern used for block and file storage. First, create a StorageClass that allows object clients to create buckets:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
       name: rook-ceph-bucket
    # Change "rook-ceph" provisioner prefix to match the operator namespace if needed
    provisioner: rook-ceph.ceph.rook.io/bucket
    reclaimPolicy: Delete
    parameters:
      objectStoreName: my-store
      objectStoreNamespace: rook-ceph
    
    ~ # kubectl apply -f deploy/examples/storageclass-bucket-delete.yaml
    

    With this storage class in place, an object client can request a bucket by creating an ObjectBucketClaim (OBC). When the OBC is created, the Rook bucket provisioner creates the bucket:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: ceph-bucket
    spec:
      generateBucketName: ceph-bkt
      storageClassName: rook-ceph-bucket
    
    kubectl apply -f deploy/examples/object-bucket-claim-delete.yaml
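
    Once the claim has been processed, its phase should become Bound (obc is the short name registered for objectbucketclaims; the full resource name also works):

    ~ # kubectl get obc ceph-bucket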
    

    Retrieve AWS_HOST, PORT, BUCKET_NAME, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY

    ## AWS_HOST
    ~ # kubectl -n default get cm ceph-delete-bucket -o jsonpath='{.data.BUCKET_HOST}'
    rook-ceph-rgw-my-store.rook-ceph.svc
    ## PORT
    ~ # kubectl -n default get cm ceph-delete-bucket -o jsonpath='{.data.BUCKET_PORT}'
    80
    ## BUCKET_NAME
    ~ # kubectl -n default get cm ceph-delete-bucket -o jsonpath='{.data.BUCKET_NAME}'
    ceph-bkt-0a7a6731-d8e8-4736-beac-6797b1ae8066
    ## AWS_ACCESS_KEY_ID
    ~ # kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
    8PNN1JKMIFPWOS4M41PC
    ## AWS_SECRET_ACCESS_KEY
    ~ # kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode
    a3nWCzuo4q6ECSqCf67lr2l7sL1UfCPCw2upcsbC
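
    If you prefer, the same values can be collected into environment variables in one go (a convenience sketch based on the ceph-delete-bucket ConfigMap and Secret shown above):

    export AWS_HOST=$(kubectl -n default get cm ceph-delete-bucket -o jsonpath='{.data.BUCKET_HOST}')
    export PORT=$(kubectl -n default get cm ceph-delete-bucket -o jsonpath='{.data.BUCKET_PORT}')
    export BUCKET_NAME=$(kubectl -n default get cm ceph-delete-bucket -o jsonpath='{.data.BUCKET_NAME}')
    export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
    export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)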
    

    Example:

    Use the Ceph object store from an S3 client; the following steps are run inside the toolbox.

    The toolbox in v1.11.2 no longer ships the s5cmd tool, so for convenience I used the toolbox from an earlier release: https://raw.githubusercontent.com/rook/rook/v1.8.1/deploy/examples/toolbox.yaml

    ~ # kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ export AWS_HOST=rook-ceph-rgw-my-store.rook-ceph.svc
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ export PORT=80
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ export BUCKET_NAME=ceph-bkt-0a7a6731-d8e8-4736-beac-6797b1ae8066
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ export AWS_ACCESS_KEY_ID=8PNN1JKMIFPWOS4M41PC
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ export AWS_SECRET_ACCESS_KEY=a3nWCzuo4q6ECSqCf67lr2l7sL1UfCPCw2upcsbC
    ## Set up object store credentials for the s5cmd tool
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ mkdir ~/.aws
    cat > ~/.aws/credentials << EOF
    [default]
    aws_access_key_id = ${AWS_ACCESS_KEY_ID}
    aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
    EOF
    

    Upload a file to the newly created bucket

    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ echo "Hello Rook" > /tmp/rookObj
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ s5cmd --endpoint-url http://$AWS_HOST:$PORT cp /tmp/rookObj s3://$BUCKET_NAME
    

    Download the file from the bucket and verify its contents

    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ s5cmd --endpoint-url http://$AWS_HOST:$PORT cp s3://$BUCKET_NAME/rookObj /tmp/rookObj-download
    [rook@rook-ceph-tools-74bb778c5-l7wpc /]$ cat /tmp/rookObj-download 
    Hello Rook
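
    When you are done testing, deleting the ObjectBucketClaim removes the bucket as well, since the bucket StorageClass above uses reclaimPolicy: Delete:

    kubectl delete -f deploy/examples/object-bucket-claim-delete.yaml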
    
