
Offline Installation of a MinIO Cluster (Part 1)

Author: 小小的小帅 | Published 2021-07-08 16:17
  1. Create the PVs
    vi minio0-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-minio-0
  namespace: minio  # note: PersistentVolumes are cluster-scoped, so this field has no effect
spec:
  capacity:  # requested capacity
    storage: 5Gi
  accessModes:
    - ReadWriteOnce  # access mode; ReadOnlyMany (ROX) and ReadWriteMany (RWX) are also available
  storageClassName: nfs  # storage class name
  nfs:
    path: /minio  # NFS export path
    server: 192.168.30.135  # NFS server IP

Create four PV manifests: minio0-pv.yaml, minio1-pv.yaml, minio2-pv.yaml and minio3-pv.yaml.
In each file change metadata.name to data-minio-0 through data-minio-3; since MinIO expects a separate disk per instance, it is also advisable to give each PV its own NFS directory (see the note at the end). A sketch of generating and applying the files follows.
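A minimal sketch of generating and applying the four PV manifests, assuming minio0-pv.yaml above is used as the template and kubectl points at the target cluster; file names and paths are illustrative:

kubectl create namespace minio   # the PVCs and pods below live in this namespace

# derive minio1/2/3-pv.yaml from the template by changing only the name
for i in 1 2 3; do
  sed "s/data-minio-0/data-minio-$i/" minio0-pv.yaml > minio$i-pv.yaml
done
# if each PV gets its own directory, also edit spec.nfs.path in the generated files

kubectl apply -f minio0-pv.yaml -f minio1-pv.yaml -f minio2-pv.yaml -f minio3-pv.yaml
kubectl get pv   # all four PVs should show STATUS=Available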

  2. Create the PVCs
    vi minio0-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-minio-0
  namespace: minio
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs

Create four PVC manifests: minio0-pvc.yaml, minio1-pvc.yaml, minio2-pvc.yaml and minio3-pvc.yaml.
In each file change metadata.name to data-minio-0 through data-minio-3, as in the sketch below.
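The same pattern works for the PVCs; a minimal sketch assuming the minio namespace already exists and minio0-pvc.yaml above is the template:

for i in 1 2 3; do
  sed "s/data-minio-0/data-minio-$i/" minio0-pvc.yaml > minio$i-pvc.yaml
done

kubectl apply -f minio0-pvc.yaml -f minio1-pvc.yaml -f minio2-pvc.yaml -f minio3-pvc.yaml
kubectl get pvc -n minio   # every PVC should reach STATUS=Bound against one of the PVs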

Viewing and deleting PVs/PVCs

kubectl get pv -n minio
kubectl get pvc -n minio
kubectl delete pv xxx -n minio
kubectl delete pvc xxx -n minio

Checking pod status and logs

# list namespaces
kubectl get ns
# list pods
kubectl get po -n minio
# list services
kubectl get svc -n minio
# inspect a pod's status and events
kubectl describe po xxx -n minio
# follow a pod's logs
kubectl logs -f xxx -n minio
# apply a manifest (re-applying picks up configuration changes)
kubectl apply -f minio.yaml -n minio

NFS directory export

vi /etc/exports

# add an export line (the path here is an example; export the directory the PVs above point at)
/home/nfs_dir_1 *(rw,sync,no_root_squash,no_subtree_check)

service nfs restart
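A minimal sketch of preparing and verifying the export on the NFS server (192.168.30.135), assuming the /minio path that the PVs above point at; adjust the directory to your layout:

mkdir -p /minio
echo "/minio *(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -ra               # reload exports (alternative to service nfs restart)
showmount -e localhost     # the export list should now include /minio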

The minio.yaml file

vi minio.yaml

apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  clusterIP: None
  ports:
    - port: 9000
      name: minio
  selector:
    app: minio
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 4
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        env:
        - name: MINIO_ACCESS_KEY
          value: "admin"
        - name: MINIO_SECRET_KEY
          value: "cgOtPXKTTjtzg6FN"
        image: minio/minio
        imagePullPolicy: IfNotPresent
        args:
        - server
        - http://minio-{0...3}.minio.minio.svc.cluster.local/data
        ports:
        - containerPort: 9000
          name: service  # the liveness/readiness probes below reference this port by name
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /minio/health/live
            port: service
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /minio/health/ready
            port: service
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 15
          successThreshold: 1
          timeoutSeconds: 1
        # These volume mounts are persistent. Each pod in the StatefulSet
        # gets a volume mounted based on this field.
        volumeMounts:
        - name: data
          mountPath: /data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      # storageClassName must match the PVs/PVCs created above. Read more: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
      storageClassName: nfs
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
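A minimal sketch of rolling the manifest out and checking the result, assuming the namespace, PVs, and PVCs from the previous steps are in place:

kubectl apply -f minio.yaml -n minio
kubectl get po -n minio -w                # wait for minio-0 ... minio-3 to reach Running
kubectl get svc minio-service -n minio    # external address of the LoadBalancer service

# without a load balancer, a port-forward is enough for a quick check
kubectl port-forward svc/minio-service 9000:9000 -n minio
# the S3 API then answers on http://localhost:9000 (credentials: admin / cgOtPXKTTjtzg6FN)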

Kubernetes image pull policy:
imagePullPolicy: IfNotPresent  # the image is pulled only if it is not already on the node, which is what an offline install relies on
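Because the nodes have no internet access, the minio/minio image must already exist on every node before the StatefulSet starts; a minimal sketch using docker save/load (use the equivalent ctr/crictl commands if the runtime is containerd):

# on a machine with internet access
docker pull minio/minio
docker save minio/minio -o minio.tar

# copy minio.tar to each node, then load it locally
docker load -i minio.tar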

Note: the cluster needs at least 4 nodes, and the 4 data disks should live on different servers.
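A quick check of the node requirement, assuming kubectl is pointed at the target cluster:

kubectl get nodes                                # expect 4 or more nodes
kubectl get nodes --no-headers | grep -cw Ready  # count of Ready nodes should be >= 4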
