Installing Sentry with Helm

Author: do_young | Published 2020-08-19 18:11

Preface

The original requirement was simply to install Sentry on Kubernetes. I had already deployed it once by working out the relationships between the Sentry components myself; later I learned that the official Kubernetes Helm charts repository contains a chart for Sentry, so this time I wanted to deploy Sentry with Helm. This post records the steps for deploying with the official chart, along with the problems I ran into due to differences in my local environment, and how I solved them.

Prerequisites

Installing this chart requires the following:

  • Kubernetes 1.4+ with Beta APIs enabled
  • helm >= v2.3.0, so that "weighted" hooks run in the correct order
  • PV provisioner support in the underlying infrastructure (persistent storage enabled)

Environment

  • Hosts: this installation uses a K8S cluster with one master and three workers (master0, worker1, worker2, worker3)

The cluster was installed with KubeOperator (which installs Helm and an Ingress controller by default).

  • Network: none of the hosts in the K8S cluster can reach the public internet
  • Storage: local-volume storage is used.
  • Services: external clients reach in-cluster services through nginx-ingress on port 80, with routing rules normally configured per request domain.

Installation steps

Download the charts

Since the hosts running K8S cannot reach the internet, I downloaded the charts with git on my office computer (which has internet access):

git clone https://github.com/kubernetes/charts

Then upload them to the K8S master node, for example with scp as sketched below.
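A minimal sketch of the upload, assuming SSH access to the master node; the hostname master0 matches my cluster, and the destination path is illustrative:

scp -r charts/stable/sentry charts/stable/postgresql charts/stable/redis root@master0:/root/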

Pre-install (dry run)

With the charts uploaded to the K8S master node, first dry-run the sentry install without changing anything:

helm install --dry-run --debug --namespace=sentry --name sentry  ./sentry

It fails with a message like the following:


[Screenshot: helm dry-run output reporting the missing redis and postgresql chart dependencies]

This is because Sentry's default installation depends on the redis and postgresql components, which have to be installed together with Sentry, and installing them together requires their charts to be available as dependencies.

As the message suggests, package the redis and postgresql charts from the repository into chart archives:

helm package ./postgresql
helm package ./redis

Then create a charts directory under the sentry chart directory and put the packaged archives there, as shown below:


[Screenshot: the sentry chart directory with the packaged postgresql and redis .tgz archives under charts/]
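The layout should end up roughly like this (the archive version numbers depend on the chart revisions you packaged; mine here are illustrative):

sentry/
├── Chart.yaml
├── requirements.yaml
├── values.yaml
├── templates/
└── charts/
    ├── postgresql-x.y.z.tgz
    └── redis-x.y.z.tgz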

With that done, the dry run should now render the deployment YAML for sentry:

helm install --dry-run --debug --namespace=sentry --name sentry  ./sentry

The default configuration is roughly:

  • external access on port 9000 through a LoadBalancer service
  • one sentry-web
  • two sentry-worker
  • one sentry-cron
  • one PostgreSQL database
  • Redis with one master and several replicas


[Screenshot: the manifests rendered by the dry run]

If the rendered configuration differs from what you actually need, edit the values.yaml file in the sentry directory. The table below, taken from the official documentation, lists the available parameters for reference.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| image.repository | Sentry image | library/sentry |
| image.tag | Sentry image tag | 9.1.2 |
| image.pullPolicy | Image pull policy | IfNotPresent |
| image.imagePullSecrets | Specify image pull secrets | [] |
| sentrySecret | Specify SENTRY_SECRET_KEY. If it isn't specified it will be generated automatically. | nil |
| web.podAnnotations | Web pod annotations | {} |
| web.podLabels | Web pod extra labels | {} |
| web.replicacount | Amount of web pods to run | 1 |
| web.resources.limits | Web resource limits | {cpu: 500m, memory: 500Mi} |
| web.resources.requests | Web resource requests | {cpu: 300m, memory: 300Mi} |
| web.env | Additional web environment variables | [{name: GITHUB_APP_ID}, {name: GITHUB_API_SECRET}] |
| web.nodeSelector | Node labels for web pod assignment | {} |
| web.affinity | Affinity settings for web pod assignment | {} |
| web.schedulerName | Name of an alternate scheduler for web pod | nil |
| web.tolerations | Toleration labels for web pod assignment | [] |
| web.livenessProbe.failureThreshold | The liveness probe failure threshold | 5 |
| web.livenessProbe.initialDelaySeconds | The liveness probe initial delay seconds | 50 |
| web.livenessProbe.periodSeconds | The liveness probe period seconds | 10 |
| web.livenessProbe.successThreshold | The liveness probe success threshold | 1 |
| web.livenessProbe.timeoutSeconds | The liveness probe timeout seconds | 2 |
| web.readinessProbe.failureThreshold | The readiness probe failure threshold | 10 |
| web.readinessProbe.initialDelaySeconds | The readiness probe initial delay seconds | 50 |
| web.readinessProbe.periodSeconds | The readiness probe period seconds | 10 |
| web.readinessProbe.successThreshold | The readiness probe success threshold | 1 |
| web.readinessProbe.timeoutSeconds | The readiness probe timeout seconds | 2 |
| web.priorityClassName | The priorityClassName on web deployment | nil |
| web.hpa.enabled | Boolean to create a HorizontalPodAutoscaler for web deployment | false |
| web.hpa.cputhreshold | CPU threshold percent for the web HorizontalPodAutoscaler | 60 |
| web.hpa.minpods | Min pods for the web HorizontalPodAutoscaler | 1 |
| web.hpa.maxpods | Max pods for the web HorizontalPodAutoscaler | 10 |
| cron.podAnnotations | Cron pod annotations | {} |
| cron.podLabels | Cron pod extra labels | {} |
| cron.replicacount | Amount of cron pods to run | 1 |
| cron.resources.limits | Cron resource limits | {cpu: 200m, memory: 200Mi} |
| cron.resources.requests | Cron resource requests | {cpu: 100m, memory: 100Mi} |
| cron.nodeSelector | Node labels for cron pod assignment | {} |
| cron.affinity | Affinity settings for cron pod assignment | {} |
| cron.schedulerName | Name of an alternate scheduler for cron pod | nil |
| cron.tolerations | Toleration labels for cron pod assignment | [] |
| cron.priorityClassName | The priorityClassName on cron deployment | nil |
| worker.podAnnotations | Worker pod annotations | {} |
| worker.podLabels | Worker pod extra labels | {} |
| worker.replicacount | Amount of worker pods to run | 2 |
| worker.resources.limits | Worker resource limits | {cpu: 300m, memory: 500Mi} |
| worker.resources.requests | Worker resource requests | {cpu: 100m, memory: 100Mi} |
| worker.nodeSelector | Node labels for worker pod assignment | {} |
| worker.schedulerName | Name of an alternate scheduler for worker | nil |
| worker.affinity | Affinity settings for worker pod assignment | {} |
| worker.tolerations | Toleration labels for worker pod assignment | [] |
| worker.concurrency | Celery worker concurrency | nil |
| worker.priorityClassName | The priorityClassName on workers deployment | nil |
| worker.hpa.enabled | Boolean to create a HorizontalPodAutoscaler for worker deployment | false |
| worker.hpa.cputhreshold | CPU threshold percent for the worker HorizontalPodAutoscaler | 60 |
| worker.hpa.minpods | Min pods for the worker HorizontalPodAutoscaler | 1 |
| worker.hpa.maxpods | Max pods for the worker HorizontalPodAutoscaler | 10 |
| user.create | Create the default admin | true |
| user.email | Username for default admin | admin@sentry.local |
| user.password | Password for default admin | Randomly generated |
| email.from_address | Email notifications are from | smtp |
| email.host | SMTP host for sending email | smtp |
| email.port | SMTP port | 25 |
| email.user | SMTP user | nil |
| email.password | SMTP password | nil |
| email.use_tls | SMTP TLS for security | false |
| email.enable_replies | Allow email replies | false |
| email.existingSecret | SMTP password from an existing secret | nil |
| email.existingSecretKey | Key to get from the email.existingSecret secret | smtp-password |
| service.type | Kubernetes service type | LoadBalancer |
| service.name | Kubernetes service name | sentry |
| service.externalPort | Kubernetes external service port | 9000 |
| service.internalPort | Kubernetes internal service port | 9000 |
| service.annotations | Service annotations | {} |
| service.nodePort | Kubernetes service NodePort port | Randomly chosen by Kubernetes |
| service.loadBalancerSourceRanges | Allow list for the load balancer | nil |
| ingress.enabled | Enable ingress controller resource | false |
| ingress.annotations | Ingress annotations | {} |
| ingress.labels | Ingress labels | {} |
| ingress.hostname | URL to address your Sentry installation | sentry.local |
| ingress.path | Path to address your Sentry installation | / |
| ingress.extraPaths | Ingress extra paths to prepend to every host configuration | [] |
| ingress.tls | Ingress TLS configuration | [] |
| postgresql.enabled | Deploy postgres server (see below) | true |
| postgresql.postgresqlDatabase | Postgres database name | sentry |
| postgresql.postgresqlUsername | Postgres username | postgres |
| postgresql.postgresqlHost | External postgres host | nil |
| postgresql.postgresqlPassword | External/Internal postgres password | nil |
| postgresql.postgresqlPort | External postgres port | 5432 |
| postgresql.existingSecret | Name of existing secret to use for the PostgreSQL password | nil |
| postgresql.existingSecretKey | Key to get from the postgresql.existingSecret secret | postgresql-password |
| redis.enabled | Deploy redis server (see below) | true |
| redis.host | External redis host | nil |
| redis.password | External redis password | nil |
| redis.port | External redis port | 6379 |
| redis.existingSecret | Name of existing secret to use for the Redis password | nil |
| redis.existingSecretKey | Key to get from the redis.existingSecret secret | redis-password |
| filestore.backend | Backend for Sentry Filestore | filesystem |
| filestore.filesystem.path | Location to store files for Sentry | /var/lib/sentry/files |
| filestore.filesystem.persistence.enabled | Enable Sentry files persistence using PVC | true |
| filestore.filesystem.persistence.existingClaim | Provide an existing PersistentVolumeClaim | nil |
| filestore.filesystem.persistence.storageClass | PVC Storage Class | nil (uses alpha storage class annotation) |
| filestore.filesystem.persistence.accessMode | PVC Access Mode | ReadWriteOnce |
| filestore.filesystem.persistence.size | PVC Storage Request | 10Gi |
| filestore.filesystem.persistence.persistentWorkers | Mount the PVC to Sentry workers, enabling features such as private source maps | false |
| filestore.gcs.credentialsFile | Filename of the service account in secret | credentials.json |
| filestore.gcs.secretName | The name of the secret for GCS access | nil |
| filestore.gcs.bucketName | The name of the GCS bucket | nil |
| filestore.s3.accessKey | S3 access key | nil |
| filestore.s3.secretKey | S3 secret key | nil |
| filestore.s3.existingSecret | Name of existing secret to use for the S3 keys | nil |
| filestore.s3.bucketName | The name of the S3 bucket | nil |
| filestore.s3.endpointUrl | The endpoint url of the S3 (used for "MinIO S3 Backend") | nil |
| filestore.s3.signature_version | S3 signature version (optional) | nil |
| filestore.s3.region_name | S3 region name (optional) | nil |
| filestore.s3.default_acl | S3 default acl (optional) | nil |
| config.configYml | Sentry config.yml file | `` |
| config.sentryConfPy | Sentry sentry.conf.py file | `` |
| metrics.enabled | Start an exporter for sentry metrics | false |
| metrics.nodeSelector | Node labels for metrics pod assignment | {} |
| metrics.tolerations | Toleration labels for metrics pod assignment | [] |
| metrics.affinity | Affinity settings for metrics pod | {} |
| metrics.schedulerName | Name of an alternate scheduler for metrics pod | nil |
| metrics.podLabels | Labels for metrics pod | nil |
| metrics.resources | Metrics resource requests/limit | {} |
| metrics.service.type | Kubernetes service type for metrics service | ClusterIP |
| metrics.service.labels | Additional labels for metrics service | {} |
| metrics.image.repository | Metrics exporter image repository | prom/statsd-exporter |
| metrics.image.tag | Metrics exporter image tag | v0.10.5 |
| metrics.image.PullPolicy | Metrics exporter image pull policy | IfNotPresent |
| metrics.serviceMonitor.enabled | If true, creates a Prometheus Operator ServiceMonitor (also requires metrics.enabled to be true) | false |
| metrics.serviceMonitor.namespace | Optional namespace in which Prometheus is running | nil |
| metrics.serviceMonitor.interval | How frequently to scrape metrics (falls back to Prometheus' default unless specified) | nil |
| metrics.serviceMonitor.selector | Defaults to a kube-prometheus install (CoreOS recommended), but should be set according to the Prometheus install | { prometheus: kube-prometheus } |
| hooks.affinity | Affinity settings for hooks pods | {} |
| hooks.tolerations | Toleration labels for hook pod assignment | [] |
| hooks.dbInit.enabled | Boolean to enable the dbInit job using a hook | true |
| hooks.dbInit.resources.limits | Hook job resource limits | {memory: 3200Mi} |
| hooks.dbInit.resources.requests | Hook job resource requests | {memory: 3000Mi} |
| serviceAccount.name | Name of the ServiceAccount to be used by access-controlled resources | autogenerated |
| serviceAccount.create | Configures if a ServiceAccount with this name should be created | true |
| serviceAccount.annotations | Configures annotations for the ServiceAccount | {} |
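In hindsight, overrides along these lines would have matched my environment (a sketch, not the full file; the ingress host and storage class are the values used elsewhere in this post):

service:
  type: ClusterIP              # no cloud LoadBalancer on-prem; expose through the ingress instead
ingress:
  enabled: true
  hostname: sentry.apps.mycluster.k8s.com
filestore:
  filesystem:
    persistence:
      storageClass: storageclass-default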

After adjusting the configuration, dry-run the install again with:

helm install --dry-run --debug --namespace=sentry --name sentry  -f ./sentry/values.yaml  ./sentry

Download the dependency images

Because the environment hosting the K8S cluster cannot reach the internet, the images have to be moved by hand (a sketch for one image follows the list below):

  • pull the images (docker pull)
  • export them (docker save > ...) or push them to a private registry (docker push)
  • import them on the cluster nodes (docker load < ...)

The default installation depends on the following images:

bitnami/postgresql                                               11.7.0-debian-10-r9   
bitnami/postgres-exporter                                        0.8.0-debian-10-r28   
bitnami/redis                                                    5.0.7-debian-10-r32   
bitnami/redis-exporter                                           1.4.0-debian-10-r3    
sentry                                                           9.1.2
prom/node-exporter                                               v0.18.1 
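A minimal sketch for moving one of these images across the air gap, assuming docker is available on both sides; repeat for each image in the list:

# on the machine with internet access
docker pull bitnami/redis:5.0.7-debian-10-r32
docker save bitnami/redis:5.0.7-debian-10-r32 > redis.tar
# copy redis.tar to each K8S node, then:
docker load < redis.tar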

Install

With the images in place, run the actual install:

helm install --namespace=sentry --name sentry  --wait ./sentry

Note: We have to use the --wait flag for initial creation because the database creation takes longer than the default 300 seconds
Under normal circumstances the installation succeeds after about six minutes, after which the service is reachable on port 9000.

[Screenshot: the Sentry web UI reachable on port 9000]
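To confirm everything came up, listing the workloads in the namespace is a quick check (pod name suffixes vary per install):

kubectl get pods -n sentry
kubectl get svc -n sentry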

Uninstall

Uninstall the release

If the installation goes wrong, you can uninstall it as follows.

helm delete  --purge sentry

Remove the PVs

 kubectl delete -f pv.yaml 
 kubectl patch pv   local-pv-sentry-postgres  -p '{"metadata":{"finalizers":null}}'
 kubectl patch pv   local-pv-sentry-redis-master  -p '{"metadata":{"finalizers":null}}'
 kubectl patch pv   local-pv-sentry-redis-slave -p '{"metadata":{"finalizers":null}}'
 kubectl patch pv   local-pv-sentry-redis-slave-1 -p '{"metadata":{"finalizers":null}}'

Other cleanup

After deleting the release, check what is left in the sentry namespace:

kubectl get all -n sentry

In my case the job.batch and the pod it created had not been deleted, so I removed them with kubectl delete, as follows:

  kubectl delete pod -n sentry sentry-db-init-${VAR}
  kubectl delete job.batch -n sentry sentry-db-init

Problems

PostgreSQL

PostgreSQL usually causes no problems; if it does, see the following excerpt from the official docs.

By default, PostgreSQL is installed as part of the chart. To use an external PostgreSQL server set postgresql.enabled to false and then set postgresql.postgresqlHost and postgresql.postgresqlPassword. The other options (postgresql.postgresqlDatabase, postgresql.postgresqlUsername and postgresql.postgresqlPort) may also want changing from their default values.
To avoid issues when upgrading this chart, provide postgresql.postgresqlPassword for subsequent upgrades. This is due to an issue in the PostgreSQL chart where the password will otherwise be overwritten with randomly generated passwords. See https://github.com/helm/charts/tree/master/stable/postgresql#upgrade for more detail.

Redis

Redis usually causes no problems; if it does, see the following excerpt from the official docs.

By default, Redis is installed as part of the chart. To use an external Redis server/cluster set redis.enabled to false and then set redis.host. If your redis cluster uses password define it with redis.password, otherwise just omit it. Check the table above for more configuration options.

Ingress

The default configuration exposes the service through a LoadBalancer. Unless you are running in a public-cloud container environment, no EXTERNAL-IP will be provisioned, so there is no external entry point to the service. In that case the configuration can be changed to expose the service through an ingress instead.

This chart provides support for the Ingress resource. If you have an available Ingress Controller such as Nginx or Traefik you may want to set ingress.enabled to true and choose an ingress.hostname for the URL. Then, you should be able to access the installation using that address.

Because I only noticed this parameter afterwards, I deleted the LoadBalancer service by hand and manually created an ingress, as shown below:


[Screenshot: the manually created ingress for sentry]
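Reconstructed, that manual ingress looked roughly like this (host and backend match the values used elsewhere in this post):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sentry
  namespace: sentry
spec:
  rules:
  - host: sentry.apps.mycluster.k8s.com
    http:
      paths:
      - backend:
          serviceName: sentry
          servicePort: 9000
        path: /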

This ingress routes to port 9000 of the service named sentry, so also verify that this service exists:


[Screenshot: the sentry service exposing port 9000]
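A quick way to check both the service and whether it has backing pods:

kubectl get svc -n sentry sentry
kubectl get endpoints -n sentry sentry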

Persistence

This deserves special attention. It is the part of the documentation I skipped while installing and only noticed after the installation had finished.

This chart is capable of mounting the sentry-data PV in the Sentry worker and cron pods. This feature is disabled by default, but is needed for some advanced features such as private sourcemaps.
You may enable mounting of the sentry-data PV across worker and cron pods by changing filestore.filesystem.persistence.persistentWorkers to true. If you plan on deploying Sentry containers across multiple nodes, you may need to change your PVC's access mode to ReadWriteMany and check that your PV supports mounting across multiple nodes.

I had not changed any of these settings, and as a result the installation failed: many pods were stuck in the Pending state. Only by inspecting the container status did I find that the PVCs had no matching PVs, so I had to create the PVs by hand, for example:

apiVersion: v1
kind: PersistentVolume
metadata:
  finalizers:
  - kubernetes.io/pv-protection
  name: local-pv-sentry
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  claimRef:            # pre-bind this PV to the sentry filestore PVC
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: sentry
    namespace: sentry
  local:               # backing directory on the node itself
    path: /home/local-volume/sentry
  nodeAffinity:        # a local volume must be pinned to the node that holds the directory
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker1.mycluster.k8s.com
  persistentVolumeReclaimPolicy: Retain
  storageClassName: storageclass-default
  volumeMode: Filesystem
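The backing directory has to exist on the target node before the PV can be used; the PVs for postgres and redis listed in the uninstall section are created the same way (paths and names follow my environment):

# on worker1
mkdir -p /home/local-volume/sentry

# on the master node
kubectl apply -f pv.yaml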

If a PVC ends up in the Lost state against its PV, you also need to delete the PV or PVC and recreate it. That usually requires clearing their finalizers first, for which the following commands are needed:

 kubectl patch pvc -n ${NAME_SPACE}  ${PVC_NAME} -p '{"metadata":{"finalizers":null}}'
 kubectl patch pv   ${PV_NAME} -p '{"metadata":{"finalizers":null}}'

Create a login user

If, like me, you finish the default installation and can reach the login page but have no idea what the credentials are, you can create a login user with the following commands.

kubectl exec -it -n sentry $(kubectl get pods  -n sentry  |grep sentry-web |awk '{print $1}') bash
sentry createuser
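createuser prompts interactively; as far as I recall, the CLI also accepts non-interactive flags, something like the line below (treat the exact flags as an assumption and confirm with sentry createuser --help):

sentry createuser --email admin@example.com --password 'changeme' --superuser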

Fronting the K8S ingress with an nginx reverse proxy

Because of constraints in the deployment environment, traffic has to go through an nginx reverse proxy to reach the ingress, which then routes to the sentry service by its internal domain name. The relevant configuration:

  • nginx configuration
    First configure a location in nginx so that requests for the /sentry sub-path of the domain are forwarded to the proxy_sentry upstream.
    The proxy_set_header Host value must be the HOST name of the ingress rule, e.g. "sentry.apps.mycluster.k8s.com":

        location /sentry {
            proxy_pass http://proxy_sentry;
            proxy_set_header Host sentry.apps.mycluster.k8s.com;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 50m;
        }

proxy_sentry is the upstream pointing at the IPs and port of the hosts where the ingress controller runs:

    upstream proxy_sentry {
        server 192.168.0.1:80 weight=1;
        server 192.168.0.2:80 weight=1;
    }
  • ingress configuration
    Since the front proxy uses the /sentry sub-path as its routing rule, the requests arriving at the ingress look like:

    http://<domain>/sentry/

Sentry itself, however, is not deployed under a /sentry path, so the ingress must strip the /sentry prefix from the forwarded requests. From the ingress-nginx documentation I learned that the nginx.ingress.kubernetes.io/rewrite-target annotation does exactly that. The resulting ingress configuration is shown below. (Note: I use the nginx ingress; with other ingress implementations this method will probably not work.)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # rewrite to the second capture group of the path below, dropping the /sentry prefix
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: sentry
  namespace: sentry
spec:
  rules:
  - host: sentry.apps.mycluster.k8s.com
    http:
      paths:
      - backend:
          serviceName: sentry
          servicePort: 9000
        path: /sentry(/|$)(.*)
status:
  loadBalancer: {}
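A quick end-to-end smoke test of the rewrite from outside (the domain is the placeholder used in the DSN examples below):

curl -i http://www.internet-domain.com/sentry/

Anything other than a 404 from nginx suggests the request made it through the proxy and the ingress to Sentry.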

Once this is in place, you can configure clients with the DSN of the Sentry project you created; only one small change is needed. If the DSN given by the project is:

http://78fc4fd3aa594897aaddbb1523eaaco6@www.internet-domain.com/1

then you need to insert the sub-path (sentry) after the domain, giving:

http://78fc4fd3aa594897aaddbb1523eaaco6@www.internet-domain.com/sentry/1

If the domain has a TLS certificate in front, just change the protocol to https:

https://78fc4fd3aa594897aaddbb1523eaaco6@www.internet-domain.com/1
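To verify that events actually get through the whole chain, one can post a minimal event to the store endpoint derived from the DSN. This is a sketch of the classic Sentry 9 ingestion call; the key and domain are the placeholders above:

curl -i 'http://www.internet-domain.com/sentry/api/1/store/' \
  -H 'X-Sentry-Auth: Sentry sentry_version=7, sentry_key=78fc4fd3aa594897aaddbb1523eaaco6' \
  -H 'Content-Type: application/json' \
  -d '{"message": "helm install smoke test"}'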
