Setting Up the Ruoyi Project in a Cloud Environment (kubeSphere)

Author: 攻城老狮 | Published 2022-05-01 11:31

    1 Ruoyi Local Environment Setup

    1. Install the middleware
    MySQL, Redis, Nacos (2.x.x), Node.js
    
    2. Create the databases and import the tables
    -- ry-cloud  -> quartz.sql, ry_20210908.sql
    -- ry-config -> ry_config_20220424.sql
    -- ry-seata  -> ry_seata_20210128.sql
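    A minimal sketch of the import, assuming a local MySQL reachable as root and the SQL scripts in the current directory:
    # create the three databases (names must match the mapping above)
    mysql -uroot -p -e 'CREATE DATABASE `ry-cloud` DEFAULT CHARACTER SET utf8mb4;'
    mysql -uroot -p -e 'CREATE DATABASE `ry-config` DEFAULT CHARACTER SET utf8mb4;'
    mysql -uroot -p -e 'CREATE DATABASE `ry-seata` DEFAULT CHARACTER SET utf8mb4;'
    # import each script into its database
    mysql -uroot -p ry-cloud < quartz.sql
    mysql -uroot -p ry-cloud < ry_20210908.sql
    mysql -uroot -p ry-config < ry_config_20220424.sql
    mysql -uroot -p ry-seata < ry_seata_20210128.sql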
    
    3. Reconfigure nacos to load its configuration from MySQL

    Note: the db username and password are those of the local MySQL instance, and the configuration database it connects to is ry-config (matching db.url.0 below)

    ### If use MySQL as datasource:
    spring.datasource.platform=mysql
    
    ### Count of DB:
    db.num=1
    
    ### Connect URL of DB:
    db.url.0=jdbc:mysql://127.0.0.1:3306/ry-config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
    db.user.0=root
    db.password.0=199748
    
    4. Start nacos
    ./startup.sh -m standalone
    
    Username: nacos
    Password: nacos
    
    image-20220429002945661.png
    5. In the nacos config center, point the redis and mysql connections in each configuration file at the correct endpoints and update the usernames and passwords
    6. Start redis
    redis-server redis.conf
    # verify it started
    redis-cli -p 6379
    ping    # should reply PONG
    
    7. Install the front-end dependencies and start the dev server
    # from inside the ruoyi-ui module
    npm install --registry=https://registry.npmmirror.com
    # start
    npm run dev
    
    8. Start the back-end modules
    image-20220429003128899.png
    9. The ruoyi system opens as expected
    image-20220429003210217.png

    2 Ruoyi Cloud Environment Setup

    1631670037332-4eab3ef9-8e5f-48ef-aed5-c2792802aeb7.png

    2.1 Middleware Setup

    2.1.1 Three Essentials of Application Deployment

    What to settle before deploying an application:

    1. How the application is deployed

    2. What data the application mounts (data, configuration files)

    3. How the application is accessed

    2.1.2 Deploying MySQL and Redis

    Deploy MySQL
    Deploy Redis
    
    image-20220430185918459.png
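    A quick way to check that both services are reachable inside the cluster, assuming they live in the ruoyi project; mysql.ruoyi appears later in this article, while redis.ruoyi is an assumed service name:
    # run a throwaway pod in the cluster to test DNS and connectivity
    kubectl run nettest -n ruoyi --rm -it --image=busybox --restart=Never -- sh
    # inside the pod:
    nslookup mysql.ruoyi       # should resolve to the MySQL service
    telnet mysql.ruoyi 3306    # TCP check against MySQL
    telnet redis.ruoyi 6379    # TCP check against Redis (name assumed)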

    2.1.3 Deploying Nacos

    1. Migrate the local database tables into the cluster's MySQL
    image-20220430190758094.png
    2. Create the nacos configuration file
    • application.properties

    Note: change the MySQL connection address to the cluster DNS name "mysql.ruoyi" and the password to the cluster password "wfEaycHCEf"

    #
    # Copyright 1999-2021 Alibaba Group Holding Ltd.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #      http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    
    #*************** Spring Boot Related Configurations ***************#
    ### Default web context path:
    server.servlet.contextPath=/nacos
    ### Default web server port:
    server.port=8848
    
    #*************** Network Related Configurations ***************#
    ### If prefer hostname over ip for Nacos server addresses in cluster.conf:
    # nacos.inetutils.prefer-hostname-over-ip=false
    
    ### Specify local server's IP:
    # nacos.inetutils.ip-address=
    
    
    #*************** Config Module Related Configurations ***************#
    ### If use MySQL as datasource:
    spring.datasource.platform=mysql
    
    ### Count of DB:
    db.num=1
    
    ### Connect URL of DB:
    db.url.0=jdbc:mysql://mysql.ruoyi:3306/ry-config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
    db.user.0=root
    db.password.0=wfEaycHCEf
    
    ### Connection pool configuration: hikariCP
    db.pool.config.connectionTimeout=30000
    db.pool.config.validationTimeout=10000
    db.pool.config.maximumPoolSize=20
    db.pool.config.minimumIdle=2
    
    #*************** Naming Module Related Configurations ***************#
    ### Data dispatch task execution period in milliseconds: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.delayMs
    # nacos.naming.distro.taskDispatchPeriod=200
    
    ### Data count of batch sync task: Will removed on v2.1.X. Deprecated
    # nacos.naming.distro.batchSyncKeyCount=1000
    
    ### Retry delay in milliseconds if sync task failed: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.retryDelayMs
    # nacos.naming.distro.syncRetryDelay=5000
    
    ### If enable data warmup. If set to false, the server would accept request without local data preparation:
    # nacos.naming.data.warmup=true
    
    ### If enable the instance auto expiration, kind like of health check of instance:
    # nacos.naming.expireInstance=true
    
    ### will be removed and replaced by `nacos.naming.clean` properties
    nacos.naming.empty-service.auto-clean=true
    nacos.naming.empty-service.clean.initial-delay-ms=50000
    nacos.naming.empty-service.clean.period-time-ms=30000
    
    ### Add in 2.0.0
    ### The interval to clean empty service, unit: milliseconds.
    # nacos.naming.clean.empty-service.interval=60000
    
    ### The expired time to clean empty service, unit: milliseconds.
    # nacos.naming.clean.empty-service.expired-time=60000
    
    ### The interval to clean expired metadata, unit: milliseconds.
    # nacos.naming.clean.expired-metadata.interval=5000
    
    ### The expired time to clean metadata, unit: milliseconds.
    # nacos.naming.clean.expired-metadata.expired-time=60000
    
    ### The delay time before push task to execute from service changed, unit: milliseconds.
    # nacos.naming.push.pushTaskDelay=500
    
    ### The timeout for push task execute, unit: milliseconds.
    # nacos.naming.push.pushTaskTimeout=5000
    
    ### The delay time for retrying failed push task, unit: milliseconds.
    # nacos.naming.push.pushTaskRetryDelay=1000
    
    ### Since 2.0.3
    ### The expired time for inactive client, unit: milliseconds.
    # nacos.naming.client.expired.time=180000
    
    #*************** CMDB Module Related Configurations ***************#
    ### The interval to dump external CMDB in seconds:
    # nacos.cmdb.dumpTaskInterval=3600
    
    ### The interval of polling data change event in seconds:
    # nacos.cmdb.eventTaskInterval=10
    
    ### The interval of loading labels in seconds:
    # nacos.cmdb.labelTaskInterval=300
    
    ### If turn on data loading task:
    # nacos.cmdb.loadDataAtStart=false
    
    
    #*************** Metrics Related Configurations ***************#
    ### Metrics for prometheus
    #management.endpoints.web.exposure.include=*
    
    ### Metrics for elastic search
    management.metrics.export.elastic.enabled=false
    #management.metrics.export.elastic.host=http://localhost:9200
    
    ### Metrics for influx
    management.metrics.export.influx.enabled=false
    #management.metrics.export.influx.db=springboot
    #management.metrics.export.influx.uri=http://localhost:8086
    #management.metrics.export.influx.auto-create-db=true
    #management.metrics.export.influx.consistency=one
    #management.metrics.export.influx.compressed=true
    
    #*************** Access Log Related Configurations ***************#
    ### If turn on the access log:
    server.tomcat.accesslog.enabled=true
    
    ### The access log pattern:
    server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i
    
    ### The directory of access log:
    server.tomcat.basedir=
    
    #*************** Access Control Related Configurations ***************#
    ### If enable spring security, this option is deprecated in 1.2.0:
    #spring.security.enabled=false
    
    ### The ignore urls of auth, is deprecated in 1.2.0:
    nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**
    
    ### The auth system to use, currently only 'nacos' and 'ldap' is supported:
    nacos.core.auth.system.type=nacos
    
    ### If turn on auth system:
    nacos.core.auth.enabled=false
    
    ### worked when nacos.core.auth.system.type=ldap,{0} is Placeholder,replace login username
    # nacos.core.auth.ldap.url=ldap://localhost:389
    # nacos.core.auth.ldap.userdn=cn={0},ou=user,dc=company,dc=com
    
    ### The token expiration in seconds:
    nacos.core.auth.default.token.expire.seconds=18000
    
    ### The default token:
    nacos.core.auth.default.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789
    
    ### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
    nacos.core.auth.caching.enabled=true
    
    ### Since 1.4.1, Turn on/off white auth for user-agent: nacos-server, only for upgrade from old version.
    nacos.core.auth.enable.userAgentAuthWhite=false
    
    ### Since 1.4.1, worked when nacos.core.auth.enabled=true and nacos.core.auth.enable.userAgentAuthWhite=false.
    ### The two properties is the white list for auth and used by identity the request from other server.
    nacos.core.auth.server.identity.key=serverIdentity
    nacos.core.auth.server.identity.value=security
    
    #*************** Istio Related Configurations ***************#
    ### If turn on the MCP server:
    nacos.istio.mcp.server.enabled=false
    
    #*************** Core Related Configurations ***************#
    
    ### set the WorkerID manually
    # nacos.core.snowflake.worker-id=
    
    ### Member-MetaData
    # nacos.core.member.meta.site=
    # nacos.core.member.meta.adweight=
    # nacos.core.member.meta.weight=
    
    ### MemberLookup
    ### Addressing pattern category, If set, the priority is highest
    # nacos.core.member.lookup.type=[file,address-server]
    ## Set the cluster list with a configuration file or command-line argument
    # nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809
    ## for AddressServerMemberLookup
    # Maximum number of retries to query the address server upon initialization
    # nacos.core.address-server.retry=5
    ## Server domain name address of [address-server] mode
    # address.server.domain=jmenv.tbsite.net
    ## Server port of [address-server] mode
    # address.server.port=8080
    ## Request address of [address-server] mode
    # address.server.url=/nacos/serverlist
    
    #*************** JRaft Related Configurations ***************#
    
    ### Sets the Raft cluster election timeout, default value is 5 second
    # nacos.core.protocol.raft.data.election_timeout_ms=5000
    ### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute
    # nacos.core.protocol.raft.data.snapshot_interval_secs=30
    ### raft internal worker threads
    # nacos.core.protocol.raft.data.core_thread_num=8
    ### Number of threads required for raft business request processing
    # nacos.core.protocol.raft.data.cli_service_thread_num=4
    ### raft linear read strategy. Safe linear reads are used by default, that is, the Leader tenure is confirmed by heartbeat
    # nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe
    ### rpc request timeout, default 5 seconds
    # nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000
    
    #*************** Distro Related Configurations ***************#
    
    ### Distro data sync delay time, when sync task delayed, task will be merged for same data key. Default 1 second.
    # nacos.core.protocol.distro.data.sync.delayMs=1000
    
    ### Distro data sync timeout for one sync data, default 3 seconds.
    # nacos.core.protocol.distro.data.sync.timeoutMs=3000
    
    ### Distro data sync retry delay time when sync data failed or timeout, same behavior with delayMs, default 3 seconds.
    # nacos.core.protocol.distro.data.sync.retryDelayMs=3000
    
    ### Distro data verify interval time, verify synced data whether expired for a interval. Default 5 seconds.
    # nacos.core.protocol.distro.data.verify.intervalMs=5000
    
    ### Distro data verify timeout for one verify, default 3 seconds.
    # nacos.core.protocol.distro.data.verify.timeoutMs=3000
    
    ### Distro data load retry delay when load snapshot data failed, default 30 seconds.
    # nacos.core.protocol.distro.data.load.retryDelayMs=30000
    
    3. Choose the nacos service type (each nacos instance's domain name must be recorded, so create it as a stateful service)
    image-20220430191658905.png
    4. Add the nacos container and set the environment variable MODE=standalone
    image-20220430191742714.png
    5. Mount the configuration file
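    In kubeSphere the mount is a ConfigMap attached through the volume UI; the CLI equivalent, assuming the application.properties above is saved locally and the project is named ruoyi, is roughly:
    # create a ConfigMap from the file above
    kubectl create configmap nacos-config -n ruoyi \
      --from-file=application.properties
    # then mount it into the nacos container under /home/nacos/conf/
    # (the config directory used by the official nacos image)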

    6. Create a service so that nacos is reachable from outside the cluster

    image-20220430195240357.png
    7. The external connection test succeeds
    image-20220430195330444.png
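    A quick check from outside the cluster, assuming the NodePort shown in the screenshot (substitute your node IP and port):
    # the console should answer with the login page
    curl http://<node-ip>:<node-port>/nacos/
    # log in through the API with the default nacos/nacos account
    curl -X POST 'http://<node-ip>:<node-port>/nacos/v1/auth/login' \
      -d 'username=nacos&password=nacos'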

    2.2 Service Setup

    2.2.1 Nacos Configuration

    1. Create a prod namespace for the production configuration
    image-20220430202117447.png
    2. Import the configuration files from the public namespace into prod and rename them
    image-20220430202256423.png
    3. In the configuration files, change the redis and mysql dependencies to the cluster DNS names and passwords
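    An illustrative excerpt of such a change, e.g. in ruoyi-system-prod.yml; the exact key paths depend on the RuoYi version, and redis.ruoyi is an assumed service name:
    spring:
      redis:
        host: redis.ruoyi    # was 127.0.0.1; cluster DNS name (assumed)
        port: 6379
      datasource:
        url: jdbc:mysql://mysql.ruoyi:3306/ry-cloud?useUnicode=true&characterEncoding=utf8&useSSL=false&serverTimezone=UTC
        username: root
        password: wfEaycHCEf # cluster MySQL password from section 2.1.3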

    2.2.2 Packaging the Back-End Microservice Images

    1. Use Maven to package every microservice involved into a jar, e.g. ruoyi-auth.jar (build sketch below)
    image-20220430232445437.png
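    The build itself is a stock Maven run from the project root; a sketch, skipping tests:
    # each module's jar lands in that module's target/ directory
    mvn clean package -Dmaven.test.skip=true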
    2. Following the Dockerfile conventions below, prepare each microservice

    Note: activate the prod profile, change the nacos address to the cluster DNS name, and point nacos configuration loading at the prod namespace

    FROM openjdk:8-jdk
    LABEL maintainer=yaoqijun
    
    
    ENV PARAMS="--server.port=8080 --spring.profiles.active=prod --spring.cloud.nacos.discovery.server-addr=nacos.ruoyi:8848 --spring.cloud.nacos.config.server-addr=nacos.ruoyi:8848 --spring.cloud.nacos.config.namespace=prod --spring.cloud.nacos.config.file-extension=yml"
    RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone
    
    COPY target/*.jar /app.jar
    EXPOSE 8080
    
    ENTRYPOINT ["/bin/sh","-c","java -Dfile.encoding=utf8 -Djava.security.egd=file:/dev/./urandom -jar app.jar ${PARAMS}"]
    
    image-20220430233247817.png
    3. Build the Docker image
    docker build -t <image-name>:<tag> -f Dockerfile .
    
    image-20220501001048486.png
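    Because every service shares the same Dockerfile layout, the builds can be scripted; a sketch assuming the standard RuoYi-Cloud module directories:
    # module list is an assumption based on the usual RuoYi-Cloud layout
    for mod in ruoyi-auth ruoyi-gateway ruoyi-modules/ruoyi-system; do
      name=$(basename "$mod")
      (cd "$mod" && docker build -t "$name:v1.0" -f Dockerfile .)
    done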

    2.2.3 Aliyun Image Registry

    1. Enable Alibaba Cloud Container Registry
    image-20220430230221100.png
    2. Create the yqj_ruoyi namespace
    image-20220430230431666.png
    3. Push the images to the Alibaba Cloud registry
    # 1. log in
    docker login --username=yorickjun registry.cn-beijing.aliyuncs.com
    
    # 2. tag
    docker tag ruoyi-system:v1.0 registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-system:v1.0
    # ...
    
    # 3. push
    docker push registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-system:v1.0
    # ...
    
    image-20220501093211647.png

    2.2.4 Building the Back-End Microservices

    The microservices are all stateless: they need no mounted volumes and no configuration mounts. As the Dockerfile shows, each container reads its yml configuration from the specified nacos namespace on startup.
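    The kubectl equivalent of the kubeSphere form, assuming a ruoyi project and the image pushed above:
    # stateless deployment: no volumes; config comes from nacos at startup
    kubectl create deployment ruoyi-system -n ruoyi \
      --image=registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-system:v1.0
    # expose it inside the cluster so the gateway can reach it
    kubectl expose deployment ruoyi-system -n ruoyi --port=8080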

    2.2.5 Packaging and Building the Front End

    1. Change the gateway address the front end uses in production to the cloud gateway's DNS name
    image-20220501104105508.png
    2. Build the front end and copy the generated dist folder into the nginx directory under ruoyi's docker folder
    npm run build:prod
    
    image-20220501104433438.png
    3. Adjust the nginx configuration file (sketch below)
    image-20220501104631781.png
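    A minimal sketch of the relevant part of nginx.conf, following the stock RuoYi front-end config; the in-cluster gateway name ruoyi-gateway.ruoyi is an assumption, adjust to your cluster:
    server {
        listen      80;
        server_name localhost;
        # serve the built dist files
        location / {
            root  /home/ruoyi/projects/ruoyi-ui;
            index index.html index.htm;
        }
        # forward API calls to the cloud gateway (service name assumed)
        location /prod-api/ {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://ruoyi-gateway.ruoyi:8080/;
        }
    }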
    4. Upload the files to the cloud server, then build the image and push it to the Alibaba Cloud registry
    # build the image (tagged directly for pushing)
    docker build -t registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-ui:v1.0 -f dockerfile .
    # push the image to Alibaba Cloud
    docker push registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-ui:v1.0
    
    5. Create a stateless service in kubeSphere and open an external port
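    A quick check before opening the page, assuming the service is named ruoyi-ui in the ruoyi project:
    # find the NodePort kubeSphere assigned
    kubectl get svc ruoyi-ui -n ruoyi
    # the front end should answer with the login page
    curl -I http://<node-ip>:<node-port>/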

    6. Access the system; adding data works

    image-20220501105708841.png image-20220501105802284.png
