Environment Preparation
- Set up four virtual machines: two to host the project, one as the master scheduler (master), and one as the backup scheduler (backup).
- Install keepalived and haproxy on both the master and backup schedulers:
yum -y install haproxy keepalived
- The two project VMs need the JDK configured and the nginx server installed.
- For JDK installation, see https://www.cnblogs.com/zhangyingai/p/7099006.html
- Install nginx:
yum install -y nginx
- Create the directories /home/project/api and /home/project/web to hold the backend project and the frontend project respectively.
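For reference, the directories can be created on each project node like this:
mkdir -p /home/project/api /home/project/web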
Backend Deployment
- Copy the packaged project jar to /home/project/api:
scp xxx.jar root@192.168.78.132:/home/project/api
- Go to /home/project/api and start the project:
nohup java -jar backstage-0.0.1-SNAPSHOT.jar &
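To confirm the service came up, you can check the log that nohup writes and hit the port directly (this assumes the backend listens on 8886, the port used in the HAProxy backend configuration below):
tail nohup.out                    # application log captured by nohup
curl -I http://127.0.0.1:8886     # should return HTTP response headers once the service is up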
- Once the backend service is reachable on both nodes, configure haproxy by editing the haproxy.cfg file:
vim /etc/haproxy/haproxy.cfg
The file contents are as follows:
global
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

listen stats
    bind *:1314
    stats enable
    stats refresh 30s
    stats hide-version
    stats uri /haproxystats
    stats realm Haproxy\stats
    stats auth guo:1234
    stats admin if TRUE

frontend main *:8886
    stats enable
    default_backend api

backend api
    balance roundrobin
    server http1 192.168.78.133:8886 maxconn 2000 weight 1 check
    server http2 192.168.78.132:8886 maxconn 2000 weight 1 check
For configuration details, see http://www.ttlsa.com/linux/haproxy-study-tutorial/. Let's look at the stats section first:
listen stats
    bind *:1314
    stats enable
    stats refresh 30s
    stats hide-version
    stats uri /haproxystats
    stats realm Haproxy\stats
    stats auth guo:1234
    stats admin if TRUE
This part configures the stats monitor. Visiting scheduler-IP:1314/haproxystats shows the status of the scheduled services. There are three settings to pay attention to here:
- bind: the monitor's port
- stats uri: the monitor's URL
- stats auth: the login username and password
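As a quick check you can also fetch the stats page from the command line, using the port, uri, and credentials configured above (replace <scheduler-IP> with the address of the machine running HAProxy):
curl -u guo:1234 http://<scheduler-IP>:1314/haproxystats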
frontend main *:8886
    stats enable
    default_backend api

backend api
    balance roundrobin
    server http1 192.168.78.133:8886 maxconn 2000 weight 1 check
    server http2 192.168.78.132:8886 maxconn 2000 weight 1 check
This part configures the frontend and the backend scheduling nodes. Since the frontend project has not been deployed yet, only the backend service nodes are listed here. When you access scheduler-IP:8886, the scheduler dispatches requests to the 133 and 132 nodes in turn; balance roundrobin selects the round-robin algorithm. To verify, shut down either service node: if the backend service is still reachable, the scheduler is working correctly.
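To apply the configuration and run this check, the sequence on the master scheduler might look like this (again, <scheduler-IP> is the address of the machine running HAProxy):
service haproxy restart               # start/reload HAProxy with the new configuration
curl -I http://<scheduler-IP>:8886    # repeat after stopping one backend node to verify failover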
- After verifying that the master scheduler works correctly, copy the haproxy.cfg file to the backup scheduler:
rsync -va /etc/haproxy/haproxy.cfg 192.168.78.130:/etc/haproxy/haproxy.cfg
- Edit the keepalived configuration file on the master scheduler:
vim /etc/keepalived/keepalived.conf
as follows:
! Configuration File for keepalived
global_defs {
    router_id haproxy_master
}

vrrp_script check_haproxy {
    script "/etc/keepalived/haproxy_check.sh"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    nopreempt
    interface eth0
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.78.100
    }
    track_script {
        check_haproxy
    }
}
- interface: the name of the network interface
- priority: the priority; keepalived uses it to decide which haproxy scheduler is assigned the virtual IP
- virtual_ipaddress: the virtual IP. This is the IP the frontend calls. The frontend accesses the virtual IP, keepalived maps it to whichever scheduler currently holds the virtual IP, and that scheduler then dispatches to the backend service nodes.
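Note that the interface value must match the actual NIC name on your machine; it is eth0 in this example but may be something like ens33 on a newer CentOS install. You can check it with:
ip a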
- Edit the keepalived configuration file on the backup scheduler:
vim /etc/keepalived/keepalived.conf
as follows:
! Configuration File for keepalived
global_defs {
    router_id haproxy_backup
}

vrrp_script check_haproxy {
    script "/etc/keepalived/haproxy_check.sh"
    interval 2
}

vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface eth0
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.78.100
    }
    track_script {
        check_haproxy
    }
}
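With both configuration files in place, start (or restart) keepalived on the master and the backup before running the test below:
service keepalived start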
- To test keepalived, run the following on the master scheduler:
ip a
You should see that the master has been assigned the virtual IP. Now stop keepalived on the master:
service keepalived stop
Check the addresses on the backup scheduler: the backup should now hold the virtual IP. Then start keepalived on the master again:
service keepalived start
The virtual IP is reassigned to the master, which proves the keepalived configuration works. If the master and the backup hold the virtual IP at the same time, the firewall is the likely cause; add the appropriate firewall rules or disable the firewall.
- The keepalived.conf files above contain a check_haproxy block, which runs a script that checks haproxy's health. If haproxy stops, keepalived itself keeps running, so the master would keep holding the virtual IP while haproxy is broken and requests could no longer reach the service nodes. To avoid this, we write a script that checks whether haproxy is still serving. Create haproxy_check.sh:
vim /etc/keepalived/haproxy_check.sh
as follows:
#!/bin/bash
# check haproxy
/usr/bin/curl -I http://127.0.0.1:8886 &>/dev/null
if [ $? -ne 0 ]; then
    /sbin/service keepalived stop
fi
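Since vrrp_script executes this path directly, the script generally needs to be executable:
chmod +x /etc/keepalived/haproxy_check.sh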
Now stop haproxy on the master:
service haproxy stop
The virtual IP moves to the backup scheduler. Start haproxy again:
service haproxy start
and start keepalived:
service keepalived start
The virtual IP returns to the master.
Frontend Deployment
- In the backend deployment above we configured a virtual IP, and the frontend project reaches the backend API service through it, so change the API request address in the frontend project to the virtual IP we just set up. Then build the frontend project and put it into /home/project/web on both service nodes:
scp -r {static,index.html} root@192.168.78.132:/home/project/web
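The same files also need to be copied to the other service node, for example:
scp -r {static,index.html} root@192.168.78.133:/home/project/web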
- Edit the nginx.conf file:
vim /etc/nginx/nginx.conf
as follows:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 8085;
        server_name 192.168.78.133;
        location / {
            root /home/project/web;
            index index.html;
        }
    }
}
The part to focus on here is the server block:
server {
    listen 8085;
    server_name 192.168.78.133;
    location / {
        root /home/project/web;
        index index.html;
    }
}
- listen: the port number
- server_name: the IP of the server the project runs on
- root /home/project/web: the path where the frontend project is placed
- index: the entry file
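Before starting nginx, it's worth validating the configuration syntax:
nginx -t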
- Start the nginx service:
service nginx start
then visit 192.168.78.133:8085.
- Once the page loads successfully, we return to the haproxy configuration. Now that there is a frontend project, haproxy.cfg needs some changes:
vim /etc/haproxy/haproxy.cfg
Only the last two sections need to be modified, as follows:
frontend main *:8886
    acl url_static path_beg -i /api
    use_backend api if url_static
    default_backend web

backend web
    balance roundrobin
    server web1 192.168.78.133:8085 weight 1 check
    server web2 192.168.78.132:8085 weight 1 check

backend api
    balance roundrobin
    server http1 192.168.78.133:8886 maxconn 2000 weight 1 check
    server http2 192.168.78.132:8886 maxconn 2000 weight 1 check
The acl here matches the URL: requests whose path begins with /api are dispatched to the backend api nodes, and everything else goes to the frontend web nodes. For details, see https://www.cnblogs.com/linuxzkq/p/4927285.html
- Copy the file to the backup scheduler again, overwriting the old one:
rsync -va /etc/haproxy/haproxy.cfg 192.168.78.130:/etc/haproxy/haproxy.cfg
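Then restart haproxy on both schedulers so the new configuration takes effect; a rough check of the routing through the virtual IP might look like this (the /api/xxx path is just a placeholder for one of your real API endpoints):
service haproxy restart
curl -I http://192.168.78.100:8886/          # should be served by a web node (nginx)
curl -I http://192.168.78.100:8886/api/xxx   # should be dispatched to an api node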
- Visit 192.168.78.100:8886; the project is reachable. With that, the whole setup is complete.