Tomcat Session Persistence: Implementation Methods and an MSM Configuration Example

Author: Net夜风 | Published 2018-10-13 01:41

    Tomcat session persistence methods

    (1) session sticky: a simple session-affinity mechanism, implemented on the scheduler;

    • source_ip: bind by source address
      • nginx: ip_hash
      • haproxy: source
      • lvs: sh
    • cookie: bind by cookie
      • nginx: hash
      • haproxy: cookie

    (2) session cluster: the delta session manager, Tomcat's built-in session clustering; no session binding is needed on the scheduler;
    (3) session server: redis (store) or memcached (cache); here memcached is used to build the session server. This is independent of the front-end scheduler: each Tomcat must be able to call the backend session server's API to store and retrieve session data.
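The idea behind the source-address binding in (1) can be sketched in a few lines of Python (a simplification, not any balancer's actual code; the backend addresses are the two Tomcat hosts used later): the scheduler hashes the client IP to a fixed backend, so the same client always lands on the same Tomcat.

```python
import hashlib

# Hypothetical backends, matching the two Tomcat hosts used below.
SERVERS = ["192.168.43.15:8080", "192.168.43.16:8080"]

def pick_backend(client_ip: str) -> str:
    """Hash the source address to a backend index, as nginx ip_hash,
    haproxy source, and lvs sh do in spirit."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# The mapping is deterministic: repeated requests from one client
# always reach the same backend.
assert pick_backend("10.0.0.7") == pick_backend("10.0.0.7")
```

The same determinism is also the weakness noted later: when a backend disappears, every client mapped to it loses its session.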

    Note: the two Tomcat hosts configured in the earlier LNAMT environment walkthrough serve as the backend hosts for these experiments.

    1. The session sticky approach

    Method 1: the mod_proxy_http module, with cookie-based session stickiness

    session-sticky1.png
      Edit the httpd configuration file:
      [root@localhost ~]# vim /etc/httpd/conf.d/mod_proxy_http.conf
          The content is as follows:
          Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
        <Proxy balancer://tcsrvs>
            BalancerMember http://192.168.43.15:8080 route=tomcatA loadfactor=1
            BalancerMember http://192.168.43.16:8080 route=tomcatB loadfactor=2
            ProxySet lbmethod=byrequests
            ProxySet stickysession=ROUTEID
        </Proxy>
        
        <VirtualHost *:80>
            ServerName www.inux.com
            ProxyVia ON
            ProxyRequests Off
            ProxyPreserveHost ON
            <Proxy *>
                Require all granted
            </Proxy>
            ProxyPass / balancer://tcsrvs/
            ProxyPassReverse / balancer://tcsrvs/
            <Location />
                Require all granted
            </Location>
        </VirtualHost>
    [root@localhost ~]# httpd -t
    Syntax OK
    [root@localhost ~]# systemctl reload httpd.service
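The routing decision produced by `stickysession=ROUTEID` can be sketched roughly as follows (a simplification, not mod_proxy_balancer's actual code): honor an existing ROUTEID cookie if it names a known route, otherwise balance by request count and return the chosen route so it can be emitted in the Set-Cookie header.

```python
from itertools import cycle

# Routes mirror the BalancerMember route= names in the config above.
WORKERS = {"tomcatA": "http://192.168.43.15:8080",
           "tomcatB": "http://192.168.43.16:8080"}
_round_robin = cycle(WORKERS.items())

def route_request(cookies: dict) -> tuple:
    """Return (route, backend). A valid ROUTEID cookie wins; otherwise
    fall back to byrequests-style rotation."""
    route = cookies.get("ROUTEID", "").lstrip(".")
    if route in WORKERS:
        return route, WORKERS[route]
    return next(_round_robin)

route, backend = route_request({})              # first request: balanced
assert route_request({"ROUTEID": "." + route}) == (route, backend)  # sticky
```

loadfactor weighting is omitted here; the point is only that the cookie, once set, pins the client to one worker.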
    

    Test by accessing from a browser:


    session-sticky2.png

    The mod_proxy_balancer module in httpd ships with a built-in management page:

    Add a location for the management page to the httpd configuration file above:
    [root@localhost ~]# vim /etc/httpd/conf.d/mod_proxy_http.conf
        <Location /balancer-manager>
            SetHandler balancer-manager
            ProxyPass !
            Require all granted
        </Location>
    [root@localhost ~]# httpd -t
    [root@localhost ~]# systemctl reload httpd.service
    

    In a browser, open: http://192.168.43.12/balancer-manager

    balancer-manager.png

    Method 2: the mod_proxy_ajp module

    session-sticky3.png
    [root@localhost ~]# httpd -M | grep "ajp" 
     proxy_ajp_module (shared)   # make sure the mod_proxy_ajp module is loaded and enabled
    [root@localhost ~]# mv /etc/httpd/conf.d/mod_proxy_http.conf  /etc/httpd/conf.d/mod_proxy_http.conf.bak
    [root@localhost ~]# vim /etc/httpd/conf.d/mod_proxy_ajp.conf
    Edit the content as follows:
        <Proxy balancer://tcsrvs>
                BalancerMember ajp://192.168.43.15:8009
                BalancerMember ajp://192.168.43.16:8009
                ProxySet lbmethod=byrequests
        </Proxy>
        
        <VirtualHost *:80>
                ServerName www.inux.com
                ProxyVia On
                ProxyRequests Off
                ProxyPreserveHost On
                <Proxy *>
                        Require all granted
                </Proxy>
                ProxyPass / balancer://tcsrvs/
                ProxyPassReverse / balancer://tcsrvs/
                <Location />
                        Require all granted
                </Location>
                <Location /balancer-manager>
                        SetHandler balancer-manager
                        ProxyPass !
                        Require all granted
                </Location>
        </VirtualHost>
    [root@localhost ~]# httpd -t
    [root@localhost ~]# systemctl restart httpd.service
    

    Browser access now round-robins across the backend Tomcat hosts.

    Method 3: nginx + tomcat

    session-sticky4.png
    <1> Source-address binding:
    
    Modify the upstream context in the nginx configuration file:
    [root@localhost ~]# vim /usr/local/nginx/conf/conf.d/proxy_nginx.conf
        upstream websrvs {
            ip_hash;                        # source-address binding
            server 192.168.43.15:8080;
            server 192.168.43.16:8080;
            # server 127.0.0.1:80 backup;   # "backup" cannot be combined with ip_hash
        }
        server {
            listen 80;
            server_name www.ilinux.io;
            index index.html index.jsp;
            location / {
                proxy_pass http://websrvs;
            }
        }
    
    
     <2> Cookie-based binding:
     [root@localhost ~]# vim /usr/local/nginx/conf/conf.d/proxy_nginx.conf
        upstream websrvs {
            # bind by cookie; the original "hash srv_id" would hash a constant string,
            # sending every request to the same backend
            hash $cookie_JSESSIONID;
            server 192.168.43.15:8080;
            server 192.168.43.16:8080;
            # server 127.0.0.1:80 backup;   # "backup" cannot be combined with "hash"
        }
        server {
            listen 80;
            server_name www.ilinux.io;
            index index.html index.jsp;
            location / {
                proxy_pass http://websrvs;
            }
        }
    [root@localhost ~]# nginx -t
    nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
    nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
    [root@localhost ~]# nginx -s reload
    
    Note: the drawback of session sticky is that it degrades the load-balancing effect, and when a host goes down the sessions on it are lost, so this approach is rarely used in practice. In real deployments, use a session cluster or a session server for session persistence instead.
    2. Tomcat Session Replication Cluster
    tomcat-Cluster.png

    (1) Enable the cluster by placing the following configuration inside the <Engine> (or <Host>) section of server.xml. (Note: this uses tomcat-8.5.34; the configuration differs slightly from earlier versions.)

    <Engine name="Catalina" defaultHost="localhost" jvmRoute="node6">   <!-- make sure this jvmRoute matches the route/worker name configured on the front-end proxy -->
           
         <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                    channelSendOptions="8">
                    <Manager className="org.apache.catalina.ha.session.DeltaManager"
                    expireSessionsOnShutdown="false"
                    notifyListenersOnReplication="true"/>
                    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
                    <Membership className="org.apache.catalina.tribes.membership.McastService"
                            address="228.0.0.4"
                            port="45564"
                            frequency="500"
                            dropTime="3000"/>
                    <!-- address="auto" resolves the local hostname and binds the resulting IP -->
                    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                            address="auto"
                            port="4000"
                            autoBind="100"
                            selectorTimeout="5000"
                            maxThreads="6"/>
    
                    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                    <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
                    </Sender>
                    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
                    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
                    <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
    
                    </Channel>
    
                    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                            filter=""/>
                    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
                    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                            tempDir="/tmp/war-temp/"
                            deployDir="/tmp/war-deploy/"
                            watchDir="/tmp/war-listen/"
                            watchEnabled="false"/>
    
                <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
                </Cluster>
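The McastService settings above (frequency="500", dropTime="3000") drive a simple heartbeat membership protocol: each node multicasts a heartbeat every frequency milliseconds, and a node that stays silent for longer than dropTime is dropped from the cluster. A rough sketch of that rule:

```python
FREQUENCY_MS = 500   # interval between multicast heartbeats
DROP_TIME_MS = 3000  # silence threshold before a member is dropped

def alive_members(last_heard: dict, now_ms: int) -> set:
    """Members whose last heartbeat arrived within DROP_TIME_MS."""
    return {m for m, t in last_heard.items() if now_ms - t <= DROP_TIME_MS}

heartbeats = {"node6": 10_000, "node7": 6_500}
# node7 has been silent for 3700 ms > dropTime, so it is considered failed:
assert alive_members(heartbeats, 10_200) == {"node6"}
```
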
    

    (2) Configure the webapps
    Edit WEB-INF/web.xml and add <distributable/>

    [root@node6 ~]# systemctl stop tomcat.service 
    [root@node6 ~]# cd /usr/local/tomcat
    [root@node6 tomcat]# cp /usr/local/tomcat/conf/web.xml /usr/local/tomcat/webapps/test/WEB-INF/
    [root@node6 tomcat]# vim /usr/local/tomcat/webapps/test/WEB-INF/web.xml
    Add `<distributable/>` before the first <servlet> element in the file.
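After the edit, the relevant part of web.xml should look roughly like this (only the <distributable/> element is new; everything else is the stock descriptor):

```xml
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
    <!-- marks this webapp's sessions as replicable across the cluster -->
    <distributable/>
    <!-- ... the rest of the stock web.xml (servlet definitions etc.) ... -->
</web-app>
```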
    

    Deploy the same two steps on the other Tomcat host as well.
    (3) Start Tomcat and check the log

    [root@node6 ~]# systemctl start tomcat.service
    [root@node6 ~]# tail -200 /usr/local/tomcat/logs/catalina.2018-10-11.log 
    ...
    11-Oct-2018 01:55:47.979 INFO [localhost-startStop-1] org.apache.catalina.ha.session.DeltaManager.getAllClusterSessions Manager [localhost#/test], requesting session state from [org.apache.catalina.tribes.membership.MemberImpl[tcp://{192, 168, 43, 15}:4000,{192, 168, 43, 15},4000, alive=15628, securePort=-1, UDP Port=-1, id={-94 23 -31 -98 -14 -48 67 -28 -91 -88 37 -65 11 99 96 66 }, payload={}, command={}, domain={}]]. This operation will timeout if no session state has been received within [60] seconds.
    ...
    

    (4) In a browser, open: http://192.168.43.11/test/
    Requests are dispatched to different backend hosts by the scheduler, but the session remains the same.

    tomcat-Cluster1.png tomcat-Cluster2.png
    3. session server: memcached-session-manager

    memcached-session-manager project pages:

    http://code.google.com/p/memcached-session-manager/, https://github.com/magro/memcached-session-manager

    memcached-msm.png

    (1) Install memcached on the two backend hosts, 192.168.43.13 and 192.168.43.14

    [root@localhost ~]# yum -y install memcached
    [root@localhost ~]# rpm -ql memcached
    /etc/sysconfig/memcached
    /usr/bin/memcached
    /usr/bin/memcached-tool
    /usr/lib/systemd/system/memcached.service
    /usr/share/doc/memcached-1.4.15
    /usr/share/doc/memcached-1.4.15/AUTHORS
    /usr/share/doc/memcached-1.4.15/CONTRIBUTORS
    /usr/share/doc/memcached-1.4.15/COPYING
    /usr/share/doc/memcached-1.4.15/ChangeLog
    /usr/share/doc/memcached-1.4.15/NEWS
    /usr/share/doc/memcached-1.4.15/README.md
    /usr/share/doc/memcached-1.4.15/protocol.txt
    /usr/share/doc/memcached-1.4.15/readme.txt
    /usr/share/doc/memcached-1.4.15/threads.txt
    /usr/share/man/man1/memcached-tool.1.gz
    /usr/share/man/man1/memcached.1.gz
    [root@localhost ~]# systemctl start memcached
    [root@localhost ~]# ss -tnlp | grep memcached
    LISTEN     0      128          *:11211                    *:*                   users:(("memcached",pid=17429,fd=26))
    LISTEN     0      128         :::11211                   :::*                   users:(("memcached",pid=17429,fd=27))
    

    (2) Configure MSM
    Download the following jar files into the lib directory under the Tomcat installation on each Tomcat node:

    javolution-5.4.3.1.jar                   
    msm-javolution-serializer-2.1.1.jar
    memcached-session-manager-2.1.1.jar      
    spymemcached-2.11.7.jar
    memcached-session-manager-tc8-2.1.1.jar
    

    On a host of each of the two Tomcats, define a Context for testing and create a session manager inside it, as shown below:

    <Context path="/test" docBase="test" reloadable="true">
         <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
                memcachedNodes="n1:192.168.43.13:11211,n2:192.168.43.14:11211"
                failoverNodes="n1"
                requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
                transcoderFactoryClass="de.javakaffee.web.msm.serializer.javolution.JavolutionTranscoderFactory"
              />
         </Context>
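A rough sketch of what failoverNodes="n1" means (a simplification of MSM's behavior, using the node ids from the config above): sessions are stored on the regular nodes first, and a failover node is used only when no regular node is available. MSM also records the chosen node in the session id (a suffix like -n2).

```python
# Node ids follow the memcachedNodes/failoverNodes attributes above;
# the addresses are the memcached hosts installed in step (1).
NODES = {"n1": "192.168.43.13:11211", "n2": "192.168.43.14:11211"}
FAILOVER = {"n1"}  # failoverNodes="n1"

def pick_node(available: set) -> str:
    """Prefer a regular node; use a failover node only as a last resort."""
    primaries = sorted(n for n in available if n not in FAILOVER)
    return (primaries or sorted(available))[0]

assert pick_node({"n1", "n2"}) == "n2"  # n2 is the primary session store
assert pick_node({"n1"}) == "n1"        # n1 steps in only when n2 is down
```

This is why the log excerpt below reports separate "node ids" and "failover node ids" lists.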
    

    Start tomcat:

    [root@node6 conf]# systemctl start tomcat
    

    Check the log:

    [root@node6 ~]# tail /usr/local/tomcat/logs/catalina.2018-10-13.log
    ...
        -  finished initialization:
        - sticky: true
        - operation timeout: 1000
        - node ids: [m1]
        - failover node ids: [m2]
        - storage key prefix: null
    

    On the front-end scheduler, reverse-proxy the two Tomcat hosts; the configuration is as follows:

    server {
        listen 80;
        server_name www.ilinux.io;
        index index.html index.jsp;
        location / {
            proxy_pass http://websrvs;
        }
    }
    upstream websrvs {
        server 192.168.43.15:8080;
        server 192.168.43.16:8080;
        server 127.0.0.1:80 backup;
    }
    

    Browser access test showing that session stickiness is preserved:


    session-server1.png session-server2.png

    Source: https://www.haomeiwen.com/subject/xsbnaftx.html