Cluster KeepAlived


Summary of Key Theory

How Keepalived Works

Layers 3, 4, and 5 correspond to the IP layer, TCP layer, and application layer of the TCP/IP stack. Keepalived provides switching and health-check functionality at all three levels.

  • Layer 3 check: an ICMP ping confirms whether a host is alive. Keepalived periodically sends ICMP packets (the same mechanism as the ordinary ping program) to the servers in the cluster; if a server's IP address does not respond, Keepalived reports that server as failed and removes it from the cluster. A typical case is a server that has been shut down improperly. Layer 3 thus uses IP reachability as the sole criterion for whether a server is working.
  • Layer 4 check: a port probe, e.g. on 80 or 3306. Health is judged by the state of a TCP port, and a server whose port is unreachable is removed from the cluster. For example, a web server normally listens on port 80; if Keepalived finds port 80 closed, it removes that server from the pool.
  • Layer 5 check: application-level, e.g. whether an HTTP request returns status code 200. This works at the application layer proper, is somewhat more complex than the layer 3 and 4 checks, and consumes more bandwidth. Keepalived checks the server application against user-defined criteria and removes any server that does not match (a config sketch of the layer 4 and layer 5 checks follows this list).
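Each check level corresponds to a health-check block in keepalived.conf. A minimal sketch of the layer 4 and layer 5 variants inside a virtual_server definition (the addresses, ports, and timeouts here are illustrative, not from the article):

real_server 192.168.70.103 80 {
    weight 1
    TCP_CHECK {                # layer 4: is TCP port 80 accepting connections?
        connect_port 80
        connect_timeout 3
    }
}
real_server 192.168.70.104 80 {
    weight 1
    HTTP_GET {                 # layer 5: does the application answer correctly?
        url {
            path /
            status_code 200    # remove the server unless it returns HTTP 200
        }
        connect_timeout 3
        nb_get_retry 3
        delay_before_retry 3
    }
}

A layer 3 style check can be approximated with a MISC_CHECK block that runs a ping script.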

Causes of Split Brain

In general, split brain occurs for one of the following reasons:

  • The heartbeat link between the HA pair fails, so the nodes cannot communicate: the heartbeat cable is broken or aged; a NIC or its driver is faulty; IP misconfiguration or an address conflict (directly connected NICs); a device along the heartbeat path (NIC or switch) fails.
  • An iptables firewall on the HA servers blocks the heartbeat traffic (see the firewall rule sketch after this list).
  • The heartbeat NIC address or related settings are configured incorrectly, so heartbeats fail to send.
  • Other misconfiguration, such as mismatched heartbeat methods, heartbeat broadcast conflicts, or software bugs.
  • Within the same VRRP instance, a virtual_router_id that differs between the two Keepalived peers also causes split brain.
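VRRP advertisements travel as IP protocol 112 to the multicast address 224.0.0.18, so a host firewall that drops them makes each node conclude its peer is dead and claim the VIP. A sketch of allow rules for the CentOS 7 environment used below (pick the tool that matches your setup):

//firewalld
firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'
firewall-cmd --reload
//or plain iptables
iptables -I INPUT -p 112 -d 224.0.0.18 -j ACCEPT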

Lab 1 (Keepalived + LVS)


  • Master director
//Install keepalived on the master node, 192.168.70.100
[root@ds01 ~]# yum -y install gcc openssl-devel pcre-devel libnl-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
[root@ds01 ~]# ls
anaconda-ks.cfg  keepalived-2.0.18.tar.gz
[root@ds01 ~]# scp keepalived-2.0.18.tar.gz 192.168.70.102:~
[root@ds01 ~]# tar zxf keepalived-2.0.18.tar.gz
[root@ds01 ~]# cd keepalived-2.0.18/
[root@ds01 keepalived-2.0.18]# ./configure --prefix=/usr/local/keepalived
[root@ds01 keepalived-2.0.18]# make && make install
[root@ds01 keepalived-2.0.18]# cd /usr/local/keepalived/
[root@ds01 keepalived]# ls
bin  etc  sbin  share
[root@ds01 keepalived]# cd etc/keepalived/
[root@ds01 keepalived]# ls
keepalived.conf  samples

//Edit the keepalived.conf configuration file
[root@ds01 keepalived]# ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@ds01 keepalived]# mkdir /etc/keepalived/
[root@ds01 keepalived]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@ds01 keepalived]#  vi  /etc/keepalived/keepalived.conf
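//A minimal MASTER configuration sketch for this topology: VIP 192.168.70.200,
//real servers 192.168.70.103/104 in DR mode with rr scheduling, matching the
//ipvsadm output below. virtual_router_id 51 and auth_pass 1111 are assumptions.
global_defs {
    router_id ds01
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.70.200
    }
}

virtual_server 192.168.70.200 80 {
    delay_loop 6
    lb_algo rr                  # round-robin, as shown by ipvsadm below
    lb_kind DR                  # direct routing ("Route" in the ipvsadm output)
    protocol TCP
    real_server 192.168.70.103 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.70.104 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}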
[root@ds01 keepalived]# systemctl restart keepalived
[root@ds01 keepalived]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@ds01 keepalived]# systemctl status keepalived

//Install ipvsadm
[root@ds01 keepalived]# yum -y install ipvsadm
[root@ds01 keepalived]# lsmod | grep ip_vs
ip_vs_rr               12600  1
ip_vs                 145458  3 ip_vs_rr
nf_conntrack          139264  1 ip_vs
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
[root@ds01 keepalived]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.200:80 rr
//Start httpd on 103 and 104; the real servers then pass the health check and appear:
[root@ds01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.200:80 rr
  -> 192.168.70.103:80            Route   1      0          0
  -> 192.168.70.104:80            Route   1      0          0
[root@ds01 ~]# cd -
/usr/local/keepalived/etc/keepalived
[root@ds01 keepalived]# ls
keepalived.conf  samples
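In DR mode the real servers must also hold the VIP on a non-ARPing interface, otherwise they drop packets addressed to 192.168.70.200. The article omits this step, so here is the usual sketch, run on both 103 and 104 (the web01 hostname is illustrative):

//Bind the VIP to loopback and suppress ARP replies for it:
[root@web01 ~]# ip addr add 192.168.70.200/32 dev lo
[root@web01 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@web01 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce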



  • Backup director
    The backup node's keepalived.conf is essentially identical to the master's; only router_id, state, and priority differ (a sketch of the three lines follows the transcript below).
//Backup director, 192.168.70.102
[root@ds02 ~]# mkdir /etc/keepalived
[root@ds02 ~]# scp root@192.168.70.100:/etc/keepalived/keepalived.conf /etc/keepalived/
[root@ds02 ~]# vi /etc/keepalived/keepalived.conf
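//Only three settings differ from the master's file (priority 90 is an
//assumption; it just needs to be lower than the master's 100):
//    router_id ds02     (in global_defs)
//    state BACKUP       (in vrrp_instance VI_1)
//    priority 90        (in vrrp_instance VI_1)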
 [root@ds02 ~]# systemctl enable keepalived.service
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@ds02 ~]# systemctl start keepalived.service
[root@ds02 ~]#  yum -y install ipvsadm
Loaded plugins: fastestmirror
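//Keepalived's health checker runs on the backup too, so once the service is up,
//ipvsadm should show the same rr table as on the master:
[root@ds02 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.200:80 rr
  -> 192.168.70.103:80            Route   1      0          0
  -> 192.168.70.104:80            Route   1      0          0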


  • Test master/backup failover
    To tell which node is currently the master, run ip a and look for the VIP, or compare the configured priorities.
    If the firewall is left up without a VRRP allow rule, split brain occurs (see the rules in the split-brain section above).
//Stop the keepalived service on ds01.
[root@ds01 ~]# systemctl stop keepalived
//Check the backup director; it should now hold the VIP.
[root@ds02 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:af:06:68 brd ff:ff:ff:ff:ff:ff
    inet 192.168.70.102/24 brd 192.168.70.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.70.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::316a:7be5:c729:a861/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
//Bring ds01 back up; it reclaims the VIP.
[root@ds01 ~]# systemctl start keepalived
[root@ds01 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:43:95:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.70.100/24 brd 192.168.70.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.70.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::96c:38f0:56a0:c0eb/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
  • Keepalived split brain: if VRRP traffic is blocked, both nodes hold the VIP at once (see the checks below).
  • Test access through the VIP (see the curl sketch below).
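A quick way to verify both points (the client host is illustrative):

//Split-brain check: run on both directors; the VIP must appear on exactly one.
[root@ds01 ~]# ip a | grep 192.168.70.200
//The VRRP advertisements themselves can be watched (VRRP is IP protocol 112):
[root@ds01 ~]# tcpdump -i ens33 -nn 'ip proto 112'
//VIP test: with rr scheduling, repeated requests alternate between the two real servers.
[client ~]$ curl http://192.168.70.200/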


Lab 2 (Keepalived + Nginx)


  • Configure nginx load balancing
[root@nginx01 ~]# vim /usr/local/nginx/conf/nginx.conf

 upstream backend {
        server 192.168.70.103:8080 weight=1 max_fails=1 fail_timeout=10s;
        server 192.168.70.104:8080 weight=1 max_fails=1 fail_timeout=10s;
        #sticky;
        }
        
 location / {
            proxy_pass http://backend;
            #root   html;
            #index  index.html index.htm;
        }
        
[root@nginx01 ~]# nginx -s reload  
[root@nginx01 ~]#  echo "tomcat web01 192.168.70.103" > /usr/local/tomcat/webapps/ROOT/index.jsp
[root@nginx02 ~]# vim /usr/local/nginx/conf/nginx.conf

 upstream backend {
        server 192.168.70.103:8080 weight=1 max_fails=1 fail_timeout=10s;
        server 192.168.70.104:8080 weight=1 max_fails=1 fail_timeout=10s;
        #sticky;
        }
        
 location / {
            proxy_pass http://backend;
            #root   html;
            #index  index.html index.htm;
        }
        
[root@nginx02 ~]# nginx -s reload  
[root@nginx02 ~]#  echo "tomcat web02 192.168.70.104" > /usr/local/tomcat/webapps/ROOT/index.jsp
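The article does not show the keepalived side of this lab. Besides a VRRP instance like the one in Lab 1 (without the virtual_server block, since nginx does the balancing here), the master should demote itself when nginx dies; keepalived's vrrp_script/track_script mechanism does this. A minimal sketch (the check command, interval, and weight are assumptions):

vrrp_script chk_nginx {
    script "/usr/bin/killall -0 nginx"   # exits 0 while an nginx process exists
    interval 2                           # run the check every 2 seconds
    weight -20                           # subtract 20 from priority on failure
}

vrrp_instance VI_1 {
    state MASTER                         # BACKUP with lower priority on nginx02
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.70.200
    }
    track_script {
        chk_nginx
    }
}

With weight -20, a failed check drops the master's effective priority to 80, below the backup's, so the VIP moves even though keepalived itself is still running.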

//Check which node is the master and which is the backup.
[root@nginx01 ~]# ip addr show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:43:95:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.70.100/24 brd 192.168.70.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.70.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::101:42e2:ae70:45a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
       
[root@nginx02 ~]# ip addr show dev ens33			//The backup must carry the same VIP as the master; here it still holds a stale 172.16.16.172, a misconfiguration that produces split brain.
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:af:06:68 brd ff:ff:ff:ff:ff:ff
    inet 192.168.70.102/24 brd 192.168.70.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 172.16.16.172/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::6c7b:c849:9f70:e0d6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
  • Test preempt mode (keepalived's default: when the higher-priority master recovers, it reclaims the VIP)
//Stop keepalived on nginx01.
[root@nginx01 ~]# systemctl stop keepalived
//Check nginx02: the VIP has moved over.
[root@nginx02 ~]# ip addr show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:af:06:68 brd ff:ff:ff:ff:ff:ff
    inet 192.168.70.102/24 brd 192.168.70.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.70.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::6c7b:c849:9f70:e0d6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
//Start keepalived on nginx01 again; it preempts and takes the VIP back.
[root@nginx01 ~]# systemctl start keepalived
[root@nginx01 ~]# ip addr show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:43:95:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.70.100/24 brd 192.168.70.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.70.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::101:42e2:ae70:45a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
  • Non-preempt mode (a config sketch follows the transcript)
//Stop keepalived on nginx01.
[root@nginx01 ~]# systemctl stop keepalived
//Check nginx02: it holds the VIP.
[root@nginx02 ~]#  ip addr show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:af:06:68 brd ff:ff:ff:ff:ff:ff
    inet 192.168.70.102/24 brd 192.168.70.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.70.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::6c7b:c849:9f70:e0d6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
//Start keepalived on nginx01; this time it does not reclaim the VIP.
[root@nginx01 ~]# systemctl start keepalived
[root@nginx01 ~]#  ip addr show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:43:95:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.70.100/24 brd 192.168.70.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::101:42e2:ae70:45a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
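In the transcript above nginx01 comes back without the VIP: it stays on nginx02 until nginx02 itself fails. Keepalived behaves this way when both nodes start in state BACKUP and the instance carries the nopreempt option, with only priority telling them apart. A minimal sketch (priorities are assumptions):

vrrp_instance VI_1 {
    state BACKUP          # nopreempt requires both nodes to start as BACKUP
    nopreempt             # a recovered node does not take the VIP back
    interface ens33
    virtual_router_id 51
    priority 100          # use a lower value, e.g. 90, on the other node
    advert_int 1
    virtual_ipaddress {
        192.168.70.200
    }
}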

