How Nginx Implements Layer-7 Load Balancing

This article walks through how Nginx implements layer-7 (application-layer) load balancing, and should serve as a practical reference for real-world deployments.



Layer-7 load balancing with Nginx

Scheduling to different groups of backend servers
1. Static/dynamic content separation
2. Partitioning the site by section
=================================================================================

Topology

                [vip: 20.20.20.20]

            [LB1 Nginx]      [LB2 Nginx]
            192.168.1.2       192.168.1.3

[index]      [milis]     [videos]     [images]      [news]
1.11         1.21         1.31         1.41         1.51
1.12          1.22         1.32         1.42         1.52
1.13          1.23         1.33         1.43         1.53
...            ...          ...          ...          ...
/web       /web/milis    /web/videos   /web/images   /web/news
index.html   index.html   index.html                 index.html


I. Implementation
Option 1: scheduling by site section
http {
   upstream index {
       server 192.168.1.11:80 weight=1 max_fails=2 fail_timeout=2;
       server 192.168.1.12:80 weight=2 max_fails=2 fail_timeout=2;
       server 192.168.1.13:80 weight=2 max_fails=2 fail_timeout=2;
      }
     
   upstream milis {
       server 192.168.1.21:80 weight=1 max_fails=2 fail_timeout=2;
       server 192.168.1.22:80 weight=2 max_fails=2 fail_timeout=2;
       server 192.168.1.23:80 weight=2 max_fails=2 fail_timeout=2;
      }
     
    upstream videos {
       server 192.168.1.31:80 weight=1 max_fails=2 fail_timeout=2;
       server 192.168.1.32:80 weight=2 max_fails=2 fail_timeout=2;
       server 192.168.1.33:80 weight=2 max_fails=2 fail_timeout=2;
      }
     
    upstream images {
       server 192.168.1.41:80 weight=1 max_fails=2 fail_timeout=2;
       server 192.168.1.42:80 weight=2 max_fails=2 fail_timeout=2;
       server 192.168.1.43:80 weight=2 max_fails=2 fail_timeout=2;
      }
     
     upstream news {
       server 192.168.1.51:80 weight=1 max_fails=2 fail_timeout=2;
       server 192.168.1.52:80 weight=2 max_fails=2 fail_timeout=2;
       server 192.168.1.53:80 weight=2 max_fails=2 fail_timeout=2;
      }
     
    server {
          location / {
      proxy_pass http://index;
      }
     
      location  /news {
      proxy_pass http://news;
      }
     
      location /milis {
      proxy_pass http://milis;
      }
     
      location ~* \.(wmv|mp4|rmvb)$ {
      proxy_pass http://videos;
      }
     
      location ~* \.(png|gif|jpg)$ {
      proxy_pass http://images;
      }
   }
}
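
A quick way to verify the section-based routing is to request each path and file type through the director and check that it reaches the intended pool. A rough sketch, assuming the director answers on 192.168.1.2 and that the test file names below exist on the backends:

curl -I http://192.168.1.2/                  # index pool
curl -I http://192.168.1.2/news/index.html   # news pool
curl -I http://192.168.1.2/milis/index.html  # milis pool
curl -I http://192.168.1.2/demo.mp4          # videos pool, matched by the media regex
curl -I http://192.168.1.2/logo.png          # images pool, matched by the image regex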


Option 2: scheduling by static/dynamic content separation
http {
    upstream htmlservers {
       server 192.168.1.1:80 weight=1 max_fails=2 fail_timeout=2;
       server 192.168.1.2:80 weight=2 max_fails=2 fail_timeout=2;
        }
       
    upstream phpservers {
       server 192.168.1.3:80 weight=1 max_fails=2 fail_timeout=2;
       server 192.168.1.4:80 weight=2 max_fails=2 fail_timeout=2;
        }
       
     server {
      location ~* \.html$ {
      proxy_pass http://htmlservers;
      }
     
      location ~* \.php$ {
      proxy_pass http://phpservers;
      }
     }
}
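
For either option, the configuration can be validated and applied with the standard Nginx commands on the director:

nginx -t             # check the syntax of the new configuration
nginx -s reload      # apply it without dropping existing connections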


II. Keepalived for director HA
Note: both the master and the backup director must be able to schedule traffic normally.
1. Install the software on both directors
[root@master ~]# yum -y install keepalived
[root@backup ~]# yum -y install keepalived

2. Configure Keepalived
BACKUP1 (first director)
[root@uplook ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director1          # change to director2 on the second director
}

vrrp_instance VI_1 {
   state BACKUP
   nopreempt
   interface eth0               # heartbeat interface; ideally a dedicated link
   virtual_router_id 80         # must be identical on every director in the cluster
   priority 100                 # change to 50 on the second director
   advert_int 1
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {
       20.20.20.20
   }
}

BACKUP2 (second director)
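Following the inline comments above, the second director uses the same configuration except for router_id and priority; a minimal sketch:

! Configuration File for keepalived

global_defs {
   router_id director2
}

vrrp_instance VI_1 {
   state BACKUP
   nopreempt
   interface eth0               # heartbeat interface; ideally a dedicated link
   virtual_router_id 80         # must be identical on every director in the cluster
   priority 50                  # lower than the first director
   advert_int 1
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {
       20.20.20.20
   }
}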


3. Start Keepalived (on both master and backup)
[root@uplook ~]# chkconfig keepalived on
[root@uplook ~]# service keepalived start
[root@uplook ~]# ip addr
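
Only the director currently holding the VIP should show 20.20.20.20 in the output; a quick check, assuming the interface layout above:

ip addr show eth0 | grep 20.20.20.20    # prints the VIP only on the active director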

At this point:
Keepalived covers a failure of the director itself (heartbeat failover),
but it cannot detect a failure of the Nginx service.



4. Extension: health-checking Nginx on the director (optional)
Idea:
Have Keepalived run an external script at a fixed interval; when Nginx fails, the script stops Keepalived on the local machine so the VIP fails over to the other director.
a. The check script
[root@master ~]# cat /etc/keepalived/check_nginx_status.sh
#!/bin/bash
# Probe the local Nginx; if the request fails, stop Keepalived
# so the VIP moves to the other director.
/usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ];then
    /etc/init.d/keepalived stop
fi
[root@master ~]# chmod a+x /etc/keepalived/check_nginx_status.sh
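
Before wiring the script into Keepalived, it can be exercised by hand; while Nginx is answering on localhost the exit status should be 0 (note that if Nginx is down, this manual run will also stop Keepalived):

/etc/keepalived/check_nginx_status.sh; echo $?    # expect 0 while Nginx is up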

b. Referencing the script in keepalived.conf
! Configuration File for keepalived

global_defs {
  router_id director1
}

vrrp_script check_nginx {
  script "/etc/keepalived/check_nginx_status.sh"
  interval 5
}

vrrp_instance VI_1 {
   state BACKUP
   interface eth0
   nopreempt
   virtual_router_id 80
   priority 100
   advert_int 1
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   
   virtual_ipaddress {
       20.20.20.20
   }

   track_script {
       check_nginx
   }
}

Note: Nginx must be started first, then Keepalived.
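
On a SysV-init system like the one used above, and assuming Nginx was installed with an init script named nginx, the ordering can be enforced explicitly:

service nginx start         # bring up the proxy first
service keepalived start    # then let Keepalived claim the VIP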




Scheduling to a single group of backend servers
The site is not split by business or section; every backend server hosts the complete site code.
=================================================================================

Topology

              [LB Nginx]
              20.20.20.20
              192.168.1.2

[httpd]         [httpd]      [httpd]
192.168.1.3    192.168.1.4    192.168.1.5


Implementation
1. Nginx configuration
http {
   upstream httpservers {
       server 192.168.1.3:80 weight=1 max_fails=2 fail_timeout=2;
       server 192.168.1.4:80 weight=2 max_fails=2 fail_timeout=2;
       server 192.168.1.5:80 weight=2 max_fails=2 fail_timeout=2;
       server 192.168.1.100:80 backup;         # comes online only when .3, .4 and .5 are all down
   }

   server {
      location  / {
               proxy_pass  http://httpservers;
               proxy_set_header X-Real-IP $remote_addr;
      }
    }    
}
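
With the 1:2:2 weights above, roughly one request in five should land on 192.168.1.3. A rough way to observe the distribution from a client, assuming each backend serves a test page that identifies itself:

for i in $(seq 1 10); do
    curl -s http://192.168.1.2/ | head -n 1    # the first line of each test page names the backend
done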

2. Apache LogFormat (optional)
LogFormat "%{X-Real-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

3. Nginx LogFormat
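
On the director itself, a log_format built from standard Nginx variables can record which upstream actually served each request; a minimal sketch (log_format and access_log belong in the http block):

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" upstream=$upstream_addr';

access_log /var/log/nginx/access.log main;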

=================================================================================


