Distributed Cluster
RealMQ supports a distributed cluster mode. It relies on the built-in Mnesia database to synchronize data efficiently across the cluster, handling both data replication and remote calls between nodes.
Manual Clustering and Load Balancer Configuration
A RealMQ cluster should sit behind a load balancer: either a cloud provider's load-balancing service or a self-hosted HAProxy or Nginx.
Deploy two rmq nodes, 10.1.1.10 and 10.1.1.11, and edit the configuration on each: vim etc/rmq.json
Log in to each node and change its node name:
Node 1 (10.1.1.10): change node.name = 127.0.0.1 to node.name = 10.1.1.10
Node 2 (10.1.1.11): change node.name = 127.0.0.1 to node.name = 10.1.1.11
After starting the service, run ./bin/rmq ctl cluster join 10.1.1.10 on node 2. The following output indicates the node has joined the cluster successfully:
Join the cluster successfully.
Running nodes: ["10.1.1.10","10.1.1.11"]
Stopped nodes: []
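
Before putting a load balancer in front of the cluster, it can help to confirm that each node accepts MQTT connections on its own. Below is a minimal sketch using the Python paho-mqtt client (an assumption: paho-mqtt >= 2.0 installed via pip, port 1883 open, anonymous access allowed; the topic name is arbitrary):

import paho.mqtt.client as mqtt

NODES = ["10.1.1.10", "10.1.1.11"]

def check(host):
    # Connect directly to one broker node and publish a single QoS 1 message.
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.connect(host, 1883, keepalive=60)  # raises OSError on failure
    client.loop_start()
    info = client.publish("cluster/healthcheck", payload=host, qos=1)
    info.wait_for_publish(timeout=5)          # wait for the broker's PUBACK
    print(host, "ok")
    client.loop_stop()
    client.disconnect()

for node in NODES:
    check(node)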
HAProxy configuration
Edit the HAProxy configuration file: vi /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local3 info
    #chroot /opt/apps/haproxy
    #user haproxy
    #group haproxy
    daemon
    maxconn 1024000

defaults
    log global
    mode tcp
    option tcplog
    #option dontlognull
    timeout connect 10000
    # timeout > mqtt's keepalive * 1.2
    timeout client 300s
    timeout server 300s

frontend rmq_tcp
    bind *:1883
    option tcplog
    mode tcp
    default_backend rmq_tcp_back

frontend rmq_ws
    bind *:8083
    option tcplog
    mode tcp
    default_backend rmq_ws_back

frontend rmq_dashboard
    bind *:8090
    option tcplog
    mode tcp
    default_backend rmq_dashboard_back

backend rmq_tcp_back
    balance roundrobin
    server rmq_node_1 10.1.1.10:1883 check
    server rmq_node_2 10.1.1.11:1883 check
    # appending send-proxy to the server lines above passes the client's real IP to rmq
backend rmq_ws_back
    balance roundrobin
    server rmq_node_1 10.1.1.10:8083 check
    server rmq_node_2 10.1.1.11:8083 check

backend rmq_dashboard_back
    balance roundrobin
    server rmq_node_1 10.1.1.10:8090 check
    server rmq_node_2 10.1.1.11:8090 check
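
Once HAProxy is up, a quick way to see cluster routing in action is to open two consecutive connections through the frontend (roundrobin usually places them on different nodes), subscribe on one and publish on the other; if the message arrives, the nodes are forwarding messages to each other. A sketch with paho-mqtt >= 2.0, where LB_ADDRESS is a placeholder for the HAProxy frontend IP:

import time
import paho.mqtt.client as mqtt

LB_ADDRESS = "10.1.1.1"  # placeholder: replace with the HAProxy frontend IP

received = []

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe("cluster/test", qos=1)  # subscribe once the session is up

def on_message(client, userdata, msg):
    received.append(msg.payload.decode())

sub = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
sub.on_connect = on_connect
sub.on_message = on_message
sub.connect(LB_ADDRESS, 1883)
sub.loop_start()
time.sleep(1)  # give the subscription time to settle

pub = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
pub.connect(LB_ADDRESS, 1883)  # roundrobin: likely a different node than sub
pub.loop_start()
pub.publish("cluster/test", "hello across nodes", qos=1).wait_for_publish(timeout=5)

time.sleep(2)  # allow the message to route between nodes
print("received:", received)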
The Nginx configuration is as follows. Note that the plain MQTT TCP listener lives in the stream context, while the WebSocket listener belongs in the http context:
stream {
    upstream rmq_tcp {
        server 10.1.1.10:1883 max_fails=2 fail_timeout=10s;
        server 10.1.1.11:1883 max_fails=2 fail_timeout=10s;
    }

    server {
        listen 1883;
        proxy_pass rmq_tcp;
        proxy_timeout 300s;
        proxy_buffer_size 3M;
        tcp_nodelay on;
    }
}

http {
    upstream rmq_ws {
        server 10.1.1.10:8083 max_fails=2 fail_timeout=10s;
        server 10.1.1.11:8083 max_fails=2 fail_timeout=10s;
    }

    server {
        listen 8083;
        server_name <nginx IP address>;  # replace with the Nginx host's IP or domain

        location /mqtt {
            proxy_pass http://rmq_ws;
            # WebSocket upgrade headers are required for MQTT over WebSocket
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_send_timeout 3600s;
            proxy_read_timeout 3600s;
        }
    }
}
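
To verify MQTT over WebSocket through the proxy, paho-mqtt can use its websockets transport with the /mqtt path from the location block above. A sketch, again with LB_ADDRESS as a placeholder for the Nginx IP:

import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, transport="websockets")
client.ws_set_options(path="/mqtt")               # must match the Nginx location
client.connect("LB_ADDRESS", 8083, keepalive=60)  # placeholder: the Nginx IP
client.loop_start()
client.publish("cluster/ws-check", "hello", qos=1).wait_for_publish(timeout=5)
print("websocket publish acknowledged")
client.loop_stop()
client.disconnect()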