
Redis Cluster Port Problems on UOS


Updated: 2021-06-03

  1. A recent project of mine needed a Redis cluster.
  2. I don't have multiple machines, so I run it locally with Docker.
  3. I found a one-click shell script to set everything up.

1. Install Docker locally

sudo apt install docker.io
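On a fresh install it is worth confirming the daemon is actually running before the group tweak below; a quick sanity check:

sudo systemctl enable --now docker   # start the daemon and enable it at boot
docker --version                     # confirm the client is installed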

Docker requires the root user by default; to let non-root users run it too:

sudo groupadd docker
sudo gpasswd -a ${USER} docker
sudo systemctl restart docker.service

If nothing went wrong, you should no longer need root to run Docker. Note that the group change only takes effect in a new login session.
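To pick up the new group membership without logging out, and to verify non-root access works, something like this should do:

newgrp docker   # start a subshell with the docker group active
docker ps       # should list containers without a permission error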

2. Add the ports to the "firewall"

UOS reportedly ships without a firewall front end by default; it took me a long time to find out that it manages rules directly with iptables.

If your system does not have it, install it:

sudo apt install iptables
  1. Allow each port. A whole range can also be allowed in a single rule (see the note after this list). Run step 3 first to see how many Redis nodes you need and whether the run fails. Each node needs both its client port (638x) and its cluster bus port (1638x, i.e. the client port + 10000).
sudo iptables -A INPUT -p tcp --dport 6380 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 6381 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 6382 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 6383 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 6384 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 6385 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 16380 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 16381 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 16382 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 16383 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 16384 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 16385 -j ACCEPT
  2. List the current firewall rules:
sudo iptables -L
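Two follow-up notes. First, the twelve rules above can be collapsed into two, since iptables accepts a first:last range for --dport:

sudo iptables -A INPUT -p tcp --dport 6380:6385 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 16380:16385 -j ACCEPT

Second, rules added this way are lost on reboot. Since UOS uses apt, the Debian-style iptables-persistent package should work for persisting them (an assumption about your UOS build; adjust if the package is unavailable):

sudo apt install iptables-persistent
sudo netfilter-persistent save   # writes the current rules to /etc/iptables/rules.v4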

3. One-click cluster creation script

Search for one yourself; the one I used behaves as in the log further below.
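I am not reproducing the exact script here, but reconstructed from its output log, it does roughly the following. This is only a sketch: the password, the redis image tag, and the config directory are my assumptions.

#!/bin/bash
# Sketch of a one-click Redis cluster script, reconstructed from the log
# below. PASS, CONF_DIR and the image tag are assumptions; adjust as needed.
IP=192.168.0.126
PASS=changeme                 # assumed: the log's -a warning shows a password is in use
CONF_DIR=/opt/redis-cluster   # assumed host directory for node configs

echo "Current IP: $IP"
echo "Stopping and removing docker containers..."
for PORT in $(seq 6380 6385); do
    docker stop redis-$PORT 2>/dev/null   # each prints the container name,
    docker rm   redis-$PORT 2>/dev/null   # hence the doubled lines in the log
done

echo "Docker containers cleaned up, creating cluster config files..."
for PORT in $(seq 6380 6385); do
    mkdir -p $CONF_DIR/$PORT
    cat > $CONF_DIR/$PORT/redis.conf <<EOF
port $PORT
requirepass $PASS
masterauth $PASS
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip $IP
cluster-announce-port $PORT
cluster-announce-bus-port 1$PORT
appendonly yes
EOF
done

echo "Node configuration done, creating new docker containers..."
for PORT in $(seq 6380 6385); do
    docker run -d --name redis-$PORT \
        -p $PORT:$PORT -p 1$PORT:1$PORT \
        -v $CONF_DIR/$PORT/redis.conf:/etc/redis/redis.conf \
        redis:6.0 redis-server /etc/redis/redis.conf
done

echo "Docker containers created, configuring the cluster..."
# --cluster-replicas 1 yields 3 masters + 3 replicas, matching the log
docker exec -it redis-6380 redis-cli -a $PASS --cluster create \
    $IP:6380 $IP:6381 $IP:6382 $IP:6383 $IP:6384 $IP:6385 \
    --cluster-replicas 1

echo "Cluster creation complete!"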

Output log:

Current IP: 192.168.0.126
Stopping and removing docker containers...
redis-6380
redis-6380
redis-6381
redis-6381
redis-6382
redis-6382
redis-6383
redis-6383
redis-6384
redis-6384
redis-6385
redis-6385
Docker containers cleaned up, creating cluster config files......
Cluster config files created, creating node configuration...
Node configuration done, creating new docker containers......
2a64967df8e655321227fdfdad23e17c51c8cbb638c215fe9a23b7540a9788ba
Bf62f730f91a267fcfcafe967d54d691301cb056fd07c5168e62f43ee856d3280
8d2be808dbd678c03c433f62f516c560a10de023c8024846888a7244d145f61a
ff9f4f445144eac04c045e0318caf73ea92c1cda208e9616e637cae4366b0632
09149534ad081139ecd05ccdc59babb646b475fad01d169e6942ab2c39d77fcf
badb1744b8fde07db5d1a07b010b758b44679560422639b788bc5e7d1a9594eb
Docker containers created, configuring the cluster......
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.0.126:6383 to 192.168.0.126:6380
Adding replica 192.168.0.126:6384 to 192.168.0.126:6381
Adding replica 192.168.0.126:6385 to 192.168.0.126:6382
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: fdab1059c6209cb0245698cc7ec3cf6901c9d8f3 192.168.0.126:6380
   slots:[0-5460] (5461 slots) master
M: d27189fc64d4f7cc18ddf1f0fe2ecc5fe6f3df3e 192.168.0.126:6381
   slots:[5461-10922] (5462 slots) master
M: b47afc5f12972097843a9bef100d5a7084194f1d 192.168.0.126:6382
   slots:[10923-16383] (5461 slots) master
S: 36dd04f12ad6922637829cecb652f82efa6bbf17 192.168.0.126:6383
   replicates b47afc5f12972097843a9bef100d5a7084194f1d
S: d7d1c40b71bbc685c2d8617a79b4c1ca2d7d1c7c 192.168.0.126:6384
   replicates fdab1059c6209cb0245698cc7ec3cf6901c9d8f3
S: 48ddc38677c5055e3753b64e0feec964042da3ec 192.168.0.126:6385
   replicates d27189fc64d4f7cc18ddf1f0fe2ecc5fe6f3df3e
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 192.168.0.126:6380)
M: 35921607d1a73aaccf6779ca2275dec7626e8806 192.168.0.126:6380
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 238338180dd63037b1af454b6128ddb1af4965df 192.168.0.126:6382
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 2b020af7c83d53074bdb852cba05039293c2b3ec 192.168.0.126:6383
   slots: (0 slots) slave
   replicates ee5f77f11f5b85a6a000401dcf90db88851b2069
M: ee5f77f11f5b85a6a000401dcf90db88851b2069 192.168.0.126:6381
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: aef28f980c484f2c25aa86e756f3e1abf55a6a65 192.168.0.126:6385
   slots: (0 slots) slave
   replicates 35921607d1a73aaccf6779ca2275dec7626e8806
S: 04f85a261a4d3e1b41aa1c6dfd9d8e73af0a6c1f 192.168.0.126:6384
   slots: (0 slots) slave
   replicates 238338180dd63037b1af454b6128ddb1af4965df
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Cluster creation complete!
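To confirm the cluster actually works, connect to any node in cluster mode (-c, which follows MOVED redirects) and write a key; replace changeme with whatever password your script configured:

docker exec -it redis-6380 redis-cli -c -a changeme -p 6380
# at the prompt:
#   cluster info        -> should report cluster_state:ok
#   set greeting hello  -> may redirect to the node owning the hash slot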