First, let me describe the problem I ran into. I thought my Hadoop cluster was all configured, so I happily started the master and opened
http://192.168.111.130:50070/dfshealth.html#tab-overview , and reality slapped me in the face: the overview page showed no block information at all (screenshot not reproduced here).
When you hit a problem like this, don't panic; just work through it step by step.
First I checked the firewall and SELinux, following the method in
https://blog.csdn.net/asdrt12589wto1/article/details/108674608,
and found that everything there was fine:
[root@master sbin]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@slave1 hadoop]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@slave2 hadoop]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
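For reference, if systemctl status had shown firewalld as active, a quick way to take it out of the picture on a lab cluster like this one is to stop and disable it (on a production cluster you would open the needed ports instead):

systemctl stop firewalld      # stop the firewall immediately
systemctl disable firewalld   # keep it from coming back after a reboot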
And the SELinux check:
[root@master sbin]# getenforce
Disabled
[root@slave1 sbin]# getenforce
Disabled
[root@slave2 sbin]# getenforce
Disabled
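Likewise, if getenforce had printed Enforcing, the usual way to rule SELinux out is:

setenforce 0    # switch SELinux to Permissive for the current session
# To make it permanent, set SELINUX=disabled in /etc/selinux/config and reboot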
Then I cheerfully went and deleted both the tmp and logs directories under hadoop-2.7.3:
[root@master hadoop-2.7.3]# ls
bin etc include lib libexec LICENSE.txt logs NOTICE.txt README.txt sbin share tmp
re-ran hadoop namenode -format to reformat the NameNode, and then checked the web page again: still the same problem as above.
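For reference, the cleanup and re-format sequence was roughly the following sketch (the install path is an assumption; adjust it to your own). One caveat I only appreciated later: stale tmp data left on the slave nodes can give the DataNodes a clusterID that no longer matches the re-formatted NameNode, so those directories are worth clearing as well:

cd /usr/local/hadoop-2.7.3    # assumed install path, adjust to yours
rm -rf tmp logs               # wipe old HDFS data and logs (do the same on slave1 and slave2)
hadoop namenode -format       # reformat the NameNode (the newer form is: hdfs namenode -format)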
With no luck there, I kept digging and went to check the DataNode logs on slave1 and slave2 (the log output is not reproduced here).
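The DataNode log file name follows a fixed pattern, hadoop-<user>-datanode-<hostname>.log under the logs directory, so inspecting it looks roughly like this (the exact file name below is assumed from my user and hostnames):

tail -n 50 $HADOOP_HOME/logs/hadoop-root-datanode-slave1.log   # run on slave1; adjust user/hostname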
The logs showed nothing wrong either, so on the master node I ran ss -nlpt | grep 9000 to check whether the NameNode port was up and accepting connections:
[root@master logs]# ss -nlpt | grep 9000
LISTEN 0 128 192.168.111.130:9000 *:* users:(("java",pid=3184,fd=203))
[root@master logs]# telnet 192.168.111.130 9000
Trying 192.168.111.130...
Connected to 192.168.111.130.
Escape character is '^]'.
^Z
Connection closed by foreign host.
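So from the master itself, port 9000 was listening and connectable. In hindsight, the more telling check is to run the same telnet from the slaves, because it is the DataNodes that have to reach the NameNode over this port (assuming telnet is installed there):

telnet 192.168.111.130 9000   # run on slave1/slave2; a failure here points at network or config problems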
Going through the configuration files one by one, I finally suspected that core-site.xml might be misconfigured, and sure enough it was: the NameNode IP configured on the slave nodes did not match the master's. Oh my. I fixed the IP, restarted the cluster, and opened the web page again:
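For reference, the property that has to agree on every node is fs.defaultFS in core-site.xml; a minimal sketch, using the master IP from above (your address or hostname may differ):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.111.130:9000</value>
  </property>
</configuration>

Every DataNode uses this URI to find and register with the NameNode, which is why a mismatched IP here leaves the overview page empty.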
Perfectly solved!