Verifying start-dfs.sh

I am trying to set up a Hadoop cluster where the master is my laptop and the slave is a VirtualBox VM, following this guide. So, from the master, I did:

gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ sudo ./start-dfs.sh
Starting namenodes on [master]
root@master's password: 
master: namenode running as process 2911. Stop it first.
root@master's password: root@slave-1's password: 
master: datanode running as process 3057. Stop it first.
<I gave password again here>

slave-1: starting datanode, logging to /home/hadoopuser/hadoop/logs/hadoop-root-datanode-gsamaras-VirtualBox.out
Starting secondary namenodes [0.0.0.0]
root@0.0.0.0's password: 
0.0.0.0: secondarynamenode running as process 3234. Stop it first.
gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ su - hadoopuser
Password: 
-su: /home/hduser/hadoop/sbin: No such file or directory
hadoopuser@gsamaras:~$ jps
15845 Jps
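
The root password prompts and the "running as process ... Stop it first." messages suggest the daemons were originally started as root via sudo, which would also explain why nothing shows up under hadoopuser's jps (jps only lists the invoking user's JVMs). One way to confirm which user owns those daemons, reusing the PIDs printed above (a diagnostic sketch, not a step from the guide):

ps -o user,pid,cmd -p 2911,3057,3234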

The guide says that jps should list a NameNode, SecondaryNameNode and DataNode on the master node, and a DataNode on every slave node, which does not seem to be the case here (right?). Then I checked the slave's logs:

cat hadoop-root-datanode-gsamaras-VirtualBox.log
..rver: master/192.168.1.2:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 02:42:14,160 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master/192.168.1.2:54310
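
The retry/connection failure in that log usually means the DataNode on the slave cannot reach the NameNode's RPC port on the master. A couple of quick checks, using the host and port from the log line above (generic suggestions; a "master" entry pointing at 127.0.1.1 in /etc/hosts is a common culprit on Ubuntu):

# From the slave: is the NameNode port reachable at all?
nc -zv master 54310

# On the master: is anything listening on 54310, and on which address?
sudo netstat -tlnp | grep 54310

# On the master: check what "master" resolves to
grep master /etc/hosts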

gsamaras@gsamaras-VirtualBox:/home/hadoopuser/hadoop/logs$ ssh master
gsamaras@master's password: 
Welcome to Ubuntu 14.04.3..

The logs on the master do not seem to show any errors. Note that I can ssh from the master to the slave without a password, but not the other way around, and the guide does not mention anything about that. Any ideas?
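
For what it's worth, passwordless ssh from the slave back to the master can be set up with the usual key copy; this is a generic sketch rather than a step from the guide, and it assumes a hadoopuser account exists on both machines:

# On the slave, as hadoopuser
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
ssh-copy-id hadoopuser@master
ssh hadoopuser@master   # should now log in without a password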


When I run stop-dfs.sh, I get the error:

slave-1: no datanode to stop
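
"no datanode to stop" typically means the stop script cannot find (or cannot read) the daemon's pid file, which is easy to run into when start-dfs.sh was run as root via sudo. A quick check, assuming the default pid directory of /tmp (the exact file name depends on which user started the daemon):

ls -l /tmp/*.pid

cat /tmp/hadoop-root-datanode.pid   # pid file the stop script looks for when the daemon was started as root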

Now, I did it again, and on the master I got:

gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ sudo ./stop-dfs.sh
Stopping namenodes on [master]
root@master's password: 
master: no namenode to stop
root@master's password: root@slave-1's password: 
master: no datanode to stop   
slave-1: stopping datanode
Stopping secondary namenodes [0.0.0.0]
root@0.0.0.0's password: 
0.0.0.0: stopping secondarynamenode
gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ jps
19048 Jps
gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ ps axww | grep hadoop
19277 pts/1    S+     0:00 grep --color=auto hadoop
gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ jps
19278 Jps

ps axww | grep hadoop on the slave reports a process with PID 2553.
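
If that process on the slave is a leftover DataNode from an earlier run as root, it probably has to be stopped by hand before the next start-dfs.sh; for example, using the PID reported by ps above:

sudo kill 2553   # leftover hadoop process on the slave; resort to kill -9 only if it will not exit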

Answer 1

It turned out that I had to set the permissions not only on the hadoop-data folder, as I had assumed, but also on the hadoop folder itself:

sudo chown -R hadoopuser /home/hadoopuser/hadoop/

I got the idea from here.
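
For completeness, the same idea applied to the data directory, plus a quick check of the result, might look like the following; the hadoop-data path is an assumption, so substitute whatever dfs.data.dir (or dfs.datanode.data.dir) points to in hdfs-site.xml:

sudo chown -R hadoopuser /home/hadoopuser/hadoop-data/   # assumed data directory; adjust to your dfs.data.dir

ls -ld /home/hadoopuser/hadoop/ /home/hadoopuser/hadoop-data/   # both should now be owned by hadoopuser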
