Is it OK for my Hadoop distributed file system to report an "Incorrect configuration" message?

I am trying to install Hadoop on my laptop in order to follow the tutorial from tutorialspoint. I launched start-dfs.sh.

The expected output is:

10/24/14 21:37:56
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop-
2.4.1/logs/hadoop-hadoop-namenode-localhost.out
localhost: starting datanode, logging to /home/hadoop/hadoop-
2.4.1/logs/hadoop-hadoop-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]

But here is what I actually got:

mike@mike-thinks:/usr/local/hadoop/sbin$ ./start-dfs.sh 
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: mkdir: cannot create directory ‘/usr/local/hadoop/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop/logs': No such file or directory
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-mike-namenode-mike-thinks.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-mike-namenode-mike-thinks.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop/logs/hadoop-mike-namenode-mike-thinks.out' for reading: No such file or directory
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-mike-namenode-mike-thinks.out: No such file or directory
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-mike-namenode-mike-thinks.out: No such file or directory
localhost: mkdir: cannot create directory ‘/usr/local/hadoop/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop/logs': No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-mike-datanode-mike-thinks.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-mike-datanode-mike-thinks.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop/logs/hadoop-mike-datanode-mike-thinks.out' for reading: No such file or directory
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-mike-datanode-mike-thinks.out: No such file or directory
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-mike-datanode-mike-thinks.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:/s7X8QMliB6FVx5bde5AaCycprQ/B+NtcTXrInrXxJM.
Are you sure you want to continue connecting (yes/no)? no
0.0.0.0: Host key verification failed.
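
(The Permission denied lines suggest that /usr/local/hadoop is owned by root, so my user cannot create the logs directory. A likely fix, assuming the install tree should simply belong to the login user, would be to take ownership once:

sudo chown -R "$USER":"$USER" /usr/local/hadoop

rather than running the daemons as root.)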

At the time, though, I tried the same command with sudo instead:

mike@mike-thinks:/usr/local/hadoop/sbin$ sudo ./start-dfs.sh 
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-mike-thinks.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-mike-thinks.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:/s7X8QMliB6FVx5bde5AaCycprQ/B+NtcTXrInrXxJM.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-mike-thinks.out

That "Incorrect configuration" message confuses me...
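
From what I can tell, start-dfs.sh derives the NameNode address from fs.defaultFS in core-site.xml, so the "not configured" message usually means that property is missing. A minimal sketch, assuming a single-node setup on port 9000 (the value the tutorialspoint tutorial uses):

cat > /usr/local/hadoop/etc/hadoop/core-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Where the NameNode listens; start-dfs.sh reads this too -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

After changing it, the NameNode also needs its metadata directory formatted once (hdfs namenode -format) before start-dfs.sh can bring it up.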

Then I tried to start YARN:

mike@mike-thinks:/usr/local/hadoop/sbin$ sudo ./start-yarn.sh 
[sudo] password for mike: 
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-mike-thinks.out
nice: ‘/usr/local/hadoop/bin/yarn’: Permission denied
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-mike-thinks.out
localhost: nice: ‘/usr/local/hadoop/bin/yarn’: Permission denied

So I did chmod +x on /usr/local/hadoop/bin/yarn:

mike@mike-thinks:/usr/local/hadoop/sbin$ sudo ./start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-mike-thinks.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-mike-thinks.out

But I could not access http://localhost:50070.
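
Port 50070 is the NameNode web UI in Hadoop 2.x, so that page only answers if the NameNode process actually came up. A quick way to check, using the log path from the output above:

jps                     # NameNode, DataNode, SecondaryNameNode should be listed
tail -n 50 /usr/local/hadoop/logs/hadoop-root-namenode-mike-thinks.out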

I retried, and now I had to kill my daemons one by one:

mike@mike-thinks:/usr/local/hadoop/sbin$ sudo ./start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-mike-thinks.out
localhost: nodemanager running as process 8183. Stop it first.
mike@mike-thinks:/usr/local/hadoop/sbin$ sudo kill 8183
mike@mike-thinks:/usr/local/hadoop/sbin$ sudo ./start-yarn.sh 
starting yarn daemons
resourcemanager running as process 9513. Stop it first.
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-mike-thinks.out
mike@mike-thinks:/usr/local/hadoop/sbin$ sudo kill 9513
mike@mike-thinks:/usr/local/hadoop/sbin$ sudo ./start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-mike-thinks.out
localhost: nodemanager running as process 10058. Stop it first.
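
That kill-and-restart loop should be avoidable with the bundled stop script, which stops the ResourceManager and NodeManager together before a clean start:

cd /usr/local/hadoop/sbin
sudo ./stop-yarn.sh    # stops resourcemanager and nodemanager
sudo ./start-yarn.sh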

But I was able to access the Hadoop "All Applications" page at http://localhost:8088/:

[Screenshot: Hadoop "All Applications" page]

Answer 1

I (an Arch Linux user) ran into the same problem with Hadoop 3.0.0, and I can share a few things to check.

So, try the following (concrete commands are sketched after this list):

  1. Run the jps command and check whether "NameNode" is listed;
     if NameNode is missing, you should restart Hadoop.
  2. Run telnet localhost 8088 (and 50070) and check the connections.
     I could not connect on port 50070, but port 8088 worked... (I have not found a solution for that part.)
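
A minimal sketch of those two checks, assuming the default ports and that a telnet client is installed:

jps                       # NameNode, DataNode and SecondaryNameNode should appear
telnet localhost 8088     # ResourceManager web UI
telnet localhost 50070    # NameNode web UI in Hadoop 2.x

Note that Hadoop 3.x moved the NameNode web UI from port 50070 to 9870, which may be why 50070 refused connections on my 3.0.0 install while 8088 worked.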
