Error when starting Hadoop

I installed Hadoop following Michael Noll's tutorial. When I start the name node using the following command,

hduser@ARUL-PC:/usr/local/hadoop$ sbin/start-all.sh

I get this response:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/05/03 12:36:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
localhost]
sed: -e expression #1, char 6: unknown option to `s'
-c: Unknown cipher type 'cd'
64-Bit: ssh: Could not resolve hostname 64-bit: Name or service not known
Server: ssh: Could not resolve hostname server: Name or service not known
recommended: ssh: Could not resolve hostname recommended: Name or service not known
hduser@localhost's password: link: ssh: Could not resolve hostname link: No address associated with hostname
OpenJDK: ssh: Could not resolve hostname openjdk: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
now.: ssh: Could not resolve hostname now.: Name or service not known
VM: ssh: Could not resolve hostname vm: Name or service not known
guard: ssh: Could not resolve hostname guard: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
that: ssh: Could not resolve hostname that: Name or service not known
The: ssh: Could not resolve hostname the: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
or: ssh: Could not resolve hostname or: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
<libfile>',: ssh: Could not resolve hostname <libfile>',: Name or service not known
'-z: ssh: Could not resolve hostname '-z: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
to: ssh: Could not resolve hostname to: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
will: ssh: Could not resolve hostname will: Name or service not known
it: ssh: Could not resolve hostname it: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
disabled: ssh: Could not resolve hostname disabled: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
VM: ssh: Could not resolve hostname vm: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
You: ssh: Could not resolve hostname you: Name or service not known
you: ssh: Could not resolve hostname you: Name or service not known
It's: ssh: Could not resolve hostname it's: Name or service not known
highly: ssh: Could not resolve hostname highly: Name or service not known
'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
try: ssh: Could not resolve hostname try: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
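If I'm reading the output correctly, it suggests two separate things: using the newer startup scripts instead of the deprecated one, and clearing the stack-guard flag on the native library it names. I assume that means something like the following, though I'm not sure this is right:

sbin/start-dfs.sh    # start the HDFS daemons, as the deprecation notice suggests
sbin/start-yarn.sh   # start the YARN daemons

# clear the executable-stack flag the warning complains about
# (the execstack tool usually comes from a separate package)
sudo execstack -c /usr/local/hadoop/lib/native/libhadoop.so.1.0.0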

Please tell me where I went wrong...

When I check the configuration folder, I see the following:

root@ARUL-PC:/usr/local/hadoop/etc/hadoop# ls
capacity-scheduler.xml  hadoop-metrics2.properties  httpfs-site.xml             ssl-client.xml.example
configuration.xsl       hadoop-metrics.properties   log4j.properties            ssl-server.xml.example
container-executor.cfg  hadoop-policy.xml           mapred-env.cmd              yarn-env.cmd
core-site.xml           hdfs-site.xml               mapred-env.sh               yarn-env.sh
core-site.xml~          hdfs-site.xml~              mapred-queues.xml.template  yarn-site.xml
hadoop-env.cmd          httpfs-env.sh               mapred-site.xml.template
hadoop-env.sh           httpfs-log4j.properties     mapred-site.xml.template~
hadoop-env.sh~          httpfs-signature.secret     slaves

Following the tutorial, I added the lines to hadoop-env.sh and core-site.xml. For each file I edited, another file was created automatically with the same name ending in "~". Is this normal, or is something wrong?

I opened one of these files using gedit:

root@ARUL-PC:/usr/local/hadoop/etc/hadoop# gedit hadoop-env.sh~

and I can see:

[Screenshot: contents of hadoop-env.sh~]

How can I fix this?

Answer 1

Regarding the files ending in ~: gedit creates a backup copy (named with a trailing ~) whenever it saves a file. If you don't want this behaviour, you can disable it under Preferences -> Editor -> "Create a backup copy of files before saving".
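If you prefer the command line, the same preference can usually be flipped with gsettings (this assumes the stock GNOME gedit schema; the key name may differ between versions):

gsettings set org.gnome.gedit.preferences.editor create-backup-copy false

The ~ file itself is harmless: it is just the previous revision of the file, which you can confirm with diff hadoop-env.sh~ hadoop-env.sh.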

Answer 2

The tutorial was written for Hadoop 1.x, but your environment is set up with Hadoop 2.x. The 1.x JobTracker/TaskTracker model changed in 2.x: the JobTracker was split into the ResourceManager and AppManager, and each data node now runs a NodeManager (I'm not sure whether the 1.x TaskTracker became part of the 2.x NodeManager). A newer installation tutorial written for Hadoop 2.x (I used 2.5.0) will help; this one was useful: http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide/#Introduction YARN is the 2.x addition that replaces the JobTracker, among other things.
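As a rough sketch of what the 2.x-style configuration in those newer tutorials boils down to (stock Hadoop 2.x property names; paths and further tuning omitted): mapred-site.xml tells MapReduce to run on YARN instead of the 1.x JobTracker, and yarn-site.xml enables the shuffle service the NodeManagers need. Note that your directory listing shows only mapred-site.xml.template, so you would copy that to mapred-site.xml first.

<!-- etc/hadoop/mapred-site.xml: run MapReduce jobs on YARN -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

<!-- etc/hadoop/yarn-site.xml: auxiliary shuffle service required by MapReduce on YARN -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>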
