NameNode format in Hadoop installation

I have given below the paths of my configuration-files directory and the .sh files directory.

I did the configuration at this path:

root@ratan-Inspiron-N5110:/home/hduser/hadoop/hadoop-2.4.0/etc/hadoop# gedit core-site.xml

<configuration>

<!-- In: conf/core-site.xml -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hduser/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>
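
With hadoop.tmp.dir set to /home/hduser/tmp, the NameNode keeps its metadata under /home/hduser/tmp/dfs/name by default (dfs.namenode.name.dir falls back to ${hadoop.tmp.dir}/dfs/name unless it is set elsewhere), so that base directory has to exist and be writable by hduser before the NameNode is formatted. A minimal sketch, not part of the original post:

sudo mkdir -p /home/hduser/tmp
sudo chown -R hduser:hduser /home/hduser/tmp   # use hduser:hadoop instead if you created a hadoop group
chmod 750 /home/hduser/tmp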

root@ratan-Inspiron-N5110:/home/hduser/hadoop/hadoop-2.4.0/etc/hadoop# gedit mapred-site.xml

<configuration>

 <!-- In: conf/mapred-site.xml -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

</configuration>
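
A side note, not required for the NameNode problem: mapred.job.tracker is a Hadoop 1.x (JobTracker) property and is not used by the YARN daemons that Hadoop 2.4.0 starts. If you want MapReduce jobs to run on YARN, a minimal mapred-site.xml would instead set mapreduce.framework.name; a sketch, written here as a shell heredoc:

cat > mapred-site.xml <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF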

root@ratan-Inspiron-N5110:/home/hduser/hadoop/hadoop-2.4.0/etc/hadoop# gedit hdfs-site.xml

<configuration>

 <!-- In: conf/hdfs-site.xml -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

</configuration>
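
One quick way to confirm that these are the files Hadoop actually reads (a sketch; it assumes the getconf tool shipped with this 2.4.0 build) is to ask HDFS for the resolved values and compare them with the files above:

cd ~/hadoop/hadoop-2.4.0
bin/hdfs getconf -confKey hadoop.tmp.dir
bin/hdfs getconf -confKey fs.default.name
bin/hdfs getconf -confKey dfs.replication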

My directory details:

hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/etc$ cd ./hadoop
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/etc/hadoop$ ls
capacity-scheduler.xml      hadoop-policy.xml        mapred-queues.xml.template
configuration.xsl           hdfs-site.xml            mapred-site.xml
container-executor.cfg      hdfs-site.xml~           mapred-site.xml~
core-site.xml               httpfs-env.sh            mapred-site.xml.template
core-site.xml~              httpfs-log4j.properties  slaves
hadoop-env.cmd              httpfs-signature.secret  ssl-client.xml.example
hadoop-env.sh               httpfs-site.xml          ssl-server.xml.example
hadoop-env.sh~              log4j.properties         yarn-env.cmd
hadoop-metrics2.properties  mapred-env.cmd           yarn-env.sh
hadoop-metrics.properties   mapred-env.sh            yarn-site.xml
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/etc/hadoop$ cd ..
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/etc$ ls
hadoop

hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/etc$ cd ..
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0$ ls
bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0$ cd ./etc
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/etc$ ls
hadoop
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/etc$ cd ..
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0$ cd ./sbin
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/sbin$ ls
distribute-exclude.sh  hdfs-config.sh           slaves.sh          start-dfs.cmd        start-yarn.sh     stop-dfs.cmd        stop-yarn.sh
hadoop-daemon.sh       httpfs.sh                start-all.cmd      start-dfs.sh         stop-all.cmd      stop-dfs.sh         yarn-daemon.sh
hadoop-daemons.sh      mr-jobhistory-daemon.sh  start-all.sh       start-secure-dns.sh  stop-all.sh       stop-secure-dns.sh  yarn-daemons.sh
hdfs-config.cmd        refresh-namenodes.sh     start-balancer.sh  start-yarn.cmd       stop-balancer.sh  stop-yarn.cmd

When I start the daemons:

hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/sbin$ ls
distribute-exclude.sh    start-all.cmd        stop-all.sh
hadoop-daemon.sh         start-all.sh         stop-balancer.sh
hadoop-daemons.sh        start-balancer.sh    stop-dfs.cmd
hdfs-config.cmd          start-dfs.cmd        stop-dfs.sh
hdfs-config.sh           start-dfs.sh         stop-secure-dns.sh
httpfs.sh                start-secure-dns.sh  stop-yarn.cmd
mr-jobhistory-daemon.sh  start-yarn.cmd       stop-yarn.sh
refresh-namenodes.sh     start-yarn.sh        yarn-daemon.sh
slaves.sh                stop-all.cmd         yarn-daemons.sh
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/sbin$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/hadoop-hduser-namenode-ratan-Inspiron-N5110.out
localhost: starting datanode, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/hadoop-hduser-datanode-ratan-Inspiron-N5110.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/hadoop-hduser-secondarynamenode-ratan-Inspiron-N5110.out
starting yarn daemons
starting resourcemanager, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/yarn-hduser-resourcemanager-ratan-Inspiron-N5110.out
localhost: starting nodemanager, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/yarn-hduser-nodemanager-ratan-Inspiron-N5110.out
hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/sbin$ jps

1441 DataNode
1608 SecondaryNameNode
1912 NodeManager
2448 Jps
1775 ResourceManager

hduser@ratan-Inspiron-N5110:~/hadoop/hadoop-2.4.0/sbin$ 

The problem is that I cannot find the NameNode I need to format. When I run the daemons, the NameNode is nowhere to be seen. What am I doing wrong?

Answer 1

Check this log file:

hadoop-hduser-namenode-ratan-Inspiron-N5110.log

If it says the NameNode is not formatted, format it:

bin/hadoop namenode -format
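
On Hadoop 2.x the same operation is normally run through the hdfs script (the old bin/hadoop namenode -format form still works but prints a deprecation warning). A sketch of the whole check-then-format sequence, using the log name from the start-up output above:

cd ~/hadoop/hadoop-2.4.0
tail -n 50 logs/hadoop-hduser-namenode-ratan-Inspiron-N5110.log   # shows why the NameNode exited
bin/hdfs namenode -format                                          # confirm with Y if it asks to re-format
sbin/stop-dfs.sh && sbin/start-dfs.sh
jps                                                                # NameNode should now be listed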

Answer 2

Check whether the storage directory exists. If it does, check its permissions, format the NameNode, and then restart DFS. That will make it work.
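
A sketch of those steps, assuming the defaults that follow from hadoop.tmp.dir=/home/hduser/tmp in the core-site.xml above (so the NameNode storage directory is /home/hduser/tmp/dfs/name):

ls -ld /home/hduser/tmp /home/hduser/tmp/dfs/name   # does the storage directory exist?
sudo chown -R hduser:hduser /home/hduser/tmp        # the user running the daemons must own it
cd ~/hadoop/hadoop-2.4.0
bin/hdfs namenode -format
sbin/stop-dfs.sh && sbin/start-dfs.sh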

Thanks
