mount.nfs: mount system call failed

I am trying to mount HDFS on my local machine running Ubuntu with the following command:

sudo mount -t  nfs  -o vers=3,proto=tcp,nolock 192.168.170.52:/ /mnt/hdfs_mount/

But I get this error:

mount.nfs: mount system call failed

Output of

rpcinfo -p 192.168.170.52

        program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
        100024    1   udp  48435  status
        100024    1   tcp  54261  status
        100005    1   udp   4242  mountd
        100005    2   udp   4242  mountd
        100005    3   udp   4242  mountd
        100005    1   tcp   4242  mountd
        100005    2   tcp   4242  mountd
        100005    3   tcp   4242  mountd
        100003    3   tcp   2049  nfs

Output of

showmount -e 192.168.170.52

Export list for 192.168.170.52:
/ *

I also tried adding

    <property>
      <name>hadoop.proxyuser.root.groups</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.root.hosts</name>
      <value>*</value>
    </property>

to my core-site.xml file in /etc/hadoop/conf.pseudo, but it did not work.
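(Note: the hadoop.proxyuser.* settings in core-site.xml are only picked up after the NameNode and the HDFS NFS gateway have been restarted. A minimal sketch of doing that with the stock Apache Hadoop 2.x scripts is below; $HADOOP_HOME and the CDH service name are assumptions that vary by install:)

    # sketch only: restart the NFS gateway so it re-reads core-site.xml
    # (paths assume the stock Apache Hadoop 2.x layout under $HADOOP_HOME)
    $HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs stop nfs3
    $HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs start nfs3
    # on a CDH-style packaged install the equivalent is usually:
    # sudo service hadoop-hdfs-nfs3 restart

    # the NameNode enforces the proxy-user rules; they can be refreshed
    # without a full restart
    hdfs dfsadmin -refreshSuperUserGroupsConfiguration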

The output of

sudo mount -v -t  nfs  -o vers=3,proto=tcp,nolock 192.168.170.52:/ /mnt/hdfs_mount/

is:

mount.nfs: timeout set for Thu Jun 29 09:46:30 2017
mount.nfs: trying text-based options 'vers=3,proto=tcp,nolock,addr=192.168.170.52'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.170.52 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 192.168.170.52 prog 100005 vers 3 prot TCP port 4242
mount.nfs: mount(2): Input/output error
mount.nfs: mount system call failed
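(For context: mount(2): Input/output error at this stage usually means the request reached the HDFS NFS gateway but was rejected by it, often because the hadoop.proxyuser.<user> settings do not name the user the gateway actually runs as. The gateway log contains the underlying exception; a minimal sketch of checking it, with the log path as an assumption:)

    # which user runs the nfs3 gateway? that user name belongs in
    # hadoop.proxyuser.<user>.hosts / hadoop.proxyuser.<user>.groups
    ps aux | grep -i [n]fs3
    # look for the real exception behind the EIO (log path is an assumption;
    # CDH-style installs usually log under /var/log/hadoop-hdfs)
    sudo tail -n 100 /var/log/hadoop-hdfs/*nfs3*.log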

Please help me with this.

Answer 1

What @84104 said is correct, but I managed to get it working with the following configuration/steps (a quick verification sketch follows the list):

  1. Install NFS
  2. Modify /etc/hadoop/hdfs-site.xml

    <property>
      <name>hadoop.proxyuser.YOUR_HOSTNAME_NAME.hosts</name>
      <value>*</value>
    </property>
    
    <property>
      <name>nfs.superuser</name>
      <value>spark</value>
    </property>
    
  3. Modify /etc/hadoop/core-site.xml

    <property>
      <name>hadoop.proxyuser.root.groups</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.root.hosts</name>
      <value>*</value>
    </property>
    
  4. Stop Hadoop

  5. Start Hadoop
  6. mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync YOUR_HOSTNAME_NAME:/ /data/hdfs/ -v
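(A minimal verification sketch for the steps above; YOUR_HOSTNAME_NAME and /data/hdfs are the placeholders already used in step 6:)

    # mountd (100005) and nfs (100003) should register again after the restart
    rpcinfo -p YOUR_HOSTNAME_NAME
    showmount -e YOUR_HOSTNAME_NAME      # export list should read "/ *"
    sudo mkdir -p /data/hdfs
    sudo mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync YOUR_HOSTNAME_NAME:/ /data/hdfs/ -v
    ls /data/hdfs                        # should list the HDFS root directories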
