Slurm srun unable to allocate resources for GPUs - invalid generic resource (gres) specification

I am able to launch jobs on the GPU server the traditional way (using CPUs and memory as consumables):

~ srun -c 1 --mem 1M -w serverGpu1 hostname
serverGpu1

But trying to request a GPU gives an error:

~ srun -c 1 --mem 1M --gres=gpu:1 hostname
srun: error: Unable to allocate resources: Invalid generic resource (gres) specification

I checked this question, but it didn't help in my case.

Slurm configuration file

On all nodes

SlurmctldHost=vinz
SlurmctldHost=shiny
GresTypes=gpu
MpiDefault=none
ProctrackType=proctrack/cgroup
ReturnToService=1
SlurmctldPidFile=/media/Slurm/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
StateSaveLocation=/media/Slurm
SwitchType=switch/none
TaskPlugin=task/cgroup

InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
DefMemPerCPU=1
SchedulerType=sched/backfill
SelectType=select/cons_tres
SelectTypeParameters=CR_CPU_Memory
AccountingStorageType=accounting_storage/none
AccountingStoreJobComment=YES
ClusterName=cluster
JobCompLoc=/media/Slurm/job_completion.txt
JobCompType=jobcomp/filetxt
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/cgroup
SlurmctldDebug=info
SlurmctldLogFile=/media/Slurm/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
MaxArraySize=10001
NodeName=docker1 CPUs=144 Boards=1 RealMemory=300000 Sockets=4 CoresPerSocket=18 ThreadsPerCore=2 Weight=100 State=UNKNOWN
NodeName=serverGpu1 CPUs=96 RealMemory=550000 Boards=1 SocketsPerBoard=2 CoresPerSocket=24 Gres=gpu:nvidia_tesla_t4:4 ThreadsPerCore=2 Weight=500 State=UNKNOWN

PartitionName=Cluster Nodes=docker1,serverGpu1 Default=YES MaxTime=INFINITE State=UP

Cgroup configuration file

On all nodes

CgroupAutomount=yes 
CgroupReleaseAgentDir="/etc/slurm-llnl/cgroup" 

ConstrainCores=yes 
ConstrainDevices=yes
ConstrainRAMSpace=yes

gres configuration file

Only on the GPU server

AutoDetect=nvml
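
If slurmd on the GPU server was not built against the NVML library, `AutoDetect=nvml` cannot populate the GRES list and the node registers without any GPUs. As an alternative sketch, an explicit gres.conf (assuming the four T4s are exposed as /dev/nvidia0 through /dev/nvidia3 — verify with `ls /dev/nvidia*` on serverGpu1) might look like:

# Hypothetical explicit gres.conf replacing AutoDetect=nvml;
# the device paths are an assumption and must match the actual devices.
NodeName=serverGpu1 Name=gpu Type=nvidia_tesla_t4 File=/dev/nvidia[0-3]

The `Type` string here must match the one used in the `Gres=` entry of the node definition in slurm.conf.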

As for the logs of the GPU server:

[2021-12-06T12:22:52.800] gpu/nvml: _get_system_gpu_list_nvml: 4 GPU system device(s) detected
[2021-12-06T12:22:52.801] CPU frequency setting not configured for this node
[2021-12-06T12:22:52.803] slurmd version 20.11.2 started
[2021-12-06T12:22:52.803] killing old slurmd[42176]
[2021-12-06T12:22:52.805] slurmd started on Mon, 06 Dec 2021 12:22:52 +0100
[2021-12-06T12:22:52.805] Slurmd shutdown completing
[2021-12-06T12:22:52.805] CPUs=96 Boards=1 Sockets=2 Cores=24 Threads=2 Memory=772654 TmpDisk=1798171 Uptime=8097222 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)

I would appreciate some guidance on how to solve this problem.

EDIT: as requested by @Gerald Schneider

~ sinfo -N -o "%N %G"
NODELIST GRES
docker1 (null)
serverGpu1 (null)
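
Since sinfo reports `(null)` GRES on both nodes even though the slurmd log shows four detected GPUs, a few standard Slurm CLI checks (a sketch; run the first on serverGpu1 and the others on the controller) could narrow this down:

# On serverGpu1: print the GRES configuration slurmd actually detects
slurmd -G

# On the controller: push the current slurm.conf/gres.conf to all nodes
scontrol reconfigure

# Verify whether the controller now sees the GPUs
scontrol show node serverGpu1 | grep -i -E "gres"

If `slurmd -G` reports no GRES, the NVML autodetection is not working and gres.conf needs explicit `File=` entries.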
