I am using a tool called starcluster (http://star.mit.edu/cluster) to boot up an SGE-configured cluster in the Amazon cloud. The problem is that it doesn't seem to be configured with any preset consumable resources except SLOTS, which I don't seem to be able to request directly with qsub -l slots=X. Each time I boot up a cluster I may ask for a different type of EC2 node, so the fact that this slot resource is preconfigured is really nice. I can request a certain number of slots using the preconfigured parallel environment, but the problem is that it was set up for MPI, so requesting slots through that parallel environment sometimes grants the job slots spread out across several compute nodes.
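For concreteness, the two kinds of requests described above look roughly like this (the job script name myjob.sh is just a placeholder):

# Asking for the slots complex directly -- this does not appear to be requestable:
qsub -l slots=2 myjob.sh

# Asking through the preconfigured MPI parallel environment (orte) works,
# but the 2 slots may end up spread across several compute nodes:
qsub -pe orte 2 myjob.sh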
Is there a way to either 1) make a parallel environment that takes advantage of the existing preconfigured HOST=X slot settings that starcluster creates, so that slots can be requested on a single node, or 2) use some kind of resource that SGE is automatically aware of? Running qhost makes me think that even though NCPU and MEMTOT are not defined anywhere I can see, SGE somehow knows about those resources. Is there a setting to make those resources requestable without explicitly defining how much of each is available?
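For reference, my understanding is that making such a resource requestable would normally look something like the sketch below, where num_cores is a made-up consumable; note that it still requires explicitly setting a per-host value, which is what I was hoping to avoid:

# Sketch only: add a hypothetical consumable "num_cores" to the complex list
# (qconf -mc opens the list in an editor; a line like this would be added):
#   num_cores   nc   INT   <=   YES   YES   0   0
qconf -mc

# Advertise how many are available on each execution host, e.g. 2 on node001:
qconf -me node001        # set: complex_values num_cores=2

# A serial job could then request cores on a single node directly:
qsub -l num_cores=2 myjob.sh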
Thanks for your time!
qhost
output:
HOSTNAME ARCH NCPU LOAD MEMTOT MEMUSE SWAPTO SWAPUS
-------------------------------------------------------------------------------
global - - - - - - -
master linux-x64 2 0.01 7.3G 167.4M 0.0 0.0
node001 linux-x64 2 0.01 7.3G 139.6M 0.0 0.0
qconf -mc
output:
#name shortcut type relop requestable consumable default urgency
#----------------------------------------------------------------------------------------
arch a RESTRING == YES NO NONE 0
calendar c RESTRING == YES NO NONE 0
cpu cpu DOUBLE >= YES NO 0 0
display_win_gui dwg BOOL == YES NO 0 0
h_core h_core MEMORY <= YES NO 0 0
h_cpu h_cpu TIME <= YES NO 0:0:0 0
h_data h_data MEMORY <= YES NO 0 0
h_fsize h_fsize MEMORY <= YES NO 0 0
h_rss h_rss MEMORY <= YES NO 0 0
h_rt h_rt TIME <= YES NO 0:0:0 0
h_stack h_stack MEMORY <= YES NO 0 0
h_vmem h_vmem MEMORY <= YES NO 0 0
hostname h HOST == YES NO NONE 0
load_avg la DOUBLE >= NO NO 0 0
load_long ll DOUBLE >= NO NO 0 0
load_medium lm DOUBLE >= NO NO 0 0
load_short ls DOUBLE >= NO NO 0 0
m_core core INT <= YES NO 0 0
m_socket socket INT <= YES NO 0 0
m_topology topo RESTRING == YES NO NONE 0
m_topology_inuse utopo RESTRING == YES NO NONE 0
mem_free mf MEMORY <= YES NO 0 0
mem_total mt MEMORY <= YES NO 0 0
mem_used mu MEMORY >= YES NO 0 0
min_cpu_interval mci TIME <= NO NO 0:0:0 0
np_load_avg nla DOUBLE >= NO NO 0 0
np_load_long nll DOUBLE >= NO NO 0 0
np_load_medium nlm DOUBLE >= NO NO 0 0
np_load_short nls DOUBLE >= NO NO 0 0
num_proc p INT == YES NO 0 0
qname q RESTRING == YES NO NONE 0
rerun re BOOL == NO NO 0 0
s_core s_core MEMORY <= YES NO 0 0
s_cpu s_cpu TIME <= YES NO 0:0:0 0
s_data s_data MEMORY <= YES NO 0 0
s_fsize s_fsize MEMORY <= YES NO 0 0
s_rss s_rss MEMORY <= YES NO 0 0
s_rt s_rt TIME <= YES NO 0:0:0 0
s_stack s_stack MEMORY <= YES NO 0 0
s_vmem s_vmem MEMORY <= YES NO 0 0
seq_no seq INT == NO NO 0 0
slots s INT <= YES YES 1 1000
swap_free sf MEMORY <= YES NO 0 0
swap_rate sr MEMORY >= YES NO 0 0
swap_rsvd srsv MEMORY >= YES NO 0 0
qconf -me master
output (one of the nodes, as an example):
hostname master
load_scaling NONE
complex_values NONE
user_lists NONE
xuser_lists NONE
projects NONE
xprojects NONE
usage_scaling NONE
report_variables NONE
qconf -msconf
output:
algorithm default
schedule_interval 0:0:15
maxujobs 0
queue_sort_method load
job_load_adjustments np_load_avg=0.50
load_adjustment_decay_time 0:7:30
load_formula np_load_avg
schedd_job_info false
flush_submit_sec 0
flush_finish_sec 0
params none
reprioritize_interval 0:0:0
halftime 168
usage_weight_list cpu=1.000000,mem=0.000000,io=0.000000
compensation_factor 5.000000
weight_user 0.250000
weight_project 0.250000
weight_department 0.250000
weight_job 0.250000
weight_tickets_functional 0
weight_tickets_share 0
share_override_tickets TRUE
share_functional_shares TRUE
max_functional_jobs_to_schedule 200
report_pjob_tickets TRUE
max_pending_tasks_per_job 50
halflife_decay_list none
policy_hierarchy OFS
weight_ticket 0.010000
weight_waiting_time 0.000000
weight_deadline 3600000.000000
weight_urgency 0.100000
weight_priority 1.000000
max_reservation 0
default_duration INFINITY
qconf -mq all.q
output:
qname all.q
hostlist @allhosts
seq_no 0
load_thresholds np_load_avg=1.75
suspend_thresholds NONE
nsuspend 1
suspend_interval 00:05:00
priority 0
min_cpu_interval 00:05:00
processors UNDEFINED
qtype BATCH INTERACTIVE
ckpt_list NONE
pe_list make orte
rerun FALSE
slots 1,[master=2],[node001=2]
tmpdir /tmp
shell /bin/bash
prolog NONE
epilog NONE
shell_start_mode posix_compliant
starter_method NONE
suspend_method NONE
resume_method NONE
terminate_method NONE
notify 00:00:60
owner_list NONE
user_lists NONE
xuser_lists NONE
subordinate_list NONE
complex_values NONE
projects NONE
xprojects NONE
calendar NONE
initial_state default
s_rt INFINITY
h_rt INFINITY
s_cpu INFINITY
h_cpu INFINITY
s_fsize INFINITY
h_fsize INFINITY
s_data INFINITY
h_data INFINITY
s_stack INFINITY
h_stack INFINITY
s_core INFINITY
h_core INFINITY
s_rss INFINITY
Answer 1
The solution I found is to create a new parallel environment that uses the $pe_slots allocation rule (see man sge_pe). I set the number of slots available to that parallel environment to the maximum, since $pe_slots limits slot usage to a single node. Because starcluster sets up the slots at cluster boot time, this seems to do the trick nicely. You also need to add the new parallel environment to the queue. So, to make this dead simple:
qconf -ap by_node
Here are the contents after I edited the file:
pe_name by_node
slots 9999999
user_lists NONE
xuser_lists NONE
start_proc_args /bin/true
stop_proc_args /bin/true
allocation_rule $pe_slots
control_slaves TRUE
job_is_first_task TRUE
urgency_slots min
accounting_summary FALSE
Also modify the queue (called all.q by starcluster) to add this new parallel environment to its list:
qconf -mq all.q
and change this line:
pe_list make orte
to this:
pe_list make orte by_node
I was worried that jobs spawned from within a given job would be restricted to a single node, but that does not seem to be the case. I have a cluster with two nodes, each with two slots. I made a test file that looks like this:
#!/bin/bash
# From inside this job, submit a second 2-slot by_node job, then keep this job busy
qsub -b y -pe by_node 2 -cwd sleep 100
sleep 100
and executed it like this:
qsub -V -pe by_node 2 test.sh
After a little while, qstat showed both jobs running on different nodes:
job-ID prior name user state submit/start at queue slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
25 0.55500 test root r 10/17/2012 21:42:57 all.q@master 2
26 0.55500 sleep root r 10/17/2012 21:43:12 all.q@node001 2
I also tested submitting 3 jobs at once, each requesting the same number of slots on a single node, and only two ran at a time, one per node. So this seems to be set up correctly!
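For anyone who wants to repeat that last check, the test amounts to something like the following (assuming the same by_node parallel environment and a trivial sleep payload):

# Submit three 2-slot jobs that must each fit on a single node;
# with two nodes of two slots each, only two can run at a time.
for i in 1 2 3; do
    qsub -b y -cwd -pe by_node 2 sleep 100
done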