I use slurm to manage some of our computations, but sometimes jobs are killed with an out-of-memory error even though that is not actually the case. This strange problem concerns in particular Python jobs that use multiprocessing.
Here is a simple example to reproduce this behavior:
#!/usr/bin/python
from time import sleep
nmem = int(3e7) # this will amount to ~1GB of numbers
nprocs = 200 # will create this many workers later
nsleep = 5 # sleep seconds
array = list(range(nmem)) # allocate some memory
print("done allocating memory")
sleep(nsleep)
print("continuing with multiple processes (" + str(nprocs) + ")")
from multiprocessing import Pool
def f(i):
    sleep(nsleep)
# this will create a pool of workers, each of which "seem" to use 1GB
# even though the individual processes don't actually allocate any memory
p = Pool(nprocs)
p.map(f,list(range(nprocs)))
print("finished successfully")
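The accounting confusion likely comes from the fact that each forked worker reports the parent's full resident set, even though those pages are shared copy-on-write. A minimal sketch (Linux-only, reading VmRSS from /proc; the helper names here are mine, not from the script above) that makes this visible:

```python
import os
from multiprocessing import get_context

def rss_kib(pid):
    # Parse VmRSS (resident set size, in KiB) from /proc/<pid>/status.
    # Linux-only, much like slurm's jobacct_gather/linux plugin.
    with open("/proc/%d/status" % pid) as fh:
        for line in fh:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

def report(queue):
    # The child allocates nothing itself, yet its VmRSS includes all
    # pages it shares copy-on-write with the parent.
    queue.put(rss_kib(os.getpid()))

def measure(n=3_000_000):
    ctx = get_context("fork")     # fork start method: pages are shared
    data = list(range(n))         # ~100 MB allocated in the parent only
    parent = rss_kib(os.getpid())
    q = ctx.Queue()
    p = ctx.Process(target=report, args=(q,))
    p.start()
    child = q.get()
    p.join()
    return parent, child, len(data)

if __name__ == "__main__":
    parent, child, _ = measure()
    print("parent RSS: %d KiB, child RSS: %d KiB" % (parent, child))
```

Summing VmRSS over the pool therefore multiplies the parent's memory by the number of workers; a metric based on PSS (proportional set size) would instead divide shared pages among the processes that map them.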
Even though this may run fine locally, slurm memory accounting seems to sum up the resident memory of each individual process, leading to a reported usage of nprocs x 1GB instead of 1GB (the actual memory usage). I don't think this is what it should do, and it is not what the OS is doing either; the machine does not appear to be swapping or anything.
Here is the output if I run the code locally:
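For what it's worth, slurm's accounting of shared pages is configurable: `JobAcctGatherParams` in slurm.conf accepts a `UsePss` flag that charges shared memory proportionally rather than once per process. A sketch of the relevant setting (cluster-wide, requires admin access; whether it suits your site is a separate question):

```
# slurm.conf (cluster-wide, admin rights required)
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherParams=UsePss
```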
[local output omitted] plus a screenshot of htop
Here is the output when I run the same command through slurm:
> srun --nodelist=compute3 --mem=128G python test-slurm-mem.py
srun: job 694697 queued and waiting for resources
srun: job 694697 has been allocated resources
done allocating memory
continuing with multiple processes (200)
slurmstepd: Step 694697.0 exceeded memory limit (193419088 > 131968000), being killed
srun: Exceeded job memory limit
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd: *** STEP 694697.0 ON compute3 CANCELLED AT 2018-09-20T10:22:53 ***
srun: error: compute3: task 0: Killed
> sacct --format State,ExitCode,JobName,ReqCPUs,MaxRSS,AveCPU,Elapsed -j 694697.0
State ExitCode JobName ReqCPUS MaxRSS AveCPU Elapsed
---------- -------- ---------- -------- ---------- ---------- ----------
CANCELLED+ 0:9 python 2 193419088K 00:00:04 00:00:13
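The sacct numbers are consistent with every worker being charged the parent's full resident set. A quick back-of-the-envelope check (values copied from the output above; 200 is nprocs from the test script):

```python
maxrss_kib = 193419088        # MaxRSS reported by sacct, in KiB
limit_kib = 131968000         # the 128G limit srun enforced, in KiB
nprocs = 200                  # pool size in the test script

# Per-worker share of the reported memory, in GiB
per_worker_gib = maxrss_kib / 1024**2 / nprocs
print(round(per_worker_gib, 2))   # roughly the ~1 GB the parent allocated
```

So the reported total is about nprocs times the single ~1 GB allocation, which is exactly the "sum of per-process RSS" behavior described above.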