Creating Slurm jobs from within a Python script that iterates over items in a list

Posted 2024-05-18 08:36:21


Background:
I wrote a Python script to convert files from one format to another. It takes a text file (subject_list.txt) as input, iterates over the source directory names listed in that file (a few hundred directories, each containing thousands of files), converts their contents, and stores the results in a specified output directory.

Problem:
To save time, I would like to run this script on a high-performance cluster (HPC) and create jobs so the files are converted in parallel, rather than looping over each directory in the list sequentially.

I am new to both Python and HPCs. Our lab previously worked mostly in Bash and had no access to an HPC environment, but we recently gained access to one and decided to switch to Python, so all of this is new to me.

Question:
Is there a Python module that lets me create jobs from within a Python script? I have found documentation for the multiprocessing and subprocess modules, but it is not clear to me how to use them, or whether I should take a different approach entirely. I have also read many Stack Overflow posts about using Slurm together with Python, but with so much information and so little background I cannot tell which thread to follow. Any help is greatly appreciated.
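
For what it's worth, the two modules do different things: multiprocessing parallelizes work inside a single Python process (i.e. within one node/job), while calling sbatch through subprocess submits independent Slurm jobs that the scheduler spreads across the cluster. Below is a minimal sketch of the second approach; it assumes sbatch is on PATH and uses a hypothetical wrapper script convert_one.sh that is not part of this post.

import subprocess

directories = ["subject01", "subject02"]  # placeholders for the entries in subject_list.txt

for d in directories:
    # One Slurm job per directory; convert_one.sh is a hypothetical wrapper script
    subprocess.run(["sbatch", "--job-name", "convert_" + d,
                    "--wrap", "bash convert_one.sh " + d],
                   check=True)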

Environment:
HPC: Red Hat Enterprise Linux Server release 7.4 (Maipo)
Python: Python3/3.6.1
Slurm: 17.11.2

The housekeeping part of the code:

import os
import subprocess

# Change this for your study
group="labname"
study="studyname"

# Set paths
archivedir="/projects/" + group + "/archive"
sourcedir="/projects/" + group + "/shared/DICOMS/" + study
niidir="/projects/" + group + "/shared/" + study + archivedir + "/clean_niftis"
outputlog=niidir + "/outputlog_convert.txt"
errorlog=niidir + "/errorlog_convert.txt"
dcm2niix="/projects/" + group + "/shared/dcm2niix/build/bin/dcm2niix"

# Source the subject list (needs to be in your current working directory)
subjectlist="subject_list.txt" 

# Check/create the log files
def touch(path): # helper: create a file if it doesn't already exist
    with open(path, 'a'): # append mode, so an existing file isn't truncated
        os.utime(path, None) # update the timestamp (open() creates the file if needed)

if not os.path.isfile(outputlog): # if the file does not exist...
    touch(outputlog)
if not os.path.isfile(errorlog):
    touch(errorlog)
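
One thing to watch here (an assumption on my part, not something stated in the post): both log files live inside niidir, so open() will fail if that directory has not been created yet. A small sketch that could run before the log-file check above:

# Create the output directory first so the log files can be created inside it
# (assumes clean_niftis may not exist yet).
os.makedirs(niidir, exist_ok=True)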

The part I am stuck on is the loop that should build an sbatch command for each subject directory and submit it (the working version I eventually arrived at is in Edit 2 below).

Edit 1:
dcm2niix is the software used for the conversion and is available on the HPC. It takes the following flags and paths: -o -b y outputDirectory sourceDirectory.
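
For reference, and under the assumption that dcm2niix follows its usual syntax where -o is immediately followed by the output directory and -b y requests BIDS JSON sidecars, a single-subject call expanded with the housekeeping variables would look roughly like this ("subject01" is a made-up directory name):

# Hypothetical single-subject conversion call; "subject01" is a placeholder.
subprocess.call([dcm2niix, "-b", "y", "-o", niidir, sourcedir + "/subject01"])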

Edit 2 (solution):

with open(subjectlist) as file:
    lines = file.readlines() # read all subject directory names from the list
for line in lines:
    subject=line.strip()
    subjectpath=sourcedir+"/"+subject # sourcedir is set in the housekeeping section
    if os.path.isdir(subjectpath):
        with open(outputlog, 'a') as logfile:
            logfile.write(subject+os.linesep)
        # Create a job to submit to the HPC with sbatch 
        batch_cmd = 'sbatch --job-name dcm2nii_{subject} --partition=short --time 00:60:00 --mem-per-cpu=2G --cpus-per-task=1 -o {niidir}/{subject}_dcm2nii_output.txt -e {niidir}/{subject}_dcm2nii_error.txt --wrap="/projects/{group}/shared/dcm2niix/build/bin/dcm2niix -o {niidir} {subjectpath}"'.format(subject=subject,niidir=niidir,subjectpath=subjectpath,group=group)
        # Submit the job
        subprocess.call([batch_cmd], shell=True)
    else:
        with open(errorlog, 'a') as logfile:
            logfile.write(subject+os.linesep)
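
If you prefer to avoid shell=True, the same submission can be expressed as an argument list passed to subprocess.run (Python 3.5+). This is only a sketch of the loop body, assuming the housekeeping variables above are in scope:

# Alternative inside the loop: build the command as a list so no shell quoting
# is needed; --wrap still takes the conversion command as a single string.
wrap_cmd = "{0} -o {1} {2}".format(dcm2niix, niidir, subjectpath)
subprocess.run(["sbatch",
                "--job-name", "dcm2nii_" + subject,
                "--partition=short",
                "--time", "00:60:00",
                "--mem-per-cpu=2G",
                "--cpus-per-task=1",
                "-o", "{0}/{1}_dcm2nii_output.txt".format(niidir, subject),
                "-e", "{0}/{1}_dcm2nii_error.txt".format(niidir, subject),
                "--wrap", wrap_cmd],
               check=True)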

1 Answer

Answered on 2024-05-18 08:36:21

Here is a possible solution for your code. It has not been tested.

with open(subjectlist) as file:
    lines = file.readlines() 

for line in lines:
    subject=line.strip()
    subjectpath=sourcedir+"/"+subject
    if os.path.isdir(subjectpath):
        with open(outputlog, 'a') as logfile:
            logfile.write(subject+os.linesep)

        # Build the sbatch command for this subject; adjust the partition,
        # time limit and memory for your cluster.
        cmd = ('sbatch --job-name dcm2nii_{subject} --partition=short --time 00:60:00 '
               '--mem-per-cpu=2G --cpus-per-task=1 '
               '-o {niidir}/{subject}_dcm2nii_output.txt '
               '-e {niidir}/{subject}_dcm2nii_error.txt '
               '--wrap="{dcm2niix} -b y -o {niidir} {subjectpath}"'
               ).format(subject=subject, niidir=niidir,
                        subjectpath=subjectpath, dcm2niix=dcm2niix)

        # Submit the job; dcm2niix then runs inside the Slurm job.
        subprocess.call([cmd], shell=True)

    else:
        with open(errorlog, 'a') as logfile:
            logfile.write(subject+os.linesep)
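
One optional addition (my own suggestion, not part of the original answer): sbatch prints "Submitted batch job <id>" on success, so capturing its output lets you log the job ID or record failed submissions. A sketch for Python 3.6, where capture_output= is not yet available:

# Capture sbatch's reply so failed submissions end up in the error log.
result = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE,
                        universal_newlines=True)
if result.returncode == 0:
    print(result.stdout.strip())  # e.g. "Submitted batch job 123456"
else:
    with open(errorlog, 'a') as logfile:
        logfile.write("sbatch failed for " + subject + os.linesep)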
