Fortran (or C++ or Python) child processes spawned using mpi4py (Python) do not disconnect when the inter- and intra-communicators are merged


I am trying to parallelize a small part of a Python code with Fortran 90. So, as a start, I am trying to understand how the spawn function works.

First, I tried to spawn a Python child process from a Python parent process, using the dynamic process management example from the mpi4py tutorial. Everything worked fine. In that case, as far as I understand, only the intercommunicator between the parent process and the child process is used.
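
For reference, that pattern looks roughly like the sketch below, in the spirit of the mpi4py tutorial example (the file names parent.py and child.py are illustrative, not from the tutorial); parent and child talk only through the intercommunicator returned by Spawn and Get_parent:

# parent.py - minimal sketch of spawning a python child over an intercommunicator
from mpi4py import MPI
import numpy
import sys

# sub_comm is an intercommunicator to the spawned python child
sub_comm = MPI.COMM_SELF.Spawn(sys.executable, args=['child.py'], maxprocs=1)
N = numpy.array(100, dtype='i')
# on the spawning side, the root of an intercommunicator collective is MPI.ROOT
sub_comm.Bcast([N, MPI.INT], root=MPI.ROOT)
sub_comm.Disconnect()

# child.py - the spawned side retrieves the same intercommunicator
from mpi4py import MPI
import numpy

parent_comm = MPI.Comm.Get_parent()
N = numpy.array(0, dtype='i')
# root=0 designates rank 0 of the parent's (remote) group
parent_comm.Bcast([N, MPI.INT], root=0)
parent_comm.Disconnect()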

Then I moved on to an example that spawns a Fortran 90 child process from a Python parent process. For this I used an example from a previous post on Stack Overflow:

from mpi4py import MPI
import numpy

'''
slavef90 is an executable built starting from slave.f90
'''
# Spawning a process running an executable
# sub_comm is an MPI intercommunicator
sub_comm = MPI.COMM_SELF.Spawn('slavef90', args=[], maxprocs=1)
# common_comm is an intracommunicator across the python process and the spawned process.
# All kinds of collective communication (Bcast...) are now possible between the python process and the spawned process
common_comm = sub_comm.Merge(False)
print('parent in common_comm ', common_comm.Get_rank(), ' of ', common_comm.Get_size())
data = numpy.arange(1, dtype='int32')
data[0] = 42
print("Python sending message to fortran: {}".format(data))
common_comm.Send([data, MPI.INT], dest=1, tag=0)

print("Python over")
# disconnecting the shared communicators is required to finalize the spawned process.
sub_comm.Disconnect()
common_comm.Disconnect()

The corresponding Fortran 90 code for the spawned child process (slave.f90) is shown below:

  program test
  !
  implicit none
  !
  include 'mpif.h'
  !
  integer :: ierr,s(1),stat(MPI_STATUS_SIZE)
  integer :: parentcomm,intracomm
  !
  call MPI_INIT(ierr)
  call MPI_COMM_GET_PARENT(parentcomm, ierr)
  call MPI_INTERCOMM_MERGE(parentcomm, .true., intracomm, ierr)
  call MPI_RECV(s, 1, MPI_INTEGER, 0, 0, intracomm, stat, ierr)
  print*, 'fortran program received: ', s
  call MPI_COMM_DISCONNECT(intracomm, ierr)
  call MPI_COMM_DISCONNECT(parentcomm, ierr)
  call MPI_FINALIZE(ierr)
  end program test

I compiled the Fortran 90 code with mpif90 slave.f90 -o slavef90 -Wall, and I run the Python code as usual with python master.py. I get the desired output, but the spawned process does not disconnect: no statement after the disconnect calls (call MPI_COMM_DISCONNECT(intracomm, ierr) and call MPI_COMM_DISCONNECT(parentcomm, ierr)) is executed in the Fortran code, and consequently no statement after the disconnect calls in the Python code is executed either, so the run never terminates in the terminal.

As far as I understand, in this case the inter-communicator and intra-communicator are merged, so that the child and parent processes no longer form two separate groups, and something seems to go wrong when disconnecting them. But I cannot find a solution. I tried reproducing the Fortran 90 setup with child processes spawned in C++ and in Python, and ran into the same problem. Any help is appreciated. Thanks.


1 Answer

Note that the Python script disconnects the inter-communicator first and then the intra-communicator, whereas the Fortran program disconnects the intra-communicator first and then the inter-communicator. Since MPI_COMM_DISCONNECT is collective over its communicator, the two sides block in non-matching calls and the run deadlocks.

After fixing the order and freeing the merged intra-communicator instead of disconnecting it, I was able to run this test on my Mac (Open MPI and mpi4py installed via brew).

Here is my master.py:

#!/usr/local/Cellar/python@3.8/3.8.2/bin/python3

from mpi4py import MPI
import numpy

'''
slavef90 is an executable built starting from slave.f90
'''
# Spawning a process running an executable
# sub_comm is an MPI intercommunicator
sub_comm = MPI.COMM_SELF.Spawn('slavef90', args=[], maxprocs=1)
# common_comm is an intracommunicator across the python process and the spawned process.
# All kinds of collective communication (Bcast...) are now possible between the python process and the spawned process
common_comm = sub_comm.Merge(False)
print('parent in common_comm ', common_comm.Get_rank(), ' of ', common_comm.Get_size())
data = numpy.arange(1, dtype='int32')
data[0] = 42
print("Python sending message to fortran: {}".format(data))
common_comm.Send([data, MPI.INT], dest=1, tag=0)

print("Python over")
# free the (merged) intra-communicator
common_comm.Free()
# disconnecting the inter-communicator is required to finalize the spawned process.
sub_comm.Disconnect()

and my slave.f90:

  program test
  !
  implicit none
  !
  include 'mpif.h'
  !
  integer :: ierr,s(1),stat(MPI_STATUS_SIZE)
  integer :: parentcomm,intracomm
  integer :: rank, size
  !
  call MPI_INIT(ierr)
  call MPI_COMM_GET_PARENT(parentcomm, ierr)
  call MPI_INTERCOMM_MERGE(parentcomm, .true., intracomm, ierr)
  call MPI_COMM_RANK(intracomm, rank, ierr)
  call MPI_COMM_SIZE(intracomm, size, ierr)
  call MPI_RECV(s, 1, MPI_INTEGER, 0, 0, intracomm, stat, ierr)
  print*, 'fortran program', rank, ' / ', size, ' received: ', s
  print*, 'Slave frees intracomm'
  call MPI_COMM_FREE(intracomm, ierr)
  print*, 'Slave disconnect intercomm'
  call MPI_COMM_DISCONNECT(parentcomm, ierr)
  print*, 'Slave finalize'
  call MPI_FINALIZE(ierr)
  end program test
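
For reference, the fixed pair builds and runs with the same commands as in the question; as I understand the MPI semantics, this version terminates because MPI_COMM_FREE only marks the merged intra-communicator for deallocation and returns, so the one remaining disconnect call on each side now matches on the same inter-communicator:

mpif90 slave.f90 -o slavef90 -Wall
python master.py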
