MPI4PY: scattering the columns of a matrix

Posted 2024-06-18 01:02:53


I am using MPI4PY to scatter n/p columns of the input data to each of two processes. However, the columns each process receives are not the ones I want. What changes do I have to make to the code so that it produces the result shown at the end?

The matrix is:

[1, 2, 3, 4]
[5, 6, 7, 8]
[9, 10, 11, 12]
[13, 14, 15, 16]

Here n = 4 and p = 2, so each process should get 2 columns.

Here is my code:

# Imports
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size() 
rank = comm.Get_rank()

rows = 4
num_columns = rows // size

data = None

if rank == 0:
    data = np.matrix([[1, 2, 3, 4],
                      [5, 6, 7, 8],
                      [9, 10, 11, 12],
                      [13, 14, 15, 16]])

recvbuf = np.empty((rows, num_columns), dtype='int')
comm.Scatterv(data, recvbuf, root=0)
print('Rank: ', rank, ', recvbuf received:\n', recvbuf)

This produces the following output:

Rank:  0 , recvbuf received:
[[1 2]
[3 4]
[5 6]
[7 8]]
Rank:  1 , recvbuf received:
[[ 9 10]
[11 12]
[13 14]
[15 16]]
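The split above can be reproduced without MPI: when no counts or displacements are given, Scatterv treats the send buffer as one flat C-order block of memory and divides it evenly across the ranks, so each rank's recvbuf is filled from a contiguous half. A minimal pure-NumPy sketch of that behaviour:

```python
import numpy as np

data = np.array([[1, 2, 3, 4],
                 [5, 6, 7, 8],
                 [9, 10, 11, 12],
                 [13, 14, 15, 16]])

flat = data.ravel()          # C-order: 1, 2, 3, ..., 16
chunks = np.split(flat, 2)   # what Scatterv hands to each of 2 ranks

# Rank 0's recvbuf is filled from the first 8 values,
# i.e. the first two *rows* of data, not its columns.
print(chunks[0].reshape(4, 2))
```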

But I want to get the following output:

Rank:  0 , recvbuf received:
[[1 2]
[5 6]
[9 10]
[13 14]]
Rank:  1 , recvbuf received:
[[ 3 4]
[7 8]
[11 12]
[15 16]]

1 Answer

Answered 2024-06-18 01:02:53

I think this code does what you want. The problem is that Scatterv does not care about the shape of the numpy arrays at all; it only sees a linear block of memory containing the values. So the simplest fix is to rearrange the data into the right order beforehand. Note that send_data is a 1D array, but that doesn't matter, because Scatterv doesn't care. On the receiving side, recvbuf's shape is already defined, and Scatterv simply fills it from the 1D input it receives.

# Imports
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

rows = 4
num_cols = rows // size

send_data = None

if rank == 0:
    data = np.array([[1, 2, 3, 4],
                     [5, 6, 7, 8],
                     [9, 10, 11, 12],
                     [13, 14, 15, 16]])

    # Split into sub-arrays along the column axis
    arrs = np.split(data, size, axis=1)

    # Flatten each sub-array
    raveled = [np.ravel(arr) for arr in arrs]

    # Join them back up into a single 1D send buffer
    send_data = np.concatenate(raveled)

recvbuf = np.empty((rows, num_cols), dtype='int')
comm.Scatterv(send_data, recvbuf, root=0)

print('Rank: ', rank, ', recvbuf received:\n', recvbuf)
