Arguments are located on different GPUs when using nn.DataParallel(model)

Posted 2024-09-24 02:20:42


PyTorch 0.4.1

Python 2.7.12

I am adapting the NMP QC code (with some compatibility issues ironed out) to use multiple GPUs, because my single GPU cannot handle the workload (it crashes after running out of VRAM).

I am new to PyTorch, but I found a tutorial on using nn.DataParallel(model) to enable multi-GPU use.
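For context, the pattern from that tutorial looks roughly like this (a minimal sketch using a hypothetical TinyNet stand-in, not the actual NMP QC model):

import torch
import torch.nn as nn

class TinyNet(nn.Module):  # hypothetical stand-in, not the NMP QC model
    def __init__(self):
        super(TinyNet, self).__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
if torch.cuda.device_count() > 1:
    # DataParallel replicates the module on each visible GPU and splits
    # the input batch along dim 0, one chunk per device
    model = nn.DataParallel(model)
model = model.cuda()

batch = torch.randn(16, 8).cuda()  # inputs start on the default device (cuda:0)
output = model(batch)              # scatter/gather across GPUs is handled internally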

The places I changed are marked with "NEW" above them.

The code runs fine on a single GPU, even in multi-GPU mode, but when run on 2 or more GPUs it fails with the "arguments are located on different GPUs" error:

libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs3
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs2
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs1
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
Unexpected end of /proc/mounts line `overlay / overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/QKSBQ5PAFDDC3OMBEELQQETALQ:/var/lib/docker/overlay2/l/WWYI3IDQPNXGON7AHODBPSTVXL:/var/lib/docker/overlay2/l/Q54I2HYS4TKH4LDJKBTVTGWWO6:/var/lib/docker/overlay2/l/IUV2LFPNMPOS3MREOTT52TKL54:/var/lib/docker/overlay2/l/DB5GBUCI3DCBPX6TJG3O337YVB:/var/lib/docker/overlay2/l/DNYKXCZJH5FMFNJLNGYJJ2ITPI:/var/lib/docker/overlay2/l/7DZCTDVNSTPJISGW65UG7U3F75:/var/lib/docker/overlay2/l/VOEQO652VS63NLDLZZ4TCIJLO6:/var/lib/docker/overlay2/l/4SI6ZCRUIORG5'
Traceback (most recent call last):
  File "main.py", line 332, in <module>
    main()
  File "main.py", line 190, in main
    train(train_loader, model, criterion, optimizer, epoch, evaluation, logger)
  File "main.py", line 251, in train
    output = model(g, h, e)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py", line 123, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py", line 133, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/parallel_apply.py", line 77, in parallel_apply
    raise output
RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:236

Since I am sending one input at a time rather than one batch at a time as in the tutorial, I checked with .get_device(), and it confirmed that all 4 arguments being passed (g, h, e, target) are on the same device (device 0).
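The check was along these lines (a sketch, not the exact code in main.py; g, h, e and target are the tensors passed to model() in train()):

for name, t in [('g', g), ('h', h), ('e', e), ('target', target)]:
    # get_device() returns the CUDA ordinal of the tensor
    print('%s -> device %d' % (name, t.get_device()))  # all four print device 0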


Tags: docker, in, py, gpu, parallel, main, var, device