PyTorch model loading performance

Published 2024-05-17 13:01:28

From the Q&A section of Python中文网

Is the model loading performance shown here reasonable?

Total time: 5.4545 s
Function: load_model at line 81

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    81                                            @profile
    82                                            def load_model(dirname, device, weights):
                                                      """ load model from disk """
    83         1         90.0      90.0     0.0       device = torch.device(device)
    84         1         26.0      26.0     0.0       modelfile = os.path.join(dirname, 'model.py')
    85         1          7.0       7.0     0.0       weights = os.path.join(dirname, 'weights_%s.tar' % weights)
    86         1    5379756.0 5379756.0     98.6      model = torch.load(modelfile, map_location=device)
    87         1      72966.0   72966.0     1.3       model.load_state_dict(torch.load(weights, map_location=device))
    88         1       1648.0    1648.0     0.0       model.eval()
    89         1          1.0       1.0     0.0       return model

The initial call on line 86, which loads a 26 MB model, is about 100x slower than loading the 26 MB checkpoint on line 87, and 5.45 s is much longer than I expected.
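The difference between the two loads above is that line 86 unpickles an entire `nn.Module` while line 87 unpickles only a `state_dict` of tensors. The contrast can be reproduced on CPU with a small stand-in model; this is only an illustrative sketch, and none of the names below come from the original code:

```python
import io
import time

import torch
import torch.nn as nn

# Hypothetical small network standing in for the 26 MB model in the question.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

full_buf, sd_buf = io.BytesIO(), io.BytesIO()
torch.save(model, full_buf)             # pickles the entire nn.Module
torch.save(model.state_dict(), sd_buf)  # pickles only the weight tensors

for buf, label in [(full_buf, "full module"), (sd_buf, "state_dict")]:
    buf.seek(0)
    t0 = time.perf_counter()
    # weights_only=False is needed on recent PyTorch to unpickle a whole
    # module; older versions (such as the 1.2 used here) lack this kwarg.
    torch.load(buf, weights_only=False)
    print(f"{label}: {time.perf_counter() - t0:.6f} s")
```

On a tiny CPU model both loads are fast; the 100x gap in the question only appears with `map_location` pointing at a CUDA device, which suggests the cost is not the deserialization itself.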

I am using PyTorch 1.2 with device="cuda", and I confirmed the same performance on 1.3.1. Adding calls to torch.cuda.synchronize confirms that virtually all of the time is actually spent inside torch.load:

    86         1    5395545.0 5395545.0     98.6      model = torch.load(modelfile, map_location=device)
    87         1         76.0      76.0      0.0      torch.cuda.synchronize(device=device)
    88         1      72403.0   72403.0      1.3      model.load_state_dict(torch.load(weights, map_location=device))
    89         1         52.0      52.0      0.0      torch.cuda.synchronize(device=device)
    90         1       1640.0    1640.0      0.0      model.eval()
    91         1         21.0      21.0      0.0      torch.cuda.synchronize(device=device)
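The synchronize-before-and-after pattern used above generalizes to any wall-clock measurement of GPU work, since CUDA calls return before the kernels finish. A minimal sketch, where `timed` is an illustrative helper and not part of the original code:

```python
import time

import torch

def timed(fn, device="cuda"):
    """Wall-clock fn(), forcing pending CUDA kernels to finish first.

    CUDA operations run asynchronously, so without the synchronize()
    calls the timer can stop before the GPU work has completed.
    """
    sync = torch.cuda.is_available() and str(device).startswith("cuda")
    if sync:
        torch.cuda.synchronize(device)  # drain work queued before timing
    t0 = time.perf_counter()
    result = fn()
    if sync:
        torch.cuda.synchronize(device)  # wait for fn()'s kernels to finish
    return result, time.perf_counter() - t0

# Example: time a matmul on whatever device is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(256, 256, device=device)
_, elapsed = timed(lambda: x @ x, device)
print(f"matmul on {device}: {elapsed:.6f} s")
```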

Am I doing something wrong, or is this typical?


Tags: model, map, synchronize, time, device, line, load