I just want to clarify my understanding of how the JIT and TorchScript work, and get a specific example straightened out.
So, if I'm not mistaken, torch.jit.script converts my method or module into TorchScript. I can use my TorchScript-compiled module in environments outside of Python, but I can also use it within Python, with presumed improvements and optimizations. The case of torch.jit.trace is similar (the weights and operations are traced instead), but it follows roughly the same idea.
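For reference, this is the workflow I have in mind (a minimal sketch with a toy module of my own; the file name is just an example):

import torch
import torch.nn as nn

class Tiny(nn.Module):
    def forward(self, x):
        return x * 2

# Compile to TorchScript, either by scripting or by tracing with an example input
scripted = torch.jit.script(Tiny())
traced = torch.jit.trace(Tiny(), torch.randn(3))

# The serialized module can be loaded outside Python (e.g. via torch::jit::load in C++)
scripted.save("tiny_scripted.pt")

# ...but it can also be loaded back and used from Python like a regular module
loaded = torch.jit.load("tiny_scripted.pt")
print(loaded(torch.randn(3)))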
If that is the case, then a TorchScripted module should generally be at least as fast as typical inference through the Python interpreter. Experimenting a bit, I observed that it is usually slower than the typical interpreter inference time, and reading around a bit, I found that apparently TorchScript modules need some "warm up" to reach their best performance. Upon doing so, I saw no real change in inference time: it got better, but not enough to count as an improvement over the typical way of doing things (the Python interpreter). Additionally, I used a third-party library called torch_tvm, which, when enabled, supposedly halves the inference time of any way of JIT-ing a module.
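To be explicit about what I mean by "warming up", the measurement pattern I was aiming for looks roughly like this (a sketch of my own understanding, not code from any library; it discards the warmup runs and averages the rest):

import time

def timed_inference(fn, x, warmup=10, iters=100):
    # Warmup passes first, so the JIT's profiling and optimization can kick in
    for _ in range(warmup):
        fn(x)
    # Then average the steady-state latency over many runs
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return (time.perf_counter() - start) / iters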
So far none of this has happened for me, and I can't tell why.
Here is my sample code, in case I'm doing something wrong:
import time

import torch
import torch.nn as nn
import torch_tvm


class TrialC(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(1024, 2048)
        self.l2 = nn.Linear(2048, 4096)
        self.l3 = nn.Linear(4096, 4096)
        self.l4 = nn.Linear(4096, 2048)
        self.l5 = nn.Linear(2048, 1024)

    def forward(self, input):
        out = self.l1(input)
        out = self.l2(out)
        out = self.l3(out)
        out = self.l4(out)
        out = self.l5(out)
        return out


if __name__ == '__main__':
    # Trial inference input
    TrialC_input = torch.randn(1, 1024)
    warmup = 10

    # Record time for typical (eager) inference
    model = TrialC()
    start = time.time()
    model_out = model(TrialC_input)
    elapsed = time.time() - start

    # Record the 10th inference time (after 10 warmup runs) for the scripted
    # model; elapsed_2 keeps only the last iteration's timing
    script_model = torch.jit.script(TrialC())
    for i in range(warmup):
        start_2 = time.time()
        model_out_check_2 = script_model(TrialC_input)
        elapsed_2 = time.time() - start_2

    # Record the 10th inference time (after 10 warmup runs) for the traced
    # model with TVM optimization enabled
    torch_tvm.enable()
    script_model_2 = torch.jit.trace(TrialC(), torch.randn(1, 1024))
    for i in range(warmup):
        start_3 = time.time()
        model_out_check_3 = script_model_2(TrialC_input)
        elapsed_3 = time.time() - start_3

    print("Regular model inference time: {}s\nJIT compiler inference time: {}s\nJIT Compiler with TVM: {}s".format(elapsed, elapsed_2, elapsed_3))
And here are the results of running the above code on my CPU:
Regular model inference time: 0.10335588455200195s
JIT compiler inference time: 0.11449170112609863s
JIT Compiler with TVM: 0.10834860801696777s
Any help or clarification on this matter would be greatly appreciated!