<p>This part of the PyTorch documentation seems highly relevant:<br/>
<a href="https://pytorch.org/docs/stable/notes/cuda.html#memory-management" rel="nofollow noreferrer">https://pytorch.org/docs/stable/notes/cuda.html#memory-management</a></p>
<blockquote>
<p>Memory management</p>
<p>PyTorch uses a caching memory allocator to speed up memory
allocations. This allows fast memory deallocation without device
synchronizations. However, <strong>the unused memory managed by the allocator
will still show as if used in nvidia-smi</strong>. You can use
memory_allocated() and max_memory_allocated() to monitor memory
occupied by tensors, and use memory_cached() and max_memory_cached()
to monitor memory managed by the caching allocator. Calling
empty_cache() releases all unused cached memory from PyTorch so that
those can be used by other GPU applications. However, the occupied GPU
memory by tensors will not be freed so it can not increase the amount
of GPU memory available for PyTorch.</p>
</blockquote>
<p>I bolded the part that mentions <strong>nvidia-smi</strong>, which, as far as I know, is what GPUtil queries.</p>
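<p>As a minimal sketch of the APIs the quoted passage names: the snippet below reads both counters, so you can see that the caching allocator's reservation (what nvidia-smi, and hence GPUtil, reports as "used") can exceed the memory actually occupied by tensors. Note that in newer PyTorch releases <code>memory_cached()</code>/<code>max_memory_cached()</code> were renamed to <code>memory_reserved()</code>/<code>max_memory_reserved()</code>; the helper name <code>gpu_memory_report</code> is my own, and the CPU-only fallback is an assumption for machines without CUDA.</p>

```python
import torch

def gpu_memory_report(device: int = 0):
    """Return (tensor_bytes, cached_bytes) for a CUDA device, or (0, 0) without a GPU."""
    if not torch.cuda.is_available():
        # Assumption: fall back to zeros on CPU-only machines for illustration.
        return 0, 0
    # Bytes currently occupied by live tensors.
    allocated = torch.cuda.memory_allocated(device)
    # Bytes held by PyTorch's caching allocator; this is (roughly) the figure
    # nvidia-smi shows as "used", even when much of it is not backing tensors.
    # memory_cached() is the older name for memory_reserved().
    reserved = torch.cuda.memory_reserved(device)
    return allocated, reserved

allocated, reserved = gpu_memory_report()
print(f"tensors: {allocated} B, allocator cache: {reserved} B")

if torch.cuda.is_available():
    # Returns cached-but-unused blocks to the driver, shrinking the
    # nvidia-smi figure; memory still held by live tensors is NOT freed.
    torch.cuda.empty_cache()
```

<p>This illustrates why GPUtil can report the GPU as nearly full even when your tensors are small: the gap between <code>reserved</code> and <code>allocated</code> is cache, reclaimable via <code>empty_cache()</code>.</p>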