TensorFlow-Large-Model-Support 0.1.0-fosscuda-2019a-Python-3.7.2

This library provides an approach to training large models that cannot fit into GPU memory. It takes a computational graph defined by the user and automatically adds swap-out and swap-in nodes to transfer tensors from GPU memory to the host and back. The computational graph is modified statically, so the modification must be done before a session actually starts.
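As a toy illustration of the static rewrite described above (this is not the library's actual API, just a sketch of the idea), the following shows a graph, represented as an adjacency dict, being rewritten so that one edge takes a host round trip through inserted swap-out/swap-in nodes:

```python
# Toy illustration (NOT the TFLMS API): statically rewrite a graph by
# inserting swap-out/swap-in nodes on an edge whose tensor would otherwise
# stay resident in GPU memory between its producer and a distant consumer.

def insert_swap_nodes(graph, producer, consumer):
    """Return a new graph where the edge producer->consumer is replaced by
    producer -> swap_out -> swap_in -> consumer (a host round trip)."""
    new_graph = {node: list(deps) for node, deps in graph.items()}
    swap_out = f"swap_out_{producer}"
    swap_in = f"swap_in_{producer}"
    # Redirect the consumer to read from the swap-in node instead.
    new_graph[consumer] = [swap_in if d == producer else d
                           for d in new_graph[consumer]]
    new_graph[swap_out] = [producer]   # copy tensor GPU -> host
    new_graph[swap_in] = [swap_out]    # copy tensor host -> GPU
    return new_graph

# A small forward graph: conv1 feeds both conv2 and a much later loss node,
# so its output would otherwise occupy GPU memory for the whole interval.
graph = {
    "input": [],
    "conv1": ["input"],
    "conv2": ["conv1"],
    "loss": ["conv1", "conv2"],
}
rewritten = insert_swap_nodes(graph, "conv1", "loss")
print(rewritten["loss"])  # ['swap_in_conv1', 'conv2']
```

Because the rewrite happens on the graph data structure itself, it must run before execution begins, which is exactly why TFLMS has to modify the TensorFlow graph before the session starts.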
Accessing TensorFlow-Large-Model-Support 0.1.0-fosscuda-2019a-Python-3.7.2
To load the module for TensorFlow-Large-Model-Support 0.1.0-fosscuda-2019a-Python-3.7.2 please use this command on the BEAR systems (BlueBEAR, BEARCloud VMs, and CaStLeS VMs):
module load TensorFlow-Large-Model-Support/0.1.0-fosscuda-2019a-Python-3.7.2
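On BlueBEAR this module would typically be loaded inside a batch job that requests a GPU. A minimal job-script sketch follows; the resource directives and the script name `my_training_script.py` are placeholders, so check the BlueBEAR Job Submission page for the correct options for your account:

```shell
#!/bin/bash
#SBATCH --gres=gpu:1     # request one GPU (placeholder; see the Job Submission page)
#SBATCH --time=1:0:0     # placeholder walltime

module purge
module load TensorFlow-Large-Model-Support/0.1.0-fosscuda-2019a-Python-3.7.2

python my_training_script.py   # hypothetical user training script
```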
BEAR Apps Version
EL8-haswell (GPUs: NVIDIA P100)
Each listed architecture consists of two parts: OS-CPU.
- BlueBEAR: The OS used on BlueBEAR is represented by EL and there are several different processor (CPU) types available on BlueBEAR. More information about the processor types on BlueBEAR is available on the BlueBEAR Job Submission page.
- BEAR and CaStLeS Cloud VMs: These VMs can have one of two OSes. Those with access to a BEAR Cloud or CaStLeS VM should check that the listed architectures for an application include the OS of the VM being used. The VMs, irrespective of OS, will use the haswell CPU type.
- toposort 1.5
For more information visit the TensorFlow-Large-Model-Support website.
These versions of TensorFlow-Large-Model-Support are available on the BEAR systems (BlueBEAR, BEARCloud VMs, and CaStLeS VMs). These will be retained in accordance with our Applications Support and Retention Policy.
Last modified on 31st July 2019