accelerate 0.33.0-foss-2023a
A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support.
Accessing accelerate 0.33.0-foss-2023a
To load the module for accelerate 0.33.0-foss-2023a please use this command on the BEAR systems (BlueBEAR and BEAR Cloud VMs):
module load bear-apps/2023a
module load accelerate/0.33.0-foss-2023a
There is a GPU enabled version of this module: accelerate 0.33.0-foss-2023a-CUDA-12.1.1
BEAR Apps Version
Architectures
EL8-icelake, EL8-sapphirerapids
The listed architectures consist of two parts: OS-CPU. The OS is represented by EL (Enterprise Linux), and the CPU part names one of the several processor types available on BlueBEAR. More information about these processor types is available on the BlueBEAR Job Submission page.
Extensions
- accelerate 0.33.0
- huggingface-hub 0.24.5
More Information
For more information visit the accelerate website.
Dependencies
This version of accelerate directly depends on: foss/2023a, Python/3.11.3-GCCcore-12.3.0, Python-bundle-PyPI/2023.06-GCCcore-12.3.0, PyTorch-bundle/2.1.2-foss-2023a, PyYAML/6.0-GCCcore-12.3.0, Safetensors/0.4.3-gfbf-2023a, SciPy-bundle/2023.07-gfbf-2023a
Required By
This version of accelerate is a direct dependency of: Transformers/4.42.0-foss-2023a
Other Versions
These versions of accelerate are available on the BEAR systems (BlueBEAR and BEAR Cloud VMs). These will be retained in accordance with our Applications Support and Retention Policy.
| Version | BEAR Apps Version |
|---|---|
| 0.33.0-foss-2023a-CUDA-12.1.1 | 2023a |
Last modified on 2nd April 2025