ml_dtypes 0.3.2-gfbf-2023a

ml_dtypes is a stand-alone implementation of several NumPy dtype extensions used in machine learning libraries, including:

  • bfloat16: an alternative to the standard float16 format
  • float8_*: several experimental 8-bit floating point representations, including:
      • float8_e4m3b11fnuz
      • float8_e4m3fn
      • float8_e4m3fnuz
      • float8_e5m2
      • float8_e5m2fnuz
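
As a brief illustrative sketch (assuming a Python session in which NumPy and ml_dtypes are importable), these extension dtypes can be used like ordinary NumPy dtypes:

import numpy as np
import ml_dtypes

# A NumPy array using the bfloat16 extension dtype
x = np.array([0.1, 0.25, 3.0], dtype=ml_dtypes.bfloat16)
print(x.dtype)  # bfloat16

# Cast to one of the experimental 8-bit floating point formats
y = x.astype(ml_dtypes.float8_e5m2)
print(y)

# Inspect the numeric limits of a format with ml_dtypes.finfo
print(ml_dtypes.finfo(ml_dtypes.float8_e4m3fn).max)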

Accessing ml_dtypes 0.3.2-gfbf-2023a

To load the module for ml_dtypes 0.3.2-gfbf-2023a, use the following commands on the BEAR systems (BlueBEAR and BEAR Cloud VMs):

module load bear-apps/2023a
module load ml_dtypes/0.3.2-gfbf-2023a
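
Once the module is loaded, a minimal check (a sketch, assuming a Python 3.11 interpreter started in the same session) is to import the package and print its version:

import ml_dtypes

# Should report 0.3.2 for this module
print(ml_dtypes.__version__)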

BEAR Apps Version

2023a

Architectures

  • EL8-cascadelake
  • EL8-icelake
  • EL8-sapphirerapids

The listed architectures consist of two parts: OS-CPU. The OS is represented by EL, and there are several different processor (CPU) types available on BlueBEAR. More information about the processor types on BlueBEAR is available on the BlueBEAR Job Submission page.

Extensions

  • etils 1.6.0
  • ml_dtypes 0.3.2
  • opt_einsum 3.3.0

More Information

For more information, visit the ml_dtypes website.

Dependencies

This version of ml_dtypes has a direct dependency on:

  • gfbf/2023a
  • Python/3.11.3-GCCcore-12.3.0
  • SciPy-bundle/2023.07-gfbf-2023a

Required By

This version of ml_dtypes is a direct dependency of:

  • jax/0.4.25-gfbf-2023a
  • TensorFlow/2.15.1-foss-2023a

Last modified on 25th July 2024