ONNX-Runtime 1.19.2-foss-2023a-CUDA-12.1.1

ONNX Runtime inference can enable faster customer experiences and lower costs. It supports models from deep learning frameworks such as PyTorch and TensorFlow/Keras, as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms.

Accessing ONNX-Runtime 1.19.2-foss-2023a-CUDA-12.1.1

To load the module for ONNX-Runtime 1.19.2-foss-2023a-CUDA-12.1.1, please use these commands on the BEAR systems (BlueBEAR and BEAR Cloud VMs):

module load bear-apps/2023a
module load ONNX-Runtime/1.19.2-foss-2023a-CUDA-12.1.1
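Once the module is loaded, inference runs through ONNX Runtime's Python API. The sketch below shows the typical pattern, assuming a hypothetical exported model file and input dictionary (names such as `model_path` and `inputs` are placeholders, not part of this module); the helper prefers the CUDA execution provider so the listed GPUs are used when available:

```python
def choose_providers(available):
    """Prefer the CUDA execution provider (GPU) when present, then CPU."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]


def run_inference(model_path, inputs):
    """Run one forward pass with ONNX Runtime.

    `model_path` and `inputs` are placeholders: supply your own exported
    .onnx file and a dict mapping input names to NumPy arrays.
    """
    import onnxruntime as ort  # importable after loading the module above

    session = ort.InferenceSession(
        model_path,
        providers=choose_providers(ort.get_available_providers()),
    )
    # None requests all model outputs; a list of output names can be
    # passed instead to fetch a subset.
    return session.run(None, inputs)
```

For example, `run_inference("model.onnx", {"input": x})` with `x` a `float32` NumPy array of the model's expected shape returns a list of output arrays.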

BEAR Apps Version

2023a

Architectures

EL8-icelake (GPUs: NVIDIA A100, NVIDIA A30)

The listed architectures consist of two parts: OS-CPU. The OS is represented by EL (Enterprise Linux), and there are several different processor (CPU) types available on BlueBEAR. More information about the processor types on BlueBEAR is available on the BlueBEAR Job Submission page.

Extensions

  • coloredlogs 15.0.1
  • humanfriendly 10.0
  • ONNX-Runtime 1.19.2

More Information

For more information visit the ONNX-Runtime website.

Last modified on 16th June 2025