Microsoft.ML.OnnxRuntime.Gpu.Linux
1.23.0-dev-20250928-0507-e5678a133f
.NET CLI:
dotnet add package Microsoft.ML.OnnxRuntime.Gpu.Linux --version 1.23.0-dev-20250928-0507-e5678a133f

Package Manager:
NuGet\Install-Package Microsoft.ML.OnnxRuntime.Gpu.Linux -Version 1.23.0-dev-20250928-0507-e5678a133f

PackageReference:
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Linux" Version="1.23.0-dev-20250928-0507-e5678a133f" />

Central Package Management (Directory.Packages.props / project file):
<PackageVersion Include="Microsoft.ML.OnnxRuntime.Gpu.Linux" Version="1.23.0-dev-20250928-0507-e5678a133f" />
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Linux" />

Paket CLI:
paket add Microsoft.ML.OnnxRuntime.Gpu.Linux --version 1.23.0-dev-20250928-0507-e5678a133f

Script & Interactive:
#r "nuget: Microsoft.ML.OnnxRuntime.Gpu.Linux, 1.23.0-dev-20250928-0507-e5678a133f"
#:package Microsoft.ML.OnnxRuntime.Gpu.Linux@1.23.0-dev-20250928-0507-e5678a133f

Cake:
#addin nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Linux&version=1.23.0-dev-20250928-0507-e5678a133f&prerelease
#tool nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Linux&version=1.23.0-dev-20250928-0507-e5678a133f&prerelease
About

ONNX Runtime is a cross-platform machine-learning inferencing accelerator.
ONNX Runtime can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.
Learn more at https://onnxruntime.ai.
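As a minimal sketch of consuming this package from C#, a session can be created with the CUDA execution provider, with unsupported operators falling back to the CPU execution provider. The model path, input name, and tensor shape below are placeholder assumptions; substitute the values your model actually expects.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;          // InferenceSession, SessionOptions
using Microsoft.ML.OnnxRuntime.Tensors;  // DenseTensor<T>

class Program
{
    static void Main()
    {
        // Request the CUDA execution provider on GPU device 0; operators it
        // cannot handle fall back to the CPU execution provider.
        using var options = new SessionOptions();
        options.AppendExecutionProvider_CUDA(deviceId: 0);

        // "model.onnx", the input name "input", and the NCHW shape are
        // placeholders for this sketch.
        using var session = new InferenceSession("model.onnx", options);

        var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
        var feeds = new List<NamedOnnxValue>
        {
            NamedOnnxValue.CreateFromTensor("input", input)
        };

        using var results = session.Run(feeds);
        foreach (var result in results)
            Console.WriteLine(result.Name);
    }
}
```

On machines with TensorRT available, `options.AppendExecutionProvider_Tensorrt(0)` can be appended before the CUDA provider to let TensorRT claim supported subgraphs first.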
NuGet Packages
ONNX Runtime Native packages
Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
- CPU Execution Provider
- CoreML Execution Provider on macOS/iOS
- XNNPACK Execution Provider on Android/iOS
Microsoft.ML.OnnxRuntime.Gpu
- Windows and Linux
- TensorRT Execution Provider
- CUDA Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.DirectML
- Windows
- DirectML Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
- QNN Execution Provider
- CPU Execution Provider
Intel.ML.OnnxRuntime.OpenVino
- 64-bit Windows
- OpenVINO Execution Provider
- CPU Execution Provider
Other packages
Microsoft.ML.OnnxRuntime.Managed
- C# language bindings
Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
Dependencies
.NETCoreApp 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.23.0-dev-20250928-0507-e5678a133f)
.NETFramework 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.23.0-dev-20250928-0507-e5678a133f)
.NETStandard 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.23.0-dev-20250928-0507-e5678a133f)
GitHub repositories (1)
Showing the top 1 popular GitHub repositories that depend on Microsoft.ML.OnnxRuntime.Gpu.Linux:
| Repository | Stars |
|---|---|
| Lyrcaxis/KokoroSharp: Fast local TTS inference engine in C# with ONNX runtime. Multi-speaker, multi-platform and multilingual. Integrate into your .NET projects using a plug-and-play NuGet package, complete with all voices. | |
Release Definition
Branch: refs/heads/main
Commit: e5678a133f121ed3ea514960ac53a6dd060ac4c3
Build: https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=954224