Microsoft.ML.OnnxRuntime.Gpu.Windows
1.22.0-dev-20250306-0538-b524229347
Prefix Reserved
See the version list below for details.
dotnet add package Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.22.0-dev-20250306-0538-b524229347
NuGet\Install-Package Microsoft.ML.OnnxRuntime.Gpu.Windows -Version 1.22.0-dev-20250306-0538-b524229347
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" Version="1.22.0-dev-20250306-0538-b524229347" />
paket add Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.22.0-dev-20250306-0538-b524229347
#r "nuget: Microsoft.ML.OnnxRuntime.Gpu.Windows, 1.22.0-dev-20250306-0538-b524229347"
// Install Microsoft.ML.OnnxRuntime.Gpu.Windows as a Cake Addin
#addin nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.22.0-dev-20250306-0538-b524229347&prerelease

// Install Microsoft.ML.OnnxRuntime.Gpu.Windows as a Cake Tool
#tool nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.22.0-dev-20250306-0538-b524229347&prerelease
About
ONNX Runtime is a cross-platform machine-learning inferencing accelerator.
ONNX Runtime can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.
Learn more in the ONNX Runtime documentation.
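To make the description above concrete, here is a minimal C# inference sketch using this package's `Microsoft.ML.OnnxRuntime` API. The model path `model.onnx`, the input name `input`, and the 1x3x224x224 shape are placeholder assumptions; substitute the values from your own exported model.

```csharp
using System;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class Program
{
    static void Main()
    {
        // "model.onnx" is a placeholder path to a model exported from
        // PyTorch, TensorFlow/Keras, scikit-learn, etc.
        using var session = new InferenceSession("model.onnx");

        // Build a 1x3x224x224 float tensor (a typical image-model input shape;
        // adjust to your model's actual input shape).
        var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });

        // "input" must match the model's actual input name
        // (inspect session.InputMetadata to find it).
        var inputs = new[] { NamedOnnxValue.CreateFromTensor("input", input) };

        // Run inference and read the first output back as a tensor.
        using var results = session.Run(inputs);
        var output = results.First().AsTensor<float>();
        Console.WriteLine($"Output element count: {output.Length}");
    }
}
```

The `using` declarations matter here: `InferenceSession` and the result collection hold native resources that should be disposed deterministically rather than left to the garbage collector.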
NuGet Packages
ONNX Runtime Native packages
Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
- CPU Execution Provider
- CoreML Execution Provider on macOS/iOS
- XNNPACK Execution Provider on Android/iOS
Microsoft.ML.OnnxRuntime.Gpu
- Windows and Linux
- TensorRT Execution Provider
- CUDA Execution Provider
- CPU Execution Provider
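The three execution providers listed for the GPU package can be combined through `SessionOptions`. A sketch of the typical priority order (the `model.onnx` path is a placeholder):

```csharp
using Microsoft.ML.OnnxRuntime;

// Execution providers are tried in the order they are appended; operators
// a provider cannot handle fall through to the next one, with the CPU
// provider always available as the final fallback.
using var options = new SessionOptions();
options.AppendExecutionProvider_Tensorrt(0); // prefer TensorRT on GPU device 0
options.AppendExecutionProvider_CUDA(0);     // then CUDA on the same device

using var session = new InferenceSession("model.onnx", options);
```

Appending TensorRT before CUDA lets TensorRT take the subgraphs it supports while CUDA covers the rest, which is the usual arrangement when both are available.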
Microsoft.ML.OnnxRuntime.DirectML
- Windows
- DirectML Execution Provider
- CPU Execution Provider
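For the DirectML package the setup is analogous, via its own append call (again, `model.onnx` and device index 0 are placeholder assumptions):

```csharp
using Microsoft.ML.OnnxRuntime;

// DirectML works best with memory pattern optimization disabled and
// a single-threaded execution mode, per the package's guidance.
using var options = new SessionOptions();
options.EnableMemoryPattern = false;
options.ExecutionMode = ExecutionMode.ORT_SEQUENTIAL;
options.AppendExecutionProvider_DML(0); // DirectML on adapter 0

using var session = new InferenceSession("model.onnx", options);
```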
Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
- QNN Execution Provider
- CPU Execution Provider
Intel.ML.OnnxRuntime.OpenVino
- 64-bit Windows
- OpenVINO Execution Provider
- CPU Execution Provider
Other packages
Microsoft.ML.OnnxRuntime.Managed
- C# language bindings
Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
Learn more about Target Frameworks and .NET Standard.
Dependencies
- .NETCoreApp: Microsoft.ML.OnnxRuntime.Managed (>= 1.22.0-dev-20250306-0538-b524229347)
- .NETFramework: Microsoft.ML.OnnxRuntime.Managed (>= 1.22.0-dev-20250306-0538-b524229347)
- .NETStandard: Microsoft.ML.OnnxRuntime.Managed (>= 1.22.0-dev-20250306-0538-b524229347)
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated
---|---|---
1.22.0-dev-20250311-0507-33... | 0 | 3/11/2025
1.22.0-dev-20250310-0459-fe... | 0 | 3/10/2025
1.22.0-dev-20250308-0506-98... | 0 | 3/8/2025
1.22.0-dev-20250306-0538-b5... | 0 | 3/6/2025
1.22.0-dev-20250305-1651-78... | 0 | 3/6/2025
1.21.0 | 0 | 3/7/2025
1.21.0-dev-20250228-1248-be... | 0 | 3/4/2025
1.17.0-dev-20231221-0523-78... | 1 | 12/21/2023
Release Def:
Branch: refs/heads/main
Commit: b5242293475c944c2abae3fd5d96e1c1788054a7
Build: https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=701906