Microsoft.ML.OnnxRuntime.Gpu
1.22.0-dev-20250311-0507-333fbdb4a1
Prefix Reserved
dotnet add package Microsoft.ML.OnnxRuntime.Gpu --version 1.22.0-dev-20250311-0507-333fbdb4a1
NuGet\Install-Package Microsoft.ML.OnnxRuntime.Gpu -Version 1.22.0-dev-20250311-0507-333fbdb4a1
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu" Version="1.22.0-dev-20250311-0507-333fbdb4a1" />
paket add Microsoft.ML.OnnxRuntime.Gpu --version 1.22.0-dev-20250311-0507-333fbdb4a1
#r "nuget: Microsoft.ML.OnnxRuntime.Gpu, 1.22.0-dev-20250311-0507-333fbdb4a1"
// Install Microsoft.ML.OnnxRuntime.Gpu as a Cake Addin
#addin nuget:?package=Microsoft.ML.OnnxRuntime.Gpu&version=1.22.0-dev-20250311-0507-333fbdb4a1&prerelease

// Install Microsoft.ML.OnnxRuntime.Gpu as a Cake Tool
#tool nuget:?package=Microsoft.ML.OnnxRuntime.Gpu&version=1.22.0-dev-20250311-0507-333fbdb4a1&prerelease
About
ONNX Runtime is a cross-platform machine-learning inferencing accelerator.
ONNX Runtime can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.
Learn more at onnxruntime.ai.
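As a quick illustration of the C# API, a minimal single-inference call might look like the sketch below. The model path "model.onnx", the input name "input", and the 1x3x224x224 shape are placeholders; substitute whatever your own model expects.

```csharp
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Load an ONNX model and run one inference on the default (CPU) provider.
// "model.onnx", "input", and the shape are placeholders for your model.
using var session = new InferenceSession("model.onnx");

var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new[] { NamedOnnxValue.CreateFromTensor("input", input) };

using var results = session.Run(inputs);
float[] output = results.First().AsEnumerable<float>().ToArray();
```

The same `InferenceSession` works unchanged with any of the packages below; only the execution providers registered at session creation differ.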
NuGet Packages
ONNX Runtime Native packages
Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
- CPU Execution Provider
- CoreML Execution Provider on macOS/iOS
- XNNPACK Execution Provider on Android/iOS
Microsoft.ML.OnnxRuntime.Gpu
- Windows and Linux
- TensorRT Execution Provider
- CUDA Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.DirectML
- Windows
- DirectML Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
- QNN Execution Provider
- CPU Execution Provider
Intel.ML.OnnxRuntime.OpenVino
- 64-bit Windows
- OpenVINO Execution Provider
- CPU Execution Provider
Other packages
Microsoft.ML.OnnxRuntime.Managed
- C# language bindings
Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
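To actually run on the GPU package's CUDA execution provider, a session is typically created with a `SessionOptions` along these lines. The device index 0 and the model path are assumptions for illustration.

```csharp
using Microsoft.ML.OnnxRuntime;

// Ask ONNX Runtime to place supported operators on the CUDA execution
// provider (GPU device 0); anything unsupported falls back to the CPU
// provider that ships in the same package. "model.onnx" is a placeholder.
using var options = new SessionOptions();
options.AppendExecutionProvider_CUDA(deviceId: 0);

using var session = new InferenceSession("model.onnx", options);
```

The TensorRT provider can be requested analogously with `SessionOptions.AppendExecutionProvider_Tensorrt`.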
Dependencies
.NETCoreApp
- Microsoft.ML.OnnxRuntime.Gpu.Linux (>= 1.22.0-dev-20250311-0507-333fbdb4a1)
- Microsoft.ML.OnnxRuntime.Gpu.Windows (>= 1.22.0-dev-20250311-0507-333fbdb4a1)
- Microsoft.ML.OnnxRuntime.Managed (>= 1.22.0-dev-20250311-0507-333fbdb4a1)
.NETFramework
- Microsoft.ML.OnnxRuntime.Gpu.Linux (>= 1.22.0-dev-20250311-0507-333fbdb4a1)
- Microsoft.ML.OnnxRuntime.Gpu.Windows (>= 1.22.0-dev-20250311-0507-333fbdb4a1)
- Microsoft.ML.OnnxRuntime.Managed (>= 1.22.0-dev-20250311-0507-333fbdb4a1)
.NETStandard
- Microsoft.ML.OnnxRuntime.Gpu.Linux (>= 1.22.0-dev-20250311-0507-333fbdb4a1)
- Microsoft.ML.OnnxRuntime.Gpu.Windows (>= 1.22.0-dev-20250311-0507-333fbdb4a1)
- Microsoft.ML.OnnxRuntime.Managed (>= 1.22.0-dev-20250311-0507-333fbdb4a1)
GitHub repositories (11)
Showing the top 5 popular GitHub repositories that depend on Microsoft.ML.OnnxRuntime.Gpu:
- codeproject/CodeProject.AI-Server: CodeProject.AI Server is a self-contained service that software developers can include in, and distribute with, their applications in order to augment their apps with the power of AI.
- microsoft/psi: Platform for Situated Intelligence
- NickSwardh/YoloDotNet: A C# .NET 8.0 project for Classification, Object Detection, OBB Detection, Segmentation and Pose Estimation in both images and videos.
- dme-compunet/YoloSharp: 🚀 Use YOLO11 in real-time for object detection tasks, with edge performance ⚡️ powered by ONNX-Runtime.
- sstainba/Yolov8.Net: A .NET 6 implementation to use Yolov5 and Yolov8 models via the ONNX Runtime
Release Def:
Branch: refs/heads/main
Commit: 333fbdb4a1161e1a3a8a119bf584ed9549fe9e0f
Build: https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=707516