Microsoft.ML.OnnxRuntime.Gpu.Windows
Version 1.22.0-dev-20250308-0506-989d4177ed
.NET CLI:
dotnet add package Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.22.0-dev-20250308-0506-989d4177ed

Package Manager:
NuGet\Install-Package Microsoft.ML.OnnxRuntime.Gpu.Windows -Version 1.22.0-dev-20250308-0506-989d4177ed

PackageReference (project file):
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" Version="1.22.0-dev-20250308-0506-989d4177ed" />

Central package management (PackageVersion in Directory.Packages.props, PackageReference in the project file):
<PackageVersion Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" Version="1.22.0-dev-20250308-0506-989d4177ed" />
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" />

Paket CLI:
paket add Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.22.0-dev-20250308-0506-989d4177ed

Script & Interactive:
#r "nuget: Microsoft.ML.OnnxRuntime.Gpu.Windows, 1.22.0-dev-20250308-0506-989d4177ed"
#:package Microsoft.ML.OnnxRuntime.Gpu.Windows@1.22.0-dev-20250308-0506-989d4177ed

Cake:
#addin nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.22.0-dev-20250308-0506-989d4177ed&prerelease
#tool nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.22.0-dev-20250308-0506-989d4177ed&prerelease
About

ONNX Runtime is a cross-platform machine-learning inferencing accelerator.
ONNX Runtime can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.
Learn more in the ONNX Runtime documentation.
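As a quick orientation (not taken from this page), a minimal C# sketch of running inference with the CUDA Execution Provider from the GPU packages might look like the following. The model path, input name, and tensor shape below are placeholders, not values from this listing.

```csharp
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Enable the CUDA execution provider on GPU device 0; operators the GPU
// provider cannot handle fall back to the CPU execution provider.
using var options = SessionOptions.MakeSessionOptionWithCudaProvider(0);

// "model.onnx", "input", and the 1x3x224x224 shape are placeholders --
// substitute your model's actual path, input name, and input shape.
using var session = new InferenceSession("model.onnx", options);

var tensor = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("input", tensor)
};

// Run returns a disposable collection of named output values.
using var results = session.Run(inputs);
```

A provider-selection sketch (TensorRT and CUDA in priority order) follows the package list below.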
NuGet Packages
ONNX Runtime Native packages
Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
  - CPU Execution Provider
  - CoreML Execution Provider on macOS/iOS
  - XNNPACK Execution Provider on Android/iOS

Microsoft.ML.OnnxRuntime.Gpu (provider-selection sketch after this list)
- Windows and Linux
  - TensorRT Execution Provider
  - CUDA Execution Provider
  - CPU Execution Provider

Microsoft.ML.OnnxRuntime.DirectML
- Windows
  - DirectML Execution Provider
  - CPU Execution Provider

Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
  - QNN Execution Provider
  - CPU Execution Provider

Intel.ML.OnnxRuntime.OpenVino
- 64-bit Windows
  - OpenVINO Execution Provider
  - CPU Execution Provider

Other packages

Microsoft.ML.OnnxRuntime.Managed
- C# language bindings

Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
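Assuming the TensorRT and CUDA Execution Providers listed for Microsoft.ML.OnnxRuntime.Gpu (and this Windows-specific package) are available at run time, a hedged sketch of appending providers explicitly in priority order could look like this; the model path is again a placeholder.

```csharp
using Microsoft.ML.OnnxRuntime;

// Append GPU execution providers in priority order. ONNX Runtime tries
// them in the order appended and falls back to the CPU execution
// provider for anything the GPU providers do not support.
using var options = new SessionOptions();
options.AppendExecutionProvider_Tensorrt(0); // try TensorRT on device 0 first
options.AppendExecutionProvider_CUDA(0);     // then CUDA on device 0

// "model.onnx" is a placeholder path for illustration.
using var session = new InferenceSession("model.onnx", options);
```

The CPU Execution Provider does not need to be appended; it is always available as the final fallback.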
 
Dependencies

Learn more about Target Frameworks and .NET Standard.

.NETCoreApp 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.22.0-dev-20250308-0506-989d4177ed)

.NETFramework 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.22.0-dev-20250308-0506-989d4177ed)

.NETStandard 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.22.0-dev-20250308-0506-989d4177ed)
GitHub repositories (1)
Showing the most popular GitHub repository that depends on Microsoft.ML.OnnxRuntime.Gpu.Windows:

| Repository | Stars |
|---|---|
| Lyrcaxis/KokoroSharp: Fast local TTS inference engine in C# with ONNX runtime. Multi-speaker, multi-platform and multilingual. Integrate on your .NET projects using a plug-and-play NuGet package, complete with all voices. | |
Release Def:
	Branch: refs/heads/main
	Commit: 989d4177ed99db324ba4a4a35149977626120b14
	Build: https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=705154