Local Fields
DeepseekR114b
This model is an optimized version of DeepSeek-R1-Distill-Qwen-14B to enable local inference on Intel GPUs.
Model Description
Developed by: Microsoft
Model type: ONNX
License: MIT
Model Description: This is a conversion of the DeepSeek-R1-Distill-Qwen-14B model for local inference on Intel GPUs.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model DeepSeek-R1-Distill-Qwen-14B for details.
public static readonly FoundryModel DeepseekR114b
DeepseekR115b
This model is an optimized version of DeepSeek-R1-Distill-Qwen-1.5B to enable local inference on Intel GPUs.
Model Description
Developed by: Microsoft
Model type: ONNX
License: MIT
Model Description: This is a conversion of the DeepSeek-R1-Distill-Qwen-1.5B model for local inference on Intel GPUs.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model DeepSeek-R1-Distill-Qwen-1.5B for details.
public static readonly FoundryModel DeepseekR115b
DeepseekR17b
This model is an optimized version of DeepSeek-R1-Distill-Qwen-7B to enable local inference on Intel GPUs.
Model Description
Developed by: Microsoft
Model type: ONNX
License: MIT
Model Description: This is a conversion of the DeepSeek-R1-Distill-Qwen-7B model for local inference on Intel GPUs.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model DeepSeek-R1-Distill-Qwen-7B for details.
public static readonly FoundryModel DeepseekR17b
GptOss20b
This model is an optimized version of gpt-oss-20b to enable local inference. This model uses RTN quantization.
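Many entries in this catalog state that the conversion uses RTN quantization. RTN (round-to-nearest) maps each weight independently to the nearest point on a uniform grid. As a minimal sketch of the idea only, assuming a per-tensor symmetric scale (the actual pipeline's bit widths, block sizes, and zero-point handling are not documented here):

```python
# Toy symmetric round-to-nearest (RTN) weight quantization.
# Illustrative only: the real conversion pipeline's bit widths, block
# sizes, and zero-point handling are not documented in this catalog.

def rtn_quantize(weights, bits=4):
    """Map floats onto a uniform signed integer grid, one scale per tensor."""
    qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def rtn_dequantize(q, scale):
    """Recover approximate floats from the integers."""
    return [qi * scale for qi in q]

weights = [0.02, -0.31, 0.18, 0.44, -0.07]
q, scale = rtn_quantize(weights, bits=4)
restored = rtn_dequantize(q, scale)

# Every restored weight sits within half a quantization step of the original.
assert all(abs(w - r) <= scale / 2 + 1e-12 for w, r in zip(weights, restored))
```

The bounded round-trip error in the final assertion is the source of the "slight difference in output" that the disclaimers in this catalog mention.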
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
License Description: Use of this model is subject to the terms of the Apache License, Version 2.0, available at http://www.apache.org/licenses/LICENSE-2.0.
Model Description: This is a conversion of the gpt-oss-20b model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Azure AI Foundry model gpt-oss-20b for details.
public static readonly FoundryModel GptOss20b
Mistral7bV02
This model is an optimized version of Mistral-7B-Instruct-v0.2 to enable local inference on Intel GPUs.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Mistral-7B-Instruct-v0.2 model for local inference on Intel GPUs.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Mistral-7B-Instruct-v0.2 for details.
public static readonly FoundryModel Mistral7bV02
Phi35Mini
This model is an optimized version of Phi-3.5-mini-instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: MIT
Model Description: This is a conversion of the Phi-3.5-mini-instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Phi-3.5-mini-instruct for details.
public static readonly FoundryModel Phi35Mini
Phi3Mini128k
This model is an optimized version of Phi-3-Mini-128K-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: MIT
Model Description: This is a conversion of the Phi-3-Mini-128K-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Phi-3-Mini-128K-Instruct for details.
public static readonly FoundryModel Phi3Mini128k
Phi3Mini4k
This model is an optimized version of Phi-3-Mini-4K-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: MIT
Model Description: This is a conversion of the Phi-3-Mini-4K-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Phi-3-Mini-4K-Instruct for details.
public static readonly FoundryModel Phi3Mini4k
Phi4
This model is an optimized version of Phi-4 to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: MIT
Model Description: This is a conversion of the Phi-4 model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Phi-4 for details.
public static readonly FoundryModel Phi4
Phi4Mini
This model is an optimized version of Phi-4-mini-instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: MIT
Model Description: This is a conversion of the Phi-4-mini-instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Phi-4-mini-instruct for details.
public static readonly FoundryModel Phi4Mini
Phi4MiniReasoning
This model is an optimized version of Phi-4-mini-reasoning to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: MIT
Model Description: This is a conversion of the Phi-4-mini-reasoning model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Phi-4-mini-reasoning for details.
public static readonly FoundryModel Phi4MiniReasoning
Qwen2505b
This model is an optimized version of Qwen2.5-0.5B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen2.5-0.5B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen2.5-0.5B-Instruct for details.
public static readonly FoundryModel Qwen2505b
Qwen2514b
This model is an optimized version of Qwen2.5-14B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen2.5-14B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen2.5-14B-Instruct for details.
public static readonly FoundryModel Qwen2514b
Qwen2515b
This model is an optimized version of Qwen2.5-1.5B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen2.5-1.5B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen2.5-1.5B-Instruct for details.
public static readonly FoundryModel Qwen2515b
Qwen2515bInstructTestVitisNpu
public static readonly FoundryModel Qwen2515bInstructTestVitisNpu
Qwen257b
This model is an optimized version of Qwen2.5-7B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen2.5-7B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen2.5-7B-Instruct for details.
public static readonly FoundryModel Qwen257b
Qwen25Coder05b
This model is an optimized version of Qwen2.5-Coder-0.5B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen2.5-Coder-0.5B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen2.5-Coder-0.5B-Instruct for details.
public static readonly FoundryModel Qwen25Coder05b
Qwen25Coder14b
This model is an optimized version of Qwen2.5-Coder-14B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen2.5-Coder-14B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen2.5-Coder-14B-Instruct for details.
public static readonly FoundryModel Qwen25Coder14b
Qwen25Coder15b
This model is an optimized version of Qwen2.5-Coder-1.5B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen2.5-Coder-1.5B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen2.5-Coder-1.5B-Instruct for details.
public static readonly FoundryModel Qwen25Coder15b
Qwen25Coder7b
This model is an optimized version of Qwen2.5-Coder-7B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen2.5-Coder-7B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen2.5-Coder-7B-Instruct for details.
public static readonly FoundryModel Qwen25Coder7b
Qwen306b
This model is an optimized version of Qwen3-0.6B to enable local inference. This model uses KLD Gradient quantization.
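The smaller Qwen3 text entries cite "KLD Gradient" quantization, which this catalog does not define. A well-known relative of the idea is KL-divergence-based calibration, which picks a clipping threshold so that the distribution of quantized values stays as close as possible to the original distribution. The sketch below (helper names are hypothetical) illustrates that related technique only and may differ from the actual method:

```python
# Toy KL-divergence-based calibration: pick the clipping threshold whose
# quantized value distribution stays closest (in KL divergence) to the
# original distribution. Illustrative only: the catalog's "KLD Gradient"
# method is undocumented here, and its objective and search may differ.
import math

def histogram(values, lo, hi, bins):
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        counts[min(bins - 1, max(0, int((v - lo) / width)))] += 1
    total = sum(counts)
    return [c / total for c in counts]

def kl_divergence(p, q, eps=1e-9):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def quantize_with_clip(values, clip, levels=15):
    step = 2 * clip / (levels - 1)
    return [round(max(-clip, min(clip, v)) / step) * step for v in values]

def best_clip(values, candidates, bins=32):
    lo, hi = min(values), max(values)
    p = histogram(values, lo, hi, bins)
    return min(candidates,
               key=lambda c: kl_divergence(
                   p, histogram(quantize_with_clip(values, c), lo, hi, bins)))

# Long-tailed weights: clipping rare outliers trades a little tail error
# for finer resolution where most of the mass lives.
weights = [0.01 * i for i in range(-100, 101)] + [3.0, -3.0]
clip = best_clip(weights, candidates=[0.5, 1.0, 2.0, 3.0])
assert clip in [0.5, 1.0, 2.0, 3.0]
```

The same search could rank bit widths or per-channel scales instead of clipping thresholds; the common thread is scoring each candidate by divergence from the original distribution.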
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen3-0.6B model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen3-0.6B for details.
public static readonly FoundryModel Qwen306b
Qwen314b
This model is an optimized version of Qwen3-14B to enable local inference. This model uses GPTQ quantization.
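Qwen3-14B is converted with GPTQ quantization rather than plain rounding. GPTQ quantizes a layer's weights sequentially and uses second-order statistics of calibration inputs to compensate each rounding error, so the layer's output on typical inputs drifts less than independent round-to-nearest allows. The toy below shows only that core idea, quantization evaluated jointly against calibration inputs, using brute force in place of GPTQ's Hessian-based updates (weights, grid, and inputs are made-up examples):

```python
# Toy contrast between round-to-nearest and calibration-aware quantization
# (the idea behind GPTQ, not the actual algorithm, which uses Hessian-based
# sequential error compensation instead of brute force).
from itertools import product

def output_error(w, q, xs):
    """Sum of squared differences between x.w and x.q over calibration inputs."""
    return sum((sum(xi * wi for xi, wi in zip(x, w))
                - sum(xi * qi for xi, qi in zip(x, q))) ** 2 for x in xs)

w = [0.24, 0.51]                            # original weights
grid = [i * 0.2 for i in range(-5, 6)]      # uniform quantization grid, step 0.2
xs = [[1.0, 1.0], [2.0, 1.0], [1.0, 3.0]]   # calibration inputs

# Round-to-nearest: each weight independently to its closest grid point.
rtn = [min(grid, key=lambda g: abs(g - wi)) for wi in w]

# Calibration-aware: choose the grid pair minimizing output error on xs.
best = min(product(grid, repeat=2), key=lambda q: output_error(w, q, xs))

# The joint search includes the RTN pair itself, so it can never do worse.
assert output_error(w, best, xs) <= output_error(w, rtn, xs)
```

On real layers the search space is far too large for brute force; GPTQ's sequential compensation captures the same advantage at tractable cost.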
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen3-14B model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen3-14B for details.
public static readonly FoundryModel Qwen314b
Qwen317b
This model is an optimized version of Qwen3-1.7B to enable local inference. This model uses KLD Gradient quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen3-1.7B model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen3-1.7B for details.
public static readonly FoundryModel Qwen317b
Qwen34b
This model is an optimized version of Qwen3-4B to enable local inference. This model uses KLD Gradient quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen3-4B model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen3-4B for details.
public static readonly FoundryModel Qwen34b
Qwen38b
This model is an optimized version of Qwen3-8B to enable local inference. This model uses KLD Gradient quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen3-8B model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen3-8B for details.
public static readonly FoundryModel Qwen38b
Qwen3Vl2bInstruct
This model is an optimized version of Qwen3-VL-2B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen3-VL-2B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen3-VL-2B-Instruct for details.
public static readonly FoundryModel Qwen3Vl2bInstruct
Qwen3Vl4bInstruct
This model is an optimized version of Qwen3-VL-4B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen3-VL-4B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen3-VL-4B-Instruct for details.
public static readonly FoundryModel Qwen3Vl4bInstruct
Qwen3Vl8bInstruct
This model is an optimized version of Qwen3-VL-8B-Instruct to enable local inference. This model uses RTN quantization.
Model Description
Developed by: Microsoft
Model type: ONNX
License: Apache-2.0
Model Description: This is a conversion of the Qwen3-VL-8B-Instruct model for local inference.
Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of its user. Please verify and test it for your scenarios. Output may differ slightly from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
Base Model Information
See Hugging Face model Qwen3-VL-8B-Instruct for details.
public static readonly FoundryModel Qwen3Vl8bInstruct