Summary

  • Microsoft Research has developed a new AI model, Phi-4-Reasoning-Plus, optimized for applications that require deep, structured reasoning, such as mathematics, science and coding.
  • It is a 14-billion-parameter model based on the transformer architecture, trained on 16 billion tokens drawn from synthetic and web-based datasets and then further refined with reinforcement learning on roughly 6,400 math-related problems.
  • The model has been released under a permissive licence, supports deployment across a range of widely used inference frameworks, and comes with detailed system-prompt recommendations for developers (a minimal loading sketch follows this list).
  • Crucially, Phi-4-Reasoning-Plus outperforms larger open-weight models and is designed to deliver high-quality reasoning under memory or latency constraints, such as in chat interfaces or on embedded devices, and it ships with extensive safety testing and usage guidelines.
  • Its development showcases Microsoft’s growing focus on smaller models that can rival the performance of much larger systems, and it is positioned as a research tool and component for generative AI systems rather than a turnkey solution.
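
For readers who want to see what deployment looks like in practice, below is a minimal sketch using Hugging Face Transformers. The model ID matches the public Hugging Face release, but the system prompt, generation settings, and hardware assumptions (a bfloat16-capable GPU) are illustrative stand-ins rather than Microsoft's exact recommendations, which are documented on the model card.

```python
# Minimal sketch: loading Phi-4-Reasoning-Plus with Hugging Face Transformers.
# The system prompt below is a placeholder; consult the model card for the
# wording Microsoft actually recommends.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning-plus"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduces memory footprint on supported GPUs
    device_map="auto",
)

messages = [
    # Illustrative system prompt encouraging explicit chain-of-thought.
    {"role": "system", "content": "You are a careful assistant. Reason step by step "
                                  "inside <think>...</think> before giving a final answer."},
    {"role": "user", "content": "What is the derivative of x**3 * sin(x)?"},
]

# Build the prompt with the model's chat template and generate a response.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same checkpoint can be served through other common inference stacks; the Transformers route is shown here only because it is the most widely recognized baseline.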

By Carl Franzen

Original Article