Microsoft has launched a series of AI reasoning models, Phi-4-mini-reasoning, Phi-4-reasoning and Phi-4-reasoning-plus, as part of its small language model family. The open-weight models combine reinforcement learning with supervised fine-tuning to solve complex problems in maths, science, coding and logic-based tasks.
The most capable of the three, Phi-4-reasoning, is a 14-billion-parameter model trained on curated, high-quality web data and on demonstrations from OpenAI’s o3-mini. In a blog post, the company said the model performs best at maths, science and coding applications.
The Phi-4-mini-reasoning model, meanwhile, has 3.8 billion parameters and was trained on around a million synthetic maths problems generated by DeepSeek’s R1 model. It is built for educational applications and can be embedded into devices.
Phi-4-reasoning-plus, on the other hand, outperforms DeepSeek R1 on multiple benchmarks even though the latter is much larger. The model is also comparable to OpenAI’s o3-mini on a maths benchmark.
“Phi’s evolution over the last year has continually pushed this envelope of quality vs. size, expanding the family with new features to address diverse needs. Across the scale of Windows 11 devices, these models are available to run locally on CPUs and GPUs,” Microsoft stated.
The reasoning models are all available now on Hugging Face.
Published - May 02, 2025 03:18 pm IST