
Microsoft Introduces Phi-3-Mini, An SLM-Based Lightweight AI Model: Here Is What It Can Do

Microsoft Introduces Phi-3-Mini

According to Microsoft, the 3.8-billion-parameter Phi-3-mini can outperform models twice its size.

Microsoft announced the launch of Phi-3-mini, a new, more affordable artificial intelligence (AI) model, on Tuesday. The lightweight model is intended to deliver capabilities comparable to its predecessors at a fraction of the cost. Microsoft’s Vice President of GenAI Research, Sébastien Bubeck, emphasized the new model’s financial benefits. “Phi-3 is significantly less expensive, not just marginally. We’re looking at a cost reduction of about tenfold compared to other models in the market with comparable capabilities,” Bubeck said during the unveiling, according to Reuters.

What Can Phi-3-mini Do?

Phi-3-mini is part of Microsoft’s broader plan to launch a series of small language models (SLMs), which are designed for simpler tasks and are therefore well suited to companies with limited resources.

The strategy is part of the tech giant’s calculated bet on a technology it believes will have a big impact on workplace dynamics and global businesses.

With just 3.8 billion parameters, Phi-3-mini can outperform models twice its size, according to Microsoft.

How Is An SLM Different From An LLM?

One of the SLM-based Phi-3-mini’s main advantages is its small size, which allows it to run well on local PCs without requiring a lot of computing power.

Because it generates output quickly and requires far less hardware than large language models (LLMs), which need many parallel processing units, Phi-3-mini is within reach of smaller businesses and individual users.

“This approach promises to reduce costs, make the models faster as they are on the edge, and promise more enterprise and consumer use cases of Generative AI,” Bindra added.
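To make the local-execution point concrete, here is a minimal sketch of how one might run Phi-3-mini on an ordinary PC with the Hugging Face transformers library. The model identifier "microsoft/Phi-3-mini-4k-instruct", the prompt, and the generation settings are illustrative assumptions rather than details from Microsoft's announcement.

```python
# A minimal local-inference sketch. Assumes the "transformers" and "torch"
# packages are installed and that the Hugging Face model ID below is correct.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL_ID = "microsoft/Phi-3-mini-4k-instruct"  # assumed model identifier

# Load the tokenizer and the ~3.8-billion-parameter model. Because the model
# is small, this is feasible on an ordinary PC's CPU, unlike most LLMs.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

# Wrap both in a text-generation pipeline and ask a short question.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = generator(
    "Explain in one sentence what a small language model is.",
    max_new_tokens=60,
    do_sample=False,  # deterministic output for a quick sanity check
)
print(result[0]["generated_text"])
```

On a machine with a supported GPU, the same snippet can offload layers automatically by passing device_map="auto" (with the accelerate package installed), but the CPU-only path is exactly the scenario that makes an SLM attractive.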


Where Will Phi-3-mini Be Available?

Microsoft has made Phi-3-mini immediately available on several platforms: its own Azure AI model catalogue; Hugging Face, a well-known machine learning model platform; and Ollama, a framework that makes it easier to run models locally. The model has also been optimized for Nvidia’s GPUs and packaged as an Nvidia Inference Microservice (NIM), improving its performance and adaptability.
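For the Ollama route mentioned above, one way to use the model is through Ollama's local HTTP API. The sketch below assumes the model has already been pulled under the name "phi3" and that the Ollama server is running on its default port (11434); both are assumptions for illustration.

```python
# Query a locally running Ollama server. Assumes "ollama pull phi3" has
# already downloaded the model under that name.
import json
import urllib.request

payload = {
    "model": "phi3",  # assumed Ollama model name for Phi-3-mini
    "prompt": "Summarize what Phi-3-mini is in two sentences.",
    "stream": False,  # ask for one complete JSON response instead of chunks
}

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.loads(response.read().decode("utf-8"))

# The generated text is returned in the "response" field.
print(body["response"])
```

The same request can be made with any HTTP client; setting "stream" to False simply tells the server to return a single JSON object rather than a stream of partial responses.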

With this launch, Microsoft is continuing its commitment to growing its AI presence globally, as seen in its recent $1.5 billion investment in the UAE-based AI company G42. The company has also teamed up with the French firm Mistral AI to bring its models to the Azure cloud computing platform, further solidifying Microsoft’s standing as a pioneer in AI accessibility and innovation.
