Microsoft AI (MAI) has begun public testing of the first foundation model it trained in-house.
The model, dubbed MAI-1-preview, is being tested on LMArena, a platform for community model evaluation, the company said in a Thursday (Aug. 28) blog post.
“This represents MAI’s first foundation model trained end-to-end and offers a glimpse of future offerings inside Copilot,” the company said in the post. “We are actively spinning the flywheel to deliver improved models.”
MAI-1-preview is designed for use by consumers and specializes in following instructions and answering everyday questions, according to the post.
It will be rolled out for some text use cases in Copilot in the coming weeks, per the post.
“We will continue to use the very best models from our team, our partners and the latest innovations from the open-source community to power our products,” MAI said in the post. “This approach gives us the flexibility to deliver the best outcomes across millions of unique interactions every day.”
CNBC reported Thursday that Microsoft powers the artificial intelligence features of its Bing search engine, its Windows 11 operating system and other products primarily with AI models from OpenAI, a company in which Microsoft has invested over $13 billion. The development of an in-house model could signal that Microsoft is working to reduce that dependence, per the report.
Microsoft added OpenAI to a list of competitors in its annual report last year, while OpenAI has added cloud providers beyond Microsoft, including CoreWeave, Google and Oracle, per the CNBC report.
MAI also announced in its Thursday blog post that it is releasing a natural speech generation model called MAI-Voice-1, making it available in Copilot Daily and Podcasts and as a Copilot Labs experience.
“Voice is the interface of the future for AI companions and MAI-Voice-1 delivers high-fidelity, expressive audio across both single and multi-speaker scenarios,” MAI said in the post.
This announcement came on the same day that OpenAI released what it calls its “most advanced speech-to-speech model yet.” The company also made its Realtime application programming interface (API) generally available, saying the API now has features that help developers build voice agents.