With MAI-1, Microsoft controls its AI future

(Below the Sky/Shutterstock)

Big Tech has spent the past few years racing to plug AI into everything. Microsoft has leaned heavily on outside models to fuel that push, from the OpenAI-powered Copilot to open source systems. That approach has helped the company move fast, but it is now signaling a new phase, one in which more of that power comes from within.

This week, Microsoft introduced two new models built entirely by its AI team. MAI-1-preview is a large language model (LLM) trained on thousands of GPUs, while MAI-Voice-1 delivers fast, expressive speech generation. Both are already live in Copilot. These are not just experimental releases. They reflect a major shift in Microsoft's strategy, one focused on building AI systems the company fully owns, on its own terms, timelines, and scale.

In the blog post announcing the new models, Microsoft says it wants to create technology that will “empower everyone on the planet.” According to the company, the goal is to build something helpful and grounded, tools that fit how people actually live and work. It frames this vision as AI that serves real needs rather than chasing hype, and says these first models are a step toward that long-term project.

MAI-1-preview was trained on roughly 15,000 Nvidia H100 GPUs. It uses a mixture-of-experts design, which routes different tasks through specialized parts of the model to improve efficiency and capability. The model is being publicly tested on LMArena, a community-powered benchmarking site, and will soon begin rolling out to Copilot for select text-based features. Microsoft views it as an important step toward systems that can improve over time and respond directly to user needs.
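Microsoft has not published the internals of MAI-1-preview beyond that mixture-of-experts description, so the snippet below is only a minimal sketch of what top-1 expert routing can look like in PyTorch; the expert count, layer sizes, and gating scheme are illustrative assumptions, not details of the actual model.

```python
# Minimal, illustrative mixture-of-experts layer with top-1 routing.
# Nothing here reflects MAI-1-preview's real architecture; the sizes and
# gating scheme are assumptions chosen to keep the example small.
import torch
import torch.nn as nn

class TinyMoELayer(nn.Module):
    def __init__(self, d_model: int, num_experts: int, d_hidden: int):
        super().__init__()
        # Each "expert" is a small feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Each token goes to its single best expert,
        # so only a fraction of the parameters run per token; that sparsity
        # is where the efficiency of mixture-of-experts comes from.
        weights, chosen = self.router(x).softmax(dim=-1).max(dim=-1)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = chosen == i
            if mask.any():
                out[mask] = weights[mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = TinyMoELayer(d_model=64, num_experts=4, d_hidden=256)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```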

(Shutterstock AI Generator)

The second model, MAI-Voice-1, is all about speech. It is designed to produce fast, natural-sounding audio that goes beyond the typical AI voice. Microsoft says it can generate a full minute of audio in under a second on a single GPU, which would make it one of the most efficient voice models currently available.

It is already powering Copilot Daily and Copilot Podcasts, and is available for testing in Copilot Labs. Users can experiment with different voices, tones, and styles, including storytelling and guided meditation. Microsoft sees it as a step toward making voice a more natural way to interact with its AI tools.

The consumer-first direction is a deliberate choice by Microsoft. Rather than aiming its in-house models at enterprise workloads first, the company is prioritizing use cases where AI shows up in everyday apps and user experiences.

Microsoft AI chief Mustafa Suleyman has explained the reasoning: “My logic is that we have to create something that works extremely well for consumers and really optimize for our use case. So, we have a lot of very predictive and very useful data on consumer telemetry, ads, and so on. My focus is on building models that really work for the consumer companion.”

Behind the scenes, Microsoft is quietly scaling its infrastructure to match those ambitions. The company says its next-generation GB200 cluster is now operational, giving it the kind of raw compute usually reserved for frontier AI labs. That points to a long-term investment in developing and running large models entirely in-house. This is not just about keeping pace with demand in Copilot; it is about building the backbone to train whatever comes next.

Even as the tech giant launches its own in-house LLM, it is not completely closing the door on external models. The company has made clear that it plans to use the best tool for the job, whether that is its own architecture, a partner model like GPT-4, or an open source system.

That flexibility could prove important as AI systems spread across industries, geographies, and compliance regimes. A hybrid approach gives Microsoft more control over how models are deployed, how data is handled, and how quickly the platform can adapt to new demands or regulations.
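To make the “best tool for the job” idea concrete, here is a small hypothetical sketch of request routing across model backends; the backend names, policy rules, and interfaces are invented for illustration and say nothing about how Microsoft actually wires Copilot.

```python
# Hypothetical sketch of a hybrid model-routing layer. The backends and
# the policy rules are invented for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Backend:
    name: str
    generate: Callable[[str], str]  # prompt -> completion

def route(needs_self_hosting: bool, latency_sensitive: bool,
          in_house: Backend, partner: Backend, open_source: Backend) -> Backend:
    """Pick a backend per request using simple, explicit policy rules."""
    if needs_self_hosting:
        return open_source   # e.g. an open-weights model run in a regulated environment
    if latency_sensitive:
        return in_house      # e.g. a first-party model tuned for the product
    return partner           # e.g. a partner frontier model for the hardest queries

# Example usage with stub backends.
def stub(tag: str) -> Backend:
    return Backend(tag, lambda prompt: f"[{tag}] answer to: {prompt}")

chosen = route(needs_self_hosting=False, latency_sensitive=True,
               in_house=stub("in-house"), partner=stub("partner"),
               open_source=stub("open-source"))
print(chosen.name)                               # in-house
print(chosen.generate("summarize this thread"))
```

The point of a layer like this is that deployment, data handling, and compliance choices become explicit code paths that can change without retraining or replacing any single model.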

Microsoft's move to build its own models comes as things have grown more complicated with OpenAI. The two are still partners, but recent tensions suggest Microsoft wants more control over where its AI goes next. Staying open to outside models lets Microsoft chart a middle path: Google is going all in on its own Gemini stack, Meta is pushing Llama and open models, and Amazon is focused on offering a broad menu through Bedrock.

Microsoft's strategy is different. It is building its own models while leaving room for others, and wiring them directly into user-facing products such as Copilot. If the future of AI is a mix of systems working together in context, this may be what that future looks like.
