Qualcomm debuts AI200 and AI250 chips as it moves into AI infrastructure

Qualcomm is moving beyond its mobile roots with a new line of data center hardware designed for AI inference. The San Diego-based company introduced its new AI200 and AI250 accelerator chips on Monday, positioning itself to compete with NVIDIA and AMD in the AI hardware market. According to Qualcomm, the new chips will offer improved memory capacity, rack-scale configurations, and compatibility with major AI frameworks.

Qualcomm says its AI200 accelerator supports up to 768 GB of LPDDR memory per card and is optimized for AI inference on large language models, multimodal models, and related workloads. Although the AI200 is designed as an accelerator card, Qualcomm also offers it as a rack-level system for running generative AI applications. The AI250 features a near-memory computing design that minimizes data movement and power draw while supporting disaggregated AI inference, the company said. Both rack systems use direct liquid cooling and have a rated power draw of 160 kW. They also support PCIe and Ethernet and include confidential computing features to secure sensitive workloads.

A Qualcomm AI Rack (Source: Qualcomm)

The announcement marks Qualcomm’s first major move into the data center accelerator market, which is currently dominated by NVIDIA’s GPUs and, more recently, AMD’s MI300 series. Qualcomm’s core business has historically focused on wireless and mobile processors, including its Snapdragon systems-on-chip for smartphones. But as the company faces slowing handset demand, it has pushed into other sectors such as automotive, personal computing, and now large-scale AI infrastructure. So far, its data center strategy appears to emphasize energy efficiency rather than raw throughput alone.

This could be a smart strategy for Qualcomm, as inference workloads are reshaping data center design around power and memory constraints. With larger models demanding greater throughput and efficiency, vendors are increasingly building systems tailored to inference at production scale. Qualcomm’s near-memory and liquid-cooled designs target those same pressures on performance and energy consumption.

In May, Qualcomm announced a partnership with state-backed Saudi AI firm Humain to supply AI chips for its data centers in the region. Humain has committed to deploying up to 200 megawatts of Qualcomm AI systems starting in 2026, alongside separate agreements that include 18,000 NVIDIA Blackwell GPUs and a reported $10 billion commitment with AMD. The deal could give Qualcomm and its rivals a foothold in a region investing heavily in sovereign AI infrastructure. It will also give Qualcomm’s new hardware a large-scale deployment in which to prove itself.

The AI200 and AI250 will follow a staggered timeline, with the AI200 expected to be available in 2026 and the AI250 in 2027. Qualcomm did not disclose detailed performance specs such as compute throughput or process node, although the company said the chips will support standard AI frameworks and tools. As a fabless chip designer, Qualcomm has not disclosed which foundry (or foundries) will produce the chips, leaving questions about manufacturing scale and lead times.

Qualcomm’s recent acquisitions also point to its growing investment in AI and data center technology. This year, the company bought MovianAI, the generative AI unit of Vietnam’s VinAI; Autotalks, an Israeli automotive communications chipmaker; Alphawave IP Group, a British semiconductor company focused on data center connectivity; and Arduino, an Italian maker of open-source microcontroller platforms. While Qualcomm hasn’t linked these acquisitions directly to the AI200 or AI250, they show how the company is positioning itself for a bigger role in AI and data center infrastructure.

Qualcomm’s announcement comes amid fierce competition to power AI workloads in the data center and HPC sectors. While NVIDIA holds the lion’s share of the market (up to 94% by some estimates), rivals like AMD and Intel, along with a crop of AI chip startups, stand ready to provide alternatives. Qualcomm now joins a crowded field of vendors aiming to improve energy efficiency as AI workloads expand.

Qualcomm’s ability to scale its low-power expertise to the data center will determine whether its foray into AI infrastructure is a niche experiment or a lasting presence in the market. Investors seem confident: the company’s shares closed up more than 11 percent on Monday after rising as much as 22 percent earlier in the day. Qualcomm has the technology and capital to compete, but execution will depend on how well its hardware performs against rival AI platforms and how quickly it can scale. The next few years will test whether the firm’s efficiency-focused approach can win in an industry dominated by established chipmakers.

