Intel unveils new data center GPU dubbed ‘Crescent Island’

Intel unveiled a new data center GPU at the OCP Global Summit this week. Dubbed “Crescent Island,” the GPU will use the Xe3P graphics architecture and low-power LPDDR5X memory, and will target AI inference workloads with energy efficiency as a core feature.

As the focus of the AI revolution shifts from model training to inference and agentic AI, chipmakers have responded with new chip designs optimized for those workloads, rather than simply cranking out massive AI accelerators with tons of number-crunching horsepower.

This is the backdrop, the company says, for Intel’s latest GPU, Crescent Island, which is due in the second half of 2026. The new GPU will feature 160GB of LPDDR5X memory, use the Xe3P microarchitecture, and be optimized for performance per watt. Xe3P is a new, performance-oriented version of the Xe3 architecture used in Intel’s Panther Lake CPUs.

“AI is moving from static training to real-time inference everywhere,” said Sachin Katti, Intel’s CTO. “Intel’s Xe architecture data center GPU provides the efficient headroom customers need – and increased token volume at a low cost.”

Intel’s Ponte Vecchio GPU, circa 2022

Intel launched its Xe GPU microarchitecture initiative in 2018, details of which emerged in 2019 at its HPC Developer Conference (held down the road from the SC19 show in November 2019). The goal was to compete against NVIDIA and AMD GPUs for both data center (HPC and AI) and desktop (gaming and graphics) use cases. It has launched a series of Xe (which stands for “exascale for all”) products over the years, including discrete GPUs for graphics, integrated GPUs embedded in CPUs, and data center GPUs used for AI and HPC workloads.

Its first Xe data center GPU was Ponte Vecchio, which used the Xe-HPC microarchitecture with Embedded Multi-Die Interconnect Bridge (EMIB) and Foveros die-stacking packaging on the Intel 4 node, its 7-nanometer-class technology. Ponte Vecchio also used some 5nm components from TSMC.

You’ll recall that Argonne National Laboratory’s Aurora supercomputer, which debuted as the second-fastest supercomputer in the world two years ago, was built with six Ponte Vecchio-based Max-series GPUs and Intel Xeon Max-series CPUs in each node, connected with HPE’s Slingshot interconnect. Aurora delivered 585 petaflops in November 2023 from a total of 63,744 Xe-HPC Ponte Vecchio GPUs spread across more than 10,000 nodes. It officially became the second supercomputer to break the exascale barrier in June 2024, and it currently sits in the number three slot on the Top500 list.

When Aurora was first revealed in 2015, it featured a pair of Intel’s Xeon Phi accelerators alongside Xeon CPUs. However, when Intel killed off the Xeon Phi in 2017, it forced the computer’s designers to go back to the drawing board. The answer came when Intel announced Ponte Vecchio in 2019.

Intel’s new Crescent Island GPUs will feature LPDDR5X memory

It’s not exactly clear how Crescent Island, the successor to Ponte Vecchio, will be built, or whether it will be delivered as a pair of small GPUs or one large GPU. Crescent Island’s performance characteristics will also be something to watch, especially memory bandwidth, which is a critical factor in many AI workloads these days.

Using LPDDR5X memory, which is commonly found in PCs and smartphones, is an interesting choice for a data center GPU. LPDDR5X was released in 2021 and can apparently reach speeds of up to 14.4 Gbps per pin. Memory makers like Samsung and Micron offer LPDDR5X memory in capacities up to 32GB, so Intel will need to figure out a way to connect a handful of memory packages to each GPU.
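For a rough sense of what those numbers imply, here is a back-of-envelope bandwidth sketch. The per-pin speed and package capacity come from the figures above; the 64-bit bus width per package is an assumption for illustration only, since Intel has not disclosed Crescent Island’s memory configuration.

```python
# Back-of-envelope LPDDR5X bandwidth estimate.
# Assumption (not confirmed by Intel): 64-bit bus per memory package.
GBPS_PER_PIN = 14.4         # Gbps per pin, the peak speed cited above
BUS_WIDTH_BITS = 64         # assumed bus width per LPDDR5X package
PACKAGE_CAPACITY_GB = 32    # max package capacity cited above
TOTAL_CAPACITY_GB = 160     # Crescent Island's stated memory capacity

packages = TOTAL_CAPACITY_GB // PACKAGE_CAPACITY_GB   # minimum package count
per_package_gbs = GBPS_PER_PIN * BUS_WIDTH_BITS / 8   # GB/s per package
total_gbs = packages * per_package_gbs                # aggregate GB/s

print(packages, per_package_gbs, total_gbs)  # 5 115.2 576.0
```

Even this generous estimate lands well below the multi-TB/s figures typical of HBM-based accelerators, which underscores why bandwidth is the number to watch here.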

Both AMD and NVIDIA are using large amounts of the latest generation of high-bandwidth memory (HBM) in their next-generation GPUs in 2026, with the AMD MI450 using 432GB of HBM4 and NVIDIA offering up to 1TB of HBM4 memory with its Rubin Ultra GPUs.

HBM4 has clear advantages when it comes to bandwidth. But with HBM4 demand and tighter supply chains driving up prices, perhaps Intel is onto something with LPDDR5X memory.


This article first appeared on our sister publication, HPCwire.

About the author: Alex Woodie

Alex Woodie has written about IT as a technology journalist for more than a decade. He brings extensive experience from the IBM midrange marketplace, covering topics such as servers, ERP applications, programming, databases, security, high availability, storage, business intelligence, cloud, and mobile capabilities. He lives in the San Diego area.
