
NVIDIA Unveils Next-Gen Rubin, Rubin Ultra, Blackwell Ultra GPUs & Supercharged Vera CPUs


NVIDIA has just confirmed its next-gen Rubin GPU architecture, announcing Rubin, Rubin Ultra, and Blackwell Ultra GPUs along with its new Vera CPU.

NVIDIA Rubin GPU Architecture Is Now Official: Blackwell & Rubin To Get “Ultra” Variants With Supercharged Memory & Specs

As a surprise announcement, NVIDIA’s CEO, Jensen Huang, revealed the company's next GPU architecture, codenamed Rubin. It is named after American astronomer Vera Rubin, who made significant contributions to the understanding of dark matter in the universe while also pioneering work on galaxy rotation rates. Although NVIDIA just revealed its Blackwell platform, it looks like NVIDIA is accelerating its roadmap, offering a new GPU product each year as we reported recently.

But let’s start with Blackwell first. While the first iteration of Blackwell GPUs (B100/B200) will come to data centers later this year, NVIDIA also plans to release a supercharged version that will feature 12-Hi memory stacks across 8 sites versus the 8-Hi memory stacks across 8 sites on existing products. This chip is expected to launch in 2025.
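
NVIDIA hasn't shared per-stack capacities for the supercharged chip, but as a rough back-of-the-envelope sketch (assuming 24 Gb, i.e. 3 GB, DRAM dies as used in today's HBM3E stacks), the jump from 8-Hi to 12-Hi stacks works out as follows:

```python
# Back-of-envelope HBM capacity estimate.
# Assumption: 3 GB (24 Gb) per DRAM die, typical of current HBM3E;
# actual Blackwell Ultra figures have not been confirmed by NVIDIA.
GB_PER_DIE = 3      # assumed capacity of one HBM DRAM die
HBM_SITES = 8       # stacks per GPU package (per the article)

def total_hbm_gb(stack_height: int) -> int:
    """Total on-package HBM capacity for a given stack height (dies per stack)."""
    return GB_PER_DIE * stack_height * HBM_SITES

print(total_hbm_gb(8))   # 8-Hi x 8 sites  -> 192 GB (in line with current B200 specs)
print(total_hbm_gb(12))  # 12-Hi x 8 sites -> 288 GB (the supercharged variant)
```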

Then, soon after Blackwell, NVIDIA will release its next-gen Rubin GPUs. The NVIDIA Rubin R100 GPUs will be part of the R-series lineup & are expected to enter mass production in the fourth quarter of 2025, while systems such as DGX and HGX solutions are expected to be mass-produced in the first half of 2026. According to NVIDIA, Rubin GPUs and the respective platform will be available by 2026, followed by an Ultra version in 2027. NVIDIA also confirms that Rubin GPUs will utilize HBM4 memory.

So essentially, we are expecting:

  • Blackwell (2024) -> Blackwell Ultra (2025)
  • Rubin (2026) -> Rubin Ultra (2027)

It is expected that NVIDIA’s Rubin R100 GPUs will use a 4x reticle design (versus 3.3x for Blackwell) and will be made using TSMC's CoWoS-L packaging technology on the N3 process node. TSMC recently laid out plans for up to 5.5x reticle-size chips by 2026, which would feature a 100x100mm substrate and allow for up to 12 HBM sites versus 8 HBM sites on current 80x80mm packages.

TSMC also plans to move to a new SoIC design which will feature a greater than 8x reticle size in a 120x120mm package configuration. These larger packages are still being planned out, so a roughly 4x reticle size is the more realistic expectation for Rubin GPUs.
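
For a sense of scale, here is a minimal sketch of how those reticle multiples and package sizes translate into area, assuming the industry-standard ~26 x 33 mm (~858 mm²) lithography reticle limit, which is not stated in the article:

```python
# Rough silicon-area comparison.
# Assumption: a single lithography reticle is ~26 mm x 33 mm (~858 mm^2);
# the multiples below come from the article.
RETICLE_MM2 = 26 * 33   # ~858 mm^2 per reticle

designs = {
    "Blackwell (3.3x reticle)": 3.3,
    "Rubin R100 (4x reticle)":  4.0,
    "TSMC 2026 plan (5.5x)":    5.5,
}

for name, multiple in designs.items():
    print(f"{name}: ~{multiple * RETICLE_MM2:.0f} mm^2 of stitched silicon")

# Package (substrate) area grows alongside the silicon:
for side in (80, 100, 120):
    print(f"{side}x{side} mm package: {side * side} mm^2")
```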

Other information mentioned states that NVIDIA will be utilizing next-generation HBM4 DRAM to power its R100 GPUs. The company currently leverages the fastest HBM3E memory for its B100 GPUs and is expected to refresh these chips with HBM4 variants once the memory solution is widely mass-produced in late 2025, which is about the same time R100 GPUs are expected to enter mass production. Both Samsung and SK Hynix have revealed plans to commence development of the next-gen memory solution in 2025 with up to 16-Hi stacks.

NVIDIA is also set to upgrade its Grace CPU for the GR200 Superchip module, which will house two R100 GPUs and an upgraded Grace CPU based on TSMC’s 3nm process. Currently, the Grace CPU is built on TSMC’s 5nm process node and packs 72 cores, for a total of 144 cores on the Grace Superchip solution. The next-generation ARM CPU solution is also confirmed to be known as Vera, which is a nice touch.

One of the biggest focuses for NVIDIA with its next-gen Rubin R100 GPUs will be power efficiency. The company is aware of the growing power needs of its data center chips, and it aims to deliver significant improvements in this department while increasing the AI capabilities of its chips. The R100 GPUs are still far away and we shouldn’t expect them to be unveiled until next year’s GTC, but if this information is correct, then NVIDIA has lots of exciting developments ahead for the AI and data center segments.

NVIDIA Data Center / AI GPU Roadmap

| GPU Codename | X | Rubin (Ultra) | Blackwell (Ultra) | Hopper | Ampere | Volta | Pascal |
|---|---|---|---|---|---|---|---|
| GPU Family | GX200 | GR100 | GB200 | GH200/GH100 | GA100 | GV100 | GP100 |
| GPU SKU | X100 | R100 | B100/B200 | H100/H200 | A100 | V100 | P100 |
| Memory | HBM4e? | HBM4 | HBM3e | HBM2e/HBM3/HBM3e | HBM2e | HBM2 | HBM2 |
| Launch | 202X | 2026-2027 | 2024-2025 | 2022-2024 | 2020-2022 | 2018 | 2016 |