SAN FRANCISCO - Nvidia may have made its immense fortune on the back of specialized graphics processing units (GPUs) used to power artificial intelligence servers, but CEO Jensen Huang is increasingly professing his love for the more generalist CPU.

The CPU, or central processing unit, was for decades viewed as the main brain of a computer - a product most associated with Intel or sometimes Advanced Micro Devices.

Huang is fond of saying that where 90% of computing once happened on CPUs and 10% on chips like his, the ratio has flipped in recent years.

But the CPU is now making a comeback, increasingly seen as an equivalent, if not better, option as AI companies shift from training their models to deploying them - a shift that Nvidia plans to be a big part of.

"We love CPUs as well as GPUs," Huang said on a call with analysts on Wednesday for the company's fourth-quarter results.

He assured them that Nvidia was not only ready for the CPU's return to the spotlight, but also that Nvidia's own CPU offerings for data centers, first released in 2023, would outcompete rivals.

At the Consumer Electronics Show in Las Vegas last month, Huang also said the number of high-performance Nvidia CPUs used in data centers would explode and that he wouldn't be surprised "if Nvidia becomes one of the largest CPU makers in the world."

THE CPU VERSUS THE GPU

CPUs and GPUs have served different computing tasks for decades. CPUs are generalist chips designed to handle any mathematical task a software programmer might throw at them, at reasonable speed given the variety of work.

GPUs, by contrast, specialize in a simpler set of mathematical tasks, carrying out thousands of those simple calculations in parallel.

In video games, that means calculating the values of thousands of pixels on a screen many times a second; in AI work, it means multiplying and adding the large matrices of numbers that developers use to represent real-world data such as words and images.
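The matrix arithmetic described above can be sketched in a few lines of plain Python. This is an illustrative aside with made-up values, not code from Nvidia; it shows why the workload parallelizes so well - every output cell is an independent multiply-add.

```python
# Toy illustration: the core AI workload is matrix multiplication -
# many independent multiply-add operations. A GPU computes thousands
# of these at once; a CPU works through far fewer at a time.

def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), given as lists of lists."""
    n = len(b)
    p = len(b[0])
    # Each output cell is a separate dot product with no dependence on
    # the others - exactly the kind of work that can run in parallel.
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(a))]

weights = [[0, 1, 2], [3, 4, 5]]   # a tiny 2x3 "layer" of model weights
inputs = [[1, 1], [1, 1], [1, 1]]  # two input vectors, stacked as columns

print(matmul(weights, inputs))  # [[3, 3], [12, 12]]
```

A real model multiplies matrices with millions of entries, which is why a chip with thousands of parallel lanes outruns a generalist CPU on this one task.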

AI companies are increasingly deploying "agents" that can independently carry out tasks such as writing code, sifting through documents and writing research reports - and that sort of computing "is happening more and more, and sometimes primarily, on the CPU," said Ben Bajarin, an analyst at Creative Strategies.

Nvidia's current flagship AI server - called the NVL72 - contains 36 of its CPUs and 72 of its GPUs. Bajarin thinks that could change to a 1-to-1 ratio for so-called agentic work, or that the GPU could be skipped altogether.

NVIDIA OUT TO PROVE A POINT

Underscoring its CPU ambitions, Nvidia recently announced a deal with Meta Platforms that will see the Facebook owner use large volumes of its Grace and Vera CPU chips on a standalone basis. That's a relatively new development compared to Nvidia's current AI servers where each CPU is accompanied by multiple GPUs.

It's not that Meta has switched CPU vendors - it's securing more suppliers. Days later, AMD announced a large deal with Meta that also included its CPUs, which Meta has been buying for years.

On the call with analysts, Huang argued that Nvidia had taken a fundamentally different approach to CPUs.

He outlined why Nvidia had largely avoided the approach of breaking chips into smaller parts that Intel and AMD have used, saying the Nvidia CPU can carry out many simple tasks in a row with fast access to a large amount of computer memory.

"It is designed to be focused on very high data processing capabilities," Huang said on the call. "And the reason for that is because most of the computing problems that we're interested in are data driven - artificial intelligence being one."

Dave Altavilla, principal analyst at HotTech Vision and Analysis, said Nvidia is aiming to prove that the CPU type once supplied primarily by Intel "is no longer the assumed default foundation of modern compute infrastructure. Instead, it becomes just one architectural option among several."

Huang said that Nvidia would have more to disclose about its CPUs at the company's annual developer conference in Silicon Valley next month.

(Reporting by Stephen Nellis in San Francisco; Editing by Peter Henderson and Edwina Gibbs)