AI acceleration is poised to become one of the most profitable hardware industries in the coming months and years, and Nvidia is well positioned to capture a sizable piece of this market. The H100 data centre GPU is already generating significant income for the Santa Clara-based business.
Nvidia looks to be generating a significant profit on each H100 GPU accelerator sold, with claimed margins approaching 1,000% of production costs.
According to Tae Kim, a senior technology writer at Barron’s, Nvidia spends approximately $3,320 to produce a single H100 unit, which then sells to end users for between $25,000 and $30,000.
These estimates come from the consultancy firm Raymond James and appear to include costs for the onboard HBM memory chips as well.
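As a rough sanity check on those figures, the implied margin can be computed directly from the reported numbers (the $3,320 cost and the $25,000–$30,000 price range are the estimates cited above, not official Nvidia data, and "margin" here is read simply as profit over production cost):

```python
# Sanity check on the reported H100 economics.
# Inputs are the third-party estimates quoted in the article,
# not official Nvidia figures.
COST = 3_320  # estimated production cost per H100, USD

for price in (25_000, 30_000):
    profit = price - COST
    margin_pct = profit / COST * 100  # profit as a % of production cost
    print(f"price ${price:,}: profit ${profit:,}, margin ~{margin_pct:.0f}% of cost")
```

Depending on the exact cost assumptions and on whether "margin" is read as profit over cost or price over cost, the implied figure lands in the roughly 650–900% range, which is broadly consistent with the "approaching 1,000%" characterisation.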
If the forecasts are correct, this may be the start of an unprecedented golden era for Nvidia’s GPU business. According to Kim, the demand for H100 GPU units is so high that they are virtually sold out until 2024.
Meanwhile, AI firms are racing to get enough GPU accelerators to run their generative models and AI-powered services.
According to Foxconn, the AI server industry will be worth $150 billion by 2027, and these modern AI servers rely significantly on the powerful computing capabilities of the latest Nvidia technology.
The H100 GPU is built on the Hopper microarchitecture, the data centre counterpart to the Ada Lovelace architecture that drives the latest generation of GeForce RTX gaming GPUs.
However, as performance benchmarks make clear, Hopper’s primary focus is not gaming. The H100 accelerator is built around a cut-down GH100 GPU with 14,592 CUDA cores, paired with 80GB of HBM3 memory on a 5,120-bit memory bus.