Tesla’s in-house supercomputer has gained an additional 1,600 GPUs, a roughly 28% increase over the count reported a year ago.
According to Tesla Engineering Manager Tim Zaman, this would place the machine seventh in the world in terms of GPU count.
The machine now has 7,360 Nvidia A100 GPUs, which are designed specifically for data center servers but use the same Ampere architecture as Nvidia’s top-tier GeForce RTX 30-series cards.
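The reported figures are consistent with one another, as a quick back-of-envelope check shows. This is a sketch in Python using only the numbers quoted above; the variable names are ours, for illustration.

```python
# Sanity check of the reported GPU-count increase.
ADDED_GPUS = 1_600  # GPUs added in the latest upgrade
TOTAL_GPUS = 7_360  # total A100 GPUs after the upgrade

prior_total = TOTAL_GPUS - ADDED_GPUS          # 5,760 GPUs a year ago
increase_pct = ADDED_GPUS / prior_total * 100  # relative growth

print(f"Prior total: {prior_total:,} GPUs")    # Prior total: 5,760 GPUs
print(f"Increase: {increase_pct:.1f}%")        # Increase: 27.8%, i.e. the quoted ~28%
```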
Upgrade of the Tesla supercomputer
Tesla most likely needs all the processing power it can get right now. The company is training neural networks to process the massive amounts of video data collected by its cars. The most recent upgrade could be just the beginning of Tesla’s high-performance computing (HPC) ambitions.
Elon Musk stated in June 2020, “Tesla is developing a neural net training computer called Dojo to process truly vast amounts of video data,” explaining that the planned machine would achieve over 1 exaFLOPS of performance, which is one quintillion floating-point operations per second, or 1,000 petaFLOPS.
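For context on the units, here is a minimal conversion sketch; the prefixes are standard SI, and nothing in it is Tesla-specific.

```python
# Unit conversion for the performance target Musk quoted.
FLOPS_PER_PETAFLOPS = 10**15
FLOPS_PER_EXAFLOPS = 10**18  # one quintillion operations per second

target_exaflops = 1.0
target_flops = target_exaflops * FLOPS_PER_EXAFLOPS
print(f"{target_exaflops} exaFLOPS = {target_flops:.0e} FLOPS "
      f"= {target_flops / FLOPS_PER_PETAFLOPS:,.0f} petaFLOPS")
# 1.0 exaFLOPS = 1e+18 FLOPS = 1,000 petaFLOPS
```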
Performance above 1 exaFLOPS would place the machine among the most powerful supercomputers in the world; so far, only the Frontier supercomputer at Oak Ridge National Laboratory in Tennessee has officially broken the exascale barrier.
You could even get a job building the new computer. Musk encouraged his followers on Twitter to “consider joining our AI or computer/chip teams if this sounds interesting.”
Dojo, however, will not rely on Nvidia hardware. The planned machine will be powered by Tesla’s new D1 Dojo chip, which the carmaker said at its AI Day event can deliver up to 362 TFLOPS of compute.
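Taking both public figures at face value, a rough estimate shows how many D1 chips it would take to hit the 1 exaFLOPS target. This assumes perfect linear scaling and ignores interconnect overhead and precision differences, so it is a sketch, not Tesla’s actual configuration.

```python
# Idealized estimate: D1 chips needed to reach 1 exaFLOPS,
# assuming the per-chip figure scales linearly with no overhead.
D1_TFLOPS = 362          # per-chip figure Tesla quoted at AI Day
TARGET_FLOPS = 10**18    # 1 exaFLOPS

d1_flops = D1_TFLOPS * 10**12
chips_needed = TARGET_FLOPS / d1_flops
print(f"Chips needed (ideal scaling): {chips_needed:,.0f}")
# Chips needed (ideal scaling): 2,762
```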