It’s been over a year since OpenAI released GPT-4, and while GPT-4 Turbo arrived later, it was more of an incremental upgrade to the original GPT-4 than a true successor. That is why many expect a genuine successor this year — but it will not be GPT-5.
Analysts believe that GPT-5 will not arrive this year because of the massive amount of computing power such a model would require.
GPT-5 doesn't seem likely to be released this year.
Ever since GPT-1, the difference between GPT-n and GPT-n+0.5 is ~10x in compute.
That would mean GPT-5 would have around ~100x the compute GPT-4, or 3 months of ~1 million H100s.
I doubt OpenAI has a 1 million GPU server ready. https://t.co/asJWNLkO23 — Dan Hendrycks (@DanHendrycks) April 22, 2024
According to Dan Hendrycks, director of the Center for AI Safety, each half-version upgrade to OpenAI’s GPT line has required roughly ten times more computing power than the last.
So, skipping GPT-4.5 and going straight to GPT-5 would require about a hundred times more computing power than GPT-4.
To put that in perspective, it is equivalent to running approximately 1 million H100 chips continuously for three months. Hendrycks doubts that OpenAI already has a cluster of one million GPUs ready.
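The scaling arithmetic behind Hendrycks’s estimate can be sketched in a few lines. Note that the 10x-per-half-version figure is his assumption from the tweet, not an official OpenAI number:

```python
# Back-of-the-envelope sketch of Hendrycks's scaling argument.
# Assumption (from the tweet): each +0.5 GPT version step ~= 10x more compute.

def compute_multiplier(version_from: float, version_to: float) -> float:
    """Relative training compute between two GPT version numbers,
    assuming 10x per 0.5 version increment."""
    half_steps = (version_to - version_from) / 0.5
    return 10 ** half_steps

# GPT-4 -> GPT-4.5: one half-step, i.e. ~10x the compute of GPT-4
print(compute_multiplier(4.0, 4.5))   # 10.0
# GPT-4 -> GPT-5: two half-steps, i.e. ~100x the compute of GPT-4
print(compute_multiplier(4.0, 5.0))   # 100.0
```

Under this assumption, skipping GPT-4.5 does not skip the cost: reaching GPT-5 still means paying for both tenfold jumps at once.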
Dario Amodei, CEO of Anthropic, has offered figures that support this estimate.
He recently stated that training a cutting-edge LLM currently costs approximately $1 billion. However, this expense is expected to increase to between $5 billion and $10 billion by 2025/26.
Notably, the $1 billion training cost is consistent with a tenfold increase in computational resources over GPT-4 — which would correspond to a GPT-4.5-class model rather than GPT-5.
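A rough sanity check ties the two claims together. Assuming (hypothetically) that training cost scales roughly linearly with compute, and taking Amodei’s ~$1 billion figure as the price of a 10x-over-GPT-4 run, a 100x run lands near the top of his projected range:

```python
# Hypothetical sketch: assume training cost scales roughly linearly with compute.
# Amodei's ~$1B figure is taken here as a ~10x-over-GPT-4 (GPT-4.5-class) run.

cost_10x_run = 1e9            # ~$1B for a current frontier training run (Amodei)
compute_ratio = 100 / 10      # GPT-5-class (100x) vs GPT-4.5-class (10x) compute
estimated_gpt5_cost = cost_10x_run * compute_ratio
print(f"${estimated_gpt5_cost / 1e9:.0f}B")  # $10B, at the top of Amodei's $5-10B range
```

This is only an order-of-magnitude check — real cost per FLOP shifts with hardware generations and efficiency gains — but it shows the two executives’ numbers point the same way.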
GPT-4.5
GPT-4.5 is more likely to be released this year, as it has already appeared in leaks from the OpenAI blog. Last month, search engine results surfaced a page from the OpenAI blog describing GPT-4.5, which is expected to be faster, more accurate, and more scalable.