Nvidia CEO Jensen Huang: Future AI Will Need 100 Times More Computation

By Editorial Team

Artificial intelligence (AI) is becoming more powerful every year, and experts believe it will need much more computing power in the future. Nvidia CEO Jensen Huang recently said that next-generation AI models will require 100 times more computation than older versions. This is because modern AI systems are learning to think step by step before answering questions, which requires much more processing power.

Huang made this statement in an interview after Nvidia announced its latest financial results. The company has been growing rapidly due to the high demand for its specialized computer chips, which are essential for AI systems. Nvidia reported that its revenue increased by 78% compared to last year, reaching $39.33 billion. Its data center business, which includes the powerful graphics processing units (GPUs) used for AI, grew even faster, with a 93% rise in revenue. These numbers show how much AI technology is expanding and how important Nvidia’s chips are to that growth.

Why AI Needs More Computation

AI models have changed a lot over the years. Older AI systems relied on simple patterns and predictions to generate responses. However, modern AI uses a reasoning process, meaning it carefully thinks through each question before answering. This new method allows AI to provide better and more accurate responses, but it also requires much more computing power.

Huang mentioned several AI models, including DeepSeek’s R1, OpenAI’s GPT-4, and xAI’s Grok 3, as examples of this new way of reasoning. These models break down problems into smaller steps, analyze different possibilities, and then provide the best answer. This process is similar to how humans think through difficult problems. But because AI operates on a much larger scale, it needs thousands of powerful chips working together to handle all the necessary calculations.

According to Huang, the amount of computation required for this type of reasoning is 100 times greater than what was needed just a few years ago. This means that AI companies will need to invest in more powerful hardware and advanced chips to keep up with the growing demand for AI technology.
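As a back-of-the-envelope illustration (the function and token counts below are hypothetical, not figures from Huang's remarks), inference compute scales roughly with the number of tokens a model generates, so a model that emits thousands of intermediate reasoning tokens before answering can cost orders of magnitude more than one that answers directly:

```python
# Illustrative sketch: why step-by-step reasoning multiplies inference
# compute. Assumes cost is roughly proportional to the number of tokens
# generated (one forward pass per token) -- a simplification.

def inference_cost(tokens_generated, cost_per_token=1.0):
    """Rough compute cost: one unit per generated token."""
    return tokens_generated * cost_per_token

# A direct answer might be ~50 tokens; a reasoning model may emit
# thousands of intermediate "thinking" tokens before its final answer.
direct = inference_cost(50)
reasoning = inference_cost(5000)

print(f"Reasoning uses ~{reasoning / direct:.0f}x the compute of a direct answer")
```

Under these assumed numbers, the reasoning path costs about 100 times the direct one, which is the same order of magnitude Huang describes.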

Nvidia’s Role in AI Growth

Nvidia is one of the leading companies in AI hardware, providing the specialized chips that power AI models. The company has seen massive growth in recent years, mainly due to the increasing need for high-performance computing in AI applications.

Its data center business, which includes AI chips, now makes up more than 90% of Nvidia’s total revenue. This shows how critical AI has become to Nvidia’s success. More companies and research labs are using Nvidia’s GPUs to train and run AI models, leading to a surge in demand for its products.

Despite this growth, Nvidia’s stock price has faced some challenges. Earlier this year, the company’s stock dropped by 17%, marking its worst decline since 2020. The drop was partly due to concerns that AI could become more efficient, reducing the need for expensive hardware.

The Impact of DeepSeek

One of the key factors that influenced this concern was a breakthrough from DeepSeek, a Chinese AI lab. DeepSeek developed a model that was able to achieve strong AI performance with lower infrastructure costs. This raised questions about whether companies would still need to invest in expensive computing power if AI could be optimized in other ways.

However, Huang disagreed with this idea. He explained that DeepSeek’s work actually showed that AI would need even more computation, not less. DeepSeek’s model, which used reasoning techniques, required powerful chips to function effectively. Huang praised DeepSeek for open-sourcing its model, which allows researchers and companies worldwide to learn from its methods and improve AI technology.

This discussion highlights an important trend in AI development: as models become smarter, they require more computational power, not less. While efficiency improvements are always welcome, the fundamental need for high-performance chips is only increasing.

The Future of AI and Computing Power

As AI continues to advance, companies will need to invest in more powerful chips and computing infrastructure. Huang’s prediction that AI will need 100 times more computation suggests that the industry is only at the beginning of its growth. In the coming years, AI companies will likely face challenges in balancing the need for powerful hardware with the costs of maintaining and upgrading computing systems.

One potential solution is to develop more energy-efficient chips that can handle complex AI workloads with less electricity. Researchers are also exploring new computing architectures that could make AI processing more efficient. But for now, the demand for Nvidia's powerful GPUs remains high, and the company is expected to continue playing a major role in the AI industry.

AI is evolving rapidly, and with this growth comes the need for significantly more computing power. Jensen Huang’s statement that next-generation AI will require 100 times more computation reflects the increasing complexity of AI models. As AI becomes more advanced, it will rely on powerful chips to process information and provide better results.
