Why the future of AI depends on renewable energy


The future of artificial intelligence seems limitless, but the energy it takes to build it is anything but. During a panel discussion in San Francisco with Constantijn van Oranje, investors, chip designers and entrepreneurs, among others, one theme became painfully clear: AI scales faster than our energy network can handle.
AI models are not only getting bigger, they're also getting smarter. We're moving from generative AI (which produces text and images) to reasoning models: systems that make decisions, carry out tasks and ultimately control physical actions. Think of robots, self-driving cars or surgical assistants. But this shift requires three to five times more computing power than the current foundation models.
And computing power means energy. Lots of energy.
An estimate mentioned during the panel: if OpenAI continues on its current growth path, by 2033 the company will require as much energy as the whole of India. That's not a dystopian exaggeration; it's a realistic scenario if we keep scaling the way we do today.
The hardware world is feeling the pressure. As one of the speakers aptly put it: "Most of the energy does not go to the math itself, but to moving the data." That is why work is underway on new architectures that place memory closer to the computing core, and on optical interconnects via photonics to make data traffic faster and more energy efficient.
Innovations are also taking place on the cooling side: from liquid cooling to new materials that dissipate heat better. The consensus, however, is that these improvements are incremental, and that something fundamentally different needs to happen to really curb energy demand.
Fabrizio del Maffeo, CEO of Axelera AI, painted a different picture: it is not the AI bubble but the energy bubble that will burst. In Taiwan, for example, the energy supply is reaching its limit, while demand for data center capacity is growing exponentially. In the US, data centers now consume more than 5% of national electricity, a share that is rising rapidly.
So AI doesn't just have a compute problem; it is running into an energy ceiling.
Nevertheless, there is optimism. New model architectures, such as Mamba, and quantization techniques (where calculations are performed at lower numerical precision) provide enormous efficiency gains. Where Nvidia's older GPUs worked with 32-bit calculations, modern AI chips already run at 8-bit or lower, with minimal loss of quality.
That means models can be made smaller and more energy efficient without their performance declining. In addition, new hardware categories are emerging that are tailored to specific tasks, from edge devices to specialized AI accelerators.
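The quantization idea mentioned above can be made concrete with a small sketch. This is an illustrative, generic example of symmetric 8-bit quantization (not the scheme of any specific chip or framework mentioned in the article): 32-bit floating-point weights are mapped onto 256 integer levels and reconstructed, cutting memory use by a factor of four while keeping the reconstruction error small.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0  # one scale factor per tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)  # toy "weights"

q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)

# int8 takes 1 byte per value vs 4 bytes for float32: 4x less memory,
# and on matching hardware, cheaper arithmetic and less data movement.
print("memory ratio:", w.nbytes / q.nbytes)
print("max abs error:", float(np.abs(w - w_approx).max()))
```

In practice, production schemes are more refined (per-channel scales, calibration, quantization-aware training), but the principle is the same: fewer bits per number means less data to move, which, as the panel noted, is where most of the energy goes.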
Entrepreneur and investor Sid Sijbrandij, of GitLab and Kilo Code, made an important point: technology alone will not solve the energy issue. The energy infrastructure itself needs to be redesigned.
His vision: the future is solar and batteries. Fusion reactors are too far away, and nuclear power takes too long to build. Solar energy does offer scalability, provided we take a smart approach.
His plan boils down to one principle: the key does not lie in more energy, but in less waste.
The race to scale artificial intelligence is a race between software innovation, hardware architecture, and energy efficiency.
The future of AI won't just be determined by who trains the smartest models, but by who can power them most sustainably.
The conclusion from San Francisco: "The biggest breakthrough in AI won't come from Silicon Valley, but from a power plant."

