Lift-off for generative AI would entail huge costs and could have serious ramifications for climate change.

Iain Morris, International Editor

March 23, 2023

Telcos coo as Nvidia pumps up the dirty Gen AI bubble

A future planet stalked by generative AIs is not shaping up to be very green. Training these regurgitative, hallucinating monsters and then using them to write your homework assignment or boardroom presentation is so computationally intensive that the entire Arctic could turn into a giant slushie before you've even had a chance to ask ChatGPT about its impact on climate change.

This correspondent demurred and instead relied on some old-fashioned search-engine algorithms to look up the data. One study, cited in a paper Deloitte published this month, reckoned the training process for a large transformer model emitted 284 tons of carbon dioxide. If that sounds bad, it is. The average person emits about five tons annually, according to the Deloitte report.
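
A quick sanity check, using nothing but the two figures above, shows the scale of that gap: one training run equates to decades of a single person's footprint.

```python
# Back-of-the-envelope comparison using the figures cited above.
TRAINING_EMISSIONS_TONS = 284  # CO2 from training one large transformer (study cited by Deloitte)
PERSON_ANNUAL_TONS = 5         # average individual's annual CO2 emissions (Deloitte report)

person_years = TRAINING_EMISSIONS_TONS / PERSON_ANNUAL_TONS
print(f"One training run ~= {person_years:.0f} years of an average person's emissions")
# -> One training run ~= 57 years of an average person's emissions
```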

On the one hand, it's unlikely we'll have that many generative AIs. On the other, every man and his dog seems to be developing one. Generative AI is starting to look a bit like the new crypto, or the new credit default swap. And Deloitte says nothing about the operational costs when millions of humans have outsourced cognition to ChatGPT and its imitators, opting for life in a semi-vegetative state where the most mentally taxing job is choosing between Adam Sandler comedies on Netflix.

Figure 1: Nvidia CEO Jensen Huang, bringing leather back to the C-suite. (Source: Nvidia)

Those costs are estimated to be substantial, though. In late 2022, before generative AI had monopolized office conversations about the tech sector, Tom Goldstein, an associate professor at the University of Maryland, put ChatGPT's running costs at about $100,000 a day, or $3 million a month, after a "back-of-the-envelope" calculation.
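
Goldstein's workings aren't reproduced in full here, but estimates of this kind typically multiply an assumed GPU fleet by a cloud provider's hourly rate. The fleet size and rate below are illustrative placeholders chosen to land near his figure, not numbers from his calculation:

```python
# Illustrative back-of-the-envelope, NOT Goldstein's actual workings.
# Both inputs are assumed placeholder values.
ASSUMED_GPUS = 1_400      # GPUs kept busy serving the model (assumption)
ASSUMED_RATE_USD = 3.0    # cloud price per GPU-hour (assumption)

daily_cost = ASSUMED_GPUS * ASSUMED_RATE_USD * 24
monthly_cost = daily_cost * 30
print(f"~${daily_cost:,.0f} a day, ~${monthly_cost:,.0f} a month")
# -> ~$100,800 a day, ~$3,024,000 a month: in the region of the figures quoted above
```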

What's starting to look like a generative AI bubble, then, is fabulous short-term news for Nvidia. Its graphics processing units (GPUs) were originally intended for games, as the label suggests, and they are still sold for gaming. But they have also turned out to be a much better data center option than bog-standard Intel central processing units (CPUs) for computationally intensive tasks. And that includes all things AI.

Intel bashing

Thwarted in his efforts to buy UK-based Arm from Japan's SoftBank, Nvidia boss Jensen Huang has now hitched his leather-jacketed frame to the zeppelin of generative AI and watched Nvidia climb in what is largely a bear market for semiconductor stocks. Since October, when it had sunk to about $121, Nvidia's share price has more than doubled, to nearly $265. "The impressive capabilities of generative AI have created a sense of urgency for companies to reimagine their products and business models," said Huang, pumping more gas into the bubble.

Excitement extends into the telecom sector, with US telco dinosaur AT&T hauled temporarily out of Microsoft serfdom to endorse Nvidia's latest products at a big marketing party this week, where Intel bashing was rife. AT&T's trials of Nvidia's cuOpt technology delivered solutions in 10 seconds, down from 1,000 seconds with Intel's x86-based CPUs, said Nvidia in a statement, presumably with AT&T's blessing.

Figure 2: Nvidia's share price ($). (Source: Google Finance)

That enthusiasm might limit Intel's opportunity in the radio access network (RAN) market if telcos eventually want RAN software that can support low-latency AI services. It's an approach being pushed for some customers and service scenarios by Japan's Fujitsu, which is building Nvidia chips into various central unit (CU) and distributed unit (DU) RAN products. "We segment the market," said Greg Manganello, the head of Fujitsu's wireless business unit. "In our CU and DU, if you want high-performance edge, we are going to pitch Nvidia. It can do the analytics and it has enormous capacity."

There is even telco hope that generative AI could be the fabled post-voice killer application – meaning it takes off, not that it wipes out humanity. "The thing about ChatGPT is the immense compute that's required, particularly in the learning phase, and one of the great ways of distributing that – and why I'm thinking generative AI is an opportunity – is between the edge and the devices," said Howard Watson, the chief security and networks officer of BT. Charging Microsoft and other Big Tech companies for pure connectivity is selling a meal someone has already paid for. But charging them for edge access and latency would be something new.

Dirty shade of green

None of this will silence those concerned about energy use and the environment. GPUs might be more power-efficient than CPUs for some applications, but that does not mean they will prevent energy consumption from rocketing if generative AI becomes ubiquitous. "We also face risks related to business trends that may be influenced by climate concerns," said Nvidia, after gushing about its energy efficiency, in its recent annual report filing with the US Securities and Exchange Commission. "We may face decreased demand for computationally powerful but energy-intensive products, such as our GPUs."

It doesn't help that Microsoft, ChatGPT's main sponsor, is a rather dirty shade of green. Since its 2017 fiscal year, its annual energy consumption has doubled, to 13,482 gigawatt hours. Microsoft says 96% of this is from renewables, but only a sliver of that renewables total (147 megawatt hours, or less than 0.01%) comes from on-site renewable energy. The rest is attributed to purchases of renewable energy certificates (RECs) and power purchase agreements (PPAs), instruments of dubious environmental value.

Figure 3: Microsoft's energy use and emissions. (Source: Microsoft)

Even with Microsoft's preferred "market-based" methodology, which factors in those RECs and PPAs, the software company's annual Scope 1 and 2 emissions – those from its own operations and purchased energy – have risen 17% since 2017, to about 287,640 tons for the 2021 fiscal year. With a location-based methodology, which assesses the make-up of the electric grid, its emissions are up 74%, to more than 4.74 million tons.
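
For a sense of proportion, here is a quick sketch, using only the figures quoted above, of how small the on-site sliver really is and what the growth rates imply the fiscal 2017 baselines were:

```python
# Sanity-checking the Microsoft figures quoted above (all inputs from the article).
TOTAL_ENERGY_GWH = 13_482   # annual energy consumption
RENEWABLE_SHARE = 0.96      # share Microsoft attributes to renewables
ONSITE_MWH = 147            # on-site renewable generation

renewables_mwh = TOTAL_ENERGY_GWH * 1_000 * RENEWABLE_SHARE
print(f"On-site share of renewables: {ONSITE_MWH / renewables_mwh:.4%}")
# -> 0.0011%, comfortably below the 0.01% cited

# What the quoted growth rates imply about the fiscal 2017 baselines
print(f"Market-based 2017 baseline: ~{287_640 / 1.17:,.0f} tons")      # ~245,846 tons
print(f"Location-based 2017 baseline: ~{4_740_000 / 1.74:,.0f} tons")  # ~2,724,138 tons
```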

Richard Windsor, a tech analyst for Radio Free Mobile, expects the generative AI bubble to pop as companies start to realize the sums don't add up. Among other things, DGX server boxes featuring eight of Nvidia's H100 or A100 GPUs, and 640 gigabytes of memory, will cost $37,000 a month to rent, he points out in one of his latest blogs. Disaster will strike when companies "have to answer to their shareholders as to why the service is not as good as promised, meaning revenue delays and emergency capital raises," he writes. Many will hope he is right.
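
As a rough breakdown of that rental figure (the per-GPU arithmetic below is this correspondent's, assuming a 30-day month, not Windsor's):

```python
# Rough per-GPU breakdown of Windsor's $37,000/month DGX rental figure.
MONTHLY_RENT_USD = 37_000
GPUS_PER_BOX = 8
HOURS_PER_MONTH = 30 * 24   # assuming a 30-day month

per_gpu_month = MONTHLY_RENT_USD / GPUS_PER_BOX
per_gpu_hour = per_gpu_month / HOURS_PER_MONTH
print(f"~${per_gpu_month:,.0f} per GPU a month, ~${per_gpu_hour:.2f} per GPU-hour")
# -> ~$4,625 per GPU a month, ~$6.42 per GPU-hour
```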

— Iain Morris, International Editor, Light Reading


About the Author

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the previous 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).

