As individuals, companies and governments strive to reduce their carbon footprint, a growing number of AI leaders are starting to think about how their models impact the environment. In this article, we talk with Dr. Sasha Luccioni, Climate Lead and AI Researcher at Hugging Face, about the link between AI and the environment and explore opportunities for the AI community to make a positive impact.

Since the launch of ChatGPT, we’ve heard a lot more talk about Responsible AI. Is Responsible AI moving up the agenda?


I have always thought it strange that we would separate Responsible AI from the rest of the field. I would argue that all AI should be Responsible AI, particularly now that AI has officially left the research lab. In today’s market, any AI theory has a pretty high probability of making it into a product or tool that will be deployed in real life. So everyone involved in AI needs to start thinking about Responsible AI as something that matters to the world at large.

Has generative AI made the focus on Responsible AI more important?

It’s more the approach than the models themselves. In the past, big research labs would release a model, and then people would stress test it – compare it to other models, find any biases and identify where it might fail on edge cases. Now, we are seeing new models released directly into user interfaces, and because they are commercial and proprietary, access to the models is often restricted, meaning they can’t be tested by truly independent researchers. For me, many of the potential risks actually stem from this ‘move fast and break things’ approach to generative AI that’s become prevalent.

What’s the link between AI and the environment?

There are two sides to the story. On the one hand, AI uses a massive amount of energy, not only in training but also in production. We did a bit of research where we looked at a large language model that was trained on low-carbon energy with an optimized training setup. It still emitted around 50 metric tons of CO2 in training. Every deployed copy of that model then emitted around 20 kilograms of CO2 a day. That might not seem like much on its own. But multiply that by all the models now in operation – probably hundreds of thousands – and we are talking about a massive amount of emissions being created by AI.
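To put those figures in perspective, here is a back-of-the-envelope calculation in Python. It is a rough sketch using only the numbers above; the count of deployed models is an assumption, since the interview says only “probably hundreds of thousands”:

    # Rough estimate using the figures from the interview.
    # DEPLOYED_COPIES is an assumption ("hundreds of thousands").
    TRAINING_EMISSIONS_T = 50      # metric tons of CO2 to train one model
    INFERENCE_KG_PER_DAY = 20      # kg of CO2 per deployed copy per day
    DEPLOYED_COPIES = 100_000      # assumed number of models in operation

    annual_inference_t = DEPLOYED_COPIES * INFERENCE_KG_PER_DAY * 365 / 1000
    print(f"Inference alone: ~{annual_inference_t:,.0f} metric tons of CO2 per year")
    # -> Inference alone: ~730,000 metric tons of CO2 per year

Even under these loose assumptions, ongoing inference quickly dwarfs the one-time training cost.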

On the other hand, we’re also seeing lots of great progress being made using AI. People are using AI to track deforestation, land usage and methane leaks, for example. Recently, NASA shared their own foundation model based on Earth observation data. Each image contains a ton of data, and that is enabling researchers to develop climate-positive applications of AI.

What can researchers and developers be doing to help reduce the impact of their models?

For developers, it’s really about efficiency: creating efficient models, and smaller models where possible. I think we could also be thinking more about how we can make models easier to use, for example by creating interfaces so that people don’t need to download them. Things like that are actually super impactful.
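As a concrete illustration of that last point, a hosted endpoint lets users query a model without downloading any weights. This is a minimal sketch using the huggingface_hub client; the model name is just an example of a hosted checkpoint, not a recommendation:

    # Minimal sketch: query a hosted model instead of downloading it.
    # The model name is illustrative; any hosted text-generation model works.
    from huggingface_hub import InferenceClient

    client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta")
    reply = client.text_generation(
        "In one sentence, why does model efficiency matter?",
        max_new_tokens=60,
    )
    print(reply)

One copy of the weights running behind an interface serves every user, instead of each user pulling and running their own copy on their own hardware.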

And on the proactive side? How can the AI community help drive environmental research with their models?

The big thing is to connect with subject matter experts. Academic research is typically quite siloed. Data is often hard to find. And, if you are someone working in ecology, for example, it’s not always easy to connect with an AI or ML professional who can help you. At the same time, if you are an ML developer and you’ve studied computer science your whole life, you may struggle to understand what these ecologists want from you and how you can help them. It’s important to take the time to understand the problem, talk to people who have been studying that problem and then work with them throughout the process to understand how impact is created.

What can companies and executives do to ensure their AI is as environmentally friendly as possible?

Frankly, I don’t think many companies are aware of the challenge. When they are plotting out their Scope 1, 2 and 3 emissions, they really need to include AI as part of that calculation. And in order to do that, they need reliable data on its energy use and carbon footprint.
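One way to start gathering that data is to instrument AI workloads directly. As a sketch, assuming the open-source codecarbon package, which estimates the energy use and emissions of a block of code (the training function here is a stand-in for a real job):

    # Sketch: estimate the emissions of a workload with codecarbon.
    from codecarbon import EmissionsTracker

    def train_model():
        # Stand-in for a real training or inference job.
        return sum(i * i for i in range(10_000_000))

    tracker = EmissionsTracker(project_name="model-training")
    tracker.start()
    train_model()
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
    print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")

Measurements like these are the kind of data that could feed into a Scope 2 or Scope 3 calculation.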

The problem is that I don’t think many companies have that data. There is no Energy Star rating for AI. We don’t have any ISO certifications. And, while governments are likely going to get around to it at some point, I think that – as users of AI models – companies should be pushing for more transparency about the energy efficiency of models and infrastructure.

Should environmental considerations play a role when evaluating AI?

I think organizations are really starting to think more clearly about Responsible AI in terms of things like bias and fairness. But sustainability also needs to be a core part of that conversation. Your model may be good in terms of inclusivity and access. But if it also has a huge carbon footprint, you need to ask yourself if it is really benefiting society.  

Does that mean you are worried about the existential risk of AI?

Not the existential risks that we keep hearing discussed on public forums. I don’t know whether we will ever reach the singularity, or whether that would wipe out the human race.
 
What I do know is that AI is at the core of a range of harms that are already happening. Environmental damage is one of those harms. But if we start to work to address those harms today, I believe we will be well-positioned to minimize the risk of existential harm in the future.
 
People can only focus on so many things at once. I would argue we need to put that focus towards solving our current issues.


About Dr. Sasha Luccioni

Sasha Luccioni is a leading researcher in ethical artificial intelligence. Over the last decade, her work has paved the way for a better understanding of the societal and environmental impacts of AI technologies.
 
Sasha is a Research Scientist and Climate Lead at Hugging Face, a Board Member of Women in Machine Learning (WiML), and a founding member of Climate Change AI (CCAI), a global initiative that aims to catalyze impactful work and build a community at the intersection of climate change and machine learning. Since 2019, she has also been a postdoctoral scholar at the Mila AI Institute, working on AI for Humanity projects that apply machine learning to problems in climate change, health and education.