
AI and Climate Justice: Balancing Risks and Opportunities

When we think about artificial intelligence (AI) and climate justice, we can picture two stars locked in an orbital waltz. Each has its own gravitational pull, sometimes in harmony and sometimes in tension. In moments of alignment, their fields reinforce one another, generating new energy and new perspectives.

Not all orbits are stable, however, and AI’s gravitational field is growing at an accelerating pace. The ultimate danger we face is that AI, like a black hole, swallows everything around it.

AI has the potential to reveal patterns in climate data, improve models and sharpen our ability to anticipate uncertain futures. Yet the carbon footprint of training large AI models, together with its opacity and its risks of exclusion, raises legitimate concerns, especially in communities at the margins of the climate and technology discourse. The risk is that AI becomes a superstar rather than a co-star, destroying the balance of the whole system. The question, then, is not whether the two fields can coexist, but whether their relationship can be made mutually reinforcing.

Image: Pixabay

In an era of rapid information consumption, one of the biggest challenges may simply be opening the door to dialogue. In other words, the task is neither to idealize AI nor to dismiss it, but to engage concretely with how its development should be governed in the realms of climate and climate justice. Indifference can push quiet stakeholders off the field, and climate communities could miss the chance to participate fully in one of the most transformative developments of our time.

Nor should understanding the meeting of climate justice and AI be a purely academic exercise. It demands dialogue that flows across technological, ethical, community, policy, political, environmental and social spaces, work rooted in the daily needs of the people most vulnerable to climate hazards and technological exclusion.

To begin to understand the relationship, and the tensions, between AI and climate justice, we can borrow a popular tool from the strategist’s kit: the SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. Examining AI and climate justice through this lens lets us start to sketch what the relationship looks like.

AI’s ability to rapidly process vast, disparate datasets, from weather patterns to social media sentiment, is transforming climate work across fields. Emergency responders can use AI to optimize response times after a hurricane or wildfire. In Southeast Asia and East Africa, AI-powered early-warning systems have saved lives. Satellite imagery that once took weeks to process can now be analyzed within hours. As a result, deforestation in the Amazon, coastal erosion in Bangladesh and methane flares in Texas are being tracked with unprecedented accuracy.

The benefits of AI, however, are not evenly distributed, and they are offset by clear weaknesses. One is data imbalance: the communities most vulnerable to climate change (Indigenous peoples and rural farming communities, for example) are often poorly represented in the data that AI systems learn from. These omissions can produce considerable bias. A flood-simulation program that knows nothing about the terrain of informal settlements may underestimate the harm a flood will cause; likewise, a wildfire model trained without knowledge of traditional land-management practices may overlook basic fire-prevention wisdom.

Training large AI models requires computing power, electricity, broadband connectivity and expertise, conditions that remain out of reach for many local communities in the Global South and in parts of the Global North. For them, AI is a luxury.

Furthermore, AI’s unexplainable behavior (its “black box” nature) raises questions of accountability. If AI is used to decide where adaptation financing should be invested, or whether a community should be relocated, stakeholders need to be able to challenge those decisions. But if the models cannot explain how and why they reach their conclusions, how can they be trusted, or contested?

Then there is the environmental cost. Ironically, training AI models demands enormous amounts of energy, often generated from fossil fuels. Research has shown that training a single advanced language model can emit as much carbon as five cars do over their lifetimes. If AI is to be part of the climate solution, it must first address its own contribution to the problem.
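That comparison likely traces to a widely cited 2019 study by Strubell, Ganesh and McCallum (an attribution assumed here, since the essay does not name its source), which estimated that training one large language model with neural architecture search emits roughly 626,000 pounds of CO2-equivalent, while an average American car, manufacturing included, emits about 126,000 pounds over its lifetime. A rough back-of-envelope check of the arithmetic:

\[
\frac{626{,}000\ \text{lb CO}_2\text{e (one model training run)}}{126{,}000\ \text{lb CO}_2\text{e (one car, lifetime)}} \approx 5\ \text{cars}
\]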

However, there are also many opportunities to use AI for climate justice, opportunities that may help mitigate some of these weaknesses.

One of the most promising opportunities is community-driven AI. Around the world, local communities are leading the development of their own tools, adapting AI to Indigenous knowledge, cultural values and local priorities. In the Pacific Islands, for example, community-run drone programs are closely tracking erosion and guiding adaptation. In Canada, Indigenous data sovereignty programs are ensuring that environmental models serve communities rather than surveil them.

Another opportunity is policy integration. If AI is embedded in transparent and accountable processes, it can inform the full policy spectrum, from zoning to carbon taxes. Some cities are already using AI to plan cooling systems in heat-stressed neighborhoods, targeting investments where they will be most effective. Others are folding climate projections into building codes or farm subsidies.

In addition, researchers and activists from different countries are coming together to build tools and datasets that counter digital colonialism by fostering a two-way exchange of knowledge. Predictive applications can forecast disease outbreaks after floods, track hotspots of food insecurity and even anticipate climate-induced migration flows. Paired with social services and policy responses, such indicators can shift climate action from reactive to preventive.

Growing investment in justice-led innovation marks another turning point. Venture capital firms and philanthropies are backing projects that put equity at their core, from solar-powered sensors in refugee camps to climate-finance algorithms that weigh social vulnerability. With the right guardrails, these investments can catalyze a new wave of inclusive technologies.

But we cannot forget the many threats.

Digital colonialism looms everywhere. AI tools created in the Global North are often exported wholesale to the Global South with little regard for local context. This practice overrides local expertise and imposes untested assumptions. Data flows northward, models are selected remotely and communities have little say, a pattern with obvious flaws that needs to be replaced by co-created models.

Surveillance and displacement are further concerns. Technologies such as drone monitoring and geospatial tracking can be turned against the very people they are meant to protect, used to enforce evictions, contest traditional land uses or criminalize communities. When surveillance expands without mechanisms to ensure fairness in algorithmic decision-making, accountability shrinks. And there is the risk of technological solutionism, reducing complex social problems to algorithms and apps, implicitly sidestepping the need for human action and framing climate change as a pure data problem, stripped of all its other dimensions.

No algorithm, however sophisticated, can replace human action and interaction. Simply digitizing knowledge for extraction, without prioritizing values, does not amount to real progress. In this regard, the next decade will be crucial. As climate hazards intensify and AI systems grow ever more powerful, the echoes of our choices will reach ever further into the future.

We can choose how AI and climate justice interact. To choose wisely, we must not only listen to the facts; we must also recognize that climate change is not just a data problem but a humanitarian and social crisis. To use AI well, we need to regulate the applications we know are harmful to society and to the most vulnerable, and we need to begin co-creating new tools that include the voices of those who are too often marginalized. Artificial intelligence is helping us extend rational intelligence beyond limits we could not have imagined just a few years ago. Let’s not forget that we still need emotional intelligence.


Marco Tedesco is a professor of ocean and polar geophysics at the Lamont-Doherty Earth Observatory (LDEO), a research unit of the Columbia Climate School. At the recent MR2025 conference, Tedesco delivered a talk titled “Opinions on AI and Climate Justice.”

The views and opinions expressed here are those of the author and do not necessarily reflect the official position of the Columbia Climate School, the Earth Institute or Columbia University.
