
Artificial intelligence (AI) is rapidly reshaping societies, economies, and ecosystems. It is becoming a major force capable of transforming healthcare, accelerating scientific discovery, and enabling new environmental technologies. Yet it also brings significant social, environmental, and safety risks. These risks are interconnected, and they are intensifying as AI systems grow more powerful and become more widely deployed.
In an article recently published in The Lancet Planetary Health, we argue that governing AI for the benefit of people and the planet requires a new global approach. Current debates are often fragmented: some focus on social harms such as bias or misinformation, others emphasise environmental impacts, and still others warn of existential risks. But these domains cannot be treated in isolation. To govern AI effectively, we need a coherent framework that recognises their interactions and acts across all three.
Three domains of AI risk: social, planetary, and safety
AI carries significant social risks, many of which are already visible. Algorithms can reinforce discrimination in hiring and policing. Generative AI accelerates the spread of misinformation, eroding trust and increasing polarisation. Automated systems threaten to displace workers, while the power of AI concentrates wealth and influence among a handful of large technology companies. Surveillance capabilities are growing rapidly, often without adequate protections for civil liberties.
AI also has substantial planetary risks. Data centres consume large amounts of energy, water, and critical minerals. The extraction of materials required for chips and other hardware often occurs under environmentally damaging conditions. While AI can help to develop more efficient technologies, the efficiency gains may be outweighed by rebound effects, driving greater overall resource use. At scale, these impacts could accelerate climate change, biodiversity loss, and pollution.
Finally, safety risks arise from the behaviour of AI systems themselves. Errors in high-risk sectors such as transport, chemicals, or health could cause serious harm. The development of AI-enabled autonomous weapons raises profound ethical and geopolitical dangers. The most extreme risks involve advanced agentic systems capable of acting autonomously in ways that escape human control.
Taken together, these risks call for an integrated governance strategy — one that recognises how social instability, environmental pressure, and technological escalation reinforce one another.
Feedback loops: why risks cannot be governed in isolation
A core message of our article is that AI risks are mutually reinforcing.
Competitive pressures drive rapid deployment, as companies and governments race to secure technological advantage. This “AI race” dynamic encourages companies to cut corners on safety, increasing the chance of accidents, misuse, and loss of oversight. At the same time, AI-powered social media can deepen polarisation. Lower social trust, in turn, weakens the ability of societies to regulate both AI and other global commons such as climate or biodiversity.
Meanwhile, narratives that focus only on existential risks can cause us to overlook immediate issues of power concentration, environmental harms, and social inequality. Promises of future breakthroughs, such as curing disease and solving climate change, can also make it harder to address the very real impacts that AI is having today.
Treating AI as a global commons
We propose viewing AI as a global commons: a shared resource whose benefits depend on collective stewardship and whose risks spill across borders. These feedback loops highlight why fragmented governance is insufficient. We therefore call for coordinated interventions across all three critical domains:
- Regulate data use to protect social wellbeing. Data shape how AI systems behave. Strengthening data protection, enforcing transparency, and auditing algorithmic outcomes can reduce discrimination and curb surveillance. Policies such as data taxation or limits on wealth concentration may also be needed to address the broader social impacts of AI-driven inequality and labour market disruption.
- Cap energy use to protect planetary stability. AI development must stay within environmental limits. This means capping energy use, mandating renewable power for data centres, improving circularity in hardware production, and prohibiting applications that drive excessive consumption or support environmentally harmful industries. At the same time, AI should be directed toward climate mitigation, biodiversity monitoring, and other planetary public goods.
- Regulate compute to increase safety. Because an AI system’s capabilities scale with compute, regulating access to high-end chips and supercomputing infrastructure is one of the most direct ways to limit dangerous frontier development. Data centres offer a natural physical “choke point” for oversight. Safety evaluations, transparency requirements, and limits on high-risk agentic AI are essential for maintaining meaningful human control.
AI development sits at a pivotal point in human history. It could either aid progress towards healthier and more sustainable societies, or it could lead to new and deeper crises. Governing AI for planetary health means ensuring that this powerful technology serves people, protects ecosystems, and remains under human control. The window to act is open now, but it is closing quickly.
The full article is available in The Lancet Planetary Health and may be cited as:
Creutzig, F., Denton, F., Hine, E., Joshi, S., Ke, G., Kyrychenko, Y., Messner, D., O’Neill, D.W., 2026. Governing artificial intelligence for planetary health. The Lancet Planetary Health 101408. https://doi.org/10.1016/j.lanplh.2025.101408.