What is useful AI, and how can it be distinguished from useless AI?
SOCIO-ECOLOGY OF DIGITAL GOVERNANCE
The hidden impact of AI, between ecological cost and dependence
Artificial intelligence is taking hold in a world facing profound ecological and social crises. Yet we often forget that digital technology is not virtual. Our connected lives rely on energy-intensive physical servers, built from rare materials extracted from the ground and kept running by the invisible work of thousands of people who train, moderate and adjust algorithms. In 2024, data centres were already consuming more than 350 TWh of electricity per year – as much as a large industrialised country. Each request made to a generative AI uses about ten times more energy than a typical web search, and training large models requires huge amounts of water to cool the machines. From cobalt mining to the extraction of our attention, the same extractivist logic persists: while we enjoy ‘magical’ applications, the energy and human costs are hidden behind fluid interfaces.
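To give a sense of scale, a back-of-envelope calculation helps. The sketch below, in Python, assumes round figures – roughly 0.3 Wh for a web search, ten times that for a generative query, and a hypothetical billion such queries a day. None of these are measurements; they simply make the orders of magnitude cited above tangible.

```python
# Back-of-envelope estimate: energy of generative-AI queries vs. web searches.
# All figures below are round assumptions for illustration, not measurements.

WEB_SEARCH_WH = 0.3        # assumed energy per classic web search, in Wh
GENAI_FACTOR = 10          # a generative query ~10x a search (order cited above)
QUERIES_PER_DAY = 1e9      # hypothetical global daily volume of such queries

genai_wh = WEB_SEARCH_WH * GENAI_FACTOR               # Wh per generative query
yearly_twh = genai_wh * QUERIES_PER_DAY * 365 / 1e12  # Wh -> TWh per year

print(f"Energy per generative query: {genai_wh:.1f} Wh")
print(f"Hypothetical yearly total:   {yearly_twh:.2f} TWh")
# ~1.1 TWh/year under these assumptions -- small next to the >350 TWh/year
# consumed by data centres overall in 2024, but for one usage pattern alone.
```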
This realisation is recent. The initial utopia of the Internet presented technology as emancipatory and dematerialised; but as platforms and Big Tech took power, a double dependency took hold: energy dependency, on a global infrastructure that is still largely fossil-based; and cognitive dependency, on tools that filter and capture our attention. Technical progress is therefore only liberating if its architecture and governance are democratised. Without this, innovation generates new forms of servitude – of minds and resources. Hence the growing call for socio-ecological governance of digital technology. This involves integrating environmental, social and political issues into the design of technologies from the outset. Without energy justice — that is, the equitable and sustainable distribution of access to energy and resources — the rise of AI could amplify inequalities and ecological crises. The central question becomes: what and whom does AI really serve? We need to distinguish between useful AI, which serves the common good, and useless or even harmful AI.
Our perceptions of AI oscillate between fascination and concern. On the one hand, techno-solutionist discourse presents it as a panacea capable of solving everything: healing, predicting, organising, repairing. On the other, science fiction fuels fears of things going wrong: uncontrollable machines, total surveillance, loss of human control. These extreme narratives have a perverse effect: they divert attention from the essential question — what are our goals? If we consider AI to be a neutral inevitability, we legitimise its deployment without debate. If, on the contrary, we demonise it, we deprive ourselves of potentially emancipatory tools. Another, more insidious narrative has taken hold: that of algorithmic comfort. Platforms thrive by flattering our desire for ease: personalised content, continuous entertainment, instant services. This promise of comfort instils a form of political apathy: why question a system that simplifies our lives? Yet behind this comfort lies the capture of our attention, the exploitation of our data and the precarious work of a multitude of invisible micro-workers. Resisting this capture does not mean rejecting technology, but rather reinvesting in the collective imagination. As Bernard Stiegler reminded us, all technology shapes the way we think and desire; it therefore requires a digital pharmacology capable of cultivating its beneficial uses while limiting its toxic effects. We must therefore imagine technologies that are open to doubt and complexity, capable of integrating error, slowness and plurality. Useful AI should not maximise control, but rather promote human creativity and community autonomy. Imagination becomes a political tool here: it determines the trajectory that our relationship with machines will take.
Defining useful AI: key differentiating criteria
To differentiate useful AI from useless AI, we can draw on the ethics of the commons, as formulated by Elinor Ostrom, and on the low-tech approach, which values simplicity, participation and resilience. Truly useful AI meets a real and measurable need – education, health, climate adaptation, solidarity – while limiting its ecological footprint. It must be evaluated not by the profit it generates but by its collective use value. Conversely, energy-intensive AI designed to optimise advertising or stimulate overconsumption is a social and environmental waste. Useful AI, by contrast, is part of a logic of digital sobriety and seeks sufficiency rather than excess. Lighter models, hosted locally, are often sufficient to fulfil a function without deploying colossal infrastructures. Algorithmic frugality is the digital equivalent of repair and reuse: it allows performance to be thought of in terms of available resources, not raw power. But usefulness is also a matter of governance. AI is only useful if it is transparent, controllable and collectively governed. This means involving users, workers and communities in defining its objectives and evaluating its effects. The issue ties in with the concept of digital commons: a system managed not by a single actor, but by a community according to explicit and revisable rules. Finally, useful AI is part of a symbiotic relationship: it enhances human capabilities and helps preserve ecosystems. It does not automate in order to eliminate humans, but to enable them to better understand and act. In agriculture, for example, local AI that helps plan crops according to the climate can be invaluable; opaque AI that encourages technological dependence would, on the contrary, be useless or harmful.
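To make this frugality concrete, here is a minimal sketch in Python, assuming the scikit-learn library; the task (sorting citizens' messages for a town hall) and the toy data are invented for the illustration. The point is general: a model weighing a few kilobytes, trained in seconds on a laptop, often fulfils a function for which a generative model would be disproportionate.

```python
# Algorithmic frugality in practice: a tiny TF-IDF + logistic-regression
# classifier, trainable in seconds on a laptop CPU, where a large generative
# model would be overkill. The toy data is invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "how do I renew my library card",
    "the heating in the community hall is broken",
    "sign me up for the repair workshop",
    "streetlight out on rue des Lilas",
]
labels = ["admin", "maintenance", "admin", "maintenance"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["the projector in room 2 no longer works"]))
# -> ['maintenance'] on this toy data; the whole model weighs a few kilobytes.
```

The design choice is the argument itself: matching the tool to the need, rather than the need to the most powerful available tool.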
These criteria redefine efficiency: no longer producing more and faster, but producing meaning and care within planetary boundaries. Many initiatives are already sketching out this sustainable digital future. The BLOOM project, bringing together more than a thousand volunteer researchers, has proven that it is possible to create open, multilingual and transparent language models. Similarly, collectives such as Open Assistant are developing AI whose code, data and biases are publicly accessible and open to discussion. Open source thus offers transparency, sharing and local ownership. It makes innovation cumulative rather than competitive, and acts as an antidote to the concentration of knowledge by large platforms.
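This openness is not abstract: because BLOOM's weights are publicly released, anyone can download, inspect and run one of its smaller checkpoints locally. A minimal sketch, assuming the Hugging Face transformers library; bloom-560m is one of the smaller models of the family and runs on an ordinary CPU.

```python
# Running a small, openly licensed checkpoint of the BLOOM family locally.
# Assumes the Hugging Face `transformers` library; the weights are downloaded
# once (~1 GB) and the model then works offline on an ordinary CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")
out = generator("Digital commons are", max_new_tokens=30)
print(out[0]["generated_text"])
```

Running, on one's own machine, weights that a thousand researchers built and documented in the open is, in miniature, what local ownership means.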
Research into embedded AI, or TinyML, shows that it is possible to design tools with very low energy consumption. These solutions make it possible, for example, to monitor soil moisture, air quality or public lighting without heavy infrastructure. Models such as DeepSeek, designed to run on modest hardware, pave the way for innovation that does not depend on privileged access to computing power. This augmented low-tech approach embodies the convergence between sober engineering and environmental ethics. At the same time, cooperative governance is becoming an essential requirement. The Mastodon social network and cooperative platforms such as Mobicoop illustrate the possibility of decentralised and democratic digital technology. Applied to AI, this logic would result in platforms where citizens collectively decide how their data is used. Initiatives such as La Coop des Communs and the Frugal Digital collective in Europe are exploring these models: polycentric governance, citizen involvement, open documentation, and popular education in digital culture. These are all laboratories for what could be called civic intelligence.
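Returning to the soil-moisture example above, a sensor node of this kind fits in a few lines of MicroPython. The sketch assumes an ESP32 board with a capacitive moisture probe on pin 34; the pin number, calibration values and sleep interval are illustrative, and it covers only the sensing and power side – an actual TinyML model would run on the same class of hardware.

```python
# MicroPython sketch of a low-power soil-moisture node.
# Assumed hardware: an ESP32 board with a capacitive moisture probe on pin 34;
# the pin number, calibration values and sleep interval are illustrative.
from machine import ADC, Pin, deepsleep

adc = ADC(Pin(34))
adc.atten(ADC.ATTN_11DB)       # accept the full 0-3.3 V range on the ESP32

DRY, WET = 3200, 1400          # raw ADC readings, to be calibrated per probe

def moisture_percent():
    raw = adc.read()           # 0-4095 on the ESP32's 12-bit converter
    pct = (DRY - raw) * 100 // (DRY - WET)
    return max(0, min(100, pct))

print("soil moisture:", moisture_percent(), "%")

# Sleep for 30 minutes between readings: the board spends almost all of its
# life drawing microamps, which is precisely the sobriety argued for above.
deepsleep(30 * 60 * 1000)
```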
Shifting towards useful AI: a political and cultural choice
Building useful AI is not a utopian dream, but a political choice. It involves shifting from a model of algorithmic growth to a culture of care and moderation. This shift is based on three principles: radical energy efficiency to reduce the material impact of digital technology; polycentric governance to distribute decision-making power; and open co-design to make algorithms auditable and modifiable. As Hélène Tordjman points out, technology is not destiny: it is a social project. In the same way that Ostrom showed that a commons is maintained through deliberation and shared responsibility, AI can only become sustainable if it is backed by living democratic institutions.
Political frameworks are beginning to emerge: the European AI Regulation (AI Act) imposes requirements of transparency, safety and proportionality. But its implementation must be accompanied by a genuine public debate: who defines high-risk uses? What safeguards are needed for automated surveillance? Finally, algorithmic justice must complement ecological justice: auditing biases, ensuring accountability, and protecting click workers and the communities affected by resource extraction.
Useful AI is not the most powerful or the most spectacular: it is the kind that reinforces human dignity and the sustainability of life. It does not seek to replace, but to connect. It does not promise omnipotence, but the collective ability to inhabit the world with moderation. Between the illusion of infinite progress and the temptation of total rejection, there is a demanding but fruitful path: that of digital technology governed as a common good, sober in its means and ambitious in its ends. Rewriting the code for a habitable world begins here: in the way we choose to design, share and limit our technologies. This is why the socio-ecology of digital governance is not an ethical add-on, but a project of cultural renewal. At a time when AI is everywhere, the task is not just to innovate, but to learn to think together about technology, justice and life. Perhaps this, finally, is the most useful form of intelligence.