By Mohsen ANVAARI, Mads DAHL GJEFSEN and Lucia LISTE
Does AI offer openings for sufficiency? This was the question explored in an experimental, free-format brainstorming workshop entitled "AI and Sufficiency: Brainstorming the Role of AI in Postgrowth Futures", held at ISEE-Degrowth 2025 in Oslo last June. The undersigned organizers are engaged in research and activism on topics such as democratic digitalization and the manifestations of the growth imperative through digital infrastructure. As such, we are keenly aware of AI's myriad socio-ecological injustices, harms, and uncertainties, resulting not only from the stupefying energy needs of AI infrastructure, but also from the unparalleled concentration of money and power within the tech sector. Against this backdrop, we nevertheless wanted to ask: are hopeful framings for AI still possible?
Twelve conference participants from diverse backgrounds joined us in small-group and plenary discussions. The organizers began by briefly presenting recent initiatives that reflect core tenets of sufficiency and post-growth in relation to AI. One such example was Taiwan's use of AI in deliberative democracy, as a tool for decentralized decision-making and public engagement. Another was the AI Now Institute's recently proposed Zero Trust policy agenda, which offers models for public regulation aimed at counteracting the monopolizing tendencies within the tech sector, especially pronounced in the realm of AI. This introduction was followed by moderated breakout sessions and a final plenary discussion, centered on the question: What might AI look like if put to the service of sufficiency, and how might we get there from today's paradigm?
Across both the small groups and the plenary, the overall atmosphere was one of skepticism about the ability of current AI trajectories to support sufficiency. Certain desirable traits of the technology were acknowledged, such as its capacity to optimize electricity use, support individuals with disabilities, improve language access, and enhance public services. In relation to agricultural technology, where one participant was actively engaged, we discussed the potential for decentralizing useful tools through a peer-to-peer "design global, manufacture local" production model, offering an alternative to capitalist modes of production. In this vision, AI and data could be put to decentralized use.
Nevertheless, the promise of these optimistic framings largely depended on narrowly isolating certain use contexts from the broader phenomenon of AI as it currently manifests: a societal tendency toward digital overload in all areas of everyday life, which on balance conflicts with sufficiency as an ethical paradigm.
This paradigm was succinctly summarized by Thomas Princen during his keynote lecture at the MidWay conference, held the same week as ISEE-Degrowth. He described sufficiency as a sense of enoughness and too-muchness (on the individual level) and as doing well, now and into the distant future, by organizing to do less than the most possible (on the collective level). Princen's description brings a core tension into focus: AI's inherent drive for efficiency gains, its significant resource demands (high energy, land, and water use, coupled with concentrated ownership), and its continuous generation of rebound effects in the absence of broader wealth redistribution and systemic planetary change. This tension cannot be satisfactorily resolved without confronting head-on the questions: efficiency gains for whom, and in what?
Accordingly, many participants viewed AI as inherently contradictory to post-growth imperatives, and as antithetical to sufficiency insofar as the latter prescribes forms of digital minimalism or stricter boundaries around technology use, such as the AI abstention and "technological progress as minimization" approaches proposed by Reia and colleagues in their recent report Reimagining AI For Environmental Justice and Creativity.
To the extent that the session yielded hopeful framings, these were limited to broad principles and criteria that may be fundamentally incompatible with what AI phenomenologically "is": a socio-cultural force mirroring the extractivist, exploitative, growth-oriented capitalist system. For instance, if AI were put to the service of sufficiency, it would need to be not-for-profit, democratically governed, transparent, and responsive to changing needs and democratic processes. It would not be ecologically destructive and would be scaled in relation to planetary boundaries. It would be a tool to consult, but decisions would remain in human hands. In this vision, AI would be "anchored in need, not greed". Moreover, it would be used according to certain rules, such as "Things we derive meaning from, it would not be allowed to do".
The experiment leaves us with a bigger lesson for sufficiency practice: the pitfalls of framing discussions of sufficiency around any particular technology or practice. Although we as organizers strove to avoid suggesting that AI could itself "become sufficient", opting instead to explore "co-existences between AI and post-growth/sufficiency futures", the discussion nevertheless underscored the impossibility of detaching any single technological or social phenomenon from the global-scale exploitative and extractivist systems in which it is embedded. The search for the "right questions" to prompt such discussions continues.