Metacrisis and AI governance - talk

I recently gave a talk at the new Integral Altruism speaker series on what AI Governance research can learn from metacrisis theory.

Very interested to have some discussion of these issues. The slides are available here.

The talk covers:

  • Brief definitions of metacrisis and AI governance

  • Where metacrisis sits within AI governance research

  • 5 key metacrisis ideas and what they contribute to AI governance research:

  1. Multiple catastrophic threats

  2. Structural roots: Moloch

  3. AI as prominent threat

  4. Civilization as misaligned system

  5. Need for a ‘third attractor’

(I’m in the process of writing this presentation up into a post. The focus here is especially on Daniel Schmachtenberger’s ideas on metacrisis since in my view these connect up most closely with AI governance research. For a wider survey of metacrisis perspectives there’s my recent overview piece, or the Life Itself white paper.)

(Update: my written article is now available here: https://thewiderangle.substack.com/)

First, thanks for sharing this. I’ll likely post links for my students this fall (who will be exploring a lot of practical AI). Whether the metacrisis framing gets uptake from that cohort remains to be seen; I’ll provide feedback on that later in the fall in any case.

My take on Schmachtenberger needs some nuance. I wrote this paper partly out of general dissatisfaction with how the notion of “civilization” gets used in various circles, metacrisis theorists included. (See specifically section D. - Current Civilization - One or Many?)

Against that background, I’d like to focus initial attention on point 4, “Civilization as a misaligned system”. Is “civilization” a well-defined system with clear boundaries, components, processes, feedback loops, an essential core, adaptive potentials, etc.? Metacrisis theory could stand to pay more careful attention to such questions, because notions of misalignment suggest a prior metric of proper alignment. Do we have historical specimens of properly aligned civilizations? How exactly did such proper alignment work? How are we deviating now? And by “we”, do we mean something like Wilkinson’s Central Civilization (essentially, the entirety of current humanity), or are we focusing on the particular regional civilization of the West, excluding China, India, Islam, and perhaps others not fully culturally integrated with the West?

In section G of my linked paper (Action Models) I explicitly lean on Pogany and Karatani in particular to characterize the current “system”, and on the work of Donella Meadows to suggest how to get a grip on changing that “system”. To sum up very briefly here: I would generally suggest that the best leverage will be found in envisioning a future civilization along the lines of Pogany’s proposed GS3 and Karatani’s unnamed mode of exchange D. Such a view would be “discursive” toward a future iteration of civilization, an exercise of imagination for the orientation of pragmatic action. With respect to pragmatic action vis-a-vis currently embodied “dispositional” macro-systems (Karatani’s Borromean Knot of Capital-Nation-State), I’m leaning toward transparadigmatic cultural creativity as a suitable action target for 2R participants. Lower-level Meadows leverage points (like changing tax rates, adjusting property laws, or political organizing) will be more dynamic if informed by such innovative cultural vision.

Briefly, on other key points from the talk:

  1. Multiple catastrophic threats - generally agreed, regarding existential risk. I find Pogany’s account the most cogent causal analysis of how we got here.

  2. Moloch. Generally similar to Karatani’s Borromean Knot. However, Karatani shows (in generally neo-Marxian terms) that these systems emerged from material history for objective reasons. The behavioral drivers were pretty simple (hunger, fear, survival, security, etc.). I’d suggest that we respect such primeval human behavioral motivations and focus on how untangling the Borromean Knot can be most advantageous for the ongoing support of these simple human needs.

  3. AI as a prominent threat. Just the latest in a long line of disruptive technologies (agriculture, bronze, iron, writing, maths, gunpowder, etc.). My existential bet is that intentional, pro-social use of AI can support a positive civilizational transformation. But the risks are real enough.

  4. Third Attractor. The First Attractor (Uncoordinated) is going to happen whether any of us here like it or not. It’s essentially Pogany’s “chaotic transition” from the US-led post-WWII order to whatever comes next. Accelerating Great Power conflict is part of that package. On the issue of anarchic geopolitics with uncoordinated power centers, Karatani’s analysis of Kant’s Perpetual Peace through the lens of Thomas Hobbes provides a ray of hope. The general notion is that competitive, power-maximizing nation-states (or civilization-states in cases like China, India, Russia) will see the wisdom of accepting a trans-national global Leviathan to avoid planetary destruction. The Second Attractor (Authoritarian) is generally related to the first. International competition drives national-level centralization. Such power competition also ultimately drives runaway extraction. The Nation-State needs Capital (and vice versa), hence Karatani’s Knot. That Knot will only be untied through considerable cultural innovation and recontextualization, on the most global of scales. The Third Attractor has really been articulated by every spiritual tradition going back to Zoroaster. (Or even further - Karatani adverts to pre-civilizational sharing culture as a human universal norm.) Secular versions are now also available. I see the primary mission of cultural creatives (that includes you, dear reader!) as putting flesh on the bones of what that transparadigmatic Third Attractor will look like in practical terms. AI may or may not be useful in helping humans navigate toward a Third Attractor. My leanings are more toward humanistic practices in the first instance, with AI perhaps in specific supportive roles.


Wow, thanks so much for sharing your piece - it looks fascinating. Have I missed a post where you’ve already shared this? It definitely deserves a thread of its own. It would also be great to share it on the research group Substack when ready.

I did read the section on Civilization, which makes a lot of sense. I was also hesitant about using the term “civilization” for the reasons you give, so in the written version of the talk, which I’ve posted on LessWrong, I changed it to “global system”.

https://www.lesswrong.com/posts/TxPeQ85yxpcdfq2wg/metacrisis-as-a-framework-for-ai-governance?utm_campaign=post_share&utm_source=link

Since this was written for a ‘rationalist’ audience, I haven’t mentioned Marx or capitalism, but I do think something like Wallerstein’s Marxist analysis is crucial here, in that the key dynamics of the global system include economic ones. It looks like I really need to read Karatani, though.

Generally agree with your points, though one thing that became even clearer to me in writing up the talk is the way Schmachtenberger’s ideas point to dynamical systems theory.

For me that is really the main thing that distinguishes the ‘third attractor’ concept from both classical political philosophy and rationalist approaches to AI governance. Attractors are states (or sets of states) that a dynamical system tends to settle into, and one thing I’m currently interested in fleshing out is how modelling the global system with the tools of complexity science can firm up our understanding of leverage points.

Some initial thoughts are that this is precisely where AI’s supportive role becomes critical: as a way of harnessing existing system dynamics, including technological acceleration, while steering them away from runaway feedback loops.
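To make the attractor language concrete, here is a minimal toy sketch in Python. Everything in it is illustrative: the potential, the well at x = 0, and the ‘coordination’ parameter c are inventions for this post, not a model of the actual global system. The point is just that a gradient system with two default basins (read: the uncoordinated and authoritarian attractors) can acquire a third stable basin when a control parameter crosses a threshold.

```python
import numpy as np

S2 = 0.1  # width of the hypothetical 'coordination' well (illustrative)

def dVdx(x, c):
    """Gradient of a toy potential: two default wells near x = +/-1,
    plus a Gaussian well at x = 0 whose depth c stands in for
    deliberate coordination effort. All names and numbers are made up."""
    return x**3 - x + (c * x / S2) * np.exp(-x**2 / (2 * S2))

def settle(x0, c, dt=0.01, steps=20_000):
    """Follow the gradient dynamics dx/dt = -V'(x) with Euler steps."""
    x = x0
    for _ in range(steps):
        x -= dt * dVdx(x, c)
    return x

for c in (0.0, 0.5):  # no coordination vs. enough to open a third basin
    finals = set()
    for x0 in np.linspace(-1.5, 1.5, 13):
        x = settle(x0, c)
        finals.add(0.0 if abs(x) < 1e-3 else round(x, 1))
    print(f"c = {c}: attractors reached -> {sorted(finals)}")
# c = 0.0 reaches only +/-1; c = 0.5 adds a stable basin near 0 between them.
```

The takeaway from the toy is only that whether a third basin exists at all, and how wide it is, depends on the system’s parameters - which is one way of reading Meadows-style leverage points in dynamical terms.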

Thanks for reading that. It’s fairly central to my thinking about the metacrisis - or most anything.

To me, Peter Pogany is the thinker who did the most to put geopolitics and macroeconomics into a clear systems-and-complexity frame. “Civilization” theory (e.g. Spengler, Toynbee, Sorokin) has a harder time mapping onto a systems perspective. Wallerstein→Pogany→current systems discussion is a clear progression. So I’d start Third Attractor thinking from Pogany’s analysis.

Pogany points to a future world system (GS3) following the current “chaotic transition”. The reason is pure systems theory: anti-entropic complexification and emergence. Interestingly, though, having based his entire argument on the Second Law of Thermodynamics, to get to GS3 he suggests we need a Gebserian leap to integral consciousness on mass scales. Why? Because every Global System needs its own common-sense assumptions about the world. A stable GS3 - involving strong globalized multilateralism and a certain degree of transcendence of the Westphalian Nation-State system - will need Gebserian “aperspectival” worldviews to support it.

Most of my larger paper is unpacking - on psychological levels - the details of what such a Gebserian leap on mass scales might involve, and what the mechanics of getting from here to there would be. This all has rather obvious parallels to Second Renaissance white papers and other related publications. In the end, “civilization” comes back in, now clearly understood as an imaginal construct. We aim at “civilization” as a constellation of embodied values we hope for. “System” is more the pragmatic disposition of matters as they currently are. In that sense, the proposed Third Attractor can be synonymous with a proposed Civilization of the Future - a new bundle of high ideals (or, from the German, a new Kultur).

Anyway, I want to make all these points not just out of pedantry. We’re up against tricky challenges here. Precision increases action leverage. So to improve or preserve “civilization” essentially involves changing our thinking to guide our pragmatic systems work in new directions. For the Third Attractor to truly attract, however, we must do far more than just visualize and discuss it. We also need pragmatic prototypes; otherwise it will not really land in the entropy-driven universe of global systems. All of which justifies the rhetoric of “inventing the future” or “architecting a new civilization” or any number of over-heated tropes and grandiose claims one might share, say, in a TED Talk pointed at the Tech Bros. The bottom line is that my thinking is not all that far from Schmachtenberger’s, but my additional framing gives me access to very pragmatic and even conservative audiences that Schmachtenberger would have much more difficulty addressing.

I honestly don’t know what that 56-page paper is, where it is headed, or for what audience. Here is a bit of back story about why it came to exist:

  1. Wrote this in February 2025: Chaos Compass: Career Navigation for Turbulent Times | by Robert Bunge | Medium. The impetus for that work was to figure out how to inject metacrisis thinking into conventional college courses and advising processes. The date of the article also suggests an initial response to the extremely chaotic days of Trump’s second term.

  2. In June 2025 I piloted the model in a senior level software class. To flesh out “what the world needs” I offered students about 10 essays in the school’s learning management system that generally introduce metacrisis thinking to a wide audience. The essays and the resulting class discussions went quite well. I realized this model could have a strong future and started thinking about replication and dissemination ideas.

  3. In the early summer of 2025 I sat with a legal pad and started to outline a series of 3 or 4 Medium articles to follow “Chaos Compass”. The idea was to rework some of my school essays and get them out in the wild. But in the course of that outlining, the series grew to 7 major topics, and the topics ran together. I realized I needed some complex psychology, for example, to shed light on “civilization”, and I needed to lay down markers about exponential growth, cycles, hard limits, and historical instances of collapse for any of it to make much sense at all. In the end, the current paper is mostly me writing for me, to prove to myself that my general thinking is coherently communicable. But in its current form, there is a very limited audience globally for work of this density: too many references, with too many of them too far off the beaten path.

  4. So now what? The current 56-page work likely needs either to get carved into quite a few essays or to grow to book length. Also, I’m not wedded to the idea that I would be the sole or even the principal author of any of that. The original Chaos Compass article was my take on Snowden’s act-sense-respond chaos loop. Under that framing, the 56-pager is an “act”. Posting it here is for “sensing”. This current discussion is part of the response processing. What the next “act” is remains to be seen!


Thanks for the context. I will read through the full thing in the next few weeks and have a think about how this meshes with my own research, and possible collaborations. Perhaps we can also discuss at the research calls at some point.

Thanks. Sounds good. For those following this thread somewhat casually, the major sections of the draft paper we are discussing are listed below.

A. The Historical Record - Why Did Civilizations Emerge, Grow, Decline, and Collapse?
B. Cycles in Culture and Civilization
C. Psychology and Cultural Emergence
D. Current Civilization - One or Many?
E. The Material “Push”
F. The Spiritual “Pull”
G. Action Models

A. summarizes a variety of macrohistorians. B. mentions major theorists of civilizational cycles. C. owes much to Brendan Graham Dempsey, infused with McGilchrist. D. is the most particular to my personal interests. E. leans heavily on UTOK. F. is about emanationism, mashing up all sorts of theology and philosophical idealism. G. is Donella Meadows informed by macrosociology.

A moment ago I almost started a thread on the idea that AI Governance = Gaian Governance, but this thread popped up as similar, so I figured why not try extending this one first?

What leads me to this theme is a desire to orchestrate a few different projects and initiatives into something more coherent and synergistic. A few of those elements:

First, the link below is to a presentation I did for faculty at my school last week. The content includes two draft chapters from an upcoming text (topics = AI + Futurism) and a few overview slides. The chapters draw on some names familiar to readers here (John Vervaeke, Zak Stein, Nate Hagens) along with more conventional technical and historical sources. In a nutshell, the argument in the draft chapters leans in the direction that artificial intelligence requires a deeper grasp of human intelligence, and that ethical alignment of AI can scarcely be separated from ethics in general.

This morning I’m feeling a rather straightforward clarity that a) a global brain is taking shape, b) AI is involved in that, and c) humans with some degree of skill, understanding, consciousness, awareness, self-possession, and self-discipline will be more guiding the AI than guided by it. (Quite a few, however, will likely be more guided by it. I’m not loving that prospect; it just seems rather difficult to ignore the way matters are trending.)

In presenting this to about two dozen faculty from a variety of programs, I found the slides (10 in all) effectively raised the full spectrum of metacrisis/polycrisis issues in about 30 minutes, allowing time for discussion afterwards. I’ll claim a sort of victory for “less is more” communication strategies from this experience. People know the world is on fire. We need not belabor the point. What works better is to get straight on to crafting potential solutions.

On the solutions side of things, I’ve also enjoyed some progress lately in forming both student and faculty teams to wrestle with the practicalities of skills training to empower AI governance. The project of @dvdjsph, which I first encountered here, is playing a significant role in catalyzing that process. The link below is proving useful for introducing that project to a variety of publics.

What I’m working toward with this post is finding a means of connecting the theory (my draft chapters and slides) with David’s project. Also, generalizing both of those to include complementary theoretical perspectives as well as other projects broadly involved with pro-social approaches to AI governance.

To put a rather provocative point on all this, it feels like the early days of global brain engineering, global governance engineering, education-of-the-future design, and laying the groundwork for the next iteration of whatever civilization turns out to be. Somehow, this morning all that feels simpler and more manageable drawn together into a synergizing conceptual network, rather than parsing each subtopic into its own disciplinary silo. To be more precise about that - discipline, yes! Silos, no!!

To relate all this to more typical 2R content, I’d like to propose my 10-slides approach as a sort of on-boarding experience to get people thinking about the metacrisis. My avoidance of terms such as “metacrisis”, “polycrisis”, “meaning crisis”, “Moloch”, etc. is intentional. Likewise, I find it more expedient to remain silent on “integral”, “metamodern”, “liminal”, “meta-theoretical”, or any large-scale conceptual framing. All that can get discussed (as needed) in due course. What I’m aiming for is more like an expanded conversation - a conversation that can happen at mass scales. Likewise, I’d like to propose @dvdjsph’s project as the sort of praxis approach such conversations should also be aiming for. Beyond discussing existential dilemmas, we should also (IMO) be building our way into the sorts of futures we would prefer to inhabit. That building toward the future will not necessarily turn all mysteries into solvable problems or establish a universal metaphysics that all must embrace, but on balance it seems like designing patterns for the future is better than trusting entirely to fate or gods or pure dumb luck.


Love the idea of introducing these topics to a group of people young enough to still be open to new ideas, before they learn to stay in their own lanes. I can imagine this presentation must hit hard as you get further through the slides.

I was in contact with the founder of this project recently, and it seems there are others out there working towards similar ends, even if their means aren’t the same. This is reassuring; the fact that there are several different projects attempting to build a ‘trust layer’ of the internet means that 1) the Zeitgeist is clearly calling for something and 2) there are different ways of answering the calling, each perhaps best suited for a particular purpose (like UDP vs TCP).

Good example. I’ll be encouraging my students to explore that sort of thing along with your project. I’m currently working through Plurality by this author: AI and Democracy: Ambassador Audrey Tang on Plurality in Practice, Transparency and Collective Intelligence | University of Oxford Podcasts

That makes the linkage clear between AI-governance and governance in general.

Paradoxically, with mega-cap tech companies laying off and entry-level opportunities drying up, there are no open “lanes”. My non-linear educational models are born as much from objective necessity as from any sort of personal inspiration.