A moment ago I almost started a thread on the idea that AI Governance = Gaian Governance, but this thread popped up as similar, so I figured why not try extending this one first?
What leads me to this theme is a desire to orchestrate a few different projects and initiatives into something more coherent and synergistic. A few of those elements:
First, the link below is to a presentation I did for faculty at my school last week. The content includes two draft chapters from an upcoming text (topics = AI + Futurism) and a few overview slides. The chapters draw on some names familiar to readers here (John Vervaeke, Zak Stein, Nate Hagens) along with more conventional technical and historical sources. In a nutshell, the argument in the draft chapters leans in the direction that artificial intelligence requires a deeper grasp of human intelligence, and that ethical alignment of AI can scarcely be separated from ethics in general. This morning I’m feeling a rather straightforward clarity that a) a global brain is taking shape, b) AI is involved in that, and c) humans with some degree of skill, understanding, consciousness, awareness, self-possession, and self-discipline will do more guiding of the AI than being guided by it. (Quite a few, however, will likely be more guided by it. I’m not loving that prospect - it just seems rather difficult to ignore the way matters are trending.) In presenting this to about two dozen faculty from a variety of programs, I found the slides (10 in all) effectively raised the full spectrum of metacrisis/polycrisis issues in about 30 minutes, allowing time for discussion afterwards. I’ll claim a sort of victory for “less is more” communication strategies from this experience. People know the world is on fire. We need not belabor the point. What works better is to get straight on to crafting potential solutions.
On the solutions side of things, I’ve also enjoyed some progress lately in forming both student and faculty teams to wrestle with the practicalities of the skills training needed to empower AI governance. The project of @dvdjsph, which I first encountered here, is playing a significant role in catalyzing that process. The link below is proving useful for introducing that project to a variety of publics.
What I’m working toward with this post is finding a means of connecting the theory (my draft chapters and slides) with David’s project, and then generalizing both of those to include complementary theoretical perspectives as well as other projects broadly involved with pro-social approaches to AI governance.
To put a rather provocative point on all this: it feels like the early days of global brain engineering, global governance engineering, education-of-the-future design, and laying the groundwork for the next iteration of whatever civilization turns out to be. Somehow, this morning all of that feels simpler and more manageable drawn together into a synergizing conceptual network, rather than parsed into separate disciplinary silos, one per subtopic. To be more precise about that - discipline, yes! Silos, no!!
To relate all this to more typical 2R content, I’d like to propose my 10-slide approach as a sort of on-boarding experience to get people thinking about the metacrisis. My avoidance of terms such as “metacrisis”, “polycrisis”, “meaning crisis”, “Moloch”, etc. is intentional. Likewise, I find it more expedient to remain silent on “integral”, “metamodern”, “liminal”, “meta-theoretical”, or any other large-scale conceptual framing. All that can get discussed (as needed) in due course. What I’m aiming for is more like an expanded conversation - a conversation that can happen at mass scales. In the same spirit, I’d like to propose @dvdjsph’s project as the sort of praxis approach such conversations should also be aiming for. Beyond discussing existential dilemmas, we should also (IMO) be building our way into the sorts of futures we would prefer to inhabit. That building toward the future will not necessarily turn all mysteries into solvable problems or establish a universal metaphysics that all must embrace, but on balance it seems like designing patterns for the future is better than trusting entirely to fate or gods or pure dumb luck.