The universe has a way of drawing me into rather out-of-the-ordinary contexts. I just got off a long Zoom call with a group of senior AI engineers who are trying to define their job roles in order to help students prepare for AI engineering careers. At one point they asked me something along the lines of how schools might prepare students for the massive list of technical topics an AI engineer really ought to know.
I told them I was doing it already (as a pilot), and that it is based on the strategy of “philosophy eats AI”. https://sloanreview.mit.edu/article/philosophy-eats-ai/ Generally, the Ikigai questions are used as Socratic prompts to get students asking all the right questions. Then it’s a matter of getting students to learn how to learn, followed by massive amounts of teamwork, collaboration, and AI infusion, all generally calculated to light up human neurons in ways human neurons have never been lit up before. I packaged all that as the equivalent of a coding bootcamp for applied philosophy. Or, basically, a bootcamp for humans to hack themselves to master the dark arts of AI engineering.
What I’m wondering is this: if the metacrisis truly is as urgent and pressing as people make it out to be, would a “bootcamp” approach to acquiring 2R values and perspectives be viable or desirable? Such an approach may be misguided or doomed, but of course I’m trying it anyway, so I suppose we’ll all soon find out!
Worth a try! If it’s misguided or doomed, you’ll be better equipped for the next iteration. At any rate, it’s a step in the right direction in my book. I do like the idea of exposing students to philosophical topics through small group discussions.
That’s pretty much how it was always done, until the age of assembly-line education. What the industrial metaphor does to education is kill off any potential for what Vervaeke calls “participatory knowing”. Participatory knowing arrives through experiential initiations, not through lecture halls seating hundreds. Making those lecture halls into Udemy courses or TED Talks scarcely improves the situation.
An ironic implication of this is that the transformation of human consciousness at mass scale will first require a reduction of scale, bringing education back to small-group, participatory models. It’s a recursive pattern - take a big problem (transformation of culture along 2R lines) and break it down, break it down, break it down … until experiential ground truth is reached as the base case. Then recurse back up the stack to culture at mass scale, but “grounded in being”, as the Bhagavad Gita puts it.
Today I sent an email to a few faculty and staff in Computer Science at my school about a model I recently discussed with @dvdjsph and @Martin . Below is a section of that email.
++++++++
Here are some quick thoughts on how a 2-year capstone would work with the model I have in mind:
- The idea is to daisy-chain experiential OJT (on-the-job training) from raw beginner all the way through to professional placement.
- Raw beginners start on testing and documentation. As their tech skills grow, they move to the UI/UX end of the stack and work their way in.
- Given that entry-level jobs are being throttled by AI and economic uncertainty, I’m using start-ups as employer proxies until real employers open the entry-level doors wider.
- There is a “learning to learn” methodology to this that involves 1) a lot of group work to process content and 2) lots of AI to push the group well beyond its comfort level. Everyone needs to work a couple of levels higher than they really are, and to catch up to that with formal coursework insofar as they are able.
+++++++
The reason for the “learning to learn” hyper-acceleration is that AI engineering is so demanding, and has so many prerequisites, that even attempting to learn it thrusts anyone below the graduate level into a Keganesque “In Over Our Heads” situation. But that’s not entirely a bad thing. Being in over one’s head is also a prerequisite for what might be termed “second tier” or “integral” cognitive-emotional leveling up. So in effect, I’m proposing aggressive systems training as a backdoor motivation to focus students on inner and interpersonal development as well.
Where 2R comes into all this: because of the “In Over Our Heads” effect of the aggressive systems training, I need to pair the technical bootcamp with a cognitive-emotional bootcamp to keep both personalities and workgroups from having meltdowns. Also - and this has become clearer and clearer to me the deeper I dig into it - the technical topics of ethical AI, AI guardrails, AI accountability, etc. require philosophical training both for the AI and for the AI engineers. Indeed, sequentially, it seems most advisable to train the engineers first, because they in turn must train the AI.
The mechanics of this likely involve Retrieval-Augmented Generation (RAG). That’s a fancy way of saying you upload documents to an LLM and direct the LLM to draw on them when generating responses. That’s how any organization can values-align a generic LLM with its specific organizational parameters and purposes. Reflecting today on Charles Taylor’s Sources of the Self, it struck me that using RAG to inspire ethical AI output requires what Taylor calls “moral sources”. A library of ethical statements (I suggest in the form of short essays, for sufficient depth, complexity, and nuance) could serve as the repository of “moral sources” for retrieval. Moreover, it seems doubly expedient to have the AI engineers read and discuss the essays as well, so they will have better “human in the loop” judgment about which moral sources to apply in which cases.
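To make the mechanics concrete, here is a minimal sketch of what that retrieval step could look like. Everything here is illustrative (the toy essay library, the scoring function, the placeholder call_llm): a real deployment would use an embedding model and a vector store rather than keyword overlap, but the shape of the pipeline - retrieve the most relevant moral sources, then inject them into the prompt - is the same.

```python
# Minimal sketch of RAG over a library of "moral sources" essays.
# Keyword overlap stands in for embedding similarity; call_llm is a
# placeholder for whatever model endpoint the organization actually uses.

from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    # Crude word-overlap score standing in for cosine similarity over embeddings.
    shared = sum(min(a[w], b[w]) for w in a)
    return shared / ((sum(a.values()) + sum(b.values())) or 1)

def retrieve(query: str, essays: dict[str, str], k: int = 2) -> list[str]:
    # Rank the moral-sources essays by relevance to the engineer's query.
    q = tokenize(query)
    ranked = sorted(essays, key=lambda title: similarity(q, tokenize(essays[title])), reverse=True)
    return ranked[:k]

def build_prompt(query: str, essays: dict[str, str], k: int = 2) -> str:
    # Retrieved essays are injected as context so the LLM grounds its answer
    # in the organization's own moral sources.
    context = "\n\n".join(f"[{title}]\n{essays[title]}" for title in retrieve(query, essays, k))
    return (
        "Use the following moral sources when forming your answer.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

# Toy library of moral-source essays (stand-ins for the real short essays).
essays = {
    "On dignity": "Every user interaction should preserve human dignity ...",
    "On transparency": "Systems must be explainable to the people they affect ...",
}

prompt = build_prompt("Should the model log user conversations?", essays)
# response = call_llm(prompt)  # placeholder for the actual model call
```

The point of the pairing is that the same library the model retrieves from is the one the engineers read and discuss, which is what ties the RAG setup back to training the humans first.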
In summary, it seems I’ve been hanging out in 2R and related liminal spaces in order to source the moral sources. (The feeling I’m experiencing right now is that there must be some developmental level at which Wilberian quadrants don’t make a damn bit of difference anymore, because it’s just all quadrants, all at once.)