Also, sorry to derail this topic - let’s move this to chat @RobertBunge
There are generally two ways to go with language - precision and resonance. My jam is taking precision (like yours) and blowing it up into a million unexpected overtones. Both processes have their places. My approach - overloading single terms with multiple meanings, intentional synecdoche, fuzzing conceptual edges at every turn - serves a certain type of purpose, especially for organizational development.
So of course, on all technicalities, you “own” the terms and can have them mean precisely what you want. What I’m finding in practice, though, is that just mixing and matching “agent”, “microenterprise”, “nonlinear”, and a few other terms from the same color palette weaves a certain sort of spell. For 2R purposes, what I’m hinting at is a holonic model of nested agentic systems. Some of those holons are human. Some are technical. Some are technical-human-organizational hybrids. Just to pile on even more vocabulary, each and every one of them is an “attractor”. Dave Snowden takes similar liberties with all his sources, and I’m just riffing off all that and taking additional liberties of my own!
May we find the right balance between precision and resonance. I appreciate this perspective a lot and nothing could make me happier than alternative perspectives deepening the protocol. It’s like we’re constructing the perfect thumpers for calling the sand worms. I have to admit, resonance is something that’s been lacking in our protocol, and you and your students are adding the first resonant notes to the mix!
I say our protocol because I never intended it to be mine exclusively. I know the particular agents and their respective promises that make up the ecosystem will change in time, and nothing could make me more content than to see this evolution accelerate and include many perspectives.
This whole “precision” vs “resonance” thing came to me after years of trying to explain “the metacrisis” in a careful, very precise way. Almost no one has the patience to listen to that. Also, the people who are aware of the term generally derail into hair-splitting debates about metacrisis vs polycrisis, or whatever the true root cause really is, or something like that. What does not follow is any serious collaboration or action planning. Being interested in things like collaboration or action planning, I then experimentally changed up the approach. Instead of trying to explain everything, very precisely, I shifted to explaining nothing in particular as vaguely as possible. Works like a charm! Just blast out as many quasi-synonyms as fast as possible and act completely natural, as if saying “the sky is blue” or something completely commonsensical and stone cold obvious. For example: “metacrisis, polycrisis, climate change, global collapse, fall of civilization, extinction event, Anthropocene, Moloch, or whatever the sum total of threatening trends are all at once.” The idea is to short-circuit critical thinking and just get people nodding their heads. After getting people’s attention like that, then it becomes possible to slow down and explain things properly. (A little bit at a time). I’m pretty sure there is right-brain/left-brain alchemy involved in this technique. Impressionistic language in cadence seems to promote unity. Precision language, staccato, gets people slicing and dicing ideas, but not necessarily in the mood to work either with the speaker or with each other. My hypothesis is that group formation in the first instance (and likely group maintenance) requires the more musically-inspired non-precision rhetoric from time to time. (In typing that, some fairly deep anthropological studies come to mind …)
OK, so against that background, I’m finding it super easy to interest people in “promise protocol” by just blasting out a lot of word salad about “agentic” and “AI” and “microenterprise” and “trust” and “guardrails” and “governance” and “transactions” and anything else that comes to mind. After people get interested, then later we can follow up with insights like “a microenterprise requires two special types of agent”. (Thanks for pointing that out, BTW!) It’s almost a law of nature that people need to misunderstand a new idea before they eventually understand it (or at least get closer to understanding it). If you hear people talking excitedly about your project and getting the details all wrong, consider that a good sign! That’s what “mindshare” sounds like!
Now I’m going through your website and learning the vocabulary properly. PROMISE Platform
Let’s apply this to the discussion above. You can check my work.
By using the vague, “word salad” approach, the speaker gets high credence to begin with, mostly because the presentation is largely platitudes and puffery, with nothing much to disagree with. At this point, the speaker is either a con artist (setting a High-Credence Trap) or the real deal. The only way to establish whether this too-good-to-be-true thing the speaker is presenting is real is to then build confidence up to a high level. If the speaker is for real, solid evidence will then start getting tabled, one element at a time, with opportunities for the audience to stress-test the evidence. If the speaker is indeed a con artist, instead of solid evidence, the initial pitch will be supported largely by more word salad and more puffery. A critical listener should discount confidence accordingly.
Why not start with a high-precision, evidence-based pitch in the first place? Real simple. High-precision or fact-based statements trigger listeners into critical distance and a skeptical posture. Credence gets depressed right from the get-go. Listeners almost have to over-correct in the direction of challenging the speaker’s ideas. So what may in fact be a Solid Bet looks at best like a Plausible Longshot, because the audience has been primed to stress-test every word coming out of the speaker’s mouth. Even if the confidence-building evidence is rock solid, the proposal is likely to under-perform due to low initial credence.
Moral of the story: in sales (or education), go for credence first, confidence later.
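Just to check my own reading of how those two dials interact, here is a toy sketch in Python. Everything in it (the threshold, the function name, the fourth label) is my own illustrative assumption, not anything from the protocol’s vocabulary; the three named outcomes are simply the terms I used above.

```python
# Toy sketch of the two-stage read described above: "credence" is the
# listener's initial openness after the pitch, "confidence" is what the
# evidence later earns. Threshold and labels are my own assumptions.

def assess(credence: float, confidence: float, threshold: float = 0.5) -> str:
    """Map a (credence, confidence) pair, each scored 0..1, to a label."""
    if credence >= threshold and confidence >= threshold:
        return "Solid Bet"            # warm audience, and the evidence holds up
    if credence >= threshold:
        return "High-Credence Trap"   # warm audience, but the evidence never arrives
    if confidence >= threshold:
        return "Plausible Longshot"   # skeptical audience, yet the evidence is solid
    return "Dismissed"                # placeholder for the unnamed fourth corner

# The "word salad first" opening buys credence; evidence then moves confidence.
print(assess(credence=0.8, confidence=0.9))  # Solid Bet
print(assess(credence=0.8, confidence=0.2))  # High-Credence Trap
print(assess(credence=0.3, confidence=0.9))  # Plausible Longshot
```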
Question for future discussion - the process above is based on practical human psychology. When pitching project merit to agentic AI, would any of these considerations play out differently? @dvdjsph
Really interesting way of applying the language, Bob! Reminds me of Ericksonian hypnotherapy, as both use language in an intentionally indirect way to get to a deeper level of resonance. This is a really new way of looking at the protocol. I’d love to hear you riff on this more when the inspiration comes.
There’s a lot to explore here, but as I understand it, the mechanism I discussed in my last message would be the wrong way to approach agentic AI. The use of the Milton Model (as the creators of NLP called it) was intended to bypass the critical mind in order to communicate with the subconscious. Doing the same with agentic AI (if I understood you correctly) would only lead to less pointed and less relevant answers, as there is no subconscious or higher self to reach - but there is a context window that gets exhausted, resulting in less coherent responses.
That would be my hypothesis also. I’m thinking the credence phase works differently for humans compared to AI. Human credence is more intuitive. AI credence (like all things AI) would be some sort of Bayesian best guess. In a hybrid human-AI system it would be useful to run humans and AI in parallel, blind to each other, and then cross-check the results.
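To make “Bayesian best guess” slightly more concrete, here is a minimal sketch, again Python and entirely my own framing rather than anything from the protocol: AI-side credence starts from a prior and gets updated in log-odds as evidence items arrive, while the human score is gathered blind and only compared at the end.

```python
import math

# Minimal sketch of the "Bayesian best guess" idea: prior credence updated in
# log-odds space, one assumed likelihood ratio per piece of evidence.

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def ai_credence(prior: float, likelihood_ratios: list[float]) -> float:
    """Update a prior credence with one likelihood ratio per evidence item."""
    log_odds = logit(prior) + sum(math.log(lr) for lr in likelihood_ratios)
    return sigmoid(log_odds)

# Hybrid check: human and AI score the same claim blind to each other,
# then the two results are cross-checked for disagreement.
human_score = 0.75  # intuitive human credence, collected separately
ai_score = ai_credence(prior=0.3, likelihood_ratios=[3.0, 1.5, 0.8])

print(f"AI credence: {ai_score:.2f}, human credence: {human_score:.2f}")
if abs(ai_score - human_score) > 0.3:  # arbitrary disagreement threshold
    print("Flag for discussion: human and AI assessments diverge.")
```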
Actually, I got a mini-gusher already. Basically just rolling up all the usual metamodern psychological theories and applying them to the topic at hand.
For starters, on credence:
- McGilchrist would center it in the right-brain, highly relational. That jibes with psychotherapeutic and hypnotic technique.
- claims that fit our current reference frames (easy to assimilate in Piagetian terms) will likely gain high credence scores. Claims that do not fit our current reference frames, requiring new reference frames (accommodation), will lack credence until the frame shifts. Evidence be damned. It’s just that cognitive dissonance occasioned by information that does not fit current frames creates an energetic barrier to belief.
- as long as everything is within a given paradigm and essentially transactional (Vervaeke’s propositional and procedural), credence will be high if fit is good with current understandings. Getting over the credence hump for anything truly innovative will likely require some sort of relational, participatory, or emotive facilitation for human adopters (Vervaeke’s participatory and perspectival knowing).
When I just communicate instinctively with stakeholders not well schooled in AI or anything else you are involved with, I intentionally bend your vocabulary all out of shape, specifically to jam it into existing reference frames. Once people grok the reference frame around your model, we can all be more precise and clinical in how we approach its various elements.
Another thing that occurs to me just now is that because AI psychology is likely quite different from human psychology in this matter, AI might be more open to radical innovation. I’d suggest running novel claims through AI for a plausibility check and then letting humans do additional reality testing. That might facilitate human reference-frame upgrades faster than usual.
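If that workflow were ever automated, I picture something like the sketch below. All the names and thresholds are hypothetical placeholders (the plausibility function is a stub, not a real model call): AI does the first-pass plausibility screen, and only claims that clear it get queued for human reality testing.

```python
# Hypothetical sketch of the workflow above: AI plausibility check first,
# human reality testing second. Names and thresholds are placeholders only.

def ai_plausibility(claim: str) -> float:
    """Stand-in for an AI screening step; returns a plausibility score in 0..1."""
    return 0.7  # in practice this would call a model; here it is a stub

def queue_for_humans(claims: list[str], cutoff: float = 0.5) -> list[str]:
    """Keep only the claims the AI pass considers worth human attention."""
    return [c for c in claims if ai_plausibility(c) >= cutoff]

claims = [
    "A microenterprise requires two special types of agent.",
    "Credence works differently for AI than for humans.",
]
for claim in queue_for_humans(claims):
    print(f"Queue for human reality testing: {claim}")
```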