Having just checked in a draft of a textbook (one that will not see the light of day till late 2026), it’s on to finding new worlds to conquer! The textbook ended with an impossible chapter on Future Trends in Information Technology, which involved an extended acknowledgment that no one really knows anything about that topic (least of all this author!), but nonetheless, it behooves us to make our best guesses. The general theme of the chapter, then, was prediction or forecasting - what are the different approaches to future visioning, and why might one method be preferred over another?
Simultaneously with the textbook drafting, I’ve also been in extended discussions with @dvdjsph about his Promise Protocol. In those discussions, I generally conflated “prediction” with “promise”. Now that my context window for non-textbook matters is widening quite a bit, it occurs to me that yes, there is a family resemblance between “prediction” and “promise”, but there is also quite a significant difference. That difference opens up all manner of topics typically of interest here, which is why I am choosing to post about it in this semi-public space.
First, here is me attempting to be rigorous:
A prediction is an assertion of the likely truth value of a proposition at some future time.
A promise is a prediction supported by agency.
A prediction is an estimation of event probability without intervention by the estimator in the outcome.
A promise is an estimation of event probability with the estimator actively intervening to influence the outcome.
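The asymmetry in these definitions can be made explicit in a minimal sketch. This is purely illustrative; the type names and fields are my own invention, not anything from the Promise Protocol or Promise Theory. The point is simply that every promise is a prediction, but not every prediction is a promise:

```python
# Illustrative sketch only: hypothetical types formalizing the
# prediction/promise distinction defined above.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Prediction:
    """An assertion of the likely truth value of a proposition at some
    future time; the estimator does not intervene in the outcome."""
    proposition: str
    probability: float  # the estimator's estimate that the proposition holds


@dataclass
class Promise(Prediction):
    """A prediction supported by agency: the estimator actively
    intervenes to influence the outcome."""
    # Placeholder for the promiser's good-faith effort toward the outcome.
    intervene: Optional[Callable[[], None]] = None


weather = Prediction("It will rain tomorrow", 0.6)
project = Promise("I will finish the project tomorrow", 0.8,
                  intervene=lambda: None)  # stand-in for actual effort

assert isinstance(project, Prediction)   # every promise is also a prediction
assert not isinstance(weather, Promise)  # but not every prediction is a promise
```

Modeling `Promise` as a subtype of `Prediction` encodes the "family resemblance plus significant difference" directly: the promise inherits the propositional content and probability estimate, and adds only the agency.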
The point of general interest here is the notion of “prediction supported by agency”. Can we “will power” our way to desired results? Does setting goals create positive feedback for the realization of goals? In predicting our own future actions, are we on any firmer footing than in predicting the weather? Indeed, might we even be on shakier footing?
I’d be curious to learn what anyone else here thinks about how future-oriented statements may or may not influence the direction of outcomes that, in the moment, can only be grasped as possibilities or potentials.
Interesting to bring these concepts together. “Supported by agency” seems a bit vague to me. Does this mean the promiser necessarily believes in the integrity of the promise? A promise can be made in bad faith, right?
Another definition I like is the following: a promise is a claim about an intended outcome. This is explicitly neutral with regard to the fulfillment value.
For networks of cooperating agents like businesses, humans, or even cells, some form of signaling that limits the space of future possibilities seems like a prerequisite.
Another useful question to ask is how we humans, who think of ourselves as individuals but may also be understood as conglomerates of parts, can best enact our intentions through effective promise-making between these parts. For example, my manager part (to use some IFS terminology) promises my firefighter that the danger they fear is duly noted and has been neutralized, rather than this part’s voice being suppressed, only to speak up later, louder and inopportunely.
At your suggestion, I’m now reading Bergstra, J. A., & Burgess, M. (2014). Promise Theory. Createspace Independent Publishing Platform. Let’s go with their statement here:
“The active entities in Promise Theory are called agents. Agents can be persons, animals, plants, machines, or any other kind of entity. They are the things with the agency to exhibit behaviours, whether intentionally or unintentionally, and whose observation leads to perception of behaviours and intentions in other agents.” (p.2)
To me, though, their next statement introduces a bit of semantic confusion that can be cleared up through my use of “prediction”. B&B: “Some of these agents can make promises through what we would call free will; others merely seem to keep effective promises through the agency of inanimate tools, e.g. a light bulb that promises to shine with a certain brightness.” (p.2)
I would not say a light bulb “promises” anything. I would say we predict the light bulb will shine, based on extrapolation from past performance and/or a physical model of electron-to-photon transformation. I am fine, however, with the notion that lightbulb manufacturers or vendors promise lightbulb performance, based on their own predictive models. The key distinction here is that I want one term - promise - for the world of subjects (Wilberian interiors) and a different term - prediction - for claims or beliefs about the world of objects, or more generally, Wilberian exteriors.
I would situate that in the world of conflict theory. False promises can be made to wrong-foot a counterparty. Honest promises project truly intended outcomes, with the promiser supplying good-faith efforts toward those stated outcomes manifesting. In false promises, the truly intended outcome of the promiser is disguised through misdirection.
The conflict model is interesting - and rather vital for Game Theory, multipolar traps, etc. But I’d rather get the core promise-prediction relationship firmed up first for cases in which intentions may be quite honest, but promises can be undone by predictive uncertainty occasioned by Black Swans, combinatorial explosiveness, or complexity in general.
In working on a grant proposal this morning, I shipped this rhetoric:
“Systems theorist Donella Meadows identified “mindset” as the maximum leverage point in any system. In a nutshell, our most central challenge is to embrace a mindset that humans can – through entirely human means – make a place for themselves in the AI-enabled world. In the end, the challenge AI demands of us is to raise our game as humans and to bring our best selves to the task of AI orchestration.”
In your terms above, I’m proposing to expand the organizational possibility space through a mindset shift enabling novel and emergent forms of human-human, human-AI, AI-AI, and larger human/AI collective communications. Current human communication styles in the organization are somewhat self-limiting (which they must be, in the interests of organizational coherence), which contributes to a variety of systemic crises in the changing political-economic environments the organization is grappling with. I’m proposing a new rhetoric of organizational coherence that allows human actors to remain fully human - indeed to expand their practice of personal humanity in generally IDG ways - but to leverage those human potentials to maximize organizational productivity gains available through AI.
One reason I want to focus “promising” on Wilberian subjectivity is to really dig into questions about self, identity, development, inter-subjectivity, etc. In SD terms, “promising” means different things at different levels. Also, the juxtaposition of Promise Theory with UTOK’s Justification Systems Theory (core to UTOK’s theories of self and personality) opens up vast ranges of nuance and consideration.
Circling back to the notion of false promises, how often does that topic come up in psychotherapy? In the ballpark of every session, more or less?
A promise can be false in a couple of ways. The promise maker can be mistaken about their ability to fulfill the promise, or they may make the promise under false pretenses. The former has some overlap with your notion of prediction. “I will finish the project tomorrow” is both a promise and a prediction, and quite a common sort at that. So our notion of promises would need to encompass this.
Intentional misrepresentation corresponds to quite different mental states from honest mistakenness, but may have the same interface or outer appearance. If I rely on someone to complete a project by tomorrow, it may be the case that they’re not able to because they’re overly optimistic – or because they plan to call in sick in order to get off the hook.
The sorts of broken promises that come up in therapy/self-development are a huge topic that I’ve been exploring for a year or so and have only scratched the surface of. Just a few scenarios:
We may be accustomed to accepting false promises from others that we don’t believe will be honored, which may be a question of drawing proper boundaries.
We may be in the habit of making promises to ourselves (e.g. “I won’t eat any sweets this week”) that we’re not able to uphold.
This last example is interesting because it’s concerned with internal coherence; part of me wants to be good while other parts want to feel good. Psychotherapy as described by Freud, i.e. making the unconscious conscious (“Where id was, there ego shall be”), can be seen as the process of making better promises.
One reason I like the promise/prediction distinction is it cleans up cases like “Party A has promised something I need. Can I count on Party A?” Whatever Party A’s subjective state is in making the promise (deceptive, overly optimistic, sincere and spot on), the attitude of the consumer of Party A’s services can only be a prediction (short of a transpersonal psychic hotline). My prediction of Party A’s fulfillment likelihood factors into the PP Abductio calculus for anything related to business transactions or supply chains in general. Such scenarios strike me as bread-and-butter PP use cases.
This would be a good point to circle back to the recent Eric Heinis research call. Morality as energy optimization is a great way to frame it. Leaning on the unified psychology of Gregg Henriques and the cultural emergence work of Brendan Graham Dempsey (among many others), my general observation is that IDG or SD leveling up has an energy cost and a threshold energy activation requirement. It takes big energy to get out of a deep basin! (The Piagetian technical term is “accommodation”. The Model of Hierarchical Complexity expands that model quite a bit.) Short of enough energy to get out of the basin, to synthesize an emergent cognitive structure, to attain a new egoic perspective, etc., the energetic fallback position is to deploy one or more Freudian defense mechanisms, each of which can be seen as a low-energy strategy for deflecting immediate discomfort at the expense of longer-term resolution. I tend to favor p-t-p therapeutic or developmental communities for that reason (DDS in Second Renaissance terms), because energy-surfing from peers in community is far easier and generally more pleasant than rugged individualist psychological bootstrapping.