One of my classes recently was challenged to brainstorm on organizational partnering. Namely, what would one organization need to know about another to be attracted to them as a potential partner? (AI summary of the responses below). I’m wondering if the various trust-generating protocols you are interested in could fit into this picture.
Key Attributes to Compare:
Mission, values, and goals
Leadership and organizational structure
Financial health and investment capacity
Staff expertise and operational infrastructure
Technological capabilities
Past partnerships and reputation
Geographic location and reach
Evaluation Methods:
Point-based scoring
Side-by-side comparison
Weighted attribute models (see the sketch after this summary)
Venn diagrams and spreadsheets
JSON schemas and databases
Graphs and visual tools
Real-world analogies and case studies
Strategic Insights:
Equal-power mergers often fail without a clear leader
Compatibility must be both quantitative and qualitative
Reverse-engineering failure can reveal success criteria
Structured data enables transparent decision-making
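As a minimal sketch of the weighted attribute model idea mentioned above, in TypeScript; the attribute names and weights are hypothetical, not a prescribed rubric:

```typescript
// Minimal weighted-attribute scoring sketch.
// Attribute names and weights are hypothetical, not a prescribed rubric.

type Scores = Record<string, number>;

// Weights encode partnering priorities and should sum to 1.
const weights: Scores = {
  missionAlignment: 0.3,
  financialHealth: 0.25,
  staffExpertise: 0.2,
  techCapability: 0.15,
  reputation: 0.1,
};

// Weighted sum over the attributes both sides agreed to compare (0-10 ratings).
function partnerScore(ratings: Scores): number {
  return Object.entries(weights).reduce(
    (total, [attr, w]) => total + w * (ratings[attr] ?? 0),
    0,
  );
}

// Side-by-side comparison of two candidate organizations.
const orgA = { missionAlignment: 9, financialHealth: 6, staffExpertise: 8, techCapability: 5, reputation: 7 };
const orgB = { missionAlignment: 6, financialHealth: 9, staffExpertise: 7, techCapability: 8, reputation: 6 };
console.log(partnerScore(orgA).toFixed(2), partnerScore(orgB).toFixed(2));
```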
Great question. Borrowing the metaphor from my other thread, what would one cathedral builder need in order to collaborate with another? Practically speaking, the builders must have very much in common. A shared belief system, a shared blueprint, defense from outside forces, and so on. These things have their contemporary correlates.
Does our present vantage point equip us with a conceptual framework to enable more efficient cooperation? Do we understand cooperation at a deeper level, such that, although I do not share the religious beliefs of a Muslim or Hindu who wants to improve the world, we can all partake of a shared value system such that we can build from the same blueprint?
I believe that’s what’s distinctive about the time we live in. The challenges are as formidable as could be, but the potential of cooperation is equally unlimited; we can create a syncretic plan for a better world that promises that effort, whichever local belief system inspires it, can be funneled into the greater good they all point to.
Shared, global values, however they are expressed locally, seem to me to be a prerequisite. This is what motivates a huge undertaking like the building of a cathedral. But then, there’s the blueprint that has to make sense as well. If it’s dogshit, and the walls won’t meet where they’re supposed to, well, that’s a problem. The principles upon which these grand, beautiful undertakings were based were grounded in sacred geometry and knowledge deeper than most of us moderns would suspect. In summary, we could say these principles relied on many generations of accumulated intellectual achievements that enabled the cathedral to not collapse under its own weight.
I’ve gone into detail before regarding my own framework for enabling cooperation at such levels. In my opinion, the best unit of cooperation that our collective intellectual heritage has produced is the promise, and clarity about promises can create a better way to match expectations and outcomes.
You could also forget about the values/beliefs alignment and pay them to build the cathedral.
It’s also possible that you would start by yourself and attract others on the basis of the project going so well that it inspires people and they want to be associated with it.
Nothing is as powerful at changing people’s values and beliefs as success.
I’m also interested in the protocols of collaboration and co-creation - but they don’t go as far as motivating people and equipping them with capacities they don’t have.
Questions like that are what drove me to civilizational analysis, followed eventually by ever deeper dives into psychology, philosophy, and the human sciences in general. The gist of my response is to visualize humans as “layered”. This is on analogy to the information technology stack, but there is plenty of precedent for such a model in both ancient wisdom and modern psychology. When a lot of people talk about values alignment, they imagine a sort of monolithic cultural architecture in which Muslims and Hindus for example all agree to believe all the same things. Not very likely. Instead, we might imagine that some portions of the human “stack” are quite personal, localized, culturally bound, and unlikely to be shared very far and wide. Other portions of the human “stack” are likely more amenable to global communication, collaboration, and cooperation. To establish collaboration protocols, it will be desirable to be quite precise about what does and does not need to align for people to effectively collaborate at distance.
I think Confucius was on to something with his Rectification of Names idea, i.e. establishing standards for what a proper X ought to behave like. Going back to your previous post about what one organization would need to know about another–at the most basic level, they would have to know that the counterparty is what they say they are. Going deeper, each of the concepts you listed can also be considered in this way, in terms of prescriptive standards. Sharing the same prescriptive definition of financial health, for example, means we can cooperate on improving it.
How do we agree on prescriptive definitions? Promise Theory gives us a sufficiently powerful descriptive language. Anything we would need to agree upon in order to cooperate can be considered an agent making promises. A newspaper article might promise to be factual, relevant, timely, corrected when wrong, and so on. If a given newspaper article breaks any of these promises, it must be possible for independent observers to assess this publicly.
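To make that concrete, here is a minimal sketch in TypeScript. Promise Theory doesn’t prescribe any particular encoding; the type and field names are hypothetical:

```typescript
// Sketch: promises as publicly assessable data (hypothetical shapes).

interface Pledge {              // named Pledge to avoid the built-in Promise type
  body: string;                 // what is promised, e.g. "be factual"
  promiser: string;             // the agent making the promise
}

interface Assessment {
  pledge: Pledge;
  assessor: string;             // any independent observer may assess
  kept: boolean;
  evidence: string;             // must be publicly checkable
}

const article = "newspaper-article-001";
const pledges: Pledge[] = [
  { body: "be factual", promiser: article },
  { body: "be relevant", promiser: article },
  { body: "be timely", promiser: article },
  { body: "be corrected when wrong", promiser: article },
];

// If the article breaks a promise, an observer records a public assessment.
const verdict: Assessment = {
  pledge: pledges[0],
  assessor: "independent-reader-42",
  kept: false,
  evidence: "central claim contradicted by the cited primary source",
};
```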
We can get very precise about what promises should be made by what. For example, here’s a proof agent that defines what a mathematical proof ought to entail. Suppose something comes along claiming to be a proof but doesn’t abide by these standards. Then, we have an agreed-upon definition to compare it to and assess it by. This holds for any of the other concepts you mentioned as well, of course.
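Purely as an illustration of the idea (this is not the linked definition itself), such a proof agent might look something like the following, with hypothetical names:

```typescript
// Hypothetical sketch of a proof agent's promises; not the linked definition.
// Anything claiming to be a proof is assessed against these promises.

interface ProofCandidate {
  statesClaim: boolean;           // promises to state the proposition being proved
  declaresAssumptions: boolean;   // promises to list its axioms and assumptions
  stepsFollow: boolean;           // promises each step follows from prior steps or axioms
  concludesClaim: boolean;        // promises the final step is the stated claim
}

// An agreed-upon definition gives us something to compare candidates to.
function keepsProofPromises(candidate: ProofCandidate): boolean {
  return (
    candidate.statesClaim &&
    candidate.declaresAssumptions &&
    candidate.stepsFollow &&
    candidate.concludesClaim
  );
}
```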
Yes, interesting piece. It can be taken as an argument against rigid, prescriptive definitions. This is where my system diverges from Confucius: prescriptions evolve to suit the current context. If we considered an agent meant to represent the concept of science via promises, this definition would be able to start out Popperian and eventually become Kuhnian through a process of viewpoint evolution. We do not, however, want to dispense with the notion of prescriptivism altogether, as it’s harder to cooperate if we can’t even agree on what should be the case.
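A minimal sketch of how such viewpoint evolution might be represented, assuming definitions are versioned and reference their predecessors (the names are my own, not the protocol’s):

```typescript
// Hypothetical sketch: a prescriptive definition as a versioned chain.
// Each revision references its predecessor, so the definition of "science"
// can start Popperian and become Kuhnian without losing its history.

interface DefinitionVersion {
  concept: string;
  promises: string[];            // what a proper instance of the concept promises
  previous?: DefinitionVersion;  // link to the prior viewpoint, if any
}

const popperian: DefinitionVersion = {
  concept: "science",
  promises: ["make falsifiable claims", "abandon refuted theories"],
};

const kuhnian: DefinitionVersion = {
  concept: "science",
  promises: ["work within a paradigm", "shift paradigms when anomalies accumulate"],
  previous: popperian,           // evolution of the prescription, not a restart
};
```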
Agreed. The Wittgenstein critique is against formalizing a set of rules that will supposedly resolve any potential ambiguities thanks to the rules themselves. Per Wittgenstein, that won’t work. (McGilchrist and others add a lot of other reasons why that won’t work). But yes, any given social relationship does need some particularized agreement on terms and definitions. Every software project needs that, for example.
What I fall back on is the need for recursive processes to drill down to primitives to disambiguate. Such primitives may need to be experiential, pre-rational, and pre-verbal. How, for example, would a Confucian sage know the king is not acting like a real king? The sage must have an intuitive sense of what a king is all about - otherwise the current king making truth claims about kingship while perched upon the royal throne would appear to settle the matter both propositionally and procedurally. Why would any outsider not on the throne have any standing at all to dispute the royal claims?
Your idea seems similar in spirit to the primitives posited by Promise Protocol, but I’m not sure if they could be called pre-verbal. The ontology includes the following, which are represented in cryptographically signed data structures, but which are meant to represent units of experience that don’t contain embedded rational assumptions encoding worldviews:
Intention
Promise
Agent
Everything beyond that is a type of agent that ultimately inherits from an archetypal agent. But Agents themselves are just interfaces for bunches of promises, which themselves are just intentions that have been made public. So everything boils down to intentions, which seem to me like a decent way to build up a model of a complex system, for the very reasons you pointed out - pre-rational, experiential. Pre-verbal is interesting - something I’ll have to think about more.
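In rough code terms, the reduction looks something like this sketch; the shapes are my shorthand, not the protocol’s schema:

```typescript
// Sketch of the reduction: intentions are the primitive; a promise is an
// intention made public; an agent is an interface over a bundle of promises.

interface Intention {
  description: string;        // a unit of experience, no embedded worldview
}

interface PublicPromise extends Intention {
  publishedBy: string;        // publication is what turns an intention
  signature: string;          // into a promise others can assess
}

interface Agent {
  id: string;
  promises: PublicPromise[];  // an agent is just an interface for its promises
}
```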
I’ve been teaching IT security for over 20 years, and one thing is crystal clear. Any digital representation of identity must ultimately be grounded in the physical world. A crypto key is only as secure as its physical storage and the humans who have access to the key. Certificate Authorities supposedly do all sorts of offline identity verification before issuing certificates. The current trend toward multifactor authentication exists because passwords of any length (“something you know”) are inherently less secure than physical devices like key fob code generators (“something you have”) or biometrics (“something you are”). So one simple way to look at all this: if Promise Protocol involves cryptographic assertions (it does), then these assertions need physical grounding and a physical audit trail. This wheel need not be reinvented - HTTPS has all the same issues, and just check out the size of the compliance industry that sprang up around that!
This makes sense. The features I mentioned are a minimal definition of an agent in Promise Protocol, meant to define what MUST be true about an agent. If we were to define what SHOULD be true about agents in a secure and reliable system, we could define an agent derivative (e.g. a Secure Agent) that includes an extra promise requiring physical grounding, and require that derivative of all agents in the system.
Here are the promises that compose a minimal agent definition in Promise Protocol (at least the current version), with a rough sketch in code after the list:
I promise to have a unique content-addressable ID.
I promise to be cryptographically signed.
I promise to expose my public key and verify signatures.
I promise to reference my previous version if it exists.
I promise to emit events when my state changes.
I promise to respond to state queries.
I promise to declare my critical dependencies on other agent types or specific promise domains.
I promise to emit a standardized failure event upon inability to fulfill a committed promise, including a reason code and relevant context.
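Encoded as a checkable structure, that minimal definition might look like the sketch below; the field and method names are my own shorthand, not the protocol’s actual schema:

```typescript
// Sketch: the eight minimal-agent promises above as one interface.
// Field and method names are illustrative shorthand.

interface MinimalAgent {
  contentAddressableId: string;          // 1. unique, derived from content
  signature: string;                     // 2. cryptographically signed
  publicKey: string;                     // 3. exposed for signature verification
  previousVersionId?: string;            // 4. reference to prior version, if any
  emitStateChange(event: object): void;  // 5. events on state change
  queryState(): object;                  // 6. responds to state queries
  criticalDependencies: string[];        // 7. agent types / promise domains relied on
  emitFailure(reasonCode: string, context: object): void; // 8. standardized failure event
}
```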
I agree with you that certain promises related to security ought to be added to this for the purpose of building viable systems for the real world. The purpose of cryptographic signing in this minimal definition isn’t so much to ensure absolute security (it doesn’t) as to ensure skin in the game, i.e. to ensure that some entity has staked value behind the promises that were signed, or behind an assessment of those promises, in order to prevent manipulation (e.g. Sybil attacks).
Sure. Your proposal is built on a lot of prior technical protocol infrastructure, so a recursive drill-down into the foundations of content-addressable IDs and cryptographic signing will take us places like this:
The main reason I do not enforce a physical basis for security in the top-level agent definition is the Interface Segregation Principle, a la SOLID. Since Promise Protocol’s base agent will also be used to define AI agents and others that do not have a physical existence but still need accountability for their promises, I leave physical auditability to extensions to define.
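In interface terms, that segregation might look like this sketch (hypothetical names):

```typescript
// Sketch of the segregation: the base agent carries only the minimal promises;
// physical grounding lives in an optional extension, so purely digital agents
// aren't forced to implement promises they can't keep.

interface BaseAgent {
  contentAddressableId: string;
  signature: string;
}

interface PhysicallyGrounded {
  hardwareKeyStoreId: string;     // e.g. a key held in an HSM or fob
  physicalAuditTrailUri: string;  // where offline verification is recorded
}

// A "Secure Agent" for real-world systems composes both.
type SecureAgent = BaseAgent & PhysicallyGrounded;

// An AI agent implements only the base, staying accountable via signatures.
type AIAgent = BaseAgent;
```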
No complaint with that. It’s pretty clear you are working at OSI layer 7, which assumes all the lower layers (including physical layer), but also abstracts from them, for good reason.
The main reason I am harping on all this is to over-communicate - endlessly and tediously if need be - on why a generally idealistic community like 2R needs to be GROUNDED before reaching for the stars! This untethering tendency shows up in philosophical idealism, love of abstraction, and the search for technical fixes to fundamental existential questions. (Younger versions of myself were very guilty of all of the above.) So yeah, let’s hack some cool new protocols! Just consider that I’m the sort of project manager type who had to learn the hard way that if the HVAC fails, nothing at the application or network layers is going to work either.
I appreciate it, as I’m fairly idealistic and could use sanity checks–especially from someone with your knowledge and experience (also, cybersecurity isn’t my strong point!). I see this as more of a social protocol than an IT protocol. If it’s used to extend IP, then yes, definitely layer 7 of OSI. It could also be used apart from IP altogether and administered over the US mail system. I was more focused on finding a way to come up with a bottom-up definition of merit than on finding yet another way to address censorship, privacy, or any of the other issues that have already received much attention in tech communities.
Give us an example of grounding here. Also, security is never secure enough - no matter how hard you try. An audit trail is already included in the crypto ledger. What else is more secure? What physical audit trail would work?
Yes, I share your frustration about people who are (for whatever reason) hopeful that everything can be fixed through piling up technology. It’s exactly the opposite - as described by the Moloch principle.
The reason why we all bank on technology is that we’re all nerds. Geeks have been doing well for the last 100 years or so, and we hope that the same strategy that worked in the past will ultimately help us again.
I believe that the future belongs to holistic integrators, and there will be diminishing payoff for the super-analytical and super-rationalist. It’s just that we’re not yet seeing them emerge. What we have is the counterweight to techno-optimism - which is, like you said, philosophical idealism.
Another question - what would doing the right thing look like?
Your point about security never being secure enough is well-taken. There is always some residual risk. Efforts to achieve 100% risk reduction run into diminishing marginal returns, so the net risk reduction is worth less than the cost to achieve it.
I likewise agree that the future belongs to holistic integrators. At least, those are the bets I’m placing. (And encouraging students to place.) A recent MIT study responded to the notion of “AI eats software” (an update of the previous conventional wisdom of “software eats world”) with the idea that “philosophy eats AI”. I’d go even further - “holistic practices eat philosophy”. Or to put a McGilchrist spin on it - human = Master, AI = Emissary.
As for how to do the right thing … there’s a question that could use its own thread! My own process involves integrated thinking and feeling, along with multiple layers of social and practical reality testing. Repeated frequently. Your mileage may vary.