I’m working with a group on a technical platform called ‘Synergy Engine’. A mock-up demo link is below. The idea is to improve collaboration in the not-for-profit sector through an AI-infused platform. There is also a larger framing about education reform.
My students are going to engage with this project in the fall. Meanwhile, I have been working with the development team on a theoretical level, and in that context I introduced them to general 2R themes and goals. The developers are looking for input, and we were wondering if 2R might like a research call on this topic.
Huh, I somehow missed this post earlier - this sounds super interesting. Would be great to have a research call on this, yes - feel free to put your name down for a free week.
Just as a teaser … in my latest session with the dev teams working on Synergy Engine, I recommended that they mock up whatever the Synergy process is with low-tech human-to-human behavioral analysis, then figure out what part of that, if any, might benefit from automation. Analyzing that human-to-human dimension of synergy generation is a game any of us can play anytime. (I’m going to make it an assignment for all my students in the upcoming quarter.) What makes a relationship work? How do you know it’s working? What are the signals or attributes of a potential partner or collaborator that point to likely success in the relationship? It strikes me we ought to have some degree of personal clarity about matters like these before we simply trust whatever AI might have to say about them.
Really stimulating questions, @RobertBunge, and I wholly support the cross-comparison of human-human interaction and human-AI (or, more generally, human-tech) interaction. Please feel free to add yourself to the research call schedule spreadsheet, or ask if you want us to do that for you.
I might also add awareness of the negative side. What alerts you to communication failure in human-human interactions? What makes you notice that your communicative intention is being received or interpreted in a way that is untrue to that intention? What are the signs of possible or likely malign, disruptive, or simply difficult intent? These questions are highly relevant to the process of Ontological Commoning, and point towards a need for something of that nature.
On the technical side, all of this will have profound implications for information security, to whatever extent we allow AI or other platforms to mediate human relationships. But as you rightly point out, even without technology involved, human-to-human trust building is not unproblematic. Who is the real me? Who is the real you? By what process do we ground our perceptions of who is who and what is what?
Here is some of the metatheory that generally informs my work with World Systems Solutions:
While I don’t agree on every detail, my general take on the flow of big-picture history runs parallel to that of Cadell Last and Michel Bauwens.
See especially point 8 in the linked article above, on Karatani’s Mode of Exchange D. The key is that Mode D “transcends and reintegrates” prior modes (including reciprocity, state, and capital). Mode D is specifically a revitalization of reciprocity, but built on the backs of structures created in the eras of state and capital respectively. Think of villages with global networks, as opposed to Neolithic villages limited to single valleys.
As to my work with WSS, this sentence from point 10 in the linked article pretty much sums it up:
“developing software, tools, currencies, and social protocols that reimagine cosmo-local coordination and value.”