Hello everyone,
This is my first post here. I apologize if I’m missing any norms or important context.
Yesterday, Google DeepMind released Veo 3, a new video generation model. For the first time, AI-generated video appears to convincingly mimic real live-action footage. While many of us anticipated this moment, it now feels both real and urgent.
Here are a few examples of what Veo 3 can do:
• Instagram Clip
• Former Commercial Director Reflects on AI’s Impact (Reddit)
This raises serious concerns about the future of reality, trust, and coherence in our digital environments. As these tools become easier to use and more widely adopted, we may face a flood of misinformation, fantasy, and synthetic media that feels more emotionally gripping than actual footage.
I would love to start a conversation around the following questions:
• How can we protect existing platforms from the influence of AI-generated misinformation?
• Is it possible or desirable to build alternative online spaces where AI is limited or regulated to preserve coherent human discourse?
• What role can in-person communities play in providing resilience and grounding amidst an increasingly artificial media landscape?
• What other strategies, frameworks, or forms of coordination should we be exploring right now?
I’m looking forward to hearing your thoughts and building something constructive together.
Hi @ShaneHanlon, thanks for contributing!
Short answer: focus on face-to-face community. Build trust through fully embodied relationships, then extend those relationships into digital communications through a chain-of-trust model. The only FB group I still follow is the one attached to my local community face-to-face discussion group. I got to 2R through chain-of-trust-grounded personal relationships. I won’t participate in anything online that is just wide open to spoofing or manipulation. Assume anything on the Internet is fake unless there is reason to believe otherwise.
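To make the chain-of-trust idea concrete, here is a minimal sketch in Python using Ed25519 signatures. To be clear, this is just an illustration of the pattern, not any real system: the names, the voucher format, and the single-link chain are all assumptions for the example.

```python
# Minimal sketch of a chain-of-trust, under illustrative assumptions:
# Alice trusts Bob's key because they exchanged keys face-to-face;
# Bob vouches for Carol by signing her public key; Alice can then
# verify a message from Carol without ever having met her.
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def raw(pub: Ed25519PublicKey) -> bytes:
    """Serialize a public key to its raw 32-byte form."""
    return pub.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )


# Everyone generates a keypair; Alice and Bob swap public keys in person.
alice, bob, carol = (Ed25519PrivateKey.generate() for _ in range(3))
alice_roots = {raw(bob.public_key())}  # Alice's in-person root of trust

# Bob vouches for Carol by signing her public key.
voucher = bob.sign(raw(carol.public_key()))

# Carol signs a message to Alice.
message = b"Hello Alice, Bob vouches for my key."
signature = carol.sign(message)


def verify_via_chain(msg, sig, sender_pub, voucher_sig, voucher_pub, roots):
    """Accept msg only if a key in `roots` vouches for the sender's key
    and the sender's signature on msg checks out."""
    if raw(voucher_pub) not in roots:
        return False  # the voucher doesn't trace back to an in-person key
    try:
        voucher_pub.verify(voucher_sig, raw(sender_pub))  # Bob -> Carol link
        sender_pub.verify(sig, msg)                       # Carol -> message link
    except InvalidSignature:
        return False
    return True


# True: the chain Alice -> Bob -> Carol holds end to end.
print(verify_via_chain(message, signature, carol.public_key(),
                       voucher, bob.public_key(), alice_roots))
```

The design point is that the only unconditional trust anchor is a key exchanged face-to-face; every further hop is verified rather than assumed, which is the same stance as “assume anything on the Internet is fake unless there is reason to believe otherwise.”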
This is great personal advice, Robert! Thank you.
I do wonder, though. In a world in the midst of converging, imminent crises, addressing any one of them sufficiently requires good collective sensemaking, and addressing them all sufficiently requires great collective sensemaking and great coordination. My gut feeling is that we don’t get great anything in time if AI degrades what little coherent sensemaking we have left.
I feel we have to collectively and proactively address the hit to collective sensemaking that is coming. Otherwise we will be even further away from a loving, conscious, and kind world than we already are.
No disagreement with any of that. My focus is on praxis and technique, namely: how?
A few years ago the image of Odysseus lashed to the mast got stuck in my imagination. According to William Irwin Thompson, Odysseus is the first literary example of “ego” in the way we currently understand that idea: a self-referencing, choice-making actor in the world. We might imagine the Sirens as everything that distracts from such self-coherence. AI will function quite effectively in the Siren role, no doubt.
Beyond Odysseus, I also reflected around that time on Arjuna’s dilemma in the Bhagavad Gita, where Krishna’s advice to Arjuna is to take action, grounded in being. Then I worked through Vervaeke’s series on the Meaning Crisis, reflecting among other things on Tillich’s notion of the Ground of Being. And so on. The gist of all this is that, through broadly Jungian processes, I worked through the available theory, practices, and symbolism to identify a process and a practice of what Vervaeke would term relevance realization. It’s not for me to say what “grounded” means for someone else. But I’m pretty confident the ungrounded will be easily swept away.
Yes, we must be grounded in the Beauty and Love of our own beings. I will take that insight forward into all that I do. And try to embed it in the collective space I am a part of.