My original interest in Vervaeke’s work came from his unusual juxtaposition of the history of philosophy with computer science. That makes his work a deeper-than-usual jumping-off point for discussions of AI and ethics. Hardly the final word, however (on either topic).
Note from the article above (also in reference to Anna Riedl’s recent presentation) that the situated subject is treated with some degree of suspicion. Whatever our current views are, they are assumed to be incomplete, self-deceptive, and flawed in a variety of ways. Recursive Relevance Realization needs to be recursive because we are seeking wisdom, not possessing it in advance. Also - AI did not invent “hallucinations”. Plato, for example, noted those in the Cave. Cognitive science, in many ways, is the science of how we (human and machine) get things wrong and how to do better.
Here is an example of some straight-up discussion about AI. No mention of Vervaeke, McGilchrist, etc. But notice: the whole thing about “small world” representation and predictive processing is exactly what these two and their many collaborators discuss all the time.
My analog for the “strictly-human-in-relation-to-other-humans-who-needs-AI?” space is: do we really need Theories of Everything that run to thousands of pages and try to schematically map cosmic evolution in rich detail? Seems like a lot of overhead. The article above suggests a Cliff’s Notes version may be just as effective for practical purposes.
This is a very heavy subject, indeed. I’ll admit I’m just starting to read about the perspective Vervaeke offers; until now my focus was solely on the threats an AGI would pose to humanity by its nature alone. Having watched videos from channels like Robert Miles AI Safety and Rational Animations, along with sci-fi and other works by AI experts warning of the dangers, I was left with many uncomfortable scenarios. A couple of very salient ones are “you can’t put the genie back in the bottle once it’s out” and “humans to an AGI will be like ants are to humans”.
On the deeper subject of ethics, that’s a very interesting perspective. I had thought the biggest danger of a super-intelligent entity was that it would be “supremely rational”, cold, calculating, lacking empathy. But the fact that on top of that it may well be a victim of embedded or learned biases complicates things further. We’ve seen this already many times with current Gen-AI systems, but apply that to a true AI/AGI, and it becomes quite terrifying.
How can humans give AI qualities like empathy or kindness? Can they? Finding a model that rewards AI for such qualities, essentially giving it “feelings”, seems unlikely. What is it within ourselves that provides us with those qualities? Is it consciousness itself? Some part(s) of the brain? Why do some humans not even have those at all, or lose them along the way? Looks like I have a lot more reading to do!
They probably can’t. In working on a recent beginners’ chapter on AI, I noted that evaluating artificial intelligence requires a prior grasp of what human intelligence is. If we set that metric narrowly - “good at chess”, or even passing the Turing Test - AI blew through it long ago. If we set a wider metric, like “forms pro-social, loving relationships, up to and including potential self-sacrifice”, I doubt AI will ever really measure up. (Apologies to Commander Data and other noble sci-fi AI characters.)
To get some grasp on these matters - because human psychology is not part of the classic CS curriculum - I’ve recently been learning psychology through the UTOK community. A couple sample links are below. The first of these is the UTOK theory of mind.
The key to this is that AI works entirely on the levels of Mind 3a-3b. LLMs are language models. They use probabilistic math to anticipate next moves in symbolic sequences. What AI lacks entirely is Mind 1, which is animal, mammalian, and bio-evolutionary in nature. Are we likely to have psychotherapy to help AI access its bottled-up feelings or process childhood trauma recorded in its silicon body? Not very likely! All of that is about helping Mind 1 energies work their way through the ego (Mind 2) and out to social relationships (Mind 3).
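To make the “anticipating next moves in symbolic sequences” point concrete, here is a deliberately toy sketch in Python (my own illustration, not anything from UTOK or the article above): a bigram counter that predicts the next word as a conditional probability. An LLM does essentially this with vastly more context and parameters, but it is still prediction over symbols - Mind 3 territory, in UTOK terms, with nothing of Mind 1 in it.

```python
# Toy next-symbol predictor (illustrative sketch only).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count bigrams: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev_word):
    """Return P(next | prev) as a dict - prediction over symbols, nothing more."""
    counts = follows[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(predict("the"))  # e.g. {'cat': 0.5, 'mat': 0.25, 'rat': 0.25}
```

The point of the sketch is not the mechanism (real models replace counting with learned parameters) but the category: everything here is statistics over symbol sequences, with no embodied, felt, evolutionary layer underneath.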
On the question of human relationships, the video below is a brief UTOK snippet on empathy in therapy. (Note that UTOK learns from and incorporates a lot of prior psychology, Carl Rogers in this case.) Rogerian humanistic psychology calls for highly empathetic human relationships. There are many practice groups in and around 2R that are essentially Rogerian in their empathetic orientation. All of these groups will be doing something or other to access feelings, get into the body, sense into experience, or otherwise tap into what UTOK calls Mind 1. (It occurs to me just now that we could put this to the test by adding a chatbot to a T-group, a circle, or a personal development pod! Anyone game for the experiment?)
It fascinates me how we have all this recent critique of LLM sycophancy, and at the same time we want them to be empathetic and kind. Maybe the sycophancy is the closest they can get to what we want? I’m quickly becoming allergic to this kind of sycophancy, whether from AI or humans. Or to be more accurate, I find myself falling for it again and again, but for shorter times before the allergy sets in.
This actually came up in an Intentional Society breakout a couple of weeks ago. I was in a discussion with @ola_o and one other person about “building relationships at the speed of trust”. As a sort of reductio ad absurdum, I wondered out loud whether inserting ChatGPT into our group process would accelerate trust-building. (Rather the opposite, I do in fact imagine!)