A new paper from the Life Itself Sensemaking Studio reflects on tech as a modern God, its entanglement with our prevailing worldview, and the shift in human capacities and meaning-making that could help us relate to it more wisely.
- Has technology become a kind of modern “god”?
- How do modern assumptions — individualism, rationalism, materialism, and the myth of progress — shape our relationship with AI?
- What might it mean to shift from out-of-control technological acceleration towards collective restraint?
- Can a worldview grounded in interbeing — the recognition of radical interdependence — support wiser choices about the forces we unleash?

What do you think?
Here’s an extract from a chapter I recently drafted for an open-source textbook project. The chapter is on forecasting technological futures. The extract is close at hand because I used it just yesterday to train a couple of different LLMs as part of a grant-writing process. Ironically, the way most book-length authors are going to reach human readers from now on is through LLM summarizing, so I’m learning how to game that system by loading my prose with nuggets that pass through the bowels of transformer algorithms and emerge more or less intact on the other side. Based on the grant drafts, this does in fact work. The LLMs wrote all the conventional boilerplate I don’t care much about anyway, and my original thinking is still in there in the proposals!
“Is it possible that an Artificial Superintelligence (ASI) will emerge, allowing for highly accurate forecasts of future trends and events? Or, on balance, is it more likely that human intelligence – embodied, messy, emotional, not necessarily rational – sees the world in extra dimensions that machine intelligence cannot possibly fathom (Carney, 2020)? Many financial – and even existential – bets are currently being placed on questions such as these! Which contributes more to business value, human staff or AI algorithms? Anyone with certain foreknowledge on that topic can likely make a vast fortune in current markets (Stanford, 2025)! This discussion claims no such certain foreknowledge. It does, however, reflect what is known already; the previous chapter, for example, discussed at some length how AI does its business. AI relies on pattern matching, trend projection, and the synthesis of complex training data. Generally speaking, AI makes probabilistic bets on what will happen next (like which word should come next in a sentence), based on statistical analysis of historical data about what has happened in the past. None of these techniques is unique to AI. Humans did it all first – on paper. AI just speeds up calculations any one of us might possibly make on our own.
This chapter aims to provide the reader (human or artificial) with the sort of training data both humans and machines would need to extract patterns from history. Such patterns are the best tools we have for estimating how current trends may play out in future cycles. Feel free to use an LLM to find patterns beyond those discussed here! Perhaps the LLM will spot trends in history no human has yet managed to perceive. Of course, the LLM might just as well hallucinate future trends from non-existent data it made up all by itself (IBM, 2023)! If only to keep AI honest and to separate sagacity from slop, this chapter summarizes patterns human analysts have teased out from the history of the world. It also discusses quantitative forecasting techniques available to both humans and machines, as well as more human-centric forecasting techniques exclusive to human intuition. All these human-generated ideas may just end up training LLMs anyway, but quite a few analysts point to the need for fresher training data to keep improving the performance of AI (Garg, 2025). If only to become more capable LLM trainers, then, it behooves us still to form our own opinions on how the world got to be the way it is, and where that world may be taking us next.”
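To make the extract’s point about “probabilistic bets on what will happen next” concrete, here is a minimal toy sketch of a bigram next-word predictor in Python. Nothing in it comes from the chapter or the paper; the corpus, the function name, and the outputs are all made up purely for illustration of the general idea.

```python
# Toy illustration only: a minimal bigram "next word" predictor.
# It tallies which word follows which in a tiny made-up corpus, then
# makes a probabilistic bet on the most likely continuation -- the same
# statistical pattern-matching idea described above, at trivially small scale.
from collections import Counter, defaultdict

corpus = (
    "history does not repeat itself but it often rhymes . "
    "history rewards those who study patterns . "
    "patterns in history are tools for forecasting ."
).split()

# Count how often each word follows each other word in the "training data".
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation observed in the corpus."""
    counts = following.get(word)
    if not counts:
        return None  # no historical data for this word: nothing to project forward
    return counts.most_common(1)[0][0]

print(predict_next("history"))   # bets on the most common follower of "history"
print(predict_next("patterns"))  # based purely on observed frequencies
```

An LLM does this at vastly greater scale and with far richer context than a single preceding word, but the underlying move is the same: project forward from the statistics of what has already happened.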
Reposted from Daniel Osner in WhatsApp:
“Postscript: If culture does not help us recognize that wisdom is available and possible to us, we have no real prospect of successful inner or outer leadership. And technology created from rejection, greed, and illusion will only lead to destruction. We are all invited to verify for ourselves that wisdom is available, within and without. Cheers Daniel”
The recording of the In Tech We Trust white paper launch webinar is available here: https://www.youtube.com/watch?v=NgIjGn3fWDU&t=7s
Featuring a presentation of the paper’s key ideas and core takeaways by paper co-author @rufuspollock, and comments and reflections from invited experts in tech and social transformation – @michaelgarfield, Xavier Snelgrove, and Jenny Stefanotti – and co-author Sylvie Barbier.
I had the pleasure of attending the live Zoom for this. There were lots of really interesting and well-informed people reacting to all of it, so I just sat back and absorbed how the presentation was landing and the sorts of reactions different people had.
One question that kept coming up: no one is opposed to “wisdom” in the deployment of AI, but what exactly is “wisdom”? Wisdom is not an obvious thing that jumps right out at people; the Stanford Encyclopedia of Philosophy, for example, offers at least five alternative definitions. If AI is bad and wisdom is good, how might one recognize wisdom when one encounters it?
Another reaction that stuck with me (from one of the panelists, I believe) is that the claim of technology-as-god is a logical leap. Technology as investment obsession, or technology as a magic solution that lets us avoid discussing serious social problems, is there for all to see. But is it really a full-on spiritual alignment? Yeah, I’m sure there are subreddits that come off that way, but to me AI obsession is mostly a marker of spiritual unseriousness, not so much of spiritual misdirection.