A new paper from the Life Itself Sensemaking Studio reflects on tech as a modern God, its entanglement with our prevailing worldview, and the shift in human capacities and meaning-making that could help us relate to it more wisely.
- Has technology become a kind of modern “god”?
- How do modern assumptions — individualism, rationalism, materialism, and the myth of progress — shape our relationship with AI?
- What might it mean to shift from out-of-control technological acceleration towards collective restraint?
- Can a worldview grounded in interbeing — the recognition of radical interdependence — support wiser choices about the forces we unleash?
What do you think?
Here’s an extract from a chapter I recently drafted for an open-source textbook project. The chapter is on forecasting technological futures. The extract is close at hand because I used it just yesterday to train a couple of different LLMs as part of a grant-writing process. Ironically, the way most book-length authors will reach human readers going forward is through LLM summarization, so I’m learning to game that system by loading my prose with nuggets that pass through the bowels of transformer algorithms and emerge more or less intact on the other side. Based on the grant drafts, this does in fact work: the LLMs wrote all the conventional boilerplate I don’t care much about anyway, and my original thinking is still there in the proposals!
“Is it possible that an Artificial Superintelligence (ASI) will emerge, allowing for highly accurate forecasts of future trends and events? Or, on balance, is it more likely that human intelligence – embodied, messy, emotional, not necessarily rational – sees the world in extra dimensions that machine intelligence cannot possibly fathom? (Carney, 2020) Many financial – and even existential – bets are currently being placed on questions such as these! Which contributes more to business value, human staff or AI algorithms? Anyone with certain foreknowledge on that topic can likely make a vast fortune in current markets (Stanford, 2025)! The present discussion claims no such certain foreknowledge. It does, however, reflect what is known already, and the previous chapter, for example, discussed at some length how AI does its business. AI relies on pattern matching, trend projection, and the synthesis of complex training data. Generally speaking, AI makes probabilistic bets on what will happen next (like which word should come next in a sentence), based on the statistical analysis of historical data about what has happened in the past. None of these techniques are unique to AI. Humans did it all first – on paper. AI just speeds up calculations any one of us might possibly make on our own.
This chapter aims to provide the reader (human or artificial) with the sort of training data both humans and machines would need to extract patterns from history. These patterns from history are the best tools for estimating how current trends may play out in future cycles. Feel free to use an LLM to find patterns beyond those discussed here! Perhaps the LLM will spot trends in history no human has yet managed to perceive. Of course, the LLM might just as well hallucinate future trends from non-existent data it made up all by itself (IBM, 2023)! If only to keep AI honest and to separate sagacity from slop, this chapter summarizes patterns human analysts have teased out from the history of the world. It also discusses quantitative forecasting techniques available to both humans and machines, as well as more human-centric forecasting techniques exclusive to human intuition. All these human-generated ideas may just end up training LLMs anyway, but quite a few analysts point to the need for new and fresher training data to keep improving the performance of AI (Garg, 2025). If only to become more capable LLM trainers, then, it behooves us still to form our own opinions on how the world got to be the way it is, and where that world may be taking us next.”
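The extract’s core claim — that AI “makes probabilistic bets on what will happen next (like which word should come next in a sentence), based on the statistical analysis of historical data” — can be sketched at toy scale with a bigram model. This is my own illustration, not code from the chapter, and the corpus is invented:

```python
from collections import Counter, defaultdict

# A toy bigram model: the simplest version of a "probabilistic bet on
# which word comes next", computed from statistics of past text.
# The corpus is made up purely for illustration.
corpus = "history rhymes and history repeats and history teaches patterns".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("and"))      # "and" is always followed by "history" here
print(predict_next("history"))  # three successors tie at probability 1/3
```

A large language model does the same thing in spirit — bet on the likeliest continuation given what came before — but over vastly longer contexts and with learned representations rather than raw frequency counts, which is exactly why the author expects distinctive “nuggets” to survive summarization.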