Jordan Hall & The Great Transition

I just finished reading Jordan Hall’s three most recent Substack articles: The Coming Great Transition v 2.0, The Great Transition: The Divine Economy, & The Coming Great Transition: Healing and the Church.

They seem to me to offer an interesting perspective that brings together AI & Christianity in a way I haven’t seen before. My first reaction is that it’s profound, provocative, and accurate, and I don’t fully trust that reaction. I would love to hear people’s thoughts about this. Is there something I’m missing? A glaring blind spot?

One specific question could be: Why the focus on Christianity? Somehow I like it, but I don’t understand why.

All thoughts, feelings & reactions welcome.

1 Like

First things first. Jordan converted to Christianity a couple of years ago. Prior to that he was regarded as one of the leading secular lights of the liminal web, generally associated with channels like GameB. How he is received now is complicated by how his newfound religious affiliation lands.

1 Like

For example, this recent post by Daniel Thorson: The Buddha Womb. Referring to two perspectives, one Buddhist and one Christian, Daniel says “These aren’t just parallel teachings. They’re the same perception rendered through different bodies, different centuries, different ontologies. And I think they’re both accurate. I think they’re both describing something that can be directly experienced, right now, in the body, if perception gets clear enough.”

Jordan says in The Coming Great Transition v 2.0 “Now, a lot of other religions are going to object. *What about us?* As a Christian, I’m committed to the Christian worldview, and I do think it’s the right one and the only one. But for the purposes of this essay, here is what I’ll say…”

What does Jordan mean by “I do think it’s the right one and the only one”? Why isn’t he taking Daniel’s perspective?

1 Like

Here is Jordan on the Jim Rutt show discussing his conversion. I recommend you read it directly from his perspective:

My take is that Jordan found an evangelical community he likes in North Carolina and is now taking on the flavor of the broth he is stewing in. Throwing elbows at everybody else’s religion is part of that culture. To me, it is not convincing at all. Jordan is still a very bright fellow, and I would happily consider his views on AI, the metacrisis, or most anything else. But evangelical theological smugness is not a look that appeals to me at all.

3 Likes

Well, he won’t be perceived as a secular light anymore, but he’s as relevant a thinker as he’s ever been.

2 Likes

Just now read The Coming Great Transition 2.0. Interesting for sure. If there is appetite in this thread to unpack it, I’d gladly participate. There are a number of AI-related statements in the essay that can be viewed differently, but yes, it’s got some original notions worth considering. For example:

“**Pistis is the only mechanism that enables scalable coordination without centralized control.**”

In my recent textbook chapter on AI, I cribbed off of Vervaeke, who has an episode on Hobbes vs Descartes vs Pascal on cognitive theory, including AI (yes, these guys were indeed debating AI in the 17th century!). Pistis is Pascal all the way. I’m team Pascal, and maybe so is Jordan Hall.

My take on “pistis” (faith) though is completely detached from any sense of Christian dogmatism or theologically-specified metaphysics. When Pascal says “the heart has its reasons, that reason knows not of”, he nails what’s lacking in AGI hype. But he also implicitly creates the preconditions for stepping away from dogmatic theology. “Fire. God of Abraham, God of Isaac, God of Jacob, not of the philosophers and the scholars. I will not forget thy word. Amen.” is not a dogma. It’s a lived experience.

2 Likes

The Three Forms of Jordan Hall’s Unkindness

Sure, Jordan Hall is right about one thing: something is breaking. Institutions are losing legitimacy, meaning is fraying, and technological change is going to reorganise large parts of society. On this, he is perceptive and often insightful.

I am concerned that he has upgraded himself from thought leader to prophet, and my heart sinks when I see people describing themselves as “fans” and buying into package deals. I wanted to reply in some detail to this post by you, @ianheffernan, because, frankly… the devil is in the detail.

Where I part company with Hall is not in his diagnosis of disruption, but in the social architecture he proposes in response. Across his recent essays, a pattern emerges — a pattern I can only describe as three forms of unkindness: rhetorical, architectural, and institutional.

This is not unkindness as insult or tone. It is unkindness as design — who is included, who is excluded, and who is simply not considered.

The core concern is this: across these essays, the proposed future is not organised around improving systems for everyone, but around building new systems for those who can qualify for them. The issue is not innovation. The issue is whether the future is being built as a common project, or as a selective exit for the capable.


I. The Rhetorical Unkindness

In the first essay, the unkindness is rhetorical.

Hall describes a coming bifurcation: those who become “superpowered nodes” — technologically capable, networked, adaptive — and those who drift into what he calls “Mouse Utopia.” He suggests that most people will choose comfort, passivity, and managed decline, while a smaller group will step into agency and transformation.

The problem here is not that he predicts differentiation in outcomes. That is likely true. The problem is the moral framing. The majority are described in terms that hover somewhere between pity and disdain, while the minority are framed in near-spiritual terms — those who cross the Red Sea, those who enter the Kingdom.

This is not a neutral description of social change. It is a moral sorting story. He is not only describing who will have more or less; he is implicitly describing who the future is being designed for, and who it is not.

The vulnerable, in this story, are either not really vulnerable (because anyone could become a “superpowered node” if they tried), or they are part of the large population that will have a “sad ending.” Neither position shows much interest in the structural realities of people’s lives: illness, disability, trauma, caring responsibilities, unequal education, unequal starting points, and simple differences in temperament and ability.

The unkindness here is rhetorical because it frames a large portion of humanity as spiritually or psychologically inadequate to the future.


II. The Architectural Unkindness

In the second essay, the unkindness becomes architectural.

Hall’s analysis of twentieth-century consolidation is often clear and useful. Large organisations formed because of real constraints: information-processing limits, coordination costs, regulatory overhead, and the need for capital-intensive infrastructure. He argues that AI reduces these constraints and allows smaller, more local, more agile entities to compete again.

This is a plausible and interesting argument.

But then he makes a huge leap. From the claim that AI changes the economics of scale, he moves to the claim that the optimal system is a church-anchored, vocation-centred, trace-data-visible economic network — what he calls the “Divine Economy.”

This leap is not argued structurally; it is asserted rhetorically. Many other possibilities could follow from the same technological shift: worker cooperatives, community-owned platforms, municipal enterprises, strengthened public infrastructure, sectoral bargaining, universal basic services, and hybrid models we have not yet invented. He does not argue against these alternatives. He simply does not see them as possibilities. The technological shift does not logically imply one specific social form.

But more importantly, the proposed system has entry requirements: you must be entrepreneurial or highly skilled, you must be able to produce visible trace data, you must orient around vocation, and you must be embedded in the right kind of religious community.

Those who do not meet these criteria are not attacked. They are simply not designed for. They remain in the legacy system, which is simultaneously described as dying.

This is architectural unkindness: a system in which the benefits of the future are structurally available to some, while others are left in a system described as obsolete.


III. The Institutional Unkindness

In the third essay, the unkindness becomes institutional.

Here Hall turns to medicine. He correctly observes that regulatory systems designed for mass-produced drugs are poorly suited to highly personalised therapies. This is a real problem. Regulatory frameworks do create friction for personalised medicine.

But his proposed solution is that religious institutions should use religious freedom law to operate outside medical regulation — creating Christian Healing Sanctuaries that can administer unapproved, experimental, personalised treatments to their members.

This is no longer just rhetoric or economic design. This is about who gets access to experimental medicine, under what oversight, and with what accountability.

If such a system exists outside public oversight, then cutting-edge treatments (or harms) become available to those inside a religious network, while those outside remain in the slower, regulated system. If something goes wrong, the legal structure is designed specifically to prevent external scrutiny. The RFRA (Religious Freedom Restoration Act) claim is not a side effect; it is the mechanism. The system is built to be legally unaccountable.

At this point, the unkindness is institutional: the most advanced forms of care are available to those inside the network, and structurally unavailable to those outside it.


IV. The Pattern

Across the three essays, a pattern emerges:

| Essay | Form of Unkindness | Who Is Excluded |
| --- | --- | --- |
| First | Rhetorical | Those who do not become “superpowered nodes” |
| Second | Architectural | Those who are not entrepreneurial, technically capable, or religiously aligned |
| Third | Institutional | Those outside the religious and legal structure |

At each stage, the circle tightens. What begins as a story about human transformation becomes a set of economic structures, and then a set of institutions, that consistently advantage those inside a particular network and leave others in a declining system.

This is not just a vision of the future. It is a sorting mechanism.


V. What I Reject — and What I Hold

I do not reject the claim that our current systems are failing. I do not reject the claim that new forms of organisation, new economic models, and new medical paradigms are coming. Those things are likely true.

This is not an argument against innovation, decentralisation, or new institutional forms. It is an argument against building them in ways that abandon those who cannot simply opt in.

The deeper philosophical disagreement underneath all of this is an old one. It is the argument between universalism and the idea of the elect — between societies organised around systems that work, however imperfectly, for everyone, and societies organised around high-functioning networks that work very well for those inside them, while others fall behind or are left out.

The core question of the coming century is not whether new systems will be built. They will be.
The real question is this:

Will the systems we build be universal systems, designed so that even the vulnerable remain inside the circle — or selective systems, designed primarily for those who can keep up?

There are many possible responses to institutional failure that do not involve building gated systems: rebuilding public infrastructure, cooperative ownership models, community-owned digital platforms, regulatory reform for personalised medicine, and new hybrid institutions that combine innovation with public accountability. The choice is not between stagnation and secession. There is a third path: renewal.

The future does not need prophets dividing humanity into the saved and the left behind. It needs people willing to do something slower and less dramatic: build institutions that work, experiment without abandoning public accountability, care for each other, protect the vulnerable, and remain humble about what we do not yet understand.

That is a harder path than crossing the Red Sea.
But it is the only one worth taking.

4 Likes

Prior to that he showed a predilection for elitist thinking, which might have predicted his attraction to renunciate religion. Some regarded him as one of the leading secular lights of the liminal web, generally associated with channels like GameB, where there remains an unfortunate elitist shadow. How he is received now, I hope, is further narrowed by how his newfound religious affiliation lands.

1 Like

Absolutely! I can’t claim to know at any deep level why some groups of people adopt this sense of being the “elite”, “elect”, “saved”, “illuminati”, “chosen”, etc. but it is clearly endemic, with so many examples across history. Who knows, maybe this tendency goes right back to the origins of homo sapiens?

I also see a strong irony here. The original, early Christianity that I perceive is exactly one in which the values are clearly in favour of the poor, the weak, the oppressed, the dispossessed, and the like. And of course “catholic” originally (without the capital “C” of Roman Catholic, see Wikipedia) meant something like “all-embracing”. So I am dismayed by people who take Christianity and distort it into an elitist cult.

1 Like

In the rhetorical unkindness you seem to be acknowledging that what he’s saying is likely true, but you dislike his “moral framing”. If we agree that “the future” won’t be very kind to a large portion of humanity, how would you prefer it be reworded to offend less?

Regarding “architectural unkindness” (I’m just loving the concepts that the AI comes up with): he’s pushing his vision, so why does he need to consider alternatives that don’t seem viable to him? Let others put forward their ideas and make some practical steps towards their implementation. Does every article or piece of thinking need to be perfectly balanced so it doesn’t offend anyone and doesn’t advocate anything? How can pumping smoke into the atmosphere make things clearer?

Institutional unkindness: you need to relate to a source in order to draw from it. To get access, you need to establish some sort of positive engagement. Our world is at best selectively accountable as is; all social institutions correspond to the Confucian or Japanese concept of “uchi”. That translates into systems: boundaries and distinctions between what’s inside and what doesn’t belong to it. Concepts of “we” or “public” are interesting; we’re talking about a group which is not identifiable…

No fundamental difference - every society works, however imperfectly, for everyone - as long as they are allowed to live. The point is that there are only levels of imperfection, but these are subjective.

The future won’t be shaped by the ones who are talking about things but the ones who are building it. For better or for worse…

In skimming the first article I just got really bored. I was getting a “jerked-around” feeling, like I was supposed to be a dog on somebody else’s leash. But out of respect for @ianheffernan I forced myself to read it carefully and yes, there are some serious ideas in there. Thanks for unpacking so much of it! (“Serious” is not the same as “I agree with it”, BTW).

On the substance, responding to @Martin’s responses seems most advisable, so more down there…

History has never been especially fair! We do seem very headed toward an epoch of “devil take the hindmost”. This whole business about thinking one is a member of the “elect” and the rest of the world can just go burn is US Culture 101. That attitude came over on the Mayflower. Every ridiculous, cultish, strip-mall church in the US thinks they are the “elect”. They also have very customized readings of the Book of Revelation and can map those passages with great precision to whatever Trump is posting lately on Truth Social. “Offensive” is probably one of the gentler words that could be applied to the whole business.

The problem space that engages me is, given that shite culture, how can one best navigate it?

1 Like

To me, this is the elitist vibe that @Gen is flagging: handing people the “correct” answer, so to speak, which also involves debunking all of the alternative answers. Makes me think Jordan Hall has missed his calling … he should have been a professor!

I think that the role of “elites” is something you claim rather than build a rationale around. It comes by virtue of being able to “shape” things.

Again, you might dislike his vibe (and I surely do!), but how’s that relevant?

We won’t navigate our way out by writing critical articles about morality is my thinking. It will be more like “build a better mousetrap”.

Let me just take The Coming Great Transition 2.0 as one close reading example.

The section headed the “Bad News” contains 7 paragraphs. Every single one of them is arguable and subject to counter-argument. I am absolutely NOT swallowing any of those assertions whole! Yet the rhetorical style rolls on like a bulldozer, plowing through any potential reasoned opposition to any of it. The net effect is brow-beating. I felt exhausted just by this first paragraph. Do I have any incentive to argue with this guy? Not really. Better things to do, for sure. But again @ianheffernan asked a very sincere question about the article, and the better angels of my nature nudged me in the direction of at least trying to be constructive.

So yeah, “vibe” matters. In the trying-to-be-constructive vein, let me select one of Hall’s points and present it in a different mode.

Hall: “If you’re a SaaS company, your days are numbered. And frankly, pretty much everything else is probably out too. I don’t see a reason for most of the software infrastructure out there (outside of the big AI platforms of course).”

Me: I pretty much study this topic full time. Hall’s POV is out there as a thesis, but so are alternative theses. Many of my students, colleagues, and community members work for SaaS companies. The head of my advisory board works for Snowflake. I am not a tourist in this space.

Whether AI is the end of SaaS appears, on balance, to depend largely on the value proposition of the particular SaaS, the user stories it supports, the moats it has vis-a-vis the rest of the market, and frankly, how adept the particular SaaS offering is at leveraging AI for its features. SaaS, in general, has a huge moat around legacy data, governance, compliance, audit, and IT security. OpenClaw is a complete joke for anything serious in the enterprise space. Not to mention that Google, Anthropic and the other big dogs are creating OpenClaw competitors that will likely exclude that particular tool from any serious enterprise deployment.

Do you see my point? I’m not the sort of reader Jordan Hall needs to be talking down to about SaaS and AI. And yet, he is absolutely talking down. Not just to me, either.

I’m afraid I think Hall is spot on here… This is the same prediction that Citrine made recently.

In the near future we’ll have swarms of AI agents building everything imaginable, and it will cost cents, both in terms of the effort to write software and the cost of infrastructure. That will surely kill any SaaS model and software companies.

If I were a computing student, I’d drop it right now. Sure, there will be Palantir and a handful of others tasked with surveillance, but that’s probably not what you had in mind…

Yeah, if anyone is reckless enough to expose their proprietary data to anybody’s agent with no further ado. Vibe coding a UI is one thing. Vibe coding vast data lakes is not happening (except by way of hallucination). Is anyone seriously going to expose decades of customer data currently in Salesforce to the likes of OpenClaw? Granted, humans who handcraft spreadsheets based on queries about that Salesforce data had better construct a job search strategy ASAP. Claude outperforms that already. But anyone involved in maintaining data integrity, security, backup, etc. for that “crown jewels” corporate data has a pretty bright future ahead. Also, AI workflows very evidently do not orchestrate themselves without rather sophisticated human PMs in loop. The horror stories about unsupervised agents deleting infrastructure due to lack of context and supervision are starting to bubble out there. Hype is quickly giving way to deployment reality.

One key value proposition for SaaS is a consistent user interface. It’s worth spending money on a subscription basis so the Help Desk can see the same UI the users are working with. Hobbyists can vibe code whatever they want, however they want. Serious organizations need consistency, reliability, boundaries, backups, redundancy, and all sorts of policy-driven things. Within that framework, yes, AI accelerates certain types of production. AI is going to be baked into the IT cake, for sure. But it will be far from the only ingredient.