Evolving Effective Altruism

Evolving Effective Altruism: from EA to Pragmatic Utopianism / Integral Altruism and a Second Renaissance

I am booting this as a meta-thread to pull together other threads and materials. Perhaps we can even create a mini-website :wink:

Fig 1: simplified illustrative evolution from Effective Altruism frame to Second Renaissance frame. (See below for vertical version of diagram)

Fig 2: (imperfect) first draft of comparison of EA to PU/2R

Dialogs

Max + Rufus Dialogs on Evolving EA

By @MaxRamsahoye and @rufuspollock

Part I: Effective Altruism as philosophy and movement, its current status and historical evolution

We (Rufus Pollock and Max Ramsahoye) discuss the evolution of Effective Altruism (EA) from a philosophy to a movement, emphasizing its focus on using reason and evidence to maximize welfare.

We highlight EA’s historical roots in global poverty alleviation, economic growth, and existential risks, particularly AI. The conversation also touches on the shift towards AI safety and governance, with organizations like 80,000 Hours and Open Philanthropy focusing increasingly on AI risks.

We discuss the challenges of AI safety, emphasizing the failure of current approaches to ensure alignment without significant slowdowns.

We discuss critiques of Effective Altruism, citing the “drunkard and the lamppost problem” and the difficulty of determining the importance of AI risk over other global issues. We also discuss the implementation problems and the need for global coordination. We discuss the concept of coordination problems and a “Moloch trap,” where competitive pressures drive rapid AI development despite safety concerns.

We conclude by suggesting a need for cultural evolution and an evolution of effective altruism towards pragmatic utopianism, with an emphasis on wisdom and collective action beyond narrow rationalism.

Part II: Challenges and critiques of Effective Altruism and potential shift towards Pragmatic Utopianism

Rufus Pollock and Max Ramsahoye discuss some of the challenges and critiques of Effective Altruism (EA) and how it could evolve towards pragmatic utopianism.

Rufus Pollock and Max discuss the evolution of Effective Altruism (EA), highlighting its strengths and challenges. They critique the focus on techno-solutionism and capitalist realism, emphasizing the need for systems change.

They argue that EA’s emphasis on measurable impacts often overlooks broader, less quantifiable issues like collective action problems and cultural alignment.

They propose a shift from rationality to wisdom, advocating for a pragmatic utopianism that combines incremental change with radical vision for paradigmatic evolution. They stress the importance of addressing value misalignment and principal-agent problems, and call for a cultural evolution to solve collective action issues and ensure long-term sustainability.

They discuss the need for a comprehensive approach to Effective Altruism (EA) that integrates human psychology, wisdom traditions, and inner sciences like cognitive science and neuroscience.

They suggest that the future of science will involve both outer (ecological and system sciences) and inner (cognitive and neuroscience) disciplines. Finally, they highlight the wisdom gap between technological and psychological/social disciplines, referencing Comte’s hierarchy of sciences and the potential for a mature science of the mind and society to manage our technological powers.

Jonah & Rufus “Evolving Effective Altruism” dialog with a focus on wisdom

By @JonahW + @rufuspollock

Sub-threads

Other relevant materials

From Effective Altruism to Pragmatic Utopianism: Evolving the Ethos of Doing Good

Outline by Rufus (all errors are mine)

Effective Altruism (EA) is one of the most influential philosophical and practical movements in the contemporary landscape of ethics and action. Born at the intersection of utilitarian ethics, rationalist epistemology, and Silicon Valley engineering culture, EA seeks to maximize the positive impact of one’s resources—especially money, talent, and attention—through rigorous cost-benefit analysis and prioritization. Its proponents emphasize doing “the most good,” often understood as saving or improving the largest number of lives for the lowest cost.

However, as the global landscape veers further into what scholars call the polycrisis—interlocking environmental, technological, social, and political crises—it is increasingly clear that EA, in its current form, may be necessary but not sufficient. It risks becoming a narrow framework, blind to the very cultural and ontological transformations required to address the deeper systemic roots of our predicament. What is needed is not an abandonment of EA’s virtues—its clarity, ambition, and moral seriousness—but an expansion into a new integrative paradigm: a pragmatic utopianism aligned with a Second Renaissance.

At its core, EA embodies several powerful assumptions. It places a strong emphasis on tractability, measurability, and marginal impact. The use of quantitative reasoning—epitomized in figures such as “$5,000 to save a life”—enables clear communication and accountability. Yet this same methodology narrows the range of “admissible” moral actions. Causes that are not easily measurable (like cultural renewal, governance reform, or inner development) are frequently sidelined.

This leads to one of the most profound limitations of EA: a form of epistemic tunnel vision. EA tends to prioritize domains that resemble engineering problems—AI safety, biosecurity, malaria nets—while de-emphasizing areas where systems are nonlinear, culturally embedded, and irreducibly complex. For instance, while existential risk from AI is taken seriously, governance-based approaches are often dismissed as “hopeless,” a view explicitly stated in Bostrom’s early work. The result is a self-fulfilling loop where collective action is seen as infeasible, and therefore never seriously attempted.

Compounding this, EA has traditionally been skeptical of spiritual traditions, mystical claims, and subjective or phenomenological accounts of well-being. Concepts such as liberation in the Buddhist sense—or flourishing as defined by inner transformation—are foreign to the analytic, externalist frameworks dominant in EA. Consequently, entire dimensions of human experience are excluded from its calculus.

The challenge, then, is not only philosophical but cultural. EA arises within a broader late-modern worldview: one that valorizes individualism, techno-solutionism, and quantification. Its neglect of cultural evolution—both as a field of inquiry and as a strategic frontier—is not incidental but systemic. Where governance and culture are acknowledged, they are typically treated as exogenous constraints or background conditions, not primary sites of transformation.

To evolve beyond these limits, we must imagine a path from EA to something broader: a pragmatic utopianism. This path involves recognizing the irreducibility of systems problems, the indispensability of cultural narratives, and the centrality of inner development. It means understanding the metacrisis not just as a collection of urgent risks, but as a civilizational transition requiring new ways of knowing, being, and relating.

Pragmatic utopianism does not reject rigor or strategic thinking. Rather, it insists that our metrics of success must include cultural resonance, institutional durability, and psycho-spiritual flourishing. It affirms the role of mysticism, art, and inner work not as luxuries but as generative engines of civilizational renewal. And it calls for courage: the courage to act beyond the measurable, to invest in the invisible, and to believe that long-term, systemic, and cultural interventions are not only necessary—but effective.

This does not mean abandoning EA. It means transcending it. The transition from EA to Pragmatic Utopianism and a Second Renaissance is not a repudiation, but a maturation—a movement from adolescence to adulthood in our ethical imagination. It means embracing complexity, cultivating wisdom, and holding space for the unknown.

As our crises deepen, the time for this transition is now.

Appendix: Vertical evolution diagram


A very high quality conversation :clap:


Euan Mclean, who runs the London Integral Altruism group, has published this excellent fictional dialogue on AI risk and the metacrisis:

This is a critique of Diego’s position in the Root Cause article above. Let’s start with this image:

On the question of exponential technology, I agree completely. The way I introduce that to students is through the work of Ray Kurzweil. Kurzweil and many others have established exponential technology trends through solid empirical work, e.g. the Law of Accelerating Returns, which applies to both technology and biology.

On the question of global interconnection, yes … but. We need a better root cause analysis of WHY the globe is interconnected. What systems are at play? What leverage points are available for such systems? Properly answering such questions requires a lot more world history than Diego seems familiar with. (This is a common weakness in the EA movement - a lack of historical depth perception.) No attempt will be made here to summarize human evolution in a sentence or two, but consider: although discrete civilizations generally rise and fall, the footprint of civilization itself has shown a Kurzweil-like trend towards geographic expansion. Why was that?

Finally, on the question of cultural immaturity, what is the metric? When we culturally “grow up”, what will that look like? Is Diego a Wilberian, by any chance? Has he been following Brendan Graham Dempsey’s recent work? How do we know “mature” when we see it? Or have we ever seen it?

I’m rather sympathetic to the cultural immaturity argument, on the theory that humanity (collectively and personally) is always on a learning curve. We solve the problems that present themselves. Recent problems are of fairly recent vintage, so our solution approaches are of necessity rather immature. However, if we can find good historical analogies to recent problems, then maybe some more proven approaches might be recovered from the long span of human experience. Whatever Second Renaissance intends to be, to me that name implies an interest in such historically informed, long-span recovery.

So it seems like you are basically in agreement with Diego’s framing as expressed in the diagram, but think certain elements of it need to be fleshed out in much more depth. I think Diego would agree.

It’s worth bearing in mind that this is very much written for an EA audience, where the key goal is to get them to take the metacrisis framing at all seriously in a context where AI risk is often seen as of overriding importance and urgency.

It’s good to know about the intended audience. With similar goals in mind (introducing metatheory in a pragmatic way to computer science students), I factored the “input” arrows like so:

1. exponential technical progress (Kurzweil)
2. exponential economic growth (capitalism)
3. exponential population growth (basic biology)

versus

4. limits to growth (Club of Rome, Malthus, Planetary Boundaries)
5. carbon pulse (Nate Hagens)
6. entropic limits to Global Population Plus its Economy (Peter Pogany’s GLOPPE)

My essay juxtaposing all this was titled “When Trends Collide”. It’s all empirically justified. But either things like energy supply and climate are hard limits, or they are not. We all must place bets.
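As a toy sketch of the “trends collide” point (this is my illustration, not from the essay, and every number in it is invented): if a quantity grows exponentially towards a hard ceiling, you can compute how long the collision takes.

```python
import math

def years_to_hit_limit(current, limit, annual_growth_rate):
    """Years until exponential growth at the given rate reaches the limit.

    Solves current * (1 + r)**t = limit for t.
    """
    if current >= limit:
        return 0.0
    return math.log(limit / current) / math.log(1 + annual_growth_rate)

# Hypothetical numbers: a resource demand now at 60% of a hard ceiling,
# growing 3% per year, hits the ceiling in roughly 17 years.
t = years_to_hit_limit(current=60, limit=100, annual_growth_rate=0.03)
print(round(t, 1))
```

The point of the toy model is only that modest-sounding growth rates produce short collision horizons; whether the ceiling is in fact hard (Hagens, Pogany) or movable (Kurzweil) is exactly the bet being discussed.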

That brings us into Diego’s “metacrisis” central arena, with enough empirical and statistical data to please a calculation-heavy EA sensibility. Is there any rational basis, for example, for neglecting the Second Law of Thermodynamics? That is ultimately where items 4-6 above take their stand. Kurzweil imagines nanobots will push those limits way, way back. Is he right about that? Again, the wheel is rolling … all bets must be placed.

Where I would continue to quarrel with Diego a bit is his sociological Feynman diagram: he has AI, climate, nuclear, and pandemic exiting the metacrisis collision zone as if they were discrete by-products of the exponential-versus-limits-to-growth smash-up. I see it differently. If Kurzweil is right, AI, nanotech, quantum computing, and other advanced technologies will save the day and represent the pinnacle of evolution. In that case, replace “AI risk” with “need for technical acceleration”. Or perhaps Kurzweil has it all wrong, and the likes of Nate Hagens and Peter Pogany see the world more clearly. In that case, technology accelerationism will increase nuclear, climate, pandemic, and other risks and lead to a “chaotic transition” (Pogany) to a “Great Simplification” (Hagens) that will make WWI and WWII look pleasant by comparison. When it comes to high tech, should we favor the brake or the gas pedal?

In my recent class, we all came round to a fusion concept: AI is inevitable, but we can all take pains to align it with human and biospheric survival. It’s a long shot, but at least the payoff is good if it comes in. So we favored a third way: not just the brake, not just the gas pedal, but mostly an improved steering mechanism.