A New Epistemological Framework for 2R
TL;DR: I built Abductio, a reasoning framework for extraordinary claims that won’t fry your brain. Try it here. Looking for co-founders—see below.
I’ve been developing a new reasoning framework designed to generate extraordinary evidence for extraordinary claims. It synthesizes established epistemological tools to enable groups of reasoners—whether human experts, AI systems, or hybrids of the two—to combine their intelligence in joint reasoning tasks.
The framework is built around a recursive process that assigns two scores to any claim: credence and confidence. If the confidence score falls below a certain threshold, the claim is broken down into subclaims. The process is then applied recursively to each subclaim, with the same confidence criteria serving as a gatekeeper at every level.
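The recursive loop described above can be sketched roughly as follows. This is a minimal illustration, not the actual Abductio implementation: the threshold value, the `assess`/`decompose` callbacks, and the confidence-weighted aggregation rule are all my own assumptions for the sake of the sketch.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.7  # hypothetical gatekeeping value, not Abductio's

@dataclass
class Claim:
    text: str
    credence: float = 0.0    # believed probability the claim is true, 0..1
    confidence: float = 0.0  # stability of that credence under new info, 0..1
    subclaims: list = field(default_factory=list)

def evaluate(claim, assess, decompose, depth=0, max_depth=5):
    """Score a claim; if confidence is below threshold, decompose and recurse.

    `assess(text)` returns a (credence, confidence) pair; `decompose(text)`
    returns a list of subclaim strings. Both are supplied by the reasoner
    (human, LLM, or hybrid).
    """
    claim.credence, claim.confidence = assess(claim.text)
    if claim.confidence < CONFIDENCE_THRESHOLD and depth < max_depth:
        for sub_text in decompose(claim.text):
            sub = Claim(sub_text)
            evaluate(sub, assess, decompose, depth + 1, max_depth)
            claim.subclaims.append(sub)
        if claim.subclaims:
            # Aggregation is an assumption: confidence-weighted average
            # of subclaim credences, and mean subclaim confidence.
            total_conf = sum(s.confidence for s in claim.subclaims)
            if total_conf > 0:
                claim.credence = sum(
                    s.credence * s.confidence for s in claim.subclaims
                ) / total_conf
            claim.confidence = total_conf / len(claim.subclaims)
    return claim
```

The key design point is that low confidence does not terminate reasoning; it triggers another level of decomposition, with the same gate applied at every level.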
I think of this as a post-truth epistemological framework, because the dual-metric approach challenges the simplicity of assigning a truth value between 0 and 1. In reality, the believability of any claim can shift with new information. The extent to which a claim resists such shifts is its stability—what the framework captures as confidence.
My goal in creating this system was to encourage more rigorous reasoning from typically cautious or conventional thinkers—like LLMs or academic experts—who often treat widely accepted norms as unquestioned facts. I wanted these reasoners to first assess their confidence in a claim before asserting its credence.
Take this example: if you ask an LLM whether the claim “Atlantis existed” is true, it might return a probability of 0.05. Coincidentally, if you ask it the probability of correctly guessing a number between 1 and 20, it might also say 0.05. However, the confidence behind these two answers differs dramatically. In the guessing game, the 0.05 reflects high confidence (assuming a uniform distribution). But for the Atlantis claim, any honest, self-aware reasoner would admit that its confidence in the 0.05 rating is very low.
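The Atlantis-versus-guessing contrast can be stated as data: a single probability cannot distinguish the two cases, but a (credence, confidence) pair can. The numbers below are illustrative, not outputs of any real model.

```python
# Two claims with identical credence but very different confidence.
# Confidence values are illustrative assumptions, not model outputs.
atlantis = {"claim": "Atlantis existed",
            "credence": 0.05, "confidence": 0.15}
guessing = {"claim": "I will correctly guess a number between 1 and 20",
            "credence": 0.05, "confidence": 0.95}

# A scalar truth value collapses these two states of knowledge into one;
# the second score is what separates them.
assert atlantis["credence"] == guessing["credence"]
assert atlantis["confidence"] < guessing["confidence"]
```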
This insight led to the development of Abductio: a reasoning process that requires both credence and confidence for a claim to pass. When confidence is lacking, the reasoning doesn’t stop—it deepens.
You can try it yourself. Just type:
claim:
(If it decomposes, type: continue)
My favorite examples are ones that go from low credence and low confidence to probably true with high confidence after several decompositions. Try asking something spicy that you feel is true but isn't accepted in the mainstream, and there's a good chance it will unfold this way.
ChatGPT - Abductio protocol initiation
SELF-PROMOTION NOTE
This uses a free, open-source version of Abductio. A significantly more powerful pro version is coming in the near future. I’m looking for co-founders skilled in decision theory and/or software engineering. Follow me on LinkedIn for updates.