Adequate Judgment as Intelligence

Real Artificial Intelligence Framework

A structured reasoning framework to evaluate complex claims across fact, narrative, and systemic context. It helps both people and AI make better sense of the world by asking smarter questions, detecting manipulation, and reaching more grounded conclusions.

Seeking partners, researchers, and collaborators.

© 2025 Max Micheliov · Version 1.0 · Updated June 2025

Adequacy Over Certainty

Intelligence is the ability to generate adequate judgment.
Rather than mimicking biology, it’s about making sense of the world through structured reasoning.

What Is Adequacy?

Adequacy means being aligned with reality, coherent in structure, ethically aware, and contextually wise. It doesn’t claim certainty but seeks to approximate it, while fully aware of the limits of knowledge and the risk of error. Adequate judgment is the closest we can come to truth, given the complexity of the world and the incompleteness of what we know.

Tetrahedron of Adequacy

We evaluate every claim along four axes: epistemic (truth), systemic (coherence), ethical (fairness), and pragmatic (consequences). These form the core of our tetrahedral model—an ideal of judgment we strive toward, even if unattainable in full. Each axis sharpens a distinct facet of reasoning, cutting through complexity with clarity and care.
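The four axes can be pictured as a tiny data structure. This is only an illustrative sketch, not part of the RAI specification: the class name, the 0-to-1 scale, and the "weakest axis dominates" aggregation rule are all assumptions chosen here to make the model concrete.

```python
from dataclasses import dataclass

@dataclass
class AdequacyScores:
    """A claim rated along the four axes of the tetrahedral model.

    Each axis runs from 0.0 (inadequate) to 1.0 (fully adequate).
    Scale and aggregation are illustrative assumptions, not RAI canon.
    """
    epistemic: float   # truth: alignment with reality
    systemic: float    # coherence in structure
    ethical: float     # fairness and ethical awareness
    pragmatic: float   # consequences and contextual wisdom

    def overall(self) -> float:
        # Assumption: a judgment is only as adequate as its weakest axis.
        return min(self.epistemic, self.systemic, self.ethical, self.pragmatic)

claim = AdequacyScores(epistemic=0.9, systemic=0.8, ethical=0.7, pragmatic=0.85)
print(f"Overall adequacy: {claim.overall():.2f}")
```

Taking the minimum rather than the average reflects the tetrahedral ideal: a claim that is factually true but ethically blind, or coherent but consequence-free, does not earn a high overall grade.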

From Facts to Systems

RAI analyzes narratives on three levels: fact, narrative, and system—moving from basic claims to deeper meaning and strategic framing. At its core is a library of 50+ philosophical premises—civilizational insights that surface hidden assumptions and guide long-term reasoning. This layered approach helps expose not just what is said, but why and to what end.

How It Works

A systematic approach to analyzing complex narratives

Input
Fact Audit
Narrative Map
System Check
Premise Lens
Graded Output

RAI is a structured reasoning framework built as a 32-page system of engineered prompts. It analyzes narratives across three levels—fact, narrative, and system—using modular heuristics to evaluate adequacy. A built-in Premise Lens, drawn from core philosophical insights, helps anchor the analysis in deeper civilizational reasoning.

The framework offers flexible entry points: analysis can begin with fact-checking, narrative interpretation, or a top-down systemic view, then drill down as needed. This adaptability ensures precision and context-awareness, tailoring the process to each narrative’s scale and complexity.
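The staged pipeline and its flexible entry points can be sketched as a simple chain of functions. Everything here is hypothetical scaffolding: the stage names mirror the steps listed above, but the function signatures and the dict-based analysis record are assumptions made for illustration, not a published RAI API.

```python
from typing import Callable, Dict, List

# Illustrative stage functions; each annotates a running analysis dict.
def fact_audit(analysis: Dict) -> Dict:
    analysis["facts_checked"] = True        # verify basic claims
    return analysis

def narrative_map(analysis: Dict) -> Dict:
    analysis["narrative_mapped"] = True     # interpret framing and intent
    return analysis

def system_check(analysis: Dict) -> Dict:
    analysis["system_checked"] = True       # situate in systemic context
    return analysis

def premise_lens(analysis: Dict) -> Dict:
    analysis["premises_applied"] = True     # apply philosophical premises
    return analysis

STAGES: List[Callable[[Dict], Dict]] = [
    fact_audit, narrative_map, system_check, premise_lens,
]

def run_rai(claim: str, entry_point: int = 0) -> Dict:
    """Run the pipeline from a chosen entry point (fact-first at 0,
    narrative-first at 1, system-first at 2), then grade the result."""
    analysis: Dict = {"input": claim}
    for stage in STAGES[entry_point:]:
        analysis = stage(analysis)
    analysis["graded_output"] = f"{len(STAGES) - entry_point} stage(s) applied"
    return analysis
```

Calling `run_rai(claim, entry_point=2)` starts at the systemic view and drills down through the Premise Lens only, which is how a top-down analysis would skip the fact and narrative stages until they are needed.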

Three Paths to Implementation

From simple prompts to deep integration: how RAI can live inside today’s and tomorrow’s AI systems.

RAI Lite

A single structured prompt that runs on public LLMs like ChatGPT, Gemini, Claude, or DeepSeek. It delivers impressive results with zero setup. RAI Lite is ideal for testing or applying the framework right away.

Companion Mode

A plugin or app that wraps around public LLMs. It structures user input, sends it for processing, then refines the raw output into a judgment aligned with RAI principles.
Even when simulated through chat chains, this approach showed a significant leap in quality.
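The wrap-process-refine loop can be sketched in a few lines. The helper names and prompt wording below are invented for illustration, and the model backend is a stub standing in for a real call to a public LLM API:

```python
from typing import Callable

def structure_input(user_text: str) -> str:
    """Wrap raw user input in an RAI-style structured prompt (illustrative)."""
    return ("Analyze the following claim on three levels "
            "(fact, narrative, system):\n" + user_text)

def refine_output(raw: str) -> str:
    """Post-process the model's raw answer into an RAI-aligned judgment."""
    return "RAI judgment:\n" + raw.strip()

def companion(user_text: str, llm: Callable[[str], str]) -> str:
    """Structure the input, send it to any LLM backend, refine the output."""
    return refine_output(llm(structure_input(user_text)))

# Stub backend; a real Companion would call ChatGPT, Claude, etc. here.
def echo_llm(prompt: str) -> str:
    return f"[model answer to: {prompt.splitlines()[0]}]"

print(companion("Is this headline misleading?", echo_llm))
```

Because the backend is passed in as a plain function, the same wrapper works over any of the public LLMs mentioned above, which is the point of Companion Mode: the RAI structure lives in the wrapper, not in the model.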

Native Integration

Embedding RAI directly into an LLM's architecture allows for real philosophical depth beyond prompt limits. This is the long-term vision, though for now, it’s limited by scale, format, and the fact that there’s only one philosopher building it.

Possible Applications

How Real AI thinking transforms analysis and enriches learning, helping build technologies and societies aligned with human values.

Narrative Analysis

For journalists, policymakers, and curious citizens: a structured way to dissect narratives, identify manipulation, and improve public judgment.

See RAI in Action →

Critical Thinking

Paired with the upcoming book Citizen’s Duty, RAI becomes a tool for teaching structured reasoning—building a culture of clarity, skepticism, and responsibility.

Preview the Book →

Intelligence Research

RAI offers a blueprint for the next generation of thinking machines by adding philosophical grounding and modular analysis to improve judgment, coherence, and alignment.

Collaborate on Research →

Beyond Narratives

The RAI method, which turns fields of knowledge into reasoning architectures, applies anywhere judgment matters: aesthetics, spatial reasoning, systems design, even mathematical framing.

From propaganda to painting, from logic to layout—RAI is the scaffold for deeper intelligence.

Reasoning Personas

Named thinking styles that reflect real-world philosophies like “Arendtian judgment” or “Stoic governance.” These enable AI to operate under transparent, pluralist value modes instead of pretending neutrality.

Read the Concept →

About the Author

Max Micheliov

Max Micheliov is a systems thinker and the creator of RAI, a framework born from more than five years of studying how beliefs form and collapse in digital spaces.

Structured Thinking for Unstructured Problems

RAI is still in early development, but the foundation is here. If you see the potential, let’s talk.
