David Akinboro


My Approach at the Intersection of Neuro-Symbolic AI and Causal Inference

April 2025

Building on the Path Forward

In my previous post, I highlighted two key challenges: hallucination rates (33% on o3 models) and the persistent “black box” issue. I also discussed how they’re steering new research on interpretability and robust reasoning. Among the four areas I identified (Neuro-Symbolic AI, Causal Inference, Lifelong Learning, and Improved Explainability), I want to dive deep into how I'm combining the first two to tackle one of AI's most persistent challenges: creating systems that can both reason effectively and explain their thinking.

In the so-called “third AI summer,” symbolic and neural methods are converging in ways that may redefine how AI systems both reason and explain their conclusions.

Understanding the Foundations

Domains Crying Out for Solutions

High-stakes fields such as medicine (diagnosis and treatment), law (judicial reasoning), finance (risk assessment), and policy (intervention effects) share a common challenge: they require not just accurate predictions, but explanations that professionals can trust and validate.

The explainability imperative is particularly acute in these domains. Despite the promise of neuro-symbolic AI to enhance explainability through symbolic transparency, current results are "less evident than imagined," with most approaches still producing systems that are difficult to interpret in practice.

This creates a critical gap: we have AI systems that can perform impressively on pattern recognition tasks, but professionals in high-stakes domains remain hesitant to trust systems they can't understand or verify.

Evolution: From Prototype to Research

Building on that MVP, my advisor and I developed an interpretability framework that:

  • extracts key facts,
  • maps them into causal chains, and
  • ranks the supporting evidence,

transforming a prototype into a full research project.
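
To make those three steps concrete, here is a minimal Python sketch of how such a pipeline could be wired together. Everything in it, the function names, the word-overlap stand-in for causal strength, and the example opinion, is illustrative rather than our actual implementation.

```python
from dataclasses import dataclass


@dataclass
class CausalLink:
    """A directed fact -> conclusion relationship with an estimated strength."""
    fact: str
    conclusion: str
    strength: float  # 0.0 to 1.0; higher = stronger causal support


def extract_key_facts(opinion_text: str) -> list[str]:
    """Step 1: pull out candidate facts (placeholder: naive sentence split)."""
    return [s.strip() for s in opinion_text.split(".") if s.strip()]


def map_causal_chains(facts: list[str], conclusion: str) -> list[CausalLink]:
    """Step 2: link each fact to the conclusion with a placeholder strength
    (crude word overlap here; the real framework uses symbolic causal analysis)."""
    conclusion_words = set(conclusion.lower().split())
    return [
        CausalLink(
            fact=fact,
            conclusion=conclusion,
            strength=len(conclusion_words & set(fact.lower().split()))
            / max(len(conclusion_words), 1),
        )
        for fact in facts
    ]


def rank_evidence(links: list[CausalLink]) -> list[CausalLink]:
    """Step 3: order the evidence so the strongest causal support surfaces first."""
    return sorted(links, key=lambda link: link.strength, reverse=True)


opinion = "The contract was signed under duress. The defendant withheld material facts."
chains = map_causal_chains(extract_key_facts(opinion), conclusion="The contract is voidable")
for link in rank_evidence(chains):
    print(f"{link.strength:.2f}  {link.fact} -> {link.conclusion}")
```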

After running inference with a multi-task legal agent, we committed to researching what we call "glass box" legal AI systems, where every conclusion can be traced back to specific evidence. Using attention analysis and interpretability techniques, you can see exactly which facts and legal principles influenced each decision. It's like having an AI research assistant that can highlight its sources and explain its logic.
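
As a rough illustration of the attention side, here is a sketch that pulls per-token attention weights out of a transformer encoder with Hugging Face Transformers. The choice of LEGAL-BERT and the simple head-averaged, [CLS]-row aggregation are assumptions made for the example; the analysis in our framework is more involved.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "nlpaueb/legal-bert-base-uncased"  # example encoder; any BERT-style model works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)

text = "The contract was signed under duress and the defendant withheld material facts."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one (batch, heads, seq_len, seq_len) tensor per layer
last_layer = outputs.attentions[-1][0]   # attention maps from the final layer
avg_over_heads = last_layer.mean(dim=0)  # average across attention heads
cls_row = avg_over_heads[0]              # how much the [CLS] position attends to each token

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, weight in sorted(zip(tokens, cls_row.tolist()), key=lambda t: -t[1])[:8]:
    print(f"{weight:.3f}  {token}")
```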

This evolution reflected a key insight: recent breakthroughs at Stanford CodeX show that reasoning-enabled LLMs like OpenAI's o1 demonstrate "massive leaps in capability" on legal reasoning tasks compared to traditional models, opening up new directions for neuro-symbolic approaches to legal problems.

My Approach: Where They Intersect

The problem I'm solving: building legal AI systems that can both learn complex patterns from vast amounts of case law AND explain their reasoning through transparent causal chains that legal professionals can verify and trust.

The technical innovation:

  • Neural component: Transformer-based attention analysis for pattern recognition in legal texts, identifying which parts of legal documents the model focuses on during reasoning
  • Symbolic component: Causal relationship mapping between legal facts and conclusions, systematically extracting cause-effect chains that mirror how legal professionals reason
  • Integration: Combined scoring methodology that weighs both attention patterns (60%) and causal strength (40%) to create interpretable legal reasoning scores

The methodology works by first extracting key legal facts from court opinions, then analyzing both how the neural model attends to these facts (through transformer attention weights) and how they causally connect to legal conclusions (through symbolic causal analysis). This dual approach addresses the core limitation identified in current research: while neuro-symbolic AI promises enhanced explainability, achieving truly interpretable systems remains challenging.
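
For concreteness, here is what that 60/40 blend might look like in code. The weights come from the integration described above; the facts and their scores are invented for illustration, and both inputs are assumed to already be normalized to [0, 1].

```python
ATTENTION_WEIGHT = 0.6  # weight on attention patterns, per the integration above
CAUSAL_WEIGHT = 0.4     # weight on causal strength


def interpretability_score(attention: float, causal_strength: float) -> float:
    """Blend attention mass and causal strength into one legal reasoning score.

    Both inputs are assumed to be pre-normalized to [0, 1]:
    - attention: share of the model's attention the fact receives
    - causal_strength: strength of the fact -> conclusion causal link
    """
    return ATTENTION_WEIGHT * attention + CAUSAL_WEIGHT * causal_strength


# Illustrative facts and scores (made up for the example)
facts = {
    "signed under duress":      {"attention": 0.82, "causal_strength": 0.91},
    "defendant withheld facts": {"attention": 0.64, "causal_strength": 0.55},
    "boilerplate recitals":     {"attention": 0.31, "causal_strength": 0.08},
}

for name, parts in sorted(facts.items(),
                          key=lambda kv: interpretability_score(**kv[1]),
                          reverse=True):
    print(f"{interpretability_score(**parts):.2f}  {name}")
```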

Search-augmented training: Unlike traditional approaches that only provide tool access during inference, our framework provides access to legal databases during training episodes. This enables the system to learn sophisticated research strategies, not just pattern recognition, fundamentally changing how AI agents develop legal reasoning capabilities.
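
A toy sketch of the idea: a training episode in which the agent can call a search tool, so the learning signal can reward research strategy rather than only the final answer. The corpus, reward, and policy below are placeholders, not our actual training setup.

```python
import random


def search_legal_database(query: str) -> list[str]:
    """Stand-in for a legal database lookup that is reachable during training."""
    corpus = {
        "duress": ["Case A: coercion can void consent", "Case B: threats negate assent"],
        "disclosure": ["Case C: material omissions support rescission"],
    }
    return corpus.get(query, [])


def training_episode(question: str, policy) -> float:
    """One training episode in which the agent may issue a search query.

    Because the tool call happens inside the episode (not only at inference
    time), the learning signal can reward how the agent researches, not just
    whether its final answer pattern-matches a label.
    """
    query = policy(question)                  # the agent decides what to look up
    retrieved = search_legal_database(query)  # tool use during training
    return 1.0 if retrieved else 0.0          # toy reward: found relevant authority


def toy_policy(question: str) -> str:
    """Placeholder research strategy; training would improve this over episodes."""
    return random.choice(["duress", "disclosure", "damages"])


rewards = [training_episode("Is the contract voidable?", toy_policy) for _ in range(5)]
print(f"average reward over 5 episodes: {sum(rewards) / len(rewards):.2f}")
```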

Looking Forward

This intersection of neuro-symbolic AI and causal inference represents more than a technical advancement; it's a pathway toward AI systems that can engage in the kind of systematic, transparent reasoning that professional practice demands.

The approach could transform other high-stakes domains facing similar challenges: medical diagnosis requiring both pattern recognition from symptoms and causal understanding of disease mechanisms, financial risk assessment needing both market pattern analysis and causal models of economic relationships, and policy analysis requiring both empirical pattern detection and causal reasoning about intervention effects.

As we continue developing these "glass box" AI systems, the goal isn't just better performance; it's AI that professionals can genuinely trust, verify, and collaborate with. The intersection of neuro-symbolic AI and causal inference offers a principled path toward that vision, combining the pattern recognition power of modern neural networks with the transparency and logical rigor that complex reasoning demands.

The limitations I identified in my previous post (hallucination, opacity, and unreliable reasoning) aren't just technical problems to solve. They're symptoms of a fundamental mismatch between how current AI systems process information and how professional reasoning actually works. By bridging neural pattern recognition with symbolic causal reasoning, we're working toward AI systems that don't just mimic human expertise, but support and enhance it through transparent, verifiable reasoning processes.