Trust verification in agentic systems
AlignmentVerification
Formal methods for verifying intent alignment in AI-assisted workflows. When an agent acts, how do we prove the action matched human intent?
Formal verification. Aligned agents. Human-origin proofs.
Open source from Cape Town.
Intelligence is embodied. The systems it runs on shape what it can learn and how fast it can act.
Learning is transformation. The process that turns data into understanding determines the quality of intelligence.
The outcome we care about. Systems that reason, plan, verify their own actions, and remain aligned with human intent.
How systems prove correctness and trust. Formal methods applied to AI output governance.
Ensuring intent matches execution. Transparent planning, human gates, and attributable actions.
Detecting and proving human-created work. Behavioral biometrics as authorship evidence.
AlignmentVerification
Formal methods for verifying intent alignment in AI-assisted workflows. When an agent acts, how do we prove the action matched human intent?
BiometricsAuthorship
Keystroke dynamics, revision patterns, and compositional behavior as evidence of human authorship.
Y. LeCun · Representation Learning · Self-supervised
A framework for predicting representations rather than pixels — foundational to world models that understand rather than generate.
Foundational to our verification and intent modeling approach.
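In rough equation form (our notation, not the paper's): rather than reconstructing the target y in pixel space, a JEPA-style model encodes context and target and predicts one representation from the other, with a latent z absorbing what the context cannot determine:

```latex
s_x = \mathrm{Enc}_{\theta}(x), \qquad
s_y = \mathrm{Enc}_{\bar{\theta}}(y), \qquad
\mathcal{L} = \left\lVert \mathrm{Pred}_{\phi}(s_x, z) - s_y \right\rVert^2
```

The loss lives entirely in representation space, which is what makes the resulting world model useful for verification and intent modeling rather than generation.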
Y. LeCun · World Models · Planning
Blueprint for autonomous intelligence using world models, hierarchical planning, and energy-based architectures.
Shapes our thinking on how AI systems should plan, act, and verify.
150,000 spaza shops feed 80% of township households. When a food recall happens, how do they find out?
Food safety and product trust for South Africa's informal retail economy. Real-time recall coordination, barcode verification, community sensing, and AI-driven safety monitoring — because the formal supply chain stops at the township border.
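A minimal sketch of what barcode-level recall checking could look like. The names here (RecallNotice, RecallIndex, check) are illustrative assumptions for this sketch, not the shipped system:

```typescript
// Hypothetical sketch: checking a scanned barcode against active recalls.
// Types and names (RecallNotice, RecallIndex) are illustrative, not a real API.

interface RecallNotice {
  gtin: string;          // barcode (GTIN) of the affected product
  batchCodes: string[];  // affected batches; empty means all batches
  reason: string;
  issuedAt: Date;
}

class RecallIndex {
  private byGtin = new Map<string, RecallNotice[]>();

  add(notice: RecallNotice): void {
    const list = this.byGtin.get(notice.gtin) ?? [];
    list.push(notice);
    this.byGtin.set(notice.gtin, list);
  }

  // A scan from a spaza shop: does this product/batch fall under a recall?
  check(gtin: string, batchCode?: string): RecallNotice[] {
    const notices = this.byGtin.get(gtin) ?? [];
    return notices.filter(
      (n) =>
        n.batchCodes.length === 0 ||
        (batchCode !== undefined && n.batchCodes.includes(batchCode))
    );
  }
}

// Usage: a shopkeeper scans a product; any hit triggers a local alert.
const index = new RecallIndex();
index.add({
  gtin: "6001234567890",
  batchCodes: ["B2409"],
  reason: "Listeria contamination",
  issuedAt: new Date("2024-09-12"),
});
const hits = index.check("6001234567890", "B2409");
if (hits.length > 0) console.log("RECALLED:", hits[0].reason);
```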
How do you enforce policy on systems that generate their own actions?
Formal governance for AI in production. Policy enforcement, audit trails, and review routing. When an AI system acts, Thoth ensures it acted within bounds — and proves it.
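A minimal sketch of the enforcement gate, under assumptions of ours: Policy, Action, AuditEntry, and enforce are illustrative names, not Thoth's actual API. The point is the shape of the loop: every action is checked against declared bounds and logged before anything runs.

```typescript
// Hypothetical sketch of a policy gate: every agent action is checked
// against declared bounds and logged before it can run.

interface Action {
  kind: string;            // e.g. "db.write", "email.send"
  actor: string;           // which agent proposed it
  payload: unknown;
}

interface Policy {
  allows(action: Action): boolean;
  requiresReview(action: Action): boolean;
}

interface AuditEntry {
  action: Action;
  verdict: "allowed" | "routed-for-review" | "denied";
  at: Date;
}

const auditTrail: AuditEntry[] = [];

function enforce(policy: Policy, action: Action): AuditEntry["verdict"] {
  let verdict: AuditEntry["verdict"];
  if (!policy.allows(action)) verdict = "denied";
  else if (policy.requiresReview(action)) verdict = "routed-for-review";
  else verdict = "allowed";

  // The entry is written whatever the verdict: the trail is the proof
  // that the system acted, or refused to act, within bounds.
  auditTrail.push({ action, verdict, at: new Date() });
  return verdict;
}
```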
What does aligned collaboration between humans and AI agents look like?
Intent-aligned Git workflows. The agent plans, explains its reasoning, identifies risks, and waits for human approval before execution.
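A minimal sketch of that plan/approve/execute loop, assuming hypothetical names (AgentPlan, requestApproval, run). The invariant it illustrates: the agent never touches the repository until a human has seen the plan and its risks.

```typescript
// Hypothetical sketch of the plan/approve/execute loop.

interface AgentPlan {
  summary: string;        // what the agent intends to do
  reasoning: string;      // why it believes this matches the request
  risks: string[];        // surfaced before execution, not after
  commands: string[];     // the exact git operations it would run
}

async function requestApproval(plan: AgentPlan): Promise<boolean> {
  // In a real workflow this would render the plan for human review;
  // for the sketch we print it and auto-reject any plan with open risks.
  console.log(plan.summary, plan.reasoning, plan.risks);
  return plan.risks.length === 0;
}

async function run(
  plan: AgentPlan,
  execute: (cmd: string) => Promise<void>
): Promise<void> {
  const approved = await requestApproval(plan);
  if (!approved) {
    console.log("Plan rejected; nothing was executed.");
    return;
  }
  for (const cmd of plan.commands) await execute(cmd); // only after the gate
}
```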
Can you prove a human created something without exposing the work?
Behavioral biometrics as authorship evidence. Keystroke dynamics and revision patterns become portable certificates of human origin.
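A minimal sketch of the idea, with illustrative names (KeyEvent, AuthorshipCertificate, certify): reduce a typing session to behavioral features and bind them to a hash of the work, so the certificate can travel without the text itself.

```typescript
// Hypothetical sketch: behavioral features plus a content hash form a
// portable certificate. Names and fields are illustrative assumptions.

import { createHash } from "node:crypto";

interface KeyEvent {
  key: string;
  downAt: number; // key press timestamp, ms
  upAt: number;   // key release timestamp, ms
}

interface AuthorshipCertificate {
  contentHash: string;     // commits to the work without revealing it
  meanDwellMs: number;     // how long keys are held, a per-author signature
  meanFlightMs: number;    // gap between consecutive keystrokes
  revisionCount: number;   // humans revise; single-pass output is a tell
}

function certify(
  events: KeyEvent[],
  revisionCount: number,
  finalText: string
): AuthorshipCertificate {
  if (events.length === 0) throw new Error("no keystroke events");

  const dwell =
    events.reduce((sum, e) => sum + (e.upAt - e.downAt), 0) / events.length;

  let flight = 0;
  for (let i = 1; i < events.length; i++) {
    flight += events[i].downAt - events[i - 1].upAt;
  }

  return {
    contentHash: createHash("sha256").update(finalText).digest("hex"),
    meanDwellMs: dwell,
    meanFlightMs: flight / Math.max(1, events.length - 1),
    revisionCount,
  };
}
```

Verifiers compare the behavioral features against an author's enrolled profile and check the hash against the claimed work; the raw keystrokes and the text never leave the author's machine.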
Our systems assume adversarial conditions by default. Every claim about what an AI system did, or what a human created, requires cryptographic or behavioral proof. Trust is the output of verification, never the input.
We don't build products and justify them with papers. We study verification, alignment, and human-origin signals. The systems we ship are consequences of that research, not its purpose.
If trust infrastructure isn't itself inspectable, it has already failed. Everything we build ships under MIT license. Fork it, audit it, break it.
We chose Cape Town, not San Francisco. Different vantage points surface different problems. The trust failures we see from here aren't the ones the Valley is solving.
Trust infrastructure you can't inspect isn't trust infrastructure. Everything we build is open source — not as a marketing strategy, but because verification systems that aren't themselves verifiable have already failed.
Food safety and trust infrastructure for informal retail. Barcode verification, recall coordination, community sensing, AI safety monitoring.
Intent-aligned Git workflows with AI agents. Plans explain themselves, risks surface before execution, humans approve before anything ships.
Human-origin proof system. Behavioral biometrics — keystroke dynamics, revision patterns — as portable authorship certificates.