Simakk Research · South Africa

We study how machines earn trust.

Formal verification. Aligned agents. Human-origin proofs.
Open source from Cape Town.

Framework

3 pillars

Matter

Systems

Intelligence is embodied. The systems it runs on shape what it can learn and how fast it can act.

  • Compute substrates
  • Hardware-aware optimization
  • Physical constraints

Energy

Processes

Learning is transformation. The process that turns data into understanding determines the quality of intelligence.

  • Self-supervised learning
  • Energy-based models
  • Optimization dynamics

Intelligence

Reasoning

The outcome we care about. Systems that reason, plan, verify their own actions, and remain aligned with human intent.

  • World models
  • Verification
  • Aligned decision-making

Research

4 papers · 3 areas

Research areas

Verification systems

How systems prove correctness and earn trust. Formal methods applied to governing AI output.

Agentic alignment

Ensuring intent matches execution. Transparent planning, human gates, and attributable actions.

Human-origin signals

Detecting human-created work and proving its origin. Behavioral biometrics as evidence of authorship.

Active research

Ongoing
2024

Trust verification in agentic systems

Alignment · Verification

Formal methods for verifying intent alignment in AI-assisted workflows. When an agent acts, how do we prove the action matched human intent?

Thoth · Hermes
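
The "human gates" idea above can be sketched in a few lines: bind the human's approval to a digest of the exact plan the agent proposed, so any post-approval drift is detectable. This is an illustrative sketch only, not the Thoth or Hermes implementation; `plan_digest` and `execute_with_gate` are hypothetical names.

```python
import hashlib
import json

def plan_digest(plan: dict) -> str:
    """Hash a canonical serialization of the plan, so approval is
    bound to exactly what the agent proposed."""
    canonical = json.dumps(plan, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def execute_with_gate(plan: dict, approved_digest: str) -> bool:
    """Execute only if the human approved this exact plan; any
    mutation after approval changes the digest and blocks the run."""
    if plan_digest(plan) != approved_digest:
        return False  # plan drifted after approval
    # ... execute steps, logging each with the digest for attribution
    return True

plan = {"action": "commit", "files": ["a.py", "b.py"]}
digest = plan_digest(plan)        # shown to the human at approval time
assert execute_with_gate(plan, digest)       # unchanged plan runs
plan["files"].append("c.py")                 # agent edits plan afterwards
assert not execute_with_gate(plan, digest)   # drifted plan is refused
```

The digest doubles as an attribution handle: every logged action carries the hash of the plan a human actually signed off on.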
2024

Behavioral signatures for human-origin content

Biometrics · Authorship

Keystroke dynamics, revision patterns, and compositional behavior as evidence of human authorship.

MindPrint
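
One hedged illustration of what "keystroke dynamics as evidence" can mean (this is a toy feature extractor, not MindPrint's actual method): inter-key intervals from human typing are variable and bursty, while pasted or scripted input arrives near-uniformly.

```python
from statistics import mean, pstdev

def iki_features(timestamps_ms):
    """Inter-key intervals (IKIs): gaps between consecutive keystrokes.
    Human typing shows variable IKIs with pauses between words; pasted
    text tends toward near-zero, near-uniform gaps."""
    ikis = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return {"mean_iki": mean(ikis), "iki_stdev": pstdev(ikis)}

human = [0, 140, 310, 420, 900, 1015]   # natural pauses between keys
paste = [0, 1, 2, 3, 4, 5]              # whole string arrives at once

print(iki_features(human))   # high mean, high variance
print(iki_features(paste))   # ~1 ms gaps, zero variance
```

Real systems would combine many such signals (revision patterns, burst structure) rather than a single statistic, but the shape of the evidence is the same: behavior over time, not the final text.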

References

How we think

Verify, then trust.

Our systems assume adversarial conditions by default. Every claim about what an AI system did — or what a human created — requires cryptographic or behavioral proof. Trust is the output of verification, never the input.

Research produces systems.

We don't build products and justify them with papers. We study verification, alignment, and human-origin signals. The systems we ship are consequences of that research, not its purpose.

Open by default.

If trust infrastructure isn't itself inspectable, it has already failed. Everything we build ships under MIT license. Fork it, audit it, break it.

Build outside the monoculture.

We chose Cape Town, not San Francisco. Different vantage points surface different problems. The trust failures we see from here aren't the ones the Valley is solving.

Open Source

MIT Licensed

Trust infrastructure you can't inspect isn't trust infrastructure. Everything we build is open source — not as a marketing strategy, but because verification systems that aren't themselves verifiable have already failed.

terminal
$ npm install -g hermes-cli
$ hermes init
$ hermes commit --explain
✓ Plan generated · 3 files changed · 0 risks flagged
✓ Awaiting human approval...

Get started

Talk to us