About DeAgenticAI

We Built the Missing Layer Between AI Agents and On-Chain Authority

Autonomous AI agents are already operating on-chain. They are executing treasury strategies, interacting with DeFi protocols, and beginning to represent institutional interests in governance systems. The infrastructure they run on - private key management, multisig governance, rule-based automation - was designed for human actors, and it breaks down under every assumption that AI introduces. DeAgenticAI exists to close that gap at the infrastructure layer.

Our Mission

To make autonomous intelligence safe to own and move capital.

We do this by separating an agent's capability from its authority and enforcing that authority cryptographically - so that AI agents can participate in Web3 and enterprise financial operations responsibly, transparently, and at scale.

Our Vision

A world in which autonomous agents are first-class participants in both decentralised and traditional financial systems - where DAOs operate without governance latency, where institutional asset managers automate complex portfolio management without compromising compliance, and where AI agents negotiate, pay, and collaborate with each other over open protocols.

Achieving that requires infrastructure that today's ecosystem lacks: a universal Agentic Control Plane that any agent, platform, or organisation can integrate without surrendering oversight or auditability.

Authority should not be inferred from possession. It must be explicitly defined, enforced, and provable.

Founding Thesis

The fundamental premise of Web3 security - that authority derives from possession of a private key - was designed around human actors who make deliberate, slow, and accountable decisions. An AI agent inverts all three of those assumptions. Agents are probabilistic rather than deterministic. They operate at speeds that make real-time human review impossible. And they can be compromised through prompt injection - a threat class that has no analogue in human behaviour.

Giving an AI agent a private key is not a security posture. It is an abdication of one.

This is not a risk to be managed at the application layer or the smart contract layer. It is a structural gap in infrastructure - a missing primitive that must exist between the AI agent and the execution environment. That primitive is the Agentic Control Plane.

What We're Building

Five interoperable control layers define how autonomous agents can execute safely in adversarial financial environments.

Intent Sanitization

Every incoming intent is treated as potentially adversarially crafted. Before it may enter the enforcement pipeline, it is evaluated for structural integrity, source authenticity, semantic coherence, and context manipulation. This makes intent sanitization the most critical security boundary in the architecture, and the only component designed specifically to block the threat class that makes AI agents fundamentally different from human signers.
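
As an illustration only - the Intent shape, the pattern list, and every name below are assumptions, not the DeAgenticAI API - a sanitization gate of this kind might run each check independently and refuse pipeline entry on any failure:

```python
# Hypothetical sketch of an intent-sanitization gate. All names and the
# naive pattern list are illustrative assumptions, not a real API.
import re
from dataclasses import dataclass, field

# Toy prompt-injection markers; a production gate would use far richer
# structural and semantic analysis than keyword matching.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your policy",
]

@dataclass
class Intent:
    source: str                              # claimed origin of the intent
    action: str                              # requested operation, e.g. "transfer"
    params: dict = field(default_factory=dict)
    raw_text: str = ""                       # free-form text accompanying the intent

def sanitize(intent, trusted_sources):
    """Run each check; only an intent passing all of them may proceed."""
    checks = {
        # structural integrity: well-formed action and parameters
        "structural": bool(intent.action) and isinstance(intent.params, dict),
        # source authenticity: origin must be on the trust list
        "source": intent.source in trusted_sources,
        # semantic coherence / context manipulation: no injection markers
        "semantic": not any(
            re.search(p, intent.raw_text, re.IGNORECASE)
            for p in INJECTION_PATTERNS
        ),
    }
    return all(checks.values()), checks
```

The per-check result dictionary matters as much as the verdict: a rejected intent should record which boundary it failed, so the audit layer can reconstruct why it never reached enforcement.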

Policy-First Enforcement

No action is authorised without explicit policy validation. Policies are expressed in a declarative Domain-Specific Language, evaluated deterministically at the signing layer, and versioned so that every historical governance context can be reconstructed.
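
A toy illustration of the idea, with a hypothetical dictionary-based policy standing in for the actual DSL: rules are evaluated deterministically, unmatched actions are denied by default, and every decision carries the policy version so the governance context behind it can be reconstructed later.

```python
# Illustrative stand-in for a declarative policy; the rule schema and
# field names are assumptions, not the real Policy DSL.
POLICY_V1 = {
    "version": 1,
    "rules": [
        {"action": "transfer", "max_amount": 10_000, "allowed_assets": ["USDC"]},
    ],
}

def evaluate(policy, action):
    """Deterministic, default-deny evaluation at the signing layer.
    Returns (decision, policy_version) so decisions stay auditable."""
    for rule in policy["rules"]:
        if rule["action"] == action["type"]:
            if action["asset"] not in rule["allowed_assets"]:
                return ("deny", policy["version"])
            if action["amount"] > rule["max_amount"]:
                return ("deny", policy["version"])
            return ("allow", policy["version"])
    # No rule matched: default-deny, never default-allow.
    return ("deny", policy["version"])
```

Default-deny is the important design choice here: an agent capability that no rule mentions is unauthorised, rather than silently permitted.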

Behavioural Fraud Detection

Static policy evaluates whether an action is permitted. Fraud detection evaluates whether it is consistent with the agent's declared purpose, historical behaviour, and current portfolio state - catching actions that are structurally valid but semantically anomalous.
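
A minimal sketch of that behavioural check, assuming (purely for illustration) a z-score baseline over historical transfer amounts plus a known-destination set; real detection would model far richer behavioural and portfolio state:

```python
# Hypothetical behavioural anomaly check: flags actions that static
# policy would allow but that deviate from the agent's own history.
from statistics import mean, stdev

def is_anomalous(history_amounts, amount, known_destinations, dest,
                 z_threshold=3.0):
    """True if the action is inconsistent with past behaviour."""
    # A never-before-seen destination is treated as anomalous outright.
    if dest not in known_destinations:
        return True
    # With enough history, flag amounts far outside the usual range.
    if len(history_amounts) >= 2:
        mu, sigma = mean(history_amounts), stdev(history_amounts)
        if sigma > 0 and abs(amount - mu) / sigma > z_threshold:
            return True
    return False
```

Note that such a transfer could be perfectly valid under policy - within limits, to an allowed asset - and still be flagged here, which is exactly the gap this layer exists to close.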

MPC Distributed Execution

Key shares are distributed across geographically separated signing nodes using threshold signing protocols; no individual node ever holds the complete private key. Before contributing its partial signature, each signing node independently verifies the policy authorisation hash - a cryptographic requirement that cannot be bypassed by compromising any single component, including the orchestrator.
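
A sketch of the per-node check only, with an HMAC over the node's key share standing in for a real threshold-signature share (e.g. a FROST- or GG20-style protocol); all names here are illustrative assumptions:

```python
# Hypothetical per-node signing step. The HMAC is a stand-in for a real
# partial threshold signature; the point illustrated is the ordering:
# the policy-hash check happens inside the node, before any share is produced.
import hashlib
import hmac

def node_sign(share_secret, tx_bytes, policy_hash, expected_policy_hash):
    """Contribute a partial signature only after independently
    re-verifying the policy authorisation hash."""
    # Each node re-checks the hash itself; a compromised orchestrator
    # cannot skip this step on the node's behalf.
    if not hmac.compare_digest(policy_hash, expected_policy_hash):
        raise PermissionError("policy hash mismatch: refusing partial signature")
    # Stand-in for the node's threshold-signature share over the
    # transaction bound to the authorised policy hash.
    return hmac.new(share_secret, tx_bytes + policy_hash,
                    hashlib.sha256).digest()
```

Binding the signature input to the policy hash (not just the transaction) is what makes the authorisation a cryptographic requirement rather than a procedural one.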

Audit and Compliance Infrastructure

Every decision and signature in the control plane is recorded as a cryptographically signed, hash-chained, tamper-evident log entry. Entries are formatted for MiCA regulatory reporting, DORA operational resilience requirements, and Travel Rule data exchange, making compliance a first-class architectural layer rather than a reporting overlay.
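
The hash-chaining idea can be sketched as follows (per-entry signatures and the regulatory formatting are omitted, and the names are illustrative): each entry commits to its predecessor's hash, so altering any historical record breaks verification of the chain from that point on.

```python
# Minimal hash-chained, tamper-evident log. Per-entry signatures and
# MiCA/DORA/Travel Rule formatting are deliberately omitted.
import hashlib
import json

GENESIS = "0" * 64  # previous-hash value for the first entry

def _entry_hash(prev, payload):
    # Canonical serialisation so hashing is deterministic.
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append_entry(chain, payload):
    """Append a payload, committing to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"prev": prev, "payload": payload,
                  "hash": _entry_hash(prev, payload)})
    return chain

def verify_chain(chain):
    """Recompute every hash; any edit to a past entry fails the check."""
    prev = GENESIS
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != _entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True
```

Tamper evidence, not tamper prevention, is the property: a modified entry is not blocked, but it is guaranteed to be detectable by anyone replaying the chain.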

Founding Story

DeAgenticAI was founded by Reza Jahankohan in 2026 to solve a core infrastructure gap: autonomous AI agents can already execute on-chain, but private-key-era controls cannot safely grant them verifiable authority.

Reza focuses on building the Agentic Control Plane as a cryptographic governance layer that enforces policy at signing time, with auditable controls for Web3 and enterprise financial environments.

DeAgenticAI is founder-led and execution-focused, built on a simple principle: authority must be explicitly defined, cryptographically enforced, and provable.

Founder profile and published work are linked below.

Founder Profile

Reza Jahankohan

Founder, DeAgenticAI

Reza Jahankohan is the Founder of DeAgenticAI and publishes on agentic policy enforcement, governed MPC signing, and AI-agent authority controls.

View Full Founder Profile

Work With Us

DeAgenticAI is currently working with a select group of design partners - DAOs, Web3 funds, and RWA platform operators - who need governed autonomous execution ahead of general availability. If your organisation is evaluating infrastructure for AI agent operations, we want to hear from you.

Reach us at [email protected] or use the form below.