Private beta. Taking design partners now.

The evidence layer for autonomous agents.

Your AI agents already read source, run shells, call tools, and ship code to production. Provedit gives you one provable record of every action they take: which agent, on whose authority, whether it was allowed, and who approved it. Across every vendor, in one timeline.

You shipped agents to production. The audit trail didn't ship with them.

Cursor, Claude Code, Copilot agent mode, OpenAI Assistants, LangChain workers, your own MCP-driven services. They open files, run shells, call tools, change data, and deploy to production. They do it with credentials originally issued to a human developer or a service account, so the record of what happened points back at the person whose token was borrowed, not the agent that used it.

Every vendor logs its own slice. None of them follows the action across a different vendor's pipeline. None of them records the policy decision or the human approval as evidence bound to the action it authorised. What you are left with is a pile of partial logs that nobody can stitch back together when an auditor, a regulator, or an incident lands on your desk.

Three questions then become very hard to answer:

Q. When an agent did something sensitive, can you tell which agent it was, on whose authority, and whether that behaviour was normal for it?
Q. Was the action within policy, and is the human approval bound to it as evidence, not just sitting in a separate ticket?
Q. If you were audited or breached tomorrow, could you reconstruct the full path (identity, data touched, decision, approval) across every vendor, and prove nothing was edited after the fact?

Provedit is built to answer all three, plus five more, for every single action your agents take.

Eight questions. Pre-answered. Every action.

Every action that lands in Provedit shows up with the same eight questions already answered, each with a status badge and the evidence behind it. Your analysts stop writing queries and start reading verdicts.

01

Which agent did this?

Identity model and session metadata, not just a credential.

02

Was it allowed?

Policy engine decision persisted in the entry hash.

03

Which data did it access?

Normalised target, payload hash, drill-down to blob.

04

Which tool did it call?

~30 normalised action types across read, write, exec, network, secret, IAM, deploy.

05

Was a human approval required?

Signed policy.approve bound to the original action by entry hash.

06

Was this normal for this agent?

Per-agent rolling baseline plus anomaly flags.

07

Did it leak sensitive data?

Sensitive-target hints, exfiltration patterns, allowlisted-host check.

08

Can we prove the timeline later?

Hash chain plus periodic Merkle anchors plus signed root.
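
To make that concrete, here is a minimal sketch of what one entry's pre-answered verdicts could look like as a data structure. The field names, statuses, and values are illustrative assumptions, not Provedit's actual schema.

```typescript
// Illustrative only: field names and values are assumptions, not Provedit's real schema.
type Verdict = "pass" | "flag" | "blocked" | "pending";

interface EntryAnswers {
  agent: { id: string; authority: string; verdict: Verdict };                     // 01 which agent
  policy: { decision: "allow" | "deny" | "require_approval"; verdict: Verdict };  // 02 was it allowed
  data: { target: string; payloadHash: string; verdict: Verdict };                // 03 which data
  tool: { actionType: string; verdict: Verdict };                                 // 04 which tool
  approval: { required: boolean; approver?: string; verdict: Verdict };           // 05 human approval
  baseline: { anomalyScore: number; verdict: Verdict };                           // 06 normal for this agent
  exfiltration: { hostAllowlisted: boolean; verdict: Verdict };                   // 07 sensitive leak
  integrity: { entryHash: string; anchored: boolean; verdict: Verdict };          // 08 provable timeline
}

const example: EntryAnswers = {
  agent: { id: "agent:ci-deployer", authority: "svc-deploy@example.com", verdict: "pass" },
  policy: { decision: "require_approval", verdict: "pending" },
  data: { target: "db:payments/customers", payloadHash: "sha256:9f2c...", verdict: "flag" },
  tool: { actionType: "write.db", verdict: "pass" },
  approval: { required: true, verdict: "pending" },
  baseline: { anomalyScore: 0.82, verdict: "flag" },
  exfiltration: { hostAllowlisted: true, verdict: "pass" },
  integrity: { entryHash: "sha256:5b1a...", anchored: true, verdict: "pass" },
};
```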

Same eight answers, every action. Here is how the platform produces them.

How it works.

One schema feeds one recorder, which writes to one verifiable ledger. Three steps for every action.

1. Collect

Signed events arrive from MCP proxies, CI SDKs, IDE extensions, host sensors, and cloud-audit forwarders. They all speak the same schema, so adding a new agent vendor is a config change, not a rewrite.
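
As a rough illustration of what "one schema" means at the collection edge, the sketch below shows an event shape every collector could emit. The names and example values are assumptions for illustration, not the real wire format.

```typescript
// Sketch of a collector event in a single shared schema; field names are illustrative assumptions.
interface AgentEvent {
  source: "mcp_proxy" | "ci_sdk" | "ide_extension" | "host_sensor" | "cloud_audit";
  agentId: string;       // stable agent identity, not the borrowed credential
  sessionId: string;
  action: string;        // normalised action type, e.g. "exec.shell", "write.file"
  target: string;        // normalised target, e.g. "repo:payments-api/src/auth.ts"
  payloadHash: string;   // hash of the raw payload; the blob is stored separately
  timestamp: string;     // RFC 3339
  signature: string;     // collector's signature over the canonical event body
}

// Adding a new vendor means mapping its native log into this shape; the recorder never changes.
const fromMcpProxy: AgentEvent = {
  source: "mcp_proxy",
  agentId: "agent:claude-code:team-payments",
  sessionId: "sess_01H-example",
  action: "exec.shell",
  target: "host:build-runner-7",
  payloadHash: "sha256:ab3d...",
  timestamp: "2025-06-03T14:21:09Z",
  signature: "ed25519:...",
};
```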

2. Decide

The recorder classifies the action, evaluates policy (allow, deny, or require approval), scores it against the agent's normal behaviour, hash-chains the entry, persists it, and returns the outcome. All in one atomic step.
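
The hash-chaining part of that step can be sketched in a few lines. This is a simplified illustration under obvious assumptions (JSON serialisation, an in-memory chain); the real recorder also signs, persists, and returns the policy outcome within the same transaction.

```typescript
import { createHash } from "node:crypto";

// Minimal hash-chaining sketch: each entry commits to the previous entry's hash.
interface LedgerEntry {
  event: unknown;
  policyDecision: "allow" | "deny" | "require_approval";
  anomalyScore: number;
  prevHash: string;   // hash of the previous entry
  entryHash: string;  // hash over this entry's content plus prevHash
}

function appendEntry(
  chain: LedgerEntry[],
  event: unknown,
  policyDecision: LedgerEntry["policyDecision"],
  anomalyScore: number,
): LedgerEntry {
  const prevHash = chain.length ? chain[chain.length - 1].entryHash : "genesis";
  const body = JSON.stringify({ event, policyDecision, anomalyScore, prevHash });
  const entryHash = "sha256:" + createHash("sha256").update(body).digest("hex");
  const entry: LedgerEntry = { event, policyDecision, anomalyScore, prevHash, entryHash };
  chain.push(entry); // in the real system this append is one atomic step
  return entry;
}
```

Because every entry's hash covers the previous hash, editing or removing any earlier entry breaks every hash that follows it.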

3. Prove

Periodic Merkle anchors and signed roots turn the chain into evidence. Auditors and incident responders can verify, weeks or years later, that nothing was edited after the fact.
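
In spirit, verification looks like the sketch below: recompute a Merkle root over the entry hashes in an anchor window and compare it with the signed root published at anchor time. The pairing and padding rules here are assumptions, not the exact scheme.

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Recompute a Merkle root over a window of entry hashes (illustrative construction).
function merkleRoot(entryHashes: string[]): string {
  if (entryHashes.length === 0) return sha256("");
  let level = entryHashes;
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate the last node on odd-sized levels
      next.push(sha256(level[i] + right));
    }
    level = next;
  }
  return level[0];
}

// An auditor checks that the recomputed root matches the root that was signed at anchor time.
function verifyAnchor(entryHashes: string[], signedRoot: string): boolean {
  return merkleRoot(entryHashes) === signedRoot;
}
```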

Three things make that ledger different from a log table.

Identity-first, not log-first

You don't open Provedit on a stream of raw events. You open it on the agent: its sessions, its normal behaviour, the sensitive things it has touched recently, the approvals waiting on it. SOC analysts already work this way with EDR device pages, so the muscle memory transfers on day one.
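
A rough sketch of what that agent-first unit of work could contain; the names are illustrative, not the product's API.

```typescript
// Illustrative shape of an agent-centric view, the thing an analyst opens first.
interface AgentPage {
  agentId: string;                       // e.g. "agent:claude-code:team-payments"
  activeSessions: string[];
  baseline: { typicalActions: string[]; anomaliesLast7Days: number };
  recentSensitiveTargets: string[];      // the sensitive things it has touched recently
  pendingApprovals: number;              // approvals currently waiting on it
}
```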

Approvals as evidence

A normal log says "X happened". Provedit says "X happened, this rule evaluated it, this person approved it, and the approval is cryptographically bound to that exact action." That binding is what survives an audit, a lawsuit, or a regulator.
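
The binding itself is simple to sketch: the signed approval record includes the hash of the exact ledger entry it authorises. The record shape and helper below are illustrative assumptions, not Provedit's real API.

```typescript
// Sketch of binding a human approval to the exact action it authorises.
interface ApprovalRecord {
  type: "policy.approve";
  approvedEntryHash: string; // hash of the original action's ledger entry
  approver: string;
  decidedAt: string;
  signature: string;         // approver's signature over the fields above
}

function bindApproval(
  actionEntryHash: string,
  approver: string,
  sign: (msg: string) => string, // the approver's signing function
): ApprovalRecord {
  const record = {
    type: "policy.approve" as const,
    approvedEntryHash: actionEntryHash,
    approver,
    decidedAt: new Date().toISOString(),
  };
  // The signed message includes the entry hash, so the approval cannot be
  // re-pointed at a different action after the fact.
  return { ...record, signature: sign(JSON.stringify(record)) };
}
```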

One pane across every vendor

Each agent platform will keep improving its own logs inside its own walls. Provedit sits one layer above them all: Cursor, Claude Code, Copilot, OpenAI Assistants, LangChain, JetBrains, self-hosted MCP tools, CI bots, and the cloud APIs they touch. One timeline, one identity model, one chain of evidence, regardless of which vendor produced the action.

Who it's for.

Not every team using AI, and not on day one. Provedit is built first for teams where agents are already touching regulated data, and where one platform group owns how those agents are deployed.

Inside those teams: platform security and AI platform engineering as the champion, CISO and GRC as the sponsor, the SOC as the day-to-day operator. If that sounds like you, the waitlist below is the right next step.

Honest answers to the obvious objections.

Won't the agent vendors solve this themselves?

Each vendor is improving observability inside its own surface, and that work is welcome. None of them is incentivised, or positioned, to be the neutral system of record across every other vendor's agents. Provedit is the layer above them all: one identity model, one policy engine, one tamper-evident chain that spans Cursor, Claude Code, Copilot, OpenAI Assistants, LangChain, self-hosted MCP tools, and CI agents, and that survives outside any single vendor's retention window.

Isn't this just a SIEM table?

A SIEM stores events. Provedit treats the agent as a long-lived identity, evaluates policy in line, binds human approvals to specific actions cryptographically, and produces a tamper-evident chain that an auditor can verify on their own. Your SIEM is a downstream consumer of that chain, not the source of truth for it.

Evidence, or prevention?

Both, dialled per action class. The same product runs in observe, alert, require-approval, and block modes, with the MCP proxy as the natural enforcement point. It defaults to observe, so a noisy approval queue never erodes trust on day one.
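
As a sketch of what "dialled per action class" could mean in practice, a policy might map normalised action types to modes, with observe as the default. The format below is illustrative, not Provedit's real configuration.

```typescript
// Illustrative policy sketch: modes per action class, defaulting to observe.
type Mode = "observe" | "alert" | "require_approval" | "block";

const defaultMode: Mode = "observe";

const policy: Record<string, Mode> = {
  "read.*": "observe",
  "write.file": "observe",
  "exec.shell": "alert",
  "secret.read": "require_approval",
  "iam.*": "require_approval",
  "deploy.production": "block", // enforced at the MCP proxy
};
```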

What about the EU AI Act and ISO 42001?

Article 12 (record-keeping, in force 2 August 2026), Article 14 (human oversight), and Article 19 (log retention) require traceability, human oversight, and retained evidence as outcomes. They do not mandate a specific control. Provedit is one defensible implementation path for those outcomes, alongside ISO/IEC 42001, NIST AI RMF, and the GenAI profile.

Do I have to install another endpoint agent?

No. The pilot footprint is an MCP proxy plus a CI SDK, with nothing installed on developer machines. IDE extensions, log tailers, and host sensors are opt-in collectors you can add later, once the value is obvious.

Get the evidence layer for your agents.

We are taking a small number of design partners through the pilot now. One week to instrument your highest-risk internal agents and MCP tools with the proxy and CI SDK. From that point on, every agent action arrives with an identity, a policy decision, an approval where one was required, and a tamper-evident timeline. Tell us a little about your environment and we'll be in touch directly.

We use this email only to contact you about the design-partner programme. No newsletter, no marketing tracking, no third-party sharing.