
Framework Overview

STEa + Orbit


A Human-AI Accountability Framework

Making AI governance practical without waiting for perfect policies

STEa captures intent and context. Orbit proves provenance. Together they make AI accountability tangible for teams still operating in messy, incomplete policy landscapes.

The Problem

Most organisations cannot reliably answer basic questions about their AI use.

Which rules applied to this output?

Policies live across wikis, PDFs, and tribal knowledge. The answers are scattered and untraceable.

Who approved it?

Logs show activity, not intent. Approvals are often informal, invisible, or impossible to reconstruct.

What did we believe at the time?

Post-hoc narratives erase uncertainty. Unless belief is captured at the time, it is impossible to prove later.

Can we prove it later?

If you cannot answer the three questions above, you cannot answer this one either.

Logs, policies, and post-hoc narratives do not answer these questions. This framework does.

The Solution

A three-layer architecture that creates accountability without requiring policy cleanup first.

Input

Company Guidance - Messy, human, real

Data rules, AI usage policy, regulatory requirements. Lives in Confluence, PDFs, SharePoint, or tribal knowledge.

Layer 1

STEa - Intent + Context

Human-authored planning artefacts that declare what you are doing, under which rules, with what uncertainty.
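A minimal sketch of the kind of declaration such an artefact captures, expressed here in Python for concreteness; the field names are illustrative assumptions, not the STEa schema.

```python
from dataclasses import dataclass

# Illustrative only: what a STEa-style declaration records.
# Field names are assumptions, not a prescribed STEa structure.
@dataclass
class SteaDeclaration:
    task: str                       # what we are doing
    rules_believed_to_apply: list   # which guidance we believe governs it
    known_uncertainties: list       # what we are not sure about
    owner: str                      # who is accountable

declaration = SteaDeclaration(
    task="Draft customer-facing FAQ with LLM assistance",
    rules_believed_to_apply=["AI usage policy v0.3", "No PII in prompts"],
    known_uncertainties=["Unsure whether FAQ copy counts as 'advice'"],
    owner="jane.doe",
)
```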

Layer 2

LLMs / Agents - Capability

Generate, analyse, suggest, accelerate, but only within declared constraints. The models never infer policy or resolve ambiguity.

Layer 3

Orbit - Proof

Immutable record of constraints, planning snapshot, outputs, and approvals. No data, only hashes.
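A hedged sketch of the "no data, only hashes" idea: the artefact stays wherever it lives, and only a deterministic fingerprint is recorded. Function and field names are illustrative, not Orbit's actual API.

```python
import hashlib
import json

# Illustrative sketch: Orbit-style proof stores fingerprints, not content.
def fingerprint(artefact: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON form of an artefact."""
    canonical = json.dumps(artefact, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The document itself never leaves the team; only its hash is recorded.
snapshot_hash = fingerprint({"file": "CONTEXT.md", "version": 3, "body": "..."})
```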

"We cannot prevent all mistakes. We can prove what we believed, what rules we applied, and who was responsible when they occurred."

Key Principles

The framework is intentionally pragmatic, allowing teams to start immediately and evolve.

Start messy

No policy cleanup required. Declare what you believe applies now and make uncertainty visible.

Prove belief, not compliance

Record what you believed at time of action. Divergence from policy becomes a signal.

Bound action, not cognition

Enforce constraints through system architecture. Do not rely on AI to understand rules.

Degrade gracefully

Partial adoption still produces value. The framework works even with an incomplete picture of the truth.

Three-Stage Proof of Concept

Each stage validates a layer independently before combining them.

Stage 1: Intent Capture

4-6 weeks

Focus

Validate STEa as a thinking tool, without AI.

Scope

Apply to transformation programme decisions: domain splits, team structures, role ownership.

Mechanism

START_HERE.md, CONTEXT.md, DECLARED_STRUCTURE.md, DECISIONS.md
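A hedged sketch of how a team might scaffold these four artefacts; the section headings are illustrative assumptions, not a prescribed STEa template.

```python
from pathlib import Path

# Illustrative skeletons for the Stage 1 artefacts. Headings and wording
# are assumptions about typical content, not a mandated format.
SKELETONS = {
    "START_HERE.md": "# Start here\nWhat this programme is and how to read these files.\n",
    "CONTEXT.md": "# Context\nWhat we believe is true today, and what we are unsure about.\n",
    "DECLARED_STRUCTURE.md": "# Declared structure\nDomain splits, team structures, role ownership.\n",
    "DECISIONS.md": "# Decisions\nOne entry per decision: assumptions, rules believed to apply, owner.\n",
}

def scaffold(root: str = ".") -> None:
    for name, body in SKELETONS.items():
        path = Path(root) / name
        if not path.exists():  # never overwrite artefacts the team already owns
            path.write_text(body, encoding="utf-8")
```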

Success

Teams refer back to artefacts unprompted; disagreements shift from "you said" to "the assumption was".

Kill criteria

Teams refuse to fill in artefacts; artefacts are perfunctory; no one refers back.

Stage 2: Bounded AI

4-6 weeks

Focus

Validate constraint enforcement with real AI use.

Scope

Low-risk internal tool: code generation, doc drafting, data analysis.

Mechanism

STEa constraints shape prompts, gate data access, and require approvals.
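A hedged sketch of how declared constraints could shape prompts and gate data access in practice; the function names and blocked-source list are illustrative assumptions, not part of STEa or Orbit.

```python
# Illustrative gate: refuse to build a prompt that steps outside declared bounds.
FORBIDDEN_SOURCES = {"customer_pii", "payroll"}  # taken from the STEa declaration

def build_prompt(task, sources, approved_by=None):
    """Enforce declared constraints before any model call is made."""
    blocked = FORBIDDEN_SOURCES.intersection(sources)
    if blocked:
        raise PermissionError(f"Data sources outside declared bounds: {sorted(blocked)}")
    if approved_by is None:
        raise PermissionError("Declared constraints require a named approver")
    # Constraints travel with the prompt so violations stay visible downstream.
    return f"Task: {task}\nApproved by: {approved_by}\nAllowed sources: {sorted(sources)}"

prompt = build_prompt(
    task="Draft internal release notes",
    sources=["jira_export", "public_docs"],
    approved_by="jane.doe",
)
```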

Success

AI operates within declared bounds; violations are caught; teams can explain what rules applied.

Kill criteria

Enforcement too brittle; constraints ignored; overhead exceeds value.

Stage 3: Provable Provenance

4-6 weeks

Focus

Validate Orbit as the proof layer.

Scope

Add immutable recording to the Stage 2 project.

Mechanism

Hash planning snapshots, constraints, outputs, and approvals with identities + timestamps.
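A hedged sketch of what such a record could look like: hashes chained with identities and timestamps so that tampering with earlier entries is detectable. This illustrates the idea only and is not Orbit's actual API or storage format.

```python
import hashlib
import json
import time

def sha256(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def append_entry(log, kind, content, actor):
    """Record proof that `content` existed and who acted, without storing the content."""
    entry = {
        "kind": kind,                     # e.g. "snapshot", "constraint", "output", "approval"
        "content_hash": sha256(content),  # the data itself never enters the log
        "actor": actor,
        "timestamp": time.time(),
        "prev": log[-1]["entry_hash"] if log else "",
    }
    entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True))
    log.append(entry)
    return entry

log = []
append_entry(log, "snapshot", "CONTEXT.md v3 contents ...", "jane.doe")
append_entry(log, "output", "Generated FAQ draft ...", "llm-service")
append_entry(log, "approval", "Release approved", "compliance.lead")
```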

Success

The team can answer "what rules applied?" and "who approved?" for any AI output.

Kill criteria

Proof layer adds friction without value; auditors do not find it useful.

Timeline Summary

Total timeline: 12-18 weeks
Stage                Duration    AI Risk          Primary Value
1. Intent Capture    4-6 weeks   None             Decision traceability
2. Bounded AI        4-6 weeks   Low (internal)   Constraint enforcement
3. Provenance        4-6 weeks   Low (internal)   Audit-ready proof

Each stage has independent value. If Stage 1 fails, you learn quickly at minimal cost. If it succeeds, you have momentum and evidence for Stage 2 and beyond.