Capabilities

What I actually work on

I build and evolve practical systems that help people and organisations make better decisions under real-world constraints: regulation, legacy platforms, operational risk, and imperfect data.

Most of the work below comes from shipping and iterating on my own products, not advisory decks.

Decision-support platforms (Product & Delivery)

I design and build tools that make complex work visible, auditable, and easier to reason about.

Examples:

  • STEa - a structured product and delivery workspace that links decisions, testing, documentation, and outcomes, without turning into process theatre.
  • Lightweight systems for capturing why decisions were made, not just what was delivered.
  • Tools that reduce handover loss between product, engineering, QA, and stakeholders.
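As a rough sketch, a decision record of this kind links the "why" to the "what". All field names here are illustrative, not STEa's actual schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Captures why a decision was made, not just what was delivered.

    Hypothetical structure for illustration only.
    """
    title: str
    decided_on: date
    rationale: str  # the "why" that usually gets lost in handover
    alternatives_rejected: list = field(default_factory=list)
    linked_outcomes: list = field(default_factory=list)  # tests, docs, deliverables

# Example: the record carries the reasoning alongside the outcome
record = DecisionRecord(
    title="Defer multi-region rollout",
    decided_on=date(2024, 5, 1),
    rationale="Operational risk outweighs latency gains this quarter",
    alternatives_rejected=["Immediate dual-region deploy"],
)
```

Because rationale and rejected alternatives travel with the decision itself, a later reader does not have to reconstruct the reasoning from chat logs and memory.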

Focus:

  • clarity over ceremony
  • traceability without bureaucracy
  • systems that teams actually keep using

Auditability, risk & accountability (Orbit)

I am interested in how systems hold up when they are questioned later - by regulators, auditors, or customers.

Examples:

  • Orbit - an append-only, verifiable ledger for recording decisions, data usage, and system interactions in a way that supports accountability and post-hoc review.
  • Orbit overview - the STEa + Orbit accountability framework and staged proof-of-concept plan.
  • Exploring how cryptographic proofs and structured logs can replace brittle screenshots, spreadsheets, and "trust me" documentation.
  • Designing for explainability rather than blind automation.
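A minimal sketch of the append-only, hash-chained ledger idea behind this kind of accountability record. This is illustrative only, not Orbit's actual implementation:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(ledger, event):
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else GENESIS
    body = {"event": event, "prev_hash": prev_hash}
    # Deterministic serialisation so the hash is reproducible on review
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append({**body, "hash": digest})
    return ledger

def verify(ledger):
    """Recompute every hash; any edit to a past entry breaks the chain."""
    prev_hash = GENESIS
    for entry in ledger:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

# Example usage
ledger = []
append_entry(ledger, {"actor": "alice", "action": "approved release"})
append_entry(ledger, {"actor": "bob", "action": "logged data access"})
```

The point of the chain is that evidence exists by default: a reviewer can recompute the hashes instead of trusting a screenshot, and retroactive edits are detectable rather than silent.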

Focus:

  • human accountability stays explicit
  • clear boundaries on where automation is allowed
  • evidence by default, not reconstruction later

Applied AI (used carefully, not everywhere)

I use AI as a thinking and support tool, not as an authority.

Examples:

  • Using AI to help surface inconsistencies, risks, or gaps for human review - not to make final decisions.
  • Designing human-in-the-loop patterns with clear ignore/override rules.
  • Testing small, low-risk pilots before scaling anything.
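The human-in-the-loop pattern above can be sketched in a few lines. The names and rules here are hypothetical, chosen to show the shape of the idea (AI surfaces, humans decide, deviations carry a recorded reason):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """An AI-surfaced finding offered for human review, never auto-applied."""
    finding: str
    confidence: float

def decide(suggestion, decision, reason=None):
    """Record the human decision on a suggestion.

    The human decision always wins. Ignoring or overriding a finding
    requires a recorded reason, so the 'why' survives later review.
    """
    if decision in ("ignore", "override") and not reason:
        raise ValueError("Ignoring or overriding a suggestion requires a recorded reason")
    return {"suggestion": suggestion.finding, "decision": decision, "reason": reason}

# Example usage
s = Suggestion(finding="Spec section conflicts with test plan", confidence=0.7)
outcome = decide(s, "accept")
```

The design choice worth noting: the system never acts on a suggestion itself; it only records what a named human decided and why, which keeps ownership explicit.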

Focus:

  • validation over novelty
  • bounded scope and clear ownership
  • knowing where AI should not be used

App & system development

Alongside platform work, I design and ship focused apps where existing tools are either too generic or too heavy.

Examples:

  • iOS and Android apps with privacy-first principles.
  • Narrow, opinionated products built around a specific job-to-be-done.
  • Systems designed to be maintained realistically, not abandoned after launch.

Focus:

  • simplicity over feature volume
  • compliance baked in, not bolted on
  • real usage, not demos

How this all ties together

Across STEa, Orbit, and my app work, the common thread is the same:

  • make complex work easier to reason about
  • reduce risk without freezing progress
  • design systems that reflect how people actually work

No spin. No vanity metrics. Just tools that hold up when things get messy.

Distinct systems. Clear decisions. Fewer surprises later.

Next step

Need an app shipped, analytics made reliable, or product bets sharpened?

Get in touch