TL;DR

Helped an enterprise AI observability platform improve their docs, make them AI-agent-ready, and set up automation to keep everything in sync. Built internal AI agents that monitor code changes and upstream integrations.

The result: docs that stay fresh automatically, and systems that catch breaking changes before they hit production.


The Problem

This client runs an AI observability and evals platform used by enterprise teams. They were launching a new SDK version and had a few pain points:

  • Docs that needed a revamp for the new SDK
  • Integrations with many external tools that could break whenever those tools shipped updates
  • No system to catch docs drift or integration issues early
  • A goal of making their docs AI-agent-ready

The Approach

Same philosophy I bring to all my consulting work: deliver value fast, and leave behind systems that keep delivering after the engagement ends.

I brought three things:

  1. User perspective — I use evals tools daily, so I could give real feedback on what works and what doesn’t
  2. Product taste — spotting DX issues across their SDK and web app
  3. Systems thinking — automation that compounds, not one-time fixes

What I Built

Docs Revamp

Improved the docs structure and made them AI-agent-ready. This means AI coding tools and agents can easily consume and work with their documentation.
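One common convention for this (a hedged illustration, not necessarily what this client shipped) is an `llms.txt` index at the docs root: a plain-markdown manifest that tells AI agents what the product is and where the machine-readable docs live. The product name and URLs below are hypothetical:

```markdown
# Acme Observability

> AI observability and evals platform for enterprise LLM applications.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): install the SDK and send your first trace
- [SDK Reference](https://docs.example.com/sdk.md): full API reference for the new SDK
- [Integrations](https://docs.example.com/integrations.md): setup guides for external tools
```

Pairing an index like this with markdown versions of every page lets coding agents pull docs directly instead of scraping rendered HTML.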

Internal AI Agents

This is where it gets interesting:

  1. Docs auto-update agent — monitors code changes and flags when docs need updating. No more docs drift.

  2. Integration monitoring agent — the platform integrates with many external tools. When those tools ship new versions, the agent checks whether anything on the platform's side needs updating.
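As a rough illustration of the check at the heart of the first agent (a minimal sketch assuming a hypothetical repo layout with `sdk/` and `docs/` directories, not the client's actual implementation):

```python
def docs_needing_review(changed_paths):
    """Flag SDK source changes that arrived without any docs change.

    Assumes a hypothetical repo layout: SDK code under sdk/,
    documentation under docs/. Takes the list of file paths changed
    in a pull request and returns the SDK files that may have
    drifted from the docs.
    """
    src_changed = sorted(p for p in changed_paths if p.startswith("sdk/"))
    docs_touched = any(p.startswith("docs/") for p in changed_paths)
    return [] if docs_touched or not src_changed else src_changed


# Example: a pull request that edits the client but not the docs
print(docs_needing_review(["sdk/client.py", "tests/test_client.py"]))
# → ['sdk/client.py']
```

In practice a check like this runs in CI against each pull request's changed-file list, and the agent comments on the PR (or drafts the docs update itself) when something is flagged.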

Product Feedback

Beyond docs, I gave feedback across their entire surface: the SDK and the web app. The kind of DX issues that are hard to spot unless you're actually using the product.