Building in Public

Sprint Update: Days 8-9 — AI Credentials, Stateless Verification, and a Launch-Blocking Bug We Almost Missed

We're nine days into what I've been calling the epic sprint, and the last two days were a mix of genuinely exciting new capability and the kind of architectural reckoning that makes you grateful you caught it before launch.

Here's what happened.


The Big Feature: AI-Powered Credential Generation

The headline from these two days is the AI Agent tab for Digital Facility Records. This is something I've wanted to build since the beginning — a way for users to go from a pile of documents to a valid, issued UNTP credential without having to manually map fields or learn the schema.

The flow works like this: you drop in your PDFs, Word docs, spreadsheets, or CSVs, and the agent reads them. It then builds a complete, UNTP-conformant Digital Facility Record from whatever it finds in the source material. If you've uploaded certifications, audit reports, or compliance records, it doesn't just ignore them — it creates conformityClaim entries in the credential, linking each claim back to its source document. No manual entry.

The part I'm most proud of is the validation loop. During generation, the agent calls a run_validation tool that checks its own output against the official UNTP schema — the same AJV 2020-12 engine used by the UNTP Playground. If the credential fails validation, the agent reads the errors and self-corrects, looping up to three times before surfacing the result to the user. It's doing exactly what a careful developer would do: generate, validate, fix, repeat.
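The generate → validate → self-correct loop can be sketched as a small retry harness. This is a hedged illustration, not our actual implementation: `generateWithSelfCorrection` and its callbacks are hypothetical names, standing in for the agent's generation call, the `run_validation` tool, and the error-driven repair step.

```typescript
// Sketch of a generate → validate → fix loop with a bounded retry count.
// The three callbacks are hypothetical stand-ins for the agent's
// generation call, the run_validation tool, and the repair step.
type ValidationResult = { valid: boolean; errors: string[] };

const MAX_ATTEMPTS = 3;

async function generateWithSelfCorrection(
  generate: () => Promise<object>,
  validate: (credential: object) => ValidationResult,
  repair: (credential: object, errors: string[]) => Promise<object>,
): Promise<{ credential: object; attempts: number; valid: boolean }> {
  let credential = await generate();
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    const result = validate(credential);
    if (result.valid) {
      return { credential, attempts: attempt, valid: true };
    }
    // Feed the validator's errors back to the model and try again.
    credential = await repair(credential, result.errors);
  }
  // Surface the final state to the user even if it never validated.
  return { credential, attempts: MAX_ATTEMPTS, valid: validate(credential).valid };
}
```

The key property is the bounded loop: the agent never retries indefinitely, and the user always sees the final state, valid or not.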

After the initial generation, users refine through a chat interface with a live diff panel. The AI proposes changes field by field, and users can accept or reject individual edits before anything is persisted. This matters a lot to me from a trust standpoint — the AI is a collaborator, not an autopilot.

One practical detail worth sharing: we're using Anthropic's prompt caching with a three-block system prompt structure. The schema and rules (~5,000 tokens) are cached across requests, which cuts token costs by roughly 90% and keeps response times fast even with a large schema payload.
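As a rough sketch of what that three-block structure looks like: Anthropic's Messages API accepts the system prompt as an array of text blocks, and blocks marked with `cache_control` are cached as a stable prefix. The function and block contents below are illustrative assumptions, not our exact prompt.

```typescript
// Illustrative three-block system prompt for Anthropic's Messages API.
// Blocks marked cache_control form a stable prefix that is cached
// across requests; the per-request context stays uncached.
type SystemBlock = {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
};

function buildSystemPrompt(
  schema: string,       // the large (~5,000-token) UNTP schema payload
  rules: string,        // generation rules, also stable across requests
  taskContext: string,  // per-request context; changes every call
): SystemBlock[] {
  return [
    { type: "text", text: schema, cache_control: { type: "ephemeral" } },
    { type: "text", text: rules, cache_control: { type: "ephemeral" } },
    { type: "text", text: taskContext }, // left uncached on purpose
  ];
}
```

Because the cached blocks come first and never change, every request after the first reuses the cached prefix, which is where the token savings come from.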

Want to see a real output? Here's a live Digital Facility Record for Xylo Digital Headquarters, generated entirely by the AI agent from source documents: Xylo Digital Headquarters DFR

The Issue We Almost Shipped: Verification Depended on the Database

Midway through these two days, I discovered something that would have been a launch blocker in production: our verification flow was routing through Directus.

The way it worked originally, the public verification page did a database lookup to find the credential before verifying it. That's backwards. W3C Verifiable Credentials are designed to be verifiable independently of where they're stored — the database shouldn't be in the loop at all.

This matters practically, not just architecturally. If a credential is issued on DPP Kit, it should be verifiable by anyone, anywhere, using any conformant verification tool — not just our own page hitting our own database.

We rebuilt verification to be fully stateless. Credential IDs are now urn:uuid format. The Identity Resolver registers three link types per credential: the raw JSON for storage, an HTML verification page, and a verificationService link. The public verify page reads a ?q= payload containing the resolver URL, fetches the credential from storage via IDR linkset resolution, and verifies the signature through VCKit. No Directus involved anywhere in that chain.
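To make the `?q=` mechanism concrete, here's a minimal sketch of building and parsing that link. The payload shape (`{ payload: { uri } }`) and URLs are assumptions for illustration; the point is that the link carries only a resolver URL, so any conformant verifier can follow it without touching our database.

```typescript
// Sketch: the verify link carries only a resolver URL in its ?q= payload.
// The verify page parses it, fetches the credential via IDR linkset
// resolution, and verifies the signature — no database lookup anywhere.
function buildVerifyLink(verifyPageBase: string, resolverUrl: string): string {
  const payload = { payload: { uri: resolverUrl } };
  const q = encodeURIComponent(JSON.stringify(payload));
  return `${verifyPageBase}?q=${q}`;
}

function parseVerifyLink(link: string): string {
  // URLSearchParams decodes the percent-encoding for us.
  const q = new URL(link).searchParams.get("q");
  if (!q) throw new Error("missing ?q= payload");
  return JSON.parse(q).payload.uri;
}
```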

An added benefit: because we're using the resolver URL as the payload URI, the verification link stays stable when a credential is re-issued. The IDR upserts the latest version while prior versions accumulate as history. That's the kind of behavior that matters when credentials represent real compliance claims with audit trails.
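The upsert-with-history behavior can be sketched as a tiny registry: the resolver keys on a stable credential URI, replaces the latest version on re-issue, and pushes the superseded version into history. Class and field names here are illustrative, not the IDR's actual API.

```typescript
// Sketch of upsert-with-history keyed on a stable credential URI.
// Names are illustrative; the real IDR exposes this via linksets.
type CredentialVersion = { issuedAt: string; document: object };

class ResolverRegistry {
  private latest = new Map<string, CredentialVersion>();
  private history = new Map<string, CredentialVersion[]>();

  upsert(uri: string, version: CredentialVersion): void {
    const prev = this.latest.get(uri);
    if (prev) {
      // Prior versions accumulate as an audit trail.
      this.history.set(uri, [...(this.history.get(uri) ?? []), prev]);
    }
    this.latest.set(uri, version);
  }

  // The same URI always resolves to the newest version, so published
  // verification links never go stale across re-issues.
  resolve(uri: string): CredentialVersion | undefined {
    return this.latest.get(uri);
  }

  versions(uri: string): CredentialVersion[] {
    const current = this.latest.get(uri);
    return [...(this.history.get(uri) ?? []), ...(current ? [current] : [])];
  }
}
```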


Everything Else

Beyond those two big pieces, we shipped a unified collapsible sidebar replacing the old sub-navbar, reorganized the credential editor tab order, and cleaned up the bottom action bar. Viewer-role users can now access issued credential details without getting editor access — a permission gap we'd previously overlooked.

On the infrastructure side: fixed a Caddy routing issue that was causing file uploads to 404, resolved a PDF parser ESM compatibility error, and cleaned up a document folder race condition that could surface when switching between editor tabs quickly. None of these are glamorous, but every one of them would have caused real friction for users.


Where We Are

Nine days in, and the platform is doing something genuinely useful: taking a folder of compliance documents and turning them into a cryptographically signed, UNTP-conformant credential with an independently verifiable URL. The AI layer makes that accessible to people who've never touched a JSON-LD schema. The stateless verification means those credentials actually work the way the standard intends.

We're a few months away from calling this production grade, but we're just a couple of short sprints away from being able to support UNTP pilots in full force!