OpinionAI Workflow
The Argument

Two engineers on the same project ask the same AI agent to build the same feature. One gets something generic. The other gets something that fits the product, follows the conventions, and lands close enough to merge. Same model. Same prompt. Different results. The difference isn't skill. It isn't a secret prompt. It's that one treated the agent like a Stack Overflow query and the other treated it like a teammate joining on day one.

This is one implementation pattern, shaped by how I build Uno Platform apps with AI-assisted development. Yours may have fewer files, different names, or a different flow. The exact structure isn’t the point. The principle is: context is the product.

Why "Just Prompt Better" Is the Wrong Answer

The prompt-engineering frame assumes a transactional model: input prompt, output code. That model fits autocomplete. It does not fit building.

Building requires sustained context across hundreds of decisions: naming, spacing, navigation patterns, error states, what to log, what not to abstract. Most of those decisions never appear in any single prompt. They live in the team's shared sense of how things are done here.

So the real question isn't "what's the best prompt?" It's "what does an agent need to know to make decisions the way our team would?" And the honest answer is: the same thing a new teammate needs. Just written down.

The anti-pattern this replaces is the mega-prompt: those 4000-token instruction blocks some teams paste into every conversation. Brittle. Unmaintainable. Invisible to the humans on the project. And the moment the codebase shifts, the mega-prompt is wrong in ways no one can see. The mega-prompt is a hack around missing documentation. Treat the missing documentation as the bug.

Documents Are the Engineering Surface

The interesting move is treating project documentation (not prompts, not tool configs, not chat snippets) as the place where agent alignment happens.

This isn't new in spirit. Good engineering teams have always written down architecture decisions, conventions, and intent. What's new is that those documents now have a second reader.

The README that helps a new hire ramp up in week one is the same document that gives an agent enough context to make sensible choices in week two. The architecture notes that prevented a junior engineer from re-introducing a deprecated pattern are the same notes that stop the agent from doing it.

Stop writing two kinds of docs: one for humans, one as prompt scaffolding. Write one kind, well.

The Composition Stack

Six layers an agent reads at session start: eight files, mostly markdown plus one JSON config, twenty-five sections in total, each earning its place. They're the project's shared mental model, readable by both humans and agents.

The composition stack, visualized: six layers from Foundation through Plan.

Foundation: README.md + CLAUDE.md

The first thing anyone reads on day one. README is the what: what this is, who it's for, what runs it. CLAUDE.md is the how: conventions, rules, settings, what we prefer, what we've already tried and rejected.

Example: "Prefer x:Bind over {Binding}. Default to MVUX for app state, MVVM for sample pages. Never edit generated files."
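A CLAUDE.md doesn't need to be long to do its job. A minimal sketch, built from the rules above; the section names and the rejected-pattern entry are illustrative, not a prescribed format:

```markdown
# CLAUDE.md

## Conventions
- Prefer x:Bind over {Binding}; compiled bindings fail at build time, not runtime.
- Default to MVUX for app state, MVVM for sample pages.
- Never edit generated files (*.g.cs, obj/).

## Already tried and rejected
- Rejected: a shared BaseViewModel. Prefer composition over inheritance here.

## Tone
- UI copy is terse and utility-first.
```

The "rejected" section is the part a new teammate can't guess from reading the code, which makes it the highest-value section for an agent too.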

Wiring: .mcp.json + ux-flows.md

The connectors that turn a model into a teammate. .mcp.json lists the tools the agent can actually call: your linter, your docs server, your design system index. ux-flows.md describes the primary user paths that thread through those tools.

Example: A handful of one-line paths: "Sign in -> Today screen -> Open job -> Mark complete -> Sync queue."
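One plausible shape for .mcp.json, following the MCP server configuration format that tools like Claude Code read; the server names, packages, and paths are illustrative, not part of the pattern:

```json
{
  "mcpServers": {
    "docs": {
      "command": "npx",
      "args": ["-y", "@your-org/docs-mcp-server"]
    },
    "design-system": {
      "command": "node",
      "args": ["tools/design-index/server.js"]
    }
  }
}
```

ux-flows.md stays plain markdown alongside it: one heading per flow, one arrow-separated line per path.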

Design System: design.md

Tokens, palettes, typography, spacing, components. The constraints that make a hundred screens feel like one product.

Example: "Body text is BodyMedium. Touch targets >= 48dp. Card elevation is reserved for items the user can act on. Don't introduce a new color."
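design.md works well as a flat list of constraints an agent can search. A sketch assembled from the rules above; the section layout is one option, not a requirement:

```markdown
# design.md

## Typography
- Body text: BodyMedium. No ad-hoc font sizes.

## Spacing
- Touch targets >= 48dp.

## Color
- Use the existing palette only. Don't introduce a new color.

## Elevation
- Card elevation is reserved for items the user can act on.
```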

Interactions: interactions.md

Animations, transitions, motion, state changes. The rules for how the product feels.

Example: "Page transitions are 240ms ease-out. Skeletons appear after 200ms, never sooner. Snackbars are for confirmation, not error."
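The same shape carries over to interactions.md. A sketch; the durations repeat the example above and are one team's choices, not defaults:

```markdown
# interactions.md

## Motion
- Page transitions: 240ms ease-out.
- Skeletons appear after 200ms, never sooner.

## Feedback
- Snackbars are for confirmation, not error.
```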

Architecture: architecture.md

Data flow, patterns, libraries. The decisions every other layer in this stack assumes.

Example: "Network calls go through IApiClient. UI never touches HTTP directly. Region-based navigation, one ContentControl host per shell."
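architecture.md earns its keep when each rule records the decision and the reason behind it. A sketch in that style, using the rules above; the "why" and "rejected" lines are illustrative additions:

```markdown
# architecture.md

## Networking
- All network calls go through IApiClient. UI never touches HTTP directly.
- Why: one place for auth headers, retries, and offline behavior.

## Navigation
- Region-based navigation, one ContentControl host per shell.
- Rejected: frame-per-page navigation.
```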

Plan: plan.md

Goals, phases, tasks. The vague intent turned into something an agent can scaffold and execute against.

Example: "Phase 1: read-only inventory view. Phase 2: edit + sync queue. Phase 3: barcode scanner. Don't scaffold phase 2 work into phase 1 files."
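plan.md reads best as phases with explicit non-goals, so the agent knows what not to scaffold. A sketch built from the example phases; the task lines are placeholders:

```markdown
# plan.md

## Phase 1: read-only inventory view
- [ ] Inventory list screen
- [ ] Item detail screen
- Non-goal: no edit UI, no sync queue scaffolding yet.

## Phase 2: edit + sync queue
## Phase 3: barcode scanner
```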

The point isn't the exact filenames. The point is the structure: six surfaces, each answering a different question, each addressable on its own.

What This Gets You That Prompts Don't

Continuity across sessions. Prompts are stateless. Documents persist. When an agent picks up work three days later, it reads the same files it read last time. Your team's shared understanding is durable instead of something you re-type into every conversation.

Visibility for humans. The documents that help the agent are the same ones that help your team. There's no parallel "AI prompt library" to maintain, no separate truth that drifts. New engineers benefit from the same investment. So does the agent. So does the version of you who comes back to this codebase in four months.

Composability. Each layer answers a different question. When you're debugging a layout issue, you reach for design.md. When you're adding a new tool, you reach for .mcp.json. The agent triages the same way. Context becomes addressable instead of a giant blob you reload every session.

Concrete Example

Last week I asked the agent to add a new settings page to a project that had the stack set up. I didn't re-explain anything. It inherited spacing from design.md, the navigation host pattern from architecture.md, and the destination tone (terse, utility-first) from CLAUDE.md. The PR was small enough to review without scrolling. That's not the model getting smarter. That's the model getting context.

What This Doesn't Solve

  • It won't fix a bad idea. If the intent in your README is wrong, every layer below it inherits the wrongness. The composition stack is a multiplier, not a corrector.
  • It won't replace taste. The system makes "on-brand" a build step, but someone still has to define what "on-brand" means. The agent can follow design.md. It can't write it for you.
  • It doesn't scale linearly. A toy project doesn't need the full stack (a README and a CLAUDE.md are plenty). A platform with five services might need a full stack per service.
  • It has a discipline cost. Documents drift if no one tends them. A stale architecture.md is worse than no architecture.md, because now the agent is confidently wrong.
The Mental Shift

The files aren't the point. The shift is.

When you treat context as architecture, every other part of the work changes. When you write a feature, you also write the conventions that explain it. When you choose a library, you also explain the choice. The artifacts of thinking become the artifacts of building.

This is the part of engineering that used to live in heads, in DMs, in tribal knowledge passed down across standups. The composition stack pulls it into the open, where both humans and agents can use it. That's a productivity story, but more importantly, it's a continuity story. The next person on this codebase, whether they're a junior, a contractor, or an agent, gets the same starting line you did.

The shift isn't "now I prompt better." The shift is "now I write differently."

Close

Back to the two engineers.

The one who got the better result didn't have a secret prompt. They had a system: a small stack of documents, each one earning its place, each one read by humans and agents alike. The agent didn't know more; it was given more.

When agents see the connections, they can navigate the work.