Encoding Taste in Practice

The Problem

AI systems are good at generating content. They’re bad at generating your content — the kind that reflects your judgment, your standards, your way of thinking.

Ask any LLM to write an email, a proposal, or a social media post. It produces something competent and generic. It sounds like everyone and no one. It’s average by design — trained on the center of the distribution.

Differentiation lives at the edges. Your competitive advantage isn’t that you can produce content; it’s that you produce content with a point of view. You have taste.

The question is: can taste be encoded? Can you structure your judgment in a way that AI systems can use?

Yes. This is what we’ve built. Here’s how it works.


What Taste Actually Is

When we say “taste,” we mean the accumulated judgment that makes your work distinctively yours. It’s not mystical. It breaks down into components:

Philosophy — What you believe about how things should work. Core principles that guide decisions. Not platitudes (“we value quality”) but actual positions that constrain choices.

Constraints — What you won’t do, even if you could. The boundaries that define your approach. These are often more useful than positive statements because they eliminate bad paths.

Output Standards — What “good” looks like for your deliverables. Not abstract quality, but specific expectations. What must every proposal include? What tone does every email strike?

Behavioral Rules — How you engage, communicate, and decide. The interaction patterns that feel like you.

TASTE = Philosophy + Constraints + Output Standards + Behavioral Rules

Each component can be written down. Each can be structured. Each can be deployed.
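The four components above can be held in one structure. A minimal sketch, assuming a Python representation; the field names and example values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class TastePackage:
    """Illustrative container for the four components of encoded taste."""
    philosophy: str                                         # core beliefs, first person, opinionated
    constraints: list[str] = field(default_factory=list)    # what we won't do, even if we could
    output_standards: dict[str, list[str]] = field(default_factory=dict)  # deliverable -> required elements
    behavioral_rules: list[str] = field(default_factory=list)             # how we engage and communicate

agency = TastePackage(
    philosophy="We believe clarity beats cleverness.",
    constraints=["Never promise timelines we can't control"],
    output_standards={"proposal": ["problem", "approach", "success criteria", "investment"]},
    behavioral_rules=["Be direct. No hedging."],
)
```

Keeping the components as plain data, rather than prose buried in a prompt, is what makes them individually testable and deployable.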


The Architecture

Layer 1: The Taste Package

A taste package is a collection of documents that encode judgment for a specific domain. It might encode:

  • Your agency’s approach to client work
  • A client’s brand voice and standards
  • A product’s design philosophy
  • A consultant’s methodology

The package contains:

Philosophy document — 1-2 pages articulating core beliefs. Written in first person. Opinionated. Example: “We believe clarity beats cleverness. We’d rather be understood than admired.”

Constraint definitions — Structured rules that limit the solution space. Some are binary (never do X). Some are dimensional (effort level: minimal / standard / elaborate).

Output contracts — Templates for what deliverables must include. Not rigid forms, but required elements. “Every proposal must include: the problem as we understand it, our approach, what success looks like, investment.”

Behavioral guidelines — How to communicate. Tone, pacing, what to emphasize. “Be direct. No hedging. Acknowledge complexity, then assert the path through.”

Examples — Concrete instances of good and bad output. Positive examples show what to aim for. Negative examples show what to avoid. Both calibrate the system.

Layer 2: The Context Layer

Taste packages don’t live in isolation. They combine with context:

Client context — Everything known about the specific client. History, preferences, past decisions, current situation.

Project context — The immediate task. What’s being asked, what constraints apply, what’s already been tried.

Domain knowledge — Reference material. Corporate identity (CI) guides, product specs, past work, industry standards.

This context is structured as:

  • RAG (Retrieval-Augmented Generation) — Documents indexed for search. When the system needs to know something, it retrieves relevant content.
  • Ontology — A graph of entities and relationships. People, projects, products, decisions — all connected. Enables navigation beyond text search.
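The two structures complement each other: retrieval finds text, the ontology finds connections. A toy sketch under simplifying assumptions; a real system would use a vector index rather than keyword overlap, and all data here is invented for illustration:

```python
# RAG stand-in: documents indexed for search (keyword overlap keeps it self-contained)
documents = {
    "q3-brief": "Q3 campaign brief: bold colors, direct tone",
    "brand-guide": "Brand guide: typography and imagery rules",
}

# Ontology stand-in: a graph of entities and their relationships
ontology = {
    "q3-campaign": ["q3-brief", "approval-2024-06"],
    "approval-2024-06": ["q3-brief"],
}

def retrieve(query: str) -> list[str]:
    """Return ids of documents whose text shares a word with the query."""
    words = set(query.lower().split())
    return [doc_id for doc_id, text in documents.items()
            if words & set(text.lower().split())]

def related(entity: str) -> list[str]:
    """Walk one hop in the ontology graph; navigation beyond text search."""
    return ontology.get(entity, [])
```

A query like "Q3 campaign tone" reaches the brief through retrieval; the approval record is only reachable by following the graph.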

Layer 3: The Skill Layer

Skills are executable procedures. They define how to accomplish specific tasks:

Skill definition — What this skill does, when to use it.

Procedure — Step-by-step process. Load these references, consider these factors, produce this output.

Output contract — What the skill must produce. Required sections, format, quality criteria.

Failure modes — What “off” looks like. How to detect and correct problems.

Skills are modular. They can be composed. A “write proposal” skill might invoke a “research client” skill and a “structure argument” skill.
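The composition above can be sketched as ordinary function calls. A hypothetical example: the skill bodies are stubs, and the function names are illustrative, not a real API:

```python
def research_client(client: str) -> dict:
    """Sub-skill: gather client context (stubbed here)."""
    return {"client": client, "history": ["past campaign"]}

def structure_argument(context: dict) -> list[str]:
    """Sub-skill: order the required sections per the output contract."""
    return ["problem", "approach", "success criteria", "investment"]

def write_proposal(client: str) -> dict:
    """Composite skill: invokes the research and structuring sub-skills."""
    context = research_client(client)
    sections = structure_argument(context)
    return {"client": context["client"], "sections": sections}
```

Because each skill declares what it produces, a composite skill can rely on its sub-skills' output contracts rather than their internals.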

Layer 4: The Agent Layer

Agents are the execution engine. They:

  • Receive a task
  • Load relevant taste packages
  • Retrieve context from RAG and ontology
  • Execute skills
  • Produce output
  • Accept feedback

Agents maintain conversation history — they can be interrupted, corrected, redirected. They’re tools for humans, not autonomous actors.
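The agent loop can be sketched in a few lines. Assumptions: the load, retrieve, and execute steps are stubbed; the point is the shape — every turn appends to history, and a correction is just another turn:

```python
class Agent:
    """Minimal sketch of the agent layer: task in, taste-shaped output out."""

    def __init__(self, taste: dict):
        self.taste = taste
        self.history: list[tuple[str, str]] = []

    def run(self, task: str) -> str:
        context = self._retrieve(task)          # RAG + ontology lookup (stubbed)
        output = self._execute(task, context)   # skill execution (stubbed)
        self.history.append((task, output))     # conversation history persists
        return output

    def _retrieve(self, task: str) -> str:
        return f"context for: {task}"

    def _execute(self, task: str, context: str) -> str:
        return f"[{self.taste['tone']}] draft for '{task}' using {context}"

agent = Agent(taste={"tone": "direct"})
draft = agent.run("reply to client email")
revision = agent.run("shorter, drop the greeting")  # feedback re-enters as input
```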


The Encoding Process

Step 1: Excavation

Before encoding, you need raw material. This is where “digital archaeology” comes in — surfacing the implicit knowledge that already exists in your organization.

Sources:

  • Past proposals and deliverables
  • Meeting recordings and transcripts
  • Client feedback and decisions
  • Internal discussions and debates
  • Miro boards, decks, and documentation

Method:

  • Extract patterns from historical work
  • Interview key people about their decision-making
  • Document the unwritten rules
  • Identify what distinguishes good work from bad

This isn’t a one-time dump. It’s an ongoing process of capturing knowledge as it’s created.

Step 2: Structuring

Raw material becomes structured components:

From interviews → Philosophy

  • “We always start with the customer’s problem, not our solution”
  • “We’d rather be honest about uncertainty than fake confidence”

From past work → Examples

  • “This proposal landed the deal because it acknowledged their risk”
  • “This email failed because it was too formal for this client”

From feedback → Constraints

  • “Never promise timelines we can’t control”
  • “Always include a next step”

From patterns → Output contracts

  • “Every status update includes: progress, blockers, what’s next”
  • “Every creative brief includes: objective, audience, key message, constraints”

The goal is explicit articulation of implicit knowledge.
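Once articulated, an output contract becomes mechanically checkable. A sketch, using the status-update pattern above; the section names come from the source, the function is illustrative:

```python
# An output contract as data: the required elements of a status update.
STATUS_UPDATE_CONTRACT = ["progress", "blockers", "what's next"]

def check_contract(draft: dict, required: list[str]) -> list[str]:
    """Return the required elements that are absent or empty in a draft."""
    return [key for key in required if not draft.get(key)]

draft = {"progress": "design review done", "blockers": ""}
missing = check_contract(draft, STATUS_UPDATE_CONTRACT)
# "blockers" is present but empty, "what's next" is absent: both are flagged
```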

Step 3: Testing

Encoded taste must be calibrated. The loop:

  1. Generate — Use the system to produce output
  2. Evaluate — Compare against expectations
  3. Identify gaps — Where did it fail? What was missing?
  4. Refine — Update the taste package
  5. Repeat

Testing happens on real tasks, not synthetic benchmarks. The question isn’t “does it produce grammatically correct text?” It’s “does this sound like us?”

Good signals:

  • Output requires minimal editing
  • The tone matches expectations
  • Edge cases are handled sensibly
  • Mistakes are different from before (not the same errors repeated)

Bad signals:

  • Generic output
  • Ignoring stated constraints
  • Wrong tone for context
  • Hallucinating information
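The five-step loop above can be sketched as code. Assumptions: generate, evaluate, and refine are stand-ins (evaluate would really be human review), and the loop exits once a round produces no gaps:

```python
def calibrate(taste: dict, tasks: list[str], max_rounds: int = 3) -> dict:
    """Generate -> evaluate -> identify gaps -> refine -> repeat."""
    for _ in range(max_rounds):
        gaps = []
        for task in tasks:
            output = generate(taste, task)      # step 1: produce output
            gaps.extend(evaluate(output))       # steps 2-3: compare, find gaps
        if not gaps:
            break                               # calibrated: no gaps this round
        taste = refine(taste, gaps)             # step 4: update the package
    return taste

def generate(taste, task):
    return {"task": task, "tone": taste.get("tone", "generic")}

def evaluate(output):
    # Stand-in for human review: generic tone is the gap we detect here.
    return ["tone"] if output["tone"] == "generic" else []

def refine(taste, gaps):
    return {**taste, "tone": "direct"} if "tone" in gaps else taste
```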

Step 4: Deployment

Once calibrated, the taste package is deployed:

Internal use — Team members access via chat interface or integrated tools. “Help me draft a response to this client email.” System loads client context, applies house taste, produces draft.

Production pipelines — Automated workflows invoke taste packages. “Process new project requests” pipeline applies intake methodology, routes appropriately, generates initial documentation.

Client-facing — Where appropriate, clients interact directly. “Ask questions about your project” interface loads project context and responds in your voice.

API access — Other systems call the taste-encoded agents. CRM needs a summary? Call the agent. Website needs content? Call the agent.


Practical Example: Creative Agency

A creative agency wants to encode client knowledge so creatives can focus on creative work instead of searching for context.

Current state:

  • Brand information scattered across Google Drive
  • VP of Product maintains Miro boards with client context
  • Creatives ask project managers for information
  • No shared memory across projects

Target state:

  • Client context encoded in structured taste packages
  • Creatives converse with an agent that knows the client
  • New information automatically updates the knowledge base
  • Institutional memory persists across team changes

The encoding:

Philosophy (agency-level):

We believe the creative work is what matters. Everything else — finding files, understanding context, remembering past decisions — should be frictionless. Creatives should spend their time creating.

Client taste package (per client):

  • Brand voice: How this client communicates
  • Visual identity: Colors, typography, imagery rules
  • Product knowledge: What they make, what matters
  • Stakeholder preferences: Who decides, what they care about
  • Historical context: Past campaigns, what worked, what didn’t

Output contracts:

  • Every creative brief includes: objective, audience, key message, visual references, constraints
  • Every client communication is reviewed before sending
  • Every asset links back to the approved source

Skills:

  • “Find approved asset” — Navigate client folder, return latest approved version
  • “Generate brief” — Given campaign parameters, produce structured brief
  • “Review against brand” — Check creative against brand guidelines

The interface:

Creative opens chat, asks: “What was the approved color palette for the Q3 campaign?”

System:

  1. Identifies client from context
  2. Searches RAG for Q3 campaign assets
  3. Navigates ontology to find approval records
  4. Returns specific answer with source reference

No navigating folders. No asking project managers. Just answers.
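The four steps above can be sketched as a single pipeline. Everything here is stubbed and hypothetical — the data, the campaign matching, and the lookup structure are invented for illustration:

```python
# Stubbed stores: a RAG index of campaign assets and an approval record map.
rag_index = {"q3-campaign": {"palette": "#0A1F44, #F5B700", "source": "q3-brand-deck.pdf"}}
approvals = {"q3-campaign": "approved 2024-06-12"}

def answer(question: str, client_context: str) -> str:
    """Steps 1-4: identify, search, navigate to approvals, answer with source."""
    # Steps 1-2: identify the campaign from the question (client_context
    # would narrow the search in a real system; unused in this stub).
    campaign = "q3-campaign" if "q3" in question.lower() else None
    if campaign is None:
        return "No matching campaign found."
    asset = rag_index[campaign]       # step 2: retrieve assets via RAG
    approval = approvals[campaign]    # step 3: ontology hop to approval record
    # Step 4: specific answer with source reference
    return f"{asset['palette']} ({approval}; source: {asset['source']})"
```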


Implementation Roadmap

Start by Starting

Don’t design the perfect system. Build something useful for one case.

Week 1: Pick one client, one task

  • Choose a client with scattered documentation
  • Choose a task that happens repeatedly (briefs, status updates, asset finding)
  • Gather existing materials

Week 2: Minimal encoding

  • Write a one-page philosophy for this client
  • Define 3-5 constraints
  • Create one output contract
  • Add 5 positive examples, 3 negative examples

Week 3: Build simple interface

  • Index client materials with RAG
  • Create basic chat interface
  • Test with real tasks
  • Note where it fails

Week 4: Refine and expand

  • Update based on failures
  • Add missing context
  • Test with more team members
  • Decide whether to continue

Scale What Works

If the first case is useful, extend:

  • Add more clients
  • Add more skills
  • Connect to production workflows
  • Build ontology relationships
  • Enable API access

Each addition follows the same pattern: encode, test, refine, deploy.

Build the Feedback Loop

The system improves when humans correct it:

  • Every edit to generated content is logged
  • Periodic review identifies patterns
  • Patterns become explicit rules
  • Rules update the taste package

This is human-in-the-loop learning. Not automated fine-tuning, but structured capture of human judgment.
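The capture loop above can be sketched simply: log each correction as a tag, and promote tags that recur into candidate rules. The tags and threshold here are illustrative:

```python
from collections import Counter

edit_log: list[str] = []   # one tag per human correction, e.g. "too formal"

def log_edit(tag: str) -> None:
    """Every edit to generated content is logged."""
    edit_log.append(tag)

def promote_patterns(min_count: int = 3) -> list[str]:
    """Periodic review: edits that recur become candidate explicit rules."""
    counts = Counter(edit_log)
    return [f"Avoid: {tag}" for tag, n in counts.items() if n >= min_count]

for tag in ["too formal", "too formal", "missing next step", "too formal"]:
    log_edit(tag)
```

A one-off correction stays a correction; three corrections of the same kind become a rule in the taste package.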


What This Enables

For Agencies

Creatives focus on creative. Context retrieval, asset finding, brand guideline checking — these become instant instead of interruptive.

Institutional memory persists. When people leave, knowledge doesn’t leave with them. It’s encoded.

Client relationships deepen. You know more about the client than any individual could remember. That shows up in the work.

New business models emerge. Train an agent on client-specific taste, then license it back to them. “Here’s a tool that produces content in your voice. We trained it. You use it. We supervise.”

For Any Organization with Expertise

The pattern generalizes:

  • Consultancies encode methodology, produce consistent deliverables
  • Design firms encode aesthetic principles, maintain quality at scale
  • Professional services encode judgment, reduce dependency on key individuals
  • Product companies encode brand voice, maintain consistency across touchpoints

Anywhere there’s expertise worth scaling, taste encoding applies.


The Underlying Principle

AI systems without encoded judgment produce average output. Average is commodity. Commodity competes on price.

AI systems with encoded judgment produce distinctive output. Distinctive is differentiation. Differentiation commands premium.

The work isn’t “add AI to what we do.” The work is “capture what makes us us, then deploy it everywhere.”

That’s taste as infrastructure.


Questions This Raises

How much encoding is enough? Start minimal. A one-page philosophy and five examples can go surprisingly far. Add complexity only when the system fails in predictable ways.

Who does the encoding? The people with the judgment. Not a technical team translating for them. The expert writes the philosophy. The creative provides the examples. Technical infrastructure enables, but doesn’t replace.

How do you maintain it? Make it part of the workflow. When a proposal lands well, add it as a positive example. When feedback reveals a gap, update the constraints. The taste package is living documentation.

What about consistency across contexts? Layered composition. Agency taste applies everywhere. Client taste overlays for that client. Project taste overlays for that project. Each layer is independent; together they produce contextually appropriate output.
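One simple way to implement that layering, assuming each layer is a flat set of settings: merge in order, so the most specific layer wins where layers conflict. The keys and values are illustrative:

```python
def compose(*layers: dict) -> dict:
    """Merge taste layers in order; later (more specific) layers override."""
    merged: dict = {}
    for layer in layers:            # agency first, project last
        merged.update(layer)
    return merged

agency  = {"tone": "direct", "hedging": "none"}
client  = {"tone": "warm"}                       # client overlay wins on tone
project = {"format": "one-pager"}

effective = compose(agency, client, project)
# effective: {"tone": "warm", "hedging": "none", "format": "one-pager"}
```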

What if the encoded taste is wrong? You’ll find out quickly. The testing loop reveals gaps. Better to have explicit, fixable taste than implicit, unexaminable habits.


This document is itself an encoded artifact. It reflects our taste — what we believe, how we communicate, what we think matters. An agent reading this should understand not just the mechanics but the philosophy. That’s the point.