Key Takeaways

How Kaliber Group automatically processes meeting transcripts into structured client memory files — preserving decisions, preferences, and context so institutional knowledge never walks out the door when an employee leaves.

The Knowledge That Walks Out the Door

Your best account manager just quit. She had 18 months of client context in her head — the CEO's preference for conservative messaging, the Q3 experiment that flopped, the reason they won't touch TikTok. That knowledge walks out the door with her. Unless it doesn't.

Every agency knows this risk. Few do anything about it. The standard response is "we have meeting notes in Google Docs." But meeting notes are a graveyard. They're written once, filed somewhere, and never read again. The decisions made in a call on March 3rd are forgotten by March 10th. The client preference stated casually between agenda items — "we really don't want to go below $30 CPL" — never gets recorded at all.

We decided to treat client context as infrastructure, not documentation. Not something a person remembers. Something the system remembers. This matters especially for our team operating across Singapore and Indonesia — when context needs to travel across pods and time zones, it can't live in one person's head.

Institutional Memory — The accumulated, structured knowledge about a client relationship: how they prefer things done, what's been tried, what strategic context shapes decisions. Distinguished from meeting notes (write-once, read-never) by being living, categorized, timestamped, and actively referenced by operational systems.

Why Meeting Notes Fail

Meeting notes fail for three reasons, and none of them are about the quality of note-taking.

First, they're unstructured. A typical meeting note mixes action items with strategic decisions with casual preferences with background context. When you need to find "what did the client say about TikTok?" you're searching through 40 documents of mixed content.

Second, they're write-once, read-never. The AM writes notes after the call, files them, and never opens them again. The information exists but is effectively inaccessible — archived in a folder structure nobody navigates.

Third, they capture what was said, not what it means. A client saying "Let's try a more aggressive approach next quarter" is a preference signal that should influence every recommendation for the next three months. In meeting notes, it's a bullet point that disappears into the archive.

The Four-Category Extraction Framework

Structured Transcript Processing — A framework that analyzes meeting transcripts by extracting four distinct categories: Decisions (commitments that affect execution), Action Items (who does what by when), Preferences (how the client likes things done), and Context Signals (business events that shape strategy). Contrast with meeting summaries, which compress without categorizing.

We built a system we call Structured Transcript Processing. It doesn't summarize meetings — it analyzes them. Every transcript is processed to extract four distinct categories:

Decisions: What was agreed upon. "We'll pause the Brand campaign during CNY" or "Budget increases to $15K starting April." These are commitments that affect execution.

Action Items: Who does what, by when. "Kaliber to prepare three new ad variations by Friday" or "Client to send updated product feed by EOW." Standard stuff — but timestamped and tracked.

Preferences: How the client likes things done. This is the category most meeting notes miss entirely. "They prefer data-heavy slides over narrative presentations." "The CMO doesn't want to see CPM metrics — only cost per lead." "They dislike the word 'aggressive' in campaign names." These soft signals shape every interaction but are never formally documented.

Context Signals: Business context that affects marketing strategy. "They're launching in Malaysia in Q3." "Their main competitor just raised $20M." "The CEO is presenting to the board in June and needs strong numbers." This is the strategic layer that turns an account manager from a campaign operator into a trusted advisor.
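The four categories above map naturally onto a small schema. Here is a minimal Python sketch of what a structured extraction result might look like; the class and field names are illustrative, not Kaliber's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Extraction:
    """One item pulled from a transcript, tagged with its category and meeting date."""
    category: str  # "decision" | "action_item" | "preference" | "context_signal"
    text: str
    date: str      # ISO date of the meeting, e.g. "2026-02-05"

@dataclass
class TranscriptAnalysis:
    """The four-category output of processing a single transcript."""
    decisions: List[Extraction] = field(default_factory=list)
    action_items: List[Extraction] = field(default_factory=list)
    preferences: List[Extraction] = field(default_factory=list)
    context_signals: List[Extraction] = field(default_factory=list)

# Example: what one extraction pass might return for a call like those described above
analysis = TranscriptAnalysis(
    decisions=[Extraction("decision", "Pause the Brand campaign during CNY", "2026-02-05")],
    preferences=[Extraction("preference", "Show CPL and ROAS only, never CPM", "2026-01-15")],
)
```

Typing the output this way, rather than keeping a free-form summary, is what lets later steps dedupe, timestamp, and file each item under the right section.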

Transcript Processing Pipeline

1. Detect: The watch daemon detects a new transcript file in the delivery folder. No manual trigger needed.
2. Extract: AI processes the raw transcript into four categories: Decisions, Action Items, Preferences, Context Signals.
3. Synthesize: Extracted items are formatted, deduplicated against existing memory, and presented for AM review.
4. Update Memory: Approved extractions merge into the client's memory.md — a living, timestamped knowledge file.
5. Feed Skills: Every skill (weekly reviews, campaign setup, reporting) reads memory.md as context before executing.

The memory.md File

Every client has a file called memory.md. It's not a document anyone sits down to write. It's a living file that accumulates institutional knowledge over time, updated after every client interaction.

The file is organized by category and timestamped. When a new preference is extracted from a meeting, it's added under Preferences with the date. When a decision is made, it goes under Decisions. Over months, the file becomes a comprehensive portrait of the client relationship — not what we think we remember, but what actually happened.

Sample memory.md Structure

## Preferences
<!-- 2026-03-12 --> Client prefers conservative recommendations — wants to see data before scaling
<!-- 2026-02-26 --> CMO reviews reports on Mondays — send by Friday EOD
<!-- 2026-01-15 --> Doesn't want CPM shown in reports — only CPL and ROAS
<!-- 2025-11-20 --> Dislikes "aggressive" terminology — use "growth-focused" instead

## Decisions
<!-- 2026-03-12 --> Budget increase to $18K/mo starting April — approved by CMO
<!-- 2026-02-05 --> Pause all TikTok campaigns — CEO mandate, revisit in Q3
<!-- 2026-01-22 --> Broad match tested Q4 — burned 30% of budget with low-quality leads. Do not retry.

## Context Signals
<!-- 2026-03-12 --> Malaysia launch planned for Q3 2026 — will need market research by May
<!-- 2026-02-26 --> Board presentation in June — CEO wants strong Q2 performance narrative
<!-- 2025-12-10 --> Main competitor (BrandX) raised Series B — expect increased auction pressure
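Merging an approved extraction into this layout (pipeline step 4) is mechanical once the format is fixed. A minimal sketch, assuming the newest-first ordering shown above; `append_to_memory` is an illustrative name, not part of any real tool:

```python
from datetime import date
from pathlib import Path

def append_to_memory(memory_path: Path, section: str, entry: str, when: date) -> None:
    """Insert a timestamped entry at the top of its '## Section' block,
    newest first, creating the file or section if it doesn't exist yet."""
    line = f"<!-- {when.isoformat()} --> {entry}"
    lines = memory_path.read_text().splitlines() if memory_path.exists() else []
    header = f"## {section}"
    if header in lines:
        lines.insert(lines.index(header) + 1, line)
    else:
        if lines:
            lines.append("")  # blank line between sections
        lines += [header, line]
    memory_path.write_text("\n".join(lines) + "\n")
```

Because every write goes through one function, the file stays machine-parseable — which is what lets other skills read it back reliably.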

How Memory Feeds Everything

The memory file isn't a standalone document. It's infrastructure — read by every automated skill that touches a client.

When the system runs a weekly review for this client, it knows "they prefer conservative recommendations" — so it frames suggestions as "consider testing" rather than "scale this immediately." It knows "they don't want to see CPM" — so CPM doesn't appear in the report.

When the system helps build a new campaign, it knows "they tried broad match in Q4 and burned budget" — so it won't recommend broad match. It knows "Malaysia launch in Q3" — so it might suggest exploratory campaigns when the time is right.

When the system generates a monthly report, it knows "board presentation in June" — so it includes a year-to-date performance narrative the CMO can use.

The account manager doesn't need to remember any of this. Doesn't need to brief the system. Doesn't need to add context before running a skill. The memory is already loaded.
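Loading memory as context before a skill runs can be as simple as pulling the relevant block out of the file. A hypothetical sketch (the helper name and section layout are assumptions based on the sample above):

```python
def load_section(memory_text: str, section: str) -> list[str]:
    """Pull one category block out of memory.md so a skill can inject
    just the entries it needs into its context before executing."""
    entries, capture = [], False
    for line in memory_text.splitlines():
        if line.startswith("## "):
            capture = (line.strip() == f"## {section}")  # enter/leave the block
        elif capture and line.strip():
            entries.append(line.strip())
    return entries

sample = """## Preferences
<!-- 2026-01-15 --> Doesn't want CPM shown in reports

## Decisions
<!-- 2026-02-05 --> Pause all TikTok campaigns
"""
# A reporting skill would read the Preferences block before rendering:
prefs = load_section(sample, "Preferences")
```

A weekly-review skill might load Preferences and Decisions; a campaign-setup skill might add Context Signals. The point is that no human has to remember to do this.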

The Watch Daemon

Transcript Watch Daemon — A background process that monitors delivery folders for new transcript files from recording platforms (Otter, Fireflies, etc.). When a new file is detected, it triggers the Structured Transcript Processing pipeline automatically. Designed to remove the manual trigger — the most common point of failure in knowledge capture systems.

The most important design choice was removing the manual trigger. We don't ask account managers to "remember to process the meeting transcript." That's the same failure mode as "remember to take notes."

Instead, we built what we call a Transcript Watch Daemon — a background process that monitors the delivery folder for new transcript files. When a recording platform (Otter, Fireflies, or whatever the team uses) drops a transcript file, the daemon detects it automatically.

Processing starts without human intervention. The AM receives a synthesis — a structured summary of what was extracted — for review. They approve, edit, or reject the extractions. Then the memory file updates.

The AM's job isn't to create the memory. It's to verify it. That's a much smaller cognitive load, and it actually gets done.
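The simplest form of such a daemon is a polling loop; a production version would more likely use OS-level file notifications (inotify, FSEvents) or a library like watchdog, and the file extensions here are assumptions about what a recording platform delivers:

```python
import time
from pathlib import Path
from typing import Callable, Set

def poll_once(folder: Path, seen: Set[Path], handle: Callable[[Path], None]) -> Set[Path]:
    """One pass of a minimal watch daemon: hand any not-yet-seen
    transcript file to the processing pipeline."""
    current = {p for p in folder.iterdir() if p.suffix in {".txt", ".vtt"}}
    for path in sorted(current - seen):
        handle(path)  # kicks off extraction — no manual trigger
    return seen | current

def run_daemon(folder: Path, handle: Callable[[Path], None], interval: float = 30.0) -> None:
    """Background loop; runs until the supervisor stops the process."""
    seen: Set[Path] = set()
    while True:
        seen = poll_once(folder, seen, handle)
        time.sleep(interval)
```

Keeping the `seen` set means a transcript is processed exactly once, even though the loop rescans the whole folder each pass.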

Solving the "New AM" Problem

An agency's worst operational nightmare is the new account manager transition. The outgoing AM does a "handover" — usually a 30-minute call where they dump six months of context into a Google Doc that reads like stream of consciousness. The new AM nods, takes notes, and spends the next three months accidentally violating client preferences they were never told about.

With the memory system, a new AM inherits the full memory file on day one. Not a handover document — the actual accumulated knowledge from every interaction. Every preference, every decision, every piece of strategic context. Organized, timestamped, and current.

They know on day one that this client doesn't like the word "aggressive." That broad match failed in Q4. That the CEO has a board meeting in June. That reports should arrive by Friday EOD. This is especially critical when we move AMs between our Singapore and Jakarta pods — the memory file travels seamlessly, no briefing required.

The relationship doesn't restart. It continues — because the knowledge was never in someone's head to begin with. It was in the system.

What This Really Changes

The deeper shift isn't technological. It's philosophical. Most agencies treat institutional knowledge as an unavoidable casualty of team changes. "People leave, and you lose some context. That's just how it works."

We reject that. Client context is an asset. It should appreciate over time, not depreciate when someone changes roles. Every meeting, every decision, every offhand preference adds to the asset. The memory file is worth more after 18 months than after 3 — regardless of who the account manager is.

That's what we mean by institutional memory. Not "we have a wiki." Not "we keep meeting notes." A living, growing, structured knowledge base that makes every person who touches the client smarter on their first day than they would be after three months of learning the hard way. For an APAC agency managing clients across multiple markets, this isn't a nice-to-have — it's operational infrastructure.

Frequently Asked Questions

How do marketing agencies capture and retain client knowledge?

Most agencies rely on meeting notes, Google Docs, and the account manager's memory — all of which fail at retrieval and disappear when team members leave. A structured approach processes every client interaction into categorized, timestamped knowledge files that persist independently of any individual team member.

What is institutional memory and why does it matter for agencies?

Institutional memory is the accumulated knowledge about how a client operates, what they prefer, what's been tried, and what strategic context shapes their decisions. In agencies, it typically lives in people's heads. When those people leave, the agency effectively restarts the client relationship. Structured memory systems preserve this knowledge as an organizational asset.

How does AI process meeting transcripts for marketing teams?

Rather than summarizing transcripts (which produces notes nobody reads), AI can extract structured categories: decisions that were made, action items assigned, client preferences expressed, and business context signals that affect strategy. These extractions are then merged into a persistent client knowledge file that feeds into other operational systems.

How to prevent knowledge loss when account managers leave an agency?

The key is ensuring knowledge never lives exclusively in one person's head. Every client interaction should be processed into a structured memory file — organized by category (preferences, decisions, context), timestamped, and stored where the organization can access it. When an AM leaves, the new AM inherits the complete file on day one.

What should be captured from client meetings besides action items?

Three categories that most teams miss: Preferences (how the client likes things done — report format, communication style, metric priorities), Decisions (strategic agreements that affect future work), and Context Signals (business events like upcoming board meetings, market expansions, or competitor moves that should shape your strategy).

How do you build a client memory system for a marketing agency?

Start with a structured file per client (we use memory.md) organized into categories: Preferences, Decisions, Context Signals, and Action Items. Process every meeting transcript through AI extraction into these categories. Use a file watcher to automate detection so nobody has to remember to trigger the process. Have the AM verify extractions rather than create them — verification is a smaller cognitive load and actually gets done.

Can AI automatically extract decisions and preferences from meeting transcripts?

Yes, and it does it better than manual note-taking because it processes the entire transcript without the attention gaps humans have during live conversations. The critical design choice is having a human review the extractions before they enter the memory system — AI catches what humans miss, humans catch what AI misinterprets. The combination is more reliable than either alone.


Robert Lai

Founder & CEO, Kaliber Group

Built Kali, one of the first Claude-native marketing operations systems in APAC. Managing 20+ clients across Singapore and Indonesia with AI-augmented delivery pods.