Key Takeaways

Why AI gives generic marketing advice and how Config-First Onboarding — a structured approach to client data — fixes it. A framework for making AI understand business context before it analyzes campaigns.

The Failure Mode Nobody Diagnoses

Ask AI to analyze a marketing campaign and you will get a perfectly structured, impressively worded analysis that is almost entirely useless. It will tell you to "optimize your targeting," "test new creative," and "monitor your cost per acquisition." Thank you. Very helpful.

This is not an AI problem. It is a context problem. You fed the system a screenshot and a vague brief. It gave you back a vague analysis. Garbage in, garbage out — the oldest rule in computing, and agencies keep forgetting it whenever the interface sounds smart enough that it feels like the system should just know.

We learned this the hard way. In the early months of building our AI operations system at our Singapore headquarters, we had powerful API integrations pulling live data from Google Ads, Meta, and BigQuery. The system could access real numbers. And the analysis was still generic, because the system had no idea what those numbers meant for each specific client.

A $50 CPA is catastrophic for an e-commerce brand selling $30 products. It is excellent for a B2B SaaS company where each lead is worth $5,000. The same number, two completely different interpretations. Without context about the client's business model, economics, and goals, AI cannot distinguish between the two — whether the client sits in Singapore, Indonesia, or Australia.

The Insight: Config Before Strategy

The conventional approach to AI-assisted marketing goes like this: write a strategy document, paste it into AI, ask for analysis. This approach fails for three reasons.

First, strategy documents are written for humans. They use narrative structure, contextual references, and implicit assumptions that humans understand and machines misinterpret. A strategy doc that says "we're focusing on high-intent audiences in Q2" tells a human everything they need to know. It tells AI almost nothing actionable.

Second, strategy documents are static. They are written once, during onboarding, and rarely updated. By month three, the live campaign structure has diverged from the strategy doc, and the AI is analyzing reality through an outdated lens.

Third, someone has to remember to paste the strategy doc into every conversation. They won't. After the first two weeks, the strategist stops including it, and the AI reverts to generic analysis.

Config-First Onboarding is a methodology where machine-readable configuration is generated before any strategy documents, ensuring AI systems have structured context from day one. Instead of writing a strategy document and hoping AI reads it, you generate a structured configuration that the system loads automatically — before every query, every analysis, every report. The config is the source of truth. Human-readable strategy documents are derived from it, not the other way around.
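The auto-load pattern reduces to a few lines of code. This is an illustrative Python sketch, not Kaliber's implementation; the four file names come from this article, while the directory layout and the `load_client_context` helper are assumptions:

```python
from pathlib import Path

# The four context files named in this article. The per-client directory
# layout and this loader function are illustrative assumptions.
CONTEXT_FILES = ["client-context.md", "ads-strategy.md", "media-plan.md", "memory.md"]

def load_client_context(client_dir: str) -> str:
    """Concatenate a client's config files into one context block,
    prepended to every AI query so analysis is client-specific by default."""
    parts = []
    for name in CONTEXT_FILES:
        path = Path(client_dir) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

Because the loader runs before every query, nobody has to remember to paste anything — the context travels with the client, not with the strategist.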

What Goes in the Config

Each client in our system has four context files. They are plain markdown, version-controlled, and loaded programmatically whenever a skill is invoked across our Singapore and Indonesia client pods.

client-context.md — The Business Model

This file answers the question AI needs answered first: what kind of business is this, and what does success look like? It contains the company overview, product or service description, target audience, conversion definition (lead form? purchase? booking?), average order value or customer lifetime value, and the archetype classification.
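For illustration, a minimal client-context.md might look like the following. The client name, numbers, and section headings are invented for this example; only the file's role and contents list come from the article:

```markdown
# Client Context — Acme SaaS (illustrative example)

## Company Overview
B2B SaaS selling workflow software on annual contracts.

## Target Audience
Operations managers at 50–500 person companies.

## Conversion Definition
Lead form submission (demo request).

## Economics
- Average contract value: $5,000/yr
- CPL target: $150
- Sales cycle: ~90 days

## Archetype
lead_gen
```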

ads-strategy.md — The Campaign Architecture

This file maps how campaigns are structured in the ad platforms. Campaign naming conventions, audience segmentation logic, funnel stage mapping (prospecting vs. retargeting vs. brand), and the strategic rationale behind each campaign group. When the AI sees a campaign called "SG_Search_Brand_Exact," it knows exactly what that campaign does and how to evaluate it.

media-plan.md — The Budget Blueprint

Monthly budget allocation across platforms and campaigns, with daily spend targets derived from the monthly total. Flight dates, ramp-up schedules, and any budget constraints. This is the file that makes daily pacing checks possible — the system compares actual spend against the plan and flags deviations.
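The pacing check described here reduces to a simple comparison. A minimal Python sketch, assuming a flat daily allocation of the monthly budget and a hypothetical 10% tolerance (the real system's thresholds and flight-date handling are not specified in the article):

```python
from datetime import date
import calendar

def pacing_check(monthly_budget: float, spend_to_date: float,
                 today: date, tolerance: float = 0.10) -> str:
    """Compare actual month-to-date spend against a linear plan
    derived from the monthly budget in media-plan.md."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    planned = monthly_budget * today.day / days_in_month
    deviation = (spend_to_date - planned) / planned
    if abs(deviation) <= tolerance:
        return "on pace"
    return "overspending" if deviation > 0 else "underspending"
```

A real media plan with flight dates and ramp-up schedules would replace the linear plan with a per-day target curve, but the comparison logic stays the same.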

memory.md — The Running Log

This is where institutional knowledge accumulates. Every strategic decision, optimization made, anomaly investigated, and lesson learned gets appended to the memory file. "2026-02-14: Paused Broad Match campaign after 3 weeks — CPA 2.4x target with no improvement in lead quality. Reallocated budget to Exact Match." Three months later, when someone asks why that campaign is paused, the answer is there.
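Appending to the running log can be as simple as a dated one-line write. An illustrative helper (the function name and signature are assumptions, not part of the described system):

```python
from datetime import date
from typing import Optional

def log_decision(memory_path: str, note: str, when: Optional[date] = None) -> None:
    """Append a dated entry to the client's memory.md running log."""
    stamp = (when or date.today()).isoformat()
    with open(memory_path, "a") as f:  # "a" creates the file if missing
        f.write(f"{stamp}: {note}\n")
```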

Config-First Onboarding Flow

1. Platform Audit: Connect to the Google Ads, Meta, and BigQuery APIs. Pull account structure, active campaigns, historical performance, and conversion setup.
2. Archetype Classification: Classify the client as lead_gen, ecommerce, or awareness. This determines which metrics matter, what benchmarks apply, and how anomalies are interpreted.
3. Config Generation: Generate the four context files: client-context.md, ads-strategy.md, media-plan.md, and memory.md. Structured, machine-readable, auto-loaded.
4. BigQuery Configuration: Map the client's ad accounts and configure bigquery-config.json with account IDs, platform mappings, and KPI targets for automated reporting.
5. Validation Run: Execute a weekly review skill against the new config. Verify that data pulls correctly, KPI comparisons are accurate, and the analysis reflects the client's actual situation.
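For step 4, a bigquery-config.json might look roughly like this. Only the filename and the kinds of fields it holds (account IDs, platform mappings, KPI targets) come from the article; the structure and every value below are invented for illustration:

```json
{
  "client": "acme-saas",
  "archetype": "lead_gen",
  "accounts": {
    "google_ads": "123-456-7890",
    "meta": "act_1234567890"
  },
  "kpi_targets": {
    "cpl": 150,
    "monthly_budget": 3000
  }
}
```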

The Archetype System

This is where most AI marketing setups fail silently. They treat every client the same way — same metrics, same benchmarks, same interpretation logic. But a lead generation client and an e-commerce client are playing fundamentally different games.

We built three archetype classifications. Every client is tagged with one, and that tag changes how every downstream analysis behaves — whether we're reviewing Singapore CPMs or Indonesian e-commerce ROAS.

Archetype Comparison

| Dimension | Lead Gen | E-commerce | Awareness |
| --- | --- | --- | --- |
| Primary KPI | Cost per lead (CPL) | Return on ad spend (ROAS) | Cost per thousand impressions (CPM), reach |
| Success Signal | CPL at or below target, lead volume within 20% of plan | ROAS above target, revenue growing | Reach targets met, frequency under control |
| Danger Signal | CPL 2x+ target for 7+ days | ROAS below breakeven for 3+ days | Frequency above 4 in a single week |
| Budget Logic | Flexible — shift budget to lowest CPL campaigns | ROAS-gated — scale winners, cut losers fast | Reach-optimized — spread for maximum coverage |
| Review Focus | Lead quality indicators, funnel conversion rate, CPL trend | Revenue attribution, product-level performance, AOV | Brand lift proxies, search volume trends, recall metrics |
| Typical Variance | High — CPL can swing 50%+ week to week and be normal | Medium — ROAS fluctuates with promotions and seasonality | Low — CPM and reach are relatively stable |

Without archetype classification, AI treats a 40% CPL increase the same way regardless of client type. With it, the system knows that a 40% CPL spike for a lead gen client running B2B campaigns is within normal variance (long sales cycles, small volumes, high weekly fluctuation), while the same spike for an e-commerce client warrants immediate investigation.
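That archetype-dependent interpretation is easy to express in code. An illustrative sketch with made-up tolerance values, loosely following the "Typical Variance" row of the comparison table (Kaliber's actual thresholds are not published in this article):

```python
# Week-over-week variance tolerance per archetype. Values are
# illustrative assumptions, not the real system's thresholds.
VARIANCE_TOLERANCE = {"lead_gen": 0.50, "ecommerce": 0.20, "awareness": 0.10}

def classify_spike(archetype: str, pct_change: float) -> str:
    """Decide whether a week-over-week KPI change warrants investigation."""
    tolerance = VARIANCE_TOLERANCE[archetype]
    return "normal variance" if abs(pct_change) <= tolerance else "investigate"
```

Under these assumed thresholds, the same 40% CPL spike is shrugged off for a lead gen client and flagged for an e-commerce client — exactly the behavior the archetype tag is meant to produce.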

The Payoff: Context That Loads Itself

Here is what Config-First Onboarding makes possible: the first time a strategist asks the system to analyze a client's campaign performance, the analysis is already specific. Not because someone wrote a clever prompt, but because the client's business model, campaign architecture, budget targets, and historical decisions are loaded automatically.

The strategist types one command. The system loads four context files, queries live data from the appropriate APIs, applies the right archetype lens, compares against configured KPIs, and produces an analysis that sounds like it was written by someone who has managed the account for months.

This is the difference between AI as a generic tool and AI as an operational system. Generic tools need to be told everything, every time. Operational systems know the context and apply it consistently. The config is what makes that transition possible.

When we onboard a new client now — whether they're managing APAC campaigns from Singapore or running Southeast Asian e-commerce — the first thing we build is the config — not the strategy deck, not the media plan, not the creative brief. The config. Because once the config exists, every other output becomes client-specific by default. Strategy documents become generated artifacts, derived from structured data, rather than authored narratives that the system has to interpret.

Context is not optional. It is the entire system.

Frequently Asked Questions

Why does AI give generic marketing advice and how do you fix it?

AI gives generic advice because it lacks specific context about the client's business model, campaign structure, and performance targets. The fix is not a better prompt — it is structured context files that load automatically before every analysis. When AI knows that a client is a B2B lead gen company with a $150 CPL target and a 90-day sales cycle, its analysis becomes specific and actionable instead of generic.

What is Config-First Onboarding for AI marketing systems?

Config-First Onboarding is a methodology where you create a machine-readable client configuration — including business model classification, campaign taxonomy, budget allocation, and KPI targets — before writing any strategy documents. The config becomes the source of truth that AI loads automatically, ensuring every analysis is client-specific from the first query. Strategy documents are derived from the config, not the other way around.

How do you make AI understand a client's business model?

Through archetype classification and structured context files. Each client is classified as lead_gen, ecommerce, or awareness — which changes how metrics are interpreted, what benchmarks apply, and how anomalies are flagged. A client-context.md file captures the business model details (conversion type, average order value, sales cycle length), and this file loads automatically before any AI analysis runs.

What data does AI need to analyze marketing campaigns effectively?

Effective AI campaign analysis requires four layers of data: business context (what the client sells, who they sell to, what success looks like), campaign architecture (how campaigns map to funnel stages and audience segments), budget and KPI targets (what the plan says, not just what happened), and historical decisions (what was changed, when, and why). Without all four layers, AI can describe what happened but cannot interpret whether it is good or bad.

How do marketing agencies structure client data for AI analysis?

At Kaliber, each client has four markdown files in a standardized structure: client-context.md (business model and goals), ads-strategy.md (campaign architecture and targeting logic), media-plan.md (budget allocation and timeline), and memory.md (running log of decisions and learnings). These files are version-controlled, loaded programmatically before every skill invocation, and updated after significant changes. The structure is consistent across all clients, but the content is client-specific.

What is the difference between lead gen and e-commerce campaign analysis?

Lead gen analysis focuses on cost per lead, lead volume, and funnel conversion rate — with high tolerance for weekly variance because small lead volumes produce noisy data. E-commerce analysis focuses on return on ad spend, revenue, and average order value — with lower tolerance for poor performance because every dollar of ad spend has a direct, measurable revenue outcome. The same CPA increase that is normal variance for lead gen may require immediate action for e-commerce. Archetype classification ensures AI applies the right interpretation automatically.

About the Author

Robert Lai

Founder & CEO, Kaliber Group

Robert leads Kaliber Group, an AI-native performance marketing agency in Singapore. He built Kali — one of the first Claude-native marketing operations systems in APAC — managing 20+ clients across Singapore and Indonesia with 36 custom AI skills. Based in Singapore.