Key Takeaways

Deliberation Gates are a three-phase control architecture that lets AI prepare campaign changes, budget adjustments, and optimizations — but requires human approval before any money moves. Here's how it works.

The Fear Is Valid

Here is every marketer's nightmare: an AI system autonomously adjusting bids at 3 AM, blowing through a client's monthly budget in a weekend, targeting the wrong audiences at scale, and nobody noticing until Monday morning when the client calls asking why their credit card got charged $40,000.

This nightmare is not irrational. It is the logical outcome of how most AI marketing tools are designed. They optimize for autonomy — set it and forget it, let the algorithm learn, trust the machine. And sometimes the machine learns the wrong thing, scales the wrong decision, and spends real money doing it.

The industry response to this fear splits into two camps, both wrong. Camp one says "just let AI handle it" and accepts the occasional catastrophic failure as the cost of automation. Camp two says "AI can't be trusted with money" and restricts it to generating blog posts and email subject lines. Neither camp gets the benefit of AI in campaign operations without the risk.

There is a third option. We call it Deliberation Gates — a three-phase approval architecture where AI handles analysis and preparation, but humans approve before any budget moves. We developed this pattern while managing APAC campaigns from our Singapore headquarters, where a single mistake could affect client budgets across Singapore, Indonesia, and Australia simultaneously.

The Architecture: Three Phases, Two Checkpoints

A Deliberation Gate is a mandatory checkpoint between AI preparation and live execution. The core principle is simple: AI does the thinking and building; humans do the approving and activating. This is not AI-as-suggestion-box, where the human still does all the work. The AI does the heavy lifting — research, analysis, campaign structuring, budget allocation, bid calculations. But between "here is what I recommend" and "it is now live," there is a gate that only a named human can open.

The system has three phases, each separated by a gate.

Phase 1: Research

In the Research phase, AI gathers data and produces recommendations. The gads-plan skill, for example, takes a brief from the strategist, researches keywords using the Google Ads API, pulls search volume and competition data, forecasts costs, and produces a structured recommendation: "Here are the keyword clusters, expected CPCs, estimated monthly spend, and projected conversion volume at different budget levels."

The output is a recommendation document. No campaign has been created. No money can move. The strategist reviews the research, pushes back on assumptions, adjusts targeting, and approves the direction — or rejects it and asks for a different approach.
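The recommendation document can be pictured as a plain data structure. This is a sketch only: the field names and values below are illustrative, since the article does not specify a schema for the gads-plan output.

```python
# Illustrative shape of a Phase 1 recommendation document.
# All field names and figures are hypothetical examples, not a real schema.
recommendation = {
    "client": "example-client-sg",
    "phase": "research",
    "keyword_clusters": [
        {"theme": "brand exact",
         "keywords": ["acme shoes", "acme sneakers"],
         "expected_cpc_sgd": 1.20},
    ],
    "budget_scenarios": [
        {"monthly_budget_sgd": 3000,
         "projected_clicks": 2400,
         "projected_conversions": 96},
    ],
    # Key property of Phase 1: nothing exists in the ad platform yet.
    "status": "awaiting_gate_1_approval",
}
```

The point of the structure is the last field: until Gate 1 opens, the recommendation is just a document, with no platform entities and no spend possible.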

Phase 2: Build

Once research is approved, the Build phase begins. The gads-campaign-setup skill takes the approved plan and constructs the full campaign architecture: campaign settings, ad group structure, keyword lists, bid strategies, ad copy, and extensions. It uses the Google Ads API or Meta Marketing API to create these entities in the live ad platform.

The critical constraint: everything is created in a PAUSED state. The campaign exists in the platform, fully configured, but it is not spending money. The strategist can log into Google Ads or Meta, inspect every setting, verify the structure matches the plan, and confirm nothing was misconfigured.

This is the step most AI marketing tools skip entirely. They go from "recommendation" to "live" with no intermediate state. The Build phase gives humans a complete preview of what will go live — in the actual platform, not in a mockup — before any budget is committed.
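The Build-phase invariant (everything created PAUSED) can be sketched as a simple function. This is a simplified stand-in, not the real Google Ads API: a real implementation would push these entities to the platform, but the invariant it enforces is the same.

```python
# Sketch of the Build-phase invariant: every entity the AI creates starts
# PAUSED. Entity shapes are illustrative; a real system would create these
# via the Google Ads or Meta Marketing API rather than return dicts.
def build_campaign(plan: dict) -> list[dict]:
    """Turn an approved plan into platform entities, all PAUSED."""
    entities = []
    for cluster in plan["keyword_clusters"]:
        entities.append({
            "type": "ad_group",
            "name": cluster["theme"],
            "status": "PAUSED",  # the invariant: nothing can spend yet
        })
    entities.append({
        "type": "campaign",
        "name": plan["campaign_name"],
        "status": "PAUSED",
    })
    # Guard the invariant before anything reaches the platform.
    assert all(e["status"] == "PAUSED" for e in entities)
    return entities
```

Because the campaign exists in the platform but cannot spend, the strategist gets a full-fidelity preview, not a mockup, at zero cost.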

Phase 3: Execute

Execution is the final gate. The execute-optimisation skill activates the paused campaigns, adjusts bids, or modifies budgets — but only after a named person has signed off. The sign-off is logged: who approved it, when, and what specifically was approved.

This applies to ongoing optimization, not just new campaigns. When the system recommends increasing a campaign's daily budget by 20% because it is converting at a CPA well below target, that recommendation goes through the same gate. AI prepares the change. Human approves. System executes and logs.
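The gate itself is a small amount of logic. A minimal sketch, with hypothetical names: a proposed change cannot execute without a named approver, and executing it writes the log entry.

```python
# Minimal sketch of a Deliberation Gate. Class and function names are
# illustrative, not the article's actual implementation.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ProposedChange:
    description: str                 # e.g. "Increase daily budget $30 -> $36"
    rationale: str                   # the AI's data-backed reasoning
    approved_by: Optional[str] = None

action_log: list = []

def approve(change: ProposedChange, approver: str) -> None:
    """A named human opens the gate."""
    change.approved_by = approver

def execute(change: ProposedChange) -> dict:
    """Execution is impossible without a named approver, and always logs."""
    if change.approved_by is None:
        raise PermissionError("Deliberation Gate: no named approver")
    entry = {
        "what": change.description,
        "why": change.rationale,
        "who": change.approved_by,
        "when": datetime.now(timezone.utc).isoformat(),
    }
    action_log.append(entry)  # the audit trail is mandatory, not optional
    return entry
```

Calling `execute` before `approve` raises rather than silently proceeding; the gate is enforced in code, not by convention.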

Deliberation Gate Architecture

Phase 1 — Research: Analyze & Recommend
AI queries APIs, researches keywords/audiences, forecasts costs, and produces a structured recommendation. No entities created. No money at risk.
Output: Recommendation document with data-backed rationale

Gate 1 — Strategist reviews & approves direction

Phase 2 — Build: Create as PAUSED
AI constructs the full campaign structure in the live platform — campaigns, ad sets, ads, targeting, bids, copy. Everything created in PAUSED state. Zero spend.
Output: Complete campaign in platform, ready for inspection

Gate 2 — Named sign-off to activate

Phase 3 — Execute: Activate & Log
System activates campaigns or applies approved changes. Every action logged with timestamp, approver name, and expected outcome for audit trail.
Output: Live changes + action log entry with full attribution

The Action Logging Loop

Every change that passes through a Deliberation Gate produces a log entry. This is not optional and it is not an afterthought — it is a core architectural requirement. The log captures:

What changed: "Increased daily budget on campaign 'SG_Search_Brand_Exact' from $30 to $36 (+20%)."
Why: "CPA at $12.40 vs. $18 target. Campaign under-spending against daily target by 25%. Budget increase recommended to capture available demand."
Who approved: "Approved by [strategist name] on 2026-03-14."
What happened next: Updated after 48 hours with the result — did CPA hold, did spend increase as expected, any unexpected effects.
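The four fields map directly onto a log record. The sketch below uses the article's own budget-change example; the structure and the `record_outcome` helper are illustrative.

```python
# A gate log entry with the four fields from the text. The "result" field
# starts empty and is filled in roughly 48 hours later.
log_entry = {
    "what": ("Increased daily budget on campaign 'SG_Search_Brand_Exact' "
             "from $30 to $36 (+20%)"),
    "why": ("CPA at $12.40 vs. $18 target. Campaign under-spending against "
            "daily target by 25%. Budget increase recommended to capture "
            "available demand."),
    "approved_by": "strategist name",
    "approved_on": "2026-03-14",
    "result": None,  # updated ~48h later: did CPA hold, did spend rise as expected
}

def record_outcome(entry: dict, outcome: str) -> None:
    """Close the loop: attach the observed result to the original decision."""
    entry["result"] = outcome
```

The follow-up step is what turns a compliance log into a learning record: each entry pairs a decision with its observed outcome.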

These logs serve two purposes. First, they create accountability. When a client asks "why did our spend increase last week?" the answer is immediately retrievable — not reconstructed from memory, but documented with reasoning attached. Second, they feed the institutional memory system. The memory file for each client accumulates these decision records, so the system learns from its own optimization history.

Pod Action Logs: Cross-Client Visibility

Individual client logs are useful. Cross-client visibility is transformational. Each delivery pod (we run two — Pod Singapore serving 12 clients and Pod Indonesia serving 9 clients) maintains a pod action log that aggregates all changes across every client that week. A pod manager can review one document and see everything that changed across the entire portfolio.

This solves a management problem that most agencies handle badly: knowing what your team did. In a traditional agency, the manager asks each strategist what they changed, gets a verbal summary, and hopes nothing was missed. With pod action logs, every AI-assisted and human-approved change is documented in one place, with reasoning attached.

Why This Is Better Than Both Extremes

The debate about AI in marketing usually gets framed as a binary: autonomous AI vs. human-only operations. This framing misses the entire point.

Three Approaches Compared

Speed. Autonomous AI: fast — but occasionally catastrophically wrong. Manual only: slow — a human bottleneck on every task. Deliberation Gates: fast preparation, brief approval pause, fast execution.

Risk. Autonomous AI: high — AI can scale mistakes at speed. Manual only: low — but the opportunity cost of inaction is high. Deliberation Gates: low — gates catch errors before spend commits.

Accountability. Autonomous AI: unclear — "the algorithm did it." Manual only: clear — but undocumented and dependent on memory. Deliberation Gates: clear — named approver, timestamped log, rationale attached.

Learning. Autonomous AI: black box — hard to extract why decisions were made. Manual only: tribal — knowledge lives in people's heads. Deliberation Gates: documented — every decision feeds institutional memory.

Scalability. Autonomous AI: scales fast — including mistakes. Manual only: scales linearly with headcount. Deliberation Gates: AI handles volume, humans handle judgment — scales well.

Client trust. Autonomous AI: erodes — clients don't want robots managing their budget unsupervised. Manual only: high — but expensive to maintain. Deliberation Gates: high — human oversight with AI efficiency, fully auditable.

Fully autonomous AI optimizes for speed at the expense of control. Manual-only operations optimize for control at the expense of speed. Deliberation Gates optimize for both: AI handles the 80% of work that is research, preparation, and calculation, while humans handle the 20% that requires judgment, context, and accountability.

The approval step adds minutes, not hours. Reviewing a well-prepared recommendation with clear rationale takes five minutes. Rebuilding trust after an autonomous system blows a client's budget takes months.

When Gates Are Not Needed

Not every AI action requires a gate. We apply gates based on a simple principle: if it can spend money or change something a client will see, it requires a gate. If it is analysis, reporting, or internal preparation, it runs freely.

Daily pacing checks run automatically at 10 AM SGT with no approval needed — they are read-only monitoring. Weekly reviews generate automatically — they are analysis, not action. Transcript processing runs automatically — it extracts information but does not change anything external.
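A pacing check is a good example of an ungated, read-only action. A minimal sketch, assuming the 15% over-pacing threshold the article gives in the FAQ; the function shape is illustrative.

```python
# Sketch of a daily pacing check: compare actual month-to-date spend against
# expected spend-to-date and flag accounts over-pacing by more than 15%.
# Read-only: it flags for human attention, it never changes anything.
def over_pacing(actual_spend: float, daily_budget: float,
                day_of_month: int, threshold: float = 0.15) -> bool:
    expected = daily_budget * day_of_month
    if expected == 0:
        return False  # nothing budgeted, nothing to pace against
    return (actual_spend - expected) / expected > threshold

# e.g. $30/day budget on day 10: expected $300 to date.
# $360 actual is +20% over expected, so it gets flagged.
```

Because the check only reads spend data and raises a flag, it runs on a schedule with no gate; the gate applies only if someone then decides to change a budget in response.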

Gates apply to: campaign creation, budget changes, bid adjustments, audience modifications, status changes (pausing or activating campaigns), and schedule modifications. These are the actions where mistakes cost money.

This distinction matters because over-gating kills the efficiency benefit of AI. If every action requires approval, you have replaced manual work with manual approval work — a lateral move at best. The architecture is deliberate about where gates add value and where they add friction.
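The gating rule itself can be made explicit in code. This sketch encodes the principle from this section: actions that can spend money or change something client-visible require a gate; read-only analysis runs freely. The action names are illustrative labels, not a real API.

```python
# Sketch of the gate/no-gate classification described above.
# Action names are illustrative.
GATED_ACTIONS = {
    "campaign_creation", "budget_change", "bid_adjustment",
    "audience_modification", "status_change", "schedule_modification",
}
UNGATED_ACTIONS = {
    "pacing_check", "weekly_review", "transcript_processing",
}

def requires_gate(action: str) -> bool:
    if action in GATED_ACTIONS:
        return True
    if action in UNGATED_ACTIONS:
        return False
    # Unknown action types default to gated: fail safe, not fail open.
    return True
```

The default branch is the important design choice: a new action type the system has never seen is gated until someone explicitly decides it is safe to run freely.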

Building This for Your Agency

Deliberation Gates are not a product you buy. They are an architectural pattern you implement. The core requirements are straightforward: your AI system needs API access to the ad platforms (so it can create entities in a PAUSED state), a logging mechanism (so every change is recorded with attribution), and a workflow structure that separates research, build, and execute into distinct phases with explicit checkpoints between them.

The hardest part is not the technology. It is the discipline. The temptation to let AI "just handle it" grows as trust increases. Resist it. The gates are not training wheels you remove once the system matures. They are a permanent architectural feature, like the approval workflows in financial systems. You do not remove approval requirements from wire transfers once the bank has been operating for a year.

The same principle applies to AI in marketing operations — whether you're serving APAC markets from Singapore or managing campaigns anywhere else. The stakes are real. The controls should be permanent.

Frequently Asked Questions

Is it safe to let AI manage Google Ads and Meta campaigns?

It depends entirely on the control architecture. AI managing campaigns without guardrails is genuinely dangerous — it can scale mistakes at machine speed. AI managing campaigns through Deliberation Gates — where it researches, builds in a PAUSED state, and executes only after human approval — is both safe and efficient. The key is never allowing AI to activate spend autonomously. Every budget-affecting action should require named human sign-off.

What are Deliberation Gates in AI marketing automation?

Deliberation Gates are mandatory checkpoints between AI preparation and live execution in marketing operations. The system has three phases: Research (AI analyzes data and produces recommendations), Build (AI creates campaign structures in a PAUSED state), and Execute (changes go live after human approval). Each transition requires explicit sign-off. This architecture lets AI handle the analytical and structural work while keeping humans in control of spend decisions.

How do you prevent AI from overspending on ad campaigns?

Three mechanisms working together: first, all campaigns created by AI are set to PAUSED by default — they cannot spend until a human activates them. Second, budget changes require named approval through a Deliberation Gate before execution. Third, automated daily pacing checks at 10 AM SGT compare actual spend against budget targets and flag any account over-pacing by more than 15%, giving the team time to intervene before budgets are exhausted.

What guardrails should marketing agencies put on AI tools?

At minimum: no autonomous budget changes, no autonomous campaign activation, mandatory logging of all AI-initiated actions, and named human approval for any change that affects live ad spend. Beyond these basics, agencies should implement archetype-specific thresholds (what constitutes an anomaly varies by client type), cross-client visibility through pod action logs, and a memory system that records the reasoning behind every approved change for future reference.

Can AI create Google Ads campaigns without human approval?

AI can create the full campaign structure — campaigns, ad groups, keywords, ads, bid strategies, and extensions — using the Google Ads API. However, in a properly architected system, everything is created in a PAUSED state. The human reviews the complete structure in the live platform, verifies settings, and then approves activation. The AI does the building; the human does the activating. This gives you the efficiency of AI-powered campaign setup without the risk of unchecked spend.

How do marketing agencies track AI-made campaign changes?

Through action logging — every change that passes through a Deliberation Gate produces a log entry recording what changed, why (the AI's rationale), who approved it, and what happened afterward. At Kaliber, these logs exist at two levels: per-client (in the client's memory file) and per-pod (in the pod action log, which aggregates all changes across all clients for manager review). This creates a complete audit trail that is searchable and attributable.

What is the difference between autonomous AI and AI with Deliberation Gates?

Autonomous AI makes decisions and executes them without human intervention — optimizing bids, adjusting budgets, pausing underperformers, and scaling winners on its own. AI with Deliberation Gates does the same analytical work but stops at the decision boundary: it presents its recommendation with supporting data, then waits for a named human to approve before executing. The difference is not in capability — both do the same analysis — but in control. Deliberation Gates ensure that judgment, accountability, and risk assessment remain human responsibilities.

About the Author

Robert Lai

Founder & CEO, Kaliber Group

Robert leads Kaliber Group, an AI-native performance marketing agency in Singapore. He built Kali — one of the first Claude-native marketing operations systems in APAC — managing 20+ clients across Singapore and Indonesia with 36 custom AI skills. Based in Singapore.