AI Agents · ~19 min read

Google Agents CLI for Marketers: The Skills-First Playbook

Google shipped 7 skills and a CLI to turn Claude Code into an ADK expert. Here's the marketing playbook.


yfxmarketer

April 27, 2026

Google released Agents CLI in April 2026: a Python CLI plus seven portable skill files engineered to turn any coding agent into an expert at building, evaluating, and deploying agents on Google Cloud. The pattern matters more than the tool.

Three months after Anthropic introduced Claude Skills and Addy Osmani open-sourced agent-skills, Google adopted the same primitive for its Agent Development Kit. Skills-as-packages is becoming the default way enterprises distribute AI capability, with direct consequences for how marketing teams will build agents in 2026.

TL;DR

Agents CLI ships seven skills covering the full ADK lifecycle: scaffolding, evaluation, deployment, publishing, and observability. The CLI works standalone but is designed to be driven by Claude Code, Gemini CLI, Codex, or Antigravity. For marketers, this collapses the gap between prototype and production for six concrete agent patterns: news bot, industry watch, self-tuning support, organizational memory, RFP generator, and institutional memory navigator.

Key Takeaways

  • Agents CLI is not a coding agent. It is a CLI plus seven skill files installed into your existing coding agent.
  • Google adopted the same skills primitive used by Anthropic Claude Skills and Addy Osmani’s agent-skills project, signaling pattern consolidation.
  • Local development requires a Gemini API key only. Deployment requires Google Cloud.
  • Six of the twelve documented use cases map directly to marketing operations workflows.
  • The CLI is currently Pre-GA under Apache 2.0, supports Python 3.11+, and works on macOS, Linux, and Windows WSL 2.

What Is Google Agents CLI?

Agents CLI is a tool for coding agents, not a coding agent itself. It is a Python package distributed through PyPI as google-agents-cli, paired with seven skill files installed into the coding agent of your choice.

The CLI handles four lifecycle stages around the Agent Development Kit (ADK): scaffolding new projects, running evaluations, deploying to Google Cloud, and registering with Gemini Enterprise. The skills give your coding agent the contextual knowledge to drive these stages without you memorizing every flag.

The repository stands at 848 stars and 89 forks at the time of writing, lives at google/agents-cli on GitHub, and is licensed under Apache 2.0. Google flags the project as Pre-GA under its Service Specific Terms.

Action item: Run uvx google-agents-cli setup in your terminal to install the CLI and skills into Claude Code or Gemini CLI before reading further. Hands-on context makes the rest of this post actionable.

Why Does the Skills Pattern Signal Consolidation?

Skills are portable Markdown context packages with YAML frontmatter, shipped to a coding agent and activated based on user intent. Anthropic released the format. Addy Osmani open-sourced a community spec at agent-skills. Google has now adopted the same primitive for ADK.
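The package itself is small: a directory containing a SKILL.md whose YAML frontmatter tells the coding agent when to activate it, followed by the instructions in plain Markdown. A minimal sketch — the name and description fields follow Anthropic's published format; the body content here is illustrative, not copied from Google's actual skill files:

```markdown
---
name: google-agents-cli-eval
description: Use when the user asks to evaluate an ADK agent,
  write evalsets, or score agent trajectories.
---

# Evaluating ADK agents

Run agents-cli eval run from the project root. Evalsets live in
the eval/ directory; each case pairs an input prompt with the
expected tool calls and a reference answer.
```

The frontmatter is what makes skills composable: the agent reads only name and description at startup, and pulls the full body into context when the user's intent matches.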

Three independent vendors converging on one packaging format is a signal. Skills are becoming the default way to distribute domain expertise to AI coding agents, the same way npm packages became the default for JavaScript libraries.

For enterprise marketing teams, the implication is operational: the agents you build in 2026 will be packaged, versioned, and distributed as skills. Tracking how Google, Anthropic, and the open ecosystem implement skills now is how you avoid rewriting your stack in 2027.

How Do the Seven Skills Map to the ADK Lifecycle?

Each skill teaches your coding agent one phase of agent development. Together they cover scaffolding through observability.

  • google-agents-cli-workflow: development lifecycle, code preservation rules, model selection
  • google-agents-cli-adk-code: ADK Python API, agents, tools, orchestration, callbacks, state
  • google-agents-cli-scaffold: project scaffolding via create, enhance, upgrade
  • google-agents-cli-eval: evaluation methodology, evalsets, LLM-as-judge, trajectory scoring
  • google-agents-cli-deploy: deployment to Agent Runtime, Cloud Run, GKE, CI/CD, secrets
  • google-agents-cli-publish: Gemini Enterprise registration
  • google-agents-cli-observability: Cloud Trace, logging, third-party integrations

The google-agents-cli-workflow skill is always active in the coding agent. The other six load contextually based on what you ask the coding agent to do.

Which CLI Commands Matter Most for Marketing Operators?

The CLI exposes five command groups. The five commands below are the ones marketers will reach for regularly, covering the workflow from scaffold to publish.

  • agents-cli scaffold <name>: creates a new agent project from a template
  • agents-cli run "prompt": runs the agent locally with a single prompt for testing
  • agents-cli eval run: executes the eval suite against your agent
  • agents-cli deploy: ships the agent to Google Cloud
  • agents-cli publish gemini-enterprise: registers the agent with Gemini Enterprise so internal users find it

Three templates ship out of the box: adk for single agents, adk_a2a for multi-agent coordination, and agentic_rag for retrieval-augmented agents. Pick agentic_rag when your agent needs to retrieve across a document corpus; otherwise start with adk.

Action item: Run agents-cli scaffold my-first-agent --prototype --yes to generate a working project skeleton. Inspect the file structure before adding any custom logic.

Which Marketing Agents Ship First?

Six of the twelve use cases Google documents map directly to marketing operations. Each one below includes the template, a ready-to-use prompt, a 4-step ship checklist, and an honest time estimate.

Daily News Bot for Competitive Intelligence

Template: adk. Build time: 2 to 3 hours. Time saved per week: 3 to 4 hours.

Pull RSS feeds from competitors, industry blogs, and trade press. Summarize the top stories with Gemini. Post to Google Chat or email on a Cloud Scheduler trigger every morning at 7am.

SYSTEM: You are an ADK agent builder using Agents CLI inside Claude Code.

<context>
Stack: Claude Code with google-agents-cli skills installed
Template: adk
Deployment: Cloud Run with Cloud Scheduler trigger
</context>

Build a daily news bot. Specs:
1. Input: list of RSS feed URLs from {{COMPETITOR_RSS_LIST}}
2. Process: fetch feeds, deduplicate by URL, score relevance with Gemini against {{INDUSTRY_KEYWORDS}}, keep top 5
3. Output: Google Chat message via webhook {{GOOGLE_CHAT_WEBHOOK}} with title, 1-sentence summary, source link
4. Schedule: Cloud Scheduler trigger at 7am {{TIMEZONE}}
5. Eval: 3 evalsets for summary quality, relevance scoring, deduplication

MUST scaffold with: agents-cli scaffold news-bot --template adk
MUST include eval cases before deployment.
NEVER post duplicates within 7 days (track in Firestore).

Output: Numbered implementation plan, then execute step by step.
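The 7-day dedup rule in the NEVER clause is worth prototyping before wiring up Firestore. A minimal sketch of the check, shown against an in-memory dict standing in for the Firestore collection — the function name and storage shape are illustrative, not part of the CLI:

```python
from datetime import datetime, timedelta

def should_post(seen: dict, url: str, now: datetime, window_days: int = 7) -> bool:
    """Return True if this URL has not been posted in the last window_days.

    seen maps URL -> datetime last posted; in production this would be a
    Firestore document keyed by a hash of the URL.
    """
    last = seen.get(url)
    if last is not None and now - last < timedelta(days=window_days):
        return False  # duplicate inside the window: skip it
    seen[url] = now   # record the post so future runs dedupe against it
    return True

# Same story on consecutive days is suppressed, but resurfaces
# once the 7-day window has passed.
seen: dict = {}
day0 = datetime(2026, 4, 27)
assert should_post(seen, "https://example.com/launch", day0)
assert not should_post(seen, "https://example.com/launch", day0 + timedelta(days=3))
assert should_post(seen, "https://example.com/launch", day0 + timedelta(days=8))
```

Keeping the window logic in a pure function like this also makes it trivial to cover in the evalset before deployment.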

Ship checklist:

  • Provision Firestore for the dedup store via agents-cli infra single-project
  • Add 5 RSS feeds covering your top 3 competitors plus 2 industry publications
  • Write 10 evalset cases covering edge feeds (paywalls, missing dates, non-English)
  • Deploy to Cloud Run, then test the Cloud Scheduler trigger manually before enabling

Industry Watch for Strategic Signal Detection

Template: adk. Build time: 4 to 6 hours. Time saved per quarter: 8 to 12 hours.

Track public release notes, documentation updates, job postings, and conference talks across a defined competitor set. Surface shipped features and hiring trends. Persist findings to BigQuery for week-over-week review.

SYSTEM: You are an ADK agent builder using Agents CLI.

<context>
Goal: Strategic competitor signal detection
Template: adk
Storage: BigQuery for time-series competitor signals
</context>

Build an industry watch agent that runs daily and tracks:
1. Release notes from {{COMPETITOR_DOCS_URLS}}
2. Job postings from {{COMPETITOR_CAREERS_PAGES}}
3. Public roadmaps and changelogs

For each new signal, extract: signal_type, competitor_name, signal_date, summary, raw_url. Write to BigQuery table competitor_signals.

Weekly digest: Friday 9am, summarize the week's signals grouped by competitor and signal_type, post to Slack {{SLACK_WEBHOOK}}.

MUST handle pagination on careers pages.
MUST cache fetched URLs for 7 days to avoid duplicate processing.

Output: Project plan, then execute.
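The Friday digest is mostly a group-by over the week's rows. A sketch of that aggregation step, using plain dicts standing in for rows returned by the BigQuery client — field names match the schema in the prompt; everything else is illustrative:

```python
from collections import defaultdict

def build_digest(signals: list[dict]) -> dict:
    """Group a week's signals by competitor, then by signal_type,
    ready to format into a Slack message."""
    digest: dict = defaultdict(lambda: defaultdict(list))
    for s in signals:
        digest[s["competitor_name"]][s["signal_type"]].append(s["summary"])
    return {c: dict(by_type) for c, by_type in digest.items()}

signals = [
    {"competitor_name": "AcmeCo", "signal_type": "release_note", "summary": "Shipped SSO"},
    {"competitor_name": "AcmeCo", "signal_type": "job_posting", "summary": "Hiring ML eng"},
    {"competitor_name": "BetaInc", "signal_type": "release_note", "summary": "New pricing tier"},
]
digest = build_digest(signals)
assert digest["AcmeCo"]["release_note"] == ["Shipped SSO"]
assert list(digest["AcmeCo"]) == ["release_note", "job_posting"]
```

In production the input would be the result of a BigQuery query filtered to signal_date within the last 7 days; the grouping logic stays the same.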

Ship checklist:

  • Create the BigQuery dataset before running the agent (the schema is in the scaffold)
  • Add 5 to 8 competitors and verify their pages allow scraping (check robots.txt)
  • Test the weekly digest manually before scheduling
  • Build a Looker Studio dashboard on the BigQuery table for non-technical stakeholders

Self-Tuning Support Agent

Template: adk. Build time: 6 to 8 hours. Quality improvement: measurable in 2 weeks.

Run evaluation after each customer conversation. Identify gaps in knowledge or weak responses. Draft new evaluation cases automatically. Coverage adapts to the questions customers ask in production, not the ones you predicted.

SYSTEM: You are an ADK agent builder using Agents CLI.

<context>
Goal: Customer support agent with self-improving eval coverage
Template: adk
Knowledge source: {{HELP_DOCS_URL}} or {{HELP_CENTER_EXPORT_PATH}}
</context>

Build a support agent with three loops:
1. Answer loop: customer asks question, agent retrieves from knowledge base, responds
2. Eval loop: every conversation runs LLM-as-judge against 4 metrics (factual accuracy, tone match, completeness, source citation)
3. Self-tuning loop: conversations scoring below 0.7 on any metric trigger a draft new evalset case for review

Output drafts to {{REVIEW_FOLDER}} for human approval before merging into the eval suite.

Weekly report: aggregate eval scores by category, surface knowledge gaps where the agent fell back to "I don't know" more than 3 times.

MUST use the google-agents-cli-eval skill for evalset structure.
MUST log all conversations to Cloud Trace for replay.

Output: Implementation plan, then execute.
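The self-tuning loop in step 3 hinges on one decision: did this conversation's judge scores warrant a draft eval case? A sketch of that gate, with the 0.7 threshold and four metrics from the prompt — the return shape is illustrative:

```python
THRESHOLD = 0.7
METRICS = ("factual_accuracy", "tone_match", "completeness", "source_citation")

def needs_eval_case(scores: dict[str, float], threshold: float = THRESHOLD) -> list[str]:
    """Return the metrics that scored below threshold.
    A non-empty list means the conversation should be drafted
    into a new evalset case for human review."""
    return [m for m in METRICS if scores.get(m, 0.0) < threshold]

scores = {"factual_accuracy": 0.9, "tone_match": 0.65,
          "completeness": 0.8, "source_citation": 0.72}
failing = needs_eval_case(scores)
assert failing == ["tone_match"]  # only tone fell below 0.7
```

Treating a missing metric as 0.0 is a deliberate safety default: a judge failure routes the conversation to review rather than silently passing it.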

Ship checklist:

  • Export your existing help center to a single source folder before scaffolding
  • Write 25 seed evalset cases covering your top 10 customer question categories
  • Set up the review folder in Google Drive with notification permissions
  • Run the agent against last quarter’s actual support tickets before going live

Organizational Memory Agent

Template: agentic_rag. Build time: 8 to 12 hours. Time saved per recurring decision: 1 to 2 hours.

Index Google Chat, email, design documents, and meeting notes for decision records. When a marketing proposal recurs (“let’s run the same Black Friday playbook”), the agent surfaces the original thread, the decision the team reached, and the outcome.

SYSTEM: You are an ADK agent builder using Agents CLI.

<context>
Goal: Marketing team decision-record memory
Template: agentic_rag
Sources: Google Chat exports, Gmail, Google Drive (Marketing folder)
</context>

Build a RAG agent for organizational memory:
1. Nightly ingestion at 2am from {{GOOGLE_DRIVE_FOLDER_ID}}, last 90 days of Gmail labeled "marketing-decisions", and Google Chat space {{CHAT_SPACE_ID}}
2. Chunk strategy: by document section for Drive, by thread for Gmail and Chat
3. Embedding model: text-embedding-004
4. Vector store: Vertex AI Vector Search

Query interface: when a user describes a proposal or idea, the agent retrieves the 5 closest past decisions and returns: original_thread_link, decision_summary, decision_date, decision_owner, outcome_if_known.

MUST respect Drive permissions (only retrieve documents the querying user has access to).
MUST return source links, never paraphrase decisions without citation.

Output: Plan, then execute.
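The chunk strategy in step 2 (by document section for Drive) can be prototyped before touching the ingestion pipeline. A simplified sketch that splits a Markdown-exported doc on its headings — a production chunker would also cap chunk length for the embedding model:

```python
import re

def chunk_by_section(text: str) -> list[str]:
    """Split a Markdown document into one chunk per heading-led section.
    Text before the first heading becomes its own chunk."""
    # Zero-width split at the start of any line opening with 1-3 '#' marks
    parts = re.split(r"(?m)^(?=#{1,3} )", text)
    return [p.strip() for p in parts if p.strip()]

doc = """Intro paragraph.

# Decision: Black Friday budget
We approved a 20% increase.

## Outcome
CPA held flat."""
chunks = chunk_by_section(doc)
assert len(chunks) == 3
assert chunks[1].startswith("# Decision")
```

Chunking by section rather than fixed token windows keeps each decision record intact, which is what makes the retrieved original_thread_link and decision_summary coherent at query time.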

Ship checklist:

  • Audit your Drive Marketing folder structure before ingestion (the agent inherits your folder hierarchy)
  • Define what counts as a “decision record” (we suggest: any thread with the word “decided”, “approved”, or “rejected” plus any doc tagged decision-record)
  • Run ingestion on a 30-day window first, query manually, then expand to 90 days
  • Add an outcomes loop: monthly review of past decisions to tag results

RFP Response Generator

Template: agentic_rag. Build time: 10 to 16 hours. Time saved per RFP: 8 to 15 hours (estimated, varies by complexity).

Pull from past proposals, current resource availability, and pricing models. Estimate timelines and budgets. Draft a technical approach. Produce a proposal package for human review before sending.

SYSTEM: You are an ADK agent builder using Agents CLI.

<context>
Goal: First-draft RFP response generator
Template: agentic_rag
Sources: past proposals, capability statements, pricing models, case studies
</context>

Build an RFP response agent:
1. Input: RFP document (PDF or Word) uploaded to {{INTAKE_FOLDER}}
2. Extraction: identify all questions and requirements with section numbers
3. Retrieval: for each question, retrieve the 3 most relevant past responses, case studies, and capability statements from {{KNOWLEDGE_BASE_PATH}}
4. Drafting: generate a first-draft response per question, citing the source documents inline
5. Output: Word document mirroring the RFP structure with [DRAFT] tags on every section, saved to {{DRAFT_OUTPUT_FOLDER}}

MUST include a confidence score per response (high/medium/low based on retrieval similarity).
MUST flag questions where retrieval similarity is below 0.6 for human writing from scratch.
NEVER fabricate metrics, certifications, or client names. Only use what's in the knowledge base.

Output: Plan, then execute.
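The confidence score in the MUST clauses is a simple mapping from retrieval similarity, with the 0.6 floor routing the question to a human. A sketch — the 0.6 cutoff comes from the prompt, but the other band edges are assumptions to tune against your own corpus:

```python
def confidence(similarity: float) -> str:
    """Map retrieval similarity to a response label.
    Below 0.6 the question is flagged for human writing from scratch."""
    if similarity < 0.6:
        return "human_required"
    if similarity < 0.75:   # assumed band edge
        return "low"
    if similarity < 0.9:    # assumed band edge
        return "medium"
    return "high"

assert confidence(0.45) == "human_required"
assert confidence(0.62) == "low"
assert confidence(0.8) == "medium"
assert confidence(0.95) == "high"
```

Surfacing the label next to every drafted answer gives reviewers a triage order: edit the mediums, verify the highs, write the human_required ones from scratch.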

Ship checklist:

  • Curate the source knowledge base first: 20 strongest past proposals beat 200 average ones
  • Tag every source document with industry, deal size, and outcome (won/lost) before ingestion
  • Run the agent on a closed past RFP and compare its draft to your final response
  • Set up the review workflow: who edits, who approves, who sends

Institutional Memory Navigator

Template: agentic_rag plus Gemini Enterprise registration. Build time: 12 to 20 hours. Time saved per new hire: 8 to 15 hours of onboarding.

Deploy in Gemini Enterprise with permissioned access to Drive, Google Chat, and email. Respond to onboarding questions like “how do we tag UTMs for paid social” with both the documented process and the current operational reality from recent threads.

SYSTEM: You are an ADK agent builder using Agents CLI.

<context>
Goal: New-hire onboarding agent in Gemini Enterprise
Template: agentic_rag
Distribution: Gemini Enterprise registration via agents-cli publish
</context>

Build an institutional memory agent:
1. Sources: Marketing wiki in Drive, last 180 days of #marketing-ops Google Chat, Gmail labeled "process-docs"
2. Response shape: every answer returns two parts:
   a. Documented process (from official wiki and process docs)
   b. Operational reality (from recent threads, only if it differs from the documented process)
3. Permissions: respect Drive ACLs per user, never expose content the asker doesn't have access to

Distribution: register with Gemini Enterprise so the entire marketing team finds it via the standard search.

MUST cite both sources when official and operational reality differ.
MUST flag stale processes (documented process unchanged for 12+ months but recent threads show different practice).

Output: Plan, then execute. Final step: agents-cli publish gemini-enterprise.
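The stale-process flag in the MUST clauses reduces to a date comparison plus a divergence signal. A sketch of the rule exactly as the prompt states it — the record shape is illustrative:

```python
from datetime import date, timedelta

def is_stale(doc_updated: date, thread_diverges: bool, today: date) -> bool:
    """Flag a process as stale when the documented version is 12+ months
    old AND recent threads show the team doing something different."""
    twelve_months = timedelta(days=365)
    return (today - doc_updated >= twelve_months) and thread_diverges

today = date(2026, 4, 27)
assert is_stale(date(2024, 11, 1), thread_diverges=True, today=today)
assert not is_stale(date(2026, 1, 10), thread_diverges=True, today=today)   # doc is fresh
assert not is_stale(date(2024, 11, 1), thread_diverges=False, today=today)  # no divergence
```

The divergence check is the harder half in practice: it means comparing the agent's "documented process" and "operational reality" answers and flagging only when they disagree, not merely when the doc is old.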

Ship checklist:

  • Build the agent in a single-team scope first (5 to 10 users) before company-wide rollout
  • Set up a feedback channel: every response includes a “wrong/incomplete?” link routed to a Drive folder
  • Schedule a quarterly content owner review for any process flagged as stale
  • Track usage in BigQuery agent analytics to identify which questions surface most often

Action item: Pick one pattern this week. Scaffold it today. Run the prototype in the local playground at localhost:8080 before scoping deployment.

How Do You Ship Your First Agent in 90 Minutes?

A concrete walkthrough using Claude Code and the news bot pattern. Every command runs in your terminal.

Step 1: Install Agents CLI and Skills (5 minutes)

# Install uv if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install Agents CLI plus skills into your coding agents
uvx google-agents-cli setup

# Get a Gemini API key from https://aistudio.google.com/apikey
export GEMINI_API_KEY="your-key-here"

Step 2: Scaffold the Project (2 minutes)

agents-cli scaffold news-bot --template adk --prototype --yes
cd news-bot
agents-cli install

The --prototype flag skips deployment scaffolding so you can run locally first.

Step 3: Drive Claude Code with the Right Prompt (45 minutes)

Open Claude Code in the project directory. Verify the skills loaded:

claude
/skills

You should see google-agents-cli-workflow plus the other six. Now paste the news bot prompt from the section above. Claude Code drives the implementation: it edits the agent file, writes the tools, and scaffolds the eval cases.

Step 4: Run Locally (10 minutes)

agents-cli playground

This starts the ADK web playground at http://localhost:8080 with hot reload. Test the agent with 3 sample queries before writing the eval suite.

Step 5: Run Evals (15 minutes)

agents-cli eval run

Review the trajectory scores. Anything below 0.7 needs a fix before deployment. Iterate with Claude Code on the failing cases.

Step 6: Deploy to Cloud Run (15 minutes)

# First-time only: provision the project
agents-cli infra single-project

# Deploy the agent
agents-cli deploy

The CLI handles the Docker build, the Cloud Run service, and the IAM bindings. The output URL is your agent endpoint.

Action item: Block 90 minutes on your calendar this week. Run all 6 steps. Ship one agent before you read a deeper post.

What Does This Cost on Google Cloud?

Honest cost reality for the marketing patterns above. Numbers are estimates based on Google Cloud public pricing as of April 2026 and assume modest usage (one operator team, single agent).

The five cost lines per agent:

  • Gemini API calls: $0.30 to $3 per 1M input tokens depending on model. A daily news bot processing 50 articles costs roughly $5 to $15 per month.
  • Cloud Run hosting: free tier covers 2M requests per month. Most marketing agents stay free here.
  • Cloud Scheduler triggers: $0.10 per job per month. Negligible.
  • Firestore or BigQuery storage: free tier covers small agents. Industry watch with 1 year of signals: $5 to $20 per month.
  • Vertex AI Vector Search (RAG agents only): $20 to $80 per month for a small index of 100K documents.
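The news bot line above is easy to sanity-check yourself. A worked estimate under stated assumptions — roughly 1,500 input tokens per article, 50 articles a day, 30 days; your article length and model price will differ:

```python
articles_per_day = 50
tokens_per_article = 1_500   # assumed average input size
days = 30
monthly_tokens = articles_per_day * tokens_per_article * days  # 2.25M tokens

# Price band from the bullet above: $0.30 to $3 per 1M input tokens
low = monthly_tokens / 1_000_000 * 0.30
high = monthly_tokens / 1_000_000 * 3.00
assert monthly_tokens == 2_250_000
assert abs(low - 0.675) < 1e-6  # input-only floor, ~$0.68
assert high == 6.75             # input-only ceiling
# Output tokens, summarization prompts, and retries push the real
# bill toward the $5 to $15 range quoted above.
```

The gap between the raw input math and the quoted range is the point: output tokens and retries, not input volume, dominate the bill for summarization agents.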

Realistic monthly cost ranges:

  • Daily News Bot: $10 to $20
  • Industry Watch: $15 to $35
  • Self-Tuning Support: $40 to $150 (scales with conversation volume)
  • Organizational Memory (agentic_rag): $50 to $120
  • RFP Generator (agentic_rag, low volume): $30 to $80
  • Institutional Memory Navigator: $80 to $200 plus Gemini Enterprise license

Verify these against the Google Cloud pricing calculator for your specific volume. Costs scale linearly with usage, so a 10x conversation volume on the support agent is roughly 10x the monthly bill.

Action item: Set a Google Cloud budget alert at $50 for your first agent deployment. Adjust upward only after you have 30 days of real usage data.

What Is the 7-Day Starter Plan for Marketing Teams?

A concrete week-by-week schedule for a marketing operator new to Agents CLI.

Day 1 (60 minutes): Install Agents CLI, get the Gemini API key, run the manual workflow tutorial end-to-end with the example agent.

Day 2 (90 minutes): Scaffold the news bot. Run it locally with sample feeds. No deployment yet.

Day 3 (90 minutes): Write 10 eval cases for the news bot. Run agents-cli eval run. Fix the lowest-scoring case.

Day 4 (60 minutes): Deploy the news bot to Cloud Run. Configure Cloud Scheduler. Verify the morning post arrives in Google Chat.

Day 5 (90 minutes): Pick the second pattern (we suggest Industry Watch or Organizational Memory). Scaffold it. Stop at the local playground.

Day 6 (45 minutes): Document what worked, what broke, and the cost so far. Share with one other marketer.

Day 7 (off): Let the news bot run for a full week before iterating.

The point of the schedule: ship one working agent in production before scaffolding the second. Most marketing teams build five half-finished agents and ship none. Reverse the order.

How Do Claude Code and Agents CLI Work Together Under the Hood?

The walkthrough above showed Claude Code driving Agents CLI. The mechanism is worth understanding because it determines what the agent does when your prompts get more ambiguous.

When you ask Claude Code to “build a daily news bot for marketing,” the google-agents-cli-workflow skill loads first because it’s always active. The workflow skill identifies the request as a scaffolding task and pulls the scaffold and adk-code skills into context. Claude Code then drives the CLI on your behalf.

The implication for prompt design: be specific about the lifecycle stage. “Scaffold an agent” loads different skills than “evaluate this agent” or “deploy this agent.” Vague prompts force the workflow skill to guess.
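The "be specific about the lifecycle stage" advice can be pictured as keyword routing. A toy illustration of why vague prompts force a guess — this is not how the workflow skill is actually implemented, just the shape of the problem:

```python
# Hypothetical trigger phrases; the real skills match on richer intent.
SKILL_TRIGGERS = {
    "google-agents-cli-scaffold": {"scaffold", "create", "new project"},
    "google-agents-cli-eval": {"evaluate", "evalset", "score"},
    "google-agents-cli-deploy": {"deploy", "cloud run", "ship"},
}

def route(prompt: str) -> list[str]:
    """Return skills whose trigger phrases appear in the prompt;
    an empty list is the 'workflow skill has to guess' case."""
    p = prompt.lower()
    return [skill for skill, triggers in SKILL_TRIGGERS.items()
            if any(t in p for t in triggers)]

assert route("Scaffold a daily news bot") == ["google-agents-cli-scaffold"]
assert route("Evaluate the agent and deploy it") == [
    "google-agents-cli-eval", "google-agents-cli-deploy"]
assert route("build a daily news bot") == []  # vague: no stage named
```

The last case is the one to avoid in practice: naming the stage ("scaffold", "evaluate", "deploy") pulls the right expertise into context instead of leaving the workflow skill to infer it.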

SYSTEM: You are a marketing ops engineer using Claude Code with Agents CLI skills installed.

<context>
Goal: Build a competitive intelligence agent
Stack: Claude Code, Agents CLI, Gemini API, Google Cloud
Templates available: adk, adk_a2a, agentic_rag
</context>

Walk me through:
1. Which template fits this use case and why
2. The exact commands to scaffold the project
3. The eval cases I should write before deployment
4. The deployment target (Cloud Run vs Agent Runtime vs GKE) for my expected load of {{REQUESTS_PER_DAY}}

MUST follow Agents CLI conventions. MUST cite the relevant skill for each step.

Output: Numbered plan with commands and rationale.

The benefit of driving Agents CLI through Claude Code rather than typing commands directly: you skip the documentation-reading phase. Claude Code already knows the flags, the templates, and the gotchas because the skills loaded into its context.

Where Does Agents CLI Fit Versus Claude Skills and agent-skills?

Three skills ecosystems exist as of April 2026. Each serves a different layer.

  • Anthropic Claude Skills: skills shipped natively inside Claude. Best for Claude-specific workflows like document creation, code review, and research.
  • agent-skills (Addy Osmani): open community spec for portable skills working across coding agents. Best for cross-vendor workflows and personal tooling.
  • Google Agents CLI skills: skills focused on building ADK agents for Google Cloud deployment. Best when your destination is Agent Runtime, Cloud Run, or GKE.

Marketing teams will accumulate all three over time. The decision rule is simple: pick the skills package whose runtime matches where the agent will live.

What Should Marketers Track Next?

Three signals to watch in the next two quarters.

OpenAI has not shipped an equivalent skills primitive for Codex as of April 2026. If they adopt the same format, the consolidation thesis strengthens. If they ship a competing primitive, fragmentation becomes the operating reality.

Anthropic’s Claude Skills marketplace is currently developer-focused. Watch for marketing-specific skill packages from agencies and SaaS vendors.

Microsoft 365 Copilot adopted a similar pattern with declarative agents and connectors. Convergence across Microsoft, Google, and Anthropic on one packaging format is the scenario worth planning for.

Action item: Set a calendar reminder for July 2026 to audit which skills packages your marketing team has installed across Claude Code, Gemini CLI, and any other coding agents in use.

Implementation Gotchas Specific to Marketers

Document these before your first deployment.

  • ADK agents are Python only. Go, Java, and TypeScript are not supported as of April 2026.
  • Real-time voice and video are not yet supported. Build text-first.
  • Multi-cloud deployments require custom infrastructure. Agents CLI is opinionated toward Google Cloud.
  • Local development with a Gemini API key from AI Studio does not require a Google Cloud account. Deployment does.
  • The CLI is distributed as a pre-built wheel, not as source. The Python code is inspectable inside the .whl archive but not editable.

Final Takeaways

Agents CLI is the right tool when your agent needs to live on Google Cloud and the operator team prefers Claude Code or Gemini CLI as the development surface.

The skills pattern is consolidating across Anthropic, Google, and the open community. Marketing teams learning one skills format will transfer most of the knowledge to the others.

Six of the twelve documented use cases map directly to marketing operations. Pick one this week. Ship the prototype before you scope the deployment.

Claude Code plus Agents CLI collapses the documentation-reading phase. The skills carry the operational knowledge so you focus on the agent’s behavior.

Skills-as-packages is the npm moment for AI agent capability. Track the format, accumulate the right packages, and your marketing stack compounds rather than fragments.


yfxmarketer

AI Marketing Engineer

Writing about AI marketing, growth, and the systems behind successful campaigns.

read_next(related)