
The Complete Guide to AI Marketing Agents and Skills: OpenClaw and Claude Code vs. Hyper

AI Agents · Elliot Fleck · 25 min read · March 22, 2026

A practitioner's guide to AI agents that execute real marketing work — covering the open-source tools, the managed platforms, the skills ecosystem, the integration trade-offs, and 15 production-grade workflows you can deploy today across paid media, SEO, GEO, creative, content, and operations.


For the last two years, the conversation around AI in marketing has been dominated by chatbots. Better content, better brainstorming, better ad copy. But the underlying workflow hasn't changed: a marketer sits at a keyboard, prompts an AI, copies the output, switches to another tab, and pastes it in. The AI generates. The human implements. The bottleneck — the actual execution — remains manual.

That era is ending. The shift underway is from AI that talks to AI that does. The new frontier is agentic AI: persistent, autonomous systems that can be given a goal and then plan, reason, and execute complex, multi-step tasks across real platforms — without constant human intervention. Not "write me ad copy." Instead: "Analyze my Google Ads account, find the campaigns that are bleeding budget on irrelevant search terms, add negative keywords, reallocate that spend to the top-performing ad groups, and send me a summary in Slack when you're done."

That's a fundamentally different kind of tool. And the ecosystem for building these tools is maturing fast. At the center of this shift are three approaches worth understanding: OpenClaw, Claude Code, and Hyper. Each represents a different philosophy for how AI agents should work, who should use them, and what trade-offs are acceptable. Understanding the differences matters — because the choice you make determines not just what's possible, but what's practical for your team to actually operate at scale.

The Three Approaches to AI Marketing Agents

OpenClaw

OpenClaw is an open-source personal AI assistant. Think of it less like a chatbot and more like a junior employee that lives on your computer. You give it a goal, and it uses a large language model to reason, plan, and execute tasks. It can open a browser, write and run code, interact with files, and connect to external tools and APIs through the Model Context Protocol (MCP) — an open standard that lets AI agents interface with external services.

The appeal of OpenClaw is its openness. It's free, it's extensible, and if you're a developer, you can make it do essentially anything. The community around it is large and growing, with thousands of MCP servers, skills, and plugins available. For a technically sophisticated solo operator — someone comfortable in a terminal, managing API keys, and debugging integration failures — it's a powerful foundation.

The trade-off is that everything is your responsibility. You choose the model, you manage the API keys, you configure each integration, you handle security. There's no guardrail between the agent and your local machine, and the attack surface is significant. More on that later.

Claude Code

Claude Code is Anthropic's agentic coding tool — a terminal-based AI agent that can read, write, and execute code autonomously in a persistent session. It was purpose-built for software engineering: refactoring codebases, writing tests, debugging across files, managing git workflows. But marketers with technical chops have started using it for something else entirely — building and running marketing automations directly from the command line.

Claude Code's strength is the depth of its reasoning. Because it has native access to Anthropic's most capable models, it can handle genuinely complex analytical tasks: writing Python scripts to process campaign data, building custom reporting dashboards, analyzing large CSVs, and generating insights that go beyond surface-level summaries. It's particularly good at tasks that live at the intersection of marketing and data engineering — the kind of work that would normally require a data analyst or a marketing ops specialist.

Like OpenClaw, Claude Code operates on your local machine and extends its capabilities through MCP and skills. The same ecosystem of tools is available. The same security considerations apply. The difference is in the DNA: where OpenClaw is designed as a generalist personal assistant, Claude Code is a coding agent that happens to be very good at marketing-adjacent technical work.

Hyper

Hyper takes a fundamentally different approach. Rather than giving you a general-purpose agent and asking you to assemble the marketing stack yourself, Hyper provides the complete environment — and this distinction matters more than it might seem.

The foundation is the same kind of agentic AI: autonomous agents that reason, plan, and execute tasks. But Hyper wraps that capability in a purpose-built operating environment for marketing. Hyper is model agnostic — it works with Claude, OpenAI, Gemini, Kimi, Minimax, and any other frontier model. This isn't just a checkbox feature. Model agnosticism means Hyper can route different tasks to different models based on what each one is best at: a fast, inexpensive model for data formatting and simple lookups, a reasoning-heavy model for campaign strategy and diagnosis, a multimodal model for creative analysis. It also means you're never locked into a single provider's pricing, availability, or capabilities.
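To make the routing idea concrete, here is a minimal sketch. The task categories and model names are placeholders, not Hyper's actual routing configuration; the point is only the shape of the decision.

```python
# Illustrative task-to-model routing. Categories and model names are
# assumptions for this sketch, not Hyper internals.
ROUTING_TABLE = {
    "data_formatting": "fast-inexpensive-model",
    "campaign_strategy": "reasoning-model",
    "creative_analysis": "multimodal-model",
}

def route(task_type: str) -> str:
    # Unrecognized task types fall back to the reasoning model.
    return ROUTING_TABLE.get(task_type, "reasoning-model")
```

A simple lookup like "format this CSV" would resolve to the cheap model, while "diagnose this campaign" would get the reasoning-heavy one.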

Agents operate in isolated sandbox environments with real compute — they can write Python, run SQL directly against connected databases and data warehouses, browse the web, and generate creative assets. The SQL capability is worth highlighting: agents can query your PostgreSQL, BigQuery, or MySQL databases directly, joining ad platform data with CRM records, product data, or revenue figures to produce the kind of cross-functional analysis that normally requires a data team and a week-long ticket queue. They have persistent memory that carries context across sessions, days, and weeks. They can be scheduled to run on any frequency, triggered by webhooks, or chained into multi-step workflows.
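The cross-source join described above is ordinary SQL once the data is reachable. This sketch uses an in-memory SQLite database with invented table and column names to show the kind of query an agent might run against a real warehouse; it is not a real Hyper schema.

```python
import sqlite3

# Toy schema: join ad spend with CRM revenue to compute true ROAS per
# campaign. Table and column names are assumptions for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ad_spend (campaign TEXT, spend REAL);
CREATE TABLE crm_deals (campaign TEXT, revenue REAL);
INSERT INTO ad_spend VALUES ('summer_sale', 1000.0), ('brand', 500.0);
INSERT INTO crm_deals VALUES ('summer_sale', 4200.0), ('brand', 900.0);
""")
rows = conn.execute("""
SELECT s.campaign, d.revenue / s.spend AS roas
FROM ad_spend s JOIN crm_deals d ON d.campaign = s.campaign
ORDER BY roas DESC
""").fetchall()
# rows → [('summer_sale', 4.2), ('brand', 1.8)]
```

The same join against PostgreSQL or BigQuery is what lets an agent report revenue-based ROAS instead of the platform-reported proxy.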

For complex, high-volume tasks, Hyper uses sub-agent orchestration — breaking large jobs into smaller pieces and delegating them to specialized sub-agents that run in parallel. A cross-platform audit that would take a single agent 45 minutes to complete sequentially can be split across sub-agents — one analyzing Google Ads, one analyzing Meta, one pulling SEO data, one scraping competitor creative — and finished in a fraction of the time. Sub-agent orchestration also keeps costs down: instead of feeding an entire campaign's worth of data into a single expensive context window, each sub-agent works with only the data it needs, using the most cost-efficient model for its specific task. The orchestrating agent then synthesizes the results into a unified output.
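The fan-out/fan-in pattern behind sub-agent orchestration can be sketched in a few lines. The sub-agent names and their return values here are placeholders; a real sub-agent would call a model with only the slice of data it needs.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder sub-agent: in practice this would run a scoped model call.
def run_subagent(scope: str) -> dict:
    return {"scope": scope, "findings": f"summary for {scope}"}

# Fan out one sub-agent per audit scope, in parallel.
scopes = ["google_ads", "meta_ads", "seo", "competitor_creative"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_subagent, scopes))

# Fan in: the orchestrator synthesizes the parallel outputs.
report = {r["scope"]: r["findings"] for r in results}
```

Each worker sees only its own scope, which is what keeps both wall-clock time and context-window cost down.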

What makes Hyper different from the DIY approach isn't the intelligence — it's the infrastructure and integrations. Hyper has over 80 native integrations built on rich APIs and SDKs, not thin MCP wrappers. The difference is significant. When you connect Meta Ads to OpenClaw or Claude Code via a typical third-party ad MCP, you get a basic interface that can pull some data and push some changes. When Hyper connects to Meta Ads, it connects through the full Marketing API with deep understanding of campaign structures, audience configurations, creative specifications, and the dozens of settings that interact in non-obvious ways. The same applies across Google Ads, TikTok, LinkedIn, HubSpot, Slack, Google Search Console, GA4, email ESPs, CRMs, and more.

On top of these integrations, Hyper built what it calls the context layer — the decision frameworks, diagnostic patterns, and operational knowledge that turn a general agent into a marketing expert. When Hyper analyzes a campaign, it doesn't just pull metrics; it runs the same diagnostic logic an experienced media buyer would, checking whether low performance stems from creative fatigue or audience saturation, identifying when rising costs correlate with frequency, catching configuration issues before they compound. Because marketing changes constantly — Meta's Andromeda ad delivery system, Google's Performance Max updates, shifting best practices — Hyper maintains pipelines that keep this context current rather than relying on stale training data.

Hyper also supports custom MCPs and is rolling out its own MCP server that gives external agents access to Hyper's rich integration layer. So the ecosystem isn't either/or — you can bring your own tools into Hyper, or bring Hyper's capabilities into your existing agent setup.

The built-in tooling goes deep. HyperSEO provides search visibility tracking across Google alongside LLM visibility across ChatGPT, Claude, Perplexity, and Gemini — when someone asks an AI for recommendations in your category, you can see whether your brand appears. Native scrapers pull data from the web, Meta's Ads Library, Reddit, X, and YouTube. Agents can browse the web autonomously, navigate complex workflows that would normally require a human at the keyboard, and connect to databases directly — running SQL queries against your data warehouse, joining marketing data with business data for analysis that goes far deeper than what any standalone tool can produce.

Tasks, schedules, and triggers are first-class in Hyper — not a bolt-on script you hope keeps running. You can run work on a cron-style schedule at almost any cadence: every minute, hourly, daily, specific weekdays (e.g. Tuesday and Wednesday only), weekly, monthly, or custom combinations. The same system supports event-driven runs: a webhook fires, an important email lands, a campaign KPI crosses a threshold (CPA up, ROAS down, spend spike), or data from a connected integration changes — and an agent picks up the job without you re-prompting. That combination — reliable recurrence plus real triggers — is what turns "I asked the AI once" into always-on marketing operations.

You assign work the way you would brief a coworker: recurring checklists, one-off projects, and "when this happens, do that" rules. Agents can schedule follow-ups for themselves (next review, re-pull metrics, retry after a rate limit) and you get full visibility: what ran, when, what it did, and what's queued next. None of it depends on your laptop being open or a local terminal staying connected.
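A "when this happens, do that" rule ultimately reduces to a threshold check like the one below. The 20% default and the CPA example are illustrative assumptions, not Hyper's actual rule engine.

```python
# Hypothetical KPI-threshold trigger rule (assumed 20% default band).
def should_fire(value: float, baseline: float, pct: float = 0.20) -> bool:
    """Fire when a KPI moves more than `pct` from its baseline."""
    if baseline == 0:
        return value > 0  # any movement on an idle metric is notable
    return abs(value - baseline) / baseline > pct

# Example: CPA jumping from $75 to $110 is a ~46% move, so this fires.
```

The platform's job is everything around this check: evaluating it on fresh data, on schedule, server-side, and handing the fired event to an agent with context.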

OpenClaw and Claude Code / Cowork: How Scheduling Compares

OpenClaw can run scheduled and triggered workflows, but in practice many teams report cron and trigger reliability as uneven — missed windows, opaque failures, and more "black box" debugging than you'd want for production marketing. The project is actively improving scheduling and observability; still, you're often responsible for the process manager, machine uptime, and figuring out why a job didn't fire.

Claude Code, and Anthropic Cowork in particular, have been pushing tasks as a way to offload recurring work. That's a real step forward, but these flows are relatively new, and for deeper or longer-running work many setups still assume your machine or session is available — not ideal if you want "every morning at 6am" or "when Meta spend doubles" to run while you're offline. Hyper's model is server-side execution by default: the agent runs in Hyper's environment, schedules are enforced by platform infrastructure, and you get a clear audit trail rather than guessing whether a local scheduler woke up.

| Feature | Standard Chatbot | OpenClaw / Claude Code | Hyper |
| --- | --- | --- | --- |
| Models | Single model, provider's choice. | OpenClaw: configurable. Claude Code: Claude only. | Model agnostic. Route tasks to the best model for the job — Claude, OpenAI, Gemini, Kimi 2.5, Minimax, and more. |
| Execution | Text-based answers and suggestions. | Takes direct action. Runs code, uses tools, interacts with APIs. | Sandbox execution with sub-agent orchestration for parallel, cost-efficient task processing. |
| Tasks & scheduling | None. | OpenClaw: cron/triggers possible; community feedback often cites inconsistency; improving. Claude Code / Cowork: tasks newer; many flows need an available machine/session. | First-class cron (minute to monthly, weekdays, custom) plus triggers (webhooks, email, KPI thresholds). Server-side, dependable, full run visibility. |
| Data Access | None. | Whatever you connect manually. | Direct SQL access to databases and data warehouses. Join marketing data with business data. |
| Persistence | Forgets everything at session end. | Maintains memory and context over time. | Persistent memory, scheduled agents, triggered workflows, and cross-session context. |
| Integrations | Limited to pre-built plugins. | Extensible via MCP servers. You configure each one manually. | 80+ deep API/SDK-based integrations with secure OAuth. Plus custom MCP support. |
| Creative | Text output only. | Can call image APIs. No brand awareness, no video. | All generative models built in. Brand-aware image and video generation from a brief or URL. |
| Marketing Intelligence | None. General-purpose language model. | Whatever skills and context you provide. | Built-in context layer: decision frameworks, platform best practices, diagnostic patterns. |
| Security | Runs on a third-party's servers. | Self-hosted. Full control, but full risk. | Enterprise-grade process isolation, credential management, and audit trails. |

The Integration Gap: MCPs vs. Native APIs

This is worth unpacking because it's the most misunderstood part of the AI agent landscape, and it directly impacts what an agent can actually accomplish.

MCP (Model Context Protocol) is an open standard that lets AI agents interact with external services through a standardized interface. It's a great idea — a universal adapter that means an agent can connect to any service that exposes an MCP server. The OpenClaw and Claude Code ecosystems are built on this foundation, and there are thousands of MCP servers available for everything from Slack to Salesforce.

The limitation is depth. Most MCP servers are thin wrappers around a subset of an API. A generic Google Ads MCP, for example, might pull reports and make basic changes — and that's genuinely useful. But Meta's Marketing API alone has hundreds of endpoints, dozens of campaign objectives, complex audience structures, creative asset specifications, and platform-specific quirks that a thin wrapper simply doesn't cover. The same is true for Google Ads, especially with the complexity of Performance Max campaigns, which operate across Search, Display, YouTube, Gmail, Maps, and Discover simultaneously and require careful management of asset groups, audience signals, and search themes.

When an agent uses a thin integration, it's limited to basic operations: "get this report" and "change this budget." When an agent has deep access to the full API surface, it can do what a human expert would do: structure campaigns correctly from scratch, manage audience overlap, diagnose performance issues at the ad set level, and catch configuration mistakes before they cost money.

Hyper's native integrations are built at this deeper level. They're maintained by a team that understands both the technical API surface and the marketing domain knowledge required to use it effectively. That's the difference between "can technically interact with Meta Ads" and "can actually run your Meta Ads."

For teams that need something Hyper doesn't natively integrate with, custom MCP support means you're not locked in. You can connect any MCP server alongside the native integrations. And Hyper's upcoming MCP server goes the other direction — letting you access Hyper's deep integration layer from external agents, effectively making Hyper's marketing expertise available to any MCP-compatible tool.

Creative Generation: Images, Video, and On-Brand Assets

One of the most underappreciated capabilities of modern AI agents is creative generation — and the gap between what's possible with a general-purpose agent versus a marketing-native platform is enormous.

OpenClaw and Claude Code can call image generation APIs — Flux, Imagen, or others — and produce individual images. But marketing creative isn't just "make a picture." It's "make a picture that uses our brand colors, includes our logo, matches the aspect ratio requirements for Instagram Stories, and looks like it belongs alongside our existing campaigns." That's a fundamentally different problem.

Hyper has every major generative model built in — from Nano Banana for fast, high-quality stills to Sora for video generation — and wraps them in a brand-aware layer. When Hyper generates creative, it pulls from your brand assets: your logo, your color palette, your typography, your existing visual language. You can give it a website URL and it will analyze the design system, extract brand elements, and generate ad creatives that look native to your brand — not like generic AI output.

This matters because the creative bottleneck in paid media isn't having ideas — it's producing the assets. A media buyer who wants to test 20 ad variations needs 20 images or videos, each properly formatted for the target platform. With Hyper, that's a single request: "Generate 20 ad creatives for our summer campaign, 5 variations each across 4 messaging angles, formatted for Meta Feed and Stories." The agent handles the generation, the formatting, and the brand consistency. Then it uploads them directly to the ad platform through Hyper's native integrations — no downloading, renaming, and re-uploading required.

Video generation is where the gap widens further. Producing even a simple 15-second ad video traditionally requires a designer, editing software, and hours of work. With Sora and similar models available natively in Hyper, agents can generate video ads from a text description, iterate on them based on performance data, and deploy new variations automatically. This turns video creative testing — previously a luxury reserved for brands with in-house production teams — into something any team can do at scale.

Watch: AI-powered campaign launch

From brief to live campaign — creative generation, setup, and launch with Hyper.

The Agent Skills Ecosystem

Skills are the building blocks of any AI agent's capabilities. They're markdown files — structured documents that contain instructions, workflows, decision frameworks, and best practices for specific tasks. Think of them as playbooks that an AI agent can read and follow. A well-written skill turns a general-purpose agent into a specialist for a particular task.

The skills ecosystem is platform-agnostic. The same skill file works across OpenClaw, Claude Code, Hyper, and any agent that supports the format. This means the community's collective work benefits everyone, regardless of which agent platform you choose. Writing a skill is also how you capture and systematize your own expertise — the workflows you've developed, the edge cases you've learned to handle, the diagnostic patterns that come from years of doing the work.

  • skills.sh: The central directory for the open agent skills ecosystem. A leaderboard and search engine for discovering and installing skills with a single command: npx skills add <owner/repo>. It tracks the most popular and highest-rated skills across every category.
  • Community Repositories: Developers and marketing teams maintain their own collections. Two of the best for marketers:
    • coreyhaines31/marketingskills: Focused on CRO, copywriting, SEO, and growth engineering. High-quality, battle-tested workflows.
    • kostja94/marketing-skills: Over 160 skills for SEO, paid ads, content marketing, and 40+ page types. Broad coverage across the marketing stack.

In Hyper, skills are built into the platform — hundreds of production-grade marketing workflows maintained by the Hyper team, covering paid media, SEO, content, analytics, and operations. You can also create custom skills or import skills from the open ecosystem.

Building Your Marketing Agent Stack

If you're going the DIY route with OpenClaw or Claude Code, you need to assemble your own stack. This means choosing, configuring, and maintaining each component yourself. Here's what a typical setup looks like:

  1. The Core:
    • Agent: OpenClaw or Claude Code as the execution environment.
    • Model: A powerful frontier LLM — Claude, OpenAI, Gemini, or others. Both tools require you to bring your own API key and manage usage costs directly.
  2. Ad Platform Connectivity (MCP):
    • Custom or third-party ad MCPs: Teams often wire up their own MCP against Google Ads, Meta, LinkedIn, or TikTok — or install community servers. Coverage and reliability vary; most expose a fraction of the full Marketing API.
    • Hyper MCP: Hyper's own MCP server exposes Hyper's deep, SDK-based ad platform integrations to any MCP-compatible agent. If you're running OpenClaw or Claude Code but want marketing-grade API depth without maintaining integrations yourself, connect Hyper MCP and use Hyper's full Marketing API surface from your existing agent setup.
  3. CRM & Marketing Automation (MCP):
    • GoHighLevel MCP Server: For agencies running on GHL — interact directly with contacts, pipelines, and workflows.
  4. Data & Search APIs:
    • Serper API: A low-cost, fast alternative to the Google Search API for pulling SERP data. Essential for SEO workflows.
    • DataForSEO: An affordable, pay-as-you-go API for deep keyword research, competitor analysis, rank tracking, and SERP feature data.
  5. Communication (MCP):
    • Slack or Telegram: To receive alerts, reports, and notifications from your agent.
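For the Serper piece of this stack, the wiring is a single authenticated POST. This sketch follows Serper's documented /search endpoint and X-API-KEY header; the API key is a placeholder, and response parsing is left out since fields can vary.

```python
import json
import urllib.request

# Build (but don't send) a Serper search request. The key is a placeholder.
def build_serper_request(query: str, api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        "https://google.serper.dev/search",
        data=json.dumps({"q": query}).encode("utf-8"),
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_serper_request("best running shoes", "YOUR_SERPER_KEY")
# Send with urllib.request.urlopen(req) and json-decode the response body;
# organic results typically arrive under an "organic" key.
```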

With Hyper, this stack comes pre-assembled. Ad platforms, CRMs, SEO tools, databases, communication channels, scrapers, and analytics — all connected through native integrations with secure OAuth, no API key management required. Model selection is handled for you with automatic routing across providers, and sub-agent orchestration breaks complex tasks into parallel workstreams for faster, more cost-efficient execution. You can add custom MCPs on top if you need something specific.

The Security Risks of Self-Hosted Agents

Both OpenClaw and Claude Code are developer tools, not consumer products. They run locally on your machine with potentially broad access to your files, network, browser, and applications. For a developer who understands the security model, this is manageable. For a marketing team deploying these tools at scale, the risks are significant and well-documented.

Security researchers from firms like CrowdStrike and Microsoft have highlighted several key vulnerabilities:

  • Indirect Prompt Injection: An attacker hides malicious instructions in a webpage, email, or document that your agent reads during normal operation. The agent follows those instructions — potentially exfiltrating data, modifying campaigns, or executing commands — without your knowledge. This is particularly dangerous for marketing agents that routinely scrape competitor websites, read emails, and process external content.
  • Tool Poisoning: A compromised MCP server or API returns manipulated data that causes the agent to take incorrect actions. Imagine your budget management agent receiving inflated spend numbers from a poisoned integration — it would cut budgets on your best-performing campaigns.
  • API Key and Credential Leakage: Self-hosted agents often have access to API keys, OAuth tokens, and other credentials. If not properly isolated, these can appear in logs, terminal output, or agent-generated files. A single leaked Meta or Google Ads API key can give an attacker full control of your ad accounts.
  • Lateral Movement: An agent running on your local machine or server may have access to resources beyond what it needs — file systems, databases, internal networks. A compromised agent becomes a foothold into your broader infrastructure.

Setting up these tools securely requires expertise in network security, process isolation, container management, and identity management. For most marketing teams, this overhead — and the risk of getting it wrong — makes a managed platform the more practical choice. Hyper's agents run in isolated sandbox environments with strict process boundaries, managed credential storage, and enterprise-grade access controls built in from the ground up.

15 Production-Grade Marketing Skills

What follows are fifteen practical, production-grade workflows that represent the state of the art in AI-powered marketing automation. Each skill follows the open agent skills format — you can copy them directly into your agent's skills directory if you're running OpenClaw or Claude Code, adapt them for your stack, or use them as a reference for what's possible with a platform like Hyper (where most of these workflows are already built in with deeper platform integrations).

Every skill lists its integration requirements. You'll see three options for ad platform connectivity: Hyper (native integrations with full API depth), Hyper MCP (Hyper's deep integrations exposed to any MCP-compatible agent), and custom or third-party ad MCPs you build or install yourself. On Hyper, these skills run with richer data, deeper platform access, and zero configuration. On OpenClaw or Claude Code, you'll use Hyper MCP or your own MCP wiring.

These skills are organized roughly by domain: paid media and advertising first, then SEO, GEO, and content, then operations. Before each skill, we'll cover why the workflow matters, what manual process it replaces, and the scale of impact you can expect.


Paid Media & Advertising Skills

The biggest opportunity for AI agents in marketing is paid media. The work is highly structured, deeply data-driven, and brutally repetitive. A single Google Ads account can generate thousands of search terms per week that need review. A Meta Ads account running creative tests produces dozens of data points daily that need to be analyzed, compared, and acted on. Budget pacing, bid adjustments, negative keyword management, A/B test conclusions — these are tasks that experienced media buyers do manually, day after day, and they're exactly the kind of work that agents excel at.

The skills in this section cover the full lifecycle of paid media management: from daily monitoring and anomaly detection, through campaign creation and creative testing, to budget management and search term optimization. On a platform like Hyper, many of these workflows connect directly through native ad platform integrations — the agent has deep access to campaign structures, audience configurations, and real-time performance data without the limitations of thin MCP wrappers.

1. Paid Ads: Daily Performance Briefing

Most media buyers start their day the same way: logging into Google Ads and Meta Ads, scanning for anything that looks off, checking budgets, and trying to spot problems before they get expensive. It's a 30-60 minute ritual that happens every single morning — and it's entirely automatable. This skill replaces that ritual with an agent that runs at 8 AM, catches anomalies that a human might miss after their second coffee, and delivers the briefing to Slack before the team even opens their laptops.

---
name: paid-ads-daily-briefing
description: "Analyze daily ad performance from Google Ads and Meta Ads and deliver a formatted summary to Slack. Use when automating daily performance monitoring, setting up a morning advertising report, or creating scheduled PPC alerts."
metadata:
  version: 1.2.0
  integrations:
    ad_platforms: ["Hyper (native)", "Hyper MCP", "Custom ad platform MCP"]
    communication: ["Hyper (native)", "Slack MCP"]
---

# Paid Ads: Daily Performance Briefing

Provide a concise, actionable summary of yesterday's advertising performance across Google and Meta, delivered to a designated Slack channel. The goal is to surface what changed and what needs attention — not to recap every metric.

## Before Starting

1. Check for a .agents/product-marketing-context.md file. If it exists, read it first — it contains business goals, target CPA/ROAS, and key products that determine what counts as "good" or "bad" performance.
2. Confirm these parameters (ask if not provided):
    - Target CPA for lead gen campaigns (default: $75)
    - Target ROAS for e-commerce campaigns (default: 3.0)
    - Slack channel ID for delivery (e.g., C02A4B8XYZ)
    - Alert sensitivity — the % change threshold that warrants flagging (default: 20%)

## Workflow

1. **Schedule:** Run daily at 8:00 AM local time.
2. **Fetch data** for both Google Ads and Meta Ads. Pull the last full day (T-1) and the day before (T-2) at the campaign, ad group/ad set, and creative level. Required metrics: spend, impressions, clicks, cpa, roas, conversions, ctr.
3. **Analyze for anomalies:**
    - Flag campaigns where spend changed by more than the alert threshold. Categorize as "spike" or "drop" — this distinction matters because a 30% spike in a well-performing campaign is good news, while a 30% spike in a failing campaign is a budget leak.
    - Flag ad groups where CPA exceeds the target. Include the actual CPA and the gap above target so the reader can gauge severity at a glance.
    - Rank all active creatives by ROAS. Extract the top 3 and bottom 3 across both platforms.
    - Check for diminishing returns: if spend has increased for 3+ consecutive days without a corresponding increase in conversions, flag it. This pattern often indicates audience saturation or frequency fatigue.
4. **Format for Slack** using Block Kit. Organize into sections: Spend Alerts, CPA Alerts, Top Performers, Bottom Performers, Trends. Include a one-sentence recommendation per alert (e.g., "Consider pausing" or "Increase budget").
5. **Send** to the configured channel.

## Error Handling

- If an API call fails or returns no data, include "Data for [Platform] could not be retrieved" in the report. Never send an empty report — it looks like the automation broke.
- If total spend for a platform was $0, state that and skip the analysis for that platform. Division-by-zero on CPA/ROAS calculations will produce garbage.
- If only one platform returns data, send the report with what's available rather than waiting.
- Note at the top of every report that ad platform data can be delayed by up to 24 hours, so the numbers are preliminary.
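The step-3 anomaly checks in this skill can be sketched directly. The thresholds and labels follow the skill text, but the function names, signatures, and the four-point window used to detect "3+ consecutive days" are assumptions of this sketch.

```python
# Spend-change check: compare yesterday (T-1) to the day before (T-2).
def flag_spend_change(t1: float, t2: float, threshold: float = 0.20):
    if t2 == 0:
        return "spike" if t1 > 0 else None  # spend on a dormant campaign
    change = (t1 - t2) / t2
    if abs(change) > threshold:
        return "spike" if change > 0 else "drop"
    return None

# Diminishing-returns check: spend rose 3+ consecutive days while
# conversions failed to keep pace over the same window.
def diminishing_returns(spend: list, conversions: list) -> bool:
    if len(spend) < 4 or len(conversions) < 4:
        return False
    rising = all(a < b for a, b in zip(spend[-4:], spend[-3:]))
    return rising and conversions[-1] <= conversions[-4]
```

With the 20% default, a day-over-day move from $100 to $130 flags as a "spike", while $100 to $105 passes silently; the CPA-gap and creative-ranking checks are straightforward variations on the same pattern.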

2. Meta Ads: Campaign Creation & Launch

Launching a campaign on Meta is deceptively complex. The Ads Manager interface exposes dozens of settings — campaign objectives, optimization events, audience structures, placements, bid strategies, creative specifications — and many of them interact in ways that aren't obvious. An experienced media buyer knows that choosing "Advantage+ Audience" behaves differently than custom audiences, that Campaign Budget Optimization changes how spend distributes across ad sets, and that the wrong optimization event can train Meta's algorithm on the wrong signal entirely. This skill encodes that knowledge into a repeatable workflow that gets campaigns launched correctly the first time.

---
name: meta-ads-campaign-builder
description: "Build and launch a complete Meta Ads campaign from a brief — including campaign structure, audience targeting, creative upload, and budget configuration. Use when launching new Meta ad campaigns, setting up Facebook or Instagram advertising, or automating campaign creation from a marketing brief."
metadata:
  version: 1.0.0
  integrations:
    ad_platforms: ["Hyper (native)", "Hyper MCP", "Custom ad platform MCP"]
    creative: ["Hyper (native — brand-aware generation)", "Flux API", "Gemini Imagen API"]
---

# Meta Ads: Campaign Creation & Launch

Take a campaign brief and translate it into a fully structured, correctly configured Meta Ads campaign — ready to go live. The goal is to handle the entire setup process: objective selection, audience configuration, placement strategy, creative upload, budget allocation, and launch — with the same care an experienced media buyer would apply.

## Before Starting

Confirm these inputs (ask if not provided):
- Campaign objective in business terms (e.g., "generate leads for a SaaS product," "drive online purchases," "build awareness for a product launch")
- Target audience description (demographics, interests, behaviors, or existing custom/lookalike audience IDs)
- Geographic targeting (countries, regions, or radius targeting)
- Daily or lifetime budget
- Creative assets (images/videos) or a brief for generating them
- Landing page URL
- Pixel ID and conversion event to optimize for
- Campaign naming convention (if the account uses one)

## Workflow

1. **Map the business objective to Meta's campaign objectives.** This is where most mistakes happen. Meta offers six campaign objectives: Awareness, Traffic, Engagement, Leads, App Promotion, and Sales. "Generate leads" could mean the Leads objective (on-platform lead forms) or the Sales objective optimized for a lead event on the website. Clarify the distinction before proceeding — the wrong choice means Meta's algorithm optimizes for the wrong behavior.
2. **Configure Campaign Budget Optimization (CBO).** Use CBO rather than ad set-level budgets in most cases — it lets Meta allocate spend across ad sets based on performance. Set a minimum spend per ad set (10-20% of the daily budget) to prevent Meta from starving a new ad set before it has enough data.
3. **Build ad sets with audience structure:**
    - Create 2-3 ad sets with distinct audience segments: one broad (Advantage+ Audience with suggested targeting), one interest-based, one lookalike (if a source audience exists). This structure tests audience approaches against each other.
    - Set exclusions to prevent overlap — exclude purchasers from prospecting ad sets, exclude one interest group from another if they share significant overlap.
    - Use Advantage+ Placements unless the creative is format-specific (e.g., Reels-only video). Manual placement selection usually reduces reach without improving performance.
4. **Configure optimization and bidding:**
    - Set the conversion event that matches the business goal. For lead gen: Lead or CompleteRegistration. For e-commerce: Purchase (not AddToCart — optimizing for AddToCart finds browsers, not buyers).
    - Use "Maximize conversions" bid strategy for new campaigns. Cost cap and ROAS targets require historical data to work effectively — setting them on a new campaign usually just restricts delivery.
    - Set the attribution window to 7-day click, 1-day view (Meta's default and generally the best starting point).
5. **Upload and configure creatives:**
    - If assets are provided, upload them with correct aspect ratios per placement: 1:1 for Feed, 9:16 for Stories/Reels, 1.91:1 for right column.
    - If a brief is provided instead of assets, generate images using the configured creative integration (Hyper generates on-brand creatives using your logo, colors, and design system; standalone APIs generate generic images from a style prompt). Constrain to no text in the image — Meta penalizes text-heavy images and text renders poorly at small sizes.
    - Write primary text (max 125 chars for optimal display), headline (max 40 chars), and description for each ad. The primary text should address the audience's pain point. The headline should state the value proposition. The description is often truncated — use it for supporting detail, not critical information.
    - Create 3-5 ad variations per ad set. More variations give Meta's algorithm more to test.
6. **Set naming conventions.** Apply consistent naming at every level:
    - Campaign: [Brand]_[Objective]_[Date]
    - Ad Set: [Audience]_[Geo]_[Optimization]
    - Ad: [Creative_Type]_[Angle]_[Version]
    Without consistent naming, analyzing results across dozens of campaigns becomes impossible.
7. **Review and launch.** Before publishing, verify: Pixel is firing on the landing page, conversion event is correctly configured, daily budget matches the brief, audiences don't overlap excessively, and creative assets meet Meta's specifications.
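The naming scheme in step 6 is simple enough to encode as helpers. A sketch with hypothetical function names, assuming the bracketed fields arrive as plain strings:

```python
from datetime import date

def campaign_name(brand: str, objective: str, launch: date) -> str:
    # Campaign: [Brand]_[Objective]_[Date]
    return f"{brand}_{objective}_{launch:%Y-%m-%d}"

def ad_set_name(audience: str, geo: str, optimization: str) -> str:
    # Ad Set: [Audience]_[Geo]_[Optimization]
    return f"{audience}_{geo}_{optimization}"

def ad_name(creative_type: str, angle: str, version: int) -> str:
    # Ad: [Creative_Type]_[Angle]_[Version]
    return f"{creative_type}_{angle}_v{version}"
```

Generating names from one place, rather than typing them per entity, is what keeps them consistent across dozens of launches.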

## Error Handling

- If the Pixel is not detected on the landing page URL, flag this as a blocking issue. Launching without tracking wastes the entire budget.
- If a creative asset fails Meta's review (text policy, prohibited content), log the rejection reason and continue with remaining creatives.
- If the campaign's estimated audience size is below 100,000, warn that delivery may be limited and suggest broadening targeting.
- After launch, schedule a check-in 24 hours later to verify the campaign is delivering and learning phase metrics are on track.

3. Google Ads: Performance Max Optimization

Performance Max is Google's most complex campaign type — and increasingly, it's the most important one. PMax now drives nearly half of all Google Ads conversions in 2026. It serves ads across Search, Display, YouTube, Gmail, Maps, and Discover from a single campaign, using Google's AI to assemble creative combinations and find audiences in real time. The catch is that it's largely a black box: Google controls placement, bidding, and creative selection, and the reporting is limited. Managing PMax well requires understanding what levers you actually have, how to feed the algorithm good inputs, and how to diagnose problems when the reporting won't tell you what's wrong directly. This skill covers the ongoing management and optimization of PMax campaigns — not initial setup, but the daily and weekly work of keeping them performing.

---
name: google-ads-pmax-optimization
description: "Monitor, analyze, and optimize Google Ads Performance Max campaigns — including asset group performance, search theme refinement, audience signal tuning, and budget allocation. Use when managing PMax campaigns, analyzing Performance Max results, optimizing Google Ads automated campaigns, or diagnosing underperforming PMax asset groups."
metadata:
  version: 1.0.0
  integrations:
    ad_platforms: ["Hyper (native)", "Hyper MCP", "Custom ad platform MCP"]
---

# Google Ads: Performance Max Optimization

Provide ongoing management and optimization of Performance Max campaigns by analyzing asset group performance, refining search themes, tuning audience signals, and managing budget allocation. PMax is powerful but opaque — the value of this skill is surfacing actionable insights from limited reporting and making the adjustments that keep the algorithm pointed in the right direction.

## Before Starting

Confirm these parameters (ask if not provided):
- Google Ads account ID (Customer ID)
- Target CPA or target ROAS for each PMax campaign
- Business context: what products/services each campaign promotes, and what a "good" conversion looks like
- Brand terms to monitor (to assess how much PMax cannibalizes branded search)
- Review frequency (default: weekly)

## Workflow

1. **Schedule:** Run weekly, every Monday.
2. **Pull campaign-level performance.** Fetch spend, conversions, conversion value, CPA, ROAS, impression share, and search impression share for each PMax campaign. Compare against the previous week and the 4-week average. Flag any campaign where CPA increased by more than 15% or ROAS dropped by more than 15%.
3. **Analyze asset group performance.** Within each campaign, break performance down by asset group. Identify asset groups that are consuming disproportionate budget relative to their conversion rate — PMax will sometimes funnel spend into a high-impression, low-conversion asset group because it's optimizing for a local maximum rather than the campaign goal.
4. **Review asset performance ratings.** Google rates individual assets (headlines, descriptions, images, videos) as Low, Good, or Best. Extract assets rated "Low" and flag them for replacement. Assets rated "Best" should be studied for patterns — if all top-performing headlines mention a specific feature or benefit, that's a signal for creative direction.
5. **Audit search themes.** Pull the search terms report (where available — PMax reporting is limited but provides category-level insights). Check if the campaign is capturing relevant queries or drifting into irrelevant territory. Recommend new search themes to add or existing ones to remove. Each asset group supports up to 25 search themes — use them to steer the algorithm.
6. **Check for brand cannibalization.** Compare PMax's branded search impressions against dedicated branded Search campaigns. If PMax is capturing a large share of branded queries, those conversions likely would have happened anyway — the PMax campaign is taking credit for cheap branded traffic rather than finding new customers. Recommend brand exclusions at the campaign level if this pattern is significant.
7. **Evaluate audience signals.** Review the audience signals attached to each asset group. Signals are directional hints, not hard restrictions — Google will expand beyond them. If a campaign is performing well, the original signals are working as a starting point. If performance is poor, the signals may be too narrow or too broad. Recommend adjustments.
8. **Generate a weekly report** with: campaign-level performance summary, asset group breakdown with recommendations, underperforming assets to replace, search theme recommendations, brand cannibalization assessment, and overall budget allocation recommendation.
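The week-over-week flagging rule in step 2 can be sketched as a pure function. The dict shape here is an assumption for illustration; real data would come from the ad platform integration:

```python
def flag_campaigns(current: dict, previous: dict, threshold: float = 0.15) -> list:
    """Flag campaigns whose CPA rose or ROAS fell by more than the threshold.

    current/previous map campaign name -> {"cpa": float, "roas": float}.
    """
    flagged = []
    for name, now in current.items():
        before = previous.get(name)
        if not before:
            continue  # new campaign: no baseline to compare against
        if before["cpa"] > 0 and (now["cpa"] - before["cpa"]) / before["cpa"] > threshold:
            flagged.append((name, "CPA up"))
        elif before["roas"] > 0 and (before["roas"] - now["roas"]) / before["roas"] > threshold:
            flagged.append((name, "ROAS down"))
    return flagged
```

The same comparison runs twice in practice — once against last week, once against the 4-week average — so false alarms from a single noisy week are easier to spot.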

## Error Handling

- PMax reporting is intentionally limited by Google. If specific metrics aren't available (e.g., placement-level data), note the gap and work with what's available rather than making assumptions.
- Do not evaluate PMax performance during the learning period (typically 6-8 weeks for new campaigns, 1-2 weeks after significant changes). Making adjustments during learning disrupts the algorithm.
- If a PMax campaign has fewer than 30 conversions in the evaluation period, note that the data is insufficient for reliable analysis. Recommend either extending the evaluation window or increasing budget.
- Watch for the "PMax steals from Search" pattern: if a standard Search campaign's impression share drops when PMax launches, the two campaigns are likely competing. Recommend restructuring.

4. Competitive Intelligence: Meta Ads Library Analysis

Competitive intelligence is one of the highest-leverage activities in paid media — and one of the most neglected, because it's tedious. Manually browsing the Meta Ads Library, scrolling through dozens of ads, trying to spot patterns in messaging, offers, and creative formats — it's the kind of work that gets skipped when deadlines are tight. This skill automates the entire process: scraping a competitor's active ads, categorizing their strategy, and producing a report that's ready for a creative brainstorm or strategy meeting. On Hyper, the built-in Meta Ads Library scraper handles extraction natively.

---
name: competitive-intelligence-meta-ads
description: "Scrape and analyze a competitor's latest ads from the Meta Ads Library. Use when researching competitor advertising, building a swipe file, understanding a competitor's messaging strategy, or preparing for a creative brainstorm with competitive context."
metadata:
  version: 1.1.0
  integrations:
    scraping: ["Hyper (native — built-in Meta Ads Library scraper)", "Browser automation"]
---

# Competitive Intelligence: Meta Ads Library Analysis

Systematically scrape a competitor's active ads from the Meta Ads Library, categorize their strategy, and produce a structured competitive intelligence report.

## Before Starting

Confirm these inputs (ask if not provided):
- Competitor's Facebook Page URL (e.g., https://www.facebook.com/competitorinc)
- Competitor's name (used in file naming and report headers)
- Time window to analyze (default: last 7 days)
- Any specific focus areas (e.g., "only video ads" or "only ads linking to pricing pages")

## Workflow

1. **Navigate** to the Meta Ads Library filtered for the competitor. URL format: https://www.facebook.com/ads/library/?active_status=all&ad_type=all&country=ALL&q=[PAGE_NAME]&sort_data[direction]=desc&sort_data[mode]=relevancy_monthly_grouped
2. **Filter** to "Active" ads within the configured time window.
3. **Scrape all ads.** Scroll through the results, waiting 2 seconds between scrolls for content to load. Stop when 3 consecutive scrolls produce no new ads. For each ad, extract: primary text, image/video thumbnail URL, headline, CTA text, destination URL, and approximate launch date.
4. **Save creative assets** locally. Use a consistent naming convention: [competitor]_[YYYY-MM-DD]_[sequential_number].jpg.
5. **Categorize each ad** along three dimensions:
    - **Core Offer:** discount, free trial, webinar, content download, direct purchase, or brand awareness
    - **Messaging Angle:** pain point, benefit highlight, social proof, urgency/scarcity, comparison, or educational
    - **Inferred Audience:** based on language, imagery, and where the landing page sits in the funnel
6. **Identify patterns.** Look beyond individual ads: Are they running multiple angles for the same offer (a sign of creative testing)? Increasing or decreasing creative volume? Running sequential campaigns that tell a story?
7. **Generate the report** as competitor_report_[competitor_name]_[date].md:
    - Executive summary of their current advertising strategy (2-3 paragraphs)
    - Volume metrics: total active ads, new ads in the time window, image vs. video ratio
    - Table of all ads: Creative Link | Headline | Offer Type | Messaging Angle | Audience | Destination URL
    - Strategic observations and recommended counter-positioning for your team
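The scroll-until-stable rule in step 3 is easiest to express when decoupled from the browser. A sketch where `scroll_and_count` is a hypothetical callable wrapping whatever browser automation layer is in use (it performs one scroll, waits for content to load, and returns the total number of ad cards now visible):

```python
def scrape_until_stable(scroll_and_count, max_scrolls: int = 100,
                        stable_limit: int = 3) -> int:
    """Scroll until `stable_limit` consecutive scrolls yield no new ads."""
    last_count, stable = 0, 0
    for _ in range(max_scrolls):
        count = scroll_and_count()
        if count == last_count:
            stable += 1
            if stable >= stable_limit:
                break  # three quiet scrolls: assume we've reached the end
        else:
            stable = 0  # new ads appeared; reset the counter
        last_count = count
    return last_count
```

The `max_scrolls` cap is a safety valve: infinite-scroll pages that keep loading should not keep the agent scrolling forever.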

## Error Handling

- If the Meta Ads Library UI changes and element selectors break, stop and report the issue. A partial scrape with incorrect data is worse than no scrape.
- If the library returns no results, verify the page URL is correct before reporting.
- If an individual ad's creative fails to save, log the failure and continue — don't block the entire report over one missing image.

5. Ad Creative Testing at Scale

Creative testing at scale is the single biggest lever in paid media performance — and the single biggest bottleneck. The math is simple: more creative variations means more data on what resonates, which means faster optimization. But producing 27 ad variations by hand — writing unique copy for each, generating images, uploading them, naming them consistently — takes hours. Most teams test 3-5 variations when they should be testing 20-30. This skill takes a creative matrix and produces every combination automatically, ready to launch. On Hyper, the entire workflow — from brand-aware image generation to campaign upload — happens natively without external APIs.

---
name: ad-creative-testing-at-scale
description: "Generate and launch a full set of Meta Ads from a creative matrix of topics, personas, and visual styles. Use for large-scale A/B testing, creative iteration, campaign launches, or systematic ad creative exploration across multiple messaging angles."
metadata:
  version: 1.0.0
  integrations:
    ad_platforms: ["Hyper (native)", "Hyper MCP", "Custom ad platform MCP"]
    creative: ["Hyper (native — brand-aware, generates from URL/brief)", "Flux API", "Gemini Imagen API"]
---

# Ad Creative Testing at Scale

Take a creative matrix (topics x personas x visual styles) and generate every combination as a complete Meta ad — copy, image, and upload. This approach replaces the manual process of writing and designing dozens of ad variations by hand, which is the main bottleneck in creative testing at scale.

## Before Starting

Confirm these inputs (ask if not provided):
- Path to creative_matrix.txt or the matrix content directly
- Meta Ads Campaign ID and Ad Set ID
- Brand voice guidelines, reference copy, or website URL for brand extraction
- Maximum number of ads to create as a safety limit (default: 27)

## Workflow

1. **Parse the creative matrix.** Read creative_matrix.txt. Expected format uses section headers:
    --TOPICS--
    AI-powered automation
    Save 10 hours a week
    Reduce human error
    --PERSONAS--
    Marketing Manager
    Agency Owner
    Solo Founder
    --STYLES--
    Minimalist typography on dark background
    Abstract tech illustration
    Lifestyle photo with text overlay
2. **Calculate all combinations** (Topic x Persona x Style). Display the total count. If it exceeds the safety limit, ask the user to either raise the limit or trim the matrix. Launching 200 ads by accident is expensive and clutters the ad account — this confirmation step prevents it.
3. **For each combination, generate the ad:**
    - **Copy:** Write a headline (max 40 chars) and primary text (max 125 chars) tailored to the specific combination. The headline should address the persona's pain point directly. The primary text should frame the topic as the solution. Each variation should feel distinct — if they all read like the same ad with a swapped noun, the test won't reveal anything useful.
    - **Image:** Generate using the configured creative integration. On Hyper, images are generated on-brand using your logo, colors, and design system — with no text in the image (Meta penalizes text-heavy images). With external APIs, construct a prompt specifying the visual style, brand colors (if known), and the no-text constraint. Save locally.
    - **Upload** to the ad platform with the headline, primary text, and image.
4. **Add all creatives to the campaign** in a loop.
5. **Generate a tracking CSV** mapping each ad to its Topic, Persona, Style, Headline, and Creative ID. Without this, analyzing results later requires manually cross-referencing dozens of ads.
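The matrix parsing and combination math in steps 1-2 can be sketched directly. The section-header format follows the example above; `parse_matrix` and `combinations` are illustrative names, not part of any SDK:

```python
from itertools import product

def parse_matrix(text: str) -> dict:
    """Parse the --TOPICS--/--PERSONAS--/--STYLES-- section format."""
    sections, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("--") and line.endswith("--"):
            current = line.strip("-").lower()  # "--TOPICS--" -> "topics"
            sections[current] = []
        elif current:
            sections[current].append(line)
    return sections

def combinations(matrix: dict, limit: int = 27) -> list:
    """Cartesian product of topics x personas x styles, capped by the safety limit."""
    combos = list(product(matrix["topics"], matrix["personas"], matrix["styles"]))
    if len(combos) > limit:
        raise ValueError(f"{len(combos)} combinations exceed the safety limit of {limit}")
    return combos
```

Raising on the limit (rather than silently truncating) forces the confirmation step the workflow calls for before anything is launched.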

## Error Handling

- If image generation fails, log the error, skip that combination, and continue. Note all skipped combinations in the final summary.
- If Meta rejects a creative (e.g., text policy violation), log the rejection reason and continue. Don't halt the entire batch over one rejected ad.
- Name each creative systematically: [Topic_short]_[Persona_short]_[Style_short]_[Date]. This makes the Ads Manager readable when you have 27+ creatives.
- Before creating each ad, check if one with the same name already exists. Skip duplicates.

6. Paid Media: Budget Pacing & Management

Budget pacing is one of those problems that sounds simple and isn't. Ad platform "daily budgets" are soft limits — Meta can overshoot by 20% on any given day, and Google's daily spend can vary even more widely (they guarantee monthly spend within a margin, not daily). Over a 30-day month, these fluctuations compound. Without active management, you'll either run out of budget with 8 days left or finish the month with 15% unspent — both of which leave money on the table. This skill checks pacing every morning and makes small adjustments that keep spend on track, accelerating toward month-end when there's room and pulling back when you're ahead.

---
name: paid-media-budget-pacing
description: "Keep monthly ad spend on track by calculating pacing ratios and adjusting daily campaign budgets. Use when automating budget management, preventing overspending, ensuring campaigns pace correctly through the month, or optimizing budget utilization toward end-of-month."
metadata:
  version: 1.1.0
  integrations:
    ad_platforms: ["Hyper (native)", "Hyper MCP", "Custom ad platform MCP"]
---

# Paid Media: Budget Pacing & Management

Prevent over- or under-spending of the monthly advertising budget by making small, daily adjustments to campaign daily budgets. The key insight is that ad platform "daily budgets" are soft limits — actual spend can vary by 20%+ on any given day, so pacing needs active management rather than set-and-forget.

## Before Starting

Confirm these parameters (ask if not provided):
- Total monthly budget (e.g., $10,000)
- Pacing tolerance (default: 5% — act if spend deviates by more than this from ideal pace)
- Maximum budget adjustment per campaign per day (default: 10%)
- Minimum daily budget floor per campaign (default: $10 — going lower effectively pauses the campaign)
- Maximum daily budget ceiling per campaign (default: 2x the original daily budget — prevents runaway spend)
- Priority metric for budget increases (default: ROAS)

## Workflow

1. **Schedule:** Run daily at 7:00 AM.
2. **Calculate pacing:**
    - Ideal Daily Pace = Total Budget / Total Days in Month
    - Ideal MTD Spend = Ideal Daily Pace * Days Passed
    - Pacing Ratio = Actual MTD Spend / Ideal MTD Spend
    - A ratio of 1.0 means perfectly on pace. Above 1.0 means over-spending. Below 1.0 means under-spending.
3. **Fetch spend data.** Get MTD spend and ROAS broken down by campaign.
4. **Adjust based on pacing:**
    - **Over-pacing (ratio > 1 + tolerance):** Identify the top 3 campaigns by spend. Decrease their daily budget by the configured adjustment percentage. Never go below the floor — a $2/day budget doesn't generate meaningful data, it just wastes the minimum spend.
    - **Under-pacing (ratio < 1 - tolerance):** Identify the top 3 campaigns by ROAS. Increase their daily budget by the configured adjustment percentage. Never exceed the ceiling. Prioritizing by ROAS ensures extra budget goes to what's already working, not spread equally across underperformers.
    - **On-pace:** Log "Pacing on track" with the current ratio. No action needed.
5. **End-of-month acceleration.** In the last 3 days of the month, double the adjustment percentage. Unused budget at month-end is wasted allocation. If under-pacing by more than 15% with 2 days remaining, flag for manual review — the gap may be too large for automated adjustments alone.
6. **Log every change:** campaign ID, campaign name, old budget, new budget, pacing ratio, and reason. Budget changes without a paper trail are untraceable when something goes wrong.

## Error Handling

- If spend data for a campaign is unavailable, skip it and note the gap in the log.
- If the API returns a rate limit error, wait 60 seconds and retry once.
- On first run, store the original daily budget for each campaign. This becomes the reference point for ceiling calculations across future runs.

7. Google Ads: Negative Keyword Management

Most Google Ads accounts leak 15-30% of their budget on irrelevant search terms. It's the single most common source of wasted spend, and it's entirely preventable. The problem is that reviewing search term reports is tedious — a mature account can generate hundreds of new terms per week, and each one needs to be evaluated for relevance. Media buyers do this manually, usually on Monday mornings, and they inevitably miss things. This skill automates the entire process: pulling the search term report, classifying terms by relevance, auto-negating the obvious waste, and queuing edge cases for human review.

---
name: google-ads-negative-keywords
description: "Analyze Google Ads search term reports to find irrelevant queries wasting budget and add them as negative keywords. Use when reducing wasted ad spend, cleaning up Google Ads accounts, automating negative keyword management, or improving campaign efficiency through search term hygiene."
metadata:
  version: 1.2.0
  integrations:
    ad_platforms: ["Hyper (native)", "Hyper MCP", "Custom ad platform MCP"]
---

# Google Ads: Negative Keyword Management

Continuously improve campaign efficiency by identifying irrelevant search queries that are burning budget without converting, and adding them as negative keywords. This skill automates the cleanup that PPC managers typically do manually every Monday morning.

## Before Starting

Confirm these parameters (ask if not provided):
- Minimum click threshold for a search term to be reviewed (default: 10 clicks with zero conversions)
- Negative keyword match type (default: exact match)
- Application level (default: ad group — more precise than campaign-level negatives because a term that's irrelevant in one ad group might be valuable in another)
- Additional negative patterns beyond the default list
- Review period (default: last 7 days)

## Workflow

1. **Schedule:** Run weekly, every Monday.
2. **Fetch the search term report** for the configured period. Pull: search term, clicks, impressions, conversions, cost, ad group, and campaign.
3. **Filter for waste.** Extract terms where conversions == 0 AND clicks >= the configured threshold. Calculate wasted spend for each term.
4. **Pattern matching.** Check each filtered term against the negative pattern list:
    Default patterns: ["free", "jobs", "how to", "reviews", "cheap", "salary", "reddit", "youtube", "tutorial", "course", "download", "cracked", "torrent", "diy", "internship", "template", "example", "sample"]
    Also flag terms that are clearly unrelated to the account's product/service.
5. **Classify by confidence:**
    - **Auto-negate (high confidence):** Contains a negative pattern and no product-relevant keywords. Add these immediately.
    - **Review queue (medium confidence):** Contains a negative pattern BUT also contains a relevant product keyword. These need human judgment — "free CRM tutorial" might be irrelevant for a paid CRM, but "free trial CRM" is a high-intent query. Save to review_queue_[date].md.
    - **Skip (low confidence):** Zero conversions but no pattern match, and fewer than 20 clicks. These terms may just need more data before judging them.
    This classification matters because blindly negating every zero-conversion term blocks queries that simply haven't converted *yet* — especially for high-consideration B2B products with long sales cycles.
6. **Add negative keywords.** For each auto-negate term, add as exact match at the ad group level.
7. **Check for duplicates** before adding. If the negative already exists, skip it.
8. **Generate a weekly summary:**
    - Total negatives added
    - Estimated spend saved (sum of wasted spend on negated terms)
    - Terms sent to the review queue
    - Top 5 most expensive wasted terms
    - If more than 50 terms qualify for auto-negation in a single week, flag for manual review — this usually indicates a broader targeting problem (wrong match types, too-broad keywords) rather than just a few bad queries.
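The three-bucket classification in step 5 can be sketched as a pure function. The pattern list mirrors the defaults above; `product_keywords` is an assumed input describing the account's offering:

```python
NEGATIVE_PATTERNS = ["free", "jobs", "how to", "reviews", "cheap", "salary",
                     "reddit", "youtube", "tutorial", "course", "download",
                     "cracked", "torrent", "diy", "internship", "template",
                     "example", "sample"]

def classify_term(term: str, clicks: int, conversions: int,
                  product_keywords: list, click_threshold: int = 10) -> str:
    """Return 'auto_negate', 'review', or 'skip' for a search term."""
    if conversions > 0:
        return "skip"  # converting terms are never negated
    term_lower = term.lower()
    has_pattern = any(p in term_lower for p in NEGATIVE_PATTERNS)
    has_product = any(k.lower() in term_lower for k in product_keywords)
    if has_pattern and not has_product and clicks >= click_threshold:
        return "auto_negate"  # high confidence: pattern, no product relevance
    if has_pattern and has_product:
        return "review"  # e.g. "free trial crm": needs human judgment
    return "skip"  # no pattern match, or not enough clicks yet
```

Keeping the confidence logic in one function makes it easy to tune the thresholds without touching the rest of the workflow.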

8. A/B Testing: Statistical Analysis & Optimization

The gap between running A/B tests and drawing valid conclusions from them is enormous. Most marketing teams know they should test — and most teams do it badly. Tests run too long because nobody checks them. Tests get called too early because the first day's data looked promising. Tests with three variations get evaluated without correcting for multiple comparisons, leading to false winners. This skill applies actual statistical rigor to the process: checking minimum sample sizes, enforcing minimum test durations to guard against the novelty effect, applying a proper z-test, and only declaring a winner when the math warrants it.

---
name: ab-testing-statistical-analysis
description: "Monitor running A/B tests and declare a winner when statistical significance is reached. Use when automating A/B test analysis, eliminating bias from test conclusions, ensuring optimal budget allocation, or preventing tests from running too long past significance."
metadata:
  version: 1.1.0
  integrations:
    ad_platforms: ["Hyper (native)", "Hyper MCP", "Custom ad platform MCP"]
    communication: ["Hyper (native)", "Slack MCP"]
---

# A/B Testing: Statistical Analysis & Optimization

Monitor active A/B tests, apply a two-proportion z-test when minimum sample sizes are met, and declare a winner only when the statistics warrant it. The value here is twofold: stopping winning tests from running too long (leaving money on the table) and stopping losing tests from being called too early (wasting the data collected so far).

## Before Starting

Confirm these parameters (ask if not provided):
- Minimum confidence level to declare a winner (default: 95%)
- Minimum conversions per variation before concluding (default: 50 — below this, conversion rates are too noisy to be meaningful)
- Minimum test duration (default: 7 days — this guards against the novelty effect, where new ads temporarily outperform due to freshness rather than quality)
- Action on the losing variation: auto-pause or flag for human review
- Slack channel for notifications (optional)

## Workflow

1. **Schedule:** Run daily.
2. **Identify active tests.** Filter for campaigns containing "A/B Test" or "Test" in the name, or tagged with a test label.
3. **Check eligibility.** For each test:
    - Calculate days since launch. If below the minimum duration, skip it and note how many days remain. Even if the data looks significant early, the novelty effect can reverse results after day 3-4.
    - Fetch performance data for each variation: conversions, impressions, clicks, spend, conversion rate.
    - Check the minimum sample size. If any variation has fewer than the required conversions, skip and note the gap.
4. **Calculate statistical significance.** Apply a two-proportion z-test:
    - p1 = conversions_A / impressions_A
    - p2 = conversions_B / impressions_B
    - p_pooled = (conversions_A + conversions_B) / (impressions_A + impressions_B)
    - z = (p1 - p2) / sqrt(p_pooled * (1 - p_pooled) * (1/impressions_A + 1/impressions_B))
    Convert the z-score to a confidence percentage. Also calculate lift: (winner_rate - loser_rate) / loser_rate * 100.
5. **If significance is reached:**
    - Pause the losing variation (if auto-pause is configured).
    - Reallocate the loser's budget to the winner.
    - Send a Slack notification with: test name, winner, loser, confidence level, lift percentage, total test spend, and the action taken.
6. **If significance is not reached:** Log the current confidence level and lift. Don't take action — premature conclusions waste the entire test investment.
7. **Archive all concluded tests** to test_results_log.md with: date, test name, winner, loser, confidence, lift, duration, and total spend. This historical log becomes invaluable for understanding what types of creative consistently win.
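The z-test in step 4 needs nothing beyond the standard library. A sketch, with the confidence conversion done via the two-tailed normal CDF (`math.erf`):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, imp_a: int, conv_b: int, imp_b: int):
    """Return (z, confidence) for the difference between two conversion rates."""
    p1, p2 = conv_a / imp_a, conv_b / imp_b
    p_pooled = (conv_a + conv_b) / (imp_a + imp_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / imp_a + 1 / imp_b))
    z = (p1 - p2) / se
    confidence = erf(abs(z) / sqrt(2))  # P(|Z| < z), two-tailed
    return z, confidence

def lift(winner_rate: float, loser_rate: float) -> float:
    """Relative improvement of the winner over the loser, in percent."""
    return (winner_rate - loser_rate) / loser_rate * 100
```

A test with 120 conversions on 1,000 impressions vs. 80 on 1,000 yields z ≈ 2.98 and confidence above 99% — above the 95% default, so a winner can be declared.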

## Error Handling

- For tests with more than 2 variations, perform pairwise comparisons and apply a Bonferroni correction: divide the significance level (alpha, i.e., 1 minus the target confidence) by the number of pairwise comparisons. At 95% confidence with 3 comparisons, each test must clear alpha = 0.05/3 ≈ 0.0167. Without this correction, you'll declare false winners at a rate that scales with the number of variations.
- If both variations have fewer than 25 conversions after 14 days, flag the test as "insufficient traffic" and recommend increasing budget or broadening targeting. The test design is the problem, not the creative.
- If a test has been running for fewer than 7 days and the newer variation is winning, add a warning: "Possible novelty effect — consider extending."

9. Business Intelligence: Cross-Platform Media Dashboard

Most marketing teams run ads on at least two platforms — typically Google and Meta, often with LinkedIn, TikTok, or Pinterest in the mix. Each platform has its own reporting interface, its own metrics definitions, and its own attribution model. Comparing performance across platforms means exporting CSVs, normalizing column names, reconciling attribution windows, and building a spreadsheet that's already outdated by the time you finish it. This skill automates the entire process and produces a unified business intelligence dashboard — with generative visualizations, trend analysis, and budget reallocation recommendations — that answers the question every CMO asks: "Where should we put the next dollar?"

---
name: business-intelligence-media-dashboard
description: "Pull ad performance data from multiple platforms, normalize metrics, and generate a unified cross-platform media buying dashboard with generative visualizations. Use when comparing Google Ads vs Meta Ads performance, building unified advertising dashboards, allocating budget across platforms, or preparing cross-channel media reports for stakeholders."
metadata:
  version: 1.0.0
  integrations:
    ad_platforms: ["Hyper (native — with generative dashboards)", "Hyper MCP", "Custom ad platform MCP"]
    databases: ["Hyper (native — SQL access for joining with CRM/revenue data)", "Manual CSV export"]
---

# Business Intelligence: Cross-Platform Media Dashboard

Aggregate advertising data from Google Ads, Meta Ads, and any other connected platforms into a single normalized dashboard. On Hyper, this skill can join ad platform data with CRM, product, and revenue data from connected databases to produce true business intelligence — not just platform metrics, but marketing's actual contribution to revenue. The goal is to enable apples-to-apples comparison across platforms so budget allocation decisions are based on actual performance, not platform-specific vanity metrics.

## Before Starting

Confirm these parameters (ask if not provided):
- Platforms to include (default: Google Ads and Meta Ads)
- Reporting period (default: last 30 days)
- Primary KPI for comparison (default: CPA for lead gen, ROAS for e-commerce)
- Total budget across all platforms
- Currency (for accounts in different currencies)
- Database connection for revenue/CRM data (optional — enables true ROI calculation)
- How to handle attribution differences (default: note them, don't try to reconcile)

## Workflow

1. **Fetch data from each platform.** For each platform, pull: total spend, impressions, clicks, CTR, conversions, conversion value, CPA, ROAS, and CPM. Get both aggregate totals and campaign-level breakdowns.
2. **Join with business data (if database access is available).** Run SQL queries to match ad platform conversions with actual revenue, customer LTV, or pipeline value. This transforms the dashboard from "how many conversions did we get" to "how much revenue did each platform actually generate" — a fundamentally more useful metric for budget allocation.
3. **Normalize metrics.** This is the hardest part and where most cross-platform reports fail:
    - **CTR:** Calculate consistently as clicks / impressions. Some platforms include different click types in their default CTR.
    - **Conversions:** Note each platform's default attribution window (Google: 30-day click; Meta: 7-day click, 1-day view). Don't sum conversions across platforms without this caveat — you'll double-count users who saw a Meta ad and then clicked a Google ad.
    - **CPA and ROAS:** Calculate from the normalized spend and conversion figures. Present platform-specific values alongside the cross-platform calculation.
    - **CPM:** Standardize to cost per 1,000 impressions across all platforms for reach efficiency comparison.
4. **Calculate platform efficiency scores.** For each platform, compute: share of total spend, share of total conversions, and the ratio between them. A platform with 30% of spend generating 50% of conversions is outperforming. A platform with 40% of spend generating 15% of conversions needs scrutiny.
5. **Identify budget reallocation opportunities.** Flag cases where: one platform's CPA is more than 50% higher than another for similar audiences, one platform has declining ROAS over the last 4 weeks while another is improving, or one platform is at capacity (high frequency, declining CTR) while another has room to scale.
6. **Generate the dashboard** as an interactive HTML file with generative charts and inline CSS:
    - Executive summary with the single most important finding
    - Platform comparison table: Platform | Spend | Impressions | Clicks | Conversions | CPA | ROAS | CPM
    - Efficiency chart: spend share vs. conversion share per platform
    - Trend analysis: 4-week rolling CPA and ROAS by platform
    - Revenue attribution breakdown (if database data is available)
    - Budget reallocation recommendation with specific dollar amounts
    - Attribution caveat section explaining why numbers might differ from platform-native reporting
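Steps 3 and 4 reduce to straightforward arithmetic once spend and conversions are in a single currency. A minimal sketch, with an illustrative per-platform input shape (the attribution caveat from step 3 still applies to any cross-platform totals):

```python
def platform_scorecard(platforms):
    """Normalize core metrics and compute efficiency scores per platform.

    `platforms` maps name -> dict with spend, impressions, clicks,
    conversions, and conversion_value, all in one currency. Attribution
    windows still differ per platform, so totals carry that caveat.
    """
    total_spend = sum(p["spend"] for p in platforms.values())
    total_conv = sum(p["conversions"] for p in platforms.values())
    rows = {}
    for name, p in platforms.items():
        ctr = p["clicks"] / p["impressions"]        # consistent clicks / impressions
        cpa = p["spend"] / p["conversions"] if p["conversions"] else None
        roas = p["conversion_value"] / p["spend"]
        cpm = p["spend"] / p["impressions"] * 1000  # cost per 1,000 impressions
        spend_share = p["spend"] / total_spend
        conv_share = p["conversions"] / total_conv
        rows[name] = {
            "ctr": round(ctr, 4),
            "cpa": round(cpa, 2) if cpa else None,
            "roas": round(roas, 2),
            "cpm": round(cpm, 2),
            "spend_share": round(spend_share, 3),
            "conversion_share": round(conv_share, 3),
            # >1.0: the platform converts more than its spend share predicts
            "efficiency": round(conv_share / spend_share, 2),
        }
    return rows
```

A platform with an efficiency score of 1.14 is earning 14% more conversion share than its spend share; one at 0.91 is the candidate for scrutiny in step 5.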

## Error Handling

- If a platform returns no data or API access fails, generate the report with available platforms and note the gap. Never block the entire report over one missing platform.
- If currencies differ, convert to a single currency using the exchange rate from the first day of the reporting period. Note the rate used.
- If conversion events differ across platforms (e.g., "Lead" on Meta vs. "Form Submit" on Google), note this and present both raw and normalized views.

SEO, GEO & Content

Organic search and content marketing are the long game of marketing — slower to show results than paid media, but compounding over time. And the game itself is changing. Traditional SEO — ranking on Page 1 of Google — is now only half the picture. Generative Engine Optimization (GEO) has emerged as an equally important discipline: optimizing your content so that AI-powered search engines like ChatGPT, Perplexity, Claude, and Google's AI Overviews cite and recommend your brand when answering user queries.

The numbers make the shift clear. Traditional search volume is projected to decline 25% this year as users move to AI-powered answer engines. ChatGPT serves 800 million weekly users. Google's AI Overviews reach over 2 billion monthly users. Perplexity processes hundreds of millions of queries monthly. In these interfaces, there is no "Page 1" — there are 2-7 cited sources per answer, and if your brand isn't one of them, you're invisible. Pages optimized for GEO get selected as citations 3x more often than pages optimized only for traditional SEO.

The challenge is that SEO and GEO work is research-intensive and analytically complex. Finding content gaps requires comparing your keyword footprint against competitors across thousands of terms. Building effective content briefs means analyzing what's actually ranking, not just guessing at topics. And monitoring AI search visibility requires querying multiple LLMs with hundreds of prompts and tracking how responses change over time — work that's practically impossible to do manually at scale.

On Hyper, SEO and GEO workflows are deeply supported. HyperSEO provides search visibility tracking across Google alongside LLM visibility monitoring across ChatGPT, Claude, Perplexity, and Gemini — showing you not just where you rank on Google, but whether your brand appears when someone asks an AI for recommendations in your category. Hyper also has built-in GEO skills and data pipelines that track AI citation patterns, monitor competitor visibility across LLMs, and produce actionable recommendations for improving your AI search presence. For the DIY approach, tools like DataForSEO and Serper provide the raw SEO data, but the GEO layer — querying LLMs at scale and analyzing citation patterns — is something you'd need to build from scratch.

10. SEO: Content Gap Analysis

Content gap analysis is the foundation of any data-driven content strategy. Instead of guessing which topics to write about, you compare your keyword footprint against a competitor's and find the specific queries where they rank and you don't. The output isn't a vague list of "topic ideas" — it's a set of content briefs with target keywords, search volumes, competitive difficulty, and structured outlines that a writer can execute immediately.

---
name: seo-content-gap-analysis
description: "Identify keywords a competitor ranks for that your site doesn't, and create detailed content briefs to close those gaps. Use for SEO analysis, content strategy, competitive research, or building an editorial calendar based on real ranking data."
metadata:
  version: 1.0.0
  integrations:
    seo_data: ["Hyper (native — HyperSEO)", "DataForSEO API", "Serper API"]
    scraping: ["Hyper (native)", "Browser automation"]
---

# SEO: Content Gap Analysis

Compare your site's keyword rankings against a competitor's to find high-value gaps, then generate content briefs that a writer can execute immediately. The output should be actionable — not a data dump, but a set of specific articles to write with clear reasoning for why each one matters.

## Before Starting

Confirm these inputs (ask if not provided):
- Your domain (your_domain)
- Competitor's domain (competitor_domain)
- Minimum search volume threshold (default: 100 monthly searches)
- Number of content briefs to generate (default: 10)
- Any topics or categories to exclude

## Workflow

1. **Find sitemaps** for both domains. Start with /robots.txt, which typically contains the sitemap URL. If not found, search for site:[domain] filetype:xml. Handle sitemap indexes by recursively extracting all child URLs — many sites split their sitemaps across multiple files.
2. **Extract and filter URLs.** Pull all loc URLs from each sitemap. Filter to content pages only (blog posts, articles, guides). Exclude product pages, tag/category pages, pagination URLs, and author archives — these add noise without adding keyword signal.
3. **Get competitor rankings.** For each competitor URL, find the top 3 ranking keywords. Record: keyword, search volume, ranking position, URL, and keyword difficulty. This gives a real picture of what's actually driving their organic traffic.
4. **Get your rankings** using the same method for your URLs.
5. **Identify gaps.** A "gap" is a keyword where the competitor ranks in the top 10 but your domain doesn't rank in the top 50. Filter for keywords above the minimum search volume threshold. Sort by search volume descending. These are the keywords where creating content has the clearest opportunity — the competitor has proven the keyword converts to traffic, and you're not competing yet.
6. **Generate content briefs.** For the top N gaps, create a separate markdown file per brief:
    - **Target keyword** and monthly search volume
    - **Search intent** (informational, commercial, navigational, transactional) — infer this from what the top-ranking pages actually contain, not from the keyword alone
    - **Proposed H1 title** — keyword-optimized but compelling enough that someone would click
    - **Semantic outline** — H2s and H3s based on what the top 3 ranking articles cover. The outline should feel like a natural article, not a keyword-stuffed skeleton.
    - **Key concepts to cover** — entities, related terms, and questions that appear across top-ranking content
    - **Internal linking suggestions** — pages on your site that should link to and from this new article
    - **Competitive notes** — what the competitor's article does well and where yours can differentiate
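The gap definition in step 5 is easy to get subtly wrong (top 10 vs. top 50, the volume floor, the sort order), so a small sketch helps pin it down. Field names here are illustrative, not any SEO API's response schema:

```python
def find_content_gaps(competitor_rankings, your_rankings, min_volume=100):
    """Keywords where the competitor ranks top 10 and you don't rank top 50.

    Each rankings list holds dicts: {"keyword", "position", "volume", "url"}.
    Returns gap keywords sorted by search volume, highest first.
    """
    your_positions = {r["keyword"]: r["position"] for r in your_rankings}
    gaps = []
    for r in competitor_rankings:
        if r["position"] > 10 or r["volume"] < min_volume:
            continue  # competitor must prove the keyword in the top 10
        if your_positions.get(r["keyword"], 999) <= 50:
            continue  # you already compete for this keyword
        gaps.append(r)
    return sorted(gaps, key=lambda r: r["volume"], reverse=True)
```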

## Error Handling

- **API cost control:** Before running, estimate total API calls based on sitemap size. If the estimated cost exceeds $10, ask for confirmation. DataForSEO bills per call, and a large sitemap can generate thousands of requests.
- **Rate limiting:** Add a 500ms delay between API calls. Slamming the API will get you rate-limited, and retries are slower than pacing.
- **Missing sitemaps:** If a domain has no sitemap, fall back to crawling the main navigation and blog archive pages.
- **Incomplete data:** If the API returns no data for a URL, skip it and move on. Blocking the entire workflow over one missing data point wastes the data you already have.

11. GEO: AI Search Visibility & Optimization

A growing share of your potential customers will never visit Google. They'll ask ChatGPT "what's the best project management tool for remote teams?" or ask Perplexity "which CRM should I use for a 20-person sales team?" — and the AI will recommend 3-5 products. If yours isn't one of them, that customer is gone before you ever had a chance. Traditional SEO tools can't help you here — they track Google rankings, not LLM citations. This skill systematically audits your brand's visibility across AI search engines, maps where competitors are being recommended instead, and produces specific, actionable recommendations for improving your AI search presence through better content structure, entity coverage, and citation-worthiness.

---
name: geo-ai-search-optimization
description: "Audit and optimize brand visibility across AI search engines — ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews. Use when monitoring AI search presence, optimizing for generative engine optimization (GEO), tracking LLM brand mentions, improving AI citations, or analyzing how AI recommends your product vs. competitors."
metadata:
  version: 1.0.0
  integrations:
    geo_monitoring: ["Hyper (native — HyperSEO with built-in LLM visibility tracking)", "Manual LLM querying"]
    scraping: ["Hyper (native)", "Browser automation", "Search APIs"]
---

# GEO: AI Search Visibility & Optimization

Systematically query AI search engines with prompts relevant to your product category, track whether your brand is cited, analyze why competitors are recommended instead, and produce an optimization plan. Unlike traditional SEO audits that check Google rankings, this audit checks whether AI models know about your brand, trust it enough to recommend it, and can find citable content on your site. On Hyper, HyperSEO provides built-in LLM visibility monitoring across ChatGPT, Claude, Perplexity, and Gemini — this skill layers strategic analysis and optimization planning on top of that data.

## Before Starting

Confirm these inputs (ask if not provided):
- Your brand name and primary domain
- Product/service category (e.g., "AI marketing platform," "project management tool," "CRM for SMBs")
- 3-5 primary competitors to track
- Target AI platforms (default: ChatGPT, Perplexity, Claude, Gemini)
- Key use cases or buyer intents to audit (e.g., "best [category] for [audience]," "how to [task your product solves]," "[your category] comparison")

## Workflow

1. **Build the prompt set.** Generate 30-50 prompts that a potential customer might ask an AI when researching your category. Organize into three tiers:
    - **Brand-adjacent (10-15 prompts):** Direct queries where your product should appear — "best [category] tools 2026," "top [category] for [use case]," "[your product] vs [competitor]," "alternatives to [competitor]."
    - **Problem-aware (10-15 prompts):** Questions about the problems your product solves — "how do I automate [task]," "what tools do marketing teams use for [workflow]," "how to reduce [pain point]."
    - **Category-level (10-15 prompts):** Broader industry queries where thought leadership matters — "what is [industry concept]," "best practices for [domain]," "[industry trend] explained."
    Vary phrasing — some formal, some conversational. Include long-tail queries that match real buying behavior.
2. **Query each AI platform.** For every prompt, submit it to each target AI platform and record:
    - Whether your brand is mentioned (yes/no)
    - Position in the response (first recommendation, mentioned in a list, footnote citation, not mentioned)
    - Exact quote of how your brand is described (sentiment and accuracy matter)
    - Which competitors are mentioned instead, and in what position
    - Sources cited by the AI (URLs) — these are the pages the AI considers authoritative
3. **Calculate visibility scores:**
    - **Brand Visibility Rate:** % of prompts where your brand appears, broken down by platform and prompt tier. A brand that appears in 60% of brand-adjacent queries but 5% of problem-aware queries has a content gap, not a brand recognition gap.
    - **Share of Voice:** Across all prompts, how often is your brand mentioned vs. each competitor? Map this on a grid: your visibility vs. competitor visibility by prompt category.
    - **Sentiment Score:** Of the mentions you do receive, how positive/neutral/negative is the AI's description? An AI saying "X is popular but has a steep learning curve" is different from "X is the industry leader."
    - **Citation Source Analysis:** Which of your pages get cited most often? Which competitor pages get cited? This reveals what content AI models consider authoritative.
4. **Diagnose gaps.** For prompts where your brand doesn't appear but competitors do:
    - Check if you have content that directly addresses the query topic. No content = no citation.
    - Check if your existing content is structured for citability: does it lead with clear, specific answers? Does it include statistics, definitions, and expert claims that an AI can extract and quote?
    - Check third-party coverage: AI models heavily weight authoritative third-party sources (review sites, industry publications, comparison articles). If competitors appear on G2, Capterra, and industry blogs but you don't, that's a citation source gap.
    - Check schema markup: proper Organization, Product, FAQ, and HowTo structured data helps AI models understand and categorize your content.
5. **Generate the optimization plan** as geo_audit_[brand]_[date].md:
    - Executive summary: overall AI visibility score, biggest gaps, and the single highest-impact action
    - Platform-by-platform breakdown with visibility rates and sentiment
    - Competitor comparison matrix: Brand | ChatGPT Visibility | Perplexity Visibility | Claude Visibility | Gemini Visibility | Overall Share of Voice
    - Top 10 "missing citation" opportunities — prompts with high buyer intent where competitors appear and you don't, ranked by estimated impact
    - Content recommendations: specific pages to create or restructure for each gap, with guidance on structure (lead with the answer, include quotable statistics, use FAQ schema)
    - Third-party coverage recommendations: publications, review platforms, and directories to target for earned mentions
    - Technical recommendations: schema markup additions, content freshness updates, internal linking improvements
6. **Establish a baseline.** Save the full audit data as a timestamped JSON file for tracking changes over time. AI search visibility changes as models update — monthly re-audits reveal whether optimizations are working.
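The scores in step 3 can be computed from a flat log of (prompt, platform) runs. A minimal sketch with an assumed result shape; a real audit would also carry position and sentiment per mention:

```python
from collections import defaultdict

def visibility_scores(results, brand, competitors):
    """Summarize AI-search audit results into visibility metrics.

    `results` is a list of dicts, one per (prompt, platform) run:
    {"platform", "tier", "mentioned_brands": [...]}. Returns the brand
    visibility rate per (platform, tier) and overall share of voice.
    """
    seen = defaultdict(lambda: [0, 0])  # (platform, tier) -> [hits, total]
    voice = defaultdict(int)
    for r in results:
        key = (r["platform"], r["tier"])
        seen[key][1] += 1
        if brand in r["mentioned_brands"]:
            seen[key][0] += 1
        for name in [brand] + competitors:
            if name in r["mentioned_brands"]:
                voice[name] += 1
    rates = {k: round(hits / total, 2) for k, (hits, total) in seen.items()}
    total_mentions = sum(voice.values()) or 1
    share = {name: round(n / total_mentions, 2) for name, n in voice.items()}
    return rates, share
```

Breaking rates out by tier is what surfaces the diagnosis in step 3: strong brand-adjacent visibility with weak problem-aware visibility points at a content gap, not a brand recognition gap.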

## Error Handling

- AI platform responses are non-deterministic — the same prompt can produce different results on different runs. Run each prompt 3 times per platform and take the majority result as the visibility score, noting any high-variance prompts where results are inconsistent.
- If an AI platform is unavailable or rate-limited, complete the audit with available platforms and note the gap. Don't block the entire report over one platform.
- Some prompts may trigger refusal responses ("I can't recommend specific products"). Mark these as "no recommendation" rather than "brand not visible" — they're a different signal.
- Content freshness matters disproportionately for GEO — AI models tend to favor recently updated content. Note the last-modified date of your key pages alongside citation data.

12. Social Media: Trend-Based Content Creation

The value of social media content is inversely proportional to how long it takes to produce. A perfectly crafted LinkedIn post published three days after a trend peaked is worth less than a good-enough post published the morning the trend breaks. The bottleneck isn't writing quality — it's the lag between awareness and action. This skill monitors for relevant trends, evaluates them against your brand, and produces draft posts within minutes. A human still reviews and publishes, but the creative grunt work is done.

---
name: social-media-trend-content
description: "Identify trending industry topics and draft on-brand social media posts for review. Use when creating timely social content, building a content calendar from current trends, automating social media drafting, or preparing posts that respond to industry news."
metadata:
  version: 1.0.0
  integrations:
    scraping: ["Hyper (native — Reddit, X, web scrapers)", "Search APIs", "Browser automation"]
    creative: ["Hyper (native — on-brand image generation for social posts)", "Flux API", "Gemini Imagen API"]
---

# Social Media: Trend-Based Content Creation

Scan for trending conversations in a given industry, evaluate relevance to the brand, and draft social media posts for human review. The goal is to reduce the time between "something interesting happened" and "we have a post ready" from hours to minutes — while keeping a human in the approval loop, because auto-posting without review is how brands end up apologizing on Twitter.

## Before Starting

Confirm these inputs (ask if not provided):
- Industry or niche focus (e.g., "AI marketing," "SaaS," "e-commerce")
- Path to brand_context.md — should contain: brand values, tone of voice, key themes, topics to avoid, and 2-3 example posts that nail the voice
- Target platforms (default: LinkedIn and X)
- Number of drafts per run (default: 3-5)

## Workflow

1. **Schedule:** Run daily, ideally before 9 AM so posts can go out during peak engagement windows.
2. **Find trending topics.** Run multiple searches: "trending [industry] news today", "viral [industry] posts this week", "[industry] announcements today". Also search for trending hashtags on the target platforms. Collect at least 10 candidate topics — more is better at this stage since most will be filtered out.
3. **Score each topic** against the brand context on three dimensions:
    - Relevance to brand themes (40% weight)
    - Timeliness — is this happening now, or is it already old news? (30% weight)
    - Engagement potential — does this topic invite discussion or reaction? (30% weight)
    Only proceed with topics scoring 7/10 or higher. Discard anything politically charged, controversial, or on the brand's "avoid" list.
4. **Draft posts** for each qualifying topic:
    - **LinkedIn:** 3-5 sentences. Professional tone. Lead with an insight or contrarian take, not a summary of the news (summaries are what everyone else posts). End with a question or call to discussion. Include 2-3 hashtags.
    - **X:** 1-2 sentences. Punchier and more opinionated. Optionally include a thread hook if the topic warrants depth.
    - Each draft should reflect the brand's unique perspective. The litmus test: if you removed the brand name, would you still know who wrote it?
5. **Save all drafts** to social_posts_for_review.md with: date, source trend, relevance score, LinkedIn version, X version, and suggested posting time based on topic freshness.
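The weighted score in step 3 is simple enough to pin down exactly. A two-line sketch, assuming each dimension is rated on a 0-10 scale:

```python
def score_topic(relevance, timeliness, engagement, threshold=7.0):
    """Weighted topic score on a 0-10 scale; 40/30/30 weights per the workflow.

    Each input is a 0-10 rating. Returns (score, proceed), where `proceed`
    is True only when the weighted score clears the threshold.
    """
    score = 0.4 * relevance + 0.3 * timeliness + 0.3 * engagement
    return round(score, 1), score >= threshold
```

The weighting means a topic can't clear the bar on timeliness and engagement alone: even perfect 10s on both only contribute 6.0, so relevance to brand themes always has a say.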

## Error Handling

- If no topics score 7+, report "No high-relevance trends today" and suggest 2-3 evergreen post ideas drawn from the brand's key themes. A day with no trends isn't a failure — posting irrelevant content to fill a calendar is.
- Re-read brand_context.md before drafting each post. Tone drift happens gradually and is hard to catch in individual posts.
- Never auto-post. This skill produces drafts only. A human reviews and publishes.

Operations & Reporting

The last category — and arguably the highest-leverage for agencies and marketing teams managing multiple clients or brands — is operations. Reporting, lead management, client communication, and the coordination that happens across a dozen tools. This is the work that doesn't feel like marketing but consumes a staggering amount of marketer time. Agency teams routinely spend 30-40% of their week on reporting alone. That's time not spent on strategy, creative, or optimization — the work that actually moves performance.

On Hyper, operational workflows benefit from the platform's persistent memory and scheduling capabilities. An agent can pull performance data from every platform on a schedule and deliver reports automatically — formatted, personalized, and sent via email or Slack. You can create agents with assigned tasks that run on any frequency you need, with full visibility into what ran, what was produced, and what actions were taken.

13. Agency Operations: Lead Nurturing & CRM Automation

For agencies running on GoHighLevel, lead response time is everything. Data consistently shows that leads contacted within 5 minutes convert at dramatically higher rates than leads contacted after an hour. But responding in 5 minutes means having someone available 24/7 — or having an agent handle the initial touchpoint. This skill automates the first contact: qualifying the lead based on form data, sending a personalized welcome email, creating a follow-up task with context for the sales rep, and moving the opportunity through the pipeline. The rep's first conversation is warm and informed instead of cold and generic.

---
name: agency-lead-nurturing-crm
description: "Automatically qualify, nurture, and task new leads when they enter a GoHighLevel pipeline. Use when automating agency workflows, managing leads in GHL, setting up AI-powered client onboarding, or building automated follow-up sequences."
metadata:
  version: 1.0.0
  integrations:
    crm: ["Hyper (native — GHL, HubSpot, and other CRM integrations)", "GoHighLevel MCP"]
    communication: ["Hyper (native)", "Slack MCP"]
---

# Agency Operations: Lead Nurturing & CRM Automation

When a new contact enters a CRM pipeline, automatically qualify them, send a personalized welcome email, create a follow-up task for the assigned sales rep, and update the pipeline stage. The goal is zero-latency lead response — every lead gets contacted within seconds, 24/7, with enough context that the sales rep's first conversation is warm and informed.

## Before Starting

Confirm these parameters (ask if not provided):
- Pipeline ID for new leads
- Stage IDs for "New Leads" and "Contacted"
- User ID of the assigned sales rep
- Custom field ID for the budget field
- Budget threshold for "High-Value Lead" tagging (default: $1,000)
- Welcome email template or key talking points to include

## Workflow

1. **Trigger:** Fired by a webhook when a new contact is created in the specified pipeline.
2. **Authenticate** with the CRM integration.
3. **Retrieve contact details** with the contact ID from the webhook payload. Pull all form submission data, custom fields, source/medium, and referral information.
4. **Qualify the lead:**
    - If the budget custom field exceeds the configured threshold, add the "High-Value Lead" tag. This tag drives downstream routing — reps prioritize these contacts, so accuracy matters.
    - If the lead source is a paid ad, add the "Paid Lead" tag. This helps attribute ROI back to ad spend.
    - Write a brief qualification summary to the contact's notes field so the sales rep has context without clicking through multiple screens.
5. **Send a welcome email.** Reference the contact's name and the specific service or page they inquired about (pull from form data). Generic "thanks for reaching out" emails get ignored — specificity signals that a real person is paying attention, even when it's automated.
6. **Create a follow-up task:**
    - Title: "Follow up with [Contact Name] - [Lead Source]"
    - Due date: 24 hours from now (48 hours for weekend submissions, since calling on Saturday night doesn't convert well)
    - Body: form submission summary, budget info, qualification tags, and 2-3 suggested talking points tailored to what the contact asked about
7. **Update pipeline stage** by moving the opportunity from "New Leads" to "Contacted."
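Steps 4 and 6 amount to a small pure function over the webhook payload. A sketch with illustrative field names (not GoHighLevel's actual schema); the CRM API calls would wrap around this logic:

```python
from datetime import datetime, timedelta

def qualify_lead(contact, budget_threshold=1000):
    """Derive tags and a follow-up due date from webhook contact data.

    `contact` is a dict with optional "budget" and "source" fields plus a
    "submitted_at" datetime. Field names are illustrative assumptions.
    """
    tags = []
    budget = contact.get("budget")
    if budget is not None and budget >= budget_threshold:
        tags.append("High-Value Lead")
    if contact.get("source") == "paid_ad":
        tags.append("Paid Lead")
    submitted = contact["submitted_at"]
    # 24h follow-up window, stretched to 48h for weekend submissions
    # (weekday() returns 5 for Saturday, 6 for Sunday)
    hours = 48 if submitted.weekday() >= 5 else 24
    return tags, submitted + timedelta(hours=hours)
```

Keeping qualification pure like this also makes the duplicate-prevention check in Error Handling easier to test: the side effects (email, task, stage change) happen only after the idempotency check passes.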

## Error Handling

- **Duplicate prevention:** Before sending an email or creating a task, check if one already exists for this contact ID. Webhooks can fire multiple times for the same event, and duplicate welcome emails look broken.
- **Missing budget data:** If the budget field is empty, skip qualification tagging but still send the welcome email and create the task. Not every form captures budget — don't let a missing field block the entire workflow.
- **API failures:** Log the error with the contact ID and the specific step that failed. A partially processed lead (email sent but no task created) is worse than a fully failed one, because the rep doesn't know they need to follow up.

14. Client Reporting: Automated Performance Reports

Client reporting is the single biggest time sink for marketing agencies — and paradoxically, the least differentiated. Every agency pulls the same data from the same platforms, puts it into roughly the same format, and writes roughly the same narrative. The work takes hours per client per month, and it's almost entirely automatable. This skill produces a complete, professional, white-labeled report: data pulled directly from ad platforms, month-over-month changes calculated automatically, an executive summary written in plain English for a non-technical stakeholder, and the whole thing formatted in clean HTML that's ready to send. On Hyper, reports can include generative visualizations and data from connected databases for true business intelligence.

---
name: client-reporting-automation
description: "Create a professional, white-labeled monthly performance report by pulling data from ad platforms and generating written insights. Use when automating client reporting, generating performance summaries for stakeholders, building agency reports, or creating recurring marketing reports."
metadata:
  version: 1.0.0
  integrations:
    ad_platforms: ["Hyper (native — with generative dashboards)", "Hyper MCP", "Custom ad platform MCP"]
    databases: ["Hyper (native — SQL for joining with revenue data)", "Manual CSV"]
    communication: ["Hyper (native — email delivery)", "Slack MCP"]
---

# Client Reporting: Automated Performance Reports

Pull performance data from ad platforms, calculate month-over-month changes, write an executive summary in plain English, and construct a polished HTML report ready to send to a client. On Hyper, reports can pull from connected databases to include revenue attribution, pipeline data, and customer LTV alongside ad metrics — transforming a standard performance report into a true business intelligence deliverable. The report should look like it was assembled by an account manager, not generated by a script — formatting, tone, and insight quality all matter.

## Before Starting

Confirm these inputs (ask if not provided):
- Client name
- Reporting period (e.g., "February 2026")
- Path to client logo file
- Client's target KPIs and thresholds (CPA target, ROAS target, etc.)
- Platforms to include (default: Google Ads and Meta Ads)
- Database connection for revenue/CRM data (optional)
- Path to client_context.md if available (brand colors, historical context, specific goals)
- Distribution preference: save locally, email, or both

## Workflow

1. **Fetch performance data** for the client and period. Metrics: spend, impressions, clicks, CTR, conversions, CPA, ROAS, conversion rate. Break down by: campaign, ad group/ad set, and creative (top 5 each).
2. **Fetch comparison data** for the previous month. If the client has been active for 12+ months, also pull the same month from last year — year-over-year comparison eliminates seasonal noise that month-over-month misses.
3. **Join with business data (if available).** Query connected databases to match ad conversions with actual revenue, deal closures, or customer LTV. This transforms "we generated 150 leads at $45 CPA" into "we generated $280,000 in pipeline at a 6.2x return on ad spend."
4. **Calculate changes** for each metric:
    - Month-over-month (absolute and percentage)
    - Year-over-year if available
    - Performance vs. target KPIs — above or below, and by how much
5. **Write the executive summary** (2-3 paragraphs). This is the most important part of the report. Write for a non-technical business owner — focus on business outcomes (leads generated, revenue attributed, cost efficiency) not platform jargon (CTR, CPM, frequency). Highlight the single most important win and the single biggest challenge. If everything improved, say so directly. If something declined, explain why and what's being done about it.
6. **Write supporting sections:**
    - **Top Wins:** 3 specific achievements backed by numbers
    - **Recommendations:** 3 actionable recommendations for next month, each tied to a data point. "Increase budget on Campaign X" is better than "continue optimizing."
7. **Build the HTML report** with inline CSS and generative charts:
    - Client logo in the header, report period, generation date
    - Executive summary
    - Key Metrics section: large, bold numbers for the 4-5 most important KPIs with MoM indicators (green up-arrows for improvement, red down-arrows for decline)
    - Generative trend charts showing performance over time
    - Platform breakdown tables with consistent formatting
    - Revenue attribution section (if database data is available)
    - Top performers section (best campaigns, best creatives)
    - Wins and recommendations sections
    - Footer with agency branding
    - If client_context.md provides brand colors, use them. Otherwise default to a clean, neutral palette.
8. **Save** as [Client_Name]_Report_[Month]_[Year].html.
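The change calculations in step 4 reduce to a small helper. A minimal sketch in Python, assuming metrics arrive as plain dicts keyed by metric name (the sample figures and field names are illustrative, not from any real account):

```python
def metric_changes(current, previous, targets=None):
    """Compute absolute and percentage deltas for each metric,
    plus variance against target KPIs when targets are provided."""
    changes = {}
    for name, value in current.items():
        prev = previous.get(name)
        row = {"current": value, "previous": prev}
        if prev not in (None, 0):
            row["abs_change"] = value - prev
            row["pct_change"] = round((value - prev) / prev * 100, 1)
        if targets and name in targets:
            # Positive = above target; for cost metrics like CPA, below target is the win.
            row["vs_target_pct"] = round((value - targets[name]) / targets[name] * 100, 1)
        changes[name] = row
    return changes

# Illustrative month-over-month comparison with a CPA target of $45.
march = {"spend": 12500, "conversions": 310, "cpa": 40.3}
february = {"spend": 11800, "conversions": 275, "cpa": 42.9}
deltas = metric_changes(march, february, targets={"cpa": 45.0})
# deltas["conversions"]["pct_change"] → 12.7
```

The same helper works unchanged for the year-over-year comparison in step 2: pass last year's month as `previous`.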

## Error Handling

- If a platform returns no data, mark that section as "Data not available for [Platform]." Never show zeros or blank cells — they look like errors to the client.
- If the reporting period hasn't ended, note "Partial data — covers [start] through [current date]."
- If there's no previous month data for comparison, skip MoM indicators and note "First month of reporting — comparison data will be available next month."

## 15. Figma Ad Creative: Automated Variation Generation

In January 2026, Anthropic published a case study that made the rounds across marketing Twitter and LinkedIn: Austin Lau, a growth marketer at Anthropic who had never opened a terminal in his life, used Claude Code to build a Figma plugin that generates ad creative variations with a single click. A process that previously took 30 minutes per ad dropped to 30 seconds. He built a companion workflow that generates Google Ads responsive search ad copy, validates character limits, and exports upload-ready CSV files — all without writing a line of code.

The story went viral because it crystallized what many marketers had been feeling: the gap between "I wish this existed" and "I can actually build this myself" had collapsed. Commentators at Luminary Lane coined the term "copilot ceiling" — noting that while Claude excelled at creating the ad variations, someone still had to upload them to the ad platform, set targeting parameters, monitor click-through rates, and adjust bids based on performance. The creation bottleneck was solved; the execution bottleneck remained.

This is exactly where the distinction between copilot tools and agentic platforms matters most. The skill below replicates Austin's Figma workflow — any agent running OpenClaw or Claude Code can use it to generate ad creative variations at scale. But on Hyper, the workflow doesn't stop at the Figma export. Hyper's native ad platform integrations mean the agent generates the creatives, uploads them directly to Google Ads or Meta, sets the campaign parameters, and monitors performance — closing the loop that the copilot ceiling leaves open. Where Austin's workflow saves 30 minutes on creative production, the full agentic version saves hours across the entire campaign lifecycle.

---
name: figma-ad-creative-variation-generator
description: "Generate dozens of ad creative variations from a Figma template by swapping headlines, descriptions, and visual elements programmatically. Use when scaling ad creative production, building responsive search ad variations, automating Figma-based ad workflows, or reducing manual copy-paste in design tools."
metadata:
  version: 1.0.0
  integrations:
    design: ["Hyper (native — brand-aware creative generation without Figma)", "Figma API", "Figma MCP"]
    ad_platforms: ["Hyper (native)", "Hyper MCP", "Custom ad platform MCP"]
    export: ["Google Ads CSV upload", "Meta Ads API"]
---

# Figma Ad Creative: Automated Variation Generation

Take a Figma template and a set of copy variations (headlines, descriptions, CTAs) and programmatically generate every permutation as a production-ready ad creative — eliminating the manual tab-switching, copy-pasting, and frame-duplicating that makes ad creative production the biggest time sink in performance marketing.

## Before Starting

1. Check for a .agents/product-marketing-context.md file. If it exists, read it for brand voice, value propositions, and product positioning that should inform copy generation.
2. Confirm these inputs (ask if not provided):
    - Figma file URL or file key containing the ad template(s)
    - Frame names or node IDs for the template frames to use as the base
    - Copy variations source: either a Google Sheets URL, a CSV file path, or a list of headlines and descriptions provided directly
    - Target ad platform and format (e.g., "Google RSA," "Meta Feed + Stories," "LinkedIn Sponsored Content")
    - Character limits per field (Google RSA: 30-char headlines, 90-char descriptions; Meta: 40-char headline, 125-char primary text)
    - Output format: Figma frames only, exported PNGs, upload-ready CSV, or all three
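The character limits above are easy to enforce programmatically before any frames are generated. A minimal sketch, assuming copy arrives as plain lists of strings; the `LIMITS` table mirrors the figures listed above (Meta's 125 is the primary-text limit):

```python
# Platform character limits, as listed in the inputs above.
LIMITS = {
    "google_rsa": {"headline": 30, "description": 90},
    "meta": {"headline": 40, "description": 125},  # 125 = primary text
}

def validate_copy(platform, headlines, descriptions):
    """Flag every copy line that exceeds the platform's limits.
    Returns a list of (field, index, length, limit) violations."""
    limits = LIMITS[platform]
    violations = []
    for i, h in enumerate(headlines):
        if len(h) > limits["headline"]:
            violations.append(("headline", i, len(h), limits["headline"]))
    for i, d in enumerate(descriptions):
        if len(d) > limits["description"]:
            violations.append(("description", i, len(d), limits["description"]))
    return violations

bad = validate_copy(
    "google_rsa",
    ["Short and punchy", "This headline is definitely way too long to fit"],
    ["Fits fine."],
)
# → [("headline", 1, 47, 30)]
```

Running this check before the Figma step means overlong copy is rejected up front rather than truncated mid-generation.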

## Workflow

1. **Connect to Figma** via the Figma API or MCP. Authenticate and verify access to the target file.
2. **Inspect the template frame.** Identify all text layers within the frame — map each to its role (headline, description, CTA, fine print). If the template has multiple aspect ratios (e.g., 1:1, 9:16, 4:5), identify all ratio variants and their corresponding text layers.
3. **Load copy variations.** Parse the source (Sheets, CSV, or direct input). Validate that every headline and description meets the character limits for the target platform. Flag any that exceed limits — truncation produces bad ads. For Google RSA specifically, enforce: 15 headlines at max 30 characters each, 4 descriptions at max 90 characters each.
4. **Generate all permutations.** For each copy variation:
    - Duplicate the template frame in Figma
    - Replace headline text layer content with the new headline
    - Replace description text layer content with the new description
    - If CTA variations are provided, swap those too
    - Name the new frame systematically: [Template]_[Headline_short]_[Variation_number]
    - Repeat across all aspect ratio variants
5. **Validate visually.** After generation, scan each frame for text overflow — if a headline is too long for the text box even within character limits, flag it. Different fonts and sizes mean character limits alone don't guarantee the text fits the design.
6. **Export.** Based on the configured output format:
    - **Figma frames:** Leave all variations in the file, organized in a "Generated Variations" page
    - **PNGs:** Export each frame at 2x resolution. Name files [Platform]_[AspectRatio]_[Variation].png
    - **Google Ads CSV:** Compile headlines and descriptions into a CSV with columns matching Google Ads Editor import format (Campaign, Ad Group, Headline 1-15, Description 1-4). This is the format Austin Lau's /rsa workflow produces — ready for direct upload.
    - **Meta Ads:** If connected to Meta via Hyper or MCP, upload images and copy directly to the specified campaign and ad set. Otherwise, export a zip of assets with a manifest CSV mapping each image to its copy.
7. **Generate a summary report:** Total variations created, any flagged issues (text overflow, character limit violations), and a link to the Figma file or export directory.
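Steps 4 and 6 are mostly permutation bookkeeping. A minimal sketch of the frame-naming scheme and the Google Ads Editor-style CSV export, using the column layout described in step 6; the campaign, ad group, and copy values are placeholders:

```python
import csv
import itertools

def frame_name(template, headline, n):
    """Systematic frame name: [Template]_[Headline_short]_[Variation_number]."""
    short = "".join(c for c in headline if c.isalnum() or c == " ")
    short = short[:20].strip().replace(" ", "-")
    return f"{template}_{short}_{n:03d}"

headlines = ["Ship Faster", "Cut Ad Costs 30%"]
descriptions = ["AI agents that run your campaigns.", "From brief to live ads in minutes."]
ratios = ["1x1", "9x16", "4x5"]

# Step 4: every (headline, description) pair gets a named frame per aspect ratio.
frames = [
    (frame_name(f"PromoQ2_{ratio}", h, i), h, d)
    for i, (h, d) in enumerate(itertools.product(headlines, descriptions), start=1)
    for ratio in ratios
]

# Step 6: compile copy into a Google Ads Editor-style CSV
# (Campaign, Ad Group, Headline 1..N, Description 1..N).
with open("rsa_upload.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Campaign", "Ad Group"]
                    + [f"Headline {i}" for i in range(1, len(headlines) + 1)]
                    + [f"Description {i}" for i in range(1, len(descriptions) + 1)])
    writer.writerow(["Q2 Promo", "Core"] + headlines + descriptions)

print(len(frames))  # 2 headlines x 2 descriptions x 3 ratios = 12 frames
```

The actual frame duplication and text replacement happen through the Figma plugin API or MCP; this sketch only covers the naming and export logic around it.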

## Error Handling

- If Figma API returns a rate limit error, implement exponential backoff starting at 2 seconds. Figma's API has strict rate limits on write operations.
- If a text layer can't be found in the template (renamed or deleted), skip that frame and log the error. Don't silently produce ads with placeholder text.
- If the copy source has more than 50 headline variations, warn the user before generating — 50 headlines across 3 aspect ratios produces 150 frames, which can slow down the Figma file significantly.
- On Hyper: skip the Figma step entirely when possible. Hyper's native creative generation can produce ad variations directly from copy + brand assets, generating platform-ready images without an intermediate design tool step. Use Figma only when the template includes specific design elements (photography, illustrations) that can't be generated.
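The backoff rule in the first bullet can be sketched as a small retry wrapper. `RateLimitError` and the `flaky_write` callable are assumptions for illustration; a real implementation would raise on the Figma API's HTTP 429 responses:

```python
import random
import time

class RateLimitError(Exception):
    """Raised when the API responds with HTTP 429 (rate limited)."""

def with_backoff(call, max_retries=5, base_delay=2.0):
    """Retry a rate-limited call with exponential backoff: 2s, 4s, 8s, ..."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # Out of retries: surface the error.
            # Jitter spreads retries out when many frames are written at once.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))

# Demo: succeeds on the third attempt. base_delay is shrunk to keep the demo fast;
# in production, keep the 2-second default from the rule above.
attempts = {"n": 0}
def flaky_write():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

result = with_backoff(flaky_write, base_delay=0.01)
# → "ok" after two retries
```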

## YouTube Resources for AI Marketing Agents

Seeing these agents in action is worth more than reading about them. These tutorials and demos cover setup, creative generation, and campaign management across different tools:

- Hyper: AI Marketing Agents in Action
- Hyper: AI-Powered Campaign Launch
- Anthropic: How Austin Lau Built a Figma Ad Plugin with Claude Code
- For Beginners: The Ultimate Setup Guide
- For Agencies: GoHighLevel + AI Agent Integration

## OpenClaw and Claude Code vs. Hyper: The Full Comparison

OpenClaw and Claude Code are powerful tools — and for developers who want maximum control and don't mind the infrastructure overhead, they're excellent choices. But for marketing teams that need to move fast, operate at scale, and trust that their tools are secure, the trade-offs point clearly toward a managed platform.

The core difference isn't capability — it's the gap between "possible" and "practical." You can build a PPC monitoring agent with OpenClaw. You can wire up Meta's Marketing API through an MCP and get basic operations working. But doing it well — with the depth of integration, the platform-specific knowledge, the security model, and the reliability that a business depends on — requires significant engineering effort and ongoing maintenance. Hyper provides all of that out of the box.

| Aspect | OpenClaw / Claude Code (DIY) | Hyper (Managed Platform) |
|---|---|---|
| Setup | Significant technical expertise required: terminal, Docker, API key management, server maintenance, and constant updates. | Zero setup. Log in and it works. All maintenance handled for you. |
| Models | OpenClaw: configurable but manual. Claude Code: locked to Claude. You manage API keys and costs per provider. | Model agnostic with intelligent routing. Claude, OpenAI, Gemini, Kimi 2.5, Minimax — route each task to the best model automatically. |
| Architecture | Single-agent execution. One model, one context window, one task at a time. | Sub-agent orchestration. Complex tasks split across parallel agents for faster, cheaper execution. |
| Tasks, cron & triggers | OpenClaw: scheduling and triggers are DIY; many users report missed or inconsistent runs and harder observability — improving over time. Claude Code / Cowork: task features are newer; substantive flows often need your machine or session available. | Production-grade schedules and triggers. Minute-level to monthly cron, weekday rules, webhooks, email and KPI-based events. Server-side execution — no laptop required. Full visibility into queue, history, and agent self-scheduled follow-ups. |
| Security | High risk. You're responsible for securing servers, network, API keys, and defending against prompt injection and tool poisoning. | Enterprise-grade. Process isolation, credential management, and guardrails built in. |
| Ad platform integrations | Via self-built or thin community MCP wrappers with limited API coverage. You manage auth, rate limits, and error handling. | Deep API/SDK-based integrations with Meta, Google, TikTok, LinkedIn, and more. Full Marketing API access, not a subset. |
| Creative generation | Call image APIs manually. No brand awareness, no video, no platform formatting. | All frontier generative models built in — images and video. Brand-aware: uses your logo, colors, and design system. Generates platform-ready creatives from a URL or brief. |
| Data & databases | No built-in database access. You write scripts to query data and pipe it into the agent manually. | Direct SQL access to PostgreSQL, BigQuery, MySQL. Join marketing data with CRM, product, and revenue data natively. |
| Marketing intelligence | General-purpose. You build the marketing expertise by finding and installing skills, and keeping them up to date yourself. | Built-in context layer. Decision frameworks, diagnostic patterns, and platform best practices — maintained and updated automatically. |
| SEO & GEO | Manual setup via DataForSEO, Serper, or other APIs. No LLM visibility tracking. | HyperSEO built in. Google search rankings plus LLM visibility across ChatGPT, Claude, Perplexity, and Gemini. Built-in GEO skills and data. |
| Cost | Seemingly free, but hidden costs: hosting, API fees, developer time, and the cost of a potential security breach. | Transparent subscription pricing. Sub-agent orchestration and model routing keep compute costs efficient. |
| Support | Community-based: GitHub issues, Discord, and self-debugging. | Dedicated support from a team that understands both the AI and the marketing domain. |

## Conclusion

The shift from AI that generates content to AI that executes work is the most significant change in marketing operations since programmatic advertising. The tools are here — open-source agents like OpenClaw and Claude Code for builders who want maximum control, and managed platforms like Hyper for teams that want production-grade marketing automation without the infrastructure burden.

The fifteen skills in this guide are production-grade templates. They represent the kind of workflows that will define the next generation of marketing operations — from PPC monitoring and media buying to GEO and AI search visibility — not because they're theoretical, but because they're running in production today, managing real budgets, real campaigns, and real client relationships.

A common misconception is that your competition is AI. It isn't. Your competition is your competitors using AI. Whether you choose to go all in with Hyper or build your own stack with OpenClaw and Claude Code, the important thing is to start. The gap between teams using agentic AI and teams that aren't is already widening — and it compounds every month.


## References

[1] OpenClaw Official Website. https://openclaw.ai/
[2] Claude Code. https://docs.anthropic.com/en/docs/claude-code
[3] Hyper AI Platform. https://hyperfx.ai
[4] The Agent Skills Directory. https://skills.sh/
[5] GitHub | coreyhaines31/marketingskills. https://github.com/coreyhaines31/marketingskills
[6] CrowdStrike | What Security Teams Need to Know About OpenClaw. https://www.crowdstrike.com/en-us/blog/what-security-teams-need-to-know-about-openclaw-ai-super-agent/
[7] Microsoft | Running OpenClaw Safely. https://www.microsoft.com/en-us/security/blog/2026/02/19/running-openclaw-safely-identity-isolation-runtime-risk/
[8] DataForSEO | API for SEO & PPC Keyword Research. https://dataforseo.com/apis/keyword-data-api
[9] Serper | Google Search API. https://serper.dev/
[10] Anthropic | Claude Models. https://www.anthropic.com/claude
[11] Anthropic | How Anthropic's Growth Marketing Team Cut Ad Creation Time from 30 Minutes to 30 Seconds. https://claude.com/blog/how-anthropic-uses-claude-marketing
[12] Luminary Lane | Anthropic's Own Marketing Team Hit a Ceiling With Claude. https://www.luminarylane.app/blog/anthropic-marketing-team-claude-copilot-ceiling/

Tags: AI Agents · OpenClaw · Paid Ads · Meta Ads · Google Ads · Media Buying · GEO · AI Search · Marketing Automation
