InVideo AI: Future of Ads, or Slop at Scale?

InVideo just dropped a campaign that might be one of the sharpest AI ads to date. Or one of the most controversial.

Not because the ad itself is “good” or “bad.” But because of what it demonstrates.

The premise is simple. A local business wants awareness and local footfall. A single prompt arrives. Then a “creative team” appears on screen. A writer, director, producer, and sound designer. They brainstorm, storyboard, pull assets, debate tone, change direction midstream, swap narrators, land a punchline, and ship a finished promo.

The twist is that the “team” is not human. It is AI agents collaborating in real time.

Some people will see this and think: finally, creativity at the speed of thought. Others will see it and think: here comes manufactured content. At industrial scale.

So let’s unpack what’s actually happening here. Not the hype. Not the fear. The shift.

What this campaign is really showing

On the surface, it’s a product story.

Under the surface, it’s a proof-of-concept for a new production model. Prompt-to-video, orchestrated by role-based agents, pulling from your assets, and iterating like a team would.

That matters because we are crossing a line:

  • Yesterday: AI helped you edit.
  • Today: AI can generate components.
  • Now: AI attempts to run the full production loop. Brief to concept to execution to polish.

If that sounds incremental, it isn’t. The bottleneck in content has never been “ideas.” It has been translation. Turning intent into something shippable, on brand, on time, and fit for a channel.

This is what changes. The translation cost collapses.

The “agents” idea. Why it clicks so hard

Most AI video tooling gets described as features: text-to-video, voiceover, stock replacement, templates.

Agents are a different mental model. They mimic how work gets done.

Instead of one tool trying to be everything, you have multiple role-based systems that divide the labor:

  • Writer: Hook, script, narrative beats
  • Director: Framing, pacing, scene intent
  • Producer: Assets, structure, feasibility, assembly
  • Sound designer: Voice, music cues, timing, emphasis

The output is not just “a video.” It’s a workflow that looks like collaboration.

And that’s why the campaign is sticky. It doesn’t just show a capability. It shows an operating model.

Fast definition. What “AI agents” means in this context

AI agents are role-based AI workers that take responsibility for a portion of the task, coordinate with other roles, and iteratively refine toward a shared goal.

In practical terms, this is orchestration. Task decomposition. Decision loops. And multi-step iteration that feels closer to a real production process than a single prompt and a single output.
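To make the mental model concrete, here is a minimal sketch of role-based agent orchestration: each "agent" owns one slice of the work, and an orchestrator loops them toward a shared draft. All names here (Agent, Draft, run_production_loop, the role functions) are illustrative stand-ins, not InVideo's actual architecture, and the roles are stubbed with plain functions rather than real model calls.

```python
# Illustrative sketch: role-based agents iterating on a shared draft.
# The role functions stand in for model-backed agents.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Draft:
    script: str = ""
    scenes: list = field(default_factory=list)
    audio: str = ""
    notes: list = field(default_factory=list)

@dataclass
class Agent:
    role: str
    act: Callable[[Draft, str], Draft]  # takes (draft, brief), returns revised draft

def writer(draft, brief):
    draft.script = f"Hook + beats for: {brief}"
    return draft

def director(draft, brief):
    # Turn narrative beats into scene intents
    draft.scenes = [f"Scene {i}: {beat}"
                    for i, beat in enumerate(draft.script.split(" + "), 1)]
    return draft

def producer(draft, brief):
    draft.notes.append(f"Assembled {len(draft.scenes)} scenes from available assets")
    return draft

def sound_designer(draft, brief):
    draft.audio = "Voiceover + music cues timed to scenes"
    return draft

def run_production_loop(brief, agents, rounds=2):
    """Each round, every role revises the shared draft:
    task decomposition plus iterative refinement toward one goal."""
    draft = Draft()
    for _ in range(rounds):
        for agent in agents:
            draft = agent.act(draft, brief)
    return draft

team = [Agent("writer", writer), Agent("director", director),
        Agent("producer", producer), Agent("sound_designer", sound_designer)]
promo = run_production_loop("bakery awareness promo", team)
```

The point of the sketch is the shape, not the stubs: one shared artifact, multiple accountable roles, and a loop that refines rather than a single prompt producing a single output.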

In enterprise marketing teams, agentic video tools compress production time while making governance, briefing quality, and brand standards the real constraints.

Why the bakery storyline matters. It’s not about video

The reason this lands is the bakery.

A small business is a stand-in for every team that has historically been excluded from “premium” creative production. Not because they lacked ideas, but because they lacked:

  • Budget
  • Time
  • Specialist talent
  • Access to production infrastructure

If AI production becomes cheap and fast, a new baseline emerges.

Customer expectations tend to move in one direction. Up.

In other industries we’ve seen this pattern repeatedly:

  • Shipping went from weeks to days. Then days to “why isn’t it here tomorrow?”
  • Support went from office hours to 24/7 chat.
  • Information went from gatekept to instant.

Content is heading the same way.

When a local business can generate credible, channel-ready creative quickly, the competitive advantage shifts away from “who can produce” and toward “who can differentiate.”

So is this the future of content. Or a shortcut that kills creativity?

Both outcomes are plausible, because the tool is not the strategy.

Here are the three trajectories I think matter.

1) Creativity gets unlocked for more people

AI reduces the friction between an idea and a first draft. That can empower founders, small teams, educators, non-profits, internal comms teams, and marketers who have always had the brief but not the bandwidth.

If you’ve ever had a good concept die in a doc because production was too heavy, you know how big this is.

The upside version of the future looks like:

  • More experimentation
  • More niche creativity
  • More localized storytelling
  • Faster learning cycles

2) The internet floods with “content wallpaper”

When production becomes cheap, volume spikes. When volume spikes, attention gets harder. When attention gets harder, teams chase what performs. When teams chase what performs, sameness creeps in.

The downside version of the future looks like:

  • Infinite mediocre ads
  • Homogenized pacing and tone
  • Interchangeable visual language
  • “Good enough” content dominating feeds

That’s the fear behind “slop at scale.” Not that content exists. That it becomes meaningless.

3) Premium creative becomes more premium

There is a third outcome that’s often missed.

When baseline production becomes abundant, true differentiation becomes rarer.

Human advantages do not disappear. They concentrate around the things AI struggles with reliably:

  • Strategy and intent. What are we trying to change in the market?
  • Cultural nuance. What does this mean here, with these people?
  • Original point of view. What do we stand for that others don’t?
  • Brand taste. What is “on brand” beyond templates?
  • Ethical judgment. What should we not do even if we can?
  • Lived insight. What’s the human truth behind the message?

In that world, AI does not replace creative leaders. It raises the bar on them.

The practical question every marketing leader needs to answer

People debate whether AI can “replace creatives.” That’s not the operational question.

The operational question is: Where do you want humans to be irreplaceable, and where do you want machines to be fast?

Because if AI handles production, your competitive edge moves to:

  • The quality of your briefs
  • The clarity of your brand system
  • The strength of your POV
  • The governance of your outputs
  • The measurement of creative impact
  • The speed of iteration without brand drift

A simple maturity test you can run this week

If AI can produce at scale, the risk is not “bad videos.” It’s unmanaged systems.

Ask this:

Who owns the continuous loop of prompting, testing, learning, scaling, and deprecating AI-driven creative workflows in your organization?

If the answer is “no one,” you don’t have an AI capability. You have scattered experiments.

My take

Production is getting cheaper. Differentiation is getting harder.

So the real decision is not whether you can generate more content. It’s whether you can scale output without losing taste, brand truth, and accountability.

Is this the future of content, or a shortcut that kills creativity? It depends on who owns the brief, who owns the guardrails, and who is willing to say no.


A few fast answers before you act

Can AI agents replace a creative team?

AI agents can replicate parts of the production workflow and speed up iteration. They do not automatically replace strategy, taste, or cultural judgment. Those still require accountable humans.

What does “prompt-to-video” actually mean?

Prompt-to-video is the ability to turn a single idea into a finished video. Script, scenes, voice, music, edit, and formatting. Without traditional filming or manual timeline editing.

Will this create more generic ads?

It can, especially when teams optimize for speed over differentiation. The antidote is strong briefs, clear brand constraints, and human ownership of taste and intent.

Use vs Integrate: AI Tools That Transform

The pilot phase is over. “Use” loses. “Integrate” wins.

Those who merely use AI will lose. Those who integrate AI will win. The experimentation era produced plenty of impressive demos. Now comes the part that separates winners from tourists. Making AI an operating capability that compounds.

Most organizations are still stuck in tool adoption. A team runs a prompt workshop. Marketing trials a copy generator. Someone adds an “intelligent chatbot” to the website. Useful, yes. Transformational, no.

The real shift is “use vs integrate”. Because the differentiator is not whether you have access to AI. Everyone does. The differentiator is whether you can make AI repeatable, governed, measurable, and finance-credible across workflows that actually move revenue, cost, speed, and quality.

If you want one question to sanity-check your AI maturity, it is this:
Who owns the continuous loop of scouting, testing, learning, scaling, and deprecating AI capabilities across the business?

What “integrating AI” actually means

Integration is not “more prompts”. It is embedding AI into processes, with an operating model around them.

In practice, that means treating AI like infrastructure. Same mindset as data platforms, identity, and analytics. The value comes from making it dependable, safe, reusable, and measurable.

Here is what “AI as infrastructure” looks like when it is real:

  • Data access and permissions that are designed, not improvised. Who can use what data, through which tools, with what audit trail.
  • Human-in-the-loop checkpoints by design. Not because you distrust AI. Because you want predictable outcomes, accountability, and controllable risk.
  • Reusable agent patterns and workflow components. Not one-off pilots that die when the champion changes teams.
  • A measurement layer finance accepts. Clear KPI definitions, baselines, attribution logic, and reporting that stands up in budget conversations.
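Two of the bullets above, human-in-the-loop checkpoints and auditability, can be sketched in a few lines. This is a hedged illustration of the pattern, not any specific product's API; the names (hitl_checkpoint, audit_log) are hypothetical, and the human decision is stubbed as a callable.

```python
# Sketch of a human-in-the-loop checkpoint that gates AI output behind
# a named reviewer and writes an audit trail for every decision.
from datetime import datetime, timezone

audit_log = []  # in production this would be an append-only, queryable store

def hitl_checkpoint(output, reviewer, approve):
    """Pass AI output through a named human reviewer; log the decision."""
    decision = approve(output)  # a human decision, stubbed as a callable here
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "approved": decision,
        "output_preview": output[:80],
    })
    return output if decision else None

# Usage: only approved drafts move downstream, and every decision is traceable.
shipped = hitl_checkpoint("AI-generated ad copy", "brand.lead", lambda o: "ad" in o)
blocked = hitl_checkpoint("off-brand draft", "brand.lead", lambda o: False)
```

The design choice is the point: the checkpoint exists not because you distrust AI, but because an approver, a timestamp, and a retained decision record are what make outputs predictable and accountable at scale.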

This is why the “pilot phase is over”. You do not win by having more pilots. You win by building the machinery that turns pilots into capabilities.

In enterprise operating models, AI advantage comes from repeatable workflow integration with governance and measurement, not from accumulating tool pilots.

The bottleneck is collapsing. But only for companies that operationalize it

A tangible shift is the collapse of specialist bottlenecks.

When tools like Lovable let teams build apps and websites by chatting with AI, the constraint moves. It is no longer “can we build it”. It becomes “can we govern it, integrate it, measure it, and scale it without creating chaos”.

The same applies to performance management. The promise of automated scorecards and KPI insights is not that dashboards look nicer. It is that decision cycles compress. Teams stop arguing about what the number means, and start acting on it.

But again, the differentiator is not whether someone can generate an app or a dashboard once. The differentiator is whether the organization can make it repeatable and governed. That is the gap between AI theatre and AI advantage.

Ownership. The million-dollar question most companies avoid

I still see many organizations framing AI narrowly. Generating ads. Drafting social posts. Bolting a chatbot onto the site.

Those are fine starter use cases. But they dodge the million-dollar question. Who owns AI as an operating capability?

In my view, it requires explicit, business-led accountability, with IT as platform and risk partner. Two ingredients matter most.

  1. A top-down mandate with empowered change management

    Leaders need a shared baseline for what “integration” implies. Otherwise, every initiative becomes another education cycle. Legal and compliance arrive late. Momentum stalls. People get frustrated. Then AI becomes the next “tool rollout” story. This is where the mandate matters. Not as a slogan, but as a decision framework. What is in scope. What is out of scope. Which risks are acceptable. Which are not. What “good” looks like.

  2. A new breed of cross-functional leadership

    Not everyone can do this. You need a leader whose superpower is connecting the dots across business, data, technology, risk, and finance. Not a deep technical expert, but someone with strong technology affinity who asks the right questions, makes trade-offs, and earns credibility with senior stakeholders. This leader must run AI as an operating capability, not a set of tools.

    Back this leader with a tight leadership group that operates as an empowered “AI enablement fusion team”. It spans Business, IT, Legal/Compliance, and Finance, and works in an agile way with shared standards and decision rights. Their job is to move fast through scouting, testing, learning, scaling, and standardizing. They build reusable patterns and measure KPI impact so the organization can stop debating and start compounding.

    If that team does not exist, AI stays fragmented. Every function buys tools. Every team reinvents workflows. Risk accumulates quietly. And the organization never gets the benefits of scale.

AI will automate the mundane. It will transform everything else

Yes, AI will automate mundane tasks. But the bigger shift is transformation of the remaining work.

AI changes what “good” looks like in roles that remain human-led. Strategy becomes faster because research and synthesis compress. Creative becomes more iterative because production costs drop. Operations become more adaptive because exception handling becomes a core capability.

The workforce implication is straightforward. Your advantage will come from people who can direct, verify, and improve AI-enabled workflows. Not from people who treat AI as a toy, or worse, as a threat.

There is no one AI tool to rule them all

There is no single AI tool that solves everything. The smart move is to build an AI tool stack that maps to jobs-to-be-done, then standardize how those tools are used.

Also, not all AI tools are worth your time or your money. Many tools look great in demos and disappoint in day-to-day execution.

So here is a practical way to think about the landscape. A stack, grouped by what the tool does.

One practical AI tool stack, grouped by use case

Foundation models and answer engines

  • ChatGPT: General-purpose AI assistant for reasoning, writing, analysis, and building lightweight workflows through conversation.
  • Claude (Anthropic): General-purpose AI assistant with strong long-form writing and document-oriented workflows.
  • Gemini (Google): Google’s AI assistant for multimodal tasks and deep integration with Google’s ecosystem.
  • Grok (xAI): General-purpose AI assistant positioned around fast conversational help and real-time oriented use cases.
  • Perplexity AI: Answer engine that combines web-style retrieval with concise, citation-forward responses.
  • NotebookLM: Document-grounded assistant that turns your sources into summaries, explanations, and reusable knowledge.
  • Apple Intelligence: On-device and cloud-assisted AI features embedded into Apple operating systems for everyday productivity tasks.

Creative production. Image, video, voice

  • Midjourney: High-quality text-to-image generation focused on stylized, brandable visual outputs.
  • Leonardo AI: Image generation and asset creation geared toward design workflows and production-friendly variations.
  • Runway ML: AI video generation and editing tools for fast content creation and post-production acceleration.
  • HeyGen: Avatar-led video creation for localization, explainers, and synthetic presenter formats.
  • ElevenLabs: AI voice generation and speech synthesis for narration, dubbing, and voice-based experiences.

Workflow automation and agent orchestration

  • Zapier: No-code automation for connecting apps and triggering workflows, increasingly AI-assisted.
  • n8n: Workflow automation with strong flexibility and self-hosting options for technical teams.
  • Gumloop: Drag-and-drop AI automation platform that connects data, apps, and AI into repeatable workflows.
  • YourAtlas: AI sales agent that engages leads via voice, SMS, or chat, qualifies them, and books appointments or routes calls without humans.

Productivity layers and knowledge work

  • Notion AI: AI assistance inside Notion for writing, summarizing, and turning workspace content into usable outputs.
  • Gamma: AI-assisted creation of presentations and documents with fast narrative-to-slides conversion.
  • Granola AI: AI notepad that transcribes your device audio and produces clean meeting notes without a bot joining the call.
  • Buddy Pro AI: Platform that turns your knowledge into an AI expert you can deploy as a 24/7 strategic partner and revenue-generating asset.
  • Revio: AI-powered sales CRM that automates Instagram outreach, scores leads, and provides coaching to convert followers into revenue.
  • Fyxer AI: Inbox assistant that connects to Gmail or Outlook to draft replies in your voice, organize email, and automate follow-ups.

Building software faster. App builders and AI dev tools

  • Lovable: Chat-based app and website builder that turns requirements into working product UI and flows quickly.
  • Cursor AI: AI-native code editor that accelerates coding, refactoring, and understanding codebases with embedded assistants.

Why this video is worth your time

Tool lists are everywhere. What is rare is a ranking based on repeated, operational exposure across real businesses.

Dan Martell frames this in a way I like. He treats tools as ROI instruments, not as shiny objects. He has tested a large number of AI tools across his companies, then sorted them into what is actually worth adopting versus what is hype.

That matters because most teams do not have a tooling problem. They have an integration problem. A “best tools” list only becomes valuable when you connect it to your operating model, your workflows, your governance, and your KPI layer.

The takeaway for digital leaders

If you are a CDO, CIO, CMO, or you run digital transformation in any serious way, here is the practical stance.

  • Stop optimizing for pilots. Start optimizing for capabilities.
  • Decide who owns the continuous loop. Make it explicit. Fund it properly.
  • Build reusable patterns with governance. Measure what finance accepts.
  • Treat tools as interchangeable components. Your real advantage is the operating model that lets you reuse, scale, and improve AI capabilities over time.

That is what “integrate” means. And that is where the winners will be obvious.


A few fast answers before you act

What does “integrating AI” actually mean?

Integrating AI means embedding AI into core workflows with clear ownership, governance, and measurement. It is not about running more pilots or using more tools. It is about making AI repeatable, auditable, and finance-credible across the workflows that drive revenue, cost, speed, and quality.

What is the difference between using AI and integrating AI?

Using AI is ad hoc and tool-led. Teams experiment with prompts, copilots, or point solutions in isolation. Integrating AI is workflow-led. It standardizes data access, controls, reusable patterns, and KPIs so AI outcomes can scale across the organization.

What is the simplest way to test AI maturity in an organization?

Ask who owns the continuous loop of scouting, testing, learning, scaling, and deprecating AI capabilities. If no one owns this end to end, the organization is likely accumulating pilots and tools rather than building an operating capability.

What does “AI as infrastructure” look like in practice?

AI as infrastructure includes standardized access to data, policy-based permissions, auditability, human-in-the-loop checkpoints, reusable workflow components, and a measurement layer that links AI activity to business KPIs.

Why do governance and measurement matter more than AI tools?

Because tools are easy to demo and hard to scale. Governance protects quality and compliance. Measurement protects budgets. Without baselines and attribution that finance trusts, AI remains experimentation instead of an operating advantage.

What KPIs make AI initiatives finance-credible?

Common KPIs include cycle-time reduction, cost-to-serve reduction, conversion uplift, content throughput, quality improvements, and risk reduction. What matters most is agreeing on baselines and attribution logic with finance upfront.

What is a practical first step leaders can take in the next 30 days?

Select one or two revenue or cost workflows. Define the baseline. Introduce human-in-the-loop checkpoints. Instrument measurement. Then standardize the pattern so other teams can reuse it instead of starting from scratch.

CES 2026: Robots, Trifolds, Screenless AI

CES 2026. The signal through the noise

If you want the “CES executive summary,” it looks like this:

  • Health gets quantified hard. A new class of “longevity” devices is trying to become your at-home baseline check. Not a gimmick. A platform.
  • Displays keep mutating. Fold once. Fold twice. Roll. Stretch. The form factor war is back.
  • Robots stop being cute. More products are moving from “demo theatre” to “do a task repeatedly.”
  • Smart home continues its slow merge. Locks, sensors, ecosystems. Less sci-fi. More operational.
  • AI becomes ambient. Not “open app, type prompt.” More “wear it, talk to it, let it see.”

Now the real plot twist. The best AI announcements at CES 2026

CES is not an AI conference, but CES 2026 made one thing obvious: the next interface is not a chat box. It is context. That means cameras, microphones, on-device inference, wearables, robots, and systems that run across devices. That brings us to the most stunning AI announcements from CES 2026.

The 5 AI patterns CES 2026 made impossible to ignore

  1. Physical AI becomes the headline
    Humanoid robots were no longer treated purely as viral content. The narrative moved toward deployment, safety, scaling, and real-world task learning.
  2. Wearable AI is back, but in more plausible clothing
    The “AI pin” era burned trust fast. CES 2026’s response was interesting: build assistants into things people already wear, and give them perception.
  3. “Screenless AI” is not a gimmick. It is a strategy.
    A surprising number of announcements were variations of the same idea: capture context (vision + audio + sensors), infer intent, act proactively, and stay out of the way until needed.
  4. On-device intelligence becomes a product feature, not an engineering detail
    Chips and system software matter again because latency, privacy, and cost matter again. When AI becomes ambient, tolerance for “wait, uploading” goes to zero.
  5. The trust problem is now the product problem
    If devices are “always listening” or “always seeing,” privacy cannot be a settings page. It must be a core UX principle: explicit indicators, on-device processing where possible, clear retention rules, and user control that does not require a PhD.

In consumer technology and enterprise product organizations, CES signals matter less as individual gadgets and more as evidence of where interfaces and trust models are heading next.

Wrap-up. What this means if you build products or brands

CES 2026 made the direction of travel feel unusually clear. The show was not just about smarter gadgets. It was about AI turning into a layer that sits inside everyday objects, quietly capturing context, interpreting intent, and increasingly acting on your behalf. Robots, wearables, health scanners, and “screenless” assistants are all expressions of the same shift: computation moving from apps into environments. The remaining question is not whether this is coming. It is how quickly these experiences become trustworthy, affordable, and normal, and which companies manage to turn CES-grade demos into products people actually keep using.


A few fast answers before you act

What was the real AI signal from CES 2026?

The signal was the shift from “AI features” to AI-native interaction models. Products increasingly behave like agents that act across tasks, contexts, and devices.

Why are robots suddenly back in the conversation?

Robots are a visible wrapper for autonomy. They make the question tangible. Who acts. Under what constraints. With what safety and trust model.

What does “screenless AI” mean in practice?

It means fewer taps and menus, and more intent capture plus action execution. Voice, sensors, and ambient signals become inputs. The system completes tasks across apps and devices.

What is the biggest design challenge in an agent world?

Control and confidence. Users need to understand what the system will do, why it will do it, and how to stop or correct it. Trust UX becomes core UX.

What is the most transferable takeaway?

Design your product and brand for “context as the interface.” Make the rules explicit, keep user control obvious, and treat trust as a first-class feature.