Pomelli Photoshoot: Fast studio-quality assets

Start from a single image of your product and easily create high-quality, customized product shots to elevate your marketing.

A jar in your hand. A whole shoot in your CMS

Start with the most ordinary thing in e-commerce. A single product photo, shot on a desk, held in a hand, good enough for internal approval but nowhere near “campaign-ready”. Then imagine turning that one image into a set of studio and lifestyle shots that look like you planned the lighting, the surface, the props, and the framing.

That is the pitch behind Photoshoot, a feature inside Pomelli from Google Labs: take a basic product image and generate professional-grade marketing imagery fast, without booking a studio for every new variant. “Studio-grade” here means assets that can sit on a PDP or paid social without instantly looking like “placeholder content”.

How Photoshoot turns one product photo into usable marketing imagery

Photoshoot is not just “generate me a nicer background”. It is a guided flow designed to keep output consistent.

  1. Pick a product photo. The input can be imperfect; the tool’s own guidance is explicitly “don’t worry about polish”.
  2. Choose a template. Templates are pre-built shot styles (for example studio or lifestyle) that constrain composition so results do not drift into random aesthetics.
  3. Generate. Pomelli applies your brand aesthetic via its Business DNA, then generates new shots. Business DNA is Pomelli’s saved brand profile derived from your website (voice, fonts, imagery, color palette).
  4. Refine. You iterate with finishing touches, then download assets or store them back into Business DNA for reuse in later campaigns.

Under the hood, Google describes this as combining business context (Business DNA) with Nano Banana image generation to produce the final scenes.

For high-velocity retail and FMCG e-commerce teams shipping new SKUs (Stock Keeping Units) and promos weekly across many markets, this is the shortest path from “we have a product” to “we have compliant, channel-ready variants”.

The real question is whether one approved product shot can produce enough on-brand variants to increase throughput without increasing review drag.

Why it lands. Because it cuts the real friction, not the fun part

Most teams are not blocked on “having ideas”. They are blocked on throughput with consistency: getting enough variants, in enough formats, that still look on-brand, pass review, and do not trigger rework across design, legal, and local markets.

This is why the mechanism matters. Because Photoshoot grounds outputs in Business DNA and constrains composition via templates, the results tend to feel brand-consistent faster, which reduces review churn and makes variant production scalable.

Extractable takeaway: If you want generative creative to survive enterprise review, do not start with infinite freedom. Start with constraints that encode your brand (a reusable brand profile) and your channel rules (shot templates), then let the model fill in the pixels inside that box.

The business intent is blunt. Production leverage for asset variants

“Production leverage” is the multiplier you get when one person-hour produces many more usable assets without multiplying headcount or agency spend. For e-commerce teams, Photoshoot is essentially a variant engine.

  • More PDP (Product Detail Page) imagery coverage without re-shooting every pack change.
  • More paid social iterations without waiting on design queues.
  • Faster seasonal refreshes when the same SKU needs a new context (spring, gifting, back-to-work).
  • A tighter loop between merchandising and creative because the cost of “try another angle” collapses.

Important reality check: you still need governance. Treat outputs like any other marketing asset. Rights, claims, pack accuracy, and local compliance do not disappear just because generation is fast.

Where to try it?

You can access the experience through the Pomelli app on Google Labs.

However, availability is currently limited. Pomelli has been launched as a public beta experiment in the United States, Canada, Australia, and New Zealand (in English).

What to steal for your next asset sprint if the app is available in your region

  • Codify brand constraints first. Build a reusable “brand profile” (fonts, tone, visual rules) before you chase more generations.
  • Template your shots like you template layouts. Decide the 6 to 10 shot types you actually need (hero studio, detail crop, lifestyle context, ingredient cue) and standardize them.
  • Design for review speed. Define what “acceptable” means (pack legibility, logo integrity, claims, background rules), then generate inside those rails.
  • Run a SKU ladder test. Start with 10 SKUs across easy and hard surfaces (glass, reflective, metallic). If it fails there, it will fail at scale.
  • Instrument the pipeline. Track time-to-first-usable, approval rate, and rework causes. That is how you prove leverage, not by “wow, looks nice”.
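The instrumentation step above can be sketched as a few lines of code. This is a hypothetical example, not part of Pomelli: the record fields (`minutes_to_first_usable`, `approved`, `rework_cause`) and sample data are invented purely to show which three numbers the playbook asks you to track.

```python
from statistics import median

# Hypothetical asset records: each dict is one generated variant moving
# through review. Field names and values are illustrative only.
assets = [
    {"sku": "A1", "minutes_to_first_usable": 18, "approved": True,  "rework_cause": None},
    {"sku": "A1", "minutes_to_first_usable": 42, "approved": False, "rework_cause": "pack legibility"},
    {"sku": "B2", "minutes_to_first_usable": 25, "approved": True,  "rework_cause": None},
    {"sku": "C3", "minutes_to_first_usable": 61, "approved": False, "rework_cause": "logo integrity"},
]

def pipeline_metrics(records):
    """Summarize the three numbers the playbook asks for."""
    approval_rate = sum(r["approved"] for r in records) / len(records)
    time_to_first_usable = median(r["minutes_to_first_usable"] for r in records)
    # Count how often each rework cause appears, so fixes can be prioritized.
    causes = {}
    for r in records:
        if r["rework_cause"]:
            causes[r["rework_cause"]] = causes.get(r["rework_cause"], 0) + 1
    return {
        "approval_rate": approval_rate,
        "median_minutes_to_first_usable": time_to_first_usable,
        "rework_causes": causes,
    }

metrics = pipeline_metrics(assets)
```

Reporting a rework-cause breakdown alongside the two headline numbers is what turns “looks nice” into a defensible leverage argument.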

A few fast answers before you act

What is Pomelli Photoshoot, in one sentence?

Pomelli Photoshoot is a feature inside Google Labs’ Pomelli that turns a single product photo into professional-style studio and lifestyle marketing images using brand context and image generation.

What is the mechanic marketers should care about?

You choose a product image, select a curated template (studio or lifestyle), generate variants grounded in your Business DNA, then refine and download or reuse those assets in future campaigns.

What does “Business DNA” actually mean here?

Business DNA is Pomelli’s saved brand profile derived from your website, such as tone of voice, fonts, imagery, and color palette, which Pomelli uses to keep generated outputs consistent.

Where is Pomelli available right now?

Pomelli is in public beta in English in the United States, Canada, Australia, and New Zealand. It is not currently available in Germany.

What is the first safe way to pilot this in an enterprise team?

Pilot it on a small SKU set with strict shot templates and review criteria, then measure approval rate and rework reasons before scaling variant production.

Google Home Mini: Disney Little Golden Books

You start reading a Disney Little Golden Book out loud, and your Google Home joins in. Sound effects land on cue. The soundtrack shifts with the scene. The story feels produced, not just read.

The partnership. Disney storybooks with an audio layer

Google and Disney bring select Disney Little Golden Books to life by letting Google Home add sound effects and soundtracks as the story is read aloud.

How it works. Voice recognition that follows the reader

The feature uses voice recognition to track the pacing of the reader. If you skip ahead or go back, the sound effects adjust accordingly. If you pause reading, ambient music plays until you begin again. Because it can follow your pacing in real time, the audio can land on cue without you triggering effects manually.
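Google has not published how the read-along sync is implemented, but the general pattern is easy to illustrate: align each recognized word to a position in a known script, tolerate skips and backtracks by searching a window around the current position, and fire a cue when a cue-bearing word is reached. Everything below (the script, the cue names, the class) is an invented sketch of that pattern, not the product’s code.

```python
# Illustrative sketch only: aligns recognized speech to a known script
# and fires audio cues by word position. Not Google's implementation.

SCRIPT = "once upon a time a brave mouse set sail across the sea".split()
CUES = {4: "sparkle_sfx", 7: "sail_sfx"}  # word index -> hypothetical cue name

class ReadAlongTracker:
    def __init__(self, script, cues, window=5):
        self.script = script
        self.cues = cues
        self.pos = 0          # index of the next expected word
        self.window = window  # how far to search for skips/backtracks

    def hear(self, word):
        """Match a recognized word near the current position.

        Searching a window on both sides, nearest position first, lets the
        tracker follow a reader who skips ahead or goes back a few words."""
        lo = max(0, self.pos - self.window)
        hi = min(len(self.script), self.pos + self.window + 1)
        for i in sorted(range(lo, hi), key=lambda i: abs(i - self.pos)):
            if self.script[i] == word:
                self.pos = i + 1
                return self.cues.get(i)  # cue tied to that word, if any
        return None  # unmatched word: hold position (e.g. play ambient audio)

tracker = ReadAlongTracker(SCRIPT, CUES)
fired = []
for word in "once upon a time a brave mouse set sail".split():
    cue = tracker.hear(word)
    if cue:
        fired.append(cue)
```

Sorting candidate positions by distance from the expected word is the piece that makes the timing feel natural: the cue lands on the word the reader actually said, even if the reader wandered.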

Why it lands. Produced storytime without a screen

In family living-room media, the win is turning passive reading into a shared, timed audio experience without adding another screen. The listener hears the same beats the reader sees, so the room stays in one moment instead of splitting attention across devices.

Extractable takeaway: When you add an audio layer to an analog ritual, sync it to human pacing rather than button presses, so the experience feels guided while staying hands-free.

The real question is whether the audio layer earns its place by deepening the ritual, not by adding novelty.

This is a strong pattern for smart speakers because it increases interactivity without pulling a family into more screen time.

How you start. One voice command

To activate it, say, “Hey Google, let’s read along with Disney.”

Always listening during the story

Unlike typical commands, the smart speaker’s microphone stays on during the story so the device can follow along and add sound effects in the right moments.

Privacy note in the product promise

To address privacy concerns, Google says it does not store the audio data after the story has been completed.

Where it works

This feature works on Google Home, Home Mini, and Home Max speakers in the US.

What to copy for read-along audio experiences

  • Anchor to a ritual. Start with something people already do, then add audio that fits the habit.
  • Follow the human pace. Track reading speed, pauses, and backtracking so timing feels natural.
  • Keep it screen-free. Make the audio layer the enhancement, not a gateway to another display.
  • State the privacy posture. If the mic stays on, explain clearly what is and is not retained.

A few fast answers before you act

What is “Read along with Disney” on Google Home?

It is a Google and Disney feature that adds sound effects and music to select Disney Little Golden Books while you read aloud.

How does it stay in sync with the reader?

Voice recognition follows the pacing of the reader’s voice and adjusts if you pause, skip ahead, or go back.

How do you start it?

Use the voice command shown in the post, then begin reading the supported book out loud so the speaker can follow along.

What is the key experience detail that makes it feel “produced”?

The audio layer lands on cue as you read, so the story rhythm feels guided without the reader needing to trigger effects manually.

What is the stated privacy promise during the story?

The product promise described here is that audio is used to follow the reading experience and is not kept after the story completes.

Pizza Hut: Pie Tops II

Pizza Hut is the official pizza of the NCAA, whose men’s basketball tournament, known informally as March Madness, is played each spring in the United States.

For last year’s tournament, Pizza Hut created what was billed as the world’s first shoe that could order a pizza. Now, to celebrate its second year as the official pizza of the NCAA, Pizza Hut, Droga5, and the Shoe Surgeon have launched Pie Tops II: a limited-edition high-top shoe that not only uses your geolocation to order the current Pizza Hut deal at the press of a button, but also lets you pause the game while you wait for your delivery.

A TV ad has also been released to highlight the new pause feature of the relaunched Pie Tops.

A sneaker button that behaves like a remote

The mechanism is deliberately simple. Put a single button on the shoe. Tie it to an app. Map the press to two jobs: order, then pause. The shoe becomes a physical shortcut for a very specific March Madness moment, when people want food but do not want to miss play. That works because it removes friction at the exact moment attention is highest.
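The “one control, two jobs” mechanism can be sketched in a few lines. None of the function names below come from Pizza Hut’s app; they are placeholders showing how a single physical press dispatches an ordered pair of actions.

```python
# Hypothetical sketch of the "one button, two jobs" pattern.
# Function names and behavior are invented for illustration.

def order_current_deal(location):
    # In the real product this would hit a delivery API using geolocation.
    return f"ordered current deal near {location}"

def pause_game():
    # In the real product this would signal the TV or streaming app.
    return "game paused"

def on_button_press(location):
    """One press, two outcomes: utility first, then viewer control."""
    return [order_current_deal(location), pause_game()]

events = on_button_press("Wichita, KS")
```

The design choice worth noting is the fixed ordering: the press always does the utility job (order) before the delight job (pause), so the interaction stays predictable even though it produces two effects.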

In second-screen sports viewing, the strongest interactions reduce interruption while keeping attention on the live game.

Why it lands on game day

Pie Tops II works because it converts a familiar tension into a prop. Hunger versus attention. Convenience versus FOMO. The “pause” feature turns a delivery problem into a punchline, and the shoe format makes the whole thing instantly tellable.

Extractable takeaway: If you can turn a high-frequency habit into a one-action ritual, you make the brand feel like part of the event, not just an ad around it.

The real intent behind the novelty

This is not really about footwear. The real question is how Pizza Hut earns a place inside the live ritual instead of advertising around it. It is about owning a behavior loop during March Madness: a repeatable sequence of trigger, action, and reward that keeps the brand attached to the moment, through pizza ordering, deal recall, and a reason to talk about Pizza Hut in the same breath as the game. The smart move is not the gadget but the way it turns brand utility into event behavior. Limited-edition scarcity does the rest, because it makes the product itself a piece of shareable culture.

What brands can steal from Pie Tops II

  • Pick one moment to own: design for a specific tension that happens repeatedly during an event, not for “sports fans” in general.
  • One control, two outcomes: a single action that triggers both utility and delight is more memorable than a complex feature list.
  • Make the object do the storytelling: the product should explain the campaign in one sentence, even without a logo.
  • Build viewer control into the idea: letting people keep the game in their hands makes the brand feel helpful, not interruptive.
  • Scarcity as distribution: limited runs can function like media spend when the object is inherently talkable.

A few fast answers before you act

What are Pie Tops II?

They are limited-edition Pizza Hut sneakers designed for March Madness that let you order pizza via a button press and, as described, pause the game while you wait for delivery.

What problem is this campaign solving?

It dramatizes a familiar game-day problem. People want food without missing play. The stunt turns that tension into a memorable product feature and a shareable story.

Why does the “pause” feature matter more than the pizza-ordering feature?

Ordering is convenient. Pausing is emotionally resonant because it speaks directly to FOMO during live sports. It is the twist that makes the idea travel.

Is this wearable tech or brand entertainment?

It is primarily brand entertainment packaged as a functional shortcut. The utility makes it credible. The novelty makes it worth talking about.

What is the reusable pattern for other brands?

Create a physical or tactile shortcut for a high-frequency moment. Keep the interaction to one obvious action. Then tie it to an event where people already have strong emotions and repeat behaviors.