Oakley: Pro Vision with Google Cardboard

When you picture a virtual reality (VR) headset, you probably imagine something high-tech and far too expensive to feel practical. Google Cardboard takes that assumption and flips it by turning a simple cardboard cutout into a phone-powered VR viewer.

Oakley borrows that logic and puts it exactly where people already accept cardboard: the packaging. Instead of being thrown away, the box becomes the device that unlocks the experience.

Packaging that turns into a VR product

Google launched Google Cardboard as a cardboard cutout that turns an Android phone into a VR headset. Oakley integrates that fold-and-slot concept into its sunglass packaging, so customers can transform the pack into a viewer and use their phone to access 360-degree content.

The payoff is described as a “you are there” look at extreme sports like surfing, skiing, mountain biking, skateboarding, and skydiving. It is less about specs and more about perspective.

In consumer product marketing, converting packaging from waste into a usable experience can create perceived value without adding new components.

Why this lands for an action-sports brand

This works because the medium matches the promise. Oakley is not only showing extreme sports. It is letting you look from inside the moment, using viewer control to make the content feel personal. The “VR made from packaging” twist also creates a good kind of surprise. The customer discovers the brand added value where they expected disposal.

Extractable takeaway: If your story is about immersion or perspective, build the experience trigger into something the customer already touches, then let the first interaction deliver the benefit before they read any explanation.

The commercial intent underneath

This is a purchase-adjacent experience. It turns the post-purchase moment into brand time, and it extends the product narrative beyond the sunglasses themselves. The packaging becomes a bridge between retail and content, with the customer doing the assembly that makes the story memorable.

The real question is whether the packaging can turn post-purchase curiosity into a usable brand experience, not whether it can imitate premium VR hardware.

What to steal from packaging-led immersion

  • Reuse an accepted “throwaway” material. If it is already in hand, it is frictionless distribution.
  • Make the first use obvious. Assembly and activation should be legible without instructions.
  • Match the experience to brand territory. Immersive POV content fits performance and extreme sports.
  • Design for sharing. If it looks clever on camera, people will demonstrate it for you.

A few fast answers before you act

What is Oakley Pro Vision in this context?

It is a packaging-led idea where an Oakley box folds into a Google Cardboard-style VR viewer, using a phone to deliver 360-degree extreme sports content.

Why use Google Cardboard instead of a dedicated headset?

Because it lowers cost and setup. A phone plus folded cardboard is enough to deliver an immersive experience without asking people to buy new hardware.

What does 360-degree content add versus normal video?

It gives viewer control over where to look, which increases the sense of presence and makes the experience feel closer to a real point of view.

Where does the marketing value come from?

From turning packaging into a reusable object and extending brand time after purchase, while linking the product to high-adrenaline moments people want to feel.

What is the main failure mode with this pattern?

If the fold, fit, or onboarding is unclear, people will not assemble it. The physical usability has to be as strong as the content.

Project Soli: Hands Become the Interface

Google ATAP builds what people actually use

Google ATAP is tasked with creating cool new things that we’ll all actually use. At the recently concluded Google I/O event, the team showcased Project Soli, a new kind of wearable technology that aims to make your hands and fingers the only user interface you’ll ever need.

This is not touchless interaction as a gimmick. It is a rethink of interface itself. Your gestures become input. Your hands become the control surface.

The breakthrough is radar, not cameras

To make this possible, Project Soli uses a radar that is small enough to fit into a wearable like a smartwatch.

The small radar picks up movements in real time and interprets how gestures alter its signal. This enables precise motion sensing without relying on cameras or fixed environmental conditions.
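Soli’s actual signal processing is proprietary and far more sophisticated than anything shown here, but the core idea of mapping how a gesture alters the radar signal to a small gesture vocabulary can be sketched in toy form. Everything below is an invented illustration: the feature names (Doppler shift, range change) are real radar concepts, but the thresholds and gesture labels are made up.

```python
# Illustrative sketch only: map coarse, simulated radar features to a
# small gesture vocabulary. All thresholds here are invented; Soli's
# real pipeline uses far richer signal processing and machine learning.

def classify_gesture(doppler_hz, range_delta_mm):
    """Classify a micro-gesture from two toy radar features:
    doppler_hz      - frequency shift caused by hand motion speed
    range_delta_mm  - change in distance between hand and sensor
    """
    if abs(doppler_hz) < 5 and abs(range_delta_mm) < 1:
        return "idle"   # no meaningful motion detected
    if doppler_hz > 20 and range_delta_mm < 0:
        return "tap"    # fast motion toward the sensor
    if abs(doppler_hz) < 15 and abs(range_delta_mm) < 2:
        return "dial"   # slow finger rub, little range change
    return "swipe"      # anything else: a broad lateral sweep

print(classify_gesture(0, 0))    # idle
print(classify_gesture(30, -3))  # tap
print(classify_gesture(10, 1))   # dial
```

The design point this toy makes is the one the article draws: a small, consistent gesture vocabulary is what keeps sensing-based interaction learnable.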

In wearable computing and ambient interfaces, the real unlock is interaction that works in motion, without relying on tiny screens.

The real question is whether wearables can move beyond miniaturized apps and make interaction work in motion, without a screen-first mindset.

The implication is straightforward. Interaction moves from screens to motion. User interfaces become something you do, not something you tap.

Why this matters for wearable tech

Wearables struggle when they copy the smartphone model onto tiny screens. Wearable UX should treat the screen as optional, not primary.

Extractable takeaway: When the screen becomes the bottleneck, shift the interface to sensing and interpretation, then keep the gesture vocabulary small enough to learn fast.

Instead of shrinking interfaces, it removes them. The wearable becomes a sensor-driven layer that listens to intent through movement.

If this approach scales, it changes what wearable interaction can be. Less screen dependency. More natural control. Faster micro-interactions.
What Soli teaches about hands-first UX

  • Start with intent, not UI. Define the handful of moments where a gesture is faster than hunting for a screen.
  • Design for motion. Favor interactions that work while walking, commuting, or doing something else with your attention.
  • Keep the gesture set teachable. A small, consistent vocabulary beats a large library that nobody remembers.

A few fast answers before you act

Is Project Soli just gesture control?

It is gesture control powered by a radar sensor small enough for wearables, designed to make hands and fingers the primary interface.

Why use radar instead of cameras?

Radar can sense fine motion without relying on lighting, framing, or line-of-sight in the same way camera-based systems do.

What is the real promise here?

Interfaces that disappear. Interaction becomes physical, immediate, and wearable-friendly.

What should a product team prototype first?

Pick one high-frequency moment where a quick gesture could replace a screen tap, and test whether the sensing feels reliable in motion.

What is the biggest adoption risk?

If gestures feel inconsistent or hard to learn, people will default back to the screen. The bar is effortless, not novel.

FOREO: MODA Digital Makeup Artist

Never got the hang of applying makeup with your own hands? MODA from FOREO is billed as a digital makeup artist that takes the “tutorial” culture online and turns it into an automated, 30-second application moment.

From a chosen look to a mapped face

The flow starts in an app: you select a style to emulate. That style can come from MODA’s image library, a celebrity photo, or a picture of a fashionable friend. MODA then scans the wearer’s facial features to align the look, mapping facial landmarks so placement follows the face, and adapts colors and shapes to suit the wearer’s skin tone and face shape.
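FOREO has not published how MODA’s color adaptation works, but the “adapt colors to suit the wearer’s skin tone” step can be imagined as pulling each reference shade partway toward the scanned skin tone. The function name, RGB values, and blend ratio below are all hypothetical, purely to illustrate the concept.

```python
# Hypothetical sketch of the color-adaptation step: blend a shade taken
# from the reference photo toward the wearer's scanned skin tone.
# MODA's real algorithm is not public; the 30% ratio is invented.

def adapt_shade(reference_rgb, skin_rgb, blend=0.3):
    """Pull each channel of the reference color toward the skin tone."""
    return tuple(
        round(r + (s - r) * blend)
        for r, s in zip(reference_rgb, skin_rgb)
    )

celebrity_blush = (220, 110, 130)  # shade sampled from the reference look
scanned_skin = (200, 160, 140)     # wearer's measured skin tone

print(adapt_shade(celebrity_blush, scanned_skin))  # (214, 125, 133)
```

The same blend idea generalizes: a ratio of 0 keeps the reference look untouched, while 1 collapses it fully into the skin tone, so the product’s “personalization” becomes a tunable dial rather than a binary choice.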

How the device applies the look

Once the selection is set, the user places their face into the device and MODA “paints” the chosen look directly onto the face, described as using makeup ink that is FDA-approved. Here, “ink” refers to the makeup medium the device dispenses onto the skin. The proposition is speed and repeatability: copy a look, personalize it, apply it, done.

In consumer beauty tech, shifting makeup from manual skill to an automated service experience changes the value from “how well you apply” to “how fast you can experiment”.

Why this idea has an audience

Online videos teaching people to copy celebrity styles are already a mass behavior. MODA’s bet is that many people do not want more instruction. They want a shortcut. Because the device applies the look for you after scanning and personalization, “trying a look” can become as easy as choosing one. The real question is whether the applied result looks credible enough that people will trust it without extra tutorial time. This framing is compelling because it shifts beauty from a practiced skill to a repeatable service moment.

Extractable takeaway: When a category is stuck on “learn the skill,” the highest-leverage innovation is often a service layer that turns inspiration into a fast, repeatable outcome, not another tutorial.

What MODA teaches about beauty UX

  • Collapse inspiration to action. Let people pick a reference look and get to an applied result quickly.
  • Personalize by default. Use scanning and simple adjustments so the outcome fits the individual, not just the template.
  • Design for repeatability. Make it easy to re-run a look, tweak it, and compare outcomes without starting from scratch.

A few fast answers before you act

What is MODA in one line?

A device billed as a “digital makeup artist” that uses an app selection plus facial scanning to apply a chosen makeup look in about 30 seconds.

What makes this different from AR try-on?

AR try-on is an on-screen overlay that previews a look digitally. MODA’s promise is physical application on the face after scanning and customization.

How does a user choose a look?

Through an integrated smartphone app, choosing from a library or supplying a reference image such as a celebrity photo or a friend’s picture.

How does MODA personalize a look to your face?

It’s described as scanning facial features and then adapting the chosen reference look by adjusting placement, shapes, and color choices to better fit the wearer’s face shape and skin tone before applying it.

Who is MODA pitched for?

People who want to experiment with different looks quickly, especially those who do not enjoy the learning curve of manual application and tutorials.