Project Soli: Hands Become the Interface

Google ATAP builds what people actually use

Google ATAP is tasked with creating cool new things that we’ll all actually use. At the recently concluded Google I/O event, they showcased Project Soli, a new kind of wearable technology that aims to make your hands and fingers the only user interface you’ll ever need.

This is not touchless interaction as a gimmick. It is a rethink of interface itself. Your gestures become input. Your hands become the control surface.

The breakthrough is radar, not cameras

To make this possible, Project Soli uses a radar that is small enough to fit into a wearable like a smartwatch.

The small radar picks up movements in real time and interprets how gestures alter its signal. This enables precise motion sensing without relying on cameras or fixed environmental conditions.
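
To make the sensing-to-gesture idea concrete, here is a minimal, hypothetical sketch of the pipeline this implies: radar readings are reduced to a few motion features, then matched against a small gesture vocabulary. The feature names, thresholds, and gesture labels are illustrative assumptions, not Soli’s actual signal processing.

    # Hypothetical sketch: mapping radar motion features to a tiny gesture vocabulary.
    # Feature names and thresholds are illustrative assumptions, not Soli's pipeline.
    from dataclasses import dataclass

    @dataclass
    class MotionFeatures:
        velocity: float      # approximate radial velocity of the fingers (arbitrary units)
        displacement: float  # net movement over the sampling window
        energy: float        # how much the radar signal changed overall

    def classify_gesture(f: MotionFeatures) -> str:
        """Match motion features against a deliberately small gesture set."""
        if f.energy < 0.1:
            return "none"        # hand at rest, signal barely changed
        if abs(f.velocity) > 0.8 and f.displacement < 0.2:
            return "finger_rub"  # fast back-and-forth with little net movement
        if f.displacement > 0.5:
            return "swipe"       # sustained movement in one direction
        return "tap"             # short, low-displacement burst

    print(classify_gesture(MotionFeatures(velocity=0.9, displacement=0.1, energy=0.6)))
    # -> "finger_rub"

The deliberately small vocabulary mirrors the takeaway further down: a gesture set people can learn fast beats a large library nobody remembers.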

In wearable computing and ambient interfaces, the real unlock is interaction that works in motion, without relying on tiny screens. The open question is whether wearables can move beyond miniaturized apps and shed the screen-first mindset.

The implication is straightforward. Interaction moves from screens to motion. User interfaces become something you do, not something you tap.

Why this matters for wearable tech

Wearables struggle when they copy the smartphone model onto tiny screens. Wearable UX should treat the screen as optional, not primary.

Extractable takeaway: When the screen becomes the bottleneck, shift the interface to sensing and interpretation, then keep the gesture vocabulary small enough to learn fast.

Instead of shrinking interfaces, Soli’s approach removes them. The wearable becomes a sensor-driven layer that listens to intent through movement.

If this approach scales, it changes what wearable interaction can be. Less screen dependency. More natural control. Faster micro-interactions.


What Soli teaches about hands-first UX

  • Start with intent, not UI. Define the handful of moments where a gesture is faster than hunting for a screen.
  • Design for motion. Favor interactions that work while walking, commuting, or doing something else with your attention.
  • Keep the gesture set teachable. A small, consistent vocabulary beats a large library that nobody remembers.

A few fast answers before you act

Is Project Soli just gesture control?

It is gesture control powered by a radar sensor small enough for wearables, designed to make hands and fingers the primary interface.

Why use radar instead of cameras?

Radar can sense fine motion without relying on lighting, framing, or line-of-sight in the same way camera-based systems do.

What is the real promise here?

Interfaces that disappear. Interaction becomes physical, immediate, and wearable-friendly.

What should a product team prototype first?

Pick one high-frequency moment where a quick gesture could replace a screen tap, and test whether the sensing feels reliable in motion.

What is the biggest adoption risk?

If gestures feel inconsistent or hard to learn, people will default back to the screen. The bar is effortless, not novel.

FOREO: MODA Digital Makeup Artist

Never got the hang of applying makeup with your own hands? MODA from FOREO is billed as a digital makeup artist that takes online “tutorial” culture and turns it into an automated, 30-second application moment.

From a chosen look to a mapped face

The flow starts in an app: you select a style to emulate. That style can come from MODA’s image library, a celebrity photo, or a picture of a fashionable friend. MODA then scans the wearer’s facial features, mapping landmarks so placement follows the face, and adapts colors and shapes to suit the wearer’s skin tone and face shape.
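
As a way to reason about that flow, here is a minimal, hypothetical sketch of the personalization step it implies: a reference look, a set of scanned facial measurements, and an adaptation pass that adjusts placement and color. All names and adjustment rules are illustrative assumptions, not MODA’s implementation.

    # Hypothetical sketch of a "reference look -> scan -> adapt" pipeline.
    # Names and adjustment rules are illustrative assumptions, not MODA's implementation.
    from dataclasses import dataclass

    @dataclass
    class ReferenceLook:
        lip_color: str
        eyeliner_thickness: float   # relative to the reference face

    @dataclass
    class FaceScan:
        skin_tone: str
        lip_width: float            # normalized landmark distance on the wearer
        reference_lip_width: float  # same measurement on the reference face

    def personalize(look: ReferenceLook, scan: FaceScan) -> ReferenceLook:
        """Scale placement to the wearer's landmarks and adjust color for skin tone."""
        scale = scan.lip_width / scan.reference_lip_width
        color = look.lip_color
        if scan.skin_tone == "deep" and color == "nude_light":
            color = "nude_warm"  # crude stand-in for tone-aware color adaptation
        return ReferenceLook(lip_color=color,
                             eyeliner_thickness=look.eyeliner_thickness * scale)

    adapted = personalize(
        ReferenceLook(lip_color="nude_light", eyeliner_thickness=1.0),
        FaceScan(skin_tone="deep", lip_width=1.1, reference_lip_width=1.0),
    )
    print(adapted)  # placement scaled to the wearer, color shifted to suit the skin tone

The point is the ordering: the reference look stays fixed, and adaptation happens per wearer at apply time.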

How the device applies the look

Once the selection is set, the user places their face into the device and MODA “paints” the chosen look directly onto the face, described as using makeup ink that is FDA-approved. Here, “ink” refers to the makeup medium the device dispenses onto the skin. The proposition is speed and repeatability: copy a look, personalize it, apply it, done.

In consumer beauty tech, shifting makeup from manual skill to an automated service experience changes the value from “how well you apply” to “how fast you can experiment”.

Why this idea has an audience

Online videos teaching people to copy celebrity styles are already a mass behavior. MODA’s bet is that many people do not want more instruction. They want a shortcut. Because the device applies the look for you after scanning and personalization, “trying a look” can become as easy as choosing one. The real question is whether the applied result looks credible enough that people will trust it without extra tutorial time. This framing is compelling because it shifts beauty from a practiced skill to a repeatable service moment.

Extractable takeaway: When a category is stuck on “learn the skill,” the highest-leverage innovation is often a service layer that turns inspiration into a fast, repeatable outcome, not another tutorial.

What MODA teaches about beauty UX

  • Collapse inspiration to action. Let people pick a reference look and get to an applied result quickly.
  • Personalize by default. Use scanning and simple adjustments so the outcome fits the individual, not just the template.
  • Design for repeatability. Make it easy to re-run a look, tweak it, and compare outcomes without starting from scratch.

A few fast answers before you act

What is MODA in one line?

A device billed as a “digital makeup artist” that uses an app selection plus facial scanning to apply a chosen makeup look in about 30 seconds.

What makes this different from AR try-on?

AR try-on is an on-screen overlay that previews a look digitally. MODA’s promise is physical application on the face after scanning and customization.

How does a user choose a look?

Through an integrated smartphone app, choosing from a library or supplying a reference image such as a celebrity photo or a friend’s picture.

How does MODA personalize a look to your face?

It’s described as scanning facial features and then adapting the chosen reference look by adjusting placement, shapes, and color choices to better fit the wearer’s face shape and skin tone before applying it.

Who is MODA pitched for?

People who want to experiment with different looks quickly, especially those who do not enjoy the learning curve of manual application and tutorials.

Sen.se: Mother and the Motion Cookies

Sensors are showing up everywhere, from wrist wearables like Jawbone UP and Fitbit to the first wave of “smart home” kits. The promise is always the same: data that helps you understand your day, then nudges you when something matters.

Mother and the Motion Cookies, from connected-objects startup Sen.se, is positioned as a more flexible take on that idea. Instead of buying a single-purpose gadget for each habit, you get one “Mother” hub and a set of small sensor tags. The Motion Cookies. You decide what you want to track, attach a Cookie to the relevant object, and set alerts for the moments you care about.

Definition tightening: A Motion Cookie is a small sensor you can stick to an object. The “Mother” device is the home base that receives the signals and turns them into simple dashboards and notifications.

If you strip away the friendly character design, this is a configurable rules engine for everyday life. The sensors stay the same. The meaning changes based on what you attach them to and what you tell the app to watch for.
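
To make the rules-engine framing tangible, here is a minimal, hypothetical sketch of how generic sensors gain meaning from assignment: the same Cookie event is routed to whatever rule the user attached to it. All names and rules are illustrative assumptions, not Sen.se’s actual API.

    # Hypothetical sketch of the "configurable rules engine" framing:
    # the same generic sensor gains meaning from what it is attached to and what you watch for.
    # Names and rules are illustrative assumptions, not Sen.se's actual API.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class CookieEvent:
        cookie_id: str
        moved: bool
        hour: int   # hour of day the reading was taken

    @dataclass
    class Assignment:
        cookie_id: str
        attached_to: str  # "toothbrush", "front_door", "pill_box", ...
        rule: Callable[[CookieEvent], Optional[str]]  # returns an alert message, or None

    def evening_brush_check(event: CookieEvent) -> Optional[str]:
        # Meaning comes from the assignment, not the hardware: here "moved" means "brushed".
        if not event.moved and event.hour >= 22:
            return "No evening brushing detected yet."
        return None

    assignments = [Assignment("cookie-1", "toothbrush", evening_brush_check)]

    def process(event: CookieEvent) -> list:
        """The hub's job: route each reading to whatever rule its assignment defines."""
        alerts = []
        for a in assignments:
            if a.cookie_id == event.cookie_id:
                message = a.rule(event)
                if message:
                    alerts.append(f"[{a.attached_to}] {message}")
        return alerts

    print(process(CookieEvent(cookie_id="cookie-1", moved=False, hour=23)))
    # -> ['[toothbrush] No evening brushing detected yet.']

Swapping the rule or the attached_to label is all it takes to give the same Cookie a completely different job.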

A sensor kit that behaves like a toolkit

The smart move here is that the hardware is deliberately generic. One sensor type can be repurposed across dozens of “jobs”, depending on where you place it. Toothbrush, medication box, door, bag, water bottle. The product is less about owning the perfect device, and more about reassigning the same device as your priorities change.

In consumer IoT, products only survive if setup friction stays low and the data translates into a simple action.

Why the “Mother” framing makes the tech feel usable

Smart home products often fail at the handoff between capability and comprehension. Mother softens that gap by packaging sensing as caregiving. That emotional framing reduces the intimidation factor and makes experimentation feel normal. The real question is whether a sensor system can feel understandable enough that people actually try it.

Extractable takeaway: When your product is technically broad, give users a friendly mental model and a small first win, then let reconfiguration become the habit that unlocks the long tail of use cases.

What connected-product teams should copy

  • Design for reassignment, not perfection. People’s routines change. Your hardware should survive those changes.
  • Make “setup” the product. If a user cannot get to value in minutes, they will not get to value at all.
  • Translate sensing into verbs. “Brush”, “open”, “arrive”, “drink”, “take”. Verbs beat metrics.
  • Alert sparingly. The fastest way to kill trust is to spam people with “insights” they did not ask for (see the sketch after this list).
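
As one way to picture the last two points, here is a minimal, hypothetical sketch of translating raw readings into verbs and rate-limiting the resulting alerts. The verb table and the once-per-day policy are illustrative assumptions, not Sen.se’s behaviour.

    # Hypothetical sketch: raw readings become human verbs, and alerts are throttled.
    # The verb table and the once-per-day policy are illustrative assumptions.
    from datetime import datetime, timedelta

    VERBS = {
        ("toothbrush", "moved"): "brush",
        ("front_door", "moved"): "open",
        ("water_bottle", "moved"): "drink",
    }

    _last_alert = {}  # topic -> datetime of the most recent alert

    def to_verb(attached_to: str, raw_event: str) -> str:
        """People reason about verbs, not accelerometer readings."""
        return VERBS.get((attached_to, raw_event), "unknown")

    def maybe_alert(topic: str, message: str, now: datetime,
                    cooldown: timedelta = timedelta(hours=24)) -> bool:
        """Send at most one alert per topic per cooldown window."""
        last = _last_alert.get(topic)
        if last is not None and now - last < cooldown:
            return False  # stay quiet; trust matters more than coverage
        _last_alert[topic] = now
        print(f"ALERT [{topic}]: {message}")
        return True

    print(to_verb("toothbrush", "moved"))                                      # -> "brush"
    now = datetime(2015, 6, 1, 22, 30)
    maybe_alert("brushing", "No evening brushing detected.", now)              # sent
    maybe_alert("brushing", "Still no brushing.", now + timedelta(hours=1))    # suppressed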

A few fast answers before you act

What is Mother and the Motion Cookies?

It is a smart home kit with one central hub and multiple small sensor tags. You attach a sensor to an object, choose what you want to track, and get updates or alerts based on that behaviour.

What is the core idea compared to a single-purpose wearable?

Reconfigurability. The same sensors can be reassigned to different objects and routines, so the system adapts to what you want to measure this week, not what the device designer assumed forever.

What problem is it trying to solve?

Turning ambient behaviour into something actionable, without requiring you to buy a new gadget for every habit or household scenario.

Why does the “Mother” framing matter?

It makes a technically broad sensor system feel more understandable and less intimidating. That framing helps users see the product as practical support, not just instrumentation.

What makes this kind of product hard to sustain?

Reliance on companion apps and backend services, plus the challenge of keeping alerts useful rather than noisy. If the system becomes high-maintenance, it stops feeling like help.