Project Soli: Hands Become the Interface

Google ATAP builds what people actually use

Google ATAP is tasked with creating cool new things that we’ll all actually use. At the recently concluded Google I/O event, they showcased Project Soli, a new kind of wearable technology that aims to make your hands and fingers the only user interface you’ll ever need.

This is not touchless interaction as a gimmick. It is a rethink of interface itself. Your gestures become input. Your hands become the control surface.

The breakthrough is radar, not cameras

To make this possible, Project Soli uses a radar sensor small enough to fit into a wearable like a smartwatch.

The chip emits radio waves and reads, in real time, how hand and finger movements alter the reflected signal. This enables precise motion sensing without relying on cameras or fixed environmental conditions.
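To make that concrete, here is a minimal sketch of the core idea behind radar gesture sensing: a moving hand shifts the frequency of the reflected signal (the Doppler effect), and that shift can be measured and mapped to simple gestures. This is an illustrative simulation, not Soli’s actual pipeline; the sample rate, signal model, and gesture thresholds are all invented for the example.

```python
import numpy as np

# Illustrative constants; real Soli parameters differ and are not given here.
FS = 2_000          # baseband sample rate in Hz (assumed)
WAVELENGTH = 0.005  # ~5 mm wavelength, i.e. a 60 GHz carrier
WINDOW_S = 0.1      # analysis window in seconds

def doppler_hz(velocity_mps):
    """CW-radar relation: a reflector at radial velocity v produces a
    beat frequency of 2*v / wavelength after mixing with the transmit tone."""
    return 2.0 * velocity_mps / WAVELENGTH

def simulate_baseband(velocity_mps, rng):
    """Simulate the mixed-down echo of a hand moving at constant speed:
    a single tone at the Doppler frequency, plus a little noise."""
    t = np.arange(0, WINDOW_S, 1.0 / FS)
    tone = np.cos(2 * np.pi * doppler_hz(velocity_mps) * t)
    return tone + 0.1 * rng.standard_normal(t.size)

def estimate_velocity(signal):
    """Recover radial speed from the dominant frequency in the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1.0 / FS)
    return freqs[np.argmax(spectrum)] * WAVELENGTH / 2.0

def classify(velocity_mps):
    """Map coarse speed to a toy gesture label (thresholds invented)."""
    if velocity_mps < 0.05:
        return "hold"
    return "swipe" if velocity_mps < 0.5 else "flick"

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    for v in (0.0, 0.2, 1.0):
        v_est = estimate_velocity(simulate_baseband(v, rng))
        print(f"true {v:.2f} m/s -> estimated {v_est:.2f} m/s -> {classify(v_est)}")
```

Real systems track far richer features over time and feed them to learned classifiers, but the principle holds: motion becomes a measurable change in the return signal.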

The implication is straightforward. Interaction moves from screens to motion. User interfaces become something you do, not something you tap.

In wearable computing and ambient interfaces, the real unlock is interaction that works in motion, without relying on tiny screens.

Why this matters for wearable tech

Wearables struggle when they copy the smartphone model onto tiny screens. Project Soli pushes in the opposite direction.

Instead of shrinking interfaces, it removes them. The wearable becomes a sensor-driven layer that listens to intent through movement.

If this approach scales, it changes what wearable interaction can be. Less screen dependency. More natural control. Faster micro-interactions.



A few fast answers before you act

Is Project Soli just gesture control?

It is gesture control powered by a radar sensor small enough for wearables, designed to make hands and fingers the primary interface.

Why use radar instead of cameras?

Radar can sense fine motion without relying on lighting, framing, or line-of-sight in the same way camera-based systems do.

What is the real promise here?

Interfaces that disappear. Interaction becomes physical, immediate, and wearable-friendly.

Restaurant of the Future: AR Dining

The restaurant of the future is a technology experience

Restaurants of the future are no longer defined only by food, service, or ambiance.

They become technology-driven environments, where digital interfaces blend directly into the dining experience.

Smartglasses, augmented reality, gesture-based interfaces, customer face identification, avatars, and seamless wireless payments begin to coexist at the table.

The result is not a single gadget. It is a fully integrated experience.

When dining becomes augmented

In the restaurant of the future, the menu does not need to live on paper or even on a phone.

Information can appear in front of the guest through smartglasses or augmented displays. Dishes can be visualized before ordering. Nutritional details, origin stories, or preparation methods can surface on demand.
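One way to picture “on demand” in software terms is a menu record with progressively deeper detail layers that the AR layer fetches only when the guest asks. A minimal sketch follows; the schema and layer names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dish:
    """Illustrative menu record; fields and layer names are assumptions."""
    name: str
    price: float
    details: dict  # layer name -> text shown when the guest requests it

MENU = [
    Dish("Miso ramen", 12.0, {
        "nutrition": "540 kcal, contains soy and wheat",
        "origin": "Broth simmered in-house for 12 hours",
        "preparation": "Finished to order; vegan option available",
    }),
]

def surface(dish: Dish, layer: str) -> str:
    """Return one detail layer on demand; unknown layers fall back gracefully."""
    return dish.details.get(layer, f"No '{layer}' info for {dish.name}")

if __name__ == "__main__":
    # e.g. the guest dwells on a dish and asks, by gesture, for nutrition info
    print(surface(MENU[0], "nutrition"))
```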

Gestures replace clicks. Presence replaces navigation.

The dining experience becomes interactive without feeling mechanical.

Identity replaces interaction

Face recognition and customer identification change how restaurants think about service.

Returning guests can be recognized instantly. Preferences, allergies, and past orders can be recalled automatically. Avatars and digital assistants can guide choices or explain dishes without interrupting human staff.
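Once a guest is identified, the personalization half of this reduces to a profile lookup. The sketch below assumes an upstream face recognizer that produces embeddings and matches them by cosine similarity; the schema, threshold, and matching approach are illustrative assumptions, not a description of any real deployment.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class GuestProfile:
    """Illustrative guest record; the schema is an assumption, not a standard."""
    name: str
    allergies: list = field(default_factory=list)
    favorites: list = field(default_factory=list)
    embedding: np.ndarray = None  # face embedding from an upstream recognizer

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, profiles, threshold=0.85):
    """Return the best-matching opted-in guest, or None for an unknown face.
    The probe comes from whatever recognizer is in use; the threshold is
    invented here and would need tuning in practice."""
    best = max(profiles, key=lambda p: cosine(probe, p.embedding), default=None)
    if best is not None and cosine(probe, best.embedding) >= threshold:
        return best
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    alice = rng.normal(size=128)
    profiles = [GuestProfile("Alice", ["peanuts"], ["miso ramen"], alice)]
    # A fresh capture of the same face yields a nearby embedding.
    match = identify(alice + 0.05 * rng.normal(size=128), profiles)
    if match:
        print(f"Welcome back, {match.name}. Avoid: {match.allergies}")
```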

The restaurant adapts to the guest, not the other way around.

Payment disappears into the experience

Wireless payment technologies remove the most artificial moment in dining.

There is no need to ask for the bill. No waiting. No interruption.

Payment happens seamlessly as part of the experience, triggered by confirmation, gesture, or departure. Money moves, but attention stays on dining.
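One way to model this flow is an event-driven tab that settles exactly once, whichever trigger arrives first. Everything in the sketch below, from the event names to the charge step, is hypothetical, standing in for a real payment integration.

```python
from enum import Enum, auto

class TabState(Enum):
    OPEN = auto()
    SETTLED = auto()

# Any of these events may close the tab; the names are invented for this sketch.
SETTLEMENT_EVENTS = {"guest_confirmed", "payment_gesture", "guest_departed"}

class Tab:
    def __init__(self, guest, amount):
        self.guest = guest
        self.amount = amount
        self.state = TabState.OPEN

    def on_event(self, event: str):
        """Settle exactly once, on whichever trigger arrives first."""
        if self.state is TabState.OPEN and event in SETTLEMENT_EVENTS:
            self._charge()
            self.state = TabState.SETTLED

    def _charge(self):
        # Placeholder for a call to a real payment provider.
        print(f"Charged {self.guest} {self.amount:.2f}, no bill presented")

if __name__ == "__main__":
    tab = Tab("table-7", 86.50)
    tab.on_event("dish_served")      # ignored: not a settlement trigger
    tab.on_event("guest_departed")   # settles the tab
    tab.on_event("payment_gesture")  # ignored: already settled
```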

Mirai Resu: Japan’s restaurant of the future

A short video from Mirai Resu in Japan illustrates this vision, showing what a fully integrated restaurant experience can look like.

Smartglasses, augmented visuals, gesture-based interaction, avatars, and invisible payment mechanisms come together into a single flow.

This is not a concept mock-up. It is a concrete glimpse into how dining, technology, and experience design merge.

In hospitality experience design, technology only “wins” when it fades into the flow and makes the human experience feel more effortless.

The real shift: experience over interface

The most important takeaway is not the individual technologies. It is the shift away from explicit interfaces toward ambient interaction. Guests do not use systems. They experience them. Technology fades into the background. The experience becomes the focus.


A few fast answers before you act

Is this about replacing staff with machines?

No. The value is removing friction so staff can focus more on hospitality and less on transactional steps.

Why does augmented reality matter in dining?

It can add information and interaction in-context, without pulling guests out of the moment or forcing phone-first behavior.

What does the Mirai Resu example actually demonstrate?

It demonstrates orchestration. Multiple technologies can be combined into one coherent service flow, rather than isolated gimmicks.

Where does “customer identification” fit in this vision?

It enables recognition on approach and service personalization, but it only works when guests understand the trade and feel in control.

What is the design principle to steal?

Design for experience continuity. Keep attention on dining, and make technology support the flow rather than interrupt it.