Project Soli: Hands Become the Interface

Google ATAP builds what people actually use

Google ATAP is tasked with creating cool new things that we’ll all actually use. At the recently concluded Google I/O event, it showcased Project Soli: a new kind of wearable technology that aims to make your hands and fingers the only user interface you’ll ever need.

This is not touchless interaction as a gimmick. It is a rethink of interface itself. Your gestures become input. Your hands become the control surface.

The breakthrough is radar, not cameras

To make this possible, Project Soli uses a radar sensor small enough to fit into a wearable like a smartwatch.

The small radar picks up movements in real time and interprets how gestures alter its signal. This enables precise motion sensing without relying on cameras or fixed environmental conditions.
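
To make the sensing idea concrete, here is a minimal sketch of the pattern: radar frames in, gesture labels out. The frame fields, thresholds, and gesture names are illustrative assumptions, not Soli’s actual signal pipeline.

```python
# Minimal sketch of radar-based gesture sensing (illustrative only; not
# Soli's actual signal pipeline). Assumes a front end that emits one
# feature vector per radar frame.

from dataclasses import dataclass

@dataclass
class RadarFrame:
    doppler_shift: float  # dominant hand velocity toward/away from sensor (m/s)
    energy: float         # reflected signal strength, normalized to 0..1

def classify_gesture(frames: list[RadarFrame]) -> str:
    """Map a short window of radar frames to a gesture label.

    Real systems learn this mapping from data; simple thresholds stand in
    here to show the shape of the problem: signal changes in, intent out.
    """
    if not frames:
        return "none"
    avg_doppler = sum(f.doppler_shift for f in frames) / len(frames)
    avg_energy = sum(f.energy for f in frames) / len(frames)
    if avg_energy < 0.1:
        return "none"  # nothing reflective in front of the sensor
    # Frequent sign changes in Doppler suggest back-and-forth micro-motion,
    # like rubbing thumb against finger (a "virtual dial").
    sign_flips = sum(
        1 for a, b in zip(frames, frames[1:])
        if a.doppler_shift * b.doppler_shift < 0
    )
    if sign_flips > len(frames) // 3:
        return "dial"
    if abs(avg_doppler) < 0.05:
        return "hold"  # hand present but still
    return "swipe" if avg_doppler > 0 else "swipe_back"
```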

In wearable computing and ambient interfaces, the real unlock is interaction that works in motion, without relying on tiny screens. The open question is whether wearables can move beyond miniaturized apps and a screen-first mindset.

The implication is straightforward. Interaction moves from screens to motion. User interfaces become something you do, not something you tap.

Why this matters for wearable tech

Wearables struggle when they copy the smartphone model onto tiny screens. Wearable UX should treat the screen as optional, not primary.

Extractable takeaway: When the screen becomes the bottleneck, shift the interface to sensing and interpretation, then keep the gesture vocabulary small enough to learn fast.

Instead of shrinking interfaces, Soli removes them. The wearable becomes a sensor-driven layer that listens for intent through movement.

If this approach scales, it changes what wearable interaction can be. Less screen dependency. More natural control. Faster micro-interactions.

What Soli teaches about hands-first UX

  • Start with intent, not UI. Define the handful of moments where a gesture is faster than hunting for a screen.
  • Design for motion. Favor interactions that work while walking, commuting, or doing something else with your attention.
  • Keep the gesture set teachable. A small, consistent vocabulary beats a large library that nobody remembers (see the sketch after this list).
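
As a sketch of what “small and teachable” can mean in practice, the mapping below binds a handful of gesture labels (matching the earlier sketch) to exactly one action each. The labels and actions are hypothetical placeholders, not a real device API.

```python
# A deliberately small gesture vocabulary (hypothetical labels and actions).
# The point: few gestures, one consistent meaning each, no screen required.

from typing import Callable

GESTURE_ACTIONS: dict[str, Callable[[], None]] = {
    "dial":       lambda: print("adjust volume"),  # thumb-rub micro-motion
    "swipe":      lambda: print("next notification"),
    "swipe_back": lambda: print("previous notification"),
    "hold":       lambda: print("dismiss"),
}

def handle(gesture: str) -> None:
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:  # unknown labels do nothing rather than guess intent
        action()

handle("dial")  # -> adjust volume
```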

A few fast answers before you act

Is Project Soli just gesture control?

It is gesture control powered by a radar sensor small enough for wearables, designed to make hands and fingers the primary interface.

Why use radar instead of cameras?

Radar can sense fine motion without relying on lighting, framing, or line-of-sight in the same way camera-based systems do.

What is the real promise here?

Interfaces that disappear. Interaction becomes physical, immediate, and wearable-friendly.

What should a product team prototype first?

Pick one high-frequency moment where a quick gesture could replace a screen tap, and test whether the sensing feels reliable in motion.

What is the biggest adoption risk?

If gestures feel inconsistent or hard to learn, people will default back to the screen. The bar is effortless, not novel.

EA SPORTS: Madden NFL 15 GIFERATOR

To launch its new game Madden NFL 15, EA Sports wanted to connect with young, football-obsessed fans and grow its association with the real-world NFL. Since the average football fan was watching the game with a smartphone in hand, EA teamed up with Google to let sports fans provoke rivals from the comfort of their own sofas and bring trash talk into the 21st century.

The system fused live NFL data with Madden NFL 15 game footage to generate GIF highlights for every single game, all delivered via real-time ads across sports websites and apps. The result was an ever-growing collection of GIFs that football fans could simply take, edit, and share to shove in the face of their rivals.

How the GIFERATOR works

The mechanic is a real-time trigger loop. As live NFL moments happen, a data signal maps those moments to a library of Madden NFL 15 visuals, headlines, and team-specific ingredients. The system then assembles a ready-to-share GIF that matches what fans are watching, right when the emotion spike is highest.
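
Here is a minimal sketch of that loop, under stated assumptions: the event fields, trigger map entries, and render hand-off are hypothetical stand-ins for the campaign’s real data feed and ad pipeline.

```python
# Sketch of the trigger loop: live-game events in, ready-to-share GIF specs
# out. Event fields, templates, and the render hand-off are hypothetical
# stand-ins for the campaign's actual systems.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LiveEvent:
    kind: str      # e.g. "touchdown", "interception", "sack"
    team: str      # team that made the play
    player: str
    opponent: str

# Trigger map: which live signals create which asset ingredients.
# Keeping the mapping simple is what lets it scale across a whole season.
TRIGGER_MAP = {
    "touchdown":    {"template": "td_celebration", "headline": "{player} JUST COOKED {opponent}"},
    "interception": {"template": "pick_six",       "headline": "{opponent} NEVER SAW IT COMING"},
    "sack":         {"template": "big_hit",        "headline": "{player} SENT A MESSAGE"},
}

def assemble_gif(event: LiveEvent) -> Optional[dict]:
    """Turn a live moment into a shareable asset spec while the emotion
    spike is still live. Returns None for moments with no mapped asset."""
    recipe = TRIGGER_MAP.get(event.kind)
    if recipe is None:
        return None
    return {
        "template": recipe["template"],
        "team_colors": event.team,  # team-specific ingredients
        "headline": recipe["headline"].format(
            player=event.player, opponent=event.opponent
        ),
    }

# In production this would be an event-stream consumer pushing specs to
# real-time ad slots; a plain loop shows the shape of the mechanic.
for event in [LiveEvent("touchdown", "SEA", "Lynch", "GB")]:
    spec = assemble_gif(event)
    if spec:
        print(spec)  # hand off to the GIF renderer and ad delivery
```

The flat trigger map is the design choice that matters: it is easy to audit, easy to extend with new moments, and simple enough to stay fresh week after week.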

In sports marketing, second-screen behavior turns live moments into shareable social currency.

Why it lands

The creative idea is not “GIFs”. It is timing plus relevance. Because the asset shows up while the emotion spike is still live, it feels native to the fan conversation instead of delayed brand content. When fans are already checking stats, group chats, and social feeds mid-game, you meet them where their thumbs already are. The format just happens to be the internet’s fastest unit of trash talk.

Extractable takeaway: If you can translate a live moment into a personalized, ready-to-share asset within the same minute, you convert attention into participation, and participation into distribution.

Where the real value sits

The real question is how to make a boxed game feel as live, social, and rivalry-ready as the sport it simulates.

This is also a credibility move. By fusing live NFL action with Madden footage, the game positions itself as culturally current, not just a boxed product. It borrows the emotional heat of real games and channels it into the Madden universe, play after play.

What second-screen marketers should steal

  • Build a trigger map: define which live signals create which assets, and keep the mapping simple enough to scale all season.
  • Design for viewer control: let people tweak copy or choose variants, so the output feels like “mine”, not “an ad”.
  • Win the second screen: deliver creative where fans already browse during live events, not only on your owned channels.
  • Make rivalry the editor: structure content around opponents, not around generic brand lines, so sharing feels inevitable.
  • Ship a content engine, not a one-off: the compounding library is the advantage, because it stays fresh week after week.

A few fast answers before you act

What is the Madden GIFERATOR?

It is a real-time GIF creation system that generates Madden NFL 15-themed GIFs that match what is happening in live NFL games, designed for instant sharing and trash talk.

Why does “real-time” matter here?

Because it catches fans during peak emotion. The closer the asset appears to the live moment, the more it feels like part of the conversation instead of an interruption.

What is the core pattern to reuse?

Use live signals to automatically assemble relevant, lightweight assets, then distribute them on the channels people naturally use while watching.

Is this mainly a social campaign or an ad campaign?

Both. The distribution is described as real-time advertising across sports sites and apps, while the product experience is built for fans to edit and share the output socially.

What is the biggest execution risk?

Relevance drift. If the mapping from live moments to generated assets feels off, or if the output arrives too late, it stops feeling “in the game” and becomes just another banner.