Audi: Urban Future at Design Miami 2011

A 190 m² LED city surface that reacts to people

To showcase its A2 concept at Design Miami 2011, Audi created a 190 m² three-dimensional LED surface that provided a glimpse of the future of our cities, where infrastructure and public space are shared between pedestrians and driverless cars. The installation demonstrated how the city surface would continuously gather information about people’s movements and allow vehicles to interact with the environment.

The installation used a real-time graphics engine and tracking software that received live input from 11 Xbox Kinect cameras mounted above visitors’ heads. The cameras tracked the visitors’ movement, which the software processed into movement patterns displayed on the LED surface.
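The camera-to-floor loop above can be sketched in a few lines. This is an illustrative simplification, not Audi's implementation: the LED surface is modeled as a coarse brightness grid, and tracked visitor positions (already converted to grid coordinates) light up nearby cells with a distance falloff.

```python
# A minimal sketch of the sense-process-display loop, under assumed names.
# The grid model and falloff are illustrative, not Audi's actual pipeline.

GRID_W, GRID_H = 32, 24  # LED surface modeled as a coarse brightness grid

def render_presence(positions, radius=3.0):
    """Turn tracked visitor positions (grid coords) into a brightness grid.

    Each visitor lights up nearby cells with a simple distance falloff,
    so movement across the floor reads as a moving pattern of light.
    """
    grid = [[0.0] * GRID_W for _ in range(GRID_H)]
    for (px, py) in positions:
        for y in range(GRID_H):
            for x in range(GRID_W):
                dist = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if dist < radius:
                    # Brightest directly under the visitor, fading outward.
                    grid[y][x] = max(grid[y][x], 1.0 - dist / radius)
    return grid

# One frame of the loop: camera input -> pattern -> display
frame = render_presence([(5, 5), (20, 10)])
```

In the real installation this loop runs continuously, so each new camera frame becomes a new pattern on the floor.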

In global mobility and smart-city work, embodied demos beat decks when you need belief fast.

The punchline: the street becomes an interface

This is a future-city story told through interaction, not a render. You do not watch a concept. You walk on it. The floor responds, and suddenly “data-driven public space” is something you can feel in your body. Here, “data-driven public space” means a shared surface that senses movement and responds with immediate feedback.

In smart city and mobility innovation, the fastest way to make future infrastructure feel believable is to turn sensing and responsiveness into a physical interaction people can experience in seconds.

Why it holds your attention

Because it turns an abstract topic (infrastructure sharing, sensing, autonomous behavior) into a single, legible experience. Your movement creates immediate visual feedback, and that feedback makes the bigger idea believable for a moment.

Extractable takeaway: If a future system is hard to explain, compress it into one cause-and-effect loop a person can control, then let the feedback do the convincing.

What Audi is signaling here

The real question is whether a smart-city vision can be made legible through a single, shared interaction.

A vision of cities where surfaces sense movement continuously and systems adapt in real time. Not just cars that navigate, but environments that respond.

Moves to borrow for experiential design

  • Make the future physical: Translate complex futures into one physical interaction people can understand instantly.
  • Show the feedback loop: Use real-time input, processing, output, so the concept feels alive.
  • Let visitors generate the proof: Make the visitor the driver of the demo; their own movement is the evidence.

A few fast answers before you act

What did Audi build for Design Miami 2011?

A 190 m² three-dimensional LED surface installation showcasing an “urban future” concept tied to the Audi A2 concept.

What was the installation demonstrating?

A future city surface that continuously gathers information about people’s movements and enables vehicles to interact with the environment.

How was visitor movement captured?

Visitor movement was captured via 11 Xbox Kinect cameras mounted above visitors’ heads, feeding live inputs to tracking software.

What was the core mechanic?

Real-time tracking of visitor movement was translated into dynamic patterns displayed on the LED surface.

Why did this format make the idea feel believable fast?

Because visitors could trigger immediate feedback with their own movement, turning an abstract “responsive city” claim into a felt experience.

NikeID Loop: Sneaker Customization Concept

Here is another interesting concept coming out of Miami Ad School, this time for Nike.

Since Nike has a huge range of sneakers, it’s next to impossible to try each one of them in a store. In fact, most stores can’t even stock them all.

So a unique interactive mirror using Microsoft’s Kinect technology was created to customize the sneakers on the user’s feet. This way, one could try on every pair of Nike sneakers ever made in record time.

The core problem this concept tackles

Retail has a physical constraint. Shelf space. Inventory. Time. Nike’s catalog depth makes “try everything” impossible, even in flagship stores.

This concept flips the constraint by moving variety from physical inventory into a digital layer, while keeping the try-on moment anchored in the body. By “digital layer” here, I mean a live overlay that swaps variants in the mirror without needing physical stock. Your feet. Your stance. Your movement.

The real question is how you let shoppers explore more options without turning the store into a warehouse or the decision into homework.

Why the mirror mechanic is powerful

Because the mirror tracks movement and renders variants instantly, it keeps the try-on believable in motion, which is what makes fast switching persuasive instead of gimmicky.
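The mechanic can be sketched as a tiny session object: a catalog of shoe “skins,” an index the shopper advances with a gesture, and a per-frame overlay anchored to the tracked foot position. The catalog IDs and structure here are assumptions for illustration, not the actual NikeID Loop implementation.

```python
# A minimal sketch of the variant-switching mirror mechanic.
# Catalog IDs and data shapes are hypothetical.

CATALOG = ["air-max-90", "cortez", "dunk-low", "pegasus"]

class MirrorSession:
    def __init__(self, catalog):
        self.catalog = catalog
        self.index = 0

    def next_variant(self):
        """A gesture advances to the next sneaker in the digital layer."""
        self.index = (self.index + 1) % len(self.catalog)
        return self.catalog[self.index]

    def frame(self, foot_box):
        """Compose one mirror frame: current variant anchored to the feet.

        foot_box is (x, y, w, h) from the tracker; the overlay follows it
        every frame so the try-on stays believable while the shopper moves.
        """
        return {"variant": self.catalog[self.index], "overlay_at": foot_box}

session = MirrorSession(CATALOG)
session.next_variant()                      # shopper flips to the next model
frame = session.frame((120, 400, 80, 40))   # tracker reports foot position
```

The key design point is that switching variants only changes the `variant` field, while the tracking loop keeps re-anchoring the overlay, which is what makes rapid browsing feel like a real try-on.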

Extractable takeaway: When you can add choice in software while preserving an embodied try-on moment, you reduce assortment friction without reducing confidence.

  • It keeps context real. You see the shoe on you, not on a product page.
  • It compresses decision time. Rapid switching creates a new kind of “browsing”.
  • It turns discovery into play. The experience is inherently interactive, which increases dwell time.
  • It reduces inventory friction. The store can showcase breadth without stocking breadth.

In retail environments where shoppers want high-confidence fit and style decisions in minutes, embodied digital try-on can expand perceived assortment without expanding stock.

What this implies for customization and personalization

NikeID is already about making a product feel personal. A Kinect-style mirror extends that by making customization immediate and visual, which can increase confidence before purchase.

This kind of embodied customization is worth betting on, because it makes breadth feel real without demanding more shelf space.

The concept also suggests a future where “catalog” becomes a service layer. The physical store is the stage for decision-making, not a warehouse for options.

What to take from this if you run retail CX

  1. Start with the constraint. Space and assortment are physical limits. Digital can expand them.
  2. Keep the experience embodied. Seeing a product on yourself is stronger than seeing it on a screen.
  3. Design for speed. Rapid iteration can become a feature, not a compromise.
  4. Make the output actionable. The experience should flow naturally into saving, sharing, or ordering.

A few fast answers before you act

What is the NikeID Loop concept?

It is a Miami Ad School concept for Nike that uses an interactive mirror and Microsoft Kinect technology to let users customize and “try” different Nike sneakers on their feet digitally.

What problem does it solve in stores?

It addresses the fact that Nike’s full range of sneakers cannot be stocked or tried in one location, by shifting variety into a digital interface.

Why use Kinect or motion tracking?

Motion tracking lets the system align the visual shoe to the user’s feet in real time, keeping the experience believable as people move.

Is this a product or a concept?

In this case, it is presented as a concept coming out of Miami Ad School, showing a possible direction for interactive retail.

What is the transferable lesson?

If you can remove physical constraints through an embodied digital layer, you can increase choice, speed, and confidence without expanding inventory.

NuFormer: Interactive 3D video mapping test

After executing 3D video mapping projections onto objects and buildings worldwide, NuFormer adds interactivity to the mix in this test.

Here the spectators become the controllers and interact with the building in real time using gesture-based tracking (Kinect). People influence the projected content using an iPad, iPhone, or a web-based application available on both mobile and desktop. For this test, Facebook interactivity is used, but the idea is that other social media signals could also be incorporated.

From mapped surface to live interface

Projection mapping usually works like a film played on architecture. This flips it into a live system. The building is still the canvas, but the audience becomes an input layer. Gesture tracking drives the scene changes, and second-screen control, meaning a phone or browser used as a remote, extends participation beyond the people standing closest to the sensor.
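One simple way to picture the “audience as input layer” idea is a single control queue that both the gesture tracker and the second screens feed, which the render loop then drains in order. Event names and shapes here are illustrative assumptions, not NuFormer's architecture.

```python
# A minimal sketch of merging gesture and second-screen input into one
# control stream for the projection. Event names are hypothetical.

from queue import Queue

controls = Queue()  # single queue the render loop consumes

def on_gesture(kind):
    """Kinect-side events from people near the sensor."""
    controls.put({"source": "gesture", "action": kind})

def on_remote(device, action):
    """Second-screen events from phones or browsers anywhere in the crowd."""
    controls.put({"source": device, "action": action})

# Inputs arrive from both layers...
on_gesture("wave")
on_remote("iphone", "change-scene")

# ...and the render loop drains them in arrival order, so every input
# produces a visible change regardless of where it came from.
events = [controls.get() for _ in range(controls.qsize())]
```

Funneling everything through one queue is what lets a phone tap and a wave feel like equal citizens on the facade.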

Extractable takeaway: Interactive mapping is most compelling when the control model, the set of simple inputs people can learn instantly (wave, move, tap), is legible at a glance and the projection responds quickly enough that people trust the cause-and-effect.

In large-scale public brand experiences, projection mapping becomes more than spectacle when it gives the crowd meaningful viewer control instead of a one-way show.

Why the “crowd as controller” move matters

Interactivity changes what people remember. A passive crowd remembers visuals. An active crowd remembers ownership. The moment someone realises their movement, phone, or social input changes the facade, the projection stops being “content” and becomes “play.”

The real question is whether your interaction model makes people feel in control within seconds, or confused for minutes.

Because the facade responds immediately to a person’s input, the crowd shifts from watching to experimenting, which keeps people around long enough to teach each other and try again.

That also changes the social dynamics around the installation. People look for rules, teach each other controls, and stick around to try again. The result is longer dwell time and more organic filming, because participation is the story.

What brands can do with this, beyond a tech demo

As described in coverage and in NuFormer’s own positioning, branded content, logos, or product placement can be incorporated into interactive projection applications. The strategic upside is that you can design a brand moment that is co-created by the crowd, rather than merely watched.

When social signals are part of the input (Facebook in this case), the experience can also create a bridge between the physical venue and online participation. That hybrid loop is where campaigns can scale.

Patterns for your next mapping brief

  • Pick one primary control. Gesture, phone, or web. Then add a secondary layer only if it increases participation rather than confusion.
  • Make feedback immediate. The projection must respond fast or people assume it is fake or broken.
  • Design for “spectator comprehension.” Bystanders should understand what changed and why, from a distance.
  • Use social inputs carefully. Keep the mapping between input and output obvious so it feels fair, not random.
  • Plan for crowd flow. Interactive mapping is choreography. Sensors, sightlines, and safe space matter as much as visuals.

A few fast answers before you act

What is “interactive projection mapping” in this NuFormer test?

It is 3D projection mapping where the projected content changes in real time based on audience input. Here that input includes Kinect gesture tracking plus control via iPad, iPhone, and web interfaces.

Why add phones and web control when you already have gesture tracking?

Gesture tracking usually limits control to people near the sensor. Second-screen control expands participation to more people and enables a clearer “turn-taking” interaction model.
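A turn-taking model like this can be sketched as a simple queue of participants who each get a fixed control slot. The slot length and structure are assumptions for illustration, not something specified in the NuFormer test.

```python
# A minimal sketch of a turn-taking controller for second-screen users.
# SLOT_SECONDS and the class shape are hypothetical.

from collections import deque

SLOT_SECONDS = 15  # assumed length of one person's turn

class TurnTaker:
    def __init__(self):
        self.waiting = deque()
        self.current = None

    def join(self, user):
        """A phone or browser session enters the queue."""
        self.waiting.append(user)

    def advance(self):
        """Called when a slot expires: hand control to the next in line."""
        self.current = self.waiting.popleft() if self.waiting else None
        return self.current

turns = TurnTaker()
turns.join("phone-123")
turns.join("web-456")
first = turns.advance()   # "phone-123" now controls the facade
second = turns.advance()  # then "web-456"
```

The queue makes participation feel fair at scale: everyone who joins eventually gets a visible moment of control.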

How does Facebook interactivity fit into a projection experience?

It acts as an additional input stream, letting social actions influence what appears on the building. The key is to make the mapping from social action to visual change understandable.
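Keeping that mapping understandable mostly means one social action maps to exactly one legible visual effect. A sketch under assumed names (the actions and effects are illustrative, not what NuFormer shipped):

```python
# A minimal sketch of a legible social-input -> visual-effect mapping.
# Action and effect names are hypothetical.

SOCIAL_EFFECTS = {
    "like": "pulse-of-light",
    "comment": "ripple-across-facade",
    "share": "color-wave",
}

def social_to_visual(action):
    """One social action maps to exactly one visual effect.

    Unknown actions are ignored rather than mapped to something random,
    so the crowd can learn the rules just by watching.
    """
    return SOCIAL_EFFECTS.get(action)

effect = social_to_visual("like")
```

A fixed, small mapping like this is what keeps the experience feeling fair rather than arbitrary.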

What is the biggest failure mode for interactive mapping?

Latency and ambiguity. If the response is slow or the control rules are unclear, crowds disengage quickly because they cannot tell whether their input matters.
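Latency is also the easiest failure mode to instrument. A sketch of a simple input-to-render latency check, where the 200 ms budget is an assumption for illustration rather than a measured perceptual constant:

```python
# A minimal sketch of watching input->render latency during an activation.
# LATENCY_BUDGET_MS is an assumed threshold, not a standard.

LATENCY_BUDGET_MS = 200  # beyond this, cause-and-effect starts to feel broken

def latency_report(samples_ms):
    """Summarize input->render latencies and flag budget violations."""
    worst = max(samples_ms)
    avg = sum(samples_ms) / len(samples_ms)
    return {
        "avg_ms": avg,
        "worst_ms": worst,
        "within_budget": worst <= LATENCY_BUDGET_MS,
    }

report = latency_report([80, 120, 95, 310])  # one slow frame blows the budget
```

Checking the worst case, not just the average, matters here: a single laggy response is often the moment a crowd decides the installation is fake or broken.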

What should a brand measure in an interactive mapping activation?

Dwell time, participation rate (people who trigger changes), repeat interaction, crowd size over time, and the volume and quality of user-captured video shared during the event window.
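Two of those measures, participation rate and dwell time, fall out of a simple event log. A sketch using a hypothetical log format (the field names are assumptions):

```python
# A minimal sketch of computing participation rate and average dwell time
# from a hypothetical per-visitor event log.

def activation_metrics(visits):
    """visits: list of dicts with 'dwell_s' and 'triggered' (bool)."""
    n = len(visits)
    participants = sum(1 for v in visits if v["triggered"])
    return {
        "participation_rate": participants / n,
        "avg_dwell_s": sum(v["dwell_s"] for v in visits) / n,
    }

log = [
    {"dwell_s": 30, "triggered": False},
    {"dwell_s": 90, "triggered": True},
    {"dwell_s": 60, "triggered": True},
    {"dwell_s": 20, "triggered": False},
]
metrics = activation_metrics(log)
```

Even this crude split is useful: if dwell time is high but participation rate is low, people are watching rather than playing, which usually points back at an unclear control model.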