Audi: Urban Future at Design Miami 2011

A 190 m² LED city surface that reacts to people

To showcase its A2 concept at Design Miami 2011, Audi created a 190 m² three-dimensional LED surface that offered a glimpse of the future of our cities, where infrastructure and public space are shared between pedestrians and driverless cars. The installation demonstrated how the city surface would continuously gather information about people’s movements and allow vehicles to interact with the environment.

The installation used a real-time graphics engine and tracking software that received live inputs from 11 Xbox Kinect cameras mounted above the visitors’ heads. The cameras captured visitor movement, which the engine processed into dynamic patterns displayed on the LED surface.
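The installation’s software was never published, but the loop described here (camera input, tracking, LED output) is easy to sketch. Below is a minimal, hypothetical Python version of that pipeline: read_visitor_positions stands in for the fused Kinect tracking layer, the grid size is assumed, and the returned frame stands in for whatever drove the real LED surface.

```python
import numpy as np

GRID_W, GRID_H = 64, 48  # hypothetical LED grid resolution

def read_visitor_positions(rng):
    """Stand-in for the tracking layer: returns (x, y) floor positions
    in grid coordinates. In the installation, this input came from 11
    overhead Kinect cameras fused by tracking software."""
    n_visitors = rng.integers(1, 6)
    return rng.uniform([0, 0], [GRID_W, GRID_H], size=(n_visitors, 2))

def render_ripples(positions, radius=6.0):
    """Processing step: turn each tracked position into a glowing
    ripple on the surface (simple radial falloff per visitor)."""
    ys, xs = np.mgrid[0:GRID_H, 0:GRID_W]
    frame = np.zeros((GRID_H, GRID_W))
    for x, y in positions:
        dist = np.hypot(xs - x, ys - y)
        frame += np.clip(1.0 - dist / radius, 0.0, 1.0)
    return np.clip(frame, 0.0, 1.0)

def tick(rng):
    """One iteration of the input -> processing -> output loop."""
    positions = read_visitor_positions(rng)  # input
    frame = render_ripples(positions)        # processing
    return frame                             # output: push to the LED driver

if __name__ == "__main__":
    frame = tick(np.random.default_rng(0))
    print(f"lit pixels: {(frame > 0.1).sum()} of {frame.size}")
```

In a real rig, tick would run at the display’s refresh rate, which is exactly the input-processing-output feedback loop discussed below.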

The punchline: the street becomes an interface

This is a future-city story told through interaction, not a render. You do not watch a concept. You walk on it. The floor responds, and suddenly “data-driven public space” is something you can feel in your body.

In smart city and mobility innovation, the fastest way to make future infrastructure feel believable is to turn sensing and responsiveness into a physical interaction people can experience in seconds.

Why it holds your attention

Because it turns an abstract topic (infrastructure sharing, sensing, autonomous behavior) into a single, legible experience. Your movement creates immediate visual feedback, and that feedback makes the bigger idea believable for a moment.

What Audi is signaling here

A vision of cities where surfaces sense movement continuously and systems adapt in real time. Not just cars that navigate, but environments that respond.

What to steal for experiential design

  • Translate complex futures into one physical interaction people can understand instantly.
  • Use real-time feedback loops (input, processing, output) so the concept feels alive.
  • Make the visitor the driver of the demo. Their movement should generate the proof.

A few fast answers before you act

What did Audi build for Design Miami 2011?

A 190 m² three-dimensional LED surface installation showcasing an “urban future” concept tied to the Audi A2 concept.

What was the installation demonstrating?

A future city surface that continuously gathers information about people’s movements and enables vehicles to interact with the environment.

How was visitor movement captured?

The post says 11 Xbox Kinect cameras mounted above visitors’ heads provided live inputs to tracking software.

What was the core mechanic?

Real-time tracking of visitor movement translated into dynamic patterns displayed on the LED surface, visualizing how a responsive city surface might behave.

NikeID Loop – Sneaker Customization Concept

Here is another interesting concept coming out of Miami Ad School, this time for Nike.

Since Nike has a huge range of sneakers, it’s next to impossible to try each one of them at the store. In fact, it’s not even possible to find them all in a single store.

So a unique interactive mirror using Microsoft’s Kinect technology was created to customize the sneakers on the user’s feet. This way, one could try on every pair of Nike sneakers ever made in record time.

The core problem this concept tackles

Retail has a physical constraint. Shelf space. Inventory. Time. Nike’s catalog depth makes “try everything” impossible, even in flagship stores.

This concept flips the constraint by moving variety from physical inventory into a digital layer, while keeping the try-on moment anchored in the body. Your feet. Your stance. Your movement.

Why the mirror mechanic is powerful

  • It keeps context real. You see the shoe on you, not on a product page.
  • It compresses decision time. Rapid switching creates a new kind of “browsing”.
  • It turns discovery into play. The experience is inherently interactive, which increases dwell time.
  • It reduces inventory friction. The store can showcase breadth without stocking breadth.

What this implies for customization and personalization

NikeID is already about making a product feel personal. A Kinect-style mirror extends that by making customization immediate and visual, which can increase confidence before purchase.

The concept also suggests a future where “catalog” becomes a service layer. The physical store is the stage for decision-making, not a warehouse for options.

What to take from this if you run retail CX

  1. Start with the constraint. Space and assortment are physical limits. Digital can expand them.
  2. Keep the experience embodied. Seeing a product on yourself is stronger than seeing it on a screen.
  3. Design for speed. Rapid iteration can become a feature, not a compromise.
  4. Make the output actionable. The experience should flow naturally into saving, sharing, or ordering.

A few fast answers before you act

What is the NikeID Loop concept?

It is a Miami Ad School concept for Nike that uses an interactive mirror and Microsoft Kinect technology to let users customize and “try” different Nike sneakers on their feet digitally.

What problem does it solve in stores?

It addresses the fact that Nike’s full range of sneakers cannot be stocked or tried in one location, by shifting variety into a digital interface.

Why use Kinect or motion tracking?

Motion tracking lets the system align the visual shoe to the user’s feet in real time, keeping the experience believable as people move.
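The post does not detail the tracking implementation, so here is a minimal, hypothetical Python sketch of the alignment step it implies: deriving position, rotation, and scale for a virtual shoe overlay from two tracked foot joints. The joint names and the 2D compositing model are assumptions; a real Kinect pipeline exposes 3D skeleton joints (e.g. ankle and foot) that would be projected into screen space first.

```python
import math
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def shoe_overlay_transform(ankle: Point, toe: Point, base_len: float = 100.0):
    """Derive (anchor, rotation, scale) for a virtual shoe sprite from
    two tracked foot joints, so a 2D compositor can pin the shoe to
    the foot. base_len is the sprite's unscaled heel-to-toe length."""
    dx, dy = toe.x - ankle.x, toe.y - ankle.y
    foot_len = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx))  # heel-to-toe direction
    scale = foot_len / base_len               # fit the sprite to this foot
    return ankle, angle, scale

# Re-run every frame with fresh joint positions so the rendered shoe
# stays locked to the foot while the user moves.
anchor, angle, scale = shoe_overlay_transform(Point(320, 400), Point(360, 470))
print(f"anchor=({anchor.x}, {anchor.y}) angle={angle:.1f} deg scale={scale:.2f}")
```

The per-frame re-fit is what keeps the illusion believable: as long as the transform updates faster than the user moves, the shoe reads as “on” the foot rather than floating near it.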

Is this a product or a concept?

In this case, it is presented as a concept coming out of Miami Ad School, showing a possible direction for interactive retail.

What is the transferable lesson?

If you can remove physical constraints through an embodied digital layer, you can increase choice, speed, and confidence without expanding inventory.

NuFormer: Interactive 3D video mapping test

After executing 3D video mapping projections onto objects and buildings worldwide, NuFormer adds interactivity to the mix in this test.

Here the spectators become the controllers, interacting with the building in real time through gesture-based tracking (Kinect). People influence the projected content using an iPad, iPhone, or a web-based application available on both mobile and desktop. For this test, Facebook interactivity is used, but the idea is that other social media signals can also be incorporated.

In large-scale public brand experiences, projection mapping becomes more than spectacle when it gives the crowd meaningful control instead of a one-way show.

From mapped surface to live interface

Projection mapping usually works like a film played on architecture. This flips it into a live system. The building is still the canvas, but the audience becomes an input layer. Gesture tracking drives the scene changes, and second-screen control extends participation beyond the people standing closest to the sensor.
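NuFormer has not published how its test merges these channels, but the “audience as input layer” idea can be sketched simply: every channel pushes events into one queue, and the render loop drains it each frame. The class, event names, and scene names below are all hypothetical.

```python
from collections import deque

class ProjectionState:
    """Hypothetical input layer for an interactive mapping rig. All
    channels (gesture sensor, phone app, web app, social feed) push
    events into one queue; the render loop drains it every frame so
    the facade reacts to whoever acted most recently."""

    def __init__(self):
        self.events = deque()
        self.scene = "idle"  # assumed scenes: idle, waves, burst, logo_reveal

    def push(self, source, action):
        # source: "kinect" | "ipad" | "web" | "facebook"; action: a named trigger
        self.events.append((source, action))

    def drain(self):
        """Run once per render frame; maps queued inputs to scene changes."""
        while self.events:
            _source, action = self.events.popleft()
            if action == "wave":
                self.scene = "waves"
            elif action == "tap":
                self.scene = "burst"
            elif action == "like":  # social input, e.g. a Facebook action
                self.scene = "logo_reveal"
        return self.scene

state = ProjectionState()
state.push("kinect", "wave")  # gesture near the sensor
state.push("web", "like")     # remote participant
print(state.drain())          # -> logo_reveal (last input wins)
```

A single shared state object like this is also what makes the second-screen layer cheap to add: a new channel only needs to call push, not understand the renderer.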

Standalone takeaway: Interactive mapping is most compelling when the control model is instantly legible (wave, move, tap) and the projection responds quickly enough that people trust the cause-and-effect.

Why the “crowd as controller” move matters

Interactivity changes what people remember. A passive crowd remembers visuals. An active crowd remembers ownership. The moment someone realises their movement, phone, or social input changes the facade, the projection stops being “content” and becomes “play.”

That also changes the social dynamics around the installation. People look for rules, teach each other controls, and stick around to try again. The result is longer dwell time and more organic filming, because participation is the story.

What brands can do with this, beyond a tech demo

As described in coverage and in NuFormer’s own positioning, branded content, logos, or product placement can be incorporated into interactive projection applications. The strategic upside is that you can design a brand moment that is co-created by the crowd, rather than merely watched.

When social signals are part of the input (Facebook in this case), the experience can also create a bridge between the physical venue and online participation. That hybrid loop is where campaigns can scale.

What to steal for your next mapping brief

  • Pick one primary control (gesture, phone, or web), then add a secondary layer only if it increases participation rather than confusion.
  • Make feedback immediate. The projection must respond fast or people assume it is fake or broken.
  • Design for “spectator comprehension.” Bystanders should understand what changed and why, from a distance.
  • Use social inputs carefully. Keep the mapping between input and output obvious so it feels fair, not random.
  • Plan for crowd flow. Interactive mapping is choreography. Sensors, sightlines, and safe space matter as much as visuals.

A few fast answers before you act

What is “interactive projection mapping” in this NuFormer test?

It is 3D projection mapping where the projected content changes in real time based on audience input. Here that input includes Kinect gesture tracking plus control via iPad, iPhone, and web interfaces.

Why add phones and web control when you already have gesture tracking?

Gesture tracking usually limits control to people near the sensor. Second-screen control expands participation to more people and enables a clearer “turn-taking” interaction model.
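A turn-taking model is easy to prototype. The sketch below, with assumed names and a fixed 20-second control window, queues second-screen users so one person at a time drives the projection.

```python
from collections import deque

class TurnQueue:
    """Sketch of a turn-taking controller: second-screen users join a
    queue and each gets a fixed control window, so one person at a
    time drives the projection. All names here are assumptions."""

    def __init__(self, turn_seconds=20.0):
        self.turn_seconds = turn_seconds
        self.queue = deque()
        self.current = None
        self.turn_started = 0.0

    def join(self, user_id):
        self.queue.append(user_id)

    def active_user(self, now):
        """Call once per frame with a monotonic clock (e.g. time.monotonic())."""
        expired = self.current is not None and now - self.turn_started >= self.turn_seconds
        if (self.current is None or expired) and self.queue:
            self.current = self.queue.popleft()
            self.turn_started = now
        return self.current

q = TurnQueue(turn_seconds=20)
q.join("phone-7f3a")
q.join("web-91bc")
print(q.active_user(now=0.0))   # phone-7f3a gets the first turn
print(q.active_user(now=25.0))  # web-91bc takes over after 20 s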

How does Facebook interactivity fit into a projection experience?

It acts as an additional input stream, letting social actions influence what appears on the building. The key is to make the mapping from social action to visual change understandable.

What is the biggest failure mode for interactive mapping?

Latency and ambiguity. If the response is slow or the control rules are unclear, crowds disengage quickly because they cannot tell whether their input matters.

What should a brand measure in an interactive mapping activation?

Dwell time, participation rate (people who trigger changes), repeat interaction, crowd size over time, and the volume and quality of user-captured video shared during the event window.
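As a worked example, participation and repeat-interaction rates fall out of a simple event log. The log format and function below are assumptions for illustration, not anything from the NuFormer test.

```python
def activation_metrics(events):
    """Hypothetical event-log analysis for an interactive activation.
    `events` is a list of (timestamp, person_id, kind) tuples, where
    kind is "enter", "exit", or "trigger" (the person changed the
    projection)."""
    visitors = {p for _, p, k in events if k == "enter"}
    triggers = [p for _, p, k in events if k == "trigger"]
    participants = set(triggers)
    return {
        "visitors": len(visitors),
        "participation_rate": len(participants) / len(visitors) if visitors else 0.0,
        "repeat_rate": (sum(1 for p in participants if triggers.count(p) > 1)
                        / len(participants)) if participants else 0.0,
    }

log = [
    (0, "a", "enter"), (1, "b", "enter"), (2, "a", "trigger"),
    (3, "a", "trigger"), (4, "c", "enter"), (5, "b", "trigger"),
]
print(activation_metrics(log))
# {'visitors': 3, 'participation_rate': 0.666..., 'repeat_rate': 0.5}
```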