NuFormer: Interactive 3D video mapping test

After executing 3D video mapping projections onto objects and buildings worldwide, NuFormer adds interactivity to the mix in this test.

Here the spectators become the controllers and interact with the building in real time using gesture-based tracking (Kinect). People influence the projected content using an iPad, iPhone, or a web-based application available on both mobile and desktop. For this test, Facebook interactivity is used, but the idea is that other social media signals can also be incorporated.

In large-scale public brand experiences, projection mapping becomes more than spectacle when it gives the crowd meaningful control instead of a one-way show.

From mapped surface to live interface

Projection mapping usually works like a film played on architecture. This flips it into a live system. The building is still the canvas, but the audience becomes an input layer. Gesture tracking drives the scene changes, and second-screen control extends participation beyond the people standing closest to the sensor.
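A minimal sketch of that input layer, assuming hypothetical event shapes and a stubbed projection call rather than NuFormer's actual pipeline: gesture events from the sensor and actions from phones or the web all collapse into one small, legible command vocabulary before they reach the facade.

```typescript
// Hypothetical sketch: one "input layer" that merges gesture events and
// second-screen (phone/web) events into a single stream of scene commands.
// Event names, payload shapes, and the projection call are assumptions.

type SceneCommand =
  | { kind: "nextScene" }
  | { kind: "triggerEffect"; effectId: string };

type InputEvent =
  | { source: "kinect"; gesture: "wave" | "swipe-left" | "swipe-right" }
  | { source: "web"; action: string; userId: string };

// Stand-in for the media server driving the mapped content.
function applyToProjection(cmd: SceneCommand): void {
  console.log(`[projection] ${JSON.stringify(cmd)}`);
}

// Translate heterogeneous inputs into the small vocabulary the crowd can
// learn in seconds (wave = next scene, tap = effect).
function toCommand(ev: InputEvent): SceneCommand | null {
  if (ev.source === "kinect") {
    return ev.gesture === "wave"
      ? { kind: "nextScene" }
      : { kind: "triggerEffect", effectId: ev.gesture };
  }
  if (ev.source === "web") {
    return { kind: "triggerEffect", effectId: ev.action };
  }
  return null;
}

// Single entry point for every sensor and app: the building stays one canvas,
// the inputs stay one layer.
export function handleInput(ev: InputEvent): void {
  const cmd = toCommand(ev);
  if (cmd) applyToProjection(cmd);
}

// Example: a gesture near the sensor and a tap from a phone go through the
// same pipeline.
handleInput({ source: "kinect", gesture: "wave" });
handleInput({ source: "web", action: "confetti", userId: "u42" });
```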

Standalone takeaway: Interactive mapping is most compelling when the control model is instantly legible (wave, move, tap) and the projection responds quickly enough that people trust the cause-and-effect.

Why the “crowd as controller” move matters

Interactivity changes what people remember. A passive crowd remembers visuals. An active crowd remembers ownership. The moment someone realises their movement, phone, or social input changes the facade, the projection stops being “content” and becomes “play.”

That also changes the social dynamics around the installation. People look for rules, teach each other controls, and stick around to try again. The result is longer dwell time and more organic filming, because participation is the story.

What brands can do with this, beyond a tech demo

As described in coverage and in NuFormer’s own positioning, branded content, logos, or product placement can be incorporated into interactive projection applications. The strategic upside is that you can design a brand moment that is co-created by the crowd, rather than merely watched.

When social signals are part of the input (Facebook in this case), the experience can also create a bridge between the physical venue and online participation. That hybrid loop is where campaigns can scale.

What to steal for your next mapping brief

  • Pick one primary control. Gesture, phone, or web. Then add a secondary layer only if it increases participation rather than confusion.
  • Make feedback immediate. The projection must respond fast or people assume it is fake or broken (see the acknowledgement sketch after this list).
  • Design for “spectator comprehension.” Bystanders should understand what changed and why, from a distance.
  • Use social inputs carefully. Keep the mapping between input and output obvious so it feels fair, not random.
  • Plan for crowd flow. Interactive mapping is choreography. Sensors, sightlines, and safe space matter as much as visuals.
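On the "make feedback immediate" point, one common pattern is to acknowledge every input with a cheap visual cue inside a tight budget, even when the full scene change takes longer. The sketch below uses assumed timings and stubbed rendering, not NuFormer's figures.

```typescript
// Hypothetical sketch of the "make feedback immediate" rule: acknowledge every
// input within a tight budget (100 ms here is an assumed figure) with a cheap
// cue, even if the full scene transition takes longer to render.

const ACK_BUDGET_MS = 100; // assumed perceptual budget, not a measured value

// Stand-in for a transition that may take a second or more on the facade.
async function runHeavySceneChange(): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 1200));
}

// e.g. a ripple or highlight near the participant's silhouette.
function showCheapAcknowledgement(inputId: string): void {
  console.log(`[ack] input ${inputId} registered`);
}

export async function onParticipantInput(inputId: string): Promise<void> {
  const started = Date.now();
  // Fire the cheap cue first so the cause-and-effect is never in doubt...
  showCheapAcknowledgement(inputId);
  // ...then let the expensive content change follow.
  await runHeavySceneChange();
  const elapsed = Date.now() - started;
  if (elapsed > ACK_BUDGET_MS) {
    console.log(`[timing] full change took ${elapsed} ms; the ack carried the gap`);
  }
}

// Example usage.
void onParticipantInput("gesture-7");
```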

A few fast answers before you act

What is “interactive projection mapping” in this NuFormer test?

It is 3D projection mapping where the projected content changes in real time based on audience input. Here that input includes Kinect gesture tracking plus control via iPad, iPhone, and web interfaces.

Why add phones and web control when you already have gesture tracking?

Gesture tracking usually limits control to people near the sensor. Second-screen control expands participation to more people and enables a clearer “turn-taking” interaction model.
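A turn-taking model can be as simple as a queue. The sketch below is a hypothetical illustration, not the system used in the test: phones join a queue, the current controller is announced on the facade, and inputs from anyone else are ignored until their turn.

```typescript
// Hypothetical turn-taking model for second-screen control. Names and
// durations are assumptions for illustration.

const TURN_LENGTH_MS = 15_000; // assumed turn length

interface Participant {
  id: string;
  label: string; // shown on the facade so the crowd knows whose turn it is
}

class TurnQueue {
  private queue: Participant[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  join(p: Participant): void {
    this.queue.push(p);
    // If nobody is playing yet, start immediately.
    if (this.queue.length === 1) this.startTurn();
  }

  private startTurn(): void {
    const current = this.queue[0];
    if (!current) return;
    console.log(`[facade] ${current.label} is in control`);
    this.timer = setTimeout(() => this.endTurn(), TURN_LENGTH_MS);
  }

  private endTurn(): void {
    if (this.timer) clearTimeout(this.timer);
    this.queue.shift();
    this.startTurn();
  }

  // Inputs from anyone who is not the current controller are ignored,
  // which keeps the control rules legible to bystanders.
  handleInput(fromId: string, action: string): void {
    const current = this.queue[0];
    if (current && current.id === fromId) {
      console.log(`[projection] ${action} by ${current.label}`);
    }
  }
}

// Example usage.
const queue = new TurnQueue();
queue.join({ id: "a", label: "Phone A" });
queue.join({ id: "b", label: "Phone B" });
queue.handleInput("a", "change-colour"); // accepted
queue.handleInput("b", "change-colour"); // ignored until B's turn
```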

How does Facebook interactivity fit into a projection experience?

It acts as an additional input stream, letting social actions influence what appears on the building. The key is to make the mapping from social action to visual change understandable.

What is the biggest failure mode for interactive mapping?

Latency and ambiguity. If the response is slow or the control rules are unclear, crowds disengage quickly because they cannot tell whether their input matters.

What should a brand measure in an interactive mapping activation?

Dwell time, participation rate (people who trigger changes), repeat interaction, crowd size over time, and the volume and quality of user-captured video shared during the event window.
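As a rough illustration of how some of those numbers could fall out of an interaction log, here is a hypothetical sketch; the log format and the crowd-size estimate are assumptions, not a reporting spec.

```typescript
// Hypothetical measurement sketch: given a simple event log from the
// installation, compute participation rate, repeat interaction, and the
// active window.

interface InteractionEvent {
  userId: string;
  timestamp: number; // ms since epoch
}

export function summarise(events: InteractionEvent[], estimatedCrowdSize: number) {
  const byUser = new Map<string, number>();
  for (const ev of events) {
    byUser.set(ev.userId, (byUser.get(ev.userId) ?? 0) + 1);
  }

  const participants = byUser.size;
  const repeatUsers = [...byUser.values()].filter((count) => count > 1).length;
  const times = events.map((ev) => ev.timestamp);
  const activeWindowMinutes =
    times.length > 1 ? (Math.max(...times) - Math.min(...times)) / 60_000 : 0;

  return {
    participationRate: estimatedCrowdSize > 0 ? participants / estimatedCrowdSize : 0,
    repeatInteractionRate: participants > 0 ? repeatUsers / participants : 0,
    activeWindowMinutes,
  };
}

// Example: three people triggered changes out of an estimated crowd of 50.
console.log(
  summarise(
    [
      { userId: "u1", timestamp: 0 },
      { userId: "u1", timestamp: 60_000 },
      { userId: "u2", timestamp: 120_000 },
      { userId: "u3", timestamp: 180_000 },
    ],
    50,
  ),
);
```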

Fiat Street Evo

Leo Burnett Iberia has launched a new app called Fiat Street Evo [iTunes Link], which is the world's first non-printed car catalogue! It's a catalogue that is virtually on every street in your city!

Fiat Street Evo recognizes traffic signs as if they were QR codes and associates each sign with a feature of the new Fiat Punto Evo. For example, a STOP sign tells the user all about the new braking system, and a CURVE AHEAD sign tells the user that the car has an intelligent lighting system that guides you through curves. And the list goes on with every feature of the car.
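The mechanic reduces to recognition plus a lookup table. The sketch below is a hypothetical reconstruction built only from the two examples above; the recognition step is stubbed, since the app's actual computer-vision approach isn't described.

```typescript
// Hypothetical sketch of the catalogue logic: recognise a traffic sign
// (stubbed here) and look up the Punto Evo feature associated with it.
// Sign labels and copy are assumptions drawn from the examples in the text.

type SignType = "STOP" | "CURVE_AHEAD" | "UNKNOWN";

const featureBySign: Record<Exclude<SignType, "UNKNOWN">, string> = {
  STOP: "Braking system: how the new Punto Evo stops shorter.",
  CURVE_AHEAD: "Intelligent lighting that guides you through curves.",
};

// Stand-in for the computer-vision step that treats the sign like a QR code.
function recogniseSign(imageLabel: string): SignType {
  if (imageLabel.includes("stop")) return "STOP";
  if (imageLabel.includes("curve")) return "CURVE_AHEAD";
  return "UNKNOWN";
}

export function catalogueEntryFor(imageLabel: string): string {
  const sign = recogniseSign(imageLabel);
  return sign === "UNKNOWN"
    ? "No catalogue entry for this sign yet."
    : featureBySign[sign];
}

// Example: pointing the phone at a stop sign surfaces the braking feature.
console.log(catalogueEntryFor("photo-of-stop-sign"));
```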

Reporters Without Borders: QR Codes That Speak

You scan a QR code in a magazine ad, then hold your iPhone over a leader’s mouth. The mouth starts talking. But it is not the leader’s voice. It is a journalist explaining what censorship looks like in that country.

Print ads are punching above their weight lately. Recently, you could test-drive a Volkswagen right inside a print ad, thanks to a special app. Now, QR codes are used to get dictators talking in a set of print ads created by Publicis Brussels for the free-press advocacy group Reporters Without Borders (RWB).

In the ads for RWB you scan the QR code with your iPhone and then place the phone over the leader’s mouth. The mouth starts talking, but it turns out to be the voice of a journalist discussing media censorship in that particular country.

Currently there are Gaddafi, Ahmadinejad and Putin versions.

In public-interest and advocacy communication, this kind of print-to-phone interaction works because it turns a static message into a lived moment of contradiction. The “authoritarian voice” is visually present, but the truth comes from someone who is usually silenced.

How the ad “speaks”

The mechanism is a simple overlay. The printed QR code launches a mobile experience, and the phone screen becomes the animated mouth layer when you align it with the face in the ad.

Definition-tightening: QR codes, short for Quick Response codes, act as a bridge from paper to a mobile destination. The ad uses that bridge to deliver audio and motion, without needing the page itself to be electronic.
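As a hypothetical sketch of what sits on the other side of that bridge, the snippet below shows a browser page that plays the journalist's audio and animates a mouth layer in sync. The element id, asset path, and class name are illustrative, not the campaign's real assets.

```typescript
// Hypothetical browser-side sketch of the overlay: the QR code resolves to a
// page whose script plays the journalist's audio and animates a mouth layer
// the user aligns with the printed face.

function startOverlay(): void {
  const mouth = document.getElementById("mouth-layer");
  const audio = new Audio("/audio/journalist-statement.mp3"); // hypothetical asset

  if (!mouth) return;

  // Animate only while the audio is actually playing, so the visible mouth
  // movement and the voice stay in sync.
  audio.addEventListener("play", () => mouth.classList.add("talking"));
  audio.addEventListener("ended", () => mouth.classList.remove("talking"));

  // Autoplay is often blocked on mobile, so start on the user's first tap.
  document.addEventListener("pointerdown", () => void audio.play(), { once: true });
}

document.addEventListener("DOMContentLoaded", startOverlay);
```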

Why this lands harder than a normal poster

The interaction forces you to participate in the message. You physically place your device over the mouth, so you are complicit in “giving a voice”. Then the reveal flips expectations and reframes the act as a statement about censorship.

What to steal for interactive print

  • Make the overlay do meaning work. The phone is not a gimmick. It is the message delivery device.
  • Engineer a single, clear reveal. The twist needs to land in seconds.
  • Design for alignment and clarity. If the user cannot line it up easily, they quit.
  • Keep the outcome unmistakable. Audio plus a visible mouth movement makes the payoff obvious.

A few fast answers before you act

What is the core idea of these Reporters Without Borders print ads?

They use a QR code and a phone overlay to make a leader’s mouth appear to speak, then reveal a journalist’s voice explaining censorship in that country.

Why use QR codes in a print campaign like this?

QR codes create a fast bridge from paper to mobile audio and motion, which lets print deliver a message that feels alive rather than static.

What makes this more than a tech trick?

The interaction supports the meaning. You “activate” speech, then hear the voice of journalism instead of power, which reinforces the theme of suppressed information.

What are the main execution risks?

Poor alignment, slow loading, or unclear instructions. Any friction can break the moment before the reveal lands.

How can brands apply the pattern without copying the politics?

Use print as the stage and mobile as the moving layer. Make the overlay essential to the message, and build toward one clean, immediate reveal.