Feel the View

Ford in Italy, together with agency GTB Rome, teams up with Aedo, a local start-up that creates devices for people with visual impairments. Together they design a prototype device that attaches to a car window and decodes the landscape outside, allowing visually impaired passengers to experience it with the tip of their fingers.

The device transforms the flat surface of a car window into a tactile display. The prototype captures photos via an integrated camera and converts them into haptic sensory stimuli. Here, “haptic” means tactile patterns you can feel with your fingertips. The result is not primarily visual. It is perceptible through touch and hearing.
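The exact image-to-touch pipeline is not described publicly, but the general idea can be sketched: downsample each camera frame into a coarse grid, where each cell's brightness drives one vibration zone on the glass. A minimal illustration in Python, using plain lists (the function name, grid model, and brightness-to-intensity mapping are assumptions, not Aedo's actual method):

```python
def image_to_haptic_grid(pixels, grid_rows, grid_cols):
    """Downsample a grayscale image (list of rows, values 0-255) into a
    coarse grid of haptic intensities in [0, 1], one cell per vibration zone."""
    img_h, img_w = len(pixels), len(pixels[0])
    grid = []
    for gr in range(grid_rows):
        # Row range of source pixels covered by this grid row
        r0, r1 = gr * img_h // grid_rows, (gr + 1) * img_h // grid_rows
        row = []
        for gc in range(grid_cols):
            # Column range of source pixels covered by this grid cell
            c0, c1 = gc * img_w // grid_cols, (gc + 1) * img_w // grid_cols
            block = [pixels[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            # Average brightness of the block, normalised to a 0-1 intensity
            row.append(sum(block) / len(block) / 255.0)
        grid.append(row)
    return grid
```

A bright sky over a dark road, for instance, would come out as strong vibration in the top cells and little or none in the bottom ones, letting a passenger trace the horizon with a fingertip.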

In automotive and mobility experience design, the real bar is whether the same journey can be translated across senses without creating a separate experience.

Why this matters as accessible experience design

This is an assistive interface built around a real, emotional moment. Looking out of a window during a drive. It treats “the view” as an experience that can be translated into other senses, rather than a privilege reserved for sighted passengers. Because the window is where attention naturally goes, using it as the tactile surface makes participation feel shared rather than segregated.

Extractable takeaway: If you want inclusive innovation to land, translate the same moment into multiple senses instead of designing a parallel version of the experience.

Inclusive innovation should be judged by whether it expands participation in the same moment, not by how novel the technology sounds.

The product idea in one line

Capture what is outside the car, then render it on the window surface as a tactile and audio layer that can be explored in real time.

The real question is whether your design lets people participate in the same moment as everyone else, without extra friction or stigma.

What to take from this if you build inclusive innovation

  • Start with a human moment. Here, it is shared travel and the desire to participate in what others are seeing.
  • Use the environment as the interface. The window is already where attention goes. It becomes the display.
  • Translate, do not replace. The concept does not mimic sight. It converts the same input into touch and sound.

A few fast answers before you act

What is “Feel the View”?

A Ford Italy concept with GTB Rome and Aedo that prototypes a car-window device converting outside landscapes into a tactile and audio experience for visually impaired passengers.

How does the prototype work at a high level?

An integrated camera captures what is outside, then the system transforms the input into haptic stimuli on the window surface, supported by audio cues.

What is the core design principle?

Make the experience accessible by translating the same real-world scene into senses the user can rely on, in the moment.

Is this a production product or a prototype concept?

It is described as a prototype concept rather than a production feature, so treat it as a design pattern more than a released product.

What can you apply even if you do not build haptics?

Start from a shared human moment, pick the surface where attention already goes, then translate the same scene into other senses instead of creating a parallel experience.

JWT Brazil: Black Bar Donation

Videos that are recorded vertically and then posted online generally end up with black bars on either side. Lots of viewers find that wasted space annoying. So JWT Brazil came up with the “Black Bar Donation” campaign, which lets creators donate those bars to NGOs that need help promoting themselves.

On the campaign microsite, people select a vertical video to upload, tag it with the NGO of choice, and then publish it directly to their own channel with the NGO messaging living inside the black bars.

Turning a formatting mistake into donated media

The idea is neat because it starts from a real irritation. The bars are normally dead space. Here they become a donation surface that travels with the content, wherever the video gets shared or embedded. By “donation surface,” I mean a fixed, consistently visible part of the frame reserved for the NGO message. The “media spend” is created from a mistake people already make every day.
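To make the geometry of that donation surface concrete: a vertical video scaled into a horizontal player leaves two side bars whose rectangles can hold the NGO message. A minimal sketch (function name and coordinate convention are illustrative, not from the campaign):

```python
def pillarbox_bars(frame_w, frame_h, video_w, video_h):
    """Scale a vertical video to the frame height and return the two side
    bars as (x, y, width, height) rectangles: the 'donation surface'."""
    # Video width after scaling it to fill the frame height
    scaled_w = round(video_w * frame_h / video_h)
    bar_w = (frame_w - scaled_w) // 2
    left = (0, 0, bar_w, frame_h)
    right = (frame_w - bar_w, 0, bar_w, frame_h)
    return left, right
```

For a 1080×1920 phone clip in a 1920×1080 player, each bar comes out 656 pixels wide and full frame height: plenty of room for a logo and a short call to action on both sides.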

The mechanism: creator-led distribution with a cause payload

Traditional NGO awareness depends on buying reach or earning press. This flips the model. Creators supply the distribution. The campaign supplies the insert. Here, the “cause payload” is the NGO message container that sits inside the black bars and travels, unchanged, with every user-generated video. This is a stronger pattern than producing yet another standalone PSA, because it turns creator distribution into donated inventory.

The real question is whether your cause message can hitchhike on creator distribution instead of demanding attention on its own.

It also gives creators a low-effort way to feel helpful. Upload once, choose a cause, publish. No new platform to build an audience on. No complicated call to action.

In digital marketing where attention is scarce, the smartest cause campaigns repurpose existing media waste into useful inventory without asking audiences to change their habits.

Why the “black bars” frame is a strong creative device

The bars work because they are visually stable. They sit outside the main video action, so the NGO message does not compete with the creator’s content. At the same time, the contrast is impossible to miss because the bars are solid, empty shapes that viewers are already staring at.

Extractable takeaway: When you can transform a widely repeated user error into a benefit for someone else, you get scale through behaviour, not through budget.

A pattern for scale without media spend

  • Find a ubiquitous waste surface. Dead space, downtime, defaults, leftovers. Anything people already produce at scale.
  • Make contribution feel effortless. One clear action, one clear outcome. No learning curve.
  • Keep the creator’s content intact. Add value around it, not on top of it.
  • Design for portability. The message should travel with the asset as it gets re-shared.
  • Make the intent obvious. Viewers should instantly understand that the added space supports a cause.

A few fast answers before you act

What is “Black Bar Donation” in one sentence?

It is a campaign that repurposes the black side bars on vertical videos as donated ad space for NGOs, so the NGO message travels with the video when it is published and shared.

Why does this work better than a normal PSA video?

Because it piggybacks on content people already choose to watch. The NGO message becomes part of the viewing frame, not an interruption users try to skip.

What makes this campaign scalable?

The supply is user behaviour. As long as creators keep shooting vertical video and uploading it, the campaign has new “inventory” to convert into donated space.

What is the biggest risk with this model?

Quality control and brand safety. If the creator video is problematic, the NGO message can end up adjacent to content it would never choose intentionally.

How would you adapt this idea for other platforms or formats?

Look for other consistent “frame” areas that do not disrupt the core content. Then build a simple creator workflow that lets people attach a cause payload without editing tools.

Toyota: A Siri-ous Safety Message

By hijacking Siri, Toyota in Sweden has found a new way to get people to turn off their phones in the car and stop texting.

With the help of Saatchi & Saatchi they created a radio ad that interacts with the phone without human intervention. It relies on the iPhone being plugged in and charging, and on the “Hey Siri” wake phrase being enabled, so even if the driver is not paying attention, their phone is.

Two separate ads ran during rush hour. One was designed for Apple’s Siri, and the other for Android with the “OK Google” wake phrase.

How the hijack works

The mechanism is voice-command interception. The ad speaks the wake phrase and a follow-up instruction that prompts the assistant to switch the device into airplane mode, provided the phone is in a state where it will listen hands-free. The trick is that radio is ambient, so the command can be delivered even when the driver is not actively using the phone.
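As a toy model of that interception mechanic (purely illustrative; real assistants do on-device wake-word detection rather than transcript scanning, and all names here are assumptions): scan what the phone “hears” for a wake phrase and pull out the command that follows it.

```python
# The two triggers the campaign targeted, one per ad version
WAKE_PHRASES = ("hey siri", "ok google")

def intercept_command(transcript):
    """Toy model of the hijack: if the ambient audio transcript contains a
    wake phrase, return the instruction that follows it; otherwise None."""
    text = transcript.lower()
    for phrase in WAKE_PHRASES:
        idx = text.find(phrase)
        if idx != -1:
            # Everything after the wake phrase is treated as the command
            return text[idx + len(phrase):].strip(" ,.")
    return None
```

The point the model makes is that the radio ad needs no connection to the phone at all: it only has to put the right words into the air while the device is in a state where it will listen.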

In passenger vehicles where phones are commonly used for navigation and messaging, road-safety campaigns win when they reduce distraction without adding driver effort.

Why it lands

This works because it demonstrates the problem and the solution in the same breath. The message is not only “do not text”. It is “your phone can be compelled to stop being a temptation”. The moment your device responds makes the risk feel real, and it makes the remedy feel immediate.

Extractable takeaway: If you can make the safety behavior happen automatically at the moment of risk, you remove reliance on willpower. That shift from intention to automation is what makes behavior change scalable.

What the campaign is really saying about attention

The real question is how to remove temptation at the exact moment distraction becomes possible.

The deeper point is that distraction is not a moral failure. It is a design failure. If the environment keeps inviting you to look, eventually you will. Toyota reframes the ask from “be better” to “build a system that makes the right thing easier”.

What safety campaigns can steal from this

  • Use the medium’s superpower: radio is always-on and hands-free, so it can reach people at the exact time the habit happens.
  • Make the behavior visible: when the phone reacts, the lesson becomes undeniable.
  • Design for constraints: define the exact conditions required for the mechanic to work, then build the idea around them.
  • Offer an immediate fix: a safety message lands harder when it includes a concrete action, not only a warning.
  • Keep the premise singular: one problem, one intervention, one clear outcome.

A few fast answers before you act

What is “A Siri-ous Safety Message”?

It is a Toyota Sweden road-safety campaign built around radio ads that trigger voice assistants to switch a phone into airplane mode, aiming to reduce distracted driving.

How can a radio ad control a phone?

By speaking the wake phrase and a follow-up command that the assistant will interpret, if the device is plugged in and hands-free voice activation is enabled.

Why run two versions of the ad?

Because “Hey Siri” and “OK Google” are different triggers. Separate edits let the concept work across major phone ecosystems.

Is the main value the tech trick or the message?

The trick earns attention. The value is the behavior change prompt. It turns “turn off your phone” from advice into a demonstrated, immediate action.

What could make this backfire?

If people feel the intervention is intrusive, or if it interferes with legitimate in-car use like navigation. The campaign needs the safety intent to be unmistakable and the boundaries to be clear.