Jaguar launches in-car cashless fuel payment

Drive up to a Shell pump. Choose your fuel amount on the car’s touchscreen. Pay without leaving the seat. In a world first, Jaguar and Land Rover owners can pay for fuel via the touchscreen of their car at Shell service stations. Rather than paying at the pump or queuing in the shop, drivers who install the Shell app via InControl can pull up to a pump at participating Shell service stations, select how much fuel they require, and pay with PayPal or Apple Pay on the vehicle’s touchscreen.

Why this matters beyond fuel

This is not really a “payments innovation” story. It is a friction story. The value comes from removing context switching. No wallet. No phone. No queue. The car becomes the interface where the need happens.

It moves checkout into the moment of intent

The moment you decide to refuel is the moment you can complete the transaction. That reduces drop-off, reduces effort, and makes the experience feel modern without changing the core product.

It turns the car into a commerce surface

Once the dashboard becomes a trusted place to authenticate and pay, the opportunity expands to other “on-the-go” services where drivers normally step out, wait, or juggle devices.

It is a clean example of partner-led experience design

Jaguar provides the in-car platform. Shell provides the forecourt context and operational integration. The user experiences it as one flow, not two brands handing off a task.

The reusable pattern

  1. Embed the action where the context already is. Put the transaction inside the primary interface, not a separate detour.
  2. Keep the flow short and explicit. Select, confirm, pay, receipt, as sketched after this list. Anything more breaks the promise.
  3. Design for trust signals. Clear station identification, clear confirmation, and a clear receipt reduce “did it work” anxiety.
  4. Make the benefit obvious in one sentence. “Pay from your car” is enough. The value is immediate.
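
To make the four steps concrete, here is a minimal sketch of the flow as an explicit state machine. Everything in it is an assumption for illustration: the PumpSession shape, the requestPayment call, and the state names are invented, not Jaguar’s or Shell’s actual integration.

```ts
// Minimal sketch of a select → confirm → pay → receipt flow.
// All names here are hypothetical, not the real Shell/InControl API.

type FlowState = "select" | "confirm" | "paying" | "receipt" | "failed";

interface PumpSession {
  stationId: string; // which forecourt the car is at
  pumpId: string;    // which pump the driver chose
  amountGBP: number; // fuel amount selected on the touchscreen
}

// Hypothetical payment call; in practice this would be PayPal or Apple Pay.
async function requestPayment(session: PumpSession): Promise<{ receiptId: string }> {
  return { receiptId: `rcpt-${session.stationId}-${session.pumpId}-${Date.now()}` };
}

async function runCheckout(session: PumpSession): Promise<FlowState> {
  let state: FlowState = "confirm"; // select and confirm happen on the touchscreen
  try {
    state = "paying";
    const { receiptId } = await requestPayment(session);
    console.log(`Paid £${session.amountGBP}. Receipt ${receiptId}.`); // explicit receipt
    state = "receipt";
  } catch {
    state = "failed"; // hand off to a clear recovery path, never a dead end
  }
  return state;
}
```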

What to measure beyond views

  • Adoption. Percentage of eligible drivers who activate the in-car payment feature.
  • Repeat usage. Whether people use it again after the first try.
  • Time saved. Reduction in “fuel stop duration” compared with paying in-store.
  • Experience confidence. Drop-off rate between selecting the pump and confirming payment, as a proxy for trust in the flow; a computation sketch for all four metrics follows this list.
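
As a sketch of how these four numbers could fall out of an event log, here is a hypothetical computation. The FuelEvent schema, the event names, and the 300-second in-store baseline are all invented for illustration.

```ts
// Hypothetical event log and the four metrics derived from it.
interface FuelEvent {
  driverId: string;
  type: "eligible" | "activated" | "pump_selected" | "payment_confirmed";
  stopDurationSec?: number; // only on payment_confirmed events
}

function computeMetrics(events: FuelEvent[], inStoreBaselineSec = 300) {
  const of = (t: FuelEvent["type"]) => events.filter(e => e.type === t);
  const drivers = (es: FuelEvent[]) => new Set(es.map(e => e.driverId));

  // Adoption: share of eligible drivers who activated the feature.
  const adoption = drivers(of("activated")).size / Math.max(1, drivers(of("eligible")).size);

  // Repeat usage: payers with more than one confirmed payment.
  const counts = new Map<string, number>();
  for (const e of of("payment_confirmed")) counts.set(e.driverId, (counts.get(e.driverId) ?? 0) + 1);
  const repeatUsage = [...counts.values()].filter(n => n > 1).length / Math.max(1, counts.size);

  // Experience confidence: drop-off between pump selection and payment confirmation.
  const dropOff = 1 - of("payment_confirmed").length / Math.max(1, of("pump_selected").length);

  // Time saved against a hypothetical in-store baseline.
  const durations = of("payment_confirmed").map(e => e.stopDurationSec ?? 0);
  const avgStop = durations.reduce((a, b) => a + b, 0) / Math.max(1, durations.length);

  return { adoption, repeatUsage, dropOff, timeSavedSec: inStoreBaselineSec - avgStop };
}
```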

Risks and guardrails that matter

  • False positives. The system must reliably match the car to the right station and the right pump; a wrong match charges the wrong transaction.
  • Failure recovery. If payment fails, the user needs a clear next step that does not create embarrassment at the pump (see the sketch after this list).
  • Trust. Drivers need clear confirmation, receipts, and predictable behavior every time.
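
A minimal sketch of the failure-recovery guardrail, assuming a hypothetical paymentAttempt hook (the real integration is not public): the point is that every exit from the flow ends in an explicit, non-embarrassing next step.

```ts
// Failure recovery sketch: one quiet retry, then an explicit fallback message.
// paymentAttempt is a hypothetical hook; the real integration is not public.
type Outcome = { ok: true; receiptId: string } | { ok: false; reason: string };

async function payWithRecovery(paymentAttempt: () => Promise<Outcome>): Promise<string> {
  for (let tries = 0; tries < 2; tries++) {
    const result = await paymentAttempt();
    if (result.ok) return `Paid. Receipt ${result.receiptId}.`;
  }
  // The driver is never left guessing: the fallback is the normal forecourt flow.
  return "In-car payment did not go through. Please pay at the pump or in the shop as usual.";
}
```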

A few fast answers before you act

What is Jaguar’s in-car cashless fuel payment?

A Shell fuel payment flow that lets Jaguar and Land Rover drivers select an amount and pay from the vehicle touchscreen via the Shell app in InControl.

What problem does it solve?

It removes the need to pay at the pump or queue inside the shop. The entire task completes from the car.

What is the core mechanism?

A contextual in-car experience that links the driver, the station, and the payment method into one short flow.

What is the most reusable lesson?

Move checkout into the moment of intent inside the primary interface. Then keep the steps minimal and confidence high.

What is the biggest failure mode?

Any ambiguity about station or pump, or any unclear “did I pay” outcome. Trust collapses fast in payments.

The world’s first emotionally powered store

You step into a pop-up store in central London because Christmas shopping feels like a chore. You sit down, look at product ideas on a screen, and the system watches your face as you react. Not in a creepy sci-fi way, but in a deliberately framed “let’s reconnect with the emotional spirit of giving” way. Your expressions become signals. The store turns those signals into a personal report, then suggests the gift that triggers the strongest “this feels right” response.

That is the idea behind eBay’s “emotionally powered store,” created with American technology firm Lightwave. Using intelligent bio-analytic technology and facial coding, eBay records which products provoke the strongest feelings of giving. Then, through personalised emotion reports, it suggests the gift that stirs the most feeling.

What eBay is actually testing here

This is not only a seasonal stunt. It is a test of whether emotion can be treated as data in a retail environment, and whether that data can be turned into a better decision loop.

The store reframes the problem:

  • the problem is not “too little choice”
  • the problem is decision fatigue, stress, and loss of motivation
  • the solution is not more filters, it is faster emotional clarity

The mechanics. Simple, but provocative

At the core is a clean input-output system, sketched in code after the list:

  • Input. A sequence of gift ideas shown in a tight flow.
  • Measurement. Facial coding and bio-analytic signals that infer which moments create the strongest emotional engagement.
  • Output. A personalised emotion report that recommends the gift that creates the strongest “giving” response.
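
Reduced to a toy model, the loop is an argmax over per-product emotional engagement. The EmotionSample shape and its scores below are invented for illustration; Lightwave’s actual bio-analytic models are proprietary and certainly richer than this.

```ts
// Toy version of the input → measurement → output loop.
// EmotionSample and its scores are invented; Lightwave's real models are proprietary.
interface EmotionSample {
  productId: string;
  engagement: number; // 0..1, inferred from facial coding / bio-analytic signals
}

function recommendGift(samples: EmotionSample[]): { report: Record<string, number>; pick: string } {
  if (samples.length === 0) throw new Error("no reactions recorded");
  const report: Record<string, number> = {};
  for (const s of samples) {
    // Keep the strongest reaction seen for each product.
    report[s.productId] = Math.max(report[s.productId] ?? 0, s.engagement);
  }
  // Recommend the product that stirred the strongest "giving" response.
  const pick = Object.entries(report).sort((a, b) => b[1] - a[1])[0][0];
  return { report, pick };
}
```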

The tech is almost secondary. The real innovation is the framing: a store that does not just sell products, but one that guides you toward the gift that feels most meaningful.

Why this matters for next-generation shopping environments

A lot of “next-gen retail” bets on bigger screens, more sensors, and more automation. This one bets on something more human.

It treats the emotional state of the shopper as a first-class design constraint:

  • reduce stress
  • re-anchor the experience in intent and empathy
  • make the decision feel more satisfying, not just more efficient

That is a powerful signal for any brand that sells gifts, experiences, or anything identity-driven. The product is rarely the only thing being purchased. The feeling of choosing it matters.

The leadership question sitting underneath the pop-up

The interesting question is not “does facial coding work.” The interesting question is what happens when retail experiences start optimizing for emotion as deliberately as they optimize for conversion.

If you can capture emotional response at the moment of choice, you can start redesigning:

  • the sequence in which products are presented
  • the language and imagery that drive confidence
  • the point at which a recommendation should trigger
  • the moment where a shopper’s motivation drops, and how to recover it

That is where this moves from a pop-up into a capability.

A few fast answers before you act

Q: What is an “emotionally powered store”?
A retail concept that uses bio-analytic signals and facial coding to measure emotional reactions, then recommends products based on the strongest response.

Q: What is eBay trying to solve with this experience?
Christmas gift-buying stress and decision fatigue. The store is designed to reconnect shoppers with the emotional spirit of giving.

Q: What role does Lightwave play?
Lightwave provides the technology support for the bio-analytic and facial coding layer used in the pop-up.

Q: What is the output for the shopper?
A personalised emotion report and a gift recommendation based on the products that provoke the strongest feelings of giving.

Q: What is the broader takeaway for retail innovation?
Emotion becomes a measurable input for experience design, not just a brand aspiration.

Gatebox: The Virtual Home Robot

You come home after work and someone is waiting for you. Not a speaker. Not a disembodied voice. A character in a glass tube that looks up, recognizes you, and says “welcome back.” She can wake you up in the morning, remind you what you need to do today, and act as a simple control layer for your smart home.

That is the proposition behind Gatebox. It positions itself as a virtual home robot, built around a fully interactive holographic character called Azuma Hikari. The pitch is not only automation. It is companionship plus utility. Face recognition. Voice recognition. Daily routines. Home control. A “presence” that turns a smart home from commands into a relationship.

What makes Gatebox different from Alexa, Siri, and Cortana

Gatebox competes on a different axis than mainstream voice assistants.

Voice assistants typically behave like tools. You ask. They answer. You command. They execute.

Gatebox leans into a different model:

  • Character-first interface. A persistent persona you interact with, not just a voice endpoint.
  • Ambient companionship. It is designed to greet you, nudge you, and keep you company, not only respond on demand.
  • Smart home control as a baseline. Home automation is part of the offer, not the story.

The result is a product that feels less like a speaker and more like a “someone” in the room.

Why the “holographic companion” framing matters

A lot of smart home innovation focuses on features. Gatebox focuses on behavior.

It is designed around everyday moments:

  • waking you up
  • reminding you what to remember
  • welcoming you home
  • keeping a simple loop of interaction alive across the day

That is not just novelty. It is a design bet that people want technology to feel relational, not transactional.

What the product is, in practical terms

At its most basic, Gatebox (see the toy loop after this list):

  • controls smart home equipment
  • recognizes your face and your voice
  • runs lightweight daily-life interactions through the Azuma Hikari character
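
As a thought experiment, the character-first loop can be sketched as an event handler that speaks first instead of waiting for commands. The event names and replies are illustrative assumptions, not Gatebox’s actual software.

```ts
// Toy companion loop: the character speaks first instead of waiting for a command.
// Event names and replies are illustrative, not Gatebox's actual software.
type HomeEvent =
  | { kind: "face_recognized"; who: string }
  | { kind: "voice_command"; text: string }
  | { kind: "alarm"; hour: number };

function respond(event: HomeEvent): string {
  switch (event.kind) {
    case "face_recognized":
      return `Welcome back, ${event.who}.`; // greeting is proactive, not on demand
    case "voice_command":
      return event.text.includes("lights") ? "Turning the lights on." : "Okay.";
    case "alarm":
      return event.hour < 9 ? "Good morning. Time to wake up." : "Don't forget today's tasks.";
  }
}
```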

It is currently available for pre-order for Japanese-speaking customers in Japan and the USA, at around $2,600 per unit. For more details, visit gatebox.ai.

The bigger signal for interface design

Gatebox is also a clean case study in where interfaces can go next.

Instead of:

  • screens everywhere
  • apps for everything
  • menus and settings

It bets on:

  • a single persistent companion interface
  • a character that anchors interaction
  • a device that makes “home AI” feel present, not hidden in the cloud

That is an important shift for anyone building consumer interaction models. The interface is not the UI. The interface is the relationship.

A few fast answers before you act

Q: What is Gatebox in one sentence?
A virtual home robot that combines smart home control with a holographic companion character, designed for everyday interaction.

Q: Who is Azuma Hikari?
Gatebox’s first character. A fully interactive holographic girl that acts as the interface for utility and companionship.

Q: What can it do at a basic level?
Control smart home equipment, recognize face and voice, run daily routines like wake-up, reminders, and greetings.

Q: Why compare it to Alexa, Siri, and Cortana?
Because it is positioned as more than a voice assistant. It is a character-first, companion-style interface.

Q: What is the commercial status?
Available for pre-order for Japanese-speaking customers in Japan and the USA, at around $2,600 per unit.