Microsoft HoloLens: The Next Step of Computing

Microsoft brings holograms into the real world

At Microsoft’s Windows 10 event, the company unveiled a new augmented reality experience for the platform called HoloLens.

Using a special holographic headset, Windows 10 users can make holograms appear in real life. Not on a screen. In the room, anchored in physical space.

This is the kind of step-change that reframes computing from something you look at to something you live inside.

Watch below as Microsoft demonstrates holograms as spatial interfaces, not screen content.

What makes HoloLens different

HoloLens is positioned as an untethered augmented reality experience, built to feel like a real device rather than a lab prototype.

The device is said to use:

  • See-through lenses
  • Spatial sound
  • Advanced sensors
  • A dedicated holographic processing unit

Together, these elements aim to deliver a state-of-the-art mixed reality experience without cables or external trackers.

In this context, augmented reality means digital objects are layered into the real world through see-through optics, not a fully immersive virtual environment.

Why this matters

HoloLens signals a shift in interface design. Instead of dragging windows around a flat screen, digital objects become part of physical space. Apps turn into holograms. Workflows become spatial. Interaction becomes more natural because it maps to how people already understand the world.

In global digital product and marketing teams, the significance is not just the headset. It is the move from screen-first design to space-first interaction.

Extractable takeaway: HoloLens is important because it presents AR not as a feature inside existing software, but as a new computing layer where interface, content, and context are all anchored to physical space.

What to steal from this launch

The real question is not whether holograms look futuristic. It is whether a new interface model changes behavior in a way people can feel immediately.

That is what this launch gets right. It demonstrates the shift through experience, not just specification. The message is simple: when a technology changes where interaction happens, it also changes how products should be designed.

  • Lead with the interaction shift, not the feature list. Show what changes in the user’s behavior before explaining the underlying technology.
  • Make the benefit visible in context. Demonstrate the experience in a real environment so people immediately understand the practical value.
  • Use the demo as proof, not decoration. The strongest launch moments show the product working in the exact conditions users care about.
  • Explain the stack after the experience lands. Once the audience feels the change, technical details reinforce credibility instead of creating friction.
  • Design for the new interface model. If interaction moves from screens to space, content, UI, and workflows must be rethought for that environment.

A few fast answers before you act

Is HoloLens virtual reality?

No. It is augmented reality using see-through lenses that overlay holograms onto the real world.

What is the key technical promise?

Untethered, spatially aware holograms powered by sensors, spatial sound, and a dedicated holographic processing unit.

Why is being untethered important?

Untethered hardware makes the experience feel like a real computing device instead of a lab setup, which lowers friction for everyday use and demonstration.

What changes when apps become spatial?

The interface moves off the screen and into physical space, which changes how people place, view, and interact with digital content while moving through the real world.

What makes this feel like a new computing layer?

The shift is not only visual. It combines sensing, sound, and spatial anchoring so digital objects behave as if they belong in the room, not just on a display.
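The core of "anchored to the room" can be made concrete in data terms. The sketch below is purely illustrative and assumes a simplified coordinate model, not Microsoft's actual API: the hologram stores a fixed position in room (world) coordinates, and only its position relative to the moving headset ever changes.

```python
class Hologram:
    """A digital object pinned to a fixed spot in the room (world space),
    rather than to a spot on a screen."""

    def __init__(self, name, world_pos):
        self.name = name
        self.world_pos = world_pos  # (x, y, z) in metres, room coordinates

    def relative_to(self, headset_pos):
        """Offset from the wearer to the hologram. The anchor never moves;
        only this headset-relative offset changes as the wearer walks."""
        return tuple(w - h for w, h in zip(self.world_pos, headset_pos))


# The wearer walks around; the hologram's world position never changes.
lamp = Hologram("holographic lamp", (2.0, 0.0, 1.5))
print(lamp.relative_to((0.0, 0.0, 1.5)))  # (2.0, 0.0, 0.0) — two metres ahead
print(lamp.relative_to((1.0, 0.0, 1.5)))  # (1.0, 0.0, 0.0) — wearer stepped closer
```

That inversion — the object is fixed, the viewpoint moves — is what makes a hologram feel like it belongs in the room rather than on a display.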

Jibo: The Social Robot for the Family

A robot that provides a personal and meaningful human experience is set to become reality through Jibo, an 11-inch-tall, 6-pound, swiveling circular robot. Friendly, helpful and intelligent, Jibo is billed as the world’s first social robot for the family. Here, “social robot” means a robot designed to feel present and interactive in everyday home life, not just to complete tasks.

Here is a short demo video created for its crowdfunding campaign.

The pitch is “relationship”, not “utility”

The mechanism is straightforward. A small tabletop robot with a swiveling body and a screen uses motion, timing, and conversational cues to feel present in the room, rather than behaving like a static gadget. That presence matters: it makes the product far easier to imagine in the home than a fixed device would be.
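One of those presence cues can be sketched in code. This is an illustrative toy, not Jibo's implementation, and every number here is made up: the swiveling body turns toward a speaker in capped steps, so the motion reads as a deliberate glance rather than an instant mechanical snap.

```python
class SwivelBody:
    """Toy model of a 'presence' cue: turn to face a sound source gradually."""

    def __init__(self):
        self.heading_deg = 0.0

    def orient_toward(self, sound_angle_deg, max_step_deg=30.0):
        """Rotate toward the sound source in capped steps, so the turn looks
        like attention being paid, not instantaneous tracking."""
        # Shortest signed angular difference, in the range (-180, 180].
        error = (sound_angle_deg - self.heading_deg + 180) % 360 - 180
        step = max(-max_step_deg, min(max_step_deg, error))
        self.heading_deg = (self.heading_deg + step) % 360
        return self.heading_deg


robot = SwivelBody()
robot.orient_toward(90)         # first step: 30 degrees
robot.orient_toward(90)         # 60 degrees
print(robot.orient_toward(90))  # 90.0 — now facing the speaker
```

The capped step is the whole point: behavior that unfolds over time is what viewers read as character.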

In consumer technology launches, the hard part is not explaining what the product does. It is making people feel why they would want it in their home.

Why it lands

This works because it frames the robot as a character. When a device has personality, the viewer stops evaluating it like a spec sheet and starts imagining it as part of daily routines. That shift is exactly what a crowdfunding-style launch needs, because belief and emotional attachment matter before the product is widely available.

Extractable takeaway: If you are launching something unfamiliar, do not lead with feature lists. Lead with a clear role the audience can picture, then use design and behavior to make that role feel natural and desirable.

What the business intent really is

The demo video is doing more than product explanation. It is creating a category frame. “Social robot for the family” is a positioning stake, and the crowdfunding moment is the fastest way to turn curiosity into momentum, pre-orders, and a community that will advocate for the concept.

The real question is not whether the robot can do enough, but whether people can imagine wanting it around them every day. For a product like this, positioning the relationship comes before explaining the utility.

What product marketers should borrow

  • Make a new category legible. Give the audience a simple label they can repeat to others.
  • Use behavior as proof. How the product moves, reacts, and “shows attention” can persuade faster than technical claims.
  • Sell the role. “What is this in my life” beats “what is this in the lab”.
  • Build community early. Crowdfunding works best when supporters feel like first insiders, not early buyers.

A few fast answers before you act

What is Jibo?

Jibo is a small tabletop robot positioned as a “social robot for the family”, designed to deliver a more personal, human-feeling interaction than a typical gadget.

How big is it?

The project describes Jibo as about 11 inches tall and around 6 pounds.

What does “social robot” mean here?

It refers to a robot designed for human interaction and presence in the home, using behavior and personality cues rather than only task execution.

Why launch via a crowdfunding demo video?

Because new categories need belief before they need scale. A demo video can communicate the role, the feeling, and the promise quickly, then convert interest into early supporters.

What is the main lesson for product marketers?

When the product is unfamiliar, show the “relationship” it creates in context, then let the technology sit behind the experience.

The Adaptive Storefront: BLE Retail Display

Shop windows, billboards, bus stops, and car showrooms do not have to be passive experiences. In the video below, a prototype interactive digital display adapts to whoever stands in front of it.

The display identifies shoppers using Bluetooth Low Energy (BLE) and reacts to personal data stored on the shopper’s mobile device, such as shopping habits and preferences. Shoppers can swipe through personalised content, place items in a virtual shopping cart, and purchase straight from the display.

When glass turns into a shoppable interface

This “adaptive storefront” concept takes a familiar retail surface and makes it behave like a storefront UI. Here, “adaptive storefront” means the window can recognise a nearby device via BLE and change what it shows based on data available on that device. Not a poster. Not a looped video. A live interface that changes per person and lets you complete an action while you are still in that high-intent moment of attention.

How the prototype behaves in front of a shopper

  • Detect. BLE proximity is used to recognise that a specific shopper is present.
  • Adapt. The display adjusts what it shows based on data available on the shopper’s phone.
  • Let the shopper drive. Swiping changes what is on screen, rather than forcing a fixed sequence.
  • Close the loop. Items can be added to a cart and purchased directly from the display.
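For teams prototyping this kind of flow, the four steps above can be sketched in a few lines. Everything named here is hypothetical — the class, the profile shape, and the catalogue are placeholders, and in a real deployment BLE detection, profile sharing, and payment would each be separate, permissioned systems.

```python
class AdaptiveStorefront:
    """Toy model of the detect → adapt → swipe → purchase loop."""

    def __init__(self, catalogue):
        self.catalogue = catalogue  # {category: [product, ...]}
        self.session = None

    def detect(self, shopper_profile):
        """Step 1: a nearby device is recognised (e.g. via BLE proximity)
        and, with permission, shares preference data."""
        self.session = {"profile": shopper_profile, "cart": [], "index": 0}

    def adapt(self):
        """Step 2: choose what to show based on the shopper's preferences,
        falling back to the full catalogue when nothing matches."""
        preferred = self.session["profile"].get("preferred_category")
        return self.catalogue.get(preferred, sum(self.catalogue.values(), []))

    def swipe(self):
        """Step 3: the shopper drives; swiping advances through the feed."""
        items = self.adapt()
        self.session["index"] = (self.session["index"] + 1) % len(items)
        return items[self.session["index"]]

    def add_to_cart(self, product):
        """Step 4: close the loop — items go into a cart on the display."""
        self.session["cart"].append(product)

    def checkout(self):
        return list(self.session["cart"])


window = AdaptiveStorefront({"running": ["trail shoe", "road shoe"],
                             "casual": ["canvas sneaker"]})
window.detect({"preferred_category": "running"})
window.add_to_cart(window.swipe())
print(window.checkout())  # ['road shoe']
```

Even as a toy, the structure makes the design question obvious: every method after `detect` depends on the shopper having opted in.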

In physical retail environments, the storefront is the first high-attention interface a brand controls before a shopper reaches the shelf.

Why it lands

Because the display can recognise a nearby device and accept input on the surface, it compresses discovery, consideration, and purchase into one interaction. The value is not the novelty of a “smart window”. It is the reduction of steps between interest and action, while the shopper’s intent is still fresh. The real question is whether you can do that with clear permission and control, not silent personalisation.

Extractable takeaway: A surface becomes valuable when it combines context with immediate action. Personalisation only earns its keep when it removes friction and helps a shopper decide faster, not when it merely looks clever.

What it is really trying to unlock for brands

Behind the demo is a clear ambition. Turn high-footfall surfaces into conversion surfaces. If the experience is permissioned and useful, it can bridge the gap between physical browsing and digital checkout without forcing a shopper to open an app, search, and start over.

That also hints at a measurement upgrade. A storefront that can be interacted with can be instrumented. What people swipe. What they ignore. What they add. Where they drop. That is a very different feedback loop than counting impressions.
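That feedback loop can be sketched as a simple event counter. The event names and the engagement ratio below are illustrative assumptions, not a standard schema — the point is only that an interactive surface yields per-product behavioral counts where a poster yields nothing.

```python
from collections import Counter


class StorefrontLog:
    """Counts interactions per product, so the glass produces the kind of
    feedback loop an impression counter cannot."""

    def __init__(self):
        self.events = Counter()

    def record(self, event, product):
        # Hypothetical event names: "shown", "swiped_past", "dwelled",
        # "added", "dropped".
        self.events[(event, product)] += 1

    def engagement(self, product):
        """Crude ratio: engagements (dwell or add) per showing."""
        shown = self.events[("shown", product)]
        engaged = (self.events[("dwelled", product)]
                   + self.events[("added", product)])
        return engaged / shown if shown else 0.0


log = StorefrontLog()
for event in ["shown", "shown", "dwelled", "added"]:
    log.record(event, "trail shoe")
print(log.engagement("trail shoe"))  # 1.0 — two engagements over two showings
```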

Practical takeaways for adaptive storefronts

  • Start with one job-to-be-done. For example, “help me shortlist”, “show me what is in stock”, or “let me buy in two taps”.
  • Make control obvious. If swiping is the interaction, design the UI so people understand it in one second.
  • Keep data minimal and on-device. Use only what is needed to improve relevance, and avoid making the experience feel intrusive.
  • Design for the environment. Glare, distance, dwell time, and group behaviour change everything compared to mobile UX.
  • Plan the opt-in moment. The experience works best when the shopper understands why the screen adapts and what they get in return.

A few fast answers before you act

What is an “adaptive storefront” in plain terms?

It is a storefront display that changes what it shows based on who is standing in front of it, and lets the shopper interact and buy directly on the surface.

Why use BLE for this type of experience?

BLE enables low-power proximity detection, so a display can recognise a nearby device and trigger the right experience without requiring the shopper to scan a code each time.
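Proximity from BLE is commonly estimated from signal strength using the log-distance path-loss model: distance ≈ 10^((tx_power − RSSI) / (10·n)), where tx_power is the advertised RSSI at 1 metre and n is an environment factor (around 2 in free space, higher indoors). The values below are illustrative defaults, not calibrated figures.

```python
def estimated_distance_m(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss estimate: metres from a received RSSI (dBm)."""
    return 10 ** ((tx_power - rssi) / (10 * n))


def shopper_present(rssi, threshold_m=2.0):
    """Trigger the adaptive experience only when the device is close enough
    to plausibly be standing in front of the glass."""
    return estimated_distance_m(rssi) <= threshold_m


print(shopper_present(-55))  # True  — stronger than the 1 m reference signal
print(shopper_present(-80))  # False — roughly 10 m away
```

In practice RSSI is noisy, so real deployments smooth readings over time and treat the estimate as a coarse zone (near/far), not a precise distance.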

What data is needed to personalise the display?

Only enough to improve relevance. For example, stated preferences, browsing history, or saved items, ideally kept on the shopper’s phone and shared with clear permission.

What makes this feel useful instead of creepy?

Permission, transparency, and value. The shopper should understand what is happening, control it, and get something meaningfully better than a generic screen.

What should you measure in a pilot?

Opt-in rate, interaction rate, add-to-cart rate, conversion rate, and whether the experience reduces time-to-decision without increasing drop-off.
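The pilot funnel above is easy to formalise. The counts in this sketch are invented for illustration; the useful part is that each rate is conditioned on the previous stage, so drop-off between stages is visible rather than averaged away.

```python
def funnel_rates(passersby, opted_in, interacted, added, purchased):
    """Stage-conditional rates for an adaptive-storefront pilot funnel."""
    def rate(part, whole):
        return round(part / whole, 3) if whole else 0.0

    return {
        "opt_in_rate":      rate(opted_in, passersby),
        "interaction_rate": rate(interacted, opted_in),
        "add_to_cart_rate": rate(added, interacted),
        "conversion_rate":  rate(purchased, added),
    }


print(funnel_rates(passersby=1000, opted_in=120, interacted=90,
                   added=30, purchased=12))
# {'opt_in_rate': 0.12, 'interaction_rate': 0.75,
#  'add_to_cart_rate': 0.333, 'conversion_rate': 0.4}
```

Pair these rates with time-to-decision per session to check the last question above: whether the experience speeds decisions up without increasing drop-off.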