360 Videos on Facebook

Disney drops you into the Star Wars universe: as part of the launch hype for The Force Awakens, you can pan around the scene and explore the world in 360 degrees. It is one of the first big-brand uses of Facebook’s new 360-degree video support.



Next, GoPro pushes the same format into action sports. A 360-degree surf film with Anthony Walsh and Matahi Drollet lets you experience the ride in a more immersive, head-turning way than a standard clip.


Facebook makes 360 video a native format

In September 2015, Facebook launches 360-degree video support. That matters because it turns a niche format into a platform behaviour. Brands can publish immersive video where the audience already is, without asking people to install anything new.

Mobile rollout is the unlock

Facebook announces that 360 video support is rolling out to mobile devices, so it is no longer limited to desktop viewing. That is the moment the format becomes mainstream.

Why brands care: distribution scale

Facebook’s own numbers underline why marketers pay attention. The platform cites more than 8 billion video views from 500 million users daily (figures referenced around its Q3 2015 earnings), an average of roughly 16 views per user per day. If 360 video becomes part of that daily habit, it is a meaningful new canvas for storytelling and experience marketing.

Facebook supports creators with a 360 hub

To accelerate adoption, Facebook launches a dedicated 360 video microsite with resources like upload guidelines, common questions, and best practices.
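
The microsite’s specifics aren’t reproduced in this post, but 360 upload guidelines from this era generally expect equirectangular video with a 2:1 width-to-height ratio. As a hedged illustration (the file name and the 2:1 rule of thumb are assumptions, not Facebook’s published spec), here is a quick pre-upload sanity check using ffprobe:

```python
import subprocess

# Quick pre-upload sanity check (illustrative, not Facebook's official tool).
# 360 videos are typically delivered as equirectangular frames with a 2:1
# width-to-height ratio; "my_360_clip.mp4" is a hypothetical file name.
def is_probably_equirectangular(path: str) -> bool:
    out = subprocess.check_output([
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries", "stream=width,height", "-of", "csv=p=0", path,
    ])
    width, height = (int(x) for x in out.decode().strip().split(","))
    return width == 2 * height

print(is_probably_equirectangular("my_360_clip.mp4"))
```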


A few fast answers before you act

What launches the 360 format on Facebook in this post?
Facebook adds native support for 360-degree video, and early brand examples quickly follow.

Which two examples headline the post?
Disney promoting Star Wars: The Force Awakens, and GoPro publishing a 360 surf video with Anthony Walsh and Matahi Drollet.

What changes when mobile support rolls out?
360 viewing is no longer limited to desktop, which makes the format accessible in everyday mobile usage.

What scale stats are cited to show why this matters?
More than 8 billion video views from 500 million users daily.

Where does Facebook publish creator guidance?
A dedicated 360 video microsite with upload guidelines and best practices.

Project Soli: Hands Become the Interface

Google ATAP builds what people actually use

Google ATAP is tasked with creating cool new things that we’ll all actually use. At the recently concluded Google I/O event, they showcase Project Soli, a new kind of wearable technology that aims to make your hands and fingers the only user interface you’ll ever need.

This is not touchless interaction as a gimmick. It is a rethink of interface itself. Your gestures become input. Your hands become the control surface.

The breakthrough is radar, not cameras

To make this possible, Project Soli uses a radar that is small enough to fit into a wearable like a smartwatch.

The small radar picks up movements in real time and interprets how gestures alter its signal. This enables precise motion sensing without relying on cameras or fixed environmental conditions.
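
To make the sensing idea concrete, here is a toy sketch of the physics involved (a simplification, not Google’s pipeline; the 60 GHz carrier matches the band Soli is reported to use, and every other number is illustrative). A moving hand shifts the frequency of the reflected signal, and that Doppler shift maps back to hand velocity:

```python
import numpy as np

C = 3e8           # speed of light, m/s
F_CARRIER = 60e9  # 60 GHz carrier, the band Soli is reported to use
FS = 10_000       # sample rate of the demodulated (baseband) signal, Hz

def doppler_shift(velocity_mps: float) -> float:
    """Frequency shift produced by a reflector moving at the given speed."""
    return 2 * velocity_mps * F_CARRIER / C

def simulate_return(velocity_mps: float, duration_s: float = 0.1) -> np.ndarray:
    """Simulate the baseband return from a hand moving at a constant speed."""
    t = np.arange(0, duration_s, 1 / FS)
    return np.cos(2 * np.pi * doppler_shift(velocity_mps) * t)

def estimate_velocity(signal: np.ndarray) -> float:
    """Recover speed from the dominant frequency in the return's spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / FS)
    return freqs[np.argmax(spectrum)] * C / (2 * F_CARRIER)

# A 2 m/s swipe produces an ~800 Hz shift at 60 GHz; the FFT recovers it.
print(estimate_velocity(simulate_return(2.0)))  # ~2.0
```

A real pipeline would track range and velocity over time and feed those features to a gesture classifier; the point is that motion, not imagery, carries the signal.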

The implication is straightforward. Interaction moves from screens to motion. User interfaces become something you do, not something you tap.

In wearable computing and ambient interfaces, the real unlock is interaction that works in motion, without relying on tiny screens.

Why this matters for wearable tech

Wearables struggle when they copy the smartphone model onto tiny screens. Project Soli pushes in the opposite direction.

Instead of shrinking interfaces, it removes them. The wearable becomes a sensor-driven layer that listens to intent through movement.

If this approach scales, it changes what wearable interaction can be. Less screen dependency. More natural control. Faster micro-interactions.



A few fast answers before you act

Is Project Soli just gesture control?
It is gesture control powered by a radar sensor small enough for wearables, designed to make hands and fingers the primary interface.

Why use radar instead of cameras?
Radar can sense fine motion without relying on lighting, framing, or line-of-sight in the same way camera-based systems do.

What is the real promise here?
Interfaces that disappear. Interaction becomes physical, immediate, and wearable-friendly.

Big Data to Predict Traffic Jams

Big Data is increasingly being used to solve problems around the world. In this latest example, Microsoft has partnered with the Federal University of Minas Gerais, one of Brazil’s largest universities, on research that aims to predict traffic jams up to an hour in advance.

With access to traffic data (including historical numbers where available), road cameras, Bing traffic maps, and drivers’ social networks, the Microsoft-led team aims to identify patterns that foresee traffic jams 15 to 60 minutes before they happen.
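
The post does not detail Microsoft’s model, but a common way to frame the problem is supervised classification: featurize those signals for a road segment, label whether a jam materializes 15 to 60 minutes later, and train on history. The sketch below uses fabricated data and hypothetical features, not Microsoft’s actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the kinds of signals the post mentions: current
# flow, camera-derived vehicle counts, map incident flags, social chatter.
n = 5_000
X = np.column_stack([
    rng.normal(50, 15, n),   # average speed on the segment right now (km/h)
    rng.poisson(30, n),      # vehicles counted by a road camera
    rng.integers(0, 2, n),   # incident flagged on the traffic map (0/1)
    rng.poisson(5, n),       # traffic-related social posts in the last 15 min
])

# Hypothetical label: did the segment jam 15-60 minutes later? Here it is
# fabricated from a noisy rule so the example is self-contained.
risk = 0.04 * (50 - X[:, 0]) + 0.05 * X[:, 1] + 1.5 * X[:, 2] + 0.2 * X[:, 3]
y = (risk + rng.normal(0, 1, n) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

On real data, the labels would come from congestion observed after the fact, and evaluation would use held-out time periods rather than a random split.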

Microsoft has tested this model in London, Chicago, Los Angeles, and New York, and claims to have achieved a prediction accuracy of 80 percent.