The (near) future of UI (is here)


by Conrad VanLandingham

I recently returned from SXSW 2016, and it was another year of a frenetic mix of ups and downs. Ups from the innovative explosion that comes from converging so many creative disciplines. Downs from lines, lines, and more lines, also the result of converging so many creative disciplines. While most of these lines ended at tacos and margaritas (or at least you could hope), many also featured an immersive VR or AR experience.

Image courtesy of Nan Palmero (https://www.flickr.com/photos/nanpalmero/16858510411/)

Usually, I just see this technology as nothing more than a temporary escape from reality (or an occasional marketing gimmick). But brands are trying to tout that there’s more to this future than just gimmick.

Walking around SXSW you might think that these devices and experiences have reached critical mass. From IBM, to Samsung, to Sony, and even McDonald’s: the trendiest of all lounges and parties let you escape into a virtual experience. You can fly among the clouds, cycle the Rockies, and drive exotic cars. Heck, you can even walk inside your own Happy Meal and customize it. Even the smallest startup parties had 360 video booths, and I couldn’t count the number of GoPro VR rigs I saw mounted on helmets floating around.

While the hardware undoubtedly improves every year, the novelty of the experience is always just that: novelty. An escape. Entertainment. Folly. However, as the hardware improves and the experience becomes more immersive, the most important question we must consider is what these experiences are saying about the future of UI.

Our fundamental UI today is based around a convention from 30 years ago:

The Xerox desktop OS

This fundamental idea of windows, buttons, boxes and walls was invented when the primary input devices were a mouse and keyboard. By and large, these concepts still define the majority of interfaces we use today. But the future of UI doesn’t involve a mouse and keyboard. Neither, increasingly, does modern UI.

How does this evolve as we start considering examples from augmented and virtual reality interfaces?

Meron Gribetz of Meta Vision thinks that the future of UI exists where the interface is one with the world around you. I had the opportunity to watch his demonstration of their Meta 2 Augmented Reality platform at SXSW. It was eye-opening for me in two ways:

  1. AR devices have a serious chance of going mainstream soon. As glitchy as their demonstration was, it was still magical. (You should watch their recent TED talk demonstration - it is very similar to what I saw at SXSW.)

  2. The future of UI isn’t going to consist of boxes and windows, but rather elements that reveal themselves as they’re needed. Rows of icons and folders are antiquated.

Meron demonstrating a live hologram call and transferring an object between the two callers, in real time

Meron vowed that their Meta headsets would replace his company’s employees’ desktop computers entirely by SXSW 2017. While that statement might be a little bold, it’s not crazy to think of how AR devices could dramatically change the way we interact.

Which brings me back to these windows, buttons, boxes and walls. Why are we still relying on UI paradigms that were invented for a clumsy, lint-magnet ball mouse and a clacky keyboard? With experiments in alternative control paradigms such as Google’s Project Soli and Jacquard, and consumer products such as Alexa and Siri, I believe we are on the cusp of a UI renaissance.

Part of this renaissance will be driven by how we manipulate UI. The key isn’t to craft some trendy control mechanism for your product - like a fancy touch gesture on a mobile app that no one understands without your lengthy onboarding video that everyone skips. Nor is it to play to your lowest common denominator by mimicking some traditional interface just because your user might be more familiar with it - like a camera app that features a skeuomorphic shutter button.

The goal is to leverage users’ natural and intuitive skills in order to reduce the resistance of translating a human need into a machine command. Both of my examples above only add more UI, and more resistance.

Let me explain.

Amazon’s Alexa is not popular because of the quality of its answers or because it always understands what you say (although both matter). I argue that Alexa is popular because you know exactly how to use it, and its usage involves an extremely short path of resistance: you just ask it a question, in your own words. You don’t even touch a button. You never have to put effort into translating your question into machine instruction. Therefore, Alexa has an amazing UI…but it’s completely invisible.
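To make that concrete, here is a minimal sketch of that zero-button loop in Python. The `transcribe` and `resolve` functions are hypothetical stand-ins of my own (not any real Alexa API) for a speech-to-text service and a question-answering backend; the point is the shape of the interaction: the user just talks, and every step of translating words into machine instruction stays hidden.

```python
def transcribe() -> str:
    # Hypothetical stand-in for a microphone plus a speech-to-text service.
    return input("(user speaks) ")

def resolve(utterance: str) -> str:
    # Hypothetical stand-in for intent parsing and a knowledge backend.
    canned = {"what time is it": "It's 3 o'clock."}
    return canned.get(utterance.strip().lower(), "Sorry, I don't know that one yet.")

while True:
    question = transcribe()   # no buttons, no menus - the user just talks
    if not question:          # an empty line ends this toy session
        break
    print(resolve(question))  # on a real device this would be synthesized speech
```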

All interfaces have paths of resistance, but the goal is always to make this path as short as possible. By contrast, in Photoshop, how many steps does it take just to configure a paintbrush? Color? Brush size? Type? Shape? It is strenuous to translate your creative vision into machine instruction, and that tension limits your creativity.
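Here is a toy contrast in Python between a long path and a short one, using two APIs I invented for illustration: `configure_brush` forces the user to make every machine-level decision up front, Photoshop-style, while `brush_for` accepts a statement of intent and does the translating itself.

```python
from dataclasses import dataclass

@dataclass
class Brush:
    color: str
    size_px: int
    shape: str
    hardness: float

# Long path: four machine-level decisions before a single stroke can be made.
def configure_brush(color: str, size_px: int, shape: str, hardness: float) -> Brush:
    return Brush(color, size_px, shape, hardness)

# Short path: one statement of intent; sensible presets do the translating.
def brush_for(intent: str) -> Brush:
    presets = {
        "soft sketching": Brush("graphite", 4, "round", 0.3),
        "bold inking":    Brush("black", 12, "round", 1.0),
    }
    return presets.get(intent, Brush("black", 8, "round", 0.8))

print(configure_brush("black", 12, "round", 1.0))  # the user does the translation
print(brush_for("bold inking"))                    # the interface does it instead
```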

Whether or not the future of UI involves strapping devices that look like awkward toasters to our heads, hardware will play a key role in allowing us to build products that shrink paths of resistance. But we must couple this hardware - our touch, our voice, and so on - with intelligence and insight, such as using big data to anticipate what a user will need before they ask for it. Then, the interface can disappear.
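As a sketch of what “anticipating” could mean at its simplest, here is a toy Python example assuming nothing more than a log of past actions keyed by context. A real product would use far richer signals and models; a frequency table is enough to show the idea: the interface surfaces the likely next action before being asked.

```python
from collections import Counter, defaultdict

history: dict[str, Counter] = defaultdict(Counter)

def record(context: str, action: str) -> None:
    # Log what the user actually did in a given context.
    history[context][action] += 1

def anticipate(context: str) -> str | None:
    # Offer the most common past action for this context, if any.
    common = history[context].most_common(1)
    return common[0][0] if common else None

record("monday 8am", "open calendar")
record("monday 8am", "open calendar")
record("monday 8am", "open email")

print(anticipate("monday 8am"))  # -> "open calendar", offered before being asked
```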

Another important component of future UI is recognizing that a world exists around your user. I agree with Meron when he says that computers are amazing, but we often don’t notice how terrible they are. Phones break our eye contact with others every time they go off. Distraction from our devices ruins person-to-person moments. The problem, though, is not so much the interruption as the lack of perspective. Modern UIs are optimized for a single vantage point: hunched over a rectangular screen, isolated from anyone who doesn’t share your view. As soon as you open that text message, those around you are cut off from your experience.

AR and holographic interfaces have the potential to change this in a powerful way, because they can support interfaces optimized for multiple vantage points. Sci-fi movies have long dreamed of UIs that adapt to their audience.

Taking a cue from Hollywood, the future of UI also recognizes that technology must adapt to those around us and draw us in, rather than divide us. The gap between our digital and physical lives keeps shrinking, and interfaces need to start supporting a future that understands we don’t want to be hunched over our screens, on our own, all the time.

This means we should optimize for quicker interactions, interactions that don’t leave us hunched over our screens, and interactions that engage those around us.

As product developers, we need to start taking a few considerations seriously so that our products stay relevant:

  1. Interfaces should become invisible by reducing their path of resistance so much that a user’s intuition can take over.
  2. This means more products need to leverage machine learning, insights from big data, and more advanced control mechanisms - including voice, refined gesture control, motion, or the other 12+ sensors already in our pockets (see the sketch after this list). But - only if this supports #1.
  3. Software has “eaten” the world so to speak, and it’s time for hardware to come back into the spotlight so that we can build software that better leverages #1, #2, and the world around us.
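As a small illustration of point #2, here is a Python sketch that fuses a couple of the sensors already in our pockets to pick a mode without any explicit command. The readings and thresholds are made up, and a shipping product would use a learned model instead of these heuristics, but the shape is the same: the decision happens before the user touches any UI.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    ambient_lux: float      # light sensor reading
    accel_magnitude: float  # accelerometer magnitude, in g
    hour: int               # local hour of day

def infer_mode(frame: SensorFrame) -> str:
    # Made-up heuristics standing in for a learned model.
    if frame.accel_magnitude > 1.5:
        return "walking: enlarge touch targets, defer notifications"
    if frame.ambient_lux < 10 and frame.hour >= 22:
        return "in bed: dim the screen, silence alerts"
    return "default"

print(infer_mode(SensorFrame(ambient_lux=5, accel_magnitude=0.1, hour=23)))
```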

The future of UI is moving in a direction where screens will make up a smaller share of overall user interaction. Users are becoming accustomed to interacting with machines in entirely new ways. It is vital to start thinking about how you can leverage these ideas to move your UIs past windows, buttons, boxes and walls.
