The last 24 months have been filled with announcements of ‘new technologies’, from smartwatches and augmented reality glasses to beacons and sensors.
As with the mass adoption of the personal computer beginning in the 1980s and the smartphone in the 2000s, technology is primed for a paradigm shift that will usher in a new phase of interaction design.
Wearables, virtual reality and haptic UI are just starting to crystallize into what is becoming the next generation of ubiquitous user interfaces: a post-screen era.
What drives this shift exactly? We believe there are two main factors that, combined, act as a catalyst for a new generation of user interfaces beyond the common Graphical User Interfaces (GUIs) of smartphones and personal computers.
1. People Feel a Growing Need
Ubiquitous computing creates the necessity of ubiquitous interfaces. To date, the most popular user interfaces are GUIs, a product of our increasingly connected world and a massive need for a centralized method of control. Currently this need is being filled almost solely by the smartphone, a fantastic device, but far from the ideal solution.
While the need for different interfaces is massive (nobody really enjoys browsing through applications to turn on a light switch), the majority of interfaces are still graphic-based. The most-used modern interfaces explicitly favor our sense of sight above all others, requiring full visual attention to switch on a light, turn up the music or text a friend.
Now that constant contact with technology is a cornerstone of modern life in many countries, screens play a major role in defining the way we live. The common understanding that screen time impacts our personal health and social interactions has both older and younger generations of users growing distrustful of technology and calling for a change.
2. Technological Barriers are Falling…Fast
While interest in alternatives to the GUI is growing, a number of interesting developments are occurring on the technical side with regard to building new technologies, particularly hardware.
With 2 billion smartphones in use globally, the explosion of new developments in miniaturized, high-performance components, paired with the falling costs of electronic parts, has left us at a turning point. We are approaching a place where adding wireless chips to any object will cost less than a dollar, allowing virtually anything to be ‘smart’. This impact can be seen everywhere, especially in the massive growth of lower-volume wireless devices on Kickstarter and Indiegogo.
Additionally, the standardization of communications, the increased number of APIs, and improvements in computing power (chips are now the size of a single grain of rice) mean that the invention of new interfaces is easier than ever for companies, startups and individuals alike.
8 Examples of User Interfaces that Will Shape Our Near Future
Both developments, growing user demand and falling technological barriers, will give way to the shift from a centralized graphical user interface like the smartphone to ubiquitous, specialized user interfaces like wearables and sensors.
But the course has been set and the shift to ubiquitous interfaces is already in motion. To shed light on the new interfaces we are just beginning to see — and our predictions for how they might develop, we have compiled a list of 8 kinds of high-potential user interfaces. They represent a broad range of ways in which technology will merge not only with our environments and objects but with our senses and bodies.
1. Gesture Recognition
The smartphone, while employing a nice little gyroscope, doesn’t take much account of what it’s like to move around in the physical world. More and more we are seeing gesture-based inputs, although they are currently relegated primarily to gaming, having not quite found their place among the traditional mouse, keyboard and tablet.
Gesture recognition at this moment has an issue of scale: platforms that take the entire body into account have yet to find an application outside of gaming. In the future, the miniaturization of gesture recognition hardware for integration into architectural spaces will become very interesting, because it will allow the input device to appear disembodied and therefore unobtrusive and seamless.
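At its simplest, a gesture input reduces to pattern-matching a stream of motion data. As a minimal illustrative sketch (not any particular platform’s API), here is a shake detector that counts how often the acceleration magnitude from a hypothetical three-axis accelerometer spikes above a threshold:

```python
import math

def detect_shake(samples, threshold=2.5, min_peaks=3):
    """Flag a 'shake' gesture when acceleration magnitude (in g,
    gravity included) crosses the threshold several times.

    samples: list of (x, y, z) accelerometer readings.
    """
    peaks = 0
    above = False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold and not above:
            peaks += 1      # count each new excursion above the threshold
            above = True
        elif magnitude <= threshold:
            above = False
    return peaks >= min_peaks

# A still device reads roughly 1 g; a vigorous shake spikes well above it.
still = [(0.0, 0.0, 1.0)] * 20
shake = [(0.0, 0.0, 1.0), (3.0, 0.5, 1.0)] * 5
print(detect_shake(still))  # False
print(detect_shake(shake))  # True
```

Full-body gesture platforms do the same thing at a larger scale, matching skeletal poses over time rather than a single magnitude.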
2. Wearables
Already in circulation, dedicated wearable devices are the first step toward a widely understood method of interacting with technology through a specialized device. While smart watches and miniaturized screens offer a not-too-distant smartphone-like experience, specialized wearables like the Jawbone or FuelBand are unobtrusive in use, not infringing on the activity they are designed to track.
We expect that wearables will become even more specialized as we design away from ‘can-display-anything’ screens and move toward function-specific designs. Likewise, since the majority of popular consumer wearables are worn on the wrist, we expect a range of new devices with a more considered integration with the body.
3. Robots
One way to fully create an off-screen technological experience is to allow the technology to occupy space and engage in a simplified, human-like way. Though generally considered the stuff of sci-fi, robots will be adopted in a number of ways over the next two decades.
Robots in the industrial sector have dramatically changed the way manufacturing works, propelled by higher processing power, better sensors and decreasing costs. This shift in the industrial space will have a ripple effect that trickles down from factories and industrial facilities to our homes and work environments: when costs drop low enough, the next few decades could easily see robots introduced into the customer service industry and the domestic sphere.
4. Brainwave Interfaces
We will get to engaging the conscious senses later, but a number of devices are actually taking a step further and using unconscious feedback as input. At the moment, most off-the-shelf (or off-the-Kickstarter-campaign) brainwave trackers have only scratched the surface: tracking the electrical activity of constantly firing neurons, collecting data and mapping it to a progression or output of some sort.
Monitoring brainwave impulses will become particularly interesting once we can reliably collect specific impulses and map them to outputs that make interaction more seamless. Companies like Melon, Emotiv and Muse are already making strides in this space, connecting the data to bridge the gap from quantifying one’s self to understanding one’s self.
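The mapping step can be sketched in miniature. The toy pipeline below (entirely hypothetical numbers and thresholds, no real device or vendor API) slides a window over a signal, uses its variance as a crude proxy for activity level, and emits an on/off command; real trackers band-filter the signal (alpha, beta, etc.) before anything like this:

```python
from statistics import pvariance

def focus_level(window):
    """Crude activity proxy: variance of the samples in a short window."""
    return pvariance(window)

def to_commands(samples, window=8, threshold=0.5):
    """Slide a fixed window over the signal and emit ON when activity
    rises above the threshold, OFF otherwise."""
    commands = []
    for i in range(0, len(samples) - window + 1, window):
        level = focus_level(samples[i:i + window])
        commands.append("ON" if level > threshold else "OFF")
    return commands

calm = [0.1, -0.1] * 8      # low-amplitude signal
active = [2.0, -2.0] * 8    # high-amplitude oscillation
print(to_commands(calm + active))  # ['OFF', 'OFF', 'ON', 'ON']
```

The hard research problem is exactly what this sketch glosses over: isolating a *specific* impulse rather than gross activity.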
5. Augmented Reality & Virtual Reality
Virtual and augmented reality have a nearly infinite number of interesting use cases. The most widely recognized at the moment is gaming, but we can imagine applying them to everything from immersive advertising campaigns to automotive design. Right now, viewers fall into three distinct categories: Virtual Reality, Virtual Notification and Augmented Reality.
The accompanying hardware for these three categories of experience is still lagging behind because it is being developed at polar ends of the spectrum: the high-cost Oculus Rift at one end, the maker-friendly Google Cardboard and DODOcase Pop-Up VR Viewer at the other, and very little in between. The massive investment in products like Oculus Rift, however, is a good sign that as soon as 2015-16 we will see an explosion of activity in the VR space.
6. Sensors & Actuators
In order to extend the possibilities for interfaces, we have to make not only our devices (and the methods by which we control them) smart, but also our environments. Reconciling the spaces and objects we interact with today with the technology we would like them to carry is a big job that will require not only new smart objects with embedded technology, but also retrofitting things that aren’t connected.
With sensors and actuators we will be able to create meaningful connections between space and object, and with the extremely low cost of sensors, we can do this almost anywhere. The integration of sensors will streamline a number of daily activities, such as turning on the lights when you arrive home or starting the coffee maker when you wake up.
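The connection between a sensor event and an actuator response is, at heart, a rule table. As a minimal sketch of that idea (all event names and actions here are hypothetical, not a real smart-home protocol):

```python
from dataclasses import dataclass, field

@dataclass
class Home:
    """Toy event bus wiring sensor events to actuator actions."""
    rules: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def when(self, event, action):
        """Register an actuator action to run on a sensor event."""
        self.rules.setdefault(event, []).append(action)

    def sense(self, event):
        """A sensor fired: run every action registered for that event."""
        for action in self.rules.get(event, []):
            self.log.append(action())

home = Home()
home.when("door_opened", lambda: "hallway light ON")
home.when("wake_up", lambda: "coffee maker STARTED")

home.sense("door_opened")
home.sense("wake_up")
print(home.log)  # ['hallway light ON', 'coffee maker STARTED']
```

Real systems add the messy parts this sketch omits: device discovery, conflicting rules, and the retrofitting problem described above.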
7. Speech Recognition
When Apple first launched Siri as a feature of the iPhone 4S, it was a stretch of the imagination to conceive of voice as a viable input that could compete with, let alone replace, non-time-based media like text or photos. Regardless, speech recognition is actually one of the most intuitive ways to interact with technology because it fully mimics human-to-human interaction. Likewise, it is an important feature for the seamless use of a number of other potential interfaces, such as robotics and sensors.
In the next generation of user interfaces, it only makes sense that we would control, command and request functions of our objects much in the same way we do of other humans. Leaps forward in Natural Language Processing (NLP) will allow these interfaces to respond and react seamlessly.
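Once speech has been transcribed to text, the remaining step is mapping an utterance to an intent. A deliberately naive sketch of that mapping, using word overlap against a small hand-written command table (the phrases and actions are invented for illustration; production NLP goes far beyond this):

```python
# Hypothetical command table: known phrase -> (device, action)
INTENTS = {
    "turn on the light": ("lights", "on"),
    "turn off the light": ("lights", "off"),
    "play some music": ("speaker", "play"),
}

def parse_command(utterance):
    """Match a transcribed utterance to the closest known command
    by counting shared words; return its (device, action) pair."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for phrase, action in INTENTS.items():
        score = len(words & set(phrase.split()))
        if score > best_score:
            best, best_score = action, score
    return best

print(parse_command("please turn the light on"))  # ('lights', 'on')
print(parse_command("play music"))                # ('speaker', 'play')
```

The gap between this sketch and Siri — handling paraphrase, context and ambiguity — is exactly where the NLP advances mentioned above come in.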
8. Multisensory User Interfaces
Multisensory, natural or tangible user interfaces use multiple sensory inputs and feedback channels for more intuitive interactions. Devices that combine a number of sensory inputs, or engage a sense in an intuitive way, will be the way forward in creating more livable spaces with technology. As humans we have built up thousands of years’ worth of tactile and sensory knowledge about our material world. Streamlining our interactions so that they feel more like our interactions with low-tech objects will be the next method of designing in a post-screen era.
These sensory interactions are a growing field and will show a way forward in connecting our material world with ubiquitous computing, creating experiences that feel familiar, and therefore intuitive, while employing cutting-edge technology.