New types of interfaces are appearing everywhere. Speech, haptics and gesture are becoming more and more common – but what is the reason for this shift? How do graphical user interfaces (GUIs) hold us back? We explore 10 reasons why we need a new generation of ubiquitous user interfaces (UUIs).
7 minute read
October 26, 2014
The goal of this post is to point out some of the shortcomings of graphical user interfaces, in particular touchscreens like our smartphones, with the intention of giving food for thought on what a new generation of ubiquitous user interfaces could look or feel like.
"Computing is not about computers any more. It is about living." - Nicholas Negroponte, Founder of MIT Media Lab
Over the past 35 years, personal computers and mobile devices have merged an incredible number of physical products and services into a single generic device - the smartphone.
This change has brought huge benefits in terms of productivity, but also drawbacks in terms of usability. It has required the introduction of menus and graphical imitations of real things, with no shape or feel to naturally differentiate them.
Graphical user interfaces require your focus and attention, as well as your willingness to learn and adapt to new structures and ever-changing menus. This is a problem for many people, especially when it comes to extremely simple, creative or mission-critical tasks where you need your attention elsewhere.
One of the most common assumptions is that centralizing tasks and information means greater efficiency for the user. As a result, devices are ruled by menus and folder structures. This approach was carried through to all kinds of applications; including ones that don’t directly require a computer — notably the control of connected devices.
However, complexity and menus mean that it takes longer for you to get to where you want to go. And sometimes, time is a deal breaker.
If we look at the light switch example, using the smartphone interface requires several steps: pulling out your smartphone, unlocking it, searching for the right app, opening it and selecting the right setting. This process is time-consuming and a step in the wrong direction compared with the user flow of a traditional light switch.
Turning on a light takes 10 to 15 steps on a smartphone, which translates to about 10–15 seconds including loading times. Imagine yourself standing in a hallway, waiting 15 seconds for the light to come on.
As a result, when faced with the choice between using their phone and walking to the wall to hit a light switch, most users choose the latter; we have even observed this first hand. It's simply faster, easier and less distracting to flip a switch than to scroll through pages of apps every time you want to turn on the light.
3. Cognitive Load
Spreading cognitive load onto multiple senses means that you can do things without having to think about them. For example, when you hit a light switch, you use your eyes, ears, touch, awareness of positioning and also your motor memory to perform the activity. On a smartphone, you lose all of that, because graphical user interfaces require you to process information mainly through your eyes, with only a very reduced version of haptics.
When you check your phone, you mentally have to drop everything else you’re doing — a conversation, a task or a creative process — and channel full attention toward your sense of sight. The ability to focus on “the music rather than the instrument” is a crucial skill and human need.
The PC was developed with the intention of revolutionizing education and work. With time, things like shopping and social relationships were added. The adoption of the smartphone, a miniaturized version of the PC, meant that you were now carrying all of these applications into your home and your bedroom, including pop-up notifications and constant availability.
Multitasking limits our ability to perform well because our brains are wired to switch between tasks rather than perform several tasks at once. By shifting rapidly from one thing to another, we interrupt our train of thought, hindering our workflow. Our productivity goes down by as much as 40% when we’re doing several things at once.
Although we constantly feel connected, the overload of digital distractions in our lives is causing us to lose touch with reality. Designing for stickiness can have serious negative side effects such as anxiety, social isolation, lack of empathy. These side effects inevitably impact our health, relationships and society as a whole.
On average, people check their smartphones every 15 minutes or less and become anxious if they aren't allowed to do so.
79% of people reach for their smartphone within 5 minutes of waking up.
What this essentially means is that we're filling every single gap in our lives with some form of external input, be it games, news or text messages. It is no surprise, then, that 84% of people say they could not go a single day without their smartphones. People are increasingly showing symptoms of withdrawal, stress or lack of reflection and creativity. There is even a growing number of clinics that treat things like "smartphone addiction". 1 in 3 people would rather give up sex than their smartphone.
The inability to "just be" dramatically decreases the ability to be creative, a process which requires time for reflection.
6. Social Impact
A smartphone is designed for one person, not only in the way you use it, but also in the way the screen is designed. The screen is visible only from a narrow angle, so you're not able to share what you're doing or have other people participate.
Imagine yourself in a conversation with a friend. You’re really involved in the dialogue and your friend mentions a book you should read. Stopping the conversation to pull out your smartphone and browse through apps to take a simple note completely interrupts the social flow and conversation.
People are getting tired of this. It's considered rude if you pull out your smartphone at the dinner table or during a conversation because it shows that you’re not sharing your undivided attention with the people in front of you and that you’re actually somewhere else.
The PC was designed for people sitting at a desk.
The smartphone was designed to fit in your pocket.
Social networking, for example, happens a lot more on smartphones than on tablets because you can do it quickly on the subway or during your break. Online shopping, on the other hand, happens mostly on tablets, simply because of the screen size (although the smartphone and tablet are almost identical from a technological perspective). People prefer devices which are designed for a specific situation, even if this choice is not made consciously.
Smartphones are designed for the lowest common denominator, to meet the needs that satisfy the biggest number of people. This approach makes sense, but it does not mean it is also the best design for all people or situations. With dropping hardware costs and easier access to local manufacturing, we now have the opportunity to build interfaces that are truly tailored to specific user needs, be it the cold winter of Siberia, an emergency room or your home – all places that call for a very specific user interface.
Most of us never think about this, but many people actually have problems using a smartphone. The size of the screen and in particular the buttons require good eyesight and motor control of your hands.
Yet there are 300 million people worldwide who are visually impaired, and even more who are physically impaired.
9. User Experience
People love real things. Things that you can smell, taste, feel and touch. Clicking a virtually represented button behind glass will never feel real.
Having a conversation through video chat is nothing compared to the awareness, the warmth and the smell of a person next to you.
People are looking for a way to interact with technology that feels real and natural and that addresses all human senses, like touching a pen or the sound, smell and touch of turning a page in a newly printed book. People want to regain a certain user experience and quality of life that is being lost through touch screens.
10. Limitation of human abilities
Humans have more than 5 senses. Depending on which scientist you ask, people have at least 10 different senses, with various organs to perceive external stimuli. Some of the lesser-known senses are proprioception (the awareness of the relative position of body parts), the sense of orientation in three-dimensional space and the sense of equilibrium.
A screen limits us to vision, sound and a “light” version of haptics.
We can do better than that.
Multisensory, natural or tangible user interfaces use multiple senses, inputs and feedback for more intuitive interactions. Devices that combine a number of sensory inputs, or engage a sense in an intuitive way, will be the way forward in creating more livable spaces with technology. Streamlining our interactions so that they feel more like our interactions with low-tech objects will be the next method of designing in a post-screen era.
That's why we created our first product, Nuimo Control. Nuimo Control is a smart home remote controller that focuses on the essentials, with simple inputs that control one or many of your favorite connected devices.
At Senic, we’re driven by a mission to create products that put humans back at the center. We believe the next evolution of user interfaces should enhance our lives, rather than distract us from the things that matter.
Would you like to read more content like this? Subscribe to our newsletter below.
Part 7/8: Seamless Interactions — Why Our First Product is just Step 1.