Touchez | NUI

Natural User Interface

In computing, a natural user interface (NUI), or natural interface, is the common parlance used by designers and developers of human-machine interfaces to refer to a user interface that is effectively invisible, and that remains invisible as the user continuously learns increasingly complex interactions.

In the 1970s, ’80s, and ’90s, Steve Mann developed a number of user-interface strategies that used natural interaction with the real world as an alternative to a command-line interface (CLI) or graphical user interface (GUI). Mann referred to this work as “Natural User Interfaces”, “Direct User Interfaces”, and “Metaphor-Free Computing”.

Mann’s EyeTap technology (a precursor to Google Glass) is a typical example of a natural user interface.

Steve Mann

NUI is not a replacement (in a competitive sense) for, say, the keyboard and the mouse, which have had their place for decades and will be around for years to come. Rather, NUI will find its own niche within computing, much as DVDs found theirs alongside video tape and the cinema, gradually taking a larger slice of the pie as the years progress.

The proximity between people and their devices (smartphone interaction, for example) changes our relationship with the natural user interface. It is a user interface designed to reuse existing skills for interacting directly with content.

The phrase “reuse existing skills” helps us focus on how to create interfaces that are natural. Your users are experts in many skills that they have gained simply by being human. They have spent years practicing skills for human-to-human communication, both verbal and non-verbal, and for interacting with their environment. Computing power and input technology have progressed to a point where we can take advantage of these existing non-computing skills. NUIs do this by letting users interact with computers using intuitive actions such as touching, gesturing, and talking, and by presenting interfaces that users can understand primarily through metaphors drawn from real-world experiences.

This is in contrast to a GUI, which uses artificial interface elements such as windows, menus, and icons for output and a pointing device such as a mouse for input, or to a CLI, which relies on text output and text input via a keyboard.

At first glance, the primary difference between these definitions is the input modality: keyboard versus mouse versus touch. There is another, subtle yet important, difference: the CLI (command-line interface) and the GUI are defined explicitly in terms of the input device, while the NUI is defined in terms of the interaction style. Any type of interface technology can be used with a NUI as long as the style of interaction focuses on reusing existing skills.
NUI comes in various forms (a brief code sketch of the touch modality follows this list):

  • Haptic technology: Outputting information to our skin.
  • Gesture recognition: The ability to communicate with computers with our bodies and especially our arms and fingers.
  • Tangible and touch tech: The ability to directly indicate selections and manipulate objects in ways that computers can understand.
  • Voice recognition and generation: The ability to speak to a computer as we would speak to another person.
  • Ocular control or gaze monitoring: The ability to point with our eyes.
  • Cerebral (brain) interfaces: Using thoughts or brain waves to communicate to computers.
  • Near field communications: Letting us place objects in proximity to initiate data transfer and indicate selections or focus.
  • OLED and eInk displays: Visualizations of the abstractions around us, everywhere.
  • Heads-up displays: The personalized augmentation of the world around us.
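To make the touch item above a little more concrete, here is a minimal sketch of swipe detection in a web browser using standard pointer events. It is only an illustration: the element id "surface" and the 50 px threshold are assumptions for this sketch, not part of any particular NUI toolkit.

```typescript
// Minimal swipe-gesture sketch using standard browser pointer events.
// The element id "surface" and the 50 px threshold are illustrative assumptions.
const surface = document.getElementById("surface");

let startX = 0;
let startY = 0;

surface?.addEventListener("pointerdown", (e: PointerEvent) => {
  // Remember where the finger (or pen, or mouse) first made contact.
  startX = e.clientX;
  startY = e.clientY;
});

surface?.addEventListener("pointerup", (e: PointerEvent) => {
  const dx = e.clientX - startX;
  const dy = e.clientY - startY;

  // Treat a mostly horizontal movement of more than 50 px as a swipe.
  if (Math.abs(dx) > 50 && Math.abs(dx) > Math.abs(dy)) {
    console.log(dx > 0 ? "swipe right" : "swipe left");
  }
});
```

The point of the sketch is that the user's existing skill, sliding a finger across a surface, maps directly onto the interaction, with no artificial widget in between.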

However, voice, gesture, and touch do not by themselves make an interface a NUI. What works well inside a car may be absolutely useless inside an aeroplane.

This change from GUI to NUI is not just about the hardware and software, which of course become faster over time; it is about:

  • Who is doing what?
  • Where?
  • With whom?
  • For how much?
  • And how?

5 trends in NUI for the next few years

  • Motion Tracking Sensors

A novelty now, but motion-tracking sensors will become embedded in everyday devices, just as cameras are in phones and laptops.

Conference rooms will be fitted out with sensors, and the “room” will know when someone enters; through speech, a visitor can talk to the room: “do a search on such and such a subject” or “pull up info on….” This is more than just hand tracking, which already exists: the sensors are aware of the people inside the room, which adds to the fullness of the experience.
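As a rough idea of what such a spoken command could look like in code, the sketch below uses the browser Web Speech API (available in Chromium-based browsers); the “do a search on” phrase handling is an assumption for illustration, not an existing product feature.

```typescript
// Minimal voice-command sketch using the browser Web Speech API.
// Assumes a Chromium-based browser; the "do a search on" handling is illustrative.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognizer = new SpeechRecognitionImpl();
recognizer.continuous = true; // keep listening while people are in the room
recognizer.lang = "en-US";

recognizer.onresult = (event: any) => {
  // Take the transcript of the most recent utterance.
  const latest = event.results[event.results.length - 1];
  const phrase: string = latest[0].transcript.trim().toLowerCase();

  // React to a spoken command such as "do a search on renewable energy".
  if (phrase.startsWith("do a search on")) {
    const topic = phrase.replace("do a search on", "").trim();
    console.log(`Room search requested for: ${topic}`);
  }
};

recognizer.start();
```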

  • Touchless interaction

Interactive spaces inside rooms will become far more common, letting people interact with their devices through gestures. The sensors will be able to see how many people are in the room and grasp the context of the interaction. This is more than just a Kinect.

  • Data Mobility

At this point all systems are set up as silos (Android, iOS), but this will change, and it will also become easier to grab and share media from other devices in a seamless fashion. Interactions across multiple platforms, like swiping media from an iPad towards a touch wall, will be common in a few years.

  • Collaboration

In today’s work environment, humans learn the tools and software needed to collaborate and work together. In the future this will change: the software will learn more about the humans.

AI is set to make rapid progress. With new sensors feeding more information into these applications, they will be able to respond to us in a more human fashion, much as the Nest thermostat’s sensors already do.

  • The relationship between computers and humans

We will have relationships with computers instead of just using the computer as a tool.

The computer will learn to partner with you rather than just work on command, and people will form a deeper emotional attachment to their devices (the smartphones that hardly anyone can bear to be away from are already a good example). Speech recognition programs will strengthen this bond.

The emotional connection we have with friends and animals will be extended to computing devices and their software.