A user interface is an interface through which a person controls software or hardware. Ideally, a user interface is user-friendly, making the interaction as instinctive and intuitive as possible. The best-known example in computer software is the graphical user interface.
Unlike today's machines, early computers were far too slow for graphical user interfaces. At the beginning, users only had the command line interface (CLI), through which they could issue commands as text at a command prompt. This evolved into the text user interface (TUI), which has long been used, for example, to install operating systems. As computers were adopted by more and more people, and the number of households with computers kept growing, it became necessary to develop a user-friendly interface.
This led to the development of the graphical user interface (GUI), which, with growing computing performance, has established itself permanently. Further technological advances brought the voice user interface (VUI), which allows humans to interact with computers through speech. In some video games, for example those using the Kinect sensor, players can already use a natural user interface (NUI), which turns a person's natural movements into control input for the game software. The perceptual user interface (PUI) and the brain-computer interface (BCI) are still under development; the latter aims to control software through a person's thoughts. More information about these interfaces can be found below.
The command line interface is used, for example, on DOS computers. The user sees a command line with a prompt indicating the current position. Interaction is only possible by entering commands; the computer processes them and then displays another line with an entry prompt. This type of interface is outdated and has largely been superseded by GUIs.
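The cycle described above (show a prompt, read a command, process it, print the result) can be sketched in a few lines of Python. The commands and the prompt are invented stand-ins for illustration, not a real DOS shell:

```python
# Minimal sketch of a command line interface loop:
# read a command at a prompt, dispatch it, print the output.

def handle_command(line):
    """Dispatch one command line to a handler and return the output."""
    parts = line.strip().split()
    if not parts:
        return ""
    command, args = parts[0].lower(), parts[1:]
    if command == "echo":      # illustrative command: print arguments back
        return " ".join(args)
    if command == "exit":      # signal the loop to stop
        return None
    return f"'{command}' is not recognized as a command."

def repl():
    """Read and process commands until the user types 'exit'."""
    while True:
        output = handle_command(input("C:\\> "))  # the command prompt
        if output is None:
            break
        print(output)
```

Every interaction goes through this one loop, which is why a CLI forces the user to memorize the available commands.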
A text user interface is character-oriented. It runs in the hardware's text mode but makes extensive use of the whole screen. The programmer has only 256 characters in a single font at their disposal. Navigation is usually done with the keyboard rather than the mouse. Examples include the Norton Commander and Turbo Pascal from version 5.0 onwards. This interface is also used in boot loaders and BIOS setup programs, and operating system installers have long relied on it as well.
The graphical user interface (GUI) is the most commonly used interface in modern software. It refers to the window that contains all the elements of the software. Users interact through the mouse and keyboard, for instance via buttons and menus in the software window. This window is the interface between the user and the software. Typical elements such as toolbars are also common and allow similar workflows for everyday interactions across different operating systems. The design of a graphical user interface can be specified with the aid of a screen design.
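The idea that window elements such as buttons are the interface between user and software can be illustrated with a toy event-dispatch model. No real GUI toolkit is used here; the window, the buttons, and their labels are invented for illustration, but the pattern (a widget mapping a click to a callback) is what toolkits do internally:

```python
# Toy model of a GUI: widgets map user clicks to callbacks.

class Button:
    def __init__(self, label, on_click):
        self.label = label          # text shown on the button
        self.on_click = on_click    # function run when the button is clicked

    def click(self):
        return self.on_click()

class Window:
    """A window holds the widgets the user can interact with."""
    def __init__(self, title):
        self.title = title
        self.buttons = {}

    def add_button(self, label, on_click):
        self.buttons[label] = Button(label, on_click)

    def click(self, label):
        # Simulate the user clicking a button by its label.
        return self.buttons[label].click()

window = Window("Example application")
window.add_button("Save", lambda: "document saved")
window.add_button("Quit", lambda: "application closed")
```

Because each action is bound to a visible, labeled element, the user recognizes what is possible instead of having to remember commands.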
When the first graphical user interfaces were developed, pieces of the real world served as a model in order to make operating the software more comprehensible. This is primarily reflected in the icons used, such as a recycle bin, a folder, or a disk icon for saving. From today's perspective, most of these images are outdated, but they continue to be used. In essence, the study was used as a metaphor: the "desktop" is the work desk, and while real folders are usually kept in a cabinet, the usage and logic of a folder structure were adopted.
New images, too, should always refer to things the user already knows, as this makes interaction easier. The GUI aims to let people visually recognize what a button does, so users no longer have to memorize all the commands, as was the case with CLIs.
When designing graphical user interfaces, there are guidelines that help improve user-friendliness and standardization. A well-known example is Ben Shneiderman's eight golden rules, which include striving for consistency, offering informative feedback, and permitting easy reversal of actions.
In a voice user interface, interaction between user and machine takes place through voice input and output. For example, a user can verbally select a person from a saved phonebook in order to call them. Speech-to-text applications and voice recognition software also use voice-controlled interfaces. The advantage of this form of interaction is that users need nothing other than their voice: their hands are free, and they do not have to constantly look at the display. Text input on devices with small keyboards, such as smartphones, can also be made easier by voice user interfaces.
Examples include Apple's assistant Siri, Samsung's S Voice, and Google's voice search. One prerequisite for a successful VUI is that users get a good listening experience; particularly with automated voice answering systems on customer hotlines, the caller should not be overburdened with long announcements. Voice interaction feels very natural, since speech is the most natural form of human communication.
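At its core, a VUI maps a recognized utterance to an action, such as the phonebook call mentioned above. The sketch below skips the speech-to-text step (a real system would use a recognition engine for that) and shows only the dialog logic; the phonebook entries are invented:

```python
# Sketch of the dialog logic behind a voice user interface.
# The utterance is assumed to be already transcribed to text
# by a speech recognition engine; names and numbers are made up.

PHONEBOOK = {"anna": "+49 170 1234567", "ben": "+49 151 7654321"}

def handle_utterance(text):
    """Map a transcribed voice command to an action description."""
    words = text.lower().split()
    if words[:1] == ["call"] and len(words) > 1:
        name = words[1]
        number = PHONEBOOK.get(name)
        if number:
            return f"Calling {name.capitalize()} at {number}"
        return f"No entry for {name.capitalize()} in the phonebook"
    return "Sorry, I did not understand that"
```

Keeping the set of recognized phrases small and the responses short is exactly the "do not overburden the caller" advice from above.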
With tangible user interfaces, interaction takes place through dice, balls, and other physical objects. Tangible user interfaces are rarely encountered in everyday life, even though their development has advanced significantly. One reason they are rare is that interaction with physical objects no longer works if the objects get lost. Additionally, if you already have a computer on your desk, a tangible user interface makes little sense. Museums and exhibitions are good examples of settings where tangible user interfaces come in handy: their physical objects are conspicuous and encourage interaction.
Users can engage with them playfully, which supports the learning effect, e.g., in museums. The physical object makes the experience more memorable; hence the use in exhibitions, where visitors remember the one stand at which they actively experienced something. A tangible user interface offers many options, since the object can be varied in shape, color, surface, and so on. From a sandbox with wooden blocks to a magnifying glass for images, everything is possible.
The natural user interface aims to enable user interaction that is as natural and intuitive as possible, while the actual interface is barely visible, e.g., on a touchscreen. With NUIs, user input is done through gestures and touches; a combination with a VUI is also possible. Thanks to the direct feedback from the device, the operation feels more natural than input with mouse and keyboard. Besides touchscreens, NUIs are also used in video games.
For example, the Nintendo Wii triggers actions on the screen when the player moves the controller by hand. Another example is the Kinect extension for the Xbox, which makes it possible to control a character on the screen through one's own body movements. In both cases, the game reacts to natural movements, making the interaction feel natural.
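Controller-based motion input of this kind ultimately means classifying a stream of sensor readings as a gesture. The toy sketch below detects a "swing" from accelerometer magnitudes; the threshold values are invented for illustration, and real systems use far more sophisticated pattern recognition:

```python
# Toy gesture detector: classify a sequence of accelerometer
# magnitude readings (in g) as a gesture.
# The threshold values are invented for illustration.

SWING_THRESHOLD = 2.5   # a sharp peak above this counts as a swing
REST_LEVEL = 1.2        # readings below this count as holding still

def classify_motion(samples):
    """Return 'swing', 'rest', or 'unknown' for a list of readings."""
    if not samples:
        return "unknown"
    if max(samples) >= SWING_THRESHOLD:
        return "swing"
    if max(samples) <= REST_LEVEL:
        return "rest"
    return "unknown"
```

The game loop would call such a classifier continuously and map each detected gesture to an on-screen action, which is why the interaction feels immediate.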
A perceptual user interface is a perception-driven user interface that is still being researched. PUIs are meant to combine the concepts of the GUI and the VUI and to incorporate electronic gesture recognition in order to facilitate interaction with the computer. By adding auditory and visual perception to gesture recognition, this interface aims to improve the interaction even further.
A brain-computer interface uses human thoughts as input. There has already been considerable success in this area, and the research, which spans several application areas, is very promising. Electrodes measure brainwaves, which are then evaluated by various algorithms. This makes it possible, for example, to control robotic arms, which can be a great relief in the everyday life of people with disabilities. The automotive industry is also working on a similar implementation that aims to control a vehicle through human thoughts.
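The chain described above (electrodes measure brainwaves, algorithms evaluate them, a control signal results) can be illustrated with a deliberately simplified sketch. The power threshold and the commands are invented; real BCI pipelines involve far more elaborate signal processing and machine learning:

```python
# Toy brain-computer interface step: estimate the signal power of an
# electrode channel and map it to a simple actuator command.
# The threshold and the commands are invented for illustration.

def mean_power(samples):
    """Average power of a list of EEG samples (e.g., in microvolts)."""
    return sum(s * s for s in samples) / len(samples)

def to_command(samples, threshold=100.0):
    """Map measured activity to a control command for a robotic arm."""
    if mean_power(samples) >= threshold:
        return "close gripper"   # strong activity: perform the action
    return "hold position"       # weak activity: do nothing
```

The point of the sketch is only the structure: raw measurement in, algorithmic evaluation, discrete command out.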
There are a number of similarities and differences between creating a graphical user interface and creating a website.
For instance, navigation on a website is driven by the user, who chooses their own path through the page structure. In a GUI, the software developer can control which options the user has at a particular moment: if a function is not available, the software can hide that option. In a dialog that spans several windows, the software can even be designed so that the user cannot navigate back. On a website, by contrast, the previous page can always be revisited. Navigation should therefore be taken into account when designing a website. The page hierarchy can become confusing very quickly, so its clarity should always be ensured; a breadcrumb navigation can be helpful here. Long click paths also play a key role – for the Google bot as well.
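A breadcrumb trail simply makes the visitor's position in the page hierarchy visible. A minimal sketch that derives one from a URL path (the path segments and the separator are invented for illustration):

```python
# Build a breadcrumb trail from a URL path so the visitor always
# sees where in the page hierarchy the current page sits.

def breadcrumb(path, separator=" > "):
    """Turn a path like '/shop/shoes/sneakers' into a readable trail."""
    parts = [p for p in path.strip("/").split("/") if p]
    trail = ["Home"] + [p.replace("-", " ").title() for p in parts]
    return separator.join(trail)
```

For example, `breadcrumb("/shop/shoes/sneakers")` yields `"Home > Shop > Shoes > Sneakers"`, making both the current position and every level above it clickable targets.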
A user works with a software program over a long period of time and thus becomes familiar with the many elements of its GUI. On a website, the dwell time is much shorter; users often view individual pages as a small part of the entire Internet. Website designers should therefore make sure that the user experience on their site is on par with other websites. For instance, the navigation bar is generally found on the left or at the top; placing it at the bottom of the page irritates users and will cause most of them to leave. As a webmaster, one should therefore ensure that the basics of the website work just as they do everywhere else on the Internet. This makes users feel comfortable and makes them more likely to become customers.