Coming soon: new ways to interact with machines


Our electronic and computing devices are becoming smaller, more adapted to our needs, and closer to us physically. From the very first heavy, stationary and complex computers, we have moved on to our smartphones, ever at the ready. What innovations can we next expect? Éric Lecolinet, researcher in human-machine interactions at Télécom ParisTech, answers our questions about this rapidly changing field.

How do you define human-machine interactions?

Human-machine interactions refer to all the interactions between humans and electronic or computing devices, as well as the interactions between humans via these devices. This includes everything from desktop computers and smartphones to airplane cockpits and industrial machines! The study of these interactions is very broad, with applications in virtually every field. It involves developing machines capable of representing data that the user can easily interpret and allowing the user to interact intuitively with this data.

In human-machine interactions, we distinguish between output data, sent by the machine to the human, and input data, sent from the human to the machine. In general, output data is visual, since it is sent via screens, but it can also be auditory or even tactile, using vibrations for example. Input data is generally sent using a keyboard and mouse, but we can also communicate with machines through gestures, voice and touch!
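To make this input/output split concrete, here is a minimal sketch in Python (with invented names, not any particular system's API) of how a machine might route input events from different channels to its output channels:

```python
# A minimal sketch of the input/output distinction: input events flow from
# the human to the machine, and the machine answers on one or more output
# channels (visual, auditory, tactile). All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class InputEvent:
    channel: str   # e.g. "keyboard", "mouse", "touch", "voice", "gesture"
    payload: str   # the raw data sent by the user

def respond(event: InputEvent) -> dict:
    """Route an input event to the output channels the machine will use."""
    if event.channel == "voice":
        # A spoken request might get an auditory reply plus an on-screen result.
        return {"audio": f"Searching for {event.payload}...", "screen": "results"}
    if event.channel == "touch":
        # Touch input is often acknowledged with a vibration (tactile output).
        return {"vibration": "short pulse", "screen": "updated view"}
    return {"screen": f"echo: {event.payload}"}

print(respond(InputEvent("voice", "a movie for tonight at 8 o'clock")))
```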

The study of human-machine interactions is a multidisciplinary field. It involves IT disciplines (software engineering, machine learning, signal and image processing), as well as social science disciplines (cognitive psychology, ergonomics, sociology). Finally, design, graphic arts, hardware, and new materials are also very important areas involved in developing new interfaces.


How have human-machine interactions changed?

Let’s go back a few years, to the 1950s. At that time, computers were computing centers: stationary, bulky machines housed in specialized laboratories. Humans were the ones who adapted to the computers: you had to learn their language and become an expert in the field if you wanted to interact with them.

The next step was the personal computer, the Macintosh, in 1984, following work by Xerox PARC in the 1970s. What a shock! The computer belonged to you, sitting in your office or home. First the desktop PC was developed, followed by the laptop that you could take with you anywhere: the idea of ownership emerged, and machines became mobile. Finally, these first personal computers were made to facilitate interaction. It was no longer the user’s job to learn the machine’s language. The machine itself facilitated the interaction, particularly with the WIMP (Window Icon Menu Pointer) model, the desktop metaphor.

While machines have been steadily miniaturized since the 2000s, the true breakthrough came with the iPhone in 2007. This was a new paradigm that significantly redefined the human-machine interface, making the primary goal that of adapting as much as possible to humans. Radical choices were made: the interface was made entirely tactile, with no physical keyboard, and it featured a high-resolution multi-touch screen, proximity sensors that turned off the screen when the phone was lifted to the user’s ear, and a display that adapted to the way the phone was held.

Machines therefore continue to become smaller, more mobile, and closer to the body, like connected watches and biofeedback devices. In the future, we can imagine having connected jewelry, clothing, and tattoos! And more importantly, our devices are becoming increasingly intelligent and adapted to our needs. Today we no longer learn how to use the machines; the machines adapt to us.


There has been a lot of talk in the media lately about vocal interfaces, which could be the next revolution in human-machine interfaces.

This technology is very interesting. A lot of progress is being made and it will become more and more useful. There is certainly a lot of work being carried out on these vocal interfaces, and more services are now available, but, for me, they will never replace the keyboard and mouse. For example, they are not suitable for word processing or digital drawing! They are great for certain, specific tasks, like telling your telephone, “find me a movie for tonight at 8 o’clock,” while walking or driving, or for warehouse workers who must give machines instructions without using their hands. Yet the interactional bandwidth, or the amount of information that can be transferred using this method, remains limited. And for daily use, confidentiality issues arise: do you really want to speak out loud to your smartphone in the subway or at the office?


We also hear a lot of talk about brain-machine interfaces…

This is a promising technology, especially for people with severe disabilities. But it is far from being available for use by the general public, in video games for example, which require very fast interaction times. The technology is slow and restrictive. Unless people agree to have electrodes implanted in their brains, they will need to wear a net of electrodes on their heads, which must be calibrated and kept from moving, with conductive gel applied to improve their effectiveness.

A technological breakthrough could theoretically soon make applications of this technology available for the general public, but I think many other innovations will be on the market before these brain-machine interfaces.


What fields of innovation will human-machine interfaces be geared towards?

There are many possibilities, and a wide range of research is currently being carried out on the topic! Many projects are focusing on gestural interactions, for example, and some devices have already appeared on the market. The idea is to use 2D or 3D gestures, different types of touch, and pressure to interact with a smartphone, computer, TV, etc. At Télécom ParisTech, for example, we have developed a prototype for a smart watch called “Watch it”, which can be controlled using a vocabulary of gestures. This allows you to interact with the device without even looking at it!

https://www.youtube.com/watch?time_continue=10&v=8Q8Feehr0Dc

This project also allowed us to explore the possibilities of interacting with a connected watch, a small object that is difficult to control with our fingers. We thought of using the watch strap as a touch interface for scrolling through the watch’s screen. There will be ongoing development in these small, wearable objects that are so close to our bodies. For example, we could someday have connected jewelry! Researchers are also working on interfaces projected directly onto the skin to interact with these types of small devices.
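As a rough illustration of the gesture-vocabulary idea, here is a minimal Python sketch (with invented gesture names, not the actual Watch it software): once a strap gesture is recognized, it is simply mapped to a watch command, which is what makes eyes-free interaction possible.

```python
# Hypothetical gesture vocabulary: each recognized strap gesture is bound
# to a command, so the watch can be operated without looking at it.

strap_gestures = {
    "swipe_up": "scroll_up",
    "swipe_down": "scroll_down",
    "double_tap": "select",
    "long_press": "back",
}

def handle_gesture(gesture: str) -> str:
    """Translate a recognized strap gesture into a watch command."""
    return strap_gestures.get(gesture, "ignored")

assert handle_gesture("swipe_down") == "scroll_down"
assert handle_gesture("shake") == "ignored"
```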

Tangible interfaces are also an important area for research. The idea is that virtually all the objects in our everyday lives could become interactive, with interactions related to their use: there would be no need to search through different menus, the object would correspond to a specific function. These objects can also change shape (shape changing interfaces). In this field of research, we have developed Versapen: an augmented, modular pen. It is composed of modules that the user can arrange to create new functions for the object, and each module can be programmed by the user. We therefore have a tangible interface that can be fully customized!
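The following sketch (hypothetical, not Versapen’s real design) shows the composition idea in Python: the pen’s behavior is simply the ordered combination of the modules the user has attached, and each module’s action can be reprogrammed.

```python
# A modular, user-programmable pen, sketched with invented module names.

class Module:
    def __init__(self, name, action):
        self.name = name
        self.action = action  # a user-programmable callback

class ModularPen:
    def __init__(self):
        self.modules = []

    def attach(self, module):
        self.modules.append(module)

    def trigger(self):
        # Run each attached module's behavior in stacking order.
        return [m.action() for m in self.modules]

pen = ModularPen()
pen.attach(Module("color_picker", lambda: "ink set to blue"))
pen.attach(Module("shortcut", lambda: "open notes app"))
print(pen.trigger())  # ['ink set to blue', 'open notes app']
```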

Finally, one of the major revolutions in human-machine interfaces is augmented reality. This technology is recent but is already functional. There are applications everywhere, for example in video games and assistance during maintenance operations. At Télécom ParisTech, we worked in collaboration with EDF to develop augmented reality devices. The idea is to project information onto the control panels of nuclear power plants, in order to guide employees in maintenance operations.

It is very likely that augmented reality, along with virtual and mixed reality, will continue to develop in the coming years. The so-called GAFA companies (Google, Amazon, Facebook, Apple) are investing considerable sums in this area. These technologies have already made huge leaps, and their use is becoming widespread. In my opinion, this will be one of the next key technology areas, just like big data and artificial intelligence today. And as a researcher specialized in human-machine interfaces, I feel it is important to position ourselves in this area!

Read more on I’MTech: What is augmented reality?


Social Touch Project: conveying emotions to machines through touch

Tap, rub, stroke… Our touch gestures communicate information about our emotions and social relationships. But what if this became a way to communicate with machines? The Social Touch Project, launched in December 2017, seeks to develop a human-machine interface capable of transmitting tactile information via connected devices. Funded by the ANR and DGA, the project is supported by the LTCI laboratory at Télécom ParisTech, ISIR, the Heudiasyc laboratory and i3, a CNRS joint research unit that includes Télécom ParisTech, Mines ParisTech and École Polytechnique. “You could send touch messages to contacts, ‘emotitouches’, which would convey a mood, a feeling,” explains Éric Lecolinet, the project coordinator. “But it could also be used for video games! We want to develop a bracelet that can send heat, cold, puffs of air, vibrations, and tactile illusions, enabling a user to communicate via touch with an avatar in a virtual reality environment.”
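As a rough illustration of what such a tactile message might look like (invented parameters, not the project’s actual protocol), an “emotitouch” could pair an emotion label with a pattern for the bracelet’s actuators:

```python
# A hypothetical "emotitouch" message: an emotion label mapped to a tactile
# pattern that the bracelet's actuators (heat, cold, air, vibration) render.

from dataclasses import dataclass

@dataclass
class Emotitouch:
    emotion: str      # e.g. "comfort", "excitement"
    actuator: str     # "heat", "cold", "air_puff" or "vibration"
    intensity: float  # 0.0 to 1.0
    duration_ms: int

def render(msg: Emotitouch) -> str:
    """Describe how the bracelet would play the tactile message."""
    return (f"{msg.actuator} at {msg.intensity:.0%} for {msg.duration_ms} ms "
            f"to convey '{msg.emotion}'")

print(render(Emotitouch("comfort", "heat", 0.4, 1500)))
```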
