Human-Machine Interaction

The Human-Machine Interaction (HMI), or Human-Computer Interaction (HCI), field is particularly interested in providing bidirectional communication between humans and machines in real, virtual, and augmented environments.

The intriguing aspect of this field lies in its multidisciplinary nature, involving not only computer scientists but also physicians, educators, entertainers, lawyers, and many other professionals with very diverse backgrounds.

In the MMSP lab we mainly focus on the computational methods pertaining to advanced human-system interaction with sensors:

  • Brain-Computer Interfaces
    • Motor imagery
    • Inner speech
  • HMI systems for emotion and mood detection
  • HMI systems for behaviour understanding 
  • Speech emotion recognition (SER)

In particular, we study electroencephalographic, electromyographic, photoplethysmographic, and galvanic skin response physiological data acquired through wearable and portable devices. Moreover, we are particularly interested in speech analysis and in the emotions conveyed by verbal communication.
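As an illustration of the kind of signal processing involved, the minimal sketch below band-pass filters a single EEG channel and computes its alpha-band power. The sampling rate, band edges, and the synthetic signal standing in for a wearable recording are assumptions, not the lab's actual pipeline.

```python
# Minimal sketch, under assumptions (sampling rate, band edges), not the lab's
# actual pipeline: band-pass filter one EEG channel and compute its alpha-band power.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 250                                           # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
raw = rng.standard_normal(FS * 60)                 # 60 s synthetic stand-in for one EEG channel

# 4th-order Butterworth band-pass in the alpha band (8-13 Hz)
b, a = butter(4, [8, 13], btype="bandpass", fs=FS)
filtered = filtfilt(b, a, raw)

# Average alpha-band power via Welch's periodogram
freqs, psd = welch(filtered, fs=FS, nperseg=2 * FS)
alpha_power = psd[(freqs >= 8) & (freqs <= 13)].mean()
print(f"Mean alpha-band power: {alpha_power:.4f}")
```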

Brain-Computer Interfaces

To learn more, contact

Aurora Saibene

HMI - BCI Research Sub-Topic Supervisor

Brain-Computer Interfaces (BCIs) both translate neural signals into machine commands and provide feedback to the BCI users.
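A closed-loop BCI can be pictured as an acquire-decode-act-feedback cycle. The skeleton below is only an assumption-level sketch of that loop, not the lab's implementation; `acquire_window`, `send_command`, `show_feedback`, and the classifier are hypothetical placeholders supplied by the surrounding system.

```python
# Schematic closed-loop BCI skeleton (an assumption-level sketch, not the lab's
# implementation). The callables passed in are hypothetical placeholders.
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """Toy feature vector: log-variance of each EEG channel."""
    return np.log(np.var(window, axis=1) + 1e-12)

def decode(window: np.ndarray, classifier) -> str:
    """Translate one window of neural signal into a machine command."""
    features = extract_features(window).reshape(1, -1)
    return classifier.predict(features)[0]       # e.g. "left", "right", "rest"

def bci_loop(acquire_window, classifier, send_command, show_feedback) -> None:
    """Acquire -> decode -> act -> give feedback, until acquisition stops."""
    while True:
        window = acquire_window()                # array of shape (channels, samples)
        if window is None:
            break
        command = decode(window, classifier)
        send_command(command)                    # translation into a machine command
        show_feedback(command)                   # feedback returned to the BCI user
```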

We collaborate with Sara Nocco, Research Fellow at the Department of Informatics, Systems and Communication @unimib, Silvia Corchs, Associate Professor at the Department of Theoretical and Applied Sciences @uninsubria, and Jordi Solé-Casals, Full Professor at the Department of Engineering @uvic.

Among the BCI applications, the MMSP lab focuses on ...

Motor imagery

Motor imagery (MI) tasks involve the imagination of voluntary movements and are good allies for neuroplasticity, i.e., the brain's ability to change its structure in response to new situations.

In the EEG field, given these characteristics, MI tasks have been especially exploited in Brain-Computer Interfaces (BCIs), which allow an online decoding of brain dynamics and can be used to control heterogeneous systems (e.g., wheelchairs).
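As a concrete, purely illustrative example of such decoding, the sketch below trains a classical Common Spatial Patterns + Linear Discriminant Analysis pipeline on synthetic MI epochs; the epoch shape, labels, and data are assumptions, not MMSP lab recordings.

```python
# Hedged sketch of a classical motor-imagery decoding pipeline (CSP + LDA),
# assuming pre-cut EEG epochs of shape (n_trials, n_channels, n_samples).
# The synthetic arrays below are placeholders, not real recordings.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 22, 500))   # 80 synthetic trials, 22 channels, 500 samples
y = rng.integers(0, 2, size=80)          # 0 = left-hand MI, 1 = right-hand MI (assumed labels)

pipeline = Pipeline([
    ("csp", CSP(n_components=4, log=True)),        # spatial filters + log-variance features
    ("lda", LinearDiscriminantAnalysis()),         # linear classifier
])
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```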

Inner speech

Inner speech is defined as the internalised process in which a person thinks in pure meanings, generally associated with the auditory imagery of one's own inner “voice”.

Speech-related brain-computer interfaces provide effective vocal communication strategies for controlling devices through speech commands interpreted from brain signals. They can improve the quality of life of people who have lost the ability to speak by restoring communication with their environment.

Mood assessment and monitoring

To learn more, contact

Claudia Rabaioli

HMI - Mood Research Sub-Topic Supervisor

Emotion and mood are two distinct but related phenomena. Moving beyond disorder-driven analysis, we are delving into the assessment and monitoring of mood outside of clinical practice.

We collaborate with Francesco Ferrise, Nicolò Dozio, and Riccardo Giussani, respectively Full Professor, Research Fellow, and PhD Candidate at the Department of Mechanical Engineering @polimi, and with Daniele Luigi Romano, Researcher at the Department of Psychology @unimib.

We are currently focusing on ...

Ecological momentary assessment

Ecological momentary assessment (EMA) allows a continuous and comfortable self-evaluation of mood, for example by exploiting easy-to-use, quick-to-answer visual questionnaires such as PickAMood.
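Purely as an illustration of how EMA responses could be logged (an assumption, not the lab's EMA tooling), the snippet below timestamps each self-report selected from a PickAMood-style set of mood categories; the category names and file layout are hypothetical.

```python
# Illustrative EMA logging sketch (hypothetical, not the lab's tooling):
# each prompt appends a timestamp and the selected mood category to a CSV file.
import csv
from datetime import datetime

MOOD_OPTIONS = [            # assumed set of pictorial mood categories
    "excited", "cheerful", "relaxed", "calm",
    "neutral", "bored", "sad", "irritated", "tense",
]

def log_response(mood: str, path: str = "ema_log.csv") -> None:
    if mood not in MOOD_OPTIONS:
        raise ValueError(f"Unknown mood category: {mood}")
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(timespec="seconds"), mood])

# Example usage after a momentary prompt:
log_response("relaxed")
```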

Understanding how mood changes emotions

We have designed an experimental protocol to understand whether different moods differently influence how we perceive emotional content.

Speech emotion recognition

To learn more, contact

Alessandra Grossi

HMI - SER Research Sub-Topic Supervisor

Speech emotion recognition (SER) is the task of detecting emotions from the human voice, considering both acoustic and linguistic information. The majority of SER classification models described in the literature have been developed for specific domains, which limits their applicability to other situations or use cases.
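To make the task concrete, the following hedged sketch extracts simple MFCC statistics from audio and trains a linear SVM emotion classifier. The synthetic clips and labels are placeholders; this is not the lab's SER model.

```python
# Minimal SER sketch under assumptions (not the lab's model): MFCC statistics
# as acoustic features, classified with a linear SVM. Synthetic waveforms and
# labels stand in for real emotional speech recordings.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_stats(waveform: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Mean and standard deviation of 13 MFCCs over the whole clip."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

rng = np.random.default_rng(0)
# Six synthetic 2-second "utterances" with assumed emotion labels
X = np.vstack([mfcc_stats(rng.standard_normal(32000).astype(np.float32)) for _ in range(6)])
y = ["happy", "sad", "angry", "happy", "sad", "angry"]

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
model.fit(X, y)

new_clip = rng.standard_normal(32000).astype(np.float32)
print(model.predict(mfcc_stats(new_clip).reshape(1, -1)))
```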

The MMSP lab mainly focuses on ...

Define a general SER model

Define a general SER model that takes into account subjects of different ages or languages.

Consider different scenarios

Recognize emotions from data collected in different settings and conditions.

Devices

The MMSP lab is currently focusing on the use of wearable and portable devices. In particular, for the HMI research topics we are using ...

Unicorn Hybrid Black

The Unicorn Hybrid Black (g.tec medical engineering GmbH) is a wearable, eight-channel EEG headset.

Shimmer3 GSR+ Unit

The Shimmer3 GSR+ unit (http://www.shimmersensing.com/) records heart rate and galvanic skin response (GSR) data.
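As an example of how such a recording could be summarised (an assumption, not the lab's processing chain), the sketch below computes mean skin conductance, mean heart rate, and a rough skin-conductance-response count; the column names, sampling rate, and synthetic data are assumptions.

```python
# Hedged sketch (not the lab's processing chain): summarise a Shimmer3 GSR+
# session assumed to expose "gsr_uS" and "heart_rate_bpm" columns.
import numpy as np
import pandas as pd
from scipy.signal import find_peaks

FS = 128                                   # assumed GSR sampling rate (Hz)
# A real session would be loaded with pd.read_csv("shimmer_session.csv");
# here a synthetic DataFrame with assumed column names stands in for it.
rng = np.random.default_rng(0)
n = FS * 120                               # two minutes of data
df = pd.DataFrame({
    "gsr_uS": 2.0 + 0.1 * rng.standard_normal(n),
    "heart_rate_bpm": 70 + 5 * rng.standard_normal(n),
})

gsr = df["gsr_uS"].to_numpy()
hr = df["heart_rate_bpm"].to_numpy()

# Count skin-conductance responses as peaks rising at least 0.05 uS above the median
peaks, _ = find_peaks(gsr - np.median(gsr), height=0.05, distance=FS)

print(f"Mean skin conductance: {gsr.mean():.2f} uS")
print(f"Mean heart rate:       {hr.mean():.1f} bpm")
print(f"SCR count (rough):     {len(peaks)}")
```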