Researchers from the Universidad Carlos III de Madrid have developed an emotion-reading AI.
It's bad enough that your significant other insists there's something wrong when there really isn't, but now it looks as if we'll be dealing with an emotion-reading AI in the not-too-distant future, thanks to researchers from the Universidad Carlos III de Madrid. But don't get too paranoid just yet: we may still be a ways off from talking to HAL 9000 and listening to him/it cheer us up first thing in the morning with a little tune if we seem a little down in the dumps. Dude, clam up: I need my morning coffee first.
As explained in their study, published in the Journal on Advances in Signal Processing, the computer system created by these researchers automatically adapts its dialogue to the user's situation so that its responses are on the same page as the user's emotional state. To detect the user's mood, the system uses up to sixty different acoustic parameters, including tone of voice, speed of speech, duration of pauses, and even the energy of the voice signal. In particular, it looks for negative emotions such as anger, boredom, and doubt.
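To make the idea concrete, here is a minimal sketch of how two of those acoustic cues (signal energy and pause duration) might feed a mood guess. Everything here is invented for illustration — the function names, the thresholds, and the rule-based classifier are assumptions, not the researchers' actual sixty-parameter system.

```python
import math

def extract_features(samples, sample_rate=8000, frame_ms=20):
    """Split a waveform into fixed-length frames and compute two toy cues:
    mean frame energy, and the fraction of near-silent frames (a rough
    proxy for how much of the utterance is pauses)."""
    frame_len = sample_rate * frame_ms // 1000
    energies = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energies.append(sum(s * s for s in frame) / frame_len)
    mean_energy = sum(energies) / len(energies)
    silence_threshold = 0.1 * mean_energy
    pause_ratio = sum(1 for e in energies if e < silence_threshold) / len(energies)
    return {"mean_energy": mean_energy, "pause_ratio": pause_ratio}

def classify_mood(features):
    """Toy rules mapping the cues to the negative emotions the article
    mentions. The cutoffs are arbitrary placeholders."""
    if features["pause_ratio"] > 0.5:
        return "boredom"   # long, frequent pauses
    if features["mean_energy"] > 0.5:
        return "anger"     # loud, energetic speech
    return "neutral"
```

For example, a loud sustained tone lands in the "anger" bucket, while a short quiet burst followed by silence reads as "boredom" — which is roughly the shape of signal a real detector would weigh, just with far more parameters.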
With the emotion detected, the system then determines the user's overall intention in a given dialogue. "For example, if the system did not correctly recognize what the interlocutor wanted to say several times, or if it asked the user to repeat information that s/he had already given, these factors could anger or bore the user when s/he was interacting with the system," reads a news release provided by the Universidad Carlos III de Madrid. "Moreover, the authors of the study point out that it is important that the machine be able to predict how the rest of the dialogue is going to continue."
The researchers solved the problem by developing a statistical method that uses earlier dialogues to learn what actions the user is most likely to take at any given moment. Once the system has detected the user's intention -- along with his/her emotional state -- it automatically adapts the dialogue to the situation the user is experiencing.
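A "statistical method that uses earlier dialogues to learn what actions the user is most likely to take" could be as simple as counting which action most often follows the current one in past conversations. The sketch below does exactly that; the class name, the action labels, and the bigram-counting approach are my assumptions, not the specific model from the study.

```python
from collections import Counter, defaultdict

class IntentionPredictor:
    """Toy predictor: learn from earlier dialogues which user action
    most frequently follows each action, then guess the next one."""

    def __init__(self):
        # Maps each action to a Counter of the actions that followed it.
        self.transitions = defaultdict(Counter)

    def train(self, dialogues):
        """Each dialogue is an ordered list of user-action labels."""
        for dialogue in dialogues:
            for prev_action, next_action in zip(dialogue, dialogue[1:]):
                self.transitions[prev_action][next_action] += 1

    def predict(self, current_action):
        """Return the most common follow-up seen in training, or None
        if this action was never observed."""
        counts = self.transitions.get(current_action)
        if not counts:
            return None
        return counts.most_common(1)[0][0]
```

Trained on a few hypothetical banking dialogues where "confirm" usually follows "ask_balance", the predictor would guess "confirm" next — letting the system pre-adapt its response before the user even speaks.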
"For example, if s/he has doubts, more detailed help can be offered, whereas if s/he is bored, such an offer could be counterproductive," the report states. "The authors defined the guidelines for obtaining this adaptation by carrying out an empirical evaluation with actual users; in this way they were able to demonstrate that an adaptable system works better in objective terms (for example, it produces shorter and more successful dialogues) and it was perceived as being more useful by the users."
Needless to say, if this AI goes commercial, we might as well kiss human tech support -- whether it's local or based overseas -- goodbye. Adios.
No, I did NOT say "huffarump!" Can you tell how stressed I am NOW? Can you?? Answer me, HAL!