Intel Develops Controversial AI to Detect Emotional States of Students

An Intel-developed software solution aims to apply the power of artificial intelligence to the faces and body language of digital students. According to Protocol, the solution is being distributed as part of the "Class" software product and aims to aid teachers' education techniques by allowing them to see the AI-inferred mental states (such as boredom, distraction, or confusion) of each student. Intel aims to eventually expand the program into broader markets. However, the technology has been met with pushback, bringing debates on AI, science, ethics, and privacy to the forefront.

The AI-based feature, which was developed in partnership with Classroom Technologies, is integrated with Zoom via the latter's "Class" software product. It can be used to classify students' body language and facial expressions whenever digital classes are held through the videoconferencing application. Citing teachers' experiences with remote lessons during the COVID-19 pandemic, Michael Chasen, co-founder and CEO of Classroom Technologies, hopes the software will give teachers additional insights, ultimately improving remote learning experiences.

But while Intel and Classroom Technologies' aim may be well-intentioned, the basic scientific premise behind the AI solution - that body language and other external signals can accurately be used to infer a person's mental state - is far from settled.

We don't yet fully understand the external dimensions through which people express their internal states. The average human being, for example, expresses themselves through dozens (some say hundreds) of micro-expressions (dilating pupils, for instance), macro-expressions (smiling or frowning), bodily gestures, and physiological signals (such as perspiration and increased heart rate).

It's worth questioning the AI technology's model - and its accuracy - when the scientific community itself hasn't been able to reach a definite conclusion on mapping external signals to internal states. Building houses on quicksand rarely works out.

Another noteworthy potential caveat for the AI engine is that the expression of emotions also varies between cultures. While most cultures would equate smiling with an expression of internal happiness, Russian culture, for instance, reserves smiles for close friends and family - being overly smiley in the wrong context can be construed as a lack of intelligence or honesty. Extend this across the myriad of cultures, ethnicities, and individual variations, and you can imagine the implications of these personal and cultural "quirks" for the AI model's accuracy.

According to Nese Alyuz Civitci, a machine-learning researcher at Intel, the company's model was built with the insight and expertise of a team of psychologists, who analyzed ground-truth data captured in real-life classes using laptops with 3D cameras. The psychologists then examined the videos, labeling the emotions they detected throughout the feeds. For a data point to be considered valid and integrated into the model, at least two of the three psychologists had to agree on its label.
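The agreement rule Intel describes - keeping a label only when at least two of three annotators concur - is a standard majority-vote filter for ground-truth data. Intel's actual pipeline isn't public; the sketch below is a minimal, hypothetical illustration of that filtering step, with invented clip IDs and labels.

```python
from collections import Counter

def filter_by_agreement(annotations, min_agreement=2):
    """Keep only samples where at least `min_agreement` annotators
    assigned the same label; return (sample_id, majority_label) pairs.

    `annotations` maps a sample id to the list of labels given by
    each annotator (here, three psychologists per video segment).
    """
    accepted = []
    for sample_id, labels in annotations.items():
        # Most frequent label and how many annotators chose it
        label, count = Counter(labels).most_common(1)[0]
        if count >= min_agreement:
            accepted.append((sample_id, label))
    return accepted

# Hypothetical labels from three annotators for three video segments:
segments = {
    "clip_01": ["bored", "bored", "distracted"],   # 2/3 agree -> kept
    "clip_02": ["confused", "bored", "engaged"],   # no majority -> dropped
    "clip_03": ["engaged", "engaged", "engaged"],  # 3/3 agree -> kept
}

print(filter_by_agreement(segments))
# [('clip_01', 'bored'), ('clip_03', 'engaged')]
```

Disagreements like "clip_02" are simply discarded, which is exactly why such filters shrink the training set: the more ambiguous the emotion, the less of it survives into the model's ground truth.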

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft spot for quantum computing.

  • hotaru251
    Couldn't a parent list this as collecting a child's data w/o permission, which is against the law in some places (like CA)?
  • USAFRet
    hotaru251 said:
    Couldn't a parent list this as collecting a child's data w/o permission, which is against the law in some places (like CA)?
    This will go back and forth in the courts.
    Eventually, someone will lose.
  • DavidC1
    From the article,
    Systems such as these can prove beneficial, leading teachers to ask the right question, at the right time, of a currently troubled student. But they can also be detrimental to student performance, well-being, and even academic success, depending on their accuracy and how teachers use them to inform their opinions of students.

    In fact, that's the least controversial of the problems it can create. What about the intrusion on privacy?

    It's said that DARPA's motto is that everything has two sides, meaning it can be used for good and for bad. The significance is that they don't mean it in a general way: they mean it will always be used for both good and bad.

    Intel is also responsible for pushing research that can read what you are thinking and translate that into an image/video on a screen. Remember the two-sides quote - what happens during future interrogations, where you have to struggle in your head to keep a secret? Where they can just jack you into a computer and see what you think?

    The excuse is that it's going to be done to "help mental health patients and the disabled" or nonsense like that. Sure, only if that's how the technologies are used. They never are used only that way.

    See how governments around the world are starting to become authoritarian with the COVID thing, and technologies are starting to enable all the worst dystopian novels and movies put together. Not a good formula at all.