Google says its latest AI models can identify emotions, and that claim has experts worried

Google recently announced that its latest AI models can identify emotions, a claim that has raised significant concern among experts.

Unveiled on Thursday, the PaliGemma 2 model family can analyze images and infer the emotions of people depicted in photos. According to Google, these models go beyond basic object detection, generating detailed captions that describe actions, emotions, and the overall narrative of a scene.
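To give a sense of what that captioning workflow looks like in practice, here is a minimal sketch of prompting a PaliGemma 2 checkpoint through the Hugging Face transformers library. The checkpoint ID, prompt string, and image path are illustrative assumptions rather than Google's documented setup.

```python
# A minimal sketch (not Google's official example) of image captioning with a
# PaliGemma 2 checkpoint via Hugging Face transformers. The checkpoint ID,
# prompt, and image path are illustrative assumptions.
from PIL import Image
import torch
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()

image = Image.open("photo.jpg")  # any local photo
prompt = "caption en"            # PaliGemma-style task prompt for captioning

inputs = processor(text=prompt, images=image, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=50, do_sample=False)

# Decode only the newly generated tokens, dropping the prompt.
caption = processor.decode(output[0][prompt_len:], skip_special_tokens=True)
print(caption)
```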

However, this emotion-recognition capability is not built-in; PaliGemma 2 requires specific fine-tuning to perform such tasks. Despite this, experts are alarmed by the implications. Sandra Wachter, a data ethics professor at the Oxford Internet Institute, expressed skepticism: “Assuming we can ‘read’ people’s emotions is highly problematic. It’s akin to consulting a Magic 8 Ball for advice.”

Emotion-detection technology has long been pursued by tech companies for applications ranging from sales training to safety measures. Yet, the scientific basis remains questionable. Most of these systems draw inspiration from psychologist Paul Ekman’s theory of six universal emotions—anger, surprise, disgust, happiness, fear, and sadness—but subsequent research has highlighted cultural and personal variations in emotional expression.

Experts argue that emotion recognition is inherently unreliable and often shaped by its designers’ assumptions. A 2020 MIT study found that facial analysis models can develop unintended preferences for certain expressions, such as smiling, while assigning negative emotions disproportionately to people from particular racial groups.

Google claims to have conducted extensive testing on PaliGemma 2 to minimize biases, using benchmarks like FairFace, a dataset of headshots categorized by race. Yet, critics note that FairFace only represents a limited range of demographic groups, raising questions about the model’s fairness.
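As a purely hypothetical illustration of what such a demographic benchmark measures, the snippet below tallies how often a face-analysis model assigns each emotion label across the groups in a FairFace-style labeled dataset; the records and labels are invented for the example, and balanced rates alone would not show that the emotion labels themselves are meaningful.

```python
# Hypothetical illustration of a FairFace-style fairness check: tally how often
# a face-analysis model assigns each emotion label to each demographic group.
# The (group, predicted_emotion) pairs below are invented for the example.
from collections import Counter, defaultdict

predictions = [
    ("East Asian", "happiness"), ("East Asian", "neutral"),
    ("Black", "anger"), ("Black", "happiness"),
    ("White", "happiness"), ("White", "neutral"),
]

per_group = defaultdict(Counter)
for group, emotion in predictions:
    per_group[group][emotion] += 1

for group, counts in sorted(per_group.items()):
    total = sum(counts.values())
    rates = {emotion: round(n / total, 2) for emotion, n in counts.items()}
    print(group, rates)  # large gaps between groups would flag potential bias
```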

“Interpreting emotions is deeply subjective, influenced by cultural and personal contexts,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute. “Even without AI, research shows emotions cannot be reliably inferred from facial expressions alone.”

Regulators in regions like the EU have already restricted the use of emotion detection in sensitive settings, such as schools and workplaces, through laws like the AI Act. However, models like PaliGemma 2, openly available on platforms such as Hugging Face, carry a risk of misuse that could cause real-world harm.

“If this so-called emotion recognition is based on pseudoscience, it risks perpetuating discrimination, especially in areas like law enforcement or hiring,” Khlaaf warned.

Google maintains that its testing addressed ethical concerns, including issues related to representation, safety, and child protection. Yet, Wachter believes more must be done:
“Responsible innovation involves anticipating consequences from the very first day in the lab and throughout a product’s lifecycle. Without this foresight, such technology could pave the way for a dystopian future where emotions dictate access to jobs, loans, or education.”

