The Psychology of Anthropomorphism

Today's guest post is authored by Mowaffak Allaham. Mowaffak is a graduate student at George Mason University (GMU) and a research assistant at the GMU Social Robotics Lab. Follow him on Twitter at @mowaffakallaham.

Psychologists have identified the ability to perceive the minds of others as necessary for meaningful social interaction. Ongoing research seeks to determine the factors that underpin mind perception, since this ability allows us not only to perceive the mind of a fellow human but also to perceive one in nonhuman objects or agents. This tendency to imbue the real or imagined behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions is called anthropomorphism (Epley, Waytz, & Cacioppo, 2007; see Further Readings).

A critical prerequisite to understanding the minds of other humans is attributing mental states to them in the first place: intentions, desires, and beliefs. Through anthropomorphism, this attribution of mental states can extend even to nonhuman objects or agents (e.g., 3D avatars or robots). In a classic experiment exploring this phenomenon, Fritz Heider and Marianne Simmel [2] presented participants with a short animated film in which simple geometric shapes, two triangles and a circle, appeared to chase and hide from one another.

This study demonstrated our innate tendency to attribute personality traits, and therefore a mind, even to simple geometric shapes! Since then, anthropomorphism has intrigued many psychologists, and more recently neuroscientists, as a window into the cognitive mechanisms that drive our perception of mental states in others.

Interestingly, one study found that an absence of social connection increased the tendency to anthropomorphize, presumably to satisfy our motivation for social contact. In contrast, people with a strong sense of social connection were less likely to anthropomorphize nonhuman agents (Epley et al., 2008; see Further Readings).

Research on anthropomorphism has expanded beyond the confines of psychology into emerging fields like human-robot interaction, where computer scientists and roboticists are actively exploring the factors that influence our perception of robots.

Along these lines, scientists at Carnegie Mellon University's Robotics Institute have proposed six design suggestions for humanoid robotic heads [1] to support the perception of humanness in robots. These researchers isolated certain facial features, such as the eyes, nose, and eyebrows, as major contributors to a robot's humanness. However, even robots that do not include all of these features, like MIT's Kismet [3], are readily anthropomorphized and treated in a very human-like way.

There is no doubt that robots are becoming more present in our lives, but what are the psychological implications of this new technology? Earlier this year, Boston Dynamics revealed a video demonstrating their new robot, “Spot”. This autonomous robot has four hydraulic legs and a sensor head that help it move across rough terrain. Although Spot's appearance is quite robotic, many people condemned the act of kicking it during the recorded demonstration, and some went further and started a campaign to stop robot abuse. Interestingly, such reactions suggest that people perceived Spot as having a mind similar to that of humans, and therefore the capacity to feel pain, despite its obviously animal-like embodiment.

But what role does one's belief, or knowledge, of a specific agent play in anthropomorphizing it? Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, UK, told CNN that for him, as a roboticist, kicking Spot was “quite an impressive test”, since kicking a robot will usually knock it over. Was Sharkey's prior knowledge of artificial intelligence sufficient to allow him to perceive Spot as a mindless agent? His attitude contrasted with that of people who perceived Spot, a robot without a head, as a mindful agent that feels pain, despite its lacking even the basic characteristics of an animal.

Nuances like these are essential to understanding how we anthropomorphize others, and we must understand them better if we are to improve human-robot interaction. Knowing more about the cognitive, or neurological, processes behind anthropomorphism could help computer scientists and roboticists reverse engineer and implement the underlying principles in, for example, future caregiver robots, improving their interactions with patients. In other words, cracking the mechanism that underlies anthropomorphism could bring us closer to having robots that read, and help, the minds of others.

References:

[1] DiSalvo, Carl F., et al. "All robots are not created equal: the design and perception of humanoid robot heads." Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. ACM, 2002.

[2] "Animating Anthropomorphism: Giving Minds to Geometric Shapes." Scientific American.

[3] Breazeal, Cynthia. "Toward sociable robots." Robotics and Autonomous Systems 42.3 (2003): 167-175.

Further Readings:

Epley, Nicholas, et al. "When we need a human: Motivational determinants of anthropomorphism." Social Cognition 26.2 (2008): 143-155.

Epley, Nicholas, Adam Waytz, and John T. Cacioppo. "On seeing human: a three-factor theory of anthropomorphism." Psychological Review 114.4 (2007): 864.
