Don't Break the Chain... Make the Chain!

One of the worst productivity tips I've ever heard came from Jerry Seinfeld.  Here's the "tip": never skip a day of work.  Specifically, as a comedian and writer, Seinfeld wanted to write as much as possible.  To this end, he marked a calendar with a big red "X" for every day he worked.  After a few days of this, a chain of X's would form.  Then came the productivity goal: "Don't break the chain."

I've tried this trick for some activities and I've found it to be a terrible piece of advice.  Here's why: I always failed.  Eventually, I broke the chain.  Eventually, life reared up and made it impossible to stick with a habit.  Something always comes up: illness, events, deadlines, fatigue.  And guess what?  Failing sucks!

The problem with "Don't Break the Chain" is that it is an avoidance goal: it frames a goal or habit as something not to do.  An avoidance goal doesn't describe the behavior you should be doing, and it leaves too much room for failure, as I've described.

Instead, I propose the following counter to Seinfeld's tip: "Make a chain."  Track every time you do something.  Make a check mark. Stick a sticker.  Put a coin in a jar. Whatever.  Just give yourself credit for every time you do it right and keep track of how many times you did it.  In this case, it doesn't matter if you skip a day.  Pick up the next day, or the next.  Or the next week.  Small failures don't matter if your goal is to make a huge chain; just pick up where you left off.

This has worked great for me as I've taken up running.  At first, I got all bummed out when I missed a day of my running plan: I broke the chain.  But then I reframed the goal as "make the chain," and now I can never fail.  If I miss a day, I just pick up where I left off.  Each time I complete a workout, I cross it off, and I see a permanent record of my progress that cannot be taken away.

Stay happy :)

The Psychology of Anthropomorphism

Today's Guest Post is authored by Mowaffak Allaham.  Mowaffak is a graduate student at GMU and a research assistant at the GMU Social Robotics Lab. Follow him on Twitter at @mowaffakallaham.

Psychologists have identified the ability to perceive the minds of others as necessary for meaningful social interactions. Ongoing research is trying to determine factors that underpin mind perception, as this ability not only allows us to perceive the mind of a fellow human, but also to perceive it in nonhuman objects or agents. This tendency to imbue the real, or imagined, behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions [1] is called anthropomorphism.

A critical prerequisite to understanding the minds of other humans is attributing mental states – intentions, desires, and beliefs – to them in the first place. During the process of anthropomorphism, this attribution of mental states can even extend to non-human objects or agents (e.g., 3D avatars or robots).  In a classic experiment exploring this phenomenon, Fritz Heider and Marianne Simmel [2] presented participants with a video of two animated triangles either chasing or hiding from one another.


This study demonstrated our innate tendency to attribute personality traits, and therefore a mind, even to simple geometric shapes! Since then, anthropomorphism has intrigued many psychologists, and more recently neuroscientists, as a window into the cognitive mechanisms that drive our perception of mental states in others.

Interestingly, one study found that an absence of social connections increased the tendency to anthropomorphize, presumably to satisfy our motivation for social connection (see Epley et al., 2008, in the Further Readings below).  In contrast, people with a strong sense of social connection were less likely to anthropomorphize non-human agents.

Research on anthropomorphism has expanded beyond the confines of psychology, reaching newly emerging fields like human-robot interaction. Computer scientists and roboticists are actively exploring the factors that influence our perception of robots.

Along these lines, scientists at the Robotics Institute at Carnegie Mellon University have proposed six design suggestions for a humanoid robotic head [1] to support the perception of humanness in robots. Further, these researchers isolated certain facial features, such as eyes, nose, and eyebrows, as major contributors to a robot's humanness. However, even robots that lack some of these features, like Kismet at MIT [3], are enough to trigger anthropomorphism: our minds readily treat them in a very human-like way.

There is no doubt that robots are becoming more present in our lives, but what are the psychological implications of this new technology? Earlier this year, Boston Dynamics revealed a video demonstrating their new robot “Spot”. This autonomous robot has four hydraulic legs and a sensor head that help it move across rough terrain. Although Spot’s appearance was quite robotic, many people condemned the act of kicking it during the recorded demonstration. Some went further and started a campaign to stop robot abuse. Interestingly, such reactions suggest that people perceived Spot as having a mind similar to that of a human – and therefore the capacity to feel pain – despite its obvious animalistic embodiment.

But what role does one’s belief, or knowledge, of a specific agent play in anthropomorphizing it? Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, UK, told CNN that for him, as a roboticist, kicking Spot was “quite an impressive test,” since kicking a robot will usually knock it over. Was his prior knowledge of artificial intelligence enough for Sharkey to perceive Spot as a mindless agent? His attitude contrasted with that of those who perceived Spot, a robot without a head, as a mindful robot that feels pain despite lacking even the basic characteristics of an animal.

Nuances like these are essential to our understanding of how we anthropomorphize others, and they require further study if we are to improve human-robot interactions. Knowing more about the cognitive, or neurological, process of anthropomorphism could help computer scientists and roboticists reverse-engineer and implement its underlying principles in, for example, future caregiver robots, improving their interactions with patients. In other words, cracking the mechanism that underlies anthropomorphism could bring us closer to having robots that read, and help, the minds of others.

References:

[1] DiSalvo, Carl F., et al. "All robots are not created equal: the design and perception of humanoid robot heads." Proceedings of the 4th conference on Designing interactive systems: processes, practices, methods, and techniques. ACM, 2002.

[2] "Animating anthropomorphism: Giving minds to geometric shapes." Scientific American.

[3] Breazeal, Cynthia. "Toward sociable robots." Robotics and autonomous systems 42.3 (2003): 167-175.

Further Readings:

Epley, Nicholas, et al. "When we need a human: Motivational determinants of anthropomorphism." Social Cognition 26.2 (2008): 143-155.

Epley, Nicholas, Adam Waytz, and John T. Cacioppo. "On seeing human: a three-factor theory of anthropomorphism." Psychological review 114.4 (2007): 864.