Active vs. Passive Learning

Thinking about our thinking is an important step toward maximizing our potential for learning.  Metacognition during learning allows us to evaluate whether our learning is actually effective.  Am I really learning the material?  Do I really understand it?  Checking in on our own comprehension in this way is an essential part of any learning process.

I recently read an article by Kathrin Stanger-Hall, a biology professor at UGA who has an interest in evaluating the effectiveness of teaching strategies.  This article points out the difference between active and passive learning strategies.  For example, passive learning may be limited to the following mindless study habits:

  • Reading the material
  • Going to class
  • Making index cards
  • Highlighting
  • Looking up information
  • Reviewing notes

On the surface, these behaviors seem like learning.  However, while these activities may be parts of a successful learning process, they lack the active awareness in which the student works to connect the information to a larger context or to evaluate their own understanding of the material.

Active approaches to learning would augment these activities with the following sort of thinking:

  • Asking "How does it work?"
  • Drawing the process or system
  • Writing study questions to evaluate self-understanding
  • Reorganizing information into new categories
  • Comparing and contrasting information

This type of approach to learning leverages the concept of elaboration that we've discussed before:  somewhat counter-intuitively, our brains seem to encode information better when we add more detail and comparison, not less.

As we learn, metacognition should be simmering in the background.  Are we learning?  How can we connect this with other information we understand?  How could we distill the essential elements?  This self-awareness is essential for learning to remain productive and efficient.

Is Introspection Useful?

I have this intuitive sense that it's useful to think about our thinking, a.k.a. to engage in introspection or metacognition.  For me, this intuition has swollen into something more: a bit of an obsession.  Practical introspection could be considered the founding principle of this blog.  But is my intuitive sense correct?  Is introspection useful?  Or is it a narcissistic drain on our cognitive resources?


At the core of my personal philosophy toward introspection is an inherent skepticism.  Essentially, I propose that the goal of introspection is to adopt a skeptical stance toward our own thinking, with the objective of tuning our decision making to improve our mental lives and, ultimately, get better at life.

Skepticism truly is at the heart of this process.  I propose that improvement requires a continuous process of asking "Is this bullshit?" even (or especially) about our own thoughts.  By calling bullshit on our own habitual modes of thinking, we can reach a new understanding of the cognitive traps we might be falling into on a daily basis that cause us stress or hold us back from our goals.  We can then slowly work to replace these destructive patterns of thought with healthier ones.

As I've mentioned, this just makes sense to me.  But I could imagine a scenario where this might get out of hand.  The term "analysis paralysis" comes to mind.  One could introspect his life away, questioning each layer of decision making until he is frozen in inaction.

Of course, nothing in life is inherently good or bad; it all depends on how it's applied.  Exercise is a great example.  Done responsibly, exercise is beneficial: you get healthier and more physically fit.  However, if one were to exercise to the point of excess, injury or illness may result.  Introspection must be the same way.  There must be a self-analyzing sweet spot.  But what is that balance?

Ironically, my excessive introspection has led me to believe that the sweet spot is far on the side of less thinking.  As I have skeptically analyzed the thoughts floating by over the years, I have come to realize that most of them can be binned as useless "worrying".  I have come to loathe thinking that does not perform a useful service for me, and I have found that most thinking is useless thinking.

This line of thinking has led me to another principle.  I propose that thinking is useless unless it results in a decision and, typically, an action.  In other words, thinking is just energy-sapping wheel-spinning unless it actually causes you to do something in the real world that has consequences.  Furthermore, by taking action we are essentially performing a mini-experiment on how our actions impact reality.  We can then learn from our actions and plan our next actions based on evidence.  This mode of thinking-while-acting is very close to my personal interpretation of flow and is the default state that I try to achieve.

Here's how it's playing out right now in my dome...

Me: Well, Homunculus, is introspection useful?  

Homunculus: I think the answer is yes if the objective is to squash negative thinking that impairs our ability to make shit happen.  In other words, I propose that we should be skeptical that all of our thinking is useful and be ruthless in turning off the thoughts that get in our way.

Me: How do I know if it's negative thinking and not a rational weighing of risks?  I shouldn't always charge head first into a situation.  That's risky!

Homunculus: It's hard to know for sure what will happen, but start taking some actions in the direction that seems best and see what happens.  You'll learn more by experimenting in this way than by worrying about abstractions.

Me: But I'm worried about this, that, and the other thing...  What if they happen?

Homunculus: Again, it's all theoretical.  Take some small actions in the direction that looks most promising (test the waters, if you must) and learn/think with some new information at hand.

Me: Homunculus, you're the man.

Homunculus: No, you're the man.  Stay happy!

Don't "Don't Break the Chain"... Make the Chain!

One of the worst productivity tips I've ever heard came from Jerry Seinfeld.  Here's the "tip": never skip a day of work.  As a comedian and writer, Seinfeld wanted to write as much as possible.  To this end, he marked a calendar with a big red "X" for every day he worked.  After a few days of this, a chain of X's would form.  Then came the productivity goal: "Don't break the chain."

I've tried this trick for some activities and I've found it to be a terrible piece of advice.  Here's why: I always failed.  Eventually, I broke the chain.  Eventually, life reared up and made it impossible to stick with a habit.  Something always comes up: illness, events, deadlines, fatigue.  And guess what?  Failing sucks!

The problem with "Don't Break the Chain" is that it is an avoidance goal: it frames a goal or habit as something not to do.  This framing doesn't describe the behavior one should actually be doing, and it leaves too much room for failure, as I've described.

Instead, I propose the following counter to Seinfeld's tip: "Make a chain."  Track every time you do something.  Make a check mark. Stick a sticker.  Put a coin in a jar. Whatever.  Just give yourself credit for every time you do it right and keep track of how many times you did it.  In this case, it doesn't matter if you skip a day.  Pick up the next day, or the next.  Or the next week.  Small failures don't matter if your goal is to make a huge chain; just pick up where you left off.

This has worked great for me as I've taken up running.  At first, I got all bummed out when I missed a day of my running plan: I had broken the chain.  But then I reframed the goal as "make the chain" and now I can never fail.  If I miss a day, I just pick up where I left off.  I cross off each workout I complete, and I see a permanent record of my progress that cannot be taken away.

Stay happy :)

The Psychology of Anthropomorphism

Today's guest post is authored by Mowaffak Allaham.  Mowaffak is a graduate student at GMU and a research assistant at the GMU Social Robotics Lab.  Follow him on Twitter at @mowaffakallaham.

Psychologists have identified the ability to perceive the minds of others as necessary for meaningful social interactions. Ongoing research is trying to determine factors that underpin mind perception, as this ability not only allows us to perceive the mind of a fellow human, but also to perceive it in nonhuman objects or agents. This tendency to imbue the real, or imagined, behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions [1] is called anthropomorphism.

A critical prerequisite to understanding the minds of other humans is to attribute mental states – intentions, desires, and beliefs – to those minds in the first place.  During anthropomorphism, this attribution of mental states can even be extended to non-human objects or agents (e.g., 3D avatars or robots).  In a classic experiment exploring this phenomenon, Fritz Heider and Marianne Simmel [2] presented participants with a video of two animated triangles either chasing or hiding from one another.


This study demonstrated our innate tendency to attribute personality traits, and therefore a mind, even to simple geometric shapes!  Since then, anthropomorphism has intrigued many psychologists, and more recently neuroscientists, as a window into the cognitive mechanisms that drive our perception of mental states in others.

Interestingly, one study found that an absence of social connections increased the tendency to anthropomorphize, presumably to satisfy our motivation for social connection.  In contrast, people with a strong sense of social connection were less likely to anthropomorphize non-human agents.

Research on anthropomorphism has expanded beyond the confines of psychology, reaching newly emerging fields like human-robot interaction. Computer scientists and roboticists are actively exploring the factors that influence our perception of robots.

Along these lines, scientists at the Robotics Institute at Carnegie Mellon University have proposed six design suggestions for a humanoid robotic head [1] to support the perception of humanness in robots.  Further, these researchers have isolated certain facial features, such as eyes, nose, and eyebrows, as major contributors to a robot's humanness.  However, even robots that lack some of these features, like Kismet at MIT [3], are enough to trigger anthropomorphism: our minds treat them in a very human-like way.

There is no doubt that robots are becoming more present in our lives, but what are the psychological implications of this new technology?  Earlier this year Boston Dynamics revealed a video demonstrating their new robot "Spot".  This autonomous robot has four hydraulic legs and a sensor head to help it move across rough terrain.  Although Spot's appearance was quite robotic, many people condemned the act of kicking it during the recorded demonstration.  Some went further and initiated a campaign to stop robot abuse.  Interestingly, such reactions suggest that people perceived Spot as having a mind similar to that of humans, and therefore a capacity to feel pain, despite its obviously animalistic embodiment.

But what role does one's belief, or knowledge, about a specific agent play in anthropomorphizing it?  Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, UK, told CNN that for him, as a roboticist, kicking Spot was "quite an impressive test" since kicking a robot will usually knock it over.  Was his prior knowledge of artificial intelligence enough for Sharkey to perceive Spot as a mindless agent?  His attitude contrasts with that of those who perceived Spot, a robot without a head, as a mindful robot that feels pain despite lacking even the basic characteristics of an animal.

Nuances like these are essential to our understanding of how we anthropomorphize others and deserve further study if we are to improve human-robot interactions.  Knowing more about the cognitive, or neurological, process of anthropomorphism could help computer scientists and roboticists reverse engineer the underlying principles and implement them in future caregiver robots, for example, to improve interactions with patients.  In other words, cracking the mechanism that underlies anthropomorphism could bring us closer to having robots that read, and help, the minds of others.

References:

[1] DiSalvo, Carl F., et al. "All robots are not created equal: the design and perception of humanoid robot heads." Proceedings of the 4th conference on Designing interactive systems: processes, practices, methods, and techniques. ACM, 2002.

[2] "Animating Anthropomorphism: Giving Minds to Geometric Shapes." Scientific American.

[3] Breazeal, Cynthia. "Toward sociable robots." Robotics and autonomous systems 42.3 (2003): 167-175.

Further Readings:

Epley, Nicholas, et al. "When we need a human: Motivational determinants of anthropomorphism." Social Cognition 26.2 (2008): 143-155.

Epley, Nicholas, Adam Waytz, and John T. Cacioppo. "On seeing human: a three-factor theory of anthropomorphism." Psychological review 114.4 (2007): 864.

Is Meditation Self-Help Bullshit?

A recent article by Virginia Heffernan in the New York Times Magazine excoriates the gradual westernization of mindfulness meditation, demonizing this trend as somehow running counter to the essence of this ancient practice.  I think at the heart of the article is a desire to protect people from self-help snake oil, but it also carries a palpable, and unfortunate, anti-self-help bias.

As I've discussed before, our attitudes about change influence our ability to change.  So, while I agree with Ms. Heffernan that self-help advice should be evaluated critically, binning the entire self-help movement as bullshit isn't helping anyone either.

In this context, I can't help but reevaluate the purpose of mindfulness meditation (and mindfulness in general).  Is mindfulness meditation useful?

As someone who has practiced mindfulness meditation as an attempt to manage stress, I have concluded that meditation is simply a concerted effort to reappraise bad thoughts.  Specifically, it has been suggested that rumination, the endless replay of negative thinking, may contribute to depression.  Cognitive reappraisal is a well-known approach for dealing with negative or disruptive thoughts, and meditation is just a practiced form of it.  In the mindfulness meditation style I have tried, namely the Mindfulness-Based Stress Reduction popularized by Jon Kabat-Zinn, one reappraises negative thoughts as simply "thoughts", taking a meta-level view of them and partitioning them as something distinct from our experiencing mind.

Personally, this makes sense to me.  Just as I wouldn't accept the self-help advice of some rando, I am not going to trust that my automatic catastrophizing about the world is based on fact.  During mindfulness meditation, I am taking a skeptical stance toward my own worries and recognizing them for what they are: worries, not reality.  Ultimately, the proof is in the pudding when it comes to the value of mindfulness meditation.  If it helps someone cope with the challenges of life, then great.  We should all be active participants in our own mental health, experimenting with approaches until we get results.

In this way, stress management is like exercise.  Is one form of exercise better than another?  Is meditation better than a book club?  The answer is: it depends.  It depends on who is doing it, whether they enjoy it, and whether they stick with it.  If the answer to these questions is "yes", then the long-term outcome is likely to be positive.