The goal of this assignment is
for you to get practice thinking about your thinking and thinking about other
people’s thinking. Thinking about your thinking and thought processes is
meta-cognition. Hence the title of the assignment, meta-cognition log or
metacoglog for short.
You will do this by keeping a
weekly reasoning log. Keep track of instances of reasoning that you think
are worth reflecting on. I encourage you to do this in one place, maybe a Word
doc or a Google Doc. Here is just a sampling of the sort of things you might
reflect on: Did you get in an interesting argument with your parents? Did you
encounter a frustrating exchange on Facebook or Twitter? Read a good
op-ed in a newspaper or magazine? Trying to decide which internship or job to
take? Have you seen some bad or mistaken arguments about our current pandemic?
Think you've noticed a weird cognitive glitch? I will model examples of meta-cognition
logs (metacoglogs) of my own in our second class.
You should submit one log entry at the end of each week. It
is due by Sunday, at 5pm. The first will be due on July 19th,
at 5pm. You will submit it via e-learn. Over the semester, you will submit
a total of 5 log entries, with the last one being submitted on August 16th.
In your log, you should describe for me the reasoning
instance and your reflections on it. If it’s related to something we’ve talked about in the course,
great! Tell me how it relates. If you don’t think we’ve
mentioned it, that’s great too! Maybe you’ll discover something new.
These aren't meant to be
long. Good reflections might be as short as 300 words. I’m
looking for something in the range of 300 to 400 words. Aim for interesting and thoughtful.
To get full credit, you must do two things successfully: a) describe
the reasoning; and b) explain to me why you think it's relevant to consider in
the context of a Critical Reasoning course.
Some Sample Entries
Log #1 – Availability heuristic and Mindhunter
My wife and I sometimes watch ‘true crime’ television and movies.
Recently we watched the show Mindhunter, on Netflix. The show focuses on FBI
investigators who interview what we now call serial killers in order to
better understand them. The show covers some grisly stuff, so naturally my wife and I
have been double-checking the locks of our doors downstairs before going to
bed. We didn’t really do this before. But we found ourselves just a teensy-bit
more afraid after watching this show a few nights in a row.
Now, we aren’t really engaging in explicit reasoning when we do this,
but there is some reasoning going on. Namely, our actions show that we are
thinking there is some chance this could happen, so better to lock the doors.
And since we didn’t do this before, it shows that we’ve become more concerned.
So, implicitly, we seem to think the threat is greater now than we did before.
It struck me, however, that our perception of the likelihood of being
murdered was probably being influenced by the availability heuristic. This is one
of the cognitive pitfalls discussed in Chapter 1. It refers to the way we
judge the frequency or probability of something by how easily we can think of
examples. Since we had just watched several episodes of Mindhunter, we could
easily think of several examples of grisly murders by serial killers. Given
the ease in recall, it is likely that our minds came to have the mistaken
perception that murder is a significant threat to us. I looked it up, and in 2018
(the last year I could find data from Statistics Canada), there were 1.82 homicides
per 100,000 people. The real threat of being murdered is tiny. We are very safe
in our suburb of Surrey. Further, that rate probably overstates how likely it
is that we could be murdered, since many murders are committed by people who
know their victims, not serial killers killing strangers. The rate for that is going
to be much lower. It frankly isn’t something to worry about.
At some level we both know this. And yet, it seems or feels like there
is some danger, at least when it's dark and we’ve turned off the TV. Perhaps
this is a version of a cognitive illusion that we discussed in class. Even
though our system 2 processes know that we are perfectly safe, system 1 –
because it's just seen those grisly murders – feels as if there is some threat
that we should take precautions against. So we continue to feel something is
true, even as we know it isn’t really the case.
Log #2 – Confirmation bias
Recently there has been a debate in North America about whether or not
“cancel culture” is a problem. This is a complex issue, but part of the debate
is about whether people are too quick to want someone fired, or punished,
simply because they disagree with what that person is saying or arguing. Some people
are concerned this is a growing trend and that it is stifling free and open
debate. Anyway, recently Harper’s magazine published an open letter by a bunch
of journalists and academics that raised concerns with this and some related
issues. A friend and I were talking about this the other day. I told him I
thought they identified a problem. My friend wasn’t sure what he thought and
expressed some skepticism that the situation was worse today than say 20 or 30
years ago. We decided to evaluate this claim. And I pointed to various pieces
of evidence – for example, there was a data scientist at a progressive think
tank who was fired for posting a tweet that linked to research on how riots
(like those that followed the murder of George Floyd) can change how people
vote in an election. I mentioned some other bits of evidence in favor of the
view that “cancel culture” was more of a problem.
But it occurred to me, after our discussion, that I was primarily
looking for evidence that confirmed my pre-existing belief – that
cancel-culture is a problem. This is a common cognitive pitfall that we
discussed in class. Confirmation bias refers to the tendency we all have to
focus on potential evidence for views we already hold, and to neglect or
discount contrary evidence. In this case, I was mentally looking for evidence
in favor of the cancel culture hypothesis. But I also should ask, what would
constitute evidence against this? Is there any? What would it look like? A fair
assessment of this issue requires looking for and searching out evidence
against your belief. In this case, evidence against it would be people who
spoke out in controversial ways but weren’t cancelled, or cases where no one
even tried to cancel them.
This doesn’t mean I’ve changed my mind – I still think there is good
evidence for my view. But it does show how we are all prone to confirmation
bias.