Weaning yourself off cognitive models. Part 5: Cognitive biases and heuristics

Twelve. I have written about this several times, so this will be shorter. Labelling our human ‘cognitive biases’ has become very popular in the media and elsewhere, thanks to people like Kahneman and Thaler. The basic phenomena seem robust; it is just the cognitive modelling and explanations of them that I find shaky.

What has been learned is that humans do not make ‘decisions’ (this is verbal behaviour, notice) based on a totally rational consideration of outcomes. Perhaps no surprise, and that much I do not dispute. But as the explanation of this, the cognitive models propose that our cognitive processing systems have limitations and in-built biases. So this is a blame-the-victim explanation, since our own faulty processing is said to lead us into errors.

Going back to our Assumptions from Blog 2 of this series, these biases are really about talking and discourses, the language strategies we commonly use with people. My alternative explanation is that when we talk about outcomes and ‘decisions’, such discourses always include weighing social outcomes as well, and this is what ‘leads us astray’. With this explanation, we do not blame the victim, because taking account of social outcomes is extremely important in our lives and we would be foolish not to do so. Social context impacts all our decisions, so we need to include it.

In this way, the mathematicians and economists are the foolish ones, because they work out decisions based on purely non-social outcomes and call those the best or optimal answers. But as we all know from life, we can make whatever decisions we like; if we get the social outcomes wrong, all is lost anyway and what we propose will not work out.

In fact, in a past blog (not in this series) I looked at how ‘rationality’ really means ‘excluding the social’. This is good for science and for dealing with the non-social world, since we do not want our measurements of the wind-speed velocity of sparrows to be influenced by whoever we happen to be talking to at the time. We do not want a science where E = mc² when talking to our mother but E = 4mc³ when talking to our siblings.

But applying this same ‘rational’ decision making (which purposefully excludes the social, just as science does) to psychology, law, economics, mental health, government, bureaucracy, ecology, religion and emotion is a big mistake.

I hope it can be seen that acting ‘irrationally’ is always a case of responding to strong alternative social consequences (dementia aside, though perhaps even then), so it is not really irrational after all.

To properly examine these ‘decision biases’, then, research should be looking at the hidden social outcomes for saying our answers to decision questions. This means that something like discourse analysis methods needs to replace the primarily experimental, non-social methods used in all past research (such as that of Kahneman and Thaler).

For example, if there is a choice between $10 and $20 in a typical experimental test of ‘cognitive biases’, I would be irrational to choose the $10, surely? However, this also depends on the social contexts, which are totally ignored (indeed, excluded from consideration) in these experiments. It could be that if I took the $20, people in my community or family would think I am greedy, and I could lose many other useful social and economic consequences in the future because of doing that (if they found out, which would also be part of the context here). Which, then, is being irrational: ignoring the social effects and future social outcomes, or losing half the money?
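The $10/$20 example above can be sketched as a toy comparison. All the numbers here (the $15 ‘social cost’ especially) are hypothetical, invented purely for illustration; the only point is that adding social outcomes to the sum can flip which option counts as ‘rational’.

```python
# Toy sketch of the $10 vs $20 choice. The social-cost value is a
# hypothetical stand-in for lost future favours if the community
# decides you are greedy; it is not a measured quantity.

def total_outcome(money, social_cost):
    """Net outcome = monetary gain minus expected future social cost."""
    return money - social_cost

# Purely monetary ('rational') comparison: taking $20 always wins.
assert total_outcome(20, 0) > total_outcome(10, 0)

# Same choice once a hypothetical social cost of $15 is included.
take_10 = total_outcome(10, social_cost=0)
take_20 = total_outcome(20, social_cost=15)
print("take $10:", take_10)
print("take $20:", take_20)
```

Run with the hypothetical cost included, the ‘smaller’ choice comes out ahead, which is the whole argument in miniature: the experiments only ever compute the first comparison.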

So, judgements of what are rational or ‘irrational’ choices actually require a thorough knowledge of all the person’s contexts, not just the one currently in focus, and especially the social ones. For this reason, anthropologists and sociologists have been very careful and wise to observe and analyse in great detail the social context of so-called irrational behaviours (Durkheim, Evans-Pritchard). In fact, they make the point that there are no irrational behaviours, just ones that have different cultural-social contexts to ours. With their better contextual research methods, they have some of the best and most detailed examples showing how ‘irrational’ behaviours turn out to be the most sophisticated when seen in their total social, economic and cultural context.

To put this boldly: if you wish to categorize behaviour as irrational, or as non-functional (the DSM version of irrational), this really means that you have not properly investigated all the contexts! And you just cannot say, ‘Oh, those social reasons do not count.’ (Doing that is irrational, actually.)

The table below lists some of these ‘cognitive biases’, particularly the ones used in clinical psychology to make judgements about people acting irrationally. In the right-hand column I have added a possible discourse analysis of what might be going on: how the ‘mistake’ from the ‘bias’ results from the way we socially use language in everyday life.

To spell out one example, the ‘anchoring bias’ of Kahneman and Tversky shows that people rely too heavily on certain traits (note how verbal this is!) or ‘pieces of information’, and this leads their decisions astray. From a contextual, behaviour analytic, or ecological framework, on the other hand, this shows that people rely too much in conversation on what can be named (so you can report it or make a story about it), and the multi-tasking nuances get lost. These nuances are probably not lost when carrying out the decisions, but are lost when talking about your decisions.

Anchoring
Common cognitive explanation: A cognitive bias wherein one relies too heavily on one trait or piece of information: “All I know is that she used the word ‘jerk’.”
Contextual or discourse analysis account: People rely too heavily on what can be named (the ‘information’), at the expense of what can be done but not named, shaped by their contexts.

Availability heuristic
Common cognitive explanation: When people predict the frequency of an event based on how easily an example can be brought to mind: “I can’t see how she might possibly like me!”; “I can’t think of anything or anyone that might help me.”
Contextual or discourse analysis account: Relying too heavily on what words can be said or thought (‘come to mind’) rather than on their experience or wordless actions shaped by their contexts.

Representativeness heuristic
Common cognitive explanation: Where people judge the probability or frequency of a hypothesis by considering how much the hypothesis resembles available data: “She used the word ‘jerk’ so she must hate me.”
Contextual or discourse analysis account: People follow words and the similarity between different verbal accounts (which arise from discursive experience and training) rather than experience or non-word training shaped by their contexts.

Optimistic bias (Guerin, 2017)
Common cognitive explanation: People claim they are better drivers than the average driver, and that bad events are less likely to happen to them than to others.
Contextual or discourse analysis account: When people are asked for these judgements (they probably do not even make them otherwise) they need to socially defend what they say. To do this they can only really compare what they (as an individual) can say about their own record with what they can say about the ‘average’ person’s record, and this socially-verbally directed comparison leads to over-estimation of their own luck.

Arbitrary inference
Common cognitive explanation: Jumping to a conclusion without good reason. Might be mind-reading, “I know she is going to reject me”, or assuming the outcomes, “It’s not even worth going because I know I am going to fail.”
Contextual or discourse analysis account: Reason giving and establishing (wrong) facts that are convincing. Use of extremes is common. Making presumptions that are probably incorrect but difficult to notice in conversation. Strategizing statements to make them look “as if” facts.

Selective abstraction
Common cognitive explanation: Focusing on one aspect of a situation and ignoring the rest: “I will fail because I made one mistake right in the middle.”
Contextual or discourse analysis account: Use of partial facts. Making presumptions that are probably incorrect but difficult to notice in conversation. Using categories incorrectly as a strategy.

Overgeneralisation
Common cognitive explanation: Over-stating that if something happened once it will always happen: “I don’t like going to parties. I tried it once and it was awful.”
Contextual or discourse analysis account: Use of extremes. Use of abstraction to make challenging more difficult.

Magnification and minimisation
Common cognitive explanation: Minimizing positive outcomes and maximizing negative outcomes: exaggerating or catastrophizing.
Contextual or discourse analysis account: Use of extremes or hedges to maximize and minimize.

Personalisation (and Blaming)
Common cognitive explanation: Personalisation is falsely taking responsibility for something bad: “My child failed their test. I am such a bad parent.” Blaming is doing the same but putting the responsibility onto someone else entirely.
Contextual or discourse analysis account: Discursive placement of responsibility in the opposite way to the usual strategies (you are responsible for good outcomes, other people for bad outcomes). The question to analyse is: what are the other contextual arrangements such that this is arrived at?

Dichotomous thinking
Common cognitive explanation: Thinking in black and white; you are either a good person or a bad person.
Contextual or discourse analysis account: Strategic use of word categories to establish facts.

Ignoring the positive or filtering
Common cognitive explanation: Focus on the negative aspects only: “The boss said I did well in the interview but he gave me one strange look in the middle that I think said it all!” Disqualifying or discounting the positive is similar: “She said she liked me but I know she only felt she had to say that.”
Contextual or discourse analysis account: Reason giving in reverse of the usual pattern (emphasizing the negative rather than the positive because of the particular social context).
