Thinking, Fast And Slow

The two most recent MindMatters shows are a little like bookends concerning the process of thinking and forming beliefs, and what's involved. While we didn't get into Thinking, Fast and Slow specifically, as you'll see, there's quite a bit of overlap in the subject matter discussed. Not to mention tie-ins to other subjects and several of the big news events of the day. Enjoy!

MindMatters: Brainwashing Is Easy, Thinking Isn't


Covid-19. Trump. Social Justice. War. Human rights. Economics. Whatever the issue, it seems that every day we are being told we must adopt a particular position - and to do so "or else". Under incredible pressure to be in the right and to feel good about ourselves, we are bombarded with "ways to think" that are quite often delivered by overt propaganda, but that are also, perhaps more than we realize, covert and outside our conscious awareness. How do the social programmers do this? Who are they? And what knowledge of psychology do they use to further their agendas? Is it possible that many of the views we hold dear are actually prefabricated for us?

This week on MindMatters we delve into some of the big social, cultural and political issues of the day, the perspectives we take on them, and how we come to a specific understanding or stance on something. Questioning what we believe - and why we believe it - is a responsibility we each take upon ourselves, for ourselves, but also for others. At a time in human history when truth is under egregious attack, how might one effectively examine one's own thinking? And how do we know when our thoughts are or aren't our own?




And:

MindMatters: The Impenetrable Fortress of Thoughtitude: When Belief Trumps Truth


We all have belief systems, maps of reality that inform our perspectives and help us form the bedrock values we hold about ourselves, others, and the world at large. This means thoughts on everything from religion and politics to how we interact with friends, and the specific truths about reality we have come to know and adapt to in our everyday lives. But when it comes to taking in new facts, what are the psychological and emotional processes involved in bringing ourselves to a higher or more constructive "place" with this new information? And how does the weaker part of our character seek to stifle new information in its desire to "be right" and remain "comfortable"?

This week on MindMatters we discuss the difficulties and challenges of looking at our own thought processes, default beliefs, and sometimes obsolete "knowledge" of things. There's a reason people don't like discussing politics or religion at the dinner table, but that won't stop us from doing it here. Did Mohammad really exist? Did Jesus? Are Democrats or Republicans always wrong? And how do our thoughts on such things prevent us from looking at data that might otherwise change our minds? With some determination, and truth as the ultimate value, we have the tools to form a more constructive view of ourselves, and of the world in which we live.


 
Yesterday I started to watch Peterson's latest interview with Dr. Iain McGilchrist about his new book "The Matter with Things" (discussed here). I noticed that McGilchrist, and also Peterson, seem to have found some rather concerning inconsistencies in Kahneman's "Thinking, Fast and Slow". Unfortunately, neither follows up on their problems with Kahneman's work. It sounds like Kahneman (at least in this one instance) claimed that a study showed something to be the way he says it is, while the study itself, when looked up, said the complete opposite! What makes this even more concerning is that Peterson also seems to have noticed similar things in Kahneman's work, judging by the way he reacts. Given that I think there is a good chance that both McGilchrist and Peterson know what they are talking about here, that doesn't sound too good to me. So maybe a re-evaluation of Kahneman's work is in order? Frankly speaking, it almost sounds like Kahneman did the sort of thing a bad or really sloppy scientist would do. Given that Kahneman is a top scientist and smart, the question arises whether it was just a mistake or whether Kahneman did it on purpose (and maybe in other places in the book as well).

Here you can see it, with about a minute of context before the relevant part starts at 36:10:

 

McGilchrist goes into it in more detail in the book - chapter 18, on intuition (esp. pp. 723-740). The gist is that Kahneman gives a very one-sided view of "systems 1 and 2", and when there's data that contradicts his theory, he either misreads or misrepresents it. He doesn't take hemisphere differences into account, and doesn't understand intuition very well.

From his Wiki page, there's this too:
Part of the book has been swept up in the replication crisis facing psychology and the social sciences. It was discovered many prominent research findings were difficult or impossible for others to replicate, and thus the original findings were called into question. An analysis[45] of the studies cited in chapter 4, "The Associative Machine", found that their replicability index (R-index)[46] is 14, indicating essentially no reliability. Kahneman himself responded to the study in blog comments and acknowledged the chapter's shortcomings: "I placed too much faith in underpowered studies."[47] Others have noted the irony in the fact that Kahneman made a mistake in judgment similar to the ones he studied.[48]

A later analysis[49] made a bolder claim that, despite Kahneman's previous contributions to the field of decision making, most of the book's ideas are based on 'scientific literature with shaky foundations'. A general lack of replication in the empirical studies cited in the book was given as a justification.
And he authored his latest book with Cass Sunstein.
 
That was my takeaway too: intuition can be fooled with certain specialized tricks, or by social pressure, but is, in general, quite accurate in certain contexts.
I've listened to several podcasts/presentations from McGilchrist recently, and one thing he did say was just how important intuition is in making leaps in knowledge, including in science and math - the right hemisphere being able to see the bigger, overall picture.
It's important to be well versed and competent in the topic, though. It likely helps to have a broad base of other knowledge too.
 
Yeah, one of the examples he gives in the book is the study on doctors' ability to diagnose, the point being that they're no good. In fact, the study showed just the opposite: that experienced physicians make stunningly accurate diagnoses, often based on minimal data.
 
I don't know how widespread the idea that "doctors cannot diagnose" is, but there has recently been a shift in the way doctors operate. Doctors used to examine the patient and, through knowledge of the patient's physical state, habits, personal and family history, the environment, what illnesses are going around, etc., diagnose, follow up, and - with perhaps some intuition and an artful trial and error - lead the patient to recovery (the good doctors, anyway). Now it's all about "protocols", with most doctors operating in an algorithmic way (if-then-else), which leads to the current one-size-fits-all paradigm of medicine, paving the way for machine-learning medicine with little to no genuine human input.
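
Just to make the "algorithmic (if-then-else)" point concrete, here's a deliberately crude sketch in Python - every condition, threshold, and label is invented for illustration and isn't taken from any real clinical protocol:

```python
# A toy sketch of protocol-driven, if-then-else "medicine".
# All names, conditions, and thresholds are invented for illustration only.

def protocol_recommendation(temperature_c: float, cough: bool, test_positive: bool) -> str:
    """Rigid rule cascade: every patient who matches a branch gets the same output."""
    if test_positive:
        return "standard protocol A"
    elif temperature_c >= 38.0 and cough:
        return "order test, then reassess"
    elif temperature_c >= 38.0:
        return "standard protocol B"
    else:
        return "send home"

# The patient's history, habits, environment, and the doctor's intuition never
# enter the decision - which is exactly the contrast being drawn above.
print(protocol_recommendation(38.5, True, False))  # -> order test, then reassess
```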
 
A good example of what McGilchrist talks about with the difference between the left and right hemispheres: the right being the intuitive, knowledge-based diagnosis, and the left being the rigid algorithmic method that "knows it's 100% right" - which goes against reality/humanity.
 
I'm doing a research project based in part on the dual process theory of cognition (System 1 and System 2 thinking), and after digging around and reading some of the published journal studies that Kahneman cites, I ran into another prominent thinker in this area and a book he published that I will order (or ask for as an X-mas present, since it is pricey :mad:), as it looks interesting. No reviews yet on Amazon, but it might be worth the price and time.

Thinking Twice: Two minds in one brain [Hardcover]
Jonathan St BT Evans


I just finished this book. A few months ago, while reading this thread for the first time, I decided to order it, telling myself it could be interesting to compare it with the information already presented here.
Moreover, in his preface the author mentions having had discussions with a number of other scholars who helped him with the project, including Tim Wilson.

In his approach to the two minds, he talks about an intuitive mind and a reflective mind, and he implies that each of them uses conscious AND unconscious processes to function. Other than that, we find a description of the two minds similar to the one found in the dual process theory literature:
Preface

«[...] It seems that while we feel as though we have but a single mind, we actually have two.

One is an old intuitive mind which evolved early and shares many [of] its features with other animals. We also have a new, reflective mind which evolved much later and makes us distinctively human. These are not the conscious and unconscious minds of Freudian theory. Although the old mind often seems to work in an automatic way and the new mind corresponds more to the conscious person, both minds have conscious and unconscious aspects. The two minds hypothesis, explored in this book, is that the new mind was added to the old mind, which continues to influence much of our behaviour. The two usually co-operate but can also conflict. »

On this last point, he places emphasis on the equal possibility for both minds to be biased, not only the intuitive mind, as is often claimed:
Addendum

Dubious and confused constructs

« The list of features normally attributed to ‘System’ 1 and 2 processing (which I’m here calling type 1 and 2) in Table 1.2 have numerous problems associated with them. Most of them are addressed directly in the book. For example, I argue in Chapter 7 that it is a mistake to consider one process conscious and the other unconscious. In Chapter 2 I say that it is better to think of the reflective mind as uniquely developed rather than uniquely present in human beings. I also comment several times that the intuitive mind, while having animal-like features, is more developed in humans. I show the problems with the fast/slow distinction (see above) and look critically at the automatic/controlled distinction (e.g. in Chapter 7), pointing out the limited form of ‘control’ that the reflective mind can actually exercise.

A difficulty which I have largely avoided, rather than addressed, is the assumption that many authors make that intuitive, type 1 processing leads to cognitive biases and reflective, type 2 processing to normatively correct answers. While this is common, and I understand the reason for it, it is simply wrong. I have been quite explicit about this in recent publications (Evans, 2007a, 2008) and have developed my recent account of cognitive biases to place equal emphasis on type 1 and 2 processing (Evans, 2006b, 2007a). While I do not discuss the issue of normative rationality very directly in the book, I never make the mistake of simply taking the correctness of a response as diagnostic of its mind of origin. I do, however, point the reader to cases where either intuitive or reflective processing is more likely to be helpful to the individual. »



I found many parts of the book difficult to read and understand. I’m not a native English speaker and am still a ‘novice’ in psychology, which may explain it on the one hand, but I still found it harder going than what’s been shared on the forum. I’ll attribute that to the writing style.

Among others, I highlighted two points on which he was categorical, and which contradict ideas shared by the Cassiopaean network:

- He is a Darwinist and rejects the possibility of intelligent design:
Chapter 6: Thinking about the social world
Social influence, conformity, and obedience


« […] Social influence is the foundation of our education system. If you do see yourself as a scientist, you may be shocked that powerful groups in the United States campaign for children not to be taught evolutionary theory (which is founded on massive scientific evidence) or to be taught ‘intelligent design’ as a rival scientific account, even though it is contradicted by equally large amounts of evidence.[...] »

- He is deterministic and dismisses the existence of free will:
Chapter 7: Consciousness and control -
The illusion of conscious intention

« […] As the social psychologist John Bargh has observed, psychological research has strongly undermined the basis for this folk belief in free will. First, many of our actions can be shown to be controlled unconsciously and ‘automatically’, far more than was suspected even 20 years ago. Second, actions which appear to everyone, including the participant, to be under voluntary control can be determined by the circumstances in which people find themselves.
If people had free will, it is hard to see how we could do experimental psychology. […]

Experimental manipulations are just as effective in causing the behaviour of ‘controlled’ as ‘automatic’ thinking, so in what sense does the former enjoy free will to determine itself?#23 »
And attached to this last sentence is a note that I found somewhat contradictory to the ‘so sure’ stance of the previous statements:
« #23 : In practice, there is always a lot of variability in behaviour within the best designed experiment. So we never claim that our experimental manipulations are the sole cause of the behaviour we observe. In this sense, behaviour is never shown to be completely determined by external influences. »

Then, to explain his point on free will:
« I am not going to enter here into philosophical debate about whether there are ever circumstances under which an individual can be said to be exercising free will. However, a little reflection will show why the feeling that we do things because we intend to do them must be an illusion. […] To understand why we believe in conscious will, consider first a simpler illusion : that of perceived causes. Generally, the brain will perceive a causal link between two events when they co-occur in space-time. […]

Wegner suggests that the illusion of a conscious will is produced in a similar way. When the brain generates ‘intentional’ actions, these (normally) result in conscious thoughts of intention at the same time. The association leads to a feeling of causation.[...] »

The MindMatters team discusses the topic of free will in this video:

 
Another point popped into my mind that I find interesting to spell out on this subject:
Jonathan St BT Evans:

"I do, however, point the reader to cases where either intuitive or reflective processing is more likely to be helpful to the individual."
What he basically states in the book is that the intuitive mind is efficient for subjects/choices linked to familiar situations, ones we have already experienced.
When unfamiliar situations come up, the intuitive mind is more likely to be biased; hence the utility of the reflective mind, which studies the matter at hand carefully in order to complete the intuition with elements it couldn't see.
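
As a very loose caricature of that division of labour - nothing from Evans' book, just an analogy in code, with the 'familiarity' check and all names invented - it might look something like this:

```python
# A loose caricature of the two-minds division of labour described above:
# fast intuitive answers for familiar situations, slower reflective analysis
# for unfamiliar ones. The "familiarity" test and all names are invented.

FAMILIAR_SITUATIONS = {"daily commute", "small talk", "routine purchase"}

def intuitive_response(situation: str) -> str:
    # Fast, habit-based answer - usually accurate when the situation is well practised.
    return f"habitual response to '{situation}'"

def reflective_response(situation: str) -> str:
    # Slow, effortful analysis - used to check and complete intuition.
    return f"careful step-by-step analysis of '{situation}'"

def respond(situation: str) -> str:
    if situation in FAMILIAR_SITUATIONS:
        return intuitive_response(situation)
    # Unfamiliar case: intuition is more likely to be biased, so defer to reflection.
    return reflective_response(situation)

print(respond("daily commute"))
print(respond("unfamiliar investment decision"))
```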

This comes back to the work of Iain McGilchrist and his explanation of how to use both minds:
(From my notes from The Master and His Emissary conversation with Jordan Peterson)
1°- The Right hemisphere is actively receptive to whatever is new,

2°-- it then gets sort of processed by the Left hemisphere into categories, analysed, in an attempt to understand it, but of course (whatever it is) it is much bigger than any category,

3°--- so the categories all break down and it gets restored in the Right hemisphere into a new Whole, enriched.

*Example: It's like learning a new piece of music,
- you're first of all attracted to it as a whole,
- you then realize that you need to practice that piece on 'part 28',
- and then when you go on stage you've got to just forget about all that
--> the work isn't lost, it's just no longer present.
 