Buddy
The Living Force
I found some information that appears to be interesting, which I used to compose this post. I'm not sure exactly where this should go, so relocation of this post is appreciated if necessary. Thanks.
On a physics/science blog, one of the authors (Bee) wrote a post that, while eventually turning to potential bias in science, opens with what appears to be an interesting study that apparently has never been published. Forget the biased title of the blog - this post is about the suggested relationship between unbalanced brain chemistry in an individual and the resulting cognitive bias.
Her full post is here: http://backreaction.blogspot.com/2010/01/is-physics-cognitively-biased.html
The relevant excerpt:
...To briefly summarize it: they recruited two groups of people, 20 each. One were self-declared believers in the paranormal, the other one self-declared skeptics. This self-description was later quantified with commonly used questionnaires like the Australian Sheep-Goat Scale (with a point scale rather than binary though).
These people performed two tasks. In one task they were briefly shown (short) words that sometimes were sensible words, sometimes just random letters. In the other task they were briefly shown faces or just random combination of facial features. (These both tasks apparently use different parts of the brain, but that may not be so relevant to the point of this post.
Also, they were shown both to the right and left visual field separately for the same reason, but again, I don't know if that's so important right now).
The participants had to identify a "signal" (word/face) from the "noise" (random combination) in a short amount of time, too short to use the part of the brain necessary for rational thought. The researchers counted the hits and misses. They focused on two parameters from this measurement series. The one is the trend of the bias: whether it's randomly wrong, has a bias for false positives or a bias for false negatives (Type I error or Type II error). The second parameter is how well the signal was identified in total. The experiment was repeated after a randomly selected half of the participants received a high dose of levodopa (a Parkinson medication that increases the dopamine level in the brain), the other half a placebo.
The result was the following. First, without the medication the skeptics had a bias for Type II errors (they more often discarded as noise what really was a signal), whereas the believers had a bias for Type I errors (they more often saw a signal where it was really just noise).
The bias was equally strong for both, but in opposite directions. It is interesting though not too surprising that the expressed worldview correlates with unconscious cognitive characteristics. Overall, the skeptics were better at identifying the signal. Then, with the medication, the bias of both skeptics and believers tended towards the mean (random yes/no misses), but the skeptics overall became as bad at identifying signals as the believers who stayed equally bad as without extra dopamine.[emphasis added]
Again, the actual research isn't published (yet) or reviewed, so take it with a grain of salt, but the implication seems significant: increased dopamine decreases one's ability to separate valid signal from noise and makes one more likely to falsely identify noise as signal.
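The two parameters the summary describes map directly onto signal detection theory: sensitivity (d', how well signal is told from noise overall) and criterion (which way the errors lean, toward false positives or false negatives). As a minimal sketch of how those two numbers come apart, here is how they are conventionally computed from raw hit and false-alarm counts; the counts below are made up for illustration and are not the study's data.

```python
# Sketch of the standard signal-detection-theory measures: sensitivity (d')
# and response bias (criterion). Counts are invented for illustration only.
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) from raw response counts."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity: signal vs. noise separation
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: + = conservative, - = liberal
    return d_prime, criterion

# "Skeptic"-like pattern: few false alarms, more misses -> positive criterion (Type II lean)
print(sdt_measures(hits=60, misses=40, false_alarms=10, correct_rejections=90))

# "Believer"-like pattern: more false alarms, fewer misses -> negative criterion (Type I lean)
print(sdt_measures(hits=80, misses=20, false_alarms=40, correct_rejections=60))
```

Note that the two hypothetical observers above can have similar d' while sitting on opposite sides of zero criterion - which is exactly the distinction the study turns on: the medication reportedly moved both the bias (criterion toward zero) and, for skeptics only, the sensitivity (d' downward).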
Bee heads off and focuses on the skeptic/believer angle, relating it to "false patterns" rather than "false signals" and asking whether that relationship is justified, but I think the basic finding itself is important and doesn't really support such a direction.
False patterns don't seem to be directly dependent on false signals: one can take perfectly valid signal data and develop all sorts of false patterns around it. Granted, the inability to correctly identify signal versus noise would lead to more false patterns being observed (as any pattern based on false signals would obviously be false), but that isn't what was tested for: the study (as related) explicitly aimed for reactions without rational interpretation.
No interpretation, no patterns. So, what was being tested wasn't the quality of the interpretation of signals but whether something was a signal at all, right?
A couple of comments following the blog article got my attention and seemed significant, so for the benefit of the reader I'll include them here:
----------------------------------------------
At 6:39 PM, January 06, 2010, Anonymous Austin said...
Just wanted to point out that the study said nothing about pattern recognition. In fact, from what you stated about the duration of time ("too short to use the part of the brain necessary for rational thought") to make the decision, no pattern recognition was involved or affected by the test: patterns take thought to see.
So, while I agree that pattern recognition is an evolutionary boon, is involved in creativity, and is present in both scientists and "believers", that says nothing about the quality of the patterns being observed. Bad signal-vs.-noise separation would, obviously, lead to bad patterns (GIGO, anyone?), but even good signal-vs.-noise separation could lead to bad patterns.
The study results seem to say that what was affected wasn't the interpreted quality of the signal (which wasn't tested), just whether it *was* a signal or was just noise. The correlation between "believers" and false signal detection might be more related to the GIGO issue rather than an assumed increase in pattern detection ability.
---------------------------------------------------------------------------
At 8:16 PM, January 06, 2010, Anonymous Anonymous said...
For what it's worth, I saw nothing that I could identify in the CMB.
The study you cite is cute, but as with most psychological studies, it doesn't pay to try to milk the data for more than is actually there. Thinking you detect a signal and being willing to act on a signal are not the same thing, although in this simplistic, no-risk situation, they are made to appear to be. And science isn't just about how many times you say 'ooh!' in response to what you think is a signal. Science is very much about having that 'signal' validated by others using independent means.
I'm really not sure who or what you are trying to jab with this post, other than the poke at ESP.
And I'm seconding Austin with respect to pattern recognition. :)
-------------------------------------------------------------------------------------
At 2:43 AM, January 07, 2010, Blogger Bee said...
Austin, Anonymous: With "pattern recognition" I was simply referring to finding the face/word in random noise. You seem to refer to pattern recognition as pattern in a time series instead, sorry, I should have been clearer on that. However, you might find the introduction of this paper [_http://www.usz.ch/non_cms/neurologie/LehreForsch/Neuropsychologie/Publikationen/2008/08_09_BruggerCortex.pdf] interesting which more generally is about the issue of mistakenly assigning meaning to the meaningless rspt causal connections where there are none. It's very readable. This paper (it seems to be an introduction to a special issue) also mentions the following
"The meaningfulness of a coincidence is in the brain of the beholder, and while ‘‘meaningless coincidences’’ do not invite explanatory elaborations, those considered meaningful have often lured intelligent people into a search for underlying rules and laws (Kammerer, 1919, for a case study)."
Seems like there hasn't been much research on that though. Best,
B.
------------------------------------------------------------------------------------
Here is the abstract of the study (so far available only as an early-access posting):
http://www.mitpressjournals.org/doi/abs/10.1162/jocn.2009.21313
Early Access
Posted Online July 30, 2009.
(doi:10.1162/jocn.2009.21313)
© 2009 Massachusetts Institute of Technology
Dopamine, Paranormal Belief, and the Detection of Meaningful Stimuli
Peter Krummenacher [1,2], Christine Mohr [3], Helene Haker [4], and Peter Brugger [1]
[1] University Hospital Zurich, Switzerland
[2] Collegium Helveticum, Zurich, Switzerland
[3] University of Bristol, Bristol, UK
[4] Psychiatric University Hospital, Zurich, Switzerland
Abstract
Dopamine (DA) is suggested to improve perceptual and cognitive decisions by increasing the signal-to-noise ratio. Somewhat paradoxically, a hyperdopaminergia (arguably more accentuated in the right hemisphere) has also been implied in the genesis of unusual experiences such as hallucinations and paranormal thought.
To test these opposing assumptions, we used two lateralized decision tasks, one with lexical (tapping left-hemisphere functions), the other with facial stimuli (tapping right-hemisphere functions). Participants were 40 healthy right-handed men, of whom 20 reported unusual, “paranormal” experiences and beliefs (“believers”), whereas the remaining participants were unexperienced and critical (“skeptics”). In a between-subject design, levodopa (200 mg) or placebo administration was balanced between belief groups (double-blind procedure).
For each task and visual field, we calculated sensitivity (d') and response tendency (criterion) derived from signal detection theory. Results showed the typical right visual field advantage for the lexical decision task and a higher d' for verbal than facial stimuli. For the skeptics, d' was lower in the levodopa than in the placebo group. Criterion analyses revealed that believers favored false alarms over misses, whereas skeptics displayed the opposite preference. Unexpectedly, under levodopa, these decision preferences were lower in both groups.
We thus infer that levodopa (1) decreases sensitivity in perceptual–cognitive decisions, but only in skeptics, and (2) makes skeptics less and believers slightly more conservative. These results stand at odd to the common view that DA generally improves signal-to-noise ratios. Paranormal ideation seems an important personality dimension and should be assessed in investigations on the detection of signals in noise.
------------------------------------------------------------
Comments, critiques, anything is welcome.