Between two and three million years ago, a small creature hardly larger than a pygmy chimpanzee but with a much larger brain relative to its body weight began a remarkable journey. The initial part of that journey didn’t involve much by today’s standards, merely the ability to scavenge and possibly chase-hunt the creatures of the sub-Saharan African savannahs, to make some rather modest flaked stone tools for that purpose, and eventually to migrate over the African and possibly the Eurasian land mass. This little creature, arguably our first unequivocally human ancestor, was known as Homo habilis (“handy” man). How the modest abilities of this first human emerged and were transformed into the prodigious human achievements and civilization that exist today is arguably the most important scientific mystery of all. The solution to this mystery will not only help to explain where and why we evolved as we did - it will additionally shed light on how we may continue to evolve in the future.
But, first, some basic questions must be asked, including: what is human nature and what is the basis of it? How much of human nature is related to our genes? Is human nature related to the size and shape or lateralization of our brain? How did human nature evolve? Although our hairless skin and elongated body make our appearance quite different from our primate cousins, it is not our anatomy but our unique brain and behavior that most people consider special. Typical behaviors considered uniquely human include propositional (grammatical) language, mathematics, advanced tool use, art, music, religion, and judging the intent of others. However, outside of religion, which has yet to be documented in any other extant species, at least one other - and, in some cases, several - advanced species have been shown to possess one or more of the above traits. For example, dolphins understand and can use simple grammar in their contact with humans (Herman, 1986) and probably use even more sophisticated grammar in their own ultrasonic communications. Certain avian species such as parrots can count up to ten (Pepperberg, 1990) and, like apes, use mathematical concepts such as similarity and transitivity (Lock and Colombo, 1996). Orangutans display highly advanced tool use, including the preparation of tools for use in procuring food (van Schaik, 2006). As regards music and art, singing is a highly developed and plastic form of communication in songbirds (Prather and Mooney, 2004), apes have proven to be adept musical instrumentalists in their drumming (Fitch, 2006), and elephants and chimpanzees have been known to create realistic and abstract paintings.[1] Finally, chimpanzees (but not monkeys) are able to determine the mental states of others and to engage in mirror self-recognition (Lock and Colombo, 1996), attributes normally considered part of a general mental capability known as the “theory of mind” (see later chapters).
What mostly defines humans, then, is not a unique ability to engage in a particular behavior but rather the way in which we perform it. Three features of human behavior are particularly salient: its context-independence, its generativity, and its degree of abstraction. Context-independent cognition, emphasized in the comparative analysis of Lock and Colombo (1996), refers to the ability to perform mental operations on new and different types of information in different settings. The behavior of chimpanzees may be viewed as much more contextually dependent than that of humans because it differs considerably depending on whether they are in the wild or in captivity; in the wild, for example, chimpanzees are relatively more likely to use tools but less likely to use symbols (Lock and Colombo, 1996). Generativity refers to the sheer amount and variety of human cognitive output - whether it be the tens of thousands of words in a typical language’s lexicon, the almost limitless varieties of song and painting, or the technological progress that has continued largely unabated from the end of the Middle Stone Age to the present. Finally, the abstract nature of human cognition, similar to what Bickerton (1995) has referred to as “off-line” thinking and what Suddendorf and Corballis (1997) term “mental time travel,” strikingly sets humans apart from all other species, which engage largely in the present. While some species can use symbols, only humans can create abstract ones like numbers, words, and religious icons, and it is difficult to conceive of even such advanced creatures as chimpanzees and dolphins going beyond a simple emotional concept of death or the fulfillment of a current motivationally driven state to such spatially and temporally distant religious concepts as heaven and eternity. Indeed, apes spend the vast majority of their waking lives in immediate, nearby activities such as eating and grooming (see Bortz, 1985; Whiten, 1990), and even Neanderthals appear to have been more constrained in their spatial and temporal mental spheres (Wynn and Coolidge, 2004).
There are two major features that characterize all of the advanced cognitive skills in humans:
- they all appear to have first emerged between 50,000 and 80,000 years ago, first in Africa and later in Europe and elsewhere;
- and the context-independent, generative, and abstract expressions of these skills require high levels of a critical neurotransmitter in the brain known as dopamine.
Hence, the emergence of intellectually modern humans around 80,000 years ago arguably represented the beginning of what I will refer to as the “dopaminergic mind.” How that mind depends on dopamine, how it came to evolutionary fruition, and the dangers its continued evolution poses for the denizens of industrialized societies in particular will all be discussed in later chapters of this book. First, however, I attempt to refute commonly held explanations (myths) of how human nature evolved. The first myth is that the evolution of human intelligence was primarily a product of genetic selection, while the second is that the specific size, shape, or lateralization of our brain is critical for us to be considered human.
Myths concerning the origins of human behavior
Was human intelligence genetically selected?
There are many reasons to believe that the origin of advanced human behavior was at least partly controlled by genetic evolution. For one, estimates of the heritability of intelligence, based largely on twin studies that compare the concordance (similarity) of identical twins (who share the same genome) with that of fraternal twins (who are only as genetically similar as regular siblings), are around 0.50 (see Dickens and Flynn, 2001). There are also genetic differences between chimpanzees and modern humans on the order of about 1.2 percent (Carroll, 2003), which in principle could allow for selection of particular genes that may have helped produce the intellectual capabilities of modern humans. Certainly, advanced intelligence should help members of a species survive and reproduce, which according to Darwinian mechanisms should allow that trait to be passed on genetically to offspring. Indeed, it is highly likely that some genetic changes at least indirectly helped to advance human intelligence, although I will argue in Chapter 5 that most of these were probably associated with an overall physiological adaptation that occurred with the dawn of Homo habilis.
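To make the twin-study logic concrete, the classical Falconer method estimates heritability from the gap between identical- and fraternal-twin correlations. Below is a minimal sketch in Python; the example correlations are illustrative assumptions chosen to yield the commonly cited value of about 0.50, not data from Dickens and Flynn (2001).

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Classical Falconer estimate: h^2 = 2 * (r_MZ - r_DZ).

    r_mz: trait correlation between identical (monozygotic) twins
    r_dz: trait correlation between fraternal (dizygotic) twins
    """
    return 2.0 * (r_mz - r_dz)

# Illustrative (assumed) IQ correlations:
print(falconer_heritability(r_mz=0.85, r_dz=0.60))  # -> ~0.50
```

The formula rests on the assumption that identical twins share twice the genetic similarity of fraternal twins while both share equally similar environments; the chapter's later points about unequal prenatal environments and rearing similarity are precisely challenges to that assumption.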
There are more compelling reasons, though, to believe that advanced human intellectual abilities are not primarily due to genetic selection. First of all, genetic expression and transmission have been documented to be modifiable at many levels by a wide variety of influences (especially maternal ones) that can themselves be passed to offspring in a mode known as “epigenetic inheritance” (Harper, 2005). Indeed, major increases in intelligence (Dickens and Flynn, 2001) and in various clinical disorders (Previc, 2007) are under way in industrialized societies despite stable or even opposing genetic influences. For example, the prevalence of autism, characterized by severely deficient social and communication skills, is dramatically increasing despite the fact that most individuals with autism never marry and thereby pass on their genes (see Chapter 4). Second, heritability estimates for intelligence and many other normal and abnormal traits may be overblown, both because fraternal twins do not share as similar a prenatal environment (a major source of epigenetic inheritance) as most identical twins, owing to the lack of a shared blood supply (Prescott et al., 1999), and because of the greater similarity of rearing in identical twins (Mandler, 2001). Third, dramatic changes in physiology, anatomy, and behavior are believed to occur when the timing of gene expression is affected by disturbances in key regulatory or hormonal centers such as the thyroid (Crockford, 2002; McNamara, 1995). Fourth, anatomical findings (McDougall et al., 2005) and genetic clock data (Cann et al., 1987; Hammer, 1995; Templeton, 2002; von Haeseler et al., 1996) clearly place the most recent ancestor common to all modern humans at around 200,000 years ago,[1] yet there is little or no evidence of art, music, religion, beads, bone tools, fishing, mining, or any other advanced human endeavors until more than 100,000 years later (McBrearty and Brooks, 2000; Mellars, 2006; Shea, 2003). One hundred thousand years may not seem like a large amount of time, in that it constitutes only about 5 percent of the total time elapsed since the appearance of Homo habilis, but it is more than ten times longer than the span from the dawn of the most ancient civilization to the present. Finally, there is no convincing evidence that genetic factors have played any role whatsoever in one of the most striking of all human features - the functional lateralization of the brain (Previc, 1991).
Although it still remains to be determined exactly how many genes humans actually have, the current best estimate is around 20,000-25,000. Given the 1.2 percent genetic divergence between chimpanzees (our genetically closest living relative) and modern humans, there would at first appear to be a sufficient amount of discrepant genetic material to account for the neurobehavioral differences between us and our nearest primate relation. However, the vast majority of our genome appears to be nonfunctional “junk” DNA and most of the remaining DNA is involved in gene regulation, with only a tiny percentage of the total DNA (<1.5 percent) actually used in transcribing the amino acid sequences that create proteins (Carroll, 2003). The “coded” sections of the human genome also appear to show less variation between humans and apes than the “non-coded” sections (Carroll, 2003; Mandel, 1996), and much of that difference relates to genes for the protein-intensive olfactory system.[2] In fact, there is no evidence that any proteins, receptors, neurotransmitters, or other components of our basic neural machinery do not exist in chimpanzees (Rakic, 1996). Rather, most of the different genetic sequencing between chimpanzees and humans lies in regulatory sections of the genome that affect gene expression (Carroll, 2003), presumably including those that affect brain and body development conjointly. As but one example, there are many genes that affect calcium production, which in turn helps regulate skeletal growth as well as the production of key brain transmitters (see Previc, 1999). Also, there are many genes that can affect the thyroid gland, which has an important influence on body metabolism, body growth, brain activity, and brain size, and is arguably a major force for speciation during evolution (Crockford, 2002) and one of the few endocrine structures known to have altered its function during human evolution (Gagneux et al., 2001; Previc, 2002). It is likely, therefore, that changes in regulatory-gene activity and other factors that influence gene expression played some role in the evolution of humans, most probably in its earliest stages (see Chapter 5).[1]
To say that there may have been some influences on gene regulation in humans during the course of our evolution is more defensible than the notion that specific genes or sets of genes determine advanced human capabilities. Rarely does a single gene or small set of genes determine a major brain or non-brain disease, and higher cognitive capacities involve even more genes (Carroll, 2003). For example, the combined variance accounted for by several key genes known to contribute to intelligence and to various clinical disorders in humans is less than 10 percent (Comings et al., 1996). The polygenic nature of higher cognition is not surprising when one considers the many cognitive skills - discussed in much greater detail by Previc (1999) and in Chapter 3 - that are required for listening to, comprehending, and responding appropriately to a simple sentence such as “Build it and they will come.” First, a motor system must recreate in our own minds what is being said; second, an incredibly rapid auditory processor must decode a multitude of acoustic transients and phonemes; third, a capability for abstraction serves to link the spoken words to their correct meaning; fourth, working memory is required to keep the first clause of the sentence in mind as we await the final one; fifth, cognitive flexibility is needed to realize, after hearing the second part of the sentence, that the first part isn’t about construction but expresses a more profound thought; sixth, an ability to judge speaker intent aids in further recognizing that this sentence as spoken by a particular individual (e.g. a philosopher) is not about construction; and finally, there must be an ability to assemble and correctly sequence a collection of phonemes that provides a spoken response that we (or any other individual) may never have uttered before. Despite all of this, some researchers such as Pinker and Bloom (1990) have postulated that a single gene or small set of genes may have mutated to create specific language capabilities (e.g. grammar) found only in humans. Indeed, there was great excitement in the scientific world when a “grammar gene” was identified in a small English family of supposedly grammar-deficient individuals (Gopnik, 1990), who were later shown to have a mutation of a gene known as “Foxp2” (Lai et al., 2001). There eventually turned out to be several major problems with this finding, however. The first was that the affected family members did not have a selective loss of grammar but rather exhibited many other language problems as well as severe speech articulation difficulties, an inability to carry out simple facial gestures (like winking), behavioral disorders such as autism, and even nonlinguistic cognitive deficits (their average nonverbal intelligence quotient was found to be only eighty-six, or fourteen points below that of their unaffected relatives) (Vargha-Khadem et al., 1995). Moreover, the Foxp2 gene mutation turns out not to be associated with the deficits exhibited by most individuals with specific language impairments (Newbury et al., 2002), nor does the human Foxp2 gene resemble that of other species (e.g. avians and dolphins) that possess advanced vocal communication skills (Webb and Zhang, 2005). The final factor mitigating the importance of the Foxp2 gene in human linguistic evolution comes from a recent DNA finding in Neanderthals, from whom the ancestors of modern humans diverged nearly 400,000 years ago. At least one of the two major variants of the modern human Foxp2 gene relative to that of chimpanzees was once thought to have occurred as recently as 10,000 years ago (Enard et al., 2002), or long after the emergence of the common human genome. However, an analysis of the DNA of Neanderthals shows that they, too, possessed both modern human Foxp2 variants (Krause et al., 2007), which indicates that these variants must be at least 400,000 years old, given the estimated date of divergence of the Neanderthal and modern human lineages (Chapter 5).
Another phenomenon tied to the evolution of humans is the lateralization of the human brain for advanced cognitive functions. Two of the most well-known manifestations of cerebral lateralization are the overwhelming and universal preponderance of right-handedness in humans - about 85-90 percent of individuals in Western societies exhibit some form of right motor dominance - and the greater likelihood of suffering serious speech and language deficits (known as aphasias) following damage to the left hemisphere in adulthood.[1] Although brain lateralization of some sort or another is common in the animal world, the degree of functional lateralization of the human brain is remarkable compared to that of other mammalian brains, especially that of the chimpanzee. Indeed, one of the great triumphs of modern neuroscience was the demonstration, mainly through studies of “split-brain” patients in whom the connections between the hemispheres were severed to relieve epilepsy (Gazzaniga, 2005), that the left and right hemispheres of most humans not only differ in their linguistic capabilities but also possess very different personalities (the left is more active, controlling, and emotionally detached, while the right is more earthy and emotional) and intellects (the left is more analytical, abstract, and future-oriented, while the right is more concrete, better at judging emotional and mental states, and better at visual manipulations, especially 3-D geometrical ones in body space). Indeed, the cognitive and personality differences between the left and right hemispheres of most humans are greater than those between almost any two humans, and the specialized functions of the left hemisphere arguably render it almost as dissimilar to the right hemisphere as human intellectual functions in general are dissimilar to those of chimpanzees.[1]
Although many theorists such as Annett (1985) and Crow (2000) have posited that left-hemispheric language dominance is largely determined by a single gene - and despite evidence that, at least in some species, the overall direction of body asymmetry is subject to genetic influences (Ruiz-Lozano et al., 2000) - the evidence is strongly against a genetic explanation for brain lateralization in humans. First, the likelihood of one member of a twin pair having the same hand dominance as the other is no greater for identical than for fraternal twins (Reiss et al., 1999),[2] and speech dominance for monozygotic twin pairs shows a similarly weak concordance (Jancke and Steinmetz, 1994). Second, neither handedness nor speech lateralization (see Tanaka et al., 1999; Woods, 1986) appears to be related to the genetically influenced asymmetrical position of the major body organs such as the heart, which, in any case, is the same in humans as in chimpanzees. Third, there does not appear to be any evolutionary advantage conferred by the typical pattern of left-hemispheric dominance for handedness, as left-handers and right-handers on average do not differ in academic or athletic achievement or in any personality variables (see Hardyck et al., 1976; Peters et al., 2006; Previc, 1991), although there may be very slight deficits for some individuals with ambiguous dominance (Peters et al., 2006).[3] Fourth, the development of cerebral lateralization is heavily dependent on both cultural and prenatal factors. As an example of cultural factors, aphasia following left-hemispheric damage was very uncommon a few centuries ago in Europe, when the vast majority of adults were illiterate and not exposed to the left-right reading and writing of Western languages, and right-handedness remains much less prevalent in existing illiterate populations (see Previc, 1991). As an example of prenatal factors, handedness and other forms of motoric lateralization are greatly reduced in otherwise normal infants born before the beginning of the third trimester and are affected by fetal positioning in the final trimester, which may be crucial as a source of early asymmetrical motion experience in bipedal humans (Previc, 1991). Indeed, the entire edifice of human laterality may be based primarily on primordial prenatal (i.e. nongenetic) factors (Previc, 1991).
Finally, the notion that language and language-linked brain lateralization are determined genetically is contradicted by the nature of human language as a very robust behavior that does not depend on a particular sensory modality (e.g. hearing) or motor system (e.g. speech). For example, individuals who cannot speak or move their hands can communicate with their feet, and those who cannot hear or see can use their hands to receive messages. Humans have invented languages dependent on speech sounds but also on manual signs, tactile signals, fundamental (musical) frequencies, visual icons, clicks, whistles, and probably other signals as well, all demanding many of the same skills described above for speech comprehension and production. Moreover, the mechanisms of language have expropriated the same systems used in more basic motor functions such as chewing, hand movements, and eye movements, the latter two of which accompany linguistic thought (Kelso and Tuller, 1984; Kingston, 1990; McGuigan, 1966; Previc et al., 2005). And the fact that speech is housed mostly in the left hemisphere of humans certainly doesn’t imply a causal (or, more specifically, a genetic) linkage, because the loss in early life of the left hemisphere does not affect subsequent language ability in any measurable way (see next section). Indeed, a pure “language” gene/protein would have to be a strange one in that it would have to:
- affect language at a superordinate level, independent of any particular sensorimotor modality;
- affect one hemisphere more than another, even though the lateralization process does not appear to be under genetic control - and even though language proceeds just fine in the absence of the originally favored hemisphere;
- and affect no other sensorimotor or cognitive systems, even though these other systems are closely tied to language processing and output and are, in some cases, necessary for language to occur.
Needless to say, no pure language gene has been found or is likely to ever be found.
In summary, a major role for direct genetic selection in language and other higher-order cognitive functions is unlikely. This is consistent with the fact that all major intellectual advances during human evolution proceeded in sub-Saharan Africa (McBrearty and Brooks, 2000; Previc, 1999), even though ancestral humans had populated wide swaths of Africa, Europe, and Asia nearly two million years ago. If cognitive ability - and not physiological and dietary adaptations, which occurred mostly in sub-Saharan Africa for reasons to be discussed in Chapter 5 - was the major trait genetically selected for, then why were the other regions of the world, in which cognitive ability would also have proven beneficial, unable to rival sub-Saharan Africa as the cradle of human evolution?
Did our larger brains make us more intelligent?
The second “myth” concerning human evolution - that we got smarter mainly because our brains got bigger - remains very popular, even among researchers in the field. Yet there are even more powerful arguments against this view than against the genetic selection theory. After all, elephants have far bigger brains than any other land animal, yet most would not consider them intellectual giants; conversely, birds have very small brains (hence the derogatory term “bird-brain”), but we now realize that some bird species (e.g. parrots) actually possess relatively advanced cognitive capacities, such as language, arithmetic, and reasoning skills (Pepperberg, 1990).
Accordingly, most brain scientists accept that a better predictor of intelligence than raw brain size is the ratio of brain weight to body weight; using this measure, humans fare very well, along with other creatures that we might consider intelligent (chimpanzees, dolphins, parrots). However, there are problems even with this measure, because the lowly tree shrew - a small, energetic creature that was an early ancestor of primates such as monkeys but is hardly noted for its intellectual prowess - ranks above all others in brain-to-body ratio (Henneberg, 1998). Moreover, the correlation between brain/body size and intelligence in humans has generally been shown to be very modest, with a typical coefficient that is barely more than the correlation between height and intelligence (~0.3) (see Previc, 1999). Since no researchers have claimed that height is causally related to intelligence, there is no reason to assume that the equally modest relationship between brain size and intelligence is causal either. Moreover, when the relationship between brain size and intelligence is examined within families, to control for dietary and other environmental differences among families, the correlation becomes essentially random (Schoenemann et al., 2000). Indeed, even among humans of normal body size there are great variations in brain size, ranging normally from 1,000 cc to over 1,500 cc, and some of the most brilliant minds throughout history have actually had estimated brain sizes toward the low end of that range. The Nobel prize-winning novelist Anatole France had a brain weighing only about 1,000 g - about the same as that of the human ancestor Homo erectus, who lived over a million years ago - and most individuals with microcephaly (extremely small brains) but without other associated disorders such as Down’s syndrome or growth retardation tend to be of normal intelligence (see Skoyles, 1999, for a review). For example, one well-studied young mother given the moniker “C2” is estimated to have a brain size of around 740 cc (at the low end of the Homo erectus range), despite an estimated intelligence quotient (IQ) of 112 (above that of the average human). Finally, the importance of brain size to intelligence is dubious from an evolutionary perspective, in that most of the increase in brain-to-body size in humans over the past million years is explained not by an increase in brain size but rather by a decrease in the size of our digestive tract, arguably made possible by the reduced digestive demands associated with the increased consumption of meat and the cooking of plant foods (Henneberg, 1998). Ironically, the average human brain actually shrank over the past 100,000 years or so, from 1,500 cc to 1,350 cc (Carroll, 2003), despite the aforementioned explosion of human intellectual capability (see Chapter 5).
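The weakness of a correlation of roughly 0.3 can be made concrete by squaring it: the coefficient of determination (r²) gives the share of IQ variance that brain size would statistically account for. A minimal illustration in Python, using the approximate value cited above:

```python
def variance_explained(r: float) -> float:
    """Coefficient of determination: the fraction of variance in one
    variable statistically accounted for by the other."""
    return r ** 2

# Approximate brain-size/IQ correlation discussed in the text:
r = 0.3
print(f"r = {r} -> r^2 = {variance_explained(r):.2f}")  # -> 0.09, i.e. ~9%
```

In other words, even taking the modest correlation at face value, over 90 percent of the variation in intelligence would remain unexplained by brain size.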
Consequently, researchers have turned to yet another measure, which compares the relative size of different structures such as the cerebral cortex - the outer, mostly gray mantle or “bark” of our brains, on which most of our higher-order cognitive capacities depend (hence the positive connotations of having lots of “gray matter”) - against the brain distribution of an insectivore such as the tree shrew. By this measure, also known as the progression index, the human neocortex is by some accounts effectively 2.5 times larger than the chimpanzee’s relative to the rest of the brain (Rapoport, 1990), although this has been disputed (Holloway, 2002). Even more strikingly, the area of the neocortex associated with mental reasoning - the prefrontal lobes - occupies 29 percent of the neocortex in humans but only 17 percent in the chimpanzee (Rapoport, 1990), although a more recent review argued for no difference in relative frontal-lobe size between humans and other apes (Semendeferi et al., 2002). At first glance, this suggests that the size of at least one portion of our brain may indeed account for the intellectual advancement of humans relative to the great apes.
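The logic of such relative measures can be sketched as follows. The volumes below are invented placeholders purely for illustration, not Rapoport’s (1990) data, and the real progression index involves allometric corrections omitted here; the sketch only shows the kind of ratio-of-ratios comparison being made.

```python
def neocortex_to_rest(neocortex_vol: float, total_brain_vol: float) -> float:
    """Ratio of neocortex volume to the volume of the rest of the brain."""
    return neocortex_vol / (total_brain_vol - neocortex_vol)

# Hypothetical volumes (cc), for illustration only:
human = neocortex_to_rest(neocortex_vol=1000.0, total_brain_vol=1350.0)
chimp = neocortex_to_rest(neocortex_vol=230.0, total_brain_vol=400.0)

# How much larger is the human neocortex, relative to the rest of the brain?
print(f"human/chimp relative-neocortex ratio: {human / chimp:.2f}")  # ~2.1 here
```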
However, the attribution of human intelligence to a larger neocortex, or to a larger prefrontal cortex in particular, is as erroneous as the notion that overall brain size is relevant to human intelligence. For one, an individual by the name of Daniel Lyon, who had a brain of only 624 cc but essentially normal intellect, is believed to have had an especially small cerebral cortex relative to the size of the rest of his brain (Skoyles, 1999). Moreover, research has shown that removal of only the prefrontal cortex in infancy produces no long-term deficits in intelligence in monkeys (Tucker and Kling, 1969). In fact, removal of major portions of the left hemisphere in infancy produces remarkably few long-term linguistic and other intellectual decrements (de Bode and Curtiss, 2000), even though in normal adult humans the left hemisphere is the predominant hemisphere for most linguistic and analytic intelligence. One child who received a hemispherectomy even ended up with above-average language skills and intelligence, despite the fact that his entire left hemisphere had been removed at the relatively late age of five-and-a-half years and he had disturbed vision and motor skills on one side of his body due to the surgery (Smith and Sugar, 1975). Even more striking are the findings of Lorber (1983), who described many cases of children with hydrocephalus, a condition in which overproduction of cerebrospinal fluid in the center of the brain puts pressure on the cerebral cortex and, if left untreated, eventually compresses and damages it. In one of Lorber’s most dramatic cases, a child with only 10 percent of his cerebral cortical mantle remaining eventually ended up with an overall intelligence quotient of 130 (in the genius range), with a special brilliance in mathematics (Lorber, 1983). Somewhat cheekily, Lorber went so far as to entitle his famous chapter “Is your brain really necessary?”
The failure of large-scale cortical removal in infants and young children to substantially affect subsequent intelligence is complemented by the dramatically different intellects that can exist in similarly sized brains. The most striking example involves the left and right hemispheres, which are almost identical in weight and shape, aside from a few minor anatomical differences that appear to have little functional significance (see Previc, 1991). Yet, as noted earlier, it would be hard to imagine two more different intellects and personalities. The left hemisphere is impressive at abstract reasoning, mathematics, and most language functions, yet it has difficulty in interpreting simple metaphors and proverbs, in judging the intent of others, and in performing other simple social tasks. By contrast, the right hemisphere is poor at most language functions (it has the grammar of a six-year-old and the vocabulary of an eleven-year-old) and does poorly on logical reasoning tests, yet it is superior to the left hemisphere at proverb interpretation, understanding the intent of others, self-awareness, emotional processing, social interaction, certain musical tasks, and 3-D geometry (Gazzaniga, 2005). Another important example of the stark contrast between an anatomically normal brain and severe abnormalities in higher mental functioning involves the disorder known as phenylketonuria (PKU). In this genetic disorder, the enzyme phenylalanine hydroxylase is absent, leaving the body unable to convert phenylalanine to tyrosine (the precursor of the neurotransmitter dopamine), resulting in a buildup of phenylpyruvic acid and a decrease in tyrosine (as well as dopamine). Because these problems emerge only after birth, when the basic size and shape of the brain have already been established, the brains of persons with PKU appear grossly normal, even though those with PKU suffer severe mental retardation if their excess phenylalanine is not treated by dietary restrictions.
In conclusion, there are compelling reasons to reject as myth the standard view that the evolution of human intelligence and other advanced faculties was determined by direct genetic influences that conspired to change the size and shape of the human brain. On the basis of his own findings with hydrocephalic children, Lorber (1983: 12) concluded that there is an “urgent need to think afresh and differently about the function of the human brain that would necessitate a major change in the neurological sciences.” Unfortunately, this revolution in the perspective of the neuroscientific community at large has yet to occur.
The evolution of human intelligence: an alternative view
The pervasiveness of the myth that the ability of humans to think and create in advanced ways depends on the overall size of our brains is surprising, in that few of us would automatically conclude that a bigger computer is a better one. Indeed, some of the massive early computers had less than one-billionth the speed and capacity of current notebook computers. Rather, it is how the system works - a collective product of such functions as internal speed, amount of parallel processing etc., known as its “functional architecture” - that largely determines its performance. In fact, by any reckoning, our brain is far larger than it would ever have to be to perform the advanced intellectual functions that we evolved. For example, the number of nerve cells in it (100 billion, as a generally accepted estimate) times the average number of connections per nerve cell (10,000, another generally accepted estimate) times the number of firings per second (up to 1,000, for rapidly firing neurons) would allow our brain to perform a number of calculations per second (10^18) comparable to our very best state-of-the-art computers. While using but a fraction of their hardware capabilities, such computers can crunch massive numbers in microseconds, generate real-world scenes in milliseconds, understand the complex syntax of language, and play chess better than the greatest of humans.
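The back-of-envelope estimate above simply multiplies three quantities; a minimal Python check of the arithmetic, using the same generally accepted figures cited in the text:

```python
neurons = 100e9             # ~100 billion nerve cells
synapses_per_neuron = 1e4   # ~10,000 connections per cell
max_firing_rate_hz = 1e3    # up to ~1,000 firings per second

# Upper-bound estimate of elementary operations per second:
ops_per_second = neurons * synapses_per_neuron * max_firing_rate_hz
print(f"{ops_per_second:.0e} operations per second")  # -> 1e+18
```

This is, of course, a crude upper bound (most neurons fire far below 1,000 Hz), which only strengthens the point: raw capacity is not what distinguishes the human brain.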
What, then, is the essence of why we are so intelligent relative to the rest of the animal world, and especially other primates? Perhaps the most important clue - indeed, the “Rosetta Stone” of the brain - lies in the differences between the left and right hemispheres of the human brain.[1] As already noted, it is our left hemisphere, with its grammatical, mathematical, and logical reasoning skills, that most obviously differentiates our intellectual capability from that of chimpanzees, despite the comparable size and overall shape of its right-hemispheric counterpart and the lack of a known gene underlying its advanced intellect. The right hemisphere is marginally heavier and the left hemisphere may have slightly more gray matter, but the left-right differences are far smaller than those between the brains of two different humans. There also typically exist larger right frontal (anterior) and left occipital (posterior) protrusions, as if the brain were slightly torqued to the right, as well as some differences in the size, shape, and neural connections of the ventral prefrontal and temporal-parietal regions of cortex that house, among other things, the anterior and posterior speech centers (Previc, 1991). However, there is evidently no functional significance to the torque to the right, and the other changes do not appear to be causally linked to the functional lateralization of the brain, since they generally occur after the process of cerebral lateralization is well under way (see Previc, 1991).
A much more likely candidate for why the left and right hemispheres differ so much in their functions is the relative predominance of four major neurotransmitters that are used to communicate between neurons at junctures known as synapses. The four most important lateralized neurotransmitters are dopamine, norepinephrine, serotonin, and acetylcholine.[1] On the basis of a wide variety of evidence (see Flor-Henry, 1986; Previc, 1996; Tucker and Williamson, 1984), it is generally accepted that dopamine and acetylcholine predominate in the left hemisphere and that norepinephrine and serotonin predominate in the right hemisphere. The latter two transmitters are heavily involved in arousal, which explains why the right hemisphere is generally more involved in emotional processing. That leaves dopamine and acetylcholine as the two most likely candidates for understanding why the left hemisphere of humans has evolved such an advanced intellect, with at least five lines of evidence suggesting that it is dopamine rather than acetylcholine that underlies human intelligence. Three of these pertain to how dopamine is distributed in the brain, and the other two pertain to the known role of dopamine in cognition.
The three major arguments for a role of dopamine in advanced intelligence are:
- dopamine is highly concentrated in all nonhuman species with advanced intelligence;
- only dopamine has expanded throughout primate and hominid evolution;[2] and
- dopamine is especially rich in the prefrontal cortex, the single-most important brain region involved in mathematics, reasoning, and planning.
Nonhuman brains of other advanced species such as parrots, dolphins, and primates differ in many fundamental respects, but they are similar in that all have relatively high amounts of dopamine (Previc, 1999). Most striking is the tiny overall size and almost absent cortical mass of bird brains, which nonetheless contain a well-developed, dopamine-rich striatum (Waldmann and Gunturkun, 1993) and nidopallium caudolaterale that support birds’ impressive mathematical and linguistic competencies. Indeed, the latter structure is considered analogous to the mammalian prefrontal cortex because of its high percentage of input from subcortical dopamine regions and its critical role in cognitive shifting and goal-directed behavior (Gunturkun, 2005). As regards the second argument, the dopaminergic innervation of the cerebral cortex is known to have undergone a major expansion in primates; for example, dopamine is limited to specific brain areas in rodents but is densely represented in certain cortical layers in all regions of the monkey brain (see Gaspar et al., 1989). Moreover, the dopaminergic expansion has continued into humans, judging from the almost two-fold increase (adjusted for overall brain size) in the size of the human caudate nucleus - in which dopamine is most densely concentrated - relative to that of the chimpanzee caudate (Rapoport, 1990). By contrast, there is no evidence that any other transmitter has expanded as much, if at all, during human evolution, and the cholinergic content of the human cerebral cortex may actually have diminished (Perry and Perry, 1995). Finally, dopamine appears to be especially well represented in the prefrontal cortex of humans, and chemical removal of dopamine from an otherwise intact prefrontal cortex essentially duplicates all of the intellectual deficits produced by outright damage to this region (Brozoski et al., 1979; Robbins, 2000). One major feature of the prefrontal cortex is its ability to recruit other cortical regions in performing parallel mental operations, which is likely to be dopaminergically mediated because dopamine is well represented in the upper layers of the cerebral cortex, where the connections to the other cortical regions mostly reside (Gaspar et al., 1989; see Chapter 2).
Two other reasons for linking dopamine with advanced intelligence are its direct roles in normal intelligence and in the intellectual deficits found in clinical disorders in which dopamine levels are reduced. Far more than acetylcholine, dopamine is involved in six major skills underlying advanced cognition: motor planning and execution; working memory (which allows us to engage in parallel processing because we can process and operate on different types of material at the same time); cognitive flexibility (mental shifting); temporal processing/speed; creativity; and spatial and temporal abstraction (see Chapter 3 for a greater elucidation). Working memory and cognitive flexibility are considered the two most important components of what is known as “executive intelligence,” and tasks that assess it have been shown by brain imaging to directly activate dopamine systems in the brain (Monchi et al., 2006). Enhanced parallel processing and processing speed, which modern computers rely on to achieve their impressive processing power, are particularly associated with high general intelligence in humans (Bates and Stough, 1998; Fry and Hale, 2000), as are dopamine levels themselves (Cropley et al., 2006; Guo et al., 2006). Second, dopamine is arguably the neurotransmitter most involved in the intellectual losses in a number of disorders, including Parkinson’s disease and even normal aging, phenylketonuria, and iodine-deficiency syndrome (see Chapter 4). In phenylketonuria, for example, the genetically mediated absence of the phenylalanine hydroxylase enzyme prevents the synthesis of tyrosine, an intermediary substance in the synthesis of dopamine by the brain (Diamond et al., 1997; Welsh et al., 1990). Neurochemical imbalances rather than neuroanatomical abnormalities are believed to be the major cause of (and basis of treatment for) almost every brain-related psychological disorder. In particular, either elevated or diminished dopamine contributes in varying degrees to Alzheimer’s disease and normal aging, attention-deficit disorder, autism, Huntington’s disease, iodine-deficiency syndrome (cretinism), mania, obsessive-compulsive disorder, Parkinson’s disease, phenylketonuria, schizophrenia, substance abuse, and Tourette’s syndrome, all of which are associated with changes in cognition, motor function, and/or motivation (see Previc, 1999; Chapter 4).
The importance of dopamine to these disorders partly explains why dopamine is by far the most widely studied neurotransmitter in the brain. For example, in the exhaustive Medline database listing studies involving different neurotransmitters, dopamine was the subject of over 60,000 brain articles through 2008, whereas serotonin - the next most-studied neurotransmitter and a substance that has important interactions with dopamine - was the subject of about 38,000 brain articles, and acetylcholine (the other predominant left-hemispheric transmitter) was the subject of fewer than 17,000 brain papers.
The rise of dopamine during human evolution
If the principal reason for the uniqueness of human intellectual and other behavior is that the neurochemical balance in our brains favors dopamine, the remaining great question is how dopamine ended up being so plentiful in the human brain, especially in its left hemisphere. Certainly, there are no new genes in humans that control the production of the major dopamine synaptic receptors (Previc, 1999), and no variation in these genes within humans seems to correlate strongly with variations in intelligence (Ball et al., 1998). As will be discussed further in Chapter 5, it is more likely that dopamine levels were indirectly altered by genetic changes that affected calcium production, thyroid hormones, or some more general physiological mechanism or adaptation, given that both calcium and thyroid hormones are involved in the conversion of tyrosine to dopa (the immediate precursor of dopamine) in the brain. The stimulatory effect of thyroid hormones on both skeletal growth and dopamine metabolism, as well as the stimulatory effect of dopamine on growth hormone, are especially attractive mechanisms for explaining the triple convergence of intelligence, brain size, and skeletal height. The importance of the thyroid hormones is also suggested by the previously noted finding that elevated levels of thyroid hormone in humans represent the first confirmed endocrinological difference between chimpanzees and humans (Gagneux et al., 2001).
As reviewed by Previc (1999, 2007), there are even more plausible nongenetic explanations for why dopamine levels increased during human evolution. As will be discussed further in Chapters 4 and 5, the most likely candidate for a nongenetic or epigenetic inheritance of high dopamine levels is the ability of maternal factors - specifically, the mother’s neurochemical balance - to influence levels of dopamine in offspring. Not only have dopaminergic systems been shown to be influenced by a host of maternal factors, but there is also compelling evidence that the neurochemical balance of humans has, in fact, changed in favor of increasing dopamine due to a combination of five major factors:
- a physiological adaptation to a thermally stressful environment (which requires dopamine to activate heat-loss mechanisms);
- increased meat and shellfish consumption (which led to greater supplies of dopamine precursors and conversion of them into dopamine);
- demographic pressures that increased competition for resources and rewarded dopaminergically mediated achievement-motivation;
- a switch to bipedalism, which led to asymmetric vestibular exposure in fetuses during maternal locomotion and resting and ultimately elevated dopamine in the left hemisphere of most humans (see Previc, 1991); and
- major increases in the adaptive value of dopaminergic traits such as achievement, conquest, aggression, and masculinity, beginning with late-Neolithic societies.
The link between bipedalism and brain lateralization exemplifies how epigenetic transmission could have become a seemingly permanent part of our inheritance during our evolution. Bipedalism, together with asymmetric prenatal positioning, creates asymmetrical gravitoinertial forces impacting the fetus, which may in turn create asymmetrical vestibular functioning and neurochemical differences between the hemispheres (Previc, 1991, 1996). Although the ultimate cause of the neurochemical lateralization may be nongenetic, the switch to bipedalism was a permanent behavioral fixture, so the resulting cerebral lateralization would continue for all future generations of humans and superficially appear as if it had become part of the genome itself.
Before addressing the changes in human brain dopamine levels during evolution and history in Chapters 5 and 6, it will first be necessary to further detail the nature of dopaminergic systems in the human brain (Chapter 2) and dopamine’s role in normal and abnormal behavior (Chapters 3 and 4). Finally, Chapter 7 will discuss the consequences of the “dopaminergic mind” not only for humans but for other species. As part of that critique, the impressive accomplishments of the dopaminergic mind throughout history will be weighed against the damage it has caused to itself, to others, and to the Earth’s ecosystems. It will be concluded that, for humans to prosper (and perhaps even survive), the dopaminergic imperative that propelled humans to such great heights and more than anything else defined us as humans must in the end be relinquished.