Wednesday, December 31, 2008

Predictions for 2009

Human genomics

The Neanderthal genome will be fully sequenced. There will be no evidence of interbreeding with modern humans (although proponents of the multiregional model will remain unconvinced). By comparing this genome with ours, we may reconstruct the genome of archaic humans who lived almost a million years ago and who were ancestral to Neanderthals and modern humans.

Meanwhile, work will begin on sequencing the genome of early modern humans (10,000 – 40,000 years ago). This project should ultimately prove to be more interesting by showing us how much modern humans have evolved during their relatively short existence. We will probably find out that John Hawks erred on the low side in concluding that natural selection had changed 7% of the human genome over the past 40,000 years.

Darwin remembered

With the 150th anniversary of The Origin of Species, much will appear in 2009 about Charles Darwin and his life. We already know how he came up with his theory of evolution (Darwin salted away almost everything he wrote), although a few questions remain unanswered. What would he have done if he had lived longer? What did he have in mind for future projects?

Probably not much. He had said everything he wanted to say. The Origin of Species (1859) came out of a backlog that had built up in his mind during the previous twenty years. Then came The Descent of Man (1871), which used material left out of The Origin. Finally, The Expression of the Emotions in Man and Animals (1872) was largely a spin-off of The Descent. With this trilogy completed, he had little more to say. A younger Darwin might have addressed one of the dilemmas of evolution. How do selected characteristics perpetuate themselves? What keeps them from being blended away into non-existence with each generation of sexual reproduction? Darwin might have learned about another contemporary scientist, Gregor Mendel, and together the two of them might have proposed a particulate theory of genetics—more than thirty years before later evolutionists rediscovered Mendel’s work. The field of genetics would have developed much faster and, under Darwin’s guidance, might have avoided some of its later blind alleys (e.g., mutation pressure, saltationism, etc.).

Perhaps. But Darwin was unprepared for success. He had finally got to tell the world everything he had so long held back. And the world listened. From then on, a sense of emptiness took over, as if his remaining years were little more than an epilogue.

The Crisis?

The Second Great Depression will not begin in 2009. In any case, what scares me is not the prospect of a sudden drop in the standard of living. Rather, it’s that of a gradual decline to almost half its current value. That scenario is scarier and likelier. And it’s probably already started. For the past fifteen years, median wages have stagnated despite decent economic growth. What will happen when growth stays in the 0-2% range?

Wednesday, December 17, 2008

Neanderthals in my sinus?

About two years ago, Gregory Cochran teased GNXP readers with a suggestion that Neanderthals might still be living among us. There was a flurry of speculation. Sasquatches? Yeti? Scottish redheads? Finally, the answer has come out. Greg thinks there may be infectious organisms that originally developed from Neanderthal tumors several tens of millennia ago. These organisms might look like amoebae, but genetically they would be Neanderthals.

The general idea is that host cell line infections can occur (TVT, Tasmanian devil facial tumor, and a contagious leukemia in Syrian hamsters), can mutate into something that is nonlethal and/or chronic if selection favors that (TVT usually goes away with time), can infect related species (TVT can be experimentally transmitted to wolves, jackals, coyotes and red foxes), and might exist in humans today. There are human diseases that appear infectious for which the transmissible agent has not been identified - sarcoidosis, for example. So modern humans might suffer from infectious organisms directly derived from Neanderthals or other archaic humans. As far as I know, no one has yet thought of looking for Neanderthal-derived cells inside people. Since such cells would have the required genetic code for making human signal molecules, they might be particularly likely to employ baroque forms of host manipulation.

… there _could_ be Neanderthal-derived cell line infections, and this is really the only scenario I've been able to come up with that gives us live Neanderthals - hiding in your sinuses, or maybe your prostate. The only one so far. There are other known infectious diseases in which some metazoan has completely chucked complexity and gone back to being a germ: whirling disease in fish, for example.

Will it then be possible to resurrect Neanderthals à la Jurassic Park? Another GNXP commenter, Eric J. Johnson, poured cold water on the idea:

… the problem would be a lack of purifying selection on all the morphogen genes, not to mention all the neuron-specific genes, etc. The tumor doesn't need any of that stuff. How fast they would all turn to garbage, I don't know. Probably pretty fast.

Wednesday, December 10, 2008

Gene-culture co-evolution and evolutionary psychology

How much do human populations differ from each other in real, functional terms? The question remains open, but an answer is starting to unfold. In 2007, a team led by anthropologist John Hawks found that natural selection seems to have modified at least 7% of the human genome over the last 40,000 years, i.e., during the period when modern humans spread out of Africa and peopled the other continents. In addition, as they moved into these different physical and cultural environments, the pace of genetic change seems to have speeded up, particularly after the advent of agriculture 10,000 years ago. The rate of change may then have been over a hundred times what it had been during most of human evolution (Hawks et al., 2007).

We do not fully know the nature of these recent genetic changes. John Hawks suggests they may reflect adaptations to new ecological and cultural settings, specifically to cold, to new diets (cereals, milk, etc.), to new epidemic diseases associated with the spread of agriculture (smallpox, malaria, yellow fever, typhus, cholera), and to new forms of “communication, social interactions, and creativity.”

There thus seem to have been multiple EEAs in relatively recent times, and not simply one situated in the Pleistocene. Some of them would correspond to the different physical environments that modern humans moved into as they spread out of Africa 40 to 50 thousand years ago. Most, however, seem to have arisen in the past 10 thousand years and correspond to different cultural environments.

John Hawks is certainly not the first to suggest that culture has been a key part of the human adaptive landscape. Usually referred to as ‘gene-culture co-evolution’, this paradigm has had many proponents, notably Pierre van den Berghe, Charles Lumsden, and E.O. Wilson. It has nonetheless remained marginal, even among evolutionary psychologists, partly because of John Tooby and Leda Cosmides, whose influence was critical during the early years of evolutionary psychology:

It is no more plausible to believe that whole new mental organs could evolve since the Pleistocene—i.e., over historical time—than it is to believe that whole new physical organs such as eyes would evolve over brief spans. It is easily imaginable that such things as the population mean retinal sensitivity might modestly shift over historical time, and similarly minor modifications might have been made in various psychological mechanisms. However, major and intricate changes in innately specified information-processing procedures present [in the human mind are unlikely to have evolved] over brief spans of historical time. (Tooby & Cosmides, 1989)

In a more recent article, they have backed away from this position: “Although the hominid line is thought to have originated on edges of the African savannahs, the EEA is not a particular place or time.” Each biological adaptation has its own EEA, which is simply a composite of whatever selection pressures brought it into being (Tooby & Cosmides, 2005). There are thus potentially as many EEAs as there are adaptations. It follows, then, that some EEAs may have existed later in time than others.

How much later? Tooby and Cosmides considered complexity to be one limiting factor. The more complex the adaptation, the more genes it would involve, and the longer the time needed to coordinate the evolution of all those genes. Recent biological evolution has therefore probably involved only simple traits, certainly nothing as complex as mental ones. Any recent change in complex mental traits could have arisen only through a faster process, notably cultural evolution.

The problem with this argument is that complex traits do not arise ex nihilo. They arise through modifications, deletions, or additions to existing traits. And such changes can occur through a single point mutation at a regulatory gene. As Harpending and Cochran (2002) point out:


Even if 40 or 50 thousand years were too short a time for the evolutionary development of a truly new and highly complex mental adaptation, which is by no means certain, it is certainly long enough for some groups to lose such an adaptation, for some groups to develop a highly exaggerated version of an adaptation, or for changes in the triggers or timing of that adaptation to evolve. That is what we see in domesticated dogs, for example, who have entirely lost certain key behavioral adaptations of wolves such as paternal investment. Other wolf behaviors have been exaggerated or distorted.

Gene-culture co-evolution also presents difficulties that are inherent to the paradigm itself:

1. The linkages between genes and culture tend to be remote, indirect, multiple, and complex. There are some straightforward ones, such as between lactose intolerance and consumption of dairy products, but such linkages are probably unrepresentative of gene-culture co-evolution.

2. With only a few minor exceptions, gene-culture co-evolution is specific to humans. Cross-species comparisons, so common in other fields of evolutionary study, are thus of little help (van den Berghe & Frost, 1986).

These difficulties are not insuperable. To some degree, they reflect an unconscious desire to study human evolution with the same conceptual tools that have been used to study the evolution of other species. Other tools will have to be developed, or simply borrowed from the social sciences: psychology, sociology, and anthropology. Thus, there are no real barriers to renewed use of this paradigm, particularly as we move beyond the single-EEA model and investigate the 7% of the human genome that has apparently changed over the past 40,000 years.

References

Harpending, H. & G. Cochran. (2002). In our genes. Proceedings of the National Academy of Sciences (USA), 99(1), 10-12.

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, & R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences (USA), 104, 20753-20758.

Tooby, J. & L. Cosmides. (2005). Conceptual foundations of evolutionary psychology, in: D. M. Buss (Ed.) The Handbook of Evolutionary Psychology, Hoboken, NJ: Wiley, pp. 5-67.

Tooby, J. & L. Cosmides. (1989). Evolutionary psychology and the generation of culture, Part I. Theoretical considerations, Ethology and Sociobiology, 10, 29-49.

van den Berghe, P.L., & Frost, P. (1986). Skin color preference, sexual dimorphism and sexual selection: A case of gene-culture co-evolution? Ethnic and Racial Studies, 9, 87-113.


Wednesday, December 3, 2008

More on father absence

I used to believe in a direct causal link between father absence and early sexual maturity in girls. The reasoning was that a daughter’s sexual development is accelerated when her biological father is replaced by a strange male (such as a stepfather). At the time, I saw this finding as a way to counter the argument that sociobiology denies human plasticity. It also offered hope that we could remedy a large number of social problems by ensuring father presence. Now I can’t help wondering whether all of this distorted my sense of judgment … and that of others.

One of the best studies on this subject is by Surbey (1990), who used a large sample (1,247 daughters) and measured several possible confounding factors: family size, birth order, weight, height, Quetelet Index, and socio-economic status (SES). On none of these measures did the father-absent daughters (16% of the sample) significantly differ from the father-present daughters. Nonetheless, they matured 4-5 months earlier than those who lived with both parents continuously and 7 months earlier than those who had experienced only an absent mother.

That sounds convincing. Yet how well was SES really controlled? The subjects were apparently university students, so they would have shared the SES of their mothers. But what about the SES of their absent fathers? What do we know about them? Typically nothing. And does SES fully capture all of the factors that distinguish father-absent daughters from father-present ones? Could it be that these two groups differ somewhat in their physiological make-up and, perhaps, in their genetic background?

These doubts led Mendle et al. (2006) to control for genetic background by examining the daughters of twin mothers. It turned out that the daughters did not differ in age of menarche if one mother was still living with the biological father and the other was not. Moreover, when the mother’s age of menarche was controlled among unrelated daughters, age of menarche no longer differed between daughters living with stepfathers and those living with biological fathers.


The presence of a step-uncle was as strongly predictive of early menarche as presence of a stepfather. It does not seem necessary for a child to experience the direct environmental influence of a stepfather to exhibit an accelerated age of menarche—as long as she is genetically related to someone who does have a stepfather. In a pair of twin mothers of which only one raises her children with a stepfather, the offspring of both twins are equally likely to display early age of menarche. It therefore appears that some genetic or shared environmental confound accounts for the earlier association found in female children living with stepfathers.

Mendle et al. (2006) raised another point. The correlation between father-absence and early menarche may be an artefact of population substructure:


The wholly Caucasian population of our Australian sample may explain our failure to replicate the strong father-absence association observed in more ethnically diverse American samples. Given that African American and Latina girls experience menarche on average 6 months prior to Caucasians (Herman-Giddens et al., 1997), it may be that the previously established associations between early menarche and lack of a traditional two-parent family structure are affected by racial differences in family structure. Correlates of early menarche may additionally be complicated by effects of poverty or socioeconomic status. For example, Obeidallah, Brennan, Brooks-Gunn, Kindlon, and Earls (2000) obtained a difference in age of menarche between Caucasian and Latina girls, but this effect disappeared after controlling for socio-economic status.

Why didn’t other studies control for ethnicity? Apparently because the authors felt that SES controls were sufficient. This may be true for Hispanic Americans but it is not for African Americans. Even among Hispanics, there may still be substructure effects. It is known that Hispanic SES correlates with European ancestry, so controlling for SES would bias this population toward individuals who are more genetically similar to European Americans.

All of this makes me wonder about all of the data that supposedly prove the adverse effects of single motherhood. Undoubtedly, there are adverse effects. But there are probably many “pseudo-effects” that would persist even if the biological father could be forced to stay around.

For what it’s worth, I spent part of my pre-adult life in a father-absent family (my father died of a cerebral hemorrhage). Yes, there were adverse effects, poverty in particular. Nonetheless, I think I would have ended up being substantially the same kind of person even if my father had continued to live.

References

Mendle, J., Turkheimer, E., D’Onofrio, B.M., Lynch, S.K., Emery, R.E., Slutske, W.S., Martin, N.G. (2006). Family structure and age at menarche: a children-of-twins approach. Developmental Psychology, 42, 533-542.

Surbey, M.K. (1990). Family composition, stress, and the timing of human menarche. In T.E. Ziegler & F.B. Bercovitch (eds.) Socioendocrinology of Primate Reproduction, pp. 11-32, New York: Wiley-Liss Inc.


Thursday, November 27, 2008

Does paternal investment increase child IQ?

The latest issue of Evolution and Human Behavior has an article on paternal investment and IQ. Using a longitudinal dataset of children born in Britain in 1958, Nettle (2008) found a significant positive correlation between a child’s IQ at age 11 and the father’s degree of family involvement. The less a father cared for his offspring, the less intelligent they were. Nettle concluded that paternal investment affects childhood IQ.

Now, correlation is not causation. It can be shown, for instance, that Presbyterian ministers in Boston have earnings that significantly correlate over time with the price of rum in Havana. But that doesn’t mean they’ve been dabbling in the rum trade. It simply means there’s a common causal factor, in this case the North American business cycle.

Similarly, a common cause may explain the correlation between low investment by fathers and low intelligence in their children. Deadbeat dads tend to be more present-oriented and probably less intelligent. Since intelligence has a large heritable component, their children would be less intelligent on average.
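
This common-cause logic is easy to demonstrate with a toy simulation (hypothetical numbers, not Nettle’s data): a single underlying factor, here labelled time preference, drives both a father’s involvement and, through inheritance, his child’s IQ, and the two end up correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical common cause: the father's future-orientation (time preference).
father_trait = rng.normal(0, 1, n)

# Paternal involvement depends on the father's trait, plus noise.
involvement = 0.6 * father_trait + rng.normal(0, 1, n)

# Child IQ depends on the inherited trait, plus noise -- involvement itself
# has no causal effect in this toy model.
child_iq = 100 + 7 * father_trait + rng.normal(0, 10, n)

# The two variables still correlate, purely through the shared cause.
r = np.corrcoef(involvement, child_iq)[0, 1]
print(f"correlation(involvement, child IQ) = {r:.2f}")  # about 0.3 despite no causal link
```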

In all fairness, the Nettle study did control for social class, which in turn partially controls for time preference (i.e., whether the fathers were present-oriented or future-oriented). Specifically, the fathers were coded in terms of five occupational categories: professional, managerial and technical, skilled, partly skilled, and unskilled. I doubt, though, that this factor would have accounted for most variability in time preference. Even within the British working class, there is considerable variation, notably by religion and ethnicity, in the respective weighting that people give to present impulses versus future obligations.

Reference

Nettle, D. (2008). Why do some dads get more involved than others? Evidence from a large British cohort. Evolution and Human Behavior, 29, 416-423.

Wednesday, November 19, 2008

Many genes and one g?

The October issue of Scientific American has an article on the search for genes that influence intelligence. Twin studies suggest that such genes exist, yet efforts to date have been disappointing for Robert Plomin, a behavioral geneticist at the Institute of Psychiatry in London.

Failing to find genes for intelligence has, in itself, been very instructive for Plomin. Twin studies continue to persuade him that the genes exist. “There is ultimately DNA variation responsible for it,” he says. But each of the variations detected so far only makes a tiny contribution to differences in intelligence. “I think nobody thought that the biggest effects would account for less than 1 percent,” Plomin points out.

That means that there must be hundreds—perhaps thousands—of genes that together produce the full range of gene-based variation in intelligence.

Variation in intelligence thus seems to be an accumulation of small effects from very many genes. For a long time, I had trouble reconciling this view with the concept of g—this single mental property, still unknown, that accounts for most variation in human intelligence. I now realize that this contradiction is more apparent than real. Whatever this single property might be, it doesn’t necessarily correspond to a single gene or even a single gene complex. It could be affected by an indefinite number of genes.
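
A toy simulation (illustrative numbers, not real genotype data) shows how a single continuous trait can emerge from a very large number of variants, each of which explains far less than 1% of the variance, which is consistent with both Plomin’s findings and a many-gene reading of g.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_loci = 5_000, 1_000

# Hypothetical biallelic loci: each person carries 0, 1 or 2 copies of each variant.
genotypes = rng.binomial(2, 0.5, size=(n_people, n_loci))

# Each locus gets a tiny random effect on the trait.
effects = rng.normal(0, 1, n_loci)

# The trait is simply the sum of all the small genetic effects, plus environmental noise.
genetic_score = genotypes @ effects
trait = genetic_score + rng.normal(0, genetic_score.std(), n_people)  # roughly 50% heritable

# Variance explained by the single largest-effect locus: well under 1%.
top = np.argmax(np.abs(effects))
r2 = np.corrcoef(genotypes[:, top], trait)[0, 1] ** 2
print(f"variance explained by the biggest single locus: {r2:.4f}")
```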

Reference

Zimmer, C. (2008). The search for intelligence. Scientific American, October, pp. 68-75.

Tuesday, November 11, 2008

Echoes of the Upper Paleolithic?

Among early modern humans, men faced less mate competition with increasing distance from the equator. They were proportionately fewer, and fewer of them could afford a second wife. This was partly because hunting distances were correspondingly longer, so that more men died in hunting accidents, and partly because longer winters made polygyny costlier. With fewer opportunities for food gathering, women were less self-reliant and depended more on men for food (Frost, 2006).

But what happened to those women who remained single? If they had no providers, wouldn’t they have died of starvation and wouldn’t these deaths have balanced out the operational sex ratio?

The short answer is that they remained with their parents and never had children. It is, above all, children who incur food-provisioning costs and make a male provider necessary. Nonetheless, it is interesting to speculate about these single women who must have been numerous among early Europeans, particularly in the continental Arctic of Ice Age Europe where opportunities for food gathering were few and far between.

If contemporary hunter-gatherers are a guide, such women become secondary caregivers by caring for younger siblings or aging parents. This spinsterhood is temporary, unless the woman suffers from a serious disability. How, then, would a hunter-gatherer society cope with large numbers of women who remain single? I addressed this question in my 1994 Human Evolution article.

Patterns of behaviour become stereotyped over time, with the result that their ritualized vestiges can persist much longer than the conditions that created them. A surplus of unattached females should be associated with a pattern of specialization in communal rather than family-oriented niches, e.g. shamanism, maintenance of base-camp dwellings, and tending of communal fires. Another pattern should be taboos that would have come to define this caste of unmarried women, e.g. virginity as a mark of caste membership, immunity from harm for fear of their shamanistic power.

Shamanism is strongly linked in early European traditions to women, especially virgins. This linkage is weaker in Siberian cultures, where female shamans predominate but are nonetheless married, and virtually unknown to the native peoples of North America, among whom most shamans are married men (Czaplicka, 1969: 243-255; Saladin d'Anglure, 1988; Hallowell, 1971: 19-22). The oldest sources from Greco-Roman, Germanic, and Slavic culture areas show an overwhelming preponderance of women among seers, witches, sibyls, oracles, and soothsayers (Baroja 1964: 24-57). Gimbutas (1982) and Dexter (1985) have argued that virgin females in early Europe were seen as "storehouses" of fertile energy and thereby possessed of extraordinary power. Thus, at the dawn of the Christian era the geographer Pomponius Mela mentioned nine virgin priestesses on an island off Brittany who knew the future and gave oracular responses to sailors who consulted them (Chadwick, 1966: 79). The first-century historian Cornelius Tacitus described a virgin prophetess among the Bructeri in present-day Germany, saying that this tribe "regards many women as endowed with prophetic powers and, as the superstition grows, attributes divinity to them" (Tacitus Histories 4:61). A similar caste of prophetesses, called dryades, existed among the Gauls (Chadwick, 1966: 80-81).

Single women also figured in what appear to be ritualized communal activities. The first-century geographer Strabo described a community of women who inhabited an island at the mouth of the Loire where "no man sets foot." (Geography 4.4.6) A sacred rite required them to unroof the temple and roof it again before sunset, a rite which Lefèvre (1900: 93) interpreted as recalling an age when women daily removed their hut's thatched roof to air the smoke-filled interior.

Another pattern links female virginity to the tending of communal fires. In both Roman and Greek mythology a virgin goddess, Vesta or Hestia, guards the communal hearth. The cult of Vesta required that the sacred fire of Rome be tended by a caste of virgin women — the Vestals. There is general agreement that this cult constituted an archaic element of Roman religion; the word Vesta, itself an archaism, appears to have come down unchanged from proto-Indo-European (6,000 B.P.), suggesting ritualization at an early date (Dumézil, 1970: 311-324). The idea that a celibate female must guard the hearth still survives in European folklore, the most familiar example being Cinderella — an unmarried woman whose name came from her having to sleep by the cinders of the fireplace.

Is there direct evidence of ‘excess’ women among Ice Age Europeans? We have a snapshot of one extended family from the Magdalenian period. The Maszycka Cave in Poland has yielded the remains of 3 men, 5 women and 8 children, all apparently from the same family and all apparently dying the same sudden death (Kozlowski & Sachse-Kozlowska, 1995).

References

Baroja, J.C. (1964). The World of Witches, trans. by N. Glendinning, Weidenfeld and Nicolson.

Chadwick, N.K. (1966). The Druids, University of Wales Press.

Czaplicka, M.A. (1969). Aboriginal Siberia. A Study in Social Anthropology, Clarendon.

Dexter, M.R. (1985). Indo-European reflection of virginity and autonomy. Mankind Quarterly, 26, 57-74.

Dumézil, G. (1970). Archaic Roman Religion, University of Chicago Press.

Frost, P. (2006). European hair and eye color - A case of frequency-dependent sexual selection? Evolution and Human Behavior, 27, 85-103

Frost, P. (1994). Geographic distribution of human skin colour: A selective compromise between natural selection and sexual selection? Human Evolution, 9, 141-153.

Gimbutas, M.A. (1982). The Goddesses and Gods of Old Europe, 6,500-3,500 BC, myths and cult images. University of California.

Hallowell, A.I. (1971). The Role of Conjuring in Saulteaux Society, Publications of the Philadelphia Anthropological Society, Vol. 2, Octagon.

Kozlowski, S.K., & Sachse‑Kozlowska, E. (1995). Magdalenian family from the Maszycka Cave. Jahrbuch der Römisch Germanischen Zentral Museums Mainz, 40, 115‑205. Jahrgang 1993, Mainz.

Lefèvre, A. (1900). Les Gaulois - origines et croyances, Librairie C. Reinwald.

Saladin d'Anglure, B. (1988). Penser le "féminin" chamanique. Recherches amérindiennes au Québec, 18(2-3), 19-50.

Strabo. (1923). The Geography of Strabo, Loeb Classical Library, William Heinemann.

Tacitus. (1969). The Histories, Loeb Classical Library, William Heinemann.

Tuesday, November 4, 2008

Vitamin D and skin color. Part II

Is white skin an adaptation to the cereal diet that Europeans have been consuming for the past five to seven thousand years?

When early Europeans switched from hunting and gathering to cereal agriculture, the new diet may have provided less vitamin D (previously obtained from sources such as fatty fish), which the body needs to metabolize calcium and create strong bones. There would thus have been stronger selection for endogenous production of vitamin D in the skin’s tissues. Since such production requires UV-B light and since melanin blocks UV, this selection may have favored a lighter skin color (Sweet, 2002). In addition, cereals seem to increase vitamin D requirements by decreasing calcium absorption and by shortening the half-life of the main blood metabolite of vitamin D (Pettifor, 1994; see Paleodiet).

Undoubtedly, lighter skin allows more UV-B into the skin. As Robins (1991, pp. 60-61) notes, black African skin transmits three to five times less UV than does European skin. But is this a serious constraint on vitamin D production? Apparently not. Blood metabolites of vitamin D show similar increases in Asian, Caucasoid, and Negroid subjects when their skin is either artificially irradiated with UV-B or exposed to natural sunlight from March to October in Birmingham, England (Brazerol et al., 1988; Ellis et al., 1977; Lo et al., 1986; Stamp, 1975; also see discussion in Robins, 1991, pp. 204-205).

The vitamin D hypothesis also implies that European skin turned white quite recently, almost within historical times. Cereal agriculture did not reach northern Europe until some 5,000 years ago and, presumably, the whitening of northern European skin would not have been complete until well into the historical period. Is this a realistic assumption, given the depictions of white-skinned Europeans in early Egyptian art?

References

Brazerol, W.F., McPhee, A.J., Mimouni, F., Specker, B.L., & Tsang, R.C. (1988). Serial ultraviolet B exposure and serum 25 hydroxyvitamin D response in young adult American blacks and whites: no racial differences. Journal of the American College of Nutrition, 7, 111-118.

Ellis, G., Woodhead, J.S., & Cooke, W.T. (1977). Serum-25-hydroxyvitamin-D concentrations in adolescent boys. Lancet, 1, 825-828.

Lo, C.W., Paris, P.W., & Holick, M.F. (1986). Indian and Pakistani immigrants have the capacity as Caucasians to produce vitamin D in response to ultraviolet radiation. American Journal of Clinical Nutrition, 44, 683-685.

Pettifor, J.M. (1994). Privational rickets: a modern perspective. Journal of the Royal Society of Medicine, 87, 723-725.

Robins, A.H. (1991). Biological perspectives on human pigmentation. Cambridge Studies in Biological Anthropology, Cambridge: Cambridge University Press.

Stamp, T.C. (1975). Factors in human vitamin D nutrition and in the production and cure of classical rickets. Proceedings of the Nutrition Society, 34, 119-130.

Sweet, F.W. (2002). The paleo-etiology of human skin tone. Backintyme Essays.

Tuesday, October 28, 2008

Skin color and vitamin D

Differences in human skin color are commonly explained as an adaptive response to solar UV radiation and latitude. The further away from the equator you are, the weaker will be solar UV and the less your skin will need melanin to prevent sunburn and skin cancer.

A variant of this explanation involves vitamin D, which the body needs to make strong bones and which the skin produces with the help of UV-B. The further away from the equator you are, the lighter your skin will be to let enough UV-B into its tissues for vitamin D production. Or so the explanation goes.

To test this hypothesis, Osborne et al. (2008) measured skin color and bone strength in a hundred white and Asian adolescent girls from Hawaii. Skin color was measured at the forehead and the inner arm. Bone strength was measured by section modulus (Z) and bone mineral content (BMC) at the proximal femur. A multiple regression was then performed to investigate the influences of skin color, physical activity, age, ethnicity, developmental age, calcium intake, and lean body mass on Z and BMC. Result: no significant relationship between skin color and bone strength.
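
For readers unfamiliar with the method, here is a minimal sketch of that kind of analysis, using synthetic data and only a subset of the covariates (the variable names are placeholders, not the authors’ dataset); the test is whether skin color adds any predictive power once the covariates are included.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data for ~100 girls; real measurements would go here.
rng = np.random.default_rng(2)
n = 100
df = pd.DataFrame({
    "skin_color": rng.normal(0, 1, n),          # e.g., a melanin index at the inner arm
    "physical_activity": rng.normal(0, 1, n),
    "age": rng.uniform(9, 14, n),
    "ethnicity": rng.choice(["white", "asian"], n),
    "lean_body_mass": rng.normal(35, 5, n),
})
# Bone strength (Z) driven by lean mass and activity, but not by skin color.
df["Z"] = 0.05 * df["lean_body_mass"] + 0.3 * df["physical_activity"] + rng.normal(0, 1, n)

# Multiple regression: a near-zero, non-significant skin_color coefficient
# is the "no relationship" result reported by the study.
formula = "Z ~ skin_color + physical_activity + age + C(ethnicity) + lean_body_mass"
model = smf.ols(formula, data=df).fit()
print(model.summary().tables[1])
```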

Is there, in fact, any hard evidence that humans vary in skin color because they need to maintain the same level of vitamin D production in the face of varying levels of UV-B? Robins (1991, pp. 204-205) found the data to be unconvincing when he reviewed the literature. In particular, there seems to be little relationship between skin color and blood levels of 25-OHD—one of the main circulating metabolites of vitamin D:

The vulnerability of British Asians to rickets and osteomalacia has been ascribed in part to their darker skin colour, but this idea is not upheld by observations that British residents of West Indian (Afro-Caribbean) origin, who have deeper skin pigmentation than the Asians, very rarely manifest clinical rickets … Moreover, artificial irradiation of Asian, Caucasoid and Negroid subjects with UV-B produced similar increases in blood 25-OHD levels irrespective of skin pigmentation … A study under natural conditions in Birmingham, England, revealed comparable increases in 25-OHD levels after the summer sunshine from March to October in groups of Asians, West Indians and Caucasoids … This absence of a blunted 25-OHD response to sunlight in the dark-skinned West Indians at high northerly latitudes (England lies farther north than the entire United States of America except for Alaska) proves that skin colour is not a major contributor to vitamin D deficiency in northern climes.

The higher incidence of rickets in British Asians probably has less to do with their dark color than with their systematic avoidance of sunlight (to remain as light-skinned as possible).

Skin color and natural selection via solar UV

Solar UV seems to be a weak agent of natural selection, be it through sunburn, skin cancer, or vitamin D deficiency. Brace et al. (1999) studied skin color variation in Amerindians, who have inhabited their continents for 12,000-15,000 years, and in Australian Aborigines, who have inhabited theirs for some 50,000 years. Assuming that latitudinal skin-color variation in both groups tracks natural selection by solar UV, their calculations show that this selection would have taken over 100,000 years to create the skin-color difference between black Africans and northern Chinese and about 200,000 years to create the one between black Africans and northern Europeans. Yet modern humans began to spread out of Africa only about 50,000 years ago. Clearly, something other than solar UV has also influenced human variation in skin color ... and one may wonder whether lack of solar UV has played any role, via natural selection, in the extreme whitening of some human populations.

Indeed, people seem to do just fine with a light brown color from the Arctic Circle to the equator. Skeletal remains from pre-contact Amerindian sites show little evidence of rickets or other signs of vitamin D deficiency—even at latitudes where Amerindian skin is much darker than European skin (Robins, 1991, p. 206).

Why, then, are Europeans so fair-skinned when ground-level UV radiation is equally weak across Europe, northern Asia, and North America at all latitudes above 47° N (Jablonski & Chaplin, 2000)? Proponents of the vitamin D hypothesis will point to the Inuit and say that non-Europeans get enough vitamin D at high northerly latitudes from fatty fish, so they don’t need light skin. In actual fact, if we look at the indigenous peoples of northern Asia and North America above 47° N, most of them live far inland and get little vitamin D from their diet. For instance, although the Athapaskans of Canada and Alaska live as far north as the Inuit and are even somewhat darker-skinned, their diet consists largely of meat from land animals (caribou, deer, ptarmigan, etc.). The same may be said for the native peoples of Siberia.

Conversely, fish consumption is high among the coastal peoples of northwestern Europe. Skeletal remains of Danes living 6,000-7,000 years ago have the same carbon isotope profile as those of Greenland Inuit, whose diet is 70-95% of marine origin (Tauber, 1981). So why are Danes so light-skinned despite a diet that has long included fatty fish?

Skin color and sexual selection via male choice

Latitudinal variation in human skin color is largely an artefact of very dark skin in sub-Saharan agricultural peoples and very light skin in northern and eastern Europeans. Elsewhere, the correlation with latitude is much weaker. Indeed, human skin color seems to be more highly correlated with the incidence of polygyny than with latitude (Manning et al., 2004).

This second correlation is especially evident in sub-Saharan Africa, where high-polygyny agriculturalists are visibly darker than low-polygyny hunter-gatherers (i.e., Khoisans, pygmies) although both are equally indigenous. Year-round agriculture allows women to become primary food producers, thereby freeing men to take more wives. Thus, fewer women remain unmated and men are less able to translate their mate-choice criteria into actual mate choice. Such criteria include a preference, widely attested in the African ethnographic literature, for so-called 'red' or 'yellow' women — this being part of a general cross-cultural preference for lighter-skinned women (van den Berghe & Frost, 1986). Less mate choice means weaker sexual selection for light skin in women and, hence, less counterbalancing of natural selection for dark skin in either sex to protect against sunburn and skin cancer. Result: a net increase in selection for dark skin.

Just as weaker sexual selection may explain the unusually dark skin of sub-Saharan agricultural peoples, stronger sexual selection may explain the unusually light skin of northern and eastern Europeans, as well as other highly visible color traits.

Among early modern humans, sexual selection of women varied in intensity along a north-south axis. First, the incidence of polygyny decreased with distance from the equator. The longer the winter, the more it cost a man to provision a second wife and her children, since women could not gather food in winter. Second, the male death rate increased with distance from the equator. Because the land could not support as many game animals per unit of land area, hunting distance increased proportionately and hunters more often encountered mishaps (drowning, falls, cold exposure, etc.) or ran out of food, especially if other food sources were scarce.

Sexual selection of women was strongest where the ratio of unmated women to unmated men was highest. This would have been in the ‘continental Arctic’, a steppe-tundra environment where women depended the most on men for food and where hunting distances were the longest (i.e., long-distance hunting of highly mobile herds with no alternate food sources). Today, this environment is confined to the northern fringes of Eurasia and North America. As late as 10,000 years ago, it reached much further south. This was particularly so in Europe, where the Scandinavian icecap had pushed the continental Arctic down to the plains of northern and eastern Europe (Frost, 2006).
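
As a rough illustration of how hunting mortality and the cost of polygyny jointly determine this ratio, here is a toy calculation with invented numbers (not estimates from the ethnographic or archaeological record).

```python
# Hypothetical cohort: 100 men and 100 women reaching reproductive age.
men, women = 100, 100

def surplus_women(male_mortality, polygyny_rate):
    """Women left without a male provider, given hunting deaths and polygyny."""
    surviving_men = men * (1 - male_mortality)
    wives_supported = surviving_men * (1 + polygyny_rate)  # some men provision two wives
    return max(0, women - wives_supported)

# Tropical setting: low hunting mortality, frequent polygyny.
print(surplus_women(male_mortality=0.05, polygyny_rate=0.30))  # 0 -- no surplus of women

# Steppe-tundra setting: high hunting mortality, rare polygyny.
print(surplus_women(male_mortality=0.30, polygyny_rate=0.05))  # 26.5 -- many unmated women
```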

The same area now corresponds to a zone where skin is almost at the physiological limit of depigmentation and where hair and eye color have diversified into a broad palette of vivid hues. This ‘European exception’ constitutes a major deviation from geographic variation in hair, eye, and skin color (Cavalli-Sforza et al., 1994, pp. 266-267).

References

Brace, C.L., Henneberg, M., & Relethford, J.H. (1999). Skin color as an index of timing in human evolution. American Journal of Physical Anthropology, 108 (supp. 28), 95-96.

Cavalli-Sforza, L.L., Menozzi, P., & Piazza, A. (1994). The History and Geography of Human Genes. Princeton: Princeton University Press.

Frost, P. (2006). European hair and eye color - A case of frequency-dependent sexual selection? Evolution and Human Behavior, 27, 85-103

Jablonski, N.G. & G. Chaplin. (2000). The evolution of human skin coloration, Journal of Human Evolution, 39, 57-106.

Manning, J.T., Bundred, P.E., & Mather, F.M. (2004). Second to fourth digit ratio, sexual selection, and skin colour. Evolution and Human Behavior, 25, 38-50.

Osborne, D.L., C.M. Weaver, L.D. McCabe, G.M. McCabe, R. Novotony, C. Boushey, & D.A. Savaiano. (2008). Assessing the relationship between skin pigmentation and measures of bone strength in adolescent females living in Hawaii. American Journal of Physical Anthropology, 135(S46), 167.

Robins, A.H. (1991). Biological perspectives on human pigmentation. Cambridge Studies in Biological Anthropology, Cambridge: Cambridge University Press.

Tauber, H. (1981). 13C evidence for dietary habits of prehistoric man in Denmark. Nature, 292, 332-333.

van den Berghe, P.L., & Frost, P. (1986). Skin color preference, sexual dimorphism and sexual selection: A case of gene-culture co-evolution? Ethnic and Racial Studies, 9, 87-113.

Tuesday, October 21, 2008

More on gene-culture co-evolution

According to the online magazine Seed, “a growing number of scientists argue that human culture itself has become the foremost agent of biological change.” Much of this change has been surprisingly recent:

In the DNA of a group of 5,000-year-old skeletons from Germany, they discovered no trace of the lactase allele, even though it had originated a good 3,000 years beforehand. Similar tests done on 3,000-year-old skeletons from Ukraine showed a 30 percent frequency of the allele. In the modern populations of both locales, the frequency is around 90 percent.

I thought I was on top of the literature, but this was new to me. It’s even more proof that human evolution did not stop with the advent of Homo sapiens. It has continued … even after the transition from prehistory to history!

The same article also has some thoughts from Bruce Lahn, the evolutionary geneticist who has mapped human variation at two genes, ASPM and microcephalin, that seem to regulate the growth of brain tissue.

Even if Lahn could prove to everyone's satisfaction that ASPM and microcephalin are under selection, whether intelligence is the trait being selected for would be far from a settled question. It could be, as Lahn suggested, that some other mental trait is being selected, or that the activity of ASPM and microcephalin in other parts of the body is what is under selection. More work will certainly be done. But one can speculate with far more confidence about what drove the dramatic increase in intelligence attested by the fossil record: the advent of human culture.

"Intelligence builds on top of intelligence," says Lahn. "[Culture] creates a stringent selection regime for enhanced intelligence. This is a positive feedback loop, I would think." Increasing intelligence increases the complexity of culture, which pressures intelligence levels to rise, which creates a more complex culture, and so on. Culture is not an escape from conditioning environments. It is an environment of a different kind.


Reference

Phelan, B. (2008). How we evolve. A growing number of scientists argue that human culture itself has become the foremost agent of biological change. Seed, posted October 7, 2008.

Wednesday, October 15, 2008

Polygyny or patrilocality?

Have all humans been more or less equally polygynous? The answer seems to be yes if we believe a team of researchers from the University of Arizona. They found that genetic diversity on the X chromosome (two-thirds of whose copies are carried by females) is higher, relative to diversity on the autosomes (chromosomes inherited equally through both sexes), than would be expected with equal numbers of breeding males and females. This held in samples from six populations: Biaka (Central African Republic), Mandenka (Senegal), San (Namibia), Basques (France), Han (China), and Melanesians (Papua New Guinea). Their conclusion: “our results point to a systematic difference between the sexes in the variance in reproductive success; namely, the widespread effects of polygyny in human populations.” In other words, proportionately more women than men have been contributing to the gene pool (Hammer et al., 2008).
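
To see why a relatively high X-to-autosome diversity ratio points to polygyny, here is a minimal sketch using the standard textbook effective-population-size formulas (an idealized neutral model, not the authors’ actual coalescent analysis). With equal numbers of breeding males and females the expected ratio is 0.75; as polygyny shrinks the pool of breeding males, it rises toward a ceiling of 1.125.

```python
def x_to_autosome_ratio(n_males: float, n_females: float) -> float:
    """Expected X/autosome diversity ratio under a standard neutral model."""
    ne_autosome = 4 * n_males * n_females / (n_males + n_females)
    ne_x = 9 * n_males * n_females / (4 * n_males + 2 * n_females)
    return ne_x / ne_autosome

# Equal numbers of breeding males and females: the classic 0.75 expectation.
print(round(x_to_autosome_ratio(1000, 1000), 3))  # 0.75

# Polygyny: half as many breeding males pushes the ratio above 0.75.
print(round(x_to_autosome_ratio(500, 1000), 3))   # 0.844

# Extreme polygyny approaches the theoretical ceiling of 1.125.
print(round(x_to_autosome_ratio(50, 1000), 3))    # 1.074
```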

It’s no surprise that polygyny has existed in the five populations under study. Almost all human populations are polygynous to some degree. The surprise is the relative lack of difference between the European and African subjects. Indeed, according to this study, the Basques have been more polygynous than the Mandenka have been. This is truly counterintuitive. Among the Basques, polygyny is normally limited to its serial form (marriage to a second wife upon the death of the first), as well as occasional cuckoldry. Among the Mandenka, polygyny is the preferred marriage type.

For some people in the blogosphere, this is simply scientific truth and we just have to accept it, however counterintuitive it may seem. There is nonetheless an alternate explanation: patrilocality. In many societies, a wife goes to live in her husband’s community after marriage. This has the effect of inflating the genetic diversity of women in any one community.

These two confounding levels of explanation, polygyny and patrilocality, bedeviled the previous methodology of comparing maternally inherited mtDNA with the paternally inherited Y chromosome. With the new methodology, patrilocality biases the results even more because the Y chromosome is no longer a point of reference.

The University of Arizona researchers do not mention patrilocality in their paper although they do discuss ‘sex-biased forces.’ Under this heading, they tested a model where only females migrate between communities (‘demes’) and at such a rate that panmixia eventually results. They concluded that this factor could not be significant. To my mind, the model is unrealistic, partly because the assumed migration rate is far too high and partly because two demes are used to represent a real world where brides are exchanged among many communities separated by varying genetic distances. To be specific, the more genetically different a bride is from her host community, the further away will be her community of origin, and the lower will be the probability of panmixia between the two.

To the extent that the methodology is biased toward patrilocality effects, any polygyny effects will be less apparent. If this new methodology primarily tracks differences in patrilocality, no major differences would be observable among the different population samples.

In addition, there may be a weak inverse relationship between patrilocality and polygyny. Patrilocality correlates with patriarchy, which correlates with high paternal investment, which inversely correlates with polygyny. If so, the two effects – polygyny and patrilocality – would tend to cancel each other out in the data.

Finally, the burden of proof is on those who propose new methodologies, especially one that produces inconsistent results. The University of Arizona researchers themselves say as much: “Our findings of high levels of diversity on the X chromosome relative to the autosomes are in marked contrast to results of previous studies in a wide range of species including humans.” More importantly, their findings run counter to the comparative literature on human mating systems. To cite only one authority, Pebley and Mbugua (1989) note:

In non-African societies in which polygyny is, or was, socially permissible, only a relatively small fraction of the population is in polygynous marriages. Chamie's (1986) analysis of data for Arab Muslim countries between the 1950s and 1980s shows that only 5 to 12 percent of men in these countries have more than one wife. … Smith and Kunz (1976) report that less than 10 percent of nineteenth-century American Mormon husbands were polygynists. By contrast, throughout most of southern West Africa and western Central Africa, as many as 20 to 50 percent of married men have more than one wife … The frequency is somewhat lower in East and South Africa, although 15 to 30 percent of husbands are reported to be polygynists in Kenya and Tanzania.

References

Hammer, M.F., Mendez, F.L., Cox, M.P., Woerner, A.E., & Wall, J.D. (2008). Sex-biased evolutionary forces shape genomic patterns of human diversity. PLoS Genet, 4(9), e1000202. doi:10.1371/journal.pgen.1000202

Pebley, A. R., & Mbugua, W. (1989). Polygyny and Fertility in Sub-Saharan Africa. In R. J. Lesthaeghe (ed.), Reproduction and Social Organization in Sub-Saharan Africa, Berkeley: University of California Press, pp. 338-364.

Wednesday, October 8, 2008

Ancient reading and writing

The French journal L’Histoire has a special issue on reading and writing in ancient societies. One article, about Mesopotamia, makes several points that support an argument I have made: the invention of writing, especially alphabetical writing, created a strong selection pressure for people who had the rare ability to take dictation or copy written texts with a low error rate and over extended lengths of time (Frost, 2007).

1. In the ancient world, reading and writing required much stamina, concentration, and memorization, far more than is the case with today’s reader-friendly writing systems. This may be seen in the long training needed to make a good scribe.

To learn cuneiform writing, the students followed a specific and very standardized curriculum that has been reconstituted thanks to the thousands of exercises that have been found. Training began with writing of simple signs and then writing of lists of syllables and names. Next came copying of long lexical lists that corresponded to all sorts of realities: names of trades, animals, plants, vases, wooden objects, fabrics, … Then came copying of complex Sumerian ideograms, even though Sumerian had become a dead language, with their pronunciation and their translation in Akkadian. Learning of Sumerian was completed by copying increasingly difficult texts: proverbs and contracts, and then hymns.


2. Scribes were not recruited from the general population. Their profession seems to have been largely family-transmitted, and was recognized as such.

Learning of cuneiform, in the early 2nd millennium, took place in a master’s home and not in an institutional “school”. The tradition was often passed down within families, with scribes training their children.

3. Although writing was generally done by scribes, many more people could read and, if need be, write.

It has long been believed that in ancient Mesopotamia only a very small part of the population knew how to read and write and that these skills were reserved for specialists, i.e., scribes. Several recent studies have called this idea into question and have shown that access to reading, and even writing, was not so uncommon. Some kings, and also the members of their entourage, family, ministers, or generals, as well as merchants, could do without a reader’s services, when necessary, and decipher on their own the letters sent to them. Sometimes, they were even able to take a quill—the sharpened end of a reed—and write their own tablets.

The last point may help us understand a chicken-and-egg question. If reading and writing are associated with specific genetic predispositions, how did people initially manage to read and write? (see previous posts: Decoding ASPM: Part I, Part II, Part III)

The answer is that these predispositions are not necessary for reading and writing. But they do help. Specifically, they help the brain process written characters faster. In this way, natural selection has genetically reinforced an ability that started as a purely cultural innovation.

This may be a recurring pattern in human evolution. Humans initially took on new tasks, like reading and writing, by pushing the envelope of mental plasticity. Then, once these tasks had become established and sufficiently widespread, natural selection favored those individuals who were genetically predisposed to do them better.

The term is ‘gene-culture co-evolution’ and it’s still a novel concept. Until recently, anthropologists thought that human cultural evolution had simply taken over from human genetic evolution, making the latter unnecessary and limiting it to superficial ‘skin-deep’ changes. But recent findings now paint a different picture. Genetic evolution has actually accelerated in our species over the past 40,000 years, and even more over the past 10,000-15,000 years. The advent of agriculture saw the rate increase a hundredfold. In all, natural selection has changed at least 7% of the genome during the existence of Homo sapiens (Hawks et al., 2007; see previous post). And this is a minimal estimate that excludes much variation that may or may not be due to selection. The real figure could be higher. Much higher.

References

Frost, P. (2007). The spread of alphabetical writing may have favored the latest variant of the ASPM gene. Medical Hypotheses, 70, 17-20.

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, and R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences (USA), 104(52), 20753-20758

Lion, B. (2008). Les femmes scribes de Mésopotamie. L’Histoire, no. 334 (septembre), pp. 46-49.

Wednesday, October 1, 2008

Thoughts on the crisis

Historians will argue back and forth over the causes of the current economic crisis, just as they still argue over the causes of the Great Depression. But there is consensus on some points:

  • The U.S. government wanted to increase home ownership among Hispanic and African Americans. Since it was politically unacceptable to impose racial quotas on mortgage money, this could be done only by pressuring banks to relax lending practices, even to the point of eliminating down payments (!). The rules were thus loosened across the board for everyone.
  • The increase in home buyers set off an inflationary spiral that became self-perpetuating. People bought houses with the intention of reselling them at much higher prices.
  • The rising house prices fuelled demand for housing construction. Entire exurbs of McMansions were being built until the crisis began.
  • The Federal Reserve kept all of this going by providing lenders with cheap money.

In sum, the boom kept going as long as enough people could borrow enough money to buy more and more homes at higher and higher prices.

It couldn’t go on forever. On the one hand, wages have not kept pace with housing prices, not to mention the rising cost of oil and food. On the other, mortgages were being given to people who were, by any honest measure, insolvent.

Will the crisis be resolved by the proposed $700 billion bailout? This one might be resolved, at least for now. But the same kind of speculative bubble could happen elsewhere in the economy for similar reasons. The U.S. economy is increasingly geared to creating illusory value.

In all fairness, the bailout may buy time to dismantle the bubble economy before more damage is done. The U.S. government could stop badgering lenders to relax their lending criteria. The Federal Reserve could stop providing cheap money. The speculators and deadbeats could be stripped of their ill-gotten gains.

It won’t happen.

What then? A full-blown recession will likely be averted for another two years. By then, any further bailout would reach astronomical figures and simply drag down those who were wise enough to shun speculation and improvidence.

When I was an undergrad, I remember reading a Marxist book on economics. Among other things, it argued that the boom-bust cycle is inevitable. Once a boom has set in, decision-making becomes less and less optimal. Market discipline slackens and incompetence increasingly goes unpunished. Even if you do get fired or if your company goes under, you can always get rehired elsewhere. Many people also take advantage of the boom to make money through pure speculation, and they will do their utmost to keep the boom going until speculation has become the main driving force. Why not? It’s their bread and butter … or rather their cocaine and cognac.

And so, the longer the boom goes on, the greater the load of inefficiency that the economy has to bear. Eventually a crisis becomes inevitable and even desirable … to clean all the gunk out of the system.

But there is another wrinkle to the current boom-bust cycle. It is playing out against the backdrop of a worsening commodity crisis. Demand is increasing faster than supply for the basics of life, particularly oil, food, and water. Resource-rich countries will be all right. But things will be less rosy for areas that have a high ratio of people to resources, like the Eastern U.S., California, Western Europe, and many areas of the Third World.

Nonetheless, many of these same areas have embarked on a program of aggressive population growth through immigration. The U.S. is projected to grow by 135 million in just 42 years—a 44% increase (Camarota, 2008). The United Kingdom is slated to grow by 16 million in 50 years—a 26% increase (United Kingdom – Wikipedia).

This situation might be manageable if the immigrants were going into export sectors that can earn foreign exchange and pay for increased imports of oil, food, and water (yes, fresh water will become an item of international trade). But they aren’t. For the most part, they are being brought in to serve the needs of agribusiness, slaughterhouses, landscapers, homebuilders, hotel and restaurant services, and so forth.

Yes, we are living in interesting times.

Reference

Camarota, S.A. (2008). How many Americans? The Washington Post, September 2, 2008, p. A15.

Wednesday, September 24, 2008

Common genetic variants and intelligence

The New York Times has run an article on genetic research by Dr. David Goldstein of Duke University. His main finding is that most human diseases with a genetic basis are due not to common alleles but to rare alleles that natural selection has not yet eliminated. This seems to argue against the common variant theory of disease, i.e., the idea that natural selection has caused many modern diseases by favoring genetic variants that keep us going as long as we can reproduce and then let us fall apart once we’re reproductively useless.

Dr. Goldstein has also found that common genetic variants do not explain variation in IQ, at least not among different human populations:

He says he thinks that no significant genetic differences will be found between races because of his belief in the efficiency of natural selection. Just as selection turns out to have pruned away most disease-causing variants, it has also maximized human cognitive capacities because these are so critical to survival. “My best guess is that human intelligence was always a helpful thing in most places and times and we have all been under strong selection to be as bright as we can be,” he said.

This is more than just a guess, however. As part of a project on schizophrenia, Dr. Goldstein has done a genomewide association study on 2,000 volunteers of all races who were put through cognitive tests. “We have looked at the effect of common variation on cognition, and there is nothing,” Dr. Goldstein said, meaning that he can find no common genetic variants that affect intelligence. His view is that intelligence was developed early in human evolutionary history and was then standardized.

The finding itself is not surprising. The human brain is a complex organ with close to a hundred billion nerve cells. Clearly, a lot of genes are brain-related. If natural selection has caused one such gene to vary from one human population to the next, the same selection pressure has probably caused others to vary as well. Thus, in the event that human populations differ genetically in cognitive performance, the overall difference should reflect an accumulation of small differences at many gene sites, each often too small to measure on its own.

But what about g? Doesn’t g imply that one gene accounts for most genetic variation in intelligence? Perhaps. Alternatively, g may correspond to a large number of brain genes that co-vary because they lie next to each other on the genome. In any case, the chances are not good that we will find g by trolling through the common variants discovered so far. The genome is a big place, and such a search would be like looking for a needle in a haystack.
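To see why a sample of 2,000 would turn up nothing, consider a rough power calculation (a sketch of my own, with assumed effect sizes rather than figures from Dr. Goldstein’s study). If each common variant explains only a sliver of the variance in test scores, the power to detect any single one at genome-wide significance is close to nil:

# Approximate power of a 1-df association test at genome-wide significance,
# for common variants of assumed (small) effect. Illustrative only.
from scipy.stats import chi2, ncx2

n = 2000                                   # sample size comparable to the study above
alpha = 5e-8                               # conventional genome-wide threshold
crit = chi2.ppf(1 - alpha, df=1)           # critical value of the test statistic

for r2 in (0.005, 0.002, 0.0005):          # assumed variance explained per variant
    ncp = n * r2 / (1 - r2)                # noncentrality parameter
    power = ncx2.sf(crit, df=1, nc=ncp)
    print(f"variant explaining {r2:.2%} of variance -> power ~ {power:.3f}")

A null result of this kind is thus consistent with many common variants each nudging cognitive performance slightly, not only with their absence.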

Tuesday, September 16, 2008

What is g anyway?

The term g refers to general intelligence: the common property reflected in the similar scores that people obtain across different cognitive tasks. But just what is this property that makes some people generally smarter than others? There have been attempts to identify g with a specific brain characteristic. Unfortunately, as Anderson (1995) notes:

In general, these attempts have all been failures. It has always been possible to show a disassociation between any putative single psychological process and measures of general intelligence. For example, while it is possible to show correlations between g and memory measures, it is also possible to show normal intelligence in people with severe amnesia, thus eliminating the possible equating of memory and intelligence. While vocabulary measures correlate with IQ, subjects with global aphasia can have normal IQ and subjects with mental retardation can show semantic proficiency and precocity.

Whatever g is, it seems to be some general property that is not eliminated by damage to any one brain area. Miller (1994) suggested that g might correlate with myelination, i.e., the fatty sheathing of nerve fibers. More myelin means faster nerve conduction, quicker reaction time and, ergo, higher intelligence. Regrettably, this hypothesis no longer seems tenable:

While IQ correlates with reaction time (RT) encouraging the hypothesis that neuron conduction velocity accounts for the individual variation in IQ, there is also discouraging information. Correcting IQ-RT correlations for neural conduction velocity does not diminish the strength of the relationship and neural conduction velocity does not correlate with RT. (Anderson, 1995).

Barrett and Eysenck (1993) have also failed to find a significant correlation between measures of nerve conduction velocity and IQ.
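The logic of that correction is worth spelling out. If conduction velocity really were the common factor behind both IQ and reaction time, then statistically removing its influence should shrink the IQ-RT correlation toward zero. A toy simulation (my own numbers, not data from any of the studies cited) shows what that shrinkage would look like:

# Toy illustration of partialling out a hypothetical mediator (NCV).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
ncv = rng.normal(size=n)                            # hypothetical conduction velocity
iq = 0.6 * ncv + rng.normal(scale=0.8, size=n)      # IQ partly driven by NCV
rt = -0.6 * ncv + rng.normal(scale=0.8, size=n)     # faster conduction, shorter RT

def partial_corr(x, y, z):
    # correlation of x and y after removing the linear effect of z
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

print("raw IQ-RT correlation:  ", round(np.corrcoef(iq, rt)[0, 1], 2))
print("controlling for NCV:    ", round(partial_corr(iq, rt, ncv), 2))

In this simulated world the partial correlation collapses toward zero. The absence of any such shrinkage in the real data, together with the lack of an NCV-RT correlation, is what makes the conduction-velocity account hard to sustain.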

In his review of the literature, Anderson (1995) discounts other candidates: neuron number (no empirical support); cerebral cortical cell number (does not correlate with problem-solving performance in rats); and volume of the cerebellar granule cell layer (does not correlate with attention to novelty in rats). He concludes that the likeliest candidate is the range and extent of neuronal processes that alter brain connectivity:

Dendritic arborization has been correlated to educational attainment and been shown to be more complex in brain regions critical for language. The molecular layer volume of the cerebellum, which correlated with attention to novelty in rats, is largely composed of Purkinje cell dendritic arborizations. Synapse number correlates with dementia severity in Alzheimer disease. Further, a change in connectivity can explain the IQ-RT correlation and the brain size-IQ correlation. (Anderson, 1995)

Thatcher et al. (2005) come to a similar conclusion in their study using EEG measurements to predict performance on the Wechsler intelligence test:

… it is hypothesized that general intelligence is positively correlated with faster processing times in frontal connections as reflected by shorter phase delays. Simultaneously, intelligence is positively related to increased differentiation in widespread local networks or local assemblies of cells as reflected by reduced EEG coherence and longer EEG phase delays, especially in local posterior and temporal lobe relations. The findings are consistent with a ‘network binding’ model in which intelligence is a function of the efficiency by which the frontal lobes orchestrate posterior and temporal neural resources.

Finally, we should not assume that IQ captures all variation in cognitive performance. In general, IQ tests involve answering a series of discrete questions over a limited span of time. Yet this kind of cognitive task is only a subset of all possible tasks that confront the human mind.

For instance, when something puzzles me, I may think about it over several days or longer. It will often sit in the back of my mind until it is re-activated by a piece of relevant information. Sometimes, I will get up in the middle of the night to jot down a possible answer. Then there are the lengthy, monotonous tasks: driving non-stop to Montreal, transcribing old hard copy into an electronic file, keeping track of different ‘things-to-do’ over the course of a day, and so on.

How well does a one-hour test measure performance on such tasks?

References

Anderson, B. (1995). G explained. Medical Hypotheses, 45, 602-604.

Barrett, P.T., & Eysenck, H.J. (1993). Sensory nerve conduction and intelligence: A replication. Personality and Individual Differences, 15, 249-260.

Miller, E. (1994). Intelligence and brain myelination: A hypothesis. Personality and Individual Differences, 17, 803-833.

Thatcher, R.W., North, D., & Biver, C. (2005). EEG and intelligence: Relations between EEG coherence, EEG phase delay and power. Clinical Neurophysiology, 116, 2129-2141.

Wednesday, September 10, 2008

Decoding ASPM: Part III

Since its discovery two years ago, the new ASPM variant has vanished down the memory hole. Why the hasty burial? One reason is linked to current views about the human mind. The dominant view, at least among psychologists, is that cognitive ability varies similarly among people for all aspects of mental performance, so much so that this variability seems to be explained by one factor alone, called general intelligence or g.

Thus, when Philippe Rushton and his associates studied ASPM, they looked to see whether its variants co-varied with indices of general intelligence, either IQ or brain size. When nothing turned up, they concluded that any relationship to mental ability must be a weak one (Rushton et al., 2007).

In an e-mail, Philippe Rushton went on to explain:


… these [IQ] tests are highly predictive of work performance, which is often evaluated over long time periods and likely gives plenty of room for excellence from the unmeasured qualities you expect are important. For example, Salgado, Anderson, Moscoso, Bertua, and Fruyt (2003) demonstrated the international generalizability of GMA across 10 member countries of the European Community (EC), thus contradicting the view that criterion-related validity is moderated by differences in a nation's culture, religion, language, socioeconomic level, or employment legislation. They found scores predicted job performance ratings 0.62 and training success 0.54.


Yes, these are high correlations, but they still leave a lot of variability unexplained. Moreover, in the case of ASPM, we may be looking at something that improves mental performance on a very specific task—one that most people no longer engage in. How often do people take dictation nowadays?

And there is evidence that g is not everything. As Steve Sailer notes:

g, like any successful reductionist theory, has its limits. Males and females, while similar on mean g (but not on the standard deviation of g: guys predominate among both eggheads and knuckleheads), differ on several specific cognitive talents. Men, Jensen reports in passing, tend to be better at visual-spatial skills (especially at mentally rotating 3-d objects) and at mathematical reasoning. Women are generally superior at short-term memory, perceptual speed, and verbal fluency. Since the male sex is stronger at logically manipulating objects, while the female sex prevails at social awareness, that explains why most nerds are male, while most "berms" (anti-nerds adept at interpersonal skills and fashion) are female. Beyond cognition, there are other profound sex dissimilarities in personality, motivation, and physiology.

Clearly, if the new ASPM variant does have an effect on the brain, it cannot be a general one that influences all brain tissues. This was already being pointed out at the time of its discovery by anthropologist John Hawks:

Nobody currently knows what these alleles may have done. It seems likely that people with the allele have some sort of cognitive advantage, which ultimately translates into a reproductive benefit. This advantage is probably not associated with greater brain sizes, because the average brain size appears not to have changed appreciably during the past 30,000 years.

So what is going on now? Nothing really. An article came out a year ago about a possible relationship between the old ASPM variant and tonal languages like Chinese (Dediu & Ladd, 2007). But this was the sort of blackboard musing that I like to indulge in. Currently, as far as I know, no lab research is being done.

References

Dediu, D., & Ladd, D.R. (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. Proc. Natl. Acad. Sci. U.S.A., 104(26), 10944-10949. doi:10.1073/pnas.0610848104.

Rushton, J.P., Vernon, P.A., & Bons, T.A. (2007). No evidence that polymorphisms of brain regulator genes Microcephalin and ASPM are associated with general mental ability, head circumference or altruism. Biology Letters, 3, 157-160.


Salgado, J. F., Anderson, N., Moscoso, S., Bertua, C., & Fruyt, F. D. (2003). International validity generalization of GMA and cognitive abilities: A European community meta-analysis. Personnel Psychology, 56, 573-605.

Wednesday, September 3, 2008

Decoding ASPM: Part II

In my last post, I reviewed the debate over ASPM, a gene implicated in regulation of brain growth. A new ASPM variant arose some 6,000 years ago in our species, apparently somewhere in the Middle East. It then spread outward, becoming much more prevalent in the Middle East and Europe than in East Asia. This temporal and spatial spread seems to match that of alphabetical writing, specifically the emergence of literate, scribal classes who had to process alphabetical script under premodern conditions (continuous text with little or no punctuation, real-time stenography, absence of automated assistance for publishing or copying, etc.). Since this subpopulation enjoyed prestige and apparently high reproductive success, it may have been a vector for the new ASPM variant (Frost, 2007).

All of this assumes the existence of a heritable cognitive ability that is specific to reading and writing of alphabetical script. This assumption runs counter to the view, held by many psychologists, that human cognition shows heritable variation only for general intelligence (commonly referred to as g). This view was key to ending debate over human variation in ASPM. As Philippe Rushton and other psychologists have shown, the new ASPM variant does not improve performance on standard IQ tests. Nor does it correlate with increased brain size.

Recently, however, it has been found that ASPM variants do not correlate with brain size in other primate species. Instead, they seem to regulate the growth of specific brain tissues, especially within the cerebral cortex. Parallel to this is another finding that alphabetical script processing is localized within a specific region of the brain, the ‘Visual Word Form Area’ (VWFA):

Brain imaging studies reliably localize a region of visual cortex that is especially responsive to visual words. This brain specialization is essential to rapid reading ability because it enhances perception of words by becoming specifically tuned to recurring properties of a writing system. The origin of this specialization poses a challenge for evolutionary accounts involving innate mechanisms for functional brain organization. (McCandliss et al., 2003).

Psychological, neuropsychological, and neuroimaging data converge to suggest that the human brain of literate subjects contains specialized mechanisms for visual word recognition (functional specialization), which map in a systematic way onto the properties of a cortical subregion of the left posterior occipitotemporal sulcus (reproducible localization).

… Such observations predict the existence of highly specialized but patchy and distributed neuronal populations coding for alphabetic stimuli at the single-neuron level. The intermingling of such neurons with others coding for objects or faces would translate into a partial regional selectivity at the single-voxel level, which is all that we can presently measure with PET or fMRI. (Cohen & Dehaene, 2004)


One puzzling issue remains: why is there a reproducible cortical site responsive to visual words? Reading is a recent cultural activity of the human species. The 5400 years that have elapsed since its invention are too short to permit the evolution of dedicated biological mechanisms for learning to read. (Cohen & Dehaene, 2004).

Indeed, how could the VWFA have arisen so recently and over such a short time? This is a puzzle only if we assume that evolution creates new features from scratch. But this is not how evolution usually works. Typically, new features evolve through a process of tinkering with old features that may have served some other purpose. And such tinkering may occur over less than a dozen generations. As Harpending and Cochran (2002) note, domestic dog breeds display much diversity in cognitive and behavioral characteristics, yet this diversity has arisen largely within the time span of human history.

A second puzzle is of the chicken and egg sort. If the VWFA is crucial for reading and writing, how did humans first learn to read and write? Some light has been shed on this puzzle by lesion studies, particularly one where part of the VWFA was surgically removed:


… our patient presented a clear-cut reading impairment following surgery, while his performance remained flawless in object recognition and naming, face processing, and general language abilities.


… Furthermore, the deficit was still present 6 months after surgery, albeit with some degree of functional compensation. This confirms that the VWFA is indeed indispensable for expert reading.

(Gaillard et al., 2006)

So reading and writing are still possible, albeit laboriously, without a VWFA. This brain area did not arise to make reading and writing possible. It simply arose to make these tasks easier.

But what about human populations that have never used alphabetical script? This is notably true for the Chinese, who have long had a logographic script:

As the most widely used logographic script, Chinese characters have thousands of diverse word forms and differ markedly from alphabetic scripts in orthography. In Chinese characters, there is a distinctive square-combined configuration within each character and no obvious letter-sound correspondence. … Although some phonological information is encoded in some characters, this information is not consistent and is not at a level of correspondences between phonemes and letters. … In alphabetic stimuli, it is clear that each individual letter is the basic unit of words, so how different letters are combined is critical in defining orthographic regularities. In Chinese, however, it is still not clear what the basic processing units really are. (Liu et al., 2008)

When the brains of Chinese subjects were studied by functional MRI, Chinese characters seemed to be processed in the same region of the brain (the VWFA) that other populations use to process alphabetical characters. The Chinese subjects, however, seemed to be using other regions as well:

These results indicated that in addition to the VWFA being located in the left middle fusiform gyrus (BA 37), the left middle frontal cortex (BA 9) might also be an indispensable area for orthographic processing of Chinese characters, as opposed to alphabetic orthographies. (Liu et al., 2008)

For human populations that use alphabetical scripts, the VWFA seems to be much more of a bottleneck for text processing. This finding seems to dovetail with other evidence suggesting that logographic script evokes meaning more directly and does not impose the same set of cognitive demands (Frost, 2007).

References

Cohen, L., & Dehaene, S. (2004). Specialization within the ventral stream: the case for the visual word form area. Letter to the Editor. NeuroImage, 22, 466-476.

Frost, P. (2007). The spread of alphabetical writing may have favored the latest variant of the ASPM gene. Medical Hypotheses, 70, 17-20.

Gaillard, R., Naccache, L., Pinel, P., Clémenceau, S., Volle, E., Hasboun, D., Dupont, S., Baulac, M., Dehaene, S., Adam, C., & Cohen, L. (2006). Direct intracranial, fMRI, and lesion evidence for the causal role of left inferotemporal cortex in reading. Neuron, 50, 191-204.

Harpending, H., & Cochran, G. (2002). In our genes. Proceedings of the National Academy of Sciences, 99(1), 10-12.

Liu, C., Zhang, W-T., Tang, Y-Y., Mai, X-Q., Chen, H-C., Tardif, T., & Luo, Y-J. (2008). The visual word form area: evidence from an fMRI study of implicit processing of Chinese characters. NeuroImage, 40, 1350-1361.

McCandliss, B.D., Cohen, L., & Dehaene, S. (2003). The visual word form area: expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences, 7, 293-299.

Wednesday, August 27, 2008

Decoding the ASPM puzzle

Remember the kerfuffle over ASPM two years ago? ASPM is a gene that regulates brain growth. It evolved considerably in the primate lineage leading to humans and continued to evolve even after the emergence of modern humans, with the latest variant arising about 6000 years ago somewhere in the Middle East. The new variant then proliferated within and outside this region, reaching higher incidences in the Middle East (37–52%) and in Europe (38–50%) than in East Asia (0–25%).

Interest died down when it was found that this variant, despite its apparent selective advantage, does not seem to improve cognitive performance, at least not on standard IQ tests (Mekel-Bobrov et al., 2007; Rushton et al., 2007). Nor do ASPM variants correlate with variation in human brain size (Rushton et al., 2007).

Now, new light has been shed on this puzzle by a paper on ASPM in other primates. This gene was initially linked to overall brain size because non-functioning variants cause microcephaly in humans. A comparative study of primate species, however, has shown that evolution of ASPM does not correlate with major changes in whole brain or cerebellum size:


Particularly striking is the result that only major changes of cerebral cortex size and not major changes in whole brain or cerebellum size are associated with positive selection in ASPM. This is consistent with an expression report indicating that ASPM’s expression is limited to the cerebral cortex of the brain (Bond et al. 2002). Our findings stand in contrast to recent null findings correlating ASPM genotypes with human brain size variation. Those studies used the relatively imprecise phenotypic trait of whole brain instead of cerebral cortex size (Rushton, Vernon, and Bons 2006; Woods et al. 2006; Thimpson et al. 2007). Although previous studies have shown that parts of the brain scale strongly with one another and especially with whole brain (e.g., Finlay and Darlington 1995), evidence here suggests that different brain parts still have their own evolutionary and functional differentiation with unique genetic bases. (Ali & Meier, 2008)

This is a point I raised a year ago. If we look at how the new ASPM variant spread geographically and temporally, it seems to match a very specific mental ability, and not general intelligence:

At present, we can only say that it [the new variant] probably assists performance on a task that exhibited the same geographic expansion from a Middle Eastern origin roughly 6000 years ago. The closest match seems to be the invention of alphabetical writing, specifically the task of transcribing speech and copying texts into alphabetical script. Though more easily learned than ideographs, alphabetical characters place higher demands on mental processing, especially under premodern conditions (continuous text with little or no punctuation, real-time stenography, absence of automated assistance for publishing or copying, etc.).

…How well are these tasks evaluated by standard IQ tests? Although most tests involve reading, transcribing, and taking dictation, these abilities are not evaluated over long, uninterrupted time periods. If we look at the two studies that discounted a cognitive advantage for the new ASPM variant, neither tested its participants for longer than 82 min and the tests themselves involved a mix of written and verbal tasks.

It seems premature to conclude that the new ASPM variant is unrelated to cognitive functioning. Current IQ tests do not adequately evaluate mental processing of alphabetical writing, particularly under premodern conditions. Yet this is the cognitive task whose origin and spread most closely coincide with those of the new ASPM variant in human populations. It is also a demanding task that only a fraction of the population could perform in antiquity, in exchange for privileged status and probably superior reproductive opportunities. (Frost, 2007)

Is this cognitive task localized in a specific part of the brain? There seems to be evidence for such localization … which I will review in my next post.

References

Ali, F., & Meier, R. (2008). Positive selection in ASPM is correlated with cerebral cortex evolution across primates but not with whole brain size. Molecular Biology and Evolution, advance access.

Frost, P. (2007). The spread of alphabetical writing may have favored the latest variant of the ASPM gene. Medical Hypotheses, 70, 17-20.

Mekel-Bobrov, N., Posthuma, D., Gilbert, S.L., et al. (2007). The ongoing adaptive evolution of ASPM and Microcephalin is not explained by increased intelligence. Human Molecular Genetics, 16, 600-608.

Rushton, J.P., Vernon, P.A., & Bons, T.A. (2007). No evidence that polymorphisms of brain regulator genes Microcephalin and ASPM are associated with general mental ability, head circumference or altruism. Biology Letters, 3, 157-160.

Tuesday, August 19, 2008

The European pattern of skin, hair, and eye color

The following is an executive summary for one of my book proposals. The book itself will probably take me a year to write and I’m sure I’ll have to update the manuscript continually as new information comes in. Comments are welcome.

Humans look strikingly different in Europe, particularly within a zone centered on the East Baltic and covering the north and the east. Here, skin is unusually white. Hair is not only black but also brown, flaxen, golden, or red. Eyes are not only brown but also blue, gray, hazel, or green.

This pattern also stands out chronologically. It arose very late during the time of modern humans and long after their arrival in Europe some 35,000 years ago. Such is the conclusion now emerging from genetic studies of skin, hair, and eye color.

Europeans owe their light skin to alleles that go back only c. 11,000 years at one gene and 12,000–3,000 years at another. As a Science journalist remarked: “the implication is that our European ancestors were brown-skinned for tens of thousands of years.” They were also uniformly black-haired and brown-eyed. Then, just as recently, their hair and eye color diversified as new alleles began to proliferate at two other genes.

The challenge now will be to narrow the time window. If these changes happened after 7,000 BP, the cause might be northern Europe’s shift from hunting and gathering to cereal agriculture. The change in diet may have reduced the intake of vitamin D, thus favoring the survival of paler Europeans whose skin could synthesize more of this vitamin.
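How fast could such selection work? A back-of-the-envelope calculation (my own illustrative assumptions: simple genic selection, a 25-year generation time, and an allele rising from 1% to 80%) suggests the post-agricultural window is tight but not impossibly so:

# Generations needed for a favored allele to rise from p0 to p1 under genic
# selection, where the allele's odds grow by a factor of (1 + s) per generation.
import math

def generations_needed(p0, p1, s):
    odds0, odds1 = p0 / (1 - p0), p1 / (1 - p1)
    return math.log(odds1 / odds0) / math.log(1 + s)

for s in (0.02, 0.05, 0.10):               # assumed selection coefficients
    g = generations_needed(0.01, 0.80, s)
    print(f"s = {s:.2f}: ~{g:.0f} generations, ~{25 * g:,.0f} years")

Selection on the order of five percent per generation would do the job in roughly three thousand years; selection much weaker than two percent would strain the 7,000-year window.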

This theory explains how European skin could have turned pale almost at the dawn of history. It leaves unexplained, however, why selection for lighter skin would have multiplied the number and variety of alleles for hair or eye color, especially when so many have little effect on skin color.

If these changes had happened earlier, before 10,000 BP, the cause might involve the last ice age. At that time, the tundra ecozone ran further south in Europe than in Asia, having been pushed down on to the plains of northern and eastern Europe by the Scandinavian icecap. The lower, sunnier latitudes created an unusually bioproductive tundra that could support large herds of game animals and, in turn, a substantial human population—but at the cost of a recurring shortage of male mates. Among present-day hunter-gatherers, similar environments raise the male death rate because the men must cover long distances while hunting migratory herds. The man shortage cannot be offset by more polygyny, since only a very able hunter can provide for a second wife (tundra offers women few opportunities for food gathering, thus reducing their self-reliance in feeding themselves and their children). With fewer men altogether and fewer being polygynous, the sex ratio is skewed toward a female surplus.

In this buyer’s market, men will select those women who look the most feminine. Since human skin color is sexually dimorphic (women are the ‘fair sex’), this sexual selection would eventually whiten the entire population. Where pigmentation has no female-specific form, as with hair and eye color, sexual selection would favor women with color variants that stand out by their novelty, the outcome being an increasingly diverse polymorphism.
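The novelty effect can be illustrated with a toy simulation (my own sketch, not a model from the proposed book): give carriers of rare color variants a small mating advantage and new alleles accumulate, whereas under neutral drift they are usually lost.

# Haploid toy model: new color variants arise by mutation; under "novelty"
# mate choice, rarer variants get a reproductive edge. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(1)

def simulate(pop_size=2000, generations=400, mutation_rate=5e-4, novelty=True):
    alleles = np.zeros(pop_size, dtype=int)          # everyone starts with allele 0
    next_id = 1
    for _ in range(generations):
        for i in np.nonzero(rng.random(pop_size) < mutation_rate)[0]:
            alleles[i] = next_id                     # a brand-new color variant appears
            next_id += 1
        if novelty:
            counts = np.bincount(alleles)
            weights = 1.0 / counts[alleles]          # rarer allele, higher mating weight
            probs = weights / weights.sum()
        else:
            probs = None                             # neutral drift
        alleles = rng.choice(alleles, size=pop_size, replace=True, p=probs)
    return len(np.unique(alleles))

print("variants surviving with rare-variant advantage:", simulate(novelty=True))
print("variants surviving under neutral drift:        ", simulate(novelty=False))

The parameter values are arbitrary; the point is simply that a mating advantage for rare variants preserves color diversity that drift alone would tend to erase.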

Wednesday, August 13, 2008

Cavalli-Sforza's about-face

The renowned geneticist Luca Cavalli-Sforza is identified with the position that human races do not exist. In his opus The History and Geography of Human Genes, he included a chapter on the ‘failure of the race concept’ and declared that “the classification into races has proved to be a futile exercise".

This position gets much play in the media. An article published in The Economist tells us that the work of Cavalli-Sforza "challenges the assumption that there are significant genetic differences between human races, and indeed, the idea that 'race' has any useful biological meaning at all." This is also how he is seen in an article in The Stanford Magazine:

And he has received another kind of recognition —stacks of hate mail from white supremacists —for his well-publicized insistence that DNA studies can serve as an antidote to racism because they reveal an underlying genetic unity that cuts across racial groupings, making race a scientifically meaningless concept.


Yet not everyone believes he is a convinced antiracist:

How is it, then, that Cavalli-Sforza now finds himself accused of cultural insensitivity, neocolonialism and "biopiracy"? Late in his career, as he struggles to organize his most ambitious project yet -- a sweeping survey of human genetic diversity -- why are some people calling him a racist?


Perhaps because some people feel he is too inconsistent. On this issue, there are really two Cavalli-Sforzas: the one who denounced the race concept in 1994 … and the one who upheld it in 1976:

Today, all continents of the world are inhabited by representatives of the three major human races: African, Caucasian and Oriental. The proportions of the three groups still differ considerably in the various countries, and the migrations are too recent for social barriers between racial groups to have disappeared. The trend, however, seems to be in the direction of greater admixture.

On the most general level, geographic and ecological boundaries (which acted as partial barriers to expansion and migration) help to distinguish three major racial groups: Africans, Caucasians, and a highly heterogeneous group that we may call "Easterners". The Easterners include subgroups that were separated in various older classifications, such as American Natives (American Indians) and Orientals (Chinese, Japanese, Koreans). Some regard Australian aborigines as a separate race, but they do not differ much from Melanesians. From the Melanesians, we can trace a sequence of relatively gradual changes through the transition to Indonesians, then to Southeast Asians, and on to East Asians. American Natives and Eskimos probably both came from a related Northeast Asian stock from (or through) Siberia into North America. Eskimos, however, came much later than American Indians, and they subsequently expanded further eastward to Greenland.

The African continent contains, in the north and east, populations that have various degrees of admixture with Caucasians by all criteria of analysis. In the western, central and southern parts of the continent, Africans are relatively homogeneous - although some isolated groups of hunter-gatherers (like Pygmies and Bushmen) show cultural and physical peculiarities that suggest they should be considered somewhat separately. In fact, the Pygmies at least have attributes that indicate they may be "proto-African" groups — populations that have been the least altered by more recent events.

We tend to side with those taxonomists who prefer to group the human species into a few large racial groups (such taxonomists have been called "lumpers"). Others ("splitters") prefer to distinguish a large number of groups differing in relatively subtle ways. (Bodmer & Cavalli-Sforza, 1976, pp. 563-572)


Even later, particularly in his journal articles, one can still find race-based analysis:

The first split in the phylogenetic tree separates Africans from non-Africans, and the second separates two major clusters, one corresponding to Caucasoids, East Asians, Arctic populations, and American natives, and the other to Southeast Asians (mainland and insular), Pacific islanders, and New Guineans and Australians. Average genetic distances between the most important clusters are proportional to archaeological separation times. (Cavalli-Sforza et al., 1988)

What happened after 1976 to change Cavalli-Sforza’s views on race? Very little in terms of data. Four years earlier, the case against the race concept had already been made in a paper by Harvard geneticist Richard Lewontin. Frank Livingstone, an anthropologist, had even earlier presented similar arguments in his 1962 paper: “On the non-existence of human races”. Both papers had been published in leading journals and were still being widely discussed when Cavalli-Sforza co-authored a genetics textbook in 1976. Evidently, he was not convinced.

At least not then. As one anthropologist told me: “I don't think our perception of the general patterns of genetic variation changed much from '76 to '94, but the intellectual climate that geneticists operate in sure did.”

References

Anon. (2000). The human genome survey. The Economist, 1 July 2000, p. 11.

Bodmer, W.F. and L.L. Cavalli-Sforza. (1976). Genetics, Evolution, and Man. WH Freeman and Company, San Francisco. pp 563-572.

Cavalli-Sforza, L.L., Menozzi, P. & Piazza, A. (1994). The History and Geography of Human Genes. Princeton: Princeton University Press.

Cavalli-Sforza, L.L., Piazza, A., Menozzi, P., and Mountain, J. (1988). Reconstruction of human evolution: Bringing together genetic, archaeological, and linguistic data. Proc. Natl. Acad. Sci. USA, 85, 6002-6006.

Leslie, M. (1999). The History of Everyone and Everything. The Stanford Magazine. May-June.

Lewontin, R.C. (1972). The apportionment of human diversity. Evolutionary Biology, 6, 381-398.

Livingstone, F.B. (1962). On the non-existence of human races. Current Anthropology, 3, 279-281.