Miranda Wrongs: Reading Too Much into the Genome
“We meant it for the best.” – Dr. Caron speaking of the Miranda settlers, in Whedon’s Serenity
When the sequence of the human genome was declared essentially complete in 2003, all biologists (except perhaps Craig Venter) heaved a sigh of gladness that the data were all on one website, publicly available, well-annotated and carefully cross-linked. Some may have hoisted a glass of champagne. Then they went back to their benches. They knew, if nobody else did, that the work was just beginning. Having the sequence was the equivalent of sounding out the text of an alphabet whose meaning was still undeciphered. For the linguistically inclined, think of Etruscan.
The media, with a few laudable exceptions, touted this as “we now know how genes work,” and many science fiction authors duly incorporated it into their opuses. So did people with plans for improving humanity: there are initiatives that seriously propose that such attributes as virtue, intelligence, specific physical and mental abilities or, for that matter, a “happy personality” can (and should) be tweaked by selection in utero or by engineering of the genes presumed to determine these traits. The usual parties put forth the predictable pro and con arguments, and many articles get published in journals, magazines and blogs.
This is excellent for the career prospects and bank accounts of philosophers, political scientists, biotech entrepreneurs, politicians and would-be prophets. However, biologists know that all this is a parlor game equivalent to determining the number of angels dancing on the head of a pin. The reason for this is simple: there are no genes for virtue, intelligence, happiness or any complex behavioral trait. This becomes obvious from the number of human genes: the final count hovers around 20,000-25,000, less than twice as many as in worms and flies. It’s also obvious from the fact that cloned animals don’t look and act like their prototypes, Cc (the first cloned cat, whose coat pattern differs visibly from her genetic donor’s) being the most famous example.
Genes encode catalytic, structural and regulatory proteins and RNAs. They do not encode the nervous system; even less do they encode complex behavior. At the level of the organism, they code for susceptibilities and tendencies — that is, with a few important exceptions, they are probabilistic rather than deterministic. And although many diseases develop from malfunctions of single genes, this does not indicate that single genes are responsible for any complex attribute. Instead they’re the equivalent of screws or belts, whose loss can stop a car but does not make it run.
No reputable biologist suggests that genes are not decisively involved in outcomes. But the constant harping on trait heritability “in spite of environment” is a straw man. Its main prop, the twin studies, is far less robust than commonly presented — especially when we take into account that identical twins often know each other before separation and, even when adopted, are likely to grow up in very similar environments (to say nothing of the data cherry-picking for publication). The nature/nurture debate has been largely resolved by the gene/environment (GxE) interplay model, a non-reductive approximation closer to reality. Genes never work in isolation but as complex, intricately-regulated cooperative networks and they are in constant, dynamic dialogue with the environment — from diet to natal language. That is why second-generation immigrants invariably display the body morphology and disease susceptibilities of their adopted culture, although they have inherited the genes of their natal one.
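The GxE interplay just described can be caricatured in a few lines. In this toy model (all weights and values hypothetical, chosen purely for illustration), a phenotype depends on a genotype term, an environment term, and crucially their interaction, so the same genotype that scores highest in one environment scores lowest in another:

```python
def phenotype(genotype_effect, environment, gxe_weight=1.5):
    """Toy gene-environment (GxE) model with hypothetical weights:
    the phenotype is a readout of gene, environment, AND their
    interaction -- never of the gene in isolation."""
    interaction = gxe_weight * genotype_effect * environment
    return genotype_effect + environment + interaction

# Genotype A outscores genotype B in a favorable environment...
print(phenotype(1.0, 1.0), phenotype(-1.0, 1.0))    # 3.5 vs -1.5
# ...and loses to it in an unfavorable one: the ranking flips.
print(phenotype(1.0, -1.0), phenotype(-1.0, -1.0))  # -1.5 vs -0.5
```

The crossover is the point: asking which genotype is “better” has no answer until the environment is specified.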
Furthermore, there’s significant redundancy in the genome. Knockouts of even important single genes in model organisms often have practically no phenotype (or a very subtle one) because related genes take up the slack. The “selfish gene” concept as presented by reductionists of all stripes is arrant nonsense. To stay with the car analogy, it’s the equivalent of a single screw rotating in vacuum by itself. It doth not even a cart make, let alone the universe-spanning starship that is our brain/mind.
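The redundancy argument can also be sketched as a toy pathway (the numbers are hypothetical, not measurements from any real gene pair): two paralogs each supply half the needed activity, and the survivor is upregulated when its partner is knocked out, so a single knockout shows only a subtle phenotype while a double knockout abolishes function.

```python
def pathway_output(genes):
    """Toy redundant pathway: paralogs A and B each contribute to a
    saturating output, and the remaining paralog is upregulated
    (compensation factor 1.8, hypothetical) when the other is lost."""
    a = genes.get("A", 0.0)
    b = genes.get("B", 0.0)
    if a == 0.0:                 # knockout of A: B takes up the slack
        b = min(1.0, b * 1.8)
    if b == 0.0:                 # knockout of B: A takes up the slack
        a = min(1.0, a * 1.8)
    return min(1.0, a + b)       # saturating pathway output

wild_type = pathway_output({"A": 0.5, "B": 0.5})  # 1.0: full function
single_ko = pathway_output({"A": 0.0, "B": 0.5})  # 0.9: subtle phenotype
double_ko = pathway_output({"A": 0.0, "B": 0.0})  # 0.0: function lost
```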
About half of our genes contribute directly to brain function; the rest do so indirectly, since brain function depends crucially on signal processing and body feedback. This makes the brain/mind a bona fide complex (though knowable) system, an attribute that underscores the intrinsic infeasibility of instilling virtue, intelligence or good taste in clothes by changing single genes. If genetic programs were as fixed, simple and one-to-one mapped as reductionists would like, we would have answered most questions about brain function within months of reading the human genome. As a pertinent example, recent work indicates that the six extended genomic regions that SNP analysis flagged as contributing the most to IQ (itself a population-sorting tool rather than a real indicator of intelligence) account for a paltry 1% of the variance in IQ scores.
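What “1% of the variance” means is easy to see with a toy additive simulation (all parameters hypothetical: six loci whose effects are sized so that they jointly explain about 1% of a standardized trait, with everything else, from development to environment, lumped into the residual):

```python
import random

def variance_explained(n_people=20000, n_loci=6, target_r2=0.01, seed=42):
    """Toy polygenic model: simulate a standardized trait where n_loci
    small additive effects jointly account for ~target_r2 of the
    variance, and return the fraction actually explained."""
    rng = random.Random(seed)
    per_locus_sd = (target_r2 / n_loci) ** 0.5  # effect size per locus
    scores, traits = [], []
    for _ in range(n_people):
        genetic = sum(per_locus_sd * rng.gauss(0, 1) for _ in range(n_loci))
        residual = rng.gauss(0, (1.0 - target_r2) ** 0.5)
        scores.append(genetic)
        traits.append(genetic + residual)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(scores) / var(traits)  # variance explained, ~0.01
```

Even a perfect genetic score built from such loci barely moves the needle: knowing all six genotypes tells you almost nothing about any individual’s trait value.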
The attempts to map complex behaviors for the convenience and justification of social policies began as soon as societies stratified. To list a few recent examples, in the last decades we’ve had the false XYY “aggression” connection, the issue of gay men’s hypothalamus size, and the sloppy and dangerous (but incredibly lucrative) generalizations about brain serotonin and “nurturing” genes. Traditional breeding experiments (cattle, horses, cats, dogs, royal families) have an in-built functional test: the progeny selected in this fashion must be robust enough to be born, survive and reproduce. In the cases where these criteria were flouted, we got such results as deafness, mental instability, and physical fragility, as with Alexei Romanov (the hemophiliac son of Tsar Nicholas II).
I will leave aside the enormous and still largely unmet technical challenge of such implementation, which is light years distant from casual notes that airily prescribe, “just add tetracycline to the inducible vector that carries your gene” or “inject artificial chromosomes or siRNAs.” I play with all these beasties in the lab, and can barely get them to behave in homogeneous cell lines. Because most cognitive problems arise not from huge genomic errors but from small shifts in ratios of “wild-type” (non-mutated) proteins which affect brain architecture before or after birth, approximate engineering solutions will be death sentences. Moreover, the proposals usually advocate that such changes be done in somatic cells, not the germ line (which would make them permanent). This means intervention during fetal development or even later — a far more difficult undertaking than germline alteration. The individual fine-tuning required for this in turn brings up differential resource access (and no, I don’t believe that nanotech will give us unlimited resources).
Let’s now examine the “improvement” promised by enhancement of any complex trait. All organisms are jury-rigged across scales: that is, the decisive criterion for an adaptive change (from a hemoglobin variant to a hip-bone angle) is function, rather than elegance. Many details are accidental outcomes of an initial chance configuration; the literally inverted organization of the vertebrate eye is a prime example. Optimality is entirely context-dependent. If an organism or function is perfected for one set of circumstances, it immediately becomes suboptimal for all others. That is the reason why the alleles responsible for cystic fibrosis and sickle cell anemia persisted: in heterozygous carriers, they conferred resistance to cholera and malaria, respectively. Even if it were possible to instill virtue or musicality (or even the inclination for them), fixing these traits would decrease both individual and collective fitness. Furthermore, the desired state for all complex behaviors is fluid and relative.
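The persistence of the sickle allele admits a standard population-genetics calculation. The sketch below uses hypothetical selection coefficients (s and t are illustrative, not measured values): with heterozygote advantage, iterating one-locus selection drives the allele frequency not to zero but to the textbook equilibrium q* = s / (s + t).

```python
def next_freq(q, s=0.1, t=0.8):
    """One generation of selection at a locus with heterozygote
    advantage. Hypothetical fitnesses: AA = 1 - s (malaria-susceptible),
    AS = 1 (protected), SS = 1 - t (sickle cell disease).
    q is the current frequency of the sickle allele S."""
    p = 1.0 - q
    w_bar = p * p * (1 - s) + 2 * p * q + q * q * (1 - t)  # mean fitness
    return (p * q + q * q * (1 - t)) / w_bar               # q after selection

q = 0.01                     # a rare allele entering the population
for _ in range(500):
    q = next_freq(q)

print(round(q, 3), round(0.1 / 0.9, 3))  # both 0.111: the allele persists
```

Selection against homozygotes never purges the allele, because every generation the heterozygotes out-reproduce both homozygote classes; “fixing” either allele would lower mean fitness in a malarial environment.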
The concept that pressing the button of a single gene can change any complex behavior is entirely unsupported by biological evidence at any scale: genomic, molecular, cellular, organismic. Because interactions between gene products are complex, dynamic and give rise to pleiotropic effects, such intervention can cause significant harm even if implemented with full knowledge of genomic interactions (which at this point is not even partially available). It is far more feasible to correct an error than to “enhance” an already functioning brain. Unlike a car or a computer, brain hardware and software are inextricably intertwined and cannot be decoupled or deactivated during modification (see: Why Our Brains Will Never Live in the Matrix).
If such a scenario is optional, it will introduce extreme de facto or de jure inequalities. If it is mandatory, beyond the obvious fact that it will require massive coercion, it will also result in the equivalent of monocultures, which is the surest way to extinction regardless of how resourceful or dominant a particular species is. And no matter how benevolent the motives of the proponents of such schemes are, all utopian implementations, without exception, degenerate into slaughterhouses and concentration camps.
The proposals to augment “virtue” or “intelligence” fall solidly into the linear progress model advanced by monotheistic religions, which takes for granted that humans are in a fallen state and need to achieve an idealized perfection. For the religiously orthodox, this exemplar is a god; for the transhumanists, it’s often a post-singularity AI. In reality, humans are a work in continuous evolution both biologically and culturally and will almost certainly become extinct if they enter any type of stasis, no matter how “perfect.”
But higher level arguments aside, the foundation stone of all such discussions remains unaltered and unalterable: any proposal to modulate complex traits by changing single genes is like preparing a Mars expedition based on the Ptolemaic view of the universe.