The Drafting (origin) and Revision (evolution) of the Crick 4×4×4 Genetic Code Table (Part 7)
Two major criticisms of the coevolution theory have been put forward. First, the coevolution scenario is very sensitive to the choice of amino acid precursor-product pairs, and the choice of these pairs is far from straightforward. Indeed, in the original formulation of the coevolution theory, Wong did not directly use biochemically established relationships between amino acids but instead employed inferred reactions of primordial metabolism that remain debatable (70, 103). Amirnovin (108) generated a large set of random codes and found that, if the original 8 precursor-product pairs proposed by Wong (70) are considered, the standard code shows a substantially higher codon correlation score (a measure based on the number of adjacent codons that encode precursor and product amino acids) than most random codes (only 0.1% of random codes perform better). However, after the pairs Gln-His and Val-Leu are removed (the validity of the latter pair has been questioned (109)), the proportion of better-scoring random codes rises to 3.6%, and if the precursor-product pairs are taken from the well-characterized metabolic pathways of E. coli, the proportion of random codes showing a stronger correlation reaches 34%. Second, the biological validity of the statistical analysis of Wong (70) appears dubious (109). Ronneberg et al., in addition to calling for a consistent definition of amino acid precursor-product pairs, argued that, owing to the wobble rule, the genetic code effectively contains not 61 but only 45 functional codons coding for amino acids: each pair of codons of the form NNY is counted as one because no known tRNA distinguishes codons with U or C in the third position. Under this assumption, there was no statistical support for the coevolution scenario of the evolution of the code (109) (but see (110)).
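To make the codon-correlation measure concrete, here is a minimal Python sketch in the spirit of Amirnovin's test (not a reimplementation of the published analysis): it counts codon pairs that differ at a single position and encode a precursor-product pair, then compares the standard code's score with the scores of random codes obtained by shuffling amino acids over the standard code's synonymous codon blocks. The precursor-product list is an illustrative subset rather than Wong's exact set, and block shuffling is only one of several possible randomization schemes.

```python
import random
from itertools import product

BASES = "UCAG"

# Standard genetic code, written row by row of the familiar 4 x 4 x 4 table
# ('*' marks stop codons).
AA_BY_CODON = dict(zip(
    ("".join(c) for c in product(BASES, repeat=3)),
    "FFLLSSSSYY**CC*W"   # UUU..UGG
    "LLLLPPPPHHQQRRRR"   # CUU..CGG
    "IIIMTTTTNNKKSSRR"   # AUU..AGG
    "VVVVAAAADDEEGGGG"   # GUU..GGG
))

# Illustrative precursor-product pairs; the published analyses differ in this choice.
PRECURSOR_PRODUCT = [
    ("E", "Q"), ("D", "N"), ("S", "C"), ("S", "W"),
    ("Q", "H"), ("V", "L"), ("T", "I"), ("F", "Y"),
]
PAIR_SETS = [set(p) for p in PRECURSOR_PRODUCT]

def adjacent(c1, c2):
    """True if two codons differ at exactly one position."""
    return sum(a != b for a, b in zip(c1, c2)) == 1

def correlation_score(code):
    """Number of single-mismatch codon pairs encoding a precursor-product pair."""
    codons = list(code)
    score = 0
    for i, c1 in enumerate(codons):
        for c2 in codons[i + 1:]:
            if adjacent(c1, c2) and {code[c1], code[c2]} in PAIR_SETS:
                score += 1
    return score

def random_code():
    """Shuffle amino acids over the standard code's synonymous codon blocks,
    keeping the block structure and the stop codons in place."""
    blocks = {}
    for codon, aa in AA_BY_CODON.items():
        blocks.setdefault(aa, []).append(codon)
    aas = [aa for aa in blocks if aa != "*"]
    shuffled = random.sample(aas, len(aas))
    code = {codon: "*" for codon in blocks["*"]}
    for old, new in zip(aas, shuffled):
        for codon in blocks[old]:
            code[codon] = new
    return code

standard = correlation_score(AA_BY_CODON)
better = sum(correlation_score(random_code()) > standard for _ in range(1000))
print(f"standard code score: {standard}; random codes scoring higher: {better}/1000")
```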
Is a compromise scenario plausible?
As discussed above, despite a long history of research and the accumulation of considerable circumstantial evidence, none of the three major theories on the nature and evolution of the genetic code is unequivocally supported by the currently available data. It appears premature to claim, e.g., that "the coevolution theory is a proven theory" (103), or that "There is very significant evidence that cognate codons and/or anticodons are unexpectedly frequent in RNA-binding sites […]. This suggests that a substantial fraction of the genetic code has a stereochemical basis" (75). Is it conceivable that each of these theories captures some aspect of the code's origin and evolution and that, combined, they could yield a more realistic picture? In principle, it is not difficult to speculate along these lines, for instance, by imagining a scenario whereby, first, abiogenically synthesized amino acids captured their cognate codons owing to their respective stereochemical affinities; then the code expanded according to the coevolution theory; and, finally, amino acid assignments were adjusted under selection to minimize the effect of translational misreadings and point mutations on the genome. Such a composite theory is extremely flexible and consequently can "explain" just about anything by optimizing the relative contributions of different processes to fit the structure of the standard code. Of course, the falsifiability or, more generally, testability of such an overadjusted scenario becomes a concern. Nevertheless, examination of the specific predictions of each theory might take one some way toward falsification of the composite scenario.
The coevolution scenario implies that the genetic code should be highly robust to mistranslation, simply because the identified precursor-product pairs consist of physico-chemically similar amino acids (97). However, several detailed analyses have suggested that coevolution alone cannot explain the observed level of robustness of the standard code, so that additional evolution under selection for error minimization would be required to arrive at the standard code (82, 85, 111). Thus, in terms of the plausibility of a composite scenario, coevolution and error minimization are compatible; however, whereas error minimization appears to be necessary, the necessity of coevolution remains uncertain.
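The robustness (error-minimization) property referred to here is usually quantified as a code cost: the mean squared change of some amino acid property, most often Woese's polar requirement, over all single-nucleotide codon substitutions. The sketch below, which reuses BASES, AA_BY_CODON and random_code from the previous sketch, shows one minimal version of this calculation; the polar requirement values are approximate and included for illustration only, and published analyses typically also weight codon positions and transition/transversion mismatches.

```python
POLAR_REQUIREMENT = {  # approximate values of Woese's polar requirement scale
    "A": 7.0, "R": 9.1, "N": 10.0, "D": 13.0, "C": 4.8, "Q": 8.6, "E": 12.5,
    "G": 7.9, "H": 8.4, "I": 4.9, "L": 4.9, "K": 10.1, "M": 5.3, "F": 5.0,
    "P": 6.6, "S": 7.5, "T": 6.6, "W": 5.2, "Y": 5.4, "V": 5.6,
}

def error_cost(code, prop=POLAR_REQUIREMENT):
    """Mean squared property change over all single-base substitutions
    between sense codons (substitutions to or from stop codons are skipped)."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                neighbour_aa = code[codon[:pos] + base + codon[pos + 1:]]
                if neighbour_aa == "*":
                    continue
                total += (prop[aa] - prop[neighbour_aa]) ** 2
                n += 1
    return total / n

# Fraction of block-shuffled random codes that are more robust (lower cost)
# than the standard code.
standard_cost = error_cost(AA_BY_CODON)
more_robust = sum(error_cost(random_code()) < standard_cost for _ in range(1000))
print(f"standard code cost: {standard_cost:.2f}; "
      f"random codes with lower cost: {more_robust}/1000")
```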
The affinities between cognate triplets and amino acids detected in aptamer selection experiments appear to be independent of the highly optimized amino acid assignments in the standard code table (112). Thus, even if these affinities are relevant for the origin of the code, the error-minimization properties of the standard code still require an explanation. The proponents of the stereochemical theory argue that some of the amino acid assignments are stereochemically defined, whereas others evolved under selective pressure for error minimization, resulting in the observed robustness of the standard code. Indeed, it has been shown that, even when 8–10 amino acid assignments in the standard code table are fixed, there is still plenty of room to produce highly optimized genetic codes (112). However, this mixed stereochemistry-selection scenario seems to clash with some of the evidence. Perhaps paradoxically, the amino acids for which affinities with cognate triplets have been reported are largely considered to be late additions to the code: only 4 of the 8 amino acids with reported stereochemical affinities are phase 1 amino acids according to the coevolution theory (Fig. 4). Notably, arginine, the amino acid for which the evidence in support of a stereochemical association with cognate codons appears to be the strongest, is the "worst positioned" amino acid in the code table, i.e., of all amino acids, a change in the codon assignment for arginine results in the greatest increase in the code's fitness (e.g., (86)). This unusual position of arginine in the code table makes it tempting to consider a different combined scenario of the code's evolution, whereby the early stage of this evolution involved, primarily, selection for error minimization, whereas at a later stage the code was modified through recruitment of new amino acids that involved the (weak) stereochemical affinities.
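The "fixed assignments" argument can be illustrated with a small, hedged extension of the sketches above (reusing random, AA_BY_CODON and error_cost): shuffle amino acids over codon blocks while pinning a subset of amino acids to their standard blocks, and ask how robust the resulting codes can still be. The pinned set used here is a placeholder, not the published list of amino acids with reported triplet affinities.

```python
def random_code_with_fixed(fixed_aas):
    """Block-shuffle as in random_code(), but keep the listed amino acids
    (and the stop codons) on their standard codon blocks."""
    blocks = {}
    for codon, aa in AA_BY_CODON.items():
        blocks.setdefault(aa, []).append(codon)
    movable = [aa for aa in blocks if aa != "*" and aa not in fixed_aas]
    shuffled = random.sample(movable, len(movable))
    code = dict(AA_BY_CODON)          # pinned amino acids and stop codons stay put
    for old, new in zip(movable, shuffled):
        for codon in blocks[old]:
            code[codon] = new
    return code

# Pin eight amino acids (placeholder choice) and report the most robust code
# found among 1000 randomized ones.
pinned = set("RIFYHLWK")
costs = [error_cost(random_code_with_fixed(pinned)) for _ in range(1000)]
print(f"best randomized cost with pinned assignments: {min(costs):.2f} "
      f"(standard code: {error_cost(AA_BY_CODON):.2f})")
```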
Universality of the genetic code and collective evolution
Whether the code reflects biosynthetic pathways according to the coevolution theory or was shaped by adaptive evolutionary forces to minimize the burden of improperly translated proteins, or even to maximize the rate of adaptive protein evolution (113–115), a fundamental but often overlooked question is why the code is (almost) universal. Of course, the stereochemical theory could, in principle, offer a simple solution, namely, that the codon assignments in the standard code are unequivocally dictated by the specific affinity between amino acids and their cognate codons. As noted above, however, the affinities are equivocal and weak, and do not account for the error-minimization property of the code. An alternative could be that the code evolved to (near) perfection in terms of robustness to translational errors or, perhaps, some other optimization criteria, and this (nearly) perfect standard code outcompeted all other versions. We have seen, however, that, at least with respect to error minimization, this is far from being the case (Fig. 3). What remains as an explanation of the code's universality is some version of frozen accident, combined with selection that brought the code to a relatively high robustness, sufficient for the evolution of complex life.
Under the frozen accident view, the universality of the code can be considered an epiphenomenon of the existence of a unique LUCA. The LUCA must have had a code with at least the minimal fitness compatible with cellular life, and that code has remained frozen ever since (except for the observed limited variation). The implicit assumption behind this line of reasoning is that the LUCA already possessed a translation system that was (nearly) as advanced as the modern version. Indeed, the universality of the key components of the translation system, including a nearly complete set of aminoacyl-tRNA synthetases, among the extant cellular life forms (116, 117) strongly suggests that the main features of the translation system were fixed at a pre-LUCA stage of evolution.
The recently proposed hypothesis of collective evolution of primordial replicators explains the universality of the code through a combination of frozen accident and a distinct type of selection pressure (118, 119). The central idea is that the universality of the genetic code is a condition for maintaining the (horizontal) flow of genetic information between communities of primordial replicators, and this information flow is, in turn, a condition for the evolution of any complex biological entities. Horizontal transfer of replicators would provide the means for the emergence of clusters of similar codes, and these clusters would compete for niches. This idea of collective evolution of ensembles of virus-like genetic entities as a stage in the origin of cellular life apparently goes back to Haldane's classic 1928 paper (120) but was subsequently recast in modern terms and expanded (121–124), and developed in physical terms (125, 126). Vetsigian et al. (118) explored the fate of the code under collective evolution using a simple evolutionary model that generalizes the population-genetic model of code evolution developed by Sella and Ardell (90, 91). It was shown that, taking into account the selective advantage of error-minimizing codes, within a community of subpopulations of genetic elements capable of horizontal gene exchange, evolution leads to a nearly universal, highly robust code (118).
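The qualitative logic of this argument, that horizontal exchange rewards code agreement while translational robustness rewards smooth codon assignments, can be illustrated with a deliberately crude toy simulation. The sketch below is not the Vetsigian et al. or Sella-Ardell model: the codon set, the amino acid "chemistry", the fitness weights and the hill-climbing update rule are all arbitrary illustrative choices.

```python
import random

BASES4 = range(4)
CODONS = [(i, j) for i in BASES4 for j in BASES4]   # 16 toy codons (2 letters, 4-letter alphabet)
N_AA = 8                                            # toy amino-acid alphabet: integers 0..7

def robustness(code):
    """Reward codes in which single-letter codon changes yield 'similar' amino acids
    (|difference| between integer labels stands in for chemical distance)."""
    cost = 0
    for (i, j), aa in code.items():
        neighbours = [(x, j) for x in BASES4 if x != i] + [(i, y) for y in BASES4 if y != j]
        cost += sum(abs(aa - code[n]) for n in neighbours)
    return -cost

def compatibility(code, partners):
    """Reward agreement with the codes of horizontal-exchange partners."""
    return sum(code[c] == p[c] for p in partners for c in CODONS)

def fitness(code, partners):
    # Arbitrary weights: robustness + benefit of a diverse amino-acid repertoire
    # + compatibility with gene-exchange partners.
    return robustness(code) + 20 * len(set(code.values())) + 0.5 * compatibility(code, partners)

def mutate(code):
    new = dict(code)
    new[random.choice(CODONS)] = random.randrange(N_AA)   # reassign one codon at random
    return new

random.seed(0)
codes = [{c: random.randrange(N_AA) for c in CODONS} for _ in range(10)]   # 10 communities

for _ in range(10000):
    i = random.randrange(len(codes))
    partners = codes[:i] + codes[i + 1:]
    candidate = mutate(codes[i])
    if fitness(candidate, partners) >= fitness(codes[i], partners):  # hill-climbing stand-in for selection
        codes[i] = candidate

# Mean pairwise agreement of codon assignments across communities (1.0 = one universal code).
pairs = [(a, b) for a in range(len(codes)) for b in range(a + 1, len(codes))]
agreement = sum(codes[a][c] == codes[b][c] for a, b in pairs for c in CODONS) / (len(pairs) * len(CODONS))
print(f"mean pairwise codon-assignment agreement: {agreement:.2f}")
```

In this toy setting, the compatibility term is what pushes initially different codes toward a single shared table; removing it leaves each community free to settle on its own, comparably robust but mutually incompatible, code.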
Recommended reading:
※ The Drafting (origin) and Revision (evolution) of the Crick 4×4×4 Genetic Code Table (Part 8)