Error catastrophe
Error catastrophe is a term used to describe the extinction of an organism (often in the context of microorganisms such as viruses) as a result of excessive RNA mutations. The term specifically refers to the predictions of mathematical models similar to that described below, and not to an observed phenomenon.
Many viruses 'make mistakes' (mutate) during replication. The resulting mutations increase the genetic diversity of the viral population and help the virus evade recognition by a mammalian host's immune system in subsequent infections. The more mutations the virus makes during replication, the more likely it is to avoid recognition by the immune system and the more diverse its population will be (see the article on biodiversity for an explanation of the selective advantages of this). However, if it makes too many mutations, it may lose biological features that have evolved to its advantage, including the ability to reproduce at all.
The question arises: how many mutations can be made during each replication before the population of viruses begins to lose self-identity?
A basic mathematical model
Consider a virus which has a genetic identity modeled by a string of ones and zeros (e.g. 11010001011101...). Suppose that the string has fixed length L and that during replication the virus copies each digit one by one, making a mistake with probability q independently of all other digits.
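This digit-by-digit copying process is easy to simulate. The sketch below (illustrative names; the genome and parameters are made up) flips each bit independently with probability q, so the mean number of errors per copy is Lq and the chance of a perfect copy is (1 - q)^L, the quantity that reappears in the threshold condition later.

```python
import random

def replicate(genome, q):
    """Copy a binary genome digit by digit; each digit is flipped
    independently with error probability q."""
    return [bit ^ 1 if random.random() < q else bit for bit in genome]

random.seed(1)
L, q = 10, 0.1
parent = [random.randint(0, 1) for _ in range(L)]

# Average number of copying errors over many replications
# should approach L*q (here 1.0).
trials = 10_000
total_errors = sum(
    sum(p != c for p, c in zip(parent, replicate(parent, q)))
    for _ in range(trials)
)
mean_errors = total_errors / trials

# Probability of a perfect (error-free) copy is (1 - q)**L.
p_perfect = (1 - q) ** L
```

With L = 10 and q = 0.1 the error-free copy probability is 0.9^10, about 0.35, so most copies already carry at least one mutation.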
Due to the mutations resulting from erroneous replication, there exist up to 2^{L} distinct strains derived from the parent virus. Let x_{i} denote the concentration of strain i; let a_{i} denote the rate at which strain i reproduces; and let Q_{ij} denote the probability of a virus of strain i mutating to strain j.
Then the rate of change of concentration x_{j} is given by
 <math>\dot{x}_j = \sum_i a_i Q_{ij} x_i</math>
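For a small L the full system can be integrated directly. In the sketch below (the fitness landscape is a hypothetical single-peak choice, not from the source) the mutation matrix Q_ij is built from the Hamming distance between strains, since d specific digits must flip and the remaining L - d must copy correctly:

```python
import itertools

L, q = 4, 0.05
strains = list(itertools.product([0, 1], repeat=L))  # all 2**L strains
n = len(strains)

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# Q[i][j]: probability that a copy of strain i comes out as strain j.
# d digits mutate (prob q each), the other L - d copy correctly.
Q = [[q ** hamming(u, v) * (1 - q) ** (L - hamming(u, v))
      for v in strains] for u in strains]

# Hypothetical single-peak fitness landscape: the all-ones "master"
# strain reproduces fastest; all others share a background rate.
a = [2.0 if s == (1,) * L else 1.0 for s in strains]

# Forward-Euler integration of  x_j' = sum_i a_i Q_ij x_i,
# renormalising so the concentrations stay a frequency distribution.
x = [1.0 / n] * n
dt = 0.01
for _ in range(5000):
    dx = [sum(a[i] * Q[i][j] * x[i] for i in range(n)) for j in range(n)]
    x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    total = sum(x)
    x = [xi / total for xi in x]

master_freq = x[strains.index((1,) * L)]
```

At this low error rate the master strain dominates the equilibrium distribution; raising q toward the threshold flattens the distribution toward random sequences.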
At this point, we make a mathematical idealisation: we pick the fittest strain (the one with the greatest reproduction rate a_{j}) and assume that it is unique (i.e. that the chosen a_{j} satisfies a_{j} > a_{i} for all i); and we then group the remaining strains into a single group. Let the concentrations of the two groups be x, y with reproduction rates a > b; let Q be the probability of a virus in the first group mutating to a member of the second group and let R be the probability of a member of the second group returning to the first (via an unlikely and very specific mutation). The equations governing the development of the populations are:
 <math>
\begin{cases} \dot{x} = & a(1-Q)x + bRy \\ \dot{y} = & aQx + b(1-R)y \\ \end{cases} </math>
We are particularly interested in the case where L is very large, so we may safely neglect R and instead consider:
 <math>
\begin{cases} \dot{x} = & a(1-Q)x \\ \dot{y} = & aQx + by \\ \end{cases} </math>
Then setting z = x/y we have
 <math>
\begin{matrix} \frac{\partial z}{\partial t} & = & \frac{\dot{x} y - x \dot{y}}{y^2} \\ && \\ & = & \frac{a(1-Q)xy - x(aQx + by)}{y^2} \\ && \\ & = & a(1-Q)z - (aQz^2 + bz) \\ && \\ & = & z(a(1-Q) - aQz - b) \\ \end{matrix} </math>.
Assuming z achieves a steady concentration over time, z settles down to satisfy
 <math> z(\infty) = \frac{a(1-Q)-b}{aQ} </math>
(which is deduced by setting the derivative of z with respect to time to zero).
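The closed-form steady state can be checked by integrating the one-dimensional equation for z directly. The parameter values below are made up for illustration; with a(1 - Q) > b, z follows a logistic-type curve toward the predicted fixed point:

```python
# Hypothetical parameters with a(1 - Q) = 1.6 > b = 1, so the
# fittest strain persists.
a, b, Q = 2.0, 1.0, 0.2

# Forward-Euler integration of  z' = z(a(1 - Q) - aQz - b).
z = 1.0
dt = 0.001
for _ in range(20_000):
    z += dt * z * (a * (1 - Q) - a * Q * z - b)

# Predicted steady state: z(inf) = (a(1 - Q) - b) / (aQ) = 1.5 here.
z_pred = (a * (1 - Q) - b) / (a * Q)
```

After t = 20 time units the numerical trajectory sits on the predicted value to within the integration error.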
So the important question is: for what parameter values does the original population persist (continue to exist)? The population persists if and only if the steady-state value of z is strictly positive, i.e. if and only if:
 <math> z(\infty) > 0 \iff a(1-Q)-b > 0 \iff (1-Q) > b/a .</math>
This result is more popularly expressed in terms of the ratio a:b and the error rate q of individual digits: set b/a = (1-s); then the condition becomes
 <math> z(\infty) > 0 \iff (1-Q) = (1-q)^L > 1-s </math>
Taking a logarithm on both sides and approximating for small q and s one gets
 <math>L \ln{(1-q)} \approx -Lq > \ln{(1-s)} \approx -s</math>
reducing the condition to:
 <math> Lq < s </math>
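The threshold can be evaluated both exactly and via the small-q, small-s approximation. The numbers below are made up for illustration; the exact critical error rate solves (1 - q)^L = 1 - s:

```python
# Hypothetical genome length and selective advantage.
L, s = 100, 0.1

def persists(q, L, s):
    """Exact persistence condition: (1 - q)**L > 1 - s."""
    return (1 - q) ** L > 1 - s

# Approximate threshold from Lq < s.
q_max_approx = s / L

# Exact threshold, solving (1 - q)**L = 1 - s for q.
q_max_exact = 1 - (1 - s) ** (1 / L)
```

For these values the approximation s/L = 0.001 sits within a few percent of the exact threshold, and error rates on either side of it flip the persistence test.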
RNA viruses which replicate close to the error threshold have a genome size of order 10^{4} base pairs. Human DNA is about 3.3 billion (3.3 × 10^{9}) base pairs long. This means that the replication mechanism for DNA must be orders of magnitude more accurate than that for RNA.
The theory of error catastrophe has been criticized as resting on an unrealistic assumption: that every variant of the fittest strain "j" retains a finite replication rate, i.e. that no replication error or combination of errors ever causes replication to cease. Under this assumption, at error rates above the threshold the model predicts that a population of replicating organisms with essentially random genomic sequences is produced which outcompetes strain "j", eventually driving it to extinction. This is incompatible with the well-established principle in biology that the genomic sequence encodes the biological functions of the organism. The error catastrophe predicted by the mathematical model has not been convincingly shown to occur.
Applications of the theory
Some viruses, such as polio or hepatitis C, operate very close to the critical mutation rate (i.e. the largest q that L will allow). Drugs have been created to increase the mutation rate of these viruses in order to push them over the critical boundary so that they lose self-identity. However, given the criticism of the basic assumption of the mathematical model, this approach is problematic.
Recently, scientists have discovered an enzyme (A3G) that may cause HIV to mutate to death, which could make error catastrophe a usable treatment method for AIDS. [1] Researchers have also identified a pharmaceutical agent, KP-1461, that is believed to act on HIV similarly by introducing errors into the viral genome. It does this by being incorporated into a copy strand of viral DNA as an analog of cytidine, which should normally pair with guanosine but instead pairs with adenosine, introducing a mutation into the genome. Over time these guanosine-to-adenosine mutations, which can occur randomly anywhere throughout the viral genome, would be expected to build up, leading to an error catastrophe. Although in vitro tests have demonstrated viral population collapse using this method, it has not been proven to work in vivo. [2]
The result introduces a catch-22 mystery for biologists: in general, large genomes are required for accurate replication (high replication accuracy is achieved with the help of enzymes, which a large genome is needed to encode), but a large genome requires high copying accuracy (low q) to persist. Which comes first, and how does it happen? An illustration of the difficulty: L can only be about 100 if the per-digit accuracy 1 - q is at least 0.99, a very small string length in terms of genes.
See also
 Haldane's Dilemma