Thursday, 24 November 2016

Evolution, Energetics & Noise

Mitochondrial DNA (mtDNA) contains instructions for building important cellular machines. We have populations of mtDNA inside each of our cells -- almost like a population of animals in an ecosystem. Indeed, mitochondria were originally independent organisms that, billions of years ago, were engulfed by our ancestors' cells and survived -- so the picture of mtDNA as a population of critters living inside our cells has evolutionary precedent! MtDNA molecules replicate and degrade in our cells in response to signals passed back and forth between mitochondria and the nucleus (the cell's "control tower"). Describing the behaviour of these populations -- given the random, noisy environment of the cell, the fact that cells divide, and the complicated nuclear signals governing mtDNA populations -- is challenging. At the same time, experiments looking in detail at mtDNA inside cells are difficult -- so predictive theoretical descriptions of these populations are highly valuable.

Why should we care about these cellular populations? MtDNA can become mutated, wrecking the instructions for building machines. If a high enough proportion of the mtDNA molecules in a cell is mutated, our cells struggle and we get diseases. It only takes a few cells exceeding this "threshold" to cause problems -- so understanding the cell-to-cell distribution of mtDNA is medically important (as well as biologically fascinating). Simple mathematical approaches typically describe only average behaviours -- we need to describe the variability in mtDNA populations too. And for that, we need to account for the random effects that influence them.
 
In our cells, signals from the "control tower" nucleus lead to the replication (orange) and degradation (purple) of mtDNA. These processes affect mtDNA populations that may contain normal (blue) and mutant (red) molecules. Our mathematical approach -- extending work addressing a similar but simpler system -- describes how the total number of machines, and the proportion of mutants, are likely to behave and change with time and as cells divide.

 
In the past, we have used a branch of maths called stochastic processes to answer questions about the random behaviour of mtDNA populations. But these previous approaches cannot account for the "control tower" -- the nucleus' control of mtDNA. To address this, we've developed a mathematical tradeoff -- we make a particular assumption (which we show not to be unreasonable) and in exchange are able to derive a wealth of results about mtDNA behaviour under all sorts of different nuclear control signals. Technically, we use a rather magical-sounding tool called "Van Kampen's system size expansion" to approximate mtDNA behaviour, then explore how the resulting equations behave as time progresses and cells divide.
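To give a flavour of the kind of system being described (this is not the machinery of the paper, whose models are far more general), here is a minimal stochastic simulation of an mtDNA population under an illustrative linear nuclear feedback control. The control law, the rates, and the target copy number are all invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_cell(w, m, t_max=50.0, mu=0.07, c=0.002, n_opt=1000):
    """Simulate one cell's mtDNA population: w wildtype and m mutant copies.
    Each molecule degrades at rate mu; each replicates at a rate set by an
    illustrative linear nuclear feedback (replicate faster when below the
    target copy number n_opt). The control law and rates are assumptions."""
    t = 0.0
    while t < t_max and w + m > 0:
        lam = max(0.0, mu + c * (n_opt - (w + m)))   # feedback-controlled birth rate
        rates = np.array([lam * w, lam * m, mu * w, mu * m])
        t += rng.exponential(1.0 / rates.sum())
        event = rng.choice(4, p=rates / rates.sum())
        if event == 0: w += 1
        elif event == 1: m += 1
        elif event == 2: w -= 1
        else: m -= 1
    return w, m

cells = np.array([gillespie_cell(500, 500) for _ in range(100)])
h = cells[:, 1] / cells.sum(axis=1)                  # heteroplasmy of each cell
print("mean total copy number:", cells.sum(axis=1).mean())
print("cell-to-cell heteroplasmy variance:", h.var())
```

Re-running with different control strengths c and watching how the heteroplasmy variance grows with t_max is, in cartoon form, the kind of question the system size expansion answers analytically.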

Our approach shows that the cell-to-cell variability in heteroplasmy (the potentially damaging proportion of mutants in a cell) generally increases with time, and surprisingly does so in the same way regardless of how the control tower signals the population. We're able to update a decades-old and commonly-used expression (often called the Wright formula; its classical form is quoted below for reference) for describing heteroplasmy variance, so that the formula, instead of being rather abstract and hard to interpret, is directly linked to real biological quantities. We also show that control-tower attempts to decrease mutant mtDNA can induce more variability in the remaining "normal" mtDNA population. We link these and other results to biological applications, and show that our approach unifies and generalises many previous models and treatments of mtDNA -- providing a consistent and powerful theoretical platform with which to understand cellular mtDNA populations. The article is in the American Journal of Human Genetics here and a preprint version can be viewed here. Cross-posted from here.
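For reference, the classical Wright formula is usually quoted in roughly the following form (this is the textbook population-genetics version, not our updated expression):

```latex
V(h) \;=\; h_0 (1 - h_0) \left[ 1 - \left( 1 - \frac{1}{N} \right)^{g} \right]
```

where h_0 is the initial heteroplasmy, N is an effective population size, and g is a number of generations of random sampling. The difficulty is that N and g are abstract quantities for an mtDNA population; the updated expression in the paper replaces them with measurable rates of mtDNA turnover and cell division.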

The largest survey of opinions on vaccine confidence

Monitoring trust in immunisation programmes is essential if we are to identify regions and socioeconomic groups that are prone to vaccine-scepticism, and to forecast these levels of mistrust. Identifying vaccine-sceptic groups is especially important because clustering of non-vaccinators in social networks can undermine collective (or herd) immunity even when overall vaccination levels appear sufficient. To investigate these regions and socioeconomic groups, we performed a large-scale, data-driven study on attitudes towards vaccination. The survey — which we believe to be the largest on attitudes to vaccination to date, with responses from 67,000 people in 67 countries — was conducted by WIN Gallup International Association and probed respondents' vaccine views by asking them to rate their agreement with the following statements: "vaccines are important for children to have"; "overall I think vaccines are safe"; "overall I think vaccines are effective"; and "vaccines are compatible with my religious beliefs".

Our results show that attitudes vary by country, by socioeconomic group, and between survey questions (respondents are more likely to agree that vaccines are important than that they are safe). Vaccine-safety-related sentiment is particularly low in the European region, which has seven of the ten least confident countries, including France, where 41% of respondents disagree that vaccines are safe. Interestingly, the oldest age group — who may have been more exposed to the havoc that vaccine-preventable diseases can cause — hold more positive views on vaccines than the young, highlighting the association between perceived danger and pro-vaccine views. Education also plays a role: individuals with higher levels of education are more likely to view vaccines as important and effective, but higher levels of education appear not to influence views on vaccine safety.



World map of the percentage of negative ("tend to disagree" or "strongly disagree") survey responses to the statement "overall I think vaccines are safe"

Our study, "The State of Vaccine Confidence 2016: Global Insights Through a 67-Country Survey", can be read for free in the journal EBioMedicine here, with a commentary here. You can find other coverage in Science magazine, New Scientist, the Financial Times, Le Monde and Scientific American. Alex, Iain, and Nick.

Wednesday, 31 August 2016

Understanding the strength and correlates of immunisation programmes

Childhood vaccinations are vital for the protection of children against dreadful diseases such as measles, polio, and diphtheria. In addition to providing personal protection, vaccines can also suppress epidemic outbreaks if a sufficiently large proportion of the population is immune – this "herd immunity" is important for society, as many individuals are unable to be vaccinated for medical reasons. Over the past half-century, public health organisations have made concerted efforts to vaccinate every child worldwide. However, notwithstanding the substantial improvements to vaccine coverage rates across the globe over the past few decades, there are still millions of unvaccinated children. The majority of these children live in countries where large parts of the population live in deprived, rural regions with poor access to healthcare. However, a number of children are denied vaccines because of parental attitudes and beliefs (which are often influenced by the media, religious groups, or anti-vaccination groups) – such hesitancy has been responsible for recent outbreaks in developing (e.g. Nigeria, Pakistan, Afghanistan) and developed (e.g. USA, UK) countries alike. Monitoring vaccine coverage rates, summarising recent vaccination behaviours, and understanding the factors that drive vaccination behaviour are thus key to our understanding of vaccine acceptance, and can allow immunisation programmes to be more effectively tailored.

To understand these pertinent issues, we used machine learning tools on publicly-available vaccination and socioeconomic data (which can be found here and on the World Health Organization's website). We used Gaussian process regression to forecast vaccine coverage rates, and used the predictive distributions over forecasted coverage rates to introduce a quantitative marker summarising a country's recent vaccination trends and variability: this summary is termed the Vaccine Performance Index. Parameterisations of this index can then be used to identify countries that are likely (over the next few years) to have vaccine coverage rates far below those required for herd immunity, that are displaying worrying declines in coverage, or that will miss immunisation goals set by global public health bodies. We find that these poorly-performing countries are mostly located in South-East Asia and sub-Saharan Africa, though, surprisingly, a handful of European countries also perform poorly.
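As a rough illustration of the machinery (the kernel, the threshold, and the index definition below are stand-ins, not the paper's actual choices), one can forecast a coverage time series with a Gaussian process and convert the predictive distribution into a performance-style score:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic annual coverage (%) for one hypothetical country
years = np.arange(1990, 2016, dtype=float).reshape(-1, 1)
coverage = 85 + 5 * np.sin(0.3 * (years.ravel() - 1990)) + rng.normal(0, 2, years.size)

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=4.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(years, coverage)

future = np.arange(2016, 2021, dtype=float).reshape(-1, 1)
mean, sd = gp.predict(future, return_std=True)

# Toy performance-index-style score: predictive probability that coverage
# stays above a herd-immunity-style threshold (definition assumed here)
threshold = 90.0
p_above = norm.sf(threshold, loc=mean, scale=sd)
print("P(coverage > 90%) for each forecast year:", p_above.round(2))
```

A country whose predictive distribution sits confidently above the threshold scores well; wide or declining forecasts flag countries worth attention.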


To investigate the factors associated with vaccination coverage, we sought links between socioeconomic factors and vaccine coverage, and found that countries with higher levels of births attended by skilled health staff, higher gross domestic product, higher government health spending, and higher education levels have higher vaccination coverage (though these results are region-dependent).

Our Vaccine Performance Index could aid policy makers' assessments of the strength and resilience of immunisation programmes. Further, identifying the socioeconomic correlates of vaccine coverage points to factors that could be addressed to improve coverage. You can read further in our freely available paper – a collaboration with the London School of Hygiene and Tropical Medicine (Heidi Larson and David Smith) and IIT Delhi (Sumeet Agarwal) – in the open-access journal Lancet Global Health under the title "Forecasted trends in vaccination coverage and correlations with socioeconomic factors: a global time-series analysis over 30 years", and there is another free article unpacking it under the title "Global Trends in Vaccination Coverage". Alex, Iain, Nick.

Sunday, 10 January 2016

Energetic arguments constraining complex fungal systems

Fungi are ubiquitous and ecologically important organisms that grow over the resources they consume. Fungi decompose everything from dead trees to dung, but whatever substrate they consume, fungi are obliged to spend energy on growth, reproduction, and substrate digestion. Many fungi also recycle their own biomass to fuel further growth. Within this overall framework, each fungal species adopts a different strategy, depending on the relative investment in growth, recycling, digestion and reproduction. Collectively, these strategies determine ecologically critical rates of carbon and nutrient cycling, including rates of decomposition and CO2 release. Crucially, a given fungus will encounter more of a resource if it increases its growth rate, and it will obtain energy from that resource more rapidly if it increases its investment in transporters and digestive enzymes. However, any energy that is expended on growth or resource acquisition cannot be spent on spore production, so fungi necessarily confront trade-offs between these three essential processes.
An example of a foraging fungal network
To understand these trade-offs we developed an energy budget model which uses a common energy currency to systematically explore how different rates of growth and recycling, and different investments in resource acquisition, affect the amount of energy available for reproduction, and how those trade-offs are affected by characteristics of the resource environment. Our model helps to explain the complex range of strategies adopted by various fungi. In particular, it shows that recycling is only beneficial for fungi growing on recalcitrant, nutrient-poor substrates, and that when the timescale of reproduction is long compared to the time required for the fungus to double in size, the total energy available for reproduction is maximal when only a very small fraction of the energy budget is spent on reproduction at any moment -- because energy reinvested in growth compounds, enlarging the fungus that later produces spores (a toy version of this argument is sketched below). You can read about this free under the title "Energetic Constraints on Fungal Growth" and it appears in the glamorously titled American Naturalist. Luke, Mark and Nick
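As a postscript, the trade-off behind that last result can be seen in a toy budget far simpler than the model in the paper (all symbols and numbers here are invented): if a fraction f of energy income is spent on spores and the rest compounds as growth, the total spore output over a window [0, T] has a closed form we can optimise numerically.

```python
import numpy as np

def spore_energy(f, r=1.0, T=10.0, S0=1.0):
    """Toy budget: a fungus of size S grows as dS/dt = (1 - f) r S and spends
    a fraction f of its energy income r*S on spores; total spore energy over
    [0, T] integrates to the closed form below."""
    return f / (1 - f) * S0 * (np.exp((1 - f) * r * T) - 1)

f = np.linspace(0.001, 0.99, 990)
for T in (2.0, 10.0, 50.0):
    best = f[np.argmax(spore_energy(f, T=T))]
    print(f"reproduction window T = {T:4.0f} doubling times: best spore fraction ~ {best:.3f}")
```

When T is short, the toy model pushes everything into spores; when T is long (many doubling times), the optimal instantaneous spend on spores becomes tiny (roughly 1/T), which is the paper's result in miniature.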

Thursday, 16 July 2015

Generations of generating functions in dividing cells

Cell biology is an unpredictable world, as we've written about before. The important machines in our cells replicate and degrade in processes that can be described as random; and when cells divide, the partitioning of these machines between the resulting cells also looks random. The number of machines in our cells is important, but how can we work with numbers in this unpredictable environment?
In our cells, machines are produced (red), replicate (orange), and degrade (purple) randomly with time, as well as being randomly partitioned when cells split and divide (blue). Our mathematical approach describes how the total number of machines is likely to behave and change with time and as cells divide.

Tools called "generating functions" are useful in this situation. A generating function is a mathematical function (like G(z) = z², but generally more complicated) that encodes all the information about a random system. To find the generating function for a particular system, one needs to consider all the random things that can happen to change the state of that system, write them down in an equation (the "master equation") describing them all together, then use a mathematical trick to push that equation into a different mathematical space, where it is easier to solve. If that "transformed" equation can be solved, the result is the generating function, from which we can then get all the information we could want about a random system: the behaviour of its mean and variance, the probability of making any observation at any time, and so on.
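To make that concrete with the simplest textbook example (much simpler than the systems in the paper): suppose machines are produced at rate k and each degrades at rate γ. The master equation for the probability P_n(t) of having n machines, its transformed version, and the resulting generating function are:

```latex
% Immigration-death process: production at rate k, degradation at rate \gamma per machine
\frac{dP_n}{dt} = k P_{n-1} - k P_n + \gamma (n+1) P_{n+1} - \gamma n P_n .
% Multiplying by z^n and summing gives a PDE for G(z,t) = \sum_n P_n(t) z^n:
\frac{\partial G}{\partial t} = (z-1)\left(k G - \gamma \frac{\partial G}{\partial z}\right),
% which, starting from zero machines, is solved by
G(z,t) = \exp\!\left[\frac{k}{\gamma}(z-1)\left(1 - e^{-\gamma t}\right)\right] .
```

This is the generating function of a Poisson distribution with mean (k/γ)(1 − e^{−γt}), so the machine count is Poisson at every time, and differentiating G at z = 1 gives the mean and variance directly. The systems in the paper add replication and cell divisions, which make G richer but leave the logic identical.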

We've gone through this mathematical process for a set of systems where individual cellular machines can be produced, replicated, and degraded randomly, and split at cell divisions in a variety of different ways. The generating functions we obtain allow us to follow this random cellular behaviour in new detail. We can make probabilistic statements about any aspect of the system at any time and after any number of cell divisions, instead of relying on assumptions that the system has somehow reached an equilibrium, or restricting ourselves to a single or small number of divisions. We've applied this tool to questions about the random dynamics of mitochondrial DNA (which we're very interested in! And this work connects explicitly with our recent eLife paper - blog article here) in cells that divide (like our cells) or "bud" (like yeast cells), but the approach is very general and we hope it will allow progress in many more biological situations. You can read about this, free, here under the title "Closed-form stochastic solutions for non-equilibrium dynamics and inheritance of cellular components over many cell divisions" in the Proceedings of the Royal Society A. Iain and Nick

Monday, 15 June 2015

How evolution deals with mitochondrial mutants (and how we can take advantage)


Our mitochondrial DNA (mtDNA) provides instructions for building vital machinery in our cells. MtDNA is inherited from our mothers, but the process of inheritance -- which is important in predicting and dealing with genetic disease -- is poorly understood. This is because mitochondrial behaviour during development (the process through which a fertilised egg becomes an independent organism) is rather complex. If a mother's egg cell begins with a mixed population of mtDNA -- say with some type A and some type B -- we usually observe hard-to-predict mtDNA differences between cells in her offspring. So if the mother's egg cell starts off with 20% type A, egg cells in her daughter could range (for example) from 10% to 30% type A, with each cell having a different proportion of A. This increase in variability, referred to as the mtDNA bottleneck, is important for the inheritance of disease. It allows cells with higher proportions of mutant mtDNA to be removed; but it also means that some cells in the next generation may contain a dangerous amount of mutant mtDNA. Crucially, how this increase in variability comes about during development is debated. Does variability increase because of random partitioning of mtDNAs at cell divisions? Is it due to the decreased number of mtDNAs per cell, increasing the magnitude of genetic drift? Or does something occur during later development to induce the variability? Without knowing this in detail, it is hard to propose therapies or make predictions addressing the inheritance of disease.

We set out to answer this question with maths! Several studies have provided data on this process by measuring the statistics of mixed mtDNA populations during development in mice. The different studies provided different interpretations of these results, proposing several different mechanisms for the bottleneck. We built a mathematical framework that was capable of modelling all the different mechanisms that had been proposed. We then used a statistical approach called approximate Bayesian computation to see which mechanism was most supported by the existing data. We identified a model where a combination of copy number reduction and random mtDNA duplications and deletions is responsible for the bottleneck. Exactly how much variability is due to each of these effects is flexible -- going some way towards explaining the existing debate in the literature.  We were also able to solve the equations describing the most likely model analytically. These solutions allow us to explore the behaviour of the bottleneck in detail, and we use this ability to propose several therapeutic approaches to increase the "power" of the bottleneck, and to increase the accuracy of sampling in IVF approaches.
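For readers who haven't met it, rejection ABC for model choice works roughly as follows: repeatedly pick a model and parameters from their priors, simulate data, and keep the draws whose simulated summaries land close to the measurements; the surviving counts approximate posterior model probabilities. A deliberately tiny cartoon (the two "mechanisms", their summaries, and the data below are all made up, and real ABC uses stochastic simulators):

```python
import numpy as np

rng = np.random.default_rng(2)

def summaries(model, theta):
    """Toy deterministic stand-ins for bottleneck simulators: return the
    heteroplasmy variance at three developmental time points."""
    times = np.array([1.0, 2.0, 3.0])
    if model == "copy_number_reduction":
        return 0.25 * times / theta                 # theta: minimum copy number
    return 0.25 * (1 - np.exp(-theta * times))      # "random_turnover"; theta: rate

observed = np.array([0.005, 0.010, 0.015])          # invented "measurements"

counts = {"copy_number_reduction": 0, "random_turnover": 0}
for _ in range(200_000):
    model = list(counts)[rng.integers(2)]           # uniform prior over models
    theta = rng.uniform(10, 500) if model == "copy_number_reduction" else rng.uniform(0.001, 0.1)
    if np.abs(summaries(model, theta) - observed).max() < 0.002:
        counts[model] += 1                          # accept: summaries close to data

total = max(1, sum(counts.values()))
print({m: round(c / total, 2) for m, c in counts.items()})
```

In the paper the simulators are stochastic models of the germline, the summaries are measured heteroplasmy statistics, and the accepted draws characterise which bottleneck mechanisms the data support.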




A "bottleneck" acts to increase mtDNA variability between generations. But how is this bottleneck manifest? Our approach suggests that a combination of copy number reduction (pictured as a "true" copy number bottleneck), and later random turnover of mtDNA (pictured as replication and degradation), is responsible.



Our excellent experimental collaborators, led by Joerg Burgstaller, then tested our theory by taking mtDNA measurements from a model mouse that differed from those used previously and which could, in principle, have shown different behaviour. The behaviour they observed agreed very well with the predictions of our theory, providing encouraging validation that we have identified a likely mechanism for the bottleneck. The new measurements also showed, interestingly, that the behaviour of the bottleneck looks similar in genetically diverse systems, providing evidence for its generality. You can read about this in the free (open-access) journal eLife under the title "Stochastic modelling, Bayesian inference, and new in vivo measurements elucidate the debated mtDNA bottleneck mechanism". Iain and Nick

Monday, 27 April 2015

The function of mitochondrial networks

Mitochondria are dynamic energy-producing organelles, and there can be hundreds or even thousands of them in one cell. Mitochondria (as we've blogged about before - e.g. here) do not exist independently of each other: sometimes they form giant fused networks across the cell, sometimes they are fragmented, and sometimes they take on intermediate shapes. Which state is preferred (fragmented, fused or in between) seems to depend on, for example, cell-division stage, age, nutrient availability and stress levels. But what exactly is the reason for the cell preferring one morphology over another?
Nonlinear phenomena -- like some percolation effects -- could help account for the functional advantage of mitochondrial networks
We recently wrote an open-access paper (free here in the journal BioEssays) in which we try to answer the question: what is it about fused mitochondrial networks that could make them preferable to fragmented mitochondria? Our paper differs from previous work in that we attempt to use a range of mathematical tools to gain insight into this complex biological system, and we try to get at the root physiological and physical roles. We use physical models, simulations, and numerical estimations to compare ideas, to reason about existing hypotheses, and to propose some new ones. Among the possibilities we consider are the effects of fusion on mitochondrial quality control, on the spread of important protein machinery throughout the cell, on the chemistry of important ions, and on the production and distribution of energy through the cell. The models we use are quite simple, but we propose ideas for improving them, and experiments that will lead to further progress.

Taking a mathematical perspective leads to a central idea: for fused mitochondria to be 'preferred' by the cell, there must be some nonlinear advantage to fusion. That's what the fuzzy line is representing in the figure above. A big mitochondrion formed by fusing two smaller ones must in some sense be 'better' than the sum of the two smaller ones, or there would be no reason why a fused state is preferred.

Mitochondria can fuse to form large continuous networks across the cell. From a mathematical and physical viewpoint, we evaluate existing and novel possible functions of mitochondrial fusion, and we suggest both experiments and modelling approaches to test hypotheses
What is the source of this nonlinearity? We find several physical and chemical possibilities. Large pieces of fused mitochondria are better at sharing their contents (e.g. proteins, enzymes, and possibly even DNA) than smaller pieces of fused mitochondria. If the 'fusedness' of the mitochondrial population increases by a factor of two, the efficiency with which they share their contents increases by more than two! Also, fusion can reduce damage. If a mitochondrion gets physically or chemically damaged, having some fused non-damaged neighbours can help to reduce the overall harm to the cell. Finally, fusion may increase energy production because of a nonlinear chemical dependence of energy production on mitochondrial membrane potential. Fusing more mitochondria may, under certain circumstances, have the effect of increasing energy production. Hanne, Iain and Nick
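A postscript for the mathematically curious: the "nonlinear advantage" requirement can be stated as superadditivity -- fusion pays only if the benefit f of a fused unit satisfies f(a+b) > f(a) + f(b). A few-line illustration, assuming (purely for the example) that the content-sharing benefit scales like the number of pairwise exchange routes:

```python
def benefit(n):
    """Assumed benefit of a fused unit of n mitochondria: the number of
    pairwise content-exchange routes, n*(n-1)/2 (superadditive in n)."""
    return n * (n - 1) / 2

for a, b in [(2, 2), (5, 3), (10, 10)]:
    print(f"fused: f({a + b}) = {benefit(a + b):4.0f}   "
          f"separate: f({a}) + f({b}) = {benefit(a) + benefit(b):4.0f}")
```

Any concave (sublinear) benefit would reverse the inequality and favour fragmentation, which is why identifying the sources of nonlinearity matters.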

Thursday, 11 December 2014

Turbocharging the back of the envelope

The numbers that we use to describe the world are rarely exact. How long will it take you to drive to work? Perhaps "between 20 and 30 minutes". It would be unwise (and unnecessary) to say "exactly 23.4 minutes".

This uncertainty means that "back-of-the-envelope" calculations are very valuable for estimating and reasoning about numerical problems, particularly in the sciences. The idea here is to perform a calculation using rough guesses of the quantities involved, to get an "order of magnitude" estimate of the answer you're after. Made famous in physics as "Fermi problems" after Enrico Fermi (who used rough reasoning to deduce quantities from the power of an atomic bomb to the number of piano tuners in Chicago), this approach is integral to many current applications of maths and science. Cool books like "Street-fighting Mathematics", "Guesstimation" and "Back of the envelope physics", the excellent "What If?" section of xkcd, and the lateral interview questions facing some job candidates ("how much of the world's water is contained in a cow?") are all examples.

Calculations in biology, such as the time it takes for a protein (foreground) to diffuse through an E. coli cell (background), are often subject to large uncertainties. Our approach and web tool allows us to track this uncertainty and obtain a probability distribution over possible answers (plotted).
We've built a free online calculator (Caladis -- calculate a distribution) that complements this approach by allowing one to take the uncertainty in one's estimates into account throughout a calculation. For example, what volume of CO2 is produced by our yearly driving? We could say that we cover 8000 miles per year "give or take" 1000 miles, and find that our car's CO2 emissions are between 100 and 150 grams per kilometre. Our calculator allows us to do the necessary conversions and sums while taking this possible variability into account -- doing maths with "probability distributions" describing our uncertainty. We no longer obtain a single (possibly inaccurate) answer, but a distribution telling us how likely any particular answer is -- in this case a rather concerning bell-shaped distribution between 1 and 2 tonnes, which can be viewed here.
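The same calculation is easy to reproduce as a Monte Carlo sketch (the distributional choices below -- a normal for "give or take" and a uniform for the stated range -- are our assumptions, and Caladis lets you pick others):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

miles = rng.normal(8000, 1000, N)        # "8000 give or take 1000" -> normal (assumed)
g_per_km = rng.uniform(100, 150, N)      # "between 100 and 150 g/km" -> uniform (assumed)

tonnes = miles * 1.609 * g_per_km / 1e6  # miles -> km, then grams -> tonnes
print(f"yearly CO2: mean {tonnes.mean():.2f} t, "
      f"95% interval [{np.percentile(tonnes, 2.5):.2f}, {np.percentile(tonnes, 97.5):.2f}] t")
```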

In the sciences, particularly in biology, measurements often have substantial uncertainties -- due to experimental error, natural variability in the system of interest, or both -- and so using distributions rather than single numbers in calculations allows us to understand and process more about the question of interest. "Back-of-the-envelope" calculations are certainly useful in biology but, owing to the uncertainties involved, one can trust one's estimates better if one has a smart envelope that takes that uncertainty into account.  We've written an accompanying paper "Explicit tracking of uncertainty increases the power of quantitative rule-of-thumb reasoning in cell biology" (free to all in Biophysical Journal) showing how to use our calculator -- in conjunction with the excellent Bionumbers online database, a collection of (often uncertain) experimental measurements in biology -- to make real biological calculations more powerful. Do have a go at using our calculator at www.caladis.org : it's user-friendly and there are lots of examples showing how it works! Iain and Nick

Thursday, 4 December 2014

Therapies for mtDNA disease: models and implications

Mitochondrial DNA (mtDNA) is a molecule in our cells that contains information about how to build important cellular machines that provide us with the energy required for life. Mutations in mtDNA can prevent our cells from producing these machines correctly, causing serious diseases. Mutant mtDNA can be passed from a carrier mother to her children, and as the amount of mutated mtDNA inherited can vary, children's symptoms can be much more severe (often deadly) than those in the mother.

Several therapies exist to prevent or minimise the inheritance of mutant mtDNA from mother to daughter. These range from simply using a donor mother's eggs (in which case the child inherits no genes from the "mother") to amazing new techniques where a mother's nucleus is transferred into a donor's egg cell which has had its nucleus removed (so that the child inherits nuclear DNA from the mother and father, and healthy mtDNA from the donor). The UK is currently debating whether to allow these new therapies: several potential scientific issues have been identified in their application.

If a mother carries an mtDNA mutation, then (A) without clinical intervention her child may inherit that mutation and develop an mtDNA disease. Several "classical" (B-C) and modern (D-E) strategies exist to attempt to prevent the inheritance of mutant mtDNA, which we review (see paper link below)


As experiments with human embryos are heavily restricted, experiments in animals provide the bulk of our knowledge about how these therapies may work. We have previously written about our research in mice, highlighting a possible issue arising from mtDNA "segregation", where one type of mtDNA (possibly carrying a harmful mutation) may proliferate over another: this phenomenon could, in some circumstances, nullify the beneficial effects of mtDNA therapies. Another possible issue involves the effects of "mismatching" between the mother and father's nuclear DNA and the donor's mtDNA: current experimental evidence is conflicted regarding the strength of this effect. Finally, mismatch between donor mtDNA and any leftover mother mtDNA may also lead to biological complications.

We have recently written a paper explaining and reviewing the current state of knowledge of these effects, summarising the evidence from existing animal experiments. We are positive about implementing these therapies, which have the potential to prevent the inheritance of devastating diseases. However, we advise caution in this implementation, noting that several scientific questions remain debated or unanswered. We particularly highlight that "haplotype matching", a strategy to ensure that donor and mother mtDNA are as similar as possible, would largely remove these concerns. Iain

Wednesday, 12 November 2014

Mitochondrial motion in plants



Mitochondria are often likened to the power stations of the cell, producing energy that fuels life's processes. However, compared to traditional power stations, they're very dynamic: mitochondria move through the cell, and fuse together and break apart (among other things). Interestingly, their ability to move and undergo fusion and fission affects their functionality, and so has powerful implications for understanding disease and cellular energy supplies.


Because of this central role, it is important to understand the fundamental biological mechanisms that govern mitochondrial dynamics. Several important genes controlling mitochondrial dynamics are known in humans (and other organisms), but plant mitochondria (despite the fundamental importance of plant bioenergetics for our society) are less well understood.
Our collaborators, David Logan and his team, working with a plant called Arabidopsis, observed that a particular gene, entertainingly called "FRIENDLY", affected mitochondrial dynamics when it was artificially perturbed. (This approach, artificially interfering with a gene to explore the effects that it has on the cell and the overall organism, is a common one in cell biology.) We've just written a paper with them "FRIENDLY regulates mitochondrial distribution, fusion, and quality control in Arabidopsis" (free here) exploring these effects. Plants with disrupted FRIENDLY had unusual clusters of mitochondria in their cells, their mitochondria were stressed, and cell death and poor plant growth resulted.

Simulation of mitochondrial dynamics

We used a 3D computational and mathematical model of randomly-moving mitochondria within the cell to show that an increased "association time" (the friendly mitochondria stick around each other for longer) was sufficient to explain the experimental observations of clustered mitochondria. Our paper thus identifies an important genetic player in determining mitochondrial dynamics in plants; and explores in substantial detail the intra-cellular, bioenergetic, and physiological implications of perturbation to this important gene. Iain and Nick
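A postscript for modelling enthusiasts: a much-reduced version of the modelling logic (one dimension instead of three, and all numbers invented) shows how a longer "association time" alone can produce clustering:

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_cluster_size(stick, n_mito=50, ring=200, steps=5000):
    """Mitochondria random-walk on a ring; on meeting, a pair becomes stuck
    for a geometric number of steps with mean ~stick. Longer association
    times (our cartoon of the FRIENDLY effect) should give bigger clusters."""
    pos = rng.integers(0, ring, n_mito)
    stuck = np.zeros(n_mito, dtype=int)          # steps left before free to move
    for _ in range(steps):
        free = stuck == 0
        pos[free] = (pos[free] + rng.choice([-1, 1], free.sum())) % ring
        sites, counts = np.unique(pos, return_counts=True)
        for s in sites[counts > 1]:              # co-located mitochondria associate
            idx = np.where(pos == s)[0]
            stuck[idx] = np.maximum(stuck[idx], rng.geometric(1.0 / stick, idx.size))
        stuck[stuck > 0] -= 1
    _, counts = np.unique(pos, return_counts=True)
    return counts.mean()

for stick in (2, 20, 200):
    print(f"mean association ~{stick:>3} steps -> mean cluster size {mean_cluster_size(stick):.2f}")
```

This is a cartoon of the mechanism only; the paper's model is three-dimensional and fitted against the experimental cluster-size observations.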


Thursday, 23 October 2014

'Mitoflashes' indicate acidity changes rather than free radical bursts


As we've written about before, mitochondria generate the energy required by our cells through respiration, which uses an "electrochemical gradient" as an energy store (a bit like pumping water up into a reservoir, then harnessing its flow back down the hill to turn a turbine) and produces superoxide (free oxygen radicals) as a by-product (a bit like sparks when the pumps are running hot). The fundamental importance of this machinery, which not only delivers energy but is also involved in disease and aging, has led to its investigation in great molecular detail (comparable to taking the turbines and generators apart to learn about their function). Much less is known about how mitochondria actually behave when they are fully functional in their natural environment inside our cells (comparable to looking at the fully intact and running turbine), and progress has been difficult since suitable "tools" are scarce.

A debate exists in the scientific literature about one of the key "tools" used in the investigation of living cells. A particular fluorescent sensor protein called cpYFP (circularly permuted yellow fluorescent protein) is used in biological experiments, ostensibly as a way of measuring the levels of superoxide (free oxygen radicals) in a mitochondrion. Our colleagues, however, have cast doubt on the ability of cpYFP to measure superoxide, providing evidence that it instead responds to pH, part of the above electrochemical gradient. This debate is complicated by the fact that, in biology, pH and superoxide can vary together, as the amount of "driving" and the amount of "sparks" might be expected to.

As another analogy: if we found an unknown measuring device and did not know how it works, but saw that it responds during sunny weather, we might conclude that it measures warm temperature. However, it may in fact measure high atmospheric pressure, which is, like warm temperature, often correlated with good weather.
The protein cpYFP changes its fluorescence in response to pH changes, but is unaffected by superoxide changes.

A recent and fascinating paper in Nature observed that "flashes" of the cpYFP sensor during early development of worms (as a model for other animals and humans) were correlated with their eventual lifespan. However, despite the debate about what exactly the cpYFP sensor measures, the paper interpreted it as responding to superoxide, viewing the correlation in the light of the so-called "free radical theory of aging". This long-standing and much-debated theory hypothesises that we age, and eventually die, because the constant production of free oxygen radicals in our mitochondria steadily damages our cells, weakening their energetic machinery and making them prone to illness.

In response to this, our colleagues decided to settle the question of what the sensor actually measures chemically, removing biological complications from the system. In the analogy of the unknown measurement device, the device was now tested under controlled temperature and controlled pressure to clearly distinguish between the two. They produced an experimental setup in which a mix of chemicals was used to generate superoxide in the absence of any pH change. cpYFP in this mix did not show any signal, demonstrating that it is unresponsive to superoxide. In concert, they showed that even small changes in pH produced a dramatic response in the cpYFP signal. Finally, they investigated the physical structure of cpYFP, showing that a large opening in the barrel-like structure of the protein exposes a pH-sensitive chemical group to its environment (comparable to showing exactly how the inner mechanics of the unknown measurement device can pick up pressure changes). We thus concluded, in a recent publication "The 'mitoflash' probe cpYFP does not respond to superoxide" (in the journal Nature here), that the cpYFP sensor reports pH rather than superoxide, and that results using cpYFP (including the above Nature paper, which remains fascinating) should be interpreted as such. Iain, Markus and Nick

Friday, 6 June 2014

Evolutionary competition within our cells: the maths of mitochondrial DNA

Women may carry mutated copies of mitochondrial DNA (mtDNA) -- a molecule that describes how to build important cellular machinery relating to cellular energy supply. If this mutant mtDNA is passed on to that woman's child, the child may develop a mitochondrial disease; such diseases are often degenerative, fatal, and incurable.

Joerg created mice that contained two types of mtDNA -- here illustrated as blue (lab mouse mtDNA) and yellow (mtDNA from a mouse from a wild population). We used several different wild mice from across Europe to represent the mtDNA diversity one may find in a human population. We found that throughout a mouse's lifetime, one mtDNA type often outcompetes another (here, yellow beats blue), with different patterns across different tissues.
Amazing new therapies potentially allow a carrier mother A and a father B to use another woman C's egg cells to conceive a baby without much of mother A's mtDNA being present. The approach involves taking nuclear DNA content from A and B (so that most of the child's features are inherited from the true mother and father), and placing it into C's egg cells, which contain a background of healthy mtDNA. You can read about what are misleadingly called "three-parent babies" here.


Something that is less discussed is that, in this process, a small amount of A's mutant mtDNA can be "carried over" into C's cell. If this small amount remains small through the child's life, there is no danger of disease, as the larger amount of healthy C mtDNA will allow the child's cell to function normally. We can think of the resulting situation as a competition between A and C -- if A and C are evenly matched, the small amount of A will remain small; if C beats A, the small amount of A will disappear with time; and if A beats C, the small amount of A will increase and may eventually come to dominate over C.

Until recently it seemed fair to assume that A and C are always about evenly matched (unless something is drastically different between A and C). However, evidence for this idea was based on model organisms in laboratories, which do not have the same amount of genetic diversity as is found in human populations. Our collaborator Joerg addressed this by capturing wild mice from across central Europe, selecting a set that showed a degree of genetic diversity comparable to that expected in a human population. He used these, with our modelling and mathematical analysis, to show that pronounced differences between A and C often exist, and are more likely in more diverse populations. The possibility that A beats C, and mutant mtDNA comes to dominate the child's cells, therefore cannot be immediately discounted in a diverse population. We propose "haplotype matching" -- ensuring that A and C are as similar as possible -- to ameliorate this potential risk. It remains open whether one can generalise from observations in mice to people, and also whether our conclusions, which used lab mice (not entirely typical creatures) as parent A, necessarily generalise to other non-lab mouse types.

Our mathematical approach also allowed us to explore, in detail, the dynamics by which this competition within cells occurs. We were able to use our data rather effectively by having a statistical model that allowed us to reason jointly about a range of data sets. We found that the degree to which one population of mtDNA beat the other depended on how genetically different they were. Different tissues acted like different environments: some favoured C over A and some vice versa. This is perhaps surprising, as this evolution in the proportions of different genetic species is not something we imagine occurring inside us during our lives, let alone differing between our organs. We also found several different regimes, where the strength of competition changes with time and as the organism develops: when our cells are multiplying faster, they show a more marked preference for one of the species. We've shown our results to the UK HFEA in its ongoing assessment of these therapies, and you can read, for free, about our work, "mtDNA Segregation in Heteroplasmic Tissues Is Common In Vivo and Modulated by Haplotype Differences and Developmental Stage", in the journal Cell Reports here. Iain, Joerg, Nick.


We found that one mtDNA type beat another in different ways across many different tissue types. Here, the height (or depth) of a column represents how much the mtDNA from a wild mouse wins (or loses) against that from a lab mouse in different tissues. The bottom row corresponds to the smallest difference between wild and lab mtDNA; the top row corresponds to the greatest difference.

Thursday, 10 April 2014

What's the difference? Telling apart two sets of signals

We are constantly observing ordered patterns all around us, from the shapes of different types of objects (think of different leaf shapes, or yoga poses), to the structured patterns of sound waves entering our ears and the fluctuations of wind on our faces. Understanding the structure in observations like these has much practical utility: for example, how do we make sense of the ordered patterns of heart beat intervals for medical diagnosis, or the measurements of some industrial process for quality checking? We have recently published an article on automatically learning the discriminating structure in labelled datasets of ordered measurements (or time series, or signals) -- that is, what is it about production-line sensor measurements that predicts a faulty process, or what is it about the shape of Eucalyptus leaves that distinguishes them from other types of leaves?

Conventional methods for comparing time series (within the area of time-series data mining) involve comparing their measurements through time, often using sophisticated methods (with science-fiction names like "dynamic time warping") that squeeze together pairs of time-series patterns to find the best match. This approach can be extremely powerful, allowing new time series to be classified (e.g., in the case of a heart beat measurement, labelling it as a "healthy" heart beat or "congestive heart failure"; or in the case of leaf shapes, labelling it as "Eucalyptus", "Oak", etc.) by matching them to a database of known time series and their classifications. While this approach can be good at telling you whether your leaf is a "Eucalyptus", it does not provide much insight into what it is about Eucalyptus leaves that is so distinctive. It also requires every new leaf to be compared to all the other leaves in the database, which can be an intensive process.


A) Comparing time series by alignment. B) Comparing time series by their structural features: here we probe many structural features of the time series simultaneously (ii) and then distil out the relevant ones (iii).
Our method learns the properties of a given class of time series (e.g., the distinguishing characteristics of Eucalyptus leaves) and classifies new time series according to these learned properties. It does so by simultaneously comparing thousands of different time-series properties, developed in previous work that we blogged about here. Although there is a one-time cost to learn the distinguishing properties, this investment provides interpretable insights into the properties of a given dataset (very useful for scientists who want to understand the difference between their control data and the data from their experimental interventions) and can allow new time series to be classified rapidly (a minimal sketch of the idea is below). The result is a general framework for understanding the differences in structure between sets of time series. It can be used to understand differences between various types of leaves, heart beat intervals, industrial sensors, yoga poses, rainfall patterns, etc., and is a contribution to helping the data science/big-data/time-series data mining literature deal with...bigger data.
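A minimal sketch of the feature-based idea, with four hand-picked features standing in for the thousands used in the real pipeline, and synthetic data in place of leaves or heart beats:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def features(x):
    """Four simple global properties of a series (a tiny stand-in for the
    thousands of features used in the real method)."""
    return [x.mean(), x.std(),
            np.corrcoef(x[:-1], x[1:])[0, 1],    # lag-1 autocorrelation
            np.abs(np.diff(x)).mean()]           # mean successive change

def make_series(cls, n=500):
    """Two synthetic classes: white noise (0) vs a smoother AR(1) process (1)."""
    if cls == 0:
        return rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.8 * x[t - 1] + rng.normal()
    return x

labels = np.array([0, 1] * 100)
X = np.array([features(make_series(c)) for c in labels])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
print("feature importances:", clf.fit(X, labels).feature_importances_.round(2))
```

The interpretable part is the last line: rather than a black-box match to a training database, you learn that (say) the autocorrelation is what separates the classes.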
Each of the dots corresponds to a time series. The colours correspond to (computer generated) time series of six different types. We identify features that allow us to do a good job of distinguishing these six types.

Our work will be appearing under the name "Highly comparative feature-based time-series classification" in the acronymically titled IEEE TKDE, and you can find a free version of it here. Ben and Nick.

Wednesday, 9 April 2014

Polyominoes: mapping genotypes to phenotypes


Biological evolution sculpts the natural world and relies on the conversion of genetic information (stored as sequences, usually of DNA, called genotypes) into functional physical forms (called phenotypes). The complicated nature of this conversion, which is called a genotype-phenotype (or GP) map, makes the theoretical study of evolution very difficult. It is hard to say how a population of individuals may evolve without understanding the underlying GP map.

This is due to the two fundamental forces of evolution -- mutations and natural selection -- acting on different aspects of an organism. Mutations occur to genotypes (G), while natural selection, the ultimate adjudicator of the fate of mutations in the population, acts on the phenotype (P). Without understanding the link between these two -- the GP map -- we can't easily say, for example, how many mutations we expect important proteins within a virus strain to undergo with time, and thus how quickly the virus will evolve to be unrecognised by our immune systems.

Simple models for the mapping of genotype to phenotype have helped answer important questions for some model biological systems, such as RNA molecules and a coarse-grained model of protein folding. One important class of biological structure that has not yet been modelled in this way is protein complexes: structures formed through proteins binding together, fulfilling vital biological functions in living organisms. In this work, we introduce the "polyomino" model, based on the self-assembly of interacting square tiles to form polyomino structures. The square tiles that make up a polyomino are assigned different "sticky patches", modelling the interactions between different proteins that form a complex. A huge range of structures can be formed by varying the details of these patches, mimicking the range of protein complexes that exist in biology (though there are some obvious differences in the shapes of structures that can be formed).
Our simple model explores the interactions between protein subunits, and how these interactions shape a surface that evolution explores. (top) Sickle-cell anemia involves a mutation that changes the way proteins interact, making normally independent units form a dangerous extended structure. (bottom) Our polyomino model captures this effect. The resultant dramatic effects on structure, fitness, and evolution can then be explored.
Despite its abstraction, we show that the polyomino model displays several important features which make it a potentially useful model for the GP map underlying protein complex evolution. On top of this, we demonstrate that our model possesses similar properties to RNA and protein folding models, interestingly suggesting that universal features may be present in biological GP maps, and that the "landscapes" upon which evolution searches may thus have general properties in common. You can find the paper free here, read about polyominoes here, and play a game here. Iain

Tuesday, 1 April 2014

Fast inference about noisy biology

Biology is a random and noisy world -- as we've written about several times before! (e.g. here and here) This often means that when we try to measure something in biology -- for example, the number of a particular type of proteins in a cell, or the size of a cell -- we'll get rather different results in each cell we look at, because random differences between cells mean that the exact numbers are different in each case. How can we find a "true" picture? This is rather like working out if a coin is biased by looking at lots of coin-flip results.

Measuring these random differences between cells can actually tell us more about the underlying mechanisms for things like (to use the examples above) the cellular population of proteins, or cellular growth. However, it's not always straightforward to see how to use these measurements to fill out the details in models of these mechanisms. A model of a biological process (or any other process in the world) may have several "parameters" -- important numbers that determine how the model behaves (the bias of a coin is an example, telling us what proportion of flips will come up heads). These parameters may include, for example, the rates with which proteins are produced and degraded. The task of using measurements to determine the values of these parameters in a model is generally called "parametric inference". In a recent paper, I describe an efficient way of performing this parametric inference given measurements of the mean and variance of biological quantities. This allows us to find a suitable model for a system describing both the average behaviour and typical departures from this average: the amount of randomness in the system. The algorithm I propose is an example of approximate Bayesian computation (ABC), which allows us to deal with rather "messy" data; I also describe a fast (analytic) approach that can be used when the data is less messy (Normally distributed).


Parametric inference often consists of picking a trial set of parameters for a model and seeing if the model with those parameters does a good job of matching experimental data. If so, those parameters are recorded as a "good" set; otherwise, they're discarded as a "bad" set. The increase in efficiency in my proposed approach comes from performing a quick, preliminary check to see if a particular parameterisation is "bad", before spending more computer time on rigorously showing that it is "good" (a cartoon of the idea is sketched below). I show a couple of examples in which this preliminary checking (based on fast computation of mean results before using stochastic simulation to compute variances) speeds up the process by 20-50% on model biological problems -- hopefully allowing some scientists to grab a little more coffee time! This work will be coming out in the journal Statistical Applications in Genetics and Molecular Biology with the title "Efficient parametric inference for stochastic biological systems with measured variability" and you'll find the article (free) here. Iain
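Here is a cartoon of that pre-screening trick for a birth-death ("immigration-death") model, where the stationary mean k/g is known analytically but we pretend the variance must be simulated. The priors, tolerances, and "data" are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

obs_mean, obs_var = 50.0, 55.0    # invented measured mean and variance

def simulated_variance(k, g, t_end=50.0, reps=20):
    """The 'expensive' step: crude Gillespie estimate of the stationary
    variance of a birth-death process (birth rate k, death rate g per molecule)."""
    finals = []
    for _ in range(reps):
        n, t = 0, 0.0
        while t < t_end:
            total = k + g * n
            t += rng.exponential(1.0 / total)
            n += 1 if rng.random() < k / total else -1
        finals.append(n)
    return np.var(finals)

accepted = []
for _ in range(500):
    k, g = rng.uniform(1, 100), rng.uniform(0.1, 5)
    if abs(k / g - obs_mean) > 5:              # cheap analytic check on the mean...
        continue                               # ...rejects most draws before simulating
    if abs(simulated_variance(k, g) - obs_var) < 15:
        accepted.append((k, g))

print(len(accepted), "accepted parameter sets (of 500 proposed)")
```

Only draws that already match the instantly-computable mean pay the simulation cost; this is the source of the 20-50% savings reported in the paper, where the models and checks are more sophisticated.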

Tuesday, 1 October 2013

Inferring the evolutionary history of photosynthesis: C4 yourself

Biological evolution is a complex, stochastic process which dictates fundamental properties of life. Our understanding of evolutionary history is severely limited by the sparsity of the fossil record: we only have a handful of fossilised snapshots to infer how evolution may have progressed throughout the history of life. Many physicists and mathematicians have attempted theoretical treatments of the process of evolution, using varying degrees of abstraction, in order to provide a more solid quantitative foundation with which to study this complex and important phenomenon, but the predictive power of these theoretical models, and their ability to answer specific biological questions, is often questioned.

Figure. (left) Examples of steps in an evolutionary space that involve individual changes from the absence of a C4 feature (0) to the presence of that feature (1). The bitstrings represent possible sets of plant features. (right) Steps from C3 to C4 embedded in the high-dimensional evolutionary space involved in our model. Coloured points mark sets of plant features that are compatible with one or more plants that currently exist: pathways involving these compatible sets are more likely to represent evolutionary history. 




We recently focussed on one remarkable product of evolution in plants: so-called "C4 photosynthesis". C4 consists of a complex set of changes to the genetic and physiological features which have evolved in some plants and act to increase the efficiency of photosynthesis. This complex set of changes has evolved over 60 times convergently: that is, plants from many different lineages independently "discover" C4 photosynthesis through evolution. We were interested in the evolutionary history of how these discoveries occurred -- both motivated by fundamental biology and the possibility of "learning from evolution" and using information about the evolution of C4 to design more efficient crop plants.

To this end, we modelled the evolution of C4 as a pathway through a space containing many different possible plant features. The pathway starts at C3 -- the precursor to C4 -- and progressively takes steps in different directions, acquiring one-by-one the features that sum up to C4 photosynthesis. Using a survey of plant properties from across the wide scientific literature, we identified which intermediate states these pathways were likely to pass through, given observed properties of plants that currently possess some, but not all, C4 features. We were then able to use a new inference technique to predict the ordering in which these likely pathways traverse the evolutionary space. We showed that this approach worked by both successfully inferring the known evolutionary steps in synthetic datasets and correctly predicting previously unknown properties of several plants, which we verified experimentally. Our (open access) paper is here and there's a less technical summary and commentary here. Our approach showed that C4 photosynthesis can evolve through a range of distinct evolutionary pathways, providing a potential explanation for its striking convergence. Several of these different pathways were made explicitly visible when we examined the inferred evolutionary histories of different plant lineages -- different families are likely to have converged on C4 through different evolutionary routes. Furthermore, the most likely initial steps towards C4 photosynthesis are surprisingly not directly related to photosynthesis, being solutions to different biological challenges, but also providing evolutionary "foundations" upon which the machinery of C4 can evolve further. We hope that the recipes for C4 photosynthesis that we have inferred find use in efficient crop design, and anticipate our inference procedure being of use in the study of other specific biological questions regarding evolutionary histories. Iain
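As a postscript, the skeleton of the pathway calculation can be shown in a few lines. Here three binary traits stand in for the dozens of real C4 features, and the set of "compatible" intermediate states is invented; in the paper it comes from the literature survey of extant plants:

```python
from itertools import permutations

# States are bitstrings over 3 traits; evolution walks 000 -> 111 one trait
# at a time. Intermediate states seen in real plants make a pathway likelier.
compatible = {"000", "100", "110", "101", "111"}   # invented for the sketch

def pathway_score(order):
    """Count how many intermediate states along this acquisition order are
    compatible with observed plants (a crude likelihood stand-in)."""
    state = ["0", "0", "0"]
    score = 0
    for trait in order:
        state[trait] = "1"
        score += "".join(state) in compatible
    return score

for order in sorted(permutations(range(3)), key=pathway_score, reverse=True):
    print("acquire traits in order", order, "-> score", pathway_score(order))
```

The real analysis works on a much larger hypercube, scores pathways probabilistically, and infers posterior orderings of trait acquisition rather than simple counts.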

Wednesday, 3 April 2013

A compound methodological eye on nature’s signals


A compound methodological eye on nature's signals: background signals are both empirical (e.g. ECGs and human speech) and simulated (e.g. correlated noise and maps); the arctic krill eye shows output from thousands of time-series analysis methods wrapped around it [Fig. 1 of our paper, showing the results of applying 8651 methods to a set of time series]. Image created by B. D. Fulcher. Accreditation details for the krill eye can be found here.

We are constantly interacting with signals in the world around us: noticing the fluctuating breeze against our faces, observing the intermittent flickering of a candle, or becoming absorbed in the regularity of our own pulse. Researchers across science have developed highly sophisticated methods for understanding the structure in these types of time-varying processes, and for identifying the types of mechanisms that produce them. However, scientists collaborate across disciplines surprisingly rarely, and therefore tend to use a small number of familiar methods from their own discipline. But how do the standard methods used in economics relate to those used in biomedicine or statistical physics?

In a recent article, "Highly comparative time-series analysis: the empirical structure of time series and their methods", which appeared, free to access, in Journal of the Royal Society Interface, we investigated what can be learned by comparing such methods from across science simultaneously. We collected over 9000 scientific methods for analysing signals, and compared their behaviour on a collection of over 35,000 diverse real-world and model-generated time series. The result provides a more unified and highly comparative scientific perspective on how scientists measure and understand structure in their data. For example, we showed how methods from across science that display similar behaviour to a given target method can be retrieved automatically, and likewise how real-world or model-generated data with properties similar to a target time series can be retrieved. Further examples of the kinds of questions we ask are in the boxes in the figure below. The result provides an interdisciplinary scientific context for both data and their methods. We also introduced a range of techniques for exploiting our library of methods to treat specific challenges in classification and medical diagnosis. For example, we showed how useful methods for diagnosing pathological heart beat series or Parkinsonian speech segments can be selected automatically, often yielding unexpected methods developed in disparate disciplines or in the distant past.

Representing a time series by the behaviour of a set of automatically selected statistical methods and, unusually, representing statistical methods by their behaviour on a set of time series provides a form of empirical fingerprint for our time series and our methods. Given this fingerprint we can automatically answer questions like those posed in the boxes above. This gives us a powerful complement to the more conventional process of studying our methods and our data. [Based on Fig. 2 of our paper]

We are developing a web platform to support this kind of comparative, interdisciplinary analysis, which can be found at http://www.comp-engine.org/timeseries/. The plan is for people to use it to exchange data and code for methods, and to place each object in its scientific context. Ben, Max and Nick

Tuesday, 5 February 2013

Evolutionary inference for functions

How might we reason about the forms of our unseen ancestors? I discuss a possible application to speech sounds in an earlier blog article (necrophonetics). A paper with John Moriarty providing the relevant theory recently appeared in Royal Society Interface as "Evolutionary inference for function-valued traits: Gaussian process regression on phylogenies" (free version from this page). The gist of the idea is that some things in nature, like sounds or patterns, evolve in time and are best described as mathematical functions. Gaussian processes are a class of stochastic process well suited to modelling the evolution of functions. An example of an evolving function is a drawing of a line that is copied repeatedly (see here for a movie of us having school students do this).

With the theory in place, Pantelis Hadjipantelis from Warwick (a student of John Aston), together with Chris Knight and David Springate, helped take this further. They investigated whether our theory could be made to work in practice, using carefully constructed simulated examples in which our best estimates of the characteristics of the evolutionary process, and of the forms of the ancestors, could be compared against the (simulated) reality. We did reasonably well. Along the way we used Independent Component Analysis -- a very handy method. This work will appear shortly in Royal Society Interface as "Function-Valued Traits in Evolution" (free version here). Having convinced ourselves of the method's relevance on simulated data, the next step is to consider real data that Chris Knight has -- that paper is underway. If this interests you, Mhairi Kerr produced a master's thesis on the topic, working with Vincent Macaulay, which has some further introductory content. Nick

Functions can evolve along evolutionary trees -- just like genetic sequences. On the left-hand side we show a simulation of function evolution. On the right we use the data from the leaves of the evolutionary tree to reconstruct the common ancestral function. The red line is the value of the function we expect/predict and the black line is the actual value (the grey region is a measure of our uncertainty).
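
For the curious, here is a minimal sketch of Gaussian process regression on a tiny, hypothetical phylogeny, assuming a simple separable covariance (tree kernel times curve kernel). The distances, kernels and length-scales are invented for illustration, not the covariance structure used in the paper:

```python
import numpy as np

# Curve kernel: squared-exponential in the function's argument.
def k_curve(x, y, ell=0.3):
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

# Tree kernel: covariance decays with patristic distance along the
# tree (an OU-like choice). Hypothetical distances between two leaves
# and their common ancestor.
D = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
Kt = np.exp(-D / 2.0)

x = np.linspace(0.0, 1.0, 30)       # where each function is sampled
fA = np.sin(2 * np.pi * x)          # toy "observed" functions at leaves
fB = np.sin(2 * np.pi * x + 0.3)

# Joint covariance over (taxon, argument) pairs, taxa ordered
# (leaf A, leaf B, ancestor).
K = np.kron(Kt, k_curve(x, x))
n = len(x)
obs, anc = slice(0, 2 * n), slice(2 * n, 3 * n)

y = np.concatenate([fA, fB])
K_oo = K[obs, obs] + 1e-6 * np.eye(2 * n)   # jitter for stability
K_ao = K[anc, obs]

# Standard zero-mean GP posterior mean for the unobserved ancestor.
ancestral_mean = K_ao @ np.linalg.solve(K_oo, y)
print(ancestral_mean[:5])
```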

Tuesday, 29 January 2013

Statistics vs Physics

While there's a whole branch of physics called statistical physics (probably a misleading title), physicists often get only a few hours of statistical training in their undergraduate degrees. This is surprising to some, who think of physicists as the most mathematical of scientists. In fact you can find a diversity of statistical crimes/accidents in physics papers (and I'm sure you can find them in my own). In partial acknowledgement of this, I organised this Royal Society Discussion Meeting and edited this volume of the Philosophical Transactions of the Royal Society, "Signal Processing and Inference for the Physical Sciences", with the excellent Prof Tom Maccarone (now in Physics and Astronomy at Texas Tech). Our goal was to expose physical scientists to some new topics in statistical inference, and some data analysts to physical challenges. Much of the volume is free, and there are also talks from the authors and slides on this page. We provide an introduction, "Inference for the Physical Sciences", which we hope can serve as a jumping-off point for physical scientists wanting to use statistical tools. Max Little also wrote an article highlighting some challenges in signal processing in biophysics, "Signal processing for molecular and cellular biological physics", putting some of our other work in context (see previous blog articles on finding steps beneath the noise and on molecular dance steps). For those with an interest in machine learning, I think the talks by Bishop, Ghahramani, Roberts and Hyvärinen are worth a look. Nick

Dr Ben Fulcher made the image above -- similar signals are linked up (see a pending blog article) and we have to guess whether the green event that mysteriously occurred in Russia was a blue test explosion or a red earthquake...

Wednesday, 3 October 2012

Exploring noise in cellular biology


We're used to thinking of machines as robust, hard-wearing objects made from solid materials like metal and plastic. If they crack, split or overheat they are liable to malfunction, and if we subject them to too much jostling and shaking we're asking for trouble. However, the biochemical machines responsible for keeping us alive work in a rather different world -- they're made from soft, organic materials, and contained in a disorganised bag (the cell) that is constantly shaken, bumping these machines against each other and against the cell's other inhabitants. How can the delicate processes required by living organisms take place in this chaotic environment? And how can scientific progress be made in studying such a tumultuous, unpredictable world?

Extrinsic factors can modulate the stability of essential, but noisy, cellular circuits


Iain recently wrote an article, targeted at a broad audience, looking at some of these questions. One of the most important cellular processes that has to take place in this chaotic world is that of 'gene expression': the interpretation of genetic blueprints which describe how to build cellular machinery, and the subsequent construction process. Gene expression can be likened to using a bad photocopier to copy books from a library that opens and closes randomly, then using these photocopies (which are prone to decay) to construct machines. This problematic environment gives rise to many medically important random effects, including bacterial resistance to antibiotics and differing responses to anti-cancer drugs. We are particularly interested in how fluctuating power supplies (see our other blog articles here, here and here!) influence the cell's ability to produce these machines, and what effects this unreliable power has on medically important processes. The article -- available here and appearing in the expository magazine Significance -- takes a look at how cellular noise arises, current techniques for its detection and analysis, and its influence on important biological phenomena. Iain
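
To make the photocopier analogy concrete, here is a minimal stochastic simulation -- a standard Gillespie algorithm on the two-state "telegraph" picture of gene expression, with illustrative rate values not taken from the article:

```python
import random

# Rates are illustrative only.
k_on, k_off = 0.1, 0.1   # library (gene) opening / closing rates
k_tx, k_deg = 5.0, 0.5   # photocopying (transcription) / decay rates

def gillespie(t_max=200.0, seed=1):
    random.seed(seed)
    t, gene_on, mrna = 0.0, 0, 0
    trace = []
    while t < t_max:
        rates = [k_on * (1 - gene_on),  # library opens
                 k_off * gene_on,       # library closes
                 k_tx * gene_on,        # a photocopy (mRNA) is made
                 k_deg * mrna]          # a photocopy decays
        total = sum(rates)
        t += random.expovariate(total)  # waiting time to next event
        r = random.uniform(0.0, total)  # pick which event fires
        if r < rates[0]:
            gene_on = 1
        elif r < rates[0] + rates[1]:
            gene_on = 0
        elif r < sum(rates[:3]):
            mrna += 1
        else:
            mrna -= 1
        trace.append((t, mrna))
    return trace

print("final transcript count:", gillespie()[-1][1])
```

Running this repeatedly gives very different transcript counts from run to run -- exactly the cell-to-cell variability the article discusses.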

Sunday, 23 September 2012

Organizing networks using their dense regions

Many systems in fields ranging from biology and sociology to politics and finance can be represented as networks. For example, in protein interaction networks each node represents a protein and each link, connecting a pair of nodes, quantifies the strength of the interaction between those proteins. Similarly, in political voting networks nodes represent politicians and the edges connecting pairs of politicians represent the similarity of their legislative voting records. Despite the significant differences between the underlying systems, the common network representation enables researchers in different fields to ask questions that can be surprisingly similar. It would therefore be useful to have a systematic method for highlighting similarities between networks from different fields, to identify problems that might be tackled using the same techniques. For example, if a biological network representing covariation of neural activity in different regions of the brain could be shown to be structurally similar to a financial network representing correlation of stock returns, certain analytical tools and models might be applicable to both problems.
A taxonomy of networks

In our paper, we tackle this problem by first developing a method to quantify the similarity of different networks based on their community structure. A community in a network, loosely put, is a set of nodes that are more connected to each other than to the rest of the network (like a group of friends who have the majority of their social interactions with each other). We introduce the idea of "mesoscopic response functions": curves that summarise the community structure of each network across different scales and enable us to define a single number quantifying the similarity of a pair of networks. Importantly, this approach allows us to compare networks with different numbers of nodes and different link densities. We then use this similarity measure to construct taxonomies of networks. From a historical perspective, classification of objects in this way has been central to the progress of science, as demonstrated by the periodic table of elements in chemistry and the phylogenetic tree of organisms in biology.
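
As a rough flavour of the idea -- a simplified analogue, not the mesoscopic response functions defined in the paper -- one can sweep a resolution parameter in a community-detection method, record how the community structure changes, and compare the resulting curves between networks:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

def response_curve(G, resolutions, seed=0):
    """Fraction of nodes in the largest community at each resolution."""
    curve = []
    for gamma in resolutions:
        comms = louvain_communities(G, resolution=gamma, seed=seed)
        curve.append(max(len(c) for c in comms) / G.number_of_nodes())
    return np.array(curve)

resolutions = np.linspace(0.2, 3.0, 15)
G1 = nx.karate_club_graph()                   # a classic social network
G2 = nx.erdos_renyi_graph(34, 0.14, seed=42)  # random graph, same size

c1 = response_curve(G1, resolutions)
c2 = response_curve(G2, resolutions)

# A single number summarising how differently the two networks'
# community structure responds across scales.
print("curve distance:", np.linalg.norm(c1 - c2) / np.sqrt(len(c1)))
```

Because the curves are normalised by network size, this kind of comparison works even when the networks have different numbers of nodes and links -- the property emphasised above.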

The taxonomies constructed using our approach are successful at grouping networks that are known to be similar. For example, political voting networks for the US Congress, the UK House of Commons and the United Nations are clustered together in the same group. Perhaps more importantly, the method also identifies networks that are not grouped with members of their own class and are therefore unusual in some way. For example, the Facebook network for Caltech is not grouped with the Facebook networks of other universities. We also used the technique to detect historically significant financial and political changes in temporal sequences of networks: we found that the stock market network corresponding to the 1987 crash and the voting network corresponding to the American Civil War stand out from their respective sequences of networks.

You can read the full story in our paper "Taxonomies of networks from community structure", Physical Review E 86, 036104 (2012). In the paper, we demonstrate the range of fields in which this approach can be usefully applied, using a set of 746 networks and case studies that include US Congressional voting, Facebook friendship, fungal growth, United Nations voting, and stock market return correlation networks. Dan, JP and Nick