Team:UNAM Genomics Mexico/Modeling
Modelling Overview
Why make a model?
In this chapter of the project’s description, we would like to take a moment to ask: why make a model at all? Admittedly, this part was written by the team members who built the model, so it would not be surprising if it turned out to be biased in favor of models being awesome (not just computational models; beauty pageants too). First, a model is quite useful when you are groping in the dark, searching for a project. It guides the design, helping to make sure the project is planned as well as possible. If it weren’t for the model, we might have chosen Escherichia coli transcription factors that cross-talk with Bacillus subtilis, instead of our current transcription factors, whose lack of cross-talk minimizes noise. Gossip may be enjoyable at the market, but it is better avoided at the molecular level.
Another reason to spend the time it takes to build a model is the prediction of behavior. Sure, nobody today expects Bacillus subtilis to explode violently when placed in an environment at 30 °C, but that expectation is itself a generalization that someone arrived at at some point; in a sense, we were already using a model whenever we grew Bacillus subtilis at 30 °C. Admittedly, that alone would not convince anyone to build a computational model (or to enter a beauty pageant), so let us use our own model and its predictions as an example. Our model predicted that our Boolean OR would indeed behave as a Boolean OR. Had it predicted anything else, chances are that many team members would have lost the will to live. In most civilizations, that kind of control over life and death would be argument enough for making a model.
The third and final argument for making a computational model is characterization in silico. Sure, Tony Stark and Batman can easily afford a huge number of wet-lab experiments, but for the rest of us mortals that is not always possible. This is where computers come to the rescue: instead of repeating the same experiment over and over while slightly varying the parameters, we can simulate the system and predict the outcome. Of course, building a simulation and finding the right parameters for it takes time, but it takes even more time to earn the money to pay for the darned experiments.
Justification of formalisms
In our project, we use different mathematical formalisms to predict the behavior of our system more accurately. In our experience, a simple ODE model is difficult to calibrate when the expected behaviors are not known in advance. Because mass-action kinetics implicitly assumes that every molecule can interact with every other, it is easy to overlook important concurrency delays under such a simplistic dynamic, and for logic gates, getting the time-scales right is crucial. We therefore refine the ODE behaviors using a reductionist approach: rule-based modeling. Using Kappa, we can predict behaviors mechanistically and feed those predictions back into the ODE model. For example, a diffusion limit through membrane channels can be captured with a Michaelis-Menten term, something that would have been very difficult to foresee had the rule-based model not told us it was required. Moreover, a reductionist approach makes explicit the mechanistic understanding needed to build a believable model, and for an iGEM project, that mechanistic understanding is a tremendous advantage when characterizing standardized parts.
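To illustrate the kind of refinement we mean, here is a minimal sketch of an ODE in which transport through a membrane channel saturates (Michaelis-Menten) instead of following mass action. The species names and parameter values are purely illustrative assumptions, not fitted values from our system.

```python
# Minimal sketch: mass-action production/degradation of a signal in the sender,
# with channel-limited (Michaelis-Menten) transfer into the receiver.
# All names and numbers are illustrative placeholders, not fitted parameters.
import numpy as np
from scipy.integrate import odeint

k_prod = 1.0   # production rate of the signal in the sender (a.u./min)
k_deg  = 0.1   # first-order degradation rate (1/min)
v_max  = 0.5   # maximal transport rate through the channel (a.u./min)
K_m    = 2.0   # signal level at which transport is half-maximal (a.u.)

def dydt(y, t):
    s_sender, s_receiver = y
    # Saturable channel: the transport rate plateaus at v_max instead of
    # growing without bound as a mass-action term would.
    transport = v_max * s_sender / (K_m + s_sender)
    ds_sender   = k_prod - k_deg * s_sender - transport
    ds_receiver = transport - k_deg * s_receiver
    return [ds_sender, ds_receiver]

t = np.linspace(0, 100, 500)
sol = odeint(dydt, [0.0, 0.0], t)   # columns: sender, receiver
```

The only difference from a naive mass-action model is the saturating transport term; that is exactly the sort of detail the rule-based simulations pointed us towards.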
However, rule-based models are difficult to scale, not to mention computationally very expensive, so a compact ODE system is still preferable for large-scale simulations and repeated iterations. Consequently, we use rule-based models to refine an ODE approximation, and then use that ODE to scan parameters and explore the solution space. In other words, we use different formalisms to take maximum advantage of their respective strengths, avoid their weaknesses, and thus obtain a better model than either could give on its own.
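Because the compact ODE is cheap to integrate, it can be evaluated over a whole grid of parameter values, which is impractical for the rule-based model. The sketch below shows such a scan over the illustrative channel parameters from the previous example (again, ranges and names are assumptions for illustration only).

```python
# Sketch of an ODE-level parameter scan over the illustrative channel model above.
# Ranges and outputs are placeholders, not results from our actual system.
import numpy as np
from scipy.integrate import odeint

k_prod, k_deg = 1.0, 0.1
t = np.linspace(0, 100, 500)

def receiver_endpoint(v_max, K_m):
    """Final signal level in the receiver for one (v_max, K_m) pair."""
    def dydt(y, t):
        s_sender, s_receiver = y
        transport = v_max * s_sender / (K_m + s_sender)
        return [k_prod - k_deg * s_sender - transport,
                transport - k_deg * s_receiver]
    return odeint(dydt, [0.0, 0.0], t)[-1, 1]

# 10 x 10 grid of channel parameters: trivially cheap for an ODE,
# prohibitively expensive for an equivalent rule-based simulation.
scan = {(v, K): receiver_endpoint(v, K)
        for v in np.linspace(0.1, 1.0, 10)
        for K in np.linspace(0.5, 5.0, 10)}
```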