In mathematical oncology, and in the wider sphere of mathematical biology, we often need to fit an ODE model to experimental data to infer parameter values. In an ideal world, we want to do this in a way which is statistically correct, so that our model can account for variation in the data between experiments and replicates. We should also aim to do this in a way which is easily reproducible. Stan1 allows us to do both with relative ease!
After attending the ECMTB in September, I found that there were several other Math Oncologists who were either using or interested in using Stan, an open-source platform for statistical modelling. I also found that, like myself, everyone I spoke to had a hard time knowing where to start with it – the user manual is monolithic and, though rich in information, imposing. In a bid to save others from this frustration, I thought I might share some of my own experience with it, with a particular focus on Bayesian inference with ODE models, since this is primarily what I have been using it for.
For those of you who are not yet familiar, Stan uses its own platform-independent programming language which allows you to write simple and reproducible code for performing full and approximate Bayesian inference, as well as penalized maximum likelihood estimation. It has built-in libraries for linear algebra and a suite of numerical ODE solvers.
Typically, if we are fitting ODE models to experimental data, we will use full Bayesian inference, which Stan implements using a Markov chain Monte Carlo (MCMC) method, yielding a posterior distribution over the parameters of interest. Broadly speaking, this posterior is estimated by constructing a Markov chain whose stationary distribution – that is, the distribution left invariant by the chain’s transition function – is the posterior distribution itself, so that samples drawn from the chain approximate samples from the posterior.
Of course, this is much easier said than done – the name of the game when designing any MCMC algorithm is constructing a Markov chain whose stationary distribution matches a prescribed target, which means very careful selection of the transition function. Thankfully, this difficult legwork has already been done for us many times over, and several algorithms for performing MCMC already exist. Stan in particular uses a Hamiltonian Monte Carlo (HMC) algorithm, originated by Duane et al.2 in 1987 and proposed for wider statistical application by Neal3 in 1996. The mathematics and physics behind this method are fairly involved, so we won’t give a detailed treatment here, but the method is discussed extensively by Betancourt in his 2018 review4. The short, hand-wavy explanation is that the algorithm exploits the geometry of the parameter space to explore the target distribution efficiently, with the added bonus of giving us a stronger level of confidence in our inference when compared to other pre-existing algorithms.
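To make the hand-waving slightly more concrete: HMC augments the parameters q with auxiliary momentum variables p and simulates trajectories under the Hamiltonian

```latex
H(q, p) = -\log \pi(q) + \tfrac{1}{2}\, p^{\top} M^{-1} p
```

where π(q) is the target (posterior) density and M is a mass matrix tuned during warmup. Proposals generated by following these trajectories can travel far across the parameter space while retaining a high acceptance probability, which is where the efficiency comes from.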
Stan has interfaces with R, Python, MATLAB and most other languages often used for data analysis, and it can run on Windows, Mac and Linux systems. I personally like to use Python and R for most programming and data analysis. Though these both have their own dedicated Stan interfaces, PyStan and RStan, having tried both, I would recommend instead the CmdStan5 interfaces for these languages – CmdStanR and CmdStanPy, respectively. These are lightweight interfaces to the command line interface for Stan, and in my experience, are both more user-friendly and easier to get up and running. Another benefit of these interfaces is that they are updated much more frequently than PyStan and RStan, meaning that new features are more frequently added and bugs are more frequently addressed.
Stan’s programming language is straightforward. A program is made up of seven code blocks – functions, data, transformed data, parameters, transformed parameters, model and generated quantities – and though each is technically optional, leaving them all blank won’t let us do anything interesting. Say, for example, we wanted to fit an ODE model to some experimental data. We would define the ODE right-hand side in the functions block, declare the observations in the data block, declare the quantities to be inferred (with any constraints) in the parameters block, and state the priors and likelihood in the model block.
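As a sketch, a minimal Stan program fitting a logistic growth ODE to noisy measurements might look like the following (the model, priors and variable names here are illustrative, not taken from the repository):

```stan
functions {
  // logistic growth: dN/dt = r * N * (1 - N / K)
  vector logistic(real t, vector N, real r, real K) {
    vector[1] dNdt;
    dNdt[1] = r * N[1] * (1 - N[1] / K);
    return dNdt;
  }
}
data {
  int<lower=1> n;          // number of observations
  array[n] real t;         // observation times (all > t0)
  vector<lower=0>[n] y;    // measured population sizes
  real t0;                 // initial time
  vector[1] N0;            // initial condition
}
parameters {
  real<lower=0> r;         // growth rate
  real<lower=0> K;         // carrying capacity
  real<lower=0> sigma;     // measurement noise
}
model {
  // priors (illustrative only)
  r ~ lognormal(0, 1);
  K ~ lognormal(0, 1);
  sigma ~ exponential(1);
  // solve the ODE, then compare the solution with the data
  array[n] vector[1] N = ode_rk45(logistic, N0, t0, t, r, K);
  for (i in 1:n)
    y[i] ~ normal(N[i][1], sigma);
}
```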
The structure here means that for a simple model, we can infer parameter values for a set of data in only a few lines of code. The Stan code itself is the same regardless of which interface you are using, which is what makes models built on the platform easily reproducible and simple to share with others, whatever interface they prefer. I’ve included a link to a GitHub repository10 in the references with a few Stan models I wrote up which fit a logistic curve to ten sets of dummy data I generated in Python. An example of such a data set and the code used to produce it are also in the repository, so you can try it out yourself and play around with the number of replicates or the number of data points per replicate. I also included an R file you can use to quickly unpack the data and run the models yourself, which should serve as pseudocode for any other language.
Once you get started, something that may come up quite frequently after you run your models is a warning about divergent transitions. This is telling you that for some reason or another, the sampler was not able to effectively explore the parameter space. What it does not tell you is why this is happening – just a few vague options for fixing the problem.
Stan uses a leapfrog algorithm which provides a stepwise approximation of the Hamiltonian trajectory of our model. A divergence arises when this approximated trajectory deviates too far from the true trajectory, as measured by the drift of the (ideally conserved) Hamiltonian away from its initial value.1 If that all sounds too technical, suffice it to say that we do not want divergences to arise, because they render our model fit unreliable. Still, I find it difficult to bemoan divergences despite how frequently they crop up when starting a model from scratch. There are not many algorithms that flag their own failures in such a simple way, and which tell us so plainly to doubt our results no matter how sensible they might appear.
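For reference, writing U(q) = −log π(q) for the potential energy, a single leapfrog step of size ε updates the position q and momentum p as

```latex
\begin{aligned}
p_{1/2} &= p - \tfrac{\varepsilon}{2} \nabla U(q) \\
q' &= q + \varepsilon\, M^{-1} p_{1/2} \\
p' &= p_{1/2} - \tfrac{\varepsilon}{2} \nabla U(q')
\end{aligned}
```

Divergences are reported when the accumulated error of these discrete steps blows up, which typically happens in regions of high posterior curvature that the step size is too coarse to resolve.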
There are typically three main avenues for mitigating divergences: decreasing the step size between transitions, using more informative priors, or reparameterising the model. The first is the easiest: all you need to do is increase the adapt_delta parameter before running the model, which forces the sampler to take smaller steps, (hopefully) giving a better fit at the expense of a longer wait for the model to run. The second is best practice in the first place, since your priors should be as informative as you can justify from the get-go. Generally, this doesn’t involve much work beyond what will already have been done during initial exploratory data analysis on the experimental data.
The third, model reparameterisation, is perhaps the hardest of the three, and how it should be done depends on the model being used and the amount of data available. The objective is to improve the model geometry and make it easier for the sampler to explore the sample space. For example, if you are trying to infer parameters for a hierarchical model, one way to do this is to switch from a centred to a non-centred parameterisation, in which you recover the group-level parameters by scaling and translating a latent Gaussian variable rather than sampling them from the hierarchical hyperparameters directly. If you have large amounts of data, however, it may be better to stick with the centred parameterisation6. The three models provided in the GitHub repository include both a non-hierarchical and a hierarchical model, and for the latter I included both centred and non-centred parameterisations. Each Stan file included is well below 100 lines of code.
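To make the switch concrete, here is a schematic fragment for G groups with group-level growth rates r (the names are illustrative, not those used in the repository). The centred version samples the group-level parameters directly from the hyperparameters:

```stan
parameters {
  real mu;                 // population-level mean
  real<lower=0> tau;       // population-level spread
  vector[G] r;             // group-level rates
}
model {
  mu ~ normal(0, 1);
  tau ~ exponential(1);
  r ~ normal(mu, tau);     // sampled directly from the hyperparameters
}
```

while the non-centred version samples a standard-normal latent variable and recovers r by scaling and translation:

```stan
parameters {
  real mu;
  real<lower=0> tau;
  vector[G] r_raw;         // latent standard-normal variable
}
transformed parameters {
  vector[G] r = mu + tau * r_raw;   // scaling and translation
}
model {
  mu ~ normal(0, 1);
  tau ~ exponential(1);
  r_raw ~ std_normal();    // geometry no longer depends on mu and tau
}
```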
If none of this works, I sometimes find it useful to generate a simulated data set with known parameter values and then run the model on it. If the model runs nicely and no issues arise, it may indicate that the model you are trying to use is a poor fit for the data. Doing this also allows you to zero in on any individual parameters which might be causing problems, since you can fix some parameters as known data and try inferring each of the others individually.
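One convenient way to produce such a data set is with a Stan program containing only data and generated quantities blocks, run with the fixed_param sampler. A sketch for logistic growth, using its closed-form solution with N(0) = 1 and hand-picked “true” parameter values (all names and values here are illustrative):

```stan
data {
  int<lower=1> n;
  array[n] real t;
}
generated quantities {
  // "true" parameter values, chosen by hand
  real r = 0.5;
  real K = 10;
  real sigma = 0.1;
  array[n] real y_sim;
  for (i in 1:n) {
    // logistic growth from N(0) = 1: N(t) = K / (1 + (K - 1) * exp(-r t))
    real N = K / (1 + (K - 1) * exp(-r * t[i]));
    y_sim[i] = normal_rng(N, sigma);
  }
}
```

Feeding y_sim back into the fitting model then tells you whether the known values of r, K and sigma are recovered.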
There are a few packages that are particularly useful for analysing results, although I’ll note now that several are only available for R, which seems to be the popular choice for running Stan. posterior7 is useful for processing and summarising Stan modelling results, and bayesplot8 is good for visualising them. ShinyStan9 is an excellent package which plots nearly all the information you might need after running a model, with minimal effort. This is a personal favourite of mine – when trying to diagnose issues with model geometry, it can show you plots of the log posterior with each divergence mapped individually, showing you whether the divergences are all occurring in one area of the sample space or not. I’ve also included in the references a few articles, papers and case studies which I found particularly helpful starting off. The caveat is that Stan has undergone numerous updates since they were written, so the code in several of them is outdated. The information and explanations they contain are still useful, though, and in CmdStanR at least you will be told exactly which parts of the code have been deprecated, so it is easy enough to produce a functionally identical model from these references. I also heartily recommend “Bayesian Data Analysis”11 by Gelman et al. for those of you who are new to Bayesian modelling but want a nice, rigorous, ground-up walkthrough of the theory behind it.
Stan’s applications aren’t limited to the world of mathematical biology. Bayesian inference is often used for forecasting, and Stan has been applied to disease transmission modelling13, predicting outcomes of presidential elections17, estimating the impact of weather on future crop yields18 and financial volatility modelling19. Those interested in sports statistics can find a wide array of fun things to do with Stan, too – there are case studies on modelling basketball player data20 and estimating the probability of a successful golf putt21. There is even an entire R package built on Stan dedicated to estimating, visualising and predicting football models – footBayes – which one might use to gain a healthy, arguably unfair advantage in a fantasy football league. I include the link to the package for any interested parties to peruse out of what I hope is strictly academic curiosity22.
I hope that this helps you get started, and hopefully once you do, you will see the potential Stan has. I think that a widely used, simple and reproducible platform for sophisticated statistical inference is invaluable, and I hope to see more people in the sphere of math oncology using it in future.
© 2023 - The Mathematical Oncology Blog