John Metzcar, Catherine R. Jutzeler, Paul Macklin, Alvaro Köhn-Luque, Sarah C. Brüningk

We propose to name such hybrid approaches combining big data and ML with mechanistic modeling "mechanistic learning."

To my mind, the only controversial aspect of Benzekry's suggestion is his use of the royal "we" for a single-author paper (I joke...).

The brief overview of the history of mathematical modeling in the Introduction is excellent. It is a daunting task to review "what is mathematical modeling?" The authors use the phrase

Knowledge-driven modeling stands in contrast to data-driven modeling.

The first type is arguably the most intuitive. We typically already begin a project by fitting some data with a mechanistic model, resulting in a distribution of patient-specific parameters. What do you do with that list of parameters? One straightforward option is to run them through a machine learning model to see if they predict relevant clinical outcomes. In doing this, you have effectively translated the information from the raw data into a compact, mechanistically interpretable feature set.
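To make this concrete, here is a toy sketch of that pipeline. Everything is invented for illustration: the data are synthetic exponential growth curves, and the "ML model" is reduced to a simple threshold rule on the fitted parameter so the example stays dependency-free — in practice you would swap in a real classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 8)  # measurement times in days (hypothetical)

# Synthetic cohort: tumor burden follows V(t) = V0 * exp(r * t); fast growers
# (large r) are labeled poor outcome (1). Real longitudinal data goes here.
n_patients = 40
true_r = rng.uniform(0.05, 0.4, n_patients)
outcome = (true_r > 0.2).astype(int)

# Step 1 (mechanistic): fit the growth rate r per patient by log-linear
# least squares -- this yields the "distribution of patient-specific parameters".
fitted_r = np.empty(n_patients)
for i in range(n_patients):
    v = np.exp(true_r[i] * t) * np.exp(rng.normal(0.0, 0.05, t.size))
    fitted_r[i] = np.polyfit(t, np.log(v), 1)[0]  # slope of log-volume vs time

# Step 2 ("ML", kept trivially simple): predict outcome from the fitted
# parameter with a median-threshold rule.
pred = (fitted_r > np.median(fitted_r)).astype(int)
accuracy = (pred == outcome).mean()
print(f"outcome accuracy from fitted growth rates: {accuracy:.2f}")
```

The point is the hand-off: the mechanistic fit compresses each patient's time series into one interpretable number (a growth rate), and the downstream model never sees the raw curves.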

We are trying this approach out in B-precursor ALL. We have flow cytometry characterization data for many patients, which gives us information about the cell types in each patient. We use these data to train a

It also works in reverse: data-driven modeling provides a platform for subsequent mechanistic approaches. Recent work in AML from Heyrim Cho, Russ Rockne, et al. [4] starts with high-dimensional data (single-cell RNA sequencing); the authors then reduce the dimensionality of the dataset (e.g., with PCA or t-SNE). This reduced space is then used to model flow and transport between cell differentiation states using PDEs posed in the reduced-dimensional space. Importantly, that space is interpretable via pseudo-time analysis.
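A toy version of that reverse direction, under loud assumptions: this is not the authors' pipeline, the "single-cell" data are synthetic, the reduction is plain PCA via SVD rather than t-SNE, and the mechanistic step is pure advection instead of their full transport model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a single-cell snapshot: 300 cells x 50 genes whose
# expression varies smoothly along one hidden differentiation axis.
s = rng.uniform(0.0, 1.0, 300)                   # hidden pseudo-time per cell
X = np.outer(s, rng.normal(0.0, 1.0, 50)) + rng.normal(0.0, 0.1, (300, 50))

# Data-driven step: PCA via SVD; the first component recovers the hidden axis.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                                 # 1-D reduced coordinate

# Mechanistic step in the reduced space: advect the cell-density profile along
# the coordinate, du/dt + v du/dx = 0, with a first-order upwind scheme.
u, edges = np.histogram(pc1, bins=100)
u = u.astype(float)                              # cell density over the axis
dx = edges[1] - edges[0]
speed = 0.5                                      # drift rate (arbitrary units)
dt = 0.4 * dx / speed                            # CFL-stable time step
for _ in range(50):
    u[1:] -= speed * dt / dx * (u[1:] - u[:-1])  # density drifts toward +x
    u[0] = 0.0                                   # no inflow at the left edge
```

The design point survives the simplification: once the reduced coordinate is interpretable (here, pseudo-time), a cheap one-dimensional PDE can stand in for dynamics that would be intractable in the original 50-dimensional space.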

An alternative is to formulate a specific guess on how relevant variables interact between input and output through the formulation of a mathematical model.

Mathematicians (mathematical biologists) are proposing their 'guess' of how the world works. This guess of course draws on reason, intuition, historical evidence, and, most importantly, external validation. But the starting point is a reach into the dark, grasping for the truth.

I need not convince the reader that guessing is a worthy exercise in itself. Similarly, the authors are firmly in favor of their fellow mathematicians:

It is tempting to suggest that knowledge-driven models are inherently interpretable. Yet, the implementation of chains of relationships can formulate complex inverse problems. Subsequently, post hoc processing through parameter identifiability and sensitivity analyses is key. This can identify previously unknown interactions between system components to generate hypotheses for experimental and clinical validation.

In summary, I encourage you to read the review paper. I think it provides several promising strategies to merge two distinct fields that have each earned a solid reputation for progress in math oncology.

1. Benzekry, S., 2020. Artificial intelligence and mechanistic modeling for clinical decision making in oncology. Clinical Pharmacology & Therapeutics, 108(3), pp.471-486.
2. Metzcar, J., Jutzeler, C.R., Macklin, P., Köhn-Luque, A. and Brüningk, S.C., 2024. A review of mechanistic learning in mathematical oncology. Frontiers in Immunology, 15, p.1363144.
3. Gunawardena, J., 2014. Models in biology: 'accurate descriptions of our pathetic thinking'. BMC Biology, 12, pp.1-11.
4. Cho, H., Ayers, K., de Pillis, L., Kuo, Y.H., Park, J., Radunskaya, A. and Rockne, R., 2018. Modelling acute myeloid leukaemia in a continuum of differentiation states. Letters in Biomathematics, 5(Suppl 1), p.S69.

© 2023 - The Mathematical Oncology Blog