One thing made clear by the pandemic crisis shaking the world is our crucial need for models that allow us to estimate the future behavior of the epidemic. The dynamics of the spread of an epidemic are simply not amenable to intuitive estimation, so it is critical to have computational models that permit us to project the near- and middle-term behavior of the disease, based on available data and assumptions.
Scott Page is a complexity scientist at the University of Michigan who has written extensively on the uses and interpretation of computational models in the social sciences. His book, The Model Thinker: What You Need to Know to Make Data Work for You, does a superlative job of introducing the reader to a wide range of models. One of his key recommendations is that we should consider many models when we are trying to understand a particular kind of phenomenon. (Here is an earlier discussion of the book; link.) Page contributed a very useful article to the Washington Post this week that sheds light on the several kinds of pandemic models currently being used to understand and predict the course of the pandemic at global, national, and regional levels ("Which pandemic model should you trust?"; link). Page describes the logic of "curve-fitting" models like the Institute for Health Metrics and Evaluation (IHME) model, as well as epidemiological models that proceed from assumptions about the causal and social processes through which disease spreads. The latter attempt to represent the process by which infection passes from infected persons to susceptible persons, who then recover. (Page refers to these as "microfoundational" models.) Page points out that all models involve a range of probable error and missing data, and it is crucial to make use of a range of different models in order to lay a foundation for sound public health policies. Here are his summary thoughts:
All this doesn’t mean that we should stop using models, but that we should use many of them. We can continue to improve curve-fitting and microfoundation models and combine them into hybrids, which will improve not just predictions, but also our understanding of how the virus spreads, hopefully informing policy.
Even better, we should bring different kinds of models together into an “ensemble.” Different models have different strengths. Curve-fitting models reveal patterns; “parameter estimation” models reveal aggregate changes in key indicators such as the average number of people infected by a contagious individual; mathematical models uncover processes; and agent-based models can capture differences in people's networks and behaviors that affect the spread of diseases. Policies should not be based on any single model — even the one that’s been most accurate to date. As I argue in my recent book, they should instead be guided by many-model thinking — a deep engagement with a variety of models to capture the different aspects of a complex reality. (link)

Page's description of the workings of these models is very helpful for anyone who wants a better understanding of the way a pandemic evolves. Page has also developed a valuable series of videos that go into greater detail about the computational architecture of these various types of models (link). These videos are very clear and eminently worth viewing if you want to understand epidemiological modeling better.
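To give a concrete sense of the "mathematical" or microfoundational style Page describes, here is a minimal sketch of the classic SIR compartment model, in which individuals move from susceptible to infected to recovered. The code assumes Python with numpy and scipy, and the parameter values are illustrative assumptions for demonstration, not estimates for COVID-19.

```python
# Minimal SIR compartment model: a sketch with invented parameters.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """S' = -beta*S*I/N, I' = beta*S*I/N - gamma*I, R' = gamma*I."""
    s, i, r = y
    n = s + i + r
    new_infections = beta * s * i / n
    return [-new_infections, new_infections - gamma * i, gamma * i]

# Illustrative parameters: transmission rate beta, recovery rate gamma.
# Their ratio beta/gamma is the basic reproduction number R0 (here 3.0),
# the kind of quantity Page's "parameter estimation" models try to pin down.
beta, gamma = 0.3, 0.1
y0 = [999_000, 1_000, 0]           # initial S, I, R in a population of 1M
t_eval = np.linspace(0, 200, 201)  # simulate 200 days

sol = solve_ivp(sir, (0, 200), y0, args=(beta, gamma), t_eval=t_eval)
peak_day = t_eval[np.argmax(sol.y[1])]
print(f"Peak infections around day {peak_day:.0f}: {sol.y[1].max():,.0f} people")
```

In a model of this kind the epidemic peaks when the susceptible fraction falls to 1/R0; change beta or gamma and both the timing and the height of the peak move, which is precisely why estimating those parameters well matters so much for policy.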
Social network analysis is crucial to addressing the challenge of how to restart businesses and other social organizations. Page has created "A Leader's Toolkit For Reopening: Twenty Strategies to Reopen and Reimagine", a valuable set of network tools and strategies offering concrete advice about steps to take in restarting businesses safely and productively. Visit this site to see how tools of network analysis can help make us safer and healthier in the workplace (link).
Another useful resource on the logic of pandemic models is Jonathan Fuller's recent article "Models vs. evidence" in Boston Review (link). Fuller is a philosopher of science who addresses two questions in this piece: first, how can we use evidence to evaluate alternative models? And second, what accounts for the disagreements in the academic literature over the validity of these classes of models? Fuller has in mind essentially the same distinction as Page, between curve-fitting and microfoundational models. Fuller characterizes the former as "clinical epidemiological models" and the latter as "infectious disease epidemiological models", and he argues that the two research communities have very different ideas about what constitutes appropriate use of empirical evidence in evaluating a model. Essentially Fuller believes that the two approaches embody two different philosophies of science with regard to computational models of epidemics, one more strictly empirical and the other more amenable to a combination of theory and evidence in developing and evaluating the model. The article provides a level of detail that would make it ideal for a case study in a course on the philosophy of social science.
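To make the contrast tangible, here is a sketch of the curve-fitting style in its simplest form, again assuming Python with numpy and scipy: fit a logistic curve to cumulative case counts and extrapolate, with no representation of the underlying transmission process. The "data" below are synthetic, generated purely for illustration; real curve-fitting models such as IHME's are far more elaborate.

```python
# Curve-fitting sketch: fit a logistic curve to synthetic cumulative counts.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, t0, r):
    """Logistic growth: plateau k, inflection day t0, growth rate r."""
    return k / (1.0 + np.exp(-r * (t - t0)))

days = np.arange(30)
true_curve = 10_000 / (1 + np.exp(-0.25 * (days - 20)))  # invented "truth"
cases = true_curve + np.random.default_rng(0).normal(0, 100, days.size)

params, _ = curve_fit(logistic, days, cases, p0=(cases.max(), 20, 0.2))
k, t0, r = params
print(f"Fitted plateau: {k:,.0f} cases, inflection at day {t0:.1f}")
print(f"Extrapolated total at day 45: {logistic(45, *params):,.0f} cases")
```

Note that the fitted plateau and inflection point are properties of this particular data series; nothing in the procedure itself guarantees they will carry over to a different population, which is exactly the worry about transporting calibrations between locales raised below.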
Joshua Epstein, author of Generative Social Science: Studies in Agent-Based Computational Modeling, gave a brief description in 2009 of the application of agent-based models to pandemics in "Modelling to Contain Pandemics" (link). Epstein describes a massive agent-based model of a global pandemic, the Global-Scale Agent Model (GSAM), which was used to model the spread of the 2009 H1N1 virus. Here is a video in which Miles Parker explains and demonstrates the model (link).
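The GSAM ran on hundreds of millions of agents, but the generative idea can be conveyed in a toy sketch: simulate individuals who meet, transmit, and recover, and let the aggregate epidemic curve emerge from their interactions. Everything below (population size, daily contacts, transmission and recovery probabilities) is an invented illustration, not Epstein's model.

```python
# Toy agent-based epidemic: invented parameters, random daily mixing.
import random

random.seed(42)
N, CONTACTS, P_TRANSMIT, P_RECOVER, DAYS = 10_000, 5, 0.05, 0.1, 150
state = ["S"] * N
for i in random.sample(range(N), 10):          # seed 10 initial infections
    state[i] = "I"

for day in range(DAYS):
    infected = [i for i in range(N) if state[i] == "I"]
    for i in infected:
        for j in random.sample(range(N), CONTACTS):  # random daily contacts
            if state[j] == "S" and random.random() < P_TRANSMIT:
                state[j] = "I"
        if random.random() < P_RECOVER:
            state[i] = "R"

print(f"Never infected: {state.count('S'):,}; recovered: {state.count('R'):,}")
```

Unlike the aggregate SIR equations, a model of this kind can be enriched agent by agent, for example by giving individuals realistic locations, workplaces, and travel patterns, which is what makes Epstein's approach suited to questions about targeted containment.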
Another useful resource is this video on "Network Theory: Network Diffusion & Contagion" (link), which provides greater detail about how the structure of social networks influences the spread of an infectious disease (or ideas, attitudes, or rumors).
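In the same spirit, here is a rough sketch of why network structure matters, assuming the networkx library: the identical transmission process run on a clustered ring lattice versus a "small world" graph with a few long-range ties will typically produce quite different epidemics. All parameters are illustrative.

```python
# Contagion on two network structures, using the networkx library.
import random
import networkx as nx

random.seed(1)

def spread(graph, p_transmit=0.1, p_recover=0.2, seeds=5, days=100):
    """Run a simple SIR process along the edges of `graph`."""
    state = {node: "S" for node in graph}
    for node in random.sample(list(graph), seeds):
        state[node] = "I"
    for _ in range(days):
        infected = [n for n in graph if state[n] == "I"]
        for n in infected:
            for nbr in graph.neighbors(n):   # contagion travels along edges
                if state[nbr] == "S" and random.random() < p_transmit:
                    state[nbr] = "I"
            if random.random() < p_recover:
                state[n] = "R"
    return sum(1 for s in state.values() if s != "S")

n, k = 5_000, 6
lattice = nx.watts_strogatz_graph(n, k, 0.0, seed=1)      # clustered ring lattice
small_world = nx.watts_strogatz_graph(n, k, 0.1, seed=1)  # 10% long-range rewiring
print("ring lattice:", spread(lattice), "ever infected")
print("small world: ", spread(small_world), "ever infected")
```

On the lattice the infection can only creep outward through overlapping neighborhoods, while a handful of long-range ties let it jump across the whole population, which is one network-theoretic rationale for restricting long-distance travel.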
My own predilections in the philosophy of science lean towards scientific realism and the importance of identifying underlying causal mechanisms. This leaves me more persuaded by the microfoundational / infectious disease models than by the curve-fitting models. The criticisms of the uncritical methodology of randomized controlled trials that Nancy Cartwright and Jeremy Hardie offer in Evidence-Based Policy: A Practical Guide to Doing It Better (link) seem relevant here as well. The IHME model is calibrated against data from Wuhan and, more recently, northern Italy; but circumstances were very different in each of those locales, making it questionable that the same inflection points will show up in New York or California. As Cartwright and Hardie put the point, "The fact that causal principles can differ from locale to locale means that you cannot read off that a policy will work here from even very solid evidence that it worked somewhere else" (23). But, as Page emphasizes, it is valuable to have multiple models working from different assumptions when we are attempting to understand a phenomenon as complex as epidemic spread. Fuller makes much the same point in his article:
Just as we should embrace both models and evidence, we should welcome both of epidemiology’s competing philosophies. This may sound like a boring conclusion, but in the coronavirus pandemic there is no glory, and there are no winners. Cooperation in society should be matched by cooperation across disciplinary divides. The normal process of scientific scrutiny and peer review has given way to a fast track from research offices to media headlines and policy panels. Yet the need for criticism from diverse minds remains.
1 comment:
Whenever there is significant uncertainty, as there is now, the likelihood of any single model being "right" is pretty small, so it is always sage advice to use many models; I would be extremely wary of anyone who relied on just one. I have not been in the least surprised by the general misunderstandings about models.
The expectation that the answer you got from a model was the only one, or the right one, was something I continually fought against when I was building models. Models will always be simplifications of the 'real' world and, as such, cannot be expected to behave in precisely the same way as the systems they represent.
Thanks for a good article.