MUCH of the news about COVID-19 includes projections about where the coronavirus pandemic may be going. Many of these predictions are at great odds with one another, likely causing many to ask: How do prediction models work and can they be trusted?
Recently, the University of Washington’s Institute for Health Metrics and Evaluation projected that COVID-19 deaths in the United States would drop to near zero by June. Last week, a group from Harvard University put forth a model suggesting a far different scenario: Absent a vaccine, COVID-19 will be with us for at least several more years.
Such disparity occurs because different types of models are used. Those who analyze the COVID-19 pandemic generally use either transmission models or curve-fitting models.
The Harvard projection is an example of the transmission model, which simulates how viruses spread through communities.
Transmission models can predict how quickly an infection can explode in a community that has no immunity. The model takes into account the number of people, on average, to whom an infected person can transmit the virus, a quantity epidemiologists call the reproduction number. Controlling the epidemic hinges on that number, which can be brought down by physical distancing and other measures.
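To make that concrete, here is a minimal sketch of a transmission model in Python. It is a bare-bones SIR (susceptible-infected-recovered) simulation, not any research group's actual model; the function name and every parameter value are invented for illustration.

```python
def peak_infections(r0, population=1_000_000, initial_infected=100,
                    infectious_days=10, horizon_days=500):
    """Toy SIR simulation: return the largest number simultaneously infected.

    r0 is the average number of people one infected person goes on to
    infect in a fully susceptible population (the reproduction number).
    """
    beta = r0 / infectious_days    # transmissions per infected person per day
    gamma = 1.0 / infectious_days  # fraction of the infected recovering per day
    s, i = population - initial_infected, float(initial_infected)
    peak = i
    for _ in range(horizon_days):
        new_infections = beta * i * s / population
        s -= new_infections
        i += new_infections - gamma * i
        peak = max(peak, i)
    return peak

# An unmitigated epidemic versus one where distancing lowers the
# reproduction number (both numbers are illustrative, not real estimates):
print(round(peak_infections(r0=2.5)))  # large, early peak
print(round(peak_infections(r0=1.3)))  # much smaller, flatter peak
```

Lowering the reproduction number from 2.5 to 1.3 shrinks and delays the peak; pushing it below 1 means each infected person passes the virus to fewer than one other on average, and the outbreak never takes off.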
But transmission models developed by different researchers can come up with results that don’t agree because the values used in the modeling equations vary. One prognostication might reflect infection rates that change with the season, while another keeps them constant. Some models assume once the infected recover, they are permanently immune, while others assume immunity wanes over time.
As of now, we do not know whether COVID-19 has seasonal patterns or if immunity will be long-lasting.
The Harvard transmission model used data from previous related coronaviruses to determine how easily COVID-19 can spread. The model shows that without a vaccine, we may need intermittent periods of physical distancing to avoid overloading health care facilities—and that the COVID-19 epidemic will recur in multiple waves.
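One way to see how a single assumption changes the long-run picture is to compare a model in which immunity is permanent against one in which it wanes. The sketch below is a toy simulation with invented parameter values, not the Harvard group's model; the 90-day immunity figure is purely illustrative.

```python
def infections_over_time(r0=2.5, immunity_days=None, population=1_000_000,
                         initial_infected=100, infectious_days=10,
                         horizon_days=500):
    """Toy epidemic simulation returning daily counts of the infected.

    If immunity_days is None, recovered people stay immune forever;
    otherwise immunity wanes and they re-enter the susceptible pool.
    """
    beta = r0 / infectious_days
    gamma = 1.0 / infectious_days
    waning = 0.0 if immunity_days is None else 1.0 / immunity_days
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    series = []
    for _ in range(horizon_days):
        new_infections = beta * i * s / population
        new_recoveries = gamma * i
        newly_susceptible = waning * r   # immunity fading over time
        s += newly_susceptible - new_infections
        i += new_infections - new_recoveries
        r += new_recoveries - newly_susceptible
        series.append(i)
    return series

permanent = infections_over_time()                    # one wave, then burnout
waning_run = infections_over_time(immunity_days=90)   # recurring waves
```

With permanent immunity, the simulated epidemic burns through once and fades. When immunity lasts roughly 90 days, the susceptible pool replenishes and infections return in later waves, qualitatively echoing the recurring waves the Harvard model describes.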
Curve-fitting models use available COVID-19 data to try to determine if trends exist. That’s the method the Institute for Health Metrics used, gathering data from multiple countries to look for patterns.
Researchers building curve-fitting models can get very different answers. One reason has to do with the input data that are used. While numbers of confirmed cases might seem like an obvious choice of data to plug in, surges in confirmed cases could reflect either the fact that more testing is being done or an actual increase in infections.
The Institute for Health Metrics model, which is frequently cited by the White House, used data on COVID-19 deaths. However, deaths from COVID-19 may not be completely reported; New York City’s health department recently adjusted upward the city’s death count by 3,700 to account for potential undercounting.
To make predictions, curve-fitting models simply extrapolate curves into the future. These models can give very different answers depending on the shape of the curve used for extrapolation.
For example, if the commonly used bell-shaped curve is extrapolated far enough into the future, it eventually will show that projected deaths fall to nearly zero, which may not be accurate.
In contrast, if a curve is extrapolated that doubles every week and heads dramatically upward—known as an “exponential growth curve”—very different conclusions emerge and the projected death toll would soar.
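This pitfall can be reproduced in a few lines of Python. The sketch below generates synthetic death counts from the rising half of a bell curve, then fits the same 30 days of data two ways: as a straight line in log space (an exponential) and as a parabola in log space (a bell curve). Every function name and number here is invented for illustration; this is not the Institute's actual method.

```python
import math

def fit_poly(ts, logys, degree):
    """Least-squares polynomial fit of log-counts vs. time (normal equations)."""
    n = degree + 1
    A = [[sum(t ** (i + j) for t in ts) for j in range(n)] for i in range(n)]
    b = [sum((t ** i) * y for t, y in zip(ts, logys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for c in range(col, n):
                A[row][c] -= f * A[col][c]
            b[row] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

def predict(coeffs, t):
    """Projected count at day t from a fit in log space."""
    return math.exp(sum(c * t ** i for i, c in enumerate(coeffs)))

# Synthetic "observed" daily deaths: the rising limb of a bell curve.
days = list(range(30))
deaths = [math.exp(5 + 0.3 * t - 0.003 * t * t) for t in days]
logs = [math.log(d) for d in deaths]

exp_fit = fit_poly(days, logs, degree=1)   # line in log space = exponential
bell_fit = fit_poly(days, logs, degree=2)  # parabola in log space = bell curve

# Same 30 days of data, wildly different day-120 projections:
print(predict(exp_fit, 120), predict(bell_fit, 120))
```

Both curves track the observed month closely, yet the exponential extrapolation projects a death toll billions of times larger at day 120 than the bell curve, which by then has fallen back toward zero. The shape chosen for extrapolation, not the data, drives the long-range forecast.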
Because of the pitfalls of extrapolation, the Institute for Health Metrics model's prediction of nearly zero COVID-19 deaths this summer should be viewed with skepticism.
The many and varied numbers coming out of various COVID-19 models can be confusing to the public and to policymakers. Colorful graphs with smooth curves convey a false sense of precision and mask the uncertainties in the numbers. The builders of these models need to work especially hard to communicate to the general public and policymakers, in lay terms, why they got the results they did and what key assumptions they relied upon.
Curve-fitting models are useful for making very short-term predictions, but beyond that many researchers, including me, don’t have much confidence in them. If you are looking for longer-term clues regarding how the pandemic will play out, pay attention to transmission models, even though they have their own built-in uncertainties.
Predictive modeling is one of the tools that will influence the decisions policymakers make about when and how to reopen our society, so we need to be critical consumers of these models. They can help guide decisions about how much testing we need and how many trained contact tracers must be in place before social distancing measures can be safely relaxed.
But builders of these models need to be transparent about their models’ assumptions, strengths and weaknesses, and make clear what purpose they are best used for.
A little clarity can go a long way as we try to navigate our way out of this pandemic.