Selecting Robust Climate Change Projections for Agricultural Systems
By: Kerrie Geil | 01-10-2021
USDA-ARS, SCINet Postdoctoral Fellow, Las Cruces, NM
Scientists of many disciplines, including agricultural fields, often use climate change projections in their research. For example, these projections have been used to estimate how crops or ecological systems may be impacted by future climate conditions, or to predict the spread of diseases and pests. Many sources of climate change projections exist for these research applications: over 100 different global general circulation models (GCMs) in the Coupled Model Intercomparison Project (CMIP) archives; large ensembles generated from a single GCM (such as NCAR’s CESM large ensemble); dynamically downscaled GCM products that generate higher spatial resolution information using regional climate models (such as CORDEX); and statistically downscaled GCM products that generate higher resolution information using empirical equations (such as the MACA product). Unfortunately, there are no quality standards or model performance thresholds implemented for any of these data. How should a scientist choose the most appropriate and robust source of projections for their particular research application, considering the many sources and varied quality of available data?
The best practice is to spend time evaluating the performance of climate model simulations using metrics that are relevant to each particular research application, and then to avoid using climate projections from any model that doesn’t perform well. In other words, a scientist should ensure that a model can simulate, fairly realistically, the phenomenon of interest before using its projections. This applies to “bias-corrected” downscaled products as well as GCMs, because 1) a poorly performing model shouldn’t be downscaled in the first place, and 2) many model biases exist but only a few are corrected in the downscaling process.
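As a rough illustration of what this kind of screening can look like (this is a simplified sketch, not the actual evaluation workflow; the model names, observations, and skill threshold are all made up for demonstration), one might score each model’s historical simulation against observations with a metric relevant to the application, such as RMSE of a monthly temperature climatology, and retain only the models that beat a chosen threshold:

```python
# Hypothetical example: rank GCMs by RMSE against an observed monthly
# climatology and keep only models below an application-chosen skill cutoff.
import numpy as np

rng = np.random.default_rng(42)

# Fake "observed" monthly mean temperature climatology (degrees C)
obs = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, 12))

# Fake historical simulations: observations plus model-specific bias and noise
models = {
    "model_A": obs + 0.5 + rng.normal(0, 0.3, 12),  # small warm bias
    "model_B": obs + 4.0 + rng.normal(0, 1.0, 12),  # large warm bias
    "model_C": obs - 0.2 + rng.normal(0, 0.4, 12),  # small cool bias
}

def rmse(sim, ref):
    """Root-mean-square error of a simulated series against a reference."""
    return float(np.sqrt(np.mean((sim - ref) ** 2)))

scores = {name: rmse(sim, obs) for name, sim in models.items()}

# Example cutoff in degrees C; in practice this depends on the application
threshold = 2.0
robust = sorted(name for name, score in scores.items() if score <= threshold)
print(robust)
```

In this toy setup the large-bias model would be excluded from the projection ensemble while the two realistic models are retained; a real evaluation would use multiple metrics chosen for the specific impact being studied.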
In reality though, this best practice is almost never followed due to the time, computational resources, and evaluation knowledge required for a model performance analysis. Instead, scientists often select one of the many available downscaled GCM products and use a multi-model average of projections, without considering the implications of model bias. For any domain scientist (outside of climate science) this is completely understandable. What ecologist, hydrologist, or rangeland scientist wants to spend months assessing the quality of climate models when they could grab a single climate projection product and instead spend that time focusing on cropland, hydrology, or rangeland science questions? We must work toward a model evaluation solution that produces more robust science while also being much more convenient and understandable for scientists to implement.
As part of my SCINet postdoctoral fellowship, I am working with members of the USDA-ARS Vesicular Stomatitis (VS) grand challenge project to determine the most robust climate model projections for predicting changes in the geographic range of the livestock disease VS under future climate conditions (forthcoming article in the journal Climate). As part of this Grand Challenge Project, ARS scientists Debra Peters, Luis Rodriguez, Lee Cohnstaedt, Barbara Drolet, Justin Derner, and Emile Elias, along with our collaborator from USDA APHIS, Angela Pelzel-McCluskey, are developing process-based early warning strategies to predict the spread of vector-borne disease across the US. My research will improve those predictions through a more objective approach to selecting the climate data that drive the spread of disease. Our experience with this project will have application to other agricultural problems where scientists need to select climate change projections for their research.
I am a climate scientist trained in climate model evaluation and selection for research and decision-making applications. During my time as a postdoc at USDA, I plan to work on a range of projects to evaluate climate model performance and to assist in the selection of climate projections for specific research applications. Eventually, I plan to develop a web-hosted tool for selecting robust model projections using the results and knowledge gained from these analyses. For scientists who are currently evaluating model performance, this tool will save countless research hours. For scientists who are not looking at model performance before selecting climate projections, this tool will provide more robust results.
I am currently working with the VS Grand Challenge group but am looking for additional collaborations. Please don’t hesitate to contact me if you are interested in collaborating!