Model Biases in the Great Lakes Region

As part of its Ensemble Project, GLISA evaluates temperature and precipitation biases, i.e., model errors relative to observations, for several commonly used Global and Regional Climate Models (GCMs and RCMs). Model bias is an important metric to consider before choosing or using climate projections in your work because it is one indicator of projection quality: large biases point to large errors. However, small model bias does not necessarily equate to higher model quality either, since a model may simulate, for example, the correct temperature or amount of precipitation for the wrong reasons. Here, GLISA has brought together several helpful resources that explain the role of bias and bias correction, offer guidance on managing the uncertainties associated with model bias (especially when bias is large), and provide data on annual and seasonal biases for several commonly used GCMs and RCMs.

GLISA has evaluated biases in the following models:

  • 40 GCMs from the Coupled Model Intercomparison Project Phase 5 (CMIP5)
  • 19 dynamically downscaled simulations from the North American Coordinated Regional Climate Downscaling Experiment (NA-CORDEX)
  • 6 dynamically downscaled simulations from the UW-RegCM4 dataset

GLISA is also working on an evaluation of lake-effect precipitation biases, which can be found here.

Resources

Summary of Model Biases | White Paper: Overview and Guidance on Bias and Bias Correction

The following figures and tables are taken from GLISA’s summary of model biases project page. It is important to remember that model bias is only one indicator of model quality. GLISA’s first-order credibility requirement is that a climate model simulates the Great Lakes, their horizontal and vertical dynamics, and lake-land-atmosphere interactions. View our climate model checklist to see which models meet GLISA’s lake simulation requirement.

Bias Figures

Seasonal precipitation and temperature biases for all evaluated datasets are shown in the figure below. Read more about GLISA’s analysis of model biases here.


Bias Tables

Annual and seasonal temperature and precipitation biases for all evaluated models:


Download Custom Bias Table

Users can filter the list of models by their maximum amount of bias and download a table formatted like the one above. Read more about “small” versus “large” bias to inform your selection.    
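For users who prefer to work with a downloaded table offline, the sketch below shows one way such a filter could be applied. It is a minimal example assuming a CSV export with one row per model; the file name and column names ("model", "annual_temp_bias_c", "annual_precip_bias_pct") are hypothetical placeholders, not the actual format of GLISA’s table.

```python
# Minimal sketch: filtering a downloaded bias table by maximum allowable bias.
# File name and column names are hypothetical placeholders, not GLISA's schema.
import pandas as pd

def filter_by_max_bias(path: str, max_temp_bias_c: float, max_precip_bias_pct: float) -> pd.DataFrame:
    """Keep only models whose absolute annual biases fall within the chosen thresholds."""
    table = pd.read_csv(path)
    mask = (table["annual_temp_bias_c"].abs() <= max_temp_bias_c) & (
        table["annual_precip_bias_pct"].abs() <= max_precip_bias_pct
    )
    return table[mask]

# Example: keep models within 2 degrees C and 15% of observations
# subset = filter_by_max_bias("bias_table.csv", max_temp_bias_c=2.0, max_precip_bias_pct=15.0)
```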

Bias Methodology

In any bias calculation, it is important to identify high-quality observational data to use as a benchmark. We consulted with our Scientific Advisory Committee, who recommended NOAA’s National Centers for Environmental Information (NCEI) Climate Divisions Dataset as a reliable historical data set. The main drawback of these data is that they are only available for the United States; equivalent data are not available for Ontario. We considered other commonly used data sets, such as the University of Delaware temperature and precipitation observations, which are global in coverage, but we found them to have their own biases relative to the U.S. Climate Divisions Dataset.

Regional biases are calculated from the spatial average of U.S. climate divisions in the Great Lakes region, spanning Illinois, Indiana, Michigan, Minnesota, New York, Ohio, Pennsylvania, and Wisconsin (see Figure 1). There are no data over the Great Lakes themselves, and Ontario is omitted from the bias calculation because the observational data are U.S.-only (see the discussion of data limitations above).

Fig. 1: Climate divisions (colored) used to calculate regional biases
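As a rough illustration of that regional averaging, the sketch below takes one value per climate division and returns their mean. The division labels are made up, and the equal weighting of divisions is an assumption; GLISA’s exact weighting is not described here.

```python
# Illustrative sketch of a regional spatial average over climate divisions.
# Division identifiers are hypothetical, and equal weighting is an assumption;
# an area-weighted mean would use np.average(values, weights=areas) instead.
import numpy as np

def regional_mean(division_values: dict) -> float:
    """Average per-division values (e.g., 20-year mean temperature) into one regional value."""
    return float(np.mean(list(division_values.values())))

# Hypothetical 20-year mean temperatures (degrees C) for three divisions
print(regional_mean({"MI-01": 5.1, "WI-03": 6.0, "OH-02": 9.4}))
```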

With the help of our scientific advisors, we settled on an approach in which we calculate percent bias for precipitation and both percent and absolute bias for temperature. The period 1980-1999 was used in the bias calculation because it is the full span of years available in the UW-RegCM4 dataset, and we wanted to be able to compare biases across all model data sets. Annual and seasonal climatologies (20-year means) were calculated for temperature and precipitation, and each model’s 20-year means were compared to the corresponding 20-year observed means.
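To make the climatology step concrete, the sketch below assumes monthly, regionally averaged values stored as a pandas Series with a datetime index. The season definitions (DJF/MAM/JJA/SON) and the handling of December are common conventions assumed for illustration, not necessarily GLISA’s exact procedure.

```python
# Sketch: 20-year annual and seasonal climatologies (1980-1999) from a monthly,
# regionally averaged series. For simplicity, December is grouped with its own
# calendar year's January/February rather than the following winter.
import pandas as pd

SEASONS = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
           6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}

def climatologies(monthly: pd.Series) -> dict:
    """Return the annual mean and the four seasonal means over 1980-1999."""
    period = monthly.loc["1980":"1999"]
    out = {"annual": float(period.mean())}
    seasonal = period.groupby(period.index.month.map(SEASONS)).mean()
    out.update({season: float(value) for season, value in seasonal.items()})
    return out
```

Applying the same function to both the model series and the observed series yields the pairs of 20-year means that enter the bias definitions below.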

Precipitation bias was defined as the percent bias, i.e., the difference between the mean model simulation and the mean observation, relative to the magnitude of the model’s simulated mean:

(model mean – observed mean) / |model mean| × 100%

Temperature bias was defined as the absolute bias (in degrees Celsius rather than percent):

model mean – observed mean
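Written out in code, the two definitions above amount to a pair of one-line functions. The sketch below is a direct transcription of those formulas; the example values are made up for illustration.

```python
# Direct transcription of the two bias definitions above.

def precip_percent_bias(model_mean: float, observed_mean: float) -> float:
    """Percent precipitation bias: (model - observed) relative to |model mean|."""
    return (model_mean - observed_mean) / abs(model_mean) * 100.0

def temp_absolute_bias(model_mean: float, observed_mean: float) -> float:
    """Absolute temperature bias in degrees Celsius: model mean minus observed mean."""
    return model_mean - observed_mean

# Hypothetical 20-year means: a model that is slightly too wet and too warm
print(precip_percent_bias(model_mean=900.0, observed_mean=850.0))  # ~5.6% wet bias
print(temp_absolute_bias(model_mean=8.3, observed_mean=7.5))       # ~+0.8 degrees C warm bias
```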