Summary of Model Biases

All models are biased, and different climate variables (e.g., temperature or precipitation) exhibit different amounts of bias even within the same climate model. Table 1 provides an overview of seasonal temperature and precipitation biases in the models GLISA evaluated for the Great Lakes region. Annual biases are not reported in Table 1 because they can be misleading: models often exhibit large seasonal biases of opposite sign (e.g., a strong positive spring bias and a strong negative fall bias) that average close to zero annually. In reality, those strong biases do not “cancel” each other out to form a more realistic picture of annual changes. Rather, strong seasonal biases should be closely examined and used to determine whether the model information is still usable or whether the model lacks credibility and should be discarded.

Table 1 (click to enlarge): Seasonal precipitation and temperature bias for the CMIP5, NA-CORDEX, and UW-RegCM4 projection datasets. All model biases are evaluated against NOAA’s National Centers for Environmental Information (NCEI) Climate Divisions Dataset from 1980 to 1999. More on GLISA’s methodology is available here.

Bias is also not the only metric that should be used to determine model credibility. GLISA emphasizes the value of looking beyond model biases and investigating the representation of important climate processes in GCMs to determine model credibility (see GLISA publication). GLISA has developed a Climate Model Buyer’s Guide that provides a checklist of model evaluation criteria for users in the Great Lakes region to use in their assessment of model credibility. 

In Table 1, bias is represented in both directions: blue shades indicate a model is too wet (precipitation) or too cold (temperature), while red shades indicate too dry or too hot. During the winter (Dec-Jan-Feb) and spring (March-April-May), models are almost entirely too wet and mostly too cold. During the summer (June-July-Aug) and fall (Sept-Oct-Nov), biases are more evenly split between too wet/too dry and too cold/too hot. In the fall, CMIP5 models are mostly too dry, while NA-CORDEX and UW-RegCM4 models are too wet or unbiased. Across all datasets, winter is often the most biased season for both variables.

Additionally, CMIP5 has the most seasonal variability in bias, whereas NA-CORDEX and UW-RegCM4 are more consistent in the sign of their biases (i.e., when one season is too warm, the remaining seasons of the same model are more likely to also be too warm). Only the summer shows a slight correlation between the two variables, meaning models that are too dry are more likely to be too hot, and vice versa.

Bias in Context: “Small” vs “Large” Bias

Clearly, there is no perfect model, and perhaps not even a “good” model, depending on your definition of “good.”  The question becomes, “How much bias are you willing to accept?”  The answer will be unique to each user, based on the type of climate question being asked and how much uncertainty can be managed.

GLISA suggests a large temperature bias may be defined as anything more than 2°C, since temperature is well understood and well simulated on weather timescales.  A future temperature change of 1.5°C or more is an important adaptation threshold used in the IPCC Special Report, so model errors of less than 1.5°C are desired.  When a model’s temperature bias (i.e., model error) is larger than its future climate change signal (i.e., projected temperature change), there is high uncertainty in the climate projection.

GLISA suggests a small precipitation bias might be defined as less than 10% model error relative to observations.  However, it is not uncommon to see much larger precipitation biases, even over 100%.  None of the models GLISA evaluated have both <10% precipitation bias and <1.5°C temperature bias in all seasons.  Many models had <10% precipitation bias in at least one season, but only two models had <10% bias in all four seasons (UW-RegCM4’s CNRM and CMIP5’s MIROC-ESM).
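The two bias metrics described above can be sketched in a few lines of code. This is an illustrative sketch only: the exact computation GLISA uses is described in its methodology documentation, and the function names and example values here are assumptions, not GLISA's actual data.

```python
# Sketch of seasonal bias metrics as described in the text (assumed
# formulas: temperature bias as a simple difference in degrees C,
# precipitation bias as percent error relative to observations).

def temperature_bias(model_temp_c, obs_temp_c):
    """Temperature bias in degrees C: model minus observed seasonal mean."""
    return model_temp_c - obs_temp_c

def precipitation_bias_pct(model_precip, obs_precip):
    """Precipitation bias as percent error relative to observations."""
    return 100.0 * (model_precip - obs_precip) / obs_precip

# Hypothetical model that is 1.2 C too warm and 15% too wet in summer.
t_bias = temperature_bias(24.2, 23.0)        # +1.2 C (too hot)
p_bias = precipitation_bias_pct(92.0, 80.0)  # +15% (too wet)

# GLISA's suggested thresholds: "large" temperature bias is > 2 C,
# "small" precipitation bias is < 10% (in absolute value).
print(abs(t_bias) > 2.0)   # large temperature bias?  -> False
print(abs(p_bias) < 10.0)  # small precipitation bias? -> False
```

Under these definitions, the hypothetical model above would pass GLISA's temperature threshold but fail the precipitation one.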

If a model’s bias is not small (based on the user’s definition of “small”), GLISA recommends additional model evaluation to explain the bias and determine whether the projection offers usable information.

Since different users may choose to accept different levels of bias, we have summarized the number of GCMs and RCMs that meet multiple thresholds for temperature and precipitation biases in Table 2.  Users can select the maximum amount of temperature and precipitation bias they are willing to accept in any season to see how many models meet that requirement.  For example, there are 17 models whose seasonal biases are less than 2.5°C for temperature and 25% for precipitation.  We chose not to present this matrix based on annual biases because seasonal biases oftentimes “cancelled” each other, giving the impression of low annual bias.  It is important to note that none of the models meet GLISA’s definition of small (<1.5°C and <10%) bias for all four seasons.  Once a user decides on the bias thresholds they want to use from the matrix, they can export bias data for those models using the drop-down menu below.
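The threshold-matrix logic behind Table 2 can be sketched as a simple filter: a model counts toward a cell only if its absolute bias stays under both limits in every season. The model names and bias values below are illustrative placeholders, not GLISA's actual evaluation data.

```python
# Sketch of the Table 2 selection logic. Each model lists its
# (temperature bias in C, precipitation bias in %) for the four
# seasons DJF, MAM, JJA, SON. Values are made up for illustration.
seasonal_biases = {
    "model_A": [(-1.8, 22.0), (-0.9, 14.0), (0.6, -8.0), (0.4, -5.0)],
    "model_B": [(-3.1, 48.0), (-1.2, 20.0), (1.9, 12.0), (0.8, 9.0)],
    "model_C": [(-0.7, 9.0),  (-0.4, 6.0),  (0.3, -4.0), (0.2, 3.0)],
}

def models_within(biases, max_temp_c, max_precip_pct):
    """Return models whose |bias| stays under both limits in all seasons."""
    return [
        name for name, seasons in biases.items()
        if all(abs(t) < max_temp_c and abs(p) < max_precip_pct
               for t, p in seasons)
    ]

# A looser threshold admits more models; GLISA's "small bias"
# definition (<1.5 C and <10%) admits fewer.
print(models_within(seasonal_biases, 2.5, 25.0))  # ['model_A', 'model_C']
print(models_within(seasonal_biases, 1.5, 10.0))  # ['model_C']
```

Each cell of Table 2 is then just the length of such a list for one pair of thresholds.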

Table 2 (click to enlarge): Matrix showing the number of climate models (65 total) that have seasonal temperature (°C) and precipitation (%) biases less than the designated thresholds. For example, there are seven models whose bias is <2°C for temperature and <30% for precipitation. This matrix can be used to identify the number of models available to users based on the amount of uncertainty (magnitude of bias) one is willing to accept or, alternatively, how much bias must be accepted to form an ensemble of size X.

To generate a customized data file (CSV) and table (PNG) of model biases based on a selection from the matrix, use the drop-down menu below.

For example, a selection of 50% precipitation bias and 2.0°C temperature bias produces the table below:


For a table of all models’ bias, please use Table 3 below.

Table 3 (click to enlarge): Data table of seasonal and annual precipitation and temperature bias for all of the CMIP5, NA-CORDEX, and UW-RegCM4 projection datasets. All model biases are evaluated against NOAA’s National Centers for Environmental Information (NCEI) Climate Divisions Dataset from 1980 to 1999. More on GLISA’s methodology is available here.