5. Where do the uncertainties of climate models lie? What are their limitations?

As computational power is limited, there is a lower limit to the grid cell size at which climate models can be run (see also “What is a climate model?”). However, many processes occur at scales below the model’s spatial resolution (typically around 100 x 100 km), e.g. clouds, convection in the atmosphere, eddies in the ocean, and land surface processes (Fig. 1). The physics of these processes needs to be “parameterized”: parameterizations approximate the effect of these sub-grid phenomena at the scales the model can actually resolve. Parameterization is also used to approximate climate processes that are not yet fully understood. Parameterizations are the main source of uncertainty in climate models.

Fig. 1: Climate processes and properties that typically need to be parameterized within global climate models (MetEd, The COMET Program, UCAR).

Because our knowledge of the climate system and our empirical observations are incomplete, we cannot always narrow a parameterized variable down to a single value. Instead, test runs are performed: different estimates of the parameterized variables are fed into the model to find the value, or set of values, that gives the best representation of the climate. This process is called “model tuning”. Modellers tune their models to ensure that the long-term average state of the climate is accurate, including factors such as absolute temperatures, sea ice concentrations, surface albedo and sea ice extent.
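The tuning loop described above can be sketched in miniature. The snippet below is a toy illustration only: the one-line “model”, its `cloud_param` constant and the observed target are all made up for demonstration, whereas real tuning involves many parameters, expensive model runs and multiple target metrics.

```python
import numpy as np

# Hypothetical observed long-term global mean temperature (°C) to tune against
observed_mean_temp = 14.0

def toy_model(cloud_param):
    """Stand-in for a climate model: global mean temperature as a
    made-up linear function of a single cloud parameterization constant."""
    return 16.0 - 4.0 * cloud_param

# Sweep candidate parameter values and keep the one whose simulated
# climate best matches the observed target
candidates = np.linspace(0.0, 1.0, 101)
errors = np.abs([toy_model(c) - observed_mean_temp for c in candidates])
best = candidates[np.argmin(errors)]
```

In practice the “error” would combine several climate properties at once (temperature, sea ice, albedo), which is part of what makes tuning a judgment-laden process.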

There are also some limitations associated with modelling climate at regional and local scales. To bridge the gap from the large spatial scales represented by GCMs to the smaller scales required for assessing regional climate change and its impacts, different downscaling methods are used. There are two ways of downscaling: regional climate models (RCMs) and empirical-statistical downscaling (ESD). Regional climate models (RCMs) take the low-resolution output of a GCM as input and add finer topographical detail, such as the influence of lakes, mountain ranges and sea breezes, to calculate more detailed information. These models can achieve a resolution of around 25 km x 25 km. Because the larger-scale model drives the finer-scale model, this approach can only improve on the driving data to a limited extent.

Empirical-statistical downscaling (ESD) is an alternative that requires far less computing power. ESD uses observed climate data to establish a statistical relationship between the large-scale and the local climate. Using this relationship, local changes can be derived from the large-scale projections provided by GCMs or from observations.
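A minimal sketch of the ESD idea, assuming the simplest possible statistical relationship (a linear regression) and entirely synthetic calibration data; operational ESD uses far richer predictors and methods:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: large-scale GCM gridbox temperature (predictor)
# and local station temperature (predictand), linked linearly plus local noise
gcm_t = rng.normal(loc=8.0, scale=2.0, size=200)
station_t = 1.2 * gcm_t - 3.0 + rng.normal(scale=0.5, size=200)

# Calibrate the statistical relationship on the observed period
slope, intercept = np.polyfit(gcm_t, station_t, deg=1)

# Apply it to a large-scale projection to estimate the local value
projected_gcm_t = 10.5
local_estimate = slope * projected_gcm_t + intercept
```

The key assumption, as in real ESD, is that the calibrated relationship remains valid under the projected future climate.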

Both RCMs and ESD give results that are relatively consistent with each other as well as with observed data (Fig. 2). However, both approaches rely heavily on the quality of the information they are based on, i.e. the observed data or the GCM input. Downscaling only provides more location-specific data; it does not make up for uncertainties that stem from the data or the GCM it relies on.

Fig. 2: A comparison between RCM results based on different climate models (colored dots with error bars) and ESD results (red region showing the 90% confidence interval for the model ensemble); actual observations are shown as black symbols (Førland et al., 2011).

Global as well as downscaled climate models can simulate climate quite accurately, but they sometimes show substantial deviations from the observed climate, known as “bias”, especially at regional and local scales. Bias is the systematic difference between a modelled climate property (e.g. mean temperature) and the corresponding observed property. Bias correction can be applied to account for these differences: an empirical transfer function between simulated and observed climate properties is calibrated and then applied to the model output so that it matches the observational data. Bias correction is purely a post-processing step and cannot fix problems within the climate model itself.
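One common choice for such an empirical transfer function is quantile mapping. The sketch below, on synthetic data, maps each model value to the observed value at the same quantile of a calibration period, removing a systematic warm bias; this is one possible method, not necessarily the one any particular study uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily temperatures: "observed" vs. "modelled" with a warm,
# too-variable bias
obs = rng.normal(loc=10.0, scale=3.0, size=1000)
mod = rng.normal(loc=12.0, scale=4.0, size=1000)

def quantile_map(x, mod_ref, obs_ref, n_q=100):
    """Empirical quantile mapping: map each model value to the observed
    value at the same quantile of the calibration period."""
    q = np.linspace(0.0, 1.0, n_q)
    mod_q = np.quantile(mod_ref, q)   # transfer function: model quantiles ...
    obs_q = np.quantile(obs_ref, q)   # ... to observed quantiles
    return np.interp(x, mod_q, obs_q)

corrected = quantile_map(mod, mod, obs)
bias_before = np.mean(mod) - np.mean(obs)        # systematic warm bias
bias_after = np.mean(corrected) - np.mean(obs)   # close to zero
```

Because the transfer function is calibrated on past data, it shares the limitation noted above: it corrects the statistics of the output without fixing the model physics.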

Individual climate models may also struggle to accurately depict natural climate variability, i.e. natural short-term fluctuations on seasonal or multi-seasonal time scales (e.g. the North Atlantic Oscillation (NAO) or the El Niño–Southern Oscillation (ENSO)). However, combining several independent models can reduce these errors: averaging an ensemble of different climate models produces forecasts with better skill, higher reliability and greater consistency in predicting climate (Hagedorn et al., 2005).
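The benefit of averaging can be illustrated numerically. In the sketch below, five hypothetical “models” each track a synthetic anomaly series with independent errors; the ensemble mean has a smaller root-mean-square error than any single member, because the independent errors partly cancel. The data and models here are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for an observed climate anomaly series (e.g. a seasonal index)
truth = np.sin(np.linspace(0.0, 4.0 * np.pi, 120))

# Five hypothetical models: truth plus independent random errors
models = [truth + rng.normal(scale=0.8, size=truth.size) for _ in range(5)]

def rmse(series):
    """Root-mean-square error against the synthetic truth."""
    return float(np.sqrt(np.mean((series - truth) ** 2)))

individual_errors = [rmse(m) for m in models]
ensemble_error = rmse(np.mean(models, axis=0))
```

The cancellation only works to the extent that the models’ errors really are independent, which is why ensembles of structurally different models are preferred.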

In conclusion, modern climate models provide reliable projections at larger, global scales. However, they reach their limits when dealing with small-scale processes at the regional or local scale and with short-term climate variability. Effective methods exist to address these problems (as described above). Even though models will never predict our climate system perfectly, they still give us a reasonably precise picture of the future climate; or, to put it in George Box’s words: “All models are wrong, but some are useful”.


Førland, E. J., Benestad, R., Hanssen-Bauer, I., Haugen, J. E., and Skaugen, T. E., 2011, Temperature and precipitation development at Svalbard 1900–2100: Advances in Meteorology, v. 2011.
Hagedorn, R., Doblas-Reyes, F. J., and Palmer, T. N., 2005, The rationale behind the success of multi-model ensembles in seasonal forecasting – I. Basic concept: Tellus A, v. 57, no. 3, p. 219-233.
