Date: Wednesday 12 December 2012
Recent efforts, in which numerous climate models have been run for a common set of experiments, have produced large datasets of projections of future climate. Those multi-model ensembles sample initial-condition, parameter, and structural uncertainties, and they have prompted a variety of approaches to quantifying uncertainty in future regional climate change. Yet assessments still mostly use model ranges as uncertainties and equal-weighted averages as best-guess results. In principle, one might expect that the reliability of projections could be improved and uncertainties reduced if models were selected or weighted based on some criterion of model performance. In practice, however, no general all-purpose metric has been found that unambiguously identifies a ‘good’ model. Results may also be biased both by the small sample of models and by the fact that models share components and parameterizations. New results from the CMIP5 intercomparison will be presented.
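The contrast between equal-weighted averages and performance-weighted ones can be illustrated with a minimal sketch. All numbers below are invented for illustration (they are not CMIP5 output), and inverse-error weighting is just one of many possible performance metrics, not the method discussed in the talk.

```python
def ensemble_mean(projections, weights=None):
    """Weighted mean of model projections; equal weights if none are given."""
    if weights is None:
        weights = [1.0] * len(projections)
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, projections)) / total

def skill_weights(errors):
    """One simple choice: weight each model by the inverse of its
    historical error against observations (many other metrics exist)."""
    return [1.0 / e for e in errors]

# Projected regional warming (degrees C) from five hypothetical models,
# and each model's (invented) RMS error against the observed record.
projections = [2.1, 2.8, 3.5, 2.4, 4.0]
errors      = [0.3, 0.5, 1.2, 0.4, 1.5]

equal = ensemble_mean(projections)
weighted = ensemble_mean(projections, skill_weights(errors))
# Better-performing models (smaller error) pull the weighted mean
# toward their projections, here toward the lower values.
print(round(equal, 2), round(weighted, 2))
```

The gap between the two means is exactly the kind of result whose robustness depends on whether the chosen performance metric truly identifies a 'good' model.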