Understanding the Accuracy of Weather Forecasts
Oral Presentation

How accurate is the weather forecast? This question really matters to users, and the Met Office monitors and evaluates accuracy in several different ways. Understanding accuracy starts in the model development process, where candidate physical developments are tested before being adopted into an operational version of the weather forecast production system. Before an operational release, any new changes go through both objective and subjective tests to compare against the existing system. Once a system is operational, the Met Office runs a real-time verification system that takes observations from across the globe and compares them to forecasts at different lead times. This helps us to identify weaknesses in the forecast, which in turn motivates further scientific work and development.

We also know that perceptions of accuracy and trust are strongly correlated for members of the public, and that people with greater confidence in the accuracy of weather forecasts are more likely to feel confident in taking action based on forecast information. This is further motivation for weather forecasts to be accurate: we want people to take action when they need to, for example in response to a weather warning or to avoid a weather hazard.

The rise of machine learning (ML) models also poses a challenge for traditional accuracy scores: ML models are designed to score well against these metrics, but does this always represent a better forecast for users? This presentation gives an overview of the whole end-to-end process of accuracy evaluation and monitoring, and discusses current Met Office performance alongside a forward look at the challenges of monitoring accuracy for ML models.

Key words: AI/ML, Modelling, Verification
Speaker/s: Dr Caroline Bain