Correcting, Improving, and Verifying Automated Guidance in a New Warning Paradigm
The prototype Probabilistic Hazards Information (PHI) system allows forecasters to experimentally issue dynamically evolving severe weather warning and advisory products in a testbed environment, providing hypothetical end users with specific probabilities that a given location will experience severe weather over a predicted time period. When issuing these products, forecasters are provided with an automated, first-guess storm identification object intended to support the probabilistic warning issuance process. However, empirical results from experimentation suggest that forecasters generally distrust the automated guidance, leading to frequent adjustments of the automated information. Additionally, feedback from several years of experimentation suggests that forecasters have limited direct experience with how storm-scale severe weather probabilities tend to evolve in different convective situations.
To help address these concerns, the first part of this thesis provides a detailed analysis of the maximum attainable predictability of the automated guidance during the spring season of 2015 and compares the verification statistics from automation with those of the corresponding storm-based warnings issued by the National Weather Service during the same period. The second part of this thesis addresses storm-scale severe weather probability trends by developing a machine learning model that predicts the evolution of a storm’s likelihood of producing severe weather. This model uses the ensemble average of six machine learning members, trained on variables obtained from the initial automated guidance, environmental parameters, and a storm’s history, to predict future probabilities of severe weather occurrence over the predicted duration of a storm. Finally, the model was implemented and tested during the 2017 PHI prototype experiment.
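The ensemble-averaging step described above can be illustrated with a minimal sketch. This is not the thesis's actual implementation: the member models, their parameters, and the feature inputs here are all hypothetical placeholders standing in for the six trained machine learning members; the sketch only shows how individual member probabilities could be combined into an ensemble-mean probability trend over a storm's predicted duration.

```python
from statistics import mean

def make_member(weight, bias):
    """Create a hypothetical member predictor standing in for one of
    the six trained models; it maps a feature vector (e.g., initial
    guidance, environmental parameters, storm history) to a severe
    weather probability."""
    def member(features):
        raw = bias + weight * sum(features) / len(features)
        return min(1.0, max(0.0, raw))  # clamp to a valid probability
    return member

# Six illustrative members with arbitrary placeholder parameters.
members = [make_member(w, b) for w, b in
           [(0.8, 0.05), (0.9, 0.0), (0.7, 0.1),
            (0.85, 0.02), (0.75, 0.08), (0.95, -0.02)]]

def ensemble_probability(features):
    """Ensemble average of the six member predictions."""
    return mean(m(features) for m in members)

def probability_trend(feature_sequence):
    """Predict the probability at each forecast step over a storm's
    predicted duration, one feature vector per step."""
    return [ensemble_probability(f) for f in feature_sequence]
```

In this sketch each member clamps its output to [0, 1], so the ensemble mean is itself a valid probability at every forecast step.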