Hurricane Irma ravaged the Caribbean Islands and the state of Florida. Sixteen days prior, Hurricane Harvey ravaged the coastal area of Texas. Weather experts said this was the first time in recorded history that two Category 4 hurricanes made landfall in the United States in the same season. Each hurricane caused billions of dollars in damage and impacted millions of people.
Especially in Irma’s case, there was considerable discussion of the computer modeling used to predict the path and strength of the hurricane. As I have worked in computer modeling for most of my 30+ year career, this piqued my interest and I decided to educate myself a little more on the topic as it relates to hurricanes.
The first resource I pursued was one of our software developers here at AFT, John Lindsay. John has an undergraduate degree in Meteorology and a Master’s degree in Computer Science. He is a self-described weather geek who has a weather station at his home. We talked about weather modeling and he gave me some links to read further, which I did.
Most of the readers of my blogs are engineers who either use AFT computer modeling software or are at least familiar with the type of software we develop. The AFT software with the most parallels to weather modeling is our AFT Impulse waterhammer modeling software. There are four main reasons I think this:
- AFT Impulse, like weather modeling, is transient in nature
- There are many uncertainties that need to be considered
- The predictions impact human safety, property and the environment
- Considerable human judgement and expertise are needed
Before discussing these topics, I want to share what I gathered about computer modeling and severe weather. And rather than cover the broad topic of weather modeling, I want to focus on the narrower topic of modeling severe weather events like hurricanes.
Predicted storm track for Irma 3 days before landfall in Florida (source National Hurricane Center)
The Weather Models
Weather models have different geographic reach. Some are global. Some are regional. And some are highly local. Meteorologists use the results from various models as inputs to more localized models to give very specific local forecasts.
My colleague John Lindsay emphasized to me several times that “all of the weather models have quirks”. For example, as we live in Colorado in the USA, which is right on the eastern edge of the Rocky Mountains, John discussed how certain models have difficulty in mountain areas.
When it comes to weather forecasting of major storms, the Europeans have the most highly developed resources. The European Center for Medium-Range Weather Forecasts (ECMWF) is located in England and makes heavy use of supercomputers and detailed computational modeling. They have models that simulate the weather around the entire world simultaneously rather than just regionally or locally. If you paid close attention to the forecasts for Hurricanes Harvey and Irma, you would have heard the forecasters discuss several models including “the European Model”. When they said this, they were referring to the ECMWF "Integrated Forecast System". The ECMWF typically generates a detailed forecast twice daily.
Those same forecasters also mentioned at times the “GFS” forecast. This is the Global Forecast System, run by the American National Weather Service. The GFS also models the entire world simultaneously, and an update is released four times each day.
The global models are broken down into grid points roughly 10-20 miles (15-30 km) apart.
According to MIT meteorology professor Kerry Emanuel,
The top performing model is usually the European model, which is slightly ahead in long-term accuracy over the American one, Emanuel said. But that doesn't mean the European will be better every time, he said.
"Good forecasters look at the whole suite" of models, Emanuel said.
One of the successes the European model had was predicting that Hurricane Sandy in 2012 would move onto the American northeast coast seven days in advance when the American model did not predict this until four days in advance.
I have not researched the accuracy of the European ECMWF vs. the American GFS models for Irma very thoroughly, but I did find this CNN article from September 2 (seven days before the Sept 9 landfall of Irma in Florida). The American model had Irma turning north and missing the American coast while the European model had Irma continuing its westward track nearer to Florida.
Predicted storm direction for Irma 7 days before landfall in Florida (source CNN)
Those who perform transient modeling are aware of how sensitive predictions can be to the initial conditions. Weather modeling experts use every scrap of data they can find regarding the current condition of the atmosphere to provide initial conditions. This involves data on air temperature, pressure, wind speed, humidity, as well as ocean water temperature. The air data used can vary with altitude when available.
This data is gathered through a network of satellites, ground weather stations, ocean buoys, weather balloons, high altitude aircraft (which drop weather sensors into the storm) and maybe the coolest of all – the aircraft that fly right into the hurricane to take data – also known as “hurricane hunters”.
NOAA Hurricane Hunter Aircraft Flying into Irma on September 5, 2017
Detailed Models vs. “Ensemble” Models
The global systems from the ECMWF and GFS each produce a detailed, high-resolution forecast, which as I understand it is a kind of best guess, plus as many as 51 lower-resolution ensemble runs. The ensemble runs vary the input data in different ways to assess how sensitive the forecast is to uncertainties in the input. These runs are also used to help develop the cone of uncertainty.
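Conceptually, an ensemble works like the toy Python sketch below, which is a drastically simplified stand-in for a real forecast system. Each member perturbs the observed initial state slightly, and the spread of the results indicates how sensitive the forecast is. The Lorenz-63 equations used here are a classic toy model of atmospheric convection, not an actual forecast model, and all the numbers are illustrative.

```python
import random

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 toy convection model."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def run_member(x, y, z, steps=1500):
    """Advance one ensemble member 15 model time units and return final x."""
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

random.seed(42)
base = (1.0, 1.0, 1.0)  # the "observed" initial state (made up)
# Each member nudges the initial state slightly, standing in for
# uncertainty in the measured state of the atmosphere.
members = [run_member(base[0] + random.uniform(-1e-3, 1e-3), base[1], base[2])
           for _ in range(20)]
spread = max(members) - min(members)
print(f"ensemble spread after 15 time units: {spread:.2f}")
```

Even with initial perturbations a thousand times smaller than the state itself, the members diverge substantially, which is exactly why forecasters look at the whole suite of runs rather than a single best guess.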
Weather Modeling Parallels to Modeling with AFT Impulse
AFT Impulse, like weather modeling, is transient in nature
Transient modeling is particularly difficult because poor initial conditions or weaknesses/inaccuracies in the computational engine can amplify over time and ruin the simulation. The more you can connect and verify the initial conditions and computational methods to measurements or known conditions, the more confident you can be in the predictions.
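As a toy illustration (in Python, and unrelated to AFT's actual solver), here is how a weakness in the computational method can amplify over time. Forward Euler integration of simple exponential decay is stable only for small enough time steps; with too large a step, the numerical error grows at every step and ruins the simulation even though the physical system is perfectly well behaved.

```python
def euler_decay(y0, k, dt, steps):
    """Integrate dy/dt = -k*y with forward Euler: y_next = y * (1 - k*dt)."""
    y = y0
    for _ in range(steps):
        y += dt * (-k * y)
    return y

k = 10.0  # decay rate; forward Euler is stable only when dt < 2/k = 0.2
good = euler_decay(1.0, k, dt=0.01, steps=1000)  # decays toward zero, as physics says
bad = euler_decay(1.0, k, dt=0.25, steps=1000)   # |1 - k*dt| > 1, so error amplifies
print(f"stable step: {good:.2e}, unstable step: {bad:.2e}")
```

The true solution decays to zero in both cases; only the numerical method differs, which is why verifying the computational engine against known conditions matters so much in transient work.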
There are many uncertainties that need to be considered
Severe weather modeling has more uncertainties than a typical waterhammer model, but the fundamental issue is similar. The person doing the modeling needs to evaluate the various uncertainties and decide how to best compensate for them in order to come up with a pragmatic result.
As many of you know, one area of waterhammer modeling that has considerable uncertainty is when transient cavitation occurs. One of the recommendations we make to users is to perform a “sensitivity analysis”. This is similar in principle to the “ensemble” models used in severe weather forecasts.
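As a minimal sketch of what a sensitivity analysis looks like in spirit (this is not AFT Impulse itself, and the numbers are illustrative), consider the classic Joukowsky relation for instantaneous surge pressure, dP = rho * a * dv, and sweep the pipe wavespeed, which is often uncertain when entrained air or pipe restraint conditions are in doubt:

```python
def joukowsky_surge(rho, a, dv):
    """Joukowsky surge pressure rise dP = rho * a * dv, in Pa."""
    return rho * a * dv

rho = 998.0         # water density, kg/m^3
dv = 2.0            # assumed instantaneous velocity change, m/s (illustrative)
a_nominal = 1200.0  # nominal wavespeed, m/s (illustrative)

# Sensitivity analysis: sweep the uncertain wavespeed +/- 20% and
# report the resulting range of surge pressures.
for a in (0.8 * a_nominal, a_nominal, 1.2 * a_nominal):
    dp_bar = joukowsky_surge(rho, a, dv) / 1e5  # Pa -> bar
    print(f"a = {a:6.0f} m/s -> surge = {dp_bar:5.1f} bar")
```

A real waterhammer sensitivity study would vary cavitation-related inputs inside the full transient model, but the principle is the same as the weather ensembles: perturb the uncertain inputs and see how far the answers spread.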
I found it interesting how much effort goes into initializing the hurricane forecast models. Said Kerry Emanuel of MIT,
About half of the computing power thrown at any hurricane forecast is spent entirely on describing this initial, current state, he said. The other half is spent projecting the future.
The predictions impact human safety, property and the environment
This is a particularly challenging issue for severe weather modeling. Their predictions are used to recommend either evacuation or non-evacuation. If they are wrong in either direction then people, property and the environment are put at risk. For example, in anticipation of Hurricane Irma, two nuclear power plants in Florida shut down. This is expensive and always slightly risky.
Had Irma then not tracked over Florida, the weather modelers would have been wrong, unnecessary and potentially risky actions would have been taken at these nuclear plants, and millions of evacuees would have been put in danger on the roads of Florida for nothing.
In Hurricane Harvey’s case, the cities south of Houston were under mandatory evacuation. But the city of Houston itself was not. The excessive rain that fell on Houston when Harvey stalled over the city for several days led to extensive flooding and loss of human life. Whether an evacuation on short notice would have been “the lesser of two evils” is not clear and certainly was an extremely difficult decision to make for the government officials involved.
Similar issues arise when using AFT Impulse, although on a smaller scale. Generally, AFT Impulse users are not under the time pressure associated with approaching severe weather, which makes things easier. However, the engineering analyst can come under severe pressure from superiors or customers when the predictions lead to the conclusion that an expensive solution is required. Mistakes impact human safety, property and the environment.
Considerable human judgement and expertise are needed
Even when all the computer modeling simulations are run, in some cases the results are ambiguous and require human judgement and expertise. For hurricane prediction, here is a quote from an article in The Atlantic based on discussions with Neal Dorst, a research meteorologist at NOAA’s Hurricane Research Division:
Yet those model runs alone do not determine what shows up in a broadcast or on a news site… Ultimately, the team at the National Hurricane Center might sit down with hundreds of simulated storm-track runs. Major models like the European or the U.S.-operated Global Forecast System also prepare dozens of “ensemble” runs, where they adjust small aspects of the weather’s initial state.
Then an experienced human being sits down and looks at them all.
“The official track is—you have a human being sitting there, weighing runs, usually mentally, with which ones he trusts the most and which ones agree,” Dorst told me. “You always want to have a human being somewhere in the loop when you’re issuing forecasts saying, this is where [the hurricane] will go.”
That researcher will draw the anticipated-track line, based on the model runs and their own experience. Then they add the cone of uncertainty around the anticipated track. This cone doesn’t have anything to do with what the models say: It’s an average of how forecasters have gotten other tropical cyclones wrong over the past five years.
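As I understand it, the cone radius at each lead time is sized so that it encloses roughly two-thirds of recent historical track errors. A toy Python version of that idea, using entirely hypothetical error data, might look like:

```python
def cone_radius(errors, fraction=2 / 3):
    """Smallest radius enclosing `fraction` of historical track errors
    (a naive percentile; real cone statistics are more careful)."""
    ranked = sorted(errors)
    idx = min(len(ranked) - 1, int(fraction * len(ranked)))
    return ranked[idx]

# Hypothetical five-year track errors (nautical miles) by forecast lead time
historical_errors = {
    24: [18, 25, 30, 33, 40, 41, 47, 52, 60, 75],
    48: [35, 50, 61, 70, 78, 85, 96, 110, 125, 160],
    72: [60, 80, 95, 110, 130, 150, 170, 200, 230, 300],
}

for hours, errs in historical_errors.items():
    print(f"{hours} h cone radius ~ {cone_radius(errs)} nm")
```

Note how the radius grows with lead time: the cone widens not because the current models say so, but because past forecasts at longer range have simply missed by more.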
AFT Impulse waterhammer modeling similarly requires judgment and expertise. Questions that require human judgment include the following:
- When is a detailed model necessary?
- When is a simple model good enough?
- When can modeling be skipped altogether based on experience and intuition?
- What can be ignored?
- What must be included?
- How should boundary conditions be handled?
- How should some equipment be modeled when the original data has been lost or modified?
- What is the worst case situation?
I have always found computer modeling to be a fascinating field of endeavor. Human ingenuity is used to arrive at the governing equations. Human ingenuity is used to develop numerical methods to solve those equations. Human ingenuity is used to build the computing platforms to crunch the numbers. And, finally, human ingenuity is used to apply and judge the results.
Human ingenuity knows no limits. So expect our ability to simulate the natural world around us to continue to evolve and improve. Hopefully the result will be improved human safety, stronger economic resources, and a safer environment for us all.
For Further Reading
Here is an interesting article on how the models performed for Hurricane Harvey: https://arstechnica.com/science/2017/09/at-times-during-harvey-the-european-model-outperformed-humans/
Here is an interesting article written between Harvey and Irma about the progress of numerical weather simulation: https://www.newyorker.com/tech/elements/our-weather-prediction-models-keep-getting-better-and-hurricane-irma-is-the-proof
Here is an interesting page from NOAA about the biases in the various models: http://www.wpc.ncep.noaa.gov/mdlbias/biastext.shtml