Ever wonder why it’s so hard to predict exactly where and when a hurricane will make landfall, or how strong it will be? It’s not from a lack of available information; if anything, it’s from too much of it.


Hurricane forecasters have many different computer models to aid in predicting a storm’s path and its intensity. All of them are good, but each takes many variables into account, and for forecasts of five days and beyond, predictions can vary widely. Meteorologists often refer to these as the “spaghetti models” because, when laid out on a map, the storm paths resemble strings of spaghetti. Another cliché reference is the “cone of uncertainty,” which takes into account paths from one extreme to the other. A rule of thumb: the average track error grows by about 75 miles in each direction for every 24-hour period of the forecast as a storm approaches landfall. One example: in the summer of 2004, as Hurricane Ivan bore down on the state of Florida, four different computer models run on September 9th, 2004 put the path of the storm everywhere from Miami to the Florida panhandle. The storm eventually made landfall near the Alabama/Florida border, then split in two, ran up through the northern states and made a loop, hitting Louisiana and Texas a week later (Ivan was indeed a very bizarre storm).
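
To see what that rule of thumb implies, here is a minimal sketch in Python (the function name and the simple linear error growth are my own illustration, not how the NHC actually constructs its cone):

```python
# Rough illustration of the rule of thumb above: track error grows by roughly
# 75 miles for every 24 hours of forecast lead time. The linear growth is an
# assumption for illustration only; the official cone of uncertainty is built
# from historical error statistics, not from this formula.

ERROR_PER_DAY_MILES = 75  # rule-of-thumb average track error per 24-hour period

def track_uncertainty_miles(lead_time_hours: float) -> float:
    """Estimated track error, in miles, on each side of the forecast line."""
    return ERROR_PER_DAY_MILES * (lead_time_hours / 24.0)

for hours in (24, 48, 72, 96, 120):
    print(f"{hours:>3} h forecast: roughly +/- {track_uncertainty_miles(hours):.0f} miles")
```

By the five day mark that back-of-the-envelope figure is already close to 400 miles on either side of the forecast line, which is why five day and beyond forecasts vary so widely.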


Before we get to specific models, understand that models fall into four different categories, with a further breakdown into three different types:

 

Model Categories:

 

  • Dynamic: These models look at the current conditions in the atmosphere to arrive at a conclusion for how intense a storm might be or where it will go…

 

  • Statistical: These models ignore current atmospheric conditions and predict movement based on how past storms behaved under similar parameters…

 

  • Statistical-Dynamic: Kind of the best of both worlds, combining what storms have done previously with current conditions. This, however, gets a little tricky, as too much data can create inaccurate forecasts…

 

  • Consensus: Alphabetically, this should come first, but it’s listed last because consensus modeling looks at information from numerous models of a variety of types to create its own forecast…

 

Model Types:

 

  • Tracking: These models are used specifically to forecast the path of a storm…

 

  • Intensity: These models are used specifically to forecast how strong or weak a storm will be…

 

  • Hybrid: These models forecast both the track and intensity of a storm…

 

One more thing to know about computer models: if you look closely at the spaghetti model graphics of your local forecast you’ll sometimes see acronyms for some of the models that end in the letter “I”. These plots are what are known as “interpolated” models (as an example, GFDI is the interpolated version of the GFDL model). An interpolated model is something of a very educated guess used to help forecasters adjust the paths and intensities of hurricanes to catch up with current conditions. Why would this be done? Usually it’s a time crunch issue: because of the availability of information from a variety of resources (satellites, aircraft and other models), new conclusive data might not exist for up to half a day. Forecasters sometimes can’t wait that long and need a more up-to-date model run. Interpolated adjustments cover up to six hours before the next full analysis; if you happen to see a “2” behind a computer model acronym, that signifies an interpolated adjustment of between 6 and 12 hours prior to the next full analysis.
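
To picture what an interpolated model does, here is a minimal Python sketch of the general idea: take the previous cycle’s forecast, drop the hours that have already passed, and slide the rest so it lines up with the storm’s latest observed position. The function name, the positions and the plain latitude/longitude shift are all hypothetical; real interpolation schemes apply more careful smoothing.

```python
# Minimal sketch of "interpolating" an older model run forward to current conditions.
# Track points are (forecast_hour, latitude, longitude); the data below is made up.

def interpolate_track(old_track, hours_elapsed, observed_lat, observed_lon):
    """Shift a previous forecast cycle so it starts at the storm's observed position."""
    # Keep only the portion of the old forecast that is still in the future.
    remaining = [(h - hours_elapsed, lat, lon)
                 for (h, lat, lon) in old_track if h >= hours_elapsed]
    if not remaining:
        return []
    # Offset between where the old run expected the storm to be right now
    # and where it actually is.
    _, first_lat, first_lon = remaining[0]
    dlat = observed_lat - first_lat
    dlon = observed_lon - first_lon
    return [(h, lat + dlat, lon + dlon) for (h, lat, lon) in remaining]

# A 6-hour-old run, with the storm now observed at 25.4N, 80.1W (hypothetical numbers):
old_run = [(0, 25.0, -79.5), (6, 25.3, -80.0), (12, 25.8, -80.8), (24, 26.9, -82.3)]
print(interpolate_track(old_run, 6, 25.4, -80.1))
```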

 

The six most popular hurricane computer models are listed below. All are track/intensity hybrid types and fall into the dynamic category (acronyms in parentheses are ATCF (Automated Tropical Cyclone Forecasting System) identification codes). Below the list is additional information about each model; further down is even more information about other models:

 

  • ECMWF (EMX): European Center for Medium Range Weather Forecasting Model; run by the European Center for Medium Range Weather Forecasting; Shinfield Park, Reading, England

  • GFDL (GFDL): Geophysical Fluid Dynamics Laboratory Model; run by the National Weather Service (NOAA) (created in Princeton); Princeton, NJ and California

  • GFS (GFSO): Global Forecast System (previously known as AVN); run by the National Centers for Environmental Prediction; Camp Springs, Maryland

  • HWRF (HWRF): NWS Hurricane Weather Research Model; run by the National Weather Service (NOAA) Environmental Modeling Center; Camp Springs, Maryland

  • NOGAPS (NGPS): Navy Operational Global Atmospheric Prediction System; run by the U.S. Navy Fleet Numerical Meteorology & Oceanography Center; Monterey, California

  • UKMET (UKM): United Kingdom Meteorological Office Program; run by The Met Office & Public Weather Service; Devon, England

*ECMWF (EMX) is a four dimensional model considered the preeminent medium range global forecast tool. ECMWF is most effective in tracking the late development of a storm and is the most complex and expensive computer program used in severe tropical weather forecasting. The European Center for Medium Range Weather Forecasting, home of ECMWF, is supported by 28 European countries and makes its data available to the U.S. EPS is a low resolution version of ECMWF; EMXI is the interpolated version of ECMWF...

 

*GFDL (GFDL) was originally designed to forecast cyclones; it is considered one of the most accurate early model predictors on Earth as it creates a three-dimensional grid by combining information and data from multiple sources. GFDL is “nested” within the GFS system but specifically focuses on the Atlantic and Pacific basins (detailed regional forecast model). GFDL is pretty accurate, usually coming in first or second on computer model outcomes. GFDN is the Navy’s version of GFDL (it is also sometimes referenced as NGFDL). While GFDL will not be developed any further past 2008 (HWRF will eventually replace it), development of GFDN will continue for the foreseeable future. Both GFDI and GHMI are interpolated versions of previous cycle GFDL models; GFNI is the interpolated version for GFDN…

 

*GFS (GFSO) measures storm variables at twenty-eight different levels in the atmosphere and is a worldwide forecast computer model. Because it covers the entire planet, GFS can forecast storms up to three weeks before development, but doesn’t do as good a job plotting where any particular storm will end up. GFSI is the interpolated adjusted model of a previous GFS cycle. Regarding the Atlantic hurricane basin, forecasters like GFDL more than GFS as GFS “overdevelops” a lot of tropical storms. Although it is considered a hybrid track/intensity type, GFS usually does a better job of predicting the track of a storm when compared to intensity. AVNO is the aviation component of GFS; AVNI is the interpolated version of AVNO…

 

*HWRF (HWRF) launched in 2007 and as mentioned above is scheduled to eventually replace GFDL. HWRF is superior to GFDL in predicting the track of a storm, but GFDL still does a better job with intensity forecasting. HWRF is a three dimensional real time Doppler based computer model collating information from a variety of sources such as satellites, buoys and hurricane reconnaissance aircraft. Much of NOAA’s forecasting in the coming years will hinge upon HWRF computer models. HWRF is a specialized version of the WRF (Weather Research and Forecasting) model. HWFI is the interpolated version of the previous HWRF cycle…

 

*NOGAPS (NGPS) wasn’t specifically designed to forecast hurricanes, but it works pretty well. NOGAPS uses upper air data to predict storm paths and the strength of a storm. Although it is a track/intensity hybrid, NOGAPS functions much better as a track only model…

 

*UKMET (UKM) is also a four dimensional model very similar to the NOGAPS system. Like NOGAPS, UKMET does a better job of predicting a storm’s track and not as good a job forecasting intensity. UKMET does however have a separate model for predicting intensity: EGRR. EGRI is the interpolated version of the previous EGRR cycle…

 

Other Dynamic Models:

 

BAM:          Beta and Advection Model- Also run out of Camp Springs, Maryland at the National Centers for Environmental Prediction, BAM goes hand in hand with GFS by following the GFS trajectory and then categorizing results into three areas: Shallow (BAMS), Medium (BAMM) and Deep (BAMD). The further away from each other the BAM models land, the more complex a forecast for a storm will probably turn out to be. BAM models are forecast track only and don’t measure the intensity of systems…
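
One simple way to picture “how far apart the BAM models land” is the largest separation, in miles, between their forecast positions at a given hour; the wider that spread, the murkier the steering picture. A quick Python sketch with hypothetical positions (this spread measure is my own illustration, not an official product):

```python
from math import radians, sin, cos, asin, sqrt

def distance_miles(p1, p2):
    """Great-circle distance between two (lat, lon) points, in statute miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))  # mean Earth radius of ~3959 miles

# Hypothetical 72-hour positions from the three BAM depths:
bam = {"BAMS": (26.0, -84.0), "BAMM": (27.2, -83.1), "BAMD": (28.5, -81.6)}
names = list(bam)
spread = max(distance_miles(bam[a], bam[b]) for a in names for b in names if a != b)
print(f"Largest separation among BAM members: {spread:.0f} miles")
```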

CMC GEM:   Environment Canada Global Environmental Multiscale Model- Well, that’s a mouthful, huh? The CMC GEM (CMC stands for the Canadian Meteorological Centre, GEM for the Global Environmental Multiscale model) is a four dimensional program similar to both ECMWF and UKMET that forecasts both track and intensity. The model, which only covers North America, has been through some changes recently, one of the most major being an upgrade in June 2009. CMC GEM is usually referred to as just CMC; CMCI is the interpolated version of the previous CMC cycle…
 

LBAR:         Limited area BARotropic Model- The program looks at vertical winds by relying on upper level air pressure. Upper level and lower level systems (high and low pressure) can push storms on different tracks; LBAR outputs a two dimensional forecast for predicting a hurricane path and, like the BAM models, does not measure intensity. LBAR gets some of its raw data from the GFS model. Like NHC98 (below), LBAR has been surpassed by a number of newer, more accurate programs…

 

NHC98:        National Hurricane Center 1998 Model- This NHC model, also referred to as A98E, relies on initial latitude and longitude to forecast a storm’s progression and, like BAM and LBAR, does not measure intensity. NHC98 uses the output of CLIPER to help in coming to its conclusions. It was an update to NHC90 and is the sixth version of the series. As you can probably deduce from the number 98, this is a very, VERY old model and is not as reliable as some of the newer programs…

 

Statistical Models:

 

CLIPER:        CLImatology and PERsistence Model- This is a tracking only model (no intensity forecasting) run by the National Hurricane Center and like SHIPS (see below), uses climate history to produce a “trackcast”. CLIPER is a three day statistical model; CLIPER5 is the five day derivative. CLIPER5 ignores current atmospheric factors and therefore is used as a benchmark for measuring the accuracy of other storm models (for more specific information on CLIPER5 baseline, see the James L. Franklin/NHC link below). Though CLIPER was very popular pre-1980, with today's more sophisticated computers and available resources, it has become somewhat of an antiquated forecast tool. Most TV meteorologists won't even show CLIPER in their forecasts, but you'll see it pop up a lot on websites...

 

SHIFOR:     Statistical Hurricane Intensity FORcast- This is an intensity only computer model (as opposed to a forecast tracking model) supplemented by both SHIFOR5 and Decay-SHIFOR5. Like CLIPER, SHIFOR relies upon historical data of similar storms to arrive at its conclusions and therefore is also used as a benchmark against other intensity models. Decay-SHIFOR5 (DSHIFOR5) was introduced in 2006 and factors in the decay of a storm when the system interacts with a land mass. SHIFOR was a very accurate model but has recently been surpassed by newer technology…
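
Because CLIPER5 and SHIFOR5 deliberately ignore current conditions, they serve as “no skill” baselines: a model’s skill is commonly expressed as the percentage by which it beats the baseline’s error (that is the idea behind the Forecast Verification Report linked at the bottom of this page). A small Python sketch with made-up error numbers shows the arithmetic:

```python
# Skill relative to a no-skill baseline such as CLIPER5 (track) or SHIFOR5 (intensity):
# positive values mean the model beat the baseline, negative values mean it did worse.

def skill_vs_baseline(model_error: float, baseline_error: float) -> float:
    """Percent improvement of a model's average error over the baseline's average error."""
    return 100.0 * (baseline_error - model_error) / baseline_error

# Hypothetical 72-hour average track errors, in miles:
cliper5_error = 300.0
dynamic_model_error = 180.0
print(f"Skill vs CLIPER5: {skill_vs_baseline(dynamic_model_error, cliper5_error):.0f}%")
```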

 

Statistical-Dynamic Models:

 

LGEM:           Logistic Growth Equation Model- This intensity only program uses the same raw data as SHIPS, but applies different algorithms to arrive at its conclusions. LGEM places more of an emphasis on changes to the environment over the prior 24 hours; SHIPS looks at environmental changes over the course of the complete forecast period…

 

SHIPS:       Statistical Hurricane Intensity Prediction Scheme- SHIPS outshines most others in the area of predicting a storm’s intensity as it relies on climate history using predictors from the GFS model. Like DSHIFOR5, the DSHP model (Decay SHIPS) factors in what a land mass will do to the intensity of the storm…

 

Consensus Models:

 

CONU:           An acronym for five interpolated models that make up the consensus. For 2010, those models included AVNI, GFDI, GFNI, NGPI and UKMI. With the CONU program, as few as two of the five models can be plugged in to create a forecast…
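
At its core, a simple consensus like CONU is just an average of whichever member forecasts actually came in, provided a minimum number of them (two, in CONU’s case) are available. A minimal Python sketch with hypothetical member positions for a single forecast hour:

```python
# Simple consensus of member model positions at one forecast hour.
# Member names match the 2010 CONU lineup, but the positions are made up.

def consensus_position(member_positions, minimum_members=2):
    """Average (lat, lon) of the available members, or None if too few reported."""
    available = [pos for pos in member_positions.values() if pos is not None]
    if len(available) < minimum_members:
        return None
    avg_lat = sum(lat for lat, _ in available) / len(available)
    avg_lon = sum(lon for _, lon in available) / len(available)
    return avg_lat, avg_lon

members_72h = {
    "AVNI": (27.1, -83.0),
    "GFDI": (27.5, -82.4),
    "GFNI": None,           # this member missed the cycle
    "NGPI": (26.8, -83.5),
    "UKMI": (27.3, -82.9),
}
print(consensus_position(members_72h))
```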

 

FSSE:             Florida State University Super Ensemble- There are actually two FSSE models: the first is a track only consensus model combining data from the following five interpolated models and one consensus model: GFDI, GFSI, GUNA (consensus of GFDI, UKMI, NGPI and GFSI), OFCI and UKMI. The second part of FSSE is an intensity consensus model combining data from three interpolated models (GFSI, OFCI and UKMI), one statistical-dynamic model (DSHP) and one statistical model (SHIFOR5). FSSE was developed in 2005 with funding from Weather Predict; its information is only available to subscribers (and yes, the NHC does receive this information). One really interesting thing about this model is that it constantly re-evaluates other models, almost to the point where it learns from their past mistakes. FSSE rewards and penalizes other models by changing the weighted structure according to the accuracy of both storm intensity and location from previous forecasts…
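
The “reward and penalize” idea can be pictured as a weighted average in which members with smaller past errors get larger weights. The sketch below is only a toy version of that concept, using inverse-error weights and made-up numbers; FSSE’s actual training scheme is proprietary and far more sophisticated.

```python
# Toy weighted consensus: members with smaller historical errors get more say.
# All names and numbers are hypothetical; FSSE's real weighting is proprietary.

def weighted_consensus(forecasts, past_errors):
    """Weight each member's intensity forecast (knots) by the inverse of its past error."""
    weights = {name: 1.0 / past_errors[name] for name in forecasts}
    total_weight = sum(weights.values())
    return sum(forecasts[name] * weights[name] for name in forecasts) / total_weight

member_intensity_kt = {"GFSI": 95.0, "UKMI": 105.0, "OFCI": 100.0}
avg_past_error_kt = {"GFSI": 12.0, "UKMI": 18.0, "OFCI": 10.0}   # smaller = historically better
print(f"Weighted consensus: {weighted_consensus(member_intensity_kt, avg_past_error_kt):.1f} kt")
```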

 

GEFS:          National Weather Service Global Ensemble Forecast System- Also known as AEMN (Automated Environmental Monitoring Network), this track/intensity hybrid program is based on the GFS system but made up of twenty different models. Although GEFS produces a forecast up to sixteen days in advance, it is not as reliable as the ECMWF model. AEMI is the interpolated previous cycle adjusted model of GEFS/AEMN…

 

GUNA:        A consensus model combining data from a number of interpolated models. For 2010, those models were AVNI, GFDI, NGPI and UKMI. The acronym dates back to 1998 when the old GUNS consensus model (GFDL, UKMET and NOGAPS) added the aviation model part of GFS (AVNO) to create GUNA. CGUN is a version of GUNA corrected for individual model biases. Like most consensus models, individual models used for drawing conclusions can change from year-to-year. If you are paying attention, you'll notice the models in GUNA are the same as the models for CONU with the exception of GFNI in the CONU model…

 

ICON:            A simple intensity only model computing the average when all of the following models are present: DSHP, GHMI, HWFI and LGEM. Individual models that make up the consensus of ICON can change from year-to-year; the models listed above were in use for the 2010 hurricane season…

 

IVCN:             Another simple intensity only model computing the average when combining DSHP, GFNI, GHMI, HWFI and the LGEM models. IVCN requires data from at least two of the five models to form a consensus. Individual models that make up the consensus of IVCN can change from year-to-year; the models listed above were in use for the 2010 hurricane season…

 

TCON:         A consensus model combining data from five interpolated models: EGRI, GFSI, GHMI, HWFI and NGPI. Individual models that make up the consensus of TCON can change from year-to-year. TCCN is a version of TCON corrected for model biases…

 

TVCN:           A consensus model combining data from the following interpolated models: EGRI, EMXI, GFSI, GFNI, GHMI, HWFI and NGPI. TVCN doesn’t always use all seven of these models for a consensus, but needs at least two to form its quorum. Like ICON, IVCN and TCON, individual models that make up the consensus of TVCN can change from year-to-year. TVCC is a version of TVCN corrected for model biases…

 

Finally, there’s one more model to tell you about that isn’t necessarily a storm model, but is tied to the effects of severe tropical weather:

 

SLOSH:       Sea Lake and Overland Surges from Hurricanes Model. This computer model focuses strictly on a hurricane’s storm surge, the rising of seawater as a hurricane hits land…


For more info regarding the hurricane computer models, follow this link to the Weather Underground, this link for a web based article from South Florida’s Sun Sentinel, this link to the National Hurricane Center, or this link to Florida State University's Model Identifiers page. That FSU page was used as a reference for listing which models were used in consensus models in 2010; as mentioned, the individual models for those programs change from year-to-year. If you want to go a little deeper into understanding models, accuracy and historical performance, follow this link to a 2007 Forecast Verification Report by James L. Franklin of the NHC, or this link to a Klipsch Community Forum on the 2010 Hurricane Season.

