By Dr. Joshua Woodbury, Swiss Re
Technological advances are changing the game in almost every facet of our lives, and insurance is no different. When we think of technology and insurance, blockchain, self-driving cars and monitoring devices often take center stage, but behind the scenes, advances in flood modeling are changing our view of flood insurance. While the vast majority of residential and small commercial flood insurance is still written through the National Flood Insurance Program (NFIP), many private insurance companies, equipped with the latest flood modeling tools, are starting to challenge the status quo.
For most of the last 40 years, flood risk in the U.S. has fallen within the scope of the Federal Emergency Management Agency (FEMA), which oversees the development of flood risk maps across the country. These maps, which cover more than 98% of the U.S. population, are used for disaster preparedness, zoning practices, and flood insurance rating. For most consumers purchasing flood insurance, these maps are key to determining their risk and, ultimately, their premiums. While significant resources have gone into developing these maps, private insurers have been reluctant to rely on them for several reasons: low risk granularity (i.e., 100-year zones, 500-year zones, and everything outside them), frequent flooding outside of designated flood zones, the lack of any risk accumulation metric, and no information on NFIP loss experience.
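To see why coarse zone labels can mislead, it helps to translate return periods into probabilities over a horizon a homeowner actually cares about. The short sketch below (plain Python, and assuming independent years, which is a simplification) computes the chance of at least one flood during a typical 30-year mortgage:

```python
# Illustrative return-period arithmetic, not FEMA's methodology.
# A "100-year" zone means a 1% chance of flooding in any given year.
def prob_at_least_one_flood(return_period_years, horizon_years):
    """P(at least one flood over the horizon), assuming independent years."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

p100 = prob_at_least_one_flood(100, 30)  # "100-year" zone
p500 = prob_at_least_one_flood(500, 30)  # "500-year" zone
print(f"100-year zone over a 30-year mortgage: {p100:.0%}")  # ~26%
print(f"500-year zone over a 30-year mortgage: {p500:.0%}")  # ~6%
```

A roughly one-in-four chance over a mortgage is a very different message than "100-year zone," which is part of why finer-grained risk views matter to underwriters.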
The difficulty in understanding flood risk comes from its localized, high-resolution nature. More than hurricane or earthquake risk, flood risk can change from one house to the next, or even within a few feet, because it depends on topography, land use, soil type, flood protection, and other local features. These dependencies have traditionally made FEMA mapping resource intensive and probabilistic flood catastrophe models mostly infeasible.
So what has changed? Basically, three things: a better understanding of the physics behind flooding, the availability of high-resolution data, and increasingly powerful computing resources.
While many of the critical concepts of flood modeling have been understood for quite some time, improved understanding of how water flows over floodplains has opened the door to more realistic flood modeling. For example, when water stays within the river banks, we can generally assume it flows in one direction (one-dimensional flow), i.e., downstream, which is relatively simple to model. However, this assumption breaks down once the river banks have been overtopped and water can flow in multiple directions. Laboratory experiments and improved modeling techniques have advanced our understanding of this type of multidimensional flow, which in turn has improved our ability to develop realistic flood maps and models.
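To make the multidimensional idea concrete, here is a deliberately simplified cellular sketch — not the shallow-water-equation solvers that real flood models use. Water on a small terrain grid moves toward any neighboring cell with a lower water surface (terrain plus depth), so it can overtop an obstacle and spread around it rather than flowing only "downstream"; the grid, rates, and step count are all invented for illustration:

```python
# Toy 2-D flood-spreading model: water relaxes toward a level surface.
# Each step, every cell sends a fraction of the water-surface difference
# to each lower neighbor, capped so a cell never sends more than it holds.
def spread(terrain, depth, steps=50, rate=0.2):
    rows, cols = len(terrain), len(terrain[0])
    for _ in range(steps):
        # water surface = ground elevation + water depth
        surface = [[terrain[r][c] + depth[r][c] for c in range(cols)]
                   for r in range(rows)]
        delta = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        head = surface[r][c] - surface[nr][nc]
                        if head > 0:
                            # transfer is limited by the water available,
                            # split across up to four neighbors
                            q = min(rate * head, depth[r][c] / 4)
                            delta[r][c] -= q
                            delta[nr][nc] += q
        for r in range(rows):
            for c in range(cols):
                depth[r][c] += delta[r][c]
    return depth

# Flat floodplain with a raised "levee" (the 1.0 m cells); all water
# starts in the upper-left cell and finds its way around the obstacle.
terrain = [[0.0, 1.0, 0.0],
           [0.0, 1.0, 0.0],
           [0.0, 0.0, 0.0]]
depth = [[2.0, 0.0, 0.0],
         [0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0]]
depth = spread(terrain, depth)
```

Even in this toy, the water reaches cells that a one-dimensional, single-direction model would never wet, which is exactly the behavior that matters once banks are overtopped.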
In addition, the availability of key input data at high resolution and high quality has allowed model developers to understand flood risk more accurately. For example, topography information is, in many places, available at resolutions of 10 meters or finer, thanks to advances in satellite imaging and remote sensing. Land use information is also improving, giving us a more realistic view of how much flooding can occur and where. Simply put, better data lets us capture the small-scale characteristics that change flood risk.
Finally, combining multidimensional flow modeling with high-resolution data requires powerful computing techniques. The availability of relatively inexpensive processing power and improvements in computational efficiency have made large-scale, high-resolution modeling possible. Not only do we have the data, we now have the computing power to crunch the numbers on a much larger scale. We can now physically model flood waters throughout the Mississippi basin at resolutions that were unthinkable not long ago.
Although these advancements are exciting on their own, the models built on them are changing our thinking on flood insurance for several key reasons. First, rather than just 100- or 500-year zones, the models delineate flood risk at a much higher resolution. This matters because the potential loss per insured value for a location in a 40-year flood zone is significantly higher than for a location just inside the 100-year zone. Second, many of the models account for more than just river flooding. This allows insurers to rate for flooding induced by heavy rainfall, which often occurs outside FEMA-designated flood zones. Finally, the fully probabilistic models are more than just flood zones. Most include the full four-box modeling approach used in the more mature earthquake and wind models, providing event sets, vulnerability components, and financial models. They give users expected annual losses as well as loss frequency curves at the single-risk and treaty level, allowing insurers to control their flood risk. Essentially, these models enable us to understand, and thus underwrite, flood risk.
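A minimal sketch of that four-box chain — hazard event set, vulnerability curve, financial terms, loss metrics — might look like the following. Every number here (annual flood frequency, depth distribution, damage curve, deductible, limit) is an invented placeholder for illustration, not taken from any actual vendor model or NFIP experience:

```python
import random

random.seed(42)

# Box 1 — Hazard: a synthetic event set of annual flood depths (meters)
# at one location; 0.0 means no flood that year.
def sample_annual_depth():
    if random.random() < 0.04:           # assumed 4% annual flood chance
        return random.uniform(0.1, 3.0)  # assumed depth distribution
    return 0.0

# Box 2 — Vulnerability: depth-damage curve -> damage ratio of insured value
def damage_ratio(depth_m):
    return min(1.0, 0.25 * depth_m)      # toy linear curve, capped at 100%

# Box 3 — Financial: apply deductible and limit to the ground-up loss
def insured_loss(ground_up, deductible=10_000, limit=250_000):
    return max(0.0, min(ground_up - deductible, limit))

# Box 4 — Loss metrics: simulate many years, then summarize
insured_value = 300_000
years = 100_000
losses = []
for _ in range(years):
    ground_up = damage_ratio(sample_annual_depth()) * insured_value
    losses.append(insured_loss(ground_up))

aal = sum(losses) / years                           # average annual loss
p_claim = sum(1 for x in losses if x > 0) / years   # annual claim frequency
print(f"Average annual loss: ${aal:,.0f}")
print(f"Annual probability of a paid claim: {p_claim:.2%}")
```

Sorting the simulated losses would likewise yield the loss frequency (exceedance) curve mentioned above; real models do the same bookkeeping, just with physically simulated event sets and calibrated vulnerability and financial components.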
While the models are relatively new, many in the private market see them as an opportunity to provide their current customers with better service and to differentiate themselves from their competitors. They are finding that, in many cases, flood insurance can be provided at significantly lower cost and on better terms than an NFIP policy. A better product at a lower cost is hard to beat, and insurers are taking notice. The door to the private flood market is now open.
This article was previously published in the Pulse.
Josh Woodbury is currently a flood specialist in the natural catastrophe modeling team at Swiss Re. He has been involved in model and product development for flood in the US and Canada. He joined Swiss Re in 2013.
Prior to Swiss Re, Josh completed his master's degree and PhD in Water Resource Systems Engineering at Cornell University. Before Cornell, he completed a bachelor's degree in civil and environmental engineering at Clarkson University.