WSIM-GLDAS: Acquisition, Exploration, and Integration (Python)

Authors

Josh Brinks

Elaine Famutimi

Published

June 3, 2024

Overview

In this lesson, you will acquire the dataset called Water Security Indicator Model - Global Land Data Assimilation System (WSIM-GLDAS) from the NASA Socioeconomic Data and Applications Center (SEDAC) website, perform various pre-processing tasks, and explore the dataset with advanced visualizations. We will also integrate WSIM-GLDAS with a global administrative boundary dataset called geoBoundaries and a population dataset called the Gridded Population of the World. You will learn how to perform numerous data manipulations and visualizations throughout this lesson.

Learning Objectives

After completing this lesson, you should be able to:

  • Retrieve WSIM-GLDAS data from SEDAC.
  • Retrieve administrative boundaries using the geoBoundaries API.
  • Subset WSIM-GLDAS data for a region and time period of interest.
  • Visualize geospatial data to highlight precipitation deficit patterns.
  • Write a pre-processed NetCDF-formatted file to disk.
  • Perform visual exploration with histograms, choropleths, and time series maps.
  • Integrate gridded population data with WSIM-GLDAS data to perform analyses and construct visualizations to understand how people are impacted.
  • Summarize WSIM-GLDAS and population raster data using zonal statistics.
Programming Review

This is the Python version of this lesson. If you would like to work through the R version, please visit WSIM-GLDAS: Acquisition, Exploration, and Integration.

Introduction

The water cycle is the process of circulation of water on, above, and under the Earth’s surface (NOAA 2019). Human activities such as greenhouse gas emissions, land use changes, dam and reservoir development, and groundwater extraction have affected the natural water cycle in recent decades (IPCC 2023). The influence of these activities on the water cycle has consequential impacts on oceanic, groundwater, and land processes, influencing phenomena such as droughts and floods (Zhou 2016).

Precipitation deficits, or periods of below-average rainfall, can lead to drought, characterized by prolonged periods of little to no rainfall and resulting water shortages. Droughts often trigger environmental stresses and can create cycles of reinforcement impacting ecosystems and people (Rodgers 2023). For example, California frequently experiences drought, but the combination of prolonged dry spells and sustained high temperatures prevented the replenishment of cool fresh water to the Klamath River, which led to severe water shortages in 2003 and again from 2012 to 2014. These shortages affect agricultural areas like the Central Valley, which grows almonds, one of California’s most important crops; the state produces 80% of the world’s almonds. These severe droughts, coupled with competition for limited freshwater resources, resulted in declining populations of Chinook salmon due to heat stress and gill rot disease, disrupting the food supply for Klamath basin tribal groups (Guillen 2002; Bland 2014).

Data Science Review

A raster dataset is a type of geographic data in digital image format with numerical information stored in each pixel. (Rasters are often called grids because of their regularly-shaped matrix data structure.) Rasters can store many types of information and can have dimensions that include latitude, longitude, and time. NetCDF is one format for raster data; others include Geotiff, ASCII, and many more. Several raster formats like NetCDF can store multiple raster layers, or a “raster stack,” which can be useful for storing and analyzing a series of rasters.
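
To make the idea of a raster stack concrete, here is a minimal sketch using the xarray package (loaded later in this lesson). The grid, dates, and values are all made up for illustration:

import numpy
import pandas as pd
import xarray as xr

# a tiny illustrative raster stack: 3 monthly layers on a 4 x 5 lat/lon grid
times = pd.date_range("2014-01-01", periods=3, freq="MS")
lat = numpy.linspace(35.0, 34.25, 4)
lon = numpy.linspace(-120.0, -119.0, 5)
stack = xr.DataArray(
    numpy.random.rand(3, 4, 5),  # placeholder values, not real data
    coords={"time": times, "lat": lat, "lon": lon},
    dims=["time", "lat", "lon"],
    name="example_variable")
# one 2D raster layer per time step, just like a NetCDF raster stack
print(stack)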

The Water Security (WSIM-GLDAS) Monthly Grids, v1 (1948-2014) dataset “identifies and characterizes surpluses and deficits of freshwater, and the parameters determining these anomalies, at monthly intervals over the period January 1948 to December 2014” (ISciences and Center For International Earth Science Information Network-CIESIN-Columbia University 2022b). The dataset can be downloaded from the NASA SEDAC website. Downloads of the WSIM-GLDAS data are organized by a combination of thematic variables (composite surplus/deficit, temperature, PETmE, runoff, soil moisture, precipitation) and integration periods (a temporal aggregation) (1, 3, 6, 12 months). Each variable-integration combination consists of a NetCDF raster (.nc) file with a time dimension that contains a raster layer for each month in the January 1948 to December 2014 record (804 months for the 1-month integration; longer integrations begin later and so contain slightly fewer layers). Some variables also contain multiple attributes, each with their own time series. Hence, this is a large file that can take a lot of time to download and may cause computer memory issues on certain systems. This is considered BIG data.

Knowledge Check
  1. How would you best describe the water cycle?
    1. A prolonged period of little to no rainfall.
    2. Low precipitation combined with record temperatures.
    3. The circulation of water on and above Earth’s surface.
    4. A cycle that happens due to drought.
  2. What human interventions affect the water cycle? (select all that apply)
    1. Greenhouse gas emissions
    2. Land use changes
    3. Dam and reservoir development
    4. Groundwater overexploitation
  3. What is a precipitation deficit?
    1. A period of rainfall below the average.
    2. A prolonged period of little to no rainfall.
    3. A period of chain reactions.
    4. A period of rainfall above the average.
Data Science Review

The Water Security (WSIM-GLDAS) Monthly Grids dataset used in this lesson is hosted by NASA’s Socioeconomic Data and Applications Center (SEDAC), one of several Distributed Active Archive Centers (DAACs). SEDAC hosts a variety of data products including geospatial population data, human settlements and infrastructure, exposure and vulnerability to climate change, and satellite-based land use, air, and water quality data. In order to download data hosted by SEDAC, you are required to have a free NASA EarthData account. You can create an account here: NASA EarthData.

Acquiring the Data

For this lesson, we will work with the WSIM-GLDAS dataset Composite Anomaly Twelve-Month Return Period NetCDF file. This contains water deficit, surplus, and combined “Composite Anomaly” variables with an integration period of 12 months. The integration period represents the span of time over which the anomaly values are averaged. The 12-month integration averages water deficits (droughts), surpluses (floods), and combined (presence of droughts and floods) over the 12-month period ending with the month specified. We begin with the 12-month aggregation because it provides a snapshot of anomalies for an entire year, making it useful for understanding the year as a whole; once we identify time periods of interest in the data, we can take a closer look using the 3-month or 1-month integration periods.

We’ll start by downloading the file directly from the SEDAC website. The dataset documentation describes the composite variables as key features of WSIM-GLDAS that combine “the return periods of multiple water parameters into composite indices of overall water surpluses and deficits” (ISciences and Center For International Earth Science Information Network-CIESIN-Columbia University 2022a). The composite anomaly files present the data in terms of how often anomalies of a given severity occur, or a “return period.” For example, a deficit return period of 25 signifies a drought so severe that we would only expect it to happen once every 25 years. Please go ahead and download the file.
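
A return period is simply the reciprocal of an annual exceedance probability, so a quick back-of-the-envelope check (not part of the lesson workflow) makes the relationship concrete:

# a return period is the reciprocal of the annual exceedance probability
return_period = 25
annual_probability = 1 / return_period
# prints: a 1-in-25-year deficit has a 4% chance of occurring in any given year
print(f"a 1-in-{return_period}-year deficit has a {annual_probability:.0%} chance of occurring in any given year")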

  • First, go to the SEDAC website at https://sedac.ciesin.columbia.edu/. You can explore the website by themes, datasets, or collections. We will use the search bar at the top to search for “water security wsim”. Find and click on the Water Security (WSIM-GLDAS) Monthly Grids, v1 (1948–2014) dataset. Take a moment to review the dataset’s overview and documentation pages.

  • When you’re ready, click on the Data Download tab. You will be asked to sign in using your NASA EarthData account.

  • Find the Composite Class, and click on the Variable Composite Anomaly Twelve-Month Return Period.

Reading the Data

Data Science Review

This lesson uses the xarray, rioxarray, rasterio, pandas, geopandas, exactextract, numpy, calendar, requests, and plotnine packages.

The xarray, rioxarray, and rasterio packages are designed to work with large and complex raster spatial data, while the geopandas package lets you handle and analyze vector spatial data (more on that later). requests provides methods for interacting with online databases and websites to directly download data. exactextract performs fast and efficient zonal statistics that summarize raster values within vector zones. calendar assists with manipulating dates. pandas and numpy make standard data processing much easier, and lastly, plotnine is used for creating standard plots and figures.

Make sure they are installed before you begin working with the code in this document. If you’d like to learn more about the functions used in this lesson you can use the help guides on their package websites.

Once you have downloaded the WSIM-GLDAS file to your local computer, install and load the Python packages required for this exercise.

Once you’ve completed the download and placed the composite_12mo.nc file into your working directory, read the file with the xr.open_dataset() function.

import xarray as xr

file_path = "data/composite_12mo.nc"
wsim_gldas = xr.open_dataset(file_path, engine = 'h5netcdf')

wsim_gldas
<xarray.Dataset>
Dimensions:                       (lon: 1440, lat: 600, time: 793)
Coordinates:
  * lon                           (lon) float64 -179.9 -179.6 ... 179.6 179.9
  * lat                           (lat) float64 89.88 89.62 ... -59.62 -59.88
  * time                          (time) datetime64[ns] 1948-12-01 ... 2014-1...
Data variables:
    deficit                       (time, lat, lon) float32 ...
    deficit_cause                 (time, lat, lon) float32 ...
    surplus                       (time, lat, lon) float32 ...
    surplus_cause                 (time, lat, lon) float32 ...
    both                          (time, lat, lon) float32 ...
    crs                           (time) int32 ...
    integration_period_end_month  (time) object ...
Attributes:
    date_created:  2021-03-24T22:39:08+0000
    Conventions:   CF-1.6
    Title:         Water Security Indicator Model -- Global Land Data Assimil...
    Institution:   NASA Socioeconomic Data and Applications Center (SEDAC), C...
    References:    Crowley, C., Baston, D., Brinks, J. 2020. Water Security I...

Printing the wsim_gldas object we created at the end of the code chunk provides basic information about our WSIM raster layer. The Coordinates section lists the 3 dimensions: the first 2 are the spatial extents (x/y–longitude/latitude) and time is the 3rd. The Data variables section lists the 5 attributes (deficit, deficit_cause, surplus, surplus_cause, both) along with the layer’s CRS and the integration period end month.

This means that the total number of individual raster layers in this NetCDF is 3,965 (5 attributes x 793 time steps/months). Again, BIG data.

Attribute Selection

We can manage this large file by selecting a single variable; in this case “deficit” (drought). We’ll also keep the crs variable that contains our spatial reference information, but we can leave out the integration_period_end_month variable, because we already know the integration period is 12 months.

# subset data variables with double brackets
wsim_gldas = wsim_gldas[["deficit", "crs"]]
# check info
wsim_gldas
<xarray.Dataset>
Dimensions:  (time: 793, lat: 600, lon: 1440)
Coordinates:
  * lon      (lon) float64 -179.9 -179.6 -179.4 -179.1 ... 179.4 179.6 179.9
  * lat      (lat) float64 89.88 89.62 89.38 89.12 ... -59.38 -59.62 -59.88
  * time     (time) datetime64[ns] 1948-12-01 1949-01-01 ... 2014-12-01
Data variables:
    deficit  (time, lat, lon) float32 ...
    crs      (time) int32 ...
Attributes:
    date_created:  2021-03-24T22:39:08+0000
    Conventions:   CF-1.6
    Title:         Water Security Indicator Model -- Global Land Data Assimil...
    Institution:   NASA Socioeconomic Data and Applications Center (SEDAC), C...
    References:    Crowley, C., Baston, D., Brinks, J. 2020. Water Security I...

Checking again we see that we’re down to 2 data variables (deficit and crs).

Time Selection

Specifying a temporal range of interest will make the file size smaller and even more manageable. We’ll select the December time step for every year in the range 2000-2014. This can be accomplished by generating a sequence of dates for every December between 2000 and 2014 with pandas, and then passing that list of dates to the .sel() method. Remember, we’re using the 12-month integration of WSIM-GLDAS. This means that each time step averages the deficit over the 12 prior months. Therefore, if we select only the sequence of December months spanning 2000-2014, each resulting layer will represent the average deficit for that year.

# generate a list of dates for subsetting
import pandas as pd
# create the list of dates
keeps = pd.date_range(start="2000-12-01", end="2014-12-01", freq = "YS-DEC")
keeps
# subset the wsim_gldas object w/ keeps
wsim_gldas = wsim_gldas.sel(time= keeps)

# check the time dimension
wsim_gldas.time
<xarray.DataArray 'time' (time: 15)>
array(['2000-12-01T00:00:00.000000000', '2001-12-01T00:00:00.000000000',
       '2002-12-01T00:00:00.000000000', '2003-12-01T00:00:00.000000000',
       '2004-12-01T00:00:00.000000000', '2005-12-01T00:00:00.000000000',
       '2006-12-01T00:00:00.000000000', '2007-12-01T00:00:00.000000000',
       '2008-12-01T00:00:00.000000000', '2009-12-01T00:00:00.000000000',
       '2010-12-01T00:00:00.000000000', '2011-12-01T00:00:00.000000000',
       '2012-12-01T00:00:00.000000000', '2013-12-01T00:00:00.000000000',
       '2014-12-01T00:00:00.000000000'], dtype='datetime64[ns]')
Coordinates:
  * time     (time) datetime64[ns] 2000-12-01 2001-12-01 ... 2014-12-01

Now we’re down to a single attribute (“deficit”) with 15 time steps. We can take a look with a quick plot.

# plot it
p = wsim_gldas.deficit.plot(x="lon", y="lat", col="time", col_wrap = 3, cmap = "Reds_r", aspect = 1.5, size = 1.5, cbar_kwargs =  {'orientation':'vertical', 'shrink':0.5, 'label':'Deficit Anomaly'})

Although we have now reduced the data to a single attribute with a restricted time of interest, we can take it a step further and limit the spatial extent to a country or state of interest.

Knowledge Check
  1. Which of these best describe a raster dataset?
    1. A type of geographic data made up of square cells.
    2. A table or list of geographic numbers.
    3. A geographic region of interest.
    4. A type of geographic data made up of lines and points.
  2. Which of the following is true about the information that rasters can store? (select all that apply)
    1. Attributes (thematic content)
    2. Dimensions (information expressing spatial or temporal extent information)
    3. Geographic coordinates
    4. A list of numbers

Spatial Selection

Data Science Review

GeoJSON is a format for encoding, storing and sharing geographic data as vector data, i.e., points, lines and polygons. It’s commonly used in web mapping applications to represent geographic features and their associated attributes. GeoJSON files are easily readable by both humans and computers, making them a popular choice for sharing geographic data over the internet.
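
To make this concrete, here is a minimal, hand-built GeoJSON FeatureCollection loaded into geopandas (the polygon coordinates are arbitrary and purely illustrative):

import geopandas as gp

# a minimal GeoJSON FeatureCollection with one rectangular polygon
feature_collection = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {"name": "example area"},
        "geometry": {
            "type": "Polygon",
            "coordinates": [[[-106.6, 31.8], [-93.5, 31.8], [-93.5, 36.5],
                             [-106.6, 36.5], [-106.6, 31.8]]]}}]}
# geopandas can build a GeoDataFrame directly from GeoJSON features
gdf = gp.GeoDataFrame.from_features(feature_collection["features"], crs="EPSG:4326")
print(gdf)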

We can spatially crop the raster stack with a few different methods. Options include using a bounding box in which the outer geographic coordinates are specified (xmin, ymin, xmax, ymax), using another raster object, or using a vector boundary like a shapefile or GeoJSON to crop the extent of the original raster data.
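
As an illustration of the bounding-box option, a sketch with rioxarray might look like the following (the coordinates are arbitrary, and this assumes rioxarray has been imported so the .rio accessor is available, as is done later in this lesson). In this lesson we use the vector-boundary approach instead:

# illustrative bounding-box crop; coordinates roughly cover Texas
# assumes a CRS has been assigned, e.g. wsim_gldas.rio.write_crs("EPSG:4326")
wsim_bbox = wsim_gldas.rio.clip_box(minx=-106.6, miny=25.8, maxx=-93.5, maxy=36.5)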

In this example we use a vector boundary to accomplish the geoprocessing task of cropping the data to an administrative or political unit. First, we acquire the data in GeoJSON format for the United States from the geoBoundaries API. (Note it is also possible to download the vectorized boundaries directly from https://www.geoboundaries.org/ in lieu of using the API).

To use the geoBoundaries API, the root URL below is modified to include a 3-letter country code from the International Organization for Standardization (ISO3) and an administrative level for the data request. Administrative levels correspond to geographic units such as the Country (administrative level 0), the State/Province (administrative level 1), the County/District (administrative level 2), and so on:

https://www.geoboundaries.org/api/current/gbOpen/ISO3/LEVEL/
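
Because this is just a template, it is easy to parameterize in Python. A small sketch (the code and level shown are the values we use below):

# build a geoBoundaries API request URL from an ISO3 code and admin level
iso3 = "USA"    # 3-letter ISO country code
level = "ADM1"  # ADM0 = country, ADM1 = state/province, ADM2 = county/district
url = f"https://www.geoboundaries.org/api/current/gbOpen/{iso3}/{level}/"
print(url)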

Data Science Review

Built by the community and William & Mary geoLab, the geoBoundaries Global Database of Political Administrative Boundaries is an online, open license (CC BY 4.0 / ODbL) resource for administrative boundaries (i.e., state, county, province) for every country in the world. Since 2016, this project has tracked approximately 1 million spatial units within more than 200 entities, including all UN member states.

For this example we adjust the ISO3 and LEVEL components of the sample URL above to specify the country we want, using the ISO3 Character Country Code for the United States (USA) and the desired Administrative Level of State (ADM1).

import requests
import geopandas as gp

# make the request to the geoBoundaries website for the USA boundaries
usa = requests.get("https://www.geoboundaries.org/api/current/gbOpen/USA/ADM1/")

In the line of code above, we used requests.get to obtain metadata from the URL. We assign the result to a new variable called “usa”. Next we will examine the content.

# parse the content into a readable format
usa = usa.json()
# look at the labels for available information
usa
{'boundaryID': 'USA-ADM1-66186276',
 'boundaryName': 'United States of America',
 'boundaryISO': 'USA',
 'boundaryYearRepresented': '2018',
 'boundaryType': 'ADM1',
 'boundaryCanonical': 'States',
 'boundarySource': 'United States Census Bureau, MAF/TIGER Database',
 'boundaryLicense': 'Public Domain',
 'licenseDetail': 'nan',
 'licenseSource': 'www.census.gov/programs-surveys/geography/technical-documentation/naming-convention/cartographic-boundary-file.html',
 'boundarySourceURL': 'www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html',
 'sourceDataUpdateDate': 'Thu Jan 19 07:31:04 2023',
 'buildDate': 'Dec 12, 2023',
 'Continent': 'Northern America',
 'UNSDG-region': 'Europe and Northern America',
 'UNSDG-subregion': 'Undefined',
 'worldBankIncomeGroup': 'High-income Countries',
 'admUnitCount': '56',
 'meanVertices': '5260.0',
 'minVertices': '217',
 'maxVertices': '116646',
 'meanPerimeterLengthKM': '3649.857981716045',
 'minPerimeterLengthKM': '71.88282565790827',
 'maxPerimeterLengthKM': '63080.298391256256',
 'meanAreaSqKM': '167051.82016912085',
 'minAreaSqKM': '176.92992294183617',
 'maxAreaSqKM': '1522647.4030532858',
 'staticDownloadLink': 'https://github.com/wmgeolab/geoBoundaries/raw/9469f09/releaseData/gbOpen/USA/ADM1/geoBoundaries-USA-ADM1-all.zip',
 'gjDownloadURL': 'https://github.com/wmgeolab/geoBoundaries/raw/9469f09/releaseData/gbOpen/USA/ADM1/geoBoundaries-USA-ADM1.geojson',
 'tjDownloadURL': 'https://github.com/wmgeolab/geoBoundaries/raw/9469f09/releaseData/gbOpen/USA/ADM1/geoBoundaries-USA-ADM1.topojson',
 'imagePreview': 'https://github.com/wmgeolab/geoBoundaries/raw/9469f09/releaseData/gbOpen/USA/ADM1/geoBoundaries-USA-ADM1-PREVIEW.png',
 'simplifiedGeometryGeoJSON': 'https://github.com/wmgeolab/geoBoundaries/raw/9469f09/releaseData/gbOpen/USA/ADM1/geoBoundaries-USA-ADM1_simplified.geojson'}

The parsed content contains 32 components. Item 29, gjDownloadURL, is a direct link to the GeoJSON file where the vector boundary data is located. Next we will obtain the GeoJSON and check the results.

# read the geojson directly from the geoBoundaries server with geopandas
usa = gp.read_file(usa['gjDownloadURL'])
# check the visuals
usa.boundary.plot()

Upon examination, we see that this GeoJSON includes all US states and overseas territories. For this demonstration, we can simplify it to the contiguous United States. (Of course, it could also be simplified to other areas of interest simply by adapting the code below.)

We first create a list of the geographies we wish to remove and assign them to a variable called “drops”. Next, we reassign our “usa” variable to include only the entries in the continental US and finally we plot the results.

# create a list of territories we don't want in our CONUSA boundary
drops = ["Alaska", "Hawaii", "American Samoa", "Puerto Rico", "Commonwealth of the Northern Mariana Islands", "Guam", "United States Virgin Islands"]
# select all the states and territories not in the above list
usa = usa[~usa.shapeName.isin(drops)]
# check the visuals
usa.boundary.plot()

We can take this a step further and select a single state for analysis. Here we use a slightly different method by creating a new object called “texas” by subsetting the state out by name.

# extract just texas from the CONUSA boundary
texas = usa[usa["shapeName"].str.contains("Texas")]
# check the visuals
texas.boundary.plot()

From here we can clip the WSIM-GLDAS raster stack using the stored boundary of Texas. As you’ll see below, rioxarray’s rio.clip() method crops an xarray object to the extent of a vector geometry.

Drought in the News

Texas experienced a severe drought in 2011 that caused rivers to dry up and lakes to reach historic low levels (StateImpact 2014). The drought was further exacerbated by high temperatures related to climate change in February of 2013. Climate experts discovered that the drought was produced by “La Niña”, a weather pattern that causes the surface temperature of the Pacific Ocean to be cooler than normal. This, in turn, creates drier and warmer weather in the southern United States. La Niña can occur for a year or more, and returns once every few years (NOAA 2023).

It is estimated that the drought cost farmers and ranchers about $8 billion in losses (Roeseler 2011). Furthermore, the dry conditions fueled a series of wildfires across the state in early September of 2011, the most devastating of which occurred in Bastrop County, where 34,000 acres and 1,300 homes were destroyed (Roeseler 2011).

import rioxarray as rio
# specify the CRS for rasterio
wsim_gldas = wsim_gldas.rio.write_crs("EPSG:4326")
# clip the wsim object to the extent of texas border
wsim_gldas_texas = wsim_gldas.rio.clip(texas.geometry.values)

Finally, we visualize the clipped WSIM-GLDAS data (15 annual time steps ending with December 2014) to perform a visual check of our processing.

# check the visuals
wsim_gldas_texas.deficit.plot(x="lon", y="lat", col="time", col_wrap = 3, cmap = "Reds_r", aspect = 0.8, size =2.25, cbar_kwargs =  {'orientation':'horizontal', 'shrink':0.6, 'label':'Deficit Anomaly'})

Voila! We successfully cropped the WSIM-GLDAS data to the extent of Texas and mapped the results to check our work. If you were carrying out further analysis or wanted to share your work with colleagues, you may want to save the processed WSIM-GLDAS to disk.

Multidimensional (deficit, time, latitude, longitude) raster files can be saved with xarray’s to_netcdf() function, and vector data can be saved using geopandas’ to_file().

# the grid_mapping attribute will kick an error when we try to write to disk so we have to delete it first
del wsim_gldas_texas['deficit'].attrs['grid_mapping']
# write the processed wsim-gldas file to disk as a netcdf
wsim_gldas_texas.to_netcdf("wsim_gldas_tex.nc")
# write the Texas boundary to disk
texas.to_file('texas.geojson')
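
As an optional sanity check, you can read both files back in to confirm they were written correctly (this assumes both files landed in your working directory):

# optional: read the saved files back in and confirm they look right
check_wsim = xr.open_dataset("wsim_gldas_tex.nc", engine="h5netcdf")
check_texas = gp.read_file("texas.geojson")
print(check_wsim.dims)
print(check_texas.shape)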

The size of the pre-processed dataset is 1.6 MB compared to the original dataset of 1.7 GB. This is much more manageable in cloud environments, workshops, and git platforms.

Advanced Visualizations and Data Integrations

Now that we’ve introduced the basics of manipulating and visualizing WSIM-GLDAS, we can explore more advanced visualizations and data integrations. Let’s clear the workspace and start over again with the same WSIM-GLDAS Composite Anomaly Twelve-Month Return Period we used earlier. We will spatially subset the data to cover only the Continental United States (CONUSA) which will help to minimize our memory footprint. We can further reduce our memory overhead by reading in just the variable we want to analyze. In this instance we can read in just the deficit attribute from the WSIM-GLDAS Composite Anomaly Twelve-Month Return Period file, rather than reading the entire NetCDF with all of its attributes.

Coding Review

Random Access Memory (RAM) is where data and programs that are currently in use are stored temporarily. It allows the computer to quickly access data, making everything you do on the computer faster and more efficient. Unlike the hard drive, which stores data permanently, RAM loses all its data when the computer is turned off. RAM is like a computer’s short-term memory, helping it to handle multiple tasks at once.

For this exercise, we can quickly walk through similar pre-processing steps we performed earlier in this lesson and then move on to more advanced visualizations and integrations. Read the original 12-month integration data back in, filter with a list of dates for each December spanning 2000-2014, and then crop the raster data with the boundary of the contiguous United States using our geoBoundaries object.

# read it back in
file_path = "data/composite_12mo.nc"
wsim_gldas = xr.open_dataset(file_path, engine = 'h5netcdf')
# list of dates we want to keep
keeps = pd.date_range(start="2000-12-01", end="2014-12-01", freq = "YS-DEC")
# subset for the dates
wsim_gldas = wsim_gldas.sel(time= keeps)
# subset for the variable of interest and the crs info
wsim_gldas = wsim_gldas[["deficit", "crs"]]
# give the time variable pretty names
wsim_gldas = wsim_gldas.assign_coords(time=list(range(2000,2015)))
# clip wsim_gldas
wsim_gldas = wsim_gldas.rio.write_crs("EPSG:4326")
wsim_gldas = wsim_gldas.rio.clip(usa.geometry.values)
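
Given the note above about RAM, it can be reassuring to check how much memory the subsetted object now occupies. xarray exposes this through the .nbytes attribute (the exact figure will vary with how the file is read):

# approximate in-memory size of the subsetted dataset, in megabytes
print(round(wsim_gldas.nbytes / 1e6, 1))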

Double check the object information.

# check the basic information again
wsim_gldas
<xarray.Dataset>
Dimensions:  (lon: 231, lat: 99, time: 15)
Coordinates:
  * lon      (lon) float64 -124.6 -124.4 -124.1 -123.9 ... -67.62 -67.38 -67.12
  * lat      (lat) float64 49.12 48.88 48.62 48.38 ... 25.38 25.12 24.88 24.62
  * time     (time) int32 2000 2001 2002 2003 2004 ... 2010 2011 2012 2013 2014
    crs      int32 0
Data variables:
    deficit  (time, lat, lon) float32 nan nan nan nan nan ... nan nan nan nan
Attributes:
    date_created:  2021-03-24T22:39:08+0000
    Conventions:   CF-1.6
    Title:         Water Security Indicator Model -- Global Land Data Assimil...
    Institution:   NASA Socioeconomic Data and Applications Center (SEDAC), C...
    References:    Crowley, C., Baston, D., Brinks, J. 2020. Water Security I...

You will want to review the printout to make sure it looks okay.

  • Does it contain the variables you were expecting?

  • Do the values for the variables seem plausible?

Other basic descriptive analyses are useful to verify and understand your data. One of these is to produce a frequency distribution (also known as a histogram), which is reviewed below.
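
For example, a quick histogram of the deficit values can be produced with xarray’s built-in plotting (the bin count here is an arbitrary choice):

# frequency distribution of deficit values across the CONUSA and all 15 years
wsim_gldas.deficit.plot.hist(bins=50)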

Knowledge Check
  1. What are the two ways we reduced our memory footprint when we loaded the WSIM-GLDAS dataset for this lesson? (select all that apply)
    1. Spatially subsetting the data to the CONUSA
    2. Reading in only one year of data
    3. Reading in only the attribute of interest (i.e., ‘deficit’)
    4. All of the above.

Annual CONUSA Time Series

The basic data properties reviewed in the previous step are useful for exploratory data analysis, but we should perform further inspection. We can start our visual exploration of annual drought in the CONUSA by creating a map illustrating the deficit return period for each of the years in the WSIM-GLDAS object.

# check visuals
wsim_gldas.deficit.plot(x="lon", y="lat", col="time", col_wrap = 3, cmap = "Reds_r", aspect = 1, size =2, vmin = -60, vmax = 0, cbar_kwargs =  {'orientation':'horizontal', 'shrink':0.6, 'label':'Deficit Anomaly'})

This visualization shows that there were several significant drought events (as indicated by dark red deficit return-period values) throughout 2000-2014. Significant drought events included the southeast in 2000, the southwest in 2002, the majority of the western third of the country in 2007, Texas-Oklahoma in 2011, Montana-Wyoming-Colorado in 2012, and the entirety of the California coast in 2014. The droughts of 2011 and 2012 are particularly severe and widespread, with return periods greater than 50 years covering multiple states. Based on historical norms, we should only expect droughts this strong every 50-60 years!

Knowledge Check
  1. The output maps of Annual Mean Deficit Anomalies for the CONUSA indicate that…
    1. The most significant deficit in 2004 is located in the western United States.
    2. The least significant deficit 2003 is located in the Midwest.
    3. The most significant deficit in 2011 is located around Texas and neighboring states.
    4. The least significant deficit in 2000 is located in the southeast.

Monthly Time Series

We can get a more detailed look at these drought events by using the 1-month composite WSIM-GLDAS dataset and cropping the data to a smaller spatial extent matching one of the events we’ve noted in the previous plot. Let’s take a closer look at the 2014 California drought.

Drought in the News

The California drought of 2012-2014 was the worst in 1,200 years (“Evidence Suggests California’s Drought Is the Worst in 1,200 Years” 2014). This drought caused problems for homeowners, and even conflicts between farmers and wild salmon! Governor Jerry Brown declared a drought emergency and called on residents to reduce water use by 20%. Even so, water use went up by 8% in May of 2014 compared to 2013 in places like coastal California and Los Angeles. Due to the water shortages, the state voted to fine water-wasters up to $500. The drought also affected residents differently based on economic status. For example, in El Dorado County, located in a rural area east of Sacramento, residents were taking bucket showers, and rural residents reported that the wells they rely on for fresh water were drying up. The federal government eventually announced $9.7 million in emergency drought aid for those areas (Sanders 2014).

Additionally, there were thousands of adult salmon struggling to survive in the Klamath River in Northern California, where water was running low and warm due to the diversion of river flow into the Central Valley, an agricultural area that grows almond trees. Almonds are one of California’s most important crops, with the state producing 80% of the world’s almonds. However, salmon, which migrate upstream, could get a disease called gill rot, which flourishes in warm water and already killed tens of thousands of Chinook in 2002. This disease was spreading through the salmon population again due to this water allocation, affecting local Native American tribes that rely on the fish (Bland 2014).

In order to limit the amount of computing memory required for the operation, we will first clear items from the in-memory workspace and then reload a smaller composite file. We’ll start by removing the 12-month composite object.

# remove the large wsim object from the environment
del wsim_gldas

Now let’s load the composite 1-month file from SEDAC into the workspace. The attributes and dimensions will be the same as the 12-month integration, so we’ll give the structure just a quick check before subsetting down to the deficit variable.

# read in the 1 month wsim-gldas data
file_path = "data/composite_1mo.nc"
wsim_gldas_1mo = xr.open_dataset(file_path, engine = 'h5netcdf')
# check the basic info
wsim_gldas_1mo
<xarray.Dataset>
Dimensions:                       (lon: 1440, lat: 600, time: 804)
Coordinates:
  * lon                           (lon) float64 -179.9 -179.6 ... 179.6 179.9
  * lat                           (lat) float64 89.88 89.62 ... -59.62 -59.88
  * time                          (time) datetime64[ns] 1948-01-01 ... 2014-1...
Data variables:
    deficit                       (time, lat, lon) float32 ...
    deficit_cause                 (time, lat, lon) float32 ...
    surplus                       (time, lat, lon) float32 ...
    surplus_cause                 (time, lat, lon) float32 ...
    both                          (time, lat, lon) float32 ...
    crs                           (time) int32 ...
    integration_period_end_month  (time) object ...
Attributes:
    date_created:  2021-03-25T04:20:33+0000
    Conventions:   CF-1.6
    Title:         Water Security Indicator Model -- Global Land Data Assimil...
    Institution:   NASA Socioeconomic Data and Applications Center (SEDAC), C...
    References:    Crowley, C., Baston, D., Brinks, J. 2020. Water Security I...

Once again, we’ll subset the time dimension for our period of interest. However, this time we want every month for 2014 so we can take a closer look at the California drought.

# list of dates to keep
keeps = pd.date_range(start="2014-01-01", end="2014-12-01", freq = "MS")
# select just the dates
wsim_gldas_1mo = wsim_gldas_1mo.sel(time= keeps)
# keep just the deficit and crs info
wsim_gldas_1mo = wsim_gldas_1mo[["deficit", "crs"]]
# check the info
wsim_gldas_1mo
<xarray.Dataset>
Dimensions:  (time: 12, lat: 600, lon: 1440)
Coordinates:
  * lon      (lon) float64 -179.9 -179.6 -179.4 -179.1 ... 179.4 179.6 179.9
  * lat      (lat) float64 89.88 89.62 89.38 89.12 ... -59.38 -59.62 -59.88
  * time     (time) datetime64[ns] 2014-01-01 2014-02-01 ... 2014-12-01
Data variables:
    deficit  (time, lat, lon) float32 ...
    crs      (time) int32 ...
Attributes:
    date_created:  2021-03-25T04:20:33+0000
    Conventions:   CF-1.6
    Title:         Water Security Indicator Model -- Global Land Data Assimil...
    Institution:   NASA Socioeconomic Data and Applications Center (SEDAC), C...
    References:    Crowley, C., Baston, D., Brinks, J. 2020. Water Security I...

Now we have 12 rasters with monthly data for 2014. Let’s zoom in on California and see how this drought progressed over the course of the year.

# isolate only the california border
california = usa[usa["shapeName"].str.contains("California")]
# set the crs for rasterio; it doesn't pick it up from xarray on its own
wsim_gldas_1mo = wsim_gldas_1mo.rio.write_crs("EPSG:4326")
# clip/crop the wsim raster
wsim_gldas_california = wsim_gldas_1mo.rio.clip(california.geometry.values)

import calendar

# give the time dimension pretty labels
wsim_gldas_california = wsim_gldas_california.assign_coords(time=list(calendar.month_name[1:]))
# plot it
wsim_gldas_california.deficit.plot(x="lon", y="lat", col="time", col_wrap = 3, cmap = "Reds_r", aspect = 0.9, size =2.5, vmin = -60, vmax = 0, cbar_kwargs =  {'orientation':'vertical', 'shrink':0.6, 'label':'Deficit Anomaly'})

This series of maps shows a startling picture. California faced massive water deficits throughout the state in January and February. This was followed by water deficits in the western half of the state in May-August. Although northern and eastern California saw some relief by September, southwest California continued to see deficits through December.

Zonal Summaries

To this point we’ve described the 2014 California drought by examining the state as a whole. Although we have a sense of what’s happening in different cities or counties by looking at the maps, they do not provide quantitative summaries of local areas.

Zonal statistics are one way to summarize the cells of a raster layer that lie within the boundary of another data layer (which may be in either raster or vector format). For example, aggregating deficit return periods with another raster depicting land cover type or a vector boundary (shapefile) of countries, states, or counties, will produce descriptive statistics by that new layer. These statistics could include the sum, mean, median, standard deviation, and range.
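
Conceptually, a zonal mean is nothing more than averaging the raster cells that fall inside each zone. A toy numpy illustration (the arrays are made up) shows the idea before we apply exact_extract to the real data:

import numpy
# toy raster of values and a matching raster of zone labels
values = numpy.array([[10, 20, 30],
                      [40, 50, 60]])
zones = numpy.array([[1, 1, 2],
                     [1, 2, 2]])
# the zonal mean is the mean of the value cells masked by each zone
for zone in (1, 2):
    print(zone, values[zones == zone].mean())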

For this section, we begin by calculating the mean deficit return period within California counties. First, we retrieve a vector dataset of California counties from the geoBoundaries API. Since geoBoundaries does not attribute which counties belong to which states, we utilize a spatial operation called intersect to select only those counties in California.

# get the usa county boundaries
usa_counties = requests.get("https://www.geoboundaries.org/api/current/gbOpen/USA/ADM2/")
# parse the request for the data
usa_counties = usa_counties.json()
# download with the provided link
usa_counties = gp.read_file(usa_counties['gjDownloadURL'])
# intersect california state level with usa counties
california_counties = usa_counties.overlay(california, how='intersection')
# check the intersection
california_counties.plot()

The output of that intersection looks as expected. As noted above, a visual and/or tabular check on your data layers is always a good idea. If you expect 50 counties in a given state, you should see 50 counties resulting from the intersection of your two layers, and so on. You may want to be on the lookout for too few counties (such as an island area that may be in one layer but not the other) or too many (such as counties that intersect with a neighboring state).
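
As a concrete check for this case: California proper has 58 counties, so extra rows in the result are likely sliver intersections with counties from neighboring states (counties such as Mohave, Curry, and Josephine from Arizona and Oregon do in fact appear in the summaries below). A quick sketch of such a check:

# sanity check: California proper has 58 counties
print(len(california_counties))
# the smallest features (area here in squared degrees) are the likely slivers
sliver_check = california_counties.assign(area=california_counties.geometry.area)
print(sliver_check.nsmallest(5, "area")["shapeName_1"])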

Coding Review

The exactextract package (Daniel Baston 2023) summarizes raster values over groupings, or zones, also known as zonal statistics. Zonal statistics help in assessing the statistical characteristics of a certain region.

We will perform our zonal statistics using the exactextract package (Daniel Baston 2023), the Python counterpart to the exactextractr R package; it is a fast, accurate, and flexible zonal statistics tool.

Now let’s carry out the extraction and check the January output.

from exactextract import exact_extract
# run the extraction
cc_summaries = exact_extract(wsim_gldas_california.deficit, california_counties, "mean", output = 'pandas', include_cols = "shapeName_1", include_geom = True)
# make the column names pretty
col_names = [["county"], calendar.month_name[1:], ["geometry"]]
col_names = sum(col_names, [])
cc_summaries.columns = col_names

exact_extract returns summary statistics in the same order as the input boundary file, and because we passed include_cols = "shapeName_1" the California county names are attached to the summary statistics output for visualization. We can take a quick look at the first 10 counties to see their mean deficit return period for each month of 2014.

# check first 10 rows
cc_summaries[0:10]
county January February March April May June July August September October November December geometry
0 Alpine -41.762836 -16.325727 -13.645101 -8.343271 -28.199844 -29.179773 -25.325485 -14.808688 -14.582643 -4.376792 -4.113872 -5.714340 POLYGON ((-119.57953 38.70561, -119.59866 38.6...
1 Mohave -41.926029 -7.661513 -9.152430 -14.206533 -60.000000 -10.475436 -6.967257 -7.795761 -2.742674 -4.661378 -6.620070 -11.090995 MULTIPOLYGON (((-114.48678 34.71911, -114.4860...
2 Tuolumne -46.240211 -19.108027 -16.389606 -8.347591 -42.286755 -24.053486 -22.873270 -26.276834 -29.332735 -5.976635 -3.198179 -2.996842 POLYGON ((-119.88476 38.35619, -119.86968 38.3...
3 Tehama -57.917274 -49.567669 -23.608667 -21.017950 -28.339994 -46.608841 -42.693581 -33.756401 -8.716129 -1.808360 -7.166282 -5.475178 POLYGON ((-121.49789 40.43201, -121.47404 40.4...
4 Trinity -59.978626 -45.856117 -45.540985 -25.803623 -43.663658 -60.000000 -60.000000 -46.338074 -2.438062 2.108574 -6.474706 -7.222751 POLYGON ((-123.05144 40.26815, -123.01859 40.2...
5 Curry -60.000004 -56.224873 -26.244995 -4.247256 -9.093712 -54.969276 -60.000004 -60.000004 -2.081964 2.505597 -5.030906 -5.104263 MULTIPOLYGON (((-123.82206 41.99551, -123.8342...
6 San Benito -60.000000 -60.000000 -55.619816 -23.027861 -60.000000 -60.000000 -60.000000 -59.993259 -60.000000 -59.984707 -37.763729 -4.135725 POLYGON ((-121.62947 36.91169, -121.60737 36.8...
7 Josephine -59.999996 -22.464207 -25.885933 -14.265512 -13.829797 -32.273193 -59.999996 -43.312248 -12.886646 1.938376 -6.264188 -8.443334 MULTIPOLYGON (((-123.23112 42.00384, -123.3475...
8 Sierra -19.329338 -18.765692 -11.403096 -11.497561 -18.052834 -27.915047 -27.949060 -14.572026 -6.030163 -5.473642 -1.692187 -30.551401 POLYGON ((-120.00937 39.44512, -120.03522 39.4...
9 El Dorado -44.008751 -17.298189 -12.065713 -9.303976 -33.611919 -28.847393 -25.674936 -12.958253 -5.670928 -6.029935 -2.770664 -10.949467 POLYGON ((-119.90433 38.93333, -119.88766 38.9...

As we expected after seeing the raw WSIM-GLDAS raster, there are significant widespread deficits. We can get a better visual by constructing a choropleth using the county vector boundaries.

Knowledge Check
  1. What can zonal statistics be used for?
    1. Subsetting data to a time period of interest.
    2. Creating a time series.
    3. Summarizing the values of cells that lie within a boundary.
    4. All of the above.

County Choropleths

Now that we’ve inspected the raw data we can make a choropleth out of the mean deficit return period data. geopandas makes this easy with its built-in .plot() method, which we combine with matplotlib subplots to build a paneled figure.

import matplotlib.pyplot as plt
# setup the panels for the figures
fig, axs = plt.subplots(4,3, 
                        figsize=(8,8),
                        facecolor='w', 
                        sharex=True, 
                        sharey=True)
# flatten the 2D array of axis positions into a 1D list of 12 axes
axs = axs.ravel()
# make the layout pretty/tight
fig.tight_layout()
# title and size
fig.suptitle('Monthly California Deficits for 2014', fontsize=14, y=1.05)
# loop through columns to make the plot for each month
for i, month in enumerate(cc_summaries.columns[1:13]):
    axs[i].set_title(month, fontsize= 11)
    cc_summaries.plot(column=month, ax=axs[i], cmap = "Reds_r")
    california_counties.plot(ax=axs[i], edgecolor='black', color='none', linewidth=0.25)
# loop through again and turn off the axes for a cleaner map
for ax in axs:
    ax.axis('off')

# assume it's the first (and only) mappable
patch_col = axs[0].collections[0]
# setup the colorbar/legend
cb = fig.colorbar(patch_col, ax=axs, shrink=0.5, label = "Deficit Anomaly")

Due to the widespread water deficits in the raw data, the mean values do not appear much different from the raw deficit raster layer. However, choropleth maps, also called thematic maps, can make it easier for users to survey the landscape by visualizing familiar places (like counties) that let them place themselves and their lived experiences alongside the data.

While this paints a striking picture of widespread water deficits, how many people are affected by this drought? Although the land area appears rather large, if one is not familiar with the distribution of population and urban centers in California it can be difficult to get a sense of the direct human impact. (This is partly because more populous locations are usually represented by smaller land areas, while less populous locations are usually represented by large administrative boundaries containing much more land area.) Normalizing a given theme by land area may be something an analyst wants to do, but we cover another approach below.

Knowledge Check
  1. Choropleth maps, aka thematic maps, are useful for
    1. Visualizing data in familiar geographies like counties or states
    2. Finding directions from one place to another
    3. Visualizing data in uniform geographic units like raster grid cells.
    4. Calculating return periods.

Integrating Population Data

Gridded Population of the World (GPW) is a data collection from SEDAC that models the distribution of the global human population as counts and densities in a raster format (Center For International Earth Science Information Network-CIESIN-Columbia University 2018). We will take full advantage of exactextract to integrate across WSIM-GLDAS, geoBoundaries, and GPW. To begin, we need to download the 15-minute resolution (roughly 30 kilometers at the equator) population count data for the year 2015 from GPW. This version of GPW most closely matches our time period (2014) and the resolution of WSIM-GLDAS (0.25 degrees). In some applications one might instead use GPW’s population density layers in conjunction with exactextract’s land-area estimates for more accurate results along coastlines, but here we use the population count layer directly as our weighting surface.

Data Review

The Gridded Population of the World Version 4 is available in multiple target metrics (e.g., counts, density), periods (2000, 2005, 2010, 2015, 2020), and spatial resolutions (30 sec, 2.5 min, 15 min, 30 min, 60 min). Read more about GPW at the collection home page on SEDAC. GPW is one of four global gridded population datasets available in raster format. These datasets vary in the degree to which they use additional information as ancillary variables to model the spatial distribution of the population from the administrative units (vector polygons) in which they originate. A review of these datasets and their underlying models is found in a paper by Leyk and colleagues (Leyk et al. 2019). You can learn more about gridded population datasets at POPGRID.org. Fitness-for-use is an important principle in determining the best dataset for a specific analysis. The question we ask here — what is the population exposure to different levels of water deficit in California? — uses spatially coarse inputs for a place with high-quality data inputs, so GPW is a good choice for this analysis. Users with vector-format census data (at the county or sub-county level) could also adopt this approach for those data. In the case of California, the US Census data and GPW will produce nearly identical estimates because GPW is based on the census inputs.

Load in the population count layer.

# read in GPW with rioxarray
file_path = "data/gpw_v4_population_count_rev11_2015_15_min.tif" 
# Open with rioxarray
gpw = rio.open_rasterio(file_path)

For this example we’ll classify the WSIM-GLDAS deficit return period raster layer into eight categories. Binning the data will make it easier to manage the output and interpret the results.

import numpy
# list the class breaks
wsim_bins = [numpy.inf, 0, -3, -5, -10, -20, -40, -60, -numpy.inf]
# classify the wsim layer with the established breaks
wsim_class = xr.apply_ufunc(
    numpy.digitize,
    wsim_gldas_1mo,
    wsim_bins)

In our previous example, we used exact_extract’s built-in 'mean' function, but we can also pass multiple operations to exact_extract so it carries out several calculations at once. The following code could be combined into a single call, but it is presented here as separate steps in order to follow along more easily. You can read more about exact_extract arguments on the package help guide. The key arguments to be aware of are:

  1. ['coverage', 'values', 'weights']: These are the 3 operations we’re requesting. coverage calculates the fraction of each WSIM-GLDAS cell that is covered by the county boundary, values returns the classified WSIM value of each covered cell, and weights returns the population count weight we supply in the next argument.
  2. weights = gpw: supplies the GPW population count raster as the weighting layer, so each WSIM-GLDAS cell’s deficit class is paired with the number of people living in that cell.
# run the extraction
cal_wsim_gpw = exact_extract(
    wsim_class.deficit, 
    california_counties, 
    ['coverage', 'values', 'weights'], 
    output = 'pandas', 
    include_geom = False, 
    weights = gpw)

This returns a DataFrame with a row for every county in the california_counties layer we passed to exact_extract.

# check the first few rows
cal_wsim_gpw[:6]
band_1_coverage band_2_coverage band_3_coverage band_4_coverage band_5_coverage band_6_coverage band_7_coverage band_8_coverage band_9_coverage band_10_coverage ... band_3_weights band_4_weights band_5_weights band_6_weights band_7_weights band_8_weights band_9_weights band_10_weights band_11_weights band_12_weights
0 [5.243542545940727e-05, 0.363015353679657, 0.0... [5.243542545940727e-05, 0.363015353679657, 0.0... [5.243542545940727e-05, 0.363015353679657, 0.0... [5.243542545940727e-05, 0.363015353679657, 0.0... [5.243542545940727e-05, 0.363015353679657, 0.0... [5.243542545940727e-05, 0.363015353679657, 0.0... [5.243542545940727e-05, 0.363015353679657, 0.0... [5.243542545940727e-05, 0.363015353679657, 0.0... [5.243542545940727e-05, 0.363015353679657, 0.0... [5.243542545940727e-05, 0.363015353679657, 0.0... ... [11718.642578125, 33362.7265625, 20356.484375,... [11718.642578125, 33362.7265625, 20356.484375,... [11718.642578125, 33362.7265625, 20356.484375,... [11718.642578125, 33362.7265625, 20356.484375,... [11718.642578125, 33362.7265625, 20356.484375,... [11718.642578125, 33362.7265625, 20356.484375,... [11718.642578125, 33362.7265625, 20356.484375,... [11718.642578125, 33362.7265625, 20356.484375,... [11718.642578125, 33362.7265625, 20356.484375,... [11718.642578125, 33362.7265625, 20356.484375,...
1 [5.183835652400326e-11, 9.799367398954928e-05,... [5.183835652400326e-11, 9.799367398954928e-05,... [5.183835652400326e-11, 9.799367398954928e-05,... [5.183835652400326e-11, 9.799367398954928e-05,... [5.183835652400326e-11, 9.799367398954928e-05,... [5.183835652400326e-11, 9.799367398954928e-05,... [5.183835652400326e-11, 9.799367398954928e-05,... [5.183835652400326e-11, 9.799367398954928e-05,... [5.183835652400326e-11, 9.799367398954928e-05,... [5.183835652400326e-11, 9.799367398954928e-05,... ... [19699.185546875, 21581.478515625, 41436.62109... [19699.185546875, 21581.478515625, 41436.62109... [19699.185546875, 21581.478515625, 41436.62109... [19699.185546875, 21581.478515625, 41436.62109... [19699.185546875, 21581.478515625, 41436.62109... [19699.185546875, 21581.478515625, 41436.62109... [19699.185546875, 21581.478515625, 41436.62109... [19699.185546875, 21581.478515625, 41436.62109... [19699.185546875, 21581.478515625, 41436.62109... [19699.185546875, 21581.478515625, 41436.62109...
2 [0.01140472013503313, 0.5333179831504822, 0.55... [0.01140472013503313, 0.5333179831504822, 0.55... [0.01140472013503313, 0.5333179831504822, 0.55... [0.01140472013503313, 0.5333179831504822, 0.55... [0.01140472013503313, 0.5333179831504822, 0.55... [0.01140472013503313, 0.5333179831504822, 0.55... [0.01140472013503313, 0.5333179831504822, 0.55... [0.01140472013503313, 0.5333179831504822, 0.55... [0.01140472013503313, 0.5333179831504822, 0.55... [0.01140472013503313, 0.5333179831504822, 0.55... ... [3108.652587890625, 218.21519470214844, 2.9919... [3108.652587890625, 218.21519470214844, 2.9919... [3108.652587890625, 218.21519470214844, 2.9919... [3108.652587890625, 218.21519470214844, 2.9919... [3108.652587890625, 218.21519470214844, 2.9919... [3108.652587890625, 218.21519470214844, 2.9919... [3108.652587890625, 218.21519470214844, 2.9919... [3108.652587890625, 218.21519470214844, 2.9919... [3108.652587890625, 218.21519470214844, 2.9919... [3108.652587890625, 218.21519470214844, 2.9919...
3 [0.03276188299059868, 0.33793187141418457, 0.4... [0.03276188299059868, 0.33793187141418457, 0.4... [0.03276188299059868, 0.33793187141418457, 0.4... [0.03276188299059868, 0.33793187141418457, 0.4... [0.03276188299059868, 0.33793187141418457, 0.4... [0.03276188299059868, 0.33793187141418457, 0.4... [0.03276188299059868, 0.33793187141418457, 0.4... [0.03276188299059868, 0.33793187141418457, 0.4... [0.03276188299059868, 0.33793187141418457, 0.4... [0.03276188299059868, 0.33793187141418457, 0.4... ... [363.4409484863281, 135.38462829589844, 990.64... [363.4409484863281, 135.38462829589844, 990.64... [363.4409484863281, 135.38462829589844, 990.64... [363.4409484863281, 135.38462829589844, 990.64... [363.4409484863281, 135.38462829589844, 990.64... [363.4409484863281, 135.38462829589844, 990.64... [363.4409484863281, 135.38462829589844, 990.64... [363.4409484863281, 135.38462829589844, 990.64... [363.4409484863281, 135.38462829589844, 990.64... [363.4409484863281, 135.38462829589844, 990.64...
4 [0.24912063777446747, 0.0036622267216444016, 0... [0.24912063777446747, 0.0036622267216444016, 0... [0.24912063777446747, 0.0036622267216444016, 0... [0.24912063777446747, 0.0036622267216444016, 0... [0.24912063777446747, 0.0036622267216444016, 0... [0.24912063777446747, 0.0036622267216444016, 0... [0.24912063777446747, 0.0036622267216444016, 0... [0.24912063777446747, 0.0036622267216444016, 0... [0.24912063777446747, 0.0036622267216444016, 0... [0.24912063777446747, 0.0036622267216444016, 0... ... [88.7540512084961, 12151.6591796875, 30.048315... [88.7540512084961, 12151.6591796875, 30.048315... [88.7540512084961, 12151.6591796875, 30.048315... [88.7540512084961, 12151.6591796875, 30.048315... [88.7540512084961, 12151.6591796875, 30.048315... [88.7540512084961, 12151.6591796875, 30.048315... [88.7540512084961, 12151.6591796875, 30.048315... [88.7540512084961, 12151.6591796875, 30.048315... [88.7540512084961, 12151.6591796875, 30.048315... [88.7540512084961, 12151.6591796875, 30.048315...
5 [7.440718036377802e-05, 1.1325451851007529e-05] [7.440718036377802e-05, 1.1325451851007529e-05] [7.440718036377802e-05, 1.1325451851007529e-05] [7.440718036377802e-05, 1.1325451851007529e-05] [7.440718036377802e-05, 1.1325451851007529e-05] [7.440718036377802e-05, 1.1325451851007529e-05] [7.440718036377802e-05, 1.1325451851007529e-05] [7.440718036377802e-05, 1.1325451851007529e-05] [7.440718036377802e-05, 1.1325451851007529e-05] [7.440718036377802e-05, 1.1325451851007529e-05] ... [25774.646484375, 727.9356079101562] [25774.646484375, 727.9356079101562] [25774.646484375, 727.9356079101562] [25774.646484375, 727.9356079101562] [25774.646484375, 727.9356079101562] [25774.646484375, 727.9356079101562] [25774.646484375, 727.9356079101562] [25774.646484375, 727.9356079101562] [25774.646484375, 727.9356079101562] [25774.646484375, 727.9356079101562]

6 rows × 36 columns

The DataFrame contains a row for each California county and 36 columns we need to decipher. Each column is prefixed by band_<number>_. The 12 bands represent the 12 months of WSIM-GLDAS data we passed, and they are replicated 3 times, resulting in 36 columns; each set of 12 represents one of the 3 operations we requested (['coverage', 'values', 'weights']). To complicate things further, each cell contains a list with multiple values. This is because most counties overlap multiple WSIM-GLDAS cells. We need to disaggregate the nested lists of values before proceeding further.

# create list of columns to unnest
explode_cols = list(cal_wsim_gpw.columns)
# unnest the column value lists
cal_wsim_gpw = cal_wsim_gpw.explode(explode_cols)
# check the results
cal_wsim_gpw[:10]
band_1_coverage band_2_coverage band_3_coverage band_4_coverage band_5_coverage band_6_coverage band_7_coverage band_8_coverage band_9_coverage band_10_coverage ... band_3_weights band_4_weights band_5_weights band_6_weights band_7_weights band_8_weights band_9_weights band_10_weights band_11_weights band_12_weights
0 0.000052 0.000052 0.000052 0.000052 0.000052 0.000052 0.000052 0.000052 0.000052 0.000052 ... 11718.642578 11718.642578 11718.642578 11718.642578 11718.642578 11718.642578 11718.642578 11718.642578 11718.642578 11718.642578
0 0.363015 0.363015 0.363015 0.363015 0.363015 0.363015 0.363015 0.363015 0.363015 0.363015 ... 33362.726562 33362.726562 33362.726562 33362.726562 33362.726562 33362.726562 33362.726562 33362.726562 33362.726562 33362.726562
0 0.068331 0.068331 0.068331 0.068331 0.068331 0.068331 0.068331 0.068331 0.068331 0.068331 ... 20356.484375 20356.484375 20356.484375 20356.484375 20356.484375 20356.484375 20356.484375 20356.484375 20356.484375 20356.484375
0 0.263593 0.263593 0.263593 0.263593 0.263593 0.263593 0.263593 0.263593 0.263593 0.263593 ... 192.583374 192.583374 192.583374 192.583374 192.583374 192.583374 192.583374 192.583374 192.583374 192.583374
0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ... 298.508392 298.508392 298.508392 298.508392 298.508392 298.508392 298.508392 298.508392 298.508392 298.508392
0 0.617599 0.617599 0.617599 0.617599 0.617599 0.617599 0.617599 0.617599 0.617599 0.617599 ... 2315.501465 2315.501465 2315.501465 2315.501465 2315.501465 2315.501465 2315.501465 2315.501465 2315.501465 2315.501465
0 0.06764 0.06764 0.06764 0.06764 0.06764 0.06764 0.06764 0.06764 0.06764 0.06764 ... 218.215195 218.215195 218.215195 218.215195 218.215195 218.215195 218.215195 218.215195 218.215195 218.215195
0 0.442568 0.442568 0.442568 0.442568 0.442568 0.442568 0.442568 0.442568 0.442568 0.442568 ... 2.991935 2.991935 2.991935 2.991935 2.991935 2.991935 2.991935 2.991935 2.991935 2.991935
0 0.360957 0.360957 0.360957 0.360957 0.360957 0.360957 0.360957 0.360957 0.360957 0.360957 ... 34.219231 34.219231 34.219231 34.219231 34.219231 34.219231 34.219231 34.219231 34.219231 34.219231
1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 19699.185547 19699.185547 19699.185547 19699.185547 19699.185547 19699.185547 19699.185547 19699.185547 19699.185547 19699.185547

10 rows × 36 columns

Now we have a single row for each unique WSIM-GLDAS cell/California county combination. The original row indices, which correspond to the order of the counties in the california_counties layer, are replicated once for every cell the county overlaps. In the first 10 rows we can see that the first county (index 0) overlapped 9 WSIM-GLDAS cells (there are 9 rows with index 0).
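If you want human-readable county names attached at this stage, the replicated index can be joined back to the boundary layer. The snippet below is a hedged sketch: it assumes the counties GeoDataFrame is named california_counties, that it still has its default integer index, and that the geoBoundaries name column is called shapeName.

# hedged sketch: align the replicated row index with the county boundary layer
# ('shapeName' is the assumed geoBoundaries name column)
county_names = california_counties['shapeName'].rename('county')
cal_wsim_named = cal_wsim_gpw.join(county_names)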

We will need to perform a few more processing steps to prepare this DataFrame for a time series visualization integrating all of the data. First we'll replace the band prefixes with calendar month names. Then we will use the pd.melt function to transform the data from wide format to long format so we can produce a visualization in plotnine.

pandas has no simple way to perform pattern-based melts, so we'll run the filter/rename/melt steps once for each of the 3 summary operations (['coverage', 'values', 'weights']) and then bind the results back together.

First the coverage.

# select just the columns with "coverage" in their names
cal_wsim_cov = cal_wsim_gpw.filter(like='coverage', axis=1)
# rename them for months
cal_wsim_cov.columns = calendar.month_name[1:]
# melt into two new columns: month and coverage
cal_wsim_cov = pd.melt(cal_wsim_cov, var_name='month', value_name='coverage')

Now values.

# select just the columns with "values" in their names
cal_wsim_val = cal_wsim_gpw.filter(like='values', axis=1)
# rename them for months
cal_wsim_val.columns = calendar.month_name[1:]
# melt into two new columns: month and wsim_class
cal_wsim_val = pd.melt(cal_wsim_val, var_name='month', value_name='wsim_class')

Lastly the weights (the population counts).

# select just the columns with "weight" in their names
cal_wsim_weight = cal_wsim_gpw.filter(like='weight', axis=1)
# rename them for months
cal_wsim_weight.columns = calendar.month_name[1:]
# melt into two new columns: month and cell_pop_count
cal_wsim_weight = pd.melt(cal_wsim_weight, var_name='month', value_name='cell_pop_count')
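The three blocks above are intentionally repetitive so each step is visible. As a hedged aside, the same reshaping can be written as a single loop over the operation names; this sketch assumes the same cal_wsim_gpw DataFrame and produces equivalent pieces keyed by their value column.

# compact alternative: loop the filter/rename/melt steps over the 3 operations
pieces = {}
for like, value_name in [('coverage', 'coverage'),
                         ('values', 'wsim_class'),
                         ('weight', 'cell_pop_count')]:
    sub = cal_wsim_gpw.filter(like=like, axis=1)
    sub.columns = calendar.month_name[1:]
    pieces[value_name] = pd.melt(sub, var_name='month', value_name=value_name)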

Put them back together. The rows of all three pieces are in the same county and month order, so we don't have to perform a proper merge; we can simply bind the columns together.
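Because this relies purely on positional alignment, a quick sanity check before binding is cheap insurance (a sketch using the three DataFrames created above):

# verify the three pieces line up row-for-row before binding them
assert len(cal_wsim_cov) == len(cal_wsim_val) == len(cal_wsim_weight)
assert (cal_wsim_cov['month'] == cal_wsim_val['month']).all()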

# concatenate them back together
cal_summaries = pd.concat(
    [cal_wsim_cov, #the coverages with the county and month names
    cal_wsim_val["wsim_class"], # just the class values
    cal_wsim_weight["cell_pop_count"]], # just the pop count values
    axis=1)

cal_summaries
month coverage wsim_class cell_pop_count
0 January 0.000052 5 11718.642578
1 January 0.363015 6 33362.726562
2 January 0.068331 7 20356.484375
3 January 0.263593 6 192.583374
4 January 1.0 7 298.508392
... ... ... ... ...
15907 December 0.08602 2 788672.6875
15908 December 0.00022 5 2462.314941
15909 December 0.001785 5 8885.547852
15910 December 0.00063 4 25215.951172
15911 December 0.000264 4 396.317535

15912 rows × 4 columns

Now we have a simplified DataFrame in long format where each row represents a unique county/month/WSIM cell combination. The first row details the first county passed from california_counties (index 0) in the January WSIM-GLDAS layer: it covers 0.000052 of a WSIM cell of class 5 (deficit between -20 and -40), and that cell overlays a GPW cell with a population count of 11,718.64 people.

Coding Review

Converting data from wide to long or long to wide formats is a key component of data processing; however, it can be confusing. To read more about melting/pivoting longer (wide to long) and casting/pivoting wider (long to wide), check out the pandas guide How to reshape the layout of tables.
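As a minimal illustration with hypothetical data (not from this lesson), melting and pivoting are inverses of one another:

# toy wide-format table: one row per site, one column per month
toy_wide = pd.DataFrame({'site': ['a', 'b'], 'Jan': [1, 2], 'Feb': [3, 4]})
# wide -> long: one row per site/month combination
toy_long = pd.melt(toy_wide, id_vars='site', var_name='month', value_name='value')
# long -> wide: back to one column per month
toy_back = toy_long.pivot(index='site', columns='month', values='value').reset_index()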

We can estimate the number of people represented by each county/month/WSIM cell combination by multiplying the coverage fraction by the population count.

# multiply
cal_summaries["wsim_class_pop"] = cal_summaries["coverage"]*cal_summaries["cell_pop_count"]
# round so we don't have fractions of people
cal_summaries['wsim_class_pop'] = cal_summaries['wsim_class_pop'].astype('float').round(0)
# check the values
cal_summaries[:5]
month coverage wsim_class cell_pop_count wsim_class_pop
0 January 0.000052 5 11718.642578 1.0
1 January 0.363015 6 33362.726562 12111.0
2 January 0.068331 7 20356.484375 1391.0
3 January 0.263593 6 192.583374 51.0
4 January 1.0 7 298.508392 299.0
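As a spot check on the arithmetic, the first row works out to 0.000052 × 11,718.64 ≈ 0.61, which rounds to 1 person, matching the wsim_class_pop value shown above.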

To estimate the number of people in each unique month/WSIM class combination we'll summarize by group with the pandas groupby() method.

# group by month and class and reset the index
cal_summaries = cal_summaries.groupby(['month', 'wsim_class'])['wsim_class_pop'].sum().reset_index()

Ultimately we want to calculate the fraction of the population in each WSIM-GLDAS class, so we need the monthly total population to divide each class population by.

# sum the population by month (will be the same for every month in a given year)
cal_summaries['month_pop'] = cal_summaries.groupby(['month'])['wsim_class_pop'].transform('sum')
# divide the class total by month sum to get the fraction
cal_summaries['wsim_class_frac'] = cal_summaries['wsim_class_pop'] / cal_summaries['month_pop']
# check the values
cal_summaries
month wsim_class wsim_class_pop month_pop wsim_class_frac
0 April 0 407597.0 36565982.0 0.011147
1 April 3 115254.0 36565982.0 0.003152
2 April 4 5217618.0 36565982.0 0.142690
3 April 5 20640931.0 36565982.0 0.564485
4 April 6 6052406.0 36565982.0 0.165520
... ... ... ... ... ...
76 September 3 6428278.0 36565982.0 0.175799
77 September 4 6369109.0 36565982.0 0.174181
78 September 5 4146780.0 36565982.0 0.113405
79 September 6 4651595.0 36565982.0 0.127211
80 September 7 13125040.0 36565982.0 0.358941

81 rows × 5 columns
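As a quick sanity check, the class fractions within each month should sum to approximately one:

# each month's class fractions should total ~1.0
cal_summaries.groupby('month')['wsim_class_frac'].sum()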

Before plotting, we'll convert the WSIM-GLDAS return period classes into a categorical type with more legible labels, convert the month into an ordered categorical type, and set the WSIM-GLDAS class palette.

# wsim class as category
cal_summaries['wsim_class'] = cal_summaries['wsim_class'].astype('category')
# give ordinal pretty labels
cal_summaries['wsim_class'] = cal_summaries['wsim_class'].cat.rename_categories(
    {0: "0", 1: "-3", 2: "-5", 3: "-10", 4: "-20", 5: "-40", 6: "-50", 7: "-60"})
# same for the months
cal_summaries["month"] = pd.Categorical(cal_summaries["month"],
                             categories=["January", "February", "March", "April", "May", "June", "July",
                                         "August", "September", "October", "November", "December"],
                             ordered=True)
# set our desired palette: one color per deficit class, darkest = most severe
leg_colors = [
    # -60 and below
    '#9B0039',
    # -50 to -40
    '#D44135',
    # -40 to -20
    '#FF8D43',
    # -20 to -10
    '#FFC754',
    # -10 to -5
    '#FFEDA3',
    # -5 to -3
    '#fffdc7',
    # -3 to 0
    '#FFF4C7',
    # 0 to 3
    '#FFFFFF']

Although matplotlib is the more common way to plot in Python, we are going to use plotnine, the Python implementation of R's popular ggplot2 package. Its code structure is simpler, and using it keeps synergy between the R and Python versions of this lesson.

from plotnine import *

# stacked bar chart: fraction of population in each deficit class, by month
(ggplot(cal_summaries, aes('month', 'wsim_class_frac', fill='wsim_class', group='wsim_class'))
    + scale_fill_manual(values=leg_colors[::-1])
    + geom_bar(stat='identity', position='stack')
    + labs(title="Population Under Water Deficits in 2014 California Drought",
           subtitle="Categorized by Intensity of Deficit Return Period",
           x="",
           y="Fraction of Population*",
           caption="*Population derived from Gridded Population of the World (2015)",
           fill="Deficit Class")
    + theme_minimal()
    + theme(axis_text_x=element_text(rotation=25, hjust=1))
)
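For readers who prefer matplotlib, a rough equivalent is sketched below. It assumes the cal_summaries DataFrame and leg_colors palette built above; the styling choices are illustrative, not part of the original lesson.

import matplotlib.pyplot as plt

# pivot to wide format (months as rows, one column per deficit class),
# padding any missing month/class combinations with zeros
wide = (cal_summaries
        .pivot(index='month', columns='wsim_class', values='wsim_class_frac')
        .reindex(columns=cal_summaries['wsim_class'].cat.categories)
        .fillna(0))
# stacked bars, reusing the reversed palette so white = class 0
ax = wide.plot(kind='bar', stacked=True, color=leg_colors[::-1], figsize=(10, 5))
ax.set_ylabel('Fraction of Population')
ax.set_title('Population Under Water Deficits in 2014 California Drought')
ax.legend(title='Deficit Class', bbox_to_anchor=(1.02, 1), loc='upper left')
plt.tight_layout()
plt.show()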

This figure really illustrates the human impact of the 2014 drought. Nearly 100% of the population was under a 60+ year deficit in January, followed by 66% in May and approximately 40% for the remainder of the summer. That is a devastating drought!

Knowledge Review
  1. There are several options for spatially subsetting (or clipping) a raster object to a region of interest. What method was used in this lesson?
    a. Using a list of dates.
    b. Using another raster object.
    c. Specifying a bounding box.
    d. Using a vector boundary dataset.
  2. When running into memory issues, what is something you can do to reduce the computational load?
    a. Work with one time frame or region at a time.
    b. Save it as a new file.
    c. Subset the data to a region of interest/time frame.
    d. Find other data to work with.
  3. What is the importance of subsetting data?
    a. Freeing up space.
    b. Analyzing a certain time or area of interest.
    c. Making code run faster.
    d. All of the above.
  4. Gridded population datasets… (select all that apply)
    a. Show where river tributaries are.
    b. Model the distribution of the global human population in raster grid cells.
    c. Allow analyses of the number of persons impacted by a hazard such as drought.
    d. Vary in the extent to which ancillary data are used in their production.

In this Lesson, You Learned…

Congratulations! Now you should be able to:

  • Navigate the SEDAC website to find and download datasets.
  • Access administrative boundaries from geoBoundaries data using its API.
  • Temporally subset a NetCDF raster stack using Python tools such as xarray.
  • Crop a NetCDF raster stack with a spatial boundary.
  • Write a subsetted dataset to disk and create an image to share results.
  • Identify areas of severe drought and select these areas for further analysis.
  • Summarize data by county using the exactextract tool.
  • Integrate WSIM-GLDAS deficit, GPW population, and geoBoundaries administrative boundary data to create complex time series visualizations.

Lesson 2

In this lesson we explored the California drought of 2014. In our next lesson, we will examine near real-time flood data in California using the MODIS data product.

Lesson 2: Moderate Resolution Imaging Spectroradiometer (MODIS) Near-Real Time (NRT) flood data

References

Bland, Alastair. 2014. “California Drought Has Wild Salmon Competing with Almonds for Water.” National Public Radio. https://www.npr.org/sections/thesalt/2014/08/21/342167846/california-drought-has-wild-salmon-competing-with-almonds-for-water.
Center For International Earth Science Information Network-CIESIN-Columbia University. 2018. “Gridded Population of the World, Version 4 (GPWv4): Population Count, Revision 11.” Palisades, NY: NASA Socioeconomic Data; Applications Center (SEDAC). https://doi.org/10.7927/H4JW8BX5.
Baston, Daniel. 2023. exactextractr: Fast Extraction from Raster Datasets Using Polygons. https://isciences.gitlab.io/exactextractr/.
“Evidence Suggests California’s Drought Is the Worst in 1,200 Years.” 2014. Woods Hole Oceanographic Institution. https://phys.org/news/2014-12-evidence-california-drought-worst-years.html.
Guillen, George. 2002. “Klamath River Fish Die-Off.” Mortality Report AFWO-01-03. Arcata, CA: U.S. Fish & Wildlife Service. https://www.trrp.net/DataPort/doc.php?id=301.
IPCC. 2023. “Climate Change 2021 the Physical Science Basis,” June. https://doi.org/10.1017/9781009157896.
ISciences, and Center For International Earth Science Information Network-CIESIN-Columbia University. 2022a. “Documentation for the Water Security Indicator Model - Global Land Data Assimilation System (WSIM-GLDAS) Monthly Grids, Version 1.” Palisades, NY: NASA Socioeconomic Data; Applications Center (SEDAC). https://doi.org/10.7927/X7FJ-JJ41.
———. 2022b. “Water Security Indicator Model - Global Land Data Assimilation System (WSIM-GLDAS) Monthly Grids, Version 1.” Palisades, NY: NASA Socioeconomic Data; Applications Center (SEDAC). https://doi.org/10.7927/Z1FN-KF73.
Leyk, Stefan, Andrea E. Gaughan, Susana B. Adamo, Alex de Sherbinin, Deborah Balk, Sergio Freire, Amy Rose, et al. 2019. “The Spatial Allocation of Population: A Review of Large-Scale Gridded Population Data Products and Their Fitness for Use.” Earth System Science Data 11 (3): 1385–1409. https://doi.org/10.5194/essd-11-1385-2019.
NOAA. 2019. “Water Cycle.” National Oceanic; Atmospheric Administration. https://www.noaa.gov/education/resource-collections/freshwater/water-cycle.
———. 2023. “What Are El Nino and La Nina?” National Oceanic; Atmospheric Administration. https://oceanservice.noaa.gov/facts/ninonina.html.
Rodgers, Alison Ince. 2023. “Understanding Droughts.” National Geographic Society. https://education.nationalgeographic.org/resource/understanding-droughts/.
Roeseler, Charles. 2011. “Exceptional Drought Continues Across Southeast Texas.” National Weather Service. https://www.weather.gov/media/hgx/stormsignals/vol87.pdf.
Sanders, Sam. 2014. “Despite California’s Drought, Taps Still Flowing in LA County.” National Public Radio. https://www.npr.org/2014/07/20/333019977/despite-californias-drought-taps-still-flowing-in-la-county.
StateImpact. 2014. “Everything You Need to Know about the Texas Drought.” National Public Radio. https://stateimpact.npr.org/texas/tag/drought/.
Zhou, Haddeland, T. 2016. “Human-Induced Changes in the Global Water Cycle.” Geophysical Monograph Series. https://doi.org/10.1002/9781118971772.ch4.

Footnotes

  1. Photo Credit: NASA/JPL.