Notebook prepared by Ben Maathuis, Bas Retsios and Willem Nieuwenhuis. ITC-University of Twente, Enschede. The Netherlands
Within this Notebook you are going to use ILWISPy in conjunction with a number of commonly used Python libraries, like NumPy and Matplotlib, which have already been introduced before. Use is also made here of Pandas.
Within this notebook we focus on time series data. A number of open data sources are used:
Attention will be given to the following aspects:
Before we start, first load the required libraries to run this Notebook. If a message appears after executing the cell below, install the missing package from within your Python folder using the command 'python -m pip install ...package...', e.g. for Pandas: 'python -m pip install pandas'.
Sample data available at: https://filetransfer.itc.nl/pub/52n/ilwis_py/sample_data/Intro_Timeseries_ILWISPy.zip. Unzip the file. It is assumed here that the data folder '/Intro_Timeseries_ILWISPy' is situated within the notebook folder! It is furthermore assumed that you have locally installed ILWIS 386 for data visualization when data is written to disk. This is especially important when visualizing the time series data and anomalies created in the later part of this notebook, e.g. using the animation functionality of ILWIS 386, in order to check the results obtained.
Prerequisite: You have reviewed the notebooks 'Intro_ILWISPy.ipynb' and 'Intro_RS_ILWISPy', both available at: https://filetransfer.itc.nl/pub/52n/ilwis_py/notebooks/. Note that the files with the postfix _result.html show the processed results as they appear when the notebook is executed.
Import required libraries and set your ILWISPy working directory
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import subplots
from matplotlib.colors import ListedColormap
import ilwis
from datetime import datetime, timedelta
import pandas as pd
from pandas import DataFrame
from json import load as jload
#use the Python function os.getcwd() to retrieve the current folder used and add a sub-folder containing the sample data
work_dir = os.getcwd() + '/Intro_Timeseries_ILWISPy'
#set the working directory for ILWISPy
ilwis.setWorkingCatalog(work_dir)
print(work_dir)
ilwis.version()
D:\jupyter\notebook_scripts\ilwispy_tutorial\tutorial_release/Intro_Timeseries_ILWISPy
'1.0 build 20230403'
The gridded river discharge data used below has been downloaded for a selected area, the Buzi catchment in Mozambique, for the years 2020 and 2021. The data represents the discharge in m3/sec (per pixel) generated by the GloFAS system (see: https://www.globalfloods.eu/). The data has already been retrieved in grib2 format and was downloaded from: https://cds.climate.copernicus.eu/cdsapp#!/dataset/cems-glofas-historical?tab=form. The data is assigned as an ILWISPy raster coverage in the cells below and the dimensions are checked (note that 2020 is a leap year!)
glofas_2020 = ilwis.RasterCoverage('historical_2020.grib')
glofas_2021 = ilwis.RasterCoverage('historical_2021.grib')
Retrieve some meta data information, like raster stack dimensions, some additional information on geometry, projection and map extent, and for both years some additional statistics
print(glofas_2020.size())#leap year!
print(glofas_2021.size())
Size(57, 52, 366)
Size(57, 52, 365)
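The one-layer difference between the two stacks is the leap day; this can be verified quickly with the standard library (a small sketch, independent of the notebook data):

```python
import calendar

# 2020 has 366 days (leap year), 2021 has 365
print(calendar.isleap(2020), calendar.isleap(2021))  # -> True False
```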
Check some of the other meta data
coordSys = glofas_2020.coordinateSystem()
print(coordSys.toWKT())
print()
print(coordSys.toProj4())
print()
print(glofas_2020.envelope())
print()
stats2020 = glofas_2020.statistics(ilwis.PropertySets.pHISTOGRAM)
#print(stats.histogram())
print(stats2020[ilwis.PropertySets.pMIN]) # minimum value on the map
print(stats2020[ilwis.PropertySets.pMAX]) # maximum value on the map
print()
stats2021 = glofas_2021.statistics(ilwis.PropertySets.pHISTOGRAM)
#print(stats.histogram())
print(stats2021[ilwis.PropertySets.pMIN]) # minimum value on the map
print(stats2021[ilwis.PropertySets.pMAX]) # maximum value on the map
PROJCS["historical_2020.grib",GEOCS["historical_2020.grib",DATUM[" unnamed",[?],ELLIPSOID["Normal Sphere (r=6370997)",6370997.000000000000,0.000000000000],PRIMEM["Greenwich",0, AUTHORITY["EPSG",8901"]]],PROJECTION[""],,UNIT[degrees,1.0]]

+proj=longlat +a=6370997.000000000000 +b=6370997.000000000000

32.200000 -21.050000 35.050000 -18.450000

0.0
3169.25

0.0
7799.1875
Transform the stack (in ILWIS format) to a 1D numpy array (using np.fromiter(iter())) for the two years under consideration, combine the two 1D arrays (using np.append) and transform back to a 3D array (using reshape)
#create 1D arrays
glofas_2np_2020 = np.fromiter(iter(glofas_2020), np.float64, glofas_2020.size().linearSize())
glofas_2np_2021 = np.fromiter(iter(glofas_2021), np.float64, glofas_2021.size().linearSize())
#combine the two - using append
array_both = np.append(glofas_2np_2020, glofas_2np_2021)
#create a 3D stack of the two years combined
glofas_2np = array_both.reshape((glofas_2020.size().zsize+glofas_2021.size().zsize, glofas_2020.size().ysize, glofas_2020.size().xsize))
#check the dimensions obtained
print(glofas_2np.shape)
(731, 52, 57)
Create a simple interactive plot using MatPlotLib Imshow of the first layer
# create a plot of the numpy array using MatPlotLib - Imshow
%matplotlib notebook
fig1 = plt.figure(figsize =(10, 7))
plt.imshow(glofas_2np[0], interpolation='none', vmin=0, vmax=350, cmap='jet')
plt.axis('on')
plt.colorbar(shrink=0.6)
plt.title('Discharge (m3-sec)')
Text(0.5, 1.0, 'Discharge (m3-sec)')
We continue with inspection of the data; navigate with your mouse cursor over the map in the cell above and check the location x=50 and y=28 - this is the outlet location of the Buzi River!
In the cell below the data for the outlet location is retrieved for 2020
l = 28
c = 50
z = glofas_2020.size().zsize
print(z)
Q_2020 = []
for n in range(0, z):
    point_value = glofas_2020.pix2value(ilwis.Pixel(c, l, n))
    Q_2020.append(point_value)
print('Values extracted for selected location:', Q_2020)
366 Values extracted for selected location: [202.640625, 191.953125, 180.234375, 169.6875, 160.1875, 151.15625, 142.375, 134.125, 126.6875, 120.625, 118.3125, 122.3125, 138.28125, 164.0625, 180.28125, 193.96875, 219.25, 250.59375, 272.3125, 386.03125, 745.3125, 871.1875, 752.375, 640.65625, 557.796875, 502.71875, 458.515625, 420.71875, 394.453125, 379.0625, 361.90625, 340.28125, 318.578125, 298.890625, 282.640625, 269.796875, 261.5625, 255.125, 250.546875, 254.125, 263.0625, 285.296875, 415.03125, 603.703125, 655.40625, 617.46875, 566.703125, 537.3125, 513.171875, 480.609375, 447.84375, 419.96875, 394.0625, 370.84375, 367.046875, 400.859375, 437.984375, 432.546875, 404.140625, 374.9375, 353.921875, 351.875, 345.21875, 325.8125, 308.6875, 294.28125, 278.21875, 262.875, 249.875, 237.59375, 225.125, 213.65625, 203.6875, 195.75, 188.625, 180.34375, 172.0, 168.125, 176.25, 187.46875, 185.53125, 177.25, 168.28125, 159.53125, 151.21875, 143.4375, 136.21875, 129.78125, 126.4375, 129.53125, 131.96875, 128.96875, 123.375, 117.625, 113.28125, 113.34375, 116.96875, 117.5, 114.34375, 110.84375, 113.40625, 134.90625, 160.0, 162.71875, 154.40625, 145.03125, 136.78125, 131.25, 131.375, 135.21875, 136.90625, 135.5625, 131.1875, 125.75, 121.15625, 116.96875, 112.0, 106.53125, 102.0625, 99.90625, 99.03125, 97.21875, 94.9375, 93.34375, 91.25, 87.90625, 83.71875, 79.5, 75.5625, 71.90625, 68.53125, 65.46875, 62.65625, 60.0625, 57.65625, 55.25, 52.90625, 50.75, 48.90625, 47.25, 45.65625, 44.21875, 42.8125, 41.4375, 40.21875, 39.1875, 38.125, 36.875, 35.59375, 34.3125, 33.09375, 32.0, 31.15625, 30.65625, 30.21875, 29.71875, 29.0625, 28.3125, 27.46875, 26.65625, 25.96875, 25.65625, 25.8125, 26.0625, 26.03125, 25.75, 25.46875, 25.40625, 25.90625, 26.78125, 27.4375, 27.90625, 29.59375, 34.1875, 39.453125, 41.40625, 40.578125, 38.8125, 37.109375, 35.75, 34.53125, 33.21875, 31.859375, 30.578125, 29.40625, 28.328125, 27.484375, 26.828125, 26.140625, 25.359375, 24.5625, 23.765625, 22.953125, 
22.140625, 21.40625, 20.765625, 20.21875, 19.734375, 19.3125, 18.9375, 18.78125, 19.0625, 19.625, 20.046875, 20.09375, 19.90625, 19.640625, 19.40625, 19.1875, 19.09375, 19.21875, 19.421875, 19.46875, 19.34375, 19.203125, 19.0625, 18.859375, 18.53125, 18.09375, 17.609375, 17.1875, 16.875, 16.734375, 17.0, 18.015625, 19.28125, 19.890625, 19.71875, 19.171875, 18.53125, 17.9375, 17.453125, 17.0625, 16.75, 16.6484375, 17.0859375, 18.875, 23.28125, 28.828125, 31.515625, 31.234375, 29.7890625, 28.171875, 26.75, 25.6640625, 25.0, 24.875, 27.5546875, 37.671875, 48.6015625, 51.5, 49.640625, 46.6640625, 43.671875, 40.875, 38.3046875, 35.9921875, 34.0078125, 32.3046875, 30.875, 29.671875, 28.6640625, 27.6953125, 26.6640625, 25.5625, 24.4453125, 23.4296875, 22.609375, 22.296875, 23.8046875, 27.1015625, 29.1796875, 29.0625, 27.90625, 26.6015625, 25.5078125, 24.734375, 24.2890625, 24.03125, 23.7265625, 23.3125, 22.953125, 23.09375, 24.8125, 29.359375, 34.4921875, 38.390625, 46.203125, 61.6640625, 74.3515625, 78.3125, 78.46875, 76.1484375, 71.90625, 67.21875, 62.8671875, 59.1328125, 56.5234375, 55.171875, 53.9296875, 51.921875, 49.34375, 46.6171875, 43.9609375, 41.4765625, 39.21875, 37.2109375, 35.4296875, 33.84375, 32.53125, 31.640625, 31.90625, 34.5625, 39.234375, 42.3125, 42.1640625, 40.65625, 37.34375, 35.734375, 41.0546875, 50.453125, 54.3828125, 52.71875, 49.21875, 45.6015625, 42.359375, 39.7890625, 40.609375, 44.921875, 48.5390625, 50.296875, 53.328125, 57.90625, 63.5859375, 72.9375, 84.34375, 109.5625, 150.625, 197.3359375, 296.5546875, 1200.1796875, 2681.7578125, 2467.0859375, 1824.1953125, 1392.453125, 1149.125, 975.2734375, 856.109375, 784.125, 724.3203125, 712.15625, 853.0625, 916.9375, 878.140625, 860.171875, 847.375, 826.484375, 793.1875, 740.90625, 740.40625, 1042.625, 1115.171875, 908.609375, 770.265625, 696.328125, 718.796875]
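The pix2value loop above reads one value per call; once the stack is available as a NumPy array (like glofas_2np earlier), the same time series can be taken with a single slice. A minimal sketch on a synthetic array - the (time, row, col) axis order is an assumption matching the reshape used above:

```python
import numpy as np

# synthetic stack shaped (time, row, col), analogous to glofas_2np (731, 52, 57)
stack = np.arange(2 * 3 * 4, dtype=np.float64).reshape((2, 3, 4))

# full time series for row 1, column 2 in one slice
series = stack[:, 1, 2]
print(series)  # -> [ 6. 18.]
```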
Repeat the procedure for 2021
l = 28
c = 50
z = glofas_2021.size().zsize
print(z)
Q_2021 = []
for n in range(0, z):
    point_value = glofas_2021.pix2value(ilwis.Pixel(c, l, n))
    Q_2021.append(point_value)
print('Values extracted for selected location:', Q_2021)
365 Values extracted for selected location: [791.40625, 781.078125, 739.71875, 712.109375, 680.703125, 662.4375, 660.125, 686.484375, 730.59375, 795.640625, 836.703125, 834.125, 814.671875, 809.84375, 840.5, 897.0, 951.96875, 1277.578125, 2162.265625, 2325.5, 1961.1875, 1731.28125, 2065.71875, 3331.296875, 2911.265625, 2335.953125, 2016.796875, 1814.421875, 1694.078125, 1624.40625, 1532.09375, 1427.625, 1366.6875, 1436.84375, 1429.03125, 1335.59375, 1240.5625, 1163.34375, 1107.6875, 1076.53125, 1027.125, 988.21875, 962.65625, 931.03125, 999.59375, 1170.25, 1684.28125, 3062.5625, 3439.34375, 2875.65625, 2359.8125, 2030.25, 1799.15625, 1652.53125, 1555.84375, 1455.6875, 1358.0, 1275.65625, 1261.5625, 1270.53125, 1217.96875, 1141.53125, 1067.40625, 1008.25, 973.28125, 931.375, 889.40625, 847.875, 803.8125, 758.75, 718.125, 680.4375, 648.15625, 617.5625, 588.09375, 558.78125, 532.375, 509.9375, 487.53125, 467.0, 447.6875, 431.5, 434.75, 455.5625, 457.46875, 441.5625, 435.46875, 430.8125, 416.875, 399.8125, 381.5625, 363.65625, 346.8125, 331.09375, 316.65625, 303.5625, 291.90625, 282.8125, 275.15625, 265.53125, 255.40625, 246.21875, 237.6875, 229.3125, 221.1875, 213.3125, 206.21875, 203.5, 206.53125, 205.03125, 198.65625, 191.53125, 185.03125, 179.53125, 175.5625, 171.8125, 166.65625, 160.625, 154.6875, 149.53125, 146.125, 146.28125, 155.03125, 170.4375, 175.34375, 169.59375, 161.59375, 153.8125, 146.5, 139.75, 133.5625, 128.0625, 123.0, 118.25, 113.5625, 108.96875, 104.65625, 100.59375, 96.9375, 93.90625, 91.5625, 89.5, 87.28125, 85.28125, 84.1875, 83.9375, 83.15625, 81.09375, 78.21875, 75.1875, 72.25, 69.59375, 67.25, 65.46875, 65.6875, 69.0, 73.03125, 74.125, 72.84375, 71.4375, 71.375, 71.90625, 71.0, 68.90625, 66.46875, 63.875, 61.15625, 58.34375, 55.6875, 53.1875, 51.03125, 49.53125, 49.6875, 52.0625, 54.8125, 55.5625, 54.46875, 52.5625, 50.46875, 48.265625, 46.046875, 43.875, 41.84375, 39.9375, 38.171875, 36.5, 34.9375, 33.515625, 32.328125, 31.59375, 31.828125, 
32.65625, 32.890625, 32.171875, 30.96875, 29.828125, 29.3125, 29.9375, 32.953125, 38.96875, 43.203125, 43.453125, 41.875, 39.921875, 37.953125, 36.046875, 34.3125, 32.921875, 31.90625, 31.296875, 31.0625, 30.8125, 30.203125, 29.359375, 28.390625, 27.359375, 26.34375, 25.40625, 24.546875, 23.71875, 22.921875, 22.125, 21.421875, 20.84375, 20.4375, 20.5625, 21.921875, 25.703125, 38.78125, 60.390625, 68.734375, 66.578125, 62.15625, 57.828125, 53.953125, 50.484375, 47.671875, 45.5703125, 43.578125, 41.3515625, 39.046875, 36.953125, 35.2109375, 34.4765625, 35.8046875, 37.5234375, 37.5078125, 36.1484375, 34.4140625, 32.671875, 31.015625, 29.5, 28.2109375, 27.2890625, 27.0703125, 28.46875, 32.140625, 35.421875, 35.9140625, 34.5859375, 32.890625, 31.4609375, 30.8203125, 32.234375, 35.375, 37.359375, 37.171875, 36.4765625, 36.90625, 37.46875, 36.765625, 35.125, 33.265625, 31.6640625, 30.640625, 30.1171875, 29.875, 29.6640625, 29.484375, 30.3515625, 33.3828125, 37.5859375, 41.28125, 43.8515625, 44.6796875, 45.5859375, 48.34375, 49.9375, 48.75, 46.25, 43.90625, 42.5703125, 42.6328125, 45.3671875, 53.03125, 61.3984375, 63.4296875, 61.109375, 58.1015625, 56.796875, 57.453125, 57.671875, 57.265625, 57.546875, 57.546875, 55.9453125, 53.2734375, 50.4296875, 47.765625, 45.328125, 43.171875, 41.28125, 39.4375, 37.6015625, 35.921875, 34.5078125, 33.640625, 31.78125, 30.1875, 29.65625, 29.1484375, 28.1640625, 26.8515625, 25.4765625, 24.3828125, 24.2890625, 27.9765625, 41.2578125, 58.171875, 68.890625, 77.1875, 85.8125, 90.78125, 90.09375, 85.5, 79.578125, 73.703125, 69.28125, 65.359375, 61.265625, 57.453125, 54.03125, 51.046875, 48.40625, 45.953125, 43.546875, 41.203125, 39.015625, 37.03125, 35.421875, 35.015625, 41.21875, 65.984375, 115.484375, 146.984375, 149.578125, 147.828125, 155.765625, 178.5, 200.609375, 213.0625, 223.140625, 223.78125, 211.96875, 200.828125]
The two lists obtained above are now merged into a single list
#concatenate the 2 lists
Q_all = Q_2020 + Q_2021
print(len(Q_all))
731
Create an appropriate time stamp for the X-axis and check the output index list created
# initializing date range for X-Axis
plot_date = datetime.strptime("01-1-2020", "%d-%m-%Y")
z = (glofas_2020.size().zsize+glofas_2021.size().zsize)
date_generated = pd.date_range(plot_date, periods=z)
print(date_generated.strftime("%d-%m-%Y"))
Index(['01-01-2020', '02-01-2020', '03-01-2020', '04-01-2020', '05-01-2020', '06-01-2020', '07-01-2020', '08-01-2020', '09-01-2020', '10-01-2020', ... '22-12-2021', '23-12-2021', '24-12-2021', '25-12-2021', '26-12-2021', '27-12-2021', '28-12-2021', '29-12-2021', '30-12-2021', '31-12-2021'], dtype='object', length=731)
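pd.date_range handles the leap day automatically; a small self-contained check (using the same start date and period count as above):

```python
import pandas as pd

# 731 daily steps starting 1 January 2020 cover 2020 (366 days) plus 2021 (365 days)
idx = pd.date_range("2020-01-01", periods=731)
print(idx[-1].strftime("%d-%m-%Y"))  # -> 31-12-2021
print((idx.year == 2020).sum())      # -> 366
```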
%matplotlib inline
fig2 = plt.figure(figsize =(14, 7))
plt.plot(date_generated, Q_all, label='Discharge')
plt.xlabel('Time')
plt.ylabel('Discharge (m3/sec)')
plt.title('Discharge extracted for a given pixel')
plt.legend()
<matplotlib.legend.Legend at 0x2363f8e0fa0>
Hourly temperature of the air at 2 m above the surface of land, sea or inland waters is retrieved from the ERA5-Land Reanalysis for 20230101, from 00:00 to 23:00 UTC, in grib format; see also: https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-land?tab=overview
Load the data.
grib_in = ilwis.RasterCoverage('CDS_temperature.grib')
print(grib_in.size())
Size(3600, 1801, 24)
# select a raster band layer from the initial data stack, here band 0 - representing 00:00 UTC
# note numbering starts from 0
B1 = ilwis.do('selection',grib_in,"rasterbands(0)")
print(B1.size().xsize)
print(B1.size().ysize)
print(B1.size().zsize)
3600
1801
1
Retrieve the other meta data
coordSys = B1.coordinateSystem()
print(coordSys.toWKT())
print()
print(coordSys.toProj4())
print()
print(B1.envelope())
print()
stats = B1.statistics(ilwis.PropertySets.pHISTOGRAM, 256)
#print(stats.histogram())
print(stats[ilwis.PropertySets.pMIN]) # minimum value on the map
print(stats[ilwis.PropertySets.pMAX]) # maximum value on the map
PROJCS["cds_temperature.grib",GEOCS["cds_temperature.grib",DATUM[" unnamed",[?],ELLIPSOID["Normal Sphere (r=6370997)",6370997.000000000000,0.000000000000],PRIMEM["Greenwich",0, AUTHORITY["EPSG",8901"]]],PROJECTION[""],,UNIT[degrees,1.0]]

+proj=longlat +a=6370997.000000000000 +b=6370997.000000000000

-180.050000 -90.050000 179.950000 90.050000

217.78515625
312.697265625
%matplotlib inline
stats_input = B1.statistics(ilwis.PropertySets.pHISTOGRAM, 255) # here 255 bins are used to create the histogram
x = [a for (a, b) in stats_input.histogram()][:-1]
y = [b for (a, b) in stats_input.histogram()][:-1]
plt.plot(x,y,label='Raster Map values')
plt.xlabel('Data Range')
plt.ylabel('Data Frequency')
plt.title('Raster Histogram')
plt.legend()
<matplotlib.legend.Legend at 0x2363fd10ee0>
Display the imported band using MatplotLib Imshow
b1_2np = np.fromiter(iter(B1), np.float64, B1.size().linearSize())
#now we overwrite the initial variable created
b1_2np = b1_2np.reshape((B1.size().ysize, B1.size().xsize))
# create a plot of the numpy array using MatPlotLib - Imshow
%matplotlib notebook
fig = plt.figure(figsize =(10, 7))
plt.imshow(b1_2np, interpolation='none', vmin=219, vmax=312, cmap='jet')
plt.axis('on')
plt.colorbar(shrink=0.45)
plt.title('2 meter Temperature on 2023-01-01')
Text(0.5, 1.0, '2 meter Temperature on 2023-01-01')
Extract the temperature values for a certain position; the position can be selected using the interactive plot above.
l = 832
c = 2087
z = grib_in.size().zsize
print(z)
stack_value = []
for n in range(0, z):
    point_value = grib_in.pix2value(ilwis.Pixel(c, l, n))
    stack_value.append(point_value)
print('Values extracted for selected location:', stack_value)
24
Values extracted for selected location: [296.67578125, 296.22178649902344, 296.2717590332031, 296.3474884033203, 296.19496154785156, 296.22576904296875, 298.7357635498047, 301.31219482421875, 303.63812255859375, 305.6784210205078, 307.09645080566406, 307.9588317871094, 308.47203063964844, 308.7301330566406, 308.6014404296875, 308.0208435058594, 306.2880554199219, 304.44432067871094, 303.1319885253906, 301.9327697753906, 301.15843200683594, 300.47023010253906, 299.4812316894531, 298.0426940917969]
T_kelvin = np.array(stack_value)
T_celsius = T_kelvin - 273.15
print(T_celsius)
[23.52578125 23.0717865 23.12175903 23.1974884 23.04496155 23.07576904 25.58576355 28.16219482 30.48812256 32.52842102 33.94645081 34.80883179 35.32203064 35.58013306 35.45144043 34.87084351 33.13805542 31.29432068 29.98198853 28.78276978 28.00843201 27.3202301 26.33123169 24.89269409]
%matplotlib inline
fig2 = plt.figure(figsize =(10, 7))
plt.plot(T_celsius, label='pixel value')
plt.xticks([0,2,4,6,8,10,12,14,16,18,20,22,23])
plt.xlabel('Hour of the day - 2023-01-01')
plt.ylabel('Temperature (Celsius)')
plt.title('Diurnal temperature extracted for a given pixel')
plt.legend()
<matplotlib.legend.Legend at 0x2363fe30e80>
Create a sub-map
#window definition: x-min y-min, x-max y-max (in pixel coordinates)
grib_sub = ilwis.do('selection',grib_in,"boundingbox(2000 350, 2999 999)")
print(grib_sub.size())
Size(1000, 650, 24)
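The sub-map dimensions follow directly from the bounding-box corners, since pixel coordinates are inclusive at both ends (the corner interpretation is consistent with the Size() output above):

```python
# boundingbox(2000 350, 2999 999): first corner x-min y-min, second corner x-max y-max
xmin, ymin, xmax, ymax = 2000, 350, 2999, 999

# inclusive pixel ranges give the sub-map width and height
print(xmax - xmin + 1, ymax - ymin + 1)  # -> 1000 650
```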
Now some calculations to get the mean, min, max and standard deviation of the 24 map layers, using the operation 'aggregaterasterstatistics'. Here the mean is further elaborated upon; you can replace it with the other statistics.
ilwis.operationMetaData('aggregaterasterstatistics')
'aggregaterasterstatistics(inputraster,statisticalmarker=mean|variance|standarddev|totalsumsquares|skew|kurtosis|max|min|maxindex|minindex|median|sum)'
# calculate mean, minimum maximum temperature and standard deviation for the given day
rc_mean = ilwis.do('aggregaterasterstatistics', grib_sub, 'mean')
rc_min = ilwis.do('aggregaterasterstatistics', grib_sub, 'min')
rc_max = ilwis.do('aggregaterasterstatistics', grib_sub, 'max')
rc_std = ilwis.do('aggregaterasterstatistics', grib_sub, 'standarddev')
# check mean temperature for some location for column,row (x,y) respectively
print(rc_mean.pix2value(ilwis.Pixel(500,100)))
269.8196881612142
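The 'mean' aggregation reduces the stack along the band (time) axis, one value per pixel. The same reduction can be expressed in NumPy - a sketch on a tiny synthetic stack, not the ERA5 data itself:

```python
import numpy as np

# two time steps of a 1x2 temperature grid, in Kelvin; shape (time, row, col) = (2, 1, 2)
stack = np.array([[[270.0, 272.0]],
                  [[274.0, 276.0]]])

# per-pixel temporal mean, analogous to aggregaterasterstatistics with 'mean'
mean_map = stack.mean(axis=0)
print(mean_map)  # -> [[272. 274.]]
```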
#transform from Kelvin to Celsius
rc_meanC = ilwis.do('mapcalc','(@1 -273.15)', rc_mean)
#calculate the histogram of the 3 images provided
rc_meanC_stat = rc_meanC.statistics(ilwis.PropertySets.pMIN|ilwis.PropertySets.pMAX|ilwis.PropertySets.pHISTOGRAM, 256)
#print Min and Max values
print(rc_meanC_stat.prop(ilwis.PropertySets.pMIN), rc_meanC_stat.prop(ilwis.PropertySets.pMAX))
#note: this overwrites the rc_min and rc_max raster coverages created above with scalar values
rc_min = rc_meanC_stat.prop(ilwis.PropertySets.pMIN)
rc_max = rc_meanC_stat.prop(ilwis.PropertySets.pMAX)
-35.80158462524412 32.34628321329757
# create a plot of the numpy array using MatPlotLib - Imshow
# using the Mean
meanC_2np = np.fromiter(iter(rc_meanC), np.float64, rc_meanC.size().linearSize())
meanC_2np = meanC_2np.reshape((rc_meanC.size().ysize, rc_meanC.size().xsize))
%matplotlib inline
fig3 = plt.figure(figsize =(10, 7))
plt.imshow(meanC_2np, interpolation='none', vmin=rc_min, vmax=rc_max, cmap='jet')
plt.axis('on')
plt.colorbar(shrink=0.45)
plt.title('Mean temperature of air at 2m above the surface on 2023-01-01')
Text(0.5, 1.0, 'Mean temperature of air at 2m above the surface on 2023-01-01')
We continue working with data obtained from altimetry measurements over lakes and major rivers. See also: http://land.copernicus.eu/global/products/wl. Two files are provided, one covering Lake Kariba (example for lake) and the other file covers the Jamuna (example for river). The data is provided in json format. You can also inspect the file using a text editor, note the attributes contained in the sections 'type', 'geometry', 'properties' and 'data'.
The example provided below shows how to retrieve and process the data; uncomment the input file for river or lake accordingly in the cell below.
#Select the river or lake as input, comment or uncomment the lines below
#example river:
wl_in = work_dir+'/c_gls_WL_202102140004_Jamuna_ALTI_V2.1.0.json'
#example lake:
#wl_in = work_dir+'/c_gls_WL_202108272015_Kariba_ALTI_V2.1.0.json'
#load the data file as an ILWISPy feature coverage
fc = ilwis.FeatureCoverage(wl_in)
print("The file name selected is: ", fc)
The file name selected is: c_gls_WL_202102140004_Jamuna_ALTI_V2.1.0.json
Get some idea of what is contained in the data set
print(fc.featureCount())
print(fc.featureCount(ilwis.it.POINT))
print(fc.attributeCount())
1
1
29
Check the feature data types / domain
for i in range(fc.attributeCount()):
    col_domain = fc[i]
    print(col_domain)
Name: resource Columnindex: 0 Domain name: Text domain, Domain type: TextDomain Name: basin Columnindex: 1 Domain name: Text domain, Domain type: TextDomain Name: river Columnindex: 2 Domain name: Text domain, Domain type: TextDomain Name: comment Columnindex: 3 Domain name: Text domain, Domain type: TextDomain Name: country Columnindex: 4 Domain name: Text domain, Domain type: TextDomain Name: institution Columnindex: 5 Domain name: Text domain, Domain type: TextDomain Name: source Columnindex: 6 Domain name: Text domain, Domain type: TextDomain Name: references Columnindex: 7 Domain name: Text domain, Domain type: TextDomain Name: archive_facility Columnindex: 8 Domain name: Text domain, Domain type: TextDomain Name: time_coverage_start Columnindex: 9 Domain: ValueDomain, Range: timeinterval:-4712-01-01T12:00:00|-4712-01-01T-11:-59:-60 Name: time_coverage_end Columnindex: 10 Domain: ValueDomain, Range: timeinterval:-4712-01-01T12:00:00|-4712-01-01T-11:-59:-60 Name: updated Columnindex: 11 Domain: ValueDomain, Range: timeinterval:-4712-01-01T12:00:00|-4712-01-01T-11:-59:-60 Name: platform Columnindex: 12 Domain name: Text domain, Domain type: TextDomain Name: long_name Columnindex: 13 Domain name: Text domain, Domain type: TextDomain Name: processing_level Columnindex: 14 Domain name: Text domain, Domain type: TextDomain Name: processing_mode Columnindex: 15 Domain name: Text domain, Domain type: TextDomain Name: status Columnindex: 16 Domain name: Text domain, Domain type: TextDomain Name: copyright Columnindex: 17 Domain name: Text domain, Domain type: TextDomain Name: contacts Columnindex: 18 Domain name: Text domain, Domain type: TextDomain Name: inspire_theme Columnindex: 19 Domain name: Text domain, Domain type: TextDomain Name: gemet_keywords Columnindex: 20 Domain name: Text domain, Domain type: TextDomain Name: gcmd_keywords Columnindex: 21 Domain name: Text domain, Domain type: TextDomain Name: iso19115_topic_categories Columnindex: 22 Domain name: Text 
domain, Domain type: TextDomain Name: credit Columnindex: 23 Domain name: Text domain, Domain type: TextDomain Name: purpose Columnindex: 24 Domain name: Text domain, Domain type: TextDomain Name: resource_mimetype Columnindex: 25 Domain name: Text domain, Domain type: TextDomain Name: water_surface_reference_name Columnindex: 26 Domain name: Text domain, Domain type: TextDomain Name: water_surface_reference_datum_altitude Columnindex: 27 Domain name: value, Domain type: ValueDomain, Range: ? ? ? Name: missing_value Columnindex: 28 Domain name: value, Domain type: ValueDomain, Range: ? ? ?
Read the information provided for columnindex 1 'basin', etc. Also get an idea of the location, using the information provided by the geometry
fiter = iter(fc)
f = next(fiter)
print('Basin = ', f['basin'])
basin = (f['basin'])
#uncomment if the input file is containing altimetry data over a river
print('object = ', f['river'])
object = f['river'] #note: this name shadows the Python built-in 'object'
#uncomment if the input file is containing altimetry data over a lake
#print('object = ', f['lake'])
#object = (f['lake'])
geom = f.geometry()
pointiter = iter(geom)
point = next(pointiter)
#print(point)
x = (point.x)
y = (point.y)
print('Location =', x, y)
Basin =  Ganges-brahmaputra
object =  Jamuna
Location = 89.6809 24.6611
All information can be obtained using a loop over the attribute table
for f in fc:
    for col in fc.attributeTable().columns():
        print(col + ': ' + str(f[col]))
resource: 0000000005533 basin: Ganges-brahmaputra river: Jamuna comment: Timeseries specified by LEGOS and computed by CLS on behalf of CNES and Copernicus Global Land country: Bangladesh institution: LEGOS, CLS, CNES source: Derived from satellite altimetry references: http://land.copernicus.eu/global/products/wl archive_facility: CLS time_coverage_start: 2008-07-21 01:26:00 time_coverage_end: 2021-02-14 00:04:00 updated: 2021-02-16 01:37:57 platform: JASON2/JASON3 long_name: Lake and River Water Level processing_level: LEVEL3B processing_mode: Near Real Time status: operational copyright: Copernicus Service Information 2017 contacts: Helpdesk: http://land.copernicus.eu/global/contactpage Accountable contact: European Commission DG Joint Research Centre copernicuslandproducts@jrc.ec.europa.eu Owner contact: European Commission DG Internal Market, Industry, Entrepreneurship and SMEs ENTR-COPERNICUS-ASSETS@ec.europa.eu inspire_theme: Hydrography gemet_keywords: water level, water management, hydrometry, hydrology, climate, seasonal variation, environmental data, environmental monitoring, monitoring, remote sensing gcmd_keywords: TERRESTRIAL HYDROSPHERE, SURFACE WATER iso19115_topic_categories: elevation,inlandWaters credit: Lake and River Water Level products are generated by the Global Land Service of Copernicus, the Earth Observation Programme of the European Commission and the Theia-land program supported by CNES. The research leading to the current version of the product has received funding from CNES, LEGOS, IRD and CLS. purpose: This product is first designed to fit the requirements of the Global Land component of Land Service of Copernicus. It can be also useful for all applications related to the environment monitoring, climate research and hydrology. resource_mimetype: application/json water_surface_reference_name: EGM2008 water_surface_reference_datum_altitude: -54.42 missing_value: 9999.999
Another way to loop over the attribute table is provided in the cell below
for col in fc.attributeTable().columns():
    print(col + ': ' + str(fc.attributeTable().column(col)[0]))
resource: 0000000005533 basin: Ganges-brahmaputra river: Jamuna comment: Timeseries specified by LEGOS and computed by CLS on behalf of CNES and Copernicus Global Land country: Bangladesh institution: LEGOS, CLS, CNES source: Derived from satellite altimetry references: http://land.copernicus.eu/global/products/wl archive_facility: CLS time_coverage_start: 2008-07-21 01:26:00 time_coverage_end: 2021-02-14 00:04:00 updated: 2021-02-16 01:37:57 platform: JASON2/JASON3 long_name: Lake and River Water Level processing_level: LEVEL3B processing_mode: Near Real Time status: operational copyright: Copernicus Service Information 2017 contacts: Helpdesk: http://land.copernicus.eu/global/contactpage Accountable contact: European Commission DG Joint Research Centre copernicuslandproducts@jrc.ec.europa.eu Owner contact: European Commission DG Internal Market, Industry, Entrepreneurship and SMEs ENTR-COPERNICUS-ASSETS@ec.europa.eu inspire_theme: Hydrography gemet_keywords: water level, water management, hydrometry, hydrology, climate, seasonal variation, environmental data, environmental monitoring, monitoring, remote sensing gcmd_keywords: TERRESTRIAL HYDROSPHERE, SURFACE WATER iso19115_topic_categories: elevation,inlandWaters credit: Lake and River Water Level products are generated by the Global Land Service of Copernicus, the Earth Observation Programme of the European Commission and the Theia-land program supported by CNES. The research leading to the current version of the product has received funding from CNES, LEGOS, IRD and CLS. purpose: This product is first designed to fit the requirements of the Global Land component of Land Service of Copernicus. It can be also useful for all applications related to the environment monitoring, climate research and hydrology. resource_mimetype: application/json water_surface_reference_name: EGM2008 water_surface_reference_datum_altitude: -54.42 missing_value: 9999.999
Having inspected the meta data, we are going to load the 'data' section as a Pandas dataframe and create a plot of the water level above the water surface reference datum. This is required as the data provided does not fully adhere to the common json format. Here a single point location is provided with a number of observations attached, whereas normally you would expect a corresponding observation for each location (multi-point). As this is not the case here, we open the file once more, but now with Pandas.
wl = jload(open(wl_in))
Alti_data = DataFrame(wl['data'])
Alti_data.head()#display the first number of records and their column names
|   | identifier | time | datetime | water_surface_height_above_reference_datum | water_surface_height_uncertainty |
|---|---|---|---|---|---|
| 0 | 5533 | 2008.552076 | 2008/07/21 01:26 | 15.10 | 0.09 |
| 1 | 5533 | 2008.579167 | 2008/07/30 23:24 | 15.33 | 0.13 |
| 2 | 5533 | 2008.606259 | 2008/08/09 21:23 | 15.54 | 0.12 |
| 3 | 5533 | 2008.633350 | 2008/08/19 19:21 | 15.40 | 0.15 |
| 4 | 5533 | 2008.660443 | 2008/08/29 17:20 | 14.47 | 0.76 |
Implement some small adjustments in the active Pandas data frame, e.g. rename some column names and create a new column with only the date
Alti_data['Date_only'] = pd.to_datetime(Alti_data.datetime) #change date format
Alti_data['Date_only'] = Alti_data.Date_only.dt.date #extract year-month-day portion of the date
Alti_data=Alti_data.rename(columns = {'water_surface_height_above_reference_datum':'WSH'})
Alti_data=Alti_data.rename(columns = {'water_surface_height_uncertainty':'Disp'})
del Alti_data['time']
Alti_data.head()
|   | identifier | datetime | WSH | Disp | Date_only |
|---|---|---|---|---|---|
| 0 | 5533 | 2008/07/21 01:26 | 15.10 | 0.09 | 2008-07-21 |
| 1 | 5533 | 2008/07/30 23:24 | 15.33 | 0.13 | 2008-07-30 |
| 2 | 5533 | 2008/08/09 21:23 | 15.54 | 0.12 | 2008-08-09 |
| 3 | 5533 | 2008/08/19 19:21 | 15.40 | 0.15 | 2008-08-19 |
| 4 | 5533 | 2008/08/29 17:20 | 14.47 | 0.76 | 2008-08-29 |
Create a final plot of the water level time series, including the uncertainty parameter and subsequently save this plot as a jpg image
# Plot with matplotlib
%matplotlib inline
fig, ax = subplots(figsize=(16,6))
ax.errorbar(Alti_data['Date_only'],
            Alti_data['WSH'],
            yerr = Alti_data['Disp'],
            color = 'darkblue',
            ecolor = 'darkred',
            ls = '-',
            marker = '.',
            alpha = .8,
            label = 'Mean dispersion = %4.2f cm'%(Alti_data['Disp'].mean()*100))
ax.set_xlabel('Observation period')
ax.set_ylabel('Water Surface Height (m)')
ax.legend(loc = 'best')
ax.grid()
# Set title and labels for axes
ax.set(title="Water Surface Height above Reference Datum for " + basin +" / " + object + " at lon:" + " " + str(x) + " - lat: " + str(y))
[Text(0.5, 1.0, 'Water Surface Height above Reference Datum for Ganges-brahmaputra / Jamuna at lon: 89.6809 - lat: 24.6611')]
# To save the plot -uncomment the line below
#fig.savefig(work_dir+'/'+'TS_altimeter.jpg')
Provided is a time series of MODIS NDVI maps (scaled 0-255) in GeoTIFF format, from 'year - day of year' 2000-001 to 2020-353 (21 years) with a 16-day repeat interval: a total of 483 time steps (21 years x 23 NDVI maps per year). The maps are clipped to the Woreda Fagta Lakoma, situated in the Awi Agew zone of the Amhara region, Ethiopia.
Map showing the Area of Interest.
rcTS = ilwis.RasterCoverage('ndvi_ts.tif')
# probe its size
print(rcTS.size().xsize)
print(rcTS.size().ysize)
print(rcTS.size().zsize)
199
109
483
Let's have a look at the values for a specific pixel
#get info on specific pixel for all temporal intervals, using the input data
l = 50
c = 50
non_filtered = []
for n in range(0,(rcTS.size().zsize)):
    ndvi_val = (rcTS.pix2value(ilwis.Pixel(c,l,n)))
    non_filtered.append(ndvi_val)
print("Values extracted for input time series:", non_filtered)
print()
Values extracted for input time series: [195.0, 136.0, 153.0, 118.0, 164.0, 144.0, 118.0, 177.0, 196.0, 183.0, 172.0, 137.0, 219.0, 12.0, 75.0, 199.0, 192.0, 201.0, 206.0, 221.0, 203.0, 185.0, 140.0, 195.0, 136.0, 153.0, 163.0, 147.0, 138.0, 148.0, 130.0, 180.0, 190.0, 26.0, 99.0, 202.0, 182.0, 177.0, 212.0, 160.0, 151.0, 224.0, 197.0, 159.0, 198.0, 170.0, 168.0, 115.0, 159.0, 114.0, 103.0, 88.0, 108.0, 102.0, 118.0, 143.0, 214.0, 195.0, 190.0, 201.0, 195.0, 192.0, 191.0, 197.0, 207.0, 200.0, 193.0, 194.0, 172.0, 135.0, 139.0, 140.0, 124.0, 131.0, 129.0, 121.0, 117.0, 85.0, 113.0, 163.0, 151.0, 188.0, 47.0, 217.0, 217.0, 231.0, 218.0, 224.0, 194.0, 214.0, 145.0, 186.0, 132.0, 98.0, 132.0, 115.0, 130.0, 138.0, 139.0, 158.0, 147.0, 106.0, 183.0, 179.0, 136.0, 201.0, 219.0, 203.0, 208.0, 130.0, 186.0, 168.0, 173.0, 164.0, 186.0, 112.0, 168.0, 89.0, 117.0, 149.0, 154.0, 139.0, 137.0, 149.0, 141.0, 167.0, 172.0, 168.0, 107.0, 206.0, 190.0, 232.0, 185.0, 169.0, 204.0, 205.0, 200.0, 178.0, 144.0, 133.0, 137.0, 120.0, 159.0, 163.0, 104.0, 99.0, 165.0, 133.0, 166.0, 178.0, 208.0, 155.0, 220.0, 178.0, 181.0, 199.0, 71.0, 191.0, 186.0, 199.0, 164.0, 164.0, 114.0, 127.0, 168.0, 160.0, 166.0, 128.0, 150.0, 185.0, 189.0, 176.0, 204.0, 110.0, 82.0, 196.0, 203.0, 0.0, 189.0, 205.0, 197.0, 192.0, 212.0, 193.0, 113.0, 164.0, 137.0, 151.0, 168.0, 172.0, 154.0, 175.0, 169.0, 211.0, 215.0, 205.0, 53.0, 119.0, 199.0, 198.0, 215.0, 164.0, 200.0, 213.0, 171.0, 188.0, 153.0, 127.0, 156.0, 133.0, 105.0, 125.0, 141.0, 166.0, 177.0, 156.0, 148.0, 154.0, 187.0, 66.0, 143.0, 204.0, 201.0, 182.0, 197.0, 209.0, 189.0, 197.0, 191.0, 172.0, 193.0, 112.0, 146.0, 148.0, 104.0, 139.0, 157.0, 95.0, 166.0, 197.0, 204.0, 172.0, 169.0, 196.0, 166.0, 178.0, 182.0, 211.0, 193.0, 207.0, 157.0, 197.0, 107.0, 166.0, 102.0, 152.0, 159.0, 117.0, 167.0, 117.0, 119.0, 180.0, 169.0, 206.0, 184.0, 171.0, 132.0, 176.0, 179.0, 193.0, 217.0, 197.0, 217.0, 159.0, 202.0, 177.0, 177.0, 171.0, 115.0, 93.0, 149.0, 115.0, 
139.0, 123.0, 177.0, 174.0, 194.0, 173.0, 177.0, 188.0, 190.0, 0.0, 191.0, 218.0, 219.0, 181.0, 197.0, 168.0, 186.0, 151.0, 166.0, 145.0, 141.0, 107.0, 182.0, 137.0, 90.0, 165.0, 137.0, 143.0, 72.0, 200.0, 226.0, 178.0, 153.0, 191.0, 132.0, 186.0, 189.0, 187.0, 199.0, 160.0, 117.0, 163.0, 158.0, 124.0, 138.0, 108.0, 119.0, 179.0, 184.0, 207.0, 166.0, 202.0, 88.0, 37.0, 121.0, 76.0, 115.0, 173.0, 184.0, 164.0, 160.0, 154.0, 167.0, 167.0, 153.0, 100.0, 161.0, 176.0, 165.0, 121.0, 97.0, 184.0, 185.0, 200.0, 188.0, 201.0, 199.0, 201.0, 209.0, 193.0, 211.0, 212.0, 180.0, 210.0, 209.0, 152.0, 152.0, 123.0, 155.0, 117.0, 177.0, 163.0, 151.0, 193.0, 190.0, 216.0, 169.0, 106.0, 199.0, 195.0, 220.0, 203.0, 205.0, 219.0, 217.0, 200.0, 210.0, 162.0, 192.0, 176.0, 121.0, 124.0, 151.0, 182.0, 158.0, 141.0, 78.0, 228.0, 220.0, 216.0, 206.0, 199.0, 210.0, 133.0, 223.0, 222.0, 223.0, 225.0, 218.0, 208.0, 197.0, 202.0, 192.0, 170.0, 177.0, 181.0, 184.0, 153.0, 185.0, 187.0, 207.0, 207.0, 184.0, 206.0, 184.0, 186.0, 215.0, 220.0, 225.0, 217.0, 222.0, 229.0, 222.0, 215.0, 222.0, 220.0, 198.0, 190.0, 194.0, 202.0, 204.0, 170.0, 201.0, 212.0, 230.0, 223.0, 222.0, 201.0, 145.0, 107.0, 209.0, 192.0, 217.0, 40.0, 214.0, 227.0, 227.0, 218.0, 210.0, 205.0, 192.0, 191.0, 187.0, 186.0, 210.0, 191.0, 201.0, 212.0, 197.0, 216.0, 29.0, 71.0, 225.0, 78.0, 170.0, 205.0, 81.0, 216.0, 220.0, 197.0, 200.0]
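Looping with `pix2value` works, but for large stacks the whole raster can also be pulled into NumPy once and any pixel indexed directly (the same `np.fromiter` pattern is used later in this notebook). A sketch with a synthetic stand-in array, assuming a band-sequential (time, line, column) layout; on the real raster the 1-D array would come from `np.fromiter(iter(rcTS), np.float64, rcTS.size().linearSize())`:

```python
import numpy as np

# Dimensions reported above for ndvi_ts.tif: 199 columns, 109 lines, 483 bands
z, y, x = 483, 109, 199
# Stand-in 1-D array; replace with np.fromiter(...) on the real raster
data = np.arange(z * y * x, dtype=np.float64)
cube = data.reshape(z, y, x)          # assumed band-sequential: (time, line, column)

l, c = 50, 50
profile = cube[:, l, c]               # complete temporal profile in one slice
print(profile.shape)
```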
Create a simple plot to inspect whether there are anomalies - noise - in the time series extracted for this pixel
# create a simple plot showing the differences as a function of the temporal filter, using the settings applied
%matplotlib inline
plt.figure(figsize=(15,10))
plt.plot(non_filtered, label='original time series')
plt.ylim([0, 275])
plt.xlabel('Time Series Range: 2000 to end 2021')
plt.ylabel('Digital Number')
plt.title('NDVI temporal profile for selected pixel at line: '+str(l) +' - column: '+ str(c))
plt.legend()
<matplotlib.legend.Legend at 0x23607ae07c0>
As can be observed from the plot generated above, there are quite a number of 'dips', which might represent no data (e.g. due to persistent cloud cover - even during 16 days / multiple overpasses) or other atmospheric issues that result mainly in NDVI values which are lower or higher than expected. This can be corrected for using a temporal filtering procedure. See also: https://www.sciencedirect.com/science/article/pii/S0034425718303985
Satellite-derived NDVI time-series products are usually affected by contamination due to the atmosphere (aerosols and dust) or clouds, which results in a data set containing spikes and dips and therefore it is necessary to reduce this residual noise to improve the quality of the NDVI time-series data.
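The `timesat` operation used below handles this properly, but the basic idea of despiking can be illustrated with a simple NumPy sketch: replace any value that drops far below a sliding median of its neighbours. This is only an illustration of the principle, not the timesat algorithm; the window size and drop threshold are illustrative assumptions:

```python
import numpy as np

def despike(series, window=5, drop=40):
    """Replace values that fall more than `drop` below the local median."""
    s = np.asarray(series, dtype=float)
    out = s.copy()
    half = window // 2
    for i in range(len(s)):
        local = s[max(0, i - half):i + half + 1]
        med = np.median(local)
        if med - s[i] > drop:          # a 'dip' well below its neighbours
            out[i] = med
    return out

noisy = [195, 136, 153, 118, 164, 144, 12, 177, 196, 183]   # dip at index 6
print(despike(noisy))
```

Only the contaminated value (the 12 at index 6) is replaced by its local median; genuine seasonal variation is left untouched.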
print(ilwis.operationMetaData('timesat'))
timesat(inputgridcoverage,iterationcount,upperenvelop,fitlastiteration,extendwindow)
Note that the timesat filter might take some time to process the imported time series
#execute timesat, note the settings applied
rcF=ilwis.do("timesat",(rcTS), 4, True, True, True)
#If data has already been filtered uncomment line below to load the filtered time series data
#rcF = ilwis.RasterCoverage('ndv_filtered.tif')
After processing has been completed let's check the values for the same pixel, but now the filtered results
#get info on specific pixel for all temporal intervals for the filtered pixel
l = 50
c = 50
timesat_filtered = []
for n in range(0,(rcF.size().zsize)):
    ndvi_val = (rcF.pix2value(ilwis.Pixel(c,l,n)))
    timesat_filtered.append(ndvi_val)
print("Values extracted for filtered time series:", timesat_filtered)
Values extracted for filtered time series: [190.0, 180.0, 175.0, 166.0, 163.0, 168.0, 176.0, 185.0, 188.0, 186.0, 186.0, 187.0, 188.0, 190.0, 193.0, 196.0, 202.0, 212.0, 216.0, 215.0, 201.0, 187.0, 170.0, 162.0, 160.0, 158.0, 154.0, 147.0, 148.0, 151.0, 164.0, 174.0, 183.0, 187.0, 191.0, 195.0, 192.0, 194.0, 198.0, 198.0, 198.0, 196.0, 194.0, 188.0, 183.0, 176.0, 166.0, 150.0, 138.0, 118.0, 108.0, 103.0, 101.0, 107.0, 134.0, 171.0, 199.0, 208.0, 207.0, 203.0, 197.0, 195.0, 197.0, 200.0, 204.0, 204.0, 200.0, 188.0, 169.0, 153.0, 143.0, 138.0, 133.0, 129.0, 124.0, 120.0, 122.0, 129.0, 141.0, 159.0, 175.0, 191.0, 204.0, 217.0, 225.0, 227.0, 227.0, 224.0, 220.0, 207.0, 186.0, 173.0, 155.0, 143.0, 132.0, 129.0, 131.0, 137.0, 145.0, 154.0, 163.0, 172.0, 181.0, 189.0, 201.0, 210.0, 215.0, 213.0, 205.0, 195.0, 184.0, 177.0, 177.0, 179.0, 178.0, 172.0, 166.0, 159.0, 154.0, 151.0, 148.0, 144.0, 146.0, 149.0, 155.0, 164.0, 174.0, 184.0, 193.0, 198.0, 194.0, 195.0, 195.0, 203.0, 207.0, 206.0, 194.0, 174.0, 153.0, 140.0, 139.0, 142.0, 151.0, 156.0, 158.0, 162.0, 165.0, 170.0, 172.0, 175.0, 174.0, 177.0, 181.0, 185.0, 190.0, 195.0, 199.0, 200.0, 198.0, 191.0, 178.0, 171.0, 166.0, 163.0, 164.0, 164.0, 166.0, 170.0, 175.0, 182.0, 188.0, 196.0, 200.0, 201.0, 203.0, 202.0, 202.0, 202.0, 201.0, 205.0, 205.0, 208.0, 204.0, 192.0, 181.0, 169.0, 166.0, 164.0, 165.0, 167.0, 172.0, 183.0, 194.0, 205.0, 206.0, 202.0, 197.0, 194.0, 201.0, 206.0, 214.0, 215.0, 214.0, 209.0, 198.0, 187.0, 176.0, 163.0, 153.0, 140.0, 137.0, 143.0, 155.0, 164.0, 170.0, 167.0, 162.0, 161.0, 168.0, 179.0, 189.0, 197.0, 199.0, 202.0, 204.0, 205.0, 202.0, 197.0, 189.0, 179.0, 167.0, 158.0, 150.0, 147.0, 146.0, 150.0, 159.0, 174.0, 188.0, 194.0, 197.0, 194.0, 192.0, 188.0, 187.0, 191.0, 196.0, 203.0, 205.0, 202.0, 193.0, 184.0, 179.0, 169.0, 164.0, 161.0, 162.0, 164.0, 169.0, 173.0, 175.0, 179.0, 181.0, 182.0, 180.0, 176.0, 175.0, 181.0, 192.0, 201.0, 212.0, 215.0, 214.0, 207.0, 197.0, 188.0, 178.0, 168.0, 158.0, 
152.0, 149.0, 150.0, 156.0, 167.0, 177.0, 184.0, 187.0, 186.0, 186.0, 188.0, 192.0, 202.0, 210.0, 215.0, 213.0, 206.0, 200.0, 192.0, 184.0, 174.0, 164.0, 154.0, 146.0, 140.0, 136.0, 135.0, 136.0, 142.0, 151.0, 167.0, 180.0, 195.0, 195.0, 198.0, 197.0, 191.0, 191.0, 193.0, 196.0, 196.0, 192.0, 184.0, 174.0, 164.0, 155.0, 147.0, 147.0, 162.0, 170.0, 178.0, 177.0, 170.0, 159.0, 144.0, 141.0, 140.0, 138.0, 149.0, 157.0, 171.0, 172.0, 169.0, 168.0, 166.0, 165.0, 164.0, 165.0, 168.0, 170.0, 167.0, 166.0, 167.0, 176.0, 184.0, 191.0, 196.0, 198.0, 200.0, 202.0, 204.0, 206.0, 208.0, 210.0, 214.0, 215.0, 211.0, 195.0, 181.0, 167.0, 165.0, 165.0, 167.0, 172.0, 175.0, 182.0, 188.0, 187.0, 187.0, 187.0, 195.0, 202.0, 209.0, 214.0, 215.0, 217.0, 217.0, 216.0, 213.0, 208.0, 199.0, 189.0, 175.0, 163.0, 152.0, 150.0, 150.0, 158.0, 177.0, 197.0, 220.0, 221.0, 219.0, 210.0, 210.0, 212.0, 216.0, 221.0, 224.0, 224.0, 221.0, 217.0, 212.0, 206.0, 198.0, 190.0, 183.0, 181.0, 180.0, 181.0, 185.0, 191.0, 198.0, 203.0, 204.0, 203.0, 202.0, 202.0, 207.0, 213.0, 218.0, 222.0, 224.0, 225.0, 225.0, 225.0, 224.0, 220.0, 213.0, 204.0, 200.0, 199.0, 199.0, 201.0, 205.0, 213.0, 222.0, 226.0, 225.0, 214.0, 206.0, 202.0, 203.0, 207.0, 211.0, 215.0, 221.0, 225.0, 226.0, 224.0, 217.0, 210.0, 202.0, 195.0, 193.0, 195.0, 199.0, 203.0, 205.0, 208.0, 211.0, 213.0, 213.0, 205.0, 201.0, 200.0, 201.0, 200.0, 206.0, 209.0, 217.0, 217.0, 219.0, 202.0]
Create a plot showing the original and filtered data. First create the date stamps for the x-axis
# initializing date range for X-Axis
x=16
date_generated = []
for year in range(2000, 2021):
    start = datetime.strptime("01-01-" + str(year), "%d-%m-%Y")
    end = datetime.strptime("01-01-" + str(year+1), "%d-%m-%Y")
    date_generated = date_generated + [start + i * timedelta(days=x) for i in range(0, 1 + int((end-start).days / x))]
print(len(date_generated))
483
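The loop above restarts the 16-day sequence at January 1 of each year, matching the MODIS compositing scheme (a plain `pd.date_range` with `freq='16D'` would drift across year boundaries). Since every year yields exactly 23 composites, a compact equivalent for the same 2000-2020 range is:

```python
from datetime import datetime, timedelta

# 23 composites per year (day-of-year 1, 17, 33, ... 353), 21 years
dates = [datetime(year, 1, 1) + timedelta(days=16 * i)
         for year in range(2000, 2021)
         for i in range(23)]
print(len(dates))                      # 483 time stamps
```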
# create a simple plot showing the differences as a function of the temporal filter, using the settings applied
%matplotlib inline
plt.figure(figsize=(15,10))
plt.plot(date_generated, non_filtered, label='original time series')
plt.plot(date_generated, timesat_filtered, label='filtered time series')
plt.ylim([0, 275])
plt.xlabel('Time Series Range: 2000 - 2021')
plt.ylabel('Digital Number')
plt.title('NDVI temporal profile for selected pixel at line: '+str(l) +' - column: '+ str(c))
plt.legend()
<matplotlib.legend.Legend at 0x236078f19d0>
# uncomment the line below to store the filtered results in GeoTIFF format
#rcF.store("ndv_filtered.tif", "GTiff", "gdal")
To get an idea whether a certain observation is in line with the average prevailing conditions you can compare the observation with the longer term average, or climatology. So now that we have a filtered time series, with the noise removed, we can calculate per time step the average over the years of observation. The time series used here is 21 years and has 23 time steps per year, so the result should be a map stack of 23 layers containing the average (over the years under consideration) for each specific time step.
In the cell below a definition is created which subsequently is executed in the cell further below. The time series is assumed to have the same number of time steps for each year under consideration. Here calendar years 2001 - 2021 are used.
Retrieve the layers to calculate the short term climatology: 20 years of data = 20 years * 23 layers = 460 layers, here starting from the second year, covering the next 20 years
short_clim = []
short_clim = ilwis.do('selection',rcF,'rasterbands(23..483)')#select appropriate time range for climatology
Check map stack dimensions
print(short_clim.size().xsize)
print(short_clim.size().ysize)
print(short_clim.size().zsize)
199
109
460
In the cell below a definition is created to calculate the climatology. Review the inline comments given below and execute the definition
def computeClim(rc,start,end):
    climatology = [] #new variable
    ix_start = start - 1 # ix_start is the "start" temporal interval (e.g. January 1)
    #print(start)
    while ix_start < end: #procedure will continue until the last temporal interval
        ix = ix_start # ix is now January 1 of year 1
        #print(ix)
        selection = []
        while ix < nr_maps: # generate list of indices
            selection.append(str(ix))
            ix = ix + nr_time_steps # ix is now January 1 of the next year
        print(','.join(selection))
        rcSel = ilwis.do('selection', rc, 'rasterbands(' + ','.join(selection) + ')') #select appropriate layers per time step
        rcAvg = ilwis.do('aggregaterasterstatistics', rcSel, 'mean') #calculate the mean of all the layers per time step
        climatology.append(rcAvg) #add to already calculated layers
        ix_start = ix_start + 1 # jump to next temporal interval (e.g. January 16)
    return climatology
#execute the definition 'computeClim', note the setting of the parameters that are being provided directly below
filtered_cor = short_clim # input filtered time series, note here full calendar years are provided
nr_maps = (filtered_cor.size().zsize) # length of the time series, here 20 years x 23 observations per year = 460
nr_time_steps = 23
start = 1
#run definition:
climatology = computeClim(filtered_cor,start,nr_time_steps) #output climatology = definition(input, start time-step, number of time-steps)
0,23,46,69,92,115,138,161,184,207,230,253,276,299,322,345,368,391,414,437
1,24,47,70,93,116,139,162,185,208,231,254,277,300,323,346,369,392,415,438
2,25,48,71,94,117,140,163,186,209,232,255,278,301,324,347,370,393,416,439
3,26,49,72,95,118,141,164,187,210,233,256,279,302,325,348,371,394,417,440
4,27,50,73,96,119,142,165,188,211,234,257,280,303,326,349,372,395,418,441
5,28,51,74,97,120,143,166,189,212,235,258,281,304,327,350,373,396,419,442
6,29,52,75,98,121,144,167,190,213,236,259,282,305,328,351,374,397,420,443
7,30,53,76,99,122,145,168,191,214,237,260,283,306,329,352,375,398,421,444
8,31,54,77,100,123,146,169,192,215,238,261,284,307,330,353,376,399,422,445
9,32,55,78,101,124,147,170,193,216,239,262,285,308,331,354,377,400,423,446
10,33,56,79,102,125,148,171,194,217,240,263,286,309,332,355,378,401,424,447
11,34,57,80,103,126,149,172,195,218,241,264,287,310,333,356,379,402,425,448
12,35,58,81,104,127,150,173,196,219,242,265,288,311,334,357,380,403,426,449
13,36,59,82,105,128,151,174,197,220,243,266,289,312,335,358,381,404,427,450
14,37,60,83,106,129,152,175,198,221,244,267,290,313,336,359,382,405,428,451
15,38,61,84,107,130,153,176,199,222,245,268,291,314,337,360,383,406,429,452
16,39,62,85,108,131,154,177,200,223,246,269,292,315,338,361,384,407,430,453
17,40,63,86,109,132,155,178,201,224,247,270,293,316,339,362,385,408,431,454
18,41,64,87,110,133,156,179,202,225,248,271,294,317,340,363,386,409,432,455
19,42,65,88,111,134,157,180,203,226,249,272,295,318,341,364,387,410,433,456
20,43,66,89,112,135,158,181,204,227,250,273,296,319,342,365,388,411,434,457
21,44,67,90,113,136,159,182,205,228,251,274,297,320,343,366,389,412,435,458
22,45,68,91,114,137,160,183,206,229,252,275,298,321,344,367,390,413,436,459
Get the map stack details on the climatology computed
print(climatology[0].size()) #extract dimensions for the first layer
print(len(climatology)) #length of map stack
Size(199, 109, 1)
23
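Once the filtered stack is available as a NumPy array, the same climatology can be expressed compactly: reshape to (years, time steps, lines, columns) and average over the year axis. A sketch with a small synthetic cube (on the real data the 1-D array would come from `np.fromiter(iter(short_clim), np.float64, short_clim.size().linearSize())` with the real dimensions):

```python
import numpy as np

years, steps, ny, nx = 20, 23, 4, 5          # 20 full years of 23 composites
data = np.arange(years * steps * ny * nx, dtype=float)
cube = data.reshape(years, steps, ny, nx)

climatology_np = cube.mean(axis=0)           # mean per time step over the years
print(climatology_np.shape)                  # one mean map per time step
```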
Select climatology for a given location for all time steps
#get info on specific pixel for all temporal intervals for the filtered pixel
l = 50
c = 50
clim_sel = []
for n in range(0,len(climatology)):
    clim_val = (climatology[n].pix2value(ilwis.Pixel(c,l)))
    clim_sel.append(clim_val)
print("Values extracted for filtered time series:", clim_sel)
Values extracted for filtered time series: [170.89473684210526, 162.52631578947367, 156.68421052631578, 153.42105263157896, 152.8421052631579, 155.57894736842104, 160.0, 167.6315789473684, 175.57894736842104, 181.89473684210526, 186.68421052631578, 188.68421052631578, 191.52631578947367, 193.8421052631579, 197.26315789473685, 200.78947368421052, 203.52631578947367, 205.89473684210526, 206.78947368421052, 205.26315789473685, 201.31578947368422, 193.68421052631578, 184.21052631578948]
Extract the year 2013 (below average) to compare with the climatology for the selected location: 13 years x 23 time steps per year = 299; indexing / counting starts from 0, so the first time step = 298 and the last time step is 320, and for the parameter 'rasterbands(b_start..b_end)' the index numbers are 'rasterbands(298..320)'
year_poor = []
year_poor = ilwis.do('selection',rcF,'rasterbands(298..320)') #select appropriate layer range for specific year
l = 50
c = 50
year_below = []
for n in range(0,(year_poor.size().zsize)):
    year_val = (year_poor.pix2value(ilwis.Pixel(c,l,n)))
    year_below.append(year_val)
print("Values extracted for selected year filtered time series:", year_below)
Values extracted for selected year filtered time series: [184.0, 174.0, 164.0, 154.0, 146.0, 140.0, 136.0, 135.0, 136.0, 142.0, 151.0, 167.0, 180.0, 195.0, 195.0, 198.0, 197.0, 191.0, 191.0, 193.0, 196.0, 196.0, 192.0]
Extract the year 2018 (above average) to compare with the climatology for the selected location: 18 years x 23 time steps per year = 414; indexing / counting starts from 0, so the first time step = 413 and the last time step is 435 (23 layers, matching the climatology), and for the parameter 'rasterbands(b_start..b_end)' the index numbers are 'rasterbands(413..435)'
year_good = []
year_good = ilwis.do('selection',rcF,'rasterbands(413..435)') #select appropriate layer range for specific year
l = 50
c = 50
year_above = []
for n in range(0,(year_good.size().zsize)):
    year_val = (year_good.pix2value(ilwis.Pixel(c,l,n)))
    year_above.append(year_val)
print("Values extracted for selected year filtered time series:", year_above)
Values extracted for selected year filtered time series: [198.0, 190.0, 183.0, 181.0, 180.0, 181.0, 185.0, 191.0, 198.0, 203.0, 204.0, 203.0, 202.0, 202.0, 207.0, 213.0, 218.0, 222.0, 224.0, 225.0, 225.0, 225.0, 224.0]
# create a simple plot showing the differences between selected year and climatology
%matplotlib inline
plt.figure(figsize=(10,6))
plt.plot(clim_sel, label='NDVI climatology', color = 'b', linestyle = '-', dashes=[6, 2])
plt.plot(year_above, label='NDVI above', color = 'g')
plt.plot(year_below, label='NDVI below', color = 'r')
plt.ylim([120, 240])
plt.xlabel('For selected Year - Time step')
plt.xticks([0, 2, 4 , 6, 8, 10, 12, 14, 16, 18, 20, 22])
plt.ylabel('Digital Number')
plt.title('NDVI comparison with climatology for selected pixel at line: '+str(l) +' - column: '+ str(c))
plt.legend()
<matplotlib.legend.Legend at 0x236079948e0>
Convert the climatology, as a list, to an ILWIS map list. Define the number of layers and read the full list as a 1D array. Prepare the ILWIS destination RasterCoverage (x, y and z dimensions), reading the georeference and data type from the known source. Finally create a 3D array. Check the output printed.
#convert from list to ILWIS RasterCoverage - Maplist
data = np.fromiter(iter(climatology[0]), np.float64, climatology[0].size().linearSize())
for i in range(1, len(climatology)):
    data = np.concatenate((data, np.fromiter(iter(climatology[i]), np.float64, climatology[i].size().linearSize())))
    print(len(data))
climatology_maplist = ilwis.RasterCoverage()
climatology_maplist.setSize(ilwis.Size(climatology[0].size().xsize, climatology[0].size().ysize, len(climatology)))
print(len(climatology))
climatology_maplist.setGeoReference(climatology[0].geoReference())
climatology_maplist.setDataDef(climatology[0].datadef())
climatology_maplist.array2raster(data)
print(climatology_maplist.size())
43382
65073
86764
108455
130146
151837
173528
195219
216910
238601
260292
281983
303674
325365
347056
368747
390438
412129
433820
455511
477202
498893
23
Size(199, 109, 23)
Store the results obtained as an ILWIS MapList and visualize the climatology map stack created using the animation functions in ILWIS 386. Use as Representation 'anom.rpr'
climatology_maplist.store('clim20y.mpl')
Anomalies are the deviation between the NDVI climatology (here the short term average) calculated and the actual values observed for the same time step for a given year. The year 2013 is used.
clim_in = climatology_maplist
print(clim_in.size())
time_step_in = ilwis.do('selection',rcF,'rasterbands(298..320)')
print(time_step_in.size())
#calculate the anomalies per time step for a whole calendar year
anom_2013 = ilwis.do('mapcalc',' @1-@2', time_step_in, clim_in)
Size(199, 109, 23)
Size(199, 109, 23)
l = 50
c = 50
year_anom = []
for n in range(0,(anom_2013.size().zsize)):
    anom_val = (anom_2013.pix2value(ilwis.Pixel(c,l,n)))
    year_anom.append(anom_val)
print("Values extracted for selected year anomaly time series:", year_anom)
Values extracted for selected year anomaly time series: [13.10526315789474, 11.47368421052633, 7.3157894736842195, 0.5789473684210407, -6.84210526315789, -15.57894736842104, -24.0, -32.63157894736841, -39.57894736842104, -39.89473684210526, -35.68421052631578, -21.68421052631578, -11.52631578947367, 1.1578947368421098, -2.2631578947368496, -2.7894736842105203, -6.526315789473671, -14.89473684210526, -15.78947368421052, -12.26315789473685, -5.3157894736842195, 2.3157894736842195, 7.78947368421052]
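Absolute anomalies depend on the local NDVI level; dividing by the per-time-step standard deviation over the years gives a standardized anomaly (z-score) that is easier to compare across pixels. A NumPy sketch of the idea (the values below are illustrative assumptions, not taken from the map list operation above):

```python
import numpy as np

obs  = np.array([184.0, 174.0, 164.0, 154.0])   # observed values for some time steps
clim = np.array([170.9, 162.5, 156.7, 153.4])   # climatological means, same steps
std  = np.array([12.0, 11.0, 10.5, 9.8])        # assumed per-step std over the years

z = (obs - clim) / std                          # standardized anomaly per time step
print(np.round(z, 2))
```

Values of |z| above roughly 1.5-2 would then flag time steps that deviate strongly from the prevailing conditions.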
Store your results and create an animation using ILWIS 386 to get an impression of the spatial distribution of the anomalies obtained.
anom_2013.store('anomaly_2013.mpl')
# create a simple plot showing the differences between selected year and climatology
%matplotlib inline
plt.figure(figsize=(10,6))
plt.plot(year_anom, label='NDVI Anomaly')
plt.ylim([-50, 50])
plt.axhline(y = 0.0, color = 'r', linestyle = '-', dashes=[6, 2])
plt.xlabel('For selected Year - Time step')
plt.xticks([0, 2, 4 , 6, 8, 10, 12, 14, 16, 18, 20, 22])
plt.ylabel('Anomaly')
plt.title('NDVI anomaly comparison (actual - climatology) for selected pixel at line: '+str(l) +' - column: '+ str(c))
plt.legend()
<matplotlib.legend.Legend at 0x23607a01f10>
This notebook ends with an example of the use of multi-temporal satellite image assessment for change detection. As example the Grand Ethiopian Renaissance Dam (see: https://en.wikipedia.org/wiki/Grand_Ethiopian_Renaissance_Dam) is used. To assess the changes, images recorded by Sentinel-1 are used. For information on the data review: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S1_GRD. The data, retrieved from Google Earth Engine, was pre-processed. Some of the details:
S1_VH_202001: composite created over time range from '2020-01-01' to '2020-01-31' - pre fill condition
S1_VH_202009: composite created over time range from '2020-09-01' to '2020-09-30' - fill during rainy season 2020
S1_VH_202108: composite created over time range from '2021-08-15' to '2021-09-15' - fill during rainy season 2021
The three images are already provided in ILWIS file format; their data type is value and the backscatter unit is dB
#import raster images provided
rc1 = ilwis.RasterCoverage('S1_VH_202001.mpr')
rc2 = ilwis.RasterCoverage('S1_VH_202009.mpr')
rc3 = ilwis.RasterCoverage('S1_VH_202108.mpr')
# get information on dimensions
print(rc1.size().xsize)
print(rc1.size().ysize)
print(rc1.size().zsize)
# check the georeference of the raster
print (rc3.geoReference().name())
2088
1474
1
S1_VH.grf
Get yourself a bit familiar with the image statistics; note the unit is dB. Check the minimum and maximum values of each of the images
#calculate the histogram of the 3 images provided
rc1_stat = rc1.statistics(ilwis.PropertySets.pMIN|ilwis.PropertySets.pMAX|ilwis.PropertySets.pHISTOGRAM, 256)
rc2_stat = rc2.statistics(ilwis.PropertySets.pMIN|ilwis.PropertySets.pMAX|ilwis.PropertySets.pHISTOGRAM, 256)
rc3_stat = rc3.statistics(ilwis.PropertySets.pMIN|ilwis.PropertySets.pMAX|ilwis.PropertySets.pHISTOGRAM, 256)
#print Min and Max values
print(rc1_stat.prop(ilwis.PropertySets.pMIN), rc1_stat.prop(ilwis.PropertySets.pMAX))
print(rc2_stat.prop(ilwis.PropertySets.pMIN), rc2_stat.prop(ilwis.PropertySets.pMAX))
print(rc3_stat.prop(ilwis.PropertySets.pMIN), rc3_stat.prop(ilwis.PropertySets.pMAX))
-33.784130529471945 2.1101234774166415
-34.136830256058424 8.025905624030916
-34.681657589417945 3.540328803674682
#prepare the histogram plot
x1=[a for (a,b) in rc1_stat.histogram()][:-1]
y1=[b for (a,b) in rc1_stat.histogram()][:-1]
x2=[a for (a,b) in rc2_stat.histogram()][:-1]
y2=[b for (a,b) in rc2_stat.histogram()][:-1]
x3=[a for (a,b) in rc3_stat.histogram()][:-1]
y3=[b for (a,b) in rc3_stat.histogram()][:-1]
From the histogram plot provided below, give some special attention to the negative value range: which values represent water, and can you think of a value providing the upper water limit?
#plot the histogram of the three images
#plot inline
%matplotlib inline
fig1 = plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
plt.plot(x1,y1,label='202001')
plt.plot(x2,y2,label='202009')
plt.plot(x3,y3,label='202108')
plt.xlabel('Data Range')
plt.ylabel('Frequency')
plt.title('Histogram')
plt.legend()
plt.title('full range')
plt.subplot(1, 2, 2)
plt.plot(x1,y1,label='202001')
plt.plot(x2,y2,label='202009')
plt.plot(x3,y3,label='202108')
plt.xlabel('Data Range')
plt.ylabel('Frequency')
plt.title('Histogram')
plt.ylim([0,7500])
plt.xlim([-40,-20])
plt.legend()
plt.title('selected range over water')
Text(0.5, 1.0, 'selected range over water')
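Instead of reading the threshold off the histogram by eye, an automatic threshold such as Otsu's method can be computed from the same histogram counts. A minimal NumPy implementation of Otsu's between-class-variance criterion (a sketch with a synthetic bimodal histogram; the thresholds applied in the cell below were picked manually from the plot):

```python
import numpy as np

def otsu_threshold(bin_centres, counts):
    """Return the bin centre that maximises the between-class variance."""
    counts = np.asarray(counts, dtype=float)
    centres = np.asarray(bin_centres, dtype=float)
    total = counts.sum()
    w0 = np.cumsum(counts) / total                     # class-0 weight per split
    w1 = 1.0 - w0
    cum_mean = np.cumsum(counts * centres) / total
    mu_total = cum_mean[-1]
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_total * w0[valid] - cum_mean[valid]) ** 2 / (w0[valid] * w1[valid])
    return centres[np.argmax(between)]

# Toy bimodal histogram: a 'water' mode near -28 dB and a 'land' mode near -12 dB
centres = np.linspace(-35, -5, 61)
counts = 500 * np.exp(-((centres + 28) / 2) ** 2) + 2000 * np.exp(-((centres + 12) / 2) ** 2)
t = otsu_threshold(centres, counts)
print(t)
```

On the real data the `rc1_stat.histogram()` bin centres and counts could be fed in directly; the resulting threshold should fall in the valley between the water and land modes.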
# extract water, eventually modify upper limit threshold
#for 202001
rc1w = ilwis.do('mapcalc','iff(@1 < -26, 1, 0)', rc1)
rc1w = ilwis.do('setvaluerange', rc1w, 0, 1, 0.01)
#for 202009
rc2w = ilwis.do('mapcalc','iff(@1 < -23.5, 1, 0)', rc2)
rc2w = ilwis.do('setvaluerange', rc2w, 0, 1, 0.01)
#for 202108
rc3w = ilwis.do('mapcalc','iff(@1 < -23, 1, 0)', rc3)
rc3w = ilwis.do('setvaluerange', rc3w, 0, 1, 0.01)
rc4w = ilwis.do('mapcalc','@1+@2+@3', rc1w, rc2w, rc3w)
rc4w = ilwis.do('setvaluerange', rc4w, 0, 3, 1)
rc4w.store('water_change.mpr')
#rescale to display colour composite
rows = rc1.size().ysize
cols = rc1.size().xsize
rc1n = np.fromiter(iter(rc1), np.float64, rc1.size().linearSize())
rc1n = rc1n.reshape(rows,cols)
rc2n = np.fromiter(iter(rc2), np.float64, rc2.size().linearSize())
rc2n = rc2n.reshape(rows,cols)
rc3n = np.fromiter(iter(rc3), np.float64, rc3.size().linearSize())
rc3n = rc3n.reshape(rows,cols)
#rescale to display classified water map
w_cl = np.fromiter(iter(rc4w), np.uint8, rc1.size().linearSize())
w_cl = w_cl.reshape(rows,cols)
%matplotlib inline
plt.figure(figsize=(14,8))
plt.subplots_adjust(left=0.125,
                    bottom=0.1,
                    right=0.9,
                    top=0.9,
                    wspace=0.3,
                    hspace=0.3)
plt.subplot(2,2,1)
plt.imshow(rc1n, cmap= 'gray')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.title('Raster Coverage 1 = VH_Jan2020')
plt.grid(False)
plt.subplot(2,2,2)
plt.imshow(rc2n, cmap= 'gray')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.title('Raster Coverage 2 = VH_Sep2020')
plt.grid(False)
plt.subplot(2,2,3)
plt.imshow(rc3n, cmap= 'gray')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.title('Raster Coverage 3 = VH_Sep2021')
plt.grid(False)
plt.subplot(2,2,4)
plt.imshow(w_cl, cmap= 'jet', vmin=0, vmax=3)
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.title('classified = water changes')
Text(0.5, 1.0, 'classified = water changes')
#apply simple scaling on original images
rc1b = ilwis.do('mapcalc','(@1 + 40)*5', rc1)
red = ilwis.do('setvaluerange', rc1b, 0, 255, 1)
rc2b = ilwis.do('mapcalc','(@1 + 40)*5', rc2)
green = ilwis.do('setvaluerange', rc2b, 0, 255, 1)
rc3b = ilwis.do('mapcalc','(@1 + 40)*5', rc3)
blue = ilwis.do('setvaluerange', rc3b, 0, 255, 1)
redfc = np.fromiter(iter(red), np.ubyte, rc1.size().linearSize())
redfc = redfc.reshape(rows,cols)
greenfc = np.fromiter(iter(green), np.ubyte, rc2.size().linearSize())
greenfc = greenfc.reshape(rows,cols)
bluefc = np.fromiter(iter(blue), np.ubyte, rc3.size().linearSize())
bluefc = bluefc.reshape(rows,cols)
fc_reservoir = np.dstack((redfc, greenfc, bluefc))
#plot interactive colour composite
%matplotlib notebook
plt.figure(figsize=(8,6))
plt.imshow(fc_reservoir)
<matplotlib.image.AxesImage at 0x236092ac7f0>
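The linear `(@1 + 40)*5` stretch used above assumes a fixed -40 to +11 dB range; a percentile stretch is often more robust to outliers. A NumPy sketch of the alternative, applied to an in-memory array rather than through mapcalc (the percentile cut-offs are illustrative assumptions):

```python
import numpy as np

def percentile_stretch(band, lo=2, hi=98):
    """Rescale a backscatter band to 0-255 between its lo/hi percentiles."""
    p_lo, p_hi = np.percentile(band, [lo, hi])
    scaled = (band - p_lo) / (p_hi - p_lo) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Synthetic dB values with outliers; on the real data use rc1n, rc2n, rc3n
band = np.array([-34.0, -28.0, -20.0, -15.0, -10.0, 8.0])
print(percentile_stretch(band))
```

Stretching each band this way before `np.dstack` would give a composite whose contrast adapts to the actual data distribution instead of fixed limits.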
Time series analysis requires calculations to be performed on map lists / map stacks. To demonstrate the map list calculation functionality an aridity index (AI) is going to be calculated, a numerical indicator of the degree of dryness of the climate at a given location. Here the AI as proposed by De Martonne is used. The 'De Martonne' Aridity Index (IndexDM) defines aridity as the ratio of precipitation to mean temperature, IndexDM = P / (T + 10), where P is the total annual precipitation (in mm) and T the mean annual air temperature (in degrees Celsius), and classifies the result into different climate types.
Climatologies of the two parameters are provided, first load these as Raster Coverages
Tclim = ilwis.RasterCoverage('at_avg_climatology.mpl')
Pclim = ilwis.RasterCoverage('pcp_climatology.mpl')
To calculate the De Martonne Index (DM), first the sum of the precipitation has to be obtained as well as the average temperature
Psum = ilwis.do('aggregaterasterstatistics', Pclim, 'sum')
Tavg = ilwis.do('aggregaterasterstatistics', Tclim, 'mean')
Visualize the input data
# create a plot of the numpy array using MatPlotLib - Imshow
Psum_2np = np.fromiter(iter(Psum), np.float64, Psum.size().linearSize())
Psum_2np = Psum_2np.reshape((Psum.size().ysize, Psum.size().xsize))
Tavg_2np = np.fromiter(iter(Tavg), np.float64, Tavg.size().linearSize())
Tavg_2np = Tavg_2np.reshape((Tavg.size().ysize, Tavg.size().xsize))
#plot inline
%matplotlib inline
fig1 = plt.figure(figsize=(15, 10))
plt.subplot(1, 2, 1)
plt.imshow(Psum_2np, vmin=1, vmax=2500, cmap='jet')
plt.colorbar(fraction=0.025)
plt.title('Precipitation')
plt.subplot(1, 2, 2)
plt.imshow(Tavg_2np,vmin=1, vmax=35, cmap='jet')
plt.title('Temperature')
plt.colorbar(fraction=0.025)
<matplotlib.colorbar.Colorbar at 0x2361fe5a190>
Now we still need to classify the data according to the Aridity Index as proposed by De Martonne, classes are given below.
| Climate type | Aridity Index | code |
|---|---|---|
| Arid | 0 - 10 | 1 |
| Semi-arid | 10 - 20 | 2 |
| Mediterranean | 20 - 24 | 3 |
| Semi-humid | 24 - 28 | 4 |
| Humid | 28 - 35 | 5 |
| Very humid | 35 - 55 | 6 |
| Extremely humid | > 55 | 7 |
Start the De Martonne Index calculation using the statistics from the time series obtained.
dm_index = ilwis.do('mapcalc','@1/(@2+10)', Psum, Tavg)
#dm_index.store('dmindex.mpr')
To classify the index we are going to create a domain and slice the 'continuous' data range of 'dm_index' into a number of discrete classes, specified according to the table above.
#note the slicing operation parameters required
print(ilwis.operationMetaData('sliceraster'))
sliceraster(inputgridcoverage, IntervalDomain)
The operation metadata shows that a domain (specifying the class names and data ranges) is required to slice the full data range. In the cell below an interval domain is created according to the class assignment proposed by De Martonne.
# create a new NumericItemRange and fill it with required class items
dmrange = ilwis.NumericItemRange()
dmrange.add(('Arid', 0.0, 10.0, 1.0))
dmrange.add(('Semi-arid', 10.0, 20.0, 1.0))
dmrange.add(('Mediterranean', 20.0, 24.0, 1.0))
dmrange.add(('Semi-humid', 24.0, 28.0, 1.0))
dmrange.add(('Humid', 28.0, 35.0, 1.0))
dmrange.add(('Very humid', 35.0, 55.0, 1.0))
dmrange.add(('Extremely humid', 55.0, 500.0, 1.0))
#assign the range to a new ItemDomain
dm_dom = ilwis.ItemDomain(dmrange)
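For comparison, a NumPy analogue of this slicing operation (a sketch, not the ILWISPy implementation) can be built with `np.digitize`, using the class boundaries from the De Martonne table above and a few hypothetical index values:

```python
import numpy as np

# Upper class boundaries from the De Martonne table above
bounds = [10.0, 20.0, 24.0, 28.0, 35.0, 55.0]

# Hypothetical De Martonne index values for a few pixels
dm = np.array([3.4, 15.0, 26.0, 60.0])

# np.digitize returns bin indices 0..6; add 1 to match the code column
# (1 = Arid ... 7 = Extremely humid)
codes = np.digitize(dm, bounds) + 1
print(codes)  # [1 2 4 7]
```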
Conduct the slicing operation; to check the results obtained in ILWIS 386, uncomment the last line in the cell below.
dm_indexcl = ilwis.do('sliceraster', dm_index, dm_dom)
#dm_indexcl.store('dm_indexcl.mpr')
The same can also be done using the map calculation operator; note the equation provided below and execute the cell.
#apply the De Martonne class intervals using the code column from the Aridity Index table above
#note here a backslash is used for readability of the equation
dm_indexcl_mc =ilwis.do('mapcalc','iff((@1>=0)and(@1<10),1,iff((@1>=10)and(@1<20),2, \
iff((@1>=20)and(@1<24),3,iff((@1>=24)and(@1<28),4,iff((@1>=28)and(@1<35),5,iff((@1>=35)and(@1<55),6, \
iff(@1>=55,7,?)))))))', dm_index)
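The nested `iff()` expression above translates directly to `np.select` in NumPy. The sketch below mirrors the same class logic on a few hypothetical index values, with `np.nan` playing the role of the undefined value `?`:

```python
import numpy as np

# Hypothetical De Martonne index values, one per class interval
dm = np.array([5.0, 15.0, 22.0, 26.0, 30.0, 40.0, 80.0])

# Conditions mirror the nested iff() expression above
conditions = [
    (dm >= 0) & (dm < 10),
    (dm >= 10) & (dm < 20),
    (dm >= 20) & (dm < 24),
    (dm >= 24) & (dm < 28),
    (dm >= 28) & (dm < 35),
    (dm >= 35) & (dm < 55),
    dm >= 55,
]
codes = np.select(conditions, [1, 2, 3, 4, 5, 6, 7], default=np.nan)
print(codes)  # [1. 2. 3. 4. 5. 6. 7.]
```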
As can be observed above, there is a large variability in precipitation in time and space. An index is also available to evaluate the rainfall distribution and concentration: here the Precipitation Concentration Index (PCI) according to Michiels, Gabriels and Hartmann (1992) is applied. The higher the PCI, the more irregular and greater the precipitation variability. To estimate this variability the input required is the monthly precipitation for a given year in mm/month, so a yearly map list of monthly precipitation is required to execute the calculations. The index first determines the coefficient of variation (CV):

$$CV = \frac{\sigma}{\bar{P}} \times 100$$

where $\sigma$ is the standard deviation and $\bar{P}$ the mean of the monthly precipitation values.
Subsequently the Precipitation Concentration Index (PCI) is related to the coefficient of variation (CV) using the following equation:

$$PCI = \frac{100}{12}\left(1 + \left(\frac{CV}{100}\right)^{2}\right)$$
Finally the classification as provided in the table below is applied to characterize the PCI.
PCI Temporal Concentration | PCI Index | code |
---|---|---|
Uniform | < 10 | 1 |
Moderately concentrated | 11 - 15 | 2 |
Concentrated | 16 - 20 | 3 |
Strongly concentrated | > 20 | 4 |
Using the precipitation time series, first calculate the sum and the mean, and then conduct the other calculations required to derive the PCI. Note the combined use of time series data (containing multiple time steps) with a single-layer map.
pci_Psum = ilwis.do('aggregaterasterstatistics', Pclim, 'sum')
pci_Pmean = ilwis.do('aggregaterasterstatistics', Pclim, 'mean')
pci_s1 = ilwis.do('mapcalc', '@1-@2', Pclim, pci_Pmean)
pci_s2 = ilwis.do('mapcalc','pow(@1,2)', pci_s1) # or sq(@1), see below
pci_avg_s2 = ilwis.do('aggregaterasterstatistics', pci_s2, 'mean')
pci_s = ilwis.do('mapcalc','sqrt(@1)', pci_avg_s2)
pci_cv = ilwis.do('mapcalc','100*(@1/@2)', pci_s, pci_Pmean)
pci = ilwis.do('mapcalc','(100/12)*(1+sq(@1/100))', pci_cv)
#uncomment line below to store your intermediate result and visualize the output created using ILWIS 386
#pci.store('pci.mpr')
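The per-pixel chain above (deviation, squaring, averaging, square root, CV, PCI) can be checked for a single pixel with plain NumPy. The twelve monthly values below are the ones extracted later in this notebook for the strongly concentrated pixel:

```python
import numpy as np

# Monthly precipitation (mm) for one pixel, 12 values
p = np.array([0, 0, 0, 1, 7, 16, 84, 95, 37, 10, 0, 0], dtype=float)

# Coefficient of variation: CV = 100 * std / mean
# (population standard deviation, matching the mean-of-squared-deviations steps above)
cv = 100.0 * p.std() / p.mean()

# Precipitation Concentration Index: PCI = (100/12) * (1 + (CV/100)**2)
pci = (100.0 / 12.0) * (1.0 + (cv / 100.0) ** 2)
print(round(pci, 2))  # > 20, i.e. strongly concentrated
```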
# create a new NumericItemRange and fill it with required class items
pcirange = ilwis.NumericItemRange()
pcirange.add(('Uniform', 0.0, 10.0, 1.0))
pcirange.add(('Moderately concentrated', 10.0, 15.0, 1.0))
pcirange.add(('Concentrated', 15.0, 20.0, 1.0))
pcirange.add(('Strongly concentrated', 20.0, 500, 1.0))
#assign the range to a new ItemDomain
pci_dom = ilwis.ItemDomain(pcirange)
Conduct the slicing operation; to check the results obtained in ILWIS 386, uncomment the last line in the cell below.
pci_cl = ilwis.do('sliceraster',pci, pci_dom)
#pci_cl.store('pci_cl.mpr')
The same can also be done using the map calculation operator; note the equation provided below and execute the cell.
#apply the Precipitation Concentration Index class intervals using the code column from the PCI table above
PCIcl_mc = ilwis.do('mapcalc','iff((@1>=0)and(@1<10),1,iff((@1>=10)and(@1<15),2,iff((@1>=15)and(@1<20),3,iff(@1>=20,4,?))))', pci)
Visualize the final results obtained
# create a plot of the numpy array using MatPlotLib - Imshow
dm_indexcl_2np = np.fromiter(iter(dm_indexcl_mc), np.float64, dm_indexcl_mc.size().linearSize())
dm_indexcl_2np = dm_indexcl_2np.reshape((dm_indexcl_mc.size().ysize, dm_indexcl_mc.size().xsize))
pci_cl_2np = np.fromiter(iter(PCIcl_mc), np.float64, PCIcl_mc.size().linearSize())
pci_cl_2np = pci_cl_2np.reshape((PCIcl_mc.size().ysize, PCIcl_mc.size().xsize))
#plot inline
%matplotlib inline
fig1 = plt.figure(figsize=(15, 10))
plt.subplot(1, 2, 1)
plt.imshow(dm_indexcl_2np, vmin=0, vmax=8, interpolation='none', cmap=ListedColormap(['#FFFFFF', '#FF0000', '#FF00FF', '#FF8800', '#FFFF00', '#7CFC00', '#00562C', '#4068E1']))
plt.colorbar(fraction=0.025)
plt.title('De Martonne Aridity Index')
plt.subplot(1, 2, 2)
plt.imshow(pci_cl_2np, vmin=0, vmax=5, interpolation='none', cmap=ListedColormap(['#FFFFFF', '#0000FF', '#1E90FF', '#23B3E9', '#B2D1FF']))
plt.title('Precipitation Concentration Index')
plt.colorbar(fraction=0.025)
l0 = 50
c0 = 50
l3 = 1000
c3 = 300
PCI_class0 = pci_cl.pix2value(ilwis.Pixel(c0,l0)) # for column,row (x,y) respectively
PCI_class3 = pci_cl.pix2value(ilwis.Pixel(c3,l3)) # for column,row (x,y) respectively
PCI_Pclim0 = []
for n in range(0,(Pclim.size().zsize)):
Pclim_val = (Pclim.pix2value(ilwis.Pixel(c0,l0,n)))
PCI_Pclim0.append(Pclim_val)
PCI_Pclim3 = []
for n in range(0,(Pclim.size().zsize)):
Pclim_val = (Pclim.pix2value(ilwis.Pixel(c3,l3,n)))
PCI_Pclim3.append(Pclim_val)
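The two loops above extract a temporal profile (one value per time step) at a fixed pixel. A sketch of the same pattern on a plain NumPy `(time, rows, cols)` stack, standing in for the ILWIS raster coverage, looks like this (the stack contents are made-up values):

```python
import numpy as np

# Hypothetical stand-in for a (time, rows, cols) stack such as Pclim
stack = np.arange(12 * 4 * 5, dtype=float).reshape(12, 4, 5)

c0, l0 = 2, 1  # column (x) and row (y) of the pixel of interest

# Temporal profile at one pixel: loop over the z (time) dimension
profile = [stack[n, l0, c0] for n in range(stack.shape[0])]
print(len(profile))  # 12, one value per time step
```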
print("PCI class extracted for selected pixel:", PCI_class0)
print("Precipitation values extracted for selected pixel:", PCI_Pclim0)
print()
print("PCI class extracted for selected pixel:", PCI_class3)
print("Precipitation values extracted for selected pixel:", PCI_Pclim3)
PCI class extracted for selected pixel: 3.0
Precipitation values extracted for selected pixel: [0.0, 0.0, 0.0, 1.0, 7.0, 16.0, 84.0, 95.0, 37.0, 10.0, 0.0, 0.0]

PCI class extracted for selected pixel: 0.0
Precipitation values extracted for selected pixel: [44.0, 43.0, 158.0, 177.0, 163.0, 136.0, 157.0, 160.0, 135.0, 135.0, 124.0, 76.0]
# create a simple plot showing the differences between the classes selected
%matplotlib inline
plt.figure(figsize=(10,6))
plt.plot(PCI_Pclim0, label='Strongly Concentrated')
plt.plot(PCI_Pclim3, label='Uniform')
plt.ylim([0, 200])
plt.tick_params(labelbottom=False)
plt.xlabel('Month')
plt.ylabel('Precipitation')
plt.title('Precipitation climatology for selected pixels')
plt.legend()
The combined use of ILWISPy with other Python site-packages provides a powerful toolbox to handle and analyse time series data. A number of examples have been provided within this notebook. To make the notebook more attractive, Matplotlib has been used for visualization of selected graphics and maps. For (time series) visualization it is recommended to use the existing (animation) visualization capability offered by ILWIS 386.