pySTEPS – The nowcasting initiative¶
Pysteps is a community-driven initiative for developing and maintaining an easy-to-use, modular, free, and open-source Python framework for short-term ensemble prediction systems.
The focus is on probabilistic nowcasting of radar precipitation fields, but pysteps is designed to allow a wider range of uses.
Pysteps is actively developed on GitHub, while a more thorough description of pysteps is available in the pysteps reference publication:
Pulkkinen, S., D. Nerini, A. Perez Hortal, C. Velasco-Forero, U. Germann, A. Seed, and L. Foresti, 2019: Pysteps: an open-source Python library for probabilistic precipitation nowcasting (v1.0). Geosci. Model Dev. Discuss., doi:10.5194/gmd-2019-94, in review.
Documentation¶
The documentation is divided into three main branches, intended for different audiences.
User guide¶
This section is intended for new pysteps users. It provides an introductory overview to the pysteps package, explains how to install it and make use of the most important features.
pySTEPS reference¶
Comprehensive description of all the modules and functions available in pysteps.
pySTEPS developer guide¶
Resources and guidelines for pysteps developers and contributors.
Contents¶
User guide¶
This guide gives an introductory overview of the pySTEPS package. It explains how to install pySTEPS and make use of its most important features.
For detailed reference documentation of the modules and functions available in the package see the pySTEPS reference.
** Under development **
Installing pysteps¶
Dependencies¶
The pysteps package requires the following dependencies:
Additionally, the following packages can be installed for better computational efficiency:
Other optional dependencies include:
- cartopy or basemap (for geo-referenced visualization)
- h5py (for importing HDF5 data)
- pywavelets (for intensity-scale verification)
Anaconda install (recommended)¶
Conda is an open-source package and environment management system that runs on Windows, macOS, and Linux. Conda quickly installs, runs, and updates packages and their dependencies. It also makes it easy to create, save, load, and switch between different environments on your local computer.
Since version 1.0, pySTEPS is available in conda-forge, a community-driven package repository for Anaconda.
There are two installation alternatives using Anaconda: install pySTEPS in a pre-existing environment, or install it in a new environment.
New environment¶
In a terminal, to create a new conda environment and install pySTEPS, run:
$ conda create -n pysteps
$ source activate pysteps
This will create and activate the new Python environment. The next step is to add the conda-forge channel, where the pySTEPS package is located:
$ conda config --env --prepend channels conda-forge
Let’s set this channel as the priority one:
$ conda config --env --set channel_priority strict
The latter step is not strictly necessary, but it is recommended since the conda-forge and the default Anaconda channels are not 100% compatible.
Finally, to install pySTEPS and all its dependencies run:
$ conda install pysteps
Install from source¶
The recommended way to install pysteps from source is using pip, in line with the PEP 517 standard (https://www.python.org/dev/peps/pep-0517/). Using pip instead of running setup.py directly guarantees that all the package dependencies are properly handled during the installation process.
OSX users¶
pySTEPS uses Cython extensions that need to be compiled with multi-threading support enabled. The default Apple Clang compiler does not support OpenMP, so building with it disables multi-threading and the installation fails with the following error:
clang: error: unsupported option '-fopenmp'
error: command 'gcc' failed with exit status 1
To solve this issue, install the latest gcc version, which supports OpenMP, with Homebrew:
brew install gcc
To make sure that the installer uses Homebrew's gcc, export the following environment variables in the terminal (assuming that gcc version 8 was installed):
export CC=gcc-8
export CXX=g++-8
Then, check that Homebrew's gcc is detected:
which gcc-8
This should point to Homebrew's gcc installation.
Under certain circumstances, Homebrew does not add the symbolic links for the gcc executables under /usr/local/bin. If that is the case, specify the CC and CXX variables using the full path to the Homebrew installation. For example:
export CC=/usr/local/Cellar/gcc/8.3.0/bin/gcc-8
export CXX=/usr/local/Cellar/gcc/8.3.0/bin/g++-8
Then, you can continue with the normal installation procedure described next.
Installation¶
The latest pysteps version in the repository can be installed using pip by simply running in a terminal:
pip install git+https://github.com/pySTEPS/pysteps
Or, from a local copy of the repo (global installation):
git clone https://github.com/pySTEPS/pysteps
cd pysteps
pip install .
The above commands will install the latest version of the master branch, which is constantly under development.
The latest version can also be installed in development mode, i.e., in such a way that the project appears to be installed yet is still editable from the source tree:
pip install -e <path to local pysteps repo>
Setting up the user-defined configuration file¶
The pysteps package allows the users to customize the default settings and configuration. The configuration parameters used by default are loaded from a user-defined JSON file and then stored in the pysteps.rcparams AttrDict.
The pySTEPS configuration file (pystepsrc)¶
The configuration parameters can be accessed as attributes or as items in a dictionary. For example, the following ways of retrieving the default parameters are equivalent:
import pysteps
# Retrieve the colorscale for plots
colorscale = pysteps.rcparams['plot']['colorscale']
colorscale = pysteps.rcparams.plot.colorscale
# Retrieve the root directory of the fmi data
pysteps.rcparams['data_sources']['fmi']['root_path']
pysteps.rcparams.data_sources.fmi.root_path
A less wordy alternative:
from pysteps import rcparams
colorscale = rcparams['plot']['colorscale']
colorscale = rcparams.plot.colorscale
fmi_root_path = rcparams['data_sources']['fmi']['root_path']
fmi_root_path = rcparams.data_sources.fmi.root_path
When the pysteps package is imported, it looks for a pystepsrc file in the following order:
- $PWD/pystepsrc : Looks for the file in the current directory
- $PYSTEPSRC : If the system variable $PYSTEPSRC is defined and it points to a file, it is used.
- $PYSTEPSRC/pystepsrc : If $PYSTEPSRC points to a directory, it looks for the pystepsrc file inside that directory.
- $HOME/.pysteps/pystepsrc (unix and Mac OS X) : If the system variable $HOME is defined, it looks for the configuration file in this path.
- $USERPROFILE/pysteps/pystepsrc (windows only): It looks for the configuration file in the pysteps directory located in the user's home directory.
- Lastly, it looks inside the library in pysteps/pystepsrc for a system-defined copy.
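The lookup order above can be sketched as follows (a simplified illustration, not the exact pysteps code):

```python
import os

def find_pystepsrc(default_path="pysteps/pystepsrc"):
    """Return the first pystepsrc file found, following the lookup order."""
    candidates = [os.path.join(os.getcwd(), "pystepsrc")]
    env = os.environ.get("PYSTEPSRC")
    if env:
        # $PYSTEPSRC may point directly to a file or to a directory
        candidates.append(env)
        candidates.append(os.path.join(env, "pystepsrc"))
    home = os.environ.get("HOME")
    if home:  # unix and Mac OS X
        candidates.append(os.path.join(home, ".pysteps", "pystepsrc"))
    profile = os.environ.get("USERPROFILE")
    if profile:  # windows
        candidates.append(os.path.join(profile, "pysteps", "pystepsrc"))
    for path in candidates:
        if os.path.isfile(path):
            return path
    # fall back to the system-defined copy shipped with the library
    return default_path
```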
The recommended method to set up the configuration file is to edit a copy of the default pystepsrc file that is distributed with the package and place that copy inside the user's home folder. See the instructions below.
For Linux and OSX users, the recommended way to customize the pysteps configuration is to place the pystepsrc parameters file in the user's home folder, at the following path: ${HOME}/.pysteps/pystepsrc
To set up the configuration file in the home directory, first create the directory if it does not exist. In a terminal, run:
$ mkdir -p ${HOME}/.pysteps
The next step is to find the location of the library's pystepsrc file actually being used. When pysteps is imported in a Python interpreter, it prints the path of the loaded configuration file:
import pysteps
"Pysteps configuration file found at: /path/to/pysteps/library/pystepsrc"
Then we copy the library’s default rc file to that directory:
$ cp /path/to/pysteps/library/pystepsrc ${HOME}/.pysteps/pystepsrc
Edit the file with your preferred text editor and adjust the default settings to your needs.
Finally, check that the updated file is being loaded by the library:
import pysteps
"Pysteps configuration file found at: /home/user_name/.pysteps/pystepsrc"
For Windows users, the recommended way to customize the pySTEPS configuration is to place the pystepsrc parameters file in the user's folder (defined by the %USERPROFILE% environment variable), at the following path: %USERPROFILE%\pysteps\pystepsrc
To set up the configuration file in the home directory, first create the directory if it does not exist. In a terminal, run:
$ mkdir %USERPROFILE%\pysteps
The next step is to find the location of the library's pystepsrc file actually being used. When pysteps is imported in a Python interpreter, it prints the path of the loaded configuration file:
import pysteps
"Pysteps configuration file found at: C:\path\to\pysteps\library\pystepsrc"
Then we copy the library’s default rc file to that directory:
$ copy C:\path\to\pysteps\library\pystepsrc %USERPROFILE%\pysteps\pystepsrc
Edit the file with your preferred text editor and adjust the default settings to your needs.
Finally, check that the updated file is being loaded by the library:
import pysteps
"Pysteps configuration file found at: C:\User\Profile\.pysteps\pystepsrc"
Below you can find the default pystepsrc file. The lines starting with “//” are comments and are ignored.
// pysteps configuration
{
// "silent_import" : whether to suppress the initial pysteps message
"silent_import": false,
"outputs": {
// path_outputs : path where to save results (figures, forecasts, etc)
"path_outputs": "./"
},
"plot": {
// "motion_plot" : "streamplot" or "quiver"
"motion_plot": "quiver",
// "colorscale" : "BOM-RF3", "pysteps" or "STEPS-BE"
"colorscale": "pysteps"
},
"data_sources": {
"bom": {
"root_path": "./radar/bom",
"path_fmt": "prcp-cscn/2/%Y/%m/%d",
"fn_pattern": "2_%Y%m%d_%H%M00.prcp-cscn",
"fn_ext": "nc",
"importer": "bom_rf3",
"timestep": 6,
"importer_kwargs": {
"gzipped": true
}
},
"fmi": {
"root_path": "./radar/fmi",
"path_fmt": "%Y%m%d",
"fn_pattern": "%Y%m%d%H%M_fmi.radar.composite.lowest_FIN_SUOMI1",
"fn_ext": "pgm.gz",
"importer": "fmi_pgm",
"timestep": 5,
"importer_kwargs": {
"gzipped": true
}
},
"mch": {
"root_path": "./radar/mch",
"path_fmt": "%Y%m%d",
"fn_pattern": "AQC%y%j%H%M?_00005.801",
"fn_ext": "gif",
"importer": "mch_gif",
"timestep": 5,
"importer_kwargs": {
"product": "AQC",
"unit": "mm",
"accutime": 5
}
},
"opera": {
"root_path": "./radar/OPERA",
"path_fmt": "%Y%m%d",
"fn_pattern": "T_PAAH21_C_EUOC_%Y%m%d%H%M%S",
"fn_ext": "hdf",
"importer": "opera_hdf5",
"timestep": 15,
"importer_kwargs": {}
},
"knmi": {
"root_path": "./radar/KNMI",
"path_fmt": "%Y/%m",
"fn_pattern": "RAD_NL25_RAP_5min_%Y%m%d%H%M",
"fn_ext": "h5",
"importer": "knmi_hdf5",
"timestep": 5,
"importer_kwargs": {
"accutime": 5,
"qty": "ACRR",
"pixelsize": 1000.0
}
}
}
}
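Because standard JSON does not allow comments, the "//" lines have to be stripped before parsing a pystepsrc-style file. A minimal sketch of how such a file can be read (not the actual pysteps loader, which may handle this differently):

```python
import json

def load_rcparams(text):
    """Parse a pystepsrc-style JSON string, ignoring '//' comment lines."""
    lines = [ln for ln in text.splitlines() if not ln.strip().startswith("//")]
    return json.loads("\n".join(lines))

config = load_rcparams("""
// pysteps configuration
{
    // suppress the initial pysteps message
    "silent_import": false,
    "plot": {"motion_plot": "quiver"}
}
""")
```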
Installing the Example data¶
The example scripts in the user guide, as well as the pySTEPS built-in tests, use the example radar data available in a separate repository: pysteps-data.
The data must be downloaded manually to your computer, and the pystepsrc file needs to be configured to point to that example data.
First, download the data archive from the pysteps-data repository.
Unzip the data into a folder of your preference. Once the data is unzipped, the directory structure looks like this:
pysteps-data
└── radar
    ├── KNMI
    ├── OPERA
    ├── bom
    ├── fmi
    └── mch
Now we need to update the pystepsrc file so that each of the data_sources points to these directories, as described in The pySTEPS configuration file (pystepsrc).
pySTEPS examples gallery¶
Below is a collection of example scripts and tutorials to illustrate the usage of pysteps.
These scripts require the pySTEPS example data. See the installation instructions in the Installing the Example data section.
Note
Click here to download the full example code
Optical flow¶
This tutorial offers a short overview of the optical flow routines available in pysteps, covering how to compute and plot the motion field from a sequence of radar images.
from datetime import datetime
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
from pysteps import io, motion, rcparams
from pysteps.utils import conversion, transformation
from pysteps.visualization import plot_precip_field, quiver
Read the radar input images¶
First, we will import the sequence of radar composites. You need the pysteps-data archive downloaded and the pystepsrc file configured with the data_source paths pointing to data folders.
# Selected case
date = datetime.strptime("201505151630", "%Y%m%d%H%M")
data_source = rcparams.data_sources["mch"]
root_path = data_source["root_path"]
path_fmt = data_source["path_fmt"]
fn_pattern = data_source["fn_pattern"]
fn_ext = data_source["fn_ext"]
importer_name = data_source["importer"]
importer_kwargs = data_source["importer_kwargs"]
timestep = data_source["timestep"]
# Find the input files from the archive
fns = io.archive.find_by_date(
date, root_path, path_fmt, fn_pattern, fn_ext, timestep=5, num_prev_files=9
)
# Read the radar composites
importer = io.get_method(importer_name, "importer")
R, quality, metadata = io.read_timeseries(fns, importer, **importer_kwargs)
del quality # Not used
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:574: RuntimeWarning: invalid value encountered in greater
if np.any(R > np.nanmin(R)):
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:575: RuntimeWarning: invalid value encountered in greater
metadata["threshold"] = np.nanmin(R[R > np.nanmin(R)])
# Convert to mm/h
R, metadata = conversion.to_rainrate(R, metadata)
# Store the reference frame
R_ = R[-1, :, :].copy()
# Log-transform the data [dBR]
R, metadata = transformation.dB_transform(R, metadata, threshold=0.1, zerovalue=-15.0)
# Nicely print the metadata
pprint(metadata)
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:206: RuntimeWarning: invalid value encountered in less
zeros = R < threshold
{'accutime': 5,
'institution': 'MeteoSwiss',
'product': 'AQC',
'projection': '+proj=somerc +lon_0=7.43958333333333 +lat_0=46.9524055555556 '
'+k_0=1 +x_0=600000 +y_0=200000 +ellps=bessel '
'+towgs84=674.374,15.056,405.346,0,0,0,0 +units=m +no_defs',
'threshold': -10.0,
'timestamps': array([datetime.datetime(2015, 5, 15, 15, 45),
datetime.datetime(2015, 5, 15, 15, 50),
datetime.datetime(2015, 5, 15, 15, 55),
datetime.datetime(2015, 5, 15, 16, 0),
datetime.datetime(2015, 5, 15, 16, 5),
datetime.datetime(2015, 5, 15, 16, 10),
datetime.datetime(2015, 5, 15, 16, 15),
datetime.datetime(2015, 5, 15, 16, 20),
datetime.datetime(2015, 5, 15, 16, 25),
datetime.datetime(2015, 5, 15, 16, 30)], dtype=object),
'transform': 'dB',
'unit': 'mm/h',
'x1': 255000.0,
'x2': 965000.0,
'xpixelsize': 1000.0,
'y1': -160000.0,
'y2': 480000.0,
'yorigin': 'upper',
'ypixelsize': 1000.0,
'zerovalue': -15.0,
'zr_a': 316.0,
'zr_b': 1.5}
Lucas-Kanade (LK)¶
The Lucas-Kanade optical flow method implemented in pysteps is a local tracking approach that relies on the OpenCV package. Local features are tracked in a sequence of two or more radar images. The scheme includes a final interpolation step in order to produce a smooth field of motion vectors.
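As a toy illustration of local tracking (a plain NumPy block-matching sketch, not the actual OpenCV-based routine pysteps uses):

```python
import numpy as np

def block_match(prev_frame, next_frame, center, half=5, search=3):
    """Estimate the displacement of a small block between two frames
    by minimizing the sum of squared differences over a search window."""
    i, j = center
    block = prev_frame[i - half:i + half + 1, j - half:j + half + 1]
    best, best_shift = np.inf, (0, 0)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            cand = next_frame[i + di - half:i + di + half + 1,
                              j + dj - half:j + dj + half + 1]
            err = np.sum((cand - block) ** 2)
            if err < best:
                best, best_shift = err, (di, dj)
    return best_shift  # (dy, dx) displacement of the feature

# A feature moving one pixel down and one right is recovered:
rng = np.random.default_rng(0)
f0 = rng.random((40, 40))
f1 = np.roll(np.roll(f0, 1, axis=0), 1, axis=1)
print(block_match(f0, f1, center=(20, 20)))  # (1, 1)
```

In the real scheme, many such locally tracked vectors are interpolated into a smooth motion field covering the whole domain.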
oflow_method = motion.get_method("LK")
V1 = oflow_method(R[-3:, :, :])
# Plot the motion field on top of the reference frame
plot_precip_field(R_, geodata=metadata, title="LK")
quiver(V1, geodata=metadata, step=25)
plt.show()
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/images.py:200: RuntimeWarning: invalid value encountered in greater
field_bin = np.ndarray.astype(input_image > thr, "uint8")
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/precipfields.py:210: RuntimeWarning: invalid value encountered in less
R[R < 0.1] = np.nan
Variational echo tracking (VET)¶
This module implements the VET algorithm presented by Laroche and Zawadzki (1995) and used in the McGill Algorithm for Prediction by Lagrangian Extrapolation (MAPLE) described in Germann and Zawadzki (2002). The approach essentially consists of a global optimization routine that seeks to minimize a cost function between the displaced and the reference image.
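As a toy analogue of such a global optimization, the sketch below searches for the single integer displacement that minimizes the squared residual between the displaced and the reference image (whereas the real VET optimizes a smooth, spatially varying displacement field with a smoothness penalty):

```python
import numpy as np

def global_shift(template, reference, search=3):
    """Find the single integer shift minimizing the squared residual
    between the displaced template and the reference image."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            displaced = np.roll(np.roll(template, dy, axis=0), dx, axis=1)
            cost = np.sum((displaced - reference) ** 2)
            if cost < best:
                best, best_shift = cost, (dy, dx)
    return best_shift

rng = np.random.default_rng(1)
img = rng.random((30, 30))
moved = np.roll(np.roll(img, 2, axis=0), -1, axis=1)
print(global_shift(img, moved))  # (2, -1)
```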
oflow_method = motion.get_method("VET")
V2 = oflow_method(R[-3:, :, :])
# Plot the motion field
plot_precip_field(R_, geodata=metadata, title="VET")
quiver(V2, geodata=metadata, step=25)
plt.show()
Out:
Running VET algorithm
original image shape: (3, 640, 710)
padded image shape: (3, 640, 710)
padded template_image image shape: (3, 640, 710)
Number of sectors: 2,2
Sector Shape: (320, 355)
Minimizing
residuals 3102581.0581715414
smoothness_penalty 0.0
original image shape: (3, 640, 710)
padded image shape: (3, 640, 712)
padded template_image image shape: (3, 640, 712)
Number of sectors: 4,4
Sector Shape: (160, 178)
Minimizing
residuals 2506232.4819293534
smoothness_penalty 0.5027211304402006
original image shape: (3, 640, 710)
padded image shape: (3, 640, 720)
padded template_image image shape: (3, 640, 720)
Number of sectors: 16,16
Sector Shape: (40, 45)
Minimizing
residuals 2267290.5287230993
smoothness_penalty 41.433699598718086
original image shape: (3, 640, 710)
padded image shape: (3, 640, 736)
padded template_image image shape: (3, 640, 736)
Number of sectors: 32,32
Sector Shape: (20, 23)
Minimizing
residuals 2288499.8450945
smoothness_penalty 190.92626378296308
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/precipfields.py:210: RuntimeWarning: invalid value encountered in less
R[R < 0.1] = np.nan
Dynamic and adaptive radar tracking of storms (DARTS)¶
DARTS uses a spectral approach to optical flow that is based on the discrete Fourier transform (DFT) of a temporal sequence of radar fields. The level of truncation of the DFT coefficients controls the degree of smoothness of the estimated motion field, allowing for an efficient motion estimation. DARTS requires a longer sequence of radar fields for estimating the motion; here we use all 10 available fields.
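The smoothing effect of truncating DFT coefficients can be illustrated in one dimension (a self-contained sketch, not the DARTS implementation):

```python
import numpy as np

def dft_truncate(signal, keep):
    """Reconstruct a signal from only its `keep` lowest-frequency DFT
    coefficients (plus conjugates); stronger truncation -> smoother.
    Assumes keep >= 2."""
    coeffs = np.fft.fft(signal)
    mask = np.zeros_like(coeffs)
    mask[:keep] = 1.0
    mask[-(keep - 1):] = 1.0  # keep the conjugate-symmetric part
    return np.real(np.fft.ifft(coeffs * mask))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
noisy = np.sin(x) + 0.3 * np.random.default_rng(0).standard_normal(128)
smooth = dft_truncate(noisy, keep=4)  # much closer to the underlying sine
```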
oflow_method = motion.get_method("DARTS")
R[~np.isfinite(R)] = metadata["zerovalue"]
V3 = oflow_method(R) # needs longer training sequence
# Plot the motion field
plot_precip_field(R_, geodata=metadata, title="DARTS")
quiver(V3, geodata=metadata, step=25)
plt.show()
Out:
Computing the motion field with the DARTS method.
-----
DARTS
-----
Computing the FFT of the reflectivity fields...
Done in 0.86 seconds.
Constructing the y-vector...
Done in 0.45 seconds.
Constructing the H-matrix...
Done in 1.74 seconds.
Solving the linear systems...
Done in 1.20 seconds.
--- 4.336407661437988 seconds ---
Anisotropic diffusion method (Proesmans et al 1994)¶
This module implements the anisotropic diffusion method presented in Proesmans et al. (1994), a robust optical flow technique which employs the notion of inconsistency during the solution of the optical flow equations.
oflow_method = motion.get_method("proesmans")
R[~np.isfinite(R)] = metadata["zerovalue"]
V4 = oflow_method(R[-2:, :, :])
# Plot the motion field
plot_precip_field(R_, geodata=metadata, title="Proesmans")
quiver(V4, geodata=metadata, step=25)
plt.show()
# sphinx_gallery_thumbnail_number = 1
Total running time of the script: ( 1 minutes 10.691 seconds)
Extrapolation nowcast¶
This tutorial shows how to compute and plot an extrapolation nowcast using Finnish radar data.
from datetime import datetime
import matplotlib.pyplot as plt
import numpy as np
from pprint import pprint
from pysteps import io, motion, nowcasts, rcparams, verification
from pysteps.utils import conversion, transformation
from pysteps.visualization import plot_precip_field, quiver
Read the radar input images¶
First, we will import the sequence of radar composites. You need the pysteps-data archive downloaded and the pystepsrc file configured with the data_source paths pointing to data folders.
# Selected case
date = datetime.strptime("201609281600", "%Y%m%d%H%M")
data_source = rcparams.data_sources["fmi"]
n_leadtimes = 12
root_path = data_source["root_path"]
path_fmt = data_source["path_fmt"]
fn_pattern = data_source["fn_pattern"]
fn_ext = data_source["fn_ext"]
importer_name = data_source["importer"]
importer_kwargs = data_source["importer_kwargs"]
timestep = data_source["timestep"]
# Find the input files from the archive
fns = io.archive.find_by_date(
date, root_path, path_fmt, fn_pattern, fn_ext, timestep, num_prev_files=2
)
# Read the radar composites
importer = io.get_method(importer_name, "importer")
Z, _, metadata = io.read_timeseries(fns, importer, **importer_kwargs)
# Convert to rain rate
R, metadata = conversion.to_rainrate(Z, metadata)
# Plot the rainfall field
plot_precip_field(R[-1, :, :], geodata=metadata)
plt.show()
# Store the last frame for plotting it later
R_ = R[-1, :, :].copy()
# Log-transform the data to unit of dBR, set the threshold to 0.1 mm/h,
# set the fill value to -15 dBR
R, metadata = transformation.dB_transform(R, metadata, threshold=0.1, zerovalue=-15.0)
# Nicely print the metadata
pprint(metadata)
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:384: RuntimeWarning: invalid value encountered in greater
metadata["threshold"] = np.nanmin(R[R > np.nanmin(R)])
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:236: RuntimeWarning: invalid value encountered in less
R[R < threshold] = zerovalue
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/precipfields.py:210: RuntimeWarning: invalid value encountered in less
R[R < 0.1] = np.nan
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:206: RuntimeWarning: invalid value encountered in less
zeros = R < threshold
{'accutime': 5.0,
'institution': 'Finnish Meteorological Institute',
'projection': '+proj=stere +lon_0=25E +lat_0=90N +lat_ts=60 +a=6371288 '
'+x_0=380886.310 +y_0=3395677.920 +no_defs',
'threshold': -10.0,
'timestamps': array([datetime.datetime(2016, 9, 28, 15, 50),
datetime.datetime(2016, 9, 28, 15, 55),
datetime.datetime(2016, 9, 28, 16, 0)], dtype=object),
'transform': 'dB',
'unit': 'mm/h',
'x1': 0.0049823258887045085,
'x2': 759752.2852757066,
'xpixelsize': 999.674053,
'y1': 0.009731985162943602,
'y2': 1225544.6588913496,
'yorigin': 'upper',
'ypixelsize': 999.62859,
'zerovalue': -15.0,
'zr_a': 223.0,
'zr_b': 1.53}
Compute the nowcast¶
The extrapolation nowcast is based on the estimation of the motion field, which is here performed using a local tracking approach (Lucas-Kanade). The most recent radar rainfall field is then simply advected along this motion field in order to produce an extrapolation forecast.
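The principle can be sketched for a constant integer displacement (the actual extrapolator handles non-integer, spatially varying motion via semi-Lagrangian interpolation):

```python
import numpy as np

def extrapolate_constant(field, displacement, n_leadtimes):
    """Toy extrapolation nowcast: advect `field` by an integer (dy, dx)
    displacement per time step, for `n_leadtimes` steps."""
    dy, dx = displacement
    forecast = []
    current = field
    for _ in range(n_leadtimes):
        current = np.roll(np.roll(current, dy, axis=0), dx, axis=1)
        forecast.append(current)
    return np.stack(forecast)

rain = np.zeros((8, 8))
rain[2, 2] = 1.0  # a single rainy pixel
fcst = extrapolate_constant(rain, displacement=(1, 1), n_leadtimes=3)
print(np.argwhere(fcst[-1] == 1.0))  # [[5 5]]
```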
# Estimate the motion field with Lucas-Kanade
oflow_method = motion.get_method("LK")
V = oflow_method(R[-3:, :, :])
# Extrapolate the last radar observation
extrapolate = nowcasts.get_method("extrapolation")
R[~np.isfinite(R)] = metadata["zerovalue"]
R_f = extrapolate(R[-1, :, :], V, n_leadtimes)
# Back-transform to rain rate
R_f = transformation.dB_transform(R_f, threshold=-10.0, inverse=True)[0]
# Plot the motion field
plot_precip_field(R_, geodata=metadata)
quiver(V, geodata=metadata, step=50)
plt.show()
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/images.py:200: RuntimeWarning: invalid value encountered in greater
field_bin = np.ndarray.astype(input_image > thr, "uint8")
Verify with FSS¶
The fractions skill score (FSS) provides an intuitive assessment of the dependency of skill on spatial scale and intensity, which makes it an ideal skill score for high-resolution precipitation forecasts.
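A minimal, self-contained sketch of the FSS computation (not the pysteps implementation; boundary handling is simplified):

```python
import numpy as np

def fss(forecast, observation, thr, scale):
    """Fractions skill score sketch: binarize both fields at `thr`,
    compute the exceedance fraction in each scale x scale window via an
    integral image, and compare the two fraction fields."""
    def fractions(field):
        binary = (field >= thr).astype(float)
        c = np.pad(np.cumsum(np.cumsum(binary, axis=0), axis=1),
                   ((1, 0), (1, 0)))
        # sum over each scale x scale window using the integral image
        s = (c[scale:, scale:] - c[:-scale, scale:]
             - c[scale:, :-scale] + c[:-scale, :-scale])
        return s / (scale * scale)
    f, o = fractions(forecast), fractions(observation)
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

obs = np.zeros((16, 16))
obs[4:8, 4:8] = 2.0
print(fss(obs, obs, thr=1.0, scale=4))  # a perfect forecast scores 1.0
```

A displaced forecast scores below 1, and its score improves as the evaluation scale grows, which is exactly the scale dependency the FSS is designed to expose.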
# Find observations in the data archive
fns = io.archive.find_by_date(
date,
root_path,
path_fmt,
fn_pattern,
fn_ext,
timestep,
num_prev_files=0,
num_next_files=n_leadtimes,
)
# Read the radar composites
R_o, _, metadata_o = io.read_timeseries(fns, importer, **importer_kwargs)
R_o, metadata_o = conversion.to_rainrate(R_o, metadata_o, 223.0, 1.53)
# Compute fractions skill score (FSS) for all lead times, a set of scales and 1 mm/h
fss = verification.get_method("FSS")
scales = [2, 4, 8, 16, 32, 64, 128, 256, 512]
thr = 1.0
score = []
for i in range(n_leadtimes):
score_ = []
for scale in scales:
score_.append(fss(R_f[i, :, :], R_o[i + 1, :, :], thr, scale))
score.append(score_)
plt.figure()
x = np.arange(1, n_leadtimes + 1) * timestep
plt.plot(x, score)
plt.legend(scales, title="Scale [km]")
plt.xlabel("Lead time [min]")
plt.ylabel("FSS ( > 1.0 mm/h ) ")
plt.title("Fractions skill score")
plt.show()
# sphinx_gallery_thumbnail_number = 3
Total running time of the script: ( 0 minutes 21.109 seconds)
Generation of stochastic noise¶
This example script shows how to run the stochastic noise field generators included in pysteps.
These noise fields are used as perturbation terms during an extrapolation nowcast in order to represent the uncertainty in the evolution of the rainfall field.
from matplotlib import cm, pyplot
import numpy as np
import os
from pprint import pprint
from pysteps import io, rcparams
from pysteps.noise.fftgenerators import initialize_param_2d_fft_filter
from pysteps.noise.fftgenerators import initialize_nonparam_2d_fft_filter
from pysteps.noise.fftgenerators import generate_noise_2d_fft_filter
from pysteps.utils import conversion, rapsd, transformation
from pysteps.visualization import plot_precip_field, plot_spectrum1d
Read precipitation field¶
First, the radar composite is imported and transformed to units of dB. This image will be used to train the Fourier filters that are necessary to produce the fields of spatially correlated noise.
# Import the example radar composite
root_path = rcparams.data_sources["mch"]["root_path"]
filename = os.path.join(root_path, "20160711", "AQC161932100V_00005.801.gif")
R, _, metadata = io.import_mch_gif(filename, product="AQC", unit="mm", accutime=5.0)
# Convert to mm/h
R, metadata = conversion.to_rainrate(R, metadata)
# Nicely print the metadata
pprint(metadata)
# Plot the rainfall field
plot_precip_field(R, geodata=metadata)
# Log-transform the data
R, metadata = transformation.dB_transform(R, metadata, threshold=0.1, zerovalue=-15.0)
# Assign the fill value to all the NaNs
R[~np.isfinite(R)] = metadata["zerovalue"]
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:574: RuntimeWarning: invalid value encountered in greater
if np.any(R > np.nanmin(R)):
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:575: RuntimeWarning: invalid value encountered in greater
metadata["threshold"] = np.nanmin(R[R > np.nanmin(R)])
{'accutime': 5.0,
'institution': 'MeteoSwiss',
'product': 'AQC',
'projection': '+proj=somerc +lon_0=7.43958333333333 +lat_0=46.9524055555556 '
'+k_0=1 +x_0=600000 +y_0=200000 +ellps=bessel '
'+towgs84=674.374,15.056,405.346,0,0,0,0 +units=m +no_defs',
'threshold': 0.01155375598376629,
'transform': None,
'unit': 'mm/h',
'x1': 255000.0,
'x2': 965000.0,
'xpixelsize': 1000.0,
'y1': -160000.0,
'y2': 480000.0,
'yorigin': 'upper',
'ypixelsize': 1000.0,
'zerovalue': 0.0,
'zr_a': 316.0,
'zr_b': 1.5}
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/precipfields.py:210: RuntimeWarning: invalid value encountered in less
R[R < 0.1] = np.nan
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:206: RuntimeWarning: invalid value encountered in less
zeros = R < threshold
Parametric filter¶
In the parametric approach, a power-law model is used to approximate the power spectral density (PSD) of a given rainfall field.
The parametric model uses a piece-wise linear function with two spectral slopes (beta1 and beta2) and one breaking point.
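In log-log space, such a two-slope model can be written as follows (a generic sketch; the exact pysteps parametrization may differ):

```python
import numpy as np

def piecewise_linear(log_wn, log_w0, intercept, beta1, beta2):
    """Two-slope piecewise-linear model of the log power spectrum:
    slope beta1 below the breaking point log_w0, beta2 above it."""
    log_wn = np.asarray(log_wn, dtype=float)
    below = intercept + beta1 * log_wn
    # shift the second segment so the function stays continuous at log_w0
    above = intercept + beta1 * log_w0 + beta2 * (log_wn - log_w0)
    return np.where(log_wn <= log_w0, below, above)

# evaluate the model over a range of log wavenumbers
vals = piecewise_linear(np.array([1.0, 2.0, 3.0]),
                        log_w0=2.0, intercept=1.0, beta1=-2.0, beta2=-3.5)
```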
# Fit the parametric PSD to the observation
Fp = initialize_param_2d_fft_filter(R)
# Compute the observed and fitted 1D PSD
L = np.max(Fp["input_shape"])
if L % 2 == 0:
wn = np.arange(0, int(L / 2) + 1)
else:
wn = np.arange(0, int(L / 2))
R_, freq = rapsd(R, fft_method=np.fft, return_freq=True)
f = np.exp(Fp["model"](np.log(wn), *Fp["pars"]))
# Extract the scaling break in km, beta1 and beta2
w0 = L / np.exp(Fp["pars"][0])
b1 = Fp["pars"][2]
b2 = Fp["pars"][3]
# Plot the observed power spectrum and the model
fig, ax = pyplot.subplots()
plot_scales = [512, 256, 128, 64, 32, 16, 8, 4]
plot_spectrum1d(
freq,
R_,
x_units="km",
y_units="dBR",
color="k",
ax=ax,
label="Observed",
wavelength_ticks=plot_scales,
)
plot_spectrum1d(
freq,
f,
x_units="km",
y_units="dBR",
color="r",
ax=ax,
label="Fit",
wavelength_ticks=plot_scales,
)
pyplot.legend()
ax.set_title(
"Radially averaged log-power spectrum of R\n"
r"$\omega_0=%.0f km, \beta_1=%.1f, \beta_2=%.1f$" % (w0, b1, b2)
)
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/noise/fftgenerators.py:197: RuntimeWarning: divide by zero encountered in log
F = np.exp(piecewise_linear(np.log(R), *pf))
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/checkouts/v1.1.0/examples/plot_noise_generators.py:74: RuntimeWarning: divide by zero encountered in log
f = np.exp(Fp["model"](np.log(wn), *Fp["pars"]))
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/spectral.py:61: RuntimeWarning: divide by zero encountered in log10
ax.plot(10.0*np.log10(fft_freq), 10.0*np.log10(fft_power), color=color, linewidth=lw, label=label, **kwargs)
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/spectral.py:61: RuntimeWarning: invalid value encountered in log10
ax.plot(10.0*np.log10(fft_freq), 10.0*np.log10(fft_power), color=color, linewidth=lw, label=label, **kwargs)
Nonparametric filter¶
In the nonparametric approach, the Fourier filter is obtained directly from the power spectrum of the observed precipitation field R.
Fnp = initialize_nonparam_2d_fft_filter(R)
Noise generator¶
The parametric and nonparametric filters obtained above can now be used to produce N realizations of random fields of prescribed power spectrum, hence with the same correlation structure as the initial rainfall field.
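The mechanism behind these generators can be sketched in plain NumPy: white noise is shaped in Fourier space by the filter's amplitude spectrum and transformed back (a conceptual sketch, not the pysteps code):

```python
import numpy as np

def correlated_noise(amplitude_filter, seed=None):
    """Generate a noise field with the spatial correlation implied by
    `amplitude_filter` (the amplitude spectrum of a reference field)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(amplitude_filter.shape)
    shaped = np.fft.ifft2(np.fft.fft2(white) * amplitude_filter)
    field = np.real(shaped)
    # normalize to zero mean and unit variance
    return (field - field.mean()) / field.std()

# nonparametric-style filter: amplitude spectrum of a reference field
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
F = np.abs(np.fft.fft2(reference))
noise = correlated_noise(F, seed=42)
```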
seed = 42
num_realizations = 3
# Generate noise
Np = []
Nnp = []
for k in range(num_realizations):
Np.append(generate_noise_2d_fft_filter(Fp, seed=seed + k))
Nnp.append(generate_noise_2d_fft_filter(Fnp, seed=seed + k))
# Plot the generated noise fields
fig, ax = pyplot.subplots(nrows=2, ncols=3)
# parametric noise
ax[0, 0].imshow(Np[0], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[0, 1].imshow(Np[1], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[0, 2].imshow(Np[2], cmap=cm.RdBu_r, vmin=-3, vmax=3)
# nonparametric noise
ax[1, 0].imshow(Nnp[0], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[1, 1].imshow(Nnp[1], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[1, 2].imshow(Nnp[2], cmap=cm.RdBu_r, vmin=-3, vmax=3)
for i in range(2):
for j in range(3):
ax[i, j].set_xticks([])
ax[i, j].set_yticks([])
pyplot.tight_layout()

The above figure highlights the main limitation of the parametric approach (top row), that is, the assumption of an isotropic power law scaling relationship, meaning that anisotropic structures such as rainfall bands cannot be represented.
Instead, the nonparametric approach (bottom row) allows generating perturbation fields with anisotropic structures, but it also requires a larger sample size and is sensitive to the quality of the input data, e.g. the presence of residual clutter in the radar image.
In addition, both techniques assume spatial stationarity of the covariance structure of the field.
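The core idea of the nonparametric generator can be sketched in plain NumPy: take the amplitude spectrum of the observed field as the filter, apply it to white noise in Fourier space, and transform back. This is a minimal illustrative version, not the pysteps implementation (`nonparam_noise` and the toy field below are hypothetical names for this sketch):

```python
import numpy as np

def nonparam_noise(field, seed=None):
    """Generate a standardized noise field with the same amplitude
    spectrum (hence spatial correlation) as `field` (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # nonparametric filter: the amplitude spectrum of the observed field
    F = np.abs(np.fft.fft2(field))
    # filter white noise in Fourier space and transform back
    white = rng.standard_normal(field.shape)
    noise = np.fft.ifft2(F * np.fft.fft2(white)).real
    return (noise - noise.mean()) / noise.std()

# toy "observed" field: white noise smoothed in Fourier space
rng = np.random.default_rng(0)
fy, fx = np.meshgrid(np.fft.fftfreq(64) * 64, np.fft.fftfreq(64) * 64)
decay = 1.0 / (1.0 + np.hypot(fx, fy))
field = np.fft.ifft2(np.fft.fft2(rng.standard_normal((64, 64))) * decay).real
noise = nonparam_noise(field, seed=42)
```

Because the same filter is applied, each realization shares the power spectrum of the input field while its phases are random.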
# sphinx_gallery_thumbnail_number = 3
Total running time of the script: ( 0 minutes 2.110 seconds)
Note
Click here to download the full example code
Data transformations¶
The statistics of intermittent precipitation rates are particularly non-Gaussian and display an asymmetric distribution bounded at zero. Such properties restrict the usage of well-established statistical methods that assume symmetric or Gaussian data.
A common workaround is to introduce a suitable data transformation to approximate a normal distribution.
In this example, we test the data transformation methods available in pysteps in order to obtain a more symmetric distribution of the precipitation data (excluding the zeros). The currently available transformations include the Box-Cox, dB, square-root and normal quantile transforms.
from datetime import datetime
import matplotlib.pyplot as plt
import numpy as np
from pysteps import io, rcparams
from pysteps.utils import conversion, transformation
from scipy.stats import skew
Read the radar input images¶
First, we will import the sequence of radar composites. You need the pysteps-data archive downloaded and the pystepsrc file configured with the data_source paths pointing to data folders.
# Selected case
date = datetime.strptime("201609281600", "%Y%m%d%H%M")
data_source = rcparams.data_sources["fmi"]
root_path = data_source["root_path"]
path_fmt = data_source["path_fmt"]
fn_pattern = data_source["fn_pattern"]
fn_ext = data_source["fn_ext"]
importer_name = data_source["importer"]
importer_kwargs = data_source["importer_kwargs"]
timestep = data_source["timestep"]
# Get 1 hour of observations in the data archive
fns = io.archive.find_by_date(
date,
root_path,
path_fmt,
fn_pattern,
fn_ext,
timestep,
num_next_files=11,
)
# Read the radar composites
importer = io.get_method(importer_name, "importer")
Z, _, metadata = io.read_timeseries(fns, importer, **importer_kwargs)
# Keep only positive rainfall values
Z = Z[Z > metadata["zerovalue"]].flatten()
# Convert to rain rate
R, metadata = conversion.to_rainrate(Z, metadata)
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:384: RuntimeWarning: invalid value encountered in greater
metadata["threshold"] = np.nanmin(R[R > np.nanmin(R)])
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/checkouts/v1.1.0/examples/data_transformations.py:70: RuntimeWarning: invalid value encountered in greater
Z = Z[Z > metadata["zerovalue"]].flatten()
Test data transformations¶
# Define method to visualize the data distribution with boxplots and plot the
# corresponding skewness
def plot_distribution(data, labels, skw):
N = len(data)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax2.plot(np.arange(N + 2), np.zeros(N + 2), ":r")
ax1.boxplot(data, labels=labels, sym="", medianprops={"color": "k"})
ymax = []
for i in range(N):
y = skw[i]
x = i + 1
ax2.plot(x, y, "*r", ms=10, markeredgecolor="k")
ymax.append(np.max(data[i]))
# ylims
ylims = np.percentile(ymax, 50)
ax1.set_ylim((-1 * ylims, ylims))
ylims = np.max(np.abs(skw))
ax2.set_ylim((-1.1 * ylims, 1.1 * ylims))
# labels
ax1.set_ylabel(r"Standardized values [$\sigma$]")
ax2.set_ylabel(r"Skewness []", color="r")
ax2.tick_params(axis="y", labelcolor="r")
The Box-Cox transform is a well-known power transformation introduced by Box and Cox (1964). In its one-parameter version, the Box-Cox transform takes the form T(x) = ln(x) for lambda = 0, or T(x) = (x**lambda - 1)/lambda otherwise.
To find a suitable lambda, we will experiment with a range of values and select the one that produces the most symmetric distribution, i.e., the lambda associated with a value of skewness closest to zero. To visually compare the results, the transformed data are standardized.
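The transform itself is only a couple of lines. The following is an illustrative NumPy version of the formula above (a sketch; pysteps provides this as transformation.boxcox_transform):

```python
import numpy as np

def boxcox(x, lam):
    # One-parameter Box-Cox transform (illustrative sketch, not the
    # pysteps API). x must be strictly positive.
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

boxcox(np.e, 0)    # ln(e) = 1
boxcox(2.0, 1.0)   # (2 - 1)/1 = 1
```

Note that the lambda = 0 case is the continuous limit of the power form, which is why scanning lambda through zero is well defined.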
data = []
labels = []
skw = []
# Test a range of values for the transformation parameter Lambda
Lambdas = np.linspace(-0.4, 0.4, 11)
for i, Lambda in enumerate(Lambdas):
R_, _ = transformation.boxcox_transform(R, metadata, Lambda)
R_ = (R_ - np.mean(R_)) / np.std(R_)
data.append(R_)
labels.append("{0:.2f}".format(Lambda))
skw.append(skew(R_)) # skewness
# Plot the transformed data distribution as a function of lambda
plot_distribution(data, labels, skw)
plt.title("Box-Cox transform")
plt.tight_layout()
plt.show()
# Best lambda
idx_best = np.argmin(np.abs(skw))
Lambda = Lambdas[idx_best]
print("Best parameter lambda: %.2f\n(skewness = %.2f)" % (Lambda, skw[idx_best]))

Out:
Best parameter lambda: 0.16
(skewness = 0.02)
data = []
labels = []
skw = []
First, let’s have a look at the original rain rate values.
data.append((R - np.mean(R)) / np.std(R))
labels.append("R")
skw.append(skew(R))
We transform the rainfall data to dB units: 10*log10(R)
R_, _ = transformation.dB_transform(R, metadata)
data.append((R_ - np.mean(R_)) / np.std(R_))
labels.append("dB")
skw.append(skew(R_))
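For reference, the dB transform can be sketched as follows. This is an illustrative stand-in (the `db_transform` helper below is hypothetical), not the actual transformation.dB_transform implementation, which also updates the metadata:

```python
import numpy as np

def db_transform(r, threshold=0.1, zerovalue=-15.0):
    # Illustrative dB transform: 10*log10(R), with rain rates below the
    # threshold set to a fixed zerovalue (sketch of the behaviour, not
    # the pysteps implementation)
    r = np.asarray(r, dtype=float)
    return np.where(r < threshold, zerovalue,
                    10.0 * np.log10(np.maximum(r, threshold)))

db_transform([0.0, 0.1, 1.0, 10.0])  # -> [-15., -10., 0., 10.]
```

The fixed zerovalue is the "arbitrary rule for non-positive values" mentioned below: log10 is undefined at zero, so dry pixels must be assigned a placeholder.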
Transform the data using the square-root: sqrt(R)
R_, _ = transformation.sqrt_transform(R, metadata)
data.append((R_ - np.mean(R_)) / np.std(R_))
labels.append("sqrt")
skw.append(skew(R_))
We now apply the Box-Cox transform using the best parameter lambda found above.
R_, _ = transformation.boxcox_transform(R, metadata, Lambda)
data.append((R_ - np.mean(R_)) / np.std(R_))
labels.append("Box-Cox\n($\lambda=$%.2f)" % Lambda)
skw.append(skew(R_))
Finally, we apply the empirical normal quantile (NQ) transform, as described in Bogner et al. (2012).
R_, _ = transformation.NQ_transform(R, metadata)
data.append((R_ - np.mean(R_)) / np.std(R_))
labels.append("NQ")
skw.append(skew(R_))
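The idea behind the NQ transform can be sketched in a few lines: replace each value by the standard-normal quantile corresponding to its empirical rank. This is an illustrative sketch (the `nq_transform` helper is hypothetical), not the pysteps transformation.NQ_transform implementation:

```python
import numpy as np
from scipy.stats import norm

def nq_transform(x):
    # Empirical normal quantile transform (illustrative sketch):
    # map the ranks of x to standard-normal quantiles
    x = np.asarray(x)
    ranks = np.argsort(np.argsort(x)) + 1      # 1..n, smallest value first
    return norm.ppf(ranks / (x.size + 1.0))

x = np.random.default_rng(1).gamma(0.5, size=1000)  # strongly skewed sample
y = nq_transform(x)                                 # approximately Gaussian
```

By construction the result has a near-perfectly symmetric distribution, whatever the shape of the input, which is why the NQ transform scores so well on skewness below.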
Plotting all the results, we first notice the strongly asymmetric distribution of the original data (R) and that all transformations manage to reduce its skewness. Among these, the Box-Cox transform (using the best parameter lambda) and the normal quantile (NQ) transform provide the best correction. Despite not producing a perfectly symmetric distribution, the square-root (sqrt) transform has the strong advantage of being defined at zero, too, whereas all the other transformations need an arbitrary rule for non-positive values.
plot_distribution(data, labels, skw)
plt.title("Data transforms")
plt.tight_layout()
plt.show()

Total running time of the script: ( 0 minutes 15.665 seconds)
Cascade decomposition¶
This example script shows how to compute and plot the cascade decomposition of a single radar precipitation field in pysteps.
from matplotlib import cm, pyplot
import numpy as np
import os
from pprint import pprint
from pysteps.cascade.bandpass_filters import filter_gaussian
from pysteps import io, rcparams
from pysteps.cascade.decomposition import decomposition_fft
from pysteps.utils import conversion, transformation
from pysteps.visualization import plot_precip_field
Read precipitation field¶
First, the radar composite is imported and transformed to dB units.
# Import the example radar composite
root_path = rcparams.data_sources["fmi"]["root_path"]
filename = os.path.join(
root_path, "20160928", "201609281600_fmi.radar.composite.lowest_FIN_SUOMI1.pgm.gz"
)
R, _, metadata = io.import_fmi_pgm(filename, gzipped=True)
# Convert to rain rate
R, metadata = conversion.to_rainrate(R, metadata)
# Nicely print the metadata
pprint(metadata)
# Plot the rainfall field
plot_precip_field(R, geodata=metadata)
# Log-transform the data
R, metadata = transformation.dB_transform(R, metadata, threshold=0.1, zerovalue=-15.0)

Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:384: RuntimeWarning: invalid value encountered in greater
metadata["threshold"] = np.nanmin(R[R > np.nanmin(R)])
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:236: RuntimeWarning: invalid value encountered in less
R[R < threshold] = zerovalue
{'accutime': 5.0,
'institution': 'Finnish Meteorological Institute',
'projection': '+proj=stere +lon_0=25E +lat_0=90N +lat_ts=60 +a=6371288 '
'+x_0=380886.310 +y_0=3395677.920 +no_defs',
'threshold': 0.0002548805471873859,
'transform': None,
'unit': 'mm/h',
'x1': 0.0049823258887045085,
'x2': 759752.2852757066,
'xpixelsize': 999.674053,
'y1': 0.009731985162943602,
'y2': 1225544.6588913496,
'yorigin': 'upper',
'ypixelsize': 999.62859,
'zerovalue': 0.0,
'zr_a': 223.0,
'zr_b': 1.53}
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/precipfields.py:210: RuntimeWarning: invalid value encountered in less
R[R < 0.1] = np.nan
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:206: RuntimeWarning: invalid value encountered in less
zeros = R < threshold
2D Fourier spectrum¶
Compute and plot the 2D Fourier power spectrum of the precipitation field.
# Set NaNs to the fill value
R[~np.isfinite(R)] = metadata["zerovalue"]
# Compute the Fourier transform of the input field
F = abs(np.fft.fftshift(np.fft.fft2(R)))
# Plot the power spectrum
M, N = F.shape
fig, ax = pyplot.subplots()
im = ax.imshow(
np.log(F ** 2), vmin=4, vmax=24, cmap=cm.jet, extent=(-N / 2, N / 2, -M / 2, M / 2)
)
cb = fig.colorbar(im)
ax.set_xlabel("Wavenumber $k_x$")
ax.set_ylabel("Wavenumber $k_y$")
ax.set_title("Log-power spectrum of R")

Cascade decomposition¶
First, construct a set of Gaussian bandpass filters and plot the corresponding 1D filters.
num_cascade_levels = 7
# Construct the Gaussian bandpass filters
filter = filter_gaussian(R.shape, num_cascade_levels)
# Plot the bandpass filter weights
L = max(N, M)
fig, ax = pyplot.subplots()
for k in range(num_cascade_levels):
ax.semilogx(
np.linspace(0, L / 2, len(filter["weights_1d"][k, :])),
filter["weights_1d"][k, :],
"k-",
basex=pow(0.5 * L / 3, 1.0 / (num_cascade_levels - 2)),
)
ax.set_xlim(1, L / 2)
ax.set_ylim(0, 1)
xt = np.hstack([[1.0], filter["central_wavenumbers"][1:]])
ax.set_xticks(xt)
ax.set_xticklabels(["%.2f" % cf for cf in filter["central_wavenumbers"]])
ax.set_xlabel("Radial wavenumber $|\mathbf{k}|$")
ax.set_ylabel("Normalized weight")
ax.set_title("Bandpass filter weights")

Finally, apply the 2D Gaussian filters to decompose the radar rainfall field into a set of cascade levels of decreasing spatial scale and plot them.
decomp = decomposition_fft(R, filter)
# Plot the normalized cascade levels
for i in range(num_cascade_levels):
mu = decomp["means"][i]
sigma = decomp["stds"][i]
decomp["cascade_levels"][i] = (decomp["cascade_levels"][i] - mu) / sigma
fig, ax = pyplot.subplots(nrows=2, ncols=4)
ax[0, 0].imshow(R, cmap=cm.RdBu_r, vmin=-5, vmax=5)
ax[0, 1].imshow(decomp["cascade_levels"][0], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[0, 2].imshow(decomp["cascade_levels"][1], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[0, 3].imshow(decomp["cascade_levels"][2], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[1, 0].imshow(decomp["cascade_levels"][3], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[1, 1].imshow(decomp["cascade_levels"][4], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[1, 2].imshow(decomp["cascade_levels"][5], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[1, 3].imshow(decomp["cascade_levels"][6], cmap=cm.RdBu_r, vmin=-3, vmax=3)
ax[0, 0].set_title("Observed")
ax[0, 1].set_title("Level 1")
ax[0, 2].set_title("Level 2")
ax[0, 3].set_title("Level 3")
ax[1, 0].set_title("Level 4")
ax[1, 1].set_title("Level 5")
ax[1, 2].set_title("Level 6")
ax[1, 3].set_title("Level 7")
for i in range(2):
for j in range(4):
ax[i, j].set_xticks([])
ax[i, j].set_yticks([])
pyplot.tight_layout()
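The decomposition-and-reconstruction idea can be sketched in plain NumPy. The following is a hypothetical minimal version (the `gaussian_cascade` helper and its filter parameterization are illustrative, not the pysteps filter_gaussian / decomposition_fft implementation):

```python
import numpy as np

def gaussian_cascade(field, n_levels):
    # Illustrative FFT cascade decomposition with Gaussian bandpass
    # filters centred on dyadic wavenumbers (a sketch of the idea)
    M, N = field.shape
    ky = np.fft.fftfreq(M) * M
    kx = np.fft.fftfreq(N) * N
    K = np.hypot(*np.meshgrid(kx, ky))            # radial wavenumber grid
    centers = [0.0] + [2.0 ** i for i in range(n_levels - 1)]
    weights = np.stack(
        [np.exp(-0.5 * ((K - c) / max(c / 2.0, 1.0)) ** 2) for c in centers]
    )
    weights /= weights.sum(axis=0)                # partition of unity
    F = np.fft.fft2(field)
    return np.stack([np.fft.ifft2(w * F).real for w in weights])

field = np.random.default_rng(0).standard_normal((64, 64))
levels = gaussian_cascade(field, 7)
# because the filter weights sum to one at every wavenumber,
# the cascade levels sum back to the original field
```

Normalizing the weights to sum to one at every wavenumber guarantees that the cascade is a lossless decomposition: adding the levels back together recovers the input field.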
# sphinx_gallery_thumbnail_number = 4

Total running time of the script: ( 0 minutes 2.178 seconds)
STEPS nowcast¶
This tutorial shows how to compute and plot an ensemble nowcast using Swiss radar data.
from pylab import *
from datetime import datetime
from pprint import pprint
from pysteps import io, nowcasts, rcparams
from pysteps.motion.lucaskanade import dense_lucaskanade
from pysteps.postprocessing.ensemblestats import excprob
from pysteps.utils import conversion, dimension, transformation
from pysteps.visualization import plot_precip_field
# Set nowcast parameters
n_ens_members = 20
n_leadtimes = 6
seed = 24
Read precipitation field¶
First, the sequence of Swiss radar composites is imported, converted to rain rate and transformed to dBR units.
date = datetime.strptime("201701311200", "%Y%m%d%H%M")
data_source = "mch"
# Load data source config
root_path = rcparams.data_sources[data_source]["root_path"]
path_fmt = rcparams.data_sources[data_source]["path_fmt"]
fn_pattern = rcparams.data_sources[data_source]["fn_pattern"]
fn_ext = rcparams.data_sources[data_source]["fn_ext"]
importer_name = rcparams.data_sources[data_source]["importer"]
importer_kwargs = rcparams.data_sources[data_source]["importer_kwargs"]
timestep = rcparams.data_sources[data_source]["timestep"]
# Find the radar files in the archive
fns = io.find_by_date(
date, root_path, path_fmt, fn_pattern, fn_ext, timestep, num_prev_files=2
)
# Read the data from the archive
importer = io.get_method(importer_name, "importer")
R, _, metadata = io.read_timeseries(fns, importer, **importer_kwargs)
# Convert to rain rate
R, metadata = conversion.to_rainrate(R, metadata)
# Upscale data to 2 km to limit memory usage
R, metadata = dimension.aggregate_fields_space(R, metadata, 2000)
# Plot the rainfall field
plot_precip_field(R[-1, :, :], geodata=metadata)
# Log-transform the data to unit of dBR, set the threshold to 0.1 mm/h,
# set the fill value to -15 dBR
R, metadata = transformation.dB_transform(R, metadata, threshold=0.1, zerovalue=-15.0)
# Set missing values with the fill value
R[~np.isfinite(R)] = -15.0
# Nicely print the metadata
pprint(metadata)

Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:574: RuntimeWarning: invalid value encountered in greater
if np.any(R > np.nanmin(R)):
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:575: RuntimeWarning: invalid value encountered in greater
metadata["threshold"] = np.nanmin(R[R > np.nanmin(R)])
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/precipfields.py:210: RuntimeWarning: invalid value encountered in less
R[R < 0.1] = np.nan
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:206: RuntimeWarning: invalid value encountered in less
zeros = R < threshold
{'accutime': 5,
'institution': 'MeteoSwiss',
'product': 'AQC',
'projection': '+proj=somerc +lon_0=7.43958333333333 +lat_0=46.9524055555556 '
'+k_0=1 +x_0=600000 +y_0=200000 +ellps=bessel '
'+towgs84=674.374,15.056,405.346,0,0,0,0 +units=m +no_defs',
'threshold': -10.0,
'timestamps': array([datetime.datetime(2017, 1, 31, 11, 50),
datetime.datetime(2017, 1, 31, 11, 55),
datetime.datetime(2017, 1, 31, 12, 0)], dtype=object),
'transform': 'dB',
'unit': 'mm/h',
'x1': 255000.0,
'x2': 965000.0,
'xpixelsize': 2000,
'y1': -160000.0,
'y2': 480000.0,
'yorigin': 'upper',
'ypixelsize': 2000,
'zerovalue': -15.0,
'zr_a': 316.0,
'zr_b': 1.5}
Deterministic nowcast with S-PROG¶
First, the motion field is estimated using a local tracking approach based on the Lucas-Kanade optical flow. The motion field can then be used to generate a deterministic nowcast with the S-PROG model, which implements a scale-filtering approach in order to progressively remove the unpredictable spatial scales during the forecast.
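At its core, an extrapolation nowcast advects the latest observed field along the motion field. A toy sketch using a constant integer-pixel shift (a hypothetical, crude stand-in for pysteps' semi-Lagrangian scheme, which handles spatially varying, sub-pixel motion):

```python
import numpy as np

def extrapolate(field, v, n_steps):
    # Toy extrapolation nowcast: shift the field by a constant
    # integer-pixel motion vector at each step (illustrative only)
    frames = []
    f = field
    for _ in range(n_steps):
        f = np.roll(f, shift=(v[0], v[1]), axis=(0, 1))
        frames.append(f)
    return np.stack(frames)

field = np.zeros((5, 5))
field[2, 2] = 1.0
frames = extrapolate(field, (1, 0), 2)  # the cell moves to (3, 2), then (4, 2)
```

S-PROG builds on this pure advection by additionally damping the small, unpredictable scales as lead time grows.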
# Estimate the motion field
V = dense_lucaskanade(R)
# The S-PROG nowcast
nowcast_method = nowcasts.get_method("sprog")
R_f = nowcast_method(
R[-3:, :, :],
V,
n_leadtimes,
n_cascade_levels=8,
R_thr=-10.0,
decomp_method="fft",
bandpass_filter_method="gaussian",
probmatching_method="mean",
)
# Back-transform to rain rate
R_f = transformation.dB_transform(R_f, threshold=-10.0, inverse=True)[0]
# Plot the S-PROG forecast
plot_precip_field(
R_f[-1, :, :],
geodata=metadata,
title="S-PROG (+ %i min)" % (n_leadtimes * timestep),
)

Out:
Computing S-PROG nowcast:
-------------------------
Inputs:
-------
input dimensions: 320x355
Methods:
--------
extrapolation: semilagrangian
bandpass filter: gaussian
decomposition: fft
conditional statistics: no
probability matching: mean
FFT method: numpy
Parameters:
-----------
number of time steps: 6
parallel threads: 1
number of cascade levels: 8
order of the AR(p) model: 2
precip. intensity threshold: -10
************************************************
* Correlation coefficients for cascade levels: *
************************************************
-----------------------------------------
| Level | Lag-1 | Lag-2 |
-----------------------------------------
| 1 | 0.999255 | 0.996920 |
-----------------------------------------
| 2 | 0.998026 | 0.992118 |
-----------------------------------------
| 3 | 0.994338 | 0.981424 |
-----------------------------------------
| 4 | 0.983001 | 0.949390 |
-----------------------------------------
| 5 | 0.945160 | 0.843243 |
-----------------------------------------
| 6 | 0.826010 | 0.632825 |
-----------------------------------------
| 7 | 0.507358 | 0.316495 |
-----------------------------------------
| 8 | 0.134681 | 0.048310 |
-----------------------------------------
****************************************
* AR(p) parameters for cascade levels: *
****************************************
------------------------------------------------------
| Level | Phi-1 | Phi-2 | Phi-0 |
------------------------------------------------------
| 1 | 1.924228 | -0.925664 | 0.014605 |
------------------------------------------------------
| 2 | 1.878116 | -0.881830 | 0.029612 |
------------------------------------------------------
| 3 | 1.635865 | -0.645180 | 0.081187 |
------------------------------------------------------
| 4 | 1.475829 | -0.501351 | 0.158860 |
------------------------------------------------------
| 5 | 1.388922 | -0.469510 | 0.288372 |
------------------------------------------------------
| 6 | 0.954617 | -0.155697 | 0.556782 |
------------------------------------------------------
| 7 | 0.466991 | 0.079563 | 0.859003 |
------------------------------------------------------
| 8 | 0.130542 | 0.030729 | 0.990421 |
------------------------------------------------------
Starting nowcast computation.
Computing nowcast for time step 1... done.
Computing nowcast for time step 2... done.
Computing nowcast for time step 3... done.
Computing nowcast for time step 4... done.
Computing nowcast for time step 5... done.
Computing nowcast for time step 6... done.
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:236: RuntimeWarning: invalid value encountered in less
R[R < threshold] = zerovalue
As we can see from the figure above, the forecast produced by S-PROG is a smooth field. In other words, the forecast variance is lower than the variance of the original observed field. However, certain applications demand that the forecast retain the same statistical properties as the observations. In such cases, the S-PROG forecasts are of limited use and a stochastic approach might be of more interest.
Stochastic nowcast with STEPS¶
The S-PROG approach is extended to include a stochastic term which represents the variance associated to the unpredictable development of precipitation. This approach is known as STEPS (short-term ensemble prediction system).
# The STEPS nowcast
nowcast_method = nowcasts.get_method("steps")
R_f = nowcast_method(
R[-3:, :, :],
V,
n_leadtimes,
n_ens_members,
n_cascade_levels=6,
R_thr=-10.0,
kmperpixel=2.0,
timestep=timestep,
decomp_method="fft",
bandpass_filter_method="gaussian",
noise_method="nonparametric",
vel_pert_method="bps",
mask_method="incremental",
seed=seed,
)
# Back-transform to rain rates
R_f = transformation.dB_transform(R_f, threshold=-10.0, inverse=True)[0]
# Plot the ensemble mean
R_f_mean = np.mean(R_f[:, -1, :, :], axis=0)
plot_precip_field(
R_f_mean,
geodata=metadata,
title="Ensemble mean (+ %i min)" % (n_leadtimes * timestep),
)

Out:
Computing STEPS nowcast:
------------------------
Inputs:
-------
input dimensions: 320x355
km/pixel: 2
time step: 5 minutes
Methods:
--------
extrapolation: semilagrangian
bandpass filter: gaussian
decomposition: fft
noise generator: nonparametric
noise adjustment: no
velocity perturbator: bps
conditional statistics: no
precip. mask method: incremental
probability matching: cdf
FFT method: numpy
Parameters:
-----------
number of time steps: 6
ensemble size: 20
parallel threads: 1
number of cascade levels: 6
order of the AR(p) model: 2
velocity perturbations, parallel: 10.88,0.23,-7.68
velocity perturbations, perpendicular: 5.76,0.31,-2.72
precip. intensity threshold: -10
************************************************
* Correlation coefficients for cascade levels: *
************************************************
-----------------------------------------
| Level | Lag-1 | Lag-2 |
-----------------------------------------
| 1 | 0.999274 | 0.997051 |
-----------------------------------------
| 2 | 0.997599 | 0.990959 |
-----------------------------------------
| 3 | 0.989080 | 0.966670 |
-----------------------------------------
| 4 | 0.943473 | 0.845313 |
-----------------------------------------
| 5 | 0.726287 | 0.528079 |
-----------------------------------------
| 6 | 0.220366 | 0.100895 |
-----------------------------------------
****************************************
* AR(p) parameters for cascade levels: *
****************************************
------------------------------------------------------
| Level | Phi-1 | Phi-2 | Phi-0 |
------------------------------------------------------
| 1 | 1.925219 | -0.926617 | 0.014322 |
------------------------------------------------------
| 2 | 1.865977 | -0.870468 | 0.034087 |
------------------------------------------------------
| 3 | 1.517706 | -0.534463 | 0.124565 |
------------------------------------------------------
| 4 | 1.328469 | -0.408062 | 0.302597 |
------------------------------------------------------
| 5 | 0.725385 | 0.001242 | 0.687391 |
------------------------------------------------------
| 6 | 0.208244 | 0.055005 | 0.973941 |
------------------------------------------------------
Starting nowcast computation.
Computing nowcast for time step 1... done.
Computing nowcast for time step 2... done.
Computing nowcast for time step 3... done.
Computing nowcast for time step 4... done.
Computing nowcast for time step 5... done.
Computing nowcast for time step 6... done.
The mean of the ensemble displays properties similar to those of the S-PROG forecast seen above, although the degree of smoothing also depends on the ensemble size. In this sense, the S-PROG forecast can be seen as the mean of an ensemble of infinite size.
# Plot some of the realizations
fig = figure()
for i in range(4):
ax = fig.add_subplot(221 + i)
ax.set_title("Member %02d" % i)
plot_precip_field(R_f[i, -1, :, :], geodata=metadata, colorbar=False, axis="off")
tight_layout()

As we can see from these ensemble members, the stochastic forecast maintains the same variance as the observed rainfall field. STEPS also includes a stochastic perturbation of the motion field in order to quantify its uncertainty.
Finally, it is possible to derive probabilities from our ensemble forecast.
# Compute exceedance probabilities for a 0.5 mm/h threshold
P = excprob(R_f[:, -1, :, :], 0.5)
# Plot the field of probabilities
plot_precip_field(
P,
geodata=metadata,
drawlonlatlines=False,
type="prob",
units="mm/h",
probthr=0.5,
title="Exceedance probability (+ %i min)" % (n_leadtimes * timestep),
)
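An exceedance probability is simply the fraction of ensemble members above the threshold at each grid point. A minimal sketch (the `exceedance_prob` helper is hypothetical, an illustrative version of pysteps.postprocessing.ensemblestats.excprob):

```python
import numpy as np

def exceedance_prob(ensemble, thr):
    # Fraction of ensemble members exceeding thr at each grid point,
    # with the ensemble dimension along axis 0 (illustrative sketch)
    return np.mean(ensemble >= thr, axis=0)

# 4 members on a 1x2 grid
ens = np.array([[[0.2, 1.0]], [[0.7, 0.1]], [[0.6, 0.9]], [[0.0, 0.8]]])
exceedance_prob(ens, 0.5)  # -> [[0.5, 0.75]]
```

With 20 members, the resulting probabilities are quantized in steps of 0.05, which is one reason larger ensembles give smoother probability fields.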
# sphinx_gallery_thumbnail_number = 5

Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/postprocessing/ensemblestats.py:97: RuntimeWarning: invalid value encountered in greater_equal
X_[X >= x] = 1.0
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/postprocessing/ensemblestats.py:98: RuntimeWarning: invalid value encountered in less
X_[X < x] = 0.0
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/precipfields.py:214: RuntimeWarning: invalid value encountered in less
R[R < 1e-3] = np.nan
Total running time of the script: ( 0 minutes 21.534 seconds)
Ensemble verification¶
In this tutorial we perform a verification of a probabilistic extrapolation nowcast using MeteoSwiss radar data.
from datetime import datetime
import matplotlib.pyplot as plt
import numpy as np
from pprint import pprint
from pysteps import io, nowcasts, rcparams, verification
from pysteps.motion.lucaskanade import dense_lucaskanade
from pysteps.postprocessing import ensemblestats
from pysteps.utils import conversion, dimension, transformation
from pysteps.visualization import plot_precip_field
Read precipitation field¶
First, we will import the sequence of MeteoSwiss (“mch”) radar composites. You need the pysteps-data archive downloaded and the pystepsrc file configured with the data_source paths pointing to data folders.
# Selected case
date = datetime.strptime("201607112100", "%Y%m%d%H%M")
data_source = rcparams.data_sources["mch"]
n_ens_members = 20
n_leadtimes = 6
seed = 24
The data are upscaled to 2 km resolution to limit the memory usage and thus afford a larger number of ensemble members.
root_path = data_source["root_path"]
path_fmt = data_source["path_fmt"]
fn_pattern = data_source["fn_pattern"]
fn_ext = data_source["fn_ext"]
importer_name = data_source["importer"]
importer_kwargs = data_source["importer_kwargs"]
timestep = data_source["timestep"]
# Find the radar files in the archive
fns = io.find_by_date(
date, root_path, path_fmt, fn_pattern, fn_ext, timestep, num_prev_files=2
)
# Read the data from the archive
importer = io.get_method(importer_name, "importer")
R, _, metadata = io.read_timeseries(fns, importer, **importer_kwargs)
# Convert to rain rate
R, metadata = conversion.to_rainrate(R, metadata)
# Upscale data to 2 km
R, metadata = dimension.aggregate_fields_space(R, metadata, 2000)
# Plot the rainfall field
plot_precip_field(R[-1, :, :], geodata=metadata)
plt.show()
# Log-transform the data to unit of dBR, set the threshold to 0.1 mm/h,
# set the fill value to -15 dBR
R, metadata = transformation.dB_transform(R, metadata, threshold=0.1, zerovalue=-15.0)
# Set missing values with the fill value
R[~np.isfinite(R)] = -15.0
# Nicely print the metadata
pprint(metadata)

Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:574: RuntimeWarning: invalid value encountered in greater
if np.any(R > np.nanmin(R)):
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:575: RuntimeWarning: invalid value encountered in greater
metadata["threshold"] = np.nanmin(R[R > np.nanmin(R)])
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/precipfields.py:210: RuntimeWarning: invalid value encountered in less
R[R < 0.1] = np.nan
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:206: RuntimeWarning: invalid value encountered in less
zeros = R < threshold
{'accutime': 5,
'institution': 'MeteoSwiss',
'product': 'AQC',
'projection': '+proj=somerc +lon_0=7.43958333333333 +lat_0=46.9524055555556 '
'+k_0=1 +x_0=600000 +y_0=200000 +ellps=bessel '
'+towgs84=674.374,15.056,405.346,0,0,0,0 +units=m +no_defs',
'threshold': -10.0,
'timestamps': array([datetime.datetime(2016, 7, 11, 20, 50),
datetime.datetime(2016, 7, 11, 20, 55),
datetime.datetime(2016, 7, 11, 21, 0)], dtype=object),
'transform': 'dB',
'unit': 'mm/h',
'x1': 255000.0,
'x2': 965000.0,
'xpixelsize': 2000,
'y1': -160000.0,
'y2': 480000.0,
'yorigin': 'upper',
'ypixelsize': 2000,
'zerovalue': -15.0,
'zr_a': 316.0,
'zr_b': 1.5}
Forecast¶
We use the STEPS approach to produce an ensemble nowcast of precipitation fields.
# Estimate the motion field
V = dense_lucaskanade(R)
# Perform the ensemble nowcast with STEPS
nowcast_method = nowcasts.get_method("steps")
R_f = nowcast_method(
R[-3:, :, :],
V,
n_leadtimes,
n_ens_members,
n_cascade_levels=6,
R_thr=-10.0,
kmperpixel=2.0,
timestep=timestep,
decomp_method="fft",
bandpass_filter_method="gaussian",
noise_method="nonparametric",
vel_pert_method="bps",
mask_method="incremental",
seed=seed,
)
# Back-transform to rain rates
R_f = transformation.dB_transform(R_f, threshold=-10.0, inverse=True)[0]
# Plot some of the realizations
fig = plt.figure()
for i in range(4):
ax = fig.add_subplot(221 + i)
ax.set_title("Member %02d" % i)
plot_precip_field(R_f[i, -1, :, :], geodata=metadata, colorbar=False, axis="off")
plt.tight_layout()
plt.show()

Out:
Computing STEPS nowcast:
------------------------
Inputs:
-------
input dimensions: 320x355
km/pixel: 2
time step: 5 minutes
Methods:
--------
extrapolation: semilagrangian
bandpass filter: gaussian
decomposition: fft
noise generator: nonparametric
noise adjustment: no
velocity perturbator: bps
conditional statistics: no
precip. mask method: incremental
probability matching: cdf
FFT method: numpy
Parameters:
-----------
number of time steps: 6
ensemble size: 20
parallel threads: 1
number of cascade levels: 6
order of the AR(p) model: 2
velocity perturbations, parallel: 10.88,0.23,-7.68
velocity perturbations, perpendicular: 5.76,0.31,-2.72
precip. intensity threshold: -10
************************************************
* Correlation coefficients for cascade levels: *
************************************************
-----------------------------------------
| Level | Lag-1 | Lag-2 |
-----------------------------------------
| 1 | 0.998861 | 0.996545 |
-----------------------------------------
| 2 | 0.997589 | 0.992801 |
-----------------------------------------
| 3 | 0.993111 | 0.981614 |
-----------------------------------------
| 4 | 0.955226 | 0.892324 |
-----------------------------------------
| 5 | 0.776431 | 0.586100 |
-----------------------------------------
| 6 | 0.190418 | 0.132265 |
-----------------------------------------
****************************************
* AR(p) parameters for cascade levels: *
****************************************
------------------------------------------------------
| Level | Phi-1 | Phi-2 | Phi-0 |
------------------------------------------------------
| 1 | 1.516658 | -0.518387 | 0.040795 |
------------------------------------------------------
| 2 | 1.491181 | -0.494785 | 0.060306 |
------------------------------------------------------
| 3 | 1.329807 | -0.339032 | 0.110238 |
------------------------------------------------------
| 4 | 1.174900 | -0.229971 | 0.287947 |
------------------------------------------------------
| 5 | 0.809165 | -0.042160 | 0.629642 |
------------------------------------------------------
| 6 | 0.171448 | 0.099618 | 0.976820 |
------------------------------------------------------
Starting nowcast computation.
Computing nowcast for time step 1... done.
Computing nowcast for time step 2... done.
Computing nowcast for time step 3... done.
Computing nowcast for time step 4... done.
Computing nowcast for time step 5... done.
Computing nowcast for time step 6... done.
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:236: RuntimeWarning: invalid value encountered in less
R[R < threshold] = zerovalue
Verification¶
Pysteps includes a number of verification metrics to help users analyze the general characteristics of the nowcasts in terms of consistency and quality (or goodness). Here, we will verify our probabilistic forecasts using the ROC curve, reliability diagrams, and rank histograms, as implemented in the verification module of pysteps.
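As a quick illustration of the last of these diagnostics, a rank histogram counts where each observation falls within the sorted ensemble; a flat histogram indicates a well-calibrated ensemble. A minimal NumPy sketch of the idea (the function name is ours, not the pysteps API; the actual verification.rankhist_* functions also handle ties and a minimum-intensity threshold):

```python
import numpy as np

def rank_histogram(ensemble, obs):
    """Count, for each observation, how many ensemble members fall below it.

    ensemble: array of shape (n_members, n_points)
    obs: array of shape (n_points,)
    Returns an array of length n_members + 1 with the rank counts.
    """
    n_members = ensemble.shape[0]
    # Rank of each observation within the ensemble values at that point
    ranks = np.sum(ensemble < obs[np.newaxis, :], axis=0)
    return np.bincount(ranks, minlength=n_members + 1)
```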
# Find the files containing the verifying observations
fns = io.archive.find_by_date(
    date,
    root_path,
    path_fmt,
    fn_pattern,
    fn_ext,
    timestep,
    0,
    num_next_files=n_leadtimes,
)
# Read the observations
R_o, _, metadata_o = io.read_timeseries(fns, importer, **importer_kwargs)
# Convert to mm/h
R_o, metadata_o = conversion.to_rainrate(R_o, metadata_o)
# Upscale data to 2 km
R_o, metadata_o = dimension.aggregate_fields_space(R_o, metadata_o, 2000)
# Compute the verification for the last lead time
# compute the exceedance probability of 0.1 mm/h from the ensemble
P_f = ensemblestats.excprob(R_f[:, -1, :, :], 0.1, ignore_nan=True)
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/postprocessing/ensemblestats.py:97: RuntimeWarning: invalid value encountered in greater_equal
X_[X >= x] = 1.0
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/postprocessing/ensemblestats.py:98: RuntimeWarning: invalid value encountered in less
X_[X < x] = 0.0
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/postprocessing/ensemblestats.py:101: RuntimeWarning: Mean of empty slice
P.append(np.nanmean(X_, axis=0))
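Under the hood, the exceedance probability is simply the fraction of ensemble members at or above the threshold at each pixel, ignoring missing values. A minimal sketch of the idea (the function name is ours, not the pysteps API):

```python
import numpy as np

def exceedance_probability(ensemble, thr):
    """Fraction of ensemble members at or above thr at each grid point,
    with NaNs excluded from the member count."""
    # Mark each member as 1.0 (exceeds) or 0.0, keeping NaNs as NaN
    above = np.where(np.isnan(ensemble), np.nan, (ensemble >= thr).astype(float))
    return np.nanmean(above, axis=0)
```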
roc = verification.ROC_curve_init(0.1, n_prob_thrs=10)
verification.ROC_curve_accum(roc, P_f, R_o[-1, :, :])
fig, ax = plt.subplots()
verification.plot_ROC(roc, ax, opt_prob_thr=True)
ax.set_title("ROC curve (+%i min)" % (n_leadtimes * timestep))
plt.show()

reldiag = verification.reldiag_init(0.1)
verification.reldiag_accum(reldiag, P_f, R_o[-1, :, :])
fig, ax = plt.subplots()
verification.plot_reldiag(reldiag, ax)
ax.set_title("Reliability diagram (+%i min)" % (n_leadtimes * timestep))
plt.show()

rankhist = verification.rankhist_init(R_f.shape[0], 0.1)
verification.rankhist_accum(rankhist, R_f[:, -1, :, :], R_o[-1, :, :])
fig, ax = plt.subplots()
verification.plot_rankhist(rankhist, ax)
ax.set_title("Rank histogram (+%i min)" % (n_leadtimes * timestep))
plt.show()
# sphinx_gallery_thumbnail_number = 5

Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/verification/ensscores.py:197: RuntimeWarning: invalid value encountered in greater_equal
mask_nz = np.logical_or(X_o >= X_min, np.any(X_f >= X_min, axis=1))
Total running time of the script: ( 0 minutes 19.890 seconds)
Handling of no-data in Lucas-Kanade¶
Areas of missing data in radar images are typically caused by visibility limits such as beam blockage and the radar coverage itself. These artifacts can mislead the echo tracking algorithms. For instance, precipitation leaving the domain might be erroneously detected as having nearly stationary velocity.
This example shows how the Lucas-Kanade algorithm can be tuned to avoid the erroneous interpretation of velocities near the maximum range of the radars by buffering the no-data mask in the radar image in order to exclude all vectors detected nearby no-data areas.
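The buffering itself amounts to a morphological dilation of the no-data mask. A minimal sketch of the idea, assuming missing data are encoded as NaN (an illustration, not the pysteps implementation of buffer_mask):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def buffer_nodata_mask(field, buffer_px):
    """Return a boolean mask that is True where data are missing or
    within buffer_px pixels of a missing-data area."""
    nodata = np.isnan(field)
    if buffer_px > 0:
        # Grow the no-data area by buffer_px pixels in every direction
        nodata = binary_dilation(nodata, iterations=buffer_px)
    return nodata
```

Feature points falling inside the buffered mask can then be discarded before the motion vectors are interpolated.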
from datetime import datetime
from matplotlib import cm, colors
import matplotlib.pyplot as plt
import numpy as np
from pysteps import io, motion, nowcasts, rcparams, verification
from pysteps.utils import conversion, transformation
from pysteps.visualization import plot_precip_field, quiver
Read the radar input images¶
First, we will import the sequence of radar composites. You need the pysteps-data archive downloaded and the pystepsrc file configured with the data_source paths pointing to data folders.
# Selected case
date = datetime.strptime("201607112100", "%Y%m%d%H%M")
data_source = rcparams.data_sources["mch"]
root_path = data_source["root_path"]
path_fmt = data_source["path_fmt"]
fn_pattern = data_source["fn_pattern"]
fn_ext = data_source["fn_ext"]
importer_name = data_source["importer"]
importer_kwargs = data_source["importer_kwargs"]
timestep = data_source["timestep"]
# Find the two input files from the archive
fns = io.archive.find_by_date(
    date, root_path, path_fmt, fn_pattern, fn_ext, timestep=5, num_prev_files=1
)
# Read the radar composites
importer = io.get_method(importer_name, "importer")
R, quality, metadata = io.read_timeseries(fns, importer, **importer_kwargs)
del quality # Not used
Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:574: RuntimeWarning: invalid value encountered in greater
if np.any(R > np.nanmin(R)):
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/io/importers.py:575: RuntimeWarning: invalid value encountered in greater
metadata["threshold"] = np.nanmin(R[R > np.nanmin(R)])
# Convert to mm/h
R, metadata = conversion.to_rainrate(R, metadata)
# Keep the reference frame in mm/h and its mask (for plotting purposes)
ref_mm = R[0, :, :].copy()
mask = np.ones(ref_mm.shape)
mask[~np.isnan(ref_mm)] = np.nan
# Log-transform the data [dBR]
R, metadata = transformation.dB_transform(
    R, metadata, threshold=0.1, zerovalue=-15.0
)
# Keep the reference frame in dBR (for plotting purposes)
ref_dbr = R[0].copy()
ref_dbr[ref_dbr < -10] = np.nan
# Plot the reference field
plot_precip_field(ref_mm, title="Reference field")
circle = plt.Circle((620, 400), 100, color="b", clip_on=False, fill=False)
plt.gca().add_artist(circle)
plt.show()

Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/transformation.py:206: RuntimeWarning: invalid value encountered in less
zeros = R < threshold
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/checkouts/v1.1.0/examples/LK_buffer_mask.py:81: RuntimeWarning: invalid value encountered in less
ref_dbr[ref_dbr < -10] = np.nan
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/visualization/precipfields.py:210: RuntimeWarning: invalid value encountered in less
R[R < 0.1] = np.nan
Notice the “half-in, half-out” precipitation area within the blue circle. As we are going to show next, the tracking algorithm can erroneously interpret precipitation leaving the domain as stationary motion.
Also note that the radar image includes NaNs in areas of missing data. These are used by the optical flow algorithm to define the radar mask.
Sparse Lucas-Kanade¶
By setting the optional argument ‘dense=False’ in ‘xy, uv = LK_optflow(...)’, the LK algorithm returns the motion vectors detected by the Lucas-Kanade scheme without interpolating them onto the grid. This allows us to better identify wrongly detected stationary motion in areas where precipitation is leaving the domain (look for the red dots within the blue circle in the figure below).
# get Lucas-Kanade optical flow method
LK_optflow = motion.get_method("LK")
# Mask invalid values
R = np.ma.masked_invalid(R)
R.data[R.mask] = np.nan
# Use default settings (i.e., no buffering of the radar mask)
fd_kwargs1 = {"buffer_mask": 0}
xy, uv = LK_optflow(R, dense=False, fd_kwargs=fd_kwargs1)
plt.imshow(ref_dbr, cmap=plt.get_cmap("Greys"))
plt.imshow(mask, cmap=colors.ListedColormap(["black"]), alpha=0.5)
plt.quiver(
    xy[:, 0],
    xy[:, 1],
    uv[:, 0],
    uv[:, 1],
    color="red",
    angles="xy",
    scale_units="xy",
    scale=0.2,
)
circle = plt.Circle((620, 245), 100, color="b", clip_on=False, fill=False)
plt.gca().add_artist(circle)
plt.title("buffer_mask = 0 (default)")
plt.show()

Out:
/home/docs/checkouts/readthedocs.org/user_builds/pysteps/envs/v1.1.0/lib/python3.7/site-packages/pysteps/utils/images.py:200: RuntimeWarning: invalid value encountered in greater
field_bin = np.ndarray.astype(input_image > thr, "uint8")
By default, the LK algorithm considers missing values as no precipitation, i.e., no-data is treated the same as no-echo. As a result, the fixed boundaries produced by precipitation in contact with no-data areas are interpreted as stationary motion. One way to mitigate this boundary effect is to slightly buffer the no-data mask so that the algorithm ignores all portions of the radar domain that are near no-data areas. This is achieved by passing the keyword argument ‘buffer_mask = 10’ within the feature detection optional arguments ‘fd_kwargs’.
# with buffer
buffer = 10
fd_kwargs2 = {"buffer_mask": buffer}
xy, uv = LK_optflow(R, dense=False, fd_kwargs=fd_kwargs2)
plt.imshow(ref_dbr, cmap=plt.get_cmap("Greys"))
plt.imshow(mask, cmap=colors.ListedColormap(["black"]), alpha=0.5)
plt.quiver(
    xy[:, 0],
    xy[:, 1],
    uv[:, 0],
    uv[:, 1],
    color="red",
    angles="xy",
    scale_units="xy",
    scale=0.2,
)
circle = plt.Circle((620, 245), 100, color="b", clip_on=False, fill=False)
plt.gca().add_artist(circle)
plt.title("buffer_mask = %i" % buffer)
plt.show()

Dense Lucas-Kanade¶
The above displacement vectors produced by the Lucas-Kanade method are now interpolated to produce a full field of motion (i.e., ‘dense=True’). By comparing the speed of the two motion fields, we can easily notice the negative bias that is introduced by the erroneous interpretation of velocities near the maximum range of the radars.
UV1 = LK_optflow(R, dense=True, fd_kwargs=fd_kwargs1)
UV2 = LK_optflow(R, dense=True, fd_kwargs=fd_kwargs2)
V1 = np.sqrt(UV1[0] ** 2 + UV1[1] ** 2)
V2 = np.sqrt(UV2[0] ** 2 + UV2[1] ** 2)
plt.imshow((V1 - V2) / V2, cmap=cm.RdBu_r, vmin=-0.5, vmax=0.5)
plt.colorbar(fraction=0.04, pad=0.04)
plt.title("Relative difference in motion speed")
plt.show()

Notice that the default motion field can be significantly slower (more than 10% slower) because of the presence of wrong stationary motion vectors at the edges.
Forecast skill¶
We are now going to evaluate the benefit of buffering the radar mask by computing the forecast skill in terms of the Spearman correlation coefficient. The extrapolation forecasts are computed using the dense UV motion fields estimated above.
# Get the advection routine and extrapolate the last radar frame by 12 time steps
# (i.e., 1 hour lead time)
extrapolate = nowcasts.get_method("extrapolation")
R[~np.isfinite(R)] = metadata["zerovalue"]
R_f1 = extrapolate(R[-1], UV1, 12)
R_f2 = extrapolate(R[-1], UV2, 12)
# Back-transform to rain rate
R_f1 = transformation.dB_transform(R_f1, threshold=-10.0, inverse=True)[0]
R_f2 = transformation.dB_transform(R_f2, threshold=-10.0, inverse=True)[0]
# Find the verifying observations in the archive
fns = io.archive.find_by_date(
    date,
    root_path,
    path_fmt,
    fn_pattern,
    fn_ext,
    timestep=5,
    num_next_files=12,
)
# Read and convert the radar composites
R_o, _, metadata_o = io.read_timeseries(fns, importer, **importer_kwargs)
R_o, metadata_o = conversion.to_rainrate(R_o, metadata_o)
# Compute Spearman correlation
skill = verification.get_method("corr_s")
score_1 = []
score_2 = []
for i in range(12):
    score_1.append(skill(R_f1[i, :, :], R_o[i + 1, :, :])["corr_s"])
    score_2.append(skill(R_f2[i, :, :], R_o[i + 1, :, :])["corr_s"])
x = (np.arange(12) + 1) * 5 # [min]
plt.plot(x, score_1, label="buffer_mask = 0")
plt.plot(x, score_2, label="buffer_mask = %i" % buffer)
plt.legend()
plt.xlabel("Lead time [min]")
plt.ylabel("Corr. coeff. []")
plt.title("Spearman correlation")
plt.tight_layout()
plt.show()

As expected, the corrected motion field produces better forecast skill already within the first hour into the nowcast.
# sphinx_gallery_thumbnail_number = 2
Total running time of the script: ( 0 minutes 17.877 seconds)
Optical flow methods convergence¶
In this example we test the convergence of the optical flow methods available in pySTEPS using idealized motion fields.
To test the convergence, using an example precipitation field we will:
- Read precipitation field from a file
- Morph the precipitation field using a given motion field (linear or rotor) to generate a sequence of moving precipitation patterns.
- Using the available optical flow methods, retrieve the motion field from the precipitation time sequence (synthetic precipitation observations).
Let’s first load the libraries that we will use.
from datetime import datetime
import time
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.cm import get_cmap
from scipy.ndimage import uniform_filter
import pysteps as stp
from pysteps import motion, io, rcparams
from pysteps.motion.vet import morph
from pysteps.visualization import plot_precip_field, quiver
Load the reference precipitation data¶
First, we will import a radar composite from the archive. You need the pysteps-data archive downloaded and the pystepsrc file configured with the data_source paths pointing to data folders.
# Selected case
date = datetime.strptime("201505151630", "%Y%m%d%H%M")
data_source = rcparams.data_sources["mch"]
root_path = data_source["root_path"]
path_fmt = data_source["path_fmt"]
fn_pattern = data_source["fn_pattern"]
fn_ext = data_source["fn_ext"]
importer_name = data_source["importer"]
importer_kwargs = data_source["importer_kwargs"]
# Find the reference field in the archive
fns = io.archive.find_by_date(date, root_path, path_fmt, fn_pattern, fn_ext,
                              timestep=5, num_prev_files=0)
# Read the reference radar composite
importer = io.get_method(importer_name, "importer")
reference_field, quality, metadata = io.read_timeseries(fns, importer,
                                                        **importer_kwargs)
del quality # Not used
reference_field = np.squeeze(reference_field) # Remove time dimension
# Convert to mm/h
reference_field, metadata = stp.utils.to_rainrate(reference_field, metadata)
# Mask invalid values
reference_field = np.ma.masked_invalid(reference_field)
# Plot the reference precipitation
plot_precip_field(reference_field, title="Reference field")
plt.show()
# Log-transform the data [dBR]
reference_field, metadata = stp.utils.dB_transform(reference_field,
                                                   metadata,
                                                   threshold=0.1,
                                                   zerovalue=-15.0)
print("Precip. pattern shape: " + str(reference_field.shape))
# This suppress nan conversion warnings in plot functions
reference_field.data[reference_field.mask] = np.nan

Out:
Precip. pattern shape: (640, 710)
Synthetic precipitation observations¶
Now we need to create a series of precipitation fields by applying the ideal motion field to the reference precipitation field “n” times.
To evaluate the accuracy of the computed_motion vectors, we will use a relative RMSE measure. Relative MSE = <(expected_motion - computed_motion)^2> / <expected_motion^2>
# Relative RMSE = Rel_RMSE = sqrt(Relative MSE)
#
# - Rel_RMSE = 0%: no error
# - Rel_RMSE = 100%: The retrieved motion field has an average error equal in
#   magnitude to the motion field.
#
# Relative RMSE is computed over a region surrounding the precipitation
# field, where there is enough information to retrieve the motion field.
# The "precipitation region" includes the precipitation pattern plus a margin
# of approximately 20 grid points.
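Expressed as code, the measure described above can be sketched as follows (expected and computed are (2, m, n) motion fields, and mask is the boolean precipitation region):

```python
import numpy as np

def relative_rmse(expected, computed, mask):
    """Relative RMSE between two (2, m, n) motion fields over the
    pixels selected by the boolean mask."""
    mse = ((expected - computed)[:, mask] ** 2).mean()
    return np.sqrt(mse / (expected[:, mask] ** 2).mean())
```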
Let’s create a function to construct different motion fields.
def create_motion_field(input_precip, motion_type):
    """
    Create idealized motion fields to be applied to the reference image.

    Parameters
    ----------
    input_precip: numpy array (lat, lon)
    motion_type : str
        The supported motion fields are:

        - linear_x: (u=2, v=0)
        - linear_y: (u=0, v=2)
        - rotor: rotor field

    Returns
    -------
    ideal_motion : numpy array (u, v)
    """
    # Create an imaginary grid on the image and create a motion field to be
    # applied to the image.
    ny, nx = input_precip.shape

    x_pos = np.arange(nx)
    y_pos = np.arange(ny)
    x, y = np.meshgrid(x_pos, y_pos, indexing='ij')

    ideal_motion = np.zeros((2, nx, ny))

    if motion_type == "linear_x":
        ideal_motion[0, :] = 2  # Motion along x
    elif motion_type == "linear_y":
        ideal_motion[1, :] = 2  # Motion along y
    elif motion_type == "rotor":
        x_mean = x.mean()
        y_mean = y.mean()
        norm = np.sqrt(x * x + y * y)
        mask = norm != 0
        ideal_motion[0, mask] = 2 * (y - y_mean)[mask] / norm[mask]
        ideal_motion[1, mask] = -2 * (x - x_mean)[mask] / norm[mask]
    else:
        raise ValueError("motion_type not supported.")

    # We need to swap the axes because the optical flow methods expect
    # (lat, lon) or (y, x) indexing convention.
    ideal_motion = ideal_motion.swapaxes(1, 2)

    return ideal_motion
Let’s create another function that constructs the temporal sequence of precipitation observations.
def create_observations(input_precip, motion_type, num_times=9):
    """
    Create synthetic precipitation observations by displacing the input field
    using an ideal motion field.

    Parameters
    ----------
    input_precip: numpy array (lat, lon)
        Input precipitation field.
    motion_type : str
        The supported motion fields are:

        - linear_x: (u=2, v=0)
        - linear_y: (u=0, v=2)
        - rotor: rotor field
    num_times: int, optional
        Length of the observations sequence.

    Returns
    -------
    ideal_motion : numpy array
        The ideal motion field applied to the input field.
    synthetic_observations : numpy array
        Sequence of observations.
    """
    ideal_motion = create_motion_field(input_precip, motion_type)

    # The morph function expects (lon, lat) or (x, y) dimensions.
    # Hence, we need to swap the lat, lon axes.
    # NOTE: The motion field passed to the morph function can't have any NaNs.
    # Otherwise, it can result in a segmentation fault.
    morphed_field, mask = morph(input_precip.swapaxes(0, 1),
                                ideal_motion.swapaxes(1, 2))
    mask = np.array(mask, dtype=bool)

    synthetic_observations = np.ma.MaskedArray(morphed_field, mask=mask)
    synthetic_observations = synthetic_observations[np.newaxis, :]

    for t in range(1, num_times):
        morphed_field, mask = morph(synthetic_observations[t - 1],
                                    ideal_motion.swapaxes(1, 2))
        mask = np.array(mask, dtype=bool)

        morphed_field = np.ma.MaskedArray(morphed_field[np.newaxis, :],
                                          mask=mask[np.newaxis, :])

        synthetic_observations = np.ma.concatenate(
            [synthetic_observations, morphed_field], axis=0
        )

    # Swap back to (lat, lon)
    synthetic_observations = synthetic_observations.swapaxes(1, 2)
    synthetic_observations = np.ma.masked_invalid(synthetic_observations)
    synthetic_observations.data[np.ma.getmaskarray(synthetic_observations)] = 0

    return ideal_motion, synthetic_observations
def plot_optflow_method_convergence(input_precip,
                                    optflow_method_name,
                                    motion_type):
    """
    Test the convergence to the actual solution of the optical flow method used.

    Parameters
    ----------
    input_precip: numpy array (lat, lon)
        Input precipitation field.
    optflow_method_name: str
        Optical flow method name
    motion_type : str
        The supported motion fields are:

        - linear_x: (u=2, v=0)
        - linear_y: (u=0, v=2)
        - rotor: rotor field
    """
    if optflow_method_name.lower() != "darts":
        num_times = 2
    else:
        num_times = 9

    ideal_motion, precip_obs = create_observations(input_precip,
                                                   motion_type,
                                                   num_times=num_times)

    oflow_method = motion.get_method(optflow_method_name)

    elapsed_time = time.perf_counter()
    computed_motion = oflow_method(precip_obs, verbose=False)
    print(f"{optflow_method_name} computation time: "
          f"{(time.perf_counter() - elapsed_time):.1f} [s]")

    precip_obs, _ = stp.utils.dB_transform(precip_obs, inverse=True)

    precip_data = precip_obs.max(axis=0)
    precip_data.data[precip_data.mask] = 0

    precip_mask = ((uniform_filter(precip_data, size=20) > 0.1)
                   & ~precip_obs.mask.any(axis=0))

    cmap = get_cmap('jet')
    cmap.set_under('grey', alpha=0.25)
    cmap.set_over('none')

    # Compare retrieved motion field with the ideal one
    plt.figure(figsize=(9, 4))
    plt.subplot(1, 2, 1)
    ax = plot_precip_field(precip_obs[0], title="Reference motion")
    quiver(ideal_motion, step=25, ax=ax)

    plt.subplot(1, 2, 2)
    ax = plot_precip_field(precip_obs[0], title="Retrieved motion")
    quiver(computed_motion, step=25, ax=ax)

    # To evaluate the accuracy of the computed_motion vectors, we will use
    # a relative RMSE measure:
    # Relative MSE = <(expected_motion - computed_motion)^2> / <expected_motion^2>
    # Relative RMSE = sqrt(Relative MSE)
    mse = ((ideal_motion - computed_motion)[:, precip_mask] ** 2).mean()
    rel_mse = mse / (ideal_motion[:, precip_mask] ** 2).mean()
    plt.suptitle(f"{optflow_method_name} "
                 f"Relative RMSE: {np.sqrt(rel_mse) * 100:.2f}%")
    plt.show()
Lucas-Kanade¶
plot_optflow_method_convergence(reference_field, 'LucasKanade', 'linear_x')

Out:
LucasKanade computation time: 3.4 [s]
plot_optflow_method_convergence(reference_field, 'LucasKanade', 'linear_y')

Out:
LucasKanade computation time: 3.4 [s]
plot_optflow_method_convergence(reference_field, 'LucasKanade', 'rotor')

Out:
LucasKanade computation time: 3.4 [s]
Variational Echo Tracking (VET)¶
plot_optflow_method_convergence(reference_field, 'VET', 'linear_x')

Out:
VET computation time: 3.9 [s]
plot_optflow_method_convergence(reference_field, 'VET', 'linear_y')

Out:
VET computation time: 3.5 [s]
plot_optflow_method_convergence(reference_field, 'VET', 'rotor')

Out:
VET computation time: 26.9 [s]
DARTS¶
plot_optflow_method_convergence(reference_field, 'DARTS', 'linear_x')

Out:
DARTS computation time: 2.9 [s]
plot_optflow_method_convergence(reference_field, 'DARTS', 'linear_y')

Out:
DARTS computation time: 2.9 [s]
plot_optflow_method_convergence(reference_field, 'DARTS', 'rotor')
# sphinx_gallery_thumbnail_number = 5

Out:
DARTS computation time: 2.9 [s]
Total running time of the script: ( 0 minutes 59.626 seconds)
pySTEPS reference¶
Release: 1.1.0
Date: Oct 24, 2019
This page gives a comprehensive description of all the modules and functions available in pySTEPS.
pysteps.cascade¶
Methods for constructing bandpass filters and decomposing 2d precipitation fields into different spatial scales.
pysteps.cascade.interface¶
Interface for the cascade module.
get_method (name) |
Return a callable function for the bandpass filter or decomposition method corresponding to the given name. |
pysteps.cascade.bandpass_filters¶
Bandpass filters for separating different spatial scales from two-dimensional images in the frequency domain.
The methods in this module implement the following interface:
filter_xxx(shape, n, optional arguments)
where shape is the shape of the input field and n is the number of frequency bands to use.
The output of each filter function is a dictionary containing the following key-value pairs:
Key | Value |
---|---|
weights_1d | 2d array of shape (n, r) containing 1d filter weights for each frequency band k=1,2,…,n |
weights_2d | 3d array of shape (n, M, int(N/2)+1) containing the 2d filter weights for each frequency band k=1,2,…,n |
central_freqs | 1d array of shape n containing the central frequencies of the filters |
where the input field has shape (M, N) and r = int(max(N, M)/2)+1
By default, the filter weights are normalized so that for any Fourier wavenumber they sum to one.
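This normalization property can be illustrated with a toy set of 1d filter weights. The triangular bands below are our own choice for the sketch (the actual pysteps filters are the uniform and Gaussian ones listed next); the point is only that, after normalization, the bands sum to one at every wavenumber:

```python
import numpy as np

def toy_bandpass_weights(shape, n):
    """Build n triangular 1d filter bands over the radial wavenumbers and
    normalize them so that at every wavenumber the bands sum to one."""
    r = int(max(shape) / 2) + 1
    centers = np.linspace(0, r - 1, n)
    freqs = np.arange(r)
    # Triangular bands of half-width r/n centered on the band centers
    weights = np.maximum(
        0.0, 1.0 - np.abs(freqs[np.newaxis, :] - centers[:, np.newaxis]) / (r / n)
    )
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per wavenumber
    return weights
```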
Available filters¶
filter_uniform (shape, n) |
A dummy filter with one frequency band covering the whole domain. |
filter_gaussian (shape, n[, l_0, …]) |
Implements a set of Gaussian bandpass filters in logarithmic frequency scale. |
pysteps.cascade.decomposition¶
Methods for decomposing two-dimensional images into multiple spatial scales.
The methods in this module implement the following interface:
decomposition_xxx(X, filter, **kwargs)
where X is the input field and filter is a dictionary returned by a filter method implemented in pysteps.cascade.bandpass_filters. Optional parameters can be passed in the keyword arguments. The output of each method is a dictionary with the following key-value pairs:
Key | Value |
---|---|
cascade_levels | three-dimensional array of shape (k,m,n), where k is the number of cascade levels and the input fields have shape (m,n) |
means | list of mean values for each cascade level |
stds | list of standard deviations for each cascade level |
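The key property of such a decomposition is that the spatial scales sum back to the input field. A minimal two-level sketch using a hard FFT cutoff (an illustration of the principle only, not the pysteps decomposition_fft implementation, which uses the bandpass filters above and also returns per-level means and stds):

```python
import numpy as np

def decompose_two_scales(X, cutoff):
    """Split a 2d field into low- and high-frequency components with a
    hard FFT cutoff; the two levels sum back to the original field."""
    F = np.fft.fft2(X)
    ky = np.fft.fftfreq(X.shape[0])[:, np.newaxis]
    kx = np.fft.fftfreq(X.shape[1])[np.newaxis, :]
    low = np.sqrt(kx ** 2 + ky ** 2) <= cutoff  # low-frequency mask
    X_low = np.real(np.fft.ifft2(F * low))
    X_high = np.real(np.fft.ifft2(F * ~low))
    return np.stack([X_low, X_high])
```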
Available methods¶
decomposition_fft (X, filter, **kwargs) |
Decompose a 2d input field into multiple spatial scales by using the Fast Fourier Transform (FFT) and a bandpass filter. |
pysteps.extrapolation¶
Extrapolation module functions and interfaces.
pysteps.extrapolation.interface¶
The functions in the extrapolation module implement the following interface:
extrapolate(extrap, precip, velocity, num_timesteps,
            outval=np.nan, **keywords)
where extrap is an extrapolator object returned by the initialize function, precip is a (m,n) array containing the input precipitation field to be advected and velocity is a (2,m,n) array containing the x- and y-components of the m x n advection field. num_timesteps is an integer specifying the number of time steps to extrapolate. The optional argument outval specifies the value for pixels advected from outside the domain. Optional keyword arguments specific to a given extrapolation method are passed as a dictionary.
The output of each method is an array R_e that includes the time series of extrapolated fields of shape (num_timesteps, m, n).
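The simplest method conforming to this interface is Eulerian persistence, which ignores the advection field altogether. A minimal sketch (the function name is ours; see eulerian_persistence below for the actual pysteps version):

```python
import numpy as np

def eulerian_persistence_sketch(precip, velocity, num_timesteps, outval=np.nan):
    """Repeat the last observed (m, n) field num_timesteps times,
    returning an array of shape (num_timesteps, m, n)."""
    del velocity, outval  # unused by persistence
    return np.repeat(precip[np.newaxis, :, :], num_timesteps, axis=0)
```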
get_method (name) |
Return two-element tuple for the extrapolation method corresponding to the given name. |
eulerian_persistence (precip, velocity, …) |
A dummy extrapolation method to apply Eulerian persistence to a two-dimensional precipitation field. |
pysteps.extrapolation.semilagrangian¶
Implementation of the semi-Lagrangian method of Germann et al. (2002) [GZ2002].
extrapolate (precip, velocity, num_timesteps) |
Apply semi-Lagrangian backward extrapolation to a two-dimensional precipitation field. |
pysteps.io¶
Methods for browsing data archives, reading 2d precipitation fields and writing forecasts into files.
pysteps.io.interface¶
Interface for the io module.
get_method (name, method_type) |
Return a callable function for the method corresponding to the given name. |
pysteps.io.archive¶
Utilities for finding archived files that match the given criteria.
find_by_date (date, root_path, path_fmt, …) |
List input files whose timestamp matches the given date. |
pysteps.io.importers¶
Methods for importing files containing two-dimensional radar mosaics.
The methods in this module implement the following interface:
import_xxx(filename, optional arguments)
where xxx is the name (or abbreviation) of the file format and filename is the name of the input file.
The output of each method is a three-element tuple containing a two-dimensional radar mosaic, the corresponding quality field and a metadata dictionary. If the file contains no quality information, the quality field is set to None. Pixels containing missing data are set to nan.
The metadata dictionary contains the following recommended key-value pairs:
Key | Value |
---|---|
projection | PROJ.4-compatible projection definition |
x1 | x-coordinate of the lower-left corner of the data raster (meters) |
y1 | y-coordinate of the lower-left corner of the data raster (meters) |
x2 | x-coordinate of the upper-right corner of the data raster (meters) |
y2 | y-coordinate of the upper-right corner of the data raster (meters) |
xpixelsize | grid resolution in x-direction (meters) |
ypixelsize | grid resolution in y-direction (meters) |
yorigin | a string specifying the location of the first element in the data raster w.r.t. the y-axis: ‘upper’ = upper border, ‘lower’ = lower border |
institution | name of the institution that provides the data |
unit | the physical unit of the data: ‘mm/h’, ‘mm’ or ‘dBZ’ |
transform | the transformation of the data: None, ‘dB’, ‘Box-Cox’ or others |
accutime | the accumulation time in minutes of the data, float |
threshold | the rain/no rain threshold with the same unit, transformation and accutime of the data. |
zerovalue | the value assigned to the no rain pixels with the same unit, transformation and accutime of the data. |
zr_a | the Z-R constant a in Z = a*R**b |
zr_b | the Z-R exponent b in Z = a*R**b |
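Given the zr_a and zr_b constants from the metadata, reflectivity in dBZ can be converted to rain rate in mm/h by inverting the Z-R relation; a short sketch (the default constants below are illustrative only; use the values provided in the importer metadata):

```python
import numpy as np

def dbz_to_rainrate(dbz, zr_a=316.0, zr_b=1.5):
    """Convert reflectivity in dBZ to rain rate in mm/h by inverting
    Z = zr_a * R ** zr_b."""
    z = 10.0 ** (dbz / 10.0)  # dBZ -> linear reflectivity Z
    return (z / zr_a) ** (1.0 / zr_b)
```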
Available Importers¶
import_bom_rf3 (filename, **kwargs) |
Import a NetCDF radar rainfall product from the BoM Rainfields3. |
import_fmi_geotiff (filename, **kwargs) |
Import a reflectivity field (dBZ) from an FMI GeoTIFF file. |
import_fmi_pgm (filename, **kwargs) |
Import an 8-bit PGM radar reflectivity composite from the FMI archive. |
import_mch_gif (filename, product, unit, accutime) |
Import an 8-bit GIF radar reflectivity composite from the MeteoSwiss archive. |
import_mch_hdf5 (filename, **kwargs) |
Import a precipitation field (and optionally the quality field) from a MeteoSwiss HDF5 file conforming to the ODIM specification. |
import_mch_metranet (filename, product, unit, …) |
Import an 8-bit bin radar reflectivity composite from the MeteoSwiss archive. |
import_opera_hdf5 (filename, **kwargs) |
Import a precipitation field (and optionally the quality field) from an OPERA HDF5 file conforming to the ODIM specification. |
import_knmi_hdf5 (filename, **kwargs) |
Import a precipitation or reflectivity field (and optionally the quality field) from an HDF5 file conforming to the KNMI Data Centre specification. |
pysteps.io.nowcast_importers¶
Methods for importing nowcast files.
The methods in this module implement the following interface:
import_xxx(filename, optional arguments)
where xxx is the name (or abbreviation) of the file format and filename is the name of the input file.
The output of each method is a two-element tuple containing the nowcast array and a metadata dictionary.
The metadata dictionary contains the following mandatory key-value pairs:
Key | Value |
---|---|
projection | PROJ.4-compatible projection definition |
x1 | x-coordinate of the lower-left corner of the data raster (meters) |
y1 | y-coordinate of the lower-left corner of the data raster (meters) |
x2 | x-coordinate of the upper-right corner of the data raster (meters) |
y2 | y-coordinate of the upper-right corner of the data raster (meters) |
xpixelsize | grid resolution in x-direction (meters) |
ypixelsize | grid resolution in y-direction (meters) |
yorigin | a string specifying the location of the first element in the data raster w.r.t. the y-axis: ‘upper’ = upper border, ‘lower’ = lower border |
institution | name of the institution that provides the data |
timestep | time step of the input data (minutes) |
unit | the physical unit of the data: ‘mm/h’, ‘mm’ or ‘dBZ’ |
transform | the transformation of the data: None, ‘dB’, ‘Box-Cox’ or others |
accutime | the accumulation time in minutes of the data, float |
threshold | the rain/no rain threshold with the same unit, transformation and accutime of the data. |
zerovalue | the value assigned to the no rain pixels with the same unit, transformation and accutime of the data. |
Available Nowcast Importers¶
import_netcdf_pysteps (filename, \*\*kwargs) |
Read a nowcast or a nowcast ensemble from a NetCDF file conforming to the CF 1.7 specification. |
pysteps.io.exporter¶
Methods for exporting forecasts of 2d precipitation fields into various file formats.
Each exporter method in this module has its own initialization function that implements the following interface:
initialize_forecast_exporter_xxx(filename, startdate, timestep,
                                 n_timesteps, shape, n_ens_members,
                                 metadata, incremental=None)
where xxx is the name (or abbreviation) of the file format.
This function creates the file and writes the metadata. The datasets are written by calling pysteps.io.exporters.export_forecast_dataset(), and the file is closed by calling pysteps.io.exporters.close_forecast_file().
The arguments in the above are defined as follows:
Argument | Type/values | Description |
---|---|---|
filename | str | name of the output file |
startdate | datetime.datetime | start date of the forecast |
timestep | int | time step of the forecast (minutes) |
n_timesteps | int | number of time steps in the forecast; this argument is ignored if incremental is set to ‘timestep’ |
shape | tuple | two-element tuple defining the shape (height,width) of the forecast grids |
n_ens_members | int | number of ensemble members in the forecast; this argument is ignored if incremental is set to ‘member’ |
metadata | dict | metadata dictionary containing the projection,x1,x2,y1,y2 and unit attributes described in the documentation of pysteps.io.importers |
incremental | {None, ‘timestep’, ‘member’} | Allow incremental writing of datasets into the netCDF file. The available options are: ‘timestep’ = write a forecast or a forecast ensemble for a given time step; ‘member’ = write a forecast sequence for a given ensemble member |
The return value is a dictionary containing an exporter object. This can be used with pysteps.io.exporters.export_forecast_dataset() to write datasets into the given file format.
Available Exporters¶
initialize_forecast_exporter_kineros (…[, …]) |
Initialize a KINEROS2 Rainfall .pre file as specified in https://www.tucson.ars.ag.gov/kineros/. |
initialize_forecast_exporter_netcdf (…[, …]) |
Initialize a netCDF forecast exporter. |
Generic functions¶
export_forecast_dataset (F, exporter) |
Write a forecast array into a file. |
close_forecast_file (exporter) |
Close the file associated with a forecast exporter. |
pysteps.io.readers¶
Module with the reader functions.
read_timeseries (inputfns, importer, \*\*kwargs) |
Read a time series of input files using the methods implemented in the pysteps.io.importers module and stack them into a 3d array of shape (num_timesteps, height, width). |
pysteps.motion¶
Implementations of optical flow methods.
pysteps.motion.interface¶
Interface for the motion module. It returns a callable optical flow routine for computing the motion field.
The methods in the motion module implement the following interface:
motion_method(precip, **keywords)
where precip is a (T,m,n) array containing a sequence of T two-dimensional input images of shape (m,n). The first dimension represents the images time dimension and the value of T depends on the type of the method.
The output is a three-dimensional array of shape (2,m,n) containing the dense x- and y-components of the motion field, in units of pixels per timestep of the input array precip.
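To make these shape conventions concrete, here is a small numpy-only sketch (it does not call pysteps) that builds a synthetic two-frame sequence of shape (T,m,n), recovers a constant translation by brute-force correlation search, and packs the estimate into a (2,m,n) motion field. This loosely mirrors the idea behind the constant-advection estimator, but is only an illustration of the interface conventions:

```python
import numpy as np

# Build a synthetic two-frame sequence with a known translation.
rng = np.random.default_rng(42)
frame0 = rng.random((64, 64))
true_shift = (3, -2)  # (rows, cols), i.e. pixels per timestep
frame1 = np.roll(frame0, true_shift, axis=(0, 1))
precip = np.stack([frame0, frame1])  # input sequence, shape (T, m, n)

# Brute-force search for the integer translation maximizing correlation.
best_shift, best_corr = (0, 0), -np.inf
for dy in range(-5, 6):
    for dx in range(-5, 6):
        corr = np.corrcoef(np.roll(frame0, (dy, dx), axis=(0, 1)).ravel(),
                           frame1.ravel())[0, 1]
        if corr > best_corr:
            best_corr, best_shift = corr, (dy, dx)

# Pack the estimate as a dense motion field of shape (2, m, n):
# index 0 holds the x-component, index 1 the y-component.
velocity = np.empty((2,) + frame0.shape)
velocity[0] = best_shift[1]
velocity[1] = best_shift[0]
print(precip.shape, velocity.shape, best_shift)
```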
get_method (name) |
Return a callable function for the optical flow method corresponding to the given name. |
pysteps.motion.constant¶
Implementation of a constant advection field estimation by maximizing the correlation between two images.
constant (R, \*\*kwargs) |
Compute a constant advection field by finding a translation vector that maximizes the correlation between two successive images. |
pysteps.motion.darts¶
Implementation of the DARTS algorithm.
DARTS (input_images, \*\*kwargs) |
Compute the advection field from a sequence of input images by using the DARTS method. |
pysteps.motion.lucaskanade¶
The Lucas-Kanade (LK) local feature tracking module.
This module implements the interface to the local Lucas-Kanade routine available in OpenCV.
For its dense method, it additionally interpolates the sparse vectors over a regular grid to return a motion field.
dense_lucaskanade (input_images[, lk_kwargs, …]) |
Run the Lucas-Kanade optical flow routine and interpolate the motion vectors. |
track_features (prvs_image, next_image, points) |
Interface to the OpenCV Lucas-Kanade features tracking algorithm (cv.calcOpticalFlowPyrLK). |
pysteps.motion.proesmans¶
Implementation of the anisotropic diffusion method of Proesmans et al. (1994).
proesmans (input_images[, lam, num_iter, …]) |
Implementation of the anisotropic diffusion method of Proesmans et al. |
pysteps.motion.vet¶
Variational Echo Tracking (VET) module.
This module implements the VET algorithm presented by Laroche and Zawadzki (1995) and used in the McGill Algorithm for Prediction by Lagrangian Extrapolation (MAPLE) described in Germann and Zawadzki (2002).
The morphing and the cost functions are implemented in Cython and parallelized for performance.
vet (input_images[, sectors, smooth_gain, …]) |
Variational Echo Tracking Algorithm presented in Laroche and Zawadzki (1995) and used in the McGill Algorithm for Prediction by Lagrangian Extrapolation (MAPLE) described in Germann and Zawadzki (2002). |
vet_cost_function (sector_displacement_1d, …) |
|
vet_cost_function_gradient (\*args, \*\*kwargs) |
Compute the vet cost function gradient. |
morph (image, displacement[, gradient]) |
Morph image by applying a displacement field (Warping). |
round_int (scalar) |
Round a number to the nearest integer. |
ceil_int (scalar) |
Round a number up to the nearest integer. |
get_padding (dimension_size, sectors) |
Get the padding at each side of one dimension of the image so that the new image dimension is evenly divisible by the specified number of sectors. |
pysteps.noise¶
Implementations of noise generators for stochastic nowcasting methods.
pysteps.noise.interface¶
Interface for the noise module.
get_method (name) |
Return two callable functions to initialize and generate 2d perturbations of precipitation or velocity fields. |
pysteps.noise.fftgenerators¶
Methods for noise generators based on FFT filtering of white noise.
The methods in this module implement the following interface for filter initialization depending on their parametric or nonparametric nature:
initialize_param_2d_xxx_filter(X, **kwargs)
or:
initialize_nonparam_2d_xxx_filter(X, **kwargs)
where X is an array of shape (m, n) or (t, m, n) that defines the target field and optional parameters are supplied as keyword arguments.
The output of each initialization method is a dictionary containing the keys F and input_shape. The first is a two-dimensional array of shape (m, int(n/2)+1) that defines the filter. The second one is the shape of the input field for the filter.
The methods in this module implement the following interface for the generation of correlated noise:
generate_noise_2d_xxx_filter(F, randstate=np.random, seed=None, **kwargs)
where F (m, n) is a filter returned from the corresponding initialization method, and randstate and seed can be used to set the random generator and its seed. Additional keyword arguments can be included as a dictionary.
The output of each generator method is a two-dimensional array containing the field of correlated noise cN of shape (m, n).
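The filtering idea can be sketched with numpy alone (an illustration of the principle, not the pysteps implementation): the amplitude spectrum of a target field X serves as the filter F, which is applied to white noise in the Fourier domain. With the real-input FFT the filter has the (m, int(n/2)+1) shape stated above:

```python
import numpy as np

# Target field whose spatial correlation structure we want to reproduce.
rng = np.random.default_rng(0)
X = rng.random((128, 128))

# Nonparametric filter: the amplitude spectrum of X, shape (m, n//2 + 1).
F = np.abs(np.fft.rfft2(X))

# Filter white noise in the Fourier domain and standardize the result.
white = rng.standard_normal(X.shape)
noise = np.fft.irfft2(np.fft.rfft2(white) * F, s=X.shape)
noise = (noise - noise.mean()) / noise.std()
print(F.shape, noise.shape)
```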
initialize_param_2d_fft_filter (X, \*\*kwargs) |
Takes one or more 2d input fields and fits two spectral slopes, beta1 and beta2, to produce one parametric, global and isotropic Fourier filter. |
initialize_nonparam_2d_fft_filter (X, \*\*kwargs) |
Takes one or more 2d input fields and produces one nonparametric, global and anisotropic Fourier filter. |
initialize_nonparam_2d_nested_filter (X[, …]) |
Function to compute the local Fourier filters using a nested approach. |
initialize_nonparam_2d_ssft_filter (X, \*\*kwargs) |
Function to compute the local Fourier filters using the Short-Space Fourier filtering approach. |
generate_noise_2d_fft_filter (F[, randstate, …]) |
Produces a field of correlated noise using global Fourier filtering. |
generate_noise_2d_ssft_filter (F[, …]) |
Function to compute the locally correlated noise using a nested approach. |
pysteps.noise.motion¶
Methods for generating perturbations of two-dimensional motion fields.
The methods in this module implement the following interface for initialization:
initialize_xxx(V, pixelsperkm, timestep, optional arguments)
where V (2,m,n) is the motion field and pixelsperkm and timestep describe the spatial and temporal resolution of the motion vectors. The output of each initialization method is a dictionary containing the perturbator that can be supplied to generate_xxx.
The methods in this module implement the following interface for the generation of a motion perturbation field:
generate_xxx(perturbator, t, randstate=np.random, seed=None)
where perturbator is a dictionary returned by an initialize_xxx method. An optional random generator and seed can be specified with the randstate and seed arguments. The output of each generator method is an array of shape (2,m,n) containing the x- and y-components of the motion vector perturbations, where m and n are determined from the perturbator.
get_default_params_bps_par () |
Return a tuple containing the default velocity perturbation parameters given in [BPS2006] for the parallel component. |
get_default_params_bps_perp () |
Return a tuple containing the default velocity perturbation parameters given in [BPS2006] for the perpendicular component. |
initialize_bps (V, pixelsperkm, timestep[, …]) |
Initialize the motion field perturbator described in [BPS2006]. |
generate_bps (perturbator, t) |
Generate a motion perturbation field by using the method described in [BPS2006]. |
pysteps.noise.utils¶
Miscellaneous utility functions related to generation of stochastic perturbations.
compute_noise_stddev_adjs (R, R_thr_1, …[, …]) |
Apply a scale-dependent adjustment factor to the noise fields used in STEPS. |
pysteps.nowcasts¶
Implementation of deterministic and ensemble nowcasting methods.
pysteps.nowcasts.interface¶
Interface for the nowcasts module. It returns a callable function for computing nowcasts.
The methods in the nowcasts module implement the following interface:
forecast(precip, velocity, num_timesteps, **keywords)
where precip is a (m,n) array with input precipitation field to be advected and velocity is a (2,m,n) array containing the x- and y-components of the m x n advection field. num_timesteps is an integer specifying the number of time steps to forecast. The interface accepts optional keyword arguments specific to the given method.
The output depends on the type of the method. For deterministic methods, the output is a three-dimensional array of shape (num_timesteps,m,n) containing a time series of nowcast precipitation fields. For stochastic methods that produce an ensemble, the output is a four-dimensional array of shape (num_ensemble_members,num_timesteps,m,n). The time step of the output is taken from the inputs.
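These input and output conventions can be illustrated with a deliberately crude numpy-only stand-in for a deterministic nowcast (the library uses proper semi-Lagrangian advection with sub-pixel accuracy; this toy assumes a spatially constant, integer-valued motion field):

```python
import numpy as np

def toy_forecast(precip, velocity, num_timesteps):
    # Assume a spatially constant, integer-valued motion field;
    # velocity[0] is the x-component, velocity[1] the y-component.
    dx = int(velocity[0, 0, 0])
    dy = int(velocity[1, 0, 0])
    frames = []
    field = precip
    for _ in range(num_timesteps):
        field = np.roll(field, (dy, dx), axis=(0, 1))
        frames.append(field)
    return np.stack(frames)  # shape (num_timesteps, m, n)

precip = np.zeros((8, 8))
precip[2, 2] = 1.0
velocity = np.ones((2, 8, 8))  # one pixel right and down per time step

out = toy_forecast(precip, velocity, num_timesteps=3)
print(out.shape)
```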
get_method (name) |
Return a callable function for computing nowcasts. |
pysteps.nowcasts.extrapolation¶
Implementation of extrapolation-based nowcasting methods.
forecast (precip, velocity, num_timesteps[, …]) |
Generate a nowcast by applying a simple advection-based extrapolation to the given precipitation field. |
pysteps.nowcasts.sprog¶
Implementation of the S-PROG method described in [Seed2003]
forecast (R, V, n_timesteps[, …]) |
Generate a nowcast by using the Spectral Prognosis (S-PROG) method. |
pysteps.nowcasts.sseps¶
Implementation of the Short-space ensemble prediction system (SSEPS) method. Essentially, SSEPS is a localized version of STEPS.
By localization we mean the use of a subset of the observations to estimate model parameters that are distributed in space. The short-space approach used in [NBSG2017] is generalized to the whole nowcasting system. This essentially boils down to a moving-window localization of the nowcasting procedure, whereby all parameters are estimated over a subdomain of prescribed size.
forecast (R, metadata, V, n_timesteps[, …]) |
Generate a nowcast ensemble by using the Short-space ensemble prediction system (SSEPS) method. |
pysteps.nowcasts.steps¶
Implementation of the STEPS stochastic nowcasting method as described in [Seed2003], [BPS2006] and [SPN2013].
forecast (R, V, n_timesteps[, n_ens_members, …]) |
Generate a nowcast ensemble by using the Short-Term Ensemble Prediction System (STEPS) method. |
pysteps.nowcasts.utils¶
Module with common utilities used by nowcasts methods.
print_ar_params (PHI) |
Print the parameters of an AR(p) model. |
print_corrcoefs (GAMMA) |
Print the correlation coefficients. |
stack_cascades (R_d, n_levels[, donorm]) |
Stack the given cascades into a larger array. |
recompose_cascade (R, mu, sigma) |
Recompose a cascade by inverting the normalization and summing the cascade levels. |
pysteps.postprocessing¶
Methods for post-processing of forecasts.
pysteps.postprocessing.ensemblestats¶
Methods for the computation of ensemble statistics.
mean (X[, ignore_nan, X_thr]) |
Compute the mean value from a forecast ensemble field. |
excprob (X, X_thr[, ignore_nan]) |
For a given forecast ensemble field, compute exceedance probabilities for the given intensity thresholds. |
pysteps.postprocessing.probmatching¶
Methods for matching the probability distribution of two data sets.
compute_empirical_cdf (bin_edges, hist) |
Compute an empirical cumulative distribution function from the given histogram. |
nonparam_match_empirical_cdf (R, R_trg) |
Matches the empirical CDF of the initial array with the empirical CDF of a target array. |
pmm_init (bin_edges_1, cdf_1, bin_edges_2, cdf_2) |
Initialize a probability matching method (PMM) object from binned cumulative distribution functions (CDF). |
pmm_compute (pmm, x) |
For a given PMM object and x-coordinate, compute the probability-matched value. |
shift_scale (R, f, rain_fraction_trg, …) |
Find the shift and scale needed to obtain the required second_moment and rain area. |
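The core idea of nonparametric empirical-CDF matching can be sketched in a few lines of numpy (an illustration of the principle, not the pysteps implementation): each value in the initial array is replaced by the target-array value occupying the same empirical quantile, so the target distribution is imposed while the ranks of the initial array are preserved:

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.exponential(1.0, size=1000)     # initial sample
R_trg = rng.gamma(2.0, 2.0, size=1000)  # target sample

# Rank each value of R, then read off the value with the same rank
# in the sorted target sample.
ranks = np.argsort(np.argsort(R))
matched = np.sort(R_trg)[ranks]

# The matched sample now has exactly the target's empirical distribution,
# while preserving the ordering (ranks) of the initial array.
print(np.allclose(np.sort(matched), np.sort(R_trg)))
```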
pysteps.timeseries¶
Methods and models for time series analysis.
pysteps.timeseries.autoregression¶
Methods related to autoregressive AR(p) models.
adjust_lag2_corrcoef1 (gamma_1, gamma_2) |
A simple adjustment of lag-2 temporal autocorrelation coefficient to ensure that the resulting AR(2) process is stationary when the parameters are estimated from the Yule-Walker equations. |
adjust_lag2_corrcoef2 (gamma_1, gamma_2) |
A more advanced adjustment of lag-2 temporal autocorrelation coefficient to ensure that the resulting AR(2) process is stationary when the parameters are estimated from the Yule-Walker equations. |
ar_acf (gamma[, n]) |
Compute theoretical autocorrelation function (ACF) from the AR(p) model with lag-l, l=1,2,…,p temporal autocorrelation coefficients. |
estimate_ar_params_yw (gamma) |
Estimate the parameters of an AR(p) model from the Yule-Walker equations using the given set of autocorrelation coefficients. |
iterate_ar_model (X, phi[, EPS]) |
Apply an AR(p) model to a time-series of two-dimensional fields. |
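For reference, the AR(2) case of the Yule-Walker estimates can be worked out in closed form. The sketch below writes the solution in plain Python (illustrative only; the pysteps functions should be used in practice) and checks the stationarity conditions that the adjust_lag2_corrcoef functions are designed to enforce:

```python
# Yule-Walker estimates for an AR(2) model, given the lag-1 and lag-2
# temporal autocorrelation coefficients gamma_1 and gamma_2:
#   phi_1 = gamma_1 * (1 - gamma_2) / (1 - gamma_1**2)
#   phi_2 = (gamma_2 - gamma_1**2) / (1 - gamma_1**2)
gamma_1, gamma_2 = 0.9, 0.8

phi_1 = gamma_1 * (1.0 - gamma_2) / (1.0 - gamma_1**2)
phi_2 = (gamma_2 - gamma_1**2) / (1.0 - gamma_1**2)

# Stationarity conditions for an AR(2) process:
stationary = abs(phi_2) < 1 and phi_1 + phi_2 < 1 and phi_2 - phi_1 < 1

# Consistency check: a stationary AR(2) process satisfies
# gamma_1 = phi_1 / (1 - phi_2), so the fit reproduces the input.
gamma_1_check = phi_1 / (1.0 - phi_2)
print(round(phi_1, 4), round(phi_2, 4), stationary)
```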
pysteps.timeseries.correlation¶
Methods for computing spatial and temporal correlation of time series of two-dimensional fields.
temporal_autocorrelation (X[, MASK]) |
Compute lag-l autocorrelation coefficients gamma_l, l=1,2,…,n-1, for a time series of n two-dimensional input fields. |
pysteps.utils¶
Implementation of miscellaneous utility functions.
pysteps.utils.interface¶
Interface for the utils module.
get_method (name, \*\*kwargs) |
Return a callable function for the utility method corresponding to the given name. |
pysteps.utils.arrays¶
Utility methods for creating and processing arrays.
compute_centred_coord_array (M, N) |
Compute a 2D coordinate array, where the origin is at the center. |
pysteps.utils.cleansing¶
Data cleansing routines for pysteps.
decluster (coord, input_array, scale[, …]) |
Decluster a set of sparse data points by aggregating, that is, taking the median value of all values lying within a certain distance (i.e., a cluster). |
detect_outliers (input_array, thr[, coord, …]) |
Detect outliers in a (multivariate and georeferenced) dataset. |
pysteps.utils.conversion¶
Methods for converting physical units.
to_rainrate (R, metadata[, zr_a, zr_b]) |
Convert to rain rate [mm/h]. |
to_raindepth (R, metadata[, zr_a, zr_b]) |
Convert to rain depth [mm]. |
to_reflectivity (R, metadata[, zr_a, zr_b]) |
Convert to reflectivity [dBZ]. |
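These conversions rest on a Z-R power-law relation, Z = zr_a * R**zr_b, together with dBZ = 10 * log10(Z). The numpy-only sketch below uses the commonly cited Marshall-Palmer coefficients (a = 200, b = 1.6) as hypothetical defaults and is an illustration of the math, not the pysteps implementation:

```python
import numpy as np

zr_a, zr_b = 200.0, 1.6  # hypothetical Z-R coefficients (Marshall-Palmer)

def rainrate_to_dbz(R):
    """Convert rain rate (mm/h) to reflectivity (dBZ) via Z = a * R**b."""
    return 10.0 * np.log10(zr_a * R**zr_b)

def dbz_to_rainrate(dbz):
    """Invert the conversion: recover rain rate (mm/h) from dBZ."""
    return (10.0 ** (dbz / 10.0) / zr_a) ** (1.0 / zr_b)

R = np.array([0.1, 1.0, 10.0, 50.0])
dbz = rainrate_to_dbz(R)
print(np.round(dbz, 1), np.allclose(dbz_to_rainrate(dbz), R))
```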
pysteps.utils.dimension¶
Functions to manipulate array dimensions.
aggregate_fields_time (R, metadata, …[, …]) |
Aggregate fields in time. |
aggregate_fields_space (R, metadata, space_window) |
Upscale fields in space. |
aggregate_fields (R, window_size[, axis, method]) |
Aggregate fields. |
clip_domain (R, metadata[, extent]) |
Clip the field domain by geographical coordinates. |
square_domain (R, metadata[, method, inverse]) |
Either pad or crop a field to obtain a square domain. |
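The spatial aggregation above amounts to block-wise averaging. A minimal numpy sketch of that operation (assuming the field dimensions are divisible by the window size; not the pysteps implementation):

```python
import numpy as np

def upscale(field, window):
    """Average non-overlapping window x window blocks of a 2d field."""
    m, n = field.shape  # m and n are assumed divisible by window
    return field.reshape(m // window, window,
                         n // window, window).mean(axis=(1, 3))

field = np.arange(16, dtype=float).reshape(4, 4)
coarse = upscale(field, 2)
print(coarse)
```

Averaging (rather than summing) preserves the field mean, which is the appropriate choice for intensities such as rain rate.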
pysteps.utils.fft¶
Interface module for different FFT methods.
get_numpy (shape[, fftn_shape]) |
|
get_scipy (shape[, fftn_shape]) |
|
get_pyfftw (shape[, fftn_shape, n_threads]) |
pysteps.utils.images¶
Image processing routines for pysteps.
ShiTomasi_detection (input_image[, …]) |
Interface to the OpenCV Shi-Tomasi features detection method to detect corners in an image. |
morph_opening (input_image, thr, n) |
Filter out small scale noise on the image by applying a binary morphological opening, that is, erosion followed by dilation. |
pysteps.utils.interpolate¶
Interpolation routines for pysteps.
rbfinterp2d (coord, input_array, xgrid, ygrid) |
Fast 2-D grid interpolation of a sparse (multivariate) array using a radial basis function. |
pysteps.utils.spectral¶
Utility methods for processing and analyzing precipitation fields in the Fourier domain.
rapsd (Z[, fft_method, return_freq, d]) |
Compute radially averaged power spectral density (RAPSD) from the given 2D input field. |
remove_rain_norain_discontinuity (R) |
Function to remove the rain/no-rain discontinuity. |
pysteps.utils.transformation¶
Methods for transforming data values.
boxcox_transform (R[, metadata, Lambda, …]) |
The one-parameter Box-Cox transformation. |
dB_transform (R[, metadata, threshold, …]) |
Methods to transform precipitation intensities to/from dB units. |
NQ_transform (R[, metadata, inverse]) |
The normal quantile transformation as in Bogner et al (2012). |
sqrt_transform (R[, metadata, inverse]) |
Square-root transform. |
pysteps.verification¶
Methods for verification of deterministic, probabilistic and ensemble forecasts.
pysteps.verification.interface¶
Interface for the verification module.
get_method (name[, type]) |
Return a callable function for the method corresponding to the given verification score. |
pysteps.verification.detcatscores¶
Forecast evaluation and skill scores for deterministic categorical (dichotomous) forecasts.
det_cat_fct (pred, obs, thr[, scores, axis]) |
Calculate simple and skill scores for deterministic categorical (dichotomous) forecasts. |
det_cat_fct_init (thr[, axis]) |
Initialize a contingency table object. |
det_cat_fct_accum (contab, pred, obs) |
Accumulate the frequency of “yes” and “no” forecasts and observations in the contingency table. |
det_cat_fct_compute (contab[, scores]) |
Compute simple and skill scores for deterministic categorical (dichotomous) forecasts from a contingency table object. |
pysteps.verification.detcontscores¶
Forecast evaluation and skill scores for deterministic continuous forecasts.
det_cont_fct (pred, obs[, scores, axis, …]) |
Calculate simple and skill scores for deterministic continuous forecasts. |
det_cont_fct_init ([axis, conditioning, thr]) |
Initialize a verification error object. |
det_cont_fct_accum (err, pred, obs) |
Accumulate the forecast error in the verification error object. |
det_cont_fct_compute (err[, scores]) |
Compute simple and skill scores for deterministic continuous forecasts from a verification error object. |
pysteps.verification.ensscores¶
Evaluation and skill scores for ensemble forecasts.
ensemble_skill (X_f, X_o, metric, \*\*kwargs) |
Compute mean ensemble skill for a given skill metric. |
ensemble_spread (X_f, metric, \*\*kwargs) |
Compute mean ensemble spread for a given skill metric. |
rankhist (X_f, X_o[, X_min, normalize]) |
Compute rank histogram counts and optionally normalize the histogram. |
rankhist_init (num_ens_members[, X_min]) |
Initialize a rank histogram object. |
rankhist_accum (rankhist, X_f, X_o) |
Accumulate forecast-observation pairs to the given rank histogram. |
rankhist_compute (rankhist[, normalize]) |
Return the rank histogram counts and optionally normalize the histogram. |
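The rank histogram logic can be sketched with numpy alone (illustrative, not the pysteps implementation): for each forecast-observation pair, count the rank of the observation within the sorted ensemble; a reliable ensemble yields an approximately flat histogram over the n_members + 1 possible ranks:

```python
import numpy as np

rng = np.random.default_rng(7)
n_members, n_cases = 9, 5000

# Ensembles and observations drawn from the same distribution:
# the resulting rank histogram should be approximately flat.
ens = rng.standard_normal((n_cases, n_members))
obs = rng.standard_normal(n_cases)

# Rank of each observation within its ensemble (0 .. n_members).
ranks = (ens < obs[:, None]).sum(axis=1)
counts = np.bincount(ranks, minlength=n_members + 1)
freqs = counts / counts.sum()
print(freqs.round(3))
```

A U-shaped histogram would instead indicate an under-dispersive ensemble, and a dome-shaped one an over-dispersive ensemble.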
pysteps.verification.lifetime¶
Estimation of precipitation lifetime from a decaying verification score function (e.g. autocorrelation function).
lifetime (X_s, X_t[, rule]) |
Compute the average lifetime by integrating the correlation function as a function of lead time. |
lifetime_init ([rule]) |
Initialize a lifetime object. |
lifetime_accum (lifetime, X_s, X_t) |
Compute the lifetime by integrating the correlation function and accumulate the result into the given lifetime object. |
lifetime_compute (lifetime) |
Compute the average value from the lifetime object. |
pysteps.verification.plots¶
Methods for plotting verification results.
plot_intensityscale (iss[, fig, vmin, vmax, …]) |
Plot an intensity-scale verification table with a color bar and axis labels. |
plot_rankhist (rankhist[, ax]) |
Plot a rank histogram. |
plot_reldiag (reldiag[, ax]) |
Plot a reliability diagram. |
plot_ROC (ROC[, ax, opt_prob_thr]) |
Plot a ROC curve. |
pysteps.verification.probscores¶
Evaluation and skill scores for probabilistic forecasts.
CRPS (X_f, X_o) |
Compute the continuous ranked probability score (CRPS). |
CRPS_init () |
Initialize a CRPS object. |
CRPS_accum (CRPS, X_f, X_o) |
Compute the average continuous ranked probability score (CRPS) for a set of forecast ensembles and the corresponding observations and accumulate the result to the given CRPS object. |
CRPS_compute (CRPS) |
Compute the averaged values from the given CRPS object. |
reldiag (P_f, X_o, X_min[, n_bins, min_count]) |
Compute the x- and y- coordinates of the points in the reliability diagram. |
reldiag_init (X_min[, n_bins, min_count]) |
Initialize a reliability diagram object. |
reldiag_accum (reldiag, P_f, X_o) |
Accumulate the given probability-observation pairs into the reliability diagram. |
reldiag_compute (reldiag) |
Compute the x- and y- coordinates of the points in the reliability diagram. |
ROC_curve (P_f, X_o, X_min[, n_prob_thrs, …]) |
Compute the ROC curve and its area for the given probability forecasts and observations. |
ROC_curve_init (X_min[, n_prob_thrs]) |
Initialize a ROC curve object. |
ROC_curve_accum (ROC, P_f, X_o) |
Accumulate the given probability-observation pairs into the given ROC object. |
ROC_curve_compute (ROC[, compute_area]) |
Compute the ROC curve and its area from the given ROC object. |
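The ensemble form of the CRPS can be written down directly: CRPS = mean(|X_i - x|) - 0.5 * mean(|X_i - X_j|) for ensemble members X_i and observation x. A numpy-only sketch of that formula (illustrative, not the pysteps implementation):

```python
import numpy as np

def crps_ensemble(ens, obs):
    """Empirical CRPS for one ensemble (1d array) and scalar observation."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.abs(ens - obs).mean()
    term2 = 0.5 * np.abs(ens[:, None] - ens[None, :]).mean()
    return term1 - term2

# A perfect deterministic forecast scores zero ...
perfect = crps_ensemble([2.0, 2.0, 2.0], 2.0)
# ... and an ensemble centred on the observation beats a biased one.
centred = crps_ensemble([1.0, 2.0, 3.0], 2.0)
biased = crps_ensemble([4.0, 5.0, 6.0], 2.0)
print(perfect, centred < biased)
```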
pysteps.verification.spatialscores¶
Skill scores for spatial forecasts.
intensity_scale (X_f, X_o, name, thrs[, …]) |
Compute an intensity-scale verification score. |
intensity_scale_init (name, thrs[, scales, …]) |
Initialize an intensity-scale verification object. |
intensity_scale_accum (intscale, X_f, X_o) |
Compute and update the intensity-scale verification scores. |
intensity_scale_compute (intscale) |
Return the intensity scale matrix. |
binary_mse (X_f, X_o, thr[, wavelet]) |
Compute an intensity-scale verification as the MSE of the binary error. |
fss (X_f, X_o, thr, scale) |
Compute the fractions skill score (FSS) for a deterministic forecast field and the corresponding observation field. |
fss_init (thr, scale) |
Initialize a fractions skill score (FSS) verification object. |
fss_accum (fss, X_f, X_o) |
Accumulate forecast-observation pairs to an FSS object. |
fss_compute (fss) |
Compute the FSS. |
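The FSS compares neighborhood exceedance fractions rather than pixel values: binarize both fields at a threshold, compute the fraction of exceedances in a scale x scale box around each pixel, then FSS = 1 - mean((P_f - P_o)**2) / (mean(P_f**2) + mean(P_o**2)). A slow but transparent numpy-only sketch (illustrative, not the pysteps implementation):

```python
import numpy as np

def fractions(binary, scale):
    """Fraction of exceedances in a scale x scale box around each pixel."""
    m, n = binary.shape
    pad = scale // 2  # scale is assumed odd
    padded = np.pad(binary.astype(float), pad)
    out = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            out[i, j] = padded[i:i + scale, j:j + scale].mean()
    return out

def fss(X_f, X_o, thr, scale):
    P_f = fractions(X_f >= thr, scale)
    P_o = fractions(X_o >= thr, scale)
    num = np.mean((P_f - P_o) ** 2)
    den = np.mean(P_f**2) + np.mean(P_o**2)
    return 1.0 - num / den

rng = np.random.default_rng(3)
X_o = rng.random((32, 32))
X_f = np.roll(X_o, 2, axis=1)  # forecast displaced by two pixels

# The FSS improves as the evaluation scale grows past the displacement.
print(round(fss(X_f, X_o, 0.5, 1), 3), round(fss(X_f, X_o, 0.5, 9), 3))
```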
pysteps.visualization¶
Methods for plotting precipitation and motion fields.
pysteps.visualization.animations¶
Functions to produce animations for pysteps.
animate (R_obs[, nloops, timestamps, R_fct, …]) |
Function to animate observations and forecasts in pysteps. |
pysteps.visualization.motionfields¶
Functions to plot motion fields.
quiver (UV[, ax, map, geodata, …]) |
Function to plot a motion field as arrows. |
streamplot (UV[, ax, map, geodata, …]) |
Function to plot a motion field as streamlines. |
pysteps.visualization.precipfields¶
Methods for plotting precipitation fields.
plot_precip_field (R[, type, map, geodata, …]) |
Function to plot a precipitation intensity or probability field with a colorbar. |
get_colormap (type[, units, colorscale]) |
Function to generate a colormap (cmap) and norm. |
pysteps.visualization.spectral¶
Methods for plotting Fourier spectra.
plot_spectrum1d (fft_freq, fft_power[, …]) |
Function to plot in log-log a radially averaged Fourier spectrum. |
pysteps.visualization.utils¶
Miscellaneous utility functions for the visualization module.
parse_proj4_string (proj4str) |
Construct a dictionary from a PROJ.4 projection string. |
proj4_to_basemap (proj4str) |
Convert a PROJ.4 projection string into a dictionary that can be expanded as keyword arguments to mpl_toolkits.basemap.Basemap.__init__. |
proj4_to_cartopy (proj4str) |
Convert a PROJ.4 projection string into a Cartopy coordinate reference system (crs) object. |
reproject_geodata (geodata, t_proj4str[, …]) |
Reproject geodata and optionally create a grid in a new projection. |
Bibliography¶
pySTEPS developer guide¶
In this section you can find a series of guidelines and tutorials for contributing to the pySTEPS project.
Contributing to Pysteps¶
Welcome! pySTEPS is a community-driven initiative for developing and maintaining an easy to use, modular, free and open source Python framework for short-term ensemble prediction systems.
If you haven’t already, take a look at the project’s README.rst file and the pysteps documentation.
There are many ways to contribute to pysteps:
- contributing bug reports and feature requests
- contributing documentation
- code contributions (new features or bug fixes)
- contributing usage examples
Our main forum for discussion is the project’s GitHub issue tracker. This is the right place to start a discussion, report a bug, or request a new feature.
Workflow for code contributions¶
We welcome all kinds of contributions, from documentation updates and bug fixes to new features. If your new feature will take a lot of work, we recommend creating an issue with the enhancement tag to encourage discussion.
We use the usual GitHub pull-request flow, which may be familiar to you if you’ve contributed to other projects on GitHub.
First Time Contributors
If you are interested in helping to improve pysteps, the best way to get started is by looking for “Good First Issue” in the issue tracker.
In a nutshell, the main steps to follow for contributing to pysteps are:
- Setup the development environment
- Fork the repository
- Create a new branch for each contribution
- Read the Code Style guide
- Work on your changes
- Test your changes
- Push to your fork repository and create a new PR in GitHub.
Setup the Development environment¶
The recommended way to set up the developer environment is Anaconda (commonly referred to as Conda). Conda quickly installs, runs and updates packages and their dependencies. It also allows you to easily create, save, load and switch between different environments on your local computer.
The developer environment can be created from the environment_dev.yml file in the project’s root directory by running:
conda env create -f environment_dev.yml
This will create the pysteps_dev environment that can be activated using:
conda activate pysteps_dev
Once the environment is created, the package can be installed in development mode, so that the project appears to be installed yet is still editable from the source tree. See the instructions in the Installing pysteps section.
Fork the repository¶
Once you have set the development environment, the next step is creating your local copy of the repository where you will commit your modifications. The steps to follow are:
Set up Git in your computer.
Create a GitHub account (if you don’t have one).
Fork the repository in your GitHub.
Clone local copy of your fork. For example:
git clone https://github.com/<your-account>/pysteps.git
Done! Now you have a local copy of the pysteps git repository. If you are new to GitHub, many useful tutorials are available online.
Create a new branch¶
As a contributor, you should make all new contributions in a new branch under your forked repository. Working on the master branch is reserved for Core Contributors only. Core Contributors are developers that actively work on and maintain the repository; they are the only ones who accept pull requests and push commits directly to the pysteps repository.
For more information on how to create and work with branches, see “Branches in a Nutshell” in the Git documentation.
Code Style¶
Although it is not strictly enforced yet, we strongly suggest following the PEP8 coding standard. Two popular modules used to check PEP8 compliance are pycodestyle and pylint, which can be installed using pip:
pip install pylint
pip install pycodestyle
or using anaconda:
conda install pylint
conda install pycodestyle
For further instructions, refer to their official documentation.
For quick reference, these are the most important good coding practices to follow:
Always use 4 spaces for indentation (don’t use tabs).
Write UTF-8 (add # -*- coding: utf-8 -*- at the top of each file).
Max line-length: 79 characters.
Always indent wrapped code for readability.
Avoid extraneous whitespace.
Don’t use whitespace to line up assignment operators (=, :).
Spaces around = for assignment.
No spaces around = for default parameter values (keywords).
Spaces around mathematical operators, but group them sensibly.
No multiple statements on the same line.
Naming conventions:
Function names, variable names, and filenames should be descriptive and self-explanatory. Avoid abbreviations that are ambiguous or unfamiliar to readers outside your project, and do not abbreviate by deleting letters within a word. Avoid single-letter variables where possible and use more verbose names for clarity. An exception to this are loop indices (i, j, k, etc.).
The following table summarizes the conventions:
Type | Public | Internal |
---|---|---|
Packages | lower_with_under | |
Modules | lower_with_under | _lower_with_under |
Classes | CapWords | _CapWords |
Exceptions | CapWords | |
Functions | lower_with_under() | _lower_with_under() |
Global/Class Constants | CAPS_WITH_UNDER | _CAPS_WITH_UNDER |
Global/Class Variables | lower_with_under | _lower_with_under |
Instance Variables | lower_with_under | _lower_with_under (protected) |
Method Names | lower_with_under() | _lower_with_under() (protected) |
Function/Method Parameters | lower_with_under | |
Local Variables | lower_with_under | |
Source: Google’s python style guide
Create an ignored variable:
If you need to assign something (for instance, when unpacking) but will not need that variable, use __ (double underscore):
precip, __, metadata = import_bom_rf3('example_file.bom')
Many Python style guides recommend the use of a single underscore “_” rather than the double underscore “__” recommended here. The issue is that “_” is commonly used as an alias for the gettext() function, and is also used at the interactive prompt to hold the value of the last operation. Using a double underscore instead is just as clear and eliminates the risk of accidentally interfering with either of these other use cases. (Source: https://docs.python-guide.org/writing/style/)
Zen of Python (PEP 20), the guiding principles for Python’s design:
>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
For a detailed description of a pythonic code style check these guidelines:
Auto-formatters
Formatting code to the PEP 8 style is a time-consuming process. Instead of manually formatting code before a commit, you can use auto-formatting packages, which automatically format Python code to conform to the PEP 8 style guide.
If your development environment does not include auto-formatting capabilities, we recommend using black, which can be installed by any of the following options:
conda install black
# For the latest version:
conda install -c conda-forge black
pip install black
Check the official documentation for more information.
Docstrings
Every module, function, or class must have a docstring that describes its purpose and how to use it, following the conventions described in PEP 257 and the NumPy docstring format.
Here is a summary of the most important rules:
- Triple quotes are used even when the string fits on one line; this makes it easy to expand the docstring later.
- A one-line docstring is a phrase ending in a period.
- All docstrings should be written in the imperative mood ("""Return some value.""") rather than the descriptive mood ("""Returns some value.""").
Here is an example of a docstring:
def adjust_lag2_corrcoef1(gamma_1, gamma_2):
    """A simple adjustment of lag-2 temporal autocorrelation coefficient to
    ensure that the resulting AR(2) process is stationary when the parameters
    are estimated from the Yule-Walker equations.

    Parameters
    ----------
    gamma_1 : float
        Lag-1 temporal autocorrelation coefficient.
    gamma_2 : float
        Lag-2 temporal autocorrelation coefficient.

    Returns
    -------
    out : float
        The adjusted lag-2 correlation coefficient.
    """
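The body of the function is omitted above. For illustration only (this is a sketch, not necessarily the actual pysteps implementation), such an adjustment could clamp gamma_2 into the AR(2) stationarity region implied by the Yule-Walker equations, which for |gamma_1| < 1 reduces to 2*gamma_1**2 - 1 < gamma_2 < 1:

```python
def adjust_lag2_corrcoef1(gamma_1, gamma_2):
    """Clamp the lag-2 autocorrelation so the implied AR(2) process,
    with parameters estimated from the Yule-Walker equations, is stationary.

    Illustrative sketch, not the actual pysteps implementation.
    """
    eps = 1e-10
    # Yule-Walker gives phi_2 = (gamma_2 - gamma_1**2) / (1 - gamma_1**2).
    # The stationarity condition phi_2 > -1 yields gamma_2 > 2*gamma_1**2 - 1.
    gamma_2 = max(gamma_2, 2.0 * gamma_1 * gamma_1 - 1.0 + eps)
    # The remaining stationarity-triangle conditions (phi_1 + phi_2 < 1,
    # phi_2 - phi_1 < 1, phi_2 < 1) all reduce to gamma_2 < 1.
    gamma_2 = min(gamma_2, 1.0 - eps)
    return gamma_2
```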
Working on changes¶
IMPORTANT
If your changes will take a significant amount of work, we highly recommend opening an issue first, explaining what you want to do and why. It is better to start the discussion early, in case other contributors disagree with what you would like to do or have ideas that will help you do it.
Collaborators guidelines
As a collaborator, all new contributions should be made in a new branch under your forked repository. Working on the master branch is reserved for Core Contributors only. Core Contributors are developers that actively work on and maintain the repository. They are the only ones who accept pull requests and push commits directly to the pysteps repository.
To include the contributions of collaborators, we use the usual GitHub pull-request flow. In their simplest form, pull requests are a mechanism for a collaborator to notify the pysteps project about a completed feature.
Once your proposed changes are ready, you need to create a pull request via your GitHub account. Afterward, the core developers review the code and merge it into the master branch. Be aware that pull requests are more than just a notification: they are also an excellent place for discussing the proposed feature. If there is any problem with the changes, the other project collaborators can post feedback and the author of the commit can fix the problems by pushing follow-up commits to the feature branch.
Do not squash your commits after you have submitted a pull request, as this erases context during the review. The commits will be squashed when the pull request is merged.
To keep your forked repository clean, we suggest deleting branches once their Pull Requests (PRs) are accepted and merged.
Once you’ve created a pull request, you can push commits from your topic branch to add them to your existing pull request. These commits will appear in chronological order within your pull request and the changes will be visible in the “Files changed” tab.
Other contributors can review your proposed changes, add review comments, contribute to the pull request discussion, and even add commits to the pull request.
Important: each PR should only address a single objective (e.g. fix a bug, improve documentation, etc.). Pushing changes to an open PR that are outside its objective is highly discouraged. Under these circumstances, the recommended way to proceed is to create a new PR for those changes, clearly explaining their goal.
Testing your changes¶
Before committing changes or creating pull requests, check that all the tests in the pysteps suite pass. See Testing pySTEPS for instructions on running the tests.
Although it is not strictly required, we suggest creating minimal tests for new contributions to ensure that they achieve the desired behavior. Pysteps uses the pytest framework, which is easy to use and also supports complex functional testing for applications and libraries. Check the official pytest documentation for more information.
The tests should be placed under the pysteps.tests module. The file should follow the test_*.py naming convention and have a descriptive name.
A quick way to get familiar with the pytest syntax and the testing procedures is to check the Python scripts in the pysteps test module.
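As a sketch of what such a test file could look like (the file name and the function under test here are hypothetical, not actual pysteps API):

```python
# test_dbtransform.py -- a minimal, self-contained example of a pytest test.
# pytest discovers files named test_*.py and runs every test_* function in them.
import numpy as np


def db_transform(precip):
    """Stand-in for a function under test: convert rain rates to decibels."""
    return 10.0 * np.log10(precip)


def test_db_transform_roundtrip():
    precip = np.array([0.1, 1.0, 10.0])
    decibels = db_transform(precip)
    # Plain assert statements (or numpy testing helpers) are all pytest needs.
    np.testing.assert_allclose(10.0 ** (decibels / 10.0), precip)


def test_db_transform_is_monotonic():
    decibels = db_transform(np.array([1.0, 2.0, 5.0]))
    assert np.all(np.diff(decibels) > 0)
```

Running pytest in the folder containing this file collects and executes both test functions.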
Core developer guidelines¶
Working directly on the master branch is discouraged and is reserved only for small changes and updates that do not compromise the stability of the code. The master branch is a production branch that is ready to be deployed (cloned, installed, and ready to use). In consequence, this master branch is meant to be stable.
The pysteps repository uses Travis CI, a Continuous Integration service that automatically runs a series of tests every time you commit to GitHub. In that way, your modifications are tested along with the entire package.
Pushing untested or work-in-progress changes to the master branch can potentially introduce bugs or break the stability of the package. Since the tests take around 10 minutes and are run only after the commit is pushed, any errors introduced in that commit will be noticed only after the stability of the master branch has already been compromised. In addition, if other developers start working on a new feature from master, they may start from a potentially broken state.
Instead, it is recommended to work on each new feature in its own branch, which can be pushed to the central repository for backup/collaboration. When you are done with the development work on the feature, you can merge the feature branch into master or submit a pull request. This approach has two main advantages:
- Every commit on the feature branch is tested using Travis CI. If the tests fail, they do not affect the master branch.
- Once the new feature, improvement, or bug fix is finished and all the tests pass, the commit history can be squashed into a single commit and then merged into the master branch.
This approach helps to keep the commit history clean and allows experimentation in the branch without compromising the stability of the package.
Processing pull requests¶
Core developers should follow these rules when processing pull requests:
Always wait for tests to pass before merging PRs.
Use “Squash and merge” to merge PRs.
Delete branches for merged PRs (by core devs pushing to the main repo).
Edit the final commit message before merging so that it conforms to the following style, which helps keep the git log output clean:
- When merging a multi-commit PR, make sure that the commit message doesn’t contain the local history from the committer or the review history from the PR. Edit the message to describe only the end state of the PR.
- Make sure there is a single newline at the end of the commit message. This way there is a single empty line between commits in git log output.
- Split lines as needed so that the maximum line length of the commit message is under 80 characters, including the subject line.
- Capitalize the subject and each paragraph.
- Make sure that the subject of the commit message has no trailing dot.
- Use the imperative mood in the subject line (e.g. “Fix typo in README”).
- If the PR fixes an issue, make sure something like “Fixes #xxx.” occurs in the body of the message (not in the subject).
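Putting these rules together, the final commit message of a hypothetical PR (the issue number below is invented for illustration) could look like:

Fix incorrect unit handling in the rain-rate importer

The importer assumed rain rates in mm/h while the input files
store accumulations in mm per 5 minutes. Convert the values
on import.

Fixes #123.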
Preparing a new release¶
Core developers should follow these steps to prepare a new release (version):
Before creating the actual release in GitHub, be sure that every item in the following checklist was followed:
- In the file setup.py, update the version="X.X.X" keyword in the setup function.
- Update the version in PKG-INFO file.
- If new dependencies were added to pysteps since the last release, add them to the environment.yml, requirements.txt, and requirements_dev.txt files.
Create a new release in GitHub following these guidelines. Include a detailed changelog in the release.
Generating the source distribution for new pysteps version and upload it to the Python Package Index (PyPI). See Packaging the pysteps project for a detailed description of this process.
Update the conda-forge pysteps-feedstock following these guidelines: Updating the conda-forge pysteps-feedstock
Credits¶
This document was based on the contributor guides of two Python open-source projects and the official GitHub documentation:
- Py-Art: Copyright (c) 2013, UChicago Argonne, LLC. License.
- mypy: Copyright (c) 2015-2016 Jukka Lehtosalo and contributors. MIT License.
- Official GitHub documentation (https://help.github.com)
Testing pySTEPS¶
The pysteps distribution includes a small test suite for some of the modules. To run the tests, the pytest package is needed. To install it, in a terminal run:
pip install pytest
Automatic testing¶
The simplest way to run the pysteps test suite is using tox and the tox-conda plugin (conda required). To install these packages, activate your conda development environment and run:
conda install -c conda-forge tox tox-conda
Then, to run the tests, from the repo’s root run:
tox # Run pytest
tox -e install # Test package installation
tox -e black # Test for black formatting warnings
Manual testing¶
Example data¶
The pySTEPS built-in tests require the pySTEPS example data to be installed. See the installation instructions in the Installing the Example data section.
Test an installed package¶
After the package is installed, you can launch the test suite from any directory by running:
pytest --pyargs pysteps
Test from sources¶
Before testing the package directly from the sources, we need to build the extensions in-place. To do that, from the root pysteps folder run:
python setup.py build_ext -i
Now, the package sources can be tested in-place using the pytest command in the root of the pysteps source directory. E.g.:
pytest -v --tb=line
Building the docs¶
The pysteps documentation is built using Sphinx, a tool that makes it easy to create intelligent and beautiful documentation.
The documentation is located in the doc folder in the pysteps repo.
Automatic build¶
The simplest way to build the documentation is using tox and the tox-conda plugin (conda needed). To install these packages activate your conda development environment and run:
conda install -c conda-forge tox tox-conda
Then, to build the documentation, from the repo’s root run:
tox -e docs
This will create a conda environment with all the necessary dependencies and the data needed to build the examples.
Manual build¶
To build the docs, you need to satisfy a few more dependencies related to Sphinx, which are specified in the doc/requirements.txt file:
- sphinx
- numpydoc
- sphinxcontrib.bibtex
- sphinx_rtd_theme
- sphinx_gallery
You can install these packages by running pip install -r doc/requirements.txt.
In addition to these requirements, building the example gallery in the documentation requires the pysteps example data (pysteps-data). To download and install this data, see the installation instructions in the Installing the Example data section.
Once these requirements are met, to build the documentation, in the doc folder run:
make html
This will build the documentation along with the example gallery.
The built documentation (HTML web pages) will be available in doc/_build/html/. To correctly visualize the documentation, you need to set up and run a local HTTP server. To do that, in the doc/_build/html/ directory run:
python -m http.server
This will set up a local HTTP server on 0.0.0.0, port 8000. To see the built documentation, open the following URL in a browser: http://0.0.0.0:8000/
Packaging the pysteps project¶
The Python Package Index (PyPI) is a software repository for the Python programming language. PyPI helps you find and install software developed and shared by the Python community.
The following guide to packaging pysteps was adapted from the official PyPI documentation.
Generating the source distribution¶
The first step is to generate a source distribution (sdist) for the pysteps library. These are archives that are uploaded to the Package Index and can be installed by pip.
To create the sdist package we need the setuptools package installed.
Then, from the root folder of the pysteps source run:
python setup.py sdist
Once this command completes, it should generate a tar.gz (source archive) file in the dist directory:
dist/
pysteps-a.b.c.tar.gz
where a.b.c denotes the version number.
Uploading the source distribution to the archive¶
The last step is to upload your package to the Python Package Index.
Important
Before we actually upload the distribution to the Python Package Index, we will test it on Test PyPI. Test PyPI is a separate instance of the package index that allows us to try the distribution without affecting the real index (PyPI). Because Test PyPI has a separate database from the actual PyPI, you’ll need a separate user account specifically for Test PyPI. You can register your account at https://test.pypi.org/account/register/.
Once you are registered, you can use twine to upload the distribution packages. Alternatively, the package can be uploaded manually from the Test PyPI page.
If Twine is not installed, you can install it by running:
pip install twine
or:
conda install twine
Test PyPI¶
To upload the recently created source distribution (dist/pysteps-a.b.c.tar.gz) under the dist directory run:
twine upload --repository-url https://test.pypi.org/legacy/ dist/pysteps-a.b.c.tar.gz
where a.b.c denotes the version number.
You will be prompted for the username and password you registered with Test PyPI. After the command completes, you should see output similar to this:
Uploading distributions to https://test.pypi.org/legacy/
Enter your username: [your username]
Enter your password:
Uploading pysteps-a.b.c.tar.gz
100%|█████████████████████| 4.25k/4.25k [00:01<00:00, 3.05kB/s]
Once uploaded, your package should be viewable on TestPyPI; for example: https://test.pypi.org/project/pysteps
Before uploading the package to the official Python Package Index, test that the package can be installed using pip.
Automatic test¶
The simplest way to test that the package can be installed using pip is using tox and the tox-conda plugin (conda required). To install these packages, activate your conda development environment and run:
conda install -c conda-forge tox tox-conda
Then, to test the installation in a minimal environment and in an environment with all the dependencies (full env), run:
tox -r -e pypi_test       # Test the installation in a minimal env
tox -r -e pypi_test_full  # Test the installation in a full env
Manual test¶
To manually test the installation in a new environment, create a copy of the basic development environment using the environment_dev.yml file in the root folder of the pysteps project:
conda env create -f environment_dev.yml -n pysteps_test
Then we activate the environment:
source activate pysteps_test
or:
conda activate pysteps_test
If the environment pysteps_test was already created, remove any version of pysteps already installed:
pip uninstall pysteps
Now, install the pysteps package from test.pypi.org:
pip install --index-url https://test.pypi.org/simple/ pysteps
To test that the installation was successful, from a folder different than the pysteps source, run:
pytest --pyargs pysteps
If any tests fail, check the sources or consider creating a new release that fixes those bugs.
Once the sdist package has been tested, we can safely upload it to the official PyPI repository with:
twine upload dist/pysteps-a.b.c.tar.gz
Now, pysteps can be installed by simply running:
pip install pysteps
As an extra sanity measure, it is recommended to test the pysteps package installed from the official PyPI repository (instead of Test PyPI).
Automatic test¶
Similarly to the automatic test in the Test PyPI section, to test the installation from PyPI in a clean environment, run:
tox -r -e pypi
Manual test¶
Follow test instructions in Test PyPI section.
Updating the conda-forge pysteps-feedstock¶
Here we will describe the steps to update the pysteps conda-forge feedstock. This tutorial is intended for the core developers listed as maintainers of the conda recipe in the conda-forge/pysteps-feedstock.
Examples of situations requiring an update of the pysteps-feedstock are:
- A new release
- Fixing errors in the pysteps package
The following tutorial was adapted from the official conda-forge.org documentation, released under the CC 4.0 license.
What is conda-forge?¶
Conda-forge is a community effort that provides conda packages for a wide range of software. The conda team from Anaconda packages a multitude of packages and provides them to all users free of charge in their default channel.
conda-forge is a community-led conda channel of installable packages that allows users to share software that is not included in the official Anaconda repository. The main advantages of conda-forge are:
- all packages are shared in a single channel named conda-forge
- care is taken that all packages are up-to-date
- common standards ensure that all packages have compatible versions
- by default, packages are built for macOS, linux amd64 and windows amd64
In order to provide high-quality builds, the process has been automated into the conda-forge GitHub organization. The conda-forge organization contains one repository for each of the installable packages. Such a repository is known as a feedstock.
The actual pysteps feedstock is https://github.com/conda-forge/pysteps-feedstock
A feedstock is made up of a conda recipe (the instructions on what and how to build the package) and the necessary configurations for automatic building using freely available continuous integration services.
See the official conda-forge documentation for more details.
Maintain pysteps conda-forge package¶
All pysteps core developers listed as maintainers of the pysteps feedstock are given push access to the feedstock repository. This means that a maintainer can create branches in the main repository.
Every time a new commit is pushed/merged in the feedstock repository, conda-forge runs a Continuous Integration (CI) system that runs quality checks, builds the pysteps recipe on Windows, OSX, and Linux, and publishes the built packages in the conda-forge channel.
Important¶
For updates, using a branch in the main repo and a subsequent Pull Request (PR) to the master branch is discouraged because:
- CI is run on both the branch and on the pull request (if any) associated with that branch. This wastes CI resources.
- Branches are automatically published by the CI system. This means that for every push, the packages will be published before the PR is actually merged.
For these reasons, to update the feedstock, the maintainers need to fork the feedstock, create a new branch in that fork, push to that branch in the fork, and then open a PR to the conda-forge repo.
Workflow for updating a pysteps-feedstock¶
The mandatory steps to update the pysteps-feedstock are:
Forking the pysteps-feedstock.
Clone the forked repository in your computer:
git clone https://github.com/<your-github-id>/pysteps-feedstock
Syncing your fork with the pysteps feedstock. This step is only needed if your local repository is not up to date with the pysteps-feedstock. If you just cloned the forked pysteps-feedstock, you can ignore this step.
Make sure you are on the master branch:
git checkout master
Register conda-forge’s feedstock with:
git remote add upstream https://github.com/conda-forge/pysteps-feedstock
Fetch the latest updates with git fetch upstream:
git fetch upstream
Pull in the latest changes into your master branch:
git rebase upstream/master
Create a new branch:
git checkout -b <branch-name>
Update the recipe and commit the changes in this new branch.
See the next section, “Updating pysteps recipe”, for more details.
Commit your changes:
git commit -m "<commit message>"
Push your changes to GitHub:
git push origin <branch-name>
Propose a Pull Request
- Create a pull request via the web interface
Updating pysteps recipe¶
The pysteps-feedstock should be updated when:
- We release a new pysteps version
- We need to fix errors in the pysteps package
New release¶
When a new pysteps version is released, the new version needs to be uploaded to the Python Package Index (PyPI) before updating the pysteps feedstock (see Packaging the pysteps project for more details). This step is needed because the conda recipe uses PyPI to build the pysteps conda package.
Once the new version is available in the PyPI, the conda recipe in pysteps-feedstock/recipe/meta.yaml needs to be updated by:
- Updating version and hash
- Checking the dependencies
- When the package version changes, reset the build number back to 0.
The build number is increased when the source code for the package has not changed but you need to make a new build. As a rule of thumb, the build number is increased whenever a new package with the same version needs to be uploaded to the conda-forge channel.
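For illustration, the parts of recipe/meta.yaml typically touched by a version update look like this (the version and hash below are placeholders, not real values):

{% set version = "a.b.c" %}

package:
  name: pysteps
  version: {{ version }}

source:
  url: https://pypi.io/packages/source/p/pysteps/pysteps-{{ version }}.tar.gz
  sha256: <sha256 of the new sdist>

build:
  number: 0  # reset to 0 for a new version; increase by 1 for a rebuild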
Recipe fixing¶
If the recipe must be updated but the source code for the package has not changed, the build_number in the conda recipe (pysteps-feedstock/recipe/meta.yaml) needs to be increased by 1.
Some examples of when the build number needs to be increased are:
- Updating the pinned dependencies
- Fixing wrong dependencies
Bibliography¶
[BPS06] N. E. Bowler, C. E. Pierce, and A. W. Seed. STEPS: a probabilistic precipitation forecasting scheme which merges an extrapolation nowcast with downscaled NWP. Quarterly Journal of the Royal Meteorological Society, 132(620):2127–2155, 2006. doi:10.1256/qj.04.100.
[BrockerS07] J. Bröcker and L. A. Smith. Increasing the reliability of reliability diagrams. Weather and Forecasting, 22(3):651–661, 2007. doi:10.1175/WAF993.1.
[CRS04] B. Casati, G. Ross, and D. B. Stephenson. A new intensity-scale approach for the verification of spatial precipitation forecasts. Meteorological Applications, 11(2):141–154, 2004. doi:10.1017/S1350482704001239.
[EWW+13] E. Ebert, L. Wilson, A. Weigel, M. Mittermaier, P. Nurmi, P. Gill, M. Göber, S. Joslyn, B. Brown, T. Fowler, and A. Watkins. Progress and challenges in forecast verification. Meteorological Applications, 20(2):130–139, 2013. doi:10.1002/met.1392.
[GZ02] U. Germann and I. Zawadzki. Scale-dependence of the predictability of precipitation from continental radar images. Part I: description of the methodology. Monthly Weather Review, 130(12):2859–2873, 2002. doi:10.1175/1520-0493(2002)130<2859:SDOTPO>2.0.CO;2.
[Her00] H. Hersbach. Decomposition of the continuous ranked probability score for ensemble prediction systems. Weather and Forecasting, 15(5):559–570, 2000. doi:10.1175/1520-0434(2000)015<0559:DOTCRP>2.0.CO;2.
[NBS+17] D. Nerini, N. Besic, I. Sideris, U. Germann, and L. Foresti. A non-stationary stochastic ensemble generator for radar rainfall fields based on the short-space Fourier transform. Hydrology and Earth System Sciences, 21(6):2777–2797, 2017. doi:10.5194/hess-21-2777-2017.
[PvGPO94] M. Proesmans, L. van Gool, E. Pauwels, and A. Oosterlinck. Determination of optical flow and its discontinuities using non-linear diffusion. In J.-O. Eklundh, editor, Computer Vision — ECCV ‘94, volume 801 of Lecture Notes in Computer Science, pages 294–304. Springer Berlin Heidelberg, 1994.
[PCH18] S. Pulkkinen, V. Chandrasekar, and A.-M. Harri. Nowcasting of precipitation in the high-resolution Dallas-Fort Worth (DFW) urban radar remote sensing network. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(8):2773–2787, 2018. doi:10.1109/JSTARS.2018.2840491.
[RL08] N. M. Roberts and H. W. Lean. Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Monthly Weather Review, 136(1):78–97, 2008. doi:10.1175/2007MWR2123.1.
[RC11] E. Ruzanski and V. Chandrasekar. Scale filtering for improved nowcasting performance in a high-resolution X-band radar network. IEEE Transactions on Geoscience and Remote Sensing, 49(6):2296–2307, June 2011.
[RCW11] E. Ruzanski, V. Chandrasekar, and Y. Wang. The CASA nowcasting system. Journal of Atmospheric and Oceanic Technology, 28(5):640–655, 2011. doi:10.1175/2011JTECHA1496.1.
[See03] A. W. Seed. A dynamic and spatial scaling approach to advection forecasting. Journal of Applied Meteorology, 42(3):381–388, 2003. doi:10.1175/1520-0450(2003)042<0381:ADASSA>2.0.CO;2.
[SPN13] A. W. Seed, C. E. Pierce, and K. Norman. Formulation and evaluation of a scale decomposition-based stochastic precipitation nowcast scheme. Water Resources Research, 49(10):6624–6641, 2013. doi:10.1002/wrcr.20536.
[ZR09] P. Zacharov and D. Rezacova. Using the fractions skill score to assess the relationship between an ensemble QPF spread and skill. Atmospheric Research, 94(4):684–693, 2009. doi:10.1016/j.atmosres.2009.03.004.