Preprocessing Configuration¶
These fields specify settings used to run the preprocessor. Many of these fields are paths to input datasets. When a file path is relative, it is interpreted relative to the inputs subfolder. If a file name does not include an extension, then wildcat will scan its parent folder for a file with a supported extension.
Required Datasets¶
These datasets are required to run the preprocessor. An error will be raised if they cannot be found.
Examples:
# Absolute file path (may be outside the project folder)
perimeter = r"/absolute/path/to/perimeter.shp"
# Relative to the "inputs" subfolder
perimeter = r"perimeter.shp"
- perimeter¶
- Type:
str | Path
- Default:
r"perimeter"
The path to a fire perimeter mask. Usually a Polygon or MultiPolygon feature file, but may also be a raster mask.
The mask will be buffered, and the extent of the buffered perimeter will define the domain of the analysis. Pixels within the perimeter may be used to delineate the initial network, and stream segments sufficiently within the perimeter are retained during network filtering.
Most users will likely want to run wildcat for an active or recent fire, but you can also find links to historical fire perimeters here: Fire perimeter datasets
CLI option:
--perimeter
Python kwarg:
perimeter
- dem¶
- Type:
str | Path
- Default:
r"dem"
The path to the digital elevation model (DEM) raster dataset. This dataset sets the CRS, resolution, and alignment of the preprocessed rasters. Also used to characterize the watershed, including determining flow directions.
The DEM must be georeferenced, and we strongly recommend using a DEM with approximately 10 meter resolution. This is because wildcat’s hazard assessment models were calibrated using data from a 10 meter DEM. See also Smith et al., 2019 for a discussion of the effects of DEM resolution on topographic analysis.
You can find links to 10-meter DEM datasets here: DEM datasets
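Examples (the paths here are hypothetical):
# Absolute file path (may be outside the project folder)
dem = r"/absolute/path/to/dem.tif"
# Relative to the "inputs" subfolder
dem = r"dem.tif"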
CLI option:
--dem
Python kwarg:
dem
Recommended Datasets¶
These datasets are not required to run the preprocessor, but they are either required or recommended for running an assessment. To explicitly disable the preprocessor for one of these datasets, set its value to None.
Examples:
# Absolute path (may be outside the project folder)
dnbr = r"/absolute/path/to/dnbr.tif"
# A file in the "inputs" subfolder
dnbr = r"dnbr.tif"
# Disable the preprocessor for a dataset
dnbr = None
- dnbr¶
- Type:
str | Path | float | None
- Default:
r"dnbr"
The differenced normalized burn ratio (dNBR) dataset. Used to estimate debris-flow likelihoods and rainfall thresholds. Optionally used to estimate burn severity. Should be (raw dNBR * 1000) with values ranging from approximately -1000 to 1000. This is usually a raster dataset, but you can instead use a constant value across the watershed by setting the field equal to a number.
Most users will likely want to run wildcat for an active or recent fire, but you can also find links to historical dNBR datasets here: dNBR datasets
Examples:
# From a raster file
dnbr = r"path/to/my-dnbr.tif"
# Using a constant value
dnbr = 500
CLI option:
--dnbr
Python kwarg:
dnbr
- severity¶
- Type:
str | Path | float | None
- Default:
r"severity"
The path to a BARC4-like soil burn severity dataset. Usually a raster, but may also be a Polygon or MultiPolygon feature file. If a Polygon/MultiPolygon file, then you must provide the severity_field setting. Also supports using a constant severity across the watershed. To implement a constant value, set the field equal to a number, rather than a file path.
The burn severity data is used to locate burned areas, which are used to delineate the stream segment network. It is also used to locate areas burned at moderate-or-high severity, which are used to estimate debris-flow likelihoods, volumes, and rainfall thresholds. If missing, this dataset will be estimated from the dNBR using the values from the severity_thresholds setting.
You can find links to burn severity datasets here: Burn severity datasets. Most users will likely want to run wildcat for an active or recent fire, but you can also find links to historical burn severity datasets here: historical severity datasets
Examples:
# From a raster file
severity = r"path/to/my-severity.tif"
# From a Polygon file
severity = r"path/to/my-severity.shp"
severity_field = "MY_FIELD"
# Using a constant value
severity = 3
CLI option:
--severity
Python kwarg:
severity
- kf¶
- Type:
str | Path | float | None
- Default:
r"kf"
The path to a soil KF-factor dataset. Often a Polygon or MultiPolygon feature file, but may also be a numeric raster. If a Polygon/MultiPolygon file, then you must also provide the kf_field setting. Also supports using a constant KF-factor across the watershed. To implement a constant value, set the field equal to a number, rather than a file path.
The KF-factors are used to estimate debris-flow likelihoods and rainfall thresholds. Values should be positive, and the preprocessor will convert non-positive values to NoData by default.
You can find links to KF-factor datasets here: KF-factor datasets
Examples:
# From a raster
kf = r"path/to/my-kf.tif"
# From a Polygon file
kf = r"path/to/my-kf.shp"
kf_field = "MY_FIELD"
# Using a constant value
kf = 0.2
CLI option:
--kf
Python kwarg:
kf
What’s a KF-factor?
KF-factors are defined as the saturated hydraulic conductivity of the fine soil (<2 mm) fraction, in inches per hour. Essentially, this is a soil erodibility factor that represents both (1) the susceptibility of the soil to erosion, and (2) the rate of runoff, for soil material with <2 mm equivalent diameter. See Chapter 3 of USDA Agricultural Handbook 703 for additional details on its definition and calculation.
- evt¶
- Type:
str | Path | None
- Default:
r"evt"
The path to an Existing Vegetation Type (EVT) raster. This is typically a raster of classification code integers. Although not required for an assessment, the EVT is used to build water, development, and exclusion masks, which can improve the design of the stream segment network.
You can find links to EVT datasets here: EVT datasets
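Example (the path here is hypothetical):
# An EVT raster in the "inputs" subfolder
evt = r"evt.tif"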
CLI option:
--evt
Python kwarg:
evt
Optional Datasets¶
These datasets are optional. They are neither required to run the preprocessor, nor to run an assessment. To explicitly disable the preprocessor for one of these datasets, set its value to None.
Examples:
# Absolute path (may be outside the project folder)
excluded = r"/absolute/path/to/excluded.shp"
# Relative to the "inputs" subfolder
excluded = r"excluded"
# Disable the preprocessor for a dataset
excluded = None
- retainments¶
- Type:
str | Path | None
- Default:
r"retainments"
The path to a dataset indicating the locations of debris retainment features. Usually a Point or MultiPoint feature file, but may also be a raster mask. Pixels downstream of these features will not be used for network delineation.
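Example (the path here is hypothetical):
# A Point feature file of debris retainment features
retainments = r"retainments.shp"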
CLI option:
--retainments
Python kwarg:
retainments
- excluded¶
- Type:
str | Path | None
- Default:
r"excluded"
The path to a dataset of areas that should be excluded from network delineation. Usually a Polygon or MultiPolygon feature file, but may also be a raster mask. Pixels in these areas will not be used to delineate the network. If provided in conjunction with the excluded_evt setting, then the two masks will be combined to produce the final preprocessed exclusion mask.
CLI option:
--excluded
Python kwarg:
excluded
- included¶
- Type:
str | Path | None
- Default:
r"included"
The path to a dataset of areas that should be retained when filtering the network. Usually a Polygon or MultiPolygon feature file, but may also be a raster mask. Any stream segment that intersects one of these areas will automatically be retained in the network - it will not need to pass any other filtering criteria.
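Example (the path here is hypothetical):
# A Polygon file of areas whose stream segments should always be retained
included = r"included.shp"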
CLI option:
--included
Python kwarg:
included
- iswater¶
- Type:
str | Path | None
- Default:
r"iswater"
The path to a water body mask. Usually a Polygon or MultiPolygon feature file, but may also be a raster mask. Pixels in the mask will not be used for network delineation. If provided in conjunction with the water setting, then the two masks will be combined to produce the final preprocessed water mask.
CLI option:
--iswater
Python kwarg:
iswater
- isdeveloped¶
- Type:
str | Path | None
- Default:
r"isdeveloped"
The path to a human-development mask. Usually a Polygon or MultiPolygon feature file, but may also be a raster mask. The development mask is used to inform network filtering. If provided in conjunction with the developed setting, then the two masks will be combined to produce the final preprocessed development raster.
CLI option:
--isdeveloped
Python kwarg:
isdeveloped
Perimeter¶
Settings used to build the buffered perimeter.
- buffer_km¶
- Type:
float
- Default:
3
The number of kilometers to buffer the fire perimeter. The extent of the buffered perimeter defines the domain of the analysis.
Example:
buffer_km = 3.0
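For geometric intuition, buffering expands the perimeter outward by a fixed distance. The following sketch uses shapely as a stand-in; it is illustrative only (not wildcat’s implementation) and assumes a CRS with units of meters:
from shapely.geometry import Point

perimeter = Point(0, 0).buffer(1000)   # stand-in fire perimeter: a 1 km circle
buffered = perimeter.buffer(3 * 1000)  # expand outward by 3 km (buffer_km = 3)
print(buffered.bounds)                 # the extent that defines the analysis domain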
CLI option:
--buffer-km
Python kwarg:
buffer_km
DEM¶
Settings for preprocessing the DEM.
- resolution_limits_m¶
- Type:
[float, float]
- Default:
[6.5, 11]
The allowed range of DEM resolutions in meters. Should be a list of 2 values. The first value is the minimum allowed resolution, and the second is the maximum resolution. If either the X-axis or the Y-axis of the DEM has a resolution outside of this range, then this will trigger the resolution_check.
The default values are selected to permit all DEM tiles from the USGS National Map within the continental US. In general, the DEM should have approximately 10 meter resolution. This is because wildcat’s assessment models were calibrated using data from a 10 meter DEM.
Example:
# Require resolution between 8 and 12 meters
resolution_limits_m = [8, 12]
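As a rough sketch of the check itself (illustrative only, not wildcat’s internals; assumes rasterio and a hypothetical dem.tif with a projected CRS in meters):
import rasterio

limits = [8, 12]
with rasterio.open("dem.tif") as dem:
    xres, yres = dem.res  # pixel size along the X and Y axes
in_range = all(limits[0] <= r <= limits[1] for r in (xres, yres))
# If in_range is False, the resolution_check setting determines what happens next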
CLI option:
--resolution-limits-m
Python kwarg:
resolution_limits_m
- resolution_check¶
- Type:
"error" | "warn" | "none"
- Default:
"error"
What should happen when the DEM does not have an allowed resolution. Options are:
"error": Raises an error and stops the preprocessor
"warn": Logs a warning to the console, but continues preprocessing
"none": Does nothing and continues preprocessing
Example:
# Issue a warning instead of an error
resolution_check = "warn"
CLI option:
--resolution-check
Python kwarg:
resolution_check
dNBR¶
Settings for preprocessing the dNBR raster.
- dnbr_scaling_check¶
- Type:
"error" | "warn" | "none"
- Default:
"error"
What should happen when the dNBR fails the scaling check. The dNBR will fail this check if all of its data values are between -10 and 10, which suggests the values were not multiplied by 1000. Options are:
"error": Raises an error and stops the preprocessor
"warn": Logs a warning to the console, but continues preprocessing
"none": Does nothing and continues preprocessing
Example:
# Issue a warning instead of an error
dnbr_scaling_check = "warn"
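The check itself amounts to testing whether the data look like raw dNBR values; a minimal numpy sketch (illustrative only, not wildcat’s code):
import numpy as np

dnbr = np.array([-0.55, 0.12, 0.83])  # resembles raw dNBR, not dNBR * 1000
looks_unscaled = bool(np.all((dnbr > -10) & (dnbr < 10)))
# looks_unscaled is True here, so the scaling check would be triggered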
CLI option:
--dnbr-scaling-check
Python kwarg:
dnbr_scaling_check
- constrain_dnbr¶
- Type:
bool
- Default:
True
Whether the preprocessor should constrain dNBR data values to a valid range. Any dNBR values outside the valid range are converted to the nearest bound of the valid range.
Example:
# Do not constrain dNBR
constrain_dnbr = False
CLI option:
--no-constrain-dnbr
Python kwarg:
constrain_dnbr
- dnbr_limits¶
- Type:
[float, float]
- Default:
[-2000, 2000]
The lower and upper bounds of the dNBR valid data range. These values are ignored when constrain_dnbr is False.
Example:
# Set the valid range from -1500 to 3000
constrain_dnbr = True
dnbr_limits = [-1500, 3000]
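Constraining behaves like clamping each value to the nearest bound; a minimal numpy sketch (illustrative only):
import numpy as np

dnbr = np.array([-3000, -500, 1500, 3500])
lo, hi = [-1500, 3000]
print(np.clip(dnbr, lo, hi))  # [-1500  -500  1500  3000]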
CLI option:
--dnbr-limits
Python kwarg:
dnbr_limits
Burn Severity¶
Settings for preprocessing the burn severity dataset.
- severity_field¶
- Type:
str | None
- Default:
None
The name of the data attribute field from which to read burn severity data when the severity dataset is a Polygon or MultiPolygon feature file. Ignored if the severity dataset is a raster, or if severity is estimated from the dNBR.
Example:
# Read severity data from the "Burn_Sev" data field
severity = r"severity.shp"
severity_field = "Burn_Sev"
CLI option:
--severity-field
Python kwarg:
severity_field
- contain_severity¶
- Type:
bool
- Default:
True
Whether the preprocessor should contain burn severity data to within the fire perimeter.
Example:
# Do not contain severity within the perimeter
contain_severity = False
CLI option:
--no-contain-severity
Python kwarg:
contain_severity
- estimate_severity¶
- Type:
bool
- Default:
True
Whether to estimate burn severity from the dNBR when the severity dataset is missing. This option is irrelevant if a burn severity dataset is provided.
Example:
# Estimate severity from the dNBR
severity = None
estimate_severity = True
CLI option:
--no-estimate-severity
Python kwarg:
estimate_severity
- severity_thresholds¶
- Type:
[float, float, float]
- Default:
[125, 250, 500]
When estimating severity from the dNBR, specifies the dNBR thresholds used to classify severity levels. The first value is the breakpoint between unburned and low severity. The second value is the breakpoint between low and moderate severity, and the third value is the breakpoint between moderate and high severity. A dNBR value that exactly equals a breakpoint will be classified at the lower severity level. This option is ignored if a severity dataset is provided, or if estimate_severity is False.
Example:
# Estimate severity using dNBR breakpoints of 100, 325, and 720
severity = None
estimate_severity = True
severity_thresholds = [100, 325, 720]
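To illustrate the breakpoint rule, here is a minimal numpy sketch of the classification (illustrative only, not wildcat’s internals):
import numpy as np

thresholds = [125, 250, 500]
dnbr = np.array([125, 126, 500, 501])
# right=True places values that exactly equal a breakpoint in the lower class
classes = np.digitize(dnbr, thresholds, right=True) + 1
print(classes)  # [1 2 3 4] -> unburned, low, moderate, high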
CLI option:
--severity-thresholds
Python kwarg:
severity_thresholds
KF-factors¶
Settings for preprocessing the KF-factor dataset.
- kf_field¶
- Type:
str | None
- Default:
None
The name of the data attribute field from which to read KF-factor data when the kf dataset is a Polygon or MultiPolygon feature file. Ignored if the KF-factor dataset is a raster.
Example:
# Load KF-factor values from the "KFFACT" data field
kf = r"soil-data.shp"
kf_field = "KFFACT"
CLI option:
--kf-field
Python kwarg:
kf_field
- constrain_kf¶
- Type:
bool
- Default:
True
Whether to constrain KF-factor data to positive values. When constrained, negative and 0-valued KF-factors are replaced with NoData.
Example:
# Do not constrain KF-factors
constrain_kf = False
CLI option:
--no-constrain-kf
Python kwarg:
constrain_kf
- max_missing_kf_ratio¶
- Type:
float
- Default:
0.05
The maximum allowed proportion of missing data in the KF-factor dataset. Exceeding this level will trigger the missing_kf_check. The threshold should be a value from 0 to 1.
Example:
# Warn if more than 5% of the KF-factor data is missing
max_missing_kf_ratio = 0.05
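A minimal numpy sketch of the proportion check (illustrative only; assumes missing values are represented as NaN):
import numpy as np

kf = np.array([0.2, np.nan, 0.3, 0.25])
missing_ratio = np.isnan(kf).mean()  # 0.25
if missing_ratio > 0.05:             # max_missing_kf_ratio
    print("missing_kf_check would be triggered")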
CLI option:
--max-missing-kf-ratio
Python kwarg:
max_missing_kf_ratio
- missing_kf_check¶
- Type:
"error" | "warn" | "none"
- Default:
"warn"
What to do if the proportion of missing KF-factor data exceeds the maximum level and there is no fill value. Options are:
"error": Raises an error and stops the preprocessor
"warn": Logs a warning to the console, but continues preprocessing
"none": Does nothing and continues preprocessing
This option is ignored if kf_fill is not False.
Example:
# Disable the KF-factor warning
kf_fill = False
missing_kf_check = "none"
CLI option:
--missing-kf-check
Python kwarg:
missing_kf_check
- kf_fill¶
- Type:
bool | float | str | Path
- Default:
False
Indicates how to fill missing KF-factor values. Options are:
False: Does not fill missing values
True: Replaces missing values with the median KF-factor in the dataset
float: Replaces missing values with the indicated number
str | Path: Uses the indicated dataset to implement spatially varying fill values. Missing KF-factor values are replaced with the co-located value in the fill-value dataset. Usually a Polygon or MultiPolygon feature file, but may also be a raster dataset. If a Polygon/MultiPolygon file, then you must also provide the kf_fill_field setting.
Examples:
# Do not fill missing values
kf_fill = False
# Replace missing values with the median
kf_fill = True
# Replace with a specific number
kf_fill = 0.8
# Replace using a spatially varying dataset
kf_fill = r"kf-fill.shp"
kf_fill_field = "FILL_VALUE"
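For intuition, the kf_fill = True case behaves like a median fill; a minimal numpy sketch (illustrative only; assumes missing values are represented as NaN):
import numpy as np

kf = np.array([0.2, np.nan, 0.3, np.nan])
filled = np.where(np.isnan(kf), np.nanmedian(kf), kf)
print(filled)  # [0.2  0.25 0.3  0.25]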
CLI option:
--kf-fill
Python kwarg:
kf_fill
- kf_fill_field¶
- Type:
str | None
- Default:
None
The name of the data attribute field from which to read KF-factor fill values when kf_fill is the path to a Polygon or MultiPolygon feature file. Ignored if kf_fill is anything else.
Example:
# Read fill value data from the "FILL_VALUE" field
kf_fill = r"kf-fill.shp"
kf_fill_field = "FILL_VALUE"
CLI option:
--kf-fill-field
Python kwarg:
kf_fill_field
EVT Masks¶
Options for building raster masks from the EVT dataset.
- water¶
- Type:
[float, ...]
- Default:
[7292]
A list of EVT values that should be classified as water bodies. These pixels will not be used for network delineation. Use an empty list to stop the preprocessor from building a water mask from the EVT. Ignored if there is no evt dataset. If provided in conjunction with the iswater dataset, then the two masks will be combined to produce the final preprocessed water mask.
Examples:
# Classify EVT values as water
water = [1, 2, 3]
# Do not build a water mask from the EVT
water = []
# Combine EVT mask with pre-computed mask
iswater = r"iswater.shp"
water = [7292]
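Building the mask amounts to a membership test on the EVT class codes; a minimal numpy sketch (illustrative only):
import numpy as np

evt = np.array([[7292, 7296], [42, 7292]])
water_mask = np.isin(evt, [7292])
# [[ True False]
#  [False  True]]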
CLI options:
--water
,--no-find-water
Python kwarg:
water
- developed¶
- Type:
[float, ...]
- Default:
[7296, 7297, 7298, 7299, 7300]
A list of EVT values that should be classified as human development. The development mask will be used to inform network filtering. Use an empty list to stop the preprocessor from building a development mask from the EVT. Ignored if there is no evt dataset. If provided in conjunction with the isdeveloped dataset, then the two masks will be combined to produce the final preprocessed development mask.
Examples:
# Classify EVT values as developed
developed = [1, 2, 3]
# Do not build a development mask from the EVT
developed = []
# Combine EVT mask with pre-computed mask
isdeveloped = r"isdeveloped.shp"
developed = [7296, 7297, 7298, 7299, 7300]
CLI options:
--developed
,--no-find-developed
Python kwarg:
developed
- excluded_evt¶
- Type:
[float, ...]
- Default:
[]
A list of EVT values that should be classified as excluded areas. These pixels will not be used for network delineation. Use an empty list to stop the preprocessor from building an exclusion mask from the EVT. Ignored if there is no evt dataset. If provided in conjunction with the excluded dataset, then the two masks will be combined to produce the final preprocessed exclusion mask.
Examples:
# Classify EVT values as excluded areas
excluded_evt = [1, 2, 3]
# Do not build an exclusion mask from the EVT
excluded_evt = []
# Combine EVT mask with pre-computed mask
excluded = r"excluded.shp"
excluded_evt = [1, 2, 3]
CLI options:
--excluded-evt
,--no-find-excluded
Python kwarg:
excluded_evt