Assessment Files
This page describes the files included in each assessment version child item. For details on the assessment data fields and the metadata.json fields, please refer to the linked pages.
Summary
Each assessment version includes the following items:
| File | Description |
|---|---|
| Shapefile.zip | A zip archive holding the assessment results, formatted as Shapefiles. |
| GeoJSON.zip | A zip archive holding the assessment results, formatted as GeoJSON. |
| metadata.json | JSON metadata file with information on the fire event and the implementation of the assessment. |
| median-thresholds.csv | Median rainfall thresholds calculated over both the stream segments and catchment basins in the assessment results. |
| inputs.zip | A zip archive holding the preprocessed datasets used as inputs for the hazard assessment. |
| configuration.txt | Configuration settings used to implement the assessment via the wildcat package. |
| XML metadata | FGDC-compliant XML metadata describing the assessment for the ScienceBase interface. |
Assessment Results
The Shapefile.zip and GeoJSON.zip archives hold the same assessment results, formatted as Shapefiles and GeoJSON, respectively. When extracted, each archive consists of the following four data layers:
| Layer | Geometry | Description |
|---|---|---|
| segments | LineString | Holds assessment results for the stream segment network. |
| basins | Polygon | Holds assessment results for the catchment basins of the stream segment network. |
| outlets | Point | The locations of the catchment basin drainage outlets. |
| perimeter | Polygon | The location of the fire perimeter. |
Note that only the segments and basins layers include assessment results. The outlets and perimeter layers are intended only as spatial references. Please refer to the Data Fields page for details on the data fields included in the segments and basins layers.
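Because the GeoJSON results are plain JSON, they can be inspected with standard tools. The following sketch filters segment features on a hazard property; the feature structure below is a minimal stand-in, and the property names (Segment_ID, H) are hypothetical — consult the Data Fields page for the actual fields.

```python
import json

# A minimal GeoJSON FeatureCollection mimicking the "segments" layer.
# The property names here are hypothetical illustrations only.
sample = json.dumps({
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "geometry": {"type": "LineString", "coordinates": [[0, 0], [1, 1]]},
         "properties": {"Segment_ID": 1, "H": 3}},
        {"type": "Feature",
         "geometry": {"type": "LineString", "coordinates": [[1, 1], [2, 1]]},
         "properties": {"Segment_ID": 2, "H": 1}},
    ],
})

collection = json.loads(sample)

# Select the IDs of segments whose (hypothetical) hazard class "H" exceeds 2.
high_hazard = [feature["properties"]["Segment_ID"]
               for feature in collection["features"]
               if feature["properties"]["H"] > 2]
print(high_hazard)  # [1]
```

In practice, you would load the extracted GeoJSON file from disk rather than an in-memory string; libraries such as geopandas can also read these layers directly.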
metadata.json
The metadata.json file holds metadata on the fire event and on the implementation of the hazard assessment. This file is intended to (1) document assessment metadata for the PWFDF user community, and (2) facilitate advanced programmatic interactions with the collection. Please refer to the metadata.json page for details on the included metadata fields.
median-thresholds.csv
The median-thresholds.csv file is a comma-separated text file that holds the median rainfall thresholds for the assessment results. Median thresholds are reported for both the segments and basins data layers, and results are provided in both millimeters and inches. The data columns follow the naming scheme Idd_pp or Rdd_pp. Here, dd indicates a rainfall duration (in minutes) used to model rainfall thresholds, and pp is the modeled probability percentage. An I prefix indicates thresholds provided as rainfall intensities; the units of these columns will be mm/hour or inches/hour, as appropriate. An R prefix indicates thresholds provided as rainfall accumulations; the units of these columns are total accumulation over the associated rainfall duration, in millimeters or inches, as appropriate.
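The column naming scheme can be parsed mechanically. The sketch below splits a column name into its components, and shows the relationship between an accumulation (R) column and the corresponding intensity (I) column — accumulation divided by the duration in hours. The example column names are illustrative.

```python
import re

def parse_column(name):
    """Parse a threshold column name of the form Idd_pp or Rdd_pp.

    Returns (kind, duration_minutes, probability_percent).
    """
    match = re.fullmatch(r"([IR])(\d+)_(\d+)", name)
    if match is None:
        raise ValueError(f"not a threshold column: {name!r}")
    kind, duration, probability = match.groups()
    return kind, int(duration), int(probability)

print(parse_column("I15_50"))  # ('I', 15, 50)
print(parse_column("R30_75"))  # ('R', 30, 75)

# An accumulation can be converted to an intensity by dividing by the
# duration in hours: e.g. 12 mm accumulated over 30 minutes is 24 mm/hour.
accumulation_mm = 12.0
duration_minutes = 30
intensity_mm_per_hour = accumulation_mm / (duration_minutes / 60)
print(intensity_mm_per_hour)  # 24.0
```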
inputs.zip
The inputs.zip archive holds preprocessed datasets used as inputs to the hazard assessment. The preprocessed inputs are provided as GeoTIFF rasters, are all projected into the same coordinate reference system, and are all clipped to the same bounds. The complete set of preprocessed inputs represents the minimal dataset required to reproduce the hazard assessment. The archive may include any of the following datasets:
- The digital elevation model used to determine flow pathways, slopes, and vertical relief.
- The differenced normalized burn ratio dataset used to inform the M1 likelihood model.
- The burn severity dataset used to inform network delineation, the M1 likelihood model, and the volume model.
- The soil KF-factor dataset used to inform the M1 likelihood model.
- The existing vegetation type dataset used to inform network delineation.
- The debris retainment feature location dataset used to inform network delineation.
- The exclusion mask dataset used to inform network delineation.
In general, the inputs.zip archive will not include any input datasets that have permanent references recorded in metadata.json. For example, most assessments in the U.S. use digital elevation model (DEM) and soil KF-factor datasets that are associated with DOIs or other permanent assets. These references are recorded in metadata.json, and so these datasets are typically not included in inputs.zip. Although we strive to include all other input datasets in the archive, this may not always be possible due to legal and/or technical reasons. As such, no dataset is guaranteed to be included in the inputs.zip archive.
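The archive contents can be inspected without extraction using the standard library. The sketch below builds a stand-in inputs.zip in memory so it is runnable; the member names are illustrative, not the archive's actual contents.

```python
import io
import zipfile

# Build a stand-in inputs.zip in memory so this sketch is self-contained.
# The member names are illustrative only.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as archive:
    archive.writestr("dem.tif", b"")
    archive.writestr("severity.tif", b"")

# List the GeoTIFF members of the archive without extracting it.
with zipfile.ZipFile(buffer) as archive:
    rasters = [name for name in archive.namelist() if name.endswith(".tif")]
print(rasters)  # ['dem.tif', 'severity.tif']
```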
configuration.txt
The configuration.txt file records the configuration settings used to run the assessment via the wildcat package. The file is intended for researchers and/or developers who want to reproduce an assessment using wildcat. Please consult the wildcat documentation for details on the recorded fields.
XML Metadata
The XML metadata file is an FGDC-compliant document that records assessment metadata for the ScienceBase interface. The file will follow the naming scheme <fire>_<YYYY-MM-DD>_v<X-Y>_postfire_debris-flow_hazard_assessment.xml, with the following substitutions:
| Substitution | Description |
|---|---|
| <fire> | The name of the fire event. |
| <YYYY-MM-DD> | An ISO 8601 date string indicating the start date of the fire event. |
| <X-Y> | The version number of the assessment. |
Although the XML file contains much of the same metadata as the metadata.json file, we suggest that metadata.json may prove simpler for programmatic interactions.
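The naming scheme above can also be parsed programmatically, for example to group assessment files by fire, date, or version. The filename in this sketch is a hypothetical example that follows the documented scheme.

```python
import re

# A hypothetical filename following the documented naming scheme.
name = "example-fire_2024-06-01_v1-0_postfire_debris-flow_hazard_assessment.xml"

# Capture the three documented substitutions: fire name, start date, version.
pattern = (r"(?P<fire>.+)_(?P<date>\d{4}-\d{2}-\d{2})_v(?P<version>\d+-\d+)"
           r"_postfire_debris-flow_hazard_assessment\.xml")
match = re.fullmatch(pattern, name)
print(match.group("fire"), match.group("date"), match.group("version"))
# example-fire 2024-06-01 1-0
```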