Quickstart

This page provides an introduction to the standard ocelote workflow. By the end of this quickstart, you should be able to run and upload a hazard assessment. We will only examine the most commonly used commands and options here, but you can learn more about specific commands and configuration fields in the API.

Tip

We recommend running ocelote in VS Code, as this lets you edit files and use the command line in a single interface.

Initialize Fire

Note

You can skip this step if you’ve already run an assessment for this fire (for example, if you ran a preliminary assessment, or if this is an update to an existing assessment).

Start by navigating to a folder where you’d like to store assessments. For example:

cd /path/to/my/assessments

Then, use the initialize fire command to initialize a folder for the fire event. The command takes one input, which should be the name of the fire in quotes:

ocelote initialize fire "My Fire Event"

This will create a new subfolder whose name will match the name of the fire event. From here on, we’ll refer to this subfolder as the “fire folder”.

Fire Names

The fire name should use spaces instead of underscores, and should not end with “Fire”. For example:

Fire Name             Status
"Cameron Peak"        GOOD
"Cameron_Peak"        BAD (uses underscore instead of space)
"Cameron Peak Fire"   BAD (ends with “Fire”)

The fire folder will contain a Python configuration file named fire.py. This file holds metadata about the fire event, and helps ensure that all assessments for this event have the same fire metadata. You should fill out the fields in this file now. Please consult the fire.py API if you need help filling out the individual fields.
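
For orientation, a fire.py file generally follows the pattern sketched below. The field names and values shown here are illustrative assumptions only; the fire.py API documents the actual fields that ocelote generates.

# fire.py -- fire event metadata (illustrative sketch)
# The field names below are assumptions; consult the fire.py API for the actual fields.
name = "My Fire Event"             # fire name: spaces, no trailing "Fire"
start_date = "2024-07-15"          # hypothetical start date of the fire
location = "Example County, CO"    # hypothetical description of the fire location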

Initialize Assessment Version

Next, move to the newly initialized fire folder. For example:

cd "My Fire Event"

Then, use the initialize version command to initialize a folder for the assessment version. The command requires the assessment version number as input. If assessment datasets were provided by:

  • The Burned Area Emergency Response (BAER) team,

  • The USGS Earth Resources Observation and Science (EROS) Center, or

  • The U.S. Forest Service (USFS) Geospatial Technology and Applications Center (GTAC)

then you can use --from baer, --from eros, or --from gtac to auto-populate some of the assessment metadata:

ocelote initialize version 1.0 --from baer

Running this command will create a subfolder in the fire folder, and the subfolder’s name will match the assessment version number. From here on, we’ll refer to this folder as the “assessment folder”.

Version Numbers

Most assessments will start with version 1.0. Preliminary assessments should begin at 0.1.

More generally, assessment version numbers should consist of two integers: X.Y. The X integer is the major version number. This should be 0 for a preliminary assessment, 1 for a year-1 assessment, 2 for a year-2 assessment, etc.

The Y integer is the minor version number. You should increment this number if you update the assessment for a particular year. For example, if you update the year-1 assessment to leverage improved burn severity data, then the version numbers should proceed 1.0 -> 1.1 -> 1.2, and so on.

Configuration Files

The newly initialized assessment folder will contain an empty inputs subfolder and three Python configuration files: version.py, datasets.py, and configuration.py. These configuration files hold metadata and parameters used to implement the assessment, and you should fill them out now. This section will walk you through the most commonly updated fields, and you can find more details in the configuration.py API.

version.py

The version.py file holds metadata about this specific assessment version. If this is version 1.0, then the metadata fields will auto-populate. Otherwise, you should add a note indicating the reason for this version. For example, if you are updating the assessment to leverage improved burn severity data, then you should write that in the note.
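
As a brief sketch, and assuming the file exposes a note field (the exact field names are assumptions; consult the version.py API), the note might look like:

# version.py -- metadata for this assessment version (illustrative sketch)
# Field names are assumptions; consult the version.py API for the actual fields.
version = "1.1"
note = "Updated the year-1 assessment to use improved burn severity data."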

datasets.py

The datasets.py file instructs ocelote how to obtain the input datasets for the assessment. The file also records data provenance metadata. The file includes a block for each input dataset as a Python dictionary. Each block includes a dataset field, which tells ocelote how to obtain the data. Depending on the dataset, a dataset field may indicate a local file path, data downloaded from the internet, a constant value, or None (to not use the dataset at all). You should fill out these fields as appropriate for your assessment.
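
The block below is a minimal sketch of a single dataset block. The dataset name, keys, and values are assumptions used for illustration; consult the datasets.py API for the exact keys and accepted values.

# datasets.py -- one illustrative dataset block (names and values are assumptions)
dem = {
    "dataset": "/path/to/local/dem.tif",   # a local file path, ...
    # "dataset": "download",               # ... an instruction to download the data, ...
    # "dataset": 2.5,                      # ... a constant value, ...
    # "dataset": None,                     # ... or None to not use the dataset at all
}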

Note

If you are downloading the EVT (existing vegetation type) dataset, be sure to specify the appropriate LANDFIRE data layer. You can find a complete list of available LANDFIRE layers here: LANDFIRE layers. The most recent EVT dataset is usually the best choice.

Each block also contains additional fields documenting the dataset’s provenance. At this point, most of these fields are empty and commented out. Many of the fields will auto-populate with provenance metadata as you download and preprocess datasets. If a field is not commented out, then you should fill it out now.

configuration.py

The configuration.py file defines the parameters used to implement the assessment and to export results. You should start by updating the I15_mm_hr field to list the peak 15-minute rainfall intensities that should be used as design storms for this assessment. Next, update the I15_legend field to one of these rainfall intensities. This indicates the rainfall intensity that will be displayed on the web map, so it should be a representative storm for the assessment region.
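
For example, a minimal sketch of these two fields might look like the following (the intensity values are purely illustrative, not recommendations):

# configuration.py -- design storm fields (illustrative values only)
I15_mm_hr = [16, 20, 24, 40]   # peak 15-minute rainfall intensities (mm/hour) used as design storms
I15_legend = 24                # intensity shown on the web map; must be one of the I15_mm_hr values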

Tip

The I15_legend field is often set to approximate a design storm with a 1 year recurrence interval. You can obtain this information from the National Oceanic and Atmospheric Administration (NOAA) Atlas 14 data provided by the download command. If NOAA Atlas 14 data is not available, then 24 mm/h is a common choice.

This will be sufficient for most assessments. If you are estimating severity from the differenced normalized burn ratio (dNBR), then you should also update severity_thresholds to indicate the dNBR breaks used to classify severity. In some rare cases, you may want to update the legend_min, legend_max, and legend_step fields to alter the bins used for rainfall threshold legends. Finally, advanced users may want to alter other fields to customize the implementation of the assessment. Consult the ocelote API for details on these fields.
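
Continuing the configuration.py sketch, these optional fields might look like the following (all values are illustrative assumptions; consult the API for the actual defaults):

# configuration.py -- optional fields (illustrative values only)
severity_thresholds = [125, 250, 500]   # dNBR breaks used to classify burn severity
legend_min = 0                          # lower bound of the rainfall threshold legend bins
legend_max = 100                        # upper bound of the rainfall threshold legend bins
legend_step = 20                        # width of each legend bin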

Where to run ocelote

The following sections will discuss the various commands used to implement an assessment. If you are running ocelote from the assessment folder, then the commands will not require any inputs. For example:

cd /path/to/assessments/"My Fire Event"/1.0
ocelote download

If you run ocelote from the fire folder, then you should provide the version number as input. For example:

cd /path/to/assessments/"My Fire Event"
ocelote download 1.0

If you run ocelote anywhere else, then you should provide the full path to the assessment folder as input. For example:

cd /path/to/somewhere/else
ocelote download /path/to/assessments/"My Fire Event"/1.0

The following sections assume you are running ocelote from the fire folder, so the example commands include a version number.

Download Data

Next, use the download command to download relevant datasets from the internet:

ocelote download 1.0

This will scan the datasets.py file, and download the indicated datasets. The command will also download NOAA Atlas 14 data, which you can optionally use to select design storm rainfall intensities for configuration.py. After downloading the datasets, the command will update provenance metadata for the downloaded datasets in datasets.py.

Preprocess

Next, use the preprocess command to preprocess the input datasets. This will rasterize all inputs, reproject them to the spatial projection of the digital elevation model (DEM), and clip them to the bounds of the buffered fire perimeter:

ocelote preprocess 1.0

Running this command will create a preprocessed subfolder holding the preprocessed raster datasets. The command will also update datasets.py with provenance metadata for certain special datasets, such as when an exclusion mask is used, when severity is estimated from dNBR, or when a dataset is set to a constant value.

Finalize Config Files

If you have not already done so, you should now finalize the four config files. The archive option in datasets.py is of particular note here. This option indicates which input datasets should be archived with the assessment results. The default behavior is to archive all datasets that are not associated with a permanent DOI, and this behavior is recommended for most assessments. However, certain rare circumstances may require different behavior (for example, if a dataset cannot be released for legal reasons). In this case, you can prevent a dataset from being archived by setting its archive option to False. Similarly, you can force a dataset to be archived by setting archive to True.
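
As a sketch, overriding the default archive behavior for a hypothetical dataset block in datasets.py might look like the following (the dataset names and paths are assumptions):

# datasets.py -- overriding the default archive behavior (illustrative sketch)
restricted_data = {
    "dataset": "/path/to/restricted/data.shp",
    "archive": False,   # never archive this dataset (e.g., it cannot be released for legal reasons)
}
doi_data = {
    "dataset": "/path/to/doi/data.tif",
    "archive": True,    # archive this dataset even though it has a permanent DOI
}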

Run

We’ll now use the run command to implement an assessment and export the results:

ocelote run 1.0

Before running, the command will thoroughly validate all four configuration files to ensure the necessary metadata fields are provided and valid. The command will also check that metadata values match expected values and formats, as appropriate.

After validating the configuration files, the command will scan configuration.py and use the provided settings to implement an assessment. The command will create the following assets:

  • A results subfolder,

  • A results.zip zip archive, and

  • A <fire>_<start date>_v<version>.zip archive.

The results folder provides a sandbox where you can validate the assessment results. The contents of this folder are never uploaded, so you can interact with its files without worrying about third-party files (such as ArcGIS metadata files) being included in the released results. By contrast, the results.zip archive contains the files that will be uploaded to S3 and ScienceBase, so you should not alter this archive.

Note

The results.zip file contains an embedded checksum that is used to check if the archive has been altered. If you do alter results.zip, then ocelote will refuse to upload the results, and you will need to re-run the assessment.

The <fire>_<start date>_v<version>.zip archive is a backup contingency for situations where results cannot be distributed via ScienceBase. For example, you might use this archive if ScienceBase is down for multiple days due to maintenance. The archive contains a limited subset of the results (metadata.json, median-thresholds.csv, and the Shapefile results) and is intended for emailing to stakeholders until a ScienceBase release can be published.

Important

We recommend directing stakeholders to a ScienceBase release whenever possible. Please do not email the backup archive unless ScienceBase is down.

Upload

Finally, we’ll upload the assessment results to S3 (for the web map) and ScienceBase (for permanent archival). The upload step is split into subcommands for each platform, which we will discuss below.

Important

If you did not set up credentials at installation, then you will need to do so now.

Upload to S3

You can use the upload s3 command to upload the assessment to S3, which implements the web map:

ocelote upload s3 1.0

Important

You must be on the USGS wired network to use this command. Neither the VPN nor a WiFi connection is sufficient.

Upload to ScienceBase

Next, use the upload sciencebase command to archive the assessment on ScienceBase:

ocelote upload sciencebase 1.0

The command will prompt you for your AD password and then log in to ScienceBase. Before uploading, the command will check that (1) the assessment version does not already exist on ScienceBase, and (2) version numbers proceed sequentially (for example, you cannot skip from 1.0 to 1.2 without 1.1 in between).

Once the upload is validated, the command will generate Federal Geographic Data Committee (FGDC) metadata, build a ScienceBase item, and upload the assessment automatically.

Tip

ScienceBase is often down for maintenance. If the assessment is valid but fails to upload, then wait a few hours and try again.