Tutorials¶
The easiest way to run the USGS slab modeling code is through an interactive jupyter notebook, which is located
in tutorials/slab2.ipynb. To open the notebook, first ensure the Python virtual
environment has been installed and is activated. For more information on installation, please
visit the Installation page. With the environment activated, in a terminal
window, run the jupyter lab command. In some browsers, a Jupyter tab will open automatically; otherwise, use the link provided in the terminal output to access the Jupyter Lab window.
Once open, use the file explorer on the left side of the window to open the slab2.ipynb notebook (located within the tutorials folder). With the notebook open, select the first code cell and press [SHIFT] + [ENTER] to run it. A dropdown menu will be displayed with the options described below:
Run Synthetic Test¶
Once selected, this option will automatically begin running the synthetic model test. This test generates a slab model from a simplified, synthetic input dataset to verify that the code is functioning properly following installation. The test takes about a minute to run and, once finished, will save the synthetic output files to the src/output folder.
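As a quick sanity check that the test completed, the contents of that folder can be listed outside the notebook. A minimal sketch, assuming only that the outputs land in src/output as described above (the exact file names depend on the run):

```python
from pathlib import Path

# The synthetic test writes its results to src/output (per this page);
# the exact file names and any subfolders depend on the run.
output_dir = Path("src/output")

for path in sorted(output_dir.rglob("*")):
    if path.is_file():
        print(path.relative_to(output_dir))
```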
Generate a New USGS Slab Model¶
This option will generate a new slab model with the desired input data. To generate a model, first
select a parameter file for the appropriate slab region to model from the dropdown menu. Next, select
the database you would like to use from the list of available databases. If desired, you may use
threading to run the module on multiple cores at the same time by adjusting the slider. Finally,
select whether you would like to model the slab surface or center; the model can then be generated by clicking the Run Module button. The terminal output from the slab2.py script will be displayed in the Jupyter Notebook, and once the run has finished, a list of the resulting files will appear.
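If you would like to inspect those results outside the notebook, the CSV outputs can be read directly with pandas. This is only a sketch: the file name below is a placeholder for one of the files listed at the end of the run, and the column layout will depend on the model.

```python
import pandas as pd

# Placeholder path: substitute one of the CSV files listed at the end of the run.
results = pd.read_csv("src/output/example_slab2_results.csv")

# Inspect the available columns and basic statistics before mapping or comparing models.
print(results.columns.tolist())
print(results.describe())
```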
Make a Map¶
This option can be used to produce maps of slab models which have been saved to the src/output folder.
First, select the model you would like to make a plot of from the dropdown menu.
The default options will produce a plot of the slab depth for the requested model.
Next, select the type of plot to make (2D or 3D) and the location of the slab model (surface or center). Clicking the Run Module button will produce and display maps for the desired model.
By default, a cross-section line with an azimuth angle of zero (vertical) will be plotted in the center of the slab region. Use the three sliders to adjust the location and orientation of the cross-section line. Checkboxes below allow for features like a basemap and contour lines to be added to the plot. When the desired parameters have been selected, hit the Save Figure button to save the figure to a jpeg file. The resulting file will be stored in src/plotting/output/.
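For a quick look at a saved model without the notebook's plotting widgets, the nodes in an output CSV can also be mapped directly with matplotlib. This is only a rough sketch, not the notebook's plotting code: the file name and the lon/lat/depth column names are assumptions and should be adjusted to match your output.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file and column names (lon, lat, depth); adjust to match the
# CSV produced for your model in src/output.
nodes = pd.read_csv("src/output/example_slab2_results.csv")

fig, ax = plt.subplots(figsize=(8, 6))
sc = ax.scatter(nodes["lon"], nodes["lat"], c=nodes["depth"], s=2, cmap="viridis_r")
fig.colorbar(sc, ax=ax, label="Slab depth (km)")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Slab depth quick-look map")
plt.show()
```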
Compare Two Slab Models¶
This option is especially useful if you would like to see how two slab models with different input data or parameters differ visually. As with the options above, begin by selecting the two slab models you would like to compare. Next, select the slab model location (surface or center). To compare the slab surface and center for the same slab model, select both. Comparison outputs will be saved to the src/plotting/output directory.
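Conceptually, the comparison amounts to differencing the two models node by node. The sketch below is only an illustration of that idea, not the notebook's comparison module; the file names and lon/lat/depth column names are assumptions, and it presumes both models share the same node locations.

```python
import pandas as pd

# Placeholder file and column names; both models are assumed to share lon/lat nodes.
model_a = pd.read_csv("src/output/model_a_results.csv")
model_b = pd.read_csv("src/output/model_b_results.csv")

merged = model_a.merge(model_b, on=["lon", "lat"], suffixes=("_a", "_b"))
merged["depth_diff"] = merged["depth_a"] - merged["depth_b"]

# Summarize where and by how much the two models disagree in depth.
print(merged["depth_diff"].describe())
```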
Make a New Input File¶
Data must be formatted into a set of input CSV files, organized by the date they were generated. There are two primary ways to make a new input file:
Make All Inputs¶
Selecting this option from the first dropdown will generate input files for every available slab region from the desired database. If you would like to make a model with a new database without filtering or modifying that data, this is the option you would select. For further customization, refer to the instructions for making individual inputs.
Making Individual Inputs¶
To make an input file for just one slab region, select this option. The advanced options
checkbox allows for
additional filtering and data modification if desired. If selected, you may customize the name of the resulting
input file, add supplementary data (e.g. tomography data), filter by global or regional preference, and specify
region boundaries. With the advanced option, you may also add specific files with additional data for the desired
slab model. Note that all additional data you wish to add must be stored in the input_data folder prior to
running this module.
Query the PDE and GCMT Earthquake Catalogs¶
One way to add new data is to run a query of the Preliminary Determination of Epicenters (PDE) and Global Centroid Moment Tensor (GCMT) catalogs via a set of built-in methods. To run queries of the PDE and GCMT catalogs in the interactive notebook, follow the steps below. For more detailed instructions on individual modules, proceed to the Making Input Files page.
Querying the PDE Catalog¶
To gather new data from the PDE catalog (a.k.a. ComCat), select the Query a Database option. From the following dropdown menu, select pde query. There are two methods for determining the date range of data to gather. One is to select a previous query of the PDE catalog, which will download all events since that query was run. The other is to select start and finish dates for the query. Note that the finish date may be left blank to get all events up to the present day. The new data will be automatically formatted and stored in input/catalog_query/pde_query/pde_MMYY.csv, where MMYY is the current month and year.
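The pde query option handles the request and formatting for you. Purely as an illustration of what such a query involves, a roughly equivalent request can be made against the USGS (ComCat) event service with ObsPy's FDSN client; ObsPy is not part of the workflow described here, and the date range and magnitude cutoff below are arbitrary examples.

```python
from datetime import datetime
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# Illustration only: request events from the USGS (ComCat) service for a date range.
client = Client("USGS")
catalog = client.get_events(
    starttime=UTCDateTime("2023-01-01"),
    endtime=UTCDateTime("2023-02-01"),
    minmagnitude=4.0,  # arbitrary cutoff for this example
)
print(len(catalog), "events returned")

# The pde query option names its output with the current month and year (MMYY).
print(f"pde_{datetime.now():%m%y}.csv")
```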
Querying the GCMT Catalog¶
To gather new data from the GCMT catalog, select the Query a Database option, then gcmt query. As with the PDE query process, you may either select a previous query to build on or specify start and finish dates manually. Note that the finish date may be left blank to get all events up to the present day. The new data will be automatically formatted and stored in input/catalog_query/gcmt_query/gcmt_MMYY.csv, where MMYY is the current month and year.
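The GCMT catalog itself is distributed in NDK format, which ObsPy can also parse. The snippet below is only an illustration of what the downloaded solutions contain, not the gcmt query implementation, and the file name is hypothetical.

```python
from obspy import read_events

# Hypothetical local NDK file containing GCMT solutions.
catalog = read_events("gcmt_recent.ndk", format="NDK")

# Print a few origin times, locations, and magnitudes from the catalog.
for event in catalog[:5]:
    origin = event.preferred_origin() or event.origins[0]
    magnitude = event.preferred_magnitude() or event.magnitudes[0]
    print(origin.time, origin.latitude, origin.longitude, magnitude.mag)
```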
Associating the PDE and GCMT Data¶
Before they can be formatted into a new database, the newly queried PDE and GCMT data need to be associated into a single file with consistent formatting. This can be done from the Query a Database option in the USGS Slab Models Jupyter Notebook. To proceed with associating PDE and GCMT files, select the associate data option from the query_type dropdown menu. From the following menus, select the appropriate GCMT and PDE files to associate. Running the module will take several minutes and will produce a consolidated file, input/catalog_query/new_query/pde_gcmt_MMYY.csv, with the current month and year. Note that if any duplicate events exist in one of the files, this module will remove them as the data are consolidated.
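Conceptually, association lines up GCMT solutions with their PDE counterparts and drops duplicate events. The pandas sketch below illustrates that idea only; the file names, column names, and matching tolerances are assumptions, not the module's actual logic.

```python
import pandas as pd

# Hypothetical file and column names; the real query files may differ.
pde = pd.read_csv("input/catalog_query/pde_query/pde_0124.csv")
gcmt = pd.read_csv("input/catalog_query/gcmt_query/gcmt_0124.csv")

# Example association key: origin time rounded to the nearest minute and
# location rounded to roughly 0.1 degree.
for df in (pde, gcmt):
    df["time_key"] = pd.to_datetime(df["time"]).dt.round("min")
    df["lat_key"] = df["lat"].round(1)
    df["lon_key"] = df["lon"].round(1)

associated = pde.merge(
    gcmt, on=["time_key", "lat_key", "lon_key"], how="outer", suffixes=("_pde", "_gcmt")
)
associated = associated.drop_duplicates(subset=["time_key", "lat_key", "lon_key"])
print(len(associated), "associated events")
```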
Making a New Database¶
The last step in migrating your newly downloaded data into a form usable for making USGS slab models is to generate a new database for making input files. To do so, once again use the Query a Database option, then select make new database and the appropriate
associated file from the previous step. This module will copy over the earthquake data from the most recent associated PDE and
GCMT query to the file ALL_EQ_MMDDYY.csv with the current date, as well as all supplementary data from the most recent database.
If desired, supplementary data can be added and removed from this new database, which will be labeled MMYYdatabase with the
current month and year.
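In spirit, this step appends the newly associated events to the existing earthquake file and stamps the result with the current date. The sketch below is a loose illustration only: the file names and layout are placeholders, and the copying of supplementary data is not shown.

```python
from datetime import datetime
import pandas as pd

# Placeholder paths; the real database layout may differ.
previous = pd.read_csv("ALL_EQ_previous.csv")
new_events = pd.read_csv("input/catalog_query/new_query/pde_gcmt_0124.csv")

# Combine, drop exact duplicates, and name the file with the current date (MMDDYY).
combined = pd.concat([previous, new_events], ignore_index=True).drop_duplicates()
combined.to_csv(f"ALL_EQ_{datetime.now():%m%d%y}.csv", index=False)
```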
Get Published Models From ScienceBase¶
The USGS slab modeling code produces new slab geometry models using a given set of data. A database of models referenced in Hayes et al. (2018)
is also available on ScienceBase. To make plots and further analyze these published models, an option to automatically download them to your local device is provided. This can be done by selecting the Download Science Base Models option from the home dropdown menu in the Jupyter Notebook. All models will be saved to the src/output/ folder.
Generate a Tomography Input¶
To process tomographic velocity models, there is a module which converts these models into a grid of points usable as an input file for generating a slab model. Note that velocity models must be formatted such that the data are in %dV; the tomo_format.py module can be used to convert absolute velocity models to the proper format (a rough sketch of this conversion is shown after the filter list below). To make a new tomography input in the Jupyter Notebook, first select the desired formatted tomography file, then select the appropriate slab guide to constrain it. Next, select the slab region and input a 4-letter identification code for the model (this will be used to name the output file). Finally, there are a number of filters to apply when processing the data. The applications of these filters are as follows:
- posmag / negmag: the maximum allowable distances (in km) from the slab guide used to constrain the model
- hnode: horizontal node spacing for the output grid (in degrees)
- vnode: vertical node spacing for the output grid (in km)
- HRES: horizontal node spacing for the upsampled tomography (in degrees)
- VRES: vertical node spacing for the upsampled tomography (in km)
- thresh: percent velocity perturbation threshold for making points
- shallim: shallow depth limit to apply to the slab guide
- minlen: minimum number of sample points
- maxlen: maximum number of sample points
- maxdist: maximum vertical distance from the slab guide
Resulting tomography files will be saved to the data/input_data folder, and are now ready to be used as inputs for generating a USGS slab model.
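As noted above, the tomography data must be expressed as a percent velocity perturbation (%dV) before it can be used; tomo_format.py performs this conversion for supported inputs. The sketch below only illustrates the underlying calculation, with an assumed column layout and a crude depth-mean reference profile standing in for a proper 1-D reference model.

```python
import pandas as pd

# Assumed layout: columns lon, lat, depth (km), and absolute velocity (km/s).
tomo = pd.read_csv("data/input_data/example_tomography.csv")

# Percent perturbation relative to the mean velocity at each depth level
# (a crude stand-in for a reference model such as ak135).
reference = tomo.groupby("depth")["velocity"].transform("mean")
tomo["dv_percent"] = 100.0 * (tomo["velocity"] - reference) / reference

tomo.to_csv("data/input_data/example_tomography_dv.csv", index=False)
print(tomo[["depth", "velocity", "dv_percent"]].head())
```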
Other Modules¶
This list covers most of the functions necessary for generating and visualizing slab models, all of which are available in the USGS Slab Models Jupyter Notebook. For a comprehensive list of all functions in the USGS Slab Models repository and how to interact with them directly, refer to the following pages.