openquake.commonlib package¶
openquake.commonlib.datastore module¶
- class openquake.commonlib.datastore.DataStore(path, ppath=None, mode=None)[source]¶
Bases:
collections.abc.MutableMapping
DataStore class to store the inputs/outputs of a calculation on the filesystem.
Here is a minimal example of usage:
>>> dstore, log = build_dstore_log()
>>> with dstore, log:
...     dstore['example'] = 42
...     print(dstore['example'][()])
42
When reading the items, the DataStore will return a generator. The items will be ordered lexicographically according to their name.
There is a serialization protocol to store objects in the datastore. An object is serializable if it has a method __toh5__ returning an array and a dictionary, and a method __fromh5__ taking an array and a dictionary and populating the object. For an example of use see
openquake.hazardlib.site.SiteCollection
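The protocol can be illustrated with a minimal sketch; the Point class below is hypothetical and invented for this example, not part of the engine:

```python
import numpy

class Point(object):
    """Hypothetical serializable object; the class name and its fields
    are invented for this example and are not part of openquake."""
    def __init__(self, coords, name=''):
        self.coords = numpy.asarray(coords)
        self.name = name

    def __toh5__(self):
        # return (array, dict-of-attributes) to be stored in HDF5
        return self.coords, {'name': self.name}

    def __fromh5__(self, array, attrs):
        # repopulate the object from the stored array and attributes
        self.coords = array
        self.name = attrs['name']

# round-trip through the protocol, without an actual datastore
p = Point([1., 2.], name='origin')
array, attrs = p.__toh5__()
q = Point.__new__(Point)  # empty instance, as the datastore would build it
q.__fromh5__(array, attrs)
```

On storage the datastore calls __toh5__ and saves the array plus the attributes; on retrieval it instantiates the class without calling __init__ and calls __fromh5__, as sketched above.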
.
- build_fname(prefix, postfix, fmt, export_dir=None)[source]¶
Build a file name from a realization, by using prefix and extension.
- Parameters
prefix – the prefix to use
postfix – the postfix to use (can be a realization object)
fmt – the extension (‘csv’, ‘xml’, etc)
export_dir – export directory (if None use .export_dir)
- Returns
relative pathname including the extension
- calc_id = None¶
- closed = 0¶
- create_df(key, nametypes, compression=None, **kw)[source]¶
Create a HDF5 datagroup readable as a pandas DataFrame
- Parameters
key – name of the dataset
nametypes – list of pairs (name, dtype) or (name, array) or DataFrame
compression – the kind of HDF5 compression to use
kw – extra attributes to store
- create_dset(key, dtype, shape=(None,), compression=None, fillvalue=0, attrs=None)[source]¶
Create a one-dimensional HDF5 dataset.
- Parameters
key – name of the dataset
dtype – dtype of the dataset (usually composite)
shape – shape of the dataset, possibly extendable
compression – the kind of HDF5 compression to use
fillvalue – fill value used to initialize the dataset
attrs – dictionary of attributes of the dataset
- Returns
a HDF5 dataset
- property export_dir¶
Return the underlying export directory
- export_path(relname, export_dir=None)[source]¶
Return the path of the exported file by adding the export_dir in front and the calculation ID at the end.
- Parameters
relname – relative file name
export_dir – export directory (if None use .export_dir)
- get_attr(key, name, default=None)[source]¶
- Parameters
key – dataset path
name – name of the attribute
default – value to return if the attribute is missing
- get_attrs(key)[source]¶
- Parameters
key – dataset path
- Returns
dictionary of attributes for that path
- getsize(key='/')[source]¶
Return the size in bytes of the output associated with the given key. If no key is given, return the total size of all files.
- job = None¶
- property metadata¶
- Returns
datastore metadata version, date, checksum as a dictionary
- opened = 0¶
- read_df(key, index=None, sel=(), slc=slice(None, None, None))[source]¶
- Parameters
key – name of the structured dataset
index – pandas index (or multi-index), possibly None
sel – dictionary used to select subsets of the dataset
slc – slice object to extract a slice of the dataset
- Returns
pandas DataFrame associated to the dataset
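The (name, array) convention accepted by create_df and returned by read_df can be sketched in memory with plain pandas, without an actual HDF5 file (the column names here are invented for the example):

```python
import numpy
import pandas

# create_df accepts pairs (name, array); conceptually the datastore group
# holds one dataset per column and read_df rebuilds the DataFrame from them.
# This sketch reproduces only the in-memory mapping, not the HDF5 storage.
nametypes = [('eid', numpy.arange(3)),
             ('gmv', numpy.array([0.1, 0.2, 0.3]))]
df = pandas.DataFrame(dict(nametypes))
by_eid = df.set_index('eid')  # read_df(key, index='eid') yields this shape
```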
- read_unique(key, field)[source]¶
- Parameters
key – key to a dataset containing a structured array
field – a field in the structured array
- Returns
sorted, unique values
Works with chunks of 1M records
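The chunked approach can be sketched as follows; read_unique_sketch is a hypothetical re-implementation working on an in-memory structured array rather than an HDF5 dataset:

```python
import numpy

def read_unique_sketch(array, field, chunksize=1_000_000):
    # collect the unique values chunk by chunk, to avoid loading
    # a huge field in memory at once
    uniques = set()
    for start in range(0, len(array), chunksize):
        uniques.update(array[field][start:start + chunksize].tolist())
    return sorted(uniques)

records = numpy.array([(1, b'a'), (2, b'b'), (1, b'c')],
                      dtype=[('id', 'i4'), ('tag', 'S1')])
ids = read_unique_sketch(records, 'id', chunksize=2)  # -> [1, 2]
```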
- openquake.commonlib.datastore.build_dstore_log(description='custom calculation', parent=())[source]¶
- Returns
DataStore instance associated to the .calc_id
- openquake.commonlib.datastore.extract_calc_id_datadir(filename)[source]¶
Extract the calculation ID from the given filename or integer:
>>> extract_calc_id_datadir('/mnt/ssd/oqdata/calc_25.hdf5')
(25, '/mnt/ssd/oqdata')
>>> extract_calc_id_datadir('/mnt/ssd/oqdata/wrong_name.hdf5')
Traceback (most recent call last):
...
ValueError: Cannot extract calc_id from /mnt/ssd/oqdata/wrong_name.hdf5
- openquake.commonlib.datastore.hdf5new(datadir=None)[source]¶
Return a new hdf5.File instance with a name determined by the last calculation in the datadir (plus one). Set the .path attribute to the generated filename.
- openquake.commonlib.datastore.new(calc_id, oqparam, datadir=None, mode=None)[source]¶
- Parameters
calc_id – if the integer is > 0, look in the database and then on the filesystem; if < 0, look at the old calculations on the filesystem
oqparam – OqParam instance with the validated parameters of the calculation
- Returns
a DataStore instance associated to the given calc_id
- openquake.commonlib.datastore.read(calc_id, mode='r', datadir=None, parentdir=None, read_parent=True)[source]¶
- Parameters
calc_id – calculation ID or filename
mode – ‘r’ or ‘w’
datadir – the directory where to look
parentdir – the datadir of the parent calculation
read_parent – read the parent calculation if it is there
- Returns
the corresponding DataStore instance
Read the datastore, if it exists and it is accessible.
openquake.commonlib.dbapi module¶
One of the worst things about Python is the DB API 2.0 specification, which is unusable except for building frameworks. It should have been a stepping stone toward a usable DB API 3.0 that never happened. So, instead of a good low-level API, we had a proliferation of Object Relational Mappers making our lives a lot harder. Fortunately, there have always been good Pythonistas in the anti-ORM camp.
This module is heavily inspired by the dbapiext module by Martin Blais, which is part of the antiorm package. The main (only) difference is that I am using the question mark (?) for the placeholders instead of the percent sign (%) to avoid confusion with other usages of the % sign, in particular in LIKE queries and in expressions like strftime(‘%s’, time) used in SQLite.
In less than 200 lines of code there is enough support to build dynamic SQL queries and to make an ORM unnecessary, since we do not need database independence.
dbapi tutorial¶
The only thing you must know is the Db class, which is a lazy wrapper over a database connection. You instantiate it by passing a connection function and its arguments:
>>> import sqlite3
>>> db = Db(sqlite3.connect, ':memory:')
Now you have an interface to your database, the db object. This object is lazy, i.e. the connection is not instantiated yet; it will be when you access its .conn attribute. This attribute is accessed automatically when you call the interface to run a query, for instance to create an empty table:
>>> curs = db('CREATE TABLE job ('
... 'id INTEGER PRIMARY KEY AUTOINCREMENT, value INTEGER)')
You can populate the table by using the .insert method:
>>> db.insert('job', ['value'], [(42,), (43,)])
<sqlite3.Cursor object at ...>
Notice that this method returns a standard DB API 2.0 cursor and you have access to all of its features: for instance here you could extract the lastrowid.
Then you can run SELECT queries:
>>> rows = db('SELECT * FROM job')
The dbapi provides a Row class which is used to hold the results of SELECT queries and is working as one would expect:
>>> rows
[<Row(id=1, value=42)>, <Row(id=2, value=43)>]
>>> tuple(rows[0])
(1, 42)
>>> rows[0].id
1
>>> rows[0].value
42
>>> rows[0]._fields
['id', 'value']
The queries can have different kinds of ? parameters:
?s is for interpolated string parameters:
>>> db('SELECT * FROM ?s', 'job')  # ?s is replaced by 'job'
[<Row(id=1, value=42)>, <Row(id=2, value=43)>]
?x is for escaped parameters (to avoid SQL injection):
>>> db('SELECT * FROM job WHERE id=?x', 1)  # ?x is replaced by 1
[<Row(id=1, value=42)>]
?s and ?x are for scalar parameters; ?S and ?X are for sequences:
>>> db('INSERT INTO job (?S) VALUES (?X)', ['id', 'value'], (3, 44))
<sqlite3.Cursor object at ...>
You can see how the interpolation works by calling the expand method that returns the interpolated template (alternatively, there is a debug=True flag when calling db that prints the same info). In this case
>>> db.expand('INSERT INTO job (?S) VALUES (?X)', ['id', 'value'], [3, 44])
'INSERT INTO job (id, value) VALUES (?, ?)'
As you can see, ?S parameters work by replacing a list of strings with a comma separated string, whereas ?X parameters are replaced by a comma separated sequence of question marks, i.e. the low level placeholder for SQLite. The interpolation performs a regular search and replace, so if your template contains a literal ? string that must not be escaped, you can run into issues. This is an error:
>>> match("SELECT * FROM job WHERE id=?x AND description='Lots of ?s'", 1)
Traceback (most recent call last):
...
ValueError: Incorrect number of ?-parameters in SELECT * FROM job WHERE id=?x AND description='Lots of ?s', expected 1
This is correct:
>>> match("SELECT * FROM job WHERE id=?x AND description=?x", 1, 'Lots of ?s')
('SELECT * FROM job WHERE id=? AND description=?', (1, 'Lots of ?s'))
There are three other ? parameters:
?D is for dictionaries and it is used mostly in UPDATE queries:
>>> match('UPDATE mytable SET ?D WHERE id=?x', dict(value=33, other=5), 1)
('UPDATE mytable SET other=?, value=? WHERE id=?', (5, 33, 1))
?A is for dictionaries and it is used in AND queries:
>>> match('SELECT * FROM job WHERE ?A', dict(value=33, id=5))
('SELECT * FROM job WHERE id=? AND value=?', (5, 33))
?O is for dictionaries and it is used in OR queries:
>>> match('SELECT * FROM job WHERE ?O', dict(value=33, id=5))
('SELECT * FROM job WHERE id=? OR value=?', (5, 33))
The dictionary parameters are ordered by field name, just to make the templates reproducible. ?A and ?O are smart enough to treat None parameters specially, turning them into NULL:
>>> match('SELECT * FROM job WHERE ?A', dict(value=None, id=5))
('SELECT * FROM job WHERE id=? AND value IS NULL', (5,))
The ? parameters are matched positionally; it is also possible to pass a few keyword arguments to the db object to tune the standard behavior. In particular, if you know that a query must return a single row you can do the following:
>>> db('SELECT * FROM job WHERE id=?x', 1, one=True)
<Row(id=1, value=42)>
Without one=True the query would have returned a list with a single element. If you know that the query must return a scalar you can do the following:
>>> db('SELECT value FROM job WHERE id=?x', 1, scalar=True)
42
If a query that should return a scalar returns something else, or if a query that should return a row returns a different number of rows, appropriate errors are raised:
>>> db('SELECT * FROM job WHERE id=?x', 1, scalar=True)
Traceback (most recent call last):
...
TooManyColumns: 2, expected 1
>>> db('SELECT * FROM job', None, one=True)
Traceback (most recent call last):
...
TooManyRows: 3, expected 1
If a row is expected but not found, a NotFound exception is raised:
>>> db('SELECT * FROM job WHERE id=?x', None, one=True)
Traceback (most recent call last):
...
NotFound
- class openquake.commonlib.dbapi.Db(connect, *args, **kw)[source]¶
Bases:
object
A wrapper over a DB API 2 connection. See the tutorial.
- property conn¶
- classmethod expand(m_templ, *m_args)[source]¶
Performs partial interpolation of the template. Used for debugging.
- property path¶
Path to the underlying sqlite file
- exception openquake.commonlib.dbapi.NotFound[source]¶
Bases:
Exception
Raised when a scalar query has no output
- class openquake.commonlib.dbapi.Row(fields, values)[source]¶
Bases:
collections.abc.Sequence
A pickleable row, working both as a tuple and an object:
>>> row = Row(['id', 'value'], (1, 2))
>>> tuple(row)
(1, 2)
>>> assert row[0] == row.id and row[1] == row.value
- Parameters
fields – a sequence of field names
values – a sequence of values (one per field)
- class openquake.commonlib.dbapi.Table(fields, rows)[source]¶
Bases:
list
Just a list of Rows with an attribute _fields
- exception openquake.commonlib.dbapi.TooManyColumns[source]¶
Bases:
Exception
Raised when a scalar query has more than one column
calc module¶
- class openquake.commonlib.calc.RuptureImporter(dstore)[source]¶
Bases:
object
Import an array of ruptures correctly, i.e. by populating the datasets ruptures, rupgeoms, events.
- check_overflow(E)[source]¶
Raise a ValueError if the number of IMTs is larger than 256 or the number of events is larger than 4,294,967,296. The limits are due to the numpy dtype used to store the GMFs (gmv_dt). There is also a limit of max_potential_gmfs on the number of sites times the number of events, to avoid producing too many GMFs. In that case, split the calculation or be smarter.
- openquake.commonlib.calc.compute_hazard_maps(curves, imls, poes)[source]¶
Given a set of hazard curve poes, interpolate hazard maps at the specified poes.
- Parameters
curves – Array of floats of shape N x L. Each row represents a curve, where the values in the row are the PoEs (Probabilities of Exceedance) corresponding to the imls. Each curve corresponds to a geographical location.
imls – Intensity Measure Levels associated with these hazard curves. Type should be an array-like of floats.
poes – Value(s) on which to interpolate a hazard map from the input curves. Can be an array-like or scalar value (for a single PoE).
- Returns
An array of shape N x P, where N is the number of curves and P the number of poes.
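The interpolation idea for a single curve can be sketched with numpy; interp_hmap is a hypothetical simplification (the engine's actual implementation may interpolate in log space and clip out-of-range PoEs):

```python
import numpy

def interp_hmap(curve, imls, poes):
    # hazard curve PoEs decrease with increasing IML, so reverse both
    # arrays before calling numpy.interp, which needs increasing x values
    return numpy.interp(poes, curve[::-1], imls[::-1])

imls = numpy.array([0.1, 0.2, 0.4, 0.8])
curve = numpy.array([0.9, 0.6, 0.3, 0.1])  # PoEs for one site
iml_at_poe = interp_hmap(curve, imls, [0.6])  # -> array([0.2])
```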
- openquake.commonlib.calc.convert_to_array(pmap, nsites, imtls, inner_idx=0)[source]¶
Convert the probability map into a composite array with header of the form PGA-0.1, PGA-0.2 …
- Parameters
pmap – probability map
nsites – total number of sites
imtls – a DictArray with IMT and levels
- Returns
a composite array of length nsites
- openquake.commonlib.calc.get_lvl(hcurve, imls, poe)[source]¶
- Parameters
hcurve – a hazard curve, i.e. array of L1 PoEs
imls – L1 intensity measure levels
- Returns
index of the intensity measure level associated to the poe
>>> imls = numpy.array([.1, .2, .3, .4])
>>> hcurve = numpy.array([1., .99, .90, .8])
>>> get_lvl(hcurve, imls, 1)
0
>>> get_lvl(hcurve, imls, .99)
1
>>> get_lvl(hcurve, imls, .91)
2
>>> get_lvl(hcurve, imls, .8)
3
- openquake.commonlib.calc.get_mean_curve(dstore, imt, site_id=0)[source]¶
Extract the mean hazard curve from the datastore for the given site (the first one by default).
- openquake.commonlib.calc.get_poe_from_mean_curve(dstore, imt, iml, site_id=0)[source]¶
Extract the poe corresponding to the given iml by looking at the mean curve for the given imt. iml can also be an array.
- openquake.commonlib.calc.gmvs_to_poes(df, imtls, ses_per_logic_tree_path)[source]¶
- Parameters
df – a DataFrame with fields gmv_0, .. gmv_{M-1}
imtls – a dictionary imt -> imls with M IMTs and L levels
ses_per_logic_tree_path – a positive integer
- Returns
an array of PoEs of shape (M, L)
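The idea for a single IMT can be sketched as follows; gmvs_to_poes_sketch is a hypothetical simplification (the real function works on a DataFrame with one gmv_&lt;m&gt; column per IMT and returns a (M, L) array):

```python
import numpy

def gmvs_to_poes_sketch(gmvs, imls, num_ses):
    # the PoE of each level is approximated here as the number of
    # exceedances divided by the number of stochastic event sets
    return (gmvs[:, None] >= imls).sum(axis=0) / num_ses

gmvs = numpy.array([0.05, 0.15, 0.25])  # ground motion values at a site
imls = numpy.array([0.1, 0.2, 0.3])     # intensity measure levels
poes = gmvs_to_poes_sketch(gmvs, imls, num_ses=10)  # -> [0.2, 0.1, 0.0]
```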
- openquake.commonlib.calc.make_hmaps(pmaps, imtls, poes)[source]¶
Compute the hazard maps associated to the passed probability maps.
- Parameters
pmaps – a list of Pmaps of shape (N, M, L1)
imtls – DictArray with M intensity measure types
poes – P PoEs where to compute the maps
- Returns
a list of Pmaps with size (N, M, P)
hazard_writers module¶
Classes for serializing various NRML XML artifacts.
- class openquake.commonlib.hazard_writers.BaseCurveWriter(dest, **metadata)[source]¶
Bases:
object
Base class for curve writers.
- Parameters
dest – File path (including filename) or file-like object for results to be saved to.
metadata –
The following keyword args are required:
investigation_time: Investigation time (in years) defined in the calculation which produced these results.
The following are more or less optional (combinational rules noted below where applicable):
statistics: ‘mean’ or ‘quantile’
quantile_value: Only required if statistics = ‘quantile’.
smlt_path: String representing the logic tree path which produced these curves. Only required for non-statistical curves.
gsimlt_path: String representing the GSIM logic tree path which produced these curves. Only required for non-statistical curves.
- class openquake.commonlib.hazard_writers.HazardCurveXMLWriter(dest, **metadata)[source]¶
Bases:
openquake.commonlib.hazard_writers.BaseCurveWriter
Hazard Curve XML writer. See
BaseCurveWriter
for a list of general constructor inputs.
- The following additional metadata params are required:
imt: Intensity measure type used to compute these hazard curves.
imls: Intensity measure levels, which represent the x-axis values of each curve.
- The following parameters are optional:
sa_period: Only used with imt = ‘SA’.
sa_damping: Only used with imt = ‘SA’.
- add_hazard_curves(root, metadata, data)[source]¶
Add hazard curves stored into data as child of the root element with metadata. See the documentation of the method serialize and the constructor for a description of data and metadata, respectively.
- serialize(data)[source]¶
Write a sequence of hazard curves to the specified file.
- Parameters
data –
Iterable of hazard curve data. Each datum must be an object with the following attributes:
poes: A list of probability of exceedance values (floats).
location: An object representing the location of the curve; must have x and y to represent lon and lat, respectively.
- class openquake.commonlib.hazard_writers.HazardMapWriter(dest, **metadata)[source]¶
Bases:
object
- Parameters
dest – File path (including filename) or a file-like object for results to be saved to.
metadata –
The following keyword args are required:
investigation_time: Investigation time (in years) defined in the calculation which produced these results.
imt: Intensity measure type used to compute these hazard curves.
poe: The Probability of Exceedance level for which this hazard map was produced.
The following are more or less optional (combinational rules noted below where applicable):
statistics: ‘mean’ or ‘quantile’
quantile_value: Only required if statistics = ‘quantile’.
smlt_path: String representing the logic tree path which produced these curves. Only required for non-statistical curves.
gsimlt_path: String representing the GSIM logic tree path which produced these curves. Only required for non-statistical curves.
sa_period: Only used with imt = ‘SA’.
sa_damping: Only used with imt = ‘SA’.
- class openquake.commonlib.hazard_writers.HazardMapXMLWriter(dest, **metadata)[source]¶
Bases:
openquake.commonlib.hazard_writers.HazardMapWriter
NRML/XML implementation of a
HazardMapWriter
. See
HazardMapWriter
for information about constructor parameters.
- serialize(data)[source]¶
Serialize hazard map data to XML.
See
HazardMapWriter.serialize()
for details about the expected input.
- class openquake.commonlib.hazard_writers.UHSXMLWriter(dest, **metadata)[source]¶
Bases:
openquake.commonlib.hazard_writers.BaseCurveWriter
UHS curve XML writer. See
BaseCurveWriter
for a list of general constructor inputs.
- The following additional metadata params are required:
- poe: Probability of exceedance for which a given set of UHS has been computed
- periods: A list of SA (Spectral Acceleration) period values, sorted in ascending order
- serialize(data)[source]¶
Write a sequence of uniform hazard spectra to the specified file.
- Parameters
data –
Iterable of UHS data. Each datum must be an object with the following attributes:
imls: A sequence of Intensity Measure Levels
location: An object representing the location of the curve; must have x and y to represent lon and lat, respectively.
- openquake.commonlib.hazard_writers.gen_gmfs(gmf_set)[source]¶
Generate GMF nodes from a gmf_set.
- Parameters
gmf_set – a sequence of GMF objects with attributes imt, sa_period, sa_damping, event_id, each containing a list of GMF nodes with attributes gmv and location. The nodes are sorted by lon/lat.
logictree module¶
Logic tree parser, verifier and processor. See specs at https://blueprints.launchpad.net/openquake-old/+spec/openquake-logic-tree-module
A logic tree object must be iterable and yield realizations, i.e. objects with attributes value, weight, lt_path and ordinal.
- class openquake.commonlib.logictree.FullLogicTree(source_model_lt, gsim_lt)[source]¶
Bases:
object
The full logic tree, i.e. the composition of a source model logic tree and a GSIM logic tree:
- Parameters
source_model_lt –
SourceModelLogicTree
objectgsim_lt –
GsimLogicTree
object
- classmethod fake(gsimlt=None)[source]¶
- Returns
a fake FullLogicTree instance with the given gsim logic tree object; if None, a fake gsim logic tree is built automatically
- get_num_rlzs(sm_rlz=None)[source]¶
- Parameters
sm_rlz – a Realization instance (or None)
- Returns
the number of realizations per source model (or all)
- get_rlzs_by_gsim_list(list_of_trt_smrs)[source]¶
- Returns
a list of dictionaries rlzs_by_gsim, one for each grp_id
- get_trt_smrs(smr)[source]¶
- Parameters
smr – effective realization index
- Returns
array of T group IDs, where T is the number of TRTs
- property num_samples¶
- Returns
the source_model_lt
num_samples
parameter
- property rlzs¶
- Returns
an array of realizations
- property sampling_method¶
- Returns
the source_model_lt
sampling_method
parameter
- property seed¶
- Returns
the source_model_lt seed
- class openquake.commonlib.logictree.Info(smpaths, h5paths, applytosources)¶
Bases:
tuple
- applytosources¶
Alias for field number 2
- h5paths¶
Alias for field number 1
- smpaths¶
Alias for field number 0
- class openquake.commonlib.logictree.LtRealization(ordinal, sm_lt_path, gsim_rlz, weight)[source]¶
Bases:
object
Composite realization built on top of a source model realization and a GSIM realization.
- property gsim_lt_path¶
- class openquake.commonlib.logictree.SourceLogicTree(source_id, branchsets, bsetdict)[source]¶
Bases:
object
Source specific logic tree (full enumeration)
- class openquake.commonlib.logictree.SourceModelLogicTree(filename, seed=0, num_samples=0, sampling_method='early_weights', test_mode=False, branchID=None)[source]¶
Bases:
object
Source model logic tree parser.
- Parameters
filename – Full pathname of logic tree file
- Raises
LogicTreeError – If the logic tree file has a logic error which cannot be prevented by the XML schema rules (like referencing sources with a missing id).
- ABSOLUTE_UNCERTAINTIES = ('abGRAbsolute', 'bGRAbsolute', 'maxMagGRAbsolute', 'simpleFaultGeometryAbsolute', 'truncatedGRFromSlipAbsolute', 'complexFaultGeometryAbsolute', 'setMSRAbsolute')¶
- FILTERS = ('applyToTectonicRegionType', 'applyToSources', 'applyToBranches')¶
- apply_branchset(apply_to_branches, lineno, branchset)[source]¶
See superclass’ method for description and signature specification.
Parses branchset node’s attribute
@applyToBranches
to apply the following branchsets to the preceding branches selectively. A branching level can have more than one branchset exactly for this reason: different branchsets can apply to different open ends.
Checks that the branchset is applied only to branches on the previous branching level which do not have a child branchset yet.
- bset_values(lt_path)[source]¶
- Parameters
lt_path – the logic tree path of an effective realization
- Returns
a list of B - 1 pairs (branchset, value)
- collect_source_model_data(branch_id, source_model)[source]¶
Parse source model file and collect information about source ids, source types and tectonic region types available in it. That information is used then for
validate_filters()
andvalidate_uncertainty_value()
.
- decompose()[source]¶
If the logic tree is source specific, returns a dictionary source ID -> SourceLogicTree instance
- parse_branches(branchset_node, branchset)[source]¶
Create and attach branches at
branchset_node
to
branchset
.
- Parameters
branchset_node – Same as for
parse_branchset()
.
branchset – An instance of
BranchSet
.
Checks that each branch has a
valid
value and a unique id, and that all branches have a total weight of 1.0.
- Returns
None
; all branches are attached to the provided branchset.
- parse_branchset(branchset_node, depth)[source]¶
- Parameters
node (branchset) –
etree.Element
object with tag “logicTreeBranchSet”.
depth – The sequential number of this branching level, starting from 0.
Enumerates child branchsets and calls
parse_branchset()
,validate_branchset()
,parse_branches()
and finallyapply_branchset()
for each.
Keeps track of “open ends” – the set of branches that do not have any child branchset at this step of the execution. After processing each branchset, only the branches listed in it can have child branchsets (if there is one on the next level).
- parse_filters(branchset_node, uncertainty_type, filters)[source]¶
Converts “applyToSources” and “applyToBranches” filters by splitting into lists.
- parse_tree(tree_node)[source]¶
Parse the whole tree and point
root_branchset
attribute to the tree’s root.
- validate_branchset(branchset_node, depth, branchset)[source]¶
See superclass’ method for description and signature specification.
Checks that the following conditions are met:
First branching level must contain exactly one branchset, which must be of type “sourceModel”.
All other branchsets must not be of type “sourceModel” or “gmpeModel”.
- validate_filters(branchset_node, uncertainty_type, filters)[source]¶
See superclass’ method for description and signature specification.
Checks that the following conditions are met:
“sourceModel” uncertainties can not have filters.
Absolute uncertainties must have only one filter – “applyToSources”, with only one source id.
All other uncertainty types can have either no or one filter.
Filter “applyToSources” must mention only source ids that exist in source models.
Filter “applyToTectonicRegionType” must mention only tectonic region types that exist in source models.
- openquake.commonlib.logictree.collect_info(smltpath, branchID=None)[source]¶
Given a path to a source model logic tree, collect all of the path names to the source models it contains.
- Parameters
smltpath – source model logic tree file
branchID – if given, consider only that branch
- Returns
an Info namedtuple (smpaths, h5paths, applytosources)
- openquake.commonlib.logictree.collect_paths(paths, b1=91, b2=93, til=126)[source]¶
Collect branch paths belonging to the same cluster
>>> collect_paths([b'0~A0', b'0~A1'])
b'[0]~[A][01]'
- openquake.commonlib.logictree.compose(source_model_lt, gsim_lt)[source]¶
- Returns
a CompositeLogicTree instance
- openquake.commonlib.logictree.get_effective_rlzs(rlzs)[source]¶
Group together realizations with the same path and yield the first representative of each group.
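The grouping can be sketched with itertools.groupby; the Rlz class and its fields here are invented for the example (real realizations also expose value, weight and lt_path, as noted at the top of this module):

```python
import itertools
import operator

class Rlz:
    # minimal hypothetical stand-in for a realization object
    def __init__(self, ordinal, path):
        self.ordinal = ordinal
        self.path = path  # logic tree path, e.g. 'A~X'

rlzs = [Rlz(0, 'A~X'), Rlz(1, 'A~X'), Rlz(2, 'B~Y')]
key = operator.attrgetter('path')
effective = [next(iter(grp))  # first representative of each group
             for _, grp in itertools.groupby(sorted(rlzs, key=key), key)]
# -> the realizations with ordinals 0 and 2
```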
- openquake.commonlib.logictree.get_field(data, field, default)[source]¶
- Parameters
data – a record with a field field, possibly missing
- openquake.commonlib.logictree.read_source_groups(fname)[source]¶
- Parameters
fname – a path to a source model XML file
- Returns
a list of SourceGroup objects containing source nodes
- openquake.commonlib.logictree.reduce_full(full_lt, rlz_clusters)[source]¶
- Parameters
full_lt – a FullLogicTree instance
rlz_clusters – list of paths for a realization cluster
- Returns
a dictionary with what can be reduced
- openquake.commonlib.logictree.reducible(lt, cluster_paths)[source]¶
- Parameters
lt – a logic tree with B branches
cluster_paths – list of paths for a realization cluster
- Returns
a list [filename, (branchSetID, branchIDs), …]
logs module¶
Set up some system-wide loggers
- class openquake.commonlib.logs.LogContext(job_ini, calc_id, log_level='info', log_file=None, user_name=None, hc_id=None, host=None)[source]¶
Bases:
object
Context manager managing the logging functionality
- multi = False¶
- oqparam = None¶
- class openquake.commonlib.logs.LogDatabaseHandler(job_id)[source]¶
Bases:
logging.Handler
Log stream handler
- class openquake.commonlib.logs.LogFileHandler(job_id, log_file)[source]¶
Bases:
logging.FileHandler
Log file handler
- class openquake.commonlib.logs.LogStreamHandler(job_id)[source]¶
Bases:
logging.StreamHandler
Log stream handler
- emit(record)[source]¶
Emit a record.
If a formatter is specified, it is used to format the record. The record is then written to the stream with a trailing newline. If exception information is present, it is formatted using traceback.print_exception and appended to the stream. If the stream has an ‘encoding’ attribute, it is used to determine how to do the output to the stream.
- openquake.commonlib.logs.dbcmd(action, *args)[source]¶
A dispatcher to the database server.
- Parameters
action (string) – database action to perform
args (tuple) – arguments
- openquake.commonlib.logs.dblog(level: str, job_id: int, task_no: int, msg: str)[source]¶
Log on the database
- openquake.commonlib.logs.get_calc_ids(datadir=None)[source]¶
Extract the available calculation IDs from the datadir, in order.
- openquake.commonlib.logs.get_datadir()[source]¶
Extracts the path of the directory where the openquake data are stored from the environment ($OQ_DATADIR) or from the shared_dir in the configuration file.
- openquake.commonlib.logs.get_last_calc_id(datadir=None)[source]¶
Extract the latest calculation ID from the given directory. If none is found, return 0.
- openquake.commonlib.logs.init(job_or_calc, job_ini, log_level='info', log_file=None, user_name=None, hc_id=None, host=None)[source]¶
- Parameters
job_or_calc – the string “job” or “calcXXX”
job_ini – path to the job.ini file or dictionary of parameters
log_level – the log level as a string or number
log_file – path to the log file (if any)
user_name – user running the job (None means current user)
hc_id – parent calculation ID (default None)
host – machine where the calculation is running (default None)
- Returns
a LogContext instance
initialize the root logger (if not already initialized)
set the format of the root log handlers (if any)
create a job in the database if job_or_calc == “job”
return a LogContext instance associated to a calculation ID
oqvalidation module¶
Full list of configuration parameters¶
Engine Version: 3.16.7
Some parameters have a default that is used when the parameter is not specified in the job.ini file. Other parameters have no default, which means that not specifying them will raise an error when running a calculation that requires them.
- aggregate_by:
Used to compute aggregate losses and aggregate loss curves in risk calculations. Takes in input one or more exposure tags. Example: aggregate_by = region, taxonomy. Default: empty list
- reaggregate_by:
Used to perform additional aggregations in risk calculations. Takes in input a proper subset of the tags in the aggregate_by option. Example: reaggregate_by = region. Default: empty list
- amplification_method:
Used in classical PSHA calculations to amplify the hazard curves with the convolution or kernel method. Example: amplification_method = kernel. Default: “convolution”
- area_source_discretization:
Discretization parameters (in km) for area sources. Example: area_source_discretization = 10. Default: 10
- ash_wet_amplification_factor:
Used in volcanic risk calculations. Example: ash_wet_amplification_factor=1.0. Default: 1.0
- asset_correlation:
Used in risk calculations to take into account asset correlation. Accepts only the values 1 (full correlation) and 0 (no correlation). Example: asset_correlation=1. Default: 0
- asset_hazard_distance:
In km, used in risk calculations to print a warning when there are assets too distant from the hazard sites. Example: asset_hazard_distance = 5. Default: 15
- asset_life_expectancy:
Used in the classical_bcr calculator. Example: asset_life_expectancy = 50. Default: no default
- assets_per_site_limit:
INTERNAL
- gmf_max_gb:
If the size (in GB) of the GMFs is below this value, then compute avg_gmf. Example: gmf_max_gb = 1. Default: 0.1
- avg_losses:
Used in risk calculations to compute average losses. Example: avg_losses=false. Default: True
- base_path:
INTERNAL
- cachedir:
INTERNAL
- cache_distances:
Useful in UCERF calculations. Example: cache_distances = true. Default: False
- calculation_mode:
One of classical, disaggregation, event_based, scenario, scenario_risk, scenario_damage, event_based_risk, classical_risk, classical_bcr. Example: calculation_mode=classical Default: no default
- collapse_gsim_logic_tree:
INTERNAL
- collapse_level:
INTERNAL
- collect_rlzs:
Collect all realizations into a single effective realization. If not given it is true for sampling and false for full enumeration. Example: collect_rlzs=true. Default: None
- compare_with_classical:
Used in event based calculation to perform also a classical calculation, so that the hazard curves can be compared. Example: compare_with_classical = true. Default: False
- complex_fault_mesh_spacing:
In km, used to discretize complex faults. Example: complex_fault_mesh_spacing = 15. Default: 5
- concurrent_tasks:
A hint to the engine for the number of tasks to generate. Do not set it unless you know what you are doing. Example: concurrent_tasks = 100. Default: twice the number of cores
- conditional_loss_poes:
Used in classical_risk calculations to compute loss curves. Example: conditional_loss_poes = 0.01 0.02. Default: empty list
- cholesky_limit:
When generating the GMFs from a ShakeMap the engine needs to perform a Cholesky decomposition of a matrix of size (M x N)^2, where M is the number of intensity measure types and N the number of sites. The decomposition can become ultra-slow, run out of memory, or produce bogus negative eigenvalues; therefore there is a limit on the maximum size of M x N. Example: cholesky_limit = 1000. Default: 10,000
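The size guard can be illustrated with a small sketch (the function below is hypothetical, not engine code; it only shows why the limit matters — the full correlation matrix has (M x N)^2 entries):

```python
import numpy as np

def check_cholesky_size(M, N, cholesky_limit=10_000):
    """Refuse to build an (M*N) x (M*N) matrix when M * N exceeds the limit."""
    size = M * N
    if size > cholesky_limit:
        raise ValueError(f"M * N = {size} exceeds cholesky_limit={cholesky_limit}")
    return size

# a well-conditioned symmetric positive definite matrix of allowed size
n = check_cholesky_size(M=5, N=10)
mat = np.eye(n) + 0.01 * np.ones((n, n))
L = np.linalg.cholesky(mat)       # succeeds for small M * N
print(L.shape)                    # (50, 50)
```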
- continuous_fragility_discretization:
Used when discretizing continuous fragility functions. Example: continuous_fragility_discretization = 10. Default: 20
- coordinate_bin_width:
Used in disaggregation calculations. Example: coordinate_bin_width = 1.0. Default: no default
- cross_correlation:
When used in conditional spectrum calculations it is the name of a cross correlation class (e.g. "BakerJayaram2008"). When used in ShakeMap calculations the valid choices are "yes", "no" and "full", the same as for spatial_correlation. Example: cross_correlation = no. Default: "yes"
- description:
A string describing the calculation. Example: description = Test calculation. Default: “no description”
- disagg_bin_edges:
A dictionary where the keys can be: mag, eps, dist, lon, lat and the values are lists of floats indicating the edges of the bins used to perform the disaggregation. Example: disagg_bin_edges = {‘mag’: [5.0, 5.5, 6.0, 6.5]}. Default: empty dictionary
- disagg_by_src:
Flag used to enable disaggregation by source when possible. Example: disagg_by_src = true. Default: False
- disagg_outputs:
Used in disaggregation calculations to restrict the number of exported outputs. Example: disagg_outputs = Mag_Dist. Default: list of all possible outputs
- discard_assets:
Flag used in risk calculations to discard assets from the exposure. Example: discard_assets = true. Default: False
- discard_trts:
Used to discard tectonic region types that do not contribute to the hazard. Example: discard_trts = Volcanic. Default: empty list
- discrete_damage_distribution:
Make sure the damage distribution contains only integers (requires the "number" field in the exposure to be an integer). Example: discrete_damage_distribution = true. Default: False
- distance_bin_width:
In km, used in disaggregation calculations to specify the distance bins. Example: distance_bin_width = 20. Default: no default
- ebrisk_maxsize:
INTERNAL
- epsilon_star:
A boolean controlling the typology of disaggregation output to be provided. When True, disaggregation is performed in terms of epsilon* rather than epsilon (see Bazzurro and Cornell, 1999).
- floating_x_step:
Float, used in rupture generation for kite faults; indicates the fraction of the fault length used to float ruptures along strike (e.g. 0.5 floats the ruptures at half the rupture length). A uniform distribution of the ruptures is maintained: if the mesh spacing and rupture dimensions prohibit the defined overlap fraction, the fraction is increased until a uniform distribution is achieved. The minimum possible value depends on the rupture dimensions and the mesh spacing. If 0, standard rupture floating is used along strike (i.e. no mesh nodes are skipped). Example: floating_x_step = 0.5. Default: 0
- floating_y_step:
Float, used in rupture generation for kite faults; indicates the fraction of the fault width used to float ruptures down dip (e.g. 0.5 floats the ruptures at half the rupture width). A uniform distribution of the ruptures is maintained: if the mesh spacing and rupture dimensions prohibit the defined overlap fraction, the fraction is increased until a uniform distribution is achieved. The minimum possible value depends on the rupture dimensions and the mesh spacing. If 0, standard rupture floating is used down dip (i.e. no mesh nodes are skipped). Example: floating_y_step = 0.5. Default: 0
- ignore_encoding_errors:
If set, skip characters with non-UTF8 encoding. Example: ignore_encoding_errors = true. Default: False
- ignore_master_seed:
If set, estimate analytically the uncertainty on the losses due to the uncertainty on the vulnerability functions. Example: ignore_master_seed = vulnerability. Default: None
- export_dir:
Set the export directory. Example: export_dir = /tmp. Default: the current directory, “.”
- exports:
Specify what kind of outputs to export by default. Example: exports = csv, rst. Default: empty list
- ground_motion_correlation_model:
Enable ground motion correlation. Example: ground_motion_correlation_model = JB2009. Default: None
- ground_motion_correlation_params:
To be used together with ground_motion_correlation_model. Example: ground_motion_correlation_params = {“vs30_clustering”: False}. Default: empty dictionary
- ground_motion_fields:
Flag to turn on/off the calculation of ground motion fields. Example: ground_motion_fields = false. Default: True
- gsim:
Used to specify a GSIM in scenario or event based calculations. Example: gsim = BooreAtkinson2008. Default: “[FromFile]”
- hazard_calculation_id:
Used to specify a previous calculation from which the hazard is read. Example: hazard_calculation_id = 42. Default: None
- hazard_curves:
Used to disable the calculation of hazard curves when there are too many realizations. Example: hazard_curves = false. Default: True
- hazard_curves_from_gmfs:
Used in scenario/event based calculations. If set, generates hazard curves from the ground motion fields. Example: hazard_curves_from_gmfs = true. Default: False
- hazard_maps:
Set it to true to export the hazard maps. Example: hazard_maps = true. Default: False
- horiz_comp_to_geom_mean:
Apply the correction to the geometric mean when possible, depending on the GMPE and the Intensity Measure Component Example: horiz_comp_to_geom_mean = true. Default: False
- ignore_covs:
Used in risk calculations to set all the coefficients of variation of the vulnerability functions to zero. Example ignore_covs = true Default: False
- ignore_missing_costs:
Accepts exposures with missing costs (by discarding such assets). Example: ignore_missing_costs = nonstructural, business_interruption. Default: False
- iml_disagg:
Used in disaggregation calculations to specify an intensity measure type and level. Example: iml_disagg = {‘PGA’: 0.02}. Default: no default
- imt_ref:
Reference intensity measure type used to compute the conditional spectrum. The imt_ref must belong to the list of IMTs of the calculation. Example: imt_ref = SA(0.15). Default: no default
- individual_rlzs:
When set, store the individual hazard curves and/or individual risk curves for each realization. Example: individual_rlzs = true. Default: False
- individual_curves:
Legacy name for individual_rlzs, it should not be used. Example: individual_curves = true. Default: False
- inputs:
INTERNAL. Dictionary with the input files paths.
- intensity_measure_types:
List of intensity measure types in an event based calculation. Example: intensity_measure_types = PGA SA(0.1). Default: empty list
- intensity_measure_types_and_levels:
List of intensity measure types and levels in a classical calculation. Example: intensity_measure_types_and_levels={“PGA”: logscale(0.1, 1, 20)}. Default: empty dictionary
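The logscale helper used in the example above generates logarithmically spaced intensity levels; a minimal stand-in, assuming it behaves like numpy.geomspace (the engine's own implementation may differ in details):

```python
import numpy as np

def logscale(x_min, x_max, n):
    """n values from x_min to x_max, equally spaced in log space."""
    return np.geomspace(x_min, x_max, n)

levels = logscale(0.1, 1, 20)
print(levels[0], levels[-1])   # 0.1 1.0 (endpoints are included)
```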
- interest_rate:
Used in classical_bcr calculations. Example: interest_rate = 0.05. Default: no default
- investigation_time:
Hazard investigation time in years, used in classical and event based calculations. Example: investigation_time = 50. Default: no default
- limit_states:
Limit states used in damage calculations. Example: limit_states = moderate, complete. Default: no default
- lrem_steps_per_interval:
Used in the vulnerability functions. Example: lrem_steps_per_interval = 1. Default: 0
- mag_bin_width:
Width of the magnitude bin used in disaggregation calculations. Example: mag_bin_width = 0.5. Default: no default
- master_seed:
Seed used to control the generation of the epsilons, relevant for risk calculations with vulnerability functions with nonzero coefficients of variation. Example: master_seed = 1234. Default: 123456789
- max:
Compute the maximum across realizations. Akin to mean and quantiles. Example: max = true. Default: False
- max_aggregations:
Maximum number of aggregation keys. Example: max_aggregations = 200_000. Default: 100_000
- max_data_transfer:
INTERNAL. Restrict the maximum data transfer in disaggregation calculations.
- max_gmvs_per_task:
Maximum number of rows of the gmf_data table per task. Example: max_gmvs_per_task = 100_000. Default: 10_000_000
- max_potential_gmfs:
Restrict the product num_sites * num_events. Example: max_potential_gmfs = 1E9. Default: 2E11
- max_potential_paths:
Restrict the maximum number of realizations. Example: max_potential_paths = 200. Default: 15000
- max_sites_disagg:
Maximum number of sites for which to store rupture information. In disaggregation calculations with many sites you may be forced to raise max_sites_disagg, which must be greater than or equal to the number of sites. Example: max_sites_disagg = 100. Default: 10
- pmap_max_gb:
Control the memory used in large classical calculations. The default is .5 (meant for people with 2 GB per core or less) but you can increase it if you have plenty of memory, thus producing fewer tiles and making the calculation more efficient. For small calculations it has basically no effect. Example: pmap_max_gb = 2. Default: .5
- max_weight:
INTERNAL
- maximum_distance:
Integration distance. Can be given as a scalar, as a dictionary TRT -> scalar, or as a dictionary TRT -> [(mag, dist), …]. Example: maximum_distance = 200. Default: no default
- mean:
Flag to enable/disable the calculation of mean curves. Example: mean = false. Default: True
- minimum_asset_loss:
Used in risk calculations. If set, losses smaller than the minimum_asset_loss are considered zeros. Example: minimum_asset_loss = {"structural": 1000}. Default: empty dictionary
- minimum_distance:
If set, distances below the minimum are rounded up. Example: minimum_distance = 5. Default: 0
- minimum_intensity:
If set, ground motion values below the minimum_intensity are considered zeros. Example: minimum_intensity = {‘PGA’: .01}. Default: empty dictionary
- minimum_magnitude:
If set, ruptures below the minimum_magnitude are discarded. Example: minimum_magnitude = 5.0. Default: 0
- modal_damage_state:
Used in scenario_damage calculations to export only the damage state with the highest probability. Example: modal_damage_state = true. Default: false
- num_epsilon_bins:
Number of epsilon bins in disaggregation calculations. Example: num_epsilon_bins = 3. Default: 1
- num_rlzs_disagg:
Used in disaggregation calculation to specify how many outputs will be generated. 0 means all realizations. Example: num_rlzs_disagg=0. Default: 1
- number_of_ground_motion_fields:
Used in scenario calculations to specify how many random ground motion fields to generate. Example: number_of_ground_motion_fields = 100. Default: no default
- number_of_logic_tree_samples:
Used to specify the number of realizations to generate when using logic tree sampling. If zero, full enumeration is performed. Example: number_of_logic_tree_samples = 0. Default: 0
- poes:
Probabilities of Exceedance used to specify the hazard maps or hazard spectra to compute. Example: poes = 0.01 0.02. Default: empty list
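Under the usual Poissonian assumption, a probability of exceedance over the investigation time corresponds to an annual rate and hence a return period. The formula is standard; the function name below is purely illustrative:

```python
import math

def return_period(poe, investigation_time):
    """Return period (years) for a probability of exceedance in T years,
    assuming Poissonian occurrence: rate = -ln(1 - poe) / T."""
    rate = -math.log(1 - poe) / investigation_time  # annual exceedance rate
    return 1 / rate

for poe in (0.01, 0.02):
    print(round(return_period(poe, 50)))   # 4975, then 2475
```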
- poes_disagg:
Alias for poes.
- pointsource_distance:
Used in classical calculations to collapse the point sources. Can also be used in conjunction with ps_grid_spacing. Example: pointsource_distance = 50. Default: {‘default’: 1000}
- ps_grid_spacing:
Used in classical calculations to grid the point sources. Requires the pointsource_distance to be set too. Example: ps_grid_spacing = 50. Default: 0, meaning no grid
- quantiles:
List of probabilities used to compute the quantiles across realizations. Example: quantiles = 0.15 0.50 0.85 Default: empty list
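Computing quantiles across realizations amounts to taking a per-site, per-level quantile over the realization axis; a sketch with hypothetical curve data (the array shape is an assumption for illustration only):

```python
import numpy as np

# hypothetical hazard curves: (n_realizations, n_sites, n_levels)
curves = np.random.default_rng(42).random((100, 3, 4))

quantiles = [0.15, 0.50, 0.85]
qcurves = np.quantile(curves, quantiles, axis=0)  # reduce over realizations
print(qcurves.shape)   # (3, 3, 4): one curve set per requested quantile
```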
- random_seed:
Seed used in the sampling of the logic tree. Example: random_seed = 1234. Default: 42
- reference_backarc:
Used when there is no site model to specify a global backarc parameter, used in some GMPEs. Can be True or False Example: reference_backarc = true. Default: False
- reference_depth_to_1pt0km_per_sec:
Used when there is no site model to specify a global z1pt0 parameter, used in some GMPEs. Example: reference_depth_to_1pt0km_per_sec = 100. Default: no default
- reference_depth_to_2pt5km_per_sec:
Used when there is no site model to specify a global z2pt5 parameter, used in some GMPEs. Example: reference_depth_to_2pt5km_per_sec = 5. Default: no default
- reference_vs30_type:
Used when there is no site model to specify a global vs30 type. The choices are "inferred" or "measured". Example: reference_vs30_type = measured. Default: "inferred"
- reference_vs30_value:
Used when there is no site model to specify a global vs30 value. Example: reference_vs30_value = 760. Default: no default
- region:
A list of lon/lat pairs used to specify a region of interest Example: region = 10.0 43.0, 12.0 43.0, 12.0 46.0, 10.0 46.0 Default: None
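The region value is a comma-separated list of "lon lat" pairs; a tiny parser sketch (this is not the engine's validator, just an illustration of the expected format):

```python
def parse_region(text):
    """Parse 'lon lat, lon lat, ...' into a list of (lon, lat) floats."""
    pairs = []
    for chunk in text.split(","):
        lon, lat = chunk.split()
        pairs.append((float(lon), float(lat)))
    return pairs

print(parse_region("10.0 43.0, 12.0 43.0, 12.0 46.0, 10.0 46.0"))
# [(10.0, 43.0), (12.0, 43.0), (12.0, 46.0), (10.0, 46.0)]
```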
- region_grid_spacing:
Used together with the region option to generate the hazard sites. Example: region_grid_spacing = 10. Default: None
- return_periods:
Used in the computation of the loss curves. Example: return_periods = 200 500 1000. Default: empty list.
- risk_imtls:
INTERNAL. Automatically set by the engine.
- risk_investigation_time:
Used in risk calculations. If not specified, the (hazard) investigation_time is used instead. Example: risk_investigation_time = 50. Default: None
- rlz_index:
Used in disaggregation calculations to specify the realization from which to start the disaggregation. Example: rlz_index = 0. Default: None
- rupture_mesh_spacing:
Set the discretization parameter (in km) for rupture geometries. Example: rupture_mesh_spacing = 2.0. Default: 5.0
- sampling_method:
One of early_weights, late_weights, early_latin, late_latin. Example: sampling_method = early_latin. Default: "early_weights"
- sec_peril_params:
INTERNAL
- secondary_perils:
INTERNAL
- secondary_simulations:
INTERNAL
- sensitivity_analysis:
Dictionary describing a sensitivity analysis. Example: sensitivity_analysis = {‘maximum_distance’: [200, 300]}. Default: empty dictionary
- ses_per_logic_tree_path:
Set the number of stochastic event sets per logic tree realization in event based calculations. Example: ses_per_logic_tree_path = 100. Default: 1
- ses_seed:
Seed governing the generation of the ground motion field. Example: ses_seed = 123. Default: 42
- shakemap_id:
Used in ShakeMap calculations to download a ShakeMap from the USGS site Example: shakemap_id = usp000fjta. Default: no default
- shakemap_uri:
Dictionary used in ShakeMap calculations to specify a ShakeMap. Must contain a key named “kind” with values “usgs_id”, “usgs_xml” or “file_npy”. Example: shakemap_uri = { “kind”: “usgs_xml”, “grid_url”: “file:///home/michele/usp000fjta/grid.xml”, “uncertainty_url”: “file:///home/michele/usp000fjta/uncertainty.xml”}. Default: empty dictionary
- shift_hypo:
Used in classical calculations to shift the rupture hypocenter. Example: shift_hypo = true. Default: false
- site_effects:
Flag used in ShakeMap calculations to turn on GMF amplification. Example: site_effects = true. Default: False
- sites:
Used to specify a list of sites. Example: sites = 10.1 45, 10.2 45.
- sites_slice:
INTERNAL
- soil_intensities:
Used in classical calculations with amplification_method = convolution
- source_id:
Used for debugging purposes. When given, restricts the source model to the given source IDs. Example: source_id = src001 src002. Default: empty list
- source_nodes:
INTERNAL
- spatial_correlation:
Used in the ShakeMap calculator. The choices are "yes", "no" and "full". Example: spatial_correlation = full. Default: "yes"
- specific_assets:
INTERNAL
- split_sources:
INTERNAL
- outs_per_task:
How many outputs per task to generate (honored in some calculators). Example: outs_per_task = 3. Default: 4
- std:
Compute the standard deviation across realizations. Akin to mean and max. Example: std = true. Default: False
- steps_per_interval:
Used in the fragility functions when building the intensity levels. Example: steps_per_interval = 4. Default: 1
- time_event:
Used in scenario_risk calculations when the occupancy depends on the time. Valid choices are "day", "night", "transit". Example: time_event = day. Default: None
- time_per_task:
Used in calculations with task splitting. If a task slice takes longer than time_per_task seconds, then subtasks are spawned for the other slices. Example: time_per_task = 600. Default: 2000
- total_losses:
Used in event based risk calculations to compute total losses and total curves by summing across different loss types. Possible values are "structural+nonstructural", "structural+contents", "nonstructural+contents", "structural+nonstructural+contents". Example: total_losses = structural+nonstructural. Default: None
- truncation_level:
Truncation level used in the GMPEs. Example: truncation_level = 0 to compute median GMFs. Default: 99
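Conceptually, the truncation level bounds the normal epsilon term of the GMPE: epsilons are drawn from a standard normal truncated at +/- truncation_level, and with level 0 every epsilon collapses to zero, yielding median ground motions. A rough sketch using rejection sampling (an assumption for illustration; hazardlib's truncated Gaussian is implemented differently):

```python
import numpy as np

def sample_truncated_eps(level, size, seed=42):
    """Sample epsilons from a standard normal truncated at +/- level."""
    rng = np.random.default_rng(seed)
    if level == 0:
        return np.zeros(size)            # truncation_level = 0 -> median GMFs
    eps = rng.standard_normal(size * 2)  # oversample, then reject the tails
    return eps[np.abs(eps) <= level][:size]

eps = sample_truncated_eps(3.0, 1000)
print(eps.min() >= -3.0, eps.max() <= 3.0)   # True True
print(sample_truncated_eps(0, 5))            # [0. 0. 0. 0. 0.]
```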
- uniform_hazard_spectra:
Flag used to generate uniform hazard spectra for the given poes. Example: uniform_hazard_spectra = true. Default: False
- vs30_tolerance:
Used when amplification_method = convolution. Example: vs30_tolerance = 20. Default: 0
- width_of_mfd_bin:
Used to specify the width of the Magnitude Frequency Distribution. Example: width_of_mfd_bin = 0.2. Default: None
- class openquake.commonlib.oqvalidation.OqParam(**names_vals)[source]¶
Bases:
openquake.hazardlib.valid.ParamSet
- ALIASES = {'individual_curves': 'individual_rlzs', 'max_hazard_curves': 'max', 'mean_hazard_curves': 'mean', 'quantile_hazard_curves': 'quantiles'}¶
- KNOWN_INPUTS = {'amplification', 'business_interruption_consequence', 'business_interruption_fragility', 'business_interruption_vulnerability', 'consequence', 'contents_consequence', 'contents_fragility', 'contents_vulnerability', 'exposure', 'fragility', 'gmfs', 'gsim_logic_tree', 'hazard_curves', 'input_zip', 'ins_loss', 'insurance', 'job_ini', 'multi_peril', 'nonstructural_consequence', 'nonstructural_fragility', 'nonstructural_vulnerability', 'occupants_vulnerability', 'reinsurance', 'reqv', 'rupture_model', 'shakemap', 'site_model', 'sites', 'source_model', 'source_model_logic_tree', 'station_data', 'structural_consequence', 'structural_fragility', 'structural_vulnerability', 'structural_vulnerability_retrofitted', 'taxonomy_mapping'}¶
- aggregate_by¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- amplification_method¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- area_source_discretization¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ash_wet_amplification_factor¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- asset_correlation¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- asset_hazard_distance¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- asset_life_expectancy¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- assets_per_site_limit¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- avg_losses¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- base_path¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- cache_distances¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- cachedir¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- calculation_mode¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- cholesky_limit¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- collapse_gsim_logic_tree¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- collapse_level¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- collect_rlzs¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- compare_with_classical¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- complex_fault_mesh_spacing¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- concurrent_tasks¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- conditional_loss_poes¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- continuous_fragility_discretization¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- coordinate_bin_width¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- property correl_model¶
Return a correlation object. See
openquake.hazardlib.correlation
for more info.
- property cross_correl¶
Return a cross correlation object (or None). See
openquake.hazardlib.cross_correlation
for more info.
- cross_correlation¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- description¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- disagg_bin_edges¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- disagg_by_src¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- disagg_outputs¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- discard_assets¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- discard_trts¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- discrete_damage_distribution¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- distance_bin_width¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ebrisk_maxsize¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- epsilon_star¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- export_dir¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- exports¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- property ext_loss_types¶
- Returns
list of loss types + secondary loss types
- floating_x_step¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- floating_y_step¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- get_input_size()[source]¶
- Returns
the total size in bytes of the input files
NB: this will fail if the files are not available, so it should be called only before starting the calculation. The same information is stored in the datastore.
- gmf_max_gb¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ground_motion_correlation_model¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ground_motion_correlation_params¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ground_motion_fields¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- gsim¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- hazard_calculation_id¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- hazard_curves¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- hazard_curves_from_gmfs¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- hazard_imtls = {}¶
- hazard_maps¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- horiz_comp_to_geom_mean¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ignore_covs¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ignore_encoding_errors¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ignore_master_seed¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ignore_missing_costs¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- iml_disagg¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- imt_ref¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- property imtls¶
Returns a DictArray with the risk intensity measure types and levels, if given, or the hazard ones.
- individual_rlzs¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- property input_dir¶
- Returns
absolute path to where the job.ini is
- inputs¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- intensity_measure_types¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- intensity_measure_types_and_levels¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- interest_rate¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- investigation_time¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- is_valid_collect_rlzs()[source]¶
sampling_method must be early_weights, only the mean is available, and number_of_logic_tree_samples must be greater than 1.
- is_valid_complex_fault_mesh_spacing()[source]¶
The complex_fault_mesh_spacing parameter can be None only if rupture_mesh_spacing is set. In that case it is identified with it.
- is_valid_export_dir()[source]¶
export_dir={export_dir} must refer to a directory, and the user must have the permission to write on it.
- is_valid_geometry()[source]¶
It is possible to infer the geometry only if exactly one of sites, sites_csv, hazard_curves_csv, region is set. You did set more than one, or nothing.
- is_valid_intensity_measure_levels()[source]¶
In order to compute hazard curves, intensity_measure_types_and_levels must be set or extracted from the risk models.
- is_valid_intensity_measure_types()[source]¶
If the IMTs and levels are extracted from the risk models, they must not be set directly. Moreover, if intensity_measure_types_and_levels is set directly, intensity_measure_types must not be set.
- is_valid_poes()[source]¶
When computing hazard maps and/or uniform hazard spectra, the poes list must be non-empty.
- is_valid_soil_intensities()[source]¶
soil_intensities must be defined only in classical calculations with amplification_method=convolution
- is_valid_specific_assets()[source]¶
Read the special assets from the parameters specific_assets or specific_assets_csv, if present. You cannot have both. The concept is meaningful only for risk calculators.
- is_valid_truncation_level()[source]¶
In presence of a correlation model the truncation level must be nonzero
- property job_type¶
‘hazard’ or ‘risk’
- limit_states¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- loss_dt(dtype=<class 'numpy.float64'>)[source]¶
- Returns
a composite dtype based on the loss types including occupants
- loss_dt_list(dtype=<class 'numpy.float64'>)[source]¶
- Returns
a data type list [(loss_name, dtype), …]
- property loss_types¶
- Returns
list of loss types (empty for hazard)
- lrem_steps_per_interval¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- property lti¶
Dictionary extended_loss_type -> extended_loss_type index
- mag_bin_width¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- master_seed¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- max¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- max_aggregations¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- max_data_transfer¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- max_gmvs_per_task¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- max_potential_gmfs¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- max_potential_paths¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- max_sites_disagg¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- maximum_distance¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- mean¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- mean_hazard_curves¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- property min_iml¶
- Returns
a dictionary of intensities, one per IMT
- minimum_asset_loss¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- minimum_distance¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- minimum_intensity¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- minimum_magnitude¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- modal_damage_state¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- property no_pointsource_distance¶
- Returns
True if the pointsource_distance is 1000 km
- num_epsilon_bins¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- num_rlzs_disagg¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- number_of_ground_motion_fields¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- number_of_logic_tree_samples¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- outs_per_task¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- pmap_max_gb¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- poes¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- poes_disagg¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- pointsource_distance¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ps_grid_spacing¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- quantile_hazard_curves¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- quantiles¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- random_seed¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- reaggregate_by¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- reference_backarc¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- reference_depth_to_1pt0km_per_sec¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- reference_depth_to_2pt5km_per_sec¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- reference_vs30_type¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- reference_vs30_value¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- region¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- region_grid_spacing¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- return_periods¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- risk_event_rates(num_events, num_haz_rlzs)[source]¶
- Parameters
num_events – the number of events per risk realization
num_haz_rlzs – the number of hazard realizations
If risk_investigation_time is 1, returns the annual event rates for each realization as a list, possibly of 1 element.
- property risk_files¶
- risk_imtls¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- risk_investigation_time¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- rlz_index¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- rupture_mesh_spacing¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- sampling_method¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- sec_peril_params¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- secondary_perils¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- secondary_simulations¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- sensitivity_analysis¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ses_per_logic_tree_path¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- ses_seed¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- set_risk_imts(risklist)[source]¶
- Parameters
risklist – a list of risk functions with attributes .id, .loss_type, .kind
Set the attribute risk_imtls.
- shakemap_id¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- shakemap_uri¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- shift_hypo¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- site_effects¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- sites¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- sites_slice¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- soil_intensities¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- source_id¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- source_nodes¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- spatial_correlation¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- specific_assets¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- split_sources¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- std¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- steps_per_interval¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- time_event¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- time_per_task¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- property time_ratio¶
The ratio risk_investigation_time / eff_investigation_time per rlz
- total_losses¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- truncation_level¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- property tses¶
Return the total time as investigation_time * ses_per_logic_tree_path * (number_of_logic_tree_samples or 1)
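The tses formula above can be checked with plain arithmetic (the parameter values below are made up for illustration and are not defaults of OqParam):

```python
# Total effective time for an event based calculation, following the
# formula: investigation_time * ses_per_logic_tree_path *
# (number_of_logic_tree_samples or 1)
investigation_time = 50.0            # years (example value)
ses_per_logic_tree_path = 2
number_of_logic_tree_samples = 10    # 0 would mean full enumeration

tses = investigation_time * ses_per_logic_tree_path * (
    number_of_logic_tree_samples or 1)
print(tses)  # 1000.0
```

Note the `or 1` guard: with full enumeration (zero samples) the multiplier falls back to 1.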
- uniform_hazard_spectra¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- vs30_tolerance¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
- width_of_mfd_bin¶
A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.
- Parameters
validator – the validator
default – the default value
readinput module¶
- exception openquake.commonlib.readinput.DuplicatedPoint[source]¶
Bases:
Exception
Raised when reading a CSV file with duplicated (lon, lat) pairs
- class openquake.commonlib.readinput.Global[source]¶
Bases:
object
Global variables to be reset at the end of each calculation/test
- exposure = None¶
- gsim_lt_cache = {}¶
- pmap = None¶
- class openquake.commonlib.readinput.Site(sid, lon, lat)¶
Bases:
tuple
- lat¶
Alias for field number 2
- lon¶
Alias for field number 1
- sid¶
Alias for field number 0
- openquake.commonlib.readinput.collect_files(dirpath, cond=<function <lambda>>)[source]¶
Recursively collect the files contained inside dirpath.
- Parameters
dirpath – path to a readable directory
cond – condition on the path to collect the file
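A minimal sketch of the documented behaviour, using os.walk; the name collect_files_sketch is hypothetical and this is not the actual implementation:

```python
import os

def collect_files_sketch(dirpath, cond=lambda fullname: True):
    """Recursively collect the files inside dirpath whose full path
    satisfies cond (a sketch of the documented behaviour)."""
    files = []
    for cwd, dirs, fnames in os.walk(dirpath):
        for fname in fnames:
            fullname = os.path.join(cwd, fname)
            if cond(fullname):
                files.append(fullname)
    return sorted(files)
```

For instance, `collect_files_sketch(d, cond=lambda f: f.endswith('.ini'))` collects all the .ini files under the directory d.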
- openquake.commonlib.readinput.extract_from_zip(path, ext='.ini', targetdir=None)[source]¶
Given a zip archive and an extension (by default .ini), unzip the archive into the target directory and return the files with the given extension.
- Parameters
path – pathname of the archive
ext – file extension to search for
- Returns
filenames
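The idea can be sketched with the standard zipfile module (the name extract_from_zip_sketch is an assumption; the real function is more elaborate):

```python
import os
import zipfile

def extract_from_zip_sketch(path, ext='.ini', targetdir=None):
    """Unzip the archive into targetdir and return the extracted
    filenames ending with ext (illustrative sketch only)."""
    targetdir = targetdir or os.path.dirname(os.path.abspath(path))
    with zipfile.ZipFile(path) as archive:
        archive.extractall(targetdir)  # everything is extracted...
        return [os.path.join(targetdir, name)  # ...but only ext is returned
                for name in archive.namelist() if name.endswith(ext)]
```

Notice that all files are extracted, while only the ones matching the extension are returned.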
- openquake.commonlib.readinput.get_checksum32(oqparam, h5=None)[source]¶
Build an unsigned 32 bit integer from the hazard input files
- Parameters
oqparam – an OqParam instance
- openquake.commonlib.readinput.get_composite_source_model(oqparam, h5=None, branchID=None)[source]¶
Parse the XML and build a complete composite source model in memory.
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instanceh5 – an open hdf5.File where to store the source info
- openquake.commonlib.readinput.get_crmodel(oqparam)[source]¶
Return a
openquake.risklib.riskinput.CompositeRiskModel
instance- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instance
- openquake.commonlib.readinput.get_csv_header(fname, sep=',')[source]¶
- Parameters
fname – a CSV file
sep – the separator (default comma)
- Returns
the first non-commented line of fname and the file object
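A sketch of the header-reading logic (the name get_csv_header_sketch is hypothetical, and unlike the real function it returns only the header, not the open file object):

```python
def get_csv_header_sketch(fname, sep=','):
    """Return the fields of the first non-commented line of a CSV
    file, skipping initial lines starting with '#'."""
    with open(fname, encoding='utf-8') as f:
        for line in f:
            if not line.startswith('#'):
                return line.rstrip('\n').split(sep)
```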
- openquake.commonlib.readinput.get_exposure(oqparam)[source]¶
Read the full exposure in memory and build a list of
openquake.risklib.asset.Asset
instances.- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instance- Returns
an
Exposure
instance or a compatible AssetCollection
- openquake.commonlib.readinput.get_full_lt(oqparam, branchID=None)[source]¶
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instancebranchID – used to read a single sourceModel branch (if given)
- Returns
a
openquake.commonlib.logictree.FullLogicTree
instance
- openquake.commonlib.readinput.get_gsim_lt(oqparam, trts=('*',))[source]¶
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instancetrts – a sequence of tectonic region types as strings; trts=[‘*’] means that there is no filtering
- Returns
a GsimLogicTree instance obtained by filtering on the provided tectonic region types.
- openquake.commonlib.readinput.get_imts(oqparam)[source]¶
Return a sorted list of IMTs as hazardlib objects
- openquake.commonlib.readinput.get_input_files(oqparam)[source]¶
- Parameters
oqparam – an OqParam instance
hazard – if True, consider only the hazard files
- Returns
input path names in a specific order
- openquake.commonlib.readinput.get_logic_tree(oqparam)[source]¶
- Returns
a CompositeLogicTree instance
- openquake.commonlib.readinput.get_mesh(oqparam, h5=None)[source]¶
Extract the mesh of points to compute from the sites, the sites_csv, the region, the site model, the exposure in this order.
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instance
- openquake.commonlib.readinput.get_no_vect(gsim_lt)[source]¶
- Returns
the names of the non-vectorized GMPEs
- openquake.commonlib.readinput.get_oqparam(job_ini, pkg=None, kw={}, validate=True)[source]¶
Parse a dictionary of parameters from an INI-style config file.
- Parameters
job_ini – Path to configuration file/archive or dictionary of parameters with a key “calculation_mode”
pkg – Python package where to find the configuration file (optional)
kw – Dictionary of strings to override the job parameters
- Returns
An
openquake.commonlib.oqvalidation.OqParam
instance containing the validated and casted parameters/values parsed from the job.ini file as well as a subdictionary ‘inputs’ containing absolute paths to all of the files referenced in the job.ini, keyed by the parameter name.
- openquake.commonlib.readinput.get_params(job_ini, kw={})[source]¶
Parse a .ini file or a .zip archive
- Parameters
job_ini – Configuration file | zip archive | URL
kw – Optionally override some parameters
- Returns
A dictionary of parameters
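The core of the .ini parsing can be sketched with configparser; this is a simplification (the real get_params also handles zip archives and URLs and resolves the input file paths):

```python
import configparser

def get_params_sketch(job_ini):
    """Flatten an INI-style job file into a single dict of string
    parameters, ignoring the section names (illustrative sketch)."""
    cp = configparser.ConfigParser()
    with open(job_ini, encoding='utf-8') as f:
        cp.read_file(f)
    params = {}
    for section in cp.sections():
        params.update(cp.items(section))  # keys are lowercased
    return params
```

The sections of a job.ini are purely cosmetic: all parameters end up in the same flat namespace.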
- openquake.commonlib.readinput.get_pmap_from_csv(oqparam, fnames)[source]¶
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instancefnames – a space-separated list of .csv relative filenames
- Returns
the site mesh and the hazard curves read by the .csv files
- openquake.commonlib.readinput.get_reinsurance(oqparam, assetcol=None)[source]¶
- Returns
(policy_df, treaty_df, field_map)
- openquake.commonlib.readinput.get_rupture(oqparam)[source]¶
Read the rupture_model XML file and filter the site collection.
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instance- Returns
an hazardlib rupture
- openquake.commonlib.readinput.get_shapefiles(dirname)[source]¶
- Parameters
dirname – directory containing the shapefiles
- Returns
list of shapefiles
- openquake.commonlib.readinput.get_site_collection(oqparam, h5=None)[source]¶
Returns a SiteCollection instance by looking at the points and the site model defined by the configuration parameters.
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instance
- openquake.commonlib.readinput.get_site_model(oqparam)[source]¶
Convert the NRML file into an array of site parameters.
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instance- Returns
an array with fields lon, lat, vs30, …
- openquake.commonlib.readinput.get_sitecol_assetcol(oqparam, haz_sitecol=None, cost_types=())[source]¶
- Parameters
oqparam – calculation parameters
haz_sitecol – the hazard site collection
cost_types – the expected cost types
- Returns
(site collection, asset collection, discarded)
- openquake.commonlib.readinput.get_source_model_lt(oqparam, branchID=None)[source]¶
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instance- Returns
a
openquake.commonlib.logictree.SourceModelLogicTree
instance
- openquake.commonlib.readinput.get_station_data(oqparam)[source]¶
Read the station data input file and build a list of ground motion stations and recorded ground motion values along with their uncertainty estimates
- Parameters
oqparam – an
openquake.commonlib.oqvalidation.OqParam
instance- Returns sd
a Pandas dataframe with station ids and coordinates as the index and IMT names as the first level of column headers and mean, std as the second level of column headers
- Returns imts
a list of observed intensity measure types
- openquake.commonlib.readinput.reduce_sm(paths, source_ids)[source]¶
- Parameters
paths – list of source_model.xml files
source_ids – dictionary src_id -> array[src_id, code]
- Returns
dictionary with keys good, total, model, path, xmlns
NB: duplicate sources are not removed from the XML
- openquake.commonlib.readinput.reduce_source_model(smlt_file, source_ids, remove=True)[source]¶
Extract sources from the composite source model.
- Parameters
smlt_file – path to a source model logic tree file
source_ids – dictionary source_id -> records (src_id, code)
remove – if True, remove sm.xml files containing no sources
- Returns
the number of sources satisfying the filter vs the total
source_reader module¶
- class openquake.commonlib.source_reader.CompositeSourceModel(full_lt, src_groups)[source]¶
Bases:
object
- Parameters
full_lt – a
FullLogicTree
instancesrc_groups – a list of SourceGroups
event_based – a flag, True for event based calculations and False otherwise
- get_groups(smr)[source]¶
- Parameters
smr – effective source model realization ID
- Returns
SourceGroups associated to the given smr
- get_max_weight(oq)[source]¶
- Parameters
oq – an OqParam instance
- Returns
total weight and max weight of the sources
- openquake.commonlib.source_reader.create_source_info(csm, h5)[source]¶
Creates source_info, source_wkt, trt_smrs, toms
- openquake.commonlib.source_reader.fix_geometry_sections(smdict, h5)[source]¶
If there are MultiFaultSources, fix the sections according to the GeometryModels (if any).
- openquake.commonlib.source_reader.get_csm(oq, full_lt, h5=None)[source]¶
Build source models from the logic tree and store them inside the source_full_lt dataset.
- openquake.commonlib.source_reader.mutex_by_grp(src_groups)[source]¶
- Returns
a composite array with boolean fields src_mutex, rup_mutex
- openquake.commonlib.source_reader.read_source_model(fname, converter, monitor)[source]¶
- Parameters
fname – path to a source model XML file
converter – SourceConverter
monitor – a Monitor instance
- Returns
a SourceModel instance
util module¶
- openquake.commonlib.util.closest_to_ref(arrays, ref, cutoff=1e-12)[source]¶
- Parameters
arrays – a sequence of arrays
ref – the reference array
- Returns
a list of indices ordered by closeness
This function is used to extract the realization closest to the mean in disaggregation. For instance, if there are 2 realizations with indices 0 and 1, the first hazard curve having values
>>> c0 = numpy.array([.99, .97, .5, .1])
and the second hazard curve having values
>>> c1 = numpy.array([.98, .96, .45, .09])
with weights 0.6 and 0.4 and mean
>>> mean = numpy.average([c0, c1], axis=0, weights=[0.6, 0.4])
then calling
closest_to_ref
will return the indices 0 and 1 respectively:
>>> closest_to_ref([c0, c1], mean)
[0, 1]
This means that the realization 0 is the closest to the mean, as expected, since it has a larger weight. You can check that it is indeed true by computing the sum of the quadratic deviations:
>>> ((c0 - mean)**2).sum()
0.0004480000000000008
>>> ((c1 - mean)**2).sum()
0.0010079999999999985
If the 2 realizations have equal weights the distance from the mean will be the same. In that case both the realizations will be okay; the one that will be chosen by
closest_to_ref
depends on the magic of floating point approximation (theoretically identical distances will likely be different as numpy.float64 numbers) or on the magic of Python list.sort.
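The ordering logic can be sketched by sorting the indices by the sum of the quadratic deviations (the name closest_to_ref_sketch is hypothetical and the cutoff on small values is omitted):

```python
import numpy

def closest_to_ref_sketch(arrays, ref):
    """Indices of arrays ordered by closeness to ref, measured as
    the sum of the quadratic deviations (illustrative sketch)."""
    dists = [((numpy.asarray(a) - ref) ** 2).sum() for a in arrays]
    return sorted(range(len(arrays)), key=lambda i: dists[i])
```

On the curves of the example above it returns [0, 1], i.e. realization 0 is the closest to the weighted mean.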
- openquake.commonlib.util.compose_arrays(a1, a2, firstfield='etag')[source]¶
Compose composite arrays by generating an extended datatype containing all the fields. The two arrays must have the same length.
- openquake.commonlib.util.get_assets(dstore)[source]¶
- Parameters
dstore – a datastore with keys ‘assetcol’
- Returns
an array of records (id, tag1, …, tagN, lon, lat)
- openquake.commonlib.util.log(array, cutoff)[source]¶
Compute the logarithm of an array with a cutoff on the small values
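A minimal sketch of a logarithm with a cutoff (the name log_sketch is an assumption): values below the cutoff are clipped before taking the log, so the result stays finite even for zeros.

```python
import numpy

def log_sketch(array, cutoff):
    """numpy.log with the small values clipped to cutoff, so that
    zeros do not produce -inf (illustrative sketch)."""
    return numpy.log(numpy.clip(array, cutoff, None))
```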
- openquake.commonlib.util.max_rel_diff(curve_ref, curve, min_value=0.01)[source]¶
Compute the maximum relative difference between two curves. Only values greater than or equal to min_value are considered.
>>> curve_ref = [0.01, 0.02, 0.03, 0.05, 1.0]
>>> curve = [0.011, 0.021, 0.031, 0.051, 1.0]
>>> round(max_rel_diff(curve_ref, curve), 2)
0.1
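A sketch consistent with the docstring and doctest above (the name max_rel_diff_sketch is hypothetical):

```python
import numpy

def max_rel_diff_sketch(curve_ref, curve, min_value=0.01):
    """Maximum of |curve - curve_ref| / curve_ref over the elements
    where curve_ref >= min_value (illustrative sketch)."""
    ref = numpy.asarray(curve_ref, float)
    cur = numpy.asarray(curve, float)
    ok = ref >= min_value  # ignore the very small reference values
    return (numpy.abs(cur[ok] - ref[ok]) / ref[ok]).max()
```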
- openquake.commonlib.util.max_rel_diff_index(curve_ref, curve, min_value=0.01)[source]¶
Compute the maximum relative difference between two sets of curves. Only values greater than or equal to min_value are considered. Return both the maximum difference and its location (array index).
>>> curve_refs = [[0.01, 0.02, 0.03, 0.05], [0.01, 0.02, 0.04, 0.06]]
>>> curves = [[0.011, 0.021, 0.031, 0.051], [0.012, 0.022, 0.032, 0.051]]
>>> max_rel_diff_index(curve_refs, curves)
(0.2, 1)
- openquake.commonlib.util.rmsep(array_ref, array, min_value=0)[source]¶
Root Mean Square Error Percentage for two arrays.
- Parameters
array_ref – reference array
array – another array
min_value – compare only the elements larger than min_value
- Returns
the relative distance between the arrays
>>> curve_ref = numpy.array([[0.01, 0.02, 0.03, 0.05],
...                          [0.01, 0.02, 0.04, 0.06]])
>>> curve = numpy.array([[0.011, 0.021, 0.031, 0.051],
...                      [0.012, 0.022, 0.032, 0.051]])
>>> str(round(rmsep(curve_ref, curve, .01), 5))
'0.11292'
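A sketch of the computation, reproducing the doctest above (the name rmsep_sketch and the exact filtering are assumptions derived from the docstring):

```python
import numpy

def rmsep_sketch(array_ref, array, min_value=0):
    """Root mean square of the relative errors over the elements of
    array_ref strictly larger than min_value (illustrative sketch)."""
    ref = numpy.asarray(array_ref, float)
    arr = numpy.asarray(array, float)
    ok = ref > min_value  # keep only the significant reference values
    reldiffsquare = ((arr[ok] - ref[ok]) / ref[ok]) ** 2
    return numpy.sqrt(reldiffsquare.mean())
```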
- Returns
True if a shared_dir has been set in openquake.cfg, else False