openquake.commonlib package

openquake.commonlib.concurrent_futures_process_mpatch module

openquake.commonlib.concurrent_futures_process_mpatch.concurrent_futures_process_monkeypatch()[source]

openquake.commonlib.datastore module

class openquake.commonlib.datastore.ByteCounter(nbytes=0)[source]

Bases: object

A visitor used to measure the size in bytes of an HDF5 dataset or group. Use it as ByteCounter.get_nbytes(dset_or_group).

classmethod get_nbytes(dset)[source]
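
A hedged usage sketch: the file and dataset names below are placeholders, and the counter is applied to an h5py object as described above.

import h5py

with h5py.File('calc_1.hdf5', 'r') as f:           # hypothetical datastore file
    nbytes = ByteCounter.get_nbytes(f['example'])  # size in bytes of a dataset or group
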
class openquake.commonlib.datastore.DataStore(calc_id=None, datadir='/home/daniele/oqdata', export_dir='.', params=(), mode=None)[source]

Bases: _abcoll.MutableMapping

DataStore class to store the inputs/outputs of a calculation on the filesystem.

Here is a minimal example of usage:

>>> ds = DataStore()
>>> ds['example'] = 'hello world'
>>> print(ds['example'])
hello world
>>> ds.clear()

When reading the items, the DataStore will return a generator. The items will be ordered lexicographically according to their name.

There is a serialization protocol to store objects in the datastore. An object is serializable if it has a method __toh5__ returning an array and a dictionary, and a method __fromh5__ taking an array and a dictionary and populating the object. For an example of use see openquake.hazardlib.site.SiteCollection.

build_fname(prefix, postfix, fmt, export_dir=None)[source]

Build a file name from a realization, by using prefix and extension.

Parameters:
  • prefix – the prefix to use
  • postfix – the postfix to use (can be a realization object)
  • fmt – the extension (‘csv’, ‘xml’, etc)
  • export_dir – export directory (if None use .export_dir)
Returns:

relative pathname including the extension

clear()[source]

Remove the datastore from the file system

close()[source]

Close the underlying hdf5 file

create_dset(key, dtype, shape=(None, ), compression=None, fillvalue=0, attrs=None)[source]

Create a one-dimensional HDF5 dataset.

Parameters:
  • key – name of the dataset
  • dtype – dtype of the dataset (usually composite)
  • shape – shape of the dataset, possibly extendable
  • compression – the kind of HDF5 compression to use
  • attrs – dictionary of attributes of the dataset
Returns:

a HDF5 dataset

export_csv(key)[source]

Generic csv exporter

export_path(relname, export_dir=None)[source]

Return the path of the exported file by adding the export_dir in front and the calculation ID at the end.

Parameters:
  • relname – relative file name
  • export_dir – export directory (if None use .export_dir)
extend(key, array)[source]

Extend the dataset associated to the given key; create it if needed

Parameters:
  • key – name of the dataset
  • array – array to store
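
A minimal sketch of creating an extendable dataset and appending records to it with create_dset and extend; the key and the composite dtype below are illustrative.

import numpy

ds = DataStore()
site_dt = numpy.dtype([('lon', numpy.float64), ('lat', numpy.float64)])    # hypothetical record type
ds.create_dset('sites', site_dt)                                           # one-dimensional, extendable dataset
ds.extend('sites', numpy.array([(9.15, 45.17), (9.45, 45.20)], site_dt))   # append a block of records
ds.flush()
ds.clear()
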
flush()[source]

Flush the underlying hdf5 file

get(key, default)[source]
Returns:the value associated to the datastore key, or the default
get_attr(key, name, default=None)[source]
Parameters:
  • key – dataset path
  • name – name of the attribute
  • default – value to return if the attribute is missing
getitem(name)[source]

Return a dataset by using h5py.File.__getitem__

getsize(key=None)[source]

Return the size in bytes of the output associated to the given key. If no key is given, returns the total size of all files.

save(key, kw)[source]

Update the object associated to key with the kw dictionary; works for LiteralAttrs objects and automatically flushes.

set_attrs(key, **kw)[source]

Set the HDF5 attributes of the given key

set_nbytes(key, nbytes=None)[source]

Set the nbytes attribute on the HDF5 object identified by key.

set_parent(parent)[source]

Give a parent to a datastore and update its .attrs with the parent attributes, which are assumed to be literal strings.

class openquake.commonlib.datastore.Fake(attrs=None, **kwargs)[source]

Bases: dict

A fake datastore as a dict subclass, useful in tests and such

openquake.commonlib.datastore.get_calc_ids(datadir='/home/daniele/oqdata')[source]

Extract the available calculation IDs from the datadir, in order.

openquake.commonlib.datastore.get_last_calc_id(datadir)[source]

Extract the latest calculation ID from the given directory. If none is found, return 0.

openquake.commonlib.datastore.get_nbytes(dset)[source]

If the dataset has an attribute ‘nbytes’, return it. Otherwise get the size of the underlying array. Returns None if the dataset is actually a group.

openquake.commonlib.datastore.persistent_attribute(key)[source]

Persistent attributes are persisted to the datastore and cached. Modifications to mutable objects are not automagically persisted. If you have a huge object that does not fit in memory use the datastore directly (for instance, open a HDF5 file to create an empty array, then populate it). Notice that you can use any dict-like data structure in place of the datastore, provided you can set attributes on it. Here is an example:

>>> class Datastore(dict):
...     "A fake datastore"
>>> class Store(object):
...     a = persistent_attribute('a')
...     def __init__(self, a):
...         self.datastore = Datastore()
...         self.a = a  # this assignment will store the attribute
>>> store = Store([1])
>>> store.a  # this retrieves the attribute
[1]
>>> store.a.append(2)
>>> store.a = store.a  # remember to store the modified attribute!
Parameters:key – the name of the attribute to be made persistent
Returns:a property to be added to a class with a .datastore attribute
openquake.commonlib.datastore.read(calc_id, mode='r', datadir='/home/daniele/oqdata')[source]
Parameters:
  • calc_id – calculation ID
  • mode – ‘r’ or ‘w’
  • datadir – the directory where to look
Returns:

the corresponding DataStore instance

Read the datastore, if it exists and it is accessible.
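
A hedged usage sketch (the calculation ID below is a placeholder):

ds = read(42, mode='r')   # open the datastore of a hypothetical calculation 42
print(list(ds))           # keys are returned in lexicographic order
ds.close()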

openquake.commonlib.hazard_writers module

Classes for serializing various NRML XML artifacts.

class openquake.commonlib.hazard_writers.BaseCurveWriter(dest, **metadata)[source]

Bases: object

Base class for curve writers.

Parameters:
  • dest – File path (including filename) or file-like object for results to be saved to.
  • metadata

    The following keyword args are required:

    • investigation_time: Investigation time (in years) defined in the calculation which produced these results.

    The following are more or less optional (combinational rules noted below where applicable):

    • statistics: ‘mean’ or ‘quantile’
    • quantile_value: Only required if statistics = ‘quantile’.
    • smlt_path: String representing the logic tree path which produced these curves. Only required for non-statistical curves.
    • gsimlt_path: String representing the GSIM logic tree path which produced these curves. Only required for non-statistical curves.
serialize(_data)[source]

Implement in subclasses.

class openquake.commonlib.hazard_writers.DisaggXMLWriter(dest, **metadata)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or file-like object for XML results to be saved to.
  • metadata

    The following keyword args are required:

    • investigation_time: Investigation time (in years) defined in the calculation which produced these results.
    • imt: Intensity measure type used to compute these matrices.
    • lon, lat: Longitude and latitude associated with these results.

    The following attributes define dimension context for the result matrices:

    • mag_bin_edges: List of magnitude bin edges (floats)
    • dist_bin_edges: List of distance bin edges (floats)
    • lon_bin_edges: List of longitude bin edges (floats)
    • lat_bin_edges: List of latitude bin edges (floats)
    • eps_bin_edges: List of epsilon bin edges (floats)
    • tectonic_region_types: List of tectonic region types (strings)
    • smlt_path: String representing the logic tree path which produced these results. Only required for non-statistical results.
    • gsimlt_path: String representing the GSIM logic tree path which produced these results. Only required for non-statistical results.

    The following are optional, depending on the imt:

    • sa_period: Only used with imt = ‘SA’.
    • sa_damping: Only used with imt = ‘SA’.
BIN_EDGE_ATTR_MAP = OrderedDict([('mag_bin_edges', 'magBinEdges'), ('dist_bin_edges', 'distBinEdges'), ('lon_bin_edges', 'lonBinEdges'), ('lat_bin_edges', 'latBinEdges'), ('eps_bin_edges', 'epsBinEdges'), ('tectonic_region_types', 'tectonicRegionTypes')])

Maps metadata keywords to XML attribute names for bin edge information passed to the constructor. The dict here is an OrderedDict so as to give consistent ordering of result attributes.

DIM_LABEL_TO_BIN_EDGE_MAP = {'Dist': 'dist_bin_edges', 'Lon': 'lon_bin_edges', 'Eps': 'eps_bin_edges', 'Mag': 'mag_bin_edges', 'Lat': 'lat_bin_edges', 'TRT': 'tectonic_region_types'}
serialize(data)[source]
Parameters:data

A sequence of data where each datum has the following attributes:

  • matrix: N-dimensional numpy array containing the disaggregation histogram.
  • dim_labels: A list of strings which label the dimensions of a given histogram. For example, for a Magnitude-Distance-Epsilon histogram, we would expect dim_labels to be ['Mag', 'Dist', 'Eps'].
  • poe: The disaggregation Probability of Exceedance level for which these results were produced.
  • iml: Intensity measure level, interpolated from the source hazard curve at the given poe.
class openquake.commonlib.hazard_writers.EventBasedGMFXMLWriter(dest, sm_lt_path, gsim_lt_path)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or a file-like object for XML results to be saved to.
  • sm_lt_path (str) – Source model logic tree branch identifier of the logic tree realization which produced this collection of ground motion fields.
  • gsim_lt_path – GSIM logic tree branch identifier of the logic tree realization which produced this collection of ground motion fields.
serialize(data, fmt='%10.7E')[source]

Serialize a collection of ground motion fields to XML.

Parameters:data

An iterable of “GMF set” objects. Each “GMF set” object should:

  • have an investigation_time attribute
  • have a stochastic_event_set_id attribute
  • be iterable, yielding a sequence of “GMF” objects

Each “GMF” object should:

  • have an imt attribute
  • have an sa_period attribute (only if imt is ‘SA’)
  • have an sa_damping attribute (only if imt is ‘SA’)
  • have a rupture_id attribute (to indicate which rupture contributed to this gmf)
  • be iterable, yielding a sequence of “GMF node” objects

Each “GMF node” object should have:

  • a gmv attribute (to indicate the ground motion value)
  • lon and lat attributes (to indicate the geographical location of the ground motion field)
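
The container classes below are hypothetical; they are only a sketch of objects satisfying the duck-typed interface described above, with placeholder values.

class GmfNode(object):
    def __init__(self, gmv, lon, lat):
        self.gmv, self.lon, self.lat = gmv, lon, lat

class Gmf(list):  # iterable of GMF node objects
    def __init__(self, imt, rupture_id, nodes):
        list.__init__(self, nodes)
        self.imt, self.rupture_id = imt, rupture_id
        self.sa_period = self.sa_damping = None  # only meaningful when imt is 'SA'

class GmfSet(list):  # iterable of GMF objects
    def __init__(self, investigation_time, ses_id, gmfs):
        list.__init__(self, gmfs)
        self.investigation_time = investigation_time
        self.stochastic_event_set_id = ses_id

writer = EventBasedGMFXMLWriter('/tmp/gmf.xml', sm_lt_path='b1', gsim_lt_path='b1')
writer.serialize([GmfSet(50.0, 1, [Gmf('PGA', 'rup-001', [GmfNode(0.012, 9.15, 45.17)])])])
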
class openquake.commonlib.hazard_writers.HazardCurveGeoJSONWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.BaseCurveWriter

Writes hazard curves to GeoJSON. Has the same constructor and interface as HazardCurveXMLWriter.

serialize(data)[source]

Write the hazard curves to the given destination as GeoJSON. The GeoJSON format is customized to contain various bits of metadata.

See HazardCurveXMLWriter.serialize() for expected input.

class openquake.commonlib.hazard_writers.HazardCurveXMLWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.BaseCurveWriter

Hazard Curve XML writer. See BaseCurveWriter for a list of general constructor inputs.

The following additional metadata params are required:
  • imt: Intensity measure type used to compute these hazard curves.
  • imls: Intensity measure levels, which represent the x-axis values of each curve.
The following parameters are optional:
  • sa_period: Only used with imt = ‘SA’.
  • sa_damping: Only used with imt = ‘SA’.
add_hazard_curves(root, metadata, data)[source]

Add the hazard curves stored in data as children of the root element, together with metadata. See the documentation of the method serialize and the constructor for a description of data and metadata, respectively.

serialize(data)[source]

Write a sequence of hazard curves to the specified file.

Parameters:data

Iterable of hazard curve data. Each datum must be an object with the following attributes:

  • poes: A list of probability of exceedance values (floats).
  • location: An object representing the location of the curve; must have x and y to represent lon and lat, respectively.
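
A hedged sketch of the expected call, using hypothetical namedtuples for the curve data and placeholder metadata values:

from collections import namedtuple

Location = namedtuple('Location', 'x y')                 # lon, lat
HazardCurve = namedtuple('HazardCurve', 'location poes')

writer = HazardCurveXMLWriter(
    '/tmp/hazard_curves.xml', investigation_time=50.0, imt='PGA',
    imls=[0.005, 0.007, 0.0098], smlt_path='b1', gsimlt_path='b1')
writer.serialize([HazardCurve(Location(9.15, 45.17), [0.98, 0.94, 0.83])])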
class openquake.commonlib.hazard_writers.HazardMapGeoJSONWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.HazardMapWriter

GeoJSON implementation of a HazardMapWriter. Serializes hazard maps as FeatureCollection artifacts with additional hazard map metadata.

See HazardMapWriter for information about constructor parameters.

serialize(data)[source]

Serialize hazard map data to GeoJSON.

See HazardMapWriter.serialize() for details about the expected input.

class openquake.commonlib.hazard_writers.HazardMapWriter(dest, **metadata)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or a file-like object for results to be saved to.
  • metadata

    The following keyword args are required:

    • investigation_time: Investigation time (in years) defined in the calculation which produced these results.
    • imt: Intensity measure type used to compute these hazard curves.
    • poe: The Probability of Exceedance level for which this hazard map was produced.

    The following are more or less optional (combinational rules noted below where applicable):

    • statistics: ‘mean’ or ‘quantile’
    • quantile_value: Only required if statistics = ‘quantile’.
    • smlt_path: String representing the logic tree path which produced these curves. Only required for non-statistical curves.
    • gsimlt_path: String representing the GSIM logic tree path which produced these curves. Only required for non-statistical curves.
    • sa_period: Only used with imt = ‘SA’.
    • sa_damping: Only used with imt = ‘SA’.
serialize(data)[source]

Write a sequence of hazard map data to the specified file.

Parameters:data – Iterable of hazard map data. Each datum should be a triple of (lon, lat, iml) values.
class openquake.commonlib.hazard_writers.HazardMapXMLWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.HazardMapWriter

NRML/XML implementation of a HazardMapWriter.

See HazardMapWriter for information about constructor parameters.

serialize(data)[source]

Serialize hazard map data to XML.

See HazardMapWriter.serialize() for details about the expected input.
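
A hedged sketch of serializing a hazard map; the destination path and metadata values are placeholders.

writer = HazardMapXMLWriter(
    '/tmp/hazard_map.xml', investigation_time=50.0, imt='PGA', poe=0.1,
    smlt_path='b1', gsimlt_path='b1')
writer.serialize([(9.15, 45.17, 0.27), (9.45, 45.20, 0.31)])  # (lon, lat, iml) triples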

class openquake.commonlib.hazard_writers.SESXMLWriter(dest)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or a file-like object for XML results to be saved to.
  • sm_lt_path (str) – Source model logic tree branch identifier of the logic tree realization which produced this collection of stochastic event sets.
  • gsim_lt_path – GSIM logic tree branch identifier of the logic tree realization which produced this collection of stochastic event sets.
serialize(data)[source]

Serialize a collection of stochastic event sets to XML.

Parameters:data

An iterable of “SES” (“Stochastic Event Set”) objects. Each “SES” object should:

  • have an investigation_time attribute
  • have an ordinal attribute
  • be iterable, yielding a sequence of “rupture” objects

Each “rupture” should have the following attributes:

  • etag
  • magnitude
  • strike
  • dip
  • rake
  • tectonic_region_type
  • is_from_fault_source (a bool)
  • is_multi_surface (a bool)
  • lons
  • lats
  • depths

If is_from_fault_source is True, the rupture originated from a simple or complex fault source. In this case, lons, lats, and depths should all be 2D arrays (of uniform shape). These coordinate triples represent nodes of the rupture mesh.

If is_from_fault_source is False, the rupture originated from a point or area source. In this case, the rupture is represented by a quadrilateral planar surface. This planar surface is defined by 3D vertices. In this case, the rupture should have the following attributes:

  • top_left_corner
  • top_right_corner
  • bottom_right_corner
  • bottom_left_corner

Each of these should be a triple of lon, lat, depth.

If is_multi_surface is True, the rupture originated from a multi-surface source. In this case, lons, lats, and depths should have uniform length. The length should be a multiple of 4, where each segment of 4 represents the corner points of a planar surface in the following order:

  • top left
  • top right
  • bottom left
  • bottom right

Each of these should be a triple of lon, lat, depth.

class openquake.commonlib.hazard_writers.UHSXMLWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.BaseCurveWriter

UHS curve XML writer. See BaseCurveWriter for a list of general constructor inputs.

The following additional metadata params are required:
  • poe: Probability of exceedance for which a given set of UHS has been computed.
  • periods: A list of SA (Spectral Acceleration) period values, sorted in ascending order.

serialize(data)[source]

Write a sequence of uniform hazard spectra to the specified file.

Parameters:data

Iterable of UHS data. Each datum must be an object with the following attributes:

  • imls: A sequence of Intensity Measure Levels
  • location: An object representing the location of the curve; must have x and y to represent lon and lat, respectively.
openquake.commonlib.hazard_writers.gen_gmfs(gmf_set)[source]

Generate GMF nodes from a gmf_set.

Parameters:gmf_set – a sequence of GMF objects with attributes imt, sa_period, sa_damping, rupture_id and containing a list of GMF nodes with attributes gmv and location. The nodes are sorted by lon/lat.

openquake.commonlib.hazard_writers.rupture_to_element(rupture, parent=None)[source]

Convert a rupture object into an Element object.

Parameters:
  • rupture – must have attributes .rupture, .etag and .seed
  • parent – if None a new element is created, otherwise a sub element is attached to the parent.

openquake.commonlib.logictree module

Logic tree parser, verifier and processor. See specs at https://blueprints.launchpad.net/openquake-old/+spec/openquake-logic-tree-module

A logic tree object must be iterable, yielding realizations, i.e. objects with attributes value, weight, lt_path and ordinal.
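
For instance, any object implementing this protocol can be consumed as follows (logic_tree here is a placeholder name for such an object):

for rlz in logic_tree:  # each realization of the logic tree
    print(rlz.ordinal, rlz.value, rlz.weight, rlz.lt_path)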

class openquake.commonlib.logictree.Branch(branch_id, weight, value)[source]

Bases: object

Branch object, represents a <logicTreeBranch /> element.

Parameters:
  • branch_id – Value of @branchID attribute.
  • weight – Decimal value of the weight assigned to the branch, taken from the text content of the <uncertaintyWeight /> child node.
  • value – The actual uncertainty parameter value, taken from the text content of the <uncertaintyModel /> child node. Its type depends on the branchset’s uncertainty type.
class openquake.commonlib.logictree.BranchSet(uncertainty_type, filters)[source]

Bases: object

Branchset object, represents a <logicTreeBranchSet /> element.

Parameters:
  • uncertainty_type

    String value. According to the spec one of:

    gmpeModel
    Branches contain references to different GMPEs. Values are parsed as strings and are supposed to be one of the supported GMPEs. See the list at GMPELogicTree.
    sourceModel
    Branches contain references to different PSHA source models. Values are treated as file names, relative to the base path.
    maxMagGRRelative
    Different values to add to Gutenberg-Richter (“GR”) maximum magnitude. Value should be interpretable as float.
    bGRRelative
    Values to add to GR “b” value. Parsed as float.
    maxMagGRAbsolute
    Values to replace GR maximum magnitude. Values expected to be lists of floats separated by space, one float for each GR MFD in a target source in order of appearance.
    abGRAbsolute
    Values to replace “a” and “b” values of GR MFD. Lists of pairs of floats, one pair for one GR MFD in a target source.
    incrementalMFDAbsolute
    Replaces an evenly discretized MFD with the values provided
    simpleFaultDipRelative
    Increases or decreases the angle of fault dip from that given in the original source model
    simpleFaultDipAbsolute
    Replaces the fault dip in the specified source(s)
    simpleFaultGeometryAbsolute
    Replaces the simple fault geometry (trace, upper seismogenic depth, lower seismogenic depth and dip) of a given source with the values provided
    complexFaultGeometryAbsolute
    Replaces the complex fault geometry edges of a given source with the values provided
    characteristicFaultGeometryAbsolute
    Replaces the complex fault geometry surface of a given source with the values provided
  • filters

    Dictionary, a set of filters specifying which sources the uncertainty should be applied to. Represented as branchset element’s attributes in xml:

    applyToSources
    The uncertainty should be applied only to specific sources. This filter is required for absolute uncertainties (also only one source can be used for those). Value should be the list of source ids. Can be used only in source model logic tree.
    applyToSourceType
    Can be used in the source model logic tree definition. Specifies the source type (area, point, simple fault, complex fault) to which the uncertainty applies.
    applyToTectonicRegionType
    Can be used in both the source model and GMPE logic trees. Specifies the tectonic region type (Active Shallow Crust, Stable Shallow Crust, etc.) to which the uncertainty applies. This filter is required for all branchsets in the GMPE logic tree.
apply_uncertainty(value, source)[source]

Apply this branchset’s uncertainty with the given value to the given source, if it passes the filters.

This method is not called for uncertainties of types “gmpeModel” and “sourceModel”.

Parameters:
  • value – The actual uncertainty value of sampled branch. Type depends on uncertainty type.
  • source – The opensha source data object.
Returns:

None, all changes are applied to MFD in place. Therefore all sources have to be reinstantiated after processing is done in order to sample the tree once again.

enumerate_paths()[source]

Generate all possible paths starting from this branch set.

Returns:Generator of two-item tuples. Each tuple contains weight of the path (calculated as a product of the weights of all path’s branches) and list of path’s Branch objects. Total sum of all paths’ weights is 1.0
filter_source(source)[source]

Apply filters to source and return True if uncertainty should be applied to it.

get_branch_by_id(branch_id)[source]

Return Branch object belonging to this branch set with id equal to branch_id.

class openquake.commonlib.logictree.BranchTuple(bset, id, uncertainty, weight, effective)

Bases: tuple

bset

Alias for field number 0

effective

Alias for field number 4

id

Alias for field number 1

uncertainty

Alias for field number 2

weight

Alias for field number 3

class openquake.commonlib.logictree.GsimLogicTree(fname, tectonic_region_types=['*'], ltnode=None)[source]

Bases: object

A GsimLogicTree instance is an iterable yielding Realization tuples with attributes value, weight and lt_path, where value is a dictionary {trt: gsim}, weight is a number in the interval 0..1 and lt_path is a tuple with the branch ids of the given realization.

Parameters:
  • fname (str) – full path of the gsim_logic_tree file
  • tectonic_region_types – a sequence of distinct tectonic region types
  • ltnode – usually None, but it can also be a openquake.commonlib.nrml.Node object describing the GSIM logic tree XML file, to avoid reparsing it
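A hedged usage sketch; the file name and tectonic region type below are placeholders.

gsim_lt = GsimLogicTree('gmpe_logic_tree.xml', ['Active Shallow Crust'])
print(gsim_lt.get_num_paths())         # effective number of paths in the tree
for rlz in gsim_lt:                    # Realization tuples
    print(rlz.value, rlz.weight, rlz.lt_path)
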
check_imts(imts)[source]

Make sure the IMTs are recognized by all GSIMs in the logic tree

classmethod from_(gsim)[source]

Generate a trivial GsimLogicTree from a single GSIM instance.

get_gsim_by_trt(rlz, trt)[source]
Parameters:
  • rlz – a logictree Realization
  • trt – a tectonic region type string
Returns:the GSIM string associated to the given realization
get_num_branches()[source]

Return the number of effective branches for each branchset id, as a dictionary.

get_num_paths()[source]

Return the effective number of paths in the tree.

reduce(trts)[source]

Reduce the GsimLogicTree.

Parameters:trts – a subset of tectonic region types
Returns:a reduced GsimLogicTree instance
exception openquake.commonlib.logictree.InvalidLogicTree[source]

Bases: exceptions.Exception

exception openquake.commonlib.logictree.LogicTreeError(filename, message)[source]

Bases: exceptions.Exception

Base class for errors of loading, parsing and validation of logic trees.

Parameters:
  • filename – The name of the file which contains an error.
  • message – The error message.
openquake.commonlib.logictree.MAX_SINT_32 = 2147483647

Maximum value for a seed number

openquake.commonlib.logictree.MIN_SINT_32 = -2147483648

Minimum value for a seed number

class openquake.commonlib.logictree.Realization(value, weight, lt_path, ordinal, lt_uid)

Bases: tuple

lt_path

Alias for field number 2

lt_uid

Alias for field number 4

ordinal

Alias for field number 3

uid
value

Alias for field number 0

weight

Alias for field number 1

class openquake.commonlib.logictree.SourceModelLogicTree(filename, validate=True, seed=0, num_samples=0)[source]

Bases: object

Source model logic tree parser.

Parameters:
  • filename – Full pathname of logic tree file
  • validate – Boolean indicating whether or not the tree should be validated while parsed. This should be set to True on initial load of the logic tree (before importing it to the database) and to False on workers side (when loaded from the database).
Raises:

ValidationError – If logic tree file has a logic error, which can not be prevented by xml schema rules (like referencing sources with missing id).

FILTERS = ('applyToTectonicRegionType', 'applyToSources', 'applyToSourceType')
SOURCE_TYPES = ('point', 'area', 'complexFault', 'simpleFault', 'characteristicFault')
apply_branchset(branchset_node, branchset)[source]

See superclass’ method for description and signature specification.

Parses the branchset node’s attribute @applyToBranches to apply the following branchsets to the preceding branches selectively. A branching level can have more than one branchset exactly for this reason: different branchsets can apply to different open ends.

Checks that the branchset is applied only to branches on the previous branching level which do not have a child branchset yet.

collect_source_model_data(source_model)[source]

Parse the source model file and collect information about the source ids, source types and tectonic region types available in it. That information is then used by validate_filters() and validate_uncertainty_value().

make_apply_uncertainties(branch_ids)[source]

Parse the path through the source model logic tree and return an “apply uncertainties” function.

Parameters:branch_ids – List of string identifiers of branches, representing the path through source model logic tree.
Returns:A function to be applied to all the sources as they get read from the database and converted to the hazardlib representation. The function takes one argument, the hazardlib source object, and applies the uncertainties to it in place.
parse_branches(branchset_node, branchset, validate)[source]

Create and attach branches at branchset_node to branchset.

Parameters:
  • branchset_node – Same as for parse_branchset().
  • branchset – An instance of BranchSet.
  • validate – Whether or not branches’ uncertainty values should be validated.

Checks that each branch has a valid value and a unique id, and that the branches have a total weight of 1.0.

Returns:None, all branches are attached to provided branchset.
parse_branchinglevel(branchinglevel_node, depth, validate)[source]

Parse one branching level.

Parameters:
  • branchinglevel_node – etree.Element object with tag “logicTreeBranchingLevel”.
  • depth – The sequential number of this branching level, based on 0.
  • validate – Whether or not the branching level, its branchsets and their branches should be validated.

Enumerates the child branchsets and calls parse_branchset(), validate_branchset(), parse_branches() and finally apply_branchset() for each.

Keeps track of “open ends” – the set of branches that don’t have any child branchset on this step of execution. After processing of every branching level only those branches that are listed in it can have child branchsets (if there is one on the next level).

parse_branchset(branchset_node, depth, number, validate)[source]

Create BranchSet object using data in branchset_node.

Parameters:
  • branchset_node – etree.Element object with tag “logicTreeBranchSet”.
  • depth – The sequential number of branchset’s branching level, based on 0.
  • number – Index number of this branchset inside branching level, based on 0.
  • validate – Whether or not filters defined in branchset and the branchset itself should be validated.
Returns:

An instance of BranchSet with filters applied but with no branches (they’re attached in parse_branches()).

parse_filters(branchset_node, uncertainty_type, filters)[source]

See superclass’ method for description and signature specification.

Converts “applyToSources” filter value by just splitting it to a list.

parse_tree(tree_node, validate)[source]

Parse the whole tree and point root_branchset attribute to the tree’s root.

parse_uncertainty_value(node, branchset)[source]

See superclass’ method for description and signature specification.

Doesn’t change source model file name, converts other values to either pair of floats or a single float depending on uncertainty type.

sample_path(rnd)[source]

Return the model name and a list of branch ids.

Parameters:rnd – the random generator used for the sampling
samples_by_lt_path()[source]

Returns a dictionary lt_path -> how many times that path was sampled

validate_branchset(branchset_node, depth, number, branchset)[source]

See superclass’ method for description and signature specification.

Checks that the following conditions are met:

  • First branching level must contain exactly one branchset, which must be of type “sourceModel”.
  • All other branchsets must not be of type “sourceModel” or “gmpeModel”.
validate_filters(branchset_node, uncertainty_type, filters)[source]

See superclass’ method for description and signature specification.

Checks that the following conditions are met:

  • “sourceModel” uncertainties can not have filters.
  • Absolute uncertainties must have only one filter – “applyToSources”, with only one source id.
  • All other uncertainty types can have either no or one filter.
  • Filter “applyToSources” must mention only source ids that exist in source models.
  • Filter “applyToTectonicRegionType” must mention only tectonic region types that exist in source models.
  • Filter “applyToSourceType” must mention only source types that exist in source models.
validate_uncertainty_value(node, branchset)[source]

See superclass’ method for description and signature specification.

Checks that the following conditions are met:

  • For uncertainty of type “sourceModel”: referenced file must exist and be readable. This is checked in collect_source_model_data() along with saving the source model information.
  • For uncertainty of type “abGRAbsolute”: value should be two float values.
  • For both absolute uncertainties: the source (only one) must be referenced in branchset’s filter “applyToSources”.
  • For all other cases: value should be a single float value.
exception openquake.commonlib.logictree.ValidationError(node, *args, **kwargs)[source]

Bases: openquake.commonlib.logictree.LogicTreeError

Logic tree file contains a logic error.

Parameters:node – XML node object that causes fail. Used to determine the affected line number.

All other constructor parameters are passed to superclass' constructor.

openquake.commonlib.logictree.get_effective_rlzs(rlzs)[source]

Group together realizations with the same unique identifier (uid) and yield the first representative of each group.

openquake.commonlib.logictree.sample(weighted_objects, num_samples, rnd)[source]

Take random samples of a sequence of weighted objects

Parameters:
  • weighted_objects – A finite sequence of objects with a .weight attribute. The weights must sum up to 1.
  • num_samples – The number of samples to return
  • rnd – Random object. Should have method random() – return uniformly distributed random float number >= 0 and < 1.
Returns:

A subsequence of the original sequence with num_samples elements
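
A minimal sketch with hypothetical weighted objects whose weights sum to 1:

import random
from collections import namedtuple

Weighted = namedtuple('Weighted', 'name weight')
objects = [Weighted('b1', 0.2), Weighted('b2', 0.3), Weighted('b3', 0.5)]
rnd = random.Random(42)
print(sample(objects, 2, rnd))  # two objects sampled according to their weights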

openquake.commonlib.logictree.sample_one(branches, rnd)[source]

openquake.commonlib.node module

This module defines a Node class, together with a few conversion functions which are able to convert NRML files into hierarchical objects (DOM). That makes it easier to read and write XML from Python and vice versa. Such features are used in the command-line conversion tools. The Node class is kept intentionally similar to an Element class, but it overcomes a limitation of ElementTree: in particular, a node can manage a lazy iterable of subnodes, whereas ElementTree wants to keep everything in memory. Moreover, the Node class provides a convenient dot notation to access subnodes.

The Node class is instantiated with four arguments:

  1. the node tag (a mandatory string)
  2. the node attributes (a dictionary)
  3. the node value (a string or None)
  4. the subnodes (an iterable over nodes)

If a node has subnodes, its value should be None.

For instance, here is an example of instantiating a root node with two subnodes a and b:

>>> from openquake.commonlib.node import Node
>>> a = Node('a', {}, 'A1')
>>> b = Node('b', {'attrb': 'B'}, 'B1')
>>> root = Node('root', nodes=[a, b])
>>> root
<root {} None ...>

Node objects can be converted into nicely indented strings:

>>> print(root.to_str())
root
  a 'A1'
  b{attrb='B'} 'B1'

The subnodes can be retrieved with the dot notation:

>>> root.a
<a {} A1 >

The value of a node can be extracted with the ~ operator:

>>> ~root.a
'A1'

If there are multiple subnodes with the same name

>>> root.append(Node('a', {}, 'A2'))  # add another 'a' node

the dot notation will retrieve the first node.

It is possible to retrieve the other nodes from the ordinal index:

>>> root[0], root[1], root[2]
(<a {} A1 >, <b {'attrb': 'B'} B1 >, <a {} A2 >)

The list of all subnodes with a given name can be retrieved as follows:

>>> list(root.getnodes('a'))
[<a {} A1 >, <a {} A2 >]

It is also possible to delete a node given its index:

>>> del root[2]

A node is an iterable object yielding its subnodes:

>>> list(root)
[<a {} A1 >, <b {'attrb': 'B'} B1 >]

The attributes of a node can be retrieved with the square bracket notation:

>>> root.b['attrb']
'B'

It is possible to add and remove attributes freely:

>>> root.b['attr'] = 'new attr'
>>> del root.b['attr']

Node objects can be easily converted into ElementTree objects:

>>> node_to_elem(root)  
<Element 'root' at ...>

Then is trivial to generate the XML representation of a node:

>>> from xml.etree import ElementTree
>>> print(ElementTree.tostring(node_to_elem(root)).decode('utf-8'))
<root><a>A1</a><b attrb="B">B1</b></root>

Generating XML files larger than the available memory requires some care. The trick is to use a node generator, such that it is not necessary to keep the entire tree in memory. Here is an example:

>>> def gen_many_nodes(N):
...     for i in xrange(N):
...         yield Node('a', {}, 'Text for node %d' % i)
>>> lazytree = Node('lazytree', {}, nodes=gen_many_nodes(10))

The lazytree object defined here consumes no memory, because the nodes are not created at instantiation time. They are created as soon as you start iterating on the lazytree. In particular list(lazytree) will generate all of them. If your goal is to store the tree on the filesystem in XML format you should use a writing routine converting one subnode at a time, without requiring the full list of them. The routines provided by ElementTree are no good for this; however, commonlib.writers provides a StreamingXMLWriter just for that purpose.

Lazy trees should not be used unless it is absolutely necessary in order to save memory; the problem is that if you use a lazy tree the slice notation will not work (the underlying generator will not accept it); moreover it will not be possible to iterate twice on the subnodes, since the generator will be exhausted. Notice that even accessing a subnode with the dot notation will advance the generator. Finally, nodes containing lazy nodes will not be pickleable.
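
A hedged sketch of streaming a lazy tree to disk with node_to_xml (documented below), reusing the gen_many_nodes generator defined above; the output path is a placeholder.

lazytree = Node('lazytree', {}, nodes=gen_many_nodes(10))
with open('/tmp/lazytree.xml', 'w') as out:   # hypothetical output path
    node_to_xml(lazytree, out)                # subnodes are consumed one at a time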

class openquake.commonlib.node.Node(fulltag, attrib=None, text=None, nodes=None, lineno=None)[source]

Bases: object

A class to make it easy to edit hierarchical structures with attributes, such as XML files. Node objects must be pickleable and must consume as little memory as possible. Moreover they must be easily converted from and to ElementTree objects. The advantage over ElementTree objects is that subnodes can be lazily generated and that they can be accessed with the dot notation.

append(node)[source]

Append a new subnode

attrib
getnodes(name)[source]

Return the direct subnodes with name ‘name’

lineno
nodes
tag
text
to_str(expandattrs=True, expandvals=True)[source]

Convert the node into a string, intended for testing/debugging purposes

Parameters:
  • expandattrs – print the values of the attributes if True, else print only the names
  • expandvals – print the values if True, else print only the tag names
class openquake.commonlib.node.SourceLineParser(html=0, target=None, encoding=None)[source]

Bases: xml.etree.ElementTree.XMLParser

A custom parser managing line numbers: works for Python <= 3.3

class openquake.commonlib.node.ValidatingXmlParser(validators, stop=None)[source]

Bases: object

Validating XML Parser based on Expat. It has two methods .parse_file and .parse_bytes returning a validated Node object.

Parameters:
  • validators – a dictionary of validation functions
  • stop – the tag where to stop the parsing (if any)
exception Exit[source]

Bases: exceptions.Exception

Raised when the parsing is stopped before the end on purpose

ValidatingXmlParser.parse_bytes(bytestr, isfinal=True)[source]

Parse a byte string. If the string is very large, split it into chunks and parse each chunk with isfinal=False, then parse an empty chunk with isfinal=True.

ValidatingXmlParser.parse_file(file_or_fname)[source]

Parse a file or a filename
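
A hedged usage sketch, with an empty dictionary of validation functions (i.e. no custom validation) and a placeholder file name:

parser = ValidatingXmlParser({})          # {}: no validation functions supplied
root = parser.parse_file('example.xml')   # returns a validated Node object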

openquake.commonlib.node.context(*args, **kwds)[source]

Context manager managing exceptions and adding the line number of the current node and the name of the current file to the error message.

Parameters:
  • fname – the current file being processed
  • node – the current node being processed
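
A hedged sketch of the intended use, where fname and node are the objects described above and the attribute access is purely illustrative:

with context(fname, node):
    value = float(node['spacing'])  # errors raised here get the file name and line number added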
openquake.commonlib.node.fromstring(text)[source]

Parse an XML string and return a tree

openquake.commonlib.node.iterparse(source, events=('end', ), remove_comments=True, **kw)[source]

Thin wrapper around ElementTree.iterparse

openquake.commonlib.node.node_copy(node, nodefactory=<class 'openquake.commonlib.node.Node'>)[source]

Make a deep copy of the node

openquake.commonlib.node.node_display(root, expandattrs=False, expandvals=False, output=<open file '<stdout>', mode 'w'>)[source]

Write an indented representation of the Node object on the output; this is intended for testing/debugging purposes.

Parameters:
  • root – a Node object
  • expandattrs (bool) – if True, the values of the attributes are also printed, not only the names
  • expandvals (bool) – if True, the values of the tags are also printed, not only the names.
  • output – stream where to write the string representation of the node
openquake.commonlib.node.node_from_dict(dic, nodefactory=<class 'openquake.commonlib.node.Node'>)[source]

Convert a (nested) dictionary with attributes tag, attrib, text, nodes into a Node object.

openquake.commonlib.node.node_from_elem(elem, nodefactory=<class 'openquake.commonlib.node.Node'>, lazy=())[source]

Convert (recursively) an ElementTree object into a Node object.

openquake.commonlib.node.node_from_ini(ini_file, nodefactory=<class 'openquake.commonlib.node.Node'>, root_name='ini')[source]

Convert a .ini file into a Node object.

Parameters:ini_file – a filename or a file like object in read mode
openquake.commonlib.node.node_from_xml(xmlfile, nodefactory=<class 'openquake.commonlib.node.Node'>)[source]

Convert a .xml file into a Node object.

Parameters:xmlfile – a file name or file object open for reading
openquake.commonlib.node.node_to_dict(node)[source]

Convert a Node object into a (nested) dictionary with attributes tag, attrib, text, nodes.

Parameters:node – a Node-compatible object
openquake.commonlib.node.node_to_elem(root)[source]

Convert (recursively) a Node object into an ElementTree object.

openquake.commonlib.node.node_to_ini(node, output=<open file '<stdout>', mode 'w'>)[source]

Convert a Node object with the right structure into a .ini file.

Params node:a Node object
Params output:a file-like object opened in write mode
openquake.commonlib.node.node_to_xml(node, output=<open file '<stdout>', mode 'w'>, nsmap=None)[source]

Convert a Node object into a pretty .xml file without keeping everything in memory. If you just want the string representation use commonlib.writers.tostring(node).

Parameters:
  • node – a Node-compatible object (ElementTree nodes are fine)
  • nsmap – if given, shorten the tags with aliases
openquake.commonlib.node.parse(source, remove_comments=True, **kw)[source]

Thin wrapper around ElementTree.parse

openquake.commonlib.node.pprint(self, stream=None, indent=1, width=80, depth=None)[source]

Pretty print the underlying literal Python object

openquake.commonlib.node.read_nodes(fname, filter_elem, nodefactory=<class 'openquake.commonlib.node.Node'>, remove_comments=True)[source]

Convert an XML file into a lazy iterator over Node objects satisfying the given specification, i.e. a function element -> boolean.

Parameters:
  • fname – file name or file object
  • filter_elem – element specification

In case of errors, add the file name to the error message.

openquake.commonlib.node.striptag(tag)[source]

Get the short representation of a fully qualified tag

Parameters:tag (str) – a (fully qualified or not) XML tag
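
For instance (a hedged sketch, reusing the fully qualified tag shown in the nrml module below):

short = striptag('{http://openquake.org/xmlns/nrml/0.4}fragilityModel')  # expected: 'fragilityModel'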
openquake.commonlib.node.to_literal(self)[source]

Convert the node into a literal Python object

openquake.commonlib.nrml module

From Node objects to NRML files and vice versa

It is possible to save a Node object into a NRML file by using the function write(nodes, output) where output is a file object. If you want to make sure that the generated file is valid according to the NRML schema just open it in ‘w+’ mode: immediately after writing it will be read and validated. It is also possible to convert a NRML file into a Node object with the routine read(source) where source is the path name of the NRML file or a file object opened for reading. The file will be validated as soon as opened.

For instance an exposure file like the following:

<?xml version='1.0' encoding='utf-8'?>
<nrml xmlns="http://openquake.org/xmlns/nrml/0.4"
      xmlns:gml="http://www.opengis.net/gml">
  <exposureModel
      id="my_exposure_model_for_population"
      category="population"
      taxonomySource="fake population datasource">

    <description>
      Sample population
    </description>

    <assets>
      <asset id="asset_01" number="7" taxonomy="IT-PV">
          <location lon="9.15000" lat="45.16667" />
      </asset>

      <asset id="asset_02" number="7" taxonomy="IT-CE">
          <location lon="9.15333" lat="45.12200" />
      </asset>
    </assets>
  </exposureModel>
</nrml>

can be converted as follows:

>> nrml = read(<path_to_the_exposure_file.xml>)

Then subnodes and attributes can be conveniently accessed:

>> nrml.exposureModel.assets[0]['taxonomy']
'IT-PV'
>> nrml.exposureModel.assets[0]['id']
'asset_01'
>> nrml.exposureModel.assets[0].location['lon']
'9.15000'
>> nrml.exposureModel.assets[0].location['lat']
'45.16667'

The Node class provides no facility to cast strings into Python types; this is a job for a subclass of Node, which can be supplemented by a dictionary of validators.

exception openquake.commonlib.nrml.DuplicatedID[source]

Bases: exceptions.Exception

Raised when two sources with the same ID are found in a source model

openquake.commonlib.nrml.asset_mean_stddev(value, assetRef, mean, stdDev)[source]
openquake.commonlib.nrml.convert_fragility_model_04(node, fname, fmcounter=count(1))[source]
Parameters:
  • node – an openquake.commonlib.node.Node in NRML 0.4
  • fname – path of the fragility file
Returns:

an openquake.commonlib.node.Node in NRML 0.5

openquake.commonlib.nrml.damage_triple(value, ds, mean, stddev)[source]
openquake.commonlib.nrml.ffconvert(fname, limit_states, ff, min_iml=1e-10)[source]

Convert a fragility function into a numpy array plus a bunch of attributes.

Parameters:
  • fname – path to the fragility model file
  • limit_states – expected limit states
  • ff – fragility function node
Returns:

a pair (array, dictionary)

openquake.commonlib.nrml.get_consequence_model(node, fname)[source]
openquake.commonlib.nrml.get_fragility_model(node, fname)[source]
Parameters:
  • node – a fragilityModel node
  • fname – path to the fragility file
Returns:

a dictionary imt, taxonomy -> fragility function list

openquake.commonlib.nrml.get_fragility_model_04(fmodel, fname)[source]
Parameters:
  • fmodel – a fragilityModel node
  • fname – path of the fragility file
Returns:

an openquake.risklib.scientific.FragilityModel instance

openquake.commonlib.nrml.get_source_model_04(node, fname, converter)[source]
openquake.commonlib.nrml.get_source_model_05(node, fname, converter)[source]
openquake.commonlib.nrml.get_tag_version(nrml_node)[source]

Extract from a node of kind NRML the tag and the version. For instance from ‘{http://openquake.org/xmlns/nrml/0.4}fragilityModel’ one gets the pair (‘fragilityModel’, ‘nrml/0.4’).

openquake.commonlib.nrml.get_vulnerability_functions_04(node, fname)[source]
Parameters:
  • node – a vulnerabilityModel node
  • fname – path to the vulnerability file
Returns:

a dictionary imt, taxonomy -> vulnerability function

openquake.commonlib.nrml.get_vulnerability_functions_05(node, fname)[source]
Parameters:
  • node – a vulnerabilityModel node
  • fname – path of the vulnerability file
Returns:

a dictionary imt, taxonomy -> vulnerability function

openquake.commonlib.nrml.parse(fname, *args)[source]

Parse a NRML file and return an associated Python object. It works by calling nrml.read() and node_to_obj() in sequence.

openquake.commonlib.nrml.read(source, chatty=True, stop=None)[source]

Convert a NRML file into a validated Node object. Keeps the entire tree in memory.

Parameters:source – a file name or file object open for reading
openquake.commonlib.nrml.write(nodes, output=<open file '<stdout>', mode 'w'>, fmt='%10.7E', gml=True, xmlns=None)[source]

Convert nodes into a NRML file. output must be a file object open in write mode. If you want to perform a consistency check, open it in read-write mode, then it will be read after creation and validated.

Params nodes:an iterable over Node objects
Params output:a file-like object in write or read-write mode
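
A hedged round-trip sketch, assuming read() returns the root nrml node so that iterating over it yields the nodes that write() expects; the file names are placeholders.

node = read('exposure_model.xml')        # validated root Node
with open('exposure_copy.xml', 'w') as out:
    write(list(node), out)               # serialize the subnodes back to NRML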

openquake.commonlib.oqvalidation module

class openquake.commonlib.oqvalidation.OqParam(**names_vals)[source]

Bases: openquake.risklib.valid.ParamSet

all_cost_types

Return the cost types of the computation (including occupants if it is there) in order.

area_source_discretization

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
asset_correlation

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
asset_hazard_distance

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
asset_life_expectancy

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
asset_loss_table

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
avg_losses

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
base_path

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
calculation_mode

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
check_gsims(gsims)[source]
Parameters:gsims – a sequence of GSIM instances
check_uniform_hazard_spectra()[source]
compare_with_classical

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
complex_fault_mesh_spacing

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
concurrent_tasks

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
conditional_loss_poes

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
continuous_fragility_discretization

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
coordinate_bin_width

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
description

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
distance_bin_width

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
export_dir

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
export_multi_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
exports

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
file_type
get_correl_model()[source]

Return a correlation object. See openquake.hazardlib.correlation for more info.

ground_motion_correlation_model

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ground_motion_correlation_params

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ground_motion_fields

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
gsim

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hazard_calculation_id

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hazard_curves_from_gmfs

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hazard_maps

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hazard_output_id

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hypocenter

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ignore_missing_costs

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
imtls

Returns an OrderedDict with the risk intensity measure types and levels, if given, or the hazard ones.

individual_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
inputs

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
insured_losses

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
intensity_measure_types

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
intensity_measure_types_and_levels

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
interest_rate

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
investigation_time

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
is_valid_complex_fault_mesh_spacing()[source]

The complex_fault_mesh_spacing parameter can be None only if rupture_mesh_spacing is set. In that case it is identified with it.

is_valid_export_dir()[source]

The export_dir parameter must refer to a directory, and the user must have the permission to write on it.

is_valid_geometry()[source]

It is possible to infer the geometry only if exactly one of sites, sites_csv, hazard_curves_csv, gmfs_csv, region and exposure_file is set. You did set more than one, or nothing.

is_valid_hazard_curves()[source]

You must set hazard_curves_from_gmfs if mean_hazard_curves or quantile_hazard_curves are set.

is_valid_inputs()[source]

Invalid calculation_mode=”{calculation_mode}” or missing fragility_file/vulnerability_file in the .ini file.

is_valid_intensity_measure_levels()[source]

In order to compute hazard curves, intensity_measure_types_and_levels must be set or extracted from the risk models.

is_valid_intensity_measure_types()[source]

If the IMTs and levels are extracted from the risk models, they must not be set directly. Moreover, if intensity_measure_types_and_levels is set directly, intensity_measure_types must not be set.

is_valid_maximum_distance()[source]

Invalid maximum_distance={maximum_distance}: {error}

is_valid_poes()[source]

When computing hazard maps and/or uniform hazard spectra, the poes list must be non-empty.

is_valid_region()[source]

If there is a region, a region_grid_spacing must be given

is_valid_sites_disagg()[source]

The option sites_disagg (when given) requires specific_assets to be set.

is_valid_specific_assets()[source]

Read the special assets from the parameters specific_assets or specific_assets_csv, if present. You cannot have both. The concept is meaningful only for risk calculators.

is_valid_truncation_level_disaggregation()[source]

Truncation level must be set for disaggregation calculations

job_type

‘hazard’ or ‘risk’

loss_curve_resolution

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
loss_dt(dtype=<type 'numpy.float32'>)[source]

Return a composite dtype based on the loss types, including occupants

loss_dt_list(dtype=<type 'numpy.float32'>)[source]

Return a data type list [(loss_name, dtype), ...]

loss_ratios

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
lrem_steps_per_interval

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
mag_bin_width

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
master_seed

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
maximum_distance

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
mean_hazard_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
minimum_intensity

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
no_imls()[source]

Return True if there are no intensity measure levels

num_epsilon_bins

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
number_of_ground_motion_fields

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
number_of_logic_tree_samples

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
poes

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
poes_disagg

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
quantile_hazard_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
quantile_loss_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
random_seed

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_backarc

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_depth_to_1pt0km_per_sec

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_depth_to_2pt5km_per_sec

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_vs30_type

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_vs30_value

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
region

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
region_constraint

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
region_grid_spacing

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
risk_files
risk_imtls

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
risk_investigation_time

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
rupture_mesh_spacing

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ruptures_per_block

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
save_ruptures

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ses_per_logic_tree_path

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ses_ratio

The ratio

risk_investigation_time / investigation_time / ses_per_logic_tree_path

set_risk_imtls(risk_models)[source]
Parameters:risk_models – a dictionary taxonomy -> loss_type -> risk_function

Set the attribute risk_imtls.

siteparam = {'backarc': 'reference_backarc', 'z2pt5': 'reference_depth_to_2pt5km_per_sec', 'vs30measured': 'reference_vs30_type', 'vs30': 'reference_vs30_value', 'z1pt0': 'reference_depth_to_1pt0km_per_sec'}
sites

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
sites_disagg

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
sites_per_tile

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
specific_assets

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
steps_per_interval

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
taxonomies_from_model

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
time_event

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
truncation_level

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
tses

Return the total time as investigation_time * ses_per_logic_tree_path * (number_of_logic_tree_samples or 1)
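
For instance, with investigation_time=50, ses_per_logic_tree_path=10 and number_of_logic_tree_samples=0 (i.e. full enumeration), tses is 50 * 10 * 1 = 500 years; with 2 samples it would be 50 * 10 * 2 = 1000 years.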

uniform_hazard_spectra

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
width_of_mfd_bin

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
openquake.commonlib.oqvalidation.fix_maximum_distance(max_dist, trts)[source]

Make sure the dictionary maximum_distance (provided by the user in the job.ini file) is filled for all tectonic region types and has no key named ‘default’.

openquake.commonlib.oqvalidation.getdefault(dic_with_default, key)[source]
Parameters:
  • dic_with_default – a dictionary with a ‘default’ key
  • key – a key that may be present in the dictionary or not
Returns:

the value associated to the key, or to ‘default’
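
For instance, with a hypothetical dictionary of maximum distances per tectonic region type:

>>> dic = {'default': 200, 'Active Shallow Crust': 100}
>>> getdefault(dic, 'Active Shallow Crust')
100
>>> getdefault(dic, 'Stable Continental Crust')
200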

openquake.commonlib.parallel module

TODO: write documentation.

class openquake.commonlib.parallel.IterResult(futures, taskname, num_tasks=None, progress=<function info>)[source]

Bases: object

Parameters:
  • futures – an iterator over futures
  • taskname – the name of the task
  • num_tasks – the total number of expected futures (None if unknown)
  • progress – a logging function for the progress report
reduce(agg=<built-in function add>, acc=None)[source]
save_task_data(mon)[source]
classmethod sum(iresults)[source]

Sum the data transfer information of a set of results

task_data_dt = dtype([('taskno', '<u4'), ('weight', '<f4'), ('duration', '<f4')])
class openquake.commonlib.parallel.NoFlush(monitor, taskname)[source]

Bases: object

class openquake.commonlib.parallel.Pickled(obj)[source]

Bases: object

A utility for manually pickling/unpickling objects. The reason is that celery does not use the HIGHEST_PROTOCOL, so relying on celery's own pickling is slower. Moreover, Pickled instances have a nice string representation and a length giving the size of the pickled bytestring.

Parameters:obj – the object to pickle
unpickle()[source]

Unpickle the underlying object
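
A minimal usage sketch (any picklable object will do):

>>> p = Pickled({'key': 1})
>>> len(p) > 0  # size of the pickled bytestring
True
>>> p.unpickle() == {'key': 1}
True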

class openquake.commonlib.parallel.Processmap(func, iterargs)[source]

Bases: openquake.commonlib.parallel.Starmap

MapReduce implementation based on processes. For instance

>>> from collections import Counter
>>> c = Processmap(Counter, [('hello',), ('world',)]).reduce(acc=Counter())
pool = None
static poolfactory(processes=None, initializer=None, initargs=(), maxtasksperchild=None)

Returns a process pool object

class openquake.commonlib.parallel.Serialmap(func, iterargs)[source]

Bases: openquake.commonlib.parallel.Starmap

A sequential Starmap, useful for debugging purposes.

class openquake.commonlib.parallel.Starmap(func, iterargs)[source]

Bases: object

classmethod apply(func, args, concurrent_tasks=20, weight=<function <lambda>>, key=<function <lambda>>)[source]
pool = None
poolfactory = None
reduce(agg=<built-in function add>, acc=None, progress=<function info>)[source]
class openquake.commonlib.parallel.TaskManager(oqtask, name=None)[source]

Bases: object

A manager to submit several tasks of the same type. The usage is:

tm = TaskManager(do_something, logging.info)
tm.send(arg1, arg2)
tm.send(arg3, arg4)
print(tm.reduce())

Progress report is built-in.

classmethod apply(task, task_args, concurrent_tasks=20, maxweight=None, weight=<function <lambda>>, key=<function <lambda>>, name=None)[source]

Apply a task to a tuple of the form (sequence, *other_args) by first splitting the sequence in chunks, according to the weight of the elements and possibly to a key (see :function: openquake.baselib.general.split_in_blocks); a conceptual sketch of this kind of splitting is given after the parameter list below. Then reduce the results with an aggregation function. The chunks which are generated internally can be seen directly (useful for debugging purposes) by looking at the attribute ._chunks, right after the apply function has been called.

Parameters:
  • task – a task to run in parallel
  • task_args – the arguments to be passed to the task function
  • agg – the aggregation function
  • acc – initial value of the accumulator (default empty AccumDict)
  • concurrent_tasks – hint about how many tasks to generate
  • maxweight – if not None, used to split the tasks
  • weight – function to extract the weight of an item in arg0
  • key – function to extract the kind of an item in arg0
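
The actual splitting is delegated to openquake.baselib.general.split_in_blocks; the following is only a rough conceptual sketch of weight-based splitting, not the real implementation (names are hypothetical):

def split_by_weight(items, hint, weight=lambda item: 1):
    # yield blocks of items whose total weight is roughly
    # total_weight / hint, mimicking what split_in_blocks does
    total = sum(weight(item) for item in items)
    maxweight = total / hint or 1
    block, acc = [], 0
    for item in items:
        block.append(item)
        acc += weight(item)
        if acc >= maxweight:
            yield block
            block, acc = [], 0
    if block:
        yield block
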
executor = <concurrent.futures.process.ProcessPoolExecutor object>
progress(*args)[source]

Log in INFO mode regular tasks and in DEBUG private tasks

reduce(agg=<built-in function add>, acc=None)[source]

Loop on a set of results and update the accumulator by using the aggregation function.

Parameters:
  • agg – the aggregation function, (acc, val) -> new acc
  • acc – the initial value of the accumulator
Returns:

the final value of the accumulator
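
Conceptually, ignoring the parallel machinery, the reduction behaves like this pure-Python sketch (hypothetical names):

import operator

def reduce_sketch(results, agg=operator.add, acc=0):
    # acc defaults to 0 here for simplicity; the real method defaults
    # to an empty AccumDict
    for result in results:
        acc = agg(acc, result)
    return acc

so that reduce_sketch([1, 2, 3]) returns 6.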

classmethod restart()[source]
classmethod starmap(task, task_args, name=None)[source]

Spawn a bunch of tasks with the given list of arguments

Returns:a TaskManager object with a .result method.
submit(*args)[source]

Submit a function with the given arguments to the process pool and add a Future to the list .results. If the attribute distribute is set, the function is run in process and the result is returned.

submit_all()[source]
Returns:an IterResult object
task_ids = []
wait()[source]

Wait until all the task terminate. Discard the results.

Returns:the total number of tasks that were spawned
class openquake.commonlib.parallel.Threadmap(func, iterargs)[source]

Bases: openquake.commonlib.parallel.Starmap

MapReduce implementation based on threads. For instance

>>> from collections import Counter
>>> c = Threadmap(Counter, [('hello',), ('world',)]).reduce(acc=Counter())
pool = None
static poolfactory()
openquake.commonlib.parallel.check_mem_usage(monitor=<Monitor dummy>, soft_percent=90, hard_percent=100)[source]

Display a warning if we are running out of memory

Parameters:
  • soft_percent (int) – the soft memory limit, as a percentage
  • hard_percent (int) – the hard memory limit, as a percentage
openquake.commonlib.parallel.do_not_aggregate(acc, value)[source]

Do nothing aggregation function.

Parameters:
  • acc – the accumulator
  • value – the value to accumulate
Returns:

the accumulator unchanged

openquake.commonlib.parallel.get_pickled_sizes(obj)[source]

Return the pickled sizes of an object and its direct attributes, ordered by decreasing size. Here is an example:

>> total_size, partial_sizes = get_pickled_sizes(Monitor(''))
>> total_size
345
>> partial_sizes
[('_procs', 214), ('exc', 4), ('mem', 4), ('start_time', 4), ('_start_time', 4), ('duration', 4)]

Notice that the sizes depend on the operating system and the machine.

openquake.commonlib.parallel.oq_distribute()[source]

Return the current value of the variable OQ_DISTRIBUTE; if undefined, return ‘futures’.

openquake.commonlib.parallel.pickle_sequence(objects)[source]

Convert an iterable of objects into a list of pickled objects. If the iterable contains copies, the pickling will be done only once. If the iterable contains objects already pickled, they will not be pickled again.

Parameters:objects – a sequence of objects to pickle
openquake.commonlib.parallel.rec_delattr(mon, name)[source]

Delete attribute from a monitor recursively

openquake.commonlib.parallel.safely_call(func, args, pickle=False)[source]

Call the given function with the given arguments safely, i.e. by trapping the exceptions. Return a pair (result, exc_type) where exc_type is None if no exceptions occur, otherwise it is the exception class and the result is a string containing error message and traceback.

Parameters:
  • func – the function to call
  • args – the arguments
  • pickle – if set, the input arguments are unpickled and the return value is pickled; otherwise they are left unchanged
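
As a conceptual sketch of this kind of exception trapping (independent of the real implementation, which also handles the pickle flag):

import traceback

def safely_call_sketch(func, args):
    # hypothetical sketch: return (result, None) on success, otherwise
    # (error message + traceback string, exception class)
    try:
        return func(*args), None
    except Exception as exc:
        return '%s\n%s' % (exc, traceback.format_exc()), exc.__class__
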
openquake.commonlib.parallel.wakeup_pool()[source]

This is used at startup, only when the ProcessPoolExecutor is used, to fork the processes before loading any big data structure.

openquake.commonlib.readinput module

exception openquake.commonlib.readinput.DuplicatedID[source]

Bases: exceptions.Exception

Raised when two assets with the same ID are found in an exposure model

exception openquake.commonlib.readinput.DuplicatedPoint[source]

Bases: exceptions.Exception

Raised when reading a CSV file with duplicated (lon, lat) pairs

class openquake.commonlib.readinput.Exposure(id, category, description, cost_types, time_events, insurance_limit_is_absolute, deductible_is_absolute, area, assets, taxonomies, asset_refs)

Bases: tuple

area

Alias for field number 7

asset_refs

Alias for field number 10

assets

Alias for field number 8

category

Alias for field number 1

cost_types

Alias for field number 3

deductible_is_absolute

Alias for field number 6

description

Alias for field number 2

id

Alias for field number 0

insurance_limit_is_absolute

Alias for field number 5

taxonomies

Alias for field number 9

time_events

Alias for field number 4

openquake.commonlib.readinput.collect_files(dirpath, cond=<function <lambda>>)[source]

Recursively collect the files contained inside dirpath.

Parameters:
  • dirpath – path to a readable directory
  • cond – condition on the path to collect the file
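
For instance, to collect only the XML files under a directory (the path is hypothetical):

xml_files = collect_files('/path/to/inputs', cond=lambda path: path.endswith('.xml'))
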
openquake.commonlib.readinput.extract_from_zip(path, candidates)[source]

Given a zip archive and a function to detect the presence of a given filename, unzip the archive into a temporary directory and return the full path of the file. Raise an IOError if the file cannot be found within the archive.

Parameters:
  • path – pathname of the archive
  • candidates – list of names to search for
openquake.commonlib.readinput.get_composite_source_model(oqparam, in_memory=True)[source]

Parse the XML and build a complete composite source model in memory.

Parameters:
openquake.commonlib.readinput.get_cost_calculator(oqparam)[source]

Read the first lines of the exposure file and infer the cost calculator

openquake.commonlib.readinput.get_exposure(oqparam)[source]

Read the full exposure in memory and build a list of openquake.risklib.riskmodels.Asset instances. If you don’t want to keep everything in memory, use get_exposure_lazy instead (for experts only).

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:an Exposure instance
openquake.commonlib.readinput.get_gmfs(oqparam)[source]
Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:sitecol, etags, gmf array
openquake.commonlib.readinput.get_gmfs_from_txt(oqparam, fname)[source]
Parameters:
Returns:

a composite array of shape (N, R) read from a CSV file with format etag indices [gmv1 ... gmvN] * num_imts

openquake.commonlib.readinput.get_gsim_lt(oqparam, trts=['*'])[source]
Parameters:
Returns:

a GsimLogicTree instance obtained by filtering on the provided tectonic region types.

openquake.commonlib.readinput.get_gsims(oqparam)[source]

Return an ordered list of GSIM instances from the gsim name in the configuration file or from the gsim logic tree file.

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
openquake.commonlib.readinput.get_hcurves(oqparam)[source]
Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:sitecol, imtls, curve array
openquake.commonlib.readinput.get_hcurves_from_csv(oqparam, fname)[source]
Parameters:
Returns:

the site collection and the hazard curves read from the .txt file

openquake.commonlib.readinput.get_hcurves_from_nrml(oqparam, fname)[source]
Parameters:
Returns:

sitecol, curve array

openquake.commonlib.readinput.get_imts(oqparam)[source]

Return a sorted list of IMTs as hazardlib objects

openquake.commonlib.readinput.get_job_info(oqparam, csm, sitecol)[source]
Parameters:
Returns:

a dictionary with the parameters of the computation, in particular the input and output weights

openquake.commonlib.readinput.get_mesh(oqparam)[source]

Extract the mesh of points to compute from the sites, the sites_csv, or the region.

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
openquake.commonlib.readinput.get_mesh_csvdata(csvfile, imts, num_values, validvalues)[source]

Read CSV data in the format IMT lon lat value1 ... valueN.

Parameters:
  • csvfile – a file or file-like object with the CSV data
  • imts – a list of intensity measure types
  • num_values – dictionary with the number of expected values per IMT
  • validvalues – validation function for the values
Returns:

the mesh of points and the data as a dictionary imt -> array of curves for each site
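
For instance, with a single IMT and three values per site, the CSV content could look like the following (hypothetical values):

PGA 0.0 0.0 0.005 0.007 0.0098
PGA 0.0 1.0 0.004 0.006 0.0087

in which case num_values would be {'PGA': 3}.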

openquake.commonlib.readinput.get_mesh_hcurves(oqparam)[source]

Read CSV data in the format lon lat, v1-vN, w1-wN, ....

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:the mesh of points and the data as a dictionary imt -> array of curves for each site
openquake.commonlib.readinput.get_oqparam(job_ini, pkg=None, calculators=None, hc_id=None)[source]

Parse a dictionary of parameters from an INI-style config file.

Parameters:
  • job_ini – Path to configuration file/archive or dictionary of parameters
  • pkg – Python package where to find the configuration file (optional)
  • calculators – Sequence of calculator names (optional) used to restrict the valid choices for calculation_mode
  • hc_id – Not None only when called from a post calculation
Returns:

An openquake.commonlib.oqvalidation.OqParam instance containing the validated and cast parameters/values parsed from the job.ini file, as well as a subdictionary ‘inputs’ containing absolute paths to all of the files referenced in the job.ini, keyed by the parameter name.
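
A typical usage sketch (the job.ini path is hypothetical):

oq = get_oqparam('/path/to/job.ini')  # parse and validate the parameters
print(oq.calculation_mode, sorted(oq.inputs))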

openquake.commonlib.readinput.get_params(job_inis)[source]

Parse one or more INI-style config files.

Parameters:job_inis – List of configuration files (or list containing a single zip archive)
Returns:A dictionary of parameters
openquake.commonlib.readinput.get_risk_model(oqparam, rmdict)[source]
Parameters:
openquake.commonlib.readinput.get_rupture(oqparam)[source]

Returns a hazardlib rupture by reading the rupture_model file.

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
openquake.commonlib.readinput.get_scenario_from_nrml(oqparam, fname)[source]
Parameters:
Returns:

a triple (sitecol, etags, gmf array)

openquake.commonlib.readinput.get_site_collection(oqparam, mesh=None, site_model_params=None)[source]

Returns a SiteCollection instance by looking at the points and the site model defined by the configuration parameters.

Parameters:
  • oqparam – an openquake.commonlib.oqvalidation.OqParam instance
  • mesh – a mesh of hazardlib points; if None the mesh is determined by invoking get_mesh
  • site_model_params – object with a method .get_closest returning the closest site model parameters
openquake.commonlib.readinput.get_site_model(oqparam)[source]

Convert the NRML file into an iterator over 6-tuples of the form (z1pt0, z2pt5, measured, vs30, lon, lat)

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
openquake.commonlib.readinput.get_sitecol_assets(oqparam, exposure)[source]
Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:two sequences of the same length: the site collection and an array with the assets per each site, collected by taxonomy
openquake.commonlib.readinput.get_source_model_lt(oqparam)[source]
Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:a openquake.commonlib.logictree.SourceModelLogicTree instance
openquake.commonlib.readinput.get_source_models(oqparam, gsim_lt, source_model_lt, in_memory=True)[source]

Build all the source models generated by the logic tree.

Parameters:
Returns:

an iterator over openquake.commonlib.source.SourceModel tuples

openquake.commonlib.readinput.possibly_gunzip(fname)[source]

A file can be gzipped to save space (this happens in the Debian package); in that case, gunzip it.

Parameters:fname – a file name (not zipped)
openquake.commonlib.readinput.sitecol_from_coords(oqparam, coords)[source]

Return a SiteCollection instance from an ordered set of coordinates

openquake.calculators.reportwriter module

Utilities to build a report writer generating a .rst report for a calculation

class openquake.calculators.reportwriter.ReportWriter(dstore)[source]

Bases: object

A particularly smart view over the datastore

add(name, obj=None)[source]

Add the view named name to the report text

make_report()[source]

Build the report and return a reStructuredText string

save(fname)[source]

Save the report

title = {'inputs': u'Input files', 'csm_info': u'Composite source model', 'exposure_info': u'Exposure model', 'times_by_source_class': u'Computation times by source typology', 'task_info': u'Information about the tasks', 'task_slowest': u'Slowest task', 'required_params_per_trt': u'Required parameters per tectonic region type', 'ruptures_per_trt': u'Number of ruptures per tectonic region type', 'short_source_info': u'Slowest sources', 'avglosses_data_transfer': u'Estimated data transfer for the avglosses', 'rlzs_assoc': u'Realizations per (TRT, GSIM)', 'job_info': u'Informational data', 'params': u'Parameters', 'ruptures_events': u'Specific information for event based', 'performance': u'Slowest operations', 'biggest_ebr_gmf': u'Maximum memory allocated for the GMFs'}
openquake.calculators.reportwriter.build_report(job_ini, output_dir=None)[source]

Write a report.rst file with information about the calculation without running it

Parameters:
  • job_ini – full pathname of the job.ini file
  • output_dir – the directory where the report is written (default the input directory)
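
A minimal usage sketch (paths are hypothetical):

from openquake.calculators.reportwriter import build_report
build_report('/path/to/job.ini', output_dir='/tmp')
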
openquake.calculators.reportwriter.count_eff_ruptures(sources, sitecol, gsims, monitor)[source]

Count the number of ruptures contained in the given sources and return a dictionary src_group_id -> num_ruptures. All sources belong to the same tectonic region type.

openquake.calculators.reportwriter.indent(text)[source]

openquake.commonlib.risk_writers module

Module containing writers for risk output artifacts.

class openquake.commonlib.risk_writers.AggregateLossCurveXMLWriter(dest, investigation_time, loss_type, source_model_tree_path=None, gsim_tree_path=None, statistics=None, quantile_value=None, unit=None, poe=None, risk_investigation_time=None)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or file-like objects for results to be saved to.
  • investigation_time (float) – Investigation time (also known as Time Span) defined in the calculation which produced these results (in years).
  • loss_type (str) – Loss type used in risk model input for the calculation producing this output (examples: structural, non-structural, business-interruption, occupants)
  • source_model_tree_path (str) – Id of the source model tree path (obtained by concatenating the IDs of the branches the path is made of) for which input hazard curves have been computed.
  • gsim_tree_path (str) – Id of the gsim (ground shaking intensity model) tree path (obtained by concatenating the IDs of the branches the path is made of) for which input hazard curves have been computed.
  • unit (str) – Attribute describing how the value of the assets has been measured.
  • statistics (str) – mean or quantile. When serializing loss curves produced from statistical hazard inputs, it describes the type of statistic used.
  • quantile_value (float) – When serializing loss curves produced from quantile hazard inputs, it describes the quantile value.
serialize(data)[source]

Serialize an aggregation loss curve.

Parameters:data

An object representing an aggregate loss curve. This object should:

  • define an attribute poes, which is a list of floats describing the probabilities of exceedance.
  • define an attribute losses, which is a list of floats describing the losses.
  • define an attribute average_loss, which is a float describing the average loss associated to the loss curve
  • define an attribute stddev_loss, which is a float describing the standard deviation of losses if the loss curve has been computed with an event based approach. Otherwise, it is None

Also, poes and losses must be indexed coherently, i.e. the loss at index zero is related to the probability of exceedance at the same index.
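
A minimal sketch of a suitable data object, using a namedtuple with exactly the attributes listed above (the file name and the values are hypothetical):

from collections import namedtuple

AggCurve = namedtuple('AggCurve', 'poes losses average_loss stddev_loss')
data = AggCurve(poes=[0.1, 0.01], losses=[1000.0, 10000.0],
                average_loss=500.0, stddev_loss=None)
writer = AggregateLossCurveXMLWriter(
    'aggregate_loss_curve.xml', investigation_time=50.0, loss_type='structural')
writer.serialize(data)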

class openquake.commonlib.risk_writers.BCRMapXMLWriter(path, interest_rate, asset_life_expectancy, loss_type, source_model_tree_path=None, gsim_tree_path=None, statistics=None, quantile_value=None, unit=None, loss_category=None, poe=None)[source]

Bases: object

Serializer for bcr (benefit cost ratio) maps produced with the classical and probabilistic calculators.

Parameters:
  • dest – File path (including filename) or file-like object for results to be saved to.
  • interest_rate (float) – The inflation discount rate.
  • asset_life_expectancy (float) – The period of time in which the building is expected to be used.
  • loss_type (str) – Loss type used in risk model input for the calculation producing this output (examples: structural, non-structural, business-interruption, occupants)
  • source_model_tree_path (str) – Id of the source model tree path (obtained by concatenating the IDs of the branches the path is made of) for which input hazard curves have been computed.
  • gsim_tree_path (str) – Id of the gsim (ground shaking intensity model) tree path (obtained by concatenating the IDs of the branches the path is made of) for which input hazard curves have been computed.
  • unit (str) – Attribute describing how the value of the assets has been measured.
  • loss_category (str) – Attribute describing the category (economic, population, buildings, etc..) of the losses producing this bcr map.
  • statistics (str) – mean or quantile. When serializing bcr values produced from statistical hazard inputs, it describes the type of statistic used.
  • quantile_value (float) – When serializing bcr values produced from quantile hazard inputs, it describes the quantile value.
serialize(data)[source]

Serialize a collection of (benefit cost) ratios.

Parameters:data

An iterable of bcr objects. Each object should:

  • define an attribute location, which is itself an object defining two attributes, x containing the longitude value and y containing the latitude value. Also, it must define an attribute wkt, which is the Well-known text representation of the location.
  • define an attribute asset_ref, which contains the unique identifier of the asset related to the (benefit cost) ratio.
  • define an attribute average_annual_loss_original, which is the expected average annual economic loss using the original vulnerability of the asset.
  • define an attribute average_annual_loss_retrofitted, which is the expected average annual economic loss using the improved (better design or retrofitted) vulnerability of the asset.
  • define an attribute bcr, which is the value of the ( benefit cost) ratio.
class openquake.commonlib.risk_writers.DamageWriter(damage_states)[source]

Bases: object

A class to convert scenario_damage outputs into nodes and then XML.

Parameters:damage_states – a sequence of DamageState objects with attributes .dmg_state and .lsi
asset_node(asset_ref, means, stddevs)[source]
Parameters:
  • asset_ref – asset reference string
  • means – array of means, one per damage state
  • stddevs – array of stddevs, one per damage state
Returns:

an asset node

cm_node(loc, asset_refs, means, stddevs)[source]
Parameters:
  • loc – a location object with attributes x and y
  • asset_refs – asset reference strings
  • means – array of means, one per asset
  • stddevs – array of stddevs, one per asset
Returns:

a CMNode node

collapse_map_node(data)[source]
Parameters:data – a sequence of records with attributes .exposure_data, .mean and .stddev
Returns:a collapseMap node
damage_nodes(means, stddevs)[source]
Parameters:
  • means – array of means, one per damage state
  • stddevs – array of stddevs, one per damage state
Returns:

a list of damage nodes

dd_node_taxo(taxonomy, means, stddevs)[source]
Parameters:
  • taxonomy – taxonomy string
  • means – array of means, one per damage state
  • stddevs – array of stddevs, one per damage state
Returns:

a DDNode node

dmg_dist_per_asset_node(data)[source]
Parameters:data – a sequence of records with attributes .exposure_data, .mean and .stddev
Returns:a dmgDistPerAsset node
dmg_dist_per_taxonomy_node(data)[source]
Parameters:data – a sequence of records with attributes .taxonomy, .mean and .stddev
Returns:a dmgDistPerTaxonomy node
dmg_dist_total_node(data)[source]
Parameters:data – a sequence of records with attributes .dmg_state, .mean and .stddev
Returns:a totalDmgDist node
point_node(loc)[source]
Parameters:loc – a location object with attributes x and y
Returns:a gml:Point node
to_nrml(key, data, fname=None, fmt='%.5E')[source]
Parameters:
  • key – dmg_dist_per_asset|dmg_dist_per_taxonomy|dmg_dist_total|collapse_map
  • data – sequence of rows to serialize
  • fname – the path name of the output file; if None, build a name
Returns:

path name of the saved file

class openquake.commonlib.risk_writers.DmgDistPerAsset(exposure_data, dmg_state, mean, stddev)

Bases: tuple

dmg_state

Alias for field number 1

exposure_data

Alias for field number 0

mean

Alias for field number 2

stddev

Alias for field number 3

class openquake.commonlib.risk_writers.DmgDistPerTaxonomy(taxonomy, dmg_state, mean, stddev)

Bases: tuple

dmg_state

Alias for field number 1

mean

Alias for field number 2

stddev

Alias for field number 3

taxonomy

Alias for field number 0

class openquake.commonlib.risk_writers.DmgDistTotal(dmg_state, mean, stddev)

Bases: tuple

dmg_state

Alias for field number 0

mean

Alias for field number 1

stddev

Alias for field number 2

class openquake.commonlib.risk_writers.DmgState(dmg_state, lsi)

Bases: tuple

dmg_state

Alias for field number 0

lsi

Alias for field number 1

class openquake.commonlib.risk_writers.ExposureData(asset_ref, site)

Bases: tuple

asset_ref

Alias for field number 0

site

Alias for field number 1

class openquake.commonlib.risk_writers.LossCurveXMLWriter(dest, investigation_time, loss_type, source_model_tree_path=None, gsim_tree_path=None, statistics=None, quantile_value=None, unit=None, insured=False, poe=None, risk_investigation_time=None)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or file-like object for results to be saved to.
  • investigation_time (float) – Investigation time (also known as Time Span) defined in the calculation which produced these results (in years).
  • loss_type (str) – Loss type used in risk model input for the calculation producing this output (examples: structural, non-structural, business-interruption, occupants)
  • source_model_tree_path (str) – Id of the source model tree path (obtained by concatenating the IDs of the branches the path is made of) for which input hazard curves have been computed.
  • gsim_tree_path (str) – Id of the gsim (ground shaking intensity model) tree path (obtained by concatenating the IDs of the branches the path is made of) for which input hazard curves have been computed.
  • statistics (str) – mean or quantile. When serializing loss curves produced from statistical hazard inputs, it describes the type of statistic used.
  • quantile_value (float) – When serializing loss curves produced from quantile hazard inputs, it describes the quantile value.
  • quantile_value – When serializing loss curves produced from quantile hazard inputs, it describes the quantile value.
  • unit (str) – Attribute describing how the value of the assets has been measured.
  • insured (bool) – True if it is an insured loss curve
serialize(data)[source]

Serialize a collection of loss curves.

Parameters:data

An iterable of loss curve objects. Each object should:

  • define an attribute location, which is itself an object defining two attributes, x containing the longitude value and y containing the latitude value.
  • define an attribute asset_ref, which contains the unique identifier of the asset related to the loss curve.
  • define an attribute poes, which is a list of floats describing the probabilities of exceedance.
  • define an attribute losses, which is a list of floats describing the losses.
  • define an attribute loss_ratios, which is a list of floats describing the loss ratios.
  • define an attribute average_loss, which is a float describing the average loss associated to the loss curve
  • define an attribute stddev_loss, which is a float describing the standard deviation of losses if the loss curve has been computed with an event based approach. Otherwise, it is None

All attributes must be defined, except for loss_ratios that can be None since it is optional in the schema.

Also, poes, losses and loss_ratios values must be indexed coherently, i.e.: the loss (and optionally loss ratio) at index zero is related to the probability of exceedance at the same index.

class openquake.commonlib.risk_writers.LossFractionsWriter(dest, variable, loss_unit, loss_type, loss_category, hazard_metadata, poe=None)[source]

Bases: object

Serializer for loss fractions produced with the classical and event based calculators.

Attr dest:
Full path including file name or file-like object where the results will be saved into.

Attr str variable:
The variable used for disaggregation.

Attr str loss_unit:
Attribute describing how the value of the assets has been measured.

Parameters:loss_type (str) – Loss type used in risk model input for the calculation producing this output (examples: structural, non-structural, business-interruption, occupants)

Attr str loss_category:
Attribute describing the category (economic, population, buildings, etc..) of the losses producing this loss map.

Attr object hazard_metadata:
Metadata of the hazard outputs used by the risk calculation. It has the attributes: investigation_time, source_model_tree_path, gsim_tree_path, statistics, quantile_value.

Attr float poe:
Probability of exceedance used to interpolate the losses producing this fraction map.

serialize(total_fractions, locations_fractions)[source]

Actually serialize the fractions.

Parameters:
  • total_fractions (dict) – maps a value of variable with a tuple representing the absolute losses and the fraction
  • locations_fractions (dict) – a dictionary mapping a tuple (longitude, latitude) to bins. Each bin is a dictionary with the same structure of total_fractions.
class openquake.commonlib.risk_writers.LossMapGeoJSONWriter(dest, investigation_time, poe, loss_type, source_model_tree_path=None, gsim_tree_path=None, statistics=None, quantile_value=None, unit=None, loss_category=None, risk_investigation_time=None)[source]

Bases: openquake.commonlib.risk_writers.LossMapWriter

GeoJSON implementation of a LossMapWriter. Serializes loss maps as FeatureCollection artifacts with additional loss map metadata.

See LossMapWriter for information about constructor parameters.

serialize(data)[source]

Serialize loss map data to a file as a GeoJSON feature collection.

See LossMapWriter.serialize() for expected input.

class openquake.commonlib.risk_writers.LossMapWriter(dest, investigation_time, poe, loss_type, source_model_tree_path=None, gsim_tree_path=None, statistics=None, quantile_value=None, unit=None, loss_category=None, risk_investigation_time=None)[source]

Bases: object

Base class for serializing loss maps produced with the classical and probabilistic calculators.

Subclasses must implement the serialize() method, which defines the format of the output.

Parameters:
  • dest – File path (including filename) or file-like object for results to be saved to.
  • investigation_time (float) – Investigation time (also known as Time Span) defined in the calculation which produced these results (in years).
  • poe (float) – Probability of exceedance used to interpolate the losses producing this loss map.
  • loss_type (str) – Loss type used in risk model input for the calculation producing this output (examples: structural, non-structural, business-interruption, occupants)
  • source_model_tree_path (str) – Id of the source model tree path (obtained by concatenating the IDs of the branches the path is made of) for which input hazard curves have been computed.
  • gsim_tree_path (str) – Id of the gsim (ground shaking intensity model) tree path (obtained by concatenating the IDs of the branches the path is made of) for which input hazard curves have been computed.
  • unit (str) – Attribute describing how the value of the assets has been measured.
  • loss_category (str) – Attribute describing the category (economic, population, buildings, etc..) of the losses producing this loss map.
  • statistics (str) – mean or quantile. When serializing loss curves produced from statistical hazard inputs, it describes the type of statistic used.
  • quantile_value (float) – When serializing loss curves produced from quantile hazard inputs, it describes the quantile value.
serialize(data)[source]

Serialize a collection of losses.

Parameters:data

An iterable of loss objects. Each object should:

  • define an attribute location, which is itself an object defining two attributes, x containing the longitude value and y containing the latitude value. Also, it must define an attribute wkt, which is the Well-known text representation of the location.
  • define an attribute asset_ref, which contains the unique identifier of the asset related to the loss curve.
  • define an attribute value, which is the value of the loss.
class openquake.commonlib.risk_writers.LossMapXMLWriter(dest, investigation_time, poe, loss_type, source_model_tree_path=None, gsim_tree_path=None, statistics=None, quantile_value=None, unit=None, loss_category=None, risk_investigation_time=None)[source]

Bases: openquake.commonlib.risk_writers.LossMapWriter

NRML/XML implementation of a LossMapWriter.

See LossMapWriter for information about constructor parameters.

serialize(data)[source]

Serialize loss map data to XML.

See LossMapWriter.serialize() for expected input.

class openquake.commonlib.risk_writers.Site(x, y)[source]

Bases: object

A small wrapper over a lon-lat pair (x, y). It has a .wkt attribute and an ordering. It is used for consistency with the export routines.

openquake.commonlib.risk_writers.notnan(value)[source]

True if the value is not numpy.nan

openquake.commonlib.risk_writers.validate_hazard_metadata(gsim_tree_path=None, source_model_tree_path=None, statistics=None, quantile_value=None)[source]

Validate the hazard input metadata.

openquake.commonlib.riskmodels module

Reading risk models for risk calculators

openquake.commonlib.riskmodels.build_vf_node(vf)[source]

Convert a VulnerabilityFunction object into a Node suitable for XML conversion.

openquake.commonlib.riskmodels.filter_vset(elem)[source]
openquake.commonlib.riskmodels.get_risk_files(inputs)[source]
Parameters:inputs – a dictionary key -> path name
Returns:a pair (file_type, {cost_type: path})
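
For instance, given an OqParam instance oq (see get_oqparam above), a hedged usage sketch is:

file_type, paths_by_cost_type = get_risk_files(oq.inputs)
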
openquake.commonlib.riskmodels.get_risk_models(oqparam, kind=None)[source]
Parameters:
  • oqparam – an OqParam instance
  • kind – vulnerability|vulnerability_retrofitted|fragility|consequence; if None it is extracted from the oqparam.file_type attribute
Returns:

a dictionary taxonomy -> loss_type -> function

openquake.commonlib.sap module

openquake.commonlib.source module

class openquake.commonlib.source.CompositeSourceModel(gsim_lt, source_model_lt, source_models, set_weight=False)[source]

Bases: _abcoll.Sequence

Parameters:
filter(ss_filter)[source]

Generate a new CompositeSourceModel by filtering the sources on the given site collection.

Parameters:
  • sitecol – a SiteCollection instance
  • ss_filter – a SourceSitesFilter instance
get_maxweight(concurrent_tasks)[source]

Return an appropriate maxweight for use in the block_splitter

get_model(sm_id)[source]

Extract a CompositeSourceModel instance containing the single model of index sm_id.

get_num_sources()[source]
Returns:the total number of sources in the model
get_sources(kind='all', maxweight=None)[source]

Extract the sources contained in the source models by optionally filtering and splitting them, depending on the passed parameters.

init_serials()[source]

Generate unique seeds for each rupture with numpy.arange. This should be called only in event based calculators

set_weights()[source]

Update the attributes .weight and src.num_ruptures for each TRT model, as well as the .weight of the CompositeSourceModel.

src_groups

Yields the SourceGroups inside each source model.

class openquake.commonlib.source.CompositionInfo(gsim_lt, seed, num_samples, source_models, tot_weight)[source]

Bases: object

An object to collect information about the composition of a composite source model.

Parameters:
  • source_model_lt – a SourceModelLogicTree object
  • source_models – a list of SourceModel instances
classmethod fake(gsimlt=None)[source]
Returns:a fake CompositionInfo instance with the given gsim logic tree object; if None, builds automatically a fake gsim logic tree
get_num_rlzs(source_model=None)[source]
Parameters:source_model – a SourceModel instance (or None)
Returns:the number of realizations per source model (or all)
get_rlzs_assoc(count_ruptures=None)[source]

Return a RlzsAssoc with fields realizations, gsim_by_trt, rlz_idx and trt_gsims.

Parameters:count_ruptures – a function src_group -> num_ruptures
get_sm_by_grp()[source]
Returns:a dictionary grp_id -> sm_id
get_sm_by_rlz(realizations)[source]
Returns:a dictionary rlz -> source model name
get_source_model(src_group_id)[source]

Return the source model for the given src_group_id

get_trt(src_group_id)[source]

Return the TRT string for the given src_group_id

class openquake.commonlib.source.LtRealization(ordinal, sm_lt_path, gsim_rlz, weight, sampleid)[source]

Bases: object

Composite realization build on top of a source model realization and a GSIM realization.

gsim_lt_path
uid

A unique identifier for effective realizations

class openquake.commonlib.source.RlzsAssoc(csm_info)[source]

Bases: _abcoll.Mapping

Realization association class. It should not be instantiated directly, but only via the method :meth: openquake.commonlib.source.CompositeSourceModel.get_rlzs_assoc.

Attr realizations:
 list of LtRealization objects
Attr gsim_by_trt:
 list of dictionaries {trt: gsim}
Attr rlzs_assoc:
 dictionary {src_group_id, gsim: rlzs}
Attr rlzs_by_smodel:
 list of lists of realizations

For instance, for the non-trivial logic tree in openquake.qa_tests_data.classical.case_15, which has 4 tectonic region types and 4 + 2 + 2 realizations, there are the following associations:

(0, 'BooreAtkinson2008()') ['#0-SM1-BA2008_C2003', '#1-SM1-BA2008_T2002']
(0, 'CampbellBozorgnia2008()') ['#2-SM1-CB2008_C2003', '#3-SM1-CB2008_T2002']
(1, 'Campbell2003()') ['#0-SM1-BA2008_C2003', '#2-SM1-CB2008_C2003']
(1, 'ToroEtAl2002()') ['#1-SM1-BA2008_T2002', '#3-SM1-CB2008_T2002']
(2, 'BooreAtkinson2008()') ['#4-SM2_a3pt2b0pt8-BA2008']
(2, 'CampbellBozorgnia2008()') ['#5-SM2_a3pt2b0pt8-CB2008']
(3, 'BooreAtkinson2008()') ['#6-SM2_a3b1-BA2008']
(3, 'CampbellBozorgnia2008()') ['#7-SM2_a3b1-CB2008']

extract(rlz_indices, csm_info)[source]

Extract a RlzsAssoc instance containing only the given realizations.

Parameters:rlz_indices – a list of realization indices from 0 to R - 1
get_rlz(rlzstr)[source]

Get a Realization instance for a string of the form 'rlz-\d+'

get_rlzs_by_grp_id()[source]

Returns a dictionary grp_id -> [sorted rlzs]

realizations

Flat list with all the realizations

class openquake.commonlib.source.SourceInfo(src, calc_time=0, num_split=0)[source]

Bases: object

dt = dtype([('grp_id', '<u4'), ('source_id', 'S100'), ('source_class', 'S30'), ('num_ruptures', '<u4'), ('calc_time', '<f4'), ('num_sites', '<u4'), ('num_split', '<u4')])
class openquake.commonlib.source.SourceModel(name, weight, path, src_groups, num_gsim_paths, ordinal, samples)[source]

Bases: object

A container of SourceGroup instances with some additional attributes describing the source model in the logic tree.

get_skeleton()[source]

Return an empty copy of the source model, i.e. without sources, but with the proper attributes for each SourceGroup contained within.

num_sources
class openquake.commonlib.source.SourceModelParser(converter)[source]

Bases: object

A source model parser featuring a cache.

Parameters:converteropenquake.commonlib.source.SourceConverter instance
parse_groups(fname)[source]

Parse all the groups and return them ordered by number of sources. It does not count the ruptures, so it is relatively fast.

Parameters:fname – the full pathname of the source model file
parse_src_groups(fname, apply_uncertainties=None)[source]
Parameters:
  • fname – the full pathname of the source model file
  • apply_uncertainties – a function modifying the sources (or None)
openquake.commonlib.source.capitalize(words)[source]

Capitalize words separated by spaces.

>>> capitalize('active shallow crust')
'Active Shallow Crust'
openquake.commonlib.source.collect_source_model_paths(smlt)[source]

Given a path to a source model logic tree or a file-like, collect all of the soft-linked path names to the source models it contains and return them as a uniquified list (no duplicates).

Parameters:smlt – source model logic tree file

openquake.commonlib.sourceconverter module

class openquake.commonlib.sourceconverter.RuptureConverter(rupture_mesh_spacing, complex_fault_mesh_spacing=None)[source]

Bases: object

Convert ruptures from nodes into Hazardlib ruptures.

convert_complexFaultRupture(node, mag, rake, hypocenter)[source]

Convert a complexFaultRupture node.

Parameters:
  • node – the rupture node
  • mag – the rupture magnitude
  • rake – the rupture rake angle
  • hypocenter – the rupture hypocenter
convert_multiPlanesRupture(node, mag, rake, hypocenter)[source]

Convert a multiPlanesRupture node.

Parameters:
  • node – the rupture node
  • mag – the rupture magnitude
  • rake – the rupture rake angle
  • hypocenter – the rupture hypocenter
convert_node(node)[source]

Convert the given rupture node into a hazardlib rupture, depending on the node tag.

Parameters:node – a node representing a rupture
convert_simpleFaultRupture(node, mag, rake, hypocenter)[source]

Convert a simpleFaultRupture node.

Parameters:
  • node – the rupture node
  • mag – the rupture magnitude
  • rake – the rupture rake angle
  • hypocenter – the rupture hypocenter
convert_singlePlaneRupture(node, mag, rake, hypocenter)[source]

Convert a singlePlaneRupture node.

Parameters:
  • node – the rupture node
  • mag – the rupture magnitude
  • rake – the rupture rake angle
  • hypocenter – the rupture hypocenter
convert_surfaces(surface_nodes)[source]

Utility to convert a list of surface nodes into a single hazardlib surface. There are three possibilities:

  1. there is a single simpleFaultGeometry node; returns a openquake.hazardlib.geo.simpleFaultSurface instance
  2. there is a single complexFaultGeometry node; returns a openquake.hazardlib.geo.complexFaultSurface instance
  3. there is a list of PlanarSurface nodes; returns a openquake.hazardlib.geo.MultiSurface instance
Parameters:surface_nodes – surface nodes as just described
fname = None
geo_line(edge)[source]

Utility function to convert a node of kind edge into a openquake.hazardlib.geo.Line instance.

Parameters:edge – a node describing an edge
geo_lines(edges)[source]

Utility function to convert a list of edges into a list of openquake.hazardlib.geo.Line instances.

Parameters:edge – a node describing an edge
geo_planar(surface)[source]

Utility to convert a PlanarSurface node with subnodes topLeft, topRight, bottomLeft, bottomRight into a openquake.hazardlib.geo.PlanarSurface instance.

Parameters:surface – PlanarSurface node
class openquake.commonlib.sourceconverter.SourceConverter(investigation_time, rupture_mesh_spacing, complex_fault_mesh_spacing=None, width_of_mfd_bin=1.0, area_source_discretization=None)[source]

Bases: openquake.commonlib.sourceconverter.RuptureConverter

Convert sources from valid nodes into Hazardlib objects.

convert_areaSource(node)[source]

Convert the given node into an area source object.

Parameters:node – a node with tag areaGeometry
Returns:a openquake.hazardlib.source.AreaSource instance
convert_characteristicFaultSource(node)[source]

Convert the given node into a characteristic fault object.

Parameters:node – a node with tag areaGeometry
Returns:a openquake.hazardlib.source.CharacteristicFaultSource instance
convert_complexFaultSource(node)[source]

Convert the given node into a complex fault object.

Parameters:node – a node with tag areaGeometry
Returns:a openquake.hazardlib.source.ComplexFaultSource instance
convert_hpdist(node)[source]

Convert the given node into a probability mass function for the hypo depth distribution.

Parameters:node – a hypoDepthDist node
Returns:a openquake.hazardlib.pmf.PMF instance
convert_mfdist(node)[source]

Convert the given node into a Magnitude-Frequency Distribution object.

Parameters:node – a node of kind incrementalMFD or truncGutenbergRichterMFD
Returns:an openquake.hazardlib.mfd.EvenlyDiscretizedMFD or openquake.hazardlib.mfd.TruncatedGRMFD instance
convert_node(node)[source]

Convert the given node into a hazardlib source, depending on the node tag.

Parameters:node – a node representing a source
convert_nonParametricSeismicSource(node)[source]

Convert the given node into a non parametric source object.

Parameters:node – a node with tag areaGeometry
Returns:a openquake.hazardlib.source.NonParametricSeismicSource instance
convert_npdist(node)[source]

Convert the given node into a Nodal Plane Distribution.

Parameters:node – a nodalPlaneDist node
Returns:a openquake.hazardlib.geo.NodalPlane instance
convert_pointSource(node)[source]

Convert the given node into a point source object.

Parameters:node – a node with tag pointGeometry
Returns:a openquake.hazardlib.source.PointSource instance
convert_simpleFaultSource(node)[source]

Convert the given node into a simple fault object.

Parameters:node – a node with tag areaGeometry
Returns:a openquake.hazardlib.source.SimpleFaultSource instance
convert_sourceGroup(node)[source]

Convert the given node into a SourceGroup object.

Parameters:node – a node with tag sourceGroup
Returns:a openquake.commonlib.source.SourceGroup instance
class openquake.commonlib.sourceconverter.SourceGroup(trt, sources=None, min_mag=None, max_mag=None, id=0, eff_ruptures=-1)[source]

Bases: _abcoll.Sequence

A container for the following parameters:

Parameters:
  • trt (str) – the tectonic region type all the sources belong to
  • sources (list) – a list of hazardlib source objects
  • min_mag – the minimum magnitude among the given sources
  • max_mag – the maximum magnitude among the given sources
  • id – an optional numeric ID (default None) useful to associate the model to a database object
  • eff_ruptures – the number of ruptures contained in the group; if -1, the number is unknown and has to be computed by using get_set_num_ruptures
classmethod collect(sources)[source]
Parameters:sources – dictionaries with a key ‘tectonicRegion’
Returns:an ordered list of SourceGroup instances
tot_ruptures()[source]
update(src)[source]

Update the attributes sources, min_mag, max_mag according to the given source.

Parameters:src – an instance of :class: openquake.hazardlib.source.base.BaseSeismicSource
openquake.commonlib.sourceconverter.area_to_point_sources(area_src)[source]

Split an area source into a generator of point sources.

MFDs will be rescaled appropriately for the number of points in the area mesh.

Parameters:area_srcopenquake.hazardlib.source.AreaSource
openquake.commonlib.sourceconverter.get_set_num_ruptures(src)[source]

Extract the number of ruptures and set it

openquake.commonlib.sourceconverter.parse_ses_ruptures(fname)[source]

Convert a stochasticEventSetCollection file into a set of SES, each one containing ruptures with an etag and a seed.

openquake.commonlib.sourceconverter.split_coords_2d(seq)[source]
Parameters:seq – a flat list with lons and lats
Returns:a validated list of pairs (lon, lat)
>>> split_coords_2d([1.1, 2.1, 2.2, 2.3])
[(1.1, 2.1), (2.2, 2.3)]
openquake.commonlib.sourceconverter.split_coords_3d(seq)[source]
Parameters:seq – a flat list with lons, lats and depths
Returns:a validated list of (lon, lat, depths) triplets
>>> split_coords_3d([1.1, 2.1, 0.1, 2.3, 2.4, 0.1])
[(1.1, 2.1, 0.1), (2.3, 2.4, 0.1)]
openquake.commonlib.sourceconverter.split_fault_source(src)[source]

Generator splitting a fault source into several fault sources.

Parameters:src – an instance of openquake.hazardlib.source.base.SeismicSource
openquake.commonlib.sourceconverter.split_fault_source_by_magnitude(src)[source]

Utility splitting a fault source into fault sources with a single magnitude bin.

Parameters:src – an instance of openquake.hazardlib.source.base.SeismicSource
openquake.commonlib.sourceconverter.split_source(src)[source]

Split an area source into point sources and a fault source into smaller fault sources.

Parameters:src – an instance of openquake.hazardlib.source.base.SeismicSource

openquake.commonlib.sourcewriter module

Source model XML Writer

openquake.commonlib.sourcewriter.build_arbitrary_mfd(mfd)[source]

Parses the arbitrary MFD as a Node.

Parameters:mfd – MFD as instance of openquake.hazardlib.mfd.arbitrary.ArbitraryMFD
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_area_source_geometry(area_source)[source]

Returns the area source geometry as a Node.

Parameters:area_source – Area source model as an instance of openquake.hazardlib.source.area.AreaSource
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_area_source_node(area_source)[source]

Parses an area source to a Node class.

Parameters:area_source – Area source as instance of openquake.hazardlib.source.area.AreaSource
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_characteristic_fault_source_node(source)[source]
openquake.commonlib.sourcewriter.build_complex_fault_geometry(fault_source)[source]

Returns the complex fault source geometry as a Node.

Parameters:fault_source – Complex fault source model as an instance of openquake.hazardlib.source.complex_fault.ComplexFaultSource
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_complex_fault_source_node(fault_source)[source]

Parses a complex fault source to a Node class.

Parameters:fault_source – Complex fault source as instance of openquake.hazardlib.source.complex_fault.ComplexFaultSource
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_evenly_discretised_mfd(mfd)[source]

Returns the evenly discretized MFD as a Node.

Parameters:mfd – MFD as instance of openquake.hazardlib.mfd.evenly_discretized.EvenlyDiscretizedMFD
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_hypo_depth_dist(hdd)[source]

Returns the hypocentral depth distribution as a Node instance.

Parameters:hdd – Hypocentral depth distribution as an instance of openquake.hazardlib.pmf.PMF
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_hypo_list_node(hypo_list)[source]
Parameters:hypo_list – an array of shape (N, 3) with columns (alongStrike, downDip, weight)
Returns:a hypoList node containing N hypo nodes
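
A minimal sketch of the expected input, assuming a plain (N, 3) float array is accepted as described above; the values are made up for illustration:

import numpy
from openquake.commonlib.sourcewriter import build_hypo_list_node

# two hypocentres, each given as (alongStrike, downDip, weight); the weights sum to 1
hypo_list = numpy.array([[0.25, 0.25, 0.5],
                         [0.75, 0.75, 0.5]])
hypo_node = build_hypo_list_node(hypo_list)
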
openquake.commonlib.sourcewriter.build_linestring_node(line, with_depth=False)[source]

Parses a line to a Node class.

Parameters:
  • line – Line as instance of openquake.hazardlib.geo.line.Line
  • with_depth (bool) – Include the depth values (True) or not (False)
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_nodal_plane_dist(npd)[source]

Returns the nodal plane distribution as a Node instance.

Parameters:npd – Nodal plane distribution as instance of openquake.hazardlib.pmf.PMF
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_nonparametric_source_node(source)[source]
openquake.commonlib.sourcewriter.build_point_source_geometry(point_source)[source]

Returns the point source geometry as a Node.

Parameters:point_source – Point source model as an instance of openquake.hazardlib.source.point.PointSource
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_point_source_node(point_source)[source]

Parses a point source to a Node class.

Parameters:point_source – Point source as instance of openquake.hazardlib.source.point.PointSource
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_rupture_node(rupt, probs_occur)[source]
Parameters:
  • rupt – a hazardlib rupture object
  • probs_occur – a list of floats with sum 1
openquake.commonlib.sourcewriter.build_simple_fault_geometry(fault_source)[source]

Returns the simple fault source geometry as a Node.

Parameters:fault_source – Simple fault source model as an instance of openquake.hazardlib.source.simple_fault.SimpleFaultSource
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_simple_fault_source_node(fault_source)[source]

Parses a simple fault source to a Node class.

Parameters:fault_source – Simple fault source as instance of openquake.hazardlib.source.simple_fault.SimpleFaultSource
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_slip_list_node(slip_list)[source]
Parameters:slip_list – an array of shape (N, 2) with columns (slip, weight)
Returns:a slipList node containing N slip nodes
openquake.commonlib.sourcewriter.build_source_group_node(source_group)[source]

Parses a SourceGroup to a Node class.

Parameters:source_group – Instance of openquake.commonlib.source.SourceGroup
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_truncated_gr_mfd(mfd)[source]

Parses the truncated Gutenberg-Richter MFD as a Node.

Parameters:mfd – MFD as instance of openquake.hazardlib.mfd.truncated_gr.TruncatedGRMFD
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.build_youngs_coppersmith_mfd(mfd)[source]

Parses the Youngs & Coppersmith MFD as a node. Note that the MFD does not hold the total moment rate, but only the characteristic rate; therefore the node is written in its characteristic-rate form regardless of whether it was originally created from the total moment rate.

Parameters:mfd – MFD as instance of openquake.hazardlib.mfd.youngs_coppersmith_1985.YoungsCoppersmith1985MFD
Returns:Instance of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.get_distributed_seismicity_source_nodes(source)[source]

Returns a list of nodes for the attributes common to all distributed-seismicity source classes.

Parameters:source – Seismic source as instance of openquake.hazardlib.source.area.AreaSource or openquake.hazardlib.source.point.PointSource
Returns:List of instances of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.get_fault_source_nodes(source)[source]

Returns a list of nodes for the attributes common to all fault source classes.

Parameters:source – Fault source as instance of openquake.hazardlib.source.simple_fault.SimpleFaultSource or openquake.hazardlib.source.complex_fault.ComplexFaultSource
Returns:List of instances of openquake.commonlib.node.Node
openquake.commonlib.sourcewriter.get_source_attributes(source)[source]

Retrieves a dictionary of source attributes from the source class.

Parameters:source – Seismic source as instance of openquake.hazardlib.source.base.BaseSeismicSource
Returns:Dictionary of source attributes
openquake.commonlib.sourcewriter.write_source_model(dest, groups, name=None)[source]

Writes a source model to XML.

Parameters:
  • dest (str) – Destination path
  • groups (list) – Source model as list of SourceGroups
  • name (str) – Name of the source model (if missing, extracted from the filename)

openquake.commonlib.util module

class openquake.commonlib.util.Rupture(etag, indices=None)[source]

Bases: object

Simplified Rupture class with attributes etag, indices, ses_idx, used in export.
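
A minimal sketch of the intended usage, assuming ses_idx is derived from the etag at instantiation time; the etag below follows the format used by the doctests later in this module:

from openquake.commonlib.util import Rupture

rup = Rupture('trt=00~ses=0007~src=1-3~rup=018-01')  # indices defaults to None
# rup.etag, rup.indices and rup.ses_idx are then available to the exporters
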

openquake.commonlib.util.compose_arrays(a1, a2, firstfield='etag')[source]

Compose composite arrays by generating an extended datatype containing all the fields. The two arrays must have the same length.
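
To make the idea of an "extended datatype" concrete, here is a plain numpy sketch of what the composition amounts to; it does not call compose_arrays itself and the field names are made up:

import numpy

a1 = numpy.array([(b'e1',), (b'e2',)], dtype=[('etag', 'S10')])
a2 = numpy.array([(0.1, 0.2), (0.3, 0.4)],
                 dtype=[('PGA', float), ('PGV', float)])

# build a composite array whose dtype contains all the fields of a1 and a2
composed = numpy.zeros(len(a1), dtype=a1.dtype.descr + a2.dtype.descr)
for name in a1.dtype.names:
    composed[name] = a1[name]
for name in a2.dtype.names:
    composed[name] = a2[name]
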

openquake.commonlib.util.get_assets(dstore)[source]
Parameters:dstore – a datastore with a key ‘assetcol’
Returns:an ordered array of records (asset_ref, taxonomy, lon, lat)
openquake.commonlib.util.get_serial(etag)[source]
>>> print(get_serial("trt=00~ses=0007~src=1-3~rup=018-01"))
018
openquake.commonlib.util.get_ses_idx(etag)[source]
>>> get_ses_idx("trt=00~ses=0007~src=1-3~rup=018-01")
7
openquake.commonlib.util.max_rel_diff(curve_ref, curve, min_value=0.01)[source]

Compute the maximum relative difference between two curves. Only values greater than or equal to min_value are considered.

>>> curve_ref = [0.01, 0.02, 0.03, 0.05, 1.0]
>>> curve = [0.011, 0.021, 0.031, 0.051, 1.0]
>>> round(max_rel_diff(curve_ref, curve), 2)
0.1
openquake.commonlib.util.max_rel_diff_index(curve_ref, curve, min_value=0.01)[source]

Compute the maximum relative difference between two sets of curves. Only values greater than or equal to min_value are considered. Return both the maximum difference and its location (array index).

>>> curve_refs = [[0.01, 0.02, 0.03, 0.05], [0.01, 0.02, 0.04, 0.06]]
>>> curves = [[0.011, 0.021, 0.031, 0.051], [0.012, 0.022, 0.032, 0.051]]
>>> max_rel_diff_index(curve_refs, curves)
(0.2, 1)
openquake.commonlib.util.rmsep(array_ref, array, min_value=0.01)[source]

Root Mean Square Error Percentage for two arrays.

Parameters:
  • array_ref – reference array
  • array – another array
  • min_value – compare only the elements larger than min_value
Returns:

the relative distance between the arrays

>>> curve_ref = numpy.array([[0.01, 0.02, 0.03, 0.05],
... [0.01, 0.02, 0.04, 0.06]])
>>> curve = numpy.array([[0.011, 0.021, 0.031, 0.051],
... [0.012, 0.022, 0.032, 0.051]])
>>> str(round(rmsep(curve_ref, curve), 5))
'0.11292'

openquake.commonlib.writers module

class openquake.commonlib.writers.CsvWriter(sep=', ', fmt='%12.8E')[source]

Bases: object

Class used in the exporters to save a bunch of CSV files

getsaved()[source]

Returns the list of files saved by this CsvWriter

save(data, fname, header=None)[source]

Save data to fname.

Parameters:
  • data – numpy array or list of lists
  • fname – path name
  • header – header to use
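
A minimal usage sketch, assuming save writes the array to the given path using the configured separator and format; the path, data and header below are illustrative:

import os
import tempfile
import numpy
from openquake.commonlib.writers import CsvWriter

writer = CsvWriter(fmt='%9.3E')
data = numpy.array([[0.1, 0.2], [0.3, 0.4]])
fname = os.path.join(tempfile.gettempdir(), 'csvwriter_example.csv')
writer.save(data, fname, header=['col_a', 'col_b'])
print(writer.getsaved())  # the list of files written so far
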
class openquake.commonlib.writers.HeaderTranslator(*regexps)[source]

Bases: object

A utility to convert the headers of CSV files. When reading, column names are converted into column descriptions with the method .read; when writing, column descriptions are converted into column names with the method .write. The usage is:

>>> htranslator = HeaderTranslator(
...     '(asset_ref):\|S100',
...     '(rup_id):uint32',
...     '(taxonomy):object')
>>> htranslator.write('asset_ref:|S100 value:5'.split())
['asset_ref', 'value:5']
>>> htranslator.read('asset_ref value:5'.split())
['asset_ref:|S100', 'value:5']
read(names)[source]

Convert names into descriptions

write(descrs)[source]

Convert descriptions into names

class openquake.commonlib.writers.StreamingXMLWriter(bytestream, indent=4, encoding='utf-8', nsmap=None)[source]

Bases: object

A binary stream XML writer. The typical usage is something like this:

with StreamingXMLWriter(output_file) as writer:
    writer.start_tag('root')
    for node in nodegenerator():
        writer.serialize(node)
    writer.end_tag('root')
emptyElement(name, attrs)[source]

Add an empty element (may have attributes)

end_tag(name)[source]

Close an XML tag

serialize(node)[source]

Serialize a node object (typically an ElementTree object)

shorten(tag)[source]

Get the short representation of a fully qualified tag

Parameters:tag (str) – a (fully qualified or not) XML tag
start_tag(name, attrs=None)[source]

Open an XML tag

openquake.commonlib.writers.build_header(dtype)[source]

Convert a numpy nested dtype into a list of strings suitable as the header of a CSV file.

>>> imt_dt = numpy.dtype([('PGA', float, 3), ('PGV', float, 4)])
>>> build_header(imt_dt)
['PGA:3', 'PGV:4']
>>> gmf_dt = numpy.dtype([('A', imt_dt), ('B', imt_dt),
...                       ('idx', numpy.uint32)])
>>> build_header(gmf_dt)
['A~PGA:3', 'A~PGV:4', 'B~PGA:3', 'B~PGV:4', 'idx:uint32']
openquake.commonlib.writers.castable_to_int(s)[source]

Return True if the string s can be interpreted as an integer

openquake.commonlib.writers.extract_from(data, fields)[source]

Extract data from numpy arrays with nested records.

>>> imt_dt = numpy.dtype([('PGA', float, 3), ('PGV', float, 4)])
>>> a = numpy.array([([1, 2, 3], [4, 5, 6, 7])], imt_dt)
>>> extract_from(a, ['PGA'])
array([[ 1.,  2.,  3.]])
>>> gmf_dt = numpy.dtype([('A', imt_dt), ('B', imt_dt),
...                       ('idx', numpy.uint32)])
>>> b = numpy.array([(([1, 2, 3], [4, 5, 6, 7]),
...                  ([1, 2, 4], [3, 5, 6, 7]), 8)], gmf_dt)
>>> extract_from(b, ['idx'])
array([8], dtype=uint32)
>>> extract_from(b, ['B', 'PGV'])
array([[ 3.,  5.,  6.,  7.]])
openquake.commonlib.writers.floatformat(*args, **kwds)[source]

Context manager to change the default format string for the function openquake.commonlib.writers.scientificformat().

Parameters:fmt_string – the format to use; for instance ‘%13.9E’
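
A minimal sketch of the intended usage, assuming the context manager temporarily replaces the default fmt of scientificformat and restores it on exit:

from openquake.commonlib.writers import floatformat, scientificformat

with floatformat('%.2E'):
    print(scientificformat([0.1234, 0.5678]))  # formatted with '%.2E'
# outside the block the default '%13.9E' is back in use
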
openquake.commonlib.writers.parse_header(header)[source]

Convert a list of the form [‘fieldname:fieldtype:fieldsize’,...] into a numpy composite dtype. The parser understands headers generated by openquake.commonlib.writers.build_header(). Here is an example:

>>> parse_header(['PGA:float32', 'PGV', 'avg:float32:2'])
(['PGA', 'PGV', 'avg'], dtype([('PGA', '<f4'), ('PGV', '<f8'), ('avg', '<f4', (2,))]))
Parameters:header – a list of type descriptions
Returns:column names and the corresponding composite dtype
openquake.commonlib.writers.read_array(fname, sep=', ')[source]

Convert a CSV file without header into a numpy array of floats.

>>> from openquake.baselib.general import writetmp
>>> print(read_array(writetmp('.1 .2, .3 .4, .5 .6\n')))
[[[ 0.1  0.2]
  [ 0.3  0.4]
  [ 0.5  0.6]]]
openquake.commonlib.writers.read_composite_array(fname, sep=', ')[source]

Convert a CSV file with header into a numpy array of records.

>>> from openquake.baselib.general import writetmp
>>> fname = writetmp('PGA:3,PGV:2,avg:1\n'
...                  '.1 .2 .3,.4 .5,.6\n')
>>> print(read_composite_array(fname))  # array of shape (1,)
[([0.1, 0.2, 0.3], [0.4, 0.5], [0.6])]
openquake.commonlib.writers.scientificformat(value, fmt='%13.9E', sep=' ', sep2=':')[source]
Parameters:
  • value – the value to convert into a string
  • fmt – the formatting string to use for float values
  • sep – separator to use for vector-like values
  • sep2 – second separator to use for matrix-like values

Convert a float or an array into a string by using scientific notation with a fixed precision (by default 9 digits after the decimal point). For instance:

>>> scientificformat(-0E0)
'0.000000000E+00'
>>> scientificformat(-0.004)
'-4.000000000E-03'
>>> scientificformat([0.004])
'4.000000000E-03'
>>> scientificformat([0.01, 0.02], '%10.6E')
'1.000000E-02 2.000000E-02'
>>> scientificformat([[0.1, 0.2], [0.3, 0.4]], '%4.1E')
'1.0E-01:2.0E-01 3.0E-01:4.0E-01'
openquake.commonlib.writers.tostring(node, indent=4, nsmap=None)[source]

Convert a node into an XML string by using the StreamingXMLWriter. This is useful for testing purposes.

Parameters:
  • node – a node object (typically an ElementTree object)
  • indent – the indentation to use in the XML (default 4 spaces)
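
A minimal sketch with a standard ElementTree element, which the docstring above indicates is the typical input; the element itself is made up:

from xml.etree import ElementTree as etree
from openquake.commonlib.writers import tostring

root = etree.Element('root')
etree.SubElement(root, 'child').text = 'hello'
print(tostring(root, indent=2))
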
openquake.commonlib.writers.write_csv(dest, data, sep=', ', fmt='%.6E', header=None, comment=None)[source]
Parameters:
  • dest – destination filename or io.StringIO instance
  • data – array to save
  • sep – separator to use (default comma)
  • fmt – formatting string (default ‘%.6E’)
  • header – optional list with the names of the columns to display
  • comment – optional first line starting with a # character
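
A minimal sketch, assuming that for a composite array the header can be derived from the dtype (as suggested by build_header above); the path and field names are illustrative:

import os
import tempfile
import numpy
from openquake.commonlib.writers import write_csv

data = numpy.array([(1, 0.10), (2, 0.25)],
                   dtype=[('rup_id', numpy.uint32), ('value', float)])
dest = os.path.join(tempfile.gettempdir(), 'write_csv_example.csv')
write_csv(dest, data)  # header taken from the composite dtype (assumed)
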

Module contents

exception openquake.commonlib.InvalidFile[source]

Bases: exceptions.Exception