openquake.commonlib package

openquake.commonlib.calc module

class openquake.commonlib.calc.PmapGetter(dstore, fromworker=False)[source]

Bases: object

Read hazard curves from the datastore for all realizations or for a specific realization.

Parameters:
  • dstore – a DataStore instance
  • fromworker – if True, read directly from the datastore
combine_pmaps(pmap_by_grp)[source]
Parameters:pmap_by_grp – dictionary group string -> probability map
Returns:a list of probability maps, one per realization
get(sids, rlzi)[source]
Parameters:
  • sids – an array of S site IDs
  • rlzi – a realization index
Returns:

the hazard curves for the given realization

get_pmap_by_grp(sids=None)[source]
Parameters:sids – an array of site IDs
Returns:a dictionary of probability maps by source group
get_pmaps(sids)[source]
Parameters:sids – an array of S site IDs
Returns:a list of R probability maps
items(kind='')[source]

Extract probability maps from the datastore, possibly generating on the fly the ones corresponding to the individual realizations. Yields pairs (tag, pmap).

Parameters:kind – the kind of PoEs to extract; if not given, returns the realization if there is only one or the statistics otherwise.
new(sids)[source]
Parameters:sids – an array of S site IDs
Returns:a new instance of the getter, with the cache populated
class openquake.commonlib.calc.RuptureData(trt, gsims)[source]

Bases: object

Container for information about the ruptures of a given tectonic region type.

to_array(ebruptures)[source]

Convert a list of ebruptures into an array of dtype RuptureData.dt

class openquake.commonlib.calc.RuptureSerializer(datastore)[source]

Bases: object

Serialize event based ruptures to an HDF5 file. Populate the datasets ruptures and sids.

close()[source]

Flush the ruptures and the site IDs to the datastore

classmethod get_array_nbytes(ebruptures)[source]

Convert a list of EBRuptures into a numpy composite array

pmfs_dt = dtype([('serial', '<u4'), ('pmf', 'O')])
rupture_dt = dtype([('serial', '<u4'), ('code', 'u1'), ('sidx', '<u4'), ('eidx1', '<u4'), ('eidx2', '<u4'), ('pmfx', '<i4'), ('seed', '<u4'), ('mag', '<f4'), ('rake', '<f4'), ('occurrence_rate', '<f4'), ('hypo', [('lon', '<f4'), ('lat', '<f4'), ('depth', '<f4')]), ('sx', '<u2'), ('sy', 'u1'), ('sz', 'u1'), ('points', 'O')])
save(ebruptures, eidx)[source]

Populate a dictionary of site ID tuples and save the ruptures.

Parameters:
  • ebruptures – a list of EBRupture objects to save
  • eidx – the last event index saved
openquake.commonlib.calc.check_overflow(calc)[source]
Parameters:calc – an event based calculator

Raise a ValueError if the number of sites is larger than 65,536 or the number of IMTs is larger than 256 or the number of ruptures is larger than 4,294,967,296. The limits are due to the numpy dtype used to store the GMFs (gmv_dt). They could be relaxed in the future.
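The quoted limits are exactly the ranges of unsigned 16-, 8- and 32-bit integers. A minimal sketch of an equivalent check (hypothetical: the real function reads the counts from the calculator, here they are passed in a plain dictionary):

```python
import numpy as np

# the limits follow from the unsigned integer dtypes used to index
# sites (uint16), IMTs (uint8) and ruptures (uint32)
LIMITS = {'sites': np.iinfo(np.uint16).max + 1,     # 65,536
          'imts': np.iinfo(np.uint8).max + 1,       # 256
          'ruptures': np.iinfo(np.uint32).max + 1}  # 4,294,967,296

def check_overflow(counts):
    # counts: dict with keys 'sites', 'imts', 'ruptures'
    for name, limit in LIMITS.items():
        if counts[name] > limit:
            raise ValueError('Too many %s: %d > %d'
                             % (name, counts[name], limit))
```

A calculation with 70,000 sites would raise, since 70,000 exceeds the uint16 range.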

openquake.commonlib.calc.compute_hazard_maps(curves, imls, poes)[source]

Given a set of hazard curve PoEs, interpolate a hazard map at the specified PoE(s).

Parameters:
  • curves – 2D array of floats. Each row represents a curve, where the values in the row are the PoEs (Probabilities of Exceedance) corresponding to imls. Each curve corresponds to a geographical location.
  • imls – Intensity Measure Levels associated with these hazard curves. Type should be an array-like of floats.
  • poes – Value(s) on which to interpolate a hazard map from the input curves. Can be an array-like or scalar value (for a single PoE).
Returns:

An array of shape N x P, where N is the number of curves and P the number of poes.
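The interpolation can be sketched with plain numpy; this is a simplified, hypothetical reimplementation for illustration, assuming each curve's PoEs decrease monotonically as the IMLs increase:

```python
import numpy as np

def hazard_maps(curves, imls, poes):
    """Interpolate IMLs at the given PoEs, one row per curve (site)."""
    curves = np.asarray(curves)
    imls = np.asarray(imls, float)
    poes = np.atleast_1d(poes)
    out = np.zeros((len(curves), len(poes)))
    for i, curve in enumerate(curves):
        # np.interp needs increasing x values, so reverse the
        # monotonically decreasing PoEs (and the IMLs with them)
        out[i] = np.interp(poes, curve[::-1], imls[::-1])
    return out  # shape (N, P)
```

For instance, a curve with PoEs [0.8, 0.5, 0.1] at IMLs [0.01, 0.1, 1.0] interpolated at poe=0.5 yields an IML of 0.1.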

openquake.commonlib.calc.fix_minimum_intensity(min_iml, imts)[source]
Parameters:
  • min_iml – a dictionary, possibly with a ‘default’ key
  • imts – an ordered list of IMTs
Returns:

a numpy array of intensities, one per IMT

Make sure the dictionary minimum_intensity (provided by the user in the job.ini file) is filled for all intensity measure types and has no key named ‘default’. Here is how it works:

>>> min_iml = {'PGA': 0.1, 'default': 0.05}
>>> fix_minimum_intensity(min_iml, ['PGA', 'PGV'])
array([ 0.1 ,  0.05], dtype=float32)
>>> sorted(min_iml.items())
[('PGA', 0.1), ('PGV', 0.05)]
openquake.commonlib.calc.get_gmfs(dstore, precalc=None)[source]
Parameters:
  • dstore – a datastore
  • precalc – a scenario calculator with attribute .gmfa
Returns:

a dictionary grp_id, gsid -> gmfa

openquake.commonlib.calc.get_imts_periods(imtls)[source]

Returns a list of IMT strings and a list of periods. There is an element for each IMT of type Spectral Acceleration, including PGA which is considered an alias for SA(0.0). The lists are sorted by period.

Parameters:imtls – a set of intensity measure type strings
Returns:a list of IMT strings and a list of periods
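A hedged sketch of the selection logic, assuming IMT strings look like 'PGA', 'PGV' or 'SA(0.1)' (the helper name and regex are illustrative, not the library's code):

```python
import re

def imts_periods(imtls):
    # keep only PGA (treated as an alias for SA(0.0)) and SA(...)
    # IMTs, sorted by period
    pairs = []
    for imt in imtls:
        if imt == 'PGA':
            pairs.append((0.0, 'PGA'))
        else:
            mo = re.match(r'^SA\(([\d.]+)\)$', imt)
            if mo:
                pairs.append((float(mo.group(1)), imt))
    pairs.sort()
    return [imt for _p, imt in pairs], [p for p, _imt in pairs]
```

Non-spectral IMTs such as PGV are simply dropped from the result.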
openquake.commonlib.calc.get_ruptures(dstore, grp_id)[source]

Extracts the ruptures of the given grp_id

openquake.commonlib.calc.make_hmap(pmap, imtls, poes)[source]

Compute the hazard maps associated to the passed probability map.

Parameters:
  • pmap – hazard curves in the form of a ProbabilityMap
  • imtls – DictArray of intensity measure types and levels
  • poes – P PoEs where to compute the maps
Returns:

a ProbabilityMap with size (N, I * P, 1)

openquake.commonlib.calc.make_uhs(pmap, imtls, poes, nsites)[source]

Make Uniform Hazard Spectra curves for each location.

It is assumed that the lons and lats for each of the maps are uniform.

Parameters:
  • pmap – a probability map of hazard curves
  • imtls – a dictionary of intensity measure types and levels
  • poes – a sequence of PoEs for the underlying hazard maps
Returns:

a composite array containing nsites uniform hazard maps

openquake.commonlib.config module

Various utility functions concerned with configuration.

openquake.commonlib.config.OQ_CONFIG_FILE_VAR = 'OQ_CONFIG_FILE'

Environment variable name for specifying a custom openquake.cfg. The file name doesn’t matter.

openquake.commonlib.config.abort_if_no_config_available()[source]

Call sys.exit() if no openquake configuration file is readable.

openquake.commonlib.config.flag_set(section, setting)[source]

True if the given boolean setting is enabled in openquake.cfg

Parameters:
  • section (string) – name of the configuration file section
  • setting (string) – name of the configuration file setting
Returns:

True if the setting is enabled in openquake.cfg, False otherwise

openquake.commonlib.config.get(section, key)[source]

The configuration value for the given section and key or None.

openquake.commonlib.config.get_section(section)[source]

A dictionary of key/value pairs for the given section or None.

openquake.commonlib.datastore module

class openquake.commonlib.datastore.DataStore(calc_id=None, datadir='/var/lib/jenkins/oqdata', params=(), mode=None)[source]

Bases: _abcoll.MutableMapping

DataStore class to store the inputs/outputs of a calculation on the filesystem.

Here is a minimal example of usage:

>>> ds = DataStore()
>>> ds['example'] = 'hello world'
>>> print(ds['example'])
hello world
>>> ds.clear()

When reading the items, the DataStore will return a generator. The items will be ordered lexicographically according to their name.

There is a serialization protocol to store objects in the datastore. An object is serializable if it has a method __toh5__ returning an array and a dictionary, and a method __fromh5__ taking an array and a dictionary and populating the object. For an example of use see openquake.hazardlib.site.SiteCollection.
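A minimal sketch of an object honouring this protocol (the Mesh class here is hypothetical, used only to illustrate the __toh5__/__fromh5__ round trip; it is not the library's SiteCollection):

```python
import numpy as np

class Mesh(object):
    """Hypothetical object following the __toh5__/__fromh5__ protocol."""
    def __init__(self, lons, lats):
        self.lons, self.lats = lons, lats

    def __toh5__(self):
        # return an array and a dictionary of attributes
        return np.column_stack([self.lons, self.lats]), {'kind': 'mesh'}

    def __fromh5__(self, array, attrs):
        # repopulate the object from the array and the attributes
        assert attrs['kind'] == 'mesh'
        self.lons, self.lats = array[:, 0], array[:, 1]

# round-trip without touching the filesystem
m = Mesh(np.array([1.0, 2.0]), np.array([3.0, 4.0]))
arr, attrs = m.__toh5__()
m2 = Mesh.__new__(Mesh)
m2.__fromh5__(arr, attrs)
```

Storing such an object in the datastore would call __toh5__ on write and __fromh5__ on read.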

build_fname(prefix, postfix, fmt, export_dir=None)[source]

Build a file name from a realization, by using prefix and extension.

Parameters:
  • prefix – the prefix to use
  • postfix – the postfix to use (can be a realization object)
  • fmt – the extension (‘csv’, ‘xml’, etc)
  • export_dir – export directory (if None use .export_dir)
Returns:

relative pathname including the extension

clear()[source]

Remove the datastore from the file system

close()[source]

Close the underlying hdf5 file

create_dset(key, dtype, shape=(None, ), compression=None, fillvalue=0, attrs=None)[source]

Create a one-dimensional HDF5 dataset.

Parameters:
  • key – name of the dataset
  • dtype – dtype of the dataset (usually composite)
  • shape – shape of the dataset, possibly extendable
  • compression – the kind of HDF5 compression to use
  • attrs – dictionary of attributes of the dataset
Returns:

a HDF5 dataset

export_csv(key)[source]

Generic csv exporter

export_dir

Return the underlying export directory

export_path(relname, export_dir=None)[source]

Return the path of the exported file by adding the export_dir in front and the calculation ID at the end.

Parameters:
  • relname – relative file name
  • export_dir – export directory (if None use .export_dir)
extend(key, array, **attrs)[source]

Extend the dataset associated to the given key; create it if needed

Parameters:
  • key – name of the dataset
  • array – array to store
  • attrs – a dictionary of attributes
flush()[source]

Flush the underlying hdf5 file

get(key, default)[source]
Returns:the value associated to the datastore key, or the default
get_attr(key, name, default=None)[source]
Parameters:
  • key – dataset path
  • name – name of the attribute
  • default – value to return if the attribute is missing
getitem(name)[source]

Return a dataset by using h5py.File.__getitem__

getsize(key=None)[source]

Return the size in bytes of the output associated to the given key. If no key is given, returns the total size of all files.

open()[source]

Open the underlying .hdf5 file and the parent, if any

save(key, kw)[source]

Update the object associated to key with the kw dictionary; works for LiteralAttrs objects and automatically flushes.

set_attrs(key, **kw)[source]

Set the HDF5 attributes of the given key

set_nbytes(key, nbytes=None)[source]

Set the nbytes attribute on the HDF5 object identified by key.

openquake.commonlib.datastore.extract_calc_id_datadir(hdf5path, datadir='/var/lib/jenkins/oqdata')[source]

Extract the calculation ID and the datadir from the given hdf5path:

>>> extract_calc_id_datadir('/mnt/ssd/oqdata/calc_25.hdf5')
(25, '/mnt/ssd/oqdata')
>>> extract_calc_id_datadir('/mnt/ssd/oqdata/wrong_name.hdf5')
Traceback (most recent call last):
   ...
ValueError: Cannot extract calc_id from /mnt/ssd/oqdata/wrong_name.hdf5
openquake.commonlib.datastore.get_calc_ids(datadir='/var/lib/jenkins/oqdata')[source]

Extract the available calculation IDs from the datadir, in order.

openquake.commonlib.datastore.get_last_calc_id(datadir='/var/lib/jenkins/oqdata')[source]

Extract the latest calculation ID from the given directory. If none is found, return 0.

openquake.commonlib.datastore.hdf5new(datadir='/var/lib/jenkins/oqdata')[source]

Return a new hdf5.File instance with a name determined by the last calculation in the datadir (plus one). Set the .path attribute to the generated filename.

openquake.commonlib.datastore.persistent_attribute(key)[source]

Persistent attributes are persisted to the datastore and cached. Modifications to mutable objects are not automagically persisted. If you have a huge object that does not fit in memory use the datastore directly (for instance, open a HDF5 file to create an empty array, then populate it). Notice that you can use any dict-like data structure in place of the datastore, provided you can set attributes on it. Here is an example:

>>> class Datastore(dict):
...     "A fake datastore"
>>> class Store(object):
...     a = persistent_attribute('a')
...     def __init__(self, a):
...         self.datastore = Datastore()
...         self.a = a  # this assignment will store the attribute
>>> store = Store([1])
>>> store.a  # this retrieves the attribute
[1]
>>> store.a.append(2)
>>> store.a = store.a  # remember to store the modified attribute!
Parameters:key – the name of the attribute to be made persistent
Returns:a property to be added to a class with a .datastore attribute
openquake.commonlib.datastore.read(calc_id, mode='r', datadir='/var/lib/jenkins/oqdata')[source]
Parameters:
  • calc_id – calculation ID or hdf5path
  • mode – ‘r’ or ‘w’
  • datadir – the directory where to look
Returns:

the corresponding DataStore instance

Read the datastore, if it exists and it is accessible.

openquake.commonlib.hazard_writers module

Classes for serializing various NRML XML artifacts.

class openquake.commonlib.hazard_writers.BaseCurveWriter(dest, **metadata)[source]

Bases: object

Base class for curve writers.

Parameters:
  • dest – File path (including filename) or file-like object for results to be saved to.
  • metadata

    The following keyword args are required:

    • investigation_time: Investigation time (in years) defined in the calculation which produced these results.

    The following are more or less optional (combinational rules noted below where applicable):

    • statistics: ‘mean’ or ‘quantile’
    • quantile_value: Only required if statistics = ‘quantile’.
    • smlt_path: String representing the logic tree path which produced these curves. Only required for non-statistical curves.
    • gsimlt_path: String representing the GSIM logic tree path which produced these curves. Only required for non-statistical curves.
serialize(_data)[source]

Implement in subclasses.

class openquake.commonlib.hazard_writers.DisaggXMLWriter(dest, **metadata)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or file-like object for XML results to be saved to.
  • metadata

    The following keyword args are required:

    • investigation_time: Investigation time (in years) defined in the calculation which produced these results.
    • imt: Intensity measure type used to compute these matrices.
    • lon, lat: Longitude and latitude associated with these results.

    The following attributes define dimension context for the result matrices:

    • mag_bin_edges: List of magnitude bin edges (floats)
    • dist_bin_edges: List of distance bin edges (floats)
    • lon_bin_edges: List of longitude bin edges (floats)
    • lat_bin_edges: List of latitude bin edges (floats)
    • eps_bin_edges: List of epsilon bin edges (floats)
    • tectonic_region_types: List of tectonic region types (strings)
    • smlt_path: String representing the logic tree path which produced these results. Only required for non-statistical results.
    • gsimlt_path: String representing the GSIM logic tree path which produced these results. Only required for non-statistical results.

    The following are optional, depending on the imt:

    • sa_period: Only used with imt = ‘SA’.
    • sa_damping: Only used with imt = ‘SA’.
BIN_EDGE_ATTR_MAP = OrderedDict([('mag_bin_edges', 'magBinEdges'), ('dist_bin_edges', 'distBinEdges'), ('lon_bin_edges', 'lonBinEdges'), ('lat_bin_edges', 'latBinEdges'), ('eps_bin_edges', 'epsBinEdges'), ('tectonic_region_types', 'tectonicRegionTypes')])

Maps metadata keywords to XML attribute names for bin edge information passed to the constructor. The dict here is an OrderedDict so as to give consistent ordering of result attributes.

DIM_LABEL_TO_BIN_EDGE_MAP = {'Dist': 'dist_bin_edges', 'Lon': 'lon_bin_edges', 'Eps': 'eps_bin_edges', 'Mag': 'mag_bin_edges', 'Lat': 'lat_bin_edges', 'TRT': 'tectonic_region_types'}
serialize(data)[source]
Parameters:data

A sequence of data where each datum has the following attributes:

  • matrix: N-dimensional numpy array containing the disaggregation histogram.
  • dim_labels: A list of strings which label the dimensions of a given histogram. For example, for a Magnitude-Distance-Epsilon histogram, we would expect dim_labels to be ['Mag', 'Dist', 'Eps'].
  • poe: The disaggregation Probability of Exceedance level for which these results were produced.
  • iml: Intensity measure level, interpolated from the source hazard curve at the given poe.
class openquake.commonlib.hazard_writers.EventBasedGMFXMLWriter(dest, sm_lt_path, gsim_lt_path)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or a file-like object for XML results to be saved to.
  • sm_lt_path (str) – Source model logic tree branch identifier of the logic tree realization which produced this collection of ground motion fields.
  • gsim_lt_path – GSIM logic tree branch identifier of the logic tree realization which produced this collection of ground motion fields.
serialize(data, fmt='%10.7E')[source]

Serialize a collection of ground motion fields to XML.

Parameters:data

An iterable of “GMF set” objects. Each “GMF set” object should:

  • have an investigation_time attribute
  • have a stochastic_event_set_id attribute
  • be iterable, yielding a sequence of “GMF” objects

Each “GMF” object should:

  • have an imt attribute
  • have an sa_period attribute (only if imt is ‘SA’)
  • have an sa_damping attribute (only if imt is ‘SA’)
  • have a rupture_id attribute (to indicate which rupture contributed to this gmf)
  • be iterable, yielding a sequence of “GMF node” objects

Each “GMF node” object should have:

  • a gmv attribute (to indicate the ground motion value)
  • lon and lat attributes (to indicate the geographical location of the ground motion field)
class openquake.commonlib.hazard_writers.HazardCurveGeoJSONWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.BaseCurveWriter

Writes hazard curves to GeoJSON. Has the same constructor and interface as HazardCurveXMLWriter.

serialize(data)[source]

Write the hazard curves to the given destination as GeoJSON. The GeoJSON format is customized to contain various bits of metadata.

See HazardCurveXMLWriter.serialize() for expected input.

class openquake.commonlib.hazard_writers.HazardCurveXMLWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.BaseCurveWriter

Hazard Curve XML writer. See BaseCurveWriter for a list of general constructor inputs.

The following additional metadata params are required:
  • imt: Intensity measure type used to compute these hazard curves.
  • imls: Intensity measure levels, which represent the x-axis values of each curve.
The following parameters are optional:
  • sa_period: Only used with imt = ‘SA’.
  • sa_damping: Only used with imt = ‘SA’.
add_hazard_curves(root, metadata, data)[source]

Add hazard curves stored into data as child of the root element with metadata. See the documentation of the method serialize and the constructor for a description of data and metadata, respectively.

serialize(data)[source]

Write a sequence of hazard curves to the specified file.

Parameters:data

Iterable of hazard curve data. Each datum must be an object with the following attributes:

  • poes: A list of probability of exceedance values (floats).
  • location: An object representing the location of the curve; must have x and y to represent lon and lat, respectively.
class openquake.commonlib.hazard_writers.HazardMapGeoJSONWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.HazardMapWriter

GeoJSON implementation of a HazardMapWriter. Serializes hazard maps as FeatureCollection artifacts with additional hazard map metadata.

See HazardMapWriter for information about constructor parameters.

serialize(data)[source]

Serialize hazard map data to GeoJSON.

See HazardMapWriter.serialize() for details about the expected input.

class openquake.commonlib.hazard_writers.HazardMapWriter(dest, **metadata)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or a file-like object for results to be saved to.
  • metadata

    The following keyword args are required:

    • investigation_time: Investigation time (in years) defined in the calculation which produced these results.
    • imt: Intensity measure type used to compute these hazard curves.
    • poe: The Probability of Exceedance level for which this hazard map was produced.

    The following are more or less optional (combinational rules noted below where applicable):

    • statistics: ‘mean’ or ‘quantile’
    • quantile_value: Only required if statistics = ‘quantile’.
    • smlt_path: String representing the logic tree path which produced these curves. Only required for non-statistical curves.
    • gsimlt_path: String representing the GSIM logic tree path which produced these curves. Only required for non-statistical curves.
    • sa_period: Only used with imt = ‘SA’.
    • sa_damping: Only used with imt = ‘SA’.
serialize(data)[source]

Write a sequence of hazard map data to the specified file.

Parameters:data – Iterable of hazard map data. Each datum should be a triple of (lon, lat, iml) values.
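As an illustration of the serialize contract described above, here is a hypothetical CSV writer with the same interface (not part of the library; the class name and the use of a file-like dest are assumptions):

```python
import csv
import io

class HazardMapCSVWriter(object):
    """Hypothetical writer honouring the HazardMapWriter.serialize contract."""
    def __init__(self, dest, **metadata):
        self.dest = dest  # a file-like object here, for simplicity
        self.metadata = metadata

    def serialize(self, data):
        # data: iterable of (lon, lat, iml) triples
        w = csv.writer(self.dest)
        w.writerow(['lon', 'lat', 'iml'])
        for lon, lat, iml in data:
            w.writerow([lon, lat, iml])

buf = io.StringIO()
HazardMapCSVWriter(buf, investigation_time=50).serialize(
    [(10.0, 45.0, 0.2), (10.1, 45.0, 0.25)])
```

The XML and GeoJSON writers in this module consume the same (lon, lat, iml) triples, only the output format differs.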
class openquake.commonlib.hazard_writers.HazardMapXMLWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.HazardMapWriter

NRML/XML implementation of a HazardMapWriter.

See HazardMapWriter for information about constructor parameters.

serialize(data)[source]

Serialize hazard map data to XML.

See HazardMapWriter.serialize() for details about the expected input.

class openquake.commonlib.hazard_writers.SESXMLWriter(dest)[source]

Bases: object

Parameters:
  • dest – File path (including filename) or a file-like object for XML results to be saved to.
  • sm_lt_path (str) – Source model logic tree branch identifier of the logic tree realization which produced this collection of stochastic event sets.
  • gsim_lt_path – GSIM logic tree branch identifier of the logic tree realization which produced this collection of stochastic event sets.
serialize(data, investigation_time)[source]

Serialize a collection of stochastic event sets to XML.

Parameters:
  • data

    A dictionary src_group_id -> list of openquake.commonlib.calc.Rupture objects. Each Rupture should have the following attributes:

    • rupid
    • events_by_ses
    • magnitude
    • strike
    • dip
    • rake
    • tectonic_region_type
    • is_from_fault_source (a bool)
    • is_multi_surface (a bool)
    • lons
    • lats
    • depths

    If is_from_fault_source is True, the rupture originated from a simple or complex fault source. In this case, lons, lats, and depths should all be 2D arrays (of uniform shape). These coordinate triples represent nodes of the rupture mesh.

    If is_from_fault_source is False, the rupture originated from a point or area source. In this case, the rupture is represented by a quadrilateral planar surface. This planar surface is defined by 3D vertices. In this case, the rupture should have the following attributes:

    • top_left_corner
    • top_right_corner
    • bottom_right_corner
    • bottom_left_corner

    Each of these should be a triple of lon, lat, depth.

    If is_multi_surface is True, the rupture originated from a multi-surface source. In this case, lons, lats, and depths should have uniform length. The length should be a multiple of 4, where each segment of 4 represents the corner points of a planar surface in the following order:

    • top left
    • top right
    • bottom left
    • bottom right

    Each of these should be a triple of lon, lat, depth.

  • investigation_time – Investigation time parameter specified in the job.ini
class openquake.commonlib.hazard_writers.UHSXMLWriter(dest, **metadata)[source]

Bases: openquake.commonlib.hazard_writers.BaseCurveWriter

UHS curve XML writer. See BaseCurveWriter for a list of general constructor inputs.

The following additional metadata params are required:
  • poe: Probability of exceedance for which a given set of UHS have been computed
  • periods: A list of SA (Spectral Acceleration) period values, sorted in ascending order

serialize(data)[source]

Write a sequence of uniform hazard spectra to the specified file.

Parameters:data

Iterable of UHS data. Each datum must be an object with the following attributes:

  • imls: A sequence of Intensity Measure Levels
  • location: An object representing the location of the curve; must have x and y to represent lon and lat, respectively.
openquake.commonlib.hazard_writers.gen_gmfs(gmf_set)[source]

Generate GMF nodes from a gmf_set.

Parameters:gmf_set – a sequence of GMF objects with attributes imt, sa_period, sa_damping, rupture_id, each containing a list of GMF nodes with attributes gmv and location. The nodes are sorted by lon/lat.

openquake.commonlib.hazard_writers.rupture_to_element(rup, parent)[source]

Convert a rupture object into an Element object.

Parameters:
  • rup – must have attributes .rupid, .events_by_ses and .seed
  • parent – parent of the returned element
openquake.commonlib.hazard_writers.sub_elems(elem, rup, *names)[source]

openquake.commonlib.logictree module

Logic tree parser, verifier and processor. See specs at https://blueprints.launchpad.net/openquake-old/+spec/openquake-logic-tree-module

A logic tree object must be iterable, yielding realizations, i.e. objects with attributes value, weight, lt_path and ordinal.

class openquake.commonlib.logictree.Branch(branch_id, weight, value)[source]

Bases: object

Branch object, represents a <logicTreeBranch /> element.

Parameters:
  • branch_id – Value of @branchID attribute.
  • weight – Decimal value of weight assigned to the branch. A text node contents of <uncertaintyWeight /> child node.
  • value – The actual uncertainty parameter value. A text node contents of <uncertaintyModel /> child node. Type depends on the branchset’s uncertainty type.
class openquake.commonlib.logictree.BranchSet(uncertainty_type, filters)[source]

Bases: object

Branchset object, represents a <logicTreeBranchSet /> element.

Parameters:
  • uncertainty_type

    String value. According to the spec one of:

    gmpeModel
    Branches contain references to different GMPEs. Values are parsed as strings and are supposed to be one of the supported GMPEs. See list at GMPELogicTree.
    sourceModel
    Branches contain references to different PSHA source models. Values are treated as file names, relative to the base path.
    maxMagGRRelative
    Different values to add to Gutenberg-Richter (“GR”) maximum magnitude. Value should be interpretable as float.
    bGRRelative
    Values to add to GR “b” value. Parsed as float.
    maxMagGRAbsolute
    Values to replace GR maximum magnitude. Values expected to be lists of floats separated by space, one float for each GR MFD in a target source in order of appearance.
    abGRAbsolute
    Values to replace “a” and “b” values of GR MFD. Lists of pairs of floats, one pair for one GR MFD in a target source.
    incrementalMFDAbsolute
    Replaces an evenly discretized MFD with the values provided
    simpleFaultDipRelative
    Increases or decreases the angle of fault dip from that given in the original source model
    simpleFaultDipAbsolute
    Replaces the fault dip in the specified source(s)
    simpleFaultGeometryAbsolute
    Replaces the simple fault geometry (trace, upper seismogenic depth, lower seismogenic depth and dip) of a given source with the values provided
    complexFaultGeometryAbsolute
    Replaces the complex fault geometry edges of a given source with the values provided
    characteristicFaultGeometryAbsolute
    Replaces the characteristic fault geometry surface of a given source with the values provided
  • filters

    Dictionary, a set of filters to specify which sources should the uncertainty be applied to. Represented as branchset element’s attributes in xml:

    applyToSources
    The uncertainty should be applied only to specific sources. This filter is required for absolute uncertainties (also only one source can be used for those). Value should be the list of source ids. Can be used only in source model logic tree.
    applyToSourceType
    Can be used in the source model logic tree definition. Allows specifying the source type (area, point, simple fault, complex fault) to which the uncertainty applies.
    applyToTectonicRegionType
    Can be used in both the source model and GMPE logic trees. Allows specifying the tectonic region type (Active Shallow Crust, Stable Shallow Crust, etc.) to which the uncertainty applies. This filter is required for all branchsets in the GMPE logic tree.
apply_uncertainty(value, source)[source]

Apply this branchset’s uncertainty with value value to source source, if it passes filters.

This method is not called for uncertainties of types “gmpeModel” and “sourceModel”.

Parameters:
  • value – The actual uncertainty value of sampled branch. Type depends on uncertainty type.
  • source – The opensha source data object.
Returns:

None, all changes are applied to MFD in place. Therefore all sources have to be reinstantiated after processing is done in order to sample the tree once again.

enumerate_paths()[source]

Generate all possible paths starting from this branch set.

Returns:Generator of two-item tuples. Each tuple contains weight of the path (calculated as a product of the weights of all path’s branches) and list of path’s Branch objects. Total sum of all paths’ weights is 1.0
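The weight of a path is the product of the weights of its branches, so the weights of all paths sum to 1.0. A self-contained sketch of the enumeration, under the assumption that each branchset is given as a plain list of (branch_id, weight) pairs:

```python
from itertools import product

def enumerate_paths(branchsets):
    # branchsets: list of branchsets, each a list of (branch_id, weight)
    # pairs whose weights sum to 1
    for combo in product(*branchsets):
        weight = 1.0
        for _branch_id, w in combo:
            weight *= w  # path weight = product of branch weights
        yield weight, [branch_id for branch_id, _w in combo]

paths = list(enumerate_paths([[('b1', 0.6), ('b2', 0.4)],
                              [('c1', 0.5), ('c2', 0.5)]]))
```

Two branchsets with two branches each yield four paths; their weights (0.3, 0.3, 0.2, 0.2 here) sum to 1.0.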
filter_source(source)[source]

Apply filters to source and return True if uncertainty should be applied to it.
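A hedged sketch of how the filters described in the constructor could be checked against a source object (the attribute names source_id, source_type and tectonic_region_type are assumptions, not the library's exact API):

```python
def filter_source(filters, source):
    """Return True if the uncertainty applies to the given source."""
    if ('applyToSources' in filters
            and source.source_id not in filters['applyToSources']):
        return False
    if ('applyToSourceType' in filters
            and source.source_type != filters['applyToSourceType']):
        return False
    if ('applyToTectonicRegionType' in filters
            and source.tectonic_region_type
                != filters['applyToTectonicRegionType']):
        return False
    return True  # no filter excluded the source
```

An empty filter dictionary matches every source; each filter present further restricts the match.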

get_branch_by_id(branch_id)[source]

Return Branch object belonging to this branch set with id equal to branch_id.

class openquake.commonlib.logictree.BranchTuple(bset, id, uncertainty, weight, effective)

Bases: tuple

bset

Alias for field number 0

effective

Alias for field number 4

id

Alias for field number 1

uncertainty

Alias for field number 2

weight

Alias for field number 3

class openquake.commonlib.logictree.GsimLogicTree(fname, tectonic_region_types=['*'], ltnode=None)[source]

Bases: object

A GsimLogicTree instance is an iterable yielding Realization tuples with attributes value, weight and lt_path, where value is a dictionary {trt: gsim}, weight is a number in the interval 0..1 and lt_path is a tuple with the branch ids of the given realization.

Parameters:
  • fname (str) – full path of the gsim_logic_tree file
  • tectonic_region_types – a sequence of distinct tectonic region types
  • ltnode – usually None, but it can also be a openquake.hazardlib.nrml.Node object describing the GSIM logic tree XML file, to avoid reparsing it
check_imts(imts)[source]

Make sure the IMTs are recognized by all GSIMs in the logic tree

classmethod from_(gsim)[source]

Generate a trivial GsimLogicTree from a single GSIM instance.

get_gsim_by_trt(rlz, trt)[source]
Parameters:rlz – a logictree Realization
Param:a tectonic region type string
Returns:the GSIM string associated to the given realization
get_num_branches()[source]

Return the number of effective branches for each branchset id, as a dictionary.

get_num_paths()[source]

Return the effective number of paths in the tree.

reduce(trts)[source]

Reduce the GsimLogicTree.

Parameters:trts – a subset of tectonic region types
Returns:a reduced GsimLogicTree instance
exception openquake.commonlib.logictree.InvalidLogicTree[source]

Bases: exceptions.Exception

exception openquake.commonlib.logictree.LogicTreeError(filename, message)[source]

Bases: exceptions.Exception

Base class for errors of loading, parsing and validation of logic trees.

Parameters:
  • filename – The name of the file which contains an error.
  • message – The error message.
openquake.commonlib.logictree.MAX_SINT_32 = 2147483647

Maximum value for a seed number

openquake.commonlib.logictree.MIN_SINT_32 = -2147483648

Minimum value for a seed number

class openquake.commonlib.logictree.Realization(value, weight, lt_path, ordinal, lt_uid)

Bases: tuple

lt_path

Alias for field number 2

lt_uid

Alias for field number 4

ordinal

Alias for field number 3

uid

value

Alias for field number 0

weight

Alias for field number 1

class openquake.commonlib.logictree.SourceModel(name, weight, path, src_groups, num_gsim_paths, ordinal, samples)[source]

Bases: object

A container of SourceGroup instances with some additional attributes describing the source model in the logic tree.

get_skeleton()[source]

Return an empty copy of the source model, i.e. without sources, but with the proper attributes for each SourceGroup contained within.

num_sources
class openquake.commonlib.logictree.SourceModelLogicTree(filename, validate=True, seed=0, num_samples=0)[source]

Bases: object

Source model logic tree parser.

Parameters:
  • filename – Full pathname of logic tree file
  • validate – Boolean indicating whether or not the tree should be validated while parsed. This should be set to True on initial load of the logic tree (before importing it to the database) and to False on workers side (when loaded from the database).
Raises:

ValidationError – If logic tree file has a logic error, which can not be prevented by xml schema rules (like referencing sources with missing id).

FILTERS = ('applyToTectonicRegionType', 'applyToSources', 'applyToSourceType')
SOURCE_TYPES = ('point', 'area', 'complexFault', 'simpleFault', 'characteristicFault')
apply_branchset(branchset_node, branchset)[source]

See superclass’ method for description and signature specification.

Parses the branchset node’s attribute @applyToBranches to apply the following branchsets to the preceding branches selectively. A branching level can have more than one branchset exactly for this reason: different branchsets can apply to different open ends.

Checks that the branchset is applied only to branches of the previous branching level that do not have a child branchset yet.

collect_source_model_data(source_model)[source]

Parse the source model file and collect information about the source ids, source types and tectonic region types available in it. That information is then used for validate_filters() and validate_uncertainty_value().

gen_source_models(gsim_lt)[source]

Yield empty SourceModel instances (one per effective realization)

make_apply_uncertainties(branch_ids)[source]

Parse the path through the source model logic tree and return “apply uncertainties” function.

Parameters:branch_ids – List of string identifiers of branches, representing the path through source model logic tree.
Returns:Function to be applied to all the sources as they get read from the database and converted to hazardlib representation. Function takes one argument, that is the hazardlib source object, and applies uncertainties to it in-place.
parse_branches(branchset_node, branchset, validate)[source]

Create and attach branches at branchset_node to branchset.

Parameters:
  • branchset_node – Same as for parse_branchset().
  • branchset – An instance of BranchSet.
  • validate – Whether or not branches’ uncertainty values should be validated.

Checks that each branch has a valid value and a unique id, and that all branches together have a total weight of 1.0.

Returns:None, all branches are attached to provided branchset.
parse_branchinglevel(branchinglevel_node, depth, validate)[source]

Parse one branching level.

Parameters:
  • branchinglevel_node – etree.Element object with tag “logicTreeBranchingLevel”.
  • depth – The sequential number of this branching level, based on 0.
  • validate – Whether or not the branching level, its branchsets and their branches should be validated.

Enumerates the child branchsets and calls parse_branchset(), validate_branchset(), parse_branches() and finally apply_branchset() for each.

Keeps track of “open ends” – the set of branches that don’t have any child branchset on this step of execution. After processing of every branching level only those branches that are listed in it can have child branchsets (if there is one on the next level).

parse_branchset(branchset_node, depth, number, validate)[source]

Create BranchSet object using data in branchset_node.

Parameters:
  • branchset_node – etree.Element object with tag “logicTreeBranchSet”.
  • depth – The sequential number of branchset’s branching level, based on 0.
  • number – Index number of this branchset inside branching level, based on 0.
  • validate – Whether or not filters defined in branchset and the branchset itself should be validated.
Returns:

An instance of BranchSet with filters applied but with no branches (they’re attached in parse_branches()).

parse_filters(branchset_node, uncertainty_type, filters)[source]

See superclass’ method for description and signature specification.

Converts the “applyToSources” filter value by splitting it into a list.

parse_tree(tree_node, validate)[source]

Parse the whole tree and point root_branchset attribute to the tree’s root.

parse_uncertainty_value(node, branchset)[source]

See superclass’ method for description and signature specification.

Doesn’t change source model file name, converts other values to either pair of floats or a single float depending on uncertainty type.

sample_path(rnd)[source]

Return the model name and a list of branch ids.

Parameters:rnd – the random number generator used for the sampling
samples_by_lt_path()[source]

Returns a dictionary lt_path -> how many times that path was sampled
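The returned mapping is just a count of how many realizations share each logic-tree path; a minimal sketch using a Counter (the Rlz stand-in is hypothetical, only the lt_path attribute is assumed):

```python
import collections
from collections import namedtuple

Rlz = namedtuple('Rlz', 'value lt_path')  # minimal stand-in for a realization

def samples_by_lt_path(realizations):
    # Count how many times each logic-tree path occurs among the realizations
    return collections.Counter(rlz.lt_path for rlz in realizations)

counts = samples_by_lt_path(
    [Rlz('a', ('b1',)), Rlz('b', ('b1',)), Rlz('c', ('b2',))])
```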

validate_branchset(branchset_node, depth, number, branchset)[source]

See superclass’ method for description and signature specification.

Checks that the following conditions are met:

  • First branching level must contain exactly one branchset, which must be of type “sourceModel”.
  • All other branchsets must not be of type “sourceModel” or “gmpeModel”.
validate_filters(branchset_node, uncertainty_type, filters)[source]

See superclass’ method for description and signature specification.

Checks that the following conditions are met:

  • “sourceModel” uncertainties can not have filters.
  • Absolute uncertainties must have only one filter – “applyToSources”, with only one source id.
  • All other uncertainty types can have either no or one filter.
  • Filter “applyToSources” must mention only source ids that exist in source models.
  • Filter “applyToTectonicRegionType” must mention only tectonic region types that exist in source models.
  • Filter “applyToSourceType” must mention only source types that exist in source models.
validate_uncertainty_value(node, branchset)[source]

See superclass’ method for description and signature specification.

Checks that the following conditions are met:

  • For uncertainty of type “sourceModel”: referenced file must exist and be readable. This is checked in collect_source_model_data() along with saving the source model information.
  • For uncertainty of type “abGRAbsolute”: value should be two float values.
  • For both absolute uncertainties: the source (only one) must be referenced in branchset’s filter “applyToSources”.
  • For all other cases: value should be a single float value.
exception openquake.commonlib.logictree.ValidationError(node, *args, **kwargs)[source]

Bases: openquake.commonlib.logictree.LogicTreeError

Logic tree file contains a logic error.

Parameters:node – XML node object that causes fail. Used to determine the affected line number.

All other constructor parameters are passed to superclass' constructor.

openquake.commonlib.logictree.get_effective_rlzs(rlzs)[source]

Group together realizations with the same unique identifier (uid) and yield the first representative of each group.
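The grouping can be sketched as a first-wins pass over the uid attribute (a minimal illustration, not the actual implementation; only the uid attribute is assumed):

```python
from collections import namedtuple

Rlz = namedtuple('Rlz', 'ordinal uid')  # minimal stand-in for a realization

def get_effective_rlzs(rlzs):
    # Keep only the first representative of each group sharing the same uid;
    # dict insertion order preserves the original ordering of the groups
    seen = {}
    for rlz in rlzs:
        seen.setdefault(rlz.uid, rlz)
    return list(seen.values())

effective = get_effective_rlzs([Rlz(0, 'u1'), Rlz(1, 'u1'), Rlz(2, 'u2')])
```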

openquake.commonlib.logictree.sample(weighted_objects, num_samples, rnd)[source]

Take random samples of a sequence of weighted objects

Parameters:
  • weighted_objects – A finite sequence of objects with a .weight attribute. The weights must sum up to 1.
  • num_samples – The number of samples to return
  • rnd – Random object. Should have a method random() returning a uniformly distributed random float >= 0 and < 1.
Returns:

A subsequence of the original sequence with num_samples elements
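The sampling described above can be sketched with cumulative weights: draw a uniform number in [0, 1) and pick the first object whose cumulative weight exceeds it (an illustration under the stated assumption that the weights sum to 1, not the library’s actual code):

```python
import random

class Weighted:
    # hypothetical stand-in for any object with a .weight attribute
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight

def sample(weighted_objects, num_samples, rnd):
    objs = list(weighted_objects)
    # build the cumulative weight distribution
    cum, total = [], 0.0
    for obj in objs:
        total += obj.weight
        cum.append(total)
    out = []
    for _ in range(num_samples):
        x = rnd.random()  # uniform in [0, 1)
        for obj, c in zip(objs, cum):
            if x < c:
                out.append(obj)
                break
    return out

rnd = random.Random(42)
picked = sample([Weighted('a', 1.0)], 3, rnd)
```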

openquake.commonlib.logictree.sample_one(branches, rnd)[source]

openquake.commonlib.logs module

Set up some system-wide loggers

class openquake.commonlib.logs.LogDatabaseHandler(job_id)[source]

Bases: logging.Handler

Log database handler

emit(record)[source]
class openquake.commonlib.logs.LogFileHandler(job_id, log_file)[source]

Bases: logging.FileHandler

Log file handler

emit(record)[source]
class openquake.commonlib.logs.LogStreamHandler(job_id)[source]

Bases: logging.StreamHandler

Log stream handler

emit(record)[source]
openquake.commonlib.logs.dbcmd(action, *args)[source]

A dispatcher to the database server.

Parameters:
  • action – database action to perform
  • args – arguments
openquake.commonlib.logs.handle(*args, **kwds)[source]

Context manager adding and removing log handlers.

Parameters:
  • job_id – ID of the current job
  • log_level – one of debug, info, warn, error, critical
  • log_file – log file path (if None, logs on stdout only)
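The add-then-remove behaviour can be sketched with a generic context manager over the standard logging module (a minimal illustration of the pattern, not the actual implementation; the job_id bookkeeping is omitted):

```python
import contextlib
import logging

@contextlib.contextmanager
def handle(logger, log_level=logging.INFO, log_file=None):
    # Attach a file handler (or a stream handler when log_file is None)
    # for the duration of the block, then remove it, even on error
    handler = (logging.FileHandler(log_file) if log_file
               else logging.StreamHandler())
    logger.setLevel(log_level)
    logger.addHandler(handler)
    try:
        yield
    finally:
        logger.removeHandler(handler)
        handler.close()

log = logging.getLogger('sketch')
before = len(log.handlers)
with handle(log):
    during = len(log.handlers)
after = len(log.handlers)
```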
openquake.commonlib.logs.set_level(level)[source]

Initialize logs to write records with level level or above.

openquake.commonlib.logs.touch_log_file(log_file)[source]

If a log file destination is specified, attempt to open the file in ‘append’ mode (‘a’). If the specified file is not writable, an IOError will be raised.
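The check amounts to opening the file in append mode and letting the error propagate (a minimal sketch; the tempfile path is only for illustration):

```python
import os
import tempfile

def touch_log_file(log_file):
    # Open in append mode; an OSError (IOError) propagates
    # if the destination is not writable
    with open(log_file, 'a'):
        pass

path = os.path.join(tempfile.mkdtemp(), 'calc.log')
touch_log_file(path)
```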

openquake.commonlib.oqvalidation module

class openquake.commonlib.oqvalidation.OqParam(**names_vals)[source]

Bases: openquake.hazardlib.valid.ParamSet

all_cost_types

Return the cost types of the computation (including occupants if it is there) in order.

area_source_discretization

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
asset_correlation

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
asset_hazard_distance

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
asset_life_expectancy

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
asset_loss_table

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
avg_losses

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
base_path

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
calculation_mode

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
check_gsims(gsims)[source]
Parameters:gsims – a sequence of GSIM instances
check_uniform_hazard_spectra()[source]
compare_with_classical

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
complex_fault_mesh_spacing

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
concurrent_tasks

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
conditional_loss_poes

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
continuous_fragility_discretization

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
coordinate_bin_width

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
description

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
disagg_outputs

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
distance_bin_width

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
export_dir

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
export_multi_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
exports

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
file_type
get_correl_model()[source]

Return a correlation object. See openquake.hazardlib.correlation for more info.

ground_motion_correlation_model

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ground_motion_correlation_params

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ground_motion_fields

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
gsim

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hazard_calculation_id

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hazard_curves_from_gmfs

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hazard_maps

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hazard_output_id

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
hazard_stats()[source]

Return a list of items with the statistical functions defined for the hazard calculation

hypocenter

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ignore_covs

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ignore_missing_costs

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
iml_disagg

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
imtls

Returns an OrderedDict with the risk intensity measure types and levels, if given, or the hazard ones.

inputs

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
insured_losses

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
intensity_measure_types

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
intensity_measure_types_and_levels

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
interest_rate

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
investigation_time

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
is_valid_complex_fault_mesh_spacing()[source]

The complex_fault_mesh_spacing parameter can be None only if rupture_mesh_spacing is set. In that case it is identified with it.

is_valid_export_dir()[source]

The export_dir parameter must refer to a directory, and the user must have the permission to write on it.

is_valid_geometry()[source]

It is possible to infer the geometry only if exactly one of sites, sites_csv, hazard_curves_csv, gmfs_csv, region and exposure_file is set. You have set more than one, or none.

is_valid_inputs()[source]

Invalid calculation_mode=”{calculation_mode}” or missing fragility_file/vulnerability_file in the .ini file.

is_valid_intensity_measure_levels()[source]

In order to compute hazard curves, intensity_measure_types_and_levels must be set or extracted from the risk models.

is_valid_intensity_measure_types()[source]

If the IMTs and levels are extracted from the risk models, they must not be set directly. Moreover, if intensity_measure_types_and_levels is set directly, intensity_measure_types must not be set.

is_valid_maximum_distance()[source]

Invalid maximum_distance={maximum_distance}: {error}

is_valid_poes()[source]

When computing hazard maps and/or uniform hazard spectra, the poes list must be non-empty.

is_valid_region()[source]

If there is a region, a region_grid_spacing must be given

is_valid_sites_disagg()[source]

The option sites_disagg (when given) requires specific_assets to be set.

is_valid_specific_assets()[source]

Read the specific assets from the parameters specific_assets or specific_assets_csv, if present. You cannot have both. The concept is meaningful only for risk calculators.

is_valid_truncation_level_disaggregation()[source]

Truncation level must be set for disaggregation calculations

job_type

‘hazard’ or ‘risk’

loss_curve_resolution

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
loss_dt(dtype=<type 'numpy.float32'>)[source]

Return a composite dtype based on the loss types, including occupants

loss_dt_list(dtype=<type 'numpy.float32'>)[source]

Return a data type list [(loss_name, dtype), ...]

loss_ratios

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
lrem_steps_per_interval

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
mag_bin_width

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
master_seed

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
max_hazard_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
max_loss_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
max_site_model_distance

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
maximum_distance

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
mean_hazard_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
mean_loss_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
minimum_intensity

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
no_imls()[source]

Return True if there are no intensity measure levels

num_epsilon_bins

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
number_of_ground_motion_fields

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
number_of_logic_tree_samples

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
poes

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
poes_disagg

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
quantile_hazard_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
quantile_loss_curves

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
random_seed

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_backarc

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_depth_to_1pt0km_per_sec

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_depth_to_2pt5km_per_sec

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_vs30_type

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
reference_vs30_value

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
region

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
region_constraint

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
region_grid_spacing

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
risk_files
risk_imtls

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
risk_investigation_time

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
risk_stats()[source]

Return a list of items with the statistical functions defined for the risk calculation

rupture_mesh_spacing

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ruptures_per_block

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
save_ruptures

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ses_per_logic_tree_path

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
ses_ratio

The ratio

risk_investigation_time / investigation_time / ses_per_logic_tree_path

ses_seed

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
set_risk_imtls(risk_models)[source]
Parameters:risk_models – a dictionary taxonomy -> loss_type -> risk_function

Set the attribute risk_imtls.

siteparam = {'backarc': 'reference_backarc', 'z2pt5': 'reference_depth_to_2pt5km_per_sec', 'vs30measured': 'reference_vs30_type', 'vs30': 'reference_vs30_value', 'z1pt0': 'reference_depth_to_1pt0km_per_sec'}
sites

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
sites_disagg

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
sites_per_tile

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
sites_slice

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
specific_assets

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
steps_per_interval

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
taxonomies_from_model

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
time_event

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
truncation_level

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
tses

Return the total time as investigation_time * ses_per_logic_tree_path * (number_of_logic_tree_samples or 1)
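The formula above is a straight product, with the sample count defaulting to 1 when sampling is disabled (a one-line sketch of the documented expression):

```python
def tses(investigation_time, ses_per_logic_tree_path,
         number_of_logic_tree_samples):
    # Total modelled time; `or 1` covers the full-enumeration case
    # where number_of_logic_tree_samples is 0
    return (investigation_time * ses_per_logic_tree_path *
            (number_of_logic_tree_samples or 1))
```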

uniform_hazard_spectra

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
width_of_mfd_bin

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value

openquake.commonlib.readinput module

exception openquake.commonlib.readinput.DuplicatedPoint[source]

Bases: exceptions.Exception

Raised when reading a CSV file with duplicated (lon, lat) pairs

class openquake.commonlib.readinput.Exposure(id, category, description, cost_types, time_events, insurance_limit_is_absolute, deductible_is_absolute, area, assets, taxonomies, asset_refs, cost_calculator)

Bases: tuple

area

Alias for field number 7

asset_refs

Alias for field number 10

assets

Alias for field number 8

category

Alias for field number 1

cost_calculator

Alias for field number 11

cost_types

Alias for field number 3

deductible_is_absolute

Alias for field number 6

description

Alias for field number 2

id

Alias for field number 0

insurance_limit_is_absolute

Alias for field number 5

taxonomies

Alias for field number 9

time_events

Alias for field number 4

openquake.commonlib.readinput.collect_files(dirpath, cond=<function <lambda>>)[source]

Recursively collect the files contained inside dirpath.

Parameters:
  • dirpath – path to a readable directory
  • cond – condition on the path to collect the file
openquake.commonlib.readinput.extract_from_zip(path, candidates)[source]

Given a zip archive and a function to detect the presence of a given filename, unzip the archive into a temporary directory and return the full path of the file. Raise an IOError if the file cannot be found within the archive.

Parameters:
  • path – pathname of the archive
  • candidates – list of names to search for
openquake.commonlib.readinput.get_composite_source_model(oqparam, in_memory=True)[source]

Parse the XML and build a complete composite source model in memory.

Parameters:
openquake.commonlib.readinput.get_cost_calculator(oqparam)[source]

Read the first lines of the exposure file and infer the cost calculator

openquake.commonlib.readinput.get_exposure(oqparam)[source]

Read the full exposure in memory and build a list of openquake.risklib.riskmodels.Asset instances.

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:an Exposure instance
openquake.commonlib.readinput.get_gmfs(oqparam)[source]
Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:sitecol, etags, gmf array of shape (N, I, E)
openquake.commonlib.readinput.get_gmfs_from_txt(oqparam, fname)[source]
Parameters:
Returns:

a composite array of shape (N, R) read from a CSV file with format etag indices [gmv1 ... gmvN] * num_imts

openquake.commonlib.readinput.get_gsim_lt(oqparam, trts=['*'])[source]
Parameters:
Returns:

a GsimLogicTree instance obtained by filtering on the provided tectonic region types.

openquake.commonlib.readinput.get_gsims(oqparam)[source]

Return an ordered list of GSIM instances from the gsim name in the configuration file or from the gsim logic tree file.

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
openquake.commonlib.readinput.get_hcurves(oqparam)[source]
Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:sitecol, imtls, curve array
openquake.commonlib.readinput.get_hcurves_from_csv(oqparam, fname)[source]
Parameters:
Returns:

the site collection and the hazard curves read by the .txt file

openquake.commonlib.readinput.get_hcurves_from_nrml(oqparam, fname)[source]
Parameters:
Returns:

sitecol, curve array

openquake.commonlib.readinput.get_imts(oqparam)[source]

Return a sorted list of IMTs as hazardlib objects

openquake.commonlib.readinput.get_job_info(oqparam, csm, sitecol)[source]
Parameters:
Returns:

a dictionary with the parameters of the computation, in particular the input and output weights

openquake.commonlib.readinput.get_mesh(oqparam)[source]

Extract the mesh of points to compute from the sites, the sites_csv, or the region.

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
openquake.commonlib.readinput.get_mesh_csvdata(csvfile, imts, num_values, validvalues)[source]

Read CSV data in the format IMT lon lat value1 ... valueN.

Parameters:
  • csvfile – a file or file-like object with the CSV data
  • imts – a list of intensity measure types
  • num_values – dictionary with the number of expected values per IMT
  • validvalues – validation function for the values
Returns:

the mesh of points and the data as a dictionary imt -> array of curves for each site
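Parsing this format can be sketched as follows (a hypothetical stdlib-only helper; the real function also validates the values and checks the expected number of values per IMT):

```python
def parse_mesh_csv(lines):
    """Parse rows of the form 'IMT lon lat value1 ... valueN' into
    a dictionary imt -> list of ((lon, lat), values)."""
    data = {}
    for line in lines:
        imt, lon, lat, *values = line.split()
        point = (float(lon), float(lat))
        data.setdefault(imt, []).append((point, [float(v) for v in values]))
    return data
```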

openquake.commonlib.readinput.get_mesh_hcurves(oqparam)[source]

Read CSV data in the format lon lat, v1-vN, w1-wN, ....

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:the mesh of points and the data as a dictionary imt -> array of curves for each site
openquake.commonlib.readinput.get_oqparam(job_ini, pkg=None, calculators=None, hc_id=None)[source]

Parse a dictionary of parameters from an INI-style config file.

Parameters:
  • job_ini – Path to configuration file/archive or dictionary of parameters
  • pkg – Python package where to find the configuration file (optional)
  • calculators – Sequence of calculator names (optional) used to restrict the valid choices for calculation_mode
  • hc_id – Not None only when called from a post calculation
Returns:

An openquake.commonlib.oqvalidation.OqParam instance containing the validated and cast parameters/values parsed from the job.ini file, as well as a subdictionary ‘inputs’ containing absolute paths to all of the files referenced in the job.ini, keyed by the parameter name.

openquake.commonlib.readinput.get_params(job_inis)[source]

Parse one or more INI-style config files.

Parameters:job_inis – List of configuration files (or list containing a single zip archive)
Returns:A dictionary of parameters
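The merging behaviour can be sketched with configparser (a simplified stand-in: the real function also handles zip archives and resolves the paths of the referenced input files):

```python
import configparser

def get_params(job_ini_texts):
    """Flatten the sections of one or more INI-style configurations
    into a single dictionary name -> raw string value; later files
    override earlier ones."""
    params = {}
    for text in job_ini_texts:
        cp = configparser.ConfigParser()
        cp.read_string(text)
        for section in cp.sections():
            params.update(cp.items(section))
    return params
```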
openquake.commonlib.readinput.get_risk_model(oqparam)[source]
Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
openquake.commonlib.readinput.get_rupture(oqparam)[source]

Returns a hazardlib rupture by reading the rupture_model file.

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
openquake.commonlib.readinput.get_scenario_from_nrml(oqparam, fname)[source]
Parameters:
Returns:

a triple (sitecol, etags, gmf array)

openquake.commonlib.readinput.get_site_collection(oqparam, mesh=None)[source]

Returns a SiteCollection instance by looking at the points and the site model defined by the configuration parameters.

Parameters:
openquake.commonlib.readinput.get_site_model(oqparam)[source]

Convert the NRML file into an iterator over 6-tuple of the form (z1pt0, z2pt5, measured, vs30, lon, lat)

Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
openquake.commonlib.readinput.get_sitecol_assetcol(oqparam, exposure)[source]
Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:the site collection and the asset collection
openquake.commonlib.readinput.get_source_model_lt(oqparam)[source]
Parameters:oqparam – an openquake.commonlib.oqvalidation.OqParam instance
Returns:a openquake.commonlib.logictree.SourceModelLogicTree instance
openquake.commonlib.readinput.get_source_models(oqparam, gsim_lt, source_model_lt, in_memory=True)[source]

Build all the source models generated by the logic tree.

Parameters:
Returns:

an iterator over openquake.commonlib.logictree.SourceModel tuples

openquake.commonlib.readinput.possibly_gunzip(fname)[source]

A file can be gzipped to save space (this happens in the Debian package); in that case, gunzip it.

Parameters:fname – a file name (not zipped)
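The idea can be sketched like this (a simplified illustration; the actual temporary-file handling in the library may differ):

```python
import gzip
import os
import shutil
import tempfile

def possibly_gunzip(fname):
    """Return fname if it exists; otherwise, if fname + '.gz' exists,
    decompress it into a temporary file and return that path."""
    if os.path.exists(fname):
        return fname
    gz = fname + '.gz'
    if os.path.exists(gz):
        fd, tmp = tempfile.mkstemp(suffix=os.path.basename(fname))
        with gzip.open(gz, 'rb') as fin, os.fdopen(fd, 'wb') as fout:
            shutil.copyfileobj(fin, fout)
        return tmp
    return fname
```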

openquake.calculators.reportwriter module

Utilities to build a report writer generating a .rst report for a calculation

class openquake.calculators.reportwriter.ReportWriter(dstore)[source]

Bases: object

A particularly smart view over the datastore

add(name, obj=None)[source]

Add the view named name to the report text

make_report()[source]

Build the report and return a reStructuredText string

save(fname)[source]

Save the report

title = {'inputs': u'Input files', 'csm_info': u'Composite source model', 'exposure_info': u'Exposure model', 'times_by_source_class': u'Computation times by source typology', 'task_info': u'Information about the tasks', 'task_slowest': u'Slowest task', 'required_params_per_trt': u'Required parameters per tectonic region type', 'ruptures_per_trt': u'Number of ruptures per tectonic region type', 'short_source_info': u'Slowest sources', 'avglosses_data_transfer': u'Estimated data transfer for the avglosses', 'rlzs_assoc': u'Realizations per (TRT, GSIM)', 'job_info': u'Informational data', 'params': u'Parameters', 'ruptures_events': u'Specific information for event based', 'performance': u'Slowest operations', 'biggest_ebr_gmf': u'Maximum memory allocated for the GMFs'}
openquake.calculators.reportwriter.build_report(job_ini, output_dir=None)[source]

Write a report.csv file with information about the calculation without running it

Parameters:
  • job_ini – full pathname of the job.ini file
  • output_dir – the directory where the report is written (default the input directory)
openquake.calculators.reportwriter.count_eff_ruptures(sources, srcfilter, gsims, param, monitor)[source]

Count the effective number of ruptures contained in the given sources within the integration distance and return a dictionary src_group_id -> num_ruptures. All sources must belong to the same tectonic region type.

openquake.calculators.reportwriter.indent(text)[source]

openquake.commonlib.risk_writers module

openquake.commonlib.riskmodels module

Reading risk models for risk calculators

openquake.commonlib.riskmodels.build_vf_node(vf)[source]

Convert a VulnerabilityFunction object into a Node suitable for XML conversion.

openquake.commonlib.riskmodels.filter_vset(elem)[source]
openquake.commonlib.riskmodels.get_risk_files(inputs)[source]
Parameters:inputs – a dictionary key -> path name
Returns:a pair (file_type, {cost_type: path})
openquake.commonlib.riskmodels.get_risk_models(oqparam, kind=None)[source]
Parameters:
  • oqparam – an OqParam instance
  • kind – vulnerability|vulnerability_retrofitted|fragility|consequence; if None it is extracted from the oqparam.file_type attribute
Returns:

a dictionary taxonomy -> loss_type -> function

openquake.commonlib.source module

class openquake.commonlib.source.CompositeSourceModel(gsim_lt, source_model_lt, source_models)[source]

Bases: _abcoll.Sequence

Parameters:
add_infos(sources)[source]

Populate the .infos dictionary (grp_id, src_id) -> <SourceInfo>

filter(src_filter)[source]

Generate a new CompositeSourceModel by filtering the sources on the given site collection.

Parameters:src_filter – a SourceFilter instance
get_maxweight(concurrent_tasks)[source]

Return an appropriate maxweight for use in the block_splitter

get_model(sm_id)[source]

Extract a CompositeSourceModel instance containing the single model of index sm_id.

get_num_sources()[source]
Returns:the total number of sources in the model
get_sources(kind='all', maxweight=None)[source]

Extract the sources contained in the source models by optionally filtering and splitting them, depending on the passed parameters.

init_serials()[source]

Generate unique seeds for each rupture with numpy.arange. This should be called only in event based calculators

split_sources(sources, src_filter, maxweight=200)[source]

Split a set of sources of the same source group; light sources (i.e. with weight <= maxweight) are not split.

Parameters:
  • sources – sources of the same source group
  • src_filter – SourceFilter instance
  • maxweight – weight used to decide if a source is light
Yields:

blocks of sources of weight around maxweight
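The weight-based grouping can be sketched as follows (a hypothetical simplified helper; the real splitter also splits heavy sources before grouping, and sources carry a .weight attribute rather than plain tuples):

```python
def block_splitter(sources, maxweight, weight=lambda src: src[1]):
    """Group items into blocks whose total weight is around maxweight."""
    block, total = [], 0
    for src in sources:
        block.append(src)
        total += weight(src)
        if total >= maxweight:
            yield block
            block, total = [], 0
    if block:  # yield the leftover, lighter block
        yield block
```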

src_groups

Yields the SourceGroups inside each source model.

class openquake.commonlib.source.CompositionInfo(gsim_lt, seed, num_samples, source_models, tot_weight)[source]

Bases: object

An object to collect information about the composition of a composite source model.

Parameters:
  • source_model_lt – a SourceModelLogicTree object
  • source_models – a list of SourceModel instances
classmethod fake(gsimlt=None)[source]
Returns:a fake CompositionInfo instance with the given gsim logic tree object; if None, builds automatically a fake gsim logic tree
get_grp_ids(sm_id)[source]
Returns:a list of source group IDs for the given source model ID
get_info(sm_id)[source]

Extract a CompositionInfo instance containing the single model of index sm_id.

get_num_rlzs(source_model=None)[source]
Parameters:source_model – a SourceModel instance (or None)
Returns:the number of realizations per source model (or all)
get_rlzs_assoc(count_ruptures=None)[source]

Return a RlzsAssoc with fields realizations, gsim_by_trt, rlz_idx and trt_gsims.

Parameters:count_ruptures – a function src_group -> num_ruptures
get_sm_by_grp()[source]
Returns:a dictionary grp_id -> sm_id
get_sm_by_rlz(realizations)[source]
Returns:a dictionary rlz -> source model name
get_source_model(src_group_id)[source]

Return the source model for the given src_group_id

grp_trt()[source]
Returns:a dictionary grp_id -> TRT string
class openquake.commonlib.source.LtRealization(ordinal, sm_lt_path, gsim_rlz, weight, sampleid)[source]

Bases: object

Composite realization build on top of a source model realization and a GSIM realization.

gsim_lt_path
uid

A unique identifier for effective realizations

class openquake.commonlib.source.RlzsAssoc(csm_info)[source]

Bases: _abcoll.Mapping

Realization association class. It should not be instantiated directly, but only via the method openquake.commonlib.source.CompositeSourceModel.get_rlzs_assoc.

Attr realizations:
 list of LtRealization objects
Attr gsim_by_trt:
 list of dictionaries {trt: gsim}
Attr rlzs_assoc:
 dictionary {src_group_id, gsim: rlzs}
Attr rlzs_by_smodel:
 list of lists of realizations

For instance, for the non-trivial logic tree in openquake.qa_tests_data.classical.case_15, which has 4 tectonic region types and 4 + 2 + 2 realizations, there are the following associations:

(0, ‘BooreAtkinson2008()’) [‘#0-SM1-BA2008_C2003’, ‘#1-SM1-BA2008_T2002’]
(0, ‘CampbellBozorgnia2008()’) [‘#2-SM1-CB2008_C2003’, ‘#3-SM1-CB2008_T2002’]
(1, ‘Campbell2003()’) [‘#0-SM1-BA2008_C2003’, ‘#2-SM1-CB2008_C2003’]
(1, ‘ToroEtAl2002()’) [‘#1-SM1-BA2008_T2002’, ‘#3-SM1-CB2008_T2002’]
(2, ‘BooreAtkinson2008()’) [‘#4-SM2_a3pt2b0pt8-BA2008’]
(2, ‘CampbellBozorgnia2008()’) [‘#5-SM2_a3pt2b0pt8-CB2008’]
(3, ‘BooreAtkinson2008()’) [‘#6-SM2_a3b1-BA2008’]
(3, ‘CampbellBozorgnia2008()’) [‘#7-SM2_a3b1-CB2008’]

extract(rlz_indices, csm_info)[source]

Extract a RlzsAssoc instance containing only the given realizations.

Parameters:rlz_indices – a list of realization indices from 0 to R - 1
get_assoc_by_grp()[source]
Returns:a numpy array of dtype assoc_by_grp_dt
get_rlz(rlzstr)[source]

Get a Realization instance for a string of the form ‘rlz-\d+’

get_rlzs_by_grp_id()[source]

Returns a dictionary grp_id -> [sorted rlzs]

get_rlzs_by_gsim(grp_id)[source]

Returns an ordered dictionary gsim -> rlzs for the given grp_id

realizations

Flat list with all the realizations

weights

Array with the weight of the realizations

class openquake.commonlib.source.SourceInfo(src, calc_time=0, num_split=0)[source]

Bases: object

dt = dtype([('grp_id', '<u4'), ('source_id', 'S100'), ('source_class', 'S30'), ('num_ruptures', '<u4'), ('calc_time', '<f4'), ('num_sites', '<u4'), ('num_split', '<u4')])
openquake.commonlib.source.capitalize(words)[source]

Capitalize words separated by spaces.
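This is essentially equivalent to the following one-liner (a sketch, ignoring the string-decoding the library may perform):

```python
def capitalize(words):
    """Capitalize words separated by spaces, e.g.
    'active shallow crust' becomes 'Active Shallow Crust'."""
    return ' '.join(w.capitalize() for w in words.split(' '))
```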

openquake.commonlib.source.collect_source_model_paths(smlt)[source]

Given a path to a source model logic tree or a file-like, collect all of the soft-linked path names to the source models it contains and return them as a uniquified list (no duplicates).

Parameters:smlt – source model logic tree file
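The uniquification preserves the first-seen order of the paths; it can be sketched with a hypothetical helper:

```python
def uniquify(paths):
    """Remove duplicates while preserving the first-seen order."""
    seen = set()
    unique = []
    for path in paths:
        if path not in seen:
            seen.add(path)
            unique.append(path)
    return unique
```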
openquake.commonlib.source.split_filter_source(src, src_filter)[source]
Parameters:
  • src – a source to split
  • src_filter – a SourceFilter instance
Returns:

a list of split sources

openquake.commonlib.util module

openquake.commonlib.util.compose_arrays(a1, a2, firstfield='etag')[source]

Compose two arrays into a single composite array by generating an extended datatype containing all the fields. The two arrays must have the same length.
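The extended-dtype composition can be sketched with NumPy (a simplified version assuming plain, non-nested fields and ignoring the firstfield renaming):

```python
import numpy

def compose_arrays(a1, a2):
    """Compose two structured arrays of the same length into one
    array whose dtype contains the fields of both."""
    assert len(a1) == len(a2), 'The arrays must have the same length'
    fields = ([(name, a1.dtype[name]) for name in a1.dtype.names] +
              [(name, a2.dtype[name]) for name in a2.dtype.names])
    composite = numpy.zeros(len(a1), numpy.dtype(fields))
    for name in a1.dtype.names:
        composite[name] = a1[name]
    for name in a2.dtype.names:
        composite[name] = a2[name]
    return composite
```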

openquake.commonlib.util.get_assets(dstore)[source]
Parameters:dstore – a datastore with keys ‘assetcol’
Returns:an ordered array of records (asset_ref, taxonomy, lon, lat)
openquake.commonlib.util.max_rel_diff(curve_ref, curve, min_value=0.01)[source]

Compute the maximum relative difference between two curves. Only values greater than or equal to min_value are considered.

>>> curve_ref = [0.01, 0.02, 0.03, 0.05, 1.0]
>>> curve = [0.011, 0.021, 0.031, 0.051, 1.0]
>>> round(max_rel_diff(curve_ref, curve), 2)
0.1
openquake.commonlib.util.max_rel_diff_index(curve_ref, curve, min_value=0.01)[source]

Compute the maximum relative difference between two sets of curves. Only values greater than or equal to min_value are considered. Return both the maximum difference and its location (array index).

>>> curve_refs = [[0.01, 0.02, 0.03, 0.05], [0.01, 0.02, 0.04, 0.06]]
>>> curves = [[0.011, 0.021, 0.031, 0.051], [0.012, 0.022, 0.032, 0.051]]
>>> max_rel_diff_index(curve_refs, curves)
(0.2, 1)
openquake.commonlib.util.reader(func)[source]

Decorator used to mark functions that require read access to the file system. It simply adds a thunk shared_dir_on to the function.

openquake.commonlib.util.rmsep(array_ref, array, min_value=0.01)[source]

Root Mean Square Error Percentage for two arrays.

Parameters:
  • array_ref – reference array
  • array – another array
  • min_value – compare only the elements larger than min_value
Returns:

the relative distance between the arrays

>>> curve_ref = numpy.array([[0.01, 0.02, 0.03, 0.05],
... [0.01, 0.02, 0.04, 0.06]])
>>> curve = numpy.array([[0.011, 0.021, 0.031, 0.051],
... [0.012, 0.022, 0.032, 0.051]])
>>> str(round(rmsep(curve_ref, curve), 5))
'0.11292'
openquake.commonlib.util.shared_dir_on()[source]
Returns:True if a shared_dir has been set in openquake.cfg, else False

openquake.commonlib.writers module

class openquake.commonlib.writers.CsvWriter(sep=', ', fmt='%12.8E')[source]

Bases: object

Class used in the exporters to save a bunch of CSV files

getsaved()[source]

Returns the list of files saved by this CsvWriter

save(data, fname, header=None)[source]

Save data to fname.

Parameters:
  • data – numpy array or list of lists
  • fname – path name
  • header – header to use
class openquake.commonlib.writers.HeaderTranslator(*regexps)[source]

Bases: object

A utility to convert the headers in CSV files. When reading, column names are converted into column descriptions with the method .read; when writing, column descriptions are converted into column names with the method .write. The usage is

>>> htranslator = HeaderTranslator(
...     '(asset_ref):\|S100',
...     '(eid):uint32',
...     '(taxonomy):object')
>>> htranslator.write('asset_ref:|S100 value:5'.split())
['asset_ref', 'value:5']
>>> htranslator.read('asset_ref value:5'.split())
['asset_ref:|S100', 'value:5']
read(names)[source]

Convert names into descriptions

write(descrs)[source]

Convert descriptions into names

openquake.commonlib.writers.build_header(dtype)[source]

Convert a numpy nested dtype into a list of strings suitable as the header of a CSV file.

>>> imt_dt = numpy.dtype([('PGA', float, 3), ('PGV', float, 4)])
>>> build_header(imt_dt)
['PGA:3', 'PGV:4']
>>> gmf_dt = numpy.dtype([('A', imt_dt), ('B', imt_dt),
...                       ('idx', numpy.uint32)])
>>> build_header(gmf_dt)
['A~PGA:3', 'A~PGV:4', 'B~PGA:3', 'B~PGV:4', 'idx:uint32']
openquake.commonlib.writers.castable_to_int(s)[source]

Return True if the string s can be interpreted as an integer
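This is equivalent to the following try/except sketch:

```python
def castable_to_int(s):
    """Return True if the string s can be interpreted as an integer."""
    try:
        int(s)
    except ValueError:
        return False
    return True
```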

openquake.commonlib.writers.extract_from(data, fields)[source]

Extract data from numpy arrays with nested records.

>>> imt_dt = numpy.dtype([('PGA', float, 3), ('PGV', float, 4)])
>>> a = numpy.array([([1, 2, 3], [4, 5, 6, 7])], imt_dt)
>>> extract_from(a, ['PGA'])
array([[ 1.,  2.,  3.]])
>>> gmf_dt = numpy.dtype([('A', imt_dt), ('B', imt_dt),
...                       ('idx', numpy.uint32)])
>>> b = numpy.array([(([1, 2, 3], [4, 5, 6, 7]),
...                  ([1, 2, 4], [3, 5, 6, 7]), 8)], gmf_dt)
>>> extract_from(b, ['idx'])
array([8], dtype=uint32)
>>> extract_from(b, ['B', 'PGV'])
array([[ 3.,  5.,  6.,  7.]])
openquake.commonlib.writers.parse_header(header)[source]

Convert a list of the form [‘fieldname:fieldtype:fieldsize’,...] into a numpy composite dtype. The parser understands headers generated by openquake.commonlib.writers.build_header(). Here is an example:

>>> parse_header(['PGA:float32', 'PGV', 'avg:float32:2'])
(['PGA', 'PGV', 'avg'], dtype([('PGA', '<f4'), ('PGV', '<f8'), ('avg', '<f4', (2,))]))
Params header:a list of type descriptions
Returns:column names and the corresponding composite dtype
openquake.commonlib.writers.read_array(fname, sep=', ')[source]

Convert a CSV file without header into a numpy array of floats.

>>> from openquake.baselib.general import writetmp
>>> print(read_array(writetmp('.1 .2, .3 .4, .5 .6\n')))
[[[ 0.1  0.2]
  [ 0.3  0.4]
  [ 0.5  0.6]]]
openquake.commonlib.writers.read_composite_array(fname, sep=', ')[source]

Convert a CSV file with header into a numpy array of records.

>>> from openquake.baselib.general import writetmp
>>> fname = writetmp('PGA:3,PGV:2,avg:1\n'
...                  '.1 .2 .3,.4 .5,.6\n')
>>> print(read_composite_array(fname))  # array of shape (1,)
[([0.1, 0.2, 0.3], [0.4, 0.5], [0.6])]
openquake.commonlib.writers.write_csv(dest, data, sep=', ', fmt='%.6E', header=None, comment=None)[source]
Parameters:
  • dest – file, filename or io.StringIO instance
  • data – array to save
  • sep – separator to use (default comma)
  • fmt – formatting string (default ‘%12.8E’)
  • header – optional list with the names of the columns to display
  • comment – optional first line starting with a # character
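For the simple case of a 2D numeric array, the writing logic can be sketched as follows (a hypothetical simplification; the real function also handles structured dtypes and header translation):

```python
def write_csv(dest, data, sep=',', fmt='%.6E', header=None, comment=None):
    """Write rows of numbers to a file-like object, with an optional
    header line and an optional leading '# ' comment line."""
    if comment is not None:
        dest.write('# %s\n' % comment)
    if header is not None:
        dest.write(sep.join(header) + '\n')
    for row in data:
        dest.write(sep.join(fmt % value for value in row) + '\n')
```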