openquake.risklib package

openquake.risklib.riskinput module

class openquake.risklib.riskinput.AssetCollection(assets_by_site, cost_calculator, time_event, time_events='')[source]

Bases: object

D = 11
I = 16
R = 12
assets_by_site()[source]
Returns:numpy array of lists with the assets by each site
static build_asset_collection(assets_by_site, time_event=None)[source]
Parameters:
  • assets_by_site – a list of lists of assets
  • time_event – a time event string (or None)
Returns:

two arrays assetcol and taxonomies

class openquake.risklib.riskinput.CompositeRiskModel(riskmodels, damage_states=None)[source]

Bases: _abcoll.Mapping

A container (imt, taxonomy) -> riskmodel.

Parameters:
  • riskmodels – a dictionary (imt, taxonomy) -> riskmodel
  • damage_states – None or a list of damage states
build_all_loss_dtypes(curve_resolution, conditional_loss_poes, insured_losses=False)[source]
Parameters:
  • conditional_loss_poes – configuration parameter
  • insured_losses – configuration parameter
Returns:

loss_curve_dt and loss_maps_dt

build_input(imt, hazards_by_site, assetcol, eps_dict)[source]
Parameters:
  • imt – an Intensity Measure Type
  • hazards_by_site – an array of hazards, one per site
  • assetcol – AssetCollection instance
  • eps_dict – a dictionary of epsilons
Returns:

a RiskInput instance

build_inputs_from_ruptures(sitecol, all_ruptures, trunc_level, correl_model, min_iml, eps, hint)[source]
Parameters:
  • sitecol – a SiteCollection instance
  • all_ruptures – the complete list of EBRupture instances
  • trunc_level – the truncation level (or None)
  • correl_model – the correlation model (or None)
  • min_iml – an array of minimum IMLs per IMT
  • eps – a matrix of epsilons of shape (N, E) or None
  • hint – hint for how many blocks to generate

Yield RiskInputFromRuptures instances.

build_loss_dtypes(conditional_loss_poes, insured_losses=False)[source]
Parameters:
  • conditional_loss_poes – configuration parameter
  • insured_losses – configuration parameter
Returns:

loss_curve_dt and loss_maps_dt

gen_outputs(riskinput, rlzs_assoc, monitor, assetcol=None)[source]

Group the assets per taxonomy and compute the outputs by using the underlying riskmodels. Yield the outputs generated as dictionaries out_by_lr.

Parameters:
  • riskinput – a RiskInput instance
  • rlzs_assoc – a RlzsAssoc instance
  • monitor – a monitor object used to measure the performance
  • assetcol – not None only for event based risk
get_imt_taxonomies(imt=None)[source]
Returns:sorted list of pairs (imt, taxonomies)
get_loss_ratios()[source]
Returns:a 1-dimensional composite array with loss ratios by loss type
get_min_iml()[source]
make_curve_builders(oqparam)[source]

Populate the inner lists .loss_types, .curve_builders.

class openquake.risklib.riskinput.GmfCollector(imts, rlzs)[source]

Bases: object

An object storing the GMFs in memory.

close()[source]
save(eid, imti, rlz, gmf, sids)[source]
class openquake.risklib.riskinput.RiskInput(imt_taxonomies, hazard_by_site, assets_by_site, eps_dict)[source]

Bases: object

Contains all the assets and hazard values associated to a given imt and site.

Parameters:
  • imt_taxonomies – a pair (IMT, taxonomies)
  • hazard_by_site – array of hazards, one per site
  • assets_by_site – array of assets, one per site
  • eps_dict – dictionary of epsilons
epsilon_getter(asset_ordinals)[source]
Parameters:asset_ordinals – list of ordinals of the assets
Returns:a closure returning an array of epsilons from the event IDs
get_hazard(rlzs_assoc, monitor=<Monitor dummy>)[source]
Parameters:
  • rlzs_assoc – a RlzsAssoc instance
  • monitor – a monitor object used to measure the performance
Returns:

list of hazard dictionaries imt -> rlz -> haz, one per site

imt_taxonomies

Return a list of pairs (imt, taxonomies) with a single element

class openquake.risklib.riskinput.RiskInputFromRuptures(imt_taxonomies, sitecol, ses_ruptures, trunc_level, correl_model, min_iml, epsilons, eids)[source]

Bases: object

Contains all the assets associated to the given IMT and a subset of the ruptures for a given calculation.

Parameters:
  • imt_taxonomies – list given by the risk model
  • sitecol – SiteCollection instance
  • assets_by_site – list of list of assets
  • ses_ruptures – ordered array of EBRuptures
  • gsims – list of GSIM instances
  • trunc_level – truncation level for the GSIMs
  • correl_model – correlation model for the GSIMs
Param epsilons:

a matrix of epsilons

epsilon_getter(asset_ordinals)[source]
Parameters:asset_ordinals – list of ordinals of the assets
Returns:a closure returning an array of epsilons from the event IDs
get_hazard(rlzs_assoc, monitor=<Monitor dummy>)[source]
Parameters:
  • rlzs_assoc – a RlzsAssoc instance
  • monitor – a monitor object used to measure the performance
Returns:

lists of N hazard dictionaries imt -> rlz -> GMVs

openquake.risklib.riskinput.create(GmfColl, eb_ruptures, sitecol, imts, rlzs_assoc, trunc_level, correl_model, min_iml, monitor=<Monitor dummy>)[source]
Parameters:
  • GmfColl – a GmfCollector class to be instantiated
  • eb_ruptures – a list of EBRuptures with the same trt_model_id
  • sitecol – a SiteCollection instance
  • imts – list of IMT strings
  • rlzs_assoc – a RlzsAssoc instance
  • trunc_level – truncation level
  • correl_model – correlation model instance
  • min_iml – a dictionary of minimum intensity measure levels
  • monitor – a monitor instance
Returns:

a GmfCollector instance

openquake.risklib.riskinput.make_eps(assets_by_site, num_samples, seed, correlation)[source]
Parameters:
  • assets_by_site – a list of lists of assets
  • num_samples (int) – the number of ruptures
  • seed (int) – a random seed
  • correlation (float) – the correlation coefficient
Returns:

epsilons matrix of shape (num_assets, num_samples)

openquake.risklib.riskmodels module

class openquake.risklib.riskmodels.Asset(asset_id, taxonomy, number, location, values, area=1, deductibles=None, insurance_limits=None, retrofitteds=None, calc=<CostCalculator {'deduct_abs': True, 'area_types': {'structural': 'per_asset'}, 'limit_abs': True, 'cost_types': {'structural': 'per_area'}}>, ordinal=None)[source]

Bases: object

Describe an Asset as a collection of several values. A value can represent a replacement cost (e.g. structural cost, business interruption cost) or another quantity that can be considered for a risk analysis (e.g. occupants).

Optionally, an Asset instance can also hold a collection of deductible values and insured limits considered for insured loss calculations.

deductible(loss_type)[source]
Returns:the deductible fraction of the asset cost for loss_type
insurance_limit(loss_type)[source]
Returns:the limit fraction of the asset cost for loss_type
retrofitted(loss_type, time_event=None)[source]
Returns:the asset retrofitted value for loss_type
value(loss_type, time_event=None)[source]
Returns:the total asset value for loss_type
class openquake.risklib.riskmodels.Classical(taxonomy, vulnerability_functions, hazard_imtls, lrem_steps_per_interval, conditional_loss_poes, poes_disagg, insured_losses=False)[source]

Bases: openquake.risklib.riskmodels.RiskModel

Classical PSHA-Based RiskModel.

  1. Compute loss curves, loss maps for each realization.
  2. Compute (if more than one realization is given) mean and quantile loss curves and maps.

Per-realization Outputs contain the following fields:

Attr assets:an iterable over N assets the outputs refer to
Attr loss_curves:
 a numpy array of N loss curves. If the curve resolution is C, the final shape of the array will be (N, 2, C), where the 2 accounts for the losses and poes dimensions
Attr average_losses:
 a numpy array of N average loss values
Attr insured_curves:
 a numpy array of N insured loss curves, shaped (N, 2, C)
Attr average_insured_losses:
 a numpy array of N average insured loss values
Attr loss_maps:a numpy array of P elements holding N loss maps where P is the number of conditional_loss_poes considered. Shape: (P, N)

The statistical outputs are stored into openquake.risklib.scientific.Output, which holds the following fields:

Attr assets:an iterable of N assets the outputs refer to
Attr mean_curves:
 A numpy array with N mean loss curves. Shape: (N, 2)
Attr mean_average_losses:
 A numpy array with N mean average loss values
Attr mean_maps:A numpy array with P mean loss maps. Shape: (P, N)
Attr mean_fractions:
 A numpy array with F mean fractions, where F is the number of PoEs used for disaggregation. Shape: (F, N)
Attr quantile_curves:
 A numpy array with Q quantile curves (Q = number of quantiles). Shape: (Q, N, 2, C)
Attr quantile_average_losses:
 A numpy array shaped (Q, N) with average losses
Attr quantile_maps:
 A numpy array with Q quantile maps shaped (Q, P, N)
Attr quantile_fractions:
 A numpy array with Q quantile maps shaped (Q, F, N)
Attr mean_insured_curves:
 A numpy array with N mean insured loss curves. Shape: (N, 2)
Attr mean_average_insured_losses:
 A numpy array with N mean average insured loss values
Attr quantile_insured_curves:
 A numpy array with Q quantile insured curves (Q = number of quantiles). Shape: (Q, N, 2, C)
Attr quantile_average_insured_losses:
 A numpy array shaped (Q, N) with average insured losses
kind = 'vulnerability'
class openquake.risklib.riskmodels.ClassicalBCR(taxonomy, vulnerability_functions_orig, vulnerability_functions_retro, hazard_imtls, lrem_steps_per_interval, interest_rate, asset_life_expectancy)[source]

Bases: openquake.risklib.riskmodels.RiskModel

kind = 'vulnerability'
class openquake.risklib.riskmodels.ClassicalDamage(taxonomy, fragility_functions, hazard_imtls, investigation_time, risk_investigation_time)[source]

Bases: openquake.risklib.riskmodels.Damage

Implements the ClassicalDamage riskmodel

kind = 'fragility'
class openquake.risklib.riskmodels.CostCalculator(cost_types, area_types, deduct_abs=True, limit_abs=True)[source]

Bases: object

Return the value of an asset for the given loss type depending on the cost types declared in the exposure, as follows:

case 1: cost type: aggregated:
cost = economic value
case 2: cost type: per asset:
cost * number (of assets) = economic value
case 3: cost type: per area and area type: aggregated:
cost * area = economic value
case 4: cost type: per area and area type: per asset:
cost * area * number = economic value

The same “formula” applies to retrofitting cost.
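
As an illustration only, the four cases can be summarised by the following sketch (cost_type and area_type here stand for the values declared in the exposure; this is not the actual implementation):

def economic_value(cost, number, area, cost_type, area_type):
    # illustrative sketch of the four cases listed above
    if cost_type == 'aggregated':
        return cost
    elif cost_type == 'per_asset':
        return cost * number
    elif cost_type == 'per_area' and area_type == 'aggregated':
        return cost * area
    elif cost_type == 'per_area' and area_type == 'per_asset':
        return cost * area * number
    raise ValueError('Unknown cost type %r' % cost_type)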

class openquake.risklib.riskmodels.Damage(taxonomy, fragility_functions)[source]

Bases: openquake.risklib.riskmodels.RiskModel

Implements the ScenarioDamage riskmodel

kind = 'fragility'
class openquake.risklib.riskmodels.ProbabilisticEventBased(taxonomy, vulnerability_functions, loss_curve_resolution, conditional_loss_poes, insured_losses=False)[source]

Bases: openquake.risklib.riskmodels.RiskModel

Implements the Probabilistic Event Based riskmodel

Per-realization Outputs are saved into openquake.risklib.scientific.ProbabilisticEventBased.Output, which contains the following fields:

Attr assets:an iterable over N assets the outputs refer to
Attr loss_matrix:
 an array of losses shaped N x T (where T is the number of events)
Attr loss_curves:
 a numpy array of N loss curves. If the curve resolution is C, the final shape of the array will be (N, 2, C), where the 2 accounts for the losses and poes dimensions
Attr average_losses:
 a numpy array of N average loss values
Attr stddev_losses:
 a numpy array holding N standard deviations of losses
Attr insured_curves:
 a numpy array of N insured loss curves, shaped (N, 2, C)
Attr average_insured_losses:
 a numpy array of N average insured loss values
Attr stddev_insured_losses:
 a numpy array holding N standard deviations of insured losses
Attr loss_maps:a numpy array of P elements holding N loss maps where P is the number of conditional_loss_poes considered. Shape: (P, N)
Attr dict event_loss_table:
 a dictionary mapping event ids to aggregate loss values

The statistical outputs are stored into openquake.risklib.scientific.Output objects.

kind = 'vulnerability'
class openquake.risklib.riskmodels.RiskModel(taxonomy, risk_functions)[source]

Bases: object

Base class. Can be used in the tests as a mock.

compositemodel = None
get_loss_types(imt)[source]
Parameters:imt – Intensity Measure Type string
Returns:loss types with risk functions of the given imt
kind = None
loss_types

The list of loss types in the underlying vulnerability functions, in lexicographic order

out_by_lr(imt, assets, hazard, epsgetter)[source]
Parameters:
  • imt – restrict the risk functions to this IMT
  • assets – an array of assets of homogeneous taxonomy
  • hazard – a dictionary rlz -> hazard
  • epsgetter – a callable returning epsilons for the given eids
Returns:

a dictionary (l, r) -> output

time_event = None
class openquake.risklib.riskmodels.Scenario(taxonomy, vulnerability_functions, insured_losses, time_event=None)[source]

Bases: openquake.risklib.riskmodels.RiskModel

Implements the Scenario riskmodel

kind = 'vulnerability'
openquake.risklib.riskmodels.get_riskmodel(taxonomy, oqparam, **extra)[source]

Return an instance of the correct riskmodel class, depending on the attribute calculation_mode of the object oqparam.

Parameters:
  • taxonomy – a taxonomy string
  • oqparam – an object containing the parameters needed by the riskmodel class
  • extra – extra parameters to pass to the riskmodel class
openquake.risklib.riskmodels.get_values(loss_type, assets, time_event=None)[source]
Returns:a numpy array with the values for the given assets, depending on the loss_type.
openquake.risklib.riskmodels.rescale(curves, values)[source]

Multiply the losses in each curve of kind (losses, poes) by the corresponding value.

openquake.risklib.scientific module

This module includes the scientific API of the oq-risklib

class openquake.risklib.scientific.BetaDistribution[source]

Bases: openquake.risklib.scientific.Distribution

sample(means, _covs, stddevs, _idxs=None)[source]
survival(loss_ratio, mean, stddev)[source]
class openquake.risklib.scientific.ConsequenceFunction(id, dist, params)

Bases: tuple

dist

Alias for field number 1

id

Alias for field number 0

params

Alias for field number 2

class openquake.risklib.scientific.ConsequenceModel(id, assetCategory, lossCategory, description, limitStates)[source]

Bases: dict

Container for a set of consequence functions. You can access each function given its name with the square bracket notation.

Parameters:
  • id (str) – ID of the model
  • assetCategory (str) – asset category (e.g. buildings, population)
  • lossCategory (str) – loss type (e.g. structural, contents, ...)
  • description (str) – description of the model
  • limitStates – a list of limit state strings
  • consequence_functions – a dictionary name -> ConsequenceFunction
class openquake.risklib.scientific.CurveBuilder(loss_type, loss_ratios, user_provided, conditional_loss_poes=(), insured_losses=False, curve_resolution=None)[source]

Bases: object

Build loss ratio curves. The loss ratios can be provided by the user or automatically generated (user_provided=False). The usage is something like this:

builder = CurveBuilder(loss_type, loss_ratios, user_provided=True)
counts = builder.build_counts(loss_matrix)
build_counts(loss_matrix)[source]
Parameters:loss_matrix – a matrix of loss ratios of size N x E, N = #assets, E = #events
build_loss_curves(assetcol, losses_by_aid, ses_ratio)[source]
Parameters:
  • assetcol – asset collection object
  • losses_by_aid – a matrix of losses indexed by asset
  • ses_ratio – event based factor
build_loss_maps(assetcol, rcurves)[source]

Build loss maps from the risk curves. Yield pairs (rlz_ordinal, loss_maps array).

Parameters:
  • assetcol – asset collection
  • rcurves – array of risk curves of shape (N, R, 2)
build_poes(N, count_dicts, ses_ratio)[source]
Parameters:
  • N – the number of assets
  • count_dicts – a list of maps asset_idx -> [C indices]
  • ses_ratio – event based factor
get_counts(N, count_dicts)[source]

Return a matrix of shape (N, C), with nonzero entries at the indices given by the count_dicts

Parameters:
  • N – the number of assets
  • count_dicts – a list of maps asset_idx -> [C indices]
>>> cb = CurveBuilder('structural', [0.1, 0.2, 0.3, 0.9], True)
>>> cb.get_counts(3, [{1: [4, 3, 2, 1]}, {2: [4, 0, 0, 0]},
...                   {1: [1, 0, 0, 0]}, {2: [2, 0, 0, 0]}])
array([[0, 0, 0, 0],
       [5, 3, 2, 1],
       [6, 0, 0, 0]], dtype=uint32)
class openquake.risklib.scientific.DegenerateDistribution[source]

Bases: openquake.risklib.scientific.Distribution

The degenerate distribution, i.e. a distribution with a delta corresponding to the mean.

sample(means, _covs, _stddev, _idxs)[source]
survival(loss_ratio, mean, _stddev)[source]
class openquake.risklib.scientific.DiscreteDistribution[source]

Bases: openquake.risklib.scientific.Distribution

sample(loss_ratios, probs)[source]
seed = None
survival(loss_ratios, probs)[source]

Required for the Classical Risk and BCR Calculators. Currently left unimplemented as the PMF format is used only for the Scenario and Event Based Risk Calculators.

Parameters:steps (int) – number of steps between loss ratios.
class openquake.risklib.scientific.Distribution[source]

Bases: object

A Distribution class models a continuous probability distribution of the random variables used to sample the losses of a set of assets. It is usually registered with a name (e.g. LN, BT, PM) by using openquake.baselib.general.CallableDict

sample(means, covs, stddevs, idxs)[source]
Returns:

a sampled set of losses

Parameters:
  • means – an array of mean losses
  • covs – an array of covariances
  • stddevs – an array of stddevs
survival(loss_ratio, mean, stddev)[source]

Return the survival function of the distribution with mean and stddev applied to loss_ratio

class openquake.risklib.scientific.FragilityFunctionContinuous(limit_state, mean, stddev)[source]

Bases: object

class openquake.risklib.scientific.FragilityFunctionDiscrete(limit_state, imls, poes, no_damage_limit=None)[source]

Bases: object

interp
class openquake.risklib.scientific.FragilityFunctionList(array, **attrs)[source]

Bases: list

A list of fragility functions with common attributes; there is a function for each limit state.

mean_loss_ratios_with_steps(steps)[source]

For compatibility with vulnerability functions

class openquake.risklib.scientific.FragilityModel(id, assetCategory, lossCategory, description, limitStates)[source]

Bases: dict

Container for a set of fragility functions. You can access each function given the IMT and taxonomy with the square bracket notation.

Parameters:
  • id (str) – ID of the model
  • assetCategory (str) – asset category (e.g. buildings, population)
  • lossCategory (str) – loss type (e.g. structural, contents, ...)
  • description (str) – description of the model
  • limitStates – a list of limit state strings
build(continuous_fragility_discretization, steps_per_interval)[source]

Return a new FragilityModel instance, in which the values have been replaced with FragilityFunctionList instances.

Parameters:
  • continuous_fragility_discretization – configuration parameter
  • steps_per_interval – configuration parameter
class openquake.risklib.scientific.LogNormalDistribution(epsilons=None)[source]

Bases: openquake.risklib.scientific.Distribution

Model a distribution of a random variable whose logarithm is normally distributed.

Attr epsilons:An array of random numbers generated with numpy.random.multivariate_normal() with size E
sample(means, covs, _stddevs, idxs)[source]
survival(loss_ratio, mean, stddev)[source]
class openquake.risklib.scientific.Output(assets, loss_type, hid=None, weight=0, **attrs)[source]

Bases: object

A generic container of attributes. Only assets, loss_type, hid and weight are always defined.

Parameters:
  • assets – a list of assets
  • loss_type – a loss type string
  • hid – ordinal of the hazard realization (can be None)
  • weight – weight of the realization (can be None)
class openquake.risklib.scientific.SimpleStats(rlzs, quantiles=())[source]

Bases: object

A class to perform statistics on the average losses. The average losses are stored as N x 2 arrays (non-insured and insured losses) where N is the number of assets.

Parameters:
  • rlzs – a list of realizations
  • quantiles – a list of floats in the range 0..1
compute_and_store(name, dstore)[source]

Compute mean and quantiles from the data in the datastore under the group <name>-rlzs and store them under the group <name>-stats. Return the number of bytes stored.

class openquake.risklib.scientific.StatsBuilder(quantiles, conditional_loss_poes, poes_disagg, curve_resolution=0, _normalize_curves=<function normalize_curves>, insured_losses=False)[source]

Bases: object

A class to build risk statistics.

Parameters:
  • quantiles – list of quantile values
  • conditional_loss_poes – list of conditional loss poes
  • poes_disagg – list of poes_disagg
  • curve_resolution – only meaningful for the event based calculator
build(all_outputs, prefix='')[source]

Build all statistics from a set of risk outputs.

Parameters:all_outputs – a non empty sequence of risk outputs referring to the same assets and loss_type. Each output must have attributes assets, loss_type, hid, weight, loss_curves and insured_curves (the latter is possibly None).
Returns:an Output object with the following attributes (numpy arrays; the shape is in parenthesis, N is the number of assets, R the resolution of the loss curve, P the number of conditional loss poes, Q the number of quantiles):

  1. assets (N)
  2. loss_type (1)
  3. mean_curves (2, N, 2, R)
  4. mean_average_losses (2, N)
  5. mean_map (2, P, N)
  6. mean_fractions (2, P, N)
  7. quantile_curves (2, Q, N, 2, R)
  8. quantile_average_losses (2, Q, N)
  9. quantile_maps (2, Q, P, N)
  10. quantile_fractions (2, Q, P, N)
  11. quantiles (Q)

get_curves_maps(stats)[source]
Parameters:stats – an object with attributes mean_curves, mean_average_losses, mean_maps, quantile_curves, quantile_average_losses, quantile_loss_curves, quantile_maps, assets. There is also a loss_type attribute which must be always the same.
Returns:statistical loss curves and maps per asset as composite arrays of shape (Q1, N)
normalize(loss_curves)[source]

Normalize the loss curves by using the provided normalization function

class openquake.risklib.scientific.VulnerabilityFunction(vf_id, imt, imls, mean_loss_ratios, covs=None, distribution='LN')[source]

Bases: object

dtype = dtype([('iml', '<f4'), ('loss_ratio', '<f4'), ('cov', '<f4')])
init()[source]
interpolate(gmvs)[source]
Parameters:gmvs – array of intensity measure levels
Returns:(interpolated loss ratios, interpolated covs, indices > min)
loss_ratio_exceedance_matrix = <functools.partial object>
mean_imls = <functools.partial object>
mean_loss_ratios_with_steps(steps)[source]

Split the mean loss ratios, producing a new set of loss ratios. The new set of loss ratios always includes 0.0 and 1.0

Parameters:steps (int) –

the number of steps we make to go from one loss ratio to the next. For example, if we have [0.5, 0.7]:

steps = 1 produces [0.0,  0.5, 0.7, 1]
steps = 2 produces [0.0, 0.25, 0.5, 0.6, 0.7, 0.85, 1]
steps = 3 produces [0.0, 0.17, 0.33, 0.5, 0.57, 0.63,
                    0.7, 0.8, 0.9, 1]
sample(means, covs, idxs, epsilons)[source]

Sample the epsilons and apply the corrections to the means. This method is called only if there are nonzero covs.

Parameters:
  • means – array of E’ loss ratios
  • covs – array of E’ floats
  • idxs – array of E booleans with E >= E’
  • epsilons – array of E floats
Returns:

array of E’ loss ratios

set_distribution(epsilons=None)[source]
strictly_increasing()[source]
Returns:a new vulnerability function that is strictly increasing. It is built by removing the pieces of the function where the mean loss ratio is constant.
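
A small usage sketch based only on the signature and the methods documented above; the values are illustrative and the constructor may perform additional validation:

from openquake.risklib.scientific import VulnerabilityFunction

# hypothetical function with two IMLs and mean loss ratios 0.5 and 0.7
vf = VulnerabilityFunction('VF-1', 'PGA', imls=[0.1, 0.2],
                           mean_loss_ratios=[0.5, 0.7])
# per the documentation of mean_loss_ratios_with_steps, with steps=2
# the ratios [0.5, 0.7] expand to [0.0, 0.25, 0.5, 0.6, 0.7, 0.85, 1.0]
ratios = vf.mean_loss_ratios_with_steps(2)
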
class openquake.risklib.scientific.VulnerabilityFunctionWithPMF(vf_id, imt, imls, loss_ratios, probs, seed=42)[source]

Bases: openquake.risklib.scientific.VulnerabilityFunction

Vulnerability function with an explicit distribution of probabilities

Parameters:
  • vf_id (str) – vulnerability function ID
  • imt (str) – Intensity Measure Type
  • imls – intensity measure levels (L)
  • loss_ratios – an array of M mean loss ratios
  • probs – a matrix of probabilities of shape (M, L)
init()[source]
interpolate(gmvs)[source]
Parameters:gmvs – array of intensity measure levels
Returns:(interpolated probabilities, None, indices > min)
loss_ratio_exceedance_matrix = <functools.partial object>
sample(probs, _covs, idxs, epsilons)[source]

Sample the epsilons and apply the corrections to the probabilities. This method is called only if there are epsilons.

Parameters:
  • probs – array of E’ floats
  • _covs – ignored, it is there only for API consistency
  • idxs – array of E booleans with E >= E’
  • epsilons – array of E floats
Returns:

array of E’ probabilities

set_distribution(epsilons=None)[source]
class openquake.risklib.scientific.VulnerabilityModel(id=None, assetCategory=None, lossCategory=None)[source]

Bases: dict

Container for a set of vulnerability functions. You can access each function given the IMT and taxonomy with the square bracket notation.

Parameters:
  • id (str) – ID of the model
  • assetCategory (str) – asset category (e.g. buildings, population)
  • lossCategory (str) – loss type (e.g. structural, contents, ...)

All such attributes are None for a vulnerability model coming from a NRML 0.4 file.

openquake.risklib.scientific.annual_frequency_of_exceedence(poe, t_haz)[source]
Parameters:
  • poe – hazard probability of exceedance
  • t_haz – hazard investigation time
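
The conversion is the standard Poissonian one; a minimal sketch (the function name is hypothetical):

import numpy

def annual_frequency_sketch(poe, t_haz):
    # Poissonian relation between the probability of exceedance in
    # t_haz years and the annual frequency of exceedance
    return - numpy.log(1. - poe) / t_haz
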
openquake.risklib.scientific.asset_statistics(losses, curves_poes, quantiles, weights, poes)[source]

Compute output statistics (mean/quantile loss curves and maps) for a single asset

Parameters:
  • losses – the losses on which the loss curves are defined
  • curves_poes – a numpy matrix with the poes of the different curves
  • quantiles (list) – an iterable over the quantile levels to be considered for quantile outputs
  • poes (list) – the poes taken into account for computing the loss maps
Returns:

a tuple with 1) mean loss curve 2) a list of quantile curves 3) mean loss map 4) a list of quantile loss maps

openquake.risklib.scientific.average_loss(losses_poes)[source]

Given a loss curve with poes over losses defined on a given time span, compute the average loss over that period of time.

Note:Since the loss curve is piecewise linear (being the result of a linear interpolation), we compute an exact integral by using the trapezoidal rule with the width given by the loss bin width.
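
A minimal sketch of the trapezoidal integration, assuming losses_poes is a 2 x R array holding the losses and the corresponding poes:

import numpy

def average_loss_sketch(losses_poes):
    losses, poes = losses_poes
    # exact integral of the piecewise linear exceedance curve
    return numpy.trapz(poes, losses)
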
openquake.risklib.scientific.bcr(eal_original, eal_retrofitted, interest_rate, asset_life_expectancy, asset_value, retrofitting_cost)[source]

Compute the Benefit-Cost Ratio.

BCR = (EALo - EALr)(1-exp(-r*t))/(r*C)

Where:

  • BCR – Benefit cost ratio
  • EALo – Expected annual loss for original asset
  • EALr – Expected annual loss for retrofitted asset
  • r – Interest rate
  • t – Life expectancy of the asset
  • C – Retrofitting cost
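
A sketch of the formula above, under the assumption (not documented here) that the EALs are expressed as loss ratios, so that asset_value converts them into absolute losses:

import numpy

def bcr_sketch(eal_original, eal_retrofitted, interest_rate,
               asset_life_expectancy, asset_value, retrofitting_cost):
    # assumes the EALs are loss ratios, hence the asset_value factor
    return ((eal_original - eal_retrofitted) * asset_value
            * (1 - numpy.exp(- interest_rate * asset_life_expectancy))
            / (interest_rate * retrofitting_cost))
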
openquake.risklib.scientific.build_dtypes(curve_resolution, conditional_loss_poes, insured=False)[source]

Returns loss_curve_dt and loss_maps_dt

openquake.risklib.scientific.build_imls(ff, continuous_fragility_discretization, steps_per_interval=0)[source]

Build intensity measure levels from a fragility function. If the function is continuous, they are produced simply as a linear space between minIML and maxIML. If the function is discrete, they are generated with a more complex logic depending on the noDamageLimit and the steps_per_interval parameter.

Parameters:
  • ff – a fragility function object
  • continuous_fragility_discretization – .ini file parameter
  • steps_per_interval – .ini file parameter
Returns:

generated imls

openquake.risklib.scientific.build_poes(counts, nses)[source]
Parameters:
  • counts – an array of counts of exceedance for the bins
  • nses – number of stochastic event sets
Returns:

an array of PoEs
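
One common Poissonian conversion from exceedance counts to PoEs; a sketch under that assumption (not necessarily the exact formula used here):

import numpy

def build_poes_sketch(counts, nses):
    # counts / nses approximates the occurrence rate over the
    # effective investigation time
    return 1. - numpy.exp(- numpy.array(counts) / nses)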

openquake.risklib.scientific.classical(vulnerability_function, hazard_imls, hazard_poes, steps=10)[source]
Parameters:
  • vulnerability_function – an instance of openquake.risklib.scientific.VulnerabilityFunction representing the vulnerability function used to compute the curve.
  • hazard_imls – the hazard intensity measure type and levels
  • steps (int) – Number of steps between loss ratios.
openquake.risklib.scientific.classical_damage(fragility_functions, hazard_imls, hazard_poes, investigation_time, risk_investigation_time)[source]
Parameters:
  • fragility_functions – a list of fragility functions for each damage state
  • hazard_imls – Intensity Measure Levels
  • hazard_poes – hazard curve
  • investigation_time – hazard investigation time
  • risk_investigation_time – risk investigation time
Returns:

an array of M probabilities of occurrence where M is the number of damage states.

openquake.risklib.scientific.conditional_loss_ratio(loss_ratios, poes, probability)[source]

Return the loss ratio corresponding to the given PoE (Probability of Exceedance). We can have four cases:

  1. If probability is in poes, it takes the biggest corresponding loss_ratio.
  2. If it is in (poe1, poe2), where both poe1 and poe2 are in poes, then we perform a linear interpolation on the corresponding losses.
  3. If the given probability is smaller than the lowest PoE defined, it returns the max loss ratio.
  4. If the given probability is greater than the highest PoE defined, it returns zero.
Parameters:
  • loss_ratios – an iterable over non-decreasing loss ratio values (float)
  • poes – an iterable over non-increasing probability of exceedance values (float)
  • probability (float) – the probability value used to interpolate the loss curve
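
A sketch of the interpolation logic described by the four cases above (ties in case 1 are not handled here):

import numpy

def conditional_loss_ratio_sketch(loss_ratios, poes, probability):
    loss_ratios = numpy.array(loss_ratios, float)
    poes = numpy.array(poes, float)
    if probability > poes[0]:   # case 4: above the highest PoE
        return 0.
    if probability < poes[-1]:  # case 3: below the lowest PoE
        return loss_ratios[-1]
    # cases 1-2: poes are non-increasing, so reverse them for interp
    return numpy.interp(probability, poes[::-1], loss_ratios[::-1])
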
openquake.risklib.scientific.event_based(loss_values, ses_ratio, curve_resolution)[source]

Compute a loss (or loss ratio) curve.

Parameters:
  • loss_values – The loss ratios (or the losses) computed by applying the vulnerability function
  • ses_ratio – Time representative of the stochastic event set
  • curve_resolution – The number of points the output curve is defined by
openquake.risklib.scientific.exposure_statistics(loss_curves, map_poes, weights, quantiles)[source]

Compute exposure statistics for N assets and R realizations.

Parameters:
  • loss_curves – a list with N loss curve data items. Each item holds a 2-tuple with 1) the loss ratios on which the curves have been defined and 2) the poes of the R curves
  • map_poes – a numpy array with P poes used to compute loss maps
  • weights – a list of R weights (one per realization) used to compute mean/quantile weighted statistics
  • quantiles – the quantile levels used to compute quantile results
Returns:

a tuple with four elements:
  1. a numpy array with N mean loss curves
  2. a numpy array with P x N mean map values
  3. a numpy array with Q x N quantile loss curves
  4. a numpy array with Q x P quantile map values

openquake.risklib.scientific.extract_poe_ins(name)[source]
>>> extract_poe_ins('poe-0.1')
(0.1, False)
>>> extract_poe_ins('poe-0.2_ins')
(0.2, True)
openquake.risklib.scientific.fine_graining(points, steps)[source]
Parameters:
  • points – a list of floats
  • steps (int) – expansion steps (>= 2)
>>> fine_graining([0, 1], steps=0)
[0, 1]
>>> fine_graining([0, 1], steps=1)
[0, 1]
>>> fine_graining([0, 1], steps=2)
array([ 0. ,  0.5,  1. ])
>>> fine_graining([0, 1], steps=3)
array([ 0.        ,  0.33333333,  0.66666667,  1.        ])
>>> fine_graining([0, 0.5, 0.7, 1], steps=2)
array([ 0.  ,  0.25,  0.5 ,  0.6 ,  0.7 ,  0.85,  1.  ])

N points become S * (N - 1) + 1 points with S > 0

openquake.risklib.scientific.insured_loss_curve(curve, deductible, insured_limit)[source]

Compute an insured loss ratio curve given a loss ratio curve

Parameters:
  • curve – an array 2 x R (where R is the curve resolution)
  • deductible (float) – the deductible limit in fraction form
  • insured_limit (float) – the insured limit in fraction form
>>> losses = numpy.array([3, 20, 101])
>>> poes = numpy.array([0.9, 0.5, 0.1])
>>> insured_loss_curve(numpy.array([losses, poes]), 5, 100)
array([[  3.        ,  20.        ],
       [  0.85294118,   0.5       ]])
openquake.risklib.scientific.insured_losses(losses, deductible, insured_limit)[source]
Parameters:
  • losses – an array of ground-up loss ratios
  • deductible (float) – the deductible limit in fraction form
  • insured_limit (float) – the insured limit in fraction form

Compute insured losses for the given asset and losses, from the point of view of the insurance company. For instance:

>>> insured_losses(numpy.array([3, 20, 101]), 5, 100)
array([ 0, 15, 95])
  • if the loss is 3 (< 5) the company does not pay anything
  • if the loss is 20 the company pays 20 - 5 = 15
  • if the loss is 101 the company pays 100 - 5 = 95
openquake.risklib.scientific.loss_map_matrix(poes, curves)[source]

Wrapper around openquake.risklib.scientific.conditional_loss_ratio(). Return a matrix of shape (num-poes, num-curves). The curves are lists of pairs (loss_ratios, poes).

openquake.risklib.scientific.make_epsilons(matrix, seed, correlation)[source]

Given a matrix N x R, return a matrix of the same shape obtained by applying the multivariate_normal distribution to N points and R samples, starting from the given seed and correlation.
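
A sketch of the idea with a constant-correlation covariance matrix; only the shape of the input matrix is used here, and the real implementation may differ in details:

import numpy

def make_epsilons_sketch(matrix, seed, correlation):
    n, r = matrix.shape  # N assets x R samples
    cov = numpy.full((n, n), correlation)
    numpy.fill_diagonal(cov, 1.)
    rng = numpy.random.RandomState(seed)
    # one row of epsilons per asset, one column per sample
    return rng.multivariate_normal(numpy.zeros(n), cov, r).T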

openquake.risklib.scientific.mean_curve(values, weights=None)[source]

Compute the mean by using numpy.average on the first axis.

openquake.risklib.scientific.mean_std(fractions)[source]

Given an N x M matrix, returns mean and std computed on the rows, i.e. two M-dimensional vectors.

openquake.risklib.scientific.normalize_curves(curves)[source]
Parameters:curves – a list of pairs (losses, poes)
Returns:first losses, all_poes
openquake.risklib.scientific.normalize_curves_eb(curves)[source]

A more sophisticated version of normalize_curves, used in the event based calculator.

Parameters:curves – a list of pairs (losses, poes)
Returns:first losses, all_poes
openquake.risklib.scientific.pairwise_diff(values)[source]

Differences between a value and the next value in a sequence

openquake.risklib.scientific.pairwise_mean(values)[source]

Averages between a value and the next value in a sequence

openquake.risklib.scientific.quantile_curve(curves, quantile, weights=None)[source]

Compute the weighted quantile aggregate of a set of curves when using the logic tree end-branch enumeration approach, or just the standard quantile when using the sampling approach.

Parameters:
  • curves – 2D array-like of curve PoEs. Each row represents the PoEs for a single curve
  • quantile – Quantile value to calculate. Should be in the range [0.0, 1.0].
  • weights – Array-like of weights, 1 for each input curve, or None
Returns:

A numpy array representing the quantile aggregate
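
A sketch of one common definition of a weighted quantile, applied to a single curve ordinate (the actual function operates on whole curves):

import numpy

def weighted_quantile_sketch(values, quantile, weights=None):
    # values: the PoEs of the different curves at one ordinate
    values = numpy.array(values, float)
    if weights is None:  # sampling approach: plain quantile
        return numpy.percentile(values, quantile * 100)
    # end-branch enumeration: first value whose cumulative weight
    # reaches the requested quantile
    order = values.argsort()
    cumw = numpy.cumsum(numpy.array(weights, float)[order])
    idx = min(numpy.searchsorted(cumw, quantile), len(values) - 1)
    return values[order][idx]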

openquake.risklib.scientific.quantile_matrix(values, quantiles, weights)[source]
Parameters:
  • values – a matrix R x N, where N is the number of assets and R the number of realizations
  • quantiles – a list of Q quantiles
  • weights – a list of R weights
Returns:

a matrix Q x N

openquake.risklib.scientific.scenario_damage(fragility_functions, gmv)[source]

Compute the damage state fractions for the given ground motion value. Return an array of M values, where M is the number of damage states.
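
A sketch of the usual computation, assuming that each fragility function, called on a ground motion value, returns the probability of exceeding its limit state (the actual interface may differ):

import numpy

def scenario_damage_sketch(fragility_functions, gmv):
    # probability of exceeding each limit state at the given gmv
    poes = [ff(gmv) for ff in fragility_functions]
    # fractions in each damage state, the no-damage state included
    return -numpy.diff([1.] + poes + [0.])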

openquake.risklib.utils module

class openquake.risklib.utils.memoized(func)[source]

Bases: object

Minimalistic memoizer decorator
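
A rough sketch of what such a decorator typically looks like (not the actual code):

import functools

class memoized_sketch(object):
    def __init__(self, func):
        self.func = func
        self.cache = {}
        functools.update_wrapper(self, func)

    def __call__(self, *args):
        # cache on the positional arguments only
        if args not in self.cache:
            self.cache[args] = self.func(*args)
        return self.cache[args]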

openquake.risklib.utils.numpy_map(f, *args)[source]
openquake.risklib.utils.pairwise(iterable)[source]

s -> (s0,s1), (s1,s2), (s2, s3), ...

openquake.risklib.valid module

Validation library for the engine, the desktop tools, and anything else

class openquake.risklib.valid.Choice(*choices)[source]

Bases: object

Check if the choice is valid (case sensitive).

class openquake.risklib.valid.ChoiceCI(*choices)[source]

Bases: object

Check if the choice is valid (case insensitive version).

class openquake.risklib.valid.Choices(*choices)[source]

Bases: openquake.risklib.valid.Choice

Convert the choices, passed as a comma separated string, into a tuple of validated strings. For instance

>>> Choices('xml', 'csv')('xml,csv')
('xml', 'csv')
class openquake.risklib.valid.FloatRange(minrange, maxrange)[source]

Bases: object

openquake.risklib.valid.IML(value, IMT, minIML=None, maxIML=None, imlUnit=None)[source]

Convert a node of the form

<IML IMT="PGA" imlUnit="g" minIML="0.02" maxIML="1.5"/>

into ("PGA", None, 0.02, 1.5) and a node

<IML IMT="MMI" imlUnit="g">7 8 9 10 11</IML>

into ("MMI", [7., 8., 9., 10., 11.], None, None)

class openquake.risklib.valid.MetaParamSet(name, bases, dic)[source]

Bases: type

Set the .name attribute of every Param instance defined inside any subclass of ParamSet.

class openquake.risklib.valid.NoneOr(cast)[source]

Bases: object

Accept the empty string (cast to None) or something else validated by the underlying cast validator.

class openquake.risklib.valid.Param(validator, default=<object object>, name=None)[source]

Bases: object

A descriptor for validated parameters with a default, to be used as attributes in ParamSet objects.

Parameters:
  • validator – the validator
  • default – the default value
NODEFAULT = <object object>
class openquake.risklib.valid.ParamSet(**names_vals)[source]

Bases: openquake.baselib.hdf5.LiteralAttrs

A set of valid interrelated parameters. Here is an example of usage:

>>> class MyParams(ParamSet):
...     a = Param(positiveint)
...     b = Param(positivefloat)
...
...     def is_valid_not_too_big(self):
...         "The sum of a and b must be under 10: a={a} and b={b}"
...         return self.a + self.b < 10
>>> mp = MyParams(a='1', b='7.2')
>>> mp
<MyParams a=1, b=7.2>
>>> MyParams(a='1', b='9.2').validate()
Traceback (most recent call last):
...
ValueError: The sum of a and b must be under 10: a=1 and b=9.2

The constraints are applied in lexicographic order. The attribute corresponding to a Param descriptor can be set as usual:

>>> mp.a = '2'
>>> mp.a
'2'

A list with the literal strings can be extracted as follows:

>>> mp.to_params()
[('a', "'2'"), ('b', '7.2')]

It is possible to build a new object from a dictionary of parameters which are assumed to be already validated:

>>> MyParams.from_(dict(a="'2'", b='7.2'))
<MyParams a='2', b=7.2>
classmethod check(dic)[source]

Convert a dictionary name->string into a dictionary name->value by converting the string. If the name does not correspond to a known parameter, just ignore it and print a warning.

classmethod from_(dic)[source]

Build a new ParamSet from a dictionary of string-valued parameters which are assumed to be already valid.

params = {}
to_params()[source]

Convert the instance dictionary into a sorted list of pairs (name, valrepr) where valrepr is the string representation of the underlying value.

validate()[source]

Apply the is_valid methods to self and possibly raise a ValueError.

class openquake.risklib.valid.Regex(regex)[source]

Bases: object

Compare the value with the given regex

class openquake.risklib.valid.SimpleId(length, regex='^[\w_\-]+$')[source]

Bases: object

Check if the given value is a valid ID.

Parameters:
  • length – maximum length of the ID
  • regex – accepted characters
class openquake.risklib.valid.SiteParam(z1pt0, z2pt5, measured, vs30, lon, lat, backarc)

Bases: tuple

backarc

Alias for field number 6

lat

Alias for field number 5

lon

Alias for field number 4

measured

Alias for field number 2

vs30

Alias for field number 3

z1pt0

Alias for field number 0

z2pt5

Alias for field number 1

openquake.risklib.valid.ab_values(value)[source]

a and b values of the GR magnitude-scaling relation. a is a positive float, b is just a float.

openquake.risklib.valid.boolean(value)[source]
Parameters:value – input string such as ‘0’, ‘1’, ‘true’, ‘false’
Returns:boolean
>>> boolean('')
False
>>> boolean('True')
True
>>> boolean('false')
False
>>> boolean('t')
Traceback (most recent call last):
    ...
ValueError: Not a boolean: t
openquake.risklib.valid.check_levels(imls, imt)[source]

Raise a ValueError if the given levels are invalid.

Parameters:
  • imls – a list of intensity measure levels
  • imt – the intensity measure type
>>> check_levels([0.1, 0.2], 'PGA')  # ok
>>> check_levels([0.1], 'PGA')
Traceback (most recent call last):
   ...
ValueError: Not enough imls for PGA: [0.1]
>>> check_levels([0.2, 0.1], 'PGA')
Traceback (most recent call last):
   ...
ValueError: The imls for PGA are not sorted: [0.2, 0.1]
>>> check_levels([0.2, 0.2], 'PGA')
Traceback (most recent call last):
   ...
ValueError: Found duplicated levels for PGA: [0.2, 0.2]
openquake.risklib.valid.check_weights(nodes_with_a_weight)[source]

Ensure that the sum of the values is 1

Parameters:nodes_with_a_weight – a list of Node objects with a weight attribute
openquake.risklib.valid.compose(*validators)[source]

Implement composition of validators. For instance

>>> utf8_not_empty = compose(utf8, not_empty)
openquake.risklib.valid.coordinates(value)[source]

Convert a non-empty string into a list of lon-lat coordinates.

>>> coordinates('')
Traceback (most recent call last):
   ...
ValueError: Empty list of coordinates: ''
>>> coordinates('1.1 1.2')
[(1.1, 1.2)]
>>> coordinates('1.1 1.2, 2.2 2.3')
[(1.1, 1.2), (2.2, 2.3)]

openquake.risklib.valid.decreasing_probabilities(value)[source]
Parameters:value – input string, comma separated or space separated
Returns:a list of decreasing probabilities
>>> decreasing_probabilities('1')
Traceback (most recent call last):
...
ValueError: Not enough probabilities, found '1'
>>> decreasing_probabilities('0.2 0.1')
[0.2, 0.1]
>>> decreasing_probabilities('0.1 0.2')
Traceback (most recent call last):
...
ValueError: The probabilities 0.1 0.2 are not in decreasing order
openquake.risklib.valid.depth(value)[source]
Parameters:value – input string
Returns:float >= 0
openquake.risklib.valid.dictionary(value)[source]
Parameters:value – input string corresponding to a literal Python object
Returns:the Python object
>>> dictionary('')
{}
>>> dictionary('{}')
{}
>>> dictionary('{"a": 1}')
{'a': 1}
>>> dictionary('"vs30_clustering: true"')  # an error really done by a user
Traceback (most recent call last):
   ...
ValueError: '"vs30_clustering: true"' is not a valid Python dictionary
openquake.risklib.valid.float_(value)[source]
Parameters:value – input string
Returns:a floating point number
openquake.risklib.valid.floatdict(value)[source]
Parameters:value – input string corresponding to a literal Python number or dictionary
Returns:a Python dictionary key -> number
>>> floatdict("200")
{'default': 200}
>>> text = "{'active shallow crust': 250., 'default': 200}"
>>> sorted(floatdict(text).items())
[('active shallow crust', 250.0), ('default', 200)]
openquake.risklib.valid.gsim(value, **kwargs)[source]

Make sure the given value is the name of an available GSIM class.

>>> gsim('BooreAtkinson2011')
'BooreAtkinson2011()'
openquake.risklib.valid.hazard_id(value)[source]
>>> hazard_id('')
()
>>> hazard_id('-1')
(-1,)
>>> hazard_id('42')
(42,)
>>> hazard_id('42,3')
(42, 3)
>>> hazard_id('42,3,4')
(42, 3, 4)
>>> hazard_id('42:3')
Traceback (most recent call last):
   ...
ValueError: Invalid hazard_id '42:3'
openquake.risklib.valid.hypo_list(nodes)[source]
Parameters:nodes – a hypoList node with N hypocenter nodes
Returns:a numpy array of shape (N, 3) with strike, dip and weight
openquake.risklib.valid.integers(value)[source]
Parameters:value – input string
Returns:non-empty list of integers
>>> integers('1, 2')
[1, 2]
>>> integers(' ')
Traceback (most recent call last):
   ...
ValueError: Not a list of integers: ' '
openquake.risklib.valid.intensity_measure_type(value)[source]

Make sure value is a valid intensity measure type and return it in a normalized form

>>> intensity_measure_type('SA(0.10)')  # NB: strips the trailing 0
'SA(0.1)'
>>> intensity_measure_type('SA')  # this is invalid
Traceback (most recent call last):
  ...
ValueError: Invalid IMT: 'SA'
openquake.risklib.valid.intensity_measure_types(value)[source]
Parameters:value – input string
Returns:non-empty list of Intensity Measure Type objects
>>> intensity_measure_types('PGA')
['PGA']
>>> intensity_measure_types('PGA, SA(1.00)')
['PGA', 'SA(1.0)']
>>> intensity_measure_types('SA(0.1), SA(0.10)')
Traceback (most recent call last):
  ...
ValueError: Duplicated IMTs in SA(0.1), SA(0.10)
openquake.risklib.valid.intensity_measure_types_and_levels(value)[source]
Parameters:value – input string
Returns:Intensity Measure Type and Levels dictionary
>>> intensity_measure_types_and_levels('{"SA(0.10)": [0.1, 0.2]}')
{'SA(0.1)': [0.1, 0.2]}
openquake.risklib.valid.latitude(value)[source]
Parameters:value – input string
Returns:latitude float, rounded to 5 digits, i.e. a maximum error of about 1 meter
>>> latitude('-0.123456')
-0.12346
openquake.risklib.valid.latitudes(value)[source]
Parameters:value – a comma separated string of latitudes
Returns:a list of latitudes
openquake.risklib.valid.lon_lat(value)[source]
Parameters:value – a pair of coordinates
Returns:a tuple (longitude, latitude)
>>> lon_lat('12 14')
(12.0, 14.0)
openquake.risklib.valid.lon_lat_iml(value, lon, lat, iml)[source]

Used to convert nodes of the form <node lon="LON" lat="LAT" iml="IML" />

openquake.risklib.valid.longitude(value)[source]
Parameters:value – input string
Returns:longitude float, rounded to 5 digits, i.e. a maximum error of about 1 meter
>>> longitude('0.123456')
0.12346
openquake.risklib.valid.longitudes(value)[source]
Parameters:value – a comma separated string of longitudes
Returns:a list of longitudes
openquake.risklib.valid.loss_ratios(value)[source]
Parameters:value – input string
Returns:dictionary loss_type -> loss ratios
>>> loss_ratios('{"structural": [0.1, 0.2]}')
{'structural': [0.1, 0.2]}
openquake.risklib.valid.mag_scale_rel(value)[source]
Parameters:value – name of a Magnitude-Scale relationship in hazardlib
Returns:the corresponding hazardlib object
openquake.risklib.valid.namelist(value)[source]
Parameters:value – input string
Returns:list of identifiers separated by whitespace or commas
>>> namelist('a,b')
['a', 'b']
>>> namelist('a1  b_2       _c')
['a1', 'b_2', '_c']
>>> namelist('a1 b_2 1c')
Traceback (most recent call last):
    ...
ValueError: List of names containing an invalid name: 1c
openquake.risklib.valid.nonzero(value)[source]
Parameters:value – input string
Returns:the value unchanged
>>> nonzero('1')
'1'
>>> nonzero('0')
Traceback (most recent call last):
  ...
ValueError: '0' is zero
openquake.risklib.valid.not_empty(value)[source]

Check that the string is not all blanks

openquake.risklib.valid.pmf(value)[source]

Convert a string into a Probability Mass Function.

Parameters:value – a sequence of probabilities summing up to 1 (no commas)
Returns:a list of pairs [(probability, index), ...] with index starting from 0
>>> pmf("0.157 0.843")
[(0.157, 0), (0.843, 1)]
openquake.risklib.valid.point2d(value, lon, lat)[source]

This is used to convert nodes of the form <location lon="LON" lat="LAT" />

Parameters:
  • value – None
  • lon – longitude string
  • lat – latitude string
Returns:

a validated pair (lon, lat)

openquake.risklib.valid.point3d(value, lon, lat, depth)[source]

This is used to convert nodes of the form <hypocenter lon="LON" lat="LAT" depth="DEPTH"/>

Parameters:
  • value – None
  • lon – longitude string
  • lat – latitude string
  • depth – depth string
Returns:

a validated triple (lon, lat, depth)

openquake.risklib.valid.posList(value)[source]
Parameters:value – a string with the form lon1 lat1 [depth1] ... lonN latN [depthN] without commas, where the depths are optional.
Returns:a list of floats without other validations
openquake.risklib.valid.positivefloat(value)[source]
Parameters:value – input string
Returns:positive float
openquake.risklib.valid.positivefloats(value)[source]
Parameters:value – string of whitespace separated floats
Returns:a list of positive floats
openquake.risklib.valid.positiveint(value)[source]
Parameters:value – input string
Returns:positive integer
openquake.risklib.valid.positiveints(value)[source]
>>> positiveints('1, -1')
Traceback (most recent call last):
   ...
ValueError: -1 is negative in '1, -1'
openquake.risklib.valid.probabilities(value)[source]
Parameters:value – input string, comma separated or space separated
Returns:a list of probabilities
>>> probabilities('')
[]
>>> probabilities('1')
[1.0]
>>> probabilities('0.1 0.2')
[0.1, 0.2]
>>> probabilities('0.1, 0.2')  # commas are ignored
[0.1, 0.2]
openquake.risklib.valid.probability_depth(value, probability, depth)[source]

This is used to convert nodes of the form <hypoDepth probability="PROB" depth="DEPTH" />

Parameters:
  • value – None
  • probability – a probability
  • depth – a depth
Returns:

a validated pair (probability, depth)

openquake.risklib.valid.site_param(z1pt0, z2pt5, vs30Type, vs30, lon, lat, backarc='false')[source]

Used to convert a node like

<site lon="24.7125" lat="42.779167" vs30="462" vs30Type="inferred" z1pt0="100" z2pt5="5" backarc="False"/>

into a 7-tuple (z1pt0, z2pt5, measured, vs30, lon, lat, backarc)

openquake.risklib.valid.slip_list(nodes)[source]
Parameters:nodes – a slipList node with N slip nodes
Returns:a numpy array of shape (N, 2) with slip angle and weight
openquake.risklib.valid.utf8(value)[source]

Check that the string is UTF-8. Returns an encoded bytestring.

>>> utf8(b'\xe0')  
Traceback (most recent call last):
...
ValueError: Not UTF-8: ...
openquake.risklib.valid.utf8_not_empty(value)[source]

Check that the string is UTF-8 and not empty

openquake.risklib.valid.wkt_polygon(value)[source]

Convert a string with a comma separated list of coordinates into a WKT polygon, by closing the ring.
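
A sketch of the conversion, assuming the usual WKT syntax (the exact formatting produced by the engine may differ):

def wkt_polygon_sketch(value):
    # value is a comma separated list of "lon lat" pairs
    points = [point.strip() for point in value.split(',')]
    points.append(points[0])  # close the ring
    return 'POLYGON((%s))' % ', '.join(points)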

Module contents

class openquake.risklib.VulnerabilityFunction(vf_id, imt, imls, mean_loss_ratios, covs=None, distribution='LN')[source]

Bases: object

dtype = dtype([('iml', '<f4'), ('loss_ratio', '<f4'), ('cov', '<f4')])
init()[source]
interpolate(gmvs)[source]
Parameters:gmvs – array of intensity measure levels
Returns:(interpolated loss ratios, interpolated covs, indices > min)
loss_ratio_exceedance_matrix = <functools.partial object>
mean_imls = <functools.partial object>
mean_loss_ratios_with_steps(steps)[source]

Split the mean loss ratios, producing a new set of loss ratios. The new set of loss ratios always includes 0.0 and 1.0

Parameters:steps (int) –

the number of steps we make to go from one loss ratio to the next. For example, if we have [0.5, 0.7]:

steps = 1 produces [0.0,  0.5, 0.7, 1]
steps = 2 produces [0.0, 0.25, 0.5, 0.6, 0.7, 0.85, 1]
steps = 3 produces [0.0, 0.17, 0.33, 0.5, 0.57, 0.63,
                    0.7, 0.8, 0.9, 1]
sample(means, covs, idxs, epsilons)[source]

Sample the epsilons and apply the corrections to the means. This method is called only if there are nonzero covs.

Parameters:
  • means – array of E’ loss ratios
  • covs – array of E’ floats
  • idxs – array of E booleans with E >= E’
  • epsilons – array of E floats
Returns:

array of E’ loss ratios

set_distribution(epsilons=None)[source]
strictly_increasing()[source]
Returns:a new vulnerability function that is strictly increasing. It is built by removing the pieces of the function where the mean loss ratio is constant.
class openquake.risklib.DegenerateDistribution[source]

Bases: openquake.risklib.scientific.Distribution

The degenerate distribution, i.e. a distribution with a delta corresponding to the mean.

sample(means, _covs, _stddev, _idxs)[source]
survival(loss_ratio, mean, _stddev)[source]
openquake.risklib.classical(vulnerability_function, hazard_imls, hazard_poes, steps=10)[source]
Parameters:
  • vulnerability_function – an instance of openquake.risklib.scientific.VulnerabilityFunction representing the vulnerability function used to compute the curve.
  • hazard_imls – the hazard intensity measure type and levels
  • steps (int) – Number of steps between loss ratios.