openquake.calculators package

base module

class openquake.calculators.base.BaseCalculator(oqparam, calc_id=None)[source]

Bases: object

Abstract base class for all calculators.

Parameters:
  • oqparam – OqParam object
  • monitor – monitor object
  • calc_id – numeric calculation ID
before_export()[source]

Set the nbytes attributes

core_task(*args)[source]

Core routine running on the workers.

execute()[source]

Execution phase. Usually runs the core function in parallel and returns a dictionary with the results.

export(exports=None)[source]

Export all the outputs in the datastore in the given export formats. Individual outputs are not exported if there are multiple realizations.

from_engine = False
is_stochastic = False
monitor(operation='', **kw)[source]
Returns:a new Monitor instance
post_execute(result)[source]

Post-processing phase of the aggregated output. It must be overridden with the export code. It will return a dictionary of output files.

pre_execute()[source]

Initialization phase.

run(pre_execute=True, concurrent_tasks=None, close=True, **kw)[source]

Run the calculation and return the exported outputs.

save_params(**kw)[source]

Update the current calculation parameters and save engine_version

set_log_format()[source]

Set the format of the root logger
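The run() method drives a fixed lifecycle over the phases documented above (pre_execute, execute, post_execute). A toy sketch of that template-method pattern, with illustrative stand-in bodies rather than the real openquake classes:

```python
# Toy sketch of the BaseCalculator lifecycle (template-method pattern).
# The phase names mirror the documented API; the bodies are made up.

class ToyCalculator:
    def pre_execute(self):
        """Initialization phase: read the inputs."""
        self.data = [1, 2, 3]

    def execute(self):
        """Execution phase: return a dictionary with the results."""
        return {'total': sum(self.data)}

    def post_execute(self, result):
        """Post-processing phase: build a dictionary of output files."""
        self.outputs = {'total.csv': str(result['total'])}

    def run(self):
        """Run the calculation and return the exported outputs."""
        self.pre_execute()
        result = self.execute()
        self.post_execute(result)
        return self.outputs

calc = ToyCalculator()
print(calc.run())  # {'total.csv': '6'}
```

Subclasses in the real package override the same hooks while run() keeps the orchestration in one place.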

class openquake.calculators.base.HazardCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.BaseCalculator

Base class for hazard calculators based on source models

can_read_parent()[source]
Returns:the parent datastore if it is present and can be read from the workers, None otherwise
check_floating_spinning()[source]
check_overflow()[source]

Overridden in event based

count_eff_ruptures(result_dict, src_group_id)[source]

Returns the number of ruptures in the src_group (after filtering) or 0 if the src_group has been filtered away.

Parameters:
  • result_dict – a dictionary with keys (grp_id, gsim)
  • src_group_id – the source group ID
filter_csm()[source]
Returns:(filtered CompositeSourceModel, SourceFilter)
get_min_iml(oq)[source]
init()[source]

To be overridden to initialize the datasets needed by the calculation

load_riskmodel()[source]

Read the risk model and set the attribute .riskmodel. The riskmodel can be empty for hazard calculations. Save the loss ratios (if any) in the datastore.

post_process()[source]

For compatibility with the engine

pre_execute(pre_calculator=None)[source]

Check if there is a previous calculation ID. If yes, read the inputs by retrieving the previous calculation; if not, read the inputs directly.

precalc = None
read_exposure(haz_sitecol=None)[source]

Read the exposure, the riskmodel and update the attributes .sitecol, .assetcol

read_inputs()[source]

Read risk data and sources if any

store_source_info(infos, acc)[source]
exception openquake.calculators.base.InvalidCalculationID[source]

Bases: Exception

Raised when running a post-calculation on top of an incompatible pre-calculation

class openquake.calculators.base.RiskCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.HazardCalculator

Base class for all risk calculators. A risk calculator must set the attributes .riskmodel, .sitecol, .assetcol, .riskinputs in the pre_execute phase.

R
Returns:the number of realizations as read from csm_info
build_riskinputs(kind, eps=None, num_events=0)[source]
Parameters:
  • kind – kind of hazard getter, can be ‘poe’ or ‘gmf’
  • eps – a matrix of epsilons (or None)
  • num_events – how many events there are
Returns:

a list of RiskInputs objects, sorted by IMT.

combine(acc, res)[source]
execute()[source]

Parallelize on the riskinputs and return a dictionary of results. Requires a .core_task to be defined with signature (riskinputs, riskmodel, rlzs_assoc, monitor).

read_shakemap(haz_sitecol, assetcol)[source]

Enabled only if there is a shakemap_id parameter in the job.ini. Download, unzip, parse USGS shakemap files and build a corresponding set of GMFs which are then filtered with the hazard site collection and stored in the datastore.

openquake.calculators.base.build_hmaps(hcurves_by_kind, slice_, imtls, poes, monitor)[source]

Build hazard maps from a slice of hazard curves.

Returns:a pair ({kind: hmaps}, slice)
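Conceptually, a hazard map gives, for each site, the intensity level whose probability of exceedance equals a target poe, obtained by interpolating the hazard curve. A minimal numpy sketch of that interpolation (illustrative only; the real build_hmaps operates on probability maps):

```python
import numpy as np

def hmap_from_curve(imls, poes_curve, target_poe):
    """Interpolate one hazard curve at a target PoE.
    imls: increasing intensity levels; poes_curve: decreasing PoEs."""
    # np.interp needs increasing x values, so reverse the decreasing curve
    return np.interp(target_poe, poes_curve[::-1], imls[::-1])

imls = np.array([0.01, 0.1, 0.5, 1.0])
curve = np.array([0.99, 0.6, 0.1, 0.01])  # PoE decreases with intensity
print(hmap_from_curve(imls, curve, 0.1))  # 0.5
```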

openquake.calculators.base.check_precalc_consistency(calc_mode, precalc_mode)[source]

Defensive programming against users providing an incorrect pre-calculation ID (with --hazard-calculation-id)

Parameters:
  • calc_mode – calculation_mode of the current calculation
  • precalc_mode – calculation_mode of the previous calculation
openquake.calculators.base.check_time_event(oqparam, occupancy_periods)[source]

Check the time_event parameter in the datastore, by comparing with the periods found in the exposure.

openquake.calculators.base.get_gmv_data(sids, gmfs)[source]

Convert an array of shape (R, N, E, I) into an array of type gmv_data_dt

openquake.calculators.base.import_gmfs(dstore, fname, sids)[source]

Import in the datastore a ground motion field CSV file.

Parameters:
  • dstore – the datastore
  • fname – the CSV file
  • sids – the site IDs (complete)
Returns:

event_ids, num_rlzs

openquake.calculators.base.save_gmdata(calc, n_rlzs)[source]

Save a composite array gmdata in the datastore.

Parameters:
  • calc – a calculator with a dictionary .gmdata {rlz: data}
  • n_rlzs – the total number of realizations
openquake.calculators.base.save_gmf_data(dstore, sitecol, gmfs, eids=())[source]
Parameters:
openquake.calculators.base.save_gmfs(calculator)[source]
Parameters:calculator – a scenario_risk/damage or event_based_risk calculator
Returns:a pair (eids, R) where R is the number of realizations
openquake.calculators.base.set_array(longarray, shortarray)[source]
Parameters:
  • longarray – a numpy array of floats of length L >= l
  • shortarray – a numpy array of floats of length l

Fill longarray with the values of shortarray, starting from the left. If shortarray is shorter than longarray, the remaining elements on the right are filled with numpy.nan values.
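A possible implementation of this contract, assuming plain numpy float arrays:

```python
import numpy as np

def set_array(longarray, shortarray):
    """Fill longarray from the left with shortarray; pad the rest with nan."""
    n = len(shortarray)
    longarray[:n] = shortarray
    longarray[n:] = np.nan

a = np.zeros(5)
set_array(a, np.array([1.0, 2.0]))
print(a)  # [ 1.  2. nan nan nan]
```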

getters module

class openquake.calculators.getters.GmfDataGetter(dstore, sids, num_rlzs, imtls)[source]

Bases: collections.abc.Mapping

A dictionary-like object {sid: dictionary by realization index}

get_hazard(gsim=None)[source]
Parameters:gsim – ignored
Returns:an OrderedDict rlzi -> datadict
init()[source]
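Since GmfDataGetter derives from collections.abc.Mapping, it only needs __getitem__, __iter__ and __len__; keys(), items(), get() and membership tests come for free. A toy illustration of that pattern with made-up data (not the real datastore-backed class):

```python
from collections.abc import Mapping

class ToyGmfDataGetter(Mapping):
    """Dictionary-like object {sid: dictionary by realization index}."""
    def __init__(self, data_by_sid):
        self.data_by_sid = data_by_sid

    def __getitem__(self, sid):
        return self.data_by_sid[sid]

    def __iter__(self):
        return iter(self.data_by_sid)

    def __len__(self):
        return len(self.data_by_sid)

getter = ToyGmfDataGetter({0: {0: [0.1, 0.2]}, 1: {0: [0.3]}})
print(len(getter), list(getter), getter[0])
```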
class openquake.calculators.getters.GmfGetter(rlzs_by_gsim, ebruptures, sitecol, oqparam, min_iml, samples=1)[source]

Bases: object

A hazard getter with methods .gen_gmv and .get_hazard returning ground motion values.

compute_gmfs_curves(monitor)[source]
Returns:a dict with keys gmdata, gmfdata, indices, hcurves
gen_gmv()[source]

Compute the GMFs for the given realization and populate the .gmdata array. Yields tuples of the form (sid, eid, imti, gmv).

get_hazard(data=None)[source]
Parameters:data – if given, an iterator of records of dtype gmf_data_dt
Returns:an array (rlzi, sid, imti) -> array(gmv, eid)
imtls
init()[source]

Initialize the computers. Should be called on the workers

sids
class openquake.calculators.getters.PmapGetter(dstore, rlzs_assoc=None, sids=None)[source]

Bases: object

Read hazard curves from the datastore for all realizations or for a specific realization.

Parameters:
  • dstore – a DataStore instance or file system path to it
  • sids – the subset of sites to consider (if None, all sites)
  • rlzs_assoc – a RlzsAssoc instance (if None, infers it)
get(rlzi, grp=None)[source]
Parameters:
  • rlzi – a realization index
  • grp – None (all groups) or a string of the form “grp-XX”
Returns:

the hazard curves for the given realization

get_hazard(gsim=None)[source]
Parameters:gsim – ignored
Returns:an OrderedDict rlzi -> datadict
get_hcurves(imtls=None)[source]
Parameters:imtls – intensity measure types and levels
Returns:an array of (R, N) hazard curves
get_mean(grp=None)[source]

Compute the mean curve as a ProbabilityMap

Parameters:grp – if not None must be a string of the form “grp-XX”; in that case returns the mean considering only the contribution for group XX
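The mean curve is the weighted average of the realization curves, with the weights taken from the logic tree. A numpy sketch under that assumption (toy numbers, not engine data):

```python
import numpy as np

# 3 realizations x 4 intensity levels of PoEs, plus logic-tree weights
curves = np.array([[0.90, 0.5, 0.10, 0.010],
                   [0.80, 0.4, 0.05, 0.005],
                   [0.95, 0.6, 0.20, 0.020]])
weights = np.array([0.5, 0.3, 0.2])  # must sum to 1

mean_curve = weights @ curves  # weighted average over realizations
print(mean_curve)
```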
get_pmaps(sids)[source]
Parameters:sids – an array of S site IDs
Returns:a list of R probability maps
init()[source]

Read the poes and set the .data attribute with the hazard curves

items(kind='')[source]

Extract probability maps from the datastore, possibly generating on the fly the ones corresponding to the individual realizations. Yields pairs (tag, pmap).

Parameters:kind – the kind of PoEs to extract; if not given, returns the realization if there is only one or the statistics otherwise.
pmap_by_grp
Returns:dictionary “grp-XXX” -> ProbabilityMap instance
weights
class openquake.calculators.getters.RuptureGetter(dstore, mask=None, grp_id=None)[source]

Bases: object

Iterable over ruptures.

Parameters:
  • dstore – a DataStore instance with a dataset named ruptures
  • mask – which ruptures to read; it can be None (read all ruptures), a slice, a boolean mask, or a list of integers
  • grp_id – the group ID of the ruptures, if they are homogeneous, or None
classmethod from_(dstore)[source]
Returns:a dictionary grp_id -> RuptureGetter instance
split(block_size)[source]

Split a RuptureGetter in multiple getters, each one containing a block of ruptures.

Parameters:block_size – maximum length of the rupture blocks
Returns:RuptureGetters containing block_size ruptures and with an attribute .n_events counting the total number of events
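The splitting can be illustrated with a simple slicing helper (a sketch; the real split returns RuptureGetter instances and also carries the .n_events attribute):

```python
def split(items, block_size):
    """Split a sequence into blocks of at most block_size elements."""
    return [items[i:i + block_size]
            for i in range(0, len(items), block_size)]

print(split(list(range(7)), 3))  # [[0, 1, 2], [3, 4, 5], [6]]
```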
openquake.calculators.getters.get_maxloss_rupture(dstore, loss_type)[source]
Parameters:
  • dstore – a DataStore instance
  • loss_type – a loss type string
Returns:

EBRupture instance corresponding to the maximum loss for the given loss type

openquake.calculators.getters.get_ruptures_by_grp(dstore, slice_=slice(None, None, None))[source]

Extract the ruptures corresponding to the given slice; if no slice is given, extract all ruptures.

Returns:a dictionary grp_id -> list of EBRuptures

classical module

class openquake.calculators.classical.ClassicalCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.HazardCalculator

Classical PSHA calculator

agg_dicts(acc, pmap_by_grp)[source]

Aggregate dictionaries of hazard curves by updating the accumulator.

Parameters:
  • acc – accumulator dictionary
  • pmap_by_grp – dictionary grp_id -> ProbabilityMap
calc_stats(parent)[source]
core_task(group, src_filter, gsims, param, monitor=<Monitor >)

Compute the hazard curves for a set of sources belonging to the same tectonic region type for all the GSIMs associated to that TRT. The arguments are the same as in calc_hazard_curves(), except for gsims, which is a list of GSIM instances.

Returns:a dictionary {grp_id: pmap} with attributes .grp_ids, .calc_times, .eff_ruptures
execute()[source]

Run in parallel core_task(sources, sitecol, monitor), by parallelizing on the sources according to their weight and tectonic region type.

gen_args(monitor)[source]

Used in the case of large source model logic trees.

Parameters:monitor – a openquake.baselib.performance.Monitor
Yields:(sources, sites, gsims, monitor) tuples
gen_getters(parent)[source]
Yields:pgetter, hstats, monitor
post_execute(pmap_by_grp_id)[source]

Collect the hazard curves by realization and export them.

Parameters:pmap_by_grp_id – a dictionary grp_id -> hazard curves
save_hazard_stats(acc, pmap_by_kind)[source]

Works by side effect by saving statistical hcurves and hmaps on the datastore.

Parameters:
  • acc – ignored
  • pmap_by_kind – a dictionary of ProbabilityMaps

kind can be (‘hcurves’, ‘mean’), (‘hmaps’, ‘mean’), …

zerodict()[source]

Initial accumulator, a dict grp_id -> ProbabilityMap(L, G)

class openquake.calculators.classical.PreCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.classical.ClassicalCalculator

Calculator to filter the sources and compute the number of effective ruptures

core_task(sources, srcfilter, gsims, param, monitor)

Count the number of ruptures contained in the given sources by applying a raw source filtering on the integration distance. Return a dictionary src_group_id -> {}. All sources must belong to the same tectonic region type.

openquake.calculators.classical.build_hazard_stats(pgetter, hstats, monitor)[source]
Parameters:
  • pgetter – an openquake.commonlib.getters.PmapGetter
  • hstats – a list of pairs (statname, statfunc)
  • monitor – instance of Monitor
Returns:

a dictionary kind -> ProbabilityMap

The “kind” is a string of the form ‘rlz-XXX’, ‘mean’ or ‘quantile-XXX’ used to specify the kind of output.

openquake.calculators.classical.count_eff_ruptures(sources, srcfilter, gsims, param, monitor)[source]

Count the number of ruptures contained in the given sources by applying a raw source filtering on the integration distance. Return a dictionary src_group_id -> {}. All sources must belong to the same tectonic region type.

openquake.calculators.classical.fix_ones(pmap)[source]

Physically, an extremely small intensity measure level can have an extremely large probability of exceedance; however, that probability cannot be exactly 1 unless the level is exactly 0. Numerically, the PoE can be 1 and this gives issues when calculating the damage (there is a log(0) in openquake.risklib.scientific.annual_frequency_of_exceedence). Here we solve the issue by replacing the unphysical probabilities 1 with .9999999999999999 (the float64 closest to 1).
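The replacement value is the largest float64 strictly below 1, which numpy can compute directly with nextafter. A sketch of the fix on a plain array (the real function operates on a ProbabilityMap):

```python
import numpy as np

def fix_ones(poes):
    """Replace unphysical PoEs equal to 1 with the closest float64 below 1,
    so that log(1 - poe) stays finite downstream."""
    poes[poes == 1.] = np.nextafter(1., 0.)  # 0.9999999999999999
    return poes

arr = np.array([0.5, 1.0, 0.99])
print(fix_ones(arr))
print(np.log(1 - arr))  # finite everywhere after the fix
```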

openquake.calculators.classical.get_src_ids(sources)[source]
Returns:a string with the source IDs of the given sources, stripping the extension after the colon, if any
openquake.calculators.classical.saving_sources_by_task(iterargs, dstore)[source]

Yield the iterargs again by populating ‘source_data’

classical_bcr module

class openquake.calculators.classical_bcr.ClassicalBCRCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.classical_risk.ClassicalRiskCalculator

Classical BCR Risk calculator

core_task(riskinputs, riskmodel, param, monitor)

Compute and return the average losses for each asset.

Parameters:
post_execute(result)[source]
openquake.calculators.classical_bcr.classical_bcr(riskinputs, riskmodel, param, monitor)[source]

Compute and return the average losses for each asset.

Parameters:

classical_damage module

class openquake.calculators.classical_damage.ClassicalDamageCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.classical_risk.ClassicalRiskCalculator

Classical damage calculator

core_task(riskinputs, riskmodel, param, monitor)

Core function for a classical damage computation.

Parameters:
Returns:

a nested dictionary rlz_idx -> asset -> <damage array>

post_execute(result)[source]

Export the result in CSV format.

Parameters:result – a dictionary (l, r) -> asset_ordinal -> fractions per damage state
openquake.calculators.classical_damage.classical_damage(riskinputs, riskmodel, param, monitor)[source]

Core function for a classical damage computation.

Parameters:
Returns:

a nested dictionary rlz_idx -> asset -> <damage array>

classical_risk module

class openquake.calculators.classical_risk.ClassicalRiskCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.RiskCalculator

Classical Risk calculator

core_task(riskinputs, riskmodel, param, monitor)

Compute and return the average losses for each asset.

Parameters:
post_execute(result)[source]

Saving loss curves in the datastore.

Parameters:result – aggregated result of the task classical_risk
pre_execute()[source]

Associate the assets to the sites and build the riskinputs.

openquake.calculators.classical_risk.classical_risk(riskinputs, riskmodel, param, monitor)[source]

Compute and return the average losses for each asset.

Parameters:

disaggregation module

Disaggregation calculator core functionality

class openquake.calculators.disaggregation.DisaggregationCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.HazardCalculator

Classical PSHA disaggregation calculator

POE_TOO_BIG = "You are trying to disaggregate for poe=%s.\nHowever the source model #%d, '%s',\nproduces at most probabilities of %.7f for rlz=#%d, IMT=%s.\nThe disaggregation PoE is too big or your model is wrong,\nproducing too small PoEs."
agg_result(acc, result)[source]

Collect the results coming from compute_disagg into self.results, a dictionary with key (sid, rlzi, poe, imt, trti) and values which are probability arrays.

Parameters:
  • acc – dictionary k -> dic accumulating the results
  • result – dictionary with the result coming from a task
build_disagg_by_src(iml4)[source]
Parameters:
  • dstore – a datastore
  • iml4 – 4D array of IMLs with shape (N, 1, M, P)
build_stats(results, hstats)[source]
Parameters:
  • results – dict key -> 6D disagg_matrix
  • hstats – (statname, statfunc) pairs
check_poes_disagg(curves)[source]

Raise an error if the given poes_disagg are too small compared to the hazard curves.

execute()[source]

Performs the disaggregation

full_disaggregation(curves)[source]

Run the disaggregation phase.

Parameters:curves – a list of hazard curves, one per site

The curves can be all None if iml_disagg is set in the job.ini

get_NRPM()[source]
Returns:(num_sites, num_rlzs, num_poes, num_imts)
get_curves(sid)[source]

Get all the relevant hazard curves for the given site ordinal. Returns a dictionary rlz_id -> curve_by_imt.

post_execute(results)[source]

Save all the results of the disaggregation. NB: the number of results to save is #sites * #rlzs * #disagg_poes * #IMTs.

Parameters:results – a dictionary (sid, rlzi, poe, imt) -> trti -> disagg matrix
pre_execute()[source]
save_bin_edges()[source]

Save disagg-bins

save_disagg_result(dskey, results)[source]

Save the computed PMFs in the datastore

Parameters:
  • dskey – dataset key; can be ‘disagg’ or ‘disagg-stats’
  • results – a dictionary sid, rlz, poe, imt -> 6D disagg_matrix
openquake.calculators.disaggregation.agg_probs(*probs)[source]

Aggregate probabilities with the usual formula 1 - (1 - P1) … (1 - Pn)
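This is the standard independence formula for combining probabilities; a direct implementation:

```python
def agg_probs(*probs):
    """Aggregate probabilities as 1 - (1 - P1) * ... * (1 - Pn)."""
    acc = 1.
    for prob in probs:
        acc *= 1. - prob
    return 1. - acc

print(agg_probs(0.5, 0.5))       # 0.75
print(agg_probs(0.1, 0.2, 0.3))  # ~0.496
```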

openquake.calculators.disaggregation.compute_disagg(src_filter, sources, cmaker, iml4, trti, bin_edges, oqparam, monitor)[source]
Parameters:
  • src_filter – a openquake.hazardlib.calc.filter.SourceFilter instance
  • sources – list of hazardlib source objects
  • cmaker – a openquake.hazardlib.gsim.base.ContextMaker instance
  • iml4 – an array of intensities of shape (N, R, M, P)
  • trti (dict) – tectonic region type index
  • bin_edges – a dictionary site_id -> edges
  • oqparam – the parameters in the job.ini file
  • monitor – monitor of the currently running job
Returns:

a dictionary of probability arrays, with composite key (sid, rlzi, poe, imt, iml, trti).

event_based module

class openquake.calculators.event_based.EventBasedCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.HazardCalculator

Event based PSHA calculator generating the ground motion fields and the hazard curves from the ruptures, depending on the configuration parameters.

agg_dicts(acc, result)[source]
Parameters:
  • acc – accumulator dictionary
  • result – an AccumDict with events, ruptures, gmfs and hcurves
check_overflow()[source]

Raise a ValueError if the number of sites is larger than 65,536 or the number of IMTs is larger than 256 or the number of ruptures is larger than 4,294,967,296. The limits are due to the numpy dtype used to store the GMFs (gmv_dt). They could be relaxed in the future.
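The quoted limits match the capacity of small unsigned integer dtypes, assuming uint16 site IDs, uint8 IMT indices and uint32 rupture/event IDs inside gmv_dt:

```python
import numpy as np

# Where the documented overflow limits come from:
print(np.iinfo(np.uint16).max + 1)  # 65536 sites
print(np.iinfo(np.uint8).max + 1)   # 256 IMTs
print(np.iinfo(np.uint32).max + 1)  # 4294967296 ruptures
```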

core_task(sources_or_ruptures, src_filter, rlzs_by_gsim, param, monitor)

Compute GMFs and optionally hazard curves

execute()[source]
gen_args(monitor)[source]
Yields:the arguments for compute_gmfs_and_curves
init()[source]
is_stochastic = True
post_execute(result)[source]

Save the SES collection

process_csm()[source]

Prefilter the composite source model and store the source_info

save_gmf_bytes()[source]

Save the attribute nbytes in the gmf_data datasets

save_ruptures(ruptures)[source]

Extend the ‘events’ dataset with the events from the given ruptures; also, save the ruptures if the flag save_ruptures is on.

Parameters:ruptures – a list of EBRuptures
zerodict()[source]

Initial accumulator, a dictionary (grp_id, gsim) -> curves

openquake.calculators.event_based.compute_gmfs(sources_or_ruptures, src_filter, rlzs_by_gsim, param, monitor)[source]

Compute GMFs and optionally hazard curves

openquake.calculators.event_based.get_events(ebruptures)[source]

Extract an array of dtype stored_event_dt from a list of EBRuptures

openquake.calculators.event_based.get_mean_curves(dstore)[source]

Extract the mean hazard curves from the datastore, as a composite array of length nsites.

openquake.calculators.event_based.max_gmf_size(ruptures_by_grp, get_rlzs_by_gsim, samples_by_grp, num_imts)[source]
Parameters:
  • ruptures_by_grp – dictionary grp_id -> EBRuptures
  • rlzs_by_gsim – method grp_id -> {gsim: rlzs}
  • samples_by_grp – dictionary grp_id -> samples
  • num_imts – number of IMTs
Returns:

an upper bound on the size of the GMFs generated by the ruptures, if minimum_intensity is set

openquake.calculators.event_based.set_counts(dstore, dsetname)[source]
Parameters:
  • dstore – a DataStore instance
  • dsetname – name of dataset with a field grp_id
Returns:

a dictionary grp_id -> counts
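The counting itself can be sketched with collections.Counter over the grp_id field of a toy structured array (the real function reads the dataset from the datastore):

```python
import numpy as np
from collections import Counter

# toy dataset with a grp_id field, mimicking a stored structured array
dset = np.array([(0,), (0,), (1,), (0,), (2,)],
                dtype=[('grp_id', np.uint16)])
counts = {int(g): c for g, c in Counter(dset['grp_id']).items()}
print(counts)  # {0: 3, 1: 1, 2: 1}
```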

openquake.calculators.event_based.set_random_years(dstore, name, investigation_time)[source]

Set year labels on the events dataset, depending on the SES ordinal and the investigation time.

Parameters:
  • dstore – a DataStore instance
  • name – name of the dataset (‘events’)
  • investigation_time – investigation time
openquake.calculators.event_based.update_nbytes(dstore, key, array)[source]
openquake.calculators.event_based.weight(src)[source]

event_based_risk module

class openquake.calculators.event_based_risk.EbrCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.RiskCalculator

Event based PSHA calculator generating the total losses by taxonomy

build_datasets(builder)[source]
combine(dummy, results)[source]
Parameters:
  • dummy – unused parameter
  • results – a list of result dictionaries
core_task(riskinputs, riskmodel, param, monitor)
Parameters:
Returns:

a dictionary of numpy arrays of shape (L, R)

epsilon_getter()[source]
Returns:a callable (start, stop) producing a slice of epsilons
is_stochastic = True
post_execute(result)[source]

Save risk data and build the aggregate loss curves

postproc()[source]

Build aggregate loss curves in process

pre_execute()[source]
save_losses(dic, offset=0)[source]

Save the event loss tables incrementally.

Parameters:
  • dic – dictionary with agglosses, avglosses
  • offset – realization offset
openquake.calculators.event_based_risk.build_loss_tables(dstore)[source]

Compute the total losses by rupture and losses by rlzi.

openquake.calculators.event_based_risk.event_based_risk(riskinputs, riskmodel, param, monitor)[source]
Parameters:
Returns:

a dictionary of numpy arrays of shape (L, R)

reportwriter module

Utilities to build a report writer generating a .rst report for a calculation

class openquake.calculators.reportwriter.ReportWriter(dstore)[source]

Bases: object

A particularly smart view over the datastore

add(name, obj=None)[source]

Add the view named name to the report text

make_report()[source]

Build the report and return a reStructuredText string

save(fname)[source]

Save the report

title = {'required_params_per_trt': 'Required parameters per tectonic region type', 'avglosses_data_transfer': 'Estimated data transfer for the avglosses', 'inputs': 'Input files', 'task_hazard:-1': 'Slowest task', 'dupl_sources': 'Duplicated sources', 'job_info': 'Data transfer', 'rlzs_assoc': 'Realizations per (TRT, GSIM)', 'short_source_info': 'Slowest sources', 'task_info': 'Information about the tasks', 'csm_info': 'Composite source model', 'biggest_ebr_gmf': 'Maximum memory allocated for the GMFs', 'performance': 'Slowest operations', 'task_hazard:0': 'Fastest task', 'ruptures_per_trt': 'Number of ruptures per tectonic region type', 'params': 'Parameters', 'exposure_info': 'Exposure model', 'ruptures_events': 'Specific information for event based', 'times_by_source_class': 'Computation times by source typology'}
openquake.calculators.reportwriter.build_report(job_ini, output_dir=None)[source]

Write a report.csv file with information about the calculation without running it

Parameters:
  • job_ini – full pathname of the job.ini file
  • output_dir – the directory where the report is written (default the input directory)
openquake.calculators.reportwriter.indent(text)[source]

scenario module

class openquake.calculators.scenario.ScenarioCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.HazardCalculator

Scenario hazard calculator

execute()[source]

Compute the GMFs and return a dictionary gsim -> array(N, E, I)

init()[source]
is_stochastic = True
post_execute(dummy)[source]
pre_execute()[source]

Read the site collection and initialize GmfComputer and seeds

scenario_damage module

class openquake.calculators.scenario_damage.ScenarioDamageCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.RiskCalculator

Scenario damage calculator

core_task(riskinputs, riskmodel, param, monitor)

Core function for a damage computation.

Parameters:
Returns:

a dictionary {‘d_asset’: [(l, r, a, mean-stddev), …], ‘d_event’: damage array of shape (R, L, E, D), ‘c_asset’: [(l, r, a, mean-stddev), …], ‘c_event’: consequence array of shape (R, L, E)}

d_asset and d_event are related to the damage distributions whereas c_asset and c_event are the consequence distributions. If there is no consequence model, c_asset is an empty list and c_event is a zero-valued array.

is_stochastic = True
post_execute(result)[source]

Compute stats for the aggregated distributions and save the results on the datastore.

pre_execute()[source]
openquake.calculators.scenario_damage.dist_by_asset(data, multi_stat_dt, number)[source]
Parameters:
  • data – array of shape (N, R, L, 2, …)
  • multi_stat_dt – numpy dtype for statistical outputs
  • number – expected number of units per asset
Returns:

array of shape (N, R) with records of type multi_stat_dt

openquake.calculators.scenario_damage.scenario_damage(riskinputs, riskmodel, param, monitor)[source]

Core function for a damage computation.

Parameters:
Returns:

a dictionary {‘d_asset’: [(l, r, a, mean-stddev), …], ‘d_event’: damage array of shape (R, L, E, D), ‘c_asset’: [(l, r, a, mean-stddev), …], ‘c_event’: consequence array of shape (R, L, E)}

d_asset and d_event are related to the damage distributions whereas c_asset and c_event are the consequence distributions. If there is no consequence model, c_asset is an empty list and c_event is a zero-valued array.

scenario_risk module

class openquake.calculators.scenario_risk.ScenarioRiskCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.base.RiskCalculator

Run a scenario risk calculation

core_task(riskinputs, riskmodel, param, monitor)

Core function for a scenario computation.

Parameters:
Returns:

a dictionary { ‘agg’: array of shape (E, L, R, 2), ‘avg’: list of tuples (lt_idx, rlz_idx, asset_ordinal, statistics) } where E is the number of simulated events, L the number of loss types, R the number of realizations and statistics is an array of shape (n, R, 4), with n the number of assets in the current riskinput object

is_stochastic = True
post_execute(result)[source]

Compute stats for the aggregated distributions and save the results on the datastore.

pre_execute()[source]

Compute the GMFs, build the epsilons, the riskinputs, and a dictionary with the unit of measure, used in the export phase.

openquake.calculators.scenario_risk.scenario_risk(riskinputs, riskmodel, param, monitor)[source]

Core function for a scenario computation.

Parameters:
Returns:

a dictionary { ‘agg’: array of shape (E, L, R, 2), ‘avg’: list of tuples (lt_idx, rlz_idx, asset_ordinal, statistics) } where E is the number of simulated events, L the number of loss types, R the number of realizations and statistics is an array of shape (n, R, 4), with n the number of assets in the current riskinput object

ucerf_classical module

class openquake.calculators.ucerf_classical.UcerfClassicalCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.classical.ClassicalCalculator

UCERF classical calculator.

execute()[source]

Run in parallel core_task(sources, sitecol, monitor), by parallelizing on the sources according to their weight and tectonic region type.

pre_execute()[source]

ucerf_event_based module

class openquake.calculators.ucerf_event_based.List[source]

Bases: list

Trivial container returned by compute_losses

class openquake.calculators.ucerf_event_based.UCERFHazardCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.event_based.EventBasedCalculator

Event based PSHA calculator generating the ruptures only

core_task(sources, src_filter, rlzs_by_gsim, param, monitor)
Parameters:
  • sources – a list with a single UCERF source
  • src_filter – a SourceFilter instance
  • rlzs_by_gsim – a dictionary gsim -> rlzs
  • param – extra parameters
  • monitor – a Monitor instance
Returns:

an AccumDict grp_id -> EBRuptures

filter_csm()[source]
gen_args(monitor)[source]

Generate a task for each branch

pre_execute()[source]

Parse the logic tree and source model input

class openquake.calculators.ucerf_event_based.UCERFRiskCalculator(oqparam, calc_id=None)[source]

Bases: openquake.calculators.event_based_risk.EbrCalculator

Event based risk calculator for UCERF, parallelizing on the source models

execute()[source]
gen_args()[source]

Yield the arguments required by build_ruptures, i.e. the source models, the asset collection, the riskmodel and others.

post_execute(result)[source]

Call the EbrPostCalculator to compute the aggregate loss curves

pre_execute()

Parse the logic tree and source model input

save_losses(dic, offset=0)[source]

Save the event loss tables incrementally.

Parameters:
  • dic – dictionary with agglosses, assratios, avglosses, lrs_idx
  • offset – realization offset
save_results(allres, num_rlzs)[source]
Parameters:
  • allres – an iterable of result iterators
  • num_rlzs – the total number of realizations
Returns:

the total number of events

openquake.calculators.ucerf_event_based.compute_hazard(sources, src_filter, rlzs_by_gsim, param, monitor)[source]
Parameters:
  • sources – a list with a single UCERF source
  • src_filter – a SourceFilter instance
  • rlzs_by_gsim – a dictionary gsim -> rlzs
  • param – extra parameters
  • monitor – a Monitor instance
Returns:

an AccumDict grp_id -> EBRuptures

openquake.calculators.ucerf_event_based.compute_losses(ssm, src_filter, param, riskmodel, monitor)[source]

Compute the losses for a single source model. Returns the ruptures as an attribute .ruptures_by_grp of the list of losses.

Parameters:
  • ssm – CompositeSourceModel containing a single source model
  • src_filter – a SourceFilter instance
  • param – a dictionary of extra parameters
  • riskmodel – a RiskModel instance
  • monitor – a Monitor instance
Returns:

a List containing the losses by taxonomy and some attributes

openquake.calculators.ucerf_event_based.generate_event_set(ucerf, background_sids, src_filter, seed)[source]

Generates the event set corresponding to a particular branch

openquake.calculators.ucerf_event_based.sample_background_model(hdf5, branch_key, tom, seed, filter_idx, min_mag, npd, hdd, upper_seismogenic_depth, lower_seismogenic_depth, msr=<WC1994>, aspect=1.5, trt='Active Shallow Crust')[source]

Generates a rupture set from a sample of the background model

Parameters:
  • branch_key – Key indicating the branch for selecting the background model
  • tom – Temporal occurrence model, an instance of openquake.hazardlib.tom.TOM
  • seed – Random seed to use in the call to tom.sample_number_of_occurrences
  • filter_idx – Sites for consideration (can be None!)
  • min_mag (float) – Minimum magnitude for consideration of background sources
  • npd – Nodal plane distribution, an instance of openquake.hazardlib.pmf.PMF
  • hdd – Hypocentral depth distribution, an instance of openquake.hazardlib.pmf.PMF
  • aspect (float) – Aspect ratio
  • upper_seismogenic_depth (float) – Upper seismogenic depth (km)
  • lower_seismogenic_depth (float) – Lower seismogenic depth (km)
  • msr – Magnitude scaling relation
  • integration_distance (float) – Maximum distance from rupture to site for consideration
openquake.calculators.ucerf_event_based.ucerf_risk(riskinput, riskmodel, param, monitor)[source]
Returns:

a dictionary of numpy arrays of shape (L, R)

views module

openquake.calculators.views.avglosses_data_transfer(token, dstore)[source]

Determine the amount of average losses transferred from the workers to the controller node in a risk calculation.

openquake.calculators.views.classify_gsim_lt(gsim_lt)[source]
Returns:“trivial”, “simple” or “complex”
openquake.calculators.views.ebr_data_transfer(token, dstore)[source]

Display the data transferred in an event based risk calculation

openquake.calculators.views.form(value)[source]

Format numbers in a nice way.

>>> form(0)
'0'
>>> form(0.0)
'0.0'
>>> form(0.0001)
'1.000E-04'
>>> form(1003.4)
'1,003'
>>> form(103.4)
'103'
>>> form(9.3)
'9.30000'
>>> form(-1.2)
'-1.2'
openquake.calculators.views.performance_view(dstore)[source]

Returns the performance view as a numpy array.

openquake.calculators.views.rst_table(data, header=None, fmt=None)[source]

Build a .rst table from a matrix.

>>> tbl = [['a', 1], ['b', 2]]
>>> print(rst_table(tbl, header=['Name', 'Value']))
==== =====
Name Value
==== =====
a    1    
b    2    
==== =====
openquake.calculators.views.stats(name, array, *extras)[source]

Returns statistics from an array of numbers.

Parameters:name – a descriptive string
Returns:(name, mean, std, min, max, len)
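
The returned tuple can be reproduced with plain numpy along these lines (a minimal sketch of the documented return value; it ignores the *extras argument and does not claim to match the actual implementation):

```python
import numpy

def stats(name, array):
    # Sketch: build the (name, mean, std, min, max, len) tuple
    # described above from an array of numbers
    return (name, array.mean(), array.std(), array.min(),
            array.max(), len(array))

result = stats('losses', numpy.array([1.0, 2.0, 3.0]))
print(result[0], result[1])  # 'losses' and the mean, 2.0
```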
openquake.calculators.views.sum_table(records)[source]

Used to compute summaries. The records are assumed to have numeric fields, except the first field which is ignored, since it typically contains a label. Here is an example:

>>> sum_table([('a', 1), ('b', 2)])
['total', 3]
openquake.calculators.views.sum_tbl(tbl, kfield, vfields)[source]

Aggregate a composite array and compute the totals on a given key.

>>> dt = numpy.dtype([('name', (bytes, 10)), ('value', int)])
>>> tbl = numpy.array([('a', 1), ('a', 2), ('b', 3)], dt)
>>> sum_tbl(tbl, 'name', ['value'])['value']
array([3, 3])
openquake.calculators.views.view_assets_by_site(token, dstore)[source]

Display statistical information about the distribution of the assets

openquake.calculators.views.view_contents(token, dstore)[source]

Returns the size of the contents of the datastore and its total size

openquake.calculators.views.view_csm_info(token, dstore)[source]
openquake.calculators.views.view_dupl_sources(token, dstore)[source]

Display the duplicated sources from source_info

openquake.calculators.views.view_elt(token, dstore)[source]

Display the event loss table averaged by event

openquake.calculators.views.view_exposure_info(token, dstore)[source]

Display info about the exposure model

openquake.calculators.views.view_flat_hcurves(token, dstore)[source]

Display the flat hazard curves for the calculation. They are used for debugging purposes when comparing the results of two calculations. They are the mean over the sites of the mean hazard curves.

openquake.calculators.views.view_flat_hmaps(token, dstore)[source]

Display the flat hazard maps for the calculation. They are used for debugging purposes when comparing the results of two calculations. They are the mean over the sites of the mean hazard maps.

openquake.calculators.views.view_fullreport(token, dstore)[source]

Display an .rst report about the computation

openquake.calculators.views.view_global_gmfs(token, dstore)[source]

Display GMFs averaged on everything for debugging purposes

openquake.calculators.views.view_global_poes(token, dstore)[source]

Display global probabilities averaged on all sites and all GMPEs

openquake.calculators.views.view_hmap(token, dstore)[source]

Display the highest 20 points of the mean hazard map. Called as $ oq show hmap:0.1 # 10% PoE

openquake.calculators.views.view_inputs(token, dstore)[source]
openquake.calculators.views.view_job_info(token, dstore)[source]

Determine the amount of data transferred from the controller node to the workers and back in a classical calculation.

openquake.calculators.views.view_mean_avg_losses(token, dstore)[source]
openquake.calculators.views.view_mean_disagg(token, dstore)[source]

Display mean quantities for the disaggregation. Useful for checking differences between two calculations.

openquake.calculators.views.view_num_units(token, dstore)[source]

Display the number of units by taxonomy

openquake.calculators.views.view_params(token, dstore)[source]
openquake.calculators.views.view_performance(token, dstore)[source]

Display performance information

openquake.calculators.views.view_pmap(token, dstore)[source]

Display the mean ProbabilityMap associated to a given source group name

openquake.calculators.views.view_portfolio_loss(token, dstore)[source]

Display the loss for the full portfolio, for each realization and loss type, extracted from the event loss table.

openquake.calculators.views.view_required_params_per_trt(token, dstore)[source]

Display the parameters needed by each tectonic region type

openquake.calculators.views.view_ruptures_events(token, dstore)[source]
openquake.calculators.views.view_ruptures_per_trt(token, dstore)[source]
openquake.calculators.views.view_short_source_info(token, dstore, maxrows=20)[source]
openquake.calculators.views.view_slow_sources(token, dstore, maxrows=20)[source]

Returns the slowest sources

openquake.calculators.views.view_task_durations(token, dstore)[source]

Display the raw task durations. Here is an example of usage:

$ oq show task_durations:classical
openquake.calculators.views.view_task_hazard(token, dstore)[source]

Display info about a given task. Here are a few examples of usage:

$ oq show task_hazard:0  # the fastest task
$ oq show task_hazard:-1  # the slowest task
openquake.calculators.views.view_task_info(token, dstore)[source]

Display statistical information about the tasks performance. It is possible to get full information about a specific task with a command like this one, for a classical calculation:

$ oq show task_info:classical
openquake.calculators.views.view_task_risk(token, dstore)[source]

Display info about a given risk task. Here are a few examples of usage:

$ oq show task_risk:0  # the fastest task
$ oq show task_risk:-1  # the slowest task
openquake.calculators.views.view_times_by_source_class(token, dstore)[source]

Returns the calculation times depending on the source typology

openquake.calculators.views.view_totlosses(token, dstore)[source]

This is a debugging view. You can use it to check that the total losses, i.e. the losses obtained by summing the average losses on all assets are indeed equal to the aggregate losses. This is a sanity check for the correctness of the implementation.
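
The consistency check described above amounts to comparing two sums; a toy illustration with hypothetical numbers (not the actual view code):

```python
import numpy

# Toy sanity check: summing the average losses over all assets
# must reproduce the aggregate losses, per loss type
avg_losses = numpy.array([[10.0, 5.0],
                          [20.0, 15.0]])   # shape (assets, loss types)
agg_losses = numpy.array([30.0, 20.0])     # aggregate per loss type
numpy.testing.assert_allclose(avg_losses.sum(axis=0), agg_losses)
print('totlosses check passed')
```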

extract module

class openquake.calculators.extract.Extract[source]

Bases: collections.OrderedDict

A callable dictionary of functions with a single instance called extract. Then extract(dstore, fullkey) dispatches to the function determined by the first part of fullkey (a slash-separated string), passing the second part of fullkey as argument.

For instance extract(dstore, 'sitecol'), extract(dstore, 'asset_values/0'), etc.

add(key, cache=False)[source]
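
The dispatch mechanism can be sketched as follows (a simplified model of the pattern described above, not the actual OpenQuake implementation; the toy extractor and datastore dict are hypothetical):

```python
from collections import OrderedDict

class Extract(OrderedDict):
    # Sketch of a callable dictionary dispatching on the first
    # part of a slash-separated key
    def add(self, key, cache=False):
        def decorator(func):
            self[key] = func  # register func under the given key
            return func
        return decorator

    def __call__(self, dstore, fullkey):
        # split 'asset_values/0' into key='asset_values', arg='0'
        key, _, arg = fullkey.partition('/')
        return self[key](dstore, arg)

extract = Extract()

@extract.add('asset_values')
def extract_asset_values(dstore, sid):
    # toy extractor: look up the site id in a plain dict 'datastore'
    return dstore['asset_values'][int(sid)]

dstore = {'asset_values': {0: [100.0, 200.0]}}
print(extract(dstore, 'asset_values/0'))  # [100.0, 200.0]
```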
openquake.calculators.extract.barray(iterlines)[source]

Array of bytes

openquake.calculators.extract.build_damage_array(data, damage_dt)[source]
Parameters:
  • data – an array of length N with fields ‘mean’ and ‘stddev’
  • damage_dt – a damage composite data type loss_type -> states
Returns:

a composite array of length N and dtype damage_dt

openquake.calculators.extract.build_damage_dt(dstore, mean_std=True)[source]
Parameters:
  • dstore – a datastore instance
  • mean_std – a flag (default True)
Returns:

a composite dtype loss_type -> (mean_ds1, stdv_ds1, …) or loss_type -> (ds1, ds2, …) depending on the flag mean_std
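
Such a composite dtype can be built with numpy nested structured dtypes; a sketch with hypothetical loss types and damage states (the actual field names come from the datastore):

```python
import numpy

# Sketch: composite dtype loss_type -> (mean_ds1, stdv_ds1, ...)
damage_states = ['slight', 'moderate', 'complete']
pairs = [(ds + suffix, numpy.float32)
         for ds in damage_states for suffix in ('_mean', '_stdv')]
inner = numpy.dtype(pairs)
damage_dt = numpy.dtype([('structural', inner),
                         ('nonstructural', inner)])

arr = numpy.zeros(2, damage_dt)        # composite array of length 2
arr[0]['structural']['slight_mean'] = 0.25
print(arr.dtype.names)  # ('structural', 'nonstructural')
```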

openquake.calculators.extract.cast(loss_array, loss_dt)[source]
openquake.calculators.extract.extract_(dstore, dspath)[source]

Extracts an HDF5 path object from the datastore, for instance extract(dstore, 'sitecol'). It is also possible to extract the attributes, for instance with extract(dstore, 'sitecol.attrs').

openquake.calculators.extract.extract_agg_curves(dstore, what)[source]

Aggregate loss curves of the given loss type and tags for event based risk calculations. Use it as /extract/agg_curves/structural?taxonomy=RC&zipcode=20126

Returns:an array of shape (S, P), where S is the number of statistics and P the number of return periods
openquake.calculators.extract.extract_agg_damages(dstore, what)[source]

Aggregate damages of the given loss type and tags. Use it as /extract/agg_damages/structural?taxonomy=RC&zipcode=20126

Returns:an array of shape (R, D), where R is the number of realizations and D the number of damage states, or an array of length 0 if there is no data for the given tags
openquake.calculators.extract.extract_agg_losses(dstore, what)[source]

Aggregate losses of the given loss type and tags. Use it as /extract/agg_losses/structural?taxonomy=RC&zipcode=20126 /extract/agg_losses/structural?taxonomy=RC&zipcode=*

Returns:an array of shape (T, R) if one of the tag names has a * value; otherwise an array of shape (R,), where R is the number of realizations; an array of length 0 if there is no data for the given tags
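
The `what` strings above follow a URL-query convention; parsing them can be sketched with the standard library (a hypothetical helper, not the actual OpenQuake parser):

```python
from urllib.parse import parse_qsl

def parse_what(what):
    # Sketch: split 'structural?taxonomy=RC&zipcode=*' into the
    # loss type and a dict of tag selectors
    loss_type, _, query = what.partition('?')
    return loss_type, dict(parse_qsl(query))

print(parse_what('structural?taxonomy=RC&zipcode=*'))
# ('structural', {'taxonomy': 'RC', 'zipcode': '*'})
```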
openquake.calculators.extract.extract_asset_tags(dstore, tagname)[source]

Extract an array of asset tags for the given tagname. Use it as /extract/asset_tags or /extract/asset_tags/taxonomy

openquake.calculators.extract.extract_asset_values(dstore, sid)[source]

Extract an array of asset values for the given sid. Use it as /extract/asset_values/0

Returns:(aid, loss_type1, …, loss_typeN) composite array
openquake.calculators.extract.extract_dmg_by_asset_npz(dstore, what)[source]
openquake.calculators.extract.extract_gmf_scenario_npz(dstore, what)[source]
openquake.calculators.extract.extract_hazard(dstore, what)[source]

Extracts hazard curves and possibly hazard maps and/or uniform hazard spectra. Use it as /extract/hazard/mean or /extract/hazard/rlz-0, etc

openquake.calculators.extract.extract_hcurves(dstore, what)[source]

Extracts hazard curves. Use it as /extract/hcurves/mean or /extract/hcurves/rlz-0, /extract/hcurves/stats, /extract/hcurves/rlzs etc

openquake.calculators.extract.extract_hmaps(dstore, what)[source]

Extracts hazard maps. Use it as /extract/hmaps/mean or /extract/hmaps/rlz-0, etc

openquake.calculators.extract.extract_losses_by_asset(dstore, what)[source]
openquake.calculators.extract.extract_losses_by_event(dstore, what)[source]
openquake.calculators.extract.extract_mean_std_curves(dstore, what)[source]

Yield imls/IMT and poes/IMT containing the mean and stddev for all sites

openquake.calculators.extract.extract_mfd(dstore, what)[source]

Display num_ruptures by magnitude for event based calculations. Example: http://127.0.0.1:8800/v1/calc/30/extract/event_based_mfd

openquake.calculators.extract.extract_realizations(dstore, dummy)[source]

Extract an array of realizations. Use it as /extract/realizations

openquake.calculators.extract.extract_uhs(dstore, what)[source]

Extracts uniform hazard spectra. Use it as /extract/uhs/mean or /extract/uhs/rlz-0, etc

openquake.calculators.extract.get_loss_type_tags(what)[source]
openquake.calculators.extract.get_mesh(sitecol, complete=True)[source]
Returns:a lon-lat or lon-lat-depth array, depending on whether the site collection is at sea level or not
openquake.calculators.extract.hazard_items(dic, mesh, *extras, **kw)[source]
Parameters:
  • dic – dictionary of arrays of the same shape
  • mesh – a mesh array with lon, lat fields of the same length
  • extras – optional triples (field, dtype, values)
  • kw – dictionary of parameters (like investigation_time)
Returns:

a list of pairs (key, value) suitable for storage in .npz format
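
Pairs "suitable for storage in .npz format" means they can be fed to numpy.savez; a sketch with hypothetical field names and toy data:

```python
import io
import numpy

# Sketch: a mesh with lon, lat fields plus a dict of arrays of
# the same length, turned into (key, value) pairs and saved as .npz
mesh = numpy.array([(10.0, 45.0), (10.5, 45.5)],
                   dtype=[('lon', float), ('lat', float)])
dic = {'PGA': numpy.array([0.1, 0.2]),
       'SA1': numpy.array([0.05, 0.1])}
items = [('mesh', mesh)] + sorted(dic.items())

buf = io.BytesIO()
numpy.savez(buf, **dict(items))  # each pair becomes an .npz entry
buf.seek(0)
loaded = numpy.load(buf)
print(sorted(loaded.files))  # ['PGA', 'SA1', 'mesh']
```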