openquake.calculators package

openquake.calculators.base module

exception openquake.calculators.base.AssetSiteAssociationError[source]

Bases: exceptions.Exception

Raised when there are no hazard sites close enough to any asset

class openquake.calculators.base.BaseCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: object

Abstract base class for all calculators.

Parameters:
  • oqparam – OqParam object
  • monitor – monitor object
  • calc_id – numeric calculation ID
assetcol
clean_up()[source]

Collect the realizations and the monitoring information, then close the datastore.

core_task(*args)[source]

Core routine running on the workers.

cost_types
csm
etags
execute()[source]

Execution phase. Usually runs the core function in parallel and returns a dictionary with the results.

export(exports=None)[source]

Export all the outputs in the datastore in the given export formats.

Returns:dictionary output_key -> sorted list of exported paths
is_stochastic = False
job_info
performance
post_execute(result)[source]

Post-processing phase of the aggregated output. It must be overridden with the export code. It will return a dictionary of output files.

pre_calculator = None
pre_execute()[source]

Initialization phase.

run(pre_execute=True, concurrent_tasks=None, close=True, **kw)[source]

Run the calculation and return the exported outputs.

save_params(**kw)[source]

Update the current calculation parameters and save engine_version

set_log_format()[source]

Set the format of the root logger

sitecol
taxonomies
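
The typical lifecycle is pre_execute(), then execute(), then post_execute(result), orchestrated by run(). Below is a minimal, hypothetical sketch of a subclass implementing the three documented hooks; the class name and the trivial bodies are illustrative only and are not taken from the engine.

from openquake.calculators import base

class ToyCalculator(base.BaseCalculator):  # hypothetical example, not in the engine
    def pre_execute(self):
        # initialization phase: read the inputs and prepare the datasets
        self.values = [1, 2, 3]

    def execute(self):
        # execution phase: normally run in parallel; here a trivial sum
        return {'total': sum(self.values)}

    def post_execute(self, result):
        # post-processing phase: store or export the aggregated output
        self.total = result['total']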
class openquake.calculators.base.HazardCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.base.BaseCalculator

Base class for hazard calculators based on source models

class SourceManager(csm, maximum_distance, dstore, monitor, random_seed=None, filter_sources=True, num_tiles=1)

Bases: object

Manager associated with a CompositeSourceModel instance. Filter and split sources and send them to the worker tasks.

gen_args(tiles)

Yield (sources, sitecol, siteidx, rlzs_assoc, monitor) by looping on the tiles and on the source blocks.

get_sources(kind, tile)
Parameters:
  • kind – a string ‘light’, ‘heavy’ or ‘all’
  • tile – an openquake.hazardlib.site.Tile instance
Returns:

the sources of the given kind affecting the given tile

set_serial(src, split_sources=())

Set a serial number for each rupture in a source, also managing the case of split sources, if any.

store_source_info(dstore)

Save the source_info array and its attributes in the datastore.

Parameters:dstore – the datastore
HazardCalculator.assoc_assets_sites(sitecol)[source]
Parameters:sitecol – a sequence of sites
Returns:a pair (filtered_sites, assets_by_site)

The new site collection is different from the original one if some assets were discarded or if there were missing assets for some sites.
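
For instance, given a hazard calculator instance (here called calc, hypothetically) with the exposure already read, the association can be used to filter the site collection; a hedged usage sketch:

filtered_sites, assets_by_site = calc.assoc_assets_sites(calc.sitecol)
num_assets = sum(len(assets) for assets in assets_by_site)  # assets actually kept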

HazardCalculator.basic_pre_execute()[source]
HazardCalculator.compute_previous()[source]
HazardCalculator.count_assets()[source]

Count how many assets are taken into consideration by the calculator

HazardCalculator.init()[source]

To be overridden to initialize the datasets needed by the calculation

HazardCalculator.is_tiling()[source]
Returns:True if the calculator produces more than one tile, False otherwise
HazardCalculator.load_riskmodel()[source]

Read the risk model and set the attribute .riskmodel. The riskmodel can be empty for hazard calculations. Save the loss ratios (if any) in the datastore.

HazardCalculator.mean_curves = None
HazardCalculator.post_process()[source]

For compatibility with the engine

HazardCalculator.pre_execute()[source]

Check if there is a pre_calculator or a previous calculation ID. If yes, read the inputs by invoking the precalculator or by retrieving the previous calculation; if not, read the inputs directly.

HazardCalculator.read_exposure()[source]

Read the exposure, the riskmodel and update the attributes .exposure, .sitecol, .assets_by_site, .cost_types, .taxonomies.

HazardCalculator.read_previous(precalc_id)[source]
HazardCalculator.read_risk_data()[source]

Read the exposure (if any), the risk model (if any) and then the site collection, possibly extracted from the exposure.

HazardCalculator.save_data_transfer(taskmanager)[source]

Save information about the data transfer in the risk calculation as attributes of agg_loss_table

HazardCalculator.send_sources()[source]

Filter/split and send the sources to the workers.

Returns:an openquake.commonlib.parallel.TaskManager

class openquake.calculators.base.RiskCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.base.HazardCalculator

Base class for all risk calculators. A risk calculator must set the attributes .riskmodel, .sitecol, .assets_by_site, .exposure, .riskinputs in the pre_execute phase.

build_riskinputs(hazards_by_key, eps=array([], dtype=float64))[source]
Parameters:
  • hazards_by_key – a dictionary key -> IMT -> array of length num_sites
  • eps – a matrix of epsilons (possibly empty)
Returns:

a list of RiskInputs objects, sorted by IMT.

check_poes(curves_by_trt_gsim)[source]

Overridden in ClassicalDamage

execute()[source]

Parallelize on the riskinputs and return a dictionary of results. Requires a .core_task to be defined with signature (riskinputs, riskmodel, rlzs_assoc, monitor).

extra_args = ()
make_eps(num_ruptures)[source]
Parameters:num_ruptures – the size of the epsilon array for each asset
riskinput_key(ri)[source]
Parameters:ri – riskinput object
Returns:the IMT associated to it
class openquake.calculators.base.Site(sid, lon, lat)

Bases: tuple

lat

Alias for field number 2

lon

Alias for field number 1

sid

Alias for field number 0
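
Site is a plain namedtuple, so it can be built positionally and its fields accessed by name; a quick illustration (the coordinates are made up):

>>> s = Site(0, 9.15, 45.05)
>>> s.sid, s.lon, s.lat
(0, 9.15, 45.05)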

openquake.calculators.base.check_time_event(oqparam, time_events)[source]

Check the time_event parameter in the datastore, by comparing with the periods found in the exposure.

openquake.calculators.base.gsim_names(rlz)[source]

Names of the underlying GSIMs separated by spaces

openquake.calculators.base.set_array(longarray, shortarray)[source]
Parameters:
  • longarray – a numpy array of floats of length L >= l
  • shortarray – a numpy array of floats of length l

Fill longarray with the values of shortarray, starting from the left. If shortarray is shorter than longarray, the remaining elements on the right are filled with numpy.nan values.
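
A short sketch of the documented behaviour (not necessarily the actual implementation), showing the nan-padding on the right:

import numpy
from openquake.calculators.base import set_array

arr = numpy.zeros(5)
set_array(arr, numpy.array([1., 2., 3.]))
# arr is now [1., 2., 3., nan, nan]: filled from the left, nan-padded on the right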

openquake.calculators.base.view_task_info(token, dstore)[source]

Display statistical information about the performance of the tasks

openquake.calculators.calc module

openquake.calculators.classical module

class openquake.calculators.classical.BoundingBox(lt_model_id, site_id)[source]

Bases: object

A class to store the bounding box in distances, longitudes and magnitudes, given a source model and a site. This is used for disaggregation calculations. The goal is to determine the minimum and maximum distances from the site of the ruptures generated by the model; moreover, the maximum and minimum longitudes and magnitudes are stored, taking into account the international date line.

bins_edges(dist_bin_width, coord_bin_width)[source]

Define bin edges for disaggregation histograms, from the bin data collected from the ruptures.

Parameters:
  • dists – array of distances from the ruptures
  • lons – array of longitudes from the ruptures
  • lats – array of latitudes from the ruptures
  • dist_bin_width – distance_bin_width from job.ini
  • coord_bin_width – coordinate_bin_width from job.ini
update(dists, lons, lats)[source]

Compare the current bounding box with the value in the arrays dists, lons, lats and enlarge it if needed.

Parameters:
  • dists – a sequence of distances
  • lons – a sequence of longitudes
  • lats – a sequence of latitudes
update_bb(bb)[source]

Compare the current bounding box with the given bounding box and enlarge it if needed.

Parameters:bb – an instance of openquake.engine.calculators.hazard.classical.core.BoundingBox
class openquake.calculators.classical.ClassicalCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.base.HazardCalculator

Classical PSHA calculator

agg_dicts(acc, val)[source]

Aggregate dictionaries of hazard curves by updating the accumulator.

Parameters:
  • acc – accumulator dictionary
  • val – a nested dictionary trt_id -> ProbabilityMap
core_task(sources, sitecol, siteidx, rlzs_assoc, monitor)
Parameters:
  • sources – a non-empty sequence of sources of homogeneous tectonic region type
  • sitecol – a SiteCollection instance
  • siteidx – index of the first site (0 if there is a single tile)
  • rlzs_assoc – a RlzsAssoc instance
  • monitor – a monitor instance
Returns:

an AccumDict rlz -> curves

count_eff_ruptures(result_dict, trt_model)[source]

Returns the number of ruptures in the trt_model (after filtering) or 0 if the trt_model has been filtered away.

Parameters:
  • result_dict – a dictionary with keys (trt_id, gsim)
  • trt_model – a TrtModel instance
execute()[source]

Run in parallel core_task(sources, sitecol, monitor), by parallelizing on the sources according to their weight and tectonic region type.

hazard_maps(curves)[source]

Compute the hazard maps associated to the curves

post_execute(curves_by_trt_id)[source]

Collect the hazard curves by realization and export them.

Parameters:curves_by_trt_id – a dictionary trt_id -> hazard curves
save_curves(curves_by_rlz)[source]

Save the dictionary curves_by_rlz

source_info
store_curves(kind, curves, rlz=None)[source]

Store all kinds of curves, optionally computing hazard maps and uniform hazard spectra (UHS).

Parameters:
  • kind – the kind of curves to store
  • curves – an array of N curves to store
  • rlz – hazard realization, if any
store_source_info(curves_by_trt_id)[source]
zerodict()[source]

Initial accumulator, an empty ProbabilityMap

class openquake.calculators.classical.HazardCurve(location, poes)

Bases: tuple

location

Alias for field number 0

poes

Alias for field number 1

openquake.calculators.classical.classical(sources, sitecol, siteidx, rlzs_assoc, monitor)[source]
Parameters:
  • sources – a non-empty sequence of sources of homogeneous tectonic region type
  • sitecol – a SiteCollection instance
  • siteidx – index of the first site (0 if there is a single tile)
  • rlzs_assoc – a RlzsAssoc instance
  • monitor – a monitor instance
Returns:

an AccumDict rlz -> curves

openquake.calculators.classical.nonzero(val)[source]
Returns:the sum of the composite array val

openquake.calculators.classical_bcr module

class openquake.calculators.classical_bcr.ClassicalBCRCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.classical_risk.ClassicalRiskCalculator

Classical BCR Risk calculator

core_task(riskinput, riskmodel, rlzs_assoc, bcr_dt, monitor)

Compute and return the average losses for each asset.

Parameters:
post_execute(result)[source]
pre_execute()[source]
openquake.calculators.classical_bcr.classical_bcr(riskinput, riskmodel, rlzs_assoc, bcr_dt, monitor)[source]

Compute and return the average losses for each asset.

Parameters:

openquake.calculators.classical_damage module

class openquake.calculators.classical_damage.ClassicalDamageCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.classical_risk.ClassicalRiskCalculator

Classical damage calculator

check_poes(curves_by_trt_gsim)[source]

Raise an error if any PoE is equal to 1, since that would produce log(0) in openquake.risklib.scientific.annual_frequency_of_exceedence
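
The check exists because of the standard Poissonian conversion from a probability of exceedance over the hazard investigation time to an annual rate; a hedged sketch of that conversion (the exact signature in risklib may differ):

import numpy

def annual_frequency_of_exceedence(poe, t_haz):
    # convert a probability of exceedance over t_haz years into an annual rate;
    # poe == 1 gives log(0) = -inf, which is why check_poes raises early
    return -numpy.log(1. - poe) / t_haz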

core_task(riskinput, riskmodel, rlzs_assoc, monitor)

Core function for a classical damage computation.

Parameters:
Returns:

a nested dictionary rlz_idx -> asset -> <damage array>

damages
post_execute(result)[source]

Export the result in CSV format.

Parameters:result – a dictionary asset -> fractions per damage state
openquake.calculators.classical_damage.classical_damage(riskinput, riskmodel, rlzs_assoc, monitor)[source]

Core function for a classical damage computation.

Parameters:
Returns:

a nested dictionary rlz_idx -> asset -> <damage array>

openquake.calculators.classical_risk module

class openquake.calculators.classical_risk.ClassicalRiskCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.base.RiskCalculator

Classical Risk calculator

avg_losses
core_task(riskinput, riskmodel, rlzs_assoc, monitor)

Compute and return the average losses for each asset.

Parameters:
post_execute(result)[source]

Save the losses in a compact form.

pre_calculator = 'classical'
pre_execute()[source]

Associate the assets to the sites and build the riskinputs.

save_loss_curves(result)[source]

Save the loss curves in the datastore.

Parameters:result – aggregated result of the task classical_risk
save_loss_maps(result)[source]

Save the loss maps in the datastore.

Parameters:result – aggregated result of the task classical_risk
openquake.calculators.classical_risk.classical_risk(riskinput, riskmodel, rlzs_assoc, monitor)[source]

Compute and return the average losses for each asset.

Parameters:

openquake.calculators.disaggregation module

Disaggregation calculator core functionality

class openquake.calculators.disaggregation.BinData(mags, dists, lons, lats, trts, pnes)

Bases: tuple

dists

Alias for field number 1

lats

Alias for field number 3

lons

Alias for field number 2

mags

Alias for field number 0

pnes

Alias for field number 5

trts

Alias for field number 4

class openquake.calculators.disaggregation.DisaggregationCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.classical.ClassicalCalculator

Classical PSHA disaggregation calculator

agg_result(acc, result)[source]

Collect the results coming from compute_disagg into self.results, a dictionary with keys (sid, rlz.id, poe, imt, iml, trt_names) and probability arrays as values.

Parameters:
  • acc – dictionary accumulating the results
  • result – dictionary with the result coming from a task
full_disaggregation(curves_by_trt_gsim)[source]

Run the disaggregation phase after hazard curve finalization.

get_curves(sid)[source]

Get all the relevant hazard curves for the given site ordinal. Returns a dictionary {(rlz_id, imt) -> curve}.

post_execute(result=None)[source]
save_disagg_result(site_id, bin_edges, trt_names, matrix, rlz_id, investigation_time, imt_str, iml, poe)[source]

Save a computed disaggregation matrix to hzrdr.disagg_result (see DisaggResult).

Parameters:
  • site_id – id of the current site
  • bin_edges – The 5-tuple (mag, dist, lon, lat, eps)
  • trt_names – The list of Tectonic Region Types
  • matrix – A probability array
  • rlz_id – ordinal of the realization to which the results belong.
  • investigation_time (float) – Investigation time (years) for the calculation.
  • imt_str – Intensity measure type string (PGA, SA, etc.)
  • iml (float) – Intensity measure level interpolated (using poe) from the hazard curve at the site.
  • poe (float) – Disaggregation probability of exceedance value for this result.
save_disagg_results(results)[source]

Save all the results of the disaggregation. NB: the number of results to save is #sites * #rlzs * #disagg_poes * #IMTs.

Parameters:results – a dictionary of probability arrays
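
For example, a disaggregation with 10 sites, 2 realizations, 3 disaggregation PoEs and 2 IMTs saves 10 * 2 * 3 * 2 = 120 probability arrays.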
openquake.calculators.disaggregation.compute_disagg(sitecol, sources, trt_model_id, rlzs_assoc, trt_names, curves_dict, bin_edges, oqparam, monitor)[source]
Parameters:
  • sitecol – an openquake.hazardlib.site.SiteCollection instance
  • sources – list of hazardlib source objects
  • trt_model_id – numeric ID of a TrtModel instance
  • rlzs_assoc – an openquake.commonlib.source.RlzsAssoc instance
  • trt_names (dict) – a tuple of names for the given tectonic region type
  • curves_dict – a dictionary with the hazard curves for sites, realizations and IMTs
  • bin_edges – a dictionary site_id -> edges
  • oqparam – the parameters in the job.ini file
  • monitor – monitor of the currently running job
Returns:

a dictionary of probability arrays, with composite key (sid, rlz.id, poe, imt, iml, trt_names).

openquake.calculators.event_based module

class openquake.calculators.event_based.EBRupture(rupture, indices, events, source_id, trt_id, serial)[source]

Bases: object

An event based rupture. It is a wrapper over a hazardlib rupture object, containing an array of site indices affected by the rupture, as well as the tags of the corresponding seismic events.

eids

An array with the underlying event IDs

etags

An array of tags for the underlying seismic events

export(mesh)[source]

Yield openquake.commonlib.util.Rupture objects, with all the attributes set, suitable for export in XML format.

multiplicity

How many times the underlying rupture occurs.

set_weight(num_rlzs_by_trt_id, num_assets_by_site_id)[source]

Set the weight attribute of each rupture with the formula weight = multiplicity * affected_sites * realizations

Parameters:
  • num_rlzs_by_trt_id – dictionary, possibly empty
  • num_assets_by_site_id – dictionary, possibly empty
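
A hedged sketch of the documented formula for an EBRupture instance ebr; treating the number of assets per site as the weight of each affected site, and the fallback to 1 for missing keys, are assumptions made here for illustration:

affected = sum(num_assets_by_site_id.get(sid, 1) for sid in ebr.indices)
rlzs = num_rlzs_by_trt_id.get(ebr.trt_id, 1)
ebr.weight = ebr.multiplicity * affected * rlzs  # weight = multiplicity * affected_sites * realizations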
class openquake.calculators.event_based.EventBasedCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.classical.ClassicalCalculator

Event based PSHA calculator generating the ground motion fields and the hazard curves from the ruptures, depending on the configuration parameters.

combine_curves_and_save_gmfs(acc, res)[source]

Combine the hazard curves (if any) and save the gmfs (if any) sequentially; notice that the gmfs may come from different tasks in any order.

Parameters:
  • acc – an accumulator for the hazard curves
  • res – a dictionary rlzi, imt -> [gmf_array, curves_by_imt]
Returns:

a new accumulator

core_task(eb_ruptures, sitecol, imts, rlzs_assoc, min_iml, monitor)
Parameters:
  • eb_ruptures – a list of blocks of EBRuptures of the same SESCollection
  • sitecol – an openquake.hazardlib.site.SiteCollection instance
  • imts – a list of IMT strings
  • rlzs_assoc – a RlzsAssoc instance
  • monitor – a Monitor instance
Returns:

a dictionary (rlzi, imt) -> [gmfarray, haz_curves]

execute()[source]

Run in parallel core_task(sources, sitecol, monitor), by parallelizing on the ruptures according to their weight and tectonic region type.

is_stochastic = True
post_execute(result)[source]
Parameters:result – a dictionary (trt_model_id, gsim) -> haz_curves or an empty dictionary if hazard_curves_from_gmfs is false
pre_calculator = 'event_based_rupture'
pre_execute()[source]

Read the precomputed ruptures (or compute them on the fly) and prepare some empty files in the export directory to store the gmfs (if any). If there were pre-existing files, they will be erased.

class openquake.calculators.event_based.EventBasedRuptureCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.classical.ClassicalCalculator

Event based PSHA calculator generating the ruptures only

agg_dicts(acc, ruptures_by_trt_id)[source]

Aggregate dictionaries of hazard curves by updating the accumulator.

Parameters:
  • acc – accumulator dictionary
  • ruptures_by_trt_id – a nested dictionary trt_id -> ProbabilityMap
core_task(sources, sitecol, siteidx, rlzs_assoc, monitor)
Parameters:
  • sources – List of commonlib.source.Source tuples
  • sitecol – an openquake.hazardlib.site.SiteCollection instance
  • siteidx – always equal to 0
  • rlzs_assoc – an openquake.commonlib.source.RlzsAssoc instance
  • monitor – monitor instance
Returns:

a dictionary trt_model_id -> [Rupture instances]

count_eff_ruptures(ruptures_by_trt_id, trt_model)[source]

Returns the number of ruptures sampled in the given trt_model.

Parameters:
  • ruptures_by_trt_id – a dictionary with key trt_id
  • trt_model – a TrtModel instance
etags
init()[source]

Set the random seed passed to the SourceManager and the minimum_intensity dictionary.

is_stochastic = True
post_execute(result)[source]

Save the SES collection

zerodict()[source]

Initial accumulator, a dictionary (trt_id, gsim) -> curves

openquake.calculators.event_based.build_eb_ruptures(src, num_occ_by_rup, rupture_filter, random_seed, rup_mon)[source]

Filter the ruptures stored in the dictionary num_occ_by_rup and yield pairs (rupture, <list of associated EBRuptures>)

openquake.calculators.event_based.compute_gmfs_and_curves(eb_ruptures, sitecol, imts, rlzs_assoc, min_iml, monitor)[source]
Parameters:
  • eb_ruptures – a list of blocks of EBRuptures of the same SESCollection
  • sitecol – an openquake.hazardlib.site.SiteCollection instance
  • imts – a list of IMT strings
  • rlzs_assoc – a RlzsAssoc instance
  • monitor – a Monitor instance
Returns:

a dictionary (rlzi, imt) -> [gmfarray, haz_curves]

openquake.calculators.event_based.compute_ruptures(sources, sitecol, siteidx, rlzs_assoc, monitor)[source]
Parameters:
  • sources – List of commonlib.source.Source tuples
  • sitecol – an openquake.hazardlib.site.SiteCollection instance
  • siteidx – always equal to 0
  • rlzs_assoc – an openquake.commonlib.source.RlzsAssoc instance
  • monitor – monitor instance
Returns:

a dictionary trt_model_id -> [Rupture instances]

openquake.calculators.event_based.get_geom(surface, is_from_fault_source, is_multi_surface)[source]

The following fields can be interpreted in different ways, depending on the value of is_from_fault_source.

If is_from_fault_source is True, each of these fields should contain a 2D numpy array (all of the same shape). Each triple of (lon, lat, depth) for a given index represents the node of a rectangular mesh.

If is_from_fault_source is False, each of these fields should contain a sequence (tuple, list, or numpy array, for example) of 4 values. In order, the triples of (lon, lat, depth) represent the top left, top right, bottom left, and bottom right corners of the rupture's planar surface.

Update: there is now a third case. If the rupture originated from a characteristic fault source with a multi-planar-surface geometry, lons, lats, and depths will contain one or more sets of 4 points, similar to how planar surface geometry is stored (see above).

Parameters:
  • rupture – an instance of openquake.hazardlib.source.rupture.BaseProbabilisticRupture
  • is_from_fault_source – a boolean
  • is_multi_surface – a boolean
openquake.calculators.event_based.sample_ruptures(src, num_ses, num_samples, seed)[source]

Sample the ruptures contained in the given source.

Parameters:
  • src – a hazardlib source object
  • num_ses – the number of Stochastic Event Sets to generate
  • num_samples – how many samples for the given source
  • seed – master seed from the job.ini file
Returns:

a dictionary of dictionaries rupture -> {ses_id: num_occurrences}
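
A hedged usage sketch showing how the returned structure can be consumed (src stands for any hazardlib source; the parameter values are illustrative):

num_occ_by_rup = sample_ruptures(src, num_ses=10, num_samples=1, seed=42)
for rup, occ_by_ses in num_occ_by_rup.items():
    total_occurrences = sum(occ_by_ses.values())  # occurrences of rup over all SES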

openquake.calculators.event_based_risk module

class openquake.calculators.event_based_risk.EventBasedRiskCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.base.RiskCalculator

Event based PSHA calculator generating the event loss table and fixed ratios loss curves.

agg(acc, result)[source]

Aggregate losses and store them in the datastore.

Parameters:
  • acc – accumulator dictionary
  • result – dictionary coming from event_based_risk
build_agg_curve_and_stats(builder)[source]

Build a single loss curve per realization. It is NOT obtained by aggregating the loss curves; instead, it is obtained without generating the loss curves, directly from the aggregate losses.

build_agg_curve_stats(builder, agg_curve, loss_curve_dt)[source]

Build and save agg_curve-stats in the HDF5 file.

Parameters:
compute_store_stats(rlzs, builder)[source]

Compute and store the statistical outputs.

Parameters:rlzs – list of realizations

core_task(riskinput, riskmodel, rlzs_assoc, assetcol, monitor)
Parameters:
Returns:

a dictionary of numpy arrays of shape (L, R)

execute()[source]

Run the event_based_risk calculator and aggregate the results

is_stochastic = True
post_execute(result)[source]

Save the event loss table in the datastore.

Parameters:result – the dictionary returned by the .execute method
pre_calculator = 'event_based'
pre_execute()[source]

Read the precomputed ruptures (or compute them on the fly)

openquake.calculators.event_based_risk.build_agg_curve(lr_data, insured_losses, ses_ratio, curve_resolution, L, monitor)[source]

Build the aggregate loss curve in parallel for each loss type and realization pair.

Parameters:
  • lr_data – a list of triples (l, r, data) where l is the loss type index, r is the realization index and data is an array of kind (rupture_id, loss) or (rupture_id, loss, loss_ins)
  • insured_losses (bool) – job.ini configuration parameter
  • ses_ratio – a ratio obtained from ses_per_logic_tree_path
  • curve_resolution – the number of discretization steps for the loss curve
  • L – the number of loss types
  • monitor – a Monitor instance
Returns:

a dictionary (r, l, i) -> (losses, poes, avg)

openquake.calculators.event_based_risk.build_el_dtypes(insured_losses)[source]
Parameters:insured_losses (bool) – job.ini configuration parameter
Returns:ela_dt and elt_dt i.e. the data types for event loss assets and event loss table respectively
openquake.calculators.event_based_risk.event_based_risk(riskinput, riskmodel, rlzs_assoc, assetcol, monitor)[source]
Parameters:
Returns:

a dictionary of numpy arrays of shape (L, R)

openquake.calculators.event_based_risk.square(L, R, factory)[source]
Parameters:
  • L – the number of loss types
  • R – the number of realizations
  • factory – thunk used to initialize the elements
Returns:

a numpy matrix of shape (L, R)
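
A hedged usage sketch: the factory thunk is called once per cell, so every element starts out independent (2 loss types and 3 realizations are illustrative numbers):

acc = square(2, 3, list)   # shape (L=2, R=3), each cell an independent empty list
acc[0, 1].append(4.2)      # only the cell for loss type 0, realization 1 changes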

openquake.calculators.risk module

openquake.calculators.scenario module

class openquake.calculators.scenario.ScenarioCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.base.HazardCalculator

Scenario hazard calculator

execute()[source]

Compute the GMFs and return a dictionary rlzi -> array gmv_dt

init()[source]
is_stochastic = True
post_execute(gmfa_by_rlzi)[source]
Parameters:gmfa – a dictionary rlzi -> gmfa
pre_execute()[source]

Read the site collection and initialize GmfComputer, etags and seeds

openquake.calculators.scenario_damage module

class openquake.calculators.scenario_damage.ScenarioDamageCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.base.RiskCalculator

Scenario damage calculator

core_task(riskinput, riskmodel, rlzs_assoc, monitor)

Core function for a damage computation.

Parameters:
Returns:

a dictionary with the following keys:
  • d_asset: [(l, r, a, mean-stddev), ...]
  • d_taxonomy: damage array of shape (T, L, R, E, D)
  • c_asset: [(l, r, a, mean-stddev), ...]
  • c_taxonomy: array of shape (T, L, R, E)

d_asset and d_taxonomy are related to the damage distributions whereas c_asset and c_taxonomy are the consequence distributions. If there is no consequence model c_asset is an empty list and c_taxonomy is a zero-valued array.

is_stochastic = True
post_execute(result)[source]

Compute stats for the aggregated distributions and save the results on the datastore.

pre_calculator = 'scenario'
pre_execute()[source]
openquake.calculators.scenario_damage.dist_by_asset(data, multi_stat_dt)[source]
Parameters:
  • data – array of shape (N, L, R, 2, ...)
  • multi_stat_dt – numpy dtype for statistical outputs
Returns:

array of shape (N, R) with records of type multi_stat_dt

openquake.calculators.scenario_damage.dist_by_taxon(data, multi_stat_dt)[source]
Parameters:
  • data – array of shape (T, L, R, ...)
  • multi_stat_dt – numpy dtype for statistical outputs
Returns:

array of shape (T, R) with records of type multi_stat_dt

openquake.calculators.scenario_damage.dist_total(data, multi_stat_dt)[source]
Parameters:
  • data – array of shape (T, L, R, ...)
  • multi_stat_dt – numpy dtype for statistical outputs
Returns:

array of shape (R,) with records of type multi_stat_dt

openquake.calculators.scenario_damage.scenario_damage(riskinput, riskmodel, rlzs_assoc, monitor)[source]

Core function for a damage computation.

Parameters:
Returns:

a dictionary with the following keys:
  • d_asset: [(l, r, a, mean-stddev), ...]
  • d_taxonomy: damage array of shape (T, L, R, E, D)
  • c_asset: [(l, r, a, mean-stddev), ...]
  • c_taxonomy: array of shape (T, L, R, E)

d_asset and d_taxonomy are related to the damage distributions whereas c_asset and c_taxonomy are the consequence distributions. If there is no consequence model c_asset is an empty list and c_taxonomy is a zero-valued array.

openquake.calculators.scenario_risk module

class openquake.calculators.scenario_risk.ScenarioRiskCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.base.RiskCalculator

Run a scenario risk calculation

core_task(riskinput, riskmodel, rlzs_assoc, monitor)

Core function for a scenario computation.

Parameters:
Returns:

a dictionary { ‘agg’: array of shape (E, L, R, 2), ‘avg’: list of tuples (lt_idx, rlz_idx, asset_idx, statistics) } where E is the number of simulated events, L the number of loss types, R the number of realizations and statistics is an array of shape (n, R, 4), with n the number of assets in the current riskinput object

is_stochastic = True
post_execute(result)[source]

Compute stats for the aggregated distributions and save the results on the datastore.

pre_calculator = 'scenario'
pre_execute()[source]

Compute the GMFs, build the epsilons, the riskinputs, and a dictionary with the unit of measure, used in the export phase.

openquake.calculators.scenario_risk.scenario_risk(riskinput, riskmodel, rlzs_assoc, monitor)[source]

Core function for a scenario computation.

Parameters:
Returns:

a dictionary { ‘agg’: array of shape (E, L, R, 2), ‘avg’: list of tuples (lt_idx, rlz_idx, asset_idx, statistics) } where E is the number of simulated events, L the number of loss types, R the number of realizations and statistics is an array of shape (n, R, 4), with n the number of assets in the current riskinput object

openquake.calculators.ucerf_event_based module

class openquake.calculators.ucerf_event_based.UCERFEventBasedCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.event_based.EventBasedCalculator

Event based PSHA calculator generating the ground motion fields and the hazard curves from the ruptures, depending on the configuration parameters. Specialized for the UCERF model.

pre_calculator = 'ucerf_event_based_rupture'
class openquake.calculators.ucerf_event_based.UCERFEventBasedRuptureCalculator(oqparam, monitor=<Monitor dummy>, calc_id=None)[source]

Bases: openquake.calculators.event_based.EventBasedRuptureCalculator

Event based PSHA calculator generating the ruptures only

agg(acc, val)[source]

Aggregate the ruptures and the calculation times

core_task(branch_info, source, sitecol, oqparam, monitor)

Returns the ruptures as a TRT set.

Parameters:
  • branch_info (str) – Tuple of (ltbr, branch_id, branch_weight)
  • source – Instance of the UCERFSESControl object
  • sitecol – Site collection, an openquake.hazardlib.site.SiteCollection instance
  • info – Instance of openquake.commonlib.source.CompositionInfo
Returns:

Dictionary of rupture instances associated to a TRT ID

etags
execute()[source]

Run the UCERF rupture calculation

is_stochastic = True
pre_execute()[source]

Parse the logic tree and the source model input

class openquake.calculators.ucerf_event_based.UCERFSESControl(source_file, id, investigation_time, min_mag, npd=<openquake.hazardlib.pmf.PMF object>, hdd=<openquake.hazardlib.pmf.PMF object>, aspect=1.5, upper_seismogenic_depth=0.0, lower_seismogenic_depth=15.0, msr=<WC1994>, mesh_spacing=1.0, trt='Active Shallow Crust', integration_distance=1000)[source]

Bases: object

Parameters:
  • source_file – Path to an existing HDF5 file containing the UCERF model
  • id (str) – Valid branch of UCERF
  • investigation_time (float) – Investigation time of event set (years)
  • min_mag (float) – Minimum magnitude for consideration of background sources
  • npd – Nodal plane distribution as instance of openquake.hazardlib.pmf.PMF
  • hdd – Hypocentral depth distribution as instance of openquake.hazardlib.pmf.PMF
  • aspect (float) – Aspect ratio
  • upper_seismogenic_depth (float) – Upper seismogenic depth (km)
  • lower_seismogenic_depth (float) – Lower seismogenic depth (km)
  • msr – Magnitude scaling relation
  • mesh_spacing (float) – Spacing (km) of fault mesh
  • trt (str) – Tectonic region type
  • integration_distance (float) – Maximum distance from rupture to site for consideration
static build_idx_set(branch_code)[source]

Builds a dictionary of indices based on the branch code

Parameters:branch_code (str) – Code for the branch
generate_event_set(branch_id, sites=None, integration_distance=1000.0)[source]

Generates the event set corresponding to a particular branch

update_background_site_filter(sites, integration_distance=1000.0)[source]

We can apply the filtering of the background sites as a pre-processing step - this is done here rather than in the sampling of the ruptures themselves

update_seed(seed)[source]

Updates the random seed associated with the source

class openquake.calculators.ucerf_event_based.UCERFSourceConverter(investigation_time, rupture_mesh_spacing, complex_fault_mesh_spacing=None, width_of_mfd_bin=1.0, area_source_discretization=None)[source]

Bases: openquake.commonlib.sourceconverter.SourceConverter

Adjustment of the UCERF Source Converter to return the source information as an instance of the UCERF SES Control object

convert_UCERFSource(node)[source]

Converts the UCERF source node into an SES Control object

openquake.calculators.ucerf_event_based.compute_ruptures(branch_info, source, sitecol, oqparam, monitor)[source]

Returns the ruptures as a TRT set.

Parameters:
  • branch_info (str) – Tuple of (ltbr, branch_id, branch_weight)
  • source – Instance of the UCERFSESControl object
  • sitecol – Site collection, an openquake.hazardlib.site.SiteCollection instance
  • info – Instance of openquake.commonlib.source.CompositionInfo
Returns:

Dictionary of rupture instances associated to a TRT ID

openquake.calculators.ucerf_event_based.generate_background_ruptures(tom, locations, occurrence, mag, npd, hdd, upper_seismogenic_depth, lower_seismogenic_depth, msr=<WC1994>, aspect=1.5, trt='Active Shallow Crust')[source]
Parameters:
  • tom – Temporal occurrence model as instance of openquake.hazardlib.tom.TOM
  • locations (numpy.ndarray) – Array of locations [Longitude, Latitude] of the point sources
  • occurrence (numpy.ndarray) – Annual rates of occurrence
  • mag (float) – Magnitude
  • npd – Nodal plane distribution as instance of openquake.hazardlib.pmf.PMF
  • hdd – Hypocentral depth distribution as instance of openquake.hazardlib.pmf.PMF
  • upper_seismogenic_depth (float) – Upper seismogenic depth (km)
  • lower_seismogenic_depth (float) – Lower seismogenic depth (km)
  • msr – Magnitude scaling relation
  • aspect (float) – Aspect ratio
  • trt (str) – Tectonic region type
Returns:

List of ruptures

openquake.calculators.ucerf_event_based.get_rupture_dimensions(mag, nodal_plane, msr, rupture_aspect_ratio, upper_seismogenic_depth, lower_seismogenic_depth)[source]

Calculate and return the rupture length and width for given magnitude mag and nodal plane.

Parameters:nodal_plane – Instance of openquake.hazardlib.geo.nodalplane.NodalPlane.
Returns:Tuple of two items: rupture length and rupture width in km.

The rupture area is calculated using the method get_median_area() of the source's magnitude-scaling relationship. In any case, the product of the returned dimensions is equal to that area. Then the area is decomposed into length and width with respect to the source's rupture aspect ratio.

If the calculated rupture width, inclined by the nodal plane's dip angle, would not fit between the upper and lower seismogenic depths, the rupture width is shrunk to the maximum possible and the rupture length is extended to preserve the same area.
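
A hedged sketch of the decomposition just described (not the actual implementation; get_median_area(mag, rake) is the usual hazardlib magnitude-scaling interface):

import math

area = msr.get_median_area(mag, nodal_plane.rake)     # km^2 from the scaling relation
rup_length = math.sqrt(area * rupture_aspect_ratio)   # length / width == aspect ratio
rup_width = area / rup_length
# if the width, projected along the dip, exceeds the seismogenic layer,
# shrink it and stretch the length so that length * width stays equal to the area
max_width = (lower_seismogenic_depth - upper_seismogenic_depth) / \
    math.sin(math.radians(nodal_plane.dip))
if rup_width > max_width:
    rup_width = max_width
    rup_length = area / rup_width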

openquake.calculators.ucerf_event_based.get_rupture_surface(mag, nodal_plane, hypocenter, msr, rupture_aspect_ratio, upper_seismogenic_depth, lower_seismogenic_depth, mesh_spacing=1.0)[source]

Create and return rupture surface object with given properties.

Parameters:
  • mag – Magnitude value, used to calculate rupture dimensions, see _get_rupture_dimensions().
  • nodal_plane – Instance of openquake.hazardlib.geo.nodalplane.NodalPlane describing the rupture orientation.
  • hypocenter – Point representing rupture’s hypocenter.
Returns:

Instance of PlanarSurface.

openquake.calculators.ucerf_event_based.get_ucerf_rupture(hdf5, iloc, idx_set, tom, sites, integration_distance, mesh_spacing=1.0, trt='Active Shallow Crust')[source]
Parameters:
  • hdf5 – Source Model hdf5 object as instance of h5py.File
  • iloc (int) – Location of the rupture plane in the hdf5 file
  • idx_set (dict) – Set of indices for the branch
  • tom – Temporal occurrence model as instance of openquake.hazardlib.tom.TOM
  • sites – Sites for consideration (can be None!)
openquake.calculators.ucerf_event_based.prefilter_background_model(hdf5, sites, integration_distance, msr=<WC1994>, aspect=1.5)[source]

Identify those points within the integration distance.

Parameters:
  • sites – Sites for consideration (can be None!)
  • integration_distance (float) – Maximum distance from rupture to site for consideration
  • msr – Magnitude scaling relation
  • aspect (float) – Aspect ratio
Returns:

Boolean vector indicating if sites are within (True) or outside (False) the integration distance
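
Since the return value is a boolean vector over the background points, it can be used directly as a numpy mask; a hedged sketch, where background_locations is a hypothetical array aligned with that vector:

mask = prefilter_background_model(hdf5, sites, integration_distance=1000.0)
selected = background_locations[mask]   # hypothetical [lon, lat] array of the kept points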

openquake.calculators.ucerf_event_based.prefilter_ruptures(hdf5, ridx, idx_set, sites, integration_distance)[source]

Determines if a rupture is likely to be inside the integration distance by considering the set of fault plane centroids.

Parameters:
  • hdf5 – Source of UCERF file as h5py.File object
  • ridx (list) – List of indices composing the rupture sections
  • idx_set (dict) – Set of indices for the branch
  • sites – Sites for consideration (can be None!)
  • integration_distance (float) – Maximum distance from rupture to site for consideration
openquake.calculators.ucerf_event_based.sample_background_model(hdf5, tom, filter_idx, min_mag, npd, hdd, upper_seismogenic_depth, lower_seismogenic_depth, msr=<WC1994>, aspect=1.5, trt='Active Shallow Crust')[source]

Generates a rupture set from a sample of the background model.

Parameters:
  • tom – Temporal occurrence model as instance of openquake.hazardlib.tom.TOM
  • filter_idx – Sites for consideration (can be None!)
  • min_mag (float) – Minimum magnitude for consideration of background sources
  • npd – Nodal plane distribution as instance of openquake.hazardlib.pmf.PMF
  • hdd – Hypocentral depth distribution as instance of openquake.hazardlib.pmf.PMF
  • aspect (float) – Aspect ratio
  • upper_seismogenic_depth (float) – Upper seismogenic depth (km)
  • lower_seismogenic_depth (float) – Lower seismogenic depth (km)
  • msr – Magnitude scaling relation
  • integration_distance (float) – Maximum distance from rupture to site for consideration

openquake.calculators.views module

openquake.calculators.views.avglosses_data_transfer(token, dstore)

Determine the amount of average losses transferred from the workers to the controller node in a risk calculation.

openquake.calculators.views.classify_gsim_lt(gsim_lt)
Returns:“trivial”, “simple” or “complex”
openquake.calculators.views.ebr_data_transfer(token, dstore)

Display the data transferred in an event based risk calculation

openquake.calculators.views.form(value)

Format numbers in a nice way.

>>> form(0)
'0'
>>> form(0.0)
'0.0'
>>> form(0.0001)
'1.000E-04'
>>> form(1003.4)
'1,003'
>>> form(103.4)
'103'
>>> form(9.3)
'9.300'
>>> form(-1.2)
'-1.2'
openquake.calculators.views.get_max_gmf_size(dstore)

Upper limit for the size of the GMFs

openquake.calculators.views.performance_view(dstore)

Returns the performance view as a numpy array.

openquake.calculators.views.portfolio_loss_from_agg_loss_table(agg_loss_table, loss_dt)
openquake.calculators.views.portfolio_loss_from_losses_by_taxon(losses_by_taxon, loss_dt)
openquake.calculators.views.rst_table(data, header=None, fmt=None)

Build a .rst table from a matrix.

>>> tbl = [['a', 1], ['b', 2]]
>>> print(rst_table(tbl, header=['Name', 'Value']))
==== =====
Name Value
==== =====
a    1    
b    2    
==== =====
openquake.calculators.views.stats(name, array, *extras)

Returns statistics from an array of numbers.

Parameters:name – a descriptive string
Returns:(name, mean, std, min, max, len)
openquake.calculators.views.sum_table(records)

Used to compute summaries. The records are assumed to have numeric fields, except the first field which is ignored, since it typically contains a label. Here is an example:

>>> sum_table([('a', 1), ('b', 2)])
['total', 3]
openquake.calculators.views.sum_tbl(tbl, kfield, vfields)

Aggregate a composite array and compute the totals on a given key.

>>> dt = numpy.dtype([('name', (bytes, 10)), ('value', int)])
>>> tbl = numpy.array([('a', 1), ('a', 2), ('b', 3)], dt)
>>> sum_tbl(tbl, 'name', ['value'])['value']
array([3, 3])
openquake.calculators.views.view_assetcol(token, dstore)

Display the exposure in CSV format

openquake.calculators.views.view_assets_by_site(token, dstore)

Display statistical information about the distribution of the assets

openquake.calculators.views.view_biggest_ebr_gmf(token, dstore)

Returns the size of the biggest GMF in an event based risk calculation

openquake.calculators.views.view_contents(token, dstore)

Returns the size of the contents of the datastore and its total size

openquake.calculators.views.view_csm_info(token, dstore)
openquake.calculators.views.view_exposure_info(token, dstore)

Display info about the exposure model

openquake.calculators.views.view_fullreport(token, dstore)

Display an .rst report about the computation

openquake.calculators.views.view_inputs(token, dstore)
openquake.calculators.views.view_job_info(token, dstore)

Determine the amount of data transferred from the controller node to the workers and back in a classical calculation.

openquake.calculators.views.view_loss_curves_avg(token, dstore)

Returns the average losses computed from the loss curves; for each asset shows all realizations.

openquake.calculators.views.view_mean_avg_losses(token, dstore)
openquake.calculators.views.view_params(token, dstore)
openquake.calculators.views.view_performance(token, dstore)

Display performance information

openquake.calculators.views.view_portfolio_loss(token, dstore)

The loss for the full portfolio, for each realization and loss type, extracted from the event loss table.

openquake.calculators.views.view_required_params_per_trt(token, dstore)

Display the parameters needed by each tectonic region type

openquake.calculators.views.view_ruptures_events(token, dstore)
openquake.calculators.views.view_ruptures_per_trt(token, dstore)
openquake.calculators.views.view_short_source_info(token, dstore, maxrows=20)
openquake.calculators.views.view_slow_sources(token, dstore, maxrows=20)

Returns the slowest sources

openquake.calculators.views.view_task_durations(token, dstore)

Display the raw task durations. Here is an example of usage:

$ oq show task_durations:classical
openquake.calculators.views.view_task_info(token, dstore)

Display statistical information about the performance of the tasks. It is possible to get full information about a specific task with a command like this one, for a classical calculation:

$ oq show task_info:classical
openquake.calculators.views.view_task_slowest(token, dstore)

Display info about the slowest classical task.

openquake.calculators.views.view_times_by_source_class(token, dstore)

Returns the calculation times depending on the source typology

openquake.calculators.views.view_totlosses(token, dstore)

This is a debugging view. You can use it to check that the total losses, i.e. the losses obtained by summing the average losses over all assets, are indeed equal to the aggregate losses. This is a sanity check for the correctness of the implementation.

Module contents