openquake.calculators package#

base module#

class openquake.calculators.base.BaseCalculator(oqparam, calc_id)[source]#

Bases: object

Abstract base class for all calculators.

Parameters:
  • oqparam – OqParam object

  • monitor – monitor object

  • calc_id – numeric calculation ID

accept_precalc = []#
check_precalc(precalc_mode)[source]#

Defensive programming against users providing an incorrect pre-calculation ID (with --hazard-calculation-id).

Parameters:

precalc_mode – calculation_mode of the previous calculation

core_task()[source]#

Core routine running on the workers.

abstract execute()[source]#

Execution phase. Usually it runs the core function in parallel and returns a dictionary with the results.

export(exports=None)[source]#

Export all the outputs in the datastore in the given export formats. Individual outputs are not exported if there are multiple realizations.

from_engine = False#
gzip_inputs()[source]#

Gzipping the inputs and saving them in the datastore

is_stochastic = False#
monitor(operation='', **kw)[source]#
Returns:

a new Monitor instance

abstract post_execute(result)[source]#

Post-processing phase of the aggregated output. It must be overridden with the export code. It will return a dictionary of output files.

pre_checks()[source]#

Checks to run after pre_execute but before execute

abstract pre_execute()[source]#

Initialization phase.

precalc = None#
run(pre_execute=True, concurrent_tasks=None, remove=False, shutdown=False, **kw)[source]#

Run the calculation and return the exported outputs.

Parameters:
  • pre_execute – set it to False to avoid running pre_execute

  • concurrent_tasks – set it to 0 to disable parallelization

  • remove – set it to True to remove the hdf5cache file (if any)

  • shutdown – set it to True to shutdown the ProcessPool

save_params(**kw)[source]#

Update the current calculation parameters and save engine_version

class openquake.calculators.base.HazardCalculator(oqparam, calc_id)[source]#

Bases: BaseCalculator

Base class for hazard calculators based on source models

property E#
Returns:

the number of stored events

property N#
Returns:

the number of sites

property R#
Returns:

the number of realizations

af = None#
amplifier = None#
check_consequences()[source]#
check_discardable(rel_ruptures)[source]#

Check if logic tree reduction is possible

check_floating_spinning()[source]#
check_overflow()[source]#

Overridden in the event based calculator

property few_sites#
Returns:

True if there are fewer sites than max_sites_disagg

import_perils()[source]#

Read the hazard fields as csv files, associate them to the sites and create suitable gmf_data and events.

init()[source]#

To be overridden to initialize the datasets needed by the calculation

load_crmodel()[source]#

Read the risk models and set the attribute .crmodel. The crmodel can be empty for hazard calculations. Save the loss ratios (if any) in the datastore.

load_insurance_data(lt_fnames)[source]#

Read the insurance files and populate the policy_df

post_process()[source]#

Run postprocessing function, if any

pre_execute()[source]#

Check if there is a previous calculation ID. If yes, read the inputs by retrieving the previous calculation; if not, read the inputs directly.

pre_execute_from_parent()[source]#

Read data from the parent calculation and perform some checks

read_exposure(haz_sitecol)[source]#

Read the exposure, the risk models and update the attributes .sitecol, .assetcol

read_inputs()[source]#

Read risk data and sources if any

save_crmodel()[source]#

Save the risk models in the datastore

src_filter()[source]#
Returns:

a SourceFilter

store_rlz_info(rel_ruptures)[source]#

Save info about the composite source model inside the full_lt dataset

Parameters:

rel_ruptures – dictionary TRT -> number of relevant ruptures

store_source_info(source_data)[source]#

Save (eff_ruptures, num_sites, calc_time) inside the source_info

exception openquake.calculators.base.InvalidCalculationID[source]#

Bases: Exception

Raised when running a post-calculation on top of an incompatible pre-calculation

class openquake.calculators.base.RiskCalculator(oqparam, calc_id)[source]#

Bases: HazardCalculator

Base class for all risk calculators. A risk calculator must set the attributes .crmodel, .sitecol, .assetcol, .riskinputs in the pre_execute phase.

build_riskinputs()[source]#
Returns:

a list of RiskInputs objects, sorted by IMT.

combine(acc, res)[source]#

Combine the outputs assuming acc and res are dictionaries

execute()[source]#

Parallelize on the riskinputs and return a dictionary of results. Requires a .core_task to be defined with signature (riskinputs, crmodel, param, monitor).

openquake.calculators.base.check_amplification(ampl_df, sitecol)[source]#

Make sure the amplification codes in the site collection match the ones in the amplification table.

Parameters:
  • ampl_df – the amplification table as a pandas DataFrame

  • sitecol – the site collection

openquake.calculators.base.check_imtls(this, parent)[source]#

Fix the hazard_imtls of two calculations if possible

openquake.calculators.base.check_time_event(oqparam, occupancy_periods)[source]#

Check the time_event parameter in the datastore, by comparing with the periods found in the exposure.

openquake.calculators.base.consistent(dic1, dic2)[source]#

Check if two dictionaries with default are consistent:

>>> consistent({'PGA': 0.05, 'SA(0.3)': 0.05}, {'default': 0.05})
True
>>> consistent({'SA(0.3)': 0.1, 'SA(0.6)': 0.05},
... {'default': 0.1, 'SA(0.3)': 0.1, 'SA(0.6)': 0.05})
True
openquake.calculators.base.create_gmf_data(dstore, prim_imts, sec_imts=(), data=None, N=None, E=None)[source]#

Create and possibly populate the datasets in the gmf_data group

openquake.calculators.base.create_risk_by_event(calc)[source]#

Create an empty risk_by_event dataset with keys event_id, agg_id, loss_id and fields for damages, losses and consequences

openquake.calculators.base.csv2peril(fname, name, sitecol, tofloat, asset_hazard_distance)[source]#

Converts a CSV file into a peril array of length N

openquake.calculators.base.expose_outputs(dstore, owner='runner', status='complete')[source]#

Build a correspondence between the outputs in the datastore and the ones in the database.

Parameters:

dstore – datastore

openquake.calculators.base.get_aelo_changelog()[source]#
openquake.calculators.base.get_aelo_version()[source]#
openquake.calculators.base.get_stats(seq)[source]#
openquake.calculators.base.import_gmfs_csv(dstore, oqparam, sitecol)[source]#

Import in the datastore a ground motion field CSV file.

Parameters:
  • dstore – the datastore

  • oqparam – an OqParam instance

  • sitecol – the site collection

Returns:

event_ids

openquake.calculators.base.import_gmfs_hdf5(dstore, oqparam)[source]#

Import in the datastore a ground motion field HDF5 file.

Parameters:
  • dstore – the datastore

  • oqparam – an OqParam instance

Returns:

event_ids

openquake.calculators.base.read_parent_sitecol(oq, dstore)[source]#
Returns:

the hazard site collection in the parent calculation

openquake.calculators.base.read_shakemap(calc, haz_sitecol, assetcol)[source]#

Enabled only if there is a shakemap_id parameter in the job.ini. Download, unzip, parse USGS shakemap files and build a corresponding set of GMFs which are then filtered with the hazard site collection and stored in the datastore.

openquake.calculators.base.run_calc(job_ini, **kw)[source]#

Helper to run calculations programmatically.

Parameters:
  • job_ini – path to a job.ini file or dictionary of parameters

  • kw – parameters to override

Returns:

a Calculator instance
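
A minimal sketch of programmatic usage, assuming a valid job.ini file (the path and the overridden parameter are illustrative):

from openquake.calculators.base import run_calc

# any job parameter can be overridden via keyword arguments
calc = run_calc('job.ini', concurrent_tasks=0)  # run without parallelization
print(calc.datastore.filename)  # the datastore containing the results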

openquake.calculators.base.save_agg_values(dstore, assetcol, lossnames, aggby, maxagg)[source]#

Store agg_keys, agg_values.

Returns:

the aggkey dictionary key -> tags

openquake.calculators.base.set_array(longarray, shortarray)[source]#
Parameters:
  • longarray – a numpy array of floats of length L >= l

  • shortarray – a numpy array of floats of length l

Fill longarray with the values of shortarray, starting from the left. If shortarray is shorter than longarray, then the remaining elements on the right are filled with numpy.nan values.
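
A small sketch of the fill semantics described above:

import numpy
from openquake.calculators.base import set_array

longarray = numpy.zeros(5)
shortarray = numpy.array([1., 2., 3.])
set_array(longarray, shortarray)
# longarray is now [1., 2., 3., nan, nan]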

openquake.calculators.base.store_shakemap(calc, sitecol, shakemap, gmf_dict)[source]#

Store a ShakeMap array as a gmf_data dataset.

openquake.calculators.base.wkt2peril(fname, name, sitecol)[source]#

Converts a WKT file into a peril array of length N

getters module#

class openquake.calculators.getters.CurveGetter(sid, rates, trt_rlzs, R)[source]#

Bases: object

Hazard curve builder used in classical_risk/classical_damage.

Parameters:
  • sid – site index

  • rates – array of shape (L, G) for the given site

classmethod build(dstore)[source]#
Returns:

a dictionary sid -> CurveGetter

get_hazard()[source]#
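
A usage sketch, assuming a completed classical calculation (the calculation ID is hypothetical):

from openquake.commonlib import datastore
from openquake.calculators.getters import CurveGetter

dstore = datastore.read(42)                # 42 is a hypothetical calc_id
getter_by_sid = CurveGetter.build(dstore)  # dictionary sid -> CurveGetter
curve = getter_by_sid[0].get_hazard()      # hazard for the first site
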
class openquake.calculators.getters.HcurvesGetter(dstore)[source]#

Bases: object

Read the contribution to the hazard curves coming from each source in a calculation with a source specific logic tree

get_hcurve(src_id, imt=None, site_id=0, gsim_idx=None)[source]#

Return the curve associated to the given src_id, imt and gsim_idx as an array of length L

get_hcurves(src, imt=None, site_id=0, gsim_idx=None)[source]#

Return the curves associated to the given src, imt and gsim_idx as an array of shape (R, L)

get_mean_hcurve(src=None, imt=None, site_id=0, gsim_idx=None)[source]#

Return the mean curve associated to the given src, imt and gsim_idx as an array of shape L

class openquake.calculators.getters.MapGetter(filenames, idx, trt_rlzs, R, oq)[source]#

Bases: object

Read hazard curves from the datastore for all realizations or for a specific realization.

property G#
property L#
property M#
property N#
get_fast_mean(gweights)[source]#
Returns:

a MapArray of shape (N, M, L1) with the mean hcurves

get_hcurve(sid)[source]#
Parameters:

sid – a site ID

Returns:

an array of shape (L, R) for the given site ID

property imts#
init()[source]#

Build the _map from the underlying dataframes

property sids#
exception openquake.calculators.getters.NotFound[source]#

Bases: Exception

class openquake.calculators.getters.RuptureGetter(proxies, filename, trt_smr, trt, rlzs_by_gsim)[source]#

Bases: object

Parameters:
  • proxies – a list of RuptureProxies

  • filename – path to the HDF5 file containing a ‘rupgeoms’ dataset

  • trt_smr – source group index

  • trt – tectonic region type string

  • rlzs_by_gsim – dictionary gsim -> rlzs for the group

get_proxies(min_mag=0)[source]#
Returns:

a list of RuptureProxies

property num_ruptures#
property seeds#
split(srcfilter, maxw)[source]#
Returns:

RuptureProxies with weight < maxw

class openquake.calculators.getters.ZeroGetter(L, R)[source]#

Bases: object

Return an array of zeros of shape (L, R)

get_hazard()[source]#
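
For instance, consistently with the description above:

from openquake.calculators.getters import ZeroGetter

zeros = ZeroGetter(4, 2).get_hazard()  # an array of zeros of shape (4, 2)
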
openquake.calculators.getters.build_stat_curve(hcurve, imtls, stat, weights, wget, use_rates=False)[source]#

Build statistics by taking into account IMT-dependent weights

openquake.calculators.getters.get_ebrupture(dstore, rup_id)[source]#

This is EXTREMELY inefficient, so it must be used only when you are interested in a single rupture.

openquake.calculators.getters.get_ebruptures(dstore)[source]#

Extract EBRuptures from the datastore

openquake.calculators.getters.get_num_chunks(dstore)[source]#
Returns:

the number of postclassical tasks to generate.

It is 5 times the number of GB required to store the rates.

openquake.calculators.getters.get_pmaps_gb(dstore, full_lt=None)[source]#
Returns:

memory required on the master node to keep the pmaps

openquake.calculators.getters.get_rupture_getters(dstore, ct=0, srcfilter=None, rupids=None)[source]#
Returns:

a list of RuptureGetters

openquake.calculators.getters.line(points)[source]#
openquake.calculators.getters.map_getters(dstore, full_lt=None, disagg=False)[source]#
Returns:

a list of pairs (MapGetter, weights)
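
A sketch of how the getters could be used to read hazard curves, assuming a completed classical calculation:

from openquake.commonlib import datastore
from openquake.calculators import getters

dstore = datastore.read(42)  # hypothetical calculation ID
for mgetter, weights in getters.map_getters(dstore):
    mgetter.init()  # build the _map from the underlying dataframes
    for sid in mgetter.sids:
        hcurve = mgetter.get_hcurve(sid)  # array of shape (L, R)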

openquake.calculators.getters.multiline(array3RC)[source]#
Parameters:

array3RC – array of shape (3, R, C)

Returns:

a MULTILINESTRING

openquake.calculators.getters.sig_eps_dt(imts)[source]#
Returns:

a composite data type for the sig_eps output

classical module#

class openquake.calculators.classical.ClassicalCalculator(oqparam, calc_id)[source]#

Bases: HazardCalculator

Classical PSHA calculator

SLOW_TASK_ERROR = False#
accept_precalc = ['preclassical', 'classical']#
agg_dicts(acc, dic)[source]#

Aggregate dictionaries of hazard curves by updating the accumulator.

Parameters:
  • acc – accumulator dictionary

  • dic – dict with keys pmap, source_data, rup_data

build_curves_maps()[source]#

Compute and store hcurves-rlzs, hcurves-stats, hmaps-rlzs, hmaps-stats

check_mean_rates(mean_rates_by_src)[source]#

The sum of the mean_rates_by_src must correspond to the mean_rates

check_memory(N, L, maxw)[source]#

Log the memory required to receive the largest MapArray, assuming all sites are affected (upper limit)

collect_hazard(acc, pmap_by_kind)[source]#

Populate hcurves and hmaps in the .hazard dictionary

Parameters:
  • acc – ignored

  • pmap_by_kind – a dictionary of MapArrays

core_task(tilegetters, cmaker, dstore, monitor)#

Call the classical calculator in hazardlib

create_rup()[source]#

Create the rup datasets before starting the calculation

execute()[source]#

Run core_task(sources, sitecol, monitor) in parallel, distributing the sources according to their weight and tectonic region type.

init_poes()[source]#
plot_hmaps()[source]#

Generate hazard map plots if there are more than 1000 sites

post_execute(dummy)[source]#

Check for slow tasks

precalc = 'preclassical'#
store_info()[source]#

Store full_lt, source_info and source_data

class openquake.calculators.classical.Hazard(dstore, srcidx, gids)[source]#

Bases: object

Helper class for storing the rates

get_rates(pmap, grp_id)[source]#
Parameters:

pmap – a MapArray

Returns:

an array of rates of shape (N, M, L1)

store_mean_rates_by_src(dic)[source]#

Store data inside mean_rates_by_src with shape (N, M, L1, Ns)

class openquake.calculators.classical.Set[source]#

Bases: set

openquake.calculators.classical.classical(sources, tilegetters, cmaker, dstore, monitor)[source]#

Call the classical calculator in hazardlib

openquake.calculators.classical.fast_mean(pgetter, monitor)[source]#
Parameters:
  • pgetter – a openquake.commonlib.getters.MapGetter

  • gweights – an array of G weights

Returns:

a dictionary kind -> MapArray

openquake.calculators.classical.get_heavy_gids(source_groups, cmakers)[source]#
Returns:

the g-indices associated to the heavy groups

openquake.calculators.classical.make_hmap_png(hmap, lons, lats)[source]#
Parameters:
  • hmap – a dictionary with keys calc_id, m, p, imt, poe, inv_time, array

  • lons – an array of longitudes

  • lats – an array of latitudes

Returns:

an Image object containing the hazard map

openquake.calculators.classical.postclassical(pgetter, wget, hstats, individual_rlzs, max_sites_disagg, amplifier, monitor)[source]#
Parameters:
  • pgetter – a openquake.commonlib.getters.MapGetter

  • wget – function (weights[:, :], imt) -> weights[:]

  • hstats – a list of pairs (statname, statfunc)

  • individual_rlzs – if True, also build the individual curves

  • max_sites_disagg – if there are less sites than this, store rup info

  • amplifier – instance of Amplifier or None

  • monitor – instance of Monitor

Returns:

a dictionary kind -> MapArray

The “kind” is a string of the form ‘rlz-XXX’, ‘mean’ or ‘quantile-XXX’ used to specify the kind of output.

openquake.calculators.classical.save_rates(g, N, jid, num_chunks, mon)[source]#

Store the rates for the given g on a file scratch/calc_id/task_no.hdf5

openquake.calculators.classical.store_ctxs(dstore, rupdata_list, grp_id)[source]#

Store contexts in the datastore

openquake.calculators.classical.tiling(tilegetter, cmaker, dstore, monitor)[source]#

Tiling calculator

classical_bcr module#

class openquake.calculators.classical_bcr.ClassicalBCRCalculator(oqparam, calc_id)[source]#

Bases: ClassicalRiskCalculator

Classical BCR Risk calculator

accept_precalc = ['classical']#
core_task(param, monitor)#

Compute and return the average losses for each asset.

post_execute(result)[source]#

Saving loss curves in the datastore.

Parameters:

result – aggregated result of the task classical_risk

pre_execute()[source]#

Associate the assets to the sites and build the riskinputs.

openquake.calculators.classical_bcr.classical_bcr(riskinputs, param, monitor)[source]#

Compute and return the average losses for each asset.

classical_damage module#

class openquake.calculators.classical_damage.ClassicalDamageCalculator(oqparam, calc_id)[source]#

Bases: ClassicalRiskCalculator

Classical damage calculator

accept_precalc = ['classical']#
core_task(param, monitor)#

Core function for a classical damage computation.

Yields:

dictionaries asset_ordinal -> damage(R, L, D)

post_execute(result)[source]#

Export the result in CSV format.

Parameters:

result – a dictionary asset_ordinal -> array(R, D)

openquake.calculators.classical_damage.classical_damage(riskinputs, param, monitor)[source]#

Core function for a classical damage computation.

Yields:

dictionaries asset_ordinal -> damage(R, L, D)

classical_risk module#

class openquake.calculators.classical_risk.ClassicalRiskCalculator(oqparam, calc_id)[source]#

Bases: RiskCalculator

Classical Risk calculator

accept_precalc = ['classical']#
core_task(oqparam, monitor)#

Compute and return the average losses for each asset.

post_execute(result)[source]#

Saving loss curves in the datastore.

Parameters:

result – aggregated result of the task classical_risk

pre_execute()[source]#

Associate the assets to the sites and build the riskinputs.

precalc = 'classical'#
openquake.calculators.classical_risk.classical_risk(riskinputs, oqparam, monitor)[source]#

Compute and return the average losses for each asset.

disaggregation module#

Disaggregation calculator core functionality

class openquake.calculators.disaggregation.DisaggregationCalculator(oqparam, calc_id)[source]#

Bases: HazardCalculator

Classical PSHA disaggregation calculator

accept_precalc = ['classical', 'disaggregation']#
agg_result(acc, results)[source]#

Collect the results coming from compute_disagg into self.results.

Parameters:
  • acc – dictionary s, r -> array8D

  • results – dictionary with the result coming from a task

compute()[source]#

Submit disaggregation tasks and return the results

execute()[source]#

Performs the disaggregation

full_disaggregation()[source]#

Run the disaggregation phase.

post_execute(results)[source]#

Save all the results of the disaggregation. NB: the number of results to save is #sites * #rlzs * #disagg_poes * #IMTs.

Parameters:

results – a dictionary sid, rlz -> 8D disagg matrix

pre_checks()[source]#

Checks on the number of sites, atomic groups and size of the disaggregation matrix.

precalc = 'classical'#
save_bin_edges(all_edges)[source]#

Save disagg-bins

save_disagg_results(results, name)[source]#

Save the computed PMFs in the datastore.

Parameters:
  • results – a dict s, z -> 8D-matrix of shape (T, Ma, D, E, Lo, La, M, P) containing individual realizations or statistics (only mean)

  • name – the string “disagg-rlzs” or “disagg-stats”

openquake.calculators.disaggregation.check_memory(N, Z, shape8D)[source]#

Raise an error if the calculation will require too much memory

openquake.calculators.disaggregation.compute_disagg(dstore, ctxt, sitecol, cmaker, bin_edges, src_mutex, rwdic, monitor)[source]#
Parameters:
  • dstore – a DataStore instance

  • ctxt – a context array

  • sitecol – a site collection

  • cmaker – a ContextMaker instance

  • bin_edges – a tuple of bin edges (mag, dist, lon, lat, eps, trt)

  • src_mutex – a dictionary src_id -> weight, usually empty

  • rwdic – dictionary rlz -> weight, empty for individual realizations

  • monitor – monitor of the currently running job

Returns:

a list of dictionaries containing matrices of rates

openquake.calculators.disaggregation.get_outputs_size(shapedic, disagg_outputs, Z)[source]#
Returns:

the total size of the outputs

openquake.calculators.disaggregation.output_dict(shapedic, disagg_outputs, Z)[source]#
openquake.calculators.disaggregation.submit(smap, dstore, ctxt, sitecol, cmaker, bin_edges, src_mutex, rwdic)[source]#

event_based module#

class openquake.calculators.event_based.EventBasedCalculator(oqparam, calc_id)[source]#

Bases: HazardCalculator

Event based PSHA calculator generating the ground motion fields and the hazard curves from the ruptures, depending on the configuration parameters.

accept_precalc = ['event_based', 'ebrisk', 'event_based_risk']#
agg_dicts(acc, result)[source]#
Parameters:
  • acc – accumulator dictionary

  • result – an AccumDict with events, ruptures and gmfs

build_events_from_sources()[source]#

Prefilter the composite source model and store the source_info

core_task(cmaker, stations, dstore, monitor)#

Compute GMFs and optionally hazard curves

execute()[source]#

Execution phase. Usually it runs the core function in parallel and returns a dictionary with the results.

init()[source]#

To be overridden to initialize the datasets needed by the calculation

is_stochastic = True#
post_execute(dummy)[source]#

Post-processing phase of the aggregated output. It must be overridden with the export code. It will return a dictionary of output files.

save_avg_gmf()[source]#

Compute and save avg_gmf, unless there are too many GMFs

openquake.calculators.event_based.build_hcurves(calc)[source]#

Build the hazard curves from each realization starting from the stored GMFs. Works only when there are few sites.

openquake.calculators.event_based.compute_avg_gmf(gmf_df, weights, min_iml)[source]#
Parameters:
  • gmf_df – a DataFrame with columns eid, sid, rlz, gmv…

  • weights – E weights associated to the realizations

  • min_iml – array of M minimum intensities

Returns:

a dictionary site_id -> array of shape (2, M)

openquake.calculators.event_based.count_ruptures(src)[source]#

Count the number of ruptures on a heavy source

openquake.calculators.event_based.event_based(proxies, cmaker, stations, dstore, monitor)[source]#

Compute GMFs and optionally hazard curves

openquake.calculators.event_based.filter_stations(station_df, complete, rup, maxdist)[source]#
Parameters:
  • station_df – DataFrame with the stations

  • complete – complete SiteCollection

  • rup – rupture

  • maxdist – maximum distance

Returns:

filtered (station_df, station_sitecol)

openquake.calculators.event_based.get_computer(cmaker, proxy, srcfilter, station_data, station_sitecol)[source]#
Returns:

GmfComputer or ConditionedGmfComputer

openquake.calculators.event_based.read_gsim_lt(oq)[source]#
openquake.calculators.event_based.rup_weight(rup)[source]#
openquake.calculators.event_based.set_mags(oq, dstore)[source]#

Set the attribute oq.mags_by_trt

openquake.calculators.event_based.starmap_from_rups(func, oq, full_lt, sitecol, dstore, save_tmp=None)[source]#

Submit the ruptures and apply func (event_based or ebrisk)

event_based_risk module#

class openquake.calculators.event_based_risk.EventBasedRiskCalculator(oqparam, calc_id)[source]#

Bases: EventBasedCalculator

Event based risk calculator generating event loss tables

accept_precalc = ['scenario', 'event_based', 'event_based_risk', 'ebrisk']#
agg_dicts(dummy, dic)[source]#
Parameters:
  • dummy – unused parameter

  • dic – dictionary with keys “avg”, “alt”

build_aggcurves()[source]#
core_task(cmaker, stations, dstore, monitor)#
Parameters:
  • proxies – list of RuptureProxies with the same trt_smr

  • cmaker – ContextMaker instance associated to the trt_smr

  • stations – empty pair or (station_data, station_sitecol)

  • monitor – a Monitor instance

Returns:

a dictionary of arrays

create_avg_losses()[source]#
execute()[source]#

Compute risk from GMFs or ruptures depending on what is stored

is_stochastic = True#
log_info(eids)[source]#

Printing some information about the risk calculation

post_execute(dummy)[source]#

Compute and store average losses from the risk_by_event dataset, and then loss curves and maps.

pre_execute()[source]#

Check if there is a previous calculation ID. If yes, read the inputs by retrieving the previous calculation; if not, read the inputs directly.

precalc = 'event_based'#
save_tmp(monitor)[source]#

Save some useful data in the file calc_XXX_tmp.hdf5

openquake.calculators.event_based_risk.aggreg(outputs, crmodel, ARK, aggids, rlz_id, ideduc, monitor)[source]#
Returns:

(avg_losses, agg_loss_table)

openquake.calculators.event_based_risk.average_losses(ln, alt, rlz_id, AR, collect_rlzs)[source]#
Returns:

a sparse coo matrix with the losses per asset and realization

openquake.calculators.event_based_risk.debugprint(ln, asset_loss_table, adf)[source]#

Print risk_by_event in a reasonable format. To be used with --nd

openquake.calculators.event_based_risk.ebr_from_gmfs(sbe, oqparam, dstore, monitor)[source]#
Parameters:
  • slice_by_event – composite array with fields ‘start’, ‘stop’

  • oqparam – OqParam instance

  • dstore – DataStore instance from which to read the GMFs

  • monitor – a Monitor instance

Yields:

dictionary of arrays, the output of event_based_risk

openquake.calculators.event_based_risk.ebrisk(proxies, cmaker, stations, dstore, monitor)[source]#
Parameters:
  • proxies – list of RuptureProxies with the same trt_smr

  • cmaker – ContextMaker instance associated to the trt_smr

  • stations – empty pair or (station_data, station_sitecol)

  • monitor – a Monitor instance

Returns:

a dictionary of arrays

openquake.calculators.event_based_risk.event_based_risk(df, oqparam, monitor)[source]#
Parameters:
  • df – a DataFrame of GMFs with fields sid, eid, gmv_X, …

  • oqparam – parameters coming from the job.ini

  • monitor – a Monitor instance

Returns:

a dictionary of arrays

openquake.calculators.event_based_risk.fast_agg(keys, values, correl, li, acc)[source]#
Parameters:
  • keys – an array of N uint64 numbers encoding (event_id, agg_id)

  • values – an array of (N, D) floats

  • correl – True if there is asset correlation

  • li – loss type index

  • acc – dictionary unique key -> array(L, D)

openquake.calculators.event_based_risk.gen_outputs(df, crmodel, rng, monitor)[source]#
Parameters:
  • df – GMF dataframe (a slice of events)

  • crmodel – CompositeRiskModel instance

  • rng – random number generator

  • monitor – Monitor instance

Yields:

one output per taxonomy and slice of events

openquake.calculators.event_based_risk.set_oqparam(oq, assetcol, dstore)[source]#

Set the attributes .M, .K, .A, .ideduc, ._sec_losses

event_based_damage module#

class openquake.calculators.event_based_damage.DamageCalculator(oqparam, calc_id)[source]#

Bases: EventBasedRiskCalculator

Damage calculator

accept_precalc = ['scenario', 'event_based', 'event_based_risk', 'event_based_damage']#
combine(acc, res)[source]#
Parameters:
  • acc – unused

  • res – DataFrame with fields (event_id, agg_id, loss_id, dmg1 …) plus array with damages and consequences of shape (A, Dc)

Combine the results and grow risk_by_event with fields (event_id, agg_id, loss_id) and (dmg_0, dmg_1, dmg_2, …)

core_task(oq, dstore, monitor)#
Parameters:
  • df – a DataFrame of GMFs with fields sid, eid, gmv_X, …

  • oq – parameters coming from the job.ini

  • dstore – a DataStore instance

  • monitor – a Monitor instance

Returns:

(damages (eid, kid) -> LDc plus damages (A, Dc))

create_avg_losses()[source]#

Do nothing: there are no losses in the DamageCalculator

execute()[source]#

Compute risk from GMFs or ruptures depending on what is stored

is_stochastic = True#
post_execute(dummy)[source]#

Store damages-rlzs/stats, aggrisk and aggcurves

precalc = 'event_based'#
openquake.calculators.event_based_damage.damage_from_gmfs(gmfslices, oqparam, dstore, monitor)[source]#
Parameters:
  • gmfslices – an array (S, 3) with S slices (start, stop, weight)

  • oqparam – OqParam instance

  • dstore – DataStore instance from which to read the GMFs

  • monitor – a Monitor instance

Returns:

a dictionary of arrays, the output of event_based_damage

openquake.calculators.event_based_damage.event_based_damage(df, oq, dstore, monitor)[source]#
Parameters:
  • df – a DataFrame of GMFs with fields sid, eid, gmv_X, …

  • oq – parameters coming from the job.ini

  • dstore – a DataStore instance

  • monitor – a Monitor instance

Returns:

(damages (eid, kid) -> LDc plus damages (A, Dc))

openquake.calculators.event_based_damage.zero_dmgcsq(A, R, L, crmodel)[source]#
Returns:

an array of zeros of shape (A, R, L, Dc)

post_risk module#

class openquake.calculators.post_risk.FakeBuilder[source]#

Bases: object

eff_time = 0.0#
pla_factor = None#
class openquake.calculators.post_risk.PostRiskCalculator(oqparam, calc_id)[source]#

Bases: RiskCalculator

Compute losses and loss curves starting from an event loss table.

execute()[source]#

Parallelize on the riskinputs and return a dictionary of results. Requires a .core_task to be defined with signature (riskinputs, crmodel, param, monitor).

post_execute(ok)[source]#

Sanity checks and save agg_curves-stats

pre_execute()[source]#

Check if there is a previous calculation ID. If yes, read the inputs by retrieving the previous calculation; if not, read the inputs directly.

openquake.calculators.post_risk.build_aggcurves(items, builder, num_events, aggregate_loss_curves_types)[source]#
Parameters:
  • items – a list of pairs ((agg_id, rlz_id, loss_id), losses)

  • builder – a LossCurvesMapsBuilder instance

openquake.calculators.post_risk.build_reinsurance(dstore, oq, num_events)[source]#

Build and store the tables reinsurance-avg_policy and reinsurance-avg_portfolio; for event_based, also build the reinsurance-aggcurves table.

openquake.calculators.post_risk.build_store_agg(dstore, oq, rbe_df, num_events)[source]#

Build the aggrisk and aggcurves tables from the risk_by_event table

openquake.calculators.post_risk.fix_dtype(dic, dtype, names)[source]#
openquake.calculators.post_risk.fix_dtypes(dic)[source]#

Fix the dtypes of the given columns inside a dictionary (to be called before conversion to a DataFrame)

openquake.calculators.post_risk.fix_investigation_time(oq, dstore)[source]#

If starting from GMFs, fix oq.investigation_time.

Returns:

the number of hazard realizations

openquake.calculators.post_risk.get_loss_builder(dstore, oq, return_periods=None, loss_dt=None, num_events=None)[source]#
Parameters:

dstore – datastore for an event based risk calculation

Returns:

a LossCurvesMapsBuilder instance or a Mock object for scenarios

openquake.calculators.post_risk.get_loss_id(ext_loss_types)[source]#
openquake.calculators.post_risk.get_src_loss_table(dstore, loss_id)[source]#
Returns:

(source_ids, array of losses of shape Ns)

openquake.calculators.post_risk.post_aggregate(calc_id: int, aggregate_by)[source]#

Re-run the postprocessing after an event based risk calculation

openquake.calculators.post_risk.reagg_idxs(num_tags, tagnames)[source]#
Parameters:
  • num_tags – dictionary tagname -> number of tags with that tagname

  • tagnames – subset of tagnames of interest

Returns:

T = T1 x … x TN indices with repetitions

Reaggregate indices. Consider for instance a case with 3 tagnames, taxonomy (4 tags), region (3 tags) and country (2 tags):

>>> num_tags = dict(taxonomy=4, region=3, country=2)

There are T = T1 x T2 x T3 = 4 x 3 x 2 = 24 combinations. The function will return 24 reaggregated indices with repetitions depending on the selected subset of tagnames.

For instance reaggregating by taxonomy and region would give:

>>> list(reagg_idxs(num_tags, ['taxonomy', 'region']))  # 4x3
[0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11]

Reaggregating by taxonomy and country would give:

>>> list(reagg_idxs(num_tags, ['taxonomy', 'country']))  # 4x2
[0, 1, 0, 1, 0, 1, 2, 3, 2, 3, 2, 3, 4, 5, 4, 5, 4, 5, 6, 7, 6, 7, 6, 7]

Reaggregating by region and country would give:

>>> list(reagg_idxs(num_tags, ['region', 'country']))  # 3x2
[0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5]

Here is an example of single tag aggregation:

>>> list(reagg_idxs(num_tags, ['taxonomy']))  # 4
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3]
openquake.calculators.post_risk.save_curve_stats(dstore)[source]#

Save agg_curves-stats

openquake.calculators.post_risk.store_aggcurves(oq, agg_ids, rbe_df, builder, loss_cols, events, num_events, dstore)[source]#

reportwriter module#

Utilities to build a report writer generating a .rst report for a calculation

class openquake.calculators.reportwriter.ReportWriter(dstore)[source]#

Bases: object

A particularly smart view over the datastore

add(name, obj=None)[source]#

Add the view named name to the report text

make_report(show_inputs=True)[source]#

Build the report and return a restructured text string

save(fname)[source]#

Save the report

title = {'avglosses_data_transfer': 'Estimated data transfer for the avglosses', 'biggest_ebr_gmf': 'Maximum memory allocated for the GMFs', 'exposure_info': 'Exposure model', 'inputs': 'Input files', 'job_info': 'Data transfer', 'params': 'Parameters', 'performance': 'Slowest operations', 'required_params_per_trt': 'Required parameters per tectonic region type', 'ruptures_events': 'Specific information for event based', 'slow_sources': 'Slowest sources', 'task:start_classical:-1': 'Slowest task', 'task:start_classical:0': 'Fastest task', 'task_info': 'Information about the tasks', 'weight_by_src': 'Computation times by source typology'}#
openquake.calculators.reportwriter.build_report(job_ini, output_dir=None)[source]#

Write a report.csv file with information about the calculation without running it

Parameters:
  • job_ini – full pathname of the job.ini file

  • output_dir – the directory where the report is written (default the input directory)

openquake.calculators.reportwriter.indent(text)[source]#

views module#

class openquake.calculators.views.GmpeExtractor(dstore)[source]#

Bases: object

extract(trt_smrs, rlz_ids)[source]#
class openquake.calculators.views.HtmlTable(header_plus_body, name='noname', empty_table='Empty table')[source]#

Bases: object

Convert a sequence header+body into an HTML table.

border = '1'#
css = '    tr.evenRow { background-color: lightgreen }\n    tr.oddRow { }\n    th { background-color: lightblue }\n    '#
maxrows = 5000#
render(dummy_ctxt=None)[source]#
summary = ''#
class openquake.calculators.views.Source(source_id, code, num_ruptures, checksum)#

Bases: tuple

checksum#

Alias for field number 3

code#

Alias for field number 1

num_ruptures#

Alias for field number 2

source_id#

Alias for field number 0

openquake.calculators.views.alt_to_many_columns(alt, loss_types)[source]#
openquake.calculators.views.asce_fix(asce, siteid)[source]#
openquake.calculators.views.avglosses_data_transfer(token, dstore)[source]#

Determine the amount of average losses transferred from the workers to the controller node in a risk calculation.

openquake.calculators.views.binning_error(values, eids, nbins=10)[source]#
Parameters:
  • values – E values

  • eids – E integer event indices

Returns:

std/mean for the sums of the values

Group the values in nbins depending on the eids and return the variability of the sums relative to the mean.

openquake.calculators.views.compare_disagg_rates(token, dstore)[source]#
openquake.calculators.views.discard_small(values)[source]#

Discard values 10x smaller than the mean

openquake.calculators.views.dt(names)[source]#
Parameters:

names – list or a string with space-separated names

Returns:

a numpy structured dtype

openquake.calculators.views.fix_newlines(row)[source]#
openquake.calculators.views.form(value)[source]#

Format numbers in a nice way.

>>> form(0)
'0'
>>> form(0.0)
'0.0'
>>> form(0.0001)
'1.000E-04'
>>> form(1003.4)
'1_003'
>>> form(103.41)
'103.4100'
>>> form(9.3)
'9.3000'
>>> form(-1.2)
'-1.2'
openquake.calculators.views.reduce_srcids(srcids)[source]#
openquake.calculators.views.short_repr(lst)[source]#
openquake.calculators.views.shorten(lst)[source]#

Shorten a list of strings

openquake.calculators.views.stats(name, array, *extras)[source]#

Returns statistics from an array of numbers.

Parameters:

name – a descriptive string

Returns:

(name, mean, rel_std, min, max, len) + extras

openquake.calculators.views.sum_table(records)[source]#

Used to compute summaries. The records are assumed to have numeric fields, except the first field which is ignored, since it typically contains a label. Here is an example:

>>> sum_table([('a', 1), ('b', 2)])
['total', 3]
openquake.calculators.views.text_table(data, header=None, fmt=None, ext='rst')[source]#

Build a .rst (or .org) table from a matrix or a DataFrame

>>> tbl = [['a', 1], ['b', 2]]
>>> print(text_table(tbl, header=['Name', 'Value']))
+------+-------+
| Name | Value |
+------+-------+
| a    | 1     |
+------+-------+
| b    | 2     |
+------+-------+
openquake.calculators.views.to_str(arr)[source]#
openquake.calculators.views.tup2str(tups)[source]#
openquake.calculators.views.view_MPL(token, dstore)[source]#

Maximum Probable Loss at a given return period

openquake.calculators.views.view_agg_id(token, dstore)[source]#

Show the available aggregations

openquake.calculators.views.view_aggrisk(token, dstore)[source]#

Returns a table with the aggregate risk by realization and loss type

openquake.calculators.views.view_asce(token, dstore)[source]#

Returns asce:41 and asce:07 arrays

openquake.calculators.views.view_assets_by_site(token, dstore)[source]#

Display statistical information about the distribution of the assets

openquake.calculators.views.view_bad_ruptures(token, dstore)[source]#

Display the ruptures degenerating to a point

openquake.calculators.views.view_branches(token, dstore)[source]#

Show info about the branches in the logic tree

openquake.calculators.views.view_branchsets(token, dstore)[source]#

Show the branchsets in the logic tree

openquake.calculators.views.view_calc_risk(token, dstore)[source]#

Compute the risk_by_event table starting from GMFs

openquake.calculators.views.view_composite_source_model(token, dstore)[source]#

Show the structure of the CompositeSourceModel in terms of grp_id

openquake.calculators.views.view_contents(token, dstore)[source]#

Returns the size of the contents of the datastore and its total size

openquake.calculators.views.view_delta_loss(token, dstore)[source]#

Estimate the stochastic error on the loss curve by splitting the events into odd and even. Example:

$ oq show delta_loss # default structural

openquake.calculators.views.view_disagg(token, dstore)[source]#

Example: $ oq show disagg:Mag

Returns a table poe, imt, mag, contribution for the first site

openquake.calculators.views.view_ebrups_by_mag(token, dstore)[source]#

Show how many event based ruptures there are for each magnitude

openquake.calculators.views.view_eff_ruptures(token, dstore)[source]#
openquake.calculators.views.view_event_based_mfd(token, dstore)[source]#

Compare n_occ/eff_time with occurrence_rate

openquake.calculators.views.view_event_loss_table(token, dstore)[source]#

Display the top 20 losses of the event loss table for the first loss type

$ oq show event_loss_table

openquake.calculators.views.view_event_rates(token, dstore)[source]#

Show the number of events per realization multiplied by risk_time/eff_time

openquake.calculators.views.view_events_by_mag(token, dstore)[source]#

Show how many events there are for each magnitude

openquake.calculators.views.view_exposure_info(token, dstore)[source]#

Display info about the exposure model

openquake.calculators.views.view_extreme(token, dstore)[source]#

Show sites where the mean hazard map reaches maximum values

openquake.calculators.views.view_extreme_gmvs(token, dstore)[source]#

Display table of extreme GMVs with fields (eid, gmv_0, sid, rlz, rup)

openquake.calculators.views.view_fastmean(token, dstore)[source]#

Compute the mean hazard curves for the given site from the rates

openquake.calculators.views.view_full_lt(token, dstore)[source]#
openquake.calculators.views.view_fullreport(token, dstore)[source]#

Display an .rst report about the computation

openquake.calculators.views.view_gh3(token, dstore)[source]#
openquake.calculators.views.view_gids(token, dstore)[source]#

Show the meaning of the gids indices

openquake.calculators.views.view_global_gmfs(token, dstore)[source]#

Display GMFs on the first IMT averaged on everything for debugging purposes

openquake.calculators.views.view_global_hazard(token, dstore)[source]#

Display the global hazard for the calculation. This is used for debugging purposes when comparing the results of two calculations.

openquake.calculators.views.view_global_hmaps(token, dstore)[source]#

Display the global hazard maps for the calculation. They are used for debugging purposes when comparing the results of two calculations. They are the mean over the sites of the mean hazard maps.

openquake.calculators.views.view_gmf(token, dstore)[source]#

Display a mean gmf for debugging purposes

openquake.calculators.views.view_gmvs(token, dstore)[source]#

Show the GMVs on a given site ID

openquake.calculators.views.view_gmvs_to_hazard(token, dstore)[source]#

Show the number of GMFs over the highest IML

openquake.calculators.views.view_gsim_for_event(token, dstore)[source]#

Display the GSIM used when computing the GMF for the given event:

$ oq show gsim_for_event:123 -1
[BooreAtkinson2008]

openquake.calculators.views.view_gw(token, dstore)[source]#

Display the gweights

openquake.calculators.views.view_high_hazard(token, dstore)[source]#

Returns the sites with hazard curve below max(poes)

openquake.calculators.views.view_inputs(token, dstore)[source]#
openquake.calculators.views.view_job_info(token, dstore)[source]#

Determine the amount of data transferred from the controller node to the workers and back in a classical calculation.

openquake.calculators.views.view_log_median_spectrum(token, dstore)[source]#
openquake.calculators.views.view_long_ruptures(token, dstore)[source]#

Display the planar ruptures with maxlen > 900 km

openquake.calculators.views.view_loss_ids(token, dstore)[source]#

Displays the loss IDs corresponding to nonzero losses

openquake.calculators.views.view_maximum_intensity(token, dstore)[source]#

Show intensities at minimum and maximum distance for the highest magnitude

openquake.calculators.views.view_mean_disagg(token, dstore)[source]#

Display mean quantities for the disaggregation. Useful for checking differences between two calculations.

openquake.calculators.views.view_mean_perils(token, dstore)[source]#

For instance oq show mean_perils

openquake.calculators.views.view_mean_rates(token, dstore)[source]#

Display mean hazard rates, averaged on the sites

openquake.calculators.views.view_msr(token, dstore)[source]#
openquake.calculators.views.view_num_units(token, dstore)[source]#

Display the number of units by taxonomy

openquake.calculators.views.view_params(token, dstore)[source]#
openquake.calculators.views.view_performance(token, dstore)[source]#

Display performance information

openquake.calculators.views.view_pmaps_size(token, dstore)[source]#
openquake.calculators.views.view_portfolio_damage(token, dstore)[source]#

The mean full portfolio damage for each loss type, extracted from the average damages

openquake.calculators.views.view_portfolio_dmgdist(token, dstore)[source]#

The portfolio damages extracted from the first realization of damages-rlzs

openquake.calculators.views.view_portfolio_loss(token, dstore)[source]#

The mean portfolio loss for each loss type, extracted from the event loss table.

openquake.calculators.views.view_portfolio_losses(token, dstore)[source]#

The losses for the full portfolio, for each realization and loss type, extracted from the event loss table.

openquake.calculators.views.view_relevant_sources(token, dstore)[source]#

Returns a table with the sources contributing more than 10% of the highest source.

openquake.calculators.views.view_required_params_per_trt(token, dstore)[source]#

Display the parameters needed by each tectonic region type

openquake.calculators.views.view_risk_by_event(token, dstore)[source]#

There are two possibilities:

$ oq show risk_by_event:<loss_type>
$ oq show risk_by_event:<event_id>

In both cases displays the top 30 losses of the aggregate loss table as a TSV, for all events or only the given event.

openquake.calculators.views.view_risk_by_rup(token, dstore)[source]#

Display the top 30 aggregate losses by rupture ID. Usage:

$ oq show risk_by_rup

openquake.calculators.views.view_rlz(token, dstore)[source]#

Show info about a given realization in the logic tree Example:

$ oq show rlz:0 -1

openquake.calculators.views.view_rup(token, dstore)[source]#

Show the ruptures (contexts) generated by a given source

openquake.calculators.views.view_rup_info(token, dstore, maxrows=25)[source]#

Show the slowest ruptures

openquake.calculators.views.view_rup_stats(token, dstore)[source]#

Show the statistics of event based ruptures

openquake.calculators.views.view_rupture(token, dstore)[source]#

Show a rupture with its geometry

openquake.calculators.views.view_ruptures_events(token, dstore)[source]#
openquake.calculators.views.view_short_source_info(token, dstore, maxrows=20)[source]#
openquake.calculators.views.view_sites_by_country(token, dstore)[source]#

Returns a table with the number of sites per country. The countries are defined as in the file geoBoundariesCGAZ_ADM0.shp

openquake.calculators.views.view_slow_sources(token, dstore, maxrows=20)[source]#

Returns the slowest sources

openquake.calculators.views.view_source_data(token, dstore)[source]#

Display info about a given task. Here is an example:

$ oq show source_data:42
openquake.calculators.views.view_sources_branches(token, dstore)[source]#

Returns a table with the sources in the logic tree by branches

openquake.calculators.views.view_sum(token, dstore)[source]#

Show the sum of an array of shape (A, R, L, …) on the first axis

openquake.calculators.views.view_task_durations(token, dstore)[source]#

Display the raw task durations. Here is an example of usage:

$ oq show task_durations
openquake.calculators.views.view_task_ebrisk(token, dstore)[source]#

Display info about ebrisk tasks:

$ oq show task_ebrisk:-1 # the slowest task

openquake.calculators.views.view_task_hazard(token, dstore)[source]#

Display info about a given task. Here are a few examples of usage:

$ oq show task:classical:0  # the fastest task
$ oq show task:classical:-1  # the slowest task
openquake.calculators.views.view_task_info(token, dstore)[source]#

Display statistical information about the tasks performance. It is possible to get full information about a specific task with a command like this one, for a classical calculation:

$ oq show task_info:classical
openquake.calculators.views.view_totlosses(token, dstore)[source]#

This is a debugging view. You can use it to check that the total losses, i.e. the losses obtained by summing the average losses on all assets are indeed equal to the aggregate losses. This is a sanity check for the correctness of the implementation.

openquake.calculators.views.view_usgs_rupture(token, dstore)[source]#

Show the parameters of a rupture downloaded from the USGS site:

$ oq show usgs_rupture:us70006sj8
{‘lon’: 74.628, ‘lat’: 35.5909, ‘dep’: 13.8, ‘mag’: 5.6, ‘rake’: 0.0}

openquake.calculators.views.view_worst_sources(token, dstore)[source]#

Returns the sources with worst weights

openquake.calculators.views.view_zero_losses(token, dstore)[source]#

Sanity check on avg_losses and avg_gmf

extract module#

class openquake.calculators.extract.Extract[source]#

Bases: dict

A callable dictionary of functions with a single instance called extract. Then extract(dstore, fullkey) dispatches to the function determined by the first part of fullkey (a slash-separated string) by passing as argument the second part of fullkey.

For instance extract(dstore, ‘sitecol’).

add(key, cache=False)[source]#
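
A sketch of how a new extractor could be registered on the module-level extract instance (the key, function name and dataset are hypothetical):

from openquake.calculators.extract import extract

@extract.add('my_data')
def extract_my_data(dstore, what):
    # 'my_data' is a hypothetical dataset in the datastore
    return dstore['my_data'][()]

After the registration, extract(dstore, ‘my_data’) would dispatch to it.
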
class openquake.calculators.extract.Extractor(calc_id)[source]#

Bases: object

A class to extract data from a calculation.

Parameters:

calc_id – a calculation ID

NB: instantiating the Extractor opens the datastore.

close()[source]#

Close the datastore

get(what, asdict=False)[source]#
Parameters:

what – what to extract

Returns:

an ArrayWrapper instance or a dictionary if asdict is True
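
A minimal usage sketch, assuming a completed calculation with ID 42:

from openquake.calculators.extract import Extractor

extractor = Extractor(42)           # opens the datastore
sitecol = extractor.get('sitecol')  # an ArrayWrapper
extractor.close()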

class openquake.calculators.extract.RuptureData(trt, gsims, mags)[source]#

Bases: object

Container for information about the ruptures of a given tectonic region type.

to_array(proxies)[source]#

Convert a list of rupture proxies into an array of dtype RuptureData.dt

exception openquake.calculators.extract.WebAPIError[source]#

Bases: RuntimeError

Wrapper for an error on a WebAPI server

class openquake.calculators.extract.WebExtractor(calc_id, server=None, username=None, password=None)[source]#

Bases: Extractor

A class to extract data from the WebAPI.

Parameters:
  • calc_id – a calculation ID

  • server – hostname of the webapi server (can be ‘’)

  • username – login username (can be ‘’)

  • password – login password (can be ‘’)

NB: instantiating the WebExtractor opens a session.

close()[source]#

Close the session

dump(fname)[source]#

Dump the remote datastore on a local path.

get(what)[source]#
Parameters:

what – what to extract

Returns:

an ArrayWrapper instance
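
A minimal usage sketch (the server address and the calculation ID are hypothetical; the extraction key follows extract_hcurves below):

from openquake.calculators.extract import WebExtractor

wx = WebExtractor(42, server='http://localhost:8800')
hcurves = wx.get('hcurves?kind=mean&imt=PGA')  # an ArrayWrapper
wx.dump('/tmp/calc_42.hdf5')  # dump the remote datastore locally
wx.close()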

openquake.calculators.extract.avglosses(dstore, loss_types, kind)[source]#
Returns:

an array of average losses of shape (A, R, L)

openquake.calculators.extract.barray(iterlines)[source]#

Array of bytes

openquake.calculators.extract.build_damage_dt(dstore)[source]#
Parameters:

dstore – a datastore instance

Returns:

a composite dtype loss_type -> (ds1, ds2, …)

openquake.calculators.extract.cast(loss_array, loss_dt)[source]#
openquake.calculators.extract.clusterize(hmaps, rlzs, k)[source]#
Parameters:
  • hmaps – array of shape (R, M, P)

  • rlzs – composite array of shape R

  • k – number of clusters to build

Returns:

array of K elements with dtype (rlzs, branch_paths, centroid)

openquake.calculators.extract.crm_attrs(dstore, what)[source]#
Returns:

the attributes of the risk model, i.e. limit_states, loss_types, min_iml and covs, needed by the risk exporters.

openquake.calculators.extract.extract_(dstore, dspath)[source]#

Extracts an HDF5 path object from the datastore, for instance extract(dstore, ‘sitecol’).

openquake.calculators.extract.extract_agg_curves(dstore, what)[source]#

Aggregate loss curves from the ebrisk calculator:

/extract/agg_curves?kind=stats&absolute=1&loss_type=occupants&occupancy=RES

Returns an array of shape (#periods, #stats) or (#periods, #rlzs)

openquake.calculators.extract.extract_agg_damages(dstore, what)[source]#

Aggregate damages of the given loss type and tags. Use it as /extract/agg_damages?taxonomy=RC&custom_site_id=20126

Returns:

array of shape (R, D), being R the number of realizations and D the number of damage states, or an array of length 0 if there is no data for the given tags

openquake.calculators.extract.extract_agg_losses(dstore, what)[source]#

Aggregate losses of the given loss type and tags. Use it as /extract/agg_losses/structural?taxonomy=RC&custom_site_id=20126 or /extract/agg_losses/structural?taxonomy=RC&custom_site_id=*

Returns:

  • an array of shape (T, R) if one of the tag names has a * value

  • an array of shape (R,), being R the number of realizations

  • an array of length 0 if there is no data for the given tags

openquake.calculators.extract.extract_aggregate(dstore, what)[source]#

/extract/aggregate/avg_losses?kind=mean&loss_type=structural&tag=taxonomy&tag=occupancy

openquake.calculators.extract.extract_asset_risk(dstore, what)[source]#

Extract an array of assets + risk fields, optionally filtered by tag. Use it as /extract/asset_risk?taxonomy=RC&taxonomy=MSBC&occupancy=RES

openquake.calculators.extract.extract_asset_tags(dstore, tagname)[source]#

Extract an array of asset tags for the given tagname. Use it as /extract/asset_tags or /extract/asset_tags/taxonomy

openquake.calculators.extract.extract_assets(dstore, what)[source]#

Extract an array of assets, optionally filtered by tag. Use it as /extract/assets?taxonomy=RC&taxonomy=MSBC&occupancy=RES

openquake.calculators.extract.extract_avg_gmf(dstore, what)[source]#
openquake.calculators.extract.extract_csq_curves(dstore, what)[source]#

Aggregate damages curves from the event_based_damage calculator:

/extract/csq_curves?agg_id=0&loss_type=occupants

Returns an ArrayWrapper of shape (P, D1) with attribute return_periods

openquake.calculators.extract.extract_damages_npz(dstore, what)[source]#
openquake.calculators.extract.extract_disagg(dstore, what)[source]#

Extract a disaggregation output as an ArrayWrapper. Example: http://127.0.0.1:8800/v1/calc/30/extract/disagg?kind=Mag_Dist&imt=PGA&site_id=1&poe_id=0&spec=stats

openquake.calculators.extract.extract_disagg_layer(dstore, what)[source]#

Extract a disaggregation layer containing all sites and outputs Example: http://127.0.0.1:8800/v1/calc/30/extract/disagg_layer?

openquake.calculators.extract.extract_effect(dstore, what)[source]#

Extracts the effect of ruptures. Use it as /extract/effect

openquake.calculators.extract.extract_eids_by_gsim(dstore, what)[source]#

Returns a dictionary gsim -> event_ids for the first TRT Example: http://127.0.0.1:8800/v1/calc/30/extract/eids_by_gsim

openquake.calculators.extract.extract_exposure_metadata(dstore, what)[source]#

Extract the loss categories and the tags of the exposure. Use it as /extract/exposure_metadata

openquake.calculators.extract.extract_gmf_npz(dstore, what)[source]#
openquake.calculators.extract.extract_gmf_scenario(dstore, what)[source]#
openquake.calculators.extract.extract_gridded_sources(dstore, what)[source]#

Extract information about the gridded sources (requires ps_grid_spacing) Use it as /extract/gridded_sources?task_no=0. Returns a json string id -> lonlats

openquake.calculators.extract.extract_gsims_by_trt(dstore, what)[source]#

Extract the dictionary gsims_by_trt

openquake.calculators.extract.extract_hcurves(dstore, what)[source]#

Extracts hazard curves. Use it as /extract/hcurves?kind=mean&imt=PGA or /extract/hcurves?kind=rlz-0&imt=SA(1.0)

openquake.calculators.extract.extract_high_sites(dstore, what)[source]#

Returns an array of boolean with the high hazard sites (max_poe > .2) Example: http://127.0.0.1:8800/v1/calc/30/extract/high_sites

openquake.calculators.extract.extract_hmaps(dstore, what)[source]#

Extracts hazard maps. Use it as /extract/hmaps?imt=PGA

openquake.calculators.extract.extract_losses_by_asset(dstore, what)[source]#
openquake.calculators.extract.extract_mean_by_rup(dstore, what)[source]#

Extract src_id, rup_id, mean from the stored contexts Example: http://127.0.0.1:8800/v1/calc/30/extract/mean_by_rup

openquake.calculators.extract.extract_mean_rates_by_src(dstore, what)[source]#

Extract the mean_rates_by_src information. Example: http://127.0.0.1:8800/v1/calc/30/extract/mean_rates_by_src?site_id=0&imt=PGA&iml=.001

openquake.calculators.extract.extract_med_gmv(dstore, what)[source]#

Extract med_gmv array for the given source

openquake.calculators.extract.extract_median_spectra(dstore, what)[source]#

Extracts median spectra per site and group. Use it as /extract/median_spectra?site_id=0&poe_id=1

openquake.calculators.extract.extract_mfd(dstore, what)[source]#

Compare n_occ/eff_time with occurrence_rate. Example: http://127.0.0.1:8800/v1/calc/30/extract/event_based_mfd?

openquake.calculators.extract.extract_num_events(dstore, what)[source]#
Returns:

the number of events (if any)

openquake.calculators.extract.extract_oqparam(dstore, dummy)[source]#

Extract job parameters as a JSON npz. Use it as /extract/oqparam

openquake.calculators.extract.extract_realizations(dstore, dummy)[source]#

Extract an array of realizations. Use it as /extract/realizations

openquake.calculators.extract.extract_relevant_events(dstore, dummy=None)[source]#

Extract the relevant events Example: http://127.0.0.1:8800/v1/calc/30/extract/events

openquake.calculators.extract.extract_relevant_gmfs(dstore, what)[source]#
openquake.calculators.extract.extract_risk_stats(dstore, what)[source]#

Compute the risk statistics from a DataFrame with individual realizations Example: http://127.0.0.1:8800/v1/calc/30/extract/risk_stats/aggrisk

openquake.calculators.extract.extract_rup_ids(dstore, what)[source]#

Extract src_id, rup_id from the stored contexts Example: http://127.0.0.1:8800/v1/calc/30/extract/rup_ids

openquake.calculators.extract.extract_rups_by_mag_dist(dstore, what)[source]#

Extracts the number of ruptures by mag, dist. Use it as /extract/rups_by_mag_dist

openquake.calculators.extract.extract_rupture_info(dstore, what)[source]#

Extract some information about the ruptures, including the boundary. Example: http://127.0.0.1:8800/v1/calc/30/extract/rupture_info?min_mag=6

openquake.calculators.extract.extract_ruptures(dstore, what)[source]#

Extract the ruptures with their geometry as a big CSV string Example: http://127.0.0.1:8800/v1/calc/30/extract/ruptures?rup_id=6

openquake.calculators.extract.extract_sitecol(dstore, what)[source]#

Extracts the site collection array (not the complete object, otherwise it would need to be pickled). Use it as /extract/sitecol?field=vs30

openquake.calculators.extract.extract_source_data(dstore, what)[source]#

Extract performance information about the sources. Use it as /extract/source_data?

openquake.calculators.extract.extract_sources(dstore, what)[source]#

Extract information about a source model. Use it as /extract/sources?limit=10 or /extract/sources?source_id=1&source_id=2 or /extract/sources?code=A&code=B

openquake.calculators.extract.extract_task_info(dstore, what)[source]#

Extracts the task distribution. Use it as /extract/task_info?kind=classical

openquake.calculators.extract.extract_uhs(dstore, what)[source]#

Extracts uniform hazard spectra. Use it as /extract/uhs?kind=mean or /extract/uhs?kind=rlz-0, etc

openquake.calculators.extract.extract_weights(dstore, what)[source]#

Extract the realization weights

openquake.calculators.extract.get_info(dstore)[source]#
Returns:

a dict with ‘stats’, ‘loss_types’, ‘num_rlzs’, ‘tagnames’, etc

openquake.calculators.extract.get_relevant_event_ids(dstore, threshold)[source]#
Parameters:
  • dstore – a DataStore instance with a risk_by_rupture dataframe

  • threshold – fraction of the total losses

Returns:

array with the event IDs cumulating the highest losses up to the threshold (usually 95% of the total loss)

openquake.calculators.extract.get_relevant_rup_ids(dstore, threshold)[source]#
Parameters:
  • dstore – a DataStore instance with a risk_by_rupture dataframe

  • threshold – fraction of the total losses

Returns:

array with the rupture IDs cumulating the highest losses up to the threshold (usually 95% of the total loss)

openquake.calculators.extract.get_ruptures_within(dstore, bbox)[source]#

Extract the ruptures within the given bounding box, a string minlon,minlat,maxlon,maxlat. Example: http://127.0.0.1:8800/v1/calc/30/extract/ruptures_within/8,44,10,46

openquake.calculators.extract.get_sites(sitecol, complete=True)[source]#
Returns:

a lon-lat or lon-lat-depth array, depending on whether the site collection is at sea level or not; if there is a custom_site_id, prepend it

openquake.calculators.extract.hazard_items(dic, sites, *extras, **kw)[source]#
Parameters:
  • dic – dictionary of arrays of the same shape

  • sites – a sites array with lon, lat fields of the same length

  • extras – optional triples (field, dtype, values)

  • kw – dictionary of parameters (like investigation_time)

Returns:

a list of pairs (key, value) suitable for storage in .npz format

openquake.calculators.extract.lit_eval(string)[source]#

ast.literal_eval the string if possible, otherwise return it unchanged

openquake.calculators.extract.norm(qdict, params)[source]#
openquake.calculators.extract.parse(query_string, info={})[source]#
Returns:

a normalized query_dict as in the following examples:

>>> parse('kind=stats', {'stats': {'mean': 0, 'max': 1}})
{'kind': ['mean', 'max'], 'k': [0, 1], 'rlzs': False}
>>> parse('kind=rlzs', {'stats': {}, 'num_rlzs': 3})
{'kind': ['rlz-000', 'rlz-001', 'rlz-002'], 'k': [0, 1, 2], 'rlzs': True}
>>> parse('kind=mean', {'stats': {'mean': 0, 'max': 1}})
{'kind': ['mean'], 'k': [0], 'rlzs': False}
>>> parse('kind=rlz-3&imt=PGA&site_id=0', {'stats': {}})
{'kind': ['rlz-3'], 'imt': ['PGA'], 'site_id': [0], 'k': [3], 'rlzs': True}
>>> parse(
...    'loss_type=structural+nonstructural&absolute=True&kind=rlzs')['lt']
['structural+nonstructural']
openquake.calculators.extract.sanitize(query_string)[source]#

Replace /, ?, & characters with underscores and ‘=’ with ‘-’
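
For instance, following the replacement rules above (the output shown is inferred from the description, not a verified doctest):

>>> sanitize('kind=mean&imt=PGA')
'kind-mean_imt-PGA'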