openquake.baselib package¶
openquake.baselib.datastore module¶
class openquake.baselib.datastore.DataStore(calc_id=None, datadir=None, params=(), mode=None)[source]¶
Bases: collections.abc.MutableMapping
DataStore class to store the inputs/outputs of a calculation on the filesystem.
Here is a minimal example of usage:
>>> ds = DataStore()
>>> ds['example'] = 42
>>> print(ds['example'].value)
42
>>> ds.clear()
When reading the items, the DataStore will return a generator. The items will be ordered lexicographically according to their name.
There is a serialization protocol to store objects in the datastore. An object is serializable if it has a method __toh5__ returning an array and a dictionary, and a method __fromh5__ taking an array and a dictionary and populating the object. For an example of use see openquake.hazardlib.site.SiteCollection.
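Here is a minimal sketch of the protocol, using a hypothetical Point class that is not part of the library:
>>> import numpy
>>> class Point(object):
...     def __init__(self, x=0, y=0):
...         self.x, self.y = x, y
...     def __toh5__(self):  # return an array and a dictionary of attributes
...         return numpy.array([self.x, self.y]), {'kind': 'point'}
...     def __fromh5__(self, array, attrs):  # rebuild the object from them
...         self.x, self.y = array
>>> arr, attrs = Point(1, 2).__toh5__()
>>> p = object.__new__(Point)
>>> p.__fromh5__(arr, attrs)
>>> int(p.x), int(p.y)
(1, 2)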
build_fname(prefix, postfix, fmt, export_dir=None)[source]¶
Build a file name from a realization, by using prefix and extension.
Parameters: - prefix – the prefix to use
- postfix – the postfix to use (can be a realization object)
- fmt – the extension (‘csv’, ‘xml’, etc)
- export_dir – export directory (if None use .export_dir)
Returns: relative pathname including the extension
create_dset(key, dtype, shape=(None,), compression=None, fillvalue=0, attrs=None)[source]¶
Create a one-dimensional HDF5 dataset.
Parameters: - key – name of the dataset
- dtype – dtype of the dataset (usually composite)
- shape – shape of the dataset, possibly extendable
- compression – the kind of HDF5 compression to use
- attrs – dictionary of attributes of the dataset
Returns: a HDF5 dataset
-
export_dir
¶ Return the underlying export directory
export_path(relname, export_dir=None)[source]¶
Return the path of the exported file, obtained by prepending the export_dir and appending the calculation ID.
Parameters: - relname – relative file name
- export_dir – export directory (if None use .export_dir)
-
extend
(key, array, **attrs)[source]¶ Extend the dataset associated to the given key; create it if needed
Parameters: - key – name of the dataset
- array – array to store
- attrs – a dictionary of attributes
-
get_attr
(key, name, default=None)[source]¶ Parameters: - key – dataset path
- name – name of the attribute
- default – value to return if the attribute is missing
-
get_attrs
(key)[source]¶ Parameters: key – dataset path Returns: dictionary of attributes for that path
getsize(key=None)[source]¶
Return the size in bytes of the output associated to the given key. If no key is given, return the total size of all files.
-
save
(key, kw)[source]¶ Update the object associated to key with the kw dictionary; works for LiteralAttrs objects and automatically flushes.
openquake.baselib.datastore.extract_calc_id_datadir(hdf5path, datadir=None)[source]¶
Extract the calculation ID and the datadir from the given hdf5path or integer:
>>> extract_calc_id_datadir('/mnt/ssd/oqdata/calc_25.hdf5')
(25, '/mnt/ssd/oqdata')
>>> extract_calc_id_datadir('/mnt/ssd/oqdata/wrong_name.hdf5')
Traceback (most recent call last):
  ...
ValueError: Cannot extract calc_id from /mnt/ssd/oqdata/wrong_name.hdf5
-
openquake.baselib.datastore.
get_calc_ids
(datadir=None)[source]¶ Extract the available calculation IDs from the datadir, in order.
-
openquake.baselib.datastore.
get_datadir
()[source]¶ Extracts the path of the directory where the openquake data are stored from the environment ($OQ_DATADIR) or from the shared_dir in the configuration file.
-
openquake.baselib.datastore.
get_last_calc_id
(datadir=None)[source]¶ Extract the latest calculation ID from the given directory. If none is found, return 0.
openquake.baselib.datastore.hdf5new(datadir=None)[source]¶
Return a new hdf5.File instance with a name determined by the last calculation in the datadir (plus one). Set the .path attribute to the generated filename.
openquake.baselib.datastore.persistent_attribute(key)[source]¶
Persistent attributes are persisted to the datastore and cached. Modifications to mutable objects are not automagically persisted. If you have a huge object that does not fit in memory, use the datastore directly (for instance, open an HDF5 file to create an empty array, then populate it). Notice that you can use any dict-like data structure in place of the datastore, provided you can set attributes on it. Here is an example:
>>> class Datastore(dict):
...     "A fake datastore"

>>> class Store(object):
...     a = persistent_attribute('a')
...     def __init__(self, a):
...         self.datastore = Datastore()
...         self.a = a  # this assignment will store the attribute

>>> store = Store([1])
>>> store.a  # this retrieves the attribute
[1]
>>> store.a.append(2)
>>> store.a = store.a  # remember to store the modified attribute!
Parameters: key – the name of the attribute to be made persistent Returns: a property to be added to a class with a .datastore attribute
general¶
Utility functions of general interest.
class openquake.baselib.general.AccumDict(dic=None, accum=None, **kw)[source]¶
Bases: dict
An accumulating dictionary, useful to accumulate variables:
>> acc = AccumDict()
>> acc += {'a': 1}
>> acc += {'a': 1, 'b': 1}
>> acc
{'a': 2, 'b': 1}
>> {'a': 1} + acc
{'a': 3, 'b': 1}
>> acc + 1
{'a': 3, 'b': 2}
>> 1 - acc
{'a': -1, 'b': 0}
>> acc - 1
{'a': 1, 'b': 0}
Also the multiplication has been defined:
>> prob1 = AccumDict(a=0.4, b=0.5)
>> prob2 = AccumDict(b=0.5)
>> prob1 * prob2
{'a': 0.4, 'b': 0.25}
>> prob1 * 1.2
{'a': 0.48, 'b': 0.6}
>> 1.2 * prob1
{'a': 0.48, 'b': 0.6}
It is very common to use an AccumDict of accumulators; here is an example using the empty list as accumulator:
>>> acc = AccumDict(accum=[])
>>> acc['a'] += [1]
>>> acc['b'] += [2]
>>> sorted(acc.items())
[('a', [1]), ('b', [2])]
The implementation is smart enough to make (deep) copies of the accumulator, therefore each key has a different accumulator, which initially is the empty list (in this case).
class openquake.baselib.general.CallableDict(keyfunc=<function CallableDict.<lambda>>, keymissing=None)[source]¶
Bases: collections.OrderedDict
A callable object built on top of a dictionary of functions, used as a smart registry or as a poor man's generic function dispatching on the first argument. It is typically used to implement converters. Here is an example:
>>> format_attrs = CallableDict()  # dict of functions (fmt, obj) -> str

>>> @format_attrs.add('csv')  # implementation for csv
... def format_attrs_csv(fmt, obj):
...     items = sorted(vars(obj).items())
...     return '\n'.join('%s,%s' % item for item in items)

>>> @format_attrs.add('json')  # implementation for json
... def format_attrs_json(fmt, obj):
...     return json.dumps(vars(obj))
format_attrs(fmt, obj) calls the correct underlying function depending on the fmt key. If the format is unknown a KeyError is raised. It is also possible to set a keymissing function to specify what to return if the key is missing.
For a more practical example see the implementation of the exporters in openquake.calculators.export
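Continuing the example above, here is a sketch of the dispatch in action on a hypothetical object with two attributes:
>>> class Obj(object):
...     "A hypothetical object to format"
>>> obj = Obj()
>>> obj.x = 1
>>> obj.y = 2
>>> print(format_attrs('csv', obj))
x,1
y,2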
-
exception
openquake.baselib.general.
DeprecationWarning
[source]¶ Bases:
UserWarning
Raised the first time a deprecated function is called
class openquake.baselib.general.DictArray(imtls)[source]¶
Bases: collections.abc.Mapping
A small wrapper over a dictionary of arrays serializable to HDF5:
>>> d = DictArray({'PGA': [0.01, 0.02, 0.04], 'PGV': [0.1, 0.2]})
>>> from openquake.baselib import hdf5
>>> with hdf5.File('/tmp/x.h5', 'w') as f:
...     f['d'] = d
...     f['d']
<DictArray PGA: [ 0.01 0.02 0.04] PGV: [ 0.1 0.2]>
The DictArray maintains the lexicographic order of the keys.
-
class
openquake.baselib.general.
WeightedSequence
(seq=())[source]¶ Bases:
collections.abc.MutableSequence
A wrapper over a sequence of weighted items with a total weight attribute. Adding items automatically increases the weight.
classmethod merge(ws_list)[source]¶
Merge a set of WeightedSequence objects.
Parameters: ws_list – a sequence of openquake.baselib.general.WeightedSequence instances
Returns: an openquake.baselib.general.WeightedSequence instance
openquake.baselib.general.assert_close(a, b, rtol=1e-07, atol=0, context=None)[source]¶
Compare for equality up to a given precision two composite objects which may contain floats. NB: if the objects are or contain generators, they are exhausted.
Parameters: - a – an object
- b – another object
- rtol – relative tolerance
- atol – absolute tolerance
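A couple of illustrative calls (a sketch, assuming a numpy.allclose-like comparison):
>>> assert_close([1, [2.00000001]], [1, [2]])  # within the default tolerance
>>> assert_close(1.0, 1.2, rtol=0.5)           # passes thanks to the larger rtol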
openquake.baselib.general.assert_independent(package, *packages)[source]¶
Make sure the package does not depend on the given packages.
Parameters: - package – Python name of a module/package
- packages – Python names of modules/packages
openquake.baselib.general.block_splitter(items, max_weight, weight=<function <lambda>>, kind=<function nokey>)[source]¶
Group together items of the same kind until the total weight exceeds the max_weight and yield WeightedSequence instances. Items with weight zero are ignored.
Parameters: - items – an iterator over items
- max_weight – the max weight to split on
- weight – a function returning the weight of a given item
- kind – a function returning the kind of a given item
For instance
>>> items = 'ABCDE'
>>> list(block_splitter(items, 3))
[<WeightedSequence ['A', 'B', 'C'], weight=3>, <WeightedSequence ['D', 'E'], weight=2>]
The default weight is 1 for all items.
openquake.baselib.general.ceil(a, b)[source]¶
Divide a / b and return the ceiling of the quotient, i.e. the smallest integer greater than or equal to it.
Parameters: - a – a number
- b – a positive number
Returns: the ceiling of the quotient
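For instance (a sketch, assuming the usual ceiling behaviour on the quotient):
>>> ceil(7, 2)
4
>>> ceil(6, 2)
3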
-
openquake.baselib.general.
deprecated
(message)[source]¶ Return a decorator to make deprecated functions.
Parameters: message – the message to print the first time the deprecated function is used. Here is an example of usage:
>>> @deprecated('Use new_function instead')
... def old_function():
...     'Do something'
Notice that if the function is called several times, the deprecation warning will be displayed only the first time.
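Calling the decorated function then looks like this (a sketch; whether the message goes through the warnings module or to standard error depends on the implementation):
>>> old_function()  # the deprecation message is shown here, the first time
>>> old_function()  # subsequent calls are silent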
-
openquake.baselib.general.
detach_process
()[source]¶ Detach the current process from the controlling terminal by using a double fork. Can be used only on platforms with fork (no Windows).
-
openquake.baselib.general.
get_array
(array, **kw)[source]¶ Extract a subarray by filtering on the given keyword arguments
-
openquake.baselib.general.
git_suffix
(fname)[source]¶ Returns: <short git hash> if Git repository found
openquake.baselib.general.group_array(array, *kfields)[source]¶
Convert an array into an OrderedDict kfields -> array
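Here is a sketch with a small NumPy structured array (the field names are purely illustrative):
>>> import numpy
>>> arr = numpy.array([(1, 0.1), (1, 0.2), (2, 0.3)],
...                   dtype=[('grp', int), ('val', float)])
>>> dic = group_array(arr, 'grp')
>>> [(key, len(subarr)) for key, subarr in dic.items()]
[(1, 2), (2, 1)]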
-
openquake.baselib.general.
groupby
(objects, key, reducegroup=<class 'list'>)[source]¶ Parameters: - objects – a sequence of objects with a key value
- key – the key function to extract the key value
- reducegroup – the function to apply to each group
Returns: an OrderedDict {key value: map(reducegroup, group)}
>>> groupby(['A1', 'A2', 'B1', 'B2', 'B3'], lambda x: x[0],
...         lambda group: ''.join(x[1] for x in group))
OrderedDict([('A', '12'), ('B', '123')])
openquake.baselib.general.groupby2(records, kfield, vfield)[source]¶
Parameters: - records – a sequence of records with positional or named fields
- kfield – the index/name/tuple specifying the field to use as a key
- vfield – the index/name/tuple specifying the field to use as a value
Returns: a list of pairs of the form (key, [value, …]).
>>> groupby2(['A1', 'A2', 'B1', 'B2', 'B3'], 0, 1)
[('A', ['1', '2']), ('B', ['1', '2', '3'])]
Here is an example where the kfield is a tuple of integers:
>>> groupby2(['A11', 'A12', 'B11', 'B21'], (0, 1), 2)
[(('A', '1'), ['1', '2']), (('B', '1'), ['1']), (('B', '2'), ['1'])]
-
openquake.baselib.general.
humansize
(nbytes, suffixes=('B', 'KB', 'MB', 'GB', 'TB', 'PB'))[source]¶ Return file size in a human-friendly format
-
openquake.baselib.general.
import_all
(module_or_package)[source]¶ If module_or_package is a module, just import it; if it is a package, recursively imports all the modules it contains. Returns the names of the modules that were imported as a set. The set can be empty if the modules were already in sys.modules.
-
class
openquake.baselib.general.
pack
(dic, attrs=())[source]¶ Bases:
dict
Compact a dictionary of lists into a dictionary of arrays. If attrs are given, consider those keys as attributes. For instance,
>>> p = pack(dict(x=[1], a=[0]), ['a'])
>>> p
{'x': array([1])}
>>> p.a
array([0])
-
openquake.baselib.general.
run_in_process
(code, *args)[source]¶ Run in an external process the given Python code and return the output as a Python object. If there are arguments, then code is taken as a template and traditional string interpolation is performed.
Parameters: - code – string or template describing Python code
- args – arguments to be used for interpolation
Returns: the output of the process, as a Python object
-
openquake.baselib.general.
safeprint
(*args, **kwargs)[source]¶ Convert and print characters using the proper encoding
openquake.baselib.general.search_module(module, syspath=sys.path)[source]¶
Given a module name (possibly with dots) return the corresponding filepath, or None if the module cannot be found.
Parameters: - module – (dotted) name of the Python module to look for
- syspath – a list of directories to search (default sys.path)
-
openquake.baselib.general.
socket_ready
(hostport)[source]¶ Parameters: hostport – a pair (host, port) or a string (tcp://)host:port Returns: True if the socket is ready and False otherwise
openquake.baselib.general.split_in_blocks(sequence, hint, weight=<function <lambda>>, key=<function nokey>)[source]¶
Split the sequence in a number of WeightedSequences close to hint.
Parameters: - sequence – a finite sequence of items
- hint – an integer suggesting the number of subsequences to generate
- weight – a function returning the weight of a given item
- key – a function returning the key of a given item
The WeightedSequences are of homogeneous key and they try to be balanced in weight. For instance
>>> items = 'ABCDE'
>>> list(split_in_blocks(items, 3))
[<WeightedSequence ['A', 'B'], weight=2>, <WeightedSequence ['C', 'D'], weight=2>, <WeightedSequence ['E'], weight=1>]
-
openquake.baselib.general.
split_in_slices
(number, num_slices)[source]¶ Parameters: - number – a positive number to split in slices
- num_slices – the number of slices to return (at most)
Returns: a list of slices
>>> split_in_slices(4, 2)
[slice(0, 2, None), slice(2, 4, None)]
>>> split_in_slices(5, 1)
[slice(0, 5, None)]
>>> split_in_slices(5, 2)
[slice(0, 3, None), slice(3, 5, None)]
>>> split_in_slices(2, 4)
[slice(0, 1, None), slice(1, 2, None)]
-
openquake.baselib.general.
writetmp
(content=None, dir=None, prefix='tmp', suffix='tmp')[source]¶ Create temporary file with the given content.
Please note: the temporary file must be deleted by the caller.
Parameters: - content (string) – the content to write to the temporary file.
- dir (string) – directory where the file should be created
- prefix (string) – file name prefix
- suffix (string) – file name suffix
Returns: a string with the path to the temporary file
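A minimal usage sketch (remember that removing the file is up to the caller):
>>> import os
>>> fname = writetmp('hello')
>>> open(fname).read()
'hello'
>>> os.remove(fname)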
hdf5¶
-
class
openquake.baselib.hdf5.
ArrayWrapper
(array, attrs)[source]¶ Bases:
object
A pickleable and serializable wrapper over an array, HDF5 dataset or group
-
dtype
¶ dtype of the underlying array
-
shape
¶ shape of the underlying array
-
class openquake.baselib.hdf5.ByteCounter(nbytes=0)[source]¶
Bases: object
A visitor used to measure the dimensions of a HDF5 dataset or group. Use it as ByteCounter.get_nbytes(dset_or_group).
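For instance, here is a sketch measuring a small dataset (the file name is illustrative and the count assumes 10 float64 items stored without extra overhead):
>>> import numpy
>>> import h5py
>>> with h5py.File('/tmp/bc.h5', 'w') as f:
...     f['data'] = numpy.zeros(10, numpy.float64)
>>> with h5py.File('/tmp/bc.h5', 'r') as f:
...     nbytes = ByteCounter.get_nbytes(f['data'])
>>> nbytes
80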
class openquake.baselib.hdf5.File(name, mode=None, driver=None, libver=None, userblock_size=None, swmr=False, **kwds)[source]¶
Bases: h5py._hl.files.File
Subclass of h5py.File able to store and retrieve objects conforming to the HDF5 protocol used by the OpenQuake software. It works recursively also for dictionaries of the form name->obj.
>>> f = File('/tmp/x.h5', 'w')
>>> f['dic'] = dict(a=dict(x=1, y=2), b=3)
>>> dic = f['dic']
>>> dic['a']['x'].value
1
>>> dic['b'].value
3
>>> f.close()
save(nodedict, root='')[source]¶
Save a node dictionary in the .hdf5 file, starting from the root dataset. A common application is to convert XML files into .hdf5 files; see the usage in openquake.commands.to_hdf5.
Parameters: nodedict – a dictionary with keys ‘tag’, ‘attrib’, ‘text’, ‘nodes’
class openquake.baselib.hdf5.LiteralAttrs[source]¶
Bases: object
A class to serialize a set of parameters in HDF5 format. The goal is to store simple parameters as an HDF5 table in a readable way. Each parameter can be retrieved as an attribute, given its name. The implementation treats dictionary attributes specially, by storing them as attrname.keyname strings, see the example below:
>>> class Ser(LiteralAttrs):
...     def __init__(self, a, b):
...         self.a = a
...         self.b = b
>>> ser = Ser(1, dict(x='xxx', y='yyy'))
>>> arr, attrs = ser.__toh5__()
>>> for k, v in arr:
...     print('%s=%s' % (k, v))
a=1
b.x='xxx'
b.y='yyy'
>>> s = object.__new__(Ser)
>>> s.__fromh5__(arr, attrs)
>>> s.a
1
>>> s.b['x']
'xxx'
The implementation is not recursive, i.e. there will be at most one dot in the serialized names (in the example here a, b.x, b.y).
class openquake.baselib.hdf5.PickleableSequence(objects)[source]¶
Bases: collections.abc.Sequence
An immutable sequence of pickleable objects that can be serialized in HDF5 format. Here is an example, using the LiteralAttrs class defined in this module, but any pickleable class would do:
>>> seq = PickleableSequence([LiteralAttrs(), LiteralAttrs()])
>>> with File('/tmp/x.h5', 'w') as f:
...     f['data'] = seq
>>> with File('/tmp/x.h5') as f:
...     f['data']
(<LiteralAttrs >, <LiteralAttrs >)
-
openquake.baselib.hdf5.
array_of_vstr
(lst)[source]¶ Parameters: lst – a list of strings or bytes Returns: an array of variable length ASCII strings
-
openquake.baselib.hdf5.
cls2dotname
(cls)[source]¶ The full Python name (i.e. pkg.subpkg.mod.cls) of a class
-
openquake.baselib.hdf5.
create
(hdf5, name, dtype, shape=(None, ), compression=None, fillvalue=0, attrs=None)[source]¶ Parameters: - hdf5 – a h5py.File object
- name – an hdf5 key string
- dtype – dtype of the dataset (usually composite)
- shape – shape of the dataset (can be extendable)
- compression – None or ‘gzip’ are recommended
- attrs – dictionary of attributes of the dataset
Returns: a HDF5 dataset
-
openquake.baselib.hdf5.
dotname2cls
(dotname)[source]¶ The class associated to the given dotname (i.e. pkg.subpkg.mod.cls)
-
openquake.baselib.hdf5.
extend
(dset, array)[source]¶ Extend an extensible dataset with an array of a compatible dtype.
Parameters: - dset – an h5py dataset
- array – an array of length L
Returns: the total length of the dataset (i.e. initial length + L)
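Here is a sketch combining create and extend (the file name and dataset key are illustrative):
>>> import numpy
>>> import h5py
>>> with h5py.File('/tmp/ext.h5', 'w') as f:
...     dset = create(f, 'values', numpy.float64, shape=(None,))
...     extend(dset, numpy.array([1., 2., 3.]))
...     extend(dset, numpy.array([4., 5.]))
3
5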
-
openquake.baselib.hdf5.
extend3
(hdf5path, key, array, **attrs)[source]¶ Extend an HDF5 file dataset with the given array
node¶
This module defines a Node class, together with a few conversion functions which are able to convert NRML files into hierarchical objects (DOM). That makes it easier to read and write XML from Python and vice versa. Such features are used in the command-line conversion tools. The Node class is kept intentionally similar to an Element class; however, it overcomes a limitation of ElementTree: in particular a node can manage a lazy iterable of subnodes, whereas ElementTree wants to keep everything in memory. Moreover the Node class provides a convenient dot notation to access subnodes.
The Node class is instantiated with four arguments:
- the node tag (a mandatory string)
- the node attributes (a dictionary)
- the node value (a string or None)
- the subnodes (an iterable over nodes)
If a node has subnodes, its value should be None.
For instance, here is an example of instantiating a root node with two subnodes a and b:
>>> from openquake.baselib.node import Node
>>> a = Node('a', {}, 'A1')
>>> b = Node('b', {'attrb': 'B'}, 'B1')
>>> root = Node('root', nodes=[a, b])
>>> root
<root {} None ...>
Node objects can be converted into nicely indented strings:
>>> print(root.to_str())
root
a 'A1'
b{attrb='B'} 'B1'
The subnodes can be retrieved with the dot notation:
>>> root.a
<a {} A1 >
The value of a node can be extracted with the ~ operator:
>>> ~root.a
'A1'
If there are multiple subnodes with the same name
>>> root.append(Node('a', {}, 'A2')) # add another 'a' node
the dot notation will retrieve the first node.
It is possible to retrieve the other nodes from the ordinal index:
>>> root[0], root[1], root[2]
(<a {} A1 >, <b {'attrb': 'B'} B1 >, <a {} A2 >)
The list of all subnodes with a given name can be retrieved as follows:
>>> list(root.getnodes('a'))
[<a {} A1 >, <a {} A2 >]
It is also possible to delete a node given its index:
>>> del root[2]
A node is an iterable object yielding its subnodes:
>>> list(root)
[<a {} A1 >, <b {'attrb': 'B'} B1 >]
The attributes of a node can be retrieved with the square bracket notation:
>>> root.b['attrb']
'B'
It is possible to add and remove attributes freely:
>>> root.b['attr'] = 'new attr'
>>> del root.b['attr']
Node objects can be easily converted into ElementTree objects:
>>> node_to_elem(root)
<Element 'root' at ...>
Then is trivial to generate the XML representation of a node:
>>> from xml.etree import ElementTree
>>> print(ElementTree.tostring(node_to_elem(root)).decode('utf-8'))
<root><a>A1</a><b attrb="B">B1</b></root>
Generating XML files larger than the available memory requires some care. The trick is to use a node generator, such that it is not necessary to keep the entire tree in memory. Here is an example:
>>> def gen_many_nodes(N):
... for i in range(N):
... yield Node('a', {}, 'Text for node %d' % i)
>>> lazytree = Node('lazytree', {}, nodes=gen_many_nodes(10))
The lazytree object defined here consumes no memory, because the nodes are not created at instantiation time. They are created as soon as you start iterating on the lazytree. In particular list(lazytree) will generate all of them. If your goal is to store the tree on the filesystem in XML format, you should use a writing routine converting one subnode at a time, without requiring the full list of them. The routines provided by ElementTree are no good; however, this module provides a StreamingXMLWriter just for that purpose.
Lazy trees should not be used unless absolutely necessary in order to save memory; the problem is that if you use a lazy tree the slice notation will not work (the underlying generator will not accept it); moreover it will not be possible to iterate twice on the subnodes, since the generator will be exhausted. Notice that even accessing a subnode with the dot notation will advance the generator. Finally, nodes containing lazy nodes will not be pickleable.
class openquake.baselib.node.Node(fulltag, attrib=None, text=None, nodes=None, lineno=None)[source]¶
Bases: object
A class to make it easy to edit hierarchical structures with attributes, such as XML files. Node objects must be pickleable and must consume as little memory as possible. Moreover they must be easily converted from and to ElementTree objects. The advantage over ElementTree objects is that subnodes can be lazily generated and that they can be accessed with the dot notation.
-
attrib
¶
-
lineno
¶
-
nodes
¶
-
tag
¶
-
text
¶
-
-
class
openquake.baselib.node.
SourceLineParser
[source]¶ Bases:
xml.etree.ElementTree.XMLParser
A custom parser managing line numbers: works for Python <= 3.3
class openquake.baselib.node.StreamingXMLWriter(bytestream, indent=4, encoding='utf-8', nsmap=None)[source]¶
Bases: object
A binary stream XML writer. The typical usage is something like this:
with StreamingXMLWriter(output_file) as writer:
    writer.start_tag('root')
    for node in nodegenerator():
        writer.serialize(node)
    writer.end_tag('root')
class openquake.baselib.node.ValidatingXmlParser(validators, stop=None)[source]¶
Bases: object
Validating XML parser based on Expat. It has two methods .parse_file and .parse_bytes returning a validated Node object.
Parameters: - validators – a dictionary of validation functions
- stop – the tag where to stop the parsing (if any)
-
exception
Exit
[source]¶ Bases:
Exception
Raised when the parsing is stopped before the end on purpose
-
openquake.baselib.node.
context
(fname, node)[source]¶ Context manager managing exceptions and adding line number of the current node and name of the current file to the error message.
Parameters: - fname – the current file being processed
- node – the current node being processed
-
openquake.baselib.node.
floatformat
(fmt_string)[source]¶ Context manager to change the default format string for the function
openquake.commonlib.writers.scientificformat()
.Parameters: fmt_string – the format to use; for instance ‘%13.9E’
-
openquake.baselib.node.
iterparse
(source, events=('end', ), remove_comments=True, **kw)[source]¶ Thin wrapper around ElementTree.iterparse
-
openquake.baselib.node.
node_copy
(node, nodefactory=<class 'openquake.baselib.node.Node'>)[source]¶ Make a deep copy of the node
-
openquake.baselib.node.
node_display
(root, expandattrs=False, expandvals=False, output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>)[source]¶ Write an indented representation of the Node object on the output; this is intended for testing/debugging purposes.
Parameters: - root – a Node object
- expandattrs (bool) – if True, the values of the attributes are also printed, not only the names
- expandvals (bool) – if True, the values of the tags are also printed, not only the names.
- output – stream where to write the string representation of the node
-
openquake.baselib.node.
node_from_dict
(dic, nodefactory=<class 'openquake.baselib.node.Node'>)[source]¶ Convert a (nested) dictionary with attributes tag, attrib, text, nodes into a Node object.
-
openquake.baselib.node.
node_from_elem
(elem, nodefactory=<class 'openquake.baselib.node.Node'>, lazy=())[source]¶ Convert (recursively) an ElementTree object into a Node object.
-
openquake.baselib.node.
node_from_ini
(ini_file, nodefactory=<class 'openquake.baselib.node.Node'>, root_name='ini')[source]¶ Convert a .ini file into a Node object.
Parameters: ini_file – a filename or a file like object in read mode
-
openquake.baselib.node.
node_from_xml
(xmlfile, nodefactory=<class 'openquake.baselib.node.Node'>)[source]¶ Convert a .xml file into a Node object.
Parameters: xmlfile – a file name or file object open for reading
-
openquake.baselib.node.
node_to_dict
(node)[source]¶ Convert a Node object into a (nested) dictionary with attributes tag, attrib, text, nodes.
Parameters: node – a Node-compatible object
-
openquake.baselib.node.
node_to_elem
(root)[source]¶ Convert (recursively) a Node object into an ElementTree object.
-
openquake.baselib.node.
node_to_ini
(node, output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>)[source]¶ Convert a Node object with the right structure into a .ini file.
Params node: a Node object Params output: a file-like object opened in write mode
-
openquake.baselib.node.
node_to_xml
(node, output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>, nsmap=None)[source]¶ Convert a Node object into a pretty .xml file without keeping everything in memory. If you just want the string representation use tostring(node).
Parameters: - node – a Node-compatible object (ElementTree nodes are fine)
- nsmap – if given, shorten the tags with aliases
-
openquake.baselib.node.
parse
(source, remove_comments=True, **kw)[source]¶ Thin wrapper around ElementTree.parse
-
openquake.baselib.node.
pprint
(self, stream=None, indent=1, width=80, depth=None)[source]¶ Pretty print the underlying literal Python object
openquake.baselib.node.read_nodes(fname, filter_elem, nodefactory=<class 'openquake.baselib.node.Node'>, remove_comments=True)[source]¶
Convert an XML file into a lazy iterator over Node objects satisfying the given specification, i.e. a function element -> boolean.
Parameters: - fname – file name or file object
- filter_elem – element specification
In case of errors, add the file name to the error message.
-
openquake.baselib.node.
scientificformat
(value, fmt='%13.9E', sep=' ', sep2=':')[source]¶ Parameters: - value – the value to convert into a string
- fmt – the formatting string to use for float values
- sep – separator to use for vector-like values
- sep2 – second separator to use for matrix-like values
Convert a float or an array into a string by using the scientific notation and a fixed precision (by default 10 decimal digits). For instance:
>>> scientificformat(-0E0)
'0.000000000E+00'
>>> scientificformat(-0.004)
'-4.000000000E-03'
>>> scientificformat([0.004])
'4.000000000E-03'
>>> scientificformat([0.01, 0.02], '%10.6E')
'1.000000E-02 2.000000E-02'
>>> scientificformat([[0.1, 0.2], [0.3, 0.4]], '%4.1E')
'1.0E-01:2.0E-01 3.0E-01:4.0E-01'
-
openquake.baselib.node.
striptag
(tag)[source]¶ Get the short representation of a fully qualified tag
Parameters: tag (str) – a (fully qualified or not) XML tag
-
openquake.baselib.node.
tostring
(node, indent=4, nsmap=None)[source]¶ Convert a node into an XML string by using the StreamingXMLWriter. This is useful for testing purposes.
Parameters: - node – a node object (typically an ElementTree object)
- indent – the indentation to use in the XML (default 4 spaces)
parallel¶
The Starmap API¶
There are several good libraries to manage parallel programming, both in the standard library and in third party packages. Since we are not interested in reinventing the wheel, OpenQuake does not offer any new parallel library; however, it does offer some glue code so that you can use your library of choice. Currently multiprocessing, concurrent.futures, celery and ipython-parallel are supported. Moreover, openquake.baselib.parallel offers some additional facilities that make it easier to parallelize scientific computations, i.e. embarrassingly parallel problems.
Typically one wants to apply a callable to a list of arguments in parallel rather than sequentially, and then combine the results. This is known as a MapReduce problem. As a simple example, we will consider the problem of counting the letters in a text. Here is how you can solve the problem sequentially:
>>> from itertools import starmap # map a function with multiple arguments
>>> from functools import reduce # reduce an iterable with a binary operator
>>> from operator import add # addition function
>>> from collections import Counter # callable doing the counting
>>> arglist = [('hello',), ('world',)] # list of arguments
>>> results = starmap(Counter, arglist) # iterator over the results
>>> res = reduce(add, results, Counter()) # aggregated counts
>>> sorted(res.items()) # counts per letter
[('d', 1), ('e', 1), ('h', 1), ('l', 3), ('o', 2), ('r', 1), ('w', 1)]
Here is how you can solve the problem in parallel by using openquake.baselib.parallel.Starmap:
>>> res2 = Starmap(Counter, arglist).reduce()
>>> assert res2 == res # the same as before
As you see, there are some notational advantages with respect to using itertools.starmap. First of all, Starmap has a reduce method, so there is no need to import functools.reduce; secondly, the reduce method has sensible defaults:
- the default aggregation function is add, so there is no need to specify it
- the default accumulator is an empty accumulation dictionary (see openquake.baselib.AccumDict) working as a Counter, so there is no need to specify it.
You can of course override the defaults, so if you really want to return a Counter you can do
>>> res3 = Starmap(Counter, arglist).reduce(acc=Counter())
In the engine we use nearly always callables that return dictionaries and we aggregate nearly always with the addition operator, so such defaults are very convenient. You are encouraged to do the same, since we found that approach to be very flexible. Typically in a scientific application you will return a dictionary of numpy arrays.
The parallelization algorithm used by Starmap will depend on the environment variable OQ_DISTRIBUTE. Here are the possibilities available at the moment:
- OQ_DISTRIBUTE not set or set to “futures”: use multiprocessing via the concurrent.futures interface
- OQ_DISTRIBUTE set to “no”: disable the parallelization, useful for debugging
- OQ_DISTRIBUTE set to “celery”: use celery, useful if you have multiple machines in a cluster
- OQ_DISTRIBUTE set to “ipython”: use the ipyparallel concurrency mechanism (experimental)
There is also an OQ_DISTRIBUTE = “threadpool”; however, the performance of using threads instead of processes is normally bad for the kind of applications we are interested in (CPU-dominated, with large tasks such that the time to spawn a new process is negligible with respect to the time to perform the task), so it is not recommended.
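For instance, to debug the letter-counting example sequentially you could set the variable from Python (a sketch; in practice the variable is usually exported in the shell before launching the engine, since it may be read when the parallel module is imported):
>>> import os
>>> os.environ['OQ_DISTRIBUTE'] = 'no'  # disable parallelism
>>> res4 = Starmap(Counter, arglist).reduce()
>>> assert res4 == res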
The Starmap.apply API¶
The Starmap class has a very convenient classmethod Starmap.apply which is used in several places in the engine. Starmap.apply is useful when you have a sequence of objects that you want to split in homogeneous chunks and then apply a callable to each chunk (in parallel). For instance, in the letter counting example discussed before, Starmap.apply could be used as follows:
>>> text = 'helloworld' # sequence of characters
>>> res3 = Starmap.apply(Counter, (text,)).reduce()
>>> assert res3 == res
The API of Starmap.apply is designed to extend the one of apply, a builtin of Python 2; the second argument is the tuple of arguments passed to the first argument. The difference with apply is that Starmap.apply returns a Starmap object, so that nothing is actually done until you iterate on it (reduce is doing that).
How many chunks will be produced? That depends on the parameter concurrent_tasks; if it is not passed, it defaults to 5 times the number of cores in your machine - as returned by os.cpu_count() - and Starmap.apply will try to produce a number of chunks close to that number. The nice thing is that it is also possible to pass a weight function. Suppose for instance that instead of a list of letters you have a list of seismic sources: some sources require a long computation time (such as ComplexFaultSources), some require a short computation time (such as PointSources). By giving a heuristic weight to the different sources it is possible to produce chunks with nearly homogeneous weight; in particular PointSource tasks will contain a lot more sources than tasks with ComplexFaultSources.
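As a sketch of the weighting mechanism, here is the letter-counting example again, pretending that vowels are ten times heavier than consonants (the weight function is purely illustrative):
>>> def vowel_weight(char):
...     return 10 if char in 'aeiou' else 1
>>> res5 = Starmap.apply(Counter, (text,), concurrent_tasks=3,
...                      weight=vowel_weight).reduce()
>>> assert res5 == res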
It is essential in large computations to have a homogeneous task distribution, otherwise you will end up having a big task dominating the computation time (i.e. you may have 1000 cores of which 999 are free, having finished all the short tasks, but you have to wait for days for the single core processing the slow task). The OpenQuake engine does a great deal of work trying to split slow sources in more manageable fast sources.
-
class
openquake.baselib.parallel.
BaseStarmap
(func, iterargs, poolsize=None, progress=<function info>)[source]¶ Bases:
object
-
add_task_no
(iterargs, pickle=True)¶ Add .task_no and .weight to the monitor and yield back the arguments by pickling them if pickle is True.
-
classmethod
apply
(func, args, concurrent_tasks=48, weight=<function BaseStarmap.<lambda>>, key=<function BaseStarmap.<lambda>>)[source]¶
-
init
(oqtask)¶
-
num_tasks
¶ The number of tasks, if known, or the empty string otherwise.
-
static
poolfactory
(size)¶
-
submit_all
()[source]¶ Returns: an IterResult
instance
-
-
class
openquake.baselib.parallel.
IterResult
(futures, taskname, num_tasks, progress=<function info>, sent=0)[source]¶ Bases:
object
Parameters: - futures – an iterator over futures
- taskname – the name of the task
- num_tasks – the total number of expected futures
- progress – a logging function for the progress report
- sent – the number of bytes sent (0 if OQ_DISTRIBUTE=no)
-
task_data_dt
= dtype([('taskno', '<u4'), ('weight', '<f4'), ('duration', '<f4')])¶
class openquake.baselib.parallel.Pickled(obj)[source]¶
Bases: object
A utility for manually pickling/unpickling objects. The reason is that celery does not use the HIGHEST_PROTOCOL, so relying on celery is slower. Moreover Pickled instances have a nice string representation and a length giving the size of the pickled bytestring.
Parameters: obj – the object to pickle
class openquake.baselib.parallel.Processmap(func, iterargs, poolsize=None, progress=<function info>)[source]¶
Bases: openquake.baselib.parallel.BaseStarmap
MapReduce implementation based on processes. For instance
>>> from collections import Counter
>>> c = Processmap(Counter, [('hello',), ('world',)], poolsize=4).reduce()
>>> sorted(c.items())
[('d', 1), ('e', 1), ('h', 1), ('l', 3), ('o', 2), ('r', 1), ('w', 1)]
-
class
openquake.baselib.parallel.
Sequential
(func, iterargs, poolsize=None, progress=<function info>)[source]¶ Bases:
openquake.baselib.parallel.BaseStarmap
A sequential Starmap, useful for debugging purpose.
class openquake.baselib.parallel.Starmap(oqtask, task_args, name=None)[source]¶
Bases: object
A manager to submit several tasks of the same type. The usage is:
tm = Starmap(do_something, logging.info)
tm.send(arg1, arg2)
tm.send(arg3, arg4)
print(tm.reduce())
Progress report is built-in.
-
add_task_no
(iterargs, pickle=True)[source]¶ Add .task_no and .weight to the monitor and yield back the arguments by pickling them if pickle is True.
classmethod apply(task, task_args, concurrent_tasks=48, maxweight=None, weight=<function Starmap.<lambda>>, key=<function Starmap.<lambda>>, name=None)[source]¶
Apply a task to a tuple of the form (sequence, *other_args) by first splitting the sequence in chunks, according to the weight of the elements and possibly to a key (see openquake.baselib.general.split_in_blocks).
Parameters: - task – a task to run in parallel
- task_args – the arguments to be passed to the task function
- agg – the aggregation function
- acc – initial value of the accumulator (default empty AccumDict)
- concurrent_tasks – hint about how many tasks to generate
- maxweight – if not None, used to split the tasks
- weight – function to extract the weight of an item in arg0
- key – function to extract the kind of an item in arg0
-
executor
= <concurrent.futures.process.ProcessPoolExecutor object>¶
-
num_tasks
¶ The number of tasks, if known, or the empty string otherwise.
-
reduce
(agg=<built-in function add>, acc=None)[source]¶ Loop on a set of results and update the accumulator by using the aggregation function.
Parameters: - agg – the aggregation function, (acc, val) -> new acc
- acc – the initial value of the accumulator
Returns: the final value of the accumulator
-
submit
(*args)[source]¶ Submit a function with the given arguments to the process pool and add a Future to the list .results. If the attribute distribute is set, the function is run in process and the result is returned.
-
task_ids
= []¶
class openquake.baselib.parallel.Threadmap(func, iterargs, poolsize=None, progress=<function info>)[source]¶
Bases: openquake.baselib.parallel.BaseStarmap
MapReduce implementation based on threads. For instance
>>> from collections import Counter
>>> c = Threadmap(Counter, [('hello',), ('world',)], poolsize=4).reduce()
>>> sorted(c.items())
[('d', 1), ('e', 1), ('h', 1), ('l', 3), ('o', 2), ('r', 1), ('w', 1)]
-
static
poolfactory
(size)¶
openquake.baselib.parallel.check_mem_usage(monitor=<Monitor dummy>, soft_percent=None, hard_percent=None)[source]¶
Display a warning if we are running out of memory
Parameters: mem_percent (int) – the memory limit as a percentage
-
openquake.baselib.parallel.
do_not_aggregate
(acc, value)[source]¶ Do nothing aggregation function.
Parameters: - acc – the accumulator
- value – the value to accumulate
Returns: the accumulator unchanged
openquake.baselib.parallel.get_pickled_sizes(obj)[source]¶
Return the pickled sizes of an object and its direct attributes, ordered by decreasing size. Here is an example:
>> total_size, partial_sizes = get_pickled_sizes(Monitor(''))
>> total_size
345
>> partial_sizes
[('_procs', 214), ('exc', 4), ('mem', 4), ('start_time', 4), ('_start_time', 4), ('duration', 4)]
Notice that the sizes depend on the operating system and the machine.
-
openquake.baselib.parallel.
oq_distribute
(task=None)[source]¶ Returns: the value of OQ_DISTRIBUTE or ‘futures’
-
openquake.baselib.parallel.
pickle_sequence
(objects)[source]¶ Convert an iterable of objects into a list of pickled objects. If the iterable contains copies, the pickling will be done only once. If the iterable contains objects already pickled, they will not be pickled again.
Parameters: objects – a sequence of objects to pickle
-
openquake.baselib.parallel.
qsub
(func, allargs, authkey=None)[source]¶ Map functions to arguments by means of the Grid Engine.
Parameters: - func – a pickleable callable object
- allargs – a list of tuples of arguments
- authkey – authentication token used to send back the results
Returns: an iterable over results of the form (res, etype, mon)
performance¶
class openquake.baselib.performance.Monitor(operation='dummy', hdf5path=None, autoflush=False, measuremem=False)[source]¶
Bases: object
Measure the resident memory occupied by a list of processes during the execution of a block of code. Should be used as a context manager, as follows:
with Monitor('do_something') as mon:
    do_something()
print(mon.mem)
At the end of the block the Monitor object will have the following public attributes:
.start_time: when the monitor started (a datetime object)
.duration: time elapsed between start and stop (in seconds)
.exc: usually None; otherwise the exception happened in the with block
.mem: the memory delta in bytes
The behaviour of the Monitor can be customized by subclassing it and by overriding the method on_exit(), called at the end and used to display or store the results of the analysis.
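A minimal sketch of such a subclass (the printing logic is illustrative only):
>>> class PrintingMonitor(Monitor):
...     def on_exit(self):  # called when the 'with' block ends
...         print('%s took %.2f seconds' % (self.operation, self.duration))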
NB: if the .address attribute is set, it is possible for the monitor to send commands to that address, assuming there is a multiprocessing.connection.Listener listening.
address = None¶
-
authkey
= None¶
-
calc_id
= None¶
-
dt
¶ Last time interval measured
get_data()[source]¶
Returns: an array of dtype perf_dt, with the information of the monitor (operation, time_sec, memory_mb, counts); the length of the array can be 0 (for counts=0) or 1 (otherwise).
-
new
(operation='no operation', **kw)[source]¶ Return a copy of the monitor usable for a different operation.
-
start_time
¶ Datetime instance recording when the monitoring started
-
python3compat¶
Compatibility layer for Python 2 and 3. Mostly copied from six and future, but reduced to the subset of utilities needed by GEM. This is done to avoid an external dependency.
-
openquake.baselib.python3compat.
check_syntax
(pkg)[source]¶ Recursively check all modules in the given package for compatibility with Python 3 syntax. No imports are performed.
Parameters: pkg – a Python package
openquake.baselib.python3compat.decode(val)[source]¶
Decode an object assuming the encoding is UTF-8.
Param: a unicode or bytes object
Returns: a unicode object
openquake.baselib.python3compat.encode(val)[source]¶
Encode a string assuming the encoding is UTF-8.
Param: a unicode or bytes object
Returns: bytes
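A small sketch of the round trip:
>>> decode(b'caf\xc3\xa9')
'café'
>>> encode('café')
b'caf\xc3\xa9'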
-
openquake.baselib.python3compat.
raise_
(tp, value=None, tb=None)[source]¶ A function that matches the Python 2.x
raise
statement. This allows re-raising exceptions with the cls value and traceback on Python 2 and 3.
runtests¶
sap¶
Here is a minimal example of usage:
>>> from openquake.baselib import sap
>>> def fun(input, inplace, output=None, out='/tmp'):
... 'Example'
... for item in sorted(locals().items()):
... print('%s = %s' % item)
>>> p = sap.Script(fun)
>>> p.arg('input', 'input file or archive')
>>> p.flg('inplace', 'convert inplace')
>>> p.arg('output', 'output archive')
>>> p.opt('out', 'optional output file')
>>> p.callfunc(['a'])
inplace = False
input = a
out = /tmp
output = None
>>> p.callfunc(['a', 'b', '-i', '-o', 'OUT'])
inplace = True
input = a
out = OUT
output = b
Parsers can be composed too.
-
class
openquake.baselib.sap.
Script
(func, name=None, parentparser=None, help=True, registry=True)[source]¶ Bases:
object
A simple way to define command processors based on argparse. Each parser is associated to a function and parsers can be composed together, by dispatching on a given name (if not given, the function name is used).
-
arg
(name, help, type=None, choices=None, metavar=None, nargs=None)[source]¶ Describe a positional argument
-
callfunc
(argv=None)[source]¶ Parse the argv list and extract a dictionary of arguments which is then passed to the function underlying the Script.
-
opt
(name, help, abbrev=None, type=None, choices=None, metavar=None, nargs=None)[source]¶ Describe an option
-
registry
= {}¶
-
-
openquake.baselib.sap.
compose
(scripts, name='main', description=None, prog=None, version=None)[source]¶ Collects together different Scripts and builds a single Script dispatching to the subparsers depending on the first argument, i.e. the name of the subparser to invoke.
Parameters: - scripts – a list of Script instances
- name – the name of the composed parser
- description – description of the composed parser
- prog – name of the script printed in the usage message
- version – version of the script printed with –version
-
openquake.baselib.sap.
get_parentparser
(parser, description=None, help=True)[source]¶ Parameters: - parser –
argparse.ArgumentParser
instance or None - description – string used to build a new parser if parser is None
- help – flag used to build a new parser if parser is None
Returns: if parser is None the new parser; otherwise the .parentparser attribute (if set) or the parser itself (if not set)
- parser –