h5py-3.6.0/LICENSE

Copyright (c) 2008 Andrew Collette and contributors
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the
distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
h5py-3.6.0/MANIFEST.in

include AUTHORS
include api_gen.py
include dev-install.sh
include LICENSE
include MANIFEST.in
include pylintrc
include README.rst
include setup_build.py
include setup_configure.py
include tox.ini
include pytest.ini
include *.toml
recursive-include docs *
prune docs/_build
recursive-include docs_api *
prune docs_api/_build
recursive-include examples *.py
recursive-include h5py *.h *.pyx *.pxd *.pxi *.py *.txt
exclude h5py/config.pxi
exclude h5py/defs.pxd
exclude h5py/defs.pyx
exclude h5py/_hdf5.pxd
recursive-include h5py/tests *.h5
recursive-include licenses *
recursive-include lzf *
recursive-exclude * .DS_Store
exclude ci other .github
recursive-exclude ci *
recursive-exclude other *
recursive-exclude .github *
exclude *.yml
exclude *.yaml
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
exclude .coveragerc
exclude .coverage_dir
recursive-exclude .coverage_dir *
exclude .mailmap
exclude github_deploy_key_h5py_h5py.enc
exclude rever.xsh
prune news
include asv.conf.json
recursive-include benchmarks *.py
h5py-3.6.0/PKG-INFO

Metadata-Version: 2.1
Name: h5py
Version: 3.6.0
Summary: Read and write HDF5 files from Python
Home-page: http://www.h5py.org
Author: Andrew Collette
Author-email: andrew.collette@gmail.com
Maintainer: Andrew Collette
Maintainer-email: andrew.collette@gmail.com
License: BSD
Download-URL: https://pypi.python.org/pypi/h5py
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Cython
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Database
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Operating System :: Unix
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
Requires-Python: >=3.7
License-File: LICENSE
The h5py package provides both a high- and low-level interface to the HDF5
library from Python. The low-level interface is intended to be a complete
wrapping of the HDF5 API, while the high-level component supports access to
HDF5 files, datasets and groups using established Python and NumPy concepts.
A strong emphasis on automatic conversion between Python (NumPy) datatypes and
data structures and their HDF5 equivalents vastly simplifies the process of
reading and writing data from Python.
Supports HDF5 versions 1.8.4 and higher. On Windows, HDF5 is included with
the installer.
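The dict-like, NumPy-based access described above can be sketched in a few lines (assuming h5py and NumPy are installed; the file name is arbitrary):

```python
import numpy as np
import h5py

# Write: files, groups and datasets behave like dicts holding NumPy arrays.
with h5py.File("example.h5", "w") as f:
    f["data"] = np.arange(10)
    f["data"].attrs["units"] = "counts"

# Read back: slicing a dataset returns a NumPy array.
with h5py.File("example.h5", "r") as f:
    arr = f["data"][2:5]
    units = f["data"].attrs["units"]

print(arr)    # [2 3 4]
print(units)  # counts
```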
h5py-3.6.0/README.rst

.. image:: https://travis-ci.org/h5py/h5py.png
:target: https://travis-ci.org/h5py/h5py
.. image:: https://ci.appveyor.com/api/projects/status/h3iajp4d1myotprc/branch/master?svg=true
:target: https://ci.appveyor.com/project/h5py/h5py/branch/master
.. image:: https://dev.azure.com/h5pyappveyor/h5py/_apis/build/status/h5py.h5py?branchName=master
:target: https://dev.azure.com/h5pyappveyor/h5py/_build/latest?definitionId=1&branchName=master
HDF5 for Python
===============
`h5py` is a thin, pythonic wrapper around HDF5,
which runs on Python 3 (3.7+).
Websites
--------
* Main website: https://www.h5py.org
* Source code: https://github.com/h5py/h5py
* Mailing list: https://groups.google.com/d/forum/h5py
Installation
------------
Pre-built `h5py` can be installed either via your Python distribution (e.g.
`Continuum Anaconda`_, `Enthought Canopy`_) or from `PyPI`_ via `pip`_.
`h5py` is also packaged in many Linux distributions (e.g. Ubuntu, Fedora),
and in the macOS package managers Homebrew, MacPorts and Fink.
More detailed installation instructions, including how to install `h5py` with
MPI support, can be found at: https://docs.h5py.org/en/latest/build.html.
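Whichever route is used, a quick way to confirm which h5py and HDF5 versions an installation is using:

```python
import h5py

# h5py.version exposes both the h5py version and the version of the
# HDF5 library it was built against.
print(h5py.version.version)       # e.g. "3.6.0"
print(h5py.version.hdf5_version)
```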
Reporting bugs
--------------
Open a bug at https://github.com/h5py/h5py/issues. For general questions, ask
on the HDF forum (https://forum.hdfgroup.org/c/hdf-tools/h5py).
.. _`Continuum Anaconda`: http://continuum.io/downloads
.. _`Enthought Canopy`: https://www.enthought.com/products/canopy/
.. _`PyPI`: https://pypi.org/project/h5py/
.. _`pip`: https://pip.pypa.io/en/stable/
h5py-3.6.0/api_gen.py
"""
Generate the lowest-level Cython bindings to HDF5.
In order to translate HDF5 errors to exceptions, the raw HDF5 API is
wrapped with Cython "error wrappers". These are cdef functions with
the same names and signatures as their HDF5 equivalents, but implemented
in the h5py.defs extension module.
The h5py.defs files (defs.pyx and defs.pxd), along with the "real" HDF5
function definitions (_hdf5.pxd), are auto-generated by this script from
api_functions.txt. That file also contains annotations which indicate
whether a function requires a certain minimum version of HDF5, an
MPI-aware build of h5py, or special error handling.
This script is called automatically by the h5py build system when the
output files are missing, or api_functions.txt has been updated.
See the Line class in this module for documentation of the format for
api_functions.txt.
h5py/_hdf5.pxd: Cython "extern" definitions for HDF5 functions
h5py/defs.pxd: Cython definitions for error wrappers
h5py/defs.pyx: Cython implementations of error wrappers
"""
import re
import os.path as op
class Line(object):
"""
Represents one line from the api_functions.txt file.
Exists to provide the following attributes:
nogil: String indicating if we should release the GIL to call this
function. Any Python callbacks it could trigger must
acquire the GIL (e.g. using 'with gil' in Cython).
mpi: Bool indicating if MPI required
ros3: Bool indicating if ROS3 required
min_version: None or a minimum-version tuple
max_version: None or a maximum-version tuple
code: String with function return type
fname: String with function name
sig: String with raw function signature
args: String with sequence of arguments to call function
Example: MPI 1.8.12 int foo(char* a, size_t b)
.nogil: ""
.mpi: True
.ros3: False
.min_version: (1, 8, 12)
.max_version: None
.code: "int"
.fname: "foo"
.sig: "char* a, size_t b"
.args: "a, b"
"""
PATTERN = re.compile(r"""(?P<mpi>(MPI)[ ]+)?
(?P<ros3>(ROS3)[ ]+)?
(?P<min_version>([0-9]+\.[0-9]+\.[0-9]+))?
(-(?P<max_version>([0-9]+\.[0-9]+\.[0-9]+)))?
([ ]+)?
(?P<code>(unsigned[ ]+)?[a-zA-Z_]+[a-zA-Z0-9_]*\**)[ ]+
(?P<fname>[a-zA-Z_]+[a-zA-Z0-9_]*)[ ]*
\((?P<sig>[a-zA-Z0-9_,* ]*)\)
([ ]+)?
(?P<nogil>(nogil))?
""", re.VERBOSE)
SIG_PATTERN = re.compile(r"""
(?:unsigned[ ]+)?
(?:[a-zA-Z_]+[a-zA-Z0-9_]*\**)
[ ]+[ *]*
(?P<name>[a-zA-Z_]+[a-zA-Z0-9_]*)
""", re.VERBOSE)
def __init__(self, text):
""" Break the line into pieces and populate object attributes.
text: A valid function line, with leading/trailing whitespace stripped.
"""
m = self.PATTERN.match(text)
if m is None:
raise ValueError("Invalid line encountered: {0}".format(text))
parts = m.groupdict()
self.nogil = "nogil" if parts['nogil'] else ""
self.mpi = parts['mpi'] is not None
self.ros3 = parts['ros3'] is not None
self.min_version = parts['min_version']
if self.min_version is not None:
self.min_version = tuple(int(x) for x in self.min_version.split('.'))
self.max_version = parts['max_version']
if self.max_version is not None:
self.max_version = tuple(int(x) for x in self.max_version.split('.'))
self.code = parts['code']
self.fname = parts['fname']
self.sig = parts['sig']
sig_const_stripped = self.sig.replace('const', '')
self.args = self.SIG_PATTERN.findall(sig_const_stripped)
if self.args is None:
raise ValueError("Invalid function signature: {0}".format(self.sig))
self.args = ", ".join(self.args)
# Figure out what test and return value to use with error reporting
if '*' in self.code or self.code in ('H5T_conv_t',):
self.err_condition = "==NULL"
self.err_value = f"<{self.code}>NULL"
elif self.code in ('int', 'herr_t', 'htri_t', 'hid_t', 'hssize_t', 'ssize_t') \
or re.match(r'H5[A-Z]+_[a-zA-Z_]+_t', self.code):
self.err_condition = "<0"
self.err_value = f"<{self.code}>-1"
elif self.code in ('unsigned int', 'haddr_t', 'hsize_t', 'size_t'):
self.err_condition = "==0"
self.err_value = f"<{self.code}>0"
else:
raise ValueError("Return code <<%s>> unknown" % self.code)
raw_preamble = """\
# cython: language_level=3
#
# Warning: this file is auto-generated from api_gen.py. DO NOT EDIT!
#
include "config.pxi"
from .api_types_hdf5 cimport *
from .api_types_ext cimport *
"""
def_preamble = """\
# cython: language_level=3
#
# Warning: this file is auto-generated from api_gen.py. DO NOT EDIT!
#
include "config.pxi"
from .api_types_hdf5 cimport *
from .api_types_ext cimport *
"""
imp_preamble = """\
# cython: language_level=3
#
# Warning: this file is auto-generated from api_gen.py. DO NOT EDIT!
#
include "config.pxi"
from .api_types_ext cimport *
from .api_types_hdf5 cimport *
from . cimport _hdf5
from ._errors cimport set_exception, set_default_error_handler
"""
class LineProcessor(object):
def run(self):
# Function definitions file
self.functions = open(op.join('h5py', 'api_functions.txt'), 'r')
# Create output files
self.raw_defs = open(op.join('h5py', '_hdf5.pxd'), 'w')
self.cython_defs = open(op.join('h5py', 'defs.pxd'), 'w')
self.cython_imp = open(op.join('h5py', 'defs.pyx'), 'w')
self.raw_defs.write(raw_preamble)
self.cython_defs.write(def_preamble)
self.cython_imp.write(imp_preamble)
for text in self.functions:
# Directive specifying a header file
if not text.startswith(' ') and not text.startswith('#') and \
len(text.strip()) > 0:
inc = text.split(':')[0]
self.raw_defs.write('cdef extern from "%s.h":\n' % inc)
continue
text = text.strip()
# Whitespace or comment line
if len(text) == 0 or text[0] == '#':
continue
# Valid function line
self.line = Line(text)
self.write_raw_sig()
self.write_cython_sig()
self.write_cython_imp()
self.functions.close()
self.cython_imp.close()
self.cython_defs.close()
self.raw_defs.close()
def add_cython_if(self, block):
""" Wrap a block of code in the required "IF" checks """
def wrapif(condition, code):
code = code.replace('\n', '\n ', code.count('\n') - 1) # Yes, -1.
code = "IF {0}:\n {1}".format(condition, code)
return code
if self.line.mpi:
block = wrapif('MPI', block)
if self.line.ros3:
block = wrapif('ROS3', block)
if self.line.min_version is not None and self.line.max_version is not None:
block = wrapif('HDF5_VERSION >= {0.min_version} and HDF5_VERSION <= {0.max_version}'.format(self.line), block)
elif self.line.min_version is not None:
block = wrapif('HDF5_VERSION >= {0.min_version}'.format(self.line), block)
elif self.line.max_version is not None:
block = wrapif('HDF5_VERSION <= {0.max_version}'.format(self.line), block)
return block
def write_raw_sig(self):
""" Write out "cdef extern"-style definition for an HDF5 function """
raw_sig = "{0.code} {0.fname}({0.sig}) {0.nogil}\n".format(self.line)
raw_sig = self.add_cython_if(raw_sig)
raw_sig = "\n".join((" " + x if x.strip() else x) for x in raw_sig.split("\n"))
self.raw_defs.write(raw_sig)
def write_cython_sig(self):
""" Write out Cython signature for wrapper function """
if self.line.fname == 'H5Dget_storage_size':
# Special case: https://github.com/h5py/h5py/issues/1475
cython_sig = "cdef {0.code} {0.fname}({0.sig}) except? {0.err_value}\n".format(self.line)
else:
cython_sig = "cdef {0.code} {0.fname}({0.sig}) except {0.err_value}\n".format(self.line)
cython_sig = self.add_cython_if(cython_sig)
self.cython_defs.write(cython_sig)
def write_cython_imp(self):
""" Write out Cython wrapper implementation """
if self.line.nogil:
imp = """\
cdef {0.code} {0.fname}({0.sig}) except {0.err_value}:
cdef {0.code} r
with nogil:
set_default_error_handler()
r = _hdf5.{0.fname}({0.args})
if r{0.err_condition}:
if set_exception():
return {0.err_value}
else:
raise RuntimeError("Unspecified error in {0.fname} (return value {0.err_condition})")
return r
"""
else:
if self.line.fname == 'H5Dget_storage_size':
# Special case: https://github.com/h5py/h5py/issues/1475
imp = """\
cdef {0.code} {0.fname}({0.sig}) except? {0.err_value}:
cdef {0.code} r
set_default_error_handler()
r = _hdf5.{0.fname}({0.args})
if r{0.err_condition}:
if set_exception():
return {0.err_value}
return r
"""
else:
imp = """\
cdef {0.code} {0.fname}({0.sig}) except {0.err_value}:
cdef {0.code} r
set_default_error_handler()
r = _hdf5.{0.fname}({0.args})
if r{0.err_condition}:
if set_exception():
return {0.err_value}
else:
raise RuntimeError("Unspecified error in {0.fname} (return value {0.err_condition})")
return r
"""
imp = imp.format(self.line)
imp = self.add_cython_if(imp)
self.cython_imp.write(imp)
def run():
lp = LineProcessor()
lp.run()
if __name__ == '__main__':
run()
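Line.PATTERN above is the heart of the parser; as a standalone illustration (the group names here are reconstructed to match the `parts[...]` keys used in `Line.__init__`), the docstring's example line parses like this:

```python
import re

# Reconstruction of api_gen.Line.PATTERN, applied to the example line
# from the Line docstring: "MPI 1.8.12 int foo(char* a, size_t b)".
PATTERN = re.compile(r"""(?P<mpi>(MPI)[ ]+)?
(?P<ros3>(ROS3)[ ]+)?
(?P<min_version>([0-9]+\.[0-9]+\.[0-9]+))?
(-(?P<max_version>([0-9]+\.[0-9]+\.[0-9]+)))?
([ ]+)?
(?P<code>(unsigned[ ]+)?[a-zA-Z_]+[a-zA-Z0-9_]*\**)[ ]+
(?P<fname>[a-zA-Z_]+[a-zA-Z0-9_]*)[ ]*
\((?P<sig>[a-zA-Z0-9_,* ]*)\)
([ ]+)?
(?P<nogil>(nogil))?
""", re.VERBOSE)

m = PATTERN.match("MPI 1.8.12 int foo(char* a, size_t b)")
parts = m.groupdict()
print(parts["fname"])            # foo
print(parts["min_version"])      # 1.8.12
print(parts["mpi"] is not None)  # True
```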
h5py-3.6.0/asv.conf.json

{
// The version of the config file format. Do not change, unless
// you know what you are doing.
"version": 1,
// The name of the project being benchmarked
"project": "h5py",
// The project's homepage
"project_url": "https://www.h5py.org/",
// The URL or local path of the source code repository for the
// project being benchmarked
"repo": ".",
// The Python project's subdirectory in your repo. If missing or
// the empty string, the project is assumed to be located at the root
// of the repository.
// "repo_subdir": "",
// Customizable commands for building, installing, and
// uninstalling the project. See asv.conf.json documentation.
//
// "install_command": ["in-dir={env_dir} python -mpip install {wheel_file}"],
// "uninstall_command": ["return-code=any python -mpip uninstall -y {project}"],
// "build_command": [
// "python setup.py build",
// "PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}"
// ],
// List of branches to benchmark. If not provided, defaults to "master"
// (for git) or "default" (for mercurial).
// "branches": ["master"], // for git
// "branches": ["default"], // for mercurial
// The DVCS being used. If not set, it will be automatically
// determined from "repo" by looking at the protocol in the URL
// (if remote), or by looking for special directories, such as
// ".git" (if local).
// "dvcs": "git",
// The tool to use to create environments. May be "conda",
// "virtualenv" or other value depending on the plugins in use.
// If missing or the empty string, the tool will be automatically
// determined by looking for tools on the PATH environment
// variable.
"environment_type": "virtualenv",
// timeout in seconds for installing any dependencies in environment
// defaults to 10 min
//"install_timeout": 600,
// the base URL to show a commit for the project.
// "show_commit_url": "http://github.com/owner/project/commit/",
// The Pythons you'd like to test against. If not provided, defaults
// to the current version of Python used to run `asv`.
// "pythons": ["2.7", "3.6"],
// The list of conda channel names to be searched for benchmark
// dependency packages in the specified order
// "conda_channels": ["conda-forge", "defaults"],
// The matrix of dependencies to test. Each key is the name of a
// package (in PyPI) and the values are version numbers. An empty
// list or empty string indicates to just test against the default
// (latest) version. null indicates that the package is to not be
// installed. If the package to be tested is only available from
// PyPi, and the 'environment_type' is conda, then you can preface
// the package name by 'pip+', and the package will be installed via
// pip (with all the conda available packages installed first,
// followed by the pip installed packages).
//
// "matrix": {
// "numpy": ["1.6", "1.7"],
// "six": ["", null], // test with and without six installed
// "pip+emcee": [""], // emcee is only available for install with pip.
// },
// Combinations of libraries/python versions can be excluded/included
// from the set to test. Each entry is a dictionary containing additional
// key-value pairs to include/exclude.
//
// An exclude entry excludes entries where all values match. The
// values are regexps that should match the whole string.
//
// An include entry adds an environment. Only the packages listed
// are installed. The 'python' key is required. The exclude rules
// do not apply to includes.
//
// In addition to package names, the following keys are available:
//
// - python
// Python version, as in the *pythons* variable above.
// - environment_type
// Environment type, as above.
// - sys_platform
// Platform, as in sys.platform. Possible values for the common
// cases: 'linux2', 'win32', 'cygwin', 'darwin'.
//
// "exclude": [
// {"python": "3.2", "sys_platform": "win32"}, // skip py3.2 on windows
// {"environment_type": "conda", "six": null}, // don't run without six on conda
// ],
//
// "include": [
// // additional env for python2.7
// {"python": "2.7", "numpy": "1.8"},
// // additional env if run on windows+conda
// {"platform": "win32", "environment_type": "conda", "python": "2.7", "libpython": ""},
// ],
// The directory (relative to the current directory) that benchmarks are
// stored in. If not provided, defaults to "benchmarks"
// "benchmark_dir": "benchmarks",
// The directory (relative to the current directory) to cache the Python
// environments in. If not provided, defaults to "env"
"env_dir": ".asv/env",
// The directory (relative to the current directory) that raw benchmark
// results are stored in. If not provided, defaults to "results".
"results_dir": ".asv/results",
// The directory (relative to the current directory) that the html tree
// should be written to. If not provided, defaults to "html".
"html_dir": ".asv/html",
// The number of characters to retain in the commit hashes.
// "hash_length": 8,
// `asv` will cache results of the recent builds in each
// environment, making them faster to install next time. This is
// the number of builds to keep, per environment.
// "build_cache_size": 2,
// The commits after which the regression search in `asv publish`
// should start looking for regressions. Dictionary whose keys are
// regexps matching to benchmark names, and values corresponding to
// the commit (exclusive) after which to start looking for
// regressions. The default is to start from the first commit
// with results. If the commit is `null`, regression detection is
// skipped for the matching benchmark.
//
// "regressions_first_commits": {
// "some_benchmark": "352cdf", // Consider regressions only after this commit
// "another_benchmark": null, // Skip regression detection altogether
// },
// The thresholds for relative change in results, after which `asv
// publish` starts reporting regressions. Dictionary of the same
// form as in ``regressions_first_commits``, with values
// indicating the thresholds. If multiple entries match, the
// maximum is taken. If no entry matches, the default is 5%.
//
// "regressions_thresholds": {
// "some_benchmark": 0.01, // Threshold of 1%
// "another_benchmark": 0.5, // Threshold of 50%
// },
}
h5py-3.6.0/benchmarks/__init__.py
(empty file)

h5py-3.6.0/benchmarks/benchmark_slicing.py

#!/usr/bin/env python3
import os
import time
import numpy
from tempfile import TemporaryDirectory
import logging
logger = logging.getLogger(__name__)
import h5py
# Needed for multithreading:
from queue import Queue
from threading import Thread, Event
import multiprocessing
class Reader(Thread):
"""Thread executing tasks from a given tasks queue"""
def __init__(self, queue_in, queue_out, quit_event):
Thread.__init__(self)
self._queue_in = queue_in
self._queue_out = queue_out
self._quit_event = quit_event
self.daemon = True
self.start()
def run(self):
while not self._quit_event.is_set():
task = self._queue_in.get()
if task:
fn, ds, position = task
else:
logger.debug("Swallow a bitter pill: %s", task)
break
try:
r = fn(ds, position)
self._queue_out.put((position, r))
except Exception:
raise
finally:
self._queue_in.task_done()
class SlicingBenchmark:
"""
Benchmark for reading slices from a chunked dataset in the most pathological way.
Allows the benchmark parameters to be tuned via the constructor.
"""
def __init__(self, ndim=3, size=1024, chunk=64, dtype="float32", precision=16, compression_kwargs=None):
"""
Defines some parameters for the benchmark, can be tuned later on.
:param ndim: number of dimensions (default 3, i.e. 3D datasets)
:param size: edge size of the volume, i.e. size**ndim elements (default 1024**3)
:param chunk: edge size of one chunk; with 32-bit items the default gives 1MB chunks
:param dtype: the type of data to be stored
:param precision: number of trailing bits to be zeroed, to gain a bit in compression
:param compression_kwargs: a dict with all options for configuring the compression
"""
self.ndim = ndim
self.size = size
self.dtype = numpy.dtype(dtype)
self.chunk = chunk
self.precision = precision
self.tmpdir = None
self.filename = None
self.h5path = "data"
self.total_size = self.size ** self.ndim * self.dtype.itemsize
self.needed_memory = self.size ** (self.ndim-1) * self.dtype.itemsize * self.chunk
if compression_kwargs is None:
self.compression = {}
else:
self.compression = dict(compression_kwargs)
def setup(self):
self.tmpdir = TemporaryDirectory()
self.filename = os.path.join(self.tmpdir.name, "benchmark_slicing.h5")
logger.info("Saving data in %s", self.filename)
logger.info("Total size: %i^%i volume size: %.3fGB, Needed memory: %.3fGB",
self.size, self.ndim, self.total_size/1e9, self.needed_memory/1e9)
shape = [self.size] * self.ndim
chunks = (self.chunk,) * self.ndim
if self.precision and self.dtype.char in "df":
if self.dtype.itemsize == 4:
mask = numpy.uint32(((1<<32) - (1<<(self.precision))))
elif self.dtype.itemsize == 8:
mask = numpy.uint64(((1<<64) - (1<<(self.precision))))
else:
logger.warning("Precision reduction: only float32 and float64 are supported")
self.precision = 0
else:
self.precision = 0
t0 = time.time()
with h5py.File(self.filename, 'w') as h:
ds = h.create_dataset(self.h5path,
shape,
chunks=chunks,
**self.compression)
for i in range(0, self.size, self.chunk):
x, y, z = numpy.ogrid[i:i+self.chunk, :self.size, :self.size]
data = (numpy.sin(x/3)*numpy.sin(y/5)*numpy.sin(z/7)).astype(self.dtype)
if self.precision:
idata = data.view(mask.dtype)
idata &= mask # mask out the last XX bits
ds[i:i+self.chunk] = data
t1 = time.time()
dt = t1 - t0
filesize = os.stat(self.filename).st_size
logger.info("Compression: %.3f time %.3fs uncompressed data saving speed %.3f MB/s effective write speed %.3f MB/s ",
self.total_size/filesize, dt, self.total_size/dt/1e6, filesize/dt/1e6)
def teardown(self):
self.tmpdir.cleanup()
self.filename = None
@staticmethod
def read_slice(dataset, position):
"""This reads all hyperplans crossing at the given position:
enforces many reads of different chunks,
Probably one of the most pathlogical use-case"""
assert dataset.ndim == len(position)
l = len(position)
res = []
noneslice = slice(None)
for i, w in enumerate(position):
where = [noneslice]*i + [w] + [noneslice]*(l - 1 - i)
res.append(dataset[tuple(where)])
return res
def time_sequential_reads(self, nb_read=64):
"Perform the reading of many orthogonal hyperplanes"
where = [[(i*(self.chunk+1+j))%self.size for j in range(self.ndim)] for i in range(nb_read)]
with h5py.File(self.filename, "r") as h:
ds = h[self.h5path]
t0 = time.time()
for i in where:
self.read_slice(ds, i)
t1 = time.time()
dt = t1 - t0
logger.info("Time for reading %sx%s slices: %.3fs fps: %.3f "%(self.ndim, nb_read, dt, self.ndim*nb_read/dt) +
"Uncompressed data read speed %.3f MB/s"%(self.ndim*nb_read*self.needed_memory/dt/1e6))
return dt
def time_threaded_reads(self, nb_read=64, nthreads=multiprocessing.cpu_count()):
"Perform the reading of many orthogonal hyperplanes, threaded version"
where = [[(i*(self.chunk+1+j))%self.size for j in range(self.ndim)] for i in range(nb_read)]
tasks = Queue()
results = Queue()
quitevent = Event()
pool = [Reader(tasks, results, quitevent) for i in range(nthreads)]
res = []
with h5py.File(self.filename, "r") as h:
ds = h[self.h5path]
t0 = time.time()
for i in where:
tasks.put((self.read_slice, ds, i))
for i in where:
a = results.get()
res.append(a[0])
results.task_done()
tasks.join()
results.join()
t1 = time.time()
# destroy the threads in the pool
quitevent.set()
for i in range(nthreads):
tasks.put(None)
dt = t1 - t0
logger.info("Time for %s-threaded reading %sx%s slices: %.3fs fps: %.3f "%(nthreads, self.ndim, nb_read, dt, self.ndim*nb_read/dt) +
"Uncompressed data read speed %.3f MB/s"%(self.ndim*nb_read*self.needed_memory/dt/1e6))
return dt
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
benchmark = SlicingBenchmark()
benchmark.setup()
benchmark.time_sequential_reads()
benchmark.time_threaded_reads()
benchmark.teardown()
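The Reader/Queue mechanism used by time_threaded_reads can be reduced to a self-contained sketch (the function and variable names here are illustrative, not part of h5py):

```python
import threading
import queue

def worker(tasks, results, quit_event):
    # Mirror of the Reader thread above: pull (fn, arg) tasks until
    # a None sentinel or the quit event arrives.
    while not quit_event.is_set():
        task = tasks.get()
        if task is None:
            tasks.task_done()
            break
        fn, arg = task
        try:
            results.put(fn(arg))
        finally:
            tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
quit_event = threading.Event()
threads = [threading.Thread(target=worker, args=(tasks, results, quit_event),
                            daemon=True)
           for _ in range(4)]
for t in threads:
    t.start()
for i in range(8):
    tasks.put((lambda x: x * x, i))
tasks.join()                 # wait until every task has been processed
quit_event.set()             # then shut the pool down
for _ in threads:
    tasks.put(None)          # sentinel unblocks any worker waiting in get()
out = sorted(results.get() for _ in range(8))
print(out)  # [0, 1, 4, 9, 16, 25, 36, 49]
```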
h5py-3.6.0/benchmarks/benchmarks.py

# Write the benchmarking functions here.
# See "Writing benchmarks" in the asv docs for more information.
import os.path as osp
import numpy as np
from tempfile import TemporaryDirectory
import h5py
class TimeSuite:
"""
An example benchmark that times the performance of many
small reads from a dataset in an HDF5 file.
"""
def setup(self):
self._td = TemporaryDirectory()
path = osp.join(self._td.name, 'test.h5')
with h5py.File(path, 'w') as f:
f['a'] = np.arange(100000)
self.f = h5py.File(path, 'r')
def teardown(self):
self.f.close()
self._td.cleanup()
def time_many_small_reads(self):
ds = self.f['a']
for i in range(10000):
arr = ds[i * 10:(i + 1) * 10]
class WritingTimeSuite:
"""Based on example in GitHub issue 492:
https://github.com/h5py/h5py/issues/492
"""
def setup(self):
self._td = TemporaryDirectory()
path = osp.join(self._td.name, 'test.h5')
self.f = h5py.File(path, 'w')
self.shape = shape = (128, 1024, 512)
self.f.create_dataset(
'a', shape=shape, dtype=np.float32, chunks=(1, shape[1], 64)
)
def teardown(self):
self.f.close()
self._td.cleanup()
def time_write_index_last_axis(self):
ds = self.f['a']
data = np.zeros(self.shape[:2])
for i in range(self.shape[2]):
ds[..., i] = data
def time_write_slice_last_axis(self):
ds = self.f['a']
data = np.zeros(self.shape[:2])
for i in range(self.shape[2]):
ds[..., i:i+1] = data[..., np.newaxis]
h5py-3.6.0/dev-install.sh

# Install h5py in a convenient way for frequent reinstallation as you work on it.
# This disables the mechanisms to find and install build dependencies, so you
# need to already have those (Cython, pkgconfig, numpy & optionally mpi4py) installed
# in the current environment.
set -e
H5PY_SETUP_REQUIRES=0 python3 setup.py build
python3 -m pip install . --no-build-isolation
h5py-3.6.0/docs/Makefile

# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS = -W
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make ' where is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/h5py.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/h5py.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/h5py"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/h5py"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
h5py-3.6.0/docs/build.rst
.. _install:
Installation
============
.. _install_recommends:
It is highly recommended that you use a pre-built version of h5py, either from a
Python Distribution, an OS-specific package manager, or a pre-built wheel from
PyPI.
Be aware, however, that most pre-built versions lack MPI support, and that they
are built against a specific version of HDF5. If you require MPI support, or
newer HDF5 features, you will need to build from source.
After installing h5py, you should run the tests to be sure that everything was
installed correctly. This can be done in the python interpreter via::

    import h5py
    h5py.run_tests()
.. _prebuilt_install:
Pre-built installation (recommended)
-----------------------------------------
Pre-built h5py can be installed via many Python Distributions, OS-specific
package managers, or via h5py wheels.
Python Distributions
....................
If you do not already use a Python Distribution, we recommend either
Anaconda/Miniconda or Enthought Canopy, both of which
support most versions of Microsoft Windows, OSX/MacOS, and a variety of Linux
Distributions. Installation of h5py can be done on the command line via::

    $ conda install h5py

for Anaconda/Miniconda, and via::

    $ enpkg h5py

for Canopy.
Wheels
......
If you have an existing Python installation (e.g. a python.org download,
or one that comes with your OS), then on Windows, MacOS/OSX, and
Linux on Intel computers, pre-built h5py wheels can be installed via pip from
PyPI::

    $ pip install h5py
Additionally, for Windows users, Chris Gohlke provides third-party wheels
which use Intel's MKL.
OS-Specific Package Managers
............................
On OSX/MacOS, h5py can be installed via Homebrew, Macports, or Fink.
The current state of h5py in various Linux Distributions can be seen at
https://pkgs.org/download/python-h5py, and can be installed via the package
manager.
As far as the h5py developers know, none of the Windows package managers (e.g.
Chocolatey, nuget) have h5py included; however, they may assist in installing
h5py's requirements when building from source.
.. _source_install:
Source installation
-------------------
To install h5py from source, you need:
* A supported Python version with development headers
* HDF5 1.8.4 or newer with development headers
* A C compiler
On Unix platforms, you also need ``pkg-config`` unless you explicitly specify
a path for HDF5 as described in :ref:`custom_install`.
There are notes below on installing HDF5, Python and a C compiler on different
platforms.
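The "HDF5 1.8.4 or newer" requirement above can be checked mechanically. As an
illustration only (this helper is hypothetical, not part of h5py's build
system), a dotted version string can be compared against the minimum like so:

```python
def meets_minimum(version, minimum=(1, 8, 4)):
    """Return True if a dotted version string is at least `minimum`."""
    parts = tuple(int(p) for p in version.split("."))
    return parts >= minimum

print(meets_minimum("1.8.4"))    # True: exactly the minimum
print(meets_minimum("1.12.1"))   # True: newer is fine
print(meets_minimum("1.6.10"))   # False: too old
```

Tuple comparison handles multi-digit components correctly (1.12 > 1.8), which
a plain string comparison would get wrong.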
Building h5py also requires several Python packages, but in most cases pip will
automatically install these in a build environment for you, so you don't need to
deal with them manually. See :ref:`dev_install` for a list.
The actual installation of h5py should be done via::

    $ pip install --no-binary=h5py h5py

or, from a tarball or git checkout::

    $ pip install -v .
.. _dev_install:
Development installation
........................
When modifying h5py, you often want to reinstall it quickly to test your changes.
To benefit from caching and use NumPy & Cython from your existing Python
environment, run::

    $ H5PY_SETUP_REQUIRES=0 python3 setup.py build
    $ python3 -m pip install . --no-build-isolation
For convenience, these commands are also in a script ``dev-install.sh`` in the
h5py git repository.
This skips setting up a build environment, so you should
have already installed Cython, NumPy, pkgconfig (a Python interface to
``pkg-config``) and mpi4py (if you want MPI integration - see :ref:`build_mpi`).
See ``setup.py`` for minimum versions.
This will normally rebuild Cython files automatically when they change, but
sometimes it may be necessary to force a full rebuild. The easiest way to
achieve this is to discard everything but the code committed to git. In the root
of your git checkout, run::

    $ git clean -xfd
Then build h5py again as above.
Source installation on OSX/MacOS
................................
HDF5 and Python are most likely in your package manager (e.g. Homebrew,
Macports, or Fink).
Be sure to install the development headers, as sometimes they are not included
in the main package.
XCode comes with a C compiler (clang), and your package manager will likely have
other C compilers for you to install.
Source installation on Linux/Other Unix
.......................................
HDF5 and Python are most likely in your package manager, and a C compiler
almost certainly is; there is usually some kind of metapackage that installs
the default build tools, e.g. ``build-essential``, which should be sufficient
for our needs. Make sure that you have the development headers, as they are
usually not installed by default. They can usually be found in ``python-dev``
or similar and ``libhdf5-dev`` or similar.
Source installation on Windows
..............................
Installing from source on Windows is a much more difficult prospect than on
other OSs: not only are you likely to need to compile HDF5 from source, but
everything must also be built with the correct version of Visual Studio.
Additional patches to HDF5 are also needed to get HDF5 and Python to work
together.
We recommend examining the AppVeyor build scripts, and using those to build and
install HDF5 and h5py.
Downstream packagers
....................
If you are building h5py for another packaging system - e.g. Linux distros or
packaging aimed at HPC users - you probably want to satisfy build dependencies
from your packaging system. To build without automatically fetching
dependencies, use a command like::

    $ H5PY_SETUP_REQUIRES=0 pip install . --no-deps --no-build-isolation
Depending on your packaging system, you may need to use the ``--prefix`` or
``--root`` options to control where files get installed.
h5py's Python packaging has build dependencies on the oldest compatible
versions of NumPy and mpi4py. You can build with newer versions of these,
but the resulting h5py binaries will only work with the NumPy & mpi4py versions
they were built with (or newer). Mpi4py is an optional dependency, only required
for :ref:`parallel` features.
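The "built against the oldest, runs with that version or newer" rule can be
illustrated with a small hypothetical check (this is not actual h5py or NumPy
code, just the comparison the rule implies):

```python
def abi_compatible(built_with, runtime):
    """A binary built against version `built_with` is expected to work
    with `runtime` if runtime is the same version or newer."""
    return tuple(runtime) >= tuple(built_with)

# e.g. an h5py binary built against NumPy 1.17.3:
print(abi_compatible((1, 17, 3), (1, 21, 0)))  # True: newer runtime is fine
print(abi_compatible((1, 17, 3), (1, 16, 0)))  # False: older runtime may break
```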
You should also look at the build options under :ref:`custom_install`.
.. _custom_install:
Custom installation
-------------------
.. important:: Remember that pip installs wheels by default.
   To perform a custom installation with pip, you should use::

       $ pip install --no-binary=h5py h5py

   or build from a git checkout or downloaded tarball to avoid getting
   a pre-built version of h5py.
You can specify build options for h5py as environment variables when you build
it from source::

    $ HDF5_DIR=/path/to/hdf5 pip install --no-binary=h5py h5py
    $ HDF5_VERSION=X.Y.Z pip install --no-binary=h5py h5py
    $ CC="mpicc" HDF5_MPI="ON" HDF5_DIR=/path/to/parallel-hdf5 pip install --no-binary=h5py h5py
The supported build options are:
- To specify where to find HDF5, use one of these options:

  - ``HDF5_LIBDIR`` and ``HDF5_INCLUDEDIR``: the directory containing the
    compiled HDF5 libraries and the directory containing the C header files,
    respectively.
  - ``HDF5_DIR``: a shortcut for common installations, a directory with ``lib``
    and ``include`` subdirectories containing compiled libraries and C headers.
  - ``HDF5_PKGCONFIG_NAME``: A name to query ``pkg-config`` for.

  If none of these options are specified, h5py will query ``pkg-config`` by
  default for ``hdf5``, or ``hdf5-openmpi`` if building with MPI support.
- ``HDF5_MPI=ON`` to build with MPI integration - see :ref:`build_mpi`.
- ``HDF5_VERSION`` to force a specified HDF5 version. In most cases, you don't
need to set this; the version number will be detected from the HDF5 library.
- ``H5PY_SYSTEM_LZF=1`` to build the bundled LZF compression filter
(see :ref:`dataset_compression`) against an external LZF library, rather than
using the bundled LZF C code.
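The precedence among the HDF5-location options above can be sketched as a small
Python function. This is an illustrative model of the rules as described, not
h5py's actual setup code:

```python
def resolve_hdf5_source(env):
    """Illustrative only: choose an HDF5 location strategy from the build
    options, following the precedence described above."""
    if "HDF5_LIBDIR" in env and "HDF5_INCLUDEDIR" in env:
        return ("dirs", env["HDF5_LIBDIR"], env["HDF5_INCLUDEDIR"])
    if "HDF5_DIR" in env:
        # Shortcut: lib/ and include/ subdirectories of one root
        root = env["HDF5_DIR"]
        return ("dirs", root + "/lib", root + "/include")
    if "HDF5_PKGCONFIG_NAME" in env:
        return ("pkg-config", env["HDF5_PKGCONFIG_NAME"])
    # Default: query pkg-config; MPI builds look for hdf5-openmpi
    name = "hdf5-openmpi" if env.get("HDF5_MPI") == "ON" else "hdf5"
    return ("pkg-config", name)

print(resolve_hdf5_source({"HDF5_DIR": "/opt/hdf5"}))
print(resolve_hdf5_source({"HDF5_MPI": "ON"}))
```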
.. _build_mpi:
Building against Parallel HDF5
------------------------------
If you just want to build with ``mpicc``, and don't care about using Parallel
HDF5 features in h5py itself::

    $ export CC=mpicc
    $ pip install --no-binary=h5py h5py
If you want access to the full Parallel HDF5 feature set in h5py
(:ref:`parallel`), you will further have to build in MPI mode. This can be done
by setting the ``HDF5_MPI`` environment variable::

    $ export CC=mpicc
    $ export HDF5_MPI="ON"
    $ pip install --no-binary=h5py h5py
You will need a shared-library build of Parallel HDF5 as well, i.e. built with
``./configure --enable-shared --enable-parallel``.
h5py-3.6.0/docs/conf.py
# -*- coding: utf-8 -*-
#
# h5py documentation build configuration file, created by
# sphinx-quickstart on Fri Jan 31 11:23:59 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.intersphinx',
'sphinx.ext.extlinks',
'sphinx.ext.mathjax',
]
intersphinx_mapping = {'low': ('https://api.h5py.org', None)}
extlinks = {
'issue': ('https://github.com/h5py/h5py/issues/%s', 'GH'),
'pr': ('https://github.com/h5py/h5py/pull/%s', 'PR '),
}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'h5py'
copyright = '2014, Andrew Collette and contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The full version, including alpha/beta/rc tags.
release = '3.6.0'
# The short X.Y version.
version = '.'.join(release.split('.')[:2])
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# " v documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'h5pydoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'h5py.tex', 'h5py Documentation',
'Andrew Collette and contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'h5py', 'h5py Documentation',
['Andrew Collette and contributors'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'h5py', 'h5py Documentation',
'Andrew Collette and contributors', 'h5py', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
h5py-3.6.0/docs/config.rst
Configuring h5py
================
Library configuration
---------------------
A few library options are available to change the behavior of the library.
You can get a reference to the global library configuration object via the
function :func:`h5py.get_config`. This object supports the following
attributes:
**complex_names**
Set to a 2-tuple of strings (real, imag) to control how complex numbers
are saved. The default is ('r','i').
**bool_names**
Booleans are saved as HDF5 enums. Set this to a 2-tuple of strings
(false, true) to control the names used in the enum. The default
is ("FALSE", "TRUE").
**track_order**
Whether to track dataset/group/attribute creation order. If
container creation order is tracked, its links and attributes
are iterated in ascending creation order (consistent with
``dict`` in Python 3.7+); otherwise in ascending alphanumeric
order. Global configuration value can be overridden for
particular container by specifying ``track_order`` argument to
:class:`h5py.File`, :meth:`h5py.Group.create_group`,
:meth:`h5py.Group.create_dataset`. The default is ``False``.
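The difference between the two iteration orders can be illustrated with plain
Python, no HDF5 involved (the names here are arbitrary examples):

```python
# Names in the order they were created in a container:
created = ["foo", "bar", "baz"]

# track_order=True: iterate in creation order (like dict in Python 3.7+)
print(created)          # ['foo', 'bar', 'baz']

# track_order=False (the default): ascending alphanumeric order
print(sorted(created))  # ['bar', 'baz', 'foo']
```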
IPython
-------
H5py ships with a custom ipython completer, which provides object introspection
and tab completion for h5py objects in an ipython session. For example, if a
file contains 3 groups, "foo", "bar", and "baz"::

    In [4]: f['b<TAB>
    bar   baz

    In [4]: f['f<TAB>
    # Completes to:
    In [4]: f['foo'

    In [4]: f['foo'].<TAB>
    f['foo'].attrs            f['foo'].items            f['foo'].ref
    f['foo'].copy             f['foo'].iteritems        f['foo'].require_dataset
    f['foo'].create_dataset   f['foo'].iterkeys         f['foo'].require_group
    f['foo'].create_group     f['foo'].itervalues       f['foo'].values
    f['foo'].file             f['foo'].keys             f['foo'].visit
    f['foo'].get              f['foo'].name             f['foo'].visititems
    f['foo'].id               f['foo'].parent
The easiest way to enable the custom completer is to do the following in an
IPython session::

    In [1]: import h5py
    In [2]: h5py.enable_ipython_completer()
It is also possible to configure IPython to enable the completer every time you
start a new session. For >=ipython-0.11, "h5py.ipy_completer" just needs to be
added to the list of extensions in your ipython config file, for example
:file:`~/.config/ipython/profile_default/ipython_config.py` (if this file does
not exist, you can create it by invoking ``ipython profile create``)::

    c = get_config()
    c.InteractiveShellApp.extensions = ['h5py.ipy_completer']
For