bottleneck-1.2.0/000077500000000000000000000000001300536544100136455ustar00rootroot00000000000000bottleneck-1.2.0/.gitignore000066400000000000000000000001001300536544100156240ustar00rootroot00000000000000*.pyc build dist MANIFEST *.so .tox .coverage *.egg-info .*.swp bottleneck-1.2.0/.travis.yml000066400000000000000000000024721300536544100157630ustar00rootroot00000000000000 language: generic sudo: false matrix: include: # test installing bottleneck tarball before installing numpy; # start this test first because it takes the longest - os: linux env: TEST_DEPS="nose" PYTHON_VERSION="2.7" PYTHON_ARCH="64" TEST_RUN="sdist" # flake8 - os: linux env: TEST_DEPS="flake8" PYTHON_VERSION="2.7" PYTHON_ARCH="64" TEST_RUN="style" # python 3.4 - os: linux env: TEST_DEPS="numpy==1.11.2 nose" PYTHON_VERSION="3.4" PYTHON_ARCH="64" TEST_RUN="sdist" # python 3.5 - os: linux env: TEST_DEPS="numpy==1.11.2 nose" PYTHON_VERSION="3.5" PYTHON_ARCH="64" # python 2.7 - os: osx env: TEST_DEPS="numpy==1.11.2 nose" PYTHON_VERSION="2.7" PYTHON_ARCH="64" TEST_RUN="sdist" # python 3.4 - os: osx env: TEST_DEPS="numpy==1.11.2 nose" PYTHON_VERSION="3.4" PYTHON_ARCH="64" # python 3.5 - os: osx env: TEST_DEPS="numpy==1.11.2 nose" PYTHON_VERSION="3.5" PYTHON_ARCH="64" before_install: - uname -a - source "tools/travis/conda_setup.sh" install: - source "tools/travis/conda_install.sh" script: - source "tools/travis/bn_setup.sh" notifications: email: on_success: never on_failure: always bottleneck-1.2.0/LICENSE000066400000000000000000000001071300536544100146500ustar00rootroot00000000000000The license file is one level down from this file: bottleneck/LICENSE. bottleneck-1.2.0/MANIFEST.in000066400000000000000000000005051300536544100154030ustar00rootroot00000000000000include README.rst RELEASE.rst include LICENSE bottleneck/LICENSE include Makefile tox.ini recursive-include bottleneck/src *.c *.h exclude bottleneck/src/reduce.c exclude bottleneck/src/move.c exclude bottleneck/src/nonreduce.c exclude bottleneck/src/nonreduce_axis.c recursive-include doc * recursive-exclude doc/build * bottleneck-1.2.0/Makefile000066400000000000000000000025211300536544100153050ustar00rootroot00000000000000# Bottleneck Makefile PYTHON=python srcdir := bottleneck/ help: @echo "Available tasks:" @echo "help --> This help page" @echo "all --> clean, build, flake8, test" @echo "build --> Build the Python C extensions" @echo "clean --> Remove all the build files for a fresh start" @echo "test --> Run unit tests" @echo "flake8 --> Check for pep8 errors" @echo "readme --> Update benchmark results in README.rst" @echo "bench --> Run performance benchmark" @echo "detail --> Detailed benchmarks for all functions" @echo "sdist --> Make source distribution" @echo "doc --> Build Sphinx manual" all: clean build test flake8 build: ${PYTHON} setup.py build_ext --inplace test: ${PYTHON} -c "import bottleneck;bottleneck.test()" flake8: flake8 --exclude=doc . readme: PYTHONPATH=`pwd`:PYTHONPATH ${PYTHON} tools/update_readme.py bench: ${PYTHON} -c "import bottleneck; bottleneck.bench()" detail: ${PYTHON} -c "import bottleneck; bottleneck.bench_detailed('all')" sdist: rm -f MANIFEST ${PYTHON} setup.py sdist git status # doc directory exists so use phony .PHONY: doc doc: clean build rm -rf build/sphinx ${PYTHON} setup.py build_sphinx clean: rm -rf build dist Bottleneck.egg-info find . 
-name \*.pyc -delete rm -rf ${srcdir}/*.html ${srcdir}/build rm -rf ${srcdir}/*.c rm -rf ${srcdir}/*.so bottleneck-1.2.0/README.rst000066400000000000000000000115151300536544100153370ustar00rootroot00000000000000.. image:: https://travis-ci.org/kwgoodman/bottleneck.svg?branch=master :target: https://travis-ci.org/kwgoodman/bottleneck .. image:: https://ci.appveyor.com/api/projects/status/github/kwgoodman/bottleneck?svg=true&passingText=passing&failingText=failing&pendingText=pending :target: https://ci.appveyor.com/project/kwgoodman/bottleneck ========== Bottleneck ========== Bottleneck is a collection of fast NumPy array functions written in C. Let's give it a try. Create a NumPy array:: >>> import numpy as np >>> a = np.array([1, 2, np.nan, 4, 5]) Find the nanmean:: >>> import bottleneck as bn >>> bn.nanmean(a) 3.0 Moving window mean:: >>> bn.move_mean(a, window=2, min_count=1) array([ 1. , 1.5, 2. , 4. , 4.5]) Benchmark ========= Bottleneck comes with a benchmark suite:: >>> bn.bench() Bottleneck performance benchmark Bottleneck 1.2.0dev; Numpy 1.11.2 Speed is NumPy time divided by Bottleneck time NaN means approx one-fifth NaNs; float64 and axis=-1 are used no NaN NaN no NaN NaN (100,) (1000,) (1000,1000)(1000,1000) nansum 58.3 16.6 2.3 5.1 nanmean 258.7 46.1 3.5 5.1 nanstd 238.4 42.9 2.8 5.0 nanvar 229.9 41.4 2.7 5.0 nanmin 44.6 12.9 0.8 0.9 nanmax 41.8 12.9 0.8 1.8 median 99.6 51.4 1.1 5.7 nanmedian 102.1 26.5 5.0 31.2 ss 27.4 6.4 1.6 1.6 nanargmin 72.6 24.6 2.3 3.4 nanargmax 70.1 29.2 2.4 4.6 anynan 22.1 49.9 0.5 114.6 allnan 43.3 48.4 115.8 66.7 rankdata 50.3 8.0 2.6 6.5 nanrankdata 52.5 8.1 2.9 6.8 partition 4.1 3.6 1.0 2.0 argpartition 2.7 2.2 1.1 1.5 replace 13.7 4.9 1.5 1.5 push 3231.6 7437.4 20.1 19.6 move_sum 4173.5 8955.4 194.7 374.8 move_mean 10265.5 18540.0 222.8 372.2 move_std 8910.9 12158.5 128.7 234.5 move_var 11969.4 18323.8 202.7 378.7 move_min 2164.6 3676.3 23.9 57.2 move_max 1995.0 4206.0 23.8 108.8 move_argmin 3380.5 5559.1 40.5 180.5 move_argmax 3386.5 7278.1 43.0 227.2 move_median 1762.3 1134.9 157.9 118.5 move_rank 1203.6 223.2 2.7 7.8 You can also run a detailed benchmark for a single function using, for example, the command:: >>> bn.bench_detailed("move_median", fraction_nan=0.3) Only arrays with data type (dtype) int32, int64, float32, and float64 are accelerated. All other dtypes result in calls to slower, unaccelerated functions. In the rare case of a byte-swapped input array (e.g. a big-endian array on a little-endian operating system) the function will not be accelerated regardless of dtype. Where ===== =================== ======================================================== download https://pypi.python.org/pypi/Bottleneck docs http://berkeleyanalytics.com/bottleneck code https://github.com/kwgoodman/bottleneck mailing list https://groups.google.com/group/bottle-neck =================== ======================================================== License ======= Bottleneck is distributed under a Simplified BSD license. See the LICENSE file for details. Install ======= Requirements: ======================== ==================================================== Bottleneck Python 2.7, 3.4, 3.5; NumPy 1.11.2 Compile gcc, clang, MinGW or MSVC Unit tests nose ======================== ==================================================== To install Bottleneck on GNU/Linux, Mac OS X, et al.:: $ sudo python setup.py install To install bottleneck on Windows, first install MinGW and add it to your system path. 
Then install Bottleneck with the commands:: python setup.py install --compiler=mingw32 Alternatively, you can use the Windows binaries created by Christoph Gohlke: http://www.lfd.uci.edu/~gohlke/pythonlibs/#bottleneck Unit tests ========== After you have installed Bottleneck, run the suite of unit tests:: >>> import bottleneck as bn >>> bn.test() Ran 169 tests in 57.205s OK bottleneck-1.2.0/RELEASE.rst000066400000000000000000000265331300536544100154700ustar00rootroot00000000000000 ============= Release Notes ============= These are the major changes made in each release. For details of the changes see the commit log at http://github.com/kwgoodman/bottleneck Bottleneck 1.2.0 ================ *Release date: in development, not yet released* This release is a complete rewrite of Bottleneck. **Port to C** - Bottleneck is now written in C - Cython is no longer a dependency - Source tarball size reduced by 80% - Build time reduced by 66% - Install size reduced by 45% **Redesign** - Besides porting to C, much of bottleneck has been redesigned to be simpler and faster. For example, bottleneck now uses its own N-dimensional array iterators, reducing function call overhead. **New features** - The new function bench_detailed runs a detailed performance benchmark on a single bottleneck function. - Bottleneck can be installed on systems that do not yet have NumPy installed. Previously that only worked on some systems. **Beware** - Functions partsort and argpartsort have been renamed to partition and argpartition to match NumPy. Additionally the meaning of the input arguments have changed: bn.partsort(a, n) is now equivalent to bn.partition(a, kth=n-1). Similarly for bn.argpartition. - The keyword for array input has been changed from `arr` to `a` in all functions. It now matches NumPy. **Thanks** - Moritz E. Beber: continuous integration with AppVeyor - Christoph Gohlke: Windows compatibility - Jennifer Olsen: comments and suggestions - A special thanks to the Cython developers. The quickest way to appreciate their work is to remove Cython from your project. It is not easy. Older versions ============== Release notes from past releases. Bottleneck 1.1.0 ---------------- *Release date: 2016-06-22* This release makes Bottleneck more robust, releases GIL, adds new functions. 
**More Robust** - move_median can now handle NaNs and `min_count` parameter - move_std is slower but numerically more stable - Bottleneck no longer crashes on byte-swapped input arrays **Faster** - All Bottleneck functions release the GIL - median is faster if the input array contains NaN - move_median is faster for input arrays that contain lots of NaNs - No speed penalty for median, nanmedian, nanargmin, nanargmax for Fortran ordered input arrays when axis is None - Function call overhead cut in half for reduction along all axes (axis=None) if the input array satisfies at least one of the following properties: 1d, C contiguous, F contiguous - Reduction along all axes (axis=None) is more than twice as fast for long, narrow input arrays such as a (1000000, 2) C contiguous array and a (2, 1000000) F contiguous array **New Functions** - move_var - move_argmin - move_argmax - move_rank - push **Beware** - median now returns NaN for a slice that contains one or more NaNs - Instead of using the distutils default, the '-O2' C compiler flag is forced - move_std output changed when mean is large compared to standard deviation - Fixed: Non-accelerated moving window functions used min_count incorrectly - move_median is a bit slower for float input arrays that do not contain NaN **Thanks** Alphabetically by last name - Alessandro Amici worked on setup.py - Pietro Battiston modernized bottleneck installation - Moritz E. Beber set up continuous integration with Travis CI - Jaime Frio improved the numerical stability of move_std - Christoph Gohlke revived Windows compatibility - Jennifer Olsen added NaN support to move_median Bottleneck 1.0.0 ---------------- *Release date: 2015-02-06* This release is a complete rewrite of Bottleneck. **Faster** - "python setup.py build" is 18.7 times faster - Function-call overhead cut in half---a big speed up for small input arrays - Arbitrary ndim input arrays accelerated; previously only 1d, 2d, and 3d - bn.nanrankdata is twice as fast for float input arrays - bn.move_max, bn.move_min are faster for int input arrays - No speed penalty for reducing along all axes when input is Fortran ordered **Smaller** - Compiled binaries 14.1 times smaller - Source tarball 4.7 times smaller - 9.8 times less C code - 4.3 times less Cython code - 3.7 times less Python code **Beware** - Requires numpy 1.9.1 - Single API, e.g.: bn.nansum instead of bn.nansum and nansum_2d_float64_axis0 - On 64-bit systems bn.nansum(int32) returns int32 instead of int64 - bn.nansum now returns 0 for all NaN slices (as does numpy 1.9.1) - Reducing over all axes returns, e.g., 6.0; previously np.float64(6.0) - bn.ss() now has default axis=None instead of axis=0 - bn.nn() is no longer in bottleneck **min_count** - Previous releases had moving window function pairs: move_sum, move_nansum - This release only has half of the pairs: move_sum - Instead a new input parameter, min_count, has been added - min_count=None same as old move_sum; min_count=1 same as old move_nansum - If # non-NaN values in window < min_count, then NaN assigned to the window - Exception: move_median does not take min_count as input **Bug Fixes** - Can now install bottleneck with pip even if numpy is not already installed - bn.move_max, bn.move_min now return float32 for float32 input Bottleneck 0.8.0 ---------------- *Release date: 2014-01-21* This version of Bottleneck requires NumPy 1.8. 
**Breaks from 0.7.0** - This version of Bottleneck requires NumPy 1.8 - nanargmin and nanargmax behave like the corresponding functions in NumPy 1.8 **Bug fixes** - nanargmax/nanargmin wrong for redundant max/min values in 1d int arrays Bottleneck 0.7.0 ---------------- *Release date: 2013-09-10* **Enhancements** - bn.rankdata() is twice as fast (with input a = np.random.rand(1000000)) - C files now included in github repo; cython not needed to try latest - C files are now generated with Cython 0.19.1 instead of 0.16 - Test bottleneck across multiple python/numpy versions using tox - Source tarball size cut in half **Bug fixes** - #50 move_std, move_nanstd return inappropriate NaNs (sqrt of negative #) - #52 `make test` fails on some computers - #57 scipy optional yet some unit tests depend on scipy - #49, #55 now works on Mac OS X 10.8 using clang compiler - #60 nanstd([1.0], ddof=1) and nanvar([1.0], ddof=1) crash Bottleneck 0.6.0 ---------------- *Release date: 2012-06-04* Thanks to Dougal Sutherland, Bottleneck now runs on Python 3.2. **New functions** - replace(arr, old, new), e.g, replace(arr, np.nan, 0) - nn(arr, arr0, axis) nearest neighbor and its index of 1d arr0 in 2d arr - anynan(arr, axis) faster alternative to np.isnan(arr).any(axis) - allnan(arr, axis) faster alternative to np.isnan(arr).all(axis) **Enhancements** - Python 3.2 support (may work on earlier versions of Python 3) - C files are now generated with Cython 0.16 instead of 0.14.1 - Upgrade numpydoc from 0.3.1 to 0.4 to support Sphinx 1.0.1 **Breaks from 0.5.0** - Support for Python 2.5 dropped - Default axis for benchmark suite is now axis=1 (was 0) **Bug fixes** - #31 Confusing error message in partsort and argpartsort - #32 Update path in MANIFEST.in - #35 Wrong output for very large (2**31) input arrays Bottleneck 0.5.0 ---------------- *Release date: 2011-06-13* The fifth release of bottleneck adds four new functions, comes in a single source distribution instead of separate 32 and 64 bit versions, and contains bug fixes. J. David Lee wrote the C-code implementation of the double heap moving window median. **New functions** - move_median(), moving window median - partsort(), partial sort - argpartsort() - ss(), sum of squares, faster version of scipy.stats.ss **Changes** - Single source distribution instead of separate 32 and 64 bit versions - nanmax and nanmin now follow Numpy 1.6 (not 1.5.1) when input is all NaN **Bug fixes** - #14 Support python 2.5 by importing `with` statement - #22 nanmedian wrong for particular ordering of NaN and non-NaN elements - #26 argpartsort, nanargmin, nanargmax returned wrong dtype on 64-bit Windows - #29 rankdata and nanrankdata crashed on 64-bit Windows Bottleneck 0.4.3 ---------------- *Release date: 2011-03-17* This is a bug fix release. **Bug fixes** - #11 median and nanmedian modified (partial sort) input array - #12 nanmedian wrong when odd number of elements with all but last a NaN **Enhancement** - Lazy import of SciPy (rarely used) speeds Bottleneck import 3x Bottleneck 0.4.2 ---------------- *Release date: 2011-03-08* This is a bug fix release. Same bug fixed in Bottleneck 0.4.1 for nanstd() was fixed for nanvar() in this release. Thanks again to Christoph Gohlke for finding the bug. Bottleneck 0.4.1 ---------------- *Release date: 2011-03-08* This is a bug fix release. 
The low-level functions nanstd_3d_int32_axis1 and nanstd_3d_int64_axis1, called by bottleneck.nanstd(), wrote beyond the memory owned by the output array if arr.shape[1] == 0 and arr.shape[0] > arr.shape[2], where arr is the input array. Thanks to Christoph Gohlke for finding an example to demonstrate the bug. Bottleneck 0.4.0 ---------------- *Release date: 2011-03-08* The fourth release of Bottleneck contains new functions and bug fixes. Separate source code distributions are now made for 32 bit and 64 bit operating systems. **New functions** - rankdata() - nanrankdata() **Enhancements** - Optionally specify the shapes of the arrays used in benchmark - Can specify which input arrays to fill with one-third NaNs in benchmark **Breaks from 0.3.0** - Removed group_nanmean() function - Bump dependency from NumPy 1.4.1 to NumPy 1.5.1 - C files are now generated with Cython 0.14.1 instead of 0.13 **Bug fixes** - #6 Some functions gave wrong output dtype for some input dtypes on 32 bit OS - #7 Some functions choked on size zero input arrays - #8 Segmentation fault with Cython 0.14.1 (but not 0.13) Bottleneck 0.3.0 ---------------- *Release date: 2011-01-19* The third release of Bottleneck is twice as fast for small input arrays and contains 10 new functions. **Faster** - All functions are faster (less overhead in selector functions) **New functions** - nansum() - move_sum() - move_nansum() - move_mean() - move_std() - move_nanstd() - move_min() - move_nanmin() - move_max() - move_nanmax() **Enhancements** - You can now specify the dtype and axis to use in the benchmark timings - Improved documentation and more unit tests **Breaks from 0.2.0** - Moving window functions now default to axis=-1 instead of axis=0 - Low-level moving window selector functions no longer take window as input **Bug fix** - int input array resulted in call to slow, non-cython version of move_nanmean Bottleneck 0.2.0 ---------------- *Release date: 2010-12-27* The second release of Bottleneck is faster, contains more functions, and supports more dtypes. **Faster** - All functions faster (less overhead) when output is not a scalar - Faster nanmean() for 2d, 3d arrays containing NaNs when axis is not None **New functions** - nanargmin() - nanargmax() - nanmedian() **Enhancements** - Added support for float32 - Fallback to slower, non-Cython functions for unaccelerated ndim/dtype - Scipy is no longer a dependency - Added support for older versions of NumPy (1.4.1) - All functions are now templated for dtype and axis - Added a sandbox for prototyping of new Bottleneck functions - Rewrote benchmarking code Bottleneck 0.1.0 ---------------- *Release date: 2010-12-10* Initial release. The three categories of Bottleneck functions: - Faster replacement for NumPy and SciPy functions - Moving window functions - Group functions that bin calculations by like-labeled elements bottleneck-1.2.0/appveyor.yml000066400000000000000000000030171300536544100162360ustar00rootroot00000000000000shallow_clone: true # download the zip instead of the whole history environment: global: DEPS: "numpy=1.11.2 nose" CONDA_VENV: "testy" # SDK v7.0 MSVC Express 2008's SetEnv.cmd script will fail if the # /E:ON and /V:ON options are not enabled in the batch script # interpreter. 
See: http://stackoverflow.com/a/13751649/163740 CMD_IN_ENV: "cmd /E:ON /V:ON /C .\\tools\\appveyor\\windows_sdk.cmd" matrix: - PYTHON_VERSION: "2.7" PYTHON_ARCH: "32" platform: x86 CONDA_HOME: "C:\\Miniconda" - PYTHON_VERSION: "2.7" PYTHON_ARCH: "64" platform: x64 CONDA_HOME: "C:\\Miniconda-x64" - PYTHON_VERSION: "3.4" PYTHON_ARCH: "32" platform: x86 CONDA_HOME: "C:\\Miniconda3" - PYTHON_VERSION: "3.4" PYTHON_ARCH: "64" platform: x64 CONDA_HOME: "C:\\Miniconda3-x64" - PYTHON_VERSION: "3.5" PYTHON_ARCH: "32" platform: x86 CONDA_HOME: "C:\\Miniconda3" - PYTHON_VERSION: "3.5" PYTHON_ARCH: "64" platform: x64 CONDA_HOME: "C:\\Miniconda3-x64" install: - "SET PATH=%CONDA_HOME%;%CONDA_HOME%\\Scripts;%PATH%" # configures the Miniconda environment (Py2/Py3, 32/64 bit) - "python -u tools\\appveyor\\conda_setup.py" build: false test_script: - "activate %CONDA_VENV%" - "%CMD_IN_ENV% pip install --verbose ." - "python tools\\test-installed-bottleneck.py" bottleneck-1.2.0/bottleneck/000077500000000000000000000000001300536544100157775ustar00rootroot00000000000000bottleneck-1.2.0/bottleneck/.gitignore000066400000000000000000000000611300536544100177640ustar00rootroot00000000000000/reduce.* /nonreduce.* /nonreduce_axis.* /move.* bottleneck-1.2.0/bottleneck/LICENSE000066400000000000000000000523631300536544100170130ustar00rootroot00000000000000======= License ======= Bottleneck is distributed under a Simplified BSD license. Parts of NumPy, SciPy and numpydoc, which all have BSD licenses, are included in Bottleneck. Bottleneck license ================== Copyright (c) 2010-2016 Keith Goodman All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the NumPy Developers nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. SciPy license ------------- Copyright (c) 2001, 2002 Enthought, Inc. All rights reserved. Copyright (c) 2003-2014 SciPy Developers. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: a. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. b. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. c. Neither the name of the Enthought nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. numpydoc license ---------------- The numpydoc license is in bottleneck/doc/sphinxext/LICENSE.txt setuptools license ------------------ setuptools is dual-licensed under the Python Software Foundation (PSF) license and the Zope Public License (ZPL). Python Software Foundation License ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A. HISTORY OF THE SOFTWARE """""""""""""""""""""""""" Python was created in the early 1990s by Guido van Rossum at Stichting Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands as a successor of a language called ABC. Guido remains Python's principal author, although it includes many contributions from others. 
In 1995, Guido continued his work on Python at the Corporation for National Research Initiatives (CNRI, see http://www.cnri.reston.va.us) in Reston, Virginia where he released several versions of the software. In May 2000, Guido and the Python core development team moved to BeOpen.com to form the BeOpen PythonLabs team. In October of the same year, the PythonLabs team moved to Digital Creations (now Zope Corporation, see http://www.zope.com). In 2001, the Python Software Foundation (PSF, see http://www.python.org/psf/) was formed, a non-profit organization created specifically to own Python-related Intellectual Property. Zope Corporation is a sponsoring member of the PSF. All Python releases are Open Source (see http://www.opensource.org for the Open Source Definition). Historically, most, but not all, Python releases have also been GPL-compatible; the table below summarizes the various releases. Release Derived Year Owner GPL- from compatible? (1) 0.9.0 thru 1.2 1991-1995 CWI yes 1.3 thru 1.5.2 1.2 1995-1999 CNRI yes 1.6 1.5.2 2000 CNRI no 2.0 1.6 2000 BeOpen.com no 1.6.1 1.6 2001 CNRI yes (2) 2.1 2.0+1.6.1 2001 PSF no 2.0.1 2.0+1.6.1 2001 PSF yes 2.1.1 2.1+2.0.1 2001 PSF yes 2.2 2.1.1 2001 PSF yes 2.1.2 2.1.1 2002 PSF yes 2.1.3 2.1.2 2002 PSF yes 2.2.1 2.2 2002 PSF yes 2.2.2 2.2.1 2002 PSF yes 2.2.3 2.2.2 2003 PSF yes 2.3 2.2.2 2002-2003 PSF yes 2.3.1 2.3 2002-2003 PSF yes 2.3.2 2.3.1 2002-2003 PSF yes 2.3.3 2.3.2 2002-2003 PSF yes 2.3.4 2.3.3 2004 PSF yes 2.3.5 2.3.4 2005 PSF yes 2.4 2.3 2004 PSF yes 2.4.1 2.4 2005 PSF yes 2.4.2 2.4.1 2005 PSF yes 2.4.3 2.4.2 2006 PSF yes 2.4.4 2.4.3 2006 PSF yes 2.5 2.4 2006 PSF yes 2.5.1 2.5 2007 PSF yes 2.5.2 2.5.1 2008 PSF yes 2.5.3 2.5.2 2008 PSF yes 2.6 2.5 2008 PSF yes 2.6.1 2.6 2008 PSF yes 2.6.2 2.6.1 2009 PSF yes 2.6.3 2.6.2 2009 PSF yes 2.6.4 2.6.3 2009 PSF yes 2.6.5 2.6.4 2010 PSF yes Footnotes: (1) GPL-compatible doesn't mean that we're distributing Python under the GPL. All Python licenses, unlike the GPL, let you distribute a modified version without making your changes open source. The GPL-compatible licenses make it possible to combine Python with other software that is released under the GPL; the others don't. (2) According to Richard Stallman, 1.6.1 is not GPL-compatible, because its license has a choice of law clause. According to CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1 is "not incompatible" with the GPL. Thanks to the many outside volunteers who have worked under Guido's direction to make these releases possible. B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 ............................................ 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using this software ("Python") in source or binary form and its associated documentation. 2. 
Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 Python Software Foundation; All Rights Reserved" are retained in Python alone or in any derivative version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python. 4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By copying, installing or otherwise using Python, Licensee agrees to be bound by the terms and conditions of this License Agreement. BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0 ........................................... BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1 1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the Individual or Organization ("Licensee") accessing and otherwise using this software in source or binary form and its associated documentation ("the Software"). 2. Subject to the terms and conditions of this BeOpen Python License Agreement, BeOpen hereby grants Licensee a non-exclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use the Software alone or in any derivative version, provided, however, that the BeOpen Python License is retained in the Software, alone or in any derivative version prepared by Licensee. 3. BeOpen is making the Software available to Licensee on an "AS IS" basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 4. 
BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 5. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 6. This License Agreement shall be governed by and interpreted in all respects by the law of the State of California, excluding conflict of law provisions. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between BeOpen and Licensee. This License Agreement does not grant permission to use BeOpen trademarks or trade names in a trademark sense to endorse or promote products or services of Licensee, or any third party. As an exception, the "BeOpen Python" logos available at http://www.pythonlabs.com/logos.html may be used according to the permissions granted on that web page. 7. By copying, installing or otherwise using the software, Licensee agrees to be bound by the terms and conditions of this License Agreement. CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1 ....................................... 1. This LICENSE AGREEMENT is between the Corporation for National Research Initiatives, having an office at 1895 Preston White Drive, Reston, VA 20191 ("CNRI"), and the Individual or Organization ("Licensee") accessing and otherwise using Python 1.6.1 software in source or binary form and its associated documentation. 2. Subject to the terms and conditions of this License Agreement, CNRI hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python 1.6.1 alone or in any derivative version, provided, however, that CNRI's License Agreement and CNRI's notice of copyright, i.e., "Copyright (c) 1995-2001 Corporation for National Research Initiatives; All Rights Reserved" are retained in Python 1.6.1 alone or in any derivative version prepared by Licensee. Alternately, in lieu of CNRI's License Agreement, Licensee may substitute the following text (omitting the quotes): "Python 1.6.1 is made available subject to the terms and conditions in CNRI's License Agreement. This Agreement together with Python 1.6.1 may be located on the Internet using the following unique, persistent identifier (known as a handle): 1895.22/1013. This Agreement may also be obtained from a proxy server on the Internet using the following URL: http://hdl.handle.net/1895.22/1013". 3. In the event Licensee prepares a derivative work that is based on or incorporates Python 1.6.1 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python 1.6.1. 4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS" basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. 
CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. This License Agreement shall be governed by the federal intellectual property law of the United States, including without limitation the federal copyright law, and, to the extent such U.S. federal law does not apply, by the law of the Commonwealth of Virginia, excluding Virginia's conflict of law provisions. Notwithstanding the foregoing, with regard to derivative works based on Python 1.6.1 that incorporate non-separable material that was previously distributed under the GNU General Public License (GPL), the law of the Commonwealth of Virginia shall govern this License Agreement only as to issues arising under or with respect to Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between CNRI and Licensee. This License Agreement does not grant permission to use CNRI trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By clicking on the "ACCEPT" button where indicated, or by copying, installing or otherwise using Python 1.6.1, Licensee agrees to be bound by the terms and conditions of this License Agreement. ACCEPT CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2 .................................................. Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam, The Netherlands. All rights reserved. Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Stichting Mathematisch Centrum or CWI not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. Zope Public License ^^^^^^^^^^^^^^^^^^^ This software is Copyright (c) Zope Corporation (tm) and Contributors. All rights reserved. This license has been certified as open source. It has also been designated as GPL compatible by the Free Software Foundation (FSF). Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions in source code must retain the above copyright notice, this list of conditions, and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. 
The name Zope Corporation (tm) must not be used to endorse or promote products derived from this software without prior written permission from Zope Corporation. 4. The right to distribute this software or to use it for any purpose does not give you the right to use Servicemarks (sm) or Trademarks (tm) of Zope Corporation. Use of them is covered in a separate agreement (see http://www.zope.com/Marks). 5. If any files are modified, you must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. Disclaimer THIS SOFTWARE IS PROVIDED BY ZOPE CORPORATION ''AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL ZOPE CORPORATION OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. This software consists of contributions made by Zope Corporation and many individuals on behalf of Zope Corporation. Specific attributions are listed in the accompanying credits file. bottleneck-1.2.0/bottleneck/__init__.py000066400000000000000000000024521300536544100201130ustar00rootroot00000000000000# flake8: noqa # If you bork the build (e.g. by messing around with the templates), # you still want to be able to import Bottleneck so that you can # rebuild using the templates. So try to import the compiled Bottleneck # functions to the top level, but move on if not successful. try: from .reduce import (nansum, nanmean, nanstd, nanvar, nanmin, nanmax, median, nanmedian, ss, nanargmin, nanargmax, anynan, allnan) except: pass try: from .nonreduce import replace except: pass try: from .nonreduce_axis import (partition, argpartition, rankdata, nanrankdata, push) except: pass try: from .move import (move_sum, move_mean, move_std, move_var, move_min, move_max, move_argmin, move_argmax, move_median, move_rank) except: pass try: from . 
import slow from bottleneck.version import __version__ from bottleneck.benchmark.bench import bench from bottleneck.benchmark.bench_detailed import bench_detailed from bottleneck.tests.util import get_functions except: pass try: from numpy.testing import Tester test = Tester().test del Tester except (ImportError, ValueError): print("No Bottleneck unit testing available.") bottleneck-1.2.0/bottleneck/benchmark/000077500000000000000000000000001300536544100177315ustar00rootroot00000000000000bottleneck-1.2.0/bottleneck/benchmark/__init__.py000066400000000000000000000000001300536544100220300ustar00rootroot00000000000000bottleneck-1.2.0/bottleneck/benchmark/autotimeit.py000066400000000000000000000007671300536544100225010ustar00rootroot00000000000000 import timeit def autotimeit(stmt, setup='pass', repeat=3, mintime=0.2): timer = timeit.Timer(stmt, setup) number, time1 = autoscaler(timer, mintime) time2 = timer.repeat(repeat=repeat-1, number=number) return min(time2 + [time1]) / number def autoscaler(timer, mintime): number = 1 for i in range(12): time = timer.timeit(number) if time > mintime: return number, time number *= 10 raise RuntimeError('function is too fast to test') bottleneck-1.2.0/bottleneck/benchmark/bench.py000066400000000000000000000161101300536544100213610ustar00rootroot00000000000000 import numpy as np import bottleneck as bn from .autotimeit import autotimeit __all__ = ['bench'] def bench(dtype='float64', axis=-1, shapes=[(100,), (1000,), (1000, 1000), (1000, 1000)], nans=[False, True, False, True], order='C', functions=None): """ Bottleneck benchmark. Parameters ---------- dtype : str, optional Data type string such as 'float64', which is the default. axis : int, optional Axis along which to perform the calculations that are being benchmarked. The default is the last axis (axis=-1). shapes : list, optional A list of tuple shapes of input arrays to use in the benchmark. nans : list, optional A list of the bools (True or False), one for each tuple in the `shapes` list, that tells whether the input arrays should be randomly filled with one-fifth NaNs. order : {'C', 'F'}, optional Whether to store multidimensional data in C- or Fortran-contiguous (row- or column-wise) order in memory. functions : {list, None}, optional A list of strings specifying which functions to include in the benchmark. By default (None) all functions are included in the benchmark. Returns ------- A benchmark report is printed to stdout. 
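Examples
--------
Run the full benchmark with the default settings::

    >>> bn.bench()

The keyword values below are illustrative; any values accepted by the
parameters described above will work. For example, to benchmark only
two functions on float32 input::

    >>> bn.bench(dtype='float32', functions=['nansum', 'move_median'])
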
""" if len(shapes) != len(nans): raise ValueError("`shapes` and `nans` must have the same length") dtype = str(dtype) axis = str(axis) tab = ' ' # Header print('Bottleneck performance benchmark') print("%sBottleneck %s; Numpy %s" % (tab, bn.__version__, np.__version__)) print("%sSpeed is NumPy time divided by Bottleneck time" % tab) tup = (tab, dtype, axis) print("%sNaN means approx one-fifth NaNs; %s and axis=%s are used" % tup) print('') header = [" "*15] for nan in nans: if nan: header.append("NaN".center(11)) else: header.append("no NaN".center(11)) print("".join(header)) header = ["".join(str(shape).split(" ")).center(11) for shape in shapes] header = [" "*16] + header print("".join(header)) suite = benchsuite(shapes, dtype, axis, nans, order, functions) for test in suite: name = test["name"].ljust(12) fmt = tab + name + "%7.1f" + "%11.1f"*(len(shapes) - 1) speed = timer(test['statements'], test['setups']) print(fmt % tuple(speed)) def timer(statements, setups): speed = [] if len(statements) != 2: raise ValueError("Two statements needed.") for setup in setups: with np.errstate(invalid='ignore'): t0 = autotimeit(statements[0], setup) t1 = autotimeit(statements[1], setup) speed.append(t1 / t0) return speed def getarray(shape, dtype, nans=False, order='C'): a = np.arange(np.prod(shape), dtype=dtype) if nans and issubclass(a.dtype.type, np.inexact): a[::5] = np.nan else: rs = np.random.RandomState(shape) rs.shuffle(a) return np.array(a.reshape(*shape), order=order) def benchsuite(shapes, dtype, axis, nans, order, functions): suite = [] def getsetups(setup, shapes, nans, order): template = """ from bottleneck.benchmark.bench import getarray a = getarray(%s, 'DTYPE', %s, '%s') %s""" setups = [] for shape, nan in zip(shapes, nans): setups.append(template % (str(shape), str(nan), order, setup)) return setups # non-moving window functions funcs = bn.get_functions("reduce", as_string=True) funcs += ['rankdata', 'nanrankdata'] for func in funcs: if functions is not None and func not in functions: continue run = {} run['name'] = func run['statements'] = ["bn_func(a, AXIS)", "sl_func(a, AXIS)"] setup = """ from bottleneck import %s as bn_func try: from numpy import %s as sl_func except ImportError: from bottleneck.slow import %s as sl_func if "%s" == "median": from bottleneck.slow import median as sl_func """ % (func, func, func, func) run['setups'] = getsetups(setup, shapes, nans, order) suite.append(run) # partition, argpartition funcs = ['partition', 'argpartition'] for func in funcs: if functions is not None and func not in functions: continue run = {} run['name'] = func run['statements'] = ["bn_func(a, n, AXIS)", "sl_func(a, n, AXIS)"] setup = """ from bottleneck import %s as bn_func from bottleneck.slow import %s as sl_func if AXIS is None: n = a.size else: n = a.shape[AXIS] - 1 n = max(n / 2, 0) """ % (func, func) run['setups'] = getsetups(setup, shapes, nans, order) suite.append(run) # replace, push funcs = ['replace', 'push'] for func in funcs: if functions is not None and func not in functions: continue run = {} run['name'] = func if func == 'replace': run['statements'] = ["bn_func(a, nan, 0)", "slow_func(a, nan, 0)"] elif func == 'push': run['statements'] = ["bn_func(a, 5, AXIS)", "slow_func(a, 5, AXIS)"] else: raise ValueError('Unknow function name') setup = """ from numpy import nan from bottleneck import %s as bn_func from bottleneck.slow import %s as slow_func """ % (func, func) run['setups'] = getsetups(setup, shapes, nans, order) suite.append(run) # moving window functions funcs = 
bn.get_functions('move', as_string=True) for func in funcs: if functions is not None and func not in functions: continue run = {} run['name'] = func run['statements'] = ["bn_func(a, w, 1, AXIS)", "sw_func(a, w, 1, AXIS)"] setup = """ from bottleneck.slow.move import %s as sw_func from bottleneck import %s as bn_func w = a.shape[AXIS] // 5 """ % (func, func) run['setups'] = getsetups(setup, shapes, nans, order) if axis != 'None': suite.append(run) # Strip leading spaces from setup code for i, run in enumerate(suite): for j in range(len(run['setups'])): t = run['setups'][j] t = '\n'.join([z.strip() for z in t.split('\n')]) suite[i]['setups'][j] = t # Set dtype and axis in setups for i, run in enumerate(suite): for j in range(len(run['setups'])): t = run['setups'][j] t = t.replace('DTYPE', dtype) t = t.replace('AXIS', axis) suite[i]['setups'][j] = t # Set dtype and axis in statements for i, run in enumerate(suite): for j in range(2): t = run['statements'][j] t = t.replace('DTYPE', dtype) t = t.replace('AXIS', axis) suite[i]['statements'][j] = t return suite bottleneck-1.2.0/bottleneck/benchmark/bench_detailed.py000066400000000000000000000146221300536544100232220ustar00rootroot00000000000000 import numpy as np import bottleneck as bn from .autotimeit import autotimeit __all__ = ['bench_detailed'] def bench_detailed(function='nansum', fraction_nan=0.0): """ Benchmark a single function in detail or, optionally, all functions. Parameters ---------- function : str, optional Name of function, as a string, to benchmark. Default ('nansum') is to benchmark bn.nansum. If `function` is 'all' then detailed benchmarks are run on all bottleneck functions. fraction_nan : float, optional Fraction of array elements that should, on average, be NaN. The default (0.0) is not to set any elements to NaN. Returns ------- A benchmark report is printed to stdout. 
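Examples
--------
Benchmark a single function in detail; this is the same invocation
shown in the README::

    >>> bn.bench_detailed("move_median", fraction_nan=0.3)

Or run the detailed benchmark on every Bottleneck function::

    >>> bn.bench_detailed("all")
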
""" if function == 'all': # benchmark all bottleneck functions funcs = bn.get_functions('all', as_string=True) funcs.sort() for func in funcs: bench_detailed(func, fraction_nan) if fraction_nan < 0 or fraction_nan > 1: raise ValueError("`fraction_nan` must be between 0 and 1, inclusive") tab = ' ' # Header print('%s benchmark' % function) print("%sBottleneck %s; Numpy %s" % (tab, bn.__version__, np.__version__)) print("%sSpeed is NumPy time divided by Bottleneck time" % tab) if fraction_nan == 0: print("%sNone of the array elements are NaN" % tab) else: print("%s%.1f%% of the array elements are NaN (on average)" % (tab, fraction_nan * 100)) print("") print(" Speed Call Array") suite = benchsuite(function, fraction_nan) for test in suite: name = test["name"] speed = timer(test['statements'], test['setup'], test['repeat']) print("%8.1f %s %s" % (speed, name[0].ljust(27), name[1])) def timer(statements, setup, repeat): if len(statements) != 2: raise ValueError("Two statements needed.") with np.errstate(invalid='ignore'): t0 = autotimeit(statements[0], setup, repeat=repeat) t1 = autotimeit(statements[1], setup, repeat=repeat) speed = t1 / t0 return speed def benchsuite(function, fraction_nan): # setup is called before each run of each function setup = """ from bottleneck import %s as bn_fn try: from numpy import %s as sl_fn except ImportError: from bottleneck.slow import %s as sl_fn # avoid all-nan slice warnings from np.median and np.nanmedian if "%s" == "median": from bottleneck.slow import median as sl_fn if "%s" == "nanmedian": from bottleneck.slow import nanmedian as sl_fn from numpy import array, nan from numpy.random import RandomState rand = RandomState(123).rand a = %s if %s != 0: a[a < %s] = nan """ setup = '\n'.join([s.strip() for s in setup.split('\n')]) # what kind of function signature do we need to use? 
if function in bn.get_functions('reduce', as_string=True): index = 0 elif function in ['rankdata', 'nanrankdata']: index = 0 elif function in bn.get_functions('move', as_string=True): index = 1 elif function in ['partition', 'argpartition', 'push']: index = 2 elif function == 'replace': index = 3 else: raise ValueError("`function` (%s) not recognized" % function) # create benchmark suite instructions = get_instructions() f = function suite = [] for instruction in instructions: signature = instruction[index + 1] if signature is None: continue array = instruction[0] repeat = instruction[-1] run = {} run['name'] = [f + signature, array] run['statements'] = ["bn_fn" + signature, "sl_fn" + signature] run['setup'] = setup % (f, f, f, f, f, array, fraction_nan, fraction_nan) run['repeat'] = repeat suite.append(run) return suite def get_instructions(): instructions = [ # 1d input array ("rand(1)", "(a)", # reduce + (nan)rankdata "(a, 1)", # move "(a, 0)", # (arg)partition "(a, np.nan, 0)", # replace 10), ("rand(10)", "(a)", "(a, 2)", "(a, 2)", "(a, np.nan, 0)", 10), ("rand(100)", "(a)", "(a, 20)", "(a, 20)", "(a, np.nan, 0)", 6), ("rand(1000)", "(a)", "(a, 200)", "(a, 200)", "(a, np.nan, 0)", 3), ("rand(1000000)", "(a)", "(a, 200)", "(a, 200)", "(a, np.nan, 0)", 2), # 2d input array ("rand(10, 10)", "(a)", "(a, 2)", "(a, 2)", "(a, np.nan, 0)", 6), ("rand(100, 100)", "(a)", "(a, 20)", "(a, 20)", "(a, np.nan, 0)", 3), ("rand(1000, 1000)", "(a)", "(a, 200)", "(a, 200)", "(a, np.nan, 0)", 2), ("rand(10, 10)", "(a, 1)", None, None, None, 6), ("rand(100, 100)", "(a, 1)", None, None, None, 3), ("rand(1000, 1000)", "(a, 1)", None, None, None, 2), ("rand(100000, 2)", "(a, 1)", "(a, 1)", "(a, 1)", None, 2), ("rand(10, 10)", "(a, 0)", None, None, None, 6), ("rand(100, 100)", "(a, 0)", "(a, 20, axis=0)", None, None, 3), ("rand(1000, 1000)", "(a, 0)", "(a, 200, axis=0)", None, None, 2), # 3d input array ("rand(100, 100, 100)", "(a, 0)", "(a, 20, axis=0)", "(a, 20, axis=0)", None, 2), ("rand(100, 100, 100)", "(a, 1)", "(a, 20, axis=1)", "(a, 20, axis=1)", None, 2), ("rand(100, 100, 100)", "(a, 2)", "(a, 20, axis=2)", "(a, 20, axis=2)", "(a, np.nan, 0)", 2), # 0d input array ("array(1.0)", "(a)", None, None, "(a, 0, 2)", 10), ] return instructions bottleneck-1.2.0/bottleneck/slow/000077500000000000000000000000001300536544100167635ustar00rootroot00000000000000bottleneck-1.2.0/bottleneck/slow/__init__.py000066400000000000000000000002551300536544100210760ustar00rootroot00000000000000# flake8: noqa from bottleneck.slow.reduce import * from bottleneck.slow.nonreduce import * from bottleneck.slow.nonreduce_axis import * from bottleneck.slow.move import * bottleneck-1.2.0/bottleneck/slow/move.py000066400000000000000000000166431300536544100203150ustar00rootroot00000000000000"Alternative methods of calculating moving window statistics." 
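# Note: the functions in this module are the pure NumPy/Python fallbacks
# that Bottleneck calls for unaccelerated input dtypes (anything other
# than int32, int64, float32, and float64). They are intended to give
# the same results as the accelerated C versions, just more slowly.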
import warnings import numpy as np __all__ = ['move_sum', 'move_mean', 'move_std', 'move_var', 'move_min', 'move_max', 'move_argmin', 'move_argmax', 'move_median', 'move_rank'] def move_sum(a, window, min_count=None, axis=-1): "Slow move_sum for unaccelerated dtype" return move_func(np.nansum, a, window, min_count, axis=axis) def move_mean(a, window, min_count=None, axis=-1): "Slow move_mean for unaccelerated dtype" return move_func(np.nanmean, a, window, min_count, axis=axis) def move_std(a, window, min_count=None, axis=-1, ddof=0): "Slow move_std for unaccelerated dtype" return move_func(np.nanstd, a, window, min_count, axis=axis, ddof=ddof) def move_var(a, window, min_count=None, axis=-1, ddof=0): "Slow move_var for unaccelerated dtype" return move_func(np.nanvar, a, window, min_count, axis=axis, ddof=ddof) def move_min(a, window, min_count=None, axis=-1): "Slow move_min for unaccelerated dtype" return move_func(np.nanmin, a, window, min_count, axis=axis) def move_max(a, window, min_count=None, axis=-1): "Slow move_max for unaccelerated dtype" return move_func(np.nanmax, a, window, min_count, axis=axis) def move_argmin(a, window, min_count=None, axis=-1): "Slow move_argmin for unaccelerated dtype" def argmin(a, axis): a = np.array(a, copy=False) flip = [slice(None)] * a.ndim flip[axis] = slice(None, None, -1) a = a[flip] # if tie, pick index of rightmost tie try: idx = np.nanargmin(a, axis=axis) except ValueError: # an all nan slice encountered a = a.copy() mask = np.isnan(a) np.copyto(a, np.inf, where=mask) idx = np.argmin(a, axis=axis).astype(np.float64) if idx.ndim == 0: idx = np.nan else: mask = np.all(mask, axis=axis) idx[mask] = np.nan return idx return move_func(argmin, a, window, min_count, axis=axis) def move_argmax(a, window, min_count=None, axis=-1): "Slow move_argmax for unaccelerated dtype" def argmax(a, axis): a = np.array(a, copy=False) flip = [slice(None)] * a.ndim flip[axis] = slice(None, None, -1) a = a[flip] # if tie, pick index of rightmost tie try: idx = np.nanargmax(a, axis=axis) except ValueError: # an all nan slice encountered a = a.copy() mask = np.isnan(a) np.copyto(a, -np.inf, where=mask) idx = np.argmax(a, axis=axis).astype(np.float64) if idx.ndim == 0: idx = np.nan else: mask = np.all(mask, axis=axis) idx[mask] = np.nan return idx return move_func(argmax, a, window, min_count, axis=axis) def move_median(a, window, min_count=None, axis=-1): "Slow move_median for unaccelerated dtype" return move_func(np.nanmedian, a, window, min_count, axis=axis) def move_rank(a, window, min_count=None, axis=-1): "Slow move_rank for unaccelerated dtype" return move_func(lastrank, a, window, min_count, axis=axis) # magic utility functions --------------------------------------------------- def move_func(func, a, window, min_count=None, axis=-1, **kwargs): "Generic moving window function implemented with a python loop." 
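    # How it works: for each output position i along `axis`, slice out
    # the trailing window a[i + 1 - win : i + 1] (win shrinks to i + 1
    # near the start of the axis), apply `func` to the slice, and then
    # use _mask() to overwrite with NaN every position whose window
    # held fewer than `mc` non-NaN values.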
a = np.array(a, copy=False) if min_count is None: mc = window else: mc = min_count if mc > window: msg = "min_count (%d) cannot be greater than window (%d)" raise ValueError(msg % (mc, window)) elif mc <= 0: raise ValueError("`min_count` must be greater than zero.") if a.ndim == 0: raise ValueError("moving window functions require ndim > 0") if axis is None: raise ValueError("An `axis` value of None is not supported.") if window < 1: raise ValueError("`window` must be at least 1.") if window > a.shape[axis]: raise ValueError("`window` is too long.") if issubclass(a.dtype.type, np.inexact): y = np.empty_like(a) else: y = np.empty(a.shape) idx1 = [slice(None)] * a.ndim idx2 = list(idx1) with warnings.catch_warnings(): warnings.simplefilter("ignore") for i in range(a.shape[axis]): win = min(window, i + 1) idx1[axis] = slice(i + 1 - win, i + 1) idx2[axis] = i y[idx2] = func(a[idx1], axis=axis, **kwargs) idx = _mask(a, window, mc, axis) y[idx] = np.nan return y def _mask(a, window, min_count, axis): n = (a == a).cumsum(axis) idx1 = [slice(None)] * a.ndim idx2 = [slice(None)] * a.ndim idx3 = [slice(None)] * a.ndim idx1[axis] = slice(window, None) idx2[axis] = slice(None, -window) idx3[axis] = slice(None, window) nidx1 = n[idx1] nidx1 = nidx1 - n[idx2] idx = np.empty(a.shape, dtype=np.bool) idx[idx1] = nidx1 < min_count idx[idx3] = n[idx3] < min_count return idx # --------------------------------------------------------------------------- def lastrank(a, axis=-1): """ The ranking of the last element along the axis, ignoring NaNs. The ranking is normalized to be between -1 and 1 instead of the more common 1 and N. The results are adjusted for ties. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. axis : int, optional The axis over which to rank. By default (axis=-1) the ranking (and reducing) is performed over the last axis. Returns ------- d : array In the case of, for example, a 2d array of shape (n, m) and axis=1, the output will contain the rank (normalized to be between -1 and 1 and adjusted for ties) of the the last element of each row. The output in this example will have shape (n,). Examples -------- Create an array: >>> y1 = larry([1, 2, 3]) What is the rank of the last element (the value 3 in this example)? It is the largest element so the rank is 1.0: >>> import numpy as np >>> from la.afunc import lastrank >>> x1 = np.array([1, 2, 3]) >>> lastrank(x1) 1.0 Now let's try an example where the last element has the smallest value: >>> x2 = np.array([3, 2, 1]) >>> lastrank(x2) -1.0 Here's an example where the last element is not the minimum or maximum value: >>> x3 = np.array([1, 3, 4, 5, 2]) >>> lastrank(x3) -0.5 """ a = np.array(a, copy=False) ndim = a.ndim if a.size == 0: # At least one dimension has length 0 shape = list(a.shape) shape.pop(axis) r = np.empty(shape, dtype=a.dtype) r.fill(np.nan) if (r.ndim == 0) and (r.size == 1): r = np.nan return r indlast = [slice(None)] * ndim indlast[axis] = slice(-1, None) indlast2 = [slice(None)] * ndim indlast2[axis] = -1 n = (~np.isnan(a)).sum(axis) a_indlast = a[indlast] g = (a_indlast > a).sum(axis) e = (a_indlast == a).sum(axis) r = (g + g + e - 1.0) / 2.0 r = r / (n - 1.0) r = 2.0 * (r - 0.5) if ndim == 1: if n == 1: r = 0 if np.isnan(a[indlast2]): # elif? 
r = np.nan else: np.putmask(r, n == 1, 0) np.putmask(r, np.isnan(a[indlast2]), np.nan) return r bottleneck-1.2.0/bottleneck/slow/nonreduce.py000066400000000000000000000012101300536544100213110ustar00rootroot00000000000000import numpy as np __all__ = ['replace'] def replace(a, old, new): "Slow replace (inplace) used for unaccelerated dtypes." if type(a) is not np.ndarray: raise TypeError("`a` must be a numpy array.") if not issubclass(a.dtype.type, np.inexact): if old != old: # int arrays do not contain NaN return if int(old) != old: raise ValueError("Cannot safely cast `old` to int.") if int(new) != new: raise ValueError("Cannot safely cast `new` to int.") if old != old: mask = np.isnan(a) else: mask = a == old np.putmask(a, mask, new) bottleneck-1.2.0/bottleneck/slow/nonreduce_axis.py000066400000000000000000000117041300536544100223460ustar00rootroot00000000000000import numpy as np from numpy import partition, argpartition __all__ = ['rankdata', 'nanrankdata', 'partition', 'argpartition', 'push'] def rankdata(a, axis=None): "Slow rankdata function used for unaccelerated dtypes." return _rank(scipy_rankdata, a, axis) def nanrankdata(a, axis=None): "Slow nanrankdata function used for unaccelerated dtypes." return _rank(_nanrankdata_1d, a, axis) def _rank(func1d, a, axis): a = np.array(a, copy=False) if axis is None: a = a.ravel() axis = 0 if a.size == 0: y = a.astype(np.float64, copy=True) else: y = np.apply_along_axis(func1d, axis, a) if a.dtype != np.float64: y = y.astype(np.float64) return y def _nanrankdata_1d(a): y = np.empty(a.shape, dtype=np.float64) y.fill(np.nan) idx = ~np.isnan(a) y[idx] = scipy_rankdata(a[idx]) return y def push(a, n=np.inf, axis=-1): "Slow push used for unaccelerated dtypes." if axis is None: raise ValueError("`axis` cannot be None") y = np.array(a) ndim = y.ndim if axis != -1 or axis != ndim - 1: y = np.rollaxis(y, axis, ndim) if ndim == 1: y = y[None, :] elif ndim == 0: return y fidx = ~np.isnan(y) recent = np.empty(y.shape[:-1]) count = np.empty(y.shape[:-1]) recent.fill(np.nan) count.fill(np.nan) with np.errstate(invalid='ignore'): for i in range(y.shape[-1]): idx = (i - count) > n recent[idx] = np.nan idx = ~fidx[..., i] y[idx, i] = recent[idx] idx = fidx[..., i] count[idx] = i recent[idx] = y[idx, i] if axis != -1 or axis != ndim - 1: y = np.rollaxis(y, ndim - 1, axis) if ndim == 1: return y[0] return y # --------------------------------------------------------------------------- # # SciPy # # Local copy of SciPy's rankdata to avoid a SciPy dependency. The SciPy # license is included in the Bottleneck license file, which is distributed # with Bottleneck. # # Code taken from scipy master branch on Aug 31, 2016. def scipy_rankdata(a, method='average'): """ rankdata(a, method='average') Assign ranks to data, dealing with ties appropriately. Ranks begin at 1. The `method` argument controls how ranks are assigned to equal values. See [1]_ for further discussion of ranking methods. Parameters ---------- a : array_like The array of values to be ranked. The array is first flattened. method : str, optional The method used to assign ranks to tied elements. The options are 'average', 'min', 'max', 'dense' and 'ordinal'. 'average': The average of the ranks that would have been assigned to all the tied values is assigned to each value. 'min': The minimum of the ranks that would have been assigned to all the tied values is assigned to each value. (This is also referred to as "competition" ranking.) 
'max': The maximum of the ranks that would have been assigned to all the tied values is assigned to each value. 'dense': Like 'min', but the rank of the next highest element is assigned the rank immediately after those assigned to the tied elements. 'ordinal': All values are given a distinct rank, corresponding to the order that the values occur in `a`. The default is 'average'. Returns ------- ranks : ndarray An array of length equal to the size of `a`, containing rank scores. References ---------- .. [1] "Ranking", http://en.wikipedia.org/wiki/Ranking Examples -------- >>> from scipy.stats import rankdata >>> rankdata([0, 2, 3, 2]) array([ 1. , 2.5, 4. , 2.5]) >>> rankdata([0, 2, 3, 2], method='min') array([ 1, 2, 4, 2]) >>> rankdata([0, 2, 3, 2], method='max') array([ 1, 3, 4, 3]) >>> rankdata([0, 2, 3, 2], method='dense') array([ 1, 2, 3, 2]) >>> rankdata([0, 2, 3, 2], method='ordinal') array([ 1, 2, 4, 3]) """ if method not in ('average', 'min', 'max', 'dense', 'ordinal'): raise ValueError('unknown method "{0}"'.format(method)) a = np.ravel(np.asarray(a)) algo = 'mergesort' if method == 'ordinal' else 'quicksort' sorter = np.argsort(a, kind=algo) inv = np.empty(sorter.size, dtype=np.intp) inv[sorter] = np.arange(sorter.size, dtype=np.intp) if method == 'ordinal': return inv + 1 a = a[sorter] obs = np.r_[True, a[1:] != a[:-1]] dense = obs.cumsum()[inv] if method == 'dense': return dense # cumulative counts of each unique value count = np.r_[np.nonzero(obs)[0], len(obs)] if method == 'max': return count[dense] if method == 'min': return count[dense - 1] + 1 # average method return .5 * (count[dense] + count[dense - 1] + 1) bottleneck-1.2.0/bottleneck/slow/reduce.py000066400000000000000000000050021300536544100206010ustar00rootroot00000000000000import warnings import numpy as np from numpy import nanmean __all__ = ['median', 'nanmedian', 'nansum', 'nanmean', 'nanvar', 'nanstd', 'nanmin', 'nanmax', 'nanargmin', 'nanargmax', 'ss', 'anynan', 'allnan'] def nansum(a, axis=None): "Slow nansum function used for unaccelerated dtype." a = np.asarray(a) y = np.nansum(a, axis=axis) if y.dtype != a.dtype: y = y.astype(a.dtype) return y def nanargmin(a, axis=None): "Slow nanargmin function used for unaccelerated dtypes." with warnings.catch_warnings(): warnings.simplefilter("ignore") return np.nanargmin(a, axis=axis) def nanargmax(a, axis=None): "Slow nanargmax function used for unaccelerated dtypes." with warnings.catch_warnings(): warnings.simplefilter("ignore") return np.nanargmax(a, axis=axis) def nanvar(a, axis=None, ddof=0): "Slow nanvar function used for unaccelerated dtypes." with warnings.catch_warnings(): warnings.simplefilter("ignore") return np.nanvar(a, axis=axis, ddof=ddof) def nanstd(a, axis=None, ddof=0): "Slow nanstd function used for unaccelerated dtypes." with warnings.catch_warnings(): warnings.simplefilter("ignore") return np.nanstd(a, axis=axis, ddof=ddof) def nanmin(a, axis=None): "Slow nanmin function used for unaccelerated dtypes." with warnings.catch_warnings(): warnings.simplefilter("ignore") return np.nanmin(a, axis=axis) def nanmax(a, axis=None): "Slow nanmax function used for unaccelerated dtypes." with warnings.catch_warnings(): warnings.simplefilter("ignore") return np.nanmax(a, axis=axis) def median(a, axis=None): "Slow median function used for unaccelerated dtypes." with warnings.catch_warnings(): warnings.simplefilter("ignore") return np.median(a, axis=axis) def nanmedian(a, axis=None): "Slow nanmedian function used for unaccelerated dtypes." 
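    # np.nanmedian emits a RuntimeWarning on all-NaN slices; it is silenced
    # here (as in the other reductions above) so the fallback is as quiet
    # as the accelerated version.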
with warnings.catch_warnings(): warnings.simplefilter("ignore") return np.nanmedian(a, axis=axis) def ss(a, axis=None): "Slow sum of squares used for unaccelerated dtypes." a = np.asarray(a) y = np.multiply(a, a).sum(axis) if y.dtype != a.dtype: y = y.astype(a.dtype) return y def anynan(a, axis=None): "Slow check for Nans used for unaccelerated dtypes." return np.isnan(a).any(axis) def allnan(a, axis=None): "Slow check for all Nans used for unaccelerated dtypes." return np.isnan(a).all(axis) bottleneck-1.2.0/bottleneck/src/000077500000000000000000000000001300536544100165665ustar00rootroot00000000000000bottleneck-1.2.0/bottleneck/src/.gitignore000066400000000000000000000000611300536544100205530ustar00rootroot00000000000000/reduce.c /move.c /nonreduce.c /nonreduce_axis.c bottleneck-1.2.0/bottleneck/src/__init__.py000066400000000000000000000000001300536544100206650ustar00rootroot00000000000000bottleneck-1.2.0/bottleneck/src/bottleneck.h000066400000000000000000000130051300536544100210700ustar00rootroot00000000000000#ifndef BOTTLENECK_H #define BOTTLENECK_H #include #define NPY_NO_DEPRECATED_API NPY_1_11_API_VERSION #include /* THREADS=1 releases the GIL but increases function call * overhead. THREADS=0 does not release the GIL but keeps * function call overhead low. Curly brackets are for C89 * support. */ #define THREADS 1 #if THREADS #define BN_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS { #define BN_END_ALLOW_THREADS ;} Py_END_ALLOW_THREADS #else #define BN_BEGIN_ALLOW_THREADS { #define BN_END_ALLOW_THREADS } #endif /* for ease of dtype templating */ #define NPY_float64 NPY_FLOAT64 #define NPY_float32 NPY_FLOAT32 #define NPY_int64 NPY_INT64 #define NPY_int32 NPY_INT32 #define NPY_intp NPY_INTP #define NPY_MAX_int64 NPY_MAX_INT64 #define NPY_MAX_int32 NPY_MAX_INT32 #define NPY_MIN_int64 NPY_MIN_INT64 #define NPY_MIN_int32 NPY_MIN_INT32 #if PY_MAJOR_VERSION >= 3 #define PyString_FromString PyBytes_FromString #define PyInt_FromLong PyLong_FromLong #define PyInt_AsLong PyLong_AsLong #define PyString_InternFromString PyUnicode_InternFromString #endif #define VARKEY METH_VARARGS | METH_KEYWORDS #define error_converting(x) (((x) == -1) && PyErr_Occurred()) #define VALUE_ERR(text) PyErr_SetString(PyExc_ValueError, text) #define TYPE_ERR(text) PyErr_SetString(PyExc_TypeError, text) #define MEMORY_ERR(text) PyErr_SetString(PyExc_MemoryError, text) #define RUNTIME_ERR(text) PyErr_SetString(PyExc_RuntimeError, text) /* `inline` copied from NumPy. */ #if defined(_MSC_VER) #define BN_INLINE __inline #elif defined(__GNUC__) #if defined(__STRICT_ANSI__) #define BN_INLINE __inline__ #else #define BN_INLINE inline #endif #else #define BN_INLINE #endif /* * NAN and INFINITY like macros (same behavior as glibc for NAN, same as C99 * for INFINITY). Copied from NumPy. 
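 *
 * In IEEE-754 single precision, 0x7f800000 is +infinity (sign bit 0,
 * exponent all ones, zero mantissa) and 0x7fc00000 is a quiet NaN
 * (exponent all ones, most-significant mantissa bit set); the unions
 * below reinterpret those bit patterns as floats.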
*/ BN_INLINE static float __bn_inff(void) { const union { npy_uint32 __i; float __f;} __bint = {0x7f800000UL}; return __bint.__f; } BN_INLINE static float __bn_nanf(void) { const union { npy_uint32 __i; float __f;} __bint = {0x7fc00000UL}; return __bint.__f; } #define BN_INFINITYF __bn_inff() #define BN_NANF __bn_nanf() #define BN_INFINITY ((npy_double)BN_INFINITYF) #define BN_NAN ((npy_double)BN_NANF) #define C_CONTIGUOUS(a) PyArray_CHKFLAGS(a, NPY_ARRAY_C_CONTIGUOUS) #define F_CONTIGUOUS(a) PyArray_CHKFLAGS(a, NPY_ARRAY_F_CONTIGUOUS) #define IS_CONTIGUOUS(a) (C_CONTIGUOUS(a) || F_CONTIGUOUS(a)) /* WIRTH ----------------------------------------------------------------- */ /* WIRTH macro based on: Fast median search: an ANSI C implementation Nicolas Devillard - ndevilla AT free DOT fr July 1998 which, in turn, took the algorithm from Wirth, Niklaus Algorithms + data structures = programs, p. 366 Englewood Cliffs: Prentice-Hall, 1976 Adapted for Bottleneck: (C) 2016 Keith Goodman */ #define WIRTH(dtype) \ x = B(dtype, k); \ i = l; \ j = r; \ do { \ while (B(dtype, i) < x) i++; \ while (x < B(dtype, j)) j--; \ if (i <= j) { \ npy_##dtype atmp = B(dtype, i); \ B(dtype, i) = B(dtype, j); \ B(dtype, j) = atmp; \ i++; \ j--; \ } \ } while (i <= j); \ if (j < k) l = i; \ if (k < i) r = j; /* partition ------------------------------------------------------------- */ #define PARTITION(dtype) \ while (l < r) { \ npy_##dtype x; \ npy_##dtype al = B(dtype, l); \ npy_##dtype ak = B(dtype, k); \ npy_##dtype ar = B(dtype, r); \ if (al > ak) { \ if (ak < ar) { \ if (al < ar) { \ B(dtype, k) = al; \ B(dtype, l) = ak; \ } \ else { \ B(dtype, k) = ar; \ B(dtype, r) = ak; \ } \ } \ } \ else { \ if (ak > ar) { \ if (al > ar) { \ B(dtype, k) = al; \ B(dtype, l) = ak; \ } \ else { \ B(dtype, k) = ar; \ B(dtype, r) = ak; \ } \ } \ } \ WIRTH(dtype) \ } /* slow ------------------------------------------------------------------ */ static PyObject *slow_module = NULL; static PyObject * slow(char *name, PyObject *args, PyObject *kwds) { PyObject *func = NULL; PyObject *out = NULL; if (slow_module == NULL) { /* bottleneck.slow has not been imported during the current * python session. Only import it once per session to save time */ slow_module = PyImport_ImportModule("bottleneck.slow"); if (slow_module == NULL) { PyErr_SetString(PyExc_RuntimeError, "Cannot import bottleneck.slow"); return NULL; } } func = PyObject_GetAttrString(slow_module, name); if (func == NULL) { PyErr_Format(PyExc_RuntimeError, "Cannot import %s from bottleneck.slow", name); return NULL; } if (PyCallable_Check(func)) { out = PyObject_Call(func, args, kwds); if (out == NULL) { Py_XDECREF(func); return NULL; } } else { Py_XDECREF(func); PyErr_Format(PyExc_RuntimeError, "bottleneck.slow.%s is not callable", name); return NULL; } Py_XDECREF(func); return out; } #endif /* BOTTLENECK_H */ bottleneck-1.2.0/bottleneck/src/iterators.h000066400000000000000000000237131300536544100207610ustar00rootroot00000000000000#include /* Bottleneck iterators are based on ideas from NumPy's PyArray_IterAllButAxis and PyArray_ITER_NEXT. 
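 *
 * The pattern: an iterator visits every one-dimensional slice of an array
 * along a chosen axis. `pa` points at the start of the current slice,
 * `astride` and `length` describe how to step within it, and the NEXT
 * macro advances `pa` to the following slice by updating the `indices`
 * odometer over the remaining dimensions.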
*/ /* one input array ------------------------------------------------------- */ /* these iterators are used mainly by reduce functions such as nansum */ struct _iter { int ndim_m2; /* ndim - 2 */ int axis; /* axis to not iterate over */ Py_ssize_t length; /* a.shape[axis] */ Py_ssize_t astride; /* a.strides[axis] */ npy_intp i; /* integer used by some macros */ npy_intp its; /* number of iterations completed */ npy_intp nits; /* number of iterations iterator plans to make */ npy_intp indices[NPY_MAXDIMS]; /* current location of iterator */ npy_intp astrides[NPY_MAXDIMS]; /* a.strides, a.strides[axis] removed */ npy_intp shape[NPY_MAXDIMS]; /* a.shape, a.shape[axis] removed */ char *pa; /* pointer to data corresponding to indices */ }; typedef struct _iter iter; static BN_INLINE void init_iter_one(iter *it, PyArrayObject *a, int axis) { int i, j = 0; const int ndim = PyArray_NDIM(a); const npy_intp *shape = PyArray_SHAPE(a); const npy_intp *strides = PyArray_STRIDES(a); it->axis = axis; it->its = 0; it->nits = 1; it->pa = PyArray_BYTES(a); it->ndim_m2 = -1; it->length = 1; it->astride = 0; if (ndim != 0) { it->ndim_m2 = ndim - 2; for (i = 0; i < ndim; i++) { if (i == axis) { it->astride = strides[i]; it->length = shape[i]; } else { it->indices[j] = 0; it->astrides[j] = strides[i]; it->shape[j] = shape[i]; it->nits *= shape[i]; j++; } } } } static BN_INLINE void init_iter_all(iter *it, PyArrayObject *a, int ravel, int anyorder) { int i, j = 0; const int ndim = PyArray_NDIM(a); const npy_intp *shape = PyArray_SHAPE(a); const npy_intp *strides = PyArray_STRIDES(a); it->axis = 0; it->its = 0; it->nits = 1; if (ndim == 1) { it->ndim_m2 = -1; it->length = shape[0]; it->astride = strides[0]; } else if (ndim == 0) { it->ndim_m2 = -1; it->length = 1; it->astride = 0; } else if (C_CONTIGUOUS(a)) { it->ndim_m2 = -1; it->axis = ndim - 1; it->length = PyArray_SIZE(a); it->astride = strides[ndim - 1]; } else if (F_CONTIGUOUS(a)) { if (anyorder || !ravel) { it->ndim_m2 = -1; it->length = PyArray_SIZE(a); it->astride = strides[0]; } else { it->ndim_m2 = -1; if (anyorder) { a = (PyArrayObject *)PyArray_Ravel(a, NPY_ANYORDER); } else { a = (PyArrayObject *)PyArray_Ravel(a, NPY_CORDER); } Py_DECREF(a); it->length = PyArray_DIM(a, 0); it->astride = PyArray_STRIDE(a, 0); } } else if (ravel) { it->ndim_m2 = -1; if (anyorder) { a = (PyArrayObject *)PyArray_Ravel(a, NPY_ANYORDER); } else { a = (PyArrayObject *)PyArray_Ravel(a, NPY_CORDER); } Py_DECREF(a); it->length = PyArray_DIM(a, 0); it->astride = PyArray_STRIDE(a, 0); } else { it->ndim_m2 = ndim - 2; it->astride = strides[0]; for (i = 1; i < ndim; i++) { if (strides[i] < it->astride) { it->astride = strides[i]; it->axis = i; } } it->length = shape[it->axis]; for (i = 0; i < ndim; i++) { if (i != it->axis) { it->indices[j] = 0; it->astrides[j] = strides[i]; it->shape[j] = shape[i]; it->nits *= shape[i]; j++; } } } it->pa = PyArray_BYTES(a); } #define NEXT \ for (it.i = it.ndim_m2; it.i > -1; it.i--) { \ if (it.indices[it.i] < it.shape[it.i] - 1) { \ it.pa += it.astrides[it.i]; \ it.indices[it.i]++; \ break; \ } \ it.pa -= it.indices[it.i] * it.astrides[it.i]; \ it.indices[it.i] = 0; \ } \ it.its++; /* two input arrays ------------------------------------------------------ */ /* this iterator is used mainly by moving window functions such as move_sum */ struct _iter2 { int ndim_m2; int axis; Py_ssize_t length; Py_ssize_t astride; Py_ssize_t ystride; npy_intp i; npy_intp its; npy_intp nits; npy_intp indices[NPY_MAXDIMS]; npy_intp astrides[NPY_MAXDIMS]; npy_intp 
ystrides[NPY_MAXDIMS]; npy_intp shape[NPY_MAXDIMS]; char *pa; char *py; }; typedef struct _iter2 iter2; static BN_INLINE void init_iter2(iter2 *it, PyArrayObject *a, PyObject *y, int axis) { int i, j = 0; const int ndim = PyArray_NDIM(a); const npy_intp *shape = PyArray_SHAPE(a); const npy_intp *astrides = PyArray_STRIDES(a); const npy_intp *ystrides = PyArray_STRIDES((PyArrayObject *)y); /* to avoid compiler warning of uninitialized variables */ it->length = 0; it->astride = 0; it->ystride = 0; it->ndim_m2 = ndim - 2; it->axis = axis; it->its = 0; it->nits = 1; it->pa = PyArray_BYTES(a); it->py = PyArray_BYTES((PyArrayObject *)y); for (i = 0; i < ndim; i++) { if (i == axis) { it->astride = astrides[i]; it->ystride = ystrides[i]; it->length = shape[i]; } else { it->indices[j] = 0; it->astrides[j] = astrides[i]; it->ystrides[j] = ystrides[i]; it->shape[j] = shape[i]; it->nits *= shape[i]; j++; } } } #define NEXT2 \ for (it.i = it.ndim_m2; it.i > -1; it.i--) { \ if (it.indices[it.i] < it.shape[it.i] - 1) { \ it.pa += it.astrides[it.i]; \ it.py += it.ystrides[it.i]; \ it.indices[it.i]++; \ break; \ } \ it.pa -= it.indices[it.i] * it.astrides[it.i]; \ it.py -= it.indices[it.i] * it.ystrides[it.i]; \ it.indices[it.i] = 0; \ } \ it.its++; /* three input arrays ---------------------------------------------------- */ /* this iterator is used mainly by rankdata and nanrankdata */ struct _iter3 { int ndim_m2; int axis; Py_ssize_t length; Py_ssize_t astride; Py_ssize_t ystride; Py_ssize_t zstride; npy_intp i; npy_intp its; npy_intp nits; npy_intp indices[NPY_MAXDIMS]; npy_intp astrides[NPY_MAXDIMS]; npy_intp ystrides[NPY_MAXDIMS]; npy_intp zstrides[NPY_MAXDIMS]; npy_intp shape[NPY_MAXDIMS]; char *pa; char *py; char *pz; }; typedef struct _iter3 iter3; static BN_INLINE void init_iter3(iter3 *it, PyArrayObject *a, PyObject *y, PyObject *z, int axis) { int i, j = 0; const int ndim = PyArray_NDIM(a); const npy_intp *shape = PyArray_SHAPE(a); const npy_intp *astrides = PyArray_STRIDES(a); const npy_intp *ystrides = PyArray_STRIDES((PyArrayObject *)y); const npy_intp *zstrides = PyArray_STRIDES((PyArrayObject *)z); /* to avoid compiler warning of uninitialized variables */ it->length = 0; it->astride = 0; it->ystride = 0; it->zstride = 0; it->ndim_m2 = ndim - 2; it->axis = axis; it->its = 0; it->nits = 1; it->pa = PyArray_BYTES(a); it->py = PyArray_BYTES((PyArrayObject *)y); it->pz = PyArray_BYTES((PyArrayObject *)z); for (i = 0; i < ndim; i++) { if (i == axis) { it->astride = astrides[i]; it->ystride = ystrides[i]; it->zstride = zstrides[i]; it->length = shape[i]; } else { it->indices[j] = 0; it->astrides[j] = astrides[i]; it->ystrides[j] = ystrides[i]; it->zstrides[j] = zstrides[i]; it->shape[j] = shape[i]; it->nits *= shape[i]; j++; } } } #define NEXT3 \ for (it.i = it.ndim_m2; it.i > -1; it.i--) { \ if (it.indices[it.i] < it.shape[it.i] - 1) { \ it.pa += it.astrides[it.i]; \ it.py += it.ystrides[it.i]; \ it.pz += it.zstrides[it.i]; \ it.indices[it.i]++; \ break; \ } \ it.pa -= it.indices[it.i] * it.astrides[it.i]; \ it.py -= it.indices[it.i] * it.ystrides[it.i]; \ it.pz -= it.indices[it.i] * it.zstrides[it.i]; \ it.indices[it.i] = 0; \ } \ it.its++; /* macros used with iterators -------------------------------------------- */ /* most of these macros assume iterator is named `it` */ #define NDIM it.ndim_m2 + 2 #define SHAPE it.shape #define SIZE it.nits * it.length #define LENGTH it.length #define INDEX it.i #define WHILE while (it.its < it.nits) #define WHILE0 it.i = 0; while (it.i < min_count - 1) 
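/* WHILE0 above and WHILE1/WHILE2 below split a moving-window pass over one
 * slice into three phases: output indices [0, min_count - 1) are always
 * NaN, indices [min_count - 1, window) are computed while the window is
 * still filling, and indices [window, length) use a full sliding window. */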
#define WHILE1 while (it.i < window) #define WHILE2 while (it.i < it.length) #define FOR for (it.i = 0; it.i < it.length; it.i++) #define FOR_REVERSE for (it.i = it.length - 1; it.i > -1; it.i--) #define RESET it.its = 0; #define A0(dtype) *(npy_##dtype *)(it.pa) #define AI(dtype) *(npy_##dtype *)(it.pa + it.i * it.astride) #define AX(dtype, x) *(npy_##dtype *)(it.pa + (x) * it.astride) #define AOLD(dtype) *(npy_##dtype *)(it.pa + (it.i - window) * it.astride) #define YPP *py++ #define YI(dtype) *(npy_##dtype *)(it.py + it.i++ * it.ystride) #define YX(dtype, x) *(npy_##dtype *)(it.py + (x) * it.ystride) #define ZX(dtype, x) *(npy_##dtype *)(it.pz + (x) * it.zstride) #define FILL_Y(value) \ int i; \ Py_ssize_t size = PyArray_SIZE((PyArrayObject *)y); \ for (i = 0; i < size; i++) YPP = value; bottleneck-1.2.0/bottleneck/src/move_median/000077500000000000000000000000001300536544100210515ustar00rootroot00000000000000bottleneck-1.2.0/bottleneck/src/move_median/.gitignore000066400000000000000000000000061300536544100230350ustar00rootroot00000000000000*.out bottleneck-1.2.0/bottleneck/src/move_median/makefile000066400000000000000000000006351300536544100225550ustar00rootroot00000000000000 all: clean @gcc move_median.c move_median_debug.c -DBINARY_TREE=1 -lm -Wall -Wextra @./a.out gdb: clean @gcc move_median.c move_median_debug.c -DBINARY_TREE=1 -lm -g -Wall -Wextra @gdb ./a.out valgrind: clean @gcc move_median.c move_median_debug.c -DBINARY_TREE=1 -lm -g -Wall -Wextra @valgrind --tool=memcheck --leak-check=yes --show-reachable=yes \ --num-callers=20 ./a.out clean: @rm -rf a.out bottleneck-1.2.0/bottleneck/src/move_median/move_median.c000066400000000000000000000531271300536544100235100ustar00rootroot00000000000000/* Copyright (c) 2011 J. David Lee. All rights reserved. Released under a Simplified BSD license Adapted, expanded, and added NaN handling for Bottleneck: Copyright 2016 Keith Goodman Released under the Bottleneck license */ #include "move_median.h" #define min(a, b) (((a) < (b)) ? (a) : (b)) #define SWAP_NODES(heap, idx1, node1, idx2, node2) \ heap[idx1] = node2; \ heap[idx2] = node1; \ node1->idx = idx2; \ node2->idx = idx1; \ idx1 = idx2 /* ----------------------------------------------------------------------------- Prototypes ----------------------------------------------------------------------------- */ /* helper functions */ static inline ai_t mm_get_median(mm_handle *mm); static inline void heapify_small_node(mm_handle *mm, idx_t idx); static inline void heapify_large_node(mm_handle *mm, idx_t idx); static inline idx_t mm_get_smallest_child(mm_node **heap, idx_t window, idx_t idx, mm_node **child); static inline idx_t mm_get_largest_child(mm_node **heap, idx_t window, idx_t idx, mm_node **child); static inline void mm_move_up_small(mm_node **heap, idx_t idx, mm_node *node, idx_t p_idx, mm_node *parent); static inline void mm_move_down_small(mm_node **heap, idx_t window, idx_t idx, mm_node *node); static inline void mm_move_down_large(mm_node **heap, idx_t idx, mm_node *node, idx_t p_idx, mm_node *parent); static inline void mm_move_up_large(mm_node **heap, idx_t window, idx_t idx, mm_node *node); static inline void mm_swap_heap_heads(mm_node **s_heap, idx_t n_s, mm_node **l_heap, idx_t n_l, mm_node *s_node, mm_node *l_node); /* ----------------------------------------------------------------------------- Top-level non-nan functions ----------------------------------------------------------------------------- */ /* At the start of bn.move_median two heaps are created. 
One heap contains the * small values (a max heap); the other heap contains the large values (a min * heap). The handle, containing information about the heaps, is returned. */ mm_handle * mm_new(const idx_t window, idx_t min_count) { mm_handle *mm = malloc(sizeof(mm_handle)); mm->nodes = malloc(window * sizeof(mm_node*)); mm->node_data = malloc(window * sizeof(mm_node)); mm->s_heap = mm->nodes; mm->l_heap = &mm->nodes[window / 2 + window % 2]; mm->window = window; mm->odd = window % 2; mm->min_count = min_count; mm_reset(mm); return mm; } /* Insert a new value, ai, into one of the heaps. Use this function when * the heaps contain less than window-1 nodes. Returns the median value. * Once there are window-1 nodes in the heap, switch to using mm_update. */ ai_t mm_update_init(mm_handle *mm, ai_t ai) { mm_node *node; idx_t n_s = mm->n_s; idx_t n_l = mm->n_l; node = &mm->node_data[n_s + n_l]; node->ai = ai; if (n_s == 0) { /* the first node to appear in a heap */ mm->s_heap[0] = node; node->region = SH; node->idx = 0; mm->oldest = node; /* only need to set the oldest node once */ mm->n_s = 1; mm->s_first_leaf = 0; } else { /* at least one node already exists in the heaps */ mm->newest->next = node; if (n_s > n_l) { /* add new node to large heap */ mm->l_heap[n_l] = node; node->region = LH; node->idx = n_l; ++mm->n_l; mm->l_first_leaf = FIRST_LEAF(mm->n_l); heapify_large_node(mm, n_l); } else { /* add new node to small heap */ mm->s_heap[n_s] = node; node->region = SH; node->idx = n_s; ++mm->n_s; mm->s_first_leaf = FIRST_LEAF(mm->n_s); heapify_small_node(mm, n_s); } } mm->newest = node; return mm_get_median(mm); } /* Insert a new value, ai, into the double heap structure. Use this function * when the double heap contains at least window-1 nodes. Returns the median * value. If there are less than window-1 nodes in the heap, use * mm_update_init. */ ai_t mm_update(mm_handle *mm, ai_t ai) { /* node is oldest node with ai of newest node */ mm_node *node = mm->oldest; node->ai = ai; /* update oldest, newest */ mm->oldest = mm->oldest->next; mm->newest->next = node; mm->newest = node; /* adjust position of new node in heap if needed */ if (node->region == SH) { heapify_small_node(mm, node->idx); } else { heapify_large_node(mm, node->idx); } /* return the median */ if (mm->odd) return mm->s_heap[0]->ai; else return (mm->s_heap[0]->ai + mm->l_heap[0]->ai) / 2.0; } /* ----------------------------------------------------------------------------- Top-level nan functions ----------------------------------------------------------------------------- */ /* At the start of bn.move_median two heaps and a nan array are created. One * heap contains the small values (a max heap); the other heap contains the * large values (a min heap); the nan array contains the NaNs. The handle, * containing information about the heaps and the nan array is returned. */ mm_handle * mm_new_nan(const idx_t window, idx_t min_count) { mm_handle *mm = malloc(sizeof(mm_handle)); mm->nodes = malloc(2 * window * sizeof(mm_node*)); mm->node_data = malloc(window * sizeof(mm_node)); mm->s_heap = mm->nodes; mm->l_heap = &mm->nodes[window / 2 + window % 2]; mm->n_array = &mm->nodes[window]; mm->window = window; mm->min_count = min_count; mm_reset(mm); return mm; } /* Insert a new value, ai, into one of the heaps or the nan array. Use this * function when there are less than window-1 nodes. Returns the median * value. Once there are window-1 nodes in the heap, switch to using * mm_update_nan. 
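 *
 * Throughout, the two heaps stay balanced to within one node with
 * s_heap[0] <= l_heap[0], so the median is always read off the heap
 * heads. A sketch of the driver loop (it mirrors mm_move_median in
 * move_median_debug.c):
 *
 *     mm_handle *mm = mm_new_nan(window, min_count);
 *     for (i = 0; i < length; i++)
 *         out[i] = i < window ? mm_update_init_nan(mm, a[i])
 *                             : mm_update_nan(mm, a[i]);
 *     mm_free(mm);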
*/ ai_t mm_update_init_nan(mm_handle *mm, ai_t ai) { mm_node *node; idx_t n_s = mm->n_s; idx_t n_l = mm->n_l; idx_t n_n = mm->n_n; node = &mm->node_data[n_s + n_l + n_n]; node->ai = ai; if (ai != ai) { mm->n_array[n_n] = node; node->region = NA; node->idx = n_n; if (n_s + n_l + n_n == 0) { /* only need to set the oldest node once */ mm->oldest = node; } else { mm->newest->next = node; } ++mm->n_n; } else { if (n_s == 0) { /* the first node to appear in a heap */ mm->s_heap[0] = node; node->region = SH; node->idx = 0; if (n_s + n_l + n_n == 0) { /* only need to set the oldest node once */ mm->oldest = node; } else { mm->newest->next = node; } mm->n_s = 1; mm->s_first_leaf = 0; } else { /* at least one node already exists in the heaps */ mm->newest->next = node; if (n_s > n_l) { /* add new node to large heap */ mm->l_heap[n_l] = node; node->region = LH; node->idx = n_l; ++mm->n_l; mm->l_first_leaf = FIRST_LEAF(mm->n_l); heapify_large_node(mm, n_l); } else { /* add new node to small heap */ mm->s_heap[n_s] = node; node->region = SH; node->idx = n_s; ++mm->n_s; mm->s_first_leaf = FIRST_LEAF(mm->n_s); heapify_small_node(mm, n_s); } } } mm->newest = node; return mm_get_median(mm); } /* Insert a new value, ai, into one of the heaps or the nan array. Use this * function when there are at least window-1 nodes. Returns the median value. * If there are less than window-1 nodes, use mm_update_init_nan. */ ai_t mm_update_nan(mm_handle *mm, ai_t ai) { idx_t n_s, n_l, n_n; mm_node **l_heap; mm_node **s_heap; mm_node **n_array; mm_node *node2; /* node is oldest node with ai of newest node */ mm_node *node = mm->oldest; idx_t idx = node->idx; node->ai = ai; /* update oldest, newest */ mm->oldest = mm->oldest->next; mm->newest->next = node; mm->newest = node; l_heap = mm->l_heap; s_heap = mm->s_heap; n_array = mm->n_array; n_s = mm->n_s; n_l = mm->n_l; n_n = mm->n_n; if (ai != ai) { if (node->region == SH) { /* Oldest node is in the small heap and needs to be moved * to the nan array. Resulting hole in the small heap will be * filled with the rightmost leaf of the last row of the small * heap. */ /* insert node into nan array */ node->region = NA; node->idx = n_n; n_array[n_n] = node; ++mm->n_n; /* plug small heap hole */ --mm->n_s; if (mm->n_s == 0) { mm->s_first_leaf = 0; if (n_l > 0) { /* move head node from the large heap to the small heap */ node2 = mm->l_heap[0]; node2->region = SH; s_heap[0] = node2; mm->n_s = 1; mm->s_first_leaf = 0; /* plug hole in large heap */ node2= mm->l_heap[mm->n_l - 1]; node2->idx = 0; l_heap[0] = node2; --mm->n_l; if (mm->n_l == 0) mm->l_first_leaf = 0; else mm->l_first_leaf = FIRST_LEAF(mm->n_l); heapify_large_node(mm, 0); } } else { if (idx != n_s - 1) { s_heap[idx] = s_heap[n_s - 1]; s_heap[idx]->idx = idx; heapify_small_node(mm, idx); } if (mm->n_s < mm->n_l) { /* move head node from the large heap to the small heap */ node2 = mm->l_heap[0]; node2->idx = mm->n_s; node2->region = SH; s_heap[mm->n_s] = node2; ++mm->n_s; mm->l_first_leaf = FIRST_LEAF(mm->n_s); heapify_small_node(mm, node2->idx); /* plug hole in large heap */ node2= mm->l_heap[mm->n_l - 1]; node2->idx = 0; l_heap[0] = node2; --mm->n_l; if (mm->n_l == 0) mm->l_first_leaf = 0; else mm->l_first_leaf = FIRST_LEAF(mm->n_l); heapify_large_node(mm, 0); } else { mm->s_first_leaf = FIRST_LEAF(mm->n_s); heapify_small_node(mm, idx); } } } else if (node->region == LH) { /* Oldest node is in the large heap and needs to be moved * to the nan array. 
Resulting hole in the large heap will be * filled with the rightmost leaf of the last row of the large * heap. */ /* insert node into nan array */ node->region = NA; node->idx = n_n; n_array[n_n] = node; ++mm->n_n; /* plug large heap hole */ if (idx != n_l - 1) { l_heap[idx] = l_heap[n_l - 1]; l_heap[idx]->idx = idx; heapify_large_node(mm, idx); } --mm->n_l; if (mm->n_l == 0) mm->l_first_leaf = 0; else mm->l_first_leaf = FIRST_LEAF(mm->n_l); if (mm->n_l < mm->n_s - 1) { /* move head node from the small heap to the large heap */ node2 = mm->s_heap[0]; node2->idx = mm->n_l; node2->region = LH; l_heap[mm->n_l] = node2; ++mm->n_l; mm->l_first_leaf = FIRST_LEAF(mm->n_l); heapify_large_node(mm, node2->idx); /* plug hole in small heap */ if (n_s != 1) { node2 = mm->s_heap[mm->n_s - 1]; node2->idx = 0; s_heap[0] = node2; } --mm->n_s; if (mm->n_s == 0) mm->s_first_leaf = 0; else mm->s_first_leaf = FIRST_LEAF(mm->n_s); heapify_small_node(mm, 0); } /* reorder large heap if needed */ heapify_large_node(mm, idx); } else if (node->region == NA) { /* insert node into nan heap */ n_array[idx] = node; } } else { if (node->region == SH) heapify_small_node(mm, idx); else if (node->region == LH) heapify_large_node(mm, idx); else { /* ai is not NaN but oldest node is in nan array */ if (n_s > n_l) { /* insert into large heap */ node->region = LH; node->idx = n_l; l_heap[n_l] = node; ++mm->n_l; mm->l_first_leaf = FIRST_LEAF(mm->n_l); heapify_large_node(mm, n_l); } else { /* insert into small heap */ node->region = SH; node->idx = n_s; s_heap[n_s] = node; ++mm->n_s; mm->s_first_leaf = FIRST_LEAF(mm->n_s); heapify_small_node(mm, n_s); } /* plug nan array hole */ if (idx != n_n - 1) { n_array[idx] = n_array[n_n - 1]; n_array[idx]->idx = idx; } --mm->n_n; } } return mm_get_median(mm); } /* ----------------------------------------------------------------------------- Top-level functions common to nan and non-nan cases ----------------------------------------------------------------------------- */ /* At the end of each slice the double heap and nan array are reset (mm_reset) * to prepare for the next slice. In the 2d input array case (with axis=1), * each slice is a row of the input array. 
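 *
 * For example, move_median over a (3, 10) array with axis=1 processes
 * three independent 10-element slices; mm_reset runs after each row so no
 * heap or nan-array state leaks into the next one.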
*/ void mm_reset(mm_handle *mm) { mm->n_l = 0; mm->n_s = 0; mm->n_n = 0; mm->s_first_leaf = 0; mm->l_first_leaf = 0; } /* After bn.move_median is done, free the memory */ void mm_free(mm_handle *mm) { free(mm->node_data); free(mm->nodes); free(mm); } /* ----------------------------------------------------------------------------- Utility functions ----------------------------------------------------------------------------- */ /* Return the current median */ static inline ai_t mm_get_median(mm_handle *mm) { idx_t n_total = mm->n_l + mm->n_s; if (n_total < mm->min_count) return NAN; if (min(mm->window, n_total) % 2 == 1) return mm->s_heap[0]->ai; return (mm->s_heap[0]->ai + mm->l_heap[0]->ai) / 2.0; } static inline void heapify_small_node(mm_handle *mm, idx_t idx) { idx_t idx2; mm_node *node; mm_node *node2; mm_node **s_heap; mm_node **l_heap; idx_t n_s, n_l; ai_t ai; s_heap = mm->s_heap; l_heap = mm->l_heap; node = s_heap[idx]; n_s = mm->n_s; n_l = mm->n_l; ai = node->ai; /* Internal or leaf node */ if (idx > 0) { idx2 = P_IDX(idx); node2 = s_heap[idx2]; /* Move up */ if (ai > node2->ai) { mm_move_up_small(s_heap, idx, node, idx2, node2); /* Maybe swap between heaps */ node2 = l_heap[0]; if (ai > node2->ai) { mm_swap_heap_heads(s_heap, n_s, l_heap, n_l, node, node2); } } /* Move down */ else if (idx < mm->s_first_leaf) { mm_move_down_small(s_heap, n_s, idx, node); } } /* Head node */ else { node2 = l_heap[0]; if (n_l > 0 && ai > node2->ai) { mm_swap_heap_heads(s_heap, n_s, l_heap, n_l, node, node2); } else { mm_move_down_small(s_heap, n_s, idx, node); } } } static inline void heapify_large_node(mm_handle *mm, idx_t idx) { idx_t idx2; mm_node *node; mm_node *node2; mm_node **s_heap; mm_node **l_heap; idx_t n_s, n_l; ai_t ai; s_heap = mm->s_heap; l_heap = mm->l_heap; node = l_heap[idx]; n_s = mm->n_s; n_l = mm->n_l; ai = node->ai; /* Internal or leaf node */ if (idx > 0) { idx2 = P_IDX(idx); node2 = l_heap[idx2]; /* Move down */ if (ai < node2->ai) { mm_move_down_large(l_heap, idx, node, idx2, node2); /* Maybe swap between heaps */ node2 = s_heap[0]; if (ai < node2->ai) { mm_swap_heap_heads(s_heap, n_s, l_heap, n_l, node2, node); } } /* Move up */ else if (idx < mm->l_first_leaf) { mm_move_up_large(l_heap, n_l, idx, node); } } /* Head node */ else { node2 = s_heap[0]; if (n_s > 0 && ai < node2->ai) { mm_swap_heap_heads(s_heap, n_s, l_heap, n_l, node2, node); } else { mm_move_up_large(l_heap, n_l, idx, node); } } } /* Return the index of the smallest child of the node. The pointer * child will also be set. */ static inline idx_t mm_get_smallest_child(mm_node **heap, idx_t window, idx_t idx, mm_node **child) { idx_t i0 = FC_IDX(idx); idx_t i1 = i0 + NUM_CHILDREN; i1 = min(i1, window); switch(i1 - i0) { case 8: if (heap[i0 + 7]->ai < heap[idx]->ai) { idx = i0 + 7; } case 7: if (heap[i0 + 6]->ai < heap[idx]->ai) { idx = i0 + 6; } case 6: if (heap[i0 + 5]->ai < heap[idx]->ai) { idx = i0 + 5; } case 5: if (heap[i0 + 4]->ai < heap[idx]->ai) { idx = i0 + 4; } case 4: if (heap[i0 + 3]->ai < heap[idx]->ai) { idx = i0 + 3; } case 3: if (heap[i0 + 2]->ai < heap[idx]->ai) { idx = i0 + 2; } case 2: if (heap[i0 + 1]->ai < heap[idx]->ai) { idx = i0 + 1; } case 1: if (heap[i0 ]->ai < heap[idx]->ai) { idx = i0; } } *child = heap[idx]; return idx; } /* Return the index of the largest child of the node. The pointer * child will also be set. 
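 *
 * As with mm_get_smallest_child above, the switch on (i1 - i0) falls
 * through its case labels on purpose: it is a manually unrolled loop over
 * the up-to-NUM_CHILDREN existing children, comparing each against the
 * current best index.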
*/ static inline idx_t mm_get_largest_child(mm_node **heap, idx_t window, idx_t idx, mm_node **child) { idx_t i0 = FC_IDX(idx); idx_t i1 = i0 + NUM_CHILDREN; i1 = min(i1, window); switch(i1 - i0) { case 8: if (heap[i0 + 7]->ai > heap[idx]->ai) { idx = i0 + 7; } case 7: if (heap[i0 + 6]->ai > heap[idx]->ai) { idx = i0 + 6; } case 6: if (heap[i0 + 5]->ai > heap[idx]->ai) { idx = i0 + 5; } case 5: if (heap[i0 + 4]->ai > heap[idx]->ai) { idx = i0 + 4; } case 4: if (heap[i0 + 3]->ai > heap[idx]->ai) { idx = i0 + 3; } case 3: if (heap[i0 + 2]->ai > heap[idx]->ai) { idx = i0 + 2; } case 2: if (heap[i0 + 1]->ai > heap[idx]->ai) { idx = i0 + 1; } case 1: if (heap[i0 ]->ai > heap[idx]->ai) { idx = i0; } } *child = heap[idx]; return idx; } /* Move the given node up through the heap to the appropriate position. */ static inline void mm_move_up_small(mm_node **heap, idx_t idx, mm_node *node, idx_t p_idx, mm_node *parent) { do { SWAP_NODES(heap, idx, node, p_idx, parent); if (idx == 0) { break; } p_idx = P_IDX(idx); parent = heap[p_idx]; } while (node->ai > parent->ai); } /* Move the given node down through the heap to the appropriate position. */ static inline void mm_move_down_small(mm_node **heap, idx_t window, idx_t idx, mm_node *node) { mm_node *child; ai_t ai = node->ai; idx_t c_idx = mm_get_largest_child(heap, window, idx, &child); while (ai < child->ai) { SWAP_NODES(heap, idx, node, c_idx, child); c_idx = mm_get_largest_child(heap, window, idx, &child); } } /* Move the given node down through the heap to the appropriate position. */ static inline void mm_move_down_large(mm_node **heap, idx_t idx, mm_node *node, idx_t p_idx, mm_node *parent) { do { SWAP_NODES(heap, idx, node, p_idx, parent); if (idx == 0) { break; } p_idx = P_IDX(idx); parent = heap[p_idx]; } while (node->ai < parent->ai); } /* Move the given node up through the heap to the appropriate position. */ static inline void mm_move_up_large(mm_node **heap, idx_t window, idx_t idx, mm_node *node) { mm_node *child; ai_t ai = node->ai; idx_t c_idx = mm_get_smallest_child(heap, window, idx, &child); while (ai > child->ai) { SWAP_NODES(heap, idx, node, c_idx, child); c_idx = mm_get_smallest_child(heap, window, idx, &child); } } /* Swap the heap heads. */ static inline void mm_swap_heap_heads(mm_node **s_heap, idx_t n_s, mm_node **l_heap, idx_t n_l, mm_node *s_node, mm_node *l_node) { s_node->region = LH; l_node->region = SH; s_heap[0] = l_node; l_heap[0] = s_node; mm_move_down_small(s_heap, n_s, 0, l_node); mm_move_up_large(l_heap, n_l, 0, s_node); } bottleneck-1.2.0/bottleneck/src/move_median/move_median.h000066400000000000000000000050561300536544100235130ustar00rootroot00000000000000#include #include #include #include #include typedef size_t idx_t; typedef double ai_t; #if BINARY_TREE==1 #define NUM_CHILDREN 2 #else /* maximum of 8 due to the manual loop-unrolling used in the code */ #define NUM_CHILDREN 8 #endif #if defined(_MSC_VER) && (_MSC_VER < 1900) #define inline __inline static __inline ai_t __NAN() { ai_t value; memset(&value, 0xFF, sizeof(value)); return value; } #define NAN __NAN() #endif /* Find indices of parent and first child */ #define P_IDX(i) ((i) - 1) / NUM_CHILDREN #define FC_IDX(i) NUM_CHILDREN * (i) + 1 /* are we in the small heap (SM), large heap (LH) or NaN array (NA)? 
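 *
 * Each node's `region` field holds one of these three tags, letting
 * mm_update_nan find, in O(1), which container the oldest node occupies
 * before the incoming value replaces it.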
*/ #define SH 0 #define LH 1 #define NA 2 #define FIRST_LEAF(n) ceil((n - 1) / (double)NUM_CHILDREN) struct _mm_node { int region; /* SH small heap, LH large heap, NA nan array */ ai_t ai; /* The node's value */ idx_t idx; /* The node's index in the heap or nan array */ struct _mm_node *next; /* The next node in order of insertion */ }; typedef struct _mm_node mm_node; struct _mm_handle { idx_t window; /* window size */ int odd; /* is window even (0) or odd (1) */ idx_t min_count; /* Same meaning as in bn.move_median */ idx_t n_s; /* Number of nodes in the small heap */ idx_t n_l; /* Number of nodes in the large heap */ idx_t n_n; /* Number of nodes in the nan array */ mm_node **s_heap; /* The max heap of small ai */ mm_node **l_heap; /* The min heap of large ai */ mm_node **n_array; /* The nan array */ mm_node **nodes; /* s_heap and l_heap point into this array */ mm_node *node_data; /* Pointer to memory location where nodes live */ mm_node *oldest; /* The oldest node */ mm_node *newest; /* The newest node (most recent insert) */ idx_t s_first_leaf; /* All nodes this index or greater are leaf nodes */ idx_t l_first_leaf; /* All nodes this index or greater are leaf nodes */ }; typedef struct _mm_handle mm_handle; /* non-nan functions */ mm_handle *mm_new(const idx_t window, idx_t min_count); ai_t mm_update_init(mm_handle *mm, ai_t ai); ai_t mm_update(mm_handle *mm, ai_t ai); /* nan functions */ mm_handle *mm_new_nan(const idx_t window, idx_t min_count); ai_t mm_update_init_nan(mm_handle *mm, ai_t ai); ai_t mm_update_nan(mm_handle *mm, ai_t ai); /* functions common to non-nan and nan cases */ void mm_reset(mm_handle *mm); void mm_free(mm_handle *mm); bottleneck-1.2.0/bottleneck/src/move_median/move_median_debug.c000066400000000000000000000207141300536544100246520ustar00rootroot00000000000000#include "move_median.h" ai_t *mm_move_median(ai_t *a, idx_t length, idx_t window, idx_t min_count); int mm_assert_equal(ai_t *actual, ai_t *desired, ai_t *input, idx_t length, char *err_msg); int mm_unit_test(void); void mm_dump(mm_handle *mm); void mm_print_binary_heap(mm_node **heap, idx_t n_array, idx_t oldest_idx, idx_t newest_idx); void mm_check(mm_handle *mm); void mm_print_chain(mm_handle *mm); void mm_print_line(void); void mm_print_node(mm_node *node); int main(void) { return mm_unit_test(); } /* moving window median of 1d arrays returns output array */ ai_t *mm_move_median(ai_t *a, idx_t length, idx_t window, idx_t min_count) { mm_handle *mm; ai_t *out; idx_t i; out = malloc(length * sizeof(ai_t)); mm = mm_new_nan(window, min_count); for (i=0; i < length; i++) { if (i < window) { out[i] = mm_update_init_nan(mm, a[i]); } else { out[i] = mm_update_nan(mm, a[i]); } if (i == window) { mm_print_line(); printf("window complete; switch to mm_update\n"); } mm_print_line(); printf("inserting ai = %f\n", a[i]); mm_print_chain(mm); mm_dump(mm); printf("\nmedian = %f\n\n", out[i]); mm_check(mm); } mm_free(mm); return out; } /* assert that two arrays are equal */ int mm_assert_equal(ai_t *actual, ai_t *desired, ai_t *input, idx_t length, char *err_msg) { idx_t i; int failed = 0; mm_print_line(); printf("%s\n", err_msg); mm_print_line(); printf("input actual desired\n"); for (i=0; i < length; i++) { if (isnan(actual[i]) && isnan(desired[i])) { printf("%9f %9f %9f\n", input[i], actual[i], desired[i]); } else if (actual[i] != desired[i]) { failed = 1; printf("%9f %9f %9f BUG\n", input[i], actual[i], desired[i]); } else printf("%9f %9f %9f\n", input[i], actual[i], desired[i]); } return failed; } int 
mm_unit_test(void) { ai_t arr_input[] = {0, 3, 7, NAN, 1, 5, 8, 9, 2, NAN}; ai_t desired[] = {0 , 1.5, 3, 5, 4, 3, 5, 8, 8, 5.5}; ai_t *actual; int window = 3; int min_count = 1; int length; char *err_msg; int failed; length = sizeof(arr_input) / sizeof(*arr_input); err_msg = malloc(1024 * sizeof *err_msg); sprintf(err_msg, "move_median failed with window=%d, min_count=%d", window, min_count); actual = mm_move_median(arr_input, length, window, min_count); failed = mm_assert_equal(actual, desired, arr_input, length, err_msg); free(actual); free(err_msg); return failed; } void mm_print_node(mm_node *node) { printf("\n\n%d small\n", node->region); printf("%d idx\n", node->idx); printf("%f ai\n", node->ai); printf("%p next\n\n", node->next); } void mm_print_chain(mm_handle *mm) { idx_t i; mm_node *node; printf("\nchain\n"); node = mm->oldest; printf("\t%6.2f region %d idx %d addr %p\n", node->ai, node->region, node->idx, node); for (i=1; i < mm->n_s + mm->n_l + mm->n_n; i++) { node = node->next; printf("\t%6.2f region %d idx %d addr %p\n", node->ai, node->region, node->idx, node); } } void mm_check(mm_handle *mm) { int ndiff; idx_t i; mm_node *child; mm_node *parent; // small heap for (i=0; in_s; i++) { assert(mm->s_heap[i]->idx == i); assert(mm->s_heap[i]->ai == mm->s_heap[i]->ai); if (i > 0) { child = mm->s_heap[i]; parent = mm->s_heap[P_IDX(child->idx)]; assert(child->ai <= parent->ai); } } // large heap for (i=0; in_l; i++) { assert(mm->l_heap[i]->idx == i); assert(mm->l_heap[i]->ai == mm->l_heap[i]->ai); if (i > 0) { child = mm->l_heap[i]; parent = mm->l_heap[P_IDX(child->idx)]; assert(child->ai >= parent->ai); } } // nan array for (i=0; in_n; i++) { assert(mm->n_array[i]->idx == i); assert(mm->n_array[i]->ai != mm->n_array[i]->ai); } // handle assert(mm->window >= mm->n_s + mm->n_l + mm->n_n); assert(mm->min_count <= mm->window); if (mm->n_s == 0) { assert(mm->s_first_leaf == 0); } else { assert(mm->s_first_leaf == FIRST_LEAF(mm->n_s)); } if (mm->n_l == 0) { assert(mm->l_first_leaf == 0); } else { assert(mm->l_first_leaf == FIRST_LEAF(mm->n_l)); } ndiff = (int)mm->n_s - (int)mm->n_l; if (ndiff < 0) { ndiff *= -1; } assert(ndiff <= 1); if (mm->n_s > 0 && mm->n_l > 0) { assert(mm->s_heap[0]->ai <= mm->l_heap[0]->ai); } } /* Print the two heaps to the screen */ void mm_dump(mm_handle *mm) { int i; idx_t idx; if (!mm) { printf("mm is empty"); return; } printf("\nhandle\n"); printf("\t%2d window\n", mm->window); printf("\t%2d n_s\n", mm->n_s); printf("\t%2d n_l\n", mm->n_l); printf("\t%2d n_n\n", mm->n_n); printf("\t%2d min_count\n", mm->min_count); printf("\t%2d s_first_leaf\n", mm->s_first_leaf); printf("\t%2d l_first_leaf\n", mm->l_first_leaf); if (NUM_CHILDREN == 2) { // binary heap int idx0; int idx1; printf("\nsmall heap\n"); idx0 = -1; if (mm->oldest->region == SH) { idx0 = mm->oldest->idx; } idx1 = -1; if (mm->newest->region == SH) { idx1 = mm->newest->idx; } mm_print_binary_heap(mm->s_heap, mm->n_s, idx0, idx1); printf("\nlarge heap\n"); idx0 = -1; if (mm->oldest->region == LH) { idx0 = mm->oldest->idx; } idx1 = -1; if (mm->newest->region == LH) { idx1 = mm->newest->idx; } mm_print_binary_heap(mm->l_heap, mm->n_l, idx0, idx1); printf("\nnan array\n"); idx0 = -1; if (mm->oldest->region == NA) { idx0 = mm->oldest->idx; } idx1 = -1; if (mm->newest->region == NA) { idx1 = mm->newest->idx; } for(i = 0; i < (int)mm->n_n; ++i) { idx = mm->n_array[i]->idx; if (i == idx0 && i == idx1) { printf("\t%i >%f<\n", idx, mm->n_array[i]->ai); } else if (i == idx0) { printf("\t%i >%f\n", idx, 
mm->n_array[i]->ai); } else if (i == idx1) { printf("\t%i %f<\n", idx, mm->n_array[i]->ai); } else { printf("\t%i %f\n", idx, mm->n_array[i]->ai); } } } else { // not a binary heap if (mm->oldest) printf("\n\nFirst: %f\n", (double)mm->oldest->ai); if (mm->newest) printf("Last: %f\n", (double)mm->newest->ai); printf("\n\nSmall heap:\n"); for(i = 0; i < (int)mm->n_s; ++i) { printf("%i %f\n", (int)mm->s_heap[i]->idx, mm->s_heap[i]->ai); } printf("\n\nLarge heap:\n"); for(i = 0; i < (int)mm->n_l; ++i) { printf("%i %f\n", (int)mm->l_heap[i]->idx, mm->l_heap[i]->ai); } printf("\n\nNaN heap:\n"); for(i = 0; i < (int)mm->n_n; ++i) { printf("%i %f\n", (int)mm->n_array[i]->idx, mm->n_array[i]->ai); } } } /* Code to print a binary tree from http://stackoverflow.com/a/13755783 * Code modified for bottleneck's needs. */ void mm_print_binary_heap(mm_node **heap, idx_t n_array, idx_t oldest_idx, idx_t newest_idx) { const int line_width = 77; int print_pos[n_array]; int i, j, k, pos, x=1, level=0; print_pos[0] = 0; for(i=0,j=1; i<(int)n_array; i++,j++) { pos = print_pos[(i-1)/2]; pos += (i%2?-1:1)*(line_width/(pow(2,level+1))+1); for (k=0; k%.2f", heap[i]->ai); } else if (i == (int)newest_idx) { printf("%.2f<", heap[i]->ai); } else { printf("%.2f", heap[i]->ai); } print_pos[i] = x = pos+1; if (j==pow(2,level)) { printf("\n"); level++; x = 1; j = 0; } } } void mm_print_line(void) { int i, width = 70; for (i=0; i < width; i++) printf("-"); printf("\n"); } bottleneck-1.2.0/bottleneck/src/move_template.c000066400000000000000000001263011300536544100215760ustar00rootroot00000000000000#include "bottleneck.h" #include "iterators.h" #include "move_median/move_median.h" /* move_min, move_max, move_argmin, and move_argmax are based on the minimum on a sliding window algorithm by Richard Harter http://www.richardhartersworld.com/cri/2001/slidingmin.html Copyright Richard Harter 2009 Released under a Simplified BSD license Adapted, expanded, and added NaN handling for Bottleneck: Copyright 2010, 2014, 2015, 2016 Keith Goodman Released under the Bottleneck license */ /* macros ---------------------------------------------------------------- */ #define INIT(dtype) \ PyObject *y = PyArray_EMPTY(PyArray_NDIM(a), PyArray_SHAPE(a), dtype, 0); \ iter2 it; \ init_iter2(&it, a, y, axis); /* low-level functions such as move_sum_float64 */ #define MOVE(name, dtype) \ static PyObject * \ name##_##dtype(PyArrayObject *a, \ int window, \ int min_count, \ int axis, \ int ddof) /* top-level functions such as move_sum */ #define MOVE_MAIN(name, ddof) \ static PyObject * \ name(PyObject *self, PyObject *args, PyObject *kwds) \ { \ return mover(#name, \ args, \ kwds, \ name##_float64, \ name##_float32, \ name##_int64, \ name##_int32, \ ddof); \ } /* typedefs and prototypes ----------------------------------------------- */ /* used by move_min and move_max */ struct _pairs { double value; int death; }; typedef struct _pairs pairs; /* function pointer for functions passed to mover */ typedef PyObject *(*move_t)(PyArrayObject *, int, int, int, int); static PyObject * mover(char *name, PyObject *args, PyObject *kwds, move_t, move_t, move_t, move_t, int has_ddof); /* move_sum -------------------------------------------------------------- */ /* dtype = [['float64'], ['float32']] */ MOVE(move_sum, DTYPE0) { Py_ssize_t count; npy_DTYPE0 asum, ai, aold; INIT(NPY_DTYPE0) BN_BEGIN_ALLOW_THREADS WHILE { asum = count = 0; WHILE0 { ai = AI(DTYPE0); if (ai == ai) { asum += ai; count += 1; } YI(DTYPE0) = BN_NAN; } WHILE1 { ai = AI(DTYPE0); if (ai == 
ai) { asum += ai; count += 1; } YI(DTYPE0) = count >= min_count ? asum : BN_NAN; } WHILE2 { ai = AI(DTYPE0); aold = AOLD(DTYPE0); if (ai == ai) { if (aold == aold) { asum += ai - aold; } else { asum += ai; count++; } } else { if (aold == aold) { asum -= aold; count--; } } YI(DTYPE0) = count >= min_count ? asum : BN_NAN; } NEXT2 } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64', 'float64'], ['int32', 'float64']] */ MOVE(move_sum, DTYPE0) { npy_DTYPE1 asum; INIT(NPY_DTYPE1) BN_BEGIN_ALLOW_THREADS WHILE { asum = 0; WHILE0 { asum += AI(DTYPE0); YI(DTYPE1) = BN_NAN; } WHILE1 { asum += AI(DTYPE0); YI(DTYPE1) = asum; } WHILE2 { asum += AI(DTYPE0) - AOLD(DTYPE0); YI(DTYPE1) = asum; } NEXT2 } BN_END_ALLOW_THREADS return y; } /* dtype end */ MOVE_MAIN(move_sum, 0) /* move_mean -------------------------------------------------------------- */ /* dtype = [['float64'], ['float32']] */ MOVE(move_mean, DTYPE0) { Py_ssize_t count; npy_DTYPE0 asum, ai, aold, count_inv; INIT(NPY_DTYPE0) BN_BEGIN_ALLOW_THREADS WHILE { asum = count = 0; WHILE0 { ai = AI(DTYPE0); if (ai == ai) { asum += ai; count += 1; } YI(DTYPE0) = BN_NAN; } WHILE1 { ai = AI(DTYPE0); if (ai == ai) { asum += ai; count += 1; } YI(DTYPE0) = count >= min_count ? asum / count : BN_NAN; } count_inv = 1.0 / count; WHILE2 { ai = AI(DTYPE0); aold = AOLD(DTYPE0); if (ai == ai) { if (aold == aold) { asum += ai - aold; } else { asum += ai; count++; count_inv = 1.0 / count; } } else { if (aold == aold) { asum -= aold; count--; count_inv = 1.0 / count; } } YI(DTYPE0) = count >= min_count ? asum * count_inv : BN_NAN; } NEXT2 } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64', 'float64'], ['int32', 'float64']] */ MOVE(move_mean, DTYPE0) { npy_DTYPE1 asum, window_inv = 1.0 / window; INIT(NPY_DTYPE1) BN_BEGIN_ALLOW_THREADS WHILE { asum = 0; WHILE0 { asum += AI(DTYPE0); YI(DTYPE1) = BN_NAN; } WHILE1 { asum += AI(DTYPE0); *(npy_DTYPE1*)(it.py + it.i * it.ystride) = (npy_DTYPE1)asum / (it.i + 1); it.i++; } WHILE2 { asum += AI(DTYPE0) - AOLD(DTYPE0); YI(DTYPE1) = (npy_DTYPE1)asum * window_inv; } NEXT2 } BN_END_ALLOW_THREADS return y; } /* dtype end */ MOVE_MAIN(move_mean, 0) /* move_std, move_var ---------------------------------------------------- */ /* repeat = {'NAME': ['move_std', 'move_var'], 'FUNC': ['sqrt', '']} */ /* dtype = [['float64'], ['float32']] */ MOVE(NAME, DTYPE0) { Py_ssize_t count; npy_DTYPE0 delta, amean, assqdm, ai, aold, yi, count_inv, ddof_inv; INIT(NPY_DTYPE0) BN_BEGIN_ALLOW_THREADS WHILE { amean = assqdm = count = 0; WHILE0 { ai = AI(DTYPE0); if (ai == ai) { count += 1; delta = ai - amean; amean += delta / count; assqdm += delta * (ai - amean); } YI(DTYPE0) = BN_NAN; } WHILE1 { ai = AI(DTYPE0); if (ai == ai) { count += 1; delta = ai - amean; amean += delta / count; assqdm += delta * (ai - amean); } if (count >= min_count) { if (assqdm < 0) { assqdm = 0; } yi = FUNC(assqdm / (count - ddof)); } else { yi = BN_NAN; } YI(DTYPE0) = yi; } count_inv = 1.0 / count; ddof_inv = 1.0 / (count - ddof); WHILE2 { ai = AI(DTYPE0); aold = AOLD(DTYPE0); if (ai == ai) { if (aold == aold) { delta = ai - aold; aold -= amean; amean += delta * count_inv; ai -= amean; assqdm += (ai + aold) * delta; } else { count++; count_inv = 1.0 / count; ddof_inv = 1.0 / (count - ddof); delta = ai - amean; amean += delta * count_inv; assqdm += delta * (ai - amean); } } else { if (aold == aold) { count--; count_inv = 1.0 / count; ddof_inv = 1.0 / (count - ddof); if (count > 0) { delta = aold - amean; amean -= delta * count_inv; assqdm 
-= delta * (aold - amean); } else { amean = 0; assqdm = 0; } } } if (count >= min_count) { if (assqdm < 0) { assqdm = 0; } yi = FUNC(assqdm * ddof_inv); } else { yi = BN_NAN; } YI(DTYPE0) = yi; } NEXT2 } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64', 'float64'], ['int32', 'float64']] */ MOVE(NAME, DTYPE0) { int winddof = window - ddof; npy_DTYPE1 delta, amean, assqdm, yi, ai, aold; npy_DTYPE1 window_inv = 1.0 / window, winddof_inv = 1.0 / winddof; INIT(NPY_DTYPE1) BN_BEGIN_ALLOW_THREADS WHILE { amean = assqdm = 0; WHILE0 { ai = AI(DTYPE0); delta = ai - amean; amean += delta / (INDEX + 1); assqdm += delta * (ai - amean); YI(DTYPE1) = BN_NAN; } WHILE1 { ai = AI(DTYPE0); delta = ai - amean; amean += delta / (INDEX + 1); assqdm += delta * (ai - amean); yi = FUNC(assqdm / (INDEX + 1 - ddof)); YI(DTYPE1) = yi; } WHILE2 { ai = AI(DTYPE0); aold = AOLD(DTYPE0); delta = ai - aold; aold -= amean; amean += delta * window_inv; ai -= amean; assqdm += (ai + aold) * delta; if (assqdm < 0) { assqdm = 0; } YI(DTYPE1) = FUNC(assqdm * winddof_inv); } NEXT2 } BN_END_ALLOW_THREADS return y; } /* dtype end */ MOVE_MAIN(NAME, 1) /* repeat end */ /* move_min, move_max, move_argmin, move_argmax -------------------------- */ /* repeat = {'MACRO_FLOAT': ['MOVE_NANMIN', 'MOVE_NANMAX'], 'MACRO_INT': ['MOVE_MIN', 'MOVE_MAX'], 'COMPARE': ['<=', '>='], 'FLIP': ['>=', '<='], 'BIG_FLOAT': ['BN_INFINITY', '-BN_INFINITY']} */ #define MACRO_FLOAT(dtype, yi, code) \ ai = AI(dtype); \ if (ai == ai) count++; else ai = BIG_FLOAT; \ code; \ if (ai COMPARE extreme_pair->value) { \ extreme_pair->value = ai; \ extreme_pair->death = INDEX + window; \ last = extreme_pair; \ } \ else { \ while (last->value FLIP ai) { \ if (last == ring) last = end; \ last--; \ } \ last++; \ if (last == end) last = ring; \ last->value = ai; \ last->death = INDEX + window; \ } \ yi_tmp = yi; /* yi might contain i and YI contains i++ */ \ YI(dtype) = yi_tmp; #define MACRO_INT(a_dtype, y_dtype, yi, code) \ ai = AI(a_dtype); \ code; \ if (ai COMPARE extreme_pair->value) { \ extreme_pair->value = ai; \ extreme_pair->death = INDEX + window; \ last = extreme_pair; \ } \ else { \ while (last->value FLIP ai) { \ if (last == ring) last = end; \ last--; \ } \ last++; \ if (last == end) last = ring; \ last->value = ai; \ last->death = INDEX + window; \ } \ yi_tmp = yi; \ YI(y_dtype) = yi_tmp; /* repeat end */ /* repeat = { 'NAME': ['move_min', 'move_max', 'move_argmin', 'move_argmax'], 'MACRO_FLOAT': ['MOVE_NANMIN', 'MOVE_NANMAX', 'MOVE_NANMIN', 'MOVE_NANMAX'], 'MACRO_INT': ['MOVE_MIN', 'MOVE_MAX', 'MOVE_MIN', 'MOVE_MAX'], 'COMPARE': ['<=', '>=', '<=', '>='], 'BIG_FLOAT': ['BN_INFINITY', '-BN_INFINITY', 'BN_INFINITY', '-BN_INFINITY'], 'BIG_INT': ['NPY_MAX_DTYPE0', 'NPY_MIN_DTYPE0', 'NPY_MAX_DTYPE0', 'NPY_MIN_DTYPE0'], 'VALUE': ['extreme_pair->value', 'extreme_pair->value', 'INDEX-extreme_pair->death+window', 'INDEX-extreme_pair->death+window'] } */ /* dtype = [['float64'], ['float32']] */ MOVE(NAME, DTYPE0) { npy_DTYPE0 ai, aold, yi_tmp; Py_ssize_t count; pairs *extreme_pair; pairs *end; pairs *last; pairs *ring = (pairs *)malloc(window * sizeof(pairs)); INIT(NPY_DTYPE0) BN_BEGIN_ALLOW_THREADS WHILE { count = 0; end = ring + window; last = ring; extreme_pair = ring; ai = A0(DTYPE0); extreme_pair->value = ai == ai ? ai : BIG_FLOAT; extreme_pair->death = window; WHILE0 { MACRO_FLOAT(DTYPE0, BN_NAN, ) } WHILE1 { MACRO_FLOAT(DTYPE0, count >= min_count ? VALUE : BN_NAN, ) } WHILE2 { MACRO_FLOAT(DTYPE0, count >= min_count ? 
VALUE : BN_NAN, aold = AOLD(DTYPE0); if (aold == aold) count--; if (extreme_pair->death == INDEX) { extreme_pair++; if (extreme_pair >= end) extreme_pair = ring; }) } NEXT2 } free(ring); BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64', 'float64'], ['int32', 'float64']] */ MOVE(NAME, DTYPE0) { npy_DTYPE0 ai; npy_DTYPE1 yi_tmp; pairs *extreme_pair; pairs *end; pairs *last; pairs *ring = (pairs *)malloc(window * sizeof(pairs)); INIT(NPY_DTYPE1) BN_BEGIN_ALLOW_THREADS WHILE { end = ring + window; last = ring; extreme_pair = ring; ai = A0(DTYPE0); extreme_pair->value = ai; extreme_pair->death = window; WHILE0 { MACRO_INT(DTYPE0, DTYPE1, BN_NAN, ) } WHILE1 { MACRO_INT(DTYPE0, DTYPE1, VALUE, ) } WHILE2 { MACRO_INT(DTYPE0, DTYPE1, VALUE, if (extreme_pair->death == INDEX) { extreme_pair++; if (extreme_pair >= end) extreme_pair = ring; }) } NEXT2 } free(ring); BN_END_ALLOW_THREADS return y; } /* dtype end */ MOVE_MAIN(NAME, 0) /* repeat end */ /* move_median ----------------------------------------------------------- */ /* dtype = [['float64'], ['float32']] */ MOVE(move_median, DTYPE0) { npy_DTYPE0 ai; mm_handle *mm = mm_new_nan(window, min_count); INIT(NPY_DTYPE0) if (window == 1) { mm_free(mm); return PyArray_Copy(a); } if (mm == NULL) { MEMORY_ERR("Could not allocate memory for move_median"); } BN_BEGIN_ALLOW_THREADS WHILE { WHILE0 { ai = AI(DTYPE0); YI(DTYPE0) = mm_update_init_nan(mm, ai); } WHILE1 { ai = AI(DTYPE0); YI(DTYPE0) = mm_update_init_nan(mm, ai); } WHILE2 { ai = AI(DTYPE0); YI(DTYPE0) = mm_update_nan(mm, ai); } mm_reset(mm); NEXT2 } mm_free(mm); BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64', 'float64'], ['int32', 'float64']] */ MOVE(move_median, DTYPE0) { npy_DTYPE0 ai; mm_handle *mm = mm_new(window, min_count); INIT(NPY_DTYPE1) if (window == 1) { return PyArray_CastToType(a, PyArray_DescrFromType(NPY_DTYPE1), PyArray_CHKFLAGS(a, NPY_ARRAY_F_CONTIGUOUS)); } if (mm == NULL) { MEMORY_ERR("Could not allocate memory for move_median"); } BN_BEGIN_ALLOW_THREADS WHILE { WHILE0 { ai = AI(DTYPE0); YI(DTYPE1) = mm_update_init(mm, ai); } WHILE1 { ai = AI(DTYPE0); YI(DTYPE1) = mm_update_init(mm, ai); } WHILE2 { ai = AI(DTYPE0); YI(DTYPE1) = mm_update(mm, ai); } mm_reset(mm); NEXT2 } mm_free(mm); BN_END_ALLOW_THREADS return y; } /* dtype end */ MOVE_MAIN(move_median, 0) /* move_rank-------------------------------------------------------------- */ #define MOVE_RANK(dtype0, dtype1, limit) \ Py_ssize_t j; \ npy_##dtype0 ai, aj; \ npy_##dtype1 g, e, n, r; \ ai = AI(dtype0); \ if (ai == ai) { \ g = 0; \ e = 1; \ n = 1; \ r = 0; \ for (j = limit; j < INDEX; j++) { \ aj = AX(dtype0, j); \ if (aj == aj) { \ n++; \ if (ai > aj) g += 2; \ else if (ai == aj) e++; \ } \ } \ if (n < min_count) { \ r = BN_NAN; \ } \ else if (n == 1) { \ r = 0.0; \ } \ else { \ r = 0.5 * (g + e - 1.0); \ r = r / (n - 1.0); \ r = 2.0 * (r - 0.5); \ } \ } \ else { \ r = BN_NAN; \ } \ /* dtype = [['float64', 'float64'], ['float32', 'float32']] */ MOVE(move_rank, DTYPE0) { INIT(NPY_DTYPE1) BN_BEGIN_ALLOW_THREADS WHILE { WHILE0 { YI(DTYPE1) = BN_NAN; } WHILE1 { MOVE_RANK(DTYPE0, DTYPE1, 0) YI(DTYPE1) = r; } WHILE2 { MOVE_RANK(DTYPE0, DTYPE1, INDEX - window + 1) YI(DTYPE1) = r; } NEXT2 } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64', 'float64'], ['int32', 'float64']] */ MOVE(move_rank, DTYPE0) { Py_ssize_t j; npy_DTYPE0 ai, aj; npy_DTYPE1 g, e, r, window_inv = 0.5 * 1.0 / (window - 1); INIT(NPY_DTYPE1) BN_BEGIN_ALLOW_THREADS WHILE { WHILE0 { YI(DTYPE1) = BN_NAN; } 
WHILE1 { ai = AI(DTYPE0); g = 0; e = 1; r = 0; for (j = 0; j < INDEX; j++) { aj = AX(DTYPE0, j); if (ai > aj) g += 2; else if (ai == aj) e++; } if (INDEX < min_count - 1) { r = BN_NAN; } else if (INDEX == 0) { r = 0.0; } else { r = 0.5 * (g + e - 1.0); r = r / INDEX; r = 2.0 * (r - 0.5); } YI(DTYPE1) = r; } WHILE2 { ai = AI(DTYPE0); g = 0; e = 1; r = 0; for (j = INDEX - window + 1; j < INDEX; j++) { aj = AX(DTYPE0, j); if (aj == aj) { if (ai > aj) g += 2; else if (ai == aj) e++; } } if (window == 1) { r = 0.0; } else { r = window_inv * (g + e - 1.0); r = 2.0 * (r - 0.5); } YI(DTYPE1) = r; } NEXT2 } BN_END_ALLOW_THREADS return y; } /* dtype end */ MOVE_MAIN(move_rank, 0) /* python strings -------------------------------------------------------- */ PyObject *pystr_a = NULL; PyObject *pystr_window = NULL; PyObject *pystr_min_count = NULL; PyObject *pystr_axis = NULL; PyObject *pystr_ddof = NULL; static int intern_strings(void) { pystr_a = PyString_InternFromString("a"); pystr_window = PyString_InternFromString("window"); pystr_min_count = PyString_InternFromString("min_count"); pystr_axis = PyString_InternFromString("axis"); pystr_ddof = PyString_InternFromString("ddof"); return pystr_a && pystr_window && pystr_min_count && pystr_axis && pystr_ddof; } /* mover ----------------------------------------------------------------- */ static BN_INLINE int parse_args(PyObject *args, PyObject *kwds, int has_ddof, PyObject **a, PyObject **window, PyObject **min_count, PyObject **axis, PyObject **ddof) { const Py_ssize_t nargs = PyTuple_GET_SIZE(args); const Py_ssize_t nkwds = kwds == NULL ? 0 : PyDict_Size(kwds); if (nkwds) { int nkwds_found = 0; PyObject *tmp; switch (nargs) { case 4: if (has_ddof) { *axis = PyTuple_GET_ITEM(args, 3); } else { TYPE_ERR("wrong number of arguments"); return 0; } case 3: *min_count = PyTuple_GET_ITEM(args, 2); case 2: *window = PyTuple_GET_ITEM(args, 1); case 1: *a = PyTuple_GET_ITEM(args, 0); case 0: break; default: TYPE_ERR("wrong number of arguments"); return 0; } switch (nargs) { case 0: *a = PyDict_GetItem(kwds, pystr_a); if (*a == NULL) { TYPE_ERR("Cannot find `a` keyword input"); return 0; } nkwds_found += 1; case 1: *window = PyDict_GetItem(kwds, pystr_window); if (*window == NULL) { TYPE_ERR("Cannot find `window` keyword input"); return 0; } nkwds_found++; case 2: tmp = PyDict_GetItem(kwds, pystr_min_count); if (tmp != NULL) { *min_count = tmp; nkwds_found++; } case 3: tmp = PyDict_GetItem(kwds, pystr_axis); if (tmp != NULL) { *axis = tmp; nkwds_found++; } case 4: if (has_ddof) { tmp = PyDict_GetItem(kwds, pystr_ddof); if (tmp != NULL) { *ddof = tmp; nkwds_found++; } break; } break; default: TYPE_ERR("wrong number of arguments"); return 0; } if (nkwds_found != nkwds) { TYPE_ERR("wrong number of keyword arguments"); return 0; } if (nargs + nkwds_found > 4 + has_ddof) { TYPE_ERR("too many arguments"); return 0; } } else { switch (nargs) { case 5: if (has_ddof) { *ddof = PyTuple_GET_ITEM(args, 4); } else { TYPE_ERR("wrong number of arguments"); return 0; } case 4: *axis = PyTuple_GET_ITEM(args, 3); case 3: *min_count = PyTuple_GET_ITEM(args, 2); case 2: *window = PyTuple_GET_ITEM(args, 1); *a = PyTuple_GET_ITEM(args, 0); break; default: TYPE_ERR("wrong number of arguments"); return 0; } } return 1; } static PyObject * mover(char *name, PyObject *args, PyObject *kwds, move_t move_float64, move_t move_float32, move_t move_int64, move_t move_int32, int has_ddof) { int mc; int window; int axis; int ddof; int dtype; int ndim; Py_ssize_t length; PyArrayObject *a; 
PyObject *a_obj = NULL; PyObject *window_obj = NULL; PyObject *min_count_obj = Py_None; PyObject *axis_obj = NULL; PyObject *ddof_obj = NULL; if (!parse_args(args, kwds, has_ddof, &a_obj, &window_obj, &min_count_obj, &axis_obj, &ddof_obj)) { return NULL; } /* convert to array if necessary */ if PyArray_Check(a_obj) { a = (PyArrayObject *)a_obj; } else { a = (PyArrayObject *)PyArray_FROM_O(a_obj); if (a == NULL) { return NULL; } } /* check for byte swapped input array */ if PyArray_ISBYTESWAPPED(a) { return slow(name, args, kwds); } /* window */ window = PyArray_PyIntAsInt(window_obj); if (error_converting(window)) { TYPE_ERR("`window` must be an integer"); return NULL; } /* min_count */ if (min_count_obj == Py_None) { mc = window; } else { mc = PyArray_PyIntAsInt(min_count_obj); if (error_converting(mc)) { TYPE_ERR("`min_count` must be an integer or None"); return NULL; } if (mc > window) { PyErr_Format(PyExc_ValueError, "min_count (%d) cannot be greater than window (%d)", mc, window); return NULL; } else if (mc <= 0) { VALUE_ERR("`min_count` must be greater than zero."); return NULL; } } ndim = PyArray_NDIM(a); /* defend against 0d beings */ if (ndim == 0) { VALUE_ERR("moving window functions require ndim > 0"); return NULL; } /* defend against the axis of negativity */ if (axis_obj == NULL) { axis = ndim - 1; } else { axis = PyArray_PyIntAsInt(axis_obj); if (error_converting(axis)) { TYPE_ERR("`axis` must be an integer"); return NULL; } if (axis < 0) { axis += ndim; if (axis < 0) { PyErr_Format(PyExc_ValueError, "axis(=%d) out of bounds", axis); return NULL; } } else if (axis >= ndim) { PyErr_Format(PyExc_ValueError, "axis(=%d) out of bounds", axis); return NULL; } } /* ddof */ if (ddof_obj == NULL) { ddof = 0; } else { ddof = PyArray_PyIntAsInt(ddof_obj); if (error_converting(ddof)) { TYPE_ERR("`ddof` must be an integer"); return NULL; } } length = PyArray_DIM(a, axis); if ((window < 1) || (window > length)) { PyErr_Format(PyExc_ValueError, "Moving window (=%d) must be between 1 and %zu, inclusive", window, length); return NULL; } dtype = PyArray_TYPE(a); if (dtype == NPY_float64) { return move_float64(a, window, mc, axis, ddof); } else if (dtype == NPY_float32) { return move_float32(a, window, mc, axis, ddof); } else if (dtype == NPY_int64) { return move_int64(a, window, mc, axis, ddof); } else if (dtype == NPY_int32) { return move_int32(a, window, mc, axis, ddof); } else { return slow(name, args, kwds); } } /* docstrings ------------------------------------------------------------- */ static char move_doc[] = "Bottleneck moving window functions."; static char move_sum_doc[] = /* MULTILINE STRING BEGIN move_sum(a, window, min_count=None, axis=-1) Moving window sum along the specified axis, optionally ignoring NaNs. This function cannot handle input arrays that contain Inf. When the window contains Inf, the output will correctly be Inf. However, when Inf moves out of the window, the remaining output values in the slice will incorrectly be NaN. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed.
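Notes
-----
Internally the window sum is updated incrementally: each step adds the
element entering the window and, once the window is full, subtracts the
element leaving it, so the cost is O(n) per slice rather than
O(n * window). A rough pure-Python sketch of the NaN-aware float kernel
(illustrative only; `move_sum_py` is a hypothetical name and 1d input
is assumed):

    import numpy as np

    def move_sum_py(a, window, min_count):
        y = np.full(len(a), np.nan)
        asum, count = 0.0, 0
        for i, ai in enumerate(a):
            if ai == ai:               # ai is not NaN
                asum += ai
                count += 1
            if i >= window:            # drop the element that left the window
                aold = a[i - window]
                if aold == aold:
                    asum -= aold
                    count -= 1
            if count >= min_count:     # else leave y[i] as NaN
                y[i] = asum
        return y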
Returns ------- y : ndarray The moving sum of the input array along the specified axis. The output has the same shape as the input. Examples -------- >>> a = np.array([1.0, 2.0, 3.0, np.nan, 5.0]) >>> bn.move_sum(a, window=2) array([ nan, 3., 5., nan, nan]) >>> bn.move_sum(a, window=2, min_count=1) array([ 1., 3., 5., 3., 5.]) MULTILINE STRING END */ static char move_mean_doc[] = /* MULTILINE STRING BEGIN move_mean(a, window, min_count=None, axis=-1) Moving window mean along the specified axis, optionally ignoring NaNs. This function cannot handle input arrays that contain Inf. When the window contains Inf, the output will correctly be Inf. However, when Inf moves out of the window, the remaining output values in the slice will incorrectly be NaN. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed. Returns ------- y : ndarray The moving mean of the input array along the specified axis. The output has the same shape as the input. Examples -------- >>> a = np.array([1.0, 2.0, 3.0, np.nan, 5.0]) >>> bn.move_mean(a, window=2) array([ nan, 1.5, 2.5, nan, nan]) >>> bn.move_mean(a, window=2, min_count=1) array([ 1. , 1.5, 2.5, 3. , 5. ]) MULTILINE STRING END */ static char move_std_doc[] = /* MULTILINE STRING BEGIN move_std(a, window, min_count=None, axis=-1, ddof=0) Moving window standard deviation along the specified axis, optionally ignoring NaNs. This function cannot handle input arrays that contain Inf. When Inf enters the moving window, the output becomes NaN and will continue to be NaN for the remainder of the slice. Unlike bn.nanstd, which uses a two-pass algorithm, move_std uses a one-pass algorithm called Welford's method. The algorithm is slow but numerically stable for cases where the mean is large compared to the standard deviation. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed. ddof : int, optional Means Delta Degrees of Freedom. The divisor used in calculations is ``N - ddof``, where ``N`` represents the number of elements. By default `ddof` is zero. Returns ------- y : ndarray The moving standard deviation of the input array along the specified axis. The output has the same shape as the input. Examples -------- >>> a = np.array([1.0, 2.0, 3.0, np.nan, 5.0]) >>> bn.move_std(a, window=2) array([ nan, 0.5, 0.5, nan, nan]) >>> bn.move_std(a, window=2, min_count=1) array([ 0. , 0.5, 0.5, 0. , 0. ]) MULTILINE STRING END */ static char move_var_doc[] = /* MULTILINE STRING BEGIN move_var(a, window, min_count=None, axis=-1, ddof=0) Moving window variance along the specified axis, optionally ignoring NaNs.
This function cannot handle input arrays that contain Inf. When Inf enters the moving window, the output becomes NaN and will continue to be NaN for the remainder of the slice. Unlike bn.nanvar, which uses a two-pass algorithm, move_var uses a one-pass algorithm called Welford's method. The algorithm is slow but numerically stable for cases where the mean is large compared to the standard deviation. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed. ddof : int, optional Means Delta Degrees of Freedom. The divisor used in calculations is ``N - ddof``, where ``N`` represents the number of elements. By default `ddof` is zero. Returns ------- y : ndarray The moving variance of the input array along the specified axis. The output has the same shape as the input. Examples -------- >>> a = np.array([1.0, 2.0, 3.0, np.nan, 5.0]) >>> bn.move_var(a, window=2) array([ nan, 0.25, 0.25, nan, nan]) >>> bn.move_var(a, window=2, min_count=1) array([ 0. , 0.25, 0.25, 0. , 0. ]) MULTILINE STRING END */ static char move_min_doc[] = /* MULTILINE STRING BEGIN move_min(a, window, min_count=None, axis=-1) Moving window minimum along the specified axis, optionally ignoring NaNs. float64 output is returned for all input data types. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed. Returns ------- y : ndarray The moving minimum of the input array along the specified axis. The output has the same shape as the input. The dtype of the output is always float64. Examples -------- >>> a = np.array([1.0, 2.0, 3.0, np.nan, 5.0]) >>> bn.move_min(a, window=2) array([ nan, 1., 2., nan, nan]) >>> bn.move_min(a, window=2, min_count=1) array([ 1., 1., 2., 3., 5.]) MULTILINE STRING END */ static char move_max_doc[] = /* MULTILINE STRING BEGIN move_max(a, window, min_count=None, axis=-1) Moving window maximum along the specified axis, optionally ignoring NaNs. float64 output is returned for all input data types. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed. Returns ------- y : ndarray The moving maximum of the input array along the specified axis. The output has the same shape as the input.
The dtype of the output is always float64. Examples -------- >>> a = np.array([1.0, 2.0, 3.0, np.nan, 5.0]) >>> bn.move_max(a, window=2) array([ nan, 2., 3., nan, nan]) >>> bn.move_max(a, window=2, min_count=1) array([ 1., 2., 3., 3., 5.]) MULTILINE STRING END */ static char move_argmin_doc[] = /* MULTILINE STRING BEGIN move_argmin(a, window, min_count=None, axis=-1) Moving window index of minimum along the specified axis, optionally ignoring NaNs. Index 0 is at the rightmost edge of the window. For example, if the array is monotonically decreasing (increasing) along the specified axis then the output array will contain zeros (window-1). If there is a tie in input values within a window, then the rightmost index is returned. float64 output is returned for all input data types. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed. Returns ------- y : ndarray The moving index of minimum values of the input array along the specified axis. The output has the same shape as the input. The dtype of the output is always float64. Examples -------- >>> a = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) >>> bn.move_argmin(a, window=2) array([ nan, 1., 1., 1., 1.]) >>> a = np.array([5.0, 4.0, 3.0, 2.0, 1.0]) >>> bn.move_argmin(a, window=2) array([ nan, 0., 0., 0., 0.]) >>> a = np.array([2.0, 3.0, 4.0, 1.0, 7.0, 5.0, 6.0]) >>> bn.move_argmin(a, window=3) array([ nan, nan, 2., 0., 1., 2., 1.]) MULTILINE STRING END */ static char move_argmax_doc[] = /* MULTILINE STRING BEGIN move_argmax(a, window, min_count=None, axis=-1) Moving window index of maximum along the specified axis, optionally ignoring NaNs. Index 0 is at the rightmost edge of the window. For example, if the array is monotonically increasing (decreasing) along the specified axis then the output array will contain zeros (window-1). If there is a tie in input values within a window, then the rightmost index is returned. float64 output is returned for all input data types. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed. Returns ------- y : ndarray The moving index of maximum values of the input array along the specified axis. The output has the same shape as the input. The dtype of the output is always float64. 
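Notes
-----
For a full window with no ties and no NaNs, the output can be checked
against numpy (an illustrative equivalence, not how the C kernel
computes it; on ties numpy's argmax picks the leftmost element whereas
move_argmax picks the rightmost):

    >>> a = np.array([2.0, 3.0, 4.0, 1.0, 7.0, 5.0, 6.0])
    >>> window = 3
    >>> [(window - 1) - int(np.argmax(a[i - window + 1:i + 1]))
    ...  for i in range(window - 1, len(a))]
    [0, 1, 0, 1, 2]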
Examples -------- >>> a = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) >>> bn.move_argmax(a, window=2) array([ nan, 0., 0., 0., 0.]) >>> a = np.array([5.0, 4.0, 3.0, 2.0, 1.0]) >>> bn.move_argmax(a, window=2) array([ nan, 1., 1., 1., 1.]) >>> a = np.array([2.0, 3.0, 4.0, 1.0, 7.0, 5.0, 6.0]) >>> bn.move_argmax(a, window=3) array([ nan, nan, 0., 1., 0., 1., 2.]) MULTILINE STRING END */ static char move_median_doc[] = /* MULTILINE STRING BEGIN move_median(a, window, min_count=None, axis=-1) Moving window median along the specified axis, optionally ignoring NaNs. float64 output is returned for all input data types. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed. Returns ------- y : ndarray The moving median of the input array along the specified axis. The output has the same shape as the input. Examples -------- >>> a = np.array([1.0, 2.0, 3.0, 4.0]) >>> bn.move_median(a, window=2) array([ nan, 1.5, 2.5, 3.5]) >>> bn.move_median(a, window=2, min_count=1) array([ 1. , 1.5, 2.5, 3.5]) MULTILINE STRING END */ static char move_rank_doc[] = /* MULTILINE STRING BEGIN move_rank(a, window, min_count=None, axis=-1) Moving window ranking along the specified axis, optionally ignoring NaNs. The output is normalized to be between -1 and 1. For example, with a window width of 3 (and with no ties), the possible output values are -1, 0, 1. Ties are broken by averaging the rankings. See the examples below. The runtime depends almost linearly on `window`. The more NaNs there are in the input array, the shorter the runtime. Parameters ---------- a : ndarray Input array. If `a` is not an array, a conversion is attempted. window : int The number of elements in the moving window. min_count: {int, None}, optional If the number of non-NaN values in a window is less than `min_count`, then a value of NaN is assigned to the window. By default `min_count` is None, which is equivalent to setting `min_count` equal to `window`. axis : int, optional The axis over which the window is moved. By default the last axis (axis=-1) is used. An axis of None is not allowed. Returns ------- y : ndarray The moving ranking along the specified axis. The output has the same shape as the input. For integer input arrays, the dtype of the output is float64. Examples -------- With window=3 and no ties, there are 3 possible output values, i.e. [-1., 0., 1.]: >>> a = np.array([1, 2, 3, 9, 8, 7, 5, 6, 4]) >>> bn.move_rank(a, window=3) array([ nan, nan, 1., 1., 0., -1., -1., 0., -1.]) Ties are broken by averaging the rankings of the tied elements: >>> a = np.array([1, 2, 3, 3, 3, 4]) >>> bn.move_rank(a, window=3) array([ nan, nan, 1. , 0.5, 0. , 1. 
]) In an increasing sequence, the moving window ranking is always equal to 1: >>> a = np.array([1, 2, 3, 4, 5]) >>> bn.move_rank(a, window=2) array([ nan, 1., 1., 1., 1.]) MULTILINE STRING END */ /* python wrapper -------------------------------------------------------- */ static PyMethodDef move_methods[] = { {"move_sum", (PyCFunction)move_sum, VARKEY, move_sum_doc}, {"move_mean", (PyCFunction)move_mean, VARKEY, move_mean_doc}, {"move_std", (PyCFunction)move_std, VARKEY, move_std_doc}, {"move_var", (PyCFunction)move_var, VARKEY, move_var_doc}, {"move_min", (PyCFunction)move_min, VARKEY, move_min_doc}, {"move_max", (PyCFunction)move_max, VARKEY, move_max_doc}, {"move_argmin", (PyCFunction)move_argmin, VARKEY, move_argmin_doc}, {"move_argmax", (PyCFunction)move_argmax, VARKEY, move_argmax_doc}, {"move_median", (PyCFunction)move_median, VARKEY, move_median_doc}, {"move_rank", (PyCFunction)move_rank, VARKEY, move_rank_doc}, {NULL, NULL, 0, NULL} }; #if PY_MAJOR_VERSION >= 3 static struct PyModuleDef move_def = { PyModuleDef_HEAD_INIT, "move", move_doc, -1, move_methods }; #endif PyMODINIT_FUNC #if PY_MAJOR_VERSION >= 3 #define RETVAL m PyInit_move(void) #else #define RETVAL initmove(void) #endif { #if PY_MAJOR_VERSION >=3 PyObject *m = PyModule_Create(&move_def); #else PyObject *m = Py_InitModule3("move", move_methods, move_doc); #endif if (m == NULL) return RETVAL; import_array(); if (!intern_strings()) { return RETVAL; } return RETVAL; } bottleneck-1.2.0/bottleneck/src/nonreduce_axis_template.c000066400000000000000000000715401300536544100236420ustar00rootroot00000000000000#include "bottleneck.h" #include "iterators.h" /* function signatures --------------------------------------------------- */ /* low-level functions such as partition_float64 */ #define NRA(name, dtype) \ static PyObject * \ name##_##dtype(PyArrayObject *a, int axis, int n) /* top-level functions such as partition */ #define NRA_MAIN(name, parse) \ static PyObject * \ name(PyObject *self, PyObject *args, PyObject *kwds) \ { \ return nonreducer_axis(#name, \ args, \ kwds, \ name##_float64, \ name##_float32, \ name##_int64, \ name##_int32, \ parse); \ } /* typedefs and prototypes ----------------------------------------------- */ /* how should input be parsed?
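   partition and argpartition take (a, kth, axis), rankdata and
   nanrankdata take (a, axis), and push takes (a, n, axis); the
   parse_type enum below tells nonreducer_axis which of these
   signatures to expect.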
*/ typedef enum {PARSE_PARTITION, PARSE_RANKDATA, PARSE_PUSH} parse_type; /* function pointer for functions passed to nonreducer_axis */ typedef PyObject *(*nra_t)(PyArrayObject *, int, int); static PyObject * nonreducer_axis(char *name, PyObject *args, PyObject *kwds, nra_t, nra_t, nra_t, nra_t, parse_type); /* partition ------------------------------------------------------------- */ #define B(dtype, i) AX(dtype, i) /* used by PARTITION */ /* dtype = [['float64'], ['float32'], ['int64'], ['int32']] */ NRA(partition, DTYPE0) { npy_intp i; npy_intp j, l, r, k; iter it; a = (PyArrayObject *)PyArray_NewCopy(a, NPY_ANYORDER); init_iter_one(&it, a, axis); if (LENGTH == 0) return (PyObject *)a; if (n < 0 || n > LENGTH - 1) { PyErr_Format(PyExc_ValueError, "`n` (=%d) must be between 0 and %zd, inclusive.", n, LENGTH - 1); return NULL; } BN_BEGIN_ALLOW_THREADS k = n; WHILE { l = 0; r = LENGTH - 1; PARTITION(DTYPE0) NEXT } BN_END_ALLOW_THREADS return (PyObject *)a; } /* dtype end */ NRA_MAIN(partition, PARSE_PARTITION) /* argpartition ----------------------------------------------------------- */ #define BUFFER_NEW(dtype) dtype *B = malloc(LENGTH * sizeof(dtype)); #define BUFFER_DELETE free(B); #define ARGWIRTH(dtype0, dtype1) \ x = B[k]; \ i = l; \ j = r; \ do { \ while (B[i] < x) i++; \ while (x < B[j]) j--; \ if (i <= j) { \ npy_##dtype0 atmp = B[i]; \ B[i] = B[j]; \ B[j] = atmp; \ ytmp = YX(dtype1, i); \ YX(dtype1, i) = YX(dtype1, j); \ YX(dtype1, j) = ytmp; \ i++; \ j--; \ } \ } while (i <= j); \ if (j < k) l = i; \ if (k < i) r = j; #define ARGPARTITION(dtype0, dtype1) \ while (l < r) { \ npy_##dtype0 x; \ npy_##dtype0 al = B[l]; \ npy_##dtype0 ak = B[k]; \ npy_##dtype0 ar = B[r]; \ npy_##dtype1 ytmp; \ if (al > ak) { \ if (ak < ar) { \ if (al < ar) { \ B[k] = al; \ B[l] = ak; \ ytmp = YX(dtype1, k); \ YX(dtype1, k) = YX(dtype1, l); \ YX(dtype1, l) = ytmp; \ } \ else { \ B[k] = ar; \ B[r] = ak; \ ytmp = YX(dtype1, k); \ YX(dtype1, k) = YX(dtype1, r); \ YX(dtype1, r) = ytmp; \ } \ } \ } \ else { \ if (ak > ar) { \ if (al > ar) { \ B[k] = al; \ B[l] = ak; \ ytmp = YX(dtype1, k); \ YX(dtype1, k) = YX(dtype1, l); \ YX(dtype1, l) = ytmp; \ } \ else { \ B[k] = ar; \ B[r] = ak; \ ytmp = YX(dtype1, k); \ YX(dtype1, k) = YX(dtype1, r); \ YX(dtype1, r) = ytmp; \ } \ } \ } \ ARGWIRTH(dtype0, dtype1) \ } #define ARGPARTSORT(dtype0, dtype1) \ for (i = 0; i < LENGTH; i++) { \ B[i] = AX(dtype0, i); \ YX(dtype1, i) = i; \ } \ l = 0; \ r = LENGTH - 1; \ ARGPARTITION(dtype0, dtype1) /* dtype = [['float64', 'intp'], ['float32', 'intp'], ['int64', 'intp'], ['int32', 'intp']] */ NRA(argpartition, DTYPE0) { npy_intp i; PyObject *y = PyArray_EMPTY(PyArray_NDIM(a), PyArray_SHAPE(a), NPY_DTYPE1, 0); iter2 it; init_iter2(&it, a, y, axis); if (LENGTH == 0) return y; if (n < 0 || n > LENGTH - 1) { PyErr_Format(PyExc_ValueError, "`n` (=%d) must be between 0 and %zd, inclusive.", n, LENGTH - 1); return NULL; } BN_BEGIN_ALLOW_THREADS BUFFER_NEW(npy_DTYPE0) npy_intp j, l, r, k; k = n; WHILE { l = 0; r = LENGTH - 1; ARGPARTSORT(DTYPE0, DTYPE1) NEXT2 } BUFFER_DELETE BN_END_ALLOW_THREADS return y; } /* dtype end */ NRA_MAIN(argpartition, PARSE_PARTITION) /* rankdata -------------------------------------------------------------- */ /* dtype = [['float64', 'float64', 'intp'], ['float32', 'float64', 'intp'], ['int64', 'float64', 'intp'], ['int32', 'float64', 'intp']] */ NRA(rankdata, DTYPE0) { Py_ssize_t j=0, k, idx, dupcount=0, i; npy_DTYPE1 old, new, averank, sumranks = 0; PyObject *z = PyArray_ArgSort(a, axis, NPY_QUICKSORT); 
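    /* The ranking pass below works on the argsort order: it sweeps that
       order once, tracking the length of the current run of equal values
       (dupcount) and the sum of their zero-based positions (sumranks),
       and assigns every member of the run the average rank
       sumranks / dupcount + 1. For example, [0, 2, 2, 3] ranks to
       [1.0, 2.5, 2.5, 4.0] because the tied 2s share ranks 2 and 3. */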
PyObject *y = PyArray_EMPTY(PyArray_NDIM(a), PyArray_SHAPE(a), NPY_DTYPE1, 0); iter3 it; init_iter3(&it, a, y, z, axis); BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { Py_ssize_t size = PyArray_SIZE((PyArrayObject *)y); npy_DTYPE1 *py = (npy_DTYPE1 *)PyArray_DATA((PyArrayObject *)y); for (i = 0; i < size; i++) YPP = BN_NAN; } else { WHILE { idx = ZX(DTYPE2, 0); old = AX(DTYPE0, idx); sumranks = 0; dupcount = 0; for (i = 0; i < LENGTH - 1; i++) { sumranks += i; dupcount++; k = i + 1; idx = ZX(DTYPE2, k); new = AX(DTYPE0, idx); if (old != new) { averank = sumranks / dupcount + 1; for (j = k - dupcount; j < k; j++) { idx = ZX(DTYPE2, j); YX(DTYPE1, idx) = averank; } sumranks = 0; dupcount = 0; } old = new; } sumranks += (LENGTH - 1); dupcount++; averank = sumranks / dupcount + 1; for (j = LENGTH - dupcount; j < LENGTH; j++) { idx = ZX(DTYPE2, j); YX(DTYPE1, idx) = averank; } NEXT3 } } BN_END_ALLOW_THREADS Py_DECREF(z); return y; } /* dtype end */ NRA_MAIN(rankdata, PARSE_RANKDATA) /* nanrankdata ----------------------------------------------------------- */ /* dtype = [['float64', 'float64', 'intp'], ['float32', 'float64', 'intp']] */ NRA(nanrankdata, DTYPE0) { Py_ssize_t j=0, k, idx, dupcount=0, i; npy_DTYPE1 old, new, averank, sumranks = 0; PyObject *z = PyArray_ArgSort(a, axis, NPY_QUICKSORT); PyObject *y = PyArray_EMPTY(PyArray_NDIM(a), PyArray_SHAPE(a), NPY_DTYPE1, 0); iter3 it; init_iter3(&it, a, y, z, axis); BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { Py_ssize_t size = PyArray_SIZE((PyArrayObject *)y); npy_DTYPE1 *py = (npy_DTYPE1 *)PyArray_DATA((PyArrayObject *)y); for (i = 0; i < size; i++) YPP = BN_NAN; } else { WHILE { idx = ZX(DTYPE2, 0); old = AX(DTYPE0, idx); sumranks = 0; dupcount = 0; for (i = 0; i < LENGTH - 1; i++) { sumranks += i; dupcount++; k = i + 1; idx = ZX(DTYPE2, k); new = AX(DTYPE0, idx); if (old != new) { if (old == old) { averank = sumranks / dupcount + 1; for (j = k - dupcount; j < k; j++) { idx = ZX(DTYPE2, j); YX(DTYPE1, idx) = averank; } } else { idx = ZX(DTYPE2, i); YX(DTYPE1, idx) = BN_NAN; } sumranks = 0; dupcount = 0; } old = new; } sumranks += (LENGTH - 1); dupcount++; averank = sumranks / dupcount + 1; if (old == old) { for (j = LENGTH - dupcount; j < LENGTH; j++) { idx = ZX(DTYPE2, j); YX(DTYPE1, idx) = averank; } } else { idx = ZX(DTYPE2, LENGTH - 1); YX(DTYPE1, idx) = BN_NAN; } NEXT3 } } BN_END_ALLOW_THREADS Py_DECREF(z); return y; } /* dtype end */ static PyObject * nanrankdata(PyObject *self, PyObject *args, PyObject *kwds) { return nonreducer_axis("nanrankdata", args, kwds, nanrankdata_float64, nanrankdata_float32, rankdata_int64, rankdata_int32, PARSE_RANKDATA); } /* push ------------------------------------------------------------------ */ /* dtype = [['float64'], ['float32']] */ NRA(push, DTYPE0) { npy_intp index; npy_DTYPE0 ai, ai_last, n_float; PyObject *y = PyArray_Copy(a); iter it; init_iter_one(&it, (PyArrayObject *)y, axis); if (LENGTH == 0 || NDIM == 0) { return y; } n_float = n < 0 ?
BN_INFINITY : (npy_DTYPE0)n; BN_BEGIN_ALLOW_THREADS WHILE { index = 0; ai_last = BN_NAN; FOR { ai = AI(DTYPE0); if (ai == ai) { ai_last = ai; index = INDEX; } else { if (INDEX - index <= n_float) { AI(DTYPE0) = ai_last; } } } NEXT } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64'], ['int32']] */ NRA(push, DTYPE0) { PyObject *y = PyArray_Copy(a); return y; } /* dtype end */ NRA_MAIN(push, PARSE_PUSH) /* python strings -------------------------------------------------------- */ PyObject *pystr_a = NULL; PyObject *pystr_n = NULL; PyObject *pystr_kth = NULL; PyObject *pystr_axis = NULL; static int intern_strings(void) { pystr_a = PyString_InternFromString("a"); pystr_n = PyString_InternFromString("n"); pystr_kth = PyString_InternFromString("kth"); pystr_axis = PyString_InternFromString("axis"); return pystr_a && pystr_n && pystr_kth && pystr_axis; } /* nonreducer_axis ------------------------------------------------------- */ static BN_INLINE int parse_partition(PyObject *args, PyObject *kwds, PyObject **a, PyObject **n, PyObject **axis) { const Py_ssize_t nargs = PyTuple_GET_SIZE(args); const Py_ssize_t nkwds = kwds == NULL ? 0 : PyDict_Size(kwds); if (nkwds) { int nkwds_found = 0; PyObject *tmp; switch (nargs) { case 2: *n = PyTuple_GET_ITEM(args, 1); case 1: *a = PyTuple_GET_ITEM(args, 0); case 0: break; default: TYPE_ERR("wrong number of arguments"); return 0; } switch (nargs) { case 0: *a = PyDict_GetItem(kwds, pystr_a); if (*a == NULL) { TYPE_ERR("Cannot find `a` keyword input"); return 0; } nkwds_found += 1; case 1: *n = PyDict_GetItem(kwds, pystr_kth); if (*n == NULL) { TYPE_ERR("Cannot find `kth` keyword input"); return 0; } nkwds_found++; case 2: tmp = PyDict_GetItem(kwds, pystr_axis); if (tmp != NULL) { *axis = tmp; nkwds_found++; } break; default: TYPE_ERR("wrong number of arguments"); return 0; } if (nkwds_found != nkwds) { TYPE_ERR("wrong number of keyword arguments"); return 0; } if (nargs + nkwds_found > 3) { TYPE_ERR("too many arguments"); return 0; } } else { switch (nargs) { case 3: *axis = PyTuple_GET_ITEM(args, 2); case 2: *n = PyTuple_GET_ITEM(args, 1); *a = PyTuple_GET_ITEM(args, 0); break; default: TYPE_ERR("wrong number of arguments"); return 0; } } return 1; } static BN_INLINE int parse_rankdata(PyObject *args, PyObject *kwds, PyObject **a, PyObject **axis) { const Py_ssize_t nargs = PyTuple_GET_SIZE(args); const Py_ssize_t nkwds = kwds == NULL ? 0 : PyDict_Size(kwds); if (nkwds) { int nkwds_found = 0; PyObject *tmp; switch (nargs) { case 1: *a = PyTuple_GET_ITEM(args, 0); case 0: break; default: TYPE_ERR("wrong number of arguments"); return 0; } switch (nargs) { case 0: *a = PyDict_GetItem(kwds, pystr_a); if (*a == NULL) { TYPE_ERR("Cannot find `a` keyword input"); return 0; } nkwds_found += 1; case 1: tmp = PyDict_GetItem(kwds, pystr_axis); if (tmp != NULL) { *axis = tmp; nkwds_found++; } break; default: TYPE_ERR("wrong number of arguments"); return 0; } if (nkwds_found != nkwds) { TYPE_ERR("wrong number of keyword arguments"); return 0; } if (nargs + nkwds_found > 2) { TYPE_ERR("too many arguments"); return 0; } } else { switch (nargs) { case 2: *axis = PyTuple_GET_ITEM(args, 1); case 1: *a = PyTuple_GET_ITEM(args, 0); break; default: TYPE_ERR("wrong number of arguments"); return 0; } } return 1; } static BN_INLINE int parse_push(PyObject *args, PyObject *kwds, PyObject **a, PyObject **n, PyObject **axis) { const Py_ssize_t nargs = PyTuple_GET_SIZE(args); const Py_ssize_t nkwds = kwds == NULL ?
0 : PyDict_Size(kwds); if (nkwds) { int nkwds_found = 0; PyObject *tmp; switch (nargs) { case 2: *n = PyTuple_GET_ITEM(args, 1); case 1: *a = PyTuple_GET_ITEM(args, 0); case 0: break; default: TYPE_ERR("wrong number of arguments"); return 0; } switch (nargs) { case 0: *a = PyDict_GetItem(kwds, pystr_a); if (*a == NULL) { TYPE_ERR("Cannot find `a` keyword input"); return 0; } nkwds_found += 1; case 1: tmp = PyDict_GetItem(kwds, pystr_n); if (tmp != NULL) { *n = tmp; nkwds_found++; } case 2: tmp = PyDict_GetItem(kwds, pystr_axis); if (tmp != NULL) { *axis = tmp; nkwds_found++; } break; default: TYPE_ERR("wrong number of arguments"); return 0; } if (nkwds_found != nkwds) { TYPE_ERR("wrong number of keyword arguments"); return 0; } if (nargs + nkwds_found > 3) { TYPE_ERR("too many arguments"); return 0; } } else { switch (nargs) { case 3: *axis = PyTuple_GET_ITEM(args, 2); case 2: *n = PyTuple_GET_ITEM(args, 1); case 1: *a = PyTuple_GET_ITEM(args, 0); break; default: TYPE_ERR("wrong number of arguments"); return 0; } } return 1; } static PyObject * nonreducer_axis(char *name, PyObject *args, PyObject *kwds, nra_t nra_float64, nra_t nra_float32, nra_t nra_int64, nra_t nra_int32, parse_type parse) { int n; int axis; int dtype; PyArrayObject *a; PyObject *a_obj = NULL; PyObject *n_obj = NULL; PyObject *axis_obj = NULL; if (parse == PARSE_PARTITION) { if (!parse_partition(args, kwds, &a_obj, &n_obj, &axis_obj)) { return NULL; } } else if (parse == PARSE_RANKDATA) { if (!parse_rankdata(args, kwds, &a_obj, &axis_obj)) { return NULL; } } else if (parse == PARSE_PUSH) { if (!parse_push(args, kwds, &a_obj, &n_obj, &axis_obj)) { return NULL; } } else { RUNTIME_ERR("Unknown parse type; please report error."); } /* convert to array if necessary */ if PyArray_Check(a_obj) { a = (PyArrayObject *)a_obj; } else { a = (PyArrayObject *)PyArray_FROM_O(a_obj); if (a == NULL) { return NULL; } } /* check for byte swapped input array */ if PyArray_ISBYTESWAPPED(a) { return slow(name, args, kwds); } /* defend against the axis of negativity */ if (axis_obj == NULL) { if (parse == PARSE_PARTITION || parse == PARSE_PUSH) { axis = PyArray_NDIM(a) - 1; if (axis < 0) { PyErr_Format(PyExc_ValueError, "axis(=%d) out of bounds", axis); return NULL; } } else { if (PyArray_NDIM(a) != 1) { a = (PyArrayObject *)PyArray_Ravel(a, NPY_CORDER); } axis = 0; } } else if (axis_obj == Py_None) { if (parse == PARSE_PUSH) { VALUE_ERR("`axis` cannot be None"); return NULL; } if (PyArray_NDIM(a) != 1) { a = (PyArrayObject *)PyArray_Ravel(a, NPY_CORDER); } axis = 0; } else { axis = PyArray_PyIntAsInt(axis_obj); if (error_converting(axis)) { TYPE_ERR("`axis` must be an integer"); return NULL; } if (axis < 0) { axis += PyArray_NDIM(a); if (axis < 0) { PyErr_Format(PyExc_ValueError, "axis(=%d) out of bounds", axis); return NULL; } } else if (axis >= PyArray_NDIM(a)) { PyErr_Format(PyExc_ValueError, "axis(=%d) out of bounds", axis); return NULL; } } /* n */ if (n_obj == NULL) { n = -1; } else { n = PyArray_PyIntAsInt(n_obj); if (error_converting(n)) { TYPE_ERR("`n` must be an integer"); return NULL; } if (n < 0 && parse == PARSE_PUSH) { VALUE_ERR("`n` must be nonnegative"); return NULL; } } dtype = PyArray_TYPE(a); if (dtype == NPY_float64) return nra_float64(a, axis, n); else if (dtype == NPY_float32) return nra_float32(a, axis, n); else if (dtype == NPY_int64) return nra_int64(a, axis, n); else if (dtype == NPY_int32) return nra_int32(a, axis, n); else return slow(name, args, kwds); } /* docstrings
------------------------------------------------------------- */ static char nra_doc[] = "Bottleneck non-reducing functions that operate along an axis."; static char partition_doc[] = /* MULTILINE STRING BEGIN partition(a, kth, axis=-1) Partition array elements along given axis. A 1d array B is partitioned at array index `kth` if three conditions are met: (1) B[kth] is in its sorted position, (2) all elements to the left of `kth` are less than or equal to B[kth], and (3) all elements to the right of `kth` are greater than or equal to B[kth]. Note that the array elements in conditions (2) and (3) are in general unordered. Shuffling the input array may change the output. The only guarantee is given by the three conditions above. This function is not protected against NaN. Therefore, you may get unexpected results if the input contains NaN. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. kth : int The value of the element at index `kth` will be in its sorted position. Smaller (larger) or equal values will be to the left (right) of index `kth`. axis : {int, None}, optional Axis along which the partition is performed. The default (axis=-1) is to partition along the last axis. Returns ------- y : ndarray A partitioned copy of the input array with the same shape and type as `a`. See Also -------- bottleneck.argpartition: Indices that would partition an array Notes ----- Unexpected results may occur if the input array contains NaN. Examples -------- Create a numpy array: >>> a = np.array([1, 0, 3, 4, 2]) Partition the array so that the first 3 elements (indices 0, 1, 2) are the smallest 3 elements (note, as in this example, that the smallest 3 elements may not be sorted): >>> bn.partition(a, kth=2) array([1, 0, 2, 4, 3]) Now partition the array so that the last 2 elements are the largest 2 elements: >>> bn.partition(a, kth=3) array([1, 0, 2, 3, 4]) MULTILINE STRING END */ static char argpartition_doc[] = /* MULTILINE STRING BEGIN argpartition(a, kth, axis=-1) Return indices that would partition array along the given axis. A 1d array B is partitioned at array index `kth` if three conditions are met: (1) B[kth] is in its sorted position, (2) all elements to the left of `kth` are less than or equal to B[kth], and (3) all elements to the right of `kth` are greater than or equal to B[kth]. Note that the array elements in conditions (2) and (3) are in general unordered. Shuffling the input array may change the output. The only guarantee is given by the three conditions above. This function is not protected against NaN. Therefore, you may get unexpected results if the input contains NaN. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. kth : int The value of the element at index `kth` will be in its sorted position. Smaller (larger) or equal values will be to the left (right) of index `kth`. axis : {int, None}, optional Axis along which the partition is performed. The default (axis=-1) is to partition along the last axis. Returns ------- y : ndarray An array the same shape as the input array containing the indices that partition `a`. The dtype of the indices is numpy.intp. See Also -------- bottleneck.partition: Partition array elements along given axis. Notes ----- Unexpected results may occur if the input array contains NaN.
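It follows from conditions (1)-(3) that the first `kth` + 1 indices
identify the `kth` + 1 smallest values, in no particular order, e.g.:

    >>> a = np.array([10, 0, 30, 40, 20])
    >>> np.sort(a[bn.argpartition(a, kth=2)[:3]])
    array([ 0, 10, 20])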
Examples -------- Create a numpy array: >>> a = np.array([10, 0, 30, 40, 20]) Find the indices that partition the array so that the first 3 elements are the smallest 3 elements: >>> index = bn.argpartition(a, kth=2) >>> index array([0, 1, 4, 3, 2]) Let's use the indices to partition the array (note, as in this example, that the smallest 3 elements may not be in order): >>> a[index] array([10, 0, 20, 40, 30]) MULTILINE STRING END */ static char rankdata_doc[] = /* MULTILINE STRING BEGIN rankdata(a, axis=None) Ranks the data, dealing with ties appropriately. Equal values are assigned a rank that is the average of the ranks that would have been otherwise assigned to all of the values within that set. Ranks begin at 1, not 0. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which the elements of the array are ranked. The default (axis=None) is to rank the elements of the flattened array. Returns ------- y : ndarray An array with the same shape as `a`. The dtype is 'float64'. See also -------- bottleneck.nanrankdata: Ranks the data dealing with ties and NaNs. Examples -------- >>> bn.rankdata([0, 2, 2, 3]) array([ 1. , 2.5, 2.5, 4. ]) >>> bn.rankdata([[0, 2], [2, 3]]) array([ 1. , 2.5, 2.5, 4. ]) >>> bn.rankdata([[0, 2], [2, 3]], axis=0) array([[ 1., 1.], [ 2., 2.]]) >>> bn.rankdata([[0, 2], [2, 3]], axis=1) array([[ 1., 2.], [ 1., 2.]]) MULTILINE STRING END */ static char nanrankdata_doc[] = /* MULTILINE STRING BEGIN nanrankdata(a, axis=None) Ranks the data, dealing with ties and NaNs appropriately. Equal values are assigned a rank that is the average of the ranks that would have been otherwise assigned to all of the values within that set. Ranks begin at 1, not 0. NaNs in the input array are returned as NaNs. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which the elements of the array are ranked. The default (axis=None) is to rank the elements of the flattened array. Returns ------- y : ndarray An array with the same shape as `a`. The dtype is 'float64'. See also -------- bottleneck.rankdata: Ranks the data, dealing with ties appropriately. Examples -------- >>> bn.nanrankdata([np.nan, 2, 2, 3]) array([ nan, 1.5, 1.5, 3. ]) >>> bn.nanrankdata([[np.nan, 2], [2, 3]]) array([ nan, 1.5, 1.5, 3. ]) >>> bn.nanrankdata([[np.nan, 2], [2, 3]], axis=0) array([[ nan, 1.], [ 1., 2.]]) >>> bn.nanrankdata([[np.nan, 2], [2, 3]], axis=1) array([[ nan, 1.], [ 1., 2.]]) MULTILINE STRING END */ static char push_doc[] = /* MULTILINE STRING BEGIN push(a, n=None, axis=-1) Fill missing values (NaNs) with most recent non-missing values. Filling proceeds along the specified axis from small index values to large index values. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. n : {int, None}, optional How far to push values. If the most recent non-NaN array element is more than `n` index positions away, then NaN is returned. The default (n = None) is to push the entire length of the slice. If `n` is an integer it must be nonnegative. axis : int, optional Axis along which the elements of the array are pushed. The default (axis=-1) is to push along the last axis of the input array. Returns ------- y : ndarray An array with the same shape and dtype as `a`. See also -------- bottleneck.replace: Replace specified value of an array with new value.
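Notes
-----
Conceptually push is a forward fill with an optional distance limit. A
rough pure-Python sketch of the float kernel (illustrative only;
`push_py` is a hypothetical name and 1d float input is assumed):

    import numpy as np

    def push_py(a, n=None):
        y = np.array(a, dtype=float)   # work on a float copy
        limit = np.inf if n is None else n
        last, dist = np.nan, np.inf    # last non-NaN value, and how far back it is
        for i, yi in enumerate(y):
            if yi == yi:               # non-NaN: new fill source
                last, dist = yi, 0
            else:
                dist += 1
                if dist <= limit:      # only fill within the limit
                    y[i] = last
        return y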
Examples -------- >>> a = np.array([5, np.nan, np.nan, 6, np.nan]) >>> bn.push(a) array([ 5., 5., 5., 6., 6.]) >>> bn.push(a, n=1) array([ 5., 5., nan, 6., 6.]) >>> bn.push(a, n=2) array([ 5., 5., 5., 6., 6.]) MULTILINE STRING END */ /* python wrapper -------------------------------------------------------- */ static PyMethodDef nra_methods[] = { {"partition", (PyCFunction)partition, VARKEY, partition_doc}, {"argpartition", (PyCFunction)argpartition, VARKEY, argpartition_doc}, {"rankdata", (PyCFunction)rankdata, VARKEY, rankdata_doc}, {"nanrankdata", (PyCFunction)nanrankdata, VARKEY, nanrankdata_doc}, {"push", (PyCFunction)push, VARKEY, push_doc}, {NULL, NULL, 0, NULL} }; #if PY_MAJOR_VERSION >= 3 static struct PyModuleDef nra_def = { PyModuleDef_HEAD_INIT, "nonreduce_axis", nra_doc, -1, nra_methods }; #endif PyMODINIT_FUNC #if PY_MAJOR_VERSION >= 3 #define RETVAL m PyInit_nonreduce_axis(void) #else #define RETVAL initnonreduce_axis(void) #endif { #if PY_MAJOR_VERSION >=3 PyObject *m = PyModule_Create(&nra_def); #else PyObject *m = Py_InitModule3("nonreduce_axis", nra_methods, nra_doc); #endif if (m == NULL) return RETVAL; import_array(); if (!intern_strings()) { return RETVAL; } return RETVAL; } bottleneck-1.2.0/bottleneck/src/nonreduce_template.c000066400000000000000000000212431300536544100226110ustar00rootroot00000000000000#include "bottleneck.h" #include "iterators.h" /* typedefs and prototypes ----------------------------------------------- */ /* function pointer for functions passed to nonreducer */ typedef PyObject *(*nr_t)(PyArrayObject *, double, double); static PyObject * nonreducer(char *name, PyObject *args, PyObject *kwds, nr_t, nr_t, nr_t, nr_t, int inplace); /* replace --------------------------------------------------------------- */ /* dtype = [['float64'], ['float32']] */ static PyObject * replace_DTYPE0(PyArrayObject *a, double old, double new) { npy_DTYPE0 ai; iter it; init_iter_all(&it, a, 0, 1); BN_BEGIN_ALLOW_THREADS if (old == old) { WHILE { FOR { if (AI(DTYPE0) == old) AI(DTYPE0) = new; } NEXT } } else { WHILE { FOR { ai = AI(DTYPE0); if (ai != ai) AI(DTYPE0) = new; } NEXT } } BN_END_ALLOW_THREADS Py_INCREF(a); return (PyObject *)a; } /* dtype end */ /* dtype = [['int64'], ['int32']] */ static PyObject * replace_DTYPE0(PyArrayObject *a, double old, double new) { npy_DTYPE0 oldint, newint; iter it; init_iter_all(&it, a, 0, 1); if (old == old) { oldint = (npy_DTYPE0)old; newint = (npy_DTYPE0)new; if (oldint != old) { VALUE_ERR("Cannot safely cast `old` to int"); return NULL; } if (newint != new) { VALUE_ERR("Cannot safely cast `new` to int"); return NULL; } BN_BEGIN_ALLOW_THREADS WHILE { FOR { if (AI(DTYPE0) == oldint) AI(DTYPE0) = newint; } NEXT } BN_END_ALLOW_THREADS } Py_INCREF(a); return (PyObject *)a; } /* dtype end */ static PyObject * replace(PyObject *self, PyObject *args, PyObject *kwds) { return nonreducer("replace", args, kwds, replace_float64, replace_float32, replace_int64, replace_int32, 1); } /* python strings -------------------------------------------------------- */ PyObject *pystr_a = NULL; PyObject *pystr_old = NULL; PyObject *pystr_new = NULL; static int intern_strings(void) { pystr_a = PyString_InternFromString("a"); pystr_old = PyString_InternFromString("old"); pystr_new = PyString_InternFromString("new"); return pystr_a && pystr_old && pystr_new; } /* nonreduce ------------------------------------------------------------- */ static BN_INLINE int parse_args(PyObject *args, PyObject *kwds, PyObject **a, PyObject **old, PyObject **new) { const 
Py_ssize_t nargs = PyTuple_GET_SIZE(args); const Py_ssize_t nkwds = kwds == NULL ? 0 : PyDict_Size(kwds); if (nkwds) { int nkwds_found = 0; switch (nargs) { case 2: *old = PyTuple_GET_ITEM(args, 1); case 1: *a = PyTuple_GET_ITEM(args, 0); case 0: break; default: TYPE_ERR("wrong number of arguments"); return 0; } switch (nargs) { case 0: *a = PyDict_GetItem(kwds, pystr_a); if (*a == NULL) { TYPE_ERR("Cannot find `a` keyword input"); return 0; } nkwds_found += 1; case 1: *old = PyDict_GetItem(kwds, pystr_old); if (*old == NULL) { TYPE_ERR("Cannot find `old` keyword input"); return 0; } nkwds_found += 1; case 2: *new = PyDict_GetItem(kwds, pystr_new); if (*new == NULL) { TYPE_ERR("Cannot find `new` keyword input"); return 0; } nkwds_found += 1; break; default: TYPE_ERR("wrong number of arguments"); return 0; } if (nkwds_found != nkwds) { TYPE_ERR("wrong number of keyword arguments"); return 0; } if (nargs + nkwds_found > 3) { TYPE_ERR("too many arguments"); return 0; } } else { switch (nargs) { case 3: *a = PyTuple_GET_ITEM(args, 0); *old = PyTuple_GET_ITEM(args, 1); *new = PyTuple_GET_ITEM(args, 2); break; default: TYPE_ERR("wrong number of arguments"); return 0; } } return 1; } static PyObject * nonreducer(char *name, PyObject *args, PyObject *kwds, nr_t nr_float64, nr_t nr_float32, nr_t nr_int64, nr_t nr_int32, int inplace) { int dtype; double old, new; PyArrayObject *a; PyObject *a_obj = NULL; PyObject *old_obj = NULL; PyObject *new_obj = NULL; if (!parse_args(args, kwds, &a_obj, &old_obj, &new_obj)) return NULL; /* convert to array if necessary */ if PyArray_Check(a_obj) { a = (PyArrayObject *)a_obj; } else { if (inplace) { TYPE_ERR("works in place so input must be an array, " "not (e.g.) a list"); return NULL; } a = (PyArrayObject *)PyArray_FROM_O(a_obj); if (a == NULL) { return NULL; } } /* check for byte swapped input array */ if PyArray_ISBYTESWAPPED(a) { return slow(name, args, kwds); } /* old */ if (old_obj == NULL) { RUNTIME_ERR("`old_obj` should never be NULL; please report this bug."); return NULL; } else { old = PyFloat_AsDouble(old_obj); if (error_converting(old)) { TYPE_ERR("`old` must be a number"); return NULL; } } /* new */ if (new_obj == NULL) { RUNTIME_ERR("`new_obj` should never be NULL; please report this bug."); return NULL; } else { new = PyFloat_AsDouble(new_obj); if (error_converting(new)) { TYPE_ERR("`new` must be a number"); return NULL; } } dtype = PyArray_TYPE(a); if (dtype == NPY_float64) return nr_float64(a, old, new); else if (dtype == NPY_float32) return nr_float32(a, old, new); else if (dtype == NPY_int64) return nr_int64(a, old, new); else if (dtype == NPY_int32) return nr_int32(a, old, new); else return slow(name, args, kwds); } /* docstrings ------------------------------------------------------------- */ static char nonreduce_doc[] = "Bottleneck nonreducing functions."; static char replace_doc[] = /* MULTILINE STRING BEGIN replace(a, old, new) Replace (inplace) given scalar values of an array with new values. The equivalent numpy function: a[a==old] = new Or in the case where old=np.nan: a[np.isnan(a)] = new Parameters ---------- a : numpy.ndarray The input array, which is also the output array since this function works in place. old : scalar All elements in `a` with this value will be replaced by `new`. new : scalar All elements in `a` with a value of `old` will be replaced by `new`. Returns ------- Returns a view of the input array after performing the replacements, if any.
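Notes
-----
Because the replacement is done in place, `a` must already be an
ndarray; passing (e.g.) a list raises a TypeError. For integer input
arrays, `old` and `new` must also be exactly representable as integers,
otherwise a ValueError is raised.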
Examples -------- Replace zero with 3 (note that the input array is modified): >>> a = np.array([1, 2, 0]) >>> bn.replace(a, 0, 3) >>> a array([1, 2, 3]) Replace np.nan with 0: >>> a = np.array([1, 2, np.nan]) >>> bn.replace(a, np.nan, 0) >>> a array([ 1., 2., 0.]) MULTILINE STRING END */ /* python wrapper -------------------------------------------------------- */ static PyMethodDef nonreduce_methods[] = { {"replace", (PyCFunction)replace, VARKEY, replace_doc}, {NULL, NULL, 0, NULL} }; #if PY_MAJOR_VERSION >= 3 static struct PyModuleDef nonreduce_def = { PyModuleDef_HEAD_INIT, "nonreduce", nonreduce_doc, -1, nonreduce_methods }; #endif PyMODINIT_FUNC #if PY_MAJOR_VERSION >= 3 #define RETVAL m PyInit_nonreduce(void) #else #define RETVAL initnonreduce(void) #endif { #if PY_MAJOR_VERSION >=3 PyObject *m = PyModule_Create(&nonreduce_def); #else PyObject *m = Py_InitModule3("nonreduce", nonreduce_methods, nonreduce_doc); #endif if (m == NULL) return RETVAL; import_array(); if (!intern_strings()) { return RETVAL; } return RETVAL; } bottleneck-1.2.0/bottleneck/src/reduce_template.c000066400000000000000000001325771300536544100221130ustar00rootroot00000000000000#include "bottleneck.h" #include "iterators.h" /* init macros ----------------------------------------------------------- */ #define INIT_ALL \ iter it; \ init_iter_all(&it, a, 0, 1); #define INIT_ALL_RAVEL \ iter it; \ init_iter_all(&it, a, 1, 0); #define INIT_ONE(dtype0, dtype1) \ iter it; \ PyObject *y; \ npy_##dtype1 *py; \ init_iter_one(&it, a, axis); \ y = PyArray_EMPTY(NDIM - 1, SHAPE, NPY_##dtype0, 0); \ py = (npy_##dtype1 *)PyArray_DATA((PyArrayObject *)y); /* function signatures --------------------------------------------------- */ /* low-level functions such as nansum_all_float64 */ #define REDUCE_ALL(name, dtype) \ static PyObject * \ name##_all_##dtype(PyArrayObject *a, int ddof) /* low-level functions such as nansum_one_float64 */ #define REDUCE_ONE(name, dtype) \ static PyObject * \ name##_one_##dtype(PyArrayObject *a, int axis, int ddof) /* top-level functions such as nansum */ #define REDUCE_MAIN(name, has_ddof) \ static PyObject * \ name(PyObject *self, PyObject *args, PyObject *kwds) \ { \ return reducer(#name, \ args, \ kwds, \ name##_all_float64, \ name##_all_float32, \ name##_all_int64, \ name##_all_int32, \ name##_one_float64, \ name##_one_float32, \ name##_one_int64, \ name##_one_int32, \ has_ddof); \ } /* typedefs and prototypes ----------------------------------------------- */ typedef PyObject *(*fall_t)(PyArrayObject *a, int ddof); typedef PyObject *(*fone_t)(PyArrayObject *a, int axis, int ddof); static PyObject * reducer(char *name, PyObject *args, PyObject *kwds, fall_t fall_float64, fall_t fall_float32, fall_t fall_int64, fall_t fall_int32, fone_t fone_float64, fone_t fone_float32, fone_t fone_int64, fone_t fone_int32, int has_ddof); /* nansum ---------------------------------------------------------------- */ /* dtype = [['float64'], ['float32']] */ REDUCE_ALL(nansum, DTYPE0) { npy_DTYPE0 ai, asum = 0; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR { ai = AI(DTYPE0); if (ai == ai) asum += ai; } NEXT } BN_END_ALLOW_THREADS return PyFloat_FromDouble(asum); } REDUCE_ONE(nansum, DTYPE0) { npy_DTYPE0 ai, asum; INIT_ONE(DTYPE0, DTYPE0) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(0) } else { WHILE { asum = 0; FOR { ai = AI(DTYPE0); if (ai == ai) asum += ai; } YPP = asum; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64'], ['int32']] */ REDUCE_ALL(nansum, DTYPE0) { npy_DTYPE0 asum = 
0; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR asum += AI(DTYPE0); NEXT } BN_END_ALLOW_THREADS return PyInt_FromLong(asum); } REDUCE_ONE(nansum, DTYPE0) { npy_DTYPE0 asum; INIT_ONE(DTYPE0, DTYPE0) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(0) } else { WHILE { asum = 0; FOR asum += AI(DTYPE0); YPP = asum; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ REDUCE_MAIN(nansum, 0) /* nanmean ---------------------------------------------------------------- */ /* dtype = [['float64'], ['float32']] */ REDUCE_ALL(nanmean, DTYPE0) { Py_ssize_t count = 0; npy_DTYPE0 ai, asum = 0; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR { ai = AI(DTYPE0); if (ai == ai) { asum += ai; count += 1; } } NEXT } BN_END_ALLOW_THREADS if (count > 0) { return PyFloat_FromDouble(asum / count); } else { return PyFloat_FromDouble(BN_NAN); } } REDUCE_ONE(nanmean, DTYPE0) { Py_ssize_t count; npy_DTYPE0 ai, asum; INIT_ONE(DTYPE0, DTYPE0) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(BN_NAN) } else { WHILE { asum = count = 0; FOR { ai = AI(DTYPE0); if (ai == ai) { asum += ai; count += 1; } } if (count > 0) { asum /= count; } else { asum = BN_NAN; } YPP = asum; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64', 'float64'], ['int32', 'float64']] */ REDUCE_ALL(nanmean, DTYPE0) { Py_ssize_t total_length = 0; npy_DTYPE1 asum = 0; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR asum += AI(DTYPE0); total_length += LENGTH; NEXT } BN_END_ALLOW_THREADS if (total_length > 0) { return PyFloat_FromDouble(asum / total_length); } else { return PyFloat_FromDouble(BN_NAN); } } REDUCE_ONE(nanmean, DTYPE0) { npy_DTYPE1 asum; INIT_ONE(DTYPE1, DTYPE1) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(BN_NAN) } else { WHILE { asum = 0; FOR asum += AI(DTYPE0); if (LENGTH > 0) { asum /= LENGTH; } else { asum = BN_NAN; } YPP = asum; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ REDUCE_MAIN(nanmean, 0) /* nanstd, nanvar- ------------------------------------------------------- */ /* repeat = {'NAME': ['nanstd', 'nanvar'], 'FUNC': ['sqrt', '']} */ /* dtype = [['float64'], ['float32']] */ REDUCE_ALL(NAME, DTYPE0) { Py_ssize_t count = 0; npy_DTYPE0 ai, amean, out, asum = 0; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR { ai = AI(DTYPE0); if (ai == ai) { asum += ai; count++; } } NEXT } if (count > ddof) { amean = asum / count; asum = 0; RESET WHILE { FOR { ai = AI(DTYPE0); if (ai == ai) { ai -= amean; asum += ai * ai; } } NEXT } out = FUNC(asum / (count - ddof)); } else { out = BN_NAN; } BN_END_ALLOW_THREADS return PyFloat_FromDouble(out); } REDUCE_ONE(NAME, DTYPE0) { Py_ssize_t count; npy_DTYPE0 ai, asum, amean; INIT_ONE(DTYPE0, DTYPE0) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(BN_NAN) } else { WHILE { asum = count = 0; FOR { ai = AI(DTYPE0); if (ai == ai) { asum += ai; count++; } } if (count > ddof) { amean = asum / count; asum = 0; FOR { ai = AI(DTYPE0); if (ai == ai) { ai -= amean; asum += ai * ai; } } asum = FUNC(asum / (count - ddof)); } else { asum = BN_NAN; } YPP = asum; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64', 'float64'], ['int32', 'float64']] */ REDUCE_ALL(NAME, DTYPE0) { npy_DTYPE1 out; Py_ssize_t size = 0; npy_DTYPE1 ai, amean, asum = 0; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR asum += AI(DTYPE0); size += LENGTH; NEXT } if (size > ddof) { amean = asum / size; asum = 0; RESET WHILE { FOR { ai = AI(DTYPE0) - amean; asum += ai * ai; } NEXT } out = FUNC(asum / (size - ddof)); } else { out = BN_NAN; } BN_END_ALLOW_THREADS return PyFloat_FromDouble(out); } 
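/* One-axis reduction for integer input. As with the float64/float32
   versions above, a stable two-pass algorithm is used, computed per
   slice: the first pass finds the mean,

       amean = sum(a) / N

   and the second pass accumulates the squared deviations,

       asum = sum((a - amean)^2) / (N - ddof)

   where FUNC is sqrt for nanstd and a no-op for nanvar. */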
REDUCE_ONE(NAME, DTYPE0) { npy_DTYPE1 ai, asum, amean, length_inv, length_ddof_inv; INIT_ONE(DTYPE1, DTYPE1) BN_BEGIN_ALLOW_THREADS length_inv = 1.0 / LENGTH; length_ddof_inv = 1.0 / (LENGTH - ddof); if (LENGTH == 0) { FILL_Y(BN_NAN) } else { WHILE { asum = 0; FOR asum += AI(DTYPE0); if (LENGTH > ddof) { amean = asum * length_inv; asum = 0; FOR { ai = AI(DTYPE0) - amean; asum += ai * ai; } asum = FUNC(asum * length_ddof_inv); } else { asum = BN_NAN; } YPP = asum; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ REDUCE_MAIN(NAME, 1) /* repeat end */ /* nanmin, nanmax -------------------------------------------------------- */ /* repeat = {'NAME': ['nanmin', 'nanmax'], 'COMPARE': ['<=', '>='], 'BIG_FLOAT': ['BN_INFINITY', '-BN_INFINITY'], 'BIG_INT': ['NPY_MAX_DTYPE0', 'NPY_MIN_DTYPE0']} */ /* dtype = [['float64'], ['float32']] */ REDUCE_ALL(NAME, DTYPE0) { npy_DTYPE0 ai, extreme = BIG_FLOAT; int allnan = 1; INIT_ALL if (SIZE == 0) { VALUE_ERR("numpy.NAME raises on a.size==0 and axis=None; " "So Bottleneck too."); return NULL; } BN_BEGIN_ALLOW_THREADS WHILE { FOR { ai = AI(DTYPE0); if (ai COMPARE extreme) { extreme = ai; allnan = 0; } } NEXT } if (allnan) extreme = BN_NAN; BN_END_ALLOW_THREADS return PyFloat_FromDouble(extreme); } REDUCE_ONE(NAME, DTYPE0) { npy_DTYPE0 ai, extreme; int allnan; INIT_ONE(DTYPE0, DTYPE0) if (LENGTH == 0) { VALUE_ERR("numpy.NAME raises on a.shape[axis]==0; " "So Bottleneck too."); return NULL; } BN_BEGIN_ALLOW_THREADS WHILE { extreme = BIG_FLOAT; allnan = 1; FOR { ai = AI(DTYPE0); if (ai COMPARE extreme) { extreme = ai; allnan = 0; } } if (allnan) extreme = BN_NAN; YPP = extreme; NEXT } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64'], ['int32']] */ REDUCE_ALL(NAME, DTYPE0) { npy_DTYPE0 ai, extreme = BIG_INT; INIT_ALL if (SIZE == 0) { VALUE_ERR("numpy.NAME raises on a.size==0 and axis=None; " "So Bottleneck too."); return NULL; } BN_BEGIN_ALLOW_THREADS WHILE { FOR { ai = AI(DTYPE0); if (ai COMPARE extreme) extreme = ai; } NEXT } BN_END_ALLOW_THREADS return PyInt_FromLong(extreme); } REDUCE_ONE(NAME, DTYPE0) { npy_DTYPE0 ai, extreme; INIT_ONE(DTYPE0, DTYPE0) if (LENGTH == 0) { VALUE_ERR("numpy.NAME raises on a.shape[axis]==0; " "So Bottleneck too."); return NULL; } BN_BEGIN_ALLOW_THREADS WHILE { extreme = BIG_INT; FOR { ai = AI(DTYPE0); if (ai COMPARE extreme) extreme = ai; } YPP = extreme; NEXT } BN_END_ALLOW_THREADS return y; } /* dtype end */ REDUCE_MAIN(NAME, 0) /* repeat end */ /* nanargmin, nanargmax -------------------------------------------------- */ /* repeat = {'NAME': ['nanargmin', 'nanargmax'], 'COMPARE': ['<=', '>='], 'BIG_FLOAT': ['BN_INFINITY', '-BN_INFINITY'], 'BIG_INT': ['NPY_MAX_DTYPE0', 'NPY_MIN_DTYPE0']} */ /* dtype = [['float64'], ['float32']] */ REDUCE_ALL(NAME, DTYPE0) { npy_DTYPE0 ai, extreme = BIG_FLOAT; int allnan = 1; Py_ssize_t idx = 0; INIT_ALL_RAVEL if (SIZE == 0) { VALUE_ERR("numpy.NAME raises on a.size==0 and axis=None; " "So Bottleneck too."); return NULL; } BN_BEGIN_ALLOW_THREADS FOR_REVERSE { ai = AI(DTYPE0); if (ai COMPARE extreme) { extreme = ai; allnan = 0; idx = INDEX; } } BN_END_ALLOW_THREADS if (allnan) { VALUE_ERR("All-NaN slice encountered"); return NULL; } else { return PyInt_FromLong(idx); } } REDUCE_ONE(NAME, DTYPE0) { int allnan, err_code = 0; Py_ssize_t idx = 0; npy_DTYPE0 ai, extreme; INIT_ONE(INTP, intp) if (LENGTH == 0) { VALUE_ERR("numpy.NAME raises on a.shape[axis]==0; " "So Bottleneck too."); return NULL; } BN_BEGIN_ALLOW_THREADS WHILE { extreme = BIG_FLOAT; allnan = 1; 
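        /* scan in reverse so that, when several elements share the
           extreme value, the smallest index is the one that is kept */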
FOR_REVERSE { ai = AI(DTYPE0); if (ai COMPARE extreme) { extreme = ai; allnan = 0; idx = INDEX; } } if (allnan == 0) { YPP = idx; } else { err_code = 1; } NEXT } BN_END_ALLOW_THREADS if (err_code) { VALUE_ERR("All-NaN slice encountered"); return NULL; } return y; } /* dtype end */ /* dtype = [['int64', 'intp'], ['int32', 'intp']] */ REDUCE_ALL(NAME, DTYPE0) { npy_DTYPE1 idx = 0; npy_DTYPE0 ai, extreme = BIG_INT; INIT_ALL_RAVEL if (SIZE == 0) { VALUE_ERR("numpy.NAME raises on a.size==0 and axis=None; " "So Bottleneck too."); return NULL; } BN_BEGIN_ALLOW_THREADS FOR_REVERSE { ai = AI(DTYPE0); if (ai COMPARE extreme) { extreme = ai; idx = INDEX; } } BN_END_ALLOW_THREADS return PyInt_FromLong(idx); } REDUCE_ONE(NAME, DTYPE0) { npy_DTYPE1 idx = 0; npy_DTYPE0 ai, extreme; INIT_ONE(DTYPE1, DTYPE1) if (LENGTH == 0) { VALUE_ERR("numpy.NAME raises on a.shape[axis]==0; " "So Bottleneck too."); return NULL; } BN_BEGIN_ALLOW_THREADS WHILE { extreme = BIG_INT; FOR_REVERSE{ ai = AI(DTYPE0); if (ai COMPARE extreme) { extreme = ai; idx = INDEX; } } YPP = idx; NEXT } BN_END_ALLOW_THREADS return y; } /* dtype end */ REDUCE_MAIN(NAME, 0) /* repeat end */ /* ss ---------------------------------------------------------------- */ /* dtype = [['float64'], ['float32']] */ REDUCE_ALL(ss, DTYPE0) { npy_DTYPE0 ai, asum = 0; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR { ai = AI(DTYPE0); asum += ai * ai; } NEXT } BN_END_ALLOW_THREADS return PyFloat_FromDouble(asum); } REDUCE_ONE(ss, DTYPE0) { npy_DTYPE0 ai, asum; INIT_ONE(DTYPE0, DTYPE0) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(0) } else { WHILE { asum = 0; FOR{ ai = AI(DTYPE0); asum += ai * ai; } YPP = asum; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64'], ['int32']] */ REDUCE_ALL(ss, DTYPE0) { npy_DTYPE0 ai, asum = 0; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR { ai = AI(DTYPE0); asum += ai * ai; } NEXT } BN_END_ALLOW_THREADS return PyInt_FromLong(asum); } REDUCE_ONE(ss, DTYPE0) { npy_DTYPE0 ai, asum; INIT_ONE(DTYPE0, DTYPE0) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(0) } else { WHILE { asum = 0; FOR{ ai = AI(DTYPE0); asum += ai * ai; } YPP = asum; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ REDUCE_MAIN(ss, 0) /* median, nanmedian MACROS ---------------------------------------------- */ #define B(dtype, i) buffer[i] /* used by PARTITION */ #define EVEN_ODD(dtype, N) \ if (N % 2 == 0) { \ npy_##dtype amax = B(dtype, 0); \ for (i = 1; i < k; i++) { \ ai = B(dtype, i); \ if (ai > amax) amax = ai; \ } \ med = 0.5 * (B(dtype, k) + amax); \ } \ else { \ med = B(dtype, k); \ } \ #define MEDIAN(dtype) \ npy_intp j, l, r, k; \ npy_##dtype ai; \ l = 0; \ for (i = 0; i < LENGTH; i++) { \ ai = AX(dtype, i); \ if (ai == ai) { \ B(dtype, l++) = ai; \ } \ } \ if (l != LENGTH) { \ med = BN_NAN; \ goto done; \ } \ k = LENGTH >> 1; \ l = 0; \ r = LENGTH - 1; \ PARTITION(dtype) \ EVEN_ODD(dtype, LENGTH) #define MEDIAN_INT(dtype) \ npy_intp j, l, r, k; \ npy_##dtype ai; \ for (i = 0; i < LENGTH; i++) { \ B(dtype, i) = AX(dtype, i); \ } \ k = LENGTH >> 1; \ l = 0; \ r = LENGTH - 1; \ PARTITION(dtype) \ EVEN_ODD(dtype, LENGTH) #define NANMEDIAN(dtype) \ npy_intp j, l, r, k, n; \ npy_##dtype ai; \ l = 0; \ for (i = 0; i < LENGTH; i++) { \ ai = AX(dtype, i); \ if (ai == ai) { \ B(dtype, l++) = ai; \ } \ } \ n = l; \ k = n >> 1; \ l = 0; \ r = n - 1; \ if (n == 0) { \ med = BN_NAN; \ goto done; \ } \ PARTITION(dtype) \ EVEN_ODD(dtype, n) #define BUFFER_NEW(dtype, length) \ npy_##dtype *buffer = malloc(length * 
sizeof(npy_##dtype)); #define BUFFER_DELETE free(buffer); /* median, nanmedian ----------------------------------------------------- */ /* repeat = {'NAME': ['median', 'nanmedian'], 'FUNC': ['MEDIAN', 'NANMEDIAN']} */ /* dtype = [['float64', 'float64'], ['float32', 'float32']] */ REDUCE_ALL(NAME, DTYPE0) { npy_intp i; npy_DTYPE1 med; INIT_ALL_RAVEL BN_BEGIN_ALLOW_THREADS BUFFER_NEW(DTYPE0, LENGTH) if (LENGTH == 0) { med = BN_NAN; } else { FUNC(DTYPE0) } done: BUFFER_DELETE BN_END_ALLOW_THREADS return PyFloat_FromDouble(med); } REDUCE_ONE(NAME, DTYPE0) { npy_intp i; npy_DTYPE1 med; INIT_ONE(DTYPE1, DTYPE1) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(BN_NAN) } else { BUFFER_NEW(DTYPE0, LENGTH) WHILE { FUNC(DTYPE0) done: YPP = med; NEXT } BUFFER_DELETE } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* repeat end */ /* dtype = [['int64', 'float64'], ['int32', 'float64']] */ REDUCE_ALL(median, DTYPE0) { npy_intp i; npy_DTYPE1 med; INIT_ALL_RAVEL BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { med = BN_NAN; } else { BUFFER_NEW(DTYPE0, LENGTH) MEDIAN_INT(DTYPE0) BUFFER_DELETE } BN_END_ALLOW_THREADS return PyFloat_FromDouble(med); } REDUCE_ONE(median, DTYPE0) { npy_intp i; npy_DTYPE1 med; INIT_ONE(DTYPE1, DTYPE1) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(BN_NAN) } else { BUFFER_NEW(DTYPE0, LENGTH) WHILE { MEDIAN_INT(DTYPE0) YPP = med; NEXT } BUFFER_DELETE } BN_END_ALLOW_THREADS return y; } /* dtype end */ REDUCE_MAIN(median, 0) static PyObject * nanmedian(PyObject *self, PyObject *args, PyObject *kwds) { return reducer("nanmedian", args, kwds, nanmedian_all_float64, nanmedian_all_float32, median_all_int64, median_all_int32, nanmedian_one_float64, nanmedian_one_float32, median_one_int64, median_one_int32, 0); } /* anynan ---------------------------------------------------------------- */ /* dtype = [['float64'], ['float32']] */ REDUCE_ALL(anynan, DTYPE0) { int f = 0; npy_DTYPE0 ai; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR { ai = AI(DTYPE0); if (ai != ai) { f = 1; goto done; } } NEXT } done: BN_END_ALLOW_THREADS if (f) Py_RETURN_TRUE; Py_RETURN_FALSE; } REDUCE_ONE(anynan, DTYPE0) { int f; npy_DTYPE0 ai; INIT_ONE(BOOL, uint8) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(0) } else { WHILE { f = 0; FOR { ai = AI(DTYPE0); if (ai != ai) { f = 1; break; } } YPP = f; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64'], ['int32']] */ REDUCE_ALL(anynan, DTYPE0) { Py_RETURN_FALSE; } REDUCE_ONE(anynan, DTYPE0) { INIT_ONE(BOOL, uint8) BN_BEGIN_ALLOW_THREADS FILL_Y(0); BN_END_ALLOW_THREADS return y; } /* dtype end */ REDUCE_MAIN(anynan, 0) /* allnan ---------------------------------------------------------------- */ /* dtype = [['float64'], ['float32']] */ REDUCE_ALL(allnan, DTYPE0) { int f = 0; npy_DTYPE0 ai; INIT_ALL BN_BEGIN_ALLOW_THREADS WHILE { FOR { ai = AI(DTYPE0); if (ai == ai) { f = 1; goto done; } } NEXT } done: BN_END_ALLOW_THREADS if (f) Py_RETURN_FALSE; Py_RETURN_TRUE; } REDUCE_ONE(allnan, DTYPE0) { int f; npy_DTYPE0 ai; INIT_ONE(BOOL, uint8) BN_BEGIN_ALLOW_THREADS if (LENGTH == 0) { FILL_Y(1) } else { WHILE { f = 1; FOR { ai = AI(DTYPE0); if (ai == ai) { f = 0; break; } } YPP = f; NEXT } } BN_END_ALLOW_THREADS return y; } /* dtype end */ /* dtype = [['int64'], ['int32']] */ REDUCE_ALL(allnan, DTYPE0) { if (PyArray_SIZE(a) == 0) Py_RETURN_TRUE; Py_RETURN_FALSE; } REDUCE_ONE(allnan, DTYPE0) { INIT_ONE(BOOL, uint8) BN_BEGIN_ALLOW_THREADS if (SIZE == 0) { FILL_Y(1); } else { FILL_Y(0); } BN_END_ALLOW_THREADS return y; } /* dtype end */ REDUCE_MAIN(allnan, 0) 
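/* Note that integer arrays cannot contain NaN, so the int64/int32
   versions of anynan and allnan above return constant results without
   scanning the data: anynan is always False, and allnan is True only for
   empty input, matching np.isnan(a).all(). */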
/* python strings -------------------------------------------------------- */ PyObject *pystr_a = NULL; PyObject *pystr_axis = NULL; PyObject *pystr_ddof = NULL; static int intern_strings(void) { pystr_a = PyString_InternFromString("a"); pystr_axis = PyString_InternFromString("axis"); pystr_ddof = PyString_InternFromString("ddof"); return pystr_a && pystr_axis && pystr_ddof; } /* reducer --------------------------------------------------------------- */ static BN_INLINE int parse_args(PyObject *args, PyObject *kwds, int has_ddof, PyObject **a, PyObject **axis, PyObject **ddof) { const Py_ssize_t nargs = PyTuple_GET_SIZE(args); const Py_ssize_t nkwds = kwds == NULL ? 0 : PyDict_Size(kwds); if (nkwds) { int nkwds_found = 0; PyObject *tmp; switch (nargs) { case 2: if (has_ddof) { *axis = PyTuple_GET_ITEM(args, 1); } else { TYPE_ERR("wrong number of arguments"); return 0; } case 1: *a = PyTuple_GET_ITEM(args, 0); case 0: break; default: TYPE_ERR("wrong number of arguments"); return 0; } switch (nargs) { case 0: *a = PyDict_GetItem(kwds, pystr_a); if (*a == NULL) { TYPE_ERR("Cannot find `a` keyword input"); return 0; } nkwds_found += 1; case 1: tmp = PyDict_GetItem(kwds, pystr_axis); if (tmp != NULL) { *axis = tmp; nkwds_found++; } case 2: if (has_ddof) { tmp = PyDict_GetItem(kwds, pystr_ddof); if (tmp != NULL) { *ddof = tmp; nkwds_found++; } break; } break; default: TYPE_ERR("wrong number of arguments"); return 0; } if (nkwds_found != nkwds) { TYPE_ERR("wrong number of keyword arguments"); return 0; } if (nargs + nkwds_found > 2 + has_ddof) { TYPE_ERR("too many arguments"); return 0; } } else { switch (nargs) { case 3: if (has_ddof) { *ddof = PyTuple_GET_ITEM(args, 2); } else { TYPE_ERR("wrong number of arguments"); return 0; } case 2: *axis = PyTuple_GET_ITEM(args, 1); case 1: *a = PyTuple_GET_ITEM(args, 0); break; default: TYPE_ERR("wrong number of arguments"); return 0; } } return 1; } static PyObject * reducer(char *name, PyObject *args, PyObject *kwds, fall_t fall_float64, fall_t fall_float32, fall_t fall_int64, fall_t fall_int32, fone_t fone_float64, fone_t fone_float32, fone_t fone_int64, fone_t fone_int32, int has_ddof) { int ndim; int axis; int dtype; int ddof; int reduce_all = 0; PyArrayObject *a; PyObject *a_obj = NULL; PyObject *axis_obj = Py_None; PyObject *ddof_obj = NULL; if (!parse_args(args, kwds, has_ddof, &a_obj, &axis_obj, &ddof_obj)) { return NULL; } /* convert to array if necessary */ if PyArray_Check(a_obj) { a = (PyArrayObject *)a_obj; } else { a = (PyArrayObject *)PyArray_FROM_O(a_obj); if (a == NULL) { return NULL; } } /* check for byte swapped input array */ if PyArray_ISBYTESWAPPED(a) { return slow(name, args, kwds); } /* does user want to reduce over all axes? 
*/ if (axis_obj == Py_None) { reduce_all = 1; } else { axis = PyArray_PyIntAsInt(axis_obj); if (error_converting(axis)) { TYPE_ERR("`axis` must be an integer or None"); return NULL; } ndim = PyArray_NDIM(a); if (axis < 0) { axis += ndim; if (axis < 0) { PyErr_Format(PyExc_ValueError, "axis(=%d) out of bounds", axis); return NULL; } } else if (axis >= ndim) { PyErr_Format(PyExc_ValueError, "axis(=%d) out of bounds", axis); return NULL; } if (ndim == 1) { reduce_all = 1; } } /* ddof */ if (ddof_obj == NULL) { ddof = 0; } else { ddof = PyArray_PyIntAsInt(ddof_obj); if (error_converting(ddof)) { TYPE_ERR("`ddof` must be an integer"); return NULL; } } dtype = PyArray_TYPE(a); if (reduce_all == 1) { /* we are reducing the array along all axes */ if (dtype == NPY_FLOAT64) { return fall_float64(a, ddof); } else if (dtype == NPY_FLOAT32) { return fall_float32(a, ddof); } else if (dtype == NPY_INT64) { return fall_int64(a, ddof); } else if (dtype == NPY_INT32) { return fall_int32(a, ddof); } else { return slow(name, args, kwds); } } else { /* we are reducing an array with ndim > 1 over a single axis */ if (dtype == NPY_FLOAT64) { return fone_float64(a, axis, ddof); } else if (dtype == NPY_FLOAT32) { return fone_float32(a, axis, ddof); } else if (dtype == NPY_INT64) { return fone_int64(a, axis, ddof); } else if (dtype == NPY_INT32) { return fone_int32(a, axis, ddof); } else { return slow(name, args, kwds); } } } /* docstrings ------------------------------------------------------------- */ static char reduce_doc[] = "Bottleneck functions that reduce the input array along a specified axis."; static char nansum_doc[] = /* MULTILINE STRING BEGIN nansum(a, axis=None) Sum of array elements along given axis treating NaNs as zero. The data type (dtype) of the output is the same as the input. On 64-bit operating systems, 32-bit input is NOT upcast to 64-bit accumulator and return values. Parameters ---------- a : array_like Array containing numbers whose sum is desired. If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which the sum is computed. The default (axis=None) is to compute the sum of the flattened array. Returns ------- y : ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if axis is None, a scalar is returned. Notes ----- No error is raised on overflow. If positive or negative infinity are present the result is positive or negative infinity. But if both positive and negative infinity are present, the result is Not A Number (NaN). Examples -------- >>> bn.nansum(1) 1 >>> bn.nansum([1]) 1 >>> bn.nansum([1, np.nan]) 1.0 >>> a = np.array([[1, 1], [1, np.nan]]) >>> bn.nansum(a) 3.0 >>> bn.nansum(a, axis=0) array([ 2., 1.]) When positive infinity and negative infinity are present: >>> bn.nansum([1, np.nan, np.inf]) inf >>> bn.nansum([1, np.nan, np.NINF]) -inf >>> bn.nansum([1, np.nan, np.inf, np.NINF]) nan MULTILINE STRING END */ static char nanmean_doc[] = /* MULTILINE STRING BEGIN nanmean(a, axis=None) Mean of array elements along given axis ignoring NaNs. `float64` intermediate and return values are used for integer inputs. Parameters ---------- a : array_like Array containing numbers whose mean is desired. If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which the means are computed. The default (axis=None) is to compute the mean of the flattened array. Returns ------- y : ndarray An array with the same shape as `a`, with the specified axis removed. 
If `a` is a 0-d array, or if axis is None, a scalar is returned. `float64` intermediate and return values are used for integer inputs. See also -------- bottleneck.nanmedian: Median along specified axis, ignoring NaNs. Notes ----- No error is raised on overflow. (The sum is computed and then the result is divided by the number of non-NaN elements.) If positive or negative infinity are present the result is positive or negative infinity. But if both positive and negative infinity are present, the result is Not A Number (NaN). Examples -------- >>> bn.nanmean(1) 1.0 >>> bn.nanmean([1]) 1.0 >>> bn.nanmean([1, np.nan]) 1.0 >>> a = np.array([[1, 4], [1, np.nan]]) >>> bn.nanmean(a) 2.0 >>> bn.nanmean(a, axis=0) array([ 1., 4.]) When positive infinity and negative infinity are present: >>> bn.nanmean([1, np.nan, np.inf]) inf >>> bn.nanmean([1, np.nan, np.NINF]) -inf >>> bn.nanmean([1, np.nan, np.inf, np.NINF]) nan MULTILINE STRING END */ static char nanstd_doc[] = /* MULTILINE STRING BEGIN nanstd(a, axis=None, ddof=0) Standard deviation along the specified axis, ignoring NaNs. `float64` intermediate and return values are used for integer inputs. Instead of a faster one-pass algorithm, a more stable two-pass algorithm is used. An example of a one-pass algorithm: >>> np.sqrt((a*a).mean() - a.mean()**2) An example of a two-pass algorithm: >>> np.sqrt(((a - a.mean())**2).mean()) Note in the two-pass algorithm the mean must be found (first pass) before the squared deviation (second pass) can be found. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which the standard deviation is computed. The default (axis=None) is to compute the standard deviation of the flattened array. ddof : int, optional Means Delta Degrees of Freedom. The divisor used in calculations is ``N - ddof``, where ``N`` represents the number of non-NaN elements. By default `ddof` is zero. Returns ------- y : ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if axis is None, a scalar is returned. `float64` intermediate and return values are used for integer inputs. If ddof is >= the number of non-NaN elements in a slice or the slice contains only NaNs, then the result for that slice is NaN. See also -------- bottleneck.nanvar: Variance along specified axis ignoring NaNs Notes ----- If positive or negative infinity are present the result is Not A Number (NaN). Examples -------- >>> bn.nanstd(1) 0.0 >>> bn.nanstd([1]) 0.0 >>> bn.nanstd([1, np.nan]) 0.0 >>> a = np.array([[1, 4], [1, np.nan]]) >>> bn.nanstd(a) 1.4142135623730951 >>> bn.nanstd(a, axis=0) array([ 0., 0.]) When positive infinity or negative infinity are present NaN is returned: >>> bn.nanstd([1, np.nan, np.inf]) nan MULTILINE STRING END */ static char nanvar_doc[] = /* MULTILINE STRING BEGIN nanvar(a, axis=None, ddof=0) Variance along the specified axis, ignoring NaNs. `float64` intermediate and return values are used for integer inputs. Instead of a faster one-pass algorithm, a more stable two-pass algorithm is used. An example of a one-pass algorithm: >>> (a*a).mean() - a.mean()**2 An example of a two-pass algorithm: >>> ((a - a.mean())**2).mean() Note in the two-pass algorithm the mean must be found (first pass) before the squared deviation (second pass) can be found. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. 
axis : {int, None}, optional
    Axis along which the variance is computed. The default (axis=None) is
    to compute the variance of the flattened array.
ddof : int, optional
    Means Delta Degrees of Freedom. The divisor used in calculations
    is ``N - ddof``, where ``N`` represents the number of non-NaN elements.
    By default `ddof` is zero.

Returns
-------
y : ndarray
    An array with the same shape as `a`, with the specified axis removed.
    If `a` is a 0-d array, or if axis is None, a scalar is returned.
    `float64` intermediate and return values are used for integer inputs.
    If ddof is >= the number of non-NaN elements in a slice or the slice
    contains only NaNs, then the result for that slice is NaN.

See also
--------
bottleneck.nanstd: Standard deviation along specified axis ignoring NaNs.

Notes
-----
If positive or negative infinity are present the result is Not A Number
(NaN).

Examples
--------
>>> bn.nanvar(1)
0.0
>>> bn.nanvar([1])
0.0
>>> bn.nanvar([1, np.nan])
0.0
>>> a = np.array([[1, 4], [1, np.nan]])
>>> bn.nanvar(a)
2.0
>>> bn.nanvar(a, axis=0)
array([ 0.,  0.])

When positive infinity or negative infinity are present NaN is returned:

>>> bn.nanvar([1, np.nan, np.inf])
nan

MULTILINE STRING END */

static char nanmin_doc[] =
/* MULTILINE STRING BEGIN
nanmin(a, axis=None)

Minimum values along specified axis, ignoring NaNs.

When all-NaN slices are encountered, NaN is returned for that slice.

Parameters
----------
a : array_like
    Input array. If `a` is not an array, a conversion is attempted.
axis : {int, None}, optional
    Axis along which the minimum is computed. The default (axis=None) is
    to compute the minimum of the flattened array.

Returns
-------
y : ndarray
    An array with the same shape as `a`, with the specified axis removed.
    If `a` is a 0-d array, or if axis is None, a scalar is returned. The
    same dtype as `a` is returned.

See also
--------
bottleneck.nanmax: Maximum along specified axis, ignoring NaNs.
bottleneck.nanargmin: Indices of minimum values along axis, ignoring NaNs.

Examples
--------
>>> bn.nanmin(1)
1
>>> bn.nanmin([1])
1
>>> bn.nanmin([1, np.nan])
1.0
>>> a = np.array([[1, 4], [1, np.nan]])
>>> bn.nanmin(a)
1.0
>>> bn.nanmin(a, axis=0)
array([ 1.,  4.])

MULTILINE STRING END */

static char nanmax_doc[] =
/* MULTILINE STRING BEGIN
nanmax(a, axis=None)

Maximum values along specified axis, ignoring NaNs.

When all-NaN slices are encountered, NaN is returned for that slice.

Parameters
----------
a : array_like
    Input array. If `a` is not an array, a conversion is attempted.
axis : {int, None}, optional
    Axis along which the maximum is computed. The default (axis=None) is
    to compute the maximum of the flattened array.

Returns
-------
y : ndarray
    An array with the same shape as `a`, with the specified axis removed.
    If `a` is a 0-d array, or if axis is None, a scalar is returned. The
    same dtype as `a` is returned.

See also
--------
bottleneck.nanmin: Minimum along specified axis, ignoring NaNs.
bottleneck.nanargmax: Indices of maximum values along axis, ignoring NaNs.

Examples
--------
>>> bn.nanmax(1)
1
>>> bn.nanmax([1])
1
>>> bn.nanmax([1, np.nan])
1.0
>>> a = np.array([[1, 4], [1, np.nan]])
>>> bn.nanmax(a)
4.0
>>> bn.nanmax(a, axis=0)
array([ 1.,  4.])

MULTILINE STRING END */

static char nanargmin_doc[] =
/* MULTILINE STRING BEGIN
nanargmin(a, axis=None)

Indices of the minimum values along an axis, ignoring NaNs.

For all-NaN slices ``ValueError`` is raised. Unlike NumPy, the results
can be trusted if a slice contains only NaNs and Infs.

Parameters
----------
a : array_like
    Input array.
If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which to operate. By default (axis=None) flattened input is used. See also -------- bottleneck.nanargmax: Indices of the maximum values along an axis. bottleneck.nanmin: Minimum values along specified axis, ignoring NaNs. Returns ------- index_array : ndarray An array of indices or a single index value. Examples -------- >>> a = np.array([[np.nan, 4], [2, 3]]) >>> bn.nanargmin(a) 2 >>> a.flat[2] 2.0 >>> bn.nanargmin(a, axis=0) array([1, 1]) >>> bn.nanargmin(a, axis=1) array([1, 0]) MULTILINE STRING END */ static char nanargmax_doc[] = /* MULTILINE STRING BEGIN nanargmax(a, axis=None) Indices of the maximum values along an axis, ignoring NaNs. For all-NaN slices ``ValueError`` is raised. Unlike NumPy, the results can be trusted if a slice contains only NaNs and Infs. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which to operate. By default (axis=None) flattened input is used. See also -------- bottleneck.nanargmin: Indices of the minimum values along an axis. bottleneck.nanmax: Maximum values along specified axis, ignoring NaNs. Returns ------- index_array : ndarray An array of indices or a single index value. Examples -------- >>> a = np.array([[np.nan, 4], [2, 3]]) >>> bn.nanargmax(a) 1 >>> a.flat[1] 4.0 >>> bn.nanargmax(a, axis=0) array([1, 0]) >>> bn.nanargmax(a, axis=1) array([1, 1]) MULTILINE STRING END */ static char ss_doc[] = /* MULTILINE STRING BEGIN ss(a, axis=None) Sum of the square of each element along the specified axis. Parameters ---------- a : array_like Array whose sum of squares is desired. If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which the sum of squares is computed. The default (axis=None) is to sum the squares of the flattened array. Returns ------- y : ndarray The sum of a**2 along the given axis. Examples -------- >>> a = np.array([1., 2., 5.]) >>> bn.ss(a) 30.0 And calculating along an axis: >>> b = np.array([[1., 2., 5.], [2., 5., 6.]]) >>> bn.ss(b, axis=1) array([ 30., 65.]) MULTILINE STRING END */ static char median_doc[] = /* MULTILINE STRING BEGIN median(a, axis=None) Median of array elements along given axis. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which the median is computed. The default (axis=None) is to compute the median of the flattened array. Returns ------- y : ndarray An array with the same shape as `a`, except that the specified axis has been removed. If `a` is a 0d array, or if axis is None, a scalar is returned. `float64` return values are used for integer inputs. NaN is returned for a slice that contains one or more NaNs. See also -------- bottleneck.nanmedian: Median along specified axis ignoring NaNs. Examples -------- >>> a = np.array([[10, 7, 4], [3, 2, 1]]) >>> bn.median(a) 3.5 >>> bn.median(a, axis=0) array([ 6.5, 4.5, 2.5]) >>> bn.median(a, axis=1) array([ 7., 2.]) MULTILINE STRING END */ static char nanmedian_doc[] = /* MULTILINE STRING BEGIN nanmedian(a, axis=None) Median of array elements along given axis ignoring NaNs. Parameters ---------- a : array_like Input array. If `a` is not an array, a conversion is attempted. axis : {int, None}, optional Axis along which the median is computed. The default (axis=None) is to compute the median of the flattened array. 
Returns
-------
y : ndarray
    An array with the same shape as `a`, except that the specified axis
    has been removed. If `a` is a 0d array, or if axis is None, a scalar
    is returned. `float64` return values are used for integer inputs.

See also
--------
bottleneck.median: Median along specified axis.

Examples
--------
>>> a = np.array([[np.nan, 7, 4], [3, 2, 1]])
>>> a
array([[ nan,   7.,   4.],
       [  3.,   2.,   1.]])
>>> bn.nanmedian(a)
3.0
>>> bn.nanmedian(a, axis=0)
array([ 3. ,  4.5,  2.5])
>>> bn.nanmedian(a, axis=1)
array([ 5.5,  2. ])

MULTILINE STRING END */

static char anynan_doc[] =
/* MULTILINE STRING BEGIN
anynan(a, axis=None)

Test whether any array element along a given axis is NaN.

Returns the same output as np.isnan(a).any(axis)

Parameters
----------
a : array_like
    Input array. If `a` is not an array, a conversion is attempted.
axis : {int, None}, optional
    Axis along which NaNs are searched. The default (`axis` = ``None``)
    is to search for NaNs over a flattened input array.

Returns
-------
y : bool or ndarray
    A boolean or new `ndarray` is returned.

See also
--------
bottleneck.allnan: Test if all array elements along given axis are NaN

Examples
--------
>>> bn.anynan(1)
False
>>> bn.anynan(np.nan)
True
>>> bn.anynan([1, np.nan])
True
>>> a = np.array([[1, 4], [1, np.nan]])
>>> bn.anynan(a)
True
>>> bn.anynan(a, axis=0)
array([False,  True], dtype=bool)

MULTILINE STRING END */

static char allnan_doc[] =
/* MULTILINE STRING BEGIN
allnan(a, axis=None)

Test whether all array elements along a given axis are NaN.

Returns the same output as np.isnan(a).all(axis)

Note that allnan([]) is True to match np.isnan([]).all() and all([])

Parameters
----------
a : array_like
    Input array. If `a` is not an array, a conversion is attempted.
axis : {int, None}, optional
    Axis along which NaNs are searched. The default (`axis` = ``None``)
    is to search for NaNs over a flattened input array.

Returns
-------
y : bool or ndarray
    A boolean or new `ndarray` is returned.
See also -------- bottleneck.anynan: Test if any array element along given axis is NaN Examples -------- >>> bn.allnan(1) False >>> bn.allnan(np.nan) True >>> bn.allnan([1, np.nan]) False >>> a = np.array([[1, np.nan], [1, np.nan]]) >>> bn.allnan(a) False >>> bn.allnan(a, axis=0) array([False, True], dtype=bool) An empty array returns True: >>> bn.allnan([]) True which is similar to: >>> all([]) True >>> np.isnan([]).all() True MULTILINE STRING END */ /* python wrapper -------------------------------------------------------- */ static PyMethodDef reduce_methods[] = { {"nansum", (PyCFunction)nansum, VARKEY, nansum_doc}, {"nanmean", (PyCFunction)nanmean, VARKEY, nanmean_doc}, {"nanstd", (PyCFunction)nanstd, VARKEY, nanstd_doc}, {"nanvar", (PyCFunction)nanvar, VARKEY, nanvar_doc}, {"nanmin", (PyCFunction)nanmin, VARKEY, nanmin_doc}, {"nanmax", (PyCFunction)nanmax, VARKEY, nanmax_doc}, {"nanargmin", (PyCFunction)nanargmin, VARKEY, nanargmin_doc}, {"nanargmax", (PyCFunction)nanargmax, VARKEY, nanargmax_doc}, {"ss", (PyCFunction)ss, VARKEY, ss_doc}, {"median", (PyCFunction)median, VARKEY, median_doc}, {"nanmedian", (PyCFunction)nanmedian, VARKEY, nanmedian_doc}, {"anynan", (PyCFunction)anynan, VARKEY, anynan_doc}, {"allnan", (PyCFunction)allnan, VARKEY, allnan_doc}, {NULL, NULL, 0, NULL} }; #if PY_MAJOR_VERSION >= 3 static struct PyModuleDef reduce_def = { PyModuleDef_HEAD_INIT, "reduce", reduce_doc, -1, reduce_methods }; #endif PyMODINIT_FUNC #if PY_MAJOR_VERSION >= 3 #define RETVAL m PyInit_reduce(void) #else #define RETVAL initreduce(void) #endif { #if PY_MAJOR_VERSION >=3 PyObject *m = PyModule_Create(&reduce_def); #else PyObject *m = Py_InitModule3("reduce", reduce_methods, reduce_doc); #endif if (m == NULL) return RETVAL; import_array(); if (!intern_strings()) { return RETVAL; } return RETVAL; } bottleneck-1.2.0/bottleneck/src/template.py000066400000000000000000000117351300536544100207620ustar00rootroot00000000000000import os import re import ast def make_c_files(): modules = ['reduce', 'move', 'nonreduce', 'nonreduce_axis'] dirpath = os.path.dirname(__file__) for module in modules: filepath = os.path.join(dirpath, module + '_template.c') with open(filepath, 'r') as f: src_str = f.read() src_str = template(src_str) filepath = os.path.join(dirpath, module + '.c') with open(filepath, 'w') as f: f.write(src_str) def template(src_str): src_list = src_str.splitlines() src_list = repeat_templating(src_list) src_list = dtype_templating(src_list) src_list = string_templating(src_list) src_str = '\n'.join(src_list) src_str = re.sub(r'\n\s*\n\s*\n', r'\n\n', src_str) return src_str # repeat -------------------------------------------------------------------- REPEAT_BEGIN = r'^/\*\s*repeat\s*=\s*' REPEAT_END = r'^/\*\s*repeat end' COMMENT_END = r'.*\*\/.*' def repeat_templating(lines): index = 0 while True: idx0, idx1 = next_block(lines, index, REPEAT_BEGIN, REPEAT_END) if idx0 is None: break func_list = lines[idx0:idx1] func_list = expand_functions_repeat(func_list) # the +1 below is to skip the /* repeat end */ line lines = lines[:idx0] + func_list + lines[idx1+1:] index = idx0 return lines def expand_functions_repeat(lines): idx = first_occurence(COMMENT_END, lines) repeat_dict = repeat_info(lines[:idx + 1]) lines = lines[idx + 1:] func_str = '\n'.join(lines) func_list = expand_repeat(func_str, repeat_dict) return func_list def repeat_info(lines): line = ''.join(lines) repeat = re.findall(r'\{.*\}', line) repeat_dict = ast.literal_eval(repeat[0]) return repeat_dict def expand_repeat(func_str, 
repeat_dict): nrepeats = [len(repeat_dict[key]) for key in repeat_dict] if len(set(nrepeats)) != 1: raise ValueError("All repeat lists must be the same length") nrepeat = nrepeats[0] func_list = [] for i in range(nrepeat): f = func_str[:] for key in repeat_dict: f = f.replace(key, repeat_dict[key][i]) func_list.append('\n' + f) func_list = (''.join(func_list)).splitlines() return func_list # dtype --------------------------------------------------------------------- DTYPE_BEGIN = r'^/\*\s*dtype\s*=\s*' DTYPE_END = r'^/\*\s*dtype end' def dtype_templating(lines): index = 0 while True: idx0, idx1 = next_block(lines, index, DTYPE_BEGIN, DTYPE_END) if idx0 is None: break func_list = lines[idx0:idx1] func_list = expand_functions_dtype(func_list) # the +1 below is to skip the /* dtype end */ line lines = lines[:idx0] + func_list + lines[idx1+1:] index = idx0 return lines def expand_functions_dtype(lines): idx = first_occurence(COMMENT_END, lines) dtypes = dtype_info(lines[:idx + 1]) lines = lines[idx + 1:] func_str = '\n'.join(lines) func_list = expand_dtypes(func_str, dtypes) return func_list def dtype_info(lines): line = ''.join(lines) dtypes = re.findall(r'\[.*\]', line) if len(dtypes) != 1: raise ValueError("expecting exactly one dtype specification") dtypes = ast.literal_eval(dtypes[0]) return dtypes def expand_dtypes(func_str, dtypes): if 'DTYPE' not in func_str: raise ValueError("cannot find dtype marker") func_list = [] for dtype in dtypes: f = func_str[:] for i, dt in enumerate(dtype): f = f.replace('DTYPE%d' % i, dt) if i > 0: f = f + '\n' func_list.append('\n\n' + f) return func_list # multiline strings --------------------------------------------------------- STRING_BEGIN = r'.*MULTILINE STRING BEGIN.*' STRING_END = r'.*MULTILINE STRING END.*' def string_templating(lines): index = 0 while True: idx0, idx1 = next_block(lines, index, STRING_BEGIN, STRING_END) if idx0 is None: break str_list = lines[idx0+1:idx1] str_list = quote_string(str_list) lines = lines[:idx0] + str_list + lines[idx1+1:] index = idx0 return lines def quote_string(lines): for i in range(len(lines)): lines[i] = "\"" + lines[i] + r"\n" + "\"" lines[-1] = lines[-1] + ";" return lines # utility ------------------------------------------------------------------- def first_occurence(pattern, lines): for i in range(len(lines)): if re.match(pattern, lines[i]): return i raise ValueError("`pattern` not found") def next_block(lines, index, begine_pattern, end_pattern): idx = None for i in range(index, len(lines)): line = lines[i] if re.match(begine_pattern, line): idx = i elif re.match(end_pattern, line): if idx is None: raise ValueError("found end of function before beginning") return idx, i return None, None bottleneck-1.2.0/bottleneck/tests/000077500000000000000000000000001300536544100171415ustar00rootroot00000000000000bottleneck-1.2.0/bottleneck/tests/__init__.py000066400000000000000000000000001300536544100212400ustar00rootroot00000000000000bottleneck-1.2.0/bottleneck/tests/input_modifcation_test.py000066400000000000000000000041451300536544100242710ustar00rootroot00000000000000"Test functions." import warnings import numpy as np from numpy.testing import assert_equal import bottleneck as bn from .util import DTYPES def arrays(dtypes): "Iterator that yield arrays to use for unit testing." 
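    # Small 1d, 2d and 3d test arrays are built below; for inexact dtypes,
    # inf, nan and sign flips are randomly inserted to stress the functions.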
    ss = {}
    ss[1] = {'size': 4, 'shapes': [(4,)]}
    ss[2] = {'size': 6, 'shapes': [(2, 3)]}
    ss[3] = {'size': 6, 'shapes': [(1, 2, 3)]}
    rs = np.random.RandomState([1, 2, 3])
    for ndim in ss:
        size = ss[ndim]['size']
        shapes = ss[ndim]['shapes']
        for dtype in dtypes:
            a = np.arange(size, dtype=dtype)
            if issubclass(a.dtype.type, np.inexact):
                idx = rs.rand(*a.shape) < 0.2
                a[idx] = np.inf
                idx = rs.rand(*a.shape) < 0.2
                a[idx] = np.nan
                idx = rs.rand(*a.shape) < 0.2
                a[idx] *= -1
            for shape in shapes:
                a = a.reshape(shape)
                yield a


def unit_maker(func, nans=True):
    "Test that bn.xxx gives the same output as np.xxx."
    msg = "\nInput array modified by %s.\n\n"
    msg += "input array before:\n%s\nafter:\n%s\n"
    for i, a in enumerate(arrays(DTYPES)):
        for axis in list(range(-a.ndim, a.ndim)) + [None]:
            with np.errstate(invalid='ignore'):
                a1 = a.copy()
                a2 = a.copy()
                if ('move_' in func.__name__) or ('sort' in func.__name__):
                    if axis is None:
                        continue
                    with warnings.catch_warnings():
                        warnings.simplefilter("ignore")
                        func(a1, 1, axis=axis)
                else:
                    try:
                        with warnings.catch_warnings():
                            warnings.simplefilter("ignore")
                            func(a1, axis=axis)
                    except:
                        continue
                assert_equal(a1, a2, msg % (func.__name__, a1, a2))


def test_modification():
    "Test for illegal inplace modification of input array"
    for func in bn.get_functions('all'):
        yield unit_maker, func
bottleneck-1.2.0/bottleneck/tests/list_input_test.py000066400000000000000000000032001300536544100227400ustar00rootroot00000000000000"Check that functions can handle list input"

import warnings

import numpy as np
from numpy.testing import assert_array_almost_equal
import bottleneck as bn
from .util import DTYPES


def test_list_input():
    "Check that functions can handle list input"
    for func in bn.get_functions('all'):
        if func.__name__ != 'replace':
            yield unit_maker, func


def lists(dtypes=DTYPES):
    "Iterator that yields lists to use for unit testing."
    ss = {}
    ss[1] = {'size': 4, 'shapes': [(4,)]}
    ss[2] = {'size': 6, 'shapes': [(1, 6), (2, 3)]}
    ss[3] = {'size': 6, 'shapes': [(1, 2, 3)]}
    ss[4] = {'size': 24, 'shapes': [(1, 2, 3, 4)]}
    for ndim in ss:
        size = ss[ndim]['size']
        shapes = ss[ndim]['shapes']
        a = np.arange(size)
        for shape in shapes:
            a = a.reshape(shape)
            for dtype in dtypes:
                yield a.astype(dtype).tolist()


def unit_maker(func):
    "Test that bn.xxx gives the same output as bn.slow.xxx for list input."
    msg = '\nfunc %s | input %s (%s) | shape %s\n'
    msg += '\nInput array:\n%s\n'
    name = func.__name__
    func0 = eval('bn.slow.%s' % name)
    for i, a in enumerate(lists()):
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            try:
                actual = func(a)
                desired = func0(a)
            except TypeError:
                actual = func(a, 2)
                desired = func0(a, 2)
        a = np.array(a)
        tup = (name, 'a'+str(i), str(a.dtype), str(a.shape), a)
        err_msg = msg % tup
        assert_array_almost_equal(actual, desired, err_msg=err_msg)
bottleneck-1.2.0/bottleneck/tests/move_test.py000066400000000000000000000167311300536544100215260ustar00rootroot00000000000000"Test moving window functions."

from nose.tools import assert_true
import numpy as np
from numpy.testing import (assert_equal, assert_array_almost_equal,
                           assert_raises)
import bottleneck as bn
from .util import arrays, array_order


def test_move():
    "test move functions"
    for func in bn.get_functions('move'):
        yield unit_maker, func


def unit_maker(func):
    "Test that bn.xxx gives the same output as a reference function."
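    # Compare bn.move_xxx against the reference bn.slow.move_xxx for all
    # test arrays, axes, window lengths and min_count values, checking
    # both the values (almost equal) and the output dtype.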
fmt = ('\nfunc %s | window %d | min_count %s | input %s (%s) | shape %s | ' 'axis %s | order %s\n') fmt += '\nInput array:\n%s\n' aaae = assert_array_almost_equal func_name = func.__name__ func0 = eval('bn.slow.%s' % func_name) if func_name == "move_var": decimal = 3 else: decimal = 5 for i, a in enumerate(arrays(func_name)): axes = range(-1, a.ndim) for axis in axes: windows = range(1, a.shape[axis]) for window in windows: min_counts = list(range(1, window + 1)) + [None] for min_count in min_counts: actual = func(a, window, min_count, axis=axis) desired = func0(a, window, min_count, axis=axis) tup = (func_name, window, str(min_count), 'a'+str(i), str(a.dtype), str(a.shape), str(axis), array_order(a), a) err_msg = fmt % tup aaae(actual, desired, decimal, err_msg) err_msg += '\n dtype mismatch %s %s' da = actual.dtype dd = desired.dtype assert_equal(da, dd, err_msg % (da, dd)) # --------------------------------------------------------------------------- # Test argument parsing def test_arg_parsing(): "test argument parsing" for func in bn.get_functions('move'): yield unit_maker_argparse, func def unit_maker_argparse(func, decimal=5): "test argument parsing." name = func.__name__ func0 = eval('bn.slow.%s' % name) a = np.array([1., 2, 3]) fmt = '\n%s' % func fmt += '%s\n' fmt += '\nInput array:\n%s\n' % a actual = func(a, 2) desired = func0(a, 2) err_msg = fmt % "(a, 2)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, 2, 1) desired = func0(a, 2, 1) err_msg = fmt % "(a, 2, 1)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, window=2) desired = func0(a, window=2) err_msg = fmt % "(a, window=2)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, window=2, min_count=1) desired = func0(a, window=2, min_count=1) err_msg = fmt % "(a, window=2, min_count=1)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, window=2, min_count=1, axis=0) desired = func0(a, window=2, min_count=1, axis=0) err_msg = fmt % "(a, window=2, min_count=1, axis=0)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, min_count=1, window=2, axis=0) desired = func0(a, min_count=1, window=2, axis=0) err_msg = fmt % "(a, min_count=1, window=2, axis=0)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, axis=-1, min_count=None, window=2) desired = func0(a, axis=-1, min_count=None, window=2) err_msg = fmt % "(a, axis=-1, min_count=None, window=2)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a=a, axis=-1, min_count=None, window=2) desired = func0(a=a, axis=-1, min_count=None, window=2) err_msg = fmt % "(a=a, axis=-1, min_count=None, window=2)" assert_array_almost_equal(actual, desired, decimal, err_msg) if name in ('move_std', 'move_var'): actual = func(a, 2, 1, -1, ddof=1) desired = func0(a, 2, 1, -1, ddof=1) err_msg = fmt % "(a, 2, 1, -1, ddof=1)" assert_array_almost_equal(actual, desired, decimal, err_msg) # regression test: make sure len(kwargs) == 0 doesn't raise args = (a, 1, 1, -1) kwargs = {} func(*args, **kwargs) def test_arg_parse_raises(): "test argument parsing raises in move" for func in bn.get_functions('move'): yield unit_maker_argparse_raises, func def unit_maker_argparse_raises(func): "test argument parsing raises in move" a = np.array([1., 2, 3]) assert_raises(TypeError, func) assert_raises(TypeError, func, axis=a) assert_raises(TypeError, func, a, 2, axis=0, extra=0) assert_raises(TypeError, func, a, 2, axis=0, a=a) 
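    # six positional arguments is too many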
assert_raises(TypeError, func, a, 2, 2, 0, 0, 0) assert_raises(TypeError, func, a, 2, axis='0') assert_raises(TypeError, func, a, 1, min_count='1') if func.__name__ not in ('move_std', 'move_var'): assert_raises(TypeError, func, a, 2, ddof=0) # --------------------------------------------------------------------------- # move_median.c is complicated. Let's do some more testing. # # If you make changes to move_median.c then do lots of tests by increasing # range(100) in the two functions below to range(10000). And for extra credit # increase size to 30. With those two changes the unit tests will take a # LONG time to run. def test_move_median_with_nans(): "test move_median.c with nans" fmt = '\nfunc %s | window %d | min_count %s\n\nInput array:\n%s\n' aaae = assert_array_almost_equal min_count = 1 size = 10 func = bn.move_median func0 = bn.slow.move_median rs = np.random.RandomState([1, 2, 3]) for i in range(100): a = np.arange(size, dtype=np.float64) idx = rs.rand(*a.shape) < 0.1 a[idx] = np.inf idx = rs.rand(*a.shape) < 0.2 a[idx] = np.nan rs.shuffle(a) for window in range(2, size + 1): actual = func(a, window=window, min_count=min_count) desired = func0(a, window=window, min_count=min_count) err_msg = fmt % (func.__name__, window, min_count, a) aaae(actual, desired, decimal=5, err_msg=err_msg) def test_move_median_without_nans(): "test move_median.c without nans" fmt = '\nfunc %s | window %d | min_count %s\n\nInput array:\n%s\n' aaae = assert_array_almost_equal min_count = 1 size = 10 func = bn.move_median func0 = bn.slow.move_median rs = np.random.RandomState([1, 2, 3]) for i in range(100): a = np.arange(size, dtype=np.int64) rs.shuffle(a) for window in range(2, size + 1): actual = func(a, window=window, min_count=min_count) desired = func0(a, window=window, min_count=min_count) err_msg = fmt % (func.__name__, window, min_count, a) aaae(actual, desired, decimal=5, err_msg=err_msg) # ---------------------------------------------------------------------------- # Regression test for square roots of negative numbers def test_move_std_sqrt(): "Test move_std for neg sqrt." a = [0.0011448196318903589, 0.00028718669878572767, 0.00028718669878572767, 0.00028718669878572767, 0.00028718669878572767] err_msg = "Square root of negative number. 
ndim = %d" b = bn.move_std(a, window=3) assert_true(np.isfinite(b[2:]).all(), err_msg % 1) a2 = np.array([a, a]) b = bn.move_std(a2, window=3, axis=1) assert_true(np.isfinite(b[:, 2:]).all(), err_msg % 2) a3 = np.array([[a, a], [a, a]]) b = bn.move_std(a3, window=3, axis=2) assert_true(np.isfinite(b[:, :, 2:]).all(), err_msg % 3) bottleneck-1.2.0/bottleneck/tests/nonreduce_axis_test.py000066400000000000000000000155551300536544100235730ustar00rootroot00000000000000import numpy as np from numpy.testing import (assert_equal, assert_array_equal, assert_array_almost_equal, assert_raises) import bottleneck as bn from .reduce_test import (unit_maker as reduce_unit_maker, unit_maker_argparse as unit_maker_parse_rankdata) from .util import arrays, array_order # --------------------------------------------------------------------------- # partition, argpartition def test_partition(): "test partition" for func in (bn.partition,): yield unit_maker, func def test_argpartition(): "test argpartition" for func in (bn.argpartition,): yield unit_maker, func def unit_maker(func): "test partition or argpartition" msg = '\nfunc %s | input %s (%s) | shape %s | n %d | axis %s | order %s\n' msg += '\nInput array:\n%s\n' name = func.__name__ func0 = eval('bn.slow.%s' % name) rs = np.random.RandomState([1, 2, 3]) for i, a in enumerate(arrays(name)): if a.ndim == 0 or a.size == 0 or a.ndim > 3: continue for axis in list(range(-1, a.ndim)) + [None]: if axis is None: nmax = a.size - 1 else: nmax = a.shape[axis] - 1 if nmax < 1: continue n = rs.randint(nmax) s0 = func0(a, n, axis) s1 = func(a, n, axis) if name == 'argpartition': s0 = complete_the_argpartition(s0, a, n, axis) s1 = complete_the_argpartition(s1, a, n, axis) else: s0 = complete_the_partition(s0, n, axis) s1 = complete_the_partition(s1, n, axis) tup = (name, 'a'+str(i), str(a.dtype), str(a.shape), n, str(axis), array_order(a), a) err_msg = msg % tup assert_array_equal(s1, s0, err_msg) def complete_the_partition(a, n, axis): def func1d(a, n): a[:n] = np.sort(a[:n]) a[n+1:] = np.sort(a[n+1:]) return a a = a.copy() ndim = a.ndim if axis is None: if ndim != 1: raise ValueError("`a` must be 1d when axis is None") axis = 0 elif axis < 0: axis += ndim if axis < 0: raise ValueError("`axis` out of range") a = np.apply_along_axis(func1d, axis, a, n) return a def complete_the_argpartition(index, a, n, axis): a = a.copy() ndim = a.ndim if axis is None: if index.ndim != 1: raise ValueError("`index` must be 1d when axis is None") axis = 0 ndim = 1 a = a.reshape(-1) elif axis < 0: axis += ndim if axis < 0: raise ValueError("`axis` out of range") if ndim == 1: a = a[index] elif ndim == 2: if axis == 0: for i in range(a.shape[1]): a[:, i] = a[index[:, i], i] elif axis == 1: for i in range(a.shape[0]): a[i] = a[i, index[i]] else: raise ValueError("`axis` out of range") elif ndim == 3: if axis == 0: for i in range(a.shape[1]): for j in range(a.shape[2]): a[:, i, j] = a[index[:, i, j], i, j] elif axis == 1: for i in range(a.shape[0]): for j in range(a.shape[2]): a[i, :, j] = a[i, index[i, :, j], j] elif axis == 2: for i in range(a.shape[0]): for j in range(a.shape[1]): a[i, j, :] = a[i, j, index[i, j, :]] else: raise ValueError("`axis` out of range") else: raise ValueError("`a.ndim` must be 1, 2, or 3") a = complete_the_partition(a, n, axis) return a def test_transpose(): "partition transpose test" a = np.arange(12).reshape(4, 3) actual = bn.partition(a.T, 2, -1).T desired = bn.slow.partition(a.T, 2, -1).T assert_equal(actual, desired, 'partition transpose test') # 
--------------------------------------------------------------------------- # rankdata, nanrankdata, push def test_nonreduce_axis(): "Test nonreduce axis functions" funcs = [bn.rankdata, bn.nanrankdata, bn.push] for func in funcs: yield reduce_unit_maker, func def test_push(): "Test push" ns = (0, 1, 2, 3, 4, 5) a = np.array([np.nan, 1, 2, np.nan, np.nan, np.nan, np.nan, 3, np.nan]) for n in ns: actual = bn.push(a.copy(), n=n) desired = bn.slow.push(a.copy(), n=n) assert_array_equal(actual, desired, "failed on n=%s" % str(n)) # --------------------------------------------------------------------------- # Test argument parsing def test_arg_parsing(): "test argument parsing in nonreduce_axis" for func in bn.get_functions('nonreduce_axis'): name = func.__name__ if name in ('partition', 'argpartition'): yield unit_maker_parse, func elif name in ('push'): yield unit_maker_parse, func elif name in ('rankdata', 'nanrankdata'): yield unit_maker_parse_rankdata, func else: fmt = "``%s` is an unknown nonreduce_axis function" raise ValueError(fmt % name) yield unit_maker_raises, func def unit_maker_parse(func, decimal=5): "test argument parsing." name = func.__name__ func0 = eval('bn.slow.%s' % name) a = np.array([1., 2, 3]) fmt = '\n%s' % func fmt += '%s\n' fmt += '\nInput array:\n%s\n' % a actual = func(a, 1) desired = func0(a, 1) err_msg = fmt % "(a, 1)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, 1, axis=0) desired = func0(a, 1, axis=0) err_msg = fmt % "(a, 1, axis=0)" assert_array_almost_equal(actual, desired, decimal, err_msg) if name != 'push': actual = func(a, 2, None) desired = func0(a, 2, None) err_msg = fmt % "(a, 2, None)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, 1, axis=None) desired = func0(a, 1, axis=None) err_msg = fmt % "(a, 1, axis=None)" assert_array_almost_equal(actual, desired, decimal, err_msg) # regression test: make sure len(kwargs) == 0 doesn't raise args = (a, 1, -1) kwargs = {} func(*args, **kwargs) else: # regression test: make sure len(kwargs) == 0 doesn't raise args = (a, 1) kwargs = {} func(*args, **kwargs) def unit_maker_raises(func): "test argument parsing raises in nonreduce_axis" a = np.array([1., 2, 3]) assert_raises(TypeError, func) assert_raises(TypeError, func, axis=a) assert_raises(TypeError, func, a, axis=0, extra=0) assert_raises(TypeError, func, a, axis=0, a=a) if func.__name__ in ('partition', 'argpartition'): assert_raises(TypeError, func, a, 0, 0, 0, 0, 0) assert_raises(TypeError, func, a, axis='0') bottleneck-1.2.0/bottleneck/tests/nonreduce_test.py000066400000000000000000000066621300536544100225460ustar00rootroot00000000000000"Test replace()." import warnings import numpy as np from numpy.testing import assert_equal, assert_array_equal, assert_raises import bottleneck as bn from .util import arrays, array_order def test_nonreduce(): "test nonreduce functions" for func in bn.get_functions('nonreduce'): yield unit_maker, func def unit_maker(func): "Test that bn.xxx gives the same output as np.xxx." 
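    # Each candidate `old` is drawn from the array itself and paired with
    # several `new` values; bn.replace output and dtype are compared to
    # bn.slow.replace.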
    msg = '\nfunc %s | input %s (%s) | shape %s | old %f | new %f | order %s\n'
    msg += '\nInput array:\n%s\n'
    name = func.__name__
    func0 = eval('bn.slow.%s' % name)
    rs = np.random.RandomState([1, 2, 3])
    news = [1, 0, np.nan, -np.inf]
    for i, arr in enumerate(arrays(name)):
        for idx in range(2):
            if arr.size == 0:
                old = 0
            else:
                idx = rs.randint(max(arr.size, 1))
                old = arr.flat[idx]
            for new in news:
                if not issubclass(arr.dtype.type, np.inexact):
                    if not np.isfinite(old):
                        # Cannot safely cast to int
                        continue
                    if not np.isfinite(new):
                        # Cannot safely cast to int
                        continue
                actual = arr.copy()
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    func(actual, old, new)
                desired = arr.copy()
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    func0(desired, old, new)
                tup = (name, 'a'+str(i), str(arr.dtype), str(arr.shape),
                       old, new, array_order(arr), arr)
                err_msg = msg % tup
                assert_array_equal(actual, desired, err_msg=err_msg)
                err_msg += '\n dtype mismatch %s %s'
                if hasattr(actual, 'dtype') and hasattr(desired, 'dtype'):
                    da = actual.dtype
                    dd = desired.dtype
                    assert_equal(da, dd, err_msg % (da, dd))


# ---------------------------------------------------------------------------
# Check that exceptions are raised


def test_replace_unsafe_cast():
    "Test replace for unsafe casts"
    dtypes = ['int32', 'int64']
    for dtype in dtypes:
        a = np.zeros(3, dtype=dtype)
        assert_raises(ValueError, bn.replace, a.copy(), 0.1, 0)
        assert_raises(ValueError, bn.replace, a.copy(), 0, 0.1)
        assert_raises(ValueError, bn.slow.replace, a.copy(), 0.1, 0)
        assert_raises(ValueError, bn.slow.replace, a.copy(), 0, 0.1)


def test_non_array():
    "Test that non-array input raises"
    a = [1, 2, 3]
    assert_raises(TypeError, bn.replace, a, 0, 1)
    a = (1, 2, 3)
    assert_raises(TypeError, bn.replace, a, 0, 1)


# ---------------------------------------------------------------------------
# Make sure bn.replace and bn.slow.replace can handle int arrays where
# user wants to replace nans


def test_replace_nan_int():
    "Test replace, int array, old=nan, new=0"
    a = np.arange(2*3*4).reshape(2, 3, 4)
    actual = a.copy()
    bn.replace(actual, np.nan, 0)
    desired = a.copy()
    msg = 'replace failed on int input looking for nans'
    assert_array_equal(actual, desired, err_msg=msg)
    actual = a.copy()
    bn.slow.replace(actual, np.nan, 0)
    msg = 'slow.replace failed on int input looking for nans'
    assert_array_equal(actual, desired, err_msg=msg)
bottleneck-1.2.0/bottleneck/tests/reduce_test.py000066400000000000000000000163221300536544100220250ustar00rootroot00000000000000"Test reduce functions."

import warnings
import traceback
# from itertools import permutations

from nose.tools import ok_
import numpy as np
from numpy.testing import (assert_equal, assert_raises,
                           assert_array_almost_equal)
import bottleneck as bn
from .util import arrays, array_order, DTYPES


def test_reduce():
    "test reduce functions"
    for func in bn.get_functions('reduce'):
        yield unit_maker, func


def unit_maker(func, decimal=5):
    "Test that bn.xxx gives the same output as bn.slow.xxx."
    fmt = '\nfunc %s | input %s (%s) | shape %s | axis %s | order %s\n'
    fmt += '\nInput array:\n%s\n'
    name = func.__name__
    func0 = eval('bn.slow.%s' % name)
    for i, a in enumerate(arrays(name)):
        if a.ndim == 0:
            axes = [None]  # numpy can't handle e.g.
np.nanmean(9, axis=-1) else: axes = list(range(-1, a.ndim)) + [None] for axis in axes: actual = 'Crashed' desired = 'Crashed' actualraised = False try: # do not use a.copy() here because it will C order the array actual = func(a, axis=axis) except: actualraised = True desiredraised = False try: with warnings.catch_warnings(): warnings.simplefilter("ignore") desired = func0(a, axis=axis) except: desiredraised = True if actualraised and desiredraised: pass else: tup = (name, 'a'+str(i), str(a.dtype), str(a.shape), str(axis), array_order(a), a) err_msg = fmt % tup if actualraised != desiredraised: if actualraised: fmt2 = '\nbn.%s raised\nbn.slow.%s ran\n\n%s' else: fmt2 = '\nbn.%s ran\nbn.slow.%s raised\n\n%s' msg = fmt2 % (name, name, traceback.format_exc()) err_msg += msg ok_(False, err_msg) assert_array_almost_equal(actual, desired, decimal, err_msg) err_msg += '\n dtype mismatch %s %s' if hasattr(actual, 'dtype') and hasattr(desired, 'dtype'): da = actual.dtype dd = desired.dtype assert_equal(da, dd, err_msg % (da, dd)) # --------------------------------------------------------------------------- # Test argument parsing def test_arg_parsing(): "test argument parsing" for func in bn.get_functions('reduce'): yield unit_maker_argparse, func def unit_maker_argparse(func, decimal=5): "test argument parsing." name = func.__name__ func0 = eval('bn.slow.%s' % name) a = np.array([1., 2, 3]) fmt = '\n%s' % func fmt += '%s\n' fmt += '\nInput array:\n%s\n' % a actual = func(a) desired = func0(a) err_msg = fmt % "(a)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, 0) desired = func0(a, 0) err_msg = fmt % "(a, 0)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, None) desired = func0(a, None) err_msg = fmt % "(a, None)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, axis=0) desired = func0(a, axis=0) err_msg = fmt % "(a, axis=0)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a, axis=None) desired = func0(a, axis=None) err_msg = fmt % "(a, axis=None)" assert_array_almost_equal(actual, desired, decimal, err_msg) actual = func(a=a) desired = func0(a=a) err_msg = fmt % "(a)" assert_array_almost_equal(actual, desired, decimal, err_msg) # regression test: make sure len(kwargs) == 0 doesn't raise args = (a, 0) kwargs = {} func(*args, **kwargs) def test_arg_parse_raises(): "test argument parsing raises in reduce" for func in bn.get_functions('reduce'): yield unit_maker_argparse_raises, func def unit_maker_argparse_raises(func): "test argument parsing raises in reduce" a = np.array([1., 2, 3]) assert_raises(TypeError, func) assert_raises(TypeError, func, axis=a) assert_raises(TypeError, func, a, axis=0, extra=0) assert_raises(TypeError, func, a, axis=0, a=a) assert_raises(TypeError, func, a, 0, 0, 0, 0, 0) assert_raises(TypeError, func, a, axis='0') if func.__name__ not in ('nanstd', 'nanvar'): assert_raises(TypeError, func, a, ddof=0) assert_raises(TypeError, func, a, a) # assert_raises(TypeError, func, None) results vary # --------------------------------------------------------------------------- # Check that exceptions are raised def test_nanmax_size_zero(dtypes=DTYPES): "Test nanmax for size zero input arrays." shapes = [(0,), (2, 0), (1, 2, 0)] for shape in shapes: for dtype in dtypes: a = np.zeros(shape, dtype=dtype) assert_raises(ValueError, bn.nanmax, a) assert_raises(ValueError, bn.slow.nanmax, a) def test_nanmin_size_zero(dtypes=DTYPES): "Test nanmin for size zero input arrays." 
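    # Both the fast and slow paths are expected to raise, e.g. (illustrative
    # sketch; the exact exception message is not checked):
    #
    #     bn.nanmin(np.zeros((2, 0)))       # -> ValueError
    #     bn.slow.nanmin(np.zeros((2, 0)))  # -> ValueError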
shapes = [(0,), (2, 0), (1, 2, 0)] for shape in shapes: for dtype in dtypes: a = np.zeros(shape, dtype=dtype) assert_raises(ValueError, bn.nanmin, a) assert_raises(ValueError, bn.slow.nanmin, a) # --------------------------------------------------------------------------- # nanstd and nanvar regression test (issue #60) def test_nanstd_issue60(): "nanstd regression test (issue #60)" f = bn.nanstd([1.0], ddof=1) with np.errstate(invalid='ignore'): s = bn.slow.nanstd([1.0], ddof=1) assert_equal(f, s, err_msg="bn.nanstd([1.0], ddof=1) wrong") f = bn.nanstd([1], ddof=1) with np.errstate(invalid='ignore'): s = bn.slow.nanstd([1], ddof=1) assert_equal(f, s, err_msg="bn.nanstd([1], ddof=1) wrong") f = bn.nanstd([1, np.nan], ddof=1) with np.errstate(invalid='ignore'): s = bn.slow.nanstd([1, np.nan], ddof=1) assert_equal(f, s, err_msg="bn.nanstd([1, nan], ddof=1) wrong") f = bn.nanstd([[1, np.nan], [np.nan, 1]], axis=0, ddof=1) with np.errstate(invalid='ignore'): s = bn.slow.nanstd([[1, np.nan], [np.nan, 1]], axis=0, ddof=1) assert_equal(f, s, err_msg="issue #60 regression") def test_nanvar_issue60(): "nanvar regression test (issue #60)" f = bn.nanvar([1.0], ddof=1) with np.errstate(invalid='ignore'): s = bn.slow.nanvar([1.0], ddof=1) assert_equal(f, s, err_msg="bn.nanvar([1.0], ddof=1) wrong") f = bn.nanvar([1], ddof=1) with np.errstate(invalid='ignore'): s = bn.slow.nanvar([1], ddof=1) assert_equal(f, s, err_msg="bn.nanvar([1], ddof=1) wrong") f = bn.nanvar([1, np.nan], ddof=1) with np.errstate(invalid='ignore'): s = bn.slow.nanvar([1, np.nan], ddof=1) assert_equal(f, s, err_msg="bn.nanvar([1, nan], ddof=1) wrong") f = bn.nanvar([[1, np.nan], [np.nan, 1]], axis=0, ddof=1) with np.errstate(invalid='ignore'): s = bn.slow.nanvar([[1, np.nan], [np.nan, 1]], axis=0, ddof=1) assert_equal(f, s, err_msg="issue #60 regression") bottleneck-1.2.0/bottleneck/tests/scalar_input_test.py000066400000000000000000000014031300536544100232340ustar00rootroot00000000000000"Check that functions can handle scalar input" from numpy.testing import assert_array_almost_equal import bottleneck as bn def unit_maker(func, func0, args=tuple()): "Test that bn.xxx gives the same output as bn.slow.xxx for scalar input." 
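    # For example, unit_maker(bn.nanmean, bn.slow.nanmean) checks that
    # bn.nanmean(-9) agrees with bn.slow.nanmean(-9); the scalar -9 below
    # stands in for a 0-d input.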
msg = '\nfunc %s | input %s\n' a = -9 argsi = [a] + list(args) actual = func(*argsi) desired = func0(*argsi) err_msg = msg % (func.__name__, a) assert_array_almost_equal(actual, desired, err_msg=err_msg) def test_scalar_input(): "Test scalar input" funcs = bn.get_functions('reduce') + bn.get_functions('nonreduce_axis') for func in funcs: if func.__name__ not in ('partition', 'argpartition', 'push'): yield unit_maker, func, eval('bn.slow.%s' % func.__name__) bottleneck-1.2.0/bottleneck/tests/util.py000066400000000000000000000132641300536544100204760ustar00rootroot00000000000000import numpy as np import bottleneck as bn DTYPES = [np.float64, np.float32, np.int64, np.int32] def get_functions(module_name, as_string=False): "Returns a list of functions, optionally as string function names" if module_name == 'all': funcs = [] funcs_in_dict = func_dict() for key in funcs_in_dict: for func in funcs_in_dict[key]: funcs.append(func) else: funcs = func_dict()[module_name] if as_string: funcs = [f.__name__ for f in funcs] return funcs def func_dict(): d = {} d['reduce'] = [bn.nansum, bn.nanmean, bn.nanstd, bn.nanvar, bn.nanmin, bn.nanmax, bn.median, bn.nanmedian, bn.ss, bn.nanargmin, bn.nanargmax, bn.anynan, bn.allnan, ] d['move'] = [bn.move_sum, bn.move_mean, bn.move_std, bn.move_var, bn.move_min, bn.move_max, bn.move_argmin, bn.move_argmax, bn.move_median, bn.move_rank, ] d['nonreduce'] = [bn.replace] d['nonreduce_axis'] = [bn.partition, bn.argpartition, bn.rankdata, bn.nanrankdata, bn.push, ] return d # --------------------------------------------------------------------------- def arrays(func_name, dtypes=DTYPES): return array_iter(array_generator, func_name, dtypes) def array_iter(arrays_func, *args): for a in arrays_func(*args): if a.ndim < 2: yield a # this is good for an extra check but in everyday development it # is a pain because it doubles the unit test run time # elif a.ndim == 3: # for axes in permutations(range(a.ndim)): # yield np.transpose(a, axes) else: yield a yield a.T def array_generator(func_name, dtypes): "Iterator that yields arrays to use for unit testing." 
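    # The cases below concentrate on inputs that are easy to get wrong in
    # the C code paths: nan/inf placement and byte-swapped (non-native
    # endian) dtypes such as '>f4'.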
    # define nan and inf
    if func_name in ('partition', 'argpartition'):
        nan = 0
    else:
        nan = np.nan
    if func_name in ('move_sum', 'move_mean', 'move_std', 'move_var'):
        # these functions can't handle inf
        inf = 8
    else:
        inf = np.inf

    # nan and inf
    yield np.array([inf, nan])
    yield np.array([inf, -inf])
    yield np.array([nan, 2, 3])
    yield np.array([-inf, 2, 3])
    if func_name != 'nanargmin':
        yield np.array([nan, inf])

    # byte swapped
    yield np.array([1, 2, 3], dtype='>f4')
    yield np.array([1, 2, 3], dtype='<f4')

[The remainder of bottleneck/tests/util.py and the binary image files
doc/image/icon.png, doc/image/icon.xcf, and doc/image/icon14.png are garbled
in this text dump; binary image data omitted.]
bottleneck-1.2.0/doc/source/000077500000000000000000000000001300536544100157125ustar00rootroot00000000000000bottleneck-1.2.0/doc/source/.gitignore000066400000000000000000000000121300536544100176730ustar00rootroot00000000000000intro.rst
bottleneck-1.2.0/doc/source/conf.py000066400000000000000000000156071300536544100172160ustar00rootroot00000000000000# -*- coding: utf-8 -*-
#
# la documentation build configuration file, created by
# sphinx-quickstart on Thu Jan 14 16:31:34 2010.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys, os

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.append(os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('../sphinxext'))

# -- General configuration -----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'numpydoc']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Bottleneck'
copyright = u'2010-2015 Keith Goodman'

# Grab version from bottleneck/version.py
ver_file = os.path.join('..', '..', 'bottleneck', 'version.py')
fid = open(ver_file, 'r')
VER = fid.read()
fid.close()
VER = VER.split("= ")
VER = VER[1].strip()
VER = VER.strip('"')

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = VER
# The full version, including alpha/beta/rc tags.
release = VER

# JP: added from sphinxdocs
#autosummary_generate = True

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of documents that shouldn't be included in the build.
#unused_docs = []

# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = [] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = { "headtextcolor": "#333333", "sidebarbgcolor": "#dddddd", "footerbgcolor": "#cccccc", "footertextcolor": "black", "headbgcolor": "#cccccc", "sidebartextcolor": "#333333", "sidebarlinkcolor": "default", "relbarbgcolor": "#cccccc", "relbartextcolor": "default", "relbarlinkcolor": "default", "codebgcolor": "#ffffff", "textcolor": "#333333", "bgcolor": "#f5f5f5" } # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. html_logo = '../image/icon.png' # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_use_modindex = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. 
htmlhelp_basename = 'bottleneckdoc'


# -- Options for LaTeX output --------------------------------------------------

# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
  ('index', 'bottleneck.tex', u'bottleneck Documentation',
   u'Keith Goodman', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# Additional stuff for the LaTeX preamble.
#latex_preamble = ''

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_use_modindex = True
bottleneck-1.2.0/doc/source/index.rst000066400000000000000000000003651300536544100175570ustar00rootroot00000000000000==========
Bottleneck
==========

Fast NumPy array functions written in C.

.. toctree::
   :maxdepth: 2

   intro
   reference
   release
   license

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
bottleneck-1.2.0/doc/source/license.rst000066400000000000000000000000461300536544100200660ustar00rootroot00000000000000.. include:: ../../bottleneck/LICENSE
bottleneck-1.2.0/doc/source/reference.rst000066400000000000000000000076601300536544100204110ustar00rootroot00000000000000==================
Function reference
==================

Bottleneck provides the following functions:

=================================  ==============================================================================================
reduce                             :meth:`nansum <bottleneck.nansum>`, :meth:`nanmean <bottleneck.nanmean>`,
                                   :meth:`nanstd <bottleneck.nanstd>`, :meth:`nanvar <bottleneck.nanvar>`,
                                   :meth:`nanmin <bottleneck.nanmin>`, :meth:`nanmax <bottleneck.nanmax>`,
                                   :meth:`median <bottleneck.median>`, :meth:`nanmedian <bottleneck.nanmedian>`,
                                   :meth:`ss <bottleneck.ss>`, :meth:`nanargmin <bottleneck.nanargmin>`,
                                   :meth:`nanargmax <bottleneck.nanargmax>`, :meth:`anynan <bottleneck.anynan>`,
                                   :meth:`allnan <bottleneck.allnan>`
non-reduce                         :meth:`replace <bottleneck.replace>`
non-reduce with axis               :meth:`rankdata <bottleneck.rankdata>`, :meth:`nanrankdata <bottleneck.nanrankdata>`,
                                   :meth:`partition <bottleneck.partition>`, :meth:`argpartition <bottleneck.argpartition>`,
                                   :meth:`push <bottleneck.push>`
moving window                      :meth:`move_sum <bottleneck.move_sum>`, :meth:`move_mean <bottleneck.move_mean>`,
                                   :meth:`move_std <bottleneck.move_std>`, :meth:`move_var <bottleneck.move_var>`,
                                   :meth:`move_min <bottleneck.move_min>`, :meth:`move_max <bottleneck.move_max>`,
                                   :meth:`move_argmin <bottleneck.move_argmin>`, :meth:`move_argmax <bottleneck.move_argmax>`,
                                   :meth:`move_median <bottleneck.move_median>`, :meth:`move_rank <bottleneck.move_rank>`
=================================  ==============================================================================================


Reduce
------

Functions that reduce the input array along the specified axis.

------------

.. autofunction:: bottleneck.nansum

------------

.. autofunction:: bottleneck.nanmean

------------

.. autofunction:: bottleneck.nanstd

------------

.. autofunction:: bottleneck.nanvar

------------

.. autofunction:: bottleneck.nanmin

------------

.. autofunction:: bottleneck.nanmax

------------

.. autofunction:: bottleneck.median

------------

.. autofunction:: bottleneck.nanmedian

------------

.. autofunction:: bottleneck.ss

------------

.. autofunction:: bottleneck.nanargmin

------------

.. autofunction:: bottleneck.nanargmax

------------

.. autofunction:: bottleneck.anynan

------------

.. autofunction:: bottleneck.allnan


Non-reduce
----------

Functions that do not reduce the input array and do not take `axis` as input.

------------

.. autofunction:: bottleneck.replace


Non-reduce with axis
--------------------

Functions that do not reduce the input array but operate along a specified
axis.

------------

.. 
autofunction:: bottleneck.rankdata ------------ .. autofunction:: bottleneck.nanrankdata ------------ .. autofunction:: bottleneck.partition ------------ .. autofunction:: bottleneck.argpartition ------------ .. autofunction:: bottleneck.push Moving window functions ----------------------- Functions that operate along a (1d) moving window. ------------ .. autofunction:: bottleneck.move_sum ------------ .. autofunction:: bottleneck.move_mean ------------ .. autofunction:: bottleneck.move_std ------------ .. autofunction:: bottleneck.move_var ------------ .. autofunction:: bottleneck.move_min ------------ .. autofunction:: bottleneck.move_max ------------ .. autofunction:: bottleneck.move_argmin ------------ .. autofunction:: bottleneck.move_argmax ------------ .. autofunction:: bottleneck.move_median ------------ .. autofunction:: bottleneck.move_rank bottleneck-1.2.0/doc/source/release.rst000066400000000000000000000000371300536544100200640ustar00rootroot00000000000000.. include:: ../../RELEASE.rst bottleneck-1.2.0/doc/sphinxext/000077500000000000000000000000001300536544100164445ustar00rootroot00000000000000bottleneck-1.2.0/doc/sphinxext/LICENSE.txt000066400000000000000000000136231300536544100202740ustar00rootroot00000000000000------------------------------------------------------------------------------- The files - numpydoc.py - autosummary.py - autosummary_generate.py - docscrape.py - docscrape_sphinx.py - phantom_import.py have the following license: Copyright (C) 2008 Stefan van der Walt , Pauli Virtanen Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ------------------------------------------------------------------------------- The files - compiler_unparse.py - comment_eater.py - traitsdoc.py have the following license: This software is OSI Certified Open Source Software. OSI Certified is a certification mark of the Open Source Initiative. Copyright (c) 2006, Enthought, Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
* Neither the name of Enthought, Inc. nor the names of its contributors may
  be used to endorse or promote products derived from this software without
  specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-------------------------------------------------------------------------------

The files
- only_directives.py
- plot_directive.py
originate from Matplotlib (http://matplotlib.sf.net/) which has the following
license:

Copyright (c) 2002-2008 John D. Hunter; All Rights Reserved.

1. This LICENSE AGREEMENT is between John D. Hunter (“JDH”), and the
Individual or Organization (“Licensee”) accessing and otherwise using
matplotlib software in source or binary form and its associated documentation.

2. Subject to the terms and conditions of this License Agreement, JDH hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use matplotlib 0.98.3 alone or in any derivative
version, provided, however, that JDH’s License Agreement and JDH’s notice of
copyright, i.e., “Copyright (c) 2002-2008 John D. Hunter; All Rights Reserved”
are retained in matplotlib 0.98.3 alone or in any derivative version prepared
by Licensee.

3. In the event Licensee prepares a derivative work that is based on or
incorporates matplotlib 0.98.3 or any part thereof, and wants to make the
derivative work available to others as provided herein, then Licensee hereby
agrees to include in any such work a brief summary of the changes made to
matplotlib 0.98.3.

4. JDH is making matplotlib 0.98.3 available to Licensee on an “AS IS” basis.
JDH MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF
EXAMPLE, BUT NOT LIMITATION, JDH MAKES NO AND DISCLAIMS ANY REPRESENTATION OR
WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE
USE OF MATPLOTLIB 0.98.3 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.

5. JDH SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB 0.98.3
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
MODIFYING, DISTRIBUTING, OR OTHERWISE USING MATPLOTLIB 0.98.3, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material breach
of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between JDH and
Licensee. This License Agreement does not grant permission to use JDH
trademarks or trade name in a trademark sense to endorse or promote products
or services of Licensee, or any third party.

8. By copying, installing or otherwise using matplotlib 0.98.3, Licensee
agrees to be bound by the terms and conditions of this License Agreement.
bottleneck-1.2.0/doc/sphinxext/MANIFEST.in000066400000000000000000000000531300536544100202000ustar00rootroot00000000000000recursive-include tests *.py include *.txt bottleneck-1.2.0/doc/sphinxext/PKG-INFO000066400000000000000000000007401300536544100175420ustar00rootroot00000000000000Metadata-Version: 1.0 Name: numpydoc Version: 0.4 Summary: Sphinx extension to support docstrings in Numpy format Home-page: http://github.com/numpy/numpy/tree/master/doc/sphinxext Author: Pauli Virtanen and others Author-email: pav@iki.fi License: BSD Description: UNKNOWN Keywords: sphinx numpy Platform: UNKNOWN Classifier: Development Status :: 3 - Alpha Classifier: Environment :: Plugins Classifier: License :: OSI Approved :: BSD License Classifier: Topic :: Documentation bottleneck-1.2.0/doc/sphinxext/README.txt000066400000000000000000000026021300536544100201420ustar00rootroot00000000000000===================================== numpydoc -- Numpy's Sphinx extensions ===================================== Numpy's documentation uses several custom extensions to Sphinx. These are shipped in this ``numpydoc`` package, in case you want to make use of them in third-party projects. The following extensions are available: - ``numpydoc``: support for the Numpy docstring format in Sphinx, and add the code description directives ``np:function``, ``np-c:function``, etc. that support the Numpy docstring syntax. - ``numpydoc.traitsdoc``: For gathering documentation about Traits attributes. - ``numpydoc.plot_directive``: Adaptation of Matplotlib's ``plot::`` directive. Note that this implementation may still undergo severe changes or eventually be deprecated. numpydoc ======== Numpydoc inserts a hook into Sphinx's autodoc that converts docstrings following the Numpy/Scipy format to a form palatable to Sphinx. Options ------- The following options can be set in conf.py: - numpydoc_use_plots: bool Whether to produce ``plot::`` directives for Examples sections that contain ``import matplotlib``. - numpydoc_show_class_members: bool Whether to show all members of a class in the Methods and Attributes sections automatically. - numpydoc_edit_link: bool (DEPRECATED -- edit your HTML template instead) Whether to insert an edit link after docstrings. bottleneck-1.2.0/doc/sphinxext/__init__.py000066400000000000000000000000331300536544100205510ustar00rootroot00000000000000from numpydoc import setup bottleneck-1.2.0/doc/sphinxext/comment_eater.py000066400000000000000000000120751300536544100216450ustar00rootroot00000000000000from cStringIO import StringIO import compiler import inspect import textwrap import tokenize from compiler_unparse import unparse class Comment(object): """ A comment block. """ is_comment = True def __init__(self, start_lineno, end_lineno, text): # int : The first line number in the block. 1-indexed. self.start_lineno = start_lineno # int : The last line number. Inclusive! self.end_lineno = end_lineno # str : The text block including '#' character but not any leading spaces. self.text = text def add(self, string, start, end, line): """ Add a new comment line. """ self.start_lineno = min(self.start_lineno, start[0]) self.end_lineno = max(self.end_lineno, end[0]) self.text += string def __repr__(self): return '%s(%r, %r, %r)' % (self.__class__.__name__, self.start_lineno, self.end_lineno, self.text) class NonComment(object): """ A non-comment block of code. 
""" is_comment = False def __init__(self, start_lineno, end_lineno): self.start_lineno = start_lineno self.end_lineno = end_lineno def add(self, string, start, end, line): """ Add lines to the block. """ if string.strip(): # Only add if not entirely whitespace. self.start_lineno = min(self.start_lineno, start[0]) self.end_lineno = max(self.end_lineno, end[0]) def __repr__(self): return '%s(%r, %r)' % (self.__class__.__name__, self.start_lineno, self.end_lineno) class CommentBlocker(object): """ Pull out contiguous comment blocks. """ def __init__(self): # Start with a dummy. self.current_block = NonComment(0, 0) # All of the blocks seen so far. self.blocks = [] # The index mapping lines of code to their associated comment blocks. self.index = {} def process_file(self, file): """ Process a file object. """ for token in tokenize.generate_tokens(file.next): self.process_token(*token) self.make_index() def process_token(self, kind, string, start, end, line): """ Process a single token. """ if self.current_block.is_comment: if kind == tokenize.COMMENT: self.current_block.add(string, start, end, line) else: self.new_noncomment(start[0], end[0]) else: if kind == tokenize.COMMENT: self.new_comment(string, start, end, line) else: self.current_block.add(string, start, end, line) def new_noncomment(self, start_lineno, end_lineno): """ We are transitioning from a noncomment to a comment. """ block = NonComment(start_lineno, end_lineno) self.blocks.append(block) self.current_block = block def new_comment(self, string, start, end, line): """ Possibly add a new comment. Only adds a new comment if this comment is the only thing on the line. Otherwise, it extends the noncomment block. """ prefix = line[:start[1]] if prefix.strip(): # Oops! Trailing comment, not a comment block. self.current_block.add(string, start, end, line) else: # A comment block. block = Comment(start[0], end[0], string) self.blocks.append(block) self.current_block = block def make_index(self): """ Make the index mapping lines of actual code to their associated prefix comments. """ for prev, block in zip(self.blocks[:-1], self.blocks[1:]): if not block.is_comment: self.index[block.start_lineno] = prev def search_for_comment(self, lineno, default=None): """ Find the comment block just before the given line number. Returns None (or the specified default) if there is no such block. """ if not self.index: self.make_index() block = self.index.get(lineno, None) text = getattr(block, 'text', default) return text def strip_comment_marker(text): """ Strip # markers at the front of a block of comment text. """ lines = [] for line in text.splitlines(): lines.append(line.lstrip('#')) text = textwrap.dedent('\n'.join(lines)) return text def get_class_traits(klass): """ Yield all of the documentation for trait definitions on a class object. """ # FIXME: gracefully handle errors here or in the caller? source = inspect.getsource(klass) cb = CommentBlocker() cb.process_file(StringIO(source)) mod_ast = compiler.parse(source) class_ast = mod_ast.node.nodes[0] for node in class_ast.code.nodes: # FIXME: handle other kinds of assignments? if isinstance(node, compiler.ast.Assign): name = node.nodes[0].name rhs = unparse(node.expr).strip() doc = strip_comment_marker(cb.search_for_comment(node.lineno, default='')) yield name, rhs, doc bottleneck-1.2.0/doc/sphinxext/compiler_unparse.py000066400000000000000000000602001300536544100223630ustar00rootroot00000000000000""" Turn compiler.ast structures back into executable python code. 
The unparse method takes a compiler.ast tree and transforms it back into valid python code. It is incomplete and currently only works for import statements, function calls, function definitions, assignments, and basic expressions. Inspired by python-2.5-svn/Demo/parser/unparse.py fixme: We may want to move to using _ast trees because the compiler for them is about 6 times faster than compiler.compile. """ import sys import cStringIO from compiler.ast import Const, Name, Tuple, Div, Mul, Sub, Add def unparse(ast, single_line_functions=False): s = cStringIO.StringIO() UnparseCompilerAst(ast, s, single_line_functions) return s.getvalue().lstrip() op_precedence = { 'compiler.ast.Power':3, 'compiler.ast.Mul':2, 'compiler.ast.Div':2, 'compiler.ast.Add':1, 'compiler.ast.Sub':1 } class UnparseCompilerAst: """ Methods in this class recursively traverse an AST and output source code for the abstract syntax; original formatting is disregarged. """ ######################################################################### # object interface. ######################################################################### def __init__(self, tree, file = sys.stdout, single_line_functions=False): """ Unparser(tree, file=sys.stdout) -> None. Print the source for tree to file. """ self.f = file self._single_func = single_line_functions self._do_indent = True self._indent = 0 self._dispatch(tree) self._write("\n") self.f.flush() ######################################################################### # Unparser private interface. ######################################################################### ### format, output, and dispatch methods ################################ def _fill(self, text = ""): "Indent a piece of text, according to the current indentation level" if self._do_indent: self._write("\n"+" "*self._indent + text) else: self._write(text) def _write(self, text): "Append a piece of text to the current line." self.f.write(text) def _enter(self): "Print ':', and increase the indentation." self._write(": ") self._indent += 1 def _leave(self): "Decrease the indentation level." self._indent -= 1 def _dispatch(self, tree): "_dispatcher function, _dispatching tree type T to method _T." if isinstance(tree, list): for t in tree: self._dispatch(t) return meth = getattr(self, "_"+tree.__class__.__name__) if tree.__class__.__name__ == 'NoneType' and not self._do_indent: return meth(tree) ######################################################################### # compiler.ast unparsing methods. # # There should be one method per concrete grammar type. They are # organized in alphabetical order. ######################################################################### def _Add(self, t): self.__binary_op(t, '+') def _And(self, t): self._write(" (") for i, node in enumerate(t.nodes): self._dispatch(node) if i != len(t.nodes)-1: self._write(") and (") self._write(")") def _AssAttr(self, t): """ Handle assigning an attribute of an object """ self._dispatch(t.expr) self._write('.'+t.attrname) def _Assign(self, t): """ Expression Assignment such as "a = 1". This only handles assignment in expressions. Keyword assignment is handled separately. """ self._fill() for target in t.nodes: self._dispatch(target) self._write(" = ") self._dispatch(t.expr) if not self._do_indent: self._write('; ') def _AssName(self, t): """ Name on left hand side of expression. Treat just like a name on the right side of an expression. """ self._Name(t) def _AssTuple(self, t): """ Tuple on left hand side of an expression. 
""" # _write each elements, separated by a comma. for element in t.nodes[:-1]: self._dispatch(element) self._write(", ") # Handle the last one without writing comma last_element = t.nodes[-1] self._dispatch(last_element) def _AugAssign(self, t): """ +=,-=,*=,/=,**=, etc. operations """ self._fill() self._dispatch(t.node) self._write(' '+t.op+' ') self._dispatch(t.expr) if not self._do_indent: self._write(';') def _Bitand(self, t): """ Bit and operation. """ for i, node in enumerate(t.nodes): self._write("(") self._dispatch(node) self._write(")") if i != len(t.nodes)-1: self._write(" & ") def _Bitor(self, t): """ Bit or operation """ for i, node in enumerate(t.nodes): self._write("(") self._dispatch(node) self._write(")") if i != len(t.nodes)-1: self._write(" | ") def _CallFunc(self, t): """ Function call. """ self._dispatch(t.node) self._write("(") comma = False for e in t.args: if comma: self._write(", ") else: comma = True self._dispatch(e) if t.star_args: if comma: self._write(", ") else: comma = True self._write("*") self._dispatch(t.star_args) if t.dstar_args: if comma: self._write(", ") else: comma = True self._write("**") self._dispatch(t.dstar_args) self._write(")") def _Compare(self, t): self._dispatch(t.expr) for op, expr in t.ops: self._write(" " + op + " ") self._dispatch(expr) def _Const(self, t): """ A constant value such as an integer value, 3, or a string, "hello". """ self._dispatch(t.value) def _Decorators(self, t): """ Handle function decorators (eg. @has_units) """ for node in t.nodes: self._dispatch(node) def _Dict(self, t): self._write("{") for i, (k, v) in enumerate(t.items): self._dispatch(k) self._write(": ") self._dispatch(v) if i < len(t.items)-1: self._write(", ") self._write("}") def _Discard(self, t): """ Node for when return value is ignored such as in "foo(a)". """ self._fill() self._dispatch(t.expr) def _Div(self, t): self.__binary_op(t, '/') def _Ellipsis(self, t): self._write("...") def _From(self, t): """ Handle "from xyz import foo, bar as baz". """ # fixme: Are From and ImportFrom handled differently? 
self._fill("from ") self._write(t.modname) self._write(" import ") for i, (name,asname) in enumerate(t.names): if i != 0: self._write(", ") self._write(name) if asname is not None: self._write(" as "+asname) def _Function(self, t): """ Handle function definitions """ if t.decorators is not None: self._fill("@") self._dispatch(t.decorators) self._fill("def "+t.name + "(") defaults = [None] * (len(t.argnames) - len(t.defaults)) + list(t.defaults) for i, arg in enumerate(zip(t.argnames, defaults)): self._write(arg[0]) if arg[1] is not None: self._write('=') self._dispatch(arg[1]) if i < len(t.argnames)-1: self._write(', ') self._write(")") if self._single_func: self._do_indent = False self._enter() self._dispatch(t.code) self._leave() self._do_indent = True def _Getattr(self, t): """ Handle getting an attribute of an object """ if isinstance(t.expr, (Div, Mul, Sub, Add)): self._write('(') self._dispatch(t.expr) self._write(')') else: self._dispatch(t.expr) self._write('.'+t.attrname) def _If(self, t): self._fill() for i, (compare,code) in enumerate(t.tests): if i == 0: self._write("if ") else: self._write("elif ") self._dispatch(compare) self._enter() self._fill() self._dispatch(code) self._leave() self._write("\n") if t.else_ is not None: self._write("else") self._enter() self._fill() self._dispatch(t.else_) self._leave() self._write("\n") def _IfExp(self, t): self._dispatch(t.then) self._write(" if ") self._dispatch(t.test) if t.else_ is not None: self._write(" else (") self._dispatch(t.else_) self._write(")") def _Import(self, t): """ Handle "import xyz.foo". """ self._fill("import ") for i, (name,asname) in enumerate(t.names): if i != 0: self._write(", ") self._write(name) if asname is not None: self._write(" as "+asname) def _Keyword(self, t): """ Keyword value assignment within function calls and definitions. 
""" self._write(t.name) self._write("=") self._dispatch(t.expr) def _List(self, t): self._write("[") for i,node in enumerate(t.nodes): self._dispatch(node) if i < len(t.nodes)-1: self._write(", ") self._write("]") def _Module(self, t): if t.doc is not None: self._dispatch(t.doc) self._dispatch(t.node) def _Mul(self, t): self.__binary_op(t, '*') def _Name(self, t): self._write(t.name) def _NoneType(self, t): self._write("None") def _Not(self, t): self._write('not (') self._dispatch(t.expr) self._write(')') def _Or(self, t): self._write(" (") for i, node in enumerate(t.nodes): self._dispatch(node) if i != len(t.nodes)-1: self._write(") or (") self._write(")") def _Pass(self, t): self._write("pass\n") def _Printnl(self, t): self._fill("print ") if t.dest: self._write(">> ") self._dispatch(t.dest) self._write(", ") comma = False for node in t.nodes: if comma: self._write(', ') else: comma = True self._dispatch(node) def _Power(self, t): self.__binary_op(t, '**') def _Return(self, t): self._fill("return ") if t.value: if isinstance(t.value, Tuple): text = ', '.join([ name.name for name in t.value.asList() ]) self._write(text) else: self._dispatch(t.value) if not self._do_indent: self._write('; ') def _Slice(self, t): self._dispatch(t.expr) self._write("[") if t.lower: self._dispatch(t.lower) self._write(":") if t.upper: self._dispatch(t.upper) #if t.step: # self._write(":") # self._dispatch(t.step) self._write("]") def _Sliceobj(self, t): for i, node in enumerate(t.nodes): if i != 0: self._write(":") if not (isinstance(node, Const) and node.value is None): self._dispatch(node) def _Stmt(self, tree): for node in tree.nodes: self._dispatch(node) def _Sub(self, t): self.__binary_op(t, '-') def _Subscript(self, t): self._dispatch(t.expr) self._write("[") for i, value in enumerate(t.subs): if i != 0: self._write(",") self._dispatch(value) self._write("]") def _TryExcept(self, t): self._fill("try") self._enter() self._dispatch(t.body) self._leave() for handler in t.handlers: self._fill('except ') self._dispatch(handler[0]) if handler[1] is not None: self._write(', ') self._dispatch(handler[1]) self._enter() self._dispatch(handler[2]) self._leave() if t.else_: self._fill("else") self._enter() self._dispatch(t.else_) self._leave() def _Tuple(self, t): if not t.nodes: # Empty tuple. self._write("()") else: self._write("(") # _write each elements, separated by a comma. 
for element in t.nodes[:-1]: self._dispatch(element) self._write(", ") # Handle the last one without writing comma last_element = t.nodes[-1] self._dispatch(last_element) self._write(")") def _UnaryAdd(self, t): self._write("+") self._dispatch(t.expr) def _UnarySub(self, t): self._write("-") self._dispatch(t.expr) def _With(self, t): self._fill('with ') self._dispatch(t.expr) if t.vars: self._write(' as ') self._dispatch(t.vars.name) self._enter() self._dispatch(t.body) self._leave() self._write('\n') def _int(self, t): self._write(repr(t)) def __binary_op(self, t, symbol): # Check if parenthesis are needed on left side and then dispatch has_paren = False left_class = str(t.left.__class__) if (left_class in op_precedence.keys() and op_precedence[left_class] < op_precedence[str(t.__class__)]): has_paren = True if has_paren: self._write('(') self._dispatch(t.left) if has_paren: self._write(')') # Write the appropriate symbol for operator self._write(symbol) # Check if parenthesis are needed on the right side and then dispatch has_paren = False right_class = str(t.right.__class__) if (right_class in op_precedence.keys() and op_precedence[right_class] < op_precedence[str(t.__class__)]): has_paren = True if has_paren: self._write('(') self._dispatch(t.right) if has_paren: self._write(')') def _float(self, t): # if t is 0.1, str(t)->'0.1' while repr(t)->'0.1000000000001' # We prefer str here. self._write(str(t)) def _str(self, t): self._write(repr(t)) def _tuple(self, t): self._write(str(t)) ######################################################################### # These are the methods from the _ast modules unparse. # # As our needs to handle more advanced code increase, we may want to # modify some of the methods below so that they work for compiler.ast. ######################################################################### # # stmt # def _Expr(self, tree): # self._fill() # self._dispatch(tree.value) # # def _Import(self, t): # self._fill("import ") # first = True # for a in t.names: # if first: # first = False # else: # self._write(", ") # self._write(a.name) # if a.asname: # self._write(" as "+a.asname) # ## def _ImportFrom(self, t): ## self._fill("from ") ## self._write(t.module) ## self._write(" import ") ## for i, a in enumerate(t.names): ## if i == 0: ## self._write(", ") ## self._write(a.name) ## if a.asname: ## self._write(" as "+a.asname) ## # XXX(jpe) what is level for? 
## # # def _Break(self, t): # self._fill("break") # # def _Continue(self, t): # self._fill("continue") # # def _Delete(self, t): # self._fill("del ") # self._dispatch(t.targets) # # def _Assert(self, t): # self._fill("assert ") # self._dispatch(t.test) # if t.msg: # self._write(", ") # self._dispatch(t.msg) # # def _Exec(self, t): # self._fill("exec ") # self._dispatch(t.body) # if t.globals: # self._write(" in ") # self._dispatch(t.globals) # if t.locals: # self._write(", ") # self._dispatch(t.locals) # # def _Print(self, t): # self._fill("print ") # do_comma = False # if t.dest: # self._write(">>") # self._dispatch(t.dest) # do_comma = True # for e in t.values: # if do_comma:self._write(", ") # else:do_comma=True # self._dispatch(e) # if not t.nl: # self._write(",") # # def _Global(self, t): # self._fill("global") # for i, n in enumerate(t.names): # if i != 0: # self._write(",") # self._write(" " + n) # # def _Yield(self, t): # self._fill("yield") # if t.value: # self._write(" (") # self._dispatch(t.value) # self._write(")") # # def _Raise(self, t): # self._fill('raise ') # if t.type: # self._dispatch(t.type) # if t.inst: # self._write(", ") # self._dispatch(t.inst) # if t.tback: # self._write(", ") # self._dispatch(t.tback) # # # def _TryFinally(self, t): # self._fill("try") # self._enter() # self._dispatch(t.body) # self._leave() # # self._fill("finally") # self._enter() # self._dispatch(t.finalbody) # self._leave() # # def _excepthandler(self, t): # self._fill("except ") # if t.type: # self._dispatch(t.type) # if t.name: # self._write(", ") # self._dispatch(t.name) # self._enter() # self._dispatch(t.body) # self._leave() # # def _ClassDef(self, t): # self._write("\n") # self._fill("class "+t.name) # if t.bases: # self._write("(") # for a in t.bases: # self._dispatch(a) # self._write(", ") # self._write(")") # self._enter() # self._dispatch(t.body) # self._leave() # # def _FunctionDef(self, t): # self._write("\n") # for deco in t.decorators: # self._fill("@") # self._dispatch(deco) # self._fill("def "+t.name + "(") # self._dispatch(t.args) # self._write(")") # self._enter() # self._dispatch(t.body) # self._leave() # # def _For(self, t): # self._fill("for ") # self._dispatch(t.target) # self._write(" in ") # self._dispatch(t.iter) # self._enter() # self._dispatch(t.body) # self._leave() # if t.orelse: # self._fill("else") # self._enter() # self._dispatch(t.orelse) # self._leave # # def _While(self, t): # self._fill("while ") # self._dispatch(t.test) # self._enter() # self._dispatch(t.body) # self._leave() # if t.orelse: # self._fill("else") # self._enter() # self._dispatch(t.orelse) # self._leave # # # expr # def _Str(self, tree): # self._write(repr(tree.s)) ## # def _Repr(self, t): # self._write("`") # self._dispatch(t.value) # self._write("`") # # def _Num(self, t): # self._write(repr(t.n)) # # def _ListComp(self, t): # self._write("[") # self._dispatch(t.elt) # for gen in t.generators: # self._dispatch(gen) # self._write("]") # # def _GeneratorExp(self, t): # self._write("(") # self._dispatch(t.elt) # for gen in t.generators: # self._dispatch(gen) # self._write(")") # # def _comprehension(self, t): # self._write(" for ") # self._dispatch(t.target) # self._write(" in ") # self._dispatch(t.iter) # for if_clause in t.ifs: # self._write(" if ") # self._dispatch(if_clause) # # def _IfExp(self, t): # self._dispatch(t.body) # self._write(" if ") # self._dispatch(t.test) # if t.orelse: # self._write(" else ") # self._dispatch(t.orelse) # # unop = {"Invert":"~", "Not": "not", "UAdd":"+", 
"USub":"-"} # def _UnaryOp(self, t): # self._write(self.unop[t.op.__class__.__name__]) # self._write("(") # self._dispatch(t.operand) # self._write(")") # # binop = { "Add":"+", "Sub":"-", "Mult":"*", "Div":"/", "Mod":"%", # "LShift":">>", "RShift":"<<", "BitOr":"|", "BitXor":"^", "BitAnd":"&", # "FloorDiv":"//", "Pow": "**"} # def _BinOp(self, t): # self._write("(") # self._dispatch(t.left) # self._write(")" + self.binop[t.op.__class__.__name__] + "(") # self._dispatch(t.right) # self._write(")") # # boolops = {_ast.And: 'and', _ast.Or: 'or'} # def _BoolOp(self, t): # self._write("(") # self._dispatch(t.values[0]) # for v in t.values[1:]: # self._write(" %s " % self.boolops[t.op.__class__]) # self._dispatch(v) # self._write(")") # # def _Attribute(self,t): # self._dispatch(t.value) # self._write(".") # self._write(t.attr) # ## def _Call(self, t): ## self._dispatch(t.func) ## self._write("(") ## comma = False ## for e in t.args: ## if comma: self._write(", ") ## else: comma = True ## self._dispatch(e) ## for e in t.keywords: ## if comma: self._write(", ") ## else: comma = True ## self._dispatch(e) ## if t.starargs: ## if comma: self._write(", ") ## else: comma = True ## self._write("*") ## self._dispatch(t.starargs) ## if t.kwargs: ## if comma: self._write(", ") ## else: comma = True ## self._write("**") ## self._dispatch(t.kwargs) ## self._write(")") # # # slice # def _Index(self, t): # self._dispatch(t.value) # # def _ExtSlice(self, t): # for i, d in enumerate(t.dims): # if i != 0: # self._write(': ') # self._dispatch(d) # # # others # def _arguments(self, t): # first = True # nonDef = len(t.args)-len(t.defaults) # for a in t.args[0:nonDef]: # if first:first = False # else: self._write(", ") # self._dispatch(a) # for a,d in zip(t.args[nonDef:], t.defaults): # if first:first = False # else: self._write(", ") # self._dispatch(a), # self._write("=") # self._dispatch(d) # if t.vararg: # if first:first = False # else: self._write(", ") # self._write("*"+t.vararg) # if t.kwarg: # if first:first = False # else: self._write(", ") # self._write("**"+t.kwarg) # ## def _keyword(self, t): ## self._write(t.arg) ## self._write("=") ## self._dispatch(t.value) # # def _Lambda(self, t): # self._write("lambda ") # self._dispatch(t.args) # self._write(": ") # self._dispatch(t.body) bottleneck-1.2.0/doc/sphinxext/docscrape.py000066400000000000000000000357061300536544100207740ustar00rootroot00000000000000"""Extract reference documentation from the NumPy source tree. """ import inspect import textwrap import re import pydoc from StringIO import StringIO from warnings import warn class Reader(object): """A line-based string reader. """ def __init__(self, data): """ Parameters ---------- data : str String with lines separated by '\n'. 
""" if isinstance(data,list): self._str = data else: self._str = data.split('\n') # store string as list of lines self.reset() def __getitem__(self, n): return self._str[n] def reset(self): self._l = 0 # current line nr def read(self): if not self.eof(): out = self[self._l] self._l += 1 return out else: return '' def seek_next_non_empty_line(self): for l in self[self._l:]: if l.strip(): break else: self._l += 1 def eof(self): return self._l >= len(self._str) def read_to_condition(self, condition_func): start = self._l for line in self[start:]: if condition_func(line): return self[start:self._l] self._l += 1 if self.eof(): return self[start:self._l+1] return [] def read_to_next_empty_line(self): self.seek_next_non_empty_line() def is_empty(line): return not line.strip() return self.read_to_condition(is_empty) def read_to_next_unindented_line(self): def is_unindented(line): return (line.strip() and (len(line.lstrip()) == len(line))) return self.read_to_condition(is_unindented) def peek(self,n=0): if self._l + n < len(self._str): return self[self._l + n] else: return '' def is_empty(self): return not ''.join(self._str).strip() class NumpyDocString(object): def __init__(self, docstring, config={}): docstring = textwrap.dedent(docstring).split('\n') self._doc = Reader(docstring) self._parsed_data = { 'Signature': '', 'Summary': [''], 'Extended Summary': [], 'Parameters': [], 'Returns': [], 'Raises': [], 'Warns': [], 'Other Parameters': [], 'Attributes': [], 'Methods': [], 'See Also': [], 'Notes': [], 'Warnings': [], 'References': '', 'Examples': '', 'index': {} } self._parse() def __getitem__(self,key): return self._parsed_data[key] def __setitem__(self,key,val): if not self._parsed_data.has_key(key): warn("Unknown section %s" % key) else: self._parsed_data[key] = val def _is_at_section(self): self._doc.seek_next_non_empty_line() if self._doc.eof(): return False l1 = self._doc.peek().strip() # e.g. Parameters if l1.startswith('.. 
index::'): return True l2 = self._doc.peek(1).strip() # ---------- or ========== return l2.startswith('-'*len(l1)) or l2.startswith('='*len(l1)) def _strip(self,doc): i = 0 j = 0 for i,line in enumerate(doc): if line.strip(): break for j,line in enumerate(doc[::-1]): if line.strip(): break return doc[i:len(doc)-j] def _read_to_next_section(self): section = self._doc.read_to_next_empty_line() while not self._is_at_section() and not self._doc.eof(): if not self._doc.peek(-1).strip(): # previous line was empty section += [''] section += self._doc.read_to_next_empty_line() return section def _read_sections(self): while not self._doc.eof(): data = self._read_to_next_section() name = data[0].strip() if name.startswith('..'): # index section yield name, data[1:] elif len(data) < 2: yield StopIteration else: yield name, self._strip(data[2:]) def _parse_param_list(self,content): r = Reader(content) params = [] while not r.eof(): header = r.read().strip() if ' : ' in header: arg_name, arg_type = header.split(' : ')[:2] else: arg_name, arg_type = header, '' desc = r.read_to_next_unindented_line() desc = dedent_lines(desc) params.append((arg_name,arg_type,desc)) return params _name_rgx = re.compile(r"^\s*(:(?P\w+):`(?P[a-zA-Z0-9_.-]+)`|" r" (?P[a-zA-Z0-9_.-]+))\s*", re.X) def _parse_see_also(self, content): """ func_name : Descriptive text continued text another_func_name : Descriptive text func_name1, func_name2, :meth:`func_name`, func_name3 """ items = [] def parse_item_name(text): """Match ':role:`name`' or 'name'""" m = self._name_rgx.match(text) if m: g = m.groups() if g[1] is None: return g[3], None else: return g[2], g[1] raise ValueError("%s is not a item name" % text) def push_item(name, rest): if not name: return name, role = parse_item_name(name) items.append((name, list(rest), role)) del rest[:] current_func = None rest = [] for line in content: if not line.strip(): continue m = self._name_rgx.match(line) if m and line[m.end():].strip().startswith(':'): push_item(current_func, rest) current_func, line = line[:m.end()], line[m.end():] rest = [line.split(':', 1)[1].strip()] if not rest[0]: rest = [] elif not line.startswith(' '): push_item(current_func, rest) current_func = None if ',' in line: for func in line.split(','): if func.strip(): push_item(func, []) elif line.strip(): current_func = line elif current_func is not None: rest.append(line.strip()) push_item(current_func, rest) return items def _parse_index(self, section, content): """ .. 
index: default :refguide: something, else, and more """ def strip_each_in(lst): return [s.strip() for s in lst] out = {} section = section.split('::') if len(section) > 1: out['default'] = strip_each_in(section[1].split(','))[0] for line in content: line = line.split(':') if len(line) > 2: out[line[1]] = strip_each_in(line[2].split(',')) return out def _parse_summary(self): """Grab signature (if given) and summary""" if self._is_at_section(): return summary = self._doc.read_to_next_empty_line() summary_str = " ".join([s.strip() for s in summary]).strip() if re.compile('^([\w., ]+=)?\s*[\w\.]+\(.*\)$').match(summary_str): self['Signature'] = summary_str if not self._is_at_section(): self['Summary'] = self._doc.read_to_next_empty_line() else: self['Summary'] = summary if not self._is_at_section(): self['Extended Summary'] = self._read_to_next_section() def _parse(self): self._doc.reset() self._parse_summary() for (section,content) in self._read_sections(): if not section.startswith('..'): section = ' '.join([s.capitalize() for s in section.split(' ')]) if section in ('Parameters', 'Returns', 'Raises', 'Warns', 'Other Parameters', 'Attributes', 'Methods'): self[section] = self._parse_param_list(content) elif section.startswith('.. index::'): self['index'] = self._parse_index(section, content) elif section == 'See Also': self['See Also'] = self._parse_see_also(content) else: self[section] = content # string conversion routines def _str_header(self, name, symbol='-'): return [name, len(name)*symbol] def _str_indent(self, doc, indent=4): out = [] for line in doc: out += [' '*indent + line] return out def _str_signature(self): if self['Signature']: return [self['Signature'].replace('*','\*')] + [''] else: return [''] def _str_summary(self): if self['Summary']: return self['Summary'] + [''] else: return [] def _str_extended_summary(self): if self['Extended Summary']: return self['Extended Summary'] + [''] else: return [] def _str_param_list(self, name): out = [] if self[name]: out += self._str_header(name) for param,param_type,desc in self[name]: out += ['%s : %s' % (param, param_type)] out += self._str_indent(desc) out += [''] return out def _str_section(self, name): out = [] if self[name]: out += self._str_header(name) out += self[name] out += [''] return out def _str_see_also(self, func_role): if not self['See Also']: return [] out = [] out += self._str_header("See Also") last_had_desc = True for func, desc, role in self['See Also']: if role: link = ':%s:`%s`' % (role, func) elif func_role: link = ':%s:`%s`' % (func_role, func) else: link = "`%s`_" % func if desc or last_had_desc: out += [''] out += [link] else: out[-1] += ", %s" % link if desc: out += self._str_indent([' '.join(desc)]) last_had_desc = True else: last_had_desc = False out += [''] return out def _str_index(self): idx = self['index'] out = [] out += ['.. 
index:: %s' % idx.get('default','')] for section, references in idx.iteritems(): if section == 'default': continue out += [' :%s: %s' % (section, ', '.join(references))] return out def __str__(self, func_role=''): out = [] out += self._str_signature() out += self._str_summary() out += self._str_extended_summary() for param_list in ('Parameters', 'Returns', 'Other Parameters', 'Raises', 'Warns'): out += self._str_param_list(param_list) out += self._str_section('Warnings') out += self._str_see_also(func_role) for s in ('Notes','References','Examples'): out += self._str_section(s) for param_list in ('Attributes', 'Methods'): out += self._str_param_list(param_list) out += self._str_index() return '\n'.join(out) def indent(str,indent=4): indent_str = ' '*indent if str is None: return indent_str lines = str.split('\n') return '\n'.join(indent_str + l for l in lines) def dedent_lines(lines): """Deindent a list of lines maximally""" return textwrap.dedent("\n".join(lines)).split("\n") def header(text, style='-'): return text + '\n' + style*len(text) + '\n' class FunctionDoc(NumpyDocString): def __init__(self, func, role='func', doc=None, config={}): self._f = func self._role = role # e.g. "func" or "meth" if doc is None: if func is None: raise ValueError("No function or docstring given") doc = inspect.getdoc(func) or '' NumpyDocString.__init__(self, doc) if not self['Signature'] and func is not None: func, func_name = self.get_func() try: # try to read signature argspec = inspect.getargspec(func) argspec = inspect.formatargspec(*argspec) argspec = argspec.replace('*','\*') signature = '%s%s' % (func_name, argspec) except TypeError, e: signature = '%s()' % func_name self['Signature'] = signature def get_func(self): func_name = getattr(self._f, '__name__', self.__class__.__name__) if inspect.isclass(self._f): func = getattr(self._f, '__call__', self._f.__init__) else: func = self._f return func, func_name def __str__(self): out = '' func, func_name = self.get_func() signature = self['Signature'].replace('*', '\*') roles = {'func': 'function', 'meth': 'method'} if self._role: if not roles.has_key(self._role): print("Warning: invalid role %s" % self._role) out += '.. %s:: %s\n \n\n' % (roles.get(self._role,''), func_name) out += super(FunctionDoc, self).__str__(func_role=self._role) return out class ClassDoc(NumpyDocString): def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc, config={}): if not inspect.isclass(cls) and cls is not None: raise ValueError("Expected a class or None, but got %r" % cls) self._cls = cls if modulename and not modulename.endswith('.'): modulename += '.' 
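        # store the prefix with a trailing '.' so qualified names can be
        # formed by simple concatenation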
self._mod = modulename if doc is None: if cls is None: raise ValueError("No class or documentation string given") doc = pydoc.getdoc(cls) NumpyDocString.__init__(self, doc) if config.get('show_class_members', True): if not self['Methods']: self['Methods'] = [(name, '', '') for name in sorted(self.methods)] if not self['Attributes']: self['Attributes'] = [(name, '', '') for name in sorted(self.properties)] @property def methods(self): if self._cls is None: return [] return [name for name,func in inspect.getmembers(self._cls) if not name.startswith('_') and callable(func)] @property def properties(self): if self._cls is None: return [] return [name for name,func in inspect.getmembers(self._cls) if not name.startswith('_') and func is None] bottleneck-1.2.0/doc/sphinxext/docscrape_sphinx.py000066400000000000000000000171171300536544100223610ustar00rootroot00000000000000import re, inspect, textwrap, pydoc import sphinx from docscrape import NumpyDocString, FunctionDoc, ClassDoc class SphinxDocString(NumpyDocString): def __init__(self, docstring, config={}): self.use_plots = config.get('use_plots', False) NumpyDocString.__init__(self, docstring, config=config) # string conversion routines def _str_header(self, name, symbol='`'): return ['.. rubric:: ' + name, ''] def _str_field_list(self, name): return [':' + name + ':'] def _str_indent(self, doc, indent=4): out = [] for line in doc: out += [' '*indent + line] return out def _str_signature(self): return [''] if self['Signature']: return ['``%s``' % self['Signature']] + [''] else: return [''] def _str_summary(self): return self['Summary'] + [''] def _str_extended_summary(self): return self['Extended Summary'] + [''] def _str_param_list(self, name): out = [] if self[name]: out += self._str_field_list(name) out += [''] for param,param_type,desc in self[name]: out += self._str_indent(['**%s** : %s' % (param.strip(), param_type)]) out += [''] out += self._str_indent(desc,8) out += [''] return out @property def _obj(self): if hasattr(self, '_cls'): return self._cls elif hasattr(self, '_f'): return self._f return None def _str_member_list(self, name): """ Generate a member listing, autosummary:: table where possible, and a table where not. """ out = [] if self[name]: out += ['.. rubric:: %s' % name, ''] prefix = getattr(self, '_name', '') if prefix: prefix = '~%s.' % prefix autosum = [] others = [] for param, param_type, desc in self[name]: param = param.strip() if not self._obj or hasattr(self._obj, param): autosum += [" %s%s" % (prefix, param)] else: others.append((param, param_type, desc)) if autosum: out += ['.. autosummary::', ' :toctree:', ''] out += autosum if others: maxlen_0 = max([len(x[0]) for x in others]) maxlen_1 = max([len(x[1]) for x in others]) hdr = "="*maxlen_0 + " " + "="*maxlen_1 + " " + "="*10 fmt = '%%%ds %%%ds ' % (maxlen_0, maxlen_1) n_indent = maxlen_0 + maxlen_1 + 4 out += [hdr] for param, param_type, desc in others: out += [fmt % (param.strip(), param_type)] out += self._str_indent(desc, n_indent) out += [hdr] out += [''] return out def _str_section(self, name): out = [] if self[name]: out += self._str_header(name) out += [''] content = textwrap.dedent("\n".join(self[name])).split("\n") out += content out += [''] return out def _str_see_also(self, func_role): out = [] if self['See Also']: see_also = super(SphinxDocString, self)._str_see_also(func_role) out = ['.. seealso::', ''] out += self._str_indent(see_also[2:]) return out def _str_warnings(self): out = [] if self['Warnings']: out = ['.. 
warning::', ''] out += self._str_indent(self['Warnings']) return out def _str_index(self): idx = self['index'] out = [] if len(idx) == 0: return out out += ['.. index:: %s' % idx.get('default','')] for section, references in idx.iteritems(): if section == 'default': continue elif section == 'refguide': out += [' single: %s' % (', '.join(references))] else: out += [' %s: %s' % (section, ','.join(references))] return out def _str_references(self): out = [] if self['References']: out += self._str_header('References') if isinstance(self['References'], str): self['References'] = [self['References']] out.extend(self['References']) out += [''] # Latex collects all references to a separate bibliography, # so we need to insert links to it if sphinx.__version__ >= "0.6": out += ['.. only:: latex',''] else: out += ['.. latexonly::',''] items = [] for line in self['References']: m = re.match(r'.. \[([a-z0-9._-]+)\]', line, re.I) if m: items.append(m.group(1)) out += [' ' + ", ".join(["[%s]_" % item for item in items]), ''] return out def _str_examples(self): examples_str = "\n".join(self['Examples']) if (self.use_plots and 'import matplotlib' in examples_str and 'plot::' not in examples_str): out = [] out += self._str_header('Examples') out += ['.. plot::', ''] out += self._str_indent(self['Examples']) out += [''] return out else: return self._str_section('Examples') def __str__(self, indent=0, func_role="obj"): out = [] out += self._str_signature() out += self._str_index() + [''] out += self._str_summary() out += self._str_extended_summary() for param_list in ('Parameters', 'Returns', 'Other Parameters', 'Raises', 'Warns'): out += self._str_param_list(param_list) out += self._str_warnings() out += self._str_see_also(func_role) out += self._str_section('Notes') out += self._str_references() out += self._str_examples() for param_list in ('Attributes', 'Methods'): out += self._str_member_list(param_list) out = self._str_indent(out,indent) return '\n'.join(out) class SphinxFunctionDoc(SphinxDocString, FunctionDoc): def __init__(self, obj, doc=None, config={}): self.use_plots = config.get('use_plots', False) FunctionDoc.__init__(self, obj, doc=doc, config=config) class SphinxClassDoc(SphinxDocString, ClassDoc): def __init__(self, obj, doc=None, func_doc=None, config={}): self.use_plots = config.get('use_plots', False) ClassDoc.__init__(self, obj, doc=doc, func_doc=None, config=config) class SphinxObjDoc(SphinxDocString): def __init__(self, obj, doc=None, config={}): self._f = obj SphinxDocString.__init__(self, doc, config=config) def get_doc_object(obj, what=None, doc=None, config={}): if what is None: if inspect.isclass(obj): what = 'class' elif inspect.ismodule(obj): what = 'module' elif callable(obj): what = 'function' else: what = 'object' if what == 'class': return SphinxClassDoc(obj, func_doc=SphinxFunctionDoc, doc=doc, config=config) elif what in ('function', 'method'): return SphinxFunctionDoc(obj, doc=doc, config=config) else: if doc is None: doc = pydoc.getdoc(obj) return SphinxObjDoc(obj, doc, config=config) bottleneck-1.2.0/doc/sphinxext/numpydoc.py000066400000000000000000000127131300536544100206600ustar00rootroot00000000000000""" ======== numpydoc ======== Sphinx extension that handles docstrings in the Numpy standard format. [1] It will: - Convert Parameters etc. sections to field lists. - Convert See Also section to a See also entry. - Renumber references. - Extract the signature from the docstring, if it can't be determined otherwise. .. 
[1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard """ import os, re, pydoc from docscrape_sphinx import get_doc_object, SphinxDocString from sphinx.util.compat import Directive import inspect def mangle_docstrings(app, what, name, obj, options, lines, reference_offset=[0]): cfg = dict(use_plots=app.config.numpydoc_use_plots, show_class_members=app.config.numpydoc_show_class_members) if what == 'module': # Strip top title title_re = re.compile(ur'^\s*[#*=]{4,}\n[a-z0-9 -]+\n[#*=]{4,}\s*', re.I|re.S) lines[:] = title_re.sub(u'', u"\n".join(lines)).split(u"\n") else: doc = get_doc_object(obj, what, u"\n".join(lines), config=cfg) lines[:] = unicode(doc).split(u"\n") if app.config.numpydoc_edit_link and hasattr(obj, '__name__') and \ obj.__name__: if hasattr(obj, '__module__'): v = dict(full_name=u"%s.%s" % (obj.__module__, obj.__name__)) else: v = dict(full_name=obj.__name__) lines += [u'', u'.. htmlonly::', ''] lines += [u' %s' % x for x in (app.config.numpydoc_edit_link % v).split("\n")] # replace reference numbers so that there are no duplicates references = [] for line in lines: line = line.strip() m = re.match(ur'^.. \[([a-z0-9_.-])\]', line, re.I) if m: references.append(m.group(1)) # start renaming from the longest string, to avoid overwriting parts references.sort(key=lambda x: -len(x)) if references: for i, line in enumerate(lines): for r in references: if re.match(ur'^\d+$', r): new_r = u"R%d" % (reference_offset[0] + int(r)) else: new_r = u"%s%d" % (r, reference_offset[0]) lines[i] = lines[i].replace(u'[%s]_' % r, u'[%s]_' % new_r) lines[i] = lines[i].replace(u'.. [%s]' % r, u'.. [%s]' % new_r) reference_offset[0] += len(references) def mangle_signature(app, what, name, obj, options, sig, retann): # Do not try to inspect classes that don't define `__init__` if (inspect.isclass(obj) and (not hasattr(obj, '__init__') or 'initializes x; see ' in pydoc.getdoc(obj.__init__))): return '', '' if not (callable(obj) or hasattr(obj, '__argspec_is_invalid_')): return if not hasattr(obj, '__doc__'): return doc = SphinxDocString(pydoc.getdoc(obj)) if doc['Signature']: sig = re.sub(u"^[^(]*", u"", doc['Signature']) return sig, u'' def setup(app, get_doc_object_=get_doc_object): global get_doc_object get_doc_object = get_doc_object_ app.connect('autodoc-process-docstring', mangle_docstrings) app.connect('autodoc-process-signature', mangle_signature) app.add_config_value('numpydoc_edit_link', None, False) app.add_config_value('numpydoc_use_plots', None, False) app.add_config_value('numpydoc_show_class_members', True, True) # Extra mangling domains app.add_domain(NumpyPythonDomain) app.add_domain(NumpyCDomain) #------------------------------------------------------------------------------ # Docstring-mangling domains #------------------------------------------------------------------------------ from docutils.statemachine import ViewList from sphinx.domains.c import CDomain from sphinx.domains.python import PythonDomain class ManglingDomainBase(object): directive_mangling_map = {} def __init__(self, *a, **kw): super(ManglingDomainBase, self).__init__(*a, **kw) self.wrap_mangling_directives() def wrap_mangling_directives(self): for name, objtype in self.directive_mangling_map.items(): self.directives[name] = wrap_mangling_directive( self.directives[name], objtype) class NumpyPythonDomain(ManglingDomainBase, PythonDomain): name = 'np' directive_mangling_map = { 'function': 'function', 'class': 'class', 'exception': 'class', 'method': 'function', 'classmethod': 
'function', 'staticmethod': 'function', 'attribute': 'attribute', } class NumpyCDomain(ManglingDomainBase, CDomain): name = 'np-c' directive_mangling_map = { 'function': 'function', 'member': 'attribute', 'macro': 'function', 'type': 'class', 'var': 'object', } def wrap_mangling_directive(base_directive, objtype): class directive(base_directive): def run(self): env = self.state.document.settings.env name = None if self.arguments: m = re.match(r'^(.*\s+)?(.*?)(\(.*)?', self.arguments[0]) name = m.group(2).strip() if not name: name = self.arguments[0] lines = list(self.content) mangle_docstrings(env.app, objtype, name, None, None, lines) self.content = ViewList(lines, self.content.parent) return base_directive.run(self) return directive bottleneck-1.2.0/doc/sphinxext/phantom_import.py000066400000000000000000000130651300536544100220630ustar00rootroot00000000000000""" ============== phantom_import ============== Sphinx extension to make directives from ``sphinx.ext.autodoc`` and similar extensions to use docstrings loaded from an XML file. This extension loads an XML file in the Pydocweb format [1] and creates a dummy module that contains the specified docstrings. This can be used to get the current docstrings from a Pydocweb instance without needing to rebuild the documented module. .. [1] http://code.google.com/p/pydocweb """ import imp, sys, compiler, types, os, inspect, re def setup(app): app.connect('builder-inited', initialize) app.add_config_value('phantom_import_file', None, True) def initialize(app): fn = app.config.phantom_import_file if (fn and os.path.isfile(fn)): print("[numpydoc] Phantom importing modules from", fn, "...") import_phantom_module(fn) #------------------------------------------------------------------------------ # Creating 'phantom' modules from an XML description #------------------------------------------------------------------------------ def import_phantom_module(xml_file): """ Insert a fake Python module to sys.modules, based on a XML file. The XML file is expected to conform to Pydocweb DTD. The fake module will contain dummy objects, which guarantee the following: - Docstrings are correct. - Class inheritance relationships are correct (if present in XML). - Function argspec is *NOT* correct (even if present in XML). Instead, the function signature is prepended to the function docstring. - Class attributes are *NOT* correct; instead, they are dummy objects. 
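
    The dummy modules are registered in ``sys.modules``, so subsequent
    imports of the documented package resolve to them.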
Parameters ---------- xml_file : str Name of an XML file to read """ import lxml.etree as etree object_cache = {} tree = etree.parse(xml_file) root = tree.getroot() # Sort items so that # - Base classes come before classes inherited from them # - Modules come before their contents all_nodes = dict([(n.attrib['id'], n) for n in root]) def _get_bases(node, recurse=False): bases = [x.attrib['ref'] for x in node.findall('base')] if recurse: j = 0 while True: try: b = bases[j] except IndexError: break if b in all_nodes: bases.extend(_get_bases(all_nodes[b])) j += 1 return bases type_index = ['module', 'class', 'callable', 'object'] def base_cmp(a, b): x = cmp(type_index.index(a.tag), type_index.index(b.tag)) if x != 0: return x if a.tag == 'class' and b.tag == 'class': a_bases = _get_bases(a, recurse=True) b_bases = _get_bases(b, recurse=True) x = cmp(len(a_bases), len(b_bases)) if x != 0: return x if a.attrib['id'] in b_bases: return -1 if b.attrib['id'] in a_bases: return 1 return cmp(a.attrib['id'].count('.'), b.attrib['id'].count('.')) nodes = root.getchildren() nodes.sort(base_cmp) # Create phantom items for node in nodes: name = node.attrib['id'] doc = (node.text or '').decode('string-escape') + "\n" if doc == "\n": doc = "" # create parent, if missing parent = name while True: parent = '.'.join(parent.split('.')[:-1]) if not parent: break if parent in object_cache: break obj = imp.new_module(parent) object_cache[parent] = obj sys.modules[parent] = obj # create object if node.tag == 'module': obj = imp.new_module(name) obj.__doc__ = doc sys.modules[name] = obj elif node.tag == 'class': bases = [object_cache[b] for b in _get_bases(node) if b in object_cache] bases.append(object) init = lambda self: None init.__doc__ = doc obj = type(name, tuple(bases), {'__doc__': doc, '__init__': init}) obj.__name__ = name.split('.')[-1] elif node.tag == 'callable': funcname = node.attrib['id'].split('.')[-1] argspec = node.attrib.get('argspec') if argspec: argspec = re.sub('^[^(]*', '', argspec) doc = "%s%s\n\n%s" % (funcname, argspec, doc) obj = lambda: 0 obj.__argspec_is_invalid_ = True obj.func_name = funcname obj.__name__ = name obj.__doc__ = doc if inspect.isclass(object_cache[parent]): obj.__objclass__ = object_cache[parent] else: class Dummy(object): pass obj = Dummy() obj.__name__ = name obj.__doc__ = doc if inspect.isclass(object_cache[parent]): obj.__get__ = lambda: None object_cache[name] = obj if parent: if inspect.ismodule(object_cache[parent]): obj.__module__ = parent setattr(object_cache[parent], name.split('.')[-1], obj) # Populate items for node in root: obj = object_cache.get(node.attrib['id']) if obj is None: continue for ref in node.findall('ref'): if node.tag == 'class': if ref.attrib['ref'].startswith(node.attrib['id'] + '.'): setattr(obj, ref.attrib['name'], object_cache.get(ref.attrib['ref'])) else: setattr(obj, ref.attrib['name'], object_cache.get(ref.attrib['ref'])) bottleneck-1.2.0/doc/sphinxext/plot_directive.py000066400000000000000000000463721300536544100220460ustar00rootroot00000000000000""" A special directive for generating a matplotlib plot. .. warning:: This is a hacked version of plot_directive.py from Matplotlib. It's very much subject to change! Usage ----- Can be used like this:: .. plot:: examples/example.py .. plot:: import matplotlib.pyplot as plt plt.plot([1,2,3], [4,5,6]) .. plot:: A plotting example: >>> import matplotlib.pyplot as plt >>> plt.plot([1,2,3], [4,5,6]) The content is interpreted as doctest formatted if it has a line starting with ``>>>``. 
The ``plot`` directive supports the options format : {'python', 'doctest'} Specify the format of the input include-source : bool Whether to display the source code. Default can be changed in conf.py and the ``image`` directive options ``alt``, ``height``, ``width``, ``scale``, ``align``, ``class``. Configuration options --------------------- The plot directive has the following configuration options: plot_include_source Default value for the include-source option plot_pre_code Code that should be executed before each plot. plot_basedir Base directory, to which plot:: file names are relative to. (If None or empty, file names are relative to the directoly where the file containing the directive is.) plot_formats File formats to generate. List of tuples or strings:: [(suffix, dpi), suffix, ...] that determine the file format and the DPI. For entries whose DPI was omitted, sensible defaults are chosen. plot_html_show_formats Whether to show links to the files in HTML. TODO ---- * Refactor Latex output; now it's plain images, but it would be nice to make them appear side-by-side, or in floats. """ import sys, os, glob, shutil, imp, warnings, cStringIO, re, textwrap, traceback import sphinx import warnings warnings.warn("A plot_directive module is also available under " "matplotlib.sphinxext; expect this numpydoc.plot_directive " "module to be deprecated after relevant features have been " "integrated there.", FutureWarning, stacklevel=2) #------------------------------------------------------------------------------ # Registration hook #------------------------------------------------------------------------------ def setup(app): setup.app = app setup.config = app.config setup.confdir = app.confdir app.add_config_value('plot_pre_code', '', True) app.add_config_value('plot_include_source', False, True) app.add_config_value('plot_formats', ['png', 'hires.png', 'pdf'], True) app.add_config_value('plot_basedir', None, True) app.add_config_value('plot_html_show_formats', True, True) app.add_directive('plot', plot_directive, True, (0, 1, False), **plot_directive_options) #------------------------------------------------------------------------------ # plot:: directive #------------------------------------------------------------------------------ from docutils.parsers.rst import directives from docutils import nodes def plot_directive(name, arguments, options, content, lineno, content_offset, block_text, state, state_machine): return run(arguments, content, options, state_machine, state, lineno) plot_directive.__doc__ = __doc__ def _option_boolean(arg): if not arg or not arg.strip(): # no argument given, assume used as a flag return True elif arg.strip().lower() in ('no', '0', 'false'): return False elif arg.strip().lower() in ('yes', '1', 'true'): return True else: raise ValueError('"%s" unknown boolean' % arg) def _option_format(arg): return directives.choice(arg, ('python', 'lisp')) def _option_align(arg): return directives.choice(arg, ("top", "middle", "bottom", "left", "center", "right")) plot_directive_options = {'alt': directives.unchanged, 'height': directives.length_or_unitless, 'width': directives.length_or_percentage_or_unitless, 'scale': directives.nonnegative_int, 'align': _option_align, 'class': directives.class_option, 'include-source': _option_boolean, 'format': _option_format, } #------------------------------------------------------------------------------ # Generating output #------------------------------------------------------------------------------ from docutils import nodes, 
utils try: # Sphinx depends on either Jinja or Jinja2 import jinja2 def format_template(template, **kw): return jinja2.Template(template).render(**kw) except ImportError: import jinja def format_template(template, **kw): return jinja.from_string(template, **kw) TEMPLATE = """ {{ source_code }} {{ only_html }} {% if source_link or (html_show_formats and not multi_image) %} ( {%- if source_link -%} `Source code <{{ source_link }}>`__ {%- endif -%} {%- if html_show_formats and not multi_image -%} {%- for img in images -%} {%- for fmt in img.formats -%} {%- if source_link or not loop.first -%}, {% endif -%} `{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__ {%- endfor -%} {%- endfor -%} {%- endif -%} ) {% endif %} {% for img in images %} .. figure:: {{ build_dir }}/{{ img.basename }}.png {%- for option in options %} {{ option }} {% endfor %} {% if html_show_formats and multi_image -%} ( {%- for fmt in img.formats -%} {%- if not loop.first -%}, {% endif -%} `{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__ {%- endfor -%} ) {%- endif -%} {% endfor %} {{ only_latex }} {% for img in images %} .. image:: {{ build_dir }}/{{ img.basename }}.pdf {% endfor %} """ class ImageFile(object): def __init__(self, basename, dirname): self.basename = basename self.dirname = dirname self.formats = [] def filename(self, format): return os.path.join(self.dirname, "%s.%s" % (self.basename, format)) def filenames(self): return [self.filename(fmt) for fmt in self.formats] def run(arguments, content, options, state_machine, state, lineno): if arguments and content: raise RuntimeError("plot:: directive can't have both args and content") document = state_machine.document config = document.settings.env.config options.setdefault('include-source', config.plot_include_source) # determine input rst_file = document.attributes['source'] rst_dir = os.path.dirname(rst_file) if arguments: if not config.plot_basedir: source_file_name = os.path.join(rst_dir, directives.uri(arguments[0])) else: source_file_name = os.path.join(setup.confdir, config.plot_basedir, directives.uri(arguments[0])) code = open(source_file_name, 'r').read() output_base = os.path.basename(source_file_name) else: source_file_name = rst_file code = textwrap.dedent("\n".join(map(str, content))) counter = document.attributes.get('_plot_counter', 0) + 1 document.attributes['_plot_counter'] = counter base, ext = os.path.splitext(os.path.basename(source_file_name)) output_base = '%s-%d.py' % (base, counter) base, source_ext = os.path.splitext(output_base) if source_ext in ('.py', '.rst', '.txt'): output_base = base else: source_ext = '' # ensure that LaTeX includegraphics doesn't choke in foo.bar.pdf filenames output_base = output_base.replace('.', '-') # is it in doctest format? 
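    # (contains_doctest() falls back to a '>>>' heuristic when the source
    # is not valid plain Python; an explicit 'format' option below
    # overrides the guess either way)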
is_doctest = contains_doctest(code) if options.has_key('format'): if options['format'] == 'python': is_doctest = False else: is_doctest = True # determine output directory name fragment source_rel_name = relpath(source_file_name, setup.confdir) source_rel_dir = os.path.dirname(source_rel_name) while source_rel_dir.startswith(os.path.sep): source_rel_dir = source_rel_dir[1:] # build_dir: where to place output files (temporarily) build_dir = os.path.join(os.path.dirname(setup.app.doctreedir), 'plot_directive', source_rel_dir) if not os.path.exists(build_dir): os.makedirs(build_dir) # output_dir: final location in the builder's directory dest_dir = os.path.abspath(os.path.join(setup.app.builder.outdir, source_rel_dir)) # how to link to files from the RST file dest_dir_link = os.path.join(relpath(setup.confdir, rst_dir), source_rel_dir).replace(os.path.sep, '/') build_dir_link = relpath(build_dir, rst_dir).replace(os.path.sep, '/') source_link = dest_dir_link + '/' + output_base + source_ext # make figures try: results = makefig(code, source_file_name, build_dir, output_base, config) errors = [] except PlotError, err: reporter = state.memo.reporter sm = reporter.system_message( 2, "Exception occurred in plotting %s: %s" % (output_base, err), line=lineno) results = [(code, [])] errors = [sm] # generate output restructuredtext total_lines = [] for j, (code_piece, images) in enumerate(results): if options['include-source']: if is_doctest: lines = [''] lines += [row.rstrip() for row in code_piece.split('\n')] else: lines = ['.. code-block:: python', ''] lines += [' %s' % row.rstrip() for row in code_piece.split('\n')] source_code = "\n".join(lines) else: source_code = "" opts = [':%s: %s' % (key, val) for key, val in options.items() if key in ('alt', 'height', 'width', 'scale', 'align', 'class')] only_html = ".. only:: html" only_latex = ".. only:: latex" if j == 0: src_link = source_link else: src_link = None result = format_template( TEMPLATE, dest_dir=dest_dir_link, build_dir=build_dir_link, source_link=src_link, multi_image=len(images) > 1, only_html=only_html, only_latex=only_latex, options=opts, images=images, source_code=source_code, html_show_formats=config.plot_html_show_formats) total_lines.extend(result.split("\n")) total_lines.extend("\n") if total_lines: state_machine.insert_input(total_lines, source=source_file_name) # copy image files to builder's output directory if not os.path.exists(dest_dir): os.makedirs(dest_dir) for code_piece, images in results: for img in images: for fn in img.filenames(): shutil.copyfile(fn, os.path.join(dest_dir, os.path.basename(fn))) # copy script (if necessary) if source_file_name == rst_file: target_name = os.path.join(dest_dir, output_base + source_ext) f = open(target_name, 'w') f.write(unescape_doctest(code)) f.close() return errors #------------------------------------------------------------------------------ # Run code and capture figures #------------------------------------------------------------------------------ import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt import matplotlib.image as image from matplotlib import _pylab_helpers import exceptions def contains_doctest(text): try: # check if it's valid Python as-is compile(text, '', 'exec') return False except SyntaxError: pass r = re.compile(r'^\s*>>>', re.M) m = r.search(text) return bool(m) def unescape_doctest(text): """ Extract code from a piece of text, which contains either Python code or doctests. 
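
    For example, a doctest line such as ``>>> x = 1`` comes back as the
    plain statement ``x = 1``, while interleaved output or prose lines
    are preserved as ``#`` comments.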
""" if not contains_doctest(text): return text code = "" for line in text.split("\n"): m = re.match(r'^\s*(>>>|\.\.\.) (.*)$', line) if m: code += m.group(2) + "\n" elif line.strip(): code += "# " + line.strip() + "\n" else: code += "\n" return code def split_code_at_show(text): """ Split code at plt.show() """ parts = [] is_doctest = contains_doctest(text) part = [] for line in text.split("\n"): if (not is_doctest and line.strip() == 'plt.show()') or \ (is_doctest and line.strip() == '>>> plt.show()'): part.append(line) parts.append("\n".join(part)) part = [] else: part.append(line) if "\n".join(part).strip(): parts.append("\n".join(part)) return parts class PlotError(RuntimeError): pass def run_code(code, code_path, ns=None): # Change the working directory to the directory of the example, so # it can get at its data files, if any. pwd = os.getcwd() old_sys_path = list(sys.path) if code_path is not None: dirname = os.path.abspath(os.path.dirname(code_path)) os.chdir(dirname) sys.path.insert(0, dirname) # Redirect stdout stdout = sys.stdout sys.stdout = cStringIO.StringIO() # Reset sys.argv old_sys_argv = sys.argv sys.argv = [code_path] try: try: code = unescape_doctest(code) if ns is None: ns = {} if not ns: exec setup.config.plot_pre_code in ns exec code in ns except (Exception, SystemExit), err: raise PlotError(traceback.format_exc()) finally: os.chdir(pwd) sys.argv = old_sys_argv sys.path[:] = old_sys_path sys.stdout = stdout return ns #------------------------------------------------------------------------------ # Generating figures #------------------------------------------------------------------------------ def out_of_date(original, derived): """ Returns True if derivative is out-of-date wrt original, both of which are full file paths. """ return (not os.path.exists(derived) or os.stat(derived).st_mtime < os.stat(original).st_mtime) def makefig(code, code_path, output_dir, output_base, config): """ Run a pyplot script *code* and save the images under *output_dir* with file names derived from *output_base* """ # -- Parse format list default_dpi = {'png': 80, 'hires.png': 200, 'pdf': 50} formats = [] for fmt in config.plot_formats: if isinstance(fmt, str): formats.append((fmt, default_dpi.get(fmt, 80))) elif type(fmt) in (tuple, list) and len(fmt)==2: formats.append((str(fmt[0]), int(fmt[1]))) else: raise PlotError('invalid image format "%r" in plot_formats' % fmt) # -- Try to determine if all images already exist code_pieces = split_code_at_show(code) # Look for single-figure output files first all_exists = True img = ImageFile(output_base, output_dir) for format, dpi in formats: if out_of_date(code_path, img.filename(format)): all_exists = False break img.formats.append(format) if all_exists: return [(code, [img])] # Then look for multi-figure output files results = [] all_exists = True for i, code_piece in enumerate(code_pieces): images = [] for j in xrange(1000): img = ImageFile('%s_%02d_%02d' % (output_base, i, j), output_dir) for format, dpi in formats: if out_of_date(code_path, img.filename(format)): all_exists = False break img.formats.append(format) # assume that if we have one, we have them all if not all_exists: all_exists = (j > 0) break images.append(img) if not all_exists: break results.append((code_piece, images)) if all_exists: return results # -- We didn't find the files, so build them results = [] ns = {} for i, code_piece in enumerate(code_pieces): # Clear between runs plt.close('all') # Run code run_code(code_piece, code_path, ns) # Collect images images = [] 
fig_managers = _pylab_helpers.Gcf.get_all_fig_managers() for j, figman in enumerate(fig_managers): if len(fig_managers) == 1 and len(code_pieces) == 1: img = ImageFile(output_base, output_dir) else: img = ImageFile("%s_%02d_%02d" % (output_base, i, j), output_dir) images.append(img) for format, dpi in formats: try: figman.canvas.figure.savefig(img.filename(format), dpi=dpi) except exceptions.BaseException, err: raise PlotError(traceback.format_exc()) img.formats.append(format) # Results results.append((code_piece, images)) return results #------------------------------------------------------------------------------ # Relative pathnames #------------------------------------------------------------------------------ try: from os.path import relpath except ImportError: def relpath(target, base=os.curdir): """ Return a relative path to the target from either the current dir or an optional base dir. Base can be a directory specified either as absolute or relative to current dir. """ if not os.path.exists(target): raise OSError('Target does not exist: '+target) if not os.path.isdir(base): raise OSError('Base is not a directory or does not exist: '+base) base_list = (os.path.abspath(base)).split(os.sep) target_list = (os.path.abspath(target)).split(os.sep) # On the windows platform the target may be on a completely # different drive from the base. if os.name in ['nt','dos','os2'] and base_list[0] <> target_list[0]: raise OSError('Target is on a different drive to base. Target: '+target_list[0].upper()+', base: '+base_list[0].upper()) # Starting from the filepath root, work out how much of the # filepath is shared by base and target. for i in range(min(len(base_list), len(target_list))): if base_list[i] <> target_list[i]: break else: # If we broke out of the loop, i is pointing to the first # differing path elements. If we didn't break out of the # loop, i is pointing to identical path elements. # Increment i so that in all cases it points to the first # differing path elements. i+=1 rel_list = [os.pardir] * (len(base_list)-i) + target_list[i:] return os.path.join(*rel_list) bottleneck-1.2.0/doc/sphinxext/setup.cfg000066400000000000000000000000731300536544100202650ustar00rootroot00000000000000[egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 bottleneck-1.2.0/doc/sphinxext/setup.py000066400000000000000000000017251300536544100201630ustar00rootroot00000000000000from distutils.core import setup import setuptools import sys, os version = "0.4" setup( name="numpydoc", packages=["numpydoc"], package_dir={"numpydoc": ""}, version=version, description="Sphinx extension to support docstrings in Numpy format", # classifiers from http://pypi.python.org/pypi?%3Aaction=list_classifiers classifiers=["Development Status :: 3 - Alpha", "Environment :: Plugins", "License :: OSI Approved :: BSD License", "Topic :: Documentation"], keywords="sphinx numpy", author="Pauli Virtanen and others", author_email="pav@iki.fi", url="http://github.com/numpy/numpy/tree/master/doc/sphinxext", license="BSD", zip_safe=False, install_requires=["Sphinx >= 1.0.1"], package_data={'numpydoc': 'tests', '': ''}, entry_points={ "console_scripts": [ "autosummary_generate = numpydoc.autosummary_generate:main", ], }, ) bottleneck-1.2.0/doc/sphinxext/traitsdoc.py000066400000000000000000000100501300536544100210060ustar00rootroot00000000000000""" ========= traitsdoc ========= Sphinx extension that handles docstrings in the Numpy standard format, [1] and support Traits [2]. 
This extension can be used as a replacement for ``numpydoc`` when support for Traits is required. .. [1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard .. [2] http://code.enthought.com/projects/traits/ """ import inspect import os import pydoc import docscrape import docscrape_sphinx from docscrape_sphinx import SphinxClassDoc, SphinxFunctionDoc, SphinxDocString import numpydoc import comment_eater class SphinxTraitsDoc(SphinxClassDoc): def __init__(self, cls, modulename='', func_doc=SphinxFunctionDoc): if not inspect.isclass(cls): raise ValueError("Initialise using a class. Got %r" % cls) self._cls = cls if modulename and not modulename.endswith('.'): modulename += '.' self._mod = modulename self._name = cls.__name__ self._func_doc = func_doc docstring = pydoc.getdoc(cls) docstring = docstring.split('\n') # De-indent paragraph try: indent = min(len(s) - len(s.lstrip()) for s in docstring if s.strip()) except ValueError: indent = 0 for n,line in enumerate(docstring): docstring[n] = docstring[n][indent:] self._doc = docscrape.Reader(docstring) self._parsed_data = { 'Signature': '', 'Summary': '', 'Description': [], 'Extended Summary': [], 'Parameters': [], 'Returns': [], 'Raises': [], 'Warns': [], 'Other Parameters': [], 'Traits': [], 'Methods': [], 'See Also': [], 'Notes': [], 'References': '', 'Example': '', 'Examples': '', 'index': {} } self._parse() def _str_summary(self): return self['Summary'] + [''] def _str_extended_summary(self): return self['Description'] + self['Extended Summary'] + [''] def __str__(self, indent=0, func_role="func"): out = [] out += self._str_signature() out += self._str_index() + [''] out += self._str_summary() out += self._str_extended_summary() for param_list in ('Parameters', 'Traits', 'Methods', 'Returns','Raises'): out += self._str_param_list(param_list) out += self._str_see_also("obj") out += self._str_section('Notes') out += self._str_references() out += self._str_section('Example') out += self._str_section('Examples') out = self._str_indent(out,indent) return '\n'.join(out) def looks_like_issubclass(obj, classname): """ Return True if the object has a class or superclass with the given class name. Ignores old-style classes. """ t = obj if t.__name__ == classname: return True for klass in t.__mro__: if klass.__name__ == classname: return True return False def get_doc_object(obj, what=None, config=None): if what is None: if inspect.isclass(obj): what = 'class' elif inspect.ismodule(obj): what = 'module' elif callable(obj): what = 'function' else: what = 'object' if what == 'class': doc = SphinxTraitsDoc(obj, '', func_doc=SphinxFunctionDoc, config=config) if looks_like_issubclass(obj, 'HasTraits'): for name, trait, comment in comment_eater.get_class_traits(obj): # Exclude private traits. 
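                # (i.e. any trait whose name starts with an underscore)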
if not name.startswith('_'): doc['Traits'].append((name, trait, comment.splitlines())) return doc elif what in ('function', 'method'): return SphinxFunctionDoc(obj, '', config=config) else: return SphinxDocString(pydoc.getdoc(obj), config=config) def setup(app): # init numpydoc numpydoc.setup(app, get_doc_object) bottleneck-1.2.0/ez_setup.py000066400000000000000000000263221300536544100160620ustar00rootroot00000000000000#!/usr/bin/env python """Bootstrap setuptools installation To use setuptools in your package's setup.py, include this file in the same directory and add this to the top of your setup.py:: from ez_setup import use_setuptools use_setuptools() To require a specific version of setuptools, set a download mirror, or use an alternate download directory, simply supply the appropriate options to ``use_setuptools()``. This file can also be run as a script to install or upgrade setuptools. """ import os import shutil import sys import tempfile import tarfile import optparse import subprocess import platform import textwrap from distutils import log try: from site import USER_SITE except ImportError: USER_SITE = None DEFAULT_VERSION = "2.2" DEFAULT_URL = "https://pypi.python.org/packages/source/s/setuptools/" def _python_cmd(*args): """ Return True if the command succeeded. """ args = (sys.executable,) + args return subprocess.call(args) == 0 def _install(tarball, install_args=()): # extracting the tarball tmpdir = tempfile.mkdtemp() log.warn('Extracting in %s', tmpdir) old_wd = os.getcwd() try: os.chdir(tmpdir) tar = tarfile.open(tarball) _extractall(tar) tar.close() # going in the directory subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0]) os.chdir(subdir) log.warn('Now working in %s', subdir) # installing log.warn('Installing Setuptools') if not _python_cmd('setup.py', 'install', *install_args): log.warn('Something went wrong during the installation.') log.warn('See the error message above.') # exitcode will be 2 return 2 finally: os.chdir(old_wd) shutil.rmtree(tmpdir) def _build_egg(egg, tarball, to_dir): # extracting the tarball tmpdir = tempfile.mkdtemp() log.warn('Extracting in %s', tmpdir) old_wd = os.getcwd() try: os.chdir(tmpdir) tar = tarfile.open(tarball) _extractall(tar) tar.close() # going in the directory subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0]) os.chdir(subdir) log.warn('Now working in %s', subdir) # building an egg log.warn('Building a Setuptools egg in %s', to_dir) _python_cmd('setup.py', '-q', 'bdist_egg', '--dist-dir', to_dir) finally: os.chdir(old_wd) shutil.rmtree(tmpdir) # returning the result log.warn(egg) if not os.path.exists(egg): raise IOError('Could not build the egg.') def _do_download(version, download_base, to_dir, download_delay): egg = os.path.join(to_dir, 'setuptools-%s-py%d.%d.egg' % (version, sys.version_info[0], sys.version_info[1])) if not os.path.exists(egg): tarball = download_setuptools(version, download_base, to_dir, download_delay) _build_egg(egg, tarball, to_dir) sys.path.insert(0, egg) # Remove previously-imported pkg_resources if present (see # https://bitbucket.org/pypa/setuptools/pull-request/7/ for details). 
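    # (a stale pkg_resources would otherwise shadow the copy bundled in
    # the egg that was just prepended to sys.path)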
if 'pkg_resources' in sys.modules: del sys.modules['pkg_resources'] import setuptools setuptools.bootstrap_install_from = egg def use_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, download_delay=15): to_dir = os.path.abspath(to_dir) rep_modules = 'pkg_resources', 'setuptools' imported = set(sys.modules).intersection(rep_modules) try: import pkg_resources except ImportError: return _do_download(version, download_base, to_dir, download_delay) try: pkg_resources.require("setuptools>=" + version) return except pkg_resources.DistributionNotFound: return _do_download(version, download_base, to_dir, download_delay) except pkg_resources.VersionConflict as VC_err: if imported: msg = textwrap.dedent(""" The required version of setuptools (>={version}) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U setuptools'. (Currently using {VC_err.args[0]!r}) """).format(VC_err=VC_err, version=version) sys.stderr.write(msg) sys.exit(2) # otherwise, reload ok del pkg_resources, sys.modules['pkg_resources'] return _do_download(version, download_base, to_dir, download_delay) def _clean_check(cmd, target): """ Run the command to download target. If the command fails, clean up before re-raising the error. """ try: subprocess.check_call(cmd) except subprocess.CalledProcessError: if os.access(target, os.F_OK): os.unlink(target) raise def download_file_powershell(url, target): """ Download the file at url to target using Powershell (which will validate trust). Raise an exception if the command cannot complete. """ target = os.path.abspath(target) cmd = [ 'powershell', '-Command', "(new-object System.Net.WebClient).DownloadFile(%(url)r, %(target)r)" % vars(), ] _clean_check(cmd, target) def has_powershell(): if platform.system() != 'Windows': return False cmd = ['powershell', '-Command', 'echo test'] devnull = open(os.path.devnull, 'wb') try: try: subprocess.check_call(cmd, stdout=devnull, stderr=devnull) except: return False finally: devnull.close() return True download_file_powershell.viable = has_powershell def download_file_curl(url, target): cmd = ['curl', url, '--silent', '--output', target] _clean_check(cmd, target) def has_curl(): cmd = ['curl', '--version'] devnull = open(os.path.devnull, 'wb') try: try: subprocess.check_call(cmd, stdout=devnull, stderr=devnull) except: return False finally: devnull.close() return True download_file_curl.viable = has_curl def download_file_wget(url, target): cmd = ['wget', url, '--quiet', '--output-document', target] _clean_check(cmd, target) def has_wget(): cmd = ['wget', '--version'] devnull = open(os.path.devnull, 'wb') try: try: subprocess.check_call(cmd, stdout=devnull, stderr=devnull) except: return False finally: devnull.close() return True download_file_wget.viable = has_wget def download_file_insecure(url, target): """ Use Python to download the file, even though it cannot authenticate the connection. """ try: from urllib.request import urlopen except ImportError: from urllib2 import urlopen src = dst = None try: src = urlopen(url) # Read/write all in one block, so we don't create a corrupt file # if the download is interrupted. 
data = src.read() dst = open(target, "wb") dst.write(data) finally: if src: src.close() if dst: dst.close() download_file_insecure.viable = lambda: True def get_best_downloader(): downloaders = [ download_file_powershell, download_file_curl, download_file_wget, download_file_insecure, ] for dl in downloaders: if dl.viable(): return dl def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, delay=15, downloader_factory=get_best_downloader): """Download setuptools from a specified location and return its filename `version` should be a valid setuptools version number that is available as an egg for download under the `download_base` URL (which should end with a '/'). `to_dir` is the directory where the egg will be downloaded. `delay` is the number of seconds to pause before an actual download attempt. ``downloader_factory`` should be a function taking no arguments and returning a function for downloading a URL to a target. """ # making sure we use the absolute path to_dir = os.path.abspath(to_dir) tgz_name = "setuptools-%s.tar.gz" % version url = download_base + tgz_name saveto = os.path.join(to_dir, tgz_name) if not os.path.exists(saveto): # Avoid repeated downloads log.warn("Downloading %s", url) downloader = downloader_factory() downloader(url, saveto) return os.path.realpath(saveto) def _extractall(self, path=".", members=None): """Extract all members from the archive to the current working directory and set owner, modification time and permissions on directories afterwards. `path' specifies a different directory to extract to. `members' is optional and must be a subset of the list returned by getmembers(). """ import copy import operator from tarfile import ExtractError directories = [] if members is None: members = self for tarinfo in members: if tarinfo.isdir(): # Extract directories with a safe mode. directories.append(tarinfo) tarinfo = copy.copy(tarinfo) tarinfo.mode = 448 # decimal for oct 0700 self.extract(tarinfo, path) # Reverse sort directories. directories.sort(key=operator.attrgetter('name'), reverse=True) # Set correct owner, mtime and filemode on directories. 
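    # (the reverse name sort above puts the deepest paths first, so
    # children are finalized before their parent directories are
    # locked down)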
for tarinfo in directories: dirpath = os.path.join(path, tarinfo.name) try: self.chown(tarinfo, dirpath) self.utime(tarinfo, dirpath) self.chmod(tarinfo, dirpath) except ExtractError as e: if self.errorlevel > 1: raise else: self._dbg(1, "tarfile: %s" % e) def _build_install_args(options): """ Build the arguments to 'python setup.py install' on the setuptools package """ return ['--user'] if options.user_install else [] def _parse_args(): """ Parse the command line for options """ parser = optparse.OptionParser() parser.add_option( '--user', dest='user_install', action='store_true', default=False, help='install in user site package (requires Python 2.6 or later)') parser.add_option( '--download-base', dest='download_base', metavar="URL", default=DEFAULT_URL, help='alternative URL from where to download the setuptools package') parser.add_option( '--insecure', dest='downloader_factory', action='store_const', const=lambda: download_file_insecure, default=get_best_downloader, help='Use internal, non-validating downloader' ) options, args = parser.parse_args() # positional arguments are ignored return options def main(version=DEFAULT_VERSION): """Install or upgrade setuptools and EasyInstall""" options = _parse_args() tarball = download_setuptools( download_base=options.download_base, downloader_factory=options.downloader_factory) return _install(tarball, _build_install_args(options)) if __name__ == '__main__': sys.exit(main()) bottleneck-1.2.0/setup.py000066400000000000000000000077321300536544100153700ustar00rootroot00000000000000#!/usr/bin/env python import os import sys try: import setuptools # noqa except ImportError: from ez_setup import use_setuptools use_setuptools() from setuptools import setup, find_packages from setuptools.extension import Extension from setuptools.command.build_ext import build_ext as _build_ext # workaround for installing bottleneck when numpy is not present class build_ext(_build_ext): # taken from: stackoverflow.com/questions/19919905/ # how-to-bootstrap-numpy-installation-in-setup-py#21621689 def finalize_options(self): _build_ext.finalize_options(self) # prevent numpy from thinking it is still in its setup process __builtins__.__NUMPY_SETUP__ = False import numpy self.include_dirs.append(numpy.get_include()) def prepare_modules(): from bottleneck.src.template import make_c_files make_c_files() ext = [Extension("bottleneck.reduce", sources=["bottleneck/src/reduce.c"], extra_compile_args=['-O2'])] ext += [Extension("bottleneck.move", sources=["bottleneck/src/move.c", "bottleneck/src/move_median/move_median.c"], extra_compile_args=['-O2'])] ext += [Extension("bottleneck.nonreduce", sources=["bottleneck/src/nonreduce.c"], extra_compile_args=['-O2'])] ext += [Extension("bottleneck.nonreduce_axis", sources=["bottleneck/src/nonreduce_axis.c"], extra_compile_args=['-O2'])] return ext def get_long_description(): with open('README.rst', 'r') as fid: long_description = fid.read() idx = max(0, long_description.find("Bottleneck is a collection")) long_description = long_description[idx:] return long_description def get_version_str(): ver_file = os.path.join('bottleneck', 'version.py') with open(ver_file, 'r') as fid: version = fid.read() version = version.split("= ") version = version[1].strip() version = version.strip("\"") return version CLASSIFIERS = ["Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Science/Research", "License :: OSI Approved :: BSD License", "Operating System :: OS Independent", "Programming Language :: C", "Programming 
Language :: Python", "Programming Language :: Python :: 3", "Topic :: Scientific/Engineering"] metadata = dict(name='Bottleneck', maintainer="Keith Goodman", maintainer_email="bottle-neck@googlegroups.com", description="Fast NumPy array functions written in C", long_description=get_long_description(), url="https://github.com/kwgoodman/bottleneck", download_url="http://pypi.python.org/pypi/Bottleneck", license="Simplified BSD", classifiers=CLASSIFIERS, platforms="OS Independent", version=get_version_str(), packages=find_packages(), package_data={'bottleneck': ['LICENSE']}, requires=['numpy'], install_requires=['numpy'], cmdclass={'build_ext': build_ext}, setup_requires=['numpy']) if not(len(sys.argv) >= 2 and ('--help' in sys.argv[1:] or sys.argv[1] in ('--help-commands', 'egg_info', '--version', 'clean', 'build_sphinx'))): # build bottleneck metadata['ext_modules'] = prepare_modules() elif sys.argv[1] == 'build_sphinx': # create intro.rst (from readme file) for sphinx manual readme = 'README.rst' intro = os.path.join('doc', 'source', 'intro.rst') with open(readme, 'r') as infile, open(intro, 'w') as outfile: txt = infile.readlines()[4:] # skip travis, appveyor build status outfile.write(''.join(txt)) setup(**metadata) bottleneck-1.2.0/tools/000077500000000000000000000000001300536544100150055ustar00rootroot00000000000000bottleneck-1.2.0/tools/appveyor/000077500000000000000000000000001300536544100166525ustar00rootroot00000000000000bottleneck-1.2.0/tools/appveyor/conda_setup.py000066400000000000000000000007211300536544100215300ustar00rootroot00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import import logging from os import environ from conda_wrapper import CondaWrapper if __name__ == "__main__": logging.basicConfig(level=logging.INFO) with CondaWrapper(environ['PYTHON_VERSION'], environ['CONDA_HOME'], environ['CONDA_VENV']) as conda: conda.configure() conda.update() conda.create(*environ['DEPS'].split(' ')) logging.shutdown() bottleneck-1.2.0/tools/appveyor/conda_wrapper.py000066400000000000000000000043451300536544100220560ustar00rootroot00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import import sys import logging from subprocess import check_output if sys.version_info[0] == 2: def decode(string): return string else: def decode(string): return string.decode() class CondaWrapper(object): """Manage the AppVeyor Miniconda installation through Python. AppVeyor has pre-installed Python 2.7.x as well as Miniconda (2 and 3). Thus we only need to configure that properly and create the desired environment. 
""" def __init__(self, version, home, venv, **kw_args): super(CondaWrapper, self).__init__(**kw_args) self.logger = logging.getLogger("{}.{}".format( __name__, self.__class__.__name__ )) self.version = version self.home = home self.venv = venv def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): return False # False reraises the exception def configure(self): self.logger.info("Configuring '%s'...", self.home) cmd = ["conda", "config", "--set", "always_yes", "yes", "--set", "changeps1", "no"] msg = check_output(cmd, shell=True) self.logger.debug(decode(msg)) self.logger.info("Done.") def update(self): self.logger.info("Updating '%s'...", self.home) cmd = ["conda", "update", "-q", "conda"] msg = check_output(cmd, shell=True) self.logger.debug(decode(msg)) self.logger.info("Done.") def create(self, *args): self.logger.info("Creating environment '%s'...", self.venv) cmd = ["conda", "create", "-q", "-n", self.venv, "python=" + self.version] + list(args) msg = check_output(cmd, shell=True) self.logger.debug(decode(msg)) cmd = ["activate", self.venv] msg = check_output(cmd, shell=True) self.logger.debug(decode(msg)) # consider only for debugging cmd = ["conda", "info", "-a"] msg = check_output(cmd, shell=True) self.logger.debug(decode(msg)) cmd = ["conda", "list"] msg = check_output(cmd, shell=True) self.logger.debug(decode(msg)) self.logger.info("Done.") bottleneck-1.2.0/tools/appveyor/windows_sdk.cmd000066400000000000000000000047161300536544100217020ustar00rootroot00000000000000:: To build extensions for 64 bit Python 3.5 or later no special environment needs :: to be configured. :: :: To build extensions for 64 bit Python 3.4 or earlier, we need to configure environment :: variables to use the MSVC 2010 C++ compilers from GRMSDKX_EN_DVD.iso of: :: MS Windows SDK for Windows 7 and .NET Framework 4 (SDK v7.1) :: :: To build extensions for 64 bit Python 2, we need to configure environment :: variables to use the MSVC 2008 C++ compilers from GRMSDKX_EN_DVD.iso of: :: MS Windows SDK for Windows 7 and .NET Framework 3.5 (SDK v7.0) :: :: 32 bit builds do not require specific environment configurations. 
bottleneck-1.2.0/tools/appveyor/windows_sdk.cmd000066400000000000000000000047161300536544100217020ustar00rootroot00000000000000
:: To build extensions for 64 bit Python 3.5 or later no special environment
:: needs to be configured.
::
:: To build extensions for 64 bit Python 3.4 or earlier, we need to configure
:: environment variables to use the MSVC 2010 C++ compilers from
:: GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 4 (SDK v7.1)
::
:: To build extensions for 64 bit Python 2, we need to configure environment
:: variables to use the MSVC 2008 C++ compilers from GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 3.5 (SDK v7.0)
::
:: 32 bit builds do not require specific environment configurations.
::
:: Note: this script needs to be run with the /E:ON and /V:ON flags for the
:: cmd interpreter, at least for (SDK v7.0)
::
:: More details at:
:: https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows
:: https://stackoverflow.com/a/13751649/163740
::
:: Original Author: Olivier Grisel
:: License: CC0 1.0 Universal: https://creativecommons.org/publicdomain/zero/1.0/
:: This version based on updates for python 3.5 by Phil Elson at:
:: https://github.com/pelson/Obvious-CI/tree/master/scripts
@ECHO OFF

SET COMMAND_TO_RUN=%*
SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows

SET MAJOR_PYTHON_VERSION="%PYTHON_VERSION:~0,1%"
SET MINOR_PYTHON_VERSION=%PYTHON_VERSION:~2,1%
IF %MAJOR_PYTHON_VERSION% == "2" (
    SET WINDOWS_SDK_VERSION="v7.0"
    SET SET_SDK_64=Y
) ELSE IF %MAJOR_PYTHON_VERSION% == "3" (
    SET WINDOWS_SDK_VERSION="v7.1"
    IF %MINOR_PYTHON_VERSION% LEQ 4 (
        SET SET_SDK_64=Y
    ) ELSE (
        SET SET_SDK_64=N
    )
) ELSE (
    ECHO Unsupported Python version: "%MAJOR_PYTHON_VERSION%"
    EXIT 1
)

IF "%PYTHON_ARCH%"=="64" (
    IF %SET_SDK_64% == Y (
        ECHO Configuring Windows SDK %WINDOWS_SDK_VERSION% for Python %MAJOR_PYTHON_VERSION% on a 64 bit architecture
        SET DISTUTILS_USE_SDK=1
        SET MSSdk=1
        "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION%
        "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release
        ECHO Executing: %COMMAND_TO_RUN%
        call %COMMAND_TO_RUN% || EXIT 1
    ) ELSE (
        ECHO Using default MSVC build environment for 64 bit architecture
        ECHO Executing: %COMMAND_TO_RUN%
        call %COMMAND_TO_RUN% || EXIT 1
    )
) ELSE (
    ECHO Using default MSVC build environment for 32 bit architecture
    ECHO Executing: %COMMAND_TO_RUN%
    call %COMMAND_TO_RUN% || EXIT 1
)
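:: Example usage (illustrative; %PYTHON% is assumed to point at the Python
:: installation chosen by the build matrix):
::
::     windows_sdk.cmd %PYTHON%\python.exe setup.py build_ext --inplace
::
:: All arguments are captured in %COMMAND_TO_RUN% and re-executed with
:: 'call' once the matching SDK environment has been configured.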
bottleneck-1.2.0/tools/test-installed-bottleneck.py000066400000000000000000000037651300536544100224550ustar00rootroot00000000000000
#!/usr/bin/env python

from __future__ import division

import sys
from optparse import OptionParser

import bottleneck

# This file is a modified version of the original numpy file:
# test-installed-numpy.py

# A simple script to test the installed version of bottleneck by calling
# 'bottleneck.test()'. Key features:
#   -- convenient command-line syntax
#   -- sets exit status appropriately, useful for automated test environments

# It would be better to set this up as a module in the bottleneck namespace,
# so that it could be run as:
#   python -m bottleneck.run_tests
# But Python 2.4's -m switch only works with top-level modules, not modules
# that are inside packages. Bottleneck no longer supports Python 2.4, so this
# could now be revisited.

# In case we are run from the source directory, we don't want to import
# bottleneck from there, we want to import the installed version:
sys.path.pop(0)

parser = OptionParser("usage: %prog [options] -- [nosetests options]")
parser.add_option("-v", "--verbose", action="count", dest="verbose",
                  default=1, help="increase verbosity")
parser.add_option("--doctests", action="store_true", dest="doctests",
                  default=False, help="Run doctests in module")
parser.add_option("--coverage", action="store_true", dest="coverage",
                  default=False,
                  help="report coverage; requires the 'coverage' module")
parser.add_option("-m", "--mode", action="store", dest="mode", default="fast",
                  help="'fast', 'full', or something that could be "
                       "passed to nosetests -A [default: %default]")
(options, args) = parser.parse_args()

result = bottleneck.test(options.mode, verbose=options.verbose,
                         extra_argv=args, doctests=options.doctests,
                         coverage=options.coverage)

if result.wasSuccessful():
    sys.exit(0)
else:
    sys.exit(1)
bottleneck-1.2.0/tools/travis/000077500000000000000000000000001300536544100163155ustar00rootroot00000000000000
bottleneck-1.2.0/tools/travis/bn_setup.sh000066400000000000000000000007311300536544100204710ustar00rootroot00000000000000
#!/usr/bin/env bash

set -ev  # exit on first error, print commands

if [ "${TEST_RUN}" = "style" ]; then
    flake8 --exclude=doc .
else
    if [ "${TEST_RUN}" = "sdist" ]; then
        python setup.py sdist
        # glob the sdist tarball into an array (avoids parsing ls output)
        ARCHIVE=(dist/*.tar.gz)
        pip install --verbose "${ARCHIVE[0]}"
    else
        pip install --verbose "."
    fi
    # Workaround for https://github.com/travis-ci/travis-ci/issues/6522
    set +e
    python "tools/test-installed-bottleneck.py"
fi
bottleneck-1.2.0/tools/travis/conda_install.sh000066400000000000000000000012101300536544100214570ustar00rootroot00000000000000
#!/usr/bin/env bash

set -ev  # exit on first error, print commands

if [ "${PYTHON_ARCH}" == "32" ]; then
    # 'set VAR=value' is cmd.exe syntax; in bash the variable must be exported
    export CONDA_FORCE_32BIT=1
fi

if [ -n "${TEST_RUN}" ]; then
    TEST_NAME="test-${TEST_RUN}-python-${PYTHON_VERSION}_${PYTHON_ARCH}bit"
else
    TEST_NAME="test-python-${PYTHON_VERSION}_${PYTHON_ARCH}bit"
fi
export TEST_NAME

# split dependencies into separate packages
IFS=" "
TEST_DEPS=(${TEST_DEPS})

echo "Creating environment '${TEST_NAME}'..."
conda create -q -n "${TEST_NAME}" "${TEST_DEPS[@]}" python="${PYTHON_VERSION}"

set +v  # we don't want to see commands in the conda script
source activate "${TEST_NAME}"
conda info -a
conda list
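# For example, with TEST_DEPS="numpy==1.11.2 nose" (as in .travis.yml) the
# array expansion above passes each dependency to conda as its own argument,
# so the create call is equivalent to (python version shown is illustrative):
#
#     conda create -q -n "${TEST_NAME}" numpy==1.11.2 nose python=3.5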
bottleneck-1.2.0/tools/travis/conda_setup.sh000066400000000000000000000013601300536544100211550ustar00rootroot00000000000000
#!/usr/bin/env bash

set -ev  # exit on first error, print commands

CONDA_URL="http://repo.continuum.io/miniconda"

if [ "${PYTHON_VERSION:0:1}" == "2" ]; then
    CONDA="Miniconda2"
else
    CONDA="Miniconda3"
fi

if [ "${TRAVIS_OS_NAME}" == "osx" ]; then
    CONDA_OS="MacOSX"
else
    CONDA_OS="Linux"
fi

if [ "${PYTHON_ARCH}" == "64" ]; then
    URL="${CONDA_URL}/${CONDA}-latest-${CONDA_OS}-x86_64.sh"
else
    URL="${CONDA_URL}/${CONDA}-latest-${CONDA_OS}-x86.sh"
fi

echo "Downloading '${URL}'..."
set +e
travis_retry wget "${URL}" -O miniconda.sh
set -e
chmod +x miniconda.sh
./miniconda.sh -b -p "${HOME}/miniconda"
export PATH="${HOME}/miniconda/bin:${PATH}"
hash -r

conda config --set always_yes yes --set changeps1 no
conda update -q conda
bottleneck-1.2.0/tools/update_readme.py000066400000000000000000000031521300536544100201570ustar00rootroot00000000000000
try:
    from cStringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO  # Python 3
import sys
import os

import bottleneck as bn


def update_readme():

    # run benchmark suite while capturing output; indent
    with Capturing() as bench_list:
        bn.bench()
    bench_list = ['    ' + b for b in bench_list]

    # read readme
    cwd = os.path.dirname(__file__)
    readme_path = os.path.join(cwd, '../README.rst')
    with open(readme_path) as f:
        readme_list = f.readlines()
    readme_list = [r.strip('\n') for r in readme_list]

    # remove old benchmark result from readme
    idx1 = readme_list.index('    Bottleneck performance benchmark')
    idx2 = [i for i, line in enumerate(readme_list) if line == '']
    idx2 = [i for i in idx2 if i > idx1]
    idx2 = idx2[1]
    del readme_list[idx1:idx2]

    # insert new benchmark result into readme; remove trailing whitespace
    readme_list = readme_list[:idx1] + bench_list + readme_list[idx1:]
    readme_list = [r.rstrip() for r in readme_list]

    # replace readme file
    os.remove(readme_path)
    with open(readme_path, 'w') as f:
        f.write('\n'.join(readme_list))


# ---------------------------------------------------------------------------
# Capturing class taken from
# http://stackoverflow.com/questions/16571150/
# how-to-capture-stdout-output-from-a-python-function-call

class Capturing(list):

    def __enter__(self):
        self._stdout = sys.stdout
        sys.stdout = self._stringio = StringIO()
        return self

    def __exit__(self, *args):
        self.extend(self._stringio.getvalue().splitlines())
        sys.stdout = self._stdout


if __name__ == '__main__':
    update_readme()
bottleneck-1.2.0/tox.ini000066400000000000000000000010241300536544100151550ustar00rootroot00000000000000
# Tox (http://tox.testrun.org/) configuration

[tox]
envlist = py27_np1112, py34_np1112

[testenv]
changedir={envdir}
commands={envpython} {toxinidir}/tools/test-installed-bottleneck.py {posargs:}

[testenv:py27_np1112]
basepython = python2.7
deps =
    nose
    numpy==1.11.2

[testenv:py34_np1112]
basepython = python3.4
deps =
    nose
    numpy==1.11.2

# Not run by default. Use 'tox -e py27_npmaster' to call it
[testenv:py27_npmaster]
basepython = python2.7
deps =
    nose
    https://github.com/numpy/numpy/zipball/master
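# Example: arguments after '--' are forwarded via {posargs} to
# tools/test-installed-bottleneck.py, e.g. to also run the doctests:
#
#     tox -e py34_np1112 -- --doctests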